Hi all,
I'd like to run my current understanding of a behaviour by you to see if I have it correct.
I have a circuit where the input source has variable voltage and variable current, with the current capped at about 500mA. The voltage won't go above about 33V with no load, but will sag under load to around 20V, so roughly 10W peak.
I then wish to charge two 120F super-capacitors in series with as little restriction as possible.
I use a sync buck, which on my first iteration is a CPU-controlled one. The CPU runs at 16MHz or 20MHz and its PWM peripheral is set to 100 steps, the minimum resolution I consider acceptable. This gives a switching frequency of 160-200kHz.
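For illustration, the PWM setup is along these lines (shown here for an AVR-style 16-bit timer in fast PWM mode with ICR1 as TOP; the exact part isn't the point):

```c
/* Illustrative AVR-style setup: Timer1 in fast PWM, TOP = ICR1.
   f_pwm = f_cpu / (TOP + 1), so 100 steps gives 160kHz at 16MHz
   and 200kHz at 20MHz, with 1% duty resolution. */
#include <avr/io.h>

#define PWM_STEPS 100

static void pwm_init(void)
{
    ICR1   = PWM_STEPS - 1;                 /* TOP = 99 -> f_cpu / 100          */
    OCR1A  = 1;                             /* start at the minimum duty, 1%    */
    TCCR1A = (1 << COM1A1) | (1 << WGM11);  /* non-inverting PWM on OC1A        */
    TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS10); /* mode 14, no prescale */
    DDRB  |= (1 << PB1);                    /* OC1A pin as output               */
}
```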
For this frequency and input voltage an inductor of 68uH is chosen. This is high enough to keep the buck in CCM at the given input current.
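As a rough check on that (numbers are illustrative; I've assumed 200kHz and a part-charged output of 5V):

```c
#include <stdio.h>

int main(void)
{
    /* Back-of-envelope CCM check for the 68uH choice at an assumed
       operating point: 20V in, 5V out (part-charged), 200kHz, 10W. */
    double v_in = 20.0, v_out = 5.0, f_sw = 200e3, L = 68e-6, p_in = 10.0;

    double d      = v_out / v_in;                     /* ideal duty               */
    double ripple = (v_in - v_out) * d / (L * f_sw);  /* peak-to-peak ripple, A   */
    double i_l    = p_in / v_out;                     /* average inductor current */

    printf("D = %.2f, ripple = %.2f App, I_L = %.2f A, CCM: %s\n",
           d, ripple, i_l, (i_l > ripple / 2.0) ? "yes" : "no");
    return 0;
}
```

That comes out at roughly 0.28A peak-to-peak against a 2A average, so comfortably CCM at that point.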
Being in CCM, the relation D = Vout / (Vin x efficiency) can be used. The CPU measures the output voltage and chooses a duty that forces the input voltage to a target, i.e. MPPT.
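In code that amounts to something like the following (simplified; ADC scaling omitted, and the 90% efficiency figure is just the sort of fixed guess I mean):

```c
/* Simplified duty calculation: measure Vout, then pick D so the converter
   loads the source down to the target input voltage (crude MPPT). */
#define PWM_STEPS    100
#define V_IN_TARGET  20.0   /* input voltage to hold the source at, V */
#define EFF_ASSUMED  0.90   /* fixed efficiency guess                 */

static unsigned int duty_steps(double v_out)
{
    double d = v_out / (V_IN_TARGET * EFF_ASSUMED);   /* D = Vout / (Vin x eff) */
    unsigned int steps = (unsigned int)(d * PWM_STEPS + 0.5);

    if (steps < 1) steps = 1;                         /* clamp to minimum duty  */
    if (steps > PWM_STEPS - 1) steps = PWM_STEPS - 1;
    return steps;
}
```

With the capacitors at 0.1V this gives 0.1 / (20 x 0.9) = 0.0056, which lands on the minimum step of 1, exactly the stuck condition described next.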
All good so far, and it all works, except when the capacitors are at or very nearly dead. In that situation the computed duty always comes out at the minimum, 1, and the circuit sits there not charging. To make it charge with an input voltage of 10V I need to force the duty to 4 or 5; at 20V it needs 3 or 4.
So why is this?
Let's say the super-capacitors are at 0.1V and the input voltage is 20V. The input current is 500mA, which gives 10W. 10W / 0.1V is 100A! That excludes inductor ripple, but either way it is implausible to support.
But in reality 100A won't occur, because the inductor, despite being 13mm^2, still has an ESR of about 130mOhm. 5A is a 0.65V drop across the inductor alone.
So my understanding is that when charging super-capacitors from dead, the losses in the inductor and MOSFETs take on increased significance. With efficiency taking that hit, the D = Vout / (Vin x efficiency) relation can no longer use a fixed, reasonable value for the efficiency term, which is why I have to force a higher duty.
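Putting numbers on that with a crude steady-state model (ripple, dead time and minimum on-time all ignored): with the output nearly a short, the inductor current settles at roughly (D x Vin - Vout) / (R_ind + R_fet), so a few duty steps are needed before any useful current, and hence input power, is drawn:

```c
#include <stdio.h>

int main(void)
{
    /* Crude model of charging a near-dead supercap stack through the
       converter's series resistance. Values as quoted above. */
    double v_in  = 20.0;            /* sagged input voltage, V       */
    double v_out = 0.1;             /* near-dead supercap stack, V   */
    double r_tot = 0.130 + 0.018;   /* inductor ESR + MOSFET Rds(on) */

    for (int step = 1; step <= 5; step++) {
        double d   = step / 100.0;                  /* duty in 1% steps        */
        double i_l = (d * v_in - v_out) / r_tot;    /* steady-state current, A */
        if (i_l < 0.0) i_l = 0.0;
        double i_in = d * i_l;                      /* current drawn from source */
        printf("duty %d%%: I_L ~ %.1f A, input current ~ %.0f mA\n",
               step, i_l, i_in * 1e3);
    }
    return 0;
}
```

In this toy model a single step barely loads the source (a few mA), so the input never gets pulled down towards the MPP; only around 4-5 steps does the input current approach the 500mA the source can actually deliver, which lines up with what I see.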
To reduce the main loss, the inductance needs to decrease and the frequency needs to go up. That moves the approach away from a CPU and onto a dedicated buck controller. At 1MHz a 10uH inductor is plausible. However, the ripple increases and the current demand is still the same, so the limit now becomes the current limiter inside the controller. An LMR51450 is 5A. Moving up to something more substantial becomes seriously more costly!
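A quick sanity check on where the 5A limit bites at 1MHz with 10uH (output current taken as the ideal 10W / Vout, so this is optimistic):

```c
#include <stdio.h>

int main(void)
{
    /* Peak inductor current vs. a 5A internal limit at 1MHz / 10uH. */
    double v_in = 20.0, p_in = 10.0, L = 10e-6, f_sw = 1e6, i_limit = 5.0;

    for (double v_out = 0.5; v_out <= 4.0; v_out += 0.5) {
        double d      = v_out / v_in;
        double ripple = (v_in - v_out) * d / (L * f_sw); /* peak-to-peak, A     */
        double i_avg  = p_in / v_out;                    /* 10W worth of output */
        double i_peak = i_avg + ripple / 2.0;
        printf("Vout %.1f V: %.1f A avg, %.1f A peak%s\n", v_out, i_avg,
               i_peak, (i_peak > i_limit) ? "  <-- over the 5A limit" : "");
    }
    return 0;
}
```

Below roughly 2V on the caps, passing the full 10W would need more than 5A, so in that region the controller's limit, not the source, sets the charge rate.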
So in summary:
To use CPU control, the inductor needs to be massive in order to get the ESR down; otherwise charging at a low output voltage is restricted by that ESR.
To use a buck controller, one needs to empty the wallet; otherwise the IC becomes the limiting factor.
*I have omitted the losses of the MOSFETs because they are about 18mOhm, relatively small compared to the inductor ESR.