The challenges of charging large super-capacitors

ACharnley

Hi all,

I'd like to run my current understanding of a behaviour by you to see if I have it correct.

I have a circuit where the input source is variable voltage and variable current, with the current capped at about 500mA. The voltage won't go above about 33V under no load but sags under load to around 20V, for roughly 10W peak.

I then wish to charge two 120F super-capacitors in series with as little restriction as possible.

I use a sync buck, which on my first iteration is CPU controlled. The CPU runs at 16MHz or 20MHz and its PWM peripheral is set to 100 steps, the minimum resolution I think is acceptable. This gives a switching frequency of 160-200kHz.

For this frequency and input voltage an inductor of 68uH is chosen. This is high enough to keep the buck in CCM at the given input current.

Being in CCM, the equation D = Vout / (Vin × efficiency) can be used. The CPU measures the output voltage and chooses a duty cycle to force a target input voltage, aka MPPT.
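In code the calculation is roughly this (a simplified sketch with illustrative names and a fixed assumed efficiency, not the actual firmware):

```c
#include <stdint.h>

#define PWM_STEPS   100u    /* 100 duty counts per period */
#define EFF_PCT     90u     /* assumed fixed efficiency, % */

/* CCM duty estimate in counts: D = Vout / (Vin * eff) */
static uint8_t buck_duty_ccm(uint16_t vin_mv, uint16_t vout_mv)
{
    uint32_t duty = ((uint32_t)vout_mv * PWM_STEPS * 100u)
                    / ((uint32_t)vin_mv * EFF_PCT);

    if (duty < 1u) duty = 1u;                    /* minimum resolvable step */
    if (duty > PWM_STEPS - 1u) duty = PWM_STEPS - 1u;
    return (uint8_t)duty;
}
```

With the capacitors near dead (Vout = 0.1V, Vin = 20V) this returns the clamped minimum of 1, which is exactly where the problem below shows up.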

All good so far, and it all works, except when the capacitors are at or near dead. In that situation the duty always equals the minimum, 1, and the circuit sits there not charging. To make it charge with an input voltage of 10V I need to force the duty to 4 or 5. At 20V it needs 3 or 4.

So why is this?

Let's say the super-capacitors are at 0.1V and the input voltage is at 20V. The input current is 500mA, which gives 10W. 10W/0.1V is 100A! This excludes inductor ripple, but either way it's implausible to support.

But in reality 100A won't occur, because the inductor, despite being 13mm^2, still has an ESR of about 130mOhm. 5A gives a 0.65V drop across the inductor alone.

So my understanding is that when charging super-capacitors from dead, the losses in the inductor and MOSFETs take on increased significance. With efficiency taking such a hit, the D = Vout / (Vin × efficiency) equation can no longer use a fixed, reasonable value for the efficiency term, which is why I have to force a higher duty.
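One way to put rough numbers on that is to treat the losses as a series resistance (a back-of-envelope sketch only: it lumps the inductor ESR and FET Rds(on) into one resistance, ignores ripple and switching losses, and assumes a few amps in the inductor):

```c
#include <stdint.h>

#define R_SERIES_MOHM  150u   /* ~130 mOhm inductor + ~18 mOhm FET, lumped */

/* Rough duty estimate (%) including the resistive drop: D ~= (Vout + I*R) / Vin */
static uint32_t duty_pct_with_losses(uint32_t vin_mv, uint32_t vout_mv,
                                     uint32_t i_l_ma)
{
    uint32_t drop_mv = (i_l_ma * R_SERIES_MOHM) / 1000u;
    return ((vout_mv + drop_mv) * 100u) / vin_mv;
}

/* e.g. Vin = 20V, Vout = 0.1V, ~4A in the inductor:
   drop ~= 0.6V, so D ~= (0.1 + 0.6) / 20 ~= 3.5%, i.e. a duty count of
   3-4 rather than the 0-1 the ideal equation predicts. */
```

That lines up with the duty counts I actually have to force.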

To reduce the main loss the inductance needs to decrease and the frequency needs to go up. This moves the approach away from the CPU and onto a buck controller. At 1MHz a 10uH inductor is plausible. However, the ripple increases and the current is still the same, so the limit now is the current limiter inside the controller. An LMR51450 is rated at 5A. Moving up to something more substantial gets seriously more costly!
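For a feel of the ripple side of that trade-off, the standard CCM ripple formula dI = Vout·(Vin−Vout)/(Vin·L·f) at the 5V end-of-charge point (estimates only, not measurements):

```c
#include <stdio.h>

/* Peak-to-peak inductor ripple for a CCM buck: dI = Vout*(Vin-Vout)/(Vin*L*f).
   Illustrative comparison at Vin = 20V, Vout = 5V (end of charge). */
static double ripple_a(double vin, double vout, double l_h, double f_hz)
{
    return vout * (vin - vout) / (vin * l_h * f_hz);
}

int main(void)
{
    printf("68uH @ 180kHz: %.2f A pk-pk\n", ripple_a(20.0, 5.0, 68e-6, 180e3));
    printf("10uH @ 1MHz:   %.2f A pk-pk\n", ripple_a(20.0, 5.0, 10e-6, 1e6));
    return 0;
}
```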

So in summary:

To use CPU control, the inductor needs to be massive in order to get the ESR down; otherwise charging at low output voltages is restricted by the ESR.

To use a buck controller one needs to empty the wallet; otherwise the IC's internal current limit becomes the limiting factor.

*I have omitted the losses of the MOSFETs as they are about 18mOhm, relatively small compared to the inductor ESR.
 
I don't like the term 'dissipating' with capacitance; it sounds like heat!

But besides that, you're looking at it as a resistive load whereas this is a buck. The current in the inductor is much higher than 500mA when the output is at 0.1V.

I've attached a small sim which shows the buck controller being the limiting factor. I'd like to resolve/clarify my understanding of why additional duty is needed at low output voltage.

 

Attachments

  • ina13x.lib.zip
I'd have thought you'd run in constant current mode (or current steps, to optimise throughput) until the charge voltage is reached, and then change to voltage regulation.

Trying to put high currents in to use all the available input power at very low voltage just seems impractical; a large part of the charge range will work at more practical voltage and current figures, so concentrate on efficiency through that range.

Two caps in series presumably gives around a 5V full charge rating?
So 60 Farads, 750 Joules of storage.

The minimum possible charge time at 10W (10J / sec) is 75 Seconds.


10W input gives around 2A at the end of the charge cycle.
2A constant current would be ~30 seconds per volt, into 60F; 150 seconds from 0V to 5V.


If you aimed for 10A, that current would start dropping off due to the power limit at just 1V, after six seconds; 30J stored, 720 left so 72+6 = 78 seconds.

At 5A, the current would start dropping from the 2V point; 24 seconds; 120 Joules stored, 630 left so 63 + 24 = 87 seconds.


[If I've got all my mental calcs correct..]

At 10A it would be just three seconds over the minimum possible, while at 5A it would be 12 seconds over the minimum possible; nine seconds more, for a lot less cost, by the sound of it?
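For anyone who wants to double-check those figures, the same calculation as a small sketch (constant current until the 10W limit takes over, then constant power; 60F and 5V as above):

```c
#include <stdio.h>

/* Charge time for a capacitor charged at a current limit I until the power
   limit P takes over, then at constant power up to full voltage. */
static double charge_time_s(double c_f, double v_full, double p_w, double i_a)
{
    double v_knee = p_w / i_a;               /* voltage where the P limit takes over */
    if (v_knee > v_full) v_knee = v_full;

    double t_cc    = c_f * v_knee / i_a;     /* constant-current phase */
    double e_total = 0.5 * c_f * v_full * v_full;
    double e_cc    = 0.5 * c_f * v_knee * v_knee;
    double t_cp    = (e_total - e_cc) / p_w; /* constant-power phase */
    return t_cc + t_cp;
}

int main(void)
{
    printf("10A limit: %.0f s\n", charge_time_s(60.0, 5.0, 10.0, 10.0)); /* ~78 s */
    printf(" 5A limit: %.0f s\n", charge_time_s(60.0, 5.0, 10.0, 5.0));  /* ~87 s */
    printf(" 2A limit: %.0f s\n", charge_time_s(60.0, 5.0, 10.0, 2.0));  /* 150 s */
    return 0;
}
```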
 
Good calculations.

I actually ran some simulation work on it earlier comparing a DIY method at 1MHz. Even with a 17mOhm inductor the maximum output was around the 8.5A mark. It was a considerable increase (about 3A) over the 68uH/180mOhm, but exactly as you say, once the voltage reaches just 1V it drops a lot and the difference diminishes.

And these are at the maximum input power, which will almost never happen; 5W is typical, 7W at times. Given the rarity of full power, and that much of the time the super-capacitor will be above 1V anyway, it does indeed make sense not to stress about the edge scenarios. I've opted for the LMR51450, a 5A buck.

I still don't fully know whether the efficiency loss through the 180mOhm inductor at sub-1V output is the reason the D = Vout / (Vin × efficiency) equation no longer functions as expected, however.
 
When you are charging a capacitor at 0V, the efficiency will be zero. Near 0V, the current at a good efficiency would have to be very high, so the practical limit will simply be the current limit of the buck converter; that means getting good efficiency over a very wide voltage range is impractical.

Your duty cycle equation could easily lead to a situation where the duty cycle is tiny. At very low duty cycles with large inductors, buck regulators don't work properly.

Normally, when the switch is turned on, the current builds up in the inductor, so when the switch turns off, the current carries on flowing in the inductor, but from ground, through the diode. As the switch turns off, the input of the inductor goes from the supply voltage to minus the diode voltage.

At the point in time when the switch turns off, and there is a big change in the voltage on the input of the inductor, there are various stray capacitances that need to be discharged. Those would be the output of the switch, the diode, and the capacitance of the windings of the inductor. All that charge needs to flow through the inductor before the inductor current has decayed much.

With very low duty cycles you can get to the situation where the inductor has very little current in it when the switch turns off. By the time the stray capacitances have been discharged, the inductor current has decayed to zero before the diode ever turns on, so the average output current is merely equal to the average input current.

I think that at very low output voltages, you would be far better off setting a fixed output current, a fixed maximum inductor current, or a fixed minimum duty cycle.
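Something along these lines, as a sketch only (the threshold, floor and names are illustrative, not a tested design):

```c
#include <stdint.h>

#define V_LOW_THRESHOLD_MV  1000u   /* below ~1V the CCM duty equation misleads */
#define DUTY_MIN_FORCED     4u      /* empirically chosen floor, counts out of 100 */
#define EFF_PCT             90u     /* assumed fixed efficiency, % */

/* Hold a fixed minimum duty (with the converter's current limit as backstop)
   while the output is near dead; otherwise use the usual CCM relation. */
static uint8_t choose_duty(uint16_t vin_mv, uint16_t vout_mv)
{
    if (vout_mv < V_LOW_THRESHOLD_MV)
        return DUTY_MIN_FORCED;

    uint32_t duty = ((uint32_t)vout_mv * 100u * 100u)
                    / ((uint32_t)vin_mv * EFF_PCT);
    if (duty < DUTY_MIN_FORCED) duty = DUTY_MIN_FORCED;
    if (duty > 95u) duty = 95u;
    return (uint8_t)duty;
}
```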
 
And you're talking about DCM, where the equation doesn't apply. I actually 'solved' it by doing as you say: if the voltage is very low I set a minimum duty cycle. I just didn't understand exactly why I was doing it.
 