Consulting on my next project


electroRF

Hi,
I have the following mechanism to implement and would appreciate your advice, as always :)

So far, I have programmed the uC to write logs cyclically to a 10KB buffer of fast memory.

Now I'd like to use a 1MB buffer of slower memory, to be able to increase the amount of logs I capture.

The problem is that, due to real-time constraints, I can't afford the delay of having the uC write logs directly to the 1MB buffer.

Therefore I'd use a DMA to transfer data from the 10KB buffer to the 1MB buffer, in the background.

What algorithm would you use for that transfer?

The difficulties are:
I can't instruct the DMA to transfer the whole 10KB of data to the 1MB buffer, because then I'd have to wait for it to finish before I can log messages to the 10KB buffer again.

On the other hand, I can't tell the DMA to transfer data to the 1MB buffer after each message the uC writes to the 10KB buffer; that would be a waste of time due to overhead.

Therefore, when should I tell the DMA to start transferring data?

Thank you.
 
The information you give is too vague..

DMA, as I understand it, is something that moves data from one place to another "in the background".. so that it should not take any processor time (other than setting up the DMA transaction).

Write some code that periodically transfers data from the 10kb buffer to the 1mb storage... that is all.. adjust the period to be "fast enough" for you.
 
Hi T,
Thank you for taking part in this thread :)

DMA, as I understand it, is something that moves data from one place to another "in the background".
Indeed

so that it should not take any processor time (other than setting up the DMA transaction).
Commanding the DMA to move the data from the 10KB buffer to the 1MB buffer, and knowing when to do it, are the things that need to be done efficiently.

For example, should I check, each time I finish writing a message, whether I have reached halfway (or the end) of the buffer, and then trigger the DMA?
It would be inefficient to run an "IF REACHED HALFWAY/END" check every time I write a message into the 10KB buffer.

Write some code that periodically transfers data from the 10kb buffer to the 1mb storage... that is all.. adjust the period to be "fast enough" for you.
Periodically?
Well, it could be that in 10ms I have already filled half of the buffer with messages, or it could take me 100ms to fill it up.
So how can I do it periodically?
 
Don't wait. As soon as something is written to the buffer start DMA.

Obviously, if you write faster than DMA can transfer it, you can run out of buffer space. You cannot prevent this, so you should be ready for this situation.
 
There are so many options here.. you could trigger the dma write after every write you do in the 10kb buffer.. or when it is half full.. or something else. You really need to have some statistics before you can make "smart" decisions. Why don't you just write something and if it works.. let it be.
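Of the options above, the half-full variant can be done with a single compare inside the logging function, which keeps the per-write overhead small. A rough sketch, assuming a hypothetical dma_start(src, len) helper that moves the bytes into the 1MB buffer and advances the destination offset itself, and assuming a message never runs past the end of the 10KB buffer (wrap padding omitted):

```c
#include <stdint.h>
#include <string.h>

#define LOG_BUF_SIZE  (10u * 1024u)          /* the fast 10KB log buffer      */
#define HALF_SIZE     (LOG_BUF_SIZE / 2u)    /* transfer one half at a time   */

static uint8_t  log_buf[LOG_BUF_SIZE];
static uint32_t wr_idx;                      /* next free byte in log_buf     */

/* Hypothetical helper: DMA 'len' bytes from 'src' into the 1MB buffer. */
extern void dma_start(const void *src, uint32_t len);

void log_write(const void *msg, uint32_t len)
{
    /* Sketch: a message is assumed never to run past the end of the buffer. */
    memcpy(&log_buf[wr_idx], msg, len);
    wr_idx += len;

    if (wr_idx >= LOG_BUF_SIZE) {            /* just finished the upper half  */
        dma_start(&log_buf[HALF_SIZE], HALF_SIZE);
        wr_idx -= LOG_BUF_SIZE;              /* wrap back to the start        */
    } else if (wr_idx >= HALF_SIZE && wr_idx - len < HALF_SIZE) {
        dma_start(&log_buf[0], HALF_SIZE);   /* just finished the lower half  */
    }
}
```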
 
Hi T and NorthGuy,
Thank you again, I appreciate both of your opinions friends. :)

NorthGuy said:
Don't wait. As soon as something is written to the buffer start DMA.
misterT said:
you could trigger the dma write after every write you do in the 10kb buffer
That would not be possible:
1. The DMA is not always available. If I attempt to use the DMA for every written message (messages can be written at very short intervals, as short as 1us), then when I write 2 messages within a very short time, I'd obviously write them to the 10KB fast memory faster than the DMA can move them to the 1MB slow memory.

2. Triggering the DMA also costs some cycles, and if I trigger it every time I write a message into the L2 buffer, I'd extend the duration of the 'write a message' function, which would be problematic for the entire system.

Therefore, I'm trying to think of a way which:
1. Calls the DMA as few times as possible - for example, calling it each time 1/2 or 1/3 (or 1/4, etc.) of the 10KB buffer is filled - in order to spare cycles.
2. If I follow #1 above, how do I check efficiently whether I have reached 1/2 (or 1/3 or 1/4, etc.) of the 10KB buffer?
Writing an 'if the 10KB buffer is 1/2 filled' check would be cycle-consuming, and I'm wondering how to do it more efficiently.

But perhaps you have a different method than #1?

NorthGuy said:
Obviously, if you write faster than DMA can transfer it, you can run out of buffer space. You cannot prevent this, so you should be ready for this situation.
You're right.
I'm trying to think of a good way to recover from such a situation.
Do I compromise and overwrite the half of the buffer which has not yet been moved to the larger 1MB buffer?
 
This problem is a straight follow-up to the previous problem you had. There is something fundamentally wrong with the program design. You will end up with more and more problems... think it through and redesign.
 
I agree with T. If you would describe the whole purpose of the program you are writing, we might be able to give you some ideas.
 
Hi guys,
Let me please describe the purpose of this project.

The uC writes different messages from different tasks very frequently (sometimes every few microseconds).
Therefore it uses a fast memory, and only 10KB of it, since we can't afford more than that.

When the system stops working, we'll have the last 10KB of messages that were written into the 10KB buffer (they are written in cyclic mode).

Since we'd like to store more than just the last 10KB of messages, we thought of using a slower and larger memory, where we could use a 1MB buffer.

Because the uC must work fast, it can't write messages directly to the 1MB buffer (since this memory is slow), and therefore we'd like the DMA to do it in the background.

Do you see the purpose now?
What is the "design" problem here?

My concerns are:

How to work efficiently with the DMA?
Triggering the DMA every few messages will be time-consuming, because the 'triggering' itself takes time, and I'd also rather not get interrupted by the DMA every few messages.
What should the method of operating the DMA be?
 
If "triggering" the DMA transaction is too time-consuming, then you have some serious problems...

Could you even give us a hint.. what kind of data you are logging.. how often and how much? How fast is the 10kb memory and how fast is the 1mb memory? How does the DMA work in your chip..
 
What is the "design" problem here?

Apparently the problem is that nobody has done any design.. not even a simple calculation of how much data needs to be logged, etc...
 
I can understand the actions which are performed, but I don't understand the purpose, that is, the reason why these actions are performed - the "raison d'être".

If you only need the messages before the system stops, why do you want to write them all the time? If you want to write more, can you delay stopping the system to have more time for DMA?
 
Hi T and NorthGuy,
Thank you very much again :)
I learn from your questions and I'd be happy to answer them.

misterT said:
could you even give us a hint.. what kind of data you are logging.
Different data parameters - the value of some real-time calculation, real-time data such as the strength of the received signal at a certain time, messages which are transferred between 2 processors (the uC and another processor), etc.

misterT said:
how often and how much?
How often -> every few us to every few ms.
how much -> tens or hundreds of MB.

NorthGuy said:
If you only need the messages before the system stops, why do you want to write them all the time?
I don't know when the system will stop.
It could stop due to an unexpected failure, and in that case I'd want to have as much data as possible from before the failure, which would help in understanding the reason for the failure and fixing it.

NorthGuy said:
If you want to write more, can you delay stopping the system to have more time for DMA?
The problem is while the system is running.
While it's running, I can't delay it too much, and therefore I can't write logs directly into the slower 1MB buffer; I must always write logs to the faster 10KB buffer.
I need to find a way to efficiently operate the DMA to move data from the 10KB buffer to the 1MB buffer.
I would of course need to "tell" the DMA to write into the 1MB buffer in cyclic mode (since there's more than 1MB of logged data, but I'll keep only the last 1MB of data prior to the system's failure).

misterT said:
How fast is the 10kb memory and how fast is the 1mb memory?
I haven't compared yet the time differences between writing to each memory.

misterT said:
How does the DMA work in your chip..
Simply.
You configure it with a start source address, a start destination address, and a size, and at completion it generates an interrupt.
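Based on that description, a minimal configure-and-go wrapper might look like the sketch below. The register layout, base address and names (dma_channel_t, DMA_CH0, dma_copy, dma_complete_isr) are made up for illustration and are not taken from the actual chip:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical register layout for one DMA channel, matching the
 * description above: source, destination, size, interrupt on completion. */
typedef struct {
    volatile uint32_t src;     /* start source address         */
    volatile uint32_t dst;     /* start destination address    */
    volatile uint32_t size;    /* number of bytes to transfer  */
    volatile uint32_t ctrl;    /* bit 0 = start                */
} dma_channel_t;

#define DMA_CH0  ((dma_channel_t *)0x40001000u)   /* made-up base address */

static volatile bool dma_busy;

/* Configure-and-go: copy 'len' bytes from 'src' to 'dst' in the background. */
void dma_copy(const void *src, void *dst, uint32_t len)
{
    dma_busy = true;
    DMA_CH0->src  = (uint32_t)(uintptr_t)src;
    DMA_CH0->dst  = (uint32_t)(uintptr_t)dst;
    DMA_CH0->size = len;
    DMA_CH0->ctrl = 1u;                  /* kick off the transfer */
}

/* Completion interrupt: the channel is free for the next chunk. */
void dma_complete_isr(void)
{
    dma_busy = false;
}
```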


My question is:
What algorithm would you use to copy data from the 10KB buffer into the 1MB buffer, taking into account:
1. losing as little data as possible
2. wasting as few cycles as possible on 'managing' actions (e.g. actions which are not pure writing to memory - for example, checking when the DMA should be triggered, the number of times the uC triggers the DMA, etc.).

Thank you very much again friends!
 
My question is:
What algorithm would you use to copy data from the 10KB buffer into the 1MB buffer, taking into account:
1. losing as little data as possible
2. wasting as few cycles as possible on 'managing' actions (e.g. actions which are not pure writing to memory - for example, checking when the DMA should be triggered, the number of times the uC triggers the DMA, etc.).

If I want to nitpick, I would say that not doing any DMA transactions will 1) lose all data and 2) take no cycles.

But really.. Impossible to say without better knowledge about the incoming data.. just trigger the DMA transaction at constant intervals. Or when the 10kb buffer reaches some limit.
What kind of system do you have running at the moment?

If you write some "smart decision-making algorithm", the algorithm itself will take many times more cycles than just triggering the DMA at constant intervals. The very idea of DMA is to be an efficient way of moving data. You will ruin that idea if you build complex algorithms around it.

You say you want the "managing actions" to be very efficient, but it sounds like you have no idea what is the max. limit you can spare.. and you have no statistics about the incoming data. You need to test and test and test and test. Find out what the real problem is before you try to solve it.
 
What uC are you planning to use?
 
Hi T and atferrari.

T, I love reading your posts, thank you friend! :)

misterT said:
Impossible to say without better knowledge about the incoming data..
There are many tasks running on the uC - each task has many threads, and each thread logs its own parameters to the 10KB buffer using the functions I wrote for logging.
A log can be just 8 bytes of data, or 48 bytes of data - the size varies.
The interval between logs can be as small as 10us or as large as 1ms - I wrote the maximum pace below in reply to your remark.

just trigger the DMA transaction at constant intervals. Or when the 10kb buffer reaches some limit.
I thought of actually doing that.
What would be a cycle-efficient way to check whether the 10KB buffer has reached some limit?

Triggering the DMA after some time limit might be better, because in that case I wouldn't need to check in the logging function whether the 10KB buffer has reached a certain limit, right?

You say you want the "managing actions" to be very efficient, but it sounds like you have no idea what is the max. limit you can spare.. and you have no statistics about the incoming data.
I tested it, and I see that the logging does not exceed a pace of 1KB per 0.47ms.
How would you use those statistics to decide when to trigger the DMA?


atferrari said:
What uC are you planning to use?
It's a company-internal uC.
 
I tested it, and I see that the logging does not exceed a pace of 1KB per 0.47ms.
How would you use those statistics to decide when to trigger the DMA?
Can't call that statistics really.. but if you have a way to time 3ms intervals then I would trigger the DMA at 3ms rate to transfer everything in the buffer to the larger storage.
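For scale: 1KB per 0.47ms is roughly 2.1KB/ms, so 3ms of logging is about 6.4KB, which fits comfortably inside the 10KB buffer. A minimal sketch of the timer-driven version, assuming a periodic 3ms timer interrupt and the same hypothetical dma_start(src, len) helper as in the earlier sketch (destination handling inside the 1MB buffer is left to that helper):

```c
#include <stdint.h>

#define LOG_BUF_SIZE  (10u * 1024u)

extern uint8_t  log_buf[LOG_BUF_SIZE];   /* the fast 10KB buffer               */
extern uint32_t wr_idx;                  /* maintained by the logging function */

/* Hypothetical helper: DMA 'len' bytes from 'src' into the 1MB buffer,
 * advancing the destination offset internally. */
extern void dma_start(const void *src, uint32_t len);

static uint32_t rd_idx;                  /* everything before this index has
                                            already been handed to the DMA    */

void timer_3ms_isr(void)                 /* assumed 3ms periodic timer ISR    */
{
    uint32_t wr = wr_idx;                /* snapshot; logging keeps running   */

    if (wr == rd_idx)
        return;                          /* nothing new since the last tick   */

    if (wr > rd_idx) {                   /* new data is one contiguous block  */
        dma_start(&log_buf[rd_idx], wr - rd_idx);
        rd_idx = wr;
    } else {                             /* the writer wrapped around         */
        dma_start(&log_buf[rd_idx], LOG_BUF_SIZE - rd_idx);
        rd_idx = 0;                      /* the head (0..wr) goes out on the
                                            next tick, or via a second,
                                            chained transfer if supported     */
    }
}
```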

What would be a cycle-efficient way to check whether the 10KB buffer has reached some limit?
Hard to say without knowing how the buffer is implemented. Calculate how much data is in the buffer and after some limit trigger the dma.
 
So this is just for debugging in case the system crashes?

Usually you do not want to work hard to optimize your debugging because it's going to be removed from production anyway.

The processor you use is probably fast, since you run so many threads. If you're called every 10us on a 100MHz processor, that's one write every 1000 cycles. Initiating DMA will probably take no more than 10 cycles - a 1% overhead. If your system cannot take that, it's a problem in itself.
 
Waiting for the data and then firing the DMA is about all you can do. Knowing when the data is coming in, and how long it takes to move it from the first buffer to the second, is the only way to figure this out. The OP needs to tell us more about this.

For example: I'm logging 10 entries to the first buffer, each taking time X; I have a break of time T before the next log arrives and the buffer is full; and it takes time X to send the data to the second buffer.
 
Hi again, and thank you very much guys.

misterT said:
Can't call that statistics really.. but if you have a way to time 3ms intervals then I would trigger the DMA at 3ms rate to transfer everything in the buffer to the larger storage.
Well, the DMA doesn't transfer the entire buffer, but just a section of it at a time.
Say the buffer runs from address (0) to (10KB-1).
Say the uC writes to the buffer from (0) to (3KB-1).
Now the DMA starts transferring the data from (0) to (3KB-1).
The uC has of course continued writing from (3KB) to (6KB-1).
Now the DMA needs to start transferring the data from (3KB) to (6KB-1).

I think I should transfer only known-sized sections from the 10KB buffer to the 1MB buffer, because the writing to the 1MB buffer is also done in cyclic mode.
Therefore I should transfer known-sized sections that divide 1MB evenly.
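One way to express that as a sketch: pick a chunk size that divides both buffer sizes evenly (2KB divides both 10KB and 1MB), keep a source offset and a destination offset, and let the destination offset wrap inside the 1MB buffer. The names (dma_copy, big_buf, transfer_next_chunk) are illustrative only:

```c
#include <stdint.h>

#define LOG_BUF_SIZE  (10u * 1024u)      /* fast source buffer                */
#define BIG_BUF_SIZE  (1024u * 1024u)    /* slow 1MB destination buffer       */
#define CHUNK_SIZE    (2u * 1024u)       /* divides both 10KB and 1MB exactly */

extern uint8_t log_buf[LOG_BUF_SIZE];    /* in fast memory                    */
extern uint8_t big_buf[BIG_BUF_SIZE];    /* in slow memory                    */

/* Hypothetical low-level call: DMA 'len' bytes from 'src' to 'dst'. */
extern void dma_copy(const void *src, void *dst, uint32_t len);

static uint32_t src_off;                 /* next chunk to read in log_buf     */
static uint32_t dst_off;                 /* next free chunk in big_buf        */

/* Call whenever a full CHUNK_SIZE of new log data has accumulated. */
void transfer_next_chunk(void)
{
    dma_copy(&log_buf[src_off], &big_buf[dst_off], CHUNK_SIZE);

    /* Both offsets advance in fixed steps and wrap cleanly, because
     * CHUNK_SIZE divides both buffer sizes exactly. The big buffer is
     * therefore itself cyclic and always holds the most recent ~1MB.   */
    src_off = (src_off + CHUNK_SIZE) % LOG_BUF_SIZE;
    dst_off = (dst_off + CHUNK_SIZE) % BIG_BUF_SIZE;
}
```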

misterT said:
Hard to say without knowing how the buffer is implemented. Calculate how much data is in the buffer and after some limit trigger the dma.
The buffer is simply a static array.
Its starting address and size are known after linkage.
Each task writes a different amount of data to the 10KB buffer each time (there's no constant writing pattern, as it's a real-time application).
I write to the 10KB buffer in cyclic mode.

misterT said:
Calculate how much data is in the buffer and after some limit trigger the dma.
That's exactly the thing - how to calculate, efficiently, when to trigger the DMA.
If I add an 'if the 10KB buffer reached the limit' check EVERY TIME I write a log to the 10KB buffer, that could be a waste of time - IF there's a more efficient way, of course.

NorthGuy said:
So this is just for debugging in case the system crashes?
Mostly, yes.

NorthGuy said:
Usually you do not want to work hard to optimize your debugging because it's going to be removed from production anyway.
Actually it is very important, because without the ability to debug your failures, your product will not be mature enough or market-ready.
It has high value and needs to be optimized so as not to interfere with the 'normal' work of the system.

The processor you use is probably fast, since you run so many threads. If you're called every 10us on a 100MHz processor, that's one write every 1000 cycles. Initiating DMA will probably take no more than 10 cycles - a 1% overhead. If your system cannot take that, it's a problem in itself.
I'm not sure what you're suggesting.
Are you suggesting triggering the DMA after every write?
It'd be a waste to trigger the DMA every time I write a few bytes to the 10KB buffer.

My main problem is: how do I efficiently decide when to trigger the DMA?
And based on what do I make that decision?


What's the algorithm you'd recommend, friends?
 