This is the way I see it. 8)
Say I have an 8-bit ADC1 that gives me a value of
196 after conversion, and a second ADC2 that outputs
15. You are to divide one by the other and send the result to an 8-bit DAC. So scaling is good for getting around the floating-point problem.
Let's scale:
196 * 100 = 19600. Now we divide by the unscaled ADC2 value of
15. So instead of
196 / 15 = 13.06, we have
19600 / 15 = 1306.
Now if you were to show this result on an LCD, the scaling would have helped. But if you are going to output the result to a parallel 8-bit DAC, what do you do? You can only convert numbers from
0-255, so you still have to get rid of the decimal point, which brings us back to the problem I mentioned in my last post. :?
You could then go to a 10-bit DAC so you have more resolution. (But that changes the problem if you only have an 8-bit DAC on hand.)
Now with a 10-bit DAC your 13.06 is equivalent to
52 = (13.06 * 1024 / 255). The difference here is that an ADC conversion of
200 / 15 = 13.33 would yield an equivalent 10-bit number of
(13.33 * 1024 / 255) = 53 for the DAC.
So the range of ADC conversions that might give you the same result after dividing has been reduced. But
197 / 15 still gives you the 10-bit equivalent of 52, the same as your
196 / 15 and
195 / 15. Still, those are only
3 numbers instead of the
15 numbers that would yield the same result if an 8-bit DAC was used. If you can live with that, then no problem.
I might not see how the scaling helps in this context; perhaps I am overlooking something? I know it lets you work without floating point, but at output time you will need to scale back down and run into the same problem.
Ivancho