Nigel and I have exchanged a handful of posts in another thread on our views of optimization and resource allocation. I think this is an important topic, and the community will benefit from a frank and fair exchange of views. Rather than hijack the original thread, I suggest we continue the discussion here.
First let me stipulate that this discussion is not about winners and losers. IMHO Nigel is an extremely competent and dedicated individual. In short he is a man worthy of our respect.
In embedded design there are a number of common metrics that we use to compare competing solutions. This applies to both hardware and firmware. A non-exhaustive short list would include gate count, package count, board area, bytes of code space, bytes of data space, number of instruction cycles, clock frequency, and so forth.
This discussion started with a poster's attempt to save a couple of bytes while writing data to an LCD with an 8051. I commented, critically, on the time being spent to save what I considered a trivial number of bytes before he even had a working solution.
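For concreteness, the kind of routine in question, written the plain make-it-work-first way, might look something like the rough C sketch below. This is not the original poster's code; the port, pin names, and HD44780-style interface are assumptions for illustration, in the Keil C51 dialect.

    #include <reg51.h>            /* Keil-style SFR definitions (assumed)  */

    #define LCD_DATA P2           /* 8-bit data bus on port 2 (assumed)    */
    sbit LCD_RS = P3^0;           /* register-select pin (assumed)         */
    sbit LCD_EN = P3^1;           /* enable/strobe pin (assumed)           */

    static void lcd_delay(void)
    {
        volatile unsigned char i; /* crude settling delay, board-specific; */
        for (i = 0; i < 200; i++) /* a real driver would poll the busy flag */
            ;
    }

    static void lcd_write_char(unsigned char c)
    {
        LCD_RS = 1;               /* RS high selects character data        */
        LCD_DATA = c;
        LCD_EN = 1;               /* strobe the byte into the controller   */
        LCD_EN = 0;
        lcd_delay();
    }

    void lcd_write_string(const char *s)
    {
        while (*s)
            lcd_write_char(*s++); /* clear and obvious, not byte-optimal   */
    }

A few bytes could no doubt be squeezed out of something like this, and that is exactly the trade-off under discussion.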
In a somewhat over-the-top riposte, Nigel rose to the defense of optimizers and super coders everywhere dedicated to the absolute and total elimination of bloatware.
I replied that in an earlier time I would have agreed with him, because our options for dealing with the problems of not enough time and not enough memory were restricted. I argued that today there are many options for dealing with running out of resources, and therefore less of an imperative to spend inordinate amounts of time on manual optimization.
The crux of my argument is that silicon resources are cheap and, according to Moore's law, are getting cheaper at an accelerating rate. People, with their salaries, benefits, and expenses, are getting more expensive, also at an accelerating rate. At some point the imperatives of our lives and our business will dictate the following priorities:
First, make it work
Second, make it small or fast
Third, make it elegant
To respond to Nigel's last point in the previous thread: in 1980, when the 8051 was introduced, it had 4K of code space. If you ran out of space, there were not many choices that preserved the single-chip nature of the design. The 8052 later pushed that to 8K. Today you can buy 8051s from numerous manufacturers with a full 64K of code space, in packages a fraction of the size of a 40-pin DIP gunboat. Most of our company's products use 8051s, and we have trouble filling even 40% of the available code space. There are no doubt problems that cannot be solved in a midrange PIC16Fxxx with even 8K words of program memory. Do we really imagine that Microchip has no alternatives for us?
As a final point in this post I would like to add the following idea, which did not originate with me.
There is no substitute for the right algorithm
What this means to me is that if you pick the wrong algorithm, you can optimize to your heart's content and never match the result you would get by throwing your first attempt away and choosing a superior algorithm. The classic example is sorting five records versus 1,000 records. In the first case a bubble sort is superior to a quicksort; in the second case the reverse is true.
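As a rough illustration of that size trade-off, here is a sketch in C that uses a simple bubble sort for small arrays and the standard library's qsort() for large ones. The function names and the crossover value of 16 are arbitrary choices for illustration, not measured figures.

    #include <stdlib.h>               /* qsort(), size_t */

    #define SMALL_N 16                /* assumed crossover point, not measured */

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a;
        int y = *(const int *)b;
        return (x > y) - (x < y);
    }

    /* Bubble sort: tiny code, no recursion, essentially no overhead,
       which is why it wins for a handful of records.                 */
    static void bubble_sort(int *v, size_t n)
    {
        size_t i, j;
        for (i = 0; i + 1 < n; i++)
            for (j = 0; j + 1 < n - i; j++)
                if (v[j] > v[j + 1]) {
                    int t = v[j];
                    v[j] = v[j + 1];
                    v[j + 1] = t;
                }
    }

    /* Pick the algorithm to fit the problem size. */
    void sort_records(int *v, size_t n)
    {
        if (n <= SMALL_N)
            bubble_sort(v, n);                /* five records: keep it simple */
        else
            qsort(v, n, sizeof *v, cmp_int);  /* 1000 records: O(n log n)     */
    }

No amount of hand-polishing the bubble sort will make it competitive at 1,000 records; the win comes from choosing the right algorithm in the first place.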
That's my opinion -- yours can and probably will vary. I really want to know what you all think.