ah, i see. outta my league then.
http://stackoverflow.com/questions/2190470/neural-network-simulator-in-fpga
"Most attempts at building a 'literal' neural network on an FPGA hit the routing limits very quickly, you might get a few hundred cells before P&R pulls takes longer to finish than your problem is worth waiting for. Most of the research into NN & FPGA takes this approach, concentrating on a minimal 'node' implementation and suggesting scaling is now trivial.
The way to make a reasonably sized neural network actually work is to use the FPGA to build a dedicated neural-network number-crunching machine. Get your initial node values into a memory chip, have a second memory chip for your next-timestep results, and a third area to store your connectivity weights. Pump the node values and connection data through using techniques that keep the memory buses saturated (order node loads by CAS line, read ahead using pipelines). It will take a large number of passes over the previous dataset as you pair off weights with previous values, run them through DSP MAC units to evaluate the new node values, then push them out to the result memory area once all connections have been evaluated. Once you have a whole timestep finished, reverse the direction of flow so the next timestep writes back to the original storage area."
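
For anyone trying to picture the dataflow, here's a minimal software model of what that answer describes: two node-value buffers that swap roles each timestep (the "reverse the direction of flow" part), a separate weight store, and a multiply-accumulate loop standing in for the DSP MAC units. The names, sizes, dense connectivity and the activation function are my own illustrative assumptions, not part of the original answer, and the memory-bus tricks (CAS-ordered loads, read-ahead pipelines) are left out.

```c
#include <stddef.h>

#define N_NODES 1024

static float weights[N_NODES][N_NODES];   /* weights[dst][src], dense for simplicity */
static float buf_a[N_NODES], buf_b[N_NODES];

/* Placeholder activation; the real function depends on the network. */
static float activation(float x) { return x > 0.0f ? x : 0.0f; }

/* One timestep: for each node, multiply-accumulate the previous values against
 * its weight row (what the DSP MAC units do), then write into the other buffer. */
static void step(const float *prev, float *next)
{
    for (size_t dst = 0; dst < N_NODES; ++dst) {
        float acc = 0.0f;
        for (size_t src = 0; src < N_NODES; ++src)
            acc += weights[dst][src] * prev[src];
        next[dst] = activation(acc);
    }
}

void run(size_t timesteps)
{
    float *prev = buf_a, *next = buf_b;
    for (size_t t = 0; t < timesteps; ++t) {
        step(prev, next);
        /* Swap the buffers rather than copying: the FPGA equivalent of reversing
         * which memory chip is read and which is written for the next timestep. */
        float *tmp = prev; prev = next; next = tmp;
    }
}
```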