In theory, you can do an 8-bit conversion, convert the result back to analog with a D/A, subtract it from the input, amplify the difference by 256, and send that to another 8-bit A/D. The problem is that, in order to get 16-bit accuracy, the first converter, even though its resolution is only 8 bits, needs to be 16-bit accurate. The D/A also needs to be 16-bit accurate, and so does the gain of the amplifier. There is a way to eliminate the gain accuracy requirement, but it is complex and may not be applicable to your situation.
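If it helps to see where the accuracy requirements bite, here is a rough Python sketch of that scheme. Everything in it is an illustrative assumption (an ideal 0..1 V range, ideal 8-bit quantizers, a nominal gain of 256, and a deliberately injected D/A error), not a real design:

```
def adc8(v):
    # Ideal 8-bit quantizer over an assumed 0..1 V full scale (codes 0..255)
    return max(0, min(255, int(v * 256.0)))

def two_step(vin, dac_error=0.0, gain=256.0):
    coarse = adc8(vin)                     # first 8-bit conversion
    vdac = coarse / 256.0 + dac_error      # D/A reconstruction, possibly imperfect
    residue = (vin - vdac) * gain          # amplified difference
    fine = adc8(residue)                   # second 8-bit conversion
    return (coarse << 8) | fine            # combine into a 16-bit code

vin = 0.123456
print(two_step(vin))                       # 8090 -- same as an ideal 16-bit ADC
# A D/A error of half a coarse LSB (acceptable for 8-bit accuracy, terrible
# at the 16-bit level) throws the result off by 128 codes:
print(two_step(vin, dac_error=0.5 / 256))  # 7962
```

Perturbing the coarse decision near one of its thresholds does the same kind of damage: the residue gets pushed outside the second converter's range, the fine stage clips, and the error can't be recovered.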
I think the bottom line is: you need a 16-bit A/D.