This is probably a dumb question, but I could not find a reasonable explanation after hours of searching. When do I use 0x prefix and when do I use 0000h? I keep seeing both appear in the same program, and all the tutorials I can find do not explain the difference. I know both are hex. I mostly see the 0x prefix for addresses, but have also seen it in variable declarations. Can someone clear up my confusion?
The two methods of representing hexadecimal constants come from different origins. The 'h' suffix was popular in assemblers for Intel microprocessors in the early 1970s; the assemblers for the 8008, 8080, and 8085 all had this feature. The 0x notation appears to have made its appearance with the K&R C compiler in the late 1970s. Since that time many assembler writers have made both forms acceptable, since we no longer have the memory limitations of the early 1970s. Curiously, the compiler writers do not seem to have returned the favor. I guess we can chalk that up to standards committees.
The various number formats are explained in the MPASM/MPLAB help file. To make the assembler as versatile as possible, it accepts hex in a number of different styles - as suggested, these vary according to your programming heritage.
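For example, here is a sketch of the hex styles MPASM accepts, all defining the same value (the label names are just made up for illustration):

```asm
VAL_A   equ 0x5A    ; C-style prefix notation
VAL_B   equ 5Ah     ; Intel-style suffix notation
VAL_C   equ H'5A'   ; Microchip radix-qualified notation
```

Note that with the suffix style the constant must begin with a digit so the assembler can tell it apart from a label - that is why you see 0FFh (or 0000h) rather than FFh. In C, only the 0x prefix form is legal.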