JunglePython
Member
I am a total newbie with PICs and I am trying to get my head around assembly.
I have been reading "PIC in Practice" by D.W. Smith.
In it he uses a 0.5 second delay routine.
This routine doesn't make sense to me; could someone please explain it?
The circuit uses a 32.768 kHz crystal, the prescaler is /256, so the timer ticks every 1/32 of a second.
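For reference, the 1/32 second tick follows from the crystal and prescaler like this (assuming the usual mid-range PIC arrangement where TMR0 is clocked from the Fosc/4 instruction clock):

```python
# Where the 1/32 s TMR0 tick comes from (assumed Fosc/4 instruction clocking)
FOSC = 32768               # crystal frequency in Hz
INSTR_CLOCK = FOSC / 4     # instruction clock: 8192 Hz
PRESCALER = 256            # TMR0 prescaler setting

tick_hz = INSTR_CLOCK / PRESCALER  # TMR0 increment rate
tick_period = 1 / tick_hz          # seconds per TMR0 increment

print(tick_hz)      # 32.0 -> TMR0 ticks 32 times per second
print(tick_period)  # 0.03125 -> i.e. 1/32 of a second
```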
Code:
;0.5 second delay.
DELAYP5 CLRF    TMR0            ;clear TMR0, restarting the count
LOOPB   MOVF    TMR0,W          ;read TMR0 into W
        SUBLW   .16             ;16 - TMR0 (SUBLW subtracts W from the literal)
        BTFSS   STATUS,ZEROBIT  ;skip next instruction if the result is zero
        GOTO    LOOPB           ;TMR0 has not reached 16 yet, keep waiting
        RETLW   0               ;TMR0 = 16, 0.5 s has elapsed, return
As I see it, the routine will:
* clear TMR0
* put TMR0 in W
* subtract W from 16 (SUBLW computes literal minus W)
* check whether the answer is 0; if so, skip the next instruction and return to the program, otherwise go back to LOOPB.
As I understand it, once TMR0 is cleared it will start counting up, in this case once every 1/32 of a second. Does this mean that, for this to really be a 0.5 second delay, TMR0 should tick 16 times between the delay being called and the return to the program?
Does this routine achieve this?
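The waiting logic above can be modelled in a few lines to check the arithmetic (a sketch only: real TMR0 increments in hardware while the loop polls it, and the poll itself adds a few instruction cycles of slack):

```python
# Hypothetical model of the DELAYP5 loop: poll TMR0 until it reaches 16
TICK_PERIOD = 1 / 32  # seconds per TMR0 increment (32.768 kHz / 4 / 256)

tmr0 = 0              # CLRF TMR0
elapsed = 0.0
while tmr0 != 16:     # loop until 16 - TMR0 == 0 (the SUBLW/BTFSS test)
    tmr0 += 1         # hardware increments TMR0 once per tick
    elapsed += TICK_PERIOD

print(elapsed)  # 16 ticks * 1/32 s = 0.5 s
```

So 16 timer ticks at 1/32 s each is exactly the 0.5 s the book claims.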