How can one time the ultra-fast event and deliver that time as a number/voltage etc. to a slower microprocessor?
You should be aware, though, that a laser rangefinder (at least one that works on the basis of time of flight) is beyond the capabilities of almost any hobbyist (even more so for someone asking general questions on a forum).
An ultrasonic rangefinder is much more hobbyist-feasible.
That said, one approach is to do as much as possible in the analog domain, convert the measurement to a "low-speed form", and then transfer that low-speed data to an MCU, because even the fastest processor would be unlikely to keep up (let alone one you could actually mount on a board and figure out how to use). One idea is to generate a pulse whose width equals the time between transmit and receive, then stretch that pulse out long enough that it can be easily measured by a slower processor.
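To put rough numbers on the pulse-stretching idea: one classic analog trick is to charge a capacitor with a large current during the flight time, then discharge it with a much smaller current, stretching the pulse by the current ratio. The target distance, stretch factor, and timer frequency below are hypothetical, chosen only to show the arithmetic:

```python
# Rough arithmetic for the pulse-stretching idea (all numbers hypothetical).
C = 299_792_458                            # speed of light, m/s

distance_m = 10                            # example target distance
flight_time = 2 * distance_m / C           # round-trip time of flight (~66.7 ns)

stretch = 1000                             # charge/discharge current ratio
stretched = flight_time * stretch          # pulse the MCU actually measures

timer_hz = 1_000_000                       # a very ordinary 1 MHz MCU timer
counts = stretched * timer_hz              # timer ticks seen by the MCU

print(f"flight time     : {flight_time * 1e9:.1f} ns")   # ~66.7 ns
print(f"stretched pulse : {stretched * 1e6:.1f} us")     # ~66.7 us
print(f"timer counts    : {counts:.0f}")                 # ~67
```

A ~67 ns event that no hobbyist MCU could time directly becomes a ~67 µs pulse that a plain 1 MHz timer resolves to about one metre.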
Take this REALLY simplified conceptual case of a processor directly interfacing with the laser hardware. Suppose you want 1 m resolution, and assume each step below can be done in one clock cycle:
1. "Pulse pin connected to power transistor to turn laser on and off"
2. "Start timer"
3. "when pin connected to laser diode goes high, stop timer"
Then you'd need a 150 MHz processor (not 300 MHz; remember, the light travels to the target and back, so each metre of range adds two metres of path and you can count half as fast, but that's still really fast), and that's if timing is all it's doing. In reality, each step described would take multiple instructions to execute, on top of all the overhead and other things the processor is doing (especially if it is running an operating system of any kind), easily pushing the required clock up into the multi-GHz range.
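The 150 MHz figure comes straight from the speed of light; a quick sanity check of the arithmetic:

```python
# Why 1 m of range resolution implies roughly a 150 MHz counter clock.
C = 299_792_458                     # speed of light, m/s

# Each extra metre of target distance adds 2 m of round-trip path.
dt_per_metre = 2 / C                # time added per metre of range
f_required = 1 / dt_per_metre       # clock needed to resolve one metre

print(f"time per metre of range : {dt_per_metre * 1e9:.2f} ns")  # ~6.67 ns
print(f"required clock          : {f_required / 1e6:.0f} MHz")   # ~150 MHz
```

One metre of range corresponds to only ~6.67 ns of round-trip delay, so a counter must tick at roughly c/2 ≈ 150 MHz just to tell adjacent metres apart.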
Now, if you are talking about an FPGA, on the other hand, things might look a little better (they are used extensively in RADAR, after all). There is no software to execute and eat up clock cycles; the FPGA just sits there like a non-software digital circuit doing the only job it was configured to do. They aren't the most hobby-friendly devices, though, in terms of availability of development tools and parts or ease of PCB mounting, although things are much better now than they used to be. They also don't "work" or "program" like processors: you have to think in terms of the physical logic you need to pull off an algorithm rather than the "steps" that must be executed.
Either way, it will involve some very high-frequency hardware that is very sensitive to layout. You'd certainly need a very carefully designed board made at a board house, and it will likely need to be a controlled-impedance board. ($$$)