Each object detected by the analysis generates a stream of data records. Each record contains a timestamp identifying when the raw data was actually measured, the three-dimensional location of the object in polar coordinates, and additional tags. These tags identify the object's class, its orientation, and other factors used to ascertain whether it is dangerous, and thus which color code should be used for its blob in the video overlay.
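Such a record might be sketched as follows. The source describes the record's contents (timestamp, polar position, classification tags) but not a concrete schema, so every field name and type here is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DetectionRecord:
    """One record in the stream emitted for a detected object.

    Field names are hypothetical; the original text specifies only
    the kinds of information carried, not an actual layout.
    """
    timestamp_s: float      # when the raw data was measured (seconds)
    range_m: float          # polar coordinates: distance from the sensor
    azimuth_rad: float      # horizontal angle
    elevation_rad: float    # vertical angle
    object_class: str       # e.g. "person", "vehicle"
    orientation_rad: float  # facing direction, an input to threat assessment
    dangerous: bool         # outcome of the threat assessment
    overlay_color: str      # color code for the blob in the video overlay

record = DetectionRecord(
    timestamp_s=1712.004, range_m=42.5, azimuth_rad=0.31,
    elevation_rad=-0.02, object_class="person",
    orientation_rad=1.57, dangerous=False, overlay_color="green",
)
print(record.overlay_color)
```

Making the record immutable (`frozen=True`) reflects that each record is a measurement at a point in time; downstream stages derive new estimates rather than editing old data.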
By the time a record is processed, its timestamp is inevitably a small fraction of a second old, so the software must extrapolate from recent data records to estimate where the object is likely to be at the moment the camera generates the next video frame. This ensures that the blob is drawn in the correct position for moving people and objects.
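A minimal sketch of that extrapolation, assuming a simple linear model over the last two samples (the source does not specify the method; a real system might use a Kalman filter or a higher-order motion model):

```python
def extrapolate_position(records, frame_time_s):
    """Linearly extrapolate an object's polar position to frame_time_s.

    `records` is a time-ordered list of tuples
    (timestamp_s, range_m, azimuth_rad, elevation_rad); the last two
    samples define the assumed constant velocity. Names and the linear
    model itself are illustrative assumptions, not the actual design.
    """
    (t0, *p0), (t1, *p1) = records[-2], records[-1]
    dt = t1 - t0
    if dt <= 0:
        return tuple(p1)  # cannot estimate velocity; hold last position
    scale = (frame_time_s - t1) / dt
    return tuple(b + (b - a) * scale for a, b in zip(p0, p1))

# A target moving steadily outward: range grows by 1 m per 0.1 s,
# so at t = 0.15 s the predicted range is about 11.5 m.
history = [(0.0, 10.0, 0.30, 0.00), (0.1, 11.0, 0.30, 0.00)]
predicted = extrapolate_position(history, 0.15)
print(predicted)  # range ≈ 11.5, angles unchanged
```

The guard for `dt <= 0` matters in practice: duplicate or out-of-order timestamps would otherwise produce a division by zero or a wildly wrong prediction.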