Since there is no audio in the example video, we added some text annotations to highlight the important parts. The GazeTag demo shows four things:

a) loading video and data files that originate from the Yarbus eye-tracking software

b) shows that the software has video-editing-style interaction (i.e. a playhead and video scrubbing). However, stepping with the left or right arrow keys advances one “fixation” rather than one frame; you can also enter a specific fixation number to jump directly to it, or play just the segment of the video covered by a fixation (i.e. from the start of the fixation to its end). The first sketch after this list illustrates this navigation.

c) the early part of the video shows how to tag objects and build up an object library in a linear, fixation-by-fixation approach (the first sketch below also shows this tagging step)

d) later we show Cluster coding, which can dramatically speed up the process (10x to 100x). It works by grouping similar fixation frames so that you can select several fixations and label them all simultaneously; the second sketch below illustrates the idea.
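
To make the fixation-based navigation in b) and the linear tagging in c) concrete, here is a minimal sketch. GazeTag's internals are not shown in the demo, so the `Fixation` structure and every name below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    index: int                 # fixation number shown in the UI (assumed)
    start_frame: int           # first video frame of the fixation
    end_frame: int             # last video frame of the fixation
    label: str | None = None   # object tag, once assigned

class FixationPlayhead:
    """A playhead that steps by fixation rather than by frame."""

    def __init__(self, fixations: list[Fixation]):
        self.fixations = fixations
        self.current = 0

    def step(self, delta: int) -> Fixation:
        # Left/right arrow keys map to delta = -1 / +1; clamp at the ends.
        self.current = max(0, min(self.current + delta, len(self.fixations) - 1))
        return self.fixations[self.current]

    def jump_to(self, number: int) -> Fixation:
        # Entering a fixation number jumps directly to that fixation.
        self.current = number
        return self.fixations[self.current]

    def segment(self) -> tuple[int, int]:
        # Frame range for "play just this fixation".
        f = self.fixations[self.current]
        return f.start_frame, f.end_frame

    def tag(self, label: str, library: set[str]) -> None:
        # Linear coding: label the current fixation and grow the object library.
        self.fixations[self.current].label = label
        library.add(label)
```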
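
The Cluster coding in d) could work along these lines. The demo does not reveal which features or clustering algorithm GazeTag actually uses, so the mean-color patch features and k-means below are stand-ins chosen purely for illustration (NumPy and scikit-learn assumed):

```python
import numpy as np
from sklearn.cluster import KMeans

def patch_feature(frame: np.ndarray, x: int, y: int, r: int = 16) -> np.ndarray:
    """Mean color of the image patch around the gaze point (x, y)."""
    patch = frame[max(0, y - r):y + r, max(0, x - r):x + r]
    return patch.reshape(-1, frame.shape[2]).mean(axis=0)

def cluster_fixations(features: np.ndarray, n_clusters: int = 20) -> np.ndarray:
    """Group fixations with similar patch features; returns a cluster id each."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)

def label_cluster(fixations, cluster_ids, target: int, label: str) -> None:
    """Label every fixation in one cluster at once, as in the demo's
    select-several-and-tag workflow."""
    for fixation, cid in zip(fixations, cluster_ids):
        if cid == target:
            fixation.label = label
```

The speedup comes from amortizing one labeling action over a whole cluster: instead of tagging, say, fifty fixations on the same object one at a time, you tag that cluster once.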