Eye and head movements are used to scan the environment when driving. In particular, when approaching an intersection, drivers make large gaze scans to the left and right, composed of head movements and multiple eye movements. We detail an algorithm, the gaze scan algorithm, that automatically quantifies the magnitude, duration, and composition of such large lateral gaze scans. The algorithm works by first detecting lateral saccades and then merging these lateral saccades into gaze scans, with the start and end point of each gaze scan marked in time and eccentricity. We evaluated the algorithm by comparing gaze scans generated by the algorithm to manually marked "consensus ground truth" gaze scans taken from gaze data collected in a high-fidelity driving simulator. We found that the gaze scan algorithm successfully marked 96% of gaze scans, produced magnitudes and durations close to ground truth, and that the differences between the algorithm and ground truth were similar to the differences found between expert coders. Therefore, the algorithm may be used in lieu of manual marking of gaze data, significantly accelerating the time-consuming marking of gaze movement data in driving simulator studies. The algorithm also complements existing eye tracking and mobility research by quantifying the number, direction, magnitude, and timing of gaze scans, and can be used to better understand how individuals scan their environment.

When driving, we use head and eye movements to scan the environment to search for potential hazards and to navigate. Scanning is especially important when approaching intersections, where a large field of view (e.g., 180° at a T-intersection) needs to be checked for vehicles, pedestrians, and other road users. Typically, drivers make left and right scans that start near and return to the straight-ahead position. The scans become increasingly larger in magnitude as the driver approaches an intersection, with larger scans requiring different numbers and sizes of eye and head movements (Figure 1).

Figure 1. Examples of the diversity of individuals' scanning patterns on approach to an intersection (gaze = blue, head = red). Some gaze scans were made with large head movements (e.g., A), while others were made without any head movements (e.g., B). Some large (60°) scans were slow and composed of multiple saccades (e.g., C), while others were quick and composed of only one saccade (e.g., D). Any scan below 0° eccentricity is a scan to the left and any scan above 0° eccentricity is a scan to the right. The dotted blue arrows in the top left plot indicate the direction of the gaze and head scans; the black arrow in front of the car indicates the travel direction. Each plot shows data from 100 to 0 m before the intersection. Participants decelerated at different rates, hence the different spacings between tick marks on the top (distance-to-intersection) axis. Sections of these plots will be used in subsequent figures to illustrate different aspects of the gaze scan algorithm.

Insufficient scanning has been suggested as one mechanism for increased crash risk at intersections (Hakamies-Blomqvist, 1993). Previous studies have reported that older adults scan insufficiently at intersections compared to younger adults, both in on-road driving (Bao & Boyle, 2009a) and in a driving simulator (Romoser & Fisher, 2009; Romoser, Pollatsek, Fisher, & Williams, 2013; Savage et al., 2017; Bowers et al., 2019; Savage et al., Revise and Resubmit). Individuals with vision loss have also been found to demonstrate scanning deficits at intersections in a driving simulator (Bowers, Ananyev, Mandel, Goldstein, & Peli, 2014). Studies have used different techniques to combine eye and head tracking when driving to better understand how drivers scan while approaching an intersection. These studies, together with analyses of police crash reports (McKnight & McKnight, 2003; Braitman, Kirley, McCartt, & Chaundhry, 2008), suggest that scanning plays an important role in driving and that quantifying scanning may provide insights into why some individuals fail to detect hazards at intersections. Here, we are interested in quantifying visual scanning as lateral gaze scans, which encompass all of the gaze movements (the combination of eye and head movements) that extend horizontally from a starting point near the straight-ahead position to the maximally eccentric gaze position. This research extends our previous quantification of head scans (Bowers et al., 2014) by taking account of eye position as well as head position to characterize gaze scanning while driving.
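The two-stage procedure described above (detect lateral saccades, then merge them into gaze scans marked by start and end time and eccentricity) could be sketched roughly as follows. This is a minimal illustration under assumed conventions, not the published algorithm: the velocity threshold, merge gap, and minimum magnitude are placeholder values, and the function name and output fields are invented for this sketch.

```python
import numpy as np

def detect_gaze_scans(t, gaze_deg,
                      vel_thresh=30.0,     # deg/s saccade velocity threshold (illustrative)
                      merge_gap=0.2,       # max pause (s) between merged saccades (illustrative)
                      min_magnitude=5.0):  # discard tiny scans, in deg (illustrative)
    """Sketch of a two-stage gaze scan detector.

    Stage 1: flag samples whose horizontal gaze velocity exceeds a
    threshold and group contiguous runs into lateral saccades.
    Stage 2: merge consecutive same-direction saccades separated by
    short fixations into a single gaze scan.
    """
    t = np.asarray(t, dtype=float)
    g = np.asarray(gaze_deg, dtype=float)       # horizontal gaze eccentricity (deg)
    vel = np.gradient(g, t)                     # horizontal gaze velocity (deg/s)
    fast = np.abs(vel) > vel_thresh

    # Stage 1: contiguous fast samples -> saccades (start idx, end idx, direction)
    saccades = []
    i = 0
    while i < len(fast):
        if fast[i]:
            j = i
            while j + 1 < len(fast) and fast[j + 1]:
                j += 1
            direction = 1 if g[j] > g[i] else -1
            saccades.append((i, j, direction))
            i = j + 1
        else:
            i += 1

    # Stage 2: merge same-direction saccades separated by a short gap
    merged = []
    for s in saccades:
        if (merged and s[2] == merged[-1][2]
                and t[s[0]] - t[merged[-1][1]] <= merge_gap):
            merged[-1] = (merged[-1][0], s[1], s[2])
        else:
            merged.append(s)

    # Report each scan's start/end in time and eccentricity
    scans = []
    for i0, i1, d in merged:
        magnitude = abs(g[i1] - g[i0])
        if magnitude >= min_magnitude:
            scans.append({"t_start": t[i0], "t_end": t[i1],
                          "ecc_start": g[i0], "ecc_end": g[i1],
                          "direction": "right" if d > 0 else "left",
                          "magnitude": magnitude,
                          "duration": t[i1] - t[i0]})
    return scans
```

With a synthetic trace sampled at 100 Hz containing two rightward 20° saccades separated by a 0.1 s fixation, the two saccades merge into a single 40° gaze scan. Real gaze data would additionally need blink removal and noise filtering before a velocity threshold is meaningful.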