SmartEye’s eye tracking system: our review
Lucille Lecoutre, 20 April 2015

Last September, we presented SmartEye’s eye-tracking system to you. It relies on cameras placed in the environment, which make it possible to build a 3D model of the scene. We recently had the opportunity to test it in two setups: on a computer and in a driving simulator.

Our main findings can be summarised as follows: the system is robust and flexible, but taking advantage of it requires delving into how it works. In our opinion, it is best suited to experts with specific needs for eye tracking in on-board situations, who will take full advantage of this technology.

The user model

Upstream of the study, the system lets you build and save a profile for each tested individual. The profile includes a model of the user’s head, and the calibration of his or her gaze can also be saved.

The head model provides robust tracking of the eyes: the system relies on a head-tracking algorithm that estimates the position of the head (and of the eyes) from one image to the next. This head model can be built automatically, with good results in most cases.

Some situations can be more problematic, especially when objects occlude part of the head (a car steering wheel in front of the face, headphones). We tested the system in a driving simulator in which the driver’s hands sometimes appeared in front of the head when turning the wheel. Once the head has been identified by the system, head tracking remains quite robust despite these occlusions.

It is also possible to build a head model manually for challenging situations (when coupling the eye tracker with an EEG headset, for example).

When it comes to gaze calibration, the system lets you set the number of calibration points and their positions. Once the calibration is done, you can save it in the individual’s profile.

It is also possible to calibrate the gaze in a setup different from the one you will actually use. This is useful, for instance, in driving studies where calibrating directly in the car is not possible. Only having to calibrate the gaze once also saves time.
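SmartEye does not document its calibration internals, so the sketch below is a rough illustration only of what a point-based gaze calibration does conceptually: known target positions and the raw gaze estimates recorded while looking at them are used to fit a correction, which is then applied to subsequent samples. The affine model and the five-point layout are our assumptions, not the vendor’s algorithm.

```python
# Illustrative sketch only: SmartEye does not expose its calibration
# internals, so this shows the general idea behind point-based gaze
# calibration, not the vendor's actual method.
import numpy as np

def fit_calibration(raw_gaze, targets):
    """Fit an affine correction mapping raw gaze estimates (N x 2)
    onto the known calibration target positions (N x 2)."""
    raw = np.asarray(raw_gaze, dtype=float)
    tgt = np.asarray(targets, dtype=float)
    # Design matrix [x, y, 1] for a least-squares affine fit.
    A = np.hstack([raw, np.ones((raw.shape[0], 1))])
    coeffs, *_ = np.linalg.lstsq(A, tgt, rcond=None)
    return coeffs  # 3 x 2 matrix of affine coefficients

def apply_calibration(coeffs, raw_point):
    x, y = raw_point
    return np.array([x, y, 1.0]) @ coeffs

# Five hypothetical calibration points (centre + corners), in normalised
# screen coordinates, since the system lets you choose their number/layout.
targets  = [(0.5, 0.5), (0.1, 0.1), (0.9, 0.1), (0.1, 0.9), (0.9, 0.9)]
raw_gaze = [(0.52, 0.47), (0.14, 0.12), (0.88, 0.09), (0.13, 0.92), (0.91, 0.88)]
coeffs = fit_calibration(raw_gaze, targets)
print(apply_calibration(coeffs, (0.52, 0.47)))  # ~ (0.5, 0.5) after correction
```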

The 3D environment model

SmartEye’s system has a very interesting feature: the ability to build a 3D model of the scene. This makes it possible to define areas of interest for the analysis beforehand. Here again, the robustness of the data and the variety of configurations that can be modelled are the added value of this system.

[Figure: test setup with a computer screen; the screen area intersected by the gaze is framed in red]

We tested this feature on a simple setup with a computer screen. As the screenshot shows, the area of the screen where the gaze lands is framed in red.

While this functionality offers flexibility in its implementation, it comes with drawbacks. Such flexibility requires handing control to users, who must build their configurations of interest themselves.

Indeed, modelling the world can only be done manually: you have to enter the coordinates of every desired shape. This type of construction is not very user-friendly, but more importantly, data accuracy will depend directly on the measurement errors made during the construction stage. The 3D reference frame used for modelling is based on the position of the cameras, so the slightest camera movement disrupts the whole coordinate system.

Given that the items in this 3D world can only be static, the system is not well suited to analysing visual scanpaths on web interfaces, for example, where we are interested in the interaction between the eye and the dynamics of user navigation. In contrast, the world model makes it possible to decompose the user’s environment into areas of interest. For example, it is possible to model the interior of a vehicle and its mirrors, and the system analyses in real time the position of the gaze relative to these areas of interest. If you add to this the option of acquiring GPS positioning data alongside, eye tracking in embedded systems becomes a real added value.
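To make the geometry concrete, here is an illustrative sketch (not SmartEye’s actual API) of what classifying the gaze against 3D areas of interest involves: each AOI is a rectangle whose coordinates have been entered by hand in the camera-based reference frame, and the gaze is treated as a ray intersected against it. All names and coordinates below are hypothetical.

```python
# Illustrative sketch only: the geometry behind classifying gaze against
# 3D areas of interest, not SmartEye's implementation.
import numpy as np

def gaze_hits_rect(origin, direction, corner, edge_u, edge_v):
    """Intersect a gaze ray with a rectangle given by one corner and
    two edge vectors; return True if the hit point lies inside it."""
    normal = np.cross(edge_u, edge_v)
    denom = direction @ normal
    if abs(denom) < 1e-9:          # ray parallel to the AOI plane
        return False
    t = ((corner - origin) @ normal) / denom
    if t <= 0:                     # AOI is behind the eye
        return False
    hit = origin + t * direction - corner
    u = (hit @ edge_u) / (edge_u @ edge_u)
    v = (hit @ edge_v) / (edge_v @ edge_v)
    return 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0

# Hypothetical AOIs (a screen, a left mirror), coordinates in metres,
# expressed in the camera-based reference frame.
aois = {
    "screen":      (np.array([-0.25, 0.30, 0.70]), np.array([0.5, 0.0, 0.0]), np.array([0.0, 0.3, 0.0])),
    "left_mirror": (np.array([-0.60, 0.40, 0.40]), np.array([0.15, 0.0, 0.0]), np.array([0.0, 0.10, 0.0])),
}
eye, gaze = np.array([0.0, 0.4, 0.0]), np.array([0.0, -0.05, 1.0])
for name, (corner, u, v) in aois.items():
    if gaze_hits_rect(eye, gaze, corner, u, v):
        print("gaze in AOI:", name)   # -> "gaze in AOI: screen"
```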

In addition to viewing the intersection of the gaze with the areas of interest live (which mainly serves, before acquisition, to check that the objects are correctly positioned relative to the eyes), it is possible to record these data and thus save precious time during the final analysis. Even if it can sometimes be cumbersome, the manual construction of the scene makes it possible to model a wide variety of environments.

The eye tracking acquisition system

The system offers interesting features before and during the acquisition, but what about the acquisition itself? We found it more complicated to carry out. The software is quite usable, but it is easy to miss a step (defining the coordinate system incorrectly, forgetting to record or overwriting the data, not stopping the recording).

We did not find an easy way to add a record track for synchronising our data with another acquisition system (a video stream capture, for example). This feature is supported by separate software (Mapps), available as an add-on.
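Lacking such a built-in sync track, a common workaround is to align the two streams afterwards on shared timestamps. The sketch below shows this general approach with pandas; the data and column names are hypothetical, and this is our workaround, not a Mapps feature.

```python
# Illustrative sketch only (hypothetical data): aligning an eye-tracking
# log with video frame timestamps on a shared clock, the kind of
# synchronisation a dedicated record track would provide.
import pandas as pd

# Eye-tracking samples (~60 Hz) and video frames (25 fps), both
# timestamped in seconds on the same clock.
gaze = pd.DataFrame({
    "t": [0.000, 0.017, 0.033, 0.050, 0.067],
    "gaze_x": [0.51, 0.52, 0.55, 0.61, 0.70],
})
frames = pd.DataFrame({"t": [0.00, 0.04, 0.08], "frame_id": [0, 1, 2]})

# Attach to each gaze sample the nearest video frame within 20 ms;
# samples with no frame close enough keep a missing frame_id.
merged = pd.merge_asof(gaze, frames, on="t",
                       direction="nearest", tolerance=0.02)
print(merged)
```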

At first sight, the recorded data seem very complete. As we said previously, the eye tracking is robust enough, even if head movements sometimes degrade it along the z axis (during a quick change of direction, for example). We will evaluate the data in more depth in another article.

Conclusion

In conclusion, the SmartEye system offers very interesting features which, to be fully exploited, require delving a little into its operation. It is not really a “plug-and-play” system but rather a “do-it-yourself” one, a trait necessarily linked to the system’s flexibility: adaptable head profiles, many possible experimental configurations… Note, however, that SmartEye provides responsive and effective support, much appreciated when needed.

We therefore find SmartEye’s system well suited to eye-tracking measurements in operational conditions.

Lucille Lecoutre

Lucille first followed a research curriculum in cognitive neuroscience, where she studied the perceptual and predictive aspects of human cognition. She then turned to the potential applications of cognitive science to human factors. She is now a Human Factors engineer at Akiani.
