Abstract
In the field of marketing, eye-tracking has developed into a common method for studying consumers' visual attention and response to advertisements, packaging, and in-store experiences. Over the last ten years, many eye-tracking studies have investigated visual attention in such daily decisions, well aware that humans may perceive real-life exposure differently from scenes shown on a screen. New hardware and software have enabled us to use and analyze data from wearable eye-trackers, and thereby compare eye-tracking data from sceneries shown on a screen (2D) with data from real-life experiences (3D). Yet this also poses a great challenge, as the amount of data and its complexity increase significantly, and what we gain in ecological validity we may lose in internal validity.

In our set-up, we investigated three scenarios involving the very same shelf of products in order to compare decision processes in 2D and 3D. In the 2D scenario we used a Tobii 60XL remote eye-tracker, and in the 3D scenario we used a Tobii2 mobile eye-tracker (50 Hz). The 3D scenario was in fact split into two scenarios, as we had the very same shelf display built up in our SenseLab. That way, we ended up with three comparable sets of eye-tracking data related to the same decision process. The outcome showed a more systematic search process in the 2D scenario; the closer we came to a real-life setting, the more chaotic the search process became, including larger head movements and smaller eye movements.

Another challenge of 3D eye-tracking is that it easily becomes too resource-demanding, owing to the manual interpretation and analysis of the enormous amount of data from a representative sample size; the required workload can easily make the cost of the study exceed the value of its results. A common way to analyze eye-tracking data quickly and easily is the heatmap, based on either raw data or the number of fixations. Mapping data from a portable eye-tracker recorded in a real-life scenario onto a 2D heatmap is not an easy task; one major challenge is the parallax error due to the natural perspective. To build a model solving this issue, we took a simple approach using homographic correspondence between a reference image and the frames coming from the portable eye-tracker, and to these combined images we added the gaze data mapped onto 3D-modelled AOIs. The procedure has three steps, starting with the design of the 3D AOI. In the second step, we added the combined frames one by one to this 3D AOI, and finally we created a 3D heatmap. The proposed model runs automatically at approximately two frames per second. We found the 3D heatmap model to work well for investigating the in-store search process, especially for shelf displays whose packaging has distinct image features. However, when consumers make a final decision and start reading text on the packaging, the 3D heatmap reaches its limit.
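The paper itself does not include an implementation. Purely as an illustration of the homographic-correspondence step described above, here is a minimal Python/OpenCV sketch that maps a gaze point from a scene-camera frame onto a flat reference image of the shelf and accumulates mapped points into a 2D heatmap. The function names, the choice of ORB features, and the RANSAC threshold are illustrative assumptions, not the authors' pipeline, and the sketch omits the paper's 3D AOI modelling step.

```python
import cv2
import numpy as np

def map_gaze_to_reference(ref_img, frame, gaze_xy, orb, matcher):
    """Estimate a homography from a scene-camera frame to the reference
    image via ORB feature matches, then map the gaze point through it.
    (In practice the reference features would be computed once.)"""
    kp_ref, des_ref = orb.detectAndCompute(ref_img, None)
    kp_frm, des_frm = orb.detectAndCompute(frame, None)
    if des_ref is None or des_frm is None:
        return None
    # k-nearest matching with Lowe's ratio test keeps distinctive matches
    matches = matcher.knnMatch(des_frm, des_ref, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 10:
        return None  # too few correspondences for a reliable homography
    src = np.float32([kp_frm[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    mapped = cv2.perspectiveTransform(np.float32([[gaze_xy]]), H)
    return float(mapped[0, 0, 0]), float(mapped[0, 0, 1])

def accumulate_heatmap(ref_shape_hw, points, sigma=25.0):
    """Bin mapped gaze points on the reference image, then blur them
    into a normalized heatmap."""
    heat = np.zeros(ref_shape_hw, dtype=np.float32)
    for x, y in points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < heat.shape[0] and 0 <= xi < heat.shape[1]:
            heat[yi, xi] += 1.0
    heat = cv2.GaussianBlur(heat, (0, 0), sigma)
    return heat / heat.max() if heat.max() > 0 else heat

# Hypothetical usage: one shelf reference photo, per-frame gaze samples
orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
```

The RANSAC-based homography makes the frame-to-shelf mapping robust to the perspective and parallax changes mentioned above, but only as long as the packaging offers enough distinctive features for matching, which is consistent with the paper's observation that the approach works best for shelf displays with distinct image features and breaks down once consumers start reading package text.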
Original language | English |
---|---|
Title of host publication | Measuring Behavior 2018 |
Editors | Robyn Grant, Tom Allen, Andrew Spink, Matthew Sullivan |
Place of Publication | Manchester |
Publisher | Manchester Metropolitan University |
Publication date | 2018 |
Pages | 69 |
ISBN (Print) | 9781910029398 |
Publication status | Published - 2018 |
Event | 11th International Conference on Methods and Techniques in Behavioral Research. Measuring Behavior 2018 - Manchester Metropolitan University, Manchester, United Kingdom |
Conference
Conference | 11th International Conference on Methods and Techniques in Behavioral Research. Measuring Behavior 2018 |
---|---|
Number | 11 |
Location | Manchester Metropolitan University |
Country/Territory | United Kingdom |
City | Manchester |
Period | 05/06/2018 → 08/06/2018 |
Internet address | https://www.measuringbehavior.org/mb2018/ |