“What you see is what you buy”
It is believed that 95% of purchase decisions happen in the subconscious (let's call it the reptile brain). The decisions taken by the reptile brain are strongly influenced by what we see. Walking down a supermarket aisle, we see hundreds of packs of different products, but only some of them capture our attention. When we see such a product, the reptile brain kicks into action: it fixates on the packaging that looks interesting ("ooh, shiny!") and before we know it, we are seriously considering purchasing that product.
Eye-tracking is a well-known tool for implicitly measuring how people respond to different product packaging, advertisement copy, web-page layouts, banner placements and more. Eye-tracking studies have the potential to generate tremendous ROI on product packaging, advertisement creatives and placement decisions. But we (and the clients we speak to) believe the industry is only beginning to scratch the surface of using this technique to generate insights. The reasons usually cited are that it is expensive, slow, and not objective (participants are influenced by the process). However, we believe the key reason is that the technology is not yet mature.
With this whitepaper, we intend to demonstrate how embedding Artificial Intelligence (AI) into current eye-tracking technology can make eye-tracking analysis more powerful, faster and more cost-effective. We assume the reader is familiar with the eye-tracking process for physical environments.
Eye-Tracking at a Glance
There are two primary methodologies for eye tracking: Physical and Virtual. In physical eye-tracking, research participants wear eye-tracking hardware (glasses) and walk around a physical space (usually a retail shelf) where actual copies of the research subject are kept. In a virtual environment, by contrast, participants view the research subject (typically a virtual shelf, website layout, or video ad) on a computer screen, and a webcam tracks their gaze movements.
While virtual eye-tracking is faster and cheaper, it has the disadvantages of being less accurate (a webcam is not as accurate as dedicated eye-tracking hardware) and further from reality (a virtual shelf looks very different from a real one). In this whitepaper, we focus specifically on physical eye tracking for testing the visual appeal of product packaging on retail shelves.
Coding in Physical Eye Tracking
The key constraint with eye-tracking technology is that it can tell you 'where' the customer is looking, but not 'what' the customer is looking at. The hardware determines the direction and location at which a person's gaze is fixated, but it has no knowledge of what the person is actually seeing. It is blind to whether the person is looking at a price tag, a Red Bull can, a Gatorade can, her mobile phone, etc.
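To make the "where vs. what" distinction concrete, here is a minimal sketch of what coding does: it maps a raw gaze point (pixel coordinates in the scene-camera frame) to a semantic Area of Interest (AOI) label. The AOI names and coordinates below are hypothetical, for illustration only.

```python
def code_gaze_point(x, y, aois):
    """Return the label of the first AOI box containing (x, y), else None.

    aois: dict mapping label -> (left, top, right, bottom) in pixels.
    """
    for label, (left, top, right, bottom) in aois.items():
        if left <= x <= right and top <= y <= bottom:
            return label
    return None  # gaze fell outside every AOI

# Hypothetical AOIs marked on one frame of a shelf video:
shelf_aois = {
    "red_bull_can": (100, 50, 180, 200),
    "gatorade_bottle": (200, 40, 290, 210),
    "price_tag": (120, 210, 170, 240),
}

print(code_gaze_point(150, 120, shelf_aois))  # → red_bull_can
print(code_gaze_point(500, 500, shelf_aois))  # → None
```

The eye tracker supplies only the (x, y) point; supplying the boxes and labels is the coding step, which is exactly the part that is done manually today.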
Manual Coding has many challenges
The current coding solutions in the market for physical eye tracking do not offer fully automated coding of gaze videos. They are essentially annotation/tagging software designed to make manual coding more efficient. Manual coding creates the following challenges in a typical physical eye-tracking project:
Smart-Gaze — AI Solution for Eye Tracking Coding
Smart-Gaze uses a deep-neural-network-based architecture to analyse raw gaze videos. With some training, the algorithm learns what the key areas of interest (AOIs) look like; once that is done, it performs the coding automatically, with accuracy comparable to what a human coder would achieve.
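The automated pipeline can be sketched as follows. This is not Smart-Gaze's actual implementation; it is an illustrative sketch in which a trained detector (stubbed out below) returns AOI bounding boxes for every frame, the gaze point is matched against them, and per-AOI dwell times are aggregated. All names, coordinates and the frame rate are hypothetical.

```python
from collections import defaultdict

FRAME_MS = 40  # hypothetical: 25 fps scene camera, so each frame ≈ 40 ms


def detect_aois(frame_index):
    """Stand-in for a trained object detector.

    In the automated setting, a deep network would return AOI bounding
    boxes per frame; here we fake boxes that drift across frames, since
    the scene camera moves as the participant walks along the shelf.
    """
    dx = frame_index * 2  # the shelf appears to shift as the wearer moves
    return {
        "red_bull_can": (100 - dx, 50, 180 - dx, 200),
        "price_tag": (120 - dx, 210, 170 - dx, 240),
    }


def code_video(gaze_points):
    """gaze_points: list of (frame_index, x, y). Returns dwell ms per AOI."""
    dwell = defaultdict(int)
    for frame_index, x, y in gaze_points:
        for label, (l, t, r, b) in detect_aois(frame_index).items():
            if l <= x <= r and t <= y <= b:
                dwell[label] += FRAME_MS
                break
    return dict(dwell)


# Hypothetical gaze trace: the viewer tracks the can, then the price tag.
trace = [(0, 150, 120), (1, 148, 121), (2, 140, 220)]
print(code_video(trace))  # → {'red_bull_can': 80, 'price_tag': 40}
```

The key difference from manual coding is that the AOI boxes are re-detected on every frame, so the coding keeps working even though the shelf moves in the camera view as the participant walks.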
Smart-Gaze makes coding for physical eye-tracking projects much more effective due to the following advantages:
How Does It Work?
At the moment, Smart-Gaze is not a self-serve product that a researcher can log into and use. Instead, it works as a service: the researcher captures raw gaze videos from eye-tracking glasses (any brand works, no constraints here), briefs us on the key Areas of Interest, and we take over. Within three days, our team at Karna AI trains the AI algorithm and codes the data to the client's needs.
The accuracy can be assessed through a coding visualisation tool. Some other key benefits of this model include:
© 2021 TechTarget, Inc.