We’ve recently been debating and challenging ourselves after completing another eye tracking programme (as a digital partner of Southampton Solent University, we get to use their fantastic, state-of-the-art eye tracking facilities).

We’ve been specifically analysing fixation points at 0-5 seconds, and post-10 seconds, for direct response orientated email designs.
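If you’re curious what that sort of time-window split looks like in practice, here’s a minimal sketch in Python (the fixation data and column names are purely illustrative, not our actual tooling):

```python
import pandas as pd

# Illustrative fixation log: one row per fixation, with onset time
# in seconds from the stimulus reveal, plus gaze coordinates.
fixations = pd.DataFrame({
    "onset_s": [0.4, 1.2, 3.8, 6.5, 11.0, 14.2],
    "x": [120, 340, 310, 500, 210, 640],
    "y": [80, 150, 160, 420, 90, 510],
})

# Bucket fixations into the two windows we compare:
# the first 5 seconds versus everything after 10 seconds.
early = fixations[fixations["onset_s"] <= 5]
late = fixations[fixations["onset_s"] > 10]

print("Early fixations (0-5s):", len(early))
print("Late fixations (post-10s):", len(late))
```

It’s got us thinking…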
In a UX study, how much credence should be apportioned to:
1)    Where users look?
2)    What users say?
3)    What users do?

[Image: Debenhams email design viewed on a MacBook]

As we’ve seen in many tests, the three can contradict one another, which can lead to misinterpretation.
1)    Where users look? Maybe somewhere other than where they’d look if they were on their own PC or laptop?
2)    What users say? Maybe what they think you want to hear?
3)    What users do? Maybe they’re only doing something because you’re sat with them?

With eye tracking in particular, we do try to nullify any unnatural behaviour by keeping schtum for at least the first 15 seconds of a reveal, and by sitting outside of the tester’s eye line.

We actually try to apply a ‘psychology of communication’ approach to interpreting these types of results, on the assumption that much of what a user communicates comes through more than their words alone:

[Graph: statistics on the psychology of communication]

So, we look at facial expressions and classic non-verbal signs of frustration, and we listen for voice tonality that differs from the rest of the session, in order to gain a truer insight into the user’s experience.

I guess we could add a couple of extra points of analysis to the above list:
4)    How users ‘seem’? (i.e. what users do ‘off screen’)
5)    How users say it?

In a nutshell
When it comes to UX and eye tracking studies, it’s advisable to analyse the bigger picture, taking in body language and tonality rather than just the spoken word.
