One of the most heartening things I took from the MRS Conference was the positioning of data visualisation at the centre of some of the topics. Professor Jo Wood showed us how masses of Barclays bicycle hire data could be explored and understood through visual representation. Simon Rogers, editor of the Guardian Datablog, talked about data-driven journalism, clearly demonstrating how visualisations can provide unparalleled impact and how the opening up of data presents great opportunities for generating insight and sparking debate. This was refreshing: it marks a recognition that market researchers shouldn't do data visualisation for its own sake, and a move away from the attitude of 'let's do data visualisation because we can, or because it's a hot topic'.
However, it forced me to think again about the role of data visualisation in market research. Low barriers to entry and the pervasive nature of the dreaded "infographic" have perhaps meant that market researchers have looked on somewhat while designers, individuals, marketers and PR agencies rushed in to take the reins. This is a real shame, as market researchers have an opportunity to capitalise on their skills and use visualisation to explore and present data in ways that would enrich the field. But with such low barriers to entry, how can (or do) we ensure best practice and standards?
Debate raged last week between Simon Rogers and Stephen Few (editor of Perceptual Edge and a well-respected author on data visualisation) about the Guardian Datablog's role as champion and curator of good visualisations. [Stephen's blog post that sparked the debate 1, Simon's and Stephen's comments 2]. To me this highlights a conversation we still need to have about visualisation in market research: how do we handle the (sometimes) antagonistic ideas of accuracy and impact?
Given my background in cognitive psychology, I am a firm believer in accurate visualisations. The power of optical illusions highlights how the mechanisms of the visual system cause automatic biases in perception, and these can be very hard to overcome. Even people familiar with the Müller-Lyer illusion still "see" one line as longer than the other, despite knowing the two lines are the same length.
A very comprehensive picture of how the science of perception should inform the way we visualise data can be found in Colin Ware's weighty tome, Information Visualization: Perception for Design. There are also highly influential works by Few and Edward Tufte. For a gentler introduction, The Functional Art by Alberto Cairo is a good starting point. In its second chapter he gives the classic example of how people struggle to compare the sizes of bubbles or circles far more than they struggle to compare the lengths of bars. It would be foolish not to consider factors like this when visualising data, and I'm sure both Simon and Stephen would fervently agree.
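A closely related pitfall makes the bubble problem even worse in practice: sizing circles by radius rather than by area. As a minimal Python sketch (using made-up brand-share numbers purely for illustration), this shows how radius-scaled bubbles exaggerate a fourfold difference into a sixteenfold visual one:

```python
import math

# Hypothetical brand shares (illustrative numbers only)
values = {"Brand A": 10, "Brand B": 40}

# Naive bubble sizing: radius proportional to the value.
# Because area grows with the square of radius, Brand B's circle
# covers (40/10)^2 = 16x the area of Brand A's, wildly
# exaggerating what is really a 4x difference.
naive_radius = {k: float(v) for k, v in values.items()}
naive_area_ratio = (naive_radius["Brand B"] / naive_radius["Brand A"]) ** 2

# Correct sizing: area proportional to the value, so radius ~ sqrt(value).
correct_radius = {k: math.sqrt(v) for k, v in values.items()}
correct_area_ratio = (correct_radius["Brand B"] / correct_radius["Brand A"]) ** 2

print(naive_area_ratio)    # 16.0 - the visual ratio under naive radius scaling
print(correct_area_ratio)  # ~4.0 - matches the true ratio in the data
```

Even with correct area scaling, readers judge areas less accurately than lengths, which is Cairo's point: bars encode the same values more reliably.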
But I remain torn. Market researchers need to be story-tellers. Visualisations need to be understandable, but those with the greatest impact aren't always the ones that follow all of the "rules". As Simon pointed out at the conference, sometimes the importance of a visualisation lies in the role it plays in sparking debate.
How far can and should we bend design "rules" when delivering visualisation for market research? Is it more important that a visualisation is rigorously accurate, or is it the impact made in the boardroom, and on subsequent business decisions, that really matters?
What do you think? I would love to hear your views.
Research Innovation Specialist