In Sword Graphs Part I, I introduced the concept of self-encoding with this chart:
The graphic is self-encoded because the images themselves represent a value, rather than that value being translated into a mark like a bar or dot. Information about the length of the blade is represented by the length of the blade: the sword encodes itself.
But why not go a step further and show actual photographs of the swords, or a step in the other direction and use the same generic outline for all of them? The choice of images in self-encoding depends on specificity and processing speed.
In Understanding Comics, Scott McCloud writes that simpler, more iconic images are more universal. The photorealistic drawing on the left could only be one person, while the abstract two-dots-and-a-line face could be almost anyone:
Photographed from Understanding Comics by Scott McCloud
In comics, this effect is useful for helping readers to identify with main characters. A simply drawn character is probably going to resemble the reader more than an extremely detailed character. Stripped-down design also focuses reader attention on the most important ideas and identities at play.
Simpler designs are also faster to process and easier to understand. As Alberto Cairo writes in The Functional Art, people recognize abstract depictions more quickly than detailed images. If the goal of an infographic is to communicate the details of how something looks, make it detailed. If the goal is to communicate a specific aspect or part of a process, show only the most relevant details so the viewer can focus on the process.
Tying these ideas together: a simpler image is faster to process and more generally applicable. The more detail an image contains, the slower it is to process and the more specific it becomes.
That doesn’t mean that detailed images are bad or that abstract images are good. It does mean that simple and detailed images do very different things in a visualization, and a designer should make a conscious choice about what imagery they’re using.
For example, let’s look at height.
I can immediately see that the bar on the left is taller than the bar on the right. I also have no idea what these bars represent. People? Trees? Vehicles? They could each represent an individual, or they could summarize a group. Perhaps the bar on the left represents everything on Earth that is seventy-three inches tall, and the bar on the right represents everything on Earth that is sixty-eight inches tall. Annotations would clear that up, but the images themselves only tell me that something is slightly taller than something else.
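The annotation fix mentioned above can be sketched in a few lines of matplotlib. This is a hypothetical illustration, not a reproduction of the original chart: the seventy-three and sixty-eight inch values come from the example in the text, while the labels and filename are my own placeholders.

```python
# A minimal sketch of annotating a two-bar comparison so the bars
# say more than "something is slightly taller than something else."
# Values (73 and 68 inches) are from the example in the text;
# labels and filename are hypothetical.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

heights = {"Left": 73, "Right": 68}

fig, ax = plt.subplots()
bars = ax.bar(heights.keys(), heights.values())
ax.set_ylabel("Height (inches)")

# Without annotations like these, the viewer has no idea what the
# bars represent -- people, trees, vehicles, individuals, or groups.
for bar, value in zip(bars, heights.values()):
    ax.annotate(f"{value} in.",
                (bar.get_x() + bar.get_width() / 2, value),
                ha="center", va="bottom")

fig.savefig("height_comparison.png")
```

The point of the sketch is that every scrap of context here lives in the text annotations, not in the marks themselves; the bars alone encode nothing but relative magnitude.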
Turn the bars into abstract icons of people, and I know I’m looking at human height:
Icon by David Courey from the Noun Project
I still don’t know if I’m looking at individuals or representations of groups, but it’s easier to contextualize the information. Left Side can’t quite look over the head of Right Side, and Right Side has to look up to meet the eyes of Left Side. (Note also that I’m filling in details that aren’t in the picture, based on my mental model of what the icons represent. The icons don’t have eyes!)
Add a little more detail: the icons are now simple (but highly gendered) silhouettes.
Gendered height comparison from here.
This feels like Scott McCloud shouting in my ear: I automatically assume that the included details are the relevant details, and so I read this graph as a comparison of men and women. Here’s what I can see about the subjects of the graph:
- There are two figures.
- Each figure has a set of features stereotypically associated with one side of an attribute often treated as binary.
- That’s it!
I automatically assume that broad shoulders/visible ears/clenched fists represent all men, and that a nipped-in waist/hair-covered ears/open hands represent all women. The enormous variety of men’s shapes and sizes is hidden behind the image on the left; the enormous variety of women’s shapes and sizes is lurking behind the image on the right. It also takes longer for me to figure out precisely how tall the figures are in relation to each other. The information is still available, but there are many more details to take in.
With the addition of a few more details, my sense of who is represented narrows dramatically:
Cutlass icon by Anbileru Adaleru and parrot icon by Em Elvin, both from The Noun Project
In a stripped-down image, any included detail must be important, so I read this graph as the heights of men and women on pirate ships. Done. Next graph.
These silhouettes could still represent individuals, or they could represent groups. But since body shape is no longer the only thing I can see, any of the other details could be just as important: the mullet and sash combo on the left, the jaunty hat and flared sleeves on the right. These figures might represent groups, but I no longer automatically assume that is the case.
And finally, down to the highest level of detail, photographic representations of two pirates from Black Sails:
When I look at this, I don’t see isolated data about heights, or height aggregated by gender, or even height aggregated by gender among pirates. Instead, I see Jack Rackham and Anne Bonny. The dense details of their expressions, clothing, and hair identify them as specific individuals. Comparing their heights also takes a lot longer because there’s just so much more to take in. (That sash belt? Those boots? Those sideburns?)
Effective self-encoding requires careful thought about what exactly is being represented, and what information the designer wants to provide. Photorealistic detail communicates a lot of texture about specific subjects, which is great if the detail matters and the graphic is about individuals rather than groups. On the flip side, icons and simple diagrams are faster to process and imply information about a group, which is great if the graphic actually contains aggregated information.
A riff on Tamara Munzner’s expressiveness principle may be helpful when choosing images for self-encoding. Munzner’s actual rule is that a visual channel should express all the information in an attribute, and only the information in an attribute. That is, don’t imply information about quantity when you only have information about identity, and don’t obscure information about quantity when it is available. In the case of self-encoding, that means include as much detail as you need to communicate what your visualization represents, and no more. When you mean Jack Rackham, don’t imply all men; when you mean all people, don’t imply pirates.