and what it means for data visualization

Type can be legible, but still unreadable. Consider this image:

The letters are perfectly legible, but the text, upside down and mirrored, is unreadable.

Beautiful Helvetica, mirrored and rotated 180 degrees. Two highly common words from the English language. Letters are perfectly legible and even turn into other letters: p becomes b, m turns into a strange w. But it’s unreadable (thanks truetypestories).

Legibility is concerned with the clear delineation of the individual letterforms and their separability from one another.

  • Are the individual letters clearly designed?
    For example, is the opening of a c sufficient?
  • Are the letters clearly distinguishable from one another?
    For example, Helvetica uppercase I, lowercase l and numeral 1 are extremely similar.
  • Is there potential for letters to run together to be mistaken for a different letter?
    E.g. rn in Helvetica could be mistaken for an m, particularly if a drop shadow fills the gap between the r and n.
  • Do the proportions between letters make a potential letter ambiguous?
    A letter with high x-height may not have much variation to distinguish between an h and an n (again see Helvetica).

Legibility is very much in the domain of the font designer: it is concerned with the shapes of letters, spacing, and consistency across the design.

Readability, however, goes beyond type design. Readability is a comprehension issue concerned with ease of reading lines and paragraphs. Readability can be affected by many factors:

  • Line length: paragraphs that are very wide or very narrow are harder to read.
  • Spacing, kerning and leading: the spacing between letters and lines, and the tuning of these spaces. For example, too far  a p a r t  and words break apart.
  • Font weight: text that is too heavy or too light can be more difficult to read. Note that most fonts with variable weight have a “book” weight.
  • X-height: a font with a high x-height may increase legibility of words at a distance on signage, but may be more difficult to read for long paragraphs.
  • Uppercase: all uppercase is more difficult to read than type set in mixed case. This is NOT an endorsement of readability based on word shape; rather, all uppercase has no ascenders or descenders, meaning there is less shape differentiation between letterforms.

Readability is also related to cultural conventions. For example, in languages with longer/shorter words, optimal paragraph widths may be longer/shorter. Font choice is related to readability. Fonts that are more familiar are easy to read:

  • Blackletter is difficult to read these days because it is uncommon, but it was used regularly in Germanic countries until the early 20th century.
  • Baskerville was considered difficult to read when introduced (critics claimed it hurt the eyes), but would likely go unnoticed today.
  • Neue Swift is a modern font designed by Gerard Unger in 1985. Gerard says: “When I first released Swift, people criticized it as hard to read with many angry angles; now it is a standard used in many newspapers, dictionaries and other major works.” (presentation by Gerard Unger at the University of Reading, 2016).
  • Note that there is an ongoing discussion as to whether sans-serif or serif fonts are easier to read. In practice, for long printed texts the convention tends towards serif fonts, while on mobile screens the convention currently tends towards sans-serif fonts (perhaps this may change with more devices at higher resolutions). Or perhaps sans serif is better for short bursts (headlines, narrow mobile devices) versus serifs for wide lines (Williams).

So what does readability mean to visualization? The visualization programmer has control over choice of font, spacing, weight, shadows, and so on – so readability should be considered. Furthermore, techniques that change font weights or other attributes in running text  m a y  negatively impact  r e a d a b i l i t y,  particularly if multiple different attribute adjustments co-occur within a text (Carl Dair).

There are also cases where readability is not an issue: short snippets of text, such as headlines, or text specifically designed for skimming. For example, dictionaries often use a wide mix of typographic techniques to differentiate elements within each entry, making it easy to skip quickly to the part of a definition of interest.

Furthermore, a visualization may deliberately interrupt readability, given the appropriate application. The ideal exemplar here is Tall Man lettering, used to differentiate among similar-sounding medication names (e.g. predniSONE vs. prednisoLONE).

(For more info and examples see, e.g., Victoria Squire et al.'s Doing it Right with Type; Sofie Beier's Reading Letters: Designing for Legibility; Walter Tracy's Letters of Credit; and Isabel Gauthier et al.'s "Font Tuning Associated with Expertise in Letter Perception".)


Posted in Font Visualization, Legibility, Text Skimming, Text Visualization

The 19 Dimensional Word Cloud of Pokémon

To catch all 719 Pokémon, the serious gamer needs to know their different skills and abilities. At a high level, Pokémon are classified into 18 different types, such as fire, water or grass. For example, Pikachu is an electric type, Charmander is fire, Squirtle is water, and so on. This is important because type determines a Pokémon’s advantage in battle, e.g. water-types do well against fire-types.


Pokémon may have more than one skill type: for example, Charizard is both fire and flying, while Gengar is both ghost and poison. This makes Pokémon interesting to the visualization researcher: there are 18 different categories and those categories can be combined in different ways. So, an interesting visualization question is: how to represent all those possible combinations?

Visualizing all the combinations of types of Pokémon

Of course, the Pokémon community has made various tables and diagrams to represent this information. Given that there are cute images of each Pokémon, one fun way is to organize these images. Furthermore, the color of a Pokémon is usually related to its type: for example, fire-type Pokémon tend to be red, water-types tend to be blue. Here’s an awesome poster of all Pokémon organized by color (original by Marie Chelxie Gomez, via Polygon):


Pokemon sorted by color, and color is related to type.

But the above image doesn’t make explicit the types nor the combinations of types. Here’s a great cross-tabulation of all the pairwise combinations of Pokémon posted on reddit a few years ago:

Pokémon organized into a table by combination of type. Click for big.

In this case, the combinations are depicted. But you have to zoom in: there’s a lot of empty space since there are a lot of type-combinations that don’t exist. Also, if you don’t happen to know the names that correspond to each picture, you can’t identify a Pokémon by its image (for example, I don’t know all the Pokémon, but my son does). So how could you show all the text, have a readable layout, and indicate all the different types?

Word Clouds

Word clouds are pretty efficient at one thing: packing a lot of words into a tight space. Here’s a word cloud of the 151 first-generation Pokémon made with Wordle:

Unfortunately, word clouds have lots of problems, and the visualization community isn’t fond of them. In the most typical usage, they convey only the word itself, plus one more data attribute by setting the size of each word to its frequency in some document. Position, color and orientation are usually random and convey no information.

“Wordles [i.e. word clouds] are driven by a single minded fetish for filling space.”
– André Skupin, Mapping Text, 2010.
So, instead, consider the opposite question:

How many different dimensions of data could be represented in the words?
This is an interesting question. The answer you get today will depend on whom you ask:

Visualization researcher: 5–6. In visualization, one can refer to the standard visual attributes – perhaps 6 or so are commonly used: x position, y position, size, hue and brightness, plus the word itself. In fact, if you count up all the examples across collected text visualizations, you’ll find x, y, size, hue and intensity account for more than 95% of the encodings used on text – and in most cases, only 0–2 of these are actually used (see Table 2 here).

Cartographer: 6–8. Cartographers come from a different perspective and have been using visual attributes to add information to labels for many centuries: italics for water, heavy text for large cities, s p a c e d  o u t  text for mountain ranges, and so on. One of my favorites is an Ordnance Survey map from the 1920s:

Town labels indicate more than five dimensions of data (click for big).

Each city label indicates: 1) the name of the city via the text itself; 2–3) latitude and longitude via x,y position; 4) town vs. village via uppercase/lowercase; 5) county towns via italics; 6) population category via font size; and 7) country via font family (serif for most of the U.K., a slab-serif or serif variant for Scotland). That’s impressive.

Let’s go further. Certainly more than 10 could be done. How about 15? Or 20?

Why? Sometimes it’s good to explore possibilities. Even if the result isn’t pragmatic, it forces new ideas and new strategies to be considered. Some of these might even be useful in some other context in the future.
The challenge is that each visual attribute needs to be combinable with the others. For example, one dimension of data might be set to size and another set to shape. However, as size decreases, all the shapes end up as ambiguous dots. So, we need to find a lot of different visual attributes that can work together.

16 Types of Pokémon (first generation)

Let’s consider a set-type visualization looking at Pokémon. In the first version of Pokémon, there are 151 different Pokémon across 16 different skill types (aka first-generation Pokémon).

X,Y Layout

Here’s a quick visualization of the 151 Generation 1 Pokémon arranged using a graph layout. The large underlying words indicate the type:

First generation Pokémon, arranged by type (shown underneath in large type).

Each Pokémon is placed in proximity to its type(s). Pokémon out near the perimeter belong only to the one type they are close to. Those belonging to more than one type are placed in between the types they belong to. This uses only two dimensions (the spatial dimensions: x,y). Unfortunately, it is highly ambiguous for Pokémon near the center: you can’t tell which types they belong to. If you added lines it would help, but there could be too many lines criss-crossing, making it difficult to distinguish them.
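The placement rule can be sketched in a few lines: put each type anchor on a circle, then place each label at the centroid of its types' anchors. This is a minimal sketch; the type names and anchor positions here are illustrative, not the actual layout used in the chart.

```python
import math

# Hypothetical type anchors arranged evenly on a unit circle.
types = ["fire", "water", "grass", "electric"]
anchors = {
    t: (math.cos(2 * math.pi * i / len(types)),
        math.sin(2 * math.pi * i / len(types)))
    for i, t in enumerate(types)
}

def place(pokemon_types):
    # Single-type Pokémon land exactly on their anchor (the perimeter);
    # multi-type Pokémon land between their anchors (the interior).
    xs = [anchors[t][0] for t in pokemon_types]
    ys = [anchors[t][1] for t in pokemon_types]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

print(place(["fire"]))           # on the fire anchor
print(place(["fire", "water"]))  # midway between fire and water
```

The ambiguity described above shows up immediately: many different type pairs can produce centroids near the center, so position alone can't be decoded.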


Instead, we can use some other visual attributes to identify type. Many Pokémon guides use color: it can be fairly intuitive, e.g. green for grass type, red for fire type. Here’s the same plot using type colors from Bulbapedia:

First generation Pokémon indicating type by color. Some colors are ambiguous.

This works OK for the Pokémon around the perimeter, but not for the multi-type Pokémon in the interior. Pokémon of more than one type can end up with muddy, hard-to-distinguish, hard-to-decode colors, e.g.:
Purple + green = greyish brown
Orange + blue = brownish grey
And so on.
So, when you look at Dodrio, you see it’s greyish purple – is that Flying+Dragon? Or is it Ghost+Normal? Or something else?
The reason for this problem is that the original palette of 16 colors is being used to encode 16 separate categories. Attempting to blend these colors results in 120 possible pairwise mixtures (16 choose 2), on top of the 16 originals. Unfortunately, humans are not good at readily identifying that many different colors (e.g. see Colin Ware‘s or Tamara Munzner‘s books on visualization). Another way to think about color is as a three-dimensional space: a combination of red, green and blue, or of hue, brightness and saturation. Trying to squish 16 different dimensions into a 3-dimensional space is problematic at best.
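A toy sketch of why the blends collapse: averaging RGB channel-wise (one simple blend model; the RGB values here are made up, not Bulbapedia's palette) sends very different color pairs to nearly the same murky grey:

```python
def blend(c1, c2):
    # Channel-wise average of two RGB colors - a simple stand-in
    # for the muddy mixing described above.
    return tuple((a + b) // 2 for a, b in zip(c1, c2))

# Illustrative type colors (not the actual palette).
purple = (160, 64, 160)
green  = (64, 160, 64)
orange = (240, 144, 48)
blue   = (48, 112, 240)

print(blend(purple, green))  # (112, 112, 112): a murky grey
print(blend(orange, blue))   # (144, 128, 144): another near-grey
```

Two unrelated type pairs land within a few RGB steps of each other, which is exactly the Dodrio decoding problem.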

16 Different Visual Attributes

What’s needed are 16 different dimensions of visual attributes, all of which can be combined together in any order and unambiguously deciphered. Since we need so many different visual attributes, we need to consider many possibilities, including common visual attributes (e.g. rotation, scale, texture, motion, shape, shadow) and font-specific attributes (e.g. bold, italic, case, underline, shifting baseline, punctuation, serif style, outline). Some of these have to be discarded, for example: 1) shadows on text reduce legibility – since we’re using text we’d rather not make it illegible; 2) shape isn’t easy to combine with text, so skip that; 3) motion attributes such as blink or wobble are so visually dominant they can be annoying, so we’d rather not use them.
Here’s a grid showing 16 different label variations across the top row and first column. The middle of the grid shows all the pairwise combinations. Each cell is uniquely different from its neighbors. With some cognitive effort, the viewer can determine which attribute differs in each case:

Many different visual attributes for labels, and all the pair combinations (click for big).

The attributes used are plain serif, upper case, shifting baseline, surround quotes, tracking (i.e. spacing), exclamation mark, underline, boxy version of font, bold, narrow version of font, italic, deep brackets on serifs of font, wide serif version of font, low x-height version of font, outline version of font, tall stretched version, rotated, horizontal stripe texture, vertical stripe texture.
The same approach can be applied to the Pokémon visualization:

151 first generation Pokemon with type indicated by unique visual attributes (click for big).

Each type now has a specific visual attribute associated with it. Small caps for fighting, slightly rotated text for flying, italics for poison, and so on. Now it is possible to create some of the interesting combinations, for example:
In each case you can see how the different attributes combine: Parasect gets the combination of the narrow font for Bug and the wide brackets for Grass (brackets, i.e. the fat parts on a letter, like the bottom of the r). Kabutops gets the blocky font for Rock and the low-x-height font for Water. Mr. Mime is the only Pokémon that gets the combination of vertical stripes and horizontal stripes, ending up with plaid.
Note that Ice was originally a wide serif which seemed hard to see, so a bumpy edge was added for Ice as well to further differentiate it.
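One way to see why this composes cleanly, unlike color blending: each type owns a separate typographic property, so a multi-type label is just a merge of style dictionaries with no conflicts. The property names and type assignments below are illustrative, loosely following the mapping above rather than reproducing it exactly.

```python
# Hypothetical mapping: one orthogonal typographic property per type.
TYPE_STYLE = {
    "fighting": {"variant": "small-caps"},
    "flying":   {"rotation_deg": 4},      # slight rotation
    "poison":   {"style": "italic"},
    "steel":    {"underline": True},
    "rock":     {"family": "boxy-serif"},
    "water":    {"x_height": "low"},
}

def label_style(pokemon_types):
    # Because each type controls a different property, merging the
    # dicts is conflict-free and the result stays decodable.
    style = {}
    for t in pokemon_types:
        style.update(TYPE_STYLE[t])
    return style

print(label_style(["rock", "water"]))  # Kabutops: boxy serif + low x-height
```

Contrast this with the color approach, where every type writes into the same three RGB channels and the combinations collide.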

18 Types of Pokémon

The visualization above has only 16 different types. Pokémon aficionados know that two more types were introduced with the second generation of Pokémon, for a total of 18 different types (and 153 possible two-way combinations). You may have even noticed that Magneton and Magnemite have underlines underneath their labels in the plot above, even though there is no underline in the legend (underline is for the Steel type, which didn’t exist in first-generation Pokémon but was retroactively added). Here’s the same visualization, now showing all 719+ Pokémon across 18 types. Click for a big version.


All Pokémon by Type. Click for big.

So, with 18 different visual attributes, plus the text itself, this word cloud represents 19 different data dimensions.


So this is a new kind of strange visualization of Pokémon. There may be many questions:
Pokémon Questions
Variants: Some Pokémon can have different combinations. For example, the Pokémon Rotom can be Electric+Fire, Electric+Flying, Electric+Ghost and so on. Since I’m not a Pokémon expert, I wasn’t expecting this (the last time I played was on a Game Boy Color). I considered representing a single Rotom with attributes for every possible combination – which isn’t correct; so instead, Rotom occurs multiple times in the visualization, each with some appended text to indicate the variant (e.g. Rotom-EFr for the Electric+Fire variant, Rotom-EFl for the Electric+Flying variant).

Data Errors: As mentioned, I’m not a Pokémon expert. I just took data from Bulbapedia and used it as-is. I don’t know why some Pokémon are a single type, such as Fire, while some have two types, one of which is Normal, such as Bibarel, which is listed as Water+Normal. To me, it seems that Water+Normal should be the same as Water? The visualization just draws the data, and there are no guarantees that I cut and pasted the data correctly.
Visualization Questions
24 Dimensions: In addition to the 19 data dimensions listed above, each label also has position and color. Position uses x,y spatial location for 2 more dimensions, and color uses variations in hue, brightness and saturation (or in red, green and blue) for another 3 dimensions. That’s on the order of 24 visual dimensions. But from a data perspective only 19 are used – hence the title: 19 dimensions.
Many-way combinations: In the visualization, the fonts can be assembled in any combination. In the case of Pokémon, it turns out that any single Pokémon can belong to at most two different types. From a combinatorics perspective, with only 1-way and 2-way combinations there are only 171 possible type combinations (18 single types plus 153 pairs). However, from a visualization perspective, this palette of 18 visual attributes can be combined in any combination: 2-way, 3-way, 5-way. If Pokémon version 11 has new characters with 5-way combinations, this particular visualization will accommodate them: all 262,143 possible non-empty font combinations can be constructed using this approach.
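A quick sanity check on these counts, worked out explicitly with standard binomial arithmetic:

```python
from math import comb

types = 18
one_way = comb(types, 1)   # 18 single-type styles
two_way = comb(types, 2)   # 153 dual-type combinations
print(one_way + two_way)   # 171 combinations needed for Pokémon as they exist

# The attribute palette itself is not limited to pairs: any non-empty
# subset of the 18 attributes is a constructible font combination.
all_subsets = 2 ** types - 1
print(all_subsets)         # 262143
```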
Does it Work? In order to understand which types any particular Pokémon belongs to, it takes some cognitive effort to decode. A thorough evaluation would require user studies, and those have not been done. From a design perspective, I was unsatisfied with some of the font variants: for example, the wide-serif variant didn’t stand out. So, to enhance the differentiation, I added a bumpy edge to the wide serif (i.e. the strange font for the Ice-type Pokémon). Dark types use vertically oriented text, which really jumps out and isn’t particularly easy to read. Electric and Normal use punctuation (an exclamation mark for Electric, surrounding dashes for Normal), which seems a bit arbitrary, although they might be detectable without actually reading the text. And so on.
Typography Questions 
Different font per Pokémon type: I was asked: “Why didn’t you use a different font for each Pokémon type?” Font choice is similar to color. You could use Old English for Fighting and Comic Sans for Psychic, but there’s no good way to combine those fonts (what do you get when you combine Old English + Comic Sans?). When 18 different fonts get combined, you won’t necessarily end up with something that’s easily distinguishable from all the other font combinations (e.g. does Slab Serif + Script look different from Bodoni + Varsity?). And even if they are distinguishable, it will be difficult to visually assess which fonts a particular combination was made from.
Instead, the approach used here has visually distinct typographic attributes: tall brackets and low-x-heights are separate, can be combined, and still understood as the combination of those two separate things.
Many font variants: You won’t find fonts with variable widths, variable x-heights, variable bracket sizes, and variable serif widths in a commercial off-the-shelf font family. Ideally, the concept of multiple-master fonts should have made this easy, but that doesn’t exist in current browser fonts. Instead, I used a parametric font generator: you start with a base font and lots of parameters, such as x-height, weight, italic slope angle, serif width, bracket height, and so on. To get the variants I needed, I started with a basic serif font, then created 7 different base types (heavy weight, italic, boxy, narrow, low-x-height, wide-serif, and tall-brackets), and then all pairwise combinations (e.g. heavy+italic, boxy+narrow, etc.) to create a total of 29 fonts.

Note that a careful selection of attributes must be considered. Sans serif is not one of the base types because attributes such as wide serifs or tall brackets can’t be combined with sans serif – only one or the other can be represented at a time. However, if all the base types contain serifs, then all the combinations are supported.
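The 7-bases-plus-pairs scheme above can be enumerated directly, which also confirms the count of 29 (the base names come from the paragraph above; "plain" stands in for the unmodified serif):

```python
from itertools import combinations

# The seven base variants described above.
bases = ["heavy", "italic", "boxy", "narrow",
         "low-x-height", "wide-serif", "tall-brackets"]

# One font per base, one per pair of bases, plus the unmodified original.
variants = ["plain"] + bases + ["+".join(p) for p in combinations(bases, 2)]
print(len(variants))  # 1 + 7 + 21 = 29 fonts
```

Note the closing point of the article: this enumerate-everything approach stops scaling quickly; 3-way combinations alone would add another 35 fonts, which is why parametric generation at render time would be preferable.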

So What?

From a visualization perspective, this is a useful thought experiment to see what happens when you attempt to use 18-24 different visual attributes all at once – it suggests that we can certainly go well beyond 5-10 attributes. There is lots more research that can be done.

From a typography perspective, it’s a useful thought experiment to think about why multiple-master fonts or parametric fonts may have uses in data visualization in the future, and what sort of technical enhancements might be needed to support this: generating all possible permutations and combinations of a font is not a feasible approach to meet the needs of very high dimensionality.

From a Pokémon perspective: bring it on. I want to see next-generation Pokémon that have more than 2 types. How about an evolution of Charizard that includes the Ghost type, or a Dark+Steel version of Pikachu? The visualization is ready.

P.S. Happy Birthday A. Sorry this is a bit late:-)


Posted in Data Visualization, Pokemon, Text Visualization

Papers, Rejections and Critiques

There have been a few recent blog posts after VisWeek about paper rejections, such as Niklas Elmqvist’s Dealing with Rejection and Robert Kosara’s related Dealing with Paper Rejections. Rejections are certainly a painful part of the academic review process; I’ve had my share of rejections.

A paper review is a criticism of a work, but it’s not a dialogue. One of the painful aspects of a paper rejection is that you don’t get to address your reviewer: perhaps they misunderstood some aspect of the work, or missed a key point. Or maybe they have valid criticisms about some aspects of your work, but you can’t get more details from them, which could be really useful in improving your work. Or they may have uncovered other relevant confounding factors that you hadn’t addressed, or identified other relevant prior work. In all these scenarios you’re stuck without being able to engage your reviewer in dialogue – and the review process is not the place for discussion, as pointed out by Elmqvist and Kosara.

Critiques are very similar to the criticism in paper reviews; however, critiques are about back-and-forth dialogue, ideally in a face-to-face setting. Critique originates in the 18th-century Enlightenment, when scholars and the bourgeoisie were struggling against absolutists in church and state. Critique is a distinct public discourse based on rational judgement (Eagleton). It is a public exchange of opinion that is open to debate, attempts to convince, and invites contradiction (Hohendahl). This dialogue is really useful to explore, investigate, deliberate, explain, expand, probe, refine and revise ideas. The feedback is quick and many different aspects can be considered.


Early critical debate.

While conference papers are a good opportunity to get feedback after a paper has been written, a critique can be used to collect a lot of detailed feedback from a variety of peers and experts before a paper is written. This can help with authoring a better paper, and it can help guide better research by stimulating ideas, framing research, exposing assumptions and so on.

Furthermore, critiques from experts (peers, expert users, etc) represent a form of evaluation. Critiques are used often in design education (and medical education) where there is a lot of complexity and many tradeoff decisions. Unlike a traditional time and error test, a critique is wide-ranging across the broad design space and can uncover various unforeseen issues.  Just like rejection, critiques can be difficult and painful for the person receiving the critique as issues are exposed. However, the dialogue allows the person to engage with the critic: to go deeper, to debate, to contradict, to understand, to accept, to learn. Note that formal critiques are the primary form of evaluation in design fields such as architecture.


Snapshots of critiques in architecture (from film Archiculture via Arbuckle Industries).

Interestingly, some types of conferences are more open to critical discussion than others. Marquee conferences (VisWeek, CHI, etc.) are so big and diverse that it can be hard to generate much interest in a specific topic: everyone is hurrying from session to session and you don’t necessarily get great group dialogue. Instead, many smaller workshops and conferences are much better for engaging in dialogue directly related to a research topic. Acceptance rates tend to be higher at these more narrowly focused venues, and all attendees are focused on similar research and therefore more willing to engage in critical discussion. That is, smaller conferences and workshops may be better than large conferences.

There are other ways of engaging in critiques as well, such as reaching out to experts via email, blogs, Skype, doctoral colloquia and other means. “Do-it-yourself critiques” and other aspects of critique are discussed more in my recent paper at the BELIV workshop at VisWeek back in October; or, for the abbreviated version, here are the slides.


Posted in Critique, Data Visualization, Design Space

Microtext Line Charts

Tangled Lines

Line charts are a staple of data visualization. They’ve existed at least since William Playfair and possibly earlier. Like many charts, they can be very powerful and also have their limitations. One limitation is the number of lines that can be displayed. One line works well: you can see trend, volatility, highs, lows, reversals. Two lines provides opportunity for comparison. 5 lines might be getting crowded. 10 lines and you’re starting to run out of colors. But what if the task is to compare across a peer group of 30 or 40 items? Lines get jumbled, there aren’t enough discrete colors, legends can’t clearly distinguish between them. Consider this example looking at unemployment across 37 countries from the OECD: which country had the lowest unemployment in 2010?


Tooltips are an obvious way to solve this, but tooltips have problems – they are much slower than just shifting visual attention. And tooltips don’t work on hardcopy, nor in PowerPoint, nor during a presentation unless you’re the person holding the mouse.

The Limits of Small Multiples

There are other visual ways to solve this, for example, sparklines, small multiples (i.e. separating each line into its own chart), horizon charts and so on. Each of these techniques creates a lot of separate charts. For example, with small multiples, you can see trend, but it’s very difficult to compare magnitude when all the charts are at different scales. Here is a subset of 16 of the 37 countries – is Denmark higher or lower than Estonia at the end?


Of course, the small multiples could be made at a common scale, so that magnitudes can be compared. Here’s the same subset, all sharing the same vertical scale:


Now, it is easier to tell that Estonia is higher than Denmark at the end. But all of Austria’s performance is squished into a few vertical pixels, making it difficult to get a sense of any of the detail of Austria’s trend. And it’s still very difficult to answer a question such as which country has the lowest unemployment in 2010. In order to do this comparison, you have to make a note of a particular point on each chart and rely on short-term memory to keep track of all the different values. The benefit of direct visual inference, possible when the lines were superimposed in a single chart, has been lost.

Superimposition vs. Juxtaposition

I think of this as the superimposition/juxtaposition tradeoff. With superimposition, everything is overlaid in one space at high resolution, enabling local comparisons between entities – but points can be occluded, lines get tangled, and it is difficult to identify individual elements. Juxtaposition pulls everything apart into separate little charts – but you sacrifice a huge amount of resolution, as each data subset is forced into 1/(number of items) of the space, and you have to rely much more on short-term visual memory. A lot of popular visualization approaches today go for juxtaposition, e.g. dashboards.

Bringing Text More Directly into the Chart

Following the theme of this blog, consider how text can be integrated more directly with the chart. There are many possibilities:

  1. End Labels

You could remove the legend and place labels directly associated with each line, using some collision detection to make sure labels are spaced apart. This is useful if you want to compare two lines at the end – e.g. does Spain or Greece have higher unemployment in 2014? But it is less useful the further you move from the end into the chart, especially if there are a lot of crossovers making it difficult to visually trace the lines – e.g. how were Spain and Greece doing back in 2006?
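The collision detection for end labels can be as simple as a one-directional push-apart over the labels' desired vertical positions. A minimal sketch (real layouts usually iterate in both directions to keep labels centered on their lines):

```python
def space_labels(ys, min_gap):
    # Given desired label y positions (one per line), nudge labels upward
    # in value order so neighbours end up at least min_gap apart.
    order = sorted(range(len(ys)), key=lambda i: ys[i])
    placed = list(ys)
    prev = None
    for i in order:
        if prev is not None and placed[i] < prev + min_gap:
            placed[i] = prev + min_gap
        prev = placed[i]
    return placed  # same index order as the input

# Three lines ending at nearly the same value get pushed apart;
# the well-separated fourth label is untouched.
print(space_labels([10.0, 10.2, 10.3, 25.0], 1.0))
```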


2. River Labels

Why do the labels need to be outside the chart? Instead, consider moving the label directly into the chart. Labels can be directly associated with the path of a line, just like river labels on a map. A little bit of collision detection can be used again to reduce label overlap. Yes, Greece had slightly higher unemployment than Spain back in 2006.


3. Microtext Lines

Consider the line again. Why do we need both lines and labels? The entire line can be replaced with a continuous string of small-sized labels. You lose a little bit of accuracy, but each line is identifiable and can be re-found after passing through a congested area. E.g. What happened to Estonia’s unemployment rate before, during and after the crisis of 07-08?


In effect the line has become a multifunctioning graphical element: it works as both a line and as a label. There are quite a few things going on here, so let’s address them:

  1. Color: The different colors have been maintained: they help differentiate one line from another. If all the text were black, it would be very difficult to read wherever there is overlap. Note how the yellow text (Australia) is harder to read than the other text. Legibility depends on contrast; light colors on a white background are difficult to read: Australia should have been a darker shade.
  2. Different font styles: Similarly, different fonts have been used to help differentiate lines. Each font has different widths, weights, spacing, which give it a rhythm and helps distinguish it from other lines.
  3. Size: The text on the lines is smaller than the labels at the end. It’s still readable to my eyes, but it brings into question how small the text can go and still be readable. I printed out this chart so that the font was 5 points (1.7mm) and gave it to a half-dozen 40–60 year olds: 5 out of 6 could read it.
    Microtext is super-tiny text, sometimes printed on money as a security feature. It’s used to form lines or areas that are revealed to be text on close inspection. I asked two teens to read microtext and they could read text down to 1.5 points (about 1/2mm). Here’s a full shot and closeup of an old Canadian $2 bill – the even brown texture in the center behind CANADA is actually text, as seen in the closeup on the right.
    Historically, you couldn’t use microtext in data visualization because monitors had very poor resolution. Monitors remained at 72–96 dpi for about 20 years – if a font became too small it was too pixelated to be readable. Guidelines in the late 1990s recommended 12 point font, with 8 or 9 point as a minimum.
    But displays with much higher pixel densities finally broke out into the market in the late 2000s (thanks Steve Jobs), which means much smaller fonts are now technically feasible. Going back to printed maps, guidelines recommended 5 or 6 point font and allowed minimums down to 3 or 4 point (Robinson et al., Elements of Cartography, 1995; or Hodges, The Guild Handbook of Scientific Illustration, 2003).
    Side note: talking about font sizes in points on computers is tricky now. A point was originally defined in the physical world as 1/72 of an inch (about 1/3mm). In CSS, however, point sizes are relative to the display device, and a font specified as “13 points” in a stylesheet can render physically at approximately 3.5 points on an iPhone.
    The likely answer: on screen, end-users may need some control over font size; for print, map conventions are likely reasonable (5-6 point recommended minimum).
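The arithmetic behind that discrepancy can be checked directly. CSS fixes 1pt = 1/72 of a CSS inch and 1 CSS inch = 96 CSS pixels, so the apparent physical size depends entirely on how many device pixels those CSS pixels map onto. The 326 ppi figure below is an assumption (a Retina-era iPhone with the page rendered without viewport scaling, i.e. 1 CSS px per device px):

```python
# CSS definitions: 1pt = 1/72 CSS inch, 1 CSS inch = 96 CSS px.
css_pt = 13
css_px = css_pt * 96 / 72            # ≈ 17.33 CSS px

# Assumption: 1 CSS px maps to 1 device px on a 326 ppi screen
# (e.g. a page rendered without viewport scaling on a Retina iPhone).
ppi = 326
physical_inches = css_px / ppi
physical_pt = physical_inches * 72   # ≈ 3.8 physical points
physical_mm = physical_inches * 25.4 # ≈ 1.35 mm
```

With device-pixel-ratio scaling applied (2 device px per CSS px), the same 13pt text would render at roughly twice that physical size, which is why the “13 points” label alone tells you so little.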

There are also some interesting questions about distinguishing lines in congested areas. With maps, there typically aren’t a lot of overlapping things at one location, so maps don’t have to deal with 20 different items all overlapping at once (e.g. typographic maps). As discussed earlier, one problem with superimposed lines is congested areas and losing track of lines. The variation in colors and font characteristics helps. Halos (white outlines around text) are commonly used on maps in some software. I tried halos (left) and no halos (right):


The left image, with halos, is clearer for the words on top, e.g. Brazil, but words at the lowest level are completely obscured. In the right image, however, words at lower levels remain partially visible: their colors and forms can be seen through the gaps between letters at the higher level. Try to follow Finland in each image.
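In matplotlib, halos like those on the left can be reproduced with path effects: a white stroke drawn underneath each glyph. A minimal sketch (the halo width of 3 points is a guess):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import matplotlib.patheffects as pe

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1], color="lightgrey")  # something for the halo to knock out

# Haloed label: a white stroke is painted under the glyph outlines.
halo = ax.text(0.3, 0.3, "Finland", color="steelblue",
               path_effects=[pe.withStroke(linewidth=3, foreground="white")])

# Plain label: whatever is underneath remains visible between the letters.
plain = ax.text(0.6, 0.6, "Finland", color="steelblue")
```

The trade-off described above falls out directly: the stroke improves contrast for the topmost label but paints over anything in the inter-letter gaps.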


Once a decision has been made to use text along a line, there are more opportunities. The text doesn’t need to be a short label repeated over and over: phrases, sentences or multiple languages can be used. Here’s the same chart with the microtext in Japanese, Greek, French, Russian, German, Arabic and English (click for bigger version):


Once we start bringing text into the core of the chart, it opens up a lot of new possibilities. What do tweets, news headlines, poetry, table of contents and web pages look like when they become integrated within charts?


Posted in Alphanumeric Chart, Data Visualization, Line Chart, Microtext

Visualizing Emotions

Emotion analysis of text documents is an emerging area of interest, and closely related is the visualization of emotions. Emotion analysis is the next step after sentiment analysis. In many respects, sentiment analysis is easier – there’s a single dimension ranging from positive to negative. Emotion is more difficult: there are many emotions, and the first challenge is to define the range of emotions for the particular analysis. Pixar’s Inside Out settled on five emotions:


Five personified emotions from Inside Out: Anger, Disgust, Joy, Fear and Sadness (c) Pixar.

But there are more. Surprise was considered but dropped from Inside Out early on. Plutchik argues for eight, adding trust and anticipation on top of Ekman’s six (anger, disgust, joy, fear, sadness and surprise). There are also alternative taxonomies of emotion (e.g. pleasure/pain, excited/calm, etc). And emotions are not mutually exclusive – which is part of the plot line of Inside Out as sadness becomes mixed with other emotions. Here’s Plutchik’s Wheel of Emotions – with 8 different emotions, various degrees of emotion and also items between emotions:

Plutchik's Wheel Of Emotions.


There is also the challenge of generating an emotion lexicon – that is, a long list of words and their associated emotions. Saif Mohammad used a crowdsourcing approach to tag more than 10,000 words with Plutchik’s eight emotions.

Given a text corpus and an emotion lexicon, scores can then be created for different texts or different characters in the text. For example, Scharl et al. created radar plots to show emotions associated with characters from Game of Thrones, and Saif Mohammad profiles various texts, including love letters, hate mail, and Hamlet, using bar and line charts.

However, it’s useful to consider the words themselves and how they relate to emotions. One approach is to consider the emotional intensity associated with a word – for example, terror is a more intense version of anxiety within the emotion of fear, as shown in the Atlas of Emotions.

But what if you want to understand how a given word is associated with multiple emotions? Words such as death, money or freedom have complex associations. Given a set of words and their associations to emotions, this becomes a set visualization problem. Venn diagrams (and Euler diagrams) are a common type of set visualization.

For emotion words, each word belongs to some combination of eight emotions. However, representing all the possible combinations of eight sets is difficult with a traditional Venn diagram (below is a beautiful Venn diagram with seven sets – drag to flip it over). These high-order Venn diagrams are hard to understand visually: to tell the set membership at any given point, you have to trace around the complex looping shapes. Even in a dataset about color, it isn’t obvious which of the perimeter colors a strange bluish-greenish-greyish shade in the middle is made up of:


7 way Venn: 128 different set combinations shown by colors by Santiago Ortiz.

Instead of a Venn layout, we can use a graph-based layout. In this case, each set category is a big anchor around the perimeter, and each emotion word is a node linked to the sets it belongs to. Using a force-directed layout, each item ends up close to the sets it belongs to, as discussed in the paper Anchored Maps. This approach works well with a small number of items. With a larger number of items, a few problems emerge: a) the graph lines overlap and become difficult to visually untangle; and b) items with completely different memberships can end up in the same location (e.g. with anchors set at the corners of a square, an item can land in the middle if it belongs to all four corners, or to any two diagonally opposing corners).
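The anchored idea can be sketched in numpy. This is my own toy simplification, not the paper’s algorithm, and the lexicon entries are illustrative: anchors sit fixed on a circle, each word is pulled toward the mean of its anchors, and a little pairwise repulsion keeps co-members from stacking exactly on top of one another:

```python
import numpy as np

emotions = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]
theta = 2 * np.pi * np.arange(8) / 8
anchor_xy = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # fixed anchors on a circle
idx = {e: i for i, e in enumerate(emotions)}

# Toy lexicon: word -> set of emotions (illustrative entries only).
lexicon = {"love": {"joy"}, "anxious": {"anticipation", "fear"},
           "lying": {"anger", "disgust"}, "angry": {"anger", "disgust"}}
words = list(lexicon)

rng = np.random.default_rng(0)
pos = rng.normal(scale=0.01, size=(len(words), 2))  # tiny jitter breaks ties
for _ in range(100):
    # Attraction: pull each word halfway toward the mean of its member anchors.
    for i, w in enumerate(words):
        target = anchor_xy[[idx[e] for e in lexicon[w]]].mean(axis=0)
        pos[i] += 0.5 * (target - pos[i])
    # Repulsion: push apart any two words closer than a minimum distance.
    for i in range(len(words)):
        for j in range(i + 1, len(words)):
            d = pos[i] - pos[j]
            dist = np.hypot(*d) + 1e-9
            if dist < 0.15:
                push = 0.5 * (0.15 - dist) * d / dist
                pos[i] += push
                pos[j] -= push
```

After it settles, “love” sits on the joy anchor, while “lying” and “angry” (identical memberships) land near the anger/disgust midpoint, nudged apart by the repulsion term — exactly the coincident-membership ambiguity the paragraph above describes.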

So, we adapt the anchored map approach for visualizing emotion words. We start by setting the eight Plutchik emotions as anchors around the perimeter. Then we use a force-directed graph to place each word according to its emotions (plus collision detection, so that words don’t overlap each other). Next we use color to indicate set membership using the same color scheme as Plutchik – words that are a combination of emotions get the average color of the corresponding emotions. Finally, we add font attributes to indicate set membership: a bouncy baseline for joy, w i d e l y  s p a c e d  letters for trust, underline for fear, an exclamation mark (!) for surprise, light-weight letters for sadness, italic for disgust, blackletter for anger and SMALL CAPS for anticipation:

Top 250 words associated with one or more emotions.


In the above visualization, clusters of words are immediately visible. For example, around the anchor word joy are emotion words such as love, daughter, special, beautiful and so on. We can see that there are many words around the anchor word  t r u s t  but few around anger or disgust.

You can also see the graph lines underneath connecting words back to their target emotions. For example, halfway between anger and disgust are lying and angry – words associated with both emotions. Three different cues tell you so: 1) the graph lines underneath; 2) the color is magenta – halfway between red (anger) and purple (disgust); and 3) the font is both blackletter (anger) and italic (disgust).

These words that belong to multiple sets are where things get interesting. Near the middle of this plot, the words are all variants of muddy reddish-brownish-greenish colors: color isn’t particularly effective for communicating eight different dimensions. Font attributes, however, are useful at two levels of understanding the relationships between words and the emotions they are associated with:

1) The variation in font attributes makes it very obvious when two words have the same set membership: they have the same format. If the formats are different, the words have different memberships. For example, ANXIOUS and ESCAPE have the same membership, while S W E E T ! and D E A L ! share a membership that differs from ANXIOUS and ESCAPE.

2) Furthermore, with some cognitive effort, you can decode the membership of any word. ANXIOUS and ESCAPE belong to anticipation (small caps) and fear (underline).
S W E E T ! and D E A L ! belong to surprise (exclamation mark), anticipation (small caps), joy (bouncing baseline) and trust (wide spacing).
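That decoding is, at heart, a per-emotion lookup table plus a merge. A sketch with CSS-flavored attribute names of my own choosing (not from the paper):

```python
# Per-emotion font attributes, following the scheme described above.
# Attribute names are CSS-flavored and are my own assumption.
STYLE = {
    "joy":          {"baseline": "bouncy"},
    "trust":        {"letter-spacing": "0.3em"},
    "fear":         {"text-decoration": "underline"},
    "surprise":     {"suffix": "!"},
    "sadness":      {"font-weight": "300"},
    "disgust":      {"font-style": "italic"},
    "anger":        {"font-family": "blackletter"},
    "anticipation": {"font-variant": "small-caps"},
}

def compose(word, emotions):
    """Merge the attribute sets for every emotion the word belongs to."""
    style = {}
    for e in sorted(emotions):  # sorted: deterministic merge order
        style.update(STYLE[e])
    text = word + style.pop("suffix", "")  # surprise appends "!" to the word itself
    return text, style

text, style = compose("sweet", {"surprise", "anticipation", "joy", "trust"})
```

Because each emotion owns a distinct attribute, the merge never collides, which is what makes the memberships decodable one cue at a time.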

While the top 250 words make for a readable graph at blog size, Saif Mohammad’s original analysis has 10,000+ words, of which some 4,463 have at least one emotion associated with them. Below is an image of the same graph with all 4,463 emotion words. Click for the full-size image to zoom in. Clusters are still visible, font differentiation is still identifiable, and individual words can be visually decoded if needed.

4,463 words associated with 8 different emotions. Click for larger version.


There’s more discussion about set visualization, labels and font attributes in this recent paper: Typographic Sets: Labeled Set Elements with Font Attributes. The emotion word visualization in this paper uses color to represent membership in the additional sets of positive sentiment and negative sentiment – pushing the number of sets uniquely indicated up to 10 sets. This means that each word in the visualization indicates 11 different data attributes: the literal word itself, two sentiments, and eight emotions.

Posted in Data Visualization, Font Visualization, Graph Visualization, Set Visualization, Text Visualization, Venn Diagram

The Design Space of Typographic Data Visualization

There are many possible new visualizations using typography, some of which I’ve previously discussed in posts on this blog. One way to consider this design space is to decompose it into the different elements that can be used to assemble visualizations. These elements include:

  1. Typographic attributes. This is all the variation within type that can be used to create differentiation and encode information. This includes the literal alphanumeric glyphs as well as font weight, italic, case, typeface (e.g. Helvetica, Times), underline, width, baseline shifts (e.g. superscript), delimiters, x-height, and so on. Of course, other visual attributes such as color, size and outline can also be used.
  2. Data Encoding. Type can encode different kinds of data. Labels on maps use type to indicate different types of data, as shown in the example below. The names of areas in this map use type to indicate: a) literal data, such as the name of the town or region; b) categorical data, such as whether the area is a country, province or city; and c) quantities, such as the population.


    Stieler’s Atlas (1920). Labels indicate place (literal text), categorize the type of place (typeface), indicate the level of political administration (ordered underlines: dashed, single, double), and population size (ordered case, italics and size). From

  3. Scope. Type attributes may extend across a sequence of letters. The scope of the type attributes may apply to whole words (as on the map); to a subset of letters within a word (for example, to indicate silent letters in words such as though and answer); extend across multiple words (e.g. “There goes the HMS Titanic.”); or even across lines, paragraphs or portions of a document.

So What?

This creates a multi-dimensional space for design exploration: attribute x data-type x scope, which we can then use to consider some interesting new kinds of visualizations. For example, we could apply literal text to a line in a line chart (alphanumeric text x literal data x sentence). Why bother using a tooltip or creating a visually separate legend, when the content can be directly embedded in the line?


Line chart showing retweets over time for some top tweets about Trump from late Aug 2015.

Or, we can vary a type attribute, such as weight, to indicate word frequency. For example, the chart below shows how frequently adjectives are associated with characters from Grimms’ Fairy Tales (i.e. font weight x quantitative data x word).


Font weight indicates the frequency of adjectives associated with characters from Grimms’ Fairy Tales. Kings are old, princesses are beautiful and girls are little.
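The weight-for-frequency encoding is essentially a one-line mapping from counts onto numeric font weights. A matplotlib sketch with made-up counts (not the actual Grimm data):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

# Illustrative adjective counts (made-up, not the actual Grimm analysis).
adjectives = {"old": 30, "beautiful": 22, "little": 18, "golden": 9, "poor": 4}

lo, hi = min(adjectives.values()), max(adjectives.values())
fig, ax = plt.subplots()
ax.set_axis_off()
for i, (word, n) in enumerate(adjectives.items()):
    # Linear map from count onto CSS-style numeric weights 200 (light) .. 900 (black).
    weight = int(200 + 700 * (n - lo) / (hi - lo))
    ax.text(0.1, 0.9 - 0.15 * i, word, fontweight=weight, fontsize=14)
```

Note that matplotlib’s default DejaVu family ships only a few weights, so intermediate numeric values snap to the nearest available face; a variable font gives a smoother ramp.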

How I got to this framework and lots of other examples – both historic and new types of visualizations – are discussed in more detail in this journal article Using Typography to Expand the Design Space of Data Visualization (html version, PDF version), which was just published in the open-access journal She Ji: The Journal of Design, Economics and Innovation (here).  

Posted in Data Visualization, Design Space, Font Visualization, Text Visualization

500+ years of increasing separation of text from visualization

In the beginning, typography and infovis were tightly integrated. In this illuminated manuscript from circa 1480, the biblical text and the genealogical tree are interwoven. Text in the visualization is not just simple labels. While bold, italics and sans serifs didn’t exist yet to create differentiation, here the text varies in color: some node labels are plain black, some are red, some start with a red initial (and node size, outline color and outline shape all vary too). Similarly, the explanatory text is woven around the graph: it’s not separate from the visualization. The same kinds of relationships between text, typographic attributes and visualization can be seen in other medieval visualizations and tables (e.g. see more examples at Bodleian).


Genealogical tree from late 1400’s. Note graph nodes use of image (people, shield) or text, where text may be black, red or start with a red initial. The nodes can vary in size, color, or shape (circle, crescent, shield). Textual commentary is intertwined throughout. Via Bodleian library.

Step forward a century to the proliferation of the printing press and movable type. With movable type it is easier to set an entire page of type, but more difficult to set type and images together. It’s hard to get the image (an engraving such as a woodcut) to work together with movable type. It’s difficult to configure a page to get the components to lock together, difficult to get the ink to spread evenly, difficult to set everything to the same height. It’s hard to use color – that’s two separate pressings or laborious masking of different areas for ink to be spread. So images start to move into separate blocks or separate pages. Words within images now need to be carved, in reverse. It becomes simpler to create an image without text (or very little) and make reference to the image from the text or with captions.


1573: Image separate from text. from William Bullein’s A Dialogue… Against the Fever Pestilence. Author photo from Bodleian exhibition “Shakespeare’s Dead“.

By the time of the Enlightenment, images have become beautifully engraved plates executed by skilled engravers. Diderot’s famous Encyclopedia (1751-1777) has beautiful images and a wealth of text – completely separated. Plate numbers and key letters on the images provide the sole reference to relate the detailed text pages to the plates, which are in separate sections in the bound volume.

Diderot's Encyclopedia has great illustrations of various occupations - all neatly labeled, but the viewer has to cross-reference the text to understand.



Bring that forward another century and you see statistical graphics. Like the earlier Enlightenment illustrations, text is separate from charts. Within charts, text is minimal – pushed to the edges (title, axis labels) and maybe the occasional label on a line internally, carefully placed to avoid colliding with a grid line.

1930 book explaining charts. Text is pushed to the periphery of the chart. (T.G. Rose, Business Charts, 1930)


Information visualization uses many techniques from charting and statistical graphics. In general, most of the text is pushed to the edge in information visualizations. Yes, there are many news infographics where text is integrated into the visualization. And there are text visualizations where the entire visualization is made of text (e.g. tag clouds) or labels on markers (e.g. graphs). But there are still gaps. From the infographics perspective, text is typically hand-crafted annotations carefully placed around the visualization. From the information visualization perspective, the text is limited – usually labels. Detailed text might be accessible via a tooltip, but tooltips are slow, and if you don’t focus on the particular item the tooltip content is not available. Detailed text might be visible in another linked panel (think Google finance charts), but this requires cross-referencing back and forth between two different visuals, which is a point of slowness (e.g. see Larkin and Simon’s Why a picture is sometimes worth 10,000 words). In a few instances a full sentence might make it into an information visualization, but even these have issues (e.g. newsmap has many headlines too small to read).

Should the medieval visualization be dismissed as an early attempt created with limited tools, or should it be considered an exemplar of how visualization, text, imagery and typographic attributes can all be used together to clearly communicate complex data? And furthermore, the medieval scribe achieved this using only a pen, while we have incredible computing resources. If the medieval example is considered a goal, then the question is:
How can we move towards automated information visualization with rich textual information directly integrated into visualizations?

(This post was inspired by discussions at TDi2016 Reading University and exhibits at Bodleian Library).

Posted in Data Visualization, Text Visualization