Revisiting Maps for Inspiration

I write a lot about typography and visualization. It all started with critically looking at maps and noticing differences between modern visualization and old maps. I did a PhD looking at typography, text and visualization. (Stay tuned, there will even be a book in late 2020 about visualizing with text – with many new visualizations beyond what I had in my thesis!)

Back to maps. I was invited to speak at ESAD Valence about visualization and I decided to take a break from book writing and revisit the original inspiration: maps. Cartography has different rules than visualization, a much longer history, and many different techniques readily visible. So, I cobbled together some of my favorite maps to talk about and point out some observations.

Gough Map, 1360

The Gough map is a wonderful medieval hand-drawn map. Rivers are diagrammatic, starting as bullets and flowing in almost straight lines. The iconography for towns varies from simple sheds, to an added cathedral tower, to a cluster of small buildings, to the walled city of London. Typographically, it is interesting for its ordering of labels: while most towns are labelled in brown, London is literally labelled in gold. Distances between towns are labelled in red, and counties are labelled in red with boxes (e.g. Suffolk).


The Gough map. London is literally labelled in gold.

Munster’s Geographia Universalis, 1540

Skipping ahead two centuries, Munster’s maps from Geographia Universalis (1540) are interesting maps at the transition to the printing press. Like the medieval Gough map, rivers, mountains and towns are highly stylized forms and pictographs, combined with typographically differentiated text in italics, caps and roman. Although the geographic map is a woodcut, the lettering is highly uniform and likely metal type composed together with the woodcut by a form cutter. The resulting aesthetic balances the rougher shapes and textures of the woodcut with the fine metal letters, plus some ingenuity by the artisans to fit it all together. Town labels are consistently horizontal, but some labels are angled to fit, such as Vincentza turned almost upside down:


Munster’s maps: woodcuts plus text.

Willem Janszoon Blaeu, 1629

Engraving enabled much finer detail than feasible with woodcuts: both the topography and the labels could be engraved in detail. Willem Janszoon Blaeu’s maps have an expanded set of iconography, now reduced even smaller to tents, pyramids and tiny houses. The paths of rivers are more accurate and mountains have shading. The engraved text now has more opportunity for variation: river labels more closely align with river courses; labels corresponding to areas are larger, and their letter spacing starts to increase (e.g. D A N). Many other text variants (size, case, italics) differentiate between names of towns, cities, provinces and regions.


Blaeu’s engravings: more detail and more text variation.

Crome’s Neue Carte von Europa, 1782

Crome created an early thematic map, Neue Carte von Europa, showing the locations of different crops, livestock and minerals in Europe in 1782 (previous post). An even wider range of icons is now required to indicate all the different types of resources: gold, silver, copper, zinc, iron, mercury, marble, fruit, honey, salt, rice, fish, wood, horses, pigs, etc. — 56 different types of commodities. After running out of icons, two-letter codes are used, e.g. Kr for cork, Tb for tobacco, Cr for currants and so on.


Crome’s map filled with icons and alphanumeric codes.

Sherman’s map, 1864

During the U.S. Civil War, General Sherman led his army deep into the Confederacy, far beyond his supply lines. Sherman’s map combines traditional topographic detail with an overlay of resources summarized from the 1860 census. Starting with a base map showing counties, cities, rivers and railroads, an additional 15 variables of census data are added regarding the quantitative resources available: population, livestock, and agriculture. The map provided Sherman with the ability “to act with confidence that insured success.” As an early datamap for analytical and planning purposes, it shows the value of depicting many dimensions of data simultaneously to aid in trade-off decisions, such as food available, potential resistance and potential supporters.


Sherman’s map: 15 quantitative resources per county.

Ordnance Survey, 1921

Modern maps, using printing presses, reached a high point in the early 20th century for the amount of information packed into them. Ordnance Survey maps are a favorite for the amount of information they pack into each label. In this example from the early 1920s, place names vary in capitalization, italics, size and font family (plus the actual name) to indicate 5 attributes per label (legend here).


Ordnance Survey: 5 variables indicated per name.

Stieler’s Atlas, 1924

Similar to the Ordnance Survey, mapmakers on the continent also created maps with high-dimensional labels. Stieler’s maps are typographically interesting as the labels use an ordering of underlines (dot, dash, solid, double solid) to indicate cities with different levels of governance (e.g. capital of a county, province or country). There are also backward italics for water features, curved and spaced text to indicate area features, and so on.


Reverse italics, multi-level underlines, and more.


FAA Aeronautic Chart, 2019

Here’s a map that’s only a few months old, and packed with a phenomenal amount of information for pilots. There are many different classes of information, visually distinct from each other. The base map has topographical shading in hilly areas and bright yellow in urban areas. Overlaid are blue and red layers, each with a wealth of information regarding the corresponding airport, runway configuration, airspace, routes, waypoints, radio frequencies, visual markers such as stadiums, wind turbines and bridges, and more. Icons and alphanumeric codes are heavily used to compact data for expert users. All text remains legible, with the background/basemap largely being light/bright so that other layers can be superimposed, and, if needed, some text is set with light halos.


Aeronautic chart, packed with relevant data for navigation.

So what?

Even though most people might think of Google Maps these days, with its minimal representation of roads and highly undifferentiated labels, the history of maps shows far richer solutions packed with many layers of information. These much richer maps, like the aeronautic chart and Sherman’s map, show that there are uses and applications where people need more than a couple of classes of information within one visualization. And all the examples here show how all this extra data can be communicated with labels, symbols, lines, layers and more.

So, where and when could scatterplots, timeseries charts and treemaps add many layers to increase their information content and aid new analytical uses?


Posted in Data Visualization

Bertin’s Reorderable Matrix

I recently had the opportunity to attend a workshop at ESAD Valence. To my surprise, in their collection, they have original parts from one of Bertin’s reorderable matrices!


I had the opportunity to use the rebuilt matrix at VisWeek in Paris in 2014, and I’ve simulated the matrix using Excel macros and conditional formats. Essentially, the reorderable matrix is a physical visualization that takes a table of structured data and enables resorting of rows and columns based on data values to reveal clusters. The top surface of each block represents a numeric value, varying from the lowest value (full white) to the highest value (full black) with various textures in between. The user can then shuffle (i.e. reorder) full rows or full columns to regroup the data based on values so that clusters visually appear (Bertin called the process diagonalization, see the video). It’s a human-powered physical clustering algorithm.
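For the curious, the core of the reordering process can be sketched in a few lines of Python. This is only a rough illustration using a simple barycenter-style heuristic (my own choice, not Bertin’s manual procedure): rows and columns are repeatedly sorted by the weighted mean position of their values, so that large values drift toward a diagonal.

```python
# Sketch of Bertin-style matrix reordering ("diagonalization"):
# repeatedly sort rows and columns by the weighted mean position
# (barycenter) of their values, so large values drift to a diagonal.

def barycenter(vector):
    """Weighted mean position of the values in a row or column."""
    total = sum(vector)
    if total == 0:
        return 0.0
    return sum(i * v for i, v in enumerate(vector)) / total

def reorder(matrix, passes=3):
    """Return (matrix, row_order, col_order) with rows/cols resorted."""
    rows = list(range(len(matrix)))
    cols = list(range(len(matrix[0])))
    for _ in range(passes):
        # sort rows by the barycenter of the current column arrangement
        rows.sort(key=lambda r: barycenter([matrix[r][c] for c in cols]))
        # then sort columns by the barycenter of the new row arrangement
        cols.sort(key=lambda c: barycenter([matrix[r][c] for r in rows]))
    return [[matrix[r][c] for c in cols] for r in rows], rows, cols

# A small table with two hidden clusters, deliberately shuffled:
data = [
    [0, 9, 0, 8],
    [7, 0, 9, 0],
    [0, 8, 0, 9],
    [9, 0, 7, 0],
]
clustered, row_order, col_order = reorder(data)
for row in clustered:
    print(row)
```

After reordering, the two clusters appear as blocks along the diagonal, just as the physical matrix reveals them when rows and columns are shuffled by hand.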

This particular version is made with tiny plastic blocks, about the size of Lego 1×1 bricks, and they sound just like Lego when they jostle in the big bag of bricks (Bertin called them dominoes). I arranged a few on a desk into a matrix (the connecting rods weren’t available). You can see how patterns of all-black, textured, and partially textured surfaces are highly visible:


One really interesting aspect that I noticed is the colored edge stripe on some of the bricks, seen in the picture below (and quite noticeable in the bag, where some blocks have bright stripes in green, blue, yellow, orange, etc.). I asked, but no one was certain of their purpose. The stripes are always on the sides where the rods go in, never the top. I’m guessing that it is some kind of recording system: perhaps the user would draw a stripe across a row of bricks as a way to record the state. Since these colors were on the sides of the blocks, they wouldn’t be visible from above and therefore wouldn’t interfere with the patterns and clusters being created.

Another interesting aspect is that both the tops and bottoms of the blocks have the black-to-white texture patterns. We speculated that the blocks were reused from analysis to analysis, and it was easy to code both sides. But maybe there’s more: it would be feasible to reorder a matrix, make some kind of intervention, collect more data, then color the new state on the bottoms of the blocks. Then a user could flip over the entire matrix to see if the pattern had changed in some way. Again, speculation on my part.

The Lego-like aspect also suggests to me that a reorderable matrix could potentially be constructed out of standard Lego blocks today: a 1×1 with holes on both sides, rods, and tiles in assorted shades of grey. And then concepts about data clustering could be taught in grade school.



Posted in Bertin, Data Visualization

Visualizations with perceptual free-rides

We create visualizations to aid viewers in making visual inferences. Different visualizations are suited to different inferences. Some visualizations offer additional perceptual inferences over comparable visualizations: that is, the specific configuration enables additional inferences to be observed directly, without additional cognitive load (e.g. see Gem Stapleton et al., Effective Representation of Information: Generalizing Free Rides, 2016).

Here’s an example from 1940, a bar chart where both bar length and width indicate data:


The length of the bar (horizontally) is the percent increase in income in each industry. Manufacturing has the biggest increase in income (18%); Contract Construction is second at 13%.

The width of the bar (vertically) is the relative size of that industry: Manufacturing is wide – it’s the biggest industry – it accounts for about 23% of all industry. Contract Construction is narrow, perhaps the third-smallest industry, at around 3-4%.

What’s really interesting is that the area of each bar is highly meaningful: percent increase x size of industry = total income gained in that industry. For example, the areas of Transportation and Contract Construction are perceptually quite similar. This can be validated mathematically: Transportation, at a 7% increase x 7% industry size, is a similar total income gain to Contract Construction at a 13% increase x 3.5% industry size. Or Mining, at a 9% increase x 3% industry size, is about the same total income gain as Agriculture at a 3.5% increase x 8% industry size.
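The arithmetic behind these comparisons is easy to check. A quick sketch, using the approximate values read off the chart:

```python
# The free-ride: bar area = (percent increase) x (industry size)
# = total income gained. Checking the pairs estimated from the chart
# (the numbers are the approximate values read off in the text):

def area(pct_increase, industry_size):
    return pct_increase * industry_size

transportation = area(7, 7)      # 7% increase x 7% of industry
construction   = area(13, 3.5)   # 13% increase x 3.5% of industry
mining         = area(9, 3)      # 9% increase x 3% of industry
agriculture    = area(3.5, 8)    # 3.5% increase x 8% of industry

print(transportation, construction)  # 49 vs 45.5 -- perceptually similar
print(mining, agriculture)           # 27 vs 28.0 -- also similar
```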

This meaningful area is the free-ride. Perceptually, one can directly observe and compare relative areas. Total income gain hasn’t been explicitly encoded; it’s a result of the choice to encode length and width. If the viewer is potentially interested in total income gain in addition to percent increase and relative size, this is a useful encoding. Total income gain might be very important in government policy, for example, as the total income gain is directly proportional to the taxes generated.

A more common design choice these days might be to use a treemap to show one variable (relative industry size) and color to show the second variable (color to indicate percent increase); like this:


In the treemap, size and color are explicit, but there’s no free-ride. The combination of color and area isn’t directly perceivable: the similarity in total income between Transportation and Construction is not obvious, nor is the similarity between Mining and Agriculture. In the treemap, the area encodes relative size, but the length and width of the boxes are not meaningful. The color encodes percent change, but color isn’t effective for comparing relative quantities. If total income gain is a desirable insight, then the treemap fails.

Edward Tufte (1983) discusses multi-functioning graphic elements, which doesn’t quite align with the idea of a free-ride. Johanna Drucker (2014) discusses this notion as generative: a representation that produces knowledge as opposed to a representation that simply displays data. But I like the definition of a free-ride, which succinctly explains the perceptual benefit created by the choice of representation. See Gem’s paper for an example applied to Euler diagrams.

Visualization designers need to consider the free-rides and other perceptual inferences that different visualization alternatives provide, and choose among visualizations based on how those inferences suit the viewers’ task.

Percent Increase in National Income by Industry is from page 178 in the book How to Chart: Facts from Figures with Graphs, by Walter Weld, 1960. Walter didn’t particularly like this chart, partially because there is no legend nor axis for the widths. Personally, I have seen this type of bar chart used effectively in financial services.

Posted in bar chart, Critique, Data Visualization

Metabolic Pathways and Visualization Pathways

Metabolic pathway diagrams show series of linked chemical reactions occurring within cells (Wikipedia). These diagrams started more than a half-century ago, such as this example from 1967 in the Smithsonian:


These diagrams have been continuously expanded over decades as new research identifies new reactions and new connections. A 2017 version at Roche is a massive interactive poster documenting thousands of compounds and reactions:


These are extremely interesting visualizations that document the knowledge of a research community showing the connection and flows of chemical reactions.

Could the equivalent exist in data visualization and analytics? The field is growing rapidly and there are many techniques. Like biology, as the visual analytics field grows, it becomes more difficult to keep track of all the evolving techniques. Surely a similar diagram of data, and the many ways it can flow through analytics into visualizations (and other perceptualizations) and interactions, should be feasible and useful for the community. Here’s an attempt to sketch out a bit of it, related to data that expresses structures such as hierarchies, graphs or sequences, and corresponding visualization approaches:


It’s a bit trickier than biochemical processes, as there are many-to-many relationships, potentially making it overloaded with too many connections, so some editorial process is needed to determine which pathways to show. And it’s missing so much, e.g. interactions, many data analytic techniques, and visual attributes (color, size, icons, etc.). And it’s not obvious how to group visualization layouts, e.g. by mark type, by coordinate system, or maybe by the primary structure that they represent?
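One way to make such a sketch concrete is to treat the pathways as a directed graph from data structures to layout families. A minimal sketch in Python, where the groupings and names are my own assumptions, purely for illustration:

```python
# A hypothetical sketch of "visualization pathways" as a directed graph:
# data structures flow into layout families. The groupings here are an
# assumption for illustration, not a definitive taxonomy.

pathways = {
    "table":     ["bar chart", "scatterplot", "heatmap"],
    "hierarchy": ["treemap", "icicle", "node-link tree"],
    "graph":     ["node-link diagram", "adjacency matrix"],
    "sequence":  ["timeline", "arc diagram"],
}

def layouts_for(structure):
    """Follow a pathway from a data structure to candidate layouts."""
    return pathways.get(structure, [])

for structure, layouts in pathways.items():
    print(f"{structure} -> {', '.join(layouts)}")
```

Even this tiny version shows the many-to-many problem: a real diagram would also need edges from analytics (clustering, projection, aggregation) into these layouts, and some editorial pruning to stay readable.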

Perhaps someone else has already created something along these lines? If not, is something like this valuable? Let me know.

Posted in Data Visualization, Design Space, Graph Visualization

Legacies of Isotype

ISOTYPE was a dramatic reconceptualization of statistical graphics in the 1930s by Otto and Marie Neurath and their collaborators. Contemporary charts, such as those seen in Brinton, were mostly black, with simple dots or lines, tiny captions, and dense grid lines, axes, ticks and labels. Isotype instead was bold; almost always devoid of grid lines, axes and tick marks; used minimal bold sans serif text; and usually relied on repetition of expressive icons to convey quantities. Compare the two images below. Isotype evolved at the same time as Modernism, where these same ideas — broadly, “less is more” — were applied to many areas of design including architecture, art, dance, industrial design, etc.

How did Isotype’s visual language become diffused across charts, visualization and interfaces over the next few decades? Here are three legacies:

Pictographic Icons

Perhaps the best known feature of Isotype is the use of pictographic icons, which became increasingly important with post-war globalization. Pictographic icons are recognizable across languages and use less space than long labels. Standardized icons became popular across many areas of society, such as highway traffic signs, Olympic symbols, airport signage, warning symbols and so on. And then Mac and Windows used icons as core interaction elements in graphical user interfaces (how many icons are visible on your screen right now? I have more than 125). Here’s a mid-1970s set of standardized symbols for the US Dept. of Transport:


Standardized icons from the mid-1970s, US Dept. of Transport.

No Grids

The diffusion of Isotype benefited in part from technical changes to printing, moving from metal-based printing (which could handle fine detail) to offset printing (which was based on photographic compositing techniques, reducing the ability to use fine details such as thin lines and crisp serifs). As such, thin grid lines and small text became more difficult to use than chunky icons, large patches of color and bold, heavy-weight labels. This lines up well with the design ideology of Isotype. If we look at some charts from the mid-1970s, we can see the remains of Isotype — few or no grid lines, minimal text, and expressive pictographs:


Charts from 1975: low on grids, low on text and some icons (Graphis Diagrams, 1976)

Labeled Values

Isotype worked hard to reduce text, but showing numeric values seems to be important when we look at charts after Isotype. In the prior image, there are explicitly labelled numeric values in all six charts. Presumably viewers want an estimate of the numerical quantities corresponding to the visual marks, and they don’t want the cognitive load of counting icons or guessing the area associated with circles, folded corners or the relative width of smoke. Or perhaps it is difficult for icons to express fractions. Regardless, the addition of numerical values, either as labels on marks or labels on axes, came back. This was probably one of the first aspects of Isotype to slip — here’s a US Dept. of Agriculture bar chart from 1950, highly influenced by Isotype:


Chart from 1950, highly influenced by Isotype (compare to first pair of images).

It has the icons (although moved to the axis and explicitly labelled) and minimal grids (although an outer frame has been added to the plot area). And it labels the bars. In this chart, like the 1970s charts, the values are explicitly labelled.

The take-away is that removing value labels completely may have been a step too far on Isotype’s part. Even Haroz et al.’s study on “Isotype” charts included quantities along the y-axis in all test conditions. Either a numeric axis, labelled bars, or some numeric guidance on the values seems to be broadly desired. We see these labelled values in many charts, such as the many Excel charts that label both the numeric axis and the number value per bar (3 of the 11 quick styles provide both), such as this one:


or the USA Today Snapshots (which use many cues from Isotype, including pictographs, minimal text and no grids):

or in the very first bar chart in the very first tutorial of D3.js (“Let’s make a bar chart”):





Posted in Data Visualization, Isotype

Which scatterplot is preferred?

Next week, we’ll be presenting a machine learning (ML) and visualization paper at the IV2019 conference in Paris. The core idea is to tag and display relevant news headlines in a real-time ambient visualization system.

From an ML perspective, the challenge is to use an open source news dataset (e.g. GDELT) where thousands of headlines are available and updated frequently (e.g. every 15 minutes), but the tags provided don’t match the needs. In general, classifying news stories is an ongoing challenge, as new topics and words emerge (e.g. Brexit, tariffs), and topics may change over time (e.g. Clinton, environment, etc.). We provide a classification module where expert users start with a simple text search against the headlines. Then we automatically suggest additional relevant keywords, which the user may explore, add, or remove. Additionally, the user may inspect sample headlines associated with any of the keywords, grouped by similarity. Then the user can explicitly select target headlines and keywords that match their intended search topic (or not). Finally, we run a classifier to tag all the headlines with respect to this topic defined by sample keywords and headlines. The user can iterate, modify, reclassify, and so on. Lots more detail on the technical approach is in the paper.
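As a rough illustration of that loop (seed search, keyword suggestion, tagging), here is a minimal sketch in Python. It uses naive word matching and co-occurrence counts; the actual module’s classifier is more sophisticated (see the paper):

```python
# Minimal sketch of the headline-tagging loop:
# (1) seed search, (2) suggest co-occurring keywords, (3) tag headlines.
# An illustration only; the paper's approach is more involved.

from collections import Counter

headlines = [
    "brexit talks stall over border question",
    "new tariffs announced on steel imports",
    "brexit deal vote delayed again",
    "local team wins championship",
]

def suggest_keywords(headlines, seed, top_n=3):
    """Suggest words that co-occur with the seed term in headlines."""
    counts = Counter()
    for h in headlines:
        words = h.split()
        if seed in words:
            counts.update(w for w in words if w != seed)
    return [w for w, _ in counts.most_common(top_n)]

def tag(headlines, keywords):
    """Tag every headline that mentions any of the topic keywords."""
    return [any(k in h.split() for k in keywords) for h in headlines]

suggested = suggest_keywords(headlines, "brexit")
print(suggested)
print(tag(headlines, ["brexit"] + suggested))
```

In the real system, the suggested keywords would be reviewed by the expert user before reclassifying, and the cycle repeats as topics drift.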

From the visualization perspective, the challenge is how to display a subset of these headlines. We were working with an existing animated ambient visualization which provided either a scatterplot or a map upon which we could display headlines, and which would automatically select headlines at random to pop up. Note that a map with point data is essentially the same as a scatterplot: the x and y locations of the data points are based on latitude and longitude, plus an underlying image is used to show geographic features, such as land/sea.

We created four scatterplot variations: 1) a scatterplot map (with underlying land/sea image); 2) a scatterplot with explicit axes (recency vs. # stories); 3) a scatterplot based on a multi-dimensional projection (e.g. PCA or t-SNE); and 4) a scatterplot based on a random layout (i.e. like a wordcloud), which is visually similar to the multi-dimensional projection:


These visualizations were then reviewed with a small target group of users, in a meeting environment. This is where things get really interesting:

  1. Map: Everyone likes the map. It is immediately understandable.
  2. Scatterplot with explicit axes: Everyone logically understands the representation, but is otherwise largely unenthusiastic. A few people are very interested, but given the broad audience for an ambient visualization, there is not enough support to push this into production.
  3. Scatterplot projection: The experts are not expert in multidimensional data projection. Multidimensional data projection is difficult to explain and difficult for people to comprehend. They can’t use what they don’t understand. Unfortunately, dead on arrival.
  4. Scatterplot random layout (aka cloud): Surprisingly, some people really like word clouds, but this community of experts is not interested. It’s art, not information.

So, the map wins: there is a strong preference for the map over all the other scatterplot variants. Why should the map be so strongly preferred? Unfortunately, the project didn’t have scope to consider this, and there are various confounding effects going on too: titles are smaller on the map, the scatterplot also uses color-coding of text, the map had leader lines to associate headlines with locations, etc.

Here are some hypotheses why there was such a strong preference for maps:

a. Maps are easy to decode. The specific map we used is always global, and a global map is easy to decode because people are very familiar with them: it has very low cognitive load. Low cognitive load may be very important in an ambient visualization, as people don’t want to have to think too hard about what they are seeing. Scatterplots, however, have higher cognitive load. You have to be actively engaged to decode them: you have to reference back and forth between data points and an axis, or you need to understand what a projection means. And a cloud doesn’t have anything to decode positionally, so there’s no information there. From an information standpoint, the map provides more information than a random cloud.

b. Maps automatically engage prior knowledge. A news headline situated in Iowa means that we can bring all of our knowledge about Iowa immediately as context to interpret the visual representation. If the news headline is about corn, that probably matches prior knowledge that Iowa is farmland and may have a lot of corn crops.

c. Maps are visceral. In Norman’s Design of Everyday Things, some objects elicit a visceral response. Visceral responses are immediate and emotive. The map is immediately accessible and engaging. In a ranking of map vs. scatterplot with axes vs. multidimensional projection scatterplot, my hunch is that the map is the most viscerally engaging.

Thoughts? Comments?



Posted in Data Visualization, Text Visualization

SparkWords

SparkWords are words in running text, such as narrative prose or lists, where the words have additional data embedded in them as visual attributes, such as color, bold or italic. The simplest use is differentiation, such as italic to indicate the name of a ship, e.g. Titanic. However, SparkWords can go much further. Attributes can be combined; for example, one could indicate political candidates using color to represent their political party, italic to indicate gender, and underline to represent an incumbent: Mazie Hirono, John McCain or Bernie Sanders.

SparkWords can go a lot further. Here’s a paragraph with some text about departments in France using SparkWords:


Word weight and the proportions of red, green and blue are based on data. Four different quantitative data values are conveyed by visual attributes applied to the words. There is no need for a separate legend; it’s embedded in the explanation. There’s no need for spark lines or spark bars: if bars were used instead, you would still need some kind of interaction to identify the individual bars. With SparkWords, the words uniquely encode the identity of each item. That is, in addition to weight, color, etc., the word itself encodes one more dimension of data.

Note that SparkWords do NOT adjust the size of words because, in running text, text size stays consistent.

Here’s another example, an entire paragraph of SparkWords showing all the 2018 baseball games of the NY Yankees:

Each three-letter sequence is an opposing team (e.g. TOR for Toronto, TBR for Tampa Bay). Each character is a game: red for a loss, green for a win (grey if no game). The background bar is the score differential: the initial game, represented by the first T in the paragraph, was a win for the Yankees with five runs over the Toronto Blue Jays. The second game was won with a smaller run differential (2), and the third game was lost by a couple of runs.
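As an illustration of how such an encoding might be generated, here is a minimal sketch in Python that renders a game sequence as colored HTML spans. The palette and markup are my own assumptions for illustration; the actual SparkWords implementation may differ:

```python
# Hypothetical sketch: render a game sequence as SparkWords in HTML.
# Each character of a team code becomes a span colored by win/loss.
# The palette and markup are assumptions, not the actual implementation.

def sparkword(team_code, results):
    """results: one 'W', 'L' or '-' per character of the team code."""
    colors = {"W": "green", "L": "red", "-": "grey"}
    spans = []
    for letter, outcome in zip(team_code, results):
        spans.append(
            f'<span style="color:{colors[outcome]}">{letter}</span>'
        )
    return "".join(spans)

# Three games against Toronto: win, win, loss
html = sparkword("TOR", "WWL")
print(html)
```

The score-differential background bar could be added the same way, e.g. with a background gradient or border per span scaled to the run differential.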

So What?

Perhaps most interesting is that SparkWords represent a different way to think about visualizations. Instead of a separate visualization and a separate paragraph explaining the visualization, with SparkWords the visualization moves directly into the words of the narrative text and there is no need for an additional visualization. The notion of a separate plot area, or even a micro-sized plot such as a spark line, is not required. And SparkWords do provide more information context than plain text: in the example above, there’s a lot of similarity between Gers, Creuse and Lozere — same green, same light weight. But Paris, Seine and Nord are different: Paris is more blue (services) while Nord is more industrial (red). Or in the case of the Yankees, there’s a lot of green overall, but some bad sequences of losses (TBR and BOS swept 3-game series in the middle of the season).

Instead of putting text into a visualization, SparkWords puts the visualization into the text.

So why should we want to use SparkWords? There is an increasing need to explain data: data journalism, explainable AI, automated insights, data-driven natural language generation. Visualization, by itself, does not direct attention — there are many possible patterns and it’s not obvious what to look at or what the specific insights are. Data narratives do explicitly talk about specific data points and trends — but do not provide context to help inform critical understanding. Techniques which offer tighter integration between explanation and visualization can be much more informative. SparkWords, like data comics, automated annotations, and in-line visualizations (e.g. spark lines), bring visualization and narrative closer together. SparkWords are the only option that is pure text, so maybe there are some use cases where SparkWords are uniquely well suited for explanations.

I’ll be talking more about SparkWords at EuroVis on Thursday next week (June 6) at the 11-1 session on Text Visualization.

Posted in Alphanumeric Chart, Data Visualization, SparkWord