Thursday, December 20, 2007
the NYT takes it up a notch
The New York Times recently posted this impressive interactive visualization showing the degree to which the presidential candidates mention one another in their debates. The NYT graphics team has certainly been ahead of the curve in terms of producing readable (and beautiful) infovis, but this one strikes me as a step above their usual (great) stuff. A network diagram like this is a considerably more abstract visual encoding than a bar chart or line graph, so I'm surprised and pleased to see it deployed in such a high-traffic context.
Matt Ericson, their deputy graphics director, has talked about the challenges they face in designing "infovis for the masses"; I wonder if broadening their visualization repertoire with examples like this represents an increased confidence in the "visualization literacy" of their readers. I also wonder whether they work more experimentally with the graphics they produce for the NYT website than with those for the print edition, and whether this reflects a perceived difference in the "visualization literacy" of those respective audiences. Assuming it could be reworked as a static image, would they feel confident deploying this visualization in the print edition?
Either way, this is a fantastic example of effective public-facing visualization.
Monday, December 17, 2007
the most wonderful time of the year
Anyways, I don't usually like to advertise my own visualization work here, but I just posted the final project for a course I was taking this semester, Media in Transition, that people might find interesting. It's a prototype visualization of a set of Incan artifacts called "khipu," as cataloged by the Khipu Database Project at Harvard. Without going into too much detail, the khipu take the form of hierarchically knotted strings used to encode information (so, arguably, early information visualization!), though the meanings of these encodings remain largely undeciphered. I thought it would be interesting to prototype a visualization of the collection to suggest the value of infovis in facilitating exploration and analysis of the (somewhat unconventional) data set. You can check out the project page and applet here. More information about the khipu can be found at the Khipu Database Project page, and of course on Wikipedia. There's also an interesting article about them on Wired.com from a few months ago.
Happy holidays!
Wednesday, November 28, 2007
the usability of YouTube
I just came across this short paper on the usability of YouTube that I thought was interesting given the recent discussion here and on Stephen Few's blog. The authors point out that YouTube is a hugely successful site despite sporting an interface design that apparently doesn't respect many conventional usability heuristics. By considering what users enjoy about the site, and what keeps them coming back, they suggest that these traditional evaluation methods may need to be redefined; among other things, "engagement" and "playfulness," two terms we've been throwing around here lately, are becoming increasingly important.
While I won't argue that YouTube serves the same purpose as information visualization (particularly if its "apparently bad design" is intentional, as the paper suggests), what I find most striking about this article is how it identifies the "new" Web 2.0-enabled user, a digital native who grew up with the web, as having a heightened literacy for the internet and its technologies. This new user interacts with information differently and more fluently than users of the past, so the design of interfaces for them should reflect this -- a sentiment that certainly applies to information visualization design as well.
Sunday, November 25, 2007
continuing the discussion with Stephen Few - my response
Rather than responding point-by-point to Steve’s post, I will try to address what I see as the underlying disconnect in our positions. This disconnect, I think, revolves around our differing conceptions of the scope of information visualization, as well as our differing definitions of certain key terms. I’ll elaborate on some of the statements he makes in his response:
What is perhaps not obvious, based on my capstone presentation alone, is the fact that I spend a fair amount of time trying to understand what draws people to ineffective visualizations—those that fail to serve the needs of the audience while managing to appeal to that audience on some level.
[…]
Many-Eyes has managed to make the process of data exploration and analysis interesting and fun, without resorting to features that undermine the effectiveness of the activity.
[…]
I hope that I’m never guilty of writing off meaningful aspects of visualization as useless, simply because they don’t match my own preferences—aesthetic or otherwise. If you ever catch me doing so, I want you to call me on it. Just make sure that what you point out as useful in a visualization is actually useful and not just appealing to your own preferences.
[…]
When a great deal of evidence indicates that certain visualization practices work better than others, I believe that it’s helpful to teach people to follow the best practices and avoid those that fail.
[…]
I define the “right way” as the way that best satisfies the needs of people—the way that works. I’m a pragmatist. What I don’t do is define the “right way” as the way that people desire things to be done. Our desires, our notions of how things should be done, often conflict with the way that really works.
[…]
The real “disservice to the goal of popularizing information visualization” is the existence of (1) ineffective or irrelevant infovis projects and products that represent our work poorly, and (2) the unfortunate inability of many experts in the field to present their work to those who need it in a way that they can relate to, care about, and understand.
What makes these statements problematic to me, and part of what I was trying to get at in my original critique, is Steve’s definition of terms like “useful,” “(in)effective,” and “[user] needs” with regard to information visualization. If I were to try to identify, right off the bat, the fundamental disconnect between the two “camps” that Steve and I represent, it would be that we don’t necessarily agree on what these terms mean.
Steve, coming from the area of business intelligence, presumably values the clarity of the data above all else, which I think is a perfectly reasonable position given the “needs” of that field. I’m not totally familiar with the world of business intelligence, but the use of information graphics and visualization in that area is undoubtedly motivated by a need to understand the meaning of quantitative data and make critical decisions based on that understanding. So, for him, having the data presented as plainly, directly, and efficiently as possible is the prime consideration in the design of its visualizations. Anything that distracts from that is considered ineffective or not useful.
My contention is that this sort of traditional, “scientific” understanding of information visualization, while certainly valuable in some domains (such as business intelligence), is too restrictive when considering its broader, more popular uses. For one thing, there is the obvious point that “popular visualization” does not necessarily share the same critical goals. Many of the infovis examples that Steve criticized from the Smashing Magazine article exemplify this, in that they present information that is not necessarily “mission critical” in the same way BI information might be – people are not necessarily viewing these visualizations because they need to make critical decisions based on the meaning of the data they present. Rather, they are perhaps more “casual” forms of information visualization in which directness and efficiency of transmission are not the primary goal, which then complicates our conception of “usefulness” or “effectiveness.”
Now, I am the first to admit that, at this moment, I don’t have formal definitions of these terms that stand in opposition to the ones applied in Steve’s conception of infovis, but work to define them is starting to emerge: at InfoVis, we saw Zach Pousman’s (et al.) presentation and paper on “casual infovis,” Martin Wattenberg’s discussion of “vernacular visualization,” Fernanda Viegas’ description of the many unexpected uses of Many Eyes, Ola Rosling’s emphasis on visualization as storytelling, Brent Fitzgerald on Swivel as “big (social) database in the sky,” and Robert Kosara suggesting new uses for infovis (I’m sure I’m leaving someone out, but these are some I just blogged about). Across the internet, the discussion continues: Andrew Vande Moere, Andrea Lau, and Nick Cawthon ponder information aesthetics, the guys at Stamen Design discuss “useless” visualization, as does Matthew Hurst of Microsoft Live Labs… the list goes on!
What bothers me about Steve’s position, which I sensed mirrored in a lot of the discussion at InfoVis, is the suggestion that this new research, or the type of visualization that it describes, is somehow “not infovis.” While it’s true that infovis, as a field, grew out of a “strict” scientific tradition (i.e., computer science) that informs its theories and methodologies, it is going to have to broaden its understanding of the ways in which “normal” people interact with information if it wants to present itself as accessible to the masses. I think the field will need to start thinking more in terms of “engagement design” rather than the highly quantified metrics of efficiency, time on task, etc., that have traditionally characterized user studies in HCI and interface design. Those metrics may make sense (and, I think, give rise to, at least in part, the “great deal of evidence” supporting best practices that Steve refers to) when describing visualization usage by users who spend their lives (and make their livelihoods by) working with charts, graphs, and other visualizations, but they almost certainly don’t capture what is important about the way “the masses,” as I define them, use information visualization. For their purposes, engagement (another ambiguous term) should perhaps trump efficiency; for instance, maybe a particular design should sacrifice some of the “directness” of the data for other elements, such as “playfulness,” that encourage continued use. I can understand why Steve and others might be skeptical of this sort of approach to infovis, as there doesn’t yet exist a large body of research supporting it, but I would argue that this is because we haven’t really started thinking about visualization in these terms yet, rather than because there is something fundamentally wrong with it. I’m concerned that the traditional infovis field believes too strongly that its understanding of its domain is the final word on information visualization. This is a problem.
Steve himself admitted that, if given a choice of visualization design, most users would always pick the “flashier” one. I’ll just reiterate that instead of assuming that “flashier” equals “less effective,” we should try hard to understand the value of “flashiness” (or “novelty,” or “aesthetics,” or whatever we want to call it) against the expanded definitions of “useful” and “effective” I’ve referenced here. At his tutorial session at InfoVis, I made the (admittedly somewhat outlandish) comparison to Jim Cramer and his highly rated CNBC program, Mad Money. Cramer uses an almost ridiculous number of (literal) bells and whistles to “decorate” his presentation of financial information, and yet the information gets across. Is it presented as thoroughly as it might be by a more Spartan “talking head” presentation? Probably not, but the show’s popularity says something about what engages non-experts. Coming back to the realm of infovis, there is something undeniably important and valuable about the fact that visualization examples such as Jonathan Harris’ “We Feel Fine,” or Stamen Design’s visualizations of Digg.com, or, yes, the Ambient Orb, are popular and engaging. What’s interesting about Many Eyes, an example Steve does like, is that it supports many types of representation, from traditional analytic uses to the more unusual ones that Fernanda described in her presentation at InfoVis. Arguing that the mere existence of these sorts of projects undermines information visualization is a real problem. It implicitly assumes a very narrow definition of what “information visualization” is, which in turn rejects the type of understanding that I think is absolutely necessary to promoting the field as a medium that anyone can use. This kind of work should be taken seriously when considering information visualization design “for the masses.” I’m not arguing that its design principles should replace the ones that Steve talks about, but I’m completely certain that the two perspectives are not mutually exclusive; the reality, I think, is that both could benefit from a more robust understanding of one another.
So, in the end, Steve and I may be talking about two different things. As I stated at the beginning of this post, I certainly agree with most of the principles he employs to promote more effective design of business-related information visualization. What troubled me about his capstone at InfoVis, and a lot of the other discussions there, was perhaps that this was as far as his conception of “infovis for the masses” seemed to go. I applaud him for a lot of what he tried to convey to the audience there (they needed to hear it!), but I would suggest that “the masses,” to me and many others, includes a much wider population of users – most of whom are not “data analysts” of any sort.
Tuesday, November 20, 2007
continuing the discussion with stephen few
I really appreciate Steve's interest in continuing the discussion (and frankly I'm flattered that he even read what I've been writing), so this is just a quick note to indicate that I fully intend to respond to his post, either in the comments on his blog or here (or both). I'm in crunch mode on a couple of projects now that the semester is winding down, but I will have something up within the next couple of days.
Thursday, November 15, 2007
InfoVis impressions, part 4: "Infovis as Seen by the World Out There"
This is the fourth (and final) part of my impressions of InfoVis 2007. Click here for part 3.
UPDATE: Stephen Few has made a written version (PDF) of his capstone presentation available on his website.
The final “for the masses” presentation at InfoVis was Stephen Few’s capstone, “InfoVis as Seen by the World Out There.” In it he discussed his observations on “the big picture view of what’s going on in infovis,” and how the lay-public, who “desperately need what we have to offer,” perceives (or misperceives) the field of information visualization. He argued that the outsider’s perspective is based on poor, primitive examples of visualization, and tried to motivate the audience to address the situation.
Rather than going into too much more detail, I wanted to defer to Joe Parry’s impression of the talk, because it matches my own almost exactly:
On a separate topic I found Stephen Few's capstone talk rather unsettling - I understand why he is so passionate about designing clear visuals, but sometimes that passion can err on the abrasive side. And that style won't endear the visualization community to the world out there. I also think he underestimates the power of playfulness and fun in reaching out to an audience - come on - Swivel's option to 'bling your graph' is just funny! Another worry is that the very Spartan style of visuals he favours actually imposes an aesthetic in its own right, for all of its good intentions and intelligent rationale. We should accept some people just won't like that aesthetic.
Over the course of his talk, Few showed many “bad” examples of infovis from the web. Some of them I agreed were pretty terrible (including some of the new charting features in Excel), but most I felt he was dismissing without trying to understand what is useful about them (in addition to his criticism of Swivel, he wrote off the Ambient Orb and the list of infovis examples recently published by Smashing Magazine). What was clear, as Parry alludes to, is that Few’s conception of popular infovis design is particularly hard-line -- more about telling the masses how to display information the “right” way than about thinking about how non-experts might interact with information differently and with different needs. This is understandable given his usual focus on design for the business intelligence community, but I think that attitude does a disservice to the goal of popularizing information visualization. I’m not arguing that something like the Ambient Orb is a fantastic example of visualizing information, but the fact that (some) people find it compelling suggests that there is something engaging about its presentation. Rather than writing it off as useless, why not try to figure out how to incorporate its engaging qualities into more “sophisticated” visualization systems?
I wholeheartedly agree with Few’s assertion that the infovis community would do well to consider the scope of impact their work has, and I obviously believe that the field of information visualization can help “the world out there,” but I didn’t find the rhetoric of his talk particularly encouraging. On the one hand, he promotes bringing infovis “to the masses,” but on the other his conception of that process feels a bit too evangelical. Everything about his presentation revolved around showing “outsiders” why their intuition about infovis is wrong. If our goal is to produce infovis that makes sense to these non-experts, I don’t think this mentality is constructive.
Monday, November 12, 2007
InfoVis impressions, part 3: "The Impact of Social Data Visualization"
This is part 3 of my impressions of InfoVis 2007. Click here for part 2.
This panel essentially continued the discussion from the “Infovis for the Masses Panel” the previous day, and, to me, was the best of the conference. The presenters were Brent Fitzgerald from Swivel, Ola Rosling (son of Hans Rosling) from Google/Gapminder, Fernanda Viegas from Many Eyes, and Martin Wattenberg (subbing for Warren Sack) talking about examples of social visualization on the web. The panel was chaired by Robert Kosara from the University of North Carolina at Charlotte.
Kosara kicked off the discussion by motivating the idea of focusing on the visualization needs of “everybody.” He pointed out several important functions for popular visualization, including using it to address data that is public but not readily available to most people (census data, federal spending data, the type of information visualized by They Rule, etc.). He also emphasized the value of visualizations as political statements and as storytelling (referencing Hans Rosling and Al Gore’s recent uses of infovis). Finally, and appropriately, he asked the audience whether we can use visualization to change the world, and whether anyone wants to use visualization for anything beyond presenting at InfoVis every year. Slightly rhetorical, but pointed!
Next, Fernanda Viegas spoke about the various different ways people are using visualizations on and around Many Eyes, including as political commentary, as art (“lit-mash” with tag clouds, etc.), as focal points for conversation, and as educational tools. She pointed out that these are all “new” ways of using visualization that treat it as a medium for communication rather than a scientific tool. This, I think, is one of the most significant points you can make about what visualization “for the masses” is about.
Following Fernanda, Brent Fitzgerald talked about Swivel, contrasting its more private data model to Many Eyes’ open one. He described their goal as creating the “big database in the sky,” where anyone can find the information they’re looking for, emphasizing Swivel’s support for collaborative data editing and analysis. Swivel aims to produce a “data marketplace” based on a social network of contributors.
Ola Rosling presented next, starting with a characteristically “Rosling-esque” demo of child mortality data from across the world using the Trendalyzer/Google visualization tools. He stressed the importance of animation in information visualization (particularly with regard to time-based data), arguing that “static representations are lying” by only showing a single snapshot of a changing data set. He also suggested that visualization literacy should be motivated by democratizing access to data, as well as simply creating visualizations of compelling data. Finally, he ended with a fantastic analogy that compared the information visualization community to the photography community; he asked us to imagine a photography conference where the photographers were all horrified by how their field was being devalued by the ubiquity of photographs taken by average folks with their cellphone cameras. It’s not a perfect analogy, but certainly an apt one, and one that I think touched on a real “fear” within the community. As Stephen Few pointed out in his capstone the next day, there was a lot of laughter after Ola’s analogy, but it was nervous laughter.
Finally, Martin Wattenberg presented on the popular use of visualization on the web by introducing the concept of “vernacular visualization”: the use of “unsophisticated” visualizations by non-(visualization)-experts to “get things done.” I really liked this framework for understanding non-expert use, as it relates closely to Thomas McLaughlin’s concept of “vernacular theory” from cultural/media studies: the idea that non-experts (or non-academics) engaged in the production and consumption of (in this case) information visualization make use of a rich set of “theories” whose basis is not necessarily in traditional ivory-tower academic theory (and which may in fact break from it), but which are nonetheless very effective and meaningful for “getting things done.” Martin argued that in order to conceptualize “infovis for the masses,” we need to understand this vernacular. As an example, he pointed to the impact of xkcd’s “Map of the Internet,” a “vernacular” (literally a cartoon) visualization, and its influence on more traditional IP-space visualization within the infovis community (evidently it quickly became one of the most cited examples of IP-space visualization in academic infovis papers!). He closed by identifying three directives essential to promoting social data analysis: 1) love your data, 2) enable conversation, and 3) remember that visualizations can be very simple or very complex. He emphasized that “Anything can be successful [in popular visualization], so let’s try a lot of things!”
The discussion that followed in the Q&A session was varied and at times tense. The initial questions again reflected a lot of concern about the misappropriation of data on sites like Many Eyes, as well as a concern that visualization could be used in “unexpected” (read: negative) ways. This culminated in what I perceived as a bizarrely derogatory question from Ben Shneiderman, who asked the panel how we can move beyond the “modest” successes of their work, and increase the credibility and reliability of their data by “removing the garbage from view.” Aside from ignoring what these sites are actually about, this, again, struck me as characteristic of the dominant conception of information visualization at the conference as a truth-telling tool. It reflects a lack of awareness of visualization as a medium, with all the subtleties such awareness entails. As Martin pointed out, people misrepresent information in any medium, and several members of the panel reiterated that part of understanding visualization as a medium is gaining a literacy that allows you to detect the “lies.”
There was an interesting question about how to motivate an analytic approach (in visualizations) to understanding for non-experts, and how you might use elements of “artistic” infovis to bring people into analytic infovis. This is something I’m interested in myself. Fernanda Viegas responded by suggesting that non-experts are not necessarily trying to do “analysis,” but rather trying to tell stories, or trying to see themselves in the data, and that this is a type of use that should be taken into consideration (and supported). Robert Kosara suggested that designers should look at “playfulness” as a means to capture attention, and that giving users the “means of production” with regard to visualization will help engage them beyond what happens when they are merely presented with a finished product. Ola Rosling emphasized the need to keep things simple, as non-experts will quickly turn away from a tool that seems too complex.
In a related question, Zach Pousman asked how we might evaluate the effectiveness of social visualization tools, or even identify the right measure of effectiveness. Martin favored the “ethnographic” approach, arguing that since visualization is a medium, we need to look at how it’s being used and written about, rather than relying on traditional HCI metrics. The line of questioning ended there, but I think this is an area in need of much more exploration; it is fairly obvious that the “small-N” evaluations attached to most infovis research cannot accurately capture real-world usage patterns, particularly when talking about social or mass-audience systems. Coming from media studies, I always think of this as analogous to the situation the television industry is facing, where the networks are only just now realizing that Nielsen ratings don’t account for the myriad new ways (YouTube, etc.) that viewers encounter and interact with television content. The same could be said about visualization.
The rest of the questions continued to revolve around issues of data provenance, and whether or not it was “wise” to empower non-experts with the “means of production” of information visualization. On the whole, I couldn’t help but find the conversation discouraging. I felt that, at best, most of the questions ignored what was significant about the popularization of infovis, and at worst, their rhetoric was actually quite demeaning to the concept. Even the perspectives that favored empowering non-experts were based on an undercurrent of intellectual superiority – there were many repeated metaphors of non-experts being “illiterate” or having a “child-like” ignorance of how to do proper visual analysis. The idea that they could be using visualization in meaningful ways that did not necessarily adhere to the traditional “values” of infovis remained largely unaddressed.
InfoVis impressions, part 2: "Infovis for the Masses"
This is part 2 of my impressions of InfoVis 2007. Click here for part 1.
On Sunday afternoon was the panel I was most excited about coming into the conference, “Infovis for the Masses.” It featured Fernanda Viegas, Martin Wattenberg, and Frank van Ham from the IBM Visual Communication Lab presenting on Many Eyes, Wesley Willett representing a group from UC Berkeley presenting their paper “Scented Widgets: Improving Navigational Cues with Embedded Visualizations,” Jock Mackinlay from Tableau Software presenting “Show Me: Automatic Presentation for Visual Analysis,” and Zach Pousman representing a group from Georgia Tech presenting a paper called “Casual Information Visualization: Depictions of Data in Everyday Life” (I blogged about this one a couple of weeks ago). Ben Shneiderman chaired the panel.
However, while each of the panelists made interesting presentations, the direction of the panel as a whole came off as largely incoherent. It was clear that we were looking at several different conceptions of “infovis for the masses,” none of which were really able to interface with one another. To me, the presentations on Many Eyes and “casual visualization” were most related to the conception of popular visualization (“for the masses”) that I write about on this blog. They were also the most controversial, based on audience response. Many Eyes is about democratizing and socializing visualization, empowering anyone to become a producer and consumer of visualization, and encouraging the benefits that come from social interaction around the visualization of data. The paper on “casual visualization” suggests that there are valuable ways to use visualization that don’t have to do with solving specific problems in an analytic way, and that traditional design principles don’t necessarily support this type of interaction.
The other two presentations seemed less “masses”-focused to me. The “scented widgets” presentation was certainly interesting, but maybe too technique-specific for the scope of the discussion (or what I hoped the discussion would be). Mackinlay’s presentation was a demo of the charting features in Tableau. To him, “the masses” referred to corporate data analysts needing to produce charts and graphs with software like his. While I think Tableau is an impressive piece of work, its cost alone guarantees that it isn’t “for the masses.”
Finally, following the panelist presentations, Ben Shneiderman, one of the forefathers of the information visualization field, closed the discussion with a few statements related to this slide (it was taken from a presentation of his in 1998, but the slide he showed was the same) that hammered home the incoherence of the panel. He argued for the need for “scientific studies” and “whole product solutions” to bridge the gap between “visionaries” and “pragmatists, conservatives, and skeptics.” I may not have motivated it well enough here, but this, in my mind, had almost nothing to do with what was discussed in the previous two hours. Perhaps it spoke to what Mackinlay was talking about, but it utterly ignored the presentations about Many Eyes and “casual infovis.” Discussion of “infovis for the masses” should be about how non-experts consume and produce information visualization, not about how brilliant “visionaries” can sell product to them.
The questions from the audience at this panel mostly revolved around Many Eyes, but in my opinion also missed the point. There was a lot of concern over the validity of the data being uploaded to the Many Eyes site, and the possibility of “poor” visualizations being produced by people who don’t understand how to “best” use the visualization templates offered. These questions certainly reflected the characterization of the community described in part 1 of my impressions: overly focused on the scientific validity and usefulness of the content on Many Eyes. Such a focus ignores what Many Eyes is actually about; what’s interesting about the site is not really the data being uploaded, but the type of use and interaction that is happening around the visualization of that data.
InfoVis impressions, part 1
I got back from InfoVis 2007 a week and a half ago, and after catching up on the week I missed back home, I am just getting around to putting my impressions in order. Since I have a lot of stuff to post, I decided to serialize it. This is part 1.
Overall, it was a fantastic experience, and it really gave me some much-needed insight into what’s going on in the visualization community. There were many compelling presentations, panels, and posters there, but I’m mostly going to focus on the “for the masses” elements of the conference, since that is what is most related to my work. As the unofficial theme of InfoVis this year, the focus on the popularization of information visualization provided a fascinating and extended look into not only what work is being done in this nascent “genre” of infovis, but also how the information visualization community as a whole is responding to its ideas and incorporating them into the existing infovis domain.
To clarify that last point, I’ll start by saying that, as a first-time attendee of the InfoVis conference, I was quite surprised by the type of visualization discourse that dominated the event and, by extension, the infovis community it represents. The overriding impression I got from this conference is that the majority of research being done in the field of information visualization is focused on the technical details of designing visualizations (layout algorithms, data management, etc.), and is driven by a methodology that emphasizes a traditional scientific approach to this research. In other words, the field presents itself (to this admittedly naïve observer) primarily as a branch of computer science, obviously reflecting and reinforcing its origins in that field.
That said, my exposure to the field originally came through my work developing interactive scientific visualizations for educational purposes (teaching principles of physics and biology to undergrads and high school students), the design of which focused primarily on supporting informal learning rather than, or in addition to, “focused episodes of work” (to borrow a term from Pousman, et al.). Though these educational tools were often used to help solve specific problems, we were more concerned with making them approachable to non-experts and promoting a casual, exploratory usage model. Since I began to study information visualization, most of my encounters with it have come through the internet, and though I’ve read most of the “bible” texts of the field that are clearly representative of the “computer science” approach, the “live,” publicly available, high-profile examples of infovis that you typically see on the net would generally be characterized as “casual infovis,” or “information aesthetics,” or “data art,” or whatever you want to call it. This would include systems and tools like Many Eyes, the Baby Name Voyager, the ubiquitous work of design firms like Stamen Design, the various visual web search tools like TouchGraph, art projects like wefeelfine.org, probably some business-intelligence-related visualizations like the Map of the Market, and even advertising-related infovis like the now defunct Coca-Cola WorldChill visualization (there are of course many more examples; these are just the ones I can think of off the top of my head). So, going into the conference, my impression was that visualizations like these, regardless of the quality of their implementation, represented a legitimate area of study within the field of “information visualization.”
As it turns out, though, it appears that most of these examples probably wouldn’t be considered “information visualization” by the “information visualization” community represented by the InfoVis conference, presumably because, for the most part, they aren’t designed as tools with which you do rigorous analytic work. I saw a lot of evidence for this (which I’ll get to below), but it was most explicitly stated by Stephen Few in his capstone presentation: Stephen pointed to these kinds of examples on the web (in addition to some that I agreed were legitimately horrifying) as presenting a “primitive,” misleading view of what “information visualization” is, and suggested that it was the job of the conference attendees to be “model thinkers and communicators” that “take up residence in the real world” to show the “outsiders” what infovis is really all about. And this was framed as one of the more progressive viewpoints on visualization at InfoVis.
What was also surprising to me, based on the panels and presentations that explicitly addressed the idea of popular visualization, was that there didn’t seem to be much agreement on what “infovis for the masses” even referred to, or what its concerns should be. After the recap of the keynote below, the following posts are my observations on how this played out in the various related panels.
I already discussed Matthew Ericson’s Sunday morning keynote on info-graphics at the New York Times, so I won’t say much more about it other than to reiterate how useful I thought it was. Matt described a design process that was very much “in the trenches,” emphasizing the special considerations that need to be made when designing for a mass, non-expert audience (i.e., readers of the New York Times). This included their development of a vernacular theory of visualization design that at times broke with what traditional infovis theory would suggest; examples included his observation that scatter plots tend to confuse non-expert readers, because they are not familiar with graphs in which time is not on the x-axis. He also stressed the value of embedding text annotations into their graphics to help guide a user through the data – something that a Tuftean purist might consider chartjunk. I was very impressed with his presentation and was happy to have a chance to speak with him afterwards.
InfoVis impressions, part 2: "Infovis for the Masses."
InfoVis impressions, part 3: "The Impact of Social Data Visualization."
InfoVis impressions, part 4: "Infovis as Seen by the World Out There."
Monday, October 29, 2007
visualization at the new york times
InfoVis 2007 kicked off with a keynote presentation from Matt Ericson, Deputy Graphics Director at the New York Times, titled “Visualizing Data for the Masses: Information Graphics at The New York Times.” This is obviously a topic near and dear to my heart, as a publication like the New York Times represents a space where the average “layperson” (as in visualization non-expert) is most likely to encounter information visualization.
Fernanda Viegas has blogged about the talk already on behalf of infosthetics, so I don’t want to repeat too much of what she had to say, but I did want to emphasize a few points that Matt made in his presentation (and afterwards).
As Fernanda mentions, one of Matt’s main points was that they approach infovis primarily as journalists, and that they see their work as “storytelling for the masses.” At the same time, he expressed one of their design principles as “show, don’t tell”: constantly asking themselves where they can show readers what’s happening rather than telling them in words. Through a bunch of examples, Matt described the design challenges they face in producing these pieces, including extremely short publication deadlines, readability issues (his point about scatterplots was really interesting -- apparently readers have a hard time understanding a two-dimensional graph where time is not on the x (horizontal) axis), and issues of Tufte-style “honest portrayal,” where they must pay attention to whether the visualization gives the “right impression” of the data. His example of this latter point revolved around producing political maps (“red vs. blue” population maps, etc.) whose visual presentation reflects the actual statistics; for instance, he pointed to red vs. blue maps of the last presidential election results, by county, that show the US as primarily “red,” despite a much more even distribution in the popular vote.
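To make that "right impression" problem concrete, here is a toy calculation with entirely hypothetical county numbers (not real election data): when sparsely populated counties dominate the map's area, a county-colored map can read as overwhelmingly red even though the popular vote is nearly even.

```python
# Toy illustration of why a county-colored map can "look red" even when the
# popular vote is close to even: the map encodes land area, not people.
# All numbers below are made up for the sake of the example.

counties = [
    # (name, map_area_km2, votes_red, votes_blue)
    ("rural_a", 9000,  4_000,  1_000),
    ("rural_b", 8000,  3_500,  1_500),
    ("rural_c", 7000,  3_000,  2_000),
    ("metro_x",  500, 55_000, 65_000),
]

red_area    = sum(area for _, area, r, b in counties if r > b)
total_area  = sum(area for _, area, _, _ in counties)
red_votes   = sum(r for _, _, r, _ in counties)
total_votes = sum(r + b for _, _, r, b in counties)

print(f"share of map area colored red: {red_area / total_area:.0%}")   # ~98%
print(f"red share of the popular vote: {red_votes / total_votes:.0%}") # ~49%
```

A population-weighted encoding (a cartogram, or blending county colors toward purple by vote share) is one way to close that gap between what the map shows and what the statistics say.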
Also interesting, despite their “show, don’t tell” mantra, was Matt’s emphasis on the usefulness of embedding textual descriptions within the visualizations themselves to help guide users, as well as the usefulness of combining different types of visualizations to reinforce the data (for instance, coupling a more “complex” image with a simpler one). Related to this was his suggestion that “presets” or “shortcuts” into the data are particularly important in interactive visualizations, where users may be overwhelmed by the number of options on the interface. This directly reflects my experience developing educational visualization tools in the past.
So, after hearing all of this, I was really curious to know whether Matt’s group, or the NYT in general, does any analysis of the effectiveness (or even popularity) of their info-graphics. I got a chance to talk with him at length last night, and he indicated that while they would like to, they haven’t tried to collect that kind of information in a formal way. Being so restricted by deadlines, he described their operation as very much “by the seat of their pants,” where they mainly use their own editors as usability testers. He did suggest that they are trying to collect more statistics with their online interactive work, as it is much easier to capture information about what their readers are spending time on. I would be really interested to see the results of that kind of survey. Either way, the principles their team employs seem to be quite effective, as they haven’t received much negative feedback about the graphics they produce. This surely speaks to the particular considerations of designing visualization “for the masses.”
Matt has published his presentation slides at his website, http://www.ericson.net/.
More InfoVis to report on later, but my internet connection is spotty at the hotel, so it may take a while.
Friday, October 26, 2007
infovis 2007!
If anyone reading this is going to be there and wants to meet up, shoot me an email (it's down in the "about" section on the right side of the page). Otherwise, I'm hoping to experience a lot of blog-fodder while I'm there.
no love for financial visualization
The use of information visualization in the field of "business intelligence" is interesting to me, as it ostensibly represents an area where a large community of non-experts (with regard to visualization) is actively making use of information visualization, in the form of information dashboards, market maps, and other financial analysis tools. Somewhat curiously, despite their reliance on visualization systems, innovation in the visual design of these systems has evidently been lacking. Daniel Beunza has recently written about this in a blog post titled "Is there room for art in financial visualization?" I found this paragraph particularly telling:
[The lack of innovation] is surprising, because existing visualizations do not support profitable trading strategies. Indeed, most systems are based on timely news and time series of stock prices and volume. And yet, we know from basic financial economics that both past prices and news are a bad predictor of future stock prices. As for the ticker… the animated display of selected stock prices is a low-bandwidth visualization born in the era of the telegraph. That’s right: 19th C. Nowadays, information can travel much faster, and one does not need to wait for “my stock” to come up on the ticker.
As the rest of the post describes, there is a conflict between the design of new visualization strategies and the reluctance of their users to let go of what "works well enough." While this is obviously an issue with any type of software design (or just an issue with "change" in general), it is particularly surprising to see in an area that so clearly relies on visual displays to do business.
I think this emphasizes two important points related to the design of "popular" visualization: First, obviously, we should pay special attention to that "final step" of getting new systems deployed in the field. While designers can produce useful systems based on theory (or even user-centered testing), the tools are only useful if they can successfully be delivered to a user "in the wild." While it might be a logistical issue not related to the quality of the product, this final step may require special consideration at the design level.
Second (although I already gave it away), we need to be aware of the specific usage patterns of the people using (or not using) visualization tools. A design that looks good (haha) on paper may simply not integrate well within a user's existing "workflow." This is also not a particularly profound point, but it's one that often gets ignored in visualization design (even in "casual" systems).
As far as business intelligence goes, I'm excited to hear Stephen Few speak at InfoVis in a few days, as he is well known for his analysis of visualization in that field. Again, it strikes me as an interesting test-bed for popular visualization design (also, I am particularly intrigued by the use of symbology and semiotics in dashboard design, but that's a topic for another post), so I'm anxious to learn about the theoretical and methodological design strategies employed by designers in that particular sector.
Thursday, October 18, 2007
casual (as in casual) visualization
Casual Infovis is the use of computer mediated tools to depict personally meaningful information in visual ways that support everyday users in both everyday work and non-work situations.
The paper, "Casual Information Visualization: Depictions of Data in Everyday Life" by Zach Pousman et al., is a great read that focuses on the visualization needs and usage patterns of non-experts. They keenly identify characteristics of infovis that are meaningful to this group (including the personalization of infovis systems that I was trying to gesture at in my thoughts about the smartvote.ch system), and suggest ways to emphasize a type of information interaction that "exists outside focused episodes of work." I was particularly interested in their discussion of "utilitarianism" versus "usefulness," as well as their re-consideration of evaluation methods for "casual" infovis systems. Being entrenched in the world of media studies, I am in complete agreement with an ethnographic approach to visualization evaluation.
My only criticism of their argument is that they seem to distance the various types of (valuable) insight gained through the use of casual systems from the analytic insights ostensibly achieved with "traditional" infovis systems. Ideally, casual use and analytic insight need not be mutually exclusive processes. While a casual system will necessarily avoid the levels of complexity characteristic of deeply analytical ("work focused") visualization tools, it should still (I would hope) encourage an appreciation for analysis in non-expert users while emphasizing the types of approachability described in this paper.
Anyways, I am really looking forward to hearing them present this paper at InfoVis 2007, as well as the whole "Visualization for the Masses" panel. It's very exciting to see visualization being thought about in this way.
Monday, October 15, 2007
casual (as in sex or Friday) visualization
Today Nick pointed me to this article by Ian Bogost that asks a similar question with regard to game design, with particular focus on casual games. Ian maps out the casual game space in terms of the player's level of involvement:
Applied to games, casual as informality characterizes the notions of pick-up play common in casual games while still calling for repetition and mastery. This is why casual games can value both short session duration and high replayability or addictiveness. Casual games may allow short session play time, but they demand high total playtime, and therefore high total time commitment on the part of the player. Low commitment represents the primary unexplored design space in the casual games market.
[...] If Casual Friday is the metaphor that drives casual games as we know them now, then Casual Sex might offer a metaphor to summarize the field’s unexplored territory. If casual games (as in Friday) focus on simplicity and short individual play sessions that contribute to long-term mastery and repetition, then casual games (as in sex) focus on simplicity and short play that might not ever be repeated—or even remembered.
While I'm not sure how well these conceptions of casual games literally map back to the world of information visualization (although the "Casual Friday" model sounds like the sort of happy relationship with visualization tools that I could support), the terms of the analysis undoubtedly do. Just as casual games took gaming mainstream, a "pick up and play" paradigm will have to define popular encounters with information visualization tools. And while the "easy to learn, difficult to master" casual game design philosophy already echoes Ben Shneiderman's long-standing information visualization mantra of "overview first, zoom and filter, details-on-demand," there's certainly more insight to be gleaned from the sort of contextual consideration Ian presents here.
Actually, given their similarity in terms of user interaction, popular information visualization design might do well to learn from the successes of the casual games sector!
Thursday, October 4, 2007
visualizing... Halo 3!
Tuesday, October 2, 2007
visual search engines
Briefly looking at each tool (KartOO and TouchGraph) individually, searching on "information visualization": I like KartOO's related semantic categories listed on the left, but I don't find the visual element particularly useful. Some features are confusing or counter-intuitive: What is the significance of the colored "heat map" in the background? What is it registering? It suggests that the area between nodes is meaningful, but I'm fairly certain it isn't (unless very abstractly). Also, if it is some kind of density map, its distribution should be weighted by the size of the nodes it refers to... again, this doesn't seem to be the case. Adding to the confusion, the lines connecting nodes are only visible when you mouse over them. Finally, KartOO's approach is to present several "maps" per search, but it isn't at all clear how those maps are related (which they presumably should be, given that they are all built off the same search terms).
Visually, I like TouchGraph better. The interface is much cleaner, as is the relationship between nodes. It also handles much more data on the same map, which is both a blessing and a curse. While it allows you to see much more structure, the maps can quickly get too dense to read. There are some simple filtering mechanisms provided, but more robust options would be nice; for example, using transparency to (de)emphasize elements of the graph, so that you could focus on certain areas while still retaining some perception of the larger whole. As it stands now, you can only make nodes visible or invisible, which can be misleading in some cases. One very nice feature is that the logo of a website is attached to its node, so that if you know your logos, you can immediately identify various groups of pages (for instance, Blogger pages are identified by the red "B" logo, and can often be seen grouped together).
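As a rough sketch of the kind of filtering I have in mind (this is not TouchGraph's actual API; the node structure and alpha values are invented for illustration), de-emphasizing filtered-out nodes with transparency keeps the overall shape of the map visible instead of discarding it:

```python
# Minimal sketch: dim nodes that fall outside the current focus rather than
# hiding them, so the larger structure of the graph stays legible.

from dataclasses import dataclass

@dataclass
class Node:
    label: str
    domain: str
    alpha: float = 1.0  # 1.0 = fully opaque, 0.0 = invisible

def apply_focus(nodes, predicate, dim_alpha=0.15):
    """Keep nodes matching `predicate` opaque; dim (rather than hide) the rest."""
    for node in nodes:
        node.alpha = 1.0 if predicate(node) else dim_alpha
    return nodes

# Example: focus on Blogger-hosted pages while keeping the rest faintly visible.
nodes = [Node("blog_a", "blogspot.com"), Node("lab_page", "example.edu")]
apply_focus(nodes, lambda n: n.domain.endswith("blogspot.com"))
for n in nodes:
    print(n.label, n.alpha)
```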
Tuesday, September 25, 2007
the semantic side of visualization
Last week, the MIT Communications Forum (hosted by the Comparative Media Studies department) held its first event of the Fall, a forum entitled "What is Civic Media?" (podcast available here). As a sort of launch-party for the new MIT Center for Future Civic Media (a collaboration between CMS and the Media Lab), the forum addressed the new possibilities for civic engagement and "participatory democracy" afforded by the rise of "Web 2.0 related" media (blogs, *casts, virtual worlds, etc.). To my pleasant surprise, information visualization was well represented as an important tool for promoting these practices! One of the panelists, Ethan Zuckerman of Harvard's Berkman Center for Internet and Society, presented several map-based projects designed to visualize media attention across the globe. Beth Noveck of NYU Law School explicitly stated the need for effective visualization of civic data to engage and inform participant citizens. I couldn't agree more!
She mentioned several existing tools out there, including the sense.us project (more on this in a minute), but I was most intrigued by her reference to smartvote.ch, a site that has evidently been deployed in Switzerland for several years now. In their words:
smartvote is a scientifically conceived online election aid for communal, cantonal and national elections in Switzerland.
smartvote aims to:
- improve the transparency of elections and give voters a new way to make an informed choice
- increase people's interest in politics
- demonstrate the potential of e-democracy and e-voting
The political profiles of candidates are established through questions about their policies and attitudes, and these are saved in a database. Voters can then answer the same questions, and smartvote will recommend to them those candidates who show the closest political match.
(Please excuse any messy formatting -- I've about had it with Blogger's post editor!)
The site has a strong visual component, where candidates' positions on various issues (gleaned from a questionnaire) are presented in several graphical formats, including a spider chart that creates a representation of their political "footprint." What makes it more compelling, though, is that a user can fill out the questionnaire as well, and see how their spider chart compares with those of the various candidates. This may be a trivial observation, but this sort of interactivity (at the "extra-visual" level) undoubtedly makes the system more engaging. Essentially, they are offering the user the ability to insert themselves into the data being visualized; we can easily imagine that a user would be more compelled to explicitly match candidate "footprints" to their own rather than simply browsing the footprints with only an abstract idea of what they are looking for. A huge part of engaging a user in a visualization tool (or any tool, for that matter) is getting them invested in what they are looking at; what better way than allowing them to see themselves in the data (incidentally, this may have been the sort of entry point that made the Baby Name Voyager so popular)? The smartvote.ch site has only a limited English translation, so I'd be curious to find out how it is being used and received by its audience.
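For what it's worth, the matching step can be imagined as something quite simple. smartvote doesn't spell out its scoring method in the material I've seen, so the sketch below is just my own guess at the general shape of it: rank candidates by the distance between their questionnaire answers and the voter's, using hypothetical answer vectors coded on a 0-4 agreement scale.

```python
# Sketch of questionnaire-based matching (hypothetical answers and scoring,
# not smartvote's actual algorithm): candidates closest to the voter's
# answer vector come first.

import math

candidates = {
    "candidate_a": [4, 0, 3, 1, 2],
    "candidate_b": [1, 3, 1, 4, 2],
    "candidate_c": [3, 1, 4, 0, 1],
}

def distance(a, b):
    """Euclidean distance between two answer vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_matches(voter_answers, candidates):
    """Return candidate names sorted from closest to farthest political 'footprint'."""
    return sorted(candidates, key=lambda name: distance(voter_answers, candidates[name]))

voter = [4, 1, 3, 0, 2]
print(closest_matches(voter, candidates))  # ['candidate_a', 'candidate_c', 'candidate_b']
```

The same answer vectors are what the spider chart draws, which is why overlaying the voter's "footprint" on a candidate's is such a natural comparison.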
The second visualization project using extra-visual techniques to great effect is the aforementioned sense.us and, more recently, Many Eyes. Developed by the folks at the IBM Visual Communication Lab, including Fernanda Viegas and Martin Wattenberg, both of these tools emphasize "the social side of visualization." In sense.us (which doesn't appear to have a live demo anymore), users can explore a series of visualizations of United States census data. While the visuals themselves are quite nice and clear, the key feature of the tool is the way that it encourages collaborative analysis, "recasting visualizations as not just analytic tools, but social spaces." Among other features, sense.us embeds comment boards within the visualization, allowing users to comment on, and mark up, particular features of the graphs (including the ability to "hyperlink" to specific views of the data, so that other users can see exactly what you have discovered through your manipulation of filters, etc.). This is all designed, obviously, to encourage discussion of the data -- another great way to engage and invest users. (Martin and Fernanda gave a great presentation on Many Eyes last spring at the CMS graduate colloquium. For some reason the podcast of the talk was never posted, but I'm working on rectifying this and will post a link to it in the near future.)
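That "hyperlink to a specific view" feature is worth dwelling on, since it is what lets a comment point at exactly the configuration of filters its author was looking at. A minimal sketch of the general pattern (not sense.us's actual implementation; the parameter names here are invented) is to serialize the view state into the link itself:

```python
# Sketch: encode the current view state (dataset, filters, ranges) into a URL
# fragment so it can be pasted into a comment, and decode it on the other end.
# Parameter names and the base URL are hypothetical.

from urllib.parse import urlencode, parse_qs

def view_to_url(base_url, view_state):
    """Encode a view-state dict as a shareable link."""
    return f"{base_url}#{urlencode(view_state)}"

def url_to_view(url):
    """Recover the view state from a shared link."""
    _, _, fragment = url.partition("#")
    return {k: v[0] for k, v in parse_qs(fragment).items()}

state = {"dataset": "census_occupations", "filter": "teachers", "year_range": "1850-2000"}
link = view_to_url("http://example.org/vis", state)
print(link)  # http://example.org/vis#dataset=census_occupations&filter=teachers&year_range=1850-2000
assert url_to_view(link) == state
```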
This feature set continues in Many Eyes, which is a live service that allows users to upload their own data sets and create visualizations of them based on a series of graph and chart templates provided by the site. Once again, the graphics are quite nice (and all fully interactive), but the real draw is the "extra-visual" social element that encourages discussion and collaboration. I see it as a way of subverting the sorts of purely visual issues that come up when designing information visualization tools (some of which I have written about here already); essentially, it recognizes that a universally readable visual syntax may be impossible to produce, and simply uses this social framework as a "helper." Aside from more analytical discussion, a user confused or intimidated by the data has the ability to voice that confusion and learn from other users.
The value of this kind of interaction shouldn't be overlooked - I saw this while designing educational visualizations. Visualization can be imposing and confusing; many times it presents itself as a stand-alone, fully functional tool that dispenses information and expects to be understood without much qualification. It's very easy for users to become discouraged in this situation if they can't immediately "read" what is being presented (Donald Norman talks about this in "The Design of Everyday Things"), which is exactly the opposite of the experience someone should have with information visualization. Incorporating this social framework, I think, goes a long way towards easing this tension. If nothing else, it reminds users that visualization is for the people.
The final project I've been interested in along these lines is the work Hans Rosling has done at gapminder.org. Rosling has been using the Trendalyzer software to develop interactive, animated tools that "visualize human development." I first discovered him through his two great TED talks ("New insights on poverty and life around the world," and "Debunking third-world myths with the best stats you've ever seen"), which were extremely entertaining (for me, at least!). However, while the software itself seems well designed, what makes his presentations so compelling is the narrative that he builds around the software and the data it is presenting. Rosling acts as a sports announcer, giving a running commentary on the graphics as they move across the screen, and not only is his excitement for the data contagious, but it clearly increases the effectiveness of the visualizations (evidently so much so that the new "gapcasts" at gapminder.org feature Rosling green-screened over the Trendalyzer images, continuing to narrate the action!).
So, what does this mean in terms of visualization design? In this case it isn't totally clear to me yet. Clearly, narrative can make information visualization far more compelling, and Rosling has produced expert examples of this. But how might designers incorporate some of the "magic" that Rosling generates in these presentations without the services of a charismatic Swedish sword-swallower (watch the first TED talk all the way to the end!)? In principle it should be possible, as he isn't cheating too much; most of the information he references in his narration is contained within the visualization he is presenting. So is it just his charismatic performance that adds value here? Certainly that's part of it, but ultimately he is serving as a guide, giving a tour of the data. In that sense, he's doing something that could conceivably be implemented in other ways, which might mean providing context for the visualization, or "tutorials" for its functionality, or otherwise strengthening its narrative qualities. Or perhaps this overlaps with the sort of social activity used in Many Eyes. Or maybe information visualization does need more Swedish sword-swallowers.
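Purely as a thought experiment, one literal way to bottle a bit of that guided-tour quality without the sword-swallower would be to script the narration as data: a sequence of captioned view states that the visualization steps through on its own. Here's a minimal, hypothetical sketch; none of this reflects how Trendalyzer actually works, and the captions are just illustrative:

```typescript
// Hypothetical sketch of a scripted "guided tour": each step pairs a view of the
// data with a caption, and a simple player advances through the steps over time.
interface TourStep {
  caption: string;        // the narration shown alongside the chart
  year: number;           // which frame of the animation to display
  highlight: string[];    // which countries/series to emphasize
  holdMs: number;         // how long to stay on this step
}

const sampleTour: TourStep[] = [
  { caption: "In 1962 the countries fall into two distinct clusters.", year: 1962, highlight: [], holdMs: 5000 },
  { caption: "Watch East Asia move toward longer lives and smaller families.", year: 1980, highlight: ["China", "South Korea"], holdMs: 7000 },
  { caption: "By 2003 the old 'industrialized vs. developing' split has blurred.", year: 2003, highlight: [], holdMs: 5000 },
];

// render() is whatever function updates the chart and caption for a given step.
function playTour(steps: TourStep[], render: (step: TourStep) => void): void {
  let offset = 0;
  for (const step of steps) {
    setTimeout(() => render(step), offset);  // advance to this step after the delay
    offset += step.holdMs;
  }
}
```

Obviously this is a far cry from Rosling's delivery, but it makes the point that at least part of the "guide" role can be encoded and shipped with the tool itself.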
In any case, what these examples indicate to me is that visualization tools are more than just visuals. While pure visual design works in many cases, addressing semantic and "extra-visual" content, directly or indirectly, can also encourage interaction and investment.
Tuesday, September 11, 2007
information and aesthetics
One of the dimensions of information visualization that I’m particularly interested in is the impact of aesthetics on the design of functional visualization tools “for the people.” This is basically the age-old debate of form versus function, or beauty and expressiveness versus practicality and usability, or any of the variations on that theme. As someone who was trained in both visual art and physics, I’ve spent a lot of time thinking about this (ostensible) opposition in various contexts over the years (as anecdotal evidence, I consider most of my work here to be “aesthetic remixes” of versions I developed for more formal educational purposes). It’s also a fairly subtle one for a number of reasons, but particularly because the “first principles” of aesthetics aren’t as easily categorized as those of other fields that inform information visualization design (graphic design, perception/cognition, semiotics, HCI, etc.), making it difficult to establish what works and what doesn’t.
Last week I criticized Twitter Blocks on the grounds that it was lacking in usability, despite being aesthetically compelling. Tom Carden’s response (as one of the designers of Twitter Blocks) got me thinking more about the role of aesthetics in visualization once again. While I understand and agree with his position (visualizations need not necessarily be useful to be interesting), I’m mostly concerned with the “traditional” goals of information visualization, and the ways in which we can develop a more robust literacy for them. In that sense, I’m inclined to promote usability over aesthetic novelty. That said, what’s clear to me from examples out “in the wild” like Twitter Blocks and the Digg Labs pieces (I don’t mean to pick on Stamen’s work; it’s just that they’re one of the most visible popular visualization design shops out there right now), and from the discussion surrounding them, is that the aesthetics are compelling: in walking the line Tom describes in his post, Stamen has produced a body of visualization work that both excites (“Beautiful!”) and frustrates (“Useless!”). This suggests that incorporating an aesthetic approach to visualization design can be beneficial in its capacity to engage users, but that taking it “too far” (whatever that means in specific cases) may be counter-productive if we are looking to develop a kind of standard visualization literacy. The big and difficult question, then, is determining what aesthetic considerations we can take away from these and other examples that will support this goal.
I recently discovered a set of three research papers by Andrew Vande Moere (of infosthetics fame) and his grad students at the University of Sydney, Nick Cawthon and Andrea Lau, that attempt to address this question:
- A Conceptual Model for Evaluating Aesthetic Effect within the User Experience of Information Visualization. Nick Cawthon, Andrew Vande Moere (no public link available).
- Towards a Model of Information Aesthetics in Information Visualization. Andrea Lau, Andrew Vande Moere.
- The Effect of Aesthetic on the Usability of Data Visualization. Nick Cawthon, Andrew Vande Moere.
The first paper begins to describe the role of aesthetics in visualization, arguing that aesthetics is as important an influence on user experience as the more traditional components of information visualization. Parallels are drawn to the aesthetics of cinematography, choreography, and staging as means of streamlining a visual presentation. An appeal is made to the importance of aesthetics in producing effective information narratives, in developing useful visual metaphors, and in using interaction to encourage “play.” I agree with all of these points, and I think they are particularly relevant to the concept of “information visualization for the people.” Part of what makes a visualization successful (and, presumably, effective) is its ability to engage the user, and I’m confident these techniques, in principle, promote engagement. I spent a lot of time thinking about designing playful tools when I was working on educational visualizations and simulations for teaching physics and biology; the trick was to make students forget they were learning!
That said, while I think this is the right way to frame an analysis of aesthetics in visualization, the problem in practice is usually one of degree. The informed “use of aesthetics” (itself a difficult concept to pin down) can go a long way towards improving effectiveness, but without a measured approach you run the risk of interfering with the effective presentation of your data. Setting aside the designer’s intent, this is my critique of many of the more popular visualizations on the web right now: too much focus on beauty and not enough on functionality. That’s fine for work intended to be artistic, but it works against the development of a more functional visualization literacy.
The second paper addresses this issue by categorizing various “sub-genres” of visualization along an aesthetic spectrum.
I was less convinced by this analysis, for a couple of reasons. First, it seems more concerned with promoting “information aesthetics” as a valid visualization field (one that “reaches beyond” information visualization) than with providing concrete details of what its benefits actually are. It establishes a large “grey area” to be defined as information aesthetics, but leaves me wondering about the nuances within that space. Second, I’m not entirely convinced that it makes sense to divide visualization-space into these vague categories; I’m more inclined to divide the space into two broad categories (information visualization and “information-based art”) than to confuse the taxonomy with all these sub-genres (that sort of categorization reminds me of the taxonomy of electronic music). Some of the rhetoric in the paper suggests that designers decide to produce work within these sub-genres, rather than the work being assigned to a genre by virtue of its characteristics. If I’m being very cynical, it almost sounds like a broad space is being carved out to excuse information design failures: if an information visualization isn’t coherent enough to be effective, don’t worry – it was supposed to be an information aesthetic visualization.
The third paper was the most interesting. As the title suggests, it presents research aimed at quantifying the effect of aesthetics on visualization usability. The researchers devised a web-based survey that asked participants to answer information-retrieval questions based on a series of visualizations characterized by (presumably) varying levels of aesthetic attractiveness. They were primarily measuring response time; in particular, erroneous response time (the amount of time spent on a question before getting it wrong) and latency of task abandonment (the amount of time a user spends on a question before opting to “give up”). Their theory was that increased aesthetic quality would decrease these times. Without going into the details, they were right: the visualization deemed most attractive by the participants generally performed best in the survey. It’s very interesting to see this quantified in some way.
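For concreteness, here is roughly how I would expect those two metrics to be computed from a log of survey responses. This is my own reconstruction from the paper’s description, not the authors’ code, and all of the field names are made up:

```typescript
// Rough reconstruction of the two timing metrics from a logged set of responses.
interface Response {
  questionId: string;
  timeSpentMs: number;                              // time spent on the question
  outcome: "correct" | "incorrect" | "abandoned";   // how the question ended
}

function mean(xs: number[]): number {
  return xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : NaN;
}

// Erroneous response time: time spent on questions that were answered incorrectly.
function erroneousResponseTime(log: Response[]): number {
  return mean(log.filter(r => r.outcome === "incorrect").map(r => r.timeSpentMs));
}

// Latency of task abandonment: time spent on a question before the user gave up.
function abandonmentLatency(log: Response[]): number {
  return mean(log.filter(r => r.outcome === "abandoned").map(r => r.timeSpentMs));
}
```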
However, to be fair, I felt there were several issues with their survey (which I took myself the other day), most of which are acknowledged in the paper. The most obvious is that the visualizations used in the survey were all static, non-interactive images. A few of them were snapshots of three-dimensional systems, which really require interactivity (the ability to rotate the structures, and so on) to make sense of, so those got a raw deal. But even for the 2D ones, viewing static images isn’t representative of working with information visualization. Similarly, although the test visualizations were drawn from actual visualization systems, they didn’t strike me as representative of the sorts of visualizations (information, or information aesthetic) that are out in the wild these days. They were pretty conservative as far as aesthetic detail goes, and the use of a consistent color palette across examples reinforced this. Oddly enough, that choice actually struck me as representative of the sort of “standardization” I’ve been promoting to encourage popular literacy. I would be really interested to see a similar survey involving actual examples from the web; the measure of aesthetic is qualitative enough that enforcing consistency between survey examples seems unnecessary and maybe counter-productive.
Another possibly significant issue was the participant demographic. It wasn’t a randomly selected sample – the survey was primarily advertised to people in the design community, who most likely have a higher than average visual literacy. If we’re trying to determine the effects of aesthetics on visualization usability, these probably aren’t the best people to survey. Also interesting to note was that nearly half of the participants in the survey did not complete it; their data wasn’t used in the results.
Finally, I was slightly amused by what almost sounds like an admission of bias in their conclusion: “The original purpose of this research was to increase the awareness of the positive role and purpose of aesthetic in the design of data visualization techniques.” Does this mean anything? Who knows!
In any case, these were all great reads. They’re not long, so I encourage anyone interested in these issues to look them over. While I’m not totally convinced by some of their conclusions, I’m glad that someone is asking these questions in the first place. Aesthetics can undoubtedly inform a visualization literacy, but nailing down how much is too much is definitely tricky.