Andrew Hinton | @inkblurt | Elements of Information Environments. I'm Andrew Hinton, and I'm an information architect with The Understanding Group. Today we're going to be talking about information environments -- and as a way into that conversation, I've titled this talk "the world is the screen" -- so let's start by considering what I mean by that.
Screens are proliferating to the point where we're interacting with them as often as any other objects in our surroundings. So, in a way, we might say that screens are filling up the world to the point that it'll feel like the world is made of them. There's some truth to that. But it's just one facet of the issue. kiosk: kodak.com; table/phone: android.com; gps: garmin.com
Another way the world is the screen is because of technology like Google Glass, which essentially lays a screen over the world around us, mediating between our perception and the stuff we're perceiving. This is certainly worth considering, but it, too, is only a facet of how the world is the screen. It's also the case that these things -- all sorts of device screens and augmented displays -- are getting integrated into our environmental experience. left image wired.com / others from Google
But our experience of screens isn't confined to the things themselves. They're part of a larger context. Here, on an airliner, I surreptitiously snapped a picture while waiting for take-off. I did it because it's a good example of how we don't just sit in front of screens alone, in a vacuum. We do it as part of our activity in the world around us.
Here, people were talking to friends about a football game in progress -- a game that had a mirror-world of itself happening in a little avatar on a smartphone. There's a relationship between the digitally generated "information" environments we use, and the non-digital environments we live in. We live in both. I wondered, at the core of how we understand the world, are there differences between them? Do they matter?
Here's another example. What is the difference, really, between shopping for office supplies in a brick-and-mortar retail store, and shopping for them through an online retailer like Amazon? If we frame both of these as "information environments" -- does that help us understand why one might be eating the other's lunch? If we think of the physical store as an information interface, how much information is conveyed, and of what kind, through one interface versus another?
What about when information environments use all sorts of methods for communication, all at once? We've had blended information environments for a while. Here's a marvelous exhibit at the American Museum of Natural History in New York. It has a whole taxonomy -- the old-school meaning of taxonomy, meaning an organized hierarchy of creatures -- but this one is instantiated on the wall. The same taxonomy is also represented in written form, in a printed document, and in digitally presented form, in a kiosk. (These were taken when I took my daughter there some years back.) This is a curated, complex information environment. Physical objects, digital interfaces, lots of language and labels around. All connected together to form a whole experience. It's a highly controlled version of the world we now live in -- which is more emergent, messier, but even more pervasively connected & digitally enabled. photos by andrew hinton
Here’s an everyday intersection in Dublin. This is an environment that also has many different layers and modalities, but it’s not controlled and curated in the same way as the museum. It’s been added to, streets have been widened, signage added, infrastructure installed. And on top of that, lots of other information is pouring through it in the form of newspapers, or advertisements on buses and vans. The digital signs are something relatively new for our environments. It used to be that signs said one thing, and you learned what they said, and then you could forget about them until somebody put up a new one. These days, we can’t depend on surfaces being stable, persistent homes for written information. The stuff is embedded in all sorts of places. This street intersection in Dublin has digital signage mixed in with everything else. Pervasive computing technology means that the world is only getting weirder and more complex. We’re not talking about just consumer devices, but whole infrastructures, urban networks, and wired economies. photo by andrew hinton
What do we mean by "Information Environment"?? For over a decade, we've been saying that IA is, in part, the structural design of shared information environments. But what do we mean by that phrase? It sounded right at the time, because even when most of what we were doing was static websites, we knew the scope could be bigger, and that the world was going to change toward more complexity. So, here we are in that spot we supposed we'd be in -- with all this pervasive information complexity -- and it seems high time to nail this thing down better.
Do our methods work (or not)? Methods, Tools, Processes: Thesaurus, Card Sorting, Controlled Vocabularies, Facets, Taxonomies, Mental Models, Navigation, Labels, Affinity Diagrams, Task Analysis, Hierarchies, Hyperlinks, Context Models, Ontologies. I'm actually a bit worried, because I'm not sure our current tool sets are really up to the task. We have a lot of methods, but not a lot of understanding about why or how they actually work. (Kind of like antidepressants.) We also tend to talk about a lot of things like "understanding" and "information" and whatnot -- but what do we mean by those things? We need more rigor, more science -- I don't mean information science, but science about humans. I've been working on a book about how information creates and shapes context. And in part of that work, I've had the realization that we're often looking at information and environments the wrong way around, by starting with the technology first. (8 min)
Especially since all the technology is becoming more and more pervasively integrated into our surroundings, now I’m thinking we should start with something more basic -- how do we comprehend our environment generally? What if we start with pre-digital structurally designed environments? >> And even further: is something like this field not just an environment, but an information environment? I believe it is. We’ll get to that, but first something from ten years ago. http://commons.wikimedia.org/wiki/File:Derbyshire_Landscape.jpg
Back in 2003, at the IA Summit in Portland, Oregon, Stewart Brand gave the opening keynote. One of the things he discussed was "pace layers" -- the idea that some layers of human life move more slowly than others. Nature changes very slowly, and all the stuff we've built up from that foundation tends to move and change more quickly -- quicker and quicker still at each concentric layer. photo: Mike Lee http://www.flickr.com/photos/curiouslee/15238458/sizes/o/in/photostream/
PACE LAYERS OF INFORMATION ENVIRONMENTS. Taking that idea and running with it, I'm working out a sort of pace-layer stack for information environments. At the root is our perception and cognition of environment -- these are things that don't, at core, change much at all over millennia. Then there's spoken language, something we've had with us possibly for over a million years -- to the point that it's probably a shaping factor in our evolution as a species. Writing and graphical symbolic language come later than speech. They're technologies, in a sense, for encoding, recording and sharing spoken language. And only later do we get into information organization and design, or what we call "information technology" -- digital computing, networks, & devices. We tend to start our work through the lens of the upper two layers -- but they're the ones that change and fluctuate the fastest. >> I think we should start with perception/cognition as the lens for understanding the rest.
Ecological: animals (including people) perceiving the environment. Semantic: people communicating with people. Digital: digital systems transmitting to & receiving from other digital systems. There's a long history of people trying to define information. I'm not into defining things so much these days -- I'm more interested in describing them. And that frees us up to understand a thing in more than one mode or dimension -- to be OK with grasping something in all its facets. Rather than defining information, I'd like to describe how it operates. I think information affects perception and understanding in three major modes. Let me mention them all, then we'll look at each in more detail. >> First is "ecological" information. It's about how animals perceive their environment. >> The second is "semantic" information: it's the mode people use to communicate with one another. >> Third is digital information: digital information is information used by digital systems to transmit to and receive from other digital systems. It's what happens between the black boxes of our digital infrastructure. Like I said, we'll look at each of these more closely. Let's start with ecological information. (12 min)
Ecological: animals (including people) perceiving the environment. So, starting with ecological information. The word ecological means having to do with the relationship between an animal and its natural environment. I'm using the term this way because many of the ideas I'm using are based on ecological psychology and embodied cognition, which differ from mainstream cognitive science.
Mainstream cognitive science assumes the brain: • Works like a computer to "process information." • Uses symbolic logic, "images" & representational models. • Is primarily (if not exclusively) responsible for cognition. Mainstream cognitive science, which still forms the foundation for most HCI theory and practice, assumes that the brain works like a computer, as a sort of information processor. The brain takes sensory inputs from a sort of dumb, robotic body, processes those inputs as "information" -- representational images and symbols of the world, along with images and symbols stored in memory -- and once it has figured out what to do, it tells the body how to respond. This is still the predominant way of seeing how the brain works. It's part of the assumptions built into many of our methods and training.
On the other hand, there's embodied cognition theory. Embodied cognition argues that cognition is not brain-exclusive, but actually uses the body and even the environment around the body for cognitive activity. There are many flavors and schools of thought even within the embodiment movement; but one in particular, which some call "radical embodied cognition," says we should not try to marry embodiment with the traditional cognitive science perspective, but replace it entirely. Full disclosure: the 'radical' or 'replacement' camp is the one I find myself aligning with.
The Ecological Approach to Visual Perception. Long sidelined, now hailed as a pioneer of embodied cognition. The so-called 'radical embodiment' movement has adopted the work of James J. Gibson, a scientist who developed something called "ecological psychology" in the mid-20th century. He started out studying WWII pilots -- and found that centuries-old assumptions about how people comprehend their environment were simply wrong. His ideas have been acknowledged and quasi-appropriated here and there, but now many are starting to see his whole corpus of thought more clearly -- he was really writing about embodied cognition (but calling it ecological psychology).
A few key ideas from Gibson's theories: We perceive the environment in human-scale terms, not scientific abstractions. We perceive the environment as "nested," not in logical hierarchy. We perceive elements in the environment as invariant (persistent) or variant (in flux). There's no way to cover all the important stuff from Gibson in this talk, but here are a few key ideas. >> We perceive elements in the environment as invariant or variant. Invariant elements are necessary for orientation of everything else. They include, at the widest scale, the earth and the sky. Or perhaps a mountain range. Or even the occluding edge of one's nose. Variant elements are things in flux that we don't rely on for persistent structure. >> We perceive the environment in human-scale terms, not scientific abstractions. Perception doesn't grasp the abstraction of space or time. Our bodies don't perceive a fallen tree limb in terms of centimeters, but in terms of whether it will fit in the hand, or whether it's too heavy to pick up. >> We perceive the environment as nested. A stream is nested between banks, which are nested between hills, which are nested within a range of larger hills, all of which is nested within the canopy of sky. This is importantly different from strict hierarchy, though. It overlaps and shifts depending on the activity of the perceiver. A cave might feel like "inside" but then feel like "outside" when rain starts leaking in. A stone may just be clutter to me when I walk by it the first time, but when I need a stone to pound something, it becomes an object I can pick up. Then when I pick it up, it becomes an extension of my body. All of these are important ideas for the structures we make for digital and other systems, because our cognition expects the world to accommodate these ways of perceiving.
AFFORDANCE: We perceive affordances. "...the perceived functional properties of objects, places and events in relation to an individual perceiver." - JJ Gibson. JJ Gibson invented the concept of affordance. Others have since popularized it, but gotten it somewhat wrong -- mainly because they're coming at it from a traditional cognitive-science perspective, not an embodied one. For Gibson, affordance isn't a thing you add to something. Affordance is the organizing principle behind *ALL* perception. We don't perceive anything unless it affords meaningful action for a given context.
Ecological information "pick-up": agent, perception, action, ambient structured energy arrays. We perceive affordance through something called "information pick-up." A perceiver, or agent, takes *action* in an environment in order to discover its affordances. The action part is very important. Gibson rails against traditional cognitive science laboratories that strap people into chairs to keep their heads still -- cognition doesn't function from stationary positions. We evolved as active, moving, interacting creatures that perceive through action. And when we act in the environment, we perceive, which then affects our action, which then affects what we perceive, in a continuous loop of cognitive activity. This is a very different way of thinking about "information" -- but it's valid, and forms the basis for all the other sorts of information in our lives.
This is my dog, Sigmund. When I try taking him for a walk, he'll stop as if the ground has grabbed him. Sometimes I'll let him explore to see what's up, and it's almost always something that I didn't perceive the way he did -- either because it wasn't relevant to me or because I physically can't perceive it. I've learned a lot by watching my dog figure out the world. It's not that different from us. He just doesn't have the rich layer of language draped across the world like we do. It's that layer of language that humans have added to the environment that makes up the next information mode. (+7 = 19 min)
Semantic: people communicating with people. The semantic mode, in short, is language. But I mean language in the broad sense of things we put into the environment to communicate with people. This can be all sorts of stuff: speech, gestures, text, iconography; even buildings have semantic qualities.
Information (in the sense we tend to mean it colloquially) is what creates and changes much of what we consider to be contextual reality. Look at this photo -- there's information everywhere in this scene. >>The lines on the road tell us where to drive; the traffic light is a virtual barrier that affects our behavior; the road signs give us a layer of instruction that adds meaning to the city around us. Without the information here, it would quite literally be a different place. Really, you could have civilization without cars, lampposts and buildings, but you couldn't have it without language. Language is our reality in many ways. And a city is as much a construct made of language -- speech as well as labels, signs, and other semantic artifacts -- as one made of atoms. photo: flickr - uicdigital http://www.flickr.com/photos/uicdigital/5410417461/
At a presentation last year, I heard Peter Merholz talk about how a cube farm in an office building is like the org chart "made manifest." That's because language structures are an architecture that we live within together, whether we know it or not. Whether these structures are defined explicitly, as in this early IBM management diagram, or defined tacitly through the collective assumptions within a shared culture, the way we talk and write about our shared environment is also a structural feature of that environment.
"...form of cognitive scaffolding..." - Andy Clark, Supersizing the Mind. Language is not information. Language is environment. When I am speaking I'm vibrating the air -- affecting the environment, putting structures into it that weren't there before. The same goes for writing -- it's environmental structure we're adding to the world. We then pick up information about what those environmental features mean; you hear the vibrations and, because you've learned what those words mean, they have affordance for you.
Ecological & Semantic Information in Conflict. Don Norman famously talks about the affordances of door handles. In this case, a similar affordance situation can help us understand how different modes of information can be in conflict. I was walking into a store and did not even notice the sign. The language of "Pull" had an intended affordance -- I'm supposed to read it and allow it to control my action. But the ecological information I picked up from the structure of the handle had a stronger effect on my action. I was talking with someone as I was entering the store, so my language perception was preoccupied; also, I could see through the glass into the store toward the context I intended to enter -- essentially seeing right past the sign.
Digital-Ecological & Semantic Information in Conflict. Graphical user interfaces are essentially simulated ecological information: objects with affordances, simulated on screens. But they're also performing semantically. It can get very confusing. Logically speaking, the red X's in the first example are all very different -- but ecologically, they require too much thought to disambiguate. In this app I found myself always deleting rather than declining, closing rather than deleting, etc. When I'm in a hurry, I just reach for the closest red X to do whatever I'm trying to do -- close the message, decline an invite, or delete it entirely. About half the time, I end up clicking the wrong one if I don't stop and think about it explicitly. >>In an unsubscribe interface for fab.com, my wife discovered that she was apparently re-subscribing without realizing it, because that big red button -- like a big berry you can't help but pick -- contextually feels like a "confirm" action, not a cancellation/re-subscription action.
Ecological information / affordance for action. Very little semantic or ecological information about what context I'm in. The infamous Twitter "DM fail" problem is largely caused by users responding to DMs via SMS. In this case, it's hard to tell: which of these is a Twitter app that will safely allow me to DM someone, and which is my SMS app that will tweet to everyone who follows me? The physicality of the interface can easily override my perception of the semantic information's differentiating cues.
Digital: digital systems transmitting to & receiving from other digital systems. So the examples we just looked at weren't just any sort of semantic information; they were semantic information driven by digital technology. And digital technology relies on digital information. Digital information is how the black boxes talk to the other black boxes. It's the lifeblood of information technology. The whole point of digital information is to strip human meaning out of it, to make it efficient for transmitting and storing encoded information. This isn't stuff we see face to face very much. Mainly we encounter its *effects* in the environment.
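To make that concrete, here's a minimal sketch in Python -- entirely hypothetical, not drawn from any real system in the talk -- of how a human-meaningful word becomes a bit stream that machines can shuttle between themselves, and only becomes meaningful to a person again when some interface re-presents it.

```python
# A minimal sketch of the "digital" mode: human-meaningful text becomes a bit
# stream that machines can transmit and store, but that carries no human
# meaning on its own. (Illustrative only; names and functions are hypothetical.)

def to_bits(text: str) -> str:
    """Encode text as a string of bits, the way one black box might hand it to another."""
    return " ".join(f"{byte:08b}" for byte in text.encode("utf-8"))

def from_bits(bits: str) -> str:
    """Decode the bit string back into human-readable text."""
    return bytes(int(b, 2) for b in bits.split()).decode("utf-8")

message = "Pull"                 # semantic information, meaningful to a person
wire_form = to_bits(message)     # digital information, meaningful only to machines
print(wire_form)                 # "01010000 01110101 01101100 01101100"
print(from_bits(wire_form))      # "Pull" again, once a system re-presents it to us
```

The bits in the middle are perfectly efficient for the machines and perfectly opaque to us -- which is exactly the point, and exactly why we mostly meet digital information through its effects.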
Digital information in the environment. We see machines around us trying to get us to perceive what they are saying, or what they want to hear from us. We see them murmuring to each other in weird, noisy machine-only semantics that we do not comprehend either ecologically or semantically. • The gas pump above has to have a sticker added to it that explains what "Enter Data" means. >>The Twitter profile with the iPhone coordinates expresses my location not in a semantic way (the name of a city, for instance) but in a Cartesian grid that I have no contextual orientation for, either semantically or ecologically. >>The Delta app has information that I, as a human, can read, but it gives priority to the machines that I encounter in the workflow of the airport.
I don't mean to paint digital information as a villain. It isn't. The ability to transmit, store and retrieve information in this way is a miracle. A platform I like a lot is Avocado -- it lets a couple keep in touch and share a place together, pervasively. It has nice touches that key into the embodied experience of semantic information, like sending a hug by touching the screen to your heart. Another nice touch: the couple shares the same password -- making a word into a very real link of co-ownership of the place, like having the same keys to your home. This sort of pervasively available place would be impossible without digital information in the background. But it also requires a lot of discipline with semantic information structure to make the place coherent.
INFORMATION MAKES PLACES, KIND OF LIKE THIS PICTURE MAKES A PIPE. This is the famous Magritte painting -- it says "this is not a pipe." The picture definitely shows a pipe, but it's not a real pipe you can smoke. >>Information is kind of like this in the way it makes places. >>Except for a key difference: with information, you can smoke the pipe.
Labels, connections, rules. And language is infrastructure. We essentially make things out of labels, connections and rules. Too often, we assume the labels are something to add later -- but in reality they're the thing we have to figure out first. This is why issues like ontology and taxonomy are so important -- they establish the "invariant" features of the environments we make.
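As a rough illustration of "labels, connections and rules" coming first, here's a purely hypothetical sketch (these label names are mine, not from any real project): the invariant layer is defined up front, and anything navigable gets derived from it.

```python
# A purely hypothetical sketch: an information environment's "invariant"
# layer expressed as labels, connections, and rules. Screens and navigation
# would be derived from this layer, not the other way around.

labels = ["Product", "Category", "Order", "Customer"]

connections = [                  # how the labeled things relate to one another
    ("Product", "belongs_to", "Category"),
    ("Order", "contains", "Product"),
    ("Customer", "places", "Order"),
]

rules = [                        # constraints the environment must respect
    "Every Product belongs to exactly one Category",
    "An Order must reference at least one Product",
]

def related_to(label):
    """List the labels directly connected to the given label."""
    out = []
    for a, rel, b in connections:
        if a == label:
            out.append((rel, b))
        elif b == label:
            out.append((rel, a))
    return out

print(related_to("Product"))  # [('belongs_to', 'Category'), ('contains', 'Order')]
```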
ONTOLOGY. "What is the nature of my being in the world? How do I exist in it?" versus "Please describe a formal, explicit specification of a shared conceptualization for purposes of structuring semantic data." Behind the scenes of all this is ontology. Ontology can be the philosophical sort -- about the nature of one's being and the relationship of the self to the environment. Or it can be the information-technology sort -- developed for digital information work, to define the formal specification for data purposes. A big part of what IA should be doing is bridging these two planes of existence.
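For the information-technology sense of ontology, a tiny sketch may help. This is just hypothetical Python data, not OWL, RDF, or any real standard, but it shows what "a formal, explicit specification of a shared conceptualization" can amount to in practice.

```python
# A hypothetical, minimal "ontology" for a retailer: explicit classes,
# their properties, and the relations between them. Real-world versions
# use standards like OWL/RDF; this only illustrates the idea.

ontology = {
    "classes": {
        "Customer":    {"properties": ["name", "email"]},
        "LoyaltyCard": {"properties": ["card_number", "holder"]},
        "CreditCard":  {"properties": ["card_number", "holder", "credit_limit"]},
    },
    "relations": [
        ("LoyaltyCard", "held_by", "Customer"),
        ("CreditCard",  "held_by", "Customer"),
    ],
}

# The philosophical sense of ontology asks what these things *are* to the
# people who live among them; IA's job is to keep the two planes aligned.
```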
All of our fields are preoccupied with how to have content and functionality make sense in various contexts. Ontology is at the heart of this problem. In many organizations and project teams, there's an over-obsession with things like layout in each of the instantiations of a thing, but not enough discussion about how to define the nature of the thing in the abstract. That requires an ontological perspective. And, done properly, it forms the main structures of an information environment -- the invariant pillars, so to speak -- that allow language to stitch together coherence across channels.
Here's an ontology example. Lowe's launched a service called MyLowes, which requires the registration of a card. But they also have a "Lowe's Card" that's a consumer credit card. Conversations at checkout can end up like a "who's on first" routine: "Do you have your Lowe's card?" "My Lowe's card? That's what I'm paying with." "No, I mean your 'MyLowes' card." "This IS my Lowe's card!"
Digital architecture determining ecological & semantic context. If I walked into a bank and asked to access an account, it'd be clear what I meant. But online, it can mean different things (my profile-account represents me in the digital context -- and needs a label, which happens to also be "account"). The digital systems behind the scenes at Kohl's require that these two things we call "account" be separate -- requiring disambiguation. The ontology of "account" is in question here. It's one of the many sorts of things we have to sort out with language, when we're working in an environment that's made of almost nothing *but* language.
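Here's a hypothetical sketch of that collision (the class names are mine, not Kohl's): two things the back-end systems keep rigorously separate, both surfaced to the shopper under the single label "account."

```python
# Hypothetical sketch: the back end distinguishes two concepts cleanly,
# but the environment people actually inhabit labels both of them "account."

from dataclasses import dataclass

@dataclass
class SiteProfile:      # who I am to the website: login, preferences, history
    username: str
    email: str

@dataclass
class ChargeAccount:    # the store credit card I might pay with
    card_number: str
    balance: float

ui_label = {
    "SiteProfile":   "Your account",
    "ChargeAccount": "Your account",   # same word, different thing
}
```

The systems have no trouble telling the two apart; it's the language layer, the part people live in, that has to do the disambiguating.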
Now that retailers are trying to be in the cloud and on the ground at the same time, context is especially confounding. It requires a great deal of work to situate the user's perception of place. For many retailers, product price and availability are driven by location -- yet shoppers online tend to come to the experience as if it's a cloud-based store, not thinking about geography yet. It puts the user in the strange environmental position of being in a local store and in an amorphous web-shop experience simultaneously. The ontology of place is dissonant.
And here we have a situation where a subway station is also filled with pictures of products that you can actually buy -- not unlike Magritte pipes that you can smoke. QR codes are sprinkled throughout -- digital information wrapped in massive simulacra of ecological information -- plus the semantic information of labels and brands. This could have just been a list of words with QR codes next to them, but, perhaps wisely, the retailer decided to create the place in our image, to help bring the "reality" of shopping for groceries into a setting that would otherwise be perceived simply as a subway station.
The examples we've looked at are going to seem primitive in a matter of just a few years. So we need ways of breaking down whole environments into their essential elements -- and those elements are bound up in human perception & cognition. This has been a very cursory overview of what I hope is a useful beginning for principles and frameworks for doing this work into the future.