The ghosts of AI

I fell in love with Artificial Intelligence (AI) back in the 1990s when I went to Aberdeen University as a post-graduate stalker, even though I only signed up for the MSc in AI because it had an exchange program which meant that I could study in Paris for six months.

And, even though they flung me and my pal out of French class for being dreadful students, and I ended up living in Chambéry (which is so small it mentions the launderette in the guidebook) instead of Paris, it was a brilliant experience – most surprisingly of all because it left me with a great love of l’intelligence artificielle: robotics, machine learning, knowledge-based systems.

AI has many connotations nowadays, but back in 1956 when the term was coined, it was about thinking machines and how to get computers to perform tasks which humans, i.e., life with intelligence, normally do.

The Singularity is nigh

Lately, I have been seeing lots of news about robots and AI taking over the world, and the idea that the singularity – that moment when AI becomes so powerful that it self-evolves and changes human existence – is soon. The singularity is coming to get us. We are doomed.

Seriously, the singularity is welcome round my place to hold the door open for its pal and change my human existence any day of the week. I have said it before: Yes please, dear robot, come round, manage my shopping, wait in for Virgin Media because they like to mess me about, and whilst you are there do my laundry too, thank you.

And, this got me thinking. One article said the singularity is coming in 2029, which reminded me of all those times the world was going to end according to Nostradamus, Old Mother Shipton, the Mayan Calendar, and even the Y2K bug. As we used to say in Chambéry: Plus ça change, plus c’est la même chose (the more things change, the more they stay the same). To be honest, I never, ever said that, but my point is that our fears don’t change, even when dressed up in a tight shiny metallic suit. Nom d’une pipe!

We poor, poor humans are afraid of extinction, afraid of being overwhelmed, overtaken, and found wanting. True to form I will link to Maslow’s hierarchy of needs and repeat that we need to feel safe and we need to feel that we are enough. Our technology may be improving – not fast enough as far as I am concerned – but our fears, our hopes, our dreams, our aspirations remain the same. As I say in the link above, we have barely changed since Iron Age times, and yet we think we have because we buy into the myth of progress.

We frighten ourselves with our ghosts. The ghosts which haunt us: In the machine, in the wall, and in our minds where those hungry ghosts live – the ones we can never satisfy.

The ghost in the machine

The ghost in the machine describes the Cartesian view of the mind–body relationship: that the mind is a ghost in the machine of the body. It gets quoted in AI because, after all, it is a philosophical question: What is the mind? What is intelligence? And it remains a tantalising possibility, especially in fiction, that somewhere in the code of a machine or a robot there is a back door or cellular automata – a thinking part which, like natural intelligence, is able to create new thoughts and new ideas as it develops. The reality is that the philosopher Gilbert Ryle coined the term as a critique of Cartesian dualism, and Arthur Koestler later borrowed it for a book about the human ability to destroy itself with its constant repeating patterns in the arena of political–historical dynamics, using the brain as the structure. The idea that there is a ghost in the machine is an exciting one, which is why fiction has hung onto it like a will-o’-the-wisp and often uses it as a plot device, for example, in The Matrix (there are lots of odd bits of software doing their own thing) and I, Robot (Sonny has dreams).

Arthur C Clarke got at this when he observed that any sufficiently advanced technology is indistinguishable from magic – something I say all the time, not least of all because it is true. When I look back at the first portable computer I used, and then at the power of the phone in my hand today, well, it is just magic.

That said, we want the ghost in the machine to do something: to haunt us, to surprise us, to create for us, because we love variety, discoverability, surprise, and the fact that we are so clever we can create life. Actually we do create life, mysteriously, magically, sexily.

The ghost in the wall

The ghost in the wall is that feeling that things change around us for no discernible reason. HCI prof Alan Dix uses the term here: if HCI experts don’t follow standards and guidelines, the user ends up confused in an app without consistency, which gives the impression of a ghost in the wall moving things, ‘cos someone has to be moving the stuff, right?

We may love variety, discoverability and surprise, but it has to be logical: it has to fit within certain constraints and within the consistency of the interface with which we are interacting, so that we say: I am smart, I was concentrating, but yeah, I didn’t know that that would happen at all – in the same way we do after an excellent movie, when we leave thrilled at the cleverness of it all.

Fiction: The ghost of the mind

Fiction has a lot to answer for. Telling stories is how we make sense of the world; stories shape society and culture, and they help us feel truth.

Since we started storytelling, the idea of artificial beings which were given intelligence, or just came alive, has been a common trope. In Greek mythology, we had Pygmalion, who carved a woman from ivory and fell in love with her, so Aphrodite gave her life and Pervy Pygmalion and his true love lived happily ever after. It is familiar – Frankenstein’s bride, Adam’s spare rib, Mannequin (1987). Other variations less womeny-heterosexy focused include Pinocchio, Toy Story, Frankenstein, Frankenweenie, etc.

There are two ways to go: The new life and old life live happily ever after and true love conquers all (another age-old trope), or there is the horror that humans have invented something they can’t control. They messed with nature, or the gods; they flew too close to the sun. They asked for more and got punished.

It is control we are after, even though we feel we are unworthy, and if we do have control we fear that we will become power-crazed. And then there are the recurring themes about technology: humans destroying the world, living in a post-apocalyptic world or dystopia, robots taking over, mind control (or dumbing down) – because ultimately we fear the hungry ghost.

The hungry ghost

In Buddhism, hungry ghosts are what we become when our desires overtake us and turn unhealthy and insatiable: we become addicted to what is not good for us and miss out on our lives right now.

There is also the Hungry Ghosts Festival, which remembers the souls who were once on earth and couldn’t control their desires, so they have become lost in the ether, searching, constantly unsatisfied. They need to be fed so that they don’t bother the people still on earth who want to live and have good luck and happy lives. People won’t go swimming because the hungry ghosts will drown them, dragging them down with their insatiable cravings.

In a lovely blog post I read, the Chinese character which represents ghost – romanised as gui, which is very satisfying given this is a techyish blog, though I can’t reproduce the beautiful character here – is actually nothing to do with ghosts or disincarnate beings; it is more like a glitch in the matrix, a word to explain when there is no logical explanation. It also describes someone behaving badly: you dead ghost. And perhaps it is linked to when someone ghosts you – they behave badly. No, I will never forgive you, you selfish ghost. Although, when someone ghosts you, they do the opposite of what you would wish a ghost to do, which is hang around, haunt you, and never leave you. When someone ghosts you, you become the ghost.

And, for me, the description of a ghost as a glitch in the matrix works just as well for our fears, especially about technology and our ghosts of AI – those moments when we fear and don’t know why we are afraid. Or perhaps we do, really? We are afraid we aren’t good enough, or perhaps that we are too good and have created a monster. It would be good if these fears ghosted us and left us well alone.

Personally, my fears go the other way. I don’t think the singularity will be round to help me any time soon. I am stuck in the Matrix doing the washing. What if I’m here forever? Please come help me through it; there’s no need to hold the door – just hold my hand and let me know there’s no need to be afraid. Even if the singularity is not coming, change is. Thankfully it always is; it’s just around the corner.

Human-computer interaction, cyberpsychology and core disciplines

A heat map of the multidisciplinary field of HCI © Alan Dix

I first taught human-computer interaction (HCI) in 2001. I taught it from the viewpoint of software engineering. Then, when I taught it again, I taught it from a design point of view, which was a bit trickier, as I didn’t want to trawl through a load of general design principles which didn’t boil down to a practical set of guidelines for graphical user interface or web design. That said, I wrote a whole generic set of design principles here: Designing Design, borrowing from Herb Simon’s great title: The Sciences of the Artificial. Then, I revised my HCI course again and taught it as a practical set of tasks so that my students went away with a specific skill set. I blogged about it, in a revised version applied just to web design, in a blog series here: Web Design: The Science of Communication.

Last year, I attended an HCI open day, Bootstrap UX. The day itself was great and I enjoyed hearing some new research ideas, until we got to one of the speakers who gave a presentation on web design. I think he did, anyway; it’s hard to say really, as all his examples came from architecture.

I have blogged about this unsatisfactory approach before. By all means use any metaphor you like, but if you cannot relate it back to practicalities then ultimately all you are giving us is a pretty talk or a bad interview question.

You have to put concise constraints around a given design problem and relate it back to the job that people do and have come to learn about. Waffling on about Bucky Fuller (his words – not mine) with some random quotes on nice pictures is not teaching us anything. We have a billion memes online to choose from. All you are doing is giving HCI a bad name and making it sound like marketing. Indeed, cyberpsychologist Mary Aiken, in her book The Cyber Effect, seems to think that HCI is just insidious marketing. Anyone might have been forgiven for making the same mistake listening to the web designer’s empty talk on ersatz architecture.

Cyberpsychology is a growing and interesting field, but if it is populated by people like Aiken who don’t understand what HCI is, nor how artificial intelligence (AI) works, then it is no surprise that The Cyber Effect reads like the Daily Mail (I will blog about the book in more detail at a later date, as there’s some useful stuff in there but too many errors). Aiken quotes Sherry Turkle’s book Alone Together, which I have blogged about here, and it makes me a little bit dubious about cyberpsychology. I am still waiting for the book written by a neuroscientist, with lots of brain-scan pictures, to tell me exactly how our brains are being changed by the Internet.

Cyberpsychology is the study of the psychological ramifications of cyborgs, AI, and virtual reality, and I was like: wow, this is great, and rushed straight down to the library to get the books on it, to see what was new and what I might not know. However, I was disappointed, because if the people leading the research anthropomorphise computers and theorise about metaphors for the Internet instead of the Internet itself, then the end result will be skewed.

We are all cyberpsychologists and social psychologists now, baby. It’s what we do

We are all cyberpsychologists and social psychologists now, baby. It’s what we do. We make up stories to explain how the world works. That doesn’t mean the stories are accurate. We need hard facts, not Daily Mail hysteria (Aiken was very proud to say she made it onto the front page of the Daily Mail with some of her comments). However, the research I have read about our behaviour online says it’s just too early to tell how we are being affected, and as someone who has been online since 1995, I only feel enhanced by the connections the WWW has to offer me. Don’t get me wrong, it hasn’t all been marvellous; it’s been like the rest of life, some fabulous connections, some not so.

I used to lecture psychology students alongside the software engineering students when I taught HCI at Westminster University in 2004. They were excited when I covered cognitive science, as it was familiar to them, and actually all the cognitive science tricks make it easy to involve everyone in the lectures and make the lectures fun. But when I made them sit in front of a computer and design and code up software as part of their assessment, they didn’t want to do it. They didn’t see the point.

This is the point: If you do not know how something works, how can you possibly talk about it without resorting to confabulation and metaphor? How do you know what is and what is not possible? I may be able to drive a car, but I am not a mechanic; I would not give advice to anyone about their car, nor write a book on how a car works, and if I did, I would not just think about the car as a black box – I would have to put my head under the bonnet, otherwise I would sound like I didn’t know what I was talking about. At least I drive a car and use a car; that is something.

Hey! We’re not all doctors, baby.

If you don’t use social media and you just study people using it, what is that then? Theory and practice are two different things. I am not saying that theory is not important – it is – but you need to support your theory; you need some experience with which to evaluate it. Practice is where it’s at. No one has ever said: Theory makes perfect. Yep, I’ve never seen that on a meme. You get a different perspective, as Jack Nicholson says to his doctor, played by Keanu Reeves, in Something’s Gotta Give: Hey! We’re not all doctors, baby. Reeves has seen things Nicholson hasn’t, and Nicholson is savvy enough to know it.

So, if you don’t know the theory, you don’t engage in the practice, and you haven’t any empirical data yourself, you are giving us conjecture, fiction, a story. Reading the Wikipedia page on cyberpsychology, I see that it is full of suggested theories, like the one about how Facebook causes depression. There are no constraints around the research. Were these people depressed before going on Facebook? I need more rigour. Aiken’s book is the same, which is weird since she has a lot of references; they just don’t add up to a whole theory. I have blogged before about how I was fascinated that some sociologists perceived software as masculine.

In the same series I blogged about women as objects online, the main point being that social media reflects our society and that we have a chance with technology to impact society in good ways. Aiken takes the opposite tack and says that technology encourages and propagates deviant sexual practices (her words) – some I hadn’t heard of – but for me this raises the question: If I don’t know about a specific sexual practice, deviant or otherwise, until I learn about it on the Internet (Aiken’s theory), then how do I know which words to google? It is all a bit chicken and egg and doesn’t make sense. Nor does Aiken’s advice to parents, which is: Do not let your girls become objects online. Women and girls have been objectified for centuries; technology does not do anything by itself, it supports people doing stuff they already do. And, like the HCI person I am, I have designed and developed technology to support people doing stuff they already do. I may sometimes inadvertently change the way people do a task when it is supported by technology, for good or for bad, but to claim that technology is causing people to do things they do not want to do is myth making and fear mongering at its best.

The definition of HCI that I used to use in lectures at the very beginning of any course was:

HCI is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them (ACM, 1992).

For me, human-computer interaction was and still remains Gestaltian: The whole is greater than the sum of the parts. By this I mean that the collaboration of a human and a computer is more than a human typing numbers into a computer and then waiting for the solution, or indeed typing sexually deviant search terms into a search engine to find a tutorial. And, with the advent of social media, HCI is more than one person connecting to another, or broadcasting online, which is why the field of cyberpsychology is so intriguing.

But the very reason why I left the field of AI and went into HCI is this: AI reasons in a closed world, within the limits of the computational power you have available. There are limits. With HCI, that world opens up and the human gets to direct the computer to do something useful. Human-to-human communication supported by technology does something else altogether, which is why you might want the opinion of a sociologist or a psychologist. But you don’t want the opinion of a sociologist on AI who doesn’t understand how it works, has watched a lot of sci-fi, and thinks that robots are taking over the world. Robots can do many things, but it takes a lot of lines of code. And you don’t want the opinion of a cyberpsychologist who thinks that technology teaches people deviant sexual practices and encourages us all to literally pleasure ourselves to death (Aiken’s words – see what I mean about the Daily Mail?) ‘cos she read one dodgy story and linked it to a study of rats in the 1950s.

Nowadays, everyone might consider themselves to be a bit of an HCI expert, able to judge the original focus of HCI, which is the concept of usability: easy to learn, easy to use. Apps are a great example of this, because they are easy to learn and easy to use – mainly, though, because they have limited functionality; that is, they focus on one small task, like getting a date, ordering a taxi, sharing a photo, or a few words.

However, as HCI professor Alan Dix says in his reflective Thirty years of HCI and also here about the future: HCI is a vast and multifaceted community, bound by the evolving concept of usability, and the integrating commitment to value human activity and experience as the primary driver in technology.

He adds that sometimes the community can get lost and says that Apple’s good usability has been sacrificed for aesthetics and users are not supported as well as they should be. Online we can look at platforms like Facebook and Twitter and see that they do not look after their users as well as they could (I have blogged about that here). But again it is not technology, it is people who have let the users down. Somewhere along the line someone made a trade-off: economics over innovation, speed over safety, or aesthetics over usability.

HCI experts are agents of change. We are hopefully designing technology to enhance human activity and experience, which is why the field of HCI keeps getting bigger and bigger and has no apparent core discipline.

It has a culture of designer-maker, which is why at any given HCI conference you might see designers, hackers, techies and artists gathering together to make things. HCI has to exist between academic rigour and exciting new tech; no wonder it seems to not be easy to define. But as we create new things, we change society and have to keep debating areas such as intimacy, privacy, ownership and visibility, as well as what seems pretty basic, like how to keep things usable. Dix even talks about human–data interaction: as we put more and more things online, we need to make sense of the data being generated and interact with it. There is new research being funded into trust (which I blogged about here). And Dix suggests that we could look into designing for solitude and supporting users in not responding immediately to every text, tweet, or digital flag. As an aside, I have switched off all notifications, my husband just ignores his, and it boggles my mind a bit that people can’t bring themselves to be in charge of the technology they own. Back to the car analogy: they wouldn’t have the car telling them where they should be going.

Psychology is well represented in HCI, and AI is well represented in HCI too. Hopefully we can subsume cyberpsychology as well, so that the next time I pick up a book on the topic, it actually makes sense and the writer knows what goes on under the bonnet.

Technology should be serving us, not scaring us. So, if writers could stop behaving like 1950s preachers who think society is going to the dogs – viewing how people embrace technology the same way they once viewed rock’n’roll and the television – we could be more objective about how we want our technological progress to unfold.

Web design (5): Structure

A collaborative medium, a place where we all meet and read and write.
Tim Berners-Lee

[Part 5 of 7: 0) intro, 1) story, 2) pictures, 3) users, 4) content, 5) structure, 6) social media, 7) evaluation]

Many designers have adopted a grid structure to design web pages because a) it lends itself well to responsive design and b) it allows a design which is easy for users to understand. Designers have about five seconds before a user will click away to find a different service/page/content provider if the page is laid out in a way which is difficult to understand.
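As a minimal sketch of how a grid lends itself to responsive design (the class name and breakpoint here are my own, purely for illustration), a few lines of CSS 3 can declare a grid that reflows from three columns on a wide screen to a single column on a narrow one:

  .page {
    display: grid;
    grid-template-columns: repeat(3, 1fr); /* three equal columns on wide screens */
    gap: 1rem;                             /* consistent gutters between items */
  }

  @media (max-width: 600px) {
    .page {
      grid-template-columns: 1fr;          /* collapse to one column on small screens */
    }
  }

The same content reflows to fit the device without a separate mobile design, which is a large part of why grids became the default.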

In a great talk for An Event Apart, Jen Simmons, Designer and Developer Advocate at Mozilla, looks offline at magazines for inspiration and remembers how much experimentation and creativity there was online until everyone adopted grids and fell into a rut.

But, it is easy to understand why everyone adopted grids: users create their own understanding of a webpage from its structure. Text is complete within itself, and meaning comes from its structure and language rather than the ideas it contains. This is a fundamental principle of semiotics, the study of meaning.

Managing expectations

When a webpage is judged to be useless, it is often because it does not behave in the way the user is expecting, particularly if it is not very attractive.

Designers either need to manage a user’s expectations by giving them what they are expecting in terms of the service they are looking for, or they need to make it super attractive. Attractive things don’t necessarily work better, but we humans perceive them as doing so because they light up the brain’s reward centre and make us feel better when we are around them. We are drawn to attractive things, an attraction explained by Gestalt principles such as unity and symmetry, and by the golden ratio.

Gestalt: similarity, proximity

Good design is one thing, but we also have specific expectations about any given webpage. We scan for headings and white space and interpret a page in those terms. This is because, according to Gestalt theory, we interpret items according to their proximity (items which are close together we group together) and their similarity (items which look similar we interpret as belonging together).

And also because we have been to other sites: we transfer our experiences from one site to another and anticipate where certain functions should be.
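As a small sketch of how those two principles translate into CSS (the class names are invented for illustration): tight spacing and a shared style say these items belong together, while generous whitespace says a new group starts here:

  .menu-item {
    margin-bottom: 0.25rem; /* proximity: close spacing groups these items */
    color: #0066cc;         /* similarity: a shared colour marks them as one family */
  }

  .menu {
    margin-bottom: 3rem;    /* whitespace separates one group from the next */
  }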

Where am I? Where have I been? Where am I going?

Main menus are usually at the top of the page, grouped together, and are used for navigation through the site. Secondary navigation may take place in drop-down menus, or in left- or right-hand columns. Specific housekeeping information can be found in the footer, or in the common links bar if there is one.

If users are completely lost they will use the breadcrumbs, which Google now shows instead of a site’s URL as part of the results its search engine serves up. Therefore, it is in a designer’s interest to put breadcrumbs at the top of the page.
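A minimal breadcrumb trail in HTML 5 might look like the sketch below (the page names are invented; the aria attributes help screen readers announce it as navigation and flag the current page):

  <nav aria-label="Breadcrumb">
    <ol>
      <li><a href="/">Home</a></li>
      <li><a href="/web-design/">Web design</a></li>
      <li aria-current="page">Structure</li>
    </ol>
  </nav>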

Users will stay longer and feel better if they can answer the three questions of navigation as articulated by usability consultant Steve Krug:

  1. Where am I?
  2. Where have I been?
  3. Where am I going?

Often this is answered by changing links to visited and not visited, and by enforcing the consistency of the design through a sensible approach to colour. There is a theory of colour in terms of adding and subtracting colour to create colour, either digitally or on a palette, but there is, alas, no theory about how to use colour to influence branding and marketing, as personal preferences are impossible to standardise.
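As a small sketch of that approach (the colours are my own choices, not a standard), CSS can answer two of Krug’s questions directly by styling visited and unvisited links distinctly:

  a:link    { color: #0066cc; } /* not yet visited: where am I going? */
  a:visited { color: #551a8b; } /* already visited: where have I been? */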

HTML 5 & CSS 3

As discussed earlier in part 1 of this series, we separate our content from our presentation, which is styled using CSS 3. Then, once we know what we want to say, we use HTML 5 to structure our text and give it meaning for the reader. That reader may be a screen reader or it may be a human being.

HTML 5 breaks a page into its head and body, and then the body is broken down further into specific elements: headings from <h1> to <h6>, paragraphs, lists, sections, etc., so that we can structure a nice layout. There are thousands of tutorials online which teach HTML 5.
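A bare-bones sketch of that structure (the headings and text are placeholders):

  <!DOCTYPE html>
  <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>Page title</title>
      <!-- presentation kept separate in CSS, as discussed above -->
      <link rel="stylesheet" href="style.css">
    </head>
    <body>
      <h1>Main heading</h1>
      <section>
        <h2>A section heading</h2>
        <p>A paragraph of content.</p>
      </section>
    </body>
  </html>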

The nice thing about sections is that we can use them to source linked data from elsewhere and fill our pages that way, but still keep a consistent appearance.

One page is great, and a couple of pages are fine, but once we get into hundreds of pages, we need to think about how we present everything consistently and evenly across a site and still provide users with the information they came for.

Information architecture

Information architecture (IA) is the way to organise the structure of a whole website. It asks: How do you categorise and structure information? How do you label it so that users can navigate or search through it in order to find what they need?

The first step is to perform some knowledge elicitation about the business or context, and about what everyone involved (owners, customers) – known as stakeholders – expects from the proposed system. This may include reading all the official documentation a business has (yawn!).

If there is a lot of existing information the best way to organise it is to perform a card sort. A card sort is when a consultant calls in some users, gives them a stack of index cards with content subjects written on them, along with a list of headings from the client’s site—“Business and News,” “Lifestyle,” “Society and Culture”— then users decide where to put “How to floss your teeth”.

This can take a few days each time, and a few goes, until a pattern is found. We humans love to impose order on chaos; we love to find a pattern to shape and understand our world.

Once we have a structure from the card sort, it becomes easier to start designing the structure across the site and we begin with the site map.

The site map reflects the hierarchy of a system (even though Tim Berners-Lee was quite emphatic that the web should not have a hierarchical structure).

Then, once a site map is in place, each page layout can be addressed, along with the way users will navigate it. Thus we get main menus (global navigation), local navigation, content types to put in sections and paragraphs, etc., along with the functional elements needed to interact with users.

Other tools created at this time to facilitate the structure are wireframes, or annotated page layouts, because if it is a big site lots of people may be working on it, and clear tools for communication are needed so that the site structure remains consistent.

Mock-up screenshots and paper prototypes may be created, and sometimes, in the case of talented visual designers, storyboards. Storyboards are sketches showing how a user could interact with a system; sometimes they take a task-based approach, so that users can complete a common task.

Depending on the size of a project, information architects will work with content strategists who will have asked all the questions in the last section (part 4) on content and/or usability consultants who will have spoken to lots of users (part 3) to get an understanding of their experiences, above and beyond their understanding of the labelling of information in order to answer questions such as:

  • Does the website have great usability which is measured by being: effective and efficient; easy to learn and remember; useful and safe?
  • How do we guide users to our key themes, messages, and recommended topics?
  • Is the content working hard enough for our users?

Sometimes, it may just be one person who does all of these roles and is responsible for answering all of these questions.

It takes time to create great structure; often it takes several iterations of these steps until it is time to go on to the next stage (part 6) and start sharing this beautiful content on social media.

[Part 6]

Game theory in social media marketing (2): Customers and competitors


[Part 2 of 4: Game theory & social media: Part 1, Part 3, Part 4]

In part 1, we saw how people love to play games. Game theory was first recognised in 1928, with John von Neumann’s paper about two people playing a game together with only one winner (known as a two-person zero-sum game).

If we apply game theory to social media marketing, we could say that the customer and the marketer are playing a two-person zero-sum game – winner takes all. Before social media, this might have been the case, for customers believed that shops were acting in their own self-interest, and so they, the customers, did too. Everyone was out to get what they could. In reality though, the relationship is more of a win-win: Without the marketer, the customer might not learn about the product on offer and not buy or benefit from the product, and without the customer, the marketer doesn’t have a job at all.
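For illustration only (the numbers are mine, not Von Neumann’s), a two-person zero-sum game can be written as a payoff matrix in which each cell shows the marketer’s payoff / the customer’s payoff, and every pair sums to zero:

                         Customer buys    Customer walks away
  Marketer pushes hard      +1 / −1             −1 / +1
  Marketer holds back       −1 / +1             +1 / −1

In a true zero-sum game one side’s gain is always the other’s loss, which is exactly the framing that the win-win reality described above rejects.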

Playing your customer

In his book Social Media Marketing, Eric Anderson describes the marketer–customer relationship as a two-way, mutually dependent conflict and points out that in the world of marketing everything is described combatively. There are marketing campaigns, killer apps and dead lists, which fit with game theory: two parties with opposing and mutual interests both engaged in winning the outcome of combat.

For if the customer doesn’t engage and play the game, then they effectively kill the product, or even the market the product exists in. More worryingly for a marketer, if a customer engages and is an influencer, that customer, with a few well-placed tweets and reviews on a social computing site (their blog, Amazon, Goodreads), can begin a campaign which sinks a product. On his blog, Nathan Bransford describes how books have been effectively killed prior to publication by bad reviews on Goodreads.

A nice equation given by Kyle Wong on Forbes describes what an influencer does as follows:

Influence = Audience Reach (number of followers) × Brand Affinity (expertise and credibility) × Strength of Relationship with Followers
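Purely as an illustration (the numbers and the 0-to-1 scales are made up, not Wong’s): an influencer with 50,000 followers, a brand affinity of 0.8, and a relationship strength of 0.5 would score 50,000 × 0.8 × 0.5 = 20,000 – double the score of an influencer with the same reach but half the affinity. Reach alone is not enough; the two qualitative factors multiply it up or down.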

Influencers have immense power to kill or create sales, which is a totally new thing in marketing. This is potentially such a powerful way to sell to millions across the globe, especially amongst certain demographics – mums, millennials – that many companies view social media marketing as the only way to market nowadays. They know that they must, like influencers, build relationships with their customers. One way to do this is by creating content.

Playing your competitor

In a great blog post, Julie Niedlinger describes how game theory approaches to creating content can help marketers decide whether their strategy (another military word) is appropriate for their competitors and for their customers.

Niedlinger advises marketers to take a moment, before reacting to comments that potential customers leave on blogs, to ask whether there is a game going on. If so, which game? And most importantly, are the rules clear? Once they are – then and only then – should a marketer make a move.

Secondly, she looks at competitors producing a similar blog of content-rich, potentially market-cornering information and asks: What is the next move? Do you steal their writers? Mimic them? Join forces? Or follow trends in an effort to win their share of the market?

It is important to know your game, its rules, and the moves you should be taking.

In part 3, we will look at specific theories within game theory and see what moves and games we could play.

[Part 3]

Designing design: No function in structure


[Part 4 of 12: 1) the science of the artificial, 2) function, behaviour, structure, 3) form follows function, 4) no function in structure, 5) the medium is the message, 6) types and schemas, 7) aesthetics: attractive things work better, 8) managing (great) expectations, 9) colour, 10) styles and standards, 11) design solution spaces, 12) conclusions]

Previously we saw that the design principle form follows function can be fabulous but sometimes limiting, and that in nature it does not necessarily apply – sometimes function follows form. However, if you take the form (or structure) outside of its natural context or situation, so that there are few clues as to what an artefact was designed for, users may find completely new functions for it. This is known as the no function in structure principle.

The Post-it note originated because one 3M employee thought that the small pieces of paper used for testing glue were actually a new type of bookmark. Thankfully, no one was around to explain that everyone else was focusing on the glue, which would have kept this person from serendipitously finding a new tool.

On the World Wide Web, Pinterest is a great example of no function in structure. The user collects pictures, or looks at other people’s collections of pictures from across the WWW, and they just browse and click, and browse and click (actually, designer Jeffrey Zeldman had a different way of using it, until Pinterest disabled the feature – which stopped him from enjoying the app, and which is a different problem altogether: not listening to the user). Either way, looking at and saving pins is an alternative to the standard way of navigating around a website and asking Steve Krug’s three questions: Where am I? Where have I been? Where am I going? The users on Pinterest don’t necessarily care. They are there to experience the site by looking at all the lovely pins, without any exact expectation of what order things need to happen in.

Treading the paths of desire

Instead of prescribing exactly how someone should use your website or artefact, sometimes it can be insightful to watch what a user does when presented with an artefact without clear instructions. During his TED talk, designer Tom Hulme showed an aerial shot of the centre of Brasilia, which was designed for cars only. There are paths of desire trodden in by pedestrians across 15 lanes of motorways and roads, so that pedestrians can get to where they need to go in a city designed only for cars. Consequently, pedestrian road accidents are higher in Brasilia than anywhere else in the world. In contrast, a good use of paths of desire is in newly built university quads, which are left without paths until people have trodden them in; then the designers come back and pave them over.

Serendipity and discoverability

The world is constantly changing, especially in this our digital era, and it is necessary for the designer to have empathy for the users. Adopting a no function in structure approach and watching users discover new experiences and ways of using artefacts (or the infrastructure) is a truly empathetic way of providing the design solutions that people really want.