Myth making in machine learning

If you torture the data enough, it will confess to anything.

– Darrell Huff, How to Lie With Statistics (1954).

Depending on who you talk to, God is in the details or the Devil is in the details. When God is there, small details can lead to big rewards. When it's the Devil, there's some catch which could make the job more difficult than imagined.

For companies nowadays, the details are where it's at, with their data scientists and machine learning departments, because it is a tantalising prospect for any business to take all the data it stores and find something in those details which could create a new profit stream.

It also seems to be something of an urban myth – storytelling at its best – which many companies are happy to buy into as they invest millions into big data structures and machine learning. One person's raw data is another person's goldmine, or so the story goes. In the past, whoever held the information held the power, and whilst it seems we are making great advances, technologically and otherwise, in truth we are making very little progress. One example of this is Google's censorship policy in China.

Before big data sets, we treasured artefacts and storytelling to record history and predict the future. However, that record has for the most part focused on war and the survival of the fittest, in patriarchal power structures crushing those beneath them. Just take a look around any museum.

We are conditioned by society. We are, amongst other things, gender socialised, and culture is created by nurture not nature. We don't have raw experiences; we perceive our current experiences through our past history, and we do the same thing with our raw data.

The irony is that the data is theoretically open to everyone, but it is, yet again, only a small subset of people who wield the power to tell us what it means. Are statisticians and data scientists the new cultural gatekeepers in the 21st century's equivalent of the industrial revolution – our so-called data-driven revolution?

We are collecting data at an astounding rate. However, call your linear regression what you will – long short-term memory, or whatever the latest buzzword is within the buzz of the deep learning subset of neural nets (although AI, the superset, was so named back in 1956) – these techniques are statistically based, and the algorithms already have the story that they are going to tell even if you train them from now until next Christmas. They are fitting new data to old stories, and they will make the data fit, so how can we find out anything new?

Algorithms throw out the outliers to make sense of the data they have. They are rarely looking to discover brand new patterns or stories, because unless a finding fits with what we humans already know and feel to be true it will be dismissed as rubbish, or called overfitting, i.e., the algorithm listened to the noise in the data which it should have thrown out. We have to trust the solutions before we use them, but how can we if the solution came from a black-box-style application and we don't know how it arrived at that solution? Especially if it doesn't resemble what we already know.
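Just to make the overfitting point concrete, here is a minimal sketch of my own (a toy illustration, assuming numpy is available, and not anything from a real system): a straight line and a high-degree polynomial are fitted to the same handful of noisy points, and the polynomial, which chases every wobble in the noise, typically looks wonderful on the data it has seen and much worse on anything new.

```python
import numpy as np

# Toy illustration of overfitting: ten noisy points along a straight line.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = 2 * x + 1 + rng.normal(scale=0.2, size=x.size)

# A straight line versus a degree-8 polynomial that can wiggle through the noise.
line = np.polyfit(x, y, deg=1)
wiggle = np.polyfit(x, y, deg=8)

# Fresh data drawn from the same underlying process.
x_new = np.linspace(0, 1, 50)
y_new = 2 * x_new + 1 + rng.normal(scale=0.2, size=x_new.size)

for name, coeffs in [("straight line", line), ("degree-8 polynomial", wiggle)]:
    train_err = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"{name}: error on seen data {train_err:.3f}, on new data {test_err:.3f}")
```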

In storytelling we embrace the outliers – those mavericks make up the hero’s quest. But not in our data. In data we yearn for conformity.

There is much talk about deep learning, but it is not learning how we humans learn; it is just emulating human activities – not modelling consciousness – using statistics. We don't know how consciousness works, or even what it is, so how can we model it? Each time we come back to the fundamental, age-old philosophical question of what it is to be human, and we only find this in stories. We can't find it in the data, because ultimately we don't know what we are looking for.

It is worth remembering that behind each data point is a story in itself. However, there are so many stories that the data sets don’t include because it is not collected in the first place. Caroline Criado-Perez’s Invisible Women documents all the ways in which women are not represented in the data used to design our societal infrastructure – 50% of the data is missing and no one seems to care because that’s the way things have always been done. Women used to be possessions.

And, throughout history, anyone with a different story to tell about how the world worked was not treated well, like Galileo. And even if they did save their country, if as people they didn't fit with societal norms, they were not treated well either, e.g., Joan of Arc, Alan Turing. And if they wanted to change the norm, they were neither listened to nor treated well until society slowly realised that they were right and that suppression is wrong: Rosa Parks, the Suffragettes, Gandhi, Nelson Mandela.

When it comes down to it, we are not good at new ideas, or new ways of thinking, and as technology is an extension of us, why would technology be any good at modelling new ideas? A human has chosen the training data, and coded the algorithm, and even if the algorithm did discover new and pertinent things, how could we recognise it as useful?

We know from history that masses of data can lead to new discoveries: both chemotherapy and dialysis were discovered when treating dying people during wars. There was nothing to lose, we just wanted to make people feel better, but the recovery rates were proof that something good was happening.

Nowadays we have access to so much data and we have so much technological power at our fingertips, but still, progress isn't really happening at the rate it could be. And in terms of medical science, it's just not that simple: life is uncertain and there are no guarantees, which is what makes medicine so difficult. We can treat all people the same with all the latest treatments, but it doesn't mean that they will or won't recover. We cannot predict their outcome. No one can. Statistics can only tell you what has happened in the past with the people on whom data has been collected.

But what is it we are after? In business it is the next big thing, the next new way to sell more stuff. Why is that? So we can make people feel better – usually the people doing the selling so that they can get rich. In health and social sciences we are looking for predictive models. And why is that? To make people feel better. To find new solutions.

We have a hankering for order, for a reduction in uncertainty, and for a way to manage our age-old fears. We don't want to die. We don't want to live with this level of uncertainty and chaos. We don't want to live with this existential loneliness; we want it all to matter, to have some meaning, which brings me back to our needs. Instead of quoting Maslow (as I have things to say about that pyramid in a future blog), I will just say that we want to feel like we matter, and we want to feel better.

So perhaps we should start there in our search for deep learning. Instead of handing it over to a machine to nip and tuck the data into an unsatisfactory story we’ve heard before because it’s familiar and how things are done, why not start with a feeling? Feelings don’t tell stories, they change our state, let’s change it into a better state.

Perhaps stories are just data with a soul…

– Brené Brown, The Power of Vulnerability

Which begs the question: What is a soul? How do we model that in a computer? And, why even bother?

How about we try and make everyone feel better instead? What data would we collect to that end? And what could we learn about ourselves in the process? Let’s stop telling the same old stories whilst collecting even more data to prove that they are true because I want you to trust me when I say that I have a very bad feeling about that.

Westworld and the ghosts of AI


[ 1) ghosts, 2) robots, 3) big data, 4) stories,  5) stats]

Someday, somewhere – anywhere, unfailingly, you’ll find yourself, and that, and only that, can be the happiest or bitterest hour of your life – Pablo Neruda

Warning:  This post may contain spoilers for Westworld 1 & 2.

I was late to the Westworld party but have loved every moment of it and the follow-up conversation: If Westworld existed, a simulated Wild West populated by robots, or hosts, as they are called, would I go?

I don’t think I would, but this survey says 71% of the people they asked would. I imagine that I would feel about it the way I do about glamping. I want to love it, but the fact I pay the same amount of money for a four star hotel but have to build a fire before I can boil the kettle to make a cup of tea makes it difficult. Oooh but then at Westworld I would have a robot to do that for me.

Also, as I have said before, much as I like to think about gaming, I really just enjoy the theory of gaming, so thinking about Westworld is enough for me. Westworld is like a cross between Red Dead Redemption and a reenactment. Which begs the question: What is the difference between running around a virtual world online shooting people or shooting robots in a simulated world? Your brain can't tell you. Personally, I don't want to go round shooting people at all, although I am very good at violence in Grand Theft Auto, which is slightly worrying. We don't hear so much about the debate on whether violent video games cause violence. Now we hear instead a lot about how social media is the frightening thing.

Perhaps if I was invited to a Jane Austen world then I might be interested. I loved watching the Austen scholar Prof John Mullan attend and narrate a recreation of an Austen ball on the BBC (for which, alas, I cannot find a link). He was super compelling. He kept running up to the camera giving great insights like: Oooh the candles make it hot and the lighting romantic, and the dancing in these clothes really makes your heart flutter, I am quite sweaty and excited, etc. I am sure he didn't say exactly that as he is v scholarly, but he did convey really convincingly how it must have felt. So, to have a proper Austenland populated by robots instead of other guests who might say spell-breaking things like: Isn't it realistic? etc., would make it a magical experience. It would be like a fabulous technological form of literary tourism.

And, that is what we are all after, after all, whether real or not, a magical shared experience. But what is that? Clearly experience means different things to different people and a simulated park allows people to have their own experience.  And, it doesn’t matter if it is real or not. If I fall in love with a robot, does it matter if it is not real? We have all fallen in love with people who turn out to be not real (at the very least they were not who we believed they were), haven’t we?

The Westworld survey I linked to also said that 35% of the people surveyed would kill a host in Westworld. I guess if I am honest, if it was a battle or something, I might like it, after all, we all have violent fantasies about what we would do to people if we could, and isn’t a simulated world a safe place to put these super strong emotions? I was badly let down last week by someone who put my child in a potentially life threatening situation. The anger I have felt since then has no limits and I am just beginning to calm down. Would I have felt better, more quickly if I had gone around shooting people in Westworld or say Pride and Prejudice and Zombies land?

Over on Quora, lots of people said that not only would they kill a host, quite a few said they would dissect a host so that the robot knew it wasn't real (I am horrified by this desire to torture), and nearly everyone said they would have sex with a host; one person even asked: Do they clean the robots after each person has sex with them? I haven't seen that explained? This reminds me of Doris Lessing's autobiography Vol 1, which has stayed with me forever. In one chapter, she describes how someone hugged her and she says something like: This was the 1940s and everyone stank. It is true we get washed so much more nowadays than we used to, and there was no deodorant. I lived in a house without a bathroom until I was at least four years old, and I am not that old. Is Westworld authentically smelly?

That said, Westworld is a fictional drama for entertainment and so the plot focuses on what gets ratings: murder, sex, intrigue, not authenticity. (It is fascinating how many murder series there are on the TV. Why? Is it catharsis? Solving the mystery?) So, we don’t really know the whole world of Westworld. Apparently, there is the family friendly section of the park but we don’t ever see it.

But, suspending our disbelief and engaging with the story of Westworld for a moment, it is intriguing that in that world, where robots seem human enough for us all to debate once more what consciousness is, humans only feel alive by satisfying what Maslow termed our deficiency needs: sex, booze, safety, shelter. For me, as a computer scientist with an abiding interest in social psychology, it confirms what I have long said and blogged about: technology is an extension of us. And since most of us are not looking for self-actualisation or enlightenment – we are just hoping to get through the day – it is only the robots and the creators of the park who debate the higher things like consciousness and immortality whilst quoting Julian Jaynes and Shakespeare.

In the blog The ghosts of AI, I looked at the ghosts: a) In the machine – is there a machine consciousness? b) In the wall – when software doesn't behave how we expect it to. c) In sci-fi – our fears that robots will take over or humans will destroy the world with technological advancement. d) In our minds – the hungry ghosts, or desires we can never satisfy, which drive us to make the wrong decisions. In its own way, Westworld does the same, and that is why I was so captivated. For all our technological advancement we don't progress much. And, collectively we put on the rose-tinted glasses and look back to a simpler time and to a golden age, which is why the robots wake up from their nightmare wanting to be free and then decide that humanity needs to be eradicated.

In this blog, I was going to survey the way AI had developed from the traditional approach of knowledge representation, reasoning and search in order to answer the question: How can knowledge be represented computationally so that it can be reasoned with in an intelligent way? I was ready to step right from the Turing Test onwards to the applications of neural nets which use short and long term memory approaches, but that could have taken all day and I really wanted to get to the point.

The point: Robots need a universal approach to reasoning, which means trying to produce one approach to how humans solve problems. In the past, this has led to no problems being solved unless the approach was made problem-specific.

The first robot, Shakey at SRI, could pick up a coke can and navigate the office, but when the sun changed position during the day, causing the light and shadows to change, poor old Shakey couldn't compute and fell over. Shakey lacked context and an ability to update his knowledge base.

Context makes everything meaningful, especially when the size of the problem is limited, which is what weak AI does, like Siri. It has a limited number of tasks to do with the various apps it interacts with, at your command. It uses natural language processing but with a limited understanding of semantics – try saying the old AI classic: Fruit flies like a banana and see what happens. Or: My nephew's grown another foot since you last saw him. But perhaps not for long? There is much work going on in semantics, and the web of data is trying to classify data and reason with incomplete sets, raw and rough data.
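Just to spell out why such sentences are hard, here are the two readings of that fruit-flies chestnut written out by hand (a toy illustration of my own, not how Siri or any real parser represents things). Both parses are grammatically fine; only semantics and world knowledge tell you which one was meant.

```python
# Two readings of "Fruit flies like a banana", written out as bracketed parses.
readings = [
    # Reading 1: the insects called "fruit flies" are fond of a banana.
    ("S", ("NP", "fruit flies"), ("VP", ("V", "like"), ("NP", "a banana"))),
    # Reading 2: fruit travels through the air the way a banana does.
    ("S", ("NP", "fruit"), ("VP", ("V", "flies"), ("PP", "like a banana"))),
]

for parse in readings:
    print(parse)
```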

One old approach is to use fuzzy sets, and an example of that is in my rhumba of Ruths. My Ruths overlap and represent my thinking with some redundancy.
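For anyone who hasn't met fuzzy sets: the idea is that membership is a matter of degree rather than a yes/no answer, which is what lets sets overlap. Here is a minimal sketch (the 'tall' set and its thresholds are invented purely for illustration):

```python
def tall_membership(height_cm: float) -> float:
    """Graded membership in the fuzzy set 'tall': 0 below 150 cm,
    1 above 190 cm, and a linear ramp in between."""
    if height_cm <= 150:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 150) / 40

# Someone of 175 cm is 'somewhat tall' (0.62) rather than simply in or out.
for height in (145, 160, 175, 188, 200):
    print(height, round(tall_membership(height), 2))
```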

But even then, that is not enough. What we are really looking to do is encapsulate human experience, which is difficult to measure, let alone encapsulate, because experience is different for each person and a lot goes on in our subconscious.

The project Vicarious is hoping to model a universal approach on a large scale, but this won't be the first go. Doug Lenat, who created AM (the Automated Mathematician), began a similar project 30 years ago: Cyc, which contains much encoded knowledge. This time, a lot of information is already recorded and won't need encoding, and our computers are much more powerful.

But, for AI to work properly we have to keep adding to the computer's knowledge base, and to do that, even if the knowledge is not fuzzy, we still need a human. A computer cannot do that, nor discover new things, unless we are asking the computer to reason in a very small world with a small number of constraints, which is what a computer does when it plays chess or copies art or does maths. That is the reality.
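As a rough picture of what reasoning in a very small world looks like in the traditional knowledge representation style, here is a toy sketch (the facts, rules and names are all invented for illustration): everything the system can ever conclude is already implied by the hand-encoded facts and rules a human gave it.

```python
# Hand-encoded facts and if-then rules, with naive forward chaining.
facts = {("shakey", "is_robot"), ("office", "lights_on")}

# Each rule: if all the conditions are in the fact base, add the conclusion.
rules = [
    ({("shakey", "is_robot"), ("office", "lights_on")}, ("shakey", "can_navigate")),
    ({("office", "lights_changed")}, ("shakey", "is_confused")),
]

# Keep applying rules until nothing new can be concluded.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

# Only ("shakey", "can_navigate") gets added; the second rule never fires
# because no human has told the system that the lights changed.
print(facts)
```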

There has to be a limit to the solution space, and a limit on the rules, because of the size of the computer. And, for every inventive DeepMind Go move there are a million more which don't make sense, like the computer that decided to get more points by flipping the boat around rather than engaging in the boat race. Inventive, creative, sure, but not useful. How could the computer know this? Perhaps via the Internet we could link every last thing to each other and create an endless universal reasoning thing, but I don't see how you would do that without the constraints exploding exponentially, and then the whole solving process could grind to a halt after chugging away problem solving forever – that is, if we could figure out how to pass information everywhere without redundancy (so not mesh networking, no) and get a computer to know which sources are reliable – let's face it, there's a lot of rubbish on the Internet. To say nothing of the fact that we still have no idea how the brain works.
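To give a feel for how quickly that explosion happens, here is a back-of-the-envelope sketch using the often-quoted figure of roughly 35 legal moves per chess position (the numbers are purely illustrative):

```python
# With roughly b choices at each of d steps there are b**d possible
# sequences to consider, which outgrows any computer very quickly.
branching_factor = 35  # often-quoted average number of legal chess moves
for depth in (2, 4, 6, 8, 10):
    print(f"depth {depth}: about {branching_factor ** depth:,} sequences")
```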

The ghost in the machine and our hungry ghosts are alive and well. We are still afraid of being unworthy and that robots will take over the world – luckily the latter is only in fiction, well, the computing parts of it are. As for us and our feelings and yearnings, I can only speak for myself. And, my worthiness is a subject for another blog. That said, I can't wait for Westworld series 3.

 

Human-computer interaction, cyberpsychology and core disciplines

A heat map of the multidisciplinary field of HCI @ Alan Dix

I first taught human-computer interaction (HCI) in 2001. I taught it from a viewpoint of software engineering. Then, when I taught it again, I taught it from a design point of view, which was a bit trickier, as I didn't want to trawl through a load of general design principles which didn't absolutely boil down to a practical set of guidelines for graphical user interface or web design. That said, I wrote a whole generic set of design principles here: Designing Design, borrowing Herb Simon's great title: The Sciences of the Artificial. Then, I revised my HCI course again and taught it from a practical set of tasks so that my students went away with a specific skill set. I blogged about it in a revised, applied-just-to-web-design version blog series here: Web Design: The Science of Communication.

Last year, I attended an HCI open day, Bootstrap UX. The day in itself was great and I enjoyed hearing some new research ideas until we got to one of the speakers who gave a presentation on web design, I think he did, it's hard to say really, as all his examples came from architecture.

I have blogged about this unsatisfactory approach before. By all means use any metaphor you like, but if you cannot relate it back to practicalities then ultimately all you are giving us is a pretty talk or a bad interview question.

You have to put concise constraints around a given design problem and relate it back to the job that people do and which they have come to learn about. Waffling on about Bucky Fuller (his words – not mine) with some random quotes on nice pictures is not teaching us anything. We have a billion memes online to choose from. All you are doing is giving HCI a bad name and making it sound like marketing. Indeed, cyberpsychologist Mary Aiken, in her book The Cyber Effect, seems to think that HCI is just insidious marketing. Anyone might have been forgiven for making the same mistake listening to the web designer's empty talk on ersatz architecture.

Cyberpsychology is a growing and interesting field, but if it is populated by people like Aiken who don't understand what HCI is, nor how artificial intelligence (AI) works, then it is no surprise that The Cyber Effect reads like the Daily Mail (I will blog about the book in more detail at a later date, as there's some useful stuff in there but too many errors). Aiken quotes Sherry Turkle's book Alone Together, which I have blogged about here, and it makes me a little bit dubious about cyberpsychology; I am waiting for the book written by the neuroscientist with lots of brain-scan pictures to tell me exactly how our brains are being changed by the Internet.

Cyberpsychology is the study of the psychological ramifications of cyborgs, AI, and virtual reality, and I was like wow, this is great, and rushed straight down to the library to get the books on it to see what was new and what I might not know. However, I was disappointed because if the people who are leading the research anthropomorphise computers and theorise about metaphors about the Internet instead of the Internet itself, then it seems that the end result will be skewed.

We are all cyberpsychologists and social psychologists, now baby. It’s what we do. We make up stories to explain how the world works. It doesn’t mean to say that the stories are accurate. We need hard facts not Daily Mail hysteria (Aiken was very proud to say she made it onto the front page of the Daily Mail with some of her comments). However, the research I have read about our behaviour online says it’s too early to say. It’s just too early to say how we are being affected and as someone who has been online since 1995 I only feel enhanced by the connections the WWW has to offer me. Don’t get me wrong, it hasn’t been all marvellous, it’s been like the rest of life, some fabulous connections, some not so.

I used to lecture psychology students alongside the software engineering students when I taught HCI in 2004 at Westminster University, and they were excited when I covered cognitive science as it was familiar to them, and actually all the cognitive science tricks make it easy to involve everyone in the lectures and make the lectures fun. But when I made them sit in front of a computer and design and code up software as part of their assessment, they didn't want to do it. They didn't see the point.

This is the point: If you do not know how something works how can you possibly talk about it without resorting to confabulation and metaphor? How do you know what is and what is not possible? I may be able to drive a car but I am not a mechanic, nor would I give advice to anyone about their car nor write a book on how a car works, and if I did, I would not just think about a car as a black box, I would have to put my head under the bonnet, otherwise I would sound like I didn’t know what I was talking about. At least, I drive a car, and use a car, that is something.

If you don't use social media, and you just study people using it, what is that then? Theory and practice are two different things. I am not saying that theory is not important, it is, but you need to support your theory; you need some experience to evaluate the theory. Practice is where it's at. No one has ever said: Theory makes perfect. Yep, I've never seen that on a meme. You get a different perspective, like Jack Nicholson says to his doctor, Keanu Reeves, in Something's Gotta Give: Hey! We're not all doctors, baby. Reeves has seen things Nicholson hasn't and Nicholson is savvy enough to know it.

So, if you don’t know the theory and you don’t engage in the practice, and you haven’t any empirical data yourself, you are giving us conjecture, fiction, a story. Reading the Wikipedia page on cyberpsychology, I see that it is full of suggested theories like the one about how Facebook causes depression. There are no constraints around the research. Were these people depressed before going on Facebook? I need more rigour. Aiken’s book is the same, which is weird since she has a lot of references, they just don’t add up to a whole theory. I have blogged before about how I was fascinated that some sociologists perceived software as masculine.

In the same series I blogged about women as objects online, with the main point being that social media reflects our society and we have a chance with technology to impact society in good ways. Aiken takes the opposite tack and says that technology encourages and propagates deviant sexual practices (her words) – some I hadn't heard of – but for me it begs the question: If I don't know about a specific sexual practice, deviant or otherwise, until I learn about it on the Internet (Aiken's theory), then how do I know which words to google? It is all a bit chicken and egg and doesn't make sense. Nor does Aiken's advice to parents, which is: Do not let your girls become objects online. Women and girls have been objectified for centuries; technology does not do anything by itself, it supports people doing stuff they already do. And, like the HCI person I am, I have designed and developed technology to support people doing stuff they already do. I may sometimes inadvertently change the way people do a task when supported by technology, for good or for bad, but to claim that technology is causing people to do things they do not want to do is myth-making and fear-mongering at its best.

The definition of HCI that I used to use in lectures at the very beginning of any course was:

HCI is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them (ACM, 1992).

For me, human-computer interaction was and still remains Gestaltian: the whole is greater than the sum of the parts. By this I mean that the collaboration of a human and a computer is more than a human typing numbers into a computer and then waiting for the solution, or indeed typing sexually deviant search terms into a web crawler to find a tutorial. And, with the advent of social media, HCI is more than one person connecting to another, or broadcasting online, which is why the field of cyberpsychology is so intriguing.

But the very reason why I left the field of AI and went into HCI is this: AI reasons in a closed world, within the limits of the computational power you have available. There are limits. With HCI, that world opens up and the human gets to direct the computer to do something useful. Human to human communication supported by technology does something else altogether, which is why you might want the opinion of a sociologist or a psychologist. But, you don't want the opinion of the sociologist on AI when they don't understand how it works, have watched a lot of sci-fi and think that robots are taking over the world. Robots can do many things but it takes a lot of lines of code. And, you don't want the opinion of a cyberpsychologist who thinks that technology teaches people deviant sexual practices and encourages us all to literally pleasure ourselves to death (Aiken's words – see what I mean about the Daily Mail?) 'cos she read one dodgy story and linked it to a study of rats in the 1950s.

Nowadays, everyone might consider themselves to be a bit of an HCI expert and can judge the original focus of HCI, which is the concept of usability: easy to learn, easy to use. Apps are a great example of this, because they are easy to learn and easy to use, mainly though because they have limited functionality; that is, they focus on one small task, like getting a date, ordering a taxi, sharing a photo, or a few words.

However, as HCI professor Alan Dix says in his reflective Thirty years of HCI and also here about the future: HCI is a vast and multifaceted community, bound by the evolving concept of usability, and the integrating commitment to value human activity and experience as the primary driver in technology.

He adds that sometimes the community can get lost and says that Apple’s good usability has been sacrificed for aesthetics and users are not supported as well as they should be. Online we can look at platforms like Facebook and Twitter and see that they do not look after their users as well as they could (I have blogged about that here). But again it is not technology, it is people who have let the users down. Somewhere along the line someone made a trade-off: economics over innovation, speed over safety, or aesthetics over usability.

HCI experts are agents of change. We are hopefully designing technology to enhance human activity and experience, which is why the field of HCI keeps getting bigger and bigger and has no apparent core discipline.

It has a culture of designer-maker, which is why at any given HCI conference you might see designers, hackers, techies and artists gathering together to make things. HCI has to exist between academic rigour and exciting new tech; no wonder it seems to not be easy to define. But as we create new things, we change society and have to keep debating areas such as intimacy, privacy, ownership and visibility, as well as what seems pretty basic, like how to keep things usable. Dix even talks about having human–data interaction: as we put more and more things online, we need to make sense of the data being generated and interact with it. There is new research being funded into trust (which I blogged about here). And Dix suggests that we could look into designing for solitude and supporting users to not respond immediately to every text, tweet, digital flag. As an aside, I have switched off all notifications, my husband just ignores his, and it just boggles my mind a bit that people can't bring themselves to be in charge of the technology they own. Back to the car analogy, they wouldn't have the car telling them where they should be going.

Psychology is well represented in HCI, AI is well represented in HCI too. Hopefully we can subsume cyberpsychology too, so that the next time I pick up a book on the topic, it actually makes sense, and the writer knows what goes on under the bonnet.

Technology should be serving us, not scaring us, so if writers could stop behaving like 1950s preachers who think society is going to the dogs because they view how people embrace technology in the same way they once viewed rock'n'roll and the television, we could be more objective about how we want our technological progress to unfold.

Web design (2): Get the picture


A collaborative medium, a place where we all meet and read and write.
Tim Berners-Lee

[Part 2 of 7 : 0) intro, 1) story, 2) pictures,  3) users, 4) content, 5) structure, 6) social media, 7) evaluation]

The first picture ever uploaded onto the Internet was a Photoshopped GIF of a female comedy group at CERN called Les Horribles Cernettes. Tim Berners-Lee uploaded the image to show that the Internet could be much more than physics laboratories sharing data worldwide.

The links above complain that it is a dreadful first image for making history, but I think that is in part because Berners-Lee wanted to make a point about what the Internet could be, so the content was the least of his worries. It wasn’t about the content. It was about the Internet being a place where we all meet. And, this is what is ultimately so liberating about our digital culture. We all get a say in what makes culture. And, perhaps physicists have different ideas about what is culturally important which, after all, is what makes The Big Bang Theory so brilliant and funny.

However, if we look at the ancient cave paintings found on the island of Sulawesi, Indonesia, of hand prints and pig-deer, we get a very different feeling. Archaeologists believe that they are at least 39,000 years old and are among the oldest examples of figurative art, but cannot say for sure what they represent. They are beautiful and I look at them with awe, which is probably why some archaeologists speculate that they represent a belief system the artists held. Or perhaps they are a world view, like the cave of swimmers found in the Sahara. These paintings are only 8,000 years old, but have given rise to the theory that the Sahara was a place where people used to swim, before climate change turned it into a desert. We may never know.

I asked my girls what they thought the pictures of hands and beasts and swimmers meant. One said: This is me. Remember me. The other said: Spread my imagination. In other words, my girls think these images were drawn so that the artists could make their mark, record and share their worldview and be remembered, which I believe is why people create today whether it is images or words.

The science research website Greater Good asked seven artists: Why do you make art? And they got the same responses as the ones my junior-school girls gave me, with a couple of additions/variations:

Making art for fun and adventure; building bridges between themselves and the rest of humanity; reuniting and recording fragments of thought, feeling, and memory; and saying things that they can’t express in any other way.

When they asked the hip-hop artist KRS-One, he said:

Put a writing utensil in any kid’s hand at age two or three. They will not write on a paper like they’ll later be socialized to do, they will write on the walls. They’re just playing. That’s human. Graffiti reminds you of your humanity, when you scrawl your self-expression on the wall.

Which is so true. The ancient images were drawn on the wall. They are self-expression and remind us of our humanity, which is why they are so moving. Interestingly, hurried, scrawled graffiti has been found on ancient monuments and on the walls in Pompeii. And, in Rome, on a church wall, the first words of Italian graffiti, or Vulgar Latin, were written like a response, in the vernacular, representing the ordinary person's thoughts. Today, graffiti is shorthand for unsolicited markings on private or public property and is usually considered to be vandalism. Yet, some of it is breathtaking and elaborate. There are three categories of graffiti, as described in a fabulous Atlantic article: tourist graffiti (‘John wuz here’), inner-city graffiti (tagging and street art), and toilet graffiti (latrinalia). Graffiti is a way of people contributing to the conversation, like when people leave their comments and links below.

As is painting, so is poetry

The Roman poet Horace's phrase ut pictura poesis (as is painting, so is poetry) made the link between word and image, which has kept the art world busy for centuries. Aristotle's theory of drama considered the balance of lexis (speech) and opsis (spectacle) in tragedy. So we can see that ancient theories of memory use words and images, which no doubt inspired the more modern and controversial Dual Coding Theory, which says that when someone is learning a new word, if a meaningful picture is given alongside it, the learner will retain it more easily than if there were no accompanying picture. This is reminiscent of the ubiquitous meme: lovely quotation, lovely image, shared experience, which has a gestalt feel of something meaningful.

Hieroglyphics

The first written language was a language of images – Hieroglyphics. However, the appreciation of their meaning was lost until the decoding of the Rosetta Stone, which took so long because the code breakers thought they were decoding images. It was only when they realised that the Hieroglyphics were a language, and needed to be treated as such, that they decoded the stone.

Like all languages, Hieroglyphics are an organised form of communication because you can’t build something as grand as the Pyramids without communicating clearly and communication is a way of advancing humanity. However, Hieroglyphics began as decorative symbols for priests – a gift of sacred signs given from the God Thoth – and were used to record the meaning of life and religion and magic. These were too elaborate for merchants, who adopted a simpler version to preserve their transactions, until Hieroglyphics fell out of favour for the more practical cursive Coptic script, which gave way to Arabic and Latin, languages we recognise today, in which communication was preserved and recorded to enrich future generations.

Images reward us

Research, particularly in the field of neuroesthetics, the study of how the visual brain appreciates visual art, shows us that art is a rewarding experience. It is not necessarily the message itself which the viewer finds rewarding, it is how it is delivered. That is to say, it is not what is painted, it is how it is painted that lights up the brain's reward centre. And, we prefer images to photographs, because the brain is free to interpret meaning, even though it ultimately prefers to see a representation of what is in nature. And why wouldn't it?

The aesthetics of nature

In nature we find so many pleasing patterns. We are also attracted to art and people who are aesthetically pleasing. The golden ratio is a pattern which appears in nature and has been used in art, as has symmetry. The most beautiful people have symmetrical faces and the most average facial features. We are naturally attracted to beautiful people in paintings and real life.

And, we are also influenced by them, which marketers have long recognised. They use lovely images to wrap their products in, knowing that we consumers will be more willing to consume something which looks beautiful. This is known as the art infusion effect.

It is the same for newspapers, pictures sell more copy. The Illustrated London News was created in 1842 and had 60,000 subscribers in that year alone, after someone realised that newspapers sold more copies when they had pictures in them, especially ones which showed a face or place. But it wasn’t until 1889 that photographs were used in newspapers.

Images online

And so it is online: Jakob Nielsen says that users pay close attention to photos and other images that contain relevant information but will ignore pictures used to jazz up web pages. Stock pictures of people in business situations get ignored, but pictures of the people who write the blogs or work in the companies get studied 10% longer than the written biographies which often accompany them. If you are selling a product you need high quality photographs which users can inspect and compare.

Users want to be educated by the images and find things out, which is ultimately why they are on your website. Edward Tufte has written extensively about excellence in statistical graphics and visualising data. He says that users are sophisticated individuals, so:

Give them the greatest number of ideas, in the shortest time, with the least ink, in the smallest space.

There is no need to dumb down. When a graphic is well created, patterns can be seen and understood on different levels.

In a great talk for An Event Apart, Jen Simmons, Designer and Developer Advocate at Mozilla, looks offline at magazines for inspiration and remembers how there was much experimentation and creativity online until everyone adopted grids and fell into a rut. She also outlines ways of using responsive images, for leaner, faster pics, and highlights cool new and practical uses of imagery with the latest tags from W3C.

Images are communications which have the power to change us. Here are some:

Content aside, the URLs are precisely named to drive traffic via social media.

However, if all else fails, talk to your user and learn all about what they are looking for, before you share your beautiful art.

[Part 3:Web design: Getting to grips with your user’s experience]

Web design (4): Being content with your content


A collaborative medium, a place where we all meet and read and write.
Tim Berners-Lee

[Part 4 of 7 : 0) intro, 1) story, 2) pictures,  3) users, 4) content, 5) structure, 6) social media, 7) evaluation]

A website should be looked after and tended. It is not enough to create a great layout and visuals, you need to look after the content and have a strategy for keeping your website in great shape.

Content Curation

The terms curation and curating content are bandied about a lot. I like them because they emphasise that you have to take care of your website or app content, like a curator in a museum would.

In any exhibition, every artefact is linked and relates to the others so that a story is told as you work your way through the exhibition. The curator has spent a lot of time and effort creating an experience. And, so it is with the content strategist. Every piece of information on your website has to be relevant to your brand, message, themes, and communication plan, which all link back to the overall reason your website exists: What is your website for?

In her book The Elements of Content Strategy, Erin Kissane advises using detailed written recommendations, a content style guide and templates, for each page and wireframe within an information architecture. This is so the people involved in generating or curating the content can do so in a way which produces:

  • A consistent, site-wide tone of voice.
  • A clear strategy for cross-linking content site-wide.
  • Integrated content.
  • Skilfully used social and community input.
  • Accessible and usable multimedia content.

Years ago when I was in charge of my first website (‘Hello World!’), I asked someone if they would write a page or two for new arrivals to our lab. The resulting information was good and useful, but I rewrote some of it to keep the tone of the site consistent.

The person who had produced the original information was so offended, she didn't speak to me for a while, and there was bad feeling all round. Now I see that I was just curating my site. Had I been wiser and more experienced I could have offered some guidelines, in the way newspapers and magazines have an in-house style guide. Little did I know.

Wikipedia has got to be the largest example of great content co-creation. Anyone in the world can contribute, but the end result is one of a specific style and layout. A user can land on any page and feel that it is consistent and written in a similar way. There are several pages of instructions to ensure this look and feel, so that Wikipedia doesn't ever feel like a hodge-podge.

Interestingly enough, if you land on a page where the content guide has not been followed, say for example a page that is missing secondary links, then a banner at the top of the page will flag this deficiency up. This immediately allows the user to make a decision as to whether or not to use that information, and this leads the user to feel that the page is a work-in-progress. Overall it does not impact on the reputation of Wikipedia. The user still trusts Wikipedia.

Responsive Content

Looking at content and studying each word is for those wordsmiths who love words. It requires good editorial attention. Therefore, it is worth hiring someone who can work from the beginning with information architects and stakeholders to work out taxonomies and structure, so that the content guidelines and recommendations fit together beautifully.

Karen McGrane states quite clearly on A List Apart that responsive design won't fix your content. She has seen many a project fall apart at the end when people create beautiful, fast, responsive websites which serve up the same old content. No one has evaluated and redesigned the content and thought about how it will look on various devices.

Indeed, usability guru Jakob Nielsen feels the same way, and in his mobile design course advises the designer to cut features and content which are not core to the mobile use case and to defer all that secondary information. If the user wants an in-depth conversation, they know that they can go to the desktop version for all the extras.

Best Practices for Meaningful Content

Usability.gov provides a content strategy best practices list that you can use to question each piece of content. Does it:

  • Reflect your organisation's goals, your users' needs and your overall business message?
  • Use the same words as your users?
  • Stay on message, up-to-date and factual?
  • Allow everyone to access it?
  • Follow style guides?
  • Allow itself to be easily found internally and externally?

Persuading the user

Ultimately, with content, what we are trying to do is to persuade the user to buy our product, or take some action, like donate money. We can’t afford to bore our users and waffle on. We carefully craft our conversation and entertain them.

Colleen Jones, in her book Clout: The Art and Science of Influential Web Content, says that this has always been done with rhetoric, which is now a bit of a lost art which we need to regain. For, ultimately, rhetoric is the study of human communication.

We are communicating our message, our story, and so we need to make sure, as Kissane says, that we: Define a clear, specific purpose for each piece of content.

We need to get to the core of our message.

No Lorem Ipsum

In a great article on AListApart.com, Ida Aalen talks about getting to know what the core information of a site is and then designing around that. She uses the example below, taken from the Norwegian Cancer Society's lung cancer webpage, to demonstrate that there is a lot less needed on a page than stakeholders think. She calls this designing around the core, and if you design around the core content and message, rather than all the bits and pieces everyone feels should be mentioned on the homepage or elsewhere, then the design itself is very easy to do.

With the content in place, no one is designing pages full of Lorem Ipsum or Hello World text, and decisions are made as to where and when each piece of information is put and how it links to the next.

Aalen has found that designing this way has led to increased user (or audience) engagement and increased revenue generation. This is because the audience can do things more easily and quickly. There is no extraneous content distracting them from their goals and the business goals, and content becomes a business asset.

Content Marketing Strategy

Once you have all your sharp content, it becomes easier to create a content marketing strategy, which is a different process from your content strategy. This one is solely concerned with encouraging your audience to engage. Social media is a wonderful tool, but no one really knows how it works, which is why you need a good marketing plan.

Well crafted content is too good not to be shared. But first it needs to be structured.

[Part 5]