Alone Together three years on: Is social media changing us?

You are not alone – Oprah Winfrey

Alone Together (1)

Three years ago, I watched social psychologist Sherry Turkle’s TED talk (2015) and then read her book Alone Together: Why We Expect More From Technology and Less From Each Other (2011), which prompted me to write a blog post called Alone Together: Is social media changing us?

Rereading my blog post, I see that my opinion hasn’t changed and, on checking, neither has Turkle’s. She now consults on Reclaiming Conversation™, to stop the flight from face-to-face conversation.

I am not so sure that we don’t want to talk face to face at all; rather, technology gives us the option to avoid those particular prickly peeps we’d rather not see in person, if we can.

Added to that, I don’t believe that technology is taking us to places we don’t want to go. We have no idea what we are doing online or where we need to be, and I am tired of hearing technology described as an unstoppable force outside of our control as if it were freak weather or a meteorite zooming towards earth about to destroy us all. Economics is often the driver of technological advancement and human decisions drive economics.

Glorious technology

Our behaviour online and towards technology reflects us in all our glory – the good, bad and the ugly – along with all our hopes and fears. I do not believe that we expect more from technology and less from each other. Instead, I believe that we turn to technology to plug the gaps and find solace in those moments when we feel alone, afraid, unloved, and indeed sadly, sometimes, unloveable.

Life can be crushingly hard, and many of us know that there are certain people in our lives with whom we will never have the rich, robust and trusting relationships Turkle believes have been eroded by digital technology. Some people are just not up to the job. It may be the same with our friendships online but the hope is there.

Many of us just want to get in and out of any given, often potentially stressful, situation – work, meetings, the playground, the hospital, the dinner table, events with relatives – without saying or doing anything to cause bad feeling, so that when we finally get to our tiny slivers of leisure time we can use them to fill ourselves up with what makes us feel better, rather than analysing what we didn’t get right.

If that means staring at a tiny screen then what’s wrong with that? One person I know spoke of their phone, and the access it gave them to an online friend, a person they hadn’t met at that point, as an Eden between meetings. And, why not? Whatever works.

That is not possible now

Turkle says that we use online others as spare parts of ourselves, which makes me believe that she hasn’t really engaged in normal conversation with people on Twitter, nor met people who make such connections offline. Many people make new friends on Twitter and meet up #irl a long time afterwards, and then only occasionally; their relationships are based mainly online, rather like families who live a long way from each other. It doesn’t mean the relationship is less real or not important. It just means the people are not physically there, which might be difficult, but we don’t want to lose contact with them because we love them. Maya Angelou said something really beautiful about this when she was on the Oprah show one time. She said:

Love liberates, it doesn’t bind. Love says: I love you. I love you if you’re across town. I love you if you move to China. I love you. I would like to be near you. I’d like to have your arms around me. I’d like to have your voice in my ear. But that is not possible now. So, I love you. Go.

We want to be in contact with people whom we love and appreciate, and who love and appreciate us in return. Those people who make us remember the best bits about ourselves. We like people who like us. It is that simple, and these people are not always in our daily lives. It’s not for nothing that vulnerability researcher Brené Brown says that people armour up every day just to get through the day.

To cultivate the sorts of relationships Turkle feels we should be having without our phones takes not only a lot of time and energy (and Brené Brown books) but a fearlessness which is not easy. Our greatest fear is social rejection, and a robust conversation can leave us badly bruised. Online it is slightly easier, because if a person drops out of your life you have some control over the day-to-day reminders, unless you turn stalker, which is understandable, as the grief of any online loss feels just as real. However, know this:

You are not alone

When we seek answers to our problems, whether emotional, like grief, or physical, spiritual, legal or fiscal, technology really does say: you are not alone.

In real life, difficult relatives and tough-love friends don’t make the best agony aunts, and may make us want to keep our questions to ourselves. We can forgo the embarrassment or shame by keeping our anonymity and seeking counsel elsewhere. Giving and receiving advice makes the world go round: the book Asking for a Friend traces three centuries of agony aunt columns, and even today, with all our technology, they remain as popular as ever.

But if we can’t wait for our favourite agony aunt or uncle, a quick google/bing or a peek round Quora can give us the reassurance we need. No, we are not shoddy, terrible people; our thoughts and feelings are completely normal. The article What’s wrong with Quora? says that we may prefer dialectic communication (a chat), say on Twitter, but we don’t use it in the same way as the didactic Q&A on Quora. We may never join Quora or Mumsnet, but plenty of us (lurkers) use these and similar forums to find answers and feel better about the difficult circumstances we often find ourselves in.

It is reassuring to know that someone somewhere has already asked the question, either under a real or false name, and some other lovely human has written something underneath which just may help.

I don’t really believe that any one of us is afraid of having a regular conversation because we have a phone. Turkle mentions research done on teenagers a lot, but they are a specific user group and shouldn’t be taken as representative of the general population, nor of the future. How many teenagers want to talk to anyone? The teenage years are torture. As adults, however, because of the way society is set up, we often have to spend time with people we wouldn’t choose to, at work or in families. In the past we may have tried harder, felt shittier, been robust, or at least told ourselves we were. Nowadays it is more acceptable, a relief even, to be alone together, and to save our thoughts and feelings for those we love and who love us in return, wherever and whenever they may be.

Sociability amongst strangers

At school pick-up one day, I walked over to a mum whose kid plays with mine. She was staring at her mobile phone, not typing or speaking, so it didn’t feel like I was interrupting anything when I said hi. She looked up at me and immediately looked back down at her phone. I stood awkwardly, wondering what to do next. Then another mum came over and said hi. Mobile-phone mum looked up, immediately put her phone in her pocket, and began an animated conversation with the new mum.

Sociologist Sherry Turkle says that even a silent phone disconnects us: it signals that any conversation can be interrupted at any time, because the phone has an equal claim on the now. In this way, Turkle believes, mobile technologies erode our empathy for other people.

I find this an old-fashioned view. Turkle and others are basically saying that technology is a thing outside of us, an unstoppable force over which we have no control and which carries us away to places we don’t want to go.

I beg to differ. Like Marshall McLuhan, I believe that technology is an extension of us and of how we behave. And, more importantly, we can choose how to use it, and we must take responsibility for our actions. Mobile-phone mum is a perfect example: she knew exactly what she was doing when she wordlessly wielded her phone at me and then put it away for the next mum.

The smartphone in and of itself is an amazing invention. It is a mini-computer, which was all people could talk about back in 2007 during some usability research I did for Orange. It thrills me every day, I kid you not, to hold so powerful a device in my hand (see Augmenting Humans and Travels without my phone).

I think this is because I was fifteen years old when my parents first got a phone in our house, and I’d barely gotten used to the excitement of it ringing when I went off to university, where I had no phone number to give to people. I would go to the phone box if I wanted to phone someone. As a student in France, I could only make a phone call if I had money and had remembered to go to the tabac to buy a phone card. I wonder how different life would have been, and indeed how different life is for students today, with a mobile phone and instant access to anyone.

Back then, I wandered around the world unreachable. Unless you knew my address and wrote me a letter, or you came to visit, you couldn’t contact me. Sometimes I was lonely. I spent all my time in shared spaces, indoors and out, private and public (parks and cafes, flats and universities), alone and with people, friends and strangers. In fact, one time I was sat in the park in Chambéry when a friend I hadn’t seen in weeks, who had moved to the Dordogne, wandered across and said: Thank God you’re here. I was running out of places to look and was worried you’d gone away. I’ve nowhere else to stay tonight.

Feeling at home in shared spaces can be difficult, and so designing public spaces to make them seem more friendly, safe and accessible remains a fascinating area of research. In Jane Jacobs’s classic book The Death and Life of Great American Cities, and in Bill Hillier’s Space Syntax, the question often is: how do we make public spaces more sociable?

Many people think that the mobile phone is an invasion of the public by the private. Dom Joly’s I’m on the phone sketch is as funny today as it was when mobile phones were new. Similarly, last summer in the Louvre, I couldn’t get near the Mona Lisa because it had a billion people in front of it taking selfies.

Today, as I write this I think, well why not? Why not have a Mona Lisa selfie? Why not talk really loudly on your phone in public? Why not take up space and behave like you belong?

It can be hard to feel like somewhere public is familiar and friendly, but with easy connection to the Internet anywhere and anytime, people can use their phones to engage with their location: reading restaurant reviews and historical information, finding other people nearby, and of course taking a selfie. There is much research into how we can redefine public spaces with mobile technology so that everyone can feel at home in a new or intimidating place, but already the phone helps.

In my time as a student, wandering about Europe, I didn’t have such a luxury, and as such was always at the mercy of strangers and exhausted by trying to figure out how things worked. Strange men would come and talk to me and give me their addresses if I sat in the park or on trains, or when I wandered down the street. I have fond memories of the French farmer who used to jump out when I cycled past on my way to or from Bourget du Lac. He wanted me to come to his farm and meet his son: Venez, venez, mademoiselle. My mother always warned me about strange men; she was worried I would end up behind someone’s wallpaper. (Funnily enough, strange women never approached me with their pockets full of written addresses. Would I have responded differently if they had?)

My first day in France, I cried on the bus. I didn’t have the right ticket because the bus worked differently to what I had expected. The driver let me on free, and the next day, when I was on another bus going the other way, he stopped his bus when he saw me, beeped his horn and waved at me. It never occurred to me that he was waving at me, so half a dozen people on the bus tapped me on the shoulder to let me know. Mortified, I waved back and cried again, and a couple of old ladies comforted me, saying Oooh-la-la, as I remembered how I had got off at the wrong stop, got lost, and given up, at which point I let some random bloke take me home in his car. With a phone, I would have known how the ticket system worked, where to go exactly, which stop and so on, and I would have cried a lot less. Without a phone, I saw just how kind people can be to a lost and lonely girl.

In the book Mobile Interfaces in Public Spaces, the authors consider the social and spatial changes which have come about with mobile phones by comparing the phone to the book, the Walkman and the iPod. These are all things we have used in the past to feel more at home, say on a train, in a cafe, or in the park. They allow us to be present and yet go elsewhere, as I have pondered in the blog Where do we go when we go online? That said, when I used to read the English paper (always a day old) in the park in Chambéry, a male Jehovah’s Witness would regularly appear. He wanted to check the football scores in the Premier League.

There is a worry that phones are disconnecting us from the world and the people around us: that interactions like these will no longer happen if we are too busy staring into our screens and everyone has access to the same information. But the authors above argue that mobile devices work as interfaces to public spaces and strengthen our connections to locations.

But what about our connection to people? Well! There are times when you just don’t want to be sociable or you require a different sociability, that of strangers, say who are enduring a long commute and need to carve out a space of their own whilst in a public space.

In July, I went to a talk at the British Library given by Alastair Horne, aka @pressfuturist, on ambient literature, in particular keitai shousetsu, the Japanese cell-phone novels of the noughties and the first mobile-phone fiction. They were written by young women in the same way that they were read: on a small screen, using text language, in serial form, during a commute. It was an intimate form of storytelling which led readers to suggest how the story should continue. The phone was often an integral part of the story, because writer and reader were writing and reading in similar circumstances, exploring the story as it unfolded, and their commute became an exciting shared experience.

Interactive fiction and text adventures are not new, but their transfer to the mobile phone was, as was the immediacy it offered. Ten years later, with better connectivity, ambient fiction is the next step. Stories are heard in a particular place, and the phone again becomes part of the story, the shared experience and the connection.

Shared experiences and connection give our lives meaning. But sometimes the reality of a moment or a person in a public space – like mobile-phone mum – can really let us down, which is why I love the power of the mobile phone in my hand. It can interrupt my reality and get me through a difficult moment and on to the next. Not all strangers are kind, but from experience, especially the experiences I have shared here with you today, I can definitely tell you that the unkind, phone-wielding ones are absolutely in the minority – an amazing thought which will make me cry with gratitude every time. My mother always told me that I would never get through life if I cried like that all the time. I am pleased to report that I have gotten through life exactly like that, yes, crying all the time. And I can say, I have been shown many kindnesses and I am immensely grateful.

Human-Computer Interaction Conclusions: Dialogue, Conversation, Symbiosis (6)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

I love the theory that our brains, like computers, use binary to reason, and when I was an undergraduate I enjoyed watching NAND and NOR gates change state.
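For anyone who never met a logic gate, here is a minimal Python sketch (the function names are mine, purely for illustration) of the NAND and NOR gates I used to watch change state:

```python
# NAND and NOR as truth functions over the binary inputs 0 and 1.
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1  # output flips to 0 only when both inputs are 1

def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1   # output is 1 only when both inputs are 0

# NAND is "universal": every other gate can be built from it.
# For example, NOT is a NAND with both inputs tied together.
def not_gate(a: int) -> int:
    return nand(a, a)

# Print the truth tables: each row shows a change of state.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  NAND={nand(a, b)}  NOR={nor(a, b)}")
```

Watching a gate's output flip as its inputs change is the change of state I mean below.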

As humans, we are looking for a change of state. It is how we make sense of the world: as in semiotics, we divide the world into opposites – good and bad, light and dark, day and night. Then we group information together into archetypes and symbols to imbue meaning, so that we can recognise things more quickly.

According to the binary-brain theory, our neurons do the same. They form little communities that work together to recognise food and not-food, shelter and not-shelter, friends and foes: the things which preoccupy us all and are classed as deficiency needs in Maslow’s Hierarchy of Needs.

Over on ResearchGate, there was a discussion about moving beyond binary which used this example:

Vegetarian diet vs Free Range Animals vs Battery Farmed Meat

If it were just a vegetarian diet v battery farming, it would be binary and an easy choice, but add in free range and we see the complexities of life: the sliding continuum from left to right. We know life is complex, but in decision-making it is easier to have just two options; we are cognitive misers and hate using up all our brainpower. We want to see a change of state or a decision made. Binary also reflects the natural rhythms of life, like the tide’s ebb and flow and the seasons’ growing and dying. It’s not just our neurons, it’s our whole bodies which reflect the universe, so patterns in nature resonate with us.

I began this series with an end in mind. As human-computer interaction (HCI) is an ever-expanding subject, I wanted to pin it down and answer this question: what am I thinking these days when I think about human-computer interaction?

For me, HCI is all about the complexities of the interaction between a human and a computer, which we try to simplify in order to make it self-service, so that everyone can use it. But with the growth of the Internet, HCI has become less about creating a fulfilling symbiosis between human and computer and more about economics. Throughout history, economics has been the driving force behind technological progress, but often at the cost of human suffering. It is often in the arts where we find a social conscience.

Originally, though, the WWW was conceived by Tim Berners-Lee to connect one computer to another so that everyone could communicate. This idea has been replaced by computers connecting through intermediaries owned by large companies, with investors looking to make a profit. The large companies not only define how we should connect and what our experience should be, but then they take all our data. And it is not just social media companies; it is government and other institutions who make our data available online without asking us first. They are all in the process of redefining what privacy and liberty mean, because we don’t get a choice.

I have for some time now gone about saying that we live in an ever-changing digital landscape, but it’s not really changing. We live the same lives; we are just finding different ways to achieve things, without necessarily reflecting on whether it is progress or not. Economics is redefining how we work.

And whilst people talk about community and tribes online, the more that services get shifted online, the more communities get destroyed. For example, by putting all post office services online, the government destroyed the post office as a local community hub, and yet at the time it seemed like a good thing: more ways to do things. But by forcing people to do something online you introduce social exclusion: either have a computer or miss out. Being excluded taps into so many human emotions, and we will give anything away to avoid feeling lonely and shunned, so any psychological responsibility we have towards technology is eroded, especially as many online systems are binary: give me this data or you cannot proceed.

Economically driven progress destroys things to make new things. One step forward, two steps back. Mainly it destroys context, and context is necessary in our communication, especially via technology.

Computers lack context, and if we don’t give humans a way to add context then we are lost. We lose meaning and we lose the ability to make informed decisions, and this is the same whether it is a computer or a human making the decisions. Humans absorb context naturally; robots need to ask. That is the only way to achieve symbiosis: by making computers reliant on humans, not the other way round.

And not everything has to go online. Some things, like me and my new boiler, don’t need to be online. It is just a waste of wifi.

VR man Jaron Lanier said in the FT’s Out to Lunch section this weekend that social media causes cognitive confusion because it decontextualises, i.e. it loses context: all communication is chopped up into algorithm-friendly shreds and loses its meaning.

Lanier believes in the data-as-labour movement, whereby huge companies would have to pay for the data they take from people. I guess if a system were transparent enough for users to see how and where their data goes, they might choose more carefully what to share, especially if they could see how it is taken out of context and used willy-nilly. I have blogged in the past about how people get used online and feel powerless.

So, way back when, in my post Alone Together: Is social media changing us?, I wrote that social media reflects us rather than taking us places we don’t want to go. I would now add that it is economics which changes us: progress driven by economics, and the trade-offs humans think it is OK for other humans to make along the way. We are often seduced by cold hard cash, as it does seem to be the answer to most of our deficiency needs. It is not social media per se, nor the Internet, which is taking us places we don’t want to go; it is the trade-offs of economics, and how we lose sight of the other humans around us when we feel scarcity.

So, since we work in binary, let’s think on this human v technology conundrum. Instead of viewing it as human v technology, what about human v economics? Someone is making decisions on how best to support humans with technology, but each time this is eroded by the bottom line. What about humans v scarcity?

Lanier said in his interview, I miss the future, talking about the one in which he thought he would be connected with others through shared imagination, which is what we used to do with stories and with the arts. Funny, I am starting to miss it too. As an aside, I have taken off my Fitbit; I am tired of everything it is taking from me. It is still possible online to connect imaginatively, but it is getting more and more difficult when every last space is prescribed and advertised all over, as people feel that they must be making money.

We need to find a way back to a technological shared imagination which allows us to design what’s best for all humanity, so that any economic gain lines up with social advancement for all, not just for the ones making a profit.

Let’s Talk! Human-Computer Interaction: Dialogue, Conversation, Symbiosis (2)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

I chuckled when I read Rebecca Solnit describing her 1995 life: she read the newspaper in the morning, listened to the news in the evening and received other news once a day by letter. Her computer was unconnected to anything. Working on it was a solitary experience.

Fast forward 20+ years, and her computer, like most other people’s, feels like a cocktail party, full of chatter and fragmented streams of news and data. We are living permanently in Alvin Toffler’s information overload: we create more data per second now than we did in a whole year in the 1990s. And yet data or information exchange is why we communicate in the first place, so I wanted to ponder here: how do we talk using computers?

Commandments

Originally, you had to ask computer scientists like me. We had to learn the commands of the operating system we were using: say, VAX/VMS on a DEC mainframe, UNIX on a networked workstation, or MS-DOS on a personal computer.

Then, we had to learn whatever language we needed. Some of the procedural languages I have known and loved: Assembler, Pascal, COBOL, Ada, C/C++, Java, X/Motif, OpenGL (I know I will keep adding to these as I remember them); the declarative Prolog; the (functional, brackety) LISP; and scripting languages like PHP, Perl, Python, JavaScript. The main problem with scripts is that they are not strongly typed, so you can quite easily pass a string where an integer is expected and cause all sorts of problems, and no compiler will tell you otherwise. They are like a hybrid of the old and the new. The old: computer time was expensive and humans were cheap, so we had to be precise in our instructions. The new: computers are cheap and humans cost more, so bang in some code and don’t worry about memory or space. This is OK up to a point, but if the human isn’t trained well, days may be lost.
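To make the point about weak typing concrete, here is a tiny, hypothetical Python example (the function name is mine): the mistake that a Pascal or Ada compiler would reject before the program ever ran only surfaces in a script when that exact line executes.

```python
def add_totals(a, b):
    # Nothing here declares that a and b must be numbers; in Pascal or
    # Ada the equivalent mistake would never get past the compiler.
    return a + b

print(add_totals(2, 3))    # 5, as intended

try:
    add_totals("2", 3)     # a string slips in where an integer was meant
except TypeError as err:
    # The error only appears at runtime, on this exact code path, which
    # is how days may be lost if the human isn't trained well.
    print("runtime failure:", err)
```

If the bad call sits on a code path that is rarely exercised, the bug can lurk for a long time before anyone sees it.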

As an undergraduate I had to learn about sparse matrices so as not to waste computer resources, and later, particularly using C++, I would patiently wait and watch programs compile. It was in those moments that I realised why people had warned me that to choose computers was to choose a way of life which could drive you mad.

How things have changed. Or have they?

Dialogue

When I used to lecture on human-computer interaction, I would include Ben Shneiderman’s eight golden rules of interface design. His book Designing the User Interface is now in its sixth edition.

When I read the first edition, there was a lot about dialog design, as back then there were a lot of dialog boxes (and American spellings) to get input/output going smoothly. Graphical user interfaces had taken over from the command line, with the aim of making computers easy for everyone to use. The 1990s were all about the efficiency and effectiveness of a system.

Just the other week, browsing the Psychology Now website, I came upon a blogpost about the psychological term locus of control. If it is internal, a person believes that their success depends on them; if it is external, their success is down to fate or luck. One of Shneiderman’s rules is: support internal locus of control. You make users feel that they can successfully achieve the task they set out to do on the computer, because they trust it to behave consistently and know what to expect next; things don’t move around like the ghost in the wall.

Shneiderman’s rules were an interpretation of dialogue in the sense of a one-to-one conversation (from the Greek dia, ‘through’, and logos, ‘speech’), to clarify and make coherent. That is to say: one person having a dialogue with one computer, exchanging information in order to achieve a goal.

This dialogue is rather like physicist David Bohm’s interpretation, which involves a mutual quest for understanding and insight. The user would be guided to put specific data into a dialog box, and the computer would use that information to give back new information, creating understanding and insight.

This one-to-one seems more powerful nowadays with Siri, Alexa and Echo, but it’s still a computer waiting on commands and either acting on them or searching certain areas online for results. Put this way, it’s not really much of a dialogue: the computer and the user are not coming to a new understanding.

Bohm said that a dialogue could involve up to 40 people and would have a facilitator, though other philosophers would call this a conversation. Either way, it is reminiscent of computer-supported cooperative work (CSCW), a term coined in 1984 for the study of how behaviour and technology interact: how computers facilitate, impair, or change collaborative activities (the medium is the message), whether people are in the same or different time zones, in the same or different geographical locations, working synchronously or asynchronously. CSCW has constantly changed and evolved, especially with the World Wide Web and social media.

I remember being at an AI conference in 1996 where everyone thought the answer to everything was to put it online and see what happened. But just because the WWW can compress time and space, it doesn’t follow that a specific problem can be solved more easily.

Monologue to Interaction

The first people online were really delivering a monologue. Web 1.0 was a read-only version of the WWW. News companies like the BBC published news as if in a newspaper. Some people had personal web pages on places like GeoCities. Web pages were static, structured with HTML and later styled with some CSS.

With the advent of Web 2.0, things got more interactive: backend scripting meant that webpages could serve up data from databases and update in response to users’ input. Social media sites like Flickr, YouTube, Facebook and Twitter were all designed for users to share their own content, and newspapers and news companies opened up their sites to let users comment and feel part of a community.
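The shift can be sketched in a few lines of Python (a toy stand-in, not any real site’s code): a Web 1.0 page is a fixed string, while a Web 2.0 page is regenerated from stored user content every time someone contributes.

```python
# Web 1.0: a static, read-only page; every visitor sees the same thing.
STATIC_PAGE = "<html><body><p>Welcome to my homepage!</p></body></html>"

# Web 2.0: the page is built from a data store that users write to.
comments = ["First!", "Great post."]   # stands in for a real database

def render_page(comments: list) -> str:
    items = "".join(f"<li>{c}</li>" for c in comments)
    return f"<html><body><ul>{items}</ul></body></html>"

comments.append("Thanks for sharing.")  # a user comments...
print(render_page(comments))            # ...and the page updates to include it
```

The monologue becomes interaction the moment readers can write to the same store the page is rendered from.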

But this chatter was not at all what Bohm had in mind; it is more like Solnit’s cocktail party, with people sharing whatever pops into their heads. I have heard people complain about the amount of rubbish on the WWW. However, I think it is a reflection of our society and the sorts of things we care about. Not everyone has the spare capacity or lofty ambition to advance humanity; some people just want to make it through the day.

Web 3.0 is less about people and more about things and semantics – the web of data. Already, the BBC uses the whole of the Internet, instead of a content management system, to keep current. Though as a corporation, I wonder, has the BBC ever stopped to ask: how much news is too much? Why do we need this constant output?

Social media as a cocktail party

But let’s just consider for a moment social media as a cocktail party. What an odd place, with some very strange behaviour going on:

  • The meme: At a cocktail party, imagine if someone came up to us talking like a meme: Tomorrow, is the first blank page of a 365 page book. Write a good one. We would think they had banged their head or had one shandy too many.
  • The hard sell: What if someone said: Buy my book, buy my book, buy my book in our faces non-stop?
  • The auto Twitter DM which says follow me on Facebook/Instagram/etc.: we’ve gone across, said hi, and the person doesn’t speak but slips us a note which says: Thanks for coming over, please talk to me at the X party.
  • The rant: we are having a bit of a giggle and someone comes up and rants in our faces about politics or religion; we try to ignore them, all the while feeling on a downer.
  • The retweet/share: That woman over there just said, this man said, she said, he said, look at this picture… And if it’s us, we then say: Thanks for repeating me all over the party.

Because it is digital, it becomes very easy to forget that we are all humans connected together in a social space. The result is that there’s a lot of automated selling, news reporting, and shouting going on. Perhaps it’s less of a cocktail party and more of a marketplace, with voices ringing out on a loop.

Today, no one would say that using a computer is a solitary experience. It can be noisy and distracting, and it’s more than enough to drive us mad.

How do we get back to a meaningful dialogue? How do we know it’s time to go home when the party never ends, the market never closes and we still can’t find what we came for?

[Part 3]

Human-computer interaction, cyberpsychology and core disciplines

A heat map of the multidisciplinary field of HCI © Alan Dix

I first taught human-computer interaction (HCI) in 2001, from a software engineering viewpoint. When I taught it again, I taught it from a design point of view, which was a bit trickier, as I didn’t want to trawl through a load of general design principles which didn’t boil down to a practical set of guidelines for graphical user interface or web design. That said, I wrote a whole generic set of design principles here: Designing Design, borrowing Herb Simon’s great title: The Sciences of the Artificial. Then, I revised my HCI course again and taught it as a practical set of tasks, so that my students went away with a specific skill set. I blogged about it, in a revised version applied just to web design, in a blog series here: Web Design: The Science of Communication.

Last year, I attended an HCI open day, Bootstrap UX. The day in itself was great and I enjoyed hearing some new research ideas, until we got to one of the speakers, who gave a presentation on web design. I think it was on web design; it’s hard to say really, as all his examples came from architecture.

I have blogged about this unsatisfactory approach before. By all means use any metaphor you like, but if you cannot relate it back to practicalities then ultimately all you are giving us is a pretty talk or a bad interview question.

You have to put concise constraints around a given design problem and relate it back to the job that people do and have come to learn about. Waffling on about Bucky Fuller (his words – not mine) with some random quotes on nice pictures is not teaching us anything. We have a billion memes online to choose from. All you are doing is giving HCI a bad name and making it sound like marketing. Indeed, cyberpsychologist Mary Aiken, in her book The Cyber Effect, seems to think that HCI is just insidious marketing. Anyone might be forgiven for making the same mistake after listening to the web designer’s empty talk on ersatz architecture.

Cyberpsychology is a growing and interesting field, but if it is populated by people like Aiken, who don’t understand what HCI is nor how artificial intelligence (AI) works, then it is no surprise that The Cyber Effect reads like the Daily Mail (I will blog about the book in more detail at a later date, as there’s some useful stuff in there, but too many errors). Aiken quotes Sherry Turkle’s book Alone Together, which I have blogged about here, and it makes me a little bit dubious about cyberpsychology. I am waiting for the book written by a neuroscientist, with lots of brain-scan pictures, to tell me exactly how our brains are being changed by the Internet.

Cyberpsychology is the study of the psychological ramifications of cyborgs, AI, and virtual reality. When I read that, I was like: wow, this is great, and rushed straight down to the library to get the books on it to see what was new and what I might not know. However, I was disappointed, because if the people who are leading the research anthropomorphise computers and theorise about metaphors for the Internet instead of the Internet itself, then the end result will be skewed.

We are all cyberpsychologists and social psychologists now, baby. It’s what we do

We are all cyberpsychologists and social psychologists now, baby. It’s what we do. We make up stories to explain how the world works. It doesn’t mean the stories are accurate. We need hard facts, not Daily Mail hysteria (Aiken was very proud to say she made it onto the front page of the Daily Mail with some of her comments). However, the research I have read about our behaviour online says it’s just too early to say how we are being affected, and as someone who has been online since 1995, I only feel enhanced by the connections the WWW has to offer me. Don’t get me wrong, it hasn’t all been marvellous; it’s been like the rest of life, some fabulous connections, some not so.

When I taught HCI at Westminster University in 2004, I used to lecture psychology students alongside the software engineering students. The psychology students were excited when I covered cognitive science, as it was familiar to them, and all the cognitive science tricks make it easy to involve everyone in the lectures and make them fun. But when I made them sit in front of a computer and design and code up software as part of their assessment, they didn’t want to do it. They didn’t see the point.

This is the point: if you do not know how something works, how can you possibly talk about it without resorting to confabulation and metaphor? How do you know what is and is not possible? I may be able to drive a car, but I am not a mechanic; I would not give advice to anyone about their car, nor write a book on how a car works. And if I did, I would not just think about the car as a black box, I would have to put my head under the bonnet, otherwise I would sound like I didn’t know what I was talking about. At least I drive a car and use a car; that is something.

Hey! We’re not all doctors, baby.

If you don’t use social media, and you just study people using it, what is that then? Theory and practice are two different things. I am not saying that theory is not important, it is, but you need to support your theory; you need some experience to evaluate it. Practice is where it’s at. No one has ever said: Theory makes perfect. Yep, I’ve never seen that on a meme. You get a different perspective, as Jack Nicholson’s doctor, played by Keanu Reeves, says in Something’s Gotta Give: Hey! We’re not all doctors, baby. Reeves has seen things Nicholson hasn’t, and Nicholson is savvy enough to know it.

So, if you don’t know the theory, and you don’t engage in the practice, and you haven’t any empirical data yourself, you are giving us conjecture, fiction, a story. Reading the Wikipedia page on cyberpsychology, I see that it is full of suggested theories, like the one about how Facebook causes depression. There are no constraints around the research. Were these people depressed before going on Facebook? I need more rigour. Aiken’s book is the same, which is weird since she has a lot of references; they just don’t add up to a whole theory. I have blogged before about how I was fascinated that some sociologists perceived software as masculine.

In the same series, I blogged about women as objects online, with the main point being that social media reflects our society, and that with technology we have a chance to impact society in good ways. Aiken takes the opposite tack and says that technology encourages and propagates deviant sexual practices (her words), some I hadn’t heard of. But for me it begs the question: if I don’t know about a specific sexual practice, deviant or otherwise, until I learn about it on the Internet (Aiken’s theory), then how do I know which words to google? It is all a bit chicken and egg and doesn’t make sense. Nor does Aiken’s advice to parents, which is: Do not let your girls become objects online. Women and girls have been objectified for centuries; technology does not do anything by itself, it supports people doing stuff they already do. And, like the HCI person I am, I have designed and developed technology to support people doing stuff they already do. I may sometimes inadvertently change the way people do a task when it is supported by technology, for good or for bad, but to claim that technology is causing people to do things they do not want to do is myth-making and fear-mongering at its best.

The definition of HCI that I used to use in lectures at the very beginning of any course was:

HCI is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them (ACM, 1992).

For me, human-computer interaction was and still remains Gestaltian: the whole is greater than the sum of the parts. By this I mean that the collaboration of a human and a computer is more than a human typing numbers into a computer and then waiting for the solution, or indeed typing sexually deviant search terms into a search engine to find a tutorial. And, with the advent of social media, HCI is more than one person connecting to another, or broadcasting online, which is why the field of cyberpsychology is so intriguing.

But the very reason why I left the field of AI and went into HCI is this: AI reasons in a closed world, within the limits of the computational power you have available. There are limits. With HCI, that world opens up and the human gets to direct the computer to do something useful. Human-to-human communication supported by technology does something else altogether, which is why you might want the opinion of a sociologist or a psychologist. But you don’t want the opinion of a sociologist on AI when they don’t understand how it works, have watched a lot of sci-fi, and think that robots are taking over the world. Robots can do many things, but it takes a lot of lines of code. And you don’t want the opinion of a cyberpsychologist who thinks that technology teaches people deviant sexual practices and encourages us all to literally pleasure ourselves to death (Aiken’s words – see what I mean about the Daily Mail?) ‘cos she read one dodgy story and linked it to a study of rats in the 1950s.

Nowadays, everyone might consider themselves a bit of an HCI expert, able to judge the original focus of HCI: the concept of usability, easy to learn, easy to use. Apps are a great example of this, because they are easy to learn and easy to use, though mainly because they have limited functionality; they focus on one small task, like getting a date, ordering a taxi, or sharing a photo or a few words.

However, as HCI professor Alan Dix says in his reflective Thirty years of HCI and also here about the future: HCI is a vast and multifaceted community, bound by the evolving concept of usability, and the integrating commitment to value human activity and experience as the primary driver in technology.

He adds that sometimes the community can get lost and says that Apple’s good usability has been sacrificed for aesthetics and users are not supported as well as they should be. Online we can look at platforms like Facebook and Twitter and see that they do not look after their users as well as they could (I have blogged about that here). But again it is not technology, it is people who have let the users down. Somewhere along the line someone made a trade-off: economics over innovation, speed over safety, or aesthetics over usability.

HCI experts are agents of change. We are hopefully designing technology to enhance human activity and experience, which is why the field of HCI keeps getting bigger and bigger and has no apparent core discipline.

It has a culture of designer-maker, which is why at any given HCI conference you might see designers, hackers, techies and artists gathering together to make things. HCI has to exist between academic rigour and exciting new tech; no wonder it is not easy to define. But as we create new things, we change society, and we have to keep debating areas such as intimacy, privacy, ownership and visibility, as well as what seems pretty basic, like how to keep things usable. Dix even talks about human–data interaction: as we put more and more things online, we need to make sense of the data being generated and interact with it. There is new research being funded into trust (which I blogged about here). And Dix suggests that we could look into designing for solitude, supporting users to not respond immediately to every text, tweet and digital flag. As an aside, I have switched off all notifications, my husband just ignores his, and it boggles my mind a bit that people can’t bring themselves to be in charge of the technology they own. Back to the car analogy: they wouldn’t have the car telling them where they should be going.

Psychology is well represented in HCI, and AI is well represented too. Hopefully we can subsume cyberpsychology as well, so that the next time I pick up a book on the topic, it actually makes sense, and the writer knows what goes on under the bonnet.

Technology should be serving us, not scaring us. If writers could stop behaving like 1950s preachers who think society is going to the dogs, viewing how people embrace technology the way they once viewed rock’n’roll and the television, we could be more objective about how we want our technological progress to unfold.