Let’s Talk! Human-Computer Interaction: Dialogue, Conversation, Symbiosis (2)

[Part 1]

I chuckled when I read Rebecca Solnit describing her 1995 life: She read the newspaper in the morning, listened to the news in the evening and received other news via letter once a day. Her computer was unconnected to anything. Working on it was a solitary experience.

Fast forward 20+ years and her computer, like most other people’s, feels like a cocktail party, full of chatter and fragmented streams of news and data. We are living permanently in Alvin Toffler’s information overload. We are creating more data per second than we did in a whole year in the 1990s. And yet data or information exchange is why we communicate in the first place, so I wanted to ponder here: How do we talk using computers?

Commandments

Originally, we had to learn the commands of the operating system we were using: say, VMS on a DEC VAX mainframe, UNIX on a networked workstation, or MS-DOS on a personal computer.

Then, we had to learn whatever language we needed. Some of the procedural languages I have known and loved are: Assembler, Pascal, COBOL, Ada, C/C++ and Java, plus libraries like X/Motif and OpenGL (I know I will keep adding to these as I remember them). Then there were the declarative Prolog, the (functional, brackety) Lisp, and scripting languages like PHP, Perl, Python and JavaScript. The main problem with scripts is that they are not strongly typed, so you can quite easily pass a string where an integer is expected and cause all sorts of problems, and there is no compiler to tell you otherwise. They are like a hybrid of the old and the new. The old, when computer time was expensive and humans were cheap, so we had to be precise in our instructions; and the new, when computers are cheap and humans cost more, so just bang in some code. Don’t worry about memory or space. This is ok up to a point, but if the human isn’t trained well, days may be lost.
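
To make that concrete, here is a minimal Python sketch (my own illustration, not from the original post) of the kind of silent mix-up that dynamic typing allows:

```python
def triple(n):
    # Meant for numbers, but nothing checks what the caller passes in.
    return n * 3

print(triple(100))    # 300 -- the intended use
print(triple("100"))  # '100100100' -- runs without complaint, silently wrong
```

A strongly, statically typed language would reject the second call before the program ever ran; Python happily repeats the string three times and carries on.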

As an undergraduate I had to learn about sparse matrices so as not to waste computer resources, and later, particularly using C++, I would patiently wait and watch programs compile. And it was in those moments that I realised why people had warned me that to choose computers was to choose a way of life which could drive you mad.
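
For anyone who missed that lecture, the idea is simple enough to sketch (a hypothetical illustration in Python rather than the C++ of the day): instead of storing every zero in a mostly empty matrix, store only the non-zero entries.

```python
# Dictionary-of-keys sparse matrix: only the non-zero entries are stored.
# A dense 10,000 x 10,000 matrix needs 100 million cells; this needs two.
sparse = {(0, 2): 3.5, (9999, 1): -1.0}

def get(matrix, row, col):
    # Anything we never stored is implicitly zero.
    return matrix.get((row, col), 0.0)

print(get(sparse, 0, 2))  # 3.5
print(get(sparse, 5, 5))  # 0.0 -- costs no memory at all
```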

How things have changed. Or have they?

Dialogue

When I used to lecture human-computer interaction, I would include Ben Shneiderman’s eight golden rules of interface design. His book Designing the User Interface is now in its sixth edition.

When I read the first edition, there was a lot about dialog design because, way back then, there were a lot of dialog boxes (hence the American spelling) to get input/output going smoothly. Graphical user interfaces had taken over from the command line with the aim of making computers easy to use for everyone. The 1990s were all about the efficiency and effectiveness of a system.

Just the other week I was browsing around the Psychology Today website and came upon a blogpost about the psychological term locus of control. If it is internal, a person thinks that their success depends on them; if it is external, their success is down to fate or luck. One of Shneiderman’s rules is: Support internal locus of control. That is, make users feel that they can successfully achieve the task they have set out to do on the computer, because they trust it to behave consistently and know what to expect next; things don’t move around like the ghost in the wall.

Shneiderman’s rules were an interpretation of a dialogue in the sense of a one-to-one conversation (dia means through, logos can mean speech) to clarify and make coherent. That is to say: One person having a dialogue with one computer by the exchange of information in order to achieve a goal.

This dialogue is rather like physicist David Bohm’s interpretation, which involves a mutual quest for understanding and insight. So, the user would be guided to put in specific data via a dialog box and the computer would use that information to give new information, to create understanding and insight.

This one-to-one seems more powerful nowadays with Siri, Alexa, Echo, but it’s still a computer waiting on commands and either acting on them or searching for the results in certain areas online. Put this way, it’s not really much of a dialogue. The computer and user are not really coming to a new understanding.

Bohm said that a dialogue could involve up to 40 people and would have a facilitator, though other philosophers would call this conversation. Either way, it is reminiscent of computer-supported cooperative work (CSCW), a term coined in 1984, which looked at behaviour and technology and how computers can facilitate, impair, or change collaborative activities (the medium is the message), whether people do this in the same or different time zones, in the same or different geographical locations, synchronously or asynchronously. CSCW has constantly changed and evolved, especially with the World Wide Web and social media.

I remember being at an AI conference in 1996 when everyone thought that the answer to everything was to just put it online and see what happened. But just because the WWW can compress time and space, it doesn’t follow that a specific problem can be solved more easily.

Monologue to Interaction

The first people online were really delivering a monologue. Web 1.0 was a read-only version of the WWW. News companies like the BBC published news like a newspaper. Some people had personal web pages on places like GeoCities. Web pages were static, built with HTML and later styled with some CSS.

With the advent of Web 2.0, things got more interactive with backend scripting, so that webpages could serve up data from databases and update pages in response to users’ input. Social media sites like Flickr, YouTube, Facebook and Twitter were all designed for users to share their own content. Newspapers and news companies opened up their sites to let users comment and feel part of a community.
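
The shift is easy to sketch. Here is a minimal, hypothetical Python example (mine, not from the post; an in-memory list stands in for the database): instead of a fixed HTML file, the page is assembled on the server from stored data each time someone asks for it.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

comments = ["First!", "Great post."]  # stands in for a database table

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The page is generated per request, so new comments appear
        # without anyone editing an HTML file by hand.
        body = "<h1>Comments</h1>" + "".join(f"<p>{c}</p>" for c in comments)
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

HTTPServer(("localhost", 8000), Handler).serve_forever()
```

Each request regenerates the page from the data, which is the essence of the read-write web.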

But this chatter was not at all what Bohm had in mind; it is more like Solnit’s cocktail party, with people sharing whatever pops into their heads. I have heard people complain about the amount of rubbish on the WWW. However, I think it is a reflection of our society and the sorts of things we care about. Not everyone has the spare capacity or lofty ambition to advance humanity; some people just want to make it through the day.

Web 3.0 is less about people and more about things and semantics – the web of data. Already, the BBC uses the whole of the internet instead of a content management system to keep current. Though as a corporation, I wonder, has the BBC ever stopped to ask: How much news is too much? Why do we need this constant output?
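
To give a flavour of what the web of data means in practice, here is a hypothetical sketch (the identifiers are invented for illustration) of content described as machine-readable triples rather than pages:

```python
# Linked data boils down to (subject, predicate, object) triples.
# On the real web of data these would be URIs in RDF; plain strings here.
triples = [
    ("bbc:article-42", "about", "topic:Climate_change"),
    ("bbc:article-42", "published", "2017-06-01"),
    ("bbc:article-57", "about", "topic:Space_exploration"),
]

# A machine can query relationships instead of scraping pages:
climate_articles = [s for (s, p, o) in triples
                    if p == "about" and o == "topic:Climate_change"]
print(climate_articles)  # ['bbc:article-42']
```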

Social media as a cocktail party

But let’s just consider for a moment social media as a cocktail party: what an odd place, with some very strange behaviour going on:

  • The meme: At a cocktail party, imagine if someone came up to us talking like a meme: Tomorrow is the first blank page of a 365-page book. Write a good one. We would think they had banged their head or had one shandy too many.
  • The hard sell: What if someone said: Buy my book, buy my book, buy my book in our faces non-stop?
  • The auto Twitter DM which says follow me on Facebook/Instagram/etc.: We’ve gone across and said hi, and the person doesn’t speak but slips us a note which says: Thanks for coming over, please talk to me at the X party.
  • The rant: We are having a bit of a giggle and someone comes up and rants in our faces about politics or religion; we try to ignore them, all the while feeling on a downer.
  • The retweet/share: That woman over there just said, this man said, she said, he said, look at this picture… And, if it’s us, we then say: Thanks for repeating me all over the party.

It is easy to forget that we are all humans connected together in a digital space when in that social space there’s a lot of automated selling, news reporting, and shouting going on. Perhaps it’s less of a cocktail party and more of a marketplace with voices ringing out on a loop.

Today, no one would say that using a computer is a solitary experience; it can be noisy and distracting, and it’s more than enough to drive us mad.

How do we get back to a meaningful dialogue? How do we know it’s time to go home when the party never ends and the market never closes and we still can’t find what we came for?

Virtual Presence: Where do we go when we go online?

Steve Mann, Augmented Reality Man

I spent most of Sunday morning staring into the eyes of spiritual teacher Eckhart Tolle. I was in my garden in London and he was at home in Vancouver giving a Sounds True webinar on The Power of Presence. Tolle was demonstrating to me and the other 100,000+ people on the webinar that it can be useful to connect with another human being who is free of mind, even on a screen.

Tolle’s demonstration of thought-less presence was a continuation of The Power of Now, in which he discusses how we only have the now. Nothing happens in the past or future; our senses, perceptions, feelings and thoughts all make up the now. He extended this on Sunday by defining presence as being aware of ourselves as a perceiving consciousness deep in the essence of now.

And this reminded me of a question I have been pondering for some time now: Where do we go when we go online?

As Tolle talked about the surface of now whilst I was staring into the screen at him, I was conscious of the external world outside of me as well as my focus on him on a screen; that is to say, I was peripherally aware of the garden I was in, I could hear the birds tweet, the traffic go by, and what he was saying, all at once. Then, when he was telling me to feel my breath and my inner body aliveness, I focused completely on my presence. Tolle said that I was entering the now: first the external or surface now, and then the internal or deep now of my unseen thoughts and feelings.

And this was all working until I began to wonder about presence: our physical presence, like mine in the garden, and our virtual presence when we are connecting to the Internet. At which point I missed what he was saying; I was off wondering:

Where do we go in the space? Is it a connection to our own thoughts and inner fire as I discussed in Lighting the Fire and The Space Between Us? Is it a connection to a collective unconscious, as Jung believed, or a collective consciousness, as Deepak Chopra believes? Or, is the Internet an external world of ideas as Plato postulated?

Tolle mentioned during his webinar that when he introduces language to describe presence as consciousness, it creates a duality, which reminded me of Descartes and his theory of Cartesian dualism, with the mind and body as separate. But some scientists and artists don’t feel this way and think that our embodiment needs an upgrade, as our bodies don’t keep up with our ever expanding technology which expands our minds.

The Internet is a medium which expands our capacity for thought, for ideas, for information and it demonstrates perfectly how the medium is the message. This medium – the Internet – expands us and influences how the message is perceived and so, creates a symbiotic relationship.

We talk about going online or being online. And when we talk about the Internet, which after all is just a network of computers, we talk about it as a space which we navigate, which we surf, which we go back or forward in. Is it a mental space for us? If so, what happens to our physical one? Where is our presence?

I have been online and had access to the Internet for over two decades now, and I have often gotten lost online – not so much in hyperspace, but lost myself completely, lost all sense of time and space, or specifically any idea of where I was: during, say, a UNIX talk session, which would split the screen in two so you could see both sides of the conversation, or during chats on Facebook Messenger, or DMs on Twitter, when both parties have treated this asynchronous feature as a chat in real time. According to Tolle, this is because I have identified with, in this case, the chat; I’ve let them/it take me over and I am no longer in the now. I have been drawn into unconsciousness, to which I would add: I have been drawn into the collective unconscious. But then most of us have had this experience when reading a book or in the cinema, well before we all went online.

Research into literary realism – a 19th century art movement which we might call sociology nowadays – has established that human comprehension and language cannot encompass reality in its entirety. We may have a partial understanding which comes from our experiences and senses in the now, but most of what we understand is largely based in concepts, or mental representations.

So, since we are limited by our senses, perceptions and feelings which make up the now, it makes sense that we are easily led and go elsewhere; we fall into the collective unconscious. A while back, in the Moments in modern technology blogpost, I talked about flow, and the gap, and falling into other people or into an online video or argument, as I couldn’t quite figure out whether technology was causing us to miss moments or not – were we absent or present? Tolle says that being conscious of our presence in a moment is the way we feel super alive. Being taken over by thoughts and triggers is being absent.

In the field of literary theory, absence and presence have long been debated, and it is understood that people can be made to believe that they are somewhere they are not, or in the presence of people and objects that do not actually exist. Our suspension of disbelief, as Coleridge put it, whilst reading text on a page allows us to go online and enter virtual spaces.

Virtual architecture and design create social norms in virtual spaces and affect how people use and communicate in a given space, for they follow the cues offered. So, if an online group meets in a virtual lecture theatre with a lecturer at the front, they will behave quite differently than if they meet in a virtual coffee shop, and it will impact how a student learns.

As I said in Games, Storytelling and Ludology, the more the environment demands of us, along with giving our senses all the information they need – sight, sound, touch (haptic feedback) – the more complete it feels. And our minds don’t really know, or care, if it is real or not.

Sculpted virtual environments aside, even in text-only chats, we still lose ourselves online. I believe it is our desire to connect and experience and be experienced which really drives our minds, not the technology. It is our willingness to want to reach out. We are hardwired for connection and shared experiences are a quick way to connect. As Tolle says: When you are really present you are not looking past or future or comparing you are no longer a person… you and the now are one and the same… you can understand experientially or conceptually.

The yogis say that experience can be Nirguna (formless) and Saguna (with form), and I see now that this means: if we give it form, we break it down conceptually and then it is just a partial understanding. A formless experiential experience expands us and influences us.

I think that is what we do online: we experience experientially in the now, and when we come back from being online – like on TV after an ad break, when a presenter says: Welcome back, as if we’d been somewhere – perhaps it is then that we interpret conceptually.

If we, as Tolle recommends, learn to cultivate a stillness inside us against which everything happens, then it will be easier to retain a sense of self online, a sense of presence, and our virtual and physical selves will be aligned.

However, if you are like me – I lose myself everywhere and anywhere, and yet people often tell me that I have great presence – be reassured: I’ve gotten lost a million times online, but I always find my way home.

The ghosts of AI

I fell in love with Artificial Intelligence (AI) back in the 1990s when I went to Aberdeen University as a post-graduate Stalker, even though I only signed up for the MSc in AI because it had an exchange program which meant that I could study in Paris for six months.

And even though they flung me and my pal out of French class for being dreadful students (je parle le C++ – I speak C++), and instead of Paris I ended up living in Chambéry (which is so small it mentions the launderette in the guidebook), it was a brilliant experience – most surprisingly of all because it left me with a great love of l’intelligence artificielle: robotics, machine learning, knowledge-based systems.

AI has many connotations nowadays, but back in 1956 when the term was coined, it was about thinking machines and how to get computers to perform tasks which humans, i.e., life with intelligence, normally do.

The Singularity is nigh

Lately, I have been seeing lots of news about robots and AI taking over the world, and the idea that the singularity – that moment when AI becomes so powerful that it self-evolves and changes human existence – is soon. The singularity is coming to get us. We are doomed.

Seriously, the singularity is welcome round my place to hold the door open for its pal and change my human existence any day of the week. I have said it before: Yes please, dear robot, come round, manage my shopping, wait in for Virgin Media because they like to mess me about, and whilst you are there do my laundry too, thank you.

And this got me thinking. One article said the singularity is coming in 2029, which reminded me of all those times the world was going to end according to Nostradamus, Old Mother Shipton, the Mayan Calendar, and even the Y2K bug. As we used to say in Chambéry: Plus ça change, plus c’est la même chose – the more things change, the more they stay the same. To be honest, we never, ever said that, but my point is that our fears don’t change, even when dressed up in a tight shiny metallic suit. Nom d’une pipe!

We poor, poor humans: we are afraid of extinction, afraid of being overwhelmed, overtaken, and found wanting. True to form, I will link to Maslow’s hierarchy of needs and repeat that we need to feel safe and we need to feel that we are enough. Our technology may be improving – not fast enough as far as I am concerned – but our fears, our hopes, our dreams, our aspirations remain the same. As I say in the link above, we have barely changed since Iron Age times, and yet we think we have because we buy into the myth of progress.

We frighten ourselves with our ghosts. The ghosts which haunt us: In the machine, in the wall, and in our minds where those hungry ghosts live – the ones we can never satisfy.

The ghost in the machine

The ghost in the machine describes the Cartesian view of the mind–body relationship: that the mind is a ghost in the machine of the body. It is quoted in AI because, after all, it is a philosophical question: What is the mind? What is intelligence? And it remains a tantalising possibility, especially in fiction, that somewhere in the code of a machine or a robot there is a back door, or cellular automata – a thinking part which, like natural intelligence, is able to create new thoughts and new ideas as it develops. The reality is that the philosopher Gilbert Ryle, who coined the term, was mocking Cartesian dualism, and Arthur Koestler, who borrowed it for his book, talked about the human ability to destroy itself with its constant repeating patterns in the arena of political–historical dynamics, using the brain as the structure. The idea that there is a ghost in the machine is an exciting one, which is why fiction has hung onto it like a will-o’-the-wisp and often uses it as a plot device, for example, in The Matrix (there are lots of odd bits of software doing their own thing) and I, Robot (Sonny has dreams).
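
As an aside, the cellular automata mentioned above are easy to play with. Here is a minimal sketch (my own illustration, assuming nothing from the post) of Rule 110, a one-dimensional automaton whose trivially simple local rule produces surprisingly lifelike, unpredictable patterns – it is even known to be Turing-complete:

```python
# Rule 110: each cell's next state depends only on itself and its two
# neighbours, yet the global pattern is rich enough to compute anything.
RULE = 110
cells = [0] * 60 + [1]  # a single live cell at the right edge

for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % len(cells)])) & 1
        for i in range(len(cells))
    ]
```

Run it and a chaotic triangular lace unfolds from one cell – no ghost required, just a rule applied over and over.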

Arthur C. Clarke touched on it when he said that any sufficiently advanced technology is indistinguishable from magic – something I say all the time, not least because it is true. When I look back to the first portable computer I used, and then at the power of the phone in my hand today, well, it is just magic.

That said, we want the ghost in the machine to do something, to haunt us, to surprise us, to create for us, because we love variety, discoverability, surprise, and the fact that we are so clever, we can create life. Actually we do create life, mysteriously, magically, sexily.

The ghost in the wall

The ghost in the wall is that feeling that things change around us for no discernible reason. HCI professor Alan Dix uses the term here. If HCI experts don’t follow standards and guidelines, the user ends up confused in an app without consistency, which gives the impression of a ghost in the wall moving things – ‘cos someone has to be moving the stuff, right?

We may love variety, discoverability and surprise, but it has to be logical, fitting within certain constraints and within the consistency of the interface with which we are interacting, so that we say: I am smart, I was concentrating, but yeah, I didn’t know that that would happen at all – in the same way we do after an excellent movie, when we leave thrilled at the cleverness of it all.

Fiction: The ghost of the mind

Fiction has a lot to answer for. Telling stories is how we make sense of the world; stories shape society and culture, and they help us feel truth.

Since we started storytelling, the idea of artificial beings which were given intelligence, or just came alive, has been a common trope. In Greek mythology, we had Pygmalion, who carved a woman from ivory and fell in love with her, so Aphrodite gave her life, and Pervy Pygmalion and his true love lived happily ever after. It is familiar – Frankenstein’s bride, Adam’s spare rib, Mannequin (1987). Other variations, less womeny-heterosexy focused, include Pinocchio, Toy Story, Frankenstein, Frankenweenie, etc.

There are two ways to go: The new life and old life live happily ever after and true love conquers all (another age old trope), or there is the horror that humans have invented something they can’t control. They messed with nature, or the gods, they flew too close to the sun. They asked for more and got punished.

It is control we are after even though we feel we are unworthy, and if we do have control we fear that we will become power crazed. And then, there are recurring themes about technology such as humans destroying the world, living in a post-apocalyptic world or dystopia, robots taking over, mind control (or dumbing down), because ultimately we fear the hungry ghost.

The hungry ghost

In Buddhism, the hungry ghosts appear when our desires overtake us and become unhealthy and insatiable; we become addicted to what is not good for us and miss out on our lives right now.

There is also the Hungry Ghost Festival, which remembers the souls who were once on earth and couldn’t control their desires, so they have gotten lost in the ether, searching, constantly unsatisfied. They need to be fed so that they don’t bother the people still on earth who want to live and have good luck and happy lives. People won’t go swimming because the hungry ghosts will drown them, dragging them down with their insatiable cravings.

Chinese character gui (鬼), meaning ghost (thanks @john_sorensen_AU)

In a lovely blogpost I read, the Chinese character above – which represents ghost but in English looks like gui, which is very satisfying given this is a techyish blog – is actually nothing to do with ghosts or disincarnate beings; it is more like a glitch in the matrix, a word to explain when there is no logical explanation. It is also used when someone behaves badly – you dead ghost. And perhaps it is linked to when someone ghosts you: they behave badly. No, I will never forgive you, you selfish ghost. Although, when someone ghosts you, they do the opposite of what you wish a ghost would do, which is hang around, haunt you, and never leave you. When someone ghosts you, you become the ghost.

And, for me the description of a ghost as a glitch in the matrix works just as well for our fears, especially about technology and our ghosts of AI – those moments when we fear and when we don’t know why we are afraid. Or perhaps we do really? We are afraid we aren’t good enough, or perhaps we are too good and have created a monster. It would be good if these fears ghosted us and left us well alone.

Personally, my fears go the other way. I don’t think the singularity will be round to help me any time soon. I am stuck in the Matrix doing the washing. What if I’m here forever? Please come help me through it, there’s no need to hold the door – just hold my hand and let me know there’s no need to be afraid, even if the singularity is not coming, change is, thankfully it always is, it’s just around the corner.

Human-computer interaction, cyberpsychology and core disciplines

A heat map of the multidisciplinary field of HCI © Alan Dix

I first taught human-computer interaction (HCI) in 2001, from the viewpoint of software engineering. Then, when I taught it again, I taught it from a design point of view, which was a bit trickier, as I didn’t want to trawl through a load of general design principles which didn’t boil down to a practical set of guidelines for graphical user interface or web design. That said, I wrote a whole generic set of design principles here: Designing Design, borrowing Herb Simon’s great title: The Sciences of the Artificial. Then I revised my HCI course again and taught it as a practical set of tasks, so that my students went away with a specific skill set. I blogged about it, in a version revised to apply just to web design, in the blog series Web Design: The Science of Communication.

Last year, I attended an HCI open day, Bootstrap UX. The day in itself was great and I enjoyed hearing some new research ideas, until we got to one of the speakers, who gave a presentation on web design – I think he did, it’s hard to say really, as all his examples came from architecture.

I have blogged about this unsatisfactory approach before. By all means use any metaphor you like, but if you cannot relate it back to practicalities then ultimately all you are giving us is a pretty talk or a bad interview question.

You have to put concise constraints around a given design problem and relate it back to the job that people do and which they have come to learn about. Waffling on about Bucky Fuller (his words – not mine) with some random quotes on nice pictures is not teaching us anything. We have a billion memes online to choose from. All you are doing is giving HCI a bad name and making it sound like marketing. Indeed, cyberpsychologist Mary Aiken, in her book The Cyber Effect, seems to think that HCI is just insidious marketing. Anyone might have been forgiven for making the same mistake listening to the web designer’s empty talk on ersatz architecture.

Cyberpsychology is a growing and interesting field, but if it is populated by people like Aiken who don’t understand what HCI is, nor how artificial intelligence (AI) works, then it is no surprise that The Cyber Effect reads like the Daily Mail (I will blog about the book in more detail at a later date, as there’s some useful stuff in there, but too many errors). Aiken quotes Sherry Turkle’s book Alone Together, which I have blogged about here, and it makes me a little bit dubious about cyberpsychology; I am waiting for the book written by the neuroscientist with lots of brainscan pictures to tell me exactly how our brains are being changed by the Internet.

Cyberpsychology is the study of the psychological ramifications of cyborgs, AI, and virtual reality. I was like: wow, this is great, and rushed straight down to the library to get the books on it to see what was new and what I might not know. However, I was disappointed, because if the people who are leading the research anthropomorphise computers and theorise about metaphors for the Internet instead of the Internet itself, then it seems that the end result will be skewed.

We are all cyberpsychologists and social psychologists now, baby. It’s what we do

We are all cyberpsychologists and social psychologists now, baby. It’s what we do. We make up stories to explain how the world works. It doesn’t mean to say that the stories are accurate. We need hard facts, not Daily Mail hysteria (Aiken was very proud to say she made it onto the front page of the Daily Mail with some of her comments). However, the research I have read about our behaviour online says it’s just too early to tell how we are being affected, and as someone who has been online since 1995, I only feel enhanced by the connections the WWW has to offer me. Don’t get me wrong, it hasn’t been all marvellous; it’s been like the rest of life, some fabulous connections, some not so.

I used to lecture psychology students alongside the software engineering students when I taught HCI at Westminster University in 2004. They were excited when I covered cognitive science, as it was familiar to them, and all the cognitive science tricks make it easy to involve everyone in the lectures and make them fun. But when I made them sit in front of a computer and design and code up software as part of their assessment, they didn’t want to do it. They didn’t see the point.

This is the point: If you do not know how something works, how can you possibly talk about it without resorting to confabulation and metaphor? How do you know what is and what is not possible? I may be able to drive a car, but I am not a mechanic; I would not give advice to anyone about their car, nor write a book on how a car works, and if I did, I would not just think about the car as a black box, I would have to put my head under the bonnet, otherwise I would sound like I didn’t know what I was talking about. At least I drive a car, and use a car; that is something.

Hey! We’re not all doctors, baby.

If you don’t use social media, and you just study people using it, what is that then? Theory and practice are two different things. I am not saying that theory is not important – it is – but you need to support your theory; you need some experience to evaluate it. Practice is where it’s at. No one has ever said: Theory makes perfect. Yep, I’ve never seen that on a meme. You get a different perspective, as Jack Nicholson says to his doctor, Keanu Reeves, in Something’s Gotta Give: Hey! We’re not all doctors, baby. Reeves has seen things Nicholson hasn’t, and Nicholson is savvy enough to know it.

So, if you don’t know the theory, and you don’t engage in the practice, and you haven’t any empirical data yourself, you are giving us conjecture, fiction, a story. Reading the Wikipedia page on cyberpsychology, I see that it is full of suggested theories, like the one about how Facebook causes depression. There are no constraints around the research. Were these people depressed before going on Facebook? I need more rigour. Aiken’s book is the same, which is weird since she has a lot of references; they just don’t add up to a whole theory. I have blogged before about how I was fascinated that some sociologists perceived software as masculine.

In the same series I blogged about women as objects online, with the main point being that social media reflects our society and we have a chance with technology to impact society in good ways. Aiken takes the opposite tack and says that technology encourages and propagates deviant sexual practices (her words) – some I hadn’t heard of – which, for me, raises the question: If I don’t know about a specific sexual practice, deviant or otherwise, until I learn about it on the Internet (Aiken’s theory), then how do I know which words to google? It is all a bit chicken and egg and doesn’t make sense. Nor does Aiken’s advice to parents, which is: Do not let your girls become objects online. Women and girls have been objectified for centuries; technology does not do anything by itself, it supports people doing stuff they already do. And, like the HCI person I am, I have designed and developed technology to support people doing stuff they already do. I may sometimes inadvertently change the way people do a task when supported by technology, for good or for bad, but to claim that technology is causing people to do things they do not want to do is myth making and fear mongering at its best.

The definition of HCI that I used to use in lectures at the very beginning of any course was:

HCI is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them (ACM, 1992).

For me, human-computer interaction was and still remains Gestaltian: the whole is greater than the sum of the parts. By this I mean that the collaboration of a human and a computer is more than a human typing numbers into a computer and then waiting for the solution, or indeed typing sexually deviant search terms into a search engine to find a tutorial. And, with the advent of social media, HCI is more than one person connecting to another, or broadcasting online, which is why the field of cyberpsychology is so intriguing.

But the very reason why I left the field of AI and went into HCI is this: AI reasons in a closed world, within the limits of the computational power you have available. There are limits. With HCI, that world opens up and the human gets to direct the computer to do something useful. Human-to-human communication supported by technology does something else altogether, which is why you might want the opinion of a sociologist or a psychologist. But you don’t want the opinion of a sociologist on AI when they don’t understand how it works, have watched a lot of sci-fi, and think that robots are taking over the world. Robots can do many things, but it takes a lot of lines of code. And you don’t want the opinion of a cyberpsychologist who thinks that technology teaches people deviant sexual practices and encourages us all to literally pleasure ourselves to death (Aiken’s words – see what I mean about the Daily Mail?) ‘cos she read one dodgy story and linked it to a study of rats in the 1950s.

Nowadays, everyone might consider themselves to be a bit of an HCI expert, able to judge the original focus of HCI, which is the concept of usability: easy to learn, easy to use. Apps are a great example of this, because they are easy to learn and easy to use – mainly, though, because they have limited functionality; that is, they focus on one small task, like getting a date, ordering a taxi, or sharing a photo or a few words.

However, as HCI professor Alan Dix says in his reflective Thirty years of HCI and also here about the future: HCI is a vast and multifaceted community, bound by the evolving concept of usability, and the integrating commitment to value human activity and experience as the primary driver in technology.

He adds that sometimes the community can get lost and says that Apple’s good usability has been sacrificed for aesthetics and users are not supported as well as they should be. Online we can look at platforms like Facebook and Twitter and see that they do not look after their users as well as they could (I have blogged about that here). But again it is not technology, it is people who have let the users down. Somewhere along the line someone made a trade-off: economics over innovation, speed over safety, or aesthetics over usability.

HCI experts are agents of change. We are hopefully designing technology to enhance human activity and experience, which is why the field of HCI keeps getting bigger and bigger and has no apparent core discipline.

It has a culture of designer-maker, which is why at any given HCI conference you might see designers, hackers, techies and artists gathering together to make things. HCI has to exist between academic rigour and exciting new tech; no wonder it seems hard to define. But as we create new things, we change society and have to keep debating areas such as intimacy, privacy, ownership and visibility, as well as what seems pretty basic, like how to keep things usable. Dix even talks about human–data interaction: as we put more and more things online, we need to make sense of the data being generated and interact with it. There is new research being funded into trust (which I blogged about here). And Dix suggests that we could look into designing for solitude and supporting users so that they do not respond immediately to every text, tweet, or digital flag. As an aside, I have switched off all notifications, my husband just ignores his, and it boggles my mind a bit that people can’t bring themselves to be in charge of the technology they own. Back to the car analogy: they wouldn’t have the car telling them where they should be going.

Psychology is well represented in HCI, AI is well represented in HCI too. Hopefully we can subsume cyberpsychology too, so that the next time I pick up a book on the topic, it actually makes sense, and the writer knows what goes on under the bonnet.

Technology should be serving us, not scaring us. So if writers could stop behaving like 1950s preachers who think society is going to the dogs – viewing how people embrace technology in the same way they once viewed rock’n’roll and the television – we could be more objective about how we want our technological progress to unfold.

Society of the mind: A Rhumba of Ruths

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind.

Recently, I watched the episode The Relaxation Integration (S11, E3) of The Big Bang Theory, in which Sheldon keeps dreaming of being Laid-Back Sheldon. At the end of the episode he holds a council of Sheldons to decide if Laid-Back Sheldon gets a say in Sheldon’s life. This got me thinking: What goes on in my council of Ruths? Is there a Laid-Back Ruth?

I don’t think there is. Not yet anyway. What do you even call a council of Ruths? A rising? A regiment? I looked up animal groups for one beginning with r. There was a raft, a run, a rabble, but I decided on a rhumba, which is defined as a complex, violent dance. Yes, I would definitely say that is going on inside my head. Who is in charge? I am worried that it is Emergency Ruth.

Emergency Ruth

Emergency Ruth woke me up last night. I was in a deep sleep and then, around 1am, she woke me up mid-panic, flailing and drowning. I smacked my husband around the head; he didn’t seem to notice, but sat up a couple of minutes later to wonder why he was awake at 1am.

Emergency Ruth is great. She is fabulous in a crisis. She pays attention to detail; she can spot what will go wrong miles ahead of everyone else. She always turns in a top-quality performance even when she is completely knackered and a nervous wreck. She can sprint down to A&E. She can stay up all night pressing buttons on a dialysis machine or a food pump, pass an NG tube, inject a tiny baby with a big needle, or herself, if no one else is around. She can give you, or a tiny cat, medicine on the hour every hour, with a syringe, all night, or help you write a paper and meet your deadline. She sucks it up, sleepless, fearless (well, she pretends she is) and does the thing that needs to be done: that medical procedure, that difficult conversation, that potential-to-get-nasty situation. Emergency Ruth is a total badass and she has my back.

But, in the middle of the night, when she should stand down, she is on red alert, fight or flight, and she wakes me several times a night, every night, with a false alarm. And if I am too tired and fall into the dark night of the soul, she cannot help me feel better, because that’s not what she does. Every morning she wakes me with a story of panic and a crick in my neck. She is intense.

Lately, I have taken to greeting her with: Good Morning, Doom. It makes me laugh and allows a tiny space in which Hippy Ruth can breathe and help unfurl my clenched heart.

Hippy Ruth

Sat chit ananda. I love Hippy Ruth. She had us vegetarian and organic for years. She rescues spiders and puts them through the cat flap. She recycles everything and wastes nothing. She worries about the environment, landfills, and data centres but talks to Techno Ruth who calms her, so that she truly believes that everything has a solution and all is well.

Hippy Ruth made us stop dyeing our hair, to grow it out and make it big and hippy once more, like it always was. She also makes us wear shorts at Bikram, so that we can embrace our body. She loves us. She loves our life. She is the best version of us. She is kind and compassionate and loves everyone, especially those people who behave badly towards us, for they are the most needy. (Emergency Ruth would eat them for breakfast.)

Hippy Ruth is happy on her mat or zazen cushion but equally happy to be interrupted part way through because she understands the tantra – or weaving – of the tapestry of life. Hippy Ruth knows that the mystical is to be found in the kitchen and the cuddles, as well as in the silence and the space of solitude. Always calm she hears the still small voice within.

Wild and Free Ruth

Wild and Free Ruth is an old, old joke between my husband and me. Though, writing this, I asked him: What about Sensible Ruth? He said: I don’t think there is one. Wild and Free Ruth hates routine and doesn’t manage well in one. When she gets out, she’s up all night living wild and free. She is all about connection and going with the flow. But she doesn’t have the wisdom or the yin and yang of Hippy Ruth, so she can fall into doing foolish things, and never says no even when she must. She is free-spirited, rolls with it, sees what happens. She has a massive appetite for life and the ability to see the funny side in anything.

We’ve had some great times: hitching round the Alps, sleeping on the beach in the Cinque Terre, flying to Kathmandu last minute and hoping our pal really meant it when she said she’d see us there, because Wild and Free Ruth always keeps a promise, even if it’s a crazy one.

According to my husband Wild and Free Ruth causes trouble even when under lock and key, and my mother used to say: You’d cause a row in an empty house, but that’s just their opinion.

Boro Ruth

Boro Ruth is the bit of us who knew exactly what she liked to do and how she liked to be, before a million other people got involved and told her not to.

She discovered very early on that she liked: yoga, rollerskating, making music, zoning out (Hippy Ruth calls it meditation), the mystical and magical, the library, avoiding boring conversation. The things we still love to do today.

She loves anything which will make her life easy, which is why she is fascinated by technology and can type faster than she speaks. Boro Ruth loves to talk, to learn, to teach – and September: falling leaves and the promise of a new academic year.

She lives life like it matters and knows, as all kids do, that there is no need to improve the self. There is only acceptance. We are all just part of a bigger dance, there’s nothing else to do but to enjoy it.

Team Ruth

Team Ruth loves company and finds that everything is better in a group. She loves doing Bikram and meditation in a studio with like minded people. She soaks up that fantastic group energy and shares the love.

Ruth’s best programming happens in teams. She loves solution sharing and working super hard so her bit is ready for the person who needs it. She loves the art of great documentation and beautifully commented code which someone else can understand even when she is not around.

And, then the celebration at the end. Celebrations are always better in a team.

In a fabulous podcast hosted by Sounds True, which I listened to four times – it is that good – mindfulness professor Jon Kabat-Zinn says that mindfulness is really about heartfulness, or open-heartedness, and not anything to do with the mind at all. I find this a really lovely thought and super encouraging. For as much as these personalities run around in my mind, with a few others I haven’t outlined [like Techno Ruth, who is a complete nerd, or Stalker Ruth (see what I did there?), who loves to research obsessively], it is a relief not to be limited by those personalities or stories, or any experiences I have had. As the Buddha said:

Nothing is to be clung to as I, me or my.

No clinging, but we don’t mind a cuddle as we welcome new joiners. I am looking forward to Laid-Back Ruth signing up, and, contrary to popular belief, I’m sure Sensible Ruth is already in there somewhere; I can’t wait till she’s ready to speak.

Group Hug, Ruths!