Human-computer interaction: Can you see what it is yet?

Check out the video of this interface on ted.com.

The recent furore over the 2012 Olympics logo reminds me of how people react to the user interfaces they find on everything they interact with, from websites to washing machines. If an interface, like a logo, is well designed, no one notices or mentions it. If it is difficult or unsightly, people complain loudly and, given a choice, simply won’t use it. Interaction designers, like IT support staff, are never thanked when all is well and severely criticised when interfaces cause users problems.

An Oxford mathematics professor once told me, after asking what I did, that he considered human-computer interaction (HCI) to be the null set. The misguided old duffer obviously couldn’t see that mathematics may be the foundation of the systems we use, but without HCI providing good interfaces, people wouldn’t, or couldn’t, actually use them. There are many examples of how tweaking a graphical user interface (GUI) here, or removing a log-in screen there, has saved time, reduced human error, and in some cases saved corporations millions of dollars a day.

What is HCI?

HCI is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them (ACM, 1992).

I like this definition because HCI is more than the sum of its parts. When a human uses a computer to solve a problem, far more happens in that collaboration than the human simply appreciating attractive buttons in the right place.

With the advent of GUIs and the internet, computers are now everywhere and everyone uses them. As interaction designers, we design systems so that people with very little idea of what is going on underneath the interface can interact with them. Compare that with human-human interaction: how often do we know what is going on in someone else’s mind? How often do we misunderstand each other because we are thinking of something else, or because the other person is using terminology or an approach different from the ones in our heads?

In the same way, designers often produce interfaces which do not match users’ expectations. This mismatch is known as the gulf of execution. Unless your users are a group of people trained to solve problems in the same way, like structural engineers, describing how a user works is a difficult process. Humans are sophisticated machines, and no one really knows how the brain and mind work.

Modelling the human

This does not, however, stop us trying to model our user. There are many models of the human we can use, especially since philosophers and scientists have been trying to understand what makes us tick since time began. We have mental models from artificial intelligence, we have user models, user profiles, user personas – all intent on describing how our users behave and think. Borrowing from anthropology, we adopt an ethnographic perspective and use cultural probes and video footage of our users to gather information about the context within which they work.

We look at their tasks and analyse them; we also look at the guidelines and manuals which document how they carry out their jobs. We borrow from cognitive science and design systems to help users’ memories, limiting tasks to Miller’s magic number seven (plus or minus two). Humans remember procedures better when they are described in both pictures and words – a concept known as dual encoding. We try to anticipate how users reason, borrowing from Charles Peirce and his deduction, abduction and induction.
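The chunking idea behind Miller’s magic number seven can be shown in a few lines. A minimal sketch (the function name and digit string are my own, purely illustrative):

```python
def chunk(items, size=4):
    """Split a sequence into pieces of at most `size` elements.

    Grouping items into a handful of small chunks - well under
    Miller's seven - is one way interfaces make long strings
    easier to hold in short-term memory.
    """
    return [items[i:i + size] for i in range(0, len(items), size)]

# A twelve-digit string is hard to remember as one unit...
digits = "442079460958"
# ...but as three chunks of four it stays within Miller's limit.
print(chunk(digits))  # ['4420', '7946', '0958']
```

This is why phone numbers, credit card numbers and licence keys are almost always displayed in groups rather than as one unbroken run of digits.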

Modelling the computer

We present users with prototypes, from paper sketches and pictures to software and videos. We want their reactions, and we try to follow the standards laid down by the platforms we design for. We want systems to feel familiar, so the majority of our software looks the same, whether it runs on the now-defunct Silicon Graphics machines or on Microsoft Windows. On the internet we use web standards in our code to satisfy W3C accessibility guidelines, and we follow usability guidelines to give our users standard internet interfaces.

Job done

Once we have our systems in place, we can evaluate how they are working. If no one complains or congratulates us, then the system is working well – or so the thinking goes. In practice, there are many measurements we can use. We measure:

  • ‘Utility’ – what the system can do
  • ‘Effectiveness’ and ‘efficiency’
  • ‘Usability’ – making systems easy to learn and easy to use

We may also want to measure satisfaction and entertainment.
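These measures can be made concrete with data from a usability test. A minimal sketch, assuming a simple task-completion study (the data, field names and functions here are hypothetical, loosely following the common framing of effectiveness as completion rate and efficiency as time on task):

```python
# Hypothetical results from a small usability test: for each
# participant, whether they completed the task and how long it took.
results = [
    {"completed": True,  "seconds": 40},
    {"completed": True,  "seconds": 55},
    {"completed": False, "seconds": 90},
    {"completed": True,  "seconds": 35},
]

def effectiveness(results):
    """Proportion of participants who completed the task."""
    return sum(r["completed"] for r in results) / len(results)

def efficiency(results):
    """Mean time on task, counting successful participants only."""
    times = [r["seconds"] for r in results if r["completed"]]
    return sum(times) / len(times)

print(f"effectiveness: {effectiveness(results):.0%}")
print(f"efficiency: {efficiency(results):.1f}s per successful task")
```

Satisfaction, by contrast, is usually gathered separately through questionnaires, since it cannot be read off from completion logs alone.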

WYSIWYG or WYSIATI

As our computers continue their move from the desktop to the mobile phone, the Palm Pilot, and into ubiquity, we need to balance usability with multi-functionality. The more functions an artefact offers, the less usable it becomes. Instead of “what you see is all you get”, you have to dig around to find the functions that make you happy. Systems are so complex nowadays that the times when what you see is all there is are long gone.

This is where HCI is most useful. Here are some of its obvious, but difficult to answer, research questions:

  • How can interaction be made clearer and more efficient, and offer better support for users’ tasks, plans and goals?
  • How can information be presented more effectively?
  • How can the design and implementation of good interfaces be made easier?

We need solid answers. It is no wonder that we borrow ideas from philosophy to engineering whilst constantly refining our methods. The interaction designer is a jack of all trades – but don’t be deceived, it is harder than being a master of one.
