The ghosts of AI

I fell in love with Artificial Intelligence (AI) back in the 1990s when I went to Aberdeen University as a post-graduate stalker, even though I only signed up for the MSc in AI because it had an exchange program which meant that I could study in Paris for six months.

And, even though they flung me and my pal out of French class for being dreadful students, and I ended up living in Chambéry (which is so small it mentions the launderette in the guidebook) instead of Paris, it was a brilliant experience, most surprisingly of all because it left me with a great love of l’intelligence artificielle: robotics, machine learning, knowledge-based systems.

AI has many connotations nowadays, but back in 1956 when the term was coined, it was about thinking machines and how to get computers to perform tasks which humans, i.e., life with intelligence, normally do.

The Singularity is nigh

Lately, I have been seeing lots of news about robots and AI taking over the world and the idea that the singularity – that moment when AI becomes so powerful that it self-evolves and changes human existence – is almost upon us. The singularity is coming to get us. We are doomed.

Seriously, the singularity is welcome round my place to hold the door open for its pal and change my human existence any day of the week. I have said it before: Yes please, dear robot, come round, manage my shopping, wait in for Virgin Media because they like to mess me about, and whilst you are there do my laundry too, thank you.

And, this got me thinking. One article said the singularity is coming in 2029, which reminded me of all those times the world was going to end according to Nostradamus, Old Mother Shipton, the Mayan Calendar, and even the Y2K bug. As we used to say in Chambéry: Plus ça change, plus c’est la même chose. To be honest, I never, ever said that, but my point is that our fears don’t change, even when dressed up in a tight shiny metallic suit. Nom d’une pipe!

We poor, poor humans: we are afraid of extinction, afraid of being overwhelmed, overtaken, and found wanting. True to form I will link to Maslow’s hierarchy of needs and repeat that we need to feel safe and we need to feel that we are enough. Our technology may be improving – not fast enough as far as I am concerned – but our fears, our hopes, our dreams, our aspirations remain the same. As I say in the link above, we have barely changed since Iron Age times, and yet we think we have because we buy into the myth of progress.

We frighten ourselves with our ghosts. The ghosts which haunt us: In the machine, in the wall, and in our minds where those hungry ghosts live – the ones we can never satisfy.

The ghost in the machine

The ghost in the machine describes the Cartesian view of the mind–body relationship, that the mind is a ghost in the machine of the body. It is quoted in AI because, after all, it is a philosophical question: What is the mind? What is intelligence? And it remains a tantalising possibility, especially in fiction, that somewhere in the code of a machine or a robot there is a back door or a cellular automaton – a thinking part which, like natural intelligence, is able to create new thoughts, new ideas, as it develops. The reality is that the writer who made the term famous, Arthur Koestler, was talking about the human ability to destroy itself with its constant repeating patterns in the arena of political–historical dynamics, using the brain as the structure. The idea that there is a ghost in the machine is an exciting one, which is why fiction has hung onto it like a will-o’-the-wisp and often uses it as a plot device, for example, in The Matrix (there are lots of odd bits of software doing their own thing) and I, Robot (Sonny has dreams).

Arthur C. Clarke talked about it when he said that any sufficiently advanced technology is indistinguishable from magic – something I say all the time, not least of all because it is true. When I look back to the first portable computer I used, and then at the power of the phone in my hand today, well, it is just magic.

That said, we want the ghost in the machine to do something, to haunt us, to surprise us, to create for us, because we love variety, discoverability, surprise, and the fact that we are so clever, we can create life. Actually we do create life, mysteriously, magically, sexily.

The ghost in the wall

The ghost in the wall is that feeling that things change around us without our understanding why. HCI prof Alan Dix uses the term here: if HCI experts don’t follow standards and guidelines, the user ends up confused in an app without consistency, which gives the impression of a ghost in the wall moving things, ‘cos someone has to be moving the stuff, right?

We may love variety, discoverability and surprise, but it has to be logical, to fit within certain constraints and within the consistency of the interface with which we are interacting, so that we say: I am smart, I was concentrating, but yeah, I didn’t know that that would happen at all – in the same way we do after an excellent movie, when we leave thrilled at the cleverness of it all.

Fiction: The ghost of the mind

Fiction has a lot to answer for. Telling stories is how we make sense of the world; stories shape society and culture, and they help us feel truth.

Since we started storytelling, the idea of artificial beings which were given intelligence, or just came alive, has been a common trope. In Greek mythology, we had Pygmalion, who carved a woman from ivory and fell in love with her, so Aphrodite gave her life and Pervy Pygmalion and his true love lived happily ever after. It is familiar – Frankenstein’s bride, Adam’s spare rib, Mannequin (1987). Other variations less womeny-heterosexy focused include Pinocchio, Toy Story, Frankenstein, Frankenweenie, etc.

There are two ways to go: The new life and old life live happily ever after and true love conquers all (another age old trope), or there is the horror that humans have invented something they can’t control. They messed with nature, or the gods, they flew too close to the sun. They asked for more and got punished.

It is control we are after even though we feel we are unworthy, and if we do have control we fear that we will become power crazed. And then, there are recurring themes about technology such as humans destroying the world, living in a post-apocalyptic world or dystopia, robots taking over, mind control (or dumbing down), because ultimately we fear the hungry ghost.

The hungry ghost

In Buddhism, hungry ghosts appear when our desires overtake us and become unhealthy and insatiable: we become addicted to what is not good for us and miss out on our lives right now.

There is also the Hungry Ghosts Festival which remembers the souls who were once on earth and couldn’t control their desires so they have gotten lost in the ether searching, constantly unsatisfied. They need to be fed so that they don’t bother the people still on earth who want to live and have good luck and happy lives. People won’t go swimming because the hungry ghosts will drown them, dragging them down with their insatiable cravings.

In a lovely blog I read, the Chinese character which represents ghost – romanised in English as gui, which is very satisfying given this is a techyish blog, though I can’t reproduce the beautiful character here – is actually nothing to do with ghosts or disincarnate beings; it is more like a glitch in the matrix, a word to explain when there is no logical explanation. It also explains when someone behaves badly – you dead ghost. And perhaps it is linked to when someone ghosts you: they behave badly. No, I will never forgive you, you selfish ghost. Although when someone ghosts you they do the opposite of what you wish a ghost would do, which is hang around, haunt you, and never leave you. When someone ghosts you, you become the ghost.

And, for me the description of a ghost as a glitch in the matrix works just as well for our fears, especially about technology and our ghosts of AI – those moments when we fear and when we don’t know why we are afraid. Or perhaps we do really? We are afraid we aren’t good enough, or perhaps we are too good and have created a monster. It would be good if these fears ghosted us and left us well alone.

Personally, my fears go the other way. I don’t think the singularity will be round to help me any time soon. I am stuck in the Matrix doing the washing. What if I’m here forever? Please come help me through it, there’s no need to hold the door – just hold my hand and let me know there’s no need to be afraid. Even if the singularity is not coming, change is; thankfully it always is, it’s just around the corner.

Human-computer interaction, cyberpsychology and core disciplines

A heat map of the multidisciplinary field of HCI © Alan Dix

I first taught human-computer interaction (HCI) in 2001. I taught it from a software engineering viewpoint. Then, when I taught it again, I taught it from a design point of view, which was a bit trickier, as I didn’t want to trawl through a load of general design principles which didn’t absolutely boil down to a practical set of guidelines for graphical-user interface or web design. That said, I wrote a whole generic set of design principles here: Designing Design, borrowing Herb Simon’s great title: The Sciences of the Artificial. Then, I revised my HCI course again and taught it from a practical set of tasks so that my students went away with a specific skill set. I blogged about it in a revised, applied-just-to-web-design blog series here: Web Design: The Science of Communication.

Last year, I attended an HCI open day, Bootstrap UX. The day itself was great and I enjoyed hearing some new research ideas, until we got to one of the speakers who gave a presentation on web design – I think he did, it’s hard to say really, as all his examples came from architecture.

I have blogged about this unsatisfactory approach before. By all means use any metaphor you like, but if you cannot relate it back to practicalities then ultimately all you are giving us is a pretty talk or a bad interview question.

You have to put concise constraints around a given design problem and relate it back to the job that people do and which they have come to learn about. Waffling on about Bucky Fuller (his words – not mine) with some random quotes on nice pictures is not teaching us anything. We have a billion memes online to choose from. All you are doing is giving HCI a bad name and making it sound like marketing. Indeed, cyberpsychologist Mary Aiken, in her book The Cyber Effect, seems to think that HCI is just insidious marketing. Anyone might have been forgiven for making the same mistake listening to the web designer’s empty talk on ersatz architecture.

Cyberpsychology is a growing and interesting field, but if it is populated by people like Aiken who don’t understand what HCI is, nor how artificial intelligence (AI) works, then it is no surprise that The Cyber Effect reads like the Daily Mail (I will blog about the book in more detail at a later date, as there’s some useful stuff in there but too many errors). Aiken quotes Sherry Turkle’s book Alone Together, which I have blogged about here, and it makes me a little bit dubious about cyberpsychology. I am still waiting for the book written by a neuroscientist with lots of brainscan pictures to tell me exactly how our brains are being changed by the Internet.

Cyberpsychology is the study of the psychological ramifications of cyborgs, AI, and virtual reality, and I was like wow, this is great, and rushed straight down to the library to get the books on it to see what was new and what I might not know. However, I was disappointed because if the people who are leading the research anthropomorphise computers and theorise about metaphors about the Internet instead of the Internet itself, then it seems that the end result will be skewed.

We are all cyberpsychologists and social psychologists now, baby. It’s what we do

We are all cyberpsychologists and social psychologists now, baby. It’s what we do. We make up stories to explain how the world works. It doesn’t mean the stories are accurate. We need hard facts, not Daily Mail hysteria (Aiken was very proud to say she made it onto the front page of the Daily Mail with some of her comments). However, the research I have read about our behaviour online says it’s just too early to say how we are being affected, and as someone who has been online since 1995 I only feel enhanced by the connections the WWW has to offer me. Don’t get me wrong, it hasn’t been all marvellous; it’s been like the rest of life, some fabulous connections, some not so.

I used to lecture psychology students alongside the software engineering students when I taught HCI at Westminster University in 2004. They were excited when I covered cognitive science as it was familiar to them, and all the cognitive science tricks make it easy to involve everyone in the lectures and make the lectures fun. But when I made them sit in front of a computer, design and code up software as part of their assessment, they didn’t want to do it. They didn’t see the point.

This is the point: If you do not know how something works how can you possibly talk about it without resorting to confabulation and metaphor? How do you know what is and what is not possible? I may be able to drive a car but I am not a mechanic, nor would I give advice to anyone about their car nor write a book on how a car works, and if I did, I would not just think about a car as a black box, I would have to put my head under the bonnet, otherwise I would sound like I didn’t know what I was talking about. At least, I drive a car, and use a car, that is something.

Hey! We’re not all doctors, baby.

If you don’t use social media, and you just study people using it, what is that then? Theory and practice are two different things. I am not saying that theory is not important, it is, but you need to support your theory, you need some experience to evaluate the theory. Practice is where it’s at. No one has ever said: Theory makes perfect. Yep, I’ve never seen that on a meme. You get a different perspective, as Jack Nicholson says to his doctor, played by Keanu Reeves, in Something’s Gotta Give: Hey! We’re not all doctors, baby. Reeves has seen things Nicholson hasn’t, and Nicholson is savvy enough to know it.

So, if you don’t know the theory and you don’t engage in the practice, and you haven’t any empirical data yourself, you are giving us conjecture, fiction, a story. Reading the Wikipedia page on cyberpsychology, I see that it is full of suggested theories, like the one about how Facebook causes depression. There are no constraints around the research. Were these people depressed before going on Facebook? I need more rigour. Aiken’s book is the same, which is weird since she has a lot of references; they just don’t add up to a whole theory. I have blogged before about how I was fascinated that some sociologists perceived software as masculine.

In the same series I blogged about women as objects online, with the main point being that social media reflects our society and we have a chance with technology to impact society in good ways. Aiken takes the opposite tack and says that technology encourages and propagates deviant sexual practices (her words) – some I hadn’t heard of – but for me it begs the question: If I don’t know about a specific sexual practice, deviant or otherwise, until I learn about it on the Internet (Aiken’s theory), then how do I know which words to google? It is all a bit chicken and egg and doesn’t make sense. Nor does Aiken’s advice to parents, which is: Do not let your girls become objects online. Women and girls have been objectified for centuries; technology does not do anything by itself, it supports people doing stuff they already do. And, like the HCI person I am, I have designed and developed technology to support people doing stuff they already do. I may sometimes inadvertently change the way people do a task when supported by technology, for good or for bad, but to claim that technology is causing people to do things they do not want to do is myth making and fear mongering at its best.

The definition of HCI that I used to use in lectures at the very beginning of any course was:

HCI is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them (ACM, 1992).

For me, human-computer interaction was and still remains Gestaltian: The whole is greater than the sum of the parts. By this I mean that the collaboration of a human and a computer is more than a human typing numbers into a computer and then waiting for the solution, or indeed typing sexually deviant search terms into a search engine to find a tutorial. And, with the advent of social media, HCI is more than one person connecting to another, or broadcasting online, which is why the field of cyberpsychology is so intriguing.

But the very reason why I left the field of AI and went into HCI is this: AI reasons in a closed world, within the limits of the computational power you have available. There are limits. With HCI, that world opens up and the human gets to direct the computer to do something useful. Human to human communication supported by technology does something else altogether, which is why you might want the opinion of a sociologist or a psychologist. But you don’t want the opinion of a sociologist on AI who doesn’t understand how it works, has watched a lot of sci-fi, and thinks that robots are taking over the world. Robots can do many things but it takes a lot of lines of code. And, you don’t want the opinion of a cyberpsychologist who thinks that technology teaches people deviant sexual practices and encourages us all to literally pleasure ourselves to death (Aiken’s words – see what I mean about the Daily Mail?) ‘cos she read one dodgy story and linked it to a study of rats in the 1950s.

Nowadays, everyone might consider themselves to be a bit of an HCI expert, able to judge the original focus of HCI, which is the concept of usability: easy to learn, easy to use. Apps are a great example of this, because they are easy to learn and easy to use, mainly though because they have limited functionality; that is, they focus on one small task, like getting a date, ordering a taxi, or sharing a photo or a few words.

However, as HCI professor Alan Dix says in his reflective Thirty years of HCI and also here about the future: HCI is a vast and multifaceted community, bound by the evolving concept of usability, and the integrating commitment to value human activity and experience as the primary driver in technology.

He adds that sometimes the community can get lost and says that Apple’s good usability has been sacrificed for aesthetics and users are not supported as well as they should be. Online we can look at platforms like Facebook and Twitter and see that they do not look after their users as well as they could (I have blogged about that here). But again it is not technology, it is people who have let the users down. Somewhere along the line someone made a trade-off: economics over innovation, speed over safety, or aesthetics over usability.

HCI experts are agents of change. We are hopefully designing technology to enhance human activity and experience, which is why the field of HCI keeps getting bigger and bigger and has no apparent core discipline.

It has a culture of designer-maker, which is why at any given HCI conference you might see designers, hackers, techies and artists gathering together to make things. HCI has to exist between academic rigour and exciting new tech; no wonder it seems to not be easy to define. But as we create new things, we change society and have to keep debating areas such as intimacy, privacy, ownership and visibility, as well as what seems pretty basic, like how to keep things usable. Dix even talks about having human–data interaction: as we put more and more things online, we need to make sense of the data being generated and interact with it. There is new research being funded into trust (which I blogged about here). And Dix suggests that we could look into designing for solitude and supporting users to not respond immediately to every text, tweet, digital flag. As an aside, I have switched off all notifications, my husband just ignores his, and it just boggles my mind a bit that people can’t bring themselves to be in charge of the technology they own. Back to the car analogy: they wouldn’t have the car telling them where they should be going.

Psychology is well represented in HCI, and AI is well represented in HCI too. Hopefully we can subsume cyberpsychology as well, so that the next time I pick up a book on the topic, it actually makes sense, and the writer knows what goes on under the bonnet.

Technology should be serving us, not scaring us. So if writers could stop behaving like 1950s preachers who think society is going to the dogs – viewing how people embrace technology in the same way they once viewed rock’n’roll and the television – we could be more objective about how we want our technological progress to unfold.

Sit. Feast on your blogs

My blogging tag cloud generated by

I have had this blog 11 years now. It feels like a lifetime ago when I first installed WordPress complete with the Kubrick WordPress theme as a place just for me to come and figure out what I thought.

Recently, I discovered my Top Posts for all days ending … which sounds very dramatic and very satisfying, so thought I would look at my most popular top 11 posts of all time and remember how I wrote them. In order of most popular first, here goes:

1) Stalkers in space and Facebook in your face, (February, 2007)

I wrote this blog as I was fascinated by someone’s reaction to me googling them, even though everyone else I knew had been online for years and so didn’t mind. But then, that was from an era where we decided what to put online; nowadays, because of genealogy websites and Companies House, there is a lot more information in the public domain about a person than they may even realise – anyone can find out anything. The Internet makes it super easy to become a Stalker!

But even now, this blog post gets read by someone every day, and in the top ten search terms of all time there are: facebook 1995, facebook, facebook screenshot, old facebook, early facebook screenshots, facebook webpage, facebook 2007.

The other three terms are: ruth stalker firth, design pattern, IT security.

I love search terms. They are fascinating. So, I was saddened when Google decided to keep search terms private, as I am a total nerd and love patterns (see 3) in statistics and words, which is why I find the above tag cloud completely beautiful. However, I do remember there were a lot of Stalker search terms which kept coming up and bringing people here.

And, people googling me helped me decide to put up an About page, as I hadn’t had one for a long time. I find About pages really interesting on other people’s websites, so I am thinking that people might want to know more about me. I added a Now page inspired by the NowNowNow initiative and I use it myself. It is like a to-do list.

2) User motivation: Maslow’s hierarchy of needs, (December, 2007)

I remember being very pregnant writing this and I had been already given the news that there was a problem with my unborn child’s kidneys. So, I came here to think about Crannogs and holidays instead of googling renal fetal problems and driving myself mad with worry.

For me, technology is all about people, and humans are the central factor in any design project. Maslow’s hierarchy is a lovely way of organising things, from social media (see 4) to chakras. He only included two women in his sample of people, but since women have rarely been written about, I am glad he used two. I can’t find anything better to organise our human experience, which is to be felt, seen, heard. Soon I will write about Maslow’s hierarchy of technology.

3) Using patterns to shape our world, (March, 2007)

I’ve long been excited about patterns. In my PhD research I looked for patterns in my big data and graphical-user interfaces, which reminds me of the time my husband and I were in a restaurant arguing about whether object-oriented design was good for graphical-user interface design, the people on the next table asked to be reseated far away from us.

I have written quite a few blogs about finding the patterns in storytelling, in data (see 10), and in design. This was the very first blog I wrote about it and it thrills me to see that it gets read nearly as often as the social media blogs.

4) Maslow’s hierarchy of social media, (April, 2015)

I love thinking about social media – again, what motivates people to share, which is the need to be experienced. This is one of my favourite blogs as it was the first time I figured out what social media was about and how we use it. From this blog came the social animal on social media series, which regularly gets hits because we like to know why we do what we do, and social media is fascinating.

5) Chemotherapy: The year of my hair, (October 2012)

My hair was always my crowning glory and people would comment on it all the time. It was big and black and beautiful, though for many years, out of a bottle. So, to be completely bald wasn’t much of a giggle even though it was only for four months. Sadly, it never grew back in quite the same way; my hair is a lot less curly now. When I took off my wig and had a shorn head, people used to tell me that I was brave for getting a haircut that short. It felt really nice and furry and my baby girls would rub my head.

Brave was the term people used again when I gave up the hair dye, so I am not surprised that Fifty shades of my grey hair, (December 2016) came in at no. 12 of all time even though it is relatively new. People like pictures to guide them through their own hair growth. I know I do. I still look at both sets of pictures to remember where I’ve been, because even now I want to dye my hair black, and so I remind myself how long it has taken to get where I am and how my dyed hair didn’t look very good anymore.

6) Cognitive Science for IT Security, (August, 2007)

This one was written for my students when I lectured at Westminster. It is one of my favourite subjects as it involves how we think and technology and how the two don’t always fit together too well. It was the saddest of days when I couldn’t lecture after my daughter was born, not least of all, because when I was ready to return the course had changed and this topic had disappeared because I had made it up and no one else had my unique skill set to teach it.

7) Why my coffee machine is so sexy, (February, 2007)

I have been in love with my coffee machine forever. My husband and I were newly married and were totally broke, and we spent a month’s rent money on this coffee machine which we ordered from a dodgy Italian website which didn’t say anything at all, so we didn’t know if they’d got the money, or if they really existed, or if we’d been ripped off. Ah, the joys of early international Internet shopping.

8) Bad design: Fresenius Applix Smart food pump, (December, 2008)

I took this one down as it attracted a lot of negativity. I talk about it here, but I reread it again today and it is a good blog, a solid UX review, and there are comments by people who agree with me, which I had forgotten about as I, like most humans, tend to remember the bad stuff more easily. What occurred to me today is that the blog is a demonstration of the medium is the message. People got so focused on the criticisms I had that they thought I was criticising the purpose of the food pump, which I wasn’t. I thought about putting it back up but then thought again. I would never write another blog like it and I only want to spread positivity.

After this post, and apart from one about augmented and virtual realities and wearables, I didn’t blog again until 2011, and when I did it was about WordPress. This was when I had just finished chemotherapy and was about to start radiotherapy and more surgery – hardly the time when I had the energy to think about things. Seriously though, would I listen to myself? I had two small children to look after, one who was about to have another big surgery too. I hadn’t slept in years. However, it was important to me to think about technology and people, it’s what I do, it’s what I’ve always done, so I read all of Alan Dix’s TouchIT and took notes so that I could feel more like myself. I lost the notes before I got the chance to put them online, but the experience in itself kept me going, so thank you Alan, for sharing your book-to-be online, it kept me going.

In 2012 I managed to blog about embodiment during chemotherapy and the experience of my daughter’s first day at school, which was really nice. It brought me back to me and helped me remember how I like to write.

9) Katie Hopkins’s #fatstory one year on (January, 2016)

This one is a pop psychology blog about why Katie Hopkins is so mean. It gets hits all the time and is always in my most popular this week. I have no idea why people want to read about her. I guess it is the same reason I needed to write about her. I just wanted to understand why someone would be that mean, which is probably why my blog on Prejudice: The social animal on social media (April 2016) comes in at no 13 on the all time blog hits.

10) Storytelling: Narrative, Databases, and Big Data (April, 2016)

I was asked to lecture the module introduction to databases and the notes were a bit dry so I wrote this blog for my students to let them know that while we were linking together small tables of ten rows, people working with databases have millions and millions of rows to manipulate. Database design is exciting and patterns are where it is at.

11) Bikram: Heat is the way to inner peace, (March, 2015)

I love yoga. I started doing yoga when I was 14 years old, and am a trained teacher (of course I am, if there’s a formal way of learning anything, you can count on me to be your most enthusiastic student. Sign me up!). Bikram is just another wonderful variation of this wonderful gift. I love the heat, the sweat, and the way my body feels bending over lots of times in a hot room. I would recommend Bikram to anyone. It is a super hard discipline and never gets any easier, but I love it.

And, I love blogging. I love this space of mine. I write slowly and at great length. I used to have Yoast installed, which tells you how to make your blogs more SEO friendly and says, basically: keep it to around 300 words, put the blog’s keyword in the H2 headers and in the title, and sprinkle the keyword through the text. Yawn! I switched it off.

I take my time to write my blogs as I am not doing them to impress a search engine. I edit a lot, otherwise I end up with a blog like this one which, as I reread it now, is a little disconnected and full of “it’s brilliant, I love it”. Pressing publish after grappling to understand something I didn’t before is just brilliant and yeah, I love it. I am so grateful to WordPress and Tim Berners-Lee for creating a platform for me to explore what’s on my heart, and for anyone who takes the time to read what I have written. Thank you.

Game theory & social media (3): What are you playing at?


[Part 3 of 4: Game theory & social media: Part 1, Part 2, Part 4]

Whatever else anything is, it ought to begin with being personal – Kathleen Kelly, You’ve got mail (1998)

Kermit drinking his tea and throwing shade makes me laugh. However, I think we all understand his frustration. It seems that in business and personal relationships, people play games. We may not know why, and we may not know the rules. But as we saw in part 2, before we react, we might want to find out more: if a game is being played, which one, and if we want to play or not.

Games, payoffs, and winning

A game is normally defined as having two or more players, each with a choice of possible strategies, and together these choices determine the outcome of the game. Each outcome has a payoff which is calculated numerically to represent its value. Usually, a player will want to get the biggest payoff possible.
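To make that concrete, here is a rough sketch in Python of a made-up two-player game – the strategy names and payoff numbers are invented purely for illustration:

```python
# A minimal two-player game written as a payoff table.
# Each entry maps a pair of strategies to (row player's payoff, column player's payoff).
payoffs = {
    ("post daily",  "post daily"):  (2, 2),
    ("post daily",  "post weekly"): (4, 1),
    ("post weekly", "post daily"):  (1, 4),
    ("post weekly", "post weekly"): (3, 3),
}

def payoff(row_strategy, col_strategy):
    """Look up the payoff pair for one play of the game."""
    return payoffs[(row_strategy, col_strategy)]

print(payoff("post daily", "post weekly"))  # (4, 1): the row player does well here, the column player doesn't
```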

Dominance, saddles, and mixed strategies

A strategy which gives a better payoff than any other, no matter what the other players do, is known as a dominant strategy, and a rational player would never play anything else when one exists – but it’s not always easy to identify which strategy, if any, is dominant.

So, players sometimes take a cautious approach which guarantees the best of the worst-case results (also known as the Saddle Point Principle). Other times there is no saddle point, so players have to mix their strategies at random and hope for the best. They can calculate the probabilities with which to mix their strategies and their chances of winning, and if their probability skills are not great they can play experimentally and record their results over, say, 30 plays (for statistical significance) to see which strategies work.
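As a sketch of the cautious approach, the Python snippet below checks a made-up zero-sum game for a saddle point (where the row player’s best worst-case equals the column player’s worst best-case), and falls back to 30 random plays if there isn’t one – the matrix is invented for illustration:

```python
import random

# Zero-sum game: entries are the row player's payoffs (the column player gets the negative).
matrix = [
    [3, 1, 4],  # row strategy A
    [2, 2, 2],  # row strategy B
    [0, 1, 5],  # row strategy C
]
rows, cols = len(matrix), len(matrix[0])

maximin = max(min(row) for row in matrix)                                    # cautious row player
minimax = min(max(matrix[r][c] for r in range(rows)) for c in range(cols))   # cautious column player

if maximin == minimax:
    print(f"Saddle point with value {maximin}: cautious play is stable.")
else:
    print("No saddle point: mix strategies at random and keep a record.")
    results = [matrix[random.randrange(rows)][random.randrange(cols)] for _ in range(30)]
    print("Average payoff over 30 random plays:", sum(results) / len(results))
```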

How does this work on social media? Well, no one really knows how social media works, so a trial and error approach, recording the results as you go, can be useful. Luckily, Twitter and Facebook both provide analytics to help.

Free will, utility, and Pareto’s principle

A major question is whether players have free will or not, and whether their choices are predetermined by who they are playing with and the circumstances in which the game takes place. This can depend on the amount of information players have available to them: as new information becomes available, they play a specific strategy, thus seeming as if they didn’t have free will at all.

Players assign numbers to describe the value of the outcomes (known in economics as utility theory) which they can use to guide themselves to the most valued outcome.

This is useful if we have a game where the winner doesn’t necessarily take all. If the players’ interests are not completely opposed, then by cooperating the players can potentially end up with a win-win situation, or at least a situation where everyone gains some benefit and the solution is not the worst outcome for everyone involved. This is known as the Pareto principle.
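A tiny sketch of that idea in Python – the outcomes and numbers are invented – keeps only the outcomes which are not Pareto-dominated, i.e., those where nobody could be made better off without someone else being made worse off:

```python
# Each outcome maps to (player 1's payoff, player 2's payoff); the numbers are illustrative.
outcomes = {
    "both cooperate":  (3, 3),
    "only I share":    (1, 4),
    "only they share": (4, 1),
    "neither shares":  (2, 2),
}

def dominates(a, b):
    """True if payoff pair a is at least as good for both players and strictly better for one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pareto_optimal = [
    name for name, pay in outcomes.items()
    if not any(dominates(other, pay) for other in outcomes.values())
]
print(pareto_optimal)  # 'neither shares' drops out: both players could do better by cooperating
```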

On social media? Retweeting and sharing other businesses’ news is a nice way of ensuring everyone gains some benefit: with a potential market of 307 million, there is enough of a market to go around for everyone to win-win and, of course, reciprocate.

The Nash equilibrium

Taking this further is the Nash equilibrium, named after John Nash, who proved that every finite game has at least one equilibrium (in either pure or mixed strategies). By looking at the equilibrium strategies of the other players, everyone plays to equalize, because no player has anything to gain by changing only his or her own strategy.
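Here is a brute-force sketch in Python that checks every pair of pure strategies in a small made-up coordination game and keeps the ones where neither player can gain by changing only their own choice – the strategies and payoffs are invented for illustration:

```python
# Payoffs are (row player, column player); the game and its numbers are made up.
payoffs = {
    ("tweet", "tweet"): (2, 2),
    ("tweet", "blog"):  (0, 0),
    ("blog",  "tweet"): (0, 0),
    ("blog",  "blog"):  (1, 1),
}
strategies = ["tweet", "blog"]

def is_nash(r, c):
    """Neither player can do better by unilaterally switching strategy."""
    row_pay, col_pay = payoffs[(r, c)]
    best_row = max(payoffs[(r2, c)][0] for r2 in strategies)
    best_col = max(payoffs[(r, c2)][1] for c2 in strategies)
    return row_pay == best_row and col_pay == best_col

print([(r, c) for r in strategies for c in strategies if is_nash(r, c)])
# [('tweet', 'tweet'), ('blog', 'blog')] – two equilibria, and one pays better than the other
```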

Are you chicken?

Ducks have been known to share out the bread thrown to them so they all get some, rather than one duck eating everything. This is known as the Hawk-Dove approach in game theory: when there is competition for a shared resource, players can choose either conciliation or conflict.

Research has shown that when a player is naturally a hawk (winner takes all) and plays amongst doves, then the player will adapt and cooperate. Conversely a dove amongst hawks will adapt too and turn into a fighter.

If there are two hawks playing each other, the game is likely to turn into chicken, which is when both players risk everything (known as mutually assured destruction in warfare) rather than yield first.
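The snippet below sketches chicken as a payoff table (the numbers are invented for illustration) and asks for a player’s best reply: against someone committed to fighting, yielding pays better, and both fighting is the worst outcome of all:

```python
# Chicken / Hawk-Dove with a crash: entries are (player 1's payoff, player 2's payoff).
chicken = {
    ("yield", "yield"): (3, 3),  # both back down: a dull but safe draw
    ("yield", "fight"): (1, 5),  # you yield, they win
    ("fight", "yield"): (5, 1),  # you win, they yield
    ("fight", "fight"): (0, 0),  # neither yields: mutually assured destruction
}

def best_reply(their_choice):
    """Player 1's best option, given what player 2 has committed to."""
    return max(["yield", "fight"], key=lambda mine: chicken[(mine, their_choice)][0])

print(best_reply("fight"))  # 'yield': against a committed hawk, backing down pays
print(best_reply("yield"))  # 'fight': against a dove, pushing on pays
```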

We adapt very easily to what is going on around us, and on social media this is totally the same. In a 2014 study Pew Research Center found that people are less likely to share their honest opinions on social media, and will often only post opinions on Facebook with which they know their followers will agree – we like to conform.

The volunteer’s dilemma

In contrast, the volunteer’s dilemma is an altruistic approach where one person does the right thing for the benefit of everyone. For example, one meerkat will look out for predators, at the risk of getting eaten, whilst the rest of the meerkats look for food. And, we admire this too. We love a hero, a maverick, someone who is ready to stand up and be different.

The prisoner’s dilemma

But we hate to feel duped, which is why the prisoner’s dilemma is one of the most popular games in game theory. Formalised by Albert W. Tucker in 1950, it goes as follows:

Two prisoners are arrested for a joint crime and put in separate interrogation rooms. The district attorney sets out these rules:

  1. If one of them confesses and the other doesn’t, the confessor will be rewarded and the other will receive a heavy sentence.
  2. If both confess, each will get a moderate sentence.
  3. If neither confesses, both will get off with a light sentence.

It is in each prisoner’s interest to confess whatever the other does – confessing is the dominant strategy. However, if they both confess (2), they end up worse off than if neither had confessed (3), so the ‘rational’ outcome fails the Pareto principle.
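As a sketch, here is the dilemma in Python with invented numbers (higher is better): confessing is checked to be the dominant strategy, yet the mutual-confession payoff is plainly worse than mutual silence:

```python
# Payoffs are (prisoner 1, prisoner 2); higher numbers mean a better outcome for that prisoner.
pd = {
    ("confess", "confess"): (1, 1),
    ("confess", "silent"):  (5, 0),
    ("silent",  "confess"): (0, 5),
    ("silent",  "silent"):  (3, 3),
}
options = ["confess", "silent"]

def is_dominant(mine):
    """True if this choice is at least as good for prisoner 1 whatever prisoner 2 does."""
    return all(pd[(mine, theirs)][0] >= pd[(alt, theirs)][0]
               for theirs in options for alt in options)

print([s for s in options if is_dominant(s)])                 # ['confess']
print(pd[("confess", "confess")], pd[("silent", "silent")])   # (1, 1) versus (3, 3)
```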

The prisoner’s dilemma embodies the struggle between individual rationality and group rationality, which Nigel Howard described as a metagame: a prisoner cooperates if and only if they believe that the other prisoner will cooperate, if and only if they believe that the first prisoner will cooperate. A mind-boggling tit-for-tat. But this is common on Twitter, with those: Follow me, I will follow you back, and the constant following and unfollowing.

And, in any transaction we hate feeling like we have been had, that we were a chump, that we trusted when we shouldn’t have, which is why some people are so angry and like to retaliate. Anger feels better than feeling vulnerable does. But, great daring starts with vulnerability, the fear of failure, and even the failure to start, the hero’s quest shows us that.

Promises, threats, and coalitions

As we add more players, all rationality may go out of the window as players decide whether to form coalitions or to perform strategic style voting. If we introduce the idea of the players communicating then we add the issues of trust in promises, or fear of threats and it all starts to sound rather Hunger Games.

On social media, aggression and threats are common because of prejudice or groupthink, especially on Twitter where there is no moderation. And, online and off, we have all been promised things and relationships which have ultimately left us disappointed, and we have been misinformed, like with the fake news we’ve been hearing about a lot lately. Fake news is not new; in other contexts it is known as propaganda. And if it is not completely fake, just exaggerated, well, that’s not new either: New Labour loved spin, which led to a sexed-up dossier, war and death.

Kermit’s next move

Philip D. Straffin says in his book Game Theory and Strategy that game theory only works up to a point, after which a player must ask for some clarification about what is going on, because mathematics applied to human behaviour will only explain so much.

And so we turn back to Kermit. What is he to do? He has passive-aggressively asked for clarification and had a cup of tea. What’s his next move? Well, he could wait and see if he gets a reply (tit for tat). Who will crack first (chicken)? But, with the texts he has sent her, it is likely that her response is somewhat predetermined – or perhaps not; perhaps she will respond with a Nash equilibrium, or at the very least the Pareto principle of everyone avoiding the worst outcome.

Alternatively, he could take a breath and remember that he is talking to someone he likes and with whom he wants to spend some time, someone human with the same vulnerabilities as him. He could adopt the volunteer’s dilemma approach and send her an honest text to explain that his feelings are hurt, that he thought they had something special, and that he had hoped she liked communicating with him as much as he likes communicating with her. By seeking clarification in this way, Kermit may just end up having a very nice evening after all – or not. Whoever said: All’s fair in love and war, didn’t have instant access to social media and all the complications it can cause.

[Part 4]

Stories, Semantics and the Web of Data

My most used words on Facebook in 2016

As a computer scientist I have spent hours talking to designers, architects and engineers to capture their domain knowledge and model it in a computer, with the end goal of helping them do their jobs better. It isn’t always straightforward to perform knowledge elicitation with people who have been doing complex tasks very well for a long time. Often, they can no longer articulate why or how they do things. They behave intuitively, or so it seems. So, I listen to them as they tell me their stories. Everyone has a story. Everyone! It is how we communicate. We tell stories to make sense of ourselves and the world around us.

As Brené Brown says in her extraordinary TED talk on vulnerability:

…Stories are just data with a soul…

Up until now, stories have been the most effective way of transferring information but once we involve a computer,  we become very aware of how clever and complex we humans are. With semiotics, we study how humans construct meaning from stories;  with semantics, we are looking at what the meaning actually is. That is to say,  when we link words and phrases together, we are creating relationships between them. What do they stand for? What do they mean?


English professor Marshall McLuhan, who coined the phrase the medium is the message, described reading as rapid guessing. I see a lot of rapid guessing when my daughter reads aloud to me. Sometimes, she says sentences which are semantically correct and representative of what happens in the story, but they are not necessarily the sentences which are written down. She is basically giving me the gist. And, that is what our semantic memory does – it preserves the gist, or the meaning, of whatever it is we want to remember.

Understanding the gist, or constructing meaning, relies on the context of a given sentence, and causality – one thing leads to another – something humans, even young ones like my daughter, can infer easily. But this is incredibly difficult for a computer even a clever one steeped in artificial intelligence and linguistics. The classic example of ambiguity in a sentence is Fruit flies like a banana, which is quite funny until you extend this to a whole model such as our legal system, expressed as it is in natural language, and then it is easy to see how all types of misunderstandings are created, as our law courts, which debate loopholes and interpretations, demonstrate daily.
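As a crude sketch of why this is hard for a machine, here are the two readings of that sentence written out as bracketed structures – a human picks the sensible one from context, while a computer just sees two equally valid parses:

```python
# Two ways to bracket the same words; neither is grammatically wrong.
readings = [
    ("fruit",       "flies like a banana"),  # fruit, when thrown, flies the way a banana does
    ("fruit flies", "like a banana"),        # the insects are fond of bananas
]
for subject, rest in readings:
    print(f"[{subject}] [{rest}]")
```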

Added to the complexities of natural language, humans are reasoning in a constantly changing open world, in which new facts and rules are added all the time. The closed-world limited-memory capacity of the computer can’t really keep up. One of the reasons I moved out of the field of artificial intelligence and into human-computer interaction was because I was interested in opening up the computer to human input. The human is the expert not the computer. Ultimately, we don’t want our computers to behave like experts, we want them to behave like computers and calculate the things we cannot. We want to choose the outcome, and we want transparency to see how the computer arrived at that solution, so that we trust it to be correct. We want to be augmented by computers, not dictated to by them.

Modelling: Scripts and Frames

We can model context and causality, as Marvin Minsky’s frames first suggested. We frame everything in terms of what we have done and our experiences, as anthropologist Lucy Suchman proposed with her plans and situated actions.

For example, when we go to the supermarket, we follow a script at the checkout with the checkout operator (or self-service machine):

a) the goods are scanned, b) the final price is calculated, c) we pay, d) our clubcard is scanned, and e) we might buy a carrier bag.

Unless we know the person on the cash desk, or we run into difficulties with the self-service checkout and need help in the form of human intervention, the script is unlikely to deviate from the a) to e) steps above.
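A rough sketch of that script as a data structure in Python – the “optional” flags are my own illustration of slots which only get filled in some situations:

```python
# The a) to e) checkout script as an ordered list of slots.
checkout_script = [
    {"step": "scan the goods",      "optional": False},
    {"step": "calculate the total", "optional": False},
    {"step": "take payment",        "optional": False},
    {"step": "scan the clubcard",   "optional": True},
    {"step": "sell a carrier bag",  "optional": True},
]

def run_script(script, in_a_hurry=False):
    """Walk through the script in order, skipping the optional slots if the shopper is in a hurry."""
    for slot in script:
        if in_a_hurry and slot["optional"]:
            continue
        print("->", slot["step"])

run_script(checkout_script)                   # the full routine
run_script(checkout_script, in_a_hurry=True)  # deviating only in the optional slots
```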

This modelling approach recognises the cognitive processes needed to construct semantic models (or ontologies) to communicate, explain, and make predictions in a given situation, which differs from a formal model, which uses mathematical proofs. In these human-centred situations a formal proof model can be inappropriate.

However, either approach was always done inside one computer until Tim Berners-Lee found a way of linking many computers together with the World Wide Web (WWW). Berners-Lee realised that having access to potentially endless amounts of information in a collaborative medium, a place where we all meet and read and write was much more empowering than us working alone each with a separate model.

And then, once online, it is interesting to have social models: informal community tagging improves Flickr, and popular tags get used whilst unpopular ones don’t, rather like evolution. In contrast, formal models use proofs to make predictions, so we lose the human input and the interesting social dynamic.

Confabulation and conspiracy

But it is data we are interested in. Without enough data points in the data set to which we apply a model, we make links and jumps from point to point until we create a different story, which might or might not be accurate. This is how a conspiracy theory gets started. And then, if we don’t have enough data at all, we speculate and may end up telling a lie as if it were a truth, which is known as confabulation. Ultimately, having lots of data and the correct links gives us knowledge and power, and the WWW gives us that.

Freeing the data

Throughout history we often have confused the medium with the message. We have taken our most precious stories and built institutions to protect the containers – the scrolls and books – which hold stories whilst limiting who can access them, in order to preserve them for posterity.

Now, we have freed the data and it is potentially available to everyone. The WWW has changed publishing and journalism, and the music industry forever.  We have never lived in a more exciting time.

At first we weren’t too bothered how we were sharing data, pictures, pdfs, because humans could understand them. But, since computers are much better at dealing with large data sets, it makes sense for them to interpret data and help us find everything we need. And so, the idea of the semantic web was born.

Semantic Web

The term semantic web was suggested by Berners-Lee in 1999 to allow computers to interpret data and its relationships, and even create relationships between data on the WWW, in a way which only humans can currently do.

For example, if we are doing a search about a person, humans can easily make links between the data they find: Where the person lives, with whom, their job, their past work experience, ex-colleagues. A computer might have difficulty making the connections. However, by adding data descriptions and declaring relationships between the data to allow reasoning and inference capabilities, then the computer might be able to pull together all that data in a useful coherent manner for a human to read.
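As a small sketch of what “adding data descriptions and declaring relationships” can look like in practice, here is some Python using the rdflib library – the person, workplace, and URIs are all invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")

g = Graph()
alice = URIRef("http://example.org/alice")

# Describe the data and declare its relationships as triples.
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, FOAF.knows, EX.bob))
g.add((alice, EX.worksFor, EX.acme))

# Serialised like this, another machine can read the relationships, not just the words.
print(g.serialize(format="turtle"))
```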

Originally the semantic web idea included software agents, like virtual personal assistants, which would help us with our searches, and link together to share data with other agents in order to perform functions for us such as organising our day, getting more milk in the fridge, and paying our taxes. But due to the limitations of intelligent agents, it just wasn’t as easy to do. So, the emphasis shifted from computers doing the work, to the semantic web becoming a dynamic system through which data flows, with human intervention, especially when the originator of the data could say: Here machine interpret this data this way by adding machine friendly markup.

Cooperation without coordination

It seems strange to contemplate now, but originally no one believed that people would voluntarily spend time putting data online, in the style of distributed authorship. But we have Wikipedia, DBpedia and GeoNames, to name but a few places where data is trustworthy. And, we have the W3C, which recommends the best ways to share data online.

The BBC uses websites like the ones above and curates the information there to ensure the integrity of the data. That is to say, the BBC works with these sites, to fact check the data, rather than trying to collect the data by itself. So, it cooperates with other sites but does not coordinate the output. It just goes along and gets what it needs, and so the BBC now has a content management system which is potentially the whole of the WWW. This approach of cooperation without coordination is part of what has become known as linked data, and the WWW is becoming the Web of Data.

Linked Data and the Web of Data

Linked data is a set of techniques for the publication of data on the web using standard formats and interfaces so that we can gather any data we need in a single step on the fly and combine it to form new knowledge. This can be done online or behind enterprise firewalls on private networks, or both.

We can then link our data to other data that is relevant and related, whilst declaring meaningful relationships between otherwise arbitrary data elements (which as we have seen a computer couldn’t figure out by itself).
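A small sketch of that gather-and-combine step, again with Python’s rdflib and invented data: two tiny graphs from different sources are merged, and the combined graph can answer a question neither source could answer alone:

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

# One (invented) source knows where people live...
people = Graph()
people.add((EX.alice, EX.livesIn, EX.Aberdeen))

# ...another (invented) source knows about places.
places = Graph()
places.add((EX.Aberdeen, EX.population, Literal(198000)))  # a made-up figure

combined = people + places  # rdflib graphs merge as a simple union of triples

for person, _, city in combined.triples((None, EX.livesIn, None)):
    for _, _, pop in combined.triples((city, EX.population, None)):
        print(f"{person} lives in a city of about {pop} people")
```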

Google rich snippets and  Facebook likes use the same approach of declaring relationships between data in order to share more effectively.

Trust: Data in the wild, dirty data, data mashups

It all sounds brilliant. However, it is impossible to get your data mashup right when the different sources all have different formats. This conundrum is known as data in the wild. For example, there is lots of raw data out there which is not yet in the recommended format.

Then, there is the problem of dirty data. How can we trust the data we are getting if anyone can put it online? We can go to the sites we trust, but what if they are not collecting the data we need? What if we don’t trust data? What if we use the data anyway? What will happen? These are things we will find out.

How can we ensure that we are all using the same vocabularies? What if they are not? Again, we will find a way.

Modelling practice: extendable, reusable, discoverable

The main thing to do when putting up your data and developing models is to name things as meaningfully as you can. And, whilst thinking about reuse, design for yourself; do not include everything and the kitchen sink. Like all good design, if it is well designed for you, then even if you leave specific instructions, someone will find a new way to extend and use your model – this is guaranteed. It is the no function in structure principle: someone will always discover something new in anything you design.

So what’s next?

Up until now, search engines have worked on matching words and phrases, not on what terms actually mean. But with our ability to link data together, Google is already using its Knowledge Graph to help uncover the next generation of search engine. Facebook is building on its Open Graph protocol whilst harvesting and analysing its data to help advertisers find their target audience.

Potentially we have the whole world at our fingertips: we have freed the data, and we are sharing our stories. It may be written in Ecclesiastes that there is nothing new under the sun, but it is also written in the same place: Everything is meaningless. I think it is wrong on both counts. With this amount of data mashup and collaboration, I like to believe instead: Everything is new under the sun and nothing is meaningless. We live in the most interesting of times.