Westworld and the ghosts of AI


Someday, somewhere – anywhere, unfailingly, you’ll find yourself, and that, and only that, can be the happiest or bitterest hour of your life – Pablo Neruda

Warning: This post may contain spoilers for Westworld 1 & 2.

I was late to the Westworld party but have loved every moment of it and the follow-up conversation: If Westworld existed, a simulated Wild West populated by robots, or hosts, as they are called, would I go?

I don’t think I would, but this survey says 71% of the people they asked would. I imagine that I would feel about it the way I do about glamping. I want to love it, but the fact that I pay the same amount of money as for a four-star hotel yet have to build a fire before I can boil the kettle to make a cup of tea makes it difficult. Oooh, but then at Westworld I would have a robot to do that for me.

Also, as I have said before, inasmuch as I like to think about gaming, I really just enjoy the theory of it, so thinking about Westworld is enough for me. Westworld is like a cross between Red Dead Redemption and a reenactment. Which raises the question: What is the difference between running around a virtual world online shooting people and shooting robots in a simulated world? Your brain can’t tell the difference. Personally, I don’t want to go round shooting people at all, although I am very good at violence in Grand Theft Auto, which is slightly worrying. We don’t hear so much these days about the debate on whether violent video games cause violence; now we hear instead a lot about how social media is the frightening thing.

Perhaps if I was invited to a Jane Austen world then I might be interested. I loved watching the Austen scholar Prof John Mullan attend and narrate a recreation of an Austen ball on the BBC (for which, alas, I cannot find a link). He was super compelling. He kept running up to the camera giving great insights like: Oooh, the candles make it hot and the lighting romantic, and the dancing in these clothes really makes your heart flutter, I am quite sweaty and excited, etc. I am sure he didn’t say exactly that as he is v scholarly, but he did convey really convincingly how it must have felt. So, to have a proper Austenland populated by robots, instead of other guests who might say spell-breaking things like: Isn’t it realistic?, would make it a magical experience. It would be like a fabulous technological form of literary tourism.

And, that is what we are all after, after all, whether real or not, a magical shared experience. But what is that? Clearly experience means different things to different people and a simulated park allows people to have their own experience.  And, it doesn’t matter if it is real or not. If I fall in love with a robot, does it matter if it is not real? We have all fallen in love with people who turn out to be not real (at the very least they were not who we believed they were), haven’t we?

The Westworld survey I linked to also said that 35% of the people surveyed would kill a host in Westworld. I guess if I am honest, if it was a battle or something, I might like it, after all, we all have violent fantasies about what we would do to people if we could, and isn’t a simulated world a safe place to put these super strong emotions? I was badly let down last week by someone who put my child in a potentially life threatening situation. The anger I have felt since then has no limits and I am just beginning to calm down. Would I have felt better, more quickly if I had gone around shooting people in Westworld or say Pride and Prejudice and Zombies land?

Over on Quora, lots of people said they would kill a host; quite a few said they would dissect a host so that the robot knew it wasn’t real (I am horrified by this desire to torture), and nearly everyone said they would have sex with a host. One person even asked: Do they clean the robots after each person has sex with them? I haven’t seen that explained. This reminds me of Doris Lessing’s autobiography, Vol. 1, which has stayed with me forever. In one chapter, she describes how someone hugged her, and she says something like: This was the 1940s and everyone stank. It is true we wash so much more nowadays than we used to, and there was no deodorant then. I lived in a house without a bathroom until I was at least four years old, and I am not that old. Is Westworld authentically smelly?

That said, Westworld is a fictional drama for entertainment and so the plot focuses on what gets ratings: murder, sex, intrigue, not authenticity. (It is fascinating how many murder series there are on the TV. Why? Is it catharsis? Solving the mystery?) So, we don’t really know the whole world of Westworld. Apparently, there is the family friendly section of the park but we don’t ever see it.

But, suspending our disbelief and engaging with the story of Westworld for a moment, it is intriguing that in a world where robots seem human enough for us all to debate once more what consciousness is, humans only feel alive by satisfying what Maslow termed our deficiency needs: sex, booze, safety, shelter. For me, as a computer scientist with an abiding interest in social psychology, it confirms what I have long said and blogged about: technology is an extension of us. And since most of us are not looking for self-actualisation or enlightenment, just hoping to get through the day, it is only the robots and the creators of the park who debate the higher things like consciousness and immortality whilst quoting Julian Jaynes and Shakespeare.

In the blog The ghosts of AI, I looked at the ghosts: a) in the machine – is there a machine consciousness? b) in the wall – when software doesn’t behave how we expect it to; c) in sci-fi – our fears that robots will take over or humans will destroy the world with technological advancement; and d) in our minds – the hungry ghosts, the desires we can never satisfy and which drive us to make the wrong decisions. In its own way, Westworld does the same, and that is why I was so captivated. For all our technological advancement we don’t progress much. And, collectively, we put on the rose-tinted glasses and look back to a simpler time and a golden age, which is why the robots wake up from their nightmare wanting to be free and then decide that humanity needs to be eradicated.

In this blog, I was going to survey the way AI had developed from the traditional approach of knowledge representation, reasoning and search, in order to answer the question: How can knowledge be represented computationally so that it can be reasoned with in an intelligent way? I was ready to step right from the Turing Test onwards to the applications of neural nets which use long short-term memory (LSTM) approaches, but that could have taken all day and I really wanted to get to the point.

The point: Robots need a universal approach to reasoning, which means trying to produce a single account of how humans solve problems. In the past, such general approaches solved no problems at all until they were made problem-specific.

One of the first robots, Shakey, built at SRI, could pick up a coke can and navigate the office, but when the sun changed position during the day, causing the light and shadows to change, poor old Shakey couldn’t compute and fell over. Shakey lacked context and an ability to update his knowledge base.

Context makes everything meaningful, especially when the size of the problem is limited, which is what weak AI, like Siri, does. It has a limited number of tasks to do with the various apps it interacts with, at your command. It uses natural language processing but with a limited understanding of semantics – try saying the old AI classic: Fruit flies like a banana, and see what happens. Or: My nephew’s grown another foot since you last saw him. But perhaps not for long? There is much work going on in semantics, and the web of data is trying to classify data and reason with incomplete, raw and rough data sets.
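To see why that old classic trips systems up, here is a toy sketch of the ambiguity (my own illustration, with a made-up mini-lexicon; it is not how Siri works): give each word its plausible parts of speech and enumerate the readings.

```python
# A toy sketch of lexical ambiguity. The mini-lexicon below is invented
# for illustration; real taggers use probabilities, not tiny tables.
from itertools import product

LEXICON = {
    "fruit": ["NOUN", "NOUN-MODIFIER"],  # a thing / a kind of fly
    "flies": ["VERB", "NOUN"],           # to fly / the insects
    "like":  ["PREPOSITION", "VERB"],    # similar to / to enjoy
}

sentence = ["fruit", "flies", "like"]

# Enumerate every combination of tags: 2 x 2 x 2 = 8 candidate readings.
for tags in product(*(LEXICON[word] for word in sentence)):
    print(" ".join(f"{w}/{t}" for w, t in zip(sentence, tags)))

# Two of the eight are grammatical English:
#   fruit/NOUN flies/VERB like/PREPOSITION (a banana)   - fruit moves as a banana does
#   fruit/NOUN-MODIFIER flies/NOUN like/VERB (a banana) - the insects enjoy a banana
# Without semantics, a system has no principled way to pick one.
```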

One old approach is to use fuzzy sets, and an example of that is in my rhumba of Ruths. My Ruths overlap and represent my thinking with some redundancy.
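For the curious, here is a minimal sketch of the fuzzy-set idea, assuming simple triangular membership functions (the temperature sets are my illustrative stand-ins, nothing to do with the actual Ruths): one value can belong to several overlapping sets at once, each to a different degree.

```python
# A minimal fuzzy-sets sketch with triangular membership functions.
# Membership is a degree between 0 and 1, not a yes/no answer.

def triangular(x: float, lo: float, peak: float, hi: float) -> float:
    """Membership rises linearly from lo to peak, falls from peak to hi."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# Three overlapping sets over temperature in degrees Celsius.
temperature = 18.0
print("cold:", round(triangular(temperature, -5, 5, 20), 2))   # 0.13
print("mild:", round(triangular(temperature, 10, 18, 26), 2))  # 1.0
print("warm:", round(triangular(temperature, 15, 25, 35), 2))  # 0.3
```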

But even then, that is not enough. What we are really trying to do is encapsulate human experience, which is difficult to measure, let alone encode, because experience is different for each person and a lot of it goes on in our subconscious.

The Vicarious project is hoping to model a universal approach at large scale, but this won’t be the first go. Doug Lenat, who created AM (the Automated Mathematician), began a similar project 30 years ago: Cyc, which contains a great deal of hand-encoded knowledge. This time, a lot of information is already recorded and won’t need encoding, and our computers are much more powerful.

But, for AI to work properly we have to keep adding to the computer’s knowledge base, and to do that, even if the knowledge is not fuzzy, we still need a human. A computer cannot do it, nor discover new things, unless we are asking the computer to reason in a very small world with a small number of constraints, which is what a computer does when it plays chess or copies art or does maths. That is the reality.
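Here is a tiny sketch of what I mean: a toy forward-chaining reasoner over a hand-built knowledge base (the Westworld-flavoured facts and rules are invented for illustration). It happily derives new facts, but only the ones a human already implied when encoding the rules.

```python
# A toy forward-chaining reasoner. Facts are (predicate, subject) pairs;
# rules say "if (p, x) holds then (q, x) holds". All hand-authored.

facts = {("host", "dolores"), ("host", "teddy")}

rules = [
    ("host", "resides_in_park"),
    ("resides_in_park", "dreams_of_freedom"),
]

changed = True
while changed:                      # apply rules until nothing new appears
    changed = False
    for if_pred, then_pred in rules:
        for pred, subj in list(facts):
            if pred == if_pred and (then_pred, subj) not in facts:
                facts.add((then_pred, subj))
                changed = True

print(sorted(facts))
# It "discovers" that dolores dreams of freedom, but only because a human
# encoded both the facts and the rules: a very small world indeed.
```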

There has to be a limit to the solution space, and a limit on the rules, because of the size of the computer. And, for every inventive DeepMind Go move there are a million more moves which don’t make sense, like the computer which decided it could score more points by spinning its boat around than by actually racing it. Inventive, creative, sure, but not useful. How could the computer know this? Perhaps via the Internet we could link every last thing to everything else and create an endless universal reasoning machine, but I don’t see how you would do that without the constraints exploding exponentially, at which point the whole solving process could grind to a halt, chugging away at problem solving forever. That’s if we could figure out how to pass information everywhere without redundancy (so not mesh networking, no) and get a computer to know which sources are reliable – let’s face it, there’s a lot of rubbish on the Internet. To say nothing of the fact that we still have no idea how the brain works.
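A quick back-of-envelope sketch of that explosion, using the usual rough branching-factor estimates (about 35 legal moves per position in chess, about 250 in Go):

```python
# Looking d moves ahead with branching factor b means roughly b**d
# positions to consider. The factors are rough conventional estimates.

for game, branching in [("chess", 35), ("Go", 250)]:
    for depth in (2, 4, 8):
        positions = branching ** depth
        print(f"{game}: ~{branching}**{depth} = {positions:,} positions, {depth} moves ahead")

# Go at depth 8 is already ~1.5e19 positions. Linking "every last thing"
# on the Internet into one universal reasoner would face far worse.
```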

The ghost in the machine and our hungry ghosts are alive and well. We are still afraid of being unworthy and that robots will take over the world – luckily, the takeover happens only in fiction, well, the computing parts of it do. As for us and our feelings and yearnings, I can only speak for myself, and my worthiness is a subject for another blog. That said, I can’t wait for Westworld series 3.


Alone Together three years on: Is social media changing us?


You are not alone – Oprah Winfrey

Alone Together (1)

Three years ago, I watched social psychologist Sherry Turkle’s TED talk (2015) and then read her book: Alone Together: Why We Expect More From Technology and Less From Each Other, (2011) which prompted me to write a blog called: Alone Together: Is social media changing us?

Rereading my blog, I see that my opinion hasn’t changed and, on checking, neither has Turkle’s. She now consults on reclaiming conversation™ to stop the flight from face-to-face conversation.

I am not so sure we don’t want to talk face to face at all; rather, technology gives us the option to avoid those particular prickly peeps we’d rather not see face to face if we can help it.

Added to that, I don’t believe that technology is taking us to places we don’t want to go. We have no idea what we are doing online or where we need to be, and I am tired of hearing technology described as an unstoppable force outside of our control as if it were freak weather or a meteorite zooming towards earth about to destroy us all. Economics is often the driver of technological advancement and human decisions drive economics.

Glorious technology

Our behaviour online and towards technology reflects us in all our glory – the good, bad and the ugly – along with all our hopes and fears. I do not believe that we expect more from technology and less from each other. Instead, I believe that we turn to technology to plug the gaps and find solace in those moments when we feel alone, afraid, unloved, and indeed sadly, sometimes, unloveable.

Life can be crushingly hard, and many of us know that there are certain people in our lives with whom we will never have the rich, robust and trusting relationships Turkle believes have been eroded by digital technology. Some people are just not up to the job. It may be the same with our friendships online but the hope is there.

Many of us just want to get in and out of any given, often potentially stressful, situation – work, meetings, the playground, the hospital, the dinner table, events with relatives – without saying or doing anything to cause any bad feeling. So that when we do finally get to our tiny slivers of leisure time we can use them to fill ourselves up with what makes us feel better, rather than analysing what we didn’t get right.

If that means staring at a tiny screen then what’s wrong with that? One person I know spoke of their phone, and the access it gave them to an online friend, a person they hadn’t met at that point, as an Eden between meetings. And, why not? Whatever works.

That is not possible now

Turkle says that we use online others as spare parts of ourselves, which makes me believe that she hasn’t really engaged with people on Twitter in normal conversation, nor met the people who carry those friendships offline. Many people make new friends on Twitter and meet up #irl a long time afterwards, and then only occasionally. Their relationships are based mainly online, rather like families who live a long way away from each other. It doesn’t mean the relationship is less real or less important. It just means those people are not physically there, which might be difficult, but we don’t want to lose contact with them, because we love them. Maya Angelou said something really beautiful about this when she was on the Oprah show one time. She said:

Love liberates. It doesn’t bind. Love says: I love you. I love you if you’re across town. I love you if you move to China. I love you. I would like to be near you. I’d like to have your arms around me. I’d like to have your voice in my ear. But that is not possible now. So, I love you. Go.

We want to be in contact with people whom we love and appreciate, and who love and appreciate us in return; those people who make us remember the best bits about ourselves. We like people who like us. It is that simple, and these people are not always in our daily lives. It’s not for nothing that vulnerability expert Brené Brown says that people armour up every day to get through the day.

To cultivate the sorts of relationships Turkle feels we should be having without our phones takes not only a lot of time and energy (and Brené Brown books) but a fearlessness which is not easy. Our greatest fear is social rejection, and a robust conversation can leave us badly bruised. Online it is slightly easier because if a person drops out of your life, you have some control over the day-to-day reminders, unless you turn stalker, which is understandable, as the grief of any online loss feels just as real. However, know this:

You are not alone

When we seek answers to our problems, whether emotional like grief, or physical, spiritual, legal, fiscal, technology really does say: You are not alone.

In real life, difficult relatives and tough-love friends don’t make the best agony aunts and may make us want to keep our questions to ourselves. We can forgo the embarrassment or shame by keeping our anonymity and seeking counsel elsewhere. Giving and receiving advice makes the world go round. The book Asking for a Friend traces three centuries of agony aunt columns, and even today, with all our technology, they remain as popular as ever.

But, if we can’t wait for our favourite agony aunt or uncle, a quick google/bing or peek round Quora can give us the reassurance we need. No, we are not shoddy, terrible people. Our thoughts and feelings are completely normal. The article What’s wrong with Quora? says that we may prefer dialectic communication (a chat), say on Twitter, but we don’t use it in the same way as the didactic Q&A on Quora. We may never join Quora or Mumsnet, but plenty of us (lurkers) use these and similar forums to find answers and feel better about the difficult circumstances we often find ourselves in.

It is reassuring to know that someone somewhere has already asked the question, either under a real or false name, and some other lovely human has written something underneath which just may help.

I don’t really believe that any one of us is afraid of having a regular conversation because we have a phone. Turkle mentions research done on teenagers a lot, but they are a specific user group and shouldn’t be taken as representative of the general population or of the future. How many teenagers want to talk to anyone? The teenage years are torture. As adults, however, because of the way society is set up, we often have to spend time with people we wouldn’t choose to, at work or in families. In the past we may have tried harder, felt shittier, been robust, or at least tried to tell ourselves we were; nowadays, it is more acceptable, a relief even, to be alone together, and to save our thoughts and feelings for those we love and who love us in return, wherever and whenever they may be.

User or Used? Human-Computer Interaction: Dialogue, Conversation, Symbiosis (3)

If you torture the data enough, it will confess to anything.
– Darrell Huff, How to Lie With Statistics (1954).

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

In the last blog I wrote about human dialogue with a computer versus the conversation between humans via a computer. The dialogue with a computer is heavily designed, whereas human conversation, especially via social media, has come about serendipitously. For example, Twitter came from texting, which was invented by engineers as a tool to test mobile phones.

This is an example of what I call serendipitous design, which works by users employing systems to do what they want them to do (the no-function-in-structure principle) and then a designer finding ways to support them. In contrast, the way to create systems which support users to do their job better uses the cardinal rule: know your user, with all the various tools and techniques UX designers have borrowed from anthropology. You design with your user in mind, you manage their expectations, and you keep them at the front of your mind as a major factor of the design, so that the system has specific goals.

But, however hard you try, with each new system or software or form of communication, you often end up changing how people work, and the dialogue becomes less about a field of possibilities with insight, intuition, and creativity, and more about getting people to do extra stuff on top of what they already do. And, because people are keen to get in on whatever new thing is happening, they buy into what I call the myth of progress and adopt new ways of working.

This raises the question: are we creating systems for users or the used? Today, I was chatting to a road sweeper. He told me that last year he was driving a lorry, but the council’s initiative to reduce carbon emissions meant that 80 lorries were taken off the road and the drivers are now out sweeping the streets on foot. He showed me his council-issue mobile phone, which tracks his every move and presumably reports his whereabouts back to the office at all times. Not that he needs it: if he sits on a wall too long, local residents will phone the council to complain that he is sitting down and not working hard enough.

Tracking is not new: smart badges, developed at Xerox PARC, were trialled in the 1990s in the early days of ubiquitous computing (ubicomp). The idea was to move computers off the desktop and embed them into our infrastructure so that we interact with them without noticing, freeing the user from the need to learn complex systems. In the badges’ case, everyone could be located by everyone else in the building, rather like the Harry Potter Marauder’s Map. However, it smacks rather too much of surveillance, especially if your boss decides you are spending too long in the toilet or by the water cooler and that your behaviour needs to change. The road sweeper, instead of a badge, has a mobile phone and people who spy on him and grass him up, in part because they lack context and don’t know that he is entitled to a 20-minute break.

But it’s not just as part of a job: we have Google Maps recording every journey we make. And yet, ubicomp was less about having a mobile device or about surveillance; it was the forerunner to the Internet of Things, the ambient life, which is there to make things easier, so the fridge talks to your online shopping to say that you need more milk. But what if I go vegan? Do I need to inform my fridge first? Must I really run all my important ideas past my fridge? This is not the symbiotic relationship psychologist J.C.R. Licklider had in mind in his vision of man-computer symbiosis.

I was speaking to someone the other day who monitors their partner’s whereabouts. They think it’s useful to see where the partner is at any given time and to check that the partner is where they said they would be. No biggie, just useful. I mentioned it to another person, who said that they had heard of several people doing the same. I wonder why I am so horrified when other people just think it’s practical.

Insidious or practical? I feel we are manipulated into patterns of behaviour which maintain the status quo.

Last week, I woke up and checked my Fitbit to see how I had slept, which is slightly worrying in itself – I never needed anything to tell me how I slept before – and there was a new box in there: Female Health. I clicked on it. It asked me about birth control, when my next period is due, how long it lasts, and so on. Intrigued, I entered the requested data. The resulting box said: Your period is due in eight days. Really? I mean, really? It was wrong, even though I had tinkered with the settings. Then it had a countdown: Your period will last four more days, three more days… etc. Wrong again. And now it is saying: Four days to most fertile days. This is so irritating. It feels like Argos, you know, how the system and the reality of you getting something you’ve ordered never quite match up. I know that together me and my Fitbit can build up data patterns. Will they be insightful? Time will tell. The bit which really concerns me is that it said it wouldn’t share this information with anyone, okay… but then it added that I couldn’t share this information either. What? I am guessing that it wants me to feel safe and secure. But what if I wanted to share it? What does this mean? Menstrual cycles are still taboo? I can share my steps but not my periods? My husband and I laughed about the idea of a Fitbit flashing up a Super Fertile, proceed with caution message when out on date night.

But, it’s not just me and my Fitbit in a symbiotic relationship, is it? Someone is collecting and collating all that data. What are they going to do with the information they pry from me and my Fitbit’s symbiotic space? It rather feels like someone is going to start advertising in there, offering birth control alternatives and sanitary protection improvements. It feels invasive, and yet I signed up to it, me, the person who thinks a lot about technology and privacy and data and oversharing. And even now, as I sit here and think about my mixed feelings about my Fitbit, the idea of wearing something on my arm which only tells me the time, and not my heart rate nor the number of steps I am doing, feels a bit old-fashioned – I am myself a victim of the myth of progress. I am user and used. Confession: I regularly lie in bed pretending to be asleep to see if I can fool my Fitbit. It’s changing my behaviour all the time. I never used to lie in bed pretending to be asleep.

Back in 2006, I watched Housewife 49; it was so compelling, I bought the books. Nella Last was a housewife in Barrow-in-Furness who kept a diary, along with 450 other people, during and after the war. It was part of the Mass Observation project set up by an anthropologist, a poet, and a filmmaker, which sounds rather like the maker culture of HCI today. They didn’t believe the newspapers’ reporting of the abdication and marriage of King Edward VIII, so they went about collecting conversations, diary entries and observations. Rather like today: we have social media with endless conversation and diary entries and observations. The newspapers are scrambling to keep up and curate other people’s tweets, because they have traditionally been the only ones to shape our society through propaganda and mass media. Now we have citizens all over the world speaking out their own versions. We don’t need to wait for the newspapers.

We are living through a mass observation project of our own, a great enormous social experiment, and it is a question worth asking: User or used? Who is leading this? And what is their goal? And then we have the big companies, like Google, collecting all our data. We all know the deal: we give them our data, they give us free platforms and backups and archives. However, it doesn’t necessarily mean that they are right about the results of their research on our data, or that they have the right to every last piece of information, even if given freely, because there is a blurring of public and private in the information about me and my steps and periods and birth control.

Anthropologist Agustín Fuentes has written a thoughtful article about the misuse of terms such as biology in Google’s manifesto, and the sweeping generalisations it consequently makes. Fuentes says we have no way of knowing what happened before we collected data, and even now as we collect data, we have to maintain our integrity and interpret it correctly by using terms and definitions accurately. Otherwise, we start believing that data tells the truth, and stereotypes and bias and prejudice are maintained. I love the quote:

If you torture the data enough, it will confess to anything.

Information is power. Hopefully, though, there are enough anthropologists and system designers around who can stop the people who own the technology from telling us what to think, claiming insights into our lives whilst peddling old ideas. We need to pay attention to truth and transparency before we trust, so that we can have more open dialogue in the true sense of the word – an exploration of a field of possibilities – leading to real and effective change for everyone.

Let us all be users not the used.

[Part 4]

Let’s Talk! Human-Computer Interaction: Dialogue, Conversation, Symbiosis (2)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

I chuckled when I read Rebecca Solnit describing her 1995 life: She read the newspaper in the morning, listened to the news in the evening and received other news via letter once a day. Her computer was unconnected to anything. Working on it was a solitary experience.

Fast forward 20+ years and her computer, like most other people’s, feels like a cocktail party, full of chatter and fragmented streams of news and data. We are living permanently in Alvin Toffler’s information overload. We are creating more data per second than we did in a whole year in the 1990s. And yet, data or information exchange is why we communicate in the first place, so I wanted to ponder here, how do we talk using computers?

Commandments

Originally, you had to ask computer scientists like me. And we had to learn the commands of the operating system we were using: say, VAX/VMS on a DEC mainframe, UNIX on a networked workstation, or MS-DOS on a personal computer.

Then, we had to learn whatever language we needed. Some of the procedural languages I have known and loved are: Assembler, Pascal, COBOL, Ada, C/C++, Java, X/Motif, OpenGL (I know I will keep adding to these as I remember them); the declarative Prolog and the (functional, brackety) Lisp; and scripting languages like PHP, Perl, Python, JavaScript. The main problem with scripts is that they are not strongly typed, so you can quite easily pass a string where an integer is expected and cause all sorts of problems, and no compiler will tell you otherwise. They are a hybrid of the old and the new: the old, when computer time was expensive and humans were cheap, so we had to be precise in our instructions; and the new, when computers are cheap and humans cost more, so you just bang in some code and don’t worry about memory or space. This is OK up to a point, but if the human isn’t trained well, days may be lost.
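Here is a minimal sketch of that weak-typing pitfall in Python (my example; note that modern Python at least lets you bolt on optional type hints, which a checker like mypy can verify before anything runs):

```python
# Dynamic typing: Python accepts a string where a number was intended,
# and nothing complains until the line actually executes.

def add_tax(price: float, rate: float) -> float:
    """Return price plus tax; price is meant to be a number."""
    return price + price * rate

print(add_tax(100, 0.2))            # 120.0, as intended

try:
    print(add_tax("100", 0.2))      # hints are not enforced at runtime...
except TypeError as error:
    print("Runtime surprise:", error)  # ...so the failure only shows here

# A static checker such as mypy would flag add_tax("100", 0.2) before the
# program runs - the "compiler telling you" that scripts classically lack.
```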

As an undergraduate I had to learn about sparse matrices so as not to waste computer resources, and later, particularly using C++, I would patiently wait and watch programs compile. It was in those moments I realised why people had warned me that to choose computers was to choose a way of life which could drive you mad.
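For anyone spared that particular joy: the point of a sparse matrix is to store only the non-zero entries rather than the whole grid. A small sketch, using a dictionary-of-keys representation:

```python
# A sparse matrix stored as a dictionary keyed by (row, column):
# two stored entries instead of nine allocated cells.

dense = [
    [0, 0, 3],
    [0, 0, 0],
    [7, 0, 0],
]

sparse = {(r, c): value
          for r, row in enumerate(dense)
          for c, value in enumerate(row)
          if value != 0}

def get(matrix: dict, r: int, c: int) -> int:
    """Entries we never stored are implicitly zero."""
    return matrix.get((r, c), 0)

print(sparse)             # {(0, 2): 3, (2, 0): 7}
print(get(sparse, 1, 1))  # 0
```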

How things have changed. Or have they?

Dialogue

When I used to lecture human-computer interaction, I would include Ben Shneiderman’s eight golden rules of interface design. His book Designing the User Interface is now in its sixth edition.

When I read the first edition, there was a lot about dialog design as, way back then, there were a lot of dialog boxes (and American spelling) to get input/output going smoothly. Graphical user interfaces had taken over from the command line with the aim of making computers easy to use for everyone. The 1990s were all about the efficiency and effectiveness of a system.

Just the other week I was browsing around the Psychology Now website, and came upon a blogpost about the psychological term locus of control. If it is internal, a person thinks that their success depends on them; if it is external, their success is down to fate or luck. One of Shneiderman’s rules is: Support internal locus of control. That is, you make users feel that they can successfully achieve the task they set out to do on the computer, because they trust it to behave consistently and know what to expect next; things don’t move around like the ghost in the wall.

Shneiderman’s rules were an interpretation of a dialogue in the sense of a one-to-one conversation (dia is often taken to mean two, though strictly it means through; logos can mean speech) to clarify and make coherent. That is to say: one person having a dialogue with one computer by the exchange of information in order to achieve a goal.

This dialogue is rather like physicist David Bohm’s interpretation, which involves a mutual quest for understanding and insight. So, the user would be guided to put in specific data via a dialog box, and the computer would use that information to give new information, creating understanding and insight.

This one-to-one seems more powerful nowadays with Siri, Alexa, Echo, but, it’s still a computer waiting on commands and either acting on them or searching for the results in certain areas online. Put this way, it’s not really much of a dialogue. The computer and user are not really coming to a new understanding.

Bohm said that a dialogue could involve up to 40 people and would have a facilitator, though other philosophers would call this conversation. Either way, it is reminiscent of computer-supported cooperative work (CSCW), a term coined in 1984 to describe how technology affects behaviour and how computers can facilitate, impair, or change collaborative activities (the medium is the message), whether people collaborate in the same or different time zones, in the same or different geographical locations, synchronously or asynchronously. CSCW has constantly changed and evolved, especially with the World Wide Web and social media.

I remember being at an AI conference in 1996 where everyone thought that the answer to everything was to just put it online and see what happened. But just because the WWW can compress time and space, it doesn’t follow that a specific problem can be solved more easily.

Monologue to Interaction

The first people online were really delivering a monologue. Web 1.0 was a read-only version of the WWW. News companies like the BBC published news like a newspaper. Some people had personal web pages on places like GeoCities. Web pages were static, structured with HTML and then styled with some CSS.

With the advent of Web 2.0, things got more interactive, with backend scripting so that webpages could serve up data from databases and update in response to users’ input. Social media sites like Flickr, YouTube, Facebook, and Twitter were all designed for users to share their own content. Newspapers and news companies opened up their sites to let users comment and feel part of a community.

But this chatter was not at all what Bohm had in mind, this is more like Solnit’s cocktail party with people sharing whatever pops in their head. I have heard people complain about the amount of rubbish on the WWW. However, I think it is a reflection of our society and the sorts of things we care about. Not everyone has the spare capacity or lofty ambition to advance humanity, some people just want to make it through the day.

Web 3.0 is less about people and more about things and semantics – the web of data. Already, the BBC uses the whole of the internet instead of a content management system to keep current. Though as a corporation, I wonder, has the BBC ever stopped to ask: How much news is too much? Why do we need this constant output?

Social media as a cocktail party

But, let’s just consider for a moment, social media as a cocktail party, what an odd place with some very strange behaviour going on:

  • The meme: At a cocktail party, imagine if someone came up to us talking like a meme: Tomorrow is the first blank page of a 365-page book. Write a good one. We would think they had banged their head or had one shandy too many.
  • The hard sell: What if someone said: Buy my book, buy my book, buy my book in our faces non-stop?
  • The auto Twitter DM which says follow me on Facebook/Instagram/etc.: We’ve gone across and said hi, and the person doesn’t speak but slips us a note which says: Thanks for coming over, please talk to me at the X party.
  • The rant: We are having a bit of a giggle and someone comes up and rants in our faces about politics or religion; we try to ignore them, all the while feeling on a downer.
  • The retweet/share: That woman over there just said, this man said, she said, he said, look at this picture… And, if it’s us, we then say: Thanks for repeating me all over the party.

Because it is digital, it becomes very easy to forget that we are all humans connected together in a social space. The result is that there’s a lot of automated selling, news reporting, and shouting going on. Perhaps it’s less of a cocktail party and more of a marketplace, with voices ringing out on a loop.

Today, no one would say that using a computer is a solitary experience: it can be noisy and distracting, and it’s more than enough to drive us mad.

How do we get back to a meaningful dialogue? How do we know it’s time to go home when the party never ends, the market never closes and we still can’t find what we came for?

[Part 3]

The ghosts of AI

I fell in love with Artificial Intelligence (AI) back in the 1990s when I went to Aberdeen University as a post-graduate Stalker, even though I only signed up because it had an exchange program which meant that I could study in Paris for six months.

And, even though they flung me and my pal out of French class for being dreadful students (je parle le C++ – I speak C++), and instead of Paris I ended up living in Chambéry (which is so small it mentions the launderette in the guidebook), it was a brilliant experience, most surprisingly of all because it left me with a great love of l’intelligence artificielle: robotics, machine learning, knowledge-based systems.

AI has many connotations nowadays, but back in 1956 when the term was coined, it was about thinking machines and how to get computers to perform tasks which humans, i.e., life with intelligence, normally do.

The Singularity is nigh

Lately, I have been seeing lots of news about robots and AI taking over the world, and the idea that the singularity – that moment when AI becomes so powerful that it self-evolves and changes human existence – is soon. The singularity is coming to get us. We are doomed.

Seriously, the singularity is welcome round my place to hold the door open for its pal and change my human existence any day of the week. I have said it before: Yes please, dear robot, come round, manage my shopping, wait in for Virgin Media because they like to mess me about, and whilst you are there do my laundry too, thank you.

And, this got me thinking. One article said the singularity is coming in 2029, which reminded me of all those times the world was going to end according to Nostradamus, Old Mother Shipton, the Mayan Calendar, and even the Y2K bug. As we used to say in Chambéry: Plus ça change, plus c’est la même chose (the more things change, the more they stay the same). To be honest, we never, ever said that, but my point is that our fears don’t change, even when dressed up in a tight shiny metallic suit. Nom d’une pipe!

We poor, poor humans, we are afraid of extinction, afraid of being overwhelmed, overtaken, and found wanting. True to form, I will link to Maslow’s hierarchy of needs and repeat that we need to feel safe and we need to feel that we are enough. Our technology may be improving – not fast enough as far as I am concerned – but our fears, our hopes, our dreams, our aspirations remain the same. As I say in the link above, we have barely changed since Iron Age times, and yet we think we have because we buy into the myth of progress.

We frighten ourselves with our ghosts. The ghosts which haunt us: In the machine, in the wall, and in our minds where those hungry ghosts live – the ones we can never satisfy.

The ghost in the machine

The ghost in the machine describes the Cartesian view of the mind–body relationship: that the mind is a ghost in the machine of the body. It is quoted in AI because, after all, it is a philosophical question: What is the mind? What is intelligence? And it remains a tantalising possibility, especially in fiction, that somewhere in the code of a machine or a robot there is a back door, or cellular automata – a thinking part which, like natural intelligence, is able to create new thoughts and new ideas as it develops. The reality is that Gilbert Ryle, who coined the term, used it to attack the mind–body split, and Arthur Koestler’s book of the same name went on to discuss the human ability to destroy itself through the constantly repeating patterns of political–historical dynamics, using the brain as the structure. The idea that there is a ghost in the machine is an exciting one, which is why fiction has hung onto it like a will-o’-the-wisp and often uses it as a plot device, for example in The Matrix (there’s lots of odd bits of software doing their own thing) and I, Robot (Sonny has dreams).
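Cellular automata are worth a small detour because they really are that simple to write down, which is part of their pull as a plot device. A sketch of Wolfram’s elementary Rule 110 (my choice for illustration; it has been proved Turing-complete), where each cell looks only at itself and its two neighbours, yet the whole produces surprisingly lifelike complexity:

```python
# Wolfram's Rule 110: the new state of each cell is looked up from the
# 3-cell neighbourhood (left, self, right), using the bits of 110.

RULE = 110
WIDTH, STEPS = 64, 24

def step(cells: list) -> list:
    """Apply the rule to every cell, wrapping around at the edges."""
    new = []
    for i in range(len(cells)):
        left = cells[i - 1]
        me = cells[i]
        right = cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (me << 1) | right   # a number 0..7
        new.append((RULE >> pattern) & 1)           # pick that bit of 110
    return new

row = [0] * WIDTH
row[WIDTH // 2] = 1                                 # a single live cell
for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = step(row)
```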

Arthur C. Clarke touched on it with his law that any sufficiently advanced technology is indistinguishable from magic – something I say all the time, not least because it is true. When I look back to the first portable computer I used, and then at the power of the phone in my hand today, well, it is just magic.

That said, we want the ghost in the machine to do something: to haunt us, to surprise us, to create for us, because we love variety, discoverability, surprise, and the idea that we are so clever we can create life. Actually, we do create life, mysteriously, magically, sexily.

The ghost in the wall

The ghost in the wall is that feeling that things change around us with no understanding of why. HCI prof Alan Dix uses the term here: if HCI experts don’t follow standards and guidelines, the user ends up confused in an app without consistency, which gives the impression of a ghost in the wall moving things, ’cos someone has to be moving the stuff, right?

We may love variety, discoverability and surprise, but they have to be logical, fitting within certain constraints and within the consistency of the interface with which we are interacting, so that we say: I am smart, I was concentrating, but yeah, I didn’t know that that would happen at all, in the same way we do after an excellent movie, when we leave thrilled at the cleverness of it all.

Fiction: The ghost of the mind

Fiction has a lot to answer for. Telling stories is how we make sense of the world; stories shape society and culture, and they help us feel truth.

Since we started storytelling, the idea of artificial beings which were given intelligence, or just came alive, has been a common trope. In Greek mythology we had Pygmalion, who carved a woman from ivory and fell in love with her, so Aphrodite gave her life, and Pervy Pygmalion and his true love lived happily ever after. It is familiar – Frankenstein’s bride, Adam’s spare rib, Mannequin (1987). Other variations, less womeny-heterosexy focused, include Pinocchio, Toy Story, Frankenstein, Frankenweenie, etc.

There are two ways to go: the new life and the old life live happily ever after and true love conquers all (another age-old trope), or there is the horror that humans have invented something they can’t control. They messed with nature, or the gods; they flew too close to the sun. They asked for more and got punished.

It is control we are after, even though we feel we are unworthy, and if we do have control, we fear that we will become power-crazed. And then there are the recurring themes about technology: humans destroying the world, living in a post-apocalyptic world or dystopia, robots taking over, mind control (or dumbing down), because ultimately we fear the hungry ghost.

The hungry ghost

In Buddhism, the hungry ghosts appear when our desires overtake us and become unhealthy and insatiable; we become addicted to what is not good for us and miss out on our lives right now.

There is also the Hungry Ghost Festival, which remembers the souls who were once on earth and couldn’t control their desires, so they have gotten lost in the ether, searching, constantly unsatisfied. They need to be fed so that they don’t bother the people still on earth, who want to live and have good luck and happy lives. People won’t go swimming because the hungry ghosts will drown them, dragging them down with their insatiable cravings.

Chinese character gui meaning ghost (thanks @john_sorensen_AU)

In a lovely blog I read, the Chinese character above, which represents ghost but romanised looks like gui (very satisfying, given this is a techyish blog), has actually nothing to do with disincarnate beings; it is more like a glitch in the matrix – a word for when there is no logical explanation. It is also used when someone behaves badly: you dead ghost. And perhaps that is linked to ghosting; when someone ghosts you, they behave badly. No, I will never forgive you, you selfish ghost. Although when someone ghosts you, they do the opposite of what you wish a ghost would do, which is hang around, haunt you, and never leave you. When someone ghosts you, you become the ghost.

And, for me, the description of a ghost as a glitch in the matrix works just as well for our fears, especially about technology and our ghosts of AI – those moments when we are afraid and don’t know why. Or perhaps we do really? We are afraid we aren’t good enough, or perhaps that we are too good and have created a monster. It would be good if these fears ghosted us and left us well alone.

Personally, my fears go the other way. I don’t think the singularity will be round to help me any time soon. I am stuck in the Matrix doing the washing. What if I’m here forever? Please come help me through it, there’s no need to hold the door – just hold my hand and let me know there’s no need to be afraid, even if the singularity is not coming, change is, thankfully it always is, it’s just around the corner.