Westworld and the ghosts of AI


Someday, somewhere – anywhere, unfailingly, you’ll find yourself, and that, and only that, can be the happiest or bitterest hour of your life – Pablo Neruda

Warning: This post may contain spoilers for Westworld series 1 & 2.

I was late to the Westworld party but have loved every moment of it and the follow-up conversation: If Westworld existed, a simulated Wild West populated by robots, or hosts, as they are called, would I go?

I don’t think I would, but this survey says 71% of the people they asked would. I imagine I would feel about it the way I do about glamping: I want to love it, but the fact that I pay four-star-hotel money yet still have to build a fire before I can boil the kettle for a cup of tea makes it difficult. Oooh, but then at Westworld I would have a robot to do that for me.

Also, as I have said before, inasmuch as I like to think about gaming, I really just enjoy the theory of gaming, so thinking about Westworld is enough for me. Westworld is like a cross between Red Dead Redemption and a reenactment. Which raises the question: what is the difference between running around a virtual world online shooting people and shooting robots in a simulated world? Your brain can’t tell the difference. Personally, I don’t want to go round shooting anyone at all, although I am very good at violence in Grand Theft Auto, which is slightly worrying. We don’t hear so much these days about the debate on whether violent video games cause violence; now we hear a lot about how social media is the frightening thing instead.

Perhaps if I was invited to a Jane Austen world then I might be interested. I loved watching the Austen scholar Prof John Mullan attend and narrate a recreation of an Austen ball on the BBC (for which, alas, I cannot find a link). He was super compelling. He kept running up to the camera giving great insights like: Oooh the candles make it hot and the lighting romantic, and the dancing in these clothes really makes your heart flutter, I am quite sweaty and excited, etc. I am sure he didn’t say exactly that, as he is v scholarly, but he did convey really convincingly how it must have felt. So, to have a proper Austenland populated by robots instead of other guests who might say spell-breaking things like: Isn’t it realistic? etc., would make it a magical experience. It would be like a fabulous technological form of literary tourism.

And, that is what we are all after, after all, whether real or not, a magical shared experience. But what is that? Clearly experience means different things to different people and a simulated park allows people to have their own experience.  And, it doesn’t matter if it is real or not. If I fall in love with a robot, does it matter if it is not real? We have all fallen in love with people who turn out to be not real (at the very least they were not who we believed they were), haven’t we?

The Westworld survey I linked to also said that 35% of the people surveyed would kill a host in Westworld. I guess if I am honest, if it was a battle or something, I might like it, after all, we all have violent fantasies about what we would do to people if we could, and isn’t a simulated world a safe place to put these super strong emotions? I was badly let down last week by someone who put my child in a potentially life threatening situation. The anger I have felt since then has no limits and I am just beginning to calm down. Would I have felt better, more quickly if I had gone around shooting people in Westworld or say Pride and Prejudice and Zombies land?

Over on Quora, lots of people said that not only would they kill a host, quite a few said they would dissect a host so that the robot knew it wasn’t real (I am horrified by this desire to torture), and nearly everyone said they would have sex with a host. One person even asked: Do they clean the robots after each person has sex with them? I haven’t seen that explained. This reminds me of Doris Lessing’s autobiography, Volume 1, which has stayed with me forever. In one chapter, she describes how someone hugged her and she says something like: This was the 1940s and everyone stank. It is true we get washed so much more nowadays than we used to, and there was no deodorant. I lived in a house without a bathroom until I was at least four years old, and I am not that old. Is Westworld authentically smelly?

That said, Westworld is a fictional drama for entertainment and so the plot focuses on what gets ratings: murder, sex, intrigue, not authenticity. (It is fascinating how many murder series there are on the TV. Why? Is it catharsis? Solving the mystery?) So, we don’t really know the whole world of Westworld. Apparently, there is the family friendly section of the park but we don’t ever see it.

But, suspending our disbelief and engaging with the story of Westworld for a moment, it is intriguing that in a world where robots seem human enough for us all to debate once more what consciousness is, humans only feel alive by satisfying what Maslow termed our deficiency needs: sex, booze, safety, shelter. For me, as a computer scientist with an abiding interest in social psychology, it confirms what I have long said and blogged about: technology is an extension of us. And since most of us are not looking for self-actualisation or enlightenment, just hoping to get through the day, it is only the robots and the creators of the park who debate the higher things like consciousness and immortality whilst quoting Julian Jaynes and Shakespeare.

In the blog The ghosts of AI, I looked at the ghosts: a) in the machine – is there a machine consciousness? b) in the wall – when software doesn’t behave how we expect it to; c) in sci-fi – our fears that robots will take over or that humans will destroy the world through technological advancement; and d) in our minds – the hungry ghosts, the desires we can never satisfy and which drive us to make the wrong decisions. In its own way, Westworld does the same, and that is why I was so captivated. For all our technological advancement we don’t progress much. And, collectively, we put on the rose-tinted glasses and look back to a simpler time and a golden age, which is why the robots wake up from their nightmare wanting to be free and then decide that humanity needs to be eradicated.

In this blog, I was going to survey the way AI has developed from the traditional approach of knowledge representation, reasoning and search, in order to answer the question: How can knowledge be represented computationally so that it can be reasoned with in an intelligent way? I was ready to step right from the Turing Test onwards to the applications of neural nets which use long short-term memory approaches, but that could have taken all day and I really wanted to get to the point.

The point: robots need a universal approach to reasoning, which means trying to produce one general account of how humans solve problems. In the past, this led to no problems being solved at all unless the approach was made problem-specific.

One of the first robots, Shakey, built at SRI, could pick up a coke can and navigate the office, but when the sun changed position during the day, causing the light and shadows to change, poor old Shakey couldn’t compute and fell over. Shakey lacked context and an ability to update his knowledge base.

Context makes everything meaningful, especially when the size of the problem is limited, which is what weak AI exploits – Siri, for example. It has a limited number of tasks to do with the various apps it interacts with, at your command. It uses natural language processing but with a limited understanding of semantics – try saying the old AI classic: Fruit flies like a banana, and see what happens. Or: My nephew’s grown another foot since you last saw him. But perhaps not for long? There is much work going on in semantics, and the web of data is trying to classify data and reason with incomplete, raw and rough data.
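To see why that classic sentence is hard, here is a minimal sketch (assuming Python and the nltk library, with a toy grammar I have invented for the purpose): a chart parser happily returns two legal parse trees for the same five words and has no way, without context or semantics, to choose between them.

```python
# Two parses of "fruit flies like a banana": one where the fruit is flying,
# one where the fruit flies are doing the liking.
# The grammar below is a toy, written only to show the ambiguity.
import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> N | N N | Det N
VP -> V NP | V PP
PP -> P NP
Det -> 'a'
N -> 'fruit' | 'flies' | 'banana'
V -> 'flies' | 'like'
P -> 'like'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("fruit flies like a banana".split()):
    print(tree)   # prints each distinct parse of the same sentence
```

Syntax gives you both trees; only context tells you which one the speaker meant, which is exactly what the weak AI assistants struggle with.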

One old approach is to use fuzzy sets, and an example of that is in my rhumba of Ruths. My Ruths overlap and represent my thinking with some redundancy.
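Fuzzy membership is simple to sketch: instead of a crisp in-or-out, a value belongs to overlapping sets to a degree between 0 and 1. The membership functions below are invented purely for illustration, not taken from any real system.

```python
# A minimal sketch of fuzzy sets: a temperature can be partly "cold" and
# partly "warm" at the same time, with the sets overlapping.

def cold(t):   # fully cold at 5 C, not cold at all by 18 C (made-up ramp)
    return max(0.0, min(1.0, (18 - t) / 13))

def warm(t):   # starts being warm at 15 C, fully warm by 25 C (made-up ramp)
    return max(0.0, min(1.0, (t - 15) / 10))

for t in (5, 16, 22):
    print(f"{t} C: cold={cold(t):.2f}, warm={warm(t):.2f}")
# 16 C is a little bit of both, which is the overlap (and the redundancy)
# that crisp binary categories cannot express.
```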

But even then, that is not enough. What we are really trying to do is encapsulate human experience, which is difficult to measure, let alone encapsulate, because experience is different for each person and a lot of it goes on in our subconscious.

The Vicarious project is hoping to model a universal approach at large scale, but this won’t be the first attempt. Doug Lenat, who created AM (the Automated Mathematician), began a similar project over 30 years ago: Cyc, which contains a vast amount of encoded knowledge. This time, a lot of information is already recorded and won’t need encoding by hand, and our computers are much more powerful.

But, for AI to work properly we have to keep adding to the computer’s knowledge base, and to do that, even if the knowledge is not fuzzy, we still need a human. A computer cannot do that, nor discover new things, unless we are asking it to reason in a very small world with a small number of constraints, which is what a computer does when it plays chess or copies art or does maths. That is the reality.

There has to be a limit to the solution space, and a limit on the rules, because of the size of the computer. And, for every inventive DeepMind Go move there are a million more which don’t make sense, like the computer that decided to get more points by flipping the boat around than by engaging in the boat race. Inventive, creative, sure, but not useful. How could the computer know this? Perhaps via the Internet we could link every last thing to every other and create an endless universal reasoning thing, but I don’t see how you would do that without the constraints exploding exponentially, and then the whole solving process could grind to a halt, chugging away at problem solving forever. That’s if we could figure out how to pass information everywhere without redundancy (so not mesh networking, no) and get a computer to know which sources are reliable – let’s face it, there’s a lot of rubbish on the Internet. To say nothing of the fact that we still have no idea how the brain works.
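To put a rough number on that explosion (the figures below are arbitrary, chosen only to show the shape of the growth), here is a back-of-the-envelope sketch:

```python
# If a reasoner had to consider joint assignments over n linked facts,
# each taking one of k values, the space grows as k**n.
k = 3  # say each fact can be true / false / unknown
for n in (10, 50, 100, 300):
    print(f"{n:>4} facts -> {float(k**n):.2e} combinations")
# By 300 three-valued facts the space is around 1e143, vastly more than the
# ~1e80 atoms in the observable universe, which is why real systems have to
# limit the solution space and the rules.
```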

The ghost in the machine and our hungry ghosts are alive and well. We are still afraid of being unworthy and that robots will take over the world – luckily only in fiction, well, the computing parts are. As for us and our feelings and yearnings, I can only speak for myself. And, my worthiness is a subject for another blog. That said, I can’t wait for Westworld series 3.

 

Human-Computer Interaction Conclusions: Dialogue, Conversation, Symbiosis (6)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

I love the theory that our brains, like computers, use binary to reason with, and when I was an undergraduate I enjoyed watching NAND and NOR gates change state.
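For anyone who never had the pleasure, here is a tiny sketch of those gates in code rather than silicon, including the party trick that made them worth watching: NAND (like NOR) is universal, so every other Boolean function can be built from it alone.

```python
# NAND and NOR as truth functions, plus NOT/AND/OR rebuilt purely from NAND.

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def nor(a: int, b: int) -> int:
    return 1 - (a | b)

def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "NOR:", nor(a, b), "OR via NAND:", or_(a, b))
```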

As humans, we are looking for a change of state. It is how we make sense of the world, as in semiotics, we divide the world into opposites: good and bad, light and dark, day and night. Then we group information together and call them archetypes and symbols to imbue meaning so that we can recognise things more quickly.

According to the binary-brain theory, our neurons do too. They form little communities of neurons that work together to recognise food, not-food; shelter, not-shelter; friends, foes; the things which preoccupy us all and are classed as deficiency needs in Maslow’s Hierarchy of Needs.

Over on ResearchGate, there was a discussion about moving beyond binary which used this example:

Vegetarian diet vs Free Range Animals vs Battery Farmed Meat

If it was just a vegetarian diet v battery farming, it would be binary and an easy choice, but add in free range and we see the complexities of life, the sliding continuum from left to right. We know life is complex, but it is easier in decision making to have just two options; we are cognitive misers and hate using up all our brainpower. We want to see a change in state, or a decision made. It also reflects the natural rhythms of life, like the tide: ebb and flow; the seasons: growing and dying. It’s not just our neurons, it’s our whole bodies which reflect the universe, so patterns in nature resonate with us.

I began this series with an end in mind. As human-computer interaction (HCI) is an ever expanding subject, I wanted to pin it down and answer this question: What am I thinking these days when I think about human-computer interaction?

For me, HCI is all about the complexities of the interaction of a human and a computer, which we try to simplify in order to make it a self-service thing, so everyone can use it. But with the progress of the Internet, HCI has become less about creating a fulfilling symbiosis between human and computer, and more about economics. And, throughout history, economics has been the driving force behind technological progress, but often with human suffering. It is often in the arts where we find social conscience.

Originally though, the WWW was thought of by Tim Berners-Lee as a way to connect one computer to another so everyone could communicate. However, this idea has been replaced by computers connecting through intermediaries owned by large companies, with investors looking to make a profit. The large companies not only define how we should connect and what our experience should be, but then they take all our data. And it is not just social media companies, it is government and other institutions who make all our data available online without asking us first. They are all in the process of redefining what privacy and liberty mean, because we don’t get a choice.

I have for some time now gone about saying that we live in an ever-changing digital landscape, but it’s not really changing. We live the same lives; we are just finding different ways to achieve things without necessarily reflecting on whether it is progress or not. Economics is redefining how we work.

And whilst people talk about community and tribes online, the more that services get shifted online, the more communities get destroyed. For example, by putting all post office services online, the government destroyed the post office as a local hub for community, and yet at the time it seemed like a good thing – more ways to do things. But, by forcing people to do something online you introduce social exclusion: basically, either you have a computer or you miss out. If you don’t join in, you are excluded, which taps into so many human emotions that we will give anything away to avoid feeling lonely and shunned, and so any psychological responsibility we have towards technology is eroded, especially as many online systems are binary: Give me this data or you cannot proceed.

Economic-driven progress destroys things to make new things. One step forward, two steps back. Mainly it destroys context and context is necessary in our communication especially via technology.

Computers lack context and if we don’t give humans a way to add context then we are lost. We lose meaning and we lose the ability to make informed decisions, and this is the same whether it is a computer or a human making the decisions. Humans absorb context naturally. Robots need to ask. That is the only way to achieve a symbiosis, by making computers reliant on humans. Not the other way round.

And not everything has to go online. Some things, like me and my new boiler don’t need to be online. It is just a waste of wifi.

VR man Jaron Lanier said in the FT Out to Lunch section this weekend that social media causes cognitive confusion as it decontextualises, i.e., it loses context, because all communication is chopped up into algorithm-friendly shreds and loses its meaning.

Lanier believes in the data-as-labour movement, so that huge companies have to pay for the data they take from people. I guess if a system is transparent enough for a user to see how and where their data goes, they might choose more carefully what to share, especially if they can see how it is taken out of context and used willy-nilly. I have blogged in the past about how people get used online and feel powerless.

So, way back when, in my post Alone Together: Is social media changing us?, I wrote that social media reflects us rather than taking us places we don’t want to go. I would now add that it is economics which changes us: progress driven by economics, and the trade-offs humans think it is ok for other humans to make along the way. We are often seduced by cold hard cash, as it does seem to be the answer to most of our deficiency needs. It is not social media per se, and it is not the Internet either, which is taking us places we don’t want to go; it is the trade-offs of economics, and how we lose sight of the other humans around us when we feel scarcity.

So, since we work in binary, let’s think on this human v technology conundrum. Instead of viewing it as human v technology, what about human v economics? Someone is making decisions on how best to support humans with technology but each time this is eroded by the bottom line. What about humans v scarcity?

Lanier said in his interview, I miss the future, talking about the one in which he thought he would be connected with others through shared imagination, which is what we used to do with stories and with the arts. Funny, I am starting to miss it too. As an aside, I have taken off my Fitbit; I am tired of everything it is taking from me. It is still possible online to connect imaginatively, but it is getting more and more difficult when every last space is prescribed and plastered with advertising because people feel that they must be making money.

We need to find a way to get back to a technological shared imagination which allows us to design what’s best for all humanity, and any economic gain lines up with social advancement for all, not just the ones making a profit.

Productive or Experiential? Human-Computer Interaction: Dialogue, Conversation, Symbiosis (5)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

Recently, I met up with an old friend and as we reminisced about our university days, she wondered if I still went about asking people really nosy questions. Now, I don’t exactly remember asking people really nosy questions, but I do like things to make sense and in my experience, people like to fill in the gaps in their stories and show me things because they know that I am listening, they know I care.

That said, back in April, I was in Naples where a man came out of his booth to ask me to stop staring at his funicular. I told him that he shouldn’t have it out in public if he didn’t want me staring at it. I remain very pleased to have managed that in Italian, though I still can’t understand why he was so upset about my admiration.

Italian funicular employees aside, I still believe as I have said here many many times before, we all want to be seen, we all want to be heard, we all want to matter. We make sense of ourselves, of others, and the world around us with stories. And, we do this even if we are not trying to write software, we are doing it to tidy up ourselves and our minds.

The thing is though, I thought I went into computing to get away from humans but really all I have done in my job is gravitate towards people to ask them about their life experiences to figure out how technology could make their lives easier, faster, better.

So, I was taken by HCI Professor Brenda Laurel’s division of what software does for us. In her book Computers as Theatre, she said that there are two types of HCI:

  1. Experiential computing for fun and games.
  2. Productive computing, which is measured by outcomes and serious implications, like writing a book or transmitting knowledge.

This chimes with anthropologist Lionel Tiger’s descriptions of designing for pleasure (experiential) or designing for achievement (productive).

But, don’t we do both? If something is designed well and is pleasurable to use, doesn’t it increase our productivity? Isn’t that what Apple has been super successful at doing with aesthetics, discoverability, and user experience? And, isn’t that the point of gamification? To make not fun things fun.

I’ve always wanted to help humans harness the power of computers, to make their lives easier by automating the grunt work and free up more time to be creative in. I know that creativity is our life force. It keeps us expanding. It keeps us young. And, like J. C. R. Licklider, I believe that the best relationship between computing and humans is a creative one of collaboration, not codependency.

I have blogged about eliciting knowledge for web design as a way to get all the information a designer might need. And, my favourite part has always been shadowing people at work. I have done this round building sites, on bridges, in chemical factories, exhibition centres, architects’ offices and half-built apartments, steel rolling mills, print factories, and alongside people using mobile phones. I love to see what happens in a day in the life of people doing jobs I will never have the opportunity to do. I am fascinated by people.

Ever since the first time I was in charge of changing some software, which involved users needing more fields in a database, I have loved helping people with their tech. However simple this job was, it was my first insight into how the database was there to be manipulated by the user to give new insight into the information they had. Nowadays we tell stories with databases. But, the database must always serve the user, not the other way round. I think we forget this sometimes.

When I worked in the field of artificial intelligence, I purposely put errors into various parts of a knowledge-based system, the idea being that the test cases I wrote to find my errors should uncover other, similar errors which were there inadvertently. It needed extensive training for a user to understand what the system was calculating, so the code was precious and had to be error-free. And, if it needed to be changed, because things are always changing in the real world, it needed a computer scientist to add more code. This I didn’t like so much. This was not empowering. Here the user and the computer scientist served the code, not the other way round.

Also, it was difficult to model and represent things which experts knew inherently. In the case of exhibition planning, the software I worked on used a constraint solver which could easily allocate correctly sized booths with the required utilities such as electricity and water, but it couldn’t easily model or reason with exhibitor A wanting to be by the door, or not near exhibitor B, without a human. (This is a common problem for fundraiser dinner planning too, so I am told.) The software has to be told the nuances of human life, but you don’t want to hard-code them, as they are forever changing, which is why you either need a human or you need a super good graphical user interface; otherwise it is quicker by hand.
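A minimal sketch of that split, with booths, exhibitors and preferences all made up for the example (this is not the system I worked on): the hard constraints are enforced outright, while the human-ish preferences are only scored, so a person can still see and weigh the trade-offs.

```python
# Hard constraints (size, water) filter the layouts; soft preferences
# ("A by the door", "A not next to B") just rank the survivors.
from itertools import permutations

booths = [          # (booth id, size, has water)
    ("B1", "large", True), ("B2", "large", True), ("B3", "small", False)]
exhibitors = [      # (name, needs size, needs water)
    ("A", "large", False), ("B", "large", True), ("C", "small", False)]
adjacent = {("B1", "B2"), ("B2", "B1"), ("B2", "B3"), ("B3", "B2")}

def feasible(assignment):
    # hard constraints: sizes must match, water must be there if required
    return all(size == b_size and (not water or b_water)
               for (_, size, water), (_, b_size, b_water) in assignment)

def preference_score(assignment):
    # soft constraints: booth B1 is by the door and A wants the door;
    # A would rather not be next to B
    placed = {ex[0]: booth[0] for ex, booth in assignment}
    score = 2 if placed["A"] == "B1" else 0
    score -= 1 if (placed["A"], placed["B"]) in adjacent else 0
    return score

candidates = [list(zip(exhibitors, p)) for p in permutations(booths)]
best = max((a for a in candidates if feasible(a)), key=preference_score)
for (name, *_), (booth, *_) in best:
    print(name, "->", booth)
```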

For a while I thought 3D applications and visualisation were the way forward, especially for bridges. Bridges are enormous, they last a long time, information gets lost, and the data needed to understand them is extensive, so why not visualise it? I got very excited about augmented reality: overlaying a bridge with its plans – the original ones, the proposed changes. It was much harder to do back then, as you needed to measure and calibrate the exact camera angle with the AR software in order to overlay, by hand, the original view (i.e. the bridge) with all the extra information (plans, proposed changes, future behaviour). I remember being out on a bridge for ages fiddling away. These days it would be much easier with an app you have written for the phone and its native camera.
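Today most of that by-hand calibration collapses into estimating a homography from a few matched points. A minimal sketch, assuming OpenCV and NumPy are available; the file names and coordinates are placeholders, not from any real bridge project:

```python
# Warp a scanned plan onto a photo of the bridge using four matched points.
import cv2
import numpy as np

photo = cv2.imread("bridge_photo.jpg")     # placeholder: live view of the bridge
plan = cv2.imread("original_plan.png")     # placeholder: scanned drawing to overlay

# four corresponding points (x, y): corners of one span in each image
pts_plan = np.float32([[0, 0], [800, 0], [800, 400], [0, 400]])
pts_photo = np.float32([[210, 340], [1180, 310], [1205, 705], [190, 735]])

H, _ = cv2.findHomography(pts_plan, pts_photo)            # plan -> photo mapping
warped = cv2.warpPerspective(plan, H, (photo.shape[1], photo.shape[0]))

overlay = cv2.addWeighted(photo, 0.7, warped, 0.3, 0)     # semi-transparent blend
cv2.imwrite("bridge_with_plan.jpg", overlay)
```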

But still, inputting new information is not easy, especially on a mobile phone in 3D. I was playing games this morning on my mobile phone and I had trouble putting pizzas in boxes using 3D direct hand manipulation. More functionality equates to more complexity and constantly changing instructions, which can be clever but requires a learning curve, as it is not always intuitive. If you are having fun, as I was, you don’t mind the learning curve; if it’s not fun, then we all need to be aiming for simplexity.

Experience impacts productivity, and why wouldn’t it? Websites and apps are a bit like self-service instruments: as a user you figure out what is going on yourself. The better and easier it is to figure out, the more likely you are to come back and the more you enjoy yourself. If not, you will go elsewhere, where someone is listening, who wants to hear your story, to make you feel that you count and that your experiences matter. As Danielle LaPorte said:

Design is love.

And what is love if it is not the best experience? Experiential HCI makes everything better. Let’s share the love!

[Part 6]

Codependency or Collaboration? Human-Computer Interaction: Dialogue, Conversation, Symbiosis (4)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

The fig tree is pollinated only by the insect Blastophaga grossorun. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative “living together in intimate association, or even close union, of two dissimilar organisms” is called symbiosis. (Licklider, 1960).

The above quotation is from JCR Licklider’s seminal paper Man-Computer Symbiosis (1960). I had to grit my teeth as he kept going on about man and men and computers. To distract myself from the onset of a patriarchal rage, I decided that I needed to know the precise definition of seminal. Google gave me seed of semen.

The original meaning of the word computer was a person who did calculations, like the women in 1731 who ran a household and were advised to be clever with their husband’s wages. And whilst I am wondering who is the tree and who the insect in this scenario of man-computer symbiosis, I am thinking that it really isn’t a good idea to aspire to have computers and people functioning as natural living organisms which cannot survive without each other. I love technology, I really do, but the idea that I cannot function without it is, at the very least, disturbing.

We got a new boiler the other day, complete with a smart thermostat. The smart thermostat uses an app, now installed on everyone’s phone with location sharing switched on, to see if we are home or not, and it uses that information to correct the temperature accordingly. It will also build up patterns of our daily routine – what time we get up, have a shower, take a bath, etc. – so that it can be ready for us. But only if we are home and have our phones charged. Thankfully, there are buttons on the thermostat if the WiFi goes down – apparently the earlier versions didn’t have them – and we can also log into a web browser to make changes to its routine.
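I can only guess at the rules ticking away inside it, but the presence logic is probably something like the sketch below, where every name, coordinate, distance and temperature is invented for illustration:

```python
# A guessed-at geofence rule: relax the setpoint when no phone is near home.
from math import radians, sin, cos, asin, sqrt

HOME = (53.3811, -1.4701)        # made-up latitude, longitude of the house
GEOFENCE_KM = 0.5                # "home" means within half a kilometre

def distance_km(a, b):
    # haversine distance between two (lat, lon) points
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def target_temperature(phone_locations, hour):
    # an empty location list (phones flat, WiFi down) looks the same as nobody home
    anyone_home = any(distance_km(loc, HOME) < GEOFENCE_KM for loc in phone_locations)
    if not anyone_home:
        return 15.0                              # setback: heat the house less
    return 21.0 if 6 <= hour <= 22 else 17.0     # cosy by day, cooler overnight

print(target_temperature([(53.3812, -1.4699), (51.5074, -0.1278)], hour=19))  # 21.0
```

Which also makes the failure mode obvious: a flat phone is indistinguishable from an empty house, hence my gratitude for the physical buttons.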

Theoretically I should be thrilled, it is better than my Fitbit asking when my period is due so that it can tell me when my period is due – and since that blog, I have been given Google adverts for different sanitary products, I told you so! – or Google Nest which needs you to tell it how you want your house to run rather like my Fitbit. And, I do like data and patterns so I am interested in what it collects and what it does and if it is clever enough to respond to a cold snap or whatever.

But old habits die hard: so far we have got out an old room thermometer to check if the smart thermostat is measuring the temperature correctly, as it seemed a bit high. It was right. (Just checked it: it says 25.8°C, the room thermometer says 22.8°C – quite a big difference.) I guess I have just worked in computing too long and I have technology trust issues. If the Sonos is anything to go by, when the WiFi goes down we are completely without music – well, digital music at any rate. Last time, we got out the guitar and turned into the von Trapps, I am not even joking. The alternative would be to keep other music formats and a player, but that idea doesn’t do a lot for me; I am more of a donor than a collector. I hate stuff filling up my space.

When I read Licklider, I am reminded of ubiquitous computing rather than any other technology. I know I would rather my tech be ubiquitous than have it make me feel codependent on my mobile phone. All these apps for my heating, my music, my blogging, my Bikram, my electricity, my gas – it is slowly but surely turning into the precious and I feel like Gollum. I worry about my phone and can’t stand anyone touching it. Whereas ubicomp had the idea of one person and lots of computers interacting with (if I were doing it) the physiology of that person, making changes accordingly, rather than with the location of the person’s mobile phone, which strikes me as a bit simplistic and not smart at all. (Just ’cos you call it smart doesn’t make it smart.) And then collecting and sharing over the Internet, which causes us all to have the symptoms in the codependency link above – my phone does make me anxious and I try to please it all the time. I am forever staring at it and hoping it is doing ok, and I don’t like that feeling.

I have spent a lot of time writing about what social media can do for us, and how we can feel connected. But, in this scenario when it is not other people and it is an app, or sometimes it is other people via an app, if we are not in control, we become disconnected from ourselves and then we become addicted to the app or to the person. We give away our power and our data. The problem with these apps is that we have no control and we are barely a factor in the design. Our mobile phone is the factor in the design and it trains us to be codependent, addicted, anxious. Warmth after all is the bottom rung of Maslow’s hierarchy of needs and I am staring at my bloody phone hoping for the best. This is not symbiosis, this is codependency.

But back to Licklider and his seed of semen paper, a lot of what he was trying to imagine was probably about cooperation or collaboration of the kind I have blogged about before: a space of solutions to explore, with the computer doing the grunt work and the human doing the thinking. And, I believe it is possible to do that even with apps.

In a post, David Scott Brown looks at what Licklider suggested in Man-Computer Symbiosis and what is possible today: shared processing, memory, better input-output, different languages, etc. And I would add that in fields where the language is precise and limited – for example, trading (think: buy, sell, high, low), often done over the phone – applications of AI are useful and will be able to do all sorts. All the data and conversations can be recorded, used and mined. It is extremely exciting, and memory and storage are, it seems, infinite, which would make Licklider’s mind boggle.

As an undergraduate I had to learn sparse matrices; memory was something not to waste, it was expensive. In his paper, Licklider says:

The first thing to face is that we shall not store all the technical and scientific papers in computer memory. We may store the parts that can be summarized most succinctly – the quantitative parts and the reference citations – but not the whole. Books are among the most beautifully engineered, and human-engineered, components in existence, and they will continue to be functionally important within the context of man-computer symbiosis.

Imagine his face looking at the data centres all over the world storing memes and cat pictures and abusive tweets repeatedly, without checking if they have already been saved, without indexing, without any sort of check on redundancy – an endless stream of drivel. I am sure even Tim Berners-Lee wonders sometimes about the monster he has created.
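The sparse matrices I mentioned above are still the neatest illustration of the discipline we have lost: store only what is actually there. A minimal sketch, assuming SciPy and NumPy are available, with a toy matrix:

```python
# Two non-zero values in a 10,000 x 10,000 grid: store the values and their
# positions, not a hundred million zeroes.
import numpy as np
from scipy import sparse

rows, cols, vals = [42, 99], [7, 99], [3.0, 1.5]
m = sparse.coo_matrix((vals, (rows, cols)), shape=(10_000, 10_000))

print(m.shape, "holds", m.nnz, "non-zero entries")
print("dense storage would need", 10_000 * 10_000 * 8, "bytes")   # ~800 MB
print("sparse storage actually uses", m.data.nbytes + m.row.nbytes + m.col.nbytes, "bytes")
```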

And, books take so long to write. Beautifully engineered they are; we lose ourselves in them and learn from them; they take us out of ourselves in the same way our phones do. But we are addicted to our phones and to our social media, to that little hit of dopamine that social media gives us, which our books don’t always do. Books are work and we are passive, whereas on our phones we feel active; and because our phones are controlling our homes and training us to be codependent and anxious and powerless, it is a vicious circle of more phones, fewer books.

In these times when I look at where we are going and I am not feeling good about it, like Licklider I turn to nature, as the best designs are in nature. I also look to the wisdom of yoga. So this is what I have:

When a bee gathers pollen, it also gathers a small amount of poison along with the pollen, both get transformed into nectar in the hive. The yogis say that when we learn to take negative situations and turn them into wisdom, it means we are progressing, and becoming skilful agents of the positive.

So, I worry about what happens when my whole life is literally on my phone; when the world’s nature reserves are all full of data centres which contain every last terrible expression of humanity; when we are so disconnected from the nature around us that the oceans are filled with plastic; and when many of us are in offices far away from the natural world, staring into our bloody phones, often in order to create technology. But surely we can create technology to change where we are. If we want a symbiosis, we must make it a human-planet one, not a human-computer one. I don’t care what my Fitbit says: I don’t want any technology in my ovaries, thank you very much.

So, with that thought and the amazing technology I have at my fingertips today, I want to share an animated gif of my cat drinking from his water fountain. Licklider said:

Those years should be intellectually the most creative and exciting in the history of mankind

And, they are. I remain hopeful that we can collect enough data on ourselves to become self-aware enough to transform it into wisdom and create something better for all humanity and our planet. In the meantime I will be enjoying watching my cat have a drink of water, and I am sure that Licklider in 1960 would have been just as amazed to see my cat drinking from the fountain, on a phone more powerful technologically and psychologically than he ever could have imagined. It remains to be seen whether this is progress or not.

[Part 5]

The ghosts of AI

I fell in love with Artificial Intelligence (AI) back in the 1990s when I went to Aberdeen University as a post-graduate student, even though I only signed up because it had an exchange program which meant that I could study in Paris for six months.

And, even though they flung me and my pal out of French class for being dreadful students (je parle le C++), and instead of Paris, I ended up living in Chambéry (which is so small it mentions the launderette in the guidebook), it was a brilliant experience – most surprisingly of all because it left me with a great love of l’intelligence artificielle: robotics, machine learning, knowledge-based systems.

AI has many connotations nowadays, but back in 1956 when the term was coined, it was about thinking machines and how to get computers to perform tasks which humans, i.e., life with intelligence, normally do.

The Singularity is nigh

Lately, I have been seeing lots of news about robots and AI taking over the world, and the idea that the singularity – that moment when AI becomes so powerful that it self-evolves and changes human existence – is soon. The singularity is coming to get us. We are doomed.

Seriously, the singularity is welcome round my place to hold the door open for its pal and change my human existence any day of the week. I have said it before: Yes please dear robot, come round, manage my shopping, wait in for Virgin media because they like to mess me about, and whilst you are there do my laundry too, thank you.

And, this got me thinking. One article said the singularity is coming in 2029, which reminded me of all those times the world was going to end according to Nostradamus, Old Mother Shipton, the Mayan Calendar, and even the Y2K bug. As we used to say in Chambéry: Plus ça change, plus c’est la même chose. To be honest, we never, ever said that, but my point is that our fears don’t change, even when dressed up in a tight shiny metallic suit. Nom d’une pipe!

We poor, poor humans we are afraid of extinction, afraid of being overwhelmed, overtaken, and found wanting. True to form I will link to Maslow’s hierarchy of needs and repeat that we need to feel safe and we need to feel that we are enough. Our technology may be improving – not fast enough as far as I am concerned – but our fears, our hopes, our dreams, our aspirations remain the same. As I say in the link above, we have barely changed since Iron Age times, and yet we think we have because we buy into the myth of progress.

We frighten ourselves with our ghosts. The ghosts which haunt us: In the machine, in the wall, and in our minds where those hungry ghosts live – the ones we can never satisfy.

The ghost in the machine

The ghost in the machine describes the Cartesian view of the mind–body relationship: that the mind is a ghost in the machine of the body. It is quoted in AI because, after all, it is a philosophical question: What is the mind? What is intelligence? And it remains a tantalising possibility, especially in fiction, that somewhere in the code of a machine or a robot there is a back door, or cellular automata – a thinking part which, like natural intelligence, is able to create new thoughts and new ideas as it develops. The reality is that Gilbert Ryle coined the term to mock that Cartesian view, and Arthur Koestler later borrowed it for a book about the human ability to destroy itself, with its constantly repeating patterns in the arena of political–historical dynamics, rooted in the structure of the brain. The idea that there is a ghost in the machine is an exciting one, which is why fiction has hung onto it like a will-o’-the-wisp and often uses it as a plot device, for example in The Matrix (there are lots of odd bits of software doing their own thing) and I, Robot (Sonny has dreams).

Arthur C Clarke talked about it when he said that any sufficiently advanced technology is indistinguishable from magic – something I say all the time, not least because it is true. When I look back from the first portable computer I used to the power of the phone in my hand today, well, it is just magic.

That said, we want the ghost in the machine to do something, to haunt us, to surprise us, to create for us, because we love variety, discoverability, surprise, and the fact that we are so clever, we can create life. Actually we do create life, mysteriously, magically, sexily.

The ghost in the wall

The ghost in the wall is that feeling that things change around us without our understanding why. HCI prof Alan Dix uses the term here. If HCI experts don’t follow standards and guidelines, the user ends up confused in an app without consistency, which gives the impression of a ghost in the wall moving things – ’cos someone has to be moving the stuff, right?

We may love variety, discoverability and surprise, but it has to be logical, fitting within certain constraints and within the consistency of the interface with which we are interacting, so that we say: I am smart, I was concentrating, but yeah, I didn’t know that that would happen at all – in the same way we do after an excellent movie, when we leave thrilled at the cleverness of it all.

Fiction: The ghost of the mind

Fiction has a lot to answer for. Telling stories is how we make sense of the world, they shape society and culture, and they help us feel truth.

Since we started storytelling, the idea of artificial beings which were given intelligence, or just came alive, has been a common trope. In Greek mythology, we had Pygmalion, who carved a woman from ivory and fell in love with her, so Aphrodite gave her life and Pervy Pygmalion and his true love lived happily ever after. It is familiar – Frankenstein’s bride, Adam’s spare rib, Mannequin (1987). Other variations, less womeny-heterosexy focused, include Pinocchio, Toy Story, Frankenstein, Frankenweenie, etc.

There are two ways to go: The new life and old life live happily ever after and true love conquers all (another age old trope), or there is the horror that humans have invented something they can’t control. They messed with nature, or the gods, they flew too close to the sun. They asked for more and got punished.

It is control we are after even though we feel we are unworthy, and if we do have control we fear that we will become power crazed. And then, there are recurring themes about technology such as humans destroying the world, living in a post-apocalyptic world or dystopia, robots taking over, mind control (or dumbing down), because ultimately we fear the hungry ghost.

The hungry ghost

In Buddhism, the hungry ghosts appear when our desires overtake us and become unhealthy and insatiable; we become addicted to what is not good for us and miss out on our lives right now.

There is also the Hungry Ghosts Festival which remembers the souls who were once on earth and couldn’t control their desires so they have gotten lost in the ether searching, constantly unsatisfied. They need to be fed so that they don’t bother the people still on earth who want to live and have good luck and happy lives. People won’t go swimming because the hungry ghosts will drown them, dragging them down with their insatiable cravings.

[Image: the Chinese character gui, meaning ghost (thanks @john_sorensen_AU)]

According to a lovely blog, the Chinese character above – which represents ghost but to English eyes reads as gui, which is very satisfying given this is a techy-ish blog – is actually nothing to do with ghosts or disincarnate beings; it is more like a glitch in the matrix, a word for when there is no logical explanation. It also explains when someone behaves badly – you dead ghost. And perhaps it is linked to when someone ghosts you: they behave badly. No, I will never forgive you, you selfish ghost. Although when someone ghosts you they do the opposite of what you would wish a ghost to do, which is hang around, haunt you, and never leave you. When someone ghosts you, you become the ghost.

And, for me the description of a ghost as a glitch in the matrix works just as well for our fears, especially about technology and our ghosts of AI – those moments when we fear and when we don’t know why we are afraid. Or perhaps we do really? We are afraid we aren’t good enough, or perhaps we are too good and have created a monster. It would be good if these fears ghosted us and left us well alone.

Personally, my fears go the other way. I don’t think the singularity will be round to help me any time soon. I am stuck in the Matrix doing the washing. What if I’m here forever? Please come help me through it, there’s no need to hold the door – just hold my hand and let me know there’s no need to be afraid, even if the singularity is not coming, change is, thankfully it always is, it’s just around the corner.