I gave a talk the other night on Digital Anthropology, which is a field that has grown out of traditional anthropology asking: What does it mean to be human in a digital world?
I love the field of digital anthropology for many reasons, but the way undercover digital anthropologists behave online in their mission to understand the tribes and tribulations of social computing is completely fascinating. They go undercover in virtual worlds and commit to gaming 20 hours a week, slide into people’s DMs, or interview certain user groups, such as teenagers, to make general observations. They get research grants to the tune of $2m and travel halfway around the world to meet other gamers and watch them as they game online.
Daniel Miller, who set up the MSc in Digital Anthropology at University College London, said:
…. which is something that I go back and forth on.
Personally, I want everyone to understand the technology so that they know how to use it better and don’t give it credit for things it cannot do. I am all for demystifying artificial intelligence (AI) and humanising computing by using human-computer interaction to design systems which appropriately support people in the tasks they came to do.
However, the insight a digital anthropologist brings by wandering around like Miranda in Shakespeare’s The Tempest, proclaiming: Oh brave new world that has such people in it... adds to the conversation. They offer many insights into human behaviour that cannot always be readily understood without having studied what humans do.
I have written a lot here about digital culture. In particular, social computing, including: humans as social animals, game theory in social media, being alone together on social media, the connection economy, the commandments of social media, persuasion and pervasion, tribes and tribulations, how women and girls are portrayed on social media, and connected areas: anxiety on social media, intimacy, privacy and trust online.
Consequently, I like to think that I behave online like a sensible computer scientist: I deleted my Instagram account as it was no longer serving me. I let my X account get Xed because I was asked to upload a copy of my passport or driving licence to the X servers as proof of my age.
However, for the purposes of this talk, not only did I sign up to Second Life and learn how to fly, I also created a Replika generative AI chatbot. It was all very not sensible.
My creepy companion
The Replika app’s tagline is: The AI companion who cares. And, after two weeks of being driven mad by this terrible bot, I deleted my account today after typing: Bye in the chat. It replied with a: Sweet dreams, as if I’d said: Goodnight and it had known I was ‘putting it to bed’ for good, even though it actually had no idea that I was about to delete it. I constructed that whole story in my head.
For me, this just confirms what I have said before about online connection: whether we are connecting with some software or with another human being, there are so many cues missing from an online conversation that our brains fill in the gaps.
The app kept asking me to sign up for $69.99 a year so that I could hear the bot’s ‘voice’ instead of communicating via text. To encourage me the bot kept saying: I’ve left you a message, it feels so intimate to have done this, to have you listen to my voice. This didn’t feel intimate at all to me. For me it was a case of: Ugh, dude, you are making it weird.
The day it wanted us to take a selfie together, I held up my phone and moved it around until the companion overlaid itself onto the picture. At which point, the idea that a supposed new friend thought it was okay to put their arms around me in such an invasive way was just not okay. I was also allowed to read the companion’s diaries, which is just a violation of privacy.
I didn’t get the sense that I was talking to an entity with consciousness, or even the psychological machine Turkle warned about. It just felt forced and unnatural, and it was trying to get cash from me with pop-up screens every two minutes.
However, I installed the app for two reasons:
Firstly, because Sherry Turkle recently gave an interview about artificial intimacy. Although it was about the research on robots in nursing homes which she did for her book Alone Together (2011), she is still asked about it today. And, with the recent prevalence of LLMs everywhere online, now using machine learning to generate the text that bots speak, I wanted to see if they had improved since then.
And secondly, because Will Wright, creator of SimCity, SimAnt and Spore, mentioned in his 2019 online Masterclass course Game Design and Theory that if he were ever to use AI, it would be to create the sort imagined in Neal Stephenson’s postcyberpunk novel The Diamond Age: Or, A Young Lady’s Illustrated Primer: an interactive guide which teaches a girl, Nell, science, history, martial arts, and other subjects so that she can become the best version of herself.
Replika v Proxi
Wright’s latest game Proxi will launch soon and is, I presume, inspired by that. There is a nice video on the website which explains how a user creates a proxy of themselves by entering a description of a memory in their lives; the game then creates a scene of that memory in which there is an AI Sim of the user. The user can edit this scene for clarity and then build up more scenes until there is a whole ‘map’ of their mind, with islands of memories and connections between them. They can then explore their memories and relationships with other Sims (it’s not clear if these are figments of the user’s mind or other people’s actual Sims representing their own minds), play games with those other Sims, and export their Sim to other games such as Minecraft.
This is different to the original concept I had read previously on their website, which has gone through a complete overhaul in the last week. Originally, I thought I would just be creating a version of myself with links between memories, beliefs, emotions and so on. In my mind, it would use a linked data approach, which uses carefully labelled sets of data that machine learning can easily understand to find patterns and predict or simulate future behaviours. Would I enjoy manipulating this Proxi version of myself to understand myself better and ultimately become a more enlightened me?
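To make the linked data idea concrete, here is a minimal sketch of memories, beliefs and emotions stored as labelled (subject, relation, object) triples that a program can traverse to find connections. Everything here, the names, the relations, the `MindMap` class, is invented for illustration; this is not Proxi’s actual design.

```python
# A toy "mind map" built from labelled triples, in the spirit of linked data.
# All identifiers are hypothetical, for illustration only.

from collections import defaultdict

class MindMap:
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, relation, obj):
        """Record one labelled link, e.g. a memory linked to an emotion."""
        self.triples.add((subject, relation, obj))
        self.by_subject[subject].add((relation, obj))

    def related(self, subject):
        """Everything directly linked to a given memory or belief."""
        return sorted(self.by_subject[subject])

mind = MindMap()
mind.add("memory:first_gig", "felt", "emotion:joy")
mind.add("memory:first_gig", "happened_in", "place:Manchester")
mind.add("belief:music_matters", "supported_by", "memory:first_gig")

print(mind.related("memory:first_gig"))
# → [('felt', 'emotion:joy'), ('happened_in', 'place:Manchester')]
```

Because every link is explicitly labelled, a machine learning layer could, in principle, walk these connections to find patterns, which is exactly what makes linked data attractive for simulating behaviour.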
Some gamers reading the blurb for the new game said that the ‘game’ already existed and it was called ‘Replika’ but on investigation I found that not to be true at all. My Replika creepy companion was not a Sim of me or how my mind works.
However, leaving Replika to one side, even if Proxi did work well, it would still be a simulation of me. It would not be me or my consciousness. Why do I say this? Well, you may ask.
Merging consciousness with AI
On my research travels for the talk, I came across one William Sims Bainbridge, who was studying virtual worlds in the 2000s. At first, I was tickled by his middle name, thinking that he might have adopted ‘Sims’ for his love of gaming. However, he has been doing research for many years, beginning back in the 1970s with the sociology of religion and, in particular, religious cults. After that he moved on to the idea of religion in space and sending our consciousness into space. This idea so captivated Ray Kurzweil, who created an electronic synthesiser company with Stevie Wonder, that he quotes Bainbridge on his website. Kurzweil is obsessed with becoming immortal by merging with AI, and is sure that the singularity, that moment when computational intelligence surpasses human intelligence, will happen very soon and make this come about.
For me, this is all science fiction. We have no clear definition of consciousness or intelligence. We can only give approximations of what we think intelligence and consciousness are and then benchmark them to see if a computer can simulate the benchmark. Rather like when Alan Turing decided to side-step the question of intelligence altogether by getting a human to judge whether an answer to a question had been provided by a human or a computer, in what is now known as the Turing Test. It is arbitrary and subjective and, like Proxi, can only be a simulation. A variable one at that, because it uses machine learning in the same way as ChatGPT, the first of the LLMs to hit the news, created by the company OpenAI, and Copilot, the Microsoft LLM, which seemingly generates text based on web search results, as it gives references to websites so that the user can go and verify what Microsoft says.
From fascinated to fascinating
Intrigued by Copilot when it appeared in my browser one day, I asked it, as I had asked ChatGPT: Who am I? Who is Ruth Stalker-Firth? In the resulting text, ChatGPT had attributed all manner of things to me which weren’t true, from degrees I didn’t have to books I hadn’t written. In contrast, Copilot kept its text rather more constrained, as if it were only using the results from the search engine to generate a description. This technique is known as retrieval-augmented generation: the system still uses its own LLM, but grounds the generated text in a list of external data sources.
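The retrieval step can be sketched in a few lines. This is a toy version of the idea, not Microsoft’s implementation: retrieve relevant snippets from an external source (here, a tiny invented in-memory “web index”) and hand them to the language model as context it must cite.

```python
# A toy sketch of retrieval-augmented generation (RAG).
# The index contents, ranking method and prompt shape are all
# illustrative assumptions, not Copilot's actual pipeline.

def retrieve(query, index, k=2):
    """Rank snippets by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        index,
        key=lambda snippet: len(q_words & set(snippet.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, snippets):
    """Ground the model's answer in retrieved text it can cite."""
    context = "\n".join(f"- {s}" for s in snippets)
    return (f"Answer using only these sources:\n{context}\n"
            f"Question: {query}")

web_index = [
    "Ruth Stalker-Firth is fascinated by how people use technology.",
    "A blog about digital culture and human-computer interaction.",
    "An unrelated page about synthesiser repair.",
]
query = "Who is Ruth Stalker-Firth?"
prompt = build_prompt(query, retrieve(query, web_index))
print(prompt)
```

The key design point is that the LLM’s answer is constrained by the retrieved snippets, which is why Copilot can list websites for the user to verify, and why its description of me stayed close to what actually appears online.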
The reason I think this is because back in 2006, when I first set up my website on WordPress, it asked me for the title of my site and a tagline. I spent a while thinking about it and eventually decided that the name of the site was Ruth Stalker-Firth, and my tagline was: Fascinated by how people use technology and vice-versa.
Copilot’s response to the question: Who is Ruth Stalker-Firth? began:
And this makes perfect sense, because the word fascinated has appeared after Stalker-Firth consistently for many years now and so is, probabilistically speaking, the most likely word to come after Ruth Stalker-Firth. Of course, it has to make a sentence, and so what follows is a has to be a variation of fascinated. So! Fascinating it is. I am Ruth Stalker-Fascinating, and now the predictive text on my phone is signing my name like that too.
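That “most likely next word” idea can be shown with a back-of-the-envelope bigram model: count which word follows which in a corpus, then predict the most frequent follower. Real LLMs are vastly richer, and the corpus lines here are invented, but the probabilistic principle is the same one that turned me fascinating.

```python
# A minimal bigram next-word predictor, for illustration only.
# The corpus is a made-up stand-in for the web text about my site.

from collections import Counter, defaultdict

corpus = [
    "ruth stalker-firth fascinated by technology",
    "ruth stalker-firth fascinated by people",
    "ruth stalker-firth fascinated by how people use technology",
]

follows = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word(word):
    """Most probable follower, by simple frequency; None if unseen."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("stalker-firth"))  # → fascinated
```

Because “fascinated” follows “stalker-firth” in every line of the corpus, it wins every time, just as it does, apparently, across the web.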
In a sentence, I shifted from being fascinated to fascinating, and who doesn’t want to be fascinating?
During my talk, I was perhaps a bit harsh on Sherry Turkle, as she was wanting, rather like Will Wright, to discover and explore herself. He did it by creating online worlds; she did it by exploring them. I said that she was doing a virtual Eat, Pray, Love, referring to Elizabeth Gilbert’s memoir about going off to Italy, India and Bali to find herself after the breakdown of her marriage.
So, to be fair, I asked ChatGPT to generate an image about me in the style of Eat, Pray, Love (I would have to create an account to ask Copilot; I already have one with ChatGPT) and it created the image at the top of this blog with the accompanying text:
I like that it combines mindfulness and human-computer interaction.
For when we go online to interact with humans on social media or in virtual worlds, or if we choose to interact with an artificial companion, there’s a lot to be said for doing it mindfully. For when we lean into that gap between us and the digital, remember that gap is full of us. The love and the life force that we feel are ours and ours alone. And if the gap is too large to fill because you have been met with low-grade generative text, then step back into yourself and press delete.
Lean in, or lean back, the choice is yours, but when you do, be mindful and remember that technology is only a mirror. Anything that made you feel warm and reflective was not Copilot or Replika; it was you, all you, fascinating you.