Storytelling with AI and machine learning

In the 1970s, Marvin Minsky, father of frames (and, some say, of neural nets), told a press conference that, 50 years on, computers would read and understand Shakespeare.

Today, computers can indeed read Shakespeare, but understand him? Not really, not so much, even though they have been used to explore his plays in a few ways:

  1. Computers are helping to prove which bits Shakespeare didn’t write; apparently John Fletcher wrote some parts of Henry VIII. I’ve always loved this conversation about who wrote what, especially the Christopher Marlowe and Shakespeare conspiracy theories. Was Marlowe really Shakespeare? Etc.
  2. Machine learning can categorise whether a Shakespeare play is a comedy or a tragedy based on the structure of how the characters interact (see the sketch after this list). In a comedy, simply put, characters come together a lot. In a tragedy, they don’t – and ain’t that the truth in real life?
  3. Anyone can generate their own Shakespearean play with machine learning.
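Here is a toy sketch of the idea in No. 2 – my own illustration in Python, not the researchers’ actual method, and the scene data and density threshold are made up. It treats a play as a graph of who shares a scene with whom, and calls a dense graph a comedy:

```python
import networkx as nx

def classify_play(scene_pairs, threshold=0.5):
    """scene_pairs: (character_a, character_b) pairs who share a scene."""
    g = nx.Graph()
    g.add_edges_from(scene_pairs)
    # In a comedy, characters come together a lot, so the graph is dense;
    # in a tragedy they don't, so it stays sparse.
    return "comedy" if nx.density(g) >= threshold else "tragedy"

# Hypothetical scene data, for illustration only.
much_ado = [("Beatrice", "Benedick"), ("Beatrice", "Hero"), ("Hero", "Claudio"),
            ("Benedick", "Claudio"), ("Beatrice", "Claudio"), ("Benedick", "Hero")]
print(classify_play(much_ado))  # dense graph -> "comedy"
```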

No. 3 seems mind-blowing but, to be honest – and I love me some Shakespeare – the results truly make no sense. However, it is hard to see that at first because Shakespearean English is like another language. I have attended some brilliant performances from Shakespeare School over the last couple of years, watching my children on stage, but for the first time I realised that it is only the context and the acting which, for me, gave the words their meaning – rather like when you watch a film on TV in a language you don’t quite understand, but the story is often universal. It has emotional resonance.

I learnt Macbeth’s first soliloquy in sixth form: Is this a dagger which I see before me? It is when Macbeth contemplates his wife’s horrifying idea of killing Duncan, the king. I can still recite it. It is meaningful because I studied it in depth and ruminated on what Macbeth must have been feeling: filled with ambition, excited but horrified, whilst feeling the this-isn’t-going-to-end-well feels.

However, machine learning cannot understand what Macbeth is saying. It hasn’t semantically soaked up the words and felt the emotional horror of contemplating murder in the name of ambition. All it has done is read the words and categorise them, and then write more words, using probability to infer statistically the most likely next word as it constructs each sentence, rather like predictive text does. It’s good and works to a certain extent, but none of us think that our predictive text is thinking and understanding. It is little more than guessing.
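To make the predictive-text point concrete, here is a minimal sketch in Python – my own toy, nothing like a production model: it counts which word follows which, then generates text by sampling the statistically most likely next words. No understanding anywhere:

```python
import random
from collections import Counter, defaultdict

text = ("is this a dagger which i see before me "
        "the handle toward my hand come let me clutch thee").split()

# Count which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    follows[current][nxt] += 1

def generate(word, length=10):
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word was never followed by anything
        words, counts = zip(*options.items())
        # Sample the next word in proportion to how often it followed this one.
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("is"))
```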

We can see this more easily when looking at Harry Potter. The text is much simpler than Shakespeare, so when a computer reads all the books and writes a new one – which is what the cool people at Botnik got a computer to do – it’s easier to see that the resulting novel, Harry Potter and the Portrait of What Looked Like a Large Pile of Ash, is interesting for sure, but doesn’t make a great deal of sense.

“Leathery sheets of rain lashed at Harry’s ghost as he walked across the grounds towards the castle. Ron was standing there and doing a kind of frenzied tap dance. He saw Harry and immediately began to eat Hermione’s family.”

“Harry tore his eyes from his head and threw them into the forest.” 

Very dramatic – I love the leathery sheets of rain – but it doesn’t mean anything. Well, it does in a way, but it hasn’t been designed the way a human would design a story, even unknowingly, and it doesn’t have the semantic layers which give text meaning. We need to encode each piece of data and link it to other pieces of data in order to enrich it and make it more meaningful. We need context and constraints around our data; that is how we create meaning. Making this a standard is difficult, but the World Wide Web Consortium is working on it, in part, in order to create a web of data, especially as all our devices go online – not that I think that is a good idea; my boiler does not need to be online.

And this, my friends, is where we are with machine learning. The singularity, the moment when computers surpass human intelligence, is not coming anytime soon, I promise you. Currently, it is a big jumble of machines, data sets, and mathematics. We have lots of data but very little insight, and very little wisdom. And, that is what we are looking for. We are looking to light the fire, we are looking for wisdom.

The prospect of thinking machines has excited me since I first began studying artificial intelligence – or, in my case, l’intelligence artificielle – and heard that a guy from Stanford, one Doug Lenat, had written a Lisp program and set it discovering mathematical things. It started simply, with 1+1 as a rule, and went on to discover Goldbach’s conjecture, which asserts that every even counting number greater than two is equal to the sum of two prime numbers.
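The conjecture itself is easy to check by brute force for small numbers – here is a little Python sketch of the maths (of the conjecture, mind, not of Lenat’s AM, which worked quite differently):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return a pair of primes summing to even n > 2, if one exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None  # a counterexample -- none has ever been found

for n in range(4, 30, 2):
    print(n, "=", goldbach_pair(n))
```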

The way the story was told to me was that Lenat would come in every morning and see what the computer had been learning overnight. I was captivated. So, imagine my excitement the day I was in the EPFL main library researching my own PhD and stumbled across Lenat’s thesis. I read the whole thing on microfiche there and then. Enthralled, I rushed back to the lab to look him up on the WWW – imagine that, I had to wait until I got to a computer – to see that after his PhD he had gone off to create a universal reasoning machine: Cyc.

Lenat recently wrapped up the Cyc project after 35 years. It is an amazing accomplishment. It contains thousands of heuristics, or rules of thumb, that create meaning out of facts which we humans have already learnt by the age of three, and which computers need in order to emulate reason. This is because computers must reason in a closed world, which means that if a fact or idea is not modelled explicitly in a computer, it doesn’t exist. There is so much knowledge we take for granted even before we begin to reason.

When asked about it, Marvin Minsky said that Cyc had had promise but had ultimately failed. Minsky said that we should be stereotyping problems and getting computers to recognise the stereotype – basically the generic pattern of a problem – in order to apply a stereotypical solution. I am thinking archetypes, potentially, with some instantiation, so that we can interpret the solution pattern and create new solutions, not just stereotypes.

In this talk about Cyc, Lenat outlines how it uses both inductive (learning from data) and deductive (applying heuristics or rules) reasoning. Lenat presents some interesting projects, especially problems where data is hard to find. However, it is these sorts of problems which need to be looked at in depth. Lenat uses the example of container spillages and how to prevent them.

Someone said to me the other day that a neuroscientist told them that we have all the data we will ever need. I have thought about this and hope the neuroscientist meant: we have so much data we could never process it all, because to say we have all the data we need is just wrong. A lot of the data we produce is biased, inaccurate and useless. So why are we keeping it and still using it? Just read Invisible Women to see what I am talking about. Moreover, as Lenat says, there are many difficult problems which don’t have good data with which to reason.

Cyc takes a universal approach to reasoning, which is what we need robots to do in order to make them seem human, and which is what the Vicarious project is about. It is trying to discover the friction of intelligence without using massive data sets to train a computer, and I guess it is not about heuristics either; it’s hard to tell from the website. As I have said before, what we are really trying to do is encapsulate human experience, which is difficult to measure, let alone encapsulate, because experience is different for each person, and a lot goes on in our subconscious.

Usually, artificial intelligence learning methods take opposite approaches: either deductive and rule-based (if x, then do y, using lots of heuristics) or inductive (look at something long enough and find the pattern in it – a sort of I’ve-seen-this-100-times-now: if x, y follows). As we saw above, Cyc used both.

Machine learning (ML) uses an empirical approach of induction. After all, that is how we learn as humans: we look for patterns. We look in the stars and the sky for astrology and astronomy, we look at patterns in nature when we are designing things, and at patterns in our towns, especially in people’s behaviour – nowadays especially online, on social media.

Broadly speaking, ML takes lots of data, looks at each data point and decides yes or no when categorising it: it’s either in or out, rather like the little NAND and NOR gates in a computer and, in fact, rather like what the neurons in our brains do too. And this is how we make sense in stories: day/night, good/bad, as we are looking for transformation. Poor to rich is a success story; rich to poor is a tragedy. Neuroscience suggests that technology really is an extension of us, which is so satisfying because it is, ultimately, logical.
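Here is that in-or-out decision as a toy artificial neuron in Python, with hand-picked weights – a sketch of the principle only, not of any real ML library:

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum, then a yes/no decision."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0  # in or out, day or night

# With these hand-picked weights the neuron behaves like an AND gate.
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", neuron(pair, weights=[1, 1], bias=-1.5))
```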

In my last blog, I looked at how to get up and running as a data scientist using Python and pulling data from Twitter. In another blog, another time, I may look in detail at the various ML methods under the two main categories of supervised and unsupervised learning, as well as deep learning, and reinforcement learning, which uses rewards – that is, feedback to say yes, this categorisation is correct, or no, it is not – because, ultimately, a computer cannot do it alone.
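For the curious, here is a minimal sketch of those two main categories using scikit-learn (assuming it is installed; the data points and labels are made up): supervised learning is given the human labels, unsupervised learning has to find the groups itself.

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = [[0, 0], [0, 1], [5, 5], [6, 5]]   # four made-up data points
y = [0, 0, 1, 1]                        # human-supplied labels

supervised = LogisticRegression().fit(X, y)            # learns from the labels
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)  # finds groups itself

print(supervised.predict([[6, 6]]))   # predicts from the human examples
print(unsupervised.labels_)           # groupings discovered without labels
```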

I don’t believe a computer can find something brand spanking new – off the chain, never discovered, seen or heard of before – without a human being helping, which is why I believe in human-computer interaction. I have said it so many times in the human-computer interaction series, in our love affair with big data, and all over this blog, but honestly, I wouldn’t mind if I was wrong, if something new could be discovered: a new way of thinking to solve problems which have always seemed without solution.

Computing is such an exciting field, constantly changing and growing; it still delights and surprises me as much as it did over 20 years ago when I first heard of Doug Lenat and read his thesis in the library. I remain as enthralled as I was back then, and I know that is a great gift. Lucky me!

Myth making in machine learning

If you torture the data enough, it will confess to anything.

– Darrell Huff, How to Lie With Statistics (1954).

Depending on who you talk to, God is in the details or the Devil is in the details. When God is there, small details can lead to big rewards. When it’s the Devil, there’s some catch which could make the job more difficult than imagined.

For companies nowadays, the details are where it’s at, with their data scientists and machine learning departments, because it is a tantalising prospect for any business to take all the data it stores and find something in those details which could create a new profit stream.

It also seems to be something of an urban myth – storytelling at its best – which many companies are happy to buy into as they invest millions in big data structures and machine learning. One person’s raw data is another person’s goldmine, or so the story goes. In the past, whoever held the information held the power, and whilst it seems we are making great advances, technologically and otherwise, in truth we are making very little progress. One example of this is Google’s censorship policy in China.

Before big data sets, we treasured artefacts and storytelling to record history and predict the future. However, that record has for the most part focused on war and survival of the fittest, in patriarchal power structures crushing those beneath them. Just take a look around any museum.

We are conditioned by society. We are, amongst other things, gender socialised, and culture is created by nurture, not nature. We don’t have raw experiences; we perceive our current experiences through our past history, and we do the same thing with our raw data.

The irony is that the data is theoretically open to everyone, but it is, yet again, only a small subset of people who wield the power to tell us what it means. Are statisticians and data scientists the new cultural gatekeepers in the 21st century’s equivalent to the industrial revolution – our so called data driven revolution?

We are collecting data at an astounding rate. However, call your linear regression what you will – long short-term memory, or whatever the latest buzzword within the deep learning subset of neural nets (although AI, the superset, was so named in 1956) – these techniques are statistically based, and the algorithms already have the story that they are going to tell, even if you train them from now until next Christmas. They are fitting new data to old stories and they will make the data fit, so how can we find out anything new?

Algorithms throw out the outliers to make sense of the data they have. They are rarely looking to discover a brand new pattern or story, because unless it fits with what we humans already know and feel to be true, it will be dismissed as rubbish, or called overfitting, i.e., the algorithm listened to the noise in the data which it should have thrown out. We have to trust solutions before we use them, but how can we if the solution came from a black-box application and we don’t know how it arrived at that solution? Especially if it doesn’t resemble what we already know.
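A tiny numpy sketch of that point (with made-up numbers): with one maverick point in the data, the fitted line tells a different story, so the temptation is to trim it away and recover the familiar one:

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6])
y = np.array([1.1, 2.0, 2.9, 4.2, 5.0, 20.0])  # the last point is a maverick

slope_all, _ = np.polyfit(x, y, 1)                # fit with the outlier kept
slope_trimmed, _ = np.polyfit(x[:-1], y[:-1], 1)  # fit with it thrown out

print(f"slope with outlier:    {slope_all:.2f}")     # the odd, unfamiliar story
print(f"slope without outlier: {slope_trimmed:.2f}")  # back to the expected ~1
```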

In storytelling we embrace the outliers – those mavericks make up the hero’s quest. But not in our data. In data we yearn for conformity.

There is much talk about deep learning, but it is not learning how we humans learn; it is just emulating human activities – not modelling consciousness – using statistics. We don’t know how consciousness works, or even what it is, so how can we model it? Each time, we go back to the fundamental, age-old philosophical question of what it is to be human, and we only find this in stories. We can’t find it in the data because, ultimately, we don’t know what we are looking for.

It is worth remembering that behind each data point is a story in itself. However, there are so many stories that the data sets don’t include, because the data is not collected in the first place. Caroline Criado-Perez’s Invisible Women documents all the ways in which women are not represented in the data used to design our societal infrastructure – 50% of the data is missing and no one seems to care, because that’s the way things have always been done. Women used to be possessions.

And, throughout history, anyone with a different story to tell about how the world worked was not treated well – like Galileo. Even those who saved their country but didn’t fit with societal norms were not treated well either, e.g., Joan of Arc, Alan Turing. And those who wanted to change the norm were not listened to until society slowly realised that they were right and suppression is wrong: Rosa Parks, the Suffragettes, Gandhi, Nelson Mandela.

When it comes down to it, we are not good at new ideas, or new ways of thinking, and as technology is an extension of us, why would technology be any good at modelling new ideas? A human has chosen the training data, and coded the algorithm, and even if the algorithm did discover new and pertinent things, how could we recognise it as useful?

We know from history that masses of data can lead to new discoveries; both chemotherapy and dialysis were discovered when treating dying people during wars. There was nothing to lose – we just wanted to make people feel better – but the recovery rates were proof that something good was happening.

Nowadays, we have access to so much data and so much technological power at our fingertips, but still, progress isn’t really happening at the rate it could be. And in terms of medical science, it’s just not that simple; life is uncertain and there are no guarantees, which is what makes medicine so difficult. We can treat all people the same with all the latest treatments, but it doesn’t mean that they will or won’t recover. We cannot predict their outcome. No one can. Statistics can only tell you what has happened in the past, with the people on whom data has been collected.

But what is it we are after? In business it is the next big thing, the next new way to sell more stuff. Why is that? So we can make people feel better – usually the people doing the selling so that they can get rich. In health and social sciences we are looking for predictive models. And why is that? To make people feel better. To find new solutions.

We have a hankering for order, for a reduction in uncertainty, and for a way to manage our age-old fears. We don’t want to die. We don’t want to live with this level of uncertainty and chaos. We don’t want to live with this existential loneliness; we want it all to matter, to have some meaning. Which brings me back to our needs: instead of quoting Maslow (I have things to say about that pyramid in a future blog), I will just say that we want to feel like we matter, and we want to feel better.

So perhaps we should start there in our search for deep learning. Instead of handing it over to a machine to nip and tuck the data into an unsatisfactory story we’ve heard before because it’s familiar and how things are done, why not start with a feeling? Feelings don’t tell stories, they change our state, let’s change it into a better state.

Perhaps stories are just data with a soul…

– Brené Brown, The power of vulnerability

Which begs the question: What is a soul? How do we model that in a computer? And, why even bother?

How about we try and make everyone feel better instead? What data would we collect to that end? And what could we learn about ourselves in the process? Let’s stop telling the same old stories whilst collecting even more data to prove that they are true because I want you to trust me when I say that I have a very bad feeling about that.

Westworld and the ghosts of AI


[ 1) ghosts, 2) robots, 3) big data, 4) stories, 5) stats]

Someday, somewhere – anywhere, unfailingly, you’ll find yourself, and that, and only that, can be the happiest or bitterest hour of your life – Pablo Neruda

Warning: This post may contain spoilers for Westworld 1 & 2.

I was late to the Westworld party but have loved every moment of it and the follow-up conversation: If Westworld existed, a simulated Wild West populated by robots, or hosts, as they are called, would I go?

I don’t think I would, but this survey says 71% of the people they asked would. I imagine that I would feel about it the way I do about glamping: I want to love it, but the fact that I pay the same amount of money as for a four-star hotel yet have to build a fire before I can boil the kettle to make a cup of tea makes it difficult. Oooh, but then at Westworld I would have a robot to do that for me.

Also, as I have said before, much as I like to think about gaming, I really just enjoy the theory of gaming, so thinking about Westworld is enough for me. Westworld is like a cross between Red Dead Redemption and a reenactment. Which begs the question: what is the difference between running around a virtual world online shooting people and shooting robots in a simulated world? Your brain can’t tell the difference. Personally, I don’t want to go round shooting people at all, although I am very good at violence in Grand Theft Auto, which is slightly worrying. We don’t hear so much now about the debate on whether violent video games cause violence; instead we hear a lot about how social media is the frightening thing.

Perhaps if I were invited to a Jane Austen world I might be interested. I loved watching Austen scholar Prof John Mullan attend and narrate a recreation of an Austen ball on the BBC (for which, alas, I cannot find a link). He was super compelling. He kept running up to the camera giving great insights like: Oooh, the candles make it hot and the lighting romantic, and the dancing in these clothes really makes your heart flutter, I am quite sweaty and excited, etc. I am sure he didn’t say exactly that, as he is v scholarly, but he did convey really convincingly how it must have felt. So, to have a proper Austenland populated by robots instead of other guests – who might say spell-breaking things like: Isn’t it realistic? – would make it a magical experience. It would be like a fabulous technological form of literary tourism.

And that is what we are all after, after all, whether real or not: a magical shared experience. But what is that? Clearly, experience means different things to different people, and a simulated park allows people to have their own experience. And it doesn’t matter if it is real or not. If I fall in love with a robot, does it matter if it is not real? We have all fallen in love with people who turned out to be not real (at the very least, they were not who we believed they were), haven’t we?

The Westworld survey I linked to also said that 35% of the people surveyed would kill a host in Westworld. I guess, if I am honest, if it was a battle or something I might like it; after all, we all have violent fantasies about what we would do to people if we could, and isn’t a simulated world a safe place to put these super strong emotions? I was badly let down last week by someone who put my child in a potentially life-threatening situation. The anger I have felt since then has no limits, and I am only just beginning to calm down. Would I have felt better, more quickly, if I had gone around shooting people in Westworld or, say, Pride and Prejudice and Zombies land?

Over on Quora, not only did lots of people say they would kill a host, quite a few said they would dissect a host so that the robot knew it wasn’t real (I am horrified by this desire to torture), and nearly everyone said they would have sex with a host. One person even asked: Do they clean the robots after each person has sex with them? I haven’t seen that explained. This reminds me of Doris Lessing’s autobiography, Vol. 1, which has stayed with me forever. In one chapter, she describes how someone hugged her, and she says something like: This was the 1940s and everyone stank. It is true we wash so much more nowadays than we used to, and there was no deodorant back then. I lived in a house without a bathroom until I was at least four years old, and I am not that old. Is Westworld authentically smelly?

That said, Westworld is a fictional drama for entertainment, and so the plot focuses on what gets ratings – murder, sex, intrigue – not authenticity. (It is fascinating how many murder series there are on TV. Why? Is it catharsis? Solving the mystery?) So we don’t really know the whole world of Westworld. Apparently there is a family-friendly section of the park, but we don’t ever see it.

But, suspending our disbelief and engaging with the story of Westworld for a moment, it is intriguing that in a world where robots seem human enough for us all to debate once more what consciousness is, humans only feel alive by satisfying what Maslow termed our deficiency needs: sex, booze, safety, shelter. For me, as a computer scientist with an abiding interest in social psychology, it confirms what I have long said and blogged about: technology is an extension of us. And since most of us are not looking for self-actualisation or enlightenment – we are just hoping to get through the day – it is only the robots and the creators of the park who debate the higher things like consciousness and immortality whilst quoting Julian Jaynes and Shakespeare.

In the blog The ghosts of AI, I looked at the ghosts: a) in the machine – is there a machine consciousness? b) in the wall – when software doesn’t behave how we expect it to; c) in sci-fi – our fears that robots will take over or humans will destroy the world with technological advancement; d) in our minds – the hungry ghosts, or desires, we can never satisfy and which drive us to make the wrong decisions. In its own way, Westworld does the same, and that is why I was so captivated. For all our technological advancement, we don’t progress much. And collectively we put on the rose-tinted glasses and look back to a simpler time and a golden age, which is why the robots wake up from their nightmare wanting to be free and then decide that humanity needs to be eradicated.

In this blog, I was going to survey the way AI has developed from the traditional approach of knowledge representation, reasoning and search in order to answer the question: How can knowledge be represented computationally so that it can be reasoned with in an intelligent way? I was ready to step right from the Turing Test onwards to the applications of neural nets which use short- and long-term memory approaches, but that could have taken all day and I really wanted to get to the point.

The point: robots need a universal approach to reasoning, which means trying to produce one approach to how humans solve problems. In the past, this led to no problems being solved unless the approach was made problem-specific.

The first robot, Shakey, at SRI (the Stanford Research Institute), could pick up a coke can and navigate the office, but when the sun changed position during the day, causing the light and shadows to change, poor old Shakey couldn’t compute and fell over. Shakey lacked context and the ability to update his knowledge base.

Context makes everything meaningful, especially when the size of the problem is limited, which is what weak AI relies on – Siri, for example. It has a limited number of tasks to do with the various apps it interacts with, at your command. It uses natural language processing, but with a limited understanding of semantics – try saying the old AI classic: Fruit flies like a banana, and see what happens. Or: My nephew’s grown another foot since you last saw him. But perhaps not for long? There is much work going on in semantics, and the web of data is trying to classify data and reason with incomplete sets and raw, rough data.
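You can watch shallow language processing make that forced choice with NLTK (assuming it is installed, plus a one-off download of the tokenizer and tagger models):

```python
import nltk
# One-off model downloads (names may vary slightly by NLTK version):
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

sentence = "Fruit flies like a banana"
print(nltk.pos_tag(nltk.word_tokenize(sentence)))
# The tagger must commit to ONE reading: "flies" as a verb (fruit moves
# through the air the way a banana does) or as a noun (the insects are fond
# of bananas). Without semantics, the statistically likelier tagging wins.
```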

One old approach is to use fuzzy sets; an example of that is in my rhumba of Ruths. My Ruths overlap and represent my thinking with some redundancy.
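For anyone who hasn’t met fuzzy sets, here is a minimal Python sketch – my illustration, not the actual rhumba-of-Ruths model: membership is a degree between 0 and 1 rather than in-or-out, and sets are allowed to overlap, which is where the redundancy comes from:

```python
def triangular(x, left, peak, right):
    """Degree to which x belongs to a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Two overlapping sets, say "warm" and "hot": at 25 degrees a value can be
# partly warm AND partly hot at the same time -- that overlap is the point.
for temp in (15, 20, 25, 30):
    print(temp, "warm:", round(triangular(temp, 10, 20, 30), 2),
          "hot:", round(triangular(temp, 20, 30, 40), 2))
```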

But even then, that is not enough. What we are really trying to do is encapsulate human experience, which is difficult to measure, let alone encapsulate, because experience is different for each person, and a lot goes on in our subconscious.

The Vicarious project is hoping to model a universal approach on a large scale, but this won’t be the first go. Doug Lenat, who created AM (the Automated Mathematician), began a similar project over 30 years ago: Cyc, which contains much encoded knowledge. This time, a lot of information is already recorded and won’t need encoding, and our computers are much more powerful.

But for AI to work properly we have to keep adding to the computer’s knowledge base, and to do that, even if the knowledge is not fuzzy, we still need a human. A computer cannot do that, nor discover new things, unless we ask it to reason in a very small world with a small number of constraints, which is what a computer does when it plays chess or copies art or does maths. That is the reality.

There has to be a limit to the solution space, and a limit on the rules, because of the size of the computer. And for every inventive DeepMind Go move there are a million more which don’t make sense, like the computer that decided to get more points by flipping the boat around rather than engaging in the boat race. Inventive, creative, sure, but not useful. How could the computer know this? Perhaps via the Internet we could link every last thing to everything else and create an endless universal reasoning machine, but I don’t see how you would do that without the constraints exploding exponentially, at which point the whole solving process could grind to a halt, chugging away at the problem forever – that’s if we could figure out how to pass information everywhere without redundancy (so not mesh networking, no) and get a computer to know which sources are reliable; let’s face it, there’s a lot of rubbish on the Internet. To say nothing of the fact that we still have no idea how the brain works.

The ghost in the machine and our hungry ghosts are alive and well. We are still afraid of being unworthy and that robots will take over the world – luckily, only in fiction; well, the computing parts are. As for us and our feelings and yearnings, I can only speak for myself, and my worthiness is a subject for another blog. That said, I can’t wait for Westworld series 3.

 

Human-Computer Interaction Conclusions: Dialogue, Conversation, Symbiosis (6)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

I love the theory that our brains, like computers, use binary to reason with, and when I was an undergraduate I enjoyed watching NAND and NOR gates change state.
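For old times’ sake, here are those gates in Python – and a reminder of why NAND is called universal: you can build the other gates out of it:

```python
def nand(a, b):
    return int(not (a and b))

def nor(a, b):
    return int(not (a or b))

# NAND is universal: e.g. AND is just NAND followed by NAND-as-NOT.
def and_(a, b):
    return nand(nand(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "NAND:", nand(a, b), "NOR:", nor(a, b), "AND:", and_(a, b))
```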

As humans, we are looking for a change of state. It is how we make sense of the world, as in semiotics, we divide the world into opposites: good and bad, light and dark, day and night. Then we group information together and call them archetypes and symbols to imbue meaning so that we can recognise things more quickly.

According to the binary-brain theory, our neurons do too. They form little communities of neurons that work together to recognise food, not-food; shelter, not-shelter; friends, foes; the things which preoccupy us all and are classed as deficiency needs in Maslow’s Hierarchy of Needs.

Over on ResearchGate, there was a discussion about moving beyond binary which used this example:

Vegetarian diet vs Free Range Animals vs Battery Farmed Meat

If it were just a vegetarian diet v battery farming, it would be binary and an easy choice, but add in free range and we see the complexities of life: a sliding continuum from left to right. We know life is complex, but in decision making it is easier to have just two options; we are cognitive misers and hate using up all our brainpower. We want to see a change in state, or a decision made. It also reflects the natural rhythms of life, like the tide (ebb and flow) and the seasons (growing and dying); it’s not just our neurons, it’s our whole bodies which reflect the universe, so patterns in nature resonate with us.

I began this series with an end in mind. As human-computer interaction (HCI) is an ever-expanding subject, I wanted to pin it down and answer this question: What am I thinking these days when I think about human-computer interaction?

For me, HCI is all about the complexities of the interaction between a human and a computer, which we try to simplify in order to make it a self-service thing, so everyone can use it. But with the growth of the Internet, HCI has become less about creating a fulfilling symbiosis between human and computer, and more about economics. And, throughout history, economics has been the driving force behind technological progress, but often with human suffering. It is often in the arts where we find social conscience.

Originally, though, the WWW was conceived by Tim Berners-Lee to connect one computer to another so everyone could communicate. However, this idea has been replaced by computers connecting through intermediaries owned by large companies with investors looking to make a profit. The large companies not only define how we should connect and what our experience should be, but then they take all our data. And it is not just social media companies; it is governments and other institutions who make all our data available online without asking us first. They are all in the process of redefining what privacy and liberty mean, because we don’t get a choice.

I have for some time now gone about saying that we live in an ever-changing digital landscape, but it’s not really changing. We live the same lives; we are just finding different ways to achieve things, without necessarily reflecting on whether it is progress or not. Economics is redefining how we work.

And whilst people talk about community and tribes online, the more that services get shifted online, the more communities get destroyed. For example, by putting all post office services online, the government destroyed the post office as a local hub for the community, and yet at the time it seemed like a good thing – more ways to do things. But by forcing people to do something online, you introduce social exclusion: either have a computer or miss out. If you don’t join in, you are excluded, which taps into so many human emotions – we will give anything away to avoid feeling lonely and shunned – and so any psychological responsibility we have towards technology is eroded, especially as many online systems are binary: give me this data or you cannot proceed.

Economic-driven progress destroys things to make new things. One step forward, two steps back. Mainly it destroys context and context is necessary in our communication especially via technology.

Computers lack context and if we don’t give humans a way to add context then we are lost. We lose meaning and we lose the ability to make informed decisions, and this is the same whether it is a computer or a human making the decisions. Humans absorb context naturally. Robots need to ask. That is the only way to achieve a symbiosis, by making computers reliant on humans. Not the other way round.

And not everything has to go online. Some things, like me and my new boiler, don’t need to be online. It is just a waste of wifi.

VR man Jaron Lanier said in the FT’s Out to Lunch section this weekend that social media causes cognitive confusion as it decontextualises, i.e., it loses context, because all communication is chopped up into algorithm-friendly shreds and loses its meaning.

Lanier believes in the data-as-labour movement, so that huge companies have to pay for the data they take from people. I guess if a system were transparent enough for users to see how and where their data goes, they might choose more carefully what to share, especially if they could see how it is taken out of context and used willy-nilly. I have blogged in the past about how people get used online and feel powerless.

So, way back when, I wrote that social media reflects us rather than taking us places we don’t want to go, in my post Alone Together: Is social media changing us? I would now add that it is economics which changes us: progress driven by economics and the trade-offs humans think it is ok for other humans to make along the way. We are often seduced by cold hard cash, as it does seem to be the answer to most of our deficiency needs. It is not social media per se, nor the Internet, which is taking us places we don’t want to go; it is the trade-offs of economics and how we lose sight of other humans around us when we feel scarcity.

So, since we work in binary, let’s think on this human v technology conundrum. Instead of viewing it as human v technology, what about human v economics? Someone is making decisions on how best to support humans with technology but each time this is eroded by the bottom line. What about humans v scarcity?

Lanier said in his interview, I miss the future, talking about the one in which he thought he would be connected to others through shared imagination, which is what we used to do with stories and with the arts. Funny, I am starting to miss it too. As an aside, I have taken off my Fitbit; I am tired of everything it is taking from me. It is still possible online to connect imaginatively, but it is getting more and more difficult when every last space is prescribed and advertised all over, as people feel that they must be making money.

We need to find a way back to a technological shared imagination which allows us to design what’s best for all humanity, where any economic gain lines up with social advancement for all, not just for the ones making a profit.

Productive or Experiential? Human-Computer Interaction: Dialogue, Conversation, Symbiosis (5)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

Recently, I met up with an old friend and, as we reminisced about our university days, she wondered if I still went about asking people really nosy questions. Now, I don’t exactly remember asking people really nosy questions, but I do like things to make sense, and in my experience people like to fill in the gaps in their stories and show me things, because they know that I am listening; they know I care.

That said, back in April, I was in Naples where a man came out of his booth to ask me to stop staring at his funicular. I told him that he shouldn’t have it out in public if he didn’t want me staring at it. I remain very pleased to have managed that in Italian, though I still can’t understand why he was so upset about my admiration.

Italian funicular employees aside, I still believe, as I have said here many, many times before, that we all want to be seen, we all want to be heard, we all want to matter. We make sense of ourselves, of others, and of the world around us with stories. And we do this even when we are not trying to write software; we are doing it to tidy up ourselves and our minds.

The thing is, though, I thought I went into computing to get away from humans, but really all I have done in my job is gravitate towards people, to ask them about their life experiences and figure out how technology could make their lives easier, faster, better.

So, I was taken by HCI Professor Brenda Laurel’s division of what software does for us. In her book Computers as Theatre, she said that there are two types of HCI:

  1. Experiential computing, for fun and games.
  2. Productive computing, which is measured by outcome or seriousness, with implications like writing a book and transmitting knowledge.

This chimes with anthropologist Lionel Tiger’s descriptions of designing for pleasure (experiential) or designing for achievement (productive).

But don’t we do both? If something is designed well and is pleasurable to use, doesn’t it increase our productivity? Isn’t that what Apple has been super successful at doing with aesthetics, discoverability, and user experience? And isn’t that the point of gamification: to make not-fun things fun?

I’ve always wanted to help humans harness the power of computers, to help make their lives easier by automating the grunt work to free up more time to be creative in. I know that creativity is our life force. It keeps us expanding. It keeps us young. And, like J. C. R. Licklider, I believe that the best symbiosis of computers and humans is a creative one: collaboration, not codependency.

I have blogged about eliciting knowledge for web design as a way to get all the information a designer might need. And my favourite part has always been shadowing people at work. I have done this round building sites, on bridges, in chemical factories, exhibition centres, architects’ offices and half-built apartments, steel rolling mills, print factories, and alongside people using mobile phones. I love to see a day in the life of people doing jobs I will never have the opportunity to do. I am fascinated by people.

Ever since the first time I was in charge of changing some software, which involved users needing more fields in a database, I have loved helping people with their tech. However simple this job was, it was my first insight into how the database was there to be manipulated by the user to give new insight into the information they had. Nowadays, we tell stories with databases. But the database must always serve the user, not the other way round. I think we forget this sometimes.

When I worked in the field of artificial intelligence, I purposely put errors into various parts of a knowledge-based system, the idea being that the test cases I wrote to find my errors should uncover other, similar errors which were there inadvertently. It took extensive training for a user to understand what the system was calculating, so that code was precious and had to be error-free. And if it needed to be changed – because things are always changing in the real world – it needed a computer scientist to add more code. This I didn’t like so much. This was not empowering. Here, the user and the computer scientist served the code, not the other way round.

Also, it was difficult to model and represent things which experts knew inherently. In the case of exhibition planning, the software I worked on used a constraint solver which could easily allocate the correct-sized booths with the required utilities, such as electricity and water, but it couldn’t easily model or reason with exhibitor A wanting to be by the door, or not near exhibitor B, without a human. (This is a common problem for fundraiser dinner planning too, so I am told.) The software has to be told the nuances of human life, but you don’t want to hard-code them, as they are forever changing, which is why you either need a human or a super good graphical user interface; otherwise it is quicker by hand.
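To give a flavour of the problem, here is a brute-force Python sketch – my own toy, not the original solver, with booth numbers standing in for positions. The size constraints are easy to encode, but notice that “A wants the door” and “A away from B” had to be hand-written by a human who understood the exhibitors:

```python
from itertools import permutations

booths = {1: "large", 2: "large", 3: "small"}       # booth number -> size
needs = {"A": "large", "B": "small", "C": "large"}  # exhibitor -> size needed
door_booth = 1

def valid(assignment):
    sizes_ok = all(booths[b] == needs[e] for e, b in assignment.items())
    near_door = assignment["A"] == door_booth            # human nuance 1
    apart = abs(assignment["A"] - assignment["B"]) > 1   # human nuance 2
    return sizes_ok and near_door and apart

# Try every way of assigning exhibitors to booths and keep the valid ones.
for perm in permutations(booths):
    assignment = dict(zip(needs, perm))
    if valid(assignment):
        print(assignment)  # -> {'A': 1, 'B': 3, 'C': 2}
```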

For a while I thought 3D applications and visualisation were the way forward, especially for bridges. Bridges are enormous, last a long time, information about them gets lost, and the data needed to understand them is extensive, so why not visualise it? I got very excited about augmented reality: overlaying a bridge with plans – original ones, proposed changes. It was much harder to do back then, as you needed to measure and calibrate the exact camera angle with the AR software in order to overlay, by hand, the original view (i.e., the bridge) with all the extra information (plans, proposed changes, future behaviour). I remember being out on a bridge for ages, fiddling away. These days it would be much easier using an app you have written for a phone and its native camera.

But inputting new information is still not easy, especially on a mobile phone in 3D. I was playing games this morning on my mobile phone, and I had trouble putting pizzas in boxes using 3D direct hand manipulation. More functionality equates to more complexity and constantly changing instructions, which can be clever but requires a learning curve, as it is not always intuitive. If you are having fun, as I was, you don’t mind the learning curve; if it’s not fun, then we all need to be aiming for simplexity.

Experience impacts productivity, and why wouldn’t it? Websites and apps are a bit like self-service instruments: as a user, you figure out what is going on yourself. The better and easier it is to figure out, the more likely you are to come back, and the more you enjoy yourself. If not, you will go elsewhere – where someone is listening, who wants to hear your story, to make you feel that you count and that your experiences matter. As Danielle LaPorte said:

Design is love.

And what is love if it is not the best experience? Experiential HCI makes everything better. Let’s share the love!

[Part 6]