Women in Tech: Society, Storytelling, Technology (7)

Ada Lovelace and her laptop

The world’s first programmer, Ada Lovelace. Source: Mashable

We cannot live in a world that is not our own, in a world that is interpreted for us by others. An interpreted world is not a home. – Hildegard of Bingen

[Women Part 7 of 9: 1) Introduction, 2) Bodies, 3) Health, 4) Work, 5) Superwomen, 6) Religion, 7) In Tech, 8) Online, 9) Conclusions]

A couple of years ago, one of the dads at my girls’ school, following an initiative at his workplace, wanted help setting up an after-school coding club to teach kids to program. He asked me if I would come along and help, because the materials included a bit about Ada Lovelace and the guidelines said that, preferably, a woman should give that presentation. I said I would be pleased to be a role model to guide young girls into IT. I said I would bring my girls, and yep, sign me up, show me the materials.

One of my girls was, at the time, one year too young for the club (according to his guidelines), but I said that it would be fine: she’s smart, with a love of mathematics, and she should come. Indeed, she had to come, as I look after her. But this man was insistent that she couldn’t come. He didn’t want me childminding – not that I would have been; I would have been teaching and doing a job. His own wife, who had worked in IT, stayed at home and looked after his children whilst he ran the code club.

So there you have it. If there hadn’t been a mention in his materials about needing a woman to talk about her job in IT, I doubt he would have even asked me; male groupthink is prevalent in IT, as it is in lots of parts of society. He certainly never felt the need to explain his reasons for not updating me on his plans. He ran the club regardless with other dads, never mentioned it to me again, and never showed me any of the materials. The worst bit of all in this troubling tale is that this man is an IT manager. A manager!!!

This anecdote, for me, sums up many experiences I have had in the world of IT: a socially awkward male cannot imagine what it is like to be a woman, nor can he bend a tiny rule for something bigger than himself.

I am so used to this sort of nonsense in society that I just let it slide. His individual lack of initiative and imagination can be found everywhere. There are a million stories of women being treated as unimportant in the computing industry and other domains, as I discussed in the blog on Women’s Work, and that is before we mention the purposeful aggression, sexism, and appalling behaviour directed at women too.

The picture above is a mashup of Ada Byron, Countess of Lovelace, who worked with Charles Babbage on his Analytical Engine, so officially she is the first computer programmer. A lot of computing pioneers were women. According to National Public Radio, which looked at the statistics for women in computing, the number of women studying computer science grew faster than the number of men until 1984, when the home computer arrived and was marketed to boys, inventing the nerd stereotype and overwriting all the true stories of women in IT.

I was a final-year undergraduate the first time I heard about Ada Lovelace, and the only reason I learnt about her was that the programming language Ada is named after her. Sitting in a lecture hall full of men, the story of a woman was so invigorating that I taught myself Ada and wrote my final-year project in it. It only took a few facts of her life to make me feel excited, included, inspired. What other things might I have decided to do had I known about NASA programmer Margaret Hamilton, whose code put men on the Moon (she brought her daughter with her to the lab too), or about Grace Hopper, whose machine-independent language ideas led to COBOL? I learnt COBOL in my second year, but no one ever thought she was worth a mention. I tell you, COBOL and I might have gotten along much better had I known about Grace.

Female computer scientists were not mentioned during my many years of formal education. It is rather like the early 19th-century women scientists Caroline Herschel, Jane Marcet, and Mary Somerville, who in their lifetimes were recognised as being at the forefront of European science, but who were no longer spoken about by the end of the 19th century, because women had been barred from graduating from university. Written out of history, and not given the legitimacy of belonging like men. What message does that send a woman?

Our culture sends messages whether we like it or not, and mass culture likes to give us what we already like because it is based on economics. So the moment the male computing geek stereotype was invented, that narrative excluded women; it overwrote those great female stories. Like sells like, and fiscal reasoning doesn’t care about telling new stories, especially when it comes to women. Progress is a myth where technology is concerned: we think that any change is an advancement, but it is not. Semiotically speaking, we look for a how, not a what, and we choose and reject stories based on how true they feel, which is based on familiarity, i.e. the stories we already know. So, if the constant narrative is that girls don’t do computing and boys do, then this must be true.

It encourages a cultural devaluation of women across society and in particular in technology. Take Stuff Magazine, a magazine for men who are interested in technology. Its objectification of women made me so cross that I had to write a whole blog slagging it off, and I only slag things off when I am angry. A Menkind shop, a gadget shop, has just opened up near me. Why is it called Menkind? When I passed it, it had a Harry Potter cutout in the window. Harry Potter, eh? We all know that J K Rowling chose her pen name so that she would appeal to young boys. Heaven forbid that society encourages little boys to take women seriously and to listen to whatever story they might have to tell. The bottom line is like sells like, and the bottom line is hard cold cash. Progress is a myth and women’s stories are unimportant.

New Scientist news editor @PennySarchet wrote in a tweet how she was advised during her PhD to explain everything really simply, as if talking to a child or your mother. The original tweet she quoted, which has since been deleted, said grandmother. The cultural devaluation of women starts at home with the mother.

And yet there is hope. There is always hope. Recently, I read Good Night Stories for Rebel Girls by Elena Favilli and Francesca Cavallo; in the Guardian review linked there, the reviewer says her daughter was disappointed not to find J K Rowling, and the reviewer herself was disappointed to find Margaret Thatcher. J K Rowling writes books, yes, successfully, whereas Thatcher was the UK’s first female Prime Minister, so the book made the right choice. You can’t edit Thatcher out of history just because you don’t want to hear her story. She is, historically speaking, an incredibly important figure. Rowling, we can’t say yet; time will tell. But we can say this: she wasn’t the first woman writer in UK history. She is just one that the reviewer’s daughter has heard of, because she hasn’t heard many women’s stories. Why? Because many women have been written out of history. Am I repeating myself?

I read the book with my daughter, who was really interested in the coders and physicists because of me. She kept showing them to me and having a chat about them, because she is looking for stories which make sense of her world (even though she was excluded from code club, miaow), a world in which, luckily for her, her mother loves computing and takes up space in that field. But what about those girls whose mothers don’t, and only the dads do computing in after-school code club?

Lillian Robinson says in Wonder Women that feminism in stories is about the politics of stories. Each time a story is told about a woman doing something in a domain that society has traditionally defined as a man’s world, that narrative becomes part of the information we women, and our girls coming after us, use to process our experiences, and that man’s world becomes less male and more populated by women. Hopefully an equal world of equal opportunity. And the opposite is true: if all the sources of narrative tell the same story about women, then nothing will ever change. Like sells like, remember.

Let us know as truth that the narratives behind the field of computer science need to be rewritten. Let’s stop dealing in stereotypes, lazy journalism, and the misogyny aimed at female prime ministers (which is a whole other blog in itself). Let us look at the big picture, the bright one which stops telling us only men do IT. In Living a Feminist Life, Sara Ahmed says:

Feminism helps you to make sense that something is wrong; to recognise a wrong is to realise that you are not in the wrong.

Don’t make our girls wrong about computing.

[8) Online]

Is this progress? Humans, computers and stories

As a computer scientist, I have to say my job has changed very little in the last twenty-odd years. The tech has, admittedly, but I am still doing what I did back then: sitting in front of a computer, thinking about how computers can make people’s lives easier, what makes people tick, and how we can put the two together to make something cool. Sometimes I even program something up to demonstrate what I am talking about.

It seems to me, though, that everyone else’s jobs (the non-computer scientists’) have changed, and not necessarily for the better. People do their jobs and then they do a load of extras like social media, blogging, content creation, logging stuff in systems – the list is endless – on top of their workload.

It makes me wonder: Is this progress?

Humans and stories

As a teenager, on hearing about great literature and the classics, I figured that it must be something highfalutin. In school we did a lot of those gritty, kitchen-sink dramas (A Kind of Loving, Billy Liar, Kes, etc.). So, when I found the section in the library – Classics, Literature, or whatever – it was a pleasant surprise to see that they were just stories about people, and sometimes gods, often behaving badly, and I was hooked. Little did I know that reading would be the best training I could receive to become a computer scientist.

Human and computer united together

In my first job as systems analyst and IT support, I found that I enjoyed listening to people’s stories in and amongst their descriptions about their interactions with computers. My job was to talk to people. What could be better? I then had to capture all the information about how computers were complex and getting in the way and try to make them more useful. Sometimes I had to whip out my screwdriver and fix it there and then. Yay!! Badass tech support.

The thing that struck me the most was that people anthropomorphised their computers, talking about them needing time to warm up, being temperamental, and being affected by circumstances, as if they were in some way human and not just a bunch of electronic circuits. And the computer was always seen as the way of progress, even if they hated it and didn’t think so.

I think this is partly because it was one person with one computer, working alone, so the computer was like a companion, the office worker you love or hate, who helps or hinders. There was little in the way of email or anything else unless you were on the mainframe, and then it was used sparingly, especially in huge companies. Memos were still circulated around. The computer was there to do a task – crunch numbers, produce reports, run the Caustic Soda Plant (I did not even touch the door handles when I went in there) – the results of which got transferred from one computer to another by me, and sometimes by that advanced user who knew how to handle a floppy disk.

Most often, information was transferred orally by presentation in a meeting, or on paper with that most important of tools, the executive summary, whilst the rest of it was a very dry, long-winded explanation, hardly a story at all.

Human and computer and human and computer united

Then the Internet arrived and humans (well, mainly academics) began sharing information more easily, without needing to print things out and post them. This was definitely progress. I began researching how people with different backgrounds, like architects and engineers, could work together with collaborative tools, even though they use different terminology and different software. How could we make their lives easier when working together?

I spent a lot of time talking to architects and standing on bridges with engineers in order to see what they did. Other times I talked to draftsmen to see if a bit of artificial intelligence could model what they did. It could, up to a point, but modelling all that information in a computer is limiting in comparison to what a human can know instinctively, which is when I realised that people need help automating the boring bits, not the instinctive bits.

I was fascinated by physiological computing, that is, interacting using our bodies rather than typing – so using our voices or our fingerprints. However, when it was me, my Northern accent, and my French colleagues, all speaking our fabulous variations of the English language into some interesting software written, I believe, by some Bulgarians, on a slow-running computer, well, the results were interesting, to say the least.

Everyone online

The UK government’s push to get everything electronic seemed like a great idea, so that everyone could access all the information they needed. It impacted Post Offices, but it seemed to free up the time spent waiting in a queue and to provide more opportunities to do all those things like pay a TV licence, get a road tax disc, a passport, etc. This felt like progress.

I spent a lot of time working on websites for the government, with lovely scripts to guide people through forms like self-assessment so that life was easier. We all know how daunting a government form can be, so what could be better than being told by a website which bit to fill in? Mmm, progress.

Lots of businesses came online and everyone thought that Amazon was great way back when. I know I did: living in Switzerland, being able to order any book I wanted was such a relief, as opposed to waiting or reading it in French. (Harry Potter in French, although very good, is just not the same.) Progress.

Then businesses joined in and wanted to be seen, causing the creation of banners, ads, popups, and bought links to promote themselves, and lots of research into website design so that sites were all polished and sexy, even though the point of the Internet is that it is a work in progress, constantly changing, and will never be finished.

I started spending my time in labs, rather than in situ, watching people use websites and asking them how they felt. I was still capturing stories, but in a different way: a more clinical way, less of a natural habitat, which of course alters what people say, and which I found a bit boring. It didn’t feel like progress. It felt businessy – means-to-an-end like – and not much fun.

Human-computer-human

Then phones became more powerful and social media was born, and people started using computers just to chat, which felt lovely and like progress. I had always been in that privileged position of being able to chat to people the world over, online, whatever the time, thanks to the access I had to technology; now it was just easier and available to everyone – definitely progress. Until, of course, companies wanted to be in on that too. So now we have a constant stream of ads on Facebook and Twitter, and people behaving like they are down the market jostling for attention, shouting out their wares 24/7, with people rushing up asking: Need me to shout for you?

And, then there are people just shouting about whatever is bothering them. It’s fantastic and fascinating, but is it progress?

The fear of being left behind

The downside is that people all feel obliged to jump on the bandwagon and be on multiple channels without much to say, which is why they have to do extras like creating content as part of their ever-expanding jobs. The downside is that your stream can contain the same information repeated a zillion times. The upside is that people can say whatever they like, which is why your stream can contain the same information repeated a zillion times.

Me, I am still here wondering about the experience everyone is having when all this is happening on top of doing a job. It feels exhausting, and it feels like we are being dictated to by technology instead of the other way around. I am not sure what the answer is. I am not sure if I am even asking the right question. I do know how we got here. But is this where we need to be? Do we need to fix it? Does it need fixing? And where should we go next? I think we may need a course correction, because when I ask a lot of people, I find that they agree. If you don’t, answer me this: how do you feel when I ask, is this progress?

Web design (5): Structure

A collaborative medium, a place where we all meet and read and write.
Tim Berners-Lee

[Part 5 of 7: 0) intro, 1) story, 2) pictures, 3) users, 4) content, 5) structure, 6) social media, 7) evaluation]

Many designers have adopted a grid structure to design web pages because a) it lends itself well to responsive design and b) it allows a design which is easy for users to understand. Designers have about five seconds before a user will click away to find a different service/page/content provider if the page is laid out in a way which is difficult to understand.
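As a minimal sketch of how a grid supports responsive design (the class names .page, .sidebar and .content are illustrative placeholders, not from any particular site), the layout can be declared in CSS 3 and collapsed to a single column on small screens:

  /* Illustrative only: a twelve-column grid for the page layout. */
  .page {
    display: grid;
    grid-template-columns: repeat(12, 1fr);
    gap: 1rem;
  }
  .sidebar { grid-column: span 3; }  /* secondary navigation */
  .content { grid-column: span 9; }  /* main content */

  /* On narrow screens, stack everything into one column. */
  @media (max-width: 600px) {
    .sidebar, .content { grid-column: span 12; }
  }

The same markup reflows to fit the viewport, which is one reason grids pair so naturally with responsive design.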

In a great talk for An Event Apart, Jen Simmons, Designer and Developer Advocate at Mozilla, looks offline at magazines for inspiration and recalls how much experimentation and creativity there was online until everyone adopted grids and fell into a rut of grids.

But, it is easy to understand why everyone adopted grids, because users create their own understanding of a webpage from its structure. Text is complete within itself and meaning comes from its structure and language rather than the ideas it contains. This is a fundamental principle of semiotics, the study of meaning.

Managing expectations

When a webpage is judged to be useless, it is often because it does not behave in the way the user is expecting, particularly if it is not very attractive.

Designers either need to manage a user’s expectations by giving them what they are expecting in terms of the service they are looking for, or they need to make the page super attractive. Attractive things don’t necessarily work better, but we humans perceive them as doing so because they light up the brain’s reward centre and make us feel better when we are around them. We are attracted to attractive things, an attraction informed by certain Gestalt principles such as unity and symmetry, and by the golden ratio.

Gestalt: similarity, proximity

Good design is one thing, but we also have specific expectations about any given webpage. We scan for headings and white space and interpret a page in those terms. This is because, according to Gestalt theory, we interpret items by their proximity – items which are close together we group together – and by their similarity – items which look alike we interpret as belonging together.

And also because we have been to other sites, and we transfer our experiences from one site to another and anticipate where certain functions should be.

Where am I? Where have I been? Where am I going?

Main menus are usually at the top of the page, grouped together, and are used for navigation through the site. Secondary navigation may take place in drop-down menus, or in left- or right-hand columns. Specific housekeeping information can be found in the footer, or in the common links bar if there is one.

If users are completely lost they will use the breadcrumbs, which Google now shows instead of the site’s URL as part of the results its search engine serves up. Therefore, it is in a designer’s interest to put breadcrumbs at the top of the page.
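A minimal sketch of one common breadcrumb pattern (the page names and URLs are placeholders): the trail is an ordered list inside a labelled nav element, so both sighted users and screen readers can answer “where am I?”.

  <!-- Illustrative breadcrumb trail; names and URLs are placeholders. -->
  <nav aria-label="Breadcrumb">
    <ol>
      <li><a href="/">Home</a></li>
      <li><a href="/lifestyle/">Lifestyle</a></li>
      <li aria-current="page">How to floss your teeth</li>
    </ol>
  </nav>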

Users will stay longer and feel better if they can answer the three questions of navigation as articulated by usability consultant Steve Krug:

  1. Where am I?
  2. Where have I been?
  3. Where am I going?

Often this is answered by styling links as visited or not visited, and by enforcing the consistency of the design through a sensible approach to colour. There is a theory of colour in terms of adding and subtracting colour to create colour, either digitally or on a palette, but there is, alas, no theory about how to use colour to influence branding and marketing, as personal preferences are impossible to standardise.
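A minimal sketch of the visited/not-visited distinction in CSS 3; the colours here are arbitrary choices for illustration, not a recommendation.

  /* Let users see where they have already been. */
  a:link    { color: #0645ad; }  /* not yet visited */
  a:visited { color: #663399; }  /* already visited */
  a:hover,
  a:focus   { text-decoration: underline; }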

HTML 5 & CSS 3

As discussed earlier in part 1 of this series, we separate out our content from our presentation, which is styled using CSS 3. Then, once we know what we want to say, we use HTML 5 to structure our text and give it meaning to the reader. This may be a screen reader or it may be a human being.

HTML 5 breaks a page into its header and body, and the body is then broken down further into specific elements: headings from <h1> to <h6>, paragraphs, lists, sections, and so on, so that we can structure a nice layout. There are thousands of tutorials online which teach HTML 5.
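As a minimal sketch of that structure (the headings, section names, and stylesheet filename are placeholders), a bare-bones HTML 5 page might look like this:

  <!DOCTYPE html>
  <html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Placeholder page title</title>
    <link rel="stylesheet" href="styles.css"> <!-- presentation kept separately in CSS -->
  </head>
  <body>
    <header>
      <h1>Site name</h1>
      <nav><!-- main menu / global navigation --></nav>
    </header>
    <main>
      <section>
        <h2>A section heading</h2>
        <p>Content structured for meaning rather than appearance.</p>
      </section>
    </main>
    <footer><!-- housekeeping links --></footer>
  </body>
  </html>

Because the meaning is carried by the structure, a screen reader and a human reader both get the same hierarchy.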

The nice thing about sections is that we can use them to source linked data from elsewhere and fill our pages that way, but still keep a consistent appearance.

Theoretically one page is great, and a couple of pages are fine, but once we get into hundreds of pages, we need to think about how we present everything consistently and evenly across a site and still provide users with the information for which they came.

Information architecture

Information architecture (IA) is the way to organise the structure of a whole website. It asks: How do you categorise and structure information? How do you label it so that users can navigate or search through it in order to find what they need?

The first step is to perform some knowledge elicitation of the business or context and of what everyone involved (owners, customers), known as stakeholders, expects from the proposed system. This may include reading all the official documentation a business has (yawn!).

If there is a lot of existing information the best way to organise it is to perform a card sort. A card sort is when a consultant calls in some users, gives them a stack of index cards with content subjects written on them, along with a list of headings from the client’s site—“Business and News,” “Lifestyle,” “Society and Culture”— then users decide where to put “How to floss your teeth”.

This can take a few days each time, and a few goes, until a pattern is found; we humans love to impose order on chaos, we love to find a pattern to shape and understand our world.

Once we have a structure from the card sort, it becomes easier to start designing the structure across the site and we begin with the site map.

The site map reflects the hierarchy of a system (even though Tim Berners-Lee was quite emphatic that the web should not have a hierarchical structure).

Then, once a site map is in place, each page layout can be addressed, along with the way users will navigate. Thus we get main menus (global navigation), local navigation, content types to put in sections and paragraphs, etc., along with the functional elements needed to interact with users.

Other tools created at this time to facilitate the structure are wireframes, or annotated page layouts, because if it is a big site lots of people may be working on it, and clear tools for communication are needed so that the site structure remains consistent.

Mock-up screenshots and paper prototypes may be created, and sometimes, in the case of talented visual designers, storyboards. Storyboards are sketches showing how a user could interact with a system; sometimes they take a task-based approach, so that users could complete a common task.

Depending on the size of a project, information architects will work with content strategists, who will have asked all the questions in the last section (part 4) on content, and/or usability consultants, who will have spoken to lots of users (part 3) to get an understanding of their experiences, above and beyond their understanding of the labelling of information, in order to answer questions such as:

  • Does the website have great usability which is measured by being: effective and efficient; easy to learn and remember; useful and safe?
  • How do we guide users to our key themes, messages, and recommended topics?
  • Is the content working hard enough for our users?

Sometimes, it may just be one person who does all of these roles and is responsible for answering all of these questions.

It takes time to create great structure; often it takes several iterations of these steps, until it is time to go on to the next stage (part 6) and start sharing this beautiful content on social media.

[Part 6]

Game theory & social media marketing (4): Conclusions

The Royal Game of Ur, Early Dynastic III, 2600 BC, British Museum

[Part 4 of 4: Game theory & social media: Part 1, Part 2, Part 3]

No, I’m no super lady, I don’t have no game whatsoever,
I put my high heels on and see how that goes, yeah
– Pauline, Sucker for love

Ask a mathematician why they like maths, and they will tell you that mathematics gives a definite yes or no. There is beauty in clarity. And, everyone likes to feel that they understand and have control over what is happening in their world. This feeling of certainty is reflected in the bottom two rows of Maslow’s hierarchy of needs: physiological and safety needs.

Tapping into fear and belonging

That said, we also love variety and surprise, which is the most popular kind of information shared on social media. We crave new stimuli, which is why we love games. We love the idea of chance or fortune transforming our lives for the better, and surely, if we learn the rules, then we will succeed. And that is why marketing has such a pull on us. Marketers tell us that we will have improved lives if we do/buy/have what they are selling, and marketers themselves will have improved lives too if we do/buy/have what they are selling.

There are so many ways to market something; this link lists 52 types of marketing strategy. The most effective, of course, aims at the bottom of Maslow’s hierarchy of needs – safety – which is why fear quite often drives the news and, coupled with specific instructions, gives a compliant society.

Tapping into belonging is another way to market, which is why the connection economy and building friendship with your customers is gaining so much traction as a marketing strategy.

Modelling emotion and what-ifs

Modelling human emotion is impossible to do with game theory, especially on social media, a fluid, still unknown type of communication. We will never quite know who our audience is. We may target our demographic, but if they retweet or share something outside of that, then we never know exactly who is looking at our content, or how they will react to it. All game theory can do is offer interesting and potentially useful partial explanations to model a selection of what-if scenarios when employing different strategies.

In the last post (part 3), we looked at various game theory strategies, from the aggressive to the altruistic, and saw that people generally behave like the people around them (hawk-dove) and that Kermit was in a bit of a hurry to get together with his girl, which caused him to behave passive-aggressively, and probably not get what he wanted.

 Don’t be like Kermit

Game theory is a tool for social media marketing and the best application of it is recording trial and error attempts (with statistical significance) whilst using our emotional intelligence.

Be aware of your emotions and triggers (your personal competence) so you don’t get involved in a big wrangle, either privately, which could damage a relationship, or publicly, which might be retweeted everywhere and could wreck your brand or reputation. Even in the mathematics of game theory we need to understand other players’ moods and motives (social competence) and not assume anything. We need to ask for more clarification, so that when we do make a move, we do so with clarity and certainty that we are doing the right thing. And, as any mathematician would tell you if you asked them, there is beauty in clarity, for it gives us certainty and a sense of control, things which are harder to come by in our ever-changing world.

Game theory & social media (3): What are you playing at?

Source: buzzfeed.com

[Part 3 of 4: Game theory & social media: Part 1, Part 2, Part 4]

Whatever else anything is, it ought to begin with being personal – Kathleen Kelly, You’ve got mail (1998)

Kermit drinking his tea and throwing shade makes me laugh. However, I think we all understand his frustration. It seems that in business and personal relationships, people play games. We may not know why, and we may not know the rules. But as we saw in part 2, before we react, we might want to find out more: if a game is being played, which one, and if we want to play or not.

Games, payoffs, and winning

A game is normally defined as having two or more players, each with a choice of possible strategies, which together determine the outcome of the game. Each outcome has a payoff, calculated numerically to represent its value. Usually, a player will want to get the biggest payoff possible in order to be certain of winning.

Dominance, saddles, and mixed strategies

Playing the strategy with the biggest payoff is known as the Dominance Strategy, and a rational player would never do otherwise, but it’s not always easy to identify which strategy is best.

So players sometimes take a cautious approach which will guarantee a favourable result (also known as the Saddle Point Principle). Other times there is no saddle point, so players have to choose a strategy at random and hope for the best. They can calculate the probability of mixing up strategies and their chances of winning. If their probability skills are not great, they can play experimentally and record their results 30 times (for statistical significance) to see which strategies work.
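To illustrate (a made-up example, not from the original post), take a two-player zero-sum game where each number is the payoff to the row player:

\[
\begin{array}{c|cc}
 & B_1 & B_2 \\ \hline
A_1 & 3 & 1 \\
A_2 & 2 & 0
\end{array}
\]

The row player takes the worst case of each row (1 for \(A_1\), 0 for \(A_2\)) and plays the best of those, \(A_1\); the column player takes the worst case of each column (3 under \(B_1\), 1 under \(B_2\)) and plays \(B_2\). Both cautious choices meet in the same cell, with value 1, so that cell is a saddle point and neither player gains by switching. Here \(A_1\) also dominates \(A_2\), so the dominance and saddle-point arguments agree; when no saddle point exists, the players mix their strategies with calculated probabilities instead.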

How does this work on social media? Well, no one knows how social media works so a trial and error approach whilst recording results can be useful. Luckily, Twitter and Facebook both provide services and stats to help.

Free will, utility, and Pareto’s principle

A major question is whether players have free will or not and whether their choices are predetermined based on who they are playing with and the circumstances in which the game takes place. This can depend on the amount of information players have available to them,  and as new information becomes available, they play a specific strategy, thus seeming as if they didn’t have free will at all.

Players assign numbers to describe the value of the outcomes (known in economics as utility theory) which they can use to guide themselves to the most valued outcome.

This is useful if we have a game where the winner doesn’t necessarily take all. If the players’ interests are not opposed, then by cooperating the players can potentially end up with a win-win situation, or at least a situation where everyone gains some benefit and the solution is not the worst outcome for everyone involved. This is known as the Pareto Principle.

On social media? Retweeting and sharing other businesses’ news is a nice way of ensuring everyone gains some benefit, because with a potential market of 307 million there is enough of a market to go around for everyone to win-win and, of course, reciprocate.

The Nash equilibrium

Taking this further is the Nash equilibrium, named after John Nash, who proved that every finite game has at least one equilibrium of strategies (pure or mixed). By looking at the equilibrium strategies of the other players, everyone plays to equalise: at an equilibrium, no player has anything to gain by changing only his or her own strategy.

Are you chicken?

Ducks have been known to share out the bread thrown to them so they all get some, rather than one duck eating everything. This is known as the Hawk-Dove approach in game theory. When there is competition for a shared resource, players can choose either conciliation or conflict.

Research has shown that when a player is naturally a hawk (winner takes all) and plays amongst doves, then the player will adapt and cooperate. Conversely a dove amongst hawks will adapt too and turn into a fighter.

If two hawks play each other, the game is likely to turn into chicken, in which both players risk everything (known as mutually assured destruction in warfare) rather than yield first.
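To make this concrete (illustrative numbers only, not from the original post), suppose the contested resource is worth 4 and a fight costs 6, with payoffs listed as (row, column):

\[
\begin{array}{c|cc}
 & \text{Hawk} & \text{Dove} \\ \hline
\text{Hawk} & (-1,-1) & (4,\;0) \\
\text{Dove} & (0,\;4) & (2,\;2)
\end{array}
\]

A hawk among doves does best (4 against 0), two doves share the resource (2 each), and two hawks destroy most of its value (-1 each), which is the chicken scenario above. The two pure Nash equilibria are Hawk-Dove and Dove-Hawk: in each, neither player can do better by changing only their own move, which is one way of seeing why players tend to adapt to the strategy of those around them.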

We adapt very easily to what is going on around us, and on social media this is totally the same. In a 2014 study, the Pew Research Center found that people are less likely to share their honest opinions on social media, and will often only post opinions on Facebook with which they know their followers will agree – we like to conform.

The volunteer’s dilemma

In contrast, the volunteer’s dilemma is an altruistic approach where one person does the right thing for the benefit of everyone. For example, one meerkat will look out for predators, at the risk of getting eaten, whilst the rest of the meerkats look for food. And, we admire this too. We love a hero, a maverick, someone who is ready to stand up and be different.

The prisoner’s dilemma

But we hate to feel duped, which is why the prisoner’s dilemma is one of the most popular games in game theory. Created by Albert W. Tucker in 1950, it is as follows:

Two prisoners are arrested for a joint crime and put in separate interrogation rooms. The district attorney sets out these rules:

  1. If one of them confesses and the other doesn’t, the confessor will be rewarded and the other will receive a heavy sentence.
  2. If both confess, each will get a moderately heavy sentence.
  3. If neither confesses, both will get off lightly.

It is in each prisoner’s interest to confess (the dominant strategy, rule 1), and if both do, neither suffers the worst outcome, which satisfies the Pareto principle (rule 2). However, if they both confess, they are worse off than if neither had confessed (rule 3).
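Putting illustrative numbers on it (years in prison written as negative payoffs; the exact values are made up for this sketch):

\[
\begin{array}{c|cc}
 & \text{Confess} & \text{Stay silent} \\ \hline
\text{Confess} & (-5,-5) & (0,-10) \\
\text{Stay silent} & (-10,\;0) & (-1,-1)
\end{array}
\]

Whatever the other prisoner does, confessing pays better (0 beats -1, and -5 beats -10), so confess-confess is the dominant-strategy outcome and the game’s only Nash equilibrium, yet both prisoners would prefer the stay-silent cell at (-1, -1).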

The prisoner’s dilemma embodies the struggle between individual rationality and group rationality, which Nigel Howard described as a metagame: a prisoner cooperates if and only if they believe the other prisoner will cooperate, if and only if they believe the first prisoner will cooperate. A mind-boggling tit-for-tat. But this is common on Twitter, with those “Follow me, I will follow you back” accounts and the constant following and unfollowing.

And, in any transaction we hate feeling like we have been had, that we were a chump, that we trusted when we shouldn’t have, which is why some people are so angry and like to retaliate. Anger feels better than feeling vulnerable does. But, great daring starts with vulnerability, the fear of failure, and even the failure to start, the hero’s quest shows us that.

Promises, threats, and coalitions

As we add more players, all rationality may go out of the window as players decide whether to form coalitions or to perform strategic style voting. If we introduce the idea of the players communicating then we add the issues of trust in promises, or fear of threats and it all starts to sound rather Hunger Games.

On social media, aggression and threats are common because of prejudice or groupthink, especially on Twitter, where there is no moderation. And, online and off, we have all been promised things and relationships which have ultimately left us disappointed and misinformed, like the fake news we’ve been hearing about a lot lately. Fake news is not new; in other contexts it is known as propaganda. And if it is not completely fake, just exaggerated, well, that’s not new either: New Labour loved spin, which led to a sexed-up dossier, war, and death.

Kermit’s next move

Philip D. Straffin says in his book Game Theory and Strategy that game theory only works up to a point, after which a player must ask for some clarification about what is going on, because mathematics applied to human behaviour will only explain so much.

And so we turn back to Kermit. What is he to do? He has passive-aggressively asked for clarification and had a cup of tea. What’s his next move? Well, he could wait and see if he gets a reply (tit for tat). Who will crack first (chicken)? But, with the texts he has sent her, it is likely that her response is somewhat predetermined. Or perhaps not; perhaps she will respond with a Nash equilibrium, or at the very least the Pareto principle of everyone not getting the worst outcome.

Alternatively, he could take a breath and remember that he is talking to someone he likes and with whom he wants to spend some time, someone human with the same vulnerabilities as him. He could adopt the volunteer’s dilemma approach and send her an honest text to explain that his feelings are hurt: he thought they had something special, and that she liked communicating with him as much as with other people. By seeking clarification in this way, Kermit may just end up having a very nice evening after all – or not. Whoever said “All’s fair in love and war” didn’t have instant access to social media and all the complications it can cause.

[Part 4]