Human-Computer Interaction Conclusions: Dialogue, Conversation, Symbiosis (6)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

I love the theory that our brains, like computers, use binary to reason, and when I was an undergraduate I enjoyed watching NAND and NOR gates change state.
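For anyone who never sat in that lab: the state changes I was watching are easy to mimic in a few lines of Python (a toy truth table, obviously, not the hardware):

```python
# NAND and NOR are the two "universal" gates: every other logic
# gate can be built from either one of them alone.
def nand(a, b):
    return int(not (a and b))

def nor(a, b):
    return int(not (a or b))

# Watch the outputs change state as the inputs flip.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  NAND={nand(a, b)}  NOR={nor(a, b)}")
```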

As humans, we are looking for a change of state. It is how we make sense of the world. As in semiotics, we divide the world into opposites: good and bad, light and dark, day and night. Then we group information together into archetypes and symbols to imbue meaning, so that we can recognise things more quickly.

According to the binary-brain theory, our neurons do the same. They form little communities that work together to recognise food and not-food, shelter and not-shelter, friends and foes: the things which preoccupy us all and are classed as deficiency needs in Maslow’s Hierarchy of Needs.

Over on ResearchGate, there was a discussion about moving beyond binary thinking which used this example:

Vegetarian diet vs Free Range Animals vs Battery Farmed Meat

If it were just a vegetarian diet v battery farming, it would be binary and an easy choice, but add in free range and we see the complexities of life, the sliding continuum from left to right. We know life is complex, but in decision making it is easier to have just two options; we are cognitive misers and hate using up all our brainpower. We want to see a change of state or a decision made. Binary also reflects the natural rhythms of life, like the tide: ebb and flow; the seasons: growing and dying. It’s not just our neurons, it’s our whole bodies which reflect the universe, so patterns in nature resonate with us.

I began this series with an end in mind. As human-computer interaction (HCI) is an ever-expanding subject, I wanted to pin it down and answer this question: What am I thinking these days when I think about human-computer interaction?

For me, HCI is all about the complexities of the interaction between a human and a computer, which we try to simplify in order to make it self-service, so everyone can use it. But with the growth of the Internet, HCI has become less about creating a fulfilling symbiosis between human and computer, and more about economics. Throughout history, economics has been the driving force behind technological progress, but often at the cost of human suffering. It is often in the arts where we find a social conscience.

Originally, though, the WWW was conceived by Tim Berners-Lee to connect one computer to another so everyone could communicate. However, this idea has been replaced by computers connecting through intermediaries owned by large companies, with investors looking to make a profit. The large companies not only define how we should connect and what our experience should be, but then they take all our data. And it is not just social media companies; it is government and other institutions which make all our data available online without asking us first. They are all in the process of redefining what privacy and liberty mean, because we don’t get a choice.

I have for some time now gone about saying that we live in an ever-changing digital landscape, but it’s not really changing. We live the same lives; we are just finding different ways to achieve things, without necessarily reflecting on whether that is progress or not. Economics is redefining how we work.

And whilst people talk about community and tribes online, the more that services get shifted online, the more communities get destroyed. For example, by putting all post office services online, the government destroyed the post office as a local community hub, and yet at the time it seemed like a good thing – more ways to do things. But by forcing people to do something online you introduce social exclusion: either have a computer or miss out. If you don’t join in, you are excluded, which taps into so many human emotions that we will give anything away to avoid feeling lonely and shunned. So any psychological responsibility we have towards technology is eroded, especially as many online systems are binary: give me this data or you cannot proceed.

Economic-driven progress destroys things to make new things. One step forward, two steps back. Mainly it destroys context and context is necessary in our communication especially via technology.

Computers lack context and if we don’t give humans a way to add context then we are lost. We lose meaning and we lose the ability to make informed decisions, and this is the same whether it is a computer or a human making the decisions. Humans absorb context naturally. Robots need to ask. That is the only way to achieve a symbiosis, by making computers reliant on humans. Not the other way round.

And not everything has to go online. Some things, like me and my new boiler, don’t need to be online. It is just a waste of wifi.

VR man Jaron Lanier said in the FT Out to Lunch section this weekend that social media causes cognitive confusion because it decontextualises, i.e., all communication is chopped up into algorithm-friendly shreds and loses its meaning.

Lanier believes in the data-as-labour movement, in which huge companies have to pay for the data they take from people. I guess if a system were transparent enough for users to see how and where their data goes, they might choose more carefully what to share, especially if they could see how it is taken out of context and used willy-nilly. I have blogged in the past about how people get used online and feel powerless.

So, way back when, I wrote that social media reflects us rather than taking us places we don’t want to go, in my post Alone Together: Is social media changing us? I would now add that it is economics which changes us: progress driven by economics, and the trade-offs humans think it is ok for other humans to make along the way. We are often seduced by cold hard cash, as it does seem to be the answer to most of our deficiency needs. It is not social media per se, nor the Internet, which is taking us places we don’t want to go; it is the trade-offs of economics, and how we lose sight of the humans around us when we feel scarcity.

So, since we work in binary, let’s think on this human v technology conundrum. Instead of viewing it as human v technology, what about human v economics? Someone is making decisions on how best to support humans with technology, but each time this is eroded by the bottom line. What about human v scarcity?

Lanier said in his interview that he misses the future: the one in which he thought he would be connected with others through shared imagination, which is what we used to do with stories and with the arts. Funny, I am starting to miss it too. As an aside, I have taken off my Fitbit; I am tired of everything it is taking from me. It is still possible online to connect imaginatively, but it is getting more and more difficult when every last space is prescribed and plastered with advertising as people feel that they must be making money.

We need to find a way back to a technological shared imagination which allows us to design what’s best for all humanity, so that any economic gain lines up with social advancement for all, not just for the ones making a profit.

Let’s Talk! Human-Computer Interaction: Dialogue, Conversation, Symbiosis (2)

[ 1) Introduction, 2) Dialogue or Conversation, 3) User or Used, 4) Codependency or Collaboration, 5) Productive or Experiential, 6) Conclusions]

I chuckled when I read Rebecca Solnit describing her 1995 life: She read the newspaper in the morning, listened to the news in the evening and received other news via letter once a day. Her computer was unconnected to anything. Working on it was a solitary experience.

Fast forward 20+ years and her computer, like most other people’s, feels like a cocktail party, full of chatter and fragmented streams of news and data. We are living permanently in Alvin Toffler’s information overload. We are creating more data per second than we did in a whole year in the 1990s. And yet, data or information exchange is why we communicate in the first place, so I wanted to ponder here, how do we talk using computers?

Commandments

Originally, you had to ask computer scientists like me. And we had to learn the commands of the operating system we were using: say, VMS on a DEC VAX mainframe, UNIX on a networked workstation, or MS-DOS on a personal computer.

Then, we had to learn whatever language we needed. Some of the procedural languages I have known and loved are: Assembler, Pascal, COBOL, Ada, C/C++, Java, plus the graphics libraries X/Motif and OpenGL (I know I will keep adding to these as I remember them). Then the declarative Prolog, the (functional, brackety) LISP, and scripting languages like PHP, Perl, Python, JavaScript. The main problem with scripts is that they are not strongly typed, so you can quite easily pass a string where an integer is expected and cause all sorts of problems, and there is no compiler to tell you otherwise. They are like a hybrid of the old and the new. The old: computer time was expensive and humans cheap, so we had to be precise in our instructions. The new: computers are cheap and humans cost more, so bang in some code and don’t worry about memory or space. This is ok up to a point, but if the human isn’t trained well, days may be lost.
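To see what I mean about scripts and types, here is a tiny Python sketch (the function is made up for illustration, but the behaviour is real): no compiler objects when a caller passes a string, and the program carries on producing nonsense instead of failing.

```python
def add_tax(price, rate):
    # Nothing declares that price must be a number, and no compiler
    # checks the call sites before the code runs.
    return price + price * rate

print(add_tax(100, 0.2))    # 120.0, as intended
print(add_tax("100", 2))    # "100100100": silent string repetition, no error
```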

As an undergraduate I had to learn about sparse matrices so as not to waste computer resources, and later, particularly using C++, I would patiently wait and watch programs compile. And it was in those moments that I realised why people had warned me that to choose computers was to choose a way of life which could drive you mad.
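The idea behind a sparse matrix is simple enough to show in a few lines of Python (a toy dictionary-of-keys version, not a production library): store only the non-zero entries and let everything else be an implied zero.

```python
class SparseMatrix:
    """Dictionary-of-keys sparse matrix: only non-zero entries use memory."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.data = {}  # (row, col) -> value; absent keys are implicit zeros

    def set(self, r, c, value):
        if value:
            self.data[(r, c)] = value
        else:
            self.data.pop((r, c), None)  # writing a zero frees the slot

    def get(self, r, c):
        return self.data.get((r, c), 0)

m = SparseMatrix(1000, 1000)  # a dense version would need 1,000,000 slots
m.set(3, 7, 2.5)
print(m.get(3, 7))   # 2.5
print(m.get(0, 0))   # 0, stored nowhere
print(len(m.data))   # only 1 entry actually held in memory
```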

How things have changed. Or have they?

Dialogue

When I used to lecture human-computer interaction, I would include Ben Shneiderman’s eight golden rules of interface design. His book Designing the User Interface is now in its sixth edition.

When I read the first edition, there was a lot about dialog design, as way back then there were a lot of dialog boxes (hence the American spelling) to get input/output going smoothly. Graphical-user interfaces had taken over from the command line, with the aim of making computers easy to use for everyone. The 1990s were all about the efficiency and effectiveness of a system.

Just the other week I was browsing around the Psychology Now website and came upon a blogpost about the psychological term locus of control. If it is internal, a person thinks that their success depends on them; if it is external, their success is down to fate or luck. One of Shneiderman’s rules is: Support internal locus of control. That is, make users feel that they can successfully achieve the task they have set out to do on the computer, because they trust it to behave consistently and know what to expect next; things don’t move around like the ghost in the wall.

Shneiderman’s rules were an interpretation of a dialogue in the sense of a one-to-one conversation (dia means through, logos can mean speech) to clarify and make coherent. That is to say: one person having a dialogue with one computer, exchanging information in order to achieve a goal.

This dialogue is rather like physicist David Bohm’s interpretation, which involves a mutual quest for understanding and insight. So, the user would be guided to put in specific data via a dialog box, and the computer would use that information to give new information, creating understanding and insight.

This one-to-one seems more powerful nowadays with Siri, Alexa and Echo, but it’s still a computer waiting on commands and either acting on them or searching certain areas online for results. Put this way, it’s not really much of a dialogue. The computer and the user are not coming to a new understanding.

Bohm said that a dialogue could involve up to 40 people and would have a facilitator, though other philosophers would call this conversation. Either way, it is reminiscent of computer-supported cooperative work (CSCW), a term coined in 1984 for the study of behaviour and technology: how computers can facilitate, impair, or change collaborative activities (the medium is the message), whether people work in the same or different time zones, in the same or different geographical locations, synchronously or asynchronously. CSCW has constantly changed and evolved, especially with the World Wide Web and social media.

I remember being at an AI conference in 1996 where everyone thought that the answer to everything was to put it online and see what happened. But just because the WWW can compress time and space, it doesn’t follow that a specific problem can be solved more easily.

Monologue to Interaction

The first people online were really delivering a monologue. Web 1.0 was a read-only version of the WWW. News companies like the BBC published news like a newspaper. Some people had personal web pages on places like GeoCities. Web pages were static, structured with HTML and, later, styled with some CSS.

With the advent of Web 2.0, things got more interactive: backend scripting meant that webpages could serve up data from databases and update in response to users’ input. Social media sites like Flickr, YouTube, Facebook and Twitter were all designed for users to share their own content. Newspapers and news companies opened up their sites to let users comment and feel part of a community.
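The shift can be sketched in a few lines of Python (hypothetical page-rendering code, just to illustrate the contrast): a Web 1.0 page is the same fixed string for every visitor, while a Web 2.0 page is rebuilt on each request from stored user content.

```python
from html import escape

# Web 1.0: a read-only page, identical for every visitor.
STATIC_PAGE = "<h1>News</h1><p>The same story for everyone.</p>"

# Web 2.0: the page is generated per request from user-submitted data.
comments = []  # stands in for the backend database

def render_page(new_comment=None):
    if new_comment:
        comments.append(new_comment)  # user-generated content
    items = "".join(f"<p>{escape(c)}</p>" for c in comments)
    return f"<h1>News</h1><h2>Comments</h2>{items}"

print(STATIC_PAGE)
print(render_page("First!"))  # the page now contains the visitor's words
```

Escaping the comment text before it goes into the page is the small but crucial detail: serving user input back to other users is what made cross-site scripting a Web 2.0 problem.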

But this chatter was not at all what Bohm had in mind; this is more like Solnit’s cocktail party, with people sharing whatever pops into their heads. I have heard people complain about the amount of rubbish on the WWW. However, I think it is a reflection of our society and the sorts of things we care about. Not everyone has the spare capacity or lofty ambition to advance humanity; some people just want to make it through the day.

Web 3.0 is less about people and more about things and semantics – the web of data. Already, the BBC uses the whole of the internet instead of a content management system to keep current. Though as a corporation, I wonder, has the BBC ever stopped to ask: How much news is too much? Why do we need this constant output?

Social media as a cocktail party

But, let’s just consider for a moment, social media as a cocktail party, what an odd place with some very strange behaviour going on:

  • The meme: At a cocktail party, imagine if someone came up to us talking like a meme: Tomorrow is the first blank page of a 365-page book. Write a good one. We would think they had banged their head or had one shandy too many.
  • The hard sell: What if someone said: Buy my book, buy my book, buy my book, in our faces, non-stop?
  • The auto Twitter DM which says follow me on Facebook/Instagram/etc.: We’ve gone across, said hi, and the person doesn’t speak but slips us a note which says: Thanks for coming over, please talk to me at the X party.
  • The rant: We are having a bit of a giggle and someone comes up and rants in our faces about politics or religion; we try to ignore them, all the while feeling on a downer.
  • The retweet/share: That woman over there just said, this man said, she said, he said, look at this picture… And, if it’s us, we then say: Thanks for repeating me all over the party.

Because it is digital, it becomes very easy to forget that we are all humans connected together in a social space. The result is a lot of automated selling, news reporting, and shouting. Perhaps it’s less of a cocktail party and more of a marketplace with voices ringing out on a loop.

Today, no one would say that using a computer is a solitary experience; it can be noisy and distracting, and it’s more than enough to drive us mad.

How do we get back to a meaningful dialogue? How do we know it’s time to go home when the party never ends, the market never closes and we still can’t find what we came for?

[Part 3]

Human-computer interaction, cyberpsychology and core disciplines

A heat map of the multidisciplinary field of HCI © Alan Dix

I first taught human-computer interaction (HCI) in 2001, from a viewpoint of software engineering. Then, when I taught it again, I taught it from a design point of view, which was a bit trickier, as I didn’t want to trawl through a load of general design principles which didn’t boil down to a practical set of guidelines for graphical-user interface or web design. That said, I wrote a whole generic set of design principles here: Designing Design, borrowing Herb Simon’s great title: The Sciences of the Artificial. Then, I revised my HCI course again and taught it as a practical set of tasks, so that my students went away with a specific skill set. I blogged about it in a revised applied-just-to-web-design version blog series here: Web Design: The Science of Communication.

Last year, I attended an HCI open day, Bootstrap UX. The day in itself was great and I enjoyed hearing some new research ideas, until we got to one of the speakers who gave a presentation on web design. I think he did, anyway; it’s hard to say, really, as all his examples came from architecture.

I have blogged about this unsatisfactory approach before. By all means use any metaphor you like, but if you cannot relate it back to practicalities then ultimately all you are giving us is a pretty talk or a bad interview question.

You have to put concise constraints around a given design problem and relate it back to the job that people do and have come to learn about. Waffling on about Bucky Fuller (his words – not mine) with some random quotes on nice pictures is not teaching us anything. We have a billion memes online to choose from. All you are doing is giving HCI a bad name and making it sound like marketing. Indeed, cyberpsychologist Mary Aiken, in her book The Cyber Effect, seems to think that HCI is just insidious marketing. Anyone might be forgiven for making the same mistake after listening to the web designer’s empty talk on ersatz architecture.

Cyberpsychology is a growing and interesting field, but if it is populated by people like Aiken who understand neither what HCI is nor how artificial intelligence (AI) works, then it is no surprise that The Cyber Effect reads like the Daily Mail (I will blog about the book in more detail at a later date, as there’s some useful stuff in there, but too many errors). Aiken quotes Sherry Turkle’s book Alone Together, which I have blogged about here, and it makes me a little dubious about cyberpsychology. I am still waiting for the book written by the neuroscientist, with lots of brain-scan pictures, which tells me exactly how our brains are being changed by the Internet.

Cyberpsychology is the study of the psychological ramifications of cyborgs, AI, and virtual reality. I was like: wow, this is great, and rushed straight down to the library to get the books on it, to see what was new and what I might not know. However, I was disappointed, because if the people leading the research anthropomorphise computers and theorise about metaphors for the Internet instead of the Internet itself, then the end result will be skewed.

We are all cyberpsychologists and social psychologists now, baby. It’s what we do

We are all cyberpsychologists and social psychologists now, baby. It’s what we do. We make up stories to explain how the world works. That doesn’t mean the stories are accurate. We need hard facts, not Daily Mail hysteria (Aiken was very proud to say she made it onto the front page of the Daily Mail with some of her comments). However, the research I have read about our behaviour online says it’s just too early to tell how we are being affected, and as someone who has been online since 1995, I only feel enhanced by the connections the WWW has to offer me. Don’t get me wrong, it hasn’t all been marvellous; it’s been like the rest of life, some fabulous connections, some not so.

I used to lecture psychology students alongside software engineering students when I taught HCI at Westminster University in 2004. They were excited when I covered cognitive science, as it was familiar to them, and all the cognitive science tricks make it easy to involve everyone and make the lectures fun. But when I made them sit in front of a computer, then design and code up software as part of their assessment, they didn’t want to do it. They didn’t see the point.

This is the point: if you do not know how something works, how can you possibly talk about it without resorting to confabulation and metaphor? How do you know what is and is not possible? I may be able to drive a car, but I am not a mechanic; I would not give advice to anyone about their car, nor write a book on how a car works. And if I did, I would not just think about the car as a black box; I would have to put my head under the bonnet, otherwise I would sound like I didn’t know what I was talking about. At least I drive a car and use a car; that is something.

Hey! We’re not all doctors, baby.

If you don’t use social media, and you just study people using it, what is that then? Theory and practice are two different things. I am not saying that theory is not important, it is, but you need to support your theory; you need some experience to evaluate it. Practice is where it’s at. No one has ever said: Theory makes perfect. Yep, I’ve never seen that on a meme. You get a different perspective, as Jack Nicholson says to his doctor, played by Keanu Reeves, in Something’s Gotta Give: Hey! We’re not all doctors, baby. Reeves has seen things Nicholson hasn’t, and Nicholson is savvy enough to know it.

So, if you don’t know the theory, you don’t engage in the practice, and you haven’t any empirical data yourself, then you are giving us conjecture, fiction, a story. Reading the Wikipedia page on cyberpsychology, I see that it is full of suggested theories, like the one about how Facebook causes depression. There are no constraints around the research: were these people depressed before going on Facebook? I need more rigour. Aiken’s book is the same, which is weird since she has a lot of references; they just don’t add up to a whole theory. I have blogged before about how I was fascinated that some sociologists perceived software as masculine.

In the same series I blogged about women as objects online, with the main point being that social media reflects our society, and that we have a chance with technology to impact society in good ways. Aiken takes the opposite tack and says that technology encourages and propagates deviant sexual practices (her words) – some I hadn’t heard of, which for me raises the question: if I don’t know about a specific sexual practice, deviant or otherwise, until I learn about it on the Internet (Aiken’s theory), then how do I know which words to google? It is all a bit chicken-and-egg and doesn’t make sense. Nor does Aiken’s advice to parents, which is: do not let your girls become objects online. Women and girls have been objectified for centuries; technology does not do anything by itself, it supports people doing stuff they already do. And, like the HCI person I am, I have designed and developed technology to support people doing stuff they already do. I may sometimes inadvertently change the way people do a task when it is supported by technology, for good or for bad, but to claim that technology is causing people to do things they do not want to do is myth-making and fear-mongering at its best.

The definition of HCI that I used to use in lectures at the very beginning of any course was:

HCI is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them (ACM, 1992).

For me, human-computer interaction was and still remains Gestaltian: the whole is greater than the sum of the parts. By this I mean that the collaboration of a human and a computer is more than a human typing numbers into a computer and waiting for the solution, or indeed typing sexually deviant search terms into a search engine to find a tutorial. And, with the advent of social media, HCI is more than one person connecting to another, or broadcasting online, which is why the field of cyberpsychology is so intriguing.

But the very reason why I left the field of AI and went into HCI is this: AI reasons in a closed world, within the limits of the computational power available. There are limits. With HCI, that world opens up and the human gets to direct the computer to do something useful. Human-to-human communication supported by technology does something else altogether, which is why you might want the opinion of a sociologist or a psychologist. But you don’t want the opinion on AI of a sociologist who doesn’t understand how it works, has watched a lot of sci-fi, and thinks that robots are taking over the world. Robots can do many things, but it takes a lot of lines of code. And you don’t want the opinion of a cyberpsychologist who thinks that technology teaches people deviant sexual practices and encourages us all to literally pleasure ourselves to death (Aiken’s words – see what I mean about the Daily Mail?) ’cos she read one dodgy story and linked it to a study of rats in the 1950s.

Nowadays, everyone might consider themselves a bit of an HCI expert, able to judge the original focus of HCI: the concept of usability, easy to learn, easy to use. Apps are a great example of this, because they are easy to learn and easy to use, mainly though because they have limited functionality; they focus on one small task, like getting a date, ordering a taxi, or sharing a photo or a few words.

However, as HCI professor Alan Dix says in his reflective Thirty years of HCI and also here about the future: HCI is a vast and multifaceted community, bound by the evolving concept of usability, and the integrating commitment to value human activity and experience as the primary driver in technology.

He adds that sometimes the community can get lost and says that Apple’s good usability has been sacrificed for aesthetics and users are not supported as well as they should be. Online we can look at platforms like Facebook and Twitter and see that they do not look after their users as well as they could (I have blogged about that here). But again it is not technology, it is people who have let the users down. Somewhere along the line someone made a trade-off: economics over innovation, speed over safety, or aesthetics over usability.

HCI experts are agents of change. We are hopefully designing technology to enhance human activity and experience, which is why the field of HCI keeps getting bigger and bigger and has no apparent core discipline.

It has a designer-maker culture, which is why at any given HCI conference you might see designers, hackers, techies and artists gathering together to make things. HCI has to exist between academic rigour and exciting new tech; no wonder it is not easy to define. But as we create new things, we change society, and we have to keep debating areas such as intimacy, privacy, ownership and visibility, as well as what seems pretty basic, like how to keep things usable. Dix even talks about human–data interaction: as we put more and more things online, we need to make sense of the data being generated and interact with it. There is new research being funded into trust (which I blogged about here). And Dix suggests that we could look into designing for solitude, supporting users not to respond immediately to every text, tweet, or digital flag. As an aside, I have switched off all notifications; my husband just ignores his; and it boggles my mind a bit that people can’t bring themselves to be in charge of the technology they own. Back to the car analogy: they wouldn’t have the car telling them where they should be going.

Psychology is well represented in HCI, AI is well represented in HCI too. Hopefully we can subsume cyberpsychology too, so that the next time I pick up a book on the topic, it actually makes sense, and the writer knows what goes on under the bonnet.

Technology should be serving us, not scaring us. So if writers could stop behaving like 1950s preachers, who think society is going to the dogs because they view how people embrace technology the same way preachers once viewed rock’n’roll and the television, we could be more objective about how we want our technological progress to unfold.

Women in Tech: Society, Storytelling, Technology (7)


The world’s first programmer, Ada Lovelace. Source: Mashable

We cannot live in a world that is not our own, in a world that is interpreted for us by others. An interpreted world is not a home. – Hildegard of Bingen

[Women Part 7 of 9: 1) Introduction, 2) Bodies, 3) Health, 4) Work, 5) Superwomen, 6) Religion, 7) In Tech, 8) Online 9) Conclusions]

A couple of years ago, one of the dads at my girls’ school, following an initiative at his workplace, wanted help setting up an after-school coding club to teach kids to program. He asked me if I would come along and help, because there was a bit about Ada Lovelace and the guidelines preferred a woman to give that presentation. I said I would be pleased to be a role model to guide young girls into IT. I said I would bring my girls and yep, sign me up, show me the materials.

One of my girls was at the time one year too young for the club (following his guidelines), but I said that would be fine; she’s smart, with a love of mathematics, and she should come. Indeed, she had to come, as I look after her. But this man was insistent that she couldn’t come. He didn’t want me childminding – not that I would have been; I would have been teaching and doing a job. His own wife, who had worked in IT, stayed at home and looked after his children whilst he ran the code club.

So there you have it. If there hadn’t been a mention in his materials about needing a woman to talk about her job in IT, I doubt he would have even asked me. Male groupthink is prevalent in IT, as in many other parts of society. He certainly never felt the need to explain his reasons for not updating me on his plans; he ran the club regardless with other dads, never mentioned it to me again, and never showed me any of the materials. The worst bit of all in this troubling tale is that this man is an IT manager. A manager!!!

This anecdote, for me, sums up many experiences I have had in the world of IT: A socially awkward male cannot imagine what it is like to be a woman nor can he bend a tiny rule for something bigger than himself.

I am so used to this sort of nonsense in society, I just let it slide. His individual lack of initiative and imagination can be found everywhere. There are a million stories of women being treated as unimportant in the computing industry and other domains, as I discussed in the blog on Women’s Work, and that is before we mention the purposeful aggression, sexism and appalling behaviour directed at women too.

The picture above is a mashup of Ada Byron, Countess of Lovelace, who worked with Charles Babbage on his computing machine, so officially she is the first computer programmer. A lot of computing pioneers were women. According to National Public Radio, which looked at the statistics for women in computing, the number of women studying computer science grew faster than the number of men until 1984, when the home computer arrived and was marketed to boys, inventing the nerd stereotype and overwriting all the true stories of women in IT.

I was a final-year undergraduate the first time I heard about Ada Lovelace, and the only reason I learnt about her was because the programming language Ada is named after her. Sitting in a lecture hall full of men, the story of a woman was so invigorating that I taught myself Ada and wrote my final-year project in it. It only took a few facts of her life to make me feel excited, included, inspired. What other things might I have decided to do had I known about NASA programmer Margaret Hamilton, whose code put men on the moon (she brought her daughter with her to the lab too), and Grace Hopper, whose machine-independent language ideas led to COBOL? I learnt COBOL in my second year, but no one ever thought she was worth a mention. I tell you, COBOL and I might have got along much better had I known about Grace.

Female computer scientists were not mentioned during my many years of formal education. Rather like the early 19th-century women scientists Caroline Herschel, Jane Marcet, and Mary Somerville, who in their lifetimes were recognised as being at the forefront of European science, but who were no longer spoken about by the end of the 19th century because women had been barred from graduating from university. Written out of history, and not given the legitimacy of belonging that men enjoy. What message does that send a woman?

Our culture sends messages whether we like it or not, and mass culture likes to give us what we already like because it is based on economics. So the moment the male computing geek stereotype was invented, that narrative excluded women; it overwrote those great female stories. Like sells like, and fiscal reasoning doesn’t care about telling new stories, especially when it comes to women. Progress is a myth where technology is concerned: we think that any change is an advancement, but it is not. Semiotically speaking, we look for a how not a what, and we choose and reject stories based on how true they feel, which is based on familiarity, i.e. the stories we already know. So, if the constant narrative is that girls don’t do computing and boys do, then this must be true.

It encourages a cultural devaluation of women across society, and in particular in technology. Take Stuff Magazine, a magazine for men who are interested in technology. Its objectification of women made me so cross that I had to write a whole blog slagging it off, and I only slag things off when I am angry. A gadget shop called Menkind has just opened up near me. Why is it called Menkind? When I passed it, it had a Harry Potter cutout in the window. Harry Potter, eh? We all know that J K Rowling chose her pen name so that she would appeal to young boys. Heaven forbid that society encourages little boys to take women seriously and to listen to whatever story they might have to tell. The bottom line is like sells like, and the bottom line is cold hard cash. Progress is a myth and women’s stories are unimportant.

New Scientist news editor @PennySarchet wrote in a tweet that she was advised during her PhD to explain everything really simply, as if talking to a child or your mother. The original tweet she quoted, since deleted, said grandmother. The cultural devaluation of women starts at home, with the mother.

And yet there is hope. There is always hope. Recently, I read Good Night Stories for Rebel Girls by Elena Favilli and Francesca Cavallo. In the Guardian review linked there, the reviewer says her daughter was disappointed not to find J K Rowling, and the reviewer herself was disappointed to find Margaret Thatcher. J K Rowling writes books, yes successfully, whereas Thatcher was the first UK female Prime Minister, so the book made the right choice. You can’t edit Thatcher out of history just because you don’t want to hear her story. She is, historically speaking, an incredibly important figure. Rowling, we can’t say yet; time will tell. But we can say this: she wasn’t the first woman writer in UK history. She is just one the reviewer’s daughter has heard of, because she hasn’t heard many women’s stories. Why? Because many women have been written out of history. Am I repeating myself?

I read the book with my daughter, who was really interested in the coders and physicists because of me. She kept showing them to me and chatting about them, because she is looking for stories which make sense of her world (even though she was excluded from code club, miaow), a world in which, luckily for her, her mother loves computing and takes up space in that field. But what about those girls whose mothers don’t, and for whom only the dads do computing at after-school code club?

Lillian Robinson says in Wonder Women that feminism in stories is about the politics of stories. Each time a story is told about a woman doing something in a domain that society has traditionally defined as a man’s world, that narrative becomes part of the information we women, and our girls coming after us, use to process our experiences, and that man’s world becomes less male and more populated by women. Hopefully an equal world of equal opportunity. And the opposite is true: if all the sources of narrative tell the same story about women, then nothing will ever change. Like sells like, remember.

Let us acknowledge as truth that the narratives behind the field of computer science need to be rewritten. Let’s stop dealing in stereotypes, lazy journalism, and the misogyny directed at female prime ministers (which is a whole other blog in itself). Let us look at the big picture, the bright one, which stops telling us that only men do IT. In Living a Feminist Life, Sara Ahmed says:

Feminism helps you to make sense that something is wrong; to recognise a wrong is to realise that you are not in the wrong.

Don’t make our girls wrong about computing.

[8) Online]

Is this progress? Humans, computers and stories

As a computer scientist, I have to say my job has changed very little in the last twenty-odd years. The tech has, admittedly, but I am still doing what I did back then: sitting in front of a computer, thinking about how computers can make people’s lives easier, what makes people tick, and how we can put the two together to make something cool. Sometimes I even program something up to demonstrate what I am talking about.

It seems to me, though, that everyone else’s jobs (non-computer scientists’) have changed, and not necessarily for the better. People do their jobs and then they do a load of extras like social media, blogging, content creation, logging stuff in systems (the list is endless) on top of their workload.

It makes me wonder: Is this progress?

Humans and stories

As a teenager, on hearing about great literature and the classics, I figured that it must be something highfalutin. In school we did a lot of those kitchen-sink, gritty dramas (A Kind of Loving, Billy Liar, Kes, etc.). So, when I found the relevant section in the library (Classics, Literature, or whatever), it was a pleasant surprise to see that they were just stories about people, and sometimes gods, often behaving badly, and I was hooked. Little did I know that reading would be the best training I could receive to become a computer scientist.

Human and computer united together

In my first job, as a systems analyst and IT support, I found that I enjoyed listening to people’s stories in and amongst their descriptions of their interactions with computers. My job was to talk to people. What could be better? I then had to capture all the information about how computers were complex and getting in the way, and try to make them more useful. Sometimes I had to whip out my screwdriver and fix a machine there and then. Yay!! Badass tech support.

The thing that struck me the most was that people anthropomorphised their computers, talking about them needing time to warm up, being temperamental, and being affected by circumstances, as if they were in some way human and not just a bunch of electronic circuits. And the computer was always seen as the way of progress, even if they hated it and didn’t think of it that way.

I think this is partly because it was one person with one computer, working alone, so the computer was like a companion: the office worker you love or hate, who helps or hinders. There was little in the way of email or anything else unless you were on the mainframe, and even then it was used sparingly, especially in huge companies. Memos were still circulated around. The computer was there to do a task – crunch numbers, produce reports, run the Caustic Soda Plant (I did not even touch the door handles when I went in there) – the results of which got transferred from one computer to another by me, and sometimes by that advanced user who knew how to handle a floppy disk.

Most often, information was transferred orally by presentation in a meeting, or on paper with that most important of tools, the executive summary, whilst the rest of the document was a very dry, long-winded explanation, hardly a story at all.

Human and computer and human and computer united

Then the Internet arrived and humans (well, mainly academics) began sharing information more easily, without needing to print things out and post them. This was definitely progress. I began researching how people with different backgrounds, like architects and engineers, could work together with collaborative tools even though they use different terminology and different software. How could we make their lives easier when working together?

I spent a lot of time talking to architects and standing on bridges with engineers in order to see what they did. Other times I talked to draftsmen to see if a bit of artificial intelligence could model what they did. It could up to a point, but modelling all that information in a computer is limiting in comparison to what a human can know instinctively, which is when I realised that people need help automating the boring bits, not the instinctive bits.

I was fascinated by physiological computing, that is, interacting using our bodies rather than typing – using our voices or our fingerprints. However, when it was me, my Northern accent, and my French colleagues, all speaking our fabulous variations of the English language into some interesting software written, I believe, by some Bulgarians, on a slow-running computer, well, the results were interesting, to say the least.

Everyone online

The UK government’s push to get everything electronic seemed like a great idea, so that everyone could access all the information they needed. It impacted Post Offices, but seemed to free up the time spent waiting in a queue and to provide more opportunities to do all those things like pay a TV licence, get a road tax disc, renew a passport, etc. This felt like progress.

I spent a lot of time working on websites for the government, with lovely scripts to guide people through forms like self-assessment so that life was easier. We all know how daunting a government form can be, so what could be better than being told by a website which bit to fill in? Mmm, progress.

Lots of businesses came online, and everyone thought that Amazon was great way back when. I know I did: living in Switzerland, being able to order any book I wanted was such a relief, as opposed to waiting, or reading it in French. (Harry Potter in French, although very good, is just not the same.) Progress.

Then businesses wanted to be seen, which caused the creation of banners, ads, popups, and bought links for self-promotion, and lots of research into website design so that sites were all polished and sexy, even though the point of the Internet is that it is a work in progress, constantly changing and never finished.

I started spending my time in labs rather than in situ, watching people use websites and asking them how they felt. I was still capturing stories, but in a different way: a more clinical way, less of a natural habitat, which of course alters what people say, and which I found a bit boring. It didn’t feel like progress. It felt businessy, a means to an end, and not much fun.

Human – computer – human

Then phones became more powerful, social media was born, and people started using computers just to chat, which felt lovely and like progress. I had always been in the privileged position of being able to chat to people the world over, online, whatever the time, given the access I had to technology; now it was just easier and available to everyone – definitely progress. Until, of course, companies wanted to be in on that too. So now we have a constant stream of ads on Facebook and Twitter, and people behaving like they are down the market jostling for attention, shouting out their wares 24/7, with people rushing up asking: Need me to shout for you?

And, then there are people just shouting about whatever is bothering them. It’s fantastic and fascinating, but is it progress?

The fear of being left behind

The downside is that people all feel obliged to jump on the bandwagon and be on multiple channels without much to say, which is why they have to do extras like creating content as part of their ever-expanding jobs. Another downside is that your stream can contain the same information repeated a zillion times. The upside is that people can say whatever they like, which is also why your stream can contain the same information repeated a zillion times.

Me, I am still here, wondering about the experience everyone is having when all this happens on top of doing a job. It feels exhausting, and it feels like we are being dictated to by technology instead of the other way around. I am not sure what the answer is. I am not sure if I am even asking the right question. I do know how we got here. But is this where we need to be? Do we need to fix it? Does it need fixing? And where should we go next? I think we may need a course correction, because when I ask a lot of people, I find that they agree. If you don’t, answer me this: how do you feel when I ask, is this progress?