Codependency or Collaboration? Human-Computer Interaction: Dialogue, Conversation, Symbiosis (4)

[Part 1, Part 2, Part 3]

The fig tree is pollinated only by the insect Blastophaga grossorun. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative “living together in intimate association, or even close union, of two dissimilar organisms” is called symbiosis. (Licklider, 1960).

The above quotation is from J. C. R. Licklider’s seminal paper Man-Computer Symbiosis (1960). I had to grit my teeth as he kept going on about man and men and computers. To distract myself from the onset of a patriarchal rage, I decided that I needed to know the precise definition of seminal. Google gave me seed of semen.

The word computer originally meant a person who did calculations, like the women in 1731 who ran households and were advised to be clever with their husbands’ wages. And whilst I am wondering who is the tree and who the insect in this scenario of man-computer symbiosis, I am thinking that it really isn’t a good idea to aspire to have computers and people functioning as natural living organisms which cannot survive without each other. I love technology, I really do, but the idea that I cannot function without it is, at the very least, disturbing.

We got a new boiler the other day, complete with a smart thermostat. The thermostat uses an app, now installed on everyone’s phone with location sharing switched on, to see whether we are home or not, and it corrects the temperature accordingly. It will also build up patterns of our daily routine (what time we get up, have a shower, take a bath, etc.) so that it can be ready for us. But only if we are home and have our phones charged. Thankfully, there are buttons on the thermostat itself if the WiFi goes down (apparently the earlier versions didn’t have them), and we can also log into a web browser to make changes to its routine.
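The thermostat’s internals aren’t public, but the presence-and-routine logic described above can be sketched in a few lines of Python. Everything here is hypothetical: the function name, the schedule, and the temperatures are made up for illustration, not taken from any real product.

```python
def target_temperature(phones_home, hour, learned_schedule, away_temp=16.0):
    """Pick a setpoint: follow the learned routine when someone is home,
    otherwise fall back to a low 'away' temperature."""
    if not phones_home:            # no phones detected at home
        return away_temp
    # Use the routine learned for this hour, with a default when
    # no pattern has been learned yet.
    return learned_schedule.get(hour, 20.0)

# A toy learned routine: warm for the morning shower, cooler at night.
schedule = {7: 21.5, 8: 21.5, 22: 18.0}
print(target_temperature(["my phone"], 7, schedule))   # 21.5
print(target_temperature([], 7, schedule))             # 16.0
```

Which also makes the failure mode obvious: if the phone is flat or the WiFi is down, `phones_home` is empty and the house goes cold, hence the buttons.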

Theoretically, I should be thrilled: it is better than me telling my Fitbit when my period is due so that it can tell me when my period is due – and since that blog, I have been given Google adverts for different sanitary products, I told you so! – or Google Nest, which needs you to tell it how you want your house to run, rather like my Fitbit. And I do like data and patterns, so I am interested in what it collects, what it does, and whether it is clever enough to respond to a cold snap or whatever.

But old habits die hard: so far we have got out an old room thermometer to check whether the smart thermostat is measuring the temperature correctly, as it seemed a bit high. It was right. (Just checked it again: it says 25.8°C, the room thermometer says 22.8°C, quite a big difference.) I guess I have just worked in computing too long and I have technology trust issues. If the Sonos is anything to go by, when the WiFi goes down we are completely without music, well, digital music at any rate. Last time, we got out the guitar and turned into the von Trapps, I am not even joking. The alternative would be to keep other music formats and a player, but that idea doesn’t do a lot for me; I am more of a donor than a collector. I hate stuff filling up my space.

When I read Licklider, I am reminded of ubiquitous computing rather than any other technology. I know I would rather my tech be ubiquitous than making me feel codependent on my mobile phone. All these apps for my heating, my music, my blogging, my Bikram, my electricity, my gas: it is slowly but surely turning into the precious and I feel like Gollum. I worry about my phone and can’t stand anyone touching it. Ubicomp had the idea of one person and lots of computers interacting with, if I were doing it, the physiology of a person and making changes accordingly, rather than with the person’s mobile phone’s location, which strikes me as being a bit simplistic and not smart at all. (Just ‘cos you call it smart doesn’t make it smart.) And then collecting and sharing over the Internet, which causes us all to have the symptoms in the codependency link above: my phone does make me anxious and I try to please it all the time. I am forever staring at it and hoping it is doing ok, and I don’t like that feeling.

I have spent a lot of time writing about what social media can do for us, and how we can feel connected. But in this scenario, when it is not other people but an app, or sometimes other people via an app, if we are not in control we become disconnected from ourselves, and then we become addicted to the app or to the person. We give away our power and our data. The problem with these apps is that we have no control and we are barely a factor in the design; our mobile phone is the factor in the design, and it trains us to be codependent, addicted, anxious. Warmth, after all, is on the bottom rung of Maslow’s hierarchy of needs, and I am staring at my bloody phone hoping for the best. This is not symbiosis, this is codependency.

But back to Licklider and his seed of semen paper, a lot of what he was trying to imagine was probably about cooperation or collaboration of the kind I have blogged about before: a space of solutions to explore, with the computer doing the grunt work and the human doing the thinking. And, I believe it is possible to do that even with apps.

In a post, David Scott Brown looks at what Licklider suggested in Man-Computer Symbiosis and what is possible today: shared processing, memory, better input-output, different languages, etc. And I would add that in fields where the language is precise and limited, for example trading (think: buy, sell, high, low, often over the phone), applications of AI are useful and will be able to do all sorts. All the data and conversations can be recorded, mined, and used. It is extremely exciting, and memory and storage are, it seems, infinite, which would make Licklider’s mind boggle.

As an undergraduate I had to learn sparse matrices; memory was something not to waste, it was expensive. In his paper, Licklider says:

The first thing to face is that we shall not store all the technical and scientific papers in computer memory. We may store the parts that can be summarized most succinctly-the quantitative parts and the reference citations-but not the whole. Books are among the most beautifully engineered, and human-engineered, components in existence, and they will continue to be functionally important within the context of man-computer symbiosis.

Imagine his face looking at the data centres all over the world storing memes and cat pictures and abusive tweets repeatedly, without checking if they have already been saved, without indexing, without any sort of check on redundancy, an endless stream of drivel. I am sure even Tim Berners-Lee wonders sometimes about the monster he has created.

And books take so long to write. Beautifully engineered they are; we lose ourselves in them and learn from them. They take us out of ourselves in the same way our phones do, but we are addicted to our phones and to our social media, to that little hit of dopamine that social media gives us, which our books don’t always deliver. Books are work and we are passive, whereas on our phones we feel active; and because our phones are controlling our homes and training us to be codependent, anxious, and powerless, it is a vicious circle of more phones, fewer books.

In these times when I look at where we are going and I am not feeling good about it, like Licklider I turn to nature, as the best designs are in nature. I also look to the wisdom of yoga. So this is what I have:

When a bee gathers pollen, it also gathers a small amount of poison along with the pollen, both get transformed into nectar in the hive. The yogis say that when we learn to take negative situations and turn them into wisdom, it means we are progressing, and becoming skilful agents of the positive.

So, even though I worry about what happens when my whole life is literally on my phone, and the world’s nature reserves are full of data centres which contain every last terrible expression of humanity, and we are so disconnected from the nature around us that the oceans are filled with plastic, and many of us are in offices far away from the natural world staring into our bloody phones, many of us doing it to create technology, surely we can create technology to change where we are. If we want a symbiosis, we must make it a human-planet one, not a human-computer one. I don’t care what my Fitbit says, I don’t want any technology in my ovaries, thank you very much.

So, with that thought and the amazing technology I have at my fingertips today, I want to share an animated gif of my cat drinking from his water fountain. Licklider said:

Those years should be intellectually the most creative and exciting in the history of mankind

And they are. I remain hopeful that we can collect enough data on ourselves to become self-aware enough to transform it into wisdom and create something better for all humanity and our planet. In the meantime, I will be enjoying watching my cat have a drink of water, and I am sure Licklider in 1960 would have been just as amazed to see my cat drinking from the fountain on a phone more powerful, technologically and psychologically, than he could ever have imagined. It remains to be seen whether this is progress or not.

Let’s Talk! Human-Computer Interaction: Dialogue, Conversation, Symbiosis (2)

[Part 1]

I chuckled when I read Rebecca Solnit describing her 1995 life: She read the newspaper in the morning, listened to the news in the evening and received other news via letter once a day. Her computer was unconnected to anything. Working on it was a solitary experience.

Fast forward 20+ years and her computer, like most other people’s, feels like a cocktail party, full of chatter and fragmented streams of news and data. We are living permanently in Alvin Toffler’s information overload. We are creating more data per second than we did in a whole year in the 1990s. And yet, data or information exchange is why we communicate in the first place, so I wanted to ponder here, how do we talk using computers?

Commandments

Originally, you had to ask computer scientists like me. And we had to learn the commands of the operating system we were using: say, on a DEC mainframe with VAX/VMS; on a networked workstation with UNIX; or on a personal computer which used MS-DOS.

Then, we had to learn whatever language we needed. Some of the procedural languages I have known and loved are: Assembler, Pascal, COBOL, Ada, C/C++, Java, X/Motif, OpenGL (I know I will keep adding to these as I remember them). Then there is the declarative Prolog, the (functional, brackety) LISP, and scripting languages like PHP, Perl, Python, and JavaScript. The main problem with scripts is that they are not strongly typed, so you can quite easily pass a string where an integer is expected and cause all sorts of problems, and no compiler will tell you otherwise. They are like a hybrid of the old and the new. The old: computer time was expensive and humans were cheap, so we had to be precise in our instructions. The new: computers are cheap and humans cost more, so bang in some code and don’t worry about memory or space. This is ok up to a point, but if the human isn’t trained well, days may be lost.
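To show the kind of silent failure I mean, here is a minimal Python sketch (the function and values are made up for illustration). Nothing declares the types, so passing a string where a number was intended doesn’t necessarily raise an error; it can quietly compute the wrong thing:

```python
def total(quantity, unit_price):
    # Intended for numbers, but nothing enforces that: a string
    # quantity doesn't fail here, it silently repeats the string.
    return quantity * unit_price

print(total(3, 5))     # 15, as intended
print(total("3", 5))   # '33333': string repetition, no error raised
```

A compiled, strongly typed language would reject the second call before the program ever ran; a script happily runs it, and the bug surfaces days later somewhere else.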

As an undergraduate I had to learn about sparse matrices so as not to waste computer resources, and later, particularly using C++, I would patiently wait and watch programs compile. And it was in those moments I realised why people had warned me that to choose computers was to choose a way of life which could drive you mad.
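For anyone who never had to meet one: a sparse matrix stores only its non-zero entries, so a mostly-empty matrix costs almost nothing. A minimal dictionary-of-keys sketch in Python (a teaching toy, not a production library):

```python
class SparseMatrix:
    """Dictionary-of-keys sparse matrix: only non-zero entries are kept."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.data = {}  # (row, col) -> value

    def set(self, r, c, value):
        if value:
            self.data[(r, c)] = value
        else:
            self.data.pop((r, c), None)  # never store zeros

    def get(self, r, c):
        return self.data.get((r, c), 0)  # absent means zero

m = SparseMatrix(1000, 1000)
m.set(3, 7, 42)
print(m.get(3, 7))    # 42
print(m.get(0, 0))    # 0, implicit rather than stored
print(len(m.data))    # 1 entry held in memory instead of a million cells
```

One stored entry instead of a million cells: that was the whole point when memory was expensive.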

How things have changed. Or have they?

Dialogue

When I used to lecture human-computer interaction, I would include Ben Shneiderman’s eight golden rules of interface design. His book Designing the User Interface is now in its sixth edition.

When I read the first edition, there was a lot about dialog design, as way back then there were a lot of dialog boxes (hence the American spelling) to get input/output going smoothly. Graphical user interfaces had taken over from the command line with the aim of making computers easy to use for everyone. The 1990s were all about the efficiency and effectiveness of a system.

Just the other week I was browsing around the Psychology Now website and came upon a blogpost about the psychological term locus of control. If it is internal, a person thinks that their success depends on them; if it is external, their success is down to fate or luck. One of Shneiderman’s rules is: Support internal locus of control. That is, you make users feel that they can successfully achieve the task they have set out to do on the computer, because they trust it to behave consistently and they know what to expect next; things don’t move around like the ghost in the wall.

Shneiderman’s rules were an interpretation of a dialogue in the sense of a one-to-one conversation (from the Greek dia, through, and logos, word or speech) to clarify and make coherent. That is to say: one person having a dialogue with one computer by the exchange of information in order to achieve a goal.

This dialogue is rather like physicist David Bohm’s interpretation, which involves a mutual quest for understanding and insight. So, the user would be guided to put in specific data via a dialog box, and the computer would use that information to give new information to create understanding and insight.

This one-to-one seems more powerful nowadays with Siri, Alexa, Echo, but, it’s still a computer waiting on commands and either acting on them or searching for the results in certain areas online. Put this way, it’s not really much of a dialogue. The computer and user are not really coming to a new understanding.

Bohm said that a dialogue could involve up to 40 people and would have a facilitator, though other philosophers would call this conversation. Either way, it is reminiscent of computer-supported cooperative work (CSCW), a term coined in 1984 for the study of behaviour and technology and how computers can facilitate, impair, or change collaborative activities (the medium is the message), whether people do this in the same or different time zones, in the same or different geographical locations, synchronously or asynchronously. CSCW has constantly changed and evolved, especially with the World Wide Web and social media.

I remember being at an AI conference in 1996 and everyone thought that the answer to everything was just put it online and see what happened then. But just because the WWW can compress time and space it doesn’t follow that a specific problem can be solved more easily.

Monologue to Interaction

The first people online were really delivering a monologue. Web 1.0 was a read-only version of the WWW. News companies like the BBC published news like a newspaper. Some people had personal web pages on places like Geocities. Web pages were static, marked up with HTML and later styled with some CSS.

With the advent of Web 2.0, things got more interactive, with backend scripting so that webpages could serve up data from databases and update pages in response to users’ input. Social media sites like Flickr, YouTube, Facebook, and Twitter were all designed for users to share their own content. Newspapers and news companies opened up their sites to let users comment and feel part of a community.

But this chatter was not at all what Bohm had in mind, this is more like Solnit’s cocktail party with people sharing whatever pops in their head. I have heard people complain about the amount of rubbish on the WWW. However, I think it is a reflection of our society and the sorts of things we care about. Not everyone has the spare capacity or lofty ambition to advance humanity, some people just want to make it through the day.

Web 3.0 is less about people and more about things and semantics – the web of data. Already, the BBC uses the whole of the internet instead of a content management system to keep current. Though as a corporation, I wonder, has the BBC ever stopped to ask: How much news is too much? Why do we need this constant output?

Social media as a cocktail party

But, let’s just consider for a moment, social media as a cocktail party, what an odd place with some very strange behaviour going on:

  • The meme: At a cocktail party, imagine if someone came up to us talking like a meme: Tomorrow is the first blank page of a 365-page book. Write a good one. We would think they had banged their head or had one shandy too many.
  • The hard sell: What if someone said: Buy my book, buy my book, buy my book in our faces non-stop?
  • The auto Twitter DM which says follow me on Facebook/Instagram/etc.: We’ve gone across, said hi, and the person doesn’t speak but slips us a note which says: Thanks for coming over, please talk to me at the X party.
  • The rant: We are having a bit of a giggle and someone comes up and rants in our faces about politics or religion; we try to ignore them, all the while feeling on a downer.
  • The retweet/share: That woman over there just said, this man said, she said, he said, look at this picture… And if it’s us, we then say: Thanks for repeating me all over the party.

Because it is digital, it becomes very easy to forget that we are all humans connected together in a social space. The result is that there’s a lot of automated selling, news reporting, and shouting going on. Perhaps it’s less of a cocktail party and more of a marketplace, with voices ringing out on a loop.

Today, no one would say that using a computer is a solitary experience, it can be noisy and distracting, and it’s more than enough to drive us mad.

How do we get back to a meaningful dialogue? How do we know it’s time to go home when the party never ends, the market never closes and we still can’t find what we came for?

[Part 3]

Human-computer interaction, cyberpsychology and core disciplines

A heat map of the multidisciplinary field of HCI © Alan Dix

I first taught human-computer interaction (HCI) in 2001. I taught it from a viewpoint of software engineering. Then, when I taught it again, I taught it from a design point of view, which was a bit trickier, as I didn’t want to trawl through a load of general design principles which didn’t absolutely boil down to a practical set of guidelines for graphical user interface or web design. That said, I wrote a whole generic set of design principles here: Designing Design, borrowing from Herb Simon’s great title: The Sciences of the Artificial. Then, I revised my HCI course again and taught it from a practical set of tasks so that my students went away with a specific skill set. I blogged about it in a revised, applied-just-to-web-design version blog series here: Web Design: The Science of Communication.

Last year, I attended an HCI open day, Bootstrap UX. The day itself was great and I enjoyed hearing some new research ideas, until we got to one of the speakers, who gave a presentation on web design. I think he did; it’s hard to say really, as all his examples came from architecture.

I have blogged about this unsatisfactory approach before. By all means use any metaphor you like, but if you cannot relate it back to practicalities then ultimately all you are giving us is a pretty talk or a bad interview question.

You have to put concise constraints around a given design problem and relate it back to the job that people do and which they have come to learn about. Waffling on about Bucky Fuller (his words – not mine) with some random quotes on nice pictures is not teaching us anything. We have a billion memes online to choose from. All you are doing is giving HCI a bad name and making it sound like marketing. Indeed, cyberpsychologist Mary Aiken, in her book The Cyber Effect, seems to think that HCI is just insidious marketing. Anyone might have been forgiven for making the same mistake listening to the web designer’s empty talk on ersatz architecture.

Cyberpsychology is a growing and interesting field but if it is populated by people like Aiken who don’t understand what HCI is, nor how artificial intelligence (AI) works then it is no surprise that The Cyber Effect reads like the Daily Mail (I will blog about the book in more detail at a later date, as there’s some useful stuff in there but too many errors). Aiken quotes Sherry Turkle’s book Alone Together, which I have blogged about here, and it makes me a little bit dubious about cyberpsychology, I am waiting for the book written by the neuroscientist with lots of brainscan pictures to tell me exactly how our brains are being changed by the Internet.

Cyberpsychology is the study of the psychological ramifications of cyborgs, AI, and virtual reality, and I was like wow, this is great, and rushed straight down to the library to get the books on it to see what was new and what I might not know. However, I was disappointed, because if the people who are leading the research anthropomorphise computers and theorise about metaphors for the Internet instead of the Internet itself, then it seems that the end result will be skewed.

We are all cyberpsychologists and social psychologists now, baby. It’s what we do

We are all cyberpsychologists and social psychologists now, baby. It’s what we do. We make up stories to explain how the world works, which doesn’t mean the stories are accurate. We need hard facts, not Daily Mail hysteria (Aiken was very proud to say she made it onto the front page of the Daily Mail with some of her comments). However, the research I have read about our behaviour online says it’s just too early to tell how we are being affected, and as someone who has been online since 1995, I only feel enhanced by the connections the WWW has to offer me. Don’t get me wrong, it hasn’t all been marvellous; it’s been like the rest of life, some fabulous connections, some not so.

I used to lecture psychology students alongside the software engineering students when I taught HCI in 2004 at Westminster University. They were excited when I covered cognitive science, as it was familiar to them, and actually all the cognitive science tricks make it easy to involve everyone in the lectures and make them fun. But when I made them sit in front of a computer to design and code up software as part of their assessment, they didn’t want to do it. They didn’t see the point.

This is the point: If you do not know how something works how can you possibly talk about it without resorting to confabulation and metaphor? How do you know what is and what is not possible? I may be able to drive a car but I am not a mechanic, nor would I give advice to anyone about their car nor write a book on how a car works, and if I did, I would not just think about a car as a black box, I would have to put my head under the bonnet, otherwise I would sound like I didn’t know what I was talking about. At least, I drive a car, and use a car, that is something.

Hey! We’re not all doctors, baby.

If you don’t use social media, and you just study people using it, what is that then? Theory and practice are two different things. I am not saying that theory is not important, it is, but you need to support your theory, you need some experience to evaluate the theory. Practice is where it’s at. No one has ever said: Theory makes perfect. Yep, I’ve never seen that on a meme. You get a different perspective, as Jack Nicholson says to his doctor, Keanu Reeves, in Something’s Gotta Give: Hey! We’re not all doctors, baby. Reeves has seen things Nicholson hasn’t, and Nicholson is savvy enough to know it.

So, if you don’t know the theory and you don’t engage in the practice, and you haven’t any empirical data yourself, you are giving us conjecture, fiction, a story. Reading the Wikipedia page on cyberpsychology, I see that it is full of suggested theories, like the one about how Facebook causes depression. There are no constraints around the research: were these people depressed before going on Facebook? I need more rigour. Aiken’s book is the same, which is weird since she has a lot of references; they just don’t add up to a whole theory. I have blogged before about how I was fascinated that some sociologists perceived software as masculine.

In the same series I blogged about women as objects online, with the main point being that social media reflects our society and we have a chance with technology to impact society in good ways. Aiken takes the opposite tack and says that technology encourages and propagates deviant sexual practices (her words) – some I hadn’t heard of, but for me, it begs the question: If I don’t know about a specific sexual practice, deviant or otherwise, until I learn about it on the Internet (Aiken’s theory), then how do I know which words to google? It is all a bit chicken and egg and doesn’t make sense. Nor does Aiken’s advice to parents, which is: Do not let your girls become objects online. Women and girls have been objectified for centuries; technology does not do anything by itself, it supports people doing stuff they already do. And, like the HCI person I am, I have designed and developed technology to support people doing stuff they already do. I may sometimes inadvertently change the way people do a task when supported by technology, for good or for bad, but to claim that technology is causing people to do things they do not want to do is myth making and fear mongering at its best.

The definition of HCI that I used to use in lectures at the very beginning of any course was:

HCI is a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them (ACM, 1992).

For me, human-computer interaction was and still remains Gestaltian: the whole is greater than the sum of the parts. By this I mean that the collaboration of a human and a computer is more than a human typing numbers into a computer and then waiting for the solution, or indeed typing sexually deviant search terms into a search engine to find a tutorial. And, with the advent of social media, HCI is more than one person connecting to another, or broadcasting online, which is why the field of cyberpsychology is so intriguing.

But the very reason why I left the field of AI and went into HCI is this: AI reasons in a closed world, within the limits of the computational power you have available. There are limits. With HCI, that world opens up and the human gets to direct the computer to do something useful. Human-to-human communication supported by technology does something else altogether, which is why you might want the opinion of a sociologist or a psychologist. But you don’t want the opinion of a sociologist on AI when they don’t understand how it works, have watched a lot of sci-fi, and think that robots are taking over the world. Robots can do many things, but it takes a lot of lines of code. And you don’t want the opinion of a cyberpsychologist who thinks that technology teaches people deviant sexual practices and encourages us all to literally pleasure ourselves to death (Aiken’s words – see what I mean about the Daily Mail?) ‘cos she read one dodgy story and linked it to a study of rats in the 1950s.

Nowadays, everyone might consider themselves a bit of an HCI expert, able to judge the original focus of HCI, which is the concept of usability: easy to learn, easy to use. Apps are a great example of this, because they are easy to learn and easy to use, mainly though because they have limited functionality; that is, they focus on one small task, like getting a date, ordering a taxi, or sharing a photo or a few words.

However, as HCI professor Alan Dix says in his reflective Thirty years of HCI and also here about the future: HCI is a vast and multifaceted community, bound by the evolving concept of usability, and the integrating commitment to value human activity and experience as the primary driver in technology.

He adds that sometimes the community can get lost and says that Apple’s good usability has been sacrificed for aesthetics and users are not supported as well as they should be. Online we can look at platforms like Facebook and Twitter and see that they do not look after their users as well as they could (I have blogged about that here). But again it is not technology, it is people who have let the users down. Somewhere along the line someone made a trade-off: economics over innovation, speed over safety, or aesthetics over usability.

HCI experts are agents of change. We are hopefully designing technology to enhance human activity and experience, which is why the field of HCI keeps getting bigger and bigger and has no apparent core discipline.

It has a culture of designer-maker, which is why at any given HCI conference you might see designers, hackers, techies and artists gathering together to make things. HCI has to exist between academic rigour and exciting new tech; no wonder it seems not to be easy to define. But as we create new things, we change society and have to keep debating areas such as intimacy, privacy, ownership, and visibility, as well as what seems pretty basic, like how to keep things usable. Dix even talks about having human–data interaction: as we put more and more things online, we need to make sense of the data being generated and interact with it. There is new research being funded into trust (which I blogged about here). And Dix suggests that we could look into designing for solitude and supporting users in not responding immediately to every text, tweet, digital flag. As an aside, I have switched off all notifications, my husband just ignores his, and it boggles my mind a bit that people can’t bring themselves to be in charge of the technology they own. Back to the car analogy: they wouldn’t have the car telling them where they should be going.

Psychology is well represented in HCI, AI is well represented in HCI too. Hopefully we can subsume cyberpsychology too, so that the next time I pick up a book on the topic, it actually makes sense, and the writer knows what goes on under the bonnet.

Technology should be serving us, not scaring us. So if writers could stop behaving like 1950s preachers who think society is going to the dogs, viewing how people embrace technology in the same way they once viewed rock’n’roll and the television, we could be more objective about how we want our technological progress to unfold.

Women and religion: Society, Storytelling, Technology (6)

We cannot live in a world that is not our own, in a world that is interpreted for us by others. An interpreted world is not a home. – Hildegard of Bingen

[Women Part 6 of 9: 1) Introduction, 2) Bodies, 3) Health, 4) Work, 5) Superwomen, 6) Religion, 7) In Tech, 8) Online 9) Conclusions]

I grew up in the Church of England and went to church every Sunday, often twice when I was a chorister: Sung Eucharist in the morning and back for Evensong later on that day. I have always loved high-church ritual: incense, candles, and drama, especially on Good Friday, when the vicar would prostrate himself in front of the altar.

Like many teenage girls with a religious mindset, I wanted to feel a divine transcendence, and watched The Song of Bernadette many times. My brother called it my happy-clappy phase. A Muslim friend of mine said that when she was growing up it was commonly known as la phase mystique, for which I was very grateful, as it was a mysterious longing and not happy-clappy at all. And recently I read White Hot Truth, and was like wow, yes, when Danielle LaPorte said that she too, as a religious Roman Catholic, was desperate to experience God.

Meggan Watterson, in Reveal, says that before the 9th century, a theologian was someone who had direct experience of the Divine. Nowadays we think of theologians studying and interpreting religion in a cerebral manner. There has long been the idea that we need to transcend our embodiment, which results in organised religion assigning sexuality to the female body (materia – or matter, blood and procreation), and the higher attributes of soul and spirit to the male mind. Watterson, herself a theologian, goes on to say:

And this has always been the reason why, from the Talmud to the New Testament and the Koran, women have been asked to remain silent, […] why their experience is not considered of equal value to that of men.

We are second-class citizens and not worth bothering about. Consequently, it was a Father God who sent his son Jesus to save all mankind, the brotherhood of man, whereas Eve, the first woman in the Bible, is responsible for the downfall of all mankind. She is the temptress with the forbidden fruit and her pal the snake aka the devil incarnate.

In her book The Dance of the Dissident Daughter, Sue Monk Kidd says that prior to Christianity the snake was a symbol of feminine power, wisdom and regeneration, adding that no wonder a woman will feel lost in organised religion as she is cut off from her intuition, which is an evil thing, and which she understands from listening to her body, which is a dirty thing tempting men into sinning.

Both Watterson and Monk Kidd discuss the irony of the Eucharist. This is my body which is given for you… this is my blood which is shed for you. Women give their bodies to breastfeed their children, and they shed blood every month so that they are able to create new life, but in religious terms this earthly way is unclean and unspiritual, which is why women were not, until fairly recently, allowed to handle the Eucharist or play a role in the service.

But then religion is a man-made power structure. We had the Holy Roman Empire, which wasn’t about God, or experiencing the divine; it was about man and power. And the Church of England was created by randy King Henry VIII, who wanted to divorce Catherine of Aragon in order to marry and have sex with Anne Boleyn, whom he then beheaded and called a witch. No divinity there then. As a woman in this faith, I was taught, from birth, to be validated by the masculine, with a male saviour, male vicars, male apostles, and male stories. It is so indoctrinated in me that I no longer noticed, until my girls took me to one side at St Paul’s Cathedral after attending a service and asked me to point out the female apostles and the female saviour. And therein lies a particularly painful irony: I took my girls to church because I wanted them to know how to pray in order to find comfort. I wanted them, in those worst moments which life can serve up, to know their way around a church in case they needed to transcend their earthly troubles and experience the divine. What on earth was I thinking?

A few years ago, after several traumatic life events, I took to weeping a lot in church. Just weeping. I would weep all the way through the service, as it was the only time I had to myself: my girls were in Sunday School being looked after, and I had a tiny sliver of time in which I couldn’t do anything but weep.
One day the vicar came over and said:
I have noticed you have been weeping a lot during the service.
And I said:
Yes I am very sad.
And he said:
Don’t you think you should get some help for that? See a counsellor? A therapist? Go see someone.

Basically he didn’t want me in his church as a weeping woman in pain from my life experiences. He wanted me to stop it, to go away, to be silent. I was so upset that he didn’t want me there expressing myself, I told everyone, every woman I came across: female friends, random women in the street, anyone who looked at me. And all the women I talked to said that they too had wept in church and wasn’t that the point of church, to get comfort?

It has taken a while, but I am finally of the opinion that the Church is the last place a woman should look for comfort. Comfort comes from being free from constraint, being at ease, and from the familiar. In contrast, the Bible is full of constraints. All those Thou Shalt Nots… written in a time when women were classed as possessions, not people, don’t put anyone at ease. And the subjugation of women means that there is no familiar femininity, just a load of blokes standing about in dresses saying things like: This is my body which I give to you. It is mind-boggling that the centrepiece of Christianity is something women can do but are considered unclean for doing, while men cannot do it and have instead turned it into a spiritual but cerebral act. Women are to be seen and not heard. Do your crying elsewhere, woman.

I did try to stay in the Church. I asked the vicar and a few other ministers if they had anything for me to read on the feminine divine as the whole Jesus thing was no longer working for me. They looked at me like I was insane and made me feel wrong about who I am and how I feel. The results of my life have been experienced in, and written on, my body, a thing that I am supposed to deny, because it is not a divine thing.

Feminist theologian Nicola Slee captures the female role in religion perfectly in Seeking the Risen Christa, when she describes her first experiences of faith in the Methodist Church as an intensely personal quasi-erotic relationship with Jesus [..] which mirrored a white middle-class patriarchal upbringing. He was a trial run for that ultimate act of female self-fulfilment: oh yes, the wedding day. Because of course, what more does a woman need out of life? And if this sounds far-fetched, look at the Roman Catholic nuns who wore wedding rings because they were the brides of Christ (which always reminded me of the Bride of Frankenstein, who was created, like Eve was for Adam, so that Frankenstein could have a bit of company and his laundry done and his tea made). The church is a power structure which reflects an old-fashioned, outdated patriarchal society in which women are not to be themselves.

And so when this is all the Church has to offer women, what are we to do? Watterson says we have to do what our heart desires, and that we are worthy of love and recognition simply because we exist. Something the Church could never say, because it wants everyone down on their knees and kept in line. They don’t want people following their heart’s desires.

Both Watterson and Monk Kidd have left organised religion to form their own definition of the feminine divine, because she, Herself, can be found, if you know where to look. It is a lot of work, but seems to me to be the only way forward because, as Lucy H Pearce says in The Burning Woman: Feminine stands for all that we have been taught to reject as deeply flawed or inconsequential: our mothers, ourselves, other women, nature – in society, in religion, in work. And this is so wrong.

It’s time to reclaim the feminine, and indeed the feminine divine. It is time to teach our girls that they are whole, and worthy and loved, and that there is nothing wrong with them. It is time to stop making us women wrong about who we are and telling us that the message came from a weirdy-beardy bloke called God.

It is time to reinterpret the message and make it right.

[7) In Tech]

Women as superheroes: Society, Storytelling, Technology (5)

We cannot live in a world that is not our own, in a world that is interpreted for us by others. An interpreted world is not a home. – Hildegard of Bingen

[Women Part 5 of 9: 1) Introduction, 2) Bodies, 3) Health, 4) Work, 5) Superwomen, 6) Religion, 7) In Tech, 8) Online 9) Conclusions]

We all love Wonder Woman, we do. My childhood memories tell me that it was the only show with a main female protagonist, and I was glued to the telly when she was on. Until I had girls, I had forgotten how much I had minded about the lack of females on TV; then one day I watched part of the James Bond movie Die Another Day with my girls and they kept making me replay the scenes in which Jinx was centre stage. They didn’t want to see Bond.

I didn’t want to see Bond either; I wanted to see women living out loud and having adventures. I have blogged about women centre stage before, mentioning: Suffragette, Spy, Star Wars, Hunger Games and The White Queen. And after watching Ghostbusters (2016), the all-female reboot, I was so looking forward to Wonder Woman (2017), as I was expecting a modern-day, women-centred interpretation of a favourite from my childhood.

What a huge disappointment. I will just state up front: Wonder Woman is a male idea of a female superhero (or self-actualised woman), which would be par for the course if it had been produced by an all male team, but it wasn’t.

Paradise Island is a male fantasy of women warriors; honestly, it was only missing some mud-wrestling. And it’s so patriarchal: all those sexy women – liminal women, an extraordinary phrase used by A S Byatt and by @IsabelWriter in her fabulous poetry collection Don’t ask – hanging about, waiting for Ares to come back whilst preserving (probably fondling and worshipping) relics donated by Zeus. This pressed all my patriarchal buttons, and then it got worse: we saw that Paradise Island has permeable boundaries, and none of these women were monitoring the perimeter. Really?

Permeable boundaries is another fabulous phrase, which resonated with me when I first read it in Sue Monk Kidd’s The Dance of the Dissident Daughter. She says that women are trained from birth to have permeable boundaries, so we can be invaded, serve others, not listen to our own self-actualisation, etc. Nowhere to date have I seen it better demonstrated than on the Paradise Island of Wonder Woman (2017).

So, Steve Trevor (hot, he tells her lots of times) washes up on the shore, and a glorious woman can’t take her eyes off him, even though she has lived for an eternity, and she follows him in his quest, to war: a war in which he doesn’t treat her as an equal. He tells her to be quiet, talks over her, renames her, denies her her identity and heritage, tells her how to dress, how to look, how to be, and expects her to toe the line. He then nips off to be a hero, leaving her to endure a supporting role in her own movie!

The whole (clunky) plot fits right into the hero’s quest as defined by Christopher Vogler as the masculine need to overcome obstacles to achieve, conquer and possess and his updated female interpretation of the hero’s quest which sadly fits Wonder Woman’s journey in this film: Grapples with emotions as a romantic heroine, looking for the missing piece romantically. I’ll spare you the bit about homemaking. Yes please – feel the rage.

I am totally with James Cameron’s criticism of this film when he says that she looks spectacular but seems to be designed to appeal to 14- or 18-year-old males. Looking at her half-brother Ares, you don’t see him wearing a skimpy outfit which shows off his sexy form. Gods are supposed to have beautiful physiques – Diana does, and to add insult to injury, she is referred to as a God, never a Goddess. Though it works the other way with the female scientist – an anachronism if ever I saw one – who wears a mask because, though beautiful, she has to be scarred to seem unattractive ‘cos she’s evil: a clumsy attempt at what Chaucer, in The Canterbury Tales, called an outer manifestation of … inner characteristics. At no point does the film take us anywhere new and empowering, though it got rave reviews saying it did.

And, I get it. I do! I wanted Wonder Woman to be empowering, and I wanted to write great things about it. But all it does is remind me of those times when you want something so badly – that job, that friendship, that interest in your book – to be good for you, and you want it so badly that you ignore the signs, you know the ones: the creepy, fake, lame behaviour which you think, with enough energy and patience, you can turn into something else. But you can’t. All that happens is you feel betrayed by someone’s lack of integrity and you are left feeling that you’ve been had.

Lillian Robinson wrote a fabulous book about female superheroes called Wonder Women, which aligned her joy in comics with her work as a feminist. She had lots to say about how Wonder Woman was created in 1942, and how her creator, William Moulton Marston (writing as Charles Moulton), had an interesting home life, with his wife and children and girlfriend and children all living in the same house. Consequently, he thought Wonder Woman and her gang (which included Etta Candy) would conquer the world with some sexy lovefest which overpowers men’s need for domination and war.

Also, Wonder Woman’s magic lasso was really a symbol for using her wiles and feminine sexy powers to get a man to tell her anything. Her bracelets were to control her savageness. Anger is never accepted from any female – we have seen this from The Taming of the Shrew to Little Women’s Jo March. When women mature, they accept male domination, get behind the scenes and distract the menfolk by getting busy. Consequently, if Wonder Woman’s bracelets are chained together she loses her power, and Robinson had to wade through a lot of S&M themed editions as well as Marston’s copious lovefest fantasy notes to understand what was really going on.

Wonder Woman may be super powerful, but she is not all muscly like Batman or Superman, as she has to remain super sexy and attractive, with those magnificent breasts which stand up on their own in those metal breastplates. This means that she was super slim in the 40s and super toned in the 80s. She has always kept up her babe-status, but she is one of the rare female superheroes allowed to grow up: Supergirl, for example, never becomes Superwoman. She remains just a non-threatening girl. We don’t want our women fully grown, we want them malleable.

Robinson also points out that the term Superwoman is used to describe women who do everything, have a family, have a big career, run a home, which suggests potential exhaustion and no balance. There is no male equivalent. Men never talk about having it all. Men don’t need to have that conversation. So where does that leave us with self-actualised women and female superheroes?

Normally, at this point I turn to Maslow’s hierarchy of needs; it explains most things. However, this time I can’t. Maslow only used two women in his group of self-actualised people, which Betty Friedan pointed out in 1963. Though Maslow himself said he never expected the psychology community to swallow it whole and cite it indefinitely; he wanted it to be debated.

So, what I guess I am asking now is: What does a superwoman look like when viewed through a female perspective? And, fiction aside: What does a self-actualised woman look like through the eyes of another self-actualised woman? I am asking because that’s the movie that I want to see.

[6) Religion]