Storytelling with AI and machine learning

In the 1970s, Marvin Minsky, father of frames, and some say neural nets, told a press conference that 50 years on, computers would read and understand Shakespeare.

Today, computers can indeed read Shakespeare, but understand it? Not really, not so much – even though they have been used to explore Shakespeare's plays in a few ways:

  1. Computers are helping to establish which bits Shakespeare didn't write; apparently John Fletcher wrote some parts of Henry VIII. I've always loved this conversation about who wrote what, especially the Christopher Marlowe and Shakespeare conspiracy theories. Was Marlowe really Shakespeare? Etc.
  2. Machine learning can categorise whether a Shakespeare play is a comedy or a tragedy based on the structure of how the characters interact. In a comedy, simply put, characters come together a lot. In a tragedy, they don't – and ain't that the truth in real life?
  3. Anyone can generate their own Shakespearean play with machine learning.
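No. 2 can be sketched in a few lines of Python. Suppose we had, for each play, a list of scenes with the characters on stage – the data and the 0.5 cut-off below are entirely made up for illustration, not the published method – then one crude structural signal is how many of the possible character pairs ever share a scene:

```python
from itertools import combinations

def interaction_density(scenes):
    """Fraction of possible character pairs that ever share a scene."""
    characters = set()
    pairs = set()
    for on_stage in scenes:
        characters.update(on_stage)
        pairs.update(combinations(sorted(on_stage), 2))
    n = len(characters)
    possible = n * (n - 1) // 2
    return len(pairs) / possible if possible else 0.0

# Toy data: in the "comedy" everyone mingles; in the "tragedy" the groups stay apart.
comedy = [["Viola", "Orsino", "Olivia"], ["Viola", "Sebastian"], ["Olivia", "Sebastian", "Orsino"]]
tragedy = [["Macbeth", "LadyMacbeth"], ["Duncan", "Malcolm"], ["Macbeth", "Banquo"]]

for name, play in [("comedy", comedy), ("tragedy", tragedy)]:
    label = "comedy" if interaction_density(play) > 0.5 else "tragedy"
    print(name, round(interaction_density(play), 2), "->", label)
```

On this toy data the "comedy" scores 1.0 because everyone eventually meets everyone, while the "tragedy" scores 0.3, so even a crude threshold separates them.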

No. 3 seems mind-blowing but, to be honest – and I love me some Shakespeare – the results truly make no sense. However, it is hard to see that at first, because Shakespearean English is like another language. I have attended some brilliant performances from Shakespeare School over the last couple of years, watching my children on stage, but for the first time I realised that it is only the context and the acting which, for me, gave the words their meaning – rather like when you watch a film on TV in a language you don't quite understand, but the story is universal anyway. It has emotional resonance.

I learnt Macbeth's first soliloquy in sixth form: Is this a dagger which I see before me? It is when Macbeth contemplates his wife's horrifying idea of killing Duncan, the king. I can still recite it. It is meaningful because I studied it in depth and ruminated on what Macbeth must have been feeling: filled with ambition, excited but horrified, whilst feeling the "this isn't going to end well" feels.

However, machine learning cannot understand what Macbeth is saying. It hasn't semantically soaked up the words and felt the emotional horror of contemplating murder in the name of ambition. All it has done is read the words and categorise them, and then write more words, using probability to infer statistically the most likely next word as it constructs each sentence – rather like predictive text does. It's good, and works to a certain extent, but none of us think that our predictive text is thinking and understanding. It almost feels like guessing.
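That "most likely next word" idea can be sketched as a toy bigram model – a drastically simplified stand-in for what predictive text and modern language models do:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most likely next word, like predictive text."""
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

model = train_bigrams("to be or not to be that is the question")
print(predict_next(model, "to"))  # 'be' – it follows 'to' twice in this text
```

No understanding anywhere: just counting, then picking the most frequent continuation.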

We can see this more easily when looking at Harry Potter. The text is much simpler than Shakespeare, so when a computer reads all the books and writes a new one – which is what the cool people at Botnik got a computer to do – it's easier to see that Harry Potter and the Portrait of What Looked Like a Large Pile of Ash is interesting for sure, but doesn't make a great deal of sense.

“Leathery sheets of rain lashed at Harry’s ghost as he walked across the grounds towards the castle. Ron was standing there and doing a kind of frenzied tap dance. He saw Harry and immediately began to eat Hermione’s family.”

“Harry tore his eyes from his head and threw them into the forest.” 

Very dramatic – I love the leathery sheets of rain – but it doesn't mean anything. Well, it does in a way, but it hasn't been designed the way a human would design a story, even unknowingly, and it doesn't have the semantic layers which give text meaning. We need to encode each piece of data and link it to other pieces of data in order to enrich it and make it more meaningful. We need context and constraints around our data; that is how we create meaning. Making this a standard is difficult, but the World Wide Web Consortium (W3C) is working on it, in part in order to create a web of data – especially relevant as all our devices go online. Not that I think that is a good idea; my boiler does not need to be online.

And this, my friends, is where we are with machine learning. The singularity, the moment when computers surpass human intelligence, is not coming anytime soon, I promise you. Currently, it is a big jumble of machines, data sets, and mathematics. We have lots of data but very little insight, and very little wisdom. And, that is what we are looking for. We are looking to light the fire, we are looking for wisdom.

The prospect of thinking machines has excited me since I first began studying artificial intelligence – or, in my case, l'intelligence artificielle – and heard that a guy from Stanford, one Doug Lenat, wrote a LISP program and had it discovering mathematical things. It started simply, with 1+1 as a rule, and went on to rediscover Goldbach's conjecture, which asserts that every even counting number greater than two is equal to the sum of two prime numbers.

The way the story was told to me was that Lenat would come in every morning and see what the computer had been learning overnight. I was captivated. So, imagine my excitement the day I was in the EPFL main library researching my own PhD and stumbled across Lenat's thesis. I read the whole thing on microfiche there and then. Enthralled, I rushed back to the lab to look him up on the WWW – imagine that, I had to wait until I got to a computer – to see that after his PhD he had gone off to create a universal reasoning machine: Cyc.

Lenat recently wrapped up the Cyc project after 35 years. It is an amazing accomplishment. It contains thousands of heuristics, or rules of thumb, that create meaning out of facts which we humans have already learnt by the age of three, and which computers need in order to emulate reason. This is because computers must reason in a closed world, which means that if a fact or idea is not modelled explicitly in a computer, it doesn't exist. There is so much knowledge we take for granted even before we begin to reason.

When asked about it, Marvin Minsky said that Cyc had had promise but had ultimately failed. Minsky said that we should be stereotyping problems and getting computers to recognise the stereotype – basically the generic pattern of a problem – in order to apply a stereotypical solution. I am thinking: archetypes, potentially, maybe with some instantiation, so that we can interpret the solution pattern and create new solutions, not just stereotypes.

In his talk about Cyc, Lenat outlines how it uses both inductive learning (it learns from data) and deductive learning (it has heuristics, or rules). Lenat presents some interesting projects, especially problems where data is hard to find – and it is these sorts of problems which need to be looked at in depth. Lenat uses the example of container spillages and how to prevent them.

Someone said to me the other day that a neuroscientist told them that we have all the data we will ever need. I have thought about this and hope the neuroscientist meant: We have so much data we could never process it all because to say we have all the data we need is just wrong. A lot of the data we produce is biased, inaccurate and useless. So, why are we keeping it and still using it? Just read Invisible Women to see what I am talking about. Moreover as Lenat says, there are many difficult problems which don’t have good data with which to reason.

Cyc takes a universal approach to reasoning, which is what we need robots to do in order to make them seem human, and which is what the Vicarious project is about. It is trying to discover the friction of intelligence without using massive data sets to train a computer – and I guess it is not about heuristics either; it's hard to tell from the website. As I have said before, what we are really looking to do is encapsulate human experience, which is difficult to measure, let alone encapsulate, because experience is different for each person, and a lot goes on in our subconscious.

Usually, artificial intelligence learning methods take one of two opposite approaches: either the deductive, rule-based approach – if x then do y, using lots of heuristics – or an inductive approach – look at something long enough and find the pattern in it, a sort of "I've seen this 100 times now: when x, y follows". As we saw above, Cyc used both.
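The contrast can be made concrete with a toy umbrella example (entirely made up):

```python
from collections import Counter

# Deductive: a hand-written heuristic -- if x, then do y.
def deduce(sky):
    return "umbrella" if sky == "cloudy" else "no umbrella"

# Inductive: having seen (sky, outcome) pairs often enough, predict the
# most frequent outcome previously observed for that sky.
def induce(observations):
    seen = {}
    for sky, outcome in observations:
        seen.setdefault(sky, Counter())[outcome] += 1
    return lambda sky: seen[sky].most_common(1)[0][0]

history = [("cloudy", "rain"), ("cloudy", "rain"), ("cloudy", "dry"), ("clear", "dry")]
predict = induce(history)
print(deduce("cloudy"))   # 'umbrella' -- from the rule
print(predict("cloudy"))  # 'rain' -- from the pattern in the data
```

The deductive version knows nothing it wasn't told; the inductive version knows nothing it hasn't seen. A system like Cyc tries to have both.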

Machine learning (ML) uses an empirical approach of induction. After all, that is how we learn as humans: we look for patterns. We look in the stars and the sky for astrology and astronomy, we look at patterns in nature when we are designing things, and we look at patterns in our towns and in people's behaviour – nowadays especially online, on social media.

Broadly speaking, ML takes lots of data, looks at each data point, and decides yes or no when categorising it – it's either in or out – rather like the little NAND and NOR gates in a computer, and it echoes what the neurons in our brains do too. And this is how we make sense in stories: day/night, good/bad, as we are looking for transformation. Poor to rich is a success story; rich to poor is a tragedy. Neuroscience suggests that technology really is an extension of us, which is so satisfying because it is, ultimately, logical.
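That yes/no unit can be sketched as a single artificial neuron: a weighted sum followed by a hard in-or-out threshold. With the right weights (chosen by hand here, not learnt) it behaves exactly like a NAND gate, which is the sense in which the analogy holds:

```python
def neuron(inputs, weights, bias):
    """Weighted sum followed by a hard yes/no threshold."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# These weights make the neuron compute NAND -- the gate from which
# every other logic circuit can be built.
nand = lambda a, b: neuron([a, b], weights=[-2, -2], bias=3)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))
```

Training a network is then "just" the business of finding such weights automatically rather than by hand.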

In my last blog, I looked at how to get up and running as a data scientist using Python and pulling data from Twitter. In another blog, another time, I may look in detail at the various ML methods under the two main categories of supervised and unsupervised learning, as well as deep learning and reinforcement learning, the latter of which uses rewards – a signal saying yes, this categorisation is correct, or no, it is not – because ultimately, a computer cannot do it alone.

I don't believe a computer can find something brand spanking new – off the chain, never discovered, seen or heard of before – without a human being helping, which is why I believe in human-computer interaction. I have said it so many times: in the human-computer interaction series, in our love affair with big data, and all over this blog. But honestly, I wouldn't mind if I was wrong – if something new could be discovered, a new way of thinking to solve problems which have always seemed without solution.

Computing is such an exciting field, constantly changing and growing, it still delights and surprises as much as it did over 20 years ago when I first heard of Doug Lenat and read his thesis in the library. I remain as enthralled as I was back then, and I know that is a great gift. Lucky me!

Tutorial: A quick guide to data mining on Twitter


Data mining and sentiment analysis – measuring and interpreting what people are saying about a particular subject on Twitter – are fascinating things to do, but be warned: you may lose a lot of time once you get started. I know I am finding it slightly addictive.
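As a taste of what's ahead: at its very simplest, sentiment analysis is just counting positive and negative words. The word lists below are tiny and hand-picked purely for illustration; real analysers use trained models:

```python
import re

# Toy word lists -- illustration only, not a real sentiment lexicon.
POSITIVE = {"love", "great", "happy", "fascinating"}
NEGATIVE = {"hate", "awful", "sad", "boring"}

def sentiment(text):
    """Score a text by counting positive minus negative words."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great tutorial"))  # positive
print(sentiment("what an awful boring day"))    # negative
```

Once the setup below is done, you can run texts pulled from Twitter through exactly this kind of function.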

There are so many examples online, but here is my very basic guide which will get you up and running in no time at all.

The four main steps are:

  1. Anaconda: Install the Anaconda platform.
  2. Twitter developer: Register yourself as a Twitter developer.
  3. Install tweepy: Connect from Python to Twitter.
  4. Hello World!: Experiment.

Let’s dive into more detail:

1. Anaconda

  1. Go to the Anaconda website and click on the download button to install the latest version.
  2. Once the .exe file is downloaded, double click on it, and step through the installation process, clicking next when prompted.

The reason we are using Anaconda, and not a plain Python installation, is that Anaconda contains all the packages we want to access (apart from tweepy, which is the one for Twitter). Had we installed just Python, we would have to go and install each package separately, as Python was not originally designed to support mathematical manipulation.

The main ones we will be using to get started, and which we will call using the ‘import’ command at the beginning of each session, are as follows:

  • numpy is short for numerical python; it contains mathematical functions for manipulating arrays and matrices of numbers.
  • pandas provides easy-to-use data structures and data analysis tools.
  • matplotlib is how we plot our data on histograms, bar charts, scatterplots, etc., with just a few lines of code.
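Before going further, here is a quick taste of numpy (pandas and matplotlib follow the same import-then-call pattern), using the first five of the girls' grades that appear later in this tutorial:

```python
import numpy as np

grades = np.array([89, 90, 70, 89, 100])
print(grades.mean())          # arithmetic over the whole array at once: 87.6
print(grades.reshape(5, 1))   # the same data viewed as a 5x1 matrix
```

This whole-array style is why we bother with numpy instead of writing loops over plain lists.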

We can now practice using the software.

Launch Spyder (Anaconda 3) from the Windows menu. On the left-hand side is the script editor and on the right-hand side is the console. Remove whatever is already in the editor, then cut and paste in this short script, which uses matplotlib to create a scatterplot. Press the run button (it looks like a play button) and you will see the results in the console window at the bottom right.

import matplotlib.pyplot as plt
girls_grades = [89, 90, 70, 89, 100, 80, 90, 100, 80, 34]
boys_grades = [30, 29, 49, 48, 100, 48, 38, 45, 20, 30]
grades_range = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
fig, ax = plt.subplots()
ax.scatter(grades_range, girls_grades, color='r')
ax.scatter(grades_range, boys_grades, color='b')
ax.set_xlabel('Grades Range')
ax.set_ylabel('Grades Scored')
ax.set_title('scatter plot')
plt.show()

You will need to look up some of the commands in the matplotlib documentation, and if you don't know what the commands are doing – or indeed why you would want a scatter plot – then Google that too. But already we can see how easy and quick it is to visualise some data.

For bigger sets of data, instead of declaring them in arrays as we did above:

girls_grades = [89, 90, 70, 89, 100, 80, 90, 100, 80, 34]
boys_grades = [30, 29, 49, 48, 100, 48, 38, 45, 20, 30]
grades_range = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

We would put those lines in a separate file – called, for example, data.py – and read what we needed like this:

from data import girls_grades

So that if we add or delete data we can keep a copy. Make sure you save all your files with useful names you can recognise when you come back to them.

Also make sure that your python PATH is set up correctly. (Google this.)

It is worth following some Python tutorials online to get a feel for simple Python commands and for how to read and write files, either in the .py python format which we used above, or in other formats such as CSV (often used in Excel) or JSON (often used in web apps), because we may want to use other people's datasets, or create our own from Twitter and store them in files.
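As a sketch of those two formats, here is a small made-up dataset round-tripped through both CSV and JSON using only the standard library. One gotcha worth seeing early: CSV hands everything back as strings, while JSON preserves numbers:

```python
import csv
import json

rows = [{"name": "girls", "grade": 89}, {"name": "boys", "grade": 30}]

# Write and re-read CSV (the format Excel understands).
with open("grades.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "grade"])
    writer.writeheader()
    writer.writerows(rows)
with open("grades.csv") as f:
    back_from_csv = list(csv.DictReader(f))

# Write and re-read JSON (the format web apps favour).
with open("grades.json", "w") as f:
    json.dump(rows, f)
with open("grades.json") as f:
    back_from_json = json.load(f)

print(back_from_csv[0]["grade"])   # '89' -- a string after the CSV round trip
print(back_from_json[0]["grade"])  # 89  -- still a number after JSON
```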

2. Twitter developer

  1. Log into Twitter, or create an account if you don’t have one.
  2. Go to the apps section of the Twitter developer site, and register a new application by clicking on the create an app button.
  3. There’s a bit of form filling to explain what you want to do:
    1. I chose hobbyist, exploring the API, put in my phone number, and verified the account using the text they sent me.
    2. Next page: How will you use the Twitter API or Twitter data? I said that I am using the account for python practice, not sharing it with anyone, but I will be analysing Twitter data to practice manipulating data in real time.
    3. They send you an email and/or a SMS text with a code. After confirming a couple of times, you will get a Congratulations screen. Ta-daa!!
    4. Go to the dropdown menu on the top right hand corner and choose the Apps menu, which will take you to the Apps webpage.
    5. Click the Create an App button. Give your app a name and description e.g. I said: Stalker’s Python Practice, and give a description to the Twitter team about how you will just be using this app for practice. (You won’t need Callbacks or enable Twitter Sign-in.) Click Create at the bottom.
    6. The page which appears is your app's page. Go to Keys and Permissions and you will see your Consumer API keys, which are called consumer key and consumer secret. These keys should always be kept private, otherwise people will be able to pull data from your account, and your account may be compromised and potentially suspended. Underneath them it says Access token & access token secret; click Create, and you will receive an access token and an access token secret. Like the consumer keys, these must also be kept private.

Stay logged into Twitter but now we move onto Anaconda.

3. Install Tweepy

Launch the Anaconda Prompt (Anaconda 3), which you will find in the Windows menu, and then type:

pip install tweepy

Theoretically we could do everything in this console but the Spyder set up makes it so much easier. Close this console and we are now ready to begin!

4. Hello World!

Cut and paste this script into the left hand side and replace each xxxxxxxxx with your consumer_key, consumer_secret, access_token, and access_secret, but leaving the quote marks around them:

import tweepy
from tweepy import OAuthHandler

consumer_key = 'xxxxxxxxxxxxxxxxx'
consumer_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
access_token = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
access_secret = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'

auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)
api = tweepy.API(auth)

for status in tweepy.Cursor(api.home_timeline).items(10):
    # Process a single status
    print(status.text)

The top section of code will give you access to Twitter and the last three lines will print out 10 of the latest status tweets which normally appear in your home timeline.
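The "Process a single status" step is where your own code goes: each status object carries fields such as status.text. A typical first experiment is tallying words across whatever you pulled down – sketched here over plain strings (made up for illustration) so it runs without Twitter credentials:

```python
from collections import Counter
import re

# Stand-ins for status.text values pulled from the timeline.
tweets = [
    "python is great for data mining",
    "learning python one tweet at a time",
    "data data everywhere",
]

word_counts = Counter()
for text in tweets:  # in the real loop, this would be: status.text
    word_counts.update(re.findall(r"[a-z#@']+", text.lower()))

print(word_counts.most_common(3))
```

Swap the sample list for the Cursor loop above and you have your first small piece of Twitter analysis.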

Again save this script in a file so that you can reuse it.

And there you have it.

Next steps

Now you are set up, you are ready to begin manipulating the data you read from Twitter, and there are plenty of tutorials of varying complexity online.

You may want to use ready-made datasets – perhaps geolocated ones – from sites such as GitHub, or create your own from Twitter itself. There really is a world of data out there.

And, as part of the Anaconda framework, there is the Jupyter notebook, which can create webpages on the fly so that you can share your findings really easily. Then there is TensorFlow, which can be used for machine learning – in particular neural nets – because it contains all sorts of statistical techniques to help you manipulate data in a powerful yet straightforward way.

The possibilities with Anaconda really are endless.


I wrote this tutorial at the timestamped date, running everything on Windows 10. All the apps I mention are updated frequently, so these instructions may not represent what you have to do in the future. You may need to explore, but don't worry, you won't break anything. The worst case scenario is that you delete what you have done and start again, which is always great practice.

If you get an error message, check that you cut and pasted the whole script correctly, and that your PATH is pointing in the right direction. If that doesn't help, read the message carefully and see what it says. If you still don't know, cut and paste the message into Google; someone somewhere will have found the solution to the problem.

This is the gift of the World Wide Web, someone somewhere can always help you, you can find whatever you are looking for, and someone is always creating something new and amazing to use. It really is magic.

Good luck and happy hunting.

Myth making in machine learning

If you torture the data enough, it will confess to anything.

– Ronald Coase, economist (often misattributed to Darrell Huff's How to Lie With Statistics)

Depending on who you talk to: God is in the details or the Devil is in the details. When God is there, small details can lead to big rewards. When it’s the devil, there’s some catch which could lead to the job being more difficult than imagined.

For companies nowadays, the details are where it's at, with their data scientists and machine learning departments, because it is a tantalising prospect for any business to take all the data it stores and find something in those details which could create a new profit stream.

It also seems to be something of an urban myth – storytelling at its best – which many companies are happy to buy into as they invest millions in big data infrastructure and machine learning. One person's raw data is another person's goldmine, or so the story goes. In the past, whoever held the information held the power, and whilst it seems we are making great advances, technologically and otherwise, in truth we are making very little progress. One example of this is Google's censorship policy in China.

Before big data sets, we treasured artefacts and storytelling to record history and predict the future. However, that record has for the most part focused on war and survival of the fittest, with patriarchal power structures crushing those beneath them. Just take a look around any museum.

We are conditioned by society. We are amongst other things, gender socialised, and culture is created by nurture not nature. We don’t have raw experiences, we perceive our current experiences using our past history and we do the same thing with our raw data.

The irony is that the data is theoretically open to everyone, but it is, yet again, only a small subset of people who wield the power to tell us what it means. Are statisticians and data scientists the new cultural gatekeepers in the 21st century’s equivalent to the industrial revolution – our so called data driven revolution?

We are collecting data at an astounding rate. However, call your linear regression what you will – long short-term memory, or whatever the latest buzzword within the deep learning subset of neural nets (although AI, the superset, was so named back in 1956) – these techniques are statistically based, and the algorithms already have the story that they are going to tell, even if you train them from now until next Christmas. They are fitting new data to old stories, and they will make the data fit, so how can we find out anything new?

Algorithms throw out the outliers to make sense of the data they have. They are rarely looking to discover brand new patterns or stories, because unless a pattern fits with what we humans already know and feel to be true, it will be dismissed as rubbish, or called overfitting – i.e., the model listened to noise in the data which it should have thrown out. We have to trust solutions before we use them, but how can we if a solution came from a black-box style application and we don't know how it was arrived at? Especially if it doesn't resemble what we already know.
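The pull of an outlier is easy to show with made-up numbers: one maverick point drags a least-squares line far from the trend, and discarding it gives a tidier – but blinder – model:

```python
def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    return sxy / sxx

x = [1, 2, 3, 4, 5]
y = [1, 2, 3, 4, 25]          # the last point is the maverick

print(slope(x, y))            # 5.0: the outlier drags the line steeply upwards
print(slope(x[:-1], y[:-1]))  # 1.0: discard it and the 'story' is tidy again
```

Whether that last point was noise to be binned or the interesting part of the story is exactly the judgment the algorithm cannot make for us.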

In storytelling we embrace the outliers – those mavericks make up the hero’s quest. But not in our data. In data we yearn for conformity.

There is much talk about deep learning, but it is not learning the way we humans learn; it is just emulating human activities – not modelling consciousness – using statistics. We don't know how consciousness works, or even what it is, so how can we model it? Each time, we come back to the fundamental, age-old philosophical question of what it is to be human, and we only find that in stories. We can't find it in the data, because ultimately, we don't know what we are looking for.

It is worth remembering that behind each data point is a story in itself. However, there are so many stories that the data sets don’t include because it is not collected in the first place. Caroline Criado-Perez’s Invisible Women documents all the ways in which women are not represented in the data used to design our societal infrastructure – 50% of the data is missing and no one seems to care because that’s the way things have always been done. Women used to be possessions.

And, throughout history, anyone with a different story to tell about how the world worked was not treated well – like Galileo. Even those who saved their country but, as people, didn't fit with societal norms were not treated well either, e.g., Joan of Arc, Alan Turing. And those who wanted to change the norm were neither listened to nor tolerated until society slowly realised that they were right and that suppression is wrong: Rosa Parks, the Suffragettes, Gandhi, Nelson Mandela.

When it comes down to it, we are not good at new ideas, or new ways of thinking, and as technology is an extension of us, why would technology be any good at modelling new ideas? A human has chosen the training data, and coded the algorithm, and even if the algorithm did discover new and pertinent things, how could we recognise it as useful?

We know from history that masses of data can make new discoveries, both chemotherapy and dialysis were discovered when treating dying people during wars. There was nothing to lose, we just wanted to make people feel better, but the recovery rates were proof that something good was happening.

Nowadays we have access to so much data and we have so much technological power at our fingertips, but still, progress isn’t really happening at the rate it could be. And in terms of medical science, it’s just not that simple, life is uncertain and there are no guarantees which is what makes medicine so difficult. We can treat all people the same with all the latest treatments but it doesn’t mean that they will or won’t recover. We cannot predict their outcome. No one can. Statistics can only tell you what has happened in the past with the people on whom data has been collected.

But what is it we are after? In business it is the next big thing, the next new way to sell more stuff. Why is that? So we can make people feel better – usually the people doing the selling so that they can get rich. In health and social sciences we are looking for predictive models. And why is that? To make people feel better. To find new solutions.

We have a hankering for order, for a reduction in uncertainty, and for a way to manage our age-old fears. We don't want to die. We don't want to live with this level of uncertainty and chaos. We don't want to live with this existential loneliness; we want it all to matter, to have some meaning. Which brings me back to our needs: instead of quoting Maslow (I have things to say about that pyramid in a future blog), I will just say that we want to feel like we matter, and we want to feel better.

So perhaps we should start there in our search for deep learning. Instead of handing it over to a machine to nip and tuck the data into an unsatisfactory story we’ve heard before because it’s familiar and how things are done, why not start with a feeling? Feelings don’t tell stories, they change our state, let’s change it into a better state.

Perhaps stories are just data with a soul…

Brené Brown, The power of vulnerability

Which begs the question: What is a soul? How do we model that in a computer? And, why even bother?

How about we try and make everyone feel better instead? What data would we collect to that end? And what could we learn about ourselves in the process? Let’s stop telling the same old stories whilst collecting even more data to prove that they are true because I want you to trust me when I say that I have a very bad feeling about that.

Just like me

Just like me but a baby…

When I was a girl and I used to go to my Grandma’s house with my mother, there was a picture on the sideboard of her (me mam) as a girl and she looked just like me. It was so like me, but not me, that I was mesmerised.

Now a mum with daughters, I am mesmerised when I look through old photographs which look just like them but are actually me. I have always liked my face, not least of all because it looks like my mother’s and whenever I look in the mirror, I see her and I am comforted.

One of my Bikram teachers has a thing about looking in the mirror. She is constantly saying things like: It’s hard to look at yourself in the mirror. Perhaps you feel old when you look in the mirror because you are grey and fat. And she says it with such passion and commitment that I find it hard to bear. It stirs all sorts of painful emotions within me.

I am very grey and occasionally fat, and most of the time I am okay with that. I actually like looking in the mirror at myself doing bikram, even when going through a mini Porker-Firth phase, like I did last year after eating all the cookies. I missed my mum and comfort ate my way through grief, which created a prosperous roll around my midriff which my girls and I affectionately referred to as my cookie belt. I was okay demonstrating that if I comfort eat I put on weight, and that it's important to listen to our bodies, not our pain, where food is concerned. And I was okay demonstrating that if I have grey hair, it's okay to embrace it, not dye it, and not conform, as my youngest got her first grey hair at seven years old. Equally, it is okay to dye it, though it took me long enough to grow mine out, which was definitely less torturous than dyeing it every two weeks.

That said, I don't know if it was the heat, but whenever my yoga teacher went on about feeling fat and grey and old, it made me want to say and do very un-yogic things, even though Patanjali said that the first yama, or rule, of his eight limbs of yoga is ahimsa: do no harm. Instead, he advised that we act with loving kindness, or what the Buddhists refer to as maitri.

After writing the comfort blog, I asked my eldest if she minded looking like me – I hadn't always felt grateful to look like my mother – and this daughter of mine, being smart and completely charming, mentioned Kung Fu Panda 3, when Po arrives at the secret panda village and sees lots of pandas for the first time: You look just like me but a baby. You look just like me, but old. It made us laugh so much that we have been saying such things to each other ever since.

The day I came home from Bikram complaining about my teacher, saying that I just couldn't believe that someone who practises yoga and teaches yoga daily is still focused on physical appearances, my daughter changed her phrase to: You look like me but old… and fat… and grey, until of course I was helpless with laughter. And that got me thinking: Why did I care so much what this teacher was saying?

I think it is because, even though I have made my decision about my grey hair, and I have shed my cookie belt after my bikram 30-day challenge last month, like my teacher I still buy into society's message for women who have grey hair, which is: I am disposable, invisible. This is utter nonsense of course (I stick out a bit with the grey), but the hair dye industry is so invested in selling hair dye to grey-haired women that it has to tell us that we would look better with our grey hair covered up.

After all, selling is about making people feel less than, it is about hitting them as low down on Maslow’s hierarchy of needs as possible. Consequently, it is hard sometimes to keep the grey hair faith, or the Graith, which disappoints me because it bothers me to look in the mirror and feel less than satisfied with my appearance, after everything I have lived through and all the meditation and yoga I do.

And if I am not squirming enough saying that out loud, I also think that just because someone is a teacher, they should have mastery over themselves and be able to teach me things I cannot teach myself. I am so intolerant. It's awful, and now I am judging myself as well as judging her, which makes me feel less than.

I trained as a yoga teacher, and I have also lectured in computing for many years, and one thing I have come to realise and now know to be true is that sometimes you have to teach something in order to learn it. That is the beauty of teaching and learning; it is a magical exchange of energy. Even when I think I know something pretty well and I am teaching it, there always comes a moment, part way through whatever course I am giving, when I, the teacher, learn something new, because someone in the room has a different experience and a different perspective, regardless of their age, experience, hair, weight, or lifestyle. Everyone in every part of our lives is a teacher; we just have to be willing to listen. And this is why I love teaching. We are all in it together, teaching and learning with each other, resonating with a shared passion for computing or yoga or whatever it is that has brought us together.

This energy exchange puts me in mind of Tonglen, the meditation practice of breathing in and out and exchanging fear for love in those moments when life gets unbearable. It was something I started doing after reading the Buddhist nun Pema Chödrön.

So I was delighted the other day, when I was thinking about my feelings of intolerance and impatience with my yoga teacher, and I happened upon Chödrön’s meditation of equality practice, often simply called Just like me:

The equality practice is simple [..] You think, “Just like me, she wants to be happy; she doesn’t want to suffer.” …it lifts the barrier of indifference to other people’s joy, to their private pain, and to their wonderful uniqueness.

– Pema Chödrön, Tonglen in Daily Life

After a few rounds of just like me, I realised that my yoga teacher is teaching me all manner of things I do not want to teach myself. She echoes the thoughts in my head, the ones I pretend are not there. She really is just like me, only braver.

The truth is, I am not as sorted as I like to pretend I am with my grey hair, nor am I as tolerant and yogic as I like to think. My teacher has the courage to talk about her private pain, well, our private pain, which is why her words disturb me. She is just like me, but she speaks up, where I keep quiet so that I don’t have to feel that people will think I am less than. She has shown me that by speaking up I am not less than, because I don’t judge her as less than either; I see her as brave and authentic. Perhaps that’s how people view me, or perhaps they don’t, and as Deepak Chopra says: What people think of me is none of my business. I need to let go of that thinking altogether.

Lesson learnt – well, not quite, but I am looking forward to her next class. I am ready to step into that magical energy exchange of teaching and learning, of yoga and meditation, and come out slightly different at the end, but not too different. After all, she is just like me, but different, and I give thanks for that and for everything she has to teach me.



The Northern Lights: Allison Labine

Your heart and my heart are very, very old friends.

– Hafiz

Years ago, I had a summer job in a delicatessen in Putney. One sunny afternoon, a man came up to my counter to buy something. I don’t remember what he ordered. I don’t remember what he said, how he spoke, or even what he looked like. The only thing I remember about him is the way he made me feel, so much so that I can still remember it all these years later.

I have him in mind today as I ponder resonance, which, after a look around the Internet, can be defined as evoking a strong emotion. Depth. Spaciousness. Timelessness. Love, which is exactly how I felt that day at my deli counter in his presence. I felt waves of love and comfort in a timeless space, the likes of which I had only felt a few times before, sometimes in dreams, and indeed have only felt a few times since. It was such a special encounter, and it just happened.

I have blogged about how connection is our life force, how it satisfies our noble need for emotional resonance in order to feel seen and heard, and to receive comfort. But, it takes courage to live with an open heart, because at each stage we risk and we fear rejection.

Resonance, however, seems to be something else altogether. It happens almost instinctively, like my interaction with the man. But it is not just people with whom we resonate. YouTube videos, books, a story or a piece of music, and even blogs stir our emotions; they feel rich and significant, they feel true, and we fall into their depths and enjoy a sense of spaciousness and timelessness as we take to our hearts the messages they have for us. The way we feel about anything is the only measurement of truth we have, even though it is often hard to go with our gut, because society and life train us in how we should behave rather than encouraging us to follow our feelings.

In physics, resonance is defined as a specific vibrational frequency at which energy is efficiently transferred into a mechanical system, or indeed from one body to another, so that it vibrates in sympathy with its neighbour. In quantum physics, entanglement is when pairs or groups of particles behave in such a way that their states cannot be described independently of one another, even when the particles are separated by a large distance – famously described by Einstein as spooky action at a distance.

Even though these particles are far too small to see, the idea that the universe is vibrating and expanding and shifting and changing echoes an idea which some world wisdom traditions have held for centuries: that we are all one.

And with the discovery of mirror neurons a couple of decades ago, it seems that humans have the same ability to resonate, much as the particles do, with everything and everyone around them. All the regions of the brain involved with thinking and sensory input appear to have mirror neuron activity. We can resonate with people even at a distance, on the Internet, TV or radio.

We all know certain people, particularly in social situations, who just lift our hearts; around them we feel better about who we are. We mingle in their energy; we feel love and joy and happiness. We feel better.

I love this. I love that we don’t all have to work so hard to reach that lovely state of resonance. Just by chance it is possible to resonate with, as Rumi says, the love within our love, because in the centre of us all, in our hearts, indeed our heart of hearts, in the centre of the centre of all of us, the same consciousness and vibration is occurring. How wonderful is that?

It is just that in our busy world, with all of its demands, sometimes we forget. Worse still, our mirror neurons can become overwhelmed by interactions with people with whom we don’t empathise or whom we don’t understand, because we have forgotten that we all feel the same way, and that deep down the person who is getting on our nerves has the same core as us. They are just like us.

But we don’t always need another person to do this. We can, apparently, do it for ourselves. We can resonate with that love within our love anytime we like.

Meditation is the best way to calm our nervous systems and retrain our neurons so that we can form a bridge between our hearts and minds. In this way we can be the deepest, most spacious, loving person in the room, the one from whom everyone takes their cue. A Course in Miracles (which, I’ll be honest with you, I struggle to read; it is not exactly a page turner) says something like: Give whatever is lacking in a situation. And this is so true: if you are not feeling love in a situation, bring it, bring it first and foremost to yourself, and know that you are enough, and then share it with the situation. Because when we let go of all the tension, thinking and feeling, that is where the magic begins.

After four months of daily ecstatic breathwork, I am starting to feel that it may well be possible. In fact, I know it is possible to connect to that inner state of timelessness, spaciousness and love, because of the man that day, the man I met only briefly and only once, but who resonated so brightly, so beautifully, with a pure love, whilst asking for his quarter of salami or smoked salmon, or whatever it was, so prosaically and yet so magically. His energy was transformative and, thankfully, I have never been the same since.

I love me some woo-woo, quantum physics, quantum love conversation. But in these circles I often hear people talking about negative energies, energy vampires, and protecting ourselves from negativity. I am not such a fan of this advice, this constant need to defend, though I will concede that we are not obliged to resonate with anything and everything that has a pulse. That said, there will be days when we want to resonate but still get it wrong, and are left afterwards with that yukky feeling of having clashed with people who are not on our wavelength.

But we shouldn’t get disheartened. It is only temporary, and even in those horrible moments, clashing can be an amazing creative force too, like dissonant notes in a piece of music which give us space before leading us to harmony. Or it can create something as spectacular and glorious as the Northern Lights.

When I was choosing the picture at the top of the blog, I was looking for something which looked like resonance, which resounded with resonance, and I fell into browsing pictures of the Northern Lights, or Aurora Borealis. Ironically, the magical Aurora Borealis is created when ions in the solar wind collide with atoms of oxygen and nitrogen in the Earth’s atmosphere. They collide; they don’t resonate. So even from a collision, something good can come out of it. We need contrast so that we can feel the difference. Sometimes we need a moment of dissonance, so that when it resolves we appreciate it and are ready to go on our way to somewhere else, to something else, to a new magical quantum entanglement which we can enjoy all the more.

You are not a drop in the ocean. You are the entire ocean in a drop.

– Rumi