Web design (3): Getting to grips with your user’s experience


A collaborative medium, a place where we all meet and read and write.
Tim Berners-Lee

[Part 3 of 7: 0) intro, 1) story, 2) pictures, 3) users, 4) content, 5) structure, 6) social media, 7) evaluation]

In the 1970s, software engineers enthusiastically embraced the waterfall model to get a handle on designing big software systems, although its proposer, Winston Royce, had issues with the model and discussed how it could be improved in that very first paper. He even included some agile-like programming ideas.

The waterfall model, as drawn by Ian Sommerville in his book Software Engineering (1992)

The main problem with the waterfall model was that the testing phase was the first time users saw their proposed new system. Once users had experienced it, they wanted to change the requirements to better reflect what they needed. This would either result in a redesign, with delays and rising costs, or in users being lumbered with a system that did not reflect their needs.

Technology may have moved on immeasurably since the ’70s, but human communication skills have remained pretty much the same, and identifying requirements is as huge a challenge today as it was back in Royce’s time. This is because what users think they want is often very different from what they actually need.

To try and manage this gap, there have been all sorts of waterfall model modifications and alternatives: iterative waterfall, exploratory programming, rapid prototyping, and agile programming to name but a few.

Each approach has tried to be more flexible and iterative in order to accommodate users and produce more usable systems. In 2001, the Agile Manifesto was declared, promising to deliver software regularly and meet changing requirements, with everyone involved working together to produce something good.

And nowadays, a usability specification is included in the requirements phase of the software lifecycle under ISO international standards. This demonstrates the recognised need for a more user-centred design practice, which hopefully leads to focusing on how users want to use a system rather than forcing them to change their behaviour and adapt their working practices to a given system.

Designer Donald Norman invented the term user experience to “cover all aspects of the person’s experience with the system including industrial design, graphics, the interface, the physical interaction and the manual”.

Norman felt that the term usability, the assessment of how easy an interface is to use, was too narrow, and wanted to reflect how a user experienced the system rather than just defining usability goals as part of a user requirements specification.

However, usability and users are just one, albeit important, part of the equation. There are business managers and stakeholders, and a whole host of socio-organisational issues in the culture of each business, which can make the design process a complex one. This is why good designers shouldn’t just solve the problem that is asked of them. As Norman says:

It’s almost always the wrong problem. Almost always when somebody comes to you with a problem, they’re really telling you the symptoms and the first and the most difficult part of design is to figure out what is really needed to get to the root of the issue and solve the correct problem.

– Interview at Adaptive Path (2010)

Each business has its own way, or culture, when designing software. Some companies employ user experience and interaction design consultants (the lovely pic below describes the distinction) – individuals or companies – and some businesses just leave it to their programmers or web designers who might either work on the front or back end, or both, depending on the size and complexity of the system needed, and funds available.

Difference between user experience and interaction designer by Per Axbom

There is no exact science to design and often the boundaries are blurred between roles. The main goal is to solve the right problem and support your user. And so we must begin with research.

Qualitative and quantitative research

Most projects are dictated by time and money constraints. However, even if you only have access to five users in total, it is still better to talk to them rather than designing something without any user consultation. Designers can never anticipate how users will interpret and use their designs.

Quantitative research, such as giving users a questionnaire or capturing their clicks as they perform a specific task on a website, is a useful approach if you want to collect statistically significant results which you can present to stakeholders.
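As an illustrative aside (the numbers below are made up, not from any real study), a two-proportion z-test is one common way of checking whether a difference in task-success rates between two designs is statistically significant:

```javascript
// Two-proportion z-test: is the success rate of design A really
// different from design B, or could it be chance?
function twoProportionZ(success1, n1, success2, n2) {
  const p1 = success1 / n1;
  const p2 = success2 / n2;
  const pooled = (success1 + success2) / (n1 + n2); // pooled proportion
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2));
  return (p1 - p2) / se;
}

// Say 78/100 users completed the task on design A, 62/100 on design B.
const z = twoProportionZ(78, 100, 62, 100);
console.log(z.toFixed(2)); // 2.47 — |z| > 1.96 suggests significance at the 5% level
```

A result like this is far easier to defend in a stakeholder meeting than a hunch, which is exactly why quantitative methods earn their place alongside qualitative ones.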

However, even if you use a more qualitative approach, such as one-to-one interviews full of open-ended questions resulting in many different answers, be assured: patterns will emerge to guide you. They always do.

If time is short, a quick way to gain insight into your users’ minds is one of my favourite approaches: the cultural probe. You give users a way to describe their tasks, thoughts and opinions whilst going about their business. This can take the form of a diary or an online blog, video, or Flickr account: whatever you think you might want to see and analyse.

The resulting data can be a most powerful way of demonstrating how users behave and work, almost as effective as having your user give an opinion in the lab, on the other side of a two-way mirror, whilst your client or stakeholder watches.

Human nature is endlessly fascinating and we have a natural empathy with one another. Thus, it makes sense that when we present all our facts, figures and qualitative patterns to the stakeholder, we shape them into easy-to-understand formats which echo our natural storytelling talents. We may want to use some or all of the following:

User personas

Personas were informally developed by the father of Visual Basic, Alan Cooper, in the 1980s as a way to understand the mindset of the users who would use the software he was designing.

Personas are an essential part of goal-directed design. Each group of users researched is represented by a persona, which in turn is represented by a document. Several personas are not uncommon in a typical project. (Image: Gemma MacNaught)

They are a way of representing a particular audience segment, and generally have a group name, e.g., web manager. Added to this are responsibilities, age and education, and their goals, tasks and environment. Pictures and quotes help, as they capture more easily a person’s motivations, frustrations and the essence of who they are.

Once the personas are in place, we can hear some of their stories:

User Stories

User stories are mainly used in agile programming environments. They shift the focus from the system design (hey, this is a cool feature) to the actual user and what the user wants to be able to do with the proposed system, whether a given feature is cool or not.

“As an Industrial Facilities Manager, Cathy is responsible for maintaining production systems and sustainability, which includes keeping equipment functional. She needs quick access to maintenance information and parts supply for her facility’s entire inventory.”

Example of a user story from www.newfangled.com

As newfangled.com says, a story’s details are collaboratively worked on over time by everyone involved, so that the story becomes a promise and a way to hold a conversation with users.

In agile environments, stories are the smallest building blocks, which can all be joined together to make sprints, epics, and versions.

In a user experience environment, user stories are instead extended to make scenarios and journeys:

User scenarios and user journeys

A user scenario expands upon your user stories by including more details about how a system might be interpreted, experienced, and used by each persona group.  Scenarios describe a user’s goal, specify any assumed knowledge, and outline details of the user’s interaction experience.

A use case expands on scenarios and creates a long list of steps, sometimes in a call and response approach between user and system, which a user might take in trying to get something done. It starts with how the user got there and steps through each user/system state until they have successfully achieved their goal or given up.

The focus of use cases is what a system needs to allow a user to do and so, use cases are used to define product features and requirements. The result is use case diagrams or user flows and some descriptive text, which illustrate and describe the sequence of user interactions.
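To make the call-and-response shape concrete, a use case can be sketched as a small data structure (the goal, assumptions and steps below are invented for illustration, not from any real project):

```javascript
// A hypothetical use case captured as data: user and system steps
// alternate, tracing one path from entry point to achieved goal.
const useCase = {
  goal: 'Find maintenance information for a piece of equipment',
  assumes: 'User is logged in and knows the equipment name',
  steps: [
    { actor: 'user',   action: 'searches for the equipment by name' },
    { actor: 'system', action: 'lists matching equipment records' },
    { actor: 'user',   action: 'selects a record' },
    { actor: 'system', action: 'displays maintenance history and parts supply' },
  ],
};

// A quick check that the call-and-response structure alternates correctly.
const alternates = useCase.steps.every(
  (step, i) => step.actor === (i % 2 === 0 ? 'user' : 'system')
);
console.log(alternates); // true
```

Writing a use case down this explicitly is what makes it easy to turn into a user flow diagram, or into a checklist of features the system must support.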

This all feeds into the user requirements, so that finally, we have a rich understanding of who the users are, what the users need, the tasks they perform, and how they go about it.

Delivering deliverables

We have come a long way since the days of the waterfall model and system-based design. However, I still hear stories of users being asked to fill in a spreadsheet field or two with their requirements so the programming team can write an epic. Or I hear that users can’t request requirements and code changes because “that’s just how the code works”. Argh!

User personas and the like, as tangible as they are, are only methods. They are not the deliverables of a project. Their only purpose is to facilitate user understanding, and we may have to repeat them over and over until everyone is happy.

Used correctly, these methods can give everyone a good experience, stakeholders and designers included. How brilliant is that?

[Part 4: Web design: Being content with your content]

Web design (1): What’s the story?


The WorldWideWeb (W3) is a wide-area hypermedia information retrieval initiative aiming to give universal access to a large universe of documents.
Tim Berners-Lee

[Part 1 of 7: 0) intro, 1) story, 2) pictures, 3) users, 4) content, 5) structure, 6) social media, 7) evaluation]

Tim Berners-Lee used this description of the Web 25 years ago, on the first website of his newly created World Wide Web. In 2014, the world’s Internet users surpassed 3 billion, or 43.6 percent of the world population, inadvertently creating digital culture. But Berners-Lee’s words are as relevant today as they were back then. The Web was always intended to give universal access to a large universe of documents.

Alongside documents, we have the Internet of Things with devices passing information to each other to create ambient atmospheres. And, we have the Internet (of people) which compresses time and space so we can communicate instantly all over the world regardless of time zone: To shop, to make money, to make friends.

Until recently, we would only do this via a website, but whether we are reading, or chatting, or creating art, our main means of communicating is using words. And so, whatever else we think a website is, or can be, a website is words on a page which communicate a message, a story, or instructions.

For a long time, designers were very conscious of words and used the newspaper as a model for how a website should be designed. We had above the fold, headlines, and images to break up the text, making lots of websites look like newspapers. We controlled the layout and told each browser exactly how to display our website design.

This worked well because humans like to recognise things and can transfer their knowledge of one thing (a newspaper works like this) to another (ah, very similar to a newspaper, this website works like this). We are also storytellers who, whilst feeling happy with certain plots or layouts, like to be surprised and amazed. Apple calls this design approach discoverability, which recognises that we are always looking for the next thing.

Humans are motivated by needs and are constantly looking to satisfy them. This is reflected online by us getting interactive with our words and making services available: to buy books worldwide, get food delivered locally, or check our bank accounts, before moving onto the next thing.

So, it may seem that websites today look and feel very different to Berners-Lee’s first website. But they aren’t. No matter what you offer on a website to captivate your users and make your design shine, the fundamental purpose of your website is to communicate the content – information or a service – which we do using words.

All websites look the same

Web designer Dave Ellis says that most websites are starting to look the same, especially those done by web design companies advertising their web design capabilities.

All websites look the same – web designer Dave Ellis

UX designer @timcaynes went so far as to collect some of these same-looking home pages together.

When laid out side by side they do look, interestingly enough, very similar.

They are following, according to 99designs, one of the top trends of 2015: make it big. Other trends include circles in your design and animated storytelling.

But, trends come and go. Web developer Jeremy Keith reminds us in one of his great blog posts how JavaScript was quickly adopted for three primary use cases:

  1. Swapping out images when the user moused over a link.
  2. Doing really bad client-side form validation.
  3. Spawning pop-up windows.


But it is not just the make-it-big trend which designers blame for the homogeneous nature of web pages. Many believe it is because of the mobile-first design approach.

Usability guru Jakob Nielsen, in his mobile design course, advises the designer to:

  • cut features: eliminate things that are not core to the mobile use case;
  • cut content: to reduce word count and defer secondary information;
  • enlarge interface elements: to accommodate the “fat finger” problem.

But it’s not just space! Theoretically, mobile-first web design accommodates the most difficult context first by removing convenient assumptions such as a strong connection or Ajax support. You have to get users to achieve their goals quickly and easily, which sounds great in theory. But the above list causes you to design very differently for a mobile than for a desktop, and gives users different experiences.

This realisation has led to adaptive web design (AWD) and responsive web design (RWD). Both approaches design for different devices (e.g., desktop and mobile) whilst maintaining similar experiences.

RWD relies on flexible and fluid grids, and AWD relies on predefined screen sizes. RWD might take more code and implementation strategies with the fluid grids, CSS, and flexible foundations, whilst AWD has a streamlined, layered approach, which utilizes scripting to assist with adapting to various devices and screen sizes.

In practice though, this has led to AWD providing several websites on one URL: one mobile and one desktop, etc.
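The distinction can be sketched in a few lines (the breakpoints and layout names below are invented for illustration, not taken from any particular framework): AWD snaps to one of a few predefined layouts, while RWD computes fluid sizes from whatever width it is given.

```javascript
// Adaptive: pick one of a small set of predefined layouts.
function adaptiveLayout(widthPx) {
  if (widthPx < 768) return 'mobile';   // predefined breakpoint
  if (widthPx < 1200) return 'tablet';  // predefined breakpoint
  return 'desktop';
}

// Responsive: a fluid grid scales continuously with any width.
function responsiveColumnWidth(widthPx, columns) {
  return widthPx / columns;
}

console.log(adaptiveLayout(320));            // 'mobile'
console.log(adaptiveLayout(1440));           // 'desktop'
console.log(responsiveColumnWidth(1440, 3)); // 480
```

In real sites the same ideas live in CSS media queries and percentage-based widths rather than JavaScript, but the contrast is the same: discrete layouts versus continuous scaling.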

Cutting the mustard: Progressive enhancement

Designer and web standards advocate @Zeldman says that there are 18,796 distinct Android devices on the market, and this number will only continue to increase as technology gets cheaper and more widely available.

How do you design for this?

The BBC’s responsive team came up with cutting the mustard, which is about testing a browser for its capabilities before serving up all of the scripts which are going to create website behaviour and better experiences. This is similar to what Zeldman and The Web Standards Project proposed back in the day.
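The BBC’s published test checked for three features (document.querySelector, window.localStorage and window.addEventListener) before loading its enhancement scripts. Here is a sketch of that idea, using a plain capabilities object in place of the real browser globals so the logic can run anywhere:

```javascript
// Does this browser "cut the mustard"? If any capability is missing,
// serve the core HTML-only experience instead of loading the scripts.
function cutsTheMustard(caps) {
  return Boolean(caps.querySelector && caps.localStorage && caps.addEventListener);
}

const modern = { querySelector: true, localStorage: true, addEventListener: true };
const legacy = { querySelector: false, localStorage: false, addEventListener: false };

console.log(cutsTheMustard(modern)); // true  -> load the enhancement scripts
console.log(cutsTheMustard(legacy)); // false -> serve the core experience
```

The point is that nobody gets a broken page: older browsers simply get the escalator-as-stairs version, with the content intact.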

However, do we need to have the same experience? Even if we all use identical websites, we are having different experiences anyway, because we are embodied and we see the world through our own particular lens of experience. If we add a different device to the mix, what does that mean? Nothing much, if users can easily do what they came to do.

An escalator can never break – it can only become stairs. You would never see an “Escalator Temporarily Out Of Order” sign, just “Escalator Temporarily Stairs. Sorry for the convenience. We apologize for the fact that you can still get up there.”

Chris Heilmann

Seamless experience

Last year, Facebook commissioned a study which found that 40% of adults switch between devices, often web and mobile, to complete a single task. In his article The mobile web sucks, Nilay Patel says that the mobile web needs to get much better, and perhaps this is why.

But, as Raj Aggarwal says in Wired magazine, we need to understand which actions are best suited to which device and platform, i.e., desktop or mobile, and make it as easy as possible for users to perform those actions.

Mobile phones contain all sorts of extra functionality, such as GPS location and a camera, which is why many companies have gone for straight-to-app design instead of the mobile web. This approach encourages users to switch devices depending on what they are doing; e.g., people use the Uber mobile app to find a cab, but are likely to leave feedback on the Web.

Designers need to create a seamless experience between their web app and their mobile app which remembers where users left off. Modern sites such as Gmail, Facebook, Twitter, Yelp and Mint.com all do this, as well as updating themselves, which makes users feel secure. Meanwhile, Aaron Gustafson has proof that we don’t control our web pages: big companies are serving up ads and putting extra HTML and CSS into web pages as they deliver them.

Gustafson says that there is no way of controlling this, but that we need to make the content we serve as good as possible, so that users can still achieve their goals when some resources are missing or markup is altered. This is much easier to do with an app.

Great guidelines exist for producing accessible mobile apps, particularly clutter-free, logical text which can be read out by a screen reader. One lovely example from these guidelines is the Met Office’s weather app. When someone uses the Met Office’s app, they want to know what the weather is going to be like today. They don’t need to see the sunshine and showers. Words describing the weather are enough:

Today twenty January two thousand and twelve (heading, screen reader identifies the text as a heading)
Cloudy with light rain eight degrees centigrade
Wind westerly eleven
sunrise seven fifty five
sunset sixteen twenty eight

-Reading the Met Office Weather App, One Voice
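As a hypothetical sketch (the field names are invented, not from the Met Office’s actual data), the kind of plain, linear text above can be generated from structured forecast data, so a screen reader has clutter-free words to speak rather than icons to skip over:

```javascript
// Turn structured forecast data into plain, linear text for a
// screen reader: one fact per line, no decoration to wade through.
function describeForecast(f) {
  return [
    `${f.summary} ${f.tempC} degrees centigrade`,
    `Wind ${f.windDir} ${f.windSpeed}`,
    `sunrise ${f.sunrise}`,
    `sunset ${f.sunset}`,
  ].join('\n');
}

const text = describeForecast({
  summary: 'Cloudy with light rain',
  tempC: 'eight',
  windDir: 'westerly',
  windSpeed: 'eleven',
  sunrise: 'seven fifty five',
  sunset: 'sixteen twenty eight',
});
console.log(text);
```

The same structured data can, of course, also drive the visual sunshine-and-showers icons; the point is that the words come first.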

Job done!

All a website needs to do is to give users the information for which they came.

If we compare this to Berners-Lee’s very first website, or get a screen reader to read Berners-Lee’s text for us, we may find that the experiences are very similar.

Content is key. If we design for content, we continue to give universal access to a large universe of documents and services and information, which, after all, is what the Web is all about.

[Part 2: Web design: Get the picture]

Digital Culture

pic borrowed from maltatimes.com

When we think of culture, we often think of art galleries and museums, places where curators decide which works of art should be preserved and presented to future generations. Treasuring artefacts and storytelling has been a way of recording history since civilisation began. The cultural gatekeepers, who judge what cultural achievement is and which artefacts or stories should be preserved, have been until now few in number, which makes their choices political ones.

However, culture is much bigger than art and cultural achievement. Culture is learnt, not inherited, and yet it influences our biology, our behaviour, our individuality. It is often used interchangeably with nation, race, ethnicity, identity and community, so it is not surprising that in 1952 the anthropologists Kroeber and Kluckhohn compiled a list of 164 definitions of culture.

We talk about culture in politics, education and the workplace, particularly how organisations have different management cultures such as Amazon’s bruising culture, and about the idea of a global culture with global citizens.

Before 1995, the Internet belonged more or less to the culture of academia, until Tim Berners-Lee created a client-server information system called the World Wide Web and ran it over the Internet. Berners-Lee’s goal was to give universal access to a large universe of documents.  He didn’t realise how the world would change and we would have another culture to consider: Digital Culture.

Digitisation and a cultural shift

Many arts and cultural organisations have turned to technology to help their archiving and preservation work, which in turn has led to a number of changes in these organisations and how they use technology.

They use technology to:

  • Automate processes such as ticket sales and fundraising.
  • Understand audience engagement with their exhibitions using data analysis.
  • Promote and reach new audiences with social media.

Digitising works of art allows easy reproduction and distribution to an audience worldwide, whilst technology reduces the overheads of staffing, print materials, etc., so that organisations can lower costs. Digital installations accompany exhibits and augmented reality apps like Aurasma, add an extra dimension and experience to an exhibit.

Digitisation and access to the Internet have led to a shift in cultural gatekeeping and opened the door for ordinary people to create too. Anyone in the world can create and contribute. It is no longer a small world where only a select few can say what is good and worth treasuring. And the use of technology creates new types of digital art to add to the culture of art.

Self-publishing is a perfect example of this approach. Writers no longer have to wait for the gatekeepers of publishing to condone their work and present it to an audience. Instead, anyone with access to the Internet can publish their work electronically and on paper for a price, cheap or otherwise, to attract an audience. They no longer need to wait for someone else’s established approval.

News media and television journalism were instrumental in shaping our collective memory for much of the twentieth century, but now, thanks to Twitter and Facebook, no longer. There are many others alongside the gatekeepers of the media who influence where we focus our attention. Anyone with access to the Internet and a story to tell can access new platforms to reach people who were once only reachable by the media. Indeed, the media often curate tweets and other social media posts to record public opinion when there is breaking news.

The Internet compresses time and space so that the night shift is always covered, because it is the day shift somewhere else in the world. Banks operate this way: when one member of a team goes to bed in London, they know that someone in New York will be there well into the night, and once night falls in New York, there will be someone else working away in Hong Kong.

This flexible workforce and the creation of intangible products such as databases, knowledge, or apps is known as the weightless economy or knowledge economy. Once something is made, it can be reproduced and distributed at low cost, infinitely.

Theoretically, this should ensure there is enough for everyone, much more easily than when sharing out physical goods and resources. Often, though, the opposite is seen, because of the ‘superstar effect’. Consumers prefer to buy famous or branded knowledge goods, e.g., ebooks, songs, movies or apps. So, because of infinite, low-cost distribution, the superstar or winning product can gain an enormous market share, limited only by other competing superstar products.

Interestingly enough this economic inequality is tolerated better in the digital world than the real one, because everyone feels that they could create the next superstar product.

The addressable individual

Traditional marketing methods are dying out because of the many fragmented channels of social media, so content marketing is the new marketing, and it is big business. The global sponsorship sales director at Manchester United describes fans on social media as addressable individuals with whom the club can have more intimate relationships, “[because with every interaction] we build up knowledge about who that fan is and what type of content they like to consume”. With tailored content, the club encourages its 659 million worldwide fans to feel that special Man U connection by buying branded products endorsed by the club, and then to share this feeling with the fans around them. If Man U gives something its stamp of approval, then that creates value.

A common thread running through all definitions of social media is a blending of technology and social interaction for the co-creation of value. Rather like the shift in art culture, and everyone choosing what to treasure, we now have everyone creating content and sharing it across social relationships. But, even in this equal world, we have those more equal than others, with their influence and their ability to create content and also to make money – again they are tolerated more readily, because anyone has the chance to become the next great influencer or superstar.

Prosumers at playbor and weisure

As the role of consumer and end user disappears, the distinctions between producers and users of content fade. In many spaces online such as Wikipedia, users are also producers of the shared knowledge base, regardless of whether they are aware of their role. They are produsers or prosumers who collapse the gap between producers and consumers.

Often prosumers don’t have well-defined jobs in the 20th century sense of the word. The Internet allows them to blur that boundary between play and labour or work and leisure so they are in the environment of playbor or weisure.

@Stampylongnose is a man in his twenties whose job is to play video games in his bedroom. He creates a video of himself playing Minecraft every day and uploads it to YouTube; he has great fun and earns lots of money. On YouTube, he is more popular than Justin Bieber, and he seems like a really nice man. My girls watch him a lot, probably more than they watch television. But to them, because they often watch YouTube on our television, or television on a tablet or phone, it is all one and the same. They are digital natives, and for them there is no distinction between Stampy, a superstar on YouTube, and Katy Perry, a superstar popstar who puts her song videos on YouTube.

Convergence or Splinternet

My kids see the world quite differently from me. So, it is no surprise that sometimes when I talk about my childhood, one in which the Internet (well, the WWW) didn’t exist, they ask me questions which seem mad to me but perfectly normal in the context within which my girls live: Did you have music, mummy, when you were little? The existence of the Internet is as normal as the existence of music. Using your phone to watch TV is normal too.

Designer and web standards advocate @Zeldman says that there are 18,796 distinct Android devices on the market, and this number will only continue to increase as technology gets cheaper and more widely available. So, although you can do similar things on more devices, known as convergence, more and different devices mean that everyone is beginning to experience the Internet differently. This different experience worries some, who wonder if such diversity contributes to the Splinternet and the fear of the balkanisation of the Web.

However, until now, no country has built an intranet disconnected from the rest of the world. Many countries have blocked websites like Netflix due to intellectual property regulations, and social media during times of crisis, which goes against what the Internet stands for: no central governing body, and universal access for everyone.

One variation of splintering is the filter bubble people can live in. Rather than the long tail of choice and diversity, Google, since changing its search strategies, now serves up more of what you have already seen rather than more of what is out there, and Facebook personalises newsfeeds. So, people can become less exposed to viewpoints which differ from their own and live in their own personal bubble, reading only opinions with which they agree.

Digital culture: Utopia or dystopia?

The social scientist Sherry Turkle once believed that the Internet could help her learn about herself. Others believed that it would radically change our culture. In some ways it has: we have the weightless economy, and more opportunities to create jobs in a weisure space. We can publish our works and our art without waiting for anyone’s permission. But for many, their lives are still mediated by the TV and newspapers, which tell us where to put our focus, even online.

And the main downside of equal access for everyone without a central governing body is that, as the Internet has been adopted by the majority of the population in advanced economies, all of the inclinations, prejudices, and habits of society have come online too.

So in that virtual space where feminists can meet together and use social media to change sexist attitudes, we also have young women publishing erotic/sex confessional memoirs, pictures, videos and self-harm vlogs. Is this a disturbing trend or representative of young women today? In her book, Postfeminist Digital Cultures, Sociologist Amy Shields Dobson discusses young women and their behaviour on the Internet.

Similarly, with the Internet, we have a space free from social constructs which allows us to create different social structures. But in that space there also exist groups of people who like patriarchy and hierarchy. We can have great discussions with like-minded people and find our tribe, but we also have trolling and bad behaviour far worse than would happen in face-to-face discussion.

Without the Internet, you might never know that your lovely, sweet old-lady neighbour is capable of saying the most awful things in her second identity as a hateful tweeting Internet troll. She is a great example of post-structuralism (brilliantly explained in the link using hipsters), a theory of the individual as an unstable entity in this digital world within which we now live. But without the Internet, you might not have met those inspirational people either.

Individuals change, and digital culture, like culture itself, is constantly growing and changing, whilst everyone renegotiates the rules of how it works. The Internet reflects all of this, which, to quote the Eagles’ Hotel California: …this could be heaven or this could be hell.

Thankfully, there is enough digital space to make it what we want.

Simplexity and the Internet of Things


What is now proved was once only imagined
– William Blake

In the brilliant (alas, cancelled) Forever series, Episode 17: Social Engineering, Detective Jo Martinez, and her ME Dr Henry Morgan are called to the apartment of a young murder victim whose flat switches itself on whilst they are there. The morning radio comes on, the coffee maker starts to percolate, the heating switches on, and the blinds open, all controlled by an alarm on the victim’s phone.

The victim turns out to be a Faceless hacktivist who hacks into Times Square’s billboards to play footage of politicians behaving badly, and into New York City’s municipal systems to alter everything from traffic lights to residents’ database information. So, it makes sense that he would wire up his creature comforts to make his flat more ambient. The only downside, according to the NYPD cybercrime unit, is that his network was hacked. Someone logged in to turn on his boiler and cut the pilot light, which resulted in his being gassed. Murder by remote control.

The story might be fiction, but having a wired flat is very much a reality. According to the US Federal Trade Commission, there are around 25 billion devices connected to the Internet, a number which will double by 2050.

These devices appear in every context, from heart-monitoring implants to search and rescue operations in the field. Each device collects data and then autonomously exchanges it with other devices, using APIs, data formats and network protocol stacks to improve overall performance.
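In code, that autonomous flow can be as simple as one device serialising a reading into an agreed data format and another device parsing it back. Here is a minimal Python sketch; the device name, metric, and field names are invented for illustration, and a real IoT stack would carry messages like this over a protocol such as MQTT or CoAP:

```python
import json
from dataclasses import dataclass

@dataclass
class Reading:
    """One sensor measurement from one device."""
    device_id: str
    metric: str
    value: float

def to_message(reading: Reading) -> str:
    # Serialise the reading into a JSON message any other device can parse.
    return json.dumps({"device": reading.device_id,
                       "metric": reading.metric,
                       "value": reading.value})

def from_message(message: str) -> Reading:
    # Parse a message received from another device back into a Reading.
    data = json.loads(message)
    return Reading(data["device"], data["metric"], data["value"])

# A heart-rate implant publishes a reading; a monitoring hub parses it.
msg = to_message(Reading("implant-01", "heart_rate_bpm", 72.0))
received = from_message(msg)
```

The shared format is the whole trick: once both ends agree on it, the devices never need a human in the loop.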

It sounds complex, but couple it with a familiar device like a coffee machine and the nice ambience that results – mmm, smell the coffee – and you get simplexity: an emerging theory which balances the need for simplicity against complexity, and the design focus for the future of the Internet of Things (IoT).

In June this year, Wired magazine produced a supplement about the connected home, 30-odd pages full of futuristic devices that are already on the market and connect to the IoT. A few of my favourites were:

  • The Triby Fridge Memo, an e-ink display you stick on the fridge; when you write on it, it sends messages to the rest of the family.
  • The Smarter coffee machine app, which will customise your coffee: if your fitness tracker says you slept badly, it will make you a double espresso to get you up and at ’em.

Gimmicks aside (cocktail mixer and fizzy water dispenser, yes please), a really useful one is the CO2 detector, which in the event of an emergency would talk to you and your thermostat, and turn off your boiler.

The biggest problem we consumers have is deciding who will look after our smart homes. Is it Google with Nest? Or Apple and its golden handcuffs of proprietary software? Shame really, as these simplex gadgets have been around for many years, just waiting on an industry standard to allow them to talk to each other.

Interestingly, it is not just the devices in this ambient intelligence which need monitoring; it is us humans. HCI designers have been saying for years that the human is a factor in the design. With the IoT, this is truer than ever before: humans become devices to be monitored.

One way is with physiological computing: the physiology of a human is monitored and used as input to a system. So, if you arrive home a bit hot, your home might turn the heating down. Or a computer game could modify its level of difficulty according to the number of times you shake the controller in frustration.
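A toy version of that game-controller idea fits in a few lines of Python. This is only a sketch: the thresholds (10 and 2 shakes per minute) are invented for illustration, not taken from any real input pipeline:

```python
def adjust_difficulty(current_level: int, shakes_per_minute: int) -> int:
    """Use controller shaking as a crude frustration signal."""
    if shakes_per_minute > 10:            # frustrated: ease off
        return max(1, current_level - 1)  # but never drop below level 1
    if shakes_per_minute < 2:             # calm, maybe bored: push harder
        return current_level + 1
    return current_level                  # comfortable: leave it alone
```

So a player on level 5 shaking the controller 15 times a minute would be quietly dropped to level 4, while a serene player would be nudged up a level.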

Feeling wired: The human as a thing

Recently, Douglas Coupland asked in the FT: How much data am I generating? Involuntarily and otherwise. Everywhere we go, we generate data, with Oyster cards and shopping bills at Tesco. Coupland wonders which algorithm is at work, mining away in some big data pool to learn everything about us. His main fear is that it will all be monetised and we will end up as some sort of pay-per-click junkies.

Ironically, when I reread Coupland’s article online, it kept asking me if I wanted to tweet a quote. And, often when signing up for something online, I am asked to share this with friends on Facebook. Just imagine a wired house of consumer products: You’ve just left a note on the Triby Fridge Memo, share this with your friends. Your coffee is a double espresso today, tweet this to your boss. Gah!

But it is not just posting online which causes oversharing and potential security risks. Many people don’t change the settings on their new devices when they bring them home, so devices are left to broadcast openly across the Internet. This allows a would-be burglar to scan the local IP space, gain access to footage of people at home, build up a pattern of behaviour, and then break in when everyone is out. To say nothing of the virtual visitors who tiptoe around and tamper with your systems while you are at home.

But even those humans who change the passwords on their devices might still write them on post-it notes and stick them somewhere everyone can see, or worse still, use the same password everywhere. Designers know that humans are the weakest link in any system, which is why biometrics are being proposed as the way forward. Using what humans already have will be less painful than implanting chips under our skin or having to remember our wearables.

We are all unique

We are all unique. Well, not really: contrary to popular belief, our fingerprints are not unique identifiers, though the retinal scan has an error rate of 1 in 10 million – not bad! Even so, if someone wants to access your system, they will. With brute-force attacks as a starting point, it is easy to imagine someone compiling a database of fingerprints or even retinal scans to virtually or physically enter your home. To counter this, unique biometric identifiers are being explored, such as gait analysis and Nymi’s heartbeat recognition.

Say the intruders have got in and left with your best kit: all is not lost, as the broadening application of blockchain authenticity could help you retrieve it. It is possible to stamp your devices, rather like stamping your bicycle.

Up until now, Bitcoin has been used as a cryptocurrency: a form of money that can be transferred securely and anonymously across a widely distributed peer-to-peer network. The Bitcoin blockchain is an auditable ledger of all the transactions that have occurred on the network so far. It is a trustless system because the Bitcoin network itself is guaranteed to keep a fair and accurate record of which bitcoins belong to whom. Remove the emphasis on currency, keep the blockchain technology, and it becomes possible to track the history of an individual device and keep a ledger of the data exchanges between it and other devices, web services, and human users.
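Stripped of currency, mining, and network consensus (all omitted here), the core idea is just a tamper-evident, hash-linked ledger, which fits in a few lines of Python. The device names and events below are invented for illustration:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's full contents; changing anything changes the hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    # Each new block stores the hash of its predecessor, forming the chain.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def verify(chain: list) -> bool:
    # Tampering with any record breaks every hash link after it.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
append_block(ledger, {"device": "boiler-7", "event": "registered"})
append_block(ledger, {"device": "boiler-7", "event": "data sent to thermostat"})
# verify(ledger) is True here; altering the first record breaks the chain,
# so verify(ledger) becomes False:
ledger[0]["record"]["event"] = "re-registered to a thief"
```

That tamper-evidence is what makes the stamped-bicycle analogy work: the thief can take the device, but cannot quietly rewrite its history.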

The only downside is the creation of massive data trails. But when you have lots of devices in your home, your office, and in cities talking to each other, to humans, and to the Internet, we are talking a lot of data anyway, plus more machines to process it into something meaningful. How much energy will the IoT need? Currently, 25% of UK energy is consumed in the home, and this will only increase.

Sustainability in simplexity

Panasonic in Japan has created the first sustainable smart town called Fujisawa. It is built on the site of an old Panasonic factory and is designed for a population of 3,000 people.

The town has a smart grid with everything connected to it. Each house and apartment block has solar panels and fuel-cell generators which generate and redistribute energy around the house, and the town grid then juggles all these variables of renewable technology and town demand.

Engineers anticipate a 70% drop in each house’s carbon footprint, and they have planned for earthquakes too: enough power can be stored for three days of off-grid operation.

And this is where the IoT gets a whole lot more interesting. If we can use technology to generate energy and redistribute the resources that we have across towns and eventually countries, then there is hope that one day everyone the world over will be able to wake up in a secure home and listen to the sounds of their creature comforts making their home an ambient one.

The IoT has the potential to redistribute the future more evenly. Simplexity at its best.