
Has automation gone too far?

By Stephen Hudson - Digital Marketing Consultant.

Let me start off by saying that I love automation; I love the fact that my phone, tablet, TV and games console automatically connect to the Internet to download the latest firmware updates while I sleep! I love how I can set my central heating to automatically turn itself on and off at certain times, and I love that my car’s windscreen wipers turn on automatically when it starts to rain. However, and I can’t stress this enough, I love that I can still do these things manually when necessary.

Automation is designed to make our lives easier and, from my personal experience, it tends to work. I’d hate to have to get up at 5am on a cold winter’s morning to turn the heating on to ensure I'm not eating my breakfast with icicles hanging from my nose (OK, I'm exaggerating, but you see my point)! Despite this, I believe it’s possible to have too much automation, especially when there’s no manual override; wouldn't it be awful if a pilot couldn't manually shut down a plane’s engine in the event of an emergency, or if a nuclear power plant couldn't be shut down with the flick of a switch?

The point I'm trying to make is that automation is great, but only to a certain extent, and no one knows this better than search giant Google! Google’s recent history is littered with situations where its constant desire to automate everything has left the company more than a little red-faced!

Google Photos and its tags

The most recent, and perhaps most shocking, case of automation gone wrong happened just a few days ago. The Google Photos app, which launched in May, automatically filed a number of photos of a black computer programmer and his friend in an album called ‘gorillas’. Naturally, the issue was picked up on social media and caused global outrage, with many calling Google racist.

The computer programmer also took to Twitter to demand answers from Google, asking: “what kind of sample image data [they] collected that would result in this […]?” Google responded, saying: "This is 100% Not OK" and promising to fix the issue. However, after a number of unsuccessful attempts to fix the algorithm, Google pulled the ‘gorilla’ tag from the app altogether. Google has stated that the app’s technology is new and that it will learn as more photos are posted to the application, but why do we need tags to be added to photos automatically? I'm sure the user was more than capable of adding his own tags and folder names. Without this automation, Google wouldn't have been branded racist and the user wouldn't have experienced such unpleasantness at the hands of a global giant.

Driverless Cars – an accident waiting to happen?

I’ll admit I'm a massive petrol-head; I love everything about cars, from driving them to keeping them clean. So it saddens me a little that Google felt the need to create driverless cars, especially as they seem to be crashing quite frequently! According to the National Highway Traffic Safety Administration, Google’s self-driving cars have suffered 5 minor accidents while driving 200,000 miles in Silicon Valley over the last few months. This figure is nearly 10 times higher than the national average and could indicate that these ‘machines’ aren't as safe as we have been led to believe.

Google has stated that these accidents were a result of human error; in other words, humans driving ‘normal’ vehicles crashed into its cars. The reason? Apparently, drivers were distracted by the word ‘Google’ on the driverless cars, along with the gizmos on the roofs that give the Google cars a near 360-degree view of the road around them. Nevertheless, no humans were involved when two driverless cars from Google and Audi almost crashed in June!

Many experts have claimed that driverless cars will save lives in the long run, and I have no doubt they will. But the same experts also said that while driverless cars may have a better ‘all-round driving style’, that might not be a ‘human driving style’. Humans are unpredictable by nature, and it’s hard to define what a ‘human driving style’ is because everyone has their own. What’s more, thousands of people are found guilty each year of speeding and other misdemeanours such as not indicating. So is it really sensible for automated cars to drive like humans? And if experts believe so, is there any point in driverless cars at all, when we already have 7 billion people on earth ready to drive like humans?

Don’t believe everything Google tells you!

One of the main rules of the Internet is to never believe everything you see and read on it! A great example of this is Google’s SERPs. If you search for the capital of Wales, for instance, you would expect to get accurate, detailed information about Cardiff and, at the time of writing, I can report that you do! However, Google’s algorithm got it very wrong late last year and early this year when users asked about Barack Obama and Kim Yo-jong. In November 2014, Google thought Obama was the ‘King and Emperor of the United States of America’! The search giant’s algorithm automatically pulled this hilarious answer from the news site Breitbart and added it to its Answers Database without any manual checks (or at least I hope so!).

Then in January, when users searched for Kim Yo-jong, Google presented a clearly photoshopped image of Kim Jong-un as his younger sister. The image appears to have come from the social media website Taringa, but once again it proved that Google’s automatic results aren't necessarily the most accurate. Despite the image being hilarious, I bet it didn't go down well with officials in North Korea!

Once again, I must say that I'm totally in favour of automation, but in the examples above I feel it has gone just a little too far! I’d love to hear your thoughts on automation. Do you feel the same way I do, or would you love to see more of it? Share your thoughts with us on Facebook, Google+ and Twitter!
