Why you still need to apply your own Human Intelligence to your Artificial Intelligence solutions

Artificial Intelligence is getting a lot of press lately, from horror scenarios that would rival the Terminator movies, through driverless cars, to supporting medical diagnosis. However, as with a lot of things, the hype is driven in large part by fear. In a world optimised for "clicks" on the internet, it's the sensational, rather than the balanced, which comes to the fore. Perhaps I should have used a click-bait headline instead?

"10 ways Artificial Intelligence will wipe out humanity, the 3rd you won't believe!" would probably get me more likes and views. But as a natural analytical person I tend to prefer a balanced view. That said I do seem to be arguing against some pretty powerful and intelligent people. Just look at some of the names below quoting their views on AI.

"We are summoning the Devil!"

- Elon Musk on AI

"The development of full artificial intelligence could spell the end of the human race."

- Professor Stephen Hawking

“As a technologist, I see how AI and the fourth industrial revolution will impact every aspect of people’s lives.”

– Fei-Fei Li

“AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies.”

-  Sam Altman

"I am in the camp that is concerned about super intelligence."

– Bill Gates

“What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.”

-  Tim Cook

It's a cheerful thought, isn't it? It would appear that AI is going to take your job, take your home and take your planet! The truth is, nobody really knows what is going to happen, certainly not 200 years into the future. I can imagine AI taking over, but just because I can easily imagine something doesn't mean it's likely, or even possible. I can imagine winning the lottery: I can feel the excitement, I can see the house I'd live in, I can see the bike I'd buy. It feels real. It feels likely. But it's not. I'm not going to win the lottery, I'm almost certain of it!

That said, AI is pretty powerful, and it is a part of your life right now. But let's not kid ourselves: it's not actually that intelligent. Computers don't understand the way humans do; they can't feel, and they don't see the world like we do. AI "sees" the world in numbers and maths.

The Difference between Human & Artificial Intelligence

Human

Humans have general intelligence, meaning we can perform a wide variety of different tasks. For example, we can drive a car, fly a plane, ride a bike, read a book, paint a picture, write poetry, develop philosophies and set meaningful goals.

Humans are good at understanding context; that is, we can understand why. For example, a human could spot a correlation between the average daily temperature on the summit of Olympus Mons (a mountain on Mars, by the way) and the responsiveness to a ladies' catalogue mailing in the United Kingdom. A (sensible) human analyst would know it would be daft to include this in a model. An AI would not.

Artificial Intelligence

AI currently has Narrow Intelligence, in that it is focused on one particular task. This could be image recognition, playing chess or Go, or driving a car on a highway.

AIs are fantastic at finding patterns within data, much better than humans. But AIs don't understand the patterns. In the example above, an AI doesn't understand the context; it doesn't know what Mars is, or what a catalogue mailing is. As a result, AIs tend to work best when bound within a limited context, such as playing chess.
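To make that concrete, here's a tiny Python sketch of the Olympus Mons example. Both series below are made-up random walks, so any "relationship" between them is pure coincidence, yet they will often correlate surprisingly strongly. This is exactly the kind of pattern an AI will happily latch onto:

```python
import numpy as np

rng = np.random.default_rng(7)
days = 365

# Two synthetic, completely unrelated random walks (nothing here is real data)
mars_temp = np.cumsum(rng.normal(size=days))         # pretend daily summit temperature
mailing_response = np.cumsum(rng.normal(size=days))  # pretend UK catalogue response rate

r = np.corrcoef(mars_temp, mailing_response)[0, 1]
print(f"Correlation between two unrelated series: r = {r:.2f}")

# Random walks frequently correlate at |r| > 0.5. An algorithm would treat
# that as signal; a human analyst knows Mars weather can't drive UK sales.
```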

AI Image Recognition - An Example of a Lack of Intelligence

I was recently having an email conversation with one of my clients, which invariably turned to cakes and baking. She sent me a picture of some Norwich City cupcakes she'd baked. Not wanting to be outdone, I wanted to send her a picture of a cake I'd baked for my son's 1st birthday, so I turned to my Amazon Prime photo app.

I browsed through the photos by date, but for some reason I couldn't find the photo I was looking for. So I decided to enter "Cake" in the search bar; I'd never used it before and was only vaguely aware it existed. Immediately I was presented with loads of photos of cakes from my library, including the very one I was looking for! Awesome: the AI did the job. It found what I was looking for, quickly and easily, just by applying image recognition to my photos. But there were also a fair few photos which were not cakes.

Cake or Not?

This is the cake I was looking for! (Actually, it's not; this is the one my wife made. I'm far too embarrassed to show the one I produced!) So, yes, it's a cake.

This is a photo of my little boy holding the birthday card he got me. In fact, he was gripping it very hard and wouldn't give it to me! However, this is clearly not a birthday cake. But this is the point: the AI doesn't see what we see. The AI would have been trained by being shown loads of photos, some of cakes, others not featuring cakes. It spots features and patterns which are common within images of cakes; if it recognises those patterns within an image, it assigns a high probability that the image is a cake.

So, in this case, the AI has probably spotted the "Happy Birthday" text on the card, or even the card itself, and given me a false positive for a cake. This is because the training data would almost certainly contain lots of images of birthday cakes with cards also in shot. So, it's not intelligent; it's just spotting correlations.
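If you're curious what's happening under the bonnet, here's a rough Python sketch of that kind of classifier. I have no idea what model Amazon actually uses, so this is purely illustrative: it uses an off-the-shelf pretrained network (ResNet-18 with ImageNet labels) and a hypothetical file name. The point is the output: a probability per label, with no understanding behind it.

```python
import torch
from PIL import Image
from torchvision import models

# Off-the-shelf pretrained network; the photo app's real model is unknown
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()

# "birthday_photo.jpg" is a hypothetical file name for illustration
img = preprocess(Image.open("birthday_photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]

# The model only ever outputs a probability per learned label; it has no
# notion of what a cake *is*, so anything with cake-like patterns (round,
# soft texture, "Happy Birthday" text nearby) can score highly too.
top5 = torch.topk(probs, 5)
for p, i in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][i]}: {p:.1%}")
```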

The next image is of a potentially poisonous mushroom, certainly not something you'd want to eat instead of a cake. Again, we've got a false positive. In this case I think it's because the AI has spotted a round object (cakes are often round) with a soft-looking texture (like a lot of cakes).

Interestingly, if I search for "mushroom" it also returns this image, so the AI suggests the image is both a cake and a mushroom!

The next false positive is the Ninky Nonk from the CBeebies children's programme In the Night Garden. If I'm honest, I don't know why the AI has suggested that this is a cake. Maybe it's the colours? The shapes? Or maybe the training dataset contained a lot of In the Night Garden themed cakes?

Train or Not?

Now, I don’t have many images of trains in my collection, but it’s another area where the AI failed quite miserably at. In fact, it only returned one correct image or a train, whilst the rest were false positives. There were even a couple of examples where it failed to spot a train.

This image is of my little boy watching a steam train pass on the Gloucestershire Warwickshire heritage steam railway. It's an incomplete image of a train, but it's been correctly identified.

However, I’m not convince it’s correctly identified for the right reasons, as we’ll see in the next couple of examples. Because parallel lines seem to confuse the AI, and the above image has loads!

This is a picture of the Arctic Monkeys back in 2005. It's not a train, although they did have an unreleased song called Choo Choo. In this case I think the AI has spotted the parallel lines of the guitar's neck and strings.

The next image is closer: it's the track of the Llanberis line up Snowdon. Now, trains typically exist on train tracks, so the training images almost certainly feature parallel rails alongside trains. In this case, the AI is spotting the track and suggesting there is a train there. Oops.

But it did also recognise this as a train track. So does the AI know the difference between a train track and a train? I think it needs a person to refine the model a little more.

Finally, this is an image of the train climbing up Snowdon. The AI didn't recognise this as a train, probably because you can't really see the track.

So, what does this mean to me as a data-led marketer?

The key message from this blog is that AI doesn't think and doesn't understand. AI sees a pattern or a correlation and decides to use it in its decision-making. However, just because two things are correlated doesn't mean they are related in any way. In fact, the wider and bigger the pool of data brought into modelling, the greater the chance of finding spurious relationships.
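Here's a quick illustration of that last point, again using purely random, made-up data. With 10,000 candidate features that are nothing but noise, at least one will correlate respectably with the response by chance alone:

```python
import numpy as np

rng = np.random.default_rng(42)
n_customers, n_features = 200, 10_000

response = rng.normal(size=n_customers)                 # pretend response rates
noise = rng.normal(size=(n_customers, n_features))      # 10,000 pure-noise features

# Correlation of each noise feature with the response
corrs = np.array([np.corrcoef(noise[:, j], response)[0, 1]
                  for j in range(n_features)])

best = int(np.argmax(np.abs(corrs)))
print(f"Strongest spurious 'signal': feature {best}, r = {corrs[best]:.2f}")
# Typically around r = 0.3, despite every feature being meaningless noise
```

This is why "the model found it in the data" is not, on its own, a reason to trust a feature.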

So, when you are building your predictive models and deploying your recommendation engines, ask yourself the key questions: "So what?", "Does it make sense?" and "How could this be wrong?". If a feature in your model is unexpected or unexplainable, I'd be inclined not to use it, or at least to investigate further.
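In practice, that check can be as simple as listing a model's most influential features and asking whether each one makes sense. Here's a minimal sketch; the file and column names are hypothetical examples, not from any real dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# "mailing_history.csv" and its "responded" column are hypothetical
df = pd.read_csv("mailing_history.csv")
X = df.drop(columns=["responded"])
y = df["responded"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank features by how much the model relies on them, then eyeball the list
importances = (pd.Series(model.feature_importances_, index=X.columns)
               .sort_values(ascending=False))
print(importances.head(10))
```

If something like the Olympus Mons temperature outranks previous purchase history, that's your cue to investigate, or to drop it before the model goes anywhere near production.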

Artificial Intelligence isn’t as clever as you think it is, although it would still beat me convincingly at chess!

