Machine Learning: Drawing an ‘Artificial Intelligence’ line in the sand ...


Have you ever wondered how the world looks through the eyes of a party raver having a psychedelic experience? Well, Google may have just made this ‘legally’ possible (and yes, without anyone having to take an illegal substance) through one of its new platforms.

A few months ago, Google announced that it now possesses the technology to gaze inside the mind of an artificial intelligence program. Having invested heavily in machine learning, Google is one of the world’s biggest backers of artificial intelligence development. Its recent acquisition of the British company DeepMind is a testament to Google’s vigour to unlock the potential of artificial intelligence, and it is through the Deep Dream platform that one can perceive what a machine ‘sees’ or ‘dreams’.

[Image credit: Google]

The network uses 10-30 stacked layers of artificial neurons, with each layer building incrementally on the results of its predecessor until the final layer produces the answer. For image recognition, the network set a new benchmark by returning results better than anything before it, and as a by-product it can also “dream.” These artificial dreams output some captivating images to say the least, going from virtually white noise to something that looks like it came out of a surrealist painting, or perhaps the vision of our raver above on a psychedelic trip. And you thought machines can’t be creative!
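For the curious, the rough idea behind this “dreaming” is simple: start from noise, pick one of the network’s intermediate layers, and repeatedly nudge the image so that layer’s activations grow stronger, amplifying whatever patterns the network thinks it sees. The sketch below only illustrates that idea using an off-the-shelf GoogLeNet from torchvision; it is not Google’s actual Deep Dream code, and the layer choice, step count and learning rate are arbitrary assumptions.

```python
import torch
import torchvision.models as models

# Load a pretrained CNN (a stand-in for Google's Inception network) and
# freeze its weights -- we only want to change the input image.
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one intermediate layer via a forward hook.
activations = {}

def hook(module, inputs, output):
    activations["feat"] = output

# Deeper layers tend to amplify object-like shapes, earlier ones textures.
model.inception4c.register_forward_hook(hook)

# Start from (almost) white noise.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    model(image)                          # hook fills activations["feat"]
    loss = -activations["feat"].norm()    # gradient *ascent* on activation energy
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)            # keep pixels in a displayable range
```

Iterating this at several image scales, and starting from a photograph instead of noise, is what gives the more elaborate, painting-like results Google published.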

To see examples of the image patterns showing how Google’s neural network “sees” or “dreams”, go through this post.

The above is all very creative, however the million-dollar question remains: how dependable is artificial intelligence? Where do humans draw a line in the sand between machine-driven and human-driven output? A stark reminder of AI’s current technological limitations was made evident to Google the hard way. Google Photos employs artificial neural networks to analyse vast numbers of images, interpret them, and return the right ones for a user’s search query within the app. The app uses face and object recognition software to automatically tag and sort photos; however, in a recent instance it mistakenly tagged pictures of a black couple as ‘gorillas’. Google had to issue an apology after Jacky Alcine, the man in the picture, was outraged to see the racially charged term appear in the app. Alcine also tweeted a screenshot showing that every image of his friend was being tagged that way, and suggested the reference images Google had collected did not have black people in mind.
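To make the failure mode concrete, here is a deliberately simplified sketch of what an auto-tagging pipeline and one possible safeguard might look like. Everything in it is hypothetical: `classify_image` stands in for a real neural-network classifier, and the threshold and blocklist values are made up for illustration; this is not how Google Photos is actually implemented.

```python
from typing import Dict, List

# Labels a system might refuse to apply automatically when a person is detected.
SENSITIVE_LABELS = {"gorilla", "ape", "monkey"}
CONFIDENCE_THRESHOLD = 0.80


def classify_image(photo_path: str) -> Dict[str, float]:
    """Hypothetical classifier stub: returns label -> confidence scores.

    In a real system this would run a trained neural network over the photo.
    """
    return {"person": 0.91, "outdoor": 0.75, "gorilla": 0.83}


def auto_tag(photo_path: str) -> List[str]:
    scores = classify_image(photo_path)
    tags = []
    for label, confidence in scores.items():
        if confidence < CONFIDENCE_THRESHOLD:
            continue  # drop low-confidence guesses entirely
        if label in SENSITIVE_LABELS and scores.get("person", 0.0) > 0.5:
            continue  # never auto-apply a sensitive label alongside a person
        tags.append(label)
    return tags


print(auto_tag("example.jpg"))  # -> ['person']
```

A blunt blocklist like this is obviously no substitute for better training data, but it is the kind of stop-gap a production system can ship immediately while the underlying model is retrained.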



The above incident seems to confirm one of our worst fears: that artificial intelligence is racist, or so it appears. The supposedly dumb, data-in data-out machines that have always strived to catch up to us humans seem to have acquired prejudice. The more sober reading is that these systems learn whatever their training data teaches them, so gaps and biases in that data, such as too few reference images of black faces, come straight back out as offensive results.
