I have a confession to make: I’m addicted to trip planning.

I blame this first-world problem on two things: curiosity and dynamic pricing. While curiosity might fuel my craving to explore new places, dynamic pricing has taken that craving to a whole new level.

Long gone are the days when I would walk into a travel agency and accept whatever I was given. Today, I spend days monitoring flight prices online, always on the lookout for a good deal.

In early 2013, the travel search engine Kayak introduced a fare-forecasting tool that lets travelers assess whether the prices in their search results will rise or fall within the next week. This was the first time I began to wonder how the travel industry could forecast demand and optimize its prices. What lies behind those flight purchase recommendations?

I soon learned that Kayak uses large sets of historical data from prior search queries, plus complex mathematical models, to develop its forecasting algorithm. I imagined that some of the input variables include the current number of unsold seats, the date and time of the booking (especially the number of days left until departure) and the current competition on the same route. But what if Kayak also learned more about my personal preferences based on my past behavior? If there were only one window seat left on that flight, would it change its purchase recommendation from wait to buy? Would it be able to recommend a trip, knowing I prefer nature over cityscapes, am most likely to buy on Tuesdays and usually travel over the weekend?
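
Kayak has never published its model, but just to make the idea concrete, here is a toy sketch of a "buy or wait" classifier trained on the kinds of features I speculated about above. Everything in it, the features, the data and the model choice, is my own assumption, not Kayak's actual system:

```python
# Illustrative only: a toy "buy or wait" classifier over hypothetical
# fare features. Kayak's real model and features are not public.
from sklearn.ensemble import RandomForestClassifier

# Each row: [unsold_seats, days_until_departure, competing_fares_on_route]
X_train = [
    [12, 30, 4],  # many seats, a month out, several competitors
    [3, 5, 1],    # few seats, last minute, little competition
    [25, 60, 6],
    [2, 2, 2],
]
# Labels: 1 = price likely to rise (buy now), 0 = likely to fall (wait)
y_train = [0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Query: 8 unsold seats, 14 days to departure, 3 competing fares
print("buy" if model.predict([[8, 14, 3]])[0] == 1 else "wait")
```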


A lot has happened in dynamic pricing since 2013, of course. One of the latest companies to join the trend is Airbnb. Its hottest new feature, Price Tips, helps hosts easily price their listings dynamically using the company's new open-source machine learning package, Aerosolve. Airbnb's use of machine learning isn't limited to standard dynamic pricing, though: its models automatically generate local neighbourhoods, rank images and learn hosts' preferences for accommodation requests based on their past behaviour.


What does this mean for trip planning addicts?

Well, I'm personally looking forward to the time when choosing my next travel destination is made simpler. I picture personalised travel discovery and contextual insights, the kind that take into account both your conscious and unconscious preferences. The Texas-based startup WayBlazer is already tapping into this, employing IBM's Watson cognitive computing technology to listen to and understand its customers and present considered, tailored travel suggestions.

What could be next?
First of all, let me come clean: I am an old (and proud) seasoned gamer, but a newbie with respect to artificial intelligence.

1. The Metal Gear Solid franchise is cool. Background: around 2008 I played a game called Metal Gear Solid 4. The storyline was far from simple. In the world pictured in the game, war is no longer sporadic and confined to specific locations; instead, nations all around the world are in constant battle. In fact, war has become a market, an industry with full potential: private military companies (PMCs) and governments compete for world domination. The former are hired freely on the market by the highest bidder.

Our hero, Snake (now called Old Snake, since in this game he is near retirement), is summoned back for one last mission: to assassinate his twin brother, Liquid Snake, who has become one of the heads of the PMCs and an actual threat to the world (of course, he wants to dominate it).

Old Snake having a smoke

2. I am lost. So, how does this relate to AI? We are getting there. In this world, since (i) war is a super-competitive business and (ii) everyone reading this knows the law of supply and demand, contractors (PMCs) want to offer the best service possible (meaning: better soldiers).

How did they become more competitive? Nanomachines were inserted into soldiers to maximize their performance. These gadgets gathered information from previous soldiers' experiences and, based on that, could enhance a soldier's performance: reducing stress levels in battle, improving sight, reducing tiredness, developing awareness and strength, and so on. What's more, the nanomachine system linked each soldier's guns to that individual (so only the registered user could fire them). Nanomachines also controlled the individual's actions, such as shooting a gun, seeing, and walking and running pace. They were so advanced that they could manage the effects of a disease and supply and administer adrenaline, nutrients, sugar, nootropics and medicines.

3. AI development: although Metal Gear Solid 4 was released in 2008 by Hideo Kojima and Konami, the AI ideas behind the franchise had been developing since the last century. Even though it is an imaginary world, it gives us a taste of the challenges and goals of artificial intelligence and machine learning. Could the events triggered in this game be possible in the near future?

4. Storyline, or what happens in the game: of course, you know how this goes. The super-villain Liquid Snake takes control of the AI behind the nanomachines and, with it, gains control of all the guns and the will of the PMC soldiers. So Old Snake has to fight his super-powerful brother to restore balance in the world, all while he is dying (oh yes, I failed to mention that: Old Snake is dying as the game unfolds). Interesting, huh?

Take a peek at Liquid Snake's insurrection and how he took over the system:

If you haven't played it, be sure to get it. In fact, I strongly encourage you to play it. Better yet: get the whole franchise and play it from the beginning! Cheers!


I was recently reflecting on an article by Paul Larosa in the Huffington Post, and as an African American who still struggles to hail cabs, I could relate to his story. In the article he talks about how African Americans are still not being picked up by taxi drivers. Larosa opens with a story about a friend who was with his daughter and tried to hail a cab in downtown Manhattan with no luck. The friend then thought his biracial daughter could do it; her efforts failed as well. He then tried what many black Americans do: the "bait and switch" method, where you get a different person to hail the cab. In this case, Larosa's friend had his wife, who is white, hail the taxi. Within two minutes she was successful. This is an all too familiar reality in America.

The article wonders why this happens. Only 6% of cab drivers are American-born, so how can they hold these deeply held biases? They acquire them after being indoctrinated into American society, and likely even before they reach American soil. They are probably taught that black Americans are less desirable fares and should not be picked up. What was even more shocking: a Senegalese taxi driver admitted he can spot American blacks from a block away and tries to avoid them. This is ironic because, as a black man in America, he would be ignored as well.
                
Though frustrating, there is a potential solution that could end this discriminatory practice within the next decade: driverless taxis. As driverless cars quickly become a reality, machine learning can be used to tackle this problem. Ideally, a driverless taxi would not discriminate, no longer requiring black Americans to rely on "bait and switch" methods. In Fujisawa, Japan, Robot Taxi will test a driverless taxi as early as March 2016. The cars work with a series of sensors that help them stay on the road, avoid accidents and avoid hazards. They will initially run with human supervision, but will eventually be capable of running completely on their own. If this is successful, Toyota plans to launch a driverless taxi service for the 2020 Olympics in Tokyo, taking spectators to venues across Tokyo and the surrounding areas. If that succeeds, cab companies around the world will likely switch to driverless taxis.


After successful tests at the Olympics, driverless cars could revolutionize the industry. For the first time, I could potentially get the first available taxi without experiencing racial bias. This is extremely exciting. However, the excitement comes with reservations, and the fear lies with machine learning. Will these cars learn from the internet the same biases that current taxi drivers have? With only 6% of drivers American-born, many developed their biases both here and before coming to America. As we learned from IBM's Watson, a machine connected to the internet picked up many swear words. When driverless cars connect to the internet, what will happen? Will they learn the same biases as humans? If this happens, black Americans have no hope for fairness.


picture: www.hitc.com


Machines informing financial positions and investments: are we simply moving forward with new technology, or are we just downright lazy?

Our past is cast in stone and unchangeable, and our present is an ongoing phenomenon; our future, however, is uncertain. Nowhere are the consequences of this uncertainty more significant than in the world of investment banking. Within investment banking, particularly in light of artificial intelligence (machine learning), there are two critical products to consider: mergers and acquisitions (M&As) and equity stock markets (shares).

Consider M&As as they were done previously: medium and large companies required the services of industry experts to identify the best-fit M&A targets and to provide the web of information needed to close deals. With shares, the biggest challenge for stock traders in the past was predicting stock prices and market trends in the financial equity market to maximize returns. One of Warren Buffett's more inspiring quotes, "It's easier to look back than to look into the future," emphasizes the difficulty of predicting stock behavior. In those days, stock trades were executed on a physical trading floor and required a network of people, with traders relying on bare intuition and basic trend-analysis tools. All of this happened in the analog world!

In today's digital age, different algorithms and indicators have been developed that change the interaction between buyers and sellers. Buyers can find a best-fit company in a matter of hours, saving research costs and precious time. Artificial intelligence tools such as machine learning and big data have given rise to several programs. These programs ingest as much historical data about a company as is available and seek to model the relationship between that history and the future price of the company's shares. To achieve this, investors and researchers build different algorithms (decision trees, support vector machines, naive Bayes classifiers, etc.) and price indicators. Some of these algorithms are designed to be so intelligent that they are not rigidly tied to any single investment approach, but adapt to market trends and situations.
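
To make this concrete, here is a toy sketch of one of the algorithms named above, a decision tree, trained to guess whether a stock closes up or down. The indicators and data are invented for illustration; no real trading model is this simple:

```python
# Illustrative only: a toy up/down classifier built from made-up
# price indicators, in the spirit of the algorithms named above.
from sklearn.tree import DecisionTreeClassifier

# Each row: [5-day return %, 20-day return %, volume vs. 20-day average]
X_train = [
    [2.1, 5.0, 1.3],
    [-1.4, -3.2, 0.8],
    [0.5, 4.1, 1.1],
    [-2.7, -6.0, 1.6],
]
# Labels: 1 = next close higher, 0 = next close lower
y_train = [1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

# A day with mild positive momentum and average volume
print(model.predict([[0.8, 2.0, 1.0]]))  # -> array([1]) or array([0])
```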
All this is not to say that traditional investment banking as practiced in the "analog world" is dead. In fact, intelligent and experienced bankers and traders are still very relevant in negotiating and structuring deals to reduce tax payments, and in placing stock trades based on knowledge of future events. A valid question, though, is: "How much longer will human experience still be required?" Currently, a tech startup based in Cambridge, Massachusetts, called Kensho has received a lot of buzz and large investments from corporations such as Goldman Sachs, Google Ventures and CNBC for its development of a piece of software called Warren. Kensho believes Warren will replace financial analysts in investment transactions. The company boasts of Warren's capacity to search financial data and reports and provide replies in natural language within seconds.

I have a friend, George, back home in Nigeria. He is one of the hundreds of thousands of investment bankers in the world. Does this mean he is going to be jobless in the near future? I know machines are smart, or can be programmed to be smart, but we all recognize human intuition and mankind's capacity to reason through complex and confusing situations. In such situations, simple algorithms may fail, especially when events do not obey the rules. Take, for instance, the earthquake and tsunami in Japan in 2011 (deemed the costliest natural disaster in history). Then you need a human; then you need someone like my friend George. Now, I am not in any way suggesting that technology is not the way forward. However, in going forward technologically, we need to allow man to move machine and machine to listen to man. After all, even unmanned space shuttles are still "manned" by people on Earth.





"We are using artificial intelligence so people can remain in their homes for as long as possible."
– Lead scientist Alex Mihailidis

Several years have passed since smart homes first started to be created for people with Alzheimer's disease and other cognitive impairments. Because of the amount of care people with Alzheimer's require, they often move into institutions where professionals provide 24-hour service. Most patients would prefer to stay home longer; however, this is impossible in many families where no member is available all day, every day, to take care of them.

Because of this reality, artificial intelligence has been used to create smart homes, allowing people with Alzheimer's and other cognitive impairments to live in their houses longer while helping them lead more independent lives.

In order to capture specific activities and movements inside the house, these smart homes include technological devices: sensors. These sensors can be, for example, cameras in the ceiling linked to computers that detect whether a person has fallen down. Additionally, prompts are used to assist residents when needed, giving them "hints" for accomplishing a task. There are various types of prompts: auditory, pictorial, video and light. Auditory prompts can be divided into three categories: verbal (instructions), sound (alerts) or music. Pictorial prompts can be either photographic (pictures) or textual (keywords). Video prompts are pictorial or modelling (someone performing what should be done); for instance, bathrooms include a computer screen with a video showing residents how to wash their hands. Finally, light prompts are changes of color in a light bulb, or a laser pointer highlighting an action. For these prompts to be effective, it is essential to remember that every patient is different: different prompts work for different people.
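
Just to illustrate the patient-by-patient matching of prompt types described above, here is a toy sketch; real prompting systems are far more sophisticated, and every name and data point below is hypothetical:

```python
# Illustrative only: a toy model of matching prompt types to individual
# patients, following the taxonomy above. All names are hypothetical.
from dataclasses import dataclass

PROMPT_SUBTYPES = {
    "auditory": ["verbal", "sound", "music"],
    "pictorial": ["photographic", "textual"],
    "video": ["pictorial", "modelling"],
    "light": ["bulb_color", "laser_pointer"],
}

@dataclass
class Patient:
    name: str
    responds_best_to: str  # one of the PROMPT_SUBTYPES keys

def choose_prompt(patient: Patient, task: str) -> str:
    """Pick the prompt category this patient responds to for a task."""
    subtype = PROMPT_SUBTYPES[patient.responds_best_to][0]
    return f"{patient.responds_best_to}/{subtype} prompt for '{task}'"

# A resident who responds best to modelling videos, e.g. hand-washing
resident = Patient(name="A.", responds_best_to="video")
print(choose_prompt(resident, "wash hands"))
```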

Smart homes give a sense of hope not only to people with Alzheimer's disease but to their caregivers as well, who feel these technologies will let their loved ones remain at home longer, rather than being sent to an institution.

In the future, which other diseases will artificial intelligence be able to cure?

Have you ever wondered how the world looks through the eyes of a party raver having a psychedelic experience? Well, Google may have just made this "legally" possible (and yes, without having to take any illegal substance) through one of its new platforms.

A few months ago, Google announced that it now possesses the technology to gaze inside the mind of an artificial intelligence program. Having invested heavily in machine learning, Google is one of the world's biggest backers of artificial intelligence development. Google's recent acquisition of the British company DeepMind is a testament to its vigor in unlocking the potential of artificial intelligence, and it is through the DeepDream platform that one can perceive what a machine "sees" or "dreams."

Image: Google

The network uses 10-30 stacked layers of artificial neurons, each layer building incrementally on the results of its predecessor until the final layer produces the answer. On image recognition, the network set a new benchmark, returning results better than anything before it, and as a by-product it can also "dream." These artificial dreams output some captivating images, to say the least, going from virtually white noise to something out of a surrealist painting, or perhaps the vision of our raver above on a psychedelic trip. And you thought machines can't be creative!
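
For the curious, the core DeepDream trick can be sketched in a few lines: run gradient ascent on the image itself so that it excites a chosen layer more strongly. Below is a minimal sketch assuming PyTorch and torchvision; the layer choice, step size, step count and starting image are illustrative stand-ins, not Google's actual settings:

```python
# Minimal DeepDream-style sketch: nudge the *image* via gradient ascent
# so a chosen intermediate layer fires more strongly. Illustrative only.
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

# Start from random noise; DeepDream also works from a real photo.
img = torch.rand(1, 3, 224, 224, requires_grad=True)

# Capture activations of an intermediate layer with a forward hook.
acts = {}
hook = model.inception4c.register_forward_hook(
    lambda module, inputs, output: acts.update(out=output)
)

for step in range(20):
    model(img)
    loss = acts["out"].norm()  # "excite this layer" objective
    loss.backward()
    with torch.no_grad():
        # Normalized gradient-ascent step on the pixels themselves
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)  # keep pixels in a displayable range

hook.remove()
```

Run long enough, patterns the network "knows" (eyes, arches, animal faces) start to surface out of the noise, which is exactly the surrealist effect described above.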

To access image patterns of how Google's neural network "sees" or "dreams," go through this post.

The above seems really creative and is all well and good, but the million-dollar question remains: how dependable is artificial intelligence? Where do humans draw a line in the sand between machine-driven and human-driven output? A stark reminder of AI's current technological limitations was made evident to Google the hard way. Google Photos employs advanced artificial neural networks to analyse gazillions of images, interpret them and return the right ones for a user's query. The app uses face- and object-recognition software to automatically tag and sort photos; however, in a recent instance it mistakenly tagged pictures of a black couple as "gorillas." Google had to issue an apology after Jacky Alcine, the man in the picture, was outraged to see the racially charged term appear in the app. Alcine also tweeted a screenshot showing that every image of his friend was tagged this way, and suggested the reference images Google had collected did not have black people in mind.



The above incident confirms one of our worst fears: artificial intelligence is racist, or so it seems. The supposedly dumb, data-in data-out machines that always strived to catch up to us humans have acquired prejudice.