Our world is being rapidly reshaped by machine learning. In the last three or four years we have woken up to the fact that we no longer need to teach computers how to perform complex tasks like image recognition, text translation, computer-generated speech or algorithmic trading: instead, we build systems that let them learn how to do it themselves.

Currently the most prominent form of machine learning, called "deep learning", builds a complex mathematical structure called a neural network from vast quantities of data. Neural networks are designed to be loosely analogous to how the human brain works, and were first described back in the 1940s.

In the last three or four years, computers have followed Moore's law and become powerful enough to use neural networks effectively for such tasks. When Google DeepMind's AlphaGo program defeated the South Korean master Lee Se-dol in the board game Go earlier this year, the terms AI, machine learning, and deep learning were all used in the media to describe how DeepMind won. And all three are part of the reason why AlphaGo trounced Lee Se-dol. But they are not quite the same thing.

DeepMind's goal is to create artificial agents that can achieve a level of performance similar to a human's. Like a human, a DeepMind agent learns for itself to find successful strategies that lead to the greatest long-term rewards. This paradigm of learning by trial and error, solely from rewards or punishments, is known as reinforcement learning (RL). RL is inspired by behaviourist psychology and is concerned with how software agents ought to take actions in an environment so as to maximise some notion of cumulative reward.
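To make the trial-and-error idea concrete, here is a minimal sketch of tabular Q-learning, one of the simplest RL algorithms, on a toy five-state corridor where the agent is rewarded only for reaching the right-hand end. The environment, hyperparameters, and names are illustrative assumptions for this sketch, not anything DeepMind actually used.

```python
# A toy tabular Q-learning agent on a five-state "corridor": the agent
# starts at the left end and is rewarded only for reaching the right end.
# States, actions, rewards, and hyperparameters are illustrative
# assumptions, not anything from DeepMind's systems.
import random

N_STATES = 5          # states 0..4; state 4 is the rewarded goal
ACTIONS = [1, -1]     # step right or step left
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

# Q-table: estimated long-term reward for taking each action in each state.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Trial and error: mostly act greedily, occasionally explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Nudge the estimate toward reward + discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# The learned greedy policy: step right (+1) in every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

No one tells the agent that "right" is the correct direction; the preference emerges purely from the reward signal, which is the essence of RL.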

Before I go any further with the subject, let it be known that I am making a couple of important assumptions.

  1. We do not already live in a computer simulation, as the film The Matrix suggested back in 1999, and both Elon Musk and Nick Bostrom are wrong: we are not an experiment run by an advanced civilisation on its super-massive computer. It would be rather strange for a world that already lives inside a matrix to go on and create a second-generation matrix.
  2. We accept that Moore's law will outlive the limitations of the silicon wafer and move on to new materials, techniques or quantum computers. Therefore, we will assume that computers and other devices will continue to become more powerful, just in different and more varied ways.

Also like a human, a DeepMind agent constructs and learns its own knowledge directly from raw inputs, such as vision, without any hand-engineered features or domain heuristics. This is achieved through the deep learning of neural networks. DeepMind has pioneered the combination of these approaches - deep reinforcement learning - to create the first artificial agents to achieve human-level performance across many challenging domains.

The impact of Moore's law is visible all around us. Today four billion people carry smartphones in their pockets, each one more powerful than a room-sized supercomputer from the 1980s. Countless industries have been upended by digital disruption. Abundant computing power has removed the old limits on scientific calculation.

The most basic form of machine learning is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. Rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, programmers "train" the machine using large amounts of data and algorithms that give it the ability to learn how to perform the task.
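As a toy illustration of "trained from data" versus "hand-coded rules", the sketch below learns a decision threshold from labelled examples instead of having a programmer pick one. The data, labels, and function names are made up for this example.

```python
# Instead of hand-coding a rule that separates "small" numbers from
# "large" ones, we learn the boundary from labelled examples.
# (Toy illustration with made-up data, not a real ML library.)

def train(examples):
    """Learn a decision threshold from (value, label) pairs."""
    small = [x for x, label in examples if label == "small"]
    large = [x for x, label in examples if label == "large"]
    # Place the boundary halfway between the two class means.
    mean_small = sum(small) / len(small)
    mean_large = sum(large) / len(large)
    return (mean_small + mean_large) / 2

def predict(threshold, x):
    return "small" if x < threshold else "large"

data = [(1, "small"), (2, "small"), (3, "small"),
        (8, "large"), (9, "large"), (10, "large")]
threshold = train(data)
print(predict(threshold, 2))   # small
print(predict(threshold, 9))   # large
```

Real machine learning replaces this single learned number with millions of learned parameters, but the workflow - parse data, fit a model, predict - is the same.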

Here is a timeline of how today's AI explosion came about:

  • 1950-1980 >> Early artificial intelligence is conceived.
  • 1980-2010 >> Machine learning begins to flourish.
  • 2010-beyond >> Deep-learning breakthroughs drive the new AI boom.

So what changed recently to make AI explode? Since around 2015, GPUs have become widely available, making parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (the whole Big Data movement): images, text, transactions, mapping data, you name it.

The concept of machine learning came from the very early work done in the 1950s by the AI crowd, but, as we know, none of it achieved the ultimate goal of general AI, and even narrow AI was mostly out of reach with early machine-learning approaches.

The areas that look most promising for the current generation of machine-learning work are computer vision, computer-generated speech, and a number of health-related solutions. However, machine learning is still a complex beast. Beyond simplified playgrounds and demos, there is not much you can do with neural networks yourself unless you have a strong background in coding. But I wanted to put Conrado's claims to the test: if machine learning will be something "everybody can do a little of" in the future, how close is it to that today?

Currently, DeepMind's motto is built around a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how.

"If we’re successful, we believe this will be one of the most important and widely beneficial scientific advances ever made, increasing our capacity to understand the mysteries of the universe and to tackle some of our most pressing real-world challenges. From climate change to the need for radically improved healthcare, too many problems suffer from painfully slow progress, their complexity overwhelming our ability to find solutions. With AI as a multiplier for human ingenuity, those solutions will come into reach. "

You can indeed play with some of the demos and check back from time to time to see the advances being made. Our world is changing fast, and we need to re-position ourselves for what is coming very soon - if not for us (it is probably too late), then at least for our kids.

The Sentiment Analysis demo is a good way to see how the science is evolving. I tested a movie review of "The Girl on the Train", and you cannot just drop in the whole review and see results - you need to spoon-feed this baby AI one sentence at a time.

Try these:

  • Ultimately it's a semblance that works
  • thanks to coming first and being the better movie
  • but it's no reason to dismiss The Girl On The Train
  • the film does manage to build as a compelling mystery driven by a truly fantastic lead performance

There are many more demos out there for you to try.

The only thing that seems to really worry people at the moment is the speed of advances in the field. A.I. is setting its sights on world domination. I noticed that researchers at DeepMind, along with University of Oxford scientists, aren't taking any chances with the possible negative outcomes of what they have created. Intuitive learning A.I., while groundbreaking and amazing, with near-infinite practical uses, might pose a threat to the humans it will someday serve. "The slave only dreams of being king."

The list of corporations working on A.I. solutions is growing, and it is not just Google that is applying A.I. to real-world problems. Facebook (whose A.I. is spending its time mining your so-called personal data to create a better bot), Microsoft, IBM (whose Watson is learning how to be more human) and NVIDIA are all developing A.I. software to solve complex human problems and move us into a more automated future. On top of that, big universities around the world are gearing up for more research in the subject right now.

In an article titled "Google DeepMind Researchers Developing A.I. Kill Switch, Just In Case", it was reported that a white paper titled "Safely Interruptible Agents" had been authored by Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute at the University of Oxford and the Machine Intelligence Research Institute.

There is a lot of heavy math in it that is impossible for me to understand, but the paper does have some regular words that you don't need A.I. or a PhD to comprehend: they want humans to be able to shut down an A.I. without the A.I. being aware of the humans' ability to do so. That much is made clear in the conclusion:

We have proposed a framework to allow a human operator to repeatedly safely interrupt a reinforcement learning agent while making sure the agent will not learn to prevent or induce these interruptions.

Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for.

Their abstract states that it might be necessary for humans to press the "big red button", and whilst it is rather reassuring for all of us to hear this, the paper also supposes that an A.I. may not want to be shut down and could theoretically learn a way to avoid it!

If at some point in the future an A.I. learns to avoid situations that would require a temporary shutdown, then... I don't know, and IMHO no one knows for sure what the end result would be. It has been theorised in many sci-fi books and Hollywood blockbusters, and we have already seen the story told in many ways. I will stop here and just sincerely hope it does not come to that.
