Samples of our students' work
Because students must learn how to write
A sample of students' blog posts on the topics they like
Deep Learning by Maxime Zamani: a sample of a student's blog
Nowadays, we hear a lot about the artificial intelligence revolution, so we believe it is important to understand the mechanics behind these technologies. One of the most widely used and promising fields of this kind is deep learning. It has a great story behind it, one that shows why we need such methods, and why we will keep needing them, to make progress.
What is Deep Learning
First of all, deep learning is a subfield of machine learning inspired by the neural structure of the human brain.
Machine learning is a branch of artificial intelligence in which the algorithm alters itself, without human help, to obtain a certain output. Deep learning works in a similar way, but its models have many more layers, which is why we use the term 'deep' to describe it.
Each layer of the algorithm is composed of many neurons (nodes). When data is fed into the network, each neuron analyzes a different part of it and checks whether what it sees matches the node's criteria. If it does, the neuron sends a signal with a certain weight to the next layer; otherwise it does nothing (just like a human neuron).
In a neural network, you give the algorithm inputs along with the expected output(s). A loss function measures how far the network's output is from what we expected; the network then adjusts its weights and criteria to reduce the difference between the real outputs and the expected ones.
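That training loop can be sketched in a few lines of Python. This is a toy illustration with made-up numbers, not any real framework: a single linear neuron learns the relation y = 2x by following the gradient of a squared-error loss.

```python
# One linear neuron: output = weight * input.
# We measure the squared-error loss against the expected output,
# then nudge the weight along the negative gradient to shrink the loss.

inputs = [1.0, 2.0, 3.0, 4.0]
expected = [2.0, 4.0, 6.0, 8.0]  # the target relation is y = 2x

weight = 0.0          # starting guess
learning_rate = 0.05

for step in range(100):
    for x, y in zip(inputs, expected):
        output = weight * x
        # loss = (output - y)**2, so d(loss)/d(weight) = 2 * (output - y) * x
        gradient = 2 * (output - y) * x
        weight -= learning_rate * gradient

print(round(weight, 3))  # converges close to 2.0
```

Real networks do exactly this, only with millions of weights spread across many layers, and the gradients are computed automatically by backpropagation.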
Its evolution until today
In 1943, the first mathematical version of a neural network, the McCulloch-Pitts model, named after its inventors, was the first step toward machine learning and, therefore, deep learning. Their goal at the time was to simulate a human thought process. That network could not have hidden layers, however, so the idea of deep learning was still far from being achieved.
Even so, in 1952, the first "deep learning algorithm" appeared: its goal was to play checkers, and it improved with every game played. Although it is very different from the algorithms we use today, it is a good example of what learning algorithms are able to achieve. It is important to note that this algorithm lacked deep layers, which makes it very different from a modern deep learning algorithm.
In 1957, the invention of the perceptron pushed the domain further: once again, neural networks became more modern and efficient enough to be used in programming.
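Although those early perceptrons were never run as code, the learning rule itself is simple enough to sketch in modern Python. Here is a toy illustration (the data and learning rate are made up) of a perceptron learning the logical AND function:

```python
# Toy perceptron learning the logical AND function.
# Weights are nudged toward each misclassified example.

def predict(weights, bias, x):
    # Fire (output 1) if the weighted sum crosses the threshold 0.
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total + bias > 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Perceptron rule: shift weights in the direction of the error.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# AND truth table as training data.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # prints [0, 0, 0, 1]
```

A single perceptron can only separate data with a straight line, which is exactly the limitation that hidden layers, and eventually deep learning, were invented to overcome.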
Of course, programming at the time was far from what it is today: the algorithms being developed were never run on a computer, and the field of deep learning did not really exist yet.
This changed in 1965, when Alexey Ivakhnenko created the first deep neural network, although it was still only theory at the time. For this reason, Ivakhnenko is considered the father of deep learning.
One of the common uses of deep learning today became possible 15 years later, in 1980, when a deep learning algorithm became able to recognize visual patterns. Once again, the results were far less impressive than what we get today, but it was still a great step forward.
Over the following 20 years, many improvements in speech recognition and synthesis were made in the domain, but they remained minor compared to what happened in 1998.
In 1998, the French scientist Yann LeCun made yet another great advance in artificial intelligence: one of the field's greatest researchers, he helped improve object recognition algorithms with his work "Gradient-Based Learning Applied to Document Recognition".
Next, in 2009, another huge change arrived: the ImageNet database appeared, containing an enormous number of images, each with a description. Such a database is important for deep learning because these algorithms need to be trained easily and on data of good quality.
Then, in 2012, ImageNet organized an image recognition contest (the Large Scale Visual Recognition Challenge 2012) for machine learning programs. In previous years, the entries used only classical machine learning, and deep learning algorithms were not even considered. That year, however, the winner crushed its opponents using a deep learning algorithm. Since then, deep learning has been a very popular field of machine learning and a trending topic in computer science in general. If the algorithm could pull off such a tour de force, it is because of the ImageNet database: before the internet, deep learning algorithms could not be trained properly, but with Big Data, it became much more achievable.
Now we will discuss the potential future of deep learning. This technique is promising and is finding more and more uses in different domains.
For instance, it is being used right now in video game development to program more human-like characters. Usually, to create a character's movements, developers record human performances and implement them in the game. In that case, they also need software to program the transitions between different movements, which does not look very natural or human. Using deep learning's neural networks, on the other hand, the animations look rather smooth and lifelike. Given the video game market's growing significance, deep learning will be an important asset in the near future.
Deep learning is also useful for teaching autonomous cars how to drive and how to make critical decisions on the road. It is of course far from ready, as it is a rather long process, and some ethical questions about those decisions remain. Sometimes, situations involving a dilemma may appear on the road, like having to hit either a child or two elderly people. How will the car choose between the number of lives and the value of the lives saved? Why did the car make one decision and not the other? These ethical debates still belong to human minds for now, and not everyone agrees on what should be done. A website called Moral Machine lets you weigh in on these decisions as well.
Now we can also look at a bit of its future evolution. For instance, in architecture search, a technique used to automate the design of artificial neural networks (ANNs), we can create algorithms that themselves create convolutional neural networks (ANNs specialized in processing images and similar grid-like data), sometimes outperforming human-designed ones. Deep compression is another useful technique. Networks tend to be heavy: the VGG-16 image classification network, for example, weighs 552 MB. This makes it hard to use multiple networks in apps when loading them into RAM, but deep compression can help, as VGG-16 can be reduced to as little as 11.3 MB. Finally, generative adversarial networks (GANs), a method that consists of making two networks face each other in a game (as in game theory), allow those networks to train and learn to generate new data. For example, showing the networks pictures of a certain item lets them create very realistic images of that item. For now, this technique has a hard time generating high-definition photographs, so data scientists keep improving it and creating new models of GANs.
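To give an idea of how compression works, here is a highly simplified sketch of the weight-sharing idea in Python. It uses uniform quantization with invented numbers, whereas real deep compression combines pruning, k-means clustering, and Huffman coding: each 32-bit float weight is replaced by a 2-bit index into a tiny codebook of 4 shared values.

```python
# Simplified weight sharing: replace each 32-bit float weight with a
# 2-bit index into a codebook of 4 shared values.
# (Real deep compression adds pruning, k-means clustering, and Huffman coding.)

weights = [0.12, -0.34, 0.91, 0.10, -0.30, 0.88, 0.13, -0.29]

def quantize(values, levels=4):
    lo, hi = min(values), max(values)
    step = (hi - lo) / (levels - 1)
    # Codebook of evenly spaced shared values between lo and hi.
    codebook = [lo + i * step for i in range(levels)]
    # Each weight becomes the index of its nearest codebook entry.
    indices = [min(range(levels), key=lambda i: abs(v - codebook[i]))
               for v in values]
    return codebook, indices

codebook, indices = quantize(weights)
reconstructed = [codebook[i] for i in indices]  # close to the originals

original_bits = 32 * len(weights)            # one 32-bit float per weight
compressed_bits = 2 * len(weights) + 32 * 4  # 2-bit index each + codebook
print(original_bits, compressed_bits)        # prints 256 144
```

On this toy array the fixed-size codebook eats most of the savings; on a network with millions of weights, the per-weight cost drops from 32 bits to about 2, which is where reductions like 552 MB to 11.3 MB come from.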
In the end, we have seen different fields in which deep learning has a future use, as well as some aspects of its ongoing evolution.
Deep learning is a great technology that has evolved over time and faced many difficulties. At the beginning, results this stunning were totally impossible, with computers too slow and databases too small; but with the evolution of computers and the appearance of Big Data, it has become clear that such a technology will be ubiquitous in the near future. We studied several domains in which deep learning can and will be useful, as well as certain parts of it that are currently evolving, showing the significance of deep learning in computer science today.