I am a young researcher in Robotics and Artificial Intelligence, deeply interested in any kind of learning algorithm applied to physical robots, because I believe that this is the only way for robots to become a significant part of our everyday life. Working systematically with physical robots has been one of the challenges of my PhD, and I intend to continue doing so, because we can only be sure that something really works when it works in reality.
Although my research focuses on artificial intelligence, I have a versatile background thanks to my engineering degree in Robotics. During my undergraduate studies, I acquired strong knowledge in programming (C/C++, Python, Java, Matlab), control theory, and mechanical/electrical design. I also hold a master's degree in signal processing and, in a few months, will hold a PhD in artificial intelligence and robotics. With these three degrees, I can handle entire robotics projects, from the design of robotic platforms to the development of complex learning strategies.
Instead of using complex and fallible diagnostic procedures, we proposed to let robots deal with the situation at hand by learning new solutions and actions on their own. With my PhD advisor Jean-Baptiste Mouret and colleagues, we developed new learning techniques that allow robots to adapt to a large variety of situations in less than 2 minutes. To achieve this performance, my work takes advantage of both the creativity of Evolutionary Algorithms and the speed of Machine Learning algorithms.
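The two-phase idea behind this combination can be illustrated with a toy sketch (a minimal greedy stand-in, not our actual algorithm): a repertoire of behaviors whose performance was predicted offline guides a short trial-and-error search on the real, possibly damaged, robot.

```python
# Toy sketch: predicted performances (e.g. from simulation) order the
# behaviors to try; the robot tests them until one works well enough.
# All names, values, and the damage model are illustrative stand-ins.

def adapt(repertoire, try_on_robot, good_enough):
    """Greedily test behaviors in order of predicted performance
    until one performs well enough on the (possibly damaged) robot."""
    trials = 0
    best = (None, float("-inf"))
    for behavior, _predicted in sorted(
            repertoire.items(), key=lambda kv: kv[1], reverse=True):
        measured = try_on_robot(behavior)  # one real-world trial
        trials += 1
        if measured > best[1]:
            best = (behavior, measured)
        if measured >= good_enough:
            break
    return best[0], trials

# Example: behavior "b3" is predicted best but fails on the damaged
# robot; the search quickly falls back to a working alternative.
repertoire = {"b1": 0.6, "b2": 0.8, "b3": 0.9, "b4": 0.4}
real_perf = {"b1": 0.55, "b2": 0.75, "b3": 0.1, "b4": 0.35}
choice, trials = adapt(repertoire, real_perf.get, good_enough=0.7)
print(choice, trials)  # -> b2 2
```

The point of the sketch is that the offline repertoire turns adaptation into a search over a few promising candidates rather than learning from scratch, which is what makes the real-robot phase fast.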
This new algorithm will help develop more robust, effective, and autonomous robots, which will deliver tremendous benefits to society, for example in search-and-rescue missions or in fighting forest fires. It would enable robots that can help rescuers without requiring their continuous attention. It also eases the creation of personal robotic assistants that remain helpful even when a part is broken.
The creativity of evolution
Every day, we can admire the creativity of evolution through the diversity of plants, animals, insects, and even humans. This vast diversity is one of the best examples showing that evolution is not only a matter of finding the best-adapted individual. Evolution is not simply about finding a single individual, but rather about finding all the individuals that may fit in the environment. This search for diversity inexorably leads to a notion of creativity: how could all these different species be found without an ounce of creativity?
Unfortunately, artificial evolution is most of the time reduced to a kind of optimization algorithm that tries to find a solution maximizing a fitness function. Although it offers interesting properties, such as multi-objective optimization, artificial evolution can be significantly more than that. In our research, we extend classic evolutionary algorithms to let them express their creative potential.
This creativity is also instrumental in robotics. It may allow robots to imagine a large number of potential solutions to their problems. It may help robots discover on their own all their possible actions, making it easier to develop complex systems. This is exactly the direction of our research based on "behavioral repertoires" or "behavior-performance maps". Our evolutionary algorithms autonomously define and generate all the possible ways for a hexapod robot to walk in a straight line as fast as possible, or all the ways to turn while walking forward and backward.
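The core loop behind a behavior-performance map can be sketched in a few lines, in the spirit of MAP-Elites-style "illumination" algorithms: instead of keeping a single best solution, the archive keeps the best solution found for each behavioral niche. The genome, fitness, and behavior descriptor below are toy stand-ins, not the actual hexapod setup.

```python
import random

N_NICHES = 20    # discretized behavior space (illustrative)
GENOME_LEN = 6

def fitness(genome):
    # Toy performance measure: prefer genes close to 0.5.
    return -sum((g - 0.5) ** 2 for g in genome)

def behavior(genome):
    # Toy behavior descriptor: mean gene value, binned into a niche.
    mean = sum(genome) / len(genome)
    return min(int(mean * N_NICHES), N_NICHES - 1)

def mutate(genome, sigma=0.1):
    return [min(1.0, max(0.0, g + random.gauss(0, sigma))) for g in genome]

def map_elites(iterations=2000, seed=1):
    random.seed(seed)
    archive = {}  # niche -> (fitness, genome): one elite per niche
    for _ in range(iterations):
        if archive and random.random() < 0.9:
            # Select a random elite and mutate it.
            _, parent = random.choice(list(archive.values()))
            child = mutate(parent)
        else:
            child = [random.random() for _ in range(GENOME_LEN)]
        niche, fit = behavior(child), fitness(child)
        # Keep the child only if its niche is empty or it beats the elite.
        if niche not in archive or fit > archive[niche][0]:
            archive[niche] = (fit, child)
    return archive

archive = map_elites()
print(f"{len(archive)} niches filled out of {N_NICHES}")
```

The result is not one optimum but a whole map of diverse, high-performing behaviors, which is what gives the robot many fallback options later on.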
The speed of machine learning
The creativity of evolutionary algorithms has a cost: it requires several hundred, if not thousands of, trials or evaluations. This is similar to the millions of years it took to evolve the current form of humans. Such a huge number of tests makes these approaches impractical on real robots. Machine learning techniques, like reinforcement learning algorithms, are faster but also significantly less creative. The difference comes from the aims of these two families of algorithms: while evolutionary algorithms may search for all the potential solutions to a problem, reinforcement learning algorithms look for only one solution, potentially the best one. These approaches can be used to learn a task after a demonstration, or to learn to play video games, but most of the time the quality of the obtained solution depends on the search's starting point, for example on the quality of the demonstration. This is mainly a consequence of the lack of creativity of such techniques, but it is also the source of their speed.