Learning about Animal Locomotion Control from Robots, Auke Ijspeert
The ability to efficiently move in complex environments is a fundamental property both for animals and for robots, and the problem of locomotion and movement control is an area in which neuroscience and robotics can fruitfully interact.
Animal locomotion control is in large part based on spinal cord circuits that combine reflex loops and central pattern generators (CPGs), i.e. neural networks capable of producing complex rhythmic or discrete patterns while being activated and modulated by relatively simple control signals. In vertebrates, these networks are located in the spinal cord and interact with the musculoskeletal system to provide “motor primitives” for higher parts of the brain, i.e. building blocks of motor control that can be activated and combined to generate rich movements.
In this talk, I will present how we model the spinal cord circuits of lower vertebrates (lamprey and salamander) using systems of coupled oscillators, and how we test these models on board amphibious robots. The models and robots were instrumental in testing some novel hypotheses concerning the mechanisms of gait transition, sensory feedback integration, and generation of rich motor skills in vertebrate animals.
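The core idea of a CPG built from coupled oscillators can be sketched in a few lines: a chain of phase oscillators with nearest-neighbor coupling settles into a traveling wave suitable for driving body segments. This is a minimal illustration in the spirit of such models, not the published lamprey/salamander equations; all parameter values are assumptions.

```python
import numpy as np

# Minimal sketch of a CPG as a chain of coupled phase oscillators.
# Parameter values are illustrative, not those of the actual models.
N = 10                       # number of segmental oscillators
freq = 1.0                   # intrinsic frequency (Hz), a simple "drive"
w = 4.0                      # coupling strength between neighbors
phase_lag = 2 * np.pi / N    # desired lag -> one full wave along the body

rng = np.random.default_rng(0)
theta = 0.1 * rng.standard_normal(N)   # oscillator phases
dt = 0.01
for _ in range(5000):
    dtheta = 2 * np.pi * freq * np.ones(N)
    # nearest-neighbor coupling pulls each phase toward the
    # desired lag relative to its neighbors
    dtheta[1:] += w * np.sin(theta[:-1] - theta[1:] - phase_lag)
    dtheta[:-1] += w * np.sin(theta[1:] - theta[:-1] + phase_lag)
    theta += dt * dtheta

output = np.cos(theta)                 # e.g. joint-angle setpoints
lags = np.diff(np.unwrap(theta))       # locked near -phase_lag after convergence
```

Changing the drive (`freq`) or the target `phase_lag` modulates speed and wavelength with a single parameter each, which is one reason such models make attractive robot controllers.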
I will also discuss how the models can be extended to control biped locomotion, and how they can help decipher the respective roles of pattern generation, reflex loops, and descending modulation in human locomotion.
Self-Exploration of Autonomous Robots Using Attractor-Based Behavior Control, Manfred Hild
An autonomous robot which is equipped with sensorimotor loops and situated within a specific environment can be regarded as a dynamical system. In the language of dynamical systems theory, behavioral body postures and repetitive body motions then correspond to fixed points and quasiperiodic orbits. Both can either be naturally stable, i.e., attractors of the situated physical body, or be stabilized by the whole system including the sensorimotor loops.
As is well known, even the simplest two-neuron networks may already exhibit many co-existing attractors, which, if properly chosen, may align nicely with the overt behavior of an autonomous robot. Standing and walking is one example, obstacle avoidance another. The question arises how larger neural networks can be found (preferably by the robot itself) that explore behavioral options, starting from scratch and becoming increasingly rich over time.
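The claim about co-existing attractors in a two-neuron network is easy to demonstrate. The toy network below (weights are assumed values, not Hild's actual ABC networks) has self-excitation and mutual inhibition, giving two stable fixed points; which one the dynamics settle into depends only on the initial condition, just as different body postures do.

```python
import numpy as np

# Toy two-neuron recurrent network with tanh units:
# self-excitation plus cross-inhibition yields two co-existing
# fixed-point attractors. Weights are illustrative assumptions.
W = np.array([[ 2.0, -1.5],
              [-1.5,  2.0]])

def run(x0, steps=200):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = np.tanh(W @ x)   # discrete-time neural dynamics
    return x

a = run([ 0.1, -0.1])   # settles with neuron 0 "winning"
b = run([-0.1,  0.1])   # settles with neuron 1 "winning"
# a and b are two different stable states of the same network,
# analogous to two behavioral postures of the same body
```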
Attractor-Based Behavior Control (ABC) follows the aforementioned path and reliably finds attractors which correspond to energy-efficient behavioral body postures, either fully autonomously or through gently guided physical human-machine interaction. The latter helps protect the robot from harming itself, as could easily happen with, e.g., motor babbling or homeokinetic learning rules. In addition, ABC learning not only finds behavioral attractors but also the corresponding attractor-connecting heteroclinic orbits, which can be utilized to generate stable motion trajectories.
After briefly revisiting the necessary concepts, I will introduce ABC-Learning and demonstrate how it enables an autonomous robot to self-explore its behavioral capabilities from scratch and without any given body model.
Practical approaches to exploiting body dynamics in robot motor control, Joni Dambre
Motor control systems in the brains of humans and other mammals are hierarchically organised, with each level controlling increasingly complex motor actions. Each level is controlled by the higher levels and also receives sensory and/or proprioceptive feedback. Through learning, this hierarchical structure adapts to its body, its sensors, and the way these interact with the environment.
An even more integrated view is taken in morphological or embodied computation. On the one hand, there is both biological and mechanical (robotics) evidence that a properly chosen body morphology can drastically facilitate control when the body dynamics naturally generate low level motion primitives. On the other hand, several papers have used robot bodies as reservoirs in a reservoir computing setup. In some cases, reservoir computing was used as an easy way to obtain robust linear feedback controllers for locomotion.
In other cases, the body dynamics of soft robots were shown to perform general computations in response to some input stimulation. In general, very specific highly compliant bodies were used. At Ghent University’s Reservoir Lab, we have previously used reservoir computing to generate locomotion on quite different robot platforms: the highly compliant tensegrity robot ReCTeR, the far less compliant quadruped robot Oncilla, and a new low-cost modular quadruped puppy robot. In all cases, we succeeded in generating stable gaits. However, not surprisingly, not all robot bodies are equally suited to helping generate their own motor actuations. As a result, the reservoir computing principle alone was not always sufficient.
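The reservoir computing principle underlying this work can be sketched compactly: a fixed, randomly connected dynamical system (in physical reservoir computing, the robot body itself; here, an assumed simulated recurrent network standing in for it) is driven by an input, and only a simple linear readout is trained, typically by ridge regression.

```python
import numpy as np

# Echo-state-style sketch of reservoir computing: only the linear
# readout Wout is trained. Sizes and scalings are assumptions.
rng = np.random.default_rng(1)
N = 100
Wres = rng.standard_normal((N, N))
Wres *= 0.9 / np.max(np.abs(np.linalg.eigvals(Wres)))  # spectral radius 0.9
Win = rng.standard_normal(N)

T = 1000
t = np.linspace(0, 20 * np.pi, T)
u = np.sin(t)              # input drive
target = np.sin(t + 1.0)   # desired (phase-shifted) motor signal

x = np.zeros(N)
states = np.zeros((T, N))
for k in range(T):
    x = np.tanh(Wres @ x + Win * u[k])   # fixed, untrained dynamics
    states[k] = x

# ridge-regression readout, discarding an initial washout period
washout = 100
S, y = states[washout:], target[washout:]
Wout = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)
pred = states @ Wout
err = np.sqrt(np.mean((pred[washout:] - y) ** 2))
```

In the physical variant, `states` would be sensor and proprioceptive readings of the compliant body rather than simulated neuron activations; whether the body's dynamics are rich and stable enough to play this role is exactly the suitability question raised above.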
We present an overview of our experience with these different robot platforms and give practical guidelines for applying physical reservoir computing to new robots. We finally discuss some perspectives on a more systematic evaluation of the relationship between body morphology, compliance, and the complexity of generating stable gaits for locomotion.
Generating and Modulating Complex Motion Patterns with Recurrent Neural Networks and Conceptors, Herbert Jaeger
In biological brains “higher” cognitive control modules regulate “lower” brain layers in many ways. Examples of such top-down processing pathways include triggering motion commands (“reach for that cup”), modulating ongoing lower-level motor pattern generation (“wider steps, slooow down!”), setting attentional focus (“look closer… there!”), or predicting the next sensory impressions (“oops – that will hit me”). Not much is known about computational mechanisms which would implement such top-down governance functions on the neural level. As a consequence, top-down regulation is rarely implemented in machine learning systems based on artificial neural networks.
Specifically, today’s top-performing pattern recognition systems (“deep learning” architectures) do not exploit top-down regulation pathways. This talk gives an introduction to a novel neural control mechanism which addresses such top-down governance mechanisms in modular, neural learning architectures. This computational principle, called conceptors, allows higher neural modules to control lower ones in a dynamical, online-adaptive fashion. The conceptor mechanism lends itself to numerous purposes:
- A single neural network can learn a large number of different dynamical patterns (e.g. words, or motions).
- After some patterns have been learnt by a neural network, it can re-generate not only the learnt “prototypes” but a large collection of morphed, combined, or abstracted patterns.
- Patterns learnt by a neural network can be logically combined with AND, OR, and NOT operations, subject to the rules of Boolean logic. This reveals a fundamental link between the worlds of “subsymbolic” neural dynamics and “symbolic” cognitive operations.
- This intimate connection between the worlds of neural dynamics and logical-symbolic operations yields novel algorithms and architectures for lifelong learning, signal filtering, attending to particular signal sources (“party talk” effect), and more.
Expressed in a nutshell, conceptors enable “top-down logico-conceptual control” of the nonlinear, pattern-generating dynamics of recurrent neural networks.
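The basic conceptor computation can be illustrated in a few lines, following Jaeger's definition C = R (R + a^-2 I)^-1, where R is the correlation matrix of reservoir states collected while the network is driven by a pattern and a is the "aperture". The driving setup below (network sizes, scalings, the sine pattern) is an illustrative assumption.

```python
import numpy as np

# Sketch of computing a conceptor from driven reservoir states.
# Network sizes, scalings, and the driving pattern are assumptions.
rng = np.random.default_rng(2)
N = 50
W = rng.standard_normal((N, N))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))
Win = rng.standard_normal(N)

def collect_states(u):
    x, xs = np.zeros(N), []
    for ut in u:
        x = np.tanh(W @ x + Win * ut)
        xs.append(x.copy())
    return np.array(xs[100:])            # drop initial washout

def conceptor(X, aperture):
    R = X.T @ X / len(X)                 # state correlation matrix
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(N))

X = collect_states(np.sin(0.2 * np.arange(1100)))   # pattern: a slow sine
C = conceptor(X, aperture=10.0)

s = np.linalg.svd(C, compute_uv=False)
# all singular values lie in [0, 1): C acts as a soft projection onto
# the state subspace the pattern occupies
notC = np.eye(N) - C                     # Boolean NOT in conceptor logic
```

Inserting C into the reservoir's update loop restricts the dynamics to the learnt pattern's subspace, which is how a higher module can switch or morph the patterns a lower network generates; AND and OR combinations are built from the same matrix algebra.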
Training and Understanding Deep Neural Networks for Robotics, Design, and Perception, Jason Yosinski
Artificial Neural Networks (ANNs) form a powerful class of models with both theoretical and practical advantages. Networks with more than one hidden layer (deep neural networks) compute multiple functions on later layers that share the use of intermediate results computed on earlier layers. This compositional, hierarchical structure provides a strong bias, or regularization, toward solutions that seem to work well on a large variety of real-world problems.
In this talk we will examine this bias in action via several vignettes. First we will look at a method for using ANNs to learn fast gaits for walking robots. Second, we will see how the same method can be applied to design three-dimensional solid objects. Finally, we will discuss a few simple experiments that shed light on the inner workings of neural nets trained to classify images, illuminating the computation performed by each layer of a network and by the network as a whole. Taken together, the experiments reveal some surprising behaviors of large networks and lead to a greater understanding of, and intuition for, the computation performed by deep neural nets.
The Neurorobotics Platform of the Human Brain Project, Stefan Ulbrich
The Neurorobotics Platform (NRP) is a web-based simulation environment for neuroscientists to perform neurorobotic experiments. It is developed in sub-project 10, “Neurorobotics”, of the Future and Emerging Technologies (FET) Flagship project “The Human Brain Project”, funded by the European Commission. The software grants neuroscientists painless access to sophisticated brain, robot, and physics simulators, and provides the necessary tools for designing brain-body interfaces, modeling virtual worlds and robots, and defining complex experiments. The NRP is still under development; this talk presents its current state and capabilities as well as an outlook on future development.