Revolutionary DreamWaQ technology enables blind locomotion for quadrupedal robots in challenging environments

A team of Korean engineering researchers has achieved a breakthrough in quadrupedal robot technology: a control system that lets a robot climb stairs and traverse uneven terrain without relying on visual or tactile sensors. The achievement is especially significant for disaster scenarios, where darkness or thick smoke can make visual sensing unreliable.

The Urban Robotics Lab, led by Professor Hyun Myung at the School of Electrical Engineering, developed the walking-robot control technology, which enables robust “blind locomotion” in a wide range of challenging environments.

The team named their creation DreamWaQ, evoking its ability to let walking robots move even in the absence of light, much as humans can find their way from bed to bathroom in the dark without visual assistance. Legged robots equipped with this technology are called DreamWaQers.

Traditional walking-robot controllers are model-based, relying on kinematics and dynamics models. In unstructured settings such as uneven fields, however, the controller must rapidly gather terrain information to keep the robot stable while walking, and existing approaches depend heavily on the robot's cognitive ability to survey its surroundings.

However, the DreamWaQ technology breaks free from these limitations, providing a new and efficient method for walking robots to navigate through various atypical environments with confidence and stability.

In contrast, Professor Hyun Myung’s research team has developed a revolutionary controller for walking robots that utilizes deep reinforcement learning (RL) methods. This controller stands out for its ability to rapidly calculate precise control commands for each motor of the robot based on data obtained from various simulated environments. Unlike previous controllers that required extensive adjustments to work with real robots after being trained in simulations, the team’s controller can be easily applied to different walking robots without the need for additional tuning processes.

Named DreamWaQ, the controller consists of two main components: a context-aided estimator network and a policy network. The estimator network uses inertial information and joint data to implicitly estimate the ground information while explicitly determining the robot's state. These estimates are then passed to the policy network, which generates optimal control commands from them. Both networks are trained simultaneously in simulation.
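To make the division of labor concrete, here is a minimal sketch of how such a two-network design could be wired up, written in PyTorch. The layer sizes, observation dimension, and head names are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ContextEstimator(nn.Module):
    """Estimates a latent terrain context (implicit ground information)
    and the robot's body state from proprioception alone: IMU readings
    plus joint data. All dimensions are assumptions for illustration."""
    def __init__(self, obs_dim=45, context_dim=16, state_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ELU(),
            nn.Linear(128, 64), nn.ELU(),
        )
        self.context_head = nn.Linear(64, context_dim)  # implicit terrain info
        self.state_head = nn.Linear(64, state_dim)      # explicit robot state

    def forward(self, obs):
        h = self.encoder(obs)
        return self.context_head(h), self.state_head(h)

class Policy(nn.Module):
    """Maps proprioception plus the estimator's outputs to one target
    command per motor (12 for a typical quadruped)."""
    def __init__(self, obs_dim=45, context_dim=16, state_dim=3, num_motors=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + context_dim + state_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, num_motors),
        )

    def forward(self, obs, context, state):
        return self.net(torch.cat([obs, context, state], dim=-1))

# One control step: estimate the context and state, then compute commands.
estimator, policy = ContextEstimator(), Policy()
obs = torch.randn(1, 45)                     # stand-in for IMU + joint data
context, state = estimator(obs)
joint_targets = policy(obs, context, state)  # shape: (1, 12)
```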

The context-aided estimator network is trained using supervised learning techniques, whereas the policy network employs an actor-critic architecture—a deep RL approach. The actor network can infer surrounding terrain information implicitly. During simulation training, the exact terrain information is known, and the critic (or value network), equipped with this knowledge, evaluates the actor network’s policy.
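As a sketch of how those two training signals could coexist, the snippet below (reusing the ContextEstimator from the previous sketch) pairs a supervised regression loss for the estimator with a privileged critic that consumes ground-truth terrain labels available only in simulation. The batch layout, the use of plain MSE, and the label names are assumptions; the actor's policy-gradient term (e.g., PPO's clipped objective) is omitted for brevity.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Value network. Unlike the actor, it receives the *true* terrain
    features and body state, which exist only in simulation."""
    def __init__(self, obs_dim=45, terrain_dim=16, state_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + terrain_dim + state_dim, 256), nn.ELU(),
            nn.Linear(256, 1),
        )

    def forward(self, obs, true_terrain, true_state):
        return self.net(torch.cat([obs, true_terrain, true_state], dim=-1))

def training_losses(estimator, critic, batch):
    """batch holds simulator tensors: observations, ground-truth terrain
    and state labels, and value targets (e.g., discounted returns)."""
    # Supervised loss: the estimator must reproduce the simulator's labels.
    context, state = estimator(batch["obs"])
    est_loss = (nn.functional.mse_loss(context, batch["true_terrain"])
                + nn.functional.mse_loss(state, batch["true_state"]))

    # Critic loss: regress predicted values toward return targets, using
    # privileged inputs the real robot will never have.
    values = critic(batch["obs"], batch["true_terrain"], batch["true_state"])
    value_loss = nn.functional.mse_loss(values, batch["returns"])
    return est_loss, value_loss
```

Because the critic exists only to guide training, its privileged inputs never have to be measured on the real robot; only the actor and estimator need to run on hardware.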

By leveraging deep RL methods, the DreamWaQ controller represents a significant advancement in walking robot control. Its ability to learn from simulations and transfer knowledge to real-world scenarios without extensive reconfiguration makes it highly adaptable to various walking robots.

Figure 2. Since the estimator can implicitly estimate the ground information as the foot touches the surface, it is possible to adapt quickly to rapidly changing ground conditions. Credit: KAIST (Korea Advanced Institute of Science and Technology)
Figure 3. Results showing that even a small walking robot was able to overcome steps with height differences of about 20 cm. Credit: KAIST (Korea Advanced Institute of Science and Technology)

The entire learning process takes only about an hour on a GPU-enabled PC. Once trained, the physical robot runs only the learned actor network. Relying solely on its onboard inertial measurement unit (IMU) and joint-angle measurements, the robot imagines its surroundings and judges which of the environments it learned in simulation they most resemble. If it encounters an unexpected obstacle, such as a staircase, it remains unaware of the step until a foot touches it; upon contact, however, it swiftly infers the terrain information and immediately adjusts its walking pattern, sending suitable control commands to each motor based on the estimated terrain.
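A hypothetical deployment loop might look like the sketch below, reusing the trained estimator and policy from the earlier sketches. The sensor and motor callables (read_proprioception, send_joint_targets) stand in for a real robot driver and are not an actual API; the 50 Hz rate and fixed-rate sleep are likewise assumptions.

```python
import time
import torch

@torch.no_grad()  # inference only: no gradients needed on the robot
def control_loop(estimator, policy, read_proprioception, send_joint_targets,
                 hz=50):
    """Runs onboard with proprioception in and motor commands out:
    no camera, no lidar, no dedicated foot-contact sensors."""
    period = 1.0 / hz
    while True:
        obs = read_proprioception()            # IMU + joint angles, shape (1, obs_dim)
        context, state = estimator(obs)        # 'imagine' the terrain from body motion
        targets = policy(obs, context, state)  # one command per motor
        send_joint_targets(targets)
        time.sleep(period)                     # crude fixed-rate loop for the sketch
```

When a foot strikes an unseen step, the proprioceptive signals change abruptly, the estimated context shifts on the next cycle, and the policy's output adapts, with no explicit terrain map required.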

By leveraging the information from the IMU and joint angles, the robot can make real-time decisions without relying on visual or tactile sensors. It dynamically responds to the terrain it encounters, ensuring stable and efficient locomotion even in challenging and unpredictable environments.

The DreamWaQer robot not only navigated the controlled laboratory environment successfully but also demonstrated its capabilities in real-world outdoor settings. It maneuvered over obstacles such as curbs, speed bumps, and uneven ground strewn with tree roots and gravel, showcasing its adaptability and robustness. One of its notable achievements was conquering a staircase with a height difference equivalent to two-thirds of its own body height.

The research team tested the robot's walking capabilities extensively in diverse environments, demonstrating stable locomotion across a wide range of speeds, from a slow 0.3 m/s to a relatively fast 1.0 m/s.

The findings of this groundbreaking study have been made publicly available through a paper published on the arXiv preprint server. Furthermore, the research has been recognized and accepted for presentation at the prestigious IEEE International Conference on Robotics and Automation (ICRA), scheduled to take place in London at the end of May. This acknowledgment highlights the significance of the research and the innovative contributions made by the team in the field of robotics and locomotion control.

Source: KAIST (Korea Advanced Institute of Science and Technology)
