Santa may get a robo-helper this year, with an AI droid that can pick pressies and launch them into shipping crates.

The incredible ANYmal robot has four legs with wheels for feet, letting it zip around and even stand upright.

Now it has been allowed to solve its own problems, enabling it to open doors and manipulate boxes.

Developers Swiss-Mile have worked out that if they give the robot a task, it will explore and play to work out on its own how to complete the challenge.

A new showcase sees ANYmal working out how to open a door, then picking up a box and learning to throw it accurately into a storage bin.

It has previously been seen negotiating stairs, standing on two legs and calling an elevator for itself.

Dr. Marko Bjelonic, of ANYmal developer Swiss-Mile, a spin-off from ETH Zurich research university, explains: "The package handling uses an interesting approach, utilising reinforcement learning to solve the whole problem without any human heuristics.

"In simulation, the robot can self-discover how to grab a package and how to throw it as fast as possible into the bin.

"Seems like Santa Claus is a bit in a rush this year," Dr. Bjelonic joked.

Swiss-Mile say they have spent the last five years researching how to tackle last-mile delivery and logistics challenges with "superior speed, efficiency, versatility and payload capability".

They explain: "With both legs and wheels, our robot outperforms state-of-the-art wheeled delivery platforms as well as lightweight delivery drones.

"It is the only solution capable of carrying tools, materials, goods and sensors over long distances with energy efficiency and speed while overcoming challenging obstacles like steps and stairs and enabling seamless navigation in indoor and outdoor urban environments."

ANYmal can travel up to 6.2 metres per second (22.32 kph/13.87 mph), can carry a payload of 50 kg and is claimed to be 83% more efficient than legged systems.

Dr. Bjelonic adds: "We are bringing our embodied AI to real-world use cases, and we will focus more on creating value for our customers and society."

Transcript
00:00 This wheeled-legged robot is able to perform hybrid motions to combine the advantages of wheeled and legged locomotion.
00:07 Recently, the robot learned how to stand up.
00:11 Its front legs can now act as arms to fulfill manipulation tasks, such as opening a door or moving a package.
00:19 In this work, we present a curiosity-driven reinforcement learning approach to achieve such behaviors.
00:26 Learning-based and non-learning-based methods usually require an extensive amount of task-specific engineering, for example in the form of reward shaping.
00:42 We overcome this limitation by defining a single task-specific reward that is given if the task is achieved.
00:48 Resulting control policies show a high level of repeatability.
00:55 To be able to discover this sparse reward, the agent needs to explore its environment.
01:00 Making the agent curious intrinsically motivates exploration.
01:04 We define a state to focus the agent's curiosity on the object of interest.
01:09 For the door-opening task, the curiosity state is simply defined as the door's position and velocity, as well as the distance between the robot and the door.
01:17 As a result, the robot starts to play with the door until the task is achieved.
01:22 Note that the curiosity state is the only thing we have to change to achieve a different task, such as manipulating a package.
01:36 We employ a simple perception system relying on a single camera for all task-specific observations.
01:50 We simulate the camera's field of view during training to enable active tracking of visual markers.
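The transcript's recipe, a single sparse reward given only when the task succeeds, plus a curiosity bonus focused on a hand-picked "curiosity state", can be sketched as follows. Everything here (the linear forward model, the toy door state, the `beta` weighting) is an illustrative assumption for the general curiosity-driven approach described in the video, not Swiss-Mile's actual implementation.

```python
import numpy as np

class ForwardModel:
    """Tiny linear model predicting the next curiosity state.

    Its prediction error doubles as the intrinsic (curiosity) reward:
    unfamiliar transitions are poorly predicted, so they yield a large
    bonus; as the model learns the dynamics, the bonus decays.
    """
    def __init__(self, state_dim, action_dim, lr=0.05):
        self.W = np.zeros((state_dim, state_dim + action_dim))
        self.lr = lr

    def update(self, state, action, next_state):
        x = np.concatenate([state, action])
        err = next_state - self.W @ x          # prediction error
        self.W += self.lr * np.outer(err, x)   # online learning step
        return float(np.sum(err ** 2))         # squared error = curiosity bonus

def total_reward(task_done, curiosity_bonus, beta=0.1):
    # Sparse extrinsic reward (1 only on task success) plus a scaled
    # curiosity bonus; no hand-crafted reward shaping beyond this.
    return (1.0 if task_done else 0.0) + beta * curiosity_bonus

# Toy curiosity state for door opening, mirroring the transcript's choice:
# door position, door velocity, robot-door distance.
model = ForwardModel(state_dim=3, action_dim=2)
s = np.array([0.0, 0.0, 1.0])
a = np.array([1.0, 0.0])                   # push toward the door
s_next = np.array([0.2, 0.4, 0.8])
bonus_first = model.update(s, a, s_next)   # large: transition is novel
bonus_later = model.update(s, a, s_next)   # smaller: now more familiar
```

Because the curiosity bonus shrinks as transitions become familiar, the agent keeps seeking out new interactions (the "playing with the door" behaviour) until it stumbles on the sparse task reward; switching tasks then only requires redefining the curiosity state.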
