Developing a competitive table tennis player out of a robot arm

 

Researchers at Google DeepMind, the company’s artificial intelligence research laboratory, have developed ABB’s robot arm into a competitive table tennis player that can swing its 3D-printed paddle back and forth and win against human opponents. In the study the researchers published on August 7th, 2024, the ABB robot arm plays against a professional coach; it is mounted on top of two linear gantries, which allow it to move sideways.

 

It holds a 3D-printed paddle covered with short-pips rubber, and as soon as the game begins, the robot arm is ready to strike. The researchers train the robot arm on skills typically used in competitive table tennis so it can build up its own data: during and after training, the system records how each skill performs. This collected data helps the controller decide which skill the robot arm should use during the game, so that the robot arm may anticipate its opponent’s move and counter it.
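The study does not spell out this bookkeeping at the article’s level of detail, but the idea of recording how each skill performs and letting a controller choose from those records can be sketched roughly as follows. The class, the numbers, and the selection rule below are illustrative assumptions, not DeepMind’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class SkillStats:
    """Hypothetical per-skill record gathered during and after training."""
    attempts: int = 0
    successes: int = 0  # e.g. returns that landed on the opponent's side

    def success_rate(self) -> float:
        return self.successes / self.attempts if self.attempts else 0.0

# Illustrative statistics for two skills (the numbers are made up).
skill_stats = {
    "forehand_topspin": SkillStats(attempts=500, successes=430),
    "backhand": SkillStats(attempts=400, successes=300),
}

def choose_skill(candidates: list[str]) -> str:
    """Pick the candidate skill with the best recorded success rate."""
    return max(candidates, key=lambda s: skill_stats[s].success_rate())

# An incoming ball that could be answered with either stroke:
print(choose_skill(["forehand_topspin", "backhand"]))  # -> forehand_topspin
```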

all video stills courtesy of researcher Atil Iscen via YouTube

 

 

Google DeepMind researchers collect the data for training

 

For the ABB robot arm to win against its opponent, the researchers at Google DeepMind need to make sure the device can choose the best move for the current situation and execute it with the right technique within seconds. To handle this, the researchers write in their study that they have built a two-part system for the robot arm: low-level skill policies and a high-level controller. The former comprises the table tennis routines, or skills, that the robot arm has learned.

 

These include hitting the ball with topspin using the forehand, hitting with the backhand, and serving with the forehand. The robot arm has studied each of these skills to build its basic ‘set of principles.’ The latter, the high-level controller, decides which of these skills to use during the game by assessing what is currently happening on the table. From there, the researchers train the robot arm in a simulated environment, or a virtual game setting, using a method called reinforcement learning (RL).
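As a rough sketch of such a two-level setup, the high-level controller looks at the state of the rally and hands control to one of the trained low-level skill policies, which then produces the actual arm commands. The class names, the decision rule, and the observation and command sizes below are assumptions for illustration, not the published architecture.

```python
import numpy as np

class SkillPolicy:
    """Stand-in for a trained low-level skill policy (e.g. forehand topspin)."""
    def __init__(self, name: str):
        self.name = name

    def act(self, observation: np.ndarray) -> np.ndarray:
        # A real policy would be a neural network mapping the ball and robot
        # state to arm and gantry commands; random values stand in for that.
        return np.random.uniform(-1.0, 1.0, size=8)

class HighLevelController:
    """Decides which low-level skill to run for the incoming ball."""
    def __init__(self, skills: dict):
        self.skills = skills

    def select(self, ball_x: float) -> SkillPolicy:
        # Toy decision rule: choose forehand or backhand depending on which
        # side of the table the ball is approaching on.
        return self.skills["forehand_topspin" if ball_x > 0 else "backhand"]

skills = {name: SkillPolicy(name) for name in ("forehand_topspin", "backhand")}
controller = HighLevelController(skills)

policy = controller.select(ball_x=0.3)   # high level: pick the skill
command = policy.act(np.zeros(16))       # low level: produce arm commands
print(policy.name, command.shape)
```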

Google DeepMind researchers have developed ABB’s robot arm into a competitive table tennis player

 

 

robot arm wins 45 percent of the matches

 

Reinforcement learning lets the robot practice and acquire these skills in simulation; after training, the robot arm’s skills are tested and used in the real world without any additional training for the physical environment. So far, the results demonstrate the device’s ability to win against its opponent in a competitive table tennis setting.
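Schematically, this ‘train in simulation, then deploy directly on hardware’ pattern looks something like the toy sketch below, in which a tiny epsilon-greedy loop inside a fake simulator stands in for the far larger-scale reinforcement learning and physics simulation described in the paper; every name and number here is a placeholder.

```python
import random

# Toy simulator: each discrete paddle-angle "action" has a hidden probability
# of returning the ball (made-up numbers standing in for real physics).
RETURN_PROB = {0: 0.2, 1: 0.5, 2: 0.8, 3: 0.6}

def simulated_swing(action: int) -> float:
    """One simulated swing; reward 1.0 if the ball comes back, else 0.0."""
    return 1.0 if random.random() < RETURN_PROB[action] else 0.0

def train_in_simulation(episodes: int = 5000, epsilon: float = 0.1) -> dict:
    """Epsilon-greedy value estimation: a minimal stand-in for the
    reinforcement learning used to train each skill policy in simulation."""
    value = {a: 0.0 for a in RETURN_PROB}
    count = {a: 0 for a in RETURN_PROB}
    for _ in range(episodes):
        explore = random.random() < epsilon
        a = random.choice(list(RETURN_PROB)) if explore else max(value, key=value.get)
        reward = simulated_swing(a)
        count[a] += 1
        value[a] += (reward - value[a]) / count[a]  # running average of rewards
    return value

def deploy_on_robot(value: dict) -> None:
    """Zero-shot transfer: run the best simulated action on the physical arm
    with no further training in the real environment."""
    best = max(value, key=value.get)
    print("deploying action learned entirely in simulation:", best)

deploy_on_robot(train_in_simulation())
```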

 

To see how good it is at playing table tennis, the robot arm played against 29 human players of different skill levels: beginner, intermediate, advanced, and advanced plus. The Google DeepMind researchers had each human player play three games against the robot. The rules were mostly the same as in regular table tennis, except that the robot could not serve the ball.

the study finds that the ABB robot arm won 45 percent of the matches and 46 percent of the individual games

 

 

Across these games, the researchers found that the robot arm won 45 percent of the matches and 46 percent of the individual games. Against beginners, it won all of its matches, and against intermediate players it won 55 percent of them.
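As a back-of-the-envelope reading of those percentages, using only the figures quoted in this article (29 opponents, three games each), the totals work out roughly as follows.

```python
players = 29                  # human opponents, one match each
games_per_match = 3           # as described in the evaluation setup
match_win_rate = 0.45
game_win_rate = 0.46

matches_won = round(players * match_win_rate)                 # ~13 of 29 matches
games_won = round(players * games_per_match * game_win_rate)  # ~40 of 87 games
print(matches_won, games_won)
```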

 

On the other hand, the device lost all of its matches against advanced and advanced plus players, suggesting that the robot arm has already achieved intermediate-level human play on rallies. Looking to the future, the Google DeepMind researchers note that this progress ‘is also only a small step towards a long-standing goal in robotics of achieving human-level performance on many useful real-world skills.’

against the intermediate players, the robot arm won 55 percent of its matches

on the other hand, the device lost all of its matches against advanced and advanced plus players

the robot arm has already achieved intermediate-level human play on rallies

 

 

project info:

 

group: Google DeepMind | @googledeepmind

researchers: David B. D’Ambrosio, Saminda Abeyruwan, Laura Graesser, Atil Iscen, Heni Ben Amor, Alex Bewley, Barney J. Reed, Krista Reymann, Leila Takayama, Yuval Tassa, Krzysztof Choromanski, Erwin Coumans, Deepali Jain, Navdeep Jaitly, Natasha Jaques, Satoshi Kataoka, Yuheng Kuang, Nevena Lazic, Reza Mahjourian, Sherry Moore, Kenneth Oslund, Anish Shankar, Vikas Sindhwani, Vincent Vanhoucke, Grace Vesom, Peng Xu, and Pannag R. Sanketi