July 1, 2022 feature
A model that allows robots to follow and guide humans in crowded environments

Assistance robots are typically mobile robots designed to assist humans in malls, airports, health care facilities, home environments and various other settings. Among other things, these robots could help users find their way around unfamiliar environments, for instance by guiding them to a specific location or sharing important information with them.
While the capabilities of assistance robots have improved significantly over the past decade, the systems that have so far been implemented in real-world environments are not yet capable of following or guiding humans efficiently within crowded spaces. In fact, training robots to track a specific user while navigating a dynamic environment characterized by many randomly moving "obstacles" is far from a simple task.
Researchers at the Berlin Institute of Technology have recently introduced a new model based on deep reinforcement learning that could allow mobile robots to guide a specific user to a desired location, or to follow them around while carrying their belongings, all within a crowded environment. This model, introduced in a paper pre-published on arXiv, could help to significantly enhance the capabilities of robots in malls, airports and other public places.
"The task of guiding or following a human in crowded environments, such as airports or train stations, to carry weight or goods is still an open problem," Linh Kästner , Bassel Fatloun , Zhengcheng Shen , Daniel Gawrisch and Jens Lambrecht wrote in their paper. "In these use cases, the robot is not only required to intelligently interact with humans, but also to navigate safely among crowds."
When they trained their model, the researchers also included semantic information about the states and behaviors of human users (e.g., talking, running, etc.). This allows their model to make decisions about how to best assist users, moving alongside them at a similar pace and without colliding with other humans or nearby obstacles.
"We propose a deep reinforcement learning based agent for human-guiding and -following tasks in crowded environments," the researchers wrote in their paper. "Therefore, we incorporate semantic information to provide the agent with high-level information like the social states of humans, safety models, and class types."
To test their model's effectiveness, the researchers carried out a series of tests using arena-rosnav, a two-dimensional (2D) simulation environment for training and assessing deep learning models. The results of these tests were promising, as the artificial agent in the simulated scenarios could both guide humans to specific locations and follow them, adjusting its velocity to that of the user and avoiding nearby obstacles.
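As a loose illustration of the velocity-adaptation behavior described above, the hand-coded heuristic below matches the user's walking speed while backing off when an obstacle gets close. It is a generic sketch under assumed parameters, not the learned policy from the paper and not part of the arena-rosnav API.

```python
# Minimal illustrative sketch (not the learned policy): pick a robot speed
# that tracks the guided human's pace and scales down near obstacles.
def choose_speed(human_speed, min_obstacle_dist, v_max=1.5, safe_dist=1.0):
    """Match the human's speed, capped at v_max, and reduce it linearly
    when the closest obstacle lies inside the safety distance."""
    target = min(human_speed, v_max)
    if min_obstacle_dist < safe_dist:
        target *= max(min_obstacle_dist / safe_dist, 0.0)
    return target

print(choose_speed(human_speed=1.2, min_obstacle_dist=0.4))  # slows down near an obstacle
print(choose_speed(human_speed=1.2, min_obstacle_dist=2.0))  # keeps pace with the human
```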
"We evaluate our proposed approach against a benchmark approach without semantic information and demonstrated enhanced navigational safety and robustness," the researchers wrote in their paper. "Moreover, we demonstrate that the agent could learn to adapt its behavior to humans, which improves the human-robot interaction significantly."
The deep reinforcement learning model developed by this team of researchers appeared to work well in simulations, so its performance will now need to be validated using physical robots in real-world environments. In the future, this work could pave the way toward the creation of more efficient robot assistants for airports, train stations, and other crowded public spaces.
© 2022 Science X Network