
18.02.2021 10:50

Autonomous and safe: How humans and robots work together in dangerous environments

Linda Treugut, Office
Plattform Lernende Systeme – the platform for artificial intelligence

Whether in rescue missions, in space or in the deep sea: self-learning robots that use artificial intelligence (AI) to find their way in unknown situations will in the future be able to support people with tasks in dangerous environments. On the one hand, the AI systems must act as autonomously as possible so that people can stay at a safe distance from the danger zone. On the other hand, people must be able to intervene at any time in problematic situations. This requires robots that flexibly adapt their degree of autonomy to the situation at hand during an operation. A white paper from Plattform Lernende Systeme shows how this can work.

Munich, February 18, 2021 – AI systems and self-learning robots already work together with people in many areas. In industrial production or automated driving, a certain degree of autonomy is defined for the various tasks during the development of the AI system. What the division of labor between humans and the AI system looks like, and how independently the system acts, depends on its capabilities, the type of task and the environment.

In contrast to the strictly regulated application areas of mobility and industry, the use of a self-learning system in natural disasters or firefighting operations is much harder to plan in advance. These so-called hostile environments are highly diverse and generally unknown. Missions there involve a high degree of variability, combined with very different requirements for autonomy in each situation, according to the white paper. As a rule of thumb, its authors state: a lack of system competence always means increased human intervention, up to and including remote control of the system.

"When self-learning robots are used in hostile environments, the following applies: as much autonomy as possible, as little human intervention as necessary. The AI system can only protect people from danger if it performs its tasks independently and reliably. At the same time, unpredictable situations quickly arise in dangerous environments in which people must be able to intervene, for example when deciding which fire victim is to be rescued first," says co-author Jürgen Beyerer, head of the Fraunhofer Institute of Optronics, System Technologies and Image Exploitation (IOSB), Professor of Interactive Real-Time Systems at the Karlsruhe Institute of Technology (KIT) and head of the working group Hostile Environments of Plattform Lernende Systeme.

Variable degrees of autonomy enable adaptation to unpredictable situations

In addition to ethically problematic questions, legally unclear situations can arise, or the autonomous robot may need technical support, for example because it has become stuck. To master the particular dynamic demands that hostile environments place on autonomous systems, the systems must be able to adapt their degree of autonomy to the respective situation during a mission, or the human must be able to adapt it for them. Instead of the rigid concepts of autonomy levels established in other application areas, variable and continuously adjustable degrees of autonomy should be implemented for the use of self-learning systems in hostile environments, the authors recommend.
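The idea of a continuously adjustable degree of autonomy, in contrast to fixed autonomy levels, can be sketched in a few lines of Python. The class and field names below are illustrative assumptions, not part of the white paper; the update rule simply encodes its rule of thumb that lower system competence or higher uncertainty calls for more human intervention:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """Hypothetical snapshot of the operating context."""
    environment_uncertainty: float  # 0.0 (well known) .. 1.0 (completely unknown)
    task_competence: float          # 0.0 (no competence) .. 1.0 (fully competent)

class AutonomyController:
    """Holds a continuously variable degree of autonomy in [0.0, 1.0]
    instead of a fixed, discrete autonomy level."""

    def __init__(self) -> None:
        self.autonomy = 1.0  # start as autonomous as possible

    def update(self, situation: Situation) -> float:
        # Rule of thumb from the white paper: lower system competence
        # (or higher environmental uncertainty) means more human intervention.
        raw = situation.task_competence * (1.0 - situation.environment_uncertainty)
        self.autonomy = max(0.0, min(1.0, raw))
        return self.autonomy

    def human_override(self, level: float) -> None:
        # The human remains the superordinate authority and can set
        # the degree of autonomy directly at any time.
        self.autonomy = max(0.0, min(1.0, level))

controller = AutonomyController()
controller.update(Situation(environment_uncertainty=0.5, task_competence=0.5))
controller.human_override(0.1)  # operator takes near-full manual control
```

Because the degree of autonomy is a continuous value rather than a fixed level, it can drift smoothly with the situation during a mission while still allowing an immediate human override.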

In a successful cooperation with an autonomous system, people should not, if possible, have to follow its operation step by step. Rather, by sharing the work with the robot, they should gain time for more important tasks. It is therefore desirable that autonomous systems independently and reliably inform and involve people when they encounter problems they cannot solve themselves, requesting help, teleoperation or decisions from humans.

"For learning systems to operate adequately in life-threatening environments such as disaster relief or space missions, not only their autonomy matters but also the respective context of action in conjunction with the competence of the system. Deciding on the right degree of autonomy must take these factors into account. If learning systems are to make this decision themselves, for example to adjust their degree of autonomy on their own, we have to develop systems that can analyze their competencies and relate them to the action at hand. In this way we enable them to assess whether their skills are sufficient to solve a problem or whether human support is required," says co-author Sirko Straube from the German Research Center for Artificial Intelligence (DFKI) in Bremen, member of the working group Hostile Environments of Plattform Lernende Systeme.

At present, autonomous systems in dangerous environments do not yet have the ability to independently assess their situation and compare it with their competencies; there is considerable need for research here. For the time being, autonomous systems will not be deployed without humans as monitors. In the future, too, humans will remain the superordinate authority that, in case of doubt, always retains the final decision, according to the authors.
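The kind of competence self-assessment described above can be sketched as a simple comparison between the demands of a situation and the system's own skill profile. All names, skill labels and the safety margin below are illustrative assumptions, not taken from the white paper:

```python
def assess_and_act(required_skills: dict[str, float],
                   own_skills: dict[str, float],
                   margin: float = 0.1) -> str:
    """Compare the demands of the current situation with the system's own
    competence profile and decide whether to proceed autonomously or to
    request human support (skill names and the margin are illustrative)."""
    for skill, required in required_skills.items():
        if own_skills.get(skill, 0.0) < required + margin:
            # Competence gap detected: inform the human operator and
            # request help, teleoperation, or a decision.
            return f"request_human_support:{skill}"
    # All demands are covered with a safety margin: act autonomously.
    return "act_autonomously"

print(assess_and_act({"grasping": 0.6}, {"grasping": 0.9}))  # act_autonomously
print(assess_and_act({"climbing": 0.8}, {"grasping": 0.9}))  # request_human_support:climbing
```

The point of the sketch is the decision structure, not the numbers: a system that can relate its competencies to the task at hand can hand control back to the human before it fails, rather than after.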

About the white paper

The white paper "Competent in action. Variable autonomy of learning systems in hostile environments" was written by experts from the working group Hostile Environments of Plattform Lernende Systeme. It is available for free download on the platform's website.
How learning systems can support emergency services in the future is illustrated by the platform's application scenarios "Quick help during rescue operations" and "Autonomously on the move under water".
In a short interview, co-author Jürgen Beyerer explains in detail how the requirements for variably autonomous robots can be implemented and where research is still needed. The interview may be used freely for editorial purposes.

About Plattform Lernende Systeme

Plattform Lernende Systeme was founded in 2017 by the Federal Ministry of Education and Research (BMBF) at the suggestion of the Autonomous Systems forum of the Hightech Forum and acatech. It brings together experts in artificial intelligence from science, business, politics and civil society. In working groups, they develop options for action and recommendations for the responsible use of learning systems. The aim of the platform is to promote social dialogue as an independent broker, to stimulate cooperation in research and development, and to position Germany as a leading technology provider for learning systems. The platform is headed by Federal Minister Anja Karliczek (BMBF) and Karl-Heinz Streibich (President of acatech).

Original publication:

https://...

Additional Information:

https://Einsatz.htm... – application scenario "Quick help during rescue operations"
https://... – application scenario "Autonomously on the move under water"
https://... – interview with co-author Jürgen Beyerer

Features of this press release:
Information technology, mechanical engineering, economy
Research / knowledge transfer, scientific publications