Robots learn by checking in on team members

Mohamed Abdelkader is one of the researchers who developed an algorithm that enables a team of unmanned aerial vehicles to work together in real time to intercept an attacker drone in a capture-the-flag scenario. Credit: © 2018 Kuat Telegenov

The software and hardware needed to coordinate a team of unmanned aerial vehicles (UAVs) that can communicate and work toward a common goal have recently been developed by KAUST researchers.

"Giving UAVs more autonomy makes them an even more valuable resource," says Mohamed Abdelkader, who worked on the project with his colleagues under the guidance of Jeff Shamma. "Monitoring the progress of a sent out on a specific task is far easier than remote-piloting one yourself. A team of drones that can communicate among themselves provides a tool that could be used widely, for example, to improve security or capture images simultaneously over a large area."

The researchers trialled a Capture the Flag game scenario, whereby a team of defender drones worked together within a defined area to intercept an intruder drone and prevent it from reaching a specific place. To give the game more authenticity, and to check if their algorithms would work under unpredictable conditions, the intruder drone was remote-piloted by a researcher.

Abdelkader and the team quickly dismissed the idea of having a central base station that the drones would communicate with. Instead, they custom-built UAVs and incorporated a lightweight, low-power computing and wi-fi module on each one so that they could talk to each other during flight.

"A centralized architecture takes significant computing power to receive and relay multiple signals, and it also has a potential single point of total failure—the ," explains Shamma. "Instead, we designed a distributed architecture in which the drones coordinate based on local information and peer-to-peer communications."

The team's algorithm aims to strike the right level of peer-to-peer messaging, neither too much nor too little, and to deliver rapid reaction times without heavy computation. This allows it to work effectively in real time while the drones are chasing an intruder.

"Each of our drones makes its own plan based on a forecast of optimistic views of their teammates' actions and pessimistic views of the opponent's actions," explains Abdelkader. "Since these forecasts may be inaccurate, each drone executes only a portion of its plan, then reassesses the situation before re-planning."

Their algorithm worked well in both indoor and outdoor arenas under different attack scenarios. Abdelkader hopes their software, which is now available as open source, will provide a test bed for multiple applications. The KAUST team hope to enable the drones to work in larger outdoor areas and to improve the software by incorporating adaptive machine-learning techniques.

More information: Abdelkader, M., Lu, Y., Jaleel, H. & Shamma, J. Distributed real time control of multiple UAVs in adversarial environment: algorithm and flight testing results. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 21-25, 2018, pp. 6659-6664.

Citation: Robots learn by checking in on team members (2018, June 13) retrieved 19 April 2024 from https://phys.org/news/2018-06-robots-team-members.html