Robot Swarms + Humans

Q: What does it take to send a swarm of robotic scouts into a disaster area and get them to report back?

A: Complex distributed computing and controls algorithms.

Developing these kinds of algorithms is the challenge that Jorge Cortes and Sonia Martinez, both professors of mechanical engineering at the Jacobs School, have decided to tackle.

While it can take up to 30 people to safely operate a single unmanned vehicle, Cortes, Martinez and their research team are trying to “invert the pyramid” and enable one person to control more than 20 robots.

To develop safe and effective techniques for a human to control a swarm of robotic scouts, the researchers take into account the dynamics of each robot, its decision-making process and robot-to-robot communications. A single input from a human operator can set off a complex cascade of effects across the swarm, and these effects must be understood before the swarm can be controlled safely.

The team’s first task focused on optimal swarm deployment in known environments. The researchers developed algorithms that get the robots to optimally cover a specified area, and the network of robots adapts when one robot fails or another is added. The algorithms can also take a wide range of factors into account, such as the robots’ battery levels, assigning larger territories to the robots with the most battery life.
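The battery-aware coverage idea can be sketched as a Lloyd-style update over a battery-weighted Voronoi partition: each robot repeatedly claims the part of the area it can reach most cheaply, then moves toward the center of its claim. This is an illustrative toy in Python, not the team's actual algorithm; the function names, grid resolution, step size and battery-discounting rule are all assumptions made for the sketch.

```python
import numpy as np

def coverage_step(positions, batteries, grid, step=0.5):
    """One Lloyd-style coverage update over a discretized area."""
    # Battery-discounted distance from every grid point to every robot;
    # a robot with more battery "reaches" points more cheaply.
    dists = np.linalg.norm(grid[:, None, :] - positions[None, :, :], axis=2)
    owner = np.argmin(dists / batteries[None, :], axis=1)
    new_positions = positions.copy()
    for i in range(len(positions)):
        cell = grid[owner == i]
        if len(cell):  # move partway toward the centroid of the owned region
            new_positions[i] += step * (cell.mean(axis=0) - positions[i])
    return new_positions, owner

# Three robots on a unit square; robot 0 has twice the battery of the others.
xs = np.linspace(0, 1, 25)
grid = np.array([(x, y) for x in xs for y in xs])
pos = np.array([[0.2, 0.2], [0.8, 0.2], [0.5, 0.8]])
bat = np.array([2.0, 1.0, 1.0])
for _ in range(30):
    pos, owner = coverage_step(pos, bat, grid)
counts = np.bincount(owner, minlength=3)  # grid cells assigned to each robot
```

Because the distance is discounted by battery level, the high-battery robot wins a larger share of the grid, which mirrors the article's point about handing bigger territory to robots with more battery life. The partition also recomputes from scratch each step, so dropping or adding a robot is handled automatically.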


Left to right: Mike Ouimet (Ph.D. '14) and grad student Aaron Ma

Cortes and Martinez’s team used Turtlebots and quadcopters as a testbed for the algorithms. (See cover image.) The team includes graduate students Aaron Ma and Evan Gravelle and undergraduate students from several majors, including mechanical and electrical engineering and computer science.

Ma has developed an Android app that lets a user easily specify a density function over an area, which is then conveyed to the robotic swarm. The density function, which marks the regions the user cares about most, is the mechanism that makes human-swarm coordination toward a common goal feel effortless.

Next steps include applying these ideas in complex unknown scenarios, equipping the robots with better scene understanding tools and maybe even developing a brain-machine interface to control them.


Pedestrian Detector

Electrical engineers have created a new pedestrian detection system that is faster and more accurate than existing systems. The technology, which could be used in cars, robotics, security cameras, and image and video search systems, is unique in that it analyzes video in near real time (2 to 4 frames per second) with close to half the errors of existing systems.

The new system combines a traditional computer vision classification architecture, known as cascade detection, with deep learning models. Cascade detection works by cropping out areas in an image that are clearly not pedestrians, like the sky or empty road. While this method is fast, it isn’t powerful enough to distinguish between a pedestrian and very similar objects like trees, which the algorithm could recognize as having person-like features such as shape, color and contours.

On the other hand, deep learning models are capable of complex pattern recognition, which they can perform after being trained with hundreds or thousands of examples, but they work too slowly for real-time implementation.

Electrical engineering professor Nuno Vasconcelos and his team developed an algorithm that takes the best of both worlds: it uses the quicker and simpler cascade detection technology to filter out most of the non-pedestrian parts of an image, then uses deep learning models to process the more complex remainders of the image.
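The two-stage idea can be illustrated with a toy pipeline: a cheap filter discards most candidate windows, and only the survivors reach the expensive classifier. This is not the authors' implementation; the variance filter, the template "deep" stage and every threshold below are stand-ins chosen to show the structure, with tiny arrays playing the role of image windows.

```python
import numpy as np

rng = np.random.default_rng(0)

def cheap_stage(window, thresh=0.01):
    """Fast cascade-style filter: flat regions (sky, empty road) have
    almost no pixel variance and are rejected outright."""
    return window.var() > thresh

def deep_stage(window, thresh=0.8):
    """Stand-in for a slow deep model: correlate the window with a crude
    upright-person template (bright middle columns, dark sides)."""
    template = np.ones_like(window)
    template[:, [0, -1]] = 0
    score = (window * template).sum() / template.sum()
    return score > thresh

def detect(windows):
    survivors = [w for w in windows if cheap_stage(w)]  # cascade prunes most windows
    hits = [w for w in survivors if deep_stage(w)]      # deep model runs only on the rest
    return len(survivors), len(hits)

# 90 flat "sky" windows, 5 textured distractors, 5 windows matching the template.
flat = [np.full((8, 4), 0.2) for _ in range(90)]
textured = [rng.uniform(0, 1, (8, 4)) for _ in range(5)]
people = [np.ones((8, 4)) * np.array([0.0, 1.0, 1.0, 0.0])
          + rng.uniform(0, 0.1, (8, 4)) for _ in range(5)]
survivors, hits = detect(flat + textured + people)
```

Of the 100 windows, the cheap stage forwards only the 10 textured ones, so the expensive stage runs on a tenth of the input, which is the speed/accuracy trade the hybrid architecture is after.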

According to Vasconcelos, this is the first algorithm to combine stages of deep learning and cascade detection. “The results we’re obtaining with this new algorithm are substantially better for real-time, accurate pedestrian detection,” he said.

The algorithm currently only works for binary detection tasks, such as pedestrian detection, but the researchers are working to make it detect different types of objects simultaneously.
