Defending human-bot teams against adversaries goal of computer science grant
|Computer science professor Kamalika Chaudhuri is part of a MURI grant going to multiple institutions. Her work is on Cohesive and Robust Human-Bot Cybersecurity Teams.|
March 16, 2021-- UC San Diego computer science professor Kamalika Chaudhuri is part of a multi-university team that has won a prestigious US Department of Defense Multidisciplinary University Research Initiative (MURI) Award to develop rigorous methods for robust human-machine collaboration against adversaries.
Chaudhuri will receive $750,000 to fund her research on the project titled Cohesive and Robust Human-Bot Cybersecurity Teams, which aims to establish rigorous foundations for human-bot cybersecurity teams (HBCTs), with the goal of a cohesive team that is not vulnerable to active human and machine learning (ML) adversaries.
“We now know how to develop machine learning methods that are robust to adversaries, yet in many applications, humans work in coordination with machine learning software, and adversaries can still disrupt the process,” said Chaudhuri, whose research interests lie in the foundations of trustworthy machine learning. “This project will investigate in detail how that can happen, and how we can design algorithms and tools to be robust to these adversaries.”
Cybersecurity is among the most challenging tasks the Department of Defense (DoD) faces today. A typical human analyst in a cybersecurity task must deal with a plethora of information, such as intrusion logs, network flows, executables, and provenance information for files. Real cybersecurity scenarios are even more challenging: an active adversarial environment, with volumes of information and techniques that neither humans nor machines can handle alone.
In addition to human analysts, ML bots have become part of these cybersecurity teams. The bots reduce the burden on human analysts by filtering information, freeing up cognitive resources for tasks related to the high-level mission.
The team will first focus on building trust within the human-bot cybersecurity team (HBCT) by investigating techniques that produce explanations of how the ML bots work. These explanations will be presented in a vocabulary appropriate for the analysts and will be specific to the HBCT's task, giving the human analysts the insight they need to trust the ML model and reduce manual effort.
The group will also research how analysts integrate information to arrive at decisions, as well as their mental models of how the bots operate. This will allow the researchers to take a step toward automating the decision-making process. Moreover, these mental models can help in designing robust ML models that are more specific to the HBCT's task.
Adaptability will be key as well. Human-bot team dynamics change as new adversaries with different capabilities arise, and adversaries adapt in response to new team strategies. This interactive learning by adversaries must be taken into account to develop methods that allow the entire team to adapt to adversaries in an interactive manner.
Chaudhuri and her research group will develop methods that improve how well current machine learning models generalize to rare inputs, bolster the quality of the explanations those models provide, and offer principled solutions for adversarial robustness.
“This will build on some of our prior and current work in the area of interactive learning, as well as principled methods for defending against adversaries,” she said.
Since its inception in 1985, the tri-Service MURI program has convened teams of investigators with the hope that collective insights drawn from research across multiple disciplines could facilitate the advancement of newly emerging technologies and address the Department’s unique problem sets. Complementing the Department’s single-investigator basic research grants, the highly competitive MURI program has made immense contributions to both national defense and society at large. Innovative technological advances from the MURI program help drive and accelerate current and future military capabilities and find multiple applications in the commercial sector.
The research team, led by Somesh Jha at the University of Wisconsin-Madison, comprises seven universities: the University of Wisconsin-Madison, Carnegie Mellon University, University of California San Diego, Pennsylvania State University, University of Melbourne, Macquarie University, and University of Newcastle. Benjamin Rubinstein of the University of Melbourne will lead the Australian team, or AUSMURI, funded by the Australian Government. The team brings together diverse expertise spanning computer security, machine learning, psychology, decision sciences, and human-computer interaction.