News Release

ICRA 2019 preview: bots, drones and neural nets

San Diego, Calif., May 13, 2019 -- From techniques to improve long-distance surgery, to better ways of getting robots to work with humans in manufacturing settings, to a testing platform for UAVs, engineers at the University of California San Diego will make a strong showing at the 2019 International Conference on Robotics and Automation (ICRA), May 20 to 24 in Montreal, Canada.

The event is the flagship conference of the IEEE Robotics and Automation Society and a premier international forum for robotics researchers to present their work. Established in 1984 and held annually, the conference brings together experts in robotics and automation for technical communication through presentations and discussions. Henrik Christensen, director of UC San Diego’s Contextual Robotics Institute, is the co-chair of the conference’s government forum.

Below is a summary of the research papers that UC San Diego researchers will be presenting.

Overcoming delays in long-distance surgery

Telesurgery is a technology that could enable surgeons to operate remotely on patients at distant locations, whether across a city, a country, or the globe. The major hurdle preventing this from becoming a reality is the signal delay in transmitting commands from a surgeon’s console to the robot at the patient’s bedside, and the video back to the surgeon. With even one second of delay, it becomes nearly impossible to coordinate the robotic instruments and perform surgery. Roboticists and surgeons at UC San Diego are developing solutions to overcome this delay. Michael Yip, a professor of electrical and computer engineering at the Jacobs School, and his team are presenting two papers tackling this issue at ICRA 2019.

One paper describes an augmented reality system that predicts where instruments should go before they are moved and overlays these positions on a screen, so a surgeon can see, in real time, where the instruments effectively are, without the effect of delay. The researchers are also working on visual-haptic feedback that predicts and displays how much force remotely controlled instruments are applying to tissues. To test the AR system, the researchers had 10 participants remotely control a da Vinci surgical robot to perform a peg transfer task while experiencing a one-second delay. On average, the AR system cut the users’ time to complete the task by 19 percent.
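The core of a predictive display can be sketched as simple local bookkeeping: forward-simulate the commands still in transit on top of the last confirmed robot state. The class below is an illustrative 1-D toy, not the paper’s actual system, which overlays full 3-D instrument renderings on the endoscope video:

```python
from collections import deque

class PredictiveOverlay:
    """Toy 1-D predictive display: show where the instrument *will* be
    by applying in-flight commands on top of the last confirmed state."""

    def __init__(self):
        self.in_transit = deque()  # commands sent but not yet confirmed executed
        self.remote_pos = 0.0      # last position confirmed by delayed feedback

    def send(self, delta):
        # A command leaves the console; it won't execute for a full round trip.
        self.in_transit.append(delta)

    def confirm(self, pos):
        # Delayed feedback arrives: the oldest in-flight command has executed.
        self.remote_pos = pos
        if self.in_transit:
            self.in_transit.popleft()

    def predicted_pos(self):
        # Overlay = confirmed state plus the effect of commands still in flight.
        return self.remote_pos + sum(self.in_transit)
```

The overlay therefore tracks the operator’s intent instantly, while the confirmed state trails behind by the network delay.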

Another paper presents a different set of approaches, called motion scaling solutions, to counteract the signal delay in remote telesurgery. Motion scaling allows an operator to remotely control a robot (such as the robotic arms on a da Vinci surgical system) using large, natural motions, which the robot then scales down into tiny, millimeter-sized movements. The scaling can be controlled adaptively, so that risky maneuvers performed over long distances are reduced to safer motions. In this paper, the researchers propose three new motion scaling solutions that reduced operator errors during delayed telesurgery. To test them, the researchers had 17 participants perform a peg transfer task using a da Vinci surgical robot while experiencing a 750 millisecond delay. Of the three solutions, participants performed best with velocity scaling, which on average decreased weighted error by 29 percent at the cost of increasing task completion time by 22 percent.
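The velocity scaling idea can be sketched in a few lines: the faster the operator moves, the more the motion is attenuated before it reaches the robot. The function below is an illustrative toy; the scale factors, reference speed, and control law are assumptions, not the paper’s actual formulation:

```python
import numpy as np

def velocity_scaled_step(operator_delta, dt, base_scale=0.2,
                         v_ref=0.05, min_scale=0.05):
    """Scale an operator's hand displacement down for the robot arm.

    The faster the operator moves, the more the motion is attenuated,
    so fast, risky maneuvers arrive at the robot as slower, safer ones.
    (Illustrative sketch only; parameters are assumed.)
    """
    speed = np.linalg.norm(operator_delta) / dt      # hand speed in m/s
    # Shrink the scale factor as speed rises above the reference speed.
    scale = base_scale * v_ref / max(speed, v_ref)
    return max(scale, min_scale) * operator_delta

# A fast 10 cm hand motion over 0.1 s maps to millimeters at the robot.
step = velocity_scaled_step(np.array([0.10, 0.0, 0.0]), dt=0.1)
```

Slow, deliberate motions pass through at the base scale, while fast motions are damped further, which is one way to trade completion time for fewer errors under delay.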

“Augmented Reality Predictive Displays to Help Mitigate the Effects of Delayed Telesurgery”

Florian Richter, Yifei Zhang, Yuheng Zhi, Ryan K. Orosco and Michael C. Yip

“Motion Scaling Solutions for Improved Performance in High Delay Surgical Teleoperation”

Florian Richter, Ryan K. Orosco and Michael C. Yip

Better activity recognition for robots in manufacturing

In safety-critical environments, robots need to reliably recognize human activity to be effective and trustworthy teammates. Computer scientists at the University of California San Diego and the Massachusetts Institute of Technology explored how motion granularity (fine vs. gross) and sensor type (wearable sEMG vs. motion capture) affect classification accuracy. To the team’s knowledge, this is the first time the question has been studied thoroughly. They found motion capture is up to 37 percent more accurate for gross motion recognition, while wearables are up to 28 percent more accurate for fine motion recognition. The results suggest the two sensor modalities are complementary, and roboticists may benefit from employing both. The data for the study is available in the new UCSD-MIT Human Motion dataset. “Our findings will help guide researchers in numerous fields, including learning from demonstration and grasping, to effectively choose sensor modalities that are most suitable for their applications,” the researchers write.

“Activity recognition in manufacturing: The role of motion capture and sEMG+inertial wearables in detecting fine vs. gross motion”

Alyssa Kubota, Tariq Iqbal, Julie A. Shah, Laurel D. Riek

A neural network-based motion planning algorithm for fast, collision-free paths

Getting robots to move around and perform a task without hitting anything is no small feat. This so-called “motion planning” problem is computationally complex and takes time. For a robot, even just picking up a cup on a crowded desk may require anywhere from seconds to a few minutes of computation. And for autonomous vehicle applications, where reacting to obstacles is critical, planning times of seconds or minutes are unacceptable. In this paper, researchers present a neural network-based motion planning algorithm that generates collision-free paths in milliseconds to at most one second, regardless of how many obstacles are present or how they are arranged. It is based on the principle that human brains naturally encode how to move given a target and an observation of the space ahead, and it is 150 times faster than state-of-the-art motion planning algorithms. The planning network creates paths in a stepwise manner: it encodes the observed obstacle space, then combines that information with the robot’s current state and goal to produce a next step that leads the robot closer to its end configuration. It generalizes to new, unseen obstacle environments and can easily be integrated with other motion planning algorithms. The researchers evaluated the planning network in various 2D and 3D environments, including planning for a seven-degree-of-freedom Baxter robot manipulator.
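The stepwise planning loop described above can be sketched as follows, with `encode_obstacles` and `next_step_net` standing in for the trained encoder and planning networks (hypothetical callables here, not the paper’s actual models):

```python
import numpy as np

def plan_stepwise(encode_obstacles, next_step_net, start, goal,
                  obstacle_cloud, tol=0.05, max_steps=200):
    """Sketch of neural stepwise planning: encode the obstacle space once,
    then repeatedly ask the planner for the next configuration toward
    the goal. Illustrative only; the real networks are learned models."""
    z = encode_obstacles(obstacle_cloud)        # latent obstacle code
    path, state = [start], start
    for _ in range(max_steps):
        state = next_step_net(z, state, goal)   # one step toward the goal
        path.append(state)
        if np.linalg.norm(state - goal) < tol:  # close enough: done
            return path
    return None                                  # no path within budget
```

With any stand-in planner that makes progress toward the goal (for example, one that covers a fixed fraction of the remaining distance each step), the loop converges in a handful of iterations, which is why per-query planning can run in milliseconds.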

“Motion Planning Networks”

Ahmed H. Qureshi, Anthony Simeonov, Mayur J. Bency and Michael C. Yip

A system for better robot software management

Computer scientists at UC San Diego have developed a new system to better manage, schedule and monitor software components for service robots. The system, named Rorg, reduces CPU load by 45.5 percent and memory usage by 16.5 percent on average. Rorg lets developers pack software into self-contained images and run them in isolated environments using Linux containers. It also allows the robot to turn software components on and off on demand, so they do not compete for resources. Linux containers are already widely used in cloud environments, where tools such as Kubernetes manage distributed services; Rorg is a similar tool targeted at robotics applications. But adoption in service robot systems has run into obstacles, including resource constraints and performance requirements. To address these issues, the computer scientists developed a programmable container management interface and a resource time-sharing mechanism integrated with ROS, the Robot Operating System. They tested Rorg on an autonomous tour guide robot that runs 41 software components.
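The on-demand idea can be illustrated with a toy manager that starts and stops Docker containers as components are requested and released. This is a sketch in the spirit of Rorg, not its actual API; the component names and images in the example are made up:

```python
import subprocess

class ComponentManager:
    """Toy sketch of on-demand lifecycle management for containerized
    robot software (in the spirit of Rorg; not its actual interface).
    Each component runs in its own container and is started only when
    requested, so idle components don't compete for CPU and memory."""

    def __init__(self):
        self.running = set()

    def request(self, name, image):
        if name not in self.running:            # start on first demand only
            subprocess.run(["docker", "run", "-d", "--rm",
                            "--name", name, image], check=True)
            self.running.add(name)

    def release(self, name):
        if name in self.running:                # stop when no longer needed
            subprocess.run(["docker", "stop", name], check=True)
            self.running.remove(name)
```

A real system would additionally track which components depend on which (so a component is stopped only when its last consumer releases it) and schedule their CPU shares, which is where the reported resource savings come from.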

“Rorg: Service robot software management with Linux containers”

Shengye Wang, Xiao Liu, Jishen Zhao, Henrik I. Christensen

A testing platform for UAVs tethered to an unmanned vehicle

Engineers have created a low-cost testing platform that simulates the conditions a UAV would experience while tethered to an unmanned boat. The three-degrees-of-freedom device is capable of replicating a wave’s roll, pitch and heave motions, and of simulating ocean waves up to 2.5 meters (8 ft. 2 in.) tall. The device performed well during tests, deviating less than two degrees from the desired motions. The platform is an alternative to costly tests in the field. “This ocean wave and boat motion replicator design is low cost and easily scalable for different payload sizes and wave heights,” write the researchers from the University of California San Diego and SPAWAR.
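As a rough illustration, a single regular wave can be turned into roll, pitch and heave setpoints for such a platform using phase-shifted sinusoids. The amplitudes, period, and phase offsets below are assumptions for illustration, not parameters from the paper, and a real sea state superposes many wave components:

```python
import numpy as np

def wave_motion(t, height=2.5, period=6.0,
                roll_amp=np.radians(10), pitch_amp=np.radians(5)):
    """Roll, pitch and heave setpoints for a 3-DOF motion platform,
    modeling one regular wave as phase-shifted sinusoids.
    (Illustrative only; amplitudes, period and phases are assumed.)"""
    w = 2 * np.pi / period                  # wave angular frequency (rad/s)
    heave = (height / 2) * np.sin(w * t)    # vertical displacement (m)
    roll = roll_amp * np.sin(w * t + np.pi / 2)    # rotation about fore-aft axis
    pitch = pitch_amp * np.sin(w * t + np.pi / 4)  # rotation about beam axis
    return roll, pitch, heave
```

Streaming setpoints like these to the platform’s three actuators lets a lab rig reproduce the boat deck’s motion without ever going to sea.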

“Design and parameter optimization of a 3-PSR parallel mechanism for replicating wave and boat motion”

Kurt Talke, Dylan Drotman, Nicholas Stroumtsos, Mauricio de Oliveira, Thomas Bewley

Media Contacts

Ioana Patringenaru
Jacobs School of Engineering

Liezel Labios
Jacobs School of Engineering