News Release

ICRA 2022 preview: from insect-inspired robots to algorithms that help robots navigate and interact

April 11, 2022 -- From algorithms that help robots better navigate and interact with the world and with humans, to robots inspired by insects, researchers at the University of California San Diego are making significant contributions to the field of robotics at the 2022 International Conference on Robotics and Automation (ICRA), taking place May 23 to 27, 2022, in Philadelphia.

The conference brings together the world’s top researchers and leading companies to share ideas and advances in the field. This year’s theme is “The Future of Work.”

Henrik Christensen, director of the UC San Diego Contextual Robotics Institute, is the chair of forums for ICRA 2022 and organized six of them: The Future of Work; Industry; Venture Capital; National Programs; National Research Strategies; and an entrepreneurship event.

“ICRA is the premier robotics conference, and after two years of being virtual, the objective is to be back to in-person presentations and networking. It is encouraging to see such a diverse set of strong contributions from UC San Diego, from new mechanisms to medical systems to next-generation transportation,” says Henrik Christensen.

Nikolay Atanasov, a professor in the Department of Electrical and Computer Engineering, is the conference’s chair of workshops, overseeing 51 events. He is also giving a talk titled “Signed Directional Distance Functions” at the Robotic Perception and Mapping workshop on May 23, 2022.

Laurel Riek, a professor in the Department of Computer Science and Engineering, will speak at the workshop “Shared Autonomy in Physical Human-Robot Interaction: Adaptability and Trust.” The working title of her talk is “Proximate Human Robot Teaming: Fluent and Trustworthy Interaction.”

Sylvia Herbert, a professor in the Department of Mechanical and Aerospace Engineering, is one of the organizers of the Debates on the Future of Robotics workshop, which tackles topics such as the state of robotics as an academic discipline, its relationship with other fields in computer science and engineering, and its broader social and economic impacts.


Below are abstracts for the UC San Diego papers accepted to the conference. 

P2SLAM: Bearing Based WiFi SLAM for Indoor Robots

Aditya Arun, Roshan Ayyalasomayajula, William Hunter and Dinesh Bharadia, Department of Electrical and Computer Engineering, UC San Diego

A recent spur of interest in indoor robotics has increased the importance of robust simultaneous localization and mapping (SLAM) algorithms in indoor scenarios. This robustness is typically provided by the use of multiple sensors that can correct each other’s deficiencies. In this vein, exteroceptive sensors like cameras and LiDARs, employed for fusion, are capable of correcting the drifts accumulated by wheel odometry or inertial measurement units (IMUs). However, these exteroceptive sensors are deficient in highly structured environments and dynamic lighting conditions. This letter presents WiFi as a robust and straightforward sensing modality capable of circumventing these issues. Specifically, we make three contributions. First, we identify the necessary features to extract from WiFi signals. Second, we characterize the quality of these measurements. Third, we integrate these features with odometry into a state-of-the-art GraphSLAM backend. We present our results in a 25×30 m and a 50×40 m environment and robustly test the system by driving the robot a cumulative distance of over 1225 m in these two environments. We show an improvement of at least 6× compared to odometry-only estimation and perform on par with one of the state-of-the-art visual SLAM systems.
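For readers curious how WiFi measurements plug into a SLAM backend, here is a minimal sketch of the general idea, not the authors’ code: noisy wheel odometry and bearing measurements to a known access point are fused in a small pose-graph least-squares problem. The toy trajectory, noise values, and solver setup are illustrative assumptions.

```python
# Minimal pose-graph sketch: odometry factors + WiFi bearing factors to one
# known access point (AP). All values below are toy assumptions.
import numpy as np
from scipy.optimize import least_squares

AP = np.array([5.0, 2.0])                    # known AP position
odom = [(1.0, 0.0, 0.0)] * 3                 # relative (dx, dy, dtheta) steps

def wrap(a):                                 # wrap an angle to [-pi, pi]
    return np.arctan2(np.sin(a), np.cos(a))

# simulated bearings for a robot driving along +x, with a little noise
truth = np.array([[i, 0.0, 0.0] for i in range(4)], dtype=float)
bearings = [wrap(np.arctan2(AP[1] - y, AP[0] - x) - t) + 0.01
            for x, y, t in truth]

def residuals(flat):
    poses = flat.reshape(-1, 3)              # N poses: (x, y, theta)
    res = list(poses[0])                     # prior pinning the first pose
    for i, (dx, dy, dth) in enumerate(odom): # wheel-odometry factors
        xi, yi, ti = poses[i]
        xj, yj, tj = poses[i + 1]
        c, s = np.cos(ti), np.sin(ti)
        res += [c * (xj - xi) + s * (yj - yi) - dx,
                -s * (xj - xi) + c * (yj - yi) - dy,
                wrap(tj - ti - dth)]
    for i, b in enumerate(bearings):         # WiFi bearing factors
        x, y, t = poses[i]
        res.append(wrap(b - (np.arctan2(AP[1] - y, AP[0] - x) - t)))
    return np.array(res)

sol = least_squares(residuals, np.zeros(12)) # optimize all 4 poses jointly
print(sol.x.reshape(-1, 3))                  # recovered trajectory
```

In the paper, the bearing features are extracted from real WiFi channel measurements and optimized in a GraphSLAM backend at much larger scale.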

Autonomous Actuation of Flapping Wing Robots Inspired by Asynchronous Insect Muscle

James Lynch and Nick Gravish, Department of Mechanical and Aerospace Engineering, UC San Diego
Jeff Gau and Simon Sponberg, School of Physics, Georgia Institute of Technology

In most instances, flapping wing robots have emulated the “synchronous” actuation of insects, in which the wingbeat timing is generated from a time-dependent, rhythmic signal. An understudied area in flapping wing robotics is that of “asynchronous” actuation in which the wingbeat is self-excited through state-dependent feedback. The internal dynamics of asynchronous insect flight muscle enable high-frequency, adaptive wingbeats with minimal direct neural control. In this paper, we investigate how the delayed stretch-activation (dSA) response of asynchronous insect flight muscle can be transformed into a feedback control law for flapping wing robots that results in stable limit cycle wingbeats. We first demonstrate in theory and simulation the mechanism by which asynchronous wingbeats self-excite. Then, we implement the feedback law on a dynamically-scaled robophysical model as well as on an insect-scale robotic flapping wing. Experiments on the large- and small-scale robots demonstrate good agreement with the theory results and highlight how dSA parameters govern wingbeat amplitude and frequency. Lastly, we demonstrate that asynchronous actuation has several advantages over synchronous actuation schemes, including the ability to rapidly adapt or halt wingbeats in response to external loads or collisions through low-level feedback control.
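As a rough intuition for self-excited wingbeats, the toy simulation below (an illustration, not the paper’s model) drives a damped wing oscillator with a delayed, saturating feedback force standing in for delayed stretch-activation. All parameter values are assumptions, chosen so that a tiny perturbation grows into a stable limit cycle rather than being commanded by a rhythmic signal.

```python
# Self-excited oscillation from delayed, state-dependent feedback.
import numpy as np

dt, T = 1e-3, 60.0                 # time step and duration (nondimensional)
c, k = 0.1, 1.0                    # wing damping and stiffness
G, tau = 0.5, 1.5                  # feedback gain and dSA-like response delay
n_delay = int(tau / dt)
steps = int(T / dt)

x = np.zeros(steps)                # wing stroke angle
v = np.zeros(steps)                # stroke velocity
x[0] = 1e-3                        # tiny perturbation; no rhythmic drive signal

for i in range(steps - 1):
    x_delayed = x[i - n_delay] if i >= n_delay else 0.0
    f_muscle = -G * np.tanh(x_delayed)     # delayed, saturating muscle force
    a = f_muscle - c * v[i] - k * x[i]     # unit mass
    v[i + 1] = v[i] + a * dt
    x[i + 1] = x[i] + v[i + 1] * dt        # semi-implicit Euler step

# the perturbation grows and saturates into a steady wingbeat (limit cycle)
print("steady-state amplitude ~", np.abs(x[int(0.9 * steps):]).max())
```

Because the feedback depends on the wing’s own delayed state, the oscillation adapts on its own: changing the load or gain shifts the amplitude and frequency without any change to a drive signal.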

TridentNetV2: Lightweight Graphical Global Plan Representations for Dynamic Trajectory Generation

David Paz, Hao Xiang, Andrew Liang, and Henrik I. Christensen, Department of Computer Science and Engineering, UC San Diego

We present a framework for dynamic trajectory generation for autonomous navigation that does not rely on High Definition (HD) maps as the underlying representation. HD maps have become a key component in most autonomous driving frameworks; they include complete road network information annotated at centimeter-level accuracy, including traversable waypoints, lane information, and traffic signals. Instead, the presented approach models the distributions of feasible ego-centric trajectories in real-time, given a nominal graph-based global plan and a lightweight scene representation. By embedding contextual information such as crosswalks, stop signs, and traffic signals, the new approach achieves low errors across multiple urban navigation datasets that include diverse intersection maneuvers, while maintaining real-time performance and reducing network complexity. The datasets introduced are available online.
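As a sketch of the general idea, and an illustrative stand-in rather than TridentNetV2 itself, the toy network below conditions a trajectory decoder on a coarse waypoint polyline plus a few context flags (e.g., crosswalk, stop sign, signal) instead of an HD map; the layer sizes, input shapes, and flags are assumptions.

```python
# Trajectory generation conditioned on a lightweight global plan + context.
import torch
import torch.nn as nn

class PlanConditionedTrajNet(nn.Module):
    def __init__(self, n_waypoints=10, n_context=3, horizon=20):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.Sequential(
            nn.Linear(n_waypoints * 2 + n_context, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU())
        self.decoder = nn.Linear(128, horizon * 2)   # (x, y) per future step

    def forward(self, waypoints, context):
        h = self.encoder(torch.cat([waypoints.flatten(1), context], dim=-1))
        return self.decoder(h).view(-1, self.horizon, 2)

net = PlanConditionedTrajNet()
plan = torch.randn(4, 10, 2)             # coarse graph-based global plan segment
ctx = torch.tensor([[1., 0., 1.]] * 4)   # e.g., crosswalk & signal present
print(net(plan, ctx).shape)              # (4, 20, 2) ego-centric trajectory
```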

Combining suction and friction to stabilize a soft gripper to shear and normal forces, for manipulation of soft objects in wet environments

Jessica Sandoval, Iman Adibnazari, Michael T. Tolley, Department of Mechanical and Aerospace Engineering, UC San Diego
Thomas Xu, Dimitri D. Deheyn, Scripps Institution of Oceanography, UC San Diego

Soft robotic gripping in wet environments is generally limited by the presence of a liquid that lubricates the interface between the gripper and the object being manipulated. The use of soft grippers is particularly beneficial for manipulating soft, delicate objects, yet is further limited by low grip strengths. We propose the use of suction, a form of adhesion that functions well in wet environments, to enhance soft robotic grippers. We stabilized the suction against shear disturbances using soft actuated fingers decorated with fluid-channeling patterns that enhance friction, counteracting the interfacial lubrication experienced in wet environments. We therefore combined attachment via suction with shear stability via friction to create an adhesive soft gripper. We evaluated the contribution of each component to attachment, to help stabilize the gripper against dislodgement forces acting parallel and normal to the object being manipulated. By identifying these contributions, we envision that such an adhesive gripper can benefit soft robotic manipulation in a variety of wet environments, from surgical to subsea applications.

Look Closer: Bridging Egocentric and Third-person Views with Transformers for Robotic Manipulation

Rishabh Jangir, Nicklas Hansen, Sambaran Ghosal, Mohit Jain and Xiaolong Wang
Department of Electrical and Computer Engineering, UC San Diego

Learning to solve precision-based manipulation tasks from visual feedback using Reinforcement Learning (RL) could drastically reduce the engineering efforts required by traditional robot systems. However, performing fine-grained motor control from visual inputs alone is challenging, especially with a static third-person camera as often used in previous work. We propose a setting for robotic manipulation in which the agent receives visual feedback from both a third-person camera and an egocentric camera mounted on the robot's wrist. While the third-person camera is static, the egocentric camera enables the robot to actively control its vision to aid in precise manipulation. To fuse visual information from both cameras effectively, we additionally propose to use Transformers with a cross-view attention mechanism that models spatial attention from one view to another (and vice-versa), and use the learned features as input to an RL policy. Our method improves learning over strong single-view and multi-view baselines, and successfully transfers to a set of challenging manipulation tasks on a real robot with uncalibrated cameras, no access to state information, and a high degree of task variability. In a hammer manipulation task, our method succeeds in 75% of trials versus 38% and 13% for multi-view and single-view baselines, respectively.
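The sketch below illustrates the cross-view attention idea in PyTorch; it is a minimal stand-in under our own assumptions about token shapes and dimensions, not the authors’ released architecture. Each view’s tokens query the other view, and the pooled result would feed the RL policy.

```python
# Cross-view attention between egocentric and third-person camera features.
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.ego_queries_third = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.third_queries_ego = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, ego_tokens, third_tokens):
        # each view attends to the other: spatial attention from one view
        # to another, and vice-versa
        ego_out, _ = self.ego_queries_third(ego_tokens, third_tokens, third_tokens)
        third_out, _ = self.third_queries_ego(third_tokens, ego_tokens, ego_tokens)
        # pool tokens and concatenate; this vector would feed the RL policy
        return torch.cat([ego_out.mean(dim=1), third_out.mean(dim=1)], dim=-1)

# tokens stand in for flattened CNN feature maps: (batch, H*W, dim)
fusion = CrossViewFusion()
ego = torch.randn(2, 64, 128)
third = torch.randn(2, 64, 128)
print(fusion(ego, third).shape)    # torch.Size([2, 256])
```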

CRANE: a 10 Degree-of-Freedom, Tele-surgical System for Dexterous Manipulation within Imaging Bores

Dimitri Schreiber, Zhaowei Yu, Hanpeng Jiang, Taylor Henderson, Guosong Li, Renjie Zhu, and Michael C. Yip, Department of Electrical and Computer Engineering, UC San Diego
Alexander M. Norbash, Department of Radiology, UC San Diego
Julie Yu, Department of Mechanical and Aerospace Engineering, UC San Diego

Physicians perform minimally invasive percutaneous procedures under Computed Tomography (CT) image guidance for both the diagnosis and treatment of numerous diseases. For these procedures performed within CT scanners, robots can enable physicians to more accurately target sub-dermal lesions while increasing safety. However, existing robots for this application have limited dexterity, workspace, or accuracy. This paper describes the design, manufacture, and performance of CRANE: CT Robot and Needle Emplacer, a highly dexterous, low-profile, 8+2 Degree-of-Freedom (DoF) robotic arm for CT-guided percutaneous needle biopsy. The design focuses on system dexterity with high accuracy: extending physicians’ ability to manipulate and insert needles within the scanner bore while providing the high accuracy possible with a robot. We also propose and validate a system architecture and control scheme for low-profile and highly accurate image-guided robots that meets the clinical requirements for target accuracy during an in-situ evaluation. The accuracy is additionally evaluated through a trajectory tracking experiment, resulting in <0.2 mm and <0.71° tracking error. Finally, we present a novel needle driving and grasping mechanism with control electronics that provides simple manufacturing, sterilization, and adaptability to accommodate different sizes and types of needles.

Configuration Space Decomposition for Scalable Proxy Collision Checking in Robot Planning and Control

Mrinal Verghese, Nikhil Das, Yuheng Zhi, and Michael Yip
Department of Electrical and Computer Engineering, UC San Diego

Real-time robot motion planning in complex high-dimensional environments remains an open problem. Motion planning algorithms, and their underlying collision checkers, are crucial to any robot control stack. Collision checking takes up a large portion of the computational time in robot motion planning. Existing collision checkers make trade-offs between speed and accuracy and scale poorly to high-dimensional, complex environments. We present a novel space decomposition method using K-Means clustering in the Forward Kinematics space to accelerate proxy collision checking. We train individual configuration space models using Fastron, a kernel perceptron algorithm, on these decomposed subspaces, yielding compact yet highly accurate models that can be queried rapidly and scale better to more complex environments. We demonstrate this new method, called Decomposed Fast Perceptron (D-Fastron), on the 7-DOF Baxter robot producing on average 29× faster collision checks and up to 9.8× faster motion planning compared to state-of-the-art geometric collision checkers.
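A minimal sketch of the decomposition idea follows, illustrative rather than the D-Fastron implementation: configurations are clustered by their forward-kinematics (workspace) features, and a lightweight kernel perceptron proxy collision checker is trained per cluster. The stand-in fk() function, kernel width, and toy collision labels are assumptions.

```python
# K-Means decomposition in FK space + per-cluster kernel perceptron proxies.
import numpy as np
from sklearn.cluster import KMeans

def fk(q):                         # stand-in forward kinematics: joints -> workspace
    return np.array([np.cos(q).sum(), np.sin(q).sum()])

def rbf(A, b, gamma=5.0):          # RBF kernel between rows of A and point b
    return np.exp(-gamma * np.sum((A - b) ** 2, axis=-1))

def train_kernel_perceptron(Q, y, epochs=20):
    """Kernel perceptron in dual form; y in {-1, +1}, +1 = in collision."""
    alpha = np.zeros(len(Q))
    for _ in range(epochs):
        for i in range(len(Q)):
            s = np.sum(alpha * y * rbf(Q, Q[i]))
            if (1.0 if s >= 0 else -1.0) != y[i]:
                alpha[i] += 1.0
    return alpha

# toy data: random 7-DOF configurations with a made-up collision rule
rng = np.random.default_rng(0)
Q = rng.uniform(-np.pi, np.pi, size=(500, 7))
y = np.where(np.abs(Q[:, 0]) < 1.0, 1.0, -1.0)

# decompose in the forward-kinematics space, then train one model per cluster
km = KMeans(n_clusters=4, n_init=10).fit(np.array([fk(q) for q in Q]))
models = {c: train_kernel_perceptron(Q[km.labels_ == c], y[km.labels_ == c])
          for c in range(4)}

def check_collision(q):            # route a query to its cluster's compact model
    c = int(km.predict(fk(q).reshape(1, -1))[0])
    Qc, yc = Q[km.labels_ == c], y[km.labels_ == c]
    return np.sum(models[c] * yc * rbf(Qc, q)) > 0

print(check_collision(Q[0]), y[0] > 0)   # proxy prediction vs. toy label
```

Each per-cluster model only sees a slice of the data, which keeps the support sets small; queries then pay for one cluster lookup plus a small kernel sum rather than a sweep over the full training set.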

Robotic Tool Tracking under Partially Visible Kinematic Chain: A Unified Approach

Florian Richter, Jingpei Lu, and Michael C. Yip, Department of Electrical and Computer Engineering, UC San Diego
Ryan K. Orosco, Department of Surgery, Division of Head and Neck Surgery, UC San Diego

Whenever a robot manipulator is controlled via visual feedback, the transformation between the robot and camera frames must be known. However, when cameras can only capture a portion of the robot manipulator, in order to better perceive the environment being interacted with, there is greater sensitivity to errors in calibration of the base-to-camera transform. A secondary source of uncertainty during robotic control is inaccuracy in joint angle measurements, which can be caused by biases in positioning and complex transmission effects such as backlash and cable stretch. In this work, we bring together these two sets of unknown parameters into a unified problem formulation for when the kinematic chain is partially visible in the camera view. We prove that these parameters are non-identifiable, implying that explicitly estimating them is infeasible. To overcome this, we derive a smaller set of parameters we call the Lumped Error, since it lumps together the errors of calibration and joint angle measurements. A particle filter method is presented and tested in simulation and on two real-world robots to estimate the Lumped Error and show the efficiency of this parameter reduction.
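The sketch below shows a particle filter of the general kind described, reduced to a single unknown offset standing in for the Lumped Error; the measurement model and noise scales are our assumptions, not the paper’s.

```python
# Particle filter estimating one lumped offset from noisy observations.
import numpy as np

rng = np.random.default_rng(1)
true_offset = 0.3                  # unknown lumped calibration + joint error
n = 1000
particles = rng.normal(0.0, 1.0, n)        # hypotheses about the offset
weights = np.full(n, 1.0 / n)

for t in range(100):
    q_cmd = np.sin(0.1 * t)                # commanded joint angle (known)
    z = q_cmd + true_offset + rng.normal(0.0, 0.05)  # camera-derived observation
    particles += rng.normal(0.0, 0.01, n)  # small diffusion for a static parameter
    lik = np.exp(-0.5 * ((z - (q_cmd + particles)) / 0.05) ** 2)
    weights *= lik
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < n / 2: # resample when particles degenerate
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx]
        weights = np.full(n, 1.0 / n)

print("estimated lumped error:", float(np.sum(weights * particles)))
```

Because the combined error is estimated as one quantity rather than calibration and joint errors separately, the filter sidesteps the non-identifiability the paper proves for the full parameter set.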

Pose Estimation for Robot Manipulators via Keypoint Optimization and Sim-to-Real Transfer

Jingpei Lu, Florian Richter and Michael C. Yip, Department of Electrical and Computer Engineering, UC San Diego

Keypoint detection is an essential building block for many robotic applications like motion capture and pose estimation. Historically, keypoints are detected using uniquely engineered markers such as checkerboards or fiducials. More recently, deep learning methods have been explored, as they can detect user-defined keypoints in a marker-less manner. However, different manually selected keypoints can have uneven detection and localization performance. An example of this can be found on symmetric robotic tools, where DNN detectors cannot solve the correspondence problem correctly. In this work, we propose a new and autonomous way to define keypoint locations that overcomes these challenges. The approach involves finding the optimal set of keypoints on robotic manipulators for robust visual detection and localization. Using a robotic simulator as a medium, our algorithm utilizes synthetic data for DNN training, and the proposed algorithm optimizes the selection of keypoints through an iterative approach. The results show that when using the optimized keypoints, the detection performance of the DNNs improves significantly. We further use the optimized keypoints for real robotic applications by using domain randomization to bridge the reality gap between the simulator and the physical world. The physical world experiments show how the proposed method can be applied to the wide breadth of robotic applications that require visual feedback, such as camera-to-robot calibration, robotic tool tracking, and end-effector pose estimation. As a way to encourage further research on this topic, we establish the “Robot Pose” dataset, comprising calibration and tracking problems and ground truth data, available online.
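As a toy illustration of why keypoint selection matters, and a stand-in heuristic rather than the paper’s algorithm, the sketch below picks a keypoint set on a mirror-symmetric tool that is both spatially spread out (better localization) and far from its own mirror image (breaking the correspondence ambiguity). The candidate points and score weighting are assumptions.

```python
# Greedy-by-enumeration keypoint selection on a mirror-symmetric tool.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
candidates = rng.uniform(-1, 1, size=(30, 3))   # candidate points on the tool

def mirror(p):                      # the tool's symmetry: reflect across x = 0
    return p * np.array([-1.0, 1.0, 1.0])

def score(idx):
    pts = candidates[list(idx)]
    # spread: well-separated keypoints localize the pose better
    spread = np.mean([np.linalg.norm(a - b) for a, b in combinations(pts, 2)])
    # symmetry-breaking: if mirroring maps the set onto itself, correspondence
    # is ambiguous, so reward distance from the mirrored set
    asym = np.mean([np.min(np.linalg.norm(pts - mirror(p), axis=1)) for p in pts])
    return spread + 2.0 * asym

best = max(combinations(range(len(candidates)), 4), key=score)
print("chosen keypoint indices:", best)
```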



Media Contacts

Ioana Patringenaru
Jacobs School of Engineering
858-822-0899
ipatrin@ucsd.edu