From self-folding robots to computer vision: UC San Diego makes strong showing at the International Conference on Intelligent Robots and Systems

San Diego, Calif., Sept. 21, 2017 -- From self-folding robots to robotic endoscopes to better methods for computer vision and object detection, researchers at the University of California San Diego are presenting a wide range of papers and workshop presentations at the International Conference on Intelligent Robots and Systems (IROS), which takes place Sept. 24 to 28 in Vancouver, Canada. UC San Diego researchers are also organizing workshops on a range of themes during the event.

“IROS is one of the premier conferences in robotics,” said Henrik Christensen, director of the Contextual Robotics Institute and a professor of computer science at UC San Diego. “It is essential for our institute that we present key papers across manufacturing, materials, healthcare and autonomy. I am very pleased to see that we have a strong showing at this flagship conference.”

The conference this year focuses on “friendly people, friendly robots.” Robots and humans are becoming increasingly integrated in various application domains, conference organizers explain on the IROS 2017 website. “We work together in factories, hospitals and households, and share the road,” organizers said. “This collaborative partnership of humans and robots gives rise to new technological challenges and significant research opportunities in developing friendly robots that can work effectively with, for, and around people.”

Soft robotics is one way to create robots that are not dangerous for humans, and the research group of roboticist Michael Tolley is exploring the field with three papers at IROS 2017. Better interactions between robots and people also require improving computer vision, and researchers led by computer scientist Laurel Riek propose using depth information to do so in one paper. Computer scientist Gary Cottrell has a paper on improving object recognition. Meanwhile, electrical engineer Michael Yip is looking to make medical robots like the da Vinci surgical system even better.

Tolley also is one of the organizers of the Sept. 28 workshop titled “Folding in Robotics.” Yip is one of the organizers of the Sept. 24 workshop titled “Continuum Robots in Medicine: Design, Integration, and Applications.” Mechanical engineer Nicholas Gravish is one of the organizers for the Sept. 28 “Robotics-inspired Biology” workshop.

Below is a list of paper abstracts, with videos and visuals when available.

In addition, the UC San Diego Contextual Robotics Institute will host its fourth annual Forum on October 27. The theme: Intelligent Vehicles 2025.

Custom Soft Robotic Gripper Sensor Skins for Haptic Object Visualization

Benjamin Shih, Dylan Drotman, Caleb Christianson, Zhaoyuan Huo, Ruffin White, Henrik Iskov Christensen and Michael Thomas Tolley, Univ. of California, San Diego

Robots are becoming increasingly prevalent in our society in forms where they are assisting or interacting with humans in a variety of environments, and thus they must have the ability to sense and detect objects by touch. An ongoing challenge for soft robots has been incorporating flexible sensors that can recognize complex motions and close the loop for tactile sensing. We present sensor skins that enable haptic object visualization when integrated on a soft robotic gripper that can twist an object. First, we investigate how the design of the actuator modules impacts bend angle and motion. Each soft finger is molded using a silicone elastomer, and consists of three pneumatic chambers which can be inflated independently to achieve a range of complex motions. Three fingers are combined to form a soft robotic gripper. Then, we manufacture and attach modular, flexible sensory skins on each finger to measure deformation and contact. These sensor measurements are used in conjunction with an analytical model to construct 2D and 3D tactile object models. Our results are a step towards soft robot grippers capable of a complex range of motions and proprioception, which will help future robots better understand the environments with which they interact, and has the potential to increase physical safety in human-robot interaction. Please see the accompanying video for additional details.
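The idea of turning per-finger bend readings into a tactile point cloud can be sketched in a few lines. The kinematics below are illustrative only (equal-length segments curling toward the gripper axis, made-up dimensions), not the authors' analytical model or sensor layout:

```python
import math

def finger_contact_points(base_angle_deg, bend_angles_deg, finger_length=0.06):
    """Approximate contact points along one soft finger as a chain of
    equal-length segments, each curled by a locally sensed bend angle.
    Illustrative kinematics only -- not the paper's analytical model."""
    n = len(bend_angles_deg)
    seg = finger_length / n
    # Finger base sits on a 2 cm circle around the gripper axis (assumed).
    bx = 0.02 * math.cos(math.radians(base_angle_deg))
    by = 0.02 * math.sin(math.radians(base_angle_deg))
    points, z, r, tilt = [], 0.0, 0.0, 0.0
    for bend in bend_angles_deg:
        tilt += math.radians(bend)          # accumulated curl toward the axis
        r += seg * math.sin(tilt)           # radial travel inward
        z += seg * math.cos(tilt)           # travel along the gripper axis
        points.append((bx - r * math.cos(math.radians(base_angle_deg)),
                       by - r * math.sin(math.radians(base_angle_deg)),
                       z))
    return points

def tactile_model(per_finger_bends):
    """Merge contact estimates from three fingers spaced 120 degrees apart
    into one 3D point cloud of the grasped object."""
    cloud = []
    for i, bends in enumerate(per_finger_bends):
        cloud.extend(finger_contact_points(120 * i, bends))
    return cloud
```

With denser sensor skins, the same aggregation step would simply yield more points per finger.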

Towards Rapid Mechanical Customization of Cm-Scale Self-Folding Agents

William Weston-Dawkes, Aaron Ong, Majit Abdul, Ramzi Mohamad, Francis Joseph, and Michael Thomas Tolley, Univ. of California, San Diego

Large robotic collectives provide advantages such as resilience to mechanical failure of single agents and increased capabilities for search- and coverage-based applications. However, a lack of rapid and free-form manufacturing processes remains a barrier to high-volume fabrication of mechanically heterogeneous robotic swarms. Self-folding laser-machined structures have the potential to enable heterogeneous robotic swarms. As an initial step to realizing a functional robotic collective, we focus on the design and characterization of the locomotion of an individual laminate-manufactured robot. We look to a vibration-based locomotion technique that uses flexible structures or bristles to enhance the effects of vibration, allowing for fast locomotion (i.e. a "bristle-bot"). However, previous bristle-bot implementations have not allowed for controllable steering behaviors with high locomotion speeds. We describe the extension of existing two-dimensional bristle-bot models to a three-dimensional model that explores parameters that govern linear and angular velocity. We implement an autonomous laminate-manufactured bristle-bot inspired robot capable of linear velocities of up to 23 cm/s and turning rates of 2 rad/s. Moving towards automated manufacturing, we also demonstrate a self-folding bristle-bot structure that uses a linear compression laminate to achieve a uniform leg fold angle.
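At steady state, the reported top speed and turning rate can be plugged into a standard planar unicycle model to see where such a robot ends up. This is a generic kinematic sketch, not the paper's three-dimensional vibration dynamics:

```python
import math

def simulate_bristle_bot(v=0.23, omega=2.0, duration=1.0, dt=0.01):
    """Euler-integrate a planar unicycle model using the reported top
    speed (23 cm/s = 0.23 m/s) and turning rate (2 rad/s).  A steady-state
    kinematic stand-in for the paper's 3D bristle dynamics."""
    x = y = theta = 0.0
    for _ in range(int(duration / dt)):
        x += v * math.cos(theta) * dt   # advance along current heading
        y += v * math.sin(theta) * dt
        theta += omega * dt             # constant commanded turn rate
    return x, y, theta
```

After one second the robot has turned 2 rad and traced an arc of radius v/omega ≈ 11.5 cm, which gives a feel for how tight its turns are at full speed.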

Differential Pressure Control of 3D Printed Soft Fluidic Actuators

Tom Kalisky, Yueqi Wang, Benjamin Shih, Dylan Drotman, Saurabh Jadhav, Spencer Aronoff, and Michael Thomas Tolley, Univ. of California, San Diego

Fluidically actuated soft robots show great promise for operation in sensitive and unknown environments due to their intrinsic compliance. However, most previous designs use either flow control systems that are noisy, inefficient, sensitive to leaks, and cannot achieve differential pressure (i.e. can only apply either positive or negative pressures with respect to atmospheric), or closed volume control systems that are not adaptable and prohibitively expensive. In this paper, we present a modular, low cost volume control system for differential pressure control of soft actuators. We use this system to actuate three-chamber 3D printed soft robotic modules. For this design, we find a 54% increase in achievable blocked force, and a significant increase in actuator workspace when using differential pressure actuation as compared to the use of only pressure or vacuum. The increased workspace allowed the robot to achieve complex tasks such as writing on a screen with a laser pointer or manipulating fragile objects. Furthermore, we demonstrate a self-healing capability of the combined system by using vacuum to actuate ruptured modules which were no longer responsive to positive pressure.
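The key property of volume control, that one actuator can be driven to either positive or negative pressure relative to atmosphere, can be illustrated with a toy proportional loop. The gain, lumped chamber stiffness, and units below are invented for the sketch and are not taken from the paper:

```python
def volume_step(p_measured, p_target, gain=2e-7):
    """Proportional volume-control law: command a piston displacement
    (m^3 of fluid to add or remove) that drives the chamber toward the
    target differential pressure in Pa relative to atmosphere.
    Negative targets pull vacuum; positive targets push pressure.
    Gain is an illustrative assumption."""
    return gain * (p_target - p_measured)

def settle(p_target, p0=0.0, stiffness=2e6, steps=200):
    """Toy closed loop: chamber pressure responds linearly to injected
    volume (stiffness in Pa/m^3 is a made-up lumped constant)."""
    p = p0
    for _ in range(steps):
        p += stiffness * volume_step(p, p_target)
    return p
```

The same loop reaches `settle(5000.0)` for inflation and `settle(-20000.0)` for vacuum, which is exactly the differential capability the abstract contrasts with single-sign flow control.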

Faster Robot Perception Using Salient Depth Partitioning

Darren Chan, Angelique Taylor, and Laurel D. Riek, Univ. of California, San Diego


This paper introduces Salient Depth Partitioning (SDP), a depth-based region cropping algorithm devised to be easily adapted to existing detection algorithms. SDP is designed to give robots a better sense of visual attention, and to reduce the processing time of pedestrian detectors. In contrast to proposal generators, our algorithm generates sparse regions to combat image degradation caused by robot motion, making them more suitable for real-world operation. Furthermore, SDP is able to achieve real-time performance (77 frames per second) on a single processor without a GPU. Our algorithm requires no training, and is designed to work with any pedestrian detection algorithm, provided that the input is in the form of a calibrated RGB-D image. We tested our algorithm with four state-of-the-art pedestrian detectors (HOG and SVM, Aggregate Channel Features, Checkerboards, and R-CNN), and show that it improves computation time by up to 30%, with no discernible change in accuracy.
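The general idea of depth-based region cropping, running the detector only where the depth channel says a pedestrian could plausibly be, can be sketched with a crude bounding-box filter. This is a stand-in under assumed depth bands, not the published SDP algorithm:

```python
def depth_crop(depth, near=0.5, far=4.0):
    """Return the bounding box (r0, c0, r1, c1) of pixels whose depth in
    meters lies in [near, far]; a pedestrian detector then runs only
    inside this crop instead of the full frame.  A crude stand-in for
    SDP; the band limits are illustrative assumptions."""
    box = [None, None, None, None]
    for r, row in enumerate(depth):
        for c, d in enumerate(row):
            if near <= d <= far:
                if box[0] is None:
                    box = [r, c, r, c]
                else:
                    box[0] = min(box[0], r); box[1] = min(box[1], c)
                    box[2] = max(box[2], r); box[3] = max(box[3], c)
    return None if box[0] is None else tuple(box)
```

Shrinking the search window is where the reported speedup comes from: the downstream detector's cost scales with the cropped area, not the full image.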

Belief Tree Search for Active Object Recognition

Mohsen Malmir and Garrison W. Cottrell, Univ. of California, San Diego

Active Object Recognition (AOR) has been approached as an unsupervised learning problem, in which optimal trajectories for object inspection are not known and must be discovered by reducing label uncertainty or by training with reinforcement learning. Such approaches suffer from local optima and have no guarantees of the quality of their solution. In this paper, we treat AOR as a Partially Observable Markov Decision Process (POMDP) and find near-optimal values and corresponding action-values of training data using Belief Tree Search (BTS) on the AOR belief Markov Decision Process (MDP). AOR then reduces to the problem of knowledge transfer from these action-values to the test set. We train a Long Short-Term Memory (LSTM) network on these values to predict the best next action on the training set rollouts and experimentally show that our method generalizes well to explore novel objects and novel views of familiar objects with high accuracy. We compare this supervised scheme against guided policy search, and show that the LSTM network reaches higher recognition accuracy compared to guided policy search and guided Neurally Fitted Q-iteration. We further look into optimizing the observation function to increase the total collected reward during active recognition. In AOR, the observation function is known only approximately. We derive a gradient-based update for the observation function to increase the total expected reward. We show that by optimizing the observation function and retraining the supervised LSTM network, the AOR performance on the test set improves significantly.
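The belief-MDP machinery underneath this work, a Bayes filter over object labels plus lookahead over candidate views, can be shown at depth one. The toy likelihoods and the entropy-minimizing criterion below are illustrative assumptions; the paper uses a full belief tree search with learned observation models:

```python
import math

def bayes_update(belief, obs, likelihood):
    """Bayes filter over object labels: likelihood[label][obs] is
    P(obs | label) for the current viewpoint.  Toy observation model,
    not the paper's learned observation function."""
    post = {lbl: p * likelihood[lbl][obs] for lbl, p in belief.items()}
    z = sum(post.values())
    return {lbl: p / z for lbl, p in post.items()}

def entropy(belief):
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

def best_action(belief, actions, likelihood_for):
    """Depth-1 belief lookahead: pick the view whose expected posterior
    entropy is lowest -- a one-step stand-in for full Belief Tree Search."""
    def expected_entropy(a):
        lik = likelihood_for(a)
        obs_space = next(iter(lik.values())).keys()
        h = 0.0
        for o in obs_space:
            # P(obs | a) marginalizes the observation model over the belief.
            po = sum(belief[l] * lik[l][o] for l in belief)
            if po > 0:
                h += po * entropy(bayes_update(belief, o, lik))
        return h
    return min(actions, key=expected_entropy)
```

For example, a side view that reveals whether a mug has a handle is preferred over an uninformative top view, because it is expected to collapse the belief the most; BTS extends this reasoning over multi-step trajectories.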

Visual Feedback Control of Tensegrity Robotic Systems

Haresh Karnan, Raman Goyal and Manoranjan Majji, Texas A&M Univ, Robert E. Skelton, Univ. of California, San Diego and Puneet Singla, State Univ. of New York at Buffalo

Feedback control problems pertaining to the control of tensegrity robotic systems are detailed in this paper. The unique problems that arise due to the positivity of the string tensions required to maintain the static stability and desirable stiffness of the structural system are shown to bring about interesting opportunities to optimize for the redundancy in the actuation process. The static stability consideration, coupled with the nonlinear dynamics and the sensor models, introduces additional algebraic constraints in the implementation of both kinematic and model-based dynamic controllers for tensegrity systems. Approaches to develop kinematic and dynamic control techniques are detailed in this paper. A benchtop experimental setup consisting of a simple tensegrity system is utilized to demonstrate the efficacy of the output feedback control approach developed in the paper. Near-real-time image measurements are utilized to drive the output error used in the control scheme.
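The positivity constraint on string tensions, paired with actuation redundancy, amounts to solving for a nonnegative tension vector that produces a desired net force. A tiny projected-gradient solver illustrates the idea; the structure matrix and solver are hypothetical stand-ins for the paper's optimization:

```python
def allocate_tensions(A, w, iters=2000, lr=0.01):
    """Find nonnegative string tensions t with A @ t ~= w (the desired
    net force/wrench) by projected gradient descent on ||A t - w||^2.
    A: list of rows mapping tensions to forces; w: target vector.
    Toy solver, illustrative of the redundancy resolution only."""
    m, n = len(A), len(A[0])
    t = [0.0] * n
    for _ in range(iters):
        # Residual r = A t - w
        r = [sum(A[i][j] * t[j] for j in range(n)) - w[i] for i in range(m)]
        for j in range(n):
            g = sum(A[i][j] * r[i] for i in range(m))   # gradient component
            t[j] = max(0.0, t[j] - lr * g)              # project: tensions >= 0
    return t
```

With two antagonistic strings (columns +1 and -1) and a target force of 2, the solver loads one string and leaves the other slack, which is the qualitative behavior the positivity constraint forces on any tensegrity controller.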

Workshop papers: 

Screw-Propelled Endoscopic Robot

Kevin Cheng, Andrew Saad, Dmitrii Votintcev, Elaine Tanaka, Michael Yip, Univ. of California, San Diego

Robot Control of Endoscopic Instruments using Flexible Polymer Sheath

Aaron Gunn, Mrinal Verghese, Wesly Wong, Michael Yip, Univ. of California, San Diego



Media Contacts

Ioana Patringenaru
Jacobs School of Engineering
