
Tactile intelligence is the future of robotic grasping


The simple task of picking something up is not as easy as it seems. Not for a robot, at least. Roboticists aim to develop a robot that can pick up anything—but today most robots perform “blind grasping,” where they’re dedicated to picking up an object from the same location every time. If anything changes, such as the shape, texture, or location of the object, the robot won’t know how to respond, and the grasp attempt will most likely fail.

Robots are still a long way from being able to grasp any object reliably on the first attempt. Why is grasping such a difficult problem? When people grasp something, they use a combination of senses, primarily vision and touch. But so far, most attempts at solving the grasping problem have focused on vision alone.

The current focus on robotic vision is unlikely to enable perfect grasping. In addition to vision, the future of robotic grasping requires something else: tactile intelligence.

Relying on vision alone is unlikely to yield results that match human capabilities. Vision matters for grasping (for aiming at the right object, for instance), but it simply cannot tell you everything you need to know about a grasp. Consider how Steven Pinker describes what the human sense of touch accomplishes: “Think of lifting a milk carton. Too loose a grasp, and you drop it; too tight, and you crush it; and with some gentle rocking, you can even use the tugging on your fingertips as a gauge of how much milk is inside!” he writes in How the Mind Works. Because robots lack these sensing capabilities, they still lag far behind humans at even the simplest pick-and-place tasks.
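As a rough illustration of the physics behind Pinker's example, a classic rule of thumb from the grasping literature says the grip (normal) force must exceed the tangential load divided by the friction coefficient, plus a safety margin. The sketch below applies that rule; all the numbers are illustrative assumptions, not values from the article.

```python
# A minimal sketch of grip-force regulation for the milk-carton example:
# grip too far below the slip limit and the carton drops; the safety margin
# keeps the grip above that limit without crushing the object.

def required_grip_force(tangential_load_n: float,
                        friction_coeff: float,
                        safety_margin: float = 0.2) -> float:
    """Minimum normal (grip) force to prevent slip, with a safety margin."""
    slip_limit = tangential_load_n / friction_coeff
    return slip_limit * (1.0 + safety_margin)

# A half-full 1 L milk carton (~0.5 kg) held against gravity:
load = 0.5 * 9.81  # tangential load at the fingertips, in newtons
print(required_grip_force(load, friction_coeff=0.6))  # ~9.8 N
```

The same relationship works in reverse: if tactile sensors can measure the tangential tug at the fingertips, the robot can estimate the load, which is exactly the "gauge of how much milk is inside" that Pinker describes.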

So far, most of the research in robotic grasping has aimed at building intelligence around visual feedback. One way of doing so is through database image matching, which is the method used in the Million Objects Challenge at Brown’s Humans to Robots Lab. Other researchers have turned to machine learning techniques for improving robotic grasping. These techniques allow robots to learn from experience, so eventually the robots can figure out the best way to grasp something on their own. Plus, unlike the database-matching methods, machine learning requires minimal prior knowledge. The robots don’t need to access a pre-made image database—they just need plenty of practice.
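A minimal sketch of the database-matching idea follows: embed the observed object image, find the nearest stored object, and reuse its recorded grasp pose. The embedding size, cosine similarity, and 6-DoF grasp format are illustrative stand-ins, not the actual Million Objects Challenge pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in database: a precomputed image embedding and one recorded grasp
# pose (x, y, z, roll, pitch, yaw) for each known object.
db_embeddings = rng.normal(size=(1000, 128))  # 1000 known objects
db_grasps = rng.normal(size=(1000, 6))        # one stored grasp each

def retrieve_grasp(query_embedding: np.ndarray) -> np.ndarray:
    """Return the grasp pose of the most similar database object (cosine)."""
    db_norm = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    q_norm = query_embedding / np.linalg.norm(query_embedding)
    best = np.argmax(db_norm @ q_norm)
    return db_grasps[best]

query = rng.normal(size=128)  # embedding of the newly observed object
print(retrieve_grasp(query))
```

The weakness of this approach is visible in the code: it can only return grasps for objects close to something already in the database, which is why learning-based methods that improve from experience are attractive.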

Google recently conducted an experiment in grasping technology that combined a vision system with machine learning. Its biggest breakthrough was in showing that robots could teach themselves: using a deep convolutional neural network, a vision system, and data from 800,000 grasp attempts, the robots steadily improved by learning from their past experiences.
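The heart of such a system is a network that scores a candidate motor command against the current camera image and predicts whether the resulting grasp would succeed. The sketch below shows the general shape of that kind of model in PyTorch; the layer sizes, the 4-dimensional command, and the 64-by-64 input are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Image branch: downsample the camera view to a compact feature map.
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 = 512
        )
        # Fuse image features with a candidate motor command (here a
        # 4-DoF end-effector displacement) and predict success probability.
        self.head = nn.Sequential(
            nn.Linear(512 + 4, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, image, command):
        feats = self.conv(image)
        logit = self.head(torch.cat([feats, command], dim=1))
        return torch.sigmoid(logit)  # predicted grasp success probability

# Score a batch of candidate commands for one scene; in a full system the
# best-scoring command would be refined and executed, and the outcome fed
# back as a new training example.
net = GraspSuccessNet()
image = torch.randn(8, 3, 64, 64)  # same scene repeated per candidate
commands = torch.randn(8, 4)       # 8 candidate motor commands
print(net(image, commands).squeeze(1))
```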

Researchers are aware of the crucial role that tactile sensors play in grasping, and the past 30 years have seen many attempts at building a tactile sensor that replicates the human apparatus. However, the signals from a tactile sensor are complex and high-dimensional, and adding sensors to a robotic hand often doesn't directly translate into better grasping. What's needed is a way to transform this raw, low-level data into high-level information that improves grasping and manipulation performance. Tactile intelligence could then give robots the ability to predict grasp success from touch, recognize object slippage, and identify objects by their tactile signatures.
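As one concrete example of turning raw tactile data into a high-level signal, incipient slip often shows up as high-frequency vibration in the taxel readings, so even a crude detector that thresholds the energy of the signal's first difference can flag it. The sampling rate, threshold, and synthetic signals below are illustrative assumptions, not data from any particular sensor.

```python
import numpy as np

FS = 1000.0  # tactile sampling rate in Hz (assumed)

def slip_detected(taxel_signal: np.ndarray, threshold: float = 0.01) -> bool:
    """Flag slip when high-frequency energy (first-difference RMS) spikes."""
    high_freq = np.diff(taxel_signal)
    return float(np.sqrt(np.mean(high_freq ** 2))) > threshold

# Stable hold (slow pressure drift) versus slip (high-frequency vibration):
t = np.arange(0.0, 0.5, 1.0 / FS)
stable = 1.0 + 0.01 * t                                   # slow drift only
slipping = stable + 0.2 * np.sin(2 * np.pi * 200.0 * t)   # 200 Hz vibration
print(slip_detected(stable), slip_detected(slipping))     # False True
```

A detector like this is the low end of tactile intelligence; richer versions feed the same taxel streams into learned models that also predict grasp success and recognize objects by touch.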

Vision still makes crucial contributions to grasping. But now that artificial vision has reached a certain level of maturity, it may be better to invest in new aspects of tactile intelligence rather than continuing to emphasize vision alone so strongly.

Now that the robotics community has mastered the first 80 percent of visual intelligence, perfecting the last 20 percent is hard and won't contribute much to object-manipulation tasks. By contrast, roboticists are still working on the first 80 percent of tactile sensing. Achieving that first 80 percent should be relatively easy, and it has the potential to make a tremendous contribution to robots' grasping abilities.

Vincent Duchaine is a professor at École de Technologie Supérieure (ÉTS) in Montreal, Canada, where he leads the haptic and mechatronics group at the Control and Robotics (CoRo) Lab and holds the ÉTS Research Chair in Interactive Robotics. Duchaine's research interests include grasping, tactile sensing, and human-robot interaction. He is also a co-founder of Robotiq, which makes tools for agile automation, including a three-finger adaptive gripper.

Source: IEEE Spectrum    https://spectrum.ieee.org/automaton/robotics/robotics-hardware/why-tactile-intelligence-is-the-future-of-robotic-grasping
