A group of graduate engineering students has adapted Microsoft’s new Kinect technology for a surprising purpose: surgical robotics.
The method involves using the Kinect — an array of cameras and sensors that allow video-game users to control their Xbox 360s with their bodies — to give surgeons force feedback when using tools to perform robotic surgery.
“For robotics-assisted surgeries, the surgeon has no sense of touch right now,” said Howard Chizeck, UW professor of electrical engineering. “What we’re doing is using that sense of touch to give information to the surgeon, like ‘You don’t want to go here.’”
Surgeons commonly use robotic tools for minimally invasive surgeries. Tubes with remotely controlled surgical instruments on the ends are inserted into the patient in order to minimize scarring. Surgeons control the instruments with input devices that resemble complex joysticks, and use tiny cameras in the tubes to see inside the patient.
The problem, however, is that surgeons have no way to feel what they are doing. If they move a surgical instrument into something solid, the instrument will stop but the joystick will keep moving.
Electrical engineering graduate student Fredrik Ryden solved this problem by writing code that allowed the Kinect to map and react to environments in three dimensions, and send spatial information about that environment back to the user.
This places electronic restrictions on where the tool can be moved; if the actual instrument hits a bone, the joystick that controls it stops moving. If the instrument moves along a bone, the joystick follows the same path. It is even possible to define off-limits areas to protect vital organs.
“We could define basically a force field around, say, a liver,” said Chizeck. “If the surgeon got too close, he would run into that force field and it would protect the object he didn’t want to cut.”
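The “force field” Chizeck describes is known in the haptics literature as a forbidden-region virtual fixture: when the tool enters a keep-out zone, the system renders a force that pushes the joystick back out. A minimal sketch of the idea, assuming a spherical keep-out zone and a simple proportional (spring-like) repulsive force; the function name, gain, and geometry are illustrative, not the team’s actual implementation:

```python
import numpy as np

def virtual_fixture_force(tool_pos, center, radius, stiffness=200.0):
    """Repulsive force pushing the tool out of a spherical forbidden
    region around a protected structure (e.g. a liver).

    tool_pos, center: 3-vectors in metres; radius: metres;
    stiffness: spring gain in N/m for the rendered force.
    Returns a 3-vector force to send to the haptic joystick.
    """
    offset = tool_pos - center
    dist = np.linalg.norm(offset)
    if dist >= radius or dist == 0.0:
        return np.zeros(3)            # outside the fixture: no force
    penetration = radius - dist       # how far inside the keep-out zone
    direction = offset / dist         # unit vector pointing back out
    return stiffness * penetration * direction
```

For example, a tool 4 cm from the center of a 5 cm keep-out sphere has penetrated 1 cm, so with a 200 N/m gain the joystick would feel a 2 N push directed away from the protected organ.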
At first it was suggested that presurgery CT scans be used to define these regions. However, Chizeck’s group came up with the idea of using a “depth camera” — a sensor that detects movement in three dimensions by measuring reflected infrared radiation — to define those regions automatically. At a meeting on a Friday afternoon in December, a team member suggested using the newly released Kinect.
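A depth camera like the Kinect reports, for each pixel, a distance from the sensor; recovering 3-D points from that image is standard pinhole back-projection. A minimal sketch, where the focal lengths and principal point are placeholder values rather than the Kinect’s actual calibration:

```python
import numpy as np

def depth_to_points(depth, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Back-project an H x W depth image (metres) into an (H*W, 3)
    array of 3-D points in the camera frame.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    These intrinsics are illustrative placeholders, not a calibration.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth                                       # depth along the ray
    x = (u - cx) * z / fx                           # sideways offset
    y = (v - cy) * z / fy                           # vertical offset
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

A point cloud like this is what lets the software decide, frame by frame, whether the instrument is near a surface or inside a protected region.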
“It’s really good for demonstration because it’s so low-cost, and because it’s really accessible,” said Ryden, who designed the system in a single weekend. “You already have drivers, and you can just go in there and grab the data. It’s really easy to do fast prototyping because Microsoft’s already built everything.”
Before the team hit on the Kinect, a comparable system would have cost around $50,000, Chizeck said.
The project is part of a larger research effort at the electrical engineering department’s BioRobotics Lab to improve surgical robotic methods. The team hopes to integrate its feedback system with the lab’s other surgical-robotics systems.
“Our colleague, Blake Hannaford, has this wonderful robotic platform that’s been developed here, and we’d like to incorporate this into part of that system,” Chizeck said.
The team hopes to make surgical robotics reliable and practical enough for long-distance use, allowing doctors in major cities to easily perform surgeries on patients in small, isolated towns. Chizeck said this idea could be extrapolated to use in disaster relief or battlefield situations.
“Suppose there’s an earthquake somewhere,” Chizeck said. “First responders could get victims to a van with a satellite dish on top and the tools inside, and a surgeon somewhere else could perform the surgery.”
Ryden said that a paper will soon be published about the research. In the meantime, the sensors will need to be scaled down to a size appropriate for surgical use, and the camera’s resolution will need to be increased before the system is usable.
Reach reporter Ryan Dunn at email@example.com.