That’s Dr. Kinect to you, say University of Washington students
As amazing innovations keep arriving for Microsoft’s highly versatile Kinect controller, I’ve even imagined that it might be useful for robotic microsurgery—it makes sense from a teleoperation angle. At the University of Washington, students have been thinking along the same lines; however, while my mind went to better focusing the efforts of the surgeon’s hands, they went the route of building a better map of the patient.
Currently, surgical robots provide little feedback aside from an internal camera for guiding the surgeon’s progress through the patient’s body. One considerable issue is that robotic instruments lack tactile feedback. Fredrik Ryden at the University of Washington has sought to resolve this issue, and Kinect seems to be the way to do it.
Electrical engineering graduate student Fredrik Ryden solved this problem by writing code that allowed the Kinect to map and react to environments in three dimensions, and send spatial information about that environment back to the user.
This places electronic restrictions on where the tool can be moved; if the actual instrument hits a bone, the joystick that controls it stops moving. If the instrument moves along a bone, the joystick follows the same path. It is even possible to define off-limits areas to protect vital organs.
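The “electronic restrictions” described above are commonly called virtual fixtures in haptics research. A minimal sketch of the idea, assuming a hypothetical spherical forbidden region around a vital organ (the function name and parameters are illustrative, not from Ryden’s actual code), might look like this:

```python
import numpy as np

def apply_forbidden_region(commanded, center, radius):
    """Clamp a commanded tool position so it cannot enter a spherical
    forbidden region (e.g. around a vital organ). If the command would
    penetrate the sphere, project it back onto the surface, which lets
    the tool slide along the boundary rather than pass through it."""
    offset = commanded - center
    dist = np.linalg.norm(offset)
    if dist >= radius:
        # Outside the protected region: pass the command through unchanged.
        return commanded
    if dist == 0.0:
        # Degenerate case (command exactly at center): push out along +x.
        return center + np.array([radius, 0.0, 0.0])
    # Project the command onto the sphere's surface.
    return center + offset * (radius / dist)
```

Projecting onto the surface rather than simply freezing the tool is what produces the behavior described: the joystick stops at the boundary but can still follow along it, much like an instrument sliding along a bone.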
Since the Microsoft Kinect already has a great deal of software for defining and mapping spatial regions, it only makes sense that it can be extended to highly fine-tuned regions (possibly moving regions) and the spatial relationships involved in vascular and organ surgery. Instead of “you are the controller,” Ryden has gone the other direction and made it “you are the map.” Using the Kinect, its 3D camera system, and spatial-relationship mapping software, he’s having it map the patient so that the system can keep track of where the surgical instruments are at all times and provide force feedback to the surgeon when they stray too far afield or too near a possible danger zone.
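One simple way to turn such a map into force feedback is a spring-like repulsive force that pushes the joystick away as the tool nears a mapped danger zone. The following sketch assumes the danger zone is represented as a point cloud (as a Kinect depth map naturally provides); the function name and gain values are illustrative assumptions, not the actual system’s:

```python
import numpy as np

def feedback_force(tool_pos, danger_points, influence=0.01, k=50.0):
    """Compute a repulsive feedback force (in newtons) pushing the tool
    away from the nearest point of a mapped danger zone once it comes
    within `influence` meters; zero force otherwise.  `danger_points`
    is an (N, 3) array of 3D points, e.g. from a depth camera."""
    diffs = danger_points - tool_pos
    dists = np.linalg.norm(diffs, axis=1)
    i = np.argmin(dists)                  # nearest danger-zone point
    if dists[i] >= influence or dists[i] == 0.0:
        return np.zeros(3)
    direction = -diffs[i] / dists[i]      # unit vector away from the point
    # Spring model: force grows linearly as the tool penetrates the
    # influence zone around the danger region.
    return k * (influence - dists[i]) * direction
```

The force ramps up smoothly as the instrument approaches the boundary, which is gentler for the operator than an abrupt stop.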
I imagine, as this sort of research continues, that MRI and CAT scans could be used to develop highly detailed maps of the patient’s body, and then the Kinect’s camera and spatial-orientation system could be used to better guide surgical instruments through the landscape of the human body. Doing so would revolutionize not just the way that probes interact with humans, but how humans interact with robotic surgery devices, permitting much finer apertures for surgical operations.
Another major boon of using the Kinect for this purpose is that the console peripheral is amazingly cheap.
“It’s really good for demonstration because it’s so low-cost, and because it’s really accessible,” Ryden, who designed the system during one weekend, said. “You already have drivers, and you can just go in there and grab the data. It’s really easy to do fast prototyping because Microsoft’s already built everything.”
Before the idea to use a Kinect, a similar system would have cost around $50,000, Chizeck said.
It’s not every day that we see a video game peripheral become part of a multi-million dollar and life-saving industry.
Undoubtedly, this is just the tip of the iceberg when it comes to medical technology and the underlying concepts that scaffold the Kinect. Video games themselves are reflex-oriented and based on virtual spatial awareness—it’s not much of a jump to change virtual spatial awareness into real-world impact. We’ve already seen this with robotics control in the Aldebaran NAO project and its current evolutions.
This is a prime example of how the Kinect can be used as both interface and map. Undoubtedly this sort of capability could revolutionize how catheterized surgery and microsurgery are done in the future.
Post written by Kit Dotson for SiliconAngle and is reposted here with permission.