Could robots take the surgeon’s scalpel?

After assisting doctors and surgeons for decades, are robots ready to take over? Engineers are now building machines that can not only cut or suture, but also plan those cuts, improvise in the face of the vagaries of a surgical operation, and adapt. Researchers are improving the ability of machines to navigate the complexities of the human body and to coordinate their actions with those of doctors. But the truly autonomous robotic surgeon may still be a long way off, and the biggest challenge may not be technological so much as convincing people that it is acceptable to use robotic surgeons at all.

In 2004, the U.S. Defense Advanced Research Projects Agency (DARPA) dangled a $1 million prize for any group that could design a car able to drive itself across tens of kilometers of rough terrain. That prize gave birth to the autonomous vehicles we know today. Thirteen years later, DARPA announced another award – this time not for a robot car, but for autonomous robotic doctors.

Robots have been found in operating rooms since the 1980s, first for holding a patient’s limbs in place and, later, for laparoscopic surgery, which lets surgeons use remote-controlled robotic arms to operate on the human body through tiny holes instead of large incisions. But, for the most part, these robots have been just highly sophisticated versions of the scalpels and forceps that surgeons have used for centuries – remarkably sophisticated, to be sure, and capable of working with incredible precision, but still, in the end, tools in the hands of the surgeon.

Despite many challenges, the situation is changing. Today, five years after that award was announced, engineers are taking steps to build independent machines that can not only cut or suture, but also plan those cuts, improvise in the face of the vagaries of surgery, and adapt. Researchers are improving the ability of machines to navigate the complexities of the human body and to coordinate their actions with those of doctors. But the truly autonomous robotic surgeon that DARPA envisioned may still be a long way off. And the biggest challenge may not be technological, but rather convincing people that it is acceptable to use robotic surgeons.

Navigating the unpredictable

Like drivers, surgeons must learn to navigate their particular environment – something that sounds easy in principle but turns out to be endlessly complicated in the real world. Real-life roads are full of traffic, construction equipment and pedestrians – all things that don’t necessarily show up on Google Maps and that a self-driving car has to learn to avoid.

Likewise, while one human body is broadly similar to another, in reality we are all unique on the inside. The precise size and shape of organs, the presence of scar tissue, and the location of nerves or blood vessels often differ from person to person. “Patients vary so much from individual to individual,” says Barbara Goff, a gynecologic oncologist and surgeon-in-chief at the University of Washington Medical Center in Seattle. “I think that could be a challenge.” She has been using laparoscopic surgical robots – the kind that don’t move on their own but translate the surgeon’s movements – for more than a decade.

The fact that bodies move poses another difficulty. A few robots already demonstrate some degree of autonomy; a classic example is a device with the (perhaps slightly boastful) name ROBODOC, which can be used in hip surgery to shave the bone around the hip socket. But bone is relatively easy to work with: once the patient is positioned, it doesn’t move much. “Bones don’t bend,” says Aleks Attanasio, a research specialist who now works at Konica Minolta and who wrote about robots in surgery in the 2021 Annual Review of Control, Robotics, and Autonomous Systems. “And if they do, that means there’s a bigger problem.”

Unfortunately, the rest of the body isn’t so accommodating. Muscles contract, stomachs gurgle, brains pulse, and lungs expand and contract before a surgeon even starts moving anything. And while a human surgeon can see and feel what they’re doing, how would a robot know whether its scalpel is in the right place or whether the tissues have shifted?

One of the most promising options for such dynamic situations is the use of cameras and sophisticated tracking software. In early 2022, for example, researchers at Johns Hopkins University used a device called the Smart Tissue Autonomous Robot (STAR for short) to stitch together the two ends of a severed intestine in an anesthetized pig – a potentially very delicate task – using just such a visual system.

To achieve this feat, a human operator marks the ends of the intestine with drops of fluorescent glue, creating markers for the robot to follow (much like the dots on an actor’s motion-capture suit in a Hollywood movie). At the same time, a camera system builds a three-dimensional model of the tissue using a grid of light points projected onto the area. Together, these technologies allow the robot to see what is in front of it.

“What’s really special about our vision system is that it not only allows us to reconstruct the appearance of the tissue, but also lets us do so fast enough to act in real time,” explains Justin Opfermann, a designer of the STAR system and an engineering researcher at Hopkins. “If something moves during the operation, you can detect it and track it.”

The robot can then use this visual information to plan the best course of action, either presenting the human operator with different plans to choose from or checking in between sutures. In testing, STAR worked well on its own, but not perfectly: 83 percent of the sutures could be performed autonomously, but a human had to intervene in the remaining 17 percent to correct things.

“The 83 percent can certainly be exceeded,” says Opfermann. Most of the problems arose because the robot had trouble finding the right angle in certain corners and needed a human to nudge it to the right spot, he adds. Newer trials, which have yet to be published, are showing success rates in the 90 percent range. In the future, the human might only have to approve the plan and then watch the operation unfold, without intervening.

Passing the safety test

For now, however, there still needs to be someone in the driver’s seat, so to speak. And that could remain the case for many different autonomous robots for a while: even if, in theory, we could entrust the entire decision-making process to the robot, doing so raises a question that has also been raised with driverless cars. “What happens if some of these activities go wrong?” asks Attanasio. “What happens if the car has an accident?” The general view, for now, is that it’s better for humans to remain in control at the end of the day – at least in a supervisory role, reviewing and approving procedures and standing by in case of emergency.

Even so, proving to hospitals and regulators that autonomous robots are both safe and effective could be the biggest barrier to truly human-free robots entering the operating room. Experts have some ideas on how to overcome this obstacle. For example, designers will likely need to be able to explain to regulators exactly how the robots think and decide what to do, Attanasio says, especially if they progress to the point of not just assisting a human surgeon but performing the surgical procedure themselves. That explanation, however, may be easier said than done, since current artificial intelligence systems leave observers with few clues as to how they make their decisions. Engineers may therefore want to design, from the start, systems whose proverbial “black box” is explainable.

Pietro Valdastri, a biomedical engineer at the University of Leeds in England and one of Attanasio’s co-authors, thinks no single manufacturer is likely to solve the regulatory issue easily, but he has an alternative solution. “The solution here is to build a system that, although autonomous, is intrinsically safe,” he says. Valdastri works on what are called soft robots, used in particular for colonoscopies. Traditionally, a colonoscopy involves passing a flexible tube equipped with a camera – an endoscope – through the intestine to detect early signs of colon cancer. The procedure is recommended for anyone over the age of 45, but it takes a lot of time and training for an operator to become proficient with the endoscope. Because properly trained operators are scarce, waiting lists have grown long.

According to Valdastri, an intelligent robot that can steer itself would make things much easier – more like driving a car in a video game. The doctor could then focus on what matters most: spotting the early signs of cancer. And in this case the robot, built from soft materials, would be inherently safer than more rigid devices. It could even reduce the need for anesthesia or sedation, Valdastri says, since it could more easily avoid pushing against the intestinal walls. And since the robot has no way of cutting or zapping anything on its own, it might be easier for regulators to accept.

As the technology develops, Opfermann says, autonomous robots could start out being approved only for simple tasks, like holding a camera. As these basic tasks win approval, they can accumulate to form an autonomous system. In cars, we first had cruise control, he says, but now there’s brake assist, lane-keeping assist, and even park assist – and all of these features are moving toward a driverless system. “I think it will be much the same,” Opfermann predicts, “where we see small autonomous tasks that eventually chain together to form a complete system.”

Source: Knowable