On a recent morning, V. Muthusamy Ramalingamm, a local resident, walked into a room on the second floor, sat down and rested his chin on a small desktop device that pointed a camera into his eyes.
A technician tapped on a screen at the back of an eye scanner, and within seconds a diagnosis appeared on a computer against the wall. Both eyes showed signs of diabetic retinopathy, a condition that can cause blindness if untreated.
In most hospitals and clinics around the world, trained physicians make this diagnosis, examining a patient’s eyes and identifying the tiny lesions, hemorrhages and discoloration that precede diabetic blindness. But the Aravind Eye Hospital is trying to automate the process. Working with a team of Google artificial intelligence researchers based in California, the hospital is testing a system that can recognize the condition on its own.
Google and its sister company Verily targeted this type of blindness because of its prevalence and because it is the sort of illness that an A.I. system can detect early. Google is not charging the hospital while it tests the technology.
Researchers hope this A.I. system will help doctors screen more patients in a country where diabetic retinopathy is increasingly prevalent. Nearly 70 million Indians are diabetic, according to the World Health Organization, and all are at risk of blindness. But the country does not train enough doctors to properly screen them all. For every one million people in India, there are only 11 eye doctors, according to the International Council of Ophthalmology.
The project is part of a widespread effort to build and deploy systems that can automatically detect signs of illness and disease in medical scans. Hospitals in the United States, Britain and Singapore have also run clinical trials with systems that detect signs of diabetic blindness. Researchers across the globe are exploring technologies that detect cancer, stroke, heart disease and other conditions in X-rays and in M.R.I. and CT scans.
Last month, regulators certified the eye system for use in Europe under the Verily name. And the Food and Drug Administration recently approved a similar system in the United States. But hospitals are treading lightly as they consider deploying systems that are vastly different from technology traditionally used for health care.
Aravind’s founder, Govindappa Venkataswamy, an iconic figure in India who was known as “Dr. V” and died in 2006, envisioned a network of hospitals and vision centers that operate like McDonald’s franchises, systematically reproducing inexpensive forms of eye care for people across the country. There are more than 40 of the vision centers around India.
In addition to screening patients in Madurai — one of the largest cities in southern India — the hospital plans to install Google’s technology in surrounding villages where few if any eye doctors are available. The new A.I. system could radically expand the number of people who can be screened.
“Right now, there is a bottleneck when it comes to just screening patients,” said Dr. R. Kim, a nephew of Dr. V’s who now serves as chief medical officer at Aravind.
Behind the new screening methods are neural networks, complex mathematical systems that can learn tasks by analyzing vast amounts of data. By analyzing millions of retinal scans showing signs of diabetic blindness, a neural network can learn to identify the condition on its own.
A neural network is the same technology that is rapidly improving face recognition services, talking digital assistants, driverless cars and instant translation services like Google Translate.
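The learning process described above can be sketched in a few lines, purely as an illustration. Everything below is a synthetic stand-in: random feature vectors instead of retinal images, and a toy two-layer network instead of the deep convolutional models these systems actually use. It shows only the core idea of a network adjusting its internal weights from labeled examples.

```python
import numpy as np

# Toy illustration only: a tiny neural network learning a yes/no
# label from synthetic examples. This is not Google's model.
rng = np.random.default_rng(0)

n, d = 200, 8                              # 200 synthetic "scans", 8 features each
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)         # synthetic "disease present" labels

W1 = rng.normal(scale=0.5, size=(d, 16))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(16, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)                    # hidden activations
    return h, sigmoid(h @ W2).ravel()      # predicted probability of disease

lr = 0.5
for _ in range(500):                       # gradient-descent training loop
    h, p = forward(X)
    g_out = (p - y)[:, None] / n           # cross-entropy gradient at the output
    g_hidden = g_out @ W2.T * h * (1 - h)  # backpropagate through the hidden layer
    W2 -= lr * h.T @ g_out
    W1 -= lr * X.T @ g_hidden

_, p = forward(X)
accuracy = float(((p > 0.5) == y).mean())
```

After a few hundred passes over the data, the network classifies the synthetic examples accurately; real systems do the same thing at vastly larger scale, with millions of images and millions of weights.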
Because these systems learn from enormous amounts of information, researchers are still struggling to completely understand how they work — and how they will ultimately behave. But some experts believe that once they are honed, tested and properly deployed, they can fundamentally improve health care.
At Aravind, computer screens mounted on the walls of the waiting rooms translate information into the myriad languages spoken in the hospital. During his exam, Mr. Ramalingamm, 60, spoke Tamil, the ancient language of southern India and Sri Lanka. He said he was comfortable with a machine diagnosing his eye condition, in part because it happened so quickly. After the initial screening by the A.I. system, doctors could treat the eyes, perhaps with laser surgery, to stave off blindness.
The system performs on a par with trained ophthalmologists, according to a study published in The Journal of the American Medical Association. But it is far from completely replacing a doctor.
Earlier in the day, Pambaiyan Balusamy, 55, sat in the same room. The Google system diagnosed “proliferative” retinopathy in his left eye — the most serious form of the condition — but it could not read the scan of his right eye, most likely because the eye had developed a cataract.
Doctors can sometimes make a diagnosis when faced with cataracts and blurry eye scans. The Google system still struggles to do this. It is trained largely on clear, unobstructed images of the retina, though Google is exploring the use of lower-quality images.
Even with this limitation, Dr. Kim said, the system can augment what doctors can do on their own. Aravind already operates small vision centers in many of the cities and villages surrounding Madurai. The hope is that the Google system can make eye screening easier in these facilities and perhaps other locations across southern India.
Today, in these vision centers, technicians take eye scans and send them to doctors in Madurai for review. Automated diagnosis can streamline and expand the process, reaching more people in more places — the kind of “McDonaldization” espoused by Dr. V.
The technology still faces regulatory hurdles in India, in part because of the difficulty of navigating the country’s bureaucracy. And though Google’s eye system is now certified for use in Europe, it is still awaiting approval in the United States.
Luke Oakden-Rayner, the director of medical imaging research at the Royal Adelaide Hospital in Australia, said these systems might even need new regulatory frameworks because existing rules weren’t always sufficient.
“I am not convinced that people care enough about the safety of these systems,” he said.
Though these deep-learning systems are new, they are hardly the first effort to aid diagnosis through computer technology. As Dr. Oakden-Rayner pointed out, software called breast CAD — approved by the Food and Drug Administration in 1998 — has been widely adopted in the United States to help with the detection of breast cancer, in part because Medicaid provides a rebate when the technology is used. But studies have shown that patient outcomes did not improve and in some cases declined.
“On paper, the Google system performs very well,” Dr. Oakden-Rayner said. “But when you roll it out to a huge population, there can be problems that do not show up for years.”
© New York Times News Service