ENDO 2025 Conference Coverage

AI-Based Screening for Diabetic Retinopathy

In this expert conversation, researchers Elangovan Krishnan, MBBS, PGDHM, MTech, MS, PhD; Ramya Elangovan; Kavin Elangovan; Nancy Jayeshbhai Vora, MD; and Gurudharshan Rajamani, MBBS, discuss the development and clinical relevance of SMART (Simple Mobile AI Retina Tracker), a mobile AI-based screening tool for diabetic retinopathy. The team outlines how SMART was trained using global retinal image datasets, validated across diverse populations, and optimized to function on low-resource devices while maintaining expert-level accuracy. They also explore how SMART addresses barriers in early detection and access to care in underserved settings, and highlight ongoing work to expand its diagnostic capabilities across multiple ocular conditions. This tool and its underlying model were presented at ENDO 2025, July 12-15, 2025, in San Francisco, CA.

Additional Resource:

  • Sethuraj JR, Elangovan R, Elangovan K, et al. A simple mobile artificial intelligence retina tracker (SMART) powered by efficient deep learning models for diagnosis and prognosis of diabetic retinopathy. Presented at: ENDO 2025; July 12-15, 2025; San Francisco, CA. https://www.endocrine.org/meetings-and-events/endo-2025

TRANSCRIPTION

Consultant360: What are the key takeaways you hope to highlight in your ENDO 2025 presentation?

Elangovan Krishnan, MBBS, PGDHM, MTech, MS, PhD: In the last few decades, the greatest advances in medicine have come from outside medicine, from fields like computer science and biomedical engineering. Today if you go for an MRI or a CT, it's an engineering innovation that is revolutionizing medicine. In the same vein, today there is a new kid on the block, and this kid is one of the most intelligent kids: intelligent systems that can think like humans, learn like humans, reason like humans, and make decisions. That's artificial intelligence, because it has the power of human intelligence. The entire idea is that we want to harness this power of AI to help physicians make better decisions, to predict diseases before they arise, and to identify how a disease will progress.

The core of our principle is applying AI to create intelligent health care decisions and to augment physicians so they can make those decisions accurately and precisely. The most important thing is not only creating this technology; we want to make it accessible to the ends of the earth. If I have to create a technology that will be adopted everywhere in the world, the end users should be involved in creating it. That is exactly why we deliberately went all over the world. Today we have physicians from 63 nations involved. They volunteer their time to provide their intellectual, scientific input in creating these intelligent solutions for health care using the power of AI.

So we have created a simple application, an AI-based clinical application to diagnose diabetic retinopathy. Let me explain a little bit about why we chose AI to study the eyes. The eyes are incredible organs. The eye is like a dashboard for our whole body. Today you can detect Alzheimer's disease up to 10 years early just by looking at the eye. We can detect Parkinson's a decade before it affects the brain, because certain changes in the retina occur earlier. You can predict heart attacks from retinal images. You can predict stroke. So there are a number of diseases. This is a new, evolving field called oculomics. Our goal is to apply AI in the field of oculomics, and as a first model, we chose diabetic retinopathy.

Why diabetes? If there is one disease that causes the most vision loss after age-related macular degeneration, it is diabetes. A significant share of the world's population is diabetic, obese, or suffering from metabolic syndrome; nearly 50% of the population fall into one of these three groups, and nearly 90% of preventable vision loss can be detected by AI-based screening. So we want to use this as a proof of principle.

We first developed an AI model, training it with vast datasets, and then tried to accurately diagnose diabetic retinopathy. Not only can we diagnose diabetic retinopathy, we can classify the stages, stage one through stage four, every stage of diabetic retinopathy. And not only can we classify the stages of diabetic retinopathy, we can differentiate this eye disease from all the other eye diseases.

So our project involved all three of these steps: identifying diabetic retinopathy, staging it, and differentiating it from other common conditions of the eye, about 30 different conditions, including hypertensive retinopathy, glaucoma, and cataract. And we didn't stop there. We reached an accuracy of more than 99% for most of these diseases. When we ran the training long enough, we could even reach an accuracy close to 100%. And we didn't test on one dataset. We collected datasets of retinal images from all six continents of the world. I should not say we; it was Ramya who spent 3 years just reaching out and sourcing every publicly available retinal dataset we could find in the world.

It took us about 3 years to get there and collect all these datasets. Then we built a massive AI infrastructure to analyze this data and train a model that reaches an accuracy close to 100%.

But we do want to be very cautious in saying that these results were all obtained on computer systems; they may not accurately reflect what will happen in the real world. So when we trained on one dataset, we tested on other datasets. For example, an AI model trained on an Indian population may not perform at the same level of accuracy on retinal images from another nation. There are differences between male and female retinas. There are differences between retinas from different races. So how do we avoid those biases? We went through an extensive process of removing every single bias from our analysis.
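The cross-population validation described above can be sketched as a leave-one-dataset-out loop: train on every population except one, then measure accuracy on the held-out population, where a large drop signals bias. This is only an illustrative outline; the population names, the toy `train`/`evaluate` helpers, and the numeric data are hypothetical stand-ins, not the team's actual pipeline.

```python
# Illustrative leave-one-dataset-out bias check.
# The "model" and "images" here are toy placeholders: each image is a
# single number, and the model is just the mean of its training data.

def train(images):
    # Toy "training": remember the mean value of the training images.
    return sum(images) / len(images)

def evaluate(model_mean, images):
    # Toy "accuracy": fraction of test images within 0.5 of the trained mean.
    return sum(abs(x - model_mean) < 0.5 for x in images) / len(images)

# Hypothetical per-population datasets (stand-ins for retinal image sets).
datasets = {
    "population_A": [0.2, 0.3, 0.25, 0.35],
    "population_B": [1.6, 1.7, 1.65, 1.75],
}

# Hold out each population in turn; a sharp accuracy drop on the
# held-out group indicates the model does not generalize to it.
for held_out, test_images in datasets.items():
    train_images = [x for name, imgs in datasets.items()
                    if name != held_out for x in imgs]
    model = train(train_images)
    score = evaluate(model, test_images)
    print(f"held out {held_out}: accuracy {score:.2f}")
```

In a real pipeline the same loop structure applies, with the toy helpers replaced by actual model training and a clinical accuracy metric.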

That's the hallmark of our study. But us creating this model doesn't solve all the world's problems. Then came Kavin. He said we need to make this model available to everyone in the world. So he created a simple web application so the model can be hosted on an online domain and provided to everyone in the world absolutely free of cost. Any physician, or anyone at all, can use this web application to test any image. And that raised a very important point: we need people's retinal images, but those are private, protected health information. Now they can use the application in their own browser, on their own computer, and run the analysis. The image does not go anywhere else. So it is protected, it is personalized.

So we solved the problem of privacy: people don't have to give their image to anyone else. Even a patient doesn't have to hand his image to a physician; he can show the image on his own cell phone when he visits the doctor and get a diagnosis. And here is another beautiful thing Kavin did: we don't offer just one image or one model. You have the option to choose among many models. Just as you can get a second physician's opinion, you can get a second AI model's opinion. Everything is incorporated into the application, so you can test yourself with multiple AI models, and all of them are available. That is the uniqueness of this study. Then we created the application.
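The "second AI model opinion" described above is, in machine-learning terms, an ensemble. A minimal sketch of majority voting across several classifiers follows; the three model functions and their hard-coded stage labels are hypothetical stand-ins, not SMART's actual models:

```python
from collections import Counter

# Hypothetical stand-in classifiers; each returns a DR stage label.
def model_a(image): return "stage 2"
def model_b(image): return "stage 2"
def model_c(image): return "stage 3"

def second_opinion(image, models):
    """Run every model on the image; return all votes and the majority call."""
    votes = [m(image) for m in models]
    consensus, _count = Counter(votes).most_common(1)[0]
    return votes, consensus

votes, consensus = second_opinion("retina.jpg", [model_a, model_b, model_c])
print(votes)      # each model's individual opinion
print(consensus)  # the majority vote
```

Showing the individual votes alongside the consensus mirrors the application's behavior of letting the user consult several models rather than trusting a single one.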

The third and most important piece of the puzzle: how good is a model if it is only tested by the one who created it or the one who developed the application? We want the real end users. So we reached out to physicians across the world, and the physicians you see working and volunteering with us today are a small sample of the many others who are testing this application for us and giving us real feedback, which we are going to incorporate and present at the end of 2025. And that's the crux of our presentation.

C360: Why is now the right time for a tool like SMART to make an impact in diabetic retinopathy screening?

Ramya Elangovan: Well, firstly, diabetes and vision loss are on the rise. Diabetes is more common now than at any point in history, and that means more and more people are at risk of losing their sight to diabetic retinopathy. But the good news is that if diabetic retinopathy is found early, the majority of these vision loss cases can be prevented. The problem we tried to address with this project is that our current screening methods are struggling to keep pace with demand, and so many people just aren't getting screened in time. We have a worldwide shortage of eye doctors, especially in lower-income countries and resource-limited settings. That makes regular eye screening inaccessible to billions of people, especially the people who need it the most. They can't get screened because they live far from clinics, or the tests are too expensive, or the clinics don't have the specialized equipment.

But the goal of the application we created, SMART, was to make this screening simple and fast. For SMART, all you need is a smartphone, and the AI will check for diabetic retinopathy. The cornerstones of SMART: one, it's fast; when you upload an image, it scans it and gives you the diagnosis in just a few seconds. Two, it's universal; it can be used anywhere in the world on any device. Three, it's very easy to use, perfect for busy clinics or areas with fewer resources; only minimal training is needed. And most importantly, it's accurate. This technology allows regular doctors and nurses, not just eye specialists, to help screen patients. That means more people can get checked, and eye doctors can dedicate their time to the people who need the most help. And to answer your question, now is the perfect time for SMART and for diabetic retinopathy screening because more people than ever have smartphones, and AI models are demonstrating exceptional accuracy; we have new AI models that are outperforming what we've seen before. Also, more doctors and health professionals are excited about using AI to help more patients and improve care. So technology like SMART makes it possible to screen more people more easily, more quickly, and more affordably than ever before.

C360: How does this technology improve access to retinal screening in underserved or resource limited settings?

Kavin Elangovan: Our app is super helpful for underserved, underprivileged, or resource-limited settings. The main problem is the lack of ophthalmologists in these settings: either they are not well trained or there simply aren't enough of them. For example, in Texas there is only about one ophthalmologist for every 25,000 people, and people might have to wait about 6 months for an appointment. And that's Texas, a well-resourced state; in many other places the situation is far worse. In addition, the equipment for proper diagnosis can be very expensive and out of reach. So there are a lot of things causing problems for people in resource-limited settings.

This is where our innovation is super helpful, because we utilize simple smartphones and, in general, internet-connected devices. Nowadays devices like smartphones, tablets, and even smartwatches are more and more affordable. Even in underprivileged communities, you can buy a simple internet-connected device for as little as $10, and right now almost every household has at least one device with a camera and basic internet access. So we can use the devices already in people's hands to create a very powerful diagnostic platform within reach of all these communities. When we were developing these AI models, we made them not only accurate but especially efficient, so they don't require a powerful device to run. The models work very smoothly on, for example, low-end smartphones and still deliver expert-level performance.

As my sister mentioned, they take less than a second to diagnose an image. This type of AI takes far less time to learn and produces its output in seconds, while a clinician, who may not be as well trained, may still take a few minutes to make that diagnosis. So the capabilities of AI are truly very helpful in all these communities.

In addition, we also decided to address the aspect of computational power. Most AI systems today use very complex models that require a lot of computing power, plus fast internet or access to the cloud, and the simple smartphones and devices in resource-limited settings usually can't run these models and produce output. Those complex systems are usually run by big companies, they only work on high-end devices, and this keeps AI out of reach of the general public. In contrast, we decided to focus on lightweight, fast, and still accurate models, so that anyone, anywhere, can get high-quality results.
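One common way to make vision models light enough for low-end phones, as described above, is to replace standard convolutions with depthwise-separable ones, the approach behind MobileNet-style architectures. Whether SMART uses this exact technique is an assumption; the arithmetic below simply shows why such substitutions shrink a model so dramatically:

```python
# Parameter counts for a single convolutional layer (bias terms omitted).

def standard_conv_params(k, c_in, c_out):
    # A standard convolution: c_out filters, each k x k x c_in.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise step (one k x k filter per input channel)
    # followed by a pointwise (1x1) convolution.
    return k * k * c_in + c_in * c_out

# A typical mid-network layer: 3x3 kernel, 128 in, 256 out channels.
std = standard_conv_params(3, 128, 256)   # 294,912 parameters
sep = separable_conv_params(3, 128, 256)  # 33,920 parameters
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")
```

Roughly an 8.7x reduction for this layer, which is the kind of saving that lets a model run on a low-end smartphone without a cloud connection.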

Privacy is also one of the most important aspects when you're developing a medical application, because there are so many complications if there is any concern about data misuse. But in our application, the data does not leave the user's device. Everything is local, private, and secure, and the user has full control over their medical information. And lastly, our application is very simple to use: no matter the user's education or background, all the instructions are super simple and clear, and anyone can use it efficiently. So, in summary, the SMART application is very simple and uses very lightweight models, and since we're using the smartphones and devices people already have in their hands, we can bridge the gap, especially in underprivileged or resource-limited settings.

C360: What do you feel clinicians should know about incorporating SMART into routine care?

Nancy Jayeshbhai Vora, MD: With this AI model, what we are trying to do is provide more accessibility and more accuracy. Say, for instance, a patient has diabetic retinopathy; the first doctor they will usually visit is their primary care physician. And yes, primary care physicians are incredibly talented, but they don't have the tools or the rigorous specialty training. For example, to become an ophthalmologist after high school, you need at least 20 to 25 years of training. For that reason, primary care often misses the early signs of diabetic retinopathy.

And here comes our model; we want to bridge this gap. Our incredible team built this model to really help the primary care physician, and it performs like an ophthalmologist with 25 years of experience. The model is 99.9% accurate and takes only a few milliseconds to diagnose diabetic retinopathy, which is a leading cause of preventable blindness. By integrating this, we want primary care providers in remote settings to have more access, and people located in remote settings can use this app and help prevent vision loss from diabetic retinopathy.

Gurudharshan Rajamani, MBBS: Our team incorporated the AI model developed by Ramya into an application, and from a clinician's perspective it takes just seconds to arrive at a diagnosis. Our goal with this project is pretty simple: to support physicians by reducing their workload and to help with early identification of suspicious cases. Not every case can be referred to specialists, increasing their workload in the name of suspicion. So this AI helps us bridge that gap.

It can also provide a reliable second opinion if needed. And the best part about the application is that it is versatile: it doesn't rely on a single model but rather uses multiple AI models to arrive at a more accurate diagnosis. It also covers multiple diseases, so you can screen for other conditions as well.

The best part is that it is free and universally accessible. You can access this application anywhere in the world, and it works on any internet-powered smartphone. What I feel truly sets our project apart is that it is open source, noncommercial, and completely free. There are no subscriptions, no strings attached. By keeping it open source, we invite developers from around the world to contribute and enhance the application. The vision here is a collaborative approach to democratize health care by making advanced diagnostic tools easily accessible to everyone, even in rural and resource-limited settings.

C360: What unanswered questions around this technology remain and where do you see the most promising directions for future research?

Ramya Elangovan: First, as we mentioned before, we still don't know exactly how the retina differs across genders, populations, and backgrounds; people from different countries may have small differences in the eye. We want to make sure that the AI works equally well for everyone, no matter their background, and that it does not become biased. So one of the unanswered questions is: how do we ensure the AI is not introducing any bias when we show it images of different people?

Another thing is that not everyone uses the same quality of camera or instrument to take pictures of the eye. So a question is: how do we ensure the AI works well, without bias, even with different equipment? And sometimes the person taking the picture, the operator, may not be a doctor, and we want to make sure the AI can still give good results no matter who is using it.

To answer some of these questions, here are a few of our future directions, or where our research is heading. One of the most promising, and I think the most exciting from my perspective, is actually creating a portable device. Imagine a small, easy-to-use device that you could use to scan your eye anywhere: at home, at a clinic, in a rural setting, anywhere, without having to visit a specialist. The AI model would be incorporated into that device, analyze the scan, and give you an accurate, instantaneous diagnosis. We're currently working toward making this a reality.

We've also created an app for hypertensive retinopathy so it can spot signs of high blood pressure in the eye. And we're actually presenting that at the American Heart Association Hypertension Conference. So that's another future direction.

And finally, we've completed the training and testing of AI models for 30 different ocular conditions. One of our major goals is making the model as universal as possible; incorporating more diseases ensures it's applicable to more people. We are currently in the process of deploying that model in an application that people can use.


©2025 HMP Global. All Rights Reserved.
Any views and opinions expressed are those of the author(s) and/or participants and do not necessarily reflect the views, policy, or position of Consultant360 or HMP Global, their employees, and affiliates.