RSNA (the Radiological Society of North America), one of the world's largest international medical societies, held its first Spotlight Course on AI for radiologists from May 31 to June 1, with the aim of providing a platform to help radiologists understand the importance of AI in radiology.
After all, healthcare, with its vast amounts of data and technological needs, is one of the first areas to be impacted by large-scale AI technologies, and one of the fastest industries to see many of these technologies move toward adoption.
This "AI Talk" includes a brief introduction to AI in medical imaging, a discussion of its impact on better human health, and how to access AI systems in your own medical practice, each of which features a discussion or presentation by one of the AI industry's leading practitioners. We've excerpted some of the key points:
One of the clearest takeaways from the course was that AI is already a pivotal technology in radiology. CT, MRI, PET, and other medical imaging tools are essential for doctors making diagnoses, and AI's powerful data-processing capabilities can assist them on multiple levels.
World-renowned AI expert and Stanford professor Andrew Ng introduced the development of AI and deep learning algorithms, as well as new advances in AI imaging technology. His lab and Stanford Hospital have collaborated on CheXNet, Xray4All, and other work on understanding images with deep learning. These deep learning techniques can recognize eleven different pathologies in chest X-rays, detect abnormalities in MRIs of the knee, detect signs of aneurysm in head CT scans, and more.
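The chest X-ray systems described above are multi-label classifiers: one image can be flagged for several pathologies at once. A minimal sketch of that setup follows, using a toy linear model and made-up pathology names for illustration; CheXNet itself is a deep convolutional network trained on large labeled datasets, not reproduced here.

```python
import numpy as np

# Toy multi-label classifier: one independent sigmoid output per pathology,
# so a single image can be flagged for several findings at once.
# (Hedged sketch only -- real systems use a deep CNN, not a linear model.)
PATHOLOGIES = ["atelectasis", "cardiomegaly", "effusion", "pneumonia"]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(image_features, weights, bias, threshold=0.5):
    """Return per-pathology probabilities and binary flags."""
    probs = sigmoid(image_features @ weights + bias)
    flags = {p: bool(probs[i] >= threshold) for i, p in enumerate(PATHOLOGIES)}
    return probs, flags

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, len(PATHOLOGIES)))  # stand-in model weights
bias = np.zeros(len(PATHOLOGIES))
features = rng.normal(size=8)                     # stand-in CNN image features
probs, flags = predict(features, weights, bias)
```

The key design point is the sigmoid per class rather than a single softmax: pathologies are not mutually exclusive, so each gets its own independent probability.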
"Deep learning can now do most of the basic tasks that take a human about a second. Of course, for AI to completely replace the doctor in diagnosis and judgment, there is still a long road ahead and many breakthroughs needed," Andrew Ng said.
One of the organizers of the course, Professor Curtis Langlotz, associate chair of the Department of Radiology at Stanford University School of Medicine, said he is not pessimistic about AI completely replacing the work of clinical imaging physicians. He emphasized that imaging physicians need to keep learning cutting-edge AI knowledge and skills, but that AI is simply another valuable new technology, following others that clinical medicine has already absorbed, such as CT, magnetic resonance imaging, and ultrasound. Clinicians need to put new AI technologies to work in their practice. "Tasks that some physicians feel are menial, such as measuring lesion size and tracking changes in lesion location and size across the disease course, are tasks that AI is better at and that people are less comfortable doing. So in one way AI can make the clinician's job better," he said. "With the assistance of AI, clinicians can take on tasks that are cognitively more interesting and challenging."
There's no denying that doctors still face some new challenges. In the face of AI's ongoing transformation of the healthcare field, how can physicians, who are in close proximity to patients and provide day-to-day medical care, adapt to such times?
First, doctors need to learn more about new technologies and how they can be applied in areas such as clinical diagnosis, surgical prognosis, and advance screening. Several medical imaging AI researchers in the course shared their new research in these areas.
"AI will not replace doctors, but doctors who can use AI will replace doctors who can't," Prof. Curtis Langlotz quipped while discussing the clinical application of AI in healthcare.
Ng also said, "In the world of technology, every five years, our jobs change dramatically. Today, technology is also enabling all other industries to change even faster than before. A lot of things that radiologists used to do will be automated, yet if these doctors are willing to think about what the really important work is, keep broadening their horizons, and focus on work that is different (from these jobs that can be automated), they won't have to worry about anything."
Secondly, the new technologies themselves can further enhance doctors' specialties.
Dr. Hugh Harvey, a radiologist at Kheiron Medical in the U.K., noted that radiologists need a working knowledge of data science and machine learning, especially when it comes to organizing data. AI techniques such as deep learning require large volumes of data, he said, but the discussion tends to focus on quantity at the expense of quality: data taken directly from clinical systems is far from ready for real clinical AI research and application.
In general, organizing clinical data requires at least four layers of processing.
The first layer is the raw data from clinical systems (PACS, electronic medical records), which often contains sensitive information and is both large and too heterogeneous to be used directly for research.
The second layer is data that has passed ethics committee review and has had sensitive patient information removed. Doctors and researchers can access it under restrictions, but it is generally still unstructured and not suitable for direct use in research.
The third layer is further structuring and cleaning of the data, with visualization and inspection to ensure the quality of the image data.
The fourth layer matches the data with the corresponding clinical information and labels it, manually or automatically, so that it can be analyzed in AI research. Even at this stage, it is necessary to confirm that the data has sufficient statistical power and that the labels rest on real ground-truth criteria. For example, determining a patient's disease may require comparing the readings of multiple doctors, as well as confirmation from subsequent onset or follow-up results.
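The four layers above can be sketched as a simple pipeline. Everything here is illustrative: the field names (`patient_name`, `image_id`, and so on) and the simulated "readers" are assumptions for the sketch, not the schema of any real PACS or labeling workflow.

```python
# Hedged sketch of the four curation layers described above.
RAW_RECORDS = [  # layer 1: raw clinical export, still contains sensitive fields
    {"patient_name": "J. Doe", "image_id": "IMG-1",
     "report": "possible effusion", "label": None},
    {"patient_name": "A. Roe", "image_id": "IMG-2",
     "report": "", "label": None},
]

def deidentify(records):
    """Layer 2: strip sensitive fields after ethics review."""
    return [{k: v for k, v in r.items() if k != "patient_name"}
            for r in records]

def clean(records):
    """Layer 3: structure and inspect; drop records with empty reports."""
    return [r for r in records if r["report"].strip()]

def label(records, readers):
    """Layer 4: attach labels, e.g. by majority vote over several readers."""
    out = []
    for r in records:
        votes = [read(r["report"]) for read in readers]
        out.append(dict(r, label=max(set(votes), key=votes.count)))
    return out

# Three simulated readers voting on whether a report shows an effusion.
readers = [lambda rep: "effusion" in rep,
           lambda rep: "effusion" in rep,
           lambda rep: False]
dataset = label(clean(deidentify(RAW_RECORDS)), readers)
```

The point of the sketch is the ordering: de-identification happens before researchers ever see the data, quality filtering happens before labeling, and the final label is a consensus rather than a single reading, mirroring the multi-reader criterion mentioned above.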
For physicians, being open to technology and gaining access to and mastery of emerging technologies through courses, events, and exchange programs will likely pay off doubly in the future of healthcare.
Professor Greg Zaharchuk, a Stanford neuroimager and director of the Frontier Neurological Imaging Laboratory, who attended the conference, said that this kind of course can be a good way to explain the theory, application, development, and limitations of AI to clinicians, and that he is very pleased to see more and more imaging doctors are enthusiastic about AI and want to gain more knowledge in this area.
On the other hand, he also emphasized that there is still a big gap between clinical AI research and real clinical AI product deployment. How to ensure that algorithms work across different cases, devices, and scanning parameters are all issues being faced now that will need to be gradually addressed.
"I am very pleased to see so many imaging physicians and practitioners participating in this event. This is the first AI Spotlight Course organized by RSNA, and we hope to keep communication flowing between research, clinical practice, and industry. It is also great that AI imaging companies like Subtle Medical keep giving academic presentations and publishing papers while commercializing; it is valuable to critically analyze product performance and clinical value," said Prof. Matthew Lungren of Stanford, one of the organizers of the event.
Radiologists face more opportunities and challenges in the age of AI, while for the broader public, technology can bring more security and higher standards of care.
At the event, Pranav Rajpurkar, a PhD student from Andrew Ng's lab, gave a live demonstration of the Xray4All platform: the user uploads a photo of an X-ray image, and after a second or two of transmission the result appears online, with any abnormality detected and its location highlighted.
"This technology is particularly well suited to global health scenarios, addressing the shortage of clinician resources in developing countries," Pranav explained.
Arterys, another U.S.-based AI imaging company, which has raised more than $45 million in funding, hosted the luncheon and described its vision for the future: to further roll out its image analytics and AI products and gradually expand the platform, providing predictive analytics that use real-world data to inform healthcare decisions globally, automating routine healthcare tasks, and further democratizing healthcare. Arterys emphasized that its image analytics and AI products run on cloud computing, which it argues is actually faster, safer, and more reliable than computing on a hospital's internal systems.
As one of the countries with the highest annual government healthcare spending, the United States is at the forefront of the world in promoting technology in the AI healthcare space. And China, a populous country with strained per-capita healthcare resources, also has great demand for AI healthcare.
In this session, China's Imagine Technology, US-based Nuance, and Subtle Medical, which has been rapidly expanding AI image processing in both China and the U.S., were invited to present on the theme "Implementing AI: the last mile", discussing the last key step in industrializing clinically deployed AI systems.
Nuance introduced several of its products, which have reached millions of patient records in China and are being tested in four hospitals and imaging centers in the U.S. Nuance holds a large U.S. market share in speech recognition and reading-and-markup tools for clinical imaging, and is also promoting its "Nuance AI Marketplace" app store for medical imaging AI.
Subtle Medical is the only one of the three with an AI product cleared by the FDA for commercialization. Dr. Enhao Gong, CEO of Subtle Medical, described how the company is clinically deploying its FDA-cleared SubtlePET product, as well as conducting clinical testing of its pending SubtleMR and other products.
Subtle Medical's SubtlePET is the first cleared medical image enhancement application and the first AI application for nuclear medicine. Its value proposition centers on using AI to achieve roughly 4x faster image acquisition and to reduce radiation and contrast dose. The software makes clinical imaging easier, higher quality, safer, and smarter for patients, and since FDA clearance it has been commercially deployed and clinically partnered with 20 leading hospitals and imaging centers in the U.S. and worldwide.
In the U.S., there is a high bar to getting hospitals to actually adopt AI and be willing to pay for it: deep integration with hospital information systems, confirmation of the system's effectiveness with clinicians, and a demonstration to hospitals of the return that purchasing an AI system can bring.
"Really deploying in U.S. hospitals requires communicating with clinicians, information-systems leaders, and hospital management on many fronts. In Subtle Medical's case, our clinical and sales leaders need to run quick, effective tests on real data with hospitals, letting them test clinically on their own data in real time while disturbing existing operations as little as possible. Through such tests and the directly observable acceleration of imaging exams, a hospital can see very objectively the new clinical and economic value AI brings, and move on to procurement and deployment," said Enhao Gong, CEO of Subtle Medical.
Jeff Sorenson, CEO of the medical image post-processing company TeraRecon and of the medical imaging AI platform EnvoyAI, and Prof. Eliot Siegel, a renowned imaging physician and promoter of imaging AI, discussed in an interview format how to optimize imaging AI workflows, the deployment process, and ongoing validation.
"Deep clinical validation of AI algorithms is a critical step in the rollout of medical AI, and we are constantly moving toward that goal," Prof. Eliot Siegel emphasized.
While medical imaging is already one of the areas where AI is best suited and can be deployed fastest, we still face challenges.
First, AI technology, as represented by deep learning, is still a "black box". The technology can achieve high accuracy in medical imaging, but it remains difficult to understand how it arrives at its classifications or what relationships in the data it is actually using.
"At Stanford, we hope to avoid the black-box effect by building better attention maps for medical image perception," said Dr. Safwan Halabi, a professor at Stanford School of Medicine. Many recent studies and reports have discussed how data-based adversarial attacks can stop an AI that recognizes road signs from working properly. In healthcare AI, ensuring that models are not misled in this way is very important, but not enough research has been done in this area.
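One simple way to see which regions a model relies on, in the spirit of the attention maps mentioned above, is occlusion-based saliency: mask one patch at a time and measure how much the model's score drops. The sketch below uses a toy scoring function (total pixel intensity) as a stand-in for a real model; it is not any Stanford system, just an illustration of the technique.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2):
    """Per-patch importance map: score drop when each patch is zeroed out."""
    h, w = image.shape
    base = score_fn(image)                  # model score on the full image
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i+patch, j:j+patch] = 0.0   # occlude one patch
            sal[i // patch, j // patch] = base - score_fn(masked)
    return sal

# Toy example: a 4x4 "image" with a bright "lesion" in the top-left corner,
# scored by total intensity. The saliency map should light up only there.
img = np.zeros((4, 4))
img[0, 0] = 5.0
score = lambda x: float(x.sum())
sal = occlusion_map(img, score)   # sal[0, 0] is large, the rest are zero
```

If a classifier's saliency map highlights a laterality marker or a pacemaker wire rather than the lesion, that is exactly the kind of shortcut learning that clinical review needs to catch.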
Dr. Matthew Lungren, head of the medical imaging AI research program at Stanford AIMI and one of the course organizers, also discussed bias in clinical AI in a talk on "bias and implications for medical imaging AI". AI is likely to introduce bias when used clinically; for example, a medical image classifier may key on incidental markers in the image rather than the lesion itself. Current tools do not handle bias in data and algorithms well. AI for practical clinical use must let a person understand how credible a result is; considering human-computer interaction in system design, and building AI algorithms that report their confidence, can greatly help minimize bias issues.
Prof. Jayashree Kalpathy-Cramer, one of the heads of the machine learning lab at Massachusetts General Hospital, discussed how to build more robust models, and how techniques such as transfer learning and federated learning allow trained deep learning models to be shared in multi-hospital collaborative projects without sharing sensitive data.
In the age of artificial intelligence, technology is constantly penetrating and transforming all walks of life. Medicine is a field that is extremely connected to human life, and in such a large, important field that stands at the forefront of AI adoption, we are seeing more and more efforts to help technology better integrate with medical practice.
For example, the first AI in medical imaging course offered by RSNA attracted more than 200 doctors from top U.S. hospitals, and technologists in the industry are happy to provide more information to help doctors better understand AI. Startups like Subtle Medical, meanwhile, are designing their products so that doctors can integrate the technology into their workflows as "seamlessly" as possible, without extra effort spent adapting to the product. As doctors learn more about the technology, startups in turn develop better products for doctors and patients.
In the future, human health will be supported by more technology, but most importantly by people across the industry working together to bring about a more efficient and effective healthcare system.