Doctors as part of the AI diagnostic process
In clinical diagnosis, doctors must weigh and interpret patients' symptoms, signs, and physical examination results. This judgment is easily skewed by imperfect memory, knowledge gaps, and cognitive bias. AI, by contrast, has the potential to objectively evaluate the latest and most comprehensive data and medical evidence, and to deliver highly accurate diagnoses and recommended treatments on that basis.
However, AI needs accurate input to produce a correct diagnosis, and patients cannot always describe their symptoms in precise medical terms. Eliciting a complete medical history remains a key clinical skill. Doctors who listen well and earn patients' trust are more likely to pick up what patients leave unsaid, gather more data, and take the right steps to help them.
Patients may also report inaccurate or irrelevant information, including exaggerations or even lies, and human doctors can spot these more easily than AI. At the human-AI diagnostic interface, human doctors will therefore play an important role: as "humans" who understand the patient's illness and feed accurate data into the machine.
But for patients, the fundamental question at this interface may not be "Can this machine understand me?" but "Do I want this machine to understand me?" AI will almost certainly learn to simulate empathy and assess the truthfulness of patients' narratives; chatbots are on the rise, and AI interpretation of body language is improving. But will patients be willing to share information with a machine? Will they accept a machine telling them they have cancer, however well it simulates emotion at that moment?
Effective communication requires doctors to carefully assess patients' hopes, fears, and expectations, most of which are nonverbal. A skilled doctor can read what patients do not put into words. These channels are instinctive, shape the doctor's diagnostic and treatment behavior, and often operate without the doctor even noticing. Such human interaction is so complex that no algorithm can replicate it.
Sometimes AI algorithms fail for lack of suitable data. For rare diseases, for example, there may simply not be enough training data to support AI. One key skill for new doctors of this era will be understanding AI's limits and knowing how to make diagnostic decisions in such situations. Likewise, when patients have multiple conditions requiring multiple treatments, decision-making becomes more complicated and subtle, because one medical decision may affect another condition, and AI may not match human doctors here. A further challenge is the case of equivalent diagnoses, where AI reports several diagnoses with similar probabilities; human doctors must weigh that uncertainty and discuss it with the patient.
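The "equivalent diagnoses" case above can be sketched in a few lines. This is a purely illustrative example, not a real clinical system: the diagnoses, their probabilities, and the 0.05 margin are all invented assumptions.

```python
# Hypothetical sketch: flag "equivalent diagnoses" when an AI model's
# top-ranked diagnoses have similar probabilities. The 0.05 margin is
# an illustrative assumption, not a clinical standard.

def equivalent_diagnoses(probs, margin=0.05):
    """Return every diagnosis whose probability is within `margin`
    of the top-ranked one, sorted from most to least likely."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    top_p = ranked[0][1]
    return [dx for dx, p in ranked if top_p - p <= margin]

# Invented example output from a diagnostic model.
predictions = {"pneumonia": 0.41, "bronchitis": 0.39, "pulmonary embolism": 0.12}
candidates = equivalent_diagnoses(predictions)

# More than one candidate means the uncertainty should be escalated
# to a human doctor rather than reported as a settled diagnosis.
needs_human_review = len(candidates) > 1
```

The point of the sketch is the handoff: when the list contains more than one entry, the system's output is a question for a human doctor, not an answer.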
The doctor as head of the emergency room
At present, triage in health systems also depends on human judgment, sometimes guided by rules, sometimes by knowledge and experience. Rules are usually based on only a few variables, which makes them blunt instruments.
AI-based triage can draw on many more variables, enabling faster, more accurate, and more sensitive results. These variables include clinical measurements and real-time readings from wearable or implanted devices. Triage need no longer be a coarse classification (such as red, amber, and green risk levels); it can be adjusted continuously to the patient's risk and need for rapid intervention. Continuous data streams could trigger emergency services early, letting a driverless ambulance carrying human emergency staff arrive on the scene before the patient even realizes something is wrong.
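The shift from fixed red/amber/green buckets to a continuously updated score can be illustrated with a toy example. Everything here is an assumption for illustration: the chosen vital signs, the thresholds, and the weights have no clinical validity.

```python
# Illustrative sketch only: a continuous triage score computed from
# streaming vitals, in place of three fixed risk categories.
# Variables, thresholds, and weights are invented for illustration.

def triage_score(heart_rate, resp_rate, spo2):
    """Map a few streamed vital signs to a 0-1 urgency score."""
    score = 0.0
    score += max(0.0, (heart_rate - 100) / 100)   # elevated heart rate
    score += max(0.0, (resp_rate - 20) / 20)      # elevated respiratory rate
    score += max(0.0, (94 - spo2) / 20)           # low oxygen saturation
    return min(score, 1.0)

# The score updates with every new reading, so urgency can rise or
# fall smoothly instead of jumping between three fixed categories.
stable = triage_score(heart_rate=72, resp_rate=14, spo2=98)
deteriorating = triage_score(heart_rate=128, resp_rate=26, spo2=90)
```

A real system would combine far more variables and a learned model; the sketch only shows why a continuous score can react earlier and more finely than a three-color label.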
In the emergency room, the doctor's role is team leader, knowledge processor, and communicator. Coordinating rapid diagnosis and treatment, and where possible discussing the potential benefits and risks of treatment with the patient, will be key. These tasks need not be done by a doctor, but they do need a person.
Letting doctors handle complex and unusual cases
Many mild illnesses could be handled almost entirely by AI. Once a diagnosis is made and a well-established, effective, and safe treatment exists, a human doctor may not need to be involved at all.
If AI can handle most routine, low-risk illnesses, doctors will have more time to focus on complex patients who need deep experience. These patients may have more complicated conditions (for example, rare diseases or atypical presentations of common ones), or their diagnosis may carry greater uncertainty.
The complication may also lie in the patient's circumstances rather than the condition itself. Patients with learning difficulties, dementia, addiction, and similar challenges may need more human support than others, and AI can free up doctors' time to provide it.
Doctors as educators and advisers to patients
For a long time, doctors have been the gatekeepers of medical knowledge, making medical decisions on patients' behalf. In the AI era, patients and doctors will have equal access to medical knowledge. But humans are very poor at understanding probability and assessing risk, especially where their own health or that of family and friends is concerned. For most patients, then, doctors will have the vital task of interpreting risk and communicating it: the reliability of a diagnosis, the safety and effectiveness of an intervention, and so on. Doctors will also need to explain treatment plans produced by AI. This does not require a deep understanding of machine learning, just as using an MRI scanner does not require a detailed understanding of its mechanics. Having doctors explain AI's treatment plans combines AI's computational power with doctors' medical understanding and communication skills, conveying information to patients effectively.
Doctors as patients' advocates
Doctors are veterans of the medical field: they listen to patients every day, care for the same patients over many years, and deeply understand medicine's possibilities and limits. From this vantage point, doctors can hear and respond to the needs both of individual patients and of patients as a whole. When interests conflict, such as when limited medical resources must be allocated among patients, this advocacy is especially important. Such questions may be complicated and emotionally charged, but the decisions should at least be reasoned and transparent. Not everyone will agree with the final decision, but the process leading to it should withstand careful scrutiny.
In the AI era, there is a risk that stakeholders will embed "hidden" values in algorithms to influence patient care. As Dr. Paul Hodgkin put it: "What happens when there is a conflict of values? A pharmaceutical company that funds a machine learning system may want to increase sales, while a health care system may want to reduce costs, and patients may give priority to safety." Everyone, including patients, the public, and doctors, needs to take part in this process and hold the "ruling" algorithms accountable. Doctors' key contribution will be their understanding of two domains: patients' experiences in the "real world," and medicine's capabilities and risks.
Doctors in end-of-life care
In the robot/AI laws proposed by science fiction master Isaac Asimov, the most basic principle is that a robot "may not injure a human being or, through inaction, allow a human being to come to harm." This holds in most cases, but it may break down in end-of-life decisions. Human doctors understand that some decisions are not driven purely by the logic of survival. Although Asimov's law resembles the Hippocratic oath, humans can interpret it in more nuanced ways, including the recognition that life is about quality as well as length. AI's limitation here cannot be overcome simply by inserting a threshold trading quality of life against remaining life. One terminally ill patient may choose palliative care while another chooses further chemotherapy. The many factors behind such a decision can be fed into an AI algorithm for analysis, but the final decision must be made by the patient alone. It must always remain outside the algorithm.
The rise of artificial intelligence will revolutionize medical care, and the role of doctors must evolve with it. This article has highlighted particular opportunities and challenges. Being an excellent doctor in the AI era will require rethinking the skill set and, more profoundly, a change in mindset. Medical schools and postgraduate training should plan for this revolution, so that new doctors can cope with the world AI builds: one in which AI seamlessly records every patient's condition and every clinical report, turns them into input data, and produces diagnoses, treatment efficacy, adverse events, and probabilities of death. In most cases it will do this faster, more reliably, and more cheaply than humans. Some will see this as a threat; others, as an opportunity.