Will doctors be replaced by artificial intelligence?

In the era of artificial intelligence, doctors will need to relinquish some of their old roles and find the place where they can make the greatest contribution. At the heart of AI is the algorithm. Everyone is concerned with where new algorithms surpass human beings, but we should also pay attention to another question: what new roles will human doctors play in the era of artificial intelligence?

As part of the human-AI diagnostic process, doctors must judge and analyze a patient's symptoms, signs, and physical examination findings when making a clinical diagnosis. This judgment is easily influenced by a doctor's imperfect memory, knowledge gaps, and cognitive biases. AI, on the other hand, has the potential to objectively evaluate the latest and most comprehensive data and medical evidence, and to provide highly accurate diagnoses and recommended therapies based on them.

However, AI needs accurate data input before it can produce a correct diagnosis. A patient's experience of symptoms cannot always be described in precise medical terms, and eliciting the patient's complete medical history remains a key skill in clinical diagnosis. Doctors who listen well and earn patients' trust are more likely to uncover what patients leave unsaid, gather more data, and take the right measures to help them.

Patients may also report inaccurate or irrelevant information, including exaggerations or even lies. Human doctors can identify such content more easily than AI. At the human-AI diagnostic interface, human doctors will therefore have an important role: to understand patients' illnesses as fellow human beings and to feed accurate data into the machine.

But for patients, the fundamental question about the diagnostic interface may not be "Can this machine understand me?" but "Do I want a machine to understand me?". In the future, AI will almost certainly be able to simulate empathy and evaluate the authenticity of patients' narratives. Chatbots are on the rise, and AI's ability to interpret body language is improving. But will patients be willing to share information with a machine? Will they be willing to let a machine tell them they have cancer, no matter how convincing its simulated empathy is at that moment?

Effective communication requires doctors to carefully assess patients' hopes, fears, and expectations, most of which are expressed nonverbally. A skilled doctor can read what patients do not put into words. These channels of communication are instinctive, and they shape a doctor's diagnostic and treatment behaviour, usually without the doctor even realizing it. This kind of human interaction is extremely complex and cannot be replicated by an algorithm.

Sometimes the AI algorithm may fail for lack of appropriate data. For rare diseases, for example, the data available for training may be insufficient to support artificial intelligence. One of the important skills for doctors in this era will be understanding the limits of AI and knowing how to make diagnostic decisions in such circumstances. Similarly, when patients have multiple diseases and need multiple treatments, decision-making becomes more complicated and subtle, because a decision about one condition may affect another, and AI may not match human doctors here. Another challenge will be equivalent diagnoses, where AI assigns similar probabilities to several candidate diagnoses; a human doctor must weigh this uncertainty and communicate it to the patient.
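
To make the kind of uncertainty a doctor would have to adjudicate concrete, here is a minimal, hypothetical sketch: the `DiagnosisCandidate` structure, the thresholds, and the example numbers are assumptions for illustration, not part of any real diagnostic system. It flags a model's output for human review when the top candidates are nearly tied ("equivalent diagnoses") or when the leading diagnosis rests on very few training cases (the rare-disease problem).

```python
from dataclasses import dataclass

@dataclass
class DiagnosisCandidate:
    name: str            # candidate diagnosis
    probability: float   # model's estimated probability
    training_cases: int  # cases of this disease seen during training

def needs_human_review(candidates, margin=0.10, min_cases=50):
    """Return reasons to hand the decision to a doctor; empty list means the ranking is clear-cut."""
    ranked = sorted(candidates, key=lambda c: c.probability, reverse=True)
    top, runner_up = ranked[0], ranked[1]
    reasons = []
    if top.probability - runner_up.probability < margin:
        reasons.append(f"equivalent diagnoses: {top.name} vs {runner_up.name}")
    if top.training_cases < min_cases:
        reasons.append(f"sparse evidence: only {top.training_cases} training cases for {top.name}")
    return reasons

# Example: a rare disease narrowly leads a common one, so both triggers fire.
candidates = [
    DiagnosisCandidate("rare disease X", 0.46, training_cases=12),
    DiagnosisCandidate("common disease Y", 0.41, training_cases=9000),
    DiagnosisCandidate("other disease Z", 0.13, training_cases=4000),
]
for reason in needs_human_review(candidates):
    print("refer to doctor:", reason)
```

In a sketch like this, the machine does not abstain from answering; it simply marks the cases where, as the paragraph above argues, judgment and communication still have to come from a human.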

Team leader in the emergency room

At present, triage in health systems still depends on human judgment, sometimes guided by rules, sometimes by knowledge and experience. The rules are usually based on only a few variables, so they are blunt instruments.

Triage with AI can draw on many more variables and thus be faster, more accurate, and more sensitive. These variables include clinical measurements and real-time readings obtained through wearable devices or implants. Triage no longer needs to sort patients into rough categories (such as red, amber, and green to indicate risk level) but can be adjusted continuously according to the patient's risk and the need for rapid intervention. A continuous data stream could trigger emergency services at an early stage, allowing an unmanned ambulance carrying human first responders to arrive on the scene before the patient even realizes anything is wrong.
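
As a rough illustration of continuous rather than colour-coded triage, here is a minimal sketch assuming a simple logistic risk score over streaming vital signs; the chosen features, weights, and dispatch threshold are invented for the example and are not a validated clinical model.

```python
import math

# Illustrative parameters only; not derived from clinical data.
WEIGHTS = {"heart_rate": 0.03, "resp_rate": 0.12, "spo2_deficit": 0.25, "temp_dev": 0.4}
BIAS = -8.0
DISPATCH_THRESHOLD = 0.85  # hypothetical cut-off for triggering emergency services

def risk_score(vitals):
    """Map one set of vital signs to a continuous risk value in [0, 1]."""
    features = {
        "heart_rate": vitals["heart_rate"],
        "resp_rate": vitals["resp_rate"],
        "spo2_deficit": 100 - vitals["spo2"],      # shortfall from full oxygen saturation
        "temp_dev": abs(vitals["temp_c"] - 37.0),  # deviation from normal body temperature
    }
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def monitor(stream):
    """Re-score the patient on every new reading instead of assigning a fixed colour once."""
    for vitals in stream:
        score = risk_score(vitals)
        if score >= DISPATCH_THRESHOLD:
            print(f"score {score:.2f}: trigger emergency response")
        else:
            print(f"score {score:.2f}: continue monitoring")

# Example: two readings from a wearable device, the second showing deterioration.
monitor([
    {"heart_rate": 78,  "resp_rate": 14, "spo2": 98, "temp_c": 36.9},
    {"heart_rate": 120, "resp_rate": 24, "spo2": 88, "temp_c": 39.0},
])
```

The point of the sketch is the shape of the workflow: the score is recomputed with every reading, so escalation can happen as soon as the data warrant it, not at the next scheduled reassessment.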

In the emergency room, the doctor's role is team leader, knowledge processor, and disseminator. The key tasks will be coordinating rapidly evolving diagnosis and treatment, and, where possible, discussing the potential benefits and risks of treatment with the patient. These tasks do not have to be performed by a doctor, but they do need a human.

Let doctors handle complex and unusual cases

Many mild illnesses could be handled almost entirely by AI. When the diagnosis is clear and there are well-established, effective, and safe treatments, human doctors may not need to be involved.

If AI can handle most routine low-risk illnesses, doctors will have more time to focus on the complicated patients who need experienced care. These patients may have more complicated conditions (such as rare diseases or several coexisting diseases), or their diagnosis may carry greater uncertainty.

The complexity may also lie in the patient's circumstances rather than in the condition itself. Patients with learning difficulties, dementia, addiction, and similar conditions may need more human support than others, so the time AI saves can let doctors help these patients.

Doctors as educators and counsellors for patients

For a long time, doctors have been gatekeepers of medical knowledge, making medical decisions for patients. In the AI era, both patients and doctors have access to medical knowledge. But human beings are very bad at understanding probability and evaluating risk, especially where their own health or the health of family and friends is concerned. For most patients, therefore, doctors have the important task of understanding risks and communicating them: the reliability of a diagnosis, the safety and efficacy of an intervention, and so on. Doctors also need to be able to explain the treatment plan proposed by AI. This does not require a deep understanding of machine learning, just as using a magnetic resonance imaging scanner does not require a detailed understanding of its underlying physics. Having doctors explain AI's treatment plans combines AI's computational power with doctors' medical understanding and their communication skills, so that information reaches patients effectively.

Doctors as advocates for patients

Doctors have long worked on the front line of medical care, listening to patients every day, caring for the same patient for years, and coming to a deep understanding of what medicine can and cannot do. From this vantage point, doctors can hear and respond to the needs of individual patients and of patients as a whole. When interests conflict, for example when limited medical resources must be allocated among patients, this advocacy is particularly important. Such questions may be complicated and emotionally charged, but the decisions should at least be reasoned and transparent. Not everyone will agree with the final decision, but the process leading to it must withstand careful scrutiny.

In the AI era, there is a risk that stakeholders embed "hidden" values in algorithms in order to influence patient care. As Dr. Paul Hodgkin put it: "What happens when there is a value conflict? A pharmaceutical company that funds a machine learning system may want to increase sales, a health care system may want to reduce costs, and patients may give priority to safety." Everyone, including patients, the public, and doctors, needs to take part in this process and hold the algorithms that "rule" accountable. Doctors' key contribution will be their understanding of two worlds: patients' experience in the "real world", and medicine's capabilities and risks.

Doctors in hospice and end-of-life care

In the laws of robotics proposed by science fiction master Asimov, the most basic principle is that a robot may not injure a human being or, through inaction, allow a human being to come to harm. This principle holds in most cases, but it may fail when decisions concern dying. Human doctors understand that some decisions are not based purely on the logic of survival. Although Asimov's law resembles the Hippocratic Oath, human beings can interpret it in more nuanced ways, including the recognition that life is not only about length but also about quality. AI's limitation here is hard to overcome simply by inserting a threshold that trades a change in quality of life against remaining lifespan. One patient with a terminal illness may choose palliative care, while another may choose further chemotherapy. A patient's decision may rest on many factors, and these can be fed into an AI algorithm for analysis, but the final decision must still be made by the patient alone. Such decisions must always remain outside the algorithm.

The emergence of AI will be a revolution in medical care, and the role of doctors will need to evolve with it. This article has highlighted particular opportunities and challenges. Being an excellent doctor in the AI era will require rethinking the skill set and, more profoundly, a change in mindset. Medical schools and postgraduate training should plan for this revolution. New doctors must be able to work in the new world built by AI, a world in which AI seamlessly records every patient encounter and every clinical report, presents them as input data, and produces diagnoses along with probabilities of treatment efficacy, adverse events, and death. In most cases AI will do this faster, more reliably, and more cheaply than humans. Some will see this as a threat; others, as an opportunity.