Discussions of AI span a wide range of ethical, legal, and practical questions. The debate will continue for a long time, but it should focus on the most plausible near-term risks rather than Hollywood-style conjecture. For example, how much can we rely on AI for healthcare, financial, and national-defense decisions when the underlying software can fail? In this regard, there needs to be greater investment in sensible prevention mechanisms that can effectively offset the risks of software vulnerabilities, cyberattacks, and the spread of malware.
The risks of AI in military systems are self-evident, but even in commercial use it can have many unforeseen negative effects. Various government regulatory agencies test product safety, including the Environmental Protection Agency, the Food and Drug Administration, the National Highway Traffic Safety Administration, the Bureau of Alcohol, Tobacco, Firearms and Explosives, and many others. But here is the point: if a product is designed specifically to pass regulatory testing, does passing the test and being deemed safe mean the product is actually safe?
Regulating the AI space can make these risks manageable: fleshing out testing protocols for AI algorithms, promoting cybersecurity and input-validation practices, and refining them across industries and personal devices. The government should draft regulatory provisions to this end, while taking care that the framework does not hinder innovation.
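To make the idea of input validation concrete, here is a minimal, purely illustrative sketch: a hypothetical validation layer placed in front of a model that recommends a dosage from patient vitals. All names, ranges, and the placeholder model call are invented for the example; the essay does not prescribe any particular implementation.

```python
# Illustrative sketch only: a hypothetical input-validation layer in front of
# a model-driven dosage recommender. Field names and ranges are assumptions
# made up for this example, not taken from any real system or regulation.
from dataclasses import dataclass


@dataclass
class PatientVitals:
    age_years: float
    weight_kg: float
    heart_rate_bpm: float


def validate_vitals(v: PatientVitals) -> None:
    """Raise ValueError for inputs outside plausible physical ranges,
    so malformed or adversarial data never reaches the model."""
    checks = [
        (0 < v.age_years <= 120, "age_years out of range"),
        (0 < v.weight_kg <= 400, "weight_kg out of range"),
        (20 <= v.heart_rate_bpm <= 250, "heart_rate_bpm out of range"),
    ]
    for ok, message in checks:
        if not ok:
            raise ValueError(message)


def recommend_dosage(v: PatientVitals) -> float:
    validate_vitals(v)           # reject bad input before any prediction
    return 0.1 * v.weight_kg     # placeholder standing in for a real model call


if __name__ == "__main__":
    vitals = PatientVitals(age_years=45, weight_kg=80, heart_rate_bpm=72)
    print(recommend_dosage(vitals))
```

The design point is simply that safety checks sit outside the model itself, so they can be audited and tested independently of whatever the algorithm learns.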