When Autonomy Is Not Autonomous

Image Source: Unsplash

Autonomy, in psychological terms, means feeling inwardly free and having the ability to direct your own life. Simply put, autonomy is having a choice, a voice, and the ability to think and act. It fulfills an innate need to know that we are acting of our own volition, allowing us to accept the consequences and rewards of our choices. Autonomous systems, however, setting aside self-driving cars, mean something entirely different: alien, even, and certainly cautionary.

The difference between the two is night and day. A significant ethical and philosophical tension is inherent in the fluidity of the word "autonomy," particularly in the context of weaponized autonomous AI.

Autonomy as personal agency implies the capacity for self-governance and independent decision-making, free from external control or influence. When discussing autonomous weapons or AI systems designed for military applications, however, autonomy is defined in a different language.

In the context of autonomous weapons or AI-driven military systems, autonomy refers to the ability of a system to function and implement decisions without direct human intervention. While this may look like a form of self-governance, it is crucial to recognize that human designers and programmers predetermine the decision-making processes of these systems. As such, the autonomy exhibited by these AI systems is different from human autonomy: it is a predetermined set of automatic responses to predefined situations, carrying its authors' human foibles and biases, if only subtly present. We hope.

Autonomy is a credo of personal agency. It recognizes that individuals are not merely passive recipients of external influences but active agents who can exert influence and control over their circumstances.

Creating autonomous algorithms to replace personal agency raises ethical concerns about delegating lethal decision-making to devices and applications, with the potent consequence of allowing AI systems to operate without meaningful human control. There are legitimate fears about accountability, the potential for unintended harm, and the erosion of human oversight and responsibility in conflict situations.

Furthermore, the use of autonomous AI systems in warfare blurs the lines of responsibility and introduces complexities regarding compliance with international humanitarian law and ethical principles. Questions arise about who should be held accountable for the actions of autonomous weapons and how to ensure that these systems adhere to ethical and legal standards during conflict. Nor are we alone in the race for superiority in the AI world: there are actors, players, chiefs, nerds, bankers, and knee-cappers, each playing by their own rules of warfare etiquette.

The deployment of autonomous AI systems for military purposes has no precedent. The very idea is a neural neutron bomb. In the context of actual warfare, what governments claim to be true or untrue is extremely opaque. Our nation's view of defense has long rested on greater might as a deterrent. As it was said in the film Dr. Strangelove (Stanley Kubrick, 1964), "Deterrence is the art of producing in the mind of the enemy... the fear to attack." Today that "fear" is mere propaganda.

Let us not forget that our government shamed us into mandated COVID behaviors that were not founded in science, vaporizing our autonomy in the process. Our autonomy is supported by the Bill of Rights, even though the elites would have us believe otherwise.


