My first loves were international relations, economics and politics. I grew up around cigar-smoking patriarchs who talked at length about the US, Russia, China and all the innovations, wars, conflicts and progress in between. I marveled at our global world, and I was fascinated by where influence originated and what it could do. I was frightened by the scale of the world’s superpowers and the myriad ways they affect the lives of innocent people everywhere. Nothing much has changed since those early days.
When it was time to choose a major in college, I already knew where I belonged. I speak four languages, I grew up traveling, often alone, to see relatives all over the world, and I came of age with a keen sense of personal financial responsibility. For better or worse, I was primed for a global world.
Since I joined the data science and AI world, I’ve been keenly aware of the incredible potential and power AI holds, not just for our political standing as a nation, but for our own personal growth and expansion. Technology is only as good as what you use it for, and I am inspired daily by the innovations I see others coming up with. I’ve joined a number of AI organizations since my journey began, including Advancing Trust in AI, but one of my favorite recent talks was with Colonel Dave Barnes, Chief AI Ethics Officer at the US Army Artificial Intelligence Task Force and Professor and Deputy Head of English and Philosophy at the USMA, who joined us to talk about our positioning in AI from a military perspective. Check it out here! Below is a summary of some of the key concepts that came up during our discussion.
How do we deploy AI in the Army?
Autonomous systems are not specific to the Army; AI is already ubiquitous. The military as a whole may be transformed by AI in many ways, but its reach is much more expansive than battlefield applications. Everything from HR, talent management and predictive maintenance to humanitarian assistance, disaster relief and administering vaccines offers a helpful frame for thinking about AI in broader contexts. We see a similar rise of data science and AI products in civilian life, and it’s not just products — AI is influencing how we get hired, how we get fired, what mortgage we qualify for, what ads we see, and so much more.
Let’s talk about Ethics & Trust in AI
One of the earliest points made was about what we mean when we say Trust in AI. Everyone brings their own worldview when creating and using AI systems, and our soldiers will need to trust this new technology. But what do we mean by trust? Do we mean the robustness of the algorithm within the system itself, or are we talking about the validation of the system?
We throw around the word trust when we’re talking about interpersonal relationships, for instance, in the context of whether or not we trust a particular person or group of people. But we also use it when we’re talking about functionality. Do we trust Waze over Google Maps? Despite being owned by the same company, Waze’s lazy directions make me miss my turn way too often, so I trust Google Maps. I know, there is already a list of disgruntled Waze fans ready to prove to me just how wrong I am. But they won’t change my mind.
When we talk about machines and tools, trust typically refers to reliability, but when it comes to AI, both views of trust matter significantly. Safety, trust, reliability, sustainability: these are all necessary for ethical and responsible AI, but none of them alone is sufficient.
Most principled people do not desire to be unethical in their work. And yet, in many of our organizations, we feel that someone else must be concerned with ethics. We relegate compliance and legality to some external ethics or compliance team, but really everyone has a role, and these frameworks need to be present at every stage of a product’s lifecycle, not just at the end, and certainly not only after a company or organization is sued, as is often the case. We must remember that if a system impacts humans or the environment in some way, we must constantly evaluate the values and ethics it’s built with and how it is used.
AI is not values-neutral. Sure, math is objective and the technology itself is neutral, but humans are making these systems and using these systems, and that neutrality ends the moment a human gets involved. Diversity matters because we bring our worldviews to the products we create. Optimization has downstream effects, and we must be keenly aware of what we are optimizing for. Facebook is optimized for screen time, but we see that this has adverse effects on our youth and on how we take in political and scientific information. Clickbait is great for its producers and for Facebook, but it’s bad for our mental state.
We might shout about ethics from the metaphorical rooftops to our heart’s content, but it’s not as simple as just “coding” ethics into a system. This is easier said than done. First, we don’t have an agreed universal normative theory of what constitutes “ethical AI.” And even if we could take such a theory and encode it into a system, the result might be operationally “ethical” without actually being an ethical system in the grand scheme of things. These definitions are more philosophical than anything else. Ethics vary drastically from one region or nation to another, and even within a country or region, ethics evolve over time. What we considered ethical in the US 100, 50 or 10 years ago may not be ethical now. How do we even begin to reconcile this geopolitically? These are difficult questions.
China, Russia and other adversaries
The book we’re reading this month for our book club is AI Superpowers, and of all the nations, China has the most ambitious global AI strategy right now. China has a mandate to “become the world’s premier artificial intelligence innovation center” by 2030 and to position itself as the global leader in AI. The emerging digital liberal democracy in the US, the digital hybrid regime in Russia, and the digital authoritarian regime in China are all key players in the creation of a global AI culture that’s still being birthed.
This raises the question: if the US is building AI ethically, will AI ethics place us at a disadvantage if our adversaries don’t share our morals and ethics around AI?
Do we have to play dirty to win?
According to Colonel Barnes, no. AI-enabled systems will be built in accordance with our nation’s values and the rule of law because of our own integrity. We are informed by the four basic principles of the Law of Armed Conflict and by our nation’s treaties and laws, which are grounded in just war principles.
“Our nation expects us to do the right thing”
-Col Dave Barnes, Professor, USMA & Deputy Head, English and Philosophy — Chief AI Ethics Officer at US Army Artificial Intelligence (AI) Task Force
AI will allow us to prevent more collateral damage and enable more timely decision making, so there is a lot to be hopeful for. If we can be better informed and have technology that helps us react more quickly, then we have a moral obligation to use this technology for good, especially in the military.
Will our adversaries take advantage of any limitations? Sure. There will be windows where they can employ technology that places us at a disadvantage, and I’m grateful that we are taking these risks seriously. This hasn’t been the case so far, but we will need to ensure that we lead AI research and development with our values intact. Moral injury also threatens our soldiers’ mental wellbeing, and I loved that the Colonel made a point about it. Inclusive liberal democracy and diversity are our strength; they will allow us to create leading-edge technology, and the US must maintain a competitive edge in AI development and research for geopolitical reasons.
Lethal Autonomous Weapon Systems
Lethal autonomy has its concerns. The EU notion that AI should be lawful, robust and ethical is equally applicable whether AI is deployed through weapons or used in talent management or predictive maintenance programs. Information processing systems, from narrow, rules-based early AI expert systems to the deep learning and generative systems we see today, hang on the notion of reliability. We rely on these systems to process information more quickly so that we can leverage their promise, and we need to understand how AI arrives at its conclusions. A learning system will not be the same system it was on the day you first deployed it. This is the nature of AI.
There is wide concern globally about lethal autonomy. One part of the problem is the lack of a common definition. The other part is the over-reliance on examples of AI gone bad from science fiction. The US and other countries have deployed autonomous systems since WWII, so this use isn’t necessarily a new thing. Killer robots are scary, but we’re nowhere close to general AI yet, and some argue we may never get there. And yet AI-enabled autonomous systems add another dimension to these fears and to the ethical and legal concerns, even when the tech works as advertised and as planned. What are the ethical and legal concerns when the tech breaks? Just look at the recent issues with Tesla’s Autopilot.
For the US, our policy is derived from DoD Directive 3000.09: “Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Part of the reason there hasn’t been legislation is the variety of views on what an autonomous system might entail. Another recent distinction is the difference between lethal autonomous systems and fully autonomous lethal systems.
The difficulty is not just defining the system. Even if there were a universal desire to ban lethal autonomous systems, we would have to define limits. Often these systems are part of larger systems and could include everything from target recognition and intelligence processing to selecting a weapon system, recommending a target, or all of the above. Unlike other weapons of mass destruction, AI is democratized and widespread, which adds to the complexity of legislating their use.
The triad of algorithms, data and computing power could point us in the right direction for defining how we identify and regulate the use of these weapons. The end of the AI winter, together with the arrival of big data, also unlocked a great number of potential products and use cases we previously didn’t have the data for.
Regardless of what industry we represent, we shouldn’t be deploying AI in the world if it’s not ready. The Secretary of Defense signed off on the DoD AI principles: Responsible, Equitable, Traceable, Reliable, Governable. Research organizations have found that some 36 major nation-states and large organizations globally have published their own sets of principles. The common themes are visible, but how do you operationalize them? Principles are guideposts, but how do we use them to influence processes? Can we examine legal, ethical and policy considerations within our agile sprints before it’s too late? Teams of experts will need to work across research lifecycles, and this will have to involve everyone working on every facet of the lifecycle. Risk management is already an established methodology in the Army for identifying potential risks, and it applies to ethics as well: translate principles down into practice and ensure that the people working on these systems at the DoD are working in accordance with them.
The good news is: everyone is dealing with this. Every country is still grappling with what AI is, how it can be used and how it can harm. Grassroots events like the ones we organize can do a better job of bringing people from varied backgrounds together. We can realize a brighter future with the use of AI, and we can minimize the risks and harms AI can cause if we keep having these difficult conversations. I’m relieved that people far more dedicated and smarter than I am are already committed to being part of the solution.
If this was interesting, feel free to check out another link regarding AI principles in the US military.
Also check out the following books if you want to keep reading: