The Future of AI: Is There a Place for Ambiguity?

As technologists and artists in new media (that strange place where words, imagery, meaning and new tech collide), we design, build, and test software experiences and products that center the deep creative impulse of humanity, because we are interested in the power of that impulse.

Der Mensch als Industriepalast, by Dr. Fritz Kahn, 1926.

We’ve observed throughout our work that much computing and AI software, and many tools, have underestimated, and even excluded, this human impulse from their design.

The particular joy of uncovering patterns and logic embedded in various functions of the universe, and building things that make use of them for people to enjoy, is unparalleled. It’s no surprise that as computing devices became more accessible and explainable, people fell in love with them. But today, many are becoming uncomfortable with their ubiquity: they feel less understood and more constrained by computing and AI, and they feel they must fight or compete with the software and products sold to them.

Why do people feel they need to fight against AI?

Computing and AI design has been commandeered by business needs and goals, which seek to predict a person’s actions — will they buy x product or y product? Through a business lens, the ultimate goal is to crack and tame the unpredictable nature of people, laying product after product in front of them, so enticed, they scoop them up one after the other, like Hansel and Gretel did the witch’s candies.

People don’t appreciate being hoodwinked, but they do appreciate ease, and companies have become excellent at hiding the former behind the latter. Alexa can compile shopping lists, answer questions, respond to commands, and learn from a user’s inputs, which makes life ‘easier’. But Amazon will also use that learning to sell data, a composite of a user’s movements, purchases, desires, and beliefs, on to other companies, and (frighteningly) to governments.

The logic of sensation: an architecture and philosophy colouring book by Kim Bridgland

No one wants to be reduced to a dollar sign, or made a tool merely for someone else’s profit. We need agency and freedom, and yet computing and artificial intelligence design has been laser-focused on creating products and software that meet business needs and goals, leading us down the garden path to do it.

Why are business needs and goals the focus?

Computing and AI have historically been designed for precision. Early computers were incubated in research departments funded by government militaries, and those designs later found applications in space exploration, medical research, and agriculture: areas not explicitly funded by military departments, but ones that relied on military designs, which heavily favored precision, as a starting point.

For example, during WWII, ground soldiers routinely missed aircraft targets, so military engineers integrated automated fire-control computation into newly issued weapons to increase the soldiers’ targeting precision. Suddenly they began to accurately hit aircraft targets. One wonders if they knew the real reason why.

Some may call this ability to increase precision valuable, but after WWII, when software-assisted decision-making, originally designed for war, entered the realm of cultural production, the measure of its value became questionable. If we extrapolate from this history and survey where computing and AI have taken us, it’s clear that we’ve relied heavily on the design element that produces precision — the loop.

Still from The Trial, a film by Orson Welles, adapted from the book of the same name by Franz Kafka

What is a loop?

In computing, a loop is a control structure that repeatedly executes a statement or block of statements within specified boundaries. For example, a person clicks on news links, views ads, or explores products on Instagram, and their choices are then fed back into the same loop. This process is fundamentally adversarial, because there is no reciprocity in the relationship. Instead, the person is being ‘processed’ through the logic of the loop, which checks their choices against predetermined boundaries: ‘did the user choose step a or step b? If they chose step a, process them to step c.’

As the loop refines and pares down choices for the user, they are served invisibly narrow options and then spun into new loops, where the process begins again. Combined with an interactive user interface, there is an illusion that the outcome was determined by the user’s clicks. But contrary to popular assumptions, much of the outcome has been designed, coded, and logically connected long before the clicks happened.
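The narrowing dynamic described above can be sketched in a few lines of Python. This is a toy illustration, not any platform’s actual algorithm; all names, the catalog, and the ranking rule are hypothetical:

```python
# Toy sketch of a recommendation feedback loop (all names hypothetical).
def recommend(profile, catalog, k=3):
    # Rank items by how often the user has already engaged with each topic:
    # every past click narrows what the next round will surface.
    return sorted(catalog, key=lambda item: -profile.get(item["topic"], 0))[:k]

def feedback_loop(clicks, catalog, rounds=3):
    profile = {}
    for topic in clicks:                       # seed the profile from past behavior
        profile[topic] = profile.get(topic, 0) + 1
    served = []
    for _ in range(rounds):
        options = recommend(profile, catalog)  # boundaries set before any click
        choice = options[0]                    # stand-in for the user's next click
        profile[choice["topic"]] = profile.get(choice["topic"], 0) + 1
        served.append(choice["topic"])
    return served

catalog = [{"topic": "news"}, {"topic": "ads"}, {"topic": "products"}]
print(feedback_loop(["news"], catalog))  # a single seed click dominates every round
```

Even in this crude sketch, one early click steers every subsequent round: the ‘choice’ the loop offers has already been shaped by the boundaries coded into it.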

The World, the Matrix, the Architect

Looping maximizes profits for companies but weakens interpersonal relationships

When a user’s behavior is ‘abnormal’, the system’s feedback in the loop drags the user back toward average or ‘normal’ behavior. This experience reinforces combativeness, isolation, and fatigue, and trains people to transfer those feelings, and the behaviors that follow, to their interpersonal relationships and communities offline.
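This pull-back-to-average dynamic is classic negative feedback. A toy numeric sketch (the damping rate and numbers here are invented purely for illustration) shows how quickly an outlier gets smoothed away:

```python
# Hypothetical sketch: a system nudging 'abnormal' behavior back to the mean.
def nudge_toward_norm(signal, population_mean, rate=0.5):
    # Each feedback round moves the user's signal a fraction of the way
    # back toward the population mean, damping whatever made them an outlier.
    return signal + rate * (population_mean - signal)

signal = 10.0  # an outlier, relative to a population mean of 2.0
for _ in range(4):
    signal = nudge_toward_norm(signal, 2.0)
print(round(signal, 2))  # after four rounds: 2.5, nearly indistinguishable from the norm
```

Whatever made the behavior distinctive is treated as error to be corrected, not as signal to be understood.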

Online, people are sorted unknowingly into certain categories while specific feelings, like anger or excitement, are amplified. We’ve all seen what comes next.

Monopoly Board Game by Doc Braham

It doesn’t have to be this way. People are asking for better, but they (and we) face a challenge — computing and AI design principles are opaque to most people, and yet our ability to expand into a new design paradigm depends on their knowledge and active participation in the design process.

In today’s technology classrooms, teachers teach kids to code by having them construct their own buildings in Minecraft. But that teaching takes place within a closed system whose predetermined goal is creating digital objects, with limited room for the child to ask why to create, and for what purposes.

Do these gestures and actions on the part of the child truly produce agency? Perhaps not.

The future of computing and AI

For the last decade, “Don’t make me think”, the title of Steve Krug’s usability classic, has served as the guiding principle for user experience designers.

Narratives of “work harder, try harder, assimilate better” essentially contribute to the illusion of individual agency. What kind of principles would reorient responsibility from the individual to the system? What kind of systems do we have to support individuals’ desire for agency? Do our systems make for activated individuals, or controlled individuals?

Adam Simpson for “Surveillance State” a NYT review of The Panopticon by Jenni Fagan

We want to design a new set of pedagogical and educational principles, steeped in the value of ambiguity and the creative impulse. Perhaps then we can transform the nature of computing and AI, expanding away from looping and into the unpredictable, indeterminate nature of human exploration and creativity.

What if, instead, the focus was on a reciprocal response rather than a negative feedback loop? This is the future of computing and AI: reflecting and working with the beauty and complexity of our consciousness, which is fundamentally rooted in human agency.

Zhenzhen Qi, founder of zzyw, is an educator, researcher, mathematician, and technologist based in Brooklyn, New York. She is a candidate for Doctor of Education (EdD) at Teachers College, Columbia University. She is also a technology resident artist at the New York-based art institution Pioneer Works and a member of NEW INC.

Yang Wang, founder of zzyw, is a computational media artist, graphic designer, and software developer based in Brooklyn, New York. He works as a Creative Technologist at the architecture firm Rockwell Group. He is also a technology resident artist at the New York-based art institution Pioneer Works and a member of NEW INC.
