Like millions of people around the world, I’ve spent much of the past year having an AI existential crisis.
And I’m not new to technology. I’ve worked in digital media since the 1980s. My company created one of the first multimedia science journals and earned a U.S. Presidential Design Award. I thought I understood how transformational technologies arrive, accelerate, and eventually settle into everyday life.
Then AI entered all of our lives.
At first, it felt miraculous. I suddenly had a tireless assistant helping me interpret medical research, sharpen my writing, manage complex projects in rural India where I live part of the year, and even spark new ideas for my artwork. It was like having a brilliant intern who never slept. But somewhere along the way, my feeling about this “miracle” changed.
While caring for my wife during a serious medical journey, I began noticing how algorithmic systems quietly shaped which information surfaced first, which treatment options appeared, and which questions went unasked or unanswered. Often, no one explained those decisions. And it was never clear who was responsible for them.
That’s when AI stopped feeling like just another tool.
We are crossing into a world where systems increasingly make decisions for us. The upside is enormous. But the downside is already here: these systems shape what information we see and who gets access to opportunity, and they evolve faster than we can understand or govern them.
How do you analyze something that changes every day, especially when the media is full of breathless headlines about the latest AI breakthrough? Most of us barely have the time or attention to keep up, let alone decide how we want to live with this technology. Meanwhile, regulation and public understanding fall further behind.
That urgency forced me to step back and ask a simpler question:
What basic rights do people need as AI becomes invisible infrastructure in our lives?
The timing matters. As we approach the 250th anniversary of American democracy, we are once again facing a concentration of power that could either strengthen human freedom or quietly erode it.
This isn’t about robot-apocalypse fantasies. It’s about everyday fairness.
Right now, AI systems are trained on the work and data of millions of people who receive no compensation. Workers are displaced without meaningful transition support. Algorithms influence decisions about jobs, loans, healthcare, and housing, often without transparency or appeal.
For me, any serious framework for AI must address four things.
First, truth: Systems that shape what we see and hear must not distort reality without accountability.
Second, fairness: Creators should be paid when their work trains AI. Workers should be protected when automation changes jobs. AI’s benefits should not belong only to those already in power.
Third, transparency: People deserve to know when AI is involved in decisions about their lives, and to challenge those decisions when they are wrong.
Fourth, human safeguards: There must always be a human option for high-stakes decisions, especially in healthcare, justice, finance, and education. Children and vulnerable communities need extra protection.
Compare this to what exists today. Governments have issued principles. International bodies have published guidelines. Regulators are trying to catch up. All of that matters. But most frameworks still avoid the hardest questions about economic justice and accountability: who ultimately bears responsibility when AI causes harm?
This moment feels uncomfortably familiar. New power structures are forming faster than democratic oversight can keep pace. Corporate AI profits are exploding. Public trust is thinning.
But here’s what gives me hope.
We are not powerless.
The future of AI is not predetermined by code. It will be shaped by the rights we demand, the rules we insist on, and the values we refuse to surrender.
I explore these questions more fully in my new book, Before AI Decides, which offers practical ways to stay human inside systems that increasingly make decisions for us. But the core issue is simple. The question is not whether AI will transform our world. That is already happening. The question is whether we will guide that transformation or wake up one day to discover it was decided for us.
Now is the moment to draw some clear lines.
Before it’s too late.
Payson R. Stevens is a science communicator, author, and artist whose work has spanned technology, public communication, and human-centered design for more than five decades. He received a U.S. Presidential Design Award for pioneering digital science media.