There are moments in tech when the conversation shifts from “What’s next?” to “What does this mean for all of us?” We’re in one of those moments right now — and few people sit closer to the center of it than Sam Altman. In a new episode of her podcast Mostly Human, host Laurie Segall sat down with Altman for his first major interview since OpenAI’s Pentagon deal and a wave of scrutiny over the company’s direction. So we asked Segall about that conversation — what surprised her, where she pushed back, and how her long history interviewing Altman shaped this moment.
Here, Segall pulls back the curtain on an interview that spans everything from AI’s role in war to the deeply personal question of raising children in an AI-driven world. She describes Altman as both unwavering in his worldview and acutely aware of the stakes, outlining a future defined by rapid innovation — and real disruption. The result, she tells us, is less a typical tech interview and more a nuanced, sometimes uncomfortable exploration of power, responsibility, and what comes next. Here are her thoughts on talking with the tech titan.
KCM: You’ve interviewed Sam Altman for years: How did this conversation feel different from your past ones, especially given the stakes right now?
LS: When I first interviewed Sam, it was over 15 years ago. He had a mobile app called Loopt, and we were at the beginning of an era of tech accelerated by the iPhone and mobile progress. Sam’s name has always held weight in Silicon Valley as a respected founder who helped usher in companies like Airbnb, but now his name holds a different type of weight. OpenAI is one of the most powerful companies at the helm of one of the highest-stakes tech revolutions in human history. The questions we have to ask about the future are fundamental human questions, like: Will AI benefit all of us, or just some of us? The decisions made at OpenAI will impact every facet of our lives, from war to how we raise our children. It’s both an exciting time and a daunting time, and you can feel the weight of that in our conversation.
In this episode, you cover everything from AI and warfare to raising kids in an AI-driven world — what part of the conversation surprised or unsettled you the most?
Altman and I went in depth on what it will mean to parent in the age of AI. We both have children of similar ages, and I told him that whether or not he realizes it, he’s raising my child too — because the tech he's creating will impact every facet of my son’s life. We have an extensive exchange regarding friction and its role in childhood development and speak about whether AI creates too many shortcuts that could impede early growth. Altman is wildly optimistic about scientific breakthroughs — he laid out a scenario where automated AI researchers could compress a decade of scientific discovery into a single year, fundamentally reshaping society. But that innovation sits against a complex backdrop with fundamental human questions at stake.
On an unsettling note, over the last few years, I’ve done quite a bit of work calling attention to victims of sexually explicit deepfakes. In the interview, I push Altman on OpenAI’s stance to curb state legislation in the name of innovation. I’ve spent countless hours with victims whose only recourse was state law, because federal legislation has historically lagged. He says, “I think we disagree on this,” and we do. Having spent 15 years covering both tech founders and the people impacted by their innovations, I believe that some of the ideals of Silicon Valley don’t always play out in reality.
Altman admits he “miscalibrated” public mistrust around the Pentagon deal. Did you get the sense that he’s genuinely rethinking his approach, or standing firm in his worldview?
Altman was very clear in our interview — he stands firm on his decision. While he told me he wished he’d announced the deal differently and that he “miscalibrated” on the current mood of mistrust, he believes strongly that when it comes to the question of who should be more powerful — AI companies or governments — it’s important for governments to be more powerful.
He said this in our interview: "I don't think it works for our industry to say, 'Hey, this is the most powerful technology humanity has ever built. It is going to be the high order bit in geopolitics. It is going to be the greatest cyber weapon the world has ever built. It is going to be the determinant of future wars and protection. And we are not giving it to you.'”
Of course, the devil is in the details. We’re in the midst of quite a bit of societal unease. There's increased domestic scrutiny over ICE and the war in Iran. There are questions of lawfulness. But when pressed on his red lines, and whether he was concerned about the current administration’s use of tech, Altman dug in.
Acknowledging the current environment, he told me, “Maybe this is a uniquely bad time [and] a company’s patriotic duty is to not support the government. ... I totally disagree with that. Given what I see on the horizon.”
One of the most striking ideas in the episode is that a solo founder could build a billion-dollar company with AI. Do you see that as exciting, alarming, or both?
I see that as incredibly exciting, with an important caveat. Building a company can be difficult if you don’t have the resources or access. One of the most surprising moments of the interview was when Altman revealed he thinks the first one-person, billion-dollar AI company has already been created. Altman speaks quite a bit about his principles in the interview, specifically his vision to empower people to build and scale companies with AI. While a billion-dollar, single-founder company might feel like an outlier, what it represents is relevant to small business owners and people who want to dabble in entrepreneurship but don't necessarily have the skill sets or access that have traditionally been a limitation. The implications are what we should be paying attention to: AI will lead to a tremendous amount of job loss. Altman acknowledges that the near-term period will be disruptive and lays out a need for new economic models to support the tremendous change society will undergo.
Zooming out a bit — what’s your approach to interviewing someone like Altman, who's both incredibly powerful and deeply familiar? How do you balance access with accountability?
Having deep experience in Silicon Valley and tech — and having covered many of these figures, including Sam Altman — over nearly two decades offers valuable context and perspective that’s helpful in an interview like this. I’ve covered founders through highs and lows, having gotten my start very much in the trenches of the last tech revolution.
I’ve found that my history in this space allows me to go deeper with founders to help the audience better understand their thinking and these important topics. Understanding where someone came from, and the nuances of who they are or have been, is helpful when it comes to covering the future. Because of my background covering the business, I speak product, but I also speak people. My goal is to use Mostly Human to bridge those two worlds. In the conversation with Altman, you’ll understand much more about who he is and his principles, and you’ll hear quite a bit of back-and-forth on everything from the incentive models of AI companies — and whether they're conducive to building a healthier society — to what role AI should and shouldn’t play, and who stands to benefit.
Watch the full interview with Altman, and other Mostly Human interviews, right here.