Patients Are Asking Chatbots About Their Cancer: Here’s How to Use Them Correctly

A practical guide to using AI tools without falling into misinformation or anxiety spirals.


More and more cancer patients are turning to AI to interpret their lab results, scans, and symptoms before they’ve had a chance to speak with a doctor or patient advocate. That shift is happening at scale: more than 40 million people now use ChatGPT daily for health-related questions. In moments of fear and uncertainty, the inclination is understandable. When a test result appears in your patient portal late at night, AI can feel like a fast, private, and reassuring way to fill the void while you wait for answers from your care team.

But that same instinct can also lead patients to inaccurate treatment information, misplaced confidence, and unnecessary anxiety.

That tension between rapid, easily accessible, online information and in-person engagement with health professionals was a core component of the discussion at BeOne Medicines’ recent Patient Advocacy AI Innovation Lab. The event was a first-of-its-kind convening that brought together patients, advocacy groups (nonprofit organizations that support patients and families as they navigate diagnosis, treatment decisions, and access to care), clinicians, researchers, and AI experts at the American Society of Hematology's (ASH) annual meeting in December. The insights shared reflected a reality already unfolding across cancer care.

Why cancer patients turn to AI in the first place 

People don't turn to chatbots because they distrust their doctors. They do so because today’s care experience leaves gaps: Test results appear before they’re explained, appointments can be short, and anxiety pops up outside office hours. At the same time, for many patients, AI has simply become the modern version of a web search — faster, more conversational, and always available — whenever questions arise. AI feels like something, anything, when the alternative is waiting alone in silence.

That reality is what prompted global oncology leader BeOne Medicines to bring advocacy leaders, technologists, and experts together to explore how patients can use AI tools to support their care.

“Patients are already turning to AI to understand their diagnosis, lab results, and treatment options,” said John V. Oyler, Co-Founder, Chairman and CEO, BeOne Medicines. “That gives all of us across the cancer community — including industry, advocates, and clinicians — the opportunity and responsibility to help shape and harness these tools so they provide accurate information and help patients navigate their care more confidently.”

When AI becomes the first stop, but may not provide the full picture

At the AI Innovation Lab, one story in particular captured the risks and the stakes of turning to AI before human guidance is in place. Meghan Gutierrez, CEO, Lymphoma Research Foundation, recalled a phone call she received last fall that deeply unsettled her. A lymphoma patient, overwhelmed and terrified about what might come next, had reached out after his cancer relapsed.

“He explained a bit about his prognosis and the reason for his trepidation and anxiety,” Gutierrez said. But as the conversation continued, she realized something critical was missing.

“It came out in the course of the conversation that he hadn't yet actually discussed the findings of his recent PET scan and blood work with his healthcare provider,” she said. Instead, the patient had uploaded his test results into an AI chatbot and was reacting to its response.

The information wasn’t just confusing; it was incorrect. “Some of the information he received was antiquated…almost a decade old,” Gutierrez said.

That moment was a wake-up call for Gutierrez — not because the patient had used AI, but because he had relied on it as a final answer, without the clinical context or confirmation that only a healthcare provider can offer.

“I realized in that moment that patients were already using ChatGPT and other AI tools to draw conclusions before speaking with their care team,” Gutierrez said. “That means we have to help shape how those tools are used, so the information is accurate, current, and supports conversations with clinicians rather than replacing them.”

How to use AI responsibly, without making your anxiety surge 

If you or someone you love is navigating cancer care, here’s how experts and advocates recommend approaching AI tools more safely:

  • Use AI to understand concepts, not to interpret your personal results. AI can explain what a PET scan is or what a medical term means. But it shouldn't be used to diagnose disease, predict outcomes, or interpret your lab values.
  • Pressure-test AI answers before believing them. Ask the chatbot whether its information is current and sourced, request citations, and review those sources yourself. And know that overly confident or alarmist answers can be a red flag.
  • Bring AI-generated questions and output to your doctor visit. AI can help clarify what you want to ask about, but your care team should always provide the final interpretation.
  • Know when to call an advocacy organization. Patient advocacy groups can help validate, contextualize, and translate information, including the latest treatment information, helping patients avoid dangerous rabbit holes.

As Gutierrez emphasized, advocacy organizations have long played a critical role in helping patients interpret new digital tools and integrate them safely into care. 

“Advocacy organizations like ours have traditionally actually been early adopters,” she said, pointing to past technological shifts in the industry, such as the creation of COVID informational websites, social media accounts, and virtual education during the pandemic. She sees AI as the next inflection point, one that calls for co-creating tools grounded in accurate, up-to-date information and designed to earn patients’ trust. 

A shift toward patient-centered cancer care 

Another theme that emerged at the AI Innovation Lab was the rise of the “patient-as-CEO” mindset: patients increasingly arrive at their medical appointments informed and engaged, ready to participate in decisions about their care.

Clinicians remain essential experts, but their role is evolving toward partnership and coaching — especially as patients bring AI-guided questions to the exam room. Advocacy organizations can help bridge these conversations, grounding digital insights in real-world experience and human judgment.

Looking at what’s next  

Medical experts are envisioning a future in which advocacy organizations work alongside healthcare and industry partners to develop AI-enabled systems that help match patients to clinical trials, treatment paths, financial resources, and personalized education.

Across every vision, one principle is clear: AI should expand human support, not replace it.

By gathering advocacy leaders, patients, clinicians, and AI experts, BeOne is helping shape a future in which AI tools are more trustworthy and designed around real patient needs. Those insights are guiding the development of new digital tools that help patients navigate their care with clarity, confidence, and consistent human support. 

Gutierrez reminded the group of one familiar adage during the AI Innovation Lab: “If you want to go fast, you go alone. But if you want to go far, you go together.” In the age of AI, that collaboration across the entire cancer community will matter more than ever.


BeOne Medicines is building the world’s leading oncology company — driven by scientific excellence and exceptional speed — to reach more patients than ever before. Together, we’re how the world stops cancer.
