Who Controls the Narrative in AI? Insights from Campbell Brown

TL;DR
- Campbell Brown, ex-Meta news chief and CNN anchor, launches Forum AI to fact-check high-stakes AI outputs on politics, geopolitics, and mental health using top experts.
- She argues AI's real problem isn't political bias but lack of transparency—users trust AI summaries without seeing sources or differing viewpoints.
- Forum AI trains "AI judges" to match 90% consensus with human experts, pushing for visible sourcing to rebuild trust in AI-driven information.
The AI Trust Gap: Silicon Valley vs. Everyday Users
In the echo chambers of Silicon Valley, AI is hailed as a world-changing force—curing cancer, revolutionizing work, and unlocking infinite knowledge. But for the average user querying ChatGPT about the economy or mental health advice, the reality is often "slop and wrong answers," as Campbell Brown bluntly puts it. This stark contrast underscores a growing divide: tech leaders promise utopia while consumers grapple with opaque, potentially misleading information flows. Brown, a veteran journalist turned tech executive, is stepping in with her startup Forum AI to bridge that gap and demand accountability.
From Newsroom to AI Frontier: Brown's Journey
Campbell Brown's career reads like a playbook for navigating media's trust crises. She anchored primetime at CNN, co-hosted NBC's Weekend Today, and later became Facebook's (now Meta's) first dedicated news chief, wrestling with bias and misinformation at scale. Her mantra? "No Bias, No Bull." Now, as CEO of Forum AI, she's applying those lessons to generative AI, which she sees as the next battleground for information integrity.
Last fall, Forum AI raised $3 million in seed funding led by Lerer Hippeau, with backing from Perplexity AI's venture fund. The company's mission: evaluate major AI models like ChatGPT on "high-stakes topics" where nuance matters—politics, international conflicts, geopolitics, and even mental health. Brown warns that AI is already reshaping worldviews, but without human insight, it risks amplifying subtle biases or gaps.
Transparency Over Neutrality: The Core Fix
Brown dismisses the endless left-right bias debate as a distraction. "Political bias misses the deeper issue: transparency," she writes. AI answers sound authoritative, blending peer-reviewed studies with Reddit threads into seamless summaries. Yet the sources are often relegated to footnotes: in news searches, 69% of users consume the AI overview without ever clicking through to the originals. With news interactions on ChatGPT surging 212% from early 2024 to May 2025, that risk is only amplifying.
Her solution? Make sourcing central. Imagine AI responses highlighting not just links, but context: "A 2024 MIT study funded by the National Science Foundation says X... while a Wall Street economist, labor union researcher, and Fed official interpret it differently." This visibility acknowledges that everyone has perspective—pretending neutrality erodes trust, just as it did in traditional media.
Forum AI's Expert-Powered Approach
Forum AI recruits an all-star roster of experts to architect benchmarks and grade AI outputs: historian Niall Ferguson, CNN's Fareed Zakaria, former Secretary of State Tony Blinken, ex-House Speaker Kevin McCarthy, and cybersecurity leader Anne Neuberger. These humans evaluate tone, balance, and context—areas where data labeling alone fails.
The innovation? Training "AI judges" to scale this expertise, aiming for 90% consensus with human evaluators—a threshold Forum AI claims to hit. For breaking news or knowledge gaps, experts provide on-demand context, offering AI companies independent feedback to boost reliability.
Beyond Accuracy: Capturing the Full Debate
At events like Fortune's Brainstorm AI, Brown observes, teams obsess over factual accuracy but miss what she calls "perspective coverage." An AI can be factually correct yet ignore critical viewpoints, especially in finance or mental health, where the harm from incomplete advice is "distributed and invisible." Safety isn't just harm avoidance—it's ensuring AI reflects the full spectrum of debate.
She critiques the AI hype narrative as a product of concentrated capital and unchecked scale, echoing Karen Hao's analysis of how a few U.S. companies' incentives birthed today's systems with minimal guardrails.
Rebuilding Trust in an AI-Driven World
As her teenagers turn to AI for homework without questioning origins, Brown sees urgency. Forum AI isn't about eliminating bias—impossible—but making it visible to empower users. By embedding human expertise and demanding AI "show its work," she's positioning her startup as a guardian of the narrative. In a world where AI decides what you know, the question isn't just "Who's fact-checking AI?"—it's who controls the story. Brown bets on transparency to ensure it's us.