Credibility Over Virality: The AI Vision Reshaping Online Discourse
A former space physicist creates a platform that checks your facts before you embarrass yourself—and assigns you a credibility score for everything you share.

Scientists have a low tolerance for statements like, “I don’t know what to believe anymore.” Dan Nottingham spent seven years as a staff scientist at Boston University’s Center for Space Physics after majoring in physics and astronomy. That scientific mindset doesn’t coexist well with misinformation.
When trust in mainstream media hits all-time lows and people dismiss expertise in favor of opinion-based beliefs, frustration builds. So Dan built AmICredible—his way of pushing back against waves of misinformation and mistrust. In this episode of Lead with AI, host Dr. Tamara Nall speaks with Dan about creating an AI platform designed to restore credibility to online discourse by checking statements before people share them, not after the damage has already spread.
The Cocktail Party Problem
You’re at a gathering. Politics comes up. Someone states something as fact. You’re pretty sure they’re wrong—but not confident enough to challenge them. Dan experienced exactly this scenario. Except he had AmICredible on his phone. After some back-and-forth with a friend about a political event, Dan entered both positions into the platform.
It turned out Dan was wrong. His friend was correct. Dan had misinterpreted what actually happened. He shared the results. Admitted his error. And the conversation transformed. Instead of arguing about who was right, they explored why Dan misunderstood the event. How the misinterpretation formed. Which sources led him astray. Growth replaced conflict.
What Good Research Actually Looks Like
Most people don’t know how to research properly. They find sources that confirm existing beliefs. Skip viewpoints that challenge conclusions. Mistake isolated facts for truth without examining context. AmICredible mimics what trained researchers actually do. First, it identifies credible sources relevant to the specific claim being checked. It deliberately reaches across multiple sources and perspectives, seeking diversity of viewpoint. Information is aggregated to reduce bias.
Then comes the AI layer.
Multiple large language models analyze the compiled information simultaneously. Why multiple? Because individual LLMs carry inherent biases. Using several models allows those biases to average out rather than dominate. The models look for consensus across perspectives, then generate a comprehensive analysis listing all sources and assigning a credibility score from 0–10. At a glance—before reading the analysis—users can see: Is what I’m researching credible? Then they can explore why or why not.
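The averaging idea above can be sketched in a few lines. This is a hypothetical illustration, not AmICredible's actual code: the model names, the simple mean, and the "agreement" measure are all assumptions used to show how combining several models' 0–10 scores keeps any one model's bias from dominating.

```python
from statistics import mean, stdev

def consensus_score(model_scores: dict[str, float]) -> dict:
    """Combine per-model credibility scores (0-10) so no single
    model's bias dominates. Names and formulas are illustrative."""
    scores = list(model_scores.values())
    avg = mean(scores)                       # biases average out
    spread = stdev(scores) if len(scores) > 1 else 0.0
    return {
        "score": round(avg, 1),              # headline 0-10 score
        "agreement": round(10 - spread, 1),  # higher = more consensus
    }

# Three hypothetical models rate the same claim:
result = consensus_score({"model_a": 7.5, "model_b": 8.0, "model_c": 6.5})
```

In a sketch like this, a wide spread between models would flag a claim as contested even when the average looks solid.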
Citing Fox News and MSNBC Together
Dan initially thought building AmICredible would be straightforward. Prompt a large language model correctly. Save people the trouble. No major technical hurdle. Reality hit quickly. Hammering the system with diverse information revealed serious problems. The AI hallucinated. Lost context. Behaved unpredictably. Nearly a year went into understanding how AI engines behave in this specific use case. Proprietary code was written to make them respond consistently and reliably.
The breakthrough came during testing.
After extensive tuning, analyses began coming back deep and detailed. The system cited sources properly. In one case, it cited Fox News and MSNBC in the same analysis. You could see it actively weeding out bias. Credibility scores made sense. And something unexpected happened: Dan and his team realized none of their own worldviews were baked into the platform. They were learning from it too. That first complete analysis—with accurate scoring and transparent sourcing—was the moment it became clear: this was the product.
The Misinformation Sweet Spot
AmICredible doesn’t just fact-check. It assesses credibility. The distinction matters. Dan offers an example. You walk outside. The horizon looks flat. The sky appears dome-shaped. Two facts almost everyone would agree on. Therefore, the Earth is flat. Both stated facts are correct. The conclusion seems logical—if those are the only facts considered. But it’s wrong because a vast body of additional evidence is ignored.
Misinformation often emerges from multiple true facts wrapped in a misleading conclusion. That’s why AmICredible evaluates both facts and context. Are the details accurate? Is the framing misleading? The credibility score reflects both dimensions. A ten means highly credible in both fact and context. A zero means nothing credible could be found. Middle scores indicate uncertainty. Above six leans credible. Below four leans questionable.
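The score bands described above map directly to a reading. A minimal sketch, using only the thresholds stated in the text (above six leans credible, below four leans questionable, middle scores uncertain):

```python
def credibility_label(score: float) -> str:
    """Map a 0-10 credibility score to the bands described above."""
    if score > 6:
        return "leans credible"
    if score < 4:
        return "leans questionable"
    return "uncertain"

# A 10 is highly credible in both fact and context; a 0 means
# nothing credible could be found.
print(credibility_label(7.3))  # leans credible
```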
The Personal Credibility Score You Didn’t Ask For
AmICredible offers something unusual: personal credibility tracking. Users can check statements as guests without creating accounts. But creating an account unlocks credibility scoring based on what users choose to share publicly. Private checks don’t affect scores. But statements shared through the platform after verification do. Share credible information consistently, and your score rises. Share questionable claims, and your score reflects that too. The Pro version adds deeper analysis and access to global leaderboards—allowing users to compare credibility scores by region or globally. Gamification meets information integrity.
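One simple way to model a personal score that rises with credible shares and falls with questionable ones is a running average over publicly shared items. This is purely an assumption for illustration, not the platform's actual formula, and private checks would simply never enter it:

```python
def update_user_score(current: float, n_shared: int, new_item_score: float) -> float:
    """Running average of credibility scores for items a user shares
    publicly. Private checks are excluded by never calling this.
    Illustrative only; not AmICredible's actual scoring method."""
    return (current * n_shared + new_item_score) / (n_shared + 1)

# A user averaging 8.0 over 4 shares posts a questionable claim (3.0):
print(update_user_score(8.0, 4, 3.0))  # 7.0 -- the score reflects it
```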
Dan’s vision extends beyond individuals. He imagines credibility built directly into social platforms. Right now, content spreads through engagement alone—regardless of accuracy. In the future, credibility could be weighted alongside engagement. High interaction and high credibility spread widely. High interaction with low credibility gets limited. Not censorship. Responsible editing. Traditional publishers have editorial standards. Social media turned everyone into global publishers—with no guidance. Credibility scoring provides that guidance without telling people what to think.
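Weighting credibility alongside engagement could look something like the sketch below. The formula and the `alpha` parameter are hypothetical; the point is the behavior the text describes: high interaction with high credibility spreads widely, high interaction with low credibility gets limited.

```python
def feed_weight(engagement: float, credibility: float, alpha: float = 0.5) -> float:
    """Hypothetical ranking weight blending engagement (0-1, normalized
    interaction rate) with a 0-10 credibility score. `alpha` controls
    how much credibility matters; 0 reproduces engagement-only ranking."""
    cred = credibility / 10  # normalize to 0-1
    # Multiplicative damping: low credibility limits reach rather
    # than removing the post -- editing, not censorship.
    return engagement * ((1 - alpha) + alpha * cred)

viral_credible = feed_weight(engagement=0.9, credibility=9.0)
viral_dubious = feed_weight(engagement=0.9, credibility=2.0)
```

Setting `alpha` to zero recovers today's engagement-only feeds, which makes it a natural knob for a platform adopting credibility gradually.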
The Bias Problem Nobody Solves Perfectly
Building unbiased AI is extraordinarily difficult. Who decides what counts as bias? Context makes truth slippery. That’s why Dan chose the word credible instead of true. Credibility can be assessed through methodology. Good journalism provides the blueprint: check multiple sources, examine diverse viewpoints, apply consistent standards, and report transparently. Rather than eliminate bias entirely, AmICredible aims to average out as many biases as possible. Don’t stack the deck. Don’t impose a worldview. Let sources speak. Let users decide. Transparency plays a key role. The platform shows its sources. Users can see exactly what informed the analysis and judge balance for themselves. Perfect neutrality may be impossible. But systematic credibility beats gut instinct every time.
What Keeps Scientists Up at Night
Asked what concerns him most about AI, Dan answers without hesitation: misuse. Every tool can be used for good or harm. AI is no different. People already associate it with misinformation—and they’re not wrong. But it can also fight misinformation. AmICredible demonstrates that possibility. The question isn’t whether AI will be misused—it will. The question is whether responsible applications can outpace harmful ones. Dan’s boldest prediction: most future software will be written by AI. Humans will describe intent. AI will generate the code. Programming becomes specifying outcomes rather than implementing logic. His most underrated trend ties directly to his work: small, specialized AI models. Large language models do everything adequately. Specialized models go deep instead of broad.
The Proactive Misinformation Fighter
Most fact-checking happens after information spreads. After damage is done. After beliefs harden. AmICredible aims to reverse that timeline. Instead of checking others, check yourself. Pause before posting. Verify before sharing. Take responsibility. If people routinely verified claims before sharing them, misinformation could stop before it spreads—sometimes before it’s even created. That requires behavioral change. Verification must be convenient. Credibility must matter socially.
Dan’s frustration with hearing “I don’t know what to believe anymore” drove AmICredible’s creation. The platform offers an answer: believe what multiple credible sources consistently support. Believe what holds up under scrutiny across diverse perspectives. Believe what survives systematic analysis. Not perfect. But better than believing whatever sounds right. That’s the physicist’s approach to information: test it, verify it, and revise beliefs when evidence demands it.
Hear how a space physicist became a misinformation fighter: Listen to Dan Nottingham's full conversation on Lead with AI. 👉 Try AmICredible here: AmICredible.ai. Check statements for credibility, test claims as a guest, or create an account to build your own credibility score based on what you verify and share.
#LeadwithAI #Misinformation #CredibilityScore #FactChecking #AIForGood #MediaLiteracy #TrustInMedia #InformationVerification #SocialMediaTruth #BiasDetection #AmICredible #ContentVerification #DigitalLiteracy #FightMisinformation #AIEthics #TruthMatters
Follow or Subscribe to Lead with AI Podcast on your favorite platforms:
Website: LeadwithAIPodcast.com | Apple Podcasts: Lead-with-AI | Spotify: Lead with AI | Podbean: Lead-with-AI-Podcast | YouTube: @LeadWithAiPodcast | Facebook: Lead With AI | Instagram: @leadwithaipodcast | TikTok: @leadwithaipodcast | Twitter (X): @LeadWithAi
Follow Dr. Tamara Nall:
LinkedIn: @TamaraNall | Website: TamaraNall.com | Email: Tamara@LeadwithAIPodcast.com
Follow Dan Nottingham: LinkedIn: @Daniel-Nottingham
AmICredible: Website: AmICredible.ai | LinkedIn: @Go-Nitro