02  ·  Product Design

Verum AI

Building an AI-powered platform to identify bias and misinformation in political speeches and putting it in the hands of everyday voters, not just journalists.

2nd Place — All Inclusive Hacks (663 participants)
Timeline Aug.–Oct. 2022
Team Dereck Villagrana +3
Role UX Designer · Front-end Design
Tools Figma, React, Python

Verum AI is a platform designed to empower informed democracy by leveraging advanced AI to cut through bias, inconsistency, and misinformation in political messages. Built during All Inclusive Hacks, a virtual student-led hackathon focused on web accessibility, bias detection, and inclusion, Verum brought together designers and developers from UC Davis and UC San Diego across a six-week sprint. The result: a chatbot that analyzes political speeches and text for bias and misinformation using GPT-3.5 Turbo, earning 2nd place out of 663 participants worldwide.

Most Americans can't reliably tell political fact from fiction, and the tools that exist weren't built for them.

8% of Americans correctly identified all false political claims in a national survey
41% believed at least one false political claim outright
30% of Americans trusted mass media by 2022
50% of U.S. adults get political news from social media

Existing AI tools weren't closing that gap. Systems like ClaimBuster, Newtral ClaimHunter, and Full Fact AI were powerful. Full Fact's pipeline processed around 500,000 claims per day. But they were built for journalists and professional fact-checkers, not ordinary voters. They surfaced "check-worthy" claims without providing user-friendly explanations, holistic bias breakdowns, or any interactive interface a voter could just open and use.

"Voters need a tool that translates AI's ability to detect bias and misinformation into an accessible, conversational experience, one that doesn't require a journalism background to use."

Verum AI problem statement
Competitive landscape: existing fact-checking tools

Competitive analysis: Full Fact, ClaimBuster, Newtral ClaimHunter and where they fall short for everyday voters

Empower voters with a steadfast tool that cuts through bias, inconsistency, and misinformation.

By leveraging advanced AI, guide users toward confident, well-informed voting decisions, promoting a more transparent and engaged democratic process. Where existing tools routed content to human fact-checkers behind the scenes, Verum puts the analysis directly in the voter's hands, in plain language, on demand.

A chatbot that lets anyone analyze a political speech in plain language.

Verum offers a chatbot that helps individuals examine political speeches and texts. Users can:

Paste a link to a transcript or article and let Verum extract and analyze the text automatically.

Select from a library of pre-loaded transcripts: speeches, debates, press briefings.

Input their own text directly and get an immediate analysis of bias and potential misinformation.

Each analysis returns a bias percentage, a count of flagged statements, annotated document highlights, and a plain-language explanation of each flag.

Verum AI: main interface

Verum AI: chatbot interface with bias analysis and annotated output

Verum AI: bias breakdown screen

Bias breakdown: percentage and count of flagged statements with plain-language explanations

Making variable AI output feel consistent and credible.

Working with GPT-3.5 Turbo meant the output could vary from run to run. Designing around that variability was one of the more interesting problems I worked on as the UX lead: percentages and annotated statements had to read as consistent and credible regardless of what the model returned.
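One way to back that design decision up in code is a normalization layer between the model and the UI, so every screen receives the same fields in the same ranges no matter how the response comes back. This is a minimal sketch, not the project's actual implementation; the field names (`bias_percentage`, `flagged_statements`) are assumptions.

```python
# Hypothetical sketch: coerce variable model output into a fixed shape
# so the UI always has the same fields to render.

def normalize_analysis(raw: dict) -> dict:
    """Return a dict with a guaranteed schema, whatever the model sent back."""
    flags = raw.get("flagged_statements") or []
    # Clamp the bias score to 0-100 and fall back to 0 if missing or garbled.
    try:
        bias = float(raw.get("bias_percentage", 0))
    except (TypeError, ValueError):
        bias = 0.0
    bias = max(0.0, min(100.0, bias))
    return {
        "bias_percentage": round(bias, 1),
        "flag_count": len(flags),
        "flags": [
            {
                "text": str(f.get("text", "")),
                "reason": str(f.get("reason", "No explanation returned.")),
            }
            for f in flags
            if isinstance(f, dict)
        ],
    }

# Even an out-of-range score and a partial flag list normalize cleanly.
result = normalize_analysis(
    {"bias_percentage": "137",
     "flagged_statements": [{"text": "X", "reason": "loaded language"}]}
)
```

The point of the clamp and the defaults is that the bias breakdown screen never has to handle a missing or out-of-range value itself.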

Text and web content analysis

Enabling analysis of both plain text and live web content required integrating an external API for text extraction from websites, with AWS Lambda orchestrating the pipeline.
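The pipeline described above can be sketched as a Lambda handler that either accepts pasted text directly or routes a URL through the extractor API first. This is an illustrative sketch under stated assumptions: the endpoint URL is a placeholder, and the `fetch` parameter is a hypothetical injection point added here so the handler can run without a network.

```python
import json
from urllib import request, parse

# Placeholder endpoint: the real extractor API URL is not part of this sketch.
EXTRACTOR_URL = "https://example-extractor-api/extract"

def _http_get(url: str) -> str:
    """Plain HTTP GET, used only when no stub is injected."""
    with request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")

def extract_text(url: str, fetch=None) -> str:
    """Call the external extractor API to pull article text from a URL."""
    fetch = fetch or _http_get
    payload = fetch(EXTRACTOR_URL + "?" + parse.urlencode({"url": url}))
    return json.loads(payload).get("text", "")

def lambda_handler(event, context, fetch=None):
    """AWS Lambda entry point: accept raw text, or extract it from a link."""
    body = json.loads(event.get("body", "{}"))
    text = body.get("text") or extract_text(body.get("url", ""), fetch=fetch)
    # Downstream, `text` would be handed to the GPT-3.5 analysis step.
    return {"statusCode": 200, "body": json.dumps({"chars": len(text)})}

# Pasted text skips extraction entirely.
out = lambda_handler({"body": json.dumps({"text": "hello world"})}, None)
```

Routing both input modes through one handler keeps the front-end contract simple: the chatbot always posts one JSON body, whichever way the user supplied the speech.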

GPT-3.5 Turbo integration

Managing prompt input meant dynamically summarizing text to fit within prompt limits while preserving the full context and intent of the original document. Getting the model to return structured, readable bias analytics rather than a wall of text took significant prompt engineering.
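The fit-to-budget step might look something like the following sketch. It uses a rough 4-characters-per-token estimate and keeps evenly spaced excerpts when a document is too long, as a stand-in for the dynamic summarization the team describes; the function name and the 5-slice choice are assumptions, not the project's actual code.

```python
def fit_to_budget(text: str, max_tokens: int = 3000) -> str:
    """Trim a document to a rough token budget (~4 chars per token).

    If the document fits, send it whole; otherwise keep evenly spaced
    slices so the beginning, middle, and end all survive, preserving
    overall context even when detail is lost.
    """
    budget_chars = max_tokens * 4
    if len(text) <= budget_chars:
        return text
    n_slices = 5
    slice_len = budget_chars // n_slices
    step = len(text) // n_slices
    parts = [text[i * step : i * step + slice_len] for i in range(n_slices)]
    # " [...] " marks where material was dropped, so the model (and any
    # human reading the prompt) can see the document was condensed.
    return " [...] ".join(parts)
```

A real pipeline would likely summarize each dropped region with the model itself rather than cut it, but the budgeting logic is the same either way.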

Async optimization

We tackled API call optimization and asynchronous process handling throughout, landing on a system robust enough to analyze both text and web content at scale.
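A minimal sketch of that async pattern, assuming a chunked document and a per-chunk analysis call: fan the chunks out concurrently, with a semaphore capping in-flight requests to stay under the API's rate limit. The `call` parameter is a hypothetical stand-in for the real OpenAI request, injected here so the sketch runs without a network.

```python
import asyncio

async def _default_call(chunk: str) -> dict:
    """Placeholder for a real async API round-trip."""
    await asyncio.sleep(0)
    return {"chunk": chunk, "flags": []}

async def analyze_all(chunks, limit: int = 5, call=None):
    """Analyze all chunks concurrently, at most `limit` at a time."""
    call = call or _default_call
    sem = asyncio.Semaphore(limit)

    async def bounded(chunk):
        async with sem:  # cap concurrent in-flight API calls
            return await call(chunk)

    # gather preserves input order, so results line up with chunks.
    return await asyncio.gather(*(bounded(c) for c in chunks))

results = asyncio.run(analyze_all(["part one", "part two", "part three"]))
```

Because `gather` preserves order, the annotated output can be stitched back together in document order regardless of which API call finished first.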

AWS Lambda + API Gateway · GPT-3.5 Turbo / OpenAI API · Extractor API / X-RapidAPI · Next.js / React / JavaScript · Python · Vercel

All Inclusive Hacks: virtual student-led competition. Focus: bias detection, accessibility, inclusion through computational linguistics. Open to all ages, 663 participants globally, $905,957 in prizes.

How the product performed under real conditions.

88% primary flow task completion rate: 7 of 8 users analyzed a speech without guidance
6 / 6 politically engaged testers said the bias breakdown changed how they interpreted at least one claim
2nd / 663 place at All Inclusive Hacks, judged on technical execution, accessibility, and social impact
12 / 12 pre-labeled speeches correctly flagged for bias by the GPT-3.5 pipeline: 0 false negatives

How do you make a politically sensitive AI tool feel trustworthy, not just functional?

Building Verum in six weeks forced me to confront a design challenge I hadn't fully anticipated. The statistics we found during research made the need undeniable: only 8% of Americans could correctly identify all false political claims. But knowing a problem is real and knowing how to design for it are different things.

People don't distrust political content because they lack access to information. They distrust it because they can't tell what's reliable. The interface had to feel credible without feeling preachy. The bias breakdown had to be specific enough to be useful without overwhelming someone who just wanted to understand a speech they'd heard.

The cross-school collaboration was equally formative. Designing for a back-end I didn't build taught me how much good handoff matters. I learned to ask better questions earlier: What does the API actually return? What breaks the layout? What's the latency we're designing around?

If I had more time, I'd focus on the experience around uncertainty. When the model flags a statement as biased, a voter's natural next question is why. Letting users push back, ask follow-ups, or explore a claim from multiple angles would move the product from a bias detector to something closer to a thinking partner. That's the version I'd want to exist before the next election.