Upload a pharmaceutical or medical PDF — US or international. The platform extracts efficacy and safety claims, scores each against evidence on four quality dimensions, reviews how claims are worded and presented, checks fair balance against the drug label, and delivers an interactive scorecard — so reviewers can focus on judgment, not research.
Reads your PDF and identifies efficacy claims, indications, dosage statements, and safety disclosures — prioritized by regulatory importance.
Searches seven authoritative sources — including international drug databases — and scores each claim on four dimensions: Population, Endpoint, Magnitude, and Context. The result is a graduated evidence quality assessment, not just a binary verdict.
Compares the document’s safety disclosures against the approved drug label and post-market adverse event data. Highlights potential gaps in safety coverage, understated warnings, and fair balance concerns.
Examines how claims are worded and structured. Flags language patterns that may overstate benefits or minimize risks — such as implied superiority, cherry-picked statistics, or relative claims without baselines — and highlights where audiences could draw incorrect conclusions from technically accurate information.
Upload a pharmaceutical or medical marketing PDF. The system extracts text and identifies the drug product automatically.
An AI model reads the document and identifies individual claims — efficacy, indications, dosage, mechanism, and safety statements — ranked by regulatory priority.
Each claim is searched against seven authoritative sources: DailyMed and Health Canada drug labels, Europe PMC and PubMed literature, ClinicalTrials.gov registrations, and FDA adverse event and approval data.
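As a rough illustration of what one of these lookups can look like, the sketch below builds a query URL against openFDA's public adverse-event (FAERS) endpoint. The endpoint path and field syntax are openFDA's documented public API; the function name and surrounding workflow are assumptions for illustration, not the platform's actual code.

```python
from urllib.parse import urlencode

# openFDA public API base; the drug/event.json endpoint serves FAERS
# adverse-event reports. Illustrative sketch only.
OPENFDA_BASE = "https://api.fda.gov"

def faers_query_url(brand_name: str, limit: int = 5) -> str:
    """Build an openFDA adverse-event query URL for a drug brand name."""
    params = urlencode({
        "search": f'patient.drug.openfda.brand_name:"{brand_name}"',
        "limit": limit,
    })
    return f"{OPENFDA_BASE}/drug/event.json?{params}"

url = faers_query_url("Lipitor")
```

Each of the seven sources has its own query interface, so a real pipeline would wrap one such builder per source and merge the results per claim.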
Each claim is scored on four evidence dimensions — Population, Endpoint, Magnitude, and Context — from 0 to 2 each, producing a composite quality score that replaces binary verdicts.
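The scoring scheme above can be sketched as a small data structure: four dimensions, each 0–2, summed into a composite out of 8. The class and field names here are illustrative assumptions, not the platform's internal schema.

```python
from dataclasses import dataclass

@dataclass
class ClaimScore:
    # Each dimension is scored 0 (unsupported), 1 (partially
    # supported), or 2 (fully supported by the evidence).
    population: int
    endpoint: int
    magnitude: int
    context: int

    def composite(self) -> int:
        """Composite evidence quality score, 0 to 8."""
        return self.population + self.endpoint + self.magnitude + self.context

score = ClaimScore(population=2, endpoint=2, magnitude=1, context=1)
print(score.composite())  # 6 out of a maximum of 8
```

Because each dimension is preserved in the breakdown, a reviewer can see that a claim scored well on Population and Endpoint but lost points on Magnitude and Context, rather than receiving a single pass/fail verdict.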
Each claim is checked for misleading language patterns and misinterpretation risks. The document’s structure and emphasis are analyzed for selective presentation. Findings are audience-aware: a healthcare professional and a consumer can read the same claim very differently.
The document’s safety disclosures are compared against the approved drug label and post-market adverse event data to identify missing or understated risk information.
Sources cited in the document are checked across multiple databases for retractions, errata, and publication age. Citations are reviewed for consistency with the claims they support.
An interactive scorecard with per-claim score breakdowns, evidence links, presentation findings, fair balance results, reference integrity flags, and safety screening. View in Context highlights every finding directly on the original PDF. Team accounts can tag, track, and navigate findings through a structured reviewer workflow.
The AI model is never the source of truth. Every fact in the report traces back to a published, verifiable source.
A provenance check compares quotes against their sources. Below-threshold matches are flagged, not hidden.
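A minimal sketch of such a check, assuming a simple string-similarity measure and a hypothetical cutoff (the platform's actual matching method and threshold are not disclosed):

```python
from difflib import SequenceMatcher

# Hypothetical cutoff for illustration only.
MATCH_THRESHOLD = 0.85

def provenance_check(quote: str, source_text: str) -> dict:
    """Compare a quoted passage against its cited source passage.

    Returns the similarity ratio and whether the match falls below
    the threshold — below-threshold matches are flagged, not hidden.
    """
    ratio = SequenceMatcher(None, quote.lower(), source_text.lower()).ratio()
    return {"similarity": round(ratio, 2), "flagged": ratio < MATCH_THRESHOLD}

result = provenance_check("reduced LDL by 39%", "reduced LDL by 39%")
# identical text scores 1.0 and is not flagged
```

The design choice worth noting is in the return value: a weak match is surfaced with its score so a human can judge it, rather than being silently dropped from the report.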
Each claim displays its four-dimension score breakdown, the evidence it relied on, and source quotes — so reviewers can verify and challenge any individual dimension.
Safety claims are screened against the drug label and post-market adverse event data. Fair balance gaps and missing risk disclosures are flagged prominently, independent of efficacy scoring.