We Need Your Documentation
A velociraptor, a Terminator, and TheDude walk into a hospital. The Terminator wants to optimize everything. TheDude wants to make sure nobody gets hurt. The velociraptor wants to know: have we learned from 3.8 billion years of evolution, or are we about to repeat the same mistakes with artificial intelligence?
This archive exists to document what actually happens when medical AI meets clinical reality. Not theoretical concerns. Not academic abstractions. Real failures with real consequences.
If you've witnessed medical AI gone wrong, we want to hear about it.
What We're Looking For:
- Documented failures: AI outputs that were clinically wrong, dangerous, or inappropriate
- Viral celebrations: Cases where incorrect AI outputs were praised without clinical scrutiny
- Systematic issues: Patterns of failure that reveal architectural problems
- Accountability gaps: Situations where liability is unclear or improperly distributed
- Real-world impact: Cases where AI failures caused or could cause patient harm
Confidentiality: We can anonymize cases to protect patient privacy and institutional identity. All submissions are reviewed by practicing physicians with clinical expertise in the relevant specialty.
Verification: Each submission is clinically verified before publication. We're not here to create clickbait; we're documenting reality to drive accountability and better practices.