January 30, 2026
ChatGPT fails to recognize most fake videos, even those made with other OpenAI tools
Artificial intelligence can now produce video that looks completely real, yet it apparently cannot reliably recognize such video when it sees it. New research shows that ChatGPT fails to detect fake video clips in the vast majority of cases, even when they were created with OpenAI's own tool.

TL;DR
- AI chatbots like ChatGPT, Grok, and Gemini struggle to detect AI-generated videos.
- NewsGuard's study tested 20 videos created with OpenAI's Sora tool, each depicting a false claim.
- ChatGPT failed to identify 92.5% of the fake videos, Grok 95%, and Gemini over 75%.
- Even videos with visible AI watermarks were often misidentified.
- Provenance metadata such as C2PA, meant to verify a file's origin, is easily lost when videos are re-encoded or re-uploaded.
- The inability of AI to reliably detect deepfakes shifts the burden of verification entirely onto users.
- This situation creates an environment ripe for the spread of misinformation as AI videos become indistinguishable from real ones.