Misinformation (7)

OpenAI Tags AI Art to Combat Misinformation

OpenAI will add watermarks based on the C2PA standard, including invisible metadata and a visible CR symbol, to images generated by ChatGPT and DALL-E 3 to help identify AI-generated content.

The Dark Side of Democratized AI

AI tools like ChatGPT are most commonly used for spamming, cheating, faking, and other mundane or unethical purposes rather than for truly democratizing intelligence. This reflects the baser side of the human condition, our intelligence notwithstanding. AI was supposed to lower the cost of intelligence but is instead lowering the cost of spam,…

Meta Expands AI Image Labels to Curb Election Misinfo

Meta will expand its efforts to identify and label AI-generated images on its platforms ahead of upcoming elections worldwide, seeking to curb misinformation and deception. Meta will label AI-generated images from major AI companies such as Google, OpenAI, and Microsoft, in addition to content created by its own AI tools. These labels…

The Complex Role of AI in Political Campaigning

Political campaigns are increasingly using AI to raise more money more efficiently. However, companies like OpenAI are limiting how campaigns can use their models over concerns about potential misuse. Tech for Campaigns conducted experiments in Virginia in 2022 using AI models like Google's Bard and OpenAI's ChatGPT…