6 things to know about AI

Artificial intelligence technology is not new, but dramatic advances in generative AI have captured the world’s attention and are transforming the information landscape. Here are six news literacy takeaways and implications to keep in mind as this technology continues to evolve.

TOOLS
Chatbots: ChatGPT, Google Bard, Bing Chat, Claude, etc.

HOW THEY WORK
Generative AI chatbots rely on a technology called a large language model to synthesize large amounts of information and imitate human writing. They are trained on vast databases of internet text and digitized online writings, including books, articles and websites. Chatbots use probability to predict what words and phrases go together to answer a given prompt. Some experts compare them to autocompletion tools on steroids. (See the sketch below the infographic text for a toy illustration of this kind of prediction.)

1. Generative AI tools are not objective ...

They are subject to the biases of the humans who make them — and integrate any biases baked into their training data. Data sets often include copyrighted, misleading and overtly biased material. These tools do not just learn human biases; they can also amplify, extend and entrench them.

AI image generators amplify biases in race and gender and can default to harmful Western stereotypes. A Bloomberg experiment, for instance, found that Stable Diffusion produced AI images dominated by people with lighter skin tones for high-paying jobs, while images of fast food workers and dishwashers skewed toward darker skin tones.

2. ... or reliably factual.

AI tools might feel authoritative and credible, but the responses they generate are routinely riddled with inaccuracies. Researchers have raised concerns about AI chatbots generating misinformation and providing responses that include conspiracy theories, pseudoscience and harmful content. Chatbots have been known to make up sources, provide incorrect answers to simple questions and write persuasive responses that include misinformation. Experts refer to false information presented with confidence as “hallucinations” — a persistent issue with this technology.

3. It’s not all bad.

It can be easy to get swept up in alarmist takes, but AI tools also have tremendous upsides. For example, they can boost scientific research and make complicated or specialized tasks more accessible, like writing computer code or building websites. Some news organizations use AI to responsibly automate certain tasks, such as The Associated Press using AI to compile corporate earnings reports and sports box scores.

4. Content is easier than ever to create — and fake.

AI chatbots and image generators produce text and visuals at an unprecedented scale — and have the potential to supercharge the spread of misinformation. Some tools are even being used to produce fabricated news broadcasts using realistic-looking AI anchors. AI image generators can create anything you ask for — however absurd, whimsical or potentially harmful and misleading. While many people use image generators to make fun, fanciful images, bad actors can use them to smear public officials or produce other damaging fakes. Be ready to encounter even more information with less transparency about its origin.

5. It signals a change in the nature of evidence.

The rise of more convincing fake photos and videos means that finding the source and context for visuals is often more important than hunting for visual clues of authenticity. Any viral image you can’t verify through a reliable source — using a reverse image search, for example — should be approached with skepticism. Don’t let AI technology undermine your willingness to trust anything you see and hear. Just be careful about what you accept as authentic.

6. Reputable sources matter more than ever.

Credible sources follow processes to verify information before sharing it, and this should translate into higher levels of trust. Professional journalism ethics — such as fairness, transparency and accuracy — can be seen in the quality of information published by standards-based news organizations. Generative AI tools don’t show the same concern for truth, verification or the public interest. Traditional signals of credibility — such as clean writing, academic footnotes or sleek website design — are relatively easy to fake in today’s information landscape. But AI chatbots make them easier than ever to game — at scale. Skills like “lateral reading” become even more critical.

This infographic was created by the News Literacy Project with support from SmartNews, a news app for mobile devices.
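To make the “How they work” note above a little more concrete, here is a minimal sketch of probability-based next-word prediction. It is a toy bigram model, not a real large language model (real systems use neural networks trained on vast amounts of text), and the function names and tiny training text are invented for illustration.

```python
# Toy illustration (not a real large language model): a bigram "model" that
# picks the next word based on how often word pairs appear in a tiny training
# text, mirroring the probability-based prediction described above.

import random
from collections import Counter, defaultdict

training_text = (
    "the reporter verified the claim before publishing the story "
    "the editor verified the quote before publishing the article"
)

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follow_counts[word]
    if not counts:
        return "<end>"
    choices, weights = zip(*counts.items())
    return random.choices(choices, weights=weights)[0]

def generate(start: str, length: int = 8) -> str:
    """Chain predictions together to 'autocomplete' a phrase."""
    output = [start]
    for _ in range(length):
        next_word = predict_next(output[-1])
        if next_word == "<end>":
            break
        output.append(next_word)
    return " ".join(output)

print(generate("the"))  # e.g. "the reporter verified the quote before publishing the story"
```

Real chatbots work at a vastly larger scale, but the core idea is the same: the system continues a prompt with statistically likely words, which is why fluent output is no guarantee of factual accuracy.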

From: News Literacy Project

Artificial intelligence technology is not new, but dramatic advances in generative AI have captured the world’s attention and are transforming the information landscape.

This infographic provides an overview of how this technology works and offers six news literacy takeaways to keep in mind as these tools evolve:

  1. Generative AI tools are not objective: They are subject to the biases of the humans who make them, and any biases in the training data may show up when they are used.
  2. ... or reliably factual: AI tools might feel authoritative and credible, but the responses they generate are routinely riddled with inaccuracies.
  3. It’s not all bad: AI tools also have tremendous upsides. (For example, they can boost scientific research and make complicated or specialized tasks more accessible, like writing computer code or building websites.)
  4. Content is easier than ever to create — and fake: AI chatbots and image generators produce text and visuals at an unprecedented scale — and have the potential to supercharge the spread of misinformation. Be ready to encounter even more information with less transparency about its origin.
  5. It signals a change in the nature of evidence: The rise of more convincing fake photos and videos means that determining the source and context for visuals is often more important than hunting for visual clues of authenticity. Any viral image you can’t verify through a reliable source — using a reverse image search, for example — should be approached with skepticism (a sketch of this check appears after this list).
  6. Reputable sources matter more than ever: Credible sources follow processes to verify information before sharing it, and this should translate into higher levels of trust.
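As a rough illustration of the reverse image search step mentioned in point 5, the sketch below opens an image address in two reverse image search services so you can check where else the picture has appeared and in what context. The query URL formats, the example image address and the function name are assumptions made for illustration, and the services' URL patterns may change.

```python
# Rough sketch: open a suspicious image's address in reverse image search
# services so its earlier appearances and original context can be reviewed
# by hand. The query URL patterns below are assumptions and may change.

import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    """Open browser tabs that look for other appearances of the image."""
    services = {
        "Google Lens": "https://lens.google.com/uploadbyurl?url=",
        "TinEye": "https://tineye.com/search?url=",
    }
    for name, base_url in services.items():
        query_url = base_url + quote(image_url, safe="")
        print(f"Opening {name}: {query_url}")
        webbrowser.open(query_url)

# Hypothetical example image address:
reverse_image_search("https://example.com/viral-photo.jpg")
```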

Don’t let AI technology undermine your willingness to trust anything you see and hear. Just be careful about what you accept as authentic.