The media must “call out” AI-generated news content and “use good journalism” to counter AI-manipulated information
By Autumn Shelton, West Virginia Press Association
CHARLESTON, W.Va. – Artificial intelligence (AI) is everywhere. From advanced facial recognition systems to the content we see when browsing the internet, AI has taken over, according to experts from West Virginia University. But, although AI can be beneficial in some ways, it does have a dark side – especially when it leaves the public unsure about what is real and what is fake.
On Saturday, during the West Virginia Press Convention, Dr. Prashnna Gyawali, assistant professor in the Lane Department of Computer Science and Electrical Engineering, Dr. Tom Devine, teaching assistant professor, and Dr. Sam Workman, director of the WVU Institute for Policy Research and Public Affairs, spoke about AI and how journalists can accurately report facts.
According to Dr. Gyawali, one of the perils of AI is “deep fakes,” which became popular in 2018.
“Deep fakes can be thought of as AI-generated or AI-manipulated content of someone’s images or videos,” Gyawali explained.
These deep fakes can lead to misinformation and data privacy issues, he continued. While some deep fakes are humorous depictions of well-known individuals, they can also present false information that damages a person’s reputation or how others perceive that person.
In the six years since the arrival of deep fakes, AI has only continued to progress, Gyawali added. Now, advanced AI systems using deep learning have resulted in large language models, such as ChatGPT, which is an algorithm that is “equipped to summarize, translate, predict and generate human-like text to convey ideas and concepts.”
To illustrate how large language models can create human-like content, Gyawali prompted an AI program to write a poem about Charleston, West Virginia’s riverfront.
It can be difficult to determine what is AI-generated and what is human-created, Gyawali noted. And, although there have been attempts to embed watermarks in AI-generated content, those systems are not yet mature.
Gyawali said that to address the challenges of AI, like deep fakes, better policies and regulations, such as the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI and the EU Artificial Intelligence Act, are places to begin. Additionally, research and education in AI ethics and safety, as well as industry-wide standards for responsible AI use, should be considered.
Dr. Thomas Devine began his presentation by explaining that AI is projected to affect available jobs in the near future.
“According to recent reports, AI could replace the equivalent of 300 million full-time jobs; two-thirds of the jobs in the U.S. and Europe are already using AI in some form,” Devine said. “It’s estimated that up to one-fourth of the jobs on the planet could be done entirely by AI.”
However, Devine added that while this is “terrifying news,” it is not a new phenomenon.
“Automation has been getting rid of jobs for hundreds and hundreds of years,” Devine stated, adding that the solution to the loss of jobs is worker evolution, not replacement.
AI is quickly becoming one of the most in-demand fields, second only to cybersecurity, Devine continued, and AI education will be imperative to “close the skills gap.”
Other “dark sides” of AI, according to Devine, include ethical concerns, such as the loss of personal privacy, training data bias, malicious attacks, and cognitive bias; disinformation and manipulation concerns, such as social engineering, fake news, propaganda, and the “death of objective reality”; and educational concerns, such as plagiarism and cognitive atrophy.
For members of the media, Devine said that paths forward in an AI-driven world may include embracing AI as a collaborator instead of a replacement, ensuring safe and ethical AI incorporation, and questioning everything that is read or seen.
Devine said it’s important for members of the media, and those in other industries, not to let AI write code, love letters or news articles.
“Do the work yourself,” Devine said. “Please don’t let writing die.”
Dr. Sam Workman added to the conversation surrounding the media and AI by focusing on resiliency and AI in politics.
“What I hope to do is convince you by the end of this that the main tool for combating AI is in this room,” Workman said to members of the media in attendance.
Workman said that one of the main reasons why he returned to West Virginia is the strong presence of local news reporting.
“For someone who has lived in four different states, I can tell you not every state has [that], and it’s done on a level here that is exceptional,” Workman said. “You punch above your weight.”
When it comes to politics, Workman said, supreme power comes with setting the agenda.
“If you give me agenda power in [this room], I can structure a vote . . . on anything, any way I want to, such that I get what I want,” Workman said. “So attention matters a lot more than preferences and solutions, and that’s where the media immediately comes in.”
Workman explained that the role of a press corps is to determine what news story is worthy of their front page.
“One way to think about the power of AI has nothing to do with how well it speaks or how convincing or persuasive it is. If you flood the market with stories about Gaza, inflation, the Federal Reserve, that’s what people are going to think about and that’s how we are going to base our decisions.”
“It’s an effect that local journalism, regional journalism, can sort of mitigate,” Workman continued. “AI can’t know what problems should be covered. It ingests information, and then generates content based on that information. Collectively, [the media] are the ones who know, have the foresight to know, what is useful and should be covered.”
Workman said the media is responsible for the agenda brought to the public, though it does have a tendency to “disproportionately focus on conflict” and the “arc of characters over time,” instead of procedures and institutions.
“Look, you’ve gotta sell papers and content, right? People don’t want to read about an obscure congressional rule that caused something to fail. They want to know what [Chuck] Schumer says and Mike Johnson and those folks, but when you think about what we know about how the media operates, or tendencies, then that’s exactly what AI is going to do.”
He said a more robust local media is imperative to combat AI manipulation of political coverage.
Lastly, Workman said that members of the media must call out AI-generated news content and “use good journalism to rectify the situation.”