New MJC research found that a majority of Americans want newsrooms to disclose their use of AI and develop ethical guidelines before they integrate these technologies into their workflows
By Benjamin Toff
Minnesota Journalism Center Research Report
As growing numbers of newsrooms and journalists have experimented with a variety of applications for artificial intelligence in their reporting, production and business-side operations, audiences remain wary of eroding standards and a lack of disclosure. Only a small segment of the public is regularly using these tools on a daily or weekly basis, and even those who do regularly use these technologies have little confidence in newsrooms’ abilities to apply them in an ethical and transparent manner.
These and other takeaways are among the findings detailed in this report, a collaboration between the Minnesota Journalism Center research team and The Poynter Institute, a nonprofit that focuses on improving the relevance, ethical practice and value of journalism in serving the public, which sponsored the study.
Findings are based on original survey data the MJC and Poynter collected in March 2025 and presented at the Summit on AI, Ethics & Journalism convened by Poynter and the Associated Press in New York City. The summit brought together local and national newsrooms and other experts to discuss the ethical considerations of introducing AI into journalistic practices.
Table of contents
- Key takeaways
- AI use in general
- Attitudes about current uses of AI in news
- Demand for disclosure and ethical guidelines
- Implications of our findings for publishers and newsrooms
Our survey results echo those in a recent Pew Research Center study, which also found a wide gap between how the American public feels about the growing use of AI across all facets of society and how experts on the subject feel. Those experts tend to be the heaviest users of these technologies and are often far more optimistic and hopeful about the impact of these tools than the public at large.
- Background on the study
The data presented in this report are based on a nationally representative survey of 1,128 English-speaking Americans 18 years old or older conducted March 6-10, 2025. The MJC and the Poynter Institute contracted with NORC at the University of Chicago to field the survey using their AmeriSpeak panel, which includes a random selection of U.S. households invited to join the panel via mail notifications, telephone interviews and in-person field interviews. The survey was fielded almost entirely online with a small number completed by telephone (7%).
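For context, a survey of this size implies a sampling margin of error of roughly ±3 percentage points at the 95% confidence level for full-sample estimates near 50%, before accounting for the design effect of weighting. This is a back-of-the-envelope calculation under simple random sampling assumptions, not the study’s reported figure (the full survey results linked below include the actual values):

$$
\mathrm{MoE}_{95\%} = 1.96\,\sqrt{\frac{p(1-p)}{n}}\,\sqrt{\mathrm{DEFF}} \approx 1.96\,\sqrt{\frac{0.5 \times 0.5}{1128}} \approx \pm 2.9 \text{ points (with } \mathrm{DEFF} = 1\text{)}
$$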
Data are weighted to the latest Current Population Survey (CPS) benchmarks developed by the U.S. Census Bureau and are balanced by sex, age, education, race/ethnicity and region.
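For readers curious about the mechanics, balancing of this kind is commonly done by raking (iterative proportional fitting), which repeatedly adjusts respondent weights until the weighted sample matches each benchmark distribution. The sketch below is a minimal illustration with hypothetical variable names and target shares; it is not NORC’s actual weighting procedure.

```python
import pandas as pd

def rake_weights(df, targets, max_iter=50, tol=1e-6):
    """Iterative proportional fitting ("raking"): adjust weights until the
    weighted distribution of each variable matches its population benchmark.

    df      -- respondent-level data, one column per raking variable
    targets -- {variable: {category: population_share}}, e.g. CPS benchmarks
    """
    weights = pd.Series(1.0, index=df.index)  # start everyone at weight 1
    for _ in range(max_iter):
        max_adjust = 0.0
        for var, shares in targets.items():
            # Weighted share of each category under the current weights
            current = weights.groupby(df[var]).sum() / weights.sum()
            # Scale each respondent's weight by (target share / current share)
            factor = df[var].map(lambda cat: shares[cat] / current[cat])
            weights *= factor
            max_adjust = max(max_adjust, (factor - 1).abs().max())
        if max_adjust < tol:  # stop once all margins match the targets
            break
    return weights * len(df) / weights.sum()  # normalize to mean weight 1.0

# Hypothetical usage with two raking variables:
# targets = {"sex": {"F": 0.51, "M": 0.49},
#            "region": {"NE": 0.17, "MW": 0.21, "S": 0.38, "W": 0.24}}
# df["weight"] = rake_weights(df, targets)
```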
Questions included in the survey were developed jointly by the MJC research team and Poynter based on findings from a prior qualitative study conducted in 2024. Earlier findings were presented at a 2024 event Poynter convened, which featured results from a series of focus groups the MJC held in Minnesota inductively exploring how audiences were thinking about newsrooms’ use of AI. Those sessions pointed to considerable anxiety and frustration about AI use in general, including concerns about a loss of human connection from the growing use of these technologies, and prompted many participants to say they wanted newsrooms to commit to greater transparency about their use of these tools. While some participants who regularly used AI themselves were more open to journalists doing the same, many of these same participants also expressed concern about these tools’ limitations and the ways they could be exploited to manipulate the public.
Rapid changes in the adoption of these tools among the public prompted the MJC and Poynter to collect nationally representative data on these subjects, knowing that audience expectations are likely changing rapidly as well. These data should be treated accordingly as a snapshot in time, showing where the American public stands as of spring 2025.
- Full survey results
Explore additional information about the survey, including design effects, sampling margin of error, response rates and the complete questionnaire.
Key takeaways
Our 2025 survey shows that many Americans are concerned about the growing use of AI in U.S. newsrooms — though most are at least open to some use of these tools, so long as the practices are accompanied by ethical guidelines and transparency about how they are being used.
Specifically, we find:
- Few Americans say they are regularly using AI themselves for any purpose. Just a fifth said they were doing so on a daily or weekly basis, whereas nearly half said they had never used AI or had never even heard of these tools. Older and less-educated Americans are the least likely to be regularly using AI, although we found fewer differences by gender or race.
- Those who are not using AI feel particularly fearful about these tools. Many expressed a high level of ambivalence about AI in general, although those using the tools most frequently in their lives tended to feel most hopeful about and intrigued by their promise.
- Many think news outlets are already using AI, but have low confidence in news media doing so. Two-thirds said they thought U.S. news organizations were at least sometimes using AI to generate stories or create images. Respondents who report using AI the most were especially likely to think news organizations were doing the same. However, respondents who said they had little or no confidence in newsrooms using AI for these purposes outnumbered those who said they had some or a great deal of confidence by a 3-to-1 margin.
- News literacy tends to be associated with higher levels of skepticism about AI. AI use is positively correlated with levels of news literacy, but the most news-literate respondents also express the lowest levels of confidence in newsrooms using AI. That is true even though those who use AI most are also the most confident in newsrooms using these tools.
- Audiences are particularly uninterested in using AI chatbots to get news. Nearly half say they have no interest in using such tools themselves, but 40% say they might use them if news organizations committed to verifying the information in the responses the chatbots generated.
- Most say disclosure is 'very important' to them. About half of all respondents say labels around the use of AI are essential even where reporters and editors remain involved in confirming the underlying information in AI-generated content. Likewise, 58% say they want publishers to establish clear ethical guidelines around their use of AI before experimenting with it in their newsrooms, but only a fifth of Americans say they think news organizations should never use AI under any circumstances.
AI use in general
Most Americans are not using AI at all
We find that only a small segment of the public say they are regularly using generative AI tools for any purpose.
When asked about their AI use, respondents were provided with a broad definition of these tools; the survey explained that by generative AI we meant “the use of advanced computer systems to perform tasks such as automated writing or analyzing large datasets.” The question also provided respondents with examples of these tools, including “ChatGPT, Google Gemini, Microsoft Copilot, Meta AI, Grok and DeepSeek,” because Americans may increasingly encounter these tools without recognizing they are using them.
That said, fewer than half of respondents said they had ever used these tools at all. Forty percent said they had never used any of these tools, twice the percentage who said they were using them on a daily or weekly basis.
We also found that AI use, at least at the time of this survey, tends to be somewhat less stratified by age than by education.
Nearly a quarter of Americans under 60 say they are regularly using generative AI on a daily or weekly basis, while the vast majority of those over 60 have never used it; within the under-60 group, however, there were few differences by age.
The pattern was somewhat more consistent with respect to education, where frequency of AI use tended to increase steadily with the amount of formal schooling respondents said they had.
A majority of people with at least a college degree said they used AI at least occasionally versus roughly a third of Americans without college degrees.
We found even fewer differences with respect to some other demographic variables.
For example, while women were slightly less likely than men to say they used AI tools on a daily basis (6% of women compared to 9% of men), they were only modestly more likely to say they had never used AI (41% compared to 38%).
Racial differences were similarly muted: roughly the same percentage of Hispanic respondents (19%) and non-Hispanic white respondents (18%) said they were using AI daily or weekly, which was about even with the percentage of Black respondents (20%) who said the same.
Those who are using AI regularly have much more positive feelings about it
Closely related to AI use, we also found wide disparities in attitudes about these tools in general depending on how frequently respondents said they were using AI.
Based on our previous focus group findings, in which a number of participants expressed considerable anxiety and concern about AI use in general, we asked respondents how strongly they felt “fearful,” “angry,” “hopeful” or “intrigued” when thinking about AI in general.
While two-thirds said they felt fearful or intrigued at least “a little strongly,” somewhat fewer said they felt hopeful or angry.
These reactions to AI varied considerably depending on how frequently respondents said they themselves were using these tools.
Those who said they have never used AI reported far more negative feelings toward the technologies than those who are using them daily or weekly. Nearly one in five who are using these tools regularly also reported a mix of positive and negative feelings.
Regular users of AI also tend to have higher news literacy
In addition to AI use, the survey also included a series of questions designed to assess how knowledgeable respondents were about basic practices of journalism.
Specifically, we asked eight true/false questions in order to measure levels of news literacy, drawing on a series of questions previously developed and validated in a recently published academic study from Adam Maksl and colleagues.
Questions included false statements such as, “To operate in the U.S., news reporters must be licensed by the Society of Professional Journalists” or “Having lots of ‘likes,’ ‘shares,’ or comments means a news story is credible,” as well as true statements such as, “A local journalist is more likely to write a story about a city council election than an election in a foreign country.”
When we tallied up the number of correct responses to these items, we found levels of news literacy were widely distributed across the sample.
Assessing levels of news literacy allowed us to directly compare the relationship between AI use and news literacy.
We find a positive correlation between the two: Nearly 60% of respondents with high levels of news literacy said they were using AI at least occasionally versus just 38% of those with low levels of news literacy.
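As an illustration of how a score like this can be tallied and then compared against AI use, the sketch below scores hypothetical true/false responses against an answer key and crosstabulates literacy level with AI use. The item names, keyed answers and cut points are placeholders, not the actual instrument or analysis code.

```python
import pandas as pd

# Hypothetical answer key for the eight true/false items; the item names
# and keyed answers are placeholders, not the actual survey instrument.
ANSWER_KEY = {
    "q_licensed": False,  # reporters must be licensed by SPJ: false
    "q_likes": False,     # lots of likes/shares means credible: false
    "q_council": True,    # local journalist covers city council: true
    # ...the remaining five items would follow the same pattern
}

def literacy_score(responses: pd.DataFrame) -> pd.Series:
    """Count each respondent's correct true/false answers (0-8)."""
    correct = pd.DataFrame(
        {item: responses[item] == key for item, key in ANSWER_KEY.items()}
    )
    return correct.sum(axis=1)

# Bucket scores into low/mid/high literacy, then crosstabulate with AI-use
# frequency, weighting each respondent by their survey weight:
# df["literacy"] = pd.cut(literacy_score(df), bins=[-1, 3, 5, 8],
#                         labels=["low", "mid", "high"])
# pd.crosstab(df["literacy"], df["ai_use"], values=df["weight"],
#             aggfunc="sum", normalize="index")
```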
Attitudes about current uses of AI in news
Many think news outlets are already widely using AI
In general, Americans think many news organizations are already regularly using generative AI in various ways to support their work.
Specifically, we asked about the use of AI for four purposes: (1) to create images where real photographs are not available, (2) to make charts and infographics, (3) to write the text of articles and (4) to turn written articles into audio or video (or vice versa).
Respondents consistently said they thought the news media in the U.S. were already using these tools for these purposes at least some of the time, with similar rates for all four use cases we asked about.
Perceptions about where newsrooms are currently using AI were related to some degree to levels of news literacy; however, respondents with higher levels of news literacy tended to think news organizations were using AI more frequently rather than less.
But confidence in newsrooms’ use of AI tends to be low
At the same time that large portions of the public think U.S. news organizations are regularly using AI for various purposes, confidence in their ability to do so effectively remains low.
Respondents were asked about the same four ways that newsrooms might use AI for editorial purposes. Nearly six in 10 said they had little or no confidence in news organizations using AI to create images or write articles, and nearly half said the same about charts and infographics or turning written articles into audio or video.
Regular users of AI themselves did tend to hold a more positive outlook about news organizations’ abilities to use these tools.
When we divide the sample, comparing those who say they use AI daily or weekly for any purpose with those who say they have never used AI at all, we find that the audiences most confident in newsrooms’ abilities to use these tools tend to be those who use them regularly themselves.
This pattern was consistent across all four use cases of AI that we asked about, but we saw somewhat different patterns with respect to news literacy.
That is, although people higher in news literacy tended to have relatively more confidence in newsrooms’ abilities to use AI to create charts and infographics or to transform text into audio or video content, they actually had less confidence in news organizations’ ability to use AI to create images or generate the text of news stories.
In other words, news literacy appears to correspond to increased skepticism around using AI for aspects of journalism that involve newsgathering or the creation of original reporting.
There is general skepticism about AI chatbots for news
We also asked a specific question about an additional use case for AI, one that several prominent news organizations have already been experimenting with, including The Washington Post and San Francisco Chronicle: AI chatbots.
When we asked people whether they might be interested in using a tool like this to get news even if it “sometimes made mistakes when interpreting the language in previously published reporting”; whether they would only use a tool like this if accompanied by editors verifying the information; or whether they had “no interest in using a tool like this to get information,” respondents overwhelmingly said they either had no interest or would only use it if editors were separately verifying what the tool reported.
Responses on this question were also somewhat stratified depending on how often respondents themselves used AI.
Nearly a quarter (23%) of those who said they used AI daily or weekly said they might use an AI chatbot to get news even if it sometimes made mistakes, making them more than twice as likely to respond this way compared to the public at large.
Similarly, nearly two-thirds (65%) of those who said they had never used AI also said they had no interest in using an AI chatbot for news.
Demand for disclosure and ethical guidelines
Most say they want newsrooms to be transparent
As these findings suggest, audiences remain wary of the use of AI for journalistic applications. This distrust was particularly apparent when we asked directly about how much respondents thought newsrooms should disclose about their use of these tools — even in cases where reporters and editors were verifying the output of content generated by AI.
The vast majority of respondents said disclosure was “very important” to them when it came to news organizations using AI to write stories or edit photographs, and nearly as many said the same for using AI to translate text into different languages or create infographics.
Just single-digit percentages said they thought it was not very or not at all important.
While preferences around disclosure were uniformly high among nearly all respondents, these responses varied less by general AI use than by news literacy.
For example, more than three-quarters of respondents said disclosure was somewhat or very important, both among those who had never used AI and among those who used it on a daily or weekly basis.
But when we divide respondents in the sample according to levels of news literacy, we see a much wider gap in these attitudes.
Roughly 85% to 90% of those with the highest levels of news literacy said disclosure was somewhat or very important to them, versus just over half of those with the lowest levels of news literacy.
The relative lack of importance this segment of the public places on disclosure may be a reflection of their limited knowledge about the work that goes into producing journalism in general.
Audiences want editorial guidelines
We included one final question in the survey about how audiences thought about the use of AI in news, asking respondents to select one of three options for how they wanted U.S. news organizations to approach AI.
We asked whether they thought newsrooms should prioritize experimentation even if it meant making some mistakes. This sentiment is often expressed at industry events and conferences where publishers are grappling with the existential threat of the growing adoption of these tools across the media environment.
Alternatively, respondents could say they thought news organizations should only use AI if they first established “clear ethical guidelines and policies around its use.” The Poynter Institute, the nonprofit Trusting News, and other organizations offer a range of resources around doing just that.
Third, respondents could instead state that they believed news organizations “should never use AI under any circumstances.”
A fifth of respondents chose this last option, roughly the same percentage as those who said they didn’t know or declined to provide a response. A clear majority (58%) instead said they would support the use of AI if news organizations first established ethical guidelines.
Just 2% advocated for news organizations to prioritize experimentation even at the risk of making mistakes.
That percentage did increase the more often respondents said they used these tools themselves. But even among the heaviest users of AI, those who said they used an AI tool or app on a daily basis, levels of support for this kind of experimentation were muted.
Just 12% said they believed experimentation should be prioritized; the same percentage of these respondents said they thought news organizations should never use AI.
Implications of our findings for publishers and newsrooms
The survey results described in this report capture an American public grappling with considerable anxiety about the expanding use of generative AI across a growing range of domains in society. While only a small segment of the public is currently using these tools regularly, optimism about these technologies remains particularly concentrated among those early adopters.
When it comes to news and journalism, audiences generally report high levels of skepticism and concern about the impact of these technologies on the performance of the press, echoing findings from focus groups the Minnesota Journalism Center convened a year ago.
Given historically low trust in the news media in the U.S., generative AI may further erode already low confidence in journalism, even as many industry observers have argued that news media must innovate quickly to avoid what many view to be a clear existential threat to their industry.
There is little mistaking the fundamental message for publishers in these data: the need for transparent disclosure of AI use. Audiences remain suspicious of being deceived and want labeling of how AI is being used, even where humans remain in the loop, and most say they want newsrooms to adopt ethical guidelines before they embrace experimentation.
That said, survey respondents by and large do not reject any use of AI; they merely want journalists, whom they view as distinct from other media, to stand by professional commitments to verification, sourcing and fact-based reporting. These are values many perceive as in tension with contemporary technologies.
Lastly, it is worth reiterating that these findings are merely a snapshot in time at precisely a moment in which generative AI technologies are changing rapidly and consumer understanding of these tools remains in flux. It is certainly possible that over time, more of the public that is currently most skeptical about AI will begin to resemble those who are most hopeful about these tools. After all, those who are already using AI most frequently in work and school are also most optimistic about them.
The challenge for news organizations will be innovating quickly enough to sustain their operations without leaving behind the large segment of the audience that is far less confident that AI will enhance conventional journalistic practices rather than undermine them.
Questions about this research? Contact MJC director Benjamin Toff at [email protected]. Explore more of the Minnesota Journalism Center's research work on our website.
About the Minnesota Journalism Center
We support a more vibrant, equitable and sustainable news ecosystem in Minnesota through educational initiatives, applied research and engagement with journalists and newsrooms across the state.
Events
We host and share events focused on connecting working journalists to resources, training, research findings, students — and each other — throughout the year. Join us.
Stay connected
Stay informed about our events, research initiatives and how you can get involved. Sign up for our newsletter.