Early forms of artificial intelligence supported the creation and distribution of online news and information well before the development of generative AI. From generating earnings reports and sports summaries to creating tags and transcriptions, major newsrooms have used automation for years to streamline production and routine tasks. These approaches are increasingly common in larger news organizations but remain far less common in smaller newsrooms. Technology companies, too, are increasingly using AI to automate critical news- and information-related tasks such as content recommendation and moderation, search result generation, and summarization.
Public conversations about the rise of AI long assumed that creative work would be largely unaffected, focusing instead on how the technology might affect physical labor and operational roles in fields like food service and manufacturing. However, newer, more accessible, and significantly more capable "generative AI" systems such as DALL-E, Lensa AI, Stable Diffusion, ChatGPT, Poe, and Bard have raised concerns about disrupting white-collar roles and media work, violating copyright (inside and outside of newsrooms), spreading false information, and undermining trust. At the same time, these technologies offer new opportunities for sustainability and creativity in news production, from pitching articles and moderating comment sections to covering local events (with mixed results) and creating summaries or newsletters.
Some of the ways newsrooms have experimented with generative AI have been criticized as opaque and error-prone. Meanwhile, news publishers have raised claims of copyright and terms-of-service violations against companies that use their news content to train artificial intelligence tools; some have gone so far as to strike deals with tech companies or to block web crawlers' access to their content. Generative AI tools also have the potential to further redirect search engine traffic away from news content.
These developments pose new ethical and legal challenges for journalists, content creators, lawmakers, and social media platforms. They include how publishers integrate AI into news production and distribution, what information AI systems extract from news material, and how global AI policies shape both.
AI is now a mainstream technology. Research shows that many organizations had adopted AI by 2021, particularly companies in emerging economies. Professionals have begun to document the growing importance of AI for IT businesses and news publishers, both independently and in relation to one another. There is also mounting evidence that AI is already in widespread use both in social media platforms' algorithms and in the production of everyday news stories, though the latter is more common among wealthier and larger publications.
The public and the media have a limited understanding of artificial intelligence. Studies show that journalists' knowledge of, and views on, the widespread use of AI in news are inconsistent. Audience-focused research on AI in journalism has likewise shown that readers struggle to distinguish human-written articles from AI-generated ones. And despite strong evidence that AI technologies can reinforce societal biases and facilitate the creation of misinformation, readers perceive less media bias in, and assign more credibility to, certain forms of AI-generated news.
Research on the use of AI in journalism has been extensively theoretical. While evidence-based work can answer some critical questions, it tends to be qualitative rather than quantitative, making it hard to form a clear picture of the overall landscape. Much theoretical work has examined how AI is changing journalism, how platform companies influence both AI and the news industry, and what this means for journalism's value and its ability to serve democratic goals. The media policy literature, meanwhile, has been dominated by European Union policy discussions and by the need for transparency about AI news techniques to build trust.
Future work should prioritize research on how AI changes the news people see, whether directly from publishers or indirectly through platforms. To build a fuller picture of how technological developments affect news practices worldwide, AI research should look beyond the United States and other economically developed countries. On the policy side, comparing use cases could help establish global standards for AI-related news transparency and disclosure.
57% of companies based in emerging economies reported AI adoption in 2021 (McKinsey, 2021).

67% of media leaders in 53 countries say they use AI for story selection or recommendations to some extent (Reuters Institute for the Study of Journalism, 2023).
Most governments have struggled to keep pace with the rapid advance of AI. Regulatory responses to new technologies such as AI vary from country to country and take many forms, including direct regulation, soft law (such as recommendations), and industry self-regulation. Russia and China are just two examples of governments that play a role in, or at least exert influence over, their countries' artificial intelligence research and development. Some governments involve a range of stakeholders to encourage innovation; others focus on controlling AI to prevent harm to people. And whereas countries like China assert the state's right to collect and use residents' data, the European Union's privacy laws emphasize strong protections for citizens' data against both private companies and the government.
These differences underscore the lack of consensus on the principles that should underpin AI laws or ethical frameworks, which makes a global agreement on regulating the technology difficult to reach. Yet laws in one country can have far-reaching implications in others. Those putting forward policies and solutions must weigh the many possible outcomes and acknowledge these global differences without sacrificing the democratic principles of a free press, an open internet, and free speech.
A lack of agreement on what counts as AI will undermine efforts to regulate the technology, making violations much harder to detect and punish. Given the pace of innovation and the complexity of these systems, experts have argued that one-size-fits-all solutions will not work and have instead advocated tailored approaches. Underrepresented groups, including people living in poverty or experiencing other forms of social exclusion, must be actively included in the process of making laws for AI.
Finally, given the growing relevance of news material in training AI, policy and regulatory responses should likely account for the need to maintain a free and independent press. This bears directly on current discussions of copyright and fair-use updates for digital content, as well as collective bargaining agreements and other forms of support between publishers and the companies that create and sell these technologies.