Artificial intelligence is gradually infiltrating the opinion pages of major US newspapers, such as The New York Times, without clear disclosure, raising growing concerns about the credibility of journalistic content and readers' trust in it.
In an article published by The Atlantic, writer Vauhini Vara noted that the debate over the use of AI in journalistic writing intensified after a post by writer Becky Tohk on the platform X, in which she questioned the style of an article in The New York Times' Modern Love column. Tohk argued that the article's language resembled text produced by AI models.
Details of the Incident
Tohk's post sparked widespread reactions, prompting some researchers to run the text through specialized tools designed to detect AI-generated content. The results varied significantly: one tool estimated that over 60% of the text exhibited characteristics of AI writing, while others reported lower percentages or detected no clear AI use at all. This divergence reflects the limitations of these technologies and their unreliability as a basis for any final judgment.
In her response, the article's writer, Kate Gilgan, stated that she did not copy ready-made text from AI tools, but acknowledged using them as an editing assistant, drawing on platforms such as ChatGPT, Claude, and Gemini to help develop ideas and maintain textual coherence. This raises questions about where the line falls between human editing and algorithmic contribution.
Background & Context
Vauhini Vara points out that this incident is not an exception but part of a broader phenomenon. Research by computer scientists, including Tuhin Chakrabarty and Gina Russell, has found indicators of AI use in opinion articles published in several leading US newspapers, including The Wall Street Journal and The Washington Post, while such indicators appeared less often in traditional news reporting.
She also highlighted other controversial incidents, such as the withdrawal of a novel titled Shy Girl from publication after suspicions arose that it contained AI-generated text, as well as the publication of journalistic material containing inaccurate information, such as summer reading lists featuring non-existent book titles produced by text-generation tools.
Impact & Consequences
The writer emphasizes that the core of the problem lies not in the use of AI itself but in the lack of transparency. Readers assume that published articles reflect the writer's own voice and expertise, when they may in fact be an undisclosed mix of human and algorithmic production. This blurring could lead to an erosion of trust in media institutions, especially since opinion articles directly influence public attitudes and decision-makers.
Studies indicate that AI-generated texts can be more persuasive than human writing, despite their formulaic and homogeneous style. Moreover, AI models may carry cultural or political biases, opening the door for those biases to seep into public discourse through platforms that readers assume to be reliable.
Regional Significance
In light of the rapid advancement of AI technologies, Arab media faces similar challenges. The credibility of journalistic content in the region could likewise suffer, making it necessary to adopt clear editorial policies that ensure transparency in the use of these technologies; without such transparency, trust in Arab media institutions may erode, demanding a swift and effective response.
In conclusion, addressing the infiltration of AI into journalistic writing requires clear editorial policies that require writers to disclose their use of AI. Editors should also be trained to recognize indicators of its use, and legislative intervention may be needed to impose higher transparency standards. The warning is clear: continued ambiguity could undermine one of journalism's most fundamental principles: trust.
