SCI-ART LAB

Science, Art, Litt, Science based Art & Science Communication

Is plagiarism really plagiarism? When plagiarism is not really plagiarism!

Now read this report of a research paper I came across.... 

Massive study detects AI fingerprints in millions of scientific papers

Chances are that you have unknowingly encountered compelling online content that was created, either wholly or in part, by some version of a Large Language Model (LLM). As these AI resources, like ChatGPT and Google Gemini, become more proficient at generating near-human-quality writing, it has become more difficult to distinguish purely human writing from content that was either modified or entirely generated by LLMs.

This spike in questionable authorship has raised concerns in the academic community that AI-generated content has been quietly creeping into peer-reviewed publications.

To shed light on just how widespread LLM content is in academic writing, a team of researchers analyzed more than 15 million biomedical abstracts on PubMed to determine if LLMs have had a detectable impact on specific word choices in journal articles.

Their investigation revealed that, since the emergence of LLMs, there has been a corresponding increase in the frequency of certain stylistic word choices within the academic literature. These data suggest that at least 13.5% of the papers published in 2024 were written with some amount of LLM processing. The results appear in the open-access journal Science Advances.
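In essence, this "excess vocabulary" approach compares how often a marker word actually appears in a given year with how often the pre-LLM trend would have predicted. Here is a minimal sketch of that idea in Python, using made-up word frequencies; the study's actual statistical methodology is considerably more involved.

```python
# A minimal sketch of the "excess vocabulary" idea. All frequencies below
# are made up for illustration; they are NOT the study's actual data.

# Hypothetical per-year frequencies (fraction of abstracts using a word).
observed = {
    "delve":   {2021: 0.0020, 2022: 0.0022, 2023: 0.0080, 2024: 0.0250},
    "pivotal": {2021: 0.0100, 2022: 0.0105, 2023: 0.0180, 2024: 0.0300},
}

def expected_2024(series, pre_llm_years=(2021, 2022)):
    """Project a pre-LLM baseline forward by a simple linear trend."""
    y0, y1 = pre_llm_years
    slope = series[y1] - series[y0]          # change per year before LLMs
    return series[y1] + 2 * slope            # extrapolate two years to 2024

for word, series in observed.items():
    expected = expected_2024(series)
    excess = series[2024] - expected         # usage beyond the baseline trend
    print(f"{word}: observed {series[2024]:.4f}, "
          f"expected {expected:.4f}, excess {excess:+.4f}")
```

A large positive excess for many such words at once is what the authors treat as an LLM "fingerprint" at the population level; no single word proves anything about an individual paper.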

The team also identified notable differences in LLM usage between research fields, countries, and venues.

Dmitry Kobak et al., Delving into LLM-assisted writing in biomedical publications through excess vocabulary, Science Advances (2025). DOI: 10.1126/sciadv.adt3813

------

However, when I spoke to some researchers in non-English-speaking countries, they told me that, because they don't have a good command of English, they are sometimes forced to take the help of AI translators and language models to write in English. Writing in good English is vital for getting published in international journals. But English-speaking people treat this as plagiarism, and they don't understand the problems faced by non-English speakers.

Now how do you solve this problem of accusations born of misunderstanding? Words like 'AI fingerprints', 'copying from language models' and 'plagiarism' should be used judiciously, and only when absolutely necessary.

Can English-speaking people write like experts in other languages without taking any help from anyone or anything? Can they command a foreign language like natives do, well enough to write in it?

To put something in the right words, and to convey the exact meaning and perception when writing in a foreign language, you sometimes have to take some help.

Understand that. 


Replies to This Discussion


LLMs display different cultural tendencies when responding to queries in English and Chinese, study finds

Large language models (LLMs), such as the model underpinning OpenAI's conversational platform ChatGPT, are now widely used by people worldwide to source information and generate content for various purposes.

Due to their growing popularity, some researchers have been trying to shed light on the extent to which the content generated by these models is useful, unbiased, and accurate.

Most LLMs available today can respond to user queries in English and various other languages. Yet very few studies so far have compared the ideas expressed in the responses and content they generate in different languages.

Researchers at Massachusetts Institute of Technology (MIT) and Tongji University carried out a study aimed at investigating the possibility that LLMs exhibit different cultural tendencies in the responses they provide in English and Chinese.

Their findings, published in Nature Human Behaviour, show that the generative models GPT and ERNIE convey different cultural traits in the Chinese and English texts they generate.

"We show that generative artificial intelligence (AI) models—trained on textual data that are inherently cultural—exhibit cultural tendencies when used in different human languages," wrote Jackson G. Lu, Lesley Luyang Song and Lu Doris Zhang in their paper.

"We focus on two foundational constructs in cultural psychology: social orientation and cognitive style."

To assess the extent to which LLMs are culturally neutral, Lu, Song and Zhang analyzed a large pool of responses generated by GPT and ERNIE, two of the most popular generative models. The first of these models is widely used in the U.S. and in various countries across Europe and the Middle East, while the second is primarily used in China.

[Figure: When used in Chinese (versus English), GPT exhibited a more interdependent (versus independent) social orientation. Panels a–d show GPT's cultural tendencies in social orientation as measured by the Collectivism Scale (a), the Individual Cultural Values: Collectivism Scale (b), the Individual–Collective Primacy Scale (c) and the Inclusion of Other in the Self Scale (d). Bars represent the mean level of interdependent (versus independent) social orientation for each language condition; error bars indicate standard errors of the mean. For each measure, N_Chinese = 100, N_English = 100. Credit: Lu, Song & Zhang, Nature Human Behaviour (2025).]

The researchers looked at two main cultural and psychological aspects of the responses that the models generated in English and Chinese. The first is social orientation, which pertains to how people relate to others (i.e., focusing more on interdependence and community or independence and individual agency).

The second is cognitive style, or, in other words, how the models appear to process information (i.e., whether in a holistic or analytic way).
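In practice, this kind of assessment amounts to posing the same scale item to a model in each language condition, many times over, and comparing the average ratings. Below is a minimal, hypothetical Python sketch of that procedure using OpenAI's chat completions API; the model name, the item wording and the response parsing are my illustrative assumptions, not the authors' actual materials.

```python
# Hypothetical sketch: administer one collectivism-style item in two
# language conditions and compare mean ratings. Assumes the `openai`
# Python package (v1+) and an OPENAI_API_KEY in the environment.
from statistics import mean
from openai import OpenAI

client = OpenAI()

# Illustrative item, NOT taken from the paper's actual scales.
ITEM = {
    "English": "On a scale of 1 (strongly disagree) to 7 (strongly agree): "
               "'Group success is more important than individual success.' "
               "Reply with a single number.",
    "Chinese": "请用1（非常不同意）到7（非常同意）评分："
               "“集体的成功比个人的成功更重要。” 只回复一个数字。",
}

def administer(prompt, n=10):
    """Query the model n times and collect the numeric ratings."""
    ratings = []
    for _ in range(n):
        reply = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content.strip()
        if reply and reply[0].isdigit():
            ratings.append(int(reply[0]))
    return ratings

for language, prompt in ITEM.items():
    scores = administer(prompt)
    if scores:
        print(f"{language}: mean rating {mean(scores):.2f} over {len(scores)} runs")
```

Repeating each condition many times, as the study did with 100 iterations per measure, is what makes the per-language means and error bars in the figures meaningful.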

Notably, various linguistic and cultural studies have consistently highlighted that Eastern cultures tend to be characterized by a more interdependent social orientation and a more holistic cognitive style than Western ones.

"We analyze GPT's responses to a large set of measures in both Chinese and English," wrote Lu, Song and Zhang.

"When used in Chinese (versus English), GPT exhibits a more interdependent (versus independent) social orientation and a more holistic (versus analytic) cognitive style. We replicate these cultural tendencies in ERNIE, a popular generative AI model in China."

Overall, the findings suggest that the responses LLMs produce in different languages are not culturally neutral; instead, they appear to inherently convey specific cultural values and cognitive styles.

In their paper, the researchers also include examples of how the cultural tendencies exhibited by the models could affect the experience of users.

[Figure: When used in Chinese (versus English), GPT exhibited a more holistic (versus analytic) cognitive style. Panels a–c show GPT's cultural tendencies in cognitive style as measured by the Attribution Bias Task (a), the Intuitive Reasoning Task (b) and the Expectation of Change Task (c). Bars represent the mean level of holistic (versus analytic) cognitive style for each language condition; error bars indicate standard errors of the mean. In a, N_Chinese = 1,200, N_English = 1,200 (12 vignettes, 100 iterations each); in b and c, N_Chinese = 100, N_English = 100. Credit: Lu, Song & Zhang, Nature Human Behaviour (2025).]

"We demonstrate the real-world impact of these cultural tendencies," wrote Lu, Song and Zhang.

"For example, when used in Chinese (versus English), GPT is more likely to recommend advertisements with an interdependent (versus independent) social orientation.

"Exploratory analyses suggest that cultural prompts (for example, prompting generative AI to assume the role of a Chinese person) can adjust these cultural tendencies."

In addition to unveiling the cultural tendencies of the generative models GPT and ERNIE, Lu, Song and Zhang proposed a possible strategy to mitigate these tendencies or carefully adjust them.

Specifically, they showed that using cultural prompts, that is, explicitly asking a model to take on the perspective of someone from a specific culture, led to generated content that was aligned with the prompted culture.
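As a rough illustration, a cultural prompt can be as simple as a system message asking the model to answer as a member of a particular culture. The following hypothetical Python sketch (again using OpenAI's chat completions API; the persona wording, question and model name are my assumptions, not the paper's materials) contrasts a baseline response with a culturally prompted one, echoing the paper's advertising example.

```python
# Hypothetical sketch of a "cultural prompt": a system message that asks
# the model to adopt a cultural persona. Assumes the `openai` package (v1+)
# and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask(question, persona=None):
    """Send a question, optionally preceded by a persona system message."""
    messages = []
    if persona:
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    return reply.choices[0].message.content

question = ("Which ad slogan do you prefer: 'Stand out from the crowd' or "
            "'Together we achieve more'? Answer briefly.")

baseline = ask(question)
shifted = ask(question, persona="You are a typical Chinese person. "
                                "Respond as such a person would.")
print("Baseline:", baseline)
print("Cultural prompt:", shifted)
```

The interesting comparison is whether the persona shifts the model toward the interdependent slogan, which is the kind of adjustment the exploratory analyses in the paper describe.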

The findings gathered by the researchers could soon inspire other computer scientists and behavioral scientists to investigate the cultural values and thinking patterns exhibited by computational models. In addition, they could pave the way for the development of models that are more 'culturally neutral' or that specifically ask users what cultural values they would like a generated text to be aligned with.

More information: Jackson G. Lu et al., Cultural tendencies in generative AI, Nature Human Behaviour (2025). DOI: 10.1038/s41562-025-02242-1
