New AI tool identifies 1,000 'questionable' scientific journals

A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.
The study, published in the journal “Science Advances,” tackles an alarming trend in the world of research.
Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that trend several times a week in his email inbox: spam messages from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, offering to publish his papers for a hefty fee.
Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.

Daniel Acuña
“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”
His group’s new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?
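The paper's implementation details aren't spelled out here, but criteria like those can be read as features computed from a scraped journal site. The following is a hypothetical sketch in Python; every name and heuristic below is an assumption for illustration, not the team's actual code:

```python
# Hypothetical website-level screening signals. None of these names or
# heuristics come from the study; they only illustrate how questions like
# "does the journal look legitimate?" can be turned into numeric features.

def board_known_fraction(board: list[str], known_researchers: set[str]) -> float:
    """Fraction of listed editorial-board members found in some index of
    established researchers (the index itself is assumed to exist)."""
    if not board:
        return 0.0
    return sum(name in known_researchers for name in board) / len(board)

def typo_rate(page_text: str, dictionary: set[str]) -> float:
    """Crude stand-in for a grammar check: the share of words missing from
    a dictionary. A real system would use an actual grammar checker."""
    words = [w.strip(".,;:!?()").lower() for w in page_text.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    return sum(w not in dictionary for w in words) / len(words)

def screening_features(board, known_researchers, page_text, dictionary):
    """Bundle the signals into one feature vector per journal."""
    return {
        "board_size": float(len(board)),
        "board_known_fraction": board_known_fraction(board, known_researchers),
        "typo_rate": typo_rate(page_text, dictionary),
    }
```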
Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.
But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.
“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”
The shakedown
When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality, or at least that’s the goal.
A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.
Often, they target researchers outside of the United States and Europe, such as in China, India and Iran, countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.
“They will say, ‘If you pay $500 or $1,000, we will review your paper,’” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”
A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ). Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)
But keeping pace with the spread of those publications has been daunting for humans.
To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of roughly 15,200 open-access journals on the internet.
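The general shape of such a system is standard supervised learning: fit a model on journals the DOAJ has already labeled, then score the unlabeled list. Here is a minimal sketch assuming scikit-learn and a precomputed feature matrix; the team's actual model, features and cutoff are not described in this article:

```python
# Hypothetical training sketch: learn from DOAJ-labeled journals, then score
# the full list. The model choice and the 0.5 cutoff are assumptions.

from sklearn.ensemble import RandomForestClassifier

def train_and_screen(train_X, train_y, screen_X, journal_ids, threshold=0.5):
    """train_y uses 1 for journals the DOAJ flagged as questionable."""
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(train_X, train_y)
    scores = model.predict_proba(screen_X)[:, 1]
    # Return ranked candidates for human review, not final verdicts.
    return sorted(
        ((j, s) for j, s in zip(journal_ids, scores) if s >= threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )
```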
Among those journals, the AI initially flagged more than 1,400 as potentially problematic.
Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to those experts, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
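Taken at face value, those rounded counts imply that roughly three in four flags survived human review. A back-of-the-envelope check, using the approximate numbers above rather than the study's exact figures:

```python
# Rough check using the rounded counts reported above; the study's exact
# figures may differ slightly.
flagged = 1400          # journals the AI initially flagged
false_positives = 350   # flags that human reviewers judged likely legitimate
precision = (flagged - false_positives) / flagged
print(f"implied precision: {precision:.0%}")  # prints "implied precision: 75%"
```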
“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”
A firewall for science
Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.
“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”
The team discovered, for example, that questionable journals published an unusually high number of articles. Their authors also tended to list more institutional affiliations than authors in legitimate journals, and they cited their own research at unusually high rates.
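Signals like those are what make the model inspectable: each one is a transparent, per-journal statistic. A hypothetical sketch of two such statistics follows; the definitions are illustrative, not the paper's:

```python
# Hypothetical per-journal statistics of the kind described above. The
# definitions are for illustration; the study's exact feature set may differ.

def articles_per_year(article_years: list[int]) -> float:
    """Publication volume: article count divided by the span of years covered."""
    span = max(article_years) - min(article_years) + 1
    return len(article_years) / span

def self_citation_rate(citation_pairs: list[tuple[str, str]]) -> float:
    """Share of (citing author, cited author) pairs where the two match."""
    if not citation_pairs:
        return 0.0
    return sum(a == b for a, b in citation_pairs) / len(citation_pairs)
```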
The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data, creating what he calls a “firewall for science.”
“As a computer scientist, I often give the example of when a new smartphone comes out,” he said. “We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”
Co-authors on the study included Han Zhuang at the Eastern Institute of Technology in China and Lizheng Liang at Syracuse University in the United States.