At the annual meeting of the Society for Scholarly Publishing held in Portland, Oregon last month, the closing plenary session was a formal debate on the proposition "Resolved: Artificial intelligence will fatally undermine the integrity of scholarly publishing." Arguing in favor of the proposition was Tim Vines, founder of DataSeer and a Scholarly Kitchen Chef. Arguing against was Jessica Miles, Vice President for Strategy and Investments at Holtzbrinck Publishing Group.

I organized and moderated the debate, and am very pleased to be able to share here the prepared texts of both debaters' opening statements and responses. One of the highlights of the program was the vigorous discussion between debaters and audience, which took place following the formal statements – and our hope is that the discussion might continue here in the comments.

Image by Pratik's Laws, used under CC BY-SA license.

You're in a desert when you see a tortoise. You reach down and flip the tortoise onto its back. The tortoise lays on its back, beating its legs trying to turn itself over.

The cinema enthusiasts among you might recognize this quote as part of the Voight-Kampff test in the original Blade Runner. The test is used to pick out artificial humans (aka 'replicants') by probing for unexpected emotional responses. As the movie progresses, it becomes clear that even this test struggles to identify some advanced replicants.

I'm going to argue here that AI will fatally undermine the integrity of scholarly publishing, and a great many other things besides. There are three reasons why scholarly publishing is particularly vulnerable.

First, as with Blade Runner's replicants, it will soon become almost impossible to distinguish the products of artificial intelligence from products made by humans. Unscrupulous researchers will be able to conjure up convincing research articles without the trouble of picking up a pipette.

I sense that some of you have a spark of hope that new tools or better screening can detect these faked articles. Humans are relatively good at spotting AI-generated pictures of people and things because our ancestors spent millions of years 'learning' to spot uncanny faces or strange shadows, but we have no such evolutionary history with scientific text or datasets. We must therefore rely on technology.

An image has a multitude of elements that must all be exactly right for it to pass as real. Text is simpler by several orders of magnitude, and hence there is far less for AI to get wrong. Even if we do find some bug that's a giveaway that an article was made by AI, that bug will be fixed in the next version. The arms race of using technology to spot fake research texts is a race we have already lost.

Second, the task of spotting AI-generated articles will fall to the Editorial Office. Yes, the perennially under-resourced, under-trained, and under-staffed Editorial Office. If you have faith that the Editorial Office has the tools and ability to weed out artificial articles, you may want to reflect that ORCID is now ten years old and most journals still don't require that all authors have one. If we as an industry are so delinquent in implementing basic author identifiers, what chance do we have of consistently detecting research faked by sophisticated AIs?

Another route to detecting articles generated by artificial intelligence is to require that authors provide the datasets and code objects that underpin the conclusions presented in their article. Faking these outputs individually imposes a significant extra burden on authors, but having them all be sufficiently interoperable to generate the results in the article is very challenging indeed.

If – and only if – scholarly journals insist that authors provide their datasets and code during the peer review process, and then test the reproducibility of the authors' analyses, can we expect to weed out faked research.

Will this approach work long term? Given the breakneck pace of AI development, it seems certain that the capacity to fake datasets and functional code to go along with a fake article is not far off. Open science buys us time to develop new approaches, but by itself it will not save us from the corrosive effects of AI-generated fake research.
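The reproducibility check suggested above – a journal reruns the authors' analyses on their submitted data and compares the recomputed statistics against those reported in the manuscript – could be sketched roughly as follows. This is purely illustrative: the function name, the statistics, and the tolerance are hypothetical, not part of any journal's actual workflow.

```python
def matches_reported(recomputed: dict, reported: dict, tol: float = 1e-6) -> bool:
    """Return True if every statistic recomputed from the authors' data and
    code agrees with the value reported in the article, within tolerance.

    Hypothetical example of a journal-side reproducibility check: the keys
    are statistic names (e.g. "mean_effect"), the values are floats.
    """
    # A missing or extra statistic is itself a red flag.
    if recomputed.keys() != reported.keys():
        return False
    # Allow tiny floating-point differences between runs.
    return all(abs(recomputed[k] - reported[k]) <= tol for k in reported)


# Hypothetical manuscript values versus values from rerunning the analysis.
reported = {"mean_effect": 0.42, "p_value": 0.031}
recomputed = {"mean_effect": 0.4200000001, "p_value": 0.031}
print(matches_reported(recomputed, reported))  # prints: True
```

A real check would also need to rerun the authors' code in a controlled environment and handle legitimately stochastic analyses, which is exactly why interoperable data and code are hard to fake convincingly.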