I worked in a developmental biology lab in college and was kind of devastated to learn people aren’t really checking on each other like I had expected/been told since I was a kid.
AI could well be deployed as a filter for where we need to dive deeper on papers, serving as a comprehensive first-level reviewer given the sheer volume of published work out there.
I’m not sure what you’d use as the training set there but am open to ideas. Peer review, as currently deployed, seems to be kinda shitty to be honest.
Google's 540B-parameter language model announced this week is incredibly impressive (deciphering jokes, chains of inference). I don't think it would be too much of a leap to train an AI to read methodology sections, identify the statistical method being used, and judge whether it's questionable given the research question at hand. The training set would probably be a collection of widely considered high-quality papers, along with a reference table mapping statistical methods to generally appropriate applications.
Not saying it's readily doable now, but it could be a good automated way to flag what obviously needs greater scrutiny.
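The "reference table" idea above could be sketched as a simple lookup from study design to generally appropriate tests, with mismatches flagged for human triage. This is purely illustrative: the design labels, test names, and function are hypothetical stand-ins, not a real tool, and a real system would need NLP to extract the design and test from the paper first.

```python
# Hypothetical triage sketch: map study designs to statistical tests
# generally considered appropriate, then flag papers whose reported
# test doesn't match. All names and categories here are made up for
# illustration.

APPROPRIATE_TESTS = {
    "two-group continuous outcome": {"t-test", "Mann-Whitney U"},
    "paired measurements": {"paired t-test", "Wilcoxon signed-rank"},
    "3+ groups continuous outcome": {"ANOVA", "Kruskal-Wallis"},
    "two categorical variables": {"chi-squared", "Fisher's exact"},
}

def flag_paper(design: str, reported_test: str) -> str:
    """Return a triage label for further review by a human expert."""
    expected = APPROPRIATE_TESTS.get(design)
    if expected is None:
        return "UNKNOWN DESIGN: route to expert"
    if reported_test in expected:
        return "OK"
    return f"FLAG: '{reported_test}' unusual for {design}"

# A paired design analyzed with an unpaired two-sample test gets flagged:
print(flag_paper("paired measurements", "t-test"))
```

The point isn't that the table is right; it's that even a crude pass like this could bucket millions of papers so experts only look closely at the flagged ones.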
I am an article reviewer for several journals; I can't imagine a machine getting to my level.
It's not about getting to your level necessarily. It's about being able to deploy it across millions of pages of articles to quickly bucket and assess where there may be misapplied methodologies, etc., FOR FURTHER INVESTIGATION BY EXPERIENCED REVIEWERS.