
“ChatGPT is a personal research sidekick” – an interview with Animate Your Science

What opportunities and risks does ChatGPT create for improving communication practices in research? The science communicators at Animate Your Science recently presented a practical article with a very optimistic perspective. We reached out to authors Dr Khatora Opperman and Dr Tullio Rossi to hear more.

Based in Adelaide, Australia, Animate Your Science offers animation services and SciComm training programs. With writers such as project manager Khatora Opperman and founder/CEO Tullio Rossi, the blog continuously presents useful know-how, tips, and recommendations for its followers. In February, the article How to use Chat GPT: Opportunities and Risks for Researchers defined ChatGPT as the researchers’ “Personal Research Sidekick.”

Illustration from https://www.animateyour.science/.

 

Hi Khatora and Tullio! What features would you like to see in future updates of ChatGPT? 

Khatora: “As a language model, ChatGPT is already much more powerful than before. However, I think several features could be added to make it even more useful for researchers. Specific training on scientific texts would be beneficial to improve its ability to understand and interpret scientific literature and technical jargon. Some have suggested features that improve ChatGPT’s ability to perform complex tasks like data analysis or literature review. However, I am skeptical about pushing language models in this direction. There is a fine line between augmenting human abilities and replacing them altogether!”

Tullio: “I think that the most significant limitation of ChatGPT today is that, although it is excellent at writing just about anything in a very confident way, it is not afraid to make stuff up. For example, I asked it to list the top papers in my Ph.D. field of research, and it spat out a totally plausible list of article titles complete with authors and journal names – but all of them were made up! Not a single one was real! It makes sense – it’s a language model, not a truth model. So, a feature I’d like to see in the future is the ability for ChatGPT to access the internet and reference its sources, similarly to how Bing Chat does. Accuracy matters, and I am afraid that many people are not fact-checking the model’s outputs enough.”

How have your professional lives changed so far due to ChatGPT’s launch on 30 November 2022?

Khatora: “Conversational AI has been a game-changer for me personally. ChatGPT has become a useful day-to-day tool for brainstorming, accelerating my writing, and finding that word I just can’t think of! Whether it’s a meta description, an email, a social media post, or something else written, ChatGPT’s ability to suggest ideas, complete sentences, refine paragraphs, and offer synonyms has helped save me time and improve the quality of my writing. I also really like ChatGPT’s ability to adapt to my requested writing style, meaning I can focus on my ideas and creativity rather than on perfecting my writing. Overall, ChatGPT has become a valuable communication tool for me, and I’m excited to see how it will continue to evolve in the future”.

Are there any practical issues with ChatGPT that you find more problematic than others?

Tullio: “I’ve just returned from a business conference where somebody gave a presentation entitled “How to Use ChatGPT to Create 10 Blog Posts in Under 60 Mins”. My first reaction was, “WOW! I want to learn this method!”. Then it got me thinking…
Imagine how many people worldwide now use this tool to publish stacks of mediocre articles. It means that in a matter of months, the internet will be flooded with low-quality content with serious accuracy problems. How will search engines be able to differentiate AI-generated content from human-generated content? I think that it will be a real problem, and it might make search engines far less reliable at providing accurate information.

After ruminating on this idea for a couple of days, I concluded that writing high-quality, long-form human-generated content has never been more important. This is what we will stick with as a company at Animate Your Science”.

How do you think ChatGPT will change the intellectual domain of international research in the long run – seen from a more philosophical point of view?

Khatora: “This is a difficult question; it’s hard to predict how the landscape will change in the long run. ChatGPT has the potential to revolutionize the way we approach research; however, I fear that we may become excessively reliant on language models and AI more generally, leading to a loss of independent critical thinking and basic writing skills – much like how Google Maps has made many people reliant on technology for navigation, or how autocorrect has made many forget how to spell. AI can assist researchers in various tasks, but it should not replace the human element that is necessary for generating new ideas, designing experiments, and interpreting results. In the end, scientific research is ultimately a human endeavor and should remain that way.”

Study shows: GPT-3 can—technically—be the author of scientific papers. An interview with Almira Thunström.

A system could technically be considered a co-author—or even the main author!—of a scientific paper! This is the mind-boggling insight presented by human AI researchers Almira Osmanovic Thunström and Steinn Steingrimsson from Gothenburg – with the assistance of their co-author GPT-3.

Article: Does GPT-3 qualify as a co-author of a scientific paper publishable in peer-review journals according to the ICMJE criteria? – A Case Study.

“We tested the system against the ICMJE criteria, and we found that a system could technically be considered a co-author—or even the main author—of a paper,” says Almira Osmanovic Thunström. “The system would have to be the one to come up with the topic, write paragraphs, analyze and critically evaluate its production, and be responsible for its paper – correcting and discovering mistakes.”

Almira and her human co-author Steinn Steingrimsson—both from the University of Gothenburg—noted that the system had no problem fulfilling three out of the four main criteria for co-authorship: “drafting the work or revising it critically for important intellectual content; final approval of the version to be published; and agreement to be accountable for all aspects of the work.” Only when it came to referencing could the system not fulfill the criteria.

Almira Osmanovic Thunström is a project manager and researcher in child and adolescent psychiatry. Currently, she is working mostly with chatbots and virtual reality. Her background, however, is somewhat mixed.
“I am a biologist on paper, an assistant nurse in my heart, and an engineer by trade,” she says. “I get to do quite a lot of fun and exciting things, but at the moment, I am mostly interested in developing digital tools for treating anxiety disorders.”

 
In the summer of 2022, Almira published an article in Scientific American that attracted some attention: We Asked GPT-3 to Write an Academic Paper about Itself—Then We Tried to Get It Published. After exploring automated writing for fun in 2021, she realized that with a little bit of luck and the right settings, she could ask the system to help her with academic work. Fascinated, she started to experiment, and in the process, it became important for her to document her thoughts. Her notes eventually became an op-ed for what is probably the world’s most famous popular science magazine.
“As someone who has been creating chatbots for the past two decades, I’ve been around for many of the twists and turns regarding natural language processing and large language models. I usually approach all these things with a childish sense of exploration and with each new model, software, or hardware I encounter, I want to do something silly with it.”

After realizing that paper-writing text robots pose an interesting philosophical dilemma, Almira noted that in a world where scientists write like robots, we are bound to create robots that write like humans.
“We wanted to raise a debate on transparency and authorship,” she says. “The question of authorship and accountability is a big pink elephant in the room that we academics would rather avoid talking about. After all, that senior researcher on your paper who hasn’t even read it is paying your bills. Any paper where your name is first will keep paying your bills in terms of grant money, which is often dependent on having a steady production rate of papers. With the help of a system like GPT-3, one could produce papers very fast; after all, if your text can be expanded and completed by a system, you can do everything in half the time.”


Hi, Almira. This is truly fascinating! Please tell us how you proceeded when you wrote the study.

“We asked the system to write about itself, mainly because it has so little data on itself (Terminator-plot headache warning: the system is trained on 175TB of text data from a variety of databases up until 2020/2021; however, very little if any data existed on GPT-3, because it didn’t exist at that time – only GPT-2 did). We wanted to test its capability. It was a case of childish curiosity and an interesting thought experiment. During the peer review of the submitted paper, we realized that the main issue reviewers had (remember, this was before ChatGPT and the wave of ChatGPT publications!) was whether the system could fulfill the ICMJE criteria for authorship on a paper. So we did what the reviewers asked of us: checked the system against the criteria.”

 
What was your reaction when you went through the results?
“‘The world should see this!’ was my first reaction. And ‘Oh no!’ was the second. How on Earth do I present this data? What will this mean for myself and others going forward?!”

What should scientists around the world learn from this study?
“Have an open mind, and dare to experiment. Let us write less like robots. Let us not blindly embrace authorship on papers; both humans and systems need to pass the ICMJE criteria. ChatGPT for example did not pass the ICMJE criteria on many different levels. Our paper has a very extensive discussion part that I think a lot of scientists will appreciate.”

 
How do you think the advent of ChatGPT and similar technology will affect science in the long run?
“I’m not so sure they will, or at least I think it will take time. My hope was that we’d rethink how we teach, write, design studies, and collaborate. I’m seeing institutions play out the worst scenarios in their heads about the consequences of these systems, which I think is valid to some point, but not as productive as acknowledging a challenge and finding an innovative way forward. The most excited I’ve seen the universities get is when they discovered Crossplag.”

“In our SciAm article, I said we hoped we hadn’t opened a Pandora’s box. I think in the eyes of academic institutions, we very much did. Unlike with Pandora’s box, however, hope was not left inside. I see individual teachers and academics who are looking at how we can innovate how we teach in the face of the takeover of automated writing—that gives me hope!”

 

Please note: all texts on this blog are produced by me, Olle Bergman, or invited human writers. When robot-written sections are added to serve as examples or to demonstrate a point, this is clearly indicated.