
“There is no doubt NLG could have a profound impact” – The AI ONE ON ONE interview


Anyone interested in how AI will affect the world should have a look at the LinkedIn-based blog/newsletter AI ONE ON ONE, run by the two academic researchers Aleksandra Przegalinska and Tamilla Triantoro. The purpose of their blog is to discuss the advantages and limitations of current AI systems and explore the possibilities of using AI for good. RWR got in touch for an interview, focusing on NLG in general and the advent of ChatGPT in particular.

Behind AI ONE ON ONE are two colleagues based on opposite sides of the Atlantic Ocean. Aleksandra Przegalinska is a professor and Vice-President of Kozminski University in Poland, and a Senior Research Associate at the Harvard Labor and Worklife Program. Tamilla Triantoro is a professor at Quinnipiac University in the United States, and the leader of its Master's in Business Analytics program.
“The blog has been a good run so far,” says Aleksandra Przegalinska. “In the summer of last year, we started experimenting with different features of GPT-3, and then, in December, ChatGPT took the world by storm, making these conversations even more relevant.”

Hi, Aleksandra and Tamilla! What motivated you to start this blog?
“We both have a solid background in human-computer interaction research. One of the interesting things about working with new cutting-edge technologies is observing how humans interact with them, and what improvements can be made. This is the first time that natural language generation has been democratized. As more people get acquainted with this technology, it becomes more relevant to explore the possibilities. Through our blog we explore the creative applications, and in our research, we focus on the applications for business, and the effect of generative AI on routine and creative processes at the workplace.”

Let’s talk NLG in general and GPT-3 in particular! You recently noted that there is “incredible hype surrounding [ChatGPT] these days”. What do you think spurred this hype?

“The hype around the GPT family of models, particularly ChatGPT, is largely due to the impressive capabilities it has demonstrated in natural language processing tasks. The key strength of ChatGPT is its ability to generate highly coherent and human-like text in the form of a conversation. Unlike the previous version of GPT, ChatGPT requires minimal fine-tuning, and we observed this during our experiments. The newer technology, GPT-3.5, which powers ChatGPT, allows for a more seamless interaction, and that is truly captivating.”

“The hype is also related to the excitement regarding the transformers’ remarkable ability to perform creative tasks – such as storytelling, generating interview questions and article titles, and writing programming code. Transformers such as ChatGPT perform tasks that were once considered impossible for machines to accomplish. This allows for more innovation and creativity while making it all accessible not only to industry giants but also to small businesses and individuals.”

Have you used GPT-3 for anything other than studying how it works and experimenting with its features?

“We both have an interest in the future of work. AI influences the way work will be performed in the future, and an interesting question is under what circumstances humans will collaborate with AI. Currently we study the collaboration of humans and AI through the lens of individual acceptance. We created an application for business use that integrates GPT-3 technology. The application assists knowledge workers in performing tasks of varying difficulty and levels of creativity. We measure the levels of collaboration and attitudes towards AI, and devise a program for improving human-AI interaction and collaboration at the workplace.”

In what situations do you think GPT-3 can be useful for professional groups such as researchers, investigative journalists, and business people?

“GPT-3 certainly has the potential to be a useful tool for a wide range of professional groups due to its ability to generate human-like text and understand natural language input. Take researchers, for instance: GPT-3 can be used to assist with literature reviews and data analysis. It could help generate summaries of scientific papers and articles, assist with data analysis and visualization, and even support the writing of research proposals and papers. Another example is investigative journalists: GPT-3 could assist with fact-checking and research by quickly providing additional information on people, organizations, or events mentioned in articles. It could also be used to generate summaries of complex documents such as legal filings, and help analyze and organize large sets of data.”

“Obviously, the biggest impact will be seen for businesspeople: here GPT-3 can be used for automation of repetitive and low-value tasks such as writing reports, composing emails, and creating presentations. Additionally, it could also assist with market research and trend analysis, as well as generate strategic plans, and help identify new business opportunities.”

Can you see any ethical problems arising – short-term and long-term?

“Well, it is important to keep in mind that GPT-3 is a machine learning model and the results it generates still need to be verified, but it can save time for professional groups by automating time-consuming, repetitive and low-value tasks, supporting with research and analysis, and generating new insights and ideas. We need to understand that we humans are the real fact-checkers here.”

“Other issues include the following:

  • bias – because language models like GPT-3 are trained on a large dataset of text, which can perpetuate and amplify societal biases that are present in the data;
  • privacy – because the use of large language models for natural language processing can raise privacy concerns, particularly when the models are used to process sensitive or personal information;
  • explainability – because language models like GPT-3 are complex systems that are difficult to understand and interpret, which can be a problem in situations where it is important to know how a model arrived at a certain decision or conclusion.”

Do you think that upcoming generations of NLG technology (GPT-5, -6, -7 …) may attain better results than humans in certain situations?

“It is possible that future generations of natural language generation technology may attain better results than humans in certain situations. The capabilities of language models like GPT-3 have already surpassed human-level performance in tasks such as language translation and text summarization. As the technology continues to improve and more data is used to train these models, it is likely that they will perform even better on these and other tasks.”

“For example, GPT-5 or GPT-6 models might be able to generate news articles or summaries, compose a novel or a poem, produce persuasive fake news or deepfake videos, or even impersonate a person on the internet. These are all areas where humans are currently capable, but where natural language generation technology may outmatch them in the future.”

“However, it’s worth noting once again that such models have significant limitations. They can only perform as well as the data and the algorithms they have been trained on, so if the training data is biased or inaccurate, the model will be too. Additionally, language models currently lack common sense, self-awareness, and creativity. So, while they may be able to perform certain tasks better than humans, they may not be able to understand context.”

How will humanity’s relation with the written word be affected by NLG technology? Is it a bigger disruption than Gutenberg’s printing technology?

“Natural Language Generation (NLG) technology has the potential to significantly change the way people interact with written text. With NLG, machines can automatically generate written content in a way that mimics human writing. This means that people may rely more on computer-generated text for tasks such as writing reports, news articles, and other types of written content.”

“It’s hard to say whether NLG will be a bigger disruption than Gutenberg’s printing press, as it depends on how widely the technology is adopted and how it’s used. Currently it’s a bit of a social experiment in mainstreaming AI; we will see where it takes us. That said, the printing press allowed for the mass production of books, making written information more widely available and helping to spread knowledge and ideas. In a similar way, NLG has the potential to increase the efficiency and speed of producing written content, but it’s still in its early stages and it’s hard to predict how it will evolve over time.”

“There is no doubt though that NLG could also have a profound impact on how people write and consume written information, as it may change the way we think and communicate, allowing us to express ourselves in new ways, and could have implications on areas such as employment, education, and language.”

Björn Lundberg, historian, about ChatGPT: “In philosophical terms, the implications are huge”


Just some days after the launch of ChatGPT, Björn Lundberg, a historian at Lund University, wrote an analysis that was widely shared on social media in Sweden. Primarily, Björn was concerned about the challenge that university teachers now face when it comes to written examinations and papers. But, as a historian, he naturally sees the wider implications. “It’s a revolution, and it happens now,” he writes. 

“I have seen what AI can do, and it’s time to be afraid”—Björn Lundberg’s blog post was widely shared on social media in Sweden.


Björn Lundberg is a researcher and teacher at the Department of History, Lund University, mainly interested in the modern era. Before returning to the university in 2012 to begin his doctoral studies, he worked for a few years as a journalist for the popular history magazine Allt om Historia. Since then, he has wanted to combine his academic interests with writing for a wider audience.
“I am a curious person, and I tend to find almost any topic fascinating once I delve a little deeper into it – from the philosophy of history to early modern discovery,” Björn says. “That said, my expertise lies in twentieth-century history. My latest book was a biography of Gunder Hägg, the world’s most successful middle-distance runner during the 1940s and the first true sports star in Sweden. My next book will be about the space race during the Cold War.”


Tell us a little about your blog – purpose, themes, audience!

“I actually pursue two parallel blog projects at the moment. One is a Substack in which I try to make sense of the development of political affairs in Sweden by describing some underlying currents in modern history. The other is a blog about historical writing (broadly construed), in which I share my thoughts on the subject. The idea is to describe elements of style that can enhance academic writing as well as popular narratives: chronology, dialogue, setting, and so forth. My ambition is to make visible the strategies and decisions involved in the writing of history, whether it’s intended for scholarly publication or a local heritage organization.”

What were your first impressions when you started experimenting with ChatGPT? How come you say that it is “time to be afraid”?
“To be honest, my first impression was “Wow!” Of course, I realized that it has obvious limitations. But its sheer confidence and its ability to address abstract problems in a clear style impressed me. Sure, I’ve seen AI applications do impressive things before, but this was something different. At least in terms of writing. That’s also the reason I said it’s time to be afraid. Mainly because I wanted to send out a wake-up call to fellow university teachers. I think higher education in Sweden, in general, is ill-prepared for this challenge. We need to think about our teaching methods, and how we write exams.”

What is your reflection as a historian when it comes to NLG technology?
“In philosophical terms, the implications are huge. This isn’t new, of course. We’ve talked about intelligent computers for more than 50 years. But in the last few years, we’ve seen progress that is truly impressive. It’s not science fiction anymore, just science. First of all, this goes to the very heart of what it means to be human. Carl Linnaeus gave us the name Homo sapiens, ‘wise man’. For centuries, humans have defined their role in the world in terms of intelligence, the capacity for self-reflection, and so on. What happens when we manage to create intelligence that surpasses us in terms of intellectual capacity? Will we pat ourselves on the back for this achievement, or rather redefine what it means to be human? Will our uniqueness instead be our capacity for complex emotions?”

Do you see any similarities with other technical breakthroughs? Does this one stand out in any particular way?
“Yes, there are obvious similarities to the industrial revolution and the fear of mechanization. In narrow terms, the Luddites feared unemployment, but industrial societies have dealt with the social and cultural effects of industrialization – that is, modernity – for two centuries. One obvious difference is that the Luddites feared that mechanization would replace manual labor. Now we fear that AI will replace intellectual labor, leaving only manual labor for us.”

How do you think AI in general and NLG, in particular, will shape the course of history and the human condition?
“The key question is whether AI will be used to address key global challenges or serve primarily as an instrument to improve economic productivity. Our inability to address global warming has made me rather cynical about this at the moment. But there is of course also great potential. At least in theory, we can hope to use technology to improve the quality of life. Maybe the six-hour working week is just around the corner? On the other hand, if we ever create artificial intelligence with a will-to-live or a will-to-power, we are likely to be in deep trouble.”