Dilemma Artificial Intelligence poses in academia, workplace
A photo taken on October 4, 2023 in Manta, near Turin, shows a smartphone and a laptop displaying the logos of the artificial intelligence research laboratory OpenAI and its ChatGPT chatbot.
It has been one year since generative pre-trained transformer large language model (LLM) artificial intelligence came into widespread use. The drama surrounding LLM AI leader OpenAI, owner of ChatGPT, the most used site, and the firing, resignations, and re-hiring within its executive leadership team gives us an opportunity to examine how the technology has proliferated in our lives.
Initial use of LLM AI alarmed academics and raised fears that students who used artificial intelligence would gain an unfair advantage. But much as every student today uses a calculator for mathematical computations, the now near-unanimous use of LLM AI by university students eliminates the argument of unfair advantage. Instead, it raises questions about whether real learning outcomes and original output take place.
However, academics still try to stamp out LLM AI use through detection tools. Researchers Debora Weber-Wulff, Alla Anohina-Naumeca, Sonja Bjelobaba, Tomáš Foltýnek, Jean Guerrero-Dib, Olumide Popoola, Petr Šigut, and Lorna Waddington investigated the 14 leading artificial intelligence detection tools available in the market.
They found dismal results: instead of determining whether material was written by LLM AI, the tools merely estimated the probability that a piece of work was human written. Marc Watkins quotes detection tool sources as stating that certain methodologies correctly detect AI use only 26 percent of the time.
This author ran the most famous AI plagiarism checkers on articles and policy manuals posted on Harvard University’s website before the advent of large language model artificial intelligence.
The AI plagiarism checkers proved incorrect a staggering number of times. It makes one wonder where the boundary of effective checking lies, whether to disclose LLM AI usage, and what to use it for.
This author asked LLM AI on three different platforms to suggest the newspaper article format that would most interest readers. I then slowly tried implementing different versions of those formats in my previous eight articles in the Business Daily.
However, I find the articles less interesting, simpler, and less vibrant when sticking to LLM AI’s paragraph-by-paragraph designs. Yet readers in my focus groups reported an increase in the readability and relatability of the material I wrote. That made for a humbling result: LLM AI can boost even professional authors.
While opinions are coalescing around the disclosure of AI authorship, what about when students or entrepreneurs use LLM AI to brainstorm? What about incorporating AI for grammar and spelling checks? And what should an author disclose when utilising LLM AI for formatting, translating, or restating an original work in the past, present, or future tense?
And should medical doctors who investigate patient health with AI disclose that to the patient? One of the gold standards for academic citations, the American Psychological Association, maintains an entire guideline, written by Timothy McAdoo, on how to cite usage of LLM AI.
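As an illustration, that guidance generally treats the tool’s developer as the author, yielding a reference along the lines of: OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat, cited in the text as (OpenAI, 2023), though readers should consult the APA guideline itself for the exact wording required.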
Yet unfortunately, researcher Ethan Mollick has reported a widely noted decline in ChatGPT quality. Even anecdotally, one can see a uniformity of output and a prospective deterioration in quality across LLM AI platforms, from Google’s Bard to Claude AI to Chatsonic, among others.
Is the technology becoming overburdened by extensive use? Is the generative pre-trained transformer large language model artificial intelligence starting to train on itself instead of on original sources, as it did in the beginning? How should one quote AI that trained on AI?
Now, let us give ChatGPT the final thought on the above reflections with this AI generated conclusion: “Reflecting on the past year since large language models (LLMs) like GPT entered the AI scene, we’ve seen both highs and lows.
Initially, there were doubts about using LLM AI in schools, but now, there’s more talk about how it might help with learning and being creative. Figuring out if content comes from AI or a person is tough, and it makes us rethink what it means to be an original author. Despite concerns over quality and sameness, LLM AI still plays a big role in changing how we write, do research, and learn.
The way we use AI is changing—from brainstorming help to aiding in academic referencing—and it keeps us questioning how much we should rely on technology versus our own ideas. As we explore this new area, the place of LLM AI in our studies and creativity stays an important and active topic, requiring our ongoing attention and careful thought.”
Ironically, AI detection tools only minimally recognise the direct use of LLM GPT in the above paragraph.
Have a management or leadership issue, question, or challenge? Reach out to Dr Scott through @ScottProfessor on Twitter or on email at [email protected].