ChatGPT: A friend or a foe?

January 12, 2023

ChatGPT is a variation of OpenAI's popular language model GPT-3, tailored specifically for conversational applications. While ChatGPT can produce human-like language and respond to a broad range of inputs, it has real flaws and limitations, and it has sparked ethical disputes over issues like plagiarism.

ChatGPT is a chatbot built on top of OpenAI's GPT-3 family of large language models.

A few months ago, artificial intelligence (AI) showed the world how capable it is at turning text prompts into images, drawing on patterns learned from pictures scraped from across the internet. The results weren't flawless, but they made many artists furious about how their livelihoods could be undercut. Others found the tools genuinely useful and gabbed about how everyone could now become an "artist". Now a similar dispute is arising over ChatGPT, this time over academic fraud.

Are we learning or exploiting? 

While many claim AI is useful because it helps us develop and innovate, generating ideas and advancing technology, others claim it has made us lazier and that it comes at the cost of our jobs. The concerns are especially sharp when coders and software developers use ChatGPT to write code. One significant concern is that ChatGPT is a general-purpose language model, not a tool built or trained specifically for programming. It can misread the intent behind a piece of code, which leads to mistakes and faults in the final program. Another issue is that ChatGPT lacks the generalization capacity and domain-specific knowledge required to tackle complicated coding challenges. It may produce simple snippets in a language like HTML, but it struggles to go beyond broad strokes in other languages, so the genuinely hard parts of any language are still left to humans rather than AI. And even when the code it produces is syntactically accurate, that does not mean the code is useful or efficient. Because the model largely recombines patterns from code it has already seen, leaning on it risks taking us in spirals: in the long run that wouldn't promote innovation, but rather plagiarism.
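As a purely hypothetical illustration (not actual ChatGPT output), consider two Python functions that do the same job. Both are syntactically correct, but the first is the kind of naive pattern a generator might stitch together, while the second is what an experienced programmer would write:

```python
def has_duplicates_naive(items):
    """Compare every pair of elements: O(n^2) time.
    Syntactically valid and correct, but slow on large inputs."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False


def has_duplicates_idiomatic(items):
    """Track seen elements in a set: O(n) time.
    Same result, but efficient and idiomatic Python."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both functions give the same answers, which is exactly the trap: "it runs" is not the same as "it's good code", and spotting the difference still takes a human.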

Not only does this produce weak code, it also deprives new learners of the chance to understand how to write and debug code themselves. Relying on ChatGPT may feel like a game changer for now, but it comes at the cost of true educational development.

Many teachers are also worried about the plagiarism ChatGPT enables. The model does not literally search the internet and lightly paraphrase existing articles; it generates a response to your prompt from statistical patterns learned from a vast training corpus. But much of that corpus is copyrighted work that is never cited, which brings us back to our fellow artists who protested image generators for the same reason. More often than not, AI systems learn from copyrighted articles, images, and papers without permission. And people use these tools not only for personal reasons but for commercial purposes as well, often making money at the expense of others' creations. Although some claim to learn through ChatGPT, in these cases it does more harm than good.

But this can be ethical, right? 

After all is said and done, how we use these tools is in our hands. ChatGPT is a privilege, and it needs to be treated as one. The way we use it makes the difference: instead of asking ChatGPT to write code for you, for example, you could ask it to help debug code you wrote yourself. Some other ways of being ethically responsible when using ChatGPT are:

  1. Understanding the impacts AI has on society: AI has economic and social impacts on a multitude of jobs and households, and it's important to understand what those impacts can be. Generating content for personal use may seem harmless, but using generated content for commercial purposes can shade into plagiarism and fraud.
  2. Being transparent: Let people know when text, code, or images you post aren't your own work; in other words, when they were generated. There's a difference between generating and creating a product.
  3. Understanding biases: AI can reproduce biases around race, gender, and ethnicity, because models trained on the biased text and images of the world wide web produce correspondingly biased outputs.

Not only can we teach students and learners how to use AI properly, but there are already ways to detect it. Edward Tian, a Princeton student, built an app called GPTZero that flags AI-generated text. It measures two statistics, perplexity (how predictable the text is to a language model) and burstiness (how much that predictability varies across sentences), to evaluate whether a text is AI-generated or written by a human. In the end, it's up to us to make sure we treat our resources right without making it troublesome for others.
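GPTZero's actual model is not public, so the following is only a toy sketch of the two signals it reportedly measures. Real detectors score text under a large pretrained language model; here, a unigram model fit to the text itself stands in for perplexity, and variance of sentence lengths stands in for burstiness (human writing tends to mix long and short sentences, while model output is often more uniform):

```python
import math
import re


def unigram_perplexity(text):
    """Toy perplexity: exp of the mean negative log-probability of each
    word under a unigram model fit to the text itself. Illustrates the
    formula only; real detectors use a large pretrained language model."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    log_prob_sum = sum(math.log(counts[w] / total) for w in words)
    return math.exp(-log_prob_sum / total)


def burstiness(text):
    """Crude burstiness proxy: variance of sentence lengths in words.
    Higher variance suggests the uneven rhythm typical of human prose."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    return sum((n - mean) ** 2 for n in lengths) / len(lengths)
```

A detector would compare both numbers against thresholds learned from known human and AI text; this sketch only shows what the two quantities look like as code.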
