
ChatGPT: Limitations & Concerns | Sophie Shi

ChatGPT has made headlines since it was released last year. Created by OpenAI, a San Francisco-based startup, the improved language model is designed to generate conversational responses to prompts from users. This revolutionary chatbot has been used for almost everything, from writing short poems and stories to crafting fun dating profiles and personal bios. Perhaps some of the software’s most promising aspects include its ability to adjust later answers based on previous questions and to draw on the vast body of text it was trained on in order to provide specific, factual responses. Currently, the chatbot averages around 13 million unique visitors a day, an impressive figure that may keep trending upward, especially after Microsoft’s recent $10 billion investment in the AI bot.


Of course, like most other smart applications, users ought to exercise caution when using the program. ChatGPT is not a finished product and is continuously being worked on and improved. Additionally, there are many concerns over the dangers associated with this new tool. For example, many were shocked when the language model was able to pass a final exam from the University of Pennsylvania’s Wharton School MBA program. With the ongoing rise of the bot, let’s take a look at some of the concerns regarding ChatGPT:


Response Limitations

As emphasized by OpenAI itself, ChatGPT does not guarantee a perfect response to every user input. At times, the AI model will directly inform users that it cannot give a proper answer, responding along the lines of “I’m sorry, but I do not have the information you are requesting at this time.” Prompts that require personal emotions or knowledge the program cannot access are often met with such a response. There is also bias baked into the system, since the bot was trained on writing and knowledge produced by people all over the world. As a result, no matter how authoritative or true a response may seem, there remains the underlying possibility that it includes opinionated information. Furthermore, the program is sensitive to phrasing, so small tweaks to a prompt may yield different responses.


Threat to Search Engines

Upon ChatGPT’s release, Google’s management team reportedly declared a “code red.” The company has been at the forefront of the search engine business since 2000, known for its vast index of billions of web pages. With ChatGPT’s ability to offer specific responses to questions, however, users may begin to turn to the bot instead of scrolling through the different websites pulled up by Google. Rumors of Microsoft adding this AI model to its Bing search engine, Google’s rival, have only deepened the controversy, and many are now awaiting Google’s next steps.


Legal and Ethical Concerns

There are numerous concerns over whether this new tool may damage the reputations of certain brands or harm ordinary users. Legally, ChatGPT risks infringing intellectual property (IP) rights: if the model has been trained on copyrighted works, users may unknowingly end up plagiarizing and find themselves facing lawsuits or other serious legal action. Ethically, the bot raises a series of new questions about how work generated by the application should be shared with the world and how the bot should be properly credited.


Educational Woes

As seen with the aforementioned Wharton MBA exam, ChatGPT is extremely capable of producing decent responses to academic prompts and test questions. Educators and parents have expressed worry over this new way of “cheating the system,” in which students avoid completing written assignments by delegating them to the language model, eliminating a crucial step in the learning process. In fact, concern has grown so great that some schools have already blocked the website and refuse to let students access it. Princeton University student Edward Tian has already begun to tackle this issue by creating GPTZero, a program that works to detect text written by AI models based on the evenness and complexity of its sentences. While a worthy effort, the software has yet to be fully developed and is not an entirely reliable way of combating the problems associated with the use of ChatGPT.
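
For readers curious what judging “evenness and complexity” might look like in practice, here is a minimal, hypothetical sketch of a perplexity-and-burstiness style check. This is not GPTZero’s actual code; it assumes Python with the Hugging Face transformers library and the public GPT-2 model, and the function names are illustrative only.

# Hypothetical sketch of a perplexity/burstiness-style check (not GPTZero's code).
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    # Perplexity of one sentence under GPT-2: lower means "more predictable" text.
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

def burstiness(perplexities):
    # Standard deviation of per-sentence perplexity; human writing tends to vary more.
    mean = sum(perplexities) / len(perplexities)
    return math.sqrt(sum((p - mean) ** 2 for p in perplexities) / len(perplexities))

text = ("ChatGPT has made headlines since it was released last year. "
        "It was created by OpenAI, a San Francisco-based startup.")
sentences = [s.strip() for s in text.split(". ") if s.strip()]
scores = [sentence_perplexity(s) for s in sentences]
print("per-sentence perplexity:", [round(s, 1) for s in scores])
print("burstiness (std. dev.):", round(burstiness(scores), 1))

The rough intuition behind such detectors is that machine-generated text often has uniformly low perplexity, while human writing tends to swing between simple and complex sentences.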


Despite these limitations, however, there is no doubt that ChatGPT is a huge step forward for AI technology, and exploration in this exciting field should continue. Rather than be completely disheartened by the potential consequences, people should, for now, simply heed the warning of OpenAI’s Chief Executive Sam Altman against “relying on [ChatGPT] for anything important right now… [since] we have lots of work to do on robustness and truthfulness.”


