Introducing ChatGPT: definition, limitations, concerns and governance

Just a couple of months after the release of ChatGPT, many universities have begun to prohibit students from using it, some are encouraging students to use ChatGPT to think more critically, and teachers at some universities are already using ChatGPT to write questions. What exactly is ChatGPT, that it instantly became the talk of the dinner table, the coffee shop, and even top universities? I am sure my readers have used, or at least heard of, ChatGPT. Because of its unique convenience and speed of response, ChatGPT has been labelled as artificial intelligence.

In 2023, which netizens have dubbed the year of technological explosion, the development of artificial intelligence seems to have made a qualitative leap overnight. At the beginning of the year, these novel technologies gushed out like a fountain, letting people generate animations, videos, music, articles, code and more from nothing but a text description.

Even as we cheer the convenience that artificial intelligence brings, we also raise concerns: does artificial intelligence blur the line between the public and the professional? Will artificial intelligence replace some jobs and cause widespread unemployment? These questions are a good start, because artificial intelligence is indeed not just a handy tool for human beings; it is more powerful and complex than that.

This article will explain the definitions of AI and ChatGPT, the problems they bring, and how we can govern them.

What is AI?

To understand ChatGPT, we need to understand AI first.

There are many different views on the definition of AI. In past academic research, artificial intelligence has often been described as possessing knowledge far beyond human intelligence; it is considered rational, and the decisions it makes are usually described as optimal for a given situation (Russell & Norvig, 2021, p. 34).

Image 1: AI encounters humans.

Others suggest that “AI is neither artificial nor intelligent” (Crawford, 2021, p. 8). Data is the fuel that powers AI, and data is gathered from a variety of sources without the knowledge – and most definitely without the conscious involvement – of the people who offer it. 

These arguments still seem controversial. But the conclusion we as readers can draw is that “standard” life-changing AI is just an idea at the moment, a concept that current technology does not come close to. 

There is currently no uniform definition of AI in the academic community: some scholars define AI in highly idealistic terms, while others take a more critical view of what the term means.

The AI we’re talking about in our lives right now is not standard AI, because it cannot radically change human life or provide the optimal solution in any given situation.

Mainstream society generally sees artificial intelligence as a program with great development potential, one that could evolve an independent thinking system and even emotions.

Image 2: AI with emotions.

Thus, even though the systems we are familiar with in reality cannot be called standard AI, because of the quality and convenience they bring to our lives at this stage, they are still labelled and collectively referred to as AI by the general public.

Is ChatGPT AI?

Well, based on its own description and on public opinion, it is. Contrary to common perception, though, systems such as Siri and self-driving cars, while usually called artificial intelligence, are really applications of machine learning, and so is ChatGPT. When you use ChatGPT, it does not exercise independent thinking; instead, a large language model trained on enormous amounts of text predicts, word by word, the most plausible response to your question (Gewirtz, 2023). On the other hand, because ChatGPT does improve our lives in many ways, it has been labelled as AI. Strictly speaking, however, ChatGPT is machine learning rather than AI.
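To see the difference between "looking information up" and "predicting text", here is a minimal sketch using GPT-2, an older, openly available relative of ChatGPT's underlying model, via Hugging Face's transformers library (the model choice is an illustrative assumption: ChatGPT's own model is not public, but it generates text on the same next-word-prediction principle):

```python
# Minimal next-word-prediction demo with the open GPT-2 model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Nothing is retrieved from a database: the model repeatedly predicts the
# most plausible next word given everything generated so far.
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```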

Image 3: Limitations of ChatGPT.

Limitations of ChatGPT:

The AI we are currently using, whether the large language model (LLM) behind ChatGPT or the diffusion models behind AI painting, is essentially trained for a specific situation: a huge amount of data is poured in, and the model learns the patterns and rules of that data (Li, 2023).

They have data-processing capabilities beyond the reach of human beings, but their scope and upper limit are inherently “locked”: they can only complete the specific tasks they were trained for, and no “subjectivity” can spontaneously emerge from them.
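To make “learning the patterns and rules of the data” concrete, here is a deliberately tiny, self-contained sketch: a toy bigram model, invented purely for illustration and nothing like a real LLM in scale, which counts which word follows which in its training text and can therefore only reproduce patterns it has seen:

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in the training text.
training_text = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(list)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word].append(next_word)

# Generate by repeatedly sampling a plausible next word.
word = "the"
output = [word]
for _ in range(6):
    choices = follows.get(word)
    if not choices:               # no observed successor: the model is stuck
        break
    word = random.choice(choices)
    output.append(word)

print(" ".join(output))           # e.g. "the dog sat on the mat the"
```

Everything the toy model can ever say is locked inside its training data, which is exactly the sense in which the scope of today's models is “locked”.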

Therefore, I call them “tool AI”. At least so far, they are not true artificial general intelligence, that is, artificial intelligence that can learn and think spontaneously, truly understand a situation, and solve the general problems humans face.

So, artificial intelligence is not currently capable of performing creative work, and the public's perception that it can is mistaken. For example, when AI paints, it does not think and create; it analyses the human language instruction algorithmically and then, starting from random noise, iteratively refines that noise into an image matching the visual patterns it absorbed from its training data. It recombines what it has learned rather than creating from intent.
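A heavily simplified sketch of that iterative denoising process (illustrative only: a real diffusion model learns its denoising step from millions of captioned images, whereas here the “learned pattern” is hard-coded):

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.linspace(0.0, 1.0, 16)   # stand-in for a visual pattern learned from data
x = rng.normal(size=16)              # generation starts from pure random noise

for step in range(50):
    predicted_noise = x - target     # a real model *learns* to predict this from data
    x = x - 0.1 * predicted_noise    # remove a little of the predicted noise each step

print(np.round(x, 2))                # the noise has been steered toward the learned pattern
```

The image emerges by steering noise toward learned statistics, not by an act of intent.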

Concerns about AI and ChatGPT:

The development of AI has long been a topic of public and academic concern. The two biggest worries people currently have about AI are undoubtedly its impact on employment and its impact on academic integrity.

Humanity has worried about AI as a threat to employment for a long time, and reality is indeed shaping up much as people feared.

Image 4: AI causes unemployment.

AI programs such as Midjourney and Stable Diffusion have been tuned and controlled to generate near-photorealistic AI paintings and even simulated photographs. ChatGPT can talk to users in natural language, answer questions, polish texts, summarise information, and write drafts.

Microsoft’s New Bing and Google’s Bard add web-search capabilities, combining real-time data with predictive models to aggregate information and tell us the answer directly (Li, 2023). Because AI blurs the distinction between ordinary people and professionals, many positions that once required specialised skills will likely face a wave of unemployment.

AI already poses serious challenges to academic integrity. As an artificial intelligence chatbot, ChatGPT can quickly generate almost any answer you need, so some students have started using it to write papers, look up exam answers, and so on.

Image 5: Students use laptops in the classroom.

Teachers are concerned that because ChatGPT is simply a text-generating machine that can only produce a passable replica of what is being communicated, it cannot comprehend what is being presented or take the time to evaluate the relevancy or accuracy of the content (Rudolph et al., 2023).

“In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who only caught 63% of these fakes” (Thorp, 2023). This suggests that a great deal of AI-generated text will flow into the literature, eroding trust in academia.

AI depends on data, algorithms and hardware (Assen et al., 2020). AI systems gather massive volumes of data; using machine learning techniques, their algorithms learn to identify relationships between these data points and then apply that learned logic to every new piece of data they encounter (Assen et al., 2020). Problems with the algorithms therefore become problems with the AI itself.
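A minimal sketch of that learn-then-apply loop, using scikit-learn on a made-up toy dataset (both the data and the task are invented for illustration):

```python
# Learn-then-apply in miniature with scikit-learn (toy data, purely illustrative).
# Requires: pip install scikit-learn
from sklearn.linear_model import LogisticRegression

# Historical data points: [hours_studied, classes_attended] -> passed the exam?
X_train = [[1, 2], [2, 1], [8, 9], [9, 8], [3, 3], [7, 10]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)              # the algorithm identifies relationships in the data

print(model.predict([[2, 2], [8, 8]]))   # ...and replicates that logic on new data: [0 1]
```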

A serious problem with algorithms is racial and gender bias, which is hard to avoid because algorithms are built on human data. Humans have held prejudices against other races and against women since ancient times, so algorithms trained on human data inherit those prejudices.
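A toy demonstration of how this happens (the records below are invented purely to make the point; no real dataset is being cited): if historical decisions were skewed, a model trained on them simply reproduces the skew.

```python
# A model trained on skewed historical data reproduces the skew (invented records).
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, gender] where gender is encoded 0 or 1.
# In this fake history, equally experienced candidates with gender=1 were rejected.
X_train = [[5, 0], [5, 1], [6, 0], [6, 1], [7, 0], [7, 1]]
y_train = [1, 0, 1, 0, 1, 0]   # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_train, y_train)

# Two candidates identical in every respect except gender:
print(model.predict([[6, 0], [6, 1]]))   # [1 0]: the bias in the data is now in the model
```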

Image 6: An influencer complains about the bias of ChatGPT on Twitter (Chowdhury, 2023).

Since white, male, and college-educated Americans make up the majority of the people building AI systems, any racial and gender biases they carry may be reflected in AI (Chowdhury, 2023). Sam Altman, the CEO of OpenAI, has acknowledged that ChatGPT has “bias problems,” though he did not elaborate (Chowdhury, 2023). In practice, this most likely means that its underlying model has absorbed patterns that occasionally produce racist, sexist, or otherwise biased responses.

For instance, The Intercept asked ChatGPT which airline passengers would pose a greater security risk (Chowdhury, 2023). According to reports, the bot spat out a formula that assigned higher risk to passengers who came from, or had merely visited, North Korea, Syria, Iraq, or Afghanistan (Chowdhury, 2023).

Governance of AI:

Now that AI is causing so many concerns, how can humans regulate AI? As AI continues to advance, there is a growing need to ensure that it is developed and used in a way that is safe, ethical, and beneficial to society. Approaches to regulating AI have been explored since its development. 

These approaches include:

  • Trustworthy AI must be governed and subject to regulation at every stage of its development, from ideation to design, development, deployment, and machine learning operations (MLOps). It must be anchored in the six dimensions of the framework for trustworthy AI developed by Deloitte, including transparency and explainability, fairness and impartiality, and robustness and reliability (European Commission, 2019).
  • The FAT (Fairness, Accountability, and Transparency) framework highlights the necessity of making sure AI systems are impartial and non-discriminatory, accountable for their decisions and actions, and transparent in how they function (Shin & Park, 2019).
  • Explainable AI (XAI) aims to increase the transparency and interpretability of AI systems so that humans can understand and rely on their judgements and deeds (Barredo Arrieta et al., 2020).
  • Human-in-the-loop (HITL) is an approach that incorporates human oversight and control into AI systems so that humans can intervene and correct errors or biases in the system’s decisions (Enarsson et al., 2022); a minimal sketch of this idea follows this list.
  • Human-Centered AI (HCAI) aims to develop AI systems that amplify and augment human capabilities rather than replace them. In order to ensure that artificial intelligence fits our needs, operates transparently, produces fair results, and respects privacy, HCAI works to maintain human control (Geyer et al., 2022).
  • The call to limit or ban certain AI technologies is another strict and decisive approach. This method aims to solve the problem at the root, for example by directly banning artificial intelligence services such as ChatGPT or self-driving systems.
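As a concrete illustration of the human-in-the-loop idea mentioned above (a minimal sketch with an invented confidence threshold and placeholder functions, not a description of any real deployment), decisions the model is unsure about are routed to a human reviewer instead of being acted on automatically:

```python
# Minimal human-in-the-loop sketch: low-confidence model decisions are
# escalated to a human reviewer. Threshold and functions are invented.

CONFIDENCE_THRESHOLD = 0.9   # assumed policy: below this, a human decides

def model_predict(item):
    """Stand-in for a real model; returns (label, confidence)."""
    fake_scores = {"clear case": ("approve", 0.97), "edge case": ("approve", 0.62)}
    return fake_scores.get(item, ("reject", 0.50))

def human_review(item):
    """Stand-in for a human reviewer (here, just a placeholder decision)."""
    print(f"  -> escalated to human: {item!r}")
    return "needs discussion"

for item in ["clear case", "edge case"]:
    label, confidence = model_predict(item)
    decision = label if confidence >= CONFIDENCE_THRESHOLD else human_review(item)
    print(f"{item}: {decision}")
```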

We can see that some of these approaches overlap: a trustworthy, explainable AI is one that achieves fairness, accountability and transparency. These methods require developers to carry out more thorough upgrades and supervision of artificial intelligence. Banning certain AI technologies is a more straightforward approach, but its viability is doubtful, because people's demand for automation keeps increasing.

Governance of ChatGPT:

So which of the above methods can be applied to govern ChatGPT? Well, it is clear that banning access to and use of ChatGPT is not a viable option. Even though ChatGPT poses a threat to academic integrity and employment, as well as to racial and gender equality, its emergence is undeniably inevitable, a result of social processes and technological developments.

Nowadays, there is a growing demand for automation and artificial intelligence, and the development of such technologies can somewhat improve our quality of life and make human life more convenient; therefore, we should not avoid the development of technology, but find ways to regulate ChatGPT so that it can develop in accordance with human expectations.

For the governance of ChatGPT, reference can be made to Trustworthy AI, FAT and Explainable AI: developers can improve and upgrade ChatGPT's governance system to meet the standards of fairness, accountability and transparency, and reduce the frequency of bias and errors, making ChatGPT more trustworthy and the content it generates easier for people to understand.

Conclusion:

In this ever-changing society, the introduction of ChatGPT has certainly brought us a great deal of convenience. However, it has also brought many challenges, such as students using it to cheat, the threat of unemployment, and its biases. But instead of questioning and avoiding this technology, people should develop better approaches to AI governance to make it work better for us.

Perhaps someday humans will develop a fully human-centred AI, or perhaps AI will break entirely free of human control. The second future is one we would rather not foresee. Humans should therefore pay more attention to the governance of AI, for the sake of the common future of humanity.

References

Assen, M. V., Banerjee, I., & De Cecco, C. N. (2020). Beyond the artificial intelligence hype: What lies behind the algorithms and what we can achieve. Journal of Thoracic Imaging, 35(3), S3–S10. https://doi.org/10.1097/RTI.0000000000000485

Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012

Chowdhury, H. (2023, February 3). Sam Altman has one big problem to solve before ChatGPT can generate big cash — making it ‘woke’. Business Insider. https://www.businessinsider.com/sam-altmans-chatgpt-has-a-bias-problem-that-could-get-it-canceled-2023-2

Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. https://doi.org/10.2307/j.ctv1ghv45t

Enarsson, T., Enqvist, L., & Naarttijärvi, M. (2022). Approaching the human in the loop – legal perspectives on hybrid human/algorithmic decision-making in three contexts. Information & Communications Technology Law, 31(1), 123–153. https://doi.org/10.1080/13600834.2021.1958860

European Commission. (2019, April 8). Ethics guidelines for trustworthy AI. Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

Geyer, W., Weisz, J., Pinhanez, C. S., & Daly, E. (2022, April 1). What is human-centered AI? IBM. https://research.ibm.com/blog/what-is-human-centered-ai

Gewirtz, D. (2023, March 10). How does ChatGPT work. ZDNET. https://www.zdnet.com/article/how-does-chatgpt-work/

Li, R. (2023, March 27). ChatGPT 会带来失业潮吗 [Will ChatGPT bring a wave of unemployment?]. Zhihu. https://www.zhihu.com/question/582933780/answer/2955537081

Russell, S., & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1). https://journals.sfu.ca/jalt/index.php/jalt/article/view/689

Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. https://doi.org/10.1016/j.chb.2019.04.019

Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879
