ChatGPT: The Issues with Artificial Intelligence

With the development of artificial intelligence, everything from virtual assistants and speech recognition to smart homes and the now-commonplace face recognition technology has been affected (Goggin, 2023). However, AI can also produce algorithmic bias, discrimination, and inaccurate decisions that affect individuals, marginalised communities, and society at large. In this situation, effective regulation becomes critical, whether it comes from technology companies, governments, or audiences, and a sensible approach to regulation must be continually explored as the technology evolves. In this blog, I will demonstrate the possible risks of AI in several ways, using ChatGPT as an example to explore avenues for effective regulation.

Why ChatGPT
What is the hottest topic right now? It must be ChatGPT. Bill Gates recently said in an interview that improvements in artificial intelligence are currently the "most important" innovation (Goswami, 2023). As he mentioned in the interview, AI applications like OpenAI's ChatGPT could increase office productivity and streamline the drafting of invoices and letters (Goswami, 2023).

What is ChatGPT? Why do people like to use it? And why has it become the hottest topic?

OpenAI states on its official website (2023) that the dialogue format allows ChatGPT to respond to follow-up inquiries, acknowledge mistakes, challenge incorrect assumptions, and reject inappropriate requests. The startling growth of ChatGPT demonstrates how helpful it can be in assisting with various tasks, as well as the overflowing general interest in human-like machines (Chow, 2023). I asked ChatGPT how to earn money, and it responded with ten suggestions, including getting a job, selling real estate, starting a YouTube channel, and so on. It also explained that if I were a photographer, I could sell photos, and if I were an animal lover, I could look after animals for money. The intriguing aspect of ChatGPT was how much inspiration it offered; it is about more than just responding to inquiries. Ask it what artificial intelligence is, or similar conceptual questions, and it can answer much more quickly than searching Wikipedia or any other website. From the above, ChatGPT seems convenient and exciting, so are there any potential risks with it? The answer is yes. Many issues have arisen over the decades of AI development, and ChatGPT is bound to have flaws. In this case, the regulation of AI is essential.

Many issues have occurred before. In 2015, software developer Jacky Alciné noticed that his Black acquaintances were labelled "gorillas" by Google Photos' picture recognition algorithms, and Google did not address the underlying issue for a long time (Vincent, 2018). By 2016, a chatbot created by Microsoft had gone wild on Twitter, shouting, using racial slurs, and making explosive political claims, after being online for only 24 hours (Wakefield, 2016). In fact, the basis of AI's algorithms is to acquire data and analyse it. According to Megorskaya (2022), algorithms, hardware, and data serve as the three main pillars of AI: systems collect large amounts of data, utilise machine learning techniques to teach algorithms to identify relationships in that data, and then apply those techniques to every new piece of data they encounter. The algorithm is formed by collecting and analysing data and learning from samples. In this case, governance addressing algorithmic bias, ethical issues, and the social responsibility of AI must be a concern.
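The collect-data, learn-relationships, apply-to-new-inputs pipeline described above can be sketched with a toy frequency-counting "classifier". This is a deliberately minimal illustration, not any real system: the group names, labels, and sample counts are all invented, and the "learning" is nothing more than counting. It still shows the mechanism behind incidents like the ones above: when one group is barely or badly represented in the training data, the model's predictions for that group are driven by the skew, not by reality.

```python
from collections import Counter

def train(samples):
    """'Learn' by counting label frequencies per feature value.

    samples: list of (feature, label) pairs, e.g. ("group_a", "approve").
    Returns a model mapping each feature to its majority label.
    """
    by_feature = {}
    for feature, label in samples:
        by_feature.setdefault(feature, Counter())[label] += 1
    return {f: counts.most_common(1)[0][0] for f, counts in by_feature.items()}

def predict(model, feature, default="reject"):
    # Groups never seen during training fall back to a blanket default.
    return model.get(feature, default)

# Hypothetical, skewed training set: group_b appears only 5 times,
# and all of its few samples happen to carry a negative label.
training_data = [("group_a", "approve")] * 95 + [("group_b", "reject")] * 5
model = train(training_data)

print(predict(model, "group_a"))  # approve
print(predict(model, "group_b"))  # reject — the skewed sample decides the outcome
print(predict(model, "group_c"))  # reject — an unseen group just gets the default
```

Real systems are vastly more complex, but the failure mode is the same: the model faithfully reproduces whatever imbalance the collected data contains.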

Algorithmic bias

Regarding algorithmic bias, it is worth explaining why it occurs. According to Dhaliwal (2020), algorithmic biases caused by a lack of diversity among those involved in a system's development can result in systems being fed insufficiently diverse data and amplifying pre-existing bias. Machine code is written by humans in the first place, so social factors and creators' preferences can directly shape the code's orientation. If the team writing the code is not diverse, the result may be biased data collection and analysis. Teams in technology firms and institutes therefore need to be conscious of diversity: the risk of algorithmic bias can be minimised by involving diverse individuals in writing the code, analysing the data, and overseeing the whole process.

In addition, more regulation is required for data collection, as the 2016 incident with Microsoft's chatbot on Twitter shows. Unmonitored data collection risks absorbing and reproducing bias, stereotypes, and similar problems. This regulation is not only the responsibility of government departments; it should also be the responsibility of technology companies. For example, OpenAI describes on its official website how ChatGPT handles algorithmic bias: it shares portions of the rules that deal with politics and controversial issues, and states that reviewers should not favour any political group (OpenAI, 2023). Still, OpenAI cannot guarantee that ChatGPT will be free of algorithmic bias. In other words, it cannot be avoided entirely, because algorithmic bias is a societal problem. Technological advancements, diverse teams, and applicable regulations can, however, significantly lessen the hazards.

Finally, algorithmic bias can also lead to big-data-driven differential pricing. The development of algorithms has made such pricing more frequent and, so far, largely unregulated, which should be an economic concern. For instance, if you frequently use a shopping app, you might not get the best deals; if you use the app infrequently, on the other hand, the platform will make you additional offers to keep you. This phenomenon also commonly occurs when booking flights and hotels. Charging different customers different prices is a regular economic phenomenon. However, as data collection has become easier, big-data-driven differential pricing has begun to affect consumers' rights, which involves not only algorithmic bias but also privacy concerns. In this process, regulation becomes especially significant.
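The frequency-based pricing pattern described above can be illustrated with a simple sketch. To be clear, this is a hypothetical rule invented for this post: the thresholds, discounts, and the assumption that visit frequency alone drives the quote are not taken from any real platform.

```python
def quoted_price(base_price, visits_per_month):
    """Toy differential-pricing rule (hypothetical, for illustration only):
    frequent users are treated as 'locked in' and get no discount,
    while infrequent users receive retention offers."""
    if visits_per_month >= 20:       # heavy user: platform withholds deals
        return round(base_price, 2)
    elif visits_per_month >= 5:      # regular user: small incentive
        return round(base_price * 0.95, 2)
    else:                            # rare user: aggressive retention offer
        return round(base_price * 0.80, 2)

print(quoted_price(100.0, 30))  # 100.0 — the loyal customer pays full price
print(quoted_price(100.0, 10))  # 95.0
print(quoted_price(100.0, 2))   # 80.0 — the occasional visitor gets the best deal
```

Even this three-line rule shows why the practice is hard for consumers to detect: each person only ever sees their own quote, so the discrimination is invisible without comparing across users.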

In general, algorithmic bias is unavoidable, and it must be accepted that culture and institutions shape it. As previously stated, algorithmic biases can be caused by a lack of data, underrepresentation, or a lack of diversity (Dhaliwal, 2020). Diversity is the key to overcoming these biases, since it brings different people together to exchange varied facts, opinions, and experiences, leading to more varied inputs being fed into these systems (Dhaliwal, 2020). Overall, algorithmic bias needs to be governed reasonably to reduce the risk.

Ethical issues with artificial intelligence

Artificial intelligence has also faced ethical issues. BBC News reported last month that universities have warned against using ChatGPT for assignments (Holmes, 2023). ChatGPT can be used to complete assignments through a question-and-answer format: in the report, a student asked ten questions, received a 3,500-word response, and used those 3,500 words to finalise the assignment (Holmes, 2023). This behaviour is already considered cheating, and currently the majority of universities prohibit students from using ChatGPT for assignments. As the same BBC report notes, artificial intelligence cannot replace doctors performing surgery, nor does it yet replace manual labour (Holmes, 2023). There is no denying that ChatGPT has made life easier for people. However, it is essential to recognise the ethical risks that AI may pose.

Similarly, one user tried using ChatGPT to draft a phishing email posing as a bank. The request was directly refused, with a notice that such content is prohibited. However, if you phrase it differently and specify that you want to see what a phishing email looks like in order to avoid being duped, it will provide an example right away. This example shows that ChatGPT has methods for dealing with legal and ethical challenges, but those safeguards can still be escaped with a flexible approach: simply rephrasing an inquiry may still raise ethical and legal concerns.

In addition, the answers provided by ChatGPT are not always correct. I asked ChatGPT how long the Australian parent migration visa takes to process. It answered in a general way, drawing on the Australian government website. However, the processing time depends on the migration quota and current policy; it is not a fixed number or the generic answer on the website. From this example, it is clear that the answers ChatGPT delivers need to be more accurate, and that ChatGPT cannot replace human consultation. One of the most serious ethical consequences of ChatGPT is the possibility of spreading fake news, propaganda, and misinformation (Naem & Sagedur, 2023). When audiences lack the judgment to evaluate the answers AI assistants provide, the result can be harmful. For example, if a virtual assistant always seems to give you the correct answer, you develop high confidence in its algorithmic judgment, so when you feel unwell and need a doctor's advice on medication, you may still be willing to turn to the AI assistant. In that case, if the information offered is incorrect, it might have serious side effects. Although the answers given by AI will become increasingly accurate, much more work is needed to guarantee their correctness, and tasks requiring expertise and practical training are difficult to entrust to AI. As BBC News put it, you must use something other than ChatGPT to learn how to perform surgery (Holmes, 2023).

In this circumstance, many countries have put in place regulatory mechanisms to ensure that AI complies with minimal ethical standards. Using ChatGPT as an example, last week Italy became the first Western country to block the advanced chatbot (McCallum, 2023). According to the Italian data protection authority, the model, developed by US start-up OpenAI and backed by Microsoft, raised privacy concerns (McCallum, 2023). In the meantime, the UK Information Commissioner's Office stated that it would support developments in AI but would challenge non-compliance with data protection laws (McCallum, 2023). Many nations, including China, Iran, North Korea, and Russia, have also blocked ChatGPT (McCallum, 2023). However, is it feasible to ban ChatGPT? Will prohibiting ChatGPT stop the creation of AI personal assistants? Problems arise with all technological innovations; what needs to be done is not to prohibit them but to explore ways of regulating them rationally. As Ong and Fatima (2023) describe, ChatGPT and the new technologies that follow it are here to stay and, like previous technologies, will likely become an essential element of our lives.

Furthermore, in response to the problem of cheating on university assignments, some academic institutions and state governments, including New York public schools and New South Wales, Queensland, Victoria, and Tasmania in Australia, have prohibited ChatGPT because of intellectual threats and the possibility of cheating (Ong & Fatima, 2023). However, prohibiting ChatGPT is not the only way to address the risks (Ong & Fatima, 2023). There is, in fact, plenty of free software available for detecting AI-generated content, such as GPTZero, Content At Scale, and Originality.AI, all of which can be used to detect ChatGPT-generated responses. Although testing has shown that GPTZero can be easily tricked by changing a few words or rearranging sentences with another AI system, it is expected that, as technology develops, a growing number of tools will be available for monitoring AI-generated content and reducing the probability of academic fraud (Ong & Fatima, 2023).

In conclusion, the expression of authority is becoming increasingly algorithmic: decisions that used to be made by humans are now made automatically (Pasquale, 2016). Algorithmic power is only appropriate when it fosters fairness, freedom, and rationality (Pasquale, 2016). A practical regulatory approach is required to limit the risks associated with algorithmic bias and the ethical issues algorithms raise. Even with ChatGPT, the current hottest topic, OpenAI cannot guarantee the absence of algorithmic bias, and the application still struggles with misleading audiences, academic dishonesty, privacy risks, and other issues. Technology is constantly evolving, and one of the most effective solutions is a diverse approach to regulation.


References

Chow, A. R. (2023, February 8). Why ChatGPT is the fastest growing web platform ever. Time. Retrieved April 9, 2023.
Dhaliwal, H. K. (2020). Algorithmic bias and its problem, solution, and implications. Magnificat.
Goggin, G. (2023). Week 5: Issues of concern: AI, automation, & algorithmic governance [Review of Week 5: Issues of Concern: AI, Automation, & Algorithmic Governance].
Goswami, R. (2023). Bill Gates thinks A.I. like ChatGPT is the "most important" innovation right now. CNBC. Retrieved April 8, 2023.
Holmes, J. (2023, February 28). Universities warn against using ChatGPT for assignments. BBC News. Retrieved April 9, 2023.
McCallum, S. (2023, April 1). ChatGPT banned in Italy over privacy concerns. BBC News. Retrieved April 10, 2023.
Megorskaya, O. (2022). Council post: Training data: The overlooked problem of modern AI. Forbes. Retrieved April 8, 2023.
Naem, C., & Sagedur, R. (2023). A brief review of ChatGPT: Limitations, challenges and ethical-social implications.
Ong, K.-L., & Fatima, S. (2023, February 8). Embracing ChatGPT for education. RMIT University. Retrieved April 10, 2023.
OpenAI. (2023a). Introducing ChatGPT. Retrieved April 9, 2023.
OpenAI. (2023b). How should AI systems behave, and who should decide? Retrieved April 9, 2023.
Pasquale, F. (2016). The black box society: The secret algorithms that control money and information. Harvard University Press.
Vincent, J. (2018, January 12). Google 'fixed' its racist algorithm by removing gorillas from its image-labeling tech. The Verge. Retrieved April 9, 2023.
