The Dawn of AI: Unraveling the Path from Large Language Models to Artificial General Intelligence

1. Introduction

In the past two years, the field of artificial intelligence (AI) has developed at an unprecedented pace. Since the end of 2022, large language models (LLMs), represented by ChatGPT, have rapidly captured attention and discussion worldwide. These systems demonstrate astonishing natural language capabilities: they can answer complex questions, write articles, generate code, and more, with an accuracy far beyond previous AI systems, bringing enormous convenience to humans. This marks a significant milestone in AI development. Just a few years ago, most people still considered Artificial General Intelligence (AGI), the original aspiration and ultimate goal of AI research, a distant dream; yet the abilities displayed by LLMs such as ChatGPT seem to offer a glimpse of dawn on the path to achieving it.

In light of this, this blog aims to explain the current state and limitations of AI development in plain terms. We’ll explore what an ideal AGI should look like, and the distinct impacts that today’s AI and that ideal AGI could each have on human society.

2. The Current State of AI

2.1 Parrots and Crows

Over the past few decades, machine learning has passed through several important stages: from the earliest symbolic paradigm to connectionism, then to probabilistic modeling and statistical methods, and finally to the deep learning paradigm. Most artificial intelligence systems are narrowly focused on specific tasks: a model is designed around a simple value function and then trained on a massive task-specific dataset (Songchun Zhu, 2017). The limitation of these systems is that the trained models can neither generalize nor explain themselves. In other words, they cannot be applied directly to other tasks, nor can they truly “understand.” To use an analogy, this is like “Clever Hans” or a parrot’s mimicry: you can never be sure what a model has actually learned from the data it was given (Kate Crawford, 2021).

Let’s take a simpler example to understand this:

Case Study: Kahneman’s Dual-Process Model and the Crow’s Insight

Psychologist Daniel Kahneman, in his book “Thinking, Fast and Slow,” proposed that human thinking and cognition are divided into two systems:

System 1: intuitive, fast, heuristic, unconscious, non-linguistic (for example, simple multiplication facts or responses that come immediately).

System 2: slow, logical, sequential, conscious, linguistic, algorithmic.

Humans can train themselves to convert tasks that only System 2 can do into tasks that System 1 can handle (practice makes perfect). The way System 1 processes problems helps humans conserve mental energy, at the cost of some critical thinking (because it does not involve deliberate reasoning). Yoshua Bengio, a renowned computer scientist, believes that deep learning can perform the work of System 1 very well, but it fundamentally lacks System 2’s capabilities (Mingke, 2019).
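The two systems, and the way practice shifts work from one to the other, can be sketched as a toy program: System 1 as instant recall from a lookup table, System 2 as slow, explicit computation whose results get memorized. This is only an analogy, not a model of the brain; the memorized facts and function names are illustrative.

```python
# Toy analogy: System 1 = cached recall, System 2 = deliberate computation.
memory = {(7, 8): 56, (9, 9): 81}  # "practiced" facts, recalled instantly

def system1(a, b):
    """Fast and intuitive: answer only if the fact is already memorized."""
    return memory.get((a, b))

def system2(a, b):
    """Slow and deliberate: derive the product by repeated addition."""
    total = 0
    for _ in range(b):
        total += a
    memory[(a, b)] = total  # "practice": System 2 work becomes System 1 recall
    return total

def multiply(a, b):
    cached = system1(a, b)
    return cached if cached is not None else system2(a, b)
```

After one deliberate pass through `system2`, the same question is answered by `system1` alone, mirroring how rehearsed skills stop requiring conscious thought.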

For current large language models, the processing style is closer to human System 1, but thanks to technologies like massive pre-training and fine-tuning, contextual understanding and long-term memory, and Reinforcement Learning from Human Feedback (RLHF), they have also begun to exhibit some characteristics of System 2 (Fu, 2023). This is a key difference from early deep learning AIs. They can respond quickly to many kinds of requests, from simple factual queries to complex problem-solving, without deep reasoning; yet when a task demands deeper understanding, they attempt to mimic System 2 through simulated reasoning and learning processes, including parsing complex language structures, solving problems, and creative thinking.

Compared with parrots, which only mimic, crows come much closer to the ideal of AGI, the ultimate goal. A crow can autonomously solve problems it was never trained on: wild crows have been observed dropping nuts onto a crosswalk so that passing cars crack them open, then waiting for the red light to safely retrieve the food (Songchun Zhu, 2017). AGI should, like the crow, have the capability for autonomous “understanding.”

2.2 A Brief Overview of Artificial General Intelligence

So, what should AGI look like, and where does the gap between existing LLMs and AGI lie? Understanding these questions is crucial for analyzing the impact AGI could have on human society, the extent to which current LLMs can affect it, and whether we should be concerned about the rapid pace of AI development today.

Professor Songchun Zhu has delved deeply into issues like “General Artificial Intelligence and AI Safety” at the AAAI SafeAI workshop. He pointed out that AGI needs to meet three key requirements:

1. AGI needs to be capable of handling an unlimited number of tasks, including those not predefined in complex, dynamic physical and social environments.

2. AGI requires autonomy, the ability to generate and complete tasks on its own, akin to humans.

3. AGI must possess a value system, understanding human values.

Based on these requirements for AGI, let’s analyze how current LLMs fare:

1. Large Language Models have achieved generalization in text-related tasks, meaning you no longer need to train specific models for specific tasks. However, LLMs lack the ability to interact with physical and social environments, making it difficult for them to truly “understand” the meaning of language.

2. Large language models lack autonomy. They are passively responding to human commands and inputs, lacking self-awareness, emotions, desires, and subjective goals.

3. Even though LLMs have been trained on massive text corpora containing human values, they do not possess the capability to genuinely understand or align with those values.

Therefore, there is a significant gap between current LLMs and the ideal AGI. Closing this gap will require genuine technological breakthroughs; relying solely on existing deep learning methods will ultimately not lead to AGI. As for how long this will take: three years? Five? Twenty? No one can give a definitive answer.

3. Standing at the Crossroads of AGI

3.1 “Giant Parrot” ≈ “Crow”

Some scholars have pointed out that the Turing test is essentially useless for evaluating conversational systems: it measures how easily humans can be deceived rather than intelligence itself (Mingke, 2019). From the discussion above, it is clear that current LLMs, while significantly different from true AGI, are better described as a kind of “giant parrot” than as a “crow.” Passing the Turing test does not mean a system truly understands the content; it merely appears to understand and manages to deceive humans.

However, my perspective is that, in terms of practical user experience, once the “giant parrot” evolves to a certain degree, it becomes a “crow” for the users interacting with it (i.e., an entity that truly possesses intelligence). In other words, as long as the giant parrot can solve users’ problems, whether it truly has the capacity to understand is not something users need to particularly mind. Existing LLMs already hint at this: a person unfamiliar with the current state of AI development could easily conclude from their own usage experience that these systems are truly intelligent. Since the advent of ChatGPT (based on GPT-3.5), the progress in LLMs has been nothing short of revolutionary for the AI academic community, almost like comparing missiles to bows and arrows when set against past models. LLMs have already demonstrated tremendous commercial value and have had a significant impact on societal production and lifestyles.

3.2 Will LLMs Replace Human Jobs?

My answer to this question is: while emerging large language models and various AI-generated content (AIGC) tools cannot yet completely replace a profession outright, they will undoubtedly reduce the number of jobs through significant productivity improvements. The industrial revolutions of the 18th and 19th centuries brought massive advances in productivity, freeing humans from repetitive physical labor. Now, a fourth technological revolution may have arrived: unlike past revolutions, which addressed repetitive physical labor, emerging AI technologies are beginning to tackle repetitive intellectual tasks. They can accomplish many things, often better than the average human.

From this, we can infer that lower-level repetitive intellectual labor will definitely be replaced by AI. For example:

1. Various voice customer service/front desk reception roles

These jobs don’t typically involve specialized skills or deep thinking; the essence of the work is to act as a “megaphone”: understanding user needs through conversation, filtering out inappropriate requests, and passing the rest on to the relevant person for resolution. Past rule-based AIs might not have been up to the task, but today’s artificial intelligence, with its ability to generalize to tasks it hasn’t seen before, can completely replace this profession.

2. Simple clerical work

This includes clerical processing, writing, generating blog posts, and other repetitive writing tasks, where LLMs can replace or supplement part of the content creation work.

3. Basic programming tasks

LLM-based code generation and automated testing can improve development efficiency, and “grunt work” coding tasks in particular can be readily replaced. Thanks to specialized training on code, the coding ability of current LLMs is already very strong.
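As a concrete picture of such “grunt work,” the routine conversion below (CSV text into a list of records) is the kind of boilerplate that current LLMs already generate reliably. The function name and sample data are illustrative, not from any particular tool.

```python
import csv
import io
import json

def csv_to_records(csv_text: str) -> list[dict]:
    """Parse CSV text with a header row into a list of dictionaries."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [dict(row) for row in reader]

sample = "name,age\nAda,36\nAlan,41\n"
print(json.dumps(csv_to_records(sample)))  # prints a JSON array of two records
```

Writing code like this by hand adds little value; delegating it frees developers for the design and debugging work that still demands human judgment.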

In summary, if our work merely consists of searching the internet for answers, relaying information, or connecting the two, then current AI can naturally do it faster and better than we can (Yuzheng Sun, 2023). To avoid being replaced by AI in their profession, people need to do two things:

1. Reflect on the essence of their work: does it truly involve innovation, creating something new from existing resources, or is it low-level repetitive intellectual labor?

2. Embrace the trend and become a proficient user of AI tools, achieving higher efficiency than others and enhancing one’s competitiveness in the job market.
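Returning to the front-desk example above, the “megaphone” workflow (understand, filter, route) can be sketched as a tiny router. A real deployment would use an LLM to classify each request; the keyword matching, team names, and blocked phrases here are stand-ins for illustration.

```python
# Minimal sketch of the front-desk "megaphone": understand the request,
# filter what should not be handled, route the rest to the relevant team.
# Keyword matching stands in for an LLM classifier; all names are illustrative.
ROUTES = {
    "refund": "billing team",
    "password": "account support",
    "broken": "technical support",
}

BLOCKED = ("free money",)  # inappropriate requests to filter out

def route_request(message: str) -> str:
    text = message.lower()
    if any(term in text for term in BLOCKED):
        return "rejected"
    for keyword, team in ROUTES.items():
        if keyword in text:
            return team
    return "general inbox"
```

The point of the sketch is that nothing in this loop requires judgment or creativity, which is exactly why an LLM that generalizes to unseen phrasings can take it over wholesale.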

3.3 The Polarization of Intelligence: Abandoning Thought, Abandoning All

Today’s AI is so powerful that virtually any question thrown at it yields a reasonable-sounding answer. If I delegated all my programming assignments to AI, I would end the semester knowing nothing, yet still scoring high grades. This behavior is extremely dangerous: even if LLMs excel at language understanding and generation, their knowledge comes solely from training data and lacks real-world validation. Furthermore, while AI may find good answers to a question, it does not challenge those answers.

I believe the prevalence of AI is likely to exacerbate the polarization of intellectual levels. There is already a vast inherent gap in intelligence among individuals. For the intellectually gifted, the rapid development of AI is an immense boon: they can use AI tools to boost their efficiency and knowledge while consciously avoiding the pitfalls. But laziness is human nature: in the era of widespread AI, some will forsake critical thinking for convenience, or will lack the ability to discern incorrect information generated by AI, ultimately letting AI replace their own thought process.

3.4 Concerns and Challenges: Should Humanity Curb the Pace of AI Development?

Last year, Musk and thousands of other technologists signed an open letter calling for an immediate suspension of training AI systems more powerful than GPT-4; Hawking warned that AI could be the worst event in the history of our civilization; and Turing himself issued early warnings about AI’s development. What were they worried about? Is it necessary to halt AI research immediately?

These industry figures’ concerns revolve around AGI itself. AGI, with its ability to continuously acquire new knowledge through observation, practice, reading, and so on, and its superior cognitive abilities in reasoning, planning, and innovation, represents the ultimate form of utilizing human data and computing power (Yuzheng Sun, 2023). If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding the discovery of new scientific knowledge that changes the limits of possibility (Sam Altman, 2023). On the other hand, AGI would also come with serious risks of misuse, drastic accidents, and societal disruption, raising numerous governance issues:

1. Safety and Controllability: How can humanity always maintain control in the face of intelligences greater than our own? How should we manage machines with autonomy?

2. Social Turbulence: Not just repetitive intellectual labor, but even purely intellectual work could be replaced by AI, leading to direct job losses in roles like product management. What then for humanity?

3. AGI Monopolized by Tech Giants: The first company to develop AGI will gain an unimaginable monopolistic advantage, further influencing the international political landscape.

4. Misuse of AGI: Issues surrounding the development of military weapons, privacy protection, etc.

However, as long as insurmountable technical barriers to AGI remain, with deep learning still the mainstream method of training AI, there is no need to worry about the pace of AI development. Conversely, once AI achieves reasoning grounded in common sense and world models, we will truly need to start thinking about countermeasures in advance.

4. Conclusion

Within the confines of this article, we have summarized the remarkable progress made in the field of artificial intelligence. Large language models (LLMs) have demonstrated near-human capabilities in understanding and generating language, marking a significant step towards Artificial General Intelligence (AGI). However, we must recognize that a vast technological chasm remains between existing LLMs and the ideal AGI.

If AGI were to become a reality, its potential impact would be profound and unpredictable. We must think ahead about how to ensure the safety and controllability of AGI to prevent its misuse for unethical purposes. While significant technological hurdles to achieving AGI still exist, we must act in a timely manner to develop the necessary ethical norms and governance frameworks, preparing adequately for the potential arrival of the AGI era.


References

Crawford, K. (2021). Atlas of AI. Yale University Press.

Fu, Y. (2023). How does GPT obtain its ability? Tracing emergent abilities of language models to their sources. Notion.

Kathy. (2021, December 19). Hierarchies of intelligence. Kathy’s Newsletter.

Lim, C., Bourke, L., Fouracre, K., Pallaras, L., & Barbaro, A. (2023). ChatGPT and the importance of AI governance.

Mingke. (2019, January 21). Artificial idiocy 2: The AI you see has nothing to do with intelligence. Jiqizhixin.

AIContentfy Team. (2023, January 23). ChatGPT and AI governance: Ensuring responsible use. AIContentfy.

Xu. (2022). ChatGPT won’t replace Google Search, but it will revolutionize cloud computing platforms. Weixin Official Accounts Platform.

Zhang, J. (2023). The path to AGI: Essentials of large language model (LLM) technology. Zhihu Column.

Zhu, S. (2017, November 2). Brief discussion on artificial intelligence: Current status, tasks, architecture, and unification. Weixin Official Accounts Platform.

Zhu, S. (2025). Regarding AGI and ChatGPT, this is how Stuart Russell and Zhu Songchun see it. Weixin Official Accounts Platform.
