How to Learn to Stay Irreplaceable in the Age of AI


China’s adoption of artificial intelligence reached a remarkable turning point during the 2025 Spring Festival, largely thanks to the unexpected popularity of DeepSeek, an advanced AI tool. This once esoteric technology suddenly became a household name as its usage spread nationwide like wildfire. People were no longer just sharing festive fireworks displays online; instead they challenged each other with poetry creatively generated by AI. Family group chats buzzed with endearing requests, like uncles teaching the AI how to make dumplings in dialect or a relative asking it to calculate the calorie count of their New Year's Eve feast. Even my father, who is in his seventies, asked me for a tutorial on crafting festive greetings with AI.

However, this widespread embrace of AI sparked a wave of anxiety amid the excitement. A seasoned programmer expressed his dismay after experimenting with DeepSeek, revealing that the programming expertise he had meticulously built up over years suddenly seemed alarmingly inadequate. After the festival, a screenshot of a group chat from a Shanghai company went viral, showing plans to replace a significant portion of its workforce with AI. In some departments, such as legal and content innovation, the replacement ratio reportedly reached 80%. This reality check startled many: AI was not just threatening low-skilled positions but also reaching educated, experienced roles.

Confronted with this rapid development of AI, individuals began questioning the purpose of their education. If human knowledge and experience could be so easily supplanted by AI, how should we adapt our learning strategies for this brave new world?

The traditional concept of “work” — characterized by fixed hours, locations, and tasks — has existed for barely 200 years. Prior to this, individuals had to juggle a multitude of responsibilities, such as farming, weaving, and fishing.


The Industrial Revolution reshaped these dynamics, confining workers to specific roles and standard operating procedures, consequently transforming them into cogs in a societal machine.

This transformation also redefined how people learned. Before the Industrial Revolution, varied work demands required individuals to acquire a broad range of knowledge that allowed for cross-disciplinary coordination and negotiation; learning therefore prioritized breadth. Afterward, work became highly specialized and expertise in one area became paramount, shifting the goal of learning from breadth to depth. A programmer, for instance, focused solely on coding, while a product manager dealt only with market analysis without engaging in product development.

Yet, as AI continues to proliferate, this status quo is again being disrupted. AI can complete specific tasks with superior efficiency, and in fields that demand extensive accumulation of knowledge and experience it is rapidly outpacing human capabilities. This trend threatens to diminish the role humans play in the industrial paradigm, pushing forms of work into a "negation of the negation" that resembles our pre-industrial state. In essence, people may again need to perform many kinds of tasks at once; a product manager, for instance, would not only analyze market trends and design products but also implement those concepts. Fortunately, modern tools allow us to delegate many of these tasks to AI, fundamentally shifting the skill set individuals need. The emphasis is now on the ability to break down tasks, identify objectives, and make proficient use of AI technologies.

As fixed job roles and predictable work environments dissipate, the ability to adapt one's knowledge base to changing conditions will become a crucial competitive advantage. Notably, while AI can substitute for a plethora of tasks, it still struggles to replace those requiring imagination, creativity, and human empathy. Cultivating these inherently human skills, especially imaginative thinking, should therefore be prioritized in order to express uniquely human value in an AI-dominated landscape.

In light of these changes, I propose that to thrive in an AI-driven workforce and organizational landscape, innovations in learning paradigms should unfold across four dimensions:

1. Tools: Become proficient in utilizing AI technologies as effective aids in various tasks.

2. Knowledge: Gain a diverse understanding of knowledge relevant to one’s work, ensuring precise task delegation to AI.

3. Skills: Develop the abilities for coordinating complex tasks, allocating resources effectively, and tackling unforeseen challenges.

4. Empathy: Enhance human-centric capacities like emotional intelligence, creativity, and empathy, particularly fostering imaginative skills.

Now, let us delve deeper into these four aspects of transformative learning.

To effectively engage with AI, we must learn to be its Socrates


In practice, different users get vastly different results from the same AI tools. Some can harness AI to independently design entire websites or manage whole projects, while others use it merely for basic interactions or image generation.

This disparity begs the question: what drives such variation? In the past, using AI demanded substantial programming knowledge, so people attributed differences in efficiency to coding skill. Today, however, users can communicate with AI directly in natural language, yet the gap persists and may even be more pronounced. At its core, the discrepancy is a matter of mindset.

Before the surge in popularity of tools like ChatGPT, there were already applications that let users communicate with AI through natural language or visual interfaces to generate text and complete tasks. Known as "low-code programming," these tools lowered the barrier to entry for coding. Yet building robust programs with them still required not just technical skill but also sound coding logic. The same principle applies to generative AI.

The most critical skill in employing generative AI is the ability to articulate one's needs clearly and direct the AI to execute tasks as instructed. Much like Socrates in the "Meno," who guided an uneducated slave boy to derive a geometric theorem without overt instruction, users today must guide AI through logical breakdowns of a problem, leading it to discover solutions on its own.

Socrates used a structured dialogue to provoke thought. He would first ask his interlocutor to define the relevant concepts, then generalize those definitions to draw conclusions, and finally challenge the deductions to expose contradictions and stimulate new reflection. In essence, Socratic dialogue parallels writing "low-code" scripts that inch ever closer to an answer — a methodology equally applicable to interacting with generative AI.

Users can develop a set of conversational frameworks for interacting with AI, refining effective prompts through iterative practice. Whatever the challenge at hand, this approach lets individuals communicate their needs to AI efficiently and significantly improves the AI's effectiveness.
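As an illustration, here is a minimal sketch of such a framework in Python. It assumes the official `openai` client library and uses a placeholder model name and prompt wording; the point is the staging of the dialogue (define the concepts, state the goal, challenge the draft), not the particular model or vendor.

```python
# Minimal sketch of a "Socratic" prompting framework (assumes the `openai`
# package and an API key; model name and prompt wording are placeholders).
from openai import OpenAI

client = OpenAI()

def ask(messages):
    """Send the running dialogue to the model and return its reply."""
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

def socratic_session(task, concepts):
    history = [{"role": "system",
                "content": "Reason step by step and state your assumptions explicitly."}]
    # 1. Ask the model to define the relevant concepts first.
    history.append({"role": "user",
                    "content": "Define these concepts precisely: " + ", ".join(concepts)})
    history.append({"role": "assistant", "content": ask(history)})
    # 2. State the actual goal, building on those definitions.
    history.append({"role": "user", "content": "Using those definitions, " + task})
    history.append({"role": "assistant", "content": ask(history)})
    # 3. Challenge the draft to expose weak points, as Socrates would.
    history.append({"role": "user",
                    "content": "List the weakest assumptions in your answer, then revise it."})
    return ask(history)

print(socratic_session("outline a market analysis for smart-home devices.",
                       ["target market", "switching cost"]))
```

The same skeleton can be reused across very different tasks simply by swapping the concepts and the goal.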

This Socratic methodology not only boosts AI's efficiency on tasks; it also lets AI challenge us in return, augmenting our own learning. Learners often find it hard to assess how well they understand a concept. By having AI play the Socratic role and persistently probe our knowledge, we can gauge our mastery, stimulate our thinking, and broaden our cognitive horizons.
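Turning the same idea around, one can hand the Socratic role to the AI with a system prompt along the lines of the sketch below; the wording is my own illustration, not a prescribed formula.

```python
# Sketch: let the AI play Socrates and probe our understanding.
# The prompt wording is illustrative; adapt it to the subject at hand.
socratic_tutor = {
    "role": "system",
    "content": ("You are a Socratic tutor. Do not explain the topic outright. "
                "Ask one probing question at a time about the concept I name, "
                "point out contradictions in my answers, and only summarize what "
                "I understood correctly when I say 'stop'."),
}
messages = [socratic_tutor,
            {"role": "user", "content": "Quiz me on dynamic programming."}]
# Pass `messages` to any chat-completion endpoint and keep appending the turns.
```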

Consider a professor I know who, since early 2023, has spent over an hour a day conversing with ChatGPT. In these exchanges he presents the ideas he is formulating for his academic papers and invites the AI to critique and strengthen his arguments. The result has been several high-caliber papers published in prestigious journals.

As we continue this exploration, another theme arises: the need to grasp the "index" of our chaotic world. In Stephen Chow's film "The Duke of Mount Deer," there is a notable scene in which Chen Jinnan, after accepting Wei Xiaobao as his disciple, hands him a hefty book and insists that he read it diligently to familiarize himself with its contents. Puzzled, Wei Xiaobao asks whether the book contains his master's ultimate martial arts secrets. Chen Jinnan replies that it is merely the index of his invaluable techniques, and points him to another pile of tomes that hold the true martial arts knowledge.

Initially, I viewed this scene as comedic in its absurdity. After all, mastering any one of those martial arts seemed sufficient for thriving in the martial world; what value lay in poring over an apparently irrelevant catalog? Reflecting on it in the context of the AI era, however, reveals a profound truth: mastering the "index" can be remarkably significant.

We currently dwell in an age of information overload, where knowledge proliferates across disciplines at an unprecedented rate.

For most individuals, it would take a lifetime to survey the breadth of any given field. AI, however, can accomplish the same task in a heartbeat. In this light, straining to collect knowledge matters far less than knowing how to use AI as an "external brain." By letting AI help acquire and organize our knowledge, we can summon specific information as needed without trying to absorb it all ourselves.

Knowing precisely when to instruct AI to draw on certain knowledge is itself a valuable skill. Often, when faced with new questions, people do not know where to begin. If tasked with drafting an analytical paper on "Millet Economics," for instance, the novelty of the topic might make its angles of exploration elusive. Lacking direction undermines one's ability to guide AI toward the relevant materials, ultimately hurting productivity.

In such scenarios, a broad knowledge reservoir becomes tremendously advantageous. With basic familiarity across various fields, individuals may not solve an issue instantly, but they can use their existing knowledge as a roadmap, pointing AI toward productive research directions.

That said, this breadth of knowledge does not diminish the importance of depth. True breadth rests on foundational depth: being able to spot errors in a given area usually indicates a sufficient depth of understanding. In the age of AI this skill is crucial, because today's AI models generate content laden with "hallucinations," inaccuracies that can mislead users.

Leading AI models, for example, exhibit a high rate of "hallucination" in their outputs. While there are ways to mitigate the errors, such as demanding proper citations, relying wholesale on AI's output is fraught with risk, especially for those lacking deep understanding of a domain. Without a solid foundation, individuals can easily overlook the inaccuracies present in AI-generated content.

From this perspective, even with AI assistance, a certain level of depth in relevant knowledge is indispensable.

Only with critical judgment can humans enjoy AI's conveniences while steering clear of its potential pitfalls. In the AI era, "screwdriver" roles are on the verge of extinction, giving rise to individuals who harness AI's capabilities to become their own bosses and launch innovative one-person ventures.

As such, cultivating planning, organizing, and problem-solving skills becomes increasingly vital. These capabilities demand a substantial experiential foundation, but exposure to algorithms can significantly accelerate their development. Algorithms are structured sequences of instructions devised to solve problems efficiently; each one embodies human wisdom about a particular problem-solving approach and is applicable well beyond AI.

By learning more about algorithms and cultivating algorithmic thinking, we can bolster our analytical skills for contemporary challenges. Just as applying algorithmic concepts to everyday tasks can streamline workflows, applying such rational frameworks can elevate our decision-making capabilities.

For instance, drawing on strategies from dynamic programming can help us design processes effectively, while the idea of mapping problems into higher dimensions found in support vector machines (SVMs) can offer new viewpoints for reframing issues.
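To make the dynamic-programming analogy concrete, here is a small, self-contained sketch: choosing which tasks to tackle within a limited budget of hours is a classic 0/1 knapsack problem, solved stage by stage rather than by trial and error. The task names and numbers are invented purely for illustration.

```python
# Dynamic-programming sketch: pick tasks to maximize value within a fixed
# budget of hours (the classic 0/1 knapsack). Numbers are made up.
def best_plan(tasks, hours_available):
    # best[h] = highest total value achievable with h hours of work
    best = [0] * (hours_available + 1)
    for name, hours, value in tasks:
        # Iterate hours downward so each task is counted at most once.
        for h in range(hours_available, hours - 1, -1):
            best[h] = max(best[h], best[h - hours] + value)
    return best[hours_available]

tasks = [("write report", 3, 8), ("review code", 2, 5), ("plan sprint", 4, 9)]
print(best_plan(tasks, 6))  # -> 14: reviewing code plus planning the sprint fits in 6 hours
```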

Ultimately, integrating algorithmic understanding into our knowledge base could greatly enhance our efficiency in both work and problem-solving scenarios.

In the quest to foster creativity, an intriguing phenomenon manifests in AI's "hallucinations." After exploring DeepSeek, many users remarked on its unique charm and human-like qualities, particularly treasured by those from humanities backgrounds. A friend who teaches in a university's Chinese Department, for example, was thrilled with how creative and imaginative DeepSeek's outputs were.

This leads us to ask: What accounts for DeepSeek's vibrant creativity and personality compared to its competitors? Examining its design reveals that this distinguishing feature may stem from an intentional scaling down of prediction accuracy, thereby increasing "hallucination" occurrences.

By reducing numerical precision to make training more efficient, developers can raise certain performance metrics while accepting a slight increase in inaccuracy.
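As a rough, generic illustration of what lower numerical precision means (this is not DeepSeek's actual training configuration), the same number stored in fewer bits drifts slightly from its original value:

```python
# Generic illustration of precision loss, not DeepSeek's training setup:
# the same value kept in fewer bits deviates slightly from the original.
import numpy as np

x = np.float64(0.1234567891234567)
print(np.float32(x))  # about 0.12345679 -- roughly 7 decimal digits survive
print(np.float16(x))  # about 0.1235     -- only 3-4 decimal digits survive
```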

This hypothesis is consistent with recent research from Vectara, which found a hallucination rate of approximately 3.9% for DeepSeek-V3, noticeably higher than the 1.5% and 2.4% rates of OpenAI's advanced models. While a higher hallucination rate restricts applications where accuracy prevails, such as literature reviews that depend on precise citations, it also invites surprise and novelty in creative tasks.

Generative AI operates fundamentally by predicting text from pre-existing material. A model with high predictive accuracy tends to produce cleaner but more formulaic output; when its predictions are allowed some leeway, it can produce more imaginative and diverse results. If I ask an AI to find "the latest 512G iPhone," for instance, a model with a low hallucination rate will accurately suggest purchasing an iPhone, while a more imaginative counterpart might playfully assume I was referring to fruit: "Why not try some fresh pears in season?" The latter is technically an error, yet it makes for a more engaging interaction and better reflects the nuances of human conversation.
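A related trade-off can be felt at inference time through the sampling temperature. This is a different knob from training precision, but it makes the predictability-versus-diversity tension easy to see; the sketch below is generic and uses made-up numbers.

```python
# Generic sketch of the predictability-versus-diversity trade-off via
# sampling temperature (an inference-time knob, distinct from training precision).
import numpy as np

rng = np.random.default_rng(0)
words = ["iPhone", "pear", "dumpling", "poem"]
logits = np.array([4.0, 1.0, 0.5, 0.2])  # the model strongly prefers "iPhone"

def sample(temperature):
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    return rng.choice(words, p=probs)

print([sample(0.2) for _ in range(5)])  # low temperature: almost always "iPhone"
print([sample(2.0) for _ in range(5)])  # high temperature: other words appear far more often
```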

The value of such hallucinations goes beyond entertainment. In scientific domains, an AI that strictly draws conclusions from the existing literature lacks the capacity for genuine innovation; allowing some degree of "hallucination" lets it step outside established frameworks and inspire groundbreaking discoveries. Notable scientists, including David Baker, a 2024 Nobel laureate in Chemistry, have acknowledged that AI's lapses in accuracy sparked fresh inspiration and opened new avenues of exploration.

Reducing the hallucination phenomenon is technically achievable through improvements in training data quality and investment in better algorithms.
