Generative Artificial Intelligence (AI) is the cutting-edge technology of our time, poised to tackle challenges that once felt insurmountable.
To the optimists, it is a game-changer. Meanwhile, pessimists see a darker future, where AI outsmarts humans, devours jobs, and rattles institutions—posing an existential threat.
As Melvin Kranzberg, a renowned historian of technology, noted, the fate of AI rests in our hands; technology itself is neither inherently good nor bad. This article explores both sides.
Unlike past automation (think conveyor belts or calculators), generative AI does not just follow instructions; it creates.
While AI is adept at handling creative tasks, there is a concern: if it assumes too much responsibility, could we lose our essential skills?
Its influence may alter the perception of what skills and knowledge humans value.
But as it churns out content at scale, there is a risk of misinformation and a dip in digital trust.
Impact on (mis)information
AI is both a powerhouse and a Pandora’s Box in today’s information landscape.
AI personalises content to your liking, be it news, language, or accessibility. But the flip side? AI drives 'surveillance capitalism,' where companies track your data to tailor adverts and adjust prices based on what they think you will spend. This data advantage lets the big players dominate markets, hike prices, and keep you hooked.
Enter deepfakes. AI generates hyper-realistic fake content that blurs the line between fact and fiction, swaying political opinions and health decisions, and fuelling conflicts. It is all powered by micro-targeted content designed to feed biases.
Attempts to counter this include labelling AI-generated content and using “prebunking” techniques to make users more aware.
But these solutions have limits. AI misinformation can be subtle, complex, and harder to spot. Plus, “hallucinations”—false data generated by AI—pose unique risks, as repeated inaccuracies can begin to feel like facts.
An academic paper titled ‘The Impact of Generative Artificial Intelligence on Socioeconomic Inequalities and Policy Making,’ written by 33 experts in technology, economics, and society, including the 2024 economics Nobel laureate Daron Acemoglu, and drawn on for this article, argues that generative AI not only reshapes industries but also shifts human behaviour.
The research suggests that interacting with AI can make people less empathetic and more self-centered, impacting fields such as customer service that depend on genuine human connection.
With AI creeping into search engines from Google to Microsoft, access to the rich diversity of perspectives might be lost.
Instead of sifting through various viewpoints, users could be served a single, “averaged” response—leading to a bland “average of averages” that dilutes depth and insight.
There are fears that this habit of providing generic answers could dampen the knowledge-sharing spaces where vibrant discussions thrive, a risk that calls for both smart regulation and genuine corporate responsibility.
Ethical practices boost reputation and trust, but they often come with costs that some firms will not bear unless pushed by regulation, the publication points out.
Therefore, striking a balance between profits and digital accountability could soon be a defining trait for companies.
But while AI’s influence on information sharing raises critical concerns, its effects also extend to the workplace, where it could redefine how skills, labour, and productivity are viewed.
Impact on work
High-skill tech such as personal computers mainly favoured educated workers, while automation, like industrial robots, displaced many lower-skilled roles. This pattern, known as "Skill-Biased Technological Change," shows how tech has historically widened inequality.
However, generative AI tools can be designed to work with people rather than replace them. By boosting efficiency and enabling workers to tackle higher-value tasks, these “pro-worker” tools drive productivity and job satisfaction.
For instance, studies highlight that customer service agents and programmers benefit from AI chat and coding assistants, especially new or less-experienced workers.
In one case, the least experienced customer service agents at a Fortune 500 company gained a 34 percent productivity boost using AI for real-time response suggestions, while programming aids like GitHub Copilot have sped up and enhanced coding for less-experienced users.
“These tools are levelling the playing field by helping less-experienced workers complete tasks faster and more effectively, closing skill gaps and enhancing job satisfaction. Generative AI, used as a complement rather than a replacement, could foster broader opportunities and inclusive productivity growth—offering well-paying roles and reducing inequality across the workforce,” the scholars assert.
Generative AI’s ‘inverse skill-bias’ could be its most surprising twist—boosting productivity most for less-skilled workers.
Unlike previous tech revolutions that favoured the highly skilled, AI’s support for novices could help close income gaps, they add.
AI’s translation and synthesis tools can also break down barriers in the digital economy, connecting people across languages and regions.
For instance, by reducing language barriers, AI tools help people from rural or remote areas access job opportunities and information otherwise out of reach.
Limited digital infrastructure
However, these benefits largely flow to developed economies, while limited digital infrastructure and investment may hold back regions in the Global South, including Uganda, from fully participating, according to the aforementioned paper.
But AI’s ability to gather and summarise information could be a game-changer for those in low-resource settings, where traditional research costs are high.
Instead of combing through sources, users can quickly access aggregated insights, helping level the playing field for businesses and individuals competing with better-resourced counterparts.
Yet, access disparities remain a challenge. Without reliable internet, devices, and training, some areas risk falling further behind.
AI also has the potential to devalue specialised skills, as companies may replace rather than complement expertise.
According to the study, if AI enables lower-skilled workers to perform tasks at a level close to that of experts, employers may be reluctant to raise wages because they see these roles as interchangeable.
This may deter people from developing higher skills because they see less benefit from doing so in a market where expertise can be easily replicated.
There is also the risk of AI commodifying knowledge, diminishing the perceived value of higher education and advanced training.
If AI makes certain skills accessible, workers may be less inclined to pursue further education, leading to “skill stagnation,” something that could crowd the lower end of the skill spectrum, intensifying competition and suppressing wages.
“As reliance on AI grows, roles that require deep expertise may erode, as individuals lean on AI rather than personal mastery. Governments have a key role to play in crafting policies that harness the productivity potential of generative AI while addressing its inequality risks,” the scholarly paper notes.
AI and education
AI’s personalised learning can assist with writing, research, and analysis.
However, accuracy and privacy concerns persist, leaving students to wonder about its long-term impact on personal development and careers.
The aforementioned expert analysis finds that engagement with AI tools varies by demographic group, with lower usage among female students, potentially widening gender gaps in future job markets.
There is also concern that AI might not lighten teachers' workloads as expected. Instead of automating tasks, it could shift responsibilities, like monitoring AI-student interactions or providing technical support.
This could create generational divides, as younger teachers may adapt more easily than older ones. Furthermore, excessive reliance on AI could stifle students’ independent problem-solving and critical-thinking skills.
The debate on AI in education is polarised, with some institutions banning it entirely, while others recommend limited usage.
The challenge is not whether to allow AI, but how to adapt the education system. Just like calculators did not kill algebra, and the internet did not make fact-checking obsolete, AI should not replace critical thinking and communication.
According to the experts, it is about teaching students to use AI without letting it do all the thinking for them. To navigate AI-powered learning, students should develop key skills, including prompt engineering to craft effective queries and fact-checking to assess online information accurately.
As AI reshapes education, its influence also extends into the realm of policymaking, where governments are tasked with creating frameworks that ensure AI is implemented ethically.