Will humanity be better off thanks to artificial intelligence? This is one of the defining issues of our times, and as things stand, I am not very optimistic that AI will really become a force for good. Certainly, we should not underestimate its potential. In health and science, AI seems to promise unimaginable discoveries. Extraordinary medical breakthroughs are being touted. Radical longevity might even be achieved, and there are efforts towards the so-called Singularity, a scenario in which humans could have AI-powered chips implanted in their brains and bodies.
As crazy as it might sound, there are scientists working on these concepts and big money invested in them. Then, in the broader world of jobs, there is the belief that AI will immeasurably enhance our levels of productivity. We will stop wasting our time on futile but time-consuming tasks because we can delegate them to AI agents. Finally, AI could spark a revolution in the way we learn.
Millions of children unable to access quality education could finally have at their disposal new tools that greatly enhance how they build knowledge.
Yet the only realm where I feel AI can have a hugely positive impact is science and health: helping defeat the beast called cancer and other rare diseases, and giving persons living with disabilities unimaginable new opportunities.
On the jobs front, there is already ample evidence that more productivity in offices means only one thing: a bigger bottom line for shareholders. It couldn’t be simpler: AI tools offer a great opportunity to do more with fewer workers, which can easily lead to massive layoffs and skyrocketing unemployment rates. In this regard, some alarming reports are already emerging from the US job market.
Then, in the field of education, a very clear scenario is emerging. AI is indeed disrupting the education system, but not in the way many were hoping; rather, it is moving in the opposite direction.
Students have quickly developed a dependency on tools like ChatGPT that is not supporting their learning journey but actually destroying it. They have stopped carrying out the most fundamental tasks expected of them: reading, writing, doing calculations, processing information and learning to master the art of critical thinking.
It is a very concerning scenario, and I see it first-hand when I interact with the younger generations. I worry that they are delegating to a machine vital functions of life, functions that define us as cognizant human beings. Learning, thinking, assessing, making judgements and taking decisions: these are our most important capabilities.
And we have not yet reached the stage of Artificial General Intelligence (AGI), when AI models will equal our brains, paving the way for an artificial superintelligence. I am not the only one concerned. Just a few weeks ago at the United Nations, over 200 experts published a new appeal, the AI Red Lines, to ensure that no future AI system can jeopardize humanity’s existence.
These days there is also a global petition, open for signature, on the risks of developing a superintelligence without proper safeguards and guardrails. And I stop here without even mentioning the impact of AI-linked data centers on climate warming, which is itself another huge problem.
The overall picture is confusing and worrying, but I know that out there a number of young professionals take a more optimistic view of the future of AI.
I have been looking for opinions that could help me navigate this extremely complex issue by showing that a way forward is possible, one that is not detrimental to humanity. In short, I want to believe that AI can be a “win-win” for us rather than a tool that will create immense problems, putting our whole civilization at risk, as some experts believe.
Deep Bikram Thapa Chhetri is a young software entrepreneur from Nepal who founded Enotes Nepal, a digital learning platform, when he was just 16 years old. It has grown to become Nepal’s most popular educational platform, serving over 100,000 active users each month.
I met Deep on LinkedIn, and when I saw his profile, with his interest and expertise in technology, I thought he could offer some new perspectives on AI. So I decided to approach him, hoping he could provide insights on the future of AI from the vantage point of a tech enthusiast, with the fresh and optimistic outlook that characterizes the new generations. His venture is, basically, a platform that focuses on subjects that traditionally lack accessible resources, helping students from diverse backgrounds succeed academically.
Deep is driven by the mission of democratizing education in Nepal, believing that quality education should be a right accessible to everyone, regardless of location or background.
A very interesting email-based conversation emerged from our initial exchange. While its contents are mostly focused on education, the approach that emerges offers an interesting prism through which to analyze the dangers and risks, but also the opportunities, stemming from AI.
I believe Deep offers a very interesting and, importantly, nuanced view of how AI can be rolled out so that it is truly at the service of humanity. “Your concerns about unregulated AI and its impact on learning resonate deeply with me. As someone working at the intersection of technology and education, I see both the tremendous potential and the very real dangers you have outlined, especially the risks of losing critical thinking,” he told me.
“You are absolutely right to be concerned. We are witnessing a fundamental shift in how students engage with learning. The easy availability of AI tools like ChatGPT has created what I would call a ‘cognitive shortcut culture’—where students increasingly bypass the struggle that actually builds understanding. The friction of learning—the difficulty, the confusion, the breakthrough—that’s not a bug, it’s the entire point. When we remove that friction entirely, we’re not making education more efficient; we’re making it meaningless.”
Deep is equally concerned about the way students use AI. “The real crisis isn’t that students are using AI—it is that many are using it as a replacement for thinking rather than a tool to enhance it. When students stop reading deeply, stop wrestling with complex ideas, stop revising their writing, they’re not just missing out on knowledge—they’re losing the opportunity to develop the mental muscles that make learning possible in the first place.”
What about the need for strong regulations? Regulations are paramount, and some nations are reaching important milestones. The EU was the first to adopt a law entirely focused on minimizing AI-induced risks while eliminating the most dangerous hazards. South Korea has also passed legislation of its own, and other nations are following suit.
I have heard the argument that Nepal should not be too heavy-handed with AI regulation because the country cannot afford excessive red tape. In addition, the country could take a “piggyback” approach and align its policies with what more advanced nations have already legislated. To some extent, the rationale behind these arguments makes sense, but at the same time we should be careful, no matter how pressing the imperative for Nepal to develop its ICT sector. “I believe we need a multi-layered approach. Yes, some regulatory frameworks are necessary, but regulation alone will not solve this,” Deep explains.

He proposes a sort of broad blueprint, which I highlight in the following lines.
To start with, Deep explained, we need what he referred to as “institutional guidelines”. “Universities and schools need clear policies on AI use—not blanket bans, which are both unenforceable and counterproductive, but thoughtful frameworks that distinguish between AI as a crutch and AI as a catalyst for deeper learning.”
Then, according to him, it is paramount to focus on digital literacy education. “Students”, he shared, “need to understand not just how to use AI, but when to use it, when not to, and most importantly, how to think critically about AI-generated content.” “AI literacy isn’t about learning to use ChatGPT—it is about learning to think in an age where machines can generate convincing answers to almost any question,” he continued.
Finally, Deep talked about teacher empowerment. I had mentioned how some university-level professors in the USA are adapting to a new era dominated by ChatGPT by bringing back viva and in-classroom exercises and tests. “Educators need support and training to adapt their pedagogical approaches. The teachers you mentioned who are introducing in-class writing and viva exams are on the right track—they’re making learning visible and immediate in ways that can’t be outsourced to AI.”
Shifting to the issue of leveraging AI productively, Deep remains optimistic. “Here is where I find hope: AI doesn’t have to be the enemy of learning. In my work, I am exploring how AI can actually deepen engagement rather than replace it.” “The key is designing AI tools that promote active learning rather than passive consumption; that create scaffolding for understanding rather than shortcuts around it; and that encourage iteration and reflection rather than one-click solutions. In the end, we need to make the learning process transparent rather than opaque.”
“The question isn’t whether students will use AI—they will. The question is whether we can build educational experiences that make the hard work of learning more compelling than the ease of automation,” Deep wrote.
Towards the end of our conversation, I asked him about a possible path forward. A light-touch approach like the one he proposes for the education sector resonates with views I had heard before. Yet I remain skeptical. Will guidelines and digital literacy be enough? Without some sort of mandatory rules, abuses in the ways students prepare for their exams, write their essays or defend their theses will remain rampant. And this is where Deep’s nuanced view emerges. “I am optimistic, but my optimism is conditional. We are at a crossroads. If we let market forces alone dictate how AI integrates into education, we will likely see the dystopian outcome you fear—a generation that has lost the capacity for deep reading, sustained attention, and original thought. But if educators, technologists, policymakers, and yes, young entrepreneurs like myself, work together intentionally, we can shape a different future.”
He believes we can move toward a future where “AI enhances human capability rather than atrophies it”. “The adaptations you’re seeing from teachers—in-person assessments, process-focused evaluation, collaborative learning—these are exactly right. We need to shift from measuring outputs (which AI can produce) to measuring understanding (which it cannot),” Deep elaborated. Still, how can we ensure that these adaptations actually happen and that students make the best and most intelligent use of AI? How can policymakers in the education sector at the federal, provincial and especially local levels develop the skills and capacities to navigate the new system that is emerging? Considering the wide authority that municipalities and metropolitan cities hold over education, I am seriously concerned about whether a unified approach to dealing with AI can emerge and be upheld.
These are questions that, to me, still need to be discussed, and an open conversation about how to address them should happen sooner rather than later. Deep concluded with a personal note that I believe is worth publishing in its entirety: “As someone from the generation that will inherit this AI-transformed world, I feel a deep responsibility to get this right. We cannot simply build tools and hope for the best. Every educational technology should be designed with one question at its core: ‘Does this make students better thinkers, or just better at avoiding thinking?’”
These are existential questions.
We are truly at risk of “shortcutting” our future as humans in control, capable of making decisions and progressing in a way that keeps us in charge. At the very end, Deep shared a personal quote.
I believe it is a powerful statement that sums up his approach to how AI can be leveraged for good. “Don’t just build something that works. Build something that matters. Build something that years from now, you’ll look back on and say: this mattered, this changed lives, and I’m proud I built it.”
I hope and wish he will succeed in this mission.