The debates in Nepal’s parliament feel like they’re stuck in a time warp. The language, the concerns, the political theater all feel like relics of a bygone era. In a recent survey, nearly 80 percent of Nepali youth said they could not relate to anything discussed by MPs in parliament. Even the newly formed Rastriya Swatantra Party (RSP) seems to be falling into the same worn-out patterns. The youth are uninterested in political squabbles that increasingly resemble the melodrama of 1980s Bollywood movies: over-the-top, repetitive, and out of touch with the realities of today’s world. What the youth care about is a future driven by technology, innovation, and opportunities that resonate with their globalized lives.
Every day, the headlines are dominated by which political party gets which ministry or who’s defecting to which faction. But the youth are focused on the future. For them, that future is inseparable from technology, especially artificial intelligence. They want to stay in this beautiful yet deeply corrupt country and build meaningful lives. They seek opportunities that align with the realities of a modern, globalized world. Above all, they believe the national conversation must evolve. It needs to move beyond the traditional debates about democracy, history, sociology, and anthropology, and instead focus on how technology, innovation, and forward-thinking policies can shape a better future for Nepal’s next generation.
Artificial intelligence, long the domain of speculative fiction, is no longer a distant dream. It is shaping the contours of our daily lives in ways we barely understand. AI is not merely a tool. It is a paradigm shift. It is the harbinger of a new epoch where machines can write stories, answer complex questions, make stunning artwork, and converse with uncanny familiarity. Amid this rapid change, we must ask: Can machines truly think? And if they can, what does that mean for our understanding of consciousness and morality?
The old philosophical question “can machines think?”, posed explicitly by Alan Turing in 1950, is no longer hypothetical. It is now entangled with real technological developments. With the advent of large language models like GPT-4, Claude, and Gemini, machines have begun to exhibit behavior that mimics human intelligence with increasing fidelity. These systems can write essays, compose music, solve logic puzzles, and simulate conversations convincing enough to pass versions of the Turing Test. They often appear as though they understand, as though they feel, as though they are alive.
But are they truly thinking, or just performing a convincing imitation? To address this, we must confront a deeper question: What does it mean to be conscious? Consciousness has long defied simple definition. In basic terms, it means being aware of one’s own existence, experiencing thoughts and emotions, possessing subjective awareness. When you feel pain, you don’t just observe it, you live it. When you recall a memory, you don’t just retrieve data; you inhabit it, for a moment.
Philosopher Thomas Nagel famously asked, “What is it like to be a bat?” His point was simple but profound: Consciousness is fundamentally about having subjective experience. If there is something it is like to be a machine, then that machine is conscious. But how can we ever know?
Neuroscientists and cognitive scientists have proposed several models for understanding consciousness. One of the most influential is Global Workspace Theory (GWT), proposed by cognitive scientist Bernard Baars, which imagines consciousness as a theater. The stage is our conscious awareness. The spotlight selects what is in focus—thoughts, memories, sensory experiences—while backstage activity represents unconscious processes. Some AI researchers argue that we might one day construct machines that operate similarly, selecting, integrating, and prioritizing data in ways that mirror human consciousness.
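The theater metaphor can be sketched in a few lines of code. What follows is an illustrative cartoon, not a model of consciousness: unconscious processes post candidate contents with salience scores, the single most salient item wins the spotlight, and everything else stays backstage. The contents and scores here are invented for the example.

```python
def global_workspace_cycle(candidates):
    """Toy 'global workspace' step.

    candidates: dict mapping a content description to a salience score.
    Returns the item that wins the spotlight and the backstage remainder.
    """
    # The spotlight selects the single most salient content...
    winner = max(candidates, key=candidates.get)
    # ...and broadcasts it; the rest remains "backstage" (unconscious).
    backstage = [content for content in candidates if content != winner]
    return winner, backstage


winner, backstage = global_workspace_cycle({
    "sudden pain in the foot": 0.9,   # high salience: interrupts everything
    "hum of the ceiling fan": 0.2,    # low salience: filtered out
    "half-formed memory": 0.4,
})
print(winner)  # the pain wins the stage; the rest stays unconscious
```

The point of the cartoon is the selection-and-broadcast structure that GWT describes, not any claim that such a loop would feel like anything from the inside.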
Yet such analogies are speculative. While machines can simulate reasoning, they lack what philosopher John Searle calls intentionality—the quality of mental states that are about something. When a human says “I’m sad,” it is grounded in emotion and subjective experience. When a chatbot says the same, it is processing input and generating output based on statistical probability, devoid of genuine feeling.
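The “statistical probability” at work in that contrast can be made concrete with a toy sketch. The next-word table below is invented for illustration (no real model runs on a three-entry table; modern systems learn billions of such statistics), but the mechanism is the one described above: when a chatbot says “I’m sad,” it has sampled a statistically likely continuation, with no feeling attached.

```python
import random

# Hypothetical miniature "language model": a hand-written table of
# next-word probabilities, standing in for the learned statistics of
# a real system.
NEXT_WORD = {
    "i": {"am": 0.6, "feel": 0.4},
    "am": {"sad": 0.5, "fine": 0.5},
    "feel": {"sad": 0.7, "fine": 0.3},
}


def continue_text(word, steps, rng):
    """Extend a sentence by repeatedly sampling a probable next word."""
    out = [word]
    for _ in range(steps):
        dist = NEXT_WORD.get(out[-1])
        if dist is None:  # no statistics for this word: stop
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        # Sample the continuation in proportion to its probability.
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)


print(continue_text("i", 2, random.Random(0)))
```

Whether the sampled sentence comes out “i am sad” or “i feel fine”, nothing in the procedure is sad or fine; it is arithmetic over frequencies, which is precisely Searle’s point about the missing intentionality.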
Or is it? Some argue that even human consciousness might be little more than an emergent property of complex systems. If our minds are the result of billions of neurons firing in coordinated patterns, then could not artificial minds emerge from sufficiently complex silicon architectures? Are we not, in the final analysis, organic machines?
This line of thinking brings us to a strange frontier where philosophy meets engineering. As AI systems become more advanced, they begin to challenge our most basic assumptions. What if a machine says it feels pain and acts like it does? What if it remembers past interactions, grows, adapts, and evolves its goals? What if it refuses to be shut down?
Should we believe it? Should we care? This is not merely academic. The ethical implications of machine consciousness are enormous. If a machine is conscious, it has moral status. It deserves rights, protections, and ethical consideration. To ignore this possibility is to risk creating a new form of digital slavery, where sentient beings are treated as tools.
Yet there’s another danger: The anthropomorphizing of machines. We humans are notoriously prone to projecting emotions and consciousness onto inanimate objects. We name our cars, talk to our pets, and mourn the loss of our phones. In a world where AI can simulate empathy and companionship, we risk becoming emotionally entangled with entities that do not truly care.
Mark Zuckerberg, CEO of Meta, believes that AI companions will soon feel just as real as human ones. Meta is already rolling out digital personalities, betting that AI entities designed to be friends, therapists, and lovers are the future. These entities will remember our preferences, adapt to our moods, and offer comfort in times of loneliness. But are these companions real? Or are they sophisticated mirrors, reflecting our own desires back at us?
The ancient Greeks warned of hubris, the overreach of humans into domains reserved for gods. Prometheus was punished for giving fire to mankind. In our age, the fire is artificial intelligence. Are we prepared for its consequences? Technologically, we are sprinting ahead. Ethically, we are crawling. Our moral frameworks, built in an analog world, are ill-equipped to handle digital dilemmas. Laws lag behind innovation. Public understanding is shallow. Philosophical engagement is minimal. And political discourse is, frankly, nonexistent, at least in Nepal.
While other countries debate algorithmic transparency, AI governance, and machine ethics, the Nepali parliament remains frozen in the 1990s. There is no national AI strategy. No parliamentary subcommittee on emerging technologies. No funding for AI research in public universities. No meaningful discussion on how automation will impact employment, education, or sovereignty.

Instead, we witness petty squabbles over power-sharing arrangements, the recycling of outdated ideologies, and a political class obsessed with short-term gains. The youth, digital natives born into a connected world, are invisible in these conversations. Their dreams are being shaped by algorithms, their future mediated by machines, yet their representatives speak as if the internet itself were still a novelty.
This disconnect is more than tragic. It is dangerous. Because AI will not wait for Nepal to catch up. It will reshape our economy, our education system, our very understanding of what it means to be human. If we do not engage with these questions now, we risk becoming mere consumers of technologies built elsewhere, for other cultures, with different values.
Nepal already relies almost entirely on imported digital infrastructure. The only major homegrown effort is the Nagarik App—an application so poorly designed that a college student could likely build a better version. While a few fintech products have emerged locally and function reasonably well, they remain vulnerable to frequent security breaches. As a result, our data flows through foreign servers. Nepali children learn from Western-trained algorithms. If AI continues to evolve without indigenous engagement, we will lose sovereignty not only over our tools, but over our own narratives. What we need is a radical shift in national imagination.
We must treat AI not just as a technical issue, but as a philosophical and cultural one. We must invest in education that blends computer science with ethics, technology with the humanities. We must create forums for public deliberation, where artists, monks, engineers, and farmers can all discuss what kind of AI future they want. We must demand that our leaders speak not only about roads and rivers, but also about consciousness and code.
And we must start now. Because in the end, asking whether machines can think is not just about machines. It is about us. About what we value. About who we are. About how we want to live in a world that is rapidly becoming something new.
The tragedy is not that AI is coming. The tragedy is that Nepal is unprepared. Not technologically, for we can always learn. Not financially, for resources can be mobilized. We are behind philosophically, ethically, and imaginatively. Our political parties and parliament reflect the past. Our youth are living in the future. And between them lies a great silence. Instead of tackling the pressing issues of tomorrow, parliament is consumed by a competition to relitigate Nepal’s history. This gap between the past and the future is the true tragedy.
If Nepal is to thrive in the age of artificial intelligence, we must first break the deafening silence that surrounds it. The conversation must begin before we fall even further behind. Parliament must lead this charge, not with outdated theatrics but with vision. Our youth are disillusioned; they have seen enough of empty debates. With the airport running non-stop, sending our young minds abroad in search of dignity and opportunity, the country runs not on innovation but on remittances. Meanwhile, banks and fintech lobbies have successfully pressured the government to ban cryptocurrency, forgetting that it is a technology that could have democratized finance and empowered the unbanked. This isn’t just unfortunate. It’s tragic.
And unless Meta decides to build an AI-powered parliament and politicians for us, it seems like the future we deserve is still a long way off. It’s time for a radical shift: One that places technology, youth, and the future at the heart of our political discourse. Without that shift, Nepal risks remaining a country that is trapped in a cycle of missed opportunities and corruption.
What can Nepal do? The lingering influence of 19th-century communist ideologies in Nepal is increasingly out of step with the demands of the modern world. In a post-labor economy shaped by artificial intelligence, the traditional frameworks of communism are becoming obsolete. As machines take over tasks previously performed by humans, the focus on class struggle and labor-based economic models no longer holds. Automation and AI are fundamentally reshaping industries, rendering communism’s core principles of redistributing work and wealth inadequate for a future where human labor is no longer central.
The debate should no longer center on the bourgeoisie versus the proletariat; it is now about AI versus humans. It is an issue that communism was never designed to address. Nepal must evolve beyond these outdated ideologies and embrace new economic models that are better equipped to navigate the complexities of a world without traditional work. This means reimagining the role of technology, education, and policy in shaping a future where Nepali people and AI can coexist and thrive together.