In defense of advancing AI

The debate on artificial intelligence often swings between utopian dreams and dystopian fears. Yet, AI's true potential lies in democratizing computing and enhancing productivity, positioning it as a catalyst for progress.

AI's risks are real, but they are being met with proactive regulation by global bodies and self-regulation by tech giants. We should not let our deepest fears, born of anthropomorphizing AI and viewing its rise as a binary conflict, misguide us. Ultimately, AI is here to stay, and it's our duty to enable it responsibly rather than hold it back.

Leveling the playing field

AI's allure lies in making complex tasks simple through natural language (English is the hottest new programming language), leveling the playing field across socio-economic and professional divides. This empowerment allows non-technical creatives to code and techies without artistic skills to bring their visions to life.

Moreover, in this technological revolution it's small companies, not just tech giants, that are making significant strides. It isn't FAANG that dominates AI but OpenAI, a plucky startup founded in 2015. And unlike many technologies' drift toward monopoly, AI development seems to encourage fragmentation, as specialized models tend to outperform large general-purpose models on industry-specific tasks.

Artificial Intelligence is only artificially new

Of course, AI isn't new; it has been evolving since 1956, with milestones like Deep Blue (1997) and AlphaGo (2016) marking its journey. Even ChatGPT's conversational interface, while innovative, is more a testament to AI's ongoing evolution than a revolution.

Let’s not mistake tangibility for proof of superior intelligence. Even Sam Altman himself has said that we're at the end, not the beginning, of this phase of AI models. So the idea that we're facing a completely new technology seems far-fetched.

The anxiety over job displacement is misplaced; the much-foretold mass displacement (75+ years in the making) has not yet come to pass. The rise of AI might challenge knowledge workers, traditionally seen as immune to tech shifts, but history suggests a positive outcome: new technologies create as many or more opportunities than they displace (there are 300 years of history to prove it), and AI should be no exception.

With all that said, it's important that AI regulation and corporate incentives prioritize retraining and upskilling workers, so that when displacement happens, new opportunities await them. We need a concerted effort between governments and the private sector to establish robust educational programs and financial support systems.

With great power has already come great regulation

Meanwhile, the regulatory efforts surrounding AI already surpass those of earlier technologies. The rapid rollout of content moderation policies for every major AI chatbot signals a commitment to responsible development (a far cry from the non-existent policing of early social media). The EU has already enacted landmark AI regulation, and the US is not far behind.

At the same time, the biggest tech companies have been proactive about self-regulating AI technology. From red-teaming to ethical study to prominent disclaimers of AI as “experimental” and often incorrect, tech companies are eager to stem the hyperbole (a far cry from Wikipedia’s laissez-faire approach).

What the naysayers also omit is that for the first time in a long time, the very technology we are trying to regulate can be used to regulate itself. Already, governments are using AI to better serve the public and there is no shortage of opportunities to further build out their toolbox.

The only thing to fear is fear of ourselves

Fears of AI's potential harm often stem from a lack of understanding and an overestimation of impending superintelligence and its short-term impacts. We need look no further than the slow progress of AI-powered autonomous vehicles, once heralded as inevitable by 2020, to see the gap between hype and reality and to question the immediacy of any AGI threat. The idea that we can accurately map AI to the human brain to achieve AGI, when we can't even make AI-driven cars work, illustrates the arrogance (or ignorance) at play.

Of course, we tend to anthropomorphize AI, attributing human-like motivations and behaviors to machine intelligence, which distorts our perception of its risks. Indeed, incidents of AI-enabled racism or malfeasance highlight the need for ethical training and regulation. However, proper governance can mitigate misuse, encouraging responsible development and application.

In fact, it’s clear that the near-term fear is not AI going rogue, but people going rogue with AI. If anyone is going to stamp out humanity, it is most likely humans themselves, using neutral AI technology for nefarious ends. As the quote often applied to AI puts it, "we have Paleolithic emotions, medieval institutions, and godlike technology," and we have already seen terrible, human-directed uses of AI: discrimination, misinformation, and hacking.

So we should not advocate the flawed mantra that "AI doesn't kill people, people kill people." But the solution is not to block or dismantle the technology; it is to learn from our mistakes and expand our understanding. We can strictly punish AI misuse the way we punish hackers while encouraging AI research to continue.

But by far the most prevailing myth that needs dismantling is the binary standoff we imagine between humanity and AI, as if we're headed for a showdown and we must pick sides. The truth is far more synergistic. Our unique position allows us to bridge the tangible world of atoms with the intangible world of bits, offering us an unparalleled advantage. Today, doctors already use AI to better detect cancer while teachers are using AI to focus more on education than administration. It's not about us versus them; it's about us with them, enhancing our collective potential through augmentation.

We should embrace AI to own our future

Instead of viewing AI as an adversary, we should embrace a synergistic relationship that enhances our capabilities. From healthcare to education, AI is already augmenting human efforts, showcasing a partnership that benefits society.

Our future with AI isn't set towards dystopia; it's a path we're shaping with thoughtful regulation, ethical development, and optimism. By fostering a hybrid approach where AI complements human skills, we ensure technology serves to enhance, not detract from, our collective well-being.