Legislative efforts by governments around the world to regulate artificial intelligence.

By Khariton Matveev

As AI becomes part of our everyday lives, there’s a shift in how we engage with technology. Remember when we used to Google our symptoms? Now, many of us rely on AI tools like chatbots for answers. This swift integration highlights the urgent need for smarter regulations to keep us safe and informed.  

This is where the EU AI Act steps in. Having entered into force in August 2024, it aims to establish a framework for ethical AI use, addressing issues like deepfakes and election manipulation. While it’s designed to protect us, there are concerns that it might hurt Europe’s competitiveness in the tech world. As we move forward, we should consider what this regulation means for our future and whether it can keep pace with the rapid advancements in AI.

The Rise of AI 

In the past year, conversations about Artificial General Intelligence (AGI) have skyrocketed. This concept refers to AI generating over 30% of the value in the economy’s value chains. If we reach that milestone, we’re not just talking about a fun app; we could see AI driving scientific breakthroughs or even constructing homes.

Five years ago, experts predicted a 90-year wait for AGI. Now, Elon Musk believes it could surpass human intelligence in the next few years, Ray Kurzweil suggests by 2029, and Sam Altman estimates about five years. As AI evolves, our responsibilities and risks expand as well. 

A quip circulating in the AI community humorously captures this: “The AI of the future will be a server locked in a basement, hidden behind a nuclear power plant, deep inside a mountain bunker.” Navigating our rapidly changing technological landscape calls for effective control. 

Competition in AI 

In the 20th century, the race for space exploration and nuclear weapons dominated headlines; today, it’s all about AI supremacy. The first nation to achieve dominance could unlock unprecedented economic growth and offer substantial social guarantees, such as unconditional income. 

The key players in that superpower race are the US and China, followed by the EU and UK, albeit with a lag. However, winning requires a large and complex high-tech value chain: lithography equipment, chip design, manufacturing facilities, modern software stacks, and access to vast, unique datasets. Participating in this race may turn out to be one of the most challenging tasks for modern states, while losing could raise serious questions about national security.

AI also poses internal threats, particularly to elections: it can sway public opinion through bot networks and personalized persuasion, which adds further motivation for regulation.

Personal Threats 

Beyond state-level concerns, individuals and private companies should also be wary. Today, 1.4 million Tesla cars are on US roads, and the rollout of control systems like Full Self-Driving (FSD) introduces completely new, large-scale security threats. AI-generated deepfakes undermine traditional identity verification, for example in online banking, forcing institutions to find entirely new solutions.

The emergence of new open-source AI models grants access to knowledge once thought restricted, posing biohazard risks. In theory, they could make it much simpler for individuals to develop biological weapons in garage settings or build precision explosive devices. Although AI models have safeguards, pathways to bypass these protections still remain.

AI Regulation in the EU and Beyond 

In light of these challenges, the EU AI Act seems a logical step forward. It classifies AI systems as follows: 

  • Unacceptable risk: fully prohibited (e.g., social scoring). 
  • High risk: subject to strict regulations (e.g., healthcare). 
  • Limited risk: requires user notifications (e.g., chatbots). 
  • Minimal risk: largely unregulated (e.g., AI in video games). 

Most obligations fall on providers of high-risk AI systems, regardless of their location. Users of high-risk systems in the EU also have obligations, though fewer than providers. 

Regulation and Its Implications 

As Sam Altman puts it, “Compute is gonna be the currency of the future.” While regulation is crucial, it’s important to strike the right balance. Overly strict rules can lead to strategic losses, especially if the EU can’t control and recreate key technologies on its own.

In terms of granted AI patents, the US and China dominate the landscape. China employs flexible regulations geared toward state control, while the US adopts a minimalist approach.

The US excels in strong language models, yet recent Chinese releases like Qwen (from Alibaba) have surpassed Meta’s leading free model. The EU has implemented the most comprehensive regulations, while China enforces strict control. In contrast, the US prioritizes rapid development to maintain its lead. Whether Europe’s stringent regulations on foreign technologies are justified remains a matter of debate. Lowering import barriers and offering subsidies could be alternatives to strengthen Europe’s tech sector.

Conclusions 

AI is rapidly evolving, presenting exciting opportunities and significant risks. While some regulation is necessary, governments must balance innovation with oversight. A more effective approach might involve lighter regulations for core tech firms and increased accountability for end-user application developers. 

Countries with leadership ambitions are expected to invest in independent development by nurturing local AI startups and expanding academic programs. Altman estimates that about $7 trillion is needed to revitalize the US chip and AI industries. Ultimately, prioritizing innovation over heavy regulation could pave the way for a sustainable future beyond the US and China.

What Lies Ahead – Advice for All of Us 

Take the time to understand and integrate AI tools into your daily routine. Expanding your knowledge will boost productivity and keep you competitive. Together, we can build a future where everyone benefits from AI technologies. Staying informed, engaged, and proactive will help shape a landscape where AI’s advantages are shared fairly.

About the Author

Khariton Matveev is a tech entrepreneur, recognized in Forbes 30 Under 30 for entrepreneurship. His first EdTech venture reached $120 million in annual revenue and 2.7 million active students. Honored by TechNation as Exceptional Talent in 2022, he relocated to the UK. Now, he’s leading a new AI-focused startup that aims to transform public use of AI and democratize data-driven decisions. The project, self-funded with over $1 million in initial investments, is set for a public launch soon.
