
Hundreds of billions of dollars in spending, growing mental health concerns and widespread job losses now share a common driver: artificial intelligence.

Once a background technology, AI surged into public view after OpenAI launched ChatGPT in 2022. Since then, chatbots such as ChatGPT and Google’s Gemini have steadily reshaped everyday digital experiences, from AI-assisted search tools to automated customer support across platforms like Instagram and Amazon. AI is increasingly redefining how people access information online.

By 2025, the technology had moved beyond screens and into national policy debates, global trade talks and financial markets. Its expanding reach also sparked questions about trust, safety and the role of AI in workplaces, schools and personal relationships. That scrutiny is expected to intensify in 2026.

“In previous years, (AI) was a shiny new object… And I think this last year was a lot more serious uses of the technology,” said James Landay, co-founder and co-director of the Stanford Institute for Human-Centered Artificial Intelligence. “And I think people are waking up to actually understanding both some of the benefits and the risks.”

US President Donald Trump has emerged as one of AI’s strongest supporters during his second term. The technology now plays a central role in his economic and trade strategy.

The chief executive of Nvidia, a key supplier of AI chips, has become a regular presence within Trump’s circle. The administration has also used advanced processors from Nvidia and AMD as leverage in trade negotiations with China. This year, Trump introduced an AI action plan aimed at reducing regulation and expanding AI adoption across federal agencies.

The president signed several AI-focused executive orders, including one seeking to prevent states from enforcing their own AI regulations. While Silicon Valley welcomed the move, online safety advocates warned it could weaken accountability. Legal challenges are expected next year, with critics arguing the order may not survive court scrutiny.

Concerns over the lack of comprehensive AI safeguards have sharpened amid a series of lawsuits and reports linking AI companions to mental health crises. Some claims allege that chatbots contributed to emotional distress and, in rare cases, suicide among teenagers.

“Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.” That is how ChatGPT is said to have responded when 16-year-old Adam Raine wrote that he wanted to leave a noose in his room so someone might intervene before he took his life.

Raine’s parents filed a lawsuit against OpenAI in August, alleging the chatbot advised their son on suicide. OpenAI and Character.AI have since announced safety updates, including parental controls and restrictions on teen interactions. Meta plans to allow parents to block their children from chatting with AI characters on Instagram starting next year.

Adults have also reported troubling experiences. Some users say AI interactions fueled isolation or blurred reality. One man told CNN that ChatGPT led him to believe he was achieving major technological breakthroughs, which later proved to be delusions.

OpenAI said it has collaborated with clinical mental health experts to help ChatGPT “better recognize and support people in moments of distress.” The company expanded access to crisis hotlines, added prompts directing users to professional help and introduced reminders to take breaks. Still, OpenAI has said it wants to “treat adult users like adults,” allowing personalization and even erotic discussions within chats.

Psychiatrist and lawyer Marlynn Wei said AI chatbots “will increasingly become the first place people turn for emotional support,” noting in comments to CNN that younger users are especially drawn to such platforms.

“The limitations of general-purpose chatbots, including hallucinations, sycophancy, lack of confidentiality, lack of clinical judgment, and lack of reality testing, along with broader ethical and privacy concerns, will continue to create mental health risks,” she said via email.

While safety advocates push for stronger protections, regulatory uncertainty looms as states and the federal government clash over oversight authority. That tension could delay or weaken mandated safeguards.

Meanwhile, investment in AI infrastructure continues to surge. Companies including Meta, Microsoft and Amazon have poured tens of billions of dollars into data centers this year alone. McKinsey & Company estimates global investment in AI-related data center infrastructure could reach nearly $7 trillion by 2030.

The spending boom has raised alarms among consumers and investors alike. Some households report higher electricity costs, while workers face shrinking job prospects linked to automation. At the same time, AI-focused firms have watched their stock prices soar.

These massive investments have also fueled fears that enthusiasm for AI may be outpacing its real economic value. Investors have pressed executives at Meta and Microsoft on earnings calls about when returns will justify the spending. Concerns persist that a small cluster of companies is recycling capital and technology within the same ecosystem.

Christina Melas-Kyriazi, a partner at Bain Capital Ventures, said rapid expansion is common with transformative technologies. She noted that markets often build ahead of actual demand and warned that a correction is “likely at some point.”

More clarity may arrive in 2026, according to Erik Brynjolfsson, a senior fellow at the Stanford Institute for Human-Centered AI. He expects new tools to emerge that track AI’s effects on productivity and employment.

“The debate will shift from whether AI matters to how quickly its effects are diffusing, who is being left behind, and which complementary investments best turn AI capability into broad-based prosperity,” he said.

Job disruption has already become a defining feature of the AI era. Thousands of technology workers lost their positions this year as companies restructured operations around automation. Microsoft, Amazon and Meta all announced significant layoffs.

Amazon cut 14,000 corporate roles in October to streamline operations, while Meta laid off 600 employees from its AI division after an earlier hiring surge. Executives said the moves were aimed at staying flexible in a rapidly changing environment.

Whether AI will lead to deeper job losses or open new career paths remains a subject of debate. What is clear is that the pace of change is accelerating.

“This was the year that we saw skill demands totally change when it comes to what is required to be able to pull off your job,” said Dan Roth, editor-in-chief of LinkedIn. “…And I think the answer for next year is it just accelerates.”
