By Jonathan Armstrong
Artificial intelligence is dominating conversations everywhere right now, but what exactly are AI vacuums, and why could they pose a risk to organisations?
As AI adoption accelerates across Europe, a new and often overlooked risk is emerging: AI information vacuums. With search engines prioritising AI-generated answers, gaps in reliable information can be filled with content that appears credible but may be misleading or incomplete. Jonathan Armstrong explores how this shift is exposing organisations to new threats, and what they can do to protect themselves.
The ‘Unlocking Europe’s AI Potential 2026’ report[i] shows that 54% of European businesses are now using AI, up from 33% two years ago. This is a milestone moment, and one that signals not only widespread uptake, but growing confidence in AI as a vital business tool. But as adoption surges, so too do the risks.
AI may be transforming how organisations operate, but it is also reshaping how information is created, used and trusted. One overlooked emerging risk is the rise of AI information vacuums. An AI information vacuum occurs when there is little or no reliable, high-quality and suitable information available on a topic, and AI systems step in to fill the gap, often with content that appears authoritative but may be incomplete, misleading or simply wrong. European businesses can be especially exposed: organisations that publish much of their content in a language other than English are particularly vulnerable, given the bias towards English in most GenAI platforms.
This challenge is becoming more visible as search behaviour evolves, making it essential for businesses to understand it in order to respond effectively.
Search engines embrace AI
There have been major changes to how search engines deliver results, and increasingly, they are moving away from traditional paid-for and organic listings, instead prioritising AI-generated summaries at the top of their results. These answers are designed to be fast, convenient and authoritative, and users are responding accordingly.
Recent research from YouGov[ii] highlights just how widespread this shift has become. More than half of respondents (53%) say they have seen these AI-generated summaries often, while only 20% report never encountering them. Engagement is strong too, with 52% of users who have seen these summaries saying they find them useful, compared to 32% who do not.
This growing familiarity is translating into changing behaviour. Over 50% of users now prefer AI-generated summaries to traditional search listings, with click-through rates remaining high as these summaries often include links to supporting sources, reinforcing their credibility.
This shift has significant implications not only for search engine revenue models but also for information security, compliance and legal and reputational risk.
Understanding the risks
Internet scams have existed for as long as the internet itself. Historically, attackers diverted traffic from legitimate sites through typo-squatting, misleading domains, metatag misuse, or paid search manipulation. As user search behaviour has evolved, scammers have adapted too.
Many of these earlier scams relied on businesses not having a strong online presence. Where a digital information gap existed, attackers could exploit it to capture traffic for their own purposes. AI-first search creates a similar environment, where information vacuums can be exploited.
With AI-first search, a number of risks emerge:
- Manipulated AI summaries could redirect users to scam sites, hijacking an organisation's reputation or potential customers;
- Investment and employment scams could be amplified through AI-generated content;
- Credential phishing could be reinforced using fraudulent AI-informed pages;
- Hacktivists could exploit an AI vacuum to spread false rumours about an organisation, leaving little trace behind to identify those responsible.
The low cost of AI makes these attacks more feasible and scalable. According to Nina Schick[iii], a globally recognised expert on AI, three years ago a million tokens of AI inference cost $60; today, the same computing power costs just six cents. This reduction enables threat actors to experiment at scale and probe for vulnerabilities more efficiently.
So far, most public examples of AI vacuums being exploited have been light-hearted or humorous, for example Reddit posts that led GenAI to suggest glue to stick cheese to a pizza, or fake recipes for PB&J sandwiches. However, the potential for serious harm exists because of the way AI-first search functions.
Why AI information gaps are a growing concern
Earlier generative AI models were trained on restricted datasets, such as Common Crawl. Modern models now access broader datasets, including websites that allow AI crawling. However, AI model operations often lack transparency. For example, in December 2025, the European Commission opened an investigation into Google over concerns about the data used to train its GenAI models.
Efforts by organisations to protect their intellectual property can sometimes make the problem worse. Most reputable crawlers, such as OpenAI's GPTBot, Google-Extended and Anthropic's ClaudeBot, respect technical measures such as robots.txt files, which tell bots which parts of a site they may access. But if an organisation restricts AI access too heavily, information vacuums can appear.
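To illustrate the balance, a robots.txt policy can let reputable AI crawlers read public pages while keeping sensitive areas off limits, rather than blocking them outright. The user-agent tokens below are the ones those crawlers publish, but the directory paths are purely hypothetical; this is a sketch, not a recommended configuration:

```
# Hypothetical robots.txt: AI crawlers may read public content only
User-agent: GPTBot
User-agent: Google-Extended
User-agent: ClaudeBot
Disallow: /internal/
Allow: /

# All other bots face the same restriction
User-agent: *
Disallow: /internal/
```

A blanket `Disallow: /` for AI crawlers would protect content but also remove the organisation's voice from AI summaries, which is precisely how an information vacuum forms.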
A key concern is that many brands are underrepresented or invisible in AI summaries. The GEOMETRIQS study (October 2025)[iv] found that across the top 80 brands analysed, average visibility was just 4%, with one in five brands not appearing at all.
The imbalance becomes clearer when broken down by sector. Technology and consumer-facing industries dominate, with tech companies accounting for nearly 40% of all mentions and consumer goods adding a further 20%. By contrast, finance and energy firms together made up less than 5%, despite their significant economic weight.
Financial services performed particularly poorly, ranking as the second-worst sector with just 2.9% visibility, raising concerns about increased exposure to financial scams. Brands outside Anglo-American markets also fared worse.
Taken together, the findings suggest that AI systems tend to amplify brands with strong public visibility and abundant English-language content, while industrial, regional and non-Western firms remain underrepresented.
Recommended actions for organisations
To mitigate these risks, organisations should review their AI strategy and risk profile. Suggested measures include:
- Monitoring AI-generated results regularly, as AI search outputs can change frequently.
- Developing an AI optimisation strategy, similar to traditional SEO, including reviewing robots.txt configurations and making content AI-friendly. Employ Generative Engine Optimisation (GEO) and E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) principles to improve credibility.
- Integrating AI risk management into brand protection, covering domain monitoring, trademark enforcement, and other reputation safeguards.
- Promoting AI literacy internally. Educating staff on AI risks and opportunities aligns with EU AI Act requirements and strengthens mitigation strategies.
- Having a plan to react to an incident. Some organisations will want to align this with an existing crisis management or data breach plan.
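As a small illustration of the robots.txt review step above, Python's standard `urllib.robotparser` module can verify which paths a given AI crawler may fetch under a policy. The policy text, site URL and paths here are hypothetical examples; a real review would fetch the organisation's live robots.txt and test each AI user agent it cares about:

```python
from urllib import robotparser

# Hypothetical policy: AI crawlers may read public pages, not /internal/
POLICY = """\
User-agent: GPTBot
Disallow: /internal/
Allow: /

User-agent: *
Disallow: /internal/
"""

def check_access(policy: str, agent: str, url: str) -> bool:
    """Return True if `agent` is allowed to fetch `url` under `policy`."""
    rp = robotparser.RobotFileParser()
    rp.parse(policy.splitlines())
    return rp.can_fetch(agent, url)

# Public marketing page remains visible to the AI crawler
print(check_access(POLICY, "GPTBot", "https://example.com/products"))   # True
# Internal area stays off limits
print(check_access(POLICY, "GPTBot", "https://example.com/internal/x")) # False
```

Running this kind of check across the AI crawlers an organisation has heard of is a cheap way to confirm that protective measures have not accidentally shut its public content out of AI-generated answers.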
Conclusion
As AI continues to reshape how information is created and consumed, AI information vacuums present a subtle but growing risk for organisations. Left unmanaged, they can distort visibility, amplify misinformation and expose businesses to reputational and legal threats. The key is not to resist AI, but to engage with it strategically, ensuring accurate, accessible content and active monitoring. Organisations that understand and address these gaps early will be far better positioned to protect their brand and maintain trust in an increasingly AI-driven business world.