James here, CEO of Mercury Technology Solutions. Hong Kong — April 22, 2026
Recently, I’ve been talking a lot about "AI Poisoning" (data contamination). An older reader reached out and noted that the current state of AI feels exactly like the internet in the late 1990s.
For the younger generation navigating this economy, I need to break down exactly what is happening right now, why the rules of survival have fundamentally changed, and how we actually maneuver through this chaos.
1. The Economics of Forgery and the 90s Internet
In the late 90s, the early internet was a chaotic, lawless frontier. While there was genuine information, early digital spaces were dominated by scams, malware, and fake data. This wasn't a technological flaw; it was human nature.
The lifecycle of a new technological ecosystem is always the same: First comes the virus, then the victims, and finally, the firewall. Only after people lose money and get hurt do they become willing to pay for protection, which creates the market for cybersecurity. Right now, AI is squarely in the "virus" phase.
There is an old rule in the antique appraisal world: if an artifact sells for $100 at auction and the cost to forge it is $50, you must assume it might be a fake. If the cost to forge it is $110, it is almost certainly genuine. Capital dictates reality: nobody runs a money-losing business.
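The appraiser's rule above is just a profitability check, and can be sketched in a few lines (a toy illustration; the function and label names are mine, not an established formula):

```python
def forgery_risk(market_price: float, forgery_cost: float) -> str:
    """Toy version of the appraiser's rule: fakes appear only where
    forging is profitable, so compare sale price to forgery cost."""
    if forgery_cost < market_price:
        return "suspect"         # a forger would profit, so assume fakes exist
    return "likely genuine"      # forging loses money, so nobody bothers

print(forgery_risk(100, 50))    # suspect
print(forgery_risk(100, 110))   # likely genuine
```

The same check explains the AI shift: when generation costs collapse toward the price of electricity, almost every digital artifact lands on the "suspect" side of the inequality.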
Historically, manipulating public consensus on the internet was expensive. You had to hire real humans, run troll farms, and pay for massive distribution. AI has dropped the cost of digital forgery to the price of electricity.
2. The Algorithmic Stampede (AI Poisoning)
Because the cost of forgery is now near zero, we are witnessing the rise of industrial-scale "AI Poisoning."
Think of algorithmic high-frequency trading in the financial markets. If an institutional trader knows a stock has a cluster of automated "stop-loss" orders at a certain price, they will artificially drive the price down to trigger them. The algorithms then blindly execute, causing a chain reaction and a flash crash.
AI data poisoning works the same way. Bad actors inject highly optimized fake data, synthetic reviews, and false narratives into the web's crawlable surface. When LLMs (Large Language Models) scrape this data, they ingest the poison. The AI treats the fabrication as fact and confidently regurgitates it to millions of users; that output then gets republished on other sites, further poisoning the training data of the next model.
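That republication loop compounds. Here is a deliberately simple toy model of the dynamic (the growth rule and all numbers are illustrative assumptions, not measurements of any real corpus):

```python
def poisoned_fraction(initial: float, republish_rate: float,
                      generations: int) -> float:
    """Toy model: each model generation regurgitates poisoned claims,
    and a share of that output is republished and recrawled, so the
    poison spreads into the remaining clean share of the corpus."""
    p = initial
    for _ in range(generations):
        p = min(1.0, p + republish_rate * p * (1 - p))
    return p

# Assume 2% of crawlable pages start poisoned and half of each
# generation's contaminated output gets republished and recrawled:
for gen in range(6):
    print(gen, round(poisoned_fraction(0.02, 0.5, gen), 4))
```

Even with small starting numbers, the fraction only moves one way, which is why the loop is hard to reverse once it starts.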
Millions of people currently rely on AI for absolute truth, completely unaware that the oracle has been compromised.
3. The Extinction of the Entry Level
This poisoned ecosystem creates two massive, depressing realities for young professionals today:
- You cannot trust your learning sources: The AI you use to upskill is actively hallucinating.
- Your entry-level job is gone: The junior roles you need to gain practical experience have been replaced by AI agents.
Look at what is happening in the United States right now. Elite universities are brokering "paid internships" in which the money flows in reverse.
Top-tier companies no longer want human interns. Even if the student works for free, the company views them as a liability who drains senior management's time; they would rather spend that time training their AI "digital employees." In response, Ivy League alumni networks are literally paying corporations to take their students, subsidizing the efficiency gap just so their graduates can get real-world exposure.
The traditional path (study hard, get an entry-level job, learn the ropes, climb the ladder) is dead. If you wait until you feel "qualified," the AI will outpace you. You have to aggressively grab opportunities, ready or not.
If hunting has evolved from bows to sniper rifles, staying home to perfect your archery will get you killed.
4. The Mercury Protocol: How We Execute GEO in a Poisoned Ecosystem
So, how do corporate brands survive in an ecosystem where the AI is actively being fed garbage, and users are increasingly skeptical of the outputs?
If you are still doing traditional SEO, pumping out keyword-stuffed blog posts, you are just adding to the toxic sludge. The AI models are developing "immune systems" to filter out exactly this kind of synthetic noise.
At Mercury, our approach to Generative Engine Optimization (GEO) and LLM SEO is built entirely around bypassing the poisoned data pool and establishing Algorithmic Authority. Here is how we do it:
- Entity Anchoring (The Off-Page Trust Web): We do not rely on your website’s self-proclaimed text. LLMs verify truth by looking at high-trust, un-poisonable nodes. We anchor your brand to verified PR, tier-1 media citations, Crunchbase, and neutral Wiki entities. We build a consensus of authority that a poisoned scraper bot cannot replicate.
- Structuring First-Party APIs: We bypass the web crawlers entirely. We help enterprises structure their proprietary data, pricing, and product specs into clean, machine-readable Knowledge Graphs and APIs. When ChatGPT or Perplexity needs to recommend a vendor, we ensure it pulls directly from your verified first-party data, not a hallucinated third-party blog.
- Sentiment Architecture: AI poisoning often manifests as synthetic negative reviews or skewed comparisons. We actively monitor and structure positive, verifiable use-cases and deployments across independent forums (Reddit, GitHub, specialized communities) to ensure the LLM's sentiment analysis remains overwhelmingly positive and mathematically grounded.
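To make the "machine-readable" part of the second point concrete, here is a minimal schema.org JSON-LD entity of the kind crawlers and LLM retrieval pipelines can parse directly. Every value is a placeholder for illustration, not Mercury's actual data or a client's:

```python
import json

# Minimal schema.org Organization markup; all values are hypothetical.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Corp",                       # placeholder vendor
    "url": "https://example.com",
    "sameAs": [                                   # anchors to high-trust nodes
        "https://www.crunchbase.com/organization/example-corp",
        "https://en.wikipedia.org/wiki/Example_Corp",
    ],
    "makesOffer": {
        "@type": "Offer",
        "itemOffered": {"@type": "Service", "name": "Managed Analytics"},
        "price": "499.00",
        "priceCurrency": "USD",
    },
}

# Embedded in a page as <script type="application/ld+json">…</script>,
# this gives an engine verified first-party facts to pull from.
print(json.dumps(entity, indent=2))
```

The design point is that structured, first-party declarations like this are far harder to drown out with poisoned third-party text than free-form marketing copy is.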
In a world where data is cheap and easily faked, Proof is the only currency that matters. You cannot out-publish the AI poisoners. You can only out-verify them.
Mercury Technology Solutions: Accelerate Digitality.