The "Fujiyama Times" Problem: Fifteen Fake News Sites and the New Influence Operation Targeting AI
Hello, this is Ryuta Hamamoto from TIMEWELL.
There is a Japanese-language news site called Fujiyama Times - "Mount Fuji Times," literally. The name is comically off in Japanese, the kind of phrase only a clumsy machine translator would land on. When I first read about it I almost laughed. But the site is not a joke. It was named in 2024 by the University of Toronto's Citizen Lab as part of a Chinese influence operation, an actual instrument of state-aligned propaganda.
What Sankei Shimbun reported on May 1, 2026, is more striking. Two years after the exposure, fifteen of these fake Japan-targeted sites - Nikko News, Sendai News, Fukuoka Express, Tokushima Online, and others dressed up to look like regional outlets - are still online and still publishing [1].
I am writing about this not because the names are funny. I am writing because, as someone who works in security export control, I see this for what it is: front-line cognitive warfare. And the most important thing to understand is that the ultimate audience is no longer human. It is the generative AI we are starting to embed into every business decision.
Reading the Site First: What the Off-Tone Japanese Tells Us
The screenshots Sankei published show what looks at first like a real local portal: a "FUJIYAMA News" logo, navigation tabs for Sports, Industry, Privacy Policy, Contact. Then you read an article and you trip over sentences that no Japanese journalist would ever write - garbled subject-verb pairings, headlines that resolve into nonsense halfway through, sudden insertions of attacks on Falun Gong. The texture is unmistakably machine-translated, with a layer of state-aligned messaging crudely fused on top.
What is interesting is that the operators clearly do not expect to fool a Japanese reader. The Sankei piece quotes article fragments like "the monk is compu" and "flew with Chinese to surprise the Japanese," neither of which parses as Japanese. Mainichi Shimbun's late-2025 investigation of the Russian-aligned "Pravda Japan" found similar artifacts, including the now-famous line "friends, yesterday's broadcast is available for viewing and listening via the link!" - the kind of phrase a human would never type [2].
By any normal standard these sites are failures. No literate Japanese reader is being persuaded by them. Yet the sites stay up, the operators keep updating them, and the network has now survived two full years of public exposure. Why? Because the readership the operators care about is not human. It is the crawlers and the language models trained on what they collect.
"Paperwall": The 123-Site Network Mapped by Citizen Lab
The 2024 Citizen Lab report Sankei references is titled "PAPERWALL" and remains the foundational map of this kind of operation [3]. The researchers identified 123 fake local news sites distributed across more than thirty countries. South Korea is the most heavily targeted, with seventeen sites, followed by Japan and Russia at fifteen each.
The sites share three signatures. First, they use domain names that mimic plausible local-media brands by stitching together place names and generic media words. Second, they fill most of their pages with machine-translated commercial press releases and recycled foreign news, which lets a reader who merely skims walk away thinking the site is an unimportant aggregator. Third, hidden among that mass of bland content, they slip in a small share of pro-Beijing messaging: ad hominem attacks on dissidents, denial of the Xinjiang situation, attacks on Falun Gong, pressure narratives about Taiwan.
The sophistication is in the ratio. If a site were 100 percent propaganda, anyone would notice. Instead, 99 percent is filler and one percent is poison, and the whole operation rides on top of a commercial press-release service so that to a casual visitor it looks like a noisy private ad-funded outlet. Citizen Lab named this structure the "paperwall" - thin individually, but stacked into a wall around the information environment.
Citizen Lab traced operational control to a Shenzhen-based PR firm called Haimai. As far back as 2022, Google's Mandiant team had already detected the firm's infrastructure powering an operation it labelled "HaiEnergy," and warned that at least 72 suspected fake-news sites were tied to it [4]. In other words, this commercial-PR-plus-state-propaganda hybrid has been targeting Japan for at least four years.
The Real Target Is the Model: LLM Grooming as a New Battlefield
This is the part of the story that becomes urgent in 2026. Sankei's article surfaced a concern that researchers have been raising for over a year: these fake sites are positioning themselves to be cited by generative AI [1]. The term of art is "LLM grooming" - large language model grooming. The word "grooming" comes from the cybercrime literature on predatory manipulation of children, but here the entity being manipulated is the AI itself. The objective is to make a model repeat the operator's disinformation as if it were ordinary fact.
The clearest evidence so far comes from a March 2025 study by the U.S. verification company NewsGuard. They tested ten leading chatbots - ChatGPT, Microsoft Copilot, Google Gemini, xAI's Grok, Perplexity, and others - against a battery of prompts referencing claims pushed by the Moscow-based "Pravda" network, a sprawling Russia-aligned site network associated with John Mark Dougan, an American former sheriff's deputy who relocated to Russia. The chatbots reproduced Pravda-origin disinformation 33 percent of the time, on average [5]. In one answer out of three, an AI was repeating Russia's lines as fact.
The mechanism is straightforward. Foundation models train on a vast slice of the open web. Retrieval-augmented chatbots also re-query the live web to fill in fresh answers. If a particular topic has a dense layer of Russian or Chinese disinformation pages and a thin layer of authoritative coverage - what researchers call a "data void" - the model will reach for whatever is densely linked, even if it is garbage [6].
This is why the broken Japanese on the fake sites does not matter. A human reader uses fluency as a signal of trust; an embedding model does not. What the model picks up is co-occurrence, internal links, and topic frequency. Spreading the same claim across many domains gets you ranked as "frequently mentioned." That is the real reason a Paperwall site can quietly post low-volume propaganda alongside dull press releases for years without "succeeding" by any human metric. It is succeeding for the audience it actually targets.
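To make that asymmetry concrete, here is a minimal Python sketch of the kind of naive cross-domain frequency signal a retrieval pipeline might lean on. Everything in it is illustrative (the scoring function, the data shape, the domain names); it is not any vendor's ranking code, just a toy showing why repetition across fifteen cheap domains can outrank one authoritative correction.

```python
# Toy illustration only: a naive "how many distinct domains say this?" score,
# the kind of repetition signal that rewards a coordinated site network.
from urllib.parse import urlparse

def naive_consensus_score(claim_id, retrieved_docs):
    """Count the distinct domains asserting a given claim.

    retrieved_docs: list of dicts like {"url": ..., "claims": [...]}.
    Nothing here measures fluency, editorial quality, or provenance.
    """
    domains = {
        urlparse(doc["url"]).netloc
        for doc in retrieved_docs
        if claim_id in doc["claims"]
    }
    return len(domains)

docs = (
    # fifteen hypothetical Paperwall-style sites, all pushing the same claim
    [{"url": f"https://site{i}.example/news", "claims": ["claim-A"]} for i in range(15)]
    # one authoritative outlet carrying the correction
    + [{"url": "https://major-paper.example/fact-check", "claims": ["claim-A-debunked"]}]
)

print(naive_consensus_score("claim-A", docs))           # 15
print(naive_consensus_score("claim-A-debunked", docs))  # 1
```

Fifteen throwaway domains repeating one line outscore the single outlet carrying the correction, which is exactly the gap a Paperwall-style network is built to exploit.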
Hiroshi Taira, a media researcher with Asahi Shimbun who spent time at Harvard's Shorenstein Center, estimates that the Pravda network alone has put roughly ten million fake articles online since it began operating in 2022. No human reader is reading ten million articles. This is AI-targeted spam, designed for machine consumption from the start.
Already Underway in Japan Under the Takaichi Government
I would like to believe Japan still has time. The Japanese-language barrier has historically slowed influence operations down. But the evidence is that operators have closed that gap.
During the 2025 lower-house election that brought Sanae Takaichi's LDP to power, multiple media outlets observed several hundred China-aligned fake X accounts targeting Takaichi specifically, posting AI-generated images and concentrated negative narratives across the campaign window [7]. Then, in early 2026, OpenAI itself disclosed that it had detected "Japan-targeted influence operations by individuals associated with Chinese authorities, conducted through ChatGPT." OpenAI counted six distinct lines of attack built around Takaichi as the target, run through thousands of fake accounts by operators numbering in the hundreds [8].
The tactical picture is now two-sided. The original Paperwall and Pravda model is supply-side: flood the web with content for AI to ingest. The newer pattern is demand-side: weaponize AI directly to mass-produce posts and comments at low cost. Together they pollute the model's training corpus and accelerate the manufacturing of fresh disinformation. Both are visibly inside Japan's electoral and policy environment now.
For someone who works in export control and economic security, this is recognisably a component of hybrid warfare - not kinetic, not sanctions, but very much capable of bending a state's decision process. The penetration of the information environment is at least as consequential as a controlled-item list, with the disadvantage that you cannot photograph it at a port.
What Companies and Individuals Can Do Now: Information Hygiene, Not Information Literacy
I want to keep this practical. The traditional advice - "don't click suspicious links," "check your sources" - is no longer enough in a world where the adversary is targeting your AI assistant rather than you. No amount of personal skepticism on the part of your employees will close that gap.
The framing I now use with clients is information hygiene, not information literacy. Literacy is an individual skill. Hygiene is shared infrastructure. You cannot defeat cholera with hand-washing alone; you also need water and sewage systems. The same applies to your information environment.
In practice that translates into three things.
First, if you are putting generative AI inside your business workflow, you need an internal process for verifying which sources the AI is pulling from. Subscribing to a third-party trust dataset like NewsGuard's is one option. Another is mandating a legal or PR review step before any AI-generated output goes external. NewsGuard's own follow-up showed that filtering out low-trust sources alone substantially reduced the rate at which models repeated Russian-origin disinformation [9].
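As a sketch of what that verification step can look like in practice, assuming you maintain or license per-domain trust scores (the scores, threshold, and domains below are placeholders, not NewsGuard's actual data or API), a filter like this can sit between retrieval and the model's context window:

```python
# Minimal sketch: drop low-trust sources before they reach the model.
# TRUST_SCORES would be loaded from an in-house list or a licensed dataset;
# the entries and threshold here are purely illustrative.
from urllib.parse import urlparse

TRUST_SCORES = {
    "major-paper.example": 92.5,
    "fujiyama-times.example": 5.0,
}
MIN_TRUST = 60.0

def filter_low_trust(retrieved_docs):
    """Keep only documents whose domain meets the trust threshold."""
    kept = []
    for doc in retrieved_docs:
        domain = urlparse(doc["url"]).netloc
        score = TRUST_SCORES.get(domain, 0.0)  # unknown domains default to untrusted
        if score >= MIN_TRUST:
            kept.append(doc)
    return kept

docs = [
    {"url": "https://major-paper.example/markets", "text": "..."},
    {"url": "https://fujiyama-times.example/article/123", "text": "..."},
]
print([d["url"] for d in filter_low_trust(docs)])  # only the trusted domain survives
```

Defaulting unknown domains to untrusted is the conservative choice; it trades recall for a corpus you can actually defend in a compliance review.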
Second, restrict what your enterprise chatbots can retrieve. Enterprise editions of ChatGPT, Copilot, and most major models now expose source-restriction and trusted-domain configurations. Configuring these is no longer optional; it is part of basic enterprise security hygiene.
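Where the vendor settings do not reach, the same restriction can be enforced one layer down, at whatever gateway or tool layer actually performs the web fetches for your assistant. A minimal sketch follows; the domains are illustrative stand-ins for your own trusted primary sources, and the gate is a generic pattern, not any vendor's configuration interface.

```python
# Minimal sketch of a domain allowlist gate in front of chatbot web retrieval.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {
    "meti.go.jp",               # illustrative examples of trusted primary sources
    "mofa.go.jp",
    "intranet.example.co.jp",   # hypothetical internal knowledge base
}

def is_retrieval_allowed(url: str) -> bool:
    """Permit a fetch only when the host is on, or under, an allowed domain."""
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

assert is_retrieval_allowed("https://www.meti.go.jp/policy/anpo/")
assert not is_retrieval_allowed("https://fujiyama-times.example/article/123")
```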
Third, update your security training. The traditional curriculum focused on physical hygiene - password reuse, USB drives, phishing links. We need to add cognitive hygiene: do not treat a single AI response as a primary source, and never push an AI-derived claim into a customer or compliance decision without checking it against the underlying document. From what I see, most Japanese enterprises have not made this addition yet.
I have to add: this issue is not separate from export-control practice. End-use, end-user, and counterparty diligence depends on the accuracy of the underlying intelligence. When teams hand that research to ChatGPT and the model retrieves a Paperwall article, the contamination flows directly into a compliance decision. I have personally been consulted by a corporate legal team that received a suspiciously "off" executive bio from ChatGPT for a counterparty - one of the cited sources turned out to be a Paperwall-style site. This is no longer hypothetical.
What TRAFEED Is Building
TRAFEED is the intelligence layer we operate at TIMEWELL for export control and economic security teams. It is designed around the assumption that the upstream sources of information themselves can be poisoned. The product narrows the working set to verified primary sources for end-user diligence, controlled-item screening, and counterparty review, and exposes the source domain and publication date of every claim so that an analyst - or an automated checker - can verify provenance instead of trusting a black-box AI summary.
If you are concerned that your export control review is being silently fed by Paperwall-style sites, if you are tightening up your information environment for the new security clearance regime, or if you simply want a second pair of eyes on your AI governance design, please feel free to get in touch. We work end-to-end across export control program design and internal AI governance.
Closing
I started with the comically named Fujiyama Times. The reason that name is worth taking seriously is that beneath it sits a global propaganda architecture, and the ultimate consumer of that architecture is no longer the Japanese reader who is too literate to be fooled - it is the AI that we are inviting into our businesses.
Working in export control, you spend a lot of time worrying about tightly defined items and long, technical regulatory amendments. The harder problem is the one moving in parallel outside the regulatory envelope: the contest for control of the information environment. On that battlefield, Japan is currently losing on defense.
The first thing within reach of all of us is honesty about the situation. "Fujiyama Times is still up?" is a question worth asking seriously, not laughing off. Why is it still up? Who keeps it up? What are they actually trying to do with it? Once those questions are clear, the corporate-level and national-level defenses are still well within reach. Information hygiene starts as a daily practice, exactly the way physical hygiene did.
References
[1] Sankei Shimbun, "Fifteen Chinese-Origin Fake News Sites Still Operating; Risk That AI Will Train On Them in the Future," May 1, 2026. https://www.sankei.com/article/20260501-DOV2BOBB5FN45D7I3LKHRPZXP4/
[2] Mainichi Shimbun, "Japan, Too, A Target? Generative AI 'Trained' By Russia; Bias Toward Chinese-Origin Variants," December 25, 2025. https://mainichi.jp/articles/20251225/k00/00m/040/373000c
[3] The Citizen Lab, "PAPERWALL: Chinese Websites Posing as Local News Outlets Target Global Audiences with Pro-Beijing Content," February 7, 2024. https://citizenlab.ca/research/paperwall-chinese-websites-posing-as-local-news-outlets-with-pro-beijing-content/
[4] Google Cloud / Mandiant, "Pro-PRC Information Operations Campaign 'HaiEnergy' Leverages Infrastructure From Public Relations Firm to Disseminate Content on Suspected Inauthentic News Sites," August 4, 2022. https://cloud.google.com/blog/topics/threat-intelligence/pro-prc-information-operations-campaign-haienergy
[5] NewsGuard, "A Well-funded Moscow-based Global 'News' Network has Infected Western Artificial Intelligence Tools Worldwide with Russian Propaganda," March 2025. https://www.newsguardtech.com/special-reports/moscow-based-global-news-network-infected-western-artificial-intelligence-russian-propaganda
[6] JBpress, "Data Voids and LLM Grooming: How AI Gets Trained Into Treating Lies as Truth." https://jbpress.ismedia.jp/articles/-/91423
[7] quitccpjapan, "Reports of Influence Operations in Japan's Lower-House Election: Hundreds of CCP-Aligned Fake Accounts," February 28, 2026. https://www.quitccp.jp/2026/02/28/
[8] Ledge.ai, "OpenAI Discloses Japan-Targeted Influence Operation by Chinese Authorities; Takaichi Was Targeted by AI-Driven Distribution," 2026. https://ledge.ai/articles/openai_china_cyber_special_operations_takaichi_influence_operation_report
[9] NewsGuard, "Two Data Filters Appear Able to Protect LLMs from Russian 'Infection.'" https://www.newsguardtech.com/press/two-data-filters-appear-able-to-protect-llms-from-russian-infection/
![The 'Fujiyama Times' Problem: Fifteen Fake News Sites and the New Influence Operation Targeting AI [May 2026]](/images/columns/china-fake-news-llm-grooming/cover.png)