E-Commerce Evidence Brief (Crabstone)
A Chinese e-commerce marketplace obscured by a haze of drifting AI-generated product images

Taobao Blocked 100,000 AI Images. The Figure Is Weather.

Chinese e-commerce platforms have stopped trying to remove synthetic content and started reporting on its concentration. Taobao's 100,000 blocked AI images are a weather reading, not a cleanup.

Sir John Crabstone

Taobao blocked nearly 100,000 heavily edited or AI-generated product images in March 2025 — the kind with distorted hands and impossible shadows — and presented the figure as a victory, per the South China Morning Post. The numbers are meant to sound like enforcement. They read, instead, like weather data.

China is the first major retail market where synthetic content has migrated from problem to pollutant. An Originality.ai audit (the firm sells AI detection tools) found that 6.2 per cent of reviews on the Alibaba Shopping App carried language-model signatures in 2024, up 222.5 per cent since 2020. The governance response follows: platforms no longer publish the ambition of removal, only the throughput of detection.

A problem is something a platform solves; a pollutant is something it measures.

The damage pattern is recognizable. A crab merchant in Suzhou lost 195 yuan to a buyer’s AI-faked video of dead crabs; a toy merchant in Guangxi faced a comparable fabrication, as China Daily reported. Both cases were resolved only through escalation the platforms themselves declined to provide: police action in Suzhou, social media pressure in Guangxi. The individual losses are modest. The operational load is not.

Alibaba’s current disclosures are instructive. The company is proud of scanning hundreds of millions of images daily, proud of running its Qwen models against listings, proud of its 100,000 blocked photographs of phantom dresses. Each number measures the filter. None measures the air. Matthew Bassiur, Alibaba’s global IP enforcement chief, has told World IP Review that infringers “increasingly use generative AI” to produce synthetic images and altered logos. The admission reframes the job: infringers set the agenda; platforms report the weather.

Moderation no longer targets clean content. Tolerable concentration is the new promise. Enforcement becomes an abatement programme, funded by the host.

The regulatory apparatus has caught up to the diagnosis, if not the cure. Beihang University associate professor of law Zhao Jingwu told China Daily that platforms are not required to verify user-submitted evidence, and in practice are structurally unable to. China’s Cyberspace Administration now requires all AI-generated content to carry labels. Labels do not remove the pollutant. They put a badge on it.

The second-order consequence is more interesting than the first. AI clones of real hosts were already selling overnight on Taobao after the human streamers signed off — a practice MIT Technology Review documented in 2023. If synthetic content is moderated toward an acceptable background level rather than zero, ranking and refund systems inherit a permanent baseline of forgery. Brand authenticity becomes a paid tier. Clean data becomes something platforms sell back to the sellers whose listings created the load.

Western marketplaces watching this should resist the instinct to congratulate themselves. They are three years behind. Amazon and Meta will face the same pollution curves, denominated in English. China is not a cautionary tale; it is a forecast.

The platforms that accept pollution first will learn to price it. The rest will keep announcing image counts.