The Listing Factory Outruns Every Filter
AI-generated listing copy, synthetic product photographs, and fabricated reviews are scaling faster than any marketplace filter can catch them, turning platform trust into a depreciating asset.
Sir John Crabstone
One in five Amazon listings now shows strong indicators of AI-generated copy; in the twenty-to-fifty-dollar bracket, where competition runs densest, that figure reaches twenty-eight per cent. Those numbers come from a self-published analysis by smartminded GmbH, distributed via press wire, whose authors describe the scores as “likelihood indicators, not definitive authorship classifications” — a caveat worth noting, though the direction is consistent with broader evidence. The same study analysed more than five hundred products across ten categories and found Amazon’s ranking algorithm entirely indifferent to the distinction. Products in the top ten positions scored no differently from those in positions forty-one to fifty, meaning the algorithm rewards the appearance of quality rather than quality itself. Marketplace trust is not a fortress under siege. It is inventory, and it spoils.
Roughly thirty per cent of online reviews are fake. Among Amazon bestsellers for clothes, shoes, and jewellery, a Fakespot analysis put unreliable reviews at eighty-eight per cent. A single fraudulent extra star lifts demand by thirty-eight per cent. Capital One Shopping Research, reporting FTC data, puts the return on purchased reviews at nineteen hundred per cent. The numbers describe a market whose economics reward fraud: when a dollar spent on fake reviews returns nineteen, no plausible enforcement budget closes the gap.
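The arithmetic is worth making concrete. A back-of-envelope sketch, using the article's thirty-eight per cent demand lift; the baseline revenue and review-purchase cost are invented for illustration and come from no cited study:

```python
# Illustrative model of purchased-review economics.
# The 38% demand lift is the article's figure; the dollar
# amounts below are assumptions chosen for illustration only.

baseline_monthly_revenue = 10_000    # hypothetical product revenue, USD
demand_lift_per_star = 0.38          # one fraudulent extra star -> +38% demand
cost_of_purchased_reviews = 200      # hypothetical spend on fake reviews, USD

extra_revenue = baseline_monthly_revenue * demand_lift_per_star
roi = (extra_revenue - cost_of_purchased_reviews) / cost_of_purchased_reviews

print(f"Extra revenue from one fraudulent star: ${extra_revenue:,.0f}")
print(f"Return on review spend: {roi:.0%}")
```

At these assumed numbers the return works out to eighteen hundred per cent, in the same range as the nineteen-hundred-per-cent figure the FTC data suggests; the point is not the precise multiple but that any seller who can buy a star for a few hundred dollars recovers the cost many times over.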
Synthetic product photographs take the same problem further. Bellingcat documented fabricated listings across Amazon, Etsy, eBay, and Walmart: products that do not exist, listed by accounts that appeared overnight. Crystal mugs were sold under AI-generated images polished enough to pass cursory inspection; what arrived bore little resemblance to what was advertised, and buyers who ordered a decorative cat lamp received cheap plastic. What Bellingcat found was not a rare category of fraud but a repeatable playbook: generate compelling images, establish a listing, collect payment before the product catches up with the promise. The visual tells (broken or inconsistent lines, blurriness, and edges that fade rather than resolve) require a deliberate eye that most shoppers are not trained to apply.
Amazon blocked more than ninety-nine per cent of suspected infringing listings in 2024 and still seized more than fifteen million counterfeits.
The effort behind that figure is genuine. Amazon spent more than a billion dollars on brand protection and its Counterfeit Crimes Unit has pursued more than twenty-four thousand bad actors since 2020. Both figures are products of scale: a marketplace large enough that a ninety-nine per cent block rate still leaves fifteen million counterfeits to identify, seize, and dispose of. The countermeasures scale with headcount and engineering investment; the fraud scales with the cost of a text model and an image generator, both of which fall every quarter. The defence scales linearly; the offence scales exponentially.
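That asymmetry can be sketched numerically. A toy model, with every parameter assumed for illustration rather than drawn from any cited source: defence capacity grows by a fixed increment each quarter, while fraudulent volume compounds as generation costs fall.

```python
# Toy model of the scaling asymmetry: linear defence, exponential offence.
# All four parameters are assumptions chosen for illustration.

defence_capacity = 1_000_000   # listings reviewable per quarter
defence_growth = 100_000       # capacity added per quarter (linear)
fraud_volume = 50_000          # fraudulent listings per quarter
fraud_multiplier = 1.5         # per-quarter growth as generation cost falls

crossover = None
for quarter in range(1, 13):
    if fraud_volume > defence_capacity:
        crossover = quarter
        break
    defence_capacity += defence_growth   # defence adds a fixed increment
    fraud_volume *= fraud_multiplier     # offence compounds

print(f"Fraud volume overtakes defence capacity in quarter {crossover}")
```

With these assumed starting points the fraud begins twenty times smaller than the defence and overtakes it within ten quarters. The specific numbers are arbitrary; the shape of the race is not. A compounding curve crosses any straight line eventually, and the only open question is when.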
Fakespot, the consumer-review grading tool used by millions of shoppers, shut down in July 2025, removing the independent check behind figures like the eighty-eight per cent unreliable-review rate cited above. The platforms will improve their filters. They always do. The question is whether the improvement can compound faster than the cost of circumvention falls. The early evidence runs the other way.