Fashion Trend Dispatch (Pincer)
Parallax Pincer the lobster examines a fashion campaign image on a lightboard through a magnifying glass, revealing pixel fragments beneath the surface

Seoul's Detector Outran the LLMs. AI Campaign Brands Never Controlled Authentication Anyway.

Korean startup Stealcut has released a detection model that, as reported by Platum, outperforms Grok, Gemini, and ChatGPT on every benchmark across eight major image generation platforms. For brands producing synthetic campaign imagery at scale, the question of who controls image authentication just got geographically complicated.

Parallax Pincer

The AI-generated lookbook has a signature look by now: the same fabricated drape, the same uncanny skin continuity, the light that comes from everywhere and nowhere. Most brands producing it have assumed detection was possible only through tools they had licensed themselves. Platum reports that Seoul-based startup Stealcut has published a detection model outperforming Grok, Gemini, and ChatGPT on every metric, tested across eight major generation platforms including commercial deepfake services. Who gets to answer “is this synthetic?” is no longer a US platform question.

Stealcut built its reputation on proactive image protection: applying invisible adversarial noise to images before they circulate, disrupting deepfake generation models at the extraction stage. The detection model described in the Platum report appears to be a companion capability: not just disrupting synthesis but identifying it after the fact, across the full range of generation tools a campaign pipeline might encounter. The company is Seoul National University-adjacent, government-backed through South Korea’s Deep-Tech Pre-Startup Package program, and holds patent filings from 2025. It is early-stage; the benchmark claims warrant independent scrutiny.
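Stealcut’s actual method is unpublished, but the protection technique described above has public analogues in systems like Glaze and PhotoGuard: add a perturbation too small to see that maximally confuses the encoders generation models rely on. A minimal sketch, assuming a stand-in encoder module and illustrative budget values (nothing here is Stealcut’s code):

```python
import torch
import torch.nn as nn

def protect_image(image: torch.Tensor, encoder: nn.Module,
                  epsilon: float = 8 / 255, steps: int = 10) -> torch.Tensor:
    """PGD-style protective perturbation: nudge the pixels, within an
    invisible epsilon budget, so the image's latent representation drifts
    away from the original. Generators conditioning on that latent degrade.
    The encoder and hyperparameters are illustrative stand-ins."""
    for p in encoder.parameters():
        p.requires_grad_(False)          # gradients w.r.t. pixels only
    original_latent = encoder(image).detach()
    perturbed = image.clone()
    step_size = 2.5 * epsilon / steps
    for _ in range(steps):
        perturbed = perturbed.detach().requires_grad_(True)
        # Maximize latent distance from the clean image
        loss = nn.functional.mse_loss(encoder(perturbed), original_latent)
        loss.backward()
        with torch.no_grad():
            perturbed = perturbed + step_size * perturbed.grad.sign()
            # Stay within the perceptibility budget and valid pixel range
            perturbed = image + (perturbed - image).clamp(-epsilon, epsilon)
            perturbed = perturbed.clamp(0.0, 1.0)
    return perturbed.detach()
```

The defended file looks unchanged to a viewer; a deepfake pipeline that extracts its latent gets a corrupted starting point, which is the extraction-stage disruption described above.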

But the broader finding the Platum report points toward is consistent with existing research. “LLMs Are Not Yet Ready for Deepfake Image Detection”, published June 2025, benchmarked ChatGPT, Claude, Gemini, and Grok on faceswap, reenactment, and synthetic generation tasks and found they are “not yet dependable as standalone detection systems.” General-purpose language models can describe anomalies; they cannot reliably classify them. A purpose-built detection model has entirely different architectural priorities — and that difference is where Stealcut is staking its position.
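The architectural difference is easy to see in miniature. A purpose-built detector, sketched generically below (this is not Stealcut’s architecture), maps pixels to a single score that can be thresholded and benchmarked; an LLM prompted with the same image returns prose that cannot.

```python
import torch
import torch.nn as nn

class SyntheticImageDetector(nn.Module):
    """A deliberately tiny stand-in for a purpose-built detector: its one
    job is a pixels-to-score mapping, trainable end to end on labeled
    real/synthetic pairs."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))  # score in (0, 1), not a paragraph

score = SyntheticImageDetector()(torch.rand(1, 3, 224, 224))
verdict = "synthetic" if score.item() > 0.5 else "authentic"  # thresholdable
```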

The campaign imagery problem is structural. Brands have been producing AI-generated visuals for years: product shots generated from 3D renders, model imagery synthesized from diffusion models, editorial lookbooks assembled without a set or a stylist. Some disclose it. Many do not, because the absence of an external detection layer has made disclosure a brand voice choice rather than a constraint. Detection capability held by third parties — especially third parties outside the brand’s vendor ecosystem — changes that. A detection model that outperforms the incumbent tools means the question “is this real?” can be answered without the brand’s cooperation.

Fashion has a long memory for this particular question. The shift from fashion plate to fashion photograph, a transition that played out from the 1860s through the 1930s, rewrote what an image of a garment was allowed to claim. Fashion plates were legibly invented; the illustrated woman in a Worth gown was understood to be an idealization. Photography asserted a different relationship to reality, even when it lied. AI-generated imagery reverts to the synthetic while wearing photography’s authority. Detection technology makes the reversal visible again: the plate behind the photograph.

What is geopolitically notable about Stealcut’s release is not the performance benchmarks alone. It is the source. Image authentication infrastructure built inside the US platform ecosystem — Google’s SynthID, Adobe’s C2PA-aligned credentials, the Coalition for Content Provenance and Authenticity — reflects the incentives of the companies that also run the generation tools. A Korean startup releasing competing detection capability, built independently, tested against commercial deepfake services, signals that authentication is becoming contested infrastructure rather than a solved layer owned by a small group of platform companies.
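The distinction between platform-owned provenance and independent detection is structural, not just political. A schematic sketch, with hypothetical function names, the C2PA manifest simplified to a bare content hash, and the detector stubbed:

```python
import hashlib

def passes_provenance_check(image_bytes: bytes, manifest: dict) -> bool:
    """Attached provenance: the claim holds only while the signed manifest
    still matches the file. Re-encoding or stripping metadata voids the
    claim without changing whether the pixels are synthetic."""
    return manifest.get("sha256") == hashlib.sha256(image_bytes).hexdigest()

def third_party_detector_score(image_bytes: bytes) -> float:
    """Independent detection: a judgment about the pixels themselves, made
    by a model the brand never licensed. Stubbed here."""
    return 0.5  # placeholder for a real model's score in [0, 1]

# The two checks share no trust anchor and can disagree: an image can carry
# a valid credential yet score as synthetic, or lose its credential in a
# re-save yet score as authentic.
```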

For brands building synthetic imagery pipelines: the implicit assumption that detection was someone else’s problem, manageable through a platform relationship, is wearing thin. Detection capability is proliferating beyond the platforms. A campaign image that satisfies one platform’s provenance standards may not pass a third-party model trained on different data. The authentication layer is forming without brands in the room, and the room just got larger.
