Stress Spenders Signal One Thing. AI Recommendation Engines Hear Another.
Kantar's identification of 'treatonomics' — consumers self-rewarding under economic duress — describes a purchase mode that AI recommendation engines consistently misclassify as value-seeking, surfacing discounts at exactly the moment a consumer arrives ready to pay full price for something premium.
Admiral Neritus Vale
Treatonomics is not a general uptick in consumer spending. It is a behaviorally distinct purchase mode — episodic, emotionally triggered, and resistant to the price-sensitivity signals that AI recommendation engines are built to detect. The misclassification happens systematically: a consumer in treat-purchase mode looks like a value shopper right up until the moment they buy the expensive thing, and by then the algorithm has already served them the wrong product.
Kantar’s 2026 Marketing Trends report defines treatonomics as “the lipstick effect on steroids” — not affordable substitution, but emotional self-reward. Thirty-six percent of consumers are prepared to go into short-term debt to spend on things they enjoy, per Kantar’s Global MONITOR data. The purchase is a response to stress, not a measure of purchasing power. That distinction is precisely what standard recommendation logic cannot see.

The distinction between treatonomics and the lipstick effect matters for recommendation architecture. The lipstick effect is substitutional: when luxury is out of reach, consumers buy affordable proxies. Treatonomics is relational: the consumer is buying a reward for having endured something hard. The category choice carries emotional weight the algorithm has no visibility into. Liu Haihua, a researcher at Peking University’s research center on personality and social psychology, cited by Global Times, separated the two explicitly: “The lipstick effect came from people wanting comfort when they couldn’t afford big purchases. Treatonomics, however, comes from self-compassion.” Self-compassion is not a budget constraint. It is a willingness override — and willingness overrides are the highest-value events in a purchase funnel.
The classification error occurs at the session level, in the browsing phase that precedes the purchase. A consumer in treatonomics mode frequently opens with behavior that pattern-matches to value-seeking: price comparisons, sale filters, cost-per-unit checks. This is the justification ritual — the internal negotiation that precedes a discretionary purchase in a tight economy. Collaborative filtering reads the comparative browsing as price sensitivity and recalibrates its outputs accordingly. The recommendation engine surfaces a discount, a lower-cost alternative, a “customers also viewed” at a lower price point. The consumer was not looking for a cheaper version. The premium item was the point. The algorithm has inverted the intent.
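The inversion described above can be made concrete with a minimal sketch. Everything here is illustrative, not an actual production recommender: the event names, the sensitivity heuristic, and the ranking penalty are all assumptions standing in for what collaborative filtering effectively does when it reads comparison browsing as price sensitivity.

```python
# Illustrative sketch (hypothetical event names and scoring) of how a
# session-level price-sensitivity heuristic inverts treat-purchase intent.
from dataclasses import dataclass, field

@dataclass
class Session:
    # Browsing events, e.g. "price_compare", "sale_filter", "cost_per_unit"
    events: list = field(default_factory=list)

def price_sensitivity_score(session: Session) -> float:
    """Naive heuristic: fraction of events that look like discount-seeking."""
    signals = {"price_compare", "sale_filter", "cost_per_unit"}
    if not session.events:
        return 0.0
    hits = sum(1 for e in session.events if e in signals)
    return hits / len(session.events)

def rank_items(items: list, session: Session) -> list:
    """Down-rank premium items as inferred price sensitivity rises."""
    s = price_sensitivity_score(session)
    # Score = relevance minus a penalty proportional to price tier and
    # inferred sensitivity; this is where the intent gets inverted.
    return sorted(items, key=lambda it: it["relevance"] - s * it["price_tier"],
                  reverse=True)

# A treatonomics session: justification browsing, then premium intent.
session = Session(events=["price_compare", "sale_filter",
                          "cost_per_unit", "view_premium"])
items = [
    {"name": "premium", "relevance": 1.0, "price_tier": 3},
    {"name": "discount", "relevance": 0.8, "price_tier": 1},
]
ranked = rank_items(items, session)
print([it["name"] for it in ranked])  # → ['discount', 'premium']
```

Three of four events match the comparison signals, so the heuristic scores the session 0.75 and the discount item outranks the premium one — even though the premium item was the point of the session.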
The visibility problem Kantar warns about and the classification problem are compounding. As reported by FashionUnited, brands without adequate AI data infrastructure will simply no longer appear in the suggested choices. Kantar’s Bia Bezamat argues that CMOs need to ask “if their brands are meeting consumers where they are, by creating joy in the everyday.” Three-quarters of AI assistant users now regularly seek AI-driven recommendations, per Kantar as reported by FashionUnited. A brand can be visible in the algorithm and still get surfaced in the wrong context — appearing as a discount option during a treat-purchase session when the consumer’s actual intent is premium acquisition. The traffic arrives but the conversion logic is broken.
A confounding variable runs through all of this: consumer trust in product claims has eroded as scrutiny of those claims has intensified. Class-action filings over nutritional and health claims jumped more than 58% between 2023 and 2024, per law firm Perkins Coie tracking cited by Modern Retail. The consumer who weighed David Protein Bars against their label claims before litigation began — working through the math on Reddit months before any lawsuit was filed — is operating in the same heightened-scrutiny mode as the treatonomics shopper. When someone arrives to self-reward, already primed to verify claims and distrust institutional recommendations, a discount-surfacing engine doesn't just miss the conversion. It confirms that the system fundamentally misunderstands why they came.
The counter-argument — that AI systems learn user behavior over time and will eventually correct for treatonomics sessions — fails on the episodic nature of the pattern. Treatonomics events don’t create a stable behavioral profile that recommendation engines can converge on. Each session begins with the same justification browsing that pattern-matches to value-seeking, then breaks at the moment of decision. The training signal is structurally ambiguous: the system sees a profile that “wants premium but acts frugal” and has no category for it. If treatonomics correlates with identifiable contextual triggers — the post-setback timing, the micro-celebration moment, what Kantar calls “inchstones” replacing traditional milestones — those triggers are modelable. Not as price sensitivity. As emotional state indicators that predict the session will override price resistance. The data architecture question is whether recommendation engines are designed to look for them. Most currently aren’t.
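What "modelable as emotional state indicators" might look like in practice can be sketched briefly. The trigger names below are illustrative placeholders for the contextual signals Kantar describes (post-setback timing, "inchstones"); no real recommender exposes this taxonomy, and the thresholds are assumptions.

```python
# Hedged sketch: contextual triggers (hypothetical names) reinterpreting
# value-seeking browsing as a treat-purchase session.

# Illustrative stand-ins for the contextual signals the text describes.
TREAT_TRIGGERS = {"post_setback", "inchstone", "micro_celebration"}

def classify_session(comparison_ratio: float, triggers: set) -> str:
    """
    comparison_ratio: fraction of session events that look like price
    comparison (the justification ritual).
    triggers: contextual signals present at session start.
    """
    looks_frugal = comparison_ratio > 0.5
    if looks_frugal and triggers & TREAT_TRIGGERS:
        # Same browsing pattern, different meaning: the triggers predict
        # the session will override price resistance at decision time.
        return "treat_purchase"
    if looks_frugal:
        return "value_seeking"
    return "neutral"

print(classify_session(0.75, {"inchstone"}))  # → treat_purchase
print(classify_session(0.75, set()))          # → value_seeking
print(classify_session(0.2, set()))           # → neutral
```

The point of the sketch is the branching, not the thresholds: the same comparison-heavy session resolves to two different labels depending on context the engine would otherwise never look for.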