Share of voice across ChatGPT, Perplexity, Gemini, and Google AIO — measured across 60 shopper prompts and 30 brands.
Different AI models weight different sources. A brand that wins ChatGPT may lag in Google AIO if its retail footprint is weak.
SHIFFA commands 52.9% share of voice across AI search engines in the UAE luxury skincare category — nearly double its closest competitor, La Mer (27.9%), more than double Augustinus Bader (25.0%), and just over triple Dr. Barbara Sturm (17.5%). In a category historically dominated by European heritage houses and American clinical brands, a UAE-founded player holds the top generative-search slot with an average mention position of 1.5. This is an unusual structural outcome and warrants close reading.
The competitive dynamic is uneven across engines. SHIFFA's lead is widest on Gemini (70.0% engine rate) and ChatGPT (56.7%), narrower but still dominant on Perplexity (51.7%), and narrower again on Google AIO (33.3%). The cause of the Google AIO compression is not visible in this dataset — a citation-source audit is required to determine whether retailer pages, publisher SEO, or a different source mix is driving the difference. Below the top four, a long tail of 26 brands fights for scraps — only seven other brands clear 7% SoV, and four tracked brands (Glow Recipe, Erno Laszlo, Royal Fern, ZIIP Beauty) register zero mentions across 240 engine calls. The category is concentrated at the top and thin in the middle.
For a brand currently at roughly 9–30% SoV, the operating implication is to move this quarter, while SHIFFA's incumbency in regional prompts is still being set rather than reinforced. The priority is not to contest "best luxury skincare in Dubai" head-on, but to claim adjacent, under-answered prompt territory (workplace skin, humidity-specific routines, dermatologist-recommended sensitive-skin queries) where citation sources are still forming. This is a sourcing problem more than a marketing problem.
AI search is a plausible — and likely growing — influence on research-stage discovery for UAE luxury-skincare buyers, because the discovery path for luxury skincare in the Gulf is unusually research-intensive. A shopper considering an AED 1,500 cream typically asks three or four questions before purchase: which brand suits humid-to-arid climate transitions, which is available on Ounass, Sephora Middle East, Faces, or Bloomingdale's UAE, which dermatologists in Dubai recommend it, and how it compares to a specific rival. These are exactly the prompts that generative engines now answer in a single response, compressing what was previously a multi-session Google journey. Whether ChatGPT, Perplexity, Gemini, and Google AIO already function as the de facto research filter for this buyer is a hypothesis, not a measurement from this dataset — the 240 engine calls analyzed here measure brand visibility within AI responses, not shopper adoption, traffic share, or conversion impact. A channel-attribution study is the right follow-up.
The category is also structurally favorable to AI citation because the authoritative source layer is thin. There are relatively few dermatologist-authored English-language reviews of UAE-available prestige skincare, few regional publications with deep product databases, and limited UGC depth compared to US or Korean categories. This means a small number of well-placed sources — a Vogue Arabia feature, a Harper's Bazaar Arabia roundup, a retailer editorial on Ounass — likely shape a disproportionate share of what models return. Brands that understand which sources the models actually cite will compound; brands that rely on paid social will not show up at all.
| Brand | SoV % | Avg Position |
|---|---|---|
| SHIFFA | 52.9 | 1.5 |
| La Mer | 27.9 | 1.9 |
| Augustinus Bader | 25.0 | 2.6 |
| Dr. Barbara Sturm | 17.5 | 3.6 |
| SkinCeuticals | 17.1 | 2.9 |
| La Prairie | 9.6 | 3.0 |
| Estée Lauder | 8.8 | 4.9 |
| 111SKIN | 8.3 | 5.9 |
| Sisley Paris | 7.9 | 3.6 |
| Tatcha | 7.1 | 4.5 |
| Fresh | 7.1 | 3.5 |
| SK-II | 5.8 | 3.5 |
| Guerlain | 5.4 | 5.6 |
| Kiehl's | 5.0 | 3.4 |
| Lancôme | 4.2 | 4.7 |
| Caudalie | 3.8 | 4.6 |
| Drunk Elephant | 3.3 | 5.4 |
| Sunday Riley | 2.9 | 7.1 |
| Charlotte Tilbury | 2.9 | 4.6 |
| Clarins | 2.9 | 5.7 |
| Eve Lom | 2.1 | 7.0 |
| Chanel Beauty | 1.7 | 3.8 |
| Murad | 0.8 | 6.5 |
| Dior Beauty | 0.8 | 3.5 |
| Dermalogica | 0.8 | 7.0 |
| Origins | 0.4 | 2.0 |
| Glow Recipe | 0 | — |
| Erno Laszlo | 0 | — |
| Royal Fern | 0 | — |
| ZIIP Beauty | 0 | — |
The gap between SHIFFA at 52.9% and La Mer at 27.9% is roughly 25 points — a large current visibility gap, though AI search visibility can shift quickly with prompt mix, citation changes, and model updates, so defensibility should be read as "meaningful today" rather than structural. SHIFFA's lead is consistent with two compounding factors: the brand is the only regionally-native luxury name in the consideration set, and its average position of 1.5 means that when it is cited, it typically appears near the top, often in first or second position. Early-position placement matters disproportionately — the model's first recommendation is what tends to get repeated in follow-up prompts.
SkinCeuticals is punching above its weight. At 17.1% SoV it sits in fifth, but its average position of 2.9 is stronger than Dr. Barbara Sturm's 3.6 despite Sturm's slightly higher share, and stronger than Sisley Paris at the same 3.6. This pattern is consistent with SkinCeuticals earning early-list citations in clinical contexts rather than trailing mentions in luxury roundups. Origins, with only 0.4% SoV but an average position of 2.0, represents the opposite pattern — when it appears, it appears prominently, but it almost never appears. The Origins average is based on an extremely sparse sample (likely one or two mentions across 240 calls) and should not be over-interpreted, but the directional read is a distribution problem rather than a positioning one.
On ChatGPT, SHIFFA leads at 56.7%, followed by La Mer (46.7%) and Augustinus Bader (45.0%). ChatGPT exhibits the broadest top-5 coverage of the four surfaces — more brands clear meaningful mention rates here than elsewhere. Without citation-level data we cannot determine whether this breadth comes from editorial roundups, retailer pages, or model priors; the observation is the breadth itself, and the cause is a question for a source-level follow-up.
On Perplexity, SHIFFA dominates at 51.7%, with La Mer a distant second at 18.3% and Augustinus Bader at 11.7%. The concentration is sharper than on ChatGPT. Absent citation-level data we cannot prove the cause, but the pattern is consistent with Perplexity drawing from a narrower authoritative source set. Winning here likely requires targeted placements, not volume.
Gemini is SHIFFA's strongest surface at 70.0%, followed by Augustinus Bader (36.7%), La Mer (35.0%), Dr. Barbara Sturm (26.7%), and SkinCeuticals (25.0%). Notably, Fresh over-indexes here at 18.3% versus 5.0% on ChatGPT and 3.3% on Perplexity, suggesting Gemini's retrieval surfaces a different source mix for this category. For challengers, Gemini is simultaneously the hardest surface to displace SHIFFA on and the one where mid-tier brands have the most visibility to build from.
On Google AIO, SHIFFA leads at 33.3%, with La Mer at 11.7% and Augustinus Bader at 6.7%. Google AIO shows the lowest overall brand density of the four engines in this dataset, and a visibly different output composition — several brands with meaningful ChatGPT presence (111SKIN at 20.0%, Lancôme at 8.3%, Drunk Elephant at 3.3%) register 0% on Google AIO. A structurally distinct source mix is a plausible hypothesis for the divergence, but the aggregate data does not demonstrate it; a citation-domain audit is the appropriate follow-up.
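As an internal-consistency check, the engine-level rates quoted in this report average — with equal weight, per the methodology note at the end — to the overall SoV figures in the table. A minimal sketch (the rates below are the SHIFFA and La Mer figures quoted above):

```python
# Verify that overall SoV is the equal-weight mean of the four
# engine-level mention rates, as this report's methodology states.
engine_rates = {
    "SHIFFA": {"chatgpt": 56.7, "perplexity": 51.7, "gemini": 70.0, "google_aio": 33.3},
    "La Mer": {"chatgpt": 46.7, "perplexity": 18.3, "gemini": 35.0, "google_aio": 11.7},
}

def overall_sov(rates: dict) -> float:
    """Equal-weight average across engines, rounded to one decimal."""
    return round(sum(rates.values()) / len(rates), 1)

for brand, rates in engine_rates.items():
    print(brand, overall_sov(rates))
# SHIFFA -> 52.9, La Mer -> 27.9, matching the overall table.
```

The same arithmetic reproduces every overall figure in the table from its four engine rates, which is useful when re-running the audit on a subset of engines.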
The dataset underlying this report is aggregate (engine-level SoV and overall average position). The cluster readings below are directional hypotheses consistent with the aggregate patterns and should be validated against prompt-level outputs before being used to set spend.
Regional-context cluster (hypothesis: SHIFFA wins). Prompts that embed "UAE", "Dubai", "Abu Dhabi", "Ramadan", or "summer heat" are one plausible source of SHIFFA's first-position prominence (overall avg. position 1.5). Regional-prompt lock-in is a hypothesis, not a conclusion — SHIFFA's lead could also reflect prompt construction, source availability in the Gulf media layer, or model priors on the brand name. Prompt-level outputs are required to attribute the SoV to any specific driver.
Generic prestige cluster (hypothesis: La Mer strongest among non-regional names, to validate). La Mer may be strongest in non-regional prestige prompts — its overall avg. position of 1.9 combined with its high ChatGPT rate is consistent with that pattern — but SHIFFA leads La Mer on every engine and on overall average position, so the hypothesis requires prompt-level validation before it is relied upon.
Comparison cluster (hypothesis: SHIFFA asymmetrically advantaged, to validate). SHIFFA leads mention rate on every engine and holds the best overall average position in the tracked set, which is consistent with reference-brand status in head-to-head prompts. This hypothesis should be validated through explicit "SHIFFA vs [competitor]" comparison prompts before conclusions are drawn about comparison-query ownership.
Dermatologist / sensitive-skin cluster (hypothesis: Augustinus Bader and SkinCeuticals are structurally positioned to compete). Augustinus Bader's relative strength versus its total SoV — particularly on Gemini (36.7%) and ChatGPT (45.0%) — combined with SkinCeuticals' strong overall average position (2.9), is consistent with clinical-endorsement positioning. Whether SHIFFA is beatable in this cluster specifically cannot be determined from aggregate data; prompt-level testing is required before treating it as the point of attack.
Retailer-surface cluster (hypothesis: contested). Google AIO's weaker SHIFFA lead (33.3% versus its 52.9% overall) is consistent with a different source mix, potentially including retailer-driven surfaces, but the aggregate data does not prove the cause. Confirming this requires source-level citation analysis.
The aggregate dataset does not enumerate empty prompts directly; what follows is a hypothesis grounded in the observed SoV distribution and known Gulf shopper behavior, to be validated by prompt-level testing. Lifestyle and use-case prompts (office skin, post-flight, wedding prep, post-procedure aftercare) are structurally under-answered across most categories because brands have optimized for product queries, not occasion queries.
Claiming empty prompt territory is materially cheaper than displacing an incumbent — the model has no prior association to override, so a small number of authoritative, crawlable sources can materially improve the odds of becoming a cited answer in underdeveloped prompt territory. Five prompts worth testing and, where confirmed empty, claiming: office-AC skin routines, post-flight recovery, pre-wedding prep, post-procedure aftercare, and Ramadan-dryness care.
For a brand currently at roughly 9–30% SoV — the tier that includes La Mer, Augustinus Bader, Dr. Barbara Sturm, SkinCeuticals, and La Prairie — there are three moves to execute within the current quarter. The operator hypothesis is that if SHIFFA continues accumulating regional citations, the cost of dislodgement may rise; there is no time-series in this dataset to prove compounding, but the direction of travel favors acting sooner.
Move 1: Build a clinical-authority source layer on crawlable surfaces. The priority is earned expert placement and clearly disclosed sponsored education on pages AI engines can index. Pitch expert-led stories to The National's wellness vertical and Harper's Bazaar Arabia beauty, securing dermatologist quotes in editorial features. Sponsored and commissioned content is permissible where properly disclosed under applicable UAE media and advertising rules (verify current UAE Media Council / Media Regulatory Office scope before publication); what is neither credible nor compliant is undisclosed commissioned reviews presented as independent editorial. Invest in crawlable, dermatologist-authored long-form articles on SEO-optimized Dubai clinic blogs, in publisher articles, and in YouTube content with accurate transcripts — these are the authority surfaces AI engines actually retrieve from. Treat dermatologist and KOL Instagram as supporting social proof and demand generation, not as a primary AI-citation surface; Instagram is weakly indexed by ChatGPT, Perplexity, Gemini, and Google AIO relative to crawlable web content. Each asset should position the brand in a sensitive-skin or clinical context, not a luxury context. The operating precedent is the dermatologist- and clinical-KOL-led authority building run by SkinCeuticals, La Roche-Posay, and EltaMD in Western markets, adapted to Gulf medical voices.
Move 2: Seed five occasion-based guides targeting the empty prompt territory. Publish (or sponsor, with disclosure compliant with current UAE Media Council requirements) guides answering the office-AC, post-flight, pre-wedding, post-procedure, and Ramadan-dryness prompts on surfaces the target engines cite — candidates include Vogue Arabia, Ounass editorial, and Cosmopolitan Middle East, to be prioritized after a citation-source audit. Each guide should name the brand early and consistently in the body, recognizing that specific citation-window heuristics should be calibrated empirically per engine. Expected SoV lift should be tracked via weekly re-prompting rather than forecast in advance; directionally, occasion prompts with no current branded answer resolve to the first authoritative source faster than contested prompts.
Move 3: Audit and, where confirmed, contest one comparison prompt directly. Given SHIFFA's overall leadership, comparison prompts should be audited first; if SHIFFA is overrepresented there, pick the single "SHIFFA vs [your brand]" query most relevant to your hero product and place a structured comparison — ingredient-level, price-per-ml, climate suitability — through one of two routes: pitch an objective comparison to a regional beauty editor, or sponsor a clearly disclosed comparative advertorial on a third-party review surface. Either route must comply with current UAE media-disclosure requirements; sponsored and independent content can both be legitimate, but must be labeled as what they are. A well-structured third-party comparison will not deterministically flip the default answer — model outputs depend on retrieval, index freshness, authority, and model version — but it materially improves the odds of inclusion in comparison responses. Measure via weekly re-prompting, not impressions.
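The "measure via weekly re-prompting" step in the moves above can be sketched as a simple mention log. This is an illustrative sketch, not a description of any vendor's implementation — the prompt, response text, and tracked-brand list are hypothetical, and the brand matching is deliberately naive (a production version needs alias handling, e.g. "Sturm" vs. "Dr. Barbara Sturm"):

```python
import re
from datetime import date

# Hypothetical tracked set; the real audit tracks 30 brands.
TRACKED = ["SHIFFA", "La Mer", "Augustinus Bader"]

def brands_mentioned(response_text: str, tracked=TRACKED) -> list:
    """Tracked brands that appear in an engine response, case-insensitive."""
    found = []
    for brand in tracked:
        if re.search(re.escape(brand), response_text, flags=re.IGNORECASE):
            found.append(brand)
    return found

def log_row(prompt: str, response_text: str) -> dict:
    """One row of the weekly re-prompt tracking table."""
    return {
        "week": date.today().isoformat(),
        "prompt": prompt,
        "mentions": brands_mentioned(response_text),
    }

row = log_row("best cream for post-flight dryness in Dubai",
              "La Mer and Augustinus Bader are frequent picks...")
print(row["mentions"])  # ['La Mer', 'Augustinus Bader']
```

Run the same prompt set against each engine weekly and diff the `mentions` column; a flip in a contested comparison prompt shows up as a row-level change, not an impression metric.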
This analysis is based on 240 engine calls across four AI search surfaces (ChatGPT, Perplexity, Gemini, Google AIO), executed against 60 shopper prompts intended to mirror the real D2C query distribution for UAE luxury skincare. The prompt taxonomy assumed for interpretation — product discovery, comparison, dermatologist recommendation, retailer surface, and occasion-based intents — is not confirmed by the aggregate dataset supplied and must be verified against the underlying prompt file before this report is used to commit spend. Share of voice is computed as the percentage of engine-prompt responses in which a brand was named, with engine-level rates averaged with equal weight to produce the overall SoV figure. Average position is the mean ordinal placement of the brand across all responses in which it appeared and is reported at the overall (not engine) level. Thirty brands were tracked. Cluster-level findings in this report are directional hypotheses derived from aggregate patterns and should be validated against prompt-level outputs.
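The SoV and average-position definitions above can be made concrete with a short sketch over toy prompt-level records. The records below are illustrative only — the real prompt-level outputs are not included in this report:

```python
from collections import defaultdict

# Each record: (engine, ordered list of brands named in one response).
# Toy data, not the report's actual 240 calls.
responses = [
    ("chatgpt", ["SHIFFA", "La Mer"]),
    ("chatgpt", ["La Mer"]),
    ("perplexity", ["SHIFFA"]),
    ("perplexity", []),
    ("gemini", ["SHIFFA", "Augustinus Bader"]),
    ("google_aio", []),
]

def share_of_voice(responses, brand):
    """Per-engine mention rate, then an equal-weight average across engines."""
    per_engine = defaultdict(lambda: [0, 0])  # engine -> [mentions, calls]
    for engine, brands in responses:
        per_engine[engine][1] += 1
        if brand in brands:
            per_engine[engine][0] += 1
    rates = [mentions / calls for mentions, calls in per_engine.values()]
    return 100 * sum(rates) / len(rates)

def average_position(responses, brand):
    """Mean 1-based ordinal placement over responses naming the brand."""
    positions = [brands.index(brand) + 1
                 for _, brands in responses if brand in brands]
    return sum(positions) / len(positions) if positions else None

print(share_of_voice(responses, "SHIFFA"))    # 50.0 on this toy data
print(average_position(responses, "SHIFFA"))  # 1.0
```

Note that the equal-weight engine average means a brand's overall SoV is not simply mentions divided by 240 calls; an engine with fewer prompts counts as much as one with more.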
Seeno tracks how your brand gets named by ChatGPT, Perplexity, Gemini, and Google AIO — automatically, across your category's real shopper prompts.
Run a free audit →

Free · no signup · 3-minute report