Share of voice across ChatGPT, Perplexity, Gemini, and Google AIO — measured across 60 shopper prompts and 44 brands.
Different AI models weight different sources. A brand that wins ChatGPT may lag in Google AIO if its retail footprint is weak.
Magic Spoon captures 18.8% share of voice across AI search engines in the functional-food category, nearly four times the mention rate of the tenth-ranked brand. That lead is narrower than it looks. Catalina Crunch sits at 16.7% and Three Wishes at 12.9%, meaning the top three brands together hold a combined 48.4% share of voice across the 60 shopper prompts we tested. The remaining 41 brands we tracked split the balance, and 19 of them received zero mentions.
The competitive dynamic is bifurcated. The top of the leaderboard is led by a cereal triad that has accumulated years of comparison content, Reddit threads, and review coverage. Among the zero-mention brands sit functional-food and beverage players with real retail presence and press — Vega, Super Coffee, Yasso, Real Good Foods, Tiny Organics — that are nonetheless structurally absent from AI answer sets. Our tracked set also includes adjacent pantry and specialty brands (Fishwife, Brightland, Omsom) whose zero scores largely reflect category-intent mismatch rather than an AI visibility failure. For the functional-food laggards, the gap is not budget. It is content surface. The engines cite what is indexed in long-form comparison contexts, not what is featured on DTC product pages.
Our working hypothesis, supported by the citation patterns below: challenger brands with 2–10% current SoV should treat the next 90 days as a content-seeding window rather than a paid-media window. Use-case prompts — post-workout recovery, gut health, high-protein, kids' lunchboxes, weight management — appear under-claimed in our prompt-level logs. The cost of ranking against a thin incumbent narrative on a use-case prompt is meaningfully lower than displacing Magic Spoon on a head query. The 90-day playbook below lays out the specific moves.
Functional food is a category defined by claim-driven purchase. Consumers do not buy a cereal, a bar, or a snack on impulse alone. They buy against a specific goal: more protein, less sugar, better gut response, cleaner ingredients for a child, sustained energy between meals. This makes it one of the most query-dense categories in consumer packaged goods. A shopper considering Magic Spoon is asking, in order: is it actually high-protein, what does it taste like, how does it compare to Three Wishes, is it worth the price. Each of those is now a prompt, and increasingly that prompt is typed into ChatGPT or Perplexity rather than Google.
We are early in this behavioral shift. The share of functional-food purchases influenced by an AI-generated answer is almost certainly under 10% today. But the direction is one-way. Every AI engine is being retrofitted into a shopping surface, and the citation patterns being cemented now — which brands the models learn to associate with "best gut-health snack" — will be expensive to overwrite in 18 months. The brands that win the category in 2026 are writing the training data for it in 2025.
| Brand | SoV % | Avg Position |
|---|---|---|
| Magic Spoon | 18.8 | 1.6 |
| Catalina Crunch | 16.7 | 2.0 |
| Three Wishes | 12.9 | 2.0 |
| RXBAR | 9.6 | 1.6 |
| Siete Foods | 8.3 | 2.7 |
| Chomps | 6.7 | 2.4 |
| Lesser Evil | 5.8 | 2.2 |
| Daily Harvest | 5.4 | 1.1 |
| Simple Mills | 5.4 | 3.5 |
| Partake Foods | 5.0 | 1.0 |
| Aloha | 4.2 | 2.7 |
| Hippeas | 3.3 | 3.1 |
| Hu Kitchen | 2.1 | 4.4 |
| Perfect Bar | 1.7 | 2.0 |
| Kind Snacks | 1.3 | 2.3 |
| Mid-Day Squares | 1.3 | 1.7 |
| Tessemae's | 0.8 | 1.0 |
| SmartSweets | 0.8 | 1.5 |
| Primal Kitchen | 0.8 | 2.5 |
| Kettle & Fire | 0.8 | 3.0 |
| Quinn Snacks | 0.8 | 1.5 |
| Legendary Foods | 0.4 | 2.0 |
| Graza | 0.4 | 2.0 |
| Orgain | 0.4 | 1.0 |
| Banza | 0.4 | 8.0 |
| Skinny Dipped | 0 | — |
| Tiny Organics | 0 | — |
| Serenity Kids | 0 | — |
| Once Upon a Farm | 0 | — |
| Belgian Boys | 0 | — |
| Fishwife | 0 | — |
| Brightland | 0 | — |
| Omsom | 0 | — |
| Sanity Popcorn | 0 | — |
| Nona Lim | 0 | — |
| Bare Snacks | 0 | — |
| Vega | 0 | — |
| Super Coffee | 0 | — |
| Spudsy | 0 | — |
| Tru Fru | 0 | — |
| Yasso | 0 | — |
| Every Body Eat | 0 | — |
| Real Good Foods | 0 | — |
| Baked by Melissa | 0 | — |
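As a quick arithmetic check, the concentration figures quoted in the narrative can be reproduced directly from the leaderboard (a minimal Python sketch using the published SoV percentages):

```python
# Reproduce the concentration figures from the leaderboard above.
top_three = {"Magic Spoon": 18.8, "Catalina Crunch": 16.7, "Three Wishes": 12.9}
tenth_sov = 5.0  # Partake Foods, tenth-ranked by SoV

combined = round(sum(top_three.values()), 1)
lead_multiple = round(top_three["Magic Spoon"] / tenth_sov, 2)

print(combined)       # 48.4 — combined top-three share of voice
print(lead_multiple)  # 3.76 — Magic Spoon vs the tenth-ranked brand
```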
The gap between the top three and the rest is the defining structural feature of this leaderboard. Magic Spoon, Catalina Crunch, and Three Wishes all occupy the same sub-category — better-for-you cereal — and all benefit from years of head-to-head comparison content written by third parties. Below the triad, the leaderboard is not cereal-dominated; it spans bars (RXBAR), tortillas and chips (Siete), meat snacks (Chomps), popcorn and snacks (Lesser Evil), frozen meals (Daily Harvest), baking and crackers (Simple Mills), and allergen-free cookies (Partake). What unites the triad specifically is that the engines have learned to treat them as the reference points for functional cereal. Magic Spoon leads because it is the default anchor in those comparisons: almost every "X vs Y" prompt in our set includes Magic Spoon on one side.
Two brands are punching above their weight. Partake Foods has 5.0% SoV with an average position of 1.0, meaning when it is mentioned, it is mentioned first. Daily Harvest shows the same pattern at 5.4% SoV and 1.1 average position. We hypothesize — pending prompt-level source attribution — that both benefit from owning narrow sub-category queries (allergen-free cookies and frozen functional meals, respectively) where the competitive set is thin. The broader pattern holds regardless of cause: category narrowness can substitute for category breadth.
Conversely, Simple Mills (5.4% SoV, 3.5 average position) and Hu Kitchen (2.1% SoV, 4.4 average position) are cited frequently but late. They are in the consideration set but are not the answer. This is a content-structure problem, not a brand-strength problem.
On ChatGPT, Magic Spoon leads at 23.3%, followed by Catalina Crunch and RXBAR tied at 20%. ChatGPT appears to over-index on brands with heavy Reddit and comparison-blog footprints, though we note this is an inference from output patterns rather than a direct source-attribution finding. RXBAR's strong showing here, despite mid-tier overall SoV, is consistent with ChatGPT pulling from older, more established review corpora.
On Perplexity, Catalina Crunch leads at 16.7%, narrowly ahead of Magic Spoon at 15%. The appearance of Lesser Evil at 6.7% — above its overall SoV — suggests Perplexity is surfacing clean-label editorial content more readily than ChatGPT. We treat the mechanism (recency and structured-data weighting) as a working hypothesis rather than a proven driver.
Gemini tracks the overall leaderboard closely: Magic Spoon at 20%, Catalina Crunch at 16.7%, Three Wishes at 15%, with Siete Foods (13.3%) and Chomps (11.7%) materially stronger than in other engines. Gemini is the most generous of the four engines toward snack-adjacent brands outside the cereal triad, which creates a viable entry point for bar, meat-snack, and tortilla-chip players seeking first AI visibility.
In Google AIO, Magic Spoon leads at 16.7%, Catalina Crunch at 13.3%, and Three Wishes at 10%. AIO diverges from the overall ranking in one important respect: several brands with meaningful overall SoV are sharply under-represented here (RXBAR at 1.7% vs 9.6% overall, Aloha at 0% vs 4.2%). AIO appears to reward brands with strong organic rank on the specific underlying query and punishes those whose visibility is carried primarily by non-Google surfaces.
Five clusters drive the observed SoV distribution.
Head-to-head comparison queries. Prompts structured as "X vs Y" — Magic Spoon vs Three Wishes, Magic Spoon vs Catalina Crunch, Three Wishes vs Catalina Crunch. These queries are dominated by the cereal triad because they named themselves into the comparison set. Whichever brand is mentioned on both sides of more comparisons wins the cluster.
Alternative and displacement queries. "Best alternative to Magic Spoon", "what to buy instead of Catalina Crunch", "brands like Three Wishes". These are defection prompts, and paradoxically the incumbents still win them because the engines surface the other two incumbents as the alternative. The cluster is effectively closed to outsiders.
Honest-review queries. "Honest review of Magic Spoon", "Catalina Crunch reviews worth buying", "honest review of Three Wishes cereal". Won by whichever brand has the deepest Reddit thread and YouTube review corpus. Seeding here is possible but slow.
Category-leader queries. "Most popular functional food & snacks right now", "premium functional cereal brands worth it". These behave as reputation queries and compound: the brand most often called a leader becomes the leader cited.
Use-case and occasion queries. Post-workout, gut health, kids' lunchboxes, office snacking, late-night cravings. This cluster is, in our prompt-level data, thinly claimed. See below.
Across our prompt-level logs, use-case and occasion queries are the weakest-defended cluster: incumbent brands appear less frequently and at lower ordinal positions than in comparison or category-leader queries. These are not low-volume prompts. They are core functional-food use cases. The strategic point is simple: moving a brand from zero to first citation on a thinly-claimed prompt is materially cheaper than moving from fifth to first on a contested one, because there is no settled incumbent narrative to displace. The engines are actively looking for an answer and will adopt the first well-structured one they find.
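To make the cluster taxonomy concrete, here is an illustrative sketch of how shopper prompts could be bucketed into the five clusters with simple keyword rules. The rules below are our own hypothetical approximation, not the tagging actually used to build this report:

```python
# Illustrative keyword rules for the five prompt clusters. These rules
# are a hypothetical sketch, not the report's actual classification.
CLUSTER_RULES = [
    ("head-to-head comparison", [" vs ", "versus", "compare"]),
    ("alternative / displacement", ["alternative to", "instead of", "brands like"]),
    ("honest review", ["honest review", "reviews", "worth buying"]),
    ("category leader", ["most popular", "best brands", "worth it"]),
]

def classify(prompt: str) -> str:
    p = prompt.lower()
    for cluster, keywords in CLUSTER_RULES:
        if any(k in p for k in keywords):
            return cluster
    # Everything else falls into the thinly-claimed use-case cluster.
    return "use-case / occasion"

print(classify("Magic Spoon vs Three Wishes"))         # head-to-head comparison
print(classify("best alternative to Magic Spoon"))     # alternative / displacement
print(classify("high-protein snack for post-workout")) # use-case / occasion
```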
We assess five prompts as the most valuable strategic openings — framed as hypotheses to test, anchored in the use-case cluster findings above.
For a challenger brand with 2–10% current SoV, we recommend three sequenced moves over the next 90 days.
Days 1–30: Secure three category-definition placements on high-authority third-party surfaces. Not on your own blog. Pitch one dietitian-authored piece on a Substack with existing food-policy readership; pursue editorial inclusion (via PR outreach or affiliate partnership, not paid sponsored placements, which carry diminished indexing value) on a site like Sporked or Eat This Not That; and seed naturally framed comparison questions in relevant subreddits — r/HealthyFood, r/IBS for gut-health prompts, r/GLP1 for weight-management prompts — rather than running a formal AMA. The editorial pieces should be structured as ranked lists that name your brand alongside two or three incumbents. The engines cite lists. Brands such as Olipop are widely cited in the better-for-you beverage space as examples of this playbook, though we flag the comparison as directional rather than a documented case study.
Days 31–60: Build one structured-data resource per thinly-claimed prompt you want to own. Pick two from the list above. For each, publish a long-form comparison guide on your own domain with schema markup, then syndicate a condensed version to Medium and LinkedIn. The objective is not SEO traffic. The objective is to give Perplexity and Google AIO a citable, structured source. Include a comparison table with competitors — engines reward content that names alternatives, not content that promotes only the publisher.
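A minimal sketch of what the schema markup for such a comparison guide could look like, using schema.org's ItemList vocabulary — one plausible choice, not a prescribed format. The guide title, brand placeholder, and URL below are hypothetical:

```python
import json

# Hypothetical JSON-LD for a ranked comparison guide. Brand names, the
# guide title, and the URL are placeholders, not real recommendations.
comparison_list = {
    "@context": "https://schema.org",
    "@type": "ItemList",
    "name": "Best High-Protein Cereals for Gut Health",  # hypothetical title
    "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "YourBrand Protein Cereal",
         "url": "https://example.com/guides/protein-cereal"},  # placeholder URL
        {"@type": "ListItem", "position": 2, "name": "Magic Spoon"},
        {"@type": "ListItem", "position": 3, "name": "Three Wishes"},
    ],
}

# Embed the output in the guide's HTML inside a
# <script type="application/ld+json"> tag.
print(json.dumps(comparison_list, indent=2))
```

Note that the list names competitors alongside the publisher's own brand, matching the point above that engines reward content naming alternatives.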
Days 61–90: Seed the Reddit and YouTube review layer deliberately. ChatGPT's citation set for this category leans heavily on user-generated review content. Send product to fifteen food-review YouTubers with 50k–500k subscribers and five registered dietitian TikTokers, with a brief that asks for a comparison video against the incumbent most adjacent to your positioning. Do not ask for a favorable review. Ask for a comparison. Being named in the same sentence as Magic Spoon or RXBAR is the actual asset.
We executed 240 engine calls across ChatGPT, Perplexity, Gemini, and Google AIO, running 60 shopper prompts designed to mirror the real query distribution a functional-food buyer follows from category awareness through comparison to purchase. Prompts span category-leader queries, head-to-head comparisons, alternative and displacement queries, honest-review queries, and use-case queries. Share of voice is computed as the percentage of prompts in which a given brand was named in the engine's response; because multiple brands may be named per prompt, SoV is not equivalent to a simple citation count over 60. Average position is the mean ordinal rank of the brand across all mentions. Forty-four brands were tracked; 25 received at least one mention. Strategic claims about prompt-cluster dynamics draw on prompt-level logs held separately from the leaderboard figures presented here. All numerical figures in this report derive from that dataset.
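The metric definitions above can be sketched as follows. The mention log here is hypothetical toy data, not figures from the study:

```python
from collections import defaultdict

N_PROMPTS = 60  # shopper prompts per engine in this study

# Hypothetical mention log: (prompt_id, brand, ordinal position in answer).
mentions = [
    (1, "Magic Spoon", 1), (1, "Three Wishes", 2),
    (2, "Magic Spoon", 2), (3, "Catalina Crunch", 1),
]

prompts_hit = defaultdict(set)   # prompts in which each brand appeared
positions = defaultdict(list)    # ordinal positions of each mention
for prompt_id, brand, pos in mentions:
    prompts_hit[brand].add(prompt_id)  # count each prompt once per brand
    positions[brand].append(pos)

for brand in prompts_hit:
    sov = 100 * len(prompts_hit[brand]) / N_PROMPTS
    avg_pos = sum(positions[brand]) / len(positions[brand])
    # e.g. "Magic Spoon: SoV 3.3%, avg position 1.5" for the toy data
    print(f"{brand}: SoV {sov:.1f}%, avg position {avg_pos:.1f}")
```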
Seeno tracks how your brand gets named by ChatGPT, Perplexity, Claude, and Google AIO — automatically, across your category's real shopper prompts.
Run a free audit → Free · No signup · 3-minute report