How to Read Your SPINS Report: A Plain-English Guide for Emerging CPG Brands
You just got your first SPINS report. Congratulations — and good luck.
If you're a founder or sales lead at a growth-stage food or beverage brand, opening a SPINS dashboard for the first time can feel like someone handed you a 747 cockpit manual and told you to fly. There are metrics you haven't heard of, categories that don't quite match how you think about your product, and comparisons to competitors that raise more questions than they answer.
This guide cuts through all of it. Here's exactly how to read your SPINS report — what each metric means, which numbers actually matter, and what to do with the data once you have it.
What Is SPINS Data, and Why Do You Have It?
SPINS is a syndicated data provider that aggregates point-of-sale (POS) data from natural and specialty grocery retailers. Unlike Nielsen (which covers conventional FDM — Food, Drug, Mass) or Circana/IRI (which covers conventional and some natural), SPINS is the gold standard for the natural channel: Sprouts, The Fresh Market, independent natural co-ops, and many regional health food chains.
You're likely reading a SPINS report because:
A retailer (Sprouts, Whole Foods, a regional chain) requires it as part of your category review
Your distributor (UNFI or KeHE) provided it as part of their brand portal
You purchased access directly through SPINS
Regardless of how you got it, the underlying structure is the same.
The 5 Metrics That Actually Matter
SPINS reports can show dozens of data points. Most of them are noise for an emerging brand. Here are the five you need to understand cold.
1. Dollar Sales
This is your total revenue through the tracked retailers for the time period selected. It's the most straightforward metric, but context is everything: $200K in dollar sales at 40 Sprouts stores means something very different from $200K across 400 stores.
What to watch: Are dollar sales trending up, flat, or declining, both week-over-week and across the trailing 52 weeks? Direction matters more than the absolute number at an early stage.
2. Unit Sales
How many units moved. This differs from dollar sales when you have multiple SKUs at different price points, or when promotions distort your average selling price.
What to watch: Unit velocity should increase as you add distribution. If dollar sales go up but unit sales stay flat, you may have a price increase in the data or a mix shift toward higher-price SKUs.
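A quick way to tell which is happening is to compare average selling price (ASP) across two periods. A back-of-the-envelope sketch in Python, using made-up numbers (not from any real report):

```python
# Illustrative numbers only -- not from a real SPINS report.
dollars = [40_000, 46_000]  # prior period, current period
units = [10_000, 10_100]    # unit sales for the same periods

# Average selling price per period: dollars divided by units
asp = [d / u for d, u in zip(dollars, units)]
print([round(a, 2) for a in asp])  # [4.0, 4.55] -- rising ASP: price increase or mix shift
```

If ASP is flat while dollars climb, the growth is real unit pull-through; if ASP is rising, look for a price change or a shift toward higher-priced SKUs before celebrating.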
3. Velocity ($ per Point of Distribution)
This is the most important metric in your report, full stop. Velocity measures your sales per store per week — it's how buyers assess whether your product is actually pulling through at shelf.
The formula: Dollar Sales ÷ Total Distribution Points (TDP)
A "good" velocity number varies by category, but for natural snacks and beverages, anything above $8-12 per store per week is typically considered healthy. Your SPINS report may express this as "$ per TDP" or "$ per ACV."
What to watch: Low velocity is the most common reason brands get cut from a reset. If your velocity is below category average, this is your most urgent problem to diagnose.
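The arithmetic is simple enough to sanity-check yourself. Here's a sketch with illustrative (made-up) figures, using the per-store-per-week form; your report's "$ per TDP" version divides by distribution points instead of raw store counts, but the logic is the same:

```python
# Illustrative figures only -- not from a real SPINS report.
dollar_sales_12w = 12_000   # 12-week dollar sales at one retailer
stores_carrying = 100       # stores currently stocking the product
weeks = 12

# Velocity as dollars per store per week
velocity = dollar_sales_12w / stores_carrying / weeks
print(f"${velocity:.2f} per store per week")  # $10.00 -- in the healthy range
```

Run the same calculation on your own 12-week numbers before a buyer meeting, so you know whether velocity is the story you lead with or the problem you explain.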
4. ACV (All Commodity Volume) % Distribution
ACV% tells you what percentage of the retailer universe (weighted by store size/sales volume) currently carries your product. 100% ACV at a retailer means you're in every store. 30% ACV means you're in a meaningful but partial subset.
What to watch: Rising ACV% is healthy. Stagnant or falling ACV% when you expect growth means placement isn't expanding or you're losing stores.
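Because ACV% is volume-weighted rather than a raw store count, losing one large store hurts more than losing several small ones. A toy illustration with hypothetical weights:

```python
# Each store: (carries our product?, store's share of all-commodity volume)
# Weights are hypothetical -- real ACV weights come from the data provider.
stores = [
    (True, 5.0),   # mid-size store that carries us
    (True, 2.0),   # small store that carries us
    (False, 8.0),  # the largest store in the universe -- a distribution gap
    (False, 1.0),  # small store we're not in
]

total_volume = sum(w for _, w in stores)              # 16.0
covered = sum(w for carried, w in stores if carried)  # 7.0
acv_pct = 100 * covered / total_volume
print(f"{acv_pct:.1f}% ACV")  # 43.8% -- in half the stores, but under half the volume
```

This is why "we're in 50% of stores" and "we're at 50% ACV" are not the same claim.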
5. Category Share
Your brand's dollar sales as a percentage of the total category within the tracked channel. This is the number buyers care about most, and the one you'll be asked to bring to every category review.
What to watch: Are you gaining or losing share? Even if your absolute sales are growing, losing share means competitors are growing faster — which is a problem you need to get ahead of.
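The share math also shows how a brand can grow dollars and still lose share. A hypothetical example (all figures invented for illustration):

```python
# Hypothetical 52-week figures: this year vs. prior year
brand_now, category_now = 450_000, 9_000_000
brand_prior, category_prior = 380_000, 6_500_000

share_now = 100 * brand_now / category_now        # 5.0%
share_prior = 100 * brand_prior / category_prior  # ~5.8%

print(f"Brand dollar growth: {100 * (brand_now / brand_prior - 1):.0f}%")  # +18%
print(f"Share: {share_prior:.1f}% -> {share_now:.1f}%")                    # shrinking
```

Here the brand grew dollars 18% year-over-year but lost nearly a share point, because the category grew faster. That's exactly the pattern a buyer will spot, so spot it first.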
How to Set Up Your Category Correctly (This Is Where Most Brands Go Wrong)
The single biggest mistake emerging brands make with SPINS is using the default category definition. SPINS organizes products into categories and sub-categories that may or may not match how consumers or buyers actually think about your product.
For example: if you make a high-protein, grain-free granola, SPINS might put you in "Granola & Muesli" alongside conventional sugary granolas. Your velocity looks terrible compared to the category average because you're a $12 premium bag competing against $4 conventional bags.
The fix: define a custom category or sub-category that creates a fair competitive set. In SPINS, you can filter by attribute tags (grain-free, protein content, clean label, etc.) to build a custom benchmark. This takes work, but the payoff is a data story that actually holds up when a buyer challenges you.
If you don't know how to do this in SPINS, this is one of the most valuable things an outside analyst can help you with.
The Time Periods That Matter
SPINS reports typically let you choose from several time windows. Here's how to use each:
4-week: Your most recent snapshot. Useful for spotting short-term velocity changes or the impact of a promotion. Noisy — don't make major decisions on 4-week data alone.
12-week (L12W): The standard for most buyer conversations. Stable enough to show trends, recent enough to be relevant.
52-week (L52W): Your baseline annual performance. Used for category share calculations and year-over-year comparisons.
Year-over-year (YOY): The most important trend metric. Buyers want to see growth, and YOY is the cleanest way to show it.
Rule of thumb: always lead with 52-week data for category context, then use 12-week to show recent momentum.
Reading a Competitive Report
SPINS allows you to see your performance alongside competitors in your category. This is where the data gets genuinely useful — and where brands typically get either excited or worried.
When reading competitive data, focus on:
Share of shelf vs. share of sales. A competitor with 30% more SKUs than you but only 10% more category share is over-assorted. That's a point you can make to a buyer.
Velocity comparison. If your velocity is at or above the category leader on a per-SKU basis, lead with that. It's a stronger argument for shelf expansion than absolute sales.
Trend direction. A competitor losing share is an opportunity. A competitor gaining share fast should make you ask why — and whether their pricing, packaging, or distribution expansion is something you need to respond to.
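A quick way to make the share-of-shelf argument concrete is share per SKU, a rough per-item proxy you can compute from any competitive report. A sketch with illustrative numbers (hypothetical brands and figures):

```python
# Hypothetical competitive set: SKU count and category dollar share (%)
brands = {
    "Our Brand":    {"skus": 4,  "share": 10.0},
    "Competitor A": {"skus": 12, "share": 12.0},
}

# Share points earned per SKU on shelf -- a crude per-item pull-through proxy
for name, b in brands.items():
    per_sku = b["share"] / b["skus"]
    print(f"{name}: {per_sku:.2f} share points per SKU")
# Our Brand: 2.50 vs. Competitor A: 1.00 -> they're over-assorted relative to us
```

A brand earning 2.5x the share per SKU of a larger competitor has a credible case for more facings; the competitor's extra SKUs are taking shelf space they aren't paying for in sales.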
What to Do When Your Numbers Look Bad
First: don't panic, and don't hide the data. Buyers have seen every number. The question they're really asking is whether you understand your business and have a plan.
If velocity is below category average, the usual culprits are:
Pricing. Are you priced more than 25-30% above the category average per ounce? That gap is often too wide for a brand without significant consumer awareness.
Placement. Are you in the right section of the store? Being in the wrong sub-section kills velocity silently.
Trial drivers. Sampling, demos, and paid social targeting to zip codes near your stores are the fastest velocity levers for early-stage brands.
Distribution gaps. If you have 40% ACV at a retailer but should have 80%, the missing 40 points drag down your absolute sales and share numbers (and any per-store average computed against the full store universe) without reflecting poor performance in the stores you're actually in.
A Word on SPINS vs. Circana vs. Nielsen
SPINS covers the natural channel. Circana (formerly IRI) and NielsenIQ cover conventional FDM channels plus some natural. If your brand sells at Kroger, Walmart, or Target, you need Circana or Nielsen data — SPINS alone won't capture your full retail picture.
Many growth-stage brands make the mistake of using SPINS data for their conventional retailer conversations. The coverage doesn't match. Make sure you know which syndicated source covers which of your retail partners, and don't conflate them.
The Bottom Line
Your SPINS report is only useful if you know what questions to bring to it. Most emerging brands receive the data and let it sit — either because it's overwhelming or because nobody on the team has the time to dig in properly.
The brands that win at retail use their data proactively. They bring velocity trend charts to category reviews. They build custom competitive sets that make their product look like the obvious choice. They catch distribution gaps before they become delistings.
If you're not doing that yet with your SPINS data — whether because of time, expertise, or bandwidth — that's the gap worth closing first.
CPG Data Nerds helps growth-stage food and drink brands actually use their syndicated and distributor data. If you want a second set of eyes on your SPINS report, reach out here.