The Whole Foods Data You Already Have (And Aren't Using)
Most brands treat Whole Foods reporting as a box to check. Someone downloads the weekly file from Grocery Central, glances at the total sales number, and moves on. That habit is costing you.
The weekly store-level data Whole Foods provides is, arguably, the most actionable retailer dataset available to an emerging brand — and it's free. You don't need a SPINS subscription or a Nielsen license to get it. The problem isn't access. The problem is that almost no one has set it up to actually work.
Here's what the data can do when it's modeled correctly, and why most Whole Foods reports fall short.
What You Can Actually See (When You Set It Up Right)
Grocery Central gives you weekly sales data at the individual store level. Not regional. Not aggregated by team. Individual stores, individual weeks. That granularity is rare — most syndicated data providers sell you retailer totals or at best regional splits. Whole Foods gives you the building blocks to do real analysis if you know how to use them.
When the data is structured properly, you can calculate velocity per store per week — the same metric SPINS and Circana use as their primary performance indicator. You can track distribution across stores over time and immediately see when a location stops ordering. You can measure the impact of a promotion at the store and regional level, not just as a systemwide blur. You can identify which stores are genuinely moving product and which have been carrying your SKU for six months without meaningful sell-through.
The raw file can't do any of this on its own. It's a flat weekly export. The value comes from building the right structure around it.
Why Most Whole Foods Reports Miss the Point
The default approach at most small brands is some version of this: download the weekly file, paste it into an existing spreadsheet, look at total units and dollars, note whether it's up or down versus last week, and file it away. Maybe aggregate it into a regional view. That's it.
This approach has three specific failures that matter.
The first is that units are reported but not normalized. If Store A does 20 units in a week and Store B does 8 units, that looks like Store A is dramatically outperforming. But if Store A is a 40,000 square foot flagship in Manhattan and Store B is a 12,000 square foot neighborhood store, those numbers tell a very different story. Velocity — units per store per week, or dollars per store per week — is the only metric that puts stores on an equal footing for comparison. Without it, you can't identify true over- and under-performers, which means you can't take targeted action.
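The calculation itself is simple; the work is collapsing the flat weekly export into a per-store view. A minimal sketch in Python — store names and unit figures here are hypothetical, and a real export would have more columns:

```python
from collections import defaultdict

# (store, week, units) rows, as they might come out of a flat weekly export.
# Store names and figures are hypothetical.
rows = [
    ("Manhattan Flagship", 1, 20), ("Manhattan Flagship", 2, 22),
    ("Neighborhood Store", 1, 8),  ("Neighborhood Store", 2, 9),
]

totals = defaultdict(lambda: [0, 0])   # store -> [total units, weeks carried]
for store, week, units in rows:
    totals[store][0] += units
    totals[store][1] += 1

for store, (units, weeks) in totals.items():
    # Velocity: units per store per week, the comparable metric
    print(f"{store}: {units / weeks:.1f} units/store/week")
```

From here, segmenting by store attributes (format, square footage, region) is what makes the comparison fair — a flagship and a neighborhood store belong in different peer groups.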
The second failure is that voids are invisible. A store that sold zero units in a given week doesn't announce itself in a raw data file. It just... doesn't appear. When you're looking at 40 or 80 or 150 stores' worth of data, a store that quietly stopped ordering three weeks ago can go unnoticed for months. By the time someone catches it, you've lost weeks of sales, and in some cases the store has moved on to a competitor SKU.
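Because the zero never appears in the file, void detection means comparing the export against your authorized store list. A sketch of that logic, with hypothetical store names:

```python
# Stores authorized to carry the SKU (hypothetical names)
authorized_stores = {"Austin Lamar", "Boulder Pearl St", "Denver Union"}

# Stores that appeared in each recent weekly sales file, oldest first
weekly_files = [
    {"Austin Lamar", "Boulder Pearl St", "Denver Union"},  # 3 weeks ago
    {"Austin Lamar", "Denver Union"},                      # 2 weeks ago
    {"Austin Lamar", "Denver Union"},                      # last week
]

# Any authorized store absent from the last two files is a suspected void
recent = set.union(*weekly_files[-2:])
suspected_voids = authorized_stores - recent
print(sorted(suspected_voids))  # → ['Boulder Pearl St']
```

The two-week threshold is a judgment call — one missing week can be an ordering gap; two or more in a row is worth a phone call.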
The third is that nobody measures promo ROI. Whole Foods promotions are expensive — you're typically paying for TPR (temporary price reduction) scan discounts, sometimes co-op spend, and the margin you give up during the promotional window. The weekly store-level data gives you everything you need to evaluate whether that spend drove real incremental volume or just discounted sales you would have made anyway. Pre-promo baseline velocity, week-of-promo lift, post-promo retention — all of it is in the data. Almost no one builds this analysis.
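The arithmetic behind that analysis is straightforward once the baseline is defined. A sketch with hypothetical figures — four pre-promo weeks as the baseline, one promo week, two post-promo weeks:

```python
# All figures are hypothetical chain-wide weekly unit totals.
baseline_weeks = [120, 115, 125, 118]   # 4 weeks before the promo
promo_week_units = 210                  # week of the TPR
post_promo_weeks = [122, 119]           # weeks after the promo ends
scan_discount_per_unit = 1.00           # TPR scan cost the brand funds

baseline = sum(baseline_weeks) / len(baseline_weeks)
incremental_units = promo_week_units - baseline
promo_spend = promo_week_units * scan_discount_per_unit  # discount paid on every unit
cost_per_incremental_unit = promo_spend / incremental_units
retention = (sum(post_promo_weeks) / len(post_promo_weeks)) / baseline

print(f"Lift vs. baseline: {incremental_units / baseline:.0%}")
print(f"Cost per incremental unit: ${cost_per_incremental_unit:.2f}")
print(f"Post-promo retention: {retention:.0%} of baseline velocity")
```

Note that the scan discount is paid on every promoted unit, including the baseline volume you would have sold anyway — which is exactly why cost per incremental unit, not total lift, is the number that decides whether to run the program again.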
What Becomes Possible When You Fix It
The shift from raw file to structured reporting changes the questions you can answer.
Instead of "how are we doing at Whole Foods overall?" you can ask "which 15 stores are our highest-velocity locations, and are we doing anything differently in those markets?" Instead of "sales seem soft lately" you can identify exactly which stores went quiet and when, and correlate it with a competitor reset or a void in your distribution. Instead of "the spring promo seemed fine," you can calculate the incremental lift per store, identify which regions actually responded, and decide whether to run the same program again in the fall.
These are the conversations that win and protect shelf space. Buyers at Whole Foods — especially at the regional and category manager level — respond to specificity. "Our top quartile of stores by velocity averaged $X per week over the last 12 weeks, and our voids dropped from 18% to 6% after we addressed distribution gaps in the Rocky Mountain region" is a sentence that closes business. "Sales are up and we're excited about the partnership" is not.
The Data You Need to Pull
To build this analysis, you need two things from Grocery Central: the weekly item-level sales file and the store list with store attributes (region, team, square footage if available). The item-level file is what gives you the store-week granularity. The store list is what lets you normalize and segment.
If you haven't been set up on Grocery Central properly, that's the first step — some brands have accounts but haven't configured their item access correctly, which is why their reports only show partial data. If your Whole Foods sales number in Grocery Central doesn't match what you're invoicing through UNFI or your direct Whole Foods account, that's usually a sign something is misconfigured.
Once you have the data, the most important columns to build calculations around are store-week units, store-week dollars, and distribution (whether a given store-SKU combination had any sales in a given week). From those three inputs, you can derive velocity, distribution %, void rate, and all the promotional comparison metrics.
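Those derivations can be expressed in a few lines. A sketch over hypothetical store-week rows of (store, week, units, dollars):

```python
# Hypothetical store-week rows; Store B has no row in week 2,
# which is how a void shows up in the raw file — as an absence.
rows = [
    ("Store A", 1, 20, 80.0), ("Store A", 2, 18, 72.0),
    ("Store B", 1, 8, 32.0),
]
all_stores = {"Store A", "Store B"}      # from the store list
weeks = {w for _, w, _, _ in rows}

selling = {(s, w) for s, w, u, _ in rows if u > 0}   # store-weeks with sales
possible = len(all_stores) * len(weeks)              # store-weeks that could sell

distribution_pct = len(selling) / possible
void_rate = 1 - distribution_pct
velocity = sum(u for _, _, u, _ in rows) / len(selling)  # units/store/week

print(f"Distribution: {distribution_pct:.0%}, void rate: {void_rate:.0%}")
print(f"Velocity: {velocity:.1f} units/store/week")
```

Note that velocity divides by selling store-weeks, not by all possible store-weeks — a deliberate choice so that voids depress the void rate metric rather than silently dragging down velocity.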
A Note on What This Data Can't Do
Whole Foods store-level data tells you what sold. It doesn't tell you why, and it doesn't tell you what's happening on the shelf in real time. Out-of-stocks, incorrect shelf placement, a competitor product that just landed next to yours — none of that shows up in the weekly export. You learn about it indirectly, through a velocity drop that prompts you to investigate.
This is also not a substitute for SPINS if you need competitive context. Whole Foods data shows your performance in isolation. To understand whether your velocity is good or bad relative to your category, you need syndicated benchmarks. The two datasets are complementary — Whole Foods data for store-level operational decisions, SPINS for category context and buyer conversations.
The Bottom Line
The brands that win long-term at Whole Foods are the ones that treat their data as an early warning system, not a report card. They're not waiting for a buyer to tell them velocity is soft at a regional review — they saw it three weeks ago in their store-level file and already have a plan.
If your current Whole Foods reporting is a weekly download and a glance at the total, you're leaving that advantage on the table. The data to do better is already in your Grocery Central account.
CPG Data Nerds builds this kind of reporting for growth-stage food and drink brands — structured Whole Foods dashboards, velocity tracking, void detection, and promo analysis. If you want to see what your data could look like, let’s find time to chat.