
Rufus Has Three Spots Per Category. Here's the Data on Who Gets Them.

Tags: Amazon Rufus, Rufus visibility, Amazon AI recommendations, AI shopping optimization, Rufus brand monitoring

Amazon Rufus returns the same 2–3 products per keyword. Nearly every time. Run the same query an hour apart, a day apart — same result. Those slots aren't random. Someone holds them. Most brand teams have no idea whether they do.

Amalytix analyzed 1,300+ products across systematic Amazon Rufus queries to find out what the consistent winners have in common. It's the clearest look we've had at how Amazon's AI decides what to recommend — and the numbers define a floor that most brands have never measured themselves against.

What Gets You Into the Shortlist

The hard floor is 4.0 stars. Nothing below that appeared in Rufus recommendations across the study. The median was 4.5. That gap matters more than it sounds — a product hovering at 4.1 isn't just "okay," it's below the threshold Amazon Rufus needs before it recommends with confidence.

Review counts tell a similar story. Median: 2,991. About 3,000 verified buyer signals before Rufus starts putting your product in front of shoppers who ask questions. New products, recently launched variations, anything under a few hundred standalone reviews — not showing up.

The content signals are starker:

| Rufus Signal | Threshold / Median |
| --- | --- |
| Star rating | 4.5 median, 4.0 floor |
| Review count | 2,991 median |
| Product images | 7 median |
| Product videos | 3 median |
| A+ content present | 87.2% of recommendations |
| Prime eligible | 92.1% of recommendations |
| Amazon Basics share | 0.6% |

That last row matters. The narrative that Amazon Rufus systematically steers shoppers toward Amazon's own products doesn't hold up. Amazon Basics appears in less than 1% of recommendations in this study. The advantage goes to well-reviewed, well-contented, Prime-eligible products from any brand — not the house label.

What the Slot Structure Means

The 2–3 consistent ASINs finding is the part most brand teams haven't thought through yet.

Traditional Amazon search returns hundreds of results. Your rank might be position 7 today and position 15 next week based on ad spend or sales velocity. Amazon Rufus is more like a curated shortlist — a handful of products it recommends confidently for a given query. Position 3 on page one of search is different from being one of three products Rufus recommends. The second is harder to get and harder to lose.

This is competitive positioning at a different level. Think about your main category keywords. Who holds those 2–3 Rufus slots right now? If it's not you, it's a competitor — and that competitor is capturing the 60% higher conversion rate Amazon has published for Rufus sessions vs. standard search. On a surface that's handling 38% of Amazon sessions in some categories.

The brands treating Rufus like a vague optimization project are thinking about it wrong. There are specific slots. Someone holds them. You can track whether it's you.

Why A+ Content at 87% Is Not a Coincidence

The 87.2% A+ content figure looks like a content quality filter. It probably is. But it's worth separating cause from correlation.

Brands that invested in A+ content tend to have built-out listings overall. They're the same brands with the review volume, the images, the videos, and the descriptions written for human readers rather than keyword scrapers. A+ content correlates with general listing quality, and Amazon Rufus responds to listing quality.

The practical implication is the same either way: if your listing doesn't have A+ content, you're missing a signal that 87% of recommended products carry. You can debate whether it's a direct input or a proxy for quality — that debate doesn't change the action.

Same logic for Prime. 92.1% of recommended products are Prime-eligible. If your products aren't Prime — FBM without SFP, or just slow fulfillment SLAs — you're not in the same conversation when Rufus fields a recommendation request in your category.

What This Means Before March 25

Sponsored Prompts — the paid ads inside Rufus conversations — go live with CPC billing on March 25. Amazon auto-enrolls any active Sponsored Products or Sponsored Brands campaign.

Here's the connection: Sponsored Prompts aren't a pure auction. Amazon Rufus decides which products are eligible to appear in a prompt based on relevance and content quality — the same signals driving organic recommendations. Budget and bid determine visibility among eligible products. Content quality determines whether you're eligible at all.

A product with 3.9 stars, 400 reviews, and no A+ content won't anchor a strong Rufus prompt response regardless of how high you bid. You're paying CPC to amplify a weak signal. Rufus will still answer the question — your product just won't be the answer.

Eleven days until billing starts. If your products don't clear the Amalytix thresholds, that's your prioritization list for those eleven days.

The Variation Review Problem

One specific scenario connects the two biggest recent Amazon changes.

Amazon's February 12 policy stripped pooled reviews from child ASINs with functional differences — different formulations, sizes that perform differently, specifications that matter. Full enforcement runs through May 31. A child ASIN that was inheriting thousands of reviews from a high-review parent may now show only a few hundred of its own.

Put the Amalytix data against that: the median Rufus recommendation has 2,991 reviews. A child ASIN that just dropped from 2,000 pooled reviews to 87 standalone ones is now below the floor. It won't get recommended organically. Sponsored Prompts won't help if the underlying signal isn't there.

Brands with large variation families should run this audit now: which child ASINs are affected by the review split, and where do they land against the Rufus thresholds? That's not a 2027 problem. It's a March 2026 problem.

Category-Specific Stakes

The thresholds are consistent, but the urgency varies by category.

For supplements, Amazon Rufus explicitly avoids reinforcing medical claims. Products that lead with efficacy language instead of use-case descriptions are at a systematic disadvantage — Rufus won't echo health claims, so it surfaces the products it can describe accurately. Clear use-case bullets beat vague efficacy language. See the full supplements checklist for the complete audit list.

For cameras and photography, the image and video thresholds are table stakes in a visual category. Seven product images and three videos are the median for products Rufus recommends. A camera product with four images and no video is competing in a format it's already losing.

For health monitors, A+ content with clear specification tables is what Rufus can extract and surface when someone asks "what blood pressure monitor should I buy." Unstructured spec information in bullets gives Rufus less to work with. See the health monitors comparison guide for category-specific benchmarks.

The Audit Most Brands Haven't Done

Most brand teams know their keyword rankings. Some know their organic Rufus visibility. Almost none have benchmarked their products against the actual Rufus qualification thresholds.

The Amalytix study gave you the checklist. Pull your star rating, review count, image count, video count, A+ status, and Prime eligibility for every product in your catalog. Run your main category keywords in the Amazon app and see which products Rufus recommends. Note the gap between where you are and what the data says you need.
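The audit above can be sketched as a small script. The thresholds are the ones from the Amalytix study; the product record, field names, and function are hypothetical examples, not part of any Amazon API.

```python
# Rufus qualification thresholds from the Amalytix study.
# Field names and the sample product below are hypothetical.
THRESHOLDS = {
    "star_rating": 4.0,    # hard floor (median was 4.5)
    "review_count": 2991,  # median for recommended products
    "image_count": 7,      # median
    "video_count": 3,      # median
}

def rufus_gaps(product: dict) -> list[str]:
    """Return the signals where a product falls short of the study's benchmarks."""
    gaps = [
        name for name, floor in THRESHOLDS.items()
        if product.get(name, 0) < floor
    ]
    if not product.get("a_plus_content"):
        gaps.append("a_plus_content")  # present on 87.2% of recommendations
    if not product.get("prime_eligible"):
        gaps.append("prime_eligible")  # present on 92.1% of recommendations
    return gaps

# Hypothetical child ASIN after the February 12 review split:
child_asin = {
    "star_rating": 4.6,
    "review_count": 87,  # dropped from ~2,000 pooled reviews
    "image_count": 7,
    "video_count": 3,
    "a_plus_content": True,
    "prime_eligible": True,
}
print(rufus_gaps(child_asin))  # flags only the review count
```

Run this across the catalog and the output per ASIN is exactly the gap list the paragraph above describes — the distance between where a product is and what the data says it needs.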

That gap is your roadmap — before March 25.

If you want to see exactly what Amazon Rufus is recommending in your category and where your products stand, that's what AgentBuy tracks. The Amalytix study told you the floor. AgentBuy shows you whether you're above it — and what Rufus actually says when your customers ask.

Free: Rufus Visibility Checklist

12 things to audit on your listings so Rufus actually recommends your products.