Cookie deprecation is no longer a future talking point; it is happening. If you promote products or run an app that buys media, you still need to know which clicks and views lead to results. The Attribution Reporting API (ARA), part of the Privacy Sandbox, is the most practical way to keep measuring performance without third‑party cookies or cross‑site identifiers. This article explains the API in plain language, walks through a minimal setup, and shows how to trust and tune what you get back—without leaking personal data.
What the Attribution Reporting API actually does
The API lets a browser remember that a person saw or clicked an ad (source) and later completed something valuable on your site (trigger). Instead of sending a detailed, user‑level trail, the browser sends privacy‑preserving reports on a delay. You choose what to measure and how to aggregate it. The browser enforces limits and adds noise to keep individuals from being singled out.
There are two report types you will use together:
- Event‑level reports: A sparse, delayed signal with a few bits of data you choose. Good for integration debugging and basic campaign attribution.
- Aggregatable reports: Encrypted contributions that get summed across many users by an aggregation service. Best for accurate totals by campaign, geo, creative, or device class—without user‑level detail.
Why this matters for small teams
You do not need a huge ad stack to use ARA. A few endpoints and a simple data model go a long way. You can keep a clean line from spend to outcomes, even as cross‑site cookies disappear.
The mental model: sources, triggers, and reports
Think in three stages:
- Register a source when someone sees or clicks an ad. A source stores metadata you define (for example, campaign and creative codes).
- Register a trigger when that same person converts (for example, signs up or purchases). A trigger includes your conversion labels and value ranges.
- Receive reports later, batched and sometimes noisy, at endpoints you control.
Sources: click‑through and view‑through
Sources represent ad exposures. There are two kinds:
- Navigation sources for clicks that open a landing page.
- Event sources for views, such as display impressions inside iframes.
You can register a source using a special attribute on links or assets, or via response headers from your ad server. The basic idea: when the creative loads or the click happens, the browser records the source with your metadata and an expiry window.
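To make that concrete, here is a minimal sketch of ad markup rendered by the ad server, assuming Chrome's `attributionsrc` attribute as described in the Privacy Sandbox guides; the hostnames, paths, and query parameters are placeholders.

```ts
// Sketch only: `attributionsrc` per Chrome's docs; URLs are placeholders.

// Navigation source (click-through): on click, the browser pings the
// attributionsrc URL with an eligibility header, and the server's response
// registers the source.
function renderClickAd(landing: string, campaignId: number, creativeId: number): string {
  const reg = `https://ads.example-adtech.com/register-source?c=${campaignId}&cr=${creativeId}`;
  return `<a href="${landing}" attributionsrc="${reg}">Shop the sale</a>`;
}

// Event source (view-through): the same attribute on an impression pixel.
function renderViewPixel(campaignId: number, creativeId: number): string {
  const reg = `https://ads.example-adtech.com/register-source?c=${campaignId}&cr=${creativeId}`;
  return `<img src="https://ads.example-adtech.com/pixel.gif" attributionsrc="${reg}" width="1" height="1" alt="">`;
}
```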
Triggers: conversions on your site
When the user completes a goal, your site registers a trigger. The trigger chooses which source types can be matched (navigation, event, or both), supplies small labels you manage (such as product line), and sets a value for aggregatable reporting. The browser then decides whether a match exists and schedules reports.
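On the client, the registration request is usually just a pixel or a fetch that your server answers with the trigger header (step 3 below shows the server side). A minimal sketch, assuming Chrome's experimental `attributionReporting` fetch option; the endpoint and parameters are placeholders.

```ts
// Sketch only: fire a background request on conversion so the server can
// answer with the trigger registration header. An <img attributionsrc=...>
// pixel works the same way.
async function registerConversion(kind: "signup" | "purchase", valueBucket: number): Promise<void> {
  const url = `https://ads.example-adtech.com/register-trigger?kind=${kind}&v=${valueBucket}`;
  await fetch(url, {
    keepalive: true, // survive navigation away from the confirmation page
    // Chrome's attributionReporting option is not yet in TypeScript's DOM
    // typings, hence the assertion.
    attributionReporting: { eventSourceEligible: false, triggerEligible: true },
  } as RequestInit);
}
```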
Privacy guardrails you should expect
- Delays: Reports do not arrive instantly. The browser waits, batches, and sometimes adds noise or holds back reports below thresholds.
- Limits: You get a small set of fields and low‑granularity values to prevent user re‑identification.
- Origin scoping: Reporting is scoped to origins. Your ad code, your landing pages, and your report endpoints must agree on which origin registers and receives what; mismatches lead to silent drops.
Design your measurement model
Success with ARA comes from choosing a compact, robust schema. You will compress rich marketing taxonomies into a handful of bits and keys. Keep it boring and consistent.
Pick the minimum “who, what, where” you need
- Campaign: Use a stable numeric code, not a long string.
- Creative: Another numeric code, separate from campaign.
- Channel: Keep it coarse (for example, display, search, social).
- Region: Coarse buckets you control, not GPS or city‑level detail.
- Device class: Desktop, tablet, or phone—only if truly needed.
For event‑level reports, you get very few custom bits. Use them to differentiate the most essential dimension. For aggregatable reports, you can define keys that combine several dimensions (for example, channel + campaign + region) and assign a contribution value you want summed (for example, 1 for a signup, order amount bucket for purchases).
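For the event-level side, a sketch of that compression, assuming Chrome's current limits of roughly 3 bits of trigger data for click sources and 1 bit for view sources (verify against the reference before relying on them):

```ts
// Map conversion labels onto the few event-level bits. Values 0-7 fit the
// assumed 3-bit budget for click (navigation) sources; view (event) sources
// get only 0-1, so reserve 0 and 1 for your primary metric.
const TRIGGER_DATA: Record<string, number> = {
  signup: 0,
  trial_start: 1,
  purchase: 2,
  // 3-7 left free for future conversion types
};

function triggerDataFor(kind: string): number {
  const v = TRIGGER_DATA[kind];
  if (v === undefined) throw new Error(`unmapped conversion type: ${kind}`);
  return v;
}
```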
Map conversion types to limited fields
Decide upfront how you will represent “signup,” “trial start,” and “purchase” using the small labels allowed in triggers. If you plan to optimize on a single primary metric, keep that mapping simple. If you need a revenue proxy, use quantized buckets (for example, 0–10, 11–50, 51–200, 200+).
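A quantizer for those buckets might look like this (a sketch; the edges are the example ranges above):

```ts
// Quantize order values into coarse buckets: 0-10, 11-50, 51-200, 200+.
// Report the bucket index, never the raw amount.
const BUCKET_EDGES = [10, 50, 200]; // inclusive upper bounds; beyond = last bucket

function revenueBucket(amount: number): number {
  for (let i = 0; i < BUCKET_EDGES.length; i++) {
    if (amount <= BUCKET_EDGES[i]) return i;
  }
  return BUCKET_EDGES.length;
}

console.log(revenueBucket(42)); // -> 1 (the 11-50 bucket)
```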
Step‑by‑step: a minimal implementation
You can start with one ad surface, one conversion type, and two endpoints for reports. Build from there.
1) Prepare your domains and endpoints
- Ad delivery origin (for example, ads.example-adtech.com): Serves creatives and registers sources.
- Landing and conversion origin (for example, app.example.com): Where triggers are registered.
- Report receivers (for example, reports.example.com): Two POST endpoints, one for event‑level JSON and one for aggregatable JSON. They return 200 OK and log payloads (a minimal receiver sketch follows this list).
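A minimal sketch of the receivers, assuming the well-known delivery paths Chrome documents today and using Express; everything here is a placeholder to adapt:

```ts
import express from "express";

const app = express();
app.use(express.json({ type: () => true })); // reports arrive as JSON POST bodies

// Event-level reports (assumed path per Chrome's docs; verify).
app.post("/.well-known/attribution-reporting/report-event-attribution", (req, res) => {
  console.log("event-level report:", JSON.stringify(req.body));
  res.sendStatus(200);
});

// Aggregatable reports: encrypted payloads you store and batch to the
// aggregation service rather than read directly.
app.post("/.well-known/attribution-reporting/report-aggregate-attribution", (req, res) => {
  console.log("aggregatable report received");
  res.sendStatus(200);
});

app.listen(8080);
```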
2) Register a source on ad load or click
For navigation sources (clicks), render ad links with a special attribute that points to your source registration endpoint. Your ad server responds with the appropriate headers so the browser stores the source with your metadata. For view‑through sources, do the same on impression assets (such as images or iframes). The exact header and attribute names are documented in the official guides; the sketch below uses the names current at the time of writing. Keep payloads small and numeric.
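Here is what the server side of source registration can look like, assuming Chrome's `Attribution-Reporting-Register-Source` response header and its documented JSON fields; field names, hosts, and values are assumptions to verify:

```ts
import express from "express";

const ads = express();

ads.get("/register-source", (req, res) => {
  const campaignId = Number(req.query.c ?? 0);
  const creativeId = Number(req.query.cr ?? 0);
  res.set(
    "Attribution-Reporting-Register-Source",
    JSON.stringify({
      // Your numeric metadata, encoded as a 64-bit string you control.
      source_event_id: String(campaignId * 1000 + creativeId),
      // Site where triggers are allowed to match this source.
      destination: "https://app.example.com",
      expiry: "604800", // attribution window in seconds (7 days)
      // Source half of an aggregation key (combined with the trigger half).
      aggregation_keys: { campaign: "0x1" },
    })
  );
  res.sendStatus(200);
});

ads.listen(8080);
```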
3) Register a trigger on conversion
When a conversion event fires on your site (for example, on a server‑rendered thank‑you page or a client‑side purchase confirmation), register a trigger with your labels and aggregatable value. This can be done via a JS call from the page or via a response header from your server (sketched below). Again, keep it compact.
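The server half of that, assuming Chrome's `Attribution-Reporting-Register-Trigger` header and field names (verify against the current reference); the mapping table mirrors the schema section's sketch:

```ts
import express from "express";

const ads = express();
const TRIGGER_DATA: Record<string, string> = { signup: "0", trial_start: "1", purchase: "2" };

ads.get("/register-trigger", (req, res) => {
  const kind = String(req.query.kind ?? "signup");
  const bucket = Number(req.query.v ?? 0);
  res.set(
    "Attribution-Reporting-Register-Trigger",
    JSON.stringify({
      // Event-level label: the few bits you chose in your schema.
      event_trigger_data: [{ trigger_data: TRIGGER_DATA[kind] ?? "0" }],
      // Trigger half of the aggregation key, OR-ed with the source half.
      aggregatable_trigger_data: [{ key_piece: "0x400", source_keys: ["campaign"] }],
      // The contribution the aggregation service will sum for that key.
      aggregatable_values: { campaign: bucket + 1 },
    })
  );
  res.sendStatus(200);
});

ads.listen(8080);
```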
4) Receive and store reports
- Event‑level reports arrive as simple JSON (a trimmed sample follows this list). They include your limited metadata, timestamps rounded to coarse buckets, and a randomization mechanism that sometimes withholds or alters details to protect privacy.
- Aggregatable reports arrive encrypted. You cannot read them directly. Forward them to an aggregation service you trust (hosted or self‑run) to get sums back by the keys you defined.
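For orientation, a trimmed sketch of an event-level report body, typed with the field names Chrome documents at the time of writing (treat them as assumptions to re-check):

```ts
// Shape of an event-level report as delivered to your endpoint (assumed).
interface EventLevelReport {
  attribution_destination: string;     // e.g. "https://app.example.com"
  source_event_id: string;             // the metadata you set at registration
  trigger_data: string;                // your few bits, possibly noised
  source_type: "navigation" | "event"; // click vs. view source
  randomized_trigger_rate: number;     // probability the data was randomized
  report_id: string;                   // use for dedup on your side
  scheduled_report_time: string;       // coarse, seconds since the epoch
}
```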
5) Debug with built‑in tools
- Use your browser’s DevTools Application panel to inspect stored sources and triggers where supported.
- Enable debug options in registration to receive extra diagnostic reports, which are separate from production reports (see the sketch after this list).
- Test with different origins locally (for example, subdomains) to catch origin‑scoping mistakes early.
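A sketch of the debug opt-in, assuming Chrome's `debug_key` and `debug_reporting` registration fields and its debug delivery paths (all to verify; in Chrome, debug reports also require an `ar_debug` cookie on the reporting origin):

```ts
// Add debug fields to a registration payload (source or trigger alike).
const sourceRegistration = {
  source_event_id: "12345",
  destination: "https://app.example.com",
  debug_key: "987654321", // 64-bit value you pick; echoed back in debug reports
  debug_reporting: true,  // also opt into verbose debug reports
};

// Debug copies arrive promptly at paths like:
//   /.well-known/attribution-reporting/debug/report-event-attribution
console.log(JSON.stringify(sourceRegistration));
```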
What “good” looks like in production
Ship in stages. Your goal is stable counts that line up with your backend sales and signups within a sensible range. Expect variance; you are modeling reality, not replaying it.
Parallel‑run and calibration
- Phase 1: Run ARA alongside your existing measurement (cookies or platform reports). Compare by day and by campaign. Expect different totals; you are measuring through a new lens.
- Phase 2: Turn off legacy remarketing and cross‑site identifiers in a subset of traffic. Confirm ARA holds up for your primary KPI.
- Phase 3: Scale ARA to all traffic. Keep a small holdout for sanity checks.
Sanity checks that save time
- Landing page quality: If bounce spikes on a single campaign, check click sources first. A missing attribute or header can collapse matches.
- Time windows: Ensure your attribution windows match your business. Too short and you lose late conversions; too long and you add noise and contention.
- Deduplication: Use dedup keys to guard against duplicate triggers, such as a double reload of the confirmation page (sketched below).
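The dedup guard from the last item, as a sketch assuming Chrome's `deduplication_key` field: repeats of the same key against a matched source are dropped by the browser.

```ts
const orderId = 48291; // your own stable order identifier

// Reloading the confirmation page re-sends this registration, but the
// browser ignores repeats of the same deduplication_key.
const triggerRegistration = {
  event_trigger_data: [
    {
      trigger_data: "2",                  // e.g. "purchase"
      deduplication_key: String(orderId), // numeric 64-bit string
    },
  ],
};
console.log(JSON.stringify(triggerRegistration));
```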
How to get accurate totals with aggregatable reports
Aggregatable reports are your workhorse for trustworthy totals. They return sums by the keys you define, computed by a privacy‑preserving aggregation service. That service combines many encrypted contributions and returns only aggregates over minimum crowd sizes.
Designing keys
Each trigger contributes to one or more keys. A key can represent campaign, region, device class, or a composite of those. Keep the number of keys manageable so you have enough data per bucket. If you spread contributions too thin, you will hit minimum thresholds and lose visibility.
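One workable pattern is to give each dimension its own bit range inside the key. Chrome's aggregation keys are 128-bit values assembled by OR-ing a source-side piece with a trigger-side piece; the layout below is an illustrative choice, not a spec.

```ts
// Source side knows campaign and channel; trigger side knows the region.
function sourceKeyPiece(campaignId: number, channel: number): bigint {
  // bits 0-15: campaign, bits 16-19: channel
  return BigInt(campaignId) | (BigInt(channel) << 16n);
}

function triggerKeyPiece(region: number): bigint {
  // bits 20-25: coarse region bucket
  return BigInt(region) << 20n;
}

// The browser ORs the two halves; the aggregation service then returns a
// noisy sum per final key, e.g. (campaign=7, channel=2, region=11).
const finalKey = sourceKeyPiece(7, 2) | triggerKeyPiece(11);
console.log("0x" + finalKey.toString(16));
```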
Choosing contribution values
- Count conversions: Set contribution to 1 for every success. Best for signup or trial start.
- Approximate revenue: Use coarse value buckets (for example, 5, 20, 100). Fine-grained values strain the API's contribution limits and increase privacy risk.
- Weighted goals: If you use a composite KPI (for example, “high‑quality signup”), encode it as a small set of weights agreed with marketing.
Using the aggregation service
In production you will either:
- Use a hosted aggregation service provided by a cloud or adtech vendor. You send encrypted reports to them and request aggregate results.
- Run an open source service in your own cloud if you need isolation or custom integrations. This takes more effort but keeps everything in your stack.
In both cases, you define reporting jobs (for example, daily by campaign and region). The service validates crowd thresholds and adds calibrated noise to the returned sums.
Event‑level reports: what they are good for
Event‑level reports are limited by design. They’re ideal for:
- Integration debugging: Confirm sources and triggers are flowing.
- Simple optimization: Coarse measurement by a small number of campaigns.
- Latency‑aware alerts: They tend to arrive sooner than aggregated results, so you can spot outages faster.
Do not try to rebuild user‑level logs. Fields are sparse, timestamps are batched, and sometimes a report just will not be sent. That is the point.
Common pitfalls and how to avoid them
Origin mismatches
Sources and triggers are scoped to origins. If your ad loads from one subdomain, the landing resolves on another, and your conversion happens on a third, be explicit. Keep a diagram of where each step happens and align it with your registration code.
Over‑segmentation
Resist the urge to pack every dimension into keys. ARA works best when you keep the number of distinct buckets small, so each meets minimum thresholds. Merge small markets or long‑tail creatives into an “Other” bucket you still monitor.
Ignoring noise and delays
Your totals will wiggle; real privacy requires statistical protection. Smooth your graphs with 7‑day windows (a tiny sketch follows) and compare trends, not single days.
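A minimal sketch of that smoothing:

```ts
// Trailing 7-day mean over a daily series; compare these, not raw days.
function rollingMean(daily: number[], window = 7): number[] {
  return daily.map((_, i) => {
    const slice = daily.slice(Math.max(0, i - window + 1), i + 1);
    return slice.reduce((a, b) => a + b, 0) / slice.length;
  });
}
```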
Modeling and decision‑making with limited signals
Successful teams combine ARA with a small set of clean, independent signals:
- Platform reporting (for example, ad network dashboards) for delivery baselines.
- Backend conversions from your own system of record.
- ARA aggregates for privacy‑preserving attribution to campaigns and channels.
Blend these in a simple model you can explain to non‑experts. A one‑page method beats a complicated spreadsheet no one trusts.
Calibration you can run monthly
- Holdout tests: Pause a campaign in a small region. Ensure ARA’s aggregate conversions shift in the expected direction versus your backend.
- Day‑of‑week patterns: Match campaign cycles with backend trends. If they mirror, your mapping is sound.
- Spend elasticity: Look for diminishing returns as spend rises. If ARA shows linear growth forever, you are over‑bucketed or under‑threshold.
Privacy and compliance you can explain to anyone
ARA reduces the risk of re‑identification by design. You can reinforce this:
- No PII in sources, triggers, or keys. Stick to your numeric codes.
- Short retention for raw reports on your servers. Keep only the aggregates you need.
- Clear disclosures in your privacy policy about privacy‑preserving measurement.
Explain it to your team like this: “We measure what works, not who you are.” That is a message customers and regulators can live with.
Cross‑browser reality and graceful fallback
Today, the Attribution Reporting API is most mature in Chromium‑based browsers. Other engines have different timelines and approaches. Build a graceful fallback plan:
- Server‑side last‑click on deep‑linked campaigns where the landing URL contains campaign codes (sketched after this list). It is not perfect, but it is transparent.
- Modeled conversions that combine backend totals and platform delivery when ARA is unavailable.
- Consent‑aware feature flags so you can disable measurement in regions or contexts where it is not appropriate.
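The last-click fallback can stay very small. A sketch, with placeholder parameter names:

```ts
// Pull campaign codes out of a deep-linked landing URL; attribute the
// conversion to the most recent codes seen in the session.
function lastClickFromLanding(landingUrl: string): { campaign: number; creative: number } | null {
  const params = new URL(landingUrl).searchParams;
  const c = params.get("cid");   // placeholder parameter names
  const cr = params.get("crid");
  if (!c || !cr) return null;
  return { campaign: Number(c), creative: Number(cr) };
}
```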
Security and reliability basics
- Validate JSON on event‑level endpoints and log suspicious payloads.
- Rate limit by IP and origin, but avoid dropping legitimate bursts that come from batched delivery.
- Monitor 4xx/5xx rates and alert when your endpoints or aggregation jobs fail.
- Version your schemas so you can evolve keys without breaking dashboards.
Putting it together: a sample rollout plan
Week 1–2
- Define your minimal schema: campaign, creative, channel, region.
- Set up two report endpoints with logging and dashboards.
- Enable source and trigger registration in a sandbox ad placement and a test conversion flow.
Week 3–4
- Start collecting event‑level and aggregatable reports in parallel with your legacy measurement.
- Validate aggregates from the service against your backend totals in a few buckets.
- Share early results internally with clear caveats on noise and delays.
Month 2
- Expand to your top campaigns. Add basic guardrails: dedup keys, window settings, and failure alerts.
- Run a controlled holdout to validate that changes in spend reflect in ARA aggregates.
- Publish a one‑page explainer for leadership and legal. Keep it simple and factual.
Frequently asked questions
Can I measure revenue exactly?
No. The API is not designed for exact user‑level revenue tracking. You can use coarse value buckets in aggregatable reports to get useful revenue signals without exposing individual purchases.
Can I support multi‑touch attribution?
Out of the box, the API focuses on last‑touch within the windows you choose. You can experiment with multi‑touch modeling at the aggregate level if you plan keys and holdouts carefully, but keep expectations modest.
Do I still need platform pixels?
Yes, but use them for delivery diagnostics, not as your only source of truth. ARA gives you an independent measurement channel you control.
A short glossary you will actually use
- Source: A stored record of an ad exposure (view or click), created at the time of the ad.
- Trigger: A stored record of a conversion event on your site that tries to match a source.
- Event‑level report: A delayed JSON report with minimal, noisy detail about a matched source and trigger.
- Aggregatable report: An encrypted contribution that, when combined with many others, yields sums by keys you define.
- Aggregation service: The system that computes aggregates from encrypted reports, enforcing thresholds and adding noise.
Summary
- The Attribution Reporting API measures ad outcomes without third‑party cookies using on‑device matching and privacy guardrails.
- Use event‑level reports for debugging and quick checks; rely on aggregatable reports for accurate totals.
- Design a compact schema with numeric codes and coarse buckets; avoid PII entirely.
- Ship in stages, calibrate with holdouts, and accept noise and delays as part of privacy.
- Secure your endpoints, monitor failures, and plan graceful fallbacks for non‑Chromium browsers.
