Every ad impression carries more than a chance to show a creative. It carries a stream of signals that, when read right, tell a story about who the user is, where they are, and how likely they are to become a valuable customer.
Most teams treat those signals as short-term bidding inputs. But you can turn them into long-term value by linking oRTB signals to real user acquisition outcomes like installs, revenue, and lifetime value (LTV).
Below, I explain what oRTB signals are, which ones matter most for user acquisition, and simple ways to connect those signals to LTV so your bids actually pay off.

What Are oRTB Signals
oRTB stands for OpenRTB, the Open Real-Time Bidding protocol: the standard format that exchanges and bidders use to describe an ad opportunity. A bid request arrives in real time and includes many small pieces of information. Those pieces are the oRTB signals. Think of each signal as one line in a short resume for the impression. Together, they help a bidder decide how much to pay.
Common oRTB Signals And What They Mean For UA
Here are the signals you will see most and why they matter for user acquisition:
| Signal | What It Tells You | Why It Helps UA |
| --- | --- | --- |
| Device and OS | Type of device, OS version | Some apps perform or monetize better on certain devices |
| Geo | Country, city, sometimes zip | Regional pricing, user behavior, and LTV vary by location |
| App or site metadata | App bundle ID, page URL, placement type | Shows content context and likely user intent |
| Timestamp and timezone | When the impression happened | Time of day affects conversion rates |
| Supply path and publisher ID | Where the inventory came from | Helps measure quality and fraud risk |
| Contextual data | Page category, content tags | Useful when IDs are limited or missing |
| IDs and identifiers | IDFA, GAID, or hashed IDs when available | Enables linking an impression to a user or event |
The OpenRTB specification and your exchange partners' docs describe how these fields are encoded in bid requests. Use them to learn which fields you can rely on and which will be optional or vary across partners.
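For illustration, here is a minimal Python sketch that pulls the signals from the table above out of an OpenRTB 2.x bid request parsed as JSON. The field paths follow common OpenRTB 2.x naming (device, geo, app, source), but which fields actually arrive varies by exchange, so every lookup should be treated as optional.

```python
# Minimal sketch: extract common UA-relevant signals from an OpenRTB 2.x
# bid request parsed as a dict. Every field is optional in practice, so
# lookups default to None instead of assuming presence.
import time

def extract_signals(bid_request: dict) -> dict:
    device = bid_request.get("device", {})
    geo = device.get("geo", {})
    app = bid_request.get("app", {})
    imp = (bid_request.get("imp") or [{}])[0]
    publisher = app.get("publisher", {})
    source = bid_request.get("source", {})

    return {
        "request_id": bid_request.get("id"),
        "device_os": device.get("os"),                        # e.g. "android"
        "os_version": device.get("osv"),                      # e.g. "12"
        "device_ifa": device.get("ifa"),                      # IDFA / GAID when available
        "country": geo.get("country"),                        # ISO 3166 country code
        "city": geo.get("city"),
        "app_bundle": app.get("bundle"),                      # app bundle ID
        "publisher_id": publisher.get("id"),
        "interstitial": imp.get("instl"),                     # 1 = fullscreen placement
        "supply_chain": source.get("ext", {}).get("schain"),  # supply path transparency
        "received_at": time.time(),                           # stamp arrival time yourself; the request carries no wall-clock field
    }
```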
Why oRTB Data Can’t Predict LTV On Its Own
Signals are immediate. LTV is delayed. An impression can tell you the user looks promising, but LTV depends on what happens after install: retention, purchases, and ad engagement. So the core challenge is attribution and mapping: which sets of signals consistently lead to higher LTV over weeks or months.
Guidance from AWS, Google, and others recommends capturing raw OpenRTB events and storing them for analytics. That historical store is what lets you test whether a signal pattern predicts real revenue later.
How To Turn Raw oRTB Data Into Real UA Results
You do not need an elaborate data lake to start. Here is a practical, step-by-step pipeline that works for most teams.
- Capture. Log each bid request and the associated signals you can get from your exchange. Include the request ID, timestamp, and any returned bid ID.
- Link. When an install or first conversion happens, link the post-install event back to the bid request using identifiers, postbacks, or probabilistic matching if IDs are absent. Google's and the IAB's docs show common linking patterns.
- Aggregate. Build features that summarize signal groups. For example, average LTV by publisher ID and device type, or conversion rate by hour of day.
- Model. Run simple uplift or regression models to estimate expected LTV from the signals. Start simple and iterate.
- Feed bids. Convert predicted LTV into a bid price or priority rule: if predicted LTV clears your CPA target, bid more; if not, reduce spend. A minimal sketch of the Model and Feed-bids steps follows this list.
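Here is that sketch, assuming you already have a table of impressions linked to post-install revenue. The file name, column names, day-30 revenue proxy, and the 2.0x cap are illustrative, not a prescribed setup.

```python
# Minimal sketch of the Model and Feed-bids steps, assuming a table of bid
# requests already linked to post-install revenue. File and column names
# (linked_impressions.csv, revenue_d30, etc.) are illustrative.
import pandas as pd
from sklearn.linear_model import LinearRegression

linked = pd.read_csv("linked_impressions.csv")
features = pd.get_dummies(
    linked[["publisher_id", "device_os", "country", "hour_of_day"]].astype(str)
)
# Use day-30 revenue as a simple LTV proxy for the first iteration.
model = LinearRegression().fit(features, linked["revenue_d30"])

def bid_price(request_features: pd.DataFrame, cpa_target: float, base_bid: float) -> float:
    """Turn predicted LTV into a bid: scale up when it clears the CPA target, pull back otherwise."""
    # request_features must be one-hot encoded with the same columns as `features`.
    predicted_ltv = float(model.predict(request_features)[0])
    if predicted_ltv > cpa_target:
        return base_bid * min(predicted_ltv / cpa_target, 2.0)  # cap the upside
    return base_bid * 0.8  # reduce spend rather than stopping outright
```

Starting with a linear model keeps the loop easy to debug; you can swap in uplift or tree-based models once the capture and linking steps are stable.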
Features To Build First
| Feature | How to compute | Use |
| --- | --- | --- |
| Publisher quality score | Avg revenue per user from that publisher over 30 days | Filter low-quality supply |
| Device LTV multiplier | Avg LTV by device type normalized to baseline | Adjust bids by device |
| Time-of-day factor | Conversion rate per hour for last 14 days | Schedule higher bids during peak hours |
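Here is one way the first two features could be computed from the linked table described earlier; the column names (install_ts, publisher_id, device_type, revenue_d30) are assumptions for illustration.

```python
# Minimal sketch: compute the publisher quality score and device LTV multiplier
# from the linked table. Column names are illustrative.
import pandas as pd

linked = pd.read_csv("linked_impressions.csv", parse_dates=["install_ts"])
recent = linked[linked["install_ts"] >= linked["install_ts"].max() - pd.Timedelta(days=30)]

# Publisher quality score: average 30-day revenue per user for each publisher
publisher_quality = recent.groupby("publisher_id")["revenue_d30"].mean()

# Device LTV multiplier: average LTV by device type, normalized to the overall baseline
baseline_ltv = recent["revenue_d30"].mean()
device_multiplier = recent.groupby("device_type")["revenue_d30"].mean() / baseline_ltv
```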
Small Changes That Make A Big Impact
Use publisher-provided signals and contextual signals when identifiers are weak. Publishers can supply audience or content attributes that improve prediction without exposing raw IDs. This is increasingly important as ID availability drops.
Store raw events for at least 30 to 90 days. You will need historic matches to see which signals truly correlate with revenue. Many integrations capture both near-real-time events for bidding and batch exports for model training.
Example: From Bid Request To LTV Decision
- Bid request arrives with publisher ID P123, device Android 12, country IN, placement fullscreen.
- Your capture logs the request and returns a bid.
- User installs the app. Attribution links the install back to that bid request ID.
- After 14 days, this cohort's average revenue is 1.6x baseline. Your model increases the bid multiplier for P123 and Android 12 in India for the same placement type (one way to apply that update is sketched below). Over time, you see higher-quality installs at a stable CPA.
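A minimal sketch of that multiplier update, assuming a simple in-memory map keyed by segment. The segment key, smoothing factor, and bounds are illustrative, not a prescribed update rule.

```python
# Minimal sketch of the multiplier update in the example above.
bid_multipliers: dict[tuple, float] = {}

def update_multiplier(key: tuple, cohort_ltv: float, baseline_ltv: float, learning_rate: float = 0.3) -> float:
    """Nudge the stored multiplier toward the observed LTV ratio instead of jumping straight to it."""
    observed_ratio = cohort_ltv / baseline_ltv
    current = bid_multipliers.get(key, 1.0)
    updated = current + learning_rate * (observed_ratio - current)
    bid_multipliers[key] = max(0.5, min(updated, 2.0))  # keep multipliers in a sane band
    return bid_multipliers[key]

# Day-14 cohort revenue is 1.6x baseline for this segment, so its multiplier drifts up.
update_multiplier(("P123", "android-12", "IN", "fullscreen"), cohort_ltv=1.6, baseline_ltv=1.0)
```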
Common Pitfalls And How To Avoid Them
- Chasing short-term conversion spikes only. That can increase install volume but reduce LTV. Track both early conversion and later revenue.
- Ignoring supply path transparency. Bad supply chains inflate spend and lower LTV. Use supply chain fields to weed out risky sources.
- Overfitting to small datasets. Require minimum sample sizes before trusting a new signal; a simple gate is sketched below.
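One way to enforce that minimum is a gate that falls back to a portfolio-level estimate until a segment has enough installs; the 500-install threshold and the names here are assumptions to tune for your own volume and variance.

```python
# Minimal sketch: ignore a segment-level LTV estimate until it has enough installs.
def trusted_ltv(segment_ltv: float, sample_size: int, portfolio_ltv: float, min_samples: int = 500) -> float:
    """Fall back to the portfolio-wide LTV while the segment's sample is still too small."""
    return segment_ltv if sample_size >= min_samples else portfolio_ltv
```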
KPI Checklist To Monitor
| KPI | Why it matters |
| --- | --- |
| Predicted LTV vs actual LTV | Tests model accuracy |
| Cost per install vs CPA target | Immediate bidding sanity check |
| Retention at day 1, 7, 30 | Shows early engagement that drives LTV |
| Spend by publisher ID | Detects sudden drops in quality |
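As an example of the first KPI, here is a minimal sketch that compares predicted and actual day-30 LTV per weekly install cohort; the export file and column names are assumptions.

```python
# Minimal sketch: track predicted vs actual day-30 LTV per weekly install cohort.
import pandas as pd

cohorts = pd.read_csv("cohort_ltv.csv")  # illustrative columns: install_week, predicted_ltv, actual_ltv_d30
cohorts["calibration"] = cohorts["actual_ltv_d30"] / cohorts["predicted_ltv"]
print(cohorts[["install_week", "calibration"]])  # values near 1.0 mean predictions match reality
```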
Bottom Line
oRTB signals are a rich, underused resource for user acquisition. The signals in each bid request are small clues. When you capture them, link them to installs and revenue, and use simple models, you can shift from bidding for impressions to bidding for long-term value.
If you want a partner that helps you scale efficiently, SpinX is a great choice. Their real-time bidding marketplace connects premium demand with quality inventory, and they specialize in driving high-quality users while optimizing for retention and performance.