On-Site vs. In-Ad Measurement
This article builds on fraud0 Detection Methodology – How We Identify Invalid Traffic (IVT) and explains where detection happens.
It describes the two measurement layers — on-site and in-ad — and shows concrete, real-world examples of what each detects.
Together, these layers translate the detection methodology into measurable, transparent results.
Why two measurement layers
fraud0 detects invalid traffic at two stages of the user journey.
In-Ad Measurement – runs inside the ad creative the moment an impression loads. Uses post-bid JavaScript to verify ad delivery, placement integrity, and viewability.
On-Site Measurement – runs on your website once a visitor lands. Analyses browser, device, behavioral, and network signals in real time.
The same detection logic described in fraud0 Detection Methodology is applied in both environments.
Only the data source and timing differ: on-site runs after the click, in-ad runs at the impression level.
On-Site Measurement – detecting automation on your website
The on-site tag checks every visit in real time.
It runs hundreds of browser and device checks and behavioral analyses, and plants hidden honeypot elements – part of over 2,000 cybersecurity challenges that fraud0 uses to verify authenticity (fraud0.com/technology).
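As a simplified illustration of the honeypot idea, here is a minimal sketch (with assumed names, not fraud0's actual implementation): a hidden element that no real visitor would ever see or interact with, so any interaction is a strong automation signal.

```ts
// Minimal honeypot sketch (illustrative only). The link is rendered but kept
// invisible and unreachable for real users; only scripts that blindly walk
// the DOM will ever interact with it.
function plantHoneypot(onBotInteraction: (reason: string) => void): void {
  const trap = document.createElement("a");
  trap.href = "#";
  trap.textContent = "Do not click";
  trap.setAttribute("aria-hidden", "true");
  trap.tabIndex = -1;
  // Visually hidden: tiny, moved off-screen, excluded from normal layout.
  trap.style.cssText =
    "position:absolute;left:-9999px;width:1px;height:1px;overflow:hidden;";
  trap.addEventListener("click", () => onBotInteraction("honeypot-click"));
  document.body.appendChild(trap);
}

plantHoneypot((reason) => console.log("suspicious:", reason));
```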
Real examples of what on-site detection finds
Example A – Impossible Hardware Profile
Reported OS: Android 14 / Fonts Reported: [] / Hardware Concurrency: 119
Interpretation: Real Android devices typically report 4–12 cores; a value of 119 points to an emulator or server environment rather than a phone. The empty font list is a second red flag (see Example D).
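A minimal sketch of such a plausibility check (the accepted range is an illustrative assumption, not fraud0's actual threshold):

```ts
// Illustrative check on the reported core count; the 1–16 range is an
// assumption for typical consumer devices, not a fraud0 rule.
function hardwareConcurrencyLooksPlausible(): boolean {
  const cores = navigator.hardwareConcurrency ?? 0;
  return cores >= 1 && cores <= 16;
}

if (!hardwareConcurrencyLooksPlausible()) {
  console.log("signal: implausible core count", navigator.hardwareConcurrency);
}
```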
Example B – Tampered Browser Objects
Browser APIs like window.chrome or navigator.webdriver are missing or falsified.
Typical for automation tools such as Selenium or Playwright.
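A hedged sketch of two such consistency checks (illustrative only; real detection combines many more signals):

```ts
// Illustrative consistency checks for common automation traces.
function automationTraces(): string[] {
  const traces: string[] = [];
  // WebDriver-based tools such as Selenium or Playwright expose this flag
  // unless it has been tampered with.
  if (navigator.webdriver === true) traces.push("navigator.webdriver is true");
  // A browser claiming to be Chrome should expose window.chrome.
  const claimsChrome = navigator.userAgent.includes("Chrome");
  if (claimsChrome && typeof (window as any).chrome === "undefined") {
    traces.push("UA claims Chrome but window.chrome is missing");
  }
  return traces;
}

console.log(automationTraces());
```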
Example C – Software Rendering Instead of GPU
Real hardware uses GPU acceleration.
Virtual systems use software renderers like SwiftShader → flagged as likely emulation.
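A minimal sketch of how the renderer string can be inspected (the pattern list is an assumption chosen for illustration, not fraud0's exact logic):

```ts
// Illustrative WebGL renderer check. The WEBGL_debug_renderer_info extension
// exposes the unmasked renderer string; matching it against known software
// renderers is a simple heuristic for emulation.
function usesSoftwareRenderer(): boolean {
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl");
  if (!gl) return true; // no WebGL at all is itself unusual on real hardware
  const info = gl.getExtension("WEBGL_debug_renderer_info");
  if (!info) return false;
  const renderer = String(gl.getParameter(info.UNMASKED_RENDERER_WEBGL));
  return /swiftshader|llvmpipe|software/i.test(renderer);
}

console.log("software renderer:", usesSoftwareRenderer());
```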
Example D – Font List Anomalies
Missing baseline fonts (e.g. Calibri on Windows or Helvetica Neue on macOS) → automation signal.
Android-specific fonts (HarmonyOS Sans, MiSans, vivo Sans) → expected and never flagged.
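One common way to test font availability is the canvas measurement trick sketched below; the font names are illustrative, not fraud0's reference lists:

```ts
// Classic canvas-measurement font check: if text rendered with
// `"<font>", monospace` has the same width as plain monospace, the font is
// treated as unavailable on this device.
function fontAvailable(font: string): boolean {
  const ctx = document.createElement("canvas").getContext("2d")!;
  const sample = "mmmmmmmmmmlli";
  ctx.font = "72px monospace";
  const baseline = ctx.measureText(sample).width;
  ctx.font = `72px "${font}", monospace`;
  return ctx.measureText(sample).width !== baseline;
}

const missing = ["Calibri", "Segoe UI"].filter((f) => !fontAvailable(f));
console.log("missing baseline fonts (Windows example):", missing);
```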
Example E – TLS Fingerprint Anomalies
Each secure connection leaves a TLS fingerprint.
Real users produce a few stable patterns.
High cardinality (many unique fingerprints) → automated traffic.
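Conceptually, the check boils down to counting distinct fingerprints per declared browser, as in this back-of-the-envelope sketch (the data shapes are assumptions for illustration):

```ts
// Count distinct TLS fingerprints (e.g. JA3 hashes) per declared user agent.
// An unusually high ratio of fingerprints to sessions suggests automation.
interface Session { userAgent: string; tlsFingerprint: string; }

function fingerprintCardinality(sessions: Session[]): Map<string, number> {
  const byUa = new Map<string, Set<string>>();
  for (const s of sessions) {
    if (!byUa.has(s.userAgent)) byUa.set(s.userAgent, new Set());
    byUa.get(s.userAgent)!.add(s.tlsFingerprint);
  }
  return new Map([...byUa].map(([ua, fingerprints]) => [ua, fingerprints.size]));
}
```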
Example F – Data-centre Traffic
Sessions from public cloud infrastructures (e.g. AWS or OVH ranges) are flagged as invalid when identified via real-time network signals.
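The underlying mechanics amount to a CIDR membership test, sketched here with placeholder ranges (a production system would rely on continuously updated provider lists):

```ts
// Minimal IPv4 CIDR membership check. The ranges below are placeholders for
// illustration, not a real or complete list of cloud provider ranges.
const DATA_CENTRE_RANGES = ["3.0.0.0/9", "198.51.100.0/24"];

function ipToInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => (acc << 8) + Number(octet), 0) >>> 0;
}

function inCidr(ip: string, cidr: string): boolean {
  const [base, bits] = cidr.split("/");
  const mask = bits === "0" ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

const isDataCentre = (ip: string) =>
  DATA_CENTRE_RANGES.some((range) => inCidr(ip, range));

console.log(isDataCentre("3.12.34.56")); // true with the placeholder ranges
```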
What this layer reveals
Which sessions behave like humans vs. automation.
How much of your incoming traffic is human, suspicious, or bot.
Post-click quality across all acquisition channels.
In-Ad Measurement – detecting fraud before the click
In-ad measurement runs directly inside the ad creative.
It uses post-bid JavaScript to validate impressions after they are served and to confirm whether an ad was truly viewable.
This layer captures invalid activity that never reaches your website (fraud0.com/product-in-ad-measurement).
Real examples of what in-ad detection finds
Example 1 – Pixel Stuffing
An ad loads inside a 1 × 1 pixel iframe.
It’s invisible to users but still counted as a served impression.
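A simplified sketch of what a post-bid script can measure here (the `ad-slot` container id and the 10-pixel threshold are illustrative assumptions):

```ts
// Illustrative in-ad check: measure the container the creative actually
// rendered into. A near-invisible slot while the impression is still counted
// is the pixel-stuffing pattern.
function looksPixelStuffed(adContainer: Element): boolean {
  const rect = adContainer.getBoundingClientRect();
  return rect.width <= 10 || rect.height <= 10;
}

const slot = document.getElementById("ad-slot");
if (slot && looksPixelStuffed(slot)) {
  console.log("signal: ad rendered into a near-invisible container");
}
```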
Example 2 – Ad Stacking
Multiple ads placed on top of each other.
Only the top ad is visible – all count as delivered.
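A rough sketch of one way to spot this from inside the page, by checking which element is actually topmost at the creative's centre (only meaningful within the same document; cross-frame stacking needs other signals):

```ts
// Illustrative check: which element does the browser report as topmost at the
// creative's centre point? If it is neither the creative nor one of its
// children, something is covering it.
function isCoveredByAnotherElement(ad: Element): boolean {
  const r = ad.getBoundingClientRect();
  const topmost = document.elementFromPoint(r.left + r.width / 2, r.top + r.height / 2);
  return topmost !== null && topmost !== ad && !ad.contains(topmost);
}
```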
Example 3 – Made-for-Advertising (MFA) Websites
Pages created only for ad revenue.
They generate many impressions but almost no user engagement.
Example 4 – Domain Spoofing
Low-quality sites pretend to be trusted publishers by faking the domain in ad requests.
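A heavily simplified sketch of one possible cross-check: comparing the domain declared in the ad request (here a hypothetical declaredDomain parameter passed into the creative) with what the browser itself reports about the embedding page. This is not fraud0's method, just an illustration of the idea.

```ts
// Inside a cross-origin iframe the page URL is usually not readable directly,
// but location.ancestorOrigins (where supported) or document.referrer give an
// approximation of the real top-level domain.
function detectedTopDomain(): string | null {
  const origins = location.ancestorOrigins;
  if (origins && origins.length > 0) {
    return new URL(origins[origins.length - 1]).hostname; // last entry = top-level origin
  }
  if (document.referrer) return new URL(document.referrer).hostname;
  return null;
}

function looksSpoofed(declaredDomain: string): boolean {
  const actual = detectedTopDomain();
  // Crude suffix comparison for illustration; real matching needs eTLD+1 logic.
  return actual !== null && !actual.endsWith(declaredDomain);
}
```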
Example 5 – Bot-Triggered Impressions
Automated systems trigger ad loads, complete the impression event, and stop before the redirect → such hits never appear in on-site data but are captured by in-ad tracking.
Example 6 – Non-Viewable Inventory
The ad technically loads but is never visible in the viewport.
In-ad measurement uses viewability standards (Open Measurement SDK) to mark it as invalid.
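For context, the common display threshold of at least 50% of pixels in view for one continuous second can be approximated with an IntersectionObserver, as in this conceptual sketch; in production the OM SDK handles this:

```ts
// Simplified viewability sketch: mark the impression viewable once at least
// 50% of the creative has stayed in the viewport for one continuous second.
function watchViewability(ad: Element, onViewable: () => void): void {
  let timer: number | undefined;
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (entry.intersectionRatio >= 0.5) {
          // Start the one-second clock if it is not already running.
          timer = timer ?? window.setTimeout(() => {
            onViewable();
            observer.disconnect();
          }, 1000);
        } else if (timer !== undefined) {
          // Dropped below 50% before the second elapsed: reset the clock.
          window.clearTimeout(timer);
          timer = undefined;
        }
      }
    },
    { threshold: [0, 0.5] }
  );
  observer.observe(ad);
}
```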
What you can do with the results
Identify placements or domains with high invalid impression rates.
Build inclusion and exclusion lists to protect future campaigns.
Confirm viewability and placement transparency for each media partner.
How both layers work together
| Stage | Measurement | Typical signals | What it shows |
|---|---|---|---|
| Before the click | In-Ad | Pixel stuffing, ad stacking, MFA traffic, domain spoofing, non-viewable delivery | Invalid ad delivery and fake impressions |
| After the click | On-Site | Impossible hardware, tampered APIs, TLS anomalies, data-centre IPs | Non-human sessions on your website |
Scenario
A campaign shows 12 % invalid impressions in-ad and 4 % bot sessions on-site.
This means most fraud occurs before the click.
Use in-ad data to exclude weak placements and on-site data to validate post-click engagement.
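A back-of-the-envelope illustration of why the two rates describe different losses (all input values are made-up examples, not benchmarks):

```ts
// Worked example for the scenario above; every number is illustrative.
const impressions = 1_000_000;
const cpm = 5;                       // € per 1,000 impressions
const invalidImpressionRate = 0.12;  // 12% flagged in-ad
const clicks = 10_000;
const botSessionRate = 0.04;         // 4% flagged on-site

const mediaSpend = (impressions / 1000) * cpm;               // €5,000
const preClickWaste = mediaSpend * invalidImpressionRate;    // €600 spent before any click
const invalidSessions = clicks * botSessionRate;             // 400 non-human sessions

console.log({ mediaSpend, preClickWaste, invalidSessions });
```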
Industry context and standards
fraud0 classifies invalid traffic in line with industry terminology set by the Media Rating Council (MRC):
GIVT (General Invalid Traffic) – straightforward cases such as data-centre IPs or declared bots.
SIVT (Sophisticated Invalid Traffic) – harder cases like emulated devices, scripted browsers, and hidden ad placements.
fraud0’s approach is complementary to supply-chain standards such as ads.txt, app-ads.txt, sellers.json, and the SupplyChain object, which help ensure authorized reselling and reduce domain spoofing.
Further Readings
Media Rating Council (MRC) – Invalid Traffic Detection and Filtration Standards Addendum (June 2020)
A key industry standard defining general invalid traffic (GIVT) and sophisticated invalid traffic (SIVT) and how they should be detected and removed.
IAB Tech Lab – Open Measurement SDK (OM SDK) Specification
https://iabtechlab.com/standards/open-measurement-sdk/
Technical standard for measurement and verification of ad viewability and invalid traffic across web, app, and CTV.
MRC – Minimum Standards for Media Rating Research
https://www.mediaratingcouncil.org/standards-and-guidelines
Foundational guidelines for measurement organizations, useful for understanding how IVT detection fits into overall rating integrity.