Even the best digital advertising campaign can fail if its performance data is inaccurate. Invalid traffic, such as traffic from bots and other non-human sources, distorts audience measurement and disrupts your campaign strategy.
This glossary page breaks down the key terms and concepts related to GIVT, helping marketers minimize its impact on their campaigns.
General invalid traffic (GIVT) refers to any non-human or otherwise non-legitimate ad traffic that can be identified and filtered using tools and rules. GIVT includes activity from known data centers, bots, crawlers, and irregular patterns such as duplicate ad requests or traffic from outdated browsers.
GIVT is not usually fraudulent in intent. Still, it can skew campaign metrics and reduce media efficiency by serving ads to unqualified or non-existent users. Thus, identifying GIVT is critical to ensure that reported ad impressions, clicks, and conversions reflect genuine user interest and engagement.
There are two main types of invalid traffic: General Invalid Traffic and Sophisticated Invalid Traffic (SIVT). The main difference between the two lies in the sources of the traffic and how easily it can be detected.
GIVT refers to traffic from sources that can be detected with standard tools, blacklists, or pre-defined filtration lists, such as bots, spiders, and known data centers. This type of invalid traffic is usually benign in intent, but it still hinders campaigns and is therefore unhelpful to advertisers.
On the other hand, SIVT is more complex and harder to detect. Sophisticated invalid traffic involves tactics like spoofed domains, session hijacking, malware-driven traffic, and human-like bots. Detecting SIVT often requires machine learning, behavioral analysis, and forensic investigation. Both forms inflate metrics and waste ad spend, but SIVT poses a greater challenge because of its deceptive nature.
Preventing invalid traffic is essential for maintaining the integrity of your digital advertising campaign. When a campaign receives GIVT, performance data gets diluted, and it becomes difficult to assess which channels, creatives, or audience segments are effective. Inflated impression and click counts also distort cost-per-metric calculations, leading to inefficient media buying and poor ROI.
More importantly, if GIVT is not detected and prevented, continuous exposure to it can harm brand safety and erode trust with publishers and platforms. Money wasted on bots or other non-human traffic could have been spent reaching real consumers.
General invalid traffic has several causes, both malicious and non-malicious. The most common include declared bots, spiders, and crawlers; traffic from known data centers; duplicate ad requests; and traffic from outdated or non-standard browsers.
Detecting and eliminating invalid traffic requires both technical tools and consistent monitoring. For GIVT, basic filters from analytics platforms like Google Ads, DV360, or verification tools can flag known bots and data center IPs. Businesses can also use log-level data analysis, and AI-based techniques are increasingly popular for revealing patterns such as unusually high click-through rates or suspicious referral sources.
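As a rough illustration of log-level analysis, the sketch below aggregates impressions and clicks per referring source and flags any source with an implausibly high click-through rate or a referrer on a blocklist. The field names, the 30% CTR threshold, and the placeholder domains are assumptions made for the example, not values taken from any particular platform.

```python
# Illustrative sketch only: flag traffic sources with suspicious CTR or referrers.
CTR_THRESHOLD = 0.30  # assumed cutoff: a CTR above 30% is implausible for most display campaigns
SUSPICIOUS_REFERRERS = {"traffic-exchange.example", "bot-farm.example"}  # placeholder blocklist

def flag_suspicious_sources(log_rows):
    """Aggregate impressions and clicks per referring source and flag likely GIVT."""
    stats = {}
    for row in log_rows:
        source = row["referrer"]
        s = stats.setdefault(source, {"impressions": 0, "clicks": 0})
        s["impressions"] += 1
        s["clicks"] += row["clicked"]

    flagged = []
    for source, s in stats.items():
        ctr = s["clicks"] / s["impressions"]
        if ctr > CTR_THRESHOLD or source in SUSPICIOUS_REFERRERS:
            flagged.append((source, round(ctr, 2)))
    return flagged

# Synthetic example rows: one ordinary referrer, one blocklisted source
logs = [
    {"referrer": "news-site.example", "clicked": 0},
    {"referrer": "news-site.example", "clicked": 0},
    {"referrer": "news-site.example", "clicked": 1},
    {"referrer": "news-site.example", "clicked": 0},
    {"referrer": "traffic-exchange.example", "clicked": 1},
    {"referrer": "traffic-exchange.example", "clicked": 1},
]
print(flag_suspicious_sources(logs))  # [('traffic-exchange.example', 1.0)]
```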
To reduce GIVT, advertisers should:
Implement traffic verification and ad fraud prevention tools: Using specialized verification platforms helps identify and filter out non-human or invalid traffic before it affects campaign data. These tools analyze clicks, impressions, and engagement patterns in real time, ensuring that advertisers pay only for genuine user interactions. Leading vendors also provide detailed reports, allowing brands to take corrective action quickly.
Block known data center IPs and non-human user agents: Many sources of invalid traffic originate from data centers or automated systems that simulate human browsing behavior. Maintaining and updating block lists of suspicious IP addresses and known bots can significantly reduce exposure to fraudulent impressions (a minimal filtering sketch appears after this list).
Monitor site analytics for abnormal patterns: Unusual engagement metrics may be a sign of invalid traffic. For instance, high bounce rates, a sudden spike in impressions from a single location, or very short user sessions suggest bot activity. Review these patterns regularly to identify irregularities and maintain campaign integrity (see the anomaly-flagging sketch after this list).
Audit campaign traffic sources and performance by placement and channel: Frequent auditing helps uncover discrepancies and isolate underperforming or suspicious sources. Advertisers should analyze traffic quality across all channels.
Use ads.txt and app-ads.txt to authorize legitimate sellers: The ads.txt framework, developed by the Interactive Advertising Bureau (IAB), allows publishers to publicly declare which companies are authorized to sell their inventory (an example file appears below).
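The sketch below, referenced from the blocking recommendation above, checks an incoming request's IP address against data-center CIDR ranges and its user agent against common bot tokens. The ranges shown are reserved documentation addresses and the token list is an assumption; in practice both would come from maintained industry lists such as the IAB/ABC International Spiders & Bots List.

```python
import ipaddress

# Placeholder CIDR ranges (reserved documentation blocks) standing in for real
# data center ranges, plus assumed substrings of non-human user agents.
DATA_CENTER_RANGES = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]
BOT_UA_TOKENS = ("bot", "crawler", "spider", "headless")

def is_givt_request(ip: str, user_agent: str) -> bool:
    """Return True if the request looks like general invalid traffic."""
    addr = ipaddress.ip_address(ip)
    if any(addr in network for network in DATA_CENTER_RANGES):
        return True
    return any(token in user_agent.lower() for token in BOT_UA_TOKENS)

print(is_givt_request("203.0.113.7", "Mozilla/5.0"))              # True: data center IP
print(is_givt_request("192.0.2.10", "ExampleBot/1.0 (crawler)"))  # True: bot user agent
print(is_givt_request("192.0.2.10", "Mozilla/5.0"))               # False: passes both checks
```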
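For the analytics-monitoring recommendation, a simple rule-based pass over per-placement statistics can surface the patterns described above. The thresholds here are illustrative assumptions to show the idea, not industry-standard cutoffs; real systems would baseline them per site.

```python
# Illustrative anomaly flags for the engagement patterns that suggest bot activity.
def find_anomalies(daily_stats):
    """Flag placements whose engagement pattern suggests bot activity."""
    anomalies = []
    for placement, s in daily_stats.items():
        reasons = []
        if s["bounce_rate"] > 0.90:           # assumed: almost every visit bounces
            reasons.append("very high bounce rate")
        if s["avg_session_seconds"] < 2:      # assumed: sessions too short to be human
            reasons.append("near-zero session duration")
        if s["top_geo_share"] > 0.80:         # assumed: impressions concentrated in one location
            reasons.append("impression spike from a single location")
        if reasons:
            anomalies.append((placement, reasons))
    return anomalies

stats = {
    "placement_a": {"bounce_rate": 0.45, "avg_session_seconds": 38, "top_geo_share": 0.20},
    "placement_b": {"bounce_rate": 0.97, "avg_session_seconds": 1, "top_geo_share": 0.92},
}
print(find_anomalies(stats))  # flags placement_b on all three counts for manual review
```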
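For reference, an ads.txt file is a plain-text list served at the root of the publisher's domain (for example, https://publisher.example/ads.txt). Each line names an ad system's domain, the publisher's account ID with that system, the relationship type (DIRECT or RESELLER), and optionally a certification authority ID. The entries below use placeholder domains and IDs.

```
# Hypothetical ads.txt entries; domains and account IDs are placeholders.
adexchange.example, 1234, DIRECT, abc123def456
resellernetwork.example, 5678, RESELLER
```

Buying platforms can cross-check bid requests against these declarations and discard inventory offered by sellers the publisher has not authorized.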