Google Ads Experiments: Running Valid A/B Tests

2026-01-23
5 min read
Kiril Ivanov
Performance Marketing Specialist

Most advertisers test things by "changing it and seeing what happens." This is not testing. This is gambling. If you change your bid strategy from Manual CPC to Target CPA, and sales go up, was it because of the change? Or was it because it's Tuesday? Or because your competitor ran out of budget?

To know the truth, you need a control group and an experiment group running simultaneously. Google Ads has a built-in feature called Experiments (formerly "Drafts & Experiments") that handles the split for you.

In this guide, we break down the Scientific Testing Framework, the 50/50 Cookie Split, and the top 3 high-impact tests you should run this quarter.

The Financial Value of "Certainty"

Bad testing costs money twice.

  1. The Loss: You switch to a new strategy that tanks performance (-20% revenue).
  2. The Reversion: You panic and switch back, sending the algorithm into "Learning Mode" again (-10% revenue).

Experiments allow you to limit the risk. You can run the test on just 30% of your traffic. If it tanks, you only damaged 30% of the account, not 100%.

The Significance Formula:

$$ \text{Confidence Level} = 1 - \text{P-Value} $$

You do not need to do the math. Google does it for you. Look for the "Blue Star" icon in the experiments tab. That means "Statistically Significant."
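If you want to sanity-check the significance yourself from an export, here is a minimal sketch using a standard two-proportion z-test. It assumes you have clicks and conversions for each arm; the input numbers below are placeholders, and Google's own model is more involved than this.

```python
# Minimal sketch: two-proportion z-test on conversion rate, Control vs Experiment.
# Assumes you export clicks and conversions per arm; the inputs below are placeholders.
from math import sqrt
from statistics import NormalDist

def significance(clicks_a: int, conv_a: int, clicks_b: int, conv_b: int) -> tuple[float, float]:
    """Return (confidence_level, p_value) for the difference in conversion rate."""
    rate_a, rate_b = conv_a / clicks_a, conv_b / clicks_b
    # Pooled rate under the null hypothesis "both arms convert at the same rate"
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed test
    return 1 - p_value, p_value

confidence, p = significance(clicks_a=1500, conv_a=20, clicks_b=1400, conv_b=28)
print(f"Confidence level: {confidence:.1%} (p = {p:.3f})")
```

If the confidence level comes out below roughly 95%, treat the "winner" as noise and keep the test running.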

Theory: Cookie-Based Splits

How does Google split the traffic? It uses cookies.

  • User A searches "Best CRM." They get put in the Control Group (Manual CPC).
  • User B searches "Best CRM." They get put in the Experiment Group (Target CPA).

If User A searches again tomorrow, they stay in the same Control Group. This ensures data integrity. You are testing people, not just queries.
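Google does not publish its assignment logic, but the principle is deterministic bucketing: the same identifier always maps to the same arm. A toy illustration of the idea (not Google's actual implementation):

```python
# Toy illustration of a deterministic cookie split (not Google's actual implementation).
# Hashing the cookie ID means the same user always lands in the same arm,
# so repeat searches stay consistent for the life of the experiment.
import hashlib

def assign_arm(cookie_id: str, experiment_share: float = 0.5) -> str:
    """Map a cookie ID to a stable value in [0, 1) and bucket it into an arm."""
    digest = hashlib.sha256(cookie_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x1_0000_0000
    return "experiment" if bucket < experiment_share else "control"

print(assign_arm("user-a-cookie"))  # same result every time for the same cookie
print(assign_arm("user-b-cookie"))
```

This is why the split holds at the user level rather than the query level.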

Framework: The 3-Test Rotation

You should always have one experiment running. Rotate through these three hypothesis categories:

  1. The Bidding Test: Manual CPC vs. Smart Bidding (tCPA/tROAS).
  2. The Match Type Test: Exact Match Only vs. Broad Match + Smart Bidding.
  3. The Creative Test: RSA (Responsive Search Ad) vs. RSA (New Messaging).

Execution: Setting Up an Experiment

Do not just duplicate the campaign manually. Use the tool.

  1. Select Campaign: Go to your campaign settings.
  2. Create Experiment: Click the "Experiments" tab on the left sidebar.
  3. Configure Split:
    • Name: "Test - Target CPA vs Manual"
    • Split: 50% (Recommended for speed) or 30% (Recommended for safety).
    • Dates: Set it for 30 days.
  4. Make Changes: The interface will open a "Shadow Campaign." Make your changes here (e.g., change settings to Target CPA).
  5. Launch.

Advanced Strategy: The "Clean Room" Method

If you are testing Broad Match, a standard experiment might "bleed." Broad match keywords in your Experiment arm might steal traffic from Exact match keywords in your Control arm.

The Fix: When testing Broad Match, rely on the cookie-based split and add negative keywords to keep the arms from cannibalizing each other. Honestly, though, Google's "Custom Experiments" handle this overlap better than they used to. Trust the tool, but monitor the Search Terms Report of the Control arm to make sure its volume isn't shrinking.
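One way to monitor that is to compare the Control arm's search-term impressions week over week and flag terms that are losing volume. A minimal sketch, assuming two CSV exports of the Search Terms Report (the file names and column names are assumptions; adjust them to your export):

```python
# Minimal sketch: flag Control-arm search terms losing impressions week over week.
# Assumes two CSV exports with "search_term" and "impressions" columns
# (file and column names are assumptions; adjust to match your actual export).
import csv

def load_impressions(path: str) -> dict[str, int]:
    with open(path, newline="") as f:
        return {row["search_term"]: int(row["impressions"]) for row in csv.DictReader(f)}

last_week = load_impressions("control_search_terms_week1.csv")
this_week = load_impressions("control_search_terms_week2.csv")

for term, before in sorted(last_week.items()):
    after = this_week.get(term, 0)
    if before >= 10 and after < before * 0.5:  # lost more than half its impressions
        print(f"Shrinking: '{term}' went from {before} to {after} impressions")
```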

Case Study: Manual CPC vs. Target CPA

  • Client: SaaS Company (High CPCs - $50).
  • Hypothesis: "Manual CPC is better because we can cherry-pick cheap keywords."
  • Google's Pitch: "Target CPA will find conversions you are missing."

The Test:

  • Control: Manual CPC ($50 Max Bid).
  • Experiment: Target CPA ($150 tCPA).
  • Duration: 4 weeks.

The Result:

  • Control: 20 conversions @ $140 CPA.
  • Experiment: 28 conversions @ $110 CPA.
  • Winner: Experiment.
  • Why? Smart Bidding bid down on users who were "researching" (low intent) and bid up ($80+) on users who were ready to buy. Manual CPC missed those high-intent users because of the $50 cap.
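Reconstructing the spend from those figures makes the gap concrete. A quick worked check, using only the numbers above:

```python
# Worked check using only the case-study figures above (conversions and CPA per arm).
control = {"conversions": 20, "cpa": 140}
experiment = {"conversions": 28, "cpa": 110}

for name, arm in (("Control", control), ("Experiment", experiment)):
    arm["spend"] = arm["conversions"] * arm["cpa"]
    print(f"{name}: ${arm['spend']:,} spend, {arm['conversions']} conversions @ ${arm['cpa']} CPA")

cpa_change = (experiment["cpa"] - control["cpa"]) / control["cpa"]
conv_change = (experiment["conversions"] - control["conversions"]) / control["conversions"]
print(f"CPA change: {cpa_change:+.0%}, conversion change: {conv_change:+.0%}")
# Control: $2,800 spend, 20 conversions @ $140 CPA
# Experiment: $3,080 spend, 28 conversions @ $110 CPA
# CPA change: -21%, conversion change: +40%
```

At a similar level of spend, the Experiment arm bought 40% more conversions at a 21% lower CPA.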

Pitfalls to Avoid

1. Ending Too Early

You launch on Monday. On Wednesday, the Experiment is winning by 50%. You apply it. Wrong. You likely just had a lucky day. You need Statistical Significance. Wait for the data to stabilize (usually 2 weeks minimum).

2. Changing Variables Mid-Test

If you change the ad copy in the Control group on Day 10, you have corrupted the data. Freeze the Control group. Do not touch it.

3. Testing Too Many Things

"I want to test Broad Match AND Target CPA AND new Ad Copy." If performance improves, which one caused it? Test one variable at a time. Science is disciplined.

Summary

The difference between a "Gut Feeling" decision and a "Data-Driven" one is the Experiments tool.

Your Testing Roadmap:

  1. Identify your highest-spend campaign.
  2. Formulate a hypothesis (e.g., "tCPA will lower costs").
  3. Launch a 50/50 Experiment.
  4. Do not touch it for 14 days.

Stop arguing about what works. Let the users decide.

Kiril Ivanov

About the Author

Performance marketing specialist with 6 years of experience in Google Ads, Meta Ads, and paid media strategy. Helps B2B and Ecommerce brands scale profitably through data-driven advertising.


Need this implemented for you?

Read the guide, or let our specialist team handle it while you focus on the big picture.

Get Your Free Audit