When you adjust a Target CPA or Target ROAS, performance usually gets worse before it gets better. This is documented, predictable, and still catches most advertisers off guard.

What’s actually happening

Google’s Smart Bidding is built on machine learning models trained on your campaign’s conversion history. When you change a bidding target, the model doesn’t just update a number — it re-enters active recalibration, re-weighting signals against your new constraints.

Google calls this the learning period. During it, the algorithm bids more conservatively while it builds confidence in the new target. You’ll typically see lower impression share, higher CPC, and fewer conversions than baseline. None of this means the target is wrong. It means the model is doing what it’s supposed to do.

For accounts with consistent conversion volume (30+ conversions per month per campaign), the learning period stabilizes in 1–2 weeks. For lower-volume campaigns, 3–4 weeks is common before performance levels off.
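As a rough sketch, that rule of thumb can be encoded as a simple lookup. The thresholds come from the paragraph above; the function name and return shape are illustrative, not an official formula:

```python
def expected_stabilization_days(monthly_conversions: int) -> tuple[int, int]:
    """Rough (min, max) days to stabilization after a bidding change.

    Rule of thumb: 30+ conversions/month per campaign stabilizes in
    roughly 1-2 weeks; lower-volume campaigns take 3-4 weeks.
    """
    return (7, 14) if monthly_conversions >= 30 else (21, 28)

print(expected_stabilization_days(45))  # high-volume campaign
print(expected_stabilization_days(12))  # low-volume campaign
```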

The mistake that makes it worse

The most common response to a post-change performance dip is to change the target again. CPA spikes on day 3. The account manager decides the target was too aggressive. They adjust it. The clock resets.

The result is a campaign that’s permanently in learning mode. The algorithm never fully settles. Performance stays volatile — not because the targets were wrong, but because they kept changing before the model could stabilize.

This pattern is widespread in manually managed accounts, particularly at agencies where multiple people have access and no single person sees the full change timeline. Someone looks at day-4 data and panics, not knowing a change was made three days earlier.

What the data actually looks like

A typical learning cycle in a stable account: days 1–3 show volume compression as bidding becomes more selective. Days 4–10 show uneven recovery — some days better, some worse than baseline. By day 14, assuming no further changes, performance usually meets or exceeds the pre-change baseline.

Evaluating a bidding change at day 3 means looking at the worst possible window. The useful evaluation point is after day 14, using 14-day rolling averages rather than daily comparisons that amplify noise.
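A minimal sketch of that evaluation in Python. The daily CPA figures are made up to mimic the cycle described above (early spike, uneven recovery); the point is comparing a smoothed day-14 value against baseline rather than reacting to a single raw day:

```python
from statistics import mean

def rolling_average(values, window=14):
    """Trailing rolling average; days before a full window use all data so far."""
    return [mean(values[max(0, i - window + 1) : i + 1]) for i in range(len(values))]

baseline_cpa = 40.0  # pre-change CPA (hypothetical)
daily_cpa = [52, 55, 58, 49, 44, 51, 42, 39, 43, 38, 41, 37, 39, 36, 38, 37]

smoothed = rolling_average(daily_cpa)
# Day 3 raw data shows the worst of the learning dip; the day-14 rolling
# average is what actually tells you whether the change is working.
print(f"day 3 raw CPA:      {daily_cpa[2]:.2f}")
print(f"day 14 rolling CPA: {smoothed[13]:.2f}")
```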

What to do instead

Make one change at a time. Set it. Wait 14 days. Use 14-day rolling averages for evaluation, not before-vs-after point comparisons on individual days.

Keep a change log that anyone with account access can see. Date, what changed, who changed it, why. This prevents the second most common mistake: a different team member intervening mid-learning because they don’t know a change is in progress.
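One lightweight way to keep such a log is an append-only CSV that everyone with account access can read. A sketch, with illustrative field names and sample values (in practice you would append to a shared file rather than an in-memory buffer):

```python
import csv
import datetime
import io

FIELDS = ["date", "campaign", "what_changed", "who", "why"]

def log_change(f, campaign, what_changed, who, why):
    """Append one row to the shared change log: date, what, who, why."""
    csv.DictWriter(f, fieldnames=FIELDS).writerow({
        "date": datetime.date.today().isoformat(),
        "campaign": campaign,
        "what_changed": what_changed,
        "who": who,
        "why": why,
    })

buf = io.StringIO()  # stands in for an open shared file
log_change(buf, "Brand - US", "tCPA $45 -> $40", "j.doe",
           "CPA consistently under target for 30 days")
print(buf.getvalue().strip())
```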

If 14 days have passed and the rolling average is still meaningfully off target, the target itself may warrant adjustment. Make one change. Wait another 14 days.

Verka tracks this automatically. When a bidding change is applied, it notes the change date and suppresses recommendations that would modify the target during the learning window. The campaign gets flagged for evaluation at day 14 — not before. The change gets a fair chance to work before anyone questions it.
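The gating logic described above can be sketched in a few lines. This is an illustration of the idea, not Verka's actual implementation; the function names and the hard-coded 14-day constant are assumptions:

```python
from datetime import date, timedelta

LEARNING_WINDOW_DAYS = 14  # evaluate no earlier than day 14

def in_learning_window(last_change: date, today: date) -> bool:
    """True while the campaign should be left alone after a bidding change."""
    return (today - last_change) < timedelta(days=LEARNING_WINDOW_DAYS)

def allowed_actions(last_change: date, today: date) -> list[str]:
    # Suppress target modifications during the window; once it elapses,
    # the campaign is flagged for evaluation.
    if in_learning_window(last_change, today):
        return ["monitor"]
    return ["monitor", "evaluate", "adjust_target"]

print(allowed_actions(date(2024, 3, 1), date(2024, 3, 4)))   # mid-learning: monitor only
print(allowed_actions(date(2024, 3, 1), date(2024, 3, 16)))  # window elapsed
```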