Blueprint: Build a Google Ads Budget Optimizer Microservice with Event-Driven Architecture

passive
2026-02-10
10 min read

Blueprint to build an event-driven, serverless microservice that uses Google Ads' total campaign budgets to auto-correct spend patterns.

Hook: Stop chasing daily spend — build an event-driven microservice that reacts in time, not after the damage is done

You’re a developer or cloud engineer tired of manually nudging Google Ads daily budgets during promotions, watching unpredictable bills, and replaying the same “pause/raise/restore” routine every sale. In 2026, total campaign budgets are widely available for Search and Shopping, and the right event-driven microservice can automate the heavy lifting: detect spend drift, compute a corrective total budget, and apply it through the Google Ads API — all with CI/CD, low ops, and observability baked in.

Executive summary (what you'll get)

This article gives a production-ready blueprint for an event-driven microservice that:

  • Ingests spend telemetry (real-time or near-real-time)
  • Detects overspend / underspend patterns against a plan
  • Computes and applies a new total campaign budget using the new total campaign budgets capability in the Google Ads API
  • Runs serverless or containerized with a recommended CI/CD pipeline
  • Includes security, observability, and cost estimates for 2026 cloud economics

Why this matters in 2026

Late 2025 and early 2026 pushed two parallel trends:

  • Google Ads made total campaign budgets widely available for Search and Shopping, letting you set one budget for an entire flight instead of babysitting daily caps.
  • Serverless and event-driven tooling (Cloud Run, Pub/Sub, managed CI/CD) matured to the point where a small team can run a reliable controller with minimal ops.

Combining the two yields predictable spend with minimal hands-on work — an ideal match for technology teams who want recurring revenue or cost control without 24/7 ops.

High-level architecture

Here’s the architecture we’ll implement. It’s intentionally modular so you can run it fully serverless or containerized in any cloud.

  1. Telemetry sources: Google Ads Reporting API / streaming reports, conversion APIs, billing export to BigQuery (or your data warehouse), and optional front-line webhooks from your marketing stack.
  2. Event bus: Pub/Sub or Kafka for ingesting spend events and scheduling triggers.
  3. Decision service: The microservice (Cloud Run / Fargate) that computes budget corrections and issues Google Ads API calls to set total campaign budgets.
  4. Audit & storage: BigQuery or TimescaleDB for historical spend, decisions and A/B testing results.
  5. CI/CD: GitHub Actions or GitLab pipelines for tests, canaries, and deploys.
  6. Observability: Cloud Logging / OpenTelemetry traces, metrics, and alerting (budget violation alerts).

Event flow (step-by-step)

  1. Ingest: Your data pipeline exports hourly spend per campaign to the event bus.
  2. Trigger: A rule (threshold, time window, or scheduled check) creates an event if spend deviates from the planned curve.
  3. Decision: The microservice consumes the event, loads historical data from BigQuery, runs a rule or ML model, and decides on a new total campaign budget.
  4. Act: The microservice calls the Google Ads API mutate endpoint to update the campaign’s total budget for the period.
  5. Audit: The change is logged; metrics are recorded; alerts fire on failures or repeated flips.
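The first step of that flow, consuming a Pub/Sub push delivery, means unwrapping a base64-encoded envelope before any decision logic runs. A minimal decoding helper might look like this (a sketch; the payload fields mirror the ones used later in this article):

```javascript
// Decode a Pub/Sub push envelope into a spend event.
// Throws on malformed input so the HTTP handler can reject (and Pub/Sub can retry).
function decodeSpendEvent(envelope) {
  const message = envelope && envelope.message
  if (!message || !message.messageId || !message.data) {
    throw new Error('malformed Pub/Sub envelope')
  }
  const payload = JSON.parse(Buffer.from(message.data, 'base64').toString('utf8'))
  for (const field of ['campaignId', 'currentSpendMicros', 'expectedSpendMicros']) {
    if (payload[field] === undefined) throw new Error(`missing field: ${field}`)
  }
  return { eventId: message.messageId, ...payload }
}

// Example: the shape Cloud Run receives from a Pub/Sub push subscription.
const envelope = {
  message: {
    messageId: 'evt-123',
    data: Buffer.from(JSON.stringify({
      campaignId: '42',
      currentSpendMicros: 5000000,
      expectedSpendMicros: 4000000
    })).toString('base64')
  }
}
const event = decodeSpendEvent(envelope)
console.log(event.eventId, event.campaignId) // evt-123 42
```

Failing fast here keeps malformed events out of the decision path and lets Pub/Sub's retry and dead-letter machinery handle them.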

Design patterns and event types

Design your microservice for idempotency, eventual consistency, and safe rollback. Use these event categories:

  • anomaly.spend — rapid overspend detected for a campaign
  • lag.spend — underspend detected relative to the scheduled pacing curve
  • schedule.check — periodic check to re-baseline totals
  • manual.override — human operator overrides for exceptions
"Event-driven architectures let you react to spend patterns in near-real-time while retaining human-in-the-loop control for edge cases."
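One way to keep these categories explicit in code is a small dispatcher that routes each event type to its own handler (a sketch; handler bodies and action names are illustrative):

```javascript
// Map each event category to a handler; unknown types fail loudly
// so a new event category can't be silently dropped.
const handlers = {
  'anomaly.spend': (e) => ({ action: 'correct_down', campaignId: e.campaignId }),
  'lag.spend': (e) => ({ action: 'boost', campaignId: e.campaignId }),
  'schedule.check': (e) => ({ action: 'rebaseline', campaignId: e.campaignId }),
  'manual.override': (e) => ({ action: 'apply_override', campaignId: e.campaignId })
}

function dispatch(event) {
  const handler = handlers[event.type]
  if (!handler) throw new Error(`unhandled event type: ${event.type}`)
  return handler(event)
}

console.log(dispatch({ type: 'anomaly.spend', campaignId: '42' }))
// { action: 'correct_down', campaignId: '42' }
```

Keeping the map in one place also gives you a single point to attach per-category metrics and rate limits.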

Decisioning: rules first, ML next

Start simple. Rules are transparent and safe. Move to ML when you have 30–90 days of event + outcome data.

Rule example (pseudocode)

if (current_spend_hourly > expected_spend_hourly * 1.25) {
  // Overspend: compute smaller total to keep end-date budget on track
  new_total = max(current_total * 0.9, min_allowed_total)
  reason = 'overspend_correction'
} else if (current_spend_hourly < expected_spend_hourly * 0.5 && days_left > 3) {
  // Underutilization: increase total modestly to allow delivery
  new_total = current_total * 1.1
  reason = 'underdelivery_boost'
}

Log decisions with the event id, campaign id, current_total, new_total, and rationale.
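A small builder can make that audit record uniform across all decision paths (a sketch; field names are illustrative, so adapt them to your BigQuery schema):

```javascript
// Build an immutable audit record for every decision, whether or not
// a mutate call was issued. Stored rows enable rollback and A/B analysis.
function buildDecisionRecord({ eventId, campaignId, currentTotalMicros, newTotalMicros, reason }) {
  return {
    eventId,
    campaignId,
    currentTotalMicros,
    newTotalMicros,
    deltaMicros: newTotalMicros - currentTotalMicros,
    reason,
    decidedAt: new Date().toISOString()
  }
}

const record = buildDecisionRecord({
  eventId: 'evt-123',
  campaignId: '42',
  currentTotalMicros: 100000000,
  newTotalMicros: 90000000,
  reason: 'overspend_correction'
})
console.log(record.deltaMicros) // -10000000
```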

Code-level blueprint (Node.js + Cloud Run)

The example below is intentionally concise. It shows an HTTP handler for Pub/Sub push events and a safe call to the Google Ads API. Replace placeholders with your client library calls and credentials.

Assumptions

  • Google Ads credentials are provided via a service account / OAuth2 flow stored securely (Secret Manager).
  • Events come as Pub/Sub push to a Cloud Run HTTP endpoint.
  • We keep operations idempotent using an eventId-based dedupe table in BigQuery or Redis.

index.js (Express handler)

const express = require('express')
const axios = require('axios')
const { BigQuery } = require('@google-cloud/bigquery')

const app = express()
app.use(express.json())

const bigquery = new BigQuery()
const GOOGLE_ADS_API_URL = process.env.GOOGLE_ADS_API_URL // e.g. https://googleads.googleapis.com
const CUSTOMER_ID = process.env.GOOGLE_ADS_CUSTOMER_ID
const MIN_ALLOWED_MICROS = Number(process.env.MIN_ALLOWED_MICROS || 0) // floor for downward corrections

// idempotency check
async function alreadyProcessed(eventId) {
  const query = `SELECT eventId FROM \`project.dataset.processed_events\` WHERE eventId=@eventId LIMIT 1`
  const [rows] = await bigquery.query({ query, params: { eventId } })
  return rows.length > 0
}

async function markProcessed(eventId, meta) {
  const dataset = bigquery.dataset('dataset')
  const table = dataset.table('processed_events')
  await table.insert({ eventId, ts: new Date().toISOString(), meta })
}

// Fetch an OAuth2 access token via the service-account flow.
// On GCP, google-auth-library picks up the runtime service account automatically.
const { GoogleAuth } = require('google-auth-library')
const auth = new GoogleAuth({ scopes: ['https://www.googleapis.com/auth/adwords'] })
async function getAccessToken() {
  const client = await auth.getClient()
  const { token } = await client.getAccessToken()
  return token
}

async function applyTotalBudget(campaignResourceName, newTotalMicros) {
  // Minimal safe HTTP wrapper. In production use official client libraries.
  const mutateUrl = `${GOOGLE_ADS_API_URL}/v{API_VERSION}/customers/${CUSTOMER_ID}/campaignBudgets:mutate`

  const operation = {
    update: {
      resourceName: campaignResourceName,
      totalAmountMicros: String(newTotalMicros) // int64 fields are strings in REST JSON; verify the field name for your API version
    },
    updateMask: 'totalAmountMicros'
  }

  // The Ads API also requires a developer-token header alongside the OAuth2 token
  const token = await getAccessToken()
  const resp = await axios.post(mutateUrl, { operations: [operation] }, {
    headers: { Authorization: `Bearer ${token}`, 'developer-token': process.env.GOOGLE_ADS_DEVELOPER_TOKEN }
  })
  return resp.data
}

app.post('/pubsub', async (req, res) => {
  try {
    const envelope = req.body
    const eventId = envelope.message && envelope.message.messageId
    if (!eventId) return res.status(400).send('missing messageId')

    if (await alreadyProcessed(eventId)) {
      return res.status(200).send('already processed')
    }

    const payload = JSON.parse(Buffer.from(envelope.message.data, 'base64').toString())
    // payload contains campaignId, currentSpendMicros, expectedSpendMicros, daysLeft, currentTotalMicros

    const { campaignId, currentSpendMicros, expectedSpendMicros, daysLeft, currentTotalMicros } = payload

    // simple rule
    let newTotal = currentTotalMicros
    let reason = 'no_action'
    if (currentSpendMicros > expectedSpendMicros * 1.25) {
      newTotal = Math.max(Math.floor(currentTotalMicros * 0.9), MIN_ALLOWED_MICROS)
      reason = 'overspend_correction'
    } else if (currentSpendMicros < expectedSpendMicros * 0.5 && daysLeft > 3) {
      newTotal = Math.floor(currentTotalMicros * 1.1)
      reason = 'underdelivery_boost'
    }

    if (reason !== 'no_action') {
      // assumes campaignId in the payload is the campaign budget ID; otherwise carry the budget resource name in the event
      const campaignResourceName = `customers/${CUSTOMER_ID}/campaignBudgets/${campaignId}`
      const result = await applyTotalBudget(campaignResourceName, newTotal)
      await markProcessed(eventId, { payload, result, reason })
    } else {
      await markProcessed(eventId, { payload, reason })
    }

    res.status(200).send('ok')
  } catch (err) {
    console.error(err)
    res.status(500).send('error')
  }
})

module.exports = app

Notes:

  • Replace the simplistic REST call with the official Google Ads client for production; the method names and fields vary by API version.
  • Use Secret Manager for OAuth2 tokens and rotate credentials regularly.

CI/CD and deployment

Automate building, testing and deploying via GitHub Actions. Key steps:

  1. Unit tests and linting
  2. Static security scans (Snyk/Trivy)
  3. Build container image and push to registry
  4. Run integration smoke tests against a staging Ads sandbox
  5. Deploy to Cloud Run (or Kubernetes) with a canary rollout

Sample GitHub Actions (trimmed)

name: CI/CD
on:
  push:
    branches: [ main ]

jobs:
  build-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: '18'
      - run: npm ci && npm test
      - name: Build container
        run: |
          docker build -t gcr.io/${{ secrets.GCP_PROJECT }}/ads-optimizer:${{ github.sha }} .
          docker push gcr.io/${{ secrets.GCP_PROJECT }}/ads-optimizer:${{ github.sha }}
      - name: Deploy to Cloud Run
        uses: google-github-actions/deploy-cloudrun@v1
        with:
          service: ads-optimizer
          image: gcr.io/${{ secrets.GCP_PROJECT }}/ads-optimizer:${{ github.sha }}
        env:
          GCP_PROJECT: ${{ secrets.GCP_PROJECT }}

Security & IAM (production must-haves)

  • Least privilege: The Google Ads API uses a single OAuth scope (https://www.googleapis.com/auth/adwords), so enforce least privilege at the account level: use a dedicated service account linked only to the customer accounts this service manages.
  • Secrets: Use Secret Manager or Vault for OAuth tokens and store them in CI secrets during deploy.
  • Network: Use private egress or VPC connectors if your data warehouse is private.
  • Audit: Log all mutate requests and responses; store signed diffs to support audits and rollbacks.

Observability and SLOs

Track these key metrics:

  • Decision latency: time from event ingestion to mutate call (target < 3s for real-time triggers)
  • API success rate: percent of successful Google Ads mutate responses
  • Budget drift: cumulative deviation from planned spend (target < 5%)
  • Cost of microservice: keep daily runtime below $1–$3 for small deployments; scale can be $10s–$100s based on throughput
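Budget drift, for instance, can be computed straight from the audit table as cumulative actual vs. planned spend; a minimal sketch:

```javascript
// Cumulative budget drift: |actual - planned| / planned over a window.
// Returns a fraction (0.05 = 5% drift) to compare against the SLO target.
function budgetDrift(hourlySamples) {
  const actual = hourlySamples.reduce((sum, h) => sum + h.actualMicros, 0)
  const planned = hourlySamples.reduce((sum, h) => sum + h.plannedMicros, 0)
  if (planned === 0) return 0
  return Math.abs(actual - planned) / planned
}

const spendWindow = [
  { actualMicros: 1100000, plannedMicros: 1000000 },
  { actualMicros: 950000, plannedMicros: 1000000 }
]
console.log(budgetDrift(spendWindow)) // 0.025
```

Emit this as a gauge metric per campaign so alerting can fire when it crosses the 5% target.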

Cost estimate (2026 ballpark)

Costs vary by cloud and scale. A low-throughput deployment serving hundreds of events/day typically runs:

  • Cloud Run (1 vCPU, 512Mi): ~$20–$40 / month if idle most of the time (depending on invocations)
  • Pub/Sub: $0.40 per million messages plus network egress
  • BigQuery (storage + queries): $5–$50 / month based on retention and query frequency
  • Google Ads API calls: Ads quotas are separate; ensure you request enough quota (no direct billing from API).

These figures favor serverless for small-to-medium traffic and containerized orchestration for high-volume enterprise work.

Operational safeguards & best practices

  • Guardrails: Always apply a soft cap on daily budget changes (e.g., max 20% delta per decision) and require human approval above larger thresholds.
  • Backoff & retries: Implement exponential backoff for HTTP 429/5xx and record failures for manual review.
  • Canary updates: Roll new logic against a small percentage of campaigns to verify ROAS impact before full rollout.
  • Audit trail: Store previous budget and rationale; allow automated rollback on anomalous downstream metrics (e.g., sudden ROAS drop).
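The max-delta guardrail can be a pure function applied to every proposed budget before the mutate call; a sketch assuming a 20% cap:

```javascript
// Clamp a proposed total budget to at most maxDelta (0.2 = 20%) away from
// the current total. Changes beyond the cap are truncated, and the caller
// can route truncated decisions to a human-approval queue.
function clampBudgetChange(currentMicros, proposedMicros, maxDelta = 0.2) {
  const upper = Math.floor(currentMicros * (1 + maxDelta))
  const lower = Math.ceil(currentMicros * (1 - maxDelta))
  const clamped = Math.min(upper, Math.max(lower, proposedMicros))
  return { clamped, truncated: clamped !== proposedMicros }
}

console.log(clampBudgetChange(100000000, 150000000)) // { clamped: 120000000, truncated: true }
console.log(clampBudgetChange(100000000, 95000000))  // { clamped: 95000000, truncated: false }
```

Because the guardrail is pure, it is trivial to unit test and to tighten per campaign tier without touching the decision logic.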

Testing strategy

  1. Unit tests for decision logic with edge cases
  2. Integration tests against Google Ads sandbox (or test accounts) to validate mutate calls and response handling
  3. Load tests to simulate spikes and verify retries and rate-limits
  4. End-to-end tests with canary campaigns in production traffic

Real-world example & KPI results (hypothetical)

Company: mid-market retailer running week-long promotions. Baseline problem: manual daily budget adjustments caused variable delivery and missed max-promo reach.

After implementing the event-driven optimizer with safe rules and a 10% canary rollout, they saw:

  • 16% increase in total conversions for promo periods (no budget overshoot).
  • Reduction of manual budget tweaks by 95%.
  • Spend variance vs. plan reduced from ±18% to ±4%.

These numbers mirror reports from early 2026 ad tests using total campaign budgets where automations freed marketers to focus on creatives and audience strategy.

Future enhancements (2026+)

  • Integrate privacy-preserving multi-touch attribution for better ROAS-informed decisions.
  • Move to causal models that estimate marginal conversions per dollar to optimize total budgets by expected uplift.
  • Support multi-platform orchestration (Google + Meta + Microsoft) for cross-platform budget balancing.

Common pitfalls and how to avoid them

  • Overreacting to noise — use smoothing windows and daily aggregates for decisions.
  • Missing quotas — request Ads API quota increases early and implement client-side throttling.
  • Security gaps — never embed credentials in code; use managed secrets and rotate.
  • Too-aggressive automation — always include human approval flows for large budget changes.
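On the first pitfall, running decisions against a moving average rather than raw hourly points is usually enough smoothing; a minimal sketch:

```javascript
// Simple moving average over the last `window` hourly spend samples.
// Deciding on the smoothed series dilutes single-hour spikes instead of
// letting one noisy data point trigger a budget change.
function movingAverage(samples, window = 6) {
  const tail = samples.slice(-window)
  return tail.reduce((sum, v) => sum + v, 0) / tail.length
}

// A single spike (900) is averaged across the window rather than acting alone.
const hourly = [100, 110, 95, 105, 900, 100]
console.log(movingAverage(hourly, 6)) // 235
```

Tune the window to your event cadence; longer windows react more slowly but flip budgets less often.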

Checklist before you go live

  1. Telemetry pipeline verified (hourly spend accuracy ±2%).
  2. Idempotency and dedupe in place (eventId stored).
  3. Guardrails and human override implemented.
  4. CI/CD deploys to staging and runs integration tests against Ads sandbox.
  5. Monitoring and alerting created for API errors, decision failures, and budget drift.

Final takeaways

In 2026, the new total campaign budgets feature removes the need for micromanaging daily budgets — but only if you attach a reliable, auditable controller that can react to real spend patterns. An event-driven microservice minimizes ops, enables predictable spend, and frees your marketing and engineering teams to iterate on higher-value work.

Call to action

Ready to deploy an optimizer for your account? Clone the starter repo (includes Cloud Run + GitHub Actions templates, rules engine, and BigQuery audit schema), run the integration tests against the Ads sandbox, and start with a 5% canary. Want the repo link, templates, and a one-hour architecture review tailored to your account? Reply with your cloud provider and scale, and we'll send a customized plan.
