# Audit Logging & Monitoring
Even with IAM restrictions and deletion protection in place, you need visibility into what your AI agents are doing. Cloud Audit Logs combined with alerting pipelines give you real-time awareness of any destructive attempts.
## Cloud Audit Logs Overview
GCP automatically generates audit logs for most services. There are three types relevant to AI agent monitoring:
| Log Type | What It Records | Enabled By Default | Cost |
|---|---|---|---|
| Admin Activity | API calls that modify configuration or metadata (create, delete, update) | Yes (always on, cannot disable) | Free |
| Data Access | API calls that read configuration or user data | No (must enable per service) | Charged per volume |
| System Event | Google-initiated actions (live migration, maintenance) | Yes (always on) | Free |
Every delete, create, and update API call is automatically logged at no cost. This is your primary audit trail for detecting destructive agent actions.

## Enabling Audit Logs for All Services
While Admin Activity logs are always on, you should enable Data Access logs for critical services to get complete visibility:
```bash
# Get the current IAM policy, including any existing audit config
gcloud projects get-iam-policy my-project \
  --format=json > policy.json

# Edit policy.json to add the audit config (see below)

# Apply the updated policy
gcloud projects set-iam-policy my-project policy.json
```
```json
{
  "auditConfigs": [
    {
      "service": "allServices",
      "auditLogConfigs": [
        { "logType": "ADMIN_READ" },
        { "logType": "DATA_READ" },
        { "logType": "DATA_WRITE" }
      ]
    },
    {
      "service": "compute.googleapis.com",
      "auditLogConfigs": [
        { "logType": "ADMIN_READ" },
        { "logType": "DATA_READ" },
        { "logType": "DATA_WRITE" }
      ]
    },
    {
      "service": "sqladmin.googleapis.com",
      "auditLogConfigs": [
        { "logType": "ADMIN_READ" },
        { "logType": "DATA_READ" },
        { "logType": "DATA_WRITE" }
      ]
    }
  ]
}
```
## Log Filters for Catching Delete/Destroy Operations
Use Cloud Logging filters to find all destructive operations performed by your AI agent's service account:
```
-- Filter: All delete operations by the AI agent service account
protoPayload.authenticationInfo.principalEmail="ai-agent-worker@my-project.iam.gserviceaccount.com"
protoPayload.methodName=~"delete|destroy|remove|drop"

-- Filter: Compute Engine deletions
resource.type="gce_instance"
protoPayload.methodName="v1.compute.instances.delete"

-- Filter: Cloud SQL deletions
resource.type="cloudsql_database"
protoPayload.methodName="cloudsql.instances.delete"

-- Filter: Project deletions
resource.type="project"
protoPayload.methodName="DeleteProject"

-- Filter: GCS bucket deletions
resource.type="gcs_bucket"
protoPayload.methodName="storage.buckets.delete"

-- Filter: GKE cluster deletions
resource.type="gke_cluster"
protoPayload.methodName=~"DeleteCluster"

-- Filter: Any permission denied (agent tried something blocked)
protoPayload.authenticationInfo.principalEmail="ai-agent-worker@my-project.iam.gserviceaccount.com"
protoPayload.status.code=7
```

Avoid adding a `severity>=WARNING` clause to the delete filters: successful Admin Activity entries are typically logged at NOTICE severity, so that clause would hide exactly the deletions you want to catch.
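If you use the same principal-plus-verbs filter in several scripts, it helps to build it in one place. A small sketch (`destructive_ops_filter` is a helper of our own, not a gcloud feature):

```python
# Verbs that indicate a destructive API call; extend as needed.
DESTRUCTIVE_VERBS = ("delete", "destroy", "remove", "drop")


def destructive_ops_filter(service_account: str) -> str:
    """Build a Cloud Logging filter matching destructive calls by one principal."""
    verbs = "|".join(DESTRUCTIVE_VERBS)
    return "\n".join([
        f'protoPayload.authenticationInfo.principalEmail="{service_account}"',
        f'protoPayload.methodName=~"{verbs}"',
    ])


flt = destructive_ops_filter("ai-agent-worker@my-project.iam.gserviceaccount.com")
print(flt)
```

The resulting string can be passed directly to `gcloud logging read "<filter>" --limit=20 --format=json` or to the Logs Explorer query box.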
## Log-Based Alerting
Create log-based alerting policies that notify you immediately when destructive operations are detected:
```bash
# Create a log-based metric for delete operations
gcloud logging metrics create agent-delete-attempts \
  --project=my-project \
  --description="Counts delete attempts by AI agent service accounts" \
  --log-filter='
    protoPayload.authenticationInfo.principalEmail=~"ai-agent"
    protoPayload.methodName=~"delete|destroy|remove"
  '

# Create a notification channel (email)
gcloud alpha monitoring channels create \
  --display-name="Ops Team Email" \
  --type=email \
  --channel-labels=email_address=ops-team@example.com

# Create an alerting policy based on the metric
gcloud alpha monitoring policies create \
  --display-name="AI Agent Delete Attempt Alert" \
  --condition-display-name="Agent attempted destructive operation" \
  --condition-filter='resource.type="global" AND metric.type="logging.googleapis.com/user/agent-delete-attempts"' \
  --condition-threshold-value=0 \
  --condition-threshold-comparison=COMPARISON_GT \
  --condition-threshold-duration=0s \
  --notification-channels=CHANNEL_ID \
  --combiner=OR
```
## Cloud Monitoring Alerting Policies
Beyond log-based alerts, Cloud Monitoring can alert on infrastructure-level events:
resource "google_monitoring_notification_channel" "ops_email" { display_name = "Ops Team Email" type = "email" labels = { email_address = "ops-team@example.com" } } resource "google_logging_metric" "agent_delete_metric" { name = "agent-delete-attempts" filter = <<-EOT protoPayload.authenticationInfo.principalEmail=~"ai-agent" protoPayload.methodName=~"delete|destroy|remove" EOT metric_descriptor { metric_kind = "DELTA" value_type = "INT64" } } resource "google_monitoring_alert_policy" "agent_delete_alert" { display_name = "AI Agent Destructive Operation Alert" combiner = "OR" conditions { display_name = "Agent delete attempt detected" condition_threshold { filter = "resource.type=\"global\" AND metric.type=\"logging.googleapis.com/user/${google_logging_metric.agent_delete_metric.name}\"" comparison = "COMPARISON_GT" threshold_value = 0 duration = "0s" aggregations { alignment_period = "60s" per_series_aligner = "ALIGN_RATE" } } } notification_channels = [ google_monitoring_notification_channel.ops_email.id ] alert_strategy { auto_close = "1800s" } }
## Log Sinks to BigQuery for Analysis
Export audit logs to BigQuery for long-term storage and analysis:
```bash
# Create a BigQuery dataset for audit logs
bq mk --dataset --location=US my-project:audit_logs

# Create a log sink that exports agent admin activity to BigQuery
gcloud logging sinks create agent-audit-sink \
  bigquery.googleapis.com/projects/my-project/datasets/audit_logs \
  --log-filter='
    protoPayload.authenticationInfo.principalEmail=~"ai-agent"
    logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"
  '

# Grant the sink's service account write access to BigQuery
# (get the writer identity from the sink)
gcloud logging sinks describe agent-audit-sink \
  --format="value(writerIdentity)"
# Output: serviceAccount:p123456789-123456@gcp-sa-logging.iam.gserviceaccount.com
# Grant the BigQuery dataEditor role to this SA on the dataset
```
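Once entries land in the dataset, you can aggregate delete attempts per agent with ordinary SQL. A sketch query under one assumption: BigQuery sinks name the destination table after the log (here `cloudaudit_googleapis_com_activity`), and the exported payload lives under a `protopayload_auditlog` column; inspect the dataset after the sink has received traffic and adjust the names to what you actually see.

```python
# Hypothetical analysis query for the exported audit-log table; the project,
# dataset, and table names below mirror the sink created above.
QUERY = """
SELECT
  protopayload_auditlog.authenticationInfo.principalEmail AS agent,
  protopayload_auditlog.methodName AS method,
  COUNT(*) AS attempts
FROM `my-project.audit_logs.cloudaudit_googleapis_com_activity`
WHERE LOWER(protopayload_auditlog.methodName) LIKE '%delete%'
GROUP BY agent, method
ORDER BY attempts DESC
"""
print(QUERY)
```

Run it with `bq query --use_legacy_sql=false "$QUERY"` or from the BigQuery console; a weekly run makes a cheap audit report of which agents are attempting deletions and through which APIs.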
## Pub/Sub Notifications for Real-Time Alerts
For real-time alerting, route logs through Pub/Sub:
```bash
# Create a Pub/Sub topic for agent alerts
gcloud pubsub topics create agent-destructive-alerts

# Create a log sink that routes to Pub/Sub
gcloud logging sinks create agent-alert-sink \
  pubsub.googleapis.com/projects/my-project/topics/agent-destructive-alerts \
  --log-filter='
    protoPayload.authenticationInfo.principalEmail=~"ai-agent"
    protoPayload.methodName=~"delete|destroy|remove|drop"
  '

# Create a push subscription for processing
gcloud pubsub subscriptions create agent-alert-processor \
  --topic=agent-destructive-alerts \
  --push-endpoint=https://us-central1-my-project.cloudfunctions.net/alert-to-slack
```
## Alert Pipeline: Audit Logs to Slack
Build a complete pipeline from audit logs to Slack notifications:
```python
import base64
import json
import os

import requests

SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]


def alert_to_slack(event, context):
    """Cloud Function triggered by a Pub/Sub message."""
    pubsub_message = base64.b64decode(event["data"]).decode("utf-8")
    log_entry = json.loads(pubsub_message)

    # Extract key information
    payload = log_entry.get("protoPayload", {})
    principal = payload.get("authenticationInfo", {}).get("principalEmail", "unknown")
    method = payload.get("methodName", "unknown")
    resource_name = payload.get("resourceName", "unknown")
    status = payload.get("status", {})
    timestamp = log_entry.get("timestamp", "unknown")

    # Determine whether the action was blocked or succeeded
    was_blocked = status.get("code", 0) != 0
    status_text = "BLOCKED" if was_blocked else "SUCCEEDED"
    color = "#36a64f" if was_blocked else "#ff0000"  # green if the guardrail held

    # Send the Slack notification
    slack_message = {
        "attachments": [{
            "color": color,
            "title": f"AI Agent Destructive Operation {status_text}",
            "fields": [
                {"title": "Agent", "value": principal, "short": True},
                {"title": "Operation", "value": method, "short": True},
                {"title": "Resource", "value": resource_name, "short": False},
                {"title": "Status", "value": status_text, "short": True},
                {"title": "Time", "value": timestamp, "short": True},
            ],
            "footer": "GCP Audit Logs | AI Agent Guardrails",
        }]
    }
    requests.post(SLACK_WEBHOOK_URL, json=slack_message)
```
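The decoding path of the function can be exercised locally by synthesizing the event a Pub/Sub trigger delivers: a dict whose `data` field is the base64-encoded log entry. A sketch (the sample log entry is invented for illustration; `status.code` 7 is gRPC `PERMISSION_DENIED`, and no Slack call is made here):

```python
import base64
import json

# A minimal Admin Activity entry shaped like the Pub/Sub sink delivers it.
log_entry = {
    "timestamp": "2024-01-01T00:00:00Z",
    "protoPayload": {
        "authenticationInfo": {
            "principalEmail": "ai-agent-worker@my-project.iam.gserviceaccount.com"
        },
        "methodName": "v1.compute.instances.delete",
        "resourceName": "projects/my-project/zones/us-central1-a/instances/web-1",
        "status": {"code": 7, "message": "PERMISSION_DENIED"},
    },
}
event = {"data": base64.b64encode(json.dumps(log_entry).encode()).decode()}

# Decode exactly as alert_to_slack does, without posting anywhere.
decoded = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
was_blocked = decoded["protoPayload"].get("status", {}).get("code", 0) != 0
print("BLOCKED" if was_blocked else "SUCCEEDED")  # → BLOCKED
```

Keeping a few such synthetic events around (one blocked, one succeeded, one with missing fields) makes a cheap regression test for the alert formatting before you redeploy the function.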
Putting it all together, the full pipeline looks like this:

```
Agent Action
    |
    v
Cloud Audit Logs (Admin Activity - always on, free)
    |
    v
Log Sink (filter: delete/destroy operations by agent SAs)
    |
    +---> BigQuery (long-term storage + analysis)
    |
    +---> Pub/Sub Topic (real-time routing)
              |
              v
          Cloud Function (format + send)
              |
              v
          Slack Channel (#agent-alerts)
              |
              v
          Ops Team (immediate response)
```