White Lies and Green Chips - A realistic guide to LLMs
Repo is Here: https://github.com/CyberQuixotae/gcp-devsecops-claude
WHY READ THIS
There are probably tons of other articles you could read on the internet. So why bother reading this? What sets it apart?
Nothing.
Other than trying to do it my way.
Perhaps it'll be interesting enough to get you to read it. Perhaps it will be lost in the roiling tempest of information that is the Internet. It's the dice roll I make when I decide to put my hands to the keyboard - or decide to work with the mechanical turk.
This experiment is simple: using my knowledge of Arcane Sorcery and Forbidden Magic, I communed with the Machine Spirit (Claude) to help me generate some decent code for a Google Cloud Infrastructure-as-Code (IaC) deployment using Terraform. Use this as a template to aid in your own endeavors.
Part of this was curiosity: to see how far I could take an idea with this technology, from concept to something that was at least development-ready. This wasn't a race to ship; this is a starting point for something more...
I've worked with Terraform before, which gave me helpful background knowledge so that I wasn't stumbling through this. It also ensured I understood the work the machine did, and could modify it as I felt was needed to accomplish my goals. If you are a struggling org and you need something that could potentially help you weather the storm, this could be a step in the right direction. I intend to continue working on this, while continuing to use Claude as part of my workflow.
Look at it and draw your own conclusions.
WHAT I DID WITH CLAUDE
Before we dive into the actual code itself, let me be clear about my approach:
In Claude, I generated two different types of configuration:
- A collection of .tf files meant to satisfy the needs of Small to Medium Businesses who are using Google Cloud.
- A collection of .tf files for an Enterprise entity with deeper pockets who'd be willing to pay for extra security features in Google Cloud.
I also had it generate a .md file to explain how to configure your Google Cloud environment for testing. I suggest you read through it. The commands can be run from Google Cloud Shell or the Editor. I provided Claude a basic statement of what I wanted it to do for each session's output. This was done through the web UI, nothing fancy. I could probably have looked into Claude Code for this, but I wanted to keep it simple and manual.
Note that because I am a cheap bastard, I am using the free tier of Claude, which means it will only generate a certain number of tokens in a single prompt session.
Some will argue that doing this on the free tier proves nothing, but if you're creative and take a few seconds, you can find ways to work within this limit. I kept this in mind so I could get the output I needed without going over the limit. It's like driving a car with a manual transmission - I WANT to make sure I am involved in the process.
Because it was the poor machine's first attempt, I asked it in each session to look at its sorry output, consider its life choices, and audit itself. What was cool is that when I did, it provided not only a bevy of fixes but also generated multiple versions of each file so I could review what was done. You're given many opportunities to peek into the logic it's using to try and solve your issue. The same approach worked for everything that was generated, so at least such behavior can be repeated in Claude. Your mileage may vary if you use any model besides Claude Sonnet 4. Several versions later (which Claude generated for me), the code was looking far more finished than it had been previously. Either way, Claude was able to pull recent data from multiple official sources and do all of that with minimal intervention.
Before We Descend
This section showcases snippets of the code to illustrate what is happening in the main.tf file of the Small to Medium Business module. I want to highlight this module because it is meant for smaller shops that may not have the funds for things like Cloud Armor or SCC. The repo has additional documentation on different parts of the project.
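Before reading the snippets, note that they reference variables (var.project_id, var.region, var.vpc_cidr, and so on) and locals (local.resource_prefix, local.common_labels) defined elsewhere in the repo. For orientation, here is a hedged sketch of what those definitions might look like - the names come from the snippets themselves, but the types and defaults are my assumptions, so check the repo's variables.tf for the real thing:

```hcl
# Hypothetical variables.tf / locals - illustrative only; see the repo
# for the actual definitions and defaults.
variable "project_id" { type = string }
variable "company_name" { type = string }

variable "region" {
  type    = string
  default = "us-central1"
}

variable "environment" {
  type    = string
  default = "dev"
}

variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/24"
}

variable "enable_flow_logs" {
  type    = bool
  default = true
}

locals {
  # Shared naming prefix and labels used by every resource below
  resource_prefix = "${var.company_name}-${var.environment}"
  common_labels = {
    environment = var.environment
    managed_by  = "terraform"
  }
}
```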
Establishing the VPC & Subnets
# Enable the APIs needed for total domination
resource "google_project_service" "security_apis" {
for_each = toset([
"cloudresourcemanager.googleapis.com",
"iam.googleapis.com",
"cloudasset.googleapis.com",
"securitycenter.googleapis.com",
"logging.googleapis.com",
"monitoring.googleapis.com",
"cloudkms.googleapis.com",
"secretmanager.googleapis.com",
"binaryauthorization.googleapis.com",
"containeranalysis.googleapis.com",
"cloudtrace.googleapis.com",
"clouddebugger.googleapis.com",
"cloudprofiler.googleapis.com",
"accesscontextmanager.googleapis.com",
"compute.googleapis.com",
"container.googleapis.com",
"dns.googleapis.com",
"storage-api.googleapis.com"
])
# The service being enabled for this project; required by the provider
service = each.key
disable_on_destroy = false
}
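Two resources referenced by the snippets below don't appear in this excerpt: the time_sleep.wait_for_apis resource that later blocks depend on, and the google_compute_network.secure_vpc network the subnet attaches to. A plausible sketch of both - the wait duration is my guess, not the repo's actual value:

```hcl
# API enablement is eventually consistent; pausing briefly before creating
# dependent resources avoids spurious "API not enabled" errors.
resource "time_sleep" "wait_for_apis" {
  depends_on      = [google_project_service.security_apis]
  create_duration = "60s"
}

# Custom-mode VPC: subnets are defined explicitly instead of auto-created,
# which keeps the address plan deliberate.
resource "google_compute_network" "secure_vpc" {
  name                    = "${local.resource_prefix}-secure-vpc"
  auto_create_subnetworks = false
  depends_on              = [time_sleep.wait_for_apis]
}
```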
# Network Security - VPC with security-focused configuration
resource "google_compute_subnetwork" "secure_subnet" {
name = "${local.resource_prefix}-secure-subnet"
ip_cidr_range = var.vpc_cidr
region = var.region
network = google_compute_network.secure_vpc.id
private_ip_google_access = true
dynamic "log_config" {
for_each = var.enable_flow_logs ? [1] : []
content {
aggregation_interval = "INTERVAL_10_MIN"
flow_sampling = 0.5
metadata = "INCLUDE_ALL_METADATA"
}
}
secondary_ip_range {
range_name = "pods"
ip_cidr_range = "192.168.0.0/18"
}
secondary_ip_range {
range_name = "services"
ip_cidr_range = "192.168.64.0/18"
}
}
# Cloud Router and NAT for secure outbound access
resource "google_compute_router" "secure_router" {
name = "${local.resource_prefix}-secure-router"
region = var.region
network = google_compute_network.secure_vpc.id
}
resource "google_compute_router_nat" "secure_nat" {
name = "${local.resource_prefix}-secure-nat"
router = google_compute_router.secure_router.name
region = var.region
nat_ip_allocate_option = "AUTO_ONLY"
source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"
log_config {
enable = true
filter = "ERRORS_ONLY"
}
}
Network Security - Building the Perimeter
# Firewall rules with security hardening
resource "google_compute_firewall" "deny_all_ingress" {
name = "${local.resource_prefix}-deny-all-ingress"
network = google_compute_network.secure_vpc.name
deny {
protocol = "all"
}
direction = "INGRESS"
priority = 65534
source_ranges = ["0.0.0.0/0"]
log_config {
metadata = "INCLUDE_ALL_METADATA"
}
}
resource "google_compute_firewall" "allow_internal" {
name = "${local.resource_prefix}-allow-internal"
network = google_compute_network.secure_vpc.name
allow {
protocol = "tcp"
ports = ["443", "22"]
}
direction = "INGRESS"
priority = 1000
source_ranges = [var.vpc_cidr, "192.168.0.0/18", "192.168.64.0/18"]
}
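The instance template further down carries an allow-ssh-iap tag and the comment "No external IP - use IAP for access", but no matching firewall rule appears in this excerpt. If the repo doesn't already include one, it would look roughly like this - 35.235.240.0/20 is Google's documented source range for IAP TCP forwarding:

```hcl
# Allow SSH only from Google's IAP TCP-forwarding range, targeted at
# instances tagged allow-ssh-iap. No public SSH exposure required.
resource "google_compute_firewall" "allow_ssh_iap" {
  name    = "${local.resource_prefix}-allow-ssh-iap"
  network = google_compute_network.secure_vpc.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  direction     = "INGRESS"
  priority      = 1000
  source_ranges = ["35.235.240.0/20"]
  target_tags   = ["allow-ssh-iap"]
}
```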
# Security-hardened VM template
resource "google_compute_instance_template" "secure_template" {
name_prefix = "${local.resource_prefix}-secure-"
machine_type = "e2-micro"
region = var.region
disk {
source_image = "projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts"
auto_delete = true
boot = true
disk_size_gb = 20
disk_type = "pd-ssd"
disk_encryption_key {
kms_key_self_link = google_kms_crypto_key.security_key.id
}
}
network_interface {
subnetwork = google_compute_subnetwork.secure_subnet.id
# No external IP - use IAP for access
}
metadata = {
enable-oslogin = var.enable_oslogin ? "TRUE" : "FALSE"
block-project-ssh-keys = "TRUE"
enable-oslogin-2fa = var.enable_oslogin ? "TRUE" : "FALSE"
startup-script = templatefile("${path.module}/startup-script.sh", {
project_id = var.project_id
})
}
tags = ["allow-ssh-iap", "secure-instance"]
service_account {
email = google_service_account.security_ops.email
scopes = ["https://www.googleapis.com/auth/cloud-platform"]
}
shielded_instance_config {
enable_secure_boot = true
enable_vtpm = true
enable_integrity_monitoring = true
}
labels = local.common_labels
lifecycle {
create_before_destroy = true
}
}
🔐 The Key Vault (Secrets and Identity Management)
KMS & Secrets Manager
NOTE: The key ring and key generated here are meant to be used expressly for management of the infrastructure - ONLY! Different solutions should have their own key rings with their own access policies.
# Cloud KMS for encryption
resource "google_kms_key_ring" "security_keyring" {
name = "${local.resource_prefix}-security-keyring"
location = var.region
depends_on = [time_sleep.wait_for_apis]
}
# Application encryption key
resource "google_kms_crypto_key" "security_key" {
name = "${local.resource_prefix}-security-key"
key_ring = google_kms_key_ring.security_keyring.id
rotation_period = var.kms_rotation_period
version_template {
algorithm = "GOOGLE_SYMMETRIC_ENCRYPTION"
}
labels = local.common_labels
lifecycle {
prevent_destroy = true
}
}
# Secret Manager for sensitive data
resource "google_secret_manager_secret" "database_password" {
secret_id = "${local.resource_prefix}-db-password"
replication {
auto {
customer_managed_encryption {
kms_key_name = google_kms_crypto_key.security_key.id
}
}
}
labels = local.common_labels
depends_on = [time_sleep.wait_for_apis]
}
resource "google_secret_manager_secret_version" "db_password_version" {
secret = google_secret_manager_secret.database_password.id
secret_data = random_password.db_password.result
}
# Grant access to secret
resource "google_secret_manager_secret_iam_member" "secret_access" {
secret_id = google_secret_manager_secret.database_password.secret_id
role = "roles/secretmanager.secretAccessor"
member = "serviceAccount:${google_service_account.security_ops.email}"
}
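The secret version above pulls its value from random_password.db_password, which isn't shown in this excerpt. A minimal sketch of what that resource likely looks like - the length and character policy are my assumptions:

```hcl
# Generated database password; note the plaintext lives in Terraform
# state, so protect the state file (remote backend, restricted access).
resource "random_password" "db_password" {
  length  = 24
  special = true
}
```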
IAM & Service Account Configuration
resource "google_project_iam_custom_role" "security_admin" {
role_id = "${var.company_name}SecurityAdmin"
title = "${var.company_name} Security Administrator"
description = "Custom role for security administration"
permissions = [
"securitycenter.assets.list",
"securitycenter.findings.list",
"securitycenter.sources.list",
"logging.logs.list",
"logging.entries.list",
"monitoring.alertPolicies.list",
"monitoring.notificationChannels.list",
"iam.serviceAccounts.list",
"iam.roles.list",
"cloudkms.keyRings.list",
"cloudkms.cryptoKeys.list",
"compute.instances.list",
"compute.networks.list",
"compute.firewalls.list"
]
depends_on = [time_sleep.wait_for_apis]
}
# Service Account for Security Operations
resource "google_service_account" "security_ops" {
account_id = "${local.resource_prefix}-security-ops"
display_name = "${var.company_name} Security Operations"
description = "Service account for security operations and monitoring"
depends_on = [time_sleep.wait_for_apis]
}
# Bind custom role to service account
resource "google_project_iam_member" "security_ops_binding" {
project = var.project_id
role = google_project_iam_custom_role.security_admin.id
member = "serviceAccount:${google_service_account.security_ops.email}"
}
# Bind admin users to security role
resource "google_project_iam_member" "admin_security_binding" {
for_each = toset(var.admin_users)
project = var.project_id
role = google_project_iam_custom_role.security_admin.id
member = "user:${each.value}"
}
🍱 Container Security & Analysis
Binary Authorization - The Gatekeeper
# Binary Authorization policy for container security (conditional)
resource "google_binary_authorization_policy" "security_policy" {
count = var.enable_binary_auth ? 1 : 0
admission_whitelist_patterns {
name_pattern = "gcr.io/${var.project_id}/*"
}
admission_whitelist_patterns {
name_pattern = "us-docker.pkg.dev/${var.project_id}/*"
}
default_admission_rule {
evaluation_mode = "REQUIRE_ATTESTATION"
enforcement_mode = "ENFORCED_BLOCK_AND_AUDIT_LOG"
require_attestations_by = [
google_binary_authorization_attestor.security_attestor[0].name
]
}
global_policy_evaluation_mode = "ENABLE"
depends_on = [time_sleep.wait_for_apis]
}
resource "google_binary_authorization_attestor" "security_attestor" {
count = var.enable_binary_auth ? 1 : 0
name = "${local.resource_prefix}-security-attestor"
attestation_authority_note {
note_reference = google_container_analysis_note.security_note[0].name
public_keys {
ascii_armored_pgp_public_key = file("${path.module}/attestor-public-key.pgp")
id = "${local.resource_prefix}-security-attestor-key"
}
}
depends_on = [time_sleep.wait_for_apis]
}
resource "google_container_analysis_note" "security_note" {
count = var.enable_binary_auth ? 1 : 0
name = "${local.resource_prefix}-security-note"
attestation_authority {
hint {
human_readable_name = "${var.company_name} Security Attestor"
}
}
depends_on = [time_sleep.wait_for_apis]
}
📊 Phase 4: Surveillance and Monitoring
Security Monitoring Stack (This is for the Enterprise Deployment)
# Cloud Security Command Center Integration
resource "google_scc_notification_config" "security_notifications" {
config_id = "security-notifications"
organization = var.organization_id
description = "Security findings notifications"
pubsub_topic = google_pubsub_topic.security_notifications.id
streaming_config {
filter = "severity=\"HIGH\" OR severity=\"CRITICAL\""
}
depends_on = [google_project_service.security_center]
}
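The SCC notification config publishes to google_pubsub_topic.security_notifications, which is defined elsewhere in the repo. A minimal sketch, assuming nothing fancier than a plain topic:

```hcl
# Pub/Sub topic that receives HIGH/CRITICAL SCC findings; subscribers
# (Cloud Functions, SIEM forwarders, etc.) attach downstream.
resource "google_pubsub_topic" "security_notifications" {
  name   = "${local.resource_prefix}-security-notifications"
  labels = local.common_labels
}
```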
Alerting & Notification Configuration
# Cloud Monitoring Notification Channel
resource "google_monitoring_notification_channel" "email_alerts" {
count = var.enable_monitoring ? 1 : 0
display_name = "${var.company_name} Security Alerts"
type = "email"
labels = {
email_address = var.security_email
}
depends_on = [time_sleep.wait_for_apis]
}
# Monitoring Alert Policies
resource "google_monitoring_alert_policy" "iam_policy_changes" {
count = var.enable_monitoring ? 1 : 0
display_name = "IAM Policy Changes"
combiner = "OR"
conditions {
display_name = "IAM policy binding changes"
condition_threshold {
filter = "resource.type=\"project\" AND protoPayload.methodName=\"SetIamPolicy\""
duration = "0s"
comparison = "COMPARISON_GT"
threshold_value = 0
aggregations {
alignment_period = "60s"
per_series_aligner = "ALIGN_COUNT"
}
}
}
notification_channels = [google_monitoring_notification_channel.email_alerts[0].id]
alert_strategy {
auto_close = "86400s"
}
documentation {
content = "Alert triggered when IAM policies are modified. Review changes immediately."
}
}
resource "google_monitoring_alert_policy" "failed_login_attempts" {
count = var.enable_monitoring ? 1 : 0
display_name = "Failed Login Attempts"
combiner = "OR"
conditions {
display_name = "High number of failed login attempts"
condition_threshold {
filter = "resource.type=\"gce_instance\" AND jsonPayload.event_subtype=\"login_failure\""
duration = "300s"
comparison = "COMPARISON_GT"
threshold_value = 5
aggregations {
alignment_period = "300s"
per_series_aligner = "ALIGN_COUNT"
}
}
}
notification_channels = [google_monitoring_notification_channel.email_alerts[0].id]
documentation {
content = "Alert triggered when multiple failed login attempts are detected. Possible brute force attack."
}
}
resource "google_monitoring_alert_policy" "root_activity" {
count = var.enable_monitoring ? 1 : 0
display_name = "Root/Admin Activity"
combiner = "OR"
conditions {
display_name = "Root or administrative activity detected"
condition_threshold {
filter = "protoPayload.authenticationInfo.principalEmail=\"root@${var.project_id}.iam.gserviceaccount.com\" OR protoPayload.authenticationInfo.principalEmail=~\"admin@.*\""
duration = "0s"
comparison = "COMPARISON_GT"
threshold_value = 0
aggregations {
alignment_period = "60s"
per_series_aligner = "ALIGN_COUNT"
}
}
}
notification_channels = [google_monitoring_notification_channel.email_alerts[0].id]
documentation {
content = "Alert triggered when root or admin accounts are used. Review for unauthorized access."
}
}
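One caveat worth flagging (my observation, not something the generated output addressed): alert policy condition filters operate on metric time series, so filters on raw log fields like protoPayload.methodName generally need a log-based metric as a bridge. A hedged sketch of how that might look - the metric name here is hypothetical:

```hcl
# Hypothetical log-based metric: counts SetIamPolicy audit-log entries
# so an alert policy can threshold on the resulting time series.
resource "google_logging_metric" "iam_policy_changes" {
  name   = "iam-policy-changes"
  filter = "protoPayload.methodName=\"SetIamPolicy\""

  metric_descriptor {
    metric_kind = "DELTA"
    value_type  = "INT64"
  }
}

# The alert policy's condition filter would then reference
#   metric.type="logging.googleapis.com/user/iam-policy-changes"
# instead of filtering on protoPayload fields directly.
```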
🔍 Phase 5: Compliance and Auditing
Audit Logging Configuration & Org Security Hardening Policies
# Cloud Logging Configuration
resource "google_logging_project_sink" "security_sink" {
count = var.enable_logging ? 1 : 0
name = "${local.resource_prefix}-security-logs"
destination = "storage.googleapis.com/${google_storage_bucket.security_logs[0].name}"
filter = <<EOF
(protoPayload.serviceName="cloudresourcemanager.googleapis.com" OR
protoPayload.serviceName="iam.googleapis.com" OR
protoPayload.serviceName="compute.googleapis.com" OR
protoPayload.serviceName="container.googleapis.com" OR
protoPayload.serviceName="cloudkms.googleapis.com" OR
protoPayload.serviceName="secretmanager.googleapis.com" OR
severity >= ERROR OR
protoPayload.methodName="SetIamPolicy" OR
protoPayload.methodName="CreateServiceAccount" OR
protoPayload.methodName="DeleteServiceAccount")
EOF
unique_writer_identity = true
depends_on = [google_storage_bucket.security_logs]
}
# Grant Cloud Logging permission to write to bucket
resource "google_storage_bucket_iam_member" "security_logs_writer" {
count = var.enable_logging ? 1 : 0
bucket = google_storage_bucket.security_logs[0].name
role = "roles/storage.objectCreator"
member = google_logging_project_sink.security_sink[0].writer_identity
}
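The sink's destination bucket, google_storage_bucket.security_logs, is also defined outside this excerpt. A hedged sketch - the storage class and retention period are assumptions, not the repo's actual values:

```hcl
# Log archive bucket: uniform access, CMEK encryption with the security
# key, and a lifecycle rule that expires logs after a retention window.
resource "google_storage_bucket" "security_logs" {
  count         = var.enable_logging ? 1 : 0
  name          = "${var.project_id}-${local.resource_prefix}-security-logs"
  location      = var.region
  storage_class = "NEARLINE"

  uniform_bucket_level_access = true

  encryption {
    default_kms_key_name = google_kms_crypto_key.security_key.id
  }

  lifecycle_rule {
    condition {
      age = 365 # days; pick a retention window that matches your compliance needs
    }
    action {
      type = "Delete"
    }
  }

  labels = local.common_labels
}
```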
# Organization policies for security hardening
resource "google_org_policy_policy" "require_shielded_vm" {
count = var.organization_id != "" ? 1 : 0
name = "projects/${var.project_id}/policies/compute.requireShieldedVm"
parent = "projects/${var.project_id}"
spec {
rules {
enforce = "TRUE"
}
}
depends_on = [time_sleep.wait_for_apis]
}
resource "google_org_policy_policy" "disable_serial_port" {
count = var.organization_id != "" ? 1 : 0
name = "projects/${var.project_id}/policies/compute.disableSerialPortAccess"
parent = "projects/${var.project_id}"
spec {
rules {
enforce = "TRUE"
}
}
depends_on = [time_sleep.wait_for_apis]
}
Final Notes and Reflections
As someone experienced in working with machines and code, I worry that folks will be all too eager to push unverified information to prod using this technology. I see it happening already. Many of them are just not aware of how little Large Language Models actually think - how they are more like glorified copy machines, scraping whatever data they can get their grubby little hands on from the Internet.
I don't have any illusions that such a technology is ready for prime time, and neither should you. Be skeptical of all output from a machine. And from me. Work with the tool as an augment of your skills, not as a replacement. The future may be here, but you, me, we can't afford to sit on the sidelines while others write the story for us. We do have the technology to teach and empower others, if we want to do it.
Fear aside, I also see potential for those with the knowledge and experience to utilize it. That is where I hope to prove my own cynicism wrong, but ultimately time will tell how the ramifications of any of this shake out.
If you've made it to the end of this article, you'll be happy to know that I've made this code available on my repo. Go there and feel free to look for yourself. Hopefully this sort of exercise can be beneficial to others who are perhaps trying to understand where to go with their skills and careers.
By the way, I used Claude to also help me write this article, or at least parts of it. 👺
White Lies and Green Chips - A realistic guide to LLMs© 2025. This work is openly licensed via CC BY 4.0