Terraform module customisation with GCP Parameter Manager

Terraform is a phenomenal tool for defining your infrastructure as code, promoting consistency and repeatability. But as your infrastructure grows across multiple environments — development, staging, production — you quickly hit a common challenge: how do you keep your Terraform configurations clean, reusable, and free from environment-specific hardcoding?

Terraform provides the Input Variables feature to customise modules without altering the module’s own source code. A common approach is to define these variables in environment-specific .tfvars files. Essentially, these .tfvars files serve as the configuration inputs for your Terraform workflow. However, storing these files directly in your Git repository presents significant challenges: values are static, fixed at commit time, which complicates dynamic updates and experimentation. More critically, this approach lacks granular access controls and poses a security risk if sensitive data is ever included.

This article will guide you through a robust approach: customising your Terraform deployments by storing environment-specific input variables (including references to secrets) in GCP Parameter Manager and dynamically fetching them during your Terraform workflow execution.

The challenge: Environment-specific Terraform configurations

Imagine you have a core Terraform configuration for deploying a set of Virtual Machine instances. In your development environment, you might need 1 small VM (e2-small) in asia-south1. For production, you require 3 larger VMs (e2-medium) in asia-southeast1. Each VM instance also needs a software license key (sensitive material). These infrastructure requirements may change over time.

You need a way to define your infrastructure once and then inject environment-specific values, including secrets, dynamically and securely.

GCP Parameter Manager: Your centralised configuration store

GCP Parameter Manager (an extension of GCP Secret Manager) is a service designed to store workflow configuration data as key-value pairs. It is an excellent fit for externalising Terraform variables and even referencing secrets, offering features like:

  • Centralised Storage: A single place to manage all your environment parameters.

  • Versioning: Every change to a parameter creates a new version, allowing easy rollbacks and historical tracking.

  • Access Control (IAM): Granular permissions to control who can read or write parameters.

  • Audit Logging: All access and modifications are logged for compliance and security.

  • Secret Integration: Parameter Manager can directly reference secrets stored in GCP Secret Manager, allowing you to fetch complex configurations that include sensitive values, all resolved within Parameter Manager itself.

By using Parameter Manager, we separate our Terraform code from its environment-specific configuration, enabling a true “build once, deploy anywhere” approach.

Diving into the code

In the sections below, we’ll walk through the Terraform code step by step, explaining each component as we go. You can follow along directly in your own GCP project.

You can find the complete, runnable code for this example in our GitHub repository: https://github.com/gptSanyam/customizing_terraform_workflows

Please note that setting up a CI/CD pipeline is beyond the scope of this article, but you can easily integrate the commands into a tool of your choice.

Our reusable Terraform configuration

Let’s start with a simple Terraform configuration to provision Google Compute Engine VMs. This configuration is designed to be highly parameterised, allowing us to customise the number of instances, machine type, deployment environment, and even inject a license key using input variables.

Important note: The code provided in this article is a deliberately simplified sample designed to clearly illustrate the concepts of module customisation and Parameter Manager integration. Real-world Terraform configurations for production environments are often much more complex, involving additional resources, networking, security policies, and robust error handling, much of which is skipped here for simplicity. The concepts remain applicable nonetheless.

Project Directory Structure:

.
├── environments/
│   ├── dev/
│   │   ├── main.tf           # Root module for the development environment
│   │   └── variables.tf      # Input variables for the dev root module
│   └── prod/
│       ├── main.tf           # Root module for the production environment
│       └── variables.tf      # Input variables for the prod root module
└── modules/
    └── vm_instance/
        ├── main.tf           # Defines the reusable VM instance resource
        ├── variables.tf      # Input variables for the vm_instance module
        └── outputs.tf        # Outputs from the vm_instance module

Define a module for the VM instance

  • modules/vm_instance/main.tf: This file defines the google_compute_instance module. It’s built to be generic, utilizing Terraform’s count meta-argument to provision multiple VMs based on an input variable (num_vm_instances). All customizable attributes like name, machine_type, zone, and tags are populated directly from variables. Crucially, it includes a metadata_startup_script which contains a basic shell script executed on VM boot. This script dynamically injects the software_license_key (passed as a variable) into the VM’s environment.
# Configure Compute Engine VM Instance
resource "google_compute_instance" "my_minimal_vm" {
  count = var.num_vm_instances # This will create VM instances equal to the count value with indices 0, 1, 2 ...
  name         = "my-vm-${var.deployment_env}-${count.index}" # Include environment name and instance count in the name
  machine_type = var.vm_machine_type
  zone         = var.gcp_zone
  # Add tags to define its environment
  tags = [var.deployment_env]
  # Boot disk configuration
  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
      size  = 10
      type  = "pd-standard"
    }
  }
  # Default VPC with public access
  network_interface {
    network = "default"
    access_config {}
  }
  # Add a startup script to set the license key as an environment variable
  metadata_startup_script = <<-EOF
    #!/bin/bash
    sudo apt-get update
    # --- Injecting Software License Key ---
    # This will create a file in /etc/profile.d/ that sets the environment variable
    # This variable will be available for new shell sessions after VM boot.
    # For applications, you might need to source this file or restart the application.
    if [ -n "${var.software_license_key}" ]; then
      echo "Injecting software license key..."
      echo "export MY_APP_LICENSE_KEY=\"${var.software_license_key}\"" | sudo tee /etc/profile.d/my_app_license.sh
      echo "Software license key loaded as MY_APP_LICENSE_KEY in /etc/profile.d/my_app_license.sh"
    else
      echo "No software license key provided."
    fi
  EOF
}
  • modules/vm_instance/variables.tf: This file declares all the input variables that the vm_instance module accepts. These variables act as the module’s API, allowing callers (our environment-specific root modules) to customise its behaviour without modifying the module’s internal logic. Examples include gcp_project_id, gcp_region, vm_machine_type, num_vm_instances, deployment_env, and the sensitive software_license_key.
variable "gcp_project_id" {
  description = "The GCP project ID where resources will be created."
  type        = string
}
variable "gcp_region" {
  description = "The GCP region for the provider."
  type        = string
  default     = "asia-south1"
}
variable "gcp_zone" {
  description = "The GCP zone to deploy the VM in (must be within the chosen region)."
  type        = string
  default     = "asia-south1-a"
}
variable "vm_machine_type" {
  description = "The machine type for the VM. e2-small or e2-medium are cost-optimal."
  type        = string
  default     = "e2-small"
}
variable "num_vm_instances" {
  description = "The number of VMs to be provisioned for a deployment."
  type        = number
  default     = 1
}
variable "deployment_env" {
  description = "The number of VMs to be provisioned for a deployment."
  type        = string
  default     = "dev"
}
variable "software_license_key" {
  description = "The license key for software installed on the VM."
  type        = string
  default     = "" # Provide an empty string as default if not always required
  sensitive   = true
}
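
The module also includes an outputs.tf that exposes the provisioned instance names, which the environment root modules surface in their own outputs (see dev_vm_names later). A minimal sketch of what this file might look like (the exact output shape in the repository may differ):

# modules/vm_instance/outputs.tf
# Expose the names of the provisioned VMs so callers can reference them in their own outputs.
output "instance_names" {
  description = "Names of the VM instances created by this module."
  value       = google_compute_instance.my_minimal_vm[*].name
}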

Environment-specific overrides

While our modules/vm_instance/ provides the generic blueprint for a VM, the environments/ directory is where we define the environment-specific customizations for each deployment (e.g., dev, prod). Each subdirectory within environments/ acts as a separate Terraform root module, meaning it manages its own independent state file and is deployed as a distinct infrastructure stack.

This structure allows us to deploy the same vm_instance module with different configurations, effectively “overriding” the module’s generic nature with parameters tailored for a specific environment.

Both the dev and prod environment folders contain the following files:

environments/*/main.tf: This file defines the overall deployment for a specific environment.

  • Backend Configuration: It specifies the GCS backend for Terraform state, ensuring that each environment’s state is stored separately (e.g., sg-tfstate-dev for development, sg-tfstate-prod for production). This isolation is crucial to prevent accidental modifications across environments.

  • Provider Configuration: It configures the Google Cloud provider, explicitly setting the project and region based on variables defined for that particular environment.

  • Module Call & Variable Overrides: Most importantly, this file calls our reusable vm_instance module. It then passes environment-specific values to the module’s variables (e.g., gcp_project_id, vm_machine_type, num_vm_instances, deployment_env, and software_license_key). These values are sourced from the terraform.tfvars.json file generated for that environment, effectively overriding the generic definitions in the vm_instance module.

# environments/dev/main.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
  # Reference the GCS backend for state storage specific to dev
  backend "gcs" {
    bucket = "sg-tfstate-dev"
    prefix = "terraform/state"
  }
}
# Configure the Google Cloud provider for the Dev project and region
provider "google" {
  project = var.gcp_project_id # This will be set by terraform.tfvars.json
  region  = var.gcp_region     # This will be set by terraform.tfvars.json
}
# Call the vm_instance module
module "dev_vm_deployment" {
  source = "../../modules/vm_instance" # Relative path to your module
  # Pass variables to the module, whose values will come from terraform.tfvars.json
  gcp_project_id       = var.gcp_project_id
  gcp_region           = var.gcp_region
  gcp_zone             = var.gcp_zone
  vm_machine_type      = var.vm_machine_type
  num_vm_instances     = var.num_vm_instances
  deployment_env       = var.deployment_env
  software_license_key = var.software_license_key
}
# Optionally, define outputs for the dev environment's deployment
output "dev_vm_names" {
  description = "Names of VMs provisioned in the dev environment."
  value       = module.dev_vm_deployment.instance_names
}
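
For completeness, the prod root module follows the same pattern; only the backend bucket and a few names differ, while all values still come from the rendered terraform.tfvars.json. A sketch assuming the same module interface (the module and output names here are illustrative):

# environments/prod/main.tf
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
  # Separate GCS backend so the prod state never mixes with dev
  backend "gcs" {
    bucket = "sg-tfstate-prod"
    prefix = "terraform/state"
  }
}
provider "google" {
  project = var.gcp_project_id
  region  = var.gcp_region
}
module "prod_vm_deployment" {
  source = "../../modules/vm_instance"
  gcp_project_id       = var.gcp_project_id
  gcp_region           = var.gcp_region
  gcp_zone             = var.gcp_zone
  vm_machine_type      = var.vm_machine_type
  num_vm_instances     = var.num_vm_instances
  deployment_env       = var.deployment_env
  software_license_key = var.software_license_key
}
output "prod_vm_names" {
  description = "Names of VMs provisioned in the prod environment."
  value       = module.prod_vm_deployment.instance_names
}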

environments/*/variables.tf: These files simply declare the input variables that the respective root module (e.g., dev or prod) expects. Their values are not hardcoded here; instead, they are supplied externally, primarily from the terraform.tfvars.json file that we’ll generate from GCP Parameter Manager for each environment. This ensures that the environment’s configuration remains dynamic and external to the Terraform code.

variable "gcp_project_id" {
  description = "The GCP project ID for the DEV environment."
  type        = string
}
variable "gcp_region" {
  description = "The GCP region for the DEV environment."
  type        = string
}
variable "gcp_zone" {
  description = "The GCP zone for the DEV environment."
  type        = string
}
variable "vm_machine_type" {
  description = "The machine type for VMs in the DEV environment."
  type        = string
}
variable "num_vm_instances" {
  description = "The number of VMs to provision in the DEV environment."
  type        = number
}
variable "deployment_env" {
  description = "The environment name for this deployment (should be 'dev')."
  type        = string
}
variable "software_license_key" {
  description = "The license key for software installed on VMs in the DEV environment."
  type        = string
  default     = "" # Default can be empty for dev, or a specific dev license.
  sensitive   = true
}

Storing your environment configurations (Including Secret References) in Parameter Manager

For each environment, we’ll create a JSON Parameter that stores the environment configuration along with a reference to the license secret.

We define a JSON Parameter for each environment. Each Parameter contains a Parameter Version that stores the environment configuration, i.e. the values of the variables used in the Terraform configuration defined earlier.

Note that you can easily reference GCP Secrets from a Parameter: both the dev and prod Parameter Versions contain a reference to the Secret that stores the license key, and this value is provided to the Terraform workflow only during execution, as sketched below.
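
As an illustrative sketch for the prod environment (the secret name prod-license-key and the zone asia-southeast1-a are placeholder assumptions, and the exact gcloud flag names should be verified against the parametermanager reference), the Parameter and its version could be created like this:

# Create the JSON-formatted Parameter used later by the render step
gcloud parametermanager parameters create prod-tfvars \
  --location=global \
  --parameter-format=JSON \
  --project=gcp-demo-464514

# Create a version holding the tfvars payload; software_license_key is a reference
# to a Secret Manager secret rather than the plain-text value
gcloud parametermanager parameters versions create v1 \
  --parameter=prod-tfvars \
  --location=global \
  --project=gcp-demo-464514 \
  --payload-data='{
    "gcp_project_id": "gcp-demo-464514",
    "gcp_region": "asia-southeast1",
    "gcp_zone": "asia-southeast1-a",
    "vm_machine_type": "e2-medium",
    "num_vm_instances": 3,
    "deployment_env": "prod",
    "software_license_key": "__REF__(\"//secretmanager.googleapis.com/projects/gcp-demo-464514/secrets/prod-license-key/versions/latest\")"
  }'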

Grant Parameter Manager access to the Secret

The render operation on a Parameter Version returns its contents with each secret reference replaced by the actual secret value. For this to work, the Principal Identity of the Parameter must have the Secret Manager Secret Accessor role (roles/secretmanager.secretAccessor) on the license secret.

We will grant the Secret Manager Secret Accessor role to the Principal Identity of both Parameters, as sketched below for prod.
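
A hedged sketch of granting that access for the prod Parameter (the policyMember field name and the secret name prod-license-key are assumptions; verify them against the describe output of your own Parameter):

# Look up the Parameter's principal identity (assumed field name; check your describe output)
PRINCIPAL=$(gcloud parametermanager parameters describe prod-tfvars \
  --location=global \
  --project=gcp-demo-464514 \
  --format="value(policyMember.iamPolicyUidPrincipal)")

# Grant it read access on the license secret (secret name is a placeholder)
gcloud secrets add-iam-policy-binding prod-license-key \
  --project=gcp-demo-464514 \
  --member="$PRINCIPAL" \
  --role="roles/secretmanager.secretAccessor"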

Now the setup is ready and we can perform the deployment in individual environments.

Deployment with dynamic customisation: Fetching, rendering, and applying parameters

Your environments/dev/ and environments/prod/ folders will serve as the entry points for each deployment. They call the central vm_instance module and set up their own state management.

This is where Parameter Manager’s secret rendering capability truly shines. The render API on the Parameter Version automatically resolves each __REF__ entry and includes the actual secret value in its renderedPayload.

Deploy to dev or prod environment

  1. Navigate to the respective environment directory
cd environments/dev
cd environments/prod
  2. Fetch the dev or prod parameters from Parameter Manager (which include the resolved license key) and save them as terraform.tfvars.json. The example below renders the prod parameter:
gcloud parametermanager parameters versions render v1 \
  --parameter=prod-tfvars \
  --location=global \
  --project gcp-demo-464514 \
  --format=json | \
  jq -r '.renderedPayload' | \
  base64 --decode > terraform.tfvars.json

Crucial note on security: When the gcloud render command completes, the terraform.tfvars.json file will contain the plain-text value of your software_license_key. For production environments, ensure that:

  • The file is generated in a secure, ephemeral environment (such as a CI/CD pipeline).

  • The file is deleted immediately after terraform apply finishes (see the cleanup sketch after the apply step below).

  • Access to the build environment is tightly restricted.

  3. Initialise, plan, and apply Terraform
terraform init
terraform plan
terraform apply
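
As noted in the security callout above, the rendered file should not outlive the deployment. A minimal cleanup step (adapt it to your CI/CD tool of choice) could be:

# Remove the rendered tfvars file so the plain-text license key does not linger on disk
rm -f terraform.tfvars.json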

As a result of the prod deployment, we can see that 3 VM instances were provisioned in the asia-southeast1 region. The same steps can be followed for the dev environment.

Conclusion

By leveraging GCP Parameter Manager to store your Terraform configurations, you can achieve a highly secure, flexible, dynamic, and automated deployment process. This pattern not only embodies the “build once, deploy anywhere” principle but also ensures your infrastructure scales efficiently across environments while maintaining strict control over sensitive data.
