DuploCloud Terraform Provider

Using the DuploCloud-exclusive Terraform provider


DuploCloud has its own fully integrated Terraform Provider that interacts directly with our API; it is not simply a wrapper around existing Terraform modules. Learn more about our Terraform offerings in the Terraform Registry. Because it is a provider, DuploCloud does not change how Terraform works: you simply supply credentials to the DuploCloud Terraform Provider. Many common patterns and use cases are covered below to help you create your desired stack.

Wrapper Scripts

DuploCloud provides useful wrapper scripts to execute your Terraform code. These scripts templatize common Terraform tasks and align with DuploCloud's custom implementations. They cover selecting workspaces, initializing modules, finding variable files, and more.

If you have custom scripts or methods, the next sections cover the requirements for configuring them to work with DuploCloud.

CI/CD

DuploCloud engineers can help you configure your pipelines to execute your Terraform scripts. Often, it is simply a matter of running our wrapper scripts within a pipeline and supplying the workspace, module, and commands to execute. The sections below describe specific steps to implement Terraform in CI/CD pipelines on these supported platforms:

  • GitHub Actions
  • GitLab CI (coming soon)
  • Bitbucket Pipes (coming soon)

Terraform Provider

To get started with the DuploCloud Terraform Provider, set these two environment variables:

export duplo_host="https://myportal.duplocloud.net"
export duplo_token="abc123"

Use this code to configure the provider in your module.

terraform {
  required_providers {
    duplocloud = {
      source  = "duplocloud/duplocloud"
      version = "~> 0.10.21"
    }
  }
}

provider "duplocloud" {}

AWS Provider

data "duplocloud_admin_aws_credentials" "current" {}

# using JIT to inject creds
provider "aws" {
  region     = local.region
  access_key = data.duplocloud_admin_aws_credentials.current.access_key_id
  secret_key = data.duplocloud_admin_aws_credentials.current.secret_access_key
  token      = data.duplocloud_admin_aws_credentials.current.session_token
}
# uses local aws config
provider "aws" {
  region     = local.region
}
duploctl jit update_aws_config myportal --interactive --admin

Then use this profile by setting

AWS_PROFILE=myportal
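
The two provider "aws" blocks above are alternatives; a single module normally declares only one. If you do need both in one module, Terraform requires an alias on one of them. A sketch of that standard Terraform pattern (not DuploCloud-specific; the bucket name is a placeholder):

# Local AWS profile as a secondary, aliased provider
provider "aws" {
  alias  = "local"
  region = local.region
}

# A resource opts into the aliased provider explicitly
resource "aws_s3_bucket" "example" {
  bucket   = "my-example-bucket"  # placeholder name
  provider = aws.local
}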

Kubernetes Provider

data "duplocloud_eks_credentials" "current" {
  plan_id = local.infra_name
}
# using JIT to inject creds
provider "kubernetes" {
  host                   = data.duplocloud_eks_credentials.current.endpoint
  cluster_ca_certificate = data.duplocloud_eks_credentials.current.ca_certificate_data
  token                  = data.duplocloud_eks_credentials.current.token
}
# Discovers config from KUBECONFIG environment variable
provider "kubernetes" {}
duploctl jit update_kubeconfig --plan myinfra --interactive --admin
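
Because the Terraform Helm Provider accepts the same connection settings, the duplocloud_eks_credentials data source shown above can feed it as well. A sketch mirroring the Kubernetes provider block:

# Helm provider v2 nested block syntax
provider "helm" {
  kubernetes {
    host                   = data.duplocloud_eks_credentials.current.endpoint
    cluster_ca_certificate = data.duplocloud_eks_credentials.current.ca_certificate_data
    token                  = data.duplocloud_eks_credentials.current.token
  }
}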

Terraform Backends

Each DuploCloud Portal manages its own Terraform state, and each public cloud has its own standards: on AWS we use an S3 bucket for state and a DynamoDB table for locking, on GCP we use a GCS bucket, and on Azure a storage container is created.

Some of the benefits of using a managed backend for your state include:

  • Encrypted, secure, and compliant infrastructure.

  • Consistency and integrity from standard naming conventions (for example, duplo-tfstate-<account id>; see the sketch after this list).

  • Isolation: each portal gets its own state bucket, which segregates production from non-production environments.

  • Centralized management: the backend is provisioned by each unique DuploCloud Portal installation.
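
The remote-state examples later on this page reference a local.tfstate_bucket value. Under the AWS naming convention above, it could be derived like this (a sketch using the standard aws_caller_identity data source; not required by DuploCloud):

data "aws_caller_identity" "current" {}

locals {
  tfstate_bucket = "duplo-tfstate-${data.aws_caller_identity.current.account_id}"
}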

AWS S3 Bucket Backend

Within your Terraform configuration, you can use an S3 Bucket backend, as shown here:

backend "s3" {
  workspace_key_prefix = "tenants"
  key                  = "mymod.tfstate"
  encrypt              = true
}
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query "Account" --output text)"
export DUPLO_TF_BUCKET="duplo-tfstate-$AWS_ACCOUNT_ID"
terraform init \
  -backend-config=dynamodb_table=${DUPLO_TF_BUCKET}-lock \
  -backend-config=region=$AWS_DEFAULT_REGION \
  -backend-config=bucket=$DUPLO_TF_BUCKET

GCP GCS Bucket Backend

Within your Terraform configuration, use the following for a GCP GCS Bucket backend:

backend "gcs" {
  prefix = "mymod.tfstate"
}
export GCP_PROJECT_ID="my-project"
export DUPLO_TF_BUCKET="duplo-tfstate-$GCP_PROJECT_ID"
terraform init -backend-config=bucket=$DUPLO_TF_BUCKET

Azure Storage Account Backend

Within your Terraform configuration, use the following for an Azure Storage Account backend:

terraform {
  backend "azurerm" {
    resource_group_name = "duplo-terraform-secure-infra"
    container_name      = "tfstate"
    key                 = "mymod.tfstate"
  }
}

Discover and inject the Azure Storage Account into the managed state backend by running Terraform Init as follows:

export AZURE_ACCOUNT_ID="my-project"
export RESOURCE_GROUP_NAME="duplo-terraform-secure-infra"
export DUPLO_TF_BUCKET="duplotfstate$AZURE_ACCOUNT_ID"
export ARM_ACCESS_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $DUPLO_TF_BUCKET --query '[0].value' -o tsv)
terraform init -backend-config=storage_account_name=$DUPLO_TF_BUCKET

Terraform Workspaces

DuploCloud uses a straightforward method to associate a Terraform Workspace with a corresponding DuploCloud resource: a workspace can be associated with any one of the following DuploCloud constructs, which serve as templates. These templates are similar in concept to classes from which objects are instantiated in many programming languages.

Using these DuploCloud constructs provides three distinct topological levels; each can be executed at a different frequency and provides some degree of desired variance. You can build this topology into your state's backend configuration to clearly organize your state files.

Portal Workspaces

Each Portal Workspace represents the configuration of one DuploCloud Portal, which in turn manages one AWS Account, GCP Project, or Azure Subscription. Portal Workspaces are rarely executed, as they represent the highest and most global scope of your environment.

You might use them to create certificates, zones, or any other configuration that is applied only once. A single DuploCloud Portal instance contains a set of Infrastructures, each represented by its own Infrastructure Workspace.

Use the following variable when the module represents a DuploCloud portal configuration.

locals {
  portal_name = terraform.workspace
}

The following code provides an example of how an Infrastructure module can be configured to reference a portal's state.

data "terraform_remote_state" "portal" {
  backend   = "s3"
  workspace = local.portal_name
  config = {
    bucket               = local.tfstate_bucket
    region               = local.default_region
    workspace_key_prefix = "portals"
    key                  = "portal.tfstate"
  }
}
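
Outputs defined in the Portal module can then be consumed through that data source. For example, assuming the Portal module exports a hypothetical zone_id output:

locals {
  zone_id = data.terraform_remote_state.portal.outputs.zone_id
}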

Infrastructure Workspace

Each instance of a DuploCloud Infrastructure represents one Infrastructure Workspace. An Infrastructure Workspace defines the configuration for your infrastructure, such as certificates, domains, regions, or whether, for example, you'll use EKS, EC2, or ECS.

Often, the Infrastructure Workspace also contains extra services you want installed, such as Kubernetes operators, a bridge server for a cloud app, or any other application that is installed once per Infrastructure.

Use the following variable when the module represents an Infrastructure configuration.

locals {
  infra_name = terraform.workspace
}

Here is how a Tenant module can be configured to reference the Infrastructure it belongs to:

data "terraform_remote_state" "portal" {
  backend   = "s3"
  workspace = local.infra_name
  config = {
    bucket               = local.tfstate_bucket
    region               = local.default_region
    workspace_key_prefix = "infrastructures"
    key                  = "infrastructure.tfstate"
  }
}

Tenant Workspaces

Each instance of a DuploCloud Tenant represents one Tenant Workspace. The Portal and Infrastructure levels are likely to contain a single module each, as they are written for a singular purpose or job. Tenants, on the other hand, usually run a series of modules within the workspace.

Common modules within a Tenant Workspace are:

  • Tenant - The configuration for the Tenant itself.

  • Services - The cloud services, such as databases and file systems, that the application services rely on.

  • App - The configuration of the actual microservices and applications. This often breaks out into multiple groups, such as frontend-app and backend-app.

Use this variable when the module represents a Tenant configuration.

locals {
  tenant_name = terraform.workspace
}

Here is how an App or Services module can reference a Tenant's state:

data "terraform_remote_state" "portal" {
  backend   = "s3"
  workspace = local.tenant_name
  config = {
    bucket               = local.tfstate_bucket
    region               = local.default_region
    workspace_key_prefix = "tenants"
    key                  = "tenant.tfstate"
  }
}
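
An App or Services module can then pull the Tenant's ID out of that state and pass it to DuploCloud resources. A sketch, assuming the Tenant module exports a tenant_id output and using the provider's duplocloud_duplo_service resource (verify attribute names against the provider documentation):

resource "duplocloud_duplo_service" "api" {
  tenant_id      = data.terraform_remote_state.tenant.outputs.tenant_id
  name           = "api"
  docker_image   = "nginx:latest"  # placeholder image
  replicas       = 1
  agent_platform = 7               # DuploCloud platform code for EKS
}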

Terraform Configurations

When executing a module for a workspace, there is often a file of inputs for that module with the file extension .tfvars or .tfvars.json. These files can include nonsensitive inputs and be committed to version control.

To organize them, create a directory named config at the same level as the Terraform modules directory. These directories usually reside at the top level of your repository.

Here is an example directory structure for modules and workspace configurations.

|__config
| |__nonprod
| | |__portal.tfvars.json
| |__nonpro01
| | |__infra.tfvars.json
| |__dev01
| | |__tenant.tfvars.json
| | |__services.tfvars.json
| | |__app.tfvars.json
|__modules
| |__portal
| |__infra
| |__tenant
| |__services
| |__app
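
Each of these .tfvars.json files simply supplies values for variables declared in the matching module. For example, modules/app might declare inputs like these (the variable names are illustrative only):

# modules/app/variables.tf
variable "image_tag" {
  description = "Container image tag to deploy"
  type        = string
}

variable "replicas" {
  type    = number
  default = 1
}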

Here is a simple Bash script that discovers the config file and applies the App module.

export WORKSPACE=dev01 
export MODULE=app
export MODULE_PATH="modules/${MODULE}"
export MODULE_CONFIG="$(pwd)/config/${WORKSPACE}/${MODULE}.tfvars.json"
terraform -chdir=$MODULE_PATH workspace select -or-create $WORKSPACE
terraform -chdir=$MODULE_PATH apply -var-file=$MODULE_CONFIG

Use a secret manager, such as AWS Secrets Manager, and a data block to retrieve secrets instead of passing them through Terraform variables. If you must use Terraform variables, inject the value as a TF_VAR_myvar-style environment variable from the CI/CD tool that manages the execution.
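
A minimal sketch of the data-block approach, assuming the secret lives in AWS Secrets Manager under a hypothetical name:

data "aws_secretsmanager_secret_version" "db" {
  secret_id = "myapp/db-password"  # hypothetical secret name
}

# Reference data.aws_secretsmanager_secret_version.db.secret_string where the
# password is needed, rather than passing it in through a tfvars file.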

