Use Cases supported for DuploCloud GCP
Topics in this section are covered in the order of typical usage. Use cases that are foundational to DuploCloud, such as Infrastructure, Tenant, and Hosts, are listed at the beginning of this section, while supporting use cases such as Logs, Metrics, and Faults and alerts appear near the end.
Infrastructure and Plan
How Infrastructures and Plans work together to create a VPC
Infrastructures are abstractions that allow you to create a Virtual Private Cloud (VPC) instance in the DuploCloud Portal. When you create an Infrastructure, a Plan is automatically generated to supply the network configuration necessary for your Infrastructure to run.
DuploCloud creates a VPC with a default subnet and a default network security group. The creation of an Infrastructure takes about ten (10) minutes.
When you create a DuploCloud Infrastructure, you create an isolated environment that maps to a Kubernetes cluster.
In DuploCloud, an Infrastructure maps one-to-one to a VPC in a specified region. It also maps to a Google Kubernetes Engine (GKE) cluster you use for container orchestration.
When creating an Infrastructure, specify the number of availability zones, the region, VPC Classless Inter-Domain Routing (CIDR), and a subnet mask. DuploCloud creates two subnets in each availability zone, one private and one public, and sets up routes and a NAT gateway.
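The subnet mask works as a prefix length carved out of the VPC CIDR. As a minimal sketch of the arithmetic, using Python's standard ipaddress module and the example values used later on this page (a 10.11.0.0/16 VPC with /22 subnets; the two-availability-zone layout shown is an assumption for illustration):

```python
# Sketch: carving /22 subnets out of a /16 VPC CIDR.
import ipaddress

vpc = ipaddress.ip_network("10.11.0.0/16")
subnets = list(vpc.subnets(new_prefix=22))  # all possible /22 blocks

# DuploCloud creates two subnets (one private, one public) per
# availability zone; with 2 AZs that would be the first four blocks:
for net in subnets[:4]:
    print(net)
```

A /16 VPC yields 64 possible /22 subnets, so there is ample room for additional availability zones or Tenant-specific subnets.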
Create a DuploCloud Infrastructure in the DuploCloud Portal:
Click Administrator -> Infrastructure from the navigation menu.
Click Add.
Define the Infrastructure by completing the fields on the Add Infrastructure form.
Click Enable GKE to enable GKE for the Infrastructure.
Click the Cluster Mode list box, and select either GKE Standard or GKE Autopilot.
Optionally, select Advanced Options to specify additional configurations (public and private subnets, for example).
Click Create. The Infrastructure is created and is listed on the Infrastructure page.
Up to one (0 or 1) GKE cluster is supported for each DuploCloud Infrastructure.
When you create the Infrastructure, DuploCloud creates the following components:
VPC with 2 subnets (private, public) in each availability zone
Required security groups
NAT Gateway
Internet Gateway
Route tables
VPC peering with the master VPC, which is initially configured in DuploCloud
Cloud providers limit the number of Infrastructures that can run in each region. If you have completed the steps to create an Infrastructure and it doesn't show a Status of Complete, try selecting a different region.
Once the Infrastructure is created, a Plan (with the same Infrastructure name) is automatically created and populated with the Infrastructure configuration. The Plan is used to create Tenants.
Navigate to Administrator -> Infrastructure and click Add to create an Infrastructure with a GKE Standard cluster.
Name: nonprod
Account: Google Cloud account
VPC CIDR: 10.11.0.0/16
Cloud: Google
Region: us-east1
Subnet CIDR: 22
Enable GKE: enabled
Cluster Mode: GKE Standard
This takes about 20 minutes. Once the Infrastructure status shows Complete, navigate to Administrator -> Plans to verify that a Plan has been created with the same name (nonprod).
You can view cluster details and download the kubeconfig file to connect to the cluster from the GKE tab of the created Infrastructure.
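As an alternative to downloading the kubeconfig from the portal, the standard gcloud workflow can also be used. This is a sketch in which the cluster name and region are assumptions based on the nonprod example above; the actual cluster name in your account may differ:

```shell
# Fetch GKE credentials into your local kubeconfig
# (cluster name "nonprod" and region "us-east1" are assumed).
gcloud container clusters get-credentials nonprod --region us-east1

# Verify connectivity to the cluster.
kubectl get nodes
```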
Navigate to Administrator -> Infrastructure and click Add to create an Infrastructure with a GKE Autopilot cluster.
Name: nonprod
Account: Google Cloud account
VPC CIDR: 10.11.0.0/16
Cloud: Google
Region: us-east1
Subnet CIDR: 22
Enable GKE: enabled
Cluster Mode: GKE Autopilot
This takes about 20 minutes. Once the Infrastructure status shows Complete, navigate to Administrator -> Plans to verify that a Plan has been created with the same name (nonprod).
Connect to the Cluster namespace using the kubectl token.
DuploCloud provides a way to connect directly to the Cluster namespace using the kubectl token.
See kubectl Setup for available options.
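Once kubectl is configured, commands are scoped to the Tenant's namespace (duploservices-TENANT_NAME, as described in the Tenants section below). A sketch, using a hypothetical Tenant named dev01:

```shell
# Work within a Tenant's Kubernetes namespace
# ("dev01" is a hypothetical Tenant name).
kubectl --namespace duploservices-dev01 get pods
kubectl --namespace duploservices-dev01 get services
```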
Upgrade the Google Kubernetes Engine (GKE) version
Google frequently updates the version of GKE based on new features that are available in the Kubernetes platform.
DuploCloud pushes GKE upgrades to the DuploCloud Portal code, but for now we request that you contact DuploCloud Support through your Slack channel or by email when upgrading.
In future releases, this upgrade will be available for customers to install.
Using Tenants in DuploCloud
In GCP, cloud features such as Resource Groups, Identity and Access Management (IAM), Security Groups, Cloud KMS, as well as Kubernetes Namespaces, are exposed in Tenants which reference their configurations.
When you create Tenants in an Infrastructure, a namespace is created in the Kubernetes cluster with the name duploservices-TENANT_NAME.
At the logical level, the Tenant is:
A Container of resources: All resources (except those corresponding to the Infrastructure) are created within the Tenant. If a Tenant is deleted, all the resources in the Tenant are terminated.
A Security Boundary: All resources within a Tenant can talk to each other. For example, a Docker container deployed in a GKE instance within the Tenant has access to Google Cloud Storage and Google Cloud databases within the same Tenant. By default, resources such as SQL database instances in another Tenant cannot be reached. Tenants can expose endpoints to each other using load balancers or explicit inter-Tenant security groups and identity management policies.
User Access Control: Self-service is the bedrock of the DuploCloud platform. To that end, users can be granted Tenant level access. For example, John and Jim are developers who can be granted access to the DEV01 tenant, Joe is an administrator who has access to all tenants, and Anna is a data scientist who has access only to the DATASCI tenant.
A Billing Unit: Because the Tenant is a container of resources, all resources in the Tenant are tagged with the Tenant's name in the cloud provider, making it easy to segregate usage by Tenant.
A mechanism for alerting: All alerts represent Faults in any resource within the Tenants.
A mechanism for logging: Each Tenant has its unique set of logs.
A mechanism for metrics: Each Tenant has its unique set of metrics.
DuploCloud customers usually create at least two Tenants for their production and non-production cloud environments (Infrastructures).
You can map Tenants in each or all of your development, testing, staging, Quality Assurance (QA), and production environments.
For example:
Production Infrastructure
Pre-production Tenant - for preparing or reviewing production code
Production Tenant - for deploying tested code
Non-production Infrastructure
Development Tenant - for writing and reviewing code
Quality Assurance Tenant - for automated testing
In larger organizations, some customers create Tenants based on application environments, such as creating a tenant for Data Science applications, another for web applications, etc.
Tenants are sometimes created to isolate a single customer workload, allowing more granular performance monitoring, scaling flexibility, or tighter security. This is referred to as a single-Tenant setup. In this case, a DuploCloud Tenant maps to an environment used exclusively by the end client.
When you have a large set of applications that different teams access, it is helpful to map Tenants to team workloads. For example, you could create Tenants for Dev-analytics, Stage-analytics, and so on.
While Infrastructure provides abstraction and isolation at the Virtual Private Cloud (VPC) and Kubernetes/cluster level, the Tenant supplies the next level of isolation, implemented in GKE by segregating Tenants using the following constructs per Tenant:
A set of security groups
An identity management role and profile
A Kubernetes Namespace, a read-only service account, and a write service account
Cloud KMS
PEM file
GKE Worker nodes or virtual machines (VMs) created within a Tenant are given a label with the Tenant name, as are the node selectors and namespaces. Consequently, even at the worker node level, two Tenants achieve complete isolation and independence, even though they may be sharing the same Kubernetes cluster through a shared Infrastructure.
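These labels can be inspected directly with kubectl. The label key and Tenant name below are illustrative; use --show-labels to find the actual keys in your cluster:

```shell
# List nodes with all their labels to find the Tenant label key.
kubectl get nodes --show-labels

# Filter nodes by a Tenant label (key and value are illustrative).
kubectl get nodes -l tenantname=duploservices-dev01
```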
Manage Tenant expiry settings in the DuploCloud Portal
In the DuploCloud Portal, configure an expiration time for a Tenant. At the set expiration time, the Tenant and associated resources are deleted.
In the DuploCloud Portal, navigate to Administrator -> Tenants. The Tenants page displays.
From the Name column, select the Tenant for which you want to configure an expiration time.
From the Actions list box, select Set Tenant Expiration. The Tenant - Set Tenant Expiration pane displays.
Select the date and time (using your local time zone) when you want the Tenant to expire.
Click Set. At the configured day and time, the Tenant and associated resources will be deleted.
The Set Tenant Expiration option is not available for Default or Compliance Tenants.
Using Hosts in DuploCloud
Once we have the Infrastructure (Networking, Kubernetes cluster, and other common configurations) and an environment (Tenant) set up, the next step is to create VMs. These could be meant for:
Compute Engine virtual machines in GCP
Worker Nodes (Docker Hosts) if built-in container orchestration is used.
Regular nodes that are not part of any container orchestration, where a user manually connects and installs applications.
In GCP, you can use GCE VMs or BYOH (Bring Your Own Host) to set up a Virtual Machine. Both options are available through the Cloud Services -> Hosts menu.
See the Services documentation for steps to create Hosts and configure Kubernetes storage options.
You can create a GCE VM by going to Cloud Services -> Hosts -> GCE VM.
Lower-level details such as IAM roles and security groups are abstracted away, derived instead from the Tenant, so only the most application-centric inputs are required to set up Hosts.
Most of these inputs are optional and some are available as list box selections, set by the administrator in the Plan (for example, Image ID, in Host Advanced Options).
There is an additional parameter labeled Fleet Type. This is applicable if the VM is to be used as a host for container orchestration by the platform. The choices are:
Linux Docker/Native: To be used for hosting Linux containers using the Built-in Container orchestration.
None: To be used for non-Container Orchestration purposes and contents inside the VM are self-managed by the user.
If a VM is used for container orchestration, ensure that the Image ID corresponds to an image suitable for that orchestration. Any image name that begins with Duplo is an image that DuploCloud generates for built-in container orchestration.
Manage costs for resources
Usage costs for resources can be viewed and managed in the DuploCloud Portal, by month or week, and by Tenant. You can also explore historical resource costs.
To view the Billing page for GCP in the DuploCloud Portal, click Administrator -> Billing.
You can view usage by:
Time
Select the Spend by Month tab and click More Details to display monthly and weekly spending options.
Tenant
Select the Spend by Tenant tab.
In Google Cloud Platform (GCP), billing data can be exported to a BigQuery dataset in only one project. However, when deploying instances of an application across multiple projects (e.g., dev, qa, stg, prod), it is necessary to replicate the billing dataset to enable billing monitoring on all DuploCloud dashboards in these projects. This documentation outlines the steps to configure automated replication of a BigQuery dataset from a source project to a destination project.
NOTE: This documentation is an extension of Export Billing to BigQuery
Two GCP projects: a source project where the original billing dataset resides, and a destination project where the dataset will be replicated.
Appropriate permissions to create datasets and data transfer jobs in BigQuery.
Google Cloud SDK installed and initialized.
Source Project: the GCP project where the original billing dataset with the billing export resides.
Destination Project: the new GCP project where duplo-master is running and where the replica dataset needs to be created.
Open the BigQuery console in the destination project.
Click on CREATE DATASET.
Enter the dataset ID, choose a data location, and set other options as needed.
Click Create dataset.
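The same dataset can also be created with the bq CLI. The project name, dataset ID, and location below are placeholders; match the values you would otherwise enter in the console:

```shell
# Create the replica dataset in the destination project
# (names and location are placeholders).
bq mk --dataset \
  --location=US \
  --description="Replicated billing dataset" \
  my-destination-project:billing_replica
```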
For the replication to work, you must grant the destination project's duplo-master GCP service account the following roles on the dataset in the source project:
BigQuery Admin
BigQuery Data Viewer
BigQuery Data Editor
BigQuery User
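One way to grant these roles is with gcloud. This sketch uses project-level bindings for simplicity (dataset-level access can instead be granted from the BigQuery console), and the project names and service-account address are placeholders:

```shell
# Placeholder address of the destination project's duplo-master SA.
SA="duplo-master@my-destination-project.iam.gserviceaccount.com"

# Bind each required BigQuery role on the source project.
for ROLE in roles/bigquery.admin roles/bigquery.dataViewer \
            roles/bigquery.dataEditor roles/bigquery.user
do
  gcloud projects add-iam-policy-binding my-source-project \
    --member="serviceAccount:${SA}" --role="${ROLE}"
done
```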
Open the BigQuery console in the destination project.
In the left-hand menu, click on Data Transfers.
Click on CREATE TRANSFER.
Select Source Type as Dataset Copy
Schedule options: Choose Start now. Set the frequency option to every 12 hours.
Under Destination Settings:
Set Dataset to the destination project's dataset.
Set Source Dataset to the source project's dataset.
Set Source Project to the source project ID.
Enable the Overwrite destination table checkbox.
Under Service Account, select the destination duplo-master service account (which has permission to access the source project dataset).
Click SAVE
In the BigQuery console of the destination project, go to the Transfers tab.
You should see your transfer job listed. You can click on it to view details and monitor its progress.
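The same transfer can be sketched with the bq CLI. The data source id cross_region_copy is BigQuery's dataset-copy transfer service; the project and dataset names here are placeholders:

```shell
# Create a dataset-copy transfer running every 12 hours
# (project and dataset names are placeholders).
bq mk --transfer_config \
  --project_id=my-destination-project \
  --data_source=cross_region_copy \
  --target_dataset=billing_replica \
  --display_name="Billing dataset copy" \
  --schedule="every 12 hours" \
  --params='{"source_dataset_id":"billing_export",
             "source_project_id":"my-source-project",
             "overwrite_destination_table":"true"}'
```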
By following these steps, you set up automated replication of a BigQuery dataset from one GCP project to another, enabling billing monitoring on all DuploCloud dashboards across multiple projects. Monitor the transfer job periodically to make sure it is running as expected.
Export GCP billing data to BigQuery using DuploCloud
By exporting your Google Cloud Platform (GCP) billing data to BigQuery, you can leverage DuploCloud's dashboard to monitor and analyze your GCP billing effectively.
To export to BigQuery you must have:
A Google Cloud Platform account with billing enabled.
Permission to access the Google Cloud Billing API and BigQuery.
Billing Account Administrator permissions
BigQuery Admin permissions
Navigate to the BigQuery Console in your Google Cloud Platform account.
In GCP, select the Project where you want to create the dataset.
Click Create Dataset.
In the Create dataset window, configure your dataset with the following parameters:
Dataset ID: Enter a unique name for your dataset.
Location Type: Select Multi-Region.
Default table expiration: Select Enable table expiration and set a default expiration time for tables in this dataset, such as 60 days. Tables will be automatically deleted after this period.
Click Create Dataset.
Once the dataset is created, it appears in the BigQuery Console under your project. Select the dataset to view details.
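The same dataset can also be created with the bq CLI. This is a sketch with placeholder names, where 5184000 seconds equals the 60-day default table expiration described above:

```shell
# Create a multi-region US billing-export dataset whose tables
# expire after 60 days (names are placeholders).
bq mk --dataset \
  --location=US \
  --default_table_expiration=5184000 \
  my-project:billing_export
```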
In GCP, open the Google Cloud Console.
Select Billing from the main menu or visit Google Cloud Billing.
Select the billing account for which you want to enable the billing export.
In the Billing Account Details page, select Billing Export from the left navigational pane.
In the Billing Export page, in the Detailed usage cost area, click Edit Settings.
In the BigQuery Export tab, configure Detailed usage cost.
Select the Project: Choose the project where you created the BigQuery dataset.
Select the Dataset: Choose the dataset you created for billing data.
Click Save.
Contact DuploCloud Support to complete additional steps to enable the billing dashboard.
The exported billing data includes detailed information about your GCP usage and charges. Regularly monitor and analyze this data to keep track of your cloud spending.