Use FluxCD with DuploCloud to synchronize your K8s clusters with code stored in Git
DuploCloud integrates with FluxCD, a continuous delivery (CD) solution for Kubernetes that automates deploying and managing applications in a GitOps-based workflow. In simple terms, FluxCD synchronizes your Kubernetes cluster state with configuration and application files stored in Git repositories.
FluxCD is built around the GitOps approach, where Git is the single source of truth for application and infrastructure configurations. Any changes to the configuration files in Git automatically trigger updates to the Kubernetes environment, ensuring the system is always in sync with the desired state. FluxCD is ideal for automated deployments, declarative management, Helm chart handling, and multi-environment workflows.
Use FluxCD to synchronize your K8s clusters with Git for:
Automatically deploying application updates when a new version is pushed to Git.
Promoting a new version of an app from staging to production using GitOps.
Managing Kubernetes cluster configurations across multiple teams or projects.
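As an illustration, a minimal Flux v2 configuration for keeping a cluster in sync with a Git repository might look like the following sketch (the repository URL, path, and resource names are placeholders, not DuploCloud-specific values):

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app                # placeholder name
  namespace: flux-system
spec:
  interval: 1m                # how often Flux polls Git for changes
  url: https://github.com/example/my-app-config   # placeholder repository
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m               # how often Flux reconciles cluster state
  sourceRef:
    kind: GitRepository
    name: my-app
  path: ./deploy              # placeholder path to manifests in the repo
  prune: true                 # remove cluster resources deleted from Git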
Flux v1 was the original FluxCD version. It offered basic GitOps capabilities with some limitations in scalability and flexibility. Flux v2 introduces a modular architecture, enhanced security, and a richer feature set, collectively known as the GitOps Toolkit. It addresses the shortcomings of Flux v1 by providing more granular control over the reconciliation process and extending support for more complex workflows. For more about the differences between Flux v1 and Flux v2, see the FluxCD documentation.
DuploCloud supports migration from Flux v1 to Flux v2. Contact DuploCloud Support with any questions or for assistance with the migration process.
Additional features of the DuploCloud Portal
These additional features provide flexibility and accessibility across all supported clouds:
For Kubernetes Power Users: Information about the service accounts created by DuploCloud
When a DuploCloud Tenant is created with Kubernetes access, DuploCloud creates three service accounts that are mapped to the Tenant's unique namespace.
default - The default account serves as a template for creating other accounts. This account cannot be altered by the end user. There are no role bindings for the default service account.
duploservices-<tenant>-readonly-user - This service account is assigned to the duploservices-<tenant>-readonly-role role binding. It provides read-only access to resources in the Tenant.
duploservices-<tenant>-edit-user - This service account is assigned to the duploservices-<tenant>-edit-role role binding. It provides edit access to resources in the Tenant. This is the service account assigned to a new Pod, unless you explicitly override it.
Service accounts can be applied to Pods using the DuploCloud Service's Other Pod Configuration field when you Add a Service.
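For example, to run a Service's Pods under the read-only account instead of the default edit account, you could add a line like the following to the Other Pod Configuration field (the Tenant name mytenant is a placeholder):

serviceAccountName: duploservices-mytenant-readonly-user   # placeholder Tenant name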
View the full DuploCloud AWS Product Demo video.
NARRATOR:
Let's look at a product demo. In this sample scenario, we have decided to deploy a few applications to AWS. The architects have developed this infrastructure blueprint for one of the applications we need to manage. The diagram shows a dedicated VPC and 2 availability zones with public and private subnets in each.
We need to provision Docker containers on AWS EKS; MySQL (Aurora) and S3 are the data stores. The application is exposed to the Internet by a load balancer protected by a web application firewall.
Now that we've gone through the desired architecture, let's get to work by using the DuploCloud portal to deploy the application. You could also use the DuploCloud Terraform provider or API, but we'll focus on the UI for this demo.
We start by logging in as an admin and creating the network infrastructure. Behind the scenes, the platform auto-generates the required AWS configuration and completes the setup. This includes the VPC, subnets, and EKS cluster. We then move to deploy the application infrastructure.
We start by creating a logical workspace known as a tenant where admins and the appropriate developers can access the infrastructure we just created.
Next, we're going to switch to the Dev105 Tenant workspace. We'll create various cloud services starting with a virtual machine. Notice that the user provides a high-level specification only. Behind the scenes, the platform will auto-generate the security group, IAM role, instance profile, and automatically add it to the EKS cluster as a worker node.
From here we'll create an S3 bucket. The platform auto-generates the IAM access policy for the host to access the S3 bucket. Detailed security controls such as encryption, public access block, and logging, among others, are applied to meet the compliance requirements.
We can provision a SQL database. The software understands that in this case, it needs to generate security group-based policies rather than IAM rules. It will also set up backups, encryption, and other best practices behind the scenes. With the network and data infrastructure in place, we next move to deploy the first Docker-based microservice. Here again, the user provides application-centric specifications while the software auto-generates Kubernetes and AWS configurations that include the deployment of stateful sets, node ports, ingress controls, and other such infrastructure details. The right set of access policies, security groups, and ACLs are applied.
It's important to note that the platform performs all these actions with no presumptions about the application topology it is being asked to deploy. Many other functions are built into the platform as well.
For example, you can see a quick view of the container logs, or access the shell of the running container itself.
Just-in-time access to kubectl is provided for more granular control and debugging across the platform. You can easily click Console to access the various cloud services inside of apps. Notice that the platform implicitly provides just-in-time access to users for the AWS console with the right permission set. There are no access permissions to manage manually.
After we complete all the infrastructure and the application has been deployed, we can do a sanity test.
Our application works!
While we saw an app that used an RDS, S3 Bucket, and Kubernetes, the platform supports the vast majority of Cloud Services, such as Kafka, OpenSearch, Managed Airflow, AWS Batch, CloudFront, and so on.
The platform has been performing other necessary tasks for the infrastructure. For example, several diagnostics functions are implemented by default. Here we're going to look at a metrics dashboard that has been set up using Prometheus, Grafana, and CloudWatch with no further user input required.
Leveraging OpenSearch, logs are collected and segregated per tenant and per service. Alerts for various infrastructure resources can easily be configured as well. The platform is using tools like CloudWatch and Prometheus for this, allowing the user to simply specify the filters for each alert. Similarly, there's a billing dashboard to track costs across cloud services or across applications.
Next, businesses in highly regulated industries need a security and incident management platform that comes built-in. All configuration changes in the cloud infrastructure are detected and the controls are applied. Compliance dashboards are readily available for auditors. Notice that all of this is set up without the user having to lift a finger.
There's an audit trail of all actions in an application-specific context. Here we are showing the audit trail for the application workspace.
CI/CD is a layer on top of DuploCloud; you can use platforms such as GitHub Actions to build pipelines that leverage DuploCloud underneath as a CD system.
Everything that we've gone through in the UI can also be done via Terraform. Here is the script for the current setup deploying a fully secure and compliant infrastructure with a fraction of the code that would have otherwise been written and maintained.
Now that you’ve seen this demo, you have a better idea of how DuploCloud can provision new environments and streamline developer self-service. Connect with DuploCloud to better understand our comprehensive customer experience in supporting your existing applications and workloads.
To learn more, visit DuploCloud.com
View the whole DevOps Deep Dive - Abstracting Cloud Complexity video.
NARRATOR:
Welcome to the first in a series of deep dives into the DuploCloud Dev and SecOps developer self-service platform.
DuploCloud deep dive videos explore how DuploCloud speeds time-to-market when creating and deploying cloud applications with a practical use-case approach.
Each DuploCloud deep dive answers five questions in 10 minutes or less about a particular feature or capability of the DuploCloud platform.
We address the problem, phrasing it in terms of a use case; DuploCloud's solution to that problem through abstracted complexity and a simplified UI; and the benefits of that solution to both you and your customers.
Finally, we explore DuploCloud's competitive edge over similar products and detail tangible savings you can achieve for a flat cost each year, including white glove support.
Let's get started.
Apart from the ever-increasing costs of maintaining an automated and scalable Dev and SecOps environment, there are other factors you must consider when creating a cloud-management strategy and selecting a developer-friendly self-service platform to drive it.
All DevOps workloads require dynamic and complex compute, storage, and networking configurations.
These configs must be updated, upgraded, and monitored constantly to ensure maximum uptime and minimal cost.
If you're watching this video, you probably already know the problem.
How do you create reliable, guardrail-equipped developer sandboxes that maximize your developers' valuable time while manually managing hundreds of components and configurations?
Managing SecOps is a full-time job by itself.
When you combine the complexity of implementing literally hundreds of compliance controls with the maintenance demanded by most security products, the amount of data you must manually analyze and maintain multiplies exponentially.
Finally, the cost of hiring dedicated DevOps and SecOps engineers has never been higher, and expertise in this area continues to be scarce.
For example, have you ever tried to hire an app developer with extensive DevOps experience?
For an estimate of the savings you can achieve, take our cost calculator for a spin. The results may surprise you.
How does DuploCloud drive down the cost?
Central to DuploCloud’s value proposition is the way DuploCloud replaces much of the complexity behind common DevOps tasks with a templatized approach, creating and maintaining many components for you with minimal inputs.
For example, creating a complete cloud infrastructure with hundreds of components such as VPC, subnets, route tables, security groups, and IAM roles, in addition to Kubernetes cluster enablement, can take just minutes with only a few clicks using DuploCloud.
At the same time, DuploCloud gives you the freedom to create a platform that is as simplified or customizable as you require.
We don't drive you toward a prescribed solution.
We reduce the time needed to implement the platform you require.
To better understand how DuploCloud is able to abstract cloud complexity, let's explore DuploCloud's architecture, including the core concepts of Infrastructure and Plan.
Here's DuploCloud solution architect Andy Buotte for a closer look.
ANDY BUOTTE:
The user creates Infrastructures, and at the same time in the backend, when an Infrastructure is created, a Plan is created.
So there's a one-to-one relationship between an Infrastructure and a Plan.
Within a customer's DuploCloud, they can have n number of Infrastructures, and that means there will be n number of Plans.
The Infrastructure is a DuploCloud construct, but on the backend, at the actual infrastructure layer within AWS, Azure, or GCP, the DuploCloud Infrastructure is going to map to many different resources within their cloud accounts.
A Plan is a construct within DuploCloud that includes the settings and configurations that apply to the mapped Infrastructure.
Some of those settings are lower-level details that will be applied when a Tenant is created.
In a Plan, you can specify what SSL certificates are going to be used. That setting is going to apply to all Tenants that are within that Infrastructure.
Consider the relationship between an Infrastructure and a Tenant. In this prod Infrastructure, we have a couple of different Tenants: data science, web app, and an ETL workflow. Each of those is a Tenant that lives within the prod Infrastructure.
So the relationship between an Infrastructure and a Tenant is a one-to-many relationship. Typically, in a production environment, a customer may have a Tenant per application, or it could be like a Tenant per use-case, or like a Tenant per team.
There are many different ways that the end user, the customer, can decide to utilize that boundary, and Infrastructure is one layer of security boundary. So anything that's deployed into the non-prod Infrastructure will not have access to anything that is deployed into the prod Infrastructure, and vice versa.
These are essentially two air-gapped networks so that there's no access between the two different environments.
The Tenant is another boundary.
So the data science containers that live within this Tenant would not have any way to talk to the containers that are within the web app Tenants and vice versa.
So it's another security boundary layer.
It's pretty common for our customers, in a development or non-prod Infrastructure, to create a Tenant per developer. The primary use case or reasoning for that is that DuploCloud is very good at developer self-service. So, by giving a developer their own Tenant, they are free to create infrastructure as needed, so that they're not blocked by anyone else.
They don't need to file a ticket in a DevOps queue specifying that they need an S3 bucket or an RDS instance to accomplish their software development task. They should be able to log into DuploCloud, utilize their own Tenant, and create the infrastructure that they need, and immediately start work on their software development tickets and not be blocked by any other team.
Again, the relationship between the infrastructure of the Tenant is one-to-many, and it is very common for customers to have at least two different infrastructures to separate production workloads from all other non-production workloads.
NARRATOR:
Let's summarize the benefits of what we've heard so far.
Creating self-service developer sandboxes in today's dynamic DevOps environment requires a low-code, no-code approach. For this self-service to be effective, however, guardrails must exist.
One such guardrail that DuploCloud provides is the DuploCloud Infrastructure: a virtual network connected to your native cloud with a fundamental set of functionalities exposed.
Further security and flexibility are provided by DuploCloud Tenants: isolated workspaces that you define according to criteria such as application area or customer for prod infrastructures, or developer or tester for non-prod infrastructures, to use just a few examples.
You can define as many Infrastructures or Tenants as you need.
Additional infrastructure customization is possible by modifying DuploCloud Plans: sets of configurable templates.
Remember that each DuploCloud Infrastructure has one Plan, but you can have many Tenants in an Infrastructure.
Finally, DuploCloud gives you the freedom to implement the cloud solution you require while greatly reducing your costs in both developer and maintenance cycles.
Access your native cloud provider with just-in-time access within the DuploCloud portal in a fraction of the time it takes you to log in and out of the native portal and navigate through various screens.
Harness the power of Kubernetes objects and Terraform scripts with very little hard coding thanks to DuploCloud’s templatized Kubernetes objects and DuploCloud’s Terraform provider.
How is DuploCloud’s solution more comprehensive and yet even more affordable than many competitors' offerings?
What many people don't understand about DuploCloud is that we are DevOps, SecOps, and professional services in one product for a flat rate per year.
Create comprehensive infrastructures, including Elastic Kubernetes Service, in less than half an hour.
Get Services, Hosts, and load balancers up and running in only a matter of minutes.
Create Tenants to isolate workspaces for prod and test with only a few clicks.
Rest easy knowing that we ensure compliance with numerous industry standards such as SOC 2, PCI, and HIPAA.
We complete compliance questionnaires for you and support you during the audit process if needed.
White-glove support is truly white glove at DuploCloud.
We not only ensure your initial setup and customization is successful, we also offer all cloud migration services at no additional cost.
Speaking of which, what hard savings can you achieve with DuploCloud?
To name just a few, faster time-to-market for your core business apps, on-demand support from our staff of dedicated Dev and SecOps specialists, and maybe most importantly, freeing your dev staff to do what they do best: develop.
But don't take our word for it.
Here's one of our many customers, Brad Fino from Lily AI, to talk about the power of developer self-service using DuploCloud.
BRAD FINO:
Cost controls, standardization across your infrastructure.
DuploCloud is the missing link between all of those things and giving developers the access and ability to manage and maintain their infrastructure.
Without people coming to my team and saying, Hey, Brad, can you spin up a database for us? Hey, Brad, can you go deploy this container for us? No. Go do it yourself.
You have DuploCloud.
NARRATOR:
Thanks for watching this deep dive with DuploCloud. For more information, go to duplocloud.com and we look forward to seeing you back here soon.
View the full video.
NARRATOR:
Now, let's take a look at a product demo where we deploy a microservices-based infrastructure in Azure.
Start by considering a high-level application diagram with a dedicated VNet and subnets each with one network security group, located in the East US region.
Docker containers are to be provisioned on Azure's AKS with Azure MySQL and storage as the data stores. Secrets are stored in Azure's Key Vault and the app is exposed via App Gateway.
We show integrated logging, monitoring, and alerting.
We'll integrate with Azure Defender for monitoring of the security posture and compliance reporting.
Finally, we showcase the CI/CD pipeline using Azure DevOps.
Now that we've gone through the desired architecture, let's get to work.
We start by logging in to create the base infrastructure.
That includes VNet, AKS cluster, Log Analytics workspace, among other things.
With the simplified high-level specification, the platform will generate the low-level details in Azure.
We then move to deploy the application.
We start by creating a logical workspace or an environment that we call a Tenant.
We are creating a Tenant called Invoice in the finance Infrastructure that we just created. Behind the scenes, for each Tenant, unique managed identities, resource groups, a Kubernetes namespace, an application security group, and other constructs are generated by the platform.
Next, we can switch to the Invoice application.
Here we're going to create an Azure agent pool for Azure Kubernetes.
Notice that the user only needs to provide high-level specifications. Behind the scenes, the platform will configure encryption, link to managed identity, Log Analytics workspace connection, and other Azure best practices.
We can then move to create a storage account. Again, the user specifies a high-level configuration while the platform generates the details and security best practices.
You can then do things like creating file shares and accessing shared keys.
We then move on to create a SQL database. It will also set up backups, encryption, VNet endpoints, and other recommended practices behind the scenes.
Next, we move to deploy the first Docker-based microservice.
Here, again, the user provides an application-centric specification while the platform auto-generates Kubernetes and Azure configurations that include deployments or stateful sets, node ports, Ingress controls, and other such infrastructure details.
We then expose the application via load balancer. Note that you only have to provide the high-level specifications and do not have to worry about the low-level implementation details.
Many other functions are built into the platform. For example, you can take a quick look at the container tail logs.
Or, get into the Container shell.
And you can get access to kubectl all secured and locked down to this Tenant’s namespace.
As we have completed the deployment, let's do a sanity test.
Our application works.
Important diagnostic functions are built into the platform.
For example, we can look at the metrics of various resources, which are set up automatically by orchestrating Grafana, Prometheus, Azure Monitoring, and Log Analytics workspaces.
Logging is implemented via Elasticsearch and Kibana. Here you can see the logs automatically collected and separated by Tenant and by Service.
Note that all of this is done without any manual effort and comes out of the box.
Next, businesses in highly regulated Industries need to implement an exhaustive list of compliance controls.
DuploCloud comes with a SIEM, and you can see that all these controls have automatically been met.
There's an audit trail in an application-specific context.
Here we are showing all the changes in the Invoice Tenant.
CI/CD is a layer on top of DuploCloud, and any CI/CD system can be leveraged; scripts would invoke DuploCloud API calls for deployments.
Finally, everything we saw via the UI can also be done via DuploCloud’s Terraform provider, with a fraction of the code or expertise that would have otherwise been required.
For further information or more demos visit the DuploCloud website at duplocloud.com.
Learn more about Cloud Infrastructure and DevOps Automation using our video tutorials.
Use SCPs with DuploCloud to add guardrails to AWS organizational units
If you use AWS organizations, you likely use SCPs as guardrails to restrict specific user actions for each organization. You typically set up policy statements in an SCP to add security to restrict specific user actions.
Create a separate organizational unit for your DuploCloud accounts and use the Full Access SCP with no restrictions as a base JSON template, as shown below, and then add policy statements such as those linked in the following list:
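The unrestricted base template referenced above is AWS's standard FullAWSAccess policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    }
  ]
}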
Deploying Helm Charts in DuploCloud
Helm Charts are packages of pre-configured Kubernetes resources that help you define, install, and upgrade Kubernetes applications. You can integrate Helm Charts with DuploCloud to deploy applications onto the Kubernetes clusters you've created in DuploCloud.
Ensure you have a Kubernetes cluster set up in DuploCloud.
Install Helm on your local machine or a machine with access to your Kubernetes cluster.
Identify the Helm charts you want to use for deploying your applications or services. You can use community-maintained charts from public repositories like Helm Hub, or you can create your own custom charts tailored to your specific needs.
Modify the values.yaml file or create custom templates as necessary. This might involve configuring resource limits, environment variables, or other settings specific to your deployment.
As one example, a values file can be modified to deploy using the API key DUPLO_API_SECRET, on the DUPLO_CLUSTER cluster, in the awsprod environment.
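As a rough sketch only, such overrides might look like the following; every key name here is hypothetical and depends on the chart being deployed:

# Hypothetical keys; consult the chart's own values.yaml for the real ones
duploApiSecret: DUPLO_API_SECRET   # API key placeholder
cluster: DUPLO_CLUSTER             # target cluster placeholder
environment: awsprod               # target environment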
You can also modify Helm Chart values using the node selector; this is the most common method. For example, you can deploy a chart into the duploservices-mytenant Tenant using the node selector.
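A sketch of that override, assuming DuploCloud's standard tenantname node label:

nodeSelector:
  tenantname: duploservices-mytenant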
To specify which hosts the chart should run on, use the node selector with allocation tags.
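A sketch, again assuming DuploCloud's standard node labels and using the example allocation tag from later on this page:

nodeSelector:
  tenantname: duploservices-mytenant
  allocationtags: highmemory-highcpu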
If you're using Helm charts from external repositories, add the repositories to your Helm configuration using the helm repo add command:
helm repo add REPOSITORY_NAME REPOSITORY_URL
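For example, to add the public Bitnami repository:

helm repo add bitnami https://charts.bitnami.com/bitnami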
Update your Helm repositories to ensure you have the latest versions of the charts available locally using the helm repo update command:
helm repo update
Use the helm install command to deploy the Helm charts onto your Kubernetes cluster. Specify the release name, chart name, and any additional configuration values as needed.
helm install [RELEASE_NAME] [CHART] [FLAGS]
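For example, a hypothetical release that deploys the Bitnami NGINX chart into a Tenant namespace:

helm install my-release bitnami/nginx --namespace duploservices-mytenant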
Monitor the status of your deployments in DuploCloud by navigating to the Services page. Services deployed via Helm Chart are displayed in the Services list in read-only mode. Although you can't make changes to a Helm Service in DuploCloud, you can view the Service details to verify that it has deployed successfully and is running as expected.
Give your Helm Chart a duploservices-<tenant> namespace to deploy into. If you create your own namespace, you will have to manage those resources outside of DuploCloud. It may be helpful to create a single-purpose Tenant to use with your chart, or use allocation tags to assign different node types to your chart's Pods.
The node selector is the most common method for customizing Helm Chart values. For an example of using the node selector to modify chart values, see the syntax above under Step 3.
If the chart you are deploying interacts with Kubernetes resources (Pods, jobs, etc.), ServiceAccounts may be needed. Generally, the chart's default ServiceAccount configuration is sufficient.
When creating a Load Balancer in DuploCloud, ensure that the Helm Chart is not creating a Kubernetes Service with the same name as the deployment. This can cause a naming conflict since DuploCloud needs to manage the Kubernetes Service for the Load Balancer to work.
DuploCloud Helm Chart deployments are displayed in the list of Services, in read-only mode. You can attach a Load Balancer to the Service via the Service details page Load Balancer tab, but you cannot update the Service (e.g., change the replica count or image) from within the DuploCloud UI. Those changes must be done through Helm.
View the full video.
NARRATOR:
Now let's take a look at a product demo.
We have decided to deploy a few applications on Google Cloud.
We start by considering a high-level application diagram.
Here, we have a dedicated VPC with two subnets, each with multiple firewall rules, located in the US West region: one subnet for internal load balancers and another for backend apps.
The application is exposed to the internet via an HTTPS load balancer.
The apps are packaged as Docker containers to be provisioned on Google GKE.
Now that we've gone through the desired architecture, let's get to work.
We start by logging in to create the base infrastructure that includes the VPC, subnets, all networking configurations, firewall rules, and GKE cluster.
All of this is represented by the DuploCloud construct called Infrastructure.
With the simplified, high-level specification, the platform will generate the low-level details in Google Cloud.
We then move to deploy the back-end application.
The application is called Invoice and we start by creating a logical workspace or Tenant by that name in the finance Infrastructure, which we just created.
Behind the scenes, managed identities, resource groups, and other details are auto generated.
Next, we can switch to the Invoice application.
We will provision cloud storage by creating a bucket. We will now create a cloud function using the storage bucket.
We will next create a PubSub topic and create a cloud scheduler with the target set to that PubSub topic.
The cloud scheduler could also trigger an HTTP endpoint.
With network, compute, and databases in place, we next move to deploy the first Docker-based microservice.
Here, again, the user provides an application-centric specification while DuploCloud auto-generates configurations that include deployments or stateful sets, network ports, Ingress controls, and other such infrastructure details.
We then expose the application via a load balancer.
Let's do a quick sanity test.
Our application works.
Logging is implemented via Elasticsearch and Kibana.
Here you can see the logs automatically collected and separated by Tenant and by Service.
Note that all of this is done without any manual effort and comes out of the box.
Next, businesses in highly regulated industries need to implement an exhaustive list of compliance controls.
DuploCloud comes with a SIEM, and you can see that all these controls have automatically been met.
There's an audit trail in an application-specific context.
Here, we are showing all the changes in the Invoice Tenant.
CI/CD is a layer on top of DuploCloud, and any CI/CD system can be leveraged; scripts would invoke DuploCloud API calls for deployments.
Finally, everything we saw via the UI can also be done via DuploCloud’s Terraform provider with a fraction of the code or expertise that would have otherwise been required.
For further information or more demos, visit the DuploCloud website at duplocloud.com.
There's an audit trail of all actions in an application-specific context.
Here we are showing the audit trail for the Invoice workspace.
Finally, there are various options to configure CI/CD for your daily code releases.
Everything that we've gone through in the UI today can also be done via Terraform.
Here is the script for the current setup, deploying a fully secure and compliant infrastructure underneath, with a fraction of the code that would have otherwise had to be written and maintained.
Autonomous DevOps are the future of cloud automation.
To learn more, visit duplocloud.com.
Yes. Refer to the documentation to create an Ingress, and note the generated Ingress class. See the documentation for information about how to enable Ingress and add a class for the Ingress.
For EKS, refer to the document for information on customizing annotations to your specific needs.
View the whole DuploCloud Uses Infrastructure-as-Code to Stitch Together DevOps Lifecycle video.
NARRATOR:
Building and operating infrastructure in the public cloud is challenging.
It revolves around the five pillars of operational excellence, security, reliability, performance, and cost optimization.
This is a broad taxonomy of cloud operations.
It starts from building a network infrastructure that includes VPCs and hybrid connectivity.
Next comes application infrastructure. This involves virtual machines, data stores, along with the right security policies and backup configuration.
On top of the app infra is app provisioning where we see Kubernetes, serverless, and spark clusters.
There's logging and monitoring. CI/CD makes this a repeatable process.
Compliance controls are needed across the board.
Today, certified cloud experts automate these functions by manually writing code that spans thousands of lines.
We see cargo culting, with every engineer bringing their own style, favorite programming language, and tools.
With expanding infrastructure, the size of the codebase and team grows.
The bigger the system, the harder to make changes, decreasing productivity.
One has to understand what code to change.
Once updated, peers need to do code review, then tests are performed, followed by a rollout.
A simple user request to open a port for an application requires subject matter experts and still takes days.
Lack of cloud subject matter experts is the single biggest blocker to cloud adoption.
Hiring a DevOps workforce is hard and expensive. Today, on average, to build and operate a 50-VM cloud infrastructure, companies require two DevSecOps engineers.
But did you know that inside Amazon and Microsoft, they manage millions of virtual machines with only 1000-odd people?
To understand how, imagine for a moment being assisted by a DevOps bot, which had the ability to auto-generate this infrastructure configuration based on a rules engine that combines user requirements, subject matter expertise, and principles of a well-architected framework.
Cloud operations might very well involve thousands of configurations that overwhelm humans, but in the age of AI, bots can auto-generate these.
At DuploCloud, we have built such a robot.
The bot installs in a virtual machine in your cloud account.
Users can easily interface with it through a browser or API. There, they configure their application needs in simple declarative terms, and the bot auto-generates the underlying infrastructure policies.
Whether doing a first-time deployment or ongoing operations, it manages a complete application and infrastructure life cycle.
Today, the bots are operational in over 25 enterprises, managing over 3000 virtual machines, and 250 applications.
Users do 5000 self-service deployments a week.
For a product demo, go to duplocloud.com/demo.
How to delegate subdomains to another Cloud Provider
When managing a domain such as example.com, you may want to delegate control of specific subdomains to different cloud platforms. This guide walks you through the steps to delegate subdomains from a DNS provider to hosted zones in AWS and GCP.
Let's assume you have example.com registered with a DNS provider such as GoDaddy or Cloudflare, and you want to delegate:
aws.example.com to your AWS account
gcp.example.com to your GCP account
For DNS management, AWS offers Amazon Route 53. Here are the steps you need to take:
Create a hosted zone for aws.example.com in Route 53.
Note down the nameserver (NS) records that Route 53 assigns to your new hosted zone.
To delegate DNS control, update the DNS settings of your primary domain example.com at the DNS provider.
Go to where example.com is hosted and add NS records for aws.example.com, pointing to the nameservers you noted from AWS Route 53.
This configuration delegates the management of aws.example.com to AWS.
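For illustration, the added records would look something like the following; the nameserver values are placeholders, so use the ones Route 53 actually assigned:

aws.example.com.    NS    ns-123.awsdns-45.com.
aws.example.com.    NS    ns-678.awsdns-90.net.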
Create a managed zone for gcp.example.com in Google Cloud DNS.
Note down the nameserver (NS) records assigned to the managed zone.
Update the DNS settings for your primary domain example.com at your DNS provider to delegate to GCP:
Go to where example.com is hosted and add NS records for gcp.example.com, pointing to the nameservers noted from Google Cloud DNS.
This configures the delegation of gcp.example.com to GCP.
Import an external or On-Prem cluster to be managed by DuploCloud
DuploCloud allows an external or an On-Premises Kubernetes (K8s) Cluster to be imported as an Infrastructure that the DuploCloud Platform manages.
The Kubernetes Cluster that needs to be imported should be ready to use and accessible via kubectl.
Save the YAML code below as a file named service-account-admin-setup.yaml.
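The original YAML is not included on this page; what follows is a minimal sketch of such a setup, assuming the service account name duplo-admin (the next step reads the token secret duplo-admin-token):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: duplo-admin                 # assumed account name
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: duplo-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin               # grants Administrator permissions
subjects:
- kind: ServiceAccount
  name: duplo-admin
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: duplo-admin-token           # token secret read in the next step
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: duplo-admin
type: kubernetes.io/service-account-token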
Run kubectl apply -f service-account-admin-setup.yaml, creating a new service account with Administrator permissions.
Run kubectl -n kube-system describe secret duplo-admin-token to fetch the token for DuploCloud to use when importing the cluster.
Before performing this step, contact DuploCloud Support to enable the configuration that allows the import of an external Kubernetes cluster.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Click Add. The Add Infrastructure page displays.
From the Cloud list box, select On-Premises.
Enter the details of the Kubernetes Cluster:
Kubernetes Cluster Name
Kubernetes Cluster Endpoint
Kubernetes Token, which you retrieved when you created a service account in the previous step.
Kubernetes Cluster Certificate Authority Data (For an EKS cluster, this can be copied from the EKS Cluster Overview page from the AWS Console).
Kubernetes Vendor (Enter EKS, as in the example below).
Select the Kubernetes tab to display information about the imported Kubernetes Cluster.
In the DuploCloud Portal, navigate to Administrator -> Tenants.
Click Add. The Create a Tenant pane displays.
Enter the Tenant Name.
Select the Infrastructure name from the Plan list box.
Click Create.
Navigate to Kubernetes -> Nodes. The Nodes page displays.
Click the On-Premises Tab.
Click Add. The Add On-Premises Instance pane displays.
Select the node from the Kubernetes Node list box.
Supply an Allocation Tag.
Click Add.
Navigate to Kubernetes -> Nodes to view the imported cluster.
Create a WebServer Service in the DuploCloud portal by selecting OnPrem from the Cloud list box while creating a Kubernetes Service.
Once the service is created, you should be able to access the kubectl shell, retrieve the kubectl token, open the Host/Container shell, and view Container logs for the service you created.
An administrator can import an external Kubernetes cluster in the DuploCloud Portal with readonly access.
Save the following YAML code as service-account-readonly-setup.yaml.
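As above, the YAML is not reproduced here. A minimal sketch, assuming the account name duplo-readonly and binding Kubernetes' built-in view ClusterRole:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: duplo-readonly              # assumed account name
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: duplo-readonly-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                        # built-in read-only role
subjects:
- kind: ServiceAccount
  name: duplo-readonly
  namespace: kube-system
---
apiVersion: v1
kind: Secret
metadata:
  name: duplo-readonly-token        # token secret read in the next step
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: duplo-readonly
type: kubernetes.io/service-account-token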
Run kubectl apply -f service-account-readonly-setup.yaml, creating a new service account with readonly permission.
Run kubectl -n kube-system describe secret duplo-readonly-token to fetch the token for DuploCloud to use when importing the cluster.
Follow this step to import and view the cluster.
DuploCloud users with non-administrator access (User role) can only view Kubernetes resources. They cannot add Nodes or create or update any Services in readonly mode.
If you have a host already running somewhere in the cloud or on-premises, you can bring it to DuploCloud using the Bring Your Own Host (BYOH) functionality and let DuploCloud manage the host, whether that means running containers or installing the compliance agents.
To configure BYOH, go to Cloud Services -> Hosts and select the BYOH tab. Click Add and provide the name and IP address. In the Fleet type, select Native App (if you don't, DuploCloud will not manage the containers, only the compliance agents on the host). Provide the username and password or private key file. Make sure SSH access to the host is open from DuploCloud.
Within about 5 minutes of adding the host, you can go to Security -> Agents and see that the agent on the host is in the Active state.
Pin a container to a set of hosts using allocation tagging
In DuploCloud, allocation tags give you control over where containers and Services are deployed within a Kubernetes cluster. By default, DuploCloud spreads container replicas across available Hosts to balance resource usage. Allocation tags allow you to label Hosts and Services with specific characteristics, capabilities, or preferences, and to "pin" Services to certain Hosts to meet your operational and resource needs. Allocation tags are useful for deployment requirements like using Hosts with specialized resources, meeting compliance standards, or isolating workloads.
For a Service to run on a specific Host, the Host and the Service must have matching allocation tags. Services without allocation tags are deployed on any available Host in the Kubernetes cluster.
Assign a tag describing the Host's characteristics or capabilities, such as resource capacity, geographic location, or compliance needs.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
Select the Host from the NAME column. If the Host is part of an Auto-Scaling Group (ASG), select the ASG tab and select the correct ASG.
Click the Allocation Tag edit icon. The Set Allocation Tag pane displays.
In the Allocation Tag field, enter a tag name. Use only alphanumeric characters. Hyphens ( - ) are supported as special characters if needed. For example, highmemory-highcpu is a valid tag name.
Click Set. The allocation tag you set displays in the heading banner for the Host or ASG.
In the DuploCloud Portal, navigate to the Add Service or Edit Service page, and enter a tag name in the Allocation Tag field. When the Service runs, DuploCloud will attempt to select a Host with a matching allocation tag. To pin the Service to run on a specific Host, apply matching allocation tags to the Host and Service.
On the Host or ASG page, select the Metadata tab, and edit or delete the existing allocation tag.
Customize or update the text on your DuploCloud login screen banner or button
Navigate to Administrator -> Systems Settings.
Click on the System Config tab.
Click Add. The Add Config pane displays.
In the Config Type list box, select AppConfig.
In the Key list box, select Other.
In the Key text field, enter LoginBannerText or LoginButtonText.
In the Value field, enter the text that will display on the login banner or button.
Click Submit. The entered text displays on the login banner or button.
Navigate to Administrator -> Systems Settings.
Click on the System Config tab.
Click the menu icon to the left of the LoginBannerText or LoginButtonText row and select Update. The Update Config AppConfig pane displays.
Update the text in the Value field and click Submit.
The configuration is updated and the updated text displays on your DuploCloud login screen banner or button.