The DuploCloud DevOps-as-a-Service platform is cloud infrastructure automation software that enables developer self-service with continuous security and compliance. It is built for engineers and operators in organizations that host infrastructure on the public cloud.
You provide high-level application specifications including cloud services, application containers, packages and configurations, interconnectivity, requirements for multiple environments, and scoped compliance standards. DuploCloud uses these specifications to auto-generate required lower-level configurations, provisioning them in a secure and compliant manner, while maintaining their ongoing operations.
In addition, logging, monitoring, alerting, and reporting of the provisioned system are enabled. The following figure shows the various functions provided by the platform.
DuploCloud is a single-Tenant software platform installed in the customer's cloud account. The customer interfaces with DuploCloud via the browser UI, the DuploCloud Terraform provider, and/or API calls while the data and configuration stay within the customer's cloud account. All configurations created and applied by DuploCloud can be reviewed and edited in the customer's cloud account.
Got 5 minutes? Check out a video overview of a DuploCloud deployment:
An overview of the DuploCloud Portal
This Getting Started module describes DuploCloud and its unique constructs and terminology. It also outlines the business value of automating your DevOps and Security Compliance cloud environments in the DuploCloud Portal.
To integrate your identity and access management service logins with your DuploCloud account, reach out to DuploCloud support for assistance. DuploCloud supports logins from these services:
Google SSO
Microsoft login
Azure Active Directory (AD)
Okta SSO
These tutorials are specific to various public cloud environments and demonstrate some of DuploCloud's most common use cases:
Popular and frequently asked questions about DuploCloud
Use these FAQ documents to quickly find answers to popular questions about using AWS, Azure, and GCP with DuploCloud.
DuploCloud supports the following:
Amazon AWS
Microsoft Azure
Google Cloud
On-Premises
See DuploCloud Support for examples of what we do and do not support and how to contact us.
We estimate that DuploCloud will increase your company's monthly cloud costs by approximately $100 to $200.
Yes, DuploCloud's On-prem services support companies with private clouds.
No. DuploCloud is a self-hosted solution deployed within the customer's cloud account that provides the customer with a SaaS-like experience. DuploCloud can provide a fully managed service to maintain uptime, provide updates, and supply ongoing support.
DuploCloud creates a private Slack channel so your organization can communicate directly with the DuploCloud Support Staff to resolve issues and answer questions during onboarding.
Typically, clients create separate AWS accounts, GCP projects, or Azure subscriptions for DuploCloud so that we don't interfere with any pre-existing accounts and configurations. The goal is to set up a development environment the client team can validate and switch to, followed by staging and production migration. Alternatively, DuploCloud can be integrated into an existing environment, or a hybrid approach can be used in which some infrastructure moves to a DuploCloud-managed environment that connects to the existing one.
DuploCloud installs the portal for you and then schedules a call to orient you to the portal and perform additional configuration, such as creating DuploCloud Services, as required.
During the initial call with DuploCloud, you can complete the setup with help from our engineers. You can also create some Services and let our engineers set up the rest for you.
After your DuploCloud Portal has been installed and configured, DuploCloud Infrastructures (VPCs) will run, Kubernetes will be enabled and configured, and logging, monitoring, alerting, CI/CD, and SOC 2 controls will be implemented.
Even though the onboarding process can take thirty to seventy (30-70) hours, the DuploCloud staff performs about 90% of the work for you. Feel free to contact our Support staff anytime with questions or concerns.
In addition, during setup, we perform penetration testing and vulnerability assessments for your applications. Using our SIEM solution, we assess the vulnerabilities of DuploCloud Hosts in the Infrastructure.
DuploCloud is a self-hosted, single-tenant solution deployed within the customer's cloud account. The software runs in a virtual machine (VM) and derives its permissions to call the cloud provider from the VM itself. In AWS, DuploCloud uses an IAM role attached through an instance profile to access the AWS account, ensuring secure access without long-lived access keys. In Azure, permissions derive from a Managed Identity; in GCP, from service accounts.
To manage workloads across multiple AWS accounts, a DuploCloud appliance is installed in each account. This strategy ensures effective management of the workloads in different environments, aligning with DuploCloud's comprehensive support for multi-cloud and hybrid-cloud setups. The DuploCloud Portal is secured like any other workload in the cloud. In addition to SSO login for portal access, the VM can optionally run behind a VPN so that only internal users connected to the VPN can load the portal.
While you can install DuploCloud in your existing environment, we prefer to do the setup in a separate account. You do not have to migrate all your existing data sources, especially if you have terabytes of files in S3 buckets. Workloads in DuploCloud environments can connect to existing data sources and endpoints over peering and cross-account access. Here are the top reasons why people prefer DuploCloud in a separate account:
It is considered a safe and non-intrusive way to validate the new setup and, at times, new architecture, like Kubernetes, without touching the existing account.
From a compliance perspective, a new account is a clean slate, making audits much easier to pass because DuploCloud can guarantee all aspects of it. If we were to fit into an existing account, there may be many things that are hard, and in some cases impossible, to fix. For example, in a non-compliant scenario, that history is stored in CloudTrail, indicating irregularities to an auditor and creating questions around scope, leading to exceptions in the report.
Some DuploCloud security features can impact existing non-compliant resources and workloads. While this is rare, it is something to consider.
In summary, while you can deploy the platform in an existing account and even import a VPC and Kubernetes clusters, doing so often incurs more overhead than benefit compared to a new account with data either migrated or connected to existing sources over cross-account access.
DuploCloud is running in your cloud account, along with your workloads. DuploCloud is a provisioning system, so stopping DuploCloud does not impact any of your applications and cloud services.
The following is a list of automation constructs managed by DuploCloud and a summary of what you need to do to maintain them directly instead of through DuploCloud.
Cloud Provider Configuration (Terraform): This involves various cloud services, IAM roles, Security groups, VPC, etc. DuploCloud can export your latest cloud configuration into native Terraform code and state files. Once exported, you maintain the configuration.
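For illustration, exported configuration is ordinary native Terraform; a resource might look like the following sketch (the resource names, CIDRs, and values here are hypothetical, not taken from a real export):

```hcl
# Hypothetical example of what exported native Terraform can look like.
# Names, CIDRs, and values are illustrative only.
resource "aws_security_group" "app" {
  name   = "duploservices-myapp"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }
}
```

Once exported, files like this are maintained with the standard terraform plan/apply workflow, independent of DuploCloud.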
Kubernetes: All applications and configurations deployed in K8s are available as Deployments, StatefulSets, DaemonSets, K8s Secrets, ConfigMaps, etc. You can run kubectl commands to export these configurations as YAML files and continue to maintain them directly in the future.
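As a sketch, the export uses standard kubectl commands such as the following (the namespace and object names are placeholders):

```shell
# Export the live definition of each workload type to YAML.
# Namespace and object names are placeholders.
kubectl get deployment my-service -n my-namespace -o yaml > my-service-deployment.yaml
kubectl get configmap my-config -n my-namespace -o yaml > my-config.yaml
kubectl get secret my-secret -n my-namespace -o yaml > my-secret.yaml
```

The resulting YAML files can be committed to source control and applied with kubectl apply going forward.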
Compliance monitoring: DuploCloud uses a third-party SIEM solution called Wazuh. Wazuh is an open-source software platform running in an independent VM in your cloud account, and you have full permission to retain it "as-is." However, in the future, you need to integrate any new systems that need compliance monitoring into the SIEM.
Diagnostics tools: These include Prometheus, Grafana, and Elasticsearch. They are all open source and run in your cloud account, so you can continue to manage them directly.
If DuploCloud is down, it is similar to having an unavailable DevOps engineer. Opting out of DuploCloud means replacing your DevOps management. DuploCloud is neither a PaaS nor a hosted solution that hosts your workloads.
Absolutely! More than half of our customers have no DevOps team. With our managed service offering, we handle your deployments, act as the first line of defense for any issues, and are constantly involved in daily tasks like CI/CD updates.
The DuploCloud team is your extended DevOps team. We assist with white-glove environment setup and daily operations with 24x7 Slack and email support. We cover what the DuploCloud platform supports and assist with your cloud provider's requirements.
After the initial onboarding of the platform, we recommend that you engage the DuploCloud team as your second line of defense by setting up an internal triage process of your own. We can assist you in setting up this process.
DuploCloud is your extended DevOps team! We are available 24x7 on your Slack channel, by phone, and by email.
Yes. This is often required because DuploCloud may not support all cloud features or configurations. Direct changes to your cloud account can be categorized within the following groups:
DuploCloud labels the resources and configurations it manages. If independent changes are being made directly with your cloud provider, DuploCloud will not interfere. However, DuploCloud still monitors compliance changes and alerts you to non-compliant configurations.
For resources that DuploCloud does not manage directly, you make changes directly in your cloud provider. For example, if you add additional forwarding rules to a load balancer created through DuploCloud, DuploCloud does not interfere with the new configuration that you created.
For resources that DuploCloud manages, DuploCloud automatically detects conflicts and either reverts the changes or raises an alert about the inconsistency.
DuploCloud provides flexibility: a feature that is not supported by DuploCloud can be programmed directly in your cloud provider. If you use Terraform, the DuploCloud provider and the native cloud provider can be used in tandem. For an example of this use case, see https://duplocloud.com/white-papers/devops/#PAAS.
Yes. DuploCloud's Web UI is a no-code interface for DevOps. You do not need to know IaC or have cloud expertise to operate it. However, you should read the product documentation to understand the basic constructs in DuploCloud.
"Click Ops" is when engineers manually create infrastructure resources in cloud consoles and other UIs. It is often considered bad practice because there are so many components and configurations that it is easy to make mistakes: you can skip past default settings that aren't secure, copy configuration incorrectly between environments, and so on. It takes hundreds or thousands of clicks and a lot of DevSecOps knowledge.
DuploCloud manages infrastructure resources for you. You pick application-level functionality like "services" and "load balancers," and then DuploCloud creates the complex cloud resources needed to deliver that functionality. It ensures the underlying compute instances, firewall rules, IAM policies, and other components are configured following good practices. You only need a few clicks and don't need to know DevSecOps because DuploCloud knows it for you.
Click Ops is an easy way to make mistakes. DuploCloud's No Code is an easy way to ensure you don't.
The answer depends on the following key factors:
Especially in a rapidly growing company environment, where architecture is constantly evolving, and services are constantly updated, there may be a desire to let developers self-service and move fast. This would be a case for no-code. On the other hand, in cases where an established operations organization has centralized requirements, a low-code Terraform solution may be a better option.
Note that Terraform is a client-side scripting tool and effectively a single-user system, so the scope of a project is limited to one person operating at a time. This can be constraining when a project has many components, because two people cannot operate simultaneously even when they work on completely independent constructs.
Terraform projects typically have a broad scope with multiple components. Sometimes you must make small, targeted changes (for example, a health check URL change), but when the change is executed there may be other drifts that the user is forced to resolve first. This can be inconvenient, and often the user will make the change in the UI instead, resulting in further configuration drift.
About half of our customer base uses no-code, while the other half uses Terraform. Ironically, software developer-centric companies prefer no-code because it enables engineers to be agile and focus on their application code. By contrast, the low-code Terraform provider is often used in DevOps-centric organizations.
DuploCloud license usage is calculated based on the services managed by DuploCloud. The service usage is counted in terms of units, with a unit defined as below:
A host is counted as 1 unit. (example: EC2 instance, Azure, or GCP VM)
A serverless function or service is counted as 1/4 unit (example: Lambda function)
A serverless application is counted as 1/2 unit (example: AWS ECS Service, Azure Web App, Google GKE Service)
AWS Managed Airflow (MWAA) worker is counted as 1/2 unit. For an MWAA environment, the number of workers is calculated as the average of the minimum and maximum worker count.
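As a worked example, consider a hypothetical deployment with 4 hosts, 8 Lambda functions, 2 ECS services, and one MWAA environment with a minimum of 2 and a maximum of 10 workers. Counting in quarter units keeps the arithmetic in integers:

```shell
# Hypothetical fleet; all counts are illustrative.
hosts=4          # 1 unit each
lambdas=8        # 1/4 unit each
ecs_services=2   # 1/2 unit each
mwaa_min=2
mwaa_max=10

# MWAA workers are averaged over the minimum and maximum worker count.
mwaa_workers=$(( (mwaa_min + mwaa_max) / 2 ))   # 6 workers, 1/2 unit each

# Count in quarter units so plain integer arithmetic suffices.
quarter_units=$(( hosts * 4 + lambdas * 1 + ecs_services * 2 + mwaa_workers * 2 ))
units=$(( quarter_units / 4 ))
echo "$units units"   # prints "10 units"
```

Here 4 hosts (4 units), 8 Lambdas (2 units), 2 ECS services (1 unit), and 6 average MWAA workers (3 units) total 10 units.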
Yes.
Yes. Every element in the DuploCloud UI is populated by calling an API. Additionally, DuploCloud maintains a comprehensive log file that records the creation of all resources, including compute instances, storage options, and databases, facilitating tracking and auditing of resource provisioning.
No. DuploCloud segregates resources into environments called Tenants, which accelerates ramp-up time. For more information about implementation, contact the DuploCloud support team.
You must make control plane modifications before enabling central logging. If logging is enabled, the Service Description cannot be edited.
While creating a Host, click on Show Advanced to display advanced options and select the public subnet from the list of availability zones.
Under each Host, you can click on Connection Details under the Actions dropdown, which will provide the key file and instructions to SSH.
Under Host, click on Connection Details under the Actions dropdown. It will provide the password and instructions on how to connect to RDP.
Under the Services Status tab, find the host where the container is running. SSH into the host (see instructions above) and run sudo docker ps to get the container ID, finding your container using the image ID. Next, run sudo docker exec -it CONTAINER_ID bash.
Make sure the DNS name resolves by running ping on your local machine. Ensure that the application is running by running ping from within the container. SSH into the host and connect to your Docker container with sudo docker exec -it CONTAINER_ID bash. From inside the container, curl the application URL using the IP 127.0.0.1 and the port where the application is running, and confirm that this works. Then curl the same URL using the container's IP address instead of 127.0.0.1; the IP address can be obtained by running the ifconfig command in the container.
If the connection from within the container works, exit the container and curl the same endpoint from the host (that is, using the container IP and port). If this works, then in the ELB UI in DuploCloud, note the host port that DuploCloud created for the given container endpoint; it will be in the 10xxx range or the same as the container port. Now try connecting to the host IP and the DuploCloud-mapped host port you just obtained. If this also works but the service URL still fails, contact your enterprise admin or duplolive-support@duplocloud.net.
No, Kubernetes is not required. DuploCloud supports both AWS ECS and Kubernetes, among other Cloud solutions.
The main advantage of Kubernetes is the broad, highly customizable, third-party open-source ecosystem that champions and supports it as a delivery platform. For example, Astronomer (managed Airflow), time-series databases, the Istio service mesh, and the Kong API Gateway all expect you to have a Kubernetes deployment. However, if your business needs and use cases are met with an AWS solution, you may not need Kubernetes.
Choose the container management software that best meets the complexity level of your use cases and requirements. DuploCloud supports AWS, Kubernetes, Azure, and GCP. Many customers use software from multiple vendors to create robust business solutions backed by DuploCloud's compliance assurance and automated low-code/no-code DevOps approach.
Enable a Health Check for your service and ensure the API does not return HTTP 200 status until migration is done. Since DuploCloud waits for a complete Health Check on one instance before upgrading the next, only one instance will run the migration at a time.
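In Kubernetes terms, this pattern corresponds to a readiness probe that only succeeds once migration has completed. A minimal sketch follows; the endpoint path, port, and timings are placeholders, not DuploCloud defaults:

```yaml
# Sketch of a readiness probe gating rollout on a migration-aware health endpoint.
# Path, port, and timing values are placeholders.
readinessProbe:
  httpGet:
    path: /healthz        # return non-200 until migration has finished
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 30    # allow time for long-running migrations
```

With this in place, the rollout of the next replica is held back until the current one reports ready.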
DuploCloud's approach to container deployment emphasizes applications being self-contained and fungible, which facilitates independent updates of each service. Kubernetes automatically manages failing containers, and DuploCloud supports the use of Health Checks, including Liveness and Readiness Probes, to ensure containers are functioning correctly and ready to receive work.
DuploCloud supports the creation of S3 buckets with custom prefixes, enabling unique bucket names without the default numeric suffix. This feature can be activated by configuring a specific setting in the system, allowing for more personalized and easily identifiable bucket names that comply with S3's uniqueness requirements.
If the current status is Pending and the desired status is Running, wait a few minutes for the image to finish downloading. If it has been more than five minutes, check the faults using the button below the table. Ensure that your image name is correct and does not contain spaces. Image names are case-sensitive and should be all lowercase, including the image name in DockerHub.
If the current state is Pending when the desired state is Delete, the container is the old service version. It is still running because the system is being upgraded, and the previous replica has not been upgraded yet. Check the faults in other containers of this service for which the Current State is Pending and the desired state is Running.
This means DuploCloud is going to remove these containers. The most common cause is that DuploCloud blocked the upgrade because a replica of the service was upgraded but is no longer operational. Some replicas may show a Running state even though the health check fails and the rolling upgrade is blocked. To unblock the upgrade, restore the service configuration (image, environment variables, etc.) to an error-free state.
No. DuploCloud is calling the cloud provider's API directly. Based on user requirements, the software interacts with the cloud provider API asynchronously, maintaining a state machine of operations with built-in retries to ensure robustness. Any configuration drift, system faults, security, and compliance controls are monitored continuously by interacting with the cloud provider.
The DuploCloud Terraform provider and the DuploCloud Web UI are layers on top of the DuploCloud platform.
DuploCloud provides an SDK into Terraform called the DuploCloud Terraform Provider. This SDK allows users to configure their cloud infrastructure using DuploCloud constructs rather than lower-level cloud provider constructs. It enables users to benefit from Infrastructure-as-Code while significantly reducing the needed code. The DuploCloud Terraform Provider calls DuploCloud APIs. Our DevOps white paper provides detailed examples.
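A minimal sketch of what DuploCloud-level Terraform looks like follows; the resource and attribute names are based on the public duplocloud provider, but verify them against the provider documentation for your version:

```hcl
terraform {
  required_providers {
    duplocloud = {
      source = "duplocloud/duplocloud"
    }
  }
}

# A Tenant is a DuploCloud construct; the provider expands it into the many
# lower-level cloud resources (IAM, security groups, namespaces, and so on).
# Attribute names should be verified against your provider version's docs.
resource "duplocloud_tenant" "dev" {
  account_name = "dev01"
  plan_id      = "nonprod"
}
```

A single high-level resource like this replaces the much larger body of native cloud Terraform it generates.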
It is best to create separation between your developers and your DevOps team. Allowing developers to use the Web UI in non-production development environments lets them iterate on product changes quickly, but it can create inconsistency.
Use Terraform to set up cloud services and perform first-time application deployment in both production and critical non-production environments. In addition, build CI/CD workflows that update only the application (Docker and Lambda) deployments. The DevOps team should make any cloud-service-level changes via Terraform, and developers should ask the DevOps team to do the same. Developers should still be able to trigger CI/CD for their application rollouts without DevOps involvement.
From the DuploCloud portal, click on your name in the top right corner and select Profile. Click on the VPN URL and enter the required credentials. On the first login, scan the barcode displayed on the screen. Download the profile and add it to the OpenVPN Connect. The next time you log in with OpenVPN, enter the authentication code when prompted.
CI/CD is the topmost layer of the DevOps stack. DuploCloud should be viewed as a deployment and monitoring solution invoked by your CI/CD pipelines, written with tools such as CircleCI, Jenkins, GitHub Actions, etc. You build images and push them to container registries without involving DuploCloud, but you invoke DuploCloud to update the container image. An example of this is in the CI/CD section. DuploCloud offers its own CI/CD tool, as well.
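As an illustrative sketch, a pipeline might build and push the image itself and then call DuploCloud only for the rollout. The duploctl command, environment variable names, and registry path below are assumptions to verify against your DuploCloud documentation:

```yaml
# Hypothetical GitHub Actions job: CI builds and pushes the image,
# then asks DuploCloud to roll the service to the new tag.
# Command names, env var names, and the registry path are assumptions.
jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      DUPLO_HOST: ${{ secrets.DUPLO_HOST }}    # assumed variable names
      DUPLO_TOKEN: ${{ secrets.DUPLO_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t myregistry/myapp:${{ github.sha }} .
      - run: docker push myregistry/myapp:${{ github.sha }}
      - run: pip install duploctl
      - run: duploctl service update_image myapp myregistry/myapp:${{ github.sha }}
```

The same shape works in CircleCI or Jenkins: the pipeline owns the build, and DuploCloud owns the deployment.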
DuploCloud provides comprehensive monitoring capabilities, including Kubernetes pods, node hosts, RDS databases, and load balancers. This enables the creation of dashboards and setting up alerts for efficient service management. Additionally, DuploCloud's built-in monitoring feature displays resource utilization by tenant or container, simplifying usage tracking and offering a cost-effective alternative to solutions like Datadog.
When you update a service with multiple replicas (for example, when you change an image or environment variable), DuploCloud makes the change to one container at a time. If an updated container fails to start, or the health check URL does not return HTTP 200 status, DuploCloud pauses the upgrade of the remaining containers; update the service with a newer image containing a fix. If no health check URL is specified, DuploCloud only checks that an updated container is running before moving on to the next. To specify a Health Check, use the Elastic Load Balancer menu to find the Health Check URL suffix.
DuploCloud automates the management of AWS IAM roles, streamlining access control for services within a Tenant and facilitating AWS-integrated tasks without code modifications. This includes a duplomaster role for administrative tasks in the AWS console, enhancing security and operational efficiency.
DuploCloud's out-of-the-box diagnostics stack is optional. To integrate with a third-party toolset like Datadog, follow the toolset's guidelines and deploy its collector agents just as you would run any other application within the respective DuploCloud Tenants.
If you see the error "Could not load credentials from any providers", your local duplo-jit credential cache must be cleared.
You may see a "Conditions Unschedulable" message because 0/N nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, and so on.
Two possible reasons for receiving this fault message are:
You are not allocating enough hosts to process your workload.
The allocation tags you assigned to your existing Hosts limit additional Service workloads.
If you see the error "Docker native collection agent Filebeat is not running for Tenant", ensure that logging is set up and that you have selected the Tenant for which you want to collect logging information. If logs are still not appearing in the DuploCloud console and Filebeat indicates a bulk send failure, it may be because Elasticsearch has run out of disk space. In this case, check the disk space in the diagnostics history of the Default Tenant. If necessary, follow the steps to recognize and expand the volume on Linux, as outlined in the AWS documentation, to resolve the issue.
Clear your browser cache or use a private browsing window. Should the problem persist, please request further assistance from DuploCloud to ensure a smooth login experience.
New features and enhancements in DuploCloud
AWS
Enable automatic AWS ACM (SSL) Certificates for a Plan.
Configure K8s Ingress redirect using a container port name.
Enable UltraWarm Data nodes for OpenSearch domains.
Support for upgrading EKS components (add-ons).
Add a Web App Firewall URL when creating or updating a Plan.
Create an OpenSearch domain.
Create Lambdas with Ephemeral Storage.
Support for Lambda Dead Letter Queues.
Set a delivery delay for SQS Queues, using increments of seconds.
Configure Vanta compliance controls for DuploCloud Tenants.
Support for OpenSearch storage options.
Security Configurations Settings documentation section added.
GCP
GKE Standard mode is supported when creating DuploCloud Infrastructures.
Support for Firestore databases.
Create Node Pools with support for accelerators and taints.
Support for GKE Ingress.
General
Set Tenants to expire at specified dates and times.
Configure settings for all new Tenants under a Plan using Tenant Config tab.
SIEM - Configure agents to install on specific Tenants.
AWS
Enable Spot Instances for EKS Autoscaling Groups (ASG).
Implement Kubernetes Lifecycle Hooks while Adding a DuploCloud EKS/Native Service.
Enable shared hosts to allow K8s Pods in a Tenant to run on Hosts in another Tenant.
Set a default automated backup retention period for databases.
Enable bucket versioning when creating an S3 bucket.
Create an Amazon Machine Image (AMI).
Use dedicated hosts to launch Amazon EC2 instances and provide additional visibility and control over how instances are placed on a physical server.
Automatically reboot a host upon StatusCheck faults or Host disconnection.
Support for SNS Topic Alerts, enabling notifications and alerts across different AWS services and external endpoints.
Establish VPN connections for private endpoints when creating an Infrastructure.
Restore an RDS to a particular point in time.
Dynamically change the configuration of a Kafka Cluster.
Fields for Sort Key and Key Type are now available when creating a DynamoDB.
Azure
Create a MySQL Flexible Server managed database service.
Add an Azure Service Bus.
Kubernetes
Follow logs for K8s containers in real-time.
Influence Pod scheduling by specifying K8s YAML for Pod Toleration.
Create Kubernetes Jobs (K8s Jobs) in AWS and GCP to manage short-lived, batch workloads in a Kubernetes cluster.
Create Kubernetes CronJobs (K8s CronJobs) in AWS and GCP to schedule K8s Jobs to run at preset intervals.
General updates
The DuploCloud UI contains numerous design, navigational, and usability improvements, including new menus for managing an RDS, Containers, and Hosts. These improvements are cross-platform and apply to AWS, Azure, and GCP.
Quickly search the DuploCloud Portal for any navigation menus or tab labels, such as Kubernetes Secrets and Spend by Month, by using the Search box at the top center of the DuploCloud Portal.
Use the Supported Third-Party Tools page for a list of functionality supported by DuploCloud, out-of-the-box.
DuploCloud no longer supports launch configurations. Instead, launch templates are created. If you use launch configurations, DuploCloud automatically converts them to launch templates with no interruption in uptime.
AWS
Hibernate an EC2 host instance.
AWS
Set a monitoring interval for an RDS database.
Enable or disable logging for an RDS database.
Add custom Lambda image configurations and URLs.
Enable Object Lock in S3 Buckets to prevent objects from being deleted or overwritten.
Configure a custom S3 Bucket for auditing.
Customize a Node Selector for EKS Services to prevent overrides of specific configurations.
Access ECS container task shells directly from the DuploCloud Portal.
Ability to designate Essential Containers in Task definitions for ECS Services.
Automate fault healing on EC2 Hosts that fail a status check.
Enhanced support for Startup Probes.
GCP
Support for Redis database instances.
Support for SQL databases.
Change Cloud Armor security policies.
General updates
Last Login card available for determining the last user sign-in when viewing user access.
Grant access to specific databases to non-administrators.
AWS
Enable EKS endpoints in a DuploCloud Infrastructure, in a more cost-effective and secure manner. Enabling endpoints in DuploCloud allows your network communication to remain internal to the network, without using NAT gateways.
Multiple containers are now supported in the ECS Task Definitions tab.
Start, stop, and restart up to twenty (20) services at one time.
Add VPC Endpoints to a DuploCloud Infrastructure to create a private connection to supported AWS services and VPC endpoint services powered by AWS PrivateLink.
Define S3 bucket policies.
Support for Lambda Layers has been added.
CloudWatch EventBridge rules and targets are supported.
The CloudFront feature and associated UI tab have been relocated in the DuploCloud Portal from the Cloud Services -> App Integration menu item to the Cloud Services -> Networking menu item.
Azure
Support for Redis databases is available.
GCP
Cloud Armor is supported to help protect your cloud infrastructure and deployed applications against cyber-attacks.
AWS
Define custom CIDRs for NLB Load Balancers.
Manage multiple Load Balancer settings using the Load Balancer tab's Other Settings card. Settings include specifying a Web Application Firewall (WAF) Access Control List (ACL), enabling HTTP to HTTPS redirects, enabling Access Logs, setting an Idle Timeout, and an option to drop invalid headers.
Specify custom public and private EKS endpoints for your DuploCloud Infrastructure during or after creating an Infrastructure.
JIT Access to the AWS Console is redesigned with several usability enhancements.
Support for Aurora Serverless RDS and MySQL read replicas, including the ability to modify the Serverless replica instance size.
Improved documentation for upgrading an EKS cluster version.
Azure
Add a direct link to the Azure Console from a DuploCloud Host page Actions Menu.
General Updates
Set read-only access to specific Tenants for DuploCloud users.
AWS
Virtual Private Cloud (VPC) peering is supported to facilitate data transfer between VPCs.
EMR Serverless is supported to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers.
DuploCloud users can obtain Just-In-Time (JIT) access to the AWS Console.
AWS SQS Standard and FIFO queues are now supported.
Use the DuploCloud Portal to work with AWS Internet of Things (IoT).
Support for selecting Redis database versions when creating Amazon ElastiCache.
Enable shell access for ECS, Kubernetes, and native Docker containers using a simplified workflow.
Reduce storage cost and increase performance by setting GP3 as your default storage class.
GCP
Updated documentation for supported databases.
CI/CD
Documentation for Bitbucket Pipelines is available, which allows developers to automatically build, test, and deploy their code every time they push changes to an Atlassian Bitbucket repository.
Terraform
Added the IdleTimeout attribute to the duplocloud_aws_load_balancer resource.
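A minimal sketch of how this might look in practice. The resource name comes from the release note; the attribute name, its unit, and the surrounding arguments are assumptions, so check the DuploCloud provider documentation for the exact schema:

```hcl
# Hypothetical sketch only: attribute names and values are assumed, not verified.
resource "duplocloud_aws_load_balancer" "web" {
  tenant_id = var.tenant_id # assumed variable holding the Tenant ID
  name      = "web-lb"

  # Idle timeout for the load balancer, in seconds (attribute name assumed
  # from the release note's "IdleTimeout").
  idle_timeout = 120
}
```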
AWS
Enable Elastic Kubernetes Service (EKS) for your existing infrastructure. EKS versions 1.22 and 1.23 are supported.
Timestream databases are now supported.
General updates
Delete VPN connections for users.
AWS
AWS ElastiCache, a managed caching service for Redis and Memcached, is now supported.
Monitor Tenant usage in Cost Management for billing with weekly or monthly views. After clicking the Spend by Tenant tab, you can also select the shared card to display tax and support costs.
Maintain cluster stability with Ingress Health Checks annotations.
Azure
Support for Kubernetes Ingress.
Monitor Tenant usage in the Cost Management for billing feature with weekly or monthly views.
Edit Azure agent pools, used to run Azure Kubernetes (AKS) workloads.
GCP
Monitor Tenant usage in the Cost Management for billing feature with weekly or monthly views.
Kubernetes (K8s)
Support for Kubernetes Ingress in Azure.
Maintain cluster stability with Ingress Health Checks annotations for AWS.
Use the K8s Admin dashboard to monitor StatefulSets in AWS.
Edit Azure agent pools, used to run Azure Kubernetes (AKS) workloads.
Ability to add Path-Based Routing rules: Configure path-based routing rules for application load balancers.
Support for Aurora Serverless V2: Users can create and manage Aurora Serverless V2 RDS instances.
Billing License Usage: Overview of DuploCloud License Usage according to current service usage.
Ability to add logging infrastructure at the Tenant level: Support for configuring a logging setup in Tenants other than the default Tenant.
Support for multiple Docker registry credentials in a single Tenant: Users can configure multiple Docker registry credentials from the Plan.
Support for Amazon Managed Workflows for Apache Airflow (MWAA): Ability to configure AWS-managed Airflow environments.
Configure custom prefix for S3: Ability to configure a prefix for S3 bucket names.
Azure support for Storage Accounts: Create Storage Accounts and File Shares, and generate Shared Access Signatures (SAS).
Multiple Azure User Enhancements were made.
Support for Elastic File System (EFS): Support for adding EFS has been added to DuploCloud. You can create and mount a shared filesystem for an Infrastructure in the DuploCloud Portal.
Support for adding Kubernetes Storage Class: Support for Kubernetes Storage Class and Persistent Volumes is now available.
Support for Kubernetes Secret Provider Class: This provides the ability to make AWS parameters and secrets available as Kubernetes secrets.
Ability to add Lambda using Container Images: Users can now configure an AWS Lambda using Container images.
Support for configuring RDS Automatic Backup Retention: Administrators can configure RDS Automatic Backup Retention in days at the system level.
Export Terraform from an existing Tenant: Ability to export DuploCloud Terraform provider code for an existing DuploCloud Tenant.
Ability to automatically generate alerts: Users can now configure automated alarm creation in AWS, ensuring that new resources added to their environment are not excluded from monitoring.
Ability to set resource allocation quotas by an Admin: Administrators would often like to restrict the type of resources that should or should not be provisioned in their environments. This feature allows them a way to configure those rules via a DuploCloud Plan.
Support for the Kubernetes Ingress Controller: Support for the K8s Ingress controller has been added; this is a key piece of functionality for routing traffic to a K8s cluster.
RDS Snapshot Management: Support to manage RDS database snapshot was added to the Portal, accessible through the RDS page.
Terraform provider updates: Expanded support for more resources in the DuploCloud Terraform provider, particularly for Microsoft Azure.
The high-level application and compliance requirements are passed to a DevOps team that is the subject-matter expert for the cloud. This team accepts the requirements and translates them into hundreds or thousands of lower-level configurations, best practices, and compliance controls: IAM roles, instance profiles, KMS keys, PEM keys, vulnerability scanning, virus scanners, VPCs, security groups, intrusion detection, and so on. This translation is usually done from human knowledge and subject-matter expertise. Further, DevOps engineers are usually required to write thousands of lines of code to implement these requirements using languages like Terraform, Python, and Bash.
A common misconception is that Terraform automates the DevOps workflow. In fact, Terraform is only a programming language: one needs substantial infrastructure know-how to build automation with it. DevOps engineers are typically unaware of compliance nuances that go beyond best practices, and much of the work must be redone on an ongoing basis.
DevOps essentially is a skill that requires one to be a programmer (to write IaC), an operator, and a compliance expert. These are three distinct skill sets that have never traditionally co-existed in the IT industry. This is the #1 challenge in the DevOps space.
For organizations operating in regulated industries, the infrastructure needs to follow strict compliance guidelines. Compliance standards such as NIST, PCI, HITRUST, and SOC 2 set the bar, with the complexity and duration of compliance efforts varying significantly. For instance, achieving compliance in a 50-node infrastructure could span from 6 months to a year, depending on the specific requirements of these standards.
In addition to these standards, organizations releasing new applications must also consider data protection regulations such as the General Data Protection Regulation (GDPR) in the European Union. Duplo offers guidance on GDPR compliance, emphasizing the importance of data classification and security. It is crucial to identify Personal Identifiable Information (PII) and ensure it is stored and managed in compliance with GDPR. For example, storing PII in databases located within the EU and applying appropriate access controls meets GDPR requirements. Leveraging standard AWS services in the correct region further supports compliance efforts. Duplo and AWS provide extensive resources and discussions on navigating GDPR compliance, including detailed information available through the AWS GDPR Center.
Understanding and navigating the complex landscape of compliance requirements demands a comprehensive approach. Whether adhering to industry-specific standards like PCI DSS or broader regulations like GDPR, the key to compliance lies in meticulously managing data and infrastructure, guided by the detailed resources and support available from cloud service providers and compliance guidance platforms.
DuploCloud is a software platform that essentially takes three high level inputs:
High Level Application Architecture as we explained before.
Compliance Standard that is required, like SOC 2, PCI, HIPAA, etc.
Public Cloud Provider where the application will be deployed.
With the above inputs, the platform is able to automatically generate the required lower level configurations that are compliant and can be used by end users like engineers and DevOps alike. All the best practices and compliance controls are baked in. Standard functions like central logging monitoring and reporting dashboards are available out-of-the-box.
Users interact with the system either using a no-code UI or a low-code Terraform provider. The Terraform provider enables the user to achieve the same automation with 10x less code, requiring little DevOps skills compared to using native Terraform.
The AWS PCI guide is 3400 pages long! This highlights the complexity and depth of compliance requirements, even when narrowed to 20 commonly used services.
Support features included with the product, and how to contact DuploCloud Support
DuploCloud offers hands-on 24/7 Support for all customers via Slack or email as part of your subscription. Automation and Developer Self-Service are at the heart of the DuploCloud platform. We are dedicated to helping you achieve hands-off automation in the fastest time possible, via rapid deployment of managed services or customized Terraform scripts using our exclusive Terraform provider.
Use the customer Slack or Microsoft Teams channel created during the Onboarding process.
Send us an email at support@duplocloud.net.
Some of the ways we support our customers in real-time include, but are not limited to:
Any configuration changes in your public cloud infrastructures and associated Kubernetes (K8s) constructs that are managed by DuploCloud
Hands-on support for setting up CI/CD pipelines
Cloud Migration from any existing platform
Proactive, tailored EKS cluster upgrades designed for minimum downtime impact
Accelerated onboarding of existing Services
Troubleshooting and debugging
Apps and services crashing
OpenSearch or database instances slow or crashing
Proof-of-Concepts (PoCs) for third-party integrations, including roll-out to development environment
Downtime during Rolling Upgrades
Increases in billing from your public cloud provider that need investigation and clarification; DuploCloud can often suggest a more cost-effective alternative.
Consolidation of third-party tools you currently subscribe to whose functionality is included with your DuploCloud subscription
Adding a CI/CD pipeline for a new service
We cover most of your DevOps needs, but there are some we do not cover or only partially support. Some examples of these include, but are not limited to:
Patching an application inside a Docker image
Monitoring alerts in a Network Operations Center (NOC)
Troubleshooting of application code
Database configuration
Each Infrastructure represents a network connection to a unique VPC/VNET in a region, with a Kubernetes cluster; in the case of AWS, it can also include an ECS cluster. An Infrastructure can be created with four basic inputs (name, VPC CIDR, number of AZs, and region), plus the option to enable or disable a K8s/ECS cluster. Behind the scenes, the system automatically creates the subnets, NAT gateway, routes, and clusters in the given region.
If the Infrastructure requirement includes custom Private/Public Subnet CIDR, it can be achieved using Advanced Options.
A common pattern is to have two Infrastructures, one for prod and one for non-prod. Another is to have an Infrastructure in a different region, either for disaster recovery (DR) or for localized deployments serving clients in that region.
The greatest capability of the DuploCloud platform is the application-infrastructure-centric abstraction created on top of the cloud provider, which enables users to deploy and operate their applications without knowledge of lower-level DevOps nuances. Further, unlike a PaaS such as Heroku, the platform does not get in the way of consuming cloud services directly from the cloud provider: a user can operate directly on constructs like S3, DynamoDB, Lambda functions, GCP Redis, Azure SQL, and so on, while the platform offers greater scale and unlimited flexibility.
Some concepts relating to security (DevSecOps) are hidden from the end user: for example, IAM roles, KMS keys, Azure Managed Identities, and GCP service accounts. Even those are configurable by the operator, and because this is a self-hosted platform running in the customer's own cloud account, the platform can work in tandem with direct changes made on the cloud account by an administrator. This is explained with examples at https://duplocloud.com/white-papers/devops/
The following picture shows the high level abstractions within which applications are deployed and users operate.
While there are many concepts in the policy model, the following are the main ones to be aware of:
Infrastructure
Plan
Tenant
App and Cloud Services
Diagnostics
Configure settings for all new Tenants under a Plan
You can configure settings to apply to all new Tenants under a Plan using the Config tab. Tenant Config settings will not apply to Tenants created under the Plan before the settings were configured.
From the DuploCloud portal, navigate to Administrator -> Plan.
Click on the Plan you want to configure settings under in the NAME column.
Select the Config tab.
Click Add. The Add Config pane displays.
From the Config Type field, select TenantConfig.
In the Name field, enter the setting that you would like to apply to new Tenants under this Plan. (In the example, the enable_alerting setting is entered.)
In the Value field, enter True.
Click Submit. The setting entered in the Name field (enable_alerting in the example) will apply to all new Tenants added under the Plan.
You can check that the Tenant Config settings are enabled for new Tenants on the Tenants details page, under the Settings tab.
From the DuploCloud portal, navigate to Administrator -> Tenants.
From the NAME column, select a Tenant that was added after the Tenant Config setting was enabled.
Click on the Settings tab.
Check that the configured setting is listed in the NAME column. (Enable Alerting in the example.)
The Tenant is the most fundamental construct in DuploCloud. It is essentially a project or workspace, and is a child of the Infrastructure. While an Infrastructure provides VPC-level isolation, a Tenant is the next level of isolation, implemented in AWS by segregating Tenants using security groups, IAM roles, instance profiles, K8s namespaces, KMS keys, and so on. Similar concepts are leveraged from other cloud providers, such as resource groups, managed identities, and ASGs in Azure.
A Tenant is fundamentally four things, at the logical level:
Container of resources: All resources (except ones corresponding to infrastructure) are created within the Tenant. If we delete the tenant then all resources within that are terminated.
Security Boundary: All resources within the Tenant can talk to each other. For example, a Docker container deployed on an EC2 instance within the Tenant will have access to S3 buckets and RDS instances in the same Tenant. RDS instances in another Tenant cannot be reached by default. Tenants can expose endpoints to each other either via ELBs or via explicit inter-Tenant SG and IAM policies.
User Access Control: Self-service is the bedrock of the DuploCloud platform. To that end, users can be granted Tenant-level access. For example, John and Jim are developers granted access to the Dev Tenant; Joe is an administrator with access to all Tenants; and Anna is a data scientist with access only to the Data Science Tenant.
Billing Unit: Since Tenant is a container of resources, all resources in the tenant are tagged with the Tenant's name in the cloud provider, making it easy to segregate usage by tenant.
A common use case is an organization with four Tenants: Dev and QA under the non-prod Infrastructure, and Pre-prod and Prod under the prod Infrastructure. In larger organizations, one could also create Tenants by group, such as a Tenant for data science or a Tenant for the web application. We have seen companies create dedicated Tenants for each of their end-user clients when the application itself is single-tenant. A Tenant is a logical concept that can be used either way.
The DuploCloud platform automatically orchestrates three main diagnostic functions:
Central Logging: A shared Elasticsearch cluster is deployed, and Filebeat is installed on all worker nodes to fetch logs from the various applications across Tenants. The logs are injected with metadata corresponding to Tenant name, Service name, container ID, host name, and so on. Further, each Tenant has a central logging dashboard that includes the Kibana view of logs from applications within that Tenant. See the screenshot below:
Metrics: Metrics are fetched from hosts, containers, and cloud services like ELB, RDS, Redis, etc., and displayed in Grafana. Behind the scenes, for cloud services, these are collected by calling cloud provider APIs like CloudWatch and Azure Mon, while for nodes and containers, this is done using Prometheus, Node Exporter, and cAdvisor. Again, the dashboards are Tenant-centric and segregated per application and cloud service as shown in the picture below:
Alarms and Faults: The platform automatically creates faults for many failures, such as health-check failures, container crashes, node crashes, and deployment failures. Further, users can easily set alarms on cloud services, such as CPU and memory on EC2 instances or free disk space on an RDS database. All failures are displayed as faults per Tenant. Sentry and PagerDuty projects can be linked to each Tenant, and DuploCloud will forward faults there so the user can configure notifications.
Audit Trail: An audit trail of all changes made through the system is logged in Elasticsearch, where changes can be audited using high-level constructs: by Tenant, by Service, by change type, by user, and dozens of other filters.
Corresponding to each Infrastructure is the concept of a Plan. A Plan is a placeholder or a template for configurations. These configurations are consistently applied to all Tenants within the Plan (or Infrastructure). Examples of such configurations are:
Certificates available to be attached to Load Balancers in Tenants of this Plan
Machine images
WAF web ACLs
Common IAM policies and SG rules to be applied to all resources in Tenants within the Plan
Unique or shared DNS domain name where applications provisioned in Tenants within the Plan can have a unique DNS name in this domain
Resource Quota: The plan also has a resource quota that is enforced in each of the Tenants within that Plan
DB Parameter Groups
Several policies and feature flags applied at the Infrastructure level to Tenants within the Plan
The figure below shows a screenshot of the plan constructs:
When creating DuploCloud Plans and DNS names, consider the following to prevent DNS issues:
Plans in different portals will delete each other's DNS records, so each portal must use a distinct subdomain for its Plans.
DuploCloud Plans in the same portal can share a DNS domain without deleting each other's records. Duplo-created DNS names will always include the Tenant name, which prevents collisions.
The recommended practice for most portals is to set all Plans to the same DNS name, including the default Plan.
Ideally, custom subdomains will be set in the Plans before turning on shell, monitoring, or logging. If the DNS is changed later, those services may need to be updated.
About DuploCloud's No-Code / Low-Code approach
DuploCloud provides a no-code UI-based approach to building and operating cloud infrastructures. Visit our Video Library for demos.
For an audience that desires to use Infrastructure-as-Code (IaC), we have a Terraform Provider that enables low-code IaC. This provider exposes all DuploCloud abstractions and constructs to be programmed using Terraform. Using the DuploCloud Terraform provider, you build the same infrastructure, using 10 times less code, with all compliance controls built-in. You are not required to have subject matter expertise in DevOps or SecOps. This whitepaper describes the process in detail.
A common misconception is that DuploCloud generates Terraform behind the scenes to provision the cloud infrastructure. The DuploCloud UI and Terraform (with the DuploCloud Provider) are layered on top of DuploCloud. Behind the scenes, DuploCloud uses the cloud provider APIs as shown in the picture below
Behind the scenes, DuploCloud does more than process user requests, generate configurations synchronously, and call the cloud provider. Many operations require asynchronous processing, which means a state machine with retries and the ability to continuously detect and fix configuration drift. Faults and compliance controls must also be monitored continuously.
Terraform, like any scripting approach, is meant to be run with human supervision. There is no ongoing state machine or retry logic: scripts start and end, running as a single thread.
For more information about No-Code/Low-Code and how it relates to Click Ops, see this FAQ.
A Service can be a Kubernetes Deployment, StatefulSet, or DaemonSet. It can also be a Lambda function, or an ECS task or service. It essentially captures a microservice. Each Service (except Lambda) can be given a load balancer to expose itself and be assigned a DNS name.
A DuploCloud Service should not be confused with a Kubernetes or an ECS service. By Service, we mean an application component that can be either Docker-based or serverless.
Below is an image of some properties of a service:
Cloud Services: DuploCloud supports a simple, application-specific interface to configure dozens of cloud services, such as S3, SNS, SQS, Kafka, Elasticsearch, Data Pipeline, EMR, SageMaker, Azure Redis, Azure SQL, and Google Redis. Almost all commonly used services are supported, and new ones are constantly added. A typical request to support a new service takes the DuploCloud team a matter of days, depending on the complexity of the service.
While users specify application level constructs for provisioning cloud resources, all the underlying DevOps and compliance controls are implicitly added by DuploCloud.
IMPORTANT: All services and cloud features are created within a Tenant.
Multiple container orchestration technologies for ease of consumption
Most application workloads deployed on DuploCloud are in Docker containers. The rest consist of serverless functions, Big data workloads like Amazon EMR jobs, Airflow and SageMaker. DuploCloud abstracts the complexity of container orchestration technologies, allowing you to focus on the deployment, updating, and debugging of your containerized application.
Among the technologies supported are:
Kubernetes: On AWS, DuploCloud supports orchestration using Elastic Kubernetes Service (EKS). On GCP, we support GKE in both Autopilot and node-pool-based modes. On Azure, we support AKS and Azure Web Apps.
Built-in (DuploCloud): The DuploCloud platform's Built-in container management has the same interface as the docker run command, but it can scale to manage hundreds of containers across many hosts, providing capabilities such as associated load balancers, DNS, and more.
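As a point of reference, a Built-in Service captures roughly the same inputs you would pass to docker run locally. The image name, flags, and values below are illustrative only, not a DuploCloud command:

```shell
# Illustrative local equivalent of a Built-in Service definition.
# DuploCloud takes the same inputs (image, environment variables, ports)
# but schedules the container across Tenant hosts, keeps replicas running,
# and can attach a load balancer and DNS name.
docker run -d \
  --name web \
  -e APP_ENV=prod \
  -p 8080:80 \
  nginx:1.25   # illustrative image
```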
AWS ECS Fargate: Fargate is a technology you can use with Elastic Container Service (ECS) to run containers without having to manage servers or clusters of EC2 instances.
Use the feature matrix below to compare the features of the orchestration technologies that DuploCloud supports. Whatever option you choose, DuploCloud can help you implement it through the Portal or the Terraform API.
One dot indicates a low rating, two dots indicate a medium rating, and three dots indicate a high rating. For example, Kubernetes has a low ease-of-use rating but a high rating for stateful applications.
Use the definitions below to understand how each feature in the matrix above is rated compared to Kubernetes, Built-in, or ECS Fargate.
Ease of Use:
Kubernetes is extensible and customizable, but not without a cost in ease-of-use. The DuploCloud platform reduces the complexities of Kubernetes, making it comparable with other container orchestration technologies in ease-of-adoption.
DuploCloud's Built-In orchestration mirrors docker run. You can Secure Shell (SSH) into a virtual machine (VM) and run docker commands to debug and diagnose. If you have an application with a few stateless microservices, or configurations that use environment variables or AWS services such as SSM, S3, or a secret store, consider using DuploCloud's Built-in container orchestration.
ECS Fargate contains proprietary constructs (such as task definitions, tasks, and services) that can be hard to learn. Because Fargate is serverless, you have no control over the host Docker daemon, so commands such as docker ps and docker restart are unavailable. This makes debugging a container crash difficult and time-consuming. DuploCloud simplifies using Fargate with an out-of-the-box setup for logging and shell access, and by abstracting proprietary constructs and behavior.
Features and Ecosystem Tools: Kubernetes is rich in additional built-in features and ecosystem tools, most notably Secrets Management and ConfigMaps. Built-In and ECS rely on native AWS services such as AWS Secrets Manager, SSM, S3, and so on. While Kubernetes features have an equivalent in AWS, third parties tend to publish their software as Kubernetes packages (Helm Charts). Some examples are Influx DB, Time Series DB, Prefect, etc.
Suitability for Stateful apps: Stateful applications on compute instances should generally be avoided in AWS; instead, managed cloud storage solutions should be leveraged for the best availability and SLA compliance. Where that is undesirable due to cost, Kubernetes offers the best solution: it uses StatefulSets and Volumes to implicitly manage Elastic Block Store (EBS) volumes. With Built-in and ECS, you must use a shared EFS drive, which may not have feature parity with Kubernetes volume management.
Stability and Maintenance: Even though Kubernetes is highly stable, it is an open-source product. The native customizability and extensibility of Kubernetes can lead to points of failure when a mandatory cluster upgrade is needed, for example. This complexity often leads to support costs from third-party vendors. Maintenance can be especially costly with EKS, as versions are frequently deprecated, requiring you to upgrade the control plane and data nodes. While DuploCloud automates this upgrade process, it still requires careful planning and execution.
AWS Cost: While the EKS control plane cost is relatively low, operating an EKS environment without business support (at an additional premium) is not recommended. If you are a small business, you may be able to add the support tier when you need it and remove it when not needed to reduce costs.
Multi-Cloud: For many enterprises and independent software vendors this is, or will soon be, a requirement. While Kubernetes provides this benefit, DuploCloud's implementation is much easier to maintain and implement.
Key terms and concepts in DuploCloud Container Orchestration
These are virtual machines (EC2 Instances, GCP Node pools or Azure Agent Pools). By default, apps within a Tenant are pinned to VMs in the same Tenant. One can also deploy Hosts in one Tenant that can be leveraged by apps in other Tenants. This is called the shared host model. The shared host model does not apply to ECS Fargate.
Service is a DuploCloud term and is not the same as a Kubernetes Service. In DuploCloud, a Service is a microservice defined by a name, Docker image, number of replicas, and many other optional parameters. Behind the scenes, a DuploCloud Service maps 1:1 to a Kubernetes Deployment or StatefulSet, depending on whether the microservice has stateful volumes. There are many optional Service configurations representing the various ways Docker containers can be run. Among these are:
Environment variables
Host Network Mode
Volume mounts
Entrypoint or command overrides
Resource caps
Kubernetes health checks
If a Service needs to be pinned to run only on a specific set of hosts, set an Allocation Tag on both the hosts and the Service. Allocation Tags are case-insensitive substrings: the tag on a Service must be a substring of the tag specified on the host. For example, if a host is tagged HighCpu;HighMem, then a Service tagged highcpu can be allocated on that host. If a Service has no tag set, it can be placed on any host.
Note that a tagged host also remains available to any Service that has no tag. If you want exclusive assignment of a host to a set of Services, ensure that every Service in the Tenant is tagged.
For Kubernetes deployments, the concept of Allocation Tags is realized by mapping them to labels on nodes and a node selector on the Deployment or StatefulSet.
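At the Kubernetes layer, the mechanism looks roughly like the following. The label key, tag value, and image are illustrative assumptions; DuploCloud manages the real label names for you:

```yaml
# Hypothetical illustration of allocation tags in Kubernetes terms.
# DuploCloud labels tagged nodes and sets a matching nodeSelector on the
# workload, so pods land only on hosts whose tag matches the Service's tag.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      nodeSelector:
        allocationtag: highcpu   # label key and value are assumed
      containers:
        - name: web
          image: nginx:1.25      # illustrative image
```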
By default, Docker containers have their own network addresses. At times, you may want containers to reuse the network interface of the VM they run on. This reuse is called Host Network Mode.
Every DuploCloud Service that communicates with other Services needs to be exposed by a load balancer. DuploCloud supports the following load balancers (LBs).
(Feature matrix comparing Kubernetes, Built-In, and ECS Fargate; the rated features are listed below, with rating definitions following the matrix.)
The following concepts do not apply to ECS. ECS uses a proprietary policy model, which is explained separately.
Familiarize yourself with these DuploCloud concepts and terms before deploying containerized applications in DuploCloud. See the Infrastructure and Tenant sections above for a description of DuploCloud Infrastructures and Tenants.
Application Elastic Load Balancer (ELB): When exposed by an ELB, the DuploCloud Service is reachable from anywhere unless it is marked as Internal, in which case it is reachable only from within the VPC (or DuploCloud Infrastructure). Application ELBs let you use a certificate to terminate SSL on the LB, so the application does not have to provide its own SSL certificates. In Kubernetes, the platform creates a NodePort Service pointing to the Deployment and adds the host IPs of the worker nodes to the ELB. Traffic flows from the client to the external port defined on the ELB (for example, 443), to the ELB's NodePort (for example, 30004 on the worker node), and to the Kubernetes proxy running on each worker node, which forwards the NodePort traffic to the container.
Classic ELB (only applicable to Built-In container orchestration): Classic ELBs can be used when the application exposes non-HTTP ports; they operate on any TCP port. When exposed by a Classic ELB, the Service is reachable from anywhere unless it is marked as Internal, in which case it is reachable only from within the VPC (or DuploCloud Infrastructure). Classic ELBs also let you use a certificate to terminate SSL on the LB, so the application does not have to provide its own SSL certificates.
Cluster IP (Kubernetes only): ClusterIP load balancers can be used when you need to expose the application only within the Kubernetes cluster.
Ease of use
Features and ecosystem tools
Suitability for stateful apps
Stability and maintenance
AWS cost
Multi-cloud (w/o DuploCloud)
See the cloud-provider-specific DuploCloud quick starts, which show how to launch a Kubernetes cluster, deploy a simple web app, and expose it via a load balancer.
Kubernetes features in the DuploCloud Portal
DuploCloud leverages Kubernetes as a foundational building block behind many of its managed Services.
As DuploCloud supports Kubernetes Cluster enablement on all public cloud platforms, there are many Kubernetes objects and components that you can work with. This includes the flexibility to choose the instance type based on workload characteristics, such as compute or memory-intensive tasks, and AI/ML workloads which may benefit from GPU instances. It's recommended to have a minimum disk capacity of 40GB per host to accommodate image sizes and application data.
Use the topics in this section to implement many Kubernetes features with little or no hard coding, using DuploCloud's no-code/low-code approach. This encompasses configuring autoscaling for your EKS cluster based on CPU/memory usage through Horizontal Pod Autoscaler (HPA) or Auto Scaling Groups (ASG) to efficiently scale your pods or underlying infrastructure as needed.
For information about public-cloud provider-specific Kubernetes container features, see the Kubernetes Containers documentation in AWS EKS, for example. Additionally, DuploCloud can integrate with CloudWatch alarms via the DuploCloud UI for setting up custom alerts for CPU/memory usage, ensuring proactive monitoring and management of resources. This integration supports forwarding alerts to various notification systems like Sentry, PagerDuty, NewRelic, or OpsGenie for immediate action.
Moreover, when managing your Kubernetes clusters, it's possible to add allocation tags to existing nodes with running services. This action modifies a label on the Kubernetes node, influencing future pod scheduling without affecting currently running services. To apply the new allocation tags to existing services, a restart of the services is required, allowing for more granular control over resource allocation and utilization within your cluster.
Accessing kubectl on your local computer
You can connect kubectl on a local computer to a Kubernetes cluster with cluster-admin privileges by downloading and running kubeconfig.
You can obtain Just-In-Time (JIT) access to Kubernetes by using duplo-jit. See the JIT Access documentation for detailed information about:
• Obtaining JIT access, using the UI and CLI.
• Installing duplo-jit, using various tools.
• Getting credentials for AWS access interactively, or with an API token.
• Accessing the AWS Console.
kubeconfig
In the DuploCloud Portal, navigate to Administrators -> Infrastructure.
In the Name column, select the Infrastructure in which you want to set up kubectl.
Click the EKS (for AWS) tab, GKE (for GCP) tab, or the AKS (for Azure) tab.
Click Download Kube Config to download the kubeconfig file.
If you don't have Administrator access, you can use duplo-jit to access Kubernetes. When you click Download Kube Config, the Access to Kubernetes from your Workstation window displays, which offers the alternative of installing duplo-jit to access your Kubernetes cluster without obtaining permanent access keys.
Use these tools to install kubectl locally.
Run these commands to enable kubectl to use the downloaded kubeconfig.
For Linux or macOS:
For Windows:
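As a sketch, the commands typically look like the following; the file name and path are assumptions, so substitute the location where you saved the downloaded file:

```shell
# Linux or macOS: point kubectl at the downloaded kubeconfig file.
# The path below is an assumption; use the location where you saved it.
export KUBECONFIG="$HOME/Downloads/kubeconfig-duplo.yaml"

# Windows (PowerShell) equivalent, shown here as a comment:
#   $env:KUBECONFIG = "$HOME\Downloads\kubeconfig-duplo.yaml"

echo "$KUBECONFIG"
```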
Set up KubeCtl within the DuploCloud Portal by downloading the token and configuring Mirantis Lens for DuploCloud authentication.
DuploCloud provides a way to connect directly to the cluster namespace using the kubectl token. This facilitates direct interaction with your Kubernetes cluster through a command-line interface.
If you attempt to start a KubeCtl Shell instance and receive a 503 in your web browser, ensure that the duplo-shell service in the Default Tenant is running and that the Hosts which support it are running, as well.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
From the KubeCtl list box, select KubeCtl Token. The Token window displays. Copy the contents to your clipboard.
To enhance your Kubernetes management experience, you can integrate Mirantis Lens with DuploCloud by following these steps:
Install DuploCloud Client: Ensure the duploctl command-line tool is installed. If not, install it with the pip install duplocloud-client command.
Install Lens Client: Download and install the Lens Kubernetes IDE client from its official website.
Generate Kubeconfig File: Using duploctl, generate a kubeconfig file for the Lens connection, as follows:
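A hypothetical invocation is sketched below; the portal URL, token, and Infrastructure name are placeholders, and the exact subcommand may differ between duploctl versions, so verify it against duploctl --help:

```shell
# Placeholders: replace with your portal URL, API token, and Infrastructure name.
export DUPLO_HOST="https://myportal.duplocloud.net"
export DUPLO_TOKEN="<api-token>"

# Assumed subcommand -- confirm with `duploctl --help` for your client version.
duploctl jit update_kubeconfig --plan myinfra 2>/dev/null \
  || echo "duploctl is not installed or the subcommand differs"
```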
Add Kubeconfig to Lens: In Lens, navigate to "Catalog" and use the + button to add the kubeconfig file, configuring Lens to connect to your Kubernetes cluster.
Connect to the Cluster: Lens will prompt for login through a browser window. For private EKS clusters, ensure VPN connectivity for authentication.
Note: Disconnect from the cluster after your session to avoid repeated browser tab openings during re-authentication attempts.
Integrating Mirantis Lens with DuploCloud enhances your Kubernetes cluster management by providing a powerful graphical interface alongside the direct command-line access provided by the kubectl token.
Accessing kubectl without administrator privileges
If you don't have administrator privileges, use this procedure to access kubeconfig using the kubectl token. kubeconfig is a YAML file that stores cluster authentication information for kubectl. It contains a list of contexts to which kubectl refers when running commands. By default, kubeconfig is saved under the $HOME/ directory in the Linux operating system.
Add the following code to your AWS profile:
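A profile entry generally resembles the following sketch; the profile name, region, and portal URL are placeholders, and the duplo-jit flags should be verified against duplo-jit --help for your version:

```ini
# ~/.aws/config -- hypothetical profile; substitute your own values
[profile duplo-myportal]
region = us-west-2
credential_process = duplo-jit aws --host https://myportal.duplocloud.net --interactive
```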
The kubectl token that you download is for the selected Tenant only. It is intended for use with a non-human DuploCloud service account.
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
Select the Service from the Name column.
From the KubeCtl item list, select KubeCtl Token. The KubeCtl Token window displays.
Click Copy to copy the kubectl commands in the Token window to your clipboard.
From the KubeCtl item list, select KubeCtl Shell to launch the shell instance. Paste the copied commands into the shell and run them.
In the DuploCloud Portal, navigate to Kubernetes -> Containers. Click the menu icon in the row of the Infrastructure for which you want to view the shell, and select Container Shell. The bash shell displays.
Before beginning, refer to this article for more information.
Setting, mounting, and managing Kubernetes ConfigMaps and Kubernetes Secrets in DuploCloud environments.
In DuploCloud environments, you can pass configurations and Kubernetes Secrets using Kubernetes ConfigMaps or through various strategies tailored to enhance security and management efficiency:
Setting Kubernetes Secrets directly in DuploCloud: This involves creating secrets under Kubernetes > Secrets in the DuploCloud console. These secrets are then available in the Kubernetes environment and can be utilized as either files or environment variables. This method is straightforward, incurs no additional cost, and allows for the visibility of both secret keys and values within the DuploCloud console. For detailed instructions, see Setting Kubernetes Secrets in DuploCloud.
Setting Environment Variables (EVs) from a K8s ConfigMap or Secret: This traditional method continues to be supported, offering a familiar approach for those accustomed to Kubernetes' native secrets management.
Mounting ConfigMaps and Secrets as files: This method provides a seamless way to integrate configuration data directly into your application's file system.
Additionally, DuploCloud supports advanced secrets management strategies, including:
Using AWS as the Source of Truth: By creating secrets in AWS Secrets Manager or Parameter Store and integrating them into Kubernetes secrets with SecretProviderClass, you benefit from advanced features like automatic rotation. This method displays only the secret keys in the DuploCloud console and involves a more complex setup but is ideal for centralizing secrets management across DuploCloud and non-DuploCloud resources. For more on this setup, visit Adding SecretProviderClass Custom Resource in DuploCloud.
Application Directly Reads Secrets from AWS: This approach allows the application code to directly fetch secrets from AWS Secrets Manager or Parameter Store, managed via IAM roles facilitated by DuploCloud. It provides an added layer of protection and is particularly beneficial for development environments, though it requires modifications to the application code. Implementation guidance can be found in AWS SDK for PHP - Managing Secrets.
By leveraging these strategies, DuploCloud offers flexible and secure options for managing Kubernetes ConfigMaps and Secrets, catering to a variety of operational needs and security requirements.
Using K8s Secrets with Azure Storage Accounts
Refer to the steps to configure the new Storage Account and FileShare in Azure.
Copy Storage Account Key and FileShare Name from DuploCloud Portal for creating Kubernetes Secrets in the next step.
Navigate to Kubernetes -> Secrets. Create a Kubernetes Secret Object using an Azure Storage Account.
For more information, see Kubernetes Configs and Secrets.
While creating a deployment, under Other Pod Config and Other Container Config, provide the configuration below to create and mount the storage volume for your service. In the configuration below, the shareName attribute should be the File Share name, which you can get from the Storage Account screen.
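A sketch of the two fields is shown below, using the standard Kubernetes azureFile volume that reads the storage-account key from the secret created above; the volume, secret, and share names are placeholders, and the exact field casing expected by DuploCloud may differ (check each field's Info Tip):

```yaml
# Other Pod Config -- declares the volume (names are placeholders)
volumes:
  - name: azure-fileshare
    azureFile:
      secretName: my-azure-storage-secret  # the Kubernetes Secret created above
      shareName: my-file-share             # File Share name from the Storage Account screen
      readOnly: false

# Other Container Config -- mounts the volume into the container
volumeMounts:
  - name: azure-fileshare
    mountPath: /mnt/azure
```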
Creating K8s SecretProviderClass CRs in the DuploCloud Portal
The DuploCloud Portal provides the ability to create the Custom Resource (CR) SecretProviderClass.
This capability allows Kubernetes (K8s) to mount secrets stored in external secrets stores into the Pods as volumes. After the volumes are attached, the data is mounted into the container’s file system.
An Administrator must set the Infrastructure setting Enable Secrets CSI Driver to True. This setting is available by navigating to Administrator -> Infrastructure, selecting your Infrastructure, and clicking Settings.
In the DuploCloud Portal, navigate to Kubernetes -> Secret Provider.
Click Add. The Add Kubernetes Secret Provider Class page displays.
Map the AWS Secrets and SSM Parameters configured in the DuploCloud Portal (Cloud Services -> App Integration) to the Parameters section of the configuration.
Optionally, use the Secret Objects field to define the desired state of the synced Kubernetes secret objects.
The following is an example SecretProviderClass configuration where AWS secrets and Kubernetes Secret Objects are configured:
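A sketch of such a configuration, following the Secrets Store CSI driver's API; all object and secret names are placeholders:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secret-provider                # placeholder name
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "myapp/db-credentials" # AWS Secrets Manager secret (placeholder)
        objectType: "secretsmanager"
      - objectName: "/myapp/db-host"       # SSM parameter (placeholder)
        objectType: "ssmparameter"
  secretObjects:                           # optional: sync into a K8s Secret
    - secretName: db-credentials
      type: Opaque
      data:
        - objectName: "myapp/db-credentials"
          key: credentials
```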
To ensure your application is using the Secrets Store CSI driver, configure your deployment to reference the SecretProviderClass resource created in the previous step.
The following is an example of configuring a Pod to mount a volume based on the SecretProviderClass created in prior steps to retrieve secrets from Secrets Manager.
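A minimal sketch of the relevant pod-spec fragments, per the Secrets Store CSI driver conventions; the volume name, class name, image, and mount path are placeholders:

```yaml
# Pod spec fragments (names are placeholders)
volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: app-secret-provider  # your SecretProviderClass name
containers:
  - name: app
    image: nginx          # placeholder image
    volumeMounts:
      - name: secrets-store
        mountPath: /mnt/secrets
        readOnly: true
```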
It's important to note that SPC timeouts can occur due to issues related to Secret Auto Rotation, which is enabled by default. This feature checks every two minutes if the secrets need to be updated from the values in AWS Secrets Manager. During a service deployment, if a secret is deleted due to a redeployment while a rotation check is attempted, it can lead to timeouts. This deletion happens because the secret is generated from the volume mount in the service Pod, and when the Pod is destroyed, the secret is also destroyed.
In the DuploCloud Portal, create a Kubernetes Service by navigating to Kubernetes -> Services and clicking Add.
Complete the required fields and click Next to display the Advanced Options page.
On the Advanced Options page, in the Cloud Credentials list box, select From Kubernetes.
Add code to the Other Pod Config field, as in the example below.
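For example, a sketch of the Other Pod Config entry; the volume and SecretProviderClass names are placeholders:

```yaml
volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: app-secret-provider  # name of your SecretProviderClass
```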
Add code for VolumeMounts in the Other Container Config field, as in the example below.
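For example, a sketch; the volume name must match the one declared in Other Pod Config, and the mount path is a placeholder:

```yaml
volumeMounts:
  - name: secrets-store   # must match the volume name in Other Pod Config
    mountPath: /mnt/secrets
    readOnly: true
```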
Click Create to create the Kubernetes service.
Before you can sync Kubernetes Secret Objects, you must Create a Kubernetes Service and mount volumes based on the configured secrets.
Optionally, you can define secretObjects in the SecretProviderClass to define the desired state of your synced Kubernetes secret objects.
The following is an example of how to create a SecretProviderClass CR that syncs a secret from AWS Secrets Manager to a Kubernetes secret:
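A sketch, using the Secrets Store CSI driver's secretObjects field; the AWS secret name, Kubernetes secret name, and key are placeholders:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: sync-db-secret                   # placeholder
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "myapp/db-password"  # AWS Secrets Manager secret (placeholder)
        objectType: "secretsmanager"
  secretObjects:
    - secretName: db-password            # Kubernetes Secret to create/sync
      type: Opaque
      data:
        - objectName: "myapp/db-password"
          key: password
```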
In the Other Container Config field, specify mount details with the secretobject-name. Refer to the following example:
Set environment variables in your deployment to refer to your Kubernetes secrets.
Refer to the following example using the Environment Variables field in the Basic Options page when creating a service.
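The Environment Variables field accepts standard Kubernetes env entries; a sketch with placeholder secret and key names:

```yaml
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-password   # the synced Kubernetes Secret (placeholder)
      key: password       # key within that Secret (placeholder)
```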
While powerful, this integration of secrets into Kubernetes deployments requires careful management to avoid issues such as SPC timeouts. Understanding the underlying mechanisms, such as Secret Auto Rotation and the lifecycle of secrets in pod deployments, is crucial for smooth operations.
Mounting application configuration maps and secrets as files
In Kubernetes, you can mount application configurations or secrets as files.
Before you create and mount the Kubernetes ConfigMap, you must create a DuploCloud Service.
In the DuploCloud Portal, navigate to Kubernetes -> Config Maps.
Click Add. The Add Kubernetes Config Map pane displays.
Name the ConfigMap you want to create, such as my-config-map.
Add a Data key/value pair for each file in your config map, separated by a colon (:). The key is the file name, and the value is the file's contents.
Click Create.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
Select the service you want to modify from the Name column.
Click the Actions menu and select Edit.
On the Edit Service: service_name Basic Options page, click Next to navigate to the Advanced Options page.
On the Advanced Options page, in the Volumes field, enter the configuration YAML to mount the ConfigMap as a volume.
For example, to mount a config map named my-config-map to a directory named /app/my-config, enter the following YAML code block in the Volumes field:
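A hedged sketch of such a block; DuploCloud's Volumes field pairs each volume with its mount path, though the exact schema may differ from this assumption (check the field's Info Tip):

```yaml
- name: my-config-map
  mountPath: /app/my-config
  configMap:
    name: my-config-map
```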
If you want to select individual ConfigMap items, specifying the subpath for mounting, you can use a different configuration. For example, if you want the key named my-file-name to be mounted to /app/my-config/config-file, use the following YAML:
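A hedged sketch using a subPath; the exact field names DuploCloud expects may differ from this assumption (check the Volumes field's Info Tip):

```yaml
- name: my-config-map
  mountPath: /app/my-config/config-file
  subPath: my-file-name      # single key to mount from the ConfigMap
  configMap:
    name: my-config-map
```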
Before you create and mount a Kubernetes Secret, you must create a DuploCloud Service.
In the DuploCloud Portal, navigate to Kubernetes -> Secrets.
Click Add. The Add Kubernetes Secret pane displays.
Enter the Secret Name that you want to create, such as my-secret-files.
Add Secret Details such as a data key/value pair for each file in your secret, separated by a colon (:). The key is the file name, and the value is the file's contents.
Click Add to create the secret.
Follow the steps in Creating a Kubernetes Secret, defining a Key value using the PRIVATE_KEY_FILENAME in the Secret Details field, as shown below.
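Multi-line values can be expressed with a YAML block scalar; the key material below is a placeholder:

```yaml
PRIVATE_KEY_FILENAME: |
  -----BEGIN RSA PRIVATE KEY-----
  <key material goes here -- placeholder>
  -----END RSA PRIVATE KEY-----
```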
Click Add to create the multi-line secret.
In the DuploCloud Portal, edit the DuploCloud Service.
On the Edit Service: service_name Basic Options page, click Next to navigate to the Advanced Options page.
On the Advanced Options page, in the Volumes field, enter the configuration YAML to mount the Secret as a volume.
For example, to mount a Secret named my-secret-files to a directory named /app/my-config, enter the following YAML code block in the Volumes field:
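A hedged sketch of such a block; the exact schema of DuploCloud's Volumes field may differ from this assumption (check the field's Info Tip):

```yaml
- name: my-secret-files
  mountPath: /app/my-config
  secret:
    secretName: my-secret-files
```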
If you want to select individual Secret items, specifying the subpath for mounting, you can use a different configuration. For example, if you want the key named secret-file to be mounted to /app/my-config/config-file, use the following YAML:
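A hedged sketch using a subPath; the exact field names DuploCloud expects may differ from this assumption (check the Volumes field's Info Tip):

```yaml
- name: my-secret-files
  mountPath: /app/my-config/config-file
  subPath: secret-file       # single key to mount from the Secret
  secret:
    secretName: my-secret-files
```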
Set EVs from the Kubernetes ConfigMap
In Kubernetes, you can populate environment variables from application configurations or secrets.
In the DuploCloud Portal, navigate to Kubernetes -> Config Maps.
Click Add. The Add Config Map pane displays.
Name the ConfigMap you want to create, such as my-config-map.
Add a Data key/value pair for each file in your ConfigMap, separated by a colon (:). The key is the file name, and the value is the file's contents.
Click Create.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
Select the Service you want to modify from the Name column.
Click the Actions menu and select Edit.
You can import the entire ConfigMap as Environment Variables or choose specific keys to import as environment variables.
The most straightforward approach is to import the entire ConfigMap as environment variables. Using this approach, your service will recognize each key in the ConfigMap defined as an environment variable.
On the Edit Service: service_name Basic Options page, click Next to navigate to the Advanced Options page.
On the Advanced Options page, in the Other Container Config field, enter the configuration YAML to import environment variables from a ConfigMap. For example, to import all environment variables from a ConfigMap named my-env-vars, use the following YAML:
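The Other Container Config field accepts standard Kubernetes container-spec entries, so the import can be sketched as:

```yaml
envFrom:
  - configMapRef:
      name: my-env-vars   # every key in this ConfigMap becomes an env var
```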
To import from additional ConfigMaps, duplicate the YAML from lines 2 and 3 in the above example for each config map that you want to import from.
Another approach is to select which keys to import from the ConfigMap as environment variables. This method gives you complete control over each environment variable as well as its name, but it requires you to perform more manual configuration.
On the Edit Service: service_name Basic Options page, in the Environment Variables field, enter the configuration for choosing environment variables to import from a ConfigMap. For example, to set a single environment variable (ENV_VAR_ONE) to the value of the MY_ENV_VAR key in the my-env-vars config map, use the following YAML:
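Using the standard Kubernetes env syntax, this can be sketched as:

```yaml
- name: ENV_VAR_ONE
  valueFrom:
    configMapKeyRef:
      name: my-env-vars
      key: MY_ENV_VAR
```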
To add additional environment variables, duplicate the YAML from lines 2 through 5 in the above example for each environment variable that you want to add.
You can import Kubernetes Secrets as Environment Variables.
In the DuploCloud Portal, navigate to Kubernetes -> Secrets.
Click Add. The Add Kubernetes Secret page opens.
Create a Secret Name, such as my-env-vars.
From the Secret Type list box, select Opaque.
In the Secret Details field, add Data key/value pairs for each Environment Variable in your Secret, separated by a colon (:). The key is the Environment Variable name, and the value is the Environment Variable's value.
Click Add to create the secret.
The most straightforward approach is to import the entire Secret as environment variables. Using this approach, your service will recognize each key in the Secret defined as an environment variable.
On the Edit Service: service_name Basic Options page, click Next to navigate to the Advanced Options page.
On the Advanced Options page, in the Other Container Config field, enter the configuration YAML to import environment variables from a Secret. For example, to import all environment variables from a secret named my-env-vars, use the following YAML:
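Using the standard Kubernetes envFrom syntax, this can be sketched as:

```yaml
envFrom:
  - secretRef:
      name: my-env-vars   # every key in this Secret becomes an env var
```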
To import from additional secrets, duplicate the YAML from lines 2 and 3 in the above example for each secret that you want to import.
Another approach is to select which keys to import from the Secret as environment variables. This method gives you complete control over each environment variable as well as its name, but it requires you to perform more manual configuration.
On the Edit Service: service_name Basic Options page, in the Environment Variables field, enter the configuration for choosing specific environment variables to import from a Secret. For example, to set a single environment variable (ENV_VAR_ONE) to the value of the SECRET_ENV_VAR key in the my-env-vars secret, use the following YAML:
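Using the standard Kubernetes env syntax, this can be sketched as:

```yaml
- name: ENV_VAR_ONE
  valueFrom:
    secretKeyRef:
      name: my-env-vars
      key: SECRET_ENV_VAR
```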
To import from additional secrets, duplicate the YAML from lines 2 through 5 in the above example for each secret that you want to import from.
Create Kubernetes Jobs in AWS and GCP from the DuploCloud Portal
In the DuploCloud Portal, you can create K8s Jobs to create one or more Pods. The Job continues to retry execution of the Pods until a specified number of them successfully terminate. The K8s Job tracks the successful terminations. When the specified number of successful terminations completes, the Job is marked as completed in Kubernetes. Deleting a Job cleans up the Pods that the Job created. Suspending a Job deletes the Job's active Pods until the Job is resumed.
You typically create one Job object to reliably run one Pod to completion. The Job object starts a new Pod if the first Pod fails or is deleted (for example, in case of a node hardware failure or a node reboot).
In the DuploCloud Portal, select the Tenant you are working with from the Tenant list box at the top-left of the DuploCloud Portal.
Navigate to Kubernetes -> Job.
Click Add. The Add Kubernetes Job page displays.
In the Basic Options step, specify the Kubernetes Job Name.
In the Container - 1 area, specify the Container Name and associated Docker Image.
In the Command field, specify the command attributes for Container - 1. Click the Info Tip icon for examples. Select and Copy commands as needed.
In the Init Container - 1 area, specify the Container Name and associated Docker Image.
Click Next to open the Advanced Configuration step.
In the Other Spec Configuration field, specify the Job spec (in YAML) for Init Container - 1. Click the Info Tip icon for examples. Select and Copy commands as needed.
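The Other Spec Configuration field accepts standard Kubernetes Job spec fields; the values below are illustrative examples only:

```yaml
# Standard Kubernetes Job spec fields (values are examples)
backoffLimit: 3                # retry a failing Pod up to 3 times
ttlSecondsAfterFinished: 300   # clean up the Job 5 minutes after it finishes
activeDeadlineSeconds: 600     # fail the Job if it runs longer than 10 minutes
```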
Click Create. The job is created and displayed on the Job page with a status of Active.
In the DuploCloud Portal, navigate to Kubernetes -> Job.
Select the job you want to view and click the Overview, Containers, and Details tabs for more information about the job status and job history.
You can view K8s Jobs linked to Containers by clicking the Container Name on the Containers page (Kubernetes -> Containers).
You can filter Container names by using the search field at the top of the page, as in this example:
In the DuploCloud Portal, navigate to Kubernetes -> Job.
Select the K8s job you want to edit.
You can edit a job in the DuploCloud Portal and modify the following fields:
Cleanup After Finished in Seconds
Other Spec Configuration
Metadata Annotations
Labels
In the DuploCloud Portal, navigate to Kubernetes -> Job.
Select the K8s job you want to delete.
Set Kubernetes Secrets in the DuploCloud Portal and manage them effectively.
To securely manage sensitive information in your deployment, set and reference Kubernetes secrets in the DuploCloud Portal.
In the DuploCloud Portal, navigate to Kubernetes -> Secrets. The Kubernetes Secrets page displays.
Click Add.
Fill in the fields (Secret Name, Secret Type, Secret Details, Secret Labels, and Secret Annotations).
Click Add. The Kubernetes Secret is set.
To enhance the security and management of Kubernetes secrets, consider the following strategies:
Utilize Centralized Secret Management Tools: Centralize the management of secrets to streamline access and control.
Implement Access Controls: Define who can access or modify secrets to minimize risk.
Regularly Rotate Secrets: Change secrets periodically to reduce the impact of potential breaches.
Audit Access Logs: Keep track of who accesses secrets and when, to detect unauthorized access or anomalies.
By integrating these practices, you can ensure a more secure and efficient handling of secrets within your Kubernetes environment.
Before you create the Kubernetes ConfigMap, you must create a DuploCloud Service.
Before you configure Environment Variables, you must create a DuploCloud Service.
In Kubernetes, a Job is a controller object that represents a task or a set of tasks that runs until successful completion. It is designed to manage short-lived, batch workloads in a Kubernetes cluster. You use a Job when you need to run a task or a set of tasks once, to completion, rather than continuously, as with other types of controllers such as Deployments.
Refer to the Kubernetes documentation for use cases and examples of when to use jobs.
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers, with shared storage and network resources, including a specification that dictates how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers that are tightly coupled.
You can also use a Job to run multiple Pods in parallel. If you want to run a Job (either a single task, or several in parallel) on a schedule, see CronJobs.
To run the Job to completion, you must specify a Kubernetes container. Click the Add Container button and select the Add Init Container option. The Init Container - 1 area displays.
You can also view details of a job by clicking the menu icon to the left of the job name and selecting View.
Click the options menu icon to the left of the job you want to edit and select Edit.
Click the job options menu icon to the left of the job name and select Delete.
Ingress controllers abstract the complexity of routed Kubernetes application traffic, providing a bridge between Kubernetes services and services that you define.
See the Containers topic for steps on how to create Tenants, Hosts, and Services.
Once your service is deployed, you are ready to add and configure Kubernetes Ingress. The steps to create an ingress differ slightly in each cloud.
Manage and troubleshoot services with HPA configured.
See the Autoscaling in Kubernetes topic in the AWS DuploCloud documentation.
When working with Kubernetes services configured with Horizontal Pod Autoscaler (HPA), it's essential to understand how to manage and troubleshoot them effectively.
To stop a service that is hung in a Running state due to HPA, you cannot directly delete pods, as new ones are created to maintain the set number of replicas. Instead, remove the HPA configuration and adopt a static replication strategy by setting the replica count to 0. This ensures the service is effectively stopped without attempting to set minReplicas to 0, which is an invalid configuration for HPA.
If issues arise while stopping a service with HPA configured, avoid setting minReplicas to 0 and ensure the HPA configuration is removed in favor of a static replication strategy. For further troubleshooting, consult the Faults menu under the DevOps section in the DuploCloud UI, where all errors are logged, facilitating efficient diagnosis and resolution.
DuploCloud is planning enhancements to the UI to improve the management of services running with HPA configurations. These improvements include adding validation to prevent setting minReplicas to 0, potentially removing the Stop option for services with HPA, and documenting the correct procedure for stopping such services. These updates will simplify the process and prevent common configuration mistakes, ensuring a smoother experience managing Kubernetes services with HPA.
The Kubernetes Horizontal Pod Autoscaler (HPA) is critical for managing resources efficiently in a Kubernetes environment. It automatically adjusts the number of pods in a deployment based on observed CPU utilization or other selected metrics. For detailed guidance, refer to the Autoscaling in Kubernetes topic in the AWS DuploCloud documentation.
In instances where a service pod requires more memory than is available on any single node, such as a pod demanding 30GB on a node with a maximum of 16GB, it's essential to isolate the resource-intensive service. By moving the memory-intensive service to a larger instance with a highmem allocation tag, you can ensure that your services continue to run efficiently. This approach allows for instances with up to 64GB of memory, accommodating high-demand applications without compromising the performance of other services.
When configuring autoscaling for an EKS cluster, it's crucial to base the autoscaler on CPU/memory requests or limits to ensure optimal performance and resource utilization. This method allows for dynamic scaling that responds to the actual needs of your applications, preventing over-provisioning and resource wastage.
For advanced monitoring and alerting, DuploCloud supports the integration of its Prometheus endpoints with external Grafana instances. This capability enables you to set up custom alerts for memory usage, allowing for proactive resource management and issue resolution. Whether using DuploCloud's Grafana instance or an external one, these integrations provide valuable insights into your Kubernetes environment's health and performance.
Adding an allocation group to an existing node with running services requires careful consideration regarding service continuity and the potential need for restarts. While the specific behavior may vary, understanding the implications of such changes is crucial for maintaining uninterrupted service availability during scaling and resource adjustments.
By following these guidelines and leveraging DuploCloud's support for HPA, teams can effectively manage Kubernetes resources, ensuring that applications remain performant and resilient under varying loads.
Schedule a Kubernetes Job in AWS and GCP by creating a CronJob in the DuploCloud Portal
A Kubernetes CronJob is a variant of a Kubernetes Job, with the exception that you can schedule a CronJob to run at periodic intervals.
See the Kubernetes documentation on CronJobs for more information.
In the DuploCloud Portal, navigate to Kubernetes -> CronJob.
Click Add. The Add Kubernetes CronJob page displays.
In the Basic Options step, specify the Kubernetes CronJob Name.
In the Schedule field, specify the Cron Schedule in Cron Format. Click the Info Tip icon for examples. When specifying a Schedule in Cron Format, ensure you separate each value with a space. For example, 0 0 * * 0 is a valid Cron Format input; 00**0 is not. See the Kubernetes documentation for detailed information about Cron Format.
In the Container - 1 area, specify the Container Name and associated Docker Image.
In the Command field, specify the command attributes for Container - 1. Click the Info Tip icon for examples. Select and Copy commands as needed.
In the Init Container - 1 area, specify the Container Name and associated Docker Image.
Click Next to open the Advanced Configuration step.
Click Create. The K8s CronJob is created and displayed on the CronJob page and will be run according to the schedule you specified.
In the DuploCloud Portal, navigate to Kubernetes -> CronJob.
Select the CronJob you want to view and click the Overview, Schedule, and Details tabs for more information about the job schedule and job history.
You can also view CronJobs linked to containers by clicking the container Name on the Containers page (Kubernetes -> Containers).
You can filter container names by using the search field at the top of the page, as in this example:
In the DuploCloud Portal, navigate to Kubernetes -> CronJob.
Select the K8s CronJob you want to edit.
You can edit a Kubernetes CronJob in the DuploCloud Portal and modify the following fields:
Cleanup After Finished in Seconds
Other Spec Configuration
Metadata Annotations
Labels
In the DuploCloud Portal, navigate to Kubernetes -> CronJob.
Select the K8s CronJob you want to delete.
Create a GKE Ingress using the DuploCloud Portal
GCP's Ingress Controller for GKE automatically manages traffic routing to Kubernetes services, integrating Kubernetes workloads with Google Cloud's load-balancing infrastructure. It simplifies external access to applications, handling SSL termination and global load distribution.
GCP offers its own Ingress Controller, specifically created for Google Kubernetes Engine (GKE), to seamlessly integrate Kubernetes services with Google Cloud's advanced load balancing features.
Container-native load balancing on Google Cloud Platform (GCP) allows load balancers to directly target Kubernetes Pods instead of using a node-based proxy. This approach improves performance by enabling more efficient routing, reducing latency by eliminating extra hops and providing better health-checking capabilities.
It leverages the network endpoint groups (NEGs) feature to ensure that traffic is directed to the appropriate container instances, enabling more granular and efficient load distribution for applications running on GKE.
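DuploCloud applies the necessary configuration for you, but as an illustrative sketch of the underlying Kubernetes Service annotation that requests NEG-backed, container-native load balancing (the Service name and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-test               # placeholder name
  annotations:
    cloud.google.com/neg: '{"ingress": true}'   # request NEG-backed, container-native load balancing
spec:
  type: ClusterIP
  selector:
    app: nginx-test
  ports:
    - port: 80
      targetPort: 80
```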
See the Containers topic for steps on how to create Tenants and Services.
Once your services are deployed, you are ready to add and configure a GKE Ingress controller in GCP.
Add a load balancer listener that uses Kubernetes (K8s) ClusterIP type service. Kubernetes Health Check and Probes are enabled by default. To specifically configure the settings for Health Check, select Additional Health Check configs when you add the Load Balancer.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
Click Configure Load Balancer. The Add Load Balancer Listener pane appears.
From the Select Type list box, select K8S Cluster IP.
Complete the other required fields in the Add Load Balancer Listener pane and click Add. The Load Balancer displays in the Load Balancers tab.
Click Advanced Kubernetes Settings and enable Set Health Check annotations for Ingress. (This adds the annotations required for the Kubernetes Service to be recognized by the GKE Ingress Controller.)
Click Add.
To enable SSL, you can create a GCP-managed certificate resource in the application namespace.
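One way to declare such a certificate is GKE's ManagedCertificate custom resource; a minimal sketch, assuming a placeholder certificate name and domain:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert      # placeholder name; referenced later from the Ingress annotations
spec:
  domains:
    - app.example.com        # placeholder domain; use your own DNS name
```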
Once Services are deployed, add an Ingress:
Select Kubernetes -> Ingress from the navigation pane.
Click Add. The Add Kubernetes Ingress page displays.
You must define rules to add a Kubernetes Ingress. Continue to the next section to add rules to Kubernetes Ingress and complete the Ingress setup.
In the Add Kubernetes Ingress page, configure Ingress by clicking Add Rule. The Add Ingress Rule pane displays.
Specify the Path (/samplePath/ in the example above).
From the Service Name list box, select the Service exposed through the K8S ClusterIP (nginx-test in the example above). The Container port field is completed automatically.
Click Add Rule. The rule is displayed on the Add Kubernetes Ingress page. Add additional rules by repeating the preceding steps.
On the Add Kubernetes Ingress page, specify the Ingress Name.
From the Ingress Controller list box, select gce.
From the Visibility list box, select Internal Only or Public.
If you have created a GCP managed certificate, add the following annotations in the Annotations field to link the Ingress with your GCP managed certificate.
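A hedged example, assuming a ManagedCertificate named my-managed-cert (substitute your own certificate name):

```yaml
networking.gke.io/managed-certificates: my-managed-cert
```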
Click Add to add the Kubernetes Ingress with defined rules. The Ingress you added displays in the Ingress page.
When Ingress is configured, you can access Services based on the rules for each DNS, displayed in the K8S Ingress tab.
In this example, we display the output for three Services with Path Type rules and different DNS names. See the previous example for detailed steps to create Ingress rules.
The Ingress creation will take a few minutes. Once the IP is attached to the ingress, you are ready to use your path- or host-based routing defined via ingress!
Set up Kubernetes Ingress and Load Balancer with K8s NodePort
Ingress controllers abstract the complexity of routed Kubernetes application traffic, providing a bridge between Kubernetes services and services that you define.
See the Containers topic for steps on how to create Tenants, Hosts, and Services.
Once your service is deployed, you are ready to add and configure Kubernetes Ingress by enabling the AWS Application Load Balancer.
Your administrator needs to enable the AWS Application Load Balancer controller for your infrastructure before you can use Ingress.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure and select the Infrastructure name from the NAME column. Select the Settings tab.
Click Add. The Infra - Custom Data pane displays.
From the Setting Name list box, select Enable ALB Ingress Controller.
Select Enable.
Click Set. In the Settings tab, the Enable ALB Ingress Controller setting displays a Value of true.
Add a load balancer listener that uses Kubernetes (K8s) NodePort. Kubernetes Health Check and Probes are enabled by default. To specifically configure the settings for Health Check, select Additional Health Check configs when you add the Load Balancer.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
Click Configure Load Balancer. The Add Load Balancer Listener pane appears.
In the Select Type field, select K8S Node Port.
Complete the other required fields in the Add Load Balancer Listener pane and click Add. The Load Balancer displays in the Load Balancers tab.
Once Services are deployed, add Ingress:
Select Kubernetes -> Ingress from the navigation pane.
Click Add. The Add Kubernetes Ingress page displays.
You must define rules to add a Kubernetes Ingress. Continue to the next section to add rules to Kubernetes Ingress and complete the Ingress setup.
In the Add Kubernetes Ingress page, configure Ingress by clicking Add Rule. The Add or Edit Ingress Rule pane displays.
Specify the Path (/ in the example above).
To use a container port name (optional), use the toggle switch to enable Use Container Port Name.
If you enabled Use Container Port Name in step 3, type a Service name in the Service Name field (redirect:use-annotation in the example) and a container port name in the Container Port field (use-annotation in the example).
If you did not enable Use Container Port Name in step 3, from the Service Name list box, select the Service exposed through the K8S Node Port. The Container Port field is completed automatically.
Click Add Rule. The rule is displayed on the Add Kubernetes Ingress page. Add additional rules by repeating the preceding steps.
On the Add Kubernetes Ingress page, specify the Ingress Name.
From the Ingress Controller list box, select the Ingress Controller that you defined previously.
From the Visibility list box, select either Internal Only or Public.
From the Certificate ARN list box, select the appropriate ARN.
Click Add Redirect Config. The Add Redirect Config pane displays.
Fill the fields as shown in the example above.
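The redirect:use-annotation Service name and use-annotation port used in the rules above follow the AWS Load Balancer Controller's annotation-based actions pattern. As a hedged sketch (the action name redirect must match the Service name in the rule; the protocol, port, and status code values are illustrative), the corresponding Ingress annotation looks like:

```yaml
alb.ingress.kubernetes.io/actions.redirect: >
  {"Type":"redirect","RedirectConfig":{"Protocol":"HTTPS","Port":"443","StatusCode":"HTTP_301"}}
```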
Click Add to add the Kubernetes Ingress with defined rules. The Ingress you added displays in the K8S Ingress tab.
DuploCloud Platform supports defining multiple paths in Ingress. For example, you could define an Ingress rule with an Exact Path Type to route requests to /path1/ for js-service1, and add a rule with a Prefix Path Type to route requests to /path2/ for testsvc2. Additionally, you could add a rule with a Prefix Path Type to route requests via a BYOH Host (Bring-Your-Own-Host) named example.com, for a third service, testsvc3.
When Ingress is configured, you can access Services based on the rules for each DNS, displayed on the Kubernetes -> Ingress page.
In this example, we display the output for three services with Path Type rules and different DNS names. See the previous example for detailed steps to create Ingress rules.
By executing curl commands, you can see the difference in the output for each service in this example. Configured services are accessed based on the DNS name specified in the DuploCloud Portal and the paths that you specified when you added Ingress rules.

```
> curl http://ig-nev-ingress-ing-t2-1-duplopoc.net/path-x/
this is service1
> curl http://ing-doc-ingress-ing-t2-1-duplopoc.net/path-y/
this is service2
> curl http://ing-public-ingress-ing-t2.1.duplopoc.net/path-z/
this is ING2-PUBLIC
```
Adding an Ingress for DuploCloud Azure load balancers
Ingress controllers abstract the complexity of routed Kubernetes application traffic, providing a bridge between Kubernetes services and services that you define.
To add an SSL certificate to a service using Kubernetes Ingress, see Using the SSL certificate for Ingress in DuploCloud in the Import SSL Certificates prerequisite for Azure in DuploCloud.
Before you add an Ingress rule, you need to enable the Ingress Controller for the application gateway.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Select the Infrastructure from the Name column.
Click the Settings tab.
Click Add. The Infra-Set Custom Data pane displays.
In the Setting Name field, select Enable App Gateway Ingress Controller. Click Enable and Set. In the Settings tab, the Enable App Gateway Ingress Controller setting contains the true value.
Add a load balancer listener that uses the Kubernetes NodePort (K8S NodePort).
Using Kubernetes Health Check allows AKS's Application Gateway to determine whether your service is running properly.
You must create Services to run the load balancers. In this example, we name these services s1-alb and s4-nlb.
In the DuploCloud Portal, navigate to DevOps -> Containers -> AKS/Native.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
Click Configure Load Balancer. The Add Load Balancer Listener pane appears.
In the Select Type field, select K8S Node Port.
In the Health Check field, add the Kubernetes Health Check URL for this container.
Complete the other fields in the Add Load Balancer Listener and click Add.
Add an Ingress rule to listen on port 80 (in this example) using both load balancers.
If you use a port other than 80, you must define an additional Security Group rule for that port. See this section for more information.
DuploCloud Platform supports defining multiple paths in Ingress.
In the DuploCloud Portal, navigate to DevOps -> Containers -> AKS / Native.
Click the K8S Ingress tab.
Click Add. The Add Kubernetes Ingress page displays.
Supply the Ingress Name, select the Ingress Controller azure-application-gateway, and set Visibility to Public.
Click Add Rule. The Add Ingress Rule pane displays. Specify a unique Path identifier.
In the Service Name field, select s1-alb:80. Click Add Rule to add the load balancer.
Add another rule by clicking Add Rule. The Add Ingress Rule pane displays. In the Service Name field, select s4-nlb:80. Click Add Rule to add the load balancer.
On the Add Kubernetes Ingress page, click Add to finish setting up the load balancer rules.
Port 80 is configured by default when adding Ingress. If you want to use a custom port number other than 80, set up an additional Security Group Rule for the custom port using this procedure.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Select the Infrastructure from the Name column.
Click the Security Group Rules tab.
Click Add. The Add Infrastructure Security pane displays.
Define the rule and click Add. The rule is added to the Security Group Rules list.
Once Ingress is configured, you can access Services based on the rules for each DNS.
By executing curl commands, you can see the difference in the output for each service. Configured services are accessed based on the DNS name specified in the DuploCloud Portal and the paths that you configured when you added Ingress rules.

```
> curl http://ig-nev-ingress-ing-t2-1.duplopoc.net/
this is IG-NEV
> curl http://ing-doc-ingress-ing-t2-1.duplopoc.net/
this is ING-DOC
> curl http://ing-public-ingress-ing-t2.1.duplopoc.net/
this is ING2-PUBLIC
```
Creating K8s PVCs and StorageClass constructs in the DuploCloud Portal
You can configure the Storage Class and Persistent Volume Claims (PVCs) from the DuploCloud Portal.
Click Add. The Add Kubernetes Persistent Volume Claim page displays.
Define the PVC Name, Storage Class Name, Volume Name, Volume Mode, and other details such as volume Access Modes.
Click Add.
On the Kubernetes Storage page, select the Storage Class option.
Click Add. The Add Kubernetes Storage Class page displays.
Define the Storage Class Name, Provisioner, Reclaim Policy, and Volume Binding Mode. Select other options, such as whether to Allow Volume Expansion.
Click Add.
In the DuploCloud Portal, navigate to Kubernetes -> Storage.
Click Add. The Add Kubernetes Storage Class page displays.
Create a Storage Class, as in the example below.
Support for specifying K8s YAML for Pod Toleration
In the DuploCloud Portal, navigate to Kubernetes -> Services or Docker -> Services. The Services page displays.
Select the Service from the NAME column.
From the Actions menu, select Edit. The Edit Service page displays.
Click Next to proceed to the Advanced Options page.
In the Other Container Config field, add the tolerations operator YAML you have customized for your container.
Click Update. Your container has been updated with your custom specification for the tolerations operator.
tolerations operator YAML

In this example:

If a taint matching key1 exists on a node, the Pod is not scheduled onto that node (NoSchedule).

If a Pod is running and a taint matching example-key exists on the node, the Pod stays bound to the node for 6000 seconds and is then evicted (NoExecute). If the taint is removed before that time, the Pod is not evicted.
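The behavior described above corresponds to tolerations YAML like the following sketch (the keys, effects, and 6000-second value match the example; the operator choice is an assumption):

```yaml
tolerations:
  - key: "key1"
    operator: "Exists"          # tolerate any taint with this key
    effect: "NoSchedule"
  - key: "example-key"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 6000     # stay bound for 6000 seconds, then evict
```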
To run the CronJob to completion, you must specify a Kubernetes Init Container. Click the Add Container button and select the Add Init Container option. The Init Container - 1 area displays.
In the Other Spec Configuration field, specify the job spec (in YAML) for Init Container - 1. Click the Info Tip icon for examples. Select and Copy commands as needed.
You can also view details of a Kubernetes CronJob by clicking the menu icon to the left of the job name and selecting View.
Click the options menu icon to the left of the CronJob name and select Edit.
Click the Job Options Menu icon to the left of the Job name and select Delete.
Optionally, complete Path Type and Host. In this example, we specify a Path Type of Exact. Clicking the Info Tip icon provides more information for these optional fields.
In the DuploCloud Portal, navigate to Kubernetes -> Storage. The Kubernetes Storage page displays. From this page, you define your Kubernetes Persistent Volume Claims and Storage Classes. The Persistent Volume Claims option is selected by default.
For information on using Native Azure StorageClasses, see the Native Azure StorageClasses documentation.
DuploCloud supports the customization of many Kubernetes (K8s) YAML operators, such as tolerations. If you are using a Docker container, you can specify the tolerations operator configuration in the Other Container Config field in the container definition in DuploCloud.
A Kubernetes Lifecycle Hook triggers events to run at different stages of a Container's lifecycle. These hooks run scripts or commands before or after a specific event, such as a container being created, started, or stopped. Lifecycle hooks perform tasks like starting services, or initializing, configuring, or verifying containers.
You can implement Kubernetes Lifecycle Hooks while defining a Service by adding YAML like the example below to the Other Container Config field.
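A minimal sketch of lifecycle hook YAML suitable for the Other Container Config field (the commands shown are placeholders, not from the source):

```yaml
lifecycle:
  postStart:
    exec:
      # placeholder: record that the container started
      command: ["/bin/sh", "-c", "echo 'container started' > /tmp/started"]
  preStop:
    exec:
      # placeholder: give in-flight requests time to drain before shutdown
      command: ["/bin/sh", "-c", "sleep 10"]
```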
Tasks to perform before you use AWS with DuploCloud
Before using DuploCloud, ensure the following prerequisites are met.
Read the Access Control section to ensure at least one person has administrator access.
AWS-specific cloud provider deployments
The DuploCloud platform installs in an EC2 instance within your AWS account. It can be accessed using a web interface, API, and a Terraform provider.
Log in to the DuploCloud Portal using single sign-on (SSO) with your GSuite or O365 login.
Before getting started:
Connect to the DuploCloud Slack channel for support from the DuploCloud team.
Before you begin, read through the DuploCloud documentation and be familiar with DuploCloud terms such as Infrastructure, Plan, and Tenant.
Set up the DuploCloud Portal and ensure that you have administrator access.
Create a Certificate for AWS Certificate Manager
The DuploCloud platform needs a wildcard AWS Certificate Manager (ACM) certificate that corresponds to the domain you created for the Route 53 Hosted Zone.
For example, if the Route 53 Hosted Zone created is apps.acme.com, then the ACM certificate specifies *.apps.acme.com. You can add additional domains to this certificate (for example, *.acme.com).
The ACM certificate is used with AWS Elastic Load Balancers (ELBs) that are created as part of DuploCloud application deployment. Follow this AWS guide to issue an ACM certificate.
Once the certificate is issued, add the Amazon Resource Name (ARN) of the certificate in the DuploCloud Plan so that it is available to subsequent configurations, starting with the DuploCloud default plan.
In the DuploCloud Platform, navigate to Administrator -> Plans. The Plans page displays.
Select the DEFAULT Plan from the Name column.
Click the Certificates tab.
Click Add.
In the Name field, enter a certificate name.
In the Certificate ARN field, enter the ARN.
Click Create. The ACM Certificate with ARN is created.
Note that the ARN Certificate must be set for every new Plan created in a DuploCloud Infrastructure.
Configure DuploCloud to automatically generate Amazon Certificate Manager (ACM) Certificates for your Plan's DNS.
From the DuploCloud portal, navigate to Administrator -> Systems Settings.
Select the System Config tab, and click Add. The Add Config pane displays.
From the Config Type list box, select Flags.
From the Key list box, select Other.
In the Key field that displays, enter enabledefaultdomaincert.
In the Value list box, select True.
Click Submit.
Enabling shell access using native Docker or ECS
DuploCloud allows shell access into the deployed containers. Shell access is enabled differently, depending on whether you use native Docker or ECS.
To enable shell access for the DuploCloud Docker Native container system:
In the DuploCloud Portal, navigate to Docker -> Services, displaying the Services page.
From the Docker list box, click Enable Docker Shell. The Start Shell Service pane displays.
From the Certificate list box, select a certificate name.
From the Visibility list box, select Public.
Click Update.
A provisioned service named dockerservices-shell is created, enabling you to access the Service containers using SSH.
Optionally, DuploCloud provides just-in-time (JIT) access to both the container shell and the kubectl shell directly from your browser.
In the DuploCloud Portal:
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the Default Tenant.
Navigate to Docker -> Services, displaying the Services page.
Click Enable Docker Shell. The Start Shell Service pane displays.
From the Platform list box, select Kubernetes.
From the Certificate list box, select a certificate name.
From the Visibility list box, select Public.
Click Update.
Now you can begin using the Kubernetes (K8s) shell from the DuploCloud Portal for K8s services.
Navigate to Kubernetes -> Services. The Service page displays.
From the KubeCtl list box, click KubeCtl Shell.
In the DuploCloud Portal, navigate to Kubernetes -> Containers.
Select Container Shell or Host Shell from the Actions menu. The container or host shell launches in AWS Systems Manager.
You can also view the ECS task shell and select the container shell to which you want to connect.
In the DuploCloud Portal, navigate to Cloud Services -> ECS, displaying the ECS Task Definition page.
Select the name from the TASK DEFINITION FAMILY NAME column.
Select the Tasks tab.
To display the ECS task shell for any task, click on the (>_) icon in the Actions column of the appropriate row. Click on Console for AWS Console access, Logs for log data, or a container task shell of your choice. A browser launches to give you access to the resource you select.
Click the options menu icon in the appropriate row.
DuploCloud integrates natively with OpenVPN by provisioning VPN users added in the DuploCloud Portal. As a DuploCloud user, you can access resources in the private network by connecting to the VPN with the OpenVPN client.
The OpenVPN Access Server is set to forward only traffic destined for network resources in the DuploCloud-managed private networks. Traffic accessing other resources on the internet does not pass through the tunnel.
User VPN credentials are accessible on the user profile page, which can be accessed through the menu on the upper right of the page or through the User menu option on the left.
Follow the VPN URL link in the VPN Details section of your user profile. Modern browsers flag the link as unsafe because it uses a self-signed certificate; proceed to it anyway.
Creating a Route 53 hosted zone to program DNS entries
The DuploCloud platform needs a unique Route 53 hosted zone to create DNS entries for services that you deploy. The domain must be created out-of-band and set in DuploCloud. The zone is a subdomain such as apps.[MY-COMPANY].com.
Never use this subdomain for anything else, as DuploCloud owns all CNAME entries in this domain and removes all entries it has no record of.
To create the Route53 hosted zone using the AWS Console:
Log in to the AWS console.
Navigate to Route 53 and Hosted Zones.
Create a new hosted zone with the desired domain name, for example, apps.acme.com.
Access the hosted zone and note the name server names.
Go to your root Domain Provider's site (for acme.com, for example), create an NS record that references the domain name of the hosted zone you created (apps.acme.com), and add the zone name to the name servers that you noted above.
Once this is complete, provision the Route53 domain in every DuploCloud Plan, starting with the default plan. Add the Route53 hosted zone ID and domain name, preceded with a dot (.).
Do not forget the dot (.) at the beginning of the DNS suffix, in the form as shown below.
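For example, using the apps.acme.com zone from earlier (the hosted zone ID shown here is a placeholder, not a real value):

```
Route 53 Hosted Zone ID: Z0123456789EXAMPLE
DNS Suffix: .apps.acme.com
```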
Note that this domain must be set in each new Plan you create in your DuploCloud Infrastructure.
Integrate with OpenVPN by provisioning VPN users
DuploCloud integrates natively with OpenVPN by provisioning VPN users that you add to the DuploCloud Portal. OpenVPN setup is a two-step process.
Accept OpenVPN Free tier (Bring Your Own License) in the AWS marketplace:
Log into your AWS account. In the console, navigate to: https://aws.amazon.com/marketplace/pp?sku=f2ew2wrz425a1jagnifd02u5t.
Accept the agreement. Other than the regular EC2 instance cost, no additional license cost is added.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the VPN tab.
Click Provision VPN.
After the OpenVPN is provisioned, it is ready to use. Behind the scenes, DuploCloud launches a CloudFormation script to provision the OpenVPN.
You can find the OpenVPN admin password in the CloudFormation stack in your AWS console.
Provision a VPN while creating a user:
In the DuploCloud Portal, navigate to Administrator -> Users.
Click Add. The Create User pane displays.
Enter a valid email address in the Username field.
In the Roles field, select the appropriate role for the User.
Select Provision VPN.
Click Submit.
For information about removing VPN access for a user, see Deleting a VPN user. To delete VPN access, you must have administrator privileges.
By default, users connected to a VPN can SSH or RDP into EC2 instances. Users can also connect to internal load balancers and endpoints of the applications. However, to connect to other services, such as databases and ElastiCache, you must open the port to the VPN:
In the DuploCloud Portal, navigate to Administrator -> Tenants.
Select the Tenant in the Name column.
Click the Security tab.
Click Add. The Add Tenant Security pane displays.
In the Source Type field, select Ip Address.
In the IP CIDR field, enter the IP CIDR range of your VPN.
Click Add.
Get up and running with DuploCloud inside an AWS cloud environment; harness the power of generating application infrastructures.
This Quick Start tutorial shows you how to set up an end-to-end cloud deployment. You will create AWS infrastructure and tenants and, by the end of this tutorial, you can view a deployed sample web application.
Estimated time to complete tutorial: 75-95 minutes.
When you complete the AWS Quick Start Tutorial, you have three options or paths, as shown in the table below.
Using EKS - You create a service in DuploCloud using AWS Elastic Kubernetes Service and expose it using a load balancer within DuploCloud.
Using ECS - You create an app and service in DuploCloud using AWS Elastic Container Service.
Using Native Docker Services - You create a service in Docker and expose it using a load balancer within DuploCloud.
There are optional steps in each tutorial path, in the table below, marked with an asterisk ( * ). While these steps are not needed to complete each tutorial, you may want to perform or read through them, as they are tasks that are normally completed when you create production-ready services.
* - Optional Step
Click the card below to open the DuploCloud video page to watch a number of DuploCloud demos.
Log in to the OpenVPN Access Server user portal using the credentials from the DuploCloud user profile section.
Install the OpenVPN Connect app for your local machine.
Download the OpenVPN user profile for your account from the link labeled Yourself (user-locked profile).
Open the .ovpn file and click OK at the Import profile dialog. Then click Connect.
For information about the differences between these methods, to help you choose which method best suits your needs, skills, and environments, see the documentation.
| Step | EKS | ECS | Native Docker Services |
| --- | --- | --- | --- |
| 1 | Create Infrastructure and Plan | Create Infrastructure and Plan | Create Infrastructure and Plan |
| 2 | Create Tenant | Create Tenant | Create Tenant |
| 3 | Create RDS * | Create RDS * | Create RDS * |
| 4 | Create Host | Create a Task Definition for an application | Create Host |
| 5 | Create Service | Create the ECS Service and Load Balancer | Create app |
| 6 | Create Load Balancer | Test the app | Create Load Balancer |
| 7 | Enable Load Balancer Options * | | Test the App |
| 8 | Create Custom DNS Name * | | |
| 9 | Test the App | | |
Creating the DuploCloud Infrastructure and a Plan
Each DuploCloud Infrastructure is a connection to a unique Virtual Private Cloud (VPC) network that resides in a region that can host Kubernetes clusters, EKS or ECS clusters, or a combination of these, depending on your public cloud provider.
After you supply a few basic inputs, DuploCloud creates an Infrastructure for you, within AWS and within DuploCloud, in a few clicks. Behind the scenes, DuploCloud does a lot with what little you supply, generating the VPC, Subnets, NAT Gateway, Routes, and EKS or ECS cluster.
With the Infrastructure as your foundation, you can customize an extensible, versatile Platform Engineering development environment by adding Tenants, Hosts, Services, and more.
Estimated time to complete Step 1: 40 minutes. Much of this time is consumed by DuploCloud's creation of the Infrastructure and enabling your EKS cluster with Kubernetes.
Before starting this tutorial:
Learn more about DuploCloud Infrastructures, Plans, and Tenants.
Reference the Access Control documentation to create User IDs with the Administrator role. To perform the tasks in this tutorial, you must have Administrator privileges.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Click Add. The Add Infrastructure page displays.
From the table below, enter the values that correspond to the fields on the Add Infrastructure page. Accept all other default values for fields not specified.
Select either the Enable EKS or the Enable ECS Cluster option. You will follow different paths in the tutorial for creating Services with EKS, ECS, or DuploCloud Docker.
Click Create to create the Infrastructure. It may take up to half an hour to create the Infrastructure. While the Infrastructure is being created, a Pending status is displayed in the Infrastructure page Status column, often with additional information about what part of the Infrastructure DuploCloud is currently creating. When creation completes, a status of Complete displays.
DuploCloud begins creating and configuring your Infrastructure and EKS/ECS clusters using Kubernetes.
It may take up to forty-five (45) minutes for your Infrastructure to be created and Kubernetes (EKS/ECS) enablement to be complete. Use the Kubernetes card in the Infrastructure screen to monitor the status, which should display as Enabled when completed. You can also monitor progress by using the Kubernetes tab, as DuploCloud generates your Cluster Name, Default VM Size, Server Endpoint, and Token.
Every DuploCloud Infrastructure generates a Plan. Plans are sets of templates that are used to configure the Tenants or workspaces, in your Infrastructure. You will set up Tenants in the next tutorial step.
Before proceeding, confirm that a Plan exists that corresponds to your newly created Infrastructure.
In the DuploCloud Portal, navigate to Administrator -> Plans. The Plans page displays.
Verify that a Plan exists with the name NONPROD, the name that you gave to the Infrastructure you created.
You previously verified that your Infrastructure and Plan were created. Now verify that Kubernetes is enabled before proceeding to Create a Tenant.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the NONPROD Infrastructure.
Click the EKS or ECS tabs. When Kubernetes has been Enabled for EKS or ECS, details are listed in the respective tab. The Infrastructure page displays the Enabled status on the Kubernetes card for EKS clusters. For ECS, the Cluster Name is listed in the ECS tab.
Creating an RDS database to integrate with your DuploCloud Service
Creating an RDS database is not essential to running a DuploCloud Service. However, as most services also incorporate an RDS, this step is included to demonstrate the ease of creating a database in DuploCloud. To skip this step, proceed to the Services section of this tutorial.
An AWS RDS is a managed Relational Database Service that is easy to set up and maintain in DuploCloud for AWS public cloud environments. RDSs support many databases including MySQL, PostgreSQL, MariaDB, Oracle BYOL, or SQL Server.
See the DuploCloud AWS Database documentation for more information.
Estimated time to complete Step 3: 5 minutes.
Before creating an RDS, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has Kubernetes (EKS or ECS) Enabled.
A Tenant with the name dev01 has been created.
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Cloud Services -> Database. The Database page displays.
In the RDS tab, click Add. The Create a RDS page displays.
From the table below, enter the values that correspond to the fields on the Create a RDS page. Accept all other default values for fields not specified.
Click Create. The DUPLODOCS database displays in the RDS tab with a Status of Submitted. Database creation takes approximately ten (10) minutes.
DuploCloud prepends DUPLO to the name of your RDS database instance.
You can monitor the status of database creation using the RDS tab and the Status column.
In the DuploCloud Portal Database page, in the RDS tab, when the database Status is Available, the database's endpoint is ready for connection to a DuploCloud Service, which you create and start in the next step.
Invalid passwords - Passwords cannot have special characters like quotes, @, commas, etc. Use a combination of upper and lower-case letters and numbers.
Invalid encryption - Encryption is not supported for small database instances (micro, small, or medium).
In the RDS tab, select the DUPLODOCS database you created.
Note the database Endpoint, the database name, and the database credentials. For security, the database is automatically placed in a private subnet to prevent all access from the internet. Access to the database is automatically set up for all resources (EC2 instances, containers, Lambdas, etc) in the DuploCloud dev01 Tenant. You need the Endpoint to connect to the database from an application running in the EC2 instance.
When you place a DuploCloud Service in a live production environment, consider passing the database endpoint, name, and credentials to a DuploCloud Service using AWS Secrets Manager, or Kubernetes Configs and Secrets.
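The pattern above can be sketched in Python: the Service reads the endpoint and credentials from environment variables (populated, for example, from Kubernetes Secrets or AWS Secrets Manager) and assembles a connection string. The variable names here are hypothetical; match them to however your deployment injects the RDS settings.

```python
import os

# Hypothetical environment variable names -- adjust to your own wiring.
def mysql_url_from_env() -> str:
    host = os.environ["DB_ENDPOINT"]    # the Endpoint shown in the RDS tab
    name = os.environ["DB_NAME"]        # e.g. duplodocs
    user = os.environ["DB_USER"]
    password = os.environ["DB_PASSWORD"]
    # 3306 is the default MySQL port
    return f"mysql://{user}:{password}@{host}:3306/{name}"
```

Keeping the assembly in one place makes it easy to swap the source of the values later without touching application logic.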
When your database is available and you have verified the endpoint, choose one of these three paths to create a DuploCloud Service and continue this tutorial.
Creating an AWS EKS Service in DuploCloud running Docker containers
Creating an AWS ECS Service in DuploCloud running Docker containers
Creating a native Docker Service in DuploCloud running Docker containers
Not sure what kind of DuploCloud Service you want to create? Consider the following:
AWS EKS is a managed Kubernetes service. AWS ECS is a fully managed container orchestration service using AWS technology. For a full discussion of the benefits of EKS vs. ECS, consult this AWS blog.
Docker Containers are ideal for lightweight deployments and run on any platform, using GitHub and other open-source tools.
Finish the Quick Start Tutorial by creating an EKS Service
Alternatively, you can finish this tutorial by creating an ECS Service or a native Docker Service in DuploCloud.
Estimated time to complete remaining tutorial steps: 30-40 minutes
For the remaining steps in this tutorial, you will:
Create a Service and applications (webapp) using the premade Docker image duplocloud/nodejs-hello:latest.
Expose the Service by creating and sharing a load balancer and DNS name.
Test the application.
Obtain access to the container shell and kubectl for debugging.
Behind the scenes, the topology that DuploCloud creates resembles this low-level configuration in AWS.
Creating a Host that acts as an EKS Worker node
Kubernetes uses worker nodes to distribute workloads within a cluster. The cluster automatically distributes the workload among its nodes, enabling seamless scaling as required system resources expand to support your applications.
Estimated time to complete Step 4: 5 minutes.
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
In the EC2 tab, click Add. The Add Hosts page displays.
In the Friendly Name field, enter host01.
From the Instance Type list box, select 2 CPU 4 GB - t3.medium.
Select the Advanced Options checkbox to display advanced configuration fields.
From the Agent Platform list box, select EKS Linux.
From the Image ID list box, select any Image ID that is prefixed by EKS (for example, EKS-Oregon-1.23).
Click Add. The Host is created, initialized, and started. In a few minutes, when the Status displays Running, the Host is available for use.
The EKS Image ID is the image published by AWS specifically for an EKS worker in the version of Kubernetes deployed at Infrastructure creation time. For this tutorial, the region is us-west-2, where the NONPROD Infrastructure was created.
Verify that the Host you created has a Status of Running.
Faults are shown in the DuploCloud Portal by clicking the Fault/Alert icon. Common database faults that may cause database creation to fail include:
In this tutorial for DuploCloud AWS, you have so far created a VPC network with configuration templates (an Infrastructure and Plan), an isolated workspace (a Tenant), and optionally, an RDS database.
Now you need to create a DuploCloud Service on top of your Infrastructure and configure the Service to run and deploy your application. In this tutorial path, we'll deploy using Docker containers, leveraging AWS EKS.
Alternatively, you can finish the tutorial by creating an AWS ECS Service in DuploCloud running Docker containers.
For a full discussion of the benefits of EKS vs. ECS, consult this AWS blog.
Create a Host (EC2 Instance), which serves as an EKS worker node.
When you create an EKS Service, you are using a combination of technologies from AWS and Kubernetes, the open-source container orchestration system.
Before creating a Host (essentially a Virtual Machine), verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has EKS Enabled.
A Tenant with the name dev01 has been created.
If no Image ID is available with a prefix of EKS, copy the AMI ID for the desired EKS version from the AWS documentation for EKS-optimized AMIs. Select Other from the Image ID list box and paste the copied AMI ID in the Other Image ID field. Contact the DuploCloud Support team via your Slack channel if you have questions or issues.
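If you prefer to look the AMI up programmatically, AWS publishes the recommended EKS-optimized AMI ID under a well-known SSM parameter path, one per Kubernetes version. The helper below only assembles that path; retrieving the value (for example with the AWS CLI or an SDK) requires AWS credentials.

```python
# Assemble the documented SSM parameter path for the EKS-optimized
# Amazon Linux 2 AMI of a given Kubernetes version.
def eks_ami_parameter(k8s_version: str) -> str:
    return (f"/aws/service/eks/optimized-ami/{k8s_version}"
            "/amazon-linux-2/recommended/image_id")

print(eks_ami_parameter("1.23"))
# /aws/service/eks/optimized-ami/1.23/amazon-linux-2/recommended/image_id
```

You would then pass that path to `aws ssm get-parameter` in the Region of your Infrastructure.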
| Add Infrastructure page field | Value |
| --- | --- |
| Name | nonprod |
| Region | YOUR_GEOGRAPHIC_REGION |
| VPC CIDR | 10.221.0.0/16 |
| Subnet CIDR Bits | 24 |
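To see what the VPC CIDR and Subnet CIDR Bits settings imply, you can enumerate the available subnets with Python's standard ipaddress module:

```python
import ipaddress

# The NONPROD VPC CIDR, carved into /24 subnets ("Subnet CIDR Bits" = 24).
vpc = ipaddress.ip_network("10.221.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

print(len(subnets))   # 256 available /24 subnets
print(subnets[0])     # 10.221.0.0/24
```

Each /24 subnet provides 251 usable addresses in AWS (five are reserved per subnet), which is plenty for a tutorial environment.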
| Create a RDS page field | Value |
| --- | --- |
| RDS Name | docs |
| User Name | YOUR_DUPLOCLOUD_ADMIN_USER_NAME |
| User Password | YOUR_DUPLOCLOUD_ADMIN_PASSWORD |
| RDS Engine | MySQL |
| RDS Engine Version | LATEST_AVAILABLE_VERSION |
| RDS Instance Size | db.t3.medium |
| Storage Size in GB | 30 |
Creating a DuploCloud Tenant that segregates your workloads
Now that the Infrastructure and Plan exist and a Kubernetes EKS or ECS cluster has been enabled, create one or more Tenants that use the configuration DuploCloud created.
Tenants in DuploCloud are similar to projects or workspaces and have a subordinate relationship to the Infrastructure. Think of the Infrastructure as a virtual "house" (cloud), with Tenants conceptually "residing" in the Infrastructure performing specific workloads that you define. As Infrastructure is an abstraction of a Virtual Private Cloud, Tenants abstract the segregation created by a Kubernetes Namespace, although Kubernetes Namespaces are only one component that Tenants can contain.
In AWS, cloud features such as IAM Roles, security groups, and KMS keys are exposed in Tenants, which reference these feature configurations.
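As an illustration of that per-Tenant scoping, resources DuploCloud creates are prefixed with a Tenant-derived identifier. The `duploservices-` prefix below is an assumption inferred from the DUPLOSERVICES-DEV01-... naming that appears later in this tutorial's ECS steps; verify the exact convention in your own Portal.

```python
# Illustrative only: derive a per-Tenant resource prefix from the Tenant name.
# The "duploservices-" prefix is assumed, not taken from DuploCloud internals.
def tenant_prefix(tenant: str) -> str:
    return f"duploservices-{tenant.lower()}"

print(tenant_prefix("dev01"))  # duploservices-dev01
```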
Estimated time to complete Step 2: 10 minutes.
DuploCloud customers often create at least two Tenants for their production and non-production cloud environments (Infrastructures).
For example:
Production Infrastructure
Pre-production Tenant - for preparing or reviewing production code
Production Tenant - for deploying tested code
Non-production Infrastructure
Development Tenant - for writing and reviewing code
Quality Assurance Tenant - for automated testing
In larger organizations, some customers create Tenants based on application environments, such as creating one Tenant for Data Science applications and another Tenant for web applications, and so on.
Tenants are sometimes created to isolate a single customer workload, allowing more granular performance monitoring, scaling flexibility, or tighter security. This is referred to as a single-Tenant setup.
Before creating a Tenant, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has Kubernetes (EKS or ECS) Enabled.
Create a Tenant for your Infrastructure and Plan:
In the DuploCloud Portal, navigate to Administrator -> Tenants.
Click Add. The Create a Tenant pane displays.
Enter dev01 in the Name field.
Select the Plan that you created in the previous step (NONPROD).
Click Create.
Navigate to Administrator -> Tenants and verify that the dev01 Tenant displays in the list.
Navigate to Administrator -> Infrastructure and select dev01 from the Tenant list box at the top left in the DuploCloud Portal. Ensure that the NONPROD Infrastructure appears in the list of Infrastructures with the Status Complete.
Creating a Load Balancer to configure network ports to access the application
Now that your DuploCloud Service is running, you have a mechanism to expose the containers and images in which your application resides. But because your containers are running inside a private network, you also need a load balancer to listen on the correct ports in order to access the application.
In this step, we add a Load Balancer Listener to complete this network configuration.
Estimated time to complete Step 6: 10 minutes.
Before creating a Load Balancer, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
From the Name column, select demo-service.
Click the Load Balancers tab.
Click the Configure Load Balancer link. The Add Load Balancer Listener pane displays.
From the Type list box, select Application LB.
In the Container Port field, enter 3000. This is the configured port on which the application inside the Docker Container Image duplocloud/nodejs-hello:latest is running.
In the External Port field, enter 80. This is the port through which users will access the web application.
From the Visibility list box, select Public.
From the Application Mode list box, select Docker Mode.
Type / (forward slash) in the Health Check field to indicate that Kubernetes should perform health checks against the application's root path.
In the Backend Protocol list box, select HTTP.
Click Add. The Load Balancer is created and initialized. Monitor the LB Status card on the Services page. When the Load Balancer is ready for use the LB Status card displays Ready.
Verify that the Load Balancer has an LB Status of Ready.
On the Services page, note the DNS Name of the Load Balancer that you created.
In the LB Listeners area of the Services page, note the configuration details of the Load Balancer's HTTP protocol, which you specified when you added it above.
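In Kubernetes terms, the listener configuration above expresses a port mapping from external port 80 to container port 3000. The dictionary below is an illustrative sketch of a corresponding Kubernetes Service object, not the exact resources DuploCloud generates (an Application LB also involves an Ingress and AWS target groups); the `app` selector label is assumed.

```python
# Illustrative sketch of the port mapping the listener expresses:
# external port 80 -> container port 3000 on the demo-service pods.
service_manifest = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "demo-service"},
    "spec": {
        "selector": {"app": "demo-service"},  # label name is an assumption
        "ports": [{"port": 80, "targetPort": 3000, "protocol": "TCP"}],
    },
}
```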
Creating a Service to run a Docker-containerized application
DuploCloud supports three container orchestration technologies to deploy containerized applications in AWS:
Native EKS
Native ECS Fargate
Built-in container orchestration in DuploCloud using EKS/ECS Kubernetes
You can use any of these methods, all of which employ Docker containers. This tutorial uses DuploCloud's built-in container orchestration with EKS and Kubernetes.
This tutorial accesses a pre-built Docker container to deploy a simple Hello World NodeJS web app. When you run the application, DuploCloud accesses Docker images in a preconfigured Docker Hub.
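For reference, the container's behavior can be approximated in a few lines of Python: answer every GET on port 3000 (including the / health check path) with Hello World!. This is a stand-in sketch, not the actual source of duplocloud/nodejs-hello.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    # Answer every GET (including the "/" health check path) with the greeting.
    def do_GET(self):
        body = b"Hello World!"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

if __name__ == "__main__":
    # Listen on container port 3000, matching the Load Balancer's Container Port.
    HTTPServer(("", 3000), HelloHandler).serve_forever()
```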
Estimated time to complete Step 5: 10 minutes.
Before creating a Service, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to DevOps -> Containers -> EKS/Native. The Services page displays.
Click Add. The Add Service page displays.
From the table below, enter the values that correspond to the fields on the Add Service page. Accept all other default values for fields not specified.
Click Next. The Advanced Options page is displayed.
At the bottom of the Advanced Options page, click Create. Your Service is created and initialized. In about five (5) minutes, in the Containers tab, your DuploCloud Service displays a Current status of Running.
Use the Containers tab to monitor Service creation by comparing the Desired status (Running) with the Current status.
Verify that your DuploCloud Service, demo-service, has a Current status of Running.
Adding a security layer and enabling other options for your Load Balancer
This step is optional and not necessary to run the example application in this tutorial.
However, while it's not as important to secure a load balancer for a small web application in a tutorial, your production cloud apps require an elevated level of protection.
Estimated time to complete Step 7: 5 minutes.
Before securing a Load Balancer, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
From the Name column, select the Service to which your Load Balancer is attached (demo-service).
Click the Load Balancers tab.
In the Other Settings card, click Edit. The Other Load Balancer Settings pane displays.
In the Web ACL list box, select None, because you are not connecting a Web Application Firewall.
For this tutorial, select only the Enable Access Logs and Drop Invalid Headers options.
Accept the Idle Timeout default setting and click Save. The Other Settings card in the Load Balancers tab is updated with your selections.
Verify that the Other Settings card contains the selections you made above for:
Web ACL - None
HTTP to HTTPS Redirect - False
Enable Access Logs - True
Drop Invalid Headers - True
Changing the DNS Name for ease of use
After you create a Load Balancer Listener, you can modify the DNS Name for easier use and reference by your applications, but doing so isn't necessary to run your application or complete this tutorial.
Estimated time to complete Step 8: 5 minutes.
Before changing the DNS Name, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
From the Name column, select demo-service.
Click the Load Balancers tab. The ALB Load Balancer configuration is displayed.
In the DNS Name card, click Edit. The prefix in the DNS Name is editable.
Edit the DNS Name and select a meaningful DNS Name prefix.
Click Save. A Success message briefly displays at the top center of the DuploCloud Portal.
An entry for your new DNS name is now registered with demo-service.
The DNS Name card displays your modified DNS Name.
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has EKS Enabled.
A Tenant with the name dev01 has been created.
A Host with the name host01 has been created.
A Service with the name demo-service has been created.
You don't have to have experience with Kubernetes to deploy an application in the DuploCloud Portal. However, it is helpful to be familiar with the platform. Docker runs on any platform and provides an easy-to-use UI for creating, running, and managing containers, in which your application code resides.
When you run your own applications, you will choose a public image or provide credentials to access your private repository. Set up this access before you deploy your own applications.
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has EKS Enabled.
A Tenant with the name dev01 has been created.
A Host with the name host01 has been created.
To use spot instances, follow the steps for creating a Service; on the Add Service page, under Basic Options, select Tolerate spot instances.
To set up a Web Application Firewall (WAF) for a production application, follow the steps in the WAF documentation. You won't set up a WAF in this tutorial.
Otherwise, to skip this step, proceed to the next tutorial step.
In this tutorial step, for the Application Load Balancer (ALB) you created in the previous step, you will:
Enable access logging to monitor request details.
Protect against requests that contain invalid headers.
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has EKS Enabled.
A Tenant with the name dev01 has been created.
A Host with the name host01 has been created.
A Service with the name demo-service has been created.
An HTTPS Application Load Balancer has been created.
To skip this step, proceed to the next tutorial step.
Once the load balancer is created, DuploCloud programs an autogenerated DNS Name registered to demo-service in your domain. Before you create production deployments, you must create a Hosted Zone domain, if the DuploCloud staff has not already created one for you. For this tutorial, it is not necessary to create the domain.
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has EKS Enabled.
A Tenant with the name dev01 has been created.
A Host with the name host01 has been created.
A Service with the name demo-service has been created.
An HTTPS Application Load Balancer has been created.
| Add a Service page field | Value |
| --- | --- |
| Service Name | demo-service |
| Docker Image | duplocloud/nodejs-hello:latest |
Test the application to ensure you get the results you expect
You can test your application directly from the Services page using the DNS status card.
Estimated time to complete Step 9 and finish tutorial: 10 minutes.
Before testing your application, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has EKS Enabled.
A Tenant with the name dev01 has been created.
A Host with the name host01 has been created.
A Service with the name demo-service has been created.
An HTTPS Application Load Balancer has been created.
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
Note that if you skipped Step 7 and/or Step 8, the configuration in the Other Settings and DNS cards appears slightly different from the configuration depicted in the screenshot below. Because those steps are optional, the differences do not affect testing; you can proceed with no visible change in the application's output.
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
From the Name column, select demo-service.
Click the Load Balancers tab. The Application Load Balancer configuration is displayed.
Open a browser and paste the DNS Name into the browser's URL field.
Press ENTER. A web page with the text Hello World! is displayed, from the JavaScript program residing in your Docker Container that is running in demo-service, which is exposed to the web by your Load Balancer.
It can take five to fifteen (5-15) minutes after the Load Balancer is created for the DNS Name to become active.
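Rather than refreshing the browser while DNS propagates, you can poll the Load Balancer's URL from a script. A minimal sketch using only the Python standard library:

```python
import time
import urllib.request
import urllib.error

def wait_for_dns(url: str, timeout_s: int = 900, interval_s: int = 30) -> bool:
    """Poll url until it answers with HTTP 200, or give up after timeout_s."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # name not resolving yet, or LB still initializing; retry
        time.sleep(interval_s)
    return False
```

Call it with the DNS Name you noted earlier, e.g. `wait_for_dns("http://<your-lb-dns-name>/")`.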
Congratulations! You have just launched your first web service on DuploCloud!
In this tutorial, your objective was to create a cloud environment to deploy an application for testing purposes, and to understand how the various components of DuploCloud work together.
The application rendered a simple web page with text, coded in JavaScript, from software application code residing in a Docker container. You can use this same procedure to deploy much more complex cloud applications.
In the previous steps, you:
Created a DuploCloud Infrastructure named NONPROD, a Virtual Private Cloud instance, backed by an EKS-enabled Kubernetes cluster.
Created a Tenant named dev01 in Infrastructure NONPROD. While generating the Infrastructure, DuploCloud created a set of templates (Plan) to configure multiple AWS and Kubernetes components needed for your environment.
Created an EC2 host named host01, so that your application has compute resources on which to run.
Created a Service named demo-service to connect the Docker containers and associated images, in which your application code resides, to the DuploCloud Tenant environment.
Created an ALB Load Balancer Listener to expose your application via ports and backend network configurations.
Verified that your web page rendered as expected by testing the DNS Name exposed by the Load Balancer Listener.
In this tutorial, you created many artifacts for testing purposes. When you are ready, clean them up so that another person can run this tutorial from the start, using the same names for Infrastructure and Tenant.
To delete the dev01 Tenant, follow these instructions and then return to this page. As you learned, the Tenant segregates all work in one isolated environment, so deleting the Tenant that you created cleans up most of your artifacts.
The NONPROD Infrastructure is deleted and you have completed the clean-up of your test environment.
Thanks for completing this tutorial. Proceed to the next section to learn more about using DuploCloud with AWS.
Create a Task Definition for your application in AWS ECS
You enabled ECS cluster creation when you created the Infrastructure. In order to create a Service using ECS, you first need to create a Task Definition that serves as a blueprint for your application.
Once you create a Task Definition, you can run it as a Task or as a Service. In this tutorial, we run the Task Definition as a Service.
Estimated time to complete Step 4: 10 minutes.
Before creating a Task Definition, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has ECS Enabled.
A Tenant with the name dev01 has been created.
In the DuploCloud Portal's Tenant list box, select Tenant dev01.
Navigate to Cloud Services -> ECS.
In the Task Definition tab, click Add. The Add Task Definition page displays.
In the Name field, enter sample-task-def.
In the Container - 1 section, in the Container Name field, enter sample-task-def-c1. Container names are required for Docker images in AWS ECS.
In the Image field, enter duplocloud/nodejs-hello:latest.
From the vCPU list box, select 0.50 vCPU.
From the Memory list box, select 1 GB.
In the Port Mappings section, in the Port field, enter 3000. Port mappings allow containers to access ports for the host container instance to send or receive traffic.
Click Submit.
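Behind the form, these values correspond to an ECS task definition. The sketch below shows the general shape in the ECS RegisterTaskDefinition JSON; DuploCloud fills in Tenant-specific details (IAM roles, log configuration, the DUPLOSERVICES- name prefix) that are omitted here.

```python
# Illustrative shape only -- not the exact document DuploCloud registers.
task_definition = {
    "family": "sample-task-def",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "512",       # 0.50 vCPU, expressed in CPU units
    "memory": "1024",   # 1 GB, expressed in MiB
    "containerDefinitions": [
        {
            "name": "sample-task-def-c1",  # container names are required in ECS
            "image": "duplocloud/nodejs-hello:latest",
            "essential": True,
            "portMappings": [{"containerPort": 3000, "protocol": "tcp"}],
        }
    ],
}
```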
Create an ECS Service from Task Definition and expose it with a Load Balancer
Now that you've created a Task Definition, you create a Service, which creates a Task (from the definition) to run your application. A Task is the instantiation of a Task Definition within a cluster. After you create a task definition for your application within Amazon ECS, you can specify multiple tasks to run on your cluster, based on your performance and availability requirements.
Once a Service is created, you must create a Load Balancer to expose the Service on the network. An Amazon ECS service runs and maintains the desired number of tasks simultaneously in an Amazon ECS cluster. If any of your tasks fail or stop for any reason, the Amazon ECS service scheduler launches another instance based on parameters specified in your Task Definition. It does so in order to maintain the desired number of tasks created.
Estimated time to complete Step 5: 10 minutes.
Before creating the ECS Service and Load Balancer, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
Tasks run until an error occurs or a user terminates the Task in the ECS Cluster.
In the DuploCloud Portal's Tenant list box, select Tenant dev01.
Navigate to Cloud Services -> ECS.
In the Service Details tab, click the Configure ECS Service link. The Add ECS Service page displays.
In the Name field, enter sample-httpd-app as the Service name.
In the LB Listeners area, click Add. The Add Load Balancer Listener pane displays.
From the Select Type list box, select Application LB.
In the Container Port field, enter 3000.
In the External Port field, enter 80.
From the Visibility list box, select Public.
In the Health Check field, enter / to specify the application's root path as the health check target.
From the Backend Protocol list box, select HTTP.
From the Protocol Policy list box, select HTTP1.
Select other options as needed and click Add.
On the Add ECS Service page, click Submit.
In the Service Details tab, information about the Service and Load Balancer you created is displayed. Verify that the Service and Load Balancer configuration details in the Service Details tab are correct.
In the DNS status card on the right side of the Portal, click the Copy icon to copy the displayed DNS address to your clipboard.
Finish by deleting the NONPROD Infrastructure. In the DuploCloud Portal, navigate to Administrator -> Infrastructure. Click the Action menu icon for the NONPROD row and select Delete.
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has ECS Enabled.
A Tenant with the name dev01 has been created.
A Task Definition named sample-task-def has been created.
In the Task Definitions tab, select the Task Definition Family Name, DUPLOSERVICES-DEV01-SAMPLE-TASK-DEF. This is the Task Definition name prepended with a unique identifier that includes your Tenant name (DEV01).
Create an EC2 Host in DuploCloud
Before you create your application and service using native Docker, create an EC2 Host in DuploCloud on which your containers will run.
Estimated time to complete Step 4: 5 minutes.
Before creating a Host (essentially a Virtual Machine), verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
A Tenant with the name dev01 has been created.
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
In the EC2 tab, click Add. The Add Hosts page displays.
In the Friendly Name field, enter host01.
From the Instance Type list box, select 2 CPU 4 GB - t3.medium.
Select the Advanced Options checkbox to display advanced configuration fields.
From the Agent Platform list box, select Linux/Docker Native.
From the Image ID list box, select any Docker-Duplo or Ubuntu image.
Click Add. The Host is created, initialized, and started. In a few minutes, when the Status displays Running, the Host is available for use.
Verify that host01 has a Status of Running.
Finish the Quick Start Tutorial by running a native Docker Service
This section of the tutorial shows you how to deploy a web application with a DuploCloud Docker Service, leveraging the DuploCloud platform's built-in container management capability.
Instead of creating a DuploCloud Docker Service, you can alternatively finish the tutorial by:
Creating an AWS EKS Service in DuploCloud running Docker containers or
Creating an AWS ECS Service in DuploCloud running Docker containers.
Instead of creating a DuploCloud Service using EKS or ECS, you can deploy your application with native Docker containers and services in DuploCloud.
To deploy your app with a DuploCloud Docker Service in this tutorial, you:
Create an EC2 host instance in DuploCloud.
Create a native Docker application and service.
Expose the app to the web with an Application Load Balancer in DuploCloud.
Complete the tutorial by testing your application.
Estimated time to complete remaining tutorial steps: 30-40 minutes
Behind the scenes, the topology that DuploCloud creates resembles this low-level configuration in AWS.
Test the application to ensure you get the results you expect
You can test your application directly from the Services page using the DNS status card.
Estimated time to complete Step 6 and finish tutorial: 5 minutes.
Before testing your application, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
The NONPROD infrastructure has ECS Enabled.
A Tenant with the name dev01 has been created.
A Task Definition named sample-task-def has been created.
The ECS Service (sample-httpd-app) and Load Balancer have been created.
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Cloud Services -> ECS.
Click the Service Details tab. The Application Load Balancer configuration is displayed.
Open a browser and paste the DNS address into the browser's URL field.
Press ENTER. A web page with the text It works! displays, from the JavaScript program residing in your Docker Container that is running in sample-httpd-app, which is exposed to the web by your Application Load Balancer.
It can take five to fifteen (5-15) minutes after the Load Balancer is created for the Domain Name to become active.
Congratulations! You have just launched your first web service on DuploCloud!
In this tutorial, your objective was to create a cloud environment to deploy an application for testing purposes, and to understand how the various components of DuploCloud work together.
The application rendered a simple web page with text, coded in JavaScript, from software application code residing in a Docker container. You can use this same procedure to deploy much more complex cloud applications.
In the previous steps, you:
Created a DuploCloud Infrastructure named NONPROD, a Virtual Private Cloud instance, backed by an ECS-enabled cluster.
Created a Tenant named dev01 in Infrastructure NONPROD. While generating the Infrastructure, DuploCloud created a set of templates (Plan) to configure multiple AWS components needed for your environment.
Created a Task Definition named sample-task-def, used to create a service to run your application.
Created a Service named sample-httpd-app to connect the Docker containers and associated images, in which your application code resides, to the DuploCloud Tenant environment. In the same step, you created an ALB Load Balancer Listener to expose your application via ports and backend network configurations.
Verified that your web page rendered as expected by testing the DNS Name exposed by the Load Balancer Listener.
In this tutorial, you created many artifacts for testing purposes. When you are ready, clean them up so that another person can run this tutorial from the start, using the same names for Infrastructure and Tenant.
To delete the dev01 Tenant, follow these instructions and then return to this page. As you learned, the Tenant segregates all work in one isolated environment, so deleting the Tenant that you created cleans up most of your artifacts.
The NONPROD Infrastructure is deleted and you have completed the clean-up of your test environment.
Thanks for completing this tutorial. Proceed to the next section to learn more about using DuploCloud with AWS.
Finish the Quick Start Tutorial by creating an ECS Service
Instead of creating a DuploCloud Service with AWS ECS, you can alternatively finish the tutorial by:
To deploy your app with AWS ECS in this ECS tutorial, you:
Create a Task Definition using ECS.
Create a DuploCloud Service named webapp, backed by a Docker image.
Expose the app to the web with a Load Balancer.
Complete the tutorial by testing your application.
Estimated time to complete remaining tutorial steps: 30-40 minutes
Behind the scenes, the topology that DuploCloud creates resembles this low-level configuration in AWS.
In the DNS Name card, click the Copy icon to copy the DNS address to your clipboard.
Finish by deleting the NONPROD Infrastructure. In the DuploCloud Portal, navigate to Administrator -> Infrastructure. Click the Action menu icon for the NONPROD row and select Delete.
This section of the tutorial shows you how to deploy a web application with a DuploCloud Service using AWS ECS.
For a full discussion of the benefits of using EKS vs. ECS, consult this AWS blog.
Creating an AWS EKS Service in DuploCloud running Docker containers, or
Creating a native Docker Service in DuploCloud.
Unlike AWS EKS, creating and deploying services and apps with ECS requires creating a Task Definition, a blueprint for your application. Once you create a Task Definition, you can run it as a Task or as a Service. In this tutorial, we run the Task Definition as a Service.
Create a native Docker Service in the DuploCloud Portal
You can use the DuploCloud Portal to create a native Docker service without leaving the DuploCloud interface.
Estimated time to complete Step 5: 10 minutes.
Before creating a Service, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
A Tenant with the name dev01 has been created.
An EC2 Host with the name host01 has been created.
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Docker -> Services.
Click Add. The Add Service Basic Options page displays.
In the Service Name field, enter demo-service-d01.
From the Platform list box, select Linux/Docker Native.
In the Docker Image field, enter duplocloud/nodejs-hello:latest.
From the Docker Networks list box, select Docker Default.
Click Next. The Advanced Options page displays.
Click Create.
On the Add Service Basic Options page, you can also specify optional Environment Variables (EVs) such as database Host, port, and so on. You can also pass Docker credentials using EVs for testing purposes.
Verify that demo-service-d01 has a Current Status of Running.
Once the Service is Running, you can check logs for informational messages by clicking the menu icon ( ) to the left of the running Service Name on the Service page and selecting the Logs option.
Test the application to ensure you get the results you expect
You can test your application directly from the Services page using the DNS status card.
Estimated time to complete Step 7 and finish tutorial: 5 minutes.
Before testing your application, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
A Tenant with the name dev01 has been created.
An EC2 Host with the name host01 has been created.
A Service with the name demo-service-d01 has been created.
A Load Balancer configured to listen on port xxxx has been created.
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Docker -> Services. The Services page displays.
From the Name column, select demo-service-d01.
Click the Load Balancers tab. The Application Load Balancer configuration is displayed.
Open a browser instance and paste the DNS address into the URL field.
Press ENTER. A web page with the text Hello World! is displayed, from the JavaScript program residing in your Docker Container that is running in demo-service-d01, which is exposed to the web by your Load Balancer.
It can take from five to fifteen (5-15) minutes for the DNS Name to become active once you launch your browser instance to test your application.
Congratulations! You have just launched your first web service on DuploCloud!
In this tutorial, your objective was to create a cloud environment to deploy an application for testing purposes, and to understand how the various components of DuploCloud work together.
The application rendered a simple web page with text, coded in JavaScript, from software application code residing in a Docker container. You can use this same procedure to deploy much more complex cloud applications.
In the previous steps, you:
Created a DuploCloud Infrastructure named NONPROD, a Virtual Private Cloud instance, backed by an EKS-enabled Kubernetes cluster.
Created a Tenant named dev01 in Infrastructure NONPROD. While generating the Infrastructure, DuploCloud created a set of templates (a Plan) to configure the multiple AWS and Kubernetes components needed for your environment.
Created an EC2 host named host01, so that your application has compute resources on which to run.
Created a Service named demo-service-d01 to connect the Docker containers and associated images, in which your application code resides, to the DuploCloud Tenant environment.
Created an ALB Load Balancer Listener to expose your application via ports and backend network configurations.
Verified that your web page rendered as expected by testing the DNS Name exposed by the Load Balancer Listener.
In this tutorial, you created many artifacts for testing purposes. When you are ready, clean them up so that another person can run this tutorial from the start, using the same names for Infrastructure and Tenant.
To delete the dev01 Tenant, follow these instructions and then return to this page. As you learned, the Tenant segregates all work in one isolated environment, so deleting the Tenant that you created cleans up most of your artifacts.
The NONPROD Infrastructure is deleted and you have completed the clean-up of your test environment.
Thanks for completing this tutorial! Proceed to the next section to learn more about using DuploCloud with AWS.
In the DNS status card on the right side of the Portal, click the Copy Icon ( ) to copy the DNS address displayed to your clipboard.
Finish by deleting the NONPROD Infrastructure. In the DuploCloud Portal, navigate to Administrator -> Infrastructure. Click the Action menu icon () for the NONPROD row and select Delete.
Create a Load Balancer to expose the native Docker Service
Now that your DuploCloud Service is running, you need a mechanism to expose the containers and images in which your application resides. Because your containers are running inside a private network, you need a load balancer listening on the correct ports in order to access the application.
In this step, we add a Load Balancer Listener to complete this network configuration.
Estimated time to complete Step 6: 15 minutes.
Before creating a Load Balancer, verify that you accomplished the tasks in the previous tutorial steps. Using the DuploCloud Portal, confirm that:
An Infrastructure and Plan exist, both with the name NONPROD.
A Tenant with the name dev01 has been created.
An EC2 Host with the name host01 has been created.
A Service with the name demo-service-d01 has been created.
In the Tenant list box, on the upper-left side of the DuploCloud Portal, select the dev01 Tenant that you created.
In the DuploCloud Portal, navigate to Docker -> Services.
Select the Service demo-service-d01 that you created.
Click the Load Balancers tab.
Click the Configure Load Balancer link. The Add Load Balancer Listener pane displays.
From the Select Type list box, select Application LB.
In the Container Port field, enter 3000, the port on which the application inside the container image (duplocloud/nodejs-hello:latest) listens.
In the External Port field, enter 80.
From the Visibility list box, select Public.
From the Application list box, select Docker Mode.
In the Health Check field, enter /, indicating that the Load Balancer should probe the application's root URL path to verify its health.
From the Backend Protocol list box, select HTTP.
Click Add.
When the LB Status card displays Ready, your Load Balancer is running and ready for use.
If you want to secure the Load Balancer that you created, you can follow the steps specified here.
You can modify the DNS name by clicking Edit in the DNS Name card in the Load Balancers tab. For additional information see this page.
Use Cases supported for DuploCloud AWS
This section details common use cases for DuploCloud AWS.
Topics in this section are covered in the order of typical usage. Use cases that are foundational to DuploCloud such as Infrastructure, Tenant, and Hosts are listed at the beginning of this section; while supporting use cases such as Cost management for billing, JIT Access, Resource Quotas, and Custom Resource tags appear near the end.
AWS Console link
Enable Elastic Kubernetes Service (EKS) for AWS by creating a DuploCloud Infrastructure
In the DuploCloud platform, a Kubernetes Cluster maps to a DuploCloud Infrastructure.
Start by creating a new Infrastructure in DuploCloud. When prompted to provide details for the new Infrastructure, select Enable EKS. In the EKS Version field, select the desired release.
Optionally, enable logging and custom EKS endpoints.
The worker nodes and remaining workload setup are described in the Tenant topic.
Up to one EKS cluster (0 or 1) is supported for each DuploCloud Infrastructure.
Creating an Infrastructure with EKS can take some time. See the Infrastructure section for details about other elements on the Add Infrastructure form.
When the Infrastructure is in the ready state, as indicated by a Status of Complete, select the Infrastructure's Name on the Infrastructure page to view the Kubernetes configuration details, including the token and configuration for kubectl.
When you create Tenants in an Infrastructure, a namespace is created in the Kubernetes cluster with the name duploservices-TENANT_NAME.
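The namespace naming pattern above can be sketched as a tiny helper. The function name is illustrative (not a DuploCloud API); the documented pattern is duploservices-TENANT_NAME, and lowercasing follows the Analytics → duploservices-analytics example elsewhere in this guide:

```python
def duplo_namespace(tenant_name: str) -> str:
    """Derive the Kubernetes namespace DuploCloud creates for a Tenant.

    Illustrative helper; DuploCloud performs this mapping for you.
    """
    # Documented pattern: duploservices-TENANT_NAME (lowercased in practice)
    return f"duploservices-{tenant_name.lower()}"
```

For example, a Tenant named dev01 maps to the namespace duploservices-dev01.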
Specify EKS endpoints for an Infrastructure
AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the default public endpoint for each service in an AWS Region. However, when you create an Infrastructure in DuploCloud, you can specify a custom Private endpoint, a custom Public endpoint, or Both public and private custom endpoints. If you specify no endpoints, the default Public endpoint is used.
From the EKS Endpoint Visibility list box, select Public, Private, or Both public and private. If you select Private or Both public and private, the Allow VPN Access to the EKS Cluster option is enabled.
Click Advanced Options.
Using the Private Subnet CIDR and Public Subnet CIDR fields, specify CIDRs for alternate public and private endpoints.
Click Create.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the Infrastructure.
Click the Settings tab.
From the Setting Name list box, select Enable VPN Access to EKS Cluster.
Select Enable to enable VPN.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the Infrastructure containing settings that you want to view.
Click the Settings tab. The Infrastructure settings display.
Modifying endpoints can incur an outage of up to thirty (30) minutes in your EKS cluster. Plan your update accordingly to minimize disruption for your users.
To modify the visibility for EKS endpoints you have already created:
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the Infrastructure for which you want to modify EKS endpoints.
Click the Settings tab.
From the Setting Value list box, select the desired type of visibility for endpoints (private, public, or both).
Click Set.
For more information about AWS Endpoints, see the AWS documentation.
Follow the steps in the section Creating an Infrastructure. Before clicking Create, specify EKS Endpoint Visibility.
If you want to enable private visibility after you have previously created an Infrastructure with only public visibility, follow these steps to enable private visibility and VPN access.
In the EKS Endpoint Visibility row, in the Actions column, click the ( ) icon and select Update Setting. The Infra - Set Custom Data pane displays.
Click Set. When you select Private or Both public and private, the Allow VPN Access to the EKS Cluster option will be enabled.
In the EKS Endpoint Visibility row, in the Actions column, click the ( ) icon and select Update Setting. The Infra - Set Custom Data pane displays.
Enable logging functionality for EKS
Follow the steps in the section Creating an Infrastructure. Before clicking Create, select the EKS Logging list box and select one or more ControlPlane Log types.
Enable EKS logging for an Infrastructure that you have created.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
From the Name column, select the Infrastructure for which you want to enable EKS logging.
Click the Settings tab.
Click Add. The Infra - Set Custom Data pane displays.
From the Setting Name list box, select EKS ControlPlane Logs.
In the Setting Value field, enter: api;audit;authenticator;controllerManager;scheduler
Click Set. The EKS ControlPlane Logs setting is displayed in the Settings tab.
Enable the Cluster Autoscaler for the Kubernetes cluster
The Cluster AutoScaler automatically adjusts the number of nodes in your cluster when Pods fail or are rescheduled onto other nodes.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Select the Infrastructure with which you want to use Cluster AutoScaler.
Click the Settings tab.
Click Add. The Add Infra - Set Custom Data pane displays.
From the Setting Name list box, select Cluster Autoscaler.
Select Enable to enable the Cluster Autoscaler.
Click Set. Your configuration is displayed in the Settings tab.
Enable Elastic Container Service (ECS) for AWS when creating a DuploCloud Infrastructure
Setting up an Infrastructure that uses ECS is similar to creating an Infrastructure that uses EKS, except that during creation, instead of selecting Enable EKS, you select Enable ECS Cluster.
For information about ECS Services, see the Containers and Services documentation.
Up to one ECS cluster (0 or 1) is supported for each DuploCloud Infrastructure.
Creating an Infrastructure with ECS can take some time. See the Infrastructure section for details about other elements on the Add Infrastructure form.
How Infrastructures and Plans work together to create a VPC
Infrastructures are abstractions that allow you to create a Virtual Private Cloud (VPC) instance in the DuploCloud Portal. When you create an Infrastructure, a Plan is automatically generated to supply the network configuration necessary for your Infrastructure to run.
DuploCloud automatically deploys NAT gateways across availability zones (AZs) for all Infrastructures that you create with DuploCloud.
You can customize your EKS configuration:
Enable EKS endpoints, logs, Cluster Autoscaler, and more. See the EKS Setup topics for information about configuration options.
You can customize your ECS configuration. See the ECS Setup topic for information about configuration options.
When you create a DuploCloud Infrastructure, you create an isolated environment that maps to a Kubernetes cluster.
Create a DuploCloud Infrastructure in the DuploCloud Portal:
Select Administrator -> Infrastructure from the navigation menu.
Click Add.
Define the Infrastructure by completing the fields on the Add Infrastructure form.
Select Enable EKS to enable EKS for the Infrastructure, or select Enable ECS Cluster to enable an ECS Cluster during Infrastructure creation.
Optionally, select Advanced Options to specify additional configurations (such as Public and Private CIDR Endpoints).
Click Create. The Infrastructure is created and is listed on the Infrastructure page.
When you create the Infrastructure, DuploCloud creates the following components:
VPC with 2 subnets (private, public) in each availability zone
Required security groups
NAT Gateway
Internet Gateway
Route tables
VPC peering with the master VPC, which is initially configured in DuploCloud
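The list of components above starts with a VPC carrying one public and one private subnet per availability zone. The sketch below shows how a VPC CIDR can be carved into such subnets; the CIDRs, AZ names, and helper are illustrative, and DuploCloud's actual allocation scheme may differ:

```python
import ipaddress
import math

def split_vpc(vpc_cidr: str, azs: list[str]) -> dict:
    """Carve a VPC CIDR into one public and one private subnet per AZ.

    Illustrative sketch only -- not DuploCloud's actual allocation logic.
    """
    vpc = ipaddress.ip_network(vpc_cidr)
    # Two subnets (public + private) are needed for each availability zone.
    bits = math.ceil(math.log2(2 * len(azs)))
    subnets = iter(vpc.subnets(prefixlen_diff=bits))
    return {
        az: {"public": str(next(subnets)), "private": str(next(subnets))}
        for az in azs
    }

layout = split_vpc("10.20.0.0/16", ["us-west-2a", "us-west-2b"])
```

With two AZs, a /16 VPC is split into four /18 subnets, alternating public and private per zone.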
Cloud providers limit the number of Infrastructures that can run in each region. Refer to your cloud provider for further guidelines on the number of Infrastructures you can create.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the Infrastructure containing settings that you want to view.
Click the Settings tab. The Infrastructure settings display.
Once the Infrastructure is created, DuploCloud automatically creates a Plan (with the same Infrastructure name) with the Infrastructure configuration. The Plan is used to create Tenants.
Up to one EKS or ECS cluster (0 or 1) is supported for each DuploCloud Infrastructure.
Enable ECS Elasticsearch logging for containers at the Tenant level
To generate logs for AWS ECS clusters, you must first create an Elasticsearch logging container. Once auditing is enabled, your container logging data can be captured for analysis.
Define at least one service and container.
Enable the Audit feature.
In the DuploCloud Portal, navigate to Administrator -> Tenant. The Tenant page displays.
From the Name column, select the Tenant that is running the container for which you want to enable logging.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Other. The Configuration field displays.
In the Configuration field, enter Enable ECS ElasticSearch Logging.
In the field below the Configuration field, enter True.
Click Add. In the Settings tab, Enable ECS ElasticSearch Logging displays a Value of True.
You can verify that ECS logging is enabled for a specific container.
In the DuploCloud Portal, navigate to Cloud Services -> ECS.
In the Task Definitions tab, select the Task Definition Family Name in which your container is defined.
Click the Task Definitions tab.
In the Container - 1 area, in the Container Other Config field, your LogConfiguration is displayed.
In the Container - 2 area, another container is created by DuploCloud with the name log_router.
Click the menu icon ( ) in the row of the Task Definition and select Edit Task Definition. The Edit Task Definition page displays your defined Containers.
Add rules to custom configure your AWS Security Groups in the DuploCloud Portal
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Select the Infrastructure for which you want to add or view Security Group rules from the Name column.
Click the Security Group Rules tab.
Click Add. The Add Infrastructure Security pane displays.
From the Source Type list box, select Tenant or IP Address.
From the Tenant list box, select the Tenant for which you want to set up the Security Rule.
Select the protocol from the Protocol list box.
In the Port Range field, specify the range of ports for access (for example, 1-65535).
Optionally, add a Description of the rule you are adding.
Click Add.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Select the Infrastructure from the Name column.
Click the Security Group Rules tab. Security Rules are displayed.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Select the Infrastructure from the Name column.
Click the Security Group Rules tab. Security Rules are displayed in rows.
In the first column of the Security Group row, click the Options Menu Icon ( ) and select Delete.
Upgrade the Elastic Kubernetes Service (EKS) version for AWS
AWS frequently updates the EKS version based on new features that are available in the Kubernetes platform. DuploCloud automates this upgrade in the DuploCloud Portal.
IMPORTANT: An EKS version upgrade can cause downtime to your application depending on the number of replicas you have configured for your services. Schedule this upgrade outside of your business hours to minimize disruption.
DuploCloud notifies users when an upgrade is planned. The upgrade process follows these steps:
A new EKS version is released.
DuploCloud adds support for the new EKS version.
DuploCloud tests all changes and new features thoroughly.
DuploCloud rolls out support for the new EKS version in a platform release.
The user updates the EKS version.
Updating the EKS version:
Updates the EKS Control Plane to the latest version.
Updates all add-ons and components.
Relaunches all Hosts to deploy the latest version on all nodes.
After the upgrade process completes successfully, you can assign allocation tags to Hosts.
Click Administrator -> Infrastructure.
Select the Infrastructure that you want to upgrade to the latest EKS version.
Select the EKS tab. If an upgrade is available for the Infrastructure, an Upgrade link appears in the Value column.
Click the Upgrade link. The Upgrade EKS Cluster pane displays.
From the Target Version list box, select the version to which you want to upgrade.
From the Host Upgrade Action list box, select the method by which you want to upgrade hosts.
Click Start. The upgrade process begins.
Click Administrator -> Infrastructure.
Select the Infrastructure with components you want to upgrade.
Select the EKS tab. If an upgrade is available for the Infrastructure components, an Upgrade Components link appears in the Value column.
Click the Upgrade Components link. The Upgrade EKS Cluster Components pane displays.
From the Host Upgrade Action list box, select the method by which you want to upgrade hosts.
Click Start. The upgrade process begins.
The EKS Upgrade Details page displays that the upgrade is In Progress.
Find more details about the upgrade by selecting your Infrastructure from the Infrastructure page. Click the EKS tab, and then click Show Details.
When you click Show Details, the EKS Upgrade Details page displays the progress of updates for all versions and Hosts. Green checkmarks indicate successful completion in the Status list. Red Xs indicate Actions you must take to complete the upgrade process.
If any of your Hosts use allocation tags, you must assign allocation tags to the Hosts:
After your Hosts are online and available, navigate to Cloud Services -> Hosts.
Select the host group tab (EC2, ASG, etc.) on the Hosts screen.
Click the Add button.
Name the Host and provide other configuration details on the Add Host form.
Select Advanced Options.
Edit the Allocation Tag field.
Click Create and define your allocation tags.
Click Add to assign the allocation tags to the Host.
For additional information about the EKS version upgrade process with DuploCloud, see the AWS FAQs section on EKS version upgrades.
Securely access AWS Services using VPC endpoints
DuploCloud allows you to specify predefined AWS endpoints for your Infrastructure in the DuploCloud Portal.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Select the Infrastructure to which you want to add VPC endpoints.
Click the Endpoints tab.
Click Add. The Infra - Create VPC Endpoints pane displays.
From the VPC Endpoint Service list box, select the endpoint service you want to add.
Click Create. In the Endpoints tab, the VPC Endpoint ID of your selected service displays.
Using Tenants in DuploCloud
In AWS, cloud features such as AWS resource groups, AWS IAM, AWS security groups, and KMS keys, as well as Kubernetes Namespaces, are exposed through Tenants, which reference their configurations.
When you create Tenants in an Infrastructure, a namespace is created in the Kubernetes cluster with the name duploservices-TENANT_NAME.
All application components within the Analytics Tenant are placed in the duploservices-analytics namespace. Since nodes cannot be part of a Kubernetes Namespace, DuploCloud creates a tenantname label for all the nodes that are launched within the Tenant. For example, a node launched in the Analytics Tenant is labeled tenantname: duploservices-analytics.
Any Pods that are launched using the DuploCloud UI have an appropriate Kubernetes nodeSelector that ties the Pod to the nodes within the Tenant. If you are deploying via kubectl, ensure that your deployment uses the proper nodeSelector.
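For example, a Deployment applied with kubectl into the Analytics Tenant's namespace might pin its Pods to that Tenant's nodes as follows; the Deployment name and container image are illustrative, while the namespace and tenantname label follow the pattern described above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics-web                      # illustrative name
  namespace: duploservices-analytics       # Tenant's namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: analytics-web
  template:
    metadata:
      labels:
        app: analytics-web
    spec:
      nodeSelector:
        tenantname: duploservices-analytics   # pins Pods to this Tenant's nodes
      containers:
        - name: web
          image: nginx:stable              # illustrative image
          ports:
            - containerPort: 80
```

Without the nodeSelector, the scheduler could place Pods on nodes belonging to other Tenants sharing the same cluster.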
At the logical level, the Tenant is:
A Container of resources: All resources (except ones corresponding to the Infrastructure) are created within the Tenant. If a tenant is deleted, all the resources in the Tenant are terminated.
A Security Boundary: All resources within a Tenant can talk to each other. For example, a Docker container deployed in an EKS instance within the Tenant has access to S3 storage and RDS databases within the same Tenant. By default, RDS database instances in another Tenant cannot be reached. Tenants can expose endpoints to each other using load balancers or explicit inter-Tenant security groups and identity management policies.
User Access Control: Self-service is the bedrock of the DuploCloud platform. To that end, users can be granted Tenant level access. For example, John and Jim are developers who can be granted access to the DEV01 tenant, Joe is an administrator who has access to all tenants, and Anna is a data scientist who has access only to the DATASCI tenant.
A Billing Unit: Because the Tenant is a container of resources, all resources in the Tenant are tagged with the Tenant's name in the cloud provider, making it easy to segregate usage by Tenant.
A mechanism for alerting: All alerts represent Faults in any resource within the Tenants.
A mechanism for logging: Each Tenant has its unique set of logs.
A mechanism for metrics: Each Tenant has its unique set of metrics.
Many DuploCloud customers create at least two Tenants for both their production and non-production cloud environments (Infrastructures).
You can map Tenants in each or all of your development, testing, staging, Quality Assurance (QA), and production environments.
For example:
Production Infrastructure
Pre-production Tenant - for preparing or reviewing production code
Production Tenant - for deploying tested code
Non-production Infrastructure
Development Tenant - for writing and reviewing code
Quality Assurance Tenant - for automated testing
In larger organizations, some customers create Tenants based on application environments, such as creating a tenant for Data Science applications, another for web applications, etc.
Tenants are sometimes created to isolate a single customer workload, allowing more granular monitoring of performance, the flexibility of scaling, or tighter security. This is referred to as a single-Tenant setup. In this case, a DuploCloud Tenant maps to an environment used exclusively by the end client.
When you have a large set of applications that different teams access, it is helpful to map Tenants to team workloads. For example, you could create Tenants for Dev-analytics, Stage-analytics, and so on.
While the Infrastructure provides abstraction and isolation at the Virtual Private Cloud (VPC) and Kubernetes level, the Tenant supplies the next level of isolation, implemented in EKS by segregating Tenants using the following constructs per Tenant:
A set of security groups
An identity management role and profile
A Kubernetes Namespace, a read-only service account, and a write service account
KMS Key
PEM file
EKS Worker nodes or virtual machines (VMs) created within a Tenant are given a label with the Tenant Name, as are the node selectors and namespaces. Consequently, even at the worker node level, two Tenants achieve complete isolation and independence, even though they may share the same Kubernetes cluster through a shared Infrastructure.
To add a Tenant, navigate to Administrator -> Tenant in the DuploCloud Portal and click Add.
An AWS VPC endpoint creates a private connection to supported AWS services and to VPC endpoint services powered by AWS PrivateLink. Amazon VPC instances do not require public IP addresses to communicate with the resources of the service. Traffic between an Amazon VPC and a service does not leave the Amazon network.
VPC endpoints are virtual devices. They are horizontally scaled, redundant, and highly available Amazon VPC components that allow communication between instances in an Amazon VPC and services without imposing availability risks or bandwidth constraints on network traffic. There are two types of VPC endpoints: interface endpoints and gateway endpoints.
For information about granting Cross-Tenant access to resources, see the Cross-Tenant access documentation.
Each Tenant is mapped to a Namespace in Kubernetes. For example, if a Tenant is called Analytics in DuploCloud, the Kubernetes Namespace is called duploservices-analytics.
Manage Tenant session duration settings in the DuploCloud Portal
In the DuploCloud Portal, configure the session duration time for all Tenants or for a single Tenant. At the end of a session, the Tenant ceases to be active for the user, application, or Service.
For more information about IAM roles and session times in relation to a user, application, or Service, see the AWS Documentation.
In the DuploCloud Portal, navigate to Administrator -> System Settings. The System Settings page displays.
Click the System Config tab.
Click Add. The App Config pane displays.
From the Config Type list box, select AppConfig.
From the Key list box, select AWS Role Max Session Duration.
From the Select Duration Hour list box, select the maximum session time in hours or set a Custom Duration in seconds.
Click Submit. The AWS Role Max Session Duration and Value are displayed in the System Config tab. Note that the Value you set for maximum session time in hours is displayed in seconds. You can Delete or Update the setting in the row's Actions menu.
In the DuploCloud Portal, navigate to Administrator -> Tenants. The Tenants page displays.
From the Name column, select the Tenant for which you want to configure session duration time.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select AWS Role Max Session Duration.
From the Select Duration Hour list box, select the maximum session time in hours or set a Custom Duration in seconds.
Click Add. The AWS Role Max Session Duration and Value are displayed in the Settings tab. Note that the Value you set for maximum session time in hours is displayed in seconds. You can Delete or Update the setting in the row's Actions menu.
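Both procedures above note that a duration picked in hours is displayed in seconds. The conversion is a straight multiply; the function name is illustrative, and the 1-12 hour bound reflects the range AWS allows for an IAM role's maximum session duration:

```python
def max_session_duration_seconds(hours: int) -> int:
    """Convert the hours chosen in the Portal to the seconds shown in Settings.

    Illustrative helper; the Portal performs this conversion for you.
    """
    # AWS permits role maximum session durations of 1 to 12 hours.
    if not 1 <= hours <= 12:
        raise ValueError("AWS role max session duration must be 1-12 hours")
    return hours * 3600
```

So a 12-hour maximum session appears in the Settings tab as 43200 seconds.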
Manage Tenant expiry settings in the DuploCloud Portal
In the DuploCloud Portal, configure an expiration time for a Tenant. At the set expiration time, the Tenant and associated resources are deleted.
In the DuploCloud Portal, navigate to Administrator -> Tenants. The Tenants page displays.
From the Name column, select the Tenant for which you want to configure an expiration time.
From the Actions list box, select Set Tenant Expiration. The Tenant - Set Tenant Expiration pane displays.
Select the date and time (using your local time zone) when you want the Tenant to expire.
Click Set. At the configured day and time, the Tenant and associated resources will be deleted.
The Set Tenant Expiration option is not available for Default or Compliance Tenants.
Adding EC2 hosts in DuploCloud AWS
Once you have the Infrastructure (Networking, Kubernetes cluster, and other common configurations) and an environment (Tenant) set up, the next step is to launch EC2 virtual machines (VMs). You create VMs to be:
EKS Worker Nodes
Worker Nodes (Docker Host), if the built-in container orchestration is used.
DuploCloud AWS requires at least one Host (VM) to be defined per AWS account.
You also create VMs when regular nodes are not part of any container orchestration; for example, when a user manually connects and installs apps, such as running Microsoft SQL Server or an IIS application in a VM, and similar custom use cases.
While lower-level details like IAM roles, security groups, and others are abstracted away from the user (they are derived from the Tenant), standard application-centric inputs are still required. These include a Name, Instance size, Availability Zone, Disk size, Image ID, and so on. Most of these are optional; some are published as a list of user-friendly choices by the admin in the Plan (the Image or AMI ID is one such example). Other than these AWS-centric parameters, there are two DuploCloud platform-specific values to provide:
Agent Platform: This is applicable if the VM is going to be used as a host for container orchestration by the platform. The choices are:
EKS Linux: If the VM is to be added to the EKS cluster (i.e., EKS is the chosen approach for container orchestration)
Linux Docker: If the VM is to be used for hosting Linux containers using the built-in container orchestration
Docker Windows: If the VM is to be used for hosting Windows containers using the built-in container orchestration
None: If the VM is to be used for non-container-orchestration purposes and its contents will be self-managed by the user
Allocation Tags (Optional): If the VM is used for containers, you have the option to set a label on the VM. This label can then be specified during Docker app deployment to ensure that application containers are pinned to a specific set of nodes. This gives you the ability to split a Tenant further into separate pools of servers and deploy applications on them.
If a VM is being used for container orchestration, make sure that the Image ID corresponds to an image for that container orchestration. This should already be set up for you, and the list box will have self-descriptive Image IDs, for example, "EKS Worker", "Duplo-Docker", "Windows Docker", and so on. Anything that starts with Duplo is an image for the built-in container orchestration.
Connect an EC2 instance with SSH by Session ID or by downloading a key
Once an EC2 Instance is created, you connect it with SSH either by using Session ID or by downloading a key.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts and select the host to which you want to connect.
After you select the Host, on the Host's page, click the Actions menu and select SSH. A connection to the Host by session ID launches in a new browser tab.
After you select the Host, on the Host's page click the Actions menu and select Connect -> Connection Details. The Connection Info for Host window opens. Follow the instructions to connect to the server.
Click Download Key.
If you don't want to display the Download Key button, disable the button's visibility.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
From the Config Type list box, select Flags.
From the Key list box, select Disable SSH Key Download.
From the Value list box, select true.
Click Submit.
Add a Host (virtual machine) in the DuploCloud Portal.
DuploCloud AWS supports EC2, ASG, and BYOH (Bring Your Own Host) types. Use BYOH for any VMs that are not EC2 or ASG.
Ensure you have selected the appropriate Tenant from the Tenant list box at the top of the DuploCloud Portal.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
Click the tab that corresponds to the type of Host you want to create (EC2, ASG, or BYOH).
Click Add. The Host that you added is displayed in the appropriate tab (EC2, ASG, or BYOH).
To connect to the Host using SSH, follow this procedure.
The EKS Image ID is the image published by AWS specifically for an EKS worker in the version of Kubernetes deployed at Infrastructure creation time.
If no Image ID is available with a prefix of EKS, copy the AMI ID for the desired EKS version by referring to this AWS documentation. Select Other from the Image ID list box and paste the copied AMI ID in the Other Image ID field. Contact the DuploCloud Support team via your Slack channel if you have questions or issues.
See Kubernetes StorageClass and PVC.
From the DuploCloud Portal, navigate to Cloud Services -> Hosts.
Select the Host name from the list.
From the Actions list box, you can select Connect, Host Settings, or Host State to perform the following supported actions:
If you add custom code for EC2 or ASG Hosts using the Base64 Data field, your custom code overrides the code needed to start the EC2 or ASG Hosts and the Hosts cannot connect to EKS. Instead, use this procedure to add custom code directly in EKS.
SSH - Establish an SSH connection to work directly in the AWS Console.
Connection Details - View connection details (connection type, address, user name, visibility) and download the key.
Host Details - View Host details in the Host Details YAML screen.
Create AMI - Create an AMI based on the Host.
Create Snapshot - Create a snapshot of the Host at a specific point in time.
Update User Data - Update the Host user data.
Change Instance Size - Resize a Host instance to accommodate the workload.
Update Auto Reboot Status Check - Enable or disable Auto Reboot, and set the number of minutes after the AWS Instance Status Check fails before automatically rebooting.
Start - Start the Host.
Reboot - Reboot the Host.
Stop - Stop the Host.
Hibernate - Hibernate (temporarily freeze) the Host.
Terminate Host - Terminate the Host.
Autoscale your Host workloads in DuploCloud
DuploCloud supports various ways to scale Host workloads, depending on the underlying AWS services being used.
Control placement of EC2 instances on a physical server with a Dedicated Host
Use Dedicated Hosts to launch Amazon EC2 instances with additional visibility and control over how the instances are placed on a physical server, enabling you to consistently use the same physical server, if needed.
Configure the DuploCloud Portal to allow for the creation of Dedicated Hosts.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
In the Config Type field, select Flags.
In the Key field, select Allow Dedicated Host Sharing.
In the Value field, select true.
Click Submit. The configuration is displayed in the System Config tab.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the EC2 tab, click Add. The Add Host page displays.
After completing the required fields to configure your Host, select Advanced Options. The advanced options display.
In the Dedicated Host ID field, enter the ID of the Dedicated Host. The ID is used to launch a specific instance on a Dedicated Host.
Click Add. The Dedicated Host is displayed in the EC2 tab.
After you create Dedicated Hosts, view them by doing the following:
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the EC2 tab, select the Host from the Name column. The Dedicated Host ID card on the Host page displays the ID of the Dedicated Host.
Create Autoscaling groups to scale EC2 instances to your workload
Configure Autoscaling Groups (ASG) to ensure the application load is scaled based on the number of EC2 instances configured. Autoscaling detects unhealthy instances and launches new EC2 instances. ASG is also cost-effective as EC2 Instances are dynamically created per the application requirement within minimum and maximum count limits.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the ASG tab, click Add. The Add ASG page is displayed.
In the Friendly Name field, enter the name of the ASG.
Select Availability Zone and Instance Type.
In the Instance Count field, enter the desired capacity for the Autoscaling group.
In the Minimum Instances field, enter the minimum number of instances. The Autoscaling group ensures that the total number of instances is always greater than or equal to the minimum number of instances.
In the Maximum Instances field, enter the maximum number of instances. The Autoscaling group ensures that the total number of instances is always less than or equal to the maximum number of instances.
Select Use for Cluster Autoscaling.
Select Advanced Options.
Select the appropriate Image ID.
From the Agent Platform list box, select Linux Docker/Native to run a Docker service or select EKS Linux to run services using EKS. Fill in additional fields as needed for your ASG.
Click Add. Your ASG is added and displayed in the ASG tab.
View the Hosts created as part of ASG creation from the ASG Hosts tab.
The Use for Cluster Autoscaling option will not be available until you enable the .
Optionally, enable .
Optionally, for EKS only, enable .
Refer to AWS for detailed steps on creating Scaling policies for the Autoscaling Group.
The DuploCloud Portal provides the ability to configure Services based on the platforms EKS Linux and Linux Docker/Native. Select the ASG based on the platform used when creating services and Autoscaling groups. Optionally, if you previously , you can configure the Service to use Spot Instances by selecting Tolerate spot instances.
Scale to or from zero when creating Autoscaling Groups in DuploCloud
DuploCloud allows you to scale to or from zero in EKS clusters by enabling the Scale from zero option in the Advanced Options when creating an Autoscaling Group.
Scaling to or from zero with AWS Autoscaling Groups (ASG) offers several advantages depending on the context and requirements of your application:
Cost Savings: By scaling down to zero instances during periods of low demand, you minimize costs associated with running and maintaining instances. This pay-as-you-go model ensures you only pay for resources when they are actively being used.
Resource Efficiency: Scaling to zero ensures that resources are not wasted during periods of low demand. By terminating instances when they are not needed, you optimize resource utilization and prevent over-provisioning, leading to improved efficiency and reduced infrastructure costs.
Flexibility: Scaling to zero provides the flexibility to dynamically adjust your infrastructure in response to changes in workload. It allows you to efficiently allocate resources based on demand, ensuring that your application can scale up or down seamlessly to meet varying levels of traffic.
Simplified Management: With automatic scaling to zero, you can streamline management tasks associated with provisioning and de-provisioning instances. The ASG handles scaling operations automatically, reducing the need for manual intervention and simplifying infrastructure management.
Rapid Response to Increased Demand: Scaling from zero allows your infrastructure to quickly respond to spikes in traffic or sudden increases in workload. By automatically launching instances as needed, you ensure that your application can handle surges in demand without experiencing performance degradation or downtime.
Improved Availability: Scaling from zero helps maintain optimal availability and performance for your application by ensuring that sufficient resources are available to handle incoming requests. This proactive approach to scaling helps prevent resource constraints and ensures a consistent user experience even during peak usage periods.
Enhanced Scalability: Scaling from zero enables your infrastructure to scale out horizontally, adding additional instances as demand grows. This horizontal scalability allows you to seamlessly handle increases in workload and accommodate a growing user base without experiencing bottlenecks or performance issues.
Elasticity: Scaling from zero provides elasticity to your infrastructure, allowing it to expand and contract based on demand. This elasticity ensures that you can efficiently allocate resources to match changing workload patterns, resulting in optimal resource utilization and cost efficiency.
Create Autoscaling Groups (ASG) with Spot Instances in the DuploCloud platform
Spot Instances are spare capacity priced at a significant discount compared to On-Demand Instances. Users specify the maximum price (bid) they will pay per hour for a Spot Instance. The instance is launched if the current Spot price is below the user's bid. Since Spot Instances can be interrupted when spare capacity is unavailable, applications using Spot Instances must be fault-tolerant and able to handle interruptions.
Spot Instances are only supported for Autoscaling Groups (ASG) with EKS.
Follow the steps in the section Creating Autoscaling Groups (ASG). Before clicking Add, click the box to access Advanced Options. Enable Use Spot Instances and enter your bid, in dollars, in the Maximum Spot Price field.
Follow the steps in Creating Services using Autoscaling Groups. On the Add Service page, under Basic Options, select Tolerate spot instances.
Tolerations are entered by default on the Add Service page, under Advanced Options, in the Other Container Config field.
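The toleration injected for spot-tolerant Services is a standard Kubernetes toleration. As a rough sketch (the exact taint key and value below are assumptions, not necessarily what DuploCloud generates), the entry in the Other Container Config field might look like:

```yaml
# Illustrative only: the taint key/value are placeholders,
# not necessarily the exact ones DuploCloud writes.
Tolerations:
  - key: "spotInstance"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```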
Automatically reboot a host upon StatusCheck faults or Host disconnection
Configure hosts to be rebooted automatically if the following occurs:
EC2 Status Check: Applicable for Docker Native and EKS Nodes. The Host is rebooted in the specified interval when a StatusCheck fault is identified.
Kubernetes (K8s) Nodes are disconnected: Applicable for EKS Nodes only. The Host is rebooted in the specified interval when a Host Disconnected fault is identified.
You can configure host Auto Reboot features for a particular Tenant and for a Host.
When you configure an Auto Reboot feature for both Tenant and Host, the Host level configuration takes precedence over the configuration at the Tenant level.
Use the following procedures to configure Auto Reboot at the Tenant level.
Configure the Auto Reboot feature at the Tenant level for Docker Native and EKS Node-based Hosts, to reboot when a StatusCheck fault is identified.
In the DuploCloud Portal, navigate to Administrator -> Tenant. The Tenant page displays.
Select a Tenant with access to the Host for which you want to configure Auto Reboot.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Enable Auto Reboot EC2 status check.
In the field below the Select Feature list box, enter the time interval in minutes after which the host automatically reboots after a StatusCheck fault is identified. Enter zero (0) to disable this configuration.
Click Add. The configuration is displayed in the Settings tab.
Configure the Auto Reboot feature at the Tenant level for EKS node-based Hosts, to reboot when a Host Disconnected fault is identified.
In the DuploCloud Portal, navigate to Administrator -> Tenant. The Tenant page displays.
Select a Tenant with access to the Host for which you want to configure Auto Reboot.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Enable Auto Reboot K8s Nodes if disconnected.
In the field below the Select Feature list box, enter the time interval in minutes after which the host automatically reboots when a Host Disconnected fault is identified. Enter zero (0) to disable this configuration.
Click Add. The configuration is displayed in the Settings tab.
Use the following procedures to configure Auto Reboot at the Host level.
Configure the Auto Reboot feature at the Host level for Docker Native and EKS Node-based Hosts, to reboot when a StatusCheck fault is identified.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
Click the appropriate tab for your Host type and select the Host for which you want to configure Auto Reboot.
Click the Actions menu and select Host Settings -> Update Auto Reboot Status Check. The Set Auto Reboot Status Check Time pane displays.
In the Auto Reboot Status Check field, enter the time interval in minutes after which the host automatically reboots after a StatusCheck fault is identified. Enter zero (0) to disable this configuration.
Click Set.
Configure the Auto Reboot feature at the Host level for EKS node-based Hosts, to reboot when a Host Disconnected fault is identified.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
Click the appropriate tab for your Host type and select the Host for which you want to configure Auto Reboot.
Click the Actions menu and select Host Settings -> Update Auto Reboot Disconnected. The Set Auto Reboot Status Check Time pane displays.
In the Auto Reboot Time field, enter the time interval in minutes after which the host automatically reboots when a Host Disconnected fault is identified. Enter zero (0) to disable this configuration.
Click Set.
To remove or edit an Auto Reboot Tenant-level configuration, click the () icon and select Edit Setting or Remove Setting.
ECS Autoscaling has the ability to scale the desired count of tasks for the ECS Service configured in your infrastructure. Average CPU/Memory metrics of your tasks are used to increase/decrease the desired count value.
Navigate to Cloud Services -> ECS. Select the ECS Task Definition for which Autoscaling needs to be enabled, and click Add Scaling Target.
Set the MinCapacity (minimum value 2) and MaxCapacity to complete the configuration.
Once the Scaling Target is configured, the next step is to add a Scaling Policy.
Provide the following details:
Policy Name - The name of the scaling policy.
Policy Dimension - The metric type tracked by the target tracking scaling policy. Select from the dropdown.
Target Value - The target value for the metric.
ScaleIn Cooldown - The amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start.
ScaleOut Cooldown - The amount of time, in seconds, after a scale-out activity completes before another scale-out activity can start.
Disable ScaleIn - Disabling scale-in ensures this target tracking scaling policy is never used to scale in the Autoscaling group.
This step creates the target tracking scaling policy and attaches it to the Autoscaling group.
View the Scaling Target and Policy details from the DuploCloud Portal. Update and Delete operations are also supported from this view.
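The fields above map onto an Application Auto Scaling target tracking configuration. As a hedged sketch, the equivalent JSON with illustrative values (60% average CPU target, 300-second scale-in and 60-second scale-out cooldowns) would look roughly like:

```json
{
  "TargetValue": 60.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
  },
  "ScaleInCooldown": 300,
  "ScaleOutCooldown": 60,
  "DisableScaleIn": false
}
```

The memory-based equivalent uses `ECSServiceAverageMemoryUtilization` as the metric type.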
Save resources by hibernating EC2 hosts while maintaining persistence
When you hibernate an instance, Amazon EC2 signals the operating system to perform hibernation (suspend-to-disk). Hibernation saves the contents from the instance memory (RAM) to your Amazon Elastic Block Store (Amazon EBS) root volume. Amazon EC2 persists the instance's EBS root volume and any attached EBS data volumes.
For more information on Hibernation, see the AWS Documentation.
Before you can hibernate an EC2 Host in DuploCloud, you must configure the EC2 host at launch to use the Hibernation feature in AWS.
Follow the steps in the AWS documentation before attempting Hibernation of EC2 Host instances with DuploCloud.
After you configure your EC2 hosts for Hibernation in AWS:
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the EC2 tab, select the Host you want to Hibernate.
Click the Actions menu, and select Hibernate Host. A confirmation message displays.
Click Confirm. On the EC2 tab, the host's status displays as hibernated.
Add and view AMIs in AWS
You can create Amazon Machine Images (AMIs) in the DuploCloud Portal. An AMI is a template that contains the information required to launch an instance, such as an operating system, application software, and data. EC2 creates a virtual server instance; the AMI is the image from which the EC2 virtual machine is launched.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
Select the Host on which you want to base your AMI from the Name column.
Click the Actions menu and select Host Settings -> Create AMI. The Set AMI pane displays.
In the AMI Name field, enter the name of the AMI.
Click Create.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
Select the AMI tab. Your AMIs are displayed on the AMI page. Selecting an AMI from this page displays the Overview and Details tabs for more information.
You can disable host creation by non-administrators (Users) for custom AMIs by configuring the option in DuploCloud.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
In the Config Type list box, select Flags.
In the Key list box, select Disable Host Creation with Custom AMI.
In the Value list box, select true.
Click Submit.
When this setting is configured, the Other option in the Image ID list box on the Add Host page is disabled, preventing hosts with custom AMIs from being created.
Backup your hosts (VMs)
Create Virtual Machine (VM) snapshots in the DuploCloud Portal.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
From the Name column, select the Host you want to backup.
Click Actions and select Snapshot.
Once you take a VM Snapshot, the snapshot displays as an available Image ID when you create a Host.
Taints can be issued by Kubernetes when a Node becomes unreachable or is not tolerated by certain workloads. As Kubernetes can initiate Taints, you can as well; for example, to isolate a node for the purpose of applying maintenance, such as an upgrade, using the kubectl taint command.
In the EC2 tab, check for hosts with a Status of stopped and tainted. If these statuses are present, the connection to the underlying Node is lost and you should take appropriate action to restore the connection. See the kubectl taint reference for available commands, flags, and examples to resolve the Taint.
To find Tainted Nodes, use the kubectl get nodes command, followed by the kubectl describe node <NODE_NAME> command. You can get Shell Access to Kubernetes within the DuploCloud Portal and issue kubectl console commands from the Portal.
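The steps above can be sketched as the following commands; <NODE_NAME> is a placeholder, and the taint key in the removal example is illustrative (it is one taint Kubernetes applies automatically to unreachable nodes):

```shell
# List nodes along with any taint keys
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'

# Inspect a specific node's taints (replace <NODE_NAME>)
kubectl describe node <NODE_NAME> | grep -A 3 'Taints'

# Remove a taint by appending "-" to the key:effect pair
kubectl taint nodes <NODE_NAME> node.kubernetes.io/unreachable:NoExecute-
```

These commands require kubectl configured against the cluster, for example via the Portal's Kubernetes shell access.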
Logging for AWS in the DuploCloud Platform
The DuploCloud Platform performs centralized logging for Docker-based applications. For the native and Kubernetes container orchestrations, this is implemented using OpenSearch and Kibana with Elastic Filebeat as the log collector.
For ECS Fargate, AWS Lambda, and AWS SageMaker Jobs, the platform integrates with CloudWatch, automatically setting up Log Groups and making them viewable from the DuploCloud Portal.
No setup is required to enable logging for ECS Fargate, Lambda, or AWS SageMaker Jobs. DuploCloud automatically sets up CloudWatch log groups and provides a menu next to each resource.
Disable CloudFormation's SourceDestCheck in EC2 Host metadata
The AWS CloudFormation template contains a Source Destination Check (SourceDestCheck parameter) that ensures that an EC2 Host instance is either the source or the destination of any traffic the instance receives. In the DuploCloud Portal, this parameter is set to true by default, enabling source and destination checks.
There are times when you may want to override this default behavior, such as when an EC2 instance runs services such as network address translation, routing, or firewalls. To override the default behavior and set the SourceDestCheck parameter to false, use this procedure.
Disable SourceDestCheck in the DuploCloud Portal
Set AWS CloudFormation SourceDestCheck to false for an EC2 Host:
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the EC2 tab, select the Host for which you want to disable SourceDestCheck.
Click the Metadata tab.
Click Add. The Add Metadata pane displays.
In the Key field, enter SourceDestCheck.
In the Value field, enter False.
Click Create. The Key/Value pair is displayed in the Metadata tab.
Metrics setup comprises two parts:
Control Plane: This comprises a Grafana service for dashboards and a Prometheus container for fetching VM and container metrics. Cloud service metrics are pulled by Grafana directly from AWS without requiring Prometheus.
To enable Metrics, navigate to Administrator -> Observability -> Settings. Select the Monitoring tab and click Enable Monitoring.
Metrics Collector: Once the Metrics control plane is ready, i.e., the Grafana and Prometheus services have been deployed and are active, you can enable Metrics on a per-Tenant basis. Navigate to Administrator -> Observability -> Settings, select the Monitoring tab, and use the toggle buttons to enable monitoring for individual Tenants. This triggers the deployment of Node Exporter and cAdvisor containers on each Host in the Tenant, similar to how log collectors like Filebeat were deployed for fetching central logs and sending them to OpenSearch.
Set up logging for the DuploCloud Portal
If you need to make changes to the Control Plane Configuration, follow this procedure to do so, before enabling logging. Note that you cannot modify the Control Plane Configuration after you set up logging.
Docker applications write logs to stdout; DuploCloud collects these logs, places them in a Host directory, mounts that directory into Filebeat containers, and sends the logs to AWS Elasticsearch. If you need to customize log collection, for example because you use folders other than stdout, follow this procedure. Note that you cannot customize the log collection after you set up logging.
In the DuploCloud Portal, navigate to Administrator -> Observability -> Settings -> Logging.
From the Tenant list box at the top of the DuploCloud Portal, select the Default Tenant.
Click the Create Logging link. The Enable Logging page displays.
Use the Enable Logging page to deploy logging for the Control Plane, which uses OpenSearch and Kibana to retrieve and display log data for the Default Tenant. In the Cert ARN field, enter the ARN certificate for the Default Tenant. Find the ARN by selecting the Default Tenant from the Tenant list box at the top of the DuploCloud Portal; navigating to Administrator -> Plans; selecting the Plan that matches your Infrastructure Name; and clicking the Certificates tab.
Click Submit. Data gathering takes about fifteen (15) minutes. When data gathering is complete, graphical logging data is displayed in the Logging tab.
After logging has been enabled for the Control Plane, finish the logging setup by enabling the Log Collector to collect logs per Tenant. This feature is especially useful for Tenants that are spread across multiple regions. In the DuploCloud Portal, navigate to Administrator -> Observability -> Settings -> Logging.
In the Logging tab, on the Logging Infrastructure Tenants page, click Add.
Select the Tenants for which you want to configure logging, using the Select Tenants to enable logging area, as in the example below. The Control Plane configuration is deployed for each Tenant that you select in the Infrastructure, specified in Infrastructure Details.
The Log Collector uses Elastic Filebeat containers that are deployed within each Tenant.
When you enable a Tenant for logging, the Filebeat service starts up and begins log collection. View the Filebeat containers by navigating to Kubernetes -> Containers in the DuploCloud Portal. In the row of the container for which you want to view the logs, click on the menu icon and select Logs.
When you perform the steps above to configure logging, DuploCloud does the following:
An EC2 Host is added in the Default tenant, for example, duploservices-default-oc-diagnostics.
Services are added in the Default tenant, one for OpenSearch and one for Kibana. Both services are pinned to the EC2 host using allocation tags. Kibana is set up to point to ElasticSearch and exposed using an internal load balancer.
Security rules from within the internal network to port 443 are added in the Default Tenant to allow log collectors that run on Tenant hosts to send logs to ElasticSearch.
A Filebeat service (filebeat-duploinfrasvc) is deployed for each Tenant where central logging is enabled.
The /var/lib/docker/Containers directory is mounted from the Host into the Filebeat container. The Filebeat container references ElasticSearch, which runs in the Default Tenant. Inside the container, Filebeat is configured so that every log line is tagged with metadata consisting of the Tenant name, Service name, Container ID, and Hostname, enabling easy search on these parameters in ElasticSearch.
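As a rough sketch of what such a Filebeat configuration can look like (the paths, field names, and endpoint below are illustrative assumptions, not the exact config DuploCloud generates):

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log
    processors:
      # Illustrative metadata enrichment; DuploCloud's actual fields may differ
      - add_fields:
          target: duplo
          fields:
            tenant: "my-tenant"       # placeholder
            service: "my-service"     # placeholder

output.elasticsearch:
  hosts: ["https://elasticsearch.default-tenant.internal:443"]  # placeholder endpoint
```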
Display logs for the DuploCloud Portal, components, services, and containers
The central logging dashboard displays detailed logs for Service and Tenant. The dashboard uses Kibana and preset filters that you can modify.
In the DuploCloud Portal, navigate to Observability -> Logging.
Select the Tenant from the Tenant list box at the top of the DuploCloud Portal.
Select the Service from the Select Service list box.
Modify the DQL to customize Tenant selection, if needed.
Adjust the date range by clicking Show dates.
Add filters, if needed.
DuploCloud pre-filters logs per Tenant. All DuploCloud logs are stored in a single index. You can view any Tenant or combination of Tenants (using the DQL option), but the central logging control plane is shared, with no per-Tenant access.
Confirm that your Hosts and Services are running or runnable to view relevant log data.
See Kubernetes Containers for information on displaying logs per container.
Set up features for auditing and view auditing reports and logs
The DuploCloud Portal provides a comprehensive audit trail, including reports and logs, for security and compliance purposes. Using the Show Audit Records for list box, you can display real-time audit data for:
Auth (Authentications)
Admin (Administrators)
Tenants (DuploCloud Tenants)
Compliance (such as HIPAA, SOC 2, and HITRUST, among others)
Kat-Kit (DuploCloud's CI/CD Tool)
In the DuploCloud Portal, navigate to Administrator -> Observability -> Settings, and select the Audit tab. The Audit page displays.
Click the Enable Audit link.
To view complete auditing reports and logs, navigate to the Observability -> Audit page in the DuploCloud Portal.
You can create an S3 bucket for auditing in another account, other than the DuploCloud Master Account.
Verify that the S3 bucket exists in another account, and note the bucket name. In this example, we assume a BUCKET_REGION of us-west-2 and a BUCKET name of audit-s2-bucket-another-account.
Ensure that your S3 bucket grants the Duplo Master account s3:PutObject permission. Refer to the code snippet below for an example.
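A bucket policy along these lines grants the DuploCloud master account write access for audit objects; the account ID is a placeholder you must replace, and the bucket name continues the example above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDuploMasterAuditWrites",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<DUPLO_MASTER_ACCOUNT_ID>:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::audit-s2-bucket-another-account/*"
    }
  ]
}
```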
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Continuing the example above, configure the S3BUCKET_REGION.
Click Add. The Add Config pane displays.
From the Config Type list box, select AppConfig.
In the Key list box, enter DUPLO_AUDIT_S3BUCKET_REGION.
In the Value field, enter us-west-2.
Click Submit.
Continuing the example above, configure the S3BUCKET name.
Click Add. The Add Config pane displays.
From the Config Type list box, select AppConfig.
In the Key list box, enter DUPLO_AUDIT_S3BUCKET.
In the Value field, enter audit-s2-bucket-another-account.
Click Submit.
Your S3 bucket region and name configurations are displayed in the System Config tab. View details on the Audit page in the DuploCloud Portal.
Contact your DuploCloud Support team if you have additional questions or issues.
Change configuration for the Control Plane, customize Platform Services
There are several use cases for customized log collection. The central logging stack is deployed within your environment, as with any other application, streamlining the customization process.
The version of OpenSearch, the EC2 host size, and the control plane configuration are all deployed based on the configuration you define in the Service Description. Use this procedure to customize the Service Description according to your requirements.
You must make Service Description changes before you enable central logging. If central logging is enabled, you cannot edit the description using the Service Description window.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
In the Service Description tab, in the Name column, select duplo_svd_logging_opensearch. The Service Description window displays.
Edit the YAML in the Service Description window as needed.
Click Update when the configuration is complete to close the window and save your changes.
You can update the Control Plane configuration by editing the Service Description. If the control plane is already deployed using the Service Description specification, then updating the description is similar to making a change to any application.
Note that Control Plane Components are deployed in the DuploCloud Default Tenant. Using the Default Tenant, you can change instance size, Docker images, and more.
You can update the log retention period using the OpenSearch native dashboard by completing the following steps.
From the DuploCloud portal, navigate to Administrator -> Observability -> Logging.
Click Open New Tab to access the OpenSearch dashboard.
Navigate to Pancake -> Index management -> State management policies.
Edit the FileBeat YAML file and update the retention period.
For more information see the OpenSearch documentation.
The new retention period settings will only apply to logs generated after the retention period was updated. Older logs will still be deleted according to the previous retention period settings.
You can modify Elastic Filebeat logging configurations, including mounting folders other than /var/lib/docker
for writing logs to folders other than stdout
.
You need to customize the log collection before enabling logging for a Tenant.
If logging is enabled, you can update the Filebeat configuration for each tenant by editing the Filebeat Service Description (see the procedure in Defining Control Plane Configuration).
Alternatively, delete the Filebeat collector from the Tenant, and the platform automatically redeploys it based on the newest configuration.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Select the Platform Services tab.
Click the Edit Platform Services button. The Platform Services window displays. Select the appropriate Filebeat service. For native container management, select filebeat; for Kubernetes container management, select filebeat-k8s.
Edit the YAML in the Platform Services window as needed.
Click Update to close the window and save your changes.
With DuploCloud, you have the choice to deploy third-party tools such as Datadog, Sumo Logic, and so on. To do this, deploy Docker containers that act as collectors and agents for these tools. Deploy and use these third-party app containers as you would any other container in DuploCloud.
The DuploCloud platform includes an option for centralized metrics covering Docker containers and virtual machines, as well as various cloud services such as ELB, RDS, ElastiCache, ECS, and Kafka. These metrics are displayed through Grafana, which is embedded in the DuploCloud UI. Like central logging, metrics are not turned on by default but can be set up with a single click.
Under Observability -> Metrics, you can view the various metrics per Tenant.
While there are eight to ten out-of-box dashboards for various services, you can add your own dashboards and make them appear in the DuploCloud Dashboard through a configuration.
Monitoring Kubernetes status with the K8s Admin dashboard
Use the K8s Admin dashboard to monitor various statistics and statuses for Kubernetes, including the number and availability of StatefulSets defined for a service.
In the DuploCloud Portal, select Administrator -> Observability -> Metrics.
Click the k8s tab. The K8s Admin dashboard displays.
Faults that occur in the system, whether from Infrastructure creation, container deployments, application health checks, or triggered alarms, can be tracked in the DuploCloud portal under the Faults menu.
You can view Tenant-specific faults under Observability -> Faults, or all faults in the system under Administrator -> Faults.
You can set AWS Alerts for individual metrics.
From the DuploCloud portal, navigate to Observability -> Alerts and click Add. The Create Alert pane displays.
Select the resource type from the Resource Type list box, then select the resource. Click Next.
Fill in the necessary information and click Create. The Alert is created.
View general alerts from the DuploCloud Portal under Observability -> Alerts.
Select the Alerts tab for alerts pertaining to a specific resource, such as Hosts.
DuploCloud allows automatic generation of alerts for resources within a Tenant. This makes sure that the defined baseline of monitoring is applied to all current and new resources based on a set of rules.
As an Administrator:
From the DuploCloud Portal, navigate to Administrator -> Tenants.
Click the name of your Tenant from the list and select the Alerting tab.
Click Enable Alerting. An alerts template displays. The alerts template contains rules for each AWS namespace and metric to be monitored.
Review the alerts template and adjust the thresholds as needed.
Click Update.
Fix faults automatically to maintain system health
You can configure hosts to auto-reboot and heal faults automatically, either at the Tenant level, or the Host level. See the Configure Auto Reboot topic for more information.
Access specific resources in the AWS Console using the DuploCloud Portal
Use Just-In-Time (JIT) to launch the AWS console and work with a specific Tenant configuration, or to obtain Administrator privileges.
DuploCloud users have AWS Console access for advanced configurations of S3 Buckets, Dynamo databases, SQS, SNS Topic, Kinesis stream, and API Gateway resources that are created in DuploCloud. ELB and EC2 areas of the console are not supported.
Using the DuploCloud Portal, click on the Console link in the title bar of the AWS resource you created in DuploCloud, as in the example for S3 Bucket, below.
Clicking the Console link launches the AWS console and gives you access to the resource, with permissions scoped to the current Tenant.
Using the Console link, you don't need to set up permissions to create new resources in the AWS Console. You can perform any operations on resources that are created with DuploCloud.
For example, you can create an S3 bucket from the DuploCloud UI and then launch the AWS Console with the Console link to remove files, set up static web hosting, and so on. Similarly, you can create a DynamoDB table in DuploCloud and use the AWS Console to add and remove entries in the table.
Enable setting of SNS Topic Alerts for specific Tenants
SNS Topic Alerts provide a flexible and scalable means of sending notifications and alerts across different AWS services and external endpoints, allowing you to stay informed about important events and incidents happening in your AWS environment.
SNS is a fully managed service that enables you to publish messages to topics. The messages can be delivered to subscribers or endpoints, such as email, SMS, mobile push notifications, or even HTTP endpoints.
SNS Alerts can only be configured for the specific resources included under Observability -> Alerts in the DuploCloud Portal. Integrating external monitoring programs (e.g., Sentry) allows you to view all of the faults for a particular Tenant under Observability -> Faults.
Configuring this setting attaches the SNS Topic to alerts in both the OK and Alarm states.
In the DuploCloud Portal, navigate to Administrator -> Tenants. The Tenants page displays.
Select the Tenant for which you want to set SNS Topic Alerts from the NAME column.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Set SNS Topic Alerts.
In the field below the Select Feature list box, enter a valid SNS Topic ARN.
Click Add. The configuration is displayed in the Settings tab.
Make changes to fault settings by adding Flags under Systems Settings in the DuploCloud portal
If there is a Target Group with no instances/targets, DuploCloud generates a fault. You can configure DuploCloud's Systems Settings to ignore Target Groups with no instances.
From the DuploCloud portal, navigate to Administrator -> Systems Settings.
Select the System Config tab.
Click Add. The Add Config pane displays.
For ConfigType, select Other.
In the Other Config Type field, type Flags.
In the Key field, enter IgnoreTargetGroupWithNoInstances.
In the Value field, enter True.
Click Submit. The Flag is set and DuploCloud will not generate faults for Target Groups without instances.
Enable and view alert notifications in the DuploCloud Portal
DuploCloud supports viewing of Faults in the portal and sending notifications and emails to the following systems:
Sentry
PagerDuty
NewRelic
OpsGenie
You will need to generate a key from each of these vendor systems and then provide that key to DuploCloud to enable integration.
In the Sentry website, navigate to Projects -> Create a New Project.
Click Settings -> Projects -> project-name -> Client keys. The Client Keys page displays.
Complete the DSN fields on the screen.
Click Generate New Key.
In the DuploCloud Portal, navigate to Observability -> Faults.
Click Update Notifications Config. The Set Alert Notifications Config pane displays.
In the Sentry - DSN field, enter the key you received from Sentry.
In the Alerts Frequency (Seconds) field, enter a time interval in seconds when you want alerts to be displayed.
Click Update.
In the PagerDuty website home page, select the Services tab and navigate to the service that receives Events. If a Service does not exist, click New Service. When prompted, enter a friendly Name (for example, your DuploCloud Tenant name) and click Next.
Assign an Escalation policy, or use an existing policy.
Click Integration.
Click Events API V2. Your generated Integration Key is displayed as the second item on the right side of the page. This is the Routing Key you will supply to DuploCloud.
Copy the Integration Key to your Clipboard.
In the DuploCloud Portal, navigate to Observability -> Faults.
Click Update Notifications Config. The Set Alert Notifications Config pane displays.
In the Pager Duty - Routing Key field, enter the key you generated from PagerDuty.
In the Alerts Frequency (Seconds) field, enter a time interval in seconds when you want alerts to be displayed.
Click Update.
In the DuploCloud Portal, navigate to Observability -> Faults.
Click Update Notifications Config. The Set Alert Notifications Config pane displays.
In the NewRelic - API Key field, enter the key you generated from NewRelic.
In the Alerts Frequency (Seconds) field, enter a time interval in seconds when you want alerts to be displayed.
Click Update.
In the OpsGenie website, generate an API Key to integrate DuploCloud faults with OpsGenie.
In the DuploCloud Portal, navigate to Observability -> Faults.
Click Update Notifications Config. The Set Alert Notifications Config pane displays.
In the OpsGenie - API Key field, enter the key you generated from OpsGenie.
In the Alerts Frequency (Seconds) field, enter a time interval in seconds when you want alerts to be displayed.
Click Update.
Manage costs for resources
Usage costs for resources can be viewed and managed in the DuploCloud Portal. As an administrator, you can view your company's billing data by month, week, or Tenant. You can also configure billing alerts, explore historical resource costs, and view DuploCloud license usage information. Non-administrator users can view billing data for Tenants they can access by viewing billing data for a selected Tenant.
To enable the billing feature, you must:
Enable access to billing data in AWS by following the steps in this AWS document .
Apply cost allocation tags so that DuploCloud can retrieve billing data.
Grant IAM permissions to view billing data in AWS
IAM access permissions must be obtained to view the billing data in AWS.
Follow the steps in this AWS document to obtain access.
In order to perform the steps in this document, you must be logged in as root to the AWS account that manages cost and billing for the AWS organization.
Use just-in-time (JIT) to access the console in AWS
Access the AWS Console for specific resources created in DuploCloud, such as S3 Buckets and Dynamo databases, by clicking the Console link in the title bar of the resource page.
DuploCloud users can obtain Just-In-Time (JIT) access to the AWS Console. This access is restricted to resources that the user has access to in the DuploCloud portal. With JIT access, DuploCloud administrators have admin-level access within the AWS Console and the access is generated in real-time and revoked, by default, in one hour.
You can obtain AWS JIT access directly from the DuploCloud Portal, as well as obtain temporary AWS credentials to the Tenant, and access to AWS from your workstation.
In the DuploCloud Portal, navigate to Administrator -> User and select the Username that needs access.
In the upper-right corner of the Portal, click the user profile picture and select Profile. The User Profile page displays.
From the JIT AWS Console list box, select the appropriate option to open the JIT AWS Console, get Temporary AWS Credentials to the Tenant, or obtain AWS Access from my Workstation.
When you select JIT AWS Console, the AWS Console launches.
When you select Temporary AWS Credentials, the Get JIT AWS Access window displays with available links for temporary or permanent access, as in the graphic below. For temporary access, click Get JIT Access. For permanent access, click the more permanent link.
When you select AWS Access from my Workstation, the Get JIT AWS Access window displays with the Access to AWS from your Workstation option. Follow the instructions and links.
Obtain access through the command line interface (CLI) with duplo-jit. duplo-jit must obtain an AWS JIT session using a DuploCloud API Token. This token can be specified either as part of your local AWS configuration or can be obtained interactively, using your DuploCloud portal session.
To install duplo-jit:
Download the latest .zip archive from https://github.com/duplocloud/duplo-jit/releases for your operating system.
Extract the archive listed in the table below based on the operating system and processor you are running.
Add the path to duplo-jit to your $PATH environment variable.
Obtain credentials using a DuploCloud API Token or interactively.
Edit the AWS Config file (~/.aws/config) and add the following profile, as shown in the code snippet below:
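The original snippet is not reproduced here; the following is a minimal sketch of such a profile. The profile name, region, and portal host are placeholders, and the duplo-jit flags (--host, --token, --admin) are the ones referenced elsewhere in this section:

```ini
# Hypothetical sketch of a ~/.aws/config profile that sources credentials
# from duplo-jit via the AWS credential_process hook. Replace <ENV_NAME>,
# <YOUR_PORTAL>, and <DUPLO_TOKEN> with values for your environment.
[profile <ENV_NAME>]
region = us-east-1
credential_process = duplo-jit aws --host https://<YOUR_PORTAL>.duplocloud.net --token <DUPLO_TOKEN> --admin
```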
To obtain credentials interactively, rather than with a token, replace --token <DUPLO_TOKEN> in the argument above with --interactive.
When you make the first AWS call, you are prompted to grant authorization through the DuploCloud portal, as shown below. Click Authorize if you consent.
Upon successful authorization, a Just-In-Time token is provided, which is valid for one hour. When the token expires, you are prompted to re-authorize the request.
Obtain access to the AWS console using the Command Line Interface (CLI).
As long as you use the AWS_PROFILE that matches the profile name you set in the section above, the AWS CLI obtains the required access credentials.
For example:
AWS_PROFILE=<ENV_NAME> aws ec2 describe-instances
To obtain a link to the AWS Console, run one of the following commands. The Console URL is copied to your clipboard and can be opened in any browser.
All of these examples assume Administrator role access, passing the --admin flag. If you are obtaining JIT access for a User role (not Administrator), replace the --admin argument in the following code snippets with --tenant <YOUR_TENANT>, for example --tenant dev01. Tenant names are lower-case at the CLI.
If you receive errors when attempting to retrieve credentials, try running the command with the --no-cache argument.
zsh shell: Add the following to your .zshrc file. Usage is jitnow <ENV_NAME>.
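The exact function body is not reproduced here; the following is a minimal sketch of such a jitnow helper, assuming a portal URL pattern of https://<ENV_NAME>.duplocloud.net (adjust the host and flags to your environment):

```shell
# Hypothetical sketch for ~/.zshrc. The portal URL pattern is an assumption;
# --admin and --interactive are the duplo-jit flags described in this section
# (add --no-cache if you hit stale-credential errors).
jitnow() {
  duplo-jit aws --admin --interactive --host "https://$1.duplocloud.net"
}
```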
By default, JIT sessions expire after one hour. This can be modified in the DuploCloud Portal for a specific Tenant.
In the DuploCloud Portal, navigate to Administrator -> Tenant.
Select the Tenant for which you want to change the expiration period from the NAME column.
Click the Settings tab.
Click Add to add a custom timeout setting. The Add Tenant Feature pane displays.
Select AWS Access Token Validity from the Select Feature list box.
In the field below the Select Feature list box (the value), specify the desired timeout period in seconds. In the example below, we specify 7200 seconds, or two (2) hours, overriding the default of 3600 seconds, or one (1) hour.
Click Update. The new setting is displayed in the Tenants, Settings tab.
If you are increasing the session timeout beyond the AWS default of 1 hour, you also need to update the maximum session duration value for the IAM role assigned to your DuploCloud tenant.
Access the AWS Console as an Administrator using the instructions above. In the AWS Console, navigate to IAM -> Roles and modify the value for your Tenant accordingly. For example, if your Tenant is named DEV01 and you need to set a timeout of two hours (7200 seconds), locate the IAM role duploservices-dev01 and modify the Maximum Session Duration to two hours.
Processor/Operating System | Archive
---|---
Intel macOS | darwin_amd64.zip
M1 macOS | darwin_arm64.zip
Windows | windows_amd64.zip
Displaying Node Usage for billing
DuploCloud calculates license usage by Node for the following categories:
Elastic Compute Cloud
Elastic Container Services
AWS Lambda Functions
Managed Workflows for Apache Airflow
In the DuploCloud portal, navigate to Administrator -> Billing. The Billing page displays.
Click the DuploCloud License Usage tab.
Click More Details in any License Usage card for additional breakdown of Node Usage statistics per Tenant.
Click the DuploCloud license documentation link to download a copy of the license document.
Activating cost allocation tags in DuploCloud AWS
The duplo-project cost allocation tag must be activated after you enable IAM access to billing data. Use the same AWS user and account that you used to enable IAM access to activate cost allocation tags.
To apply and activate cost allocation tags, follow the steps in this document.
After you activate the tag successfully, you should see this screen:
Set billing alerts based on the previous month's spending or define a custom threshold. Receive email notifications if the current month's expenses exceed a specified percentage of the threshold.
From the DuploCloud Portal, navigate to Administrator -> Billing, and select the Billing Alerts tab.
Click Add or Edit.
Enable Billing Alerts.
Select a threshold and trigger for the alert and enter the email of the administrator user who will receive the email notifications.
Click Submit. The alert details show on the Billing Alerts tab.
Displaying Service and Tenant billing data
From the DuploCloud portal, administrators can view account spending details by month, week, and Tenant. Non-administrator users can view billing data for a Tenant they have user access to.
View the billing details for your company's AWS account.
Log in as an administrator, and navigate to Administrator -> Billing.
You can view usage by:
Time
Select the Spend by Month tab and click More Details to display monthly and weekly spending options.
Tenant
Select the Spend by Tenant tab.
You must first enable the billing feature to view or manage usage costs in the DuploCloud Portal.
View billing details for a selected Tenant. This option is accessible to non-administrator users with user access to the selected Tenant.
Select the Tenant name from the Tenant list box.
Navigate to Cloud Services -> Billing. The Billing page displays.
The Spend by Month tab lists the five services with the highest spending for each month for the selected Tenant. Click More Details on any month's card to display more details about that month's spending.
Use case:
Collection of data using various methods and sources:
Web scraping: Selenium using headless Chrome/Firefox.
Web crawling: static websites using crawlers.
API-based data collection: REST or GraphQL APIs.
Private internal customer data collected over various transactions.
Private external customer data collected over secured SFTP.
Data purchased from third parties.
Data from various social networks.
Correlate data from various sources:
Clean up and process data, applying various statistical methods.
Correlate terabytes of data from various sources and make sense of the data.
Detect anomalies, summarize, bucketize, and perform various aggregations.
Attach metadata to enrich the data.
Create data for NLP and ML models for predictions of future events.
AI/ML pipelines and life-cycle management:
Make data available to the data science team.
Train models and run continuous improvement trials and reinforcement learning.
Detect anomalies, bucketize data, summarize, and perform various aggregations.
Train NLP and ML models for predictions of future events based on history.
Record the history of models, hyperparameters, and data at various stages.
Deploying Apache Spark™ cluster
In this tutorial we will create a Spark cluster with a Jupyter notebook. A typical use case is ETL jobs, for example reading parquet files from S3, processing them, and pushing reports to databases. The aim is to process GBs of data in a fast and cost-effective way.
The high-level steps are:
Create three VMs: one each for the Spark master, Spark worker, and Jupyter notebook.
Deploy Docker images for each of these components on the VMs.
From the DuploCloud portal, navigate to Cloud Services -> Hosts -> EC2. Click +Add and check the Advanced Options box. Change the value of instance type to m4.xlarge and add an allocation tag sparkmaster.
Create another host for the worker. Change the value of instance type to m4.4xlarge and add an allocation tag sparkworker. Click Submit. The number of workers depends on how much load you want to process; add one host for each worker, all with the same allocation tag sparkworker. You can add and remove workers and scale the Spark worker service up or down as many times as you want, as we will see in the following steps.
Create one more host for the Jupyter notebook. Change the value of instance type to m4.4xlarge and add the allocation tag jupyter.
Navigate to Docker -> Services and click Add. In the Service Name field, enter sparkmaster; in the Docker Image field, enter duplocloud/anyservice:spark_v6; and add the allocation tag sparkmaster. From the Docker Networks list box, select Host Network. Setting this in the Docker Host config makes the container networking the same as the VM's, i.e., the container IP is the same as the VM IP.
First, we need the IP address of the Spark master. Click the Spark master service, expand the container details on the right, and copy the host IP. Create another service: for the name, enter jupyter; for the image, enter duplocloud/anyservice:spark_notebook_pyspark_scala_v4; add the allocation tag jupyter; and select Host Network for the Docker Host Config. Add the volume mapping "/home/ubuntu/jupyter:/home/jovyan/work". Also provide the environment variables, replacing the brackets <> with the IP you just copied. See Figure 5.
Create another service named sparkworker1, with the image duplocloud/anyservice:spark_v7; add the allocation tag sparkworker and select Host Network for the Docker Network. Also provide the environment variables:
{"node": "worker", "masterip": "<>"}
Replace the brackets <> with the IP you just copied. See Figure 5.
Depending on how many worker hosts you have created, use the same number for replicas; this is how you scale up and down. At any time, you can add new hosts, set the allocation tag sparkworker, and then, under Services, edit the sparkworker service and update the replicas.
Add or update shell access by clicking the >_ icon. This gives you easy access to the container shell. Wait about 5 minutes for the shell to be ready. Make sure you are connected to the VPN if you choose to launch the shell as internal only.
Select the Jupyter service and expand the container. Copy the host IP and then click the >_ icon. Once you are inside the shell, run the command jupyter notebook list to get the URL along with the auth token. Replace the IP with the Jupyter IP you copied previously. See Figure 5.
In your browser, navigate to the Jupyter URL and you should be able to see the UI.
Now you can use Jupyter to connect to data sources and destinations and do ETL jobs. Sources and destinations can include various SQL and NoSQL databases, S3 and various reporting tools including big data and GPU-based Deep learning.
In the following, we create a Jupyter notebook and show some basic web scraping, using Spark for preprocessing, exporting into a schema, doing ETL, joining multiple dataframes (parquets), and exporting reports into MySQL.
Connect to a website and parse html (using jsoup)
Extract the downloaded zip. This particular file is 8 GB in size and has 9 million records in CSV format.
Upload the data to AWS S3
Configure the Spark session with the settings required to read and write from AWS S3.
Load data in Spark cluster
Define the Spark schema
Do data processing
Set up Spark SQL.
Spark SQL joins 20 GB of data from multiple sources
Export reports to RDS for UI consumption. Generate various charts and graphs.
Managing AWS services and related components
Applications involve many AWS services, such as S3 for object storage, RDS for RDBMS (SQL), Redis, Kafka, SQS, SNS, Elasticsearch, and so on. While each of their configurations needs a few application-centric inputs, there are scores of lower-level nuances around access control, security, and compliance, among others.
Using DuploCloud, you can create virtually any service within the Tenant using basic app-centric inputs, while the platform ensures the lower-level nuances are programmed to best practices for security and compliance.
Every service within the Tenant is automatically reachable by any application running within that Tenant. If you need to expose a service from one Tenant to another, see Allow Cross-tenant Access.
DuploCloud adds new AWS services to the platform on almost a weekly basis. If a certain service is not documented here, please contact the DuploCloud team; even if the feature is not currently available, the DuploCloud team can enable it in a matter of days.
Supported Services are listed in alphabetical order, following the core services: Containers, Load Balancers, and Storage.
Using containers and DuploCloud Services with AWS EKS and ECS
Containers and Services are critical elements of deploying AWS applications in the DuploCloud platform. Containers refer to Docker containers: lightweight, standalone packages that contain everything needed to run an application including the code, runtime, system tools, libraries, and settings. Services in DuploCloud are microservices defined by a name, Docker image, and a number of replicas. They can be configured with various optional parameters and are mapped to Kubernetes deployment sets or StatefulSets, depending on whether they have stateful volumes.
DuploCloud supports three container orchestration technologies to deploy containerized applications in AWS: Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), and Native Docker containers in virtual machines (VMs). Each option provides benefits and challenges depending on your needs and requirements.
Amazon Elastic Container Service (ECS) is a fully managed service that uses its own orchestration engine to manage and deploy Docker containers. It is quite easy to use, integrates well with other AWS services, and is optimized for running containers in the AWS ecosystem. The tradeoff for this simplicity is that ECS is not as flexible or versatile as EKS and is less portable outside the AWS ecosystem.
Amazon Elastic Kubernetes Service (EKS) is a managed service that uses the open-source container orchestration platform Kubernetes. The learning curve is steeper for EKS than ECS, as users must navigate the complexities of Kubernetes. However, EKS users benefit from the excellent flexibility that Kubernetes’ wide range of tools, features, solutions, and portability provides.
Docker is the foundational containerization technology. It is not managed, so the user manually controls the containers and orchestration. Although Docker requires considerably more user input than ECS or EKS, it offers greater control over the VM infrastructure, strong isolation between applications, and supreme portability.
When you create a service, refer to the registry configuration in Docker -> Services | Kubernetes -> Services | Cloud Services -> ECS -> Services. Select the Service from the NAME column and select the Configuration tab. Note the values in the Environment Variables and Other Docker Config fields.
For example:
{"DOCKER_REGISTRY_CREDENTIALS_NAME":"registry1"}
Adding a Service in the DuploCloud Platform is not the same as adding a Kubernetes service. When you deploy DuploCloud Services, the platform implicitly converts your DuploCloud Service into either a deployment set or a StatefulSet. The service is mapped to a deployment set if there are no volume mappings. Otherwise, it is mapped to a StatefulSet, which you can force creation of if needed. Most configuration values are self-explanatory, such as Images, Replicas, and Environmental Variables.
Kubernetes clusters are created during Infrastructure setup using the Administrator -> Infrastructure option in the DuploCloud Portal. The cluster is created in the same Virtual Private Cloud (VPC) as the Infrastructure. Building an Infrastructure with an EKS/ECS cluster may take some time.
Next, you deploy an application within a Tenant in Kubernetes. The application contains a set of VMs, a Deployment set (Pods), and an application load balancer. Pods can be deployed either through the DuploCloud Portal or through kubectl, using Helm charts.
An Administrator can define Quotas for resource allocation in a DuploCloud Plan. Resource allocation can be restricted by specifying Instance Family and Size, and an Administrator can restrict the total number of allowed resources by setting a Cumulative Count value per resource type.
Once the Quota is defined, DuploCloud users are prevented from creating new resources when the corresponding quota configured in the Plan is reached.
Quotas are controlled at the Instance Family level. For example, if you define a quota for t4g.large, it is enforced across all lower instance types in the t4g family as well, so a quota with a count of 100 for t4g.large means that instances up to that instance type cannot exceed 100.
Add custom tags to AWS resources
An Administrator can provide a list of custom tag names that can be applied to AWS resources for any Tenant in a DuploCloud environment.
In the DuploCloud portal, navigate to Administrator -> System Settings -> System Config.
Click Add. The Add Config pane displays.
In the Config Type list box, select App Config.
In the Key list box, select Duplo Managed Tag Keys.
In the Value field, enter the name of the custom tag, for example, cost-center.
Click Submit. In the System Configs area of the System Config tab, your custom tag name is displayed with Type AppConfig and a Key value of DUPLO_CUSTOM_TAGS, as in the example below.
Once the custom tag is added, navigate to Administrator -> Tenants.
Select a Tenant from the Name column.
Click Add.
Click the Tags tab.
In the Key field, enter the name of the custom tag (cost-center in the example) that you added to System Config.
In the Value field, enter an appropriate value. In the Tags tab, the tag Key and Value that you set are displayed, as in the example below.
Managing Containers and Services with ECS
For an end-to-end example of creating an ECS Task Definition, Service, and Load Balancer, see this tutorial.
Using the Services tab in the DuploCloud Portal (navigate to Cloud Services -> ECS and select the Services tab), you can display and manage the Services you have defined.
For ECS Services, select the Service Name and click the Actions menu to Edit or Delete Services, in addition to performing other actions, as shown below.
You can display and manage the Containers you have defined in the DuploCloud portal. Navigate to Kubernetes -> Containers.
You can create up to five (5) containers for ECS services by defining a Task Definition.
To designate a container as Essential, see Defining an Essential Container.
In the DuploCloud Portal, navigate to Cloud Services -> ECS.
In the Task Definitions tab, click Add. The Add Task Definition page displays.
Specify a unique Name for the Task Definition.
From the vCPUs list box, select the number of CPUs to be consumed by the task and change other defaults, if needed.
In the Container - 1 area, specify the Container Name of the first container you want to create.
In the Image field, specify the container Image name, as in the example above.
Specify Port Mappings, and Add New mappings or Delete them, if needed.
Click Submit. Your Task Definition for multiple ECS Service containers is created.
To edit the created Task Definition in order to add or delete multiple containers, select the Task Definition in the Task Definitions tab, and from the Actions menu, select Edit Task Definition.
In AWS ECS, an essential container is a key component of a task definition. An essential container is one that must keep running (or exit successfully) for the task to be considered healthy. If an essential container fails or stops for any reason, the entire task is marked as failed. Essential containers are commonly used to run the main application or service within the task.
By designating containers as essential or non-essential, you define the dependencies and relationships between the containers in your task definition. This allows ECS to properly manage and monitor the overall health and lifecycle of the task, ensuring that the essential containers are always running and healthy.
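In raw ECS terms (outside DuploCloud), the essential flag appears on each entry in a task definition's container definitions. The following is an illustrative fragment with hypothetical names and images, showing one essential application container and one non-essential sidecar:

```json
{
  "family": "my-task",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "nginx:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80 }]
    },
    {
      "name": "log-sidecar",
      "image": "busybox:latest",
      "essential": false
    }
  ]
}
```

If the app container stops, ECS marks the whole task as failed; the log-sidecar stopping does not.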
To designate a container as Essential, follow the Creating multiple containers for ECS Services using a Task Definition procedure to create your containers. Before creating the container you want to designate as Essential, select the Essential Container option in the Container definition, as in the example below.
Fargate is a technology that you can use with ECS to run containers without having to manage servers or clusters of EC2 instances.
For information about Fargate, contact the DuploCloud support team.
Follow this procedure to create the ECS Service from your Task Definition and define an associated Load Balancer to expose your application on the network.
Managing Containers and Service with EKS and Native Docker Services
For an end-to-end example of creating an EKS Service, see this tutorial.
For a Native Docker Services example, see this tutorial.
Using the Services tab in the DuploCloud Portal (Kubernetes -> Services), you can display and manage the Services you have defined.
For EKS Services, select the Service Name and click the Actions menu to Edit or Delete Services, in addition to performing other actions, as shown below.
In the DuploCloud Portal, navigate to Kubernetes -> Services for an EKS Service.
Click Add. The Basic Options section of the Add Service page displays.
Complete the fields on the page, including Service Name, Docker Image name, and number of Replicas. Use Allocation Tags to deploy the container in a specific set of hosts.
To force the creation of Kubernetes StatefulSets, select Yes in the Force StatefulSets field.
Click Next. The Advanced Options section of the Add Service page displays.
Configure advanced options as needed. For example, you can implement Kubernetes Lifecycle Hooks, by adding the YAML to the Other Container Config field (optional).
Click Create. The Service is created.
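As a sketch of the Lifecycle Hooks example mentioned in the Advanced Options step, the YAML pasted into the Other Container Config field might look like the following (the commands are illustrative placeholders, not part of any DuploCloud default):

```yaml
# Illustrative Kubernetes lifecycle hooks; adjust the commands for your application
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "echo container started >> /tmp/lifecycle.log"]
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10"]  # give in-flight requests time to drain
```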
Do not use spaces when creating Service or Docker image names.
The number of Replicas you define must be less than or equal to the number of hosts in the fleet.
Once the deployment commands run successfully, navigate to Administrator -> Tenants. Select the Tenant from the NAME column. Your deployments are displayed and you can now attach load balancers for the Services.
Using the Services page, you can start, stop, and restart multiple services simultaneously.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
Use the checkbox column to select multiple services you want to start or stop at once.
From the Service Actions menu, select Start Service, Stop Service, or Restart Service.
Your selected services are started, stopped, or restarted as you specified.
Using the Import Kubernetes Deployment pane, you can add a Service to an existing Kubernetes namespace using Kubernetes YAML.
In the DuploCloud Portal, select Kubernetes -> Services from the navigation pane.
Click Add. The Add Service page displays.
Click the Import Kubernetes Deployment button in the upper right. The Import Kubernetes Deployment pane displays.
Paste the deployment YAML code, as in the example below, into the Import Kubernetes Deployment pane.
Click Import.
In the Add Service page, click Next.
Click Create. Your Native Kubernetes Service is created.
You can supply advanced configuration options with EKS in the DuploCloud Portal in several ways, including the advanced use cases in this section.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
From the Config Type list box, select Flags.
From the Key list box, select Block Master VPC CIDR Allow in EKS SG.
From the Value list box, select True.
Click Submit. The setting is displayed as BlockMasterVpcCidrAllowInEksSg in the System Config tab.
You can display and manage the Containers you have defined in the DuploCloud portal. Navigate to Kubernetes -> Containers.
DuploCloud provides you with a Just-In-Time (JIT) security token, valid for fifteen minutes, to access the Kubernetes cluster with kubectl.
In the DuploCloud Portal, select Administrator -> Infrastructure from the navigation pane.
Select the Infrastructure in the Name column.
Click the EKS tab.
Copy the temporary Token and the Server Endpoint (Kubernetes URL) Values from the Infrastructure that you created. You can also download the complete configuration by clicking the Download Kube Config button.
Run the following commands, in a local Bash shell instance:
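The exact commands are environment-specific; one minimal sketch (the endpoint and token values below are placeholders you replace with the values copied from the EKS tab) is to write a kubeconfig file and point kubectl at it:

```shell
# Placeholders: paste the real Server Endpoint and JIT token from the EKS tab
EKS_ENDPOINT="https://example.eks.amazonaws.com"
EKS_TOKEN="PASTE_JIT_TOKEN_HERE"

# Write a minimal kubeconfig pointing at the cluster
cat > kubeconfig <<EOF
apiVersion: v1
kind: Config
clusters:
- name: duplo-eks
  cluster:
    server: ${EKS_ENDPOINT}
users:
- name: duplo-user
  user:
    token: ${EKS_TOKEN}
contexts:
- name: duplo
  context:
    cluster: duplo-eks
    user: duplo-user
current-context: duplo
EOF

export KUBECONFIG="$PWD/kubeconfig"
# kubectl get nodes   # confirms access once real values are pasted in
```

Alternatively, click Download Kube Config in the EKS tab and point KUBECONFIG at the downloaded file.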
You have now configured kubectl to access the Kubernetes cluster. You can apply deployment templates by running kubectl apply -f <template>.yaml.
If you need security tokens of a longer duration, create them on your own. Secure them outside of the DuploCloud environment.
See this section in the DuploCloud Kubernetes documentation.
See this section in the DuploCloud Kubernetes documentation.
See this section in the DuploCloud documentation.
See Kubernetes Pod Toleration for examples of specifying K8s YAML for Pod Toleration.
Configuration and Secret management in AWS
There are many ways to pass configuration to containers at run time. Although simple to set up, using environment variables can become complex when there are many configuration values, especially files and certificates.
You can use an S3 Bucket to store and pass configuration to the containers:
Set the S3 Bucket name as an Environmental Variable.
Create a start-up script, used as the container's entry point, that downloads the file from the S3 bucket referenced by the environment variable. Do this by:
Creating a bash script with the S3 configuration predefined; when run, the script sets the environment variables.
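A minimal sketch of such a start-up script follows; the bucket path and file name are assumptions, and inside the container the s3:// branch would require the AWS CLI to be present in the image:

```shell
# fetch_and_export SRC: copy a KEY=VALUE config file into place and export
# its contents as environment variables. SRC would normally be an s3:// URL
# (hypothetical bucket/file names); a plain file path is handled too so the
# parsing logic is clear.
fetch_and_export() {
  src="$1"
  case "$src" in
    s3://*) aws s3 cp "$src" /tmp/app.env ;;  # requires the AWS CLI in the image
    *)      cp "$src" /tmp/app.env ;;
  esac
  set -a                 # auto-export every variable the next line defines
  . /tmp/app.env
  set +a
}

# Example entry-point usage (CONFIG_BUCKET set as an environment variable):
# fetch_and_export "s3://${CONFIG_BUCKET}/app.env" && exec "$@"
```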
Similar to using an S3 bucket, you can create values in an SSM Parameter Store (navigate to Cloud Services -> App Integration, and select the SSM Parameters tab) and set the name of the parameter in an environment variable. You then use a startup script that calls the AWS CLI to pull values from SSM and set them for the application in the container, either as environment variables or as a file.
Use AWS Secrets Manager to set configs and secrets in environment variables. Use a container startup script that calls the AWS CLI to copy secrets and set them in the appropriate format in the container.
Use the ECS Task Definition Secrets fields to set the configuration. For example:
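A sketch of the relevant Task Definition fragment (the container name, secret name, and ARN are placeholders):

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "secrets": [
        {
          "name": "X_SERVICE_TOKEN",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-app-token"
        }
      ]
    }
  ]
}
```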
Where X_SERVICE_TOKEN is the Secret defined in the JSON and VALUE_FROM is the AWS secret ARN.
Set Docker registry credentials
To authenticate with private Docker registries, DuploCloud utilizes Kubernetes secrets of type kubernetes.io/dockerconfigjson. This process involves specifying the registry URL and credentials in a .dockerconfigjson format, which can be done in two ways:
Base64 Encoded Username and Password: Encode the username and password in Base64 and include it in the .dockerconfigjson secret.
Raw Username and Password: Use the username and password directly in the secret without Base64 encoding. This method is supported and simplifies the process by not requiring the auth field to be Base64 encoded.
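For illustration, the two forms of the .dockerconfigjson payload look roughly like this (the registry URL and credentials are placeholders). Raw username and password:

```json
{
  "auths": {
    "registry.example.com": {
      "username": "myuser",
      "password": "mypassword"
    }
  }
}
```

Base64-encoded form, where the auth field is the Base64 encoding of "myuser:mypassword":

```json
{
  "auths": {
    "registry.example.com": {
      "auth": "bXl1c2VyOm15cGFzc3dvcmQ="
    }
  }
}
```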
In the DuploCloud Portal, navigate to Docker -> Services.
From the Docker list box, select Docker Credentials. The Set Docker registry Creds pane displays.
Supply the credentials in the required format and click Submit.
Enable the Docker Shell Service by selecting Enable Docker Shell from the Docker list box.
If you encounter errors such as pull access denied, or references fail to resolve due to authorization issues, ensure the secret is correctly configured and referenced in your service configuration. For non-default repositories, explicitly set imagePullSecrets to the name of the Docker authentication secret to resolve image-pulling issues, as in the example below:
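A sketch of what that looks like in a Kubernetes Deployment spec (the secret and image names are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      imagePullSecrets:
        - name: my-registry-creds     # name of the Docker authentication secret
      containers:
        - name: app
          image: registry.example.com/my-app:latest
```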
You can pull images from multiple Docker registries by adding multiple Docker Registry Credentials.
In the DuploCloud Portal, click Administrator -> Plans. The Plans page displays.
Select the Plan in the Name column.
Click the Config tab.
Click Add. The Add Config pane displays.
Docker Credentials can be passed using the Environment Variables config field in the Add Service Basic Options page. This method is particularly useful for dynamically supplying credentials without hardcoding them into your service configurations. Refer to the Kubernetes Configs and Secrets section for more details on using environment variables to pass secrets.
Ensure all required secrets, such as imagePullSecrets for Docker authentication, are correctly added and referenced in the service configuration to avoid invalid configuration issues. Reviewing the service configuration for missing or incorrectly specified parameters is crucial for smooth operation.
Working with Load Balancers using AWS EKS
In the DuploCloud Portal, navigate to Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
If no Load Balancers exist, click the Configure Load Balancer link. If other Load Balancers exist, click Add in the LB listeners card. The Add Load Balancer Listener pane displays.
From the Select Type list box, select a Load Balancer Listener type based on your Load Balancer.
Complete other fields as required and click Add to add the Load Balancer Listener.
To specify a custom classless inter-domain routing (CIDR) value for an NLB Load Balancer, edit the Load Balancer Listener configuration in the DuploCloud Portal.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
Click Add in the Custom CIDR field of the Edit Load Balancer Listener pane.
Repeat this procedure for each custom CIDR that you want to add.
Navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the appropriate Infrastructure.
Click the Security Group Rules tab.
Click Add to add a Security Group. The Add Tenant Security pane displays.
From the Source Type list box, select Ip Address.
From the IP CIDR list box, select Custom. A field labeled CIDR notation of allowed hosts displays.
In the CIDR Notation of allowed hosts field enter a custom CIDR and complete the other required fields.
Click Add to add the Security Group containing the custom CIDR.
Repeat this procedure to add additional CIDRs.
In the DuploCloud Portal, navigate to Cloud Services -> Networking.
Click the Load Balancer tab.
Click Add. The Create a Load Balancer pane displays.
In the Name field, enter a name for the Load Balancer.
From the Type list box, select a Load Balancer type.
From the Visibility list box, select Public or Internal.
Click Create.
Instead of creating a unique Load Balancer for each Service you create, you can share a single Load Balancer between multiple Services. This is helpful when your applications run distributed microservices where the requests use multiple services and route traffic based on application URLs, which you can define with Load Balancer Listener Rules.
To accomplish this, you:
Create a Service Load Balancer with the type Target Group Only. This step creates a Service Load Balancer that includes a Target Group with a pre-defined name.
Create a Shared Load Balancer with the Target Group that was defined.
Create routing rules for the Shared Load Balancer and the Target Group it defines.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
If no Load Balancers exist, click the Configure Load Balancer link. If other Load Balancers exist, click Add in the LB listeners card. The Add Load Balancer Listener pane displays.
From the Select Type list box, select Target Group Only.
You can create a Load Balancer Listener with a type of Target Group Only for Docker Native, EKS, or ECS Services, based on your application requirements. Complete the other required fields and click Add.
The Target Group Only Service Load Balancer is displayed in the LB Listeners area in the Load Balancers tab on the Services page.
In the Load Balancer tab of the Cloud Services -> Networking page, select the Shared Load Balancer you created. The Load Balancer page with the Listeners tab displays.
In the Listeners tab, click Add. The Load Balancer Listener pane displays.
Click Save. The Shared Load Balancer for the Target Group displays in the Listeners tab.
Rules are not supported for Network Load Balancers (NLBs).
Click Add. The Add LB Listener rule page displays.
Create routing rules for the Target Group by setting appropriate Conditions. Add Routing Rules by specifying Rule Type, Values, and Forward Target Group; the Forward Target Group list includes all Target Groups created for Docker Native, K8s, and ECS Services. Specify a Priority when there are multiple rules. Use the X button to delete specific Values.
Click Submit.
View the rules you defined for any Shared Load Balancer.
In the DuploCloud portal, navigate to Cloud Services -> Networking.
Select the Load Balancer tab.
From the Name column, select the Load Balancer whose rules you want to view.
Update attributes for your defined Target Group.
In the DuploCloud portal, navigate to Cloud Services -> Networking.
Select the Load Balancer tab.
From the Name column, select the Load Balancer whose defined Target Group attributes you want to modify.
The Update Target Group Attributes pane displays.
Find the attribute you want to update in the Attribute column and update the associated value in the Value column.
Click Update to save the changes.
You can use the Other Settings card in the DuploCloud Portal to set the following features:
WAF Web ACL
Enable HTTP to HTTPS redirects
Enable Access Logging
Set Idle Timeout
Drop invalid headers
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
Select the Service to which your Load Balancer is attached from the Name column.
Click the Load Balancers tab.
In the Other Settings card, click Edit. The Other Load Balancer Settings pane displays.
In the Other Load Balancer Settings pane, select any or all options.
Click Save.
Use the Options Menu ( ) in each Container row to display Logs, State, Container Shell, Host Shell, and Delete options.
Click the Plus Icon ( ) to the left of the Primary label, which designates that the first container you are defining is the primary container. The Container - 2 area displays.
Use the collapse and expand icons to show and hide the Container areas as needed. Specify a Container Name and Image name for each container that you add. Add more containers by clicking the Add Icon ( ), up to five (5) containers. Delete a container by clicking the Delete ( X ) Icon in its container area.
Use the Options Menu ( ) in each Container row to display Logs, State, Container Shell, Host Shell, and Delete options.
In Kubernetes, you also have the option to populate environment variables from ConfigMaps and Secrets.
Create an S3 bucket in the Tenant and add the needed configurations to it as a file.
Using a command such as aws s3 cp, copy the config file from S3 to a location in the container; then run the command to parse the file and set its contents as environment variables.
See the Kubernetes Configs and Secrets section.
If you need to create an Ingress Load Balancer, refer to the Ingress page in the DuploCloud Kubernetes User Guide.
For an end-to-end example of deploying an application using an EKS Service, see the AWS Quick Start Tutorial and choose the Creating an EKS Service option.
Before completing this task, you must add a Load Balancer Listener of Type Network LB.
In the LB Listeners area, select the Edit Icon () for the NLB Load Balancer you want to edit. The Edit Load Balancer Listener pane displays.
Add the Custom CIDR(s) and press ENTER. In the example below, 10.180.12.0/22 and 10.180.8.0/22 are added. After the CIDRs are added, you add Security Groups for the Custom CIDR(s).
Note the name of the created Target Group by clicking the Info Icon ( ) for the Load Balancer in the LB Listener card and searching for the string TgName. You will select this Target Group when you create a Shared Load Balancer for the Target Group.
Add a Shared Load Balancer before performing this procedure.
Complete all fields, specifying the Target Group that was created when you added a Load Balancer with the Type Target Group Only.
Create a Shared Load Balancer for the Target Group before performing this procedure.
In the Listeners tab, in the Target Group row, click the Actions menu ( ) and select Manage Rules. You can also select Update attributes from the Actions menu to dynamically update Target Group attributes. The Listener Rules page displays.
In the Listeners tab, in the appropriate Target Group row, click the Actions menu ( ) and select Manage Rules.
In the Listeners tab, in the appropriate Target Group row, click the Actions menu ( ) and select Update Target Group attributes.
To enable stickiness, complete steps 1-5 above. On the Update Target Group Attributes pane, in the Value field for stickiness.enabled, enter true. Update additional stickiness attributes, if needed. Click Update to save the changes.
| Option | Functionality |
|---|---|
| Logs | Displays container logs. |
| State | Displays container state configuration, in YAML code, in a separate window. |
| Container Shell | Accesses the Container Shell. To access this option, you must first set up Shell access for Docker. |
| Host Shell | Accesses the Host Shell. |
| Delete | Deletes the container. |
| Option | Functionality |
|---|---|
| Logs | Displays container logs. When you select this option, the Container Logs window displays. Use the Follow Logs option (enabled by default) to monitor logging in real time for a running container. See the graphic below for an example of the Container Logs window. |
| State | Displays container state configuration, in YAML code, in a separate window. |
| Container Shell | Accesses the Container Shell. To access this option, you must first set up Shell access for Docker. |
| Host Shell | Accesses the Host Shell. |
| Delete | Deletes the container. |
Storage services included in DuploCloud for AWS
DuploCloud AWS Storage Services include:
You can also easily create and manage Kubernetes Storage Classes, Persistent Volume Claims, and GP3 Storage Classes within the DuploCloud Portal.
To create Hosts (Virtual Machines) see the Use Cases documentation.
Working with Load Balancers in a Native Docker Service
For an end-to-end example of deploying an application using a Native Docker Service, see the AWS Quick Start Tutorial and choose the Creating a Native Docker Service option.
In the DuploCloud Portal, navigate to Docker -> Services.
Select the Service that you created.
Click the Load Balancers tab.
Click the Configure Load Balancer link. The Add Load Balancer Listener pane displays.
From the Select Type list box, select your Load Balancer type.
Complete other fields as required and click Add to add the Load Balancer Listener.
When the LB Status card displays Ready, your Load Balancer is running and ready for use.
Working with Load Balancers using AWS ECS
Before you create an ECS Service and Load Balancer, you must create a Task Definition to run the Service. You can define multiple containers in your Task Definition.
For an end-to-end example of deploying an application using an ECS Service, see the AWS Quick Start Tutorial and choose the Creating an ECS Service option.
Tasks run until an error occurs or a user terminates the Task in the ECS Cluster.
Navigate to Cloud Services -> ECS.
In the Task Definitions tab, select the Task Definition Family Name. This is the Task Definition Name that you created, prefixed with a unique DuploCloud identifier.
In the Service Details tab, click the Configure ECS Service link. The Add ECS Service page displays.
In the Name field, enter the Service name.
In the LB Listeners area, click Add. The Add Load Balancer Listener pane displays.
From the Select Type list box, select Application LB.
In the Container Port field, enter a container port number.
In the External Port field, enter an external port number.
From the Visibility list box, select an option.
In the Health Check field, enter a path (such as /) to specify the health check endpoint.
From the Backend Protocol list box, select HTTP.
From the Protocol Policy list box, select HTTP1.
Select other options as needed and click Add.
On the Add ECS Service page, click Submit.
In the Service Details tab, information about the Service and Load Balancer you created is displayed.
Verify that the Service and Load Balancer configuration details in the Service Details tab are correct.
Creating Load Balancers for single and multiple DuploCloud Services
DuploCloud provides the ability to configure Load Balancers with the following types:
Application Load Balancer - An ALB operates at the application layer (Layer 7), routing HTTP and HTTPS traffic to targets based on the content of the request, such as host- and path-based routing rules.
Network Load Balancer - An NLB operates at the transport layer (Layer 4), distributing TCP/UDP traffic across targets with high throughput and very low latency, which makes it well suited for mission-critical servers.
Classic Load Balancer - The previous-generation AWS Load Balancer (deprecated; AWS retired Classic Load Balancers in EC2-Classic networks in August 2022).
Load Balancers can be configured for Docker Native, EKS-Enabled, and ECS Services from the DuploCloud Portal. Using the Portal, you can configure:
Service Load Balancers - Application Load Balancers specific to one service. (Navigate to Docker -> Services or Kubernetes -> Services, select a Service from the list, and click the Load Balancer tab).
Shared and Global load balancers - Application or Network Load Balancers that can be used as a shared Load Balancer between Services and for Global Server Load Balancing (GSLB). (Navigate to Cloud Services -> Networking and select the Load Balancers tab).
DuploCloud allows one Load Balancer per DuploCloud Service. To share a load balancer between multiple Services, create a Service Load Balancer of type Target Group Only.
See the following pages for specific information on adding Load Balancer Listeners for:
To specify a custom classless inter-domain routing (CIDR) value for an NLB Load Balancer, edit the Load Balancer Listener configuration in the DuploCloud Portal.
Before completing this task, you must add a Load Balancer Listener of Type Network LB.
In the DuploCloud Portal, navigate to Docker -> Services or Kubernetes -> Services.
Select the Service name from the NAME column.
Click the Load Balancers tab.
Click Add in the Custom CIDR field of the Edit Load Balancer Listener pane.
Add the Custom CIDR(s) and press ENTER. In the example below, 10.180.12.0/22 and 10.180.8.0/22 are added. After the CIDRs are added, you add Security Groups for the Custom CIDR(s).
Repeat this procedure for each custom CIDR that you want to add.
Navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the appropriate Infrastructure.
Click the Security Group Rules tab.
Click Add to add a Security Group. The Add Tenant Security pane displays.
From the Source Type list box, select Ip Address.
From the IP CIDR list box, select Custom. A field labeled CIDR notation of allowed hosts displays.
In the CIDR Notation of allowed hosts field enter a custom CIDR and complete the other required fields.
Click Add to add the Security Group containing the custom CIDR.
Repeat this procedure to add additional CIDRs.
In the DuploCloud Portal, navigate to Cloud Services -> Networking.
Click the Load Balancer tab.
Click Add. The Create a Load Balancer pane displays.
In the Name field, enter a name for the Load Balancer.
From the Type list box, select a Load Balancer type.
From the Visibility list box, select Public or Internal.
Click Create.
Instead of creating a unique Load Balancer for each Service you create, you can share a single Load Balancer between multiple Services. This is helpful when your applications run distributed microservices where the requests use multiple services and route traffic based on application URLs, which you can define with Load Balancer Listener Rules.
To accomplish this, you:
Create a Service Load Balancer with the type Target Group Only. This step creates a Service Load Balancer that includes a Target Group with a pre-defined name.
Create a Shared Load Balancer with the Target Group that was defined.
Create routing rules for the Shared Load Balancer and the Target Group it defines.
In the DuploCloud Portal, navigate to Docker -> Services or Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
If no Load Balancers exist, click the Configure Load Balancer link. If other Load Balancers exist, click Add in the LB listeners card. The Add Load Balancer Listener pane displays.
From the Select Type list box, select Target Group Only.
You can create a Load Balancer Listener with a type of Target Group Only for Docker, EKS, or ECS Services, based on your application requirements. Complete the other required fields and click Add.
The Target Group Only Service Load Balancer is displayed in the LB Listeners area in the Load Balancers tab on the Services page.
Add a Shared Load Balancer before performing this procedure.
In the Load Balancer tab of the Cloud Services -> Networking page, select the Shared Load Balancer you created. The Load Balancer page with the Listeners tab displays.
In the Listeners tab, click Add. The Load Balancer Listener pane displays.
Complete all fields, specifying the Target Group that was created when you added a Load Balancer with the Type Target Group Only in the previous step.
Click Save. The Shared Load Balancer for the Target Group displays in the Listeners tab.
Create a Shared Load Balancer for the Target Group before performing this procedure.
Rules are not supported for Network Load Balancers (NLBs).
Click Add. The Add LB Listener rule page displays.
Create routing rules for the Target Group by setting appropriate Conditions. Add Routing Rules by specifying Rule Type, Values, and Forward Target Group; the Forward Target Group list includes all Target Groups created for Docker Native, K8s, and ECS Services. Specify a Priority when there are multiple rules. Use the X button to delete specific Values.
Click Submit.
View the rules you defined for any Shared Load Balancer.
In the DuploCloud portal, navigate to Cloud Services -> Networking.
Select the Load Balancer tab.
From the Name column, select the Load Balancer whose rules you want to view.
Update attributes for your defined Target Group.
In the DuploCloud portal, navigate to Cloud Services -> Networking.
Select the Load Balancer tab.
From the Name column, select the Load Balancer whose defined Target Group attributes you want to modify.
You can use the Other Settings card in the DuploCloud Portal to set the following features:
WAF Web ACL
Enable HTTP to HTTPS redirects
Enable Access Logging
Set Idle Timeout
Drop invalid headers
In the DuploCloud Portal, navigate to Docker -> Services or Kubernetes -> Services. The Services page displays.
Select the Service to which your Load Balancer is attached from the Name column.
Click the Load Balancers tab.
In the Other Settings card, click Edit. The Other Load Balancer Settings pane displays.
In the Other Load Balancer Settings pane, select any or all options.
Click Save.
Set up Storage Classes and PVCs in Kubernetes
Navigate to Kubernetes -> Storage -> Storage Class
Configure the EFS parameter created in Step 1 by clicking EFS Parameter.
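As a sketch, an EFS-backed Storage Class (using the AWS EFS CSI driver; the file system ID shown is a placeholder for the EFS parameter from Step 1) might look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap              # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0    # placeholder: your EFS file system ID
  directoryPerms: "700"
```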
Here, Kubernetes is configured to use the Storage Class created in Step 2 above to create a Persistent Volume with 10Gi of storage capacity and the ReadWriteMany access mode.
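A minimal PVC matching that description (the claim name efs-pvc and storage class name efs-sc are assumptions) could be:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc                 # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc      # the Storage Class created earlier (assumed name)
  resources:
    requests:
      storage: 10Gi
```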
Add the following under Volumes to create your application deployment using this PVC.
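In the deployment spec, the Volumes configuration referencing such a claim (the claim and mount names are assumptions) would look roughly like:

```yaml
spec:
  template:
    spec:
      containers:
        - name: app
          volumeMounts:
            - name: data
              mountPath: /data          # where the EFS volume appears in the container
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: efs-pvc          # hypothetical PVC name
```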
Enhance performance and cut costs by using the AWS GP3 Storage Class
GP3, the newer storage class from AWS, offers significant performance benefits as well as cost savings when you set it as your default storage class. By using GP3 instead of GP2, you get a baseline of 3,000 IOPS at no additional cost. You can also reconfigure workloads that used a gp2 volume of up to 1,000 GiB in capacity to use a gp3 volume.
To set GP3 as your default Storage Class for future allocations, you must add a custom setting in your Infrastructure.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the Infrastructure to which you want to add a custom setting (for the default GP3 storage class).
Click the Settings tab.
Click Add. The Infra - Set Custom Data pane displays.
In the Setting Name field, select Other from the list box.
In the Custom Setting field, select DefaultK8sStorageClass from the list box.
In the Setting Value field, enter gp3.
Click Set.
In the LB Listeners area, select the Edit Icon () for the NLB Load Balancer you want to edit. The Edit Load Balancer Listener pane displays.
Note the name of the created Target Group by clicking the Info Icon ( ) for the Load Balancer in the LB Listener card and searching for the string TgName
. You will select the Target Group when you create a Shared Load Balancer for the Target Group.
In the Listeners tab, in the Target Group row, click the Actions menu ( ) and select Manage Rules. You can also select Update attributes from the Actions menu to dynamically update Target Group attributes. The Listener Rules page displays.
In the Listeners tab, in the appropriate Target Group row, click the Actions menu ( ) and select Manage Rules.
In the Listeners tab, in the appropriate Target Group row, click the Actions menu ( ) and select Update attributes.
Refer to the steps above.
If the volume size is greater than 1000 GiB, check the actual IOPS and choose a corresponding value.
For information about migrating your GP2 Storage Classes to GP3, see the AWS documentation.
An AWS API Gateway RestApi is created from the DuploCloud Portal, which takes care of creating the security policies that make the API Gateway accessible to other resources (such as Lambda functions) within the Tenant. Creating the RestApi is the only configuration done from within the DuploCloud Portal; all other API configuration (defining methods and resources, and pointing to Lambda functions) should be done in the AWS Console. To reach the API console, navigate to Cloud Services -> Networking, select the API Gateway tab, and click the Console button under the Actions menu.
The steps below use DuploCloud's API Gateway/Lambda integration to create a web API with an HTTP endpoint for your Lambda function (in this case, it returns a simple "Hello!" response).
The example API deployed here is not secure: anyone on the internet can access the endpoint (in this example, "Hello!"). When creating your own Lambda, configure CORS, authentication, and other security details.
Create a lambda_function.py file with this code:
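The original code listing is not shown here; a minimal handler consistent with the description (returning a plain "Hello!" body in the response shape that an API Gateway proxy integration expects) could be:

```python
# lambda_function.py -- minimal sketch; API Gateway proxy integrations
# expect a dict with statusCode, headers, and body.
def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/plain"},
        "body": "Hello!",
    }
```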
For more information about formatting your Lambda response, see the AWS documentation.
Run zip my_deployment_package.zip lambda_function.py to package the function.
Upload my_deployment_package.zip
to an S3 bucket.
Create a Lambda Function in DuploCloud and point it to that Zip, with handler lambda_function.lambda_handler.
Create an API Gateway and select the Lambda you just created.
Then you can "Deploy API" from the new gateway that's created in AWS Console and you can curl the endpoint that shows up under Stages -> Stage details -> Invoke URL (again in AWS Console).
Run AWS batch jobs without installing software or servers
Create scheduling policies to define when your batch job runs.
From the DuploCloud Portal, navigate to Cloud Services -> Batch page, and click the Scheduling Policies tab.
Click Add. The Create Batch Scheduling Policy page displays.
Click Create.
In the DuploCloud Portal, navigate to Cloud Services -> Batch.
Click the Compute Environments tab.
Click Add. The Add Batch Environment page displays.
In the Compute Environment Name field, enter a unique name for your environment.
In the Type field, select the environment type (On-Demand, Spot, Fargate, etc.).
Modify additional defaults on the page, as needed, or add configuration parameters in Other Configurations.
Click Create. The Compute Environment is created.
From the DuploCloud Portal, navigate to Cloud Services -> Batch page, and click the Queues tab.
Click Add. The Create Batch Queue page displays.
Click Create. The Batch Queue is created.
For Priority, enter a whole number. Job queues with a higher priority are run before those with a lower priority associated with the same compute environment.
Before you can run AWS batch jobs, you need to create job definitions specifying how batch jobs are run.
From the DuploCloud Portal, navigate to Cloud Services -> Batch, and click the Job Definitions tab.
Click Add. The Create Batch Job Definition page displays.
Click Create. The Batch Job Definition is created.
Click Add. The Add Batch Job page displays.
On the Add Batch Job page, define a Job Name, Job Definition, Job Queue, and Job Properties.
Click Create. The Batch job is created.
Navigate from the DuploCloud Portal to Cloud Services -> Batch, and click the Jobs tab. The Jobs list displays.
Click the name of the Job to view Job Details (Status, Job ID, Job Queue, Job Definition).
Configuring a CloudFront distribution in DuploCloud
The S3 bucket needs to be created and static assets need to be uploaded to it. Please follow the steps in the link below to create the S3 bucket.
Create a CloudFront distribution by navigating to Cloud Services -> Networking and selecting the CloudFront tab. Then click +Add.
Name - Friendly name for the distribution.
Root Object - Default root object that will be returned while accessing the root of the domain. Example: index.html. Should not start with "/".
Certificate - ACM certificate for the distribution. Only certificates in us-east-1 can be used. If not already present, the certificate should be created in AWS and added to the Plan (Administrator > Plans > Select Tenant Plan > Certificate tab).
Aliases - Domain names used to access the distribution. Multiple domain names can be configured if needed. If the domain name is managed by Duplo, CNAME mapping is done automatically; otherwise, add the CNAME mapping manually in the appropriate DNS management console.
Origins - Location information where the actual content is stored. It can be an S3 bucket or any HTTP server endpoint.
Domain Name - Select an S3 bucket, or choose Other and enter a custom endpoint.
ID - Unique identifier for the origin. The UI pre-populates it from the domain name; it can be changed if needed.
Path - Optional. The path is suffixed to the origin's domain name (URL) when fetching content. For S3: if the content to be served is under the prefix static, enter "static" in the path. For a custom URL: if all the APIs have a prefix like v1, enter "v1" in the path.
Default Cache Behaviors - The default Cache policy and the default origin to fetch content are entered here.
Cache Policy ID - AWS predefined cache policies are listed. You can select one or choose another and enter a custom cache policy.
Target Origin - Choose the default origin that should be used for the distribution
Custom Cache Behaviors - Additional Cache policies and path patterns to use the custom cache behaviors are entered here.
Cache Policy ID - AWS predefined cache policies are listed. You can select one or choose another and enter a custom cache policy.
Path Pattern - For requests matching this pattern, the specified origin and cache policy will be used. For example, with "api/*", all requests that start with the api prefix will be routed to this origin.
Target Origin - Choose the origin that should be used for this custom path.
Note: If the S3 bucket used is part of the same Tenant where the CloudFront distribution is created, Duplo creates an Origin Access Identity and updates the bucket policy to allow GetObject for the CloudFront Origin Access Identity. No extra steps are needed on the user end to deal with S3 bucket permissions.
Create the Lambda function in the Tenant by selecting the Edge lambda checkbox. This creates a Lambda function in us-east-1 along with the necessary permissions.
Create a CloudFront distribution by providing the necessary values; in addition, for the Lambda@Edge, open the function associations and select the Lambda function.
Note: Versions of the Lambda function are shown, so the same function may appear multiple times, as V1 and V2.
Once the deployment status becomes Deployed, visit the domain name and you should see the invocation of the Lambda function.
The default origin should point to your app URL ui.mysite.com.
Create a new S3 Bucket to store maintenance pages. In the bucket create a prefix/folder called maintpage.
Upload maintenance page assets (.html, .css, .js, etc.) into the S3 bucket inside the maintpage folder.
Add a new S3 Origin pointing to the S3 bucket that holds the maintenance static assets.
Add a new Custom Cache Behavior using /maintpage/* as the path pattern; the target origin should be the S3 maintenance assets origin.
Add a Custom Error Response mapping.
In the error code dropdown, select the HTTP code for which the maintenance page should be served. 502 Bad Gateway is commonly used.
In the Response page path, enter /maintpage/5xx.html. Change 5xx.html to a page that exists in S3.
HTTP Response code can be either 200 or 502 (same as the actual source origin response code).
You can perform processing directly in the DuploCloud Portal without the additional overhead of installed software, allowing you to focus on analyzing results and diagnosing problems.
In the Create Batch Scheduling Policy page, create batch job scheduling policies using the guidance in the AWS Batch documentation. The fields in the AWS documentation map to the fields on the DuploCloud Create Batch Scheduling Policy page.
Compute environments (Elastic Compute Cloud [EC2] instances) map to DuploCloud Infrastructures. The settings and constraints in the compute environment define how to configure and automatically launch the instances.
After you define job definitions, create queues for your batch jobs to run in. For more information about batch job queues, see the AWS Batch documentation.
In the Create Batch Queue page, create batch job queues using the guidance in the AWS Batch documentation. The fields in the AWS documentation map to the fields on the DuploCloud Create Batch Queue page.
In the Create Batch Job Definition page, define your batch jobs using the guidance in the AWS Batch documentation. The fields in the AWS documentation map to the fields on the DuploCloud Create Batch Job Definition page.
Add a job for AWS batch processing. See the AWS Batch documentation for more information about batch jobs.
After you create your job definitions and queues, in the DuploCloud Portal, navigate to Cloud Services -> Batch, and click the Jobs tab.
Optionally, if you created a scheduling policy to apply to this job, paste the policy properties into the Other Properties field.
As you create the Batch Job, paste YAML code into the Other Properties field on the Add Batch Job page.
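The YAML referenced here is not reproduced in this page; a hypothetical example of scheduling-policy-related properties (the key names are assumptions based on the AWS Batch SubmitJob API parameters shareIdentifier and schedulingPriorityOverride, so verify them against your environment) might look like:

```yaml
# Hypothetical Other Properties for a job submitted to a queue that has a
# fair-share scheduling policy attached; adjust key names as needed.
ShareIdentifier: default
SchedulingPriorityOverride: 1
```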
Use the AWS Batch documentation for information about running your AWS Batch jobs.
Databases supported by DuploCloud AWS
A number of databases are supported for DuploCloud and AWS. Use the procedures in this section to set them up.
Create ElastiCache for Redis database and Memcache memory caching
Amazon ElastiCache is a serverless, Redis- and Memcached-compatible caching service delivering real-time, cost-optimized performance for modern applications.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Click the ElastiCache tab.
Click Add. The Create an ElastiCache page displays.
Select the ElastiCache Type and complete the required fields based on your type selection.
Optionally, select Enable Cluster Mode to scale the ElastiCache instance for performance.
Click Create.
Pass the cache endpoint to your application through the Environment Variables via the AWS Service.
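For example, if the cache endpoint is exposed to the container as an environment variable (the variable name REDIS_ENDPOINT below is an assumption; use whatever name you configure on your service), the application can parse it before handing it to its Redis or Memcached client:

```python
import os

def cache_endpoint(var_name="REDIS_ENDPOINT"):
    """Read a 'host:port' cache endpoint from the environment and split it.

    The variable name is whatever you configured on the DuploCloud service;
    REDIS_ENDPOINT here is only an example.
    """
    endpoint = os.environ[var_name]  # e.g. "my-cache.abc123.use1.cache.amazonaws.com:6379"
    host, _, port = endpoint.rpartition(":")
    return host, int(port)

# host, port = cache_endpoint()
# ...then pass host/port to your Redis or Memcached client library.
```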
Create and connect to an RDS database instance
Support for the Aurora Serverless V1 database engines has been deprecated. When using Terraform, do not create V1 engines.
DuploCloud supports the following RDS databases in AWS:
MySQL
PostgreSQL
MariaDB
Microsoft SQL-Express
Microsoft SQL-Web
Microsoft SQL-Standard
Aurora MySQL
Aurora MySQL Serverless
Aurora PostgreSQL
Aurora PostgreSQL Serverless
When upgrading RDS versions, use AWS Console and see your Cloud Provider for compatibility requirements. Note that while versions 5.7.40, 5.7.41, and 5.7.42 cannot be upgraded to version 8.0.28, you can upgrade these versions to version 8.0.32 and higher.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Click Add. The Create a RDS page displays.
Fill out the form based on your requirements, and Enable Logging, if needed.
Optionally, in the Backup Retention Period in Days field, enter a number of days to retain automated backups between one (1) and thirty-five (35). If a value is not entered, the Backup Retention Period value configured in Systems Settings will be applied.
You can create Aurora Serverless V2 Databases by selecting Aurora-MySql-Serverless-V2 or Aurora-PostgreSql-Serverless-V2 from the RDS Database Engine list box. Select the RDS Engine Version compatible with Aurora Serverless v2. The RDS Instance Size of db.serverless applies to both engines.
Once the database is created, select it and use the Instances tab to view the endpoint and credentials. Use the Endpoints and credentials to connect to the database from your application running in an EC2 instance. The database is only accessible from inside the EC2 instance in the current Tenant, including the containers running within.
Pass the endpoint, name, and credentials to your application for maximum security.
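A common pattern is to inject the endpoint, database name, and credentials as environment variables and assemble the connection string in the application. A minimal sketch, assuming variable names DB_HOST, DB_NAME, DB_USER, and DB_PASSWORD (these names are illustrative, not a DuploCloud convention):

```python
import os

def rds_dsn(driver="postgresql"):
    """Build a database URL from environment variables injected into the app.

    DB_HOST/DB_NAME/DB_USER/DB_PASSWORD are example variable names; use
    whatever names you configure on your DuploCloud service.
    """
    return "{driver}://{user}:{pw}@{host}:{port}/{name}".format(
        driver=driver,
        user=os.environ["DB_USER"],
        pw=os.environ["DB_PASSWORD"],
        host=os.environ["DB_HOST"],
        port=os.environ.get("DB_PORT", "5432"),  # default PostgreSQL port
        name=os.environ["DB_NAME"],
    )
```

Keeping credentials out of the image and code means the same container can run unchanged across Dev, Stage, and Production Tenants.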
Adding DynamoDB Tables in DuploCloud
When using DynamoDB in DuploCloud AWS, the required permissions to access the DynamoDB from a virtual machine (VM), Lambda functions, and containers are provisioned automatically using Instance profiles. Therefore, no Access Key is required in the Application code.
When you write application code for DynamoDB in DuploCloud AWS, use the IAM role/Instance profile to connect to these services. If possible, use the AWS SDK constructor, which uses the region.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Click the DynamoDB tab.
Click Add. The Create a DynamoDB Table pane displays.
Specify the DynamoDB Table Name and other required fields, including Primary Key, Key Type, Attribute Type, Sort Key, and Sort Key Type.
Click Create.
For detailed guidance about configuring the duplocloud_aws_dynamodb_table resource, refer to the Terraform documentation. This resource allows for creating and managing AWS DynamoDB tables within DuploCloud.
Perform additional configuration, as needed, in the AWS Console by clicking the >_ Console icon. In the AWS console, you can configure the application-specific details of DynamoDB database tables. However, no access or security-level permissions are provided.
After creating a DynamoDB table, you can retrieve the final name of the table using the .fullname attribute, which is available in the read-only section of the documentation. This is handy for applications that dynamically access table names post-creation. If you encounter any issues or need further assistance, refer to the documentation or contact support.
Using IAM for secure log-ins to RDS databases
Authenticate to MySQL, PostgreSQL, Aurora MySQL, Aurora PostgreSQL, and MariaDB RDS instances using AWS Identity and Access Management (IAM) database authentication.
Using IAM for authenticating an RDS instance offers the following benefits:
Network traffic to and from the database is encrypted using Secure Socket Layer (SSL) or Transport Layer Security (TLS).
Centrally manage access to your database resources, instead of managing access individually for each DB instance.
For applications running on Amazon EC2 hosts, you can use profile credentials specific to your EC2 instance to access your database, instead of using a password, for greater security.
Use the System Config tab to enable IAM authentication before enabling it for a specific RDS instance.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab. The Add Config pane displays.
From the Config Type list box, set Flags.
From the Key list box, select Enable RDS IAM auth.
From the Value list box, select True.
Click Submit. The configuration is displayed in the System Config tab.
You can also enable IAM for any MySQL, PostgreSQL, and MariaDB instance during RDS creation or by updating the RDS Settings after RDS creation.
Select the Enable IAM auth option when you create an RDS database.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
In the RDS tab, select the database for which you want to enable IAM.
Click the Actions menu and select RDS Settings -> Update IAM Auth. The Update IAM Auth pane displays.
Select Enable IAM Auth.
Click Update.
To download a token which you can use for IAM authentication:
In the DuploCloud Portal, navigate to Cloud Services -> Database.
In the RDS tab, select the database for which you want to enable IAM.
Click the Actions menu and select View -> Get DB Auth Token. The RDS Credentials window displays.
In the RDS Credentials window, click the Copy Icon ( ) to copy the Endpoint, Username, and Password to your clipboard.
Click Close to dismiss the window.
Today, technology organizations typically have people with two distinct skill sets: Software Engineers and DevOps Engineers. Further, some may have DevOps and compliance functions managed within the same or separate teams. In startups and smaller companies, there may just be the same engineers wearing all three hats.
Software engineers come up with the high level application architecture. The business provides compliance requirements. These two are passed on to the DevOps team who use their subject matter expertise to realize what needs to be done for the cloud infrastructure. There are other elements of operations in scope, such as CI/CD and diagnostics that include central logging, monitoring and alerting.
Compliance and security are DuploCloud's bread-and-butter. The core approach is out-of-box compliance, where users don't have to explicitly learn and apply compliance controls. Following are some of the white papers on how DuploCloud implements security and compliance.
We support several other standards including NIST, ISO, GDPR and so on.
Support for AWS Timestream databases
DuploCloud supports the Amazon Timestream database in the DuploCloud Portal. AWS Timestream is a fast, scalable, and serverless time-series database service that makes it easier to store and analyze trillions of events per day at an accelerated speed.
Amazon Timestream automatically scales to adjust for capacity and performance, so you don’t have to manage the underlying infrastructure.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
From the RDS page, click the Timestream tab.
Click Add. The Add Timestream Database pane displays.
Enter the DatabaseName.
Select an Encryption Key, if required.
Click Submit. The Timestream database name displays on the Timestream tab.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
From the RDS page, click the Timestream tab.
Select the database from the Name column.
On the Tables tab, click Add. The Add Timestream Table pane displays.
Enter the Table Name and other necessary information to size and create your table.
Click Create.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
From the RDS page, click the Timestream tab.
Select the database from the Name column.
On the Timestream page, click the database's Action menu to modify the JSON code or launch the Console in AWS. You can also select the database name in the Name column and, from the Tables tab, click the table's Action menu to modify the JSON code, launch the Console in AWS, or delete a table.
The application engineers start off by giving a set of requirements to the operations or DevOps team. This typically includes:
1. High-level architecture, like the AWS example shown in the figure below, which depicts the following:
- A set of Docker containers to be deployed, connected to a SQL database along with a Redis instance and an S3 bucket.
- Some of the containers need to be behind a public ELB, others behind an internal LB.
- The data science team may want a Spark cluster connected to ES.
- Lambda functions behind an API Gateway are to be deployed.
One could draw similar examples for other cloud providers.
2. Multiple environments might be required: Dev, Stage, QA, and Production. In some cases there may be a need to deploy a unique copy of the application for each customer (Single-Tenant Application).
3. Diagnostics. Central logging, monitoring, and alerting must be established.
4. Compliance standards. Specific standards are to be met, like PCI, HIPAA, SOC 2, etc.
5. CI/CD is to be established.
Deploy Hosts in one Tenant that can be accessed by Kubernetes (K8s) Pods in a separate Tenant.
You can enable shared Hosts in the DuploCloud Portal. First, configure one Tenant to allow K8s Pods from other Tenants to run on its Host(s). Then, configure another Tenant to run its K8s Pods on Hosts in other Tenants. This allows you to break Tenant boundaries for greater flexibility.
In the DuploCloud Portal, navigate to Administrator -> Tenant.
From the Tenant list, select the name of the Tenant to which the Host is defined.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature item list, select Allow hosts to run K8S pods from other tenants.
Select Enable.
Click Add. This Tenant's hosts can now run Pods from other Tenants.
In the DuploCloud Portal, navigate to Administrator -> Tenant.
From the Tenant list, select the name of the Tenant that will access the other Tenant's Host (the Tenant not associated with a Host).
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature item list, select Enable option to run K8S pods on any host.
Select Enable.
Click Add. This Tenant can now run Pods on other Tenant's Hosts.
From the Tenant list box at the top of the DuploCloud Portal, select the name of the Tenant that will run K8s Pods on the shared Host.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
In the Services tab, click Add. The Add Service window displays.
Fill in the Service Name, Cloud, Platform, and Docker Image fields. Click Next.
In the Advanced Options window, from the Run on Any Host item list, select Yes.
Click Create. A Service running the shared Host is created.
Autoscale your DuploCloud Kubernetes deployment
Before autoscaling can be configured for your Kubernetes service, make sure that:
Autoscaling Group (ASG) is setup in the DuploCloud tenant
Cluster Autoscaler is enabled for your DuploCloud infrastructure
Horizontal Pod Autoscaler (HPA) automatically scales the Deployment and its ReplicaSet. HPA checks the metrics configured in regular intervals and then scales the replicas up or down accordingly.
You can configure HPA while creating a Deployment Service from the DuploCloud Portal.
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
Create a new Service by clicking Add.
In Add Service - Basic Options, from the Replication Strategy list box, select Horizontal Pod Scheduler.
In the Horizontal Pod Autoscaler Config field, add a configuration. Update the minimum/maximum replica count in the resource attributes, based on your requirements.
Click Next to navigate to Advanced Options.
In Advanced Options, in the Other Container Config field, ensure your resource attributes, such as Limits and Requests, are set to work with your HPA configuration.
At the bottom of the Advanced Options page, click Create.
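The sample configurations the steps above refer to are not reproduced in this page. A hypothetical sketch of both fields follows; the min/max replica and metrics fields follow the Kubernetes HPA spec, but treat the exact shape DuploCloud expects as an assumption to verify against your portal version:

```yaml
# Hypothetical Horizontal Pod Autoscaler Config (Kubernetes HPA spec fields):
minReplicas: 2
maxReplicas: 5
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

# Hypothetical Other Container Config -- the CPU requests/limits that a
# CPU-utilization HPA metric needs in order to compute percentages:
Resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi
```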
For HPA-configured Services, Replicas is set to Auto in the DuploCloud Portal
When your services are running, Replicas: Auto is displayed on the Service page.
If a Kubernetes Service is running with a Horizontal Pod AutoScaler (HPA), you cannot stop the Service by clicking Stop in the service's Actions menu in the DuploCloud Portal.
Instead, do the following to stop the service from running:
In the DuploCloud Portal, navigate to Kubernetes -> Containers and select the Service you want to stop.
From the Actions menu, select Edit.
From the Replication Strategy list box, select Static Count.
In the Replicas field, enter 0 (zero).
Click Next to navigate to the Advanced Options page.
Click Update to update the service.
When the Cluster Autoscaler flag is set and a Tenant has one or more ASGs, an unschedulable-pod alert will be delayed by five (5) minutes to allow for autoscaling. You can configure the Infrastructure settings to bypass the delay and send the alerts in real-time.
From the DuploCloud portal, navigate to Administrator -> Infrastructure.
Click on the Infrastructure you want to configure settings for in the Name list.
Select the Settings tab.
Click the Add button. The Infra - Set Custom Data pane displays.
In the Setting Name list box, select Enables faults prior to autoscaling Kubernetes nodes.
Set the Enable toggle switch to enable the setting.
Click Set. DuploCloud will now generate faults for unschedulable K8s nodes immediately (before autoscaling).