Use Cases supported for DuploCloud AWS
This section details common use cases for DuploCloud AWS.
Topics in this section are covered in the order of typical usage. Use cases that are foundational to DuploCloud, such as Infrastructure, Tenant, and Hosts, are listed at the beginning of this section, while supporting use cases such as cost management for billing, JIT Access, Resource Quotas, and Custom Resource tags appear near the end.
AWS Console link
Enable Elastic Kubernetes Service (EKS) for AWS by creating a DuploCloud Infrastructure
In the DuploCloud platform, a Kubernetes Cluster maps to a DuploCloud Infrastructure.
Start by creating a new Infrastructure in DuploCloud. When prompted to provide details for the new Infrastructure, select Enable EKS. In the EKS Version field, select the desired release.
Optionally, enable logging and custom EKS endpoints.
The worker nodes and remaining workload setup are described in the Tenant topic.
Up to one EKS cluster (0 or 1) is supported for each DuploCloud Infrastructure.
Creating an Infrastructure with EKS can take some time. See the Infrastructure section for details about other elements on the Add Infrastructure form.
When the Infrastructure is in the ready state, as indicated by a Complete status, navigate to Kubernetes -> Services and select the Infrastructure from the NAME column to view the Kubernetes configuration details, including the token and configuration for kubectl.
When you create Tenants in an Infrastructure, a namespace is created in the Kubernetes cluster with the name duploservices-TENANT_NAME.
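For example, once you apply the kubectl token and configuration shown on the Kubernetes -> Services page, you can work directly in a Tenant's namespace. The Tenant name dev01 below is a hypothetical placeholder:

```bash
# List the Pods in the namespace DuploCloud created for a hypothetical Tenant named dev01
kubectl get pods --namespace duploservices-dev01
```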
Enable logging functionality for EKS
Follow the steps in the section Creating an Infrastructure. In the EKS Logging list box, select one or more ControlPlane Log types.
Enable EKS logging for an Infrastructure that you have already created.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
From the NAME column, select the Infrastructure for which you want to enable EKS logging.
Click the Settings tab.
Click Add. The Infra - Set Custom Data pane displays.
From the Setting Name list box, select EKS ControlPlane Logs.
In the Setting Value field, enter: api;audit;authenticator;controllerManager;scheduler
Click Set. The EKS ControlPlane Logs setting is displayed in the Settings tab.
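If you want to confirm outside the Portal which control plane log types are active, you can query the cluster with the AWS CLI. The cluster name below is a placeholder; DuploCloud typically derives the EKS cluster name from the Infrastructure name:

```bash
# Show which EKS control plane log types are currently enabled
aws eks describe-cluster --name duploinfra-nonprod \
  --query 'cluster.logging.clusterLogging' --output json
```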
Enable Elastic Container Service (ECS) for AWS when creating a DuploCloud Infrastructure
Setting up an Infrastructure that uses ECS is similar to creating an Infrastructure with EKS, except that during creation, instead of selecting Enable EKS, you select Enable ECS Cluster.
For more information about ECS Services, see the documentation.
Up to one ECS cluster (0 or 1) is supported for each DuploCloud Infrastructure.
Use the DuploCloud Portal to create an AWS Infrastructure and associated Plan
From the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Click Add.
Define the Infrastructure by completing the fields on the Add Infrastructure form.
Select Enable EKS to enable EKS for the Infrastructure, or select Enable ECS Cluster to enable an ECS Cluster during Infrastructure creation.
Optionally, select Advanced Options to specify additional configurations.
Click Create. The Infrastructure is created and listed on the Infrastructure page. DuploCloud automatically creates a Plan (with the same name as the Infrastructure) containing the Infrastructure configuration.
Cloud providers limit the number of Infrastructures that can run in each region. Refer to your cloud provider for further guidelines on how many Infrastructures you can create.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the Infrastructure containing settings that you want to view.
Click the Settings tab. The Infrastructure settings display.
Up to one EKS or ECS cluster (0 or 1) is supported for each DuploCloud Infrastructure.
You can customize your EKS configuration:
Specify EKS endpoints for an Infrastructure
AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the default public endpoint for each service in an AWS Region. However, when you create an Infrastructure in DuploCloud, you can specify a custom Private endpoint, a custom Public endpoint, or Both public and private custom endpoints. If you specify no endpoints, the default Public endpoint is used.
For more information about AWS endpoints, see the AWS documentation.
Follow the steps in the section Creating an Infrastructure. Before clicking Create, specify EKS Endpoint Visibility.
From the EKS Endpoint Visibility list box, select Public, Private, or Both public and private. If you select private or Both public and private, the Allow VPN Access to the EKS Cluster option is enabled.
Click Advanced Options.
Using the Private Subnet CIDR and Public Subnet CIDR fields, specify CIDRs for alternate public and private endpoints.
Click Create.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the NAME column, select the Infrastructure.
Click the Settings tab.
From the Setting Name list box, select Enable VPN Access to EKS Cluster.
Select Enable to enable VPN.
Modifying endpoints can incur an outage of up to thirty (30) minutes in your EKS cluster. Plan your update accordingly to minimize disruption for your users.
To modify the visibility for EKS endpoints you have already created:
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the Infrastructure for which you want to modify EKS endpoints.
Click the Settings tab.
From the Setting Value list box, select the desired type of visibility for endpoints (private, public, or both).
Click Set.
Enable Cluster Autoscaler for a Kubernetes cluster
The Cluster AutoScaler automatically adjusts the number of nodes in your cluster when Pods fail or are rescheduled onto other nodes.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the NAME column, select the Infrastructure with which you want to use Cluster AutoScaler.
Click the Settings tab.
Click Add. The Add Infra - Set Custom Data pane displays.
From the Setting Name list box, select Cluster Autoscaler.
Select Enable to enable the Cluster Autoscaler.
Click Set. Your configuration is displayed in the Settings tab.
Securely access AWS Services using VPC endpoints
An AWS VPC endpoint creates a private connection to supported AWS services and VPC endpoint services powered by AWS PrivateLink. Amazon VPC instances do not require public IP addresses to communicate with the resources of the service. Traffic between an Amazon VPC and a service does not leave the Amazon network.
VPC endpoints are virtual devices. They are horizontally scaled, redundant, and highly available Amazon VPC components that allow communication between instances in an Amazon VPC and services without imposing availability risks or bandwidth constraints on network traffic. There are two types of VPC endpoints: interface endpoints and gateway endpoints.
DuploCloud allows you to specify predefined AWS endpoints for your Infrastructure in the DuploCloud Portal.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Select the Infrastructure to which you want to add VPC endpoints.
Click the Endpoints tab.
Click Add. The Infra - Create VPC Endpoints pane displays.
From the VPC Endpoint Service list box, select the endpoint service you want to add.
Click Create. In the Endpoints tab, the VPC Endpoint ID of your selected service displays.
Enable ECS Elasticsearch logging for containers at the Tenant level
To generate logs for AWS ECS clusters, you must first create an Elasticsearch logging container. Once auditing is enabled, your container logging data can be captured for analysis.
Define at least one ECS Task Definition and Service.
Enable the feature.
In the DuploCloud Portal, navigate to Administrator -> Tenant. The Tenant page displays.
From the Name column, select the Tenant that is running the container for which you want to enable logging.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Other. The Configuration field displays.
In the Configuration field, enter Enable ECS ElasticSearch Logging.
In the field below the Configuration field, enter True.
Click Add. In the Settings tab, Enable ECS ElasticSearch Logging displays a Value of True.
You can verify that ECS logging is enabled for a specific container.
In the DuploCloud Portal, navigate to Cloud Services -> ECS.
Click the Task Definitions tab.
Select the Task Definition Family Name in which your container is defined.
In the Container - 1 area, in the Container Other Config field, your LogConfiguration is displayed.
In the Container - 2 area, another container named log_router is created by DuploCloud.
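The exact LogConfiguration that DuploCloud generates depends on your environment, but a FireLens-style configuration of roughly the following shape is representative. Every value below (host, index, and so on) is illustrative only:

```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "es",
      "Host": "vpc-duplo-logging.us-west-2.es.amazonaws.com",
      "Port": "443",
      "Index": "duplo-ecs-logs",
      "tls": "On"
    }
  }
}
```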
Using DuploCloud Tenants for AWS
In AWS, cloud features such as AWS resource groups, AWS IAM, AWS security groups, and KMS keys, as well as Kubernetes Namespaces, are exposed through Tenants, which reference their configurations.
For more information about DuploCloud Tenants, see the Tenant topic in the DuploCloud Common Components documentation.
Navigate to Administrator -> Tenant in the DuploCloud Portal and click Add. The Create a Tenant pane displays.
In the Name field, enter a name for the Tenant. Choose unique names that are not substrings of one another; for example, if you have a Tenant named dev, you cannot create another named dev2. We recommend using distinct numerical suffixes like dev01 and dev02.
In the Plan list box, select the Plan to associate the Tenant with.
Click Create. The Tenant is created.
Creating an Infrastructure with ECS can take some time. See the Infrastructure section for details about other elements on the Add Infrastructure form.
Enable EKS endpoints, logs, Cluster Autoscaler, and more. For information about configuration options, see these topics.
You can customize your ECS configuration. See the ECS documentation for information about configuration options.
To change VPN visibility from public to private after you have created the Infrastructure, follow these steps.
In the EKS Endpoint Visibility row, in the Actions column, click the menu icon and select Update Setting. The Infra - Set Custom Data pane displays.
Click Set. When you select Private or Both public and private, the Allow VPN Access to the EKS Cluster option will be enabled.
In the EKS Endpoint Visibility row, in the Actions column, click the menu icon and select Update Setting. The Infra - Set Custom Data pane displays.
Click the menu icon in the row of the task definition and select Edit Task Definition. The Edit Task Definition page displays your defined Containers.
For information about granting cross-Tenant access to resources, see the related documentation.
Adding EC2 hosts in DuploCloud AWS
Once you have the Infrastructure (Networking, Kubernetes cluster, and other standard configurations) and an environment (Tenant) set up, the next step is to launch EC2 virtual machines (VMs). You create VMs to be:
EKS Worker Nodes
Worker Nodes (Docker Host), if the built-in container orchestration is used.
DuploCloud AWS requires at least one Host (VM) to be defined per AWS account.
You also create VMs as regular nodes that are not part of any container orchestration, for example, when a user manually connects and installs applications, such as running Microsoft SQL Server or an IIS application in a VM, or similar custom use cases.
While all the lower-level details like IAM roles, Security groups, and others are abstracted away from the user (as they are derived from the Tenant), standard application-centric inputs must be provided. This includes a Name, Instance size, Availability Zone choice, Disk size, Image ID, etc. Most of these are optional, and some are published as a list of user-friendly choices by the admin in the plan (Image or AMI ID is one such example). Other than these AWS-centric parameters, there are two DuploCloud platform-specific values to be provided:
Agent Platform: This is applicable if the VM is going to be used as a host for container orchestration by the platform. The choices are:
EKS Linux: If the VM is to be added to the EKS cluster (that is, EKS is the chosen approach for container orchestration)
Linux Docker: If this is to be used for hosting Linux containers using the Built-in Container orchestration
Docker Windows: If this is to be used for hosting Windows containers using the Built-in Container orchestration
None: If the VM is going to be used for non-Container Orchestration purposes and contents inside the VM will be self-managed by the user
Allocation Tags (Optional): If the VM is being used for containers, you can set a label on it. This label can then be specified during docker app deployment to ensure the application containers are pinned to a specific set of nodes. Thus, you can further split a tenant into separate server pools and deploy applications.
If a VM is being used for container orchestration, ensure that the Image ID corresponds to an Image for that container orchestration. This is set up for you. The list box will have self-descriptive Image IDs. Examples are EKS Worker, Duplo-Docker, Windows Docker, and so on. Anything that starts with Duplo would be an image for the Built-in container orchestration.
Upgrade the Elastic Kubernetes Service (EKS) version for AWS
AWS frequently updates the EKS version based on new features that are available in the Kubernetes platform. DuploCloud automates this upgrade in the DuploCloud Portal.
IMPORTANT: An EKS version upgrade can cause downtime to your application depending on the number of replicas you have configured for your services. Schedule this upgrade outside of your business hours to minimize disruption.
DuploCloud notifies users when an upgrade is planned. The upgrade process follows these steps:
A new EKS version is released.
DuploCloud adds support for the new EKS version.
DuploCloud tests all changes and new features thoroughly.
DuploCloud rolls out support for the new EKS version in a platform release.
The user updates the EKS version.
Updating the EKS version:
Updates the EKS Control Plane to the latest version.
Updates all add-ons and components.
Relaunches all Hosts to deploy the latest version on all nodes.
After the upgrade process completes successfully, you can assign allocation tags to Hosts.
Click Administrator -> Infrastructure.
Select the Infrastructure that you want to upgrade to the latest EKS version.
Select the EKS tab. If an upgrade is available for the Infrastructure, an Upgrade link appears in the Value column.
Click the Upgrade link. The Upgrade EKS Cluster pane displays.
From the Target Version list box, select the version to which you want to upgrade.
From the Host Upgrade Action, select the method by which you want to upgrade hosts.
Click Start. The upgrade process begins.
Click Administrator -> Infrastructure.
Select the Infrastructure with components you want to upgrade.
Select the EKS tab. If an upgrade is available for the Infrastructure components, an Upgrade Components link appears in the Value column.
Click the Upgrade Components link. The Upgrade EKS Cluster Components pane displays.
From the Host Upgrade Action, select the method by which you want to upgrade hosts.
Click Start. The upgrade process begins.
The EKS Upgrade Details page displays that the upgrade is In Progress.
Find more details about the upgrade by selecting your Infrastructure from the Infrastructure page. Click the EKS tab, and then click Show Details.
When you click Show Details, the EKS Upgrade Details page displays the progress of updates for all versions and Hosts. Green checkmarks indicate successful completion in the Status list. Red Xs indicate Actions you must take to complete the upgrade process.
If any of your Hosts use allocation tags, you must assign allocation tags to the Hosts:
After your Hosts are online and available, navigate to Cloud Services -> Hosts.
Select the host group tab (EC2, ASG, etc.) on the Hosts screen.
Click the Add button.
Name the Host and provide other configuration details on the Add Host form.
Select Advanced Options.
Edit the Allocation Tag field.
Click Create and define your allocation tags.
Click Add to assign the allocation tags to the Host.
Configure settings for all new Tenants under a Plan
You can configure settings to apply to all new Tenants under a Plan using the Config tab. Tenant Config settings will not apply to Tenants created under the Plan before the settings were configured.
From the DuploCloud portal, navigate to Administrator -> Plan.
Click on the Plan you want to configure settings under in the NAME column.
Select the Config tab.
Click Add. The Add Config pane displays.
From the Config Type field, select TenantConfig.
In the Name field, enter the setting that you would like to apply to new Tenants under this Plan. (In the example, the enable_alerting setting is entered.)
In the Value field, enter True.
Click Submit. The setting entered in the Name field (enable alerting in the example) will apply to all new Tenants added under the Plan.
You can check that the Tenant Config settings are enabled for new Tenants on the Tenants details page, under the Settings tab.
From the DuploCloud portal, navigate to Administrator -> Tenants.
From the NAME column, select a Tenant that was added after the Tenant Config setting was enabled.
Click on the Settings tab.
Check that the configured setting is listed in the NAME column. (Enable Alerting in the example.)
Manage Tenant expiry settings in the DuploCloud Portal
In the DuploCloud Portal, configure an expiration time for a Tenant. At the set expiration time, the Tenant and associated resources are deleted.
In the DuploCloud Portal, navigate to Administrator -> Tenants. The Tenants page displays.
From the Name column, select the Tenant for which you want to configure an expiration time.
From the Actions list box, select Set Tenant Expiration. The Tenant - Set Tenant Expiration pane displays.
Select the date and time (using your local time zone) when you want the Tenant to expire.
Click Set. At the configured day and time, the Tenant and associated resources will be deleted.
The Set Tenant Expiration option is not available for Default or Compliance Tenants.
Manage Tenant session duration settings in the DuploCloud Portal
In the DuploCloud Portal, configure the session duration time for all Tenants or a single Tenant. At the end of a session, the Tenant or Tenants cease to be active for a particular user, application, or Service.
For more information about IAM roles and session times in relation to a user, application, or Service, see the AWS Documentation.
In the DuploCloud Portal, navigate to Administrator -> System Settings. The System Settings page displays.
Click the System Config tab.
Click Add. The App Config pane displays.
From the Config Type list box, select AppConfig.
From the Key list box, select AWS Role Max Session Duration.
From the Select Duration Hour list box, select the maximum session time in hours or set a Custom Duration in seconds.
Click Submit. The AWS Role Max Session Duration and Value are displayed in the System Config tab. Note that the Value you set for maximum session time in hours is displayed in seconds. You can Delete or Update the setting in the row's Actions menu.
In the DuploCloud Portal, navigate to Administrator -> Tenants. The Tenants page displays.
From the Name column, select the Tenant for which you want to configure session duration time.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select AWS Role Max Session Duration.
From the Select Duration Hour list box, select the maximum session time in hours or set a Custom Duration in seconds.
Click Add. The AWS Role Max Session Duration and Value are displayed in the Settings tab. Note that the Value you set for maximum session time in hours is displayed in seconds. You can Delete or Update the setting in the row's Actions menu.
Connect an EC2 instance with SSH by Session ID or by downloading a key
Once an EC2 instance is created, you can connect to it with SSH either by using a Session ID or by downloading a key.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts and select the host to which you want to connect.
After you select the Host, on the Host's page click the Actions menu and select SSH. A new browser tab opens, and you can connect to your Host using SSH by Session ID.
After you select the Host, on the Host's page click the Actions menu and select Connect -> Connection Details. The Connection Info for Host window opens. Follow the instructions to connect to the server.
Click Download Key.
If you don't want to display the Download Key button, disable the button's visibility.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
From the Config Type list box, select Flags.
From the Key list box, select Disable SSH Key Download.
From the Value list box, select true.
Click Submit.
Configuring the following system setting disables SSH access for read-only users. Once this setting is configured, only administrator-level users can access SSH.
From the DuploCloud Portal, navigate to Administrator -> System Settings.
Select the Settings tab, and click Add. The Update Config Flags pane displays.
From the Config Type list box, select Flags.
In the Key list box, select Admin Only SSH Key Download.
In the Value list box, select true.
Click Submit. The setting is configured and SSH access is limited to administrators only.
Add a Host (virtual machine) in the DuploCloud Portal.
DuploCloud AWS supports EC2, ASG, and BYOH (Bring Your Own Host) types. Use BYOH for any VMs that are not EC2 or ASG.
Ensure you have selected the appropriate Tenant from the Tenant list box at the top of the DuploCloud Portal.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
Click the tab that corresponds to the type of Host you want to create (EC2, ASG, or BYOH).
Click Add. The Host that you added is displayed in the appropriate tab (EC2, ASG, or BYOH).
To connect to the Host using SSH, see Connect an EC2 instance with SSH.
The EKS Image ID is the image published by AWS specifically for an EKS worker in the version of Kubernetes deployed at Infrastructure creation time.
From the DuploCloud Portal, navigate to Cloud Services -> Hosts.
Select the Host name from the list.
From the Actions list box, you can select Connect, Host Settings, or Host State to perform the following supported actions:
Control placement of EC2 instances on a physical server with a Dedicated Host
Use Dedicated Hosts to launch Amazon EC2 instances and provide additional visibility and control over how EC2 instances are placed on a physical server, enabling you to use the same physical server, if needed.
Configure the DuploCloud Portal to allow for the creation of Dedicated Hosts.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
In the Config Type field, select Flags.
In the Key field, select Allow Dedicated Host Sharing.
In the Value field, select true.
Click Submit. The configuration is displayed in the System Config tab.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the EC2 tab, click Add. The Add Host page displays.
After completing the required fields to configure your Host, select Advanced Options. The advanced options display.
In the Dedicated Host ID field, enter the ID of the Dedicated Host. The ID is used to launch a specific instance on a Dedicated Host.
Click Add. The Dedicated Host is displayed in the EC2 tab.
After you create Dedicated Hosts, view them by doing the following:
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the EC2 tab, select the Host from the Name column. The Dedicated Host ID card on the Host page displays the ID of the Dedicated Host.
Add rules to custom configure your AWS Security Groups in the DuploCloud Portal
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Select the Infrastructure for which you want to add or view Security Group rules from the Name column.
Click the Security Group Rules tab.
Click Add. The Add Infrastructure Security pane displays.
From the Source Type list box, select Tenant or IP Address.
From the Tenant list box, select the Tenant for which you want to set up the Security Rule.
Select the protocol from the Protocol list box.
In the Port Range field, specify the range of ports for access (for example, 1-65535).
Optionally, add a Description of the rule you are adding.
Click Add.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Select the Infrastructure from the Name column.
Click the Security Group Rules tab. Security Rules are displayed.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Select the Infrastructure from the Name column.
Click the Security Group Rules tab. Security Rules are displayed in rows.
Autoscale your Host workloads in DuploCloud
DuploCloud supports various ways to scale Host workloads, depending on the underlying AWS services being used.
Create Autoscaling groups to scale EC2 instances to your workload
Configure Autoscaling Groups (ASG) to ensure the application load is scaled based on the number of EC2 instances configured. Autoscaling detects unhealthy instances and launches new EC2 instances. ASG is also cost-effective as EC2 Instances are dynamically created per the application requirement within minimum and maximum count limits.
The Use for Cluster Autoscaling option will not be available until you enable the Cluster Autoscaler for the Infrastructure.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the ASG tab, click Add. The Add ASG page is displayed.
In the Friendly Name field, enter the name of the ASG.
Select Availability Zone and Instance Type.
In the Instance Count field, enter the desired capacity for the Autoscaling group.
In the Minimum Instances field, enter the minimum number of instances. The Autoscaling group ensures that the total number of instances is always greater than or equal to the minimum number of instances.
In the Maximum Instances field, enter the maximum number of instances. The Autoscaling group ensures that the total number of instances is always less than or equal to the maximum number of instances.
Select Use for Cluster Autoscaling.
Select Advanced Options.
Select the appropriate Image ID.
From the Agent Platform list box, select Linux Docker/Native to run a Docker service or select EKS Linux to run services using EKS. Fill in additional fields as needed for your ASG.
Optionally, enable .
Optionally, for EKS only, enable .
Click Add. Your ASG is added and displayed in the ASG tab.
View the Hosts created as part of ASG creation from the ASG Hosts tab.
Create Autoscaling Groups (ASG) with Spot Instances in the DuploCloud platform
Spot Instances are spare capacity priced at a significant discount compared to On-Demand Instances. Users specify the maximum price (bid) they will pay per hour for a Spot Instance. The instance is launched if the current Spot price is below the user's bid. Since Spot Instances can be interrupted when spare capacity is unavailable, applications using Spot Instances must be fault-tolerant and able to handle interruptions.
Spot Instances are only supported for Autoscaling Groups (ASG) with EKS.
Follow the steps in the section Create Autoscaling groups to scale EC2 instances to your workload. Before clicking Add, select Advanced Options. Enable Use Spot Instances and enter your bid, in dollars, in the Maximum Spot Price field.
Tolerations will be entered by default in the Add Service page, Advanced Options, Other Container Config field.
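The toleration DuploCloud enters is managed for you; for reference, it takes the standard Kubernetes form sketched below. The key name is illustrative and may differ in your environment:

```yaml
tolerations:
  - key: "spotInstance"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
```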
For additional information about the EKS version upgrade process with DuploCloud, see the Upgrade the Elastic Kubernetes Service (EKS) version for AWS topic.
If no Image ID is available with a prefix of EKS, copy the AMI ID for the desired EKS version from the AWS documentation for EKS-optimized AMIs. Select Other from the Image ID list box and paste the copied AMI ID into the Other Image ID field. Contact the DuploCloud Support team via your Slack channel if you have questions or issues.
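One way to look up the current EKS-optimized AMI for a given Kubernetes version is the public SSM parameter that AWS maintains. The version and region below are examples; substitute your Infrastructure's values:

```bash
# Retrieve the recommended EKS-optimized Amazon Linux 2 AMI ID for Kubernetes 1.29
aws ssm get-parameter \
  --name /aws/service/eks/optimized-ami/1.29/amazon-linux-2/recommended/image_id \
  --region us-west-2 \
  --query 'Parameter.Value' --output text
```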
If you add custom code for EC2 or ASG Hosts using the Base64 Data field, your custom code overrides the code needed to start the EC2 or ASG Hosts and the Hosts cannot connect to EKS. Instead, add custom code directly in EKS.
In the first column of the Security Group row, click the Options Menu icon and select Delete.
Refer to the AWS documentation for detailed steps on creating Scaling policies for the Autoscaling Group.
The DuploCloud Portal provides the ability to configure Services based on the platforms EKS Linux and Linux Docker/Native. Select the ASG based on the platform used when creating Services and Autoscaling Groups. Optionally, if you previously enabled Spot Instances for the ASG, you can configure the Service to use Spot Instances by selecting Tolerate spot instances.
Follow the steps for creating a Service. On the Add Service page, under Basic Options, select Tolerate spot instances.
Discover tainted EC2 hosts in the DuploCloud Console
Taints can be issued by Kubernetes when a Node becomes unreachable or is not tolerated by certain workloads. Just as Kubernetes can initiate Taints, you can as well; for example, to isolate a node for maintenance, such as an upgrade, use the kubectl taint command.
In the DuploCloud Portal, Taints are displayed in the Status column on the EC2 Hosts page.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the EC2 tab, check for hosts with a Status of stopped and tainted. If these statuses are present, the connection to the underlying Node is lost and you should take appropriate action to restore the connection. See the Kubernetes kubectl reference documentation for available commands, flags, and examples to resolve the Taint.
To find Tainted Nodes, use the kubectl get nodes command, followed by the kubectl describe node <NODE_NAME> command. See this topic to get Shell Access to Kubernetes within the DuploCloud Portal and issue kubectl console commands from the Portal.
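For example, assuming shell access with kubectl configured against the cluster (the node name and taint key below are placeholders):

```bash
# List nodes, then inspect the taints on a specific node
kubectl get nodes
kubectl describe node ip-10-220-1-23.us-west-2.compute.internal | grep -A 3 Taints

# Remove a manually applied taint (the trailing "-" removes it)
kubectl taint nodes ip-10-220-1-23.us-west-2.compute.internal maintenance=true:NoSchedule-
```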
Scale to or from zero when creating Autoscaling Groups in DuploCloud
DuploCloud allows you to scale to or from zero in Amazon EKS clusters by enabling the Scale from Zero option within the Advanced Options when creating an Autoscaling Group. This feature intelligently adjusts the number of instances in your cluster, dynamically scaling up when demand increases and down to zero when resources are not in use. Reducing resource allocation during idle periods leads to significant cost savings.
Autoscaling to zero is ideal for Kubernetes workloads that don’t always require 100% availability such as:
Non-Critical Workloads: Batch processing jobs, data analysis tasks, and other non-customer-facing services that can be scaled down to zero during off-peak hours (e.g., nights or weekends).
Dev/Test Environments: Development and testing environments that can be scaled up when developers need them and scaled down when not in use.
Background Jobs: Workloads with background jobs running in Kubernetes that are only needed intermittently, such as those triggered by specific events or scheduled at certain times.
Autoscaling to zero is not suitable for all workloads. Avoid using this feature for:
Customer-Facing Applications: Frontend web applications that must always be available should not use autoscaling to zero, as it can cause downtime and negatively impact user experience.
Workloads Outside Kubernetes: If background jobs or other processes are not running in Kubernetes, autoscaling to zero will not apply. Different scaling strategies are required for these environments.
Scaling to or from zero with AWS Autoscaling Groups (ASG) offers several advantages depending on the context and requirements of your application:
Cost Savings: By scaling down to zero instances during periods of low demand, you minimize costs associated with running and maintaining instances. This pay-as-you-go model ensures you only pay for resources when they are actively being used.
Resource Efficiency: Scaling to zero ensures that resources are not wasted during periods of low demand. By terminating instances when they are not needed, you optimize resource utilization and prevent over-provisioning, leading to improved efficiency and reduced infrastructure costs.
Flexibility: Scaling to zero provides the flexibility to dynamically adjust your infrastructure in response to changes in workload. It allows you to efficiently allocate resources based on demand, ensuring that your application can scale up or down seamlessly to meet varying levels of traffic.
Simplified Management: With automatic scaling to zero, you can streamline management tasks associated with provisioning and de-provisioning instances. The ASG handles scaling operations automatically, reducing the need for manual intervention and simplifying infrastructure management.
Rapid Response to Increased Demand: Scaling from zero allows your infrastructure to quickly respond to spikes in traffic or sudden increases in workload. By automatically launching instances as needed, you ensure that your application can handle surges in demand without experiencing performance degradation or downtime.
Improved Availability: Scaling from zero helps maintain optimal availability and performance for your application by ensuring that sufficient resources are available to handle incoming requests. This proactive approach to scaling helps prevent resource constraints and ensures a consistent user experience even during peak usage periods.
Enhanced Scalability: Scaling from zero enables your infrastructure to scale out horizontally, adding additional instances as demand grows. This horizontal scalability allows you to seamlessly handle increases in workload and accommodate a growing user base without experiencing bottlenecks or performance issues.
Elasticity: Scaling from zero provides elasticity to your infrastructure, allowing it to expand and contract based on demand. This elasticity ensures that you can efficiently allocate resources to match changing workload patterns, resulting in optimal resource utilization and cost efficiency.
The DuploCloud platform includes an option for centralized metrics for Docker containers and virtual machines, as well as various cloud services such as ELB, RDS, ElastiCache, ECS, and Kafka. These metrics are displayed through Grafana, which is embedded in the DuploCloud UI. Like central logging, metrics are not turned on by default but can be set up with a single click.
Action | Description
--- | ---
SSH | Connect to the Host using SSH by Session ID; the connection opens in a new browser tab.
Connection Details | View connection details (connection type, address, user name, visibility) and download the key.
Host Details | View Host details in the Host Details YAML screen.
Create AMI | Create an AMI based on the Host.
Create Snapshot | Create a snapshot of the Host at a specific point.
Update User Data | Update the Host user data.
Change Instance Size | Resize a Host instance to accommodate the workload.
Update Auto Reboot Status Check | Enable or disable Auto Reboot. Set the number of minutes after the AWS Instance Status Check fails before automatically rebooting.
Start | Start the Host.
Reboot | Reboot the Host.
Stop | Stop the Host.
Hibernate | Hibernate (temporarily freeze) the Host.
Terminate Host | Terminate the Host.
Automatically reboot a host upon StatusCheck faults or Host disconnection
Configure hosts to be rebooted automatically if the following occurs:
EC2 Status Check - Applicable for Docker Native and EKS Nodes. The Host is rebooted in the specified interval when a StatusCheck fault is identified.
Kubernetes (K8s) Nodes are disconnected: Applicable for EKS Nodes only. The Host is rebooted in the specified interval when a Host Disconnected fault is identified.
You can configure host Auto Reboot features for a particular Tenant and for a Host.
When you configure an Auto Reboot feature for both Tenant and Host, the Host level configuration takes precedence over the configuration at the Tenant level.
Use the following procedures to configure Auto Reboot at the Tenant level.
Configure the Auto Reboot feature at the Tenant level for Docker Native and EKS Node-based Hosts, to reboot when a StatusCheck fault is identified.
In the DuploCloud Portal, navigate to Administrator -> Tenant. The Tenant page displays.
Select a Tenant with access to the Host for which you want to configure Auto Reboot.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Enable Auto Reboot EC2 status check.
In the field below the Select Feature list box, enter the time interval in minutes after which the host automatically reboots after a StatusCheck fault is identified. Enter zero (0) to disable this configuration.
Click Add. The configuration is displayed in the Settings tab.
Configure the Auto Reboot feature at the Tenant level for EKS node-based Hosts, to reboot when a Host Disconnected fault is identified.
In the DuploCloud Portal, navigate to Administrator -> Tenant. The Tenant page displays.
Select a Tenant with access to the Host for which you want to configure Auto Reboot.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Enable Auto Reboot K8s Nodes if disconnected.
In the field below the Select Feature list box, enter the time interval in minutes after which the host automatically reboots when a Host Disconnected fault is identified. Enter zero (0) to disable this configuration.
Click Add. The configuration is displayed in the Settings tab.
Use the following procedures to configure Auto Reboot at the Host level.
Configure the Auto Reboot feature on the Host level for Docker Native and EKS Node-based Hosts, to reboot when a StatusCheck fault is identified.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
Click the appropriate tab for your Host type and select the Host for which you want to configure Auto Reboot.
Click the Actions menu and select Host Settings -> Update Auto Reboot Status Check. The Set Auto Reboot Status Check Time pane displays.
In the Auto Reboot Status Check field, enter the time interval in minutes after which the host automatically reboots after a StatusCheck fault is identified. Enter zero (0) to disable this configuration.
Click Set.
Configure the Auto Reboot feature on the Host level for EKS node-based Hosts, to reboot when a Host Disconnected fault is identified.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
Click the appropriate tab for your Host type and select the Host for which you want to configure Auto Reboot.
Click the Actions menu and select Host Settings -> Update Auto Reboot Disconnected. The Set Auto Reboot Status Check Time pane displays.
In the Auto Reboot Time field, enter the time interval in minutes after which the host automatically reboots when a Host Disconnected fault is identified. Enter zero (0) to disable this configuration.
Click Set.
ECS Autoscaling has the ability to scale the desired count of tasks for the ECS Service configured in your infrastructure. Average CPU/Memory metrics of your tasks are used to increase/decrease the desired count value.
Navigate to Cloud Services -> ECS. Select the ECS Task Definition for which Autoscaling needs to be enabled, and click Add Scaling Target.
Set the MinCapacity (minimum value 2) and MaxCapacity to complete the configuration.
Once Autoscaling for Targets is configured, add a Scaling Policy.
Provide the following details:
Policy Name - The name of the scaling policy.
Policy Dimension - The metric type tracked by the target tracking scaling policy. Select from the dropdown.
Target Value - The target value for the metric.
ScaleIn Cooldown - The amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start.
ScaleOut Cooldown - The amount of time, in seconds, after a scale-out activity completes before another scale-out activity can start.
Disable ScaleIn - Disabling scale-in ensures this target tracking scaling policy will never be used to scale in the Autoscaling group.
This step creates the target tracking scaling policy and attaches it to the Autoscaling group.
View the Scaling Target and Policy details from the DuploCloud Portal. Update and Delete operations are also supported from this view.
Deploy Hosts in one Tenant that can be accessed by Kubernetes (K8s) Pods in a separate Tenant.
You can enable shared Hosts in the DuploCloud Portal. First, configure one Tenant to allow K8s Pods from other Tenants to run on its Host(s). Then, configure another Tenant to run its K8s Pods on Hosts in other Tenants. This allows you to break Tenant boundaries for greater flexibility.
In the DuploCloud Portal, navigate to Administrator -> Tenant.
From the Tenant list, select the name of the Tenant to which the Host is defined.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature item list, select Allow hosts to run K8S pods from other tenants.
Select Enable.
Click Add. This Tenant's hosts can now run Pods from other Tenants.
In the DuploCloud Portal, navigate to Administrator -> Tenant.
From the Tenant list, select the name of the Tenant that will access the other Tenant's Host (the Tenant not associated with a Host).
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature item list, select Enable option to run K8S pods on any host.
Select Enable.
Click Add. This Tenant can now run Pods on other Tenant's Hosts.
From the Tenant list box at the top of the DuploCloud Portal, select the name of the Tenant that will run K8s Pods on the shared Host.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
In the Services tab, click Add. The Add Service window displays.
Fill in the Service Name, Cloud, Platform, and Docker Image fields. Click Next.
In the Advanced Options window, from the Run on Any Host item list, select Yes.
Click Create. A Service running on the shared Host is created.
Autoscale your DuploCloud Kubernetes deployment
Before autoscaling can be configured for your Kubernetes service, make sure that:
An Autoscaling Group (ASG) is set up in the DuploCloud Tenant.
The Cluster Autoscaler is enabled for your DuploCloud Infrastructure.
Horizontal Pod Autoscaler (HPA) automatically scales the Deployment and its ReplicaSet. HPA checks the metrics configured in regular intervals and then scales the replicas up or down accordingly.
You can configure HPA while creating a Deployment Service from the DuploCloud Portal.
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
Create a new Service by clicking Add.
In Add Service - Basic Options, from the Replication Strategy list box, select Horizontal Pod Scheduler.
In the Horizontal Pod Autoscaler Config field, add a sample configuration, as shown below. Update the minimum/maximum Replica Count in the resource attributes, based on your requirements.
Click Next to navigate to Advanced Options.
In Advanced Options, in the Other Container Config field, ensure your resource attributes, such as Limits and Requests, are set to work with your HPA configuration, as in the example below.
At the bottom of the Advanced Options page, click Create.
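The Horizontal Pod Autoscaler Config referenced in the steps above follows the Kubernetes HorizontalPodAutoscalerSpec. A minimal sketch, with illustrative replica counts and target utilization:

```yaml
minReplicas: 2
maxReplicas: 5
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

For the HPA to compute utilization, the containers must declare resource requests. A sketch of the resources section for the Other Container Config field, again with illustrative values:

```yaml
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
```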
For HPA-configured Services, the Replica count is set to Auto in the DuploCloud Portal. When your services are running, Replicas: Auto is displayed on the Service page.
If a Kubernetes Service is running with a Horizontal Pod AutoScaler (HPA), you cannot stop the Service by clicking Stop in the service's Actions menu in the DuploCloud Portal.
Instead, do the following to stop the service from running:
In the DuploCloud Portal, navigate to Kubernetes -> Containers and select the Service you want to stop.
From the Actions menu, select Edit.
From the Replication Strategy list box, select Static Count.
In the Replicas field, enter 0 (zero).
Click Next to navigate to the Advanced Options page.
Click Update to update the service.
When the Cluster Autoscaler flag is set and a Tenant has one or more ASGs, an unschedulable-pod alert will be delayed by five (5) minutes to allow for autoscaling. You can configure the Infrastructure settings to bypass the delay and send the alerts in real-time.
From the DuploCloud portal, navigate to Administrator -> Infrastructure.
Click on the Infrastructure you want to configure settings for in the Name list.
Select the Settings tab.
Click the Add button. The Infra - Set Custom Data pane displays.
In the Setting Name list box, select Enables faults prior to autoscaling Kubernetes nodes.
Set the Enable toggle switch to enable the setting.
Click Set. DuploCloud will now generate faults for unschedulable K8s nodes immediately (before autoscaling).
Backup your hosts (VMs)
Create Virtual Machine (VM) snapshots in the DuploCloud Portal.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
From the Name column, select the Host you want to backup.
Click Actions and select Snapshot.
Once you take a VM Snapshot, the snapshot displays as an available Image ID when you create a Host.
Save resources by hibernating EC2 hosts while maintaining persistence
When you hibernate an instance, Amazon EC2 signals the operating system to perform hibernation (suspend-to-disk). Hibernation saves the contents from the instance memory (RAM) to your Amazon Elastic Block Store (Amazon EBS) root volume. Amazon EC2 persists the instance's EBS root volume and any attached EBS data volumes.
For more information on Hibernation, see the AWS Documentation.
Before you can hibernate an EC2 Host in DuploCloud, you must configure the EC2 host at launch to use the Hibernation feature in AWS.
Follow the steps in the AWS documentation before attempting Hibernation of EC2 Host instances with DuploCloud.
After you configure your EC2 hosts for Hibernation in AWS:
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the EC2 tab, select the Host you want to Hibernate.
Click the Actions menu, and select Hibernate Host. A confirmation message displays.
Click Confirm. On the EC2 tab, the host's status displays as hibernated.
Add and view AMIs in AWS
You can create Amazon Machine Images (AMIs) in the DuploCloud Portal. Unlike EC2 Dedicated Hosts, which are fully dedicated physical servers for launching EC2 instances, AMIs are templates that contain the information required to launch an instance, such as an operating system, application software, and data. EC2 is used for creating a virtual server instance; an AMI is the EC2 virtual machine image.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
Select the Host on which you want to base your AMI from the Name column.
Click the Actions menu and select Host Settings -> Create AMI. The Set AMI pane displays.
In the AMI Name field, enter the name of the AMI.
Click Create.
In the DuploCloud Portal, navigate to Cloud Services -> Hosts. The Hosts page displays.
Select the AMI tab. Your AMIs are displayed on the AMI page. Selecting an AMI from this page displays the Overview and Details tabs for more information.
You can disable host creation by non-administrators (Users) for custom AMIs by configuring the option in DuploCloud.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
In the Config Type list box, select Flags.
In the Key list box, select Disable Host Creation with Custom AMI.
In the Value list box, select true.
Click Submit.
When this setting is configured, the Other option in the Image ID list box on the Add Host page is disabled, preventing Hosts with custom AMIs from being created.
Set up features for auditing and view auditing reports and logs
The DuploCloud Portal provides a comprehensive audit trail, including reports and logs, for security and compliance purposes. Using the Show Audit Records for list box, you can display real-time audit data for:
Auth (Authentications)
Admin (Administrators)
Tenants (DuploCloud Tenants)
Compliance (such as HIPAA, SOC 2, and HITRUST, among others)
Kat-Kit (DuploCloud's CI/CD Tool)
In the DuploCloud Portal, navigate to Administrator -> Observability -> Settings, and select the Audit tab. The Audit page displays.
Click the Enable Audit link.
To view complete auditing reports and logs, navigate to the Observability -> Audit page in the DuploCloud Portal.
You can create an S3 bucket for auditing in another account, other than the DuploCloud Master Account.
Verify that the S3 bucket exists in another account, and note the bucket name. In this example, we assume a BUCKET_REGION of us-west-2 and a BUCKET name of audit-s2-bucket-another-account.
Ensure that your S3 bucket grants the Duplo Master role permission to perform S3:PutObject. Refer to the code snippet below for an example.
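A bucket policy along the following lines grants the required permission. The account ID and role name are placeholders; substitute the ARN of your Duplo Master role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDuploMasterAuditWrites",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/DuploMasterRole"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::audit-s2-bucket-another-account/*"
    }
  ]
}
```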
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Continuing the example above, configure the S3BUCKET_REGION.
Click Add. The Add Config pane displays.
From the Config Type list box, select AppConfig.
In the Key list box, enter DUPLO_AUDIT_S3BUCKET_REGION.
In the Value field, enter us-west-2.
Click Submit.
Continuing the example above, configure the S3BUCKET name.
Click Add. The Add Config pane displays.
From the Config Type list box, select AppConfig.
In the Key list box, enter DUPLO_AUDIT_S3BUCKET.
In the Value field, enter audit-s2-bucket-another-account.
Click Submit.
Your S3 bucket region and name configurations are displayed in the System Config tab. View details on the Audit page in the DuploCloud Portal.
Contact your DuploCloud Support team if you have additional questions or issues.
Disable CloudFormation's SourceDestCheck in EC2 Host metadata
The AWS CloudFormation template contains a Source Destination Check (SourceDestCheck parameter) that ensures an EC2 Host instance is either the source or the destination of any traffic the instance receives. In the DuploCloud Portal, this parameter is set to true by default, enabling source and destination checks.
There are times when you may want to override this default behavior, such as when an EC2 instance runs services such as network address translation, routing, or firewalls. To override the default behavior and set the SourceDestCheck parameter to false, use this procedure.
Disable SourceDestCheck in the DuploCloud Portal
Set AWS CloudFormation SourceDestCheck to false for an EC2 Host:
In the DuploCloud Portal, navigate to Cloud Services -> Hosts.
In the EC2 tab, select the Host for which you want to disable SourceDestCheck.
Click the Metadata tab.
Click Add. The Add Metadata pane displays.
In the Key field, enter SourceDestCheck.
In the Value field, enter False.
Click Create. The Key/Value pair is displayed in the Metadata tab.
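To verify the attribute on the underlying instance, you can query it with the AWS CLI. The instance ID below is a placeholder:

```bash
# Confirm that source/destination checking is now disabled on the instance
aws ec2 describe-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --attribute sourceDestCheck
```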
Change configuration for the Control Plane, customize Platform Services
There are several use cases for customized log collection. The central logging stack is deployed within your environment, as with any other application, streamlining the customization process.
The version of OpenSearch, the EC2 host size, and the control plane configuration are all deployed based on the configuration you define in the Service Description. Use this procedure to customize the Service Description according to your requirements.
You must make Service Description changes before you enable central logging. If central logging is enabled, you cannot edit the description using the Service Description window.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
In the Service Description tab, in the Name column, select duplo_svd_logging_opensearch. The Service Description window displays.
Edit the YAML in the Service Description window as needed.
Click Update when the configuration is complete to close the window and save your changes.
You can update the Control Plane configuration by editing the Service Description. If the control plane is already deployed using the Service Description specification, then updating the description is similar to making a change to any application.
Note that Control Plane Components are deployed in the DuploCloud Default Tenant. Using the Default Tenant, you can change instance size, Docker images, and more.
You can update the log retention period using the OpenSearch native dashboard by completing the following steps.
From the DuploCloud portal, navigate to Administrator -> Observability -> Logging.
Click Open New Tab to access the OpenSearch dashboard.
From the menu (pancake icon), navigate to Index Management -> State management policies.
Edit the filebeat policy and update the retention period (see the sketch after these steps).
For more information see the OpenSearch documentation.
The new retention period settings will only apply to logs generated after the retention period was updated. Older logs will still be deleted according to the previous retention period settings.
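Retention is enforced by the state management (ISM) policy applied to the filebeat indices. The delete condition you edit looks roughly like the sketch below; the 30d age is illustrative, and the full policy in your deployment will contain additional states and actions:

```json
{
  "policy": {
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ],
        "transitions": []
      }
    ]
  }
}
```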
You can modify Elastic Filebeat logging configurations, including mounting folders other than /var/lib/docker for applications that write logs to destinations other than stdout.
You need to customize the log collection before enabling logging for a Tenant.
If logging is enabled, you can update the Filebeat configuration for each tenant by editing the Filebeat Service Description (see the procedure in Defining Control Plane Configuration).
Alternately, delete the Filebeat collector from the Tenant and the platform automatically redeploys based on the newest configuration.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Select the Platform Services tab.
Click the Edit Platform Services button. The Platform Services window displays. Select the appropriate Filebeat service. For native container management, select filebeat; for Kubernetes container management, select filebeat-k8s.
Edit the YAML in the Platform Services window as needed.
Click Update to close the window and save your changes.
With DuploCloud, you have the choice to deploy third-party tools such as Datadog, Sumo Logic, and so on. To do this, deploy Docker containers that act as collectors and agents for these tools. Deploy and use these third-party app containers as you would any other container in DuploCloud.
Display logs for the DuploCloud Portal, components, services, and containers
The central logging dashboard displays detailed logs for Service and Tenant. The dashboard uses Kibana and preset filters that you can modify.
In the DuploCloud Portal, navigate to Observability -> Logging.
Select the Tenant from the Tenant list box at the top of the DuploCloud Portal.
Select the Service from the Select Service list box.
Modify the DQL to customize Tenant selection, if needed (see the example after these steps).
Adjust the date range by clicking Show dates.
Add filters, if needed.
DuploCloud pre-filters logs per Tenant. All DuploCloud logs are stored in a single index. You can see any Tenant or combination of Tenants (using the DQL option) but the central logging control plane is shared, with no per-Tenant access.
Confirm that your Hosts and Services are running or runnable to view relevant log data.
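As a hypothetical example of customizing the DQL from the steps above, a filter that limits results to two Tenants might look like the following. The field name depends on the index mapping in your deployment:

```
kubernetes.namespace : "duploservices-dev01" or kubernetes.namespace : "duploservices-dev02"
```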
Metrics setup comprises two parts:
Control Plane: This comprises a Grafana service for dashboards and a Prometheus container for fetching VM and container metrics. Cloud service metrics are pulled directly by Grafana from AWS without requiring Prometheus.
To enable Metrics, navigate to Administrator -> Observability -> Settings. Select the Monitoring tab and click Enable Monitoring.
Metrics Collector: Once the Metrics control plane is ready (that is, the Grafana and Prometheus services have been deployed and are active), you can enable Metrics on a per-Tenant basis. Navigate to Administrator -> Observability -> Settings, select the Monitoring tab, and use the toggle buttons to enable monitoring for individual Tenants. This triggers the deployment of Node Exporter and cAdvisor containers on each Host in the Tenant, similar to how log collectors such as Filebeat are deployed for fetching central logs and sending them to OpenSearch.
Monitoring Kubernetes status with the K8s Admin dashboard
Use the K8s Admin dashboard to monitor various statistics and statuses for Kubernetes, including the number and availability of StatefulSets defined for a service.
In the DuploCloud Portal, select Administrator -> Observability -> Metrics.
Click the k8s tab. The K8s Admin dashboard displays.
Logging for AWS in the DuploCloud Platform
The DuploCloud Platform performs centralized logging for container-based applications. For the built-in (native) and Kubernetes container orchestrations, this is implemented using OpenSearch and Kibana, with Filebeat as the log collector. For ECS Fargate, AWS Lambda, and AWS SageMaker Jobs, the platform integrates with CloudWatch, automatically setting up Log Groups and making them viewable from the DuploCloud Portal.
No setup is required to enable logging for ECS Fargate, Lambda, or AWS SageMaker Jobs. DuploCloud automatically sets up CloudWatch log groups and provides a menu next to each resource.
To maintain optimal performance and cost-efficiency, it's crucial to manage logging resources effectively. If you find yourself with unnecessary monitoring hosts or logging instances, specific steps should be taken to clean them up without affecting essential services.
To terminate unnecessary monitoring hosts in DuploCloud, it's recommended that a designated user, referred to as Person 0, performs the termination. This approach ensures that essential services, such as Prometheus, are not inadvertently removed, which could lead to loss of data or configurations.
Cleaning up a logging instance involves several steps, starting with remote access into DuploMaster. From there, navigate to the appropriate directories to edit and delete specific files related to the unintended tenant. This includes removing entries from the logging_config.json
and deleting tenant-specific JSON files. Additionally, tenant services related to OpenSearch, Kibana, and Elastic Filebeat need to be deleted, followed by the termination of the oc-diagnostics
host. It's also necessary to remove specific entries from the DuploCloud portal related to reverse proxy settings and platform services.
When a host or a Load Balancer (LB) is no longer required, consider stopping or deleting them as part of cost optimization measures. Before taking such actions, ensure they do not contain or support essential services that could impact your infrastructure's operation.
By following these guidelines, you can ensure that your logging resources in DuploCloud are managed efficiently, contributing to both operational effectiveness and cost savings.
Under Observability -> Metrics, you can view the various metrics per Tenant.
While there are 8-10 out-of-the-box dashboards for various services, you can add your own dashboards and make them appear in the DuploCloud Dashboard through a configuration.
Establish an to work directly in the AWS Console.
View connection details (connection type, address, user name, visibility) and .
Set the .
Create a snapshot of the Host at a specific point in time.
Enable or disable Auto Reboot. Set the number of minutes after the AWS Instance Status Check fails before automatically rebooting.
Hibernate (temporarily freeze) the Host.
To remove or edit an Auto Reboot Tenant-level configuration, click the () icon and select Edit Setting or Remove Setting.
See for information on displaying logs per container.
Enable setting of SNS Topic Alerts for specific Tenants
SNS Topic Alerts provide a flexible and scalable means of sending notifications and alerts across different AWS services and external endpoints, allowing you to stay informed about important events and incidents happening in your AWS environment.
SNS is a fully managed service that enables you to publish messages to topics. The messages can be delivered to subscribers or endpoints, such as email, SMS, mobile push notifications, or even HTTP endpoints.
SNS Alerts can only be configured for the specific resources included under Observability -> Alerts in the DuploCloud Portal. Integrating external monitoring programs (e.g., Sentry) allows you to view all of the faults for a particular Tenant under Observability -> Faults.
Configuring this setting will attach the SNS Topic to the alerts in the OK and Alarm state.
In the DuploCloud Portal, navigate to Administrator -> Tenants. The Tenants page displays.
Select the Tenant for which you want to set SNS Topic Alerts from the NAME column.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Set SNS Topic Alerts.
In the field below the Select Feature list box, enter a valid SNS Topic ARN (see the example format after these steps).
Click Add. The configuration is displayed in the Settings tab.
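The value is a standard SNS Topic ARN; for example (the region, account ID, and topic name below are hypothetical):

arn:aws:sns:us-east-1:123456789012:duplo-tenant-alerts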
Faults that occur in the system, whether from Infrastructure creation, container deployments, application health checks, or any triggered alarms, can be tracked in the DuploCloud Portal under the Faults menu.
You can look at Tenant-specific faults under Observability -> Faults or all the faults in the system under Administrator -> Faults.
You can set AWS Alerts for individual metrics.
From the DuploCloud portal, navigate to Observability -> Alerts and click Add. The Create Alert pane displays.
Enter the Resource Type and select the resource from the Resource type list box. Click Next.
Fill in the necessary information and click Create. The Alert is created.
View general alerts from the DuploCloud Portal in the Observability -> Alerts.
Select the Alerts tab for alerts pertaining to a specific resource, such as Hosts.
Access specific resources in the AWS Console using the DuploCloud Portal
Use Just-In-Time (JIT) to launch the AWS console and work with a specific Tenant configuration, or to obtain Administrator privileges.
DuploCloud users have AWS Console access for advanced configuration of S3 Buckets, DynamoDB databases, SQS queues, SNS Topics, Kinesis streams, and API Gateway resources that are created in DuploCloud. The ELB and EC2 areas of the console are not supported.
Using the DuploCloud Portal, click on the Console link in the title bar of the AWS resource you created in DuploCloud, as in the example for S3 Bucket, below.
Clicking the Console link launches the AWS console and gives you access to the resource, with permissions scoped to the current Tenant.
Using the Console link, you don't need to set up permissions to create new resources in the AWS Console. You can perform any operations on resources that are created with DuploCloud.
For example, you can create an S3 bucket from the DuploCloud UI and then launch the AWS Console with the Console link to remove files, set up static web hosting, and so on. Similarly, you can create a DynamoDB table in DuploCloud and use the AWS Console to add and remove entries in the table.
Set up logging for the DuploCloud Portal
If you need to make changes to the Control Plane Configuration, follow this procedure before enabling logging. Note that you cannot modify the Control Plane Configuration after you set up logging.
Docker applications write their logs to stdout; DuploCloud collects these logs, places them in the Host directory, mounts them into Filebeat containers, and sends them to AWS ElasticSearch. If your applications write logs to locations other than stdout and you need to customize log collection, follow this procedure. Note that you cannot customize log collection after you set up logging.
In the DuploCloud Portal, navigate to Administrator -> Observability -> Settings -> Logging.
From the Tenant list box at the top of the DuploCloud Portal, select the Default Tenant.
Click the Create Logging link. The Enable Logging page displays.
Use the Enable Logging page to deploy logging for the Control Plane, which uses OpenSearch and Kibana to retrieve and display log data for the Default Tenant. In the Cert ARN field, enter the ARN certificate for the Default Tenant. Find the ARN by selecting the Default Tenant from the Tenant list box at the top of the DuploCloud Portal; navigating to Administrator -> Plans; selecting the Plan that matches your Infrastructure Name; and clicking the Certificates tab.
Click Submit. Data gathering takes about fifteen (15) minutes. When data gathering is complete, graphical logging data is displayed in the Logging tab.
After logging has been enabled for the Control Plane, finish the logging setup by enabling the Log Collector to collect logs per Tenant. This feature is especially useful for Tenants that are spread across multiple regions. In the DuploCloud Portal, navigate to Administrator -> Observability -> Settings -> Logging.
In the Logging tab, on the Logging Infrastructure Tenants page, click Add.
Select the Tenants for which you want to configure logging, using the Select Tenants to enable logging area, as in the example below. The Control Plane configuration is deployed for each Tenant that you select in the Infrastructure, specified in Infrastructure Details.
The Log Collector uses Elastic Filebeat containers that are deployed within each Tenant.
When you enable a Tenant for logging, the Filebeat service starts up and begins log collection. View the Filebeat containers by navigating to Kubernetes -> Containers in the DuploCloud Portal. In the row of the container for which you want to view the logs, click on the menu icon and select Logs.
When you perform the steps above to configure logging, DuploCloud does the following:
An EC2 Host is added in the Default tenant, for example, duploservices-default-oc-diagnostics.
Services are added in the Default tenant, one for OpenSearch and one for Kibana. Both services are pinned to the EC2 host using allocation tags. Kibana is set up to point to ElasticSearch and exposed using an internal load balancer.
Security rules from within the internal network to port 443 are added in the Default Tenant to allow log collectors that run on Tenant hosts to send logs to ElasticSearch.
A Filebeat service (filebeat-duploinfrasvc)
is deployed for each Tenant where central logging is enabled.
The /var/lib/docker/containers directory is mounted from the Host into the Filebeat container. The Filebeat container references ElasticSearch, which runs in the Default Tenant. Inside the container, Filebeat is configured so that every log line is tagged with metadata consisting of the Tenant name, Service name, Container ID, and Hostname, enabling easy search on these parameters in ElasticSearch.
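For reference, the behavior described above is roughly equivalent to a Filebeat configuration like the following sketch (paths, field names, and the endpoint are illustrative; the actual configuration is managed through the Platform Services YAML described earlier):

```yaml
filebeat.inputs:
  - type: container
    paths:
      - /var/lib/docker/containers/*/*.log    # mounted from the Host

processors:
  - add_fields:                               # per-line metadata for easy search
      target: duplo
      fields:
        tenant: "<TENANT_NAME>"
        service: "<SERVICE_NAME>"

output.elasticsearch:
  hosts: ["https://<opensearch-endpoint>:443"]   # runs in the Default Tenant
```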
Make changes to fault settings by adding Flags under Systems Settings in the DuploCloud portal
If there is a Target Group with no instances/targets, DuploCloud generates a fault. You can configure DuploCloud's Systems Settings to ignore Target Groups with no instances.
From the DuploCloud portal, navigate to Administrator -> Systems Settings.
Select the System Config tab.
Click Add. The Add Config pane displays.
For ConfigType, select Other.
In the Other Config Type field, type Flags.
In the Key field, enter IgnoreTargetGroupWithNoInstances.
In the Value field, enter True.
Click Submit. The Flag is set and DuploCloud will not generate faults for Target Groups without instances.
Enable and view alert notifications in the DuploCloud Portal
DuploCloud supports viewing of Faults in the portal and sending notifications and emails to the following systems:
Sentry
PagerDuty
NewRelic
OpsGenie
You will need to generate an API key from each of these vendor systems and then provide that key to DuploCloud to enable the integration.
In the Sentry website, navigate to Projects -> Create a New Project.
Click Settings -> Projects -> project-name -> Client keys. The Client Keys page displays.
Complete the DSN fields on the screen.
Click Generate New Key.
In the DuploCloud Portal, navigate to Observability -> Faults.
Click Update Notifications Config. The Set Alert Notifications Config pane displays.
In the Sentry - DSN field, enter the key you received from Sentry.
In the Alerts Frequency (Seconds) field, enter a time interval in seconds when you want alerts to be displayed.
Click Update.
In the PagerDuty website home page, select the Services tab and navigate to the service that receives Events. If a Service does not exist, click New Service. When prompted, enter a friendly Name (for example, your DuploCloud Tenant name) and click Next.
Assign an Escalation policy, or use an existing policy.
Click Integration.
Click Events API V2. Your generated Integration Key is displayed as the second item on the right side of the page. This is the Routing Key you will supply to DuploCloud.
Copy the Integration Key to your Clipboard.
In the DuploCloud Portal, navigate to Observability -> Faults.
Click Update Notifications Config. The Set Alert Notifications Config pane displays.
In the Pager Duty - Routing Key field, enter the key you generated from PagerDuty.
In the Alerts Frequency (Seconds) field, enter a time interval in seconds when you want alerts to be displayed.
Click Update.
In the DuploCloud Portal, navigate to Observability -> Faults.
Click Update Notifications Config. The Set Alert Notifications Config pane displays.
In the NewRelic - API Key field, enter the key you generated from NewRelic.
In the Alerts Frequency (Seconds) field, enter a time interval in seconds when you want alerts to be displayed.
Click Update.
In the OpsGenie website, generate an API Key to integrate DuploCloud faults with OpsGenie.
In the DuploCloud Portal, navigate to Observability -> Faults.
Click Update Notifications Config. The Set Alert Notifications Config pane displays.
In the OpsGenie - API Key field, enter the key you generated from OpsGenie.
In the Alerts Frequency (Seconds) field, enter a time interval in seconds when you want alerts to be displayed.
Click Update.
Fix faults automatically to maintain system health
You can configure Hosts to auto-reboot and heal faults automatically, either at the Tenant level, or the Host level. See the topic for more information.
Use DuploCloud-JIT access to interact with the AWS Console and resources
DuploCloud-JIT (Just-In-Time) offers temporary access to the AWS Console to quickly and easily interact with your AWS resources. With DuploCloud-JIT, you can perform necessary tasks without relying on long-lived credentials, simplifying access while maintaining strict security controls.
Use DuploCloud-JIT for tasks that require short-term access to AWS resources, such as:
One-Time JIT Tasks: Accessing AWS resources like S3 Buckets or DynamoDB for one-time tasks.
Automated Scripts with Short-Lived Access: Running scripts or CI/CD pipeline tasks that need limited-time access, such as deploying applications or running tests.
Ad-Hoc Troubleshooting: Troubleshooting issues or urgent maintenance that require immediate authentication.
Dynamic Access for Temporary Services: Securely authenticating and interacting with services that are needed for a limited time.
Interactive Sessions: Providing users access to AWS Console for specific tasks without the complexity of permanent credentials.
You can obtain DuploCloud JIT access to the AWS Console through the DuploCloud UI, or by using the command-line tools duplo-jit or duplo-ctl.
Access AWS Console using the Console link from your user profile page, or a specific resource page. To access the AWS Console from a specific resource page, see the AWS Console link.
To access the AWS Console from your user profile page, follow these steps:
In the DuploCloud Portal, navigate to Administrator -> Users.
Click the username in the upper right corner, and select Profile.
Click the JIT AWS Console button. A browser opens, giving you access to AWS Console.
From the JIT AWS Console list box, you can also select Copy AWS Console URL, Temporary AWS Credentials, or AWS access from my Workstation.
duplo-jit
or duplo-ctl
To gain JIT AWS Console access through a CLI, install duplo-jit
and duplo-ctl
, obtain credentials, and access the AWS Console.
DuploCloud-JIT CLI access is based on user permissions configured in the DuploCloud Portal. For instance, if you have Administrator permissions in DuploCloud, you can gain admin-level JIT access. If you are a User, your JIT access will be restricted to the resources and functionalities your DuploCloud permissions permit.
duplo-jit
Install duplo-jit
with Homebrew, or from GitHub releases:
Install duplo-jit with Homebrew
Run the following command:
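A typical installation looks like the following (the tap name is an assumption based on the duplocloud GitHub organization; verify it against the duplo-jit README):

```sh
brew install duplocloud/tap/duplo-jit
```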
Install duplo-jit from GitHub Releases
Download the latest .zip archive for your operating system from https://github.com/duplocloud/duplo-jit/releases.
Extract the archive listed in the table below based on the operating system and processor you are running.
Add the path to duplo-jit
to your $PATH
environment variable.
Obtain credentials using an API token, or interactively:
Obtain an API token. While you can create a temporary or permanent API token, a permanent token is recommended.
Edit the ~/.aws/config
file, and add the following profile, as shown in the code snippet below:
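A minimal profile sketch is shown below (the profile name, region, portal URL, and the aws subcommand name are placeholders/assumptions; adjust them to your environment):

```ini
[profile <ENV_NAME>]
region = us-east-1
credential_process = duplo-jit aws --admin --host "https://<ENV_NAME>.duplocloud.net" --token <DUPLO_TOKEN>
```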
To obtain credentials interactively, rather than with a token, replace --token <DUPLO_TOKEN>
in the argument above with --interactive
.
When you make the first AWS call, you are prompted to grant authorization through the DuploCloud portal, as shown below.
Upon successful authorization, a JIT token is provided. This token is valid for one (1) hour. When the token expires, you are prompted to re-authorize the request.
Ensure that the AWS CLI is configured with the profile name that matches the one you used when obtaining credentials. This can be done in the ~/.aws/config
file.
Use the following command, replacing <ENV_NAME> with your actual environment name:
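For example, assuming the profile defined above (any AWS CLI command scoped with --profile works the same way):

```sh
aws ec2 describe-instances --profile <ENV_NAME>
```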
This command will list your EC2 instances in the specified environment.
Run one of the following commands to copy an AWS Console URL link to your clipboard. You can use the link in any browser.
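For example (a hedged sketch: the ConsoleUrl output field and the jq/clipboard piping are assumptions about duplo-jit's JSON output; check duplo-jit --help for the exact shape):

```sh
# macOS: copy the JIT console URL to the clipboard
duplo-jit aws --admin --interactive --host "https://<ENV_NAME>.duplocloud.net" | jq -r '.ConsoleUrl' | pbcopy

# Linux: same, using xclip
duplo-jit aws --admin --interactive --host "https://<ENV_NAME>.duplocloud.net" | jq -r '.ConsoleUrl' | xclip -selection clipboard
```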
All of these examples assume Administrator access. If you are obtaining JIT access for a User role, replace the --admin
flag in the commands with --tenant <YOUR_TENANT>
. For example, if your tenant's name is dev01
, you would use --tenant dev01
. Tenants are lower-case at the CLI.
zsh shell
Add the following to your .zshrc file:
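A minimal sketch of such a helper, assuming the same duplo-jit invocation as above (the function body is illustrative, not a canonical snippet):

```sh
# Open a JIT AWS Console session for the given portal name (illustrative sketch)
jitnow() {
  duplo-jit aws --admin --interactive \
    --host "https://$1.duplocloud.net" \
    | jq -r '.ConsoleUrl' \
    | xargs open   # 'open' is macOS-specific; use xdg-open on Linux
}
```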
Usage: jitnow <ENV_NAME>
If you are receiving errors when attempting to retrieve credentials, try running the command with the --no-cache
argument.
By default, JIT sessions expire after one (1) hour. You can modify the session timeout setting for a specific Tenant in the DuploCloud Portal.
If you increase the JIT session timeout beyond the AWS default of one (1) hour, you must also increase the maximum session value for the IAM role assigned to your DuploCloud Tenant.
In the DuploCloud Portal, navigate to Administrator -> Tenant.
Select the Tenant name from the NAME column.
Select the Settings tab, and click Add. The Add Tenant Feature pane displays.
Select AWS Access Token Validity from the Select Feature list box.
In the Value field, enter the length of time JIT access should remain active in seconds.
Click Update. The new setting is displayed on the Tenant details page under the Settings tab.
By default, AWS IAM roles have a maximum session duration of one (1) hour. You can modify the maximum session duration for the AWS Master IAM role in the DuploCloud Portal.
From the DuploCloud Portal, navigate to Administrator -> Systems Settings.
Select the System Config tab, and click Add. The Update Config AppConfig pane displays.
From the Config Type list box, select AppConfig.
From the Key list box, select AdminJitSessionDuration.
In the Value field, enter the length of time JIT access should remain active in seconds.
Click Submit. The Admin-JIT session duration is configured.
DuploCloud can automatically generate alerts for resources within a Tenant. This ensures that a defined monitoring baseline is applied to all current and new resources based on a set of rules.
As an Administrator:
From the DuploCloud Portal, navigate to Administrator -> Tenants.
Click the name of your Tenant from the list and select the Alerting tab.
Click Enable Alerting. An alerts template displays. The alerts template contains rules for each AWS namespace and metric to be monitored.
Review the alerts template and adjust the thresholds as needed.
Click Update.
Activating cost allocation tags in DuploCloud AWS
The duplo-project cost allocation tag must be activated after you enable IAM access to billing data. Use the same AWS user and account that you used to enable IAM access to activate cost allocation tags.
To apply and activate cost allocation tags, follow the steps in this document.
After you activate the tag successfully, you should see this screen:
Grant IAM permissions to view billing data in AWS
IAM access permissions must be obtained to view the billing data in AWS.
Follow the steps in this AWS document to obtain access.
To perform these steps, you must be logged in as root in the AWS account that manages cost and billing for the AWS organization.
Displaying Node Usage for billing
DuploCloud calculates license usage by Node for the following categories:
Elastic Compute Cloud
Elastic Container Services
AWS Lambda Functions
Managed Workflows for Apache Airflow
In the DuploCloud portal, navigate to Administrator -> Billing. The Billing page displays.
Click the DuploCloud License Usage tab.
Click More Details in any License Usage card for additional breakdown of Node Usage statistics per Tenant.
Click the DuploCloud license documentation link to download a copy of the license document.
The duplo-jit archives by processor and operating system (referenced in the installation steps above) are:

Processor/Operating System | Archive
---|---
Intel macOS | darwin_amd64.zip
M1 macOS | darwin_arm64.zip
Windows | windows_amd64.zip
Manage costs for resources
The DuploCloud Portal allows you to view and manage resource usage costs. As an administrator, you can view your company's billing data by month, week, or Tenant. You can configure billing alerts, explore historical resource costs, and view DuploCloud license usage information. Non-administrator users can view billing data for Tenants they can access by viewing billing data for a selected Tenant.
To enable the billing feature, you must:
Enable access to billing data in AWS by following the steps in this AWS document.
Apply cost allocation tags so that DuploCloud can retrieve billing data.
Set billing alerts based on the previous month's spending or define a custom threshold. Receive email notifications if the current month's expenses exceed a specified percentage of the threshold.
From the DuploCloud Portal, navigate to Administrator -> Billing, and select the Billing Alerts tab.
Click Add or Edit.
Enable Billing Alerts.
Select a threshold and trigger for the alert and enter the email of the administrator user who will receive the email notifications.
Click Submit. The alert details show on the Billing Alerts tab.
An Administrator can define Quotas for resource allocation in a DuploCloud Plan. Resource allocation can be restricted by specifying the Instance Family and Size, and an Administrator can restrict the total number of allowed resources by setting a Cumulative Count value per resource type.
Once the Quota is defined, DuploCloud users are prevented from creating new resources when the corresponding quota configured in the Plan is reached.
Quotas are controlled at the Instance Family level. For example, if you define a quota for t4g.large, it is enforced against all smaller instance types in the t4g family as well; a quota count of 100 for t4g.large means that instances up to and including that instance type cannot exceed 100 in total.
Displaying Service and Tenant billing data
From the DuploCloud portal, administrators can view account spending details by month, week, and Tenant. Non-administrator users can view billing data for a Tenant they have user access to.
View the billing details for your company's AWS account.
Log in as an administrator, and navigate to Administrator -> Billing.
You can view usage by:
Time
Select the Spend by Month tab and click More Details to display monthly and weekly spending options.
Tenant
Select the Spend by Tenant tab.
You must first enable the billing feature to view or manage usage costs in the DuploCloud Portal.
View billing details for a selected Tenant. This option is accessible to non-administrator users with user access to the selected Tenant.
Select the Tenant name from the Tenant list box.
Navigate to Cloud Services -> Billing. The Billing page displays.
The Spend by Month tab lists the five services with the highest spending for each month for the selected Tenant. Click More Details on any month's card to display more details about that month's spending.
Add custom tags to AWS resources
An Administrator can provide a list of custom tag names that can be applied to AWS resources for any Tenant in a DuploCloud environment.
In the DuploCloud portal, navigate to Administrator -> System Settings -> System Config.
Click Add. The Add Config pane displays.
In the Config Type list box, select App Config.
In the Key list box, select Duplo Managed Tag Keys.
In the Value field, enter the name of the custom tag, for example, cost-center.
Click Submit. In the System Configs area of the System Config tab, your custom tag name is displayed with Type AppConfig and a Key value of DUPLO_CUSTOM_TAGS, as in the example below.
Once the custom tag is added, navigate to Administrator -> Tenants.
Select a Tenant from the Name column.
Click Add.
Click the Tags tab.
In the Key field, enter the name of the custom tag (cost-center in the example) that you added to System Config.
In the Value field, enter an appropriate value. In the Tags tab, the tag Key and Value that you set are displayed, as in the example below.
Use case:
Collection of data from various sources using various methods
Web scraping: Selenium using headless Chrome or Firefox.
Web crawling: static websites using crawling
API-based data collection: REST or GraphQL APIs
Private internal customer data collected over various transactions
Private external customer data collected over secured SFTP
Data purchased from third parties
Data from various social networks
Correlate data from various sources
Clean up and process data, and apply various statistical methods.
Correlate terabytes of data from various sources and make sense of the data.
Detect anomalies, summarize, bucketize, and perform various aggregations.
Attach metadata to enrich the data.
Create data for NLP and ML models for predictions of future events.
AI/ML pipelines and life-cycle management
Make data available to the data science team.
Train models and run continuous-improvement trials and reinforcement learning.
Detect anomalies, bucketize data, summarize, and perform various aggregations.
Train NLP and ML models to predict future events based on history.
Keep a history of models, hyperparameters, and data at various stages.
Deploying an Apache Spark™ cluster
In this tutorial we will create a Spark cluster with a Jupyter notebook. A typical use case is ETL jobs, for example reading Parquet files from S3, processing them, and pushing reports to databases. The aim is to process GBs of data in a faster and more cost-effective way.
The high-level steps are:
Create three VMs: one each for the Spark master, the Spark worker, and the Jupyter notebook.
Deploy Docker images for each of these on the VMs.
From the DuploCloud portal, navigate to Cloud Services -> Hosts -> EC2. Click +Add and check the Advanced Options box. Change the Instance Type to m4.xlarge and add an allocation tag sparkmaster.
Create another Host for the worker. Change the Instance Type to m4.4xlarge, add an allocation tag sparkworker, and click Submit. The number of workers depends on how much load you want to process; add one Host for each worker, all with the same allocation tag sparkworker. You can add and remove workers and scale the Spark worker service up or down as many times as you want, as shown in the following steps.
Create one more Host for the Jupyter notebook. Set the Instance Type to m4.4xlarge and add the allocation tag jupyter.
Navigate to Docker -> Services and click Add. In the Service Name field, enter sparkmaster; in the Docker Image field, enter duplocloud/anyservice:spark_v6; and add the allocation tag sparkmaster. From the Docker Networks list box, select Host Network. Setting this in the Docker Host config makes the container networking the same as the VM's, that is, the container IP is the same as the VM IP.
First, get the IP address of the Spark master: click the Spark master Service, expand the container details on the right, and copy the Host IP. Create another Service: for the name, choose jupyter; for the image, duplocloud/anyservice:spark_notebook_pyspark_scala_v4; add the allocation tag jupyter; select Host Network for the Docker Host Config; add the volume mapping /home/ubuntu/jupyter:/home/jovyan/work; and provide the environment variables, replacing the brackets <> with the IP you just copied. See Figure 5.
Create another Service named sparkworker1, with the image duplocloud/anyservice:spark_v7, add the allocation tag sparkworker, and select Host Network for the Docker Network. Also provide the environment variables:
{"node": "worker", "masterip": "<>"}
Replace the brackets <>
with the IP you just got. See Figure 5.
Under replicas, use the same number as the worker Hosts you created; that is how you scale up and down. At any time, you can add new Hosts with the allocation tag sparkworker, then edit the sparkworker Service under Services and update the replicas.
Add or update shell access by clicking the >_ icon. This gives you easy access to the container shell. You may need to wait about 5 minutes for the shell to be ready. Make sure you are connected to the VPN if you choose to launch the shell as internal only.
Select the Jupyter Service and expand the container. Copy the Host IP and then click the >_ icon.
Once you are inside the shell, run the command jupyter notebook list to get the URL along with the auth token. Replace the IP in that URL with the Jupyter Host IP you copied previously. See Figure 5.
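The output looks roughly like the following (the token and notebook path are placeholders); substitute the Jupyter Host IP for 0.0.0.0 when opening the URL:

```sh
$ jupyter notebook list
Currently running servers:
http://0.0.0.0:8888/?token=<AUTH_TOKEN> :: /home/jovyan/work
```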
In your browser, navigate to the Jupyter URL and you should be able to see the UI.
Now you can use Jupyter to connect to data sources and destinations and do ETL jobs. Sources and destinations can include various SQL and NoSQL databases, S3 and various reporting tools including big data and GPU-based Deep learning.
In the following, we will create a Jupyter notebook and show some basic web scraping, use Spark for preprocessing, export into a schema, do ETLs, join multiple dataframes (Parquet files), and export reports into MySQL.
Connect to a website and parse HTML (using jsoup)
Extract the downloaded zip. This particular file is 8 GB in size and has 9 million records in CSV format.
Upload the data to AWS S3
Configure the Spark session with the required settings to read and write from AWS S3
Load data in Spark cluster
Define the Spark schema
Do data processing
Setup Spark SQL
Spark SQL joins 20 GB of data from multiple sources
Export reports to RDS for UI consumption. Generate various charts and graphs. A minimal PySpark sketch of this flow is shown below.
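As a hedged illustration of the steps above, a minimal PySpark cell might look like the following (the bucket, endpoints, credentials, schema, and table names are placeholders, and the S3A and MySQL JDBC packages must be available on the cluster; this is a sketch, not the exact notebook used in this tutorial):

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Connect to the Spark master service created earlier
spark = (
    SparkSession.builder
    .appName("etl-report")
    .master("spark://<SPARK_MASTER_IP>:7077")
    .getOrCreate()
)

# Define the schema explicitly instead of inferring it (faster for large CSVs)
schema = StructType([
    StructField("id", StringType()),
    StructField("amount", DoubleType()),
])

# Load the CSV that was uploaded to S3
df = spark.read.csv("s3a://<BUCKET>/records.csv", schema=schema, header=True)

# Run Spark SQL over the data
df.createOrReplaceTempView("records")
report = spark.sql("SELECT id, SUM(amount) AS total FROM records GROUP BY id")

# Export the report to MySQL (RDS) for UI consumption
(report.write.format("jdbc")
    .option("url", "jdbc:mysql://<RDS_ENDPOINT>:3306/reports")
    .option("dbtable", "totals")
    .option("user", "<USER>")
    .option("password", "<PASSWORD>")
    .mode("overwrite")
    .save())
```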