Configuration and Secret management in AWS
There are many ways to pass configuration to containers at run time. Although simple to set up, environment variables can become unwieldy when there are many configuration values, especially files and certificates.
In Kubernetes, you also have the option to populate environment variables from Config Maps or Secrets.
You can use an S3 bucket to store configuration and pass it to containers:
Create an S3 bucket in the Tenant and add the needed configuration to the bucket as a file.
Set the S3 bucket name as an Environment Variable.
Create a start-up script, set as the entry point of the container, that downloads the file from the S3 bucket (referenced by the Environment Variable) into the container.
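A minimal start-up script for this pattern might look like the following sketch. The variable name `CONFIG_S3_BUCKET` and the file path are illustrative assumptions, not DuploCloud conventions:

```shell
#!/bin/sh
# Illustrative container entry point: download config from S3, then start the app.
# Assumes CONFIG_S3_BUCKET is set as an Environment Variable on the Service and
# that the instance profile DuploCloud attaches to the host grants s3:GetObject.
set -e

fetch_config() {
  # The aws CLI must be present in the container image.
  aws s3 cp "s3://$1/app-config.json" "$2/app-config.json"
}

if [ "$#" -gt 0 ]; then
  : "${CONFIG_S3_BUCKET:?CONFIG_S3_BUCKET must be set}"
  fetch_config "$CONFIG_S3_BUCKET" /app/config
  exec "$@"    # hand off to the container's main process (passed as arguments)
fi
```

Set this script as the image's `ENTRYPOINT` so the configuration is in place before the application starts.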
Similar to using an S3 bucket, you can create values in an SSM Parameter Store (navigate to Cloud Services -> App Integration, and select the SSM Parameters tab) and set the Name of the parameter in an Environment Variable. A startup script then uses the AWS CLI to pull values from SSM and set them for the application in the container, either as an Environment Variable or as a file.
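Such a startup script could be sketched as follows; the variable name `APP_PARAM_NAME` and the output path are assumptions for illustration:

```shell
#!/bin/sh
# Sketch: pull a value from SSM Parameter Store at container start-up.
set -e

get_param() {
  # --with-decryption also covers SecureString parameters.
  aws ssm get-parameter --name "$1" --with-decryption \
      --query 'Parameter.Value' --output text
}

if [ "$#" -gt 0 ]; then
  DB_PASSWORD="$(get_param "$APP_PARAM_NAME")"
  export DB_PASSWORD                                      # expose as an env var...
  get_param "$APP_PARAM_NAME" > /app/config/db-password   # ...or write to a file
  exec "$@"                                               # start the application
fi
```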
Use AWS Secrets Manager to set configs and secrets in Environment Variables. A container startup script uses the AWS CLI to copy secrets and set them in the appropriate format in the container.
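A sketch of such a script; `SECRET_ID` and the target path are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: copy a secret from AWS Secrets Manager into the container at start-up.
set -e

get_secret() {
  aws secretsmanager get-secret-value --secret-id "$1" \
      --query 'SecretString' --output text
}

if [ "$#" -gt 0 ]; then
  # Write the secret in whatever format the application expects.
  get_secret "$SECRET_ID" > /app/config/credentials.json
  exec "$@"
fi
```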
Use the ECS Task Definition Secrets fields to set the configuration. For example, X_SERVICE_TOKEN is the Secret name defined in the JSON, and VALUE_FROM is the ARN of the AWS secret.
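In ECS task definition JSON, this mapping appears in the container definition's secrets array. The sketch below uses placeholder names and a placeholder ARN:

```json
{
  "containerDefinitions": [
    {
      "name": "app",
      "secrets": [
        {
          "name": "X_SERVICE_TOKEN",
          "valueFrom": "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-token"
        }
      ]
    }
  ]
}
```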
See the Kubernetes Configs and Secrets section.
Using containers and DuploCloud Services with AWS EKS and ECS
Containers and Services are critical elements of deploying AWS applications in the DuploCloud platform. Containers refer to Docker containers: lightweight, standalone packages that contain everything needed to run an application including the code, runtime, system tools, libraries, and settings. Services in DuploCloud are microservices defined by a name, Docker image, and a number of replicas. They can be configured with various optional parameters and are mapped to Kubernetes deployment sets or StatefulSets, depending on whether they have stateful volumes.
DuploCloud supports three container orchestration technologies to deploy containerized applications in AWS: Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), and Native Docker containers in virtual machines (VMs). Each option provides benefits and challenges depending on your needs and requirements.
Amazon Elastic Container Service (ECS) is a fully managed service that uses its own orchestration engine to manage and deploy Docker containers. It is quite easy to use, integrates well with other AWS services, and is optimized for running containers in the AWS ecosystem. The tradeoff for this simplicity is that ECS is not as flexible or versatile as EKS and is less portable outside the AWS ecosystem.
Amazon Elastic Kubernetes Service (EKS) is a managed service that uses the open-source container orchestration platform Kubernetes. The learning curve is steeper for EKS than ECS, as users must navigate the complexities of Kubernetes. However, EKS users benefit from the excellent flexibility that Kubernetes’ wide range of tools, features, solutions, and portability provides.
Docker is the foundational containerization technology. It is not managed, so the user manually controls the containers and orchestration. Although Docker requires considerably more user input than ECS or EKS, it offers greater control over the VM infrastructure, strong isolation between applications, and supreme portability.
When you create a service, refer to the registry configuration in Docker -> Services | Kubernetes -> Services | Cloud Services -> ECS -> Services. Select the Service from the NAME column and select the Configuration tab. Note the values in the Environment Variables and Other Docker Config fields.
For example:
{"DOCKER_REGISTRY_CREDENTIALS_NAME":"registry1"}
Adding a Service in the DuploCloud Platform is not the same as adding a Kubernetes service. When you deploy DuploCloud Services, the platform implicitly converts your DuploCloud Service into either a deployment set or a StatefulSet. The service is mapped to a deployment set if there are no volume mappings. Otherwise, it is mapped to a StatefulSet, which you can force creation of if needed. Most configuration values are self-explanatory, such as Images, Replicas, and Environmental Variables.
Kubernetes clusters are created during Infrastructure setup using the Administrator -> Infrastructure option in the DuploCloud Portal. The cluster is created in the same Virtual Private Cloud (VPC) as the Infrastructure. Building an Infrastructure with an EKS/ECS cluster may take some time.
Next, you deploy an application within a Tenant in Kubernetes. The application contains a set of VMs, a Deployment set (Pods), and an application load balancer. Pods can be deployed either through the DuploCloud Portal or through kubectl, using Helm charts.
Managing AWS services and related components
Applications typically involve many AWS services, such as S3 for object storage, RDS for relational databases (SQL), Redis, Kafka, SQS, SNS, and Elasticsearch. While each service's configuration needs only a few application-centric inputs, there are scores of lower-level nuances around access control, security, and compliance, among others.
Using DuploCloud, you can create virtually any service within the Tenant using basic app-centric inputs, while the platform ensures the lower-level details follow best practices for security and compliance.
Every service within the Tenant is automatically reachable by any application running within that Tenant. If you need to expose a service from one Tenant to another, see Allow Cross-tenant Access.
DuploCloud adds new AWS services to the platform almost weekly. If a service is not documented here, please contact the DuploCloud team; even if the feature is not currently available, the team can typically enable it in a matter of days.
Supported Services are listed in alphabetical order, following the core services: Containers, Load Balancers, and Storage.
Managing Containers and Services with ECS
Using the Services tab in the DuploCloud Portal (navigate to Cloud Services -> ECS and select the Services tab), you can display and manage the Services you have defined.
For ECS Services, select the Service Name and click the Actions menu to Edit or Delete Services, in addition to performing other actions, as shown below.
You can display and manage the Containers you have defined in the DuploCloud portal. Navigate to Kubernetes -> Containers.
You can create up to five (5) containers for ECS services by defining a Task Definition.
In the DuploCloud Portal, navigate to Cloud Services -> ECS.
In the Task Definitions tab, click Add. The Add Task Definition page displays.
Specify a unique Name for the Task Definition.
From the vCPUs list box, select the number of CPUs to be consumed by the task and change other defaults, if needed.
In the Container - 1 area, specify the Container Name of the first container you want to create.
In the Image field, specify the container Image name, as in the example above.
Specify Port Mappings, and Add New mappings or Delete them, if needed.
Click Submit. Your Task Definition for multiple ECS Service containers is created.
To edit the created Task Definition in order to add or delete multiple containers, select the Task Definition in the Task Definitions tab, and from the Actions menu, select Edit Task Definition.
In AWS ECS, an essential container is a key component of a task definition. An essential container is one that must remain running for the task to be considered healthy. If an essential container fails or stops for any reason, the entire task is marked as failed. Essential containers are commonly used to run the main application or service within the task.
By designating containers as essential or non-essential, you define the dependencies and relationships between the containers in your task definition. This allows ECS to properly manage and monitor the overall health and lifecycle of the task, ensuring that the essential containers are always running and healthy.
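In task definition JSON, this is expressed with the essential flag on each container definition. The sketch below uses placeholder names, images, and ports:

```json
{
  "family": "my-task",
  "cpu": "256",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "portMappings": [ { "containerPort": 80, "hostPort": 80 } ]
    },
    {
      "name": "sidecar",
      "image": "busybox:latest",
      "essential": false
    }
  ]
}
```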
Fargate is a technology that you can use with ECS to run containers without having to manage servers or clusters of EC2 instances.
Managing Containers and Services with EKS and Native Docker Services
Using the Services tab in the DuploCloud Portal (Kubernetes -> Services), you can display and manage the Services you have defined.
In the DuploCloud Portal, navigate to Kubernetes -> Services for an EKS Service.
Click Add. The Basic Options section of the Add Service page displays.
Complete the fields on the page, including Service Name, Docker Image name, and number of Replicas. Use Allocation Tags to deploy the container in a specific set of hosts.
To force the creation of Kubernetes StatefulSets, select Yes in the Force StatefulSets field.
Click Next. The Advanced Options section of the Add Service page displays.
Click Create. The Service is created.
Do not use spaces when creating Service or Docker image names.
The number of Replicas you define must be less than or equal to the number of hosts in the fleet.
Using the Services page, you can start, stop, and restart multiple services simultaneously.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
Use the checkbox column to select multiple services you want to start or stop at once.
From the Service Actions menu, select Start Service, Stop Service, or Restart Service.
Your selected services are started, stopped, or restarted as you specified.
Using the Import Kubernetes Deployment pane, you can add a Service to an existing Kubernetes namespace using Kubernetes YAML.
In the DuploCloud Portal, select Kubernetes -> Services from the navigation pane.
Click Add. The Add Service page displays.
Click the Import Kubernetes Deployment button in the upper right. The Import Kubernetes Deployment pane displays.
Paste the deployment YAML code, as in the example below, into the Import Kubernetes Deployment pane.
Click Import.
In the Add Service page, click Next.
Click Create. Your Native Kubernetes Service is created.
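A minimal Kubernetes Deployment manifest such as the following sketch can be pasted into the Import Kubernetes Deployment pane; the names and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25
          ports:
            - containerPort: 80
```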
You can supply advanced configuration options with EKS in the DuploCloud Portal in several ways, including the advanced use cases in this section.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
From the Config Type list box, select Flags.
From the Key list box, select Block Master VPC CIDR Allow in EKS SG.
From the Value list box, select True.
Click Submit. The setting is displayed as BlockMasterVpcCidrAllowInEksSg in the System Config tab.
You can display and manage the Containers you have defined in the DuploCloud portal. Navigate to Kubernetes -> Containers.
DuploCloud provides you with a Just-In-Time (JIT) security token, valid for fifteen minutes, to access the Kubernetes cluster with kubectl.
In the DuploCloud Portal, select Administrator -> Infrastructure from the navigation pane.
Select the Infrastructure in the Name column.
Click the EKS tab.
Copy the temporary Token and the Server Endpoint (Kubernetes URL) Values from the Infrastructure that you created. You can also download the complete configuration by clicking the Download Kube Config button.
Run the following commands, in a local Bash shell instance:
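The commands are the standard kubectl configuration steps. In the sketch below, the context name duplo-eks and the endpoint/token values are placeholders; substitute the values copied from the EKS tab:

```shell
# Sketch: point kubectl at the cluster using the copied Server Endpoint and token.
configure_kubectl() {
  endpoint="$1"; token="$2"
  kubectl config set-cluster duplo-eks --server="$endpoint"
  kubectl config set-credentials duplo-user --token="$token"
  kubectl config set-context duplo-eks --cluster=duplo-eks --user=duplo-user
  kubectl config use-context duplo-eks
}
# Example: configure_kubectl "https://EXAMPLE.eks.amazonaws.com" "$TOKEN"
```

Alternatively, if you downloaded the complete configuration with the Download Kube Config button, point the KUBECONFIG environment variable at the downloaded file.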
You have now configured kubectl to access the Kubernetes cluster. You can apply deployment templates by running kubectl apply -f with your template file.
If you need security tokens of a longer duration, create them on your own. Secure them outside of the DuploCloud environment.
For an end-to-end example of creating an ECS Task Definition, Service, and Load Balancer, see the AWS Quick Start Tutorial.
Use the Options Menu in each Container row to display Logs, State, Container Shell, Host Shell, and Delete options.
To designate a container as Essential, see the Essential container procedure in this section.
Click the Plus Icon to the left of the Primary label, which designates that the first container you are defining is the primary container. The Container - 2 area displays.
Use the collapse and expand icons to collapse and expand the Container areas as needed. Specify the Container Name and Image name for each container that you add. Add more containers by clicking the Add Icon in each container area, to create up to five (5) containers. Delete containers by clicking the Delete ( X ) Icon in each container area.
To designate a container as Essential, follow the procedure to create your containers, but before creating the container you want to designate as Essential, in the Container definition, select the Essential Container option, as in the example below.
For more information about Fargate, see the AWS Fargate documentation.
Next, create the ECS Service from your Task Definition and define an associated Load Balancer to expose your application on the network.
For an end-to-end example of creating an EKS Service, see the AWS Quick Start Tutorial.
For a Native Docker Services example, see the AWS Quick Start Tutorial.
For EKS Services, select the Service Name and click the Actions menu to Edit or Delete Services, in addition to performing other actions.
Configure advanced options as needed. For example, you can implement Pod Toleration by adding the YAML to the Other Container Config field (optional).
Once the deployment commands run successfully, navigate to Administrator -> Tenants. Select the Tenant from the NAME column. Your deployments are displayed, and you can now attach Load Balancers for the Services.
Use the Options Menu ( ) in each Container row to display Logs, State, Container Shell, Host Shell, and Delete options.
Option | Functionality |
---|
See the DuploCloud Kubernetes documentation for examples of specifying K8s YAML for Pod Toleration.
| Option | Functionality |
| --- | --- |
| Logs | Displays container logs. |
| State | Displays container state configuration, in YAML code, in a separate window. |
| Container Shell | Accesses the Container Shell. To use this option, you must first set up Shell access. |
| Host Shell | Accesses the Host Shell. |
| Delete | Deletes the container. |
| Option | Functionality |
| --- | --- |
| Logs | Displays container logs. When you select this option, the Container Logs window displays. Use the Follow Logs option (enabled by default) to monitor logging in real time for a running container. |
| State | Displays container state configuration, in YAML code, in a separate window. |
| Container Shell | Accesses the Container Shell. To use this option, you must first set up Shell access. |
| Host Shell | Accesses the Host Shell. |
| Delete | Deletes the container. |
Set Docker registry credentials
In the DuploCloud Portal, navigate to Docker -> Services. Docker registry credentials are passed to the Kubernetes cluster as kubernetes.io/dockerconfigjson.
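For reference, a secret of that type has the following shape in Kubernetes; DuploCloud creates it for you from the credentials you supply, and the name and data below are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry1                     # placeholder credential name
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config JSON, e.g. {"auths":{"registry.example.com":{...}}}
  .dockerconfigjson: eyJhdXRocyI6e319
```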
From the Docker list box, select Docker Credentials. The Set Docker registry Creds pane displays.
Supply the credentials and click Submit.
Enable the Docker Shell Service by selecting Enable Docker Shell from the Docker list box.
You can pull images from multiple Docker registries by adding multiple Docker Registry Credentials.
In the DuploCloud Portal, click Administrator-> Plan. The Plans page displays.
Select the Plan in the Name column.
Click the Config tab.
Click Add. The Add Config pane displays.
You can pass Docker Credentials using the Environment Variables config field in the Add Service Basic Options page. See the Kubernetes Configs and Secrets section for details.
Creating Load Balancers for single and multiple DuploCloud Services
DuploCloud provides the ability to configure Load Balancers of the following types:
Application Load Balancer - An ALB provides outbound connections to cluster nodes inside the EKS virtual network, translating the private IP address to a public IP address as part of its Outbound Pool.
Network Load Balancer - An NLB distributes traffic across several servers by using the TCP/IP networking protocol. By combining two or more computers that are running applications into a single virtual cluster, NLB provides reliability and performance for web servers and other mission-critical servers.
Classic Load Balancer - The legacy AWS Load Balancer (which was retired from AWS support, as of August 2022).
Load Balancers can be configured for Docker Native, EKS-Enabled, and ECS Services from the DuploCloud Portal. Using the Portal, you can configure:
Service Load Balancers - Application Load Balancers specific to one service. (Navigate to Docker -> Services or Kubernetes -> Services, select a Service from the list, and click the Load Balancer tab).
Shared and Global load balancers - Application or Network Load Balancers that can be used as a shared Load Balancer between Services and for Global Server Load Balancing (GSLB). (Navigate to Cloud Services -> Networking and select the Load Balancers tab).
DuploCloud allows one Load Balancer per DuploCloud Service. To share a load balancer between multiple Services, create a Service Load Balancer of type Target Group Only.
See the following pages for specific information on adding Load Balancer Listeners for Docker Native, EKS, and ECS Services.
To specify a custom classless inter-domain routing (CIDR) value for an NLB Load Balancer, edit the Load Balancer Listener configuration in the DuploCloud Portal.
Before completing this task, you must add a Load Balancer Listener of Type Network LB.
In the DuploCloud Portal, navigate to Docker -> Services or Kubernetes -> Services.
Select the Service name from the NAME column.
Click the Load Balancers tab.
Click Add in the Custom CIDR field of the Edit Load Balancer Listener pane.
Add the Custom CIDR(s) and press ENTER. For example, 10.180.12.0/22 and 10.180.8.0/22 could be added. After the CIDRs are added, add Security Groups for the Custom CIDR(s).
Repeat this procedure for each custom CIDR that you want to add.
Navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the appropriate Infrastructure.
Click the Security Group Rules tab.
Click Add to add a Security Group. The Add Tenant Security pane displays.
From the Source Type list box, select Ip Address.
From the IP CIDR list box, select Custom. A field labeled CIDR notation of allowed hosts displays.
In the CIDR Notation of allowed hosts field enter a custom CIDR and complete the other required fields.
Click Add to add the Security Group containing the custom CIDR.
Repeat this procedure to add additional CIDRs.
In the DuploCloud Portal, navigate to Cloud Services -> Networking.
Click the Load Balancer tab.
Click Add. The Create a Load Balancer pane displays.
In the Name field, enter a name for the Load Balancer.
From the Type list box, select a Load Balancer type.
From the Visibility list box, select Public or Internal.
Click Create.
Instead of creating a unique Load Balancer for each Service you create, you can share a single Load Balancer between multiple Services. This is helpful when your applications run distributed microservices where the requests use multiple services and route traffic based on application URLs, which you can define with Load Balancer Listener Rules.
To accomplish this, you:
Create a Service Load Balancer with the type Target Group Only. This step creates a Service Load Balancer that includes a Target Group with a pre-defined name.
Create a Shared Load Balancer with the Target Group that was defined.
Create routing rules for the Shared Load Balancer and the Target Group it defines.
In the DuploCloud Portal, navigate to Docker -> Services or Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
If no Load Balancers exist, click the Configure Load Balancer link. If other Load Balancers exist, click Add in the LB listeners card. The Add Load Balancer Listener pane displays.
From the Select Type list box, select Target Group Only.
You can create a Load Balancer Listener with a type of Target Group Only for Docker or EKS and ECS Services based on your application requirement. Complete the other required fields and click Add.
The Target Group Only Service Load Balancer is displayed in the LB Listeners area in the Load Balancers tab on the Services page.
Add a Shared Load Balancer before performing this procedure.
In the Load Balancer tab of the Cloud Services -> Networking page, select the Shared Load Balancer you created. The Load Balancer page with the Listeners tab displays.
In the Listeners tab, click Add. The Load Balancer Listener pane displays.
Complete all fields, specifying the Target Group that was created when you added a Load Balancer with the Type Target Group Only in the previous step.
Click Save. The Shared Load Balancer for the Target Group displays in the Listeners tab.
Create a Shared Load Balancer for the Target Group before performing this procedure.
Rules are not supported for Network Load Balancers (NLBs).
Click Add. The Add LB Listener rule page displays.
Create routing rules for the Target Group by setting appropriate Conditions. Add Routing Rules by specifying Rule Type, Values, and Forward Target Group. Forward Target Group lists all the Target Groups created for Docker Native, K8s, and ECS Services. Specify Priority for multiple rules. Use the X button to delete specific Values.
Click Submit.
View the rules you defined for any Shared Load Balancer.
In the DuploCloud portal, navigate to Cloud Services -> Networking.
Select the Load Balancer tab.
From the Name column, select the Load Balancer whose rules you want to view.
Update attributes for your defined Target Group.
In the DuploCloud portal, navigate to Cloud Services -> Networking.
Select the Load Balancer tab.
From the Name column, select the Load Balancer whose defined Target Group attributes you want to modify.
You can use the Other Settings card in the DuploCloud Portal to set the following features:
WAF Web ACL
Enable HTTP to HTTPS redirects
Enable Access Logging
Set Idle Timeout
Drop invalid headers
In the DuploCloud Portal, navigate to Docker -> Services or Kubernetes -> Services. The Services page displays.
Select the Service to which your Load Balancer is attached from the Name column.
Click the Load Balancers tab.
In the Other Settings card, click Edit. The Other Load Balancer Settings pane displays.
In the Other Load Balancer Settings pane, select any or all options.
Click Save.
Working with Load Balancers using AWS EKS
If you need to create an Ingress Load Balancer, refer to the EKS Ingress page in the DuploCloud Kubernetes User Guide.
For an end-to-end example of deploying an application using an EKS Service, see the AWS Quick Start Tutorial and choose the Creating an EKS Service option.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
If no Load Balancers exist, click the Configure Load Balancer link. If other Load Balancers exist, click Add in the LB listeners card. The Add Load Balancer Listener pane displays.
From the Select Type list box, select a Load Balancer Listener type based on your Load Balancer.
Complete other fields as required and click Add to add the Load Balancer Listener.
To specify a custom classless inter-domain routing (CIDR) value for an NLB Load Balancer, edit the Load Balancer Listener configuration in the DuploCloud Portal.
Before completing this task, you must add a Load Balancer Listener of Type Network LB.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
Click Add in the Custom CIDR field of the Edit Load Balancer Listener pane.
Add the Custom CIDR(s) and press ENTER. For example, 10.180.12.0/22 and 10.180.8.0/22 could be added. After the CIDRs are added, add Security Groups for the Custom CIDR(s).
Repeat this procedure for each custom CIDR that you want to add.
Navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the appropriate Infrastructure.
Click the Security Group Rules tab.
Click Add to add a Security Group. The Add Tenant Security pane displays.
From the Source Type list box, select Ip Address.
From the IP CIDR list box, select Custom. A field labeled CIDR notation of allowed hosts displays.
In the CIDR Notation of allowed hosts field enter a custom CIDR and complete the other required fields.
Click Add to add the Security Group containing the custom CIDR.
Repeat this procedure to add additional CIDRs.
In the DuploCloud Portal, navigate to Cloud Services -> Networking.
Click the Load Balancer tab.
Click Add. The Create a Load Balancer pane displays.
In the Name field, enter a name for the Load Balancer.
From the Type list box, select a Load Balancer type.
From the Visibility list box, select Public or Internal.
Click Create.
Instead of creating a unique Load Balancer for each Service you create, you can share a single Load Balancer between multiple Services. This is helpful when your applications run distributed microservices where the requests use multiple services and route traffic based on application URLs, which you can define with Load Balancer Listener Rules.
To accomplish this, you:
Create a Service Load Balancer with the type Target Group Only. This step creates a Service Load Balancer that includes a Target Group with a pre-defined name.
Create a Shared Load Balancer with the Target Group that was defined.
Create routing rules for the Shared Load Balancer and the Target Group it defines.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
On the Services page, select the Service name in the Name column.
Click the Load Balancers tab.
If no Load Balancers exist, click the Configure Load Balancer link. If other Load Balancers exist, click Add in the LB listeners card. The Add Load Balancer Listener pane displays.
From the Select Type list box, select Target Group Only.
You can create a Load Balancer Listener with a type of Target Group Only for Docker Native or EKS and ECS Services, based on your application requirement. Complete the other required fields and click Add.
The Target Group Only Service Load Balancer is displayed in the LB Listeners area in the Load Balancers tab on the Services page.
Add a Shared Load Balancer before performing this procedure.
In the Load Balancer tab of the Cloud Services -> Networking page, select the Shared Load Balancer you created. The Load Balancer page with the Listeners tab displays.
In the Listeners tab, click Add. The Load Balancer Listener pane displays.
Complete all fields, specifying the Target Group that was created when you added a Load Balancer with the Type Target Group Only in the previous step.
Click Save. The Shared Load Balancer for the Target Group displays in the Listeners tab.
Create a Shared Load Balancer for the Target Group before performing this procedure.
Rules are not supported for Network Load Balancers (NLBs).
Click Add. The Add LB Listener rule page displays.
Create routing rules for the Target Group by setting appropriate Conditions. Add Routing Rules by specifying Rule Type, Values, and Forward Target Group. Forward Target Group lists all the Target Groups created for Docker Native, K8s, and ECS Services. Specify Priority for multiple rules. Use the X button to delete specific Values.
Click Submit.
View the rules you defined for any Shared Load Balancer.
In the DuploCloud portal, navigate to Cloud Services -> Networking.
Select the Load Balancer tab.
From the Name column, select the Load Balancer whose rules you want to view.
Update attributes for your defined Target Group.
In the DuploCloud portal, navigate to Cloud Services -> Networking.
Select the Load Balancer tab.
From the Name column, select the Load Balancer whose defined Target Group attributes you want to modify.
The Update Target Group Attributes pane displays.
Find the attribute you want to update in the Attribute column and update the associated value in the Value column.
Click Update to save the changes.
To enable stickiness, complete steps 1-5 for Updating Target Group Attributes above. On the Update Target Group Attributes pane, in the Value field for stickiness.enabled, enter true. Update additional stickiness attributes, if needed. Click Update to save the changes.
You can use the Other Settings card in the DuploCloud Portal to set the following features:
WAF Web ACL
Enable HTTP to HTTPS redirects
Enable Access Logging
Set Idle Timeout
Drop invalid headers
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
Select the Service to which your Load Balancer is attached from the Name column.
Click the Load Balancers tab.
In the Other Settings card, click Edit. The Other Load Balancer Settings pane displays.
In the Other Load Balancer Settings pane, select any or all options.
Click Save.
Set up Storage Classes and PVCs in Kubernetes
Navigate to Kubernetes -> Storage -> Storage Class
Configure the EFS parameter created in Step 1 by clicking EFS Parameter.
Here, you configure Kubernetes to use the Storage Class created in Step 2 to create a Persistent Volume with 10Gi of storage capacity and ReadWriteMany access mode.
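A PersistentVolumeClaim matching that description could be sketched as follows; the claim and Storage Class names are assumptions:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc   # name of the Storage Class created in Step 2 (assumed)
  resources:
    requests:
      storage: 10Gi
```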
Configure the following in Volumes to create your application deployment using this PVC.
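In standard Kubernetes form, the deployment's volume reference to the PVC might be sketched as follows; the volume name, mount path, and claim name are assumptions:

```yaml
spec:
  containers:
    - name: app
      volumeMounts:
        - name: app-data
          mountPath: /data
  volumes:
    - name: app-data
      persistentVolumeClaim:
        claimName: efs-claim
```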
Working with Load Balancers using AWS ECS
Tasks run until an error occurs or a user terminates the Task in the ECS Cluster.
Navigate to Cloud Services -> ECS.
In the Service Details tab, click the Configure ECS Service link. The Add ECS Service page displays.
In the Name field, enter the Service name.
In the LB Listeners area, click Add. The Add Load Balancer Listener pane displays.
From the Select Type list box, select Application LB.
In the Container Port field, enter a container port number.
In the External Port field, enter an external port number.
From the Visibility list box, select an option.
In the Health Check field, enter a path (such as /) to specify the health check endpoint for the Service.
From the Backend Protocol list box, select HTTP.
From the Protocol Policy list box, select HTTP1.
Select other options as needed and click Add.
On the Add ECS Service page, click Submit.
In the Service Details tab, information about the Service and Load Balancer you created is displayed.
Verify that the Service and Load Balancer configuration details in the Service Details tab are correct.
Storage services included in DuploCloud for AWS
DuploCloud AWS Storage Services include:
Working with Load Balancers in a Native Docker Service
In the DuploCloud Portal, navigate to Docker -> Services.
Click the Load Balancers tab.
Click the Configure Load Balancer link. The Add Load Balancer Listener pane displays.
From the Select Type list box, select your Load Balancer type.
Complete other fields as required and click Add to add the Load Balancer Listener.
When the LB Status card displays Ready, your Load Balancer is running and ready for use.
In the LB Listeners area, select the Edit Icon () for the NLB Load Balancer you want to edit. The Edit Load Balancer Listener pane displays.
Note the name of the created Target Group by clicking the Info Icon ( ) for the Load Balancer in the LB Listener card and searching for the string TgName. You will select the Target Group when you create a Shared Load Balancer for the Target Group.
In the Listeners tab, in the Target Group row, click the Actions menu ( ) and select Manage Rules. You can also select Update attributes from the Actions menu, as well, to dynamically update Target Group attributes. The Listener Rules page displays.
In the Listeners tab, in the appropriate Target Group row, click the Actions menu ( ) and select Manage Rules.
In the Listeners tab, in the appropriate Target Group row, click the Actions menu ( ) and select Update attributes.
Refer to steps
Before you create an ECS Service and Load Balancer, you must create a to run the Service. You can define multiple containers in your Task Definition.
For an end-to-end example of deploying an application using an ECS Service, see the and choose the option.
In the Task Definitions tab, select the Task Definition Family Name. This is the prepended by a unique DuploCloud identifier.
You can also easily create and manage Kubernetes and within the DuploCloud Portal.
To create Hosts (Virtual Machines) see the .
For an end-to-end example of deploying an application using a Native Docker Service, see the and choose the option.
Select the Service .
Databases supported by DuploCloud AWS
A number of databases are supported for DuploCloud and AWS. Use the procedures in this section to set them up.
An AWS API Gateway RestApi is created from the DuploCloud Portal, which takes care of creating the security policies that make the API Gateway accessible to other resources (such as Lambda functions) within the Tenant. Creating the RestApi is the only configuration done from within the DuploCloud Portal. All other configuration for the API (such as defining methods and resources and pointing to Lambda functions) should be done in the AWS Console. Reach the API console by navigating to Cloud Services -> Networking, selecting the API Gateway tab, and clicking the Console button under the Actions menu.
The steps below use DuploCloud's API Gateway/Lambda integration to create a web API with an HTTP endpoint for your Lambda function (in this case, it returns a simple "Hello!" response).
The example API deployed is not secure. Anyone on the internet can access the endpoint (in this example, "Hello!"). When creating your own Lambda, you will need to configure CORS, authentication, and other security details.
Create a lambda_function.py with this code:
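The original snippet is not reproduced here; a minimal handler along these lines returns the "Hello!" response (a sketch using the API Gateway proxy-integration response shape):

```python
# lambda_function.py -- minimal sketch of a handler returning "Hello!"
import json

def lambda_handler(event, context):
    # API Gateway proxy integrations expect statusCode, headers, and a
    # string body in the returned dictionary.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps("Hello!"),
    }
```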
For more information about formatting your Lambda response, see the AWS documentation.
Run zip my_deployment_package.zip lambda_function.py
Upload my_deployment_package.zip to an S3 bucket.
Create a Lambda Function in DuploCloud and point it to that Zip with handler lambda_function.lambda_handler.
Create an API Gateway and select the Lambda you just created.
Then select Deploy API from the new gateway created in the AWS Console, and curl the endpoint shown under Stages -> Stage details -> Invoke URL (also in the AWS Console).
Configuring a CloudFront distribution in DuploCloud
The S3 bucket needs to be created and the static assets uploaded to it. Follow the steps in the link below to create the S3 bucket.
Create Cloudfront distribution by navigating to Cloud Services -> Networking and selecting the CloudFront tab. Then click +Add.
Name - Friendly name for the distribution.
Root Object - Default root object returned when accessing the root of the domain (for example, index.html). It should not start with "/".
Certificate - ACM certificate for the distribution. Only certificates in us-east-1 can be used. If one is not already present, create it in AWS and add it to the Plan (Administrator > Plans > Select Tenant Plan > Certificate tab).
Aliases - Domain names used to access the distribution. Multiple domain names can be configured if needed. If the domain name is managed by DuploCloud, CNAME mapping is done automatically; otherwise, add the CNAME mapping manually in the appropriate DNS management console.
Origins - Location information where the actual content is stored. It can be an S3 bucket or any HTTP server endpoint.
Domain Name - Select an S3 bucket, or choose Other and enter a custom endpoint.
ID - A unique identifier for the origin. The UI pre-populates it from the domain name; change it if needed.
Path - Optional. The path is suffixed to the origin's domain name (URL) when fetching content. For S3: if the content to be served is under the prefix static, enter "static" in the path. For a custom URL: if all the APIs have a prefix such as v1, enter "v1" in the path.
Default Cache Behaviors - The default Cache policy and the default origin to fetch content are entered here.
Cache Policy ID - AWS predefined cache policies are listed. Select one, or choose Other and enter a custom cache policy.
Target Origin - Choose the default origin that should be used for the distribution
Custom Cache Behaviors - Additional Cache policies and path patterns to use the custom cache behaviors are entered here.
Cache Policy ID - AWS predefined cache policies are listed. Select one, or choose Other and enter a custom cache policy.
Path Pattern - For requests matching the pattern, this specific origin and cache policy are used. For example, with "api/*", all requests that start with the api prefix are routed to this origin.
Target Origin - Choose the origin that should be used for this custom path.
Note: If the S3 bucket used is part of the same Tenant where the CloudFront distribution is created, DuploCloud creates an Origin Access Identity and updates the bucket policy to allow GetObject for the CloudFront Origin Access Identity. No extra steps are needed on the user's end to manage S3 bucket permissions.
Create the Lambda function in the Tenant by selecting the Edge Lambda checkbox. This creates a Lambda function in us-east-1 along with the necessary permissions.
Create a CloudFront distribution with the necessary values; in addition, for the Lambda@Edge, select the function associations and select the Lambda function.
Note: Versions of the Lambda function are shown, so the same function may appear multiple times (for example, with V1 and V2).
Once the deployment status becomes Deployed, visit the domain name; you should see the invocation of the Lambda function.
The default origin should point to your app URL ui.mysite.com.
Create a new S3 bucket to store maintenance pages. In the bucket, create a prefix/folder called maintpage.
Upload maintenance page assets (.html, .css, .js, etc.) into the S3 bucket inside the maintpage folder.
Add a new S3 origin pointing to the S3 bucket containing the maintenance static assets.
Add a new Custom Cache Behavior using /maintpage/* as the path pattern; the Target Origin should be the S3 maintenance assets origin.
Add a Custom Error Response mapping.
In the error code dropdown, select the HTTP code for which the maintenance page should be served. 502 Bad Gateway is commonly used.
In the Response page path, enter /maintpage/5xx.html. Change 5xx.html to a page that exists in S3.
The HTTP Response code can be either 200 or 502 (the same as the actual source origin response code).
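In CloudFront's API terms, the mapping above corresponds roughly to a CustomErrorResponse entry like the following (the caching TTL value is an assumption):

```json
{
  "ErrorCode": 502,
  "ResponsePagePath": "/maintpage/5xx.html",
  "ResponseCode": "200",
  "ErrorCachingMinTTL": 10
}
```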
Run AWS batch jobs without installing software or servers
You can perform AWS batch job processing directly in the DuploCloud Portal without the additional overhead of installed software, allowing you to focus on analyzing results and diagnosing problems.
Create scheduling policies to define when your batch job runs.
From the DuploCloud Portal, navigate to Cloud Services -> Batch page, and click the Scheduling Policies tab.
Click Add. The Create Batch Scheduling Policy page displays.
In the Create Batch Scheduling Policy page, create batch job scheduling policies using the AWS documentation. The fields in the AWS documentation map to the fields on the DuploCloud Create Batch Scheduling Policy page.
Click Create.
AWS compute environments (Elastic Compute Cloud [EC2] instances) map to DuploCloud Infrastructures. The settings and constraints in the compute environment define how instances are configured and automatically launched.
In the DuploCloud Portal, navigate to Cloud Services -> Batch.
Click the Compute Environments tab.
Click Add. The Add Batch Environment page displays.
In the Compute Environment Name field, enter a unique name for your environment.
In the Type field, select the environment type (On-Demand, Spot, Fargate, etc.).
Modify additional defaults on the page, as needed, or add configuration parameters in Other Configurations.
Click Create. The Compute Environment is created.
After you define job definitions, create queues for your batch jobs to run in. For more information about batch job queues, see the AWS instructions for creating a job queue.
From the DuploCloud Portal, navigate to Cloud Services -> Batch page, and click the Queues tab.
Click Add. The Create Batch Queue page displays.
In the Create Batch Queue page, create batch job queues using the AWS documentation. The fields in the AWS documentation map to the fields on the DuploCloud Create Batch Queue page.
Click Create. The Batch Queue is created.
For Priority, enter a whole number. Job queues with a higher priority are run before those with a lower priority associated with the same compute environment.
Before you can run AWS batch jobs, you need to create job definitions specifying how batch jobs are run.
From the DuploCloud Portal, navigate to Cloud Services -> Batch, and click the Job Definitions tab.
Click Add. The Create Batch Job Definition page displays.
In the Create Batch Job Definition page, define your batch jobs using the AWS documentation. The fields in the AWS documentation map to the fields on the DuploCloud Create Batch Job Definition page.
Click Create. The Batch Job Definition is created.
Add a job for AWS batch processing. See the AWS documentation for more information about batch jobs.
After you configure your compute environment, in the DuploCloud Portal, navigate to Cloud Services -> Batch, and click the Jobs tab.
Click Add. The Add Batch Job page displays.
On the Add Batch Job page, define a Job Name, Job Definition, Job Queue, and Job Properties.
Optionally, if you created a Scheduling Policy to apply to this job, paste the YAML code below into the Other Properties field.
Click Create. The Batch job is created.
As you Create a Batch Job, paste the following YAML code into the Other Properties field on the Add Batch Job page.
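The original snippet is not shown here; its shape is likely along these lines, following the fair-share scheduling fields of the AWS Batch SubmitJob API (field names and values are assumptions, not the exact DuploCloud syntax):

```yaml
# Hypothetical sketch -- field names mirror the AWS Batch SubmitJob API
ShareIdentifier: default
SchedulingPriorityOverride: 1
```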
Navigate from the DuploCloud Portal to Cloud Services -> Batch, and click the Jobs tab. The Jobs list displays.
Click the name of the Job to view Job Details (Status, Job ID, Job Queue, Job Definition).
Use the AWS Best Practices Guide for information about running your AWS Batch jobs.
Enhance performance and cut costs by using the AWS GP3 Storage Class
GP3, the newer storage class from AWS, offers significant performance benefits as well as cost savings when you set it as your default storage class. By using GP3 storage classes instead of GP2, you get a baseline of 3,000 IOPS without any additional fees. You can also configure workloads that used a gp2 volume of up to 1,000 GiB in capacity with a gp3 volume.
If the volume size is greater than 1000 GiB, check the actual IOPS driven by the workload and choose a corresponding value.
For information about migrating your type GP2 Storage Classes to GP3, see this AWS blog.
To set GP3 as your default Storage Class for future allocations, you must add a custom setting in your Infrastructure.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
From the Name column, select the Infrastructure to which you want to add a custom setting (for the default G3 storage class).
Click the Settings tab.
Click Add. The Infra - Set Custom Data pane displays.
In the Setting Name field, select Other from the list box.
In the Custom Setting field, select DefaultK8sStorageClass from the list box.
In the Setting Value field, enter gp3.
Click Set.
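With that setting in place, the storage class Kubernetes uses for new volumes resembles a standard gp3 EBS-backed StorageClass (a sketch; the exact manifest DuploCloud provisions may differ):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com   # AWS EBS CSI driver
parameters:
  type: gp3                    # 3,000 IOPS baseline at no extra cost
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```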
Support for AWS Timestream databases
DuploCloud supports the Amazon Timestream database in the DuploCloud Portal. AWS Timestream is a fast, scalable, serverless time-series database service that makes it easy to store and analyze trillions of events per day.
Amazon Timestream automatically scales to adjust for capacity and performance, so you don’t have to manage the underlying infrastructure.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
From the RDS page, click the Timestream tab.
Click Add. The Add Timestream Database pane displays.
Enter the DatabaseName.
Select an Encryption Key, if required.
Click Submit. The Timestream database name displays on the Timestream tab.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
From the RDS page, click the Timestream tab.
Select the database from the Name column.
On the Tables tab, click Add. The Add Timestream Table pane displays.
Enter the Table Name and other necessary information to size and create your table.
Click Create.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
From the RDS page, click the Timestream tab.
Select the database from the Name column.
On the Timestream page, click the database's Action menu to modify the JSON code or launch the Console in AWS. You can also select the database name in the Name column and, from the Tables tab, click the table's Action menu to modify the JSON code or launch the Console in AWS or Delete a table.
Adding DynamoDB Tables in DuploCloud
When using DynamoDB in DuploCloud AWS, the required permissions to access the DynamoDB from a virtual machine (VM), Lambda functions, and containers are provisioned automatically using Instance profiles. Therefore, no Access Key is required in the Application code.
When you write application code for DynamoDB in DuploCloud AWS, use the IAM role/Instance profile to connect to these services. Where possible, use the AWS SDK constructor that takes only a region, letting credentials resolve from the instance profile.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Click the DynamoDB tab.
Click Add. The Create a DynamoDB Table pane displays.
Specify the DynamoDB Table Name and other required fields, including Primary Key, Key Type, Attribute Type, Sort Key, and Sort Key Type.
Click Create.
Perform additional configuration, as needed, in the AWS Console by clicking the >_ Console icon. In the AWS console, you can configure the application-specific details of DynamoDB database tables. However, no access or security-level permissions are provided.
After creating a DynamoDB table, you can retrieve the final name of the table using the .fullname attribute, which is available in the read-only section of the documentation. This is handy for applications that dynamically access table names post-creation. If you encounter any issues or need further assistance, refer to the documentation or contact support.
For detailed guidance about configuring the duplocloud_aws_dynamodb_table resource, refer to the Terraform documentation. This resource allows for creating and managing AWS DynamoDB tables within DuploCloud.
Using IAM for secure log-ins to RDS databases
Authenticate to MySQL, PostgreSQL, Aurora MySQL, Aurora PostgreSQL, and MariaDB RDS instances using AWS Identity and Access Management (IAM) database authentication.
Using IAM for authenticating an RDS instance offers the following benefits:
Network traffic to and from the database is encrypted using Secure Socket Layer (SSL) or Transport Layer Security (TLS).
Centrally manage access to your database resources, instead of managing access individually for each DB instance.
For applications running on Amazon EC2 hosts, you can use profile credentials specific to your EC2 instance to access your database, instead of using a password, for greater security.
Use the System Config tab to enable IAM authentication before enabling it for a specific RDS instance.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab. The Add Config pane displays.
From the Config Type list box, set Flags.
From the Key list box, select Enable RDS IAM auth.
From the Value list box, select True.
Click Submit. The configuration is displayed in the System Config tab.
You can also enable IAM for any MySQL, PostgreSQL, and MariaDB instance during RDS creation or by updating the RDS Settings after RDS creation.
Select the Enable IAM auth option when you create an RDS database.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
In the RDS tab, select the database for which you want to enable IAM.
Click the Actions menu and select RDS Settings -> Update IAM Auth. The Update IAM Auth pane displays.
Select Enable IAM Auth.
Click Update.
To download a token which you can use for IAM authentication:
In the DuploCloud Portal, navigate to Cloud Services -> Database.
In the RDS tab, select the database that uses IAM authentication.
Click the Actions menu and select View -> Get DB Auth Token. The RDS Credentials window displays.
Click Close to dismiss the window.
Create and connect to an RDS database instance
Support for the Aurora Serverless V1 database engines has been deprecated. When using Terraform, do not create V1 engines.
DuploCloud supports the following RDS databases in AWS:
MySQL
PostgreSQL
MariaDB
Microsoft SQL-Express
Microsoft SQL-Web
Microsoft SQL-Standard
Aurora MySQL
Aurora MySQL Serverless
Aurora PostgreSQL
Aurora PostgreSQL Serverless
When upgrading RDS versions, use the AWS Console and see your cloud provider's documentation for compatibility requirements. Note that while versions 5.7.40, 5.7.41, and 5.7.42 cannot be upgraded to version 8.0.28, you can upgrade these versions to version 8.0.32 and higher.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Click Add. The Create a RDS page displays.
Fill out the form based on your requirements, and Enable Logging, if needed.
Optionally, in the Backup Retention Period in Days field, enter a number of days to retain automated backups between one (1) and thirty-five (35). If a value is not entered, the Backup Retention Period value configured in Systems Settings will be applied.
You can create Aurora Serverless V2 Databases by selecting Aurora-MySql-Serverless-V2 or Aurora-PostgreSql-Serverless-V2 from the RDS Database Engine list box. Select the RDS Engine Version compatible with Aurora Serverless v2. The RDS Instance Size of db.serverless applies to both engines.
Once the database is created, select it and use the Instances tab to view the endpoint and credentials. Use the Endpoints and credentials to connect to the database from your application running in an EC2 instance. The database is only accessible from inside the EC2 instance in the current Tenant, including the containers running within.
Pass the endpoint, name, and credentials to your application using environment variables for maximum security.
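For instance, the application can read the connection details from the environment at startup (the variable names are illustrative; set them on the Service, not in code):

```python
import os

# Illustrative variable names -- the defaults exist only so the
# sketch runs standalone; set real values on the Service.
db_config = {
    "host": os.environ.get("DB_ENDPOINT", "localhost"),
    "name": os.environ.get("DB_NAME", "appdb"),
    "user": os.environ.get("DB_USER", "app"),
    "password": os.environ.get("DB_PASSWORD", "secret"),
}
connection_url = (
    f"postgresql://{db_config['user']}:{db_config['password']}"
    f"@{db_config['host']}/{db_config['name']}"
)
```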
Manage backup and restore for Relational Database Services (RDS)
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Confirm the snapshot request. Once taken, the snapshot displays in the Snapshot tab.
You can restore available RDS snapshots to a specific point in time.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Click the Snapshots tab.
Click the Actions menu and select Backup & Restore -> Restore to Point in Time. The Restore Point in Time pane displays.
In the Target Name field, append the RDS name to the prefilled TENANT_NAME prefix.
Select either the Last Restorable Time or Custom date and time option. If you select the Custom date and time option, specify the date and time in the format indicated.
Click Submit. Your selected RDS is restored to the point in time you specified.
Administrators can set backup retention periods in the DuploCloud Portal.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Select the System Config tab.
Click Add. The Config pane displays.
From the Config Type list box, select AppConfig.
From the Key list box, select RDS Automated Backup Retention days.
In the Value field, enter the number of days to retain the backup, from one (1) to thirty-five (35) days.
Click Submit. The System Configs area in the System Config tab is updated with the retention period you entered for the RDS Automated Backup Retention days key.
The backup retention period applies to new databases.
To update or skip the final snapshot, navigate to Cloud Services -> Database, and click the RDS tab. Select the name of the RDS database for which you want to update or skip the final snapshot.
From the Actions menu list box, select Backup & Restore -> Update Final Snapshot.
The Update Final Snapshot pane for the database displays. To skip the final snapshot upon database deletion, select Skip Final Snapshot. Click Update.
In the RDS Credentials window, click the Copy Icon ( ) to copy the Endpoint, Username, and Password to your clipboard.
Create a snapshot of an RDS.
In the RDS tab, in the row containing your RDS instance, click the Actions menu icon ( ) and select Backup & Restore -> Create Snapshot.
Once backups are available, you can restore them on the next instance creation when you .
In the RDS tab, select an RDS instance containing .
Steps for sharing encrypted RDS databases in DuploCloud AWS
Sharing unencrypted databases with other accounts is simple and straightforward. Sharing an encrypted database is slightly more involved. The steps below walk through sharing an encrypted database.
Create a managed key that can be used by both accounts. Share the managed key with the destination account.
Copy the existing snapshot in the source account, but encrypt it with the new key.
Share the new snapshot with the destination account.
In the destination account, make a copy of the shared snapshot encrypted with the destination account's key.
Add the Name tag to the new copy in the destination so the DuploCloud portal recognizes it.
Create a new database from the snapshot.
Create a new customer-managed key in AWS KMS. In the Define key usage permissions area, provide the account ID of the other account.
Once the key is created, navigate to Cloud Services -> Database and select the RDS tab. From the Actions menu, select Manage Snapshots. Select the snapshot and click Copy Snapshot. For encryption, use the key created above.
Once the copied snapshot is ready, share it with the other account by clicking Share snapshot and providing the destination account ID.
In the destination account, navigate to Cloud Services -> Database and select the RDS tab. Select Shared with me. Select the shared snapshot and click Copy Snapshot. Use the encryption key of the destination account, not the shared key.
In the copied snapshot, add a tag with Key "Name" and Value "duploservices-{tenantname}", where tenantname is the Tenant where you want to launch an RDS with this snapshot.
In the DuploCloud Portal, select the Tenant. Navigate to Cloud Services -> Database and select the RDS tab. Click Add and enter a name for the new database. In the Snapshot field, select the new snapshot. Enter the instance type and click Submit. In a few minutes, the database is created with the data from the snapshot. Use the existing username and password to access the database.
You can manage RDS Snapshots from DuploCloud. Navigate to Cloud Services -> Database and select the RDS tab. From the Actions menu, select Manage Snapshots.
The Manage Snapshots page shows the list of all manual and automated snapshots available within a Tenant. Additional details, such as the owner and whether a snapshot is shared with the user, are displayed. A user can also delete snapshots from this page.
You can view the Snapshot quota limits and numbers of snapshots used and available from this page.
Create a read replica of your RDS database
By creating an AWS RDS Read Replica of your database, you can:
Elastically scale your capacity for read-heavy database workloads.
Boost performance by increasing aggregate read throughput.
Create standalone database instances.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Click the RDS tab.
Select the database that you want to replicate from the Name column.
Click the Actions menu. Select RDS Settings, and then Add Replica. The Add read replica to: SELECTED_DATABASE_NAME pane displays.
In the Read Replica Name field, enter the name of your replica. The Tenant name is prefixed automatically.
Select an appropriate Instance Size from the list box to match or exceed the database you want to replicate.
Click Create. Your replica is displayed in the RDS tab with a Status of Submitted. When the replica is ready for use, the Status is Available.
Create a read replica of an Aurora database
Aurora database replica setup is slightly different from adding an RDS read replica.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Follow one of these procedures to complete the serverless and MySQL replicas setup.
In the Add Replica pane, enter a name for the Serverless replica in the Replica Name field.
In the RDS Engine field, select the Aurora RDS Serverless engine you want the replica to use.
Specify Min Capacity (ACUs) and Max Capacity (ACUs).
From the RDS Instance Size list box, select the appropriate instance size.
Click Save. The replica is created with a Reader role and displayed in the RDS tab.
To modify instance sizes for an existing Aurora Serverless replica:
In the DuploCloud Portal, navigate to Cloud Services -> Database and, in the RDS tab, locate the read replica you want to update in the Name column.
From the RDS Instance Size list box, select the appropriate instance size.
Click Save.
In the Add Replica pane, enter a name for the MySQL replica in the Replica Name field.
From the RDS Instance Size list box, select the appropriate instance size.
From the Availability Zone list box, select an availability zone.
Click Save. The replica is created with a Reader role and displayed in the RDS tab.
Set a monitoring interval for an RDS database
Add or update a monitoring interval for an RDS database configuration.
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Click the RDS tab.
From the Monitoring Interval list box, select an interval, in seconds. To remove a previously set interval, select Disable.
Click Submit.
In the row of the RDS for which you want to add an Aurora read replica, click the ( ) icon, select RDS Settings, and then Add Replica. The Add Replica pane displays.
Click the ( ) icon in the Actions column and select Update Instance Size. The Update Instance Size pane displays.
In the row for the RDS database that you want to update, click the ( ) icon in the Actions column, and select Update Monitoring Interval. The Update Monitoring Interval pane displays.
Turn logging on or off for an AWS RDS
You can enable or disable logging for an RDS database at any time, using the DuploCloud Portal.
To update logging for an RDS, you must select the Enable Logging option when you create the RDS.
In the DuploCloud Portal, navigate to Cloud Services -> Databases.
In the RDS tab, from the Name column, select the database for which you want to enable or disable logging.
Click the Actions menu, select RDS Settings, and then Update Logging. The Update Logging pane displays.
Select or deselect Enable Logging to turn logging on or off, respectively.
Click Update.
View the status of the EnableLogging attribute in the Details tab.
Administrators can configure parameters for RDS Parameter Groups for DB instances and clusters from Administrator -> System Settings -> System Config. Specify the database engines for auto-creation of parameter groups. Administrators can set the supported parameters to override values when creating an RDS.
Create an Amazon Elastic File System (EFS) from the DuploCloud Portal
Amazon Elastic File System (Amazon EFS) is a scalable, fully managed file storage service. It offers a simple and scalable file storage solution for use with AWS cloud services and on-premises resources. It is designed to provide shared file storage for multiple instances, enabling concurrent access, as well.
See the AWS Documentation for more information.
Before you create an EFS, you must configure the EFS Volume Controller for your Infrastructure.
In the DuploCloud portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Select your Infrastructure from the Name column.
Click the Settings tab.
Click Add. The Infra - Set Custom Data pane displays.
From the Settings Name list box, select Enable EFS Volume Controller.
Select Enable.
Click Set.
In the Settings tab, your configuration Enable EFS Volume Controller is set to true.
In the DuploCloud Portal, navigate to Cloud Services -> Storage.
Click the EFS tab.
Click Add. The Add Elastic File System page displays.
In the Name field, enter a name for the EFS you want to create.
In the Creation Token field, enter a string of up to 64 ASCII characters.
From the Performance Mode list box, select General or Max I/O. Select General for most file systems. Selecting Max I/O allows scaling to higher levels of aggregate throughput and operations per second, with a tradeoff of slightly higher latencies for most file operations. You cannot change this setting after the file system has been created.
From the Throughput Mode list box, select Bursting or Provisioned. If you select Provisioned, you must also set a value from 1 to 1024 for Provisioned Throughput (in MiB). After you create the file system, you can decrease the file system's throughput in Provisioned mode or change between the throughput modes, as long as more than 24 hours have passed since the last decrease in throughput or throughput mode change.
Change other defaults as needed and click Create. The EFS is created and displayed in the EFS tab. Select the EFS from the Name column and view the configuration in the Details tab.
Max I/O mode is not supported on file systems using One Zone storage classes.
Information about EFS Mount Targets and Access Points is available in their respective tabs.
You can update the policies for EFS Lifecycle management in the DuploCloud Portal. See the AWS Documentation for more information.
If you want to disable an EFS Lifecycle Management Policy that you previously created, you must do so in the AWS Portal. You cannot disable a Lifecycle Management Policy by using the DuploCloud Portal.
In the DuploCloud Portal, navigate to Cloud Services -> Storage.
Click the EFS tab.
Select the EFS from the Name column. The EFS page displays.
From the Actions menu, select Update Lifecycle Policies. The Update EFS Lifecycle Policies pane displays.
From the Transition to IA list box, select the time duration (in days) to elapse before transitioning files to the IA storage class.
Optionally, select Transition to Primary Storage Class, if appropriate.
Click Submit. The EFS Lifecycle Policies are updated and can be viewed in the Lifecycle Policies tab.
AWS Data Pipeline is a web service that helps you reliably process and move data between different AWS compute and storage services, as well as on-premises data sources, at specified intervals. With AWS Data Pipeline, you can regularly access your data where it’s stored, transform and process it at scale, and efficiently transfer the results to AWS services such as Amazon S3, Amazon RDS, Amazon DynamoDB, and Amazon EMR.
AWS Data Pipeline helps you easily create complex data processing workloads that are fault tolerant, repeatable, and highly available. You don’t have to worry about ensuring resource availability, managing inter-task dependencies, retrying transient failures or timeouts in individual tasks, or creating a failure notification system. AWS Data Pipeline also allows you to move and process data that was previously locked up in on-premises data silos.
A data pipeline can be created using any of the following ways:
Using DuploCloud UI
Using an exported template from AWS console
Cloning an existing template
Proceed to Cloud Services -> Analytics -> Data Pipeline. Click the Add button.
Enter the relevant information on the form, such as name, description, S3 log folder, cron schedule details, EMR resources, and EMR steps. Click the Generate button.
Review the generated JSON and make any further changes as needed.
Proceed to Cloud Services -> Analytics -> Data Pipeline. Click the Add button, then click Import Pipeline Template.
In the AWS console, proceed to Data Pipeline -> Choose Existing Data Pipeline -> Edit -> Export. Review the generated JSON, make any further changes, and click Submit.
Paste the previously exported template into the form, make any additional changes (such as schedule frequency or EMR steps), and click Submit to save the Data Pipeline.
Existing Data Pipelines can be cloned in List View or Details View.
To get JIT (Just In Time) access to the appropriate AWS console, click Data Pipeline, EMR Console, or EMR Jupyter Console. Use the row-level menu actions to manage the Data Pipeline (for example, Clone, Edit, Export, or Delete).
Use the Details view to update the Data Pipeline, use JIT (Just In Time) access to the AWS console, and check errors and warnings.
There are two types of Data Pipeline templates:
Exported template in AWS console
Exported template in DuploCloud UI
Mount an EFS in an EC2 instance using a script
If you want to connect an EFS to a Native Docker Service, for example, you can mount it in an EC2 instance.
Create a bash script, as in the example below, and replace the nfs4 endpoint with your EFS endpoint. You can run the script on an existing EC2 instance, or run it as an EC2 user data script to configure the instance at first launch (bootstrapping).
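A minimal sketch of such a script, assuming an Amazon Linux host; the file-system ID, region, and mount point are placeholders to replace with your own values:

```shell
#!/bin/bash
# Mount an EFS file system over NFSv4.1 (run on the instance or via user data).
EFS_ID="fs-0123456789abcdef0"     # placeholder: your EFS ID
REGION="us-west-2"                # placeholder: your region
MOUNT_POINT="/mnt/efs"

sudo yum install -y nfs-utils
sudo mkdir -p "$MOUNT_POINT"
# Standard AWS-recommended NFS mount options for EFS.
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
  "${EFS_ID}.efs.${REGION}.amazonaws.com:/" "$MOUNT_POINT"
```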
In the DuploCloud Portal, edit the DuploCloud Service.
On the Edit Service page, click Next. The Advanced Options page displays.
On the Advanced Options page, in the Volumes field, enter the configuration YAML to mount the EFS endpoint as a volume.
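As a hypothetical sketch only (the exact schema the Volumes field expects may differ by DuploCloud version), a Kubernetes-style NFS volume pointing at an EFS endpoint looks like:

```yaml
- name: efs-data
  nfs:
    server: fs-0123456789abcdef0.efs.us-west-2.amazonaws.com   # placeholder endpoint
    path: /
```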
You can create a Kinesis Stream. From the DuploCloud portal, navigate to Cloud Services -> Analytics and select the Kinesis Stream tab. Click the Add button above the table. Refer to the AWS Kinesis Data Streams documentation to learn more about the permissions.
Enabling IoT for a Tenant, creating Things and supporting certificates
Connect and manage billions of devices with AWS IoT, per Tenant. Collect, store, and analyze IoT data for industrial, consumer, commercial, and automotive workloads within DuploCloud.
Use Just-In-Time access to provision devices in your IoT.
In the DuploCloud Portal, navigate to Administrator -> Tenants.
Select your Tenant in the Name column.
Click the Settings tab.
Click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Enable AWS IoT, and then select Enable.
Click Add. It takes approximately five minutes to enable IoT.
Navigate to Cloud Services -> IoT. The IoT Things page displays.
In the DuploCloud Portal, navigate to Cloud Services -> IoT.
Click the Things tab.
Click Add. The Create an IoT Thing pane displays.
In the editable portion of the Name field, enter a Thing name.
From the IoT Certificate list box, select an IoT Certificate.
From the IoT Thing Type list box, select the Thing type that you want to create.
In the Attributes field, add Thing Attributes in quotes, separated by a comma (,).
Click Create. Your IoT Thing is created and displayed.
Select the Thing to view Details and IoT Principals (certificate information) for the Thing. Use the Action menu to Edit or Delete the Thing, Attach IoT Certificate, and Download Device Package.
In the DuploCloud Portal, navigate to Cloud Services -> IoT.
Click the Things tab.
Add a certificate if needed.
Select the Thing to which you want to attach a certificate from the Name column.
Click the Actions menu and select Attach IoT Certificate. The Attach an IoT Certificate pane displays.
From the IoT Certificate list box, select an IoT certificate to attach to the Thing.
Click Attach.
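Attaching a certificate corresponds to attaching a certificate principal to the Thing in the IoT API; a hedged AWS CLI sketch in which the Thing name and certificate ARN are placeholders:

```shell
aws iot attach-thing-principal \
  --thing-name my-thing \
  --principal arn:aws:iot:us-west-2:123456789012:cert/0123456789abcdef
```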
In the DuploCloud Portal, navigate to Cloud Services -> IoT.
Click the Things tab.
Select the Thing to which you want to attach a certificate from the Name column.
Click the Actions menu and select Download Device Package. The Download IoT Device Package window displays.
From the IoT Certificate list box, select the IoT certificate associated with the Thing's Device Package.
Click Download.
Add, update, or manage an IoT certificate with the following procedures.
In the DuploCloud Portal, navigate to Cloud Services -> IoT.
Click the Certificates tab.
Click Add. The Create an IoT Certificate pane displays.
Select Activate the Certificate and click Create. The certificate displays.
In the DuploCloud Portal, navigate to Cloud Services -> IoT.
Click the Certificates tab. The available certificates are displayed and listed by ID.
From the Status list box, select the new status of the certificate.
Click Update.
In the DuploCloud Portal, navigate to Cloud Services -> IoT.
Click the Certificates tab. Available certificates are displayed and listed by ID.
Select Console. The AWS Console launches so that you can manage your certificate using AWS.
Topic Rules are SQL-based rules that select data from message payloads and send the data to other services, such as Amazon S3, Amazon DynamoDB, and AWS Lambda. For example, you can define a Rule that invokes a Lambda function to call an AWS or third-party service.
To learn more about IoT Topic Rules and how you define and manage them, see the AWS documentation.
In the DuploCloud Portal, navigate to Cloud Services -> IoT.
Click the Topic Rules tab.
Click Add. The Add Topic Rules page displays.
In the Name field, enter a Topic Rule name.
Add a meaningful description of what the rule does in the Description field.
Define the rule by completing the fields in the AWS IoT SQL and AWS IoT SQL Version areas. Select Define an Error Action if the rule pertains to error management.
Click Create. Your rule is defined and displayed in the Topic Rules tab.
View the details of a rule by selecting the rule from the Topic Rules tab Name column. The Details tab displays the rule description. The Actions tab displays the SQL-based rule(s).
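A rule of this shape can also be expressed through the AWS CLI; a hedged sketch in which the rule name, topic filter, and Lambda ARN are all illustrative:

```shell
# Select high-temperature readings and forward them to a Lambda function.
aws iot create-topic-rule \
  --rule-name HighTempAlert \
  --topic-rule-payload '{
    "sql": "SELECT temperature, deviceId FROM '\''sensors/+/data'\'' WHERE temperature > 60",
    "actions": [
      {"lambda": {"functionArn": "arn:aws:lambda:us-west-2:123456789012:function:alert-handler"}}
    ]
  }'
```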
Collect and display real-time event data in AWS with DuploCloud
Amazon EventBridge collects and visualizes real-time logs, metrics, and event data in automated dashboards to streamline your cloud infrastructure and application maintenance.
By default, the metrics for a resource are available on the Metrics page in DuploCloud. Some metrics, however, need agents to be installed in the system to collect the information, such as AWS SSM Agent.
DuploCloud provides a way to automatically install these agents on all the hosts whenever they are provisioned. For more information, refer to the DuploCloud Security White Paper PCI and HIPAA Compliance with DuploCloud, and read the General section, Agent Models, to learn about installing agents for compliance controls and security frameworks.
In the DuploCloud Portal, navigate to Cloud Services -> App Integration.
Click the EventBridge tab.
In the Rule Name field, specify or change the rule name.
In the Description field, specify or change the rule description.
In the Schedule Expression field, enter or edit the interval for which you want this rule to run. Use the format: rate(x interval), where x is a numeric value and interval is seconds, minutes, hours, or days. Ensure that you include a blank space between the numeric value x and the interval.
From the State list box, select Enabled.
Click Submit. The rule is displayed in the EventBridge tab.
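The schedule format described above is the same one EventBridge accepts directly; a hedged CLI sketch with a placeholder rule name:

```shell
# Run every 5 minutes -- note the blank space between the value and the interval.
aws events put-rule \
  --name my-scheduled-rule \
  --description "Example scheduled rule" \
  --schedule-expression "rate(5 minutes)" \
  --state ENABLED
```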
An EventBridge target is a resource or endpoint to which EventBridge sends an event when the event matches the event pattern defined for a rule. The rule processes event data and sends pertinent information to the target. To deliver event data to a target, EventBridge needs permission to access the target resource. You can define up to five targets for each rule.
You define targets and associated types in DuploCloud. DuploCloud supports the ECS Task and Lambda target types.
In the DuploCloud Portal, navigate to Cloud Services -> App Integration.
Click the EventBridge tab. The rules you defined are displayed.
In the Target tab, click Add. The Add Rule Target page displays.
In the Name field, enter a target name.
From the Target Type list box, select a target type.
From the Task Definition Family list box, select a task definition family.
In the Task Version field, enter a numeric version number.
Click Submit. The Target you added is displayed in the Target tab.
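The target association maps to the EventBridge put-targets call; a hedged sketch in which the rule name and Lambda ARN are placeholders:

```shell
aws events put-targets \
  --rule my-scheduled-rule \
  --targets '[{"Id":"1","Arn":"arn:aws:lambda:us-west-2:123456789012:function:my-handler"}]'
```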
Run big data applications with open-source frameworks without managing clusters and servers
Amazon EMR Serverless is a serverless option in Amazon EMR that makes it easy for data analysts and engineers to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. You get all the features and benefits of Amazon EMR without needing experts to plan and manage clusters.
In this procedure, we create an EMR studio, create and clone a Spark application, then create and clone a Spark job to run the application with EMR Serverless.
DuploCloud EMR Serverless supports Hive, Spark, and custom ECR images.
To create EMR Serverless applications you first need to create an EMR studio.
In the DuploCloud Portal, navigate to Cloud Services -> Analytics.
Click the EMR Serverless tab.
Click EMR Studio.
Enter a Description of the Studio for reference.
Select an S3 Bucket that you previously defined from the Logs Default S3 Bucket list box.
Optionally, in the Logs Default S3 Folder field, specify the path to which logs are written.
Click Create. The EMR Studio is created and displayed.
Select the EMR Studio name in the Name column. The EMR Studio page displays. View the Details of the EMR Serverless Studio.
Now that the EMR Studio exists, you create an application to run analytics with it.
The DuploCloud Portal supports Hive and Spark applications. In this example, we create a Spark application.
In the EMR Serverless tab, click Add. A configuration wizard launches with five steps for you to complete.
Enter the EMR Serverless Application Name (app1, in this example) and the EMR Release Label in the Basics step. DuploCloud prepends the string DUPLOSERVICES-TENANT_NAME to your chosen application name, where TENANT_NAME is your Tenant's name. Click Next.
Accept the defaults for the Capacity, Limits, and Configure pages by clicking Next on each page until you reach the Confirm page.
On the Confirm page, click Submit. Your created application instance (DUPLOSERVICES-DEFAULT-APP1, in this example) is displayed in the EMR Serverless tab with the State of CREATED.
Before you begin to create a job to run the application, clone an instance of it to run.
Make any desired changes while advancing through the Basics, Capacity, Limits, and Configure steps, clicking Next to advance the wizard to the next page. DuploCloud gives your cloned app a unique generated name by default (app1-c-833, in this example).
On the Confirm page, click Submit. In the EMR Serverless tab, you should now have two application instances in the CREATED State: your original application instance (DUPLOSERVICES-DEFAULT-APP1) and the cloned application instance (DUPLOSERVICES-DEFAULT-APP1-C-833).
You have created and cloned the Spark application. Now you must create and clone a job to run it in EMR Serverless. In this example, we create a Spark job.
Select the application instance that you previously cloned. This instance (DUPLOSERVICES-DEFAULT-APP1-C-833, in this example) has a STATE of CREATED.
Click Add. The configuration wizard launches.
In the Basics step, enter the EMR Serverless RunJob Name (jobfromcloneapp, in this example).
Click Next.
In the Job details step, select a previously-defined Spark Script S3 Bucket.
In the Spark Script S3 Bucket File field, enter a path to define where your scripts are stored.
Optionally, in the Spark Scripts field, you can specify an array of arguments passed to your JAR or Python script. Each argument in the array must be separated by a comma (,). In the example below, a single argument of "40000" is entered.
Optionally, in the Spark Submit Parameters field, you can specify Spark --conf parameters. See the example below.
Click Next.
Make any desired changes in the Configure step and click Next to advance the wizard to the Confirm page.
On the Confirm page, click Submit. In the Run Jobs tab for your cloned application, your job JOBFROMCLONEAPP displays.
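For reference, the same job parameters (script location, the "40000" argument, and --conf settings) appear in the EMR Serverless StartJobRun API; a hedged CLI sketch in which the application ID, role ARN, and S3 path are placeholders:

```shell
aws emr-serverless start-job-run \
  --application-id 00abc123def456 \
  --execution-role-arn arn:aws:iam::123456789012:role/emr-serverless-job-role \
  --name jobfromcloneapp \
  --job-driver '{
    "sparkSubmit": {
      "entryPoint": "s3://my-bucket/scripts/job.py",
      "entryPointArguments": ["40000"],
      "sparkSubmitParameters": "--conf spark.executor.cores=1 --conf spark.executor.memory=4g"
    }
  }'
```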
Observe the status of your jobs and make changes, if needed. In this example, we monitor the Spark jobs created and cloned in this procedure.
In the DuploCloud Portal, navigate to Cloud Services -> Analytics.
Click the EMR Serverless tab.
Select the application instance that you want to monitor. The Run Jobs tab displays run jobs connected to the application instance and each job's STATE.
Using the Actions menu, you can open the Console, or Start, Stop, Edit, Clone, or Delete jobs. You can also click the Details tab to view configuration details.
Create a Kafka Cluster for real-time streaming data pipelines and apps
Apache Kafka (Kafka) is an open-source, distributed streaming platform that enables the development of real-time, event-driven applications. It is used to build real-time streaming data pipelines and real-time streaming applications.
A data pipeline reliably processes and moves data from one system to another, and a streaming application is an application that consumes streams of data. Streaming platforms enable developers to build applications that continuously consume and process streams at high speeds, with a high level of accuracy.
When creating a Kafka Cluster in DuploCloud, if you want to select a Cluster Configuration and Configuration Revision, you must add the configuration or revision in the AWS console before creating the DuploCloud Kafka cluster.
In the DuploCloud Portal, navigate to Cloud Services -> Analytics.
Click the Kafka tab.
Click Add. The Create a Kafka Cluster pane displays.
Enter a Kafka Cluster Name.
From the field list boxes, select a Version of Kafka, the Size of the cluster you want to create, the Volume size in gigabytes, and the Transit Encryption mode.
Optionally, select Availability Zones or Number of BrokerNodes. You must specify a minimum of two (2) Availability Zones.
Optionally, select a Cluster Configuration and Configuration Revision when creating a Kafka Cluster in DuploCloud. The Cluster Configuration and Configuration Revision list boxes are prepopulated with configurations and revisions previously defined in the AWS Portal.
Click Submit. The cluster is created and displayed as Active in the Kafka tab. It may take up to half an hour to create the cluster.
View Kafka Clusters by navigating to Cloud Services -> Analytics in the DuploCloud Portal and selecting the Kafka tab.
In the DuploCloud Portal, navigate to Cloud Services -> Analytics.
Click the Kafka tab.
Select the Kafka Cluster with Active Status from the Name column. The Kafka Cluster page displays.
Click the Actions menu and select Change Configuration. The Change Cluster Configuration pane displays.
From the Cluster Configuration list box, select the new cluster configuration.
From the Configuration Revision list box, select the revision of the new cluster configuration.
Click Submit. The configuration change is displayed on the Kafka Cluster page.
Use Lambda to deploy serverless functions in DuploCloud
Using Lambda, you write your code and upload it to AWS. Lambda executes and scales the code as needed, abstracting away the underlying infrastructure, and allowing you to focus on writing the actual business logic of your application. Lambda Functions are the principal resource of the Lambda serverless platform.
In a Zip file, the Lambda Function code resides at the root of the package. If you are using a virtual environment, all dependencies should be packaged.
Upload the Zip package in the AWS Console.
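A minimal packaging sketch (the file and handler names are illustrative; python3 -m zipfile is used here only for portability, and the zip tool works equally well):

```shell
# Build a Zip package with the handler at the package root.
mkdir -p build
cat > build/app.py <<'EOF'
def handler(event, context):
    return {"statusCode": 200, "body": "ok"}
EOF
# If you use a virtual environment, copy its site-packages contents into build/ too.
(cd build && python3 -m zipfile -c ../package.zip app.py)
```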
In the DuploCloud Portal, navigate to Cloud Services -> Serverless.
Click the Lambda tab. The Lambda Function page displays.
Click Add. The Create a Lambda Function page displays.
In the Name field, enter the name of your Lambda Function.
In the Description field, enter a useful description of the function.
In the Runtime field, enter the runtime for your programming language.
To allocate a temporary file share, enter the value in megabytes (MB) in the Ephemeral Storage field. The minimum value is 512; the maximum value is 10240.
In the Function Handler field, enter the method name that Lambda calls to execute your function.
In the Function Package field, enter the name of the Zip package containing your Lambda Function.
In the Dead Letter Queue list box, select an Amazon Simple Queue Service (SQS) queue or Amazon Simple Notification Service (SNS) topic.
Click Submit. The Lambda Function is created.
On the Lambda Function page, from the Name column, select the function you created.
From the Actions menu, click Console. You are redirected to the AWS Console.
Test the function using the AWS Console.
DuploCloud enables you to create a classic micro-services-based architecture where your Lambda function integrates with any resource within your Tenant, such as S3 Buckets, Dynamo database instances, RDS database instances, or Docker-based microservices. DuploCloud implicitly enables the Lambda function to communicate with other resources but blocks any communication outside the Tenant, except Elastic Load Balancers (ELB).
To set up a trigger or event source, create the resource in the DuploCloud Portal. You can then trigger directly from the resource to the Lambda function in the AWS console menu of your Lambda function. Resources can be S3 Buckets, API gateways, DynamoDB database instances, and so on.
Passing secrets to a Lambda function can be done in much the same manner as passing secrets to a Docker-based service using Environmental Variables. For example, you can create a relational database from the Cloud Services -> Database -> RDS menu in DuploCloud, providing a Username and Password. In the Lambda menu, supply the same credentials. No secrets need to be stored in an AWS Key Vault, a Git repository, and so on.
To update the code for the Lambda function:
Create a new Zip package with a different name and upload it in the S3 bucket.
Select the Lambda Function (with the updated S3 Bucket). From the Actions menu, click Edit.
Enter the updated Name of the Lambda Function.
Use the Image Configuration field to update an additional configuration parameter.
Click Submit.
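Alternatively, the code swap can be done with one AWS CLI call; a hedged sketch with placeholder function, bucket, and package names:

```shell
# Point the function at the newly uploaded package in S3.
aws lambda update-function-code \
  --function-name my-function \
  --s3-bucket my-bucket \
  --s3-key package-v2.zip
```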
In the row for the certificate you want to update, click the Actions menu icon and select Edit. The Update an IoT Certificate pane displays.
In the Topic Rules tab, edit a Topic Rule by clicking the Actions menu icon in the row listing your Topic Rule Name and selecting Edit.
Click Add. The Add EventBridge Rule page displays. To update an existing rule, click the menu icon in the Actions column for the rule you want to update, and click Update. The Update EventBridge Rule page displays.
Click Add. The Add EMR Studio pane displays.
Navigate to the EMR Serverless tab and click the menu icon in the Actions column. Use the Actions menu to delete the studio if needed, or to view the studio in the AWS Console.
On the EMR Serverless page, click the menu icon in the Actions column and select Clone.
If you are new to Spark, use the Info Tips (blue icon) when entering data in the EMR Serverless configuration wizard steps below.
For complete documentation on Apache Kafka, see the Apache Kafka documentation.
AWS Lambda is a serverless computing platform provided by AWS that allows you to run code without provisioning or managing servers. It enables you to build and run applications in response to events or triggers using Lambda Functions.
Lambda Functions are event-driven and designed to perform small, specific tasks or functions. They can be written in supported programming languages such as Python, JavaScript (Node.js), Java, C#, PowerShell, or Ruby. Once you create a Lambda function, you can configure it to respond to various types of events, such as changes in data stored in an Amazon S3 bucket, updates in an Amazon DynamoDB table, incoming HTTP requests via Amazon API Gateway, or custom events triggered by other AWS services.
Use Lambda with container images or S3 bucket updates.
Refer to the AWS documentation for detailed instructions on how to generate the package.
Use JIT (Just-In-Time) access to reach the AWS Console.
From the Package Type list box, select Zip. For type Image, see the Using Container Images to configure Lambda topic.
In the S3 Bucket list box, select an existing S3 bucket.
Using Container Images to configure Lambda
Create and build your Lambda code using a Dockerfile. Refer to the AWS documentation for detailed instructions on how to build and test container images.
In the DuploCloud Portal, navigate to Cloud Services -> Storage.
Click the ECR Repository tab. The ECR Repository page displays.
Click Add. The Create an ECR Repository page displays.
In the ECR Repository Name field, enter the ECR Repository Name.
Click Create.
Log in to ECR.
Tag the images you have built.
Push the images to the ECR Repository that you created.
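The three steps above can be sketched as follows (the account ID, region, and repository and image names are placeholders):

```shell
ACCOUNT_ID=123456789012
REGION=us-west-2
REPO=my-lambda-repo
REGISTRY="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# 1. Log in to ECR.
aws ecr get-login-password --region "$REGION" |
  docker login --username AWS --password-stdin "$REGISTRY"

# 2. Tag the locally built image.
docker tag my-lambda:latest "${REGISTRY}/${REPO}:latest"

# 3. Push it to the repository you created.
docker push "${REGISTRY}/${REPO}:latest"
```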
Refer to the AWS Documentation for more details about uploading Container Images.
In the DuploCloud Portal, navigate to Cloud Services -> Serverless.
Click the Lambda tab. The Lambda Function page displays.
Click Add. The Create a Lambda Function page displays.
In the Name field, enter the name of your Lambda Function.
In the Description field, enter a useful description of the function.
From the Package Type list box, select Image. For type Zip, see the Lambda Functions topic.
In the Image URL field, enter the URL of the image.
Click Submit. The Lambda function is created.
On the Lambda Function page, from the Name column, select the function you created.
From the Actions menu, click Console. You are redirected to the AWS Console.
Test the function using the AWS Console.
Enable AWS NAT Gateway for High Availability (HA)
Use NAT gateways so that instances in a private subnet can connect to services outside your Virtual Private Cloud (VPC). External services cannot initiate a connection with these instances.
See this AWS Documentation for more information on NAT Gateways.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Select the Infrastructure for which you want to enable NAT Gateway from the Name column.
Click the Settings tab.
Click Add. The Infra - Set Custom Data pane displays.
In the Setting Name field, select Enable HA NAT Gateway from the list box.
Select Enable.
Click Set.
Configure Apache Airflow for AWS
Amazon Managed Workflows for Apache Airflow (Amazon MWAA) orchestrates your workflows using Directed Acyclic Graphs (DAGs) written in Python. You provide MWAA an Amazon S3 bucket where your DAGs, plugins, and Python requirements reside. You can run and monitor your DAGs using the AWS Management Console, a command line interface (CLI), a software development kit (SDK), or the Apache Airflow user interface (UI).
Create an S3 bucket by following the steps here.
Package and upload your DAG (Directed Acyclic Graph) code to Amazon S3. Amazon MWAA loads the following folders and files into Airflow.
Ensure Versioning is enabled for the custom plugins in plugins.zip, the startup shell script file, and the Python dependencies in requirements.txt on your Amazon S3 bucket.
Refer to the Amazon documentation on DAGs for more details.
In the DuploCloud Portal, navigate to Cloud Services -> Analytics.
Click the Airflow tab.
Click Add. The New Managed Airflow Environment wizard displays.
Provide the required information, such as Airflow Environment Name, Airflow Version, S3 bucket, and DAGs folder location by navigating through the wizard. You can also enable Logging for Managed Airflow.
If you specify plugins.zip, requirements.txt, and a startup script while setting up the Airflow Environment, you must provide the S3 Version ID of these files (for example, lSHNqFtO5Z7_6K6YfGpKnpyjqP2JTvSf). If the Version ID is blank, the latest Version ID of the specified files from the S3 Bucket is referenced by default.
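One way to look up the Version IDs to paste into the wizard is the s3api list-object-versions call; a hedged sketch with a placeholder bucket name:

```shell
# Show the latest Version ID for requirements.txt in the Airflow bucket.
aws s3api list-object-versions \
  --bucket my-airflow-bucket \
  --prefix requirements.txt \
  --query 'Versions[?IsLatest].[Key, VersionId]' \
  --output text
```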
After setup, view the Managed Airflow Environment from the DuploCloud Portal, using the Airflow tab. You can view the Airflow Environment in the AWS Console by clicking the WebserverURL.
Package code libraries for sharing with Lambda Functions
A Lambda Layer is a Zip archive that can contain additional code or other content. A Lambda Layer may contain libraries, a custom runtime, data, or configuration files.
Lambda Layers provide a convenient and effective way to package code libraries for sharing with Lambda functions in your account. Using layers can help reduce the size of uploaded archives and make it faster to deploy your code.
You must add a Key/Value pair in the DuploCloud Portal's System Config settings to display Lambda Layers in DuploCloud.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
From the Config Type list box, select Other. The Other Config Type field displays.
In the Other Config Type field, enter AppConfig.
In the Key field, enter ListAllLambdaLayers.
In the Value field, enter True.
Click Submit. The Key/Value pair is displayed in the System Config tab.
After you set ListAllLambdaLayers to True:
Layer names prefixed with DUPLO- display for all Tenants in the DuploCloud Portal.
Layer names prefixed with DUPLOSERVICES- display in the appropriate Tenant.
Before you add a Lambda Layer, you must have defined at least one Lambda Function.
In the DuploCloud Portal, navigate to Cloud Services -> Serverless.
In the Lambda tab, select the Lambda Function to which you want to add Lambda Layers.
Click the Actions menu and select Edit. The Edit Lambda Function page displays.
In the Layers area, click the + button. The Add Lambda Layer pane displays.
From the Layer list box, select the Lambda Layer to add.
From the Version list box, select the layer version.
Click Add Layer. The layer you added is displayed in the Layers area of the Edit Lambda Function page.
Optionally, enter an Image Configuration. Refer to the informational ToolTip for examples.
Support for Kubernetes Probes
Liveness, Readiness, and Startup probes are well-known methods to detect Pod health in Kubernetes. They are used in regular uptime monitoring and enable initial startup health that allows rolling deploys of new service updates.
The example below defines Liveness, Readiness, and Startup probes for one service deployment.
While creating a deployment, provide the below configuration to set up probes for your service.
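A sketch of such a configuration, using standard Kubernetes probe syntax (the paths, port, and timings are placeholders to adapt to your service):

```yaml
livenessProbe:
  httpGet:
    path: /healthz      # placeholder health endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
startupProbe:
  httpGet:
    path: /healthz
    port: 8080
  failureThreshold: 30   # allow up to 300s for slow startup
  periodSeconds: 10
```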
In addition to the httpGet example, TCP Probes can be configured from the Other Container Config field:
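A minimal TCP-based sketch in the same standard Kubernetes syntax (the port and timings are placeholders):

```yaml
livenessProbe:
  tcpSocket:
    port: 5432
  initialDelaySeconds: 15
  periodSeconds: 20
```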
Complete details of this feature are available in the Kubernetes documentation here.
Enable Kubernetes Health by adding a Load Balancer Listener with Health Check enabled.
Create an S3 bucket for AWS storage
Amazon Simple Storage Service (Amazon S3) is an object-storage service offering scalability, data availability, security, and performance. You can store and protect any data for data lakes, cloud-native applications, and mobile apps. Read more about S3 and its capabilities here.
To configure an S3 bucket for auditing, see the Auditing topic.
In the DuploCloud Portal, navigate to Cloud Services -> Storage.
Click the S3 tab.
Click Add. The Create an S3 Bucket pane displays.
In the Name field, enter a name for the S3 bucket.
In the Region list box, select the region. You can select Tenant Region, Default Region, or Global Region, or select Other Region to enter a custom region you have defined.
Optionally, select Enable Bucket Versioning and/or Object Lock. Both of these settings are disabled by default, unless you Enable Bucket Versioning Tenant-wide in Tenant Settings. For more information about S3 bucket versioning, see the AWS documentation.
Click Create. An S3 bucket is created.
Enable Bucket Versioning must be selected to use Object Lock.
You can configure the Tenant to enable bucket versioning by default.
In the DuploCloud Portal, navigate to Administrator -> Tenants.
Click on the Tenant name in the list.
In the Settings tab, click Add. The Add Tenant Feature pane displays.
From the Select Tenant Feature list box, select Default: Enable bucket versioning for new S3 buckets.
Select Enable.
Click Add. Bucket versioning will be enabled by default on the Create an S3 Bucket pane when creating a new S3 bucket.
With this setting configured, all new S3 buckets in the Tenant will automatically have bucket versioning enabled.
You can set specific AWS S3 bucket permissions and policies using the DuploCloud Portal. Permissions for virtual machines, Lambda functions, and containers are provisioned automatically through Instance profiles, so no access key is required in your application code. However, when coding your application, be aware of these guidelines:
Use the IAM role or Instance profile to connect to services.
Only use the AWS SDK constructor for the region.
Set S3 Bucket permissions in the DuploCloud Portal:
In the DuploCloud Portal, navigate to Cloud Services -> Storage.
Click the S3 tab.
From the Name column, select the bucket for which you want to set permissions. The S3 Bucket page for your bucket displays.
In the Settings tab, click Edit. The Edit a S3 Bucket pane displays.
From the KMS list box, select the key management system scope (AWS Default KMS Key, Tenant KMS Key, etc.).
Select permissions: Allow Public Access, Enable Access Logs, or Enable Versioning.
Select an available Bucket Policy: Require SSL/HTTPS or Allow Public Read. To select the Allow Public Read policy, you must select the Allow Public Access permission. To ignore all bucket policies for the bucket, select Ignore Bucket Policies.
Click Save. In the Details tab, your changed permissions are displayed.
Use this table to map the permission and policies options above with the YAML key/value pair.
From the S3 Bucket page, you can set bucket permissions directly in the AWS Console by clicking the >_Console icon. You have permission to configure the bucket within the AWS Console session, but no access or security-level permissions are available.
DuploCloud provides the capability to specify a custom prefix for S3.
IMPORTANT: Before you add custom prefixes for S3 buckets, contact the DuploCloud Support Team and ask them to set the ENABLEAWSRESOURCEMGMTUSINGTAGS property to True in the DuploCloud System. After this property is set, use this procedure to add custom prefixes.
In the DuploCloud Portal, navigate to Administrator -> System Settings.
Click the System Config tab.
Click Add. The Add Config pane displays.
From the Config Type list box, select AppConfig.
From the Key list box, select Prefix all S3 Bucket Names.
In the Value field, enter the custom prefix.
Click Submit.
Avoid specifying system-reserved prefixes such as duploservices.
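As a rough illustration of the custom-prefix setting, the sketch below prepends a prefix to a bucket name and rejects the reserved `duploservices` value. The helper `apply_prefix` and the prefix `acme-` are hypothetical, and simple prepending is an assumption about how DuploCloud applies the configured value.

```python
def apply_prefix(prefix, bucket_name):
    """Illustrate a custom S3 prefix being prepended to bucket names.
    'acme-' is a hypothetical prefix; 'duploservices' is system-reserved."""
    reserved = ("duploservices",)
    if prefix.rstrip("-").lower() in reserved:
        raise ValueError(f"{prefix!r} is a system-reserved prefix")
    return f"{prefix}{bucket_name}"

print(apply_prefix("acme-", "app-logs"))  # acme-app-logs
```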
Create an OpenSearch domain from the DuploCloud portal
Navigate to Cloud Services -> Analytics, select the OpenSearch tab, and click the Add button. The Add OpenSearch Domain page displays.
In the Domain Name field, create a name for the OpenSearch domain.
In the OpenSearch Version field, select the OpenSearch version you are using.
Select your needed instance size from the Data Instance Size list box.
Enter the instance count in the Data Instance Count field, and choose the correct zone(s) from the Zone list box.
Optionally, enter a key in the Encryption Key (Optional) field.
In the Storage (In Gb) field, enter the amount of storage needed.
If needed, select a Master Instance Count and Master Instance Size.
Use the toggle switches to enable encryption options (Require SSL/HTTPS, Use Latest TLS Cipher, or Enable Node-to-Node Encryption), if needed.
Optionally, use the toggle switch to Enable UltraWarm data nodes (nodes that are optimized for storing large volumes of data cost-effectively). When this option is enabled, additional fields display. Select a Warm Instance type, enter Number of warm data nodes, and Enable Cold Storage as your application requires.
Click Submit. The OpenSearch domain is created.
See the Logging documentation.
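For orientation, the form fields above correspond roughly to parameters of the AWS OpenSearch CreateDomain API. The sketch below builds such a request payload with plausible values; the helper `domain_request` and the field-to-parameter mapping comments are assumptions based on that API, not DuploCloud internals.

```python
def domain_request(name, version, instance_type, instance_count, storage_gb):
    """Approximate how the Add OpenSearch Domain form fields map onto an
    AWS OpenSearch CreateDomain-style request."""
    return {
        "DomainName": name,                          # Domain Name field
        "EngineVersion": f"OpenSearch_{version}",    # OpenSearch Version field
        "ClusterConfig": {
            "InstanceType": instance_type,           # Data Instance Size
            "InstanceCount": instance_count,         # Data Instance Count
        },
        "EBSOptions": {"EBSEnabled": True,
                       "VolumeSize": storage_gb},    # Storage (In Gb)
        "NodeToNodeEncryptionOptions": {"Enabled": True},  # Node-to-Node Encryption
        "DomainEndpointOptions": {"EnforceHTTPS": True},   # Require SSL/HTTPS
    }

req = domain_request("logs", "2.11", "m5.large.search", 2, 50)
print(req["EngineVersion"])  # OpenSearch_2.11
```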
Creating SNS Topics
In the DuploCloud Portal, navigate to Cloud Services -> App Integration.
Click Add. The Create a SNS Topic pane displays.
In the Name field, enter the SNS Topic name.
From the Encryption Key list box, select a key.
Click Create.
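Once the topic exists, applications publish to it; one SNS feature worth knowing is per-protocol message bodies. The sketch below builds the JSON body SNS expects when publishing with MessageStructure='json' (a 'default' entry plus optional per-protocol overrides). The helper name `protocol_message` and the message text are illustrative; an actual publish would go through an AWS SDK call from your Tenant.

```python
import json

def protocol_message(default_text, email_text=None, sqs_text=None):
    """Build the JSON body SNS expects when publishing with
    MessageStructure='json': a 'default' entry plus per-protocol overrides."""
    body = {"default": default_text}
    if email_text:
        body["email"] = email_text
    if sqs_text:
        body["sqs"] = sqs_text
    return json.dumps(body)

msg = protocol_message("deploy finished",
                       email_text="Deployment completed successfully.")
print(msg)
```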
SNS Topic Alerts provide a flexible and scalable means of sending notifications and alerts across different AWS services and external endpoints, allowing you to stay informed about important events and incidents happening in your AWS environment.
Connect two VPCs for communication using private IP addresses
VPC peering facilitates the transfer of data. For example, if you have more than one AWS account, you can peer the VPCs across those accounts and create a file-sharing network.
This procedure describes how to peer two VPCs, using subnet routes, and how to manage the peering connections and routes.
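One prerequisite worth checking first: AWS rejects a peering connection between VPCs whose IPv4 CIDR blocks overlap. A minimal sketch of that check using the standard library (the helper name `can_peer` and the example CIDRs are illustrative):

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """VPC peering requires non-overlapping CIDR blocks; AWS rejects a
    peering connection between VPCs whose ranges overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))    # True: safe to peer
print(can_peer("10.0.0.0/16", "10.0.128.0/17"))  # False: overlapping ranges
```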
Enable VPCs for peering:
The following steps describe how to peer two VPCs, VPC-A and VPC-B.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays. In this example, the Infrastructures are named VPC-A and VPC-B.
From the Name column, select the first Infrastructure (VPC-A) for which to enable peering. VPC-A and its defined subnet routes are displayed.
Click the Peering tab and the VPC Peering page displays.
From the Choose VPC list box, select a VPC that you want to peer with VPC-A. In this example, we select VPC-B.
Select the Is Peered checkbox.
Click Save.
Click Peer again, and repeat steps 2 through 6 above for the VPC-B Infrastructure.
Now that your two VPCs (VPC-A and VPC-B) are connected, define the subnet routes that the VPCs use for communication.
To begin, on the VPC Peering page for the first VPC that you set up (VPC-A), click Peer again. The Infrastructure page displays.
Click the Peering tab and the VPC Peering page displays.
Select the Choose VPC list box. The second VPC (VPC-B) displays in the list box and the Is Peered checkbox is selected, indicating that you previously connected the first VPC (VPC-A) with the second VPC (VPC-B) for peering.
Select the subnet routes that you want to define for VPC peering communication between the two VPCs (VPC-A and VPC-B). In this example, we select the checkboxes for subnet routes vpc-B-a-private and vpc-B-a-public.
Click Save.
Click Peer again and repeat the numbered procedure above to peer the VPC-B Infrastructure.
Confirm that your two VPCs are enabled for peering, are connected with each other, and have subnet routes defined for communication.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Click the Peering tab and the VPC Peering page displays.
Select the Choose VPC list box to confirm that VPC-B is peered with VPC-A and uses the subnet routes you defined. The name of the second VPC (VPC-B) displays in the list box and the Is Peered checkbox is selected. The subnet routes that you selected are displayed as checked.
Click Save.
To maintain accessibility, add Security Group rules for Tenant VPC zones:
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Select the Infrastructure from the Name column.
Click the Security Group Rules tab.
Click Add. The Add Tenant Security pane opens.
Define the rule for your Port Range and click Add.
Delete subnet routes that you defined for VPC peer-to-peer communication:
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Click the Peering tab. The VPC Peering page displays for VPC-A.
Select the Choose VPC list box. The peered VPC (VPC-B) displays and the Is Peered checkbox is selected along with the associated subnet routes defined for communication.
Clear the checkboxes of the subnet routes you want to remove in the Select Subnets column. Using the CTRL key, you can select multiple checkboxes and clear them with a single click. In this example, we remove the subnet route vpc-b-A-private by clearing its checkbox.
Click Save. The subnet route vpc-b-A-private has been removed for VPC-A/VPC-B peering.
Delete the peering connection between VPCs:
In the DuploCloud Portal, navigate to Administrator -> Infrastructure. The Infrastructure page displays.
Click the Peering tab. The VPC Peering page displays for VPC-A.
Select the Choose VPC list box. The peered VPC (VPC-B) displays and the Is Peered checkbox is selected along with the associated subnet routes defined for communication.
Clear the Is Peered checkbox.
Click Save. The Select Subnets list no longer displays and the peering connection between VPC-A and VPC-B has been removed.
An SNS Topic is a logical access point that acts as a communication channel. It lets you group multiple endpoints (such as Amazon SQS queues, AWS Lambda functions, HTTP/S endpoints, or email addresses).
To set alerts for SNS Topics, see the SNS Topic Alerts documentation. SNS Topics are used in event processing in conjunction with DynamoDB and Lambda, among other services. See the AWS documentation for usage information, permissions information, and examples.
VPC peering is a networking connection between two VPCs that enables traffic to be routed between them. When you use VPC peering, instances in the VPCs can communicate with each other as if they were in the same network. The VPCs can be in different regions (also known as Inter-Region VPC peering connections).
Select the Infrastructure (VPC-A) containing the first VPC that you enabled for peering.
Select one of the Infrastructures containing a VPC that you previously enabled for peering and for which you defined subnet routes. In this example, we select VPC-A.
Select one of the Infrastructures containing a VPC that you previously enabled for peering and for which you defined subnet routes. Continuing the example above, in this case, we select VPC-A.
Optionally, confirm the deletion.
Select one of the Infrastructures containing a VPC that you previously enabled for peering and for which you defined subnet routes. Continuing the example above, in this case, we select VPC-A.
Optionally, confirm the deletion.
| Edit a S3 Bucket Option | Key | Value |
|---|---|---|
| Allow Public Access | duplo-allow-public-access | true |
| Enable Access Logs | duplo-enable-access-logs | true |
| Enable Versioning | enable-versioning | true |
| Require SSL / HTTPS | duplo-policy | ssl |
| Allow Public Read | duplo-policy | publicread |
| Ignore Bucket Policies | duplo-policy | ignore |
Creating and Using a WAF in DuploCloud AWS
The creation of a Web Application Firewall (WAF) is a one-time process. Create a WAF in the public cloud Console, fetch the ID/ARN, and update the Plan in DuploCloud. Once updated, the WAF can be attached to the Load Balancer.
When you create a WAF in DuploCloud, an entry is added to the Web ACL. You use this entry in a later step to attach an ALB Load Balancer to your WAF.
In the DuploCloud Portal, navigate to Administrator -> Plans. The Plans page displays.
From the Name column, select the Plan you want to update.
Click the WAF tab.
Click Add. The Add WAF pane displays.
In the Name field, type the name of your WAF.
In the WAF ARN field, enter the Amazon Resource Name (ARN).
Optionally, enter your WAF Dashboard URL.
Click Create.
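Before pasting an ARN into the WAF ARN field, a light sanity check can catch copy-paste mistakes. The sketch below assumes the WAFv2 Web ACL ARN shape `arn:partition:wafv2:region:account:scope/webacl/name/id`; the helper name `check_waf_arn` and the example values are hypothetical. Note that only a REGIONAL-scope Web ACL can be associated with an ALB.

```python
def check_waf_arn(arn):
    """Light sanity check on a WAFv2 Web ACL ARN (assumed format:
    arn:partition:wafv2:region:account:scope/webacl/name/id)."""
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[0] != "arn" or parts[2] != "wafv2":
        return False
    resource = parts[5].split("/")
    return len(resource) == 4 and resource[1] == "webacl"

print(check_waf_arn(
    "arn:aws:wafv2:us-west-2:123456789012:regional/webacl/my-waf/a1b2c3d4"))  # True
```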
Only ALB Load Balancers can be attached to a WAF.
If you don't yet have an Application Load Balancer (ALB), create one.
In the Other Settings card, click Edit. The Other Load Balancer Settings pane displays.
From the Web ACL list box, select a WAF that you have added to DuploCloud.
Complete the other required fields in the Other Load Balancer Settings pane.
Click Update.
From the DuploCloud portal, navigate to Administrator -> Plans.
From the Name column, select the Plan associated with the WAF you want to update.
Click the WAF tab.
Update the Name and/or WAF ARN.
Update or add a WAF Dashboard URL.
Click Update. The WAF is updated.
DuploCloud also provides a WAF Dashboard through which you can analyze the traffic that is coming in and the requests that are blocked. The Dashboard can be accessed from the left navigation panel: Observability -> WAF.
Click the menu icon in the row of the existing WAF that you want to update, and select Edit. The Update WAF YOUR_WAF_NAME pane displays.
Create ElastiCache for Redis database and Memcache memory caching
In the DuploCloud Portal, navigate to Cloud Services -> Database.
Click the ElastiCache tab.
Click Add. The Create an ElastiCache page displays.
Select the ElastiCache Type and complete the required fields based on your type selection.
Optionally, select Enable Cluster Mode to scale the ElastiCache instance for performance.
Click Create.
Amazon ElastiCache Serverless is a serverless, Redis- and Memcached-compatible caching service delivering real-time, cost-optimized performance for modern applications.
Pass the cache endpoint to your application via the AWS Service.
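A common way to consume that endpoint is through an environment variable set on the container. A minimal sketch, assuming a hypothetical variable name `CACHE_ENDPOINT` and the default Redis port:

```python
import os

def redis_url(default_port=6379):
    """Assemble a connection URL from the cache endpoint exposed to the app.
    CACHE_ENDPOINT is a hypothetical environment variable name."""
    host = os.environ.get("CACHE_ENDPOINT", "localhost")
    return f"redis://{host}:{default_port}"

print(redis_url())
```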
Using Amazon SQS in DuploCloud
Amazon Simple Queue Service (Amazon SQS) offers a secure, durable, and available hosted queue to integrate and decouple distributed software systems and components. It provides a generic web service API that you can access using any programming language that AWS SDK supports.
The following Amazon SQS Queue types are supported.
Standard Queues - Standard queues support a nearly unlimited number of API calls per second, per API action (SendMessage, ReceiveMessage, or DeleteMessage). Standard queues support at-least-once message delivery. However, occasionally (because of the highly distributed architecture that allows nearly unlimited throughput), more than one copy of a message might be delivered out of order. Standard queues provide best-effort ordering, which ensures that messages are generally delivered in the same order as they're sent.
FIFO Queues - FIFO queues have all the capabilities of a Standard queue, but are designed to enhance messaging between applications when the order of operations and events is critical, or where duplicates cannot be tolerated.
In the DuploCloud portal, navigate to Cloud Services -> App Integration.
Click the SQS tab.
Click Add. The Create an SQS Queue pane displays.
Enter an SQS Queue Name (for example, my-std-queue).
Select Standard from the Queue Type list box.
Enter the Message Retention Period (in Seconds). For example, 345600 seconds equates to four days.
Enter the Visibility Timeout in seconds (for example, 30 seconds).
Click Create.
In the DuploCloud portal, navigate to Cloud Services -> App Integration.
Click the SQS tab.
Click Add. The Create an SQS Queue pane displays.
Enter an SQS Queue Name.
Select FIFO from the Queue Type list box.
Enter the Message Retention Period (in Seconds). For example, 345600 seconds equates to four days.
Enter the Visibility Timeout in seconds (for example, 30 seconds).
Optionally, select Content-based deduplication. Selecting this option tells SQS to generate message deduplication IDs from the message body. If a message with a given deduplication ID is sent successfully, subsequent messages with the same deduplication ID sent within the five-minute deduplication interval are accepted but not delivered.
Select either Queue or Message group from the Deduplication scope list box, indicating that you want deduplication processing at either the Queue level or at the Message group level, using Message group IDs.
If you selected Queue in the previous step, the only available option in the FIFO throughput limit list box is Per queue. However, if you selected Message group in the previous step, you have the option of selecting Per queue or Per message group ID. This option specifies whether the FIFO Throughput Quota applies to the FIFO Queue or per Message Group.
Click Create.
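With content-based deduplication enabled, SQS derives the deduplication ID as a SHA-256 hash of the message body (message attributes are not included). The sketch below reproduces that derivation; the helper name `content_dedup_id` and the example message are illustrative.

```python
import hashlib

def content_dedup_id(message_body):
    """With content-based deduplication enabled, SQS derives the
    deduplication ID as a SHA-256 hash of the message body
    (message attributes are not included in the hash)."""
    return hashlib.sha256(message_body.encode("utf-8")).hexdigest()

params = {
    "MessageBody": "order-1042 shipped",
    "MessageGroupId": "order-1042",  # required for every FIFO message
    # MessageDeduplicationId may be omitted when content-based
    # deduplication is on; SQS then computes the equivalent of:
}
print(content_dedup_id(params["MessageBody"]))
```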