Configure and access the kubectl shell from within the DuploCloud Portal
This feature provides an alternative to downloading a kubeconfig and installing kubectl locally. It opens a fully configured shell within a browser tab, equipped with kubectl and an associated kubeconfig. This convenient setup allows you to quickly access your Kubernetes clusters directly from the DuploCloud Portal, with no need for downloading or configuring files on your machine.
kubectl shell in the DuploCloud Platform
For EKS, kubectl is already enabled in the DuploCloud Platform. Once the EKS infrastructure is ready, you can navigate to Kubernetes -> Services in the DuploCloud platform and use the KubeCtl menu options to view the kubectl token, settings, and configuration details.
To set up the kubectl shell in DuploCloud for GKE and AKS users, see the links below.
You can also obtain Just-In-Time (JIT) access to Kubernetes by using duplo-jit. See the JIT Access documentation for detailed information about:
• Obtaining JIT access using the UI and CLI.
• Installing duplo-jit using various tools.
• Getting credentials for AWS access interactively or with an API token.
• Accessing the AWS Console.
kubectl Shell from the DuploCloud Portal
Use kubectl to access the Kubernetes cluster for your Tenant namespace.
From the Tenant list box, select the correct Tenant.
Navigate to Kubernetes -> Services.
Click on the Service name from the NAME column.
From the KubeCtl options, select KubeCtl Shell. A shell instance will launch, allowing you to interact with the Kubernetes cluster directly using kubectl commands.
See the individual cloud-provider-specific DuploCloud docs for a quick start that shows how to launch a Kubernetes cluster, deploy a simple web app, and expose it via a Load Balancer.
Using Kubectl with DuploCloud for AWS, GCP, and Azure users
kubectl is the command-line interface (CLI) for interacting with Kubernetes clusters. Use kubectl when you need more granular control and precision than what the DuploCloud portal provides. For further guidance on using kubectl, refer to the official Kubernetes documentation.
DuploCloud users have two primary options for using kubectl to interact with Kubernetes clusters:
Download the kubeconfig and install kubectl locally. This setup allows full command-line access on your own machine and is ideal if you require persistent, scriptable access to Kubernetes resources.
DuploCloud’s in-browser solution, where a preconfigured shell provides immediate access to kubectl without any local setup. This allows you to manage Kubernetes clusters from a browser tab, making it a quick and secure alternative to local configuration.
Kubernetes features in the DuploCloud Portal
DuploCloud leverages Kubernetes as a foundational building block behind many managed Services.
DuploCloud supports Kubernetes Cluster enablement on all public cloud platforms so that you can work with many Kubernetes objects and components. This includes flexibility in choosing the instance type based on workload characteristics, such as compute or memory-intensive tasks and AI/ML workloads that may benefit from GPU instances. It's recommended to have a minimum disk capacity of 40GB per host to accommodate image sizes and application data.
Use the topics in this section to implement many Kubernetes features with little or no hard coding using DuploCloud's no-code/low-code approach. This encompasses configuring autoscaling for your EKS cluster based on CPU/memory usage through Horizontal Pod Autoscaler (HPA) or Auto Scaling Groups (ASG) to efficiently scale your Pods or underlying infrastructure as needed.
For information about cloud-provider-specific Kubernetes container features, see the Kubernetes Containers documentation in the DuploCloud User Guide for your cloud provider.
Moreover, you can add allocation tags to existing nodes with running services when managing your Kubernetes clusters. This action modifies a label on the Kubernetes node, influencing future Pod scheduling without affecting currently running services. Restart running services to apply new allocation tags to them. This allows for more granular control over resource allocation and utilization within your cluster.
Additionally, DuploCloud can integrate with CloudWatch alarms via the DuploCloud UI to set up custom alerts for CPU/memory usage, ensuring proactive resource monitoring and management. This integration supports forwarding alerts to various notification systems like Sentry, PagerDuty, NewRelic, or OpsGenie for immediate action.
Set up kubectl and kubeconfig on your local computer
kubectl is the command-line tool used to interact with Kubernetes clusters. It allows you to deploy applications, inspect and manage cluster resources, and troubleshoot issues directly from your terminal. This setup guide will walk you through installing kubectl on your computer, downloading the kubeconfig file, and configuring kubectl for your environment.
kubectl
Install kubectl on your local computer:
Use these tools to install kubectl locally.
Run these commands to enable kubectl to use the downloaded kubeconfig.
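In the simplest case, pointing kubectl at the downloaded file is just a matter of exporting the KUBECONFIG environment variable. A minimal sketch (the file path shown is a placeholder, not a DuploCloud default):

```shell
# Point kubectl at the kubeconfig downloaded from DuploCloud.
# The path below is an example; use wherever you saved the file.
export KUBECONFIG="$HOME/.kube/duplocloud-kubeconfig.yaml"
echo "kubectl will use: $KUBECONFIG"
```

If you manage several clusters, you can instead merge the file into your default ~/.kube/config (for example with `kubectl config view --flatten` after setting KUBECONFIG to both paths).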
kubeconfig
The kubeconfig file is a configuration file used by kubectl to connect to a Kubernetes cluster. It contains essential information such as the cluster's API server address, authentication credentials, and context settings that define which cluster and namespace kubectl should interact with. Refer to this article for more information about kubeconfig. Download kubeconfig in one of two ways: using duploctl or from within the DuploCloud Portal:
kubeconfig using duploctl
To download kubeconfig using duploctl, follow these instructions.
kubeconfig from the DuploCloud Portal
In the DuploCloud Portal, navigate to Administrators -> Infrastructure.
In the NAME column, select the Infrastructure where you want to set up kubectl.
Click the EKS (for AWS), GKE (for GCP), or AKS (for Azure) tab. The Download Kubeconfig For Plan pane displays.
Click Download Kubeconfig to download the kubeconfig file.
If you don't have Administrator access, you can use duplo-jit to access Kubernetes. When you click Download kubeconfig, the Access to Kubernetes from your Workstation window gives you the option to install duplo-jit to access your Kubernetes cluster without obtaining permanent access keys.
Integrate Mirantis Lens with DuploCloud
Mirantis Lens, commonly referred to as Lens, is a popular open-source Kubernetes IDE (Integrated Development Environment) that simplifies Kubernetes cluster management and visualization. Lens provides an intuitive graphical user interface (GUI) to interact with and manage multiple Kubernetes clusters, local and remote, making it easier for developers and administrators to monitor, troubleshoot, and manage workloads.
Integrate Mirantis Lens with DuploCloud by following these steps:
Install the DuploCloud Client: Ensure the duploctl command-line tool is installed. If not, use the pip install duplocloud-client command to install it.
Install the Lens Client: Download and install the Lens Kubernetes IDE client from its official website.
Generate the Kubeconfig File: Using the DuploCloud UI or duploctl:
Using the DuploCloud UI.
Using duploctl, generate a kubeconfig file for the Lens connection, as follows:
Add Kubeconfig to Lens: In Lens, navigate to Catalog, click the + button to add the kubeconfig file, and configure Lens to connect to your Kubernetes cluster.
Connect to the Cluster: Lens will prompt for a login through a browser window. For private EKS cluster authentication, ensure VPN connectivity.
Disconnect from the cluster after your session to avoid repeated browser tab openings during reauthentication attempts.
Integrating Mirantis Lens with DuploCloud enhances your Kubernetes cluster management by providing a powerful graphical interface alongside the direct command-line access provided by the kubectl token.
Configure read-only access to your Kubernetes cluster in DuploCloud
Complete the following steps to configure read-only access to a Kubernetes cluster.
Save the content below as a file named service-account-readonly-setup.yaml.
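The manifest itself did not survive extraction here; a sketch of what service-account-readonly-setup.yaml typically contains is shown below. The ServiceAccount, ClusterRole, and ClusterRoleBinding names are assumptions chosen to match the duplo-readonly-token secret referenced in the next step; verify the exact rules your DuploCloud version expects.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: duplo-readonly
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: duplo-readonly
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]   # read-only verbs only
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: duplo-readonly
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: duplo-readonly
subjects:
  - kind: ServiceAccount
    name: duplo-readonly
    namespace: kube-system
---
# On Kubernetes 1.24+ a service-account token Secret must be created explicitly.
apiVersion: v1
kind: Secret
metadata:
  name: duplo-readonly-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: duplo-readonly
type: kubernetes.io/service-account-token
```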
Run kubectl apply -f service-account-readonly-setup.yaml. This creates a new service account with read-only permission.
Run kubectl -n kube-system describe secret duplo-readonly-token to fetch the token. This token can be used in DuploCloud to import the cluster as a read-only infrastructure.
With the above token, the EKS server URL, and the certificate-authority-data, create a kubeconfig as follows. The server URL and certificate-authority-data are in the cloud console under the cluster settings. The DuploCloud service account can then interact with the Kubernetes cluster with read-only permissions.
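A sketch of such a kubeconfig; CLUSTER_ENDPOINT, CA_DATA, and TOKEN are placeholders for the EKS server URL, the certificate-authority-data, and the service-account token fetched above, and the cluster/user names are hypothetical:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: duplo-cluster
    cluster:
      server: https://CLUSTER_ENDPOINT          # EKS server URL
      certificate-authority-data: CA_DATA       # from cluster settings
users:
  - name: duplo-readonly
    user:
      token: TOKEN                              # duplo-readonly-token value
contexts:
  - name: duplo-readonly@duplo-cluster
    context:
      cluster: duplo-cluster
      user: duplo-readonly
current-context: duplo-readonly@duplo-cluster
```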
Configure the kubectl shell for DuploCloud-managed AKS deployments
From the Tenant list box, select the correct Tenant.
Navigate to Kubernetes -> Services.
Click Add. The Add Service page displays.
Enter the values in the table below in the fields on the Add Service page. Accept default values for fields not specified.
Add Service page field | Value
---|---
From the DuploCloud Portal, navigate to Kubernetes -> Services.
From the NAME column, select the kubectl service you created in the previous step.
Select the Load Balancers tab, and click Configure Load Balancer.
Select type Cluster IP.
Set external and container ports to 80.
In the Health Check field, enter /duplo_auth
.
In the Backend Protocol field, select TCP.
Click Add.
In the DuploCloud Portal, navigate to Kubernetes -> Ingress, and click Add.
In the Name field, enter kubect-shell. Add a Path that defaults all traffic to the kubectl Service we created in the previous step:
Navigate to Administrator -> Systems Settings.
Select the System Config tab, and click Add. The Add Config pane displays.
From the Config Type list box, select AppConfig.
From the Key list box, select Other.
In the second Key field, enter DuploShellfqdn.
In the Value field, paste the Ingress DNS name.
Click Submit. kubectl shell access is enabled.
Set and manage Kubernetes Secrets in the DuploCloud Portal, including troubleshooting format issues.
To securely manage sensitive information in your deployment, set and reference Kubernetes secrets in the DuploCloud Portal.
In the DuploCloud Portal, navigate to Kubernetes -> Secrets. The Kubernetes Secrets page displays.
Click Add.
Fill in the fields (Secret Name, Secret Type, Secret Details, Secret Labels, and Secret Annotations).
Click Add. The Kubernetes Secret is set.
When entering a Kubernetes secret with a private key in Duplo, ensure the data is formatted as key/value pairs with all keys and values as strings. If you encounter format errors, it's likely due to non-string values or incorrect multiline string formatting. Use the | character to indicate multiline strings and manually split a single-line private key into multiple lines for compatibility. Matching the format of an existing, working secret can also help resolve these issues.
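For example, a multiline private key in Secret Details can use the YAML block scalar `|` like this (the key name and PEM contents below are placeholders, not values from this guide):

```yaml
private.key: |
  -----BEGIN PRIVATE KEY-----
  MIIEvQIBADANBgkqhkiG9w0BAQEFAASC...
  ...remaining base64 lines, wrapped at a fixed width...
  -----END PRIVATE KEY-----
```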
To enhance the security and management of Kubernetes secrets, consider the following strategies:
Utilize Centralized Secret Management Tools: Centralize the management of secrets to streamline access and control.
Implement Access Controls: Define who can access or modify secrets to minimize risk.
Regularly Rotate Secrets: Change secrets periodically to reduce the impact of potential breaches.
Audit Access Logs: Keep track of who accesses secrets and when, to detect unauthorized access or anomalies.
By integrating these practices, you can ensure a more secure and efficient handling of secrets within your Kubernetes environment.
Set up kubectl within the DuploCloud Portal by downloading the kubectl token
kubectl Token
Connect directly to your Kubernetes cluster namespace using a kubectl token. This facilitates direct interaction with your Kubernetes cluster through a command-line interface.
If you attempt to start a kubectl shell instance and receive a 503 in your web browser, ensure that the Duplo-shell Service in the Default Tenant and the Hosts that support it are running.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
From the KubeCtl list box, select KubeCtl Token. The Token window displays. Copy the contents to your clipboard.
kubectl Token for Non-Administrators
If you don't have administrator privileges, configure AWS credentials for interacting with cloud resources and download a kubectl token tied to a service account. This token is specifically for the selected Tenant. It is designed for use with a DuploCloud service account, not for human users.
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
Select the Service name from the Name column.
From the KubeCtl item list, select KubeCtl Token. The KubeCtl Token window displays.
Click Copy to copy the kubectl commands in the Token window to your clipboard.
From the KubeCtl item list, select KubeCtl Shell to launch the shell instance.
Name |
Cloud |
Platform |
Docker Image |
Set EVs from the Kubernetes ConfigMap
In Kubernetes, you can populate environment variables (EVs) from application configurations or secrets.
In the DuploCloud Portal, navigate to Kubernetes -> Config Maps.
Click Add. The Add Config Map pane displays.
Name the ConfigMap you want to create, such as my-config-map.
Add a Data key/value pair for each file in your ConfigMap, separated by a colon (:). The key is the file name, and the value is the file's contents.
Click Create.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
Select the Service you want to modify from the Name column.
Click the Actions menu and select Edit.
You can import the entire ConfigMap as environment variables or choose specific keys to import as environment variables.
The most straightforward approach is to import the entire ConfigMap as environment variables. Using this approach, your service will recognize each key in the ConfigMap defined as an environment variable.
On the Edit Service: service_name Basic Options page, click Next to navigate to the Advanced Options page.
On the Advanced Options page, in the Other Container Config field, enter the configuration YAML to import environment variables from a ConfigMap. For example, to import all environment variables from a ConfigMap named my-env-vars, use the following YAML:
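The original example block did not survive extraction; the standard Kubernetes container-spec form, assuming the my-env-vars name from above, is:

```yaml
envFrom:
  - configMapRef:
      name: my-env-vars
```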
To import from additional ConfigMaps, duplicate the YAML from lines 2 and 3 in the above example for each ConfigMap you want to import from.
Another approach is to select which keys to import from the ConfigMap as environment variables. This method gives you complete control over each environment variable and its name, but it requires more manual configuration.
On the Edit Service: service_name Basic Options page, in the Environment Variables field, enter the configuration for environment variables to import from a ConfigMap. For example, to set a single environment variable (ENV_VAR_ONE) to the value of the MY_ENV_VAR key in the my-env-vars config map, use the following YAML:
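A sketch of that Environment Variables entry, using the names from the example above (standard Kubernetes env syntax):

```yaml
- name: ENV_VAR_ONE
  valueFrom:
    configMapKeyRef:
      name: my-env-vars
      key: MY_ENV_VAR
```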
To add additional environment variables, duplicate the YAML from lines 2 through 5 in the above example for each environment variable that you want to add.
You can import Kubernetes Secrets as Environment Variables.
In the DuploCloud Portal, navigate to Kubernetes -> Secrets.
Click Add. The Add Kubernetes Secret page opens.
Create a Secret Name, such as my-env-vars.
From the Secret Type list box, select Opaque.
In the Secret Details field, add a key/value pair for each EV, separated by a colon (:). The key is the EV name, and the value is the EV value.
Click Add to create the secret.
Before you configure environment variables, you must create a DuploCloud Service.
The most straightforward approach is to import the entire Secret as environment variables. Using this approach, your service will recognize each key in the Secret defined as an EV.
On the Edit Service: service_name Basic Options page, click Next to navigate to the Advanced Options page.
On the Advanced Options page, in the Other Container Config field, enter the configuration YAML to import environment variables from a Secret. For example, to import all environment variables from a secret named my-env-vars, use the following YAML:
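The example block is missing here; the standard Kubernetes container-spec form, assuming the my-env-vars secret name from above, is:

```yaml
envFrom:
  - secretRef:
      name: my-env-vars
```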
To import from additional secrets, duplicate the YAML from lines 2 and 3 in the above example for each secret that you want to import.
Another approach is to select which keys to import from the Secret as environment variables. This method gives you complete control over each environment variable and its name, but it requires more manual configuration.
On the Edit Service: service_name Basic Options page, in the Environment Variables field, enter the configuration for specific environment variables to import from a Secret. For example, to set a single environment variable (ENV_VAR_ONE) to the value of the SECRET_ENV_VAR key in the my-env-vars secret, use the following YAML:
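A sketch of that Environment Variables entry, using the names from the example above (standard Kubernetes env syntax):

```yaml
- name: ENV_VAR_ONE
  valueFrom:
    secretKeyRef:
      name: my-env-vars
      key: SECRET_ENV_VAR
```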
To import from additional secrets, duplicate the YAML from lines 2 and 5 in the above example for each secret that you want to import.
Mounting application configuration maps and secrets as files
In Kubernetes, you can mount application configurations or secrets as files.
Before you create and mount the Kubernetes ConfigMap, you must create a DuploCloud Service.
In the DuploCloud Portal, navigate to Kubernetes -> Config Maps.
Click Add. The Add Kubernetes Config Map pane displays.
Give the ConfigMap a name, such as my-config-map.
In the Data field, add a key/value pair for each file in your config map, separated by a colon (:). The key is the file name, and the value is the file's contents.
Click Create.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
Select the Service you want to modify from the Name column.
Click the Actions menu and select Edit.
On the Edit Service: service_name Basic Options page, click Next to navigate to the Advanced Options page.
In the Volumes field on the Advanced Options page, enter the configuration YAML to mount the ConfigMap as a volume.
For example, to mount a config map named my-config-map to a directory named /app/my-config, enter the following YAML code block in the Volumes field:
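The original block is missing; a sketch along standard Kubernetes lines follows. The exact schema the DuploCloud Volumes field expects may differ by version, and config-volume is a hypothetical volume name; only my-config-map and the mount path come from the example above.

```yaml
- name: config-volume
  mountPath: /app/my-config
  configMap:
    name: my-config-map
```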
If you want to select individual ConfigMap items and specify the subpath for mounting, you can use a different configuration. For example, if you want the key named my-file-name to be mounted to /app/my-config/config-file, use the following YAML:
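A sketch of the item-selection variant, with the same caveat that the exact DuploCloud Volumes schema may differ; config-volume is a hypothetical volume name:

```yaml
- name: config-volume
  mountPath: /app/my-config/config-file
  subPath: config-file
  configMap:
    name: my-config-map
    items:
      - key: my-file-name
        path: config-file
```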
Before you create and mount a Kubernetes Secret, you must create a DuploCloud Service.
In the DuploCloud Portal, navigate to Kubernetes -> Secrets.
Click Add. The Add Kubernetes Secret pane displays.
Give the secret a name, such as my-secret-files.
Add Secret Details such as a key/value pair for each file in your secret, separated by a colon (:). The key is the file name, and the value is the file's contents.
Click Add to create the secret.
Follow the steps in creating a Kubernetes Secret, defining a Key value using the PRIVATE_KEY_FILENAME in the Secret Details field, as shown below.
Click Add to create the multi-line secret.
In the DuploCloud Portal, edit the DuploCloud Service.
On the Edit Service: service_name Basic Options page, click Next to navigate to the Advanced Options page.
In the Volumes field on the Advanced Options page, enter the configuration YAML to mount the Secret as a volume.
For example, to mount a Secret named my-secret-files to a directory named /app/my-config, enter the following YAML code block in the Volumes field:
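The original block is missing; a sketch along standard Kubernetes lines follows. The exact schema the DuploCloud Volumes field expects may differ by version, and secret-volume is a hypothetical volume name:

```yaml
- name: secret-volume
  mountPath: /app/my-config
  secret:
    secretName: my-secret-files
```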
If you want to select individual Secret items and specify the subpath for mounting, you can use a different configuration. For example, if you want the key named secret-file to be mounted to /app/my-config/config-file, use the following YAML:
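A sketch of the item-selection variant, with the same caveats as the Secret mount above (secret-volume is a hypothetical volume name):

```yaml
- name: secret-volume
  mountPath: /app/my-config/config-file
  subPath: config-file
  secret:
    secretName: my-secret-files
    items:
      - key: secret-file
        path: config-file
```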
Create Kubernetes Jobs in AWS and GCP from the DuploCloud Portal
In Kubernetes, a Job is a controller object representing a task or a set of tasks that runs until successful completion. It is designed to manage short-lived batch workloads in a Kubernetes cluster. Use a Kubernetes Job when you need to run a task or a set of tasks once, to completion, rather than continuously, as in other types of controllers like Deployments.
Refer to the Kubernetes Job documentation for use cases and examples of when to use Kubernetes Jobs.
Pods are the smallest deployable computing units that you can create and manage in Kubernetes. A Pod is a group of one or more containers with shared storage and network resources, including a specification that dictates how to run the containers. A Pod's contents are always co-located and co-scheduled and run in a shared context. A Pod models an application-specific "logical host": it contains one or more tightly coupled application containers.
In the DuploCloud Portal, you can create K8s Jobs to create one or more Pods. The Job retries until a specified number of Pods have successfully executed. A Kubernetes Job tracks successful terminations. When the specified number of successful terminations is reached, the Job is marked as completed in Kubernetes. Deleting a Kubernetes Job cleans up the Pods that it created. Suspending a Kubernetes Job deletes its active Pods until it is resumed.
You typically create one Kubernetes Job object to run one Pod to completion reliably. The Job object starts a new Pod if the first Pod fails or is deleted (for example, in case of a node hardware failure or a node reboot).
You can also use a Kubernetes Job to run multiple Pods in parallel. If you want to run a Kubernetes Job (a single task or several in parallel) on a schedule, see Kubernetes CronJobs.
In the DuploCloud Portal, select the Tenant from the Tenant list box at the top-left of the DuploCloud Portal.
Navigate to Kubernetes -> Job.
Click Add. The Add Kubernetes Job page displays.
In the Basic Options step, specify the Kubernetes Job name.
In the Container - 1 area, specify the Container Name and associated Docker Image.
In the Command field, specify the command attributes for Container - 1. Click the Info Tip icon for examples. Select and copy commands as needed.
In the Init Container - 1 area, specify the Container Name and associated Docker Image.
Click Next to open the Advanced Configuration step.
In the Other Spec Configuration field, specify the Kubernetes Job spec (in YAML) for Init Container - 1. Click the Info Tip icon for examples. Select and copy commands as needed.
Click Create. The Kubernetes Job is created and displayed on the Job page with a status of Active.
Allocation tags for Kubernetes Jobs (labels, node selectors, or node affinity) help manage resources in a Kubernetes environment. They can be useful for:
Resource Organization
Scheduling and Affinity Rules
Resource Quotas and Limits
Monitoring and Logging
Cost Allocation and Billing
You can add allocation tags in the Allocation Tag field when creating Kubernetes Jobs.
In the YAML below, the following act as allocation tags:
labels are key-value pairs used to organize, categorize, and identify resources such as Pods, Nodes, Jobs, and more. The compliance: HIPAA label applied in the example indicates that the Kubernetes Job is associated with a HIPAA compliance context.
nodeSelector specifies that the Kubernetes Job should be scheduled on specific nodes. In this example, it will be scheduled on nodes with the label security-level: high.
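The example manifest itself is missing here; a sketch of a Job matching the description above follows. The job name, container name, image, and command are hypothetical; only the compliance: HIPAA label and the security-level: high node selector come from the text.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: compliance-report          # hypothetical name
  labels:
    compliance: HIPAA              # allocation tag (label)
spec:
  template:
    spec:
      nodeSelector:
        security-level: high       # allocation tag (node selector)
      containers:
        - name: report             # hypothetical container
          image: busybox
          command: ["sh", "-c", "echo report complete"]
      restartPolicy: Never
```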
To learn more about allocation tags for Kubernetes Jobs, see the Kubernetes documentation on labels and selectors and node selectors and node affinity.
You can manage/override Kubernetes Jobs faults on a Tenant or Job level. If a Job fails, and no Tenant- or Job-level fault setting is configured, DuploCloud will generate a fault by default.
Enable or disable faults for failed Kubernetes Jobs in a specific Tenant.
From the DuploCloud Portal, navigate to Administrator -> Tenant.
Click the Tenant name in the NAME column.
Select the Settings tab, and click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Enable K8s job fault logging by default, and use the toggle switch to enable or disable the setting.
Click Add. The Jobs fault setting is added.
You can view the Jobs fault setting on the Tenants page (Navigate to Administrator -> Tenant, select the Tenant name) under the Settings tab. If the value is true, DuploCloud will generate a fault. If the value is false, DuploCloud will not generate a fault.
You can configure the faults for a specific Job when creating the Job in DuploCloud. Fault settings added this way override Tenant-level settings. On the Add Kubernetes Job page, in the Metadata Annotations field, enter:
duplocloud.net/fault/when-failed: true
or
duplocloud.net/fault/when-failed: false
When the value is true and the Job fails, DuploCloud will generate a fault. When the value is false and the Job fails, a fault will not be generated.
In the DuploCloud Portal, navigate to Kubernetes -> Job.
Select the Kubernetes Job you want to view and click the Overview, Containers, and Details tabs for more information about the Job status and history.
You can view K8s Jobs linked to Containers by clicking the Container Name on the Containers page (Kubernetes -> Containers).
You can filter container names by using the search field at the top of the page, as in this example:
In the DuploCloud Portal, navigate to Kubernetes -> Job.
Select the K8s Job you want to edit.
You can edit and modify the following fields in the DuploCloud portal:
Cleanup After Finished in Seconds
Other Spec Configuration
Metadata Annotations
Labels
In the DuploCloud Portal, navigate to Kubernetes -> Job.
Select the K8s Job you want to delete.
Using K8s Secrets with Azure Storage Accounts
Refer to the DuploCloud documentation to configure a Storage Account and File Share in Azure.
Copy the Storage Account Key and File Share Name from the DuploCloud Portal to prepare to create Kubernetes Secrets in the next step.
Navigate to Kubernetes -> Secrets. Create a Kubernetes Secret Object using an Azure Storage Account.
For more information, see Kubernetes Configs and Secrets.
While creating a deployment, provide the configuration below under Other Pod Config and Other Container Config to create and mount the storage volume for your Service. In the configuration below, shareName is the File Share name, which you can get from the Storage Account screen.
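The configuration block is missing from this page; a sketch using the standard Kubernetes azureFile volume follows. The secret name azure-storage-secret, the share name my-file-share, and the mount path are placeholders; the secret is assumed to hold the Storage Account name and key created earlier.

```yaml
# Other Pod Config (pod level): declare the Azure Files volume.
volumes:
  - name: azure-files
    azureFile:
      secretName: azure-storage-secret   # Secret with the Storage Account name and key
      shareName: my-file-share           # File Share name from the Storage Account screen
      readOnly: false
# Other Container Config (container level): mount the volume into the container.
volumeMounts:
  - name: azure-files
    mountPath: /mnt/azure
```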
Configure the kubectl shell for DuploCloud-managed GKE deployments
Enabling kubectl shell access in GCP is part of a one-time DuploCloud Portal setup process.
In the Tenant list box, select the correct Tenant.
Navigate to Kubernetes -> Nodes.
Select the Node Pool tab, and click Add.
Complete the required fields, and click Create. Once the node pool is complete, it will display on the GCP VM tab with a status of Running.
In the Tenant list box, select the correct Tenant.
Navigate to Kubernetes -> Services.
Click Add. The Add Service page displays.
From the table below, enter the values that correspond to the fields on the Add Service page. Accept default values for fields not specified.
In the Environment Variables field, enter the following YAML. Replace the flask app secret (b33d13ab-5b46-443d-a19d-asdfsd443 in this example) with a string of random numbers and letters in the same format and replace CUSTOMER_PREFIX with your customer URL prefix.
Click Next. The Advanced Options page displays.
Click Create. The Service is created.
Navigate to Kubernetes -> Services.
Select the kubectl Service from the NAME column.
Select the Load Balancers tab, and click Configure Load Balancer. The Add Load Balancer Listener pane displays.
In the Select Type list box, select K8s Cluster IP.
In the Container port and External port fields, enter 80.
In the Health Check field, enter /duplo_auth.
In the Backend Protocol list box, select TCP.
Select Advanced Kubernetes settings and Set HealthCheck annotations for Ingress.
Click Add. The Load Balancer listener is added.
In the Tenant list box, select the correct Tenant.
Navigate to Kubernetes -> Ingress.
Click Add. The Add Kubernetes Ingress page displays.
In the Ingress Name field, enter kubect-shell.
From the Ingress Controller list box, select gce.
In the Visibility list box, select Public.
In the DNS Prefix field, enter the DNS name prefix.
In the Certificate ARN list box, select the ARN added to the Plan in the Certificate for Load Balancer and Ingress step.
Click Add Rule. The Add Ingress Rule pane displays.
In the Path field, enter /.
In the Service Name list box, select the Service previously created (kubectl:80).
Click Add Rule. A rule directing all traffic to the kubectl Service is created.
On the Add Kubernetes Ingress page, click Add. The Ingress is created.
Navigate to Administrator -> Systems Settings.
Select the System Config tab, and click Add. The Add Config pane displays.
From the Config Type list box, select AppConfig.
From the Key list box, select Other.
In the second Key field, enter DuploShellfqdn.
In the Value field, paste the Ingress DNS. To find the Ingress DNS, navigate to Kubernetes -> Ingress, and copy the DNS from the DNS column.
Click Submit. kubectl shell access is enabled.
Restrict or enable read-only user access to sensitive information in Kubernetes Secrets for AWS or GCP users.
You can restrict or enable read-only user access to Kubernetes by configuring DuploCloud systems settings:
From the DuploCloud Portal, navigate to Administrator -> Systems Settings.
Select the System Config tab, and click Add. The Add Config pane displays.
In the Config Type list box, select AppConfig.
In the Key list box, select Allow Readonly Users to view Kubernetes Secrets.
In the Value field, enter True or False. A true value allows read-only users to view Kubernetes Secrets. A false value prohibits read-only users from viewing Kubernetes Secrets.
Click Submit. The setting is configured.
Schedule a Kubernetes Job in AWS and GCP by creating a Kubernetes CronJob in the DuploCloud Portal
A Kubernetes CronJob is a variant of a Kubernetes Job you can schedule to run at periodic intervals.
See the Kubernetes CronJob documentation for more information.
In the DuploCloud Portal, navigate to Kubernetes -> CronJob.
Click Add. The Add Kubernetes CronJob page displays.
In the Basic Options step, specify the Kubernetes CronJob name.
In the Schedule field, specify the Cron Schedule in Cron Format. Click the Info Tip icon for examples. When specifying a Schedule in Cron Format, ensure you separate each value with a space. For example, 0 0 * * 0 is a valid Cron Format input; 00**0 is not. See the Kubernetes documentation for detailed information about Cron Format.
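For reference, the five space-separated Cron Format fields are (standard cron notation, as used by Kubernetes):

```text
┌───────────── minute (0-59)
│ ┌─────────── hour (0-23)
│ │ ┌───────── day of the month (1-31)
│ │ │ ┌─────── month (1-12)
│ │ │ │ ┌───── day of the week (0-6, Sunday = 0)
│ │ │ │ │
0 0 * * 0      # runs at midnight every Sunday
```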
In the Container - 1 area, specify the Container Name and associated Docker Image.
In the Command field, specify the command attributes for Container - 1. Click the Info Tip icon for examples. Select and copy commands as needed.
In the Init Container - 1 area, specify the Container Name and associated Docker Image.
Click Next to open the Advanced Configuration step.
Click Create. The Kubernetes CronJob is created and displayed on the CronJob page. It will run according to the schedule you specified.
You can manage or override Kubernetes Job faults at the Tenant or CronJob level. If a CronJob fails and no Tenant- or Job-level fault setting is configured, DuploCloud generates a fault by default.
Enable or disable faults for failed Kubernetes CronJobs in a specific Tenant.
From the DuploCloud Portal, navigate to Administrator -> Tenant.
Click the Tenant name in the NAME column.
Select the Settings tab, and click Add. The Add Tenant Feature pane displays.
From the Select Feature list box, select Enable K8s job fault logging by default, and use the toggle switch to enable or disable the setting.
Click Add. The CronJobs fault setting is added.
You can view the CronJobs fault setting on the Tenants page (Navigate to Administrator -> Tenant, select the Tenant name) under the Settings tab. If the value is true, DuploCloud will generate a fault. If the value is false, DuploCloud will not generate a fault.
You can configure faults for a specific CronJob when creating the CronJob in DuploCloud. Fault settings added this way override Tenant-level settings. On the Add Kubernetes Job page, in the Metadata Annotations field, enter duplocloud.net/fault/when-failed: true or duplocloud.net/fault/when-failed: false.
When the value is true and the CronJob fails, DuploCloud will generate a fault. When the value is false and the CronJob fails, a fault will not be generated.
In the DuploCloud Portal, navigate to Kubernetes -> CronJobs.
Select the Kubernetes CronJob you want to view and click the Overview, Schedule, and Details tabs for more information about the CronJob schedule and history.
You can view Kubernetes CronJobs linked to containers by clicking the container name on the Containers page (Kubernetes -> Containers).
You can filter container names by using the search field at the top of the page, as in this example:
In the DuploCloud Portal, navigate to Kubernetes -> CronJob.
Select the Kubernetes CronJob you want to edit.
You can edit and modify the following fields in the DuploCloud Portal:
Cleanup After Finished in Seconds
Other Spec Configuration
Metadata Annotations
Labels
In the DuploCloud Portal, navigate to Kubernetes -> CronJob.
Select the Kubernetes CronJob you want to delete.
Set up Kubernetes Ingress and Load Balancer with K8s NodePort
Ingress controllers abstract the complexity of routed Kubernetes application traffic, providing a bridge between Kubernetes services and services that you define.
See the DuploCloud documentation for instructions to add Tenants, Hosts, and Services.
An administrator needs to enable the AWS Application Load Balancer controller for your Infrastructure before you can use Ingress.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure and select the Infrastructure name from the NAME column.
Select the Settings tab, and click Add. The Infra - Custom Data pane displays.
From the Setting Name list box, select Enable ALB Ingress Controller.
Select Enable.
Click Set. In the Settings tab, the Enable ALB Ingress Controller setting displays a value of true.
Add a Load Balancer listener that uses Kubernetes (K8s) NodePort.
In the DuploCloud Portal, navigate to Kubernetes -> Services.
Select your Service name from the NAME column.
Select the Load Balancers tab.
Click Configure Load Balancer. The Add Load Balancer Listener pane appears.
In the Select Type field, select K8S Node Port.
Enter the Container port and External port.
Optionally, enable Advanced Kubernetes settings.
Kubernetes Health Check and Probes are enabled by default. To manually configure Health Check settings, select Additional health check configs.
Click Add. The Load Balancer listener is displayed under LB Listeners on the Load Balancers tab.
In the Select Type field, select K8S Node Port.
Complete the Container port and External port fields.
In the Health Check field, enter the health check path (for example, /).
Complete the other required fields in the Add Load Balancer Listener pane as needed.
Click Add. The Load Balancer displays in the Load Balancers tab.
Select Kubernetes -> Ingress from the navigation pane.
Click Add. The Add Kubernetes Ingress page displays.
Enter a name in the Ingress Name field.
From the Visibility list box, select either Internal Only or Public.
From the Certificate ARN list box, select the appropriate ARN.
To expose your services over HTTP or HTTPS, enter the listener ports in the HTTP Listener Port and HTTPS Listener Port fields.
In the Target Type field, specify how you want to route traffic to Pods. You can choose between Instance (Worker Nodes) or IP (Pod IPs).
Instance (Worker Nodes) routes traffic to all EC2 instances within the cluster on the NodePort opened for your Service. To use the Instance target type, the Service must be NodePort or LoadBalancer type.
IP (Pod IPs) routes traffic directly to the Pod IP. The network plugin must use secondary IP addresses on ENI (e.g., amazon-vpc-cni-k8s) for the Pod IP to use IP mode. The Service can be of any type (e.g., ClusterIP, NodePort, or LoadBalancer). IP mode is required for sticky sessions to work with Application Load Balancers.
On the Add Kubernetes Ingress page, click Add Rule. The Add Ingress Rule pane displays.
Specify the Path (for example, /) and Path Type (Exact, Prefix, or Implementation Specific).
Optionally, enter a Host in the Host field.
Select the Service Name (the Container Port field is automatically completed), or, use the toggle switch to enable Use Container Port Name, and manually complete the Service Name and Container Port Name fields.
Click Add Rule. The rule will be displayed on the Add Kubernetes Ingress page. Repeat steps 1-7 to add additional rules.
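The rules defined above map onto a standard Kubernetes Ingress resource. The following is a hedged sketch, assuming the AWS ALB Ingress class; the Ingress name, host, and Service name are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress                                      # hypothetical
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # "Public" visibility
    alb.ingress.kubernetes.io/target-type: ip           # or "instance"
spec:
  ingressClassName: alb
  rules:
  - host: app.example.com                               # optional Host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service                            # hypothetical Service
            port:
              number: 80
```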
On the Add Kubernetes Ingress page, click Add Redirect Config. The Add Redirect Config pane displays.
In the Name field, enter a descriptive name for the Ingress redirect configuration.
In the Host field, specify the domain name for which this redirect rule will apply.
In the Path field, define the path that should trigger the redirect.
Enter the Port for the backend service or redirect.
Enter the Protocol to enforce (e.g., HTTPS).
If applicable, in the Query field, specify query parameters for the redirect.
In the Status Codes field, enter the HTTP status code for the redirect.
Optionally, in the Annotations field, enter additional configuration options specific to the Ingress controller.
Click Add to add the Kubernetes Ingress with defined rules and configurations. The Ingress you added displays in the K8S Ingress tab.
DuploCloud Platform supports defining multiple paths in Ingress.
When Ingress is configured, view details by navigating to Kubernetes -> Ingress, and selecting your Ingress from the NAME column.
Viewing Ingress Details Using curl Commands
You can also view Ingress details using curl commands. Curl commands are configured with the DNS names and paths (as defined in your Ingress rules) in the format curl http://<dns1>/<path1>. The responses from these requests show how traffic is routed according to the Ingress configuration. For example, see the following three commands and responses:
Command: curl http://ig-nev-ingress-ing-t2-1-duplopoc.net/path-x/
Response: this is service1
Command: curl http://ing-doc-ingress-ing-t2-1-duplopoc.net/path-y/
Response: this is service2
Command: curl http://ing-public-ingress-ing-t2.1.duplopoc.net/path-z/
Response: this is ING2-PUBLIC
Ingress controllers abstract the complexity of routed Kubernetes application traffic, providing a bridge between Kubernetes services and external services. They play a crucial role in managing access to the services within a Kubernetes cluster, ensuring that traffic is efficiently directed to the appropriate backend services.
See the DuploCloud documentation on creating Tenants, Hosts, and Services for your cloud provider. These foundational steps are essential for deploying and managing your applications and services within a Kubernetes environment.
Once your Service is deployed, you can add and configure Kubernetes Ingress. To make services deployed via Helm charts accessible through an ingress controller, it's necessary to adjust the values.yaml
file of your Helm chart. This involves enabling ingress and specifying the ingress resource attributes such as hostnames and paths, which are crucial for routing external traffic to your services. The steps to create Ingress for each cloud provider are slightly different, but the core principle of configuring ingress settings in the Helm chart remains consistent across environments. Ingress setup is a critical step in defining how external traffic is routed to your services, providing a scalable and secure entry point for your applications.
To ensure the smooth operation of services within a cluster, it's important to have access to and understand how to view container logs. For troubleshooting or monitoring purposes, you can easily view the logs of containers within a cluster directly from the containers' user interface. This is achieved by utilizing the context menu in the UI, offering a straightforward method to access the necessary log information. This capability is crucial for diagnosing issues, understanding service behavior, and ensuring the reliability and performance of your applications.
In summary, setting up Ingress controllers, configuring Helm charts for ingress, and understanding how to access container logs are fundamental aspects of managing and troubleshooting Kubernetes applications. By following the documented steps for creating tenants, hosts, and services, adjusting Helm chart values for ingress, and by utilizing the UI for log access, you can effectively manage traffic flow and monitor the health and performance of your services.
Add a DaemonSet for your AWS or GCP Services in DuploCloud
Kubernetes DaemonSets are controllers that manage Pod lifecycles, ensuring that one copy of a Pod runs on each node (or a selected subset of nodes) in the cluster. A DaemonSet is defined using a YAML or JSON configuration file, similar to other Kubernetes resources like Deployments or StatefulSets. See the Kubernetes DaemonSet documentation for more information.
From the DuploCloud Portal, navigate to Kubernetes -> DaemonSet, and click Add. The Add DaemonSet page displays.
In the Name field, specify a unique name for the DaemonSet.
Set the Local Tenant toggle switch. When set to true, the DaemonSet deploys only on hosts in this Tenant instead of the entire cluster.
In the Configurations field, enter the DaemonSet configurations. See the example DaemonSet JSON below.
Click Create. The DaemonSet is created and the configurations applied.
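As a reference for the Configurations field, the following is a minimal DaemonSet sketch; the agent name and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                # hypothetical name
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluent/fluentd:v1.16   # hypothetical image
```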
Creating K8s SecretProviderClass CRs in the DuploCloud Portal
The DuploCloud Portal provides the ability to create the SecretProviderClass Custom Resource (CR).
This capability allows Kubernetes (K8s) to mount secrets stored in external secrets stores into the Pods as volumes. After the volumes are attached, the data is mounted into the container’s file system.
An Administrator must set the Infrastructure setting Enable Secrets CSI Driver to True. This setting is available by navigating to Administrator -> Infrastructure, selecting your Infrastructure, and clicking Settings.
In the DuploCloud Portal, navigate to Kubernetes -> Secret Provider.
Click Add. The Add Kubernetes Secret Provider Class page displays.
Map the AWS Secrets and SSM Parameters configured in the DuploCloud Portal (Cloud Services -> App Integration) to the Parameters section of the configuration.
Optionally, use the Secret Objects field to define the desired state of the synced Kubernetes secret objects.
The following is an example SecretProviderClass configuration where AWS secrets and Kubernetes Secret Objects are configured:
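A minimal sketch of such a configuration; the class name, AWS secret name, and key names are hypothetical:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aws-secrets                    # hypothetical name
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "my-app-secret"    # secret name in AWS Secrets Manager (hypothetical)
        objectType: "secretsmanager"
  secretObjects:                       # optional: sync into a Kubernetes Secret
  - secretName: my-app-k8s-secret      # hypothetical synced secret name
    type: Opaque
    data:
    - objectName: my-app-secret
      key: password
```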
To ensure your application is using the Secrets Store CSI driver, you need to configure your deployment to reference the SecretProviderClass resource created in the previous step.
The following is an example of configuring a Pod to mount a volume, based on the SecretProviderClass created in prior steps, to retrieve secrets from Secrets Manager.
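A sketch of such a Pod spec, assuming a SecretProviderClass named aws-secrets; the Pod name and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                        # hypothetical
spec:
  containers:
  - name: app
    image: nginx:1.25                  # hypothetical image
    volumeMounts:
    - name: secrets-store
      mountPath: /mnt/secrets-store
      readOnly: true
  volumes:
  - name: secrets-store
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: "aws-secrets"   # hypothetical SecretProviderClass name
```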
It's important to note that SPC timeouts can occur due to issues related to Secret Auto Rotation, which is enabled by default. This feature checks every two (2) minutes if the secrets need to be updated from the values in AWS Secrets Manager. During a service deployment, if a secret is deleted due to a redeployment while a rotation check is attempted, it can lead to timeouts. This deletion happens because the secret is generated from the volume mount in the service Pod, and when the Pod is destroyed, the secret is also destroyed.
In the DuploCloud Portal, create a Kubernetes Service by navigating to Kubernetes -> Services and clicking Add.
Complete the required fields and click Next to display the Advanced Options page.
On the Advanced Options page, in the Cloud Credentials list box, select From Kubernetes.
Add code to the Other Pod Config field, as in the example below.
Add code for VolumeMounts in the Other Container Config field, as in the example below.
Click Create to create the Kubernetes service.
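The Other Pod Config and Other Container Config entries from the steps above can be sketched as follows; the SecretProviderClass name is hypothetical:

```yaml
# Other Pod Config (hypothetical sketch) - defines the CSI volume:
volumes:
- name: secrets-store
  csi:
    driver: secrets-store.csi.k8s.io
    readOnly: true
    volumeAttributes:
      secretProviderClass: "aws-secrets"   # hypothetical name
---
# Other Container Config (hypothetical sketch) - mounts the volume:
volumeMounts:
- name: secrets-store
  mountPath: /mnt/secrets-store
  readOnly: true
```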
Optionally, you can define secretObjects in the SecretProviderClass to specify the desired state of your synced Kubernetes secret objects.
The following is an example of how to create a SecretProviderClass
CR that syncs a secret from AWS Secrets Manager to a Kubernetes secret:
In the Other Container Config field, specify mount details with the secretobject-name
. Refer to the following example:
Set environment variables in your deployment to refer to your Kubernetes secrets.
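For example, an environment variable referencing a synced Kubernetes secret might look like this; the secret and key names are hypothetical:

```yaml
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: my-app-k8s-secret   # hypothetical synced secret name
      key: password
```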
While powerful, integrating secrets into Kubernetes deployments requires careful management to avoid issues such as SPC timeouts. Understanding the underlying mechanisms, such as Secret Auto Rotation and the lifecycle of secrets in Pod deployments, is crucial for smooth operations.
Set, mount, and manage Kubernetes ConfigMaps and Kubernetes Secrets in DuploCloud environments.
In DuploCloud environments, you can pass configurations and secrets to Kubernetes using ConfigMaps and Secrets through various strategies tailored to enhance security and management efficiency:
Setting Kubernetes Secrets directly in DuploCloud: You can create secrets under Kubernetes -> Secrets in the DuploCloud Portal. These secrets are then available in the Kubernetes environment and can be utilized as either files or environment variables. This method is straightforward, incurs no additional cost, and allows for the visibility of both secret keys and values in the DuploCloud console. For detailed instructions, see the DuploCloud documentation.
Setting Environment Variables (EVs) from a K8s ConfigMap or Secret: This traditional method continues to be supported, offering a familiar approach to those accustomed to Kubernetes' native secrets management.
Mounting ConfigMaps and Secrets as files: This method seamlessly integrates configuration data directly into your application's file system.
Additionally, DuploCloud supports advanced secrets management strategies, including:
Using AWS as the Source of Truth: By creating secrets in AWS Secrets Manager or Parameter Store and integrating them into Kubernetes secrets with SecretProviderClass, you benefit from advanced features like automatic rotation. This method displays only the secret keys in the DuploCloud console and involves a more complex setup but is ideal for centralizing secret management across DuploCloud and non-DuploCloud resources. For more on this setup, see the DuploCloud documentation.
Application Directly Reads Secrets from AWS: This approach allows the application code to fetch secrets directly from AWS Secrets Manager or Parameter Store, managed via IAM roles facilitated by DuploCloud. It provides an added layer of protection and is particularly beneficial for development environments, though it requires modifications to the application code. Implementation guidance can be found in the AWS SDK for PHP documentation.
By leveraging these strategies, DuploCloud offers flexible and secure options for managing Kubernetes ConfigMaps and Secrets, catering to various operational needs and security requirements.
Adding an Ingress for DuploCloud Google Cloud Platform Load Balancers
GCP's Ingress Controller for GKE automatically manages traffic routing to Kubernetes services, integrating Kubernetes workloads with Google Cloud's load-balancing infrastructure. It simplifies external access to applications, handling SSL termination and global load distribution.
GCP offers its own Ingress Controller, specifically created for Google Kubernetes Engine (GKE), to seamlessly integrate Kubernetes services with Google Cloud's advanced load balancing features.
Container-native load balancing on Google Cloud Platform (GCP) allows Load Balancers to directly target Kubernetes Pods instead of using a node-based proxy. This approach improves performance by enabling more efficient routing, reducing latency by eliminating extra hops, and providing better health-checking capabilities.
It leverages the network endpoint groups (NEGs) feature to ensure that traffic is directed to the appropriate container instances, enabling more granular and efficient load distribution for applications running on GKE.
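Container-native load balancing is driven by a Service annotation that GKE uses to create NEGs. A minimal sketch, with a hypothetical Service name and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service                              # hypothetical
  annotations:
    cloud.google.com/neg: '{"ingress": true}'   # enable NEG-based container-native load balancing
spec:
  type: ClusterIP
  selector:
    app: my-app                                 # hypothetical label
  ports:
  - port: 80
    targetPort: 8080
```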
Once your Tenant and Service are deployed, you are ready to add and configure a Load Balancer listener.
Add a Load Balancer listener that uses Kubernetes (K8s) ClusterIP. Kubernetes Health Check and probes are enabled by default. To specifically configure the settings for Health Check, select Additional health check configs when you add the Load Balancer.
In the DuploCloud Portal, navigate Kubernetes -> Services.
On the Services page, select the Service name from the NAME column.
Click the Load Balancers tab.
Click Configure Load Balancer. The Add Load Balancer Listener pane appears.
From the Select Type list box, select K8s Cluster IP.
Enable Advanced Kubernetes Settings and set HealthCheck annotations for Ingress, if needed. (This adds the annotations required for the Kubernetes Service to be recognized by the GKE Ingress Controller.)
Click Add. The Load Balancer listener details display in the Load Balancers tab on the Service details page.
To enable SSL, create a GCP-managed certificate resource in the application namespace.
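A GCP-managed certificate resource can be defined with GKE's ManagedCertificate kind. A minimal sketch; the certificate name and domain are hypothetical:

```yaml
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-managed-cert          # hypothetical
spec:
  domains:
  - app.example.com              # your domain
```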
Once a Service and Load Balancer are deployed, add an Ingress:
Select Kubernetes -> Ingress from the navigation pane.
Click Add. The Add Kubernetes Ingress page displays.
Enter an Ingress Name.
From the Ingress Controller list box, select GCE.
From the Visibility list box, select Internal Only or Public.
Enter your DNS prefix in the DNS Prefix field.
Select your ARN from the Certificate ARN list box.
Enter labels in the Labels field, if required.
Click Add to add the Ingress.
In the Add Kubernetes Ingress page, click Add Rule. The Add Ingress Rule pane displays.
Specify the Path (/samplePath/ in the example).
From the Service Name list box, select the Service exposed through the K8S ClusterIP (nginx-test in the example). The Container port field is completed automatically.
Click Add Rule. The rule displays on the Add Kubernetes Ingress page. Repeat the preceding steps to add additional rules.
Click Add to add the Kubernetes Ingress. The Ingress displays on the Ingress page.
The Ingress creation will take a few minutes. Once the IP is attached to the Ingress, you are ready to use your path- or host-based routing defined via Ingress.
You can view the Ingresses you have created by navigating to Kubernetes -> Ingress.
To run the Kubernetes Job to completion, you must specify a Kubernetes Init Container. Click the Add Container button and select the Add Init Container option. The Init Container - 1 area displays.
You can also view a Kubernetes Job's details by clicking the menu icon to the left of the Job name and selecting View.
Click the options menu icon to the left of the K8s Job you want to edit and select Edit.
Click the Job options menu icon to the left of the Job name and select Delete.
To run the Kubernetes CronJob to completion, you must specify a Kubernetes Init Container. Click the Add Container button and select the Add Init Container option. The Init Container - 1 area displays.
In the Other Spec Configuration field, specify the Kubernetes CronJob spec (in YAML) for Init Container - 1. Click the Info Tip icon for examples. Select and copy commands as needed.
You can also view details of a Kubernetes CronJob by clicking the menu icon to the left of the Job name and selecting View.
Click the options menu icon to the left of the Kubernetes CronJob name and select Edit.
Click the options menu icon to the left of the Kubernetes CronJob name and select Delete.
From the Ingress Controller list box, select the .
To add a Kubernetes Ingress, you must define rules. Continue to the next section to add rules to Kubernetes Ingress and complete the setup.
Before you can sync Kubernetes Secret Objects, you must .
Refer to the following example using the Environment Variables field in the Basic Options page when .
Before you can create an Ingress, you must create a DuploCloud Tenant and Service. See the DuploCloud GCP User Guide for steps on how to create Tenants and Services.
If you have a GCP-managed certificate, add the following annotations in the Annotations field to link the Ingress with your GCP-managed certificate.
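A hedged example of the annotation GKE uses to link an Ingress to a ManagedCertificate resource; the certificate name is hypothetical:

```yaml
networking.gke.io/managed-certificates: my-managed-cert   # name of your ManagedCertificate resource (hypothetical)
```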
To add a Kubernetes Ingress, you must define rules. Continue to the next section to add rules to Kubernetes Ingress and complete the Ingress setup.
Optionally, specify the Path Type and Host. In this example, we specify a Path Type of Exact. Click the Info Tip icon for more information about these optional fields.
| Field | Value |
|---|---|
| Name | kubectl |
| Cloud | Google Cloud Platform (GKE Linux) |
| Docker Image | duplocloud/shell:terraform_kubectl_v15 |
Implementing Kubernetes Lifecycle Hooks in DuploCloud
A Kubernetes Lifecycle Hook triggers events to run at different stages of a container's lifecycle. These hooks run scripts or commands before or after a specific event, such as a container being created, started, or stopped. Lifecycle hooks perform tasks like starting services, or initializing, configuring, or verifying containers.
You can implement Kubernetes Lifecycle Hooks while adding a DuploCloud EKS Service by adding YAML, like the example below, to the Other Container Config field.
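A minimal sketch of lifecycle hook YAML for the Other Container Config field; the commands are hypothetical:

```yaml
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "echo container started >> /tmp/lifecycle.log"]
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10"]   # allow in-flight requests to drain
```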
Manage and troubleshoot Services with HPA configured.
See the Autoscaling in Kubernetes topic in the AWS DuploCloud documentation.
When working with Kubernetes Services configured with Horizontal Pod Autoscaler (HPA), it's essential to understand how to manage and troubleshoot them effectively.
To stop a Service that is hung in a Running state due to HPA, you cannot directly delete Pods, as new ones are created to maintain the set number of replicas. Instead, remove the HPA configuration and adopt a static replication strategy by setting the replica count to 0. This effectively stops the Service without attempting to set minReplicas to 0, which is an invalid configuration for HPA.
If issues arise while stopping a Service with HPA configured, avoid setting minReplicas to 0; instead, remove the HPA configuration in favor of a static replication strategy. For further troubleshooting, consult the Faults menu under the DevOps section in the DuploCloud UI, where all errors are logged, facilitating efficient diagnosis and resolution.
DuploCloud is planning enhancements to the UI to improve the management of Services running with HPA configurations. These improvements include adding validation to prevent users from setting minReplicas to 0, potentially removing the Stop option for Services with HPA, and documenting the correct procedure for stopping such Services. These updates will simplify the process and prevent common configuration mistakes, ensuring a smoother experience managing Kubernetes Services with HPA.
The Kubernetes Horizontal Pod Autoscaler (HPA) is critical for managing resources efficiently in a Kubernetes environment. It automatically adjusts the number of Pods in a deployment based on observed CPU utilization or other selected metrics. For detailed guidance on autoscaling in Kubernetes, see the Autoscaling in Kubernetes topic in the AWS DuploCloud documentation.
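As a hedged illustration of the resource involved, a standard HPA manifest might look like this; the names and thresholds are hypothetical:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa          # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service            # hypothetical target
  minReplicas: 1                # must be at least 1; 0 is invalid for HPA
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale out above 70% average CPU
```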
When a Service Pod requires more memory than is available on any single node (for example, a Pod demanding 30GB on a node with a maximum of 16GB), it's essential to isolate the resource-intensive Service. By moving the high-memory workload to a larger instance with a highmem allocation tag, you can ensure that your services continue to run efficiently. This approach allows for instances with up to 64GB of memory, accommodating high-demand applications without compromising the performance of other services.
When configuring autoscaling for an EKS cluster, it's crucial to base the autoscaler on CPU/memory requests or limits to ensure optimal performance and resource utilization. This method allows for dynamic scaling that responds to the actual needs of your applications, preventing over-provisioning and resource wastage.
For advanced monitoring and alerting, DuploCloud supports the integration of its Prometheus endpoints with external Grafana instances. This capability enables you to set up custom alerts for Pod memory usage, allowing for proactive resource management and issue resolution. Whether using DuploCloud's Grafana instance or an external one, these integrations provide valuable insights into your Kubernetes environment's health and performance.
Adding an allocation group to an existing node with running Services requires careful consideration regarding Service continuity and the potential need for restarts. While the specific behavior may vary, understanding the implications of such changes is crucial for maintaining uninterrupted Service availability during scaling and resource adjustments.
By following these guidelines and leveraging DuploCloud's support for HPA, teams can effectively manage Kubernetes resources, ensuring that applications remain performant and resilient under varying loads.
Creating K8s PVCs and StorageClass constructs in the DuploCloud Portal
You can configure the Storage Class and Persistent Volume Claims (PVCs) from the DuploCloud Portal.
In the DuploCloud Portal, navigate to Kubernetes -> Storage. The Kubernetes Storage page displays. You define your Kubernetes Persistent Volume Claims and Storage Classes from this page. The Persistent Volume Claims option is selected by default.
Click Add. The Add Kubernetes Persistent Volume Claim page displays.
Define the PVC Name, Storage Class Name, Volume Name, Volume Mode, and other details such as volume Access Modes.
Click Add.
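The fields above map onto a standard PVC manifest. A minimal sketch; the names and size are hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim                     # PVC Name (hypothetical)
spec:
  storageClassName: my-storage-class   # Storage Class Name (hypothetical)
  volumeMode: Filesystem               # Volume Mode
  accessModes:
  - ReadWriteOnce                      # Access Modes
  resources:
    requests:
      storage: 10Gi
```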
On the Kubernetes Storage page, select the Storage Class option.
Click Add. The Add Kubernetes Storage Class page displays.
Define the Storage Class Name, Provisioner, Reclaim Policy, and Volume Binding Mode. Select other options, such as whether to Allow Volume Expansion.
Click Add.
If you are using K8s and PVCs to autoscale your storage groups and you encounter out-of-space conditions, simply adding new storage volumes may not resolve the issue. Instead, you must increase the size of the existing PVCs to accommodate your storage needs.
For guidance on how to perform volume expansion in Kubernetes, refer to the following resources:
In the DuploCloud Portal, navigate to Kubernetes -> Storage.
Click Add. The Add Kubernetes Storage Class page displays.
Create a Storage Class, as in the example below.
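A minimal sketch of such a Storage Class, assuming the AWS EBS CSI provisioner; the name and parameters are hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: expandable-gp3           # hypothetical name
provisioner: ebs.csi.aws.com     # assumed provisioner; varies by cloud
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true       # enables the PVC expansion described above
parameters:
  type: gp3
```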
For information on using Native Azure StorageClasses, see this section.
Adding an Ingress for DuploCloud Azure Load Balancers
Ingress controllers abstract the complexity of routed Kubernetes application traffic, providing a bridge between Kubernetes services and services that you define.
To add an SSL certificate to a service using Kubernetes Ingress, see the DuploCloud documentation for using SSL certificates with Ingress.
Before adding Load Balancers, you must create one or more Services. To add a Service, follow the steps in the Services topic. In this example, we created two Services named s1-alb and s4-nlb.
Before you add an Ingress rule, you need to enable the Ingress Controller for the application gateway.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Select the Infrastructure from the NAME column.
Select the Settings tab, and click Add. The Infra-Set Custom Data pane displays.
In the Setting Name list box, select Enable App Gateway Ingress Controller. Enable the setting and click Set. The Enable App Gateway Ingress Controller setting value is true.
In the DuploCloud Portal, navigate Kubernetes -> Services.
Select the Service from the NAME column.
Click the Load Balancers tab.
Click Configure Load Balancer. The Add Load Balancer Listener pane appears.
In the Select Type field, select K8S Node Port.
In the Health Check field, add the Kubernetes Health Check URL for this container.
Complete the other fields in the Add Load Balancer Listener and click Add.
Using Kubernetes Health Check allows AKS's Application Load Balancer to determine whether your Service is running properly.
In the DuploCloud Portal, navigate to Kubernetes -> Ingress.
Click Add. The Add Kubernetes Ingress page displays.
Supply the Ingress Name, select the Ingress Controller (in this example, azure-application-gateway), and set Visibility to Public.
In the DNS Prefix field, provide the DNS prefix to expose services.
From the Certificate ARN list box, select the certificate ARN to expose services over HTTPS.
Optionally, in the Port Override field, select a port to override. This field allows configuring frontend listeners to use ports other than 80/443 for HTTP/HTTPS. If you use a port other than 80, you must define an additional Security Group rule for that port. See this section for more information.
On the Add Kubernetes Ingress page, click Add Rule. The Add Ingress Rule pane displays.
Enter a Path.
In the Path Type list box, select Exact, Prefix, or Implementation Specific.
In the Service Name field, select the Service (s1-alb:80 in this example).
Click Add Rule to add the Ingress rule.
Repeat steps 1-5 to add additional rules. In this example, we added a second rule for Service s4-nlb:80.
On the Add Kubernetes Ingress page, click Add to create the Ingress.
The DuploCloud Platform supports defining multiple rules/paths in Ingress.
Port 80 is configured by default when adding Ingress. If you want to use a custom port number, add a security group rule for the custom port.
In the DuploCloud Portal, navigate to Administrator -> Infrastructure.
Select the Infrastructure from the NAME column.
Select the Security Group Rules tab.
Click Add. The Add Infrastructure Security pane displays.
Define the rule and click Add. The rule is added to the Security Group Rules list.
When Ingress is configured, view details by navigating to Kubernetes -> Ingress, and selecting your Ingress from the NAME column.
Viewing Ingress Details Using curl Commands
You can also view Ingress details using curl commands. Curl commands are configured with the DNS names and paths (as defined in your Ingress rules) in the format curl http://<dns1>/<path1>. The responses from these requests show how traffic is routed according to the Ingress configuration. For example, see the following three commands and responses:
Command: curl http://ig-nev-ingress-ing-t2-1.duplopoc.net/path1/
Response: this is IG-NEV
Command: curl http://ing-doc-ingress-ing-t2-1.duplopoc.net/path2/
Response: this is ING-DOC
Command: curl http://ing-public-ingress-ing-t2.1.duplopoc.net/path3/
Response: this is ING2-PUBLIC
An Azure Application Gateway SSL policy allows you to configure the security settings for SSL/TLS connections between clients and the application gateway. By defining an SSL policy, you can specify which protocols and cipher suites to use, enhancing security, meeting compliance requirements, and optimizing performance. Configuration can be done via the Azure portal, Azure CLI, or ARM templates. See the Microsoft documentation for more information.
To use an Application Gateway SSL policy with Ingress for your ALB Load Balancer, follow these steps:
From the DuploCloud Portal, navigate to Administrator -> Systems Settings.
Select the System Config tab, and click Add. The Add Config pane displays.
In the Config Type list box, select AppConfig.
In the Key field, enter AZURE_APP_GATEWAY_SSL_POLICY.
In the Value field, enter your Azure Application Gateway SSL Policy (for example AppGwSslPolicy20220101).
Click Submit.
Support for specifying Kubernetes YAML for Pod Toleration
DuploCloud supports the customization of many Kubernetes (K8s) YAML operators, such as tolerations. If you are using a Docker container, you can specify the tolerations operator configuration in the Other Container Config field of the container definition in DuploCloud.
In the DuploCloud Portal, navigate to Kubernetes -> Services. The Services page displays.
Select the Service from the NAME column.
From the Actions menu, select Edit. The Edit Service page displays.
Click Next to proceed to the Advanced Options page.
In the Other Container Config field, add the tolerations operator YAML you have customized for your container.
Click Update. Your container is updated with your custom specifications for the tolerations operator.
Example tolerations operator YAML
In this example:
If a node has a taint matching key1 with the NoSchedule effect, the toleration allows the Pod to be scheduled on that node despite the taint.
If a Pod is running on a node and a taint matching example-key with the NoExecute effect is added, the Pod stays bound to the node for 6000 seconds and is then evicted. If the taint is removed before that time, the Pod is not evicted.
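The toleration YAML described above might look like the following sketch; the operator values are assumptions:

```yaml
tolerations:
- key: "key1"
  operator: "Exists"         # assumed; could also be Equal with a value
  effect: "NoSchedule"
- key: "example-key"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 6000    # Pod stays bound for 6000 seconds after the taint appears
```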
Use Azure's built-in Kubernetes StorageClass constructs
AKS provides a few out-of-the-box StorageClass objects. To mount the built-in storage classes, configure the Volumes field as shown below when adding a Service.
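As a sketch of what such a mount might reference: AKS ships built-in StorageClass objects such as managed-csi, and a PVC using one (names and size are hypothetical) looks like this:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk     # hypothetical
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-csi   # AKS built-in StorageClass
  resources:
    requests:
      storage: 5Gi
```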