View the full DuploCloud AWS Product Demo video.
NARRATOR:
Let's look at a product demo. In this sample scenario, we've decided to deploy a few applications to AWS.
The architects have come up with this infrastructure blueprint for one of the applications we need to manage.
In the diagram, we see a dedicated VPC, two availability zones with public and private subnets in each, deployed in the US West region.
We need to provision Docker containers on AWS EKS. In addition, functions on AWS Lambda provide a serverless model for the microservices.
A MySQL Aurora database and S3 are the data stores, and the application is exposed to the internet via a load balancer, protected by a Web Application Firewall.
Now that we've gone through the desired architecture, let's get to work by using DuploCloud's user interface to deploy the application.
We could also use DuploCloud's Terraform provider or API, but we'll focus on the UI for this demo.
We start by logging in and, as an admin, creating the network infrastructure.
Behind the scenes, the platform will auto-generate the required AWS configuration and complete the setup. This includes the VPC, availability zones, and subnets.
We then move to deploy the application infrastructure.
The application is called Invoice, and we start by creating a logical workspace known as a Tenant, where both admins and the appropriate developers can access the finance network we just created.
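Though this demo uses the UI, the same Tenant could be declared through the Terraform provider mentioned earlier. Below is a minimal sketch; the resource and attribute names are assumptions modeled on the DuploCloud provider's conventions, so check them against the provider documentation.

```hcl
# Hypothetical sketch only: resource and attribute names are assumptions
# based on the DuploCloud Terraform provider's conventions.
resource "duplocloud_tenant" "invoice" {
  account_name = "invoice" # the Tenant (logical workspace) name
  plan_id      = "finance" # binds the Tenant to the finance Infrastructure's Plan
}
```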
Next, we're going to switch to the Invoice application, where we'll create the various cloud services, starting with a virtual machine.
Notice that the user provides a high-level specification only. Behind the scenes, the platform will generate the security group, IAM role, instance profile, and other AWS configurations.
From here, we'll create an S3 bucket.
The platform auto-generates the IAM access policy for the host to access the S3 bucket.
Detailed security controls, such as encryption, public access block, and logging, among others, are applied to meet compliance requirements.
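For comparison, here is what the same bucket request might look like as a declaration. This is a hedged sketch: names such as duplocloud_s3_bucket and allow_public_access are assumed rather than verified, with the platform deriving the IAM policy and compliance controls from the high-level spec.

```hcl
# Hypothetical sketch: a high-level S3 bucket declaration. The platform,
# not the user, generates the host's IAM access policy, encryption,
# public access block, and logging from this specification.
resource "duplocloud_s3_bucket" "invoice_docs" {
  tenant_id           = duplocloud_tenant.invoice.tenant_id
  name                = "invoice-docs" # hypothetical bucket name
  allow_public_access = false          # keep the public access block enabled
  enable_access_logs  = true           # access logging for compliance
  default_encryption {
    method = "Sse" # server-side encryption at rest
  }
}
```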
We can provision a SQL database. The software understands that, in this case, it needs to generate security-group-based access policies rather than IAM policies.
It will also set up backups, encryption, and other best practices behind the scenes.
With the network and application infrastructure in place, we next move to deploying the first Docker-based microservice.
Here again, the user provides application-centric specifications while the software auto-generates Kubernetes and AWS configurations that include Deployments or StatefulSets, node ports, Ingress controls, and other such infrastructure details.
The right set of access policies, security groups, and ACLs is applied.
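As a rough illustration of how little the user declares here, the following sketch shows such a service in Terraform form; the duplocloud_duplo_service resource name, the agent_platform encoding, and the image are assumptions for illustration only.

```hcl
# Hypothetical sketch: an application-centric service declaration. The
# platform expands it into a Deployment or StatefulSet, node ports,
# Ingress, security groups, and ACLs.
resource "duplocloud_duplo_service" "invoice_api" {
  tenant_id      = duplocloud_tenant.invoice.tenant_id
  name           = "invoice-api"
  docker_image   = "example.org/invoice-api:1.0.0" # hypothetical image
  replicas       = 2
  agent_platform = 7 # assumed platform code for EKS
}
```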
It's important to note that the platform performs all these actions with no presumptions about the application topology it is being asked to deploy. Many other functions are built into the platform as well.
For example, we can see a quick view of the container logs or access the shell of the running container itself.
Just-in-time access to kubectl is provided for more granular control and debugging.
Across the platform, you can easily click Console to access the various cloud services inside of AWS.
For example, we might want to do this to upload a Lambda package to an S3 bucket, the first step in deploying our Lambda function.
Notice that the platform implicitly provides users with just-in-time AWS Console access with the right permission set.
There are no access permissions to manage manually.
After uploading the package, we specify the Lambda function parameters and the software will do the rest.
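A sketch of that step in declarative form might look like the following; the duplocloud_aws_lambda_function resource and its attributes are assumptions patterned on the provider's conventions, and the package key is a placeholder.

```hcl
# Hypothetical sketch: a Lambda function referencing the package we
# uploaded to the S3 bucket. Attribute names are assumptions.
resource "duplocloud_aws_lambda_function" "invoice_worker" {
  tenant_id   = duplocloud_tenant.invoice.tenant_id
  name        = "invoice-worker"
  runtime     = "python3.9"
  handler     = "handler.lambda_handler"
  s3_bucket   = duplocloud_s3_bucket.invoice_docs.fullname
  s3_key      = "lambda/invoice-worker.zip" # placeholder package key
  memory_size = 128 # MB
  timeout     = 30  # seconds
}
```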
With all of the infrastructure complete and the application deployed, we can run a sanity test.
Our application works.
Meanwhile, the platform has been performing other necessary infrastructure tasks.
For example, several diagnostic functions are implemented by default.
Here we take a look at a metrics dashboard that has been set up using Prometheus, Grafana, and CloudWatch, with no further user input required.
Logs are collected via Elasticsearch and segregated per Tenant and per Service.
Alerts for various infrastructure resources can easily be configured as well.
The platform is using tools like CloudWatch and Prometheus for this, allowing the user to simply specify the filters for each alert.
Similarly, there's a billing dashboard to track costs across cloud services or across applications.
Next, businesses in highly regulated industries need a security and incident management platform, and that comes built-in.
All configuration changes in the cloud infrastructure are detected, and the controls are applied.
Compliance dashboards are readily available for auditors.
Notice that all of this is set up without the user having to lift a finger.
There's an audit trail of all actions in an application-specific context.
Here we are showing the audit trail for the Invoice workspace.
Finally, there are various options to configure CI/CD for your daily code releases.
Everything that we've done through the UI today can also be done via Terraform.
Here's the script for the current setup, deploying a fully secure and compliant infrastructure underneath with a fraction of the code that would otherwise have had to be written and maintained.
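The script itself isn't reproduced in this transcript, but a skeleton of such a setup might begin like this. The provider source is DuploCloud's published registry address, while the duplocloud_infrastructure attributes shown are assumptions to verify against the provider docs.

```hcl
terraform {
  required_providers {
    duplocloud = {
      source = "duplocloud/duplocloud" # DuploCloud's Terraform registry source
    }
  }
}

# Hypothetical skeleton: one Infrastructure declaration from which the
# platform derives the VPC, subnets, and EKS cluster described earlier.
resource "duplocloud_infrastructure" "finance" {
  infra_name        = "finance"
  cloud             = 0              # assumed encoding: 0 = AWS
  region            = "us-west-2"    # US West, per the diagram
  azcount           = 2              # two availability zones
  enable_k8_cluster = true           # provision the EKS cluster
  address_prefix    = "10.30.0.0/16" # hypothetical VPC CIDR
}
```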
Autonomous DevSecOps is the future of cloud automation.
To learn more, visit duplocloud.com.
View the full DuploCloud GCP Product Demo video.
NARRATOR:
Now let's take a look at a product demo.
We have decided to deploy a few applications on Google Cloud.
We start by considering a high-level application diagram.
Here, we have a dedicated VPC located in the US West region with two subnets, each with multiple firewall rules: one subnet for internal load balancers and another for back-end apps.
The application is exposed to the internet via an HTTPS load balancer.
The apps are packaged as Docker containers to be provisioned on Google GKE.
Now that we've gone through the desired architecture, let's get to work.
We start by logging in to create the base infrastructure that includes the VPC, subnets, all networking configurations, firewall rules, and GKE cluster.
All of this is represented by the DuploCloud construct called Infrastructure.
With the simplified, high-level specification, the platform will generate the low-level details in Google Cloud.
We then move to deploy the back-end application.
The application is called Invoice and we start by creating a logical workspace or Tenant by that name in the finance Infrastructure, which we just created.
Behind the scenes, managed identities, resource groups, and other details are auto-generated.
Next, we can switch to the Invoice application.
We provision Cloud Storage by creating a bucket, and then create a Cloud Function that uses the storage bucket.
We next create a Pub/Sub topic and a Cloud Scheduler job with its target set to that Pub/Sub topic.
The Cloud Scheduler job could also trigger an HTTP endpoint.
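A hedged sketch of that pairing is shown below; the GCP resource names (duplocloud_gcp_pubsub_topic, duplocloud_gcp_scheduler_job) and the pubsub_target block are assumptions modeled on the provider's conventions, not a verified schema.

```hcl
# Hypothetical sketch: a Pub/Sub topic and a Cloud Scheduler job that
# publishes to it on a cron schedule. Names are assumptions.
resource "duplocloud_gcp_pubsub_topic" "invoice_events" {
  tenant_id = duplocloud_tenant.invoice.tenant_id
  name      = "invoice-events"
}

resource "duplocloud_gcp_scheduler_job" "nightly_invoice_run" {
  tenant_id = duplocloud_tenant.invoice.tenant_id
  name      = "nightly-invoice-run"
  schedule  = "0 2 * * *" # run at 2 AM daily (cron syntax)

  pubsub_target {
    topic_name = duplocloud_gcp_pubsub_topic.invoice_events.name
    data       = base64encode(jsonencode({ job = "invoice" })) # placeholder payload
  }
}
```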
With network, compute, and databases in place, we next move to deploying the first Docker-based microservice.
Here, again, the user provides an application-centric specification while DuploCloud auto-generates configurations that include Deployments or StatefulSets, node ports, Ingress controls, and other such infrastructure details.
We then expose the application via a load balancer.
Let's do a quick sanity test.
Our application works.
Logging is implemented via Elasticsearch and Kibana.
Here you can see the logs, automatically collected and separated by Tenant and by Service.
Note that all of this is done without any manual effort and comes out of the box.
Next, businesses in highly regulated industries need to implement an exhaustive list of compliance controls.
DuploCloud comes with a SIEM, and you can see that all these controls have automatically been met.
There's an audit trail in an application-specific context.
Here, we are showing all the changes in the Invoice Tenant.
CI/CD is a layer on top of DuploCloud, and any CI/CD system can be leveraged, with scripts that invoke DuploCloud API calls for deployments.
Finally, everything we saw via the UI can also be done via DuploCloud’s Terraform provider with a fraction of the code or expertise that would have otherwise been required.
For further information or more demos, visit the DuploCloud website at duplocloud.com.
NARRATOR:
Welcome to the first in a series of deep dives into the DuploCloud Dev and SecOps developer self-service platform.
DuploCloud deep dive videos explore how DuploCloud speeds time-to-market when creating and deploying cloud applications with a practical use-case approach.
Each DuploCloud deep dive answers five questions in 10 minutes or less about a particular feature or capability of the DuploCloud platform.
We address the problem, phrasing it in terms of a use case; DuploCloud's solution to that problem, which abstracts complexity behind a simplified UI; and the benefits of that solution to both you and your customers.
Finally, we explore DuploCloud's competitive edge over similar products and detail tangible savings you can achieve for a flat cost each year, including white glove support.
Let's get started.
Apart from the ever-increasing costs of maintaining an automated and scalable Dev and SecOps environment, there are other factors you must consider when creating a cloud-management strategy and selecting a developer-friendly self-service platform to drive it.
All DevOps workloads require dynamic and complex compute, storage, and networking configurations.
These configs must be updated, upgraded, and monitored constantly to ensure maximum uptime and minimal cost.
If you're watching this video, you probably already know the problem.
How do you create reliable, guardrail-equipped developer sandboxes that maximize your developers' valuable time while manually managing hundreds of components and configurations?
Managing SecOps is a full-time job by itself.
When you combine the complexity of implementing literally hundreds of compliance controls with the maintenance demanded by most security products, the amount of data you must manually analyze and maintain multiplies exponentially.
Finally, the cost of hiring dedicated DevOps and SecOps engineers has never been higher, and expertise in this area continues to be scarce.
For example, have you ever tried to hire an app developer with extensive DevOps experience?
For an estimate of the savings you can achieve, take our cost calculator for a spin. The results may surprise you.
How does DuploCloud drive down the cost?
Central to DuploCloud’s value proposition is the way DuploCloud replaces much of the complexity behind common DevOps tasks with a templatized approach, creating and maintaining many components for you with minimal inputs.
For example, creating a complete cloud infrastructure with hundreds of components, such as VPCs, subnets, route tables, security groups, and IAM roles, in addition to Kubernetes cluster enablement, can take just minutes with only a few clicks using DuploCloud.
At the same time, DuploCloud gives you the freedom to create a platform that is as simplified or customizable as you require.
We don't drive you toward a prescribed solution.
We reduce the time needed to implement the platform you require.
To better understand how DuploCloud is able to abstract cloud complexity, let's explore DuploCloud's architecture, including the core concepts of Infrastructure and Plan.
Here's DuploCloud's solution architect Andy Buotte for a closer look.
ANDY BUOTTE:
The user creates Infrastructures, and at the same time on the backend, when an Infrastructure is created, a Plan is created.
So there's a one-to-one relationship between an Infrastructure and a Plan.
Within a customer's DuploCloud, they can have n number of Infrastructures and that would mean that there would be n number of Plans.
The Infrastructure is a DuploCloud construct, but on the backend, at the actual infrastructure layer within AWS, Azure, or GCP, the DuploCloud Infrastructure is going to map to many different resources within their cloud accounts.
A Plan is a construct within DuploCloud that includes many of the settings and configurations that apply to the mapped infrastructure and to the Tenants within it.
Some of those settings are lower-level details that will be applied when a Tenant is created.
In a Plan, you can specify what SSL certificates are going to be used. That setting is going to apply to all Tenants that are within that Infrastructure.
Now consider the relationship between an Infrastructure and a Tenant. In this prod Infrastructure, we have a couple of different Tenants: data science, the web app, and an ETL workflow. Each of those is a Tenant living within the prod Infrastructure.
So the relationship between an Infrastructure and a Tenant is one-to-many. Typically, in a production environment, a customer may have a Tenant per application, a Tenant per use case, or a Tenant per team.
There are many different ways that the end user, the customer, can decide to utilize that boundary, and the Infrastructure is one layer of security boundary. So anything that's deployed into the non-prod Infrastructure will not have access to anything that is deployed into the prod Infrastructure, and vice versa.
These are essentially two air-gapped networks so that there's no access between the two different environments.
The Tenant is another boundary.
So the data science containers that live within this Tenant would not have any way to talk to the containers that are within the web app Tenant, and vice versa.
So it's another security boundary layer.
It's pretty common for our customers, in a development or non-prod Infrastructure, to create a Tenant per developer. The primary reasoning for that is that DuploCloud is very good at developer self-service. By giving a developer their own Tenant, they are free to create infrastructure as needed, so they're not blocked by anyone else.
They don't need to file a ticket in a DevOps queue specifying that they need an S3 bucket or an RDS instance to accomplish their software development task. They should be able to log into DuploCloud, utilize their own Tenant, and create the infrastructure that they need, and immediately start work on their software development tickets and not be blocked by any other team.
Again, the relationship between the Infrastructure and the Tenant is one-to-many, and it is very common for customers to have at least two different Infrastructures to separate production workloads from all other non-production workloads.
NARRATOR:
Let's summarize the benefits of what we've heard so far.
Creating self-service developer sandboxes in today's dynamic DevOps environment requires a low-code, no-code approach. For this self-service to be effective, however, guardrails must exist.
One such guardrail that DuploCloud provides is the DuploCloud Infrastructure: a virtual network connected to your native cloud with a fundamental set of functionalities exposed.
Further security and flexibility are provided by DuploCloud Tenants: isolated workspaces that you define according to criteria such as application area or customer for prod infrastructures, or developer or tester for non-prod infrastructures, to use just a few examples.
You can define as many Infrastructures or Tenants as you need.
Additional infrastructure customization is possible by modifying DuploCloud Plans: sets of configurable templates.
Remember that each DuploCloud Infrastructure has one Plan, but you can have many Tenants in an Infrastructure.
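To ground that multiplicity, here is a small illustrative sketch, using the Tenant names from Andy's prod example; the resource and attribute names remain assumptions about the DuploCloud Terraform provider.

```hcl
# Illustrative only: one Infrastructure implies one Plan ("prod"), and
# that Plan can back many Tenants.
resource "duplocloud_tenant" "data_science" {
  account_name = "data-science"
  plan_id      = "prod" # all three Tenants share the prod Infrastructure's Plan
}

resource "duplocloud_tenant" "web_app" {
  account_name = "web-app"
  plan_id      = "prod"
}

resource "duplocloud_tenant" "etl_workflow" {
  account_name = "etl-workflow"
  plan_id      = "prod"
}
```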
Finally, DuploCloud gives you the freedom to implement the cloud solution you require while greatly reducing your costs in both developer and maintenance cycles.
Access your native cloud provider with just-in-time access within the DuploCloud portal in a fraction of the time it takes you to log in and out of the native portal and navigate through various screens.
Harness the power of Kubernetes objects and Terraform scripts with very little hard coding thanks to DuploCloud’s templatized Kubernetes objects and DuploCloud’s Terraform provider.
How is DuploCloud’s solution more comprehensive and yet even more affordable than many competitors' offerings?
What many people don't understand about DuploCloud is that we are DevOps, SecOps, and professional services in one product for a flat rate per year.
Create comprehensive infrastructures, including Elastic Kubernetes Service clusters, in less than half an hour.
Get Services, Hosts, and load balancers up and running in only a matter of minutes.
Create Tenants to isolate workspaces for prod and test with only a few clicks.
Rest easy knowing that we ensure compliance with numerous industry standards such as SOC 2, PCI, and HIPAA.
We complete compliance questionnaires for you and support you during the audit process if needed.
White-glove support is truly white glove at DuploCloud.
We not only ensure your initial setup and customization is successful, we also offer all cloud migration services at no additional cost.
Speaking of which, what hard savings can you achieve with DuploCloud?
To name just a few, faster time-to-market for your core business apps, on-demand support from our staff of dedicated Dev and SecOps specialists, and maybe most importantly, freeing your dev staff to do what they do best: develop.
But don't take our word for it.
Here's one of our many customers, Brad Fino from Lily AI, to talk about the power of developer self-service using DuploCloud.
BRAD FINO:
Cost controls, standardization across your infrastructure.
DuploCloud is the missing link between all of those things and giving developers the access and ability to manage and maintain their infrastructure.
Without people coming to my team and saying, "Hey, Brad, can you spin up a database for us?" "Hey, Brad, can you go deploy this container for us?" No. Go do it yourself.
You have DuploCloud.
NARRATOR:
Thanks for watching this deep dive with DuploCloud. For more information, go to duplocloud.com and we look forward to seeing you back here soon.
View the whole DuploCloud Uses Infrastructure-as-Code to Stitch Together DevOps Lifecycle video.
NARRATOR:
Building and operating infrastructure in the public cloud is challenging.
It revolves around five pillars: operational excellence, security, reliability, performance, and cost optimization.
This is a broad taxonomy of cloud operations.
It starts with building a network infrastructure that includes VPCs and hybrid connectivity.
Next comes application infrastructure. This involves virtual machines and data stores, along with the right security policies and backup configurations.
On top of the app infra is app provisioning, where we see Kubernetes, serverless, and Spark clusters.
There's logging and monitoring. CI/CD makes this a repeatable process.
Compliance controls are needed across the board.
Today, certified cloud experts automate these functions by manually writing code that spans thousands of lines.
We see cargo culting, with every engineer bringing their own style, favorite programming language, and tools.
With expanding infrastructure, the size of the codebase and team grows.
The bigger the system, the harder it is to make changes, which decreases productivity.
One has to understand what code to change.
Once updated, peers need to do a code review, then tests must be performed, followed by a rollout.
A simple user request to open a port for an application requires subject matter experts and still takes days.
Lack of cloud subject matter experts is the single biggest blocker to cloud adoption.
Hiring a DevOps workforce is hard and expensive. Today, on average, to build and operate a 50-VM cloud infrastructure, companies require two DevSecOps engineers.
But did you know that inside Amazon and Microsoft, they manage millions of virtual machines with only 1000-odd people?
To understand how, imagine for a moment being assisted by a DevOps bot, which had the ability to auto-generate this infrastructure configuration based on a rules engine that combines user requirements, subject matter expertise, and principles of a well-architected framework.
Cloud operations might very well involve thousands of configurations that overwhelm humans, but in the age of AI, bots can auto-generate these.
At DuploCloud, we have built such a robot.
The bot installs in a virtual machine in your cloud account.
Users can easily interface with it through a browser or API. There, they configure their application needs in simple declarative terms, and the bot auto-generates the underlying infrastructure policies.
Whether doing a first-time deployment or ongoing operations, it manages the complete application and infrastructure lifecycle.
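To make the input-to-output ratio concrete, the sketch below shows the scale of a typical declarative request; the attribute names and image are hypothetical, and everything listed in the closing comment is what the bot, not the user, would produce.

```hcl
variable "tenant_id" {
  type        = string
  description = "Target Tenant (workspace) for the deployment"
}

# Hypothetical sketch: a handful of declarative lines from the user...
resource "duplocloud_duplo_service" "api" {
  tenant_id    = var.tenant_id
  name         = "api"
  docker_image = "example.org/api:1.0.0" # hypothetical image
  replicas     = 3
}
# ...from which the bot derives the low-level configuration: Kubernetes
# Deployment objects, node ports and Ingress, security groups, IAM roles
# and instance profiles, plus logging and monitoring wiring.
```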
Today, the bots are operational in over 25 enterprises, managing over 3000 virtual machines, and 250 applications.
Users do 5000 self-service deployments a week.
For a product demo, go to duplocloud.com/demo.
View the full DuploCloud Azure Product Demo video.
NARRATOR:
Now, let's take a look at a product demo where we deploy a microservices-based infrastructure in Azure.
Start by considering a high-level application diagram with a dedicated VNet and subnets, each with one network security group, located in the East US region.
Docker containers are to be provisioned on Azure AKS, with Azure MySQL and Azure Storage as the data stores. Secrets are stored in Azure Key Vault, and the app is exposed via App Gateway.
We show integrated logging, monitoring, and alerting.
We'll integrate with Azure Defender for monitoring of the security posture and compliance reporting.
Finally, we showcase the CI/CD pipeline using Azure DevOps.
Now that we've gone through the desired architecture, let's get to work.
We start by logging in to create the base infrastructure.
That includes the VNet, the AKS cluster, and a Log Analytics workspace, among other things.
With the simplified high-level specification, the platform will generate the low-level details in Azure.
We then move to deploy the application.
We start by creating a logical workspace or an environment that we call a Tenant.
We are creating a Tenant called Invoice in the finance infrastructure that we just created. Behind the scenes, for each Tenant, unique managed identities, resource groups, a Kubernetes namespace, an application security group, and other constructs are generated by the platform.
Next, we can switch to the Invoice application.
Here we're going to create an agent pool for Azure Kubernetes Service.
Notice that the user only needs to provide high-level specifications. Behind the scenes, the platform will configure encryption, a link to a managed identity, the Log Analytics workspace connection, and other Azure best practices.
We can then move to create a storage account. Again, the user specifies a high-level configuration while the platform generates the details and security best practices.
You can then do things like create file shares and access shared keys.
We then move on to create a SQL database. The platform will also set up backups, encryption, VNet endpoints, and other recommended practices behind the scenes.
Next, we move to deploy the first Docker-based microservice.
Here, again, the user provides an application-centric specification while the platform auto-generates Kubernetes and Azure configurations that include Deployments or StatefulSets, node ports, Ingress controls, and other such infrastructure details.
We then expose the application via a load balancer. Note that you only have to provide the high-level specifications and do not have to worry about the low-level implementation details.
Many other functions are built into the platform. For example, you can take a quick look at the tail of the container logs.
Or get into the container shell.
And you can get access to kubectl, all secured and locked down to this Tenant's namespace.
As we have completed the deployment, let's do a sanity test.
Our application works.
Important diagnostic functions are built into the platform.
For example, we can look at the metrics of various resources, which are set up automatically by orchestrating Grafana, Prometheus, Azure Monitoring, and the Log Analytics workspace.
Logging is implemented via Elasticsearch and Kibana. Here you can see the logs, automatically collected and separated by Tenant and by Service.
Note that all of this is done without any manual effort and comes out of the box.
Next, businesses in highly regulated industries need to implement an exhaustive list of compliance controls.
DuploCloud comes with a SIEM, and you can see that all these controls have automatically been met.
There's an audit trail in an application-specific context.
Here we are showing all the changes in the Invoice Tenant.
CI/CD is a layer on top of DuploCloud, and any CI/CD system can be leveraged, with scripts that invoke DuploCloud API calls for deployments.
Finally, everything we saw via the UI can also be done via DuploCloud’s Terraform provider, with a fraction of the code or expertise that would have otherwise been required.
For further information or more demos, visit the DuploCloud website at duplocloud.com.
Learn more about Cloud Infrastructure and DevOps Automation using our video tutorials.