Managing cloud services: EKS, Kubernetes, and what to watch out for...
As I alluded to in my previous post, deploying an application the old way can lead to issues such as configuration drift. Containers are now the backbone of the cloud, and deploying an application in a container eliminates these issues. Kubernetes, a popular container orchestration tool, simplifies container deployment, scaling, and management. Amazon EKS is an excellent solution for those who want to use Kubernetes while avoiding the complexity of the underlying infrastructure. After all, if you managed Kubernetes yourself, you would have to provide the physical servers and compute power, right? Unless you work with servers directly, chances are you haven't seen one in a long time.
My first title was destined to be "Managing Managed Services": even though managed services alleviate most of the complexity, Kubernetes (managed or not) still has its difficulties. I've spent about two years at Ahlsell, where we have a solid team working with AKS, so I'd like to share some common issues and thoughts on how to prepare for them:
Running containers in a shared environment requires attention to security. Most of what applies to Amazon EKS is also true of Azure AKS, but this post focuses on EKS, so some pointers might differ.
EKS includes security features such as role-based access control (RBAC), network policies, and security groups. It helps to keep projects in separate namespaces; the isolation we learned about in my previous post really shines here, because you don't want one container accessing another container's secrets. We solved this with a combination of managed identities and service principals to authenticate and fetch secrets: each Kubernetes namespace gets its own identity with RBAC access to where its secrets are stored.
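As a minimal sketch of that per-namespace pattern (namespace, role, and service account names here are hypothetical), a Role scoped to a single namespace can limit secret access to that namespace's own workloads:

```yaml
# Role granting read access to Secrets only within the "team-a" namespace.
# Namespace, names, and the service account are illustrative placeholders.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: team-a
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
---
# Bind the Role to the namespace's service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: team-a
subjects:
  - kind: ServiceAccount
    name: team-a-app
    namespace: team-a
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```

Because a Role (unlike a ClusterRole) is namespaced, a pod in one namespace cannot read secrets from another even if it tries.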
Kubernetes can consume many resources, and overprovisioning can lead to unnecessary costs. Therefore, EKS includes features such as auto-scaling, which automatically adjusts the resources available to the containers based on demand.
You can also play with the resources provided in the deployment template.
```yaml
resources:
  requests:
    memory: "1000Mi"
    cpu: "100m"
```
(code taken from a Kubernetes deployment template)
Looking at the code above, does the container really need 1000Mi of memory and 100m of CPU?
Give the container some resources and check the performance: fewer resources than needed will leave the application slow, unreliable, or unusable; too many resources, on the other hand, won't hurt the application, but they won't make it better either, so it's a trial-and-error process. Mitigate resource utilization issues by trying the application with fewer or more resources and benchmarking the performance. Also, define resource limits and use auto-scaling to scale up and down as needed. I don't want to spell it out, but your bill will increase as you consume more resources.
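The scale-up-and-down part can be expressed as a HorizontalPodAutoscaler. As a sketch (the deployment name and thresholds are hypothetical examples, not values from our setup):

```yaml
# HorizontalPodAutoscaler scaling a hypothetical "web" deployment
# between 2 and 10 replicas, targeting ~70% average CPU utilization
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that the HPA computes utilization against the `requests` you set in the deployment template, which is another reason to benchmark those values instead of guessing.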
Kubernetes is a complex tool, and managing it requires knowledge and expertise. EKS simplifies Kubernetes deployment, but it is still vital to understand Kubernetes concepts and best practices. For example, if the team lacks experience, you might spend days or weeks troubleshooting an issue on your dev cluster only to find out that a firewall rule was updated and is now blocking your traffic from the cloud to the on-prem databases. Of course, it is essential to communicate infra/network changes, but for a team working with Kubernetes, it is crucial to invest in training and knowledge sharing.
Monitoring Kubernetes clusters is critical to ensure the reliability and availability of applications. EKS integrates with monitoring features such as Amazon CloudWatch and the Kubernetes dashboard. Sometimes it is easier to use what you know: if you have used Prometheus and Grafana before, do it; run a daemon collecting your metrics and visualize them in Grafana. Shipping full logs from every Kafka stream is way too costly, but Prometheus metrics are tiny by comparison. Defining appropriate monitoring policies and regularly reviewing and analyzing logs and metrics are essential to mitigate monitoring issues. Also, alerts... create alerts to notify you when something unusual is happening.
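For the alerting part, here is a minimal Prometheus alerting rule sketch (the threshold, duration, and labels are hypothetical; tune them to your workloads):

```yaml
# Prometheus alerting rule: fire when a container's working-set memory
# stays above 90% of its configured limit for 10 minutes.
# Threshold and severity label are illustrative assumptions.
groups:
  - name: container-alerts
    rules:
      - alert: ContainerMemoryHigh
        expr: |
          container_memory_working_set_bytes
            / container_spec_memory_limit_bytes > 0.9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Container memory usage is above 90% of its limit"
```

An alert like this catches a pod drifting toward an OOM kill well before users notice anything.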
In conclusion, Amazon EKS simplifies container deployment, scaling, and management. However, knowing the potential difficulties and how to mitigate them is crucial. By following best practices, defining appropriate policies, and investing in training and education, "any" managed Kubernetes can be an excellent solution for deploying and managing container applications.
As for the important part:
The importance of automation and DevOps practices can significantly improve the efficiency and scalability of EKS deployments. Automation can streamline the deployment and management of applications, reducing the risk of human error and enabling more frequent updates and releases. DevOps practices such as CI/CD can also accelerate the software development lifecycle, enabling faster iteration and innovation. By adopting automation and DevOps practices, teams can leverage the full potential of any platform and drive greater agility and competitiveness in the cloud.
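As a rough sketch of what such a CI/CD pipeline could look like (every name here, the repository, cluster, registry, and deployment, is a placeholder, not our actual setup):

```yaml
# Hypothetical GitHub Actions workflow: build a container image and
# roll it out to an EKS cluster on every push to main.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build and push the image, tagged with the commit SHA
      - name: Build and push image
        run: |
          docker build -t "$REGISTRY/my-app:$GITHUB_SHA" .
          docker push "$REGISTRY/my-app:$GITHUB_SHA"
      # Point kubectl at the cluster and trigger a rolling update
      - name: Deploy to EKS
        run: |
          aws eks update-kubeconfig --name my-cluster
          kubectl set image deployment/my-app my-app="$REGISTRY/my-app:$GITHUB_SHA"
```

Even a pipeline this small removes the "works on my machine" deploy step and gives you an audit trail of what went out and when.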
Do you have any lessons learned, aha moments or anything similar that you want to share with me?