Best Practices for Containers

As more enterprise IT operations organizations move to container technology, IT administrators are having to take on DevOps roles to deal with the container orchestration systems running in production. These include systems like Docker Swarm, Apache Mesos, and Kubernetes (originally developed at Google), as well as a handful of lesser-known players. Container technology has become a reliable way to quickly package, deploy, and run application workloads without concern for the underlying physical hardware or operating systems.

Just as important as the containers themselves is the container orchestration technology. These products let you start and stop containers through scheduling, and they let you scale container usage through managed container clusters. Enterprise data centers have come to expect 99.99% uptime, and introducing new technologies puts a lot of pressure on the individuals expected to run them.


This is where the orchestration of containers plays an important role, and why you should follow best practices when orchestrating containers in your environment. Best practices for deploying other applications in your current environment won't do the trick, as containers are a different animal unto themselves. The following is a list of the most important best practices to follow for container orchestration in your production environment.

  1. Design a clear path from Development to Production

The key to a smooth transition to production with container orchestration is architecting the path from development to production and having a staging platform in place. Containers require testing and validation. The staging environment should be an exact copy of your production environment; it allows you to verify containers and ensure they are stable before moving them to production.
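The dev-to-production path described above can be sketched as a simple promotion pipeline, in which an image advances one stage at a time and only after passing that stage's validation checks. The stage names and checks below are illustrative assumptions, not a specific tool's API:

```python
# Hypothetical promotion pipeline: an image tag moves from dev to staging
# to production only after passing validation at each stage. Stage names
# and the validation checks are illustrative assumptions.

STAGES = ["dev", "staging", "production"]

def validate(image_tag, stage, checks):
    """Run the stage's checks; every check must pass before promotion."""
    return all(check(image_tag) for check in checks.get(stage, []))

def promote(image_tag, checks):
    """Walk the image through each stage, stopping at the first failure."""
    reached = []
    for stage in STAGES:
        if not validate(image_tag, stage, checks):
            break
        reached.append(stage)
    return reached

# Example: a container that passes dev and staging but fails a production check
checks = {
    "dev": [lambda tag: True],                        # e.g. unit tests passed
    "staging": [lambda tag: tag.endswith(":1.2.0")],  # e.g. version pin check
    "production": [lambda tag: False],                # simulated failed smoke test
}
print(promote("myapp:1.2.0", checks))  # ['dev', 'staging']
```

Because staging mirrors production, a failure at the staging gate is a strong signal the container would have failed in production.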

  2. Automate Reporting of Issues Discovered in Container Orchestration Production

As with any technology, things can go wrong. Now that production and development are linked (think DevOps), technology teams need to understand what is happening within the container orchestration system. Several monitoring and management tools can be configured to perform automated, continuous reporting of issues. These reports let developers see issues and react with fixes that are continuously tested, integrated, and then deployed, ensuring faster resolution of any issues that are discovered.
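The automated-reporting idea can be sketched in a few lines: scan container events and surface only the failures so developers see them immediately. The event fields and status values here are assumptions; a real setup would pull them from your orchestrator's API or monitoring tool:

```python
# Sketch of automated issue reporting: filter container events down to
# failures and emit one report line per failure. Event structure is an
# illustrative assumption, not any specific orchestrator's schema.

def report_issues(events):
    """Return one human-readable report line per failed container event."""
    return [
        f"{e['container']}: {e['reason']}"
        for e in events
        if e["status"] == "failed"
    ]

events = [
    {"container": "web-1", "status": "running", "reason": ""},
    {"container": "worker-2", "status": "failed", "reason": "OOMKilled"},
    {"container": "db-0", "status": "failed", "reason": "CrashLoopBackOff"},
]
for line in report_issues(events):
    print(line)
```

In practice the output of a function like this would feed a ticketing system or chat channel rather than stdout, closing the loop between production and development.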

  3. Continuously Monitor

Container orchestration systems, whether in the cloud or on premises, require constant monitoring. Luckily, a number of monitoring and management tools are available to watch over the containers. These tools can be configured to take automatic action based on their findings and the policies you've established within the system. They can also use past container behavior to predict future failures, helping your team give the systems the attention they need before disaster strikes.
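A policy-driven monitor of the kind described above can be sketched as a function that looks at a container's recent restart history and decides on an action. The threshold, window size, and action names are illustrative assumptions:

```python
# Sketch of policy-based monitoring with a simple predictive element:
# a rising restart trend triggers an alert before the hard threshold
# that forces a reschedule. All policy values are illustrative assumptions.

def decide_action(restart_history, threshold=3):
    """Decide an automatic action from the last few restart-count samples."""
    recent = restart_history[-3:]
    avg = sum(recent) / len(recent)
    if avg >= threshold:
        return "reschedule"   # container is unstable: move it elsewhere
    if recent[-1] > recent[0]:
        return "alert"        # restarts trending up: warn the team early
    return "none"

print(decide_action([0, 1, 5]))   # trending up -> 'alert'
print(decide_action([4, 4, 4]))   # over threshold -> 'reschedule'
```

The "alert" branch is the predictive piece: it fires on the trend, before the average crosses the threshold that would force a reschedule.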

  4. Configure Automated Data Protection and Disaster Recovery

Containers, including containers that work within orchestration systems, store data either within the container where the application is running or, more likely, via an external database that may be container-based, but typically is not. No matter where the data exists, it must be replicated to secondary and independent storage systems and protected in some way.

Users who manage container orchestration in production without a good understanding of where the data is, or how it needs to be backed up, preserved, and made available for restoration, are heading for disaster. These requirements must be dealt with whether you're on the public cloud or not. Don't rely solely on the default disaster recovery systems built into public cloud platforms.

Users should also be able to perform backup and recovery themselves, and security controls need to be set up for appropriate access per the customer's policies. Test these controls routinely; when security and governance controls are misconfigured, the IT team can end up overrun with restore requests from non-ops staff.
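One concrete way to exercise a backup routinely is a restore drill that proves the replica actually matches the primary. The records and hashing scheme below are illustrative assumptions, a minimal sketch of the idea rather than any particular backup tool:

```python
# Sketch of a restore drill: fingerprint the primary and replica data
# sets and compare. A matching digest means the backup holds the same
# records; a mismatch means it is stale or incomplete. Record format
# is an illustrative assumption.

import hashlib

def fingerprint(records):
    """Order-independent digest of a collection of records."""
    digests = sorted(hashlib.sha256(r.encode()).hexdigest() for r in records)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

primary = ["order:1001", "order:1002", "order:1003"]
replica = ["order:1002", "order:1001", "order:1003"]  # same data, different order
stale   = ["order:1001", "order:1002"]                # missing a record

print(fingerprint(primary) == fingerprint(replica))  # True
print(fingerprint(primary) == fingerprint(stale))    # False
```

Running a check like this on a schedule catches silent replication failures long before a real restore is needed.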

  5. Plan for Capacity in Production

No matter where your systems reside, whether on premises or in the cloud, you need to emphasize capacity planning for production. Development teams need to follow guidelines while planning for production. Understanding the current capacity requirements, in terms of the infrastructure needed by the container orchestration systems, is an essential part of this process. This includes servers, storage, network, databases, and so on. You'll also need to understand near-term as well as long-term capacity requirements for these systems.

The key to all of this is understanding the interrelationship between the containers, the container orchestration, and any supporting systems (such as databases), and their impact on capacity. Model the capacity of the servers in terms of storage, networking, security, and so on, by configuring these servers virtually within a public cloud provider or physically using traditional methods.
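A first-order capacity model can be as simple as compounding the container count by a growth rate and multiplying by per-container resource needs. Every number below (fleet size, growth rate, per-container CPU and storage) is an illustrative assumption to show the shape of the calculation:

```python
# Sketch of simple capacity modeling: project the CPU and storage a
# container fleet will need at near-, mid-, and long-term horizons.
# All figures are illustrative assumptions, not sizing guidance.

def project(containers, growth_rate, periods, cpu_per=0.5, gb_per=2.0):
    """Compound the container count per period, then size the fleet."""
    count = containers * (1 + growth_rate) ** periods
    return {
        "containers": round(count),
        "cpu_cores": round(count * cpu_per, 1),
        "storage_gb": round(count * gb_per, 1),
    }

# 40 containers today, 5% monthly growth, three planning horizons
for label, months in [("near-term", 3), ("mid-term", 12), ("long-term", 36)]:
    print(label, project(containers=40, growth_rate=0.05, periods=months))
```

Even a rough model like this makes the budget conversation concrete: the long-term row shows what unchecked growth costs, which is exactly the forecasting the cloud's elasticity does not do for you.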

The takeaway is that near-, mid-, and long-term growth plans need to be considered, and capacity should be modeled around the resources forecast for that growth. While having your systems in the public cloud allows for more flexibility and easier future growth, it doesn't negate the need for capacity planning to ensure your budgets will allow for it.

While container orchestration is still in its infancy, containerized architectures are quickly becoming a staple in DevOps workflows. The benefits of container orchestration extend beyond ensuring business continuity and accelerating time to market. Container orchestration tools enable interaction between containers through well-defined interfaces; together with the modular container model, this serves as the backbone and ideal deployment vehicle for microservice architectures. Microservice architectures are enabling companies to bid farewell to legacy architecture and digitize their business models, products, and infrastructure, a shift that resolves complexity and achieves success.

George Chanady

Author Bio: George Chanady is a Sr. Solutions Architect for CCSI, with a demonstrated history of working in the information technology and services industry. He is a strong technical professional skilled in Storage Area Network (SAN), Domain Name System (DNS), data center, BladeCenter, and cloud applications. His current certifications include MCSA and AWS Certified Solutions Architect.

The post Best Practices for Containers appeared first on CCSI.
