What Do You Need a Dynamic Infrastructure For?
Terms such as "dynamic infrastructure", "infrastructure as code", "Docker", and "DevOps" are becoming increasingly important in the corporate world. In the following articles, we want to shed some light on these concepts. This article deals with the concept of a dynamic infrastructure in more detail. In the second and third parts, we will discuss the terms "IaC", "DevOps", "Docker", and "Amazon Web Services" and show how they relate to dynamic infrastructures.
Overview of Important Terms:
- Dynamic Infrastructure: Allows servers to be started and stopped automatically, depending on how busy the application is.
- Infrastructure as Code (IaC): The practice of provisioning and managing an IT infrastructure through code rather than manual processes. Also called programmable infrastructure.
- Docker: Software that allows applications to be placed in containers so that they can run in isolation from other applications and operating systems.
- Amazon Web Services: A cloud computing platform that provides fast and flexible access to IT resources such as servers, storage, or databases.
- DevOps: A compound of "Development" and "Operations". As the word indicates, this corporate culture aims to achieve more efficient collaboration between software development and IT operations.
What do I need a dynamic infrastructure for? Let's take a look at the utilization of an application. We’ll assume that on an ordinary morning, two servers are enough to handle the load. However, it is rarely the case that the load is constant throughout the day.
Often, the number of users increases during the day and it becomes necessary to start additional servers. For example, twelve servers may be running at noon. During the night, on the other hand, we can expect a low usage, so that the number of servers can be reduced to a minimum, let's say two servers.
Important point: why not reduce to one server? For security reasons, the application should be located at two different data center locations. If one data center has a problem, there is backup in at least one other location.
The number of running servers is therefore proportional to the number of (active) users. Depending on the application, peak loads might occur in the morning and late afternoon. However, a user does not notice what is happening at the IT infrastructure level when servers are started or stopped. The application is just always there.
In the past, static environments were most common. Their disadvantage is obvious: server capacity must be sized for the expected peak load, yet much of this maximum capacity is rarely used. A static environment means paying for peak capacity 24 hours a day, year round.
And even if you orient the system to run at peak load capacity, this can lead to failures. The biggest enemy of a static infrastructure is success, in this case, the increase in user numbers. To save costs, a static infrastructure is set up without much reserve capacity. Any successful marketing campaign (or whatever increases use and therefore the burden on the system) means a latent or acute threat to the infrastructure. The more successful the marketing is, the slower the web application becomes until the servers no longer work at all.
A well-made dynamic infrastructure, on the other hand, manages these peaks with ease and without performance degradation. The "conflict of success" is avoided. The servers are not started and stopped manually. There is no fixed schedule. The choreography of starting and shutting down additional servers is defined in what is called a “metric”.
The metric defines when additional servers are prepared for launch and when they are actually allowed on the "stage". A cold boot of a new server takes one to two minutes. Fine-tuning this metric is the key to consistent performance at minimum cost. Once the metric is refined, the environment dynamically manages itself. Regular checks by qualified administrators, however, are still recommended.
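The idea behind such a metric can be sketched in a few lines of Python. This is an illustrative simplification, not a real cloud API: the function name, thresholds, and minimum/maximum values here are assumptions chosen to match the examples in this article (two servers at night, up to thirty before Christmas).

```python
def desired_server_count(current_count, avg_cpu_percent,
                         min_servers=2, max_servers=30,
                         scale_up_at=70, scale_down_at=30):
    """Illustrative scaling metric: add a server when average CPU
    utilization is high, remove one when it is low, and always keep
    at least two servers for redundancy across data centers."""
    if avg_cpu_percent > scale_up_at and current_count < max_servers:
        return current_count + 1
    if avg_cpu_percent < scale_down_at and current_count > min_servers:
        return current_count - 1
    return current_count
```

A real platform (such as AWS Auto Scaling) evaluates rules like this continuously against monitoring data; fine-tuning the thresholds is exactly the tuning work described above.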
The following advantages can be derived from this automated process of a dynamic infrastructure:
- Cost Savings: Cloud services like Amazon Web Services typically bill servers by the minute. This saves money in times of low usage. If necessary, however, enough power can be made available. For example, in the weeks leading up to Christmas, there could be 30 servers at peak times—whatever is needed for consistent performance and a great user experience.
- Regional hosting: For web applications that run globally, it makes sense to have the infrastructure closer to the users because the Internet connection speed can drop over long distances. For example, for users in Asia, a web application running on servers in Europe or America may feel slow. In this case, it makes sense to set up servers in an Asian data center. The infrastructure is typically globally networked.
- Lower error rate: If one of the application servers has a problem ("freezes"), no manual intervention is required. Using self-healing features, the server is automatically taken out of service, reset, reinstalled, reconfigured, and put back into service. The hosting environment repairs itself without manual work by server administrators and without users noticing a problem. Redundancy can also be achieved with dynamic hosting: servers are automatically started in another data center if there is a problem in the main data center.
- Simplified update installation: Updates are very similar to the self-healing process. An update means simply telling the application servers that a new software version is available. Each server in turn is removed from the service, reset, and reinstalled with the new software version. In many cases, it is not necessary to display a maintenance page during the installation of updates. Updates can be installed during normal operation.
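The rolling-update procedure described in the last point can be sketched as a simple loop. The helper functions passed in here are hypothetical placeholders for whatever the hosting platform actually provides (load-balancer deregistration, reinstall, re-registration); only the ordering is the point.

```python
def rolling_update(servers, remove_from_service, install_version,
                   return_to_service, new_version):
    """Update servers one at a time so the application stays reachable
    throughout: each server is taken out of rotation, reinstalled with
    the new version, and put back before the next one is touched."""
    for server in servers:
        remove_from_service(server)           # load balancer stops sending traffic
        install_version(server, new_version)  # reset and reinstall with new software
        return_to_service(server)             # server rejoins the pool
```

Because at most one server is ever out of rotation, the remaining servers keep serving traffic and no maintenance page is needed.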
If you need help with the implementation of a dynamic infrastructure, please contact us. h.com helps you migrate your existing application to a dynamic infrastructure and builds complex setups with scalability, redundancy, failover, and backup solutions. With our experience in complex international infrastructures, we can also be a beacon in the cloud for the setup of smaller environments.
Christian Haag, founder and CEO of h.com networkers GmbH
Does that sound interesting? You would like to know more?
Good software is created in dialogue and we are happy to exchange ideas with you.