Linux / Unix Automation : System Deployment

Any organisation that utilises an IT infrastructure will at some point come across the following fundamental questions:

  • What do we need server hosts for?
  • What type of servers do we need, physical or virtual?
  • How many do we need?
  • Where do we want them?
  • How long do we need them for (lifetime)?
  • How quickly do we need them?
  • How do we want them built?
  • How many different build types do we need?
  • Why should we automate deployments?
  • How do we want to configure them?

All of these questions are simple to ask, but sometimes overlooked. Within programme life-cycles the feasibility stage is often omitted, bypassed or looked at too late. We should not forget it: the cost of change at production level is exponentially greater than at the point of design and development.

To determine if what we plan is feasible we need to look into these important questions to arrive at some answers or at least discussion points.

So, onto the questions. There are many deployment methodologies, so understanding their impact and our options is essential.

  1. What do we need server hosts for?

Where will our web services run? What about the multiple applications that they may connect to, or a database that those applications require? We need to run applications, therefore we need compute power.

IT compute power is the fundamental reason.  Simple so far…

  2. What type of servers do we need, physical or virtual?

This depends on company policies, cost and many other factors, but generally virtual is good as it provides greater flexibility, which leads to cost benefits. However, virtual hosts cannot always sensibly provide enough compute power. One common case is that of databases, where it’s more logical to utilise dedicated physical server hardware because a large amount of compute power is required in one place. So, now we have some baselines or rules…

  3. How many do we need?

Now we start to get to the point of more complexity. How can we calculate how many servers we need?

For physical servers, this comes down to the low-level resources available: memory, CPU, I/O and network. Servers should be scaled to meet and exceed requirements by an overhead safety factor, taking into consideration points such as single points of failure, load balancing and required availability. These are all points which can be discussed in a lot more detail.

In the case of VMs, this should be split into the number of virtual hosts required, and the number of physical hosts needed in our VM farm on which to run these VMs, again with some overhead.

  • Number of VMs

Benchmarking can help if we have existing platforms, but without these at the start we need to make some intelligent estimations. We don’t want to overspend, but we do want to make sure that we have enough to start with. These are key points for consideration later, when we decide where we want our hosts.

  • Number of host servers within VM Farm

Similar to calculating individual physical servers as above, a sensible hardware standard should be designed to run our VM farm taking into consideration points such as mitigation of single points of failure, and balancing. It’s a good idea to stick to a standard as this will make horizontal scaling easier in the future. A balance of CPU core capability and memory should be considered to meet the requirements of the VM demand. This will form a ‘standard build’ for our physical hardware providing our VM farm.

Points such as memory sharing, CPU sharing and other resource constraints such as I/O and networking need to be taken into consideration. Generally, for virtual platforms it is not a good idea to over-commit memory, as we can end up with workload issues. So here we have a good place to start in determining how many physical host servers we need to run the required VMs: base it on memory. In simple terms, add up the memory required by all of the VMs, then apply a sensible overhead to allow for hardware failures and busy periods. This gives the total memory we need. Divide this by the memory available in our standard build, round up, and we have the number of hosts required.
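The memory-based sizing above can be sketched in a few lines. All the figures here are hypothetical examples, not recommendations:

```python
import math

def hosts_required(vm_memory_gb, overhead_factor, host_memory_gb):
    """Number of 'standard build' physical hosts needed for a set of VMs.

    vm_memory_gb    -- list of per-VM memory requirements in GB
    overhead_factor -- multiplier for hardware failures and busy periods
    host_memory_gb  -- memory in one standard-build physical host
    """
    total_gb = sum(vm_memory_gb) * overhead_factor
    return math.ceil(total_gb / host_memory_gb)

# Hypothetical example: 40 VMs of 16 GB each, 25% overhead,
# standard-build hosts with 512 GB RAM.
print(hosts_required([16] * 40, 1.25, 512))  # 800 GB / 512 GB -> 2 hosts
```

In practice CPU, I/O and network constraints may push the number up from this memory-based floor, but memory gives a sound first estimate.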

  4. Where do we want them?

Lots of factors should be considered to help answer this question, involving ‘non-functional’ requirements such as required availability, scalability and resilience. Total cost of ownership (TCO) is also key at business level, and will vary depending on a couple of key points:

  • Do we have data centres with available capacity?
  • Is Cloud a consideration? The questions that follow will help in determining this.

Sometimes this answer is easy, as we may have lots of capacity in our data centres and total cost of ownership works out favourably there. Or we can look at options with cloud-based services, where maybe we can reduce our TCO. This leads on to other questions…

  5. How long do we need them for – what’s the lifetime requirement?

What environments do we need? In the case of non-production, if hosts for development and testing are only required for hours or days in order to run test plans, maybe we can utilise ‘on-demand’ cloud services to supply us with VMs on a temporary basis. If we need them for longer, maybe we can utilise other fixed cloud-based plans. This is a cost vs. flexibility exercise and requires discussion between business and technical members of the organisation.

  6. How quickly do we need them?

Often the answer from the business is simply ‘fast’. For test hosts, do we need to be able to rapidly build, destroy and redeploy hosts to suit test plans? In production we probably don’t need to destroy and redeploy, just to deploy quickly.

So we can see that fast deployment is always good, and sometimes we need flexibility to be able to destroy and redeploy.

  7. How do we want them built?

To meet requirements of the particular applications that will run on them, and a key point… To a standard!

  8. How many different build types do we need?

As few as possible! This will make automation easier. This is a simplistic answer, but it requires some technical design to ensure all requirements are captured within our build types, and that they are as future-proof as possible. Often you will start by envisaging many different build types, but with careful thought these can nearly always be reduced to a small number… perhaps even one?

  9. Why should we automate deployments?

All of the discussions above should have raised enough points to help write a business case for an automated deployment platform. It really is essential, and time spent on getting this right will be of ongoing benefit to our business.

  10. Deployment types

To compare the more technical aspects of deployment we can take the points above into consideration to help decide our design. Servers can be built in a variety of ways, each with its own merits and disadvantages, as follows:

  • Manual build from physical media or ISO file

This traditional method is generally simple for a one-off installation, and it should not take long to provide the installation criteria. However, repetition is slow and unwanted differences can easily be introduced (remember, standards). So this is not great if we require many hosts, but fine for one or two.

  • Automated deployment from Operating System distribution

This method will take more time to design and implement, but will have many advantages over a manual build, as we can repeat and guarantee standards… a key point.

  • Automated deployment from image

There are various methods of creating hosts from an image. This can involve using pre-built hosts available externally, or from images which we create ourselves from a previously installed host.

  11. Automated deployment tools and methods

There are many tools which can be utilised to automate deployment. A few key ones will be discussed here, but there are many more.

  • Kickstart (Linux) / Jumpstart (Unix)

This is a network-based deployment tool which provides provisioning from an O/S distribution (or from images). It effectively installs each system individually, and allows pre- and post-installation scripts to be applied to create standards.
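As a minimal sketch, a Kickstart file declares the installation criteria up front so every host is built identically. The mirror URL, partitioning and package set here are illustrative assumptions:

```
# minimal Kickstart sketch -- all values are illustrative
install
url --url=http://mirror.example.com/centos/7/os/x86_64/
lang en_GB.UTF-8
keyboard uk
timezone Europe/London
autopart
clearpart --all --initlabel
reboot

%packages
@core
%end

%post
# post-installation script enforcing our build standard
echo "built to standard" > /etc/build-standard
%end
```

The `%post` section is where the standardisation happens: every host built from this file ends up in the same state, which a manual install cannot guarantee.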

  • Cobbler

A powerful extension to Kickstart which incorporates browser-based configuration as well as a command line. Cobbler utilises the concept of hierarchical deployment, using distributions, profiles, sub-profiles and individual host definitions. This tool provides great flexibility and the ability to create multiple types or sub-types of deployment if required, which makes deployment possible by semi-skilled staff.
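The hierarchy can be seen in Cobbler’s command line. This is a sketch, not a complete setup, and the names, paths and MAC address are hypothetical (exact flags vary between Cobbler versions):

```shell
# import a distribution from a mounted installation ISO
cobbler import --name=centos7 --path=/mnt/centos7-iso

# create a profile tied to that distribution, with its kickstart
cobbler profile add --name=webserver --distro=centos7 \
    --kickstart=/var/lib/cobbler/kickstarts/webserver.ks

# define an individual host against the profile
cobbler system add --name=web01 --profile=webserver \
    --mac=AA:BB:CC:DD:EE:FF

# regenerate the PXE/DHCP configuration
cobbler sync
```

Once the distribution and profile exist, adding another standard host is a single `system add`, which is what makes deployment by semi-skilled staff practical.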

  • Cloning / Templates

Cloning or templating utilises the concept of taking an image of an existing system, stored at block level, to be copied. The clone then needs unique system configuration parameters applied to make it independent of its parent. This method gives us the flexibility to apply changes to a host and use it to create further templates, so we can build our standards at this level for our defined build types. One good example of this is tooling from VMware, such as vRA/vRO. VMware also supports templating within a vSphere platform.

  • Containers

Containers, often associated with utilities such as Docker, are packaged, ready-made systems which can be shared, or edited and re-packaged, as required. Containers can also be created from scratch and edited as needed, again to create our required standard build types.
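As an illustration, a standard build type can be captured in a Dockerfile. The base image, package and command below are assumptions chosen for the sketch:

```
# Dockerfile sketch: a 'standard build' captured as a container image
FROM centos:7

# apply our standard packages in repeatable layers
RUN yum -y install httpd && yum clean all

# record the build standard, mirroring what a %post script would do
RUN echo "built to standard" > /etc/build-standard

CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```

Because the image is built from a declarative file, the standard is versionable and every container started from it is identical, which is the same goal our host build types aim for.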

  12. How do we want to configure them?

Now we naturally move on to another huge topic. This will be covered in another post.