Server virtualization project risks
It is well known that no task is risk-free. Things can go wrong, and unfortunately they often do. Identification and analysis of project risks is a topic with an extensive literature. There are risks common to all projects (generic risks), and others that are due to the specific features of the project (specific risks). For instance, since every project has an end date, every project carries the generic risk of not being completed on time. In this article we shall focus on the risks that are specific to server virtualization projects, and on the particular form that generic risks take in them.
Performance risks in server virtualization projects
In a new application implementation project it is very difficult to size the systems, because no workload data is available. In a server virtualization project, by contrast, companies have extensive workload data. Unfortunately, there is not always the will to collect and analyze it.
There are basically three strategies to mitigate the risk of undersizing systems, and therefore of suffering excessive response latency:
Oversizing; Extensive experimentation; and Data collection and analysis.
Oversizing is a very common strategy. The basic rationale is that hardware is so cheap that it makes little sense to spend time determining the exact requirements. However, it is important to remember that unless you run experiments or an in-depth assessment, you do not know whether you are actually oversizing or undersizing the systems. You do not even know whether you are virtualizing the right applications. You can adopt an aggressive approach, and as a consequence receive complaints from users about system performance; or you can adopt a cautious approach, and end up with a virtual server farm much smaller than it could have been. Extensive experimentation is a good but expensive alternative. Typically, systems are sized according to rules of thumb and generic policies (e.g., DBMSs should not be virtualized), and only those that are expected to have significant overheads are actually tested. Unfortunately, rules of thumb are often unreliable, and generic policies gloss over the specific features of virtual servers. Data collection and analysis is the ideal approach. There are, however, some significant challenges:
Simultaneous data collection from tens or hundreds of servers. Cleaning and analysis of workload data containing tens of thousands of data points. Identification of the optimal virtual server farm from the collected data. Estimation of the virtualization layer's impact on the workloads.
Each of these challenges can be handled efficiently with appropriate data collection tools. The Wasfo Data Collector and Wasfo Analysis and Optimization tools (see references below) have been designed exactly for this purpose.
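The data collection and analysis approach above can be sketched in a few lines. This is a hypothetical illustration, not the output of any specific tool: the server names, utilization samples, and the 25% headroom figure are all invented for the example. Sizing on a high percentile of observed utilization, rather than on the average or the raw peak, is a common compromise.

```python
"""Sketch: sizing a virtual server farm from collected CPU workload data.

Hypothetical example -- server names and utilization figures are invented;
a real project would use weeks of samples from a collection tool.
"""

# Peak-hour CPU utilization samples (%) collected per physical server.
workloads = {
    "web01": [22, 35, 41, 28],
    "web02": [15, 18, 25, 20],
    "db01":  [60, 72, 81, 65],
}

def p95(samples):
    """95th-percentile utilization: ignores rare spikes without
    sizing for the mere average."""
    s = sorted(samples)
    idx = min(len(s) - 1, int(round(0.95 * (len(s) - 1))))
    return s[idx]

HEADROOM = 0.25           # keep 25% spare capacity per host (assumption)
HOST_CAPACITY = 100.0     # one host = 100% of one reference server

demand = sum(p95(w) for w in workloads.values())
usable = HOST_CAPACITY * (1 - HEADROOM)
hosts_needed = -(-demand // usable)   # ceiling division

print(f"Aggregate p95 demand: {demand:.0f}%")
print(f"Hosts needed: {int(hosts_needed)}")
```

With these made-up samples the aggregate 95th-percentile demand is 147% of one reference server, so two hosts suffice at 25% headroom. The estimate of the virtualization layer's own overhead would be added on top of `demand` in a real analysis.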
High Availability risks in server virtualization projects
In a non-virtual server farm there are very few applications that are classified as mission critical and protected with High Availability (HA) clusters. HA clusters can significantly improve service availability insofar as hardware and application failures are concerned. Unfortunately, they are costly and complex to maintain. HA clusters protect against server, OS and application failures, but they require:
Shared storage (not all HA technologies require shared storage, but the most widely deployed ones do); HA software; scripts or DLLs that detect failed applications and shut them down in an orderly way; and overall certification of the solution, to obtain support from all the vendors concerned.
Hypervisors (also known as Virtual Machine Monitors, or the virtualization layer), thanks to the fact that Virtual Machine images are actually files hosted on shared storage, make it possible to build server farms in which all application instances are protected from server failure. If a server fails, a monitoring service will detect the failure and start the VM on another server. Unfortunately, these technologies monitor and act at the hypervisor level, so they do not deliver any protection in case of application failure or freeze. If such protection is required, HA cluster software can be used on top of the virtualization layer.
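The hypervisor-level restart logic just described can be sketched as a toy heartbeat monitor. All names here (`Host`, `failover`, the 30-second timeout) are hypothetical; real hypervisor HA services use cluster heartbeats over the management network rather than this loop, but the key limitation from the text is visible in the code: only host failure is detected, not a frozen application inside a healthy VM.

```python
"""Toy sketch of hypervisor-level HA: restart VMs from failed hosts.

Hypothetical model -- real HA services (and their timeouts) differ.
"""
import time
from dataclasses import dataclass, field

HEARTBEAT_TIMEOUT = 30.0  # seconds without a heartbeat => host considered down

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)
    last_heartbeat: float = field(default_factory=time.monotonic)

def failed(host, now):
    return now - host.last_heartbeat > HEARTBEAT_TIMEOUT

def failover(hosts, now=None):
    """Move VMs off hosts that stopped sending heartbeats.

    Note: a VM whose application has frozen still heartbeats through
    its healthy host, so it is NOT caught here -- the limitation
    discussed in the text.
    """
    now = time.monotonic() if now is None else now
    healthy = [h for h in hosts if not failed(h, now)]
    moved = []
    for host in hosts:
        if failed(host, now) and healthy:
            for vm in host.vms:
                # Least-loaded placement; the VM image itself lives on
                # shared storage, so any host can start it.
                target = min(healthy, key=lambda h: len(h.vms))
                target.vms.append(vm)
                moved.append((vm, target.name))
            host.vms.clear()
    return moved
```

Layering HA cluster software inside the guest, as the text suggests, is what adds the missing application-level failure detection on top of this host-level mechanism.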
Another important point is that hypervisors, thanks to the fact that Virtual Machines can be moved at runtime with no service interruption (live migration), minimize the impact of planned server outages. If, for instance, a server needs to be rebooted to replace a failed component, its VMs can first be moved to another server so that user operations are not interrupted.
Security risks in server virtualization projects
If you search Google for "virtualization risk" you will find tens of articles on security risks. That shows that security is the most important concern people have as far as virtualization project risks are concerned. People are usually concerned about what they do not know well, because one of the fundamental determinants of human behaviour is the need to have some form of control over the surrounding environment. Virtualization is no exception. In these projects a whole new set of products is introduced, and those already up and running need to be configured in new ways. So a cautious approach is not only recommended but mandatory.
Since there are so many articles on virtualization and security around on the web, we shall not spend time here going through all the security concerns. We shall limit ourselves to pointing out that, unless strong control processes are in place, in a virtual server farm it is far easier to create new OS instances. So it is not surprising that, after a while, people discover plenty of Virtual Machines that were created for development or testing purposes and are not actually managed in a professional way. They could, for instance, be missing the most recent security patches, or not be configured according to the enterprise security standards. Strong control processes may look like the correct solution, but strong control significantly diminishes the benefit of increased flexibility we get through virtualization. A better alternative is likely to use a weaker control process and then periodically run a server farm inventory to spot possible security holes.
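The periodic inventory scan suggested above could look something like this. The data format is an assumption for illustration: it supposes each VM record carries a purpose tag, a managed flag, and a last-patch date, which in practice would be pulled from the hypervisor API and the patch-management system.

```python
"""Sketch: periodic inventory scan for unmanaged VMs and stale patches.

Hypothetical record format and thresholds; adapt to your own tooling.
"""
from datetime import date, timedelta

MAX_PATCH_AGE = timedelta(days=60)  # assumed policy threshold

def security_holes(inventory, today):
    """Flag VMs that look unmanaged: stale patches or untracked dev/test machines."""
    findings = []
    for vm in inventory:
        if today - vm["last_patched"] > MAX_PATCH_AGE:
            findings.append((vm["name"], "patches older than 60 days"))
        if vm["purpose"] in ("dev", "test") and not vm["managed"]:
            findings.append((vm["name"], "unmanaged dev/test VM"))
    return findings

inventory = [
    {"name": "erp-prod", "purpose": "prod", "managed": True,
     "last_patched": date(2010, 5, 1)},
    {"name": "dev-scratch", "purpose": "dev", "managed": False,
     "last_patched": date(2010, 1, 10)},
]
for name, issue in security_holes(inventory, today=date(2010, 5, 15)):
    print(f"{name}: {issue}")
```

Running such a scan on a schedule keeps the lighter-weight control process honest: VM sprawl is tolerated day to day, but holes are surfaced regularly.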
Costs
Project costs all too often exceed expectations, and thousands of pages have been written on how to control a project so that costs do not exceed the budget. Virtualization projects have specific issues linked to software licensing. Depending on the licensing rules, a virtualization project can produce significant savings or cost increases. If the application is licensed according to the number of physical cores even when it runs on top of a Virtual Machine Monitor, the cost will likely increase, since virtual servers typically have many more processor cores than any of the hosted applications require. If, on the contrary, the application license takes into account the number of logical cores, or the system utilization, you may realize significant savings.
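The licensing arithmetic is worth making concrete. The numbers below are purely illustrative assumptions (a flat per-core fee, a 4-core application moved to a 32-core virtualization host); actual vendor rules vary widely.

```python
"""Sketch of the two licensing rules discussed in the text.

All prices and core counts are invented for illustration.
"""
PRICE_PER_CORE = 1000  # hypothetical annual license fee per core

# Before virtualization: the application ran on its own 4-core server.
physical_cores_before = 4

# After: the VM gets 4 virtual cores, but the host has 32 physical cores.
host_physical_cores = 32
vm_logical_cores = 4

cost_before = physical_cores_before * PRICE_PER_CORE

# Rule A: license counts all physical cores of the host -> cost increases.
cost_physical_rule = host_physical_cores * PRICE_PER_CORE

# Rule B: license counts only the VM's logical cores -> cost unchanged,
# and lower still once several licensed apps share the same host.
cost_logical_rule = vm_logical_cores * PRICE_PER_CORE

print(f"before: {cost_before}, physical-core rule: {cost_physical_rule}, "
      f"logical-core rule: {cost_logical_rule}")
```

Under these made-up figures, rule A multiplies the license cost eightfold while rule B leaves it flat, which is the swing the text warns about.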
Conclusions
There are many risks in server virtualization projects that could offset, or even exceed, the project benefits. Careful planning and analysis are required to mitigate the performance, availability and security risks, as well as to ensure that the expected financial benefits are realized.