Automation tools meant to facilitate application management may get you more, or less, than you bargained for, particularly as your organization matures along its software-defined journey. Underlying approaches to automation vary widely, and not just because different vendors' solutions differ. There's that, of course, and you already know that even two relatively "apples to apples" solutions can perform differently in the same production environment for any number of reasons. But there's more: using third-party automation tools may introduce a level of risk into your application management architecture that wouldn't otherwise exist. Worse, the downside sometimes takes a while to understand, by which time you could have a problem on your hands worse than the one the tool was purchased to solve.
How can this be? Understanding it fully, and thereby keeping decision-making power in your hands where it belongs, requires a shift in thinking for some IT practitioners: an application-centric approach. No application performs exactly the same in any two production environments. In today's highly dynamic virtualized and cloud environments, workloads move across clouds and back again, and may perform differently as they do. Many third-party tools follow narrow rules and use static metrics that can incorrectly deem something a problem when it's a "normal" anomaly for that application in that setting. For instance, a tool may alert you to excessive activity, but the "automated fix" actually causes downtime elsewhere. IT managers need an understanding of a solution's underlying philosophy.
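The pitfall of static metrics can be sketched in a few lines of illustrative Python (the thresholds and function names here are hypothetical, not any vendor's actual logic): a fixed threshold flags a batch workload that routinely runs hot, while a per-application baseline learned from that workload's own history recognizes the same reading as normal.

```python
import statistics

STATIC_THRESHOLD = 80.0  # naive "one size fits all" rule: alert above 80% CPU

def static_rule_alert(cpu_pct: float) -> bool:
    """Fixed rule: any reading over the static threshold is a 'problem'."""
    return cpu_pct > STATIC_THRESHOLD

def baseline_alert(history: list[float], cpu_pct: float, k: float = 3.0) -> bool:
    """Per-application baseline: alert only if the reading deviates more than
    k standard deviations from this application's own observed history."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    return abs(cpu_pct - mean) > k * stdev

# A batch-analytics VM that normally runs hot: ~90% CPU is its "normal anomaly"
batch_history = [88.0, 91.0, 90.0, 89.5, 92.0, 90.5]

reading = 91.0
print(static_rule_alert(reading))              # True  -> false positive
print(baseline_alert(batch_history, reading))  # False -> within learned normal
```

The static rule would trigger an "automated fix" on a workload that is behaving exactly as it always does; the baseline only fires on a genuine deviation for that application.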
Many vendors use a “one size fits all” approach to workload automation
This approach can result in capacity miscalculations by overweighting the value of "rightsizing." Many tools lack a holistic view of the enterprise, delivering only limited static metrics and unable to accurately diagnose overall system health. Niche solutions may be adequate for limited server virtualization, but they won't scale out with the sophistication you need. Customers are surprised to discover they need to augment missing capabilities with other vendors' products in order to retrofit essential functionality. That correction can lead to duplication of other functionality, resulting in inefficiency and the negation of anticipated cost savings: excessive movement of workloads, for example, carries high overhead, so the value gained is negligible. The screenshot below illustrates how VMware vRealize Log Insight can give users a "behind the scenes" look at how a third-party automation tool can negatively impact your environment.
As your enterprise extends into the cloud, third-party tools vetted for simpler times may not make the journey with you effectively. At a minimum, you take on the onus of continually re-evaluating those tools, an effort that may not be worth the energy they save.
As an enterprise evolves in its software-defined journey, the risk introduced by third-party automation tools becomes greater
A fully virtualized stack, with storage, networking, and compute, is simply more complicated than server virtualization alone. Third-party tools may work well as you scale up, but as you scale out to tens of thousands of VMs, the only way to have high levels of certainty for your business-critical applications is with an automated control plane that's natively integrated with the stack. Moreover, tapping the full benefit of that virtualized stack, taking it to a hybrid cloud environment for breakaway efficiency and agility, requires a sophisticated control plane to orchestrate your highly dynamic virtualized and cloud environment.
Simply put, it's about decision support. Some vendors' approach to "automation" is to relieve you of the "headache" of knowing what's going on in your environment. So, for example, a tool may identify a server whose health is degrading and move the workload, but it fails to fix the server itself. Again, this may not be problematic in one data center with a few hundred VMs, but scale out to tens of thousands and you've introduced the risk that comes from rules that don't bubble up problems, like downed VMs, while they're still simple to fix.
Realizing the vision of the self-healing data center requires a management control plane that delivers accurate and actionable data. A policy-based approach, rather than a rules-based one, can more properly discern what data is critical to deliver to you, and an integrated approach to operations management ensures application uptime by learning your environment's particular needs. Optimal management in the era of the modern data center and cloud means learning and correlating the right metrics based on application behavior in order to identify the most actionable data. Competitive approaches tend to treat all resources equally, as commodities in your ecosystem, and deal with situations according to a basic set of rules, with no intelligence: a straightforward "when A happens, do B" approach. This fails in a fully virtualized and cloud environment, where practitioners need advice on how best to remediate problems based on a holistic view. A policy-based unified management console can point out configuration issues, performance bottlenecks, and opportunities to rightsize over-provisioned capacity. In other words, it doesn't just move a workload when there's a problem (it can do that too); it continually monitors your dynamic environment and adapts accordingly. Tracking anomalies with self-learning behavioral analytics rapidly drives time to benefit, helping not only to keep your apps safe and productive but also to support accurate capacity planning.
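To make the rules-versus-policy distinction concrete, here is a minimal hypothetical sketch (the class, field, and action names are invented for illustration, not any product's API). A fixed rule always fires the same action, while a policy weighs the workload's criticality and learned baseline before recommending anything:

```python
from dataclasses import dataclass

# Rules-based: a fixed "when A happens, do B" mapping, blind to context.
RULES = {"high_cpu": "migrate_vm"}

def rules_based_action(event: str) -> str:
    return RULES.get(event, "ignore")

# Policy-based: the same event is weighed against what is known about the
# workload before any action is recommended.
@dataclass
class Policy:
    criticality: str     # "business-critical" or "best-effort"
    baseline_cpu: float  # learned normal utilization for this application

    def recommend(self, event: str, cpu_pct: float) -> str:
        if event != "high_cpu":
            return "ignore"
        if cpu_pct <= self.baseline_cpu * 1.2:
            return "no action: within learned normal range"
        if self.criticality == "business-critical":
            return "recommend: scale capacity, alert operator"
        return "recommend: migrate or rightsize"

batch_app = Policy(criticality="best-effort", baseline_cpu=85.0)
print(rules_based_action("high_cpu"))         # always "migrate_vm"
print(batch_app.recommend("high_cpu", 90.0))  # within baseline -> no action
```

The rule moves the workload every time, regardless of whether the reading is normal for that application; the policy surfaces a recommendation tuned to the workload's importance, keeping the decision visible to the operator.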
As discussed in an independent research report from the University of Waterloo, integrated OS and application monitoring drives the intelligent workload management needed to make the self-healing data center a reality. The intelligent operations and automation provided by vRealize Operations, coupled with VMware Predictive DRS, allow you to get more out of vSphere and SDDC environments, with easy-to-use capacity management and performance monitoring. Using self-learning analytics, an integrated management console can guide remediation with recommended corrective actions and automatically reclaim over-provisioned capacity for optimal resource utilization.
Analyst firm Taneja Group recently cited VMware vRealize Suite as a leader in all aspects of cloud management
- Truly comprehensive cloud management platforms are rare and will become increasingly essential for the holistic view modern enterprises need,
- Despite what vendors say, most cloud management toolsets are optimized for particular environments (and often operate best in a proprietary environment), and
- Few tools operate well in a cross-cloud environment. For example, Microsoft cloud management is specifically designed for Azure clouds with Hyper-V, and HPE cloud management for HPE's flavor of OpenStack clouds. Indeed, a lack of standards makes cloud interoperability challenging.
Legend: VMW=VMware, MSFT=Microsoft, SVN=ServiceNow, HPE=HP Enterprise, CSCO=Cisco, RH=Red Hat, SPL=Splunk, VMT=VMTurbo
Source: Taneja Group, 2016
The software-defined journey is about more than IT; it’s an approach to modern business, one that’s increasingly necessary to deliver the kind of customer experiences needed to stay competitive. The application has become more and more central to your customer’s experience with your brand, and the underlying infrastructure that delivers that experience is best managed by a unified management console that becomes intimate with applications, no matter where they reside.
The post Automating Virtualization & Cloud Management: Realizing Upsides Can Mean Uncalculated Risk appeared first on VMware Cloud Management.