Multi-vendor enterprise IT environments don’t happen overnight; they evolve over time. Organizations cobble them together through a series of often unrelated events: acquisitions, investments in best-of-breed components, and leadership changes that shift vendor preferences.
Where does that leave IT managers? Sometimes overwhelmed.
As IT environments grow, managers find themselves faced with vendor sprawl. It is not uncommon for individual departments to host multiple vendors’ compute, storage, and networking platforms under one roof. Connect these platforms to dozens of satellite components and vendor software packages, then replicate that matrix across divisions and geographies, and you have a complex mix of IT issues to manage.
So how can IT managers manage all this complexity? They have two basic choices. They can take care of it themselves, tracking all service contracts, communications with vendors, maintenance tasks, and troubleshooting. Or they can hire a third party to handle multi-vendor support for them.
There are arguments to be made on both sides. Organizations that want to keep tight control over IT, and are confident their staff can coordinate many often-difficult vendor relationships, may want to keep the task in-house. Those looking to focus on core competencies and leave vendor relations to a specialist can look to multi-vendor support.
When IT managers decide how to manage complex multi-vendor IT environments, they need to consider a few key factors.
Do you really know your environment?
In other words, what runs where? Which vendor provided a specific router that controls traffic to the northeast region of the organization? What type of support is available for this product? And who can they call to get it fixed?
Many IT managers today lack a clear picture of their organization’s assets. This may be because the organization never built a comprehensive asset management system, or because the IT department doesn’t keep it up to date. Either way, IT managers need to be able to identify everything in their environment: what it is, what it connects to, who provided it, and what can be done quickly to solve a problem.
Lack of insight in a multi-vendor environment can be disastrous. If a compliance audit reveals a problem along a network path, IT will need to explain what contributed to it. If a network scan turns up a node still running Windows XP in a country where the organization has no operations, IT should be able to shut it down immediately.
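Even a minimal asset inventory goes a long way toward answering these questions. The sketch below (in Python, with hypothetical device names, vendors, and contact numbers — none of them from the article) shows the kind of record such an inventory needs: what each asset is, who provided it, where it sits, what it connects to, and who to call.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """One record in a hypothetical multi-vendor asset inventory."""
    asset_id: str
    kind: str                 # e.g. "router", "server"
    vendor: str
    location: str
    support_contact: str
    connects_to: list = field(default_factory=list)  # dependent asset IDs

# Illustrative inventory, keyed by asset ID.
inventory = {
    "rtr-ne-01": Asset("rtr-ne-01", "router", "VendorA", "northeast",
                       "+1-800-555-0100", ["srv-db-01"]),
    "srv-db-01": Asset("srv-db-01", "server", "VendorB", "hq",
                       "+1-800-555-0200"),
}

def who_do_i_call(asset_id: str) -> str:
    """Answer the on-call questions: what is it, who provided it, who fixes it."""
    a = inventory[asset_id]
    return f"{a.kind} from {a.vendor} in {a.location}: call {a.support_contact}"

print(who_do_i_call("rtr-ne-01"))
# router from VendorA in northeast: call +1-800-555-0100
```

In practice this data usually lives in a CMDB or asset management tool rather than in code; the point is that the fields above must exist somewhere, and be kept current.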
What are your main downtime risks?
This is a question that organizations should be asking, regardless of the equipment they use. Risk tolerance is, after all, mission-critical knowledge that can save the business. But it’s even more important in a multi-vendor environment. If a payment system or key database interfaces with multiple vendor platforms, executives need to know what all the dependencies are and what needs to be done to handle an outage. SLAs will need to be updated as risk management plans change.
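One way to surface those dependencies is to walk the dependency graph of a critical service and collect every vendor platform it touches. A minimal sketch, using a hypothetical map of systems and vendors (all names are illustrative):

```python
# Hypothetical dependency map: system -> direct dependencies.
deps = {
    "payments": ["db-cluster", "edge-lb"],
    "db-cluster": ["san-array"],
    "edge-lb": [],
    "san-array": [],
}

# Which vendor supports each component (illustrative names).
vendor_of = {
    "payments": "in-house",
    "db-cluster": "VendorA",
    "edge-lb": "VendorB",
    "san-array": "VendorC",
}

def vendors_at_risk(system: str) -> set:
    """Collect every vendor a system transitively depends on."""
    seen, stack, vendors = set(), [system], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        vendors.add(vendor_of[node])
        stack.extend(deps[node])
    return vendors

print(sorted(vendors_at_risk("payments")))
# ['VendorA', 'VendorB', 'VendorC', 'in-house']
```

An outage plan for the payments system then needs an escalation path for each vendor in that set, not just the one whose logo is on the failing box.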
Do your SLAs cover your company’s service levels?
Speaking of SLAs: they need to be tightly managed in multi-vendor environments. If you work with multiple support providers, each will take a different approach to support. SLAs will vary, and each provider will follow its own escalation process. If a company has a 24/7-response maintenance contract on its servers but only a next-business-day contract for its network, a network issue can render the server coverage moot. Operations are limited by the lowest-level SLA in the chain.
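That “lowest level SLA” point is simple arithmetic: the effective response time of a service is the worst response time among the contracts it depends on. A minimal sketch, with hypothetical response times:

```python
# Hypothetical per-platform SLA response times, in hours.
sla_hours = {
    "servers": 4,    # illustrative: 24/7, four-hour response
    "network": 24,   # illustrative: next-business-day
    "storage": 8,
}

def effective_sla(platforms) -> int:
    """A service is only as fast as its slowest supporting contract."""
    return max(sla_hours[p] for p in platforms)

# A service spanning servers and network inherits the network SLA.
print(effective_sla(["servers", "network"]))  # 24
```

Running this comparison across every critical service is a quick way to find contracts worth upgrading — or premium coverage that a weaker dependency makes pointless.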
Are your teams up to it?
In recent years, the pace of change in computing has accelerated dramatically. It is no longer driven by hardware, which had a predictable lifecycle; it is now driven by software. This puts pressure on IT staff to stay current with complex modern practices in software, containers, orchestration systems, and resource management systems, all of which are changing at incredible speed. Add the challenge of maintaining knowledge across multiple vendor platforms, and IT departments quickly find themselves struggling with skills shortages.
It’s best to assess the company’s skill needs and address them proactively. This could mean setting up additional training for staff members on key vendor technologies or, if too many platforms need to be covered, handing over maintenance responsibilities to an experienced external vendor.
Can you see the whole picture?
Perhaps the biggest challenge in multi-vendor environments is isolating the cause of an outage and getting the responsible party to address it in a timely manner. For example, a failure may at first glance look like a server problem. But computing environments are so large, complex, and filled with dependencies that the problem could lie in a component attached to the server platform. Calls to a vendor representative often end in finger-pointing. After all, vendors are unfamiliar with each other’s products, reluctant to talk to one another, and reluctant to take responsibility for resolving a case.
IT managers trying to manage a multi-vendor environment should view the environment as a holistic system, not a set of components. This will require knowledge of the systems, the vendors themselves, overlapping SLAs, and the resources needed to resolve an issue.
As IT environments grow, executives have their work cut out trying to manage the increased complexity. It can be done, but it takes energy and organization to lead the effort internally. This is especially true with the rapid growth of edge computing and edge data.
Critical information is now collected, stored, and manipulated in more diverse locations and configurations than ever before. Some organizations turn to an outside service provider for help. But the provider’s technical depth and geographic reach are critical to delivering consistent service across a wide range of platforms and in every location where the organization has deployed assets. Offloading an increasingly tangled web of vendor relationships can free up IT to focus more intensely on driving positive business results.
John Gunther is Head of Global Multi-Vendor Services Portfolio, Commercial Segment, at Hewlett Packard Enterprise, responsible for developing and maintaining support services designed to assist customers in the day-to-day operation, maintenance, and management of their IT environments from edge to data center to cloud. He leads a cross-functional team that manages all aspects of the Multivendor Services business at HPE. A long-time HPE veteran, John has held various global and regional roles in the Pointnext Services organization, including business management and planning, and service delivery planning for the HPE Customer Support Center. He started his IT career as an HPE customer, developing statistical databases and operating a time-sharing data center.
About Marianne Talbot
Marianne Talbot is the Global Category Manager for HPE Multivendor Services. She is responsible for enabling a high-performing operational services organization focused on customer outcomes, with a holistic, end-to-end view of HPE solutions, their differentiators, and their value message. Marianne joined HPE in 2014 as an administrator and soon after joined the Engagement and Pursuit team as Bid Manager for the Irish market. From there, she moved into an Engagement and Pursuit Country Manager role before joining the Global Category Management team in 2020.