The notion of being able to shape and scale the resources of your IT infrastructure dynamically and on-demand to meet specific use cases and workloads isn't entirely new. It's a common selling point of hybrid and public cloud IaaS platforms such as AWS and Azure, for instance. HPE themselves began talking about adaptable infrastructure almost fifteen years ago, with the appearance of the first blade servers.

Despite these efforts, organisations reliant upon on-premises infrastructure for reasons such as compliance, legacy systems or application-specific performance have yet to find a comprehensive answer to the challenge of maintaining their traditional systems while still embracing the business opportunities and agility presented by cloud applications. And since a great deal of downtime is caused by human error, there are additional advantages to any approach that can streamline IT infrastructure and reduce the troubleshooting issues that emerge from its typical complexity.

Now, HPE have developed a solution to bridge the gap between traditional on-premises IT and the cloud-based services that are driving the new digital economy. Their new platform, dubbed Synergy, has been architected from the ground up as Composable Infrastructure or Infrastructure-as-Code.

The Synergy platform represents HPE's biggest enterprise breakthrough in a decade – and the single biggest R&D investment in the company's history – but what is it exactly, and what does it mean for IT?


While Converged Infrastructure brought compute, storage and networking together into an integrated stack, such an architecture suits only a narrow set of workloads and data. Its reliance on pre-configured IT means it can be complex to manage and difficult to scale, carrying the risk of over-provisioning.

Hyper-convergence introduced easily deployed, single-node systems that could be clustered together to create extensible compute and software-defined storage resource pools. Vendors such as Nutanix, for instance, engineered their hardware so that compute and storage perform optimally together. In this regard, Nutanix products rely heavily on the purchase of tightly integrated compute/storage nodes, while the network fabric remains a separate consideration.

Composable Infrastructure, however, seeks to build on the flexibility that hyper-convergence offers by creating a single, wholly software-defined platform combining compute, storage and fabric into a fluid resource pool. The entire system is overseen by a unified API – delivered through HPE OneView – that provides visibility into every resource via a single management interface, and allows for frictionless updates and automated rolling maintenance. One key advantage is the reduced need to troubleshoot across separate server, storage and network teams should any issues arise.
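To give a flavour of what driving infrastructure through a single REST API looks like in practice, the sketch below builds the kind of authenticated requests a unified management interface such as OneView exposes. The appliance address, endpoint paths and payload fields here are illustrative assumptions rather than a verified OneView client; the real contract should be checked against HPE's API documentation.

```python
# Illustrative sketch of a unified management API. Endpoint paths, header
# names and payload fields are assumptions for illustration only; this is
# NOT a verified HPE OneView client.
import json

BASE_URL = "https://oneview.example.local"  # hypothetical appliance address


def login_request(username, password):
    """Build the authentication request a unified API typically expects."""
    return {
        "url": f"{BASE_URL}/rest/login-sessions",
        "body": {"userName": username, "password": password},
    }


def list_resources_request(session_token, resource="server-hardware"):
    """Build a request to enumerate one resource type from the same API."""
    return {
        "url": f"{BASE_URL}/rest/{resource}",
        "headers": {"Auth": session_token},
    }


# Against a real appliance these would be handed to an HTTP client, e.g.:
#   import requests
#   resp = requests.post(req["url"], json=req["body"])
req = login_request("administrator", "secret")
print(json.dumps(req, indent=2))
```

The point of the single-interface model is visible even in this toy: servers, storage and fabric are all reached through one base URL and one authentication session, rather than through separate per-domain tools.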

When you need to add compute or storage, you simply add another module specific to your needs, and it is automatically recognised and added to the resource pool. When those resources need to be directed elsewhere, they can be readily reallocated without a new provisioning cycle. Your infrastructure thus becomes composable, and can be optimised for any application or workload, ready for near-immediate deployment. The platform also allows for power and cooling automation, for instance via the Schneider Electric connector, to help drive down costs.
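The add-recognise-compose-release cycle described above can be modelled in a few lines. The toy class below is purely conceptual – the names and units are hypothetical, not part of any HPE API – but it captures the key behaviour: new modules fold into a shared pool, workloads carve resources out of it, and retired workloads return capacity without a fresh provisioning cycle.

```python
# Toy model of a composable resource pool. All names are hypothetical
# illustrations of the concept, not any vendor's actual API.

class ResourcePool:
    """Tracks fluid compute/storage capacity that workloads draw from."""

    def __init__(self):
        self.capacity = {"compute_cores": 0, "storage_tb": 0}
        self.allocations = {}  # workload name -> resources held

    def add_module(self, compute_cores=0, storage_tb=0):
        # A newly inserted module is recognised and folded into the pool.
        self.capacity["compute_cores"] += compute_cores
        self.capacity["storage_tb"] += storage_tb

    def compose(self, workload, compute_cores=0, storage_tb=0):
        # Carve resources out of the shared pool for a specific workload.
        if (compute_cores > self.capacity["compute_cores"]
                or storage_tb > self.capacity["storage_tb"]):
            raise ValueError("insufficient free capacity")
        self.capacity["compute_cores"] -= compute_cores
        self.capacity["storage_tb"] -= storage_tb
        self.allocations[workload] = {"compute_cores": compute_cores,
                                      "storage_tb": storage_tb}

    def release(self, workload):
        # Return a retired workload's resources to the pool: capacity is
        # simply free again, with no new provisioning cycle.
        held = self.allocations.pop(workload)
        self.capacity["compute_cores"] += held["compute_cores"]
        self.capacity["storage_tb"] += held["storage_tb"]


pool = ResourcePool()
pool.add_module(compute_cores=32, storage_tb=10)   # new module auto-added
pool.compose("analytics", compute_cores=16, storage_tb=4)
pool.release("analytics")                          # capacity flows back
```

In the real platform this bookkeeping is handled by the software-defined intelligence rather than by the operator, which is precisely what makes recomposition near-immediate.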


HPE argues that their Synergy architecture will enable businesses to support two operating environments – one stable and secure, for traditional workloads; the other capable of supporting an agile DevOps environment – thereby meeting the requirements of what Gartner has termed "bimodal IT". Both cloud apps and traditional applications can be deployed on one platform, removing the need for two siloed systems.

With this solution, enterprises essentially create their own internal cloud infrastructure that behaves much the same as that used by the public cloud. A disaggregated approach to hardware allows the resources to be logically pooled, and reallocated or recomposed at cloud-like speeds once a given workload or application is no longer needed.

Essentially, it's HPE's take on the hybrid cloud, presented in a single product that can deliver applications across private and public clouds using the same command syntax. Users start with a framework ready to accept storage, compute and networking, then customise as needed. If an additional storage module is added, for instance, the software-defined intelligence automatically recognises it and adds it to the defined resource pool.


One potential issue? Organisations would be required to go all-in with HPE for their hardware, meaning buy-in to the Synergy platform would have to occur at the beginning of a new upgrade or refresh cycle. Others may prefer the optionality of a multi-vendor environment, which may also limit the market for this new take on IT.

Despite such objections, however, Synergy and the Composable Infrastructure it enables represent an exciting strategy from HPE. Its modularity means organisations can start small and add specific capabilities as needed, scaling readily from mid-market to enterprise-level capacities. Plus, the ability to deploy purpose-built infrastructure for workloads and applications in minutes – rather than hours or days – is an exciting prospect.

The company has also been busy lining up valuable software integrations with Microsoft – through a local Azure fabric – as well as container and configuration management/automation tools such as Docker and Chef. NVIDIA and VMware are also on board with the strategy, so while the hardware may be HPE-only, the software is essentially open and extensible by customers and third-party vendors.

Composable Infrastructure and the HPE Synergy solution represent another big step in the shift towards physical infrastructure abstraction. They bring the promise of a Software-Defined Data Centre closer to reality and should be considered as part of any infrastructure planning project within enterprise IT teams.