The never-ending IT skills crisis, and the importance of understanding the difference between cloud, the model, and cloud, the place.

An admission. In ten years’ time, I will be sixty-four. While that’s not really important, except for the earworm now playing in my head, the realisation of the march of time did make me reflect on how the world of IT has changed over my thirty or so years in and around the industry. It also made me reflect on a recent conversation with my friend and colleague, our Director of Product Engineering, Danny Abukalam. The age difference between Danny and me, I should explain, is over twenty years.

It dawned on us that, in his working life, cloud computing had never not existed.

My working life, on the other hand, started amid the rapid explosion in PC networking. As I joined the workforce, yellow “thick” Ethernet was still the backbone of choice at a screaming-fast 10Mb/s, with trailing lengths of “thin” coax cable snaking their way around offices, daisy-chaining PCs together at spacings no closer than 0.5m. For Danny, and the generation of engineers that follow him, the servers, network, and storage infrastructure that live in data centres will always have been, at least conceptually, both virtual and largely something to be ignored. In ten years’ time, and likely a lot sooner, those who truly understand physical infrastructure will be few and far between.

Cloud the model, and cloud the place

Without getting into too much of the technical detail (and too far out of my wheelhouse), there are characteristics of the way applications are architected that determine whether you can access the full benefit of running them in the cloud - but you don’t have to build them that way. It is possible, in theory, and all too often in practice in cloud migrations, to recreate the physical servers that once lived in your data centre as virtual servers in the cloud and then run traditionally structured applications on them. This is often called a “lift and shift”.

On paper, you’ve migrated to the cloud, but in reality, all you’ve done is relocate your apps to a new data centre, where you now pay for your servers by the month. None of the benefits - the flexibility, elasticity, performance improvements, resilience, and so on - that come from building, or “refactoring”, an application to be cloud native are realised. Your application is still bounded by the machine it runs on, and by the limits of the way the code is written, even though that machine is now virtual. It’s also likely to be a lot more expensive.

You’ve landed in the cloud - the place, not the cloud - the model.
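To make the distinction concrete, here is a minimal, hypothetical sketch in Python (not drawn from any real migration - the class names, paths, and stores are assumptions for illustration only). The first shape is what a lift and shift leaves you with: state trapped in one process and on one machine’s disk. The second is the refactored, cloud-native shape, where state lives in external services so the platform can add or remove identical instances on demand.

```python
import json
import os


class LiftedAndShiftedService:
    """Machine-bound: state lives in this one process and on its local disk."""

    def __init__(self, data_dir="/var/lib/myapp"):
        self.sessions = {}        # in-memory sessions: lost on restart and
                                  # invisible to any second instance
        self.data_dir = data_dir  # local disk: tied to this one (virtual) machine

    def handle_request(self, user, payload):
        self.sessions[user] = payload            # sticky to this instance
        with open(os.path.join(self.data_dir, f"{user}.json"), "w") as f:
            json.dump(payload, f)                # data is trapped on this server


class CloudNativeService:
    """Stateless: any number of identical copies can serve any request."""

    def __init__(self, session_store, object_store):
        # Both stores are external, managed services (hypothetical interfaces
        # here), so individual instances are disposable and can be scaled out,
        # replaced, or moved without losing anything.
        self.sessions = session_store
        self.objects = object_store

    def handle_request(self, user, payload):
        self.sessions.put(user, payload)
        self.objects.put(f"{user}.json", json.dumps(payload))
```

The first class only behaves correctly while exactly one copy of it runs; the second can run as one copy or fifty, which is what the elasticity and resilience of the cloud model actually depend on.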

Underpinning many failed cloud migration projects is that vital refactoring step being missed - often because those applications simply can’t be refactored. More illogical still, sometimes just the data is migrated to the cloud (because that part is deemed cheap and/or easy), while the app has to stay on-prem, or the business processes it supports make more sense staying on-premises.

For many organisations, that means planning for a long-term future of running on-prem infrastructure, in an environment where the demographics are increasingly against you, as both development and infrastructure operations skills shift towards cloud-native.

Squaring the circle: supporting on-prem workloads

So, to recap, the pain points are:

  1. Applications and workloads that need to reside on-prem, likely for the long term, and that need support over their life.

  2. Every year, the talent pool of development engineers available to build and support those applications works, and only works, with tools and methodologies that are cloud-native, while the team that originally supported those legacy apps is retraining or retiring.

  3. Every year it becomes harder, and therefore more expensive, to recruit and retain the skilled IT operations people needed to run legacy infrastructure in your own data centre or colo.

The answer, which I suspect won’t surprise you, is of course to build and operate a private cloud based on HyperCloud.

Using HyperCloud, existing applications can be quickly re-hosted and consolidated in a “lift and shift” as the hardware they run on becomes too tricky or too old to manage. Because those workloads never leave your own environment, there’s no worry about unexpected costs such as egress fees, and as you migrate more and more applications into a single cloud infrastructure, the simplicity and resilience that HyperCloud delivers pay ever greater rewards.

But there’s more.

HyperCloud delivers both cloud the place and cloud the model for those cloud-native developers. Now all developers can work within the same environment and, as applications are refactored to take advantage of cloud-native tools and processes, the added benefits that delivers can be quickly realised. If at that point you decide that some workloads can live fully or partially in the public cloud, the step to achieve that becomes trivial.

As for the IT operations challenge: the simplicity that HyperCloud delivers, and the ability to collapse multiple legacy and cloud-native workloads into a single environment, mean that you only need to find a small number of IT generalists to run your private cloud - or you can lean on “intelligent hands” services from your co-location provider instead.

It won’t take ten years. In fact, you could start with something small right now, in just 8U of spare rack space in your data centre or co-lo.

Let me know how you get on.
