At Hadean, we set out to simplify distributed computing because we realised the traditional approach was ignoring the work that matters most to developers and engineers:
The problems today's developers and engineers want to solve require hyper-scale, distributed applications. And they want to solve these problems using the same code they’ve written locally, but across thousands of cloud servers.
And we’re not just talking about the developers and engineers at the Facebooks, Googles, Apples, and Amazons of the world. We’re talking about every business, including the one-person start-up, because let's face it, every business is now a software business, and they’re looking to adopt algorithms in some shape or form, whether it be machine learning, AI, or simulation, to compete with the big four.
But when developers ship their distributed applications to the cloud, those applications require months of engineering effort to orchestrate, performance-tune and scale up. Then, guess what? The requirements change, and the lengthy, costly process starts all over again.
So that’s what we’ve been focused on solving over the past four years.
But as we’ve listened to more and more businesses trying to build distributed applications, we realised that problems don’t just start at writing the code.
For a lot of teams, middleware is still the only viable solution for distributed applications.
And the reason teams rely on middleware? The operating system is obsolete.
The limitations of using middleware for distributed computing have been a common theme in our conversations with businesses.
For years, developers and engineers have relied on middleware. We know because we’ve helped build distributed systems for advertising bidding, financial services, retail, scientific computing, and online gaming.
And while these middleware-based systems can help you reach your distributed-application goals, the overall experience for developers and engineers is miserable, and often short-lived.
Take a moment to think about it: the cloud has been a common goal for most organisations for well over a decade, and we’ve been stitching together and layering on more and more software abstractions, automation and orchestration to make many servers behave like one.
It’s not that people want to use middleware, it’s that they don’t have a choice. Middleware fills the gaps of outmoded operating systems that have architecturally changed little in the last 40 years. This OS paradigm was designed on the premise that programs ran on a single computer, not across an abundance of machines at your disposal.
Much of this middleware was not designed with real-time performance and reliability guarantees in mind. It was never meant for use in real-time systems such as gaming, but that doesn’t stop developers and engineers from treating it as if it were.
They build huge application stacks employing enterprise architectural patterns such as microservices mixed generously with orchestration tools, hoping they can reach their performance goals.
But without any context of the hardware and its performance, you’ll never be able to deliver an efficient, reliable, real-time experience.
Developers and engineers have relied on middleware for so long mostly because it was the only tool they had.
As more and more businesses offered dynamic applications, developers and engineers started leaning more heavily on different flavours of middleware.
Now middleware has simply turned into a bragging game. It’s not whether you’re smart enough to deliver new value to your customers, it’s whether you’re smart enough to set up and manage the layer upon layer of complexity the middleware creates.
Want to increase the complexity of your deployment? Just implement more middleware.
This might be the worst problem of all when it comes to middleware: reliability.
As a result, developers and engineers end up in a cat-and-mouse game of re-engineering and performance tuning. And, from experience, it can take months to scale an application from 100 servers to 150.
There are more than 26 million lines of code in Linux, and Windows is north of 50 million. Layer the middleware on top of that, and your application on top of the middleware, and the performance and reliability your application needs are now a million miles away.
With the traditional tools, you’re almost guaranteed to create engineering roadblocks no matter how cautious your engineering. And that, in turn, leads to huge compromises and missed opportunities for the business.
As the application cannot trust the reliability of the system, complex redundancy and recovery mechanisms need to be introduced. This adds code bloat as well as behavioural complexity to the running system. It dramatically increases the chance the system will misbehave and/or crash.
In order to simplify distributed computing, we’re building the next generation of operating system here at Hadean — an operating system that completely removes the need for middleware, is reliable at its core, provides real-time guarantees, is designed by default for massively distributed tasks, and doesn’t create a rat’s nest of bloat and complexity.
Imagine if a developer could write a 100-line program, deploy it with a single command to run on the 8 cores of their desktop, and then, with a slight tweak to a command-line argument, deploy it and watch it dynamically scale across hordes of cloud servers. That application wouldn’t need months of performance tuning, nor would it require any middleware or orchestration.
You’d finally be delivering that real-time experience your business has dreamed of.
Developers and engineers wouldn’t have to worry about writing monitoring solutions for bespoke deployments, and they wouldn’t have to worry about unreliable middleware. As soon as the business defined a requirement, the developer could start creating value.
And finally, imagine if the OS was smart.
Imagine if, instead of thinking like one machine, the operating system knew that it was part of a cloud deployment. Imagine if it had the context to know that, when it ran out of resources, it could automatically call on more, either by spinning up more of the same servers (horizontal scaling) or by choosing to spin up more powerful servers (vertical scaling), because it knew that the workload had changed.
Not only is it smart enough to know when and how to scale up, but it’s also frugal enough to know when to scale back down, freeing up resources and saving money when they’re no longer required. No more provisioning headroom for those events that you know are coming but don’t know when. You only provision and pay for the cloud resources you actually need.
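To make the idea concrete, here is a minimal sketch of the kind of scale-out / scale-up / scale-in decision described above. Everything here is an illustrative assumption — the names, the utilisation thresholds, and the "parallelisable means horizontal, otherwise vertical" heuristic are not taken from HadeanOS.

```python
# Hypothetical autoscaling heuristic; not HadeanOS's actual logic.
from dataclasses import dataclass


@dataclass
class WorkloadSample:
    cpu_utilisation: float     # 0.0-1.0, averaged across the fleet
    memory_utilisation: float  # 0.0-1.0, averaged across the fleet
    parallelisable: bool       # can the work be split across servers?


def scaling_decision(sample: WorkloadSample) -> str:
    """Return one of: 'scale_out', 'scale_up', 'scale_in', 'hold'."""
    busy = max(sample.cpu_utilisation, sample.memory_utilisation)
    if busy > 0.80:
        # Out of headroom: add identical servers if the work can be
        # split (horizontal), otherwise move to a bigger server class
        # (vertical).
        return "scale_out" if sample.parallelisable else "scale_up"
    if busy < 0.30:
        # The frugal path: release servers that are no longer needed.
        return "scale_in"
    return "hold"


print(scaling_decision(WorkloadSample(0.95, 0.40, True)))   # scale_out
print(scaling_decision(WorkloadSample(0.50, 0.90, False)))  # scale_up
print(scaling_decision(WorkloadSample(0.10, 0.15, True)))   # scale_in
```

The point of the sketch is the shape of the decision, not the thresholds: an OS with context about the workload can choose between horizontal and vertical scaling, and can reclaim resources when utilisation drops, rather than leaving those choices to layers of middleware.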
You’d be able to deliver more value, better experiences, with fewer outages — and that’s exactly how we picture the future of distributed computing for developers and engineers.
While a lot of what I’ve been describing is our vision for the operating system here at Hadean, we’ve already started to make that vision a reality.
We’ll be attempting to break a world record by using our first distributed application, Aether Engine, to deliver the world’s largest player-vs-player battle. The current world record stands at 6,142 players, achieved by EVE Online. We’re lucky enough to have Hilmar and his team as trusted friends and advisors, and we’ve already tested with 10,000 virtual players, so we’re confident we’re going to blow it out of the water.
It’s our first step toward building a distributed solution that optimises for performance and experience, not just connecting servers.
HadeanOS is a cloud-first operating system that has been engineered and optimised for performance across massively distributed computing infrastructures.