Building Network Architectures of the Future

Added: 30th October 2018 by Juniper Networks

Just as networking technology is evolving, so too are the architectures that connect and support applications and services. In today’s IT world, there is no enterprise-wide infrastructure. Rather, there are individual networks—data centre, campus, branch, public cloud and WAN—each with its own teams, budgets, priorities and tools. And while these networks are also likely to have their own micro drivers for change, ultimately the evolution of network architectures will be somewhat co-dependent as the industry converges isolated devices into coherently managed resource pools.

The macro driver

Individual refresh and expansion cycles will provide these networks with opportunities for incremental improvement, allowing enterprises to take advantage of improved economics in networking equipment. Incremental improvements, however, do not define a macro driver.

The broad context in which all of this will happen is cloud and multi-cloud. As enterprises adopt cloud and, eventually, multi-cloud options to service their application workloads, the concept of network operations will be redefined.

Indeed, the promise of multi-cloud will go unfulfilled without operational change. The end-game is a pool of distributed resources, all managed as a collective with coherent end-to-end control and security. This requires the coordinated evolution of all the places in the network.

While individual in-the-moment priorities will dictate how this evolution unfolds, the ultimate destination requires that the disparate networks in enterprise IT grow closer. This has implications for how architectures are likely to evolve.

Less is more

For years, architectures have been collapsing. Three tiers have become two tiers in the data centre. Multiple boxes are becoming a single box with virtual functions at the branch gateway as uCPE and SD-WAN emerge. Wherever possible, network architectures will look to collapse complexity into a simpler architecture with fewer moving parts, leveraging software to unlock capability.

Additionally, it seems obvious that the go-forward architectures in every place in the network will revolve around a smaller set of standard building blocks. Proprietary and niche protocols will be retired in favour of open alternatives. The number of technologies in production will drop as simplification overtakes customisation as a primary design practice.

This is already happening in the data centre, where BGP EVPN is the de facto standard for deploying IP fabrics. Campus architectures will follow a similar route, likely adopting BGP EVPN as well. This would allow enterprises to converge not just within a specific place in the network, but also across all places in the network, standardising on tools, processes and people.
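To give a flavour of what this convergence looks like in practice, here is a minimal, illustrative sketch of a BGP EVPN-VXLAN leaf configuration in Junos-style syntax. All names, addresses and autonomous-system numbers are placeholders, and a real fabric would need considerably more (underlay routing, spine peers, per-tenant routing instances); this is a sketch of the building blocks, not a deployable configuration.

```
# Hypothetical leaf switch; addresses, VNIs and targets are placeholders.
set protocols bgp group overlay type internal
set protocols bgp group overlay local-address 10.0.0.1
set protocols bgp group overlay family evpn signaling
set protocols bgp group overlay neighbor 10.0.0.101

# EVPN with VXLAN encapsulation for the overlay
set protocols evpn encapsulation vxlan
set protocols evpn extended-vni-list all
set switch-options vtep-source-interface lo0.0
set switch-options route-distinguisher 10.0.0.1:1
set switch-options vrf-target target:65000:1

# Map a tenant VLAN to a VXLAN network identifier
set vlans v100 vlan-id 100
set vlans v100 vxlan vni 10100
```

Because the same protocol machinery (BGP with the EVPN address family) can signal MAC and IP reachability in both data centre and campus fabrics, the operational tooling built around it carries over between those places in the network.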

Broadly open

If there are going to be fewer technologies leveraged architecturally, those technologies will have to be based on broadly adopted open standards.

Open is not new, and debating the value of standards-based approaches is unnecessary. But open will not be enough. The standards that emerge will need to be broadly adopted. Networking teams have no economic leverage when a technology is narrowly available. Clearly, the cost of networking will have to drop, especially as data explodes and applications get distributed. This means that emerging standards will only be transformative if they are adopted by a significant cross-section of suppliers. This would seem to favour not only protocols like BGP EVPN, but also management standards like NETCONF, gRPC, and OpenConfig.
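The practical appeal of a model like OpenConfig is that intent is expressed as vendor-neutral structured data rather than per-vendor CLI. The sketch below builds a small payload shaped like the public `openconfig-interfaces` YANG model; the interface name and description are hypothetical, and only a handful of the model's fields are shown.

```python
import json

def openconfig_interface(name: str, description: str, enabled: bool = True) -> dict:
    """Build a minimal payload shaped like the openconfig-interfaces model.

    Field names follow the public YANG model; the values used here are
    illustrative placeholders, not a complete interface definition.
    """
    return {
        "name": name,
        "config": {
            "name": name,
            "description": description,
            "enabled": enabled,
        },
    }

# One vendor-neutral document, usable against any device that implements
# the model, whether delivered over NETCONF or a gRPC-based transport.
payload = {
    "openconfig-interfaces:interfaces": {
        "interface": [openconfig_interface("xe-0/0/0", "uplink to spine-1")]
    }
}
print(json.dumps(payload, indent=2))
```

The leverage comes from the breadth of adoption: the same document, templates and validation logic apply across every supplier that implements the model, which is precisely the economic point the paragraph above makes.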

Centralised and distributed

The cloud movement is largely perceived as a centralisation of application resources. While this is generally true, it’s worth noting that this is largely a logical phenomenon. That is to say, resources are logically centralised, but physically distributed.

At the core of cloud and multi-cloud is a simple question: do economics or physics dictate the design?

For some applications, the answer is economics, in which case a centralised set of resources that benefit from economies of scale makes sense. For other applications, performance is what matters. IoT is a good example; if performance matters, the workload will likely be moved to the edge. This will drive the proliferation of edge cloud and multi-access edge computing (MEC), where compute and storage are pushed to white box devices at the network edge. For remote connectivity, this will likely lead to remote gateways that resemble uCPEs with limited compute and storage resources co-resident on the device.

Another dynamic that will drive a mix of centralised and distributed architectures is data. Specifically, is it better to move the data to the application, or the application to the data?

Traditionally, the answer has been to move the data. But where there is a large amount of data or the WAN connection is small (satellite connections to a remote drilling platform, for example), moving the application is the more feasible option. This means that, at least for some applications, the cloud will exist at the edge.
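A back-of-envelope calculation makes the tradeoff concrete. The figures below are illustrative assumptions (1 TB of collected data, a 2 Mbit/s satellite link), not measurements from any particular deployment:

```python
def transfer_days(data_bytes: float, link_bps: float) -> float:
    """Days needed to move a dataset over a link, ignoring protocol
    overhead, retransmissions and link contention."""
    seconds = data_bytes * 8 / link_bps
    return seconds / 86_400

# Illustrative: 1 TB of sensor data over a 2 Mbit/s satellite link
print(round(transfer_days(1e12, 2e6), 1))  # roughly 46 days
```

At six weeks per terabyte, shipping the application (or its analytics) to run next to the data is clearly the better option, which is what pushes cloud resources out to the edge.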
