The Network Approaching Platform Three

"Think outside the computer. In the old days you could count on Moore’s law doubling performance every couple of years – [the CPUs] could just absorb all the software and you just keep writing inefficient applications." Kevin Deierling, CMO, Mellanox. Photo: Lars Bennetzen

By Lionel Snell, Editor, NetEvents

Media and analysts love to track historic aeons, like “the fourth industrial revolution”. According to Ksenia Efimova, IDC’s Senior Research Analyst for EMEA Telecoms and Networking, we are entering the Third Platform. The First Platform was the mainframe computer; the Second Platform, client/server; while the Third Platform “consists of cloud, social, mobility and data; and innovation accelerators, AR, VR, AI, robotics, blockchain, et cetera.” A big platform indeed.

She put this idea to an industry panel representing VMware, NetFoundry and Mellanox Technologies to stimulate discussion around digital transformation and the developments necessary to support its widespread adoption. Change begins with the current move to higher-speed networks, but the panel recognised that this alone is nowhere near enough to address the sheer complexity and volume of relatively unstructured data that will pour in from other developments such as the Internet of Things (IoT). There is the problem of massive energy requirements, calling for greater efficiency. There is the complexity of integrating on-premises, off-premises, cloud-based and traditional data centres, both private and public. It calls for a whole new approach – so where should we begin?

The question is premature until strategy and objectives are agreed. “Are you looking to build a hyperscale data centre? Or just something for a local German subsidiary to keep data in-country? Or are you looking for edge locations to reduce latency? Or IoT processing, where you might simply need a couple of blades or racks within a telco tower?” asks Philip Griffiths, NetFoundry’s Head of EMEA Partnerships. That agreed, one should focus on customer priorities, such as reducing energy consumption, using automation and AI, and making everything software-defined to save sending engineers onsite to fix things manually every time.

A more holistic approach is suggested by Kevin Deierling, Mellanox’s Chief Marketing Officer: “Think outside the computer. In the old days you could count on Moore’s law doubling performance every couple of years – [the CPUs] could just absorb all the software and you just keep writing inefficient applications.” Thinking holistically means optimising not at the box level but across the whole platform – compute, storage, networking and application. The data centre is now the computer. He points out that the downside of virtualisation in the data centre is that it consumes so much CPU power. A Cisco router once forwarded packets in software, but with today’s 100 or 400 gig switches you need a hybrid of ASIC hardware and software to accelerate virtual machine forwarding, firewall rules, load balancing and the like. “When I first told Martin Casado… he was so excited. He goes: ‘you mean you put a virtual switch into silicon in an Ethernet NIC? … that’s fantastic. Too bad I can’t use it, because I’m VMware and I need to control that interface in order to have software control’. It was only then we explained that the control path is still controlled by VMware.”
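That split – a silicon fast path under a software control path – is the essence of the idea. As a rough illustration, here is a minimal Python sketch of the pattern; the class and method names are invented for this example and are not Mellanox’s or VMware’s actual interfaces:

```python
# Toy model of the split described above: the data path (a flow table in
# NIC silicon) forwards packets at line rate, while the control path stays
# in software, owned by the hypervisor's virtual switch. All class and
# method names here are invented for illustration.

class SiliconFlowTable:
    """Stands in for the NIC's hardware flow table."""
    def __init__(self):
        self.rules = {}  # (src, dst) -> action

    def lookup(self, src, dst):
        return self.rules.get((src, dst))  # a hit means hardware forwarding

    def install(self, src, dst, action):
        self.rules[(src, dst)] = action


class SoftwareControlPlane:
    """Stands in for the vSwitch control path, still fully in software."""
    def __init__(self, table):
        self.table = table
        self.policy = {}  # (src, dst) -> action decided in software

    def allow(self, src, dst, out_port):
        self.policy[(src, dst)] = ("forward", out_port)

    def handle_miss(self, src, dst):
        # The first packet of a flow comes up to software; software decides,
        # then offloads that decision so later packets never leave silicon.
        action = self.policy.get((src, dst), ("drop", None))
        self.table.install(src, dst, action)
        return action


def forward(table, ctrl, src, dst):
    action = table.lookup(src, dst)
    if action is None:  # slow path, taken once per flow
        action = ctrl.handle_miss(src, dst)
    return action


table = SiliconFlowTable()
ctrl = SoftwareControlPlane(table)
ctrl.allow("vm-a", "vm-b", out_port=7)
print(forward(table, ctrl, "vm-a", "vm-b"))  # miss -> software -> offloaded
print(forward(table, ctrl, "vm-a", "vm-b"))  # hit: pure "silicon" path
```

The first packet of a flow is decided in software and every subsequent packet is handled in hardware – which is why the hypervisor keeps software control while the CPU stops paying the per-packet forwarding tax.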

Joe Baguley, Vice President and Chief Technology Officer for EMEA at VMware, agrees about seeing the whole data centre as a single computer. In fact, a lot of his customers are not building new data centres, opting instead to rent data centre space or move their VMs to a suitable host. Those that are building data centres want to know how the hyperscalers do it, where everything is basically done in software: “The customers that are building large scale data centres, they’re looking to rack essentially Lego building blocks of hyper-converged infrastructure plugged in via a 10 gig spine, which then goes 10/40, or even 40 to 100… the spine is just flat layer 2, because all the intelligence is done in software on the devices.”
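Those bandwidth steps matter because they set the fabric’s oversubscription ratio – how much server-facing capacity contends for each uplink into the spine. A quick back-of-envelope sketch, with purely illustrative port counts:

```python
# Back-of-envelope oversubscription for a leaf switch: downlink capacity
# to servers versus uplink capacity into the flat layer-2 spine.
# Port counts below are illustrative, not figures from the panel.

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# 48 x 10G server ports feeding 4 x 40G uplinks -> 3:1 oversubscribed
print(oversubscription(48, 10, 4, 40))    # 3.0

# The same leaf with 6 x 100G uplinks -> under 1:1, i.e. non-blocking
print(oversubscription(48, 10, 6, 100))   # 0.8
```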

That allows for a much more energy-efficient use of hardware and what he called “a rolling death”. Where most businesses get cluttered with hardware bought over the years for specific projects, a cloud-scale operator on a software-virtualised platform adds new hardware like Lego blocks. The most critical workloads automatically go onto the latest kit, while older hardware works its way down the hierarchy to lower-priority jobs until it falls away. Far more efficient than the peaks and troughs of hardware refresh cycles.
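In scheduling terms, the “rolling death” is simply priority-driven placement across hardware generations. A toy Python sketch of the idea, with made-up generations, capacities and workloads:

```python
# Sketch of the "rolling death": each hardware generation is a pool; the
# scheduler places the most critical workloads on the newest pool and lets
# older kit absorb lower-priority jobs until it is retired.
# Generations, slot counts and workloads are invented for illustration.

pools = [                      # newest generation first
    {"gen": 2024, "slots": 2},
    {"gen": 2021, "slots": 2},
    {"gen": 2018, "slots": 2},
]

workloads = [                  # (name, priority) - higher is more critical
    ("payments", 9), ("erp", 8), ("analytics", 5),
    ("batch-reports", 3), ("archive-scrub", 1),
]

placement = {}
for name, _prio in sorted(workloads, key=lambda w: -w[1]):
    pool = next(p for p in pools if p["slots"] > 0)  # newest pool with room
    pool["slots"] -= 1
    placement[name] = pool["gen"]

print(placement)
# {'payments': 2024, 'erp': 2024, 'analytics': 2021,
#  'batch-reports': 2021, 'archive-scrub': 2018}
```

When a new generation is racked, it simply becomes the top pool and the whole hierarchy shifts down – no forklift refresh required.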

He sounded a note of caution about energy efficiency: “We have to be aware of Jevons’ paradox in that the more we make something efficient, the cheaper it eventually becomes to run, therefore the more people look for ways to use it, therefore we use more of it”.
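The arithmetic behind the paradox is easy to see with invented figures: if efficiency doubles but the lower cost triples demand, total consumption still rises:

```python
# Jevons' paradox in two lines of arithmetic: a 2x efficiency gain halves
# the energy per job, but if cheaper compute triples the number of jobs,
# total energy consumed goes up. All figures are purely illustrative.

energy_per_job_before, jobs_before = 1.0, 100   # arbitrary units
energy_per_job_after,  jobs_after  = 0.5, 300   # 2x efficiency, 3x demand

print(energy_per_job_before * jobs_before)  # 100.0 units before
print(energy_per_job_after  * jobs_after)   # 150.0 units after
```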

But does this hyper-converged approach suit everyone? Already we see the migration of CRM and ERP to hyper-converged infrastructure, and Baguley agrees: “If you can virtualise it, it will run on hyper-converged… The barrier to HCI is very rarely anything to do with the apps or the software, it’s the people not understanding how to take advantage of HCI… there’s a whole bunch of people that need to give up their fiefdoms and understand they’re playing a bigger game. Networking, compute, storage – all one.”

Kevin Deierling points out how this migrates towards the edge: “We have a SmartNIC that combines 25 gig, 100 gig networking connectivity with ARM cores for edge applications; and now that’s running ESXi. So we’re starting to see hypervisor running on these tiny little machines.” He refers to hyperconverged as “invisible infrastructure”: easy to deploy, and it works – until you move a VM. But with SmartNIC intelligence in the network, when something moves there is a notification and the network adapts: “so now we’ve made the network invisible too”.
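The “network adapts” step is essentially event-driven state repair: the fabric learns the VM’s new location from the move notification and repoints its forwarding state, with no manual reconfiguration. A toy sketch – the event format and tables here are invented, not any vendor’s actual API:

```python
# Sketch of a fabric reacting to a VM-move notification. In practice the
# destination host announces the migrated VM's MAC to the network; here a
# toy location table is simply rewritten when the event arrives.

fabric = {                     # mac -> (switch, port) the fabric learned
    "aa:bb:cc:00:00:01": ("leaf-1", 12),
    "aa:bb:cc:00:00:02": ("leaf-3", 7),
}

def on_vm_moved(event):
    """Handle a move notification from the SmartNIC / hypervisor."""
    mac = event["mac"]
    new_loc = (event["switch"], event["port"])
    old_loc = fabric.get(mac)
    fabric[mac] = new_loc      # repoint forwarding state automatically
    print(f"{mac}: {old_loc} -> {new_loc}")

on_vm_moved({"mac": "aa:bb:cc:00:00:01", "switch": "leaf-2", "port": 4})
```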

Another example of intelligence migrating edge-wards is seen in 5G IoT. It was originally suggested that everything would connect to the cloud, but instead the move is to a multi-tier architecture: massive cloud data centres, then regional cloud data centres, then on-premises edge sites, and maybe even IoT gateways doing local processing.
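The division of labour across those tiers is easy to sketch: the gateway acts on raw readings locally and forwards only aggregates, so each tier up sees less data but a wider view. All figures and thresholds below are invented:

```python
# Sketch of the multi-tier pattern: the IoT gateway filters and aggregates
# locally, the regional site keeps recent aggregates, and only summaries
# travel to the central cloud. Readings and thresholds are illustrative.

readings = [18.2, 18.4, 91.0, 18.1, 18.3]    # sensor values at the gateway

# Tier 1 - gateway: act locally on anomalies, forward only an aggregate.
alerts = [r for r in readings if r > 60.0]   # handled at the edge
summary = {"count": len(readings), "mean": sum(readings) / len(readings)}

# Tier 2 - regional data centre: short-horizon storage and queries.
regional_store = [summary]

# Tier 3 - central cloud: long-term analytics over regional summaries.
cloud_batch = {"regions": 1, "summaries": regional_store}

print(alerts)       # [91.0] - the anomaly never leaves the site
print(cloud_batch)  # only a compact summary reaches the cloud
```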

Returning to human factors: the industry also faces a severe skills shortage, which calls for more automation. Baguley argues: “Enterprises have yet to wake up to the fact that automation is a fundamental design requirement, not a bolt-on… I see these people building systems and then working out afterwards how to automate, as opposed to working out how to build an automated system – it’s the only way you get to scale”. That is a good point on which to end the discussion, as it reminds us that tomorrow’s Third Platform network should be not only faster and more scalable, but also much easier to manage.
