Why ‘Fog Computing’ may be needed alongside the Cloud

  • ‘Fog Computing’ can answer real capacity and latency problems
  • Why the Fog trend could be the CSP’s friend
  • CSPs can either host edge computing or provide it as part of a service


Fog: It’s a catchy IT name for the process of distributing processing and storage back out towards the edge of the network where, its proponents argue, it is needed to rebalance the cloud architecture. The current general-purpose Cloud IT model, with end devices attached directly to a central data centre, is not universally optimal as applications become more demanding – especially in the Internet of Things (IoT) domain, where latency and sheer data volume are projected to become major issues. Fog Computing is part of the answer.

The good news is that the trend should be the Communication Service Provider’s (CSP) friend. Fog needs a network edge to play from and that offers access network operators the opportunity to provide compute facilities connected to their communications networks.

That’s the Fog pitch, but how well does it map onto IT and network reality? Is this really just a “Hey, remember me!” strategy for box vendors and CSPs who feel that corporate enthusiasm for pure Cloud is elbowing them out?

No is the answer. There are some real Cloud challenges looming and Fog computing, in one form or another, looks like being part of the solution.

How does ‘Fog’ complement ‘Cloud’?

Metaphorically, we can think of the ‘Cloud’ as high up and thereby able to serve a ‘footprint’ of millions of end devices from horizon to horizon. On this basis Fog describes a thinner layer of resources much closer to the ground (with a narrower purview) but able to serve some applications better by being a short ‘hop’ or two away and therefore more responsive to the end system.

For some applications – especially in IoT – Fog might also find itself performing triage on data straining to get to the cloud: doing some preliminary sifting to analyze and perhaps distil it, throwing out the unnecessary or repetitive.

So the first job for Fog is to solve the round-trip delay problem for some of the data heading from the edge to the middle of the cloud. Until somebody finds a way to beat the speed of light we are stuck with long-distance fibre transmission delay which, the experts say, has to be overcome if the promise of things like driverless cars or remote surgery is to come to fruition. The only way the necessary latency can be achieved for these applications is to keep the journey short – to place the critical data the end device interacts with right at the edge of the network. Enter Fog computing. A rough calculation makes the point, as sketched below.
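Here is a minimal back-of-the-envelope sketch of that physics argument. The speed of light in glass fibre (roughly 200,000 km/s, about two-thirds of c in a vacuum) is a physical constant; the two distances are illustrative assumptions, not measurements from the article.

```python
# Back-of-the-envelope fibre round-trip propagation delay.
# FIBRE_SPEED_KM_PER_S is physics; the example distances are
# illustrative assumptions only.
FIBRE_SPEED_KM_PER_S = 200_000

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay alone (no queuing, routing or processing)
    for a there-and-back trip over the given one-way distance."""
    return 2 * distance_km / FIBRE_SPEED_KM_PER_S * 1000

# A distant central data centre vs. a fog node a few km away:
print(round_trip_ms(1500))  # 15.0 ms gone before any processing starts
print(round_trip_ms(5))     # 0.05 ms from a nearby fog node
```

However fast the data centre itself becomes, that first figure is a hard floor on responsiveness – which is why the critical data has to move closer to the device.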

IoT: a big challenge to the Cloud

Perhaps the biggest long-term challenge to the central ‘Cloud’, as we currently understand it, is IoT. At the moment we’re imagining billions of ‘things’ just popping up at long intervals to send a few bytes of data. But not all applications are going to be so undemanding: it’s already possible to see today’s simple, telemetry-style applications being beefed up to return more and more data over time.

Take the humble domestic boiler. It can be rigged up to return stats on its power usage and can even be controlled to maximize efficiency and reduce bills. All well understood as a worthy metering application today. But what if it could return a constant stream of information on the state of the boiler via sensors? That might enable a central system to use big data analysis (feeding in data from all the boilers of the same model) to be able to predict from tell-tale signs (vibration, overheating, lowered pressure) an imminent failure and to replace the boiler before its owner even knows there’s a problem.

But that beefed-up boiler app will generate a vast amount of data – perhaps several state notifications a second – to identify the fatal pattern. Multiply that by thousands of boilers and the data volume could overwhelm both the network and the cloud storage and compute facility. Fog facilities, however, could aggregate a few hundred boilers at a time, distilling the data for each boiler and forwarding only the ‘exceptions’ to the central Cloud, thus preventing network overload – along the lines of the sketch below.
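As a rough sketch of that filter-and-forward pattern: everything here – the sensor field names, the thresholds, and the forward_to_cloud callback – is a hypothetical illustration, not a real fog API or boiler protocol.

```python
# Hypothetical fog-node triage: distil a burst of per-second readings
# from each local boiler and forward only the exceptions upstream.
# Field names and thresholds are illustrative assumptions.
from statistics import mean

VIBRATION_LIMIT = 0.8  # assumed tell-tale thresholds
PRESSURE_FLOOR = 1.0

def distil(readings):
    """Reduce one boiler's burst of readings to a single summary,
    flagging the patterns worth escalating to the central Cloud."""
    summary = {
        "boiler_id": readings[0]["boiler_id"],
        "avg_vibration": mean(r["vibration"] for r in readings),
        "min_pressure": min(r["pressure"] for r in readings),
    }
    summary["exception"] = (
        summary["avg_vibration"] > VIBRATION_LIMIT
        or summary["min_pressure"] < PRESSURE_FLOOR
    )
    return summary

def triage(bursts, forward_to_cloud):
    """Send only exceptional summaries upstream; drop the routine rest."""
    for readings in bursts:
        summary = distil(readings)
        if summary["exception"]:
            forward_to_cloud(summary)
```

The design point is simply that the raw per-second stream never leaves the neighbourhood: the fog node absorbs it, and the long-haul network carries only the rare summaries the central analytics actually need.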

It’s not that ‘cloud’ is somehow ‘wrong’ or is going to be replaced. Fog is just one more response to the continually shifting balance of advantage between centralized storage and processing (economies of scale, ability to analyze huge data sets, processing flexibility) and distributed computing and storage (local control, reduced network costs, increased responsiveness).

The original version of this article first appeared on Telecom TV.

Telecom TV delivers daily insight on the converging worlds of telecoms, media and entertainment. Views expressed in this article from Telecom TV do not necessarily reflect those of ITU.
