The triad of digital infrastructure
Stacey Higginbotham wrote an interesting piece over at GigaOm, called “Something’s gotta give when big data meets broadband.” She writes,
Scientists aren’t worried about storing or processing all data, according to an article written by Mari Silbey for SmartPlanet. Instead they’re worried about shipping that data from Chile to everyplace else it will be wanted. Basically it’s not a big data issue, it’s a broadband issue.
My feeling is that, while this may be a current bottleneck, if we removed either of the other two pieces, the same scientists would have nightmares figuring out how to store their massive volumes of data or how to crunch the petabytes of information. I don’t think the scientists disagree; they are more concerned that the broadband piece is lagging behind storage and compute capacity.
But there is progress in the broadband arena that will surely support these researchers: notably, a rather under-ballyhooed demonstration taking place this week at the GENI Engineering Conference in Boston.
The demonstration is called Slice Around the World, and you won’t find much about it on the Internet, ironically.
In the Slice Around the World demonstration, software, rather than manually configured network resources, will dynamically direct high-volume data streams for networking and computation over the ultra-high-speed CANARIE and BCNET networks, as well as other research-and-education networks around the world.
Ordinarily, network switches and routers are proprietary hardware, manually configured to direct traffic over specified paths, much as a railroad switches tracks to direct trains onto specific, defined routes. With a programmable network, traffic flow can be controlled by software dynamically, with almost instant response to sharply increased traffic demand or network congestion. These highly programmable OpenFlow-based networks are considered the next dramatic leap in network technology. The Slice Around the World demonstration is a critical step toward proving the capabilities of this approach, which could launch an era of ubiquitous high-performance networking and computation that is as accessible and easy to use as the web is today.
With the software approach, the flow is much more like a river flowing out to a delta: the water (data) flows where there is least restriction without manual intervention.
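To make the river-delta analogy concrete, here is a minimal sketch of the core idea behind a programmable, OpenFlow-style network: a software controller picks a route based on current link congestion, instead of an operator configuring switches by hand. All path names and load numbers below are hypothetical illustrations, not details of the actual GENI, CANARIE, or BCNET deployments.

```python
def least_congested_path(paths, link_load):
    """Pick the candidate path whose busiest link is least loaded.

    paths:     list of candidate paths, each a list of link names
    link_load: dict mapping link name -> current utilization (0.0 to 1.0)
    """
    # A path is only as fast as its most congested link, so compare
    # paths by their worst link and take the best of those.
    return min(paths, key=lambda path: max(link_load[link] for link in path))

# Two made-up candidate routes for shipping data out of Chile.
paths = [
    ["chile-miami", "miami-nyc"],
    ["chile-saopaulo", "saopaulo-nyc"],
]
link_load = {
    "chile-miami": 0.9,      # heavily congested link
    "miami-nyc": 0.2,
    "chile-saopaulo": 0.3,
    "saopaulo-nyc": 0.4,
}

best = least_congested_path(paths, link_load)
print(best)  # the data flows where there is least restriction
```

A real software-defined controller would then push flow rules to the switches along the chosen path, re-running this kind of decision as load changes, which is what removes the manual intervention.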
Could this be the solution to the scientists’ concern about data bottlenecks? Who knows. But advanced networks in Canada and around the world are already delving into the problem.
What are your thoughts?