Recently, Arista released a white paper exploring the idea that deeper buffers in the network can help alleviate the congestion that occurs when a large number of many-to-one connections converge within a network, a pattern also known as the TCP incast problem. The paper pointedly targets Hadoop clusters, since incast can rear its ugly head when a cluster is used for MapReduce jobs. The study uses an example of 20 servers hanging off a single ToR switch with 40Gbps of uplink capacity in a Leaf/Spine network, a 5:1 oversubscription ratio. The same sort of oversubscription appeared in Facebook's recent disclosure of the network used within its data centers, so it's safe to assume these ratios are seen in the wild. I know I've run my fair share of oversubscribed networks myself.
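The 5:1 figure falls straight out of the port math, which implies 10GbE server ports (20 x 10Gbps = 200Gbps of downlink against 40Gbps of uplink). Here is a minimal sketch of that calculation:

```python
# Back-of-the-envelope oversubscription math for the example above:
# 20 servers at 10Gbps each feeding 40Gbps of ToR uplink capacity.

def oversubscription_ratio(servers: int, server_gbps: float, uplink_gbps: float) -> float:
    """Ratio of server-facing (downlink) capacity to uplink capacity."""
    return (servers * server_gbps) / uplink_gbps

ratio = oversubscription_ratio(servers=20, server_gbps=10, uplink_gbps=40)
print(f"{ratio:.0f}:1 oversubscription")  # prints "5:1 oversubscription"
```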
With the blurring of technology lines, the rise of competitive companies, and a shift in buying models all before us, it would appear we are on the cusp of ushering in the next era in IT: the Third Platform Era. But as with the
other transitions, it is not the technology or the vendors that trigger a
change in buying patterns. There must be fundamental shifts in buying
behavior driven by business objectives.
The IT industry at large is in the midst of a massive rewrite of key business
applications in response to two technology trends: the proliferation of data
(read: Big Data) ... (more)
Amazon is indisputably the biggest name among cloud service providers. They have
built up a strong market presence primarily on the argument that access to
cheap compute and storage resources is attractive to companies looking to
shed IT costs as they move from on-premises solutions to the cloud. But after
the initial push for cheap resources, how will this market develop?
Is cheap really cheap?
Amazon has cut prices on their cloud offerings more than 40 times since
introducing the service in 2006. The way this gets translated in press
circles is that cloud services pricing is approa... (more)
Software-defined networking is fundamentally about two things: the
centralization of network intelligence to make smarter decisions, and the
creation of a single administrative touch point (or a smaller number of them)
to allow for streamlined operations and to promote workflow automation. The
former can potentially lead to new capabilities that make networks better (or
create new revenue streams), and the latter is about reducing the overall
operating costs of managing a network.
Generating revenue makes perfect sense for the service providers who use
their network primarily as a mea... (more)
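That second idea, the single administrative touch point, is worth making concrete. Below is a minimal sketch in which one controller holds the desired state and a single call fans it out to every managed device. The Controller and Device classes and the push_intent() call are hypothetical, for illustration only, and not any particular vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """A managed network device; config stands in for its running state."""
    name: str
    config: dict = field(default_factory=dict)

@dataclass
class Controller:
    """The single administrative touch point: it owns the device list."""
    devices: list[Device] = field(default_factory=list)

    def push_intent(self, intent: dict) -> None:
        # One administrative action updates every device the controller
        # manages, instead of one box-by-box change per device.
        for device in self.devices:
            device.config.update(intent)

ctrl = Controller(devices=[Device("leaf1"), Device("leaf2"), Device("spine1")])
ctrl.push_intent({"vlan": 100, "qos_profile": "big-data"})
print([d.config for d in ctrl.devices])
```

The point of the sketch is that operational effort scales with the number of touch points rather than the number of devices, which is where the reduction in operating costs comes from.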
We are two short weeks away from HadoopWorld, one of the world’s largest
Big Data conferences. October 15–17, our team will be in New York City to
demo our Big Data fabric and answer questions about preparing networks for
Big Data. Stop by booth 552 to catch up with our team and pick up a pair of
Plexxi Socks. We look forward to seeing you there.
In this week’s PlexxiTube of the week, Dan Backman describes how Plexxi
manages load balancing in Big Data networks.
Check out what we’ve been up to on social media this week. Have a great week!