SDN has been around for some time now, and roughly three basic areas have emerged as part of the movement.
1. Network automation - While automation and scripting have been around for some time, there is growing interest in having automation capabilities and APIs built into the switch OS. Examples of this would be Arista EOS, Cisco onePK, and several others. Tools such as Puppet and Chef are becoming very mainstream in networking.
2. Control Plane Abstraction - This approach is predominantly built around Openflow, for both physical and overlay networking. Obviously, the overlay portion is getting more attention these days. The real focus of this approach is to provide better network-wide abstraction to allow better application-to-network programming. This effort is mainly managed by the Open Networking Foundation (ONF).
3. Open Switching - This could also be called Open Networking. The focus here is to separate the switch OS from the hardware, and it is mainly advocated by the Open Compute Project started by Facebook. The real goal is that any hardware vendor could package the switch with a choice of operating systems based on customer-specific needs. This movement seems biased towards large cloud providers rather than the typical enterprise.
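To make the automation area a little more concrete, switch OSes like Arista EOS expose a JSON-RPC API (eAPI) that scripts can call instead of screen-scraping the CLI. Below is a minimal sketch of the request payload that eAPI's runCmds method expects; the specific show commands are just illustrative, and the function name is my own.

```python
import json

def build_eapi_request(commands, request_id=1):
    """Build a JSON-RPC 2.0 payload in the shape Arista eAPI's
    runCmds method expects (illustrative commands only)."""
    return {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": commands, "format": "json"},
        "id": request_id,
    }

payload = build_eapi_request(["show version", "show interfaces status"])
print(json.dumps(payload, indent=2))
```

In practice this payload would be POSTed over HTTPS to the switch's command-api endpoint; the point here is simply that the switch speaks structured JSON rather than CLI text, which is what makes tools like Puppet and Chef practical on the box.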
Now that we have a view of the different flavors of Software Defined Networking (SDN), what are the business cases for deploying such technology?
1. Cost - The open networking approach could reduce hardware costs in general, since the potential for vendor lock-in would be reduced. The other side of cost reduction could be lowering operational tasks and churn.
2. Flexibility - This would come in several ways. In the case of the open OS approach, the customer can now deploy any network management application on the switch and is no longer dependent on a vendor to provide it. The other side of flexibility is control plane abstraction, where resources can be moved around the network with minimal configuration effort.
3. Services - With all these tools at your disposal, it's now possible to add and deploy new services on the network without significant network changes. This is especially important for security and other functions.
4. Analytics - This has become a very important part of the network, as it gives you the ability to see what has been happening over time, down to the application level, between endpoints.
There are probably other good cases, but we do think these are some of the core benefits of deploying an SDN-style network in the future.
What is your take on these business cases? Leave a comment.
While the software defined networking (SDN) movement is still alive and evolving, there is also the notion of Open Networking, which is especially being pushed by the Open Compute Project. In basic terms, this means a customer can buy a bare-metal switch with no native operating system (NOS) and then get the operating system from another vendor. Of course, this in itself is not SDN, although many of these software companies do have some concept of SDN. Let's look at a few of these companies, especially on the software side:
1. Big Switch - Their current solution is built around Openflow, but they supply both the switch OS and the controller in order to build out a solution. They do allow you a choice of hardware from certified vendors. Switch Light is built on Open Network Linux (ONL), which is part of OCP.
2. Cumulus - This solution is built around the Linux OS, which gives you the flexibility to develop applications that are tailored to your network needs.
3. Pluribus - The concept of a network fabric allows ease of management and therefore more predictability for application deployment, especially using the fabric API concept.
4. Pica8 - An Openflow-focused switch OS, but one that also has a very mature traditional protocol suite.
Those are the main players today in the open networking arena as independent switch OS providers. The question that comes to mind is: how well do these different offerings play together? This is where standards could play a better role over time. The customer has choice in hardware, but today it would be difficult to pick multiple switch OSes and get a clean infrastructure management story.
One good narrative is to standardize on a protocol like Openflow and then settle on, say, OpenDaylight (ODL) as the main foundation for a controller (vendors can deliver controllers leveraging the ODL base). Once that is in place, vendors would focus on delivering applications as their main differentiators. This will probably not happen any time soon, but there may be a case for building out open standards to really enable choice.
The story of RIM is well known in the mobile phone world. They were the leaders and pretty much the number one supplier of "smart phones" at that point in time. All that changed when a company unknown to the mobile phone world came along with a different approach to phone features, and that company was Apple. Ironically, Apple is now the number one mobile phone company by revenue, and RIM (BlackBerry) is pretty much near the bottom.
What could RIM have done differently? Sometimes it's hard to beat another company coming along to disrupt the overall market. However, there are things that can be done to protect the business without suing all the competition. There are two basic principles that apply here: (1) satisfy the core customer segments and (2) innovate.
One of the advantages Apple had when the iPhone launched was a large fan base to work with. In some ways it's almost like a cult, and that gave them a core market to start with. Once that core market is satisfied, others will see the benefit and jump on board.
Again, how could RIM have avoided the large losses that followed? Well, they started to build a mobile phone OS that was supposed to be better than both Android and iOS. That was probably the first mistake in the process. Looking at the spec sheet, that OS was actually better than the others, but their biggest challenge would be recruiting mobile app developers to embrace this new ecosystem. Following the rules of segmentation, RIM needed to ensure that their core business customers were being served and to fill the gaps they had with the current market.
Here are some options that RIM could have explored back then.
1. Port their BBM platform to iOS and Android to allow communication across all platforms - it seems that WhatsApp is now filling that void.
2. Offer phones running the Android OS. While one could argue that this would undermine the company's core OS, it goes a long way toward satisfying customer needs, and that is the essence of business.
3. Add new hardware features to their existing core products - for example, adding a touch screen to the popular BlackBerry Classic. There are many other features to consider for incremental improvement.
4. Double down on their security strengths, especially for corporate users, while adding incremental features to give those users flexibility.
There are many other iterations of what could have been done, but the bottom line is to speak with customers and really get a feel for what will sell. The iPhone is not perfect, but it does the job for a core group and, consequently, most people. The lesson we can all learn is that customer focus is very important while innovating around those customers. There is no point in trying to please everyone in the market; instead, select a core set of customers and give them a thrill.
Just this week, on December 5, 2014, the ON.Lab group released their new ONOS SDN controller. This controller seems to have a lot of good concepts covered, and we are hoping to try it out over the next few weeks.
One key application that looks promising is the SDN-IP app, which allows linking the SDN network to legacy networks using BGP. This is critical, as most customers need a good way to migrate their networks without total forklift upgrades.
Stay tuned for more updates.
Recently, I was looking at what the future of networks might be like. Nowadays, the concept of a controller does have a place and is probably part of the transition to more intelligent networks.
However, despite all the redundancy technologies you can put in place, there are always concerns about controller failures. Looking at what Pluribus Networks is doing, they seem to address some critical areas of future network design with the following:
1. Clustering - Building smart and open clusters allows easier management and coordination of network resources within the data center.
2. Network Hypervisor - Developing the switch OS as a basic hypervisor creates a lot of opportunities to do many more things with NFV and other applications.
You can read more about Pluribus Networks.
What exactly is Cisco's ACI product? Let's try to get the basics of this and then examine things in more detail later.
Recently Cisco released their Application Centric Infrastructure (ACI) product line, and all of the vendors in the SDN industry started to criticize them sharply for delivering a hardware-centered design as well as for being proprietary. What is going on in the SDN world in terms of following standards? First of all, Cisco is part of all the major standards bodies and is quite active in them.
Let's put this another way: are there any standards-based solutions out there in the market today? People talk a lot about VMware NSX, but that is not a standard; it's based on VMware's own APIs and implementations of protocols. Taking that into consideration, what Cisco is doing is not so far off from the rest of the SDN industry.
Openflow is supposed to represent what is considered pure SDN, but only smaller players like Big Switch and larger cloud operators like Google are really embracing it fully. In fact, it seems that the mainstream vendors are way behind in this space. Broadcom recently released the OF-DPA SDK libraries, which allow better implementation of Openflow in hardware and solve some of the main challenges faced by the standard. However, I think the bigger question is whether the larger network vendors will step up and release complete solutions around this standard.
The moral of the story is that no one today is really embracing open standards in their solutions. Having an ecosystem is not the same as being open, and this is in fact a challenge for the industry going forward. We should continue to push for standard, open solutions that allow full interoperability based entirely on standards.
Most networks today have not changed, despite the fact that SDN has been around for the last few years. One major reason for this slow change is the issue of migration faced by SDN, especially Openflow. The incumbent vendors are very slow to embrace open technologies, and the non-traditional vendors do not seem to have the clout to drive change. The good news is that service providers and larger data center operators are taking Openflow seriously.
Here are a few strategies that could be explored to overcome the limitations around migration:
1. Software Upgrade - Does the incumbent vendor already have an Openflow-enabled firmware image for the devices in question? If the answer is yes, do we know what level of support is available?
2. Strategic Replacement - While it may be impractical to replace all devices in the network immediately, there may be an opportunity to replace specific devices that could enable easy migration to SDN. This could work especially well for edge switches.
3. Tunneling - Another option to consider would be using tunnels between the SDN switches while using the existing network as basic transport. This is also one of the reasons why overlay technologies such as VXLAN have gotten more traction than Openflow: they leverage the existing network and still deliver the benefits of SDN.
4. Hybrid Controllers - Most SDN controllers are focused only on SDN and therefore are not geared towards managing existing networks. It would be useful for these controllers to speak both languages so that customers can see some unified benefit today as they slowly migrate forward.
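The tunneling strategy is less exotic than it sounds: an overlay like VXLAN simply wraps the original frame in a small header carrying a 24-bit virtual network identifier (VNI), and the existing network just forwards the outer UDP packet. A sketch of building and parsing that 8-byte header per RFC 7348 (the VNI value here is arbitrary):

```python
import struct

VXLAN_FLAG_VALID_VNI = 0x08000000  # "I" bit set: VNI field is valid (RFC 7348)

def vxlan_header(vni):
    """Pack the 8-byte VXLAN header: 32 bits of flags/reserved,
    then the 24-bit VNI followed by 8 reserved bits."""
    assert 0 <= vni < 2**24, "VNI is only 24 bits"
    return struct.pack("!II", VXLAN_FLAG_VALID_VNI, vni << 8)

def vxlan_vni(header):
    """Extract the VNI from a packed VXLAN header."""
    _flags, word = struct.unpack("!II", header)
    return word >> 8

hdr = vxlan_header(5000)
print(len(hdr), vxlan_vni(hdr))  # 8 bytes, VNI 5000
```

Because the underlay only ever sees ordinary IP/UDP, no forklift upgrade is needed; only the tunnel endpoints have to understand the header.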
There are many strategies to consider and employ, but the bottom line is to show customers the benefits of migrating to a proper SDN network. Building better greenfield networks is one way to make that case.
Check out this paper from the ONF on migration - Openflow Migration Paper.
Openflow started out with the concept that all communication in the network would have its own flow installed in hardware. This means that if two endpoints are communicating, there would be one flow for each direction. The number of flows could increase significantly, especially given the east-west traffic patterns of networks today (the N*(N-1) problem).
This approach is great, as it allows full control of network traffic, collecting statistics and applying policies for any communication happening between any two stations. The challenge we have today is that current hardware designs are very limited in their ability to scale to this level of granularity. The only viable workaround to achieve this ideal behavior today would be using software-based switches or NPU-type hardware switches. In practice, per-flow entries mean ACL entries in the hardware ASICs, of which there are only about 2,000 in most merchant chips today.
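The scale gap is easy to see with a little arithmetic. The 2,000-entry figure below is the merchant-silicon ACL capacity quoted above; the host counts are just examples:

```python
def bidirectional_flows(n_hosts):
    """One flow per direction per host pair: N * (N - 1)."""
    return n_hosts * (n_hosts - 1)

TCAM_ACL_ENTRIES = 2000  # rough merchant-silicon ACL capacity

for n in (10, 100, 1000):
    flows = bidirectional_flows(n)
    fits = "fits" if flows <= TCAM_ACL_ENTRIES else "overflows"
    print(f"{n:>5} hosts -> {flows:>7} flows ({fits} a {TCAM_ACL_ENTRIES}-entry TCAM)")
```

So even a modest 100-host segment already needs 9,900 per-pair flows, roughly five times what the ACL TCAM can hold, which is why the pure fine-grained model does not survive contact with today's ASICs.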
Most ASICs were designed for traditional networking, and they have lots of tables that could be leveraged for SDN purposes. For example, the MAC table in most switches can easily run from 32K to 124K entries, which can create lots of scale. However, the goal of Openflow and SDN is to implement flexible policy across the network fabric, and MAC table entries are very limited: they only match on destination MAC and VLAN and give an output port. Taking that into consideration, we could build a flexible L2 network using just the MAC table, where two stations communicate by installing the respective MAC entries. Then we come to the issue of L3 requirements, as well as the WAN. These ASICs also have L3 and MPLS tables that could be leveraged to perform those functions.
In summary, then, we could combine the MAC, L3, and other tables to provide basic flexible forwarding within the network fabric, with the Openflow controller coordinating table programming. We are still not achieving the ideal goal, though, as these tables by themselves do not allow for policies within the network.
Openflow 1.3.1 Promise
While much of this started out in Openflow 1.1, the concept of pipelining across the various hardware tables is the real solution to overcome current hardware designs and lay the foundation for the future.
Let's say the controller allows all stations to communicate with each other using the basic L2 and L3 tables. The next step is to have some control over who can speak to whom. With pipelining you could insert an ACL entry to block traffic between two subnets or, in some cases, apply QoS parameters to specific L4 flows. All these tables work together to maximize resource usage, processed in sequence as opposed to the single table per packet of Openflow 1.0.
It's very unlikely that you would need to block traffic within a given subnet, but you would definitely need to treat traffic between subnets differently. You have about 124K MAC entries and 4K L3 entries in most hardware, and by smartly using your ACLs together with these other tables, you could easily scale out your data center network.
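A toy model of the multi-table idea: an ACL table is consulted first, with a goto-table to the MAC table for anything not blocked, mirroring the Openflow 1.3 pipeline rather than the single table of 1.0. The subnets, MAC addresses, and ports below are invented purely for illustration:

```python
# Toy Openflow 1.3-style pipeline: table 0 (ACL) -> table 1 (MAC).
# All entries below are invented for illustration.
acl_table = {("10.0.1.0/24", "10.0.2.0/24"): "drop"}   # blocked subnet pair
mac_table = {("aa:bb:cc:00:00:01", 100): "port3",       # (dst MAC, VLAN) -> port
             ("aa:bb:cc:00:00:02", 100): "port7"}

def pipeline(src_subnet, dst_subnet, dst_mac, vlan):
    """Process a packet through the two tables in sequence."""
    if acl_table.get((src_subnet, dst_subnet)) == "drop":
        return "dropped"                      # ACL hit in table 0
    return mac_table.get((dst_mac, vlan), "flood")  # table-miss -> flood

print(pipeline("10.0.1.0/24", "10.0.2.0/24", "aa:bb:cc:00:00:01", 100))  # dropped
print(pipeline("10.0.1.0/24", "10.0.3.0/24", "aa:bb:cc:00:00:01", 100))  # port3
```

The economics follow directly: a handful of subnet-pair ACL entries covers policy for thousands of hosts, while the cheap, plentiful MAC entries carry the per-host forwarding load.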
The next challenge is for both hardware and controller vendors to implement this standard approach to pipelining tables and provide the real benefit of an SDN-based fabric. While that is happening, we hope that ASIC vendors will deliver more ACL-type table entries, closer to 100K or more, to allow even more scaling in the network.
Recently Cisco launched their ACI product and started an IETF draft for their OpFlex protocol. They have taken on the issue of centralized versus distributed control, but at the end of the day, what they are trying to say is not totally accurate. First of all, ACI is centralized, as you have to define policies from the APIC controller; that is the beginning and end of it. Yes, once the policy is installed, you do not need the controller for traffic forwarding, but that is the case with most of the other SDN approaches as well.
What about the other SDN protocols, notably Openflow and the various overlays? On the surface, Openflow can be seen as a centralized control plane approach, while overlays may be seen as distributed. The reality is that, yes, in Openflow you need to define flow rules centrally, but you can make these rules proactive to reduce reliance on the controller. In the case of overlays, you need some centralized entity to manage endpoint addresses across the fabric; if this overlay address management entity goes offline, your network could break significantly.
In summary, the centralized model does give us the advantages of agility and flexible network management, but you do need schemes that reduce the dependency on controller reliability. For example, in Openflow you can install proactive flows to ensure less dependence on the controller.
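The proactive idea can be sketched as rules pre-installed in the switch that keep forwarding even while the controller is offline; only a table miss needs the reactive path. The flow entries and port names below are invented for illustration:

```python
class ToySwitch:
    """Toy switch: forwards from its local flow table; only a
    table miss falls back to the controller (invented flows)."""
    def __init__(self, flows):
        self.flows = dict(flows)      # dst IP -> output port
        self.controller_up = True

    def forward(self, dst):
        if dst in self.flows:
            return self.flows[dst]    # proactive rule: no controller needed
        if not self.controller_up:
            return "drop"             # reactive path unavailable
        self.flows[dst] = "port1"     # controller computes a path (stubbed)
        return self.flows[dst]

sw = ToySwitch({"10.0.0.5": "port2"})
sw.controller_up = False              # simulate a controller failure
print(sw.forward("10.0.0.5"))  # port2: pre-installed flow still works
print(sw.forward("10.0.0.9"))  # drop: table miss with controller down
```

The trade-off is the one discussed above: proactive flows buy resilience at the cost of pre-committing table space, while reactive flows save table entries but put the controller in the critical path.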
UK Chartered Engineer and Manager focused on Innovation in Networking technologies