The story of RIM is well known in the mobile phone world. They were the leaders and effectively the number one supplier of smartphones at the time. All that changed when a company new to the mobile phone world came along with a different approach to phone features, and that company was Apple. Ironically, Apple is now the number one mobile phone company by revenue, while RIM (BlackBerry) sits near the bottom.
What could RIM have done differently? Sometimes it's hard to beat another company that comes along and disrupts the overall market. However, there are things that can be done to protect the business without suing all the competition. Two basic principles apply here: (1) satisfy the core customer segments and (2) innovate.
One of the advantages Apple brought to the iPhone was a large fan base to work with. In some ways it's almost like a cult, and that gave them a core market to start with. Once that core market is satisfied, others see the benefit and jump on board.
Again, how could RIM have avoided the large losses that followed? Well, they started to build a mobile phone OS that was supposed to be better than both Android and iOS. That was probably the first mistake in the process. On the spec sheet, that OS actually was better than the others, but the biggest challenge would be recruiting mobile app developers to embrace a new ecosystem. Following the rules of segmentation, RIM needed to ensure that their core business customers were being served and to fill the gaps they had with the current market.
Here are some options that RIM could have explored back then.
1. Port their BBM platform to iOS and Android to allow communication across all platforms - It seems that WhatsApp is now filling that void.
2. Offer phones running the Android OS. While one could argue that this would undermine the company's core OS, it goes a long way toward satisfying customer needs, and that is the essence of business.
3. Add new hardware features to their existing core products, for example adding a touch screen to the popular BlackBerry Classic. Many other features could be considered for incremental improvement.
4. Double down on their security strengths, especially for corporate users, while adding incremental features to give those users flexibility.
There are many other iterations of what could have been done, but the bottom line is to speak with customers and get a real feel for what will sell. The iPhone is not perfect, but it does the job for a core group and, consequently, for most people. The lesson we can all learn is that customer focus is very important, and innovation should happen around those customers. There is no point trying to please everyone in the market; select a core set of customers and give them the thrill.
Just this week, on December 5, 2014, the ON.Lab group released their new ONOS SDN controller. This controller seems to have a lot of good concepts covered, and we are hoping to try it out over the next few weeks.
One key application that looks promising is the SDNIP app which allows linking the SDN network to legacy networks using BGP. This is critical as most customers need a good way to migrate their networks without total forklift upgrades.
Stay tuned for more updates.
Recently, I was looking at what the future of networks might be like. Nowadays the concept of a controller does have its place and is probably part of the transition to more intelligent networks.
However, despite all the redundancy technologies you can put in place, there are always concerns about controller failures. Looking at what Pluribus Networks is doing, they seem to address some critical areas of future network design with the following.
1. Clustering - Building smart and open clusters allows easier management and coordination of network resources within the data center.
2. Network Hypervisor - The development of the Switch OS as a basic hypervisor creates a lot of opportunities to do many more things with NFV and other applications.
You can read more about Pluribus Networks.
What exactly is Cisco's ACI product? Let's get the basics down first and examine things in more detail later.
Recently Cisco released their Application Centric Infrastructure (ACI) product line, and vendors across the SDN industry immediately criticized them sharply for delivering a hardware-centered design as well as a proprietary one. What is going on in the SDN world in terms of following standards? First of all, Cisco is part of all the major standards bodies and is quite active in them.
Let's put this another way: are there any standards-based solutions in the market today? People talk a lot about VMware NSX, but that is not standards-based; it is built on VMware's own APIs and implementations of protocols. Taking that into consideration, what Cisco is doing is not so far off from the rest of the SDN industry.
Openflow is supposed to represent what is considered pure SDN, but only smaller players like Big Switch and large cloud operators like Google are really embracing it fully. In fact, the mainstream vendors seem to be way behind in this space. Broadcom recently released the OF-DPA SDK libraries, which allow better implementation of Openflow in hardware and solve some of the main challenges faced by the standard. However, I think the bigger question is whether the larger network vendors will step up and release complete solutions around this standard.
The moral of the story is that no one today is really embracing open standards in their solutions. Having an ecosystem is not the same as being open, and this is in fact a challenge for the industry going forward. We should continue to push for standard, open solutions that allow full interoperability based entirely on standards.
Most networks today have not changed despite the fact that SDN has been around for the last few years. One major reason for this slow change is the migration problem faced by SDN, especially Openflow. The incumbent vendors are very slow to embrace open technologies, and the non-traditional vendors do not seem to have the clout to drive change. The good news is that service providers and larger data center operators are taking Openflow seriously.
Here are a few strategies that could be explored to overcome the limitations with migration;
1. Software Upgrade - Does the incumbent vendor already have an Openflow-enabled firmware image for the devices in question? If the answer is yes, do we know what level of support is available?
2. Strategic Replacement - While it may be impractical to replace all devices in the network immediately, there may be an opportunity to replace specific devices that enable an easy migration to SDN. This could work especially well for edge switches.
3. Tunneling - Another option to consider is building tunnels between the SDN switches while using the existing network as basic transport. This is also one of the reasons why overlay technologies such as VxLAN have gotten more traction than Openflow: they leverage the existing network and still deliver the benefits of SDN.
4. Hybrid Controllers - Most SDN controllers are focused only on SDN and therefore not geared towards managing existing networks. It would be useful for these controllers to speak both languages so that customers can see some unified benefit today as they slowly migrate forward.
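To make the tunneling option concrete, here is a minimal sketch of the 8-byte VXLAN header defined in RFC 7348, which is what lets an overlay segment ride on top of the existing IP network (the VNI value below is an arbitrary example):

```python
import struct

# Minimal sketch of the 8-byte VXLAN header from RFC 7348: one 32-bit
# word carries the flags (0x08 = "VNI present" I-bit) and one carries
# the 24-bit VNI followed by 8 reserved bits. The encapsulated Ethernet
# frame follows these 8 bytes inside a UDP datagram.
def vxlan_header(vni: int) -> bytes:
    flags_word = 0x08 << 24            # I-bit set; remaining bits reserved
    vni_word = (vni & 0xFFFFFF) << 8   # 24-bit VNI; low byte reserved
    return struct.pack("!II", flags_word, vni_word)

hdr = vxlan_header(5000)               # VNI 5000 chosen for illustration
```

The 24-bit VNI gives roughly 16 million isolated segments versus the 4K limit of VLANs, which is part of why overlays have been an attractive stepping stone.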
There are many strategies to consider and employ but the bottom line is to show customers the benefits of migrating to a proper SDN Network. Building better greenfield networks is one way to make that case.
Check out this paper from the ONF on migration - Openflow Migration Paper.
Openflow started out with the concept that all communication in the network would have its own flow installed in hardware. This means that if two end points are communicating, there would be one flow for each direction. The number of flows can grow significantly, especially given the east-west traffic patterns of networks today (the N*(N-1) problem).
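A quick back-of-the-envelope calculation shows how fast this grows against the roughly 2,000 ACL entries typical of merchant silicon (the host counts are illustrative):

```python
# Per-pair flow growth under a fully reactive model: each of the N hosts
# needs one flow per direction to each of the other N-1 hosts.
def full_mesh_flows(n_hosts: int) -> int:
    return n_hosts * (n_hosts - 1)

for n in (50, 100, 1000):
    print(n, "hosts ->", full_mesh_flows(n), "flow entries")

# A ~2,000-entry ACL table is already exhausted by a full mesh of
# just 46 hosts (46 * 45 = 2,070 entries).
```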
This approach is great because it allows full control of the network traffic, including statistics and policies for any communication happening between any two stations. The challenge today is that current hardware designs cannot scale to this level of granularity. The only viable workarounds for achieving this ideal behavior today are software-based switches or NPU-type hardware switches. In the ideal sense, flows map to ACL entries in the hardware ASICs, of which most merchant chips today have only about 2,000.
Most ASICs were designed for traditional networking, and they have lots of tables that could be leveraged for SDN purposes. For example, the MAC table in most switches can easily hold from 32K to 124K entries, which creates lots of room to scale. However, the goal of Openflow and SDN is to implement flexible policy across the network fabric, and MAC table entries are very limited: they only track the destination MAC, VLAN and output port. With that in mind, we could build a flexible L2 network using just the MAC table, where two stations communicate once the respective MAC entries are installed. Then we come to L3 requirements as well as the WAN; these ASICs also have L3 and MPLS tables that could be leveraged to perform those functions.
In summary, we could combine the MAC, L3 and other tables to provide basic flexible forwarding within the network fabric, with the Openflow controller coordinating the table programming. We would still not achieve the ideal goal, though, as these tables by themselves do not allow for policies within the network.
Openflow 1.3.1 Promise
While much of this started out in Openflow 1.1, the concept of pipelining across the various hardware tables is the real solution for overcoming current hardware designs and laying the foundation for the future.
Let's say the controller allows all stations to communicate with each other using basic L2 and L3 tables. The next step is to have some control over who can speak to whom. With pipelining you could insert an ACL entry to block traffic between two subnets or, in some cases, apply QoS parameters to specific L4 flows. All of these work together to maximize resource usage: the tables are processed in sequence, as opposed to the single table per packet in Openflow 1.0.
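As a toy model of that multi-table behavior (the table numbers, addresses and field names here are made up for illustration; this is not a real switch API), a packet can be walked through an ACL table and then an L2 table in sequence:

```python
# Toy Openflow 1.3-style pipeline: a packet traverses tables in order
# (table 0 = ACL policy, table 1 = L2 forwarding) instead of hitting a
# single monolithic table as in Openflow 1.0.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_mac: str
    vlan: int

# Hypothetical ACL table: (src_subnet, dst_subnet) pairs to drop.
ACL_DROPS = {("10.1.0.0/16", "10.2.0.0/16")}

# Hypothetical MAC table: (dst_mac, vlan) -> output port.
MAC_TABLE = {("aa:bb:cc:dd:ee:01", 100): 3}

def subnet_of(ip: str) -> str:
    # Crude /16 classifier, sufficient for the sketch.
    a, b, *_ = ip.split(".")
    return f"{a}.{b}.0.0/16"

def pipeline(pkt: Packet) -> str:
    # Table 0: ACL - policy check between subnets, may end processing.
    if (subnet_of(pkt.src_ip), subnet_of(pkt.dst_ip)) in ACL_DROPS:
        return "drop"
    # Table 1: L2 forwarding via the (large) MAC table.
    port = MAC_TABLE.get((pkt.dst_mac, pkt.vlan))
    return f"output:{port}" if port is not None else "punt-to-controller"
```

The point of the split is exactly the resource trade discussed above: the scarce ACL entries hold only inter-subnet policy, while the plentiful MAC entries carry the bulk per-destination forwarding.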
It's very unlikely that you would need to block traffic within a given subnet, but you definitely need to consider policy between subnets. With about 124K MAC entries and 4K L3 entries in most hardware, and by using your ACLs and these other tables wisely, you could easily scale out your data center network.
The next challenge is for both hardware and controller vendors to implement this standard approach to pipelining tables and so deliver the real benefit of an SDN-based fabric. While that is happening, we hope that ASIC vendors will deliver larger ACL tables, closer to 100K entries or more, to allow even more scaling in the network.
Recently Cisco launched their ACI product and started an IETF draft for their OPFLEX protocol. They have taken on the issue of centralized versus distributed control. At the end of the day, what they are trying to say is not totally accurate. First of all, ACI is centralized: you have to define policies from the APIC controller, and that is the beginning and end of it. Yes, once the policy is installed you do not need the controller for traffic forwarding, but the same is true of most of the other SDN protocols.
What about the other SDN protocols, notably Openflow and the various overlays? On the surface, Openflow can be seen as a centralized control plane approach, while overlays may be seen as distributed. The reality is that, yes, in Openflow you need to define flow rules centrally, but you can make these rules proactive to reduce reliance on the controller. In the case of overlays, you need some centralized entity to manage end point addresses across the fabric; if this address management entity goes offline, your network could break significantly.
In summary, the centralized model does give us the advantage of agility and flexible network management but you do need schemes that would reduce the dependency on controller reliability. For example in Openflow, you can implement proactive flows to ensure less dependence on the controller.
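A minimal sketch of that proactive idea, using a hypothetical switch object rather than any real controller API: flows pushed up front keep forwarding alive even when the controller is unreachable, while unmatched traffic in a reactive design depends on the controller being up.

```python
# Hypothetical model contrasting proactive and reactive flow handling.
class Switch:
    def __init__(self):
        self.flows = {}                  # match -> action, held in hardware

    def install(self, match: str, action: str) -> None:
        self.flows[match] = action

    def forward(self, match: str, controller_up: bool) -> str:
        if match in self.flows:
            return self.flows[match]     # hits hardware; no controller needed
        # Reactive fallback: unknown traffic must be punted, which only
        # works while the controller is reachable.
        return "punt-to-controller" if controller_up else "blackhole"

# Proactive setup: push all known subnet routes at startup.
sw = Switch()
for subnet, port in {"10.1.0.0/16": 1, "10.2.0.0/16": 2}.items():
    sw.install(subnet, f"output:{port}")
```

With the routes preinstalled, `sw.forward("10.1.0.0/16", controller_up=False)` still returns a valid output action; only traffic outside the preinstalled set suffers when the controller is down.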
Openflow has emerged as the pioneering protocol of Software Defined Networking, with hopes of transforming networking as we know it. One of the biggest complaints about Openflow for real deployments is the lack of scale.
Recently Broadcom, an ASIC manufacturer, published a new API scheme that shows how to implement multiple-table pipelining, which started in Openflow 1.1 but is mostly talked about with 1.3.1. This pipelining approach allows switch vendors to exploit the many tables already present in current hardware. The API is significant because its implementation could turn the corner for real Openflow deployments in the next year or so.
The other side to the scaling challenge is that Broadcom and other chip vendors are developing ASICs with much larger ACL tables closer to 100K to provide more flexibility in designing Openflow networks.
The idea of application-centric networking is getting more attention nowadays. It has been Cisco's response to the SDN world through its Insieme Networks spin-out and spin-in. There is some merit to the concept of declarative network management, pioneered by the likes of Puppet and Chef, where policies are declared centrally and enforced by the end points.
Cisco is also trying to position their ACI-based technology as open by submitting a draft standard for OPFLEX to the IETF along with other companies. This is also going to be part of the OpenDaylight project.
You can read more about Cisco's ACI here.
UK Chartered Engineer and Manager focused on Innovation in Networking technologies