With everything we have seen and heard about cloud computing over the past few years, sometimes something happens that makes you realize just how real the dramatic changes facing the IT industry are. VMware's acquisition of Nicira this week was one of those moments. It should serve as a wake-up call for everyone in the IT industry to prepare for the changes coming to their IT departments. It is no secret that the cloud is surging in popularity and that private cloud architecture is quickly becoming the dominant architectural style for in-house infrastructure. While VMware has clearly won the battle to virtualize enterprise workloads, the battle of cloud controllers such as VMware's vCloud Director is now raging. All of computing is shifting dramatically as the cloud evolves, and this has sparked "Betamax"-style wars across the industry. The computer industry has been through many epochal changes in its history, so we know that as cloud computing matures, consolidation will eventually come and dominant architectural styles will emerge. Vendors will ultimately converge around these common styles, each with their own slant. Right now, nearly every major vendor is making their play in the cloud, striving to make their own architecture the standard of the new era. The key battle VMware faces now is not with Cisco, but with Microsoft, Citrix, OpenStack, CloudStack, Amazon, and even Google.

At one point, enterprise private cloud infrastructure seemed like it might have evolved in a way that was more disconnected from cloud service providers; however, a number of technical and economic factors have shaped the current state of the cloud market. As the significance of computing in all aspects of business continues to grow, rapid application introduction has become increasingly critical at the business level, forcing infrastructure standards to morph at a rapid pace. And as XaaS and consumerization continue to explode, the ability of enterprise IT departments to offer infrastructure that is as streamlined, flexible, accessible and inexpensive as XaaS providers is critical ... at least for those that still want to have in-house infrastructure to manage. The battle for private cloud now demands that VMware offer a holistic, self-contained solution that provides everything cloud application developers demand. And as enterprise applications that were built for the client-server era are re-engineered and purpose-built for cloud architecture, they are emerging with fundamentally different infrastructure demands. Back in the client-server era, applications were largely built around the ability to run on one big server; scale-out methods were primitive and proprietary. The explosion of the internet led to massive improvements in distributed computing.
As cloud-style application architectures have emerged, developers have taken advancements in distributed computing to a whole new level. The latest MapReduce applications truly treat an entire cloud of infrastructure resources as though the cloud were a single system ... and accordingly, many of the application-level interactions that used to happen inside of a single computer are now happening across the cloud fabric. As a result, cloud application developers are far more network savvy than those who have focused on enterprise infrastructure alone may realize. For the average enterprise IT worker, the intricacies of advanced distributed computing have been largely hidden. For the past 10 years, as enterprises focused on virtualizing applications built for the client-server era, web providers have been swelling the ranks of IT professionals who understand web-style programming and architecture. All of these developers have to learn networking for development, and they have to debug their applications across network fabrics. The old model, where the network guy comes in with a sniffer to help debug application problems, is really only relevant to legacy applications. In this new world, application developers are already using tcpdump and packet-level analysis tools to debug application streams across a network. Not only is the traditional network guy not needed, but often their skill set is still optimized for legacy applications that only sent more primitive communications across the network. Much like one of the prevailing themes of the past ten years has been the effort by enterprises to virtualize large percentages of their applications, today the momentum has shifted to a very analogous effort ... to move virtualized applications into the highly optimized and automated cloud application lifecycle.
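To make the MapReduce point concrete, here is a minimal, single-process sketch of the pattern (a word count); in a real deployment the map, shuffle and reduce steps run on different machines, and the shuffle step in particular is the all-to-all network transfer that forces these developers to become network savvy:

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit (word, 1) pairs, one per word occurrence."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle step: group values by key. On a real cluster this is
    the all-to-all transfer that happens across the cloud fabric."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the cloud is the computer", "the network is the computer"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

The interactions that used to be function calls inside one computer (map output feeding reduce input) become network streams between hosts, which is exactly why these developers reach for tcpdump when something breaks.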
In the early days of virtualization, there were a lot of inhibitors limiting virtualization to a small percentage of enterprise applications. With time, hypervisor vendors added new features and applications evolved, to the point we are at today where most applications can be virtualized. So the momentum over the next few years will largely be around how many different applications we can stuff into one common private cloud. One of the biggest challenges with this effort is that many different applications, most notably the newest and emerging cloud applications, have distinctly different network and topology requirements. And where it gets really challenging is the need for elasticity ... for applications to grow and shrink on demand ... which for distributed applications means dynamic modification of network topologies. So how do you stuff a bunch of applications with disparate topology requirements into a single cloud with a single static topology? The same way you put numerous applications and operating systems onto a single server: by inserting a virtualization layer that shields applications from the complexities of the physical infrastructure. Network topology itself is now becoming a network service, and true network virtualization will allow hypervisor environments to provide these virtualized network topologies and services. If you have worked in the networking industry and have any exposure to VMware, it is pretty obvious that the type of virtualization common in the networking industry (VRF/VDC) isn't even in the same league as the type of virtualization that VMware has provided for servers. Ultimately it comes down to this: cloud developers require the ability to define network services and behavior dynamically through software ... something the traditional network just can't do.
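As a purely hypothetical sketch (every class and name here is invented for illustration), this is what "topology as a network service" means in practice: logical networks are just software objects decoupled from the one static physical fabric, so they can overlap, grow and shrink on demand:

```python
class LogicalNetwork:
    """A hypothetical virtual network: its topology is just data, so
    it can grow and shrink on demand without touching any switch."""
    def __init__(self, name, subnet):
        self.name = name
        self.subnet = subnet
        self.ports = []          # virtual ports attached to VMs

    def attach(self, vm_name):
        """Elastic growth: attaching a VM is a software operation."""
        port = f"{self.name}-{vm_name}"
        self.ports.append(port)
        return port

    def detach(self, port):
        """Elastic shrink: topology change, again purely in software."""
        self.ports.remove(port)

class VirtualizationLayer:
    """Shields applications from the physical fabric: many logical
    topologies, even with overlapping subnets, share one static fabric."""
    def __init__(self):
        self.networks = {}

    def create_network(self, name, subnet):
        net = LogicalNetwork(name, subnet)
        self.networks[name] = net
        return net

fabric = VirtualizationLayer()
web = fabric.create_network("web-tier", "10.0.0.0/24")
hadoop = fabric.create_network("hadoop", "10.0.0.0/24")  # same subnet, isolated
web.attach("vm1")
hadoop.attach("vm1")
```

The point of the sketch is the contrast with VRF/VDC-style virtualization: here the application's topology requirements never touch the physical device configuration at all.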
The main job of the private cloud controller is to examine the needs of applications and their changing demands in real time and optimize them across a pool of server, storage and networking resources, driving resource utilization to the highest possible levels without impeding application performance. For networking, this idea is a nightmare: it simply cannot function atop the industry's antiquated approach to Quality of Service (QoS). And this is THE critical point driving SDN. For private clouds to achieve the key goals of their current growth trajectory, the cloud controller must tightly manage network access and each application's network requirements; this job simply cannot be part of a separately controlled third-party solution. And clearly the legacy approach to QoS cannot be extended to this level of demand. Over the past 15 years we have watched QoS evolve from a model built at a time when application architectures barely resembled what they are today. The networking industry has approached modernizing QoS on an application-by-application basis, and even with that slow one-app-at-a-time approach, new network-sensitive applications like VoIP and FCoE have taken years to implement. Each of these also had the benefit of frequently being the only prioritized traffic on a given link, and in the case of VoIP, real contention for bandwidth was rare. And today, despite years of effort, multivendor/heterogeneous FCoE fabrics still seem like a pipe dream. It is astoundingly clear that this approach will not work for the emerging demands of the cloud. This is exactly why OpenFlow has been so appealing to cloud developers ...
while traditional networking devices still have no real awareness of network conditions in their forwarding decisions, even the earliest OpenFlow applications written by grad students showed how powerful the OpenFlow paradigm is in its ability to forward not only based on real-time network conditions, but also with real-time awareness of application and server availability. This behavior is exactly what cloud developers are looking for, hence their affection for SDN. Because the traditional network has been abysmal at providing meaningful application services, interfaces or programmability to web application developers, for years application developers have been building patchwork at the application layer to compensate for the inability to communicate with the network. If any Cisco fans have read to this point, this statement may upset them, but it was the exact theme of David Ward's talk at the first Open Networking Summit. This paved the way for the enormous success of Open vSwitch (OVS) in the cloud provider market. OVS has become so popular among cloud providers that the OVS kernel module is now part of mainline Linux. Because Open vSwitch resides in the hypervisor and is open source, it gives application developers a new way to overcome many of the limitations of the developer-unfriendly traditional network. As a result, over the past few years OVS has allowed many of the world's largest networks to bypass the constraints of legacy infrastructure and deliver elastic network capabilities.

What will this mean for the networking industry? I predict hypervisor networking will become the operational domain of application developers and VMware administrators, so that a single generalist team will handle the management of the private cloud and everything that composes it, including the internal cloud-container fabric technologies.

Why this is bad for Cisco: the battle for the cloud has cut into some of Cisco's most strategic ground. As clouds seek all-encompassing support for the performance-related features of every application, the traditional access layer and its associated network services are being absorbed into the cloud management platform, severely limiting Cisco's ability to provide the value-added services that sustain their margin levels and strong brand loyalty. This need for all-inclusive support means that the only critical features VMware will not move to cover are those significant to the infrastructure itself rather than to applications or workloads. This significantly limits the value proposition of UCS and undermines important strategic cornerstones like VN-Tag and the infamous Palo Alto. And with this move, VMware offers customers a self-service private cloud built to do its foundational work on vendor-agnostic infrastructure. And not just any infrastructure, but the new style of purpose-built, cloud-optimized hardware used by CloudFoundry and other leading IaaS/PaaS providers, a style that really looks nothing like the UCS architecture. With strong precedents supporting hybrid and community clouds, the architectural styles used by public cloud providers are having a significant influence on how enterprises will ultimately deploy their private clouds. While enterprises have unique requirements and will not deploy identical infrastructure, what becomes dominant will be adapted versions of cloud-provider infrastructure, not fundamentally different enterprise styles like UCS. I don't mean to simply attack UCS here; it has some great features, but ultimately the industry will converge around common architectural styles, and UCS increasingly looks like a niche architecture.

As hypervisor networking grows and VMware administrators become confident in their ability to manage their own virtual networks, physical networking solutions will emerge that are built to have plug-and-play compatibility to support and strengthen hypervisor networks. This will shift the administrative domain controlling the cloud fabric to virtualization administrators and application developers and architects. And it is fair to assume that VMware, Microsoft and Citrix will eventually certify different vendors' networking hardware, further challenging Cisco's dominance. While having to sell to a very different audience in customer environments and support entirely new features in a new and different marketplace are challenges, the biggest challenge for Cisco will be their competition with VMware. Cisco has a tendency to constrain their features to push customers toward purchasing more of their products. So as private clouds continue to encroach on Cisco's strategic ground and limit the value propositions of Cisco's data center ambitions, I find it unlikely that Cisco will take this lying down. My bet is they will move rapidly to develop advanced features limited to their N1k and UCS customers. I anticipate hearing about how VMware and other private cloud deployments will work much better for those that buy the N1k and UCS, pushing those that want to stick to VMware's roadmap elsewhere. Cisco has already kept crucial features out of their physical networking portfolio to help push their other platforms, and unless they drop their competing lines, this type of behavior is expected and natural. And frankly there is nothing wrong with it, but it will open the door for Cisco's competitors to strengthen VMware's native toolset without holding out premium features for UCS and N1k customers. So I am not simply trying to attack Cisco and spare their competitors; it just seems clear that Cisco is in the more vulnerable position here. And if Cisco loses key ground in the data center, it will make them more susceptible to attacks from their competitors across the board.
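The condition-aware forwarding described earlier is worth making concrete. This is a toy illustration only, not real OpenFlow code (the function and data names are invented): a controller picks an output port for a new flow using real-time link load and server liveness, the kind of decision a static QoS policy on a traditional device simply cannot express:

```python
def choose_port(flow, link_load, server_up):
    """Toy 'controller' logic: forward a new flow out the least-loaded
    link whose destination server is actually alive. A real OpenFlow
    controller would then push the resulting match/action rule down
    to the switch; here we just return the chosen port."""
    candidates = [port for port, dst in flow["paths"].items()
                  if server_up.get(dst)]
    if not candidates:
        return None                  # no live destination: drop or buffer
    return min(candidates, key=lambda p: link_load.get(p, 0.0))

# Hypothetical state the controller sees in real time.
flow = {"paths": {"port1": "srv-a", "port2": "srv-b", "port3": "srv-c"}}
link_load = {"port1": 0.9, "port2": 0.2, "port3": 0.4}      # fraction of capacity
server_up = {"srv-a": True, "srv-b": False, "srv-c": True}  # srv-b is down
best = choose_port(flow, link_load, server_up)
```

Note that the lightly loaded port2 is skipped because its server is down, and the decision combines network state with application/server availability in one place, which is precisely the appeal to cloud developers.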
I really don't see them keeping the same level of brand loyalty if other switch vendors get the opportunity to shine in the data center; it will demonstrate clearly that Cisco isn't the only company that can make a switch. While the pace of change across all of technology has been maddening, this acquisition really signifies the cementing of the way that much of architecture will evolve in the cloud era, and the vision of the future of networking is now increasingly clear. The private cloud has unique needs, and the networking components of each cloud container will become the domain of the private cloud management platform, separate from the rest of the network, emerging as a new and distinctly different networking marketplace and ecosystem where an entirely different group of players will control the industry. This move adds substantively to the SDN movement and is among the most powerful evidence to date that SDN will be the way of the future.
Another key requirement is optimizing the efficiency of the infrastructure. For years, as virtualization efforts have matured, enterprise VMware administrators have worked to find the optimal mix of applications to maximize average resource utilization on their servers. To date, this effort has focused on maximizing CPU, memory and storage utilization; the network has largely gotten a pass, as Cisco has raised barriers to keep VMware administrators from encroaching on domains under its control. However, the latest generation of server CPUs has brought a renewed focus on I/O efficiency, and the current momentum is to scrutinize network usage the same way CPU, memory and storage utilization are scrutinized. This is by no means a simple proposition, and to make it even more challenging, private cloud platforms seek to do it in an automated fashion.
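Treating network usage like CPU or memory utilization reduces, at its simplest, to sampling interface byte counters and turning the delta into a percentage of link capacity. A small sketch of that rollup (the counter values and the 10 GbE link speed are assumed figures, not from any particular platform):

```python
def link_utilization(bytes_t0, bytes_t1, interval_s, link_bps):
    """Convert two byte-counter samples, taken interval_s seconds
    apart, into percent utilization of a link of speed link_bps:
    the same style of rollup admins already do for CPU and memory."""
    bits = (bytes_t1 - bytes_t0) * 8
    return 100.0 * bits / (interval_s * link_bps)

# Hypothetical samples from a 10 GbE NIC counter, 10 seconds apart.
util = link_utilization(bytes_t0=1_000_000_000,
                        bytes_t1=3_500_000_000,
                        interval_s=10,
                        link_bps=10_000_000_000)
```

The hard part the text alludes to is not this arithmetic but doing it continuously, per application, and feeding the results back into automated placement decisions.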
Historically, enterprises have not taken hypervisor networking very seriously. When VMware first came on the scene, rapid provisioning and process optimization carried so much business value that few complained about the broken state of the traditional network access layer. Cisco has been fighting to control the access layer and its important network value-added services: as customers adopted VMware and its originally simplistic vSwitch, Cisco focused on getting customers to adopt VN-Tag or the Nexus 1000V (N1k) to retain control of access-layer services, while VMware slowly added features, providing advanced network capabilities that compete with the N1k. In my experience, however, the enterprise market as a whole has shown limited interest in advanced hypervisor networking, and only the latest developments show that this space is a real threat. Hypervisor networking got a massive boost with the announcement of VXLAN, a tunneling protocol that allows VMware to bypass many of the constraints of the physical network entirely. While the VXLAN announcement was significant, it was not clear how aggressively VMware would pursue the hypervisor networking space; but looking now at VMware's latest distributed switch, at VXLAN, and at the multi-billion-dollar acquisition of Nicira's advanced capabilities, it seems very clear that this is important strategic ground for VMware.
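VXLAN's "bypass" of the physical network is, mechanically, just encapsulation: the original Layer 2 frame rides inside UDP, prefixed by an 8-byte VXLAN header carrying a 24-bit network identifier (VNI). A sketch of that header layout, following the format later standardized in RFC 7348:

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: a flags byte with the I bit set
    (meaning the VNI field is valid), 24 reserved bits, the 24-bit
    VXLAN Network Identifier, and a final reserved byte. The 24-bit
    VNI (~16 million segments) is why VXLAN scales past the
    4094-VLAN limit of the physical network."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08                          # I flag: VNI present
    return struct.pack("!I", flags << 24) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5000)   # segment 5000, independent of any physical VLAN
```

Because the physical fabric only ever sees ordinary UDP packets between hypervisors, the logical segments defined by VNIs can be created and torn down without reconfiguring a single physical switch.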
We can expect hypervisor networking to emerge from its often-overlooked status to become the new darling of data center networking. Data center networking itself will now be divided into separate markets, and notably the hypervisor networking market will grow increasingly distinct from the traditional networking market. The fabric connecting the inside of a private cloud now becomes analogous to a computer's motherboard and will evolve with the hypervisor market, creating increasingly divergent characteristics between the fabrics inside a compute cluster and the fabrics that connect different clusters of compute resources (i.e., grids or cloud containers) together.
This is bad for Cisco for some obvious and some less obvious reasons. The most obvious is Cisco's Nexus 1000v, which will now face fierce competition from VMware itself. Ultimately, Cisco is not competing at VMware's level: they are not competing for the cloud controller. The enterprise hypervisor networking space will therefore become part of the competition between VMware, Microsoft, Citrix and smaller vendors like Eucalyptus, Ubuntu and Piston.
I should point out that I am a Dell employee, but this is my personal blog and these are my personal opinions, which do not necessarily reflect Dell's positions.