The spine switch can also be configured to send EVPN routes learned in the Layer 2 VPN EVPN address family to the IPv4 or IPv6 unicast address family and advertise them to the external routing device. Regardless of the standard followed, documentation and record keeping of your operation and maintenance activities are among the most important parts of the process. The control plane learns end-host Layer 2 and Layer 3 reachability information (MAC and IP addresses) and distributes this information through the EVPN address family, thus providing integrated bridging and routing in VXLAN overlay networks. If device port capacity becomes a concern, a new leaf switch can be added by connecting it to every spine switch and adding the network configuration to the switch. Data Centered Architecture is also known as Database Centric Architecture. It is arranged as a guide for data center design, construction, and operation. Maximum efficiency is achieved by considering the factors discussed below. Data center design and infrastructure standards can range from national codes (required), such as those of the NFPA, to local codes (required), such as the New York State Energy Conservation Construction Code, to performance standards such as the Uptime Institute’s Tier Standard (optional). Note that ingress replication is supported only on Cisco Nexus 9000 Series Switches. This helps ensure infrastructure is deployed consistently in a single data center or across multiple data centers, while also helping to reduce costs and the time employees spend maintaining it. In a typical VXLAN flood-and-learn spine-and-leaf network design, the leaf Top-of-Rack (ToR) switches are enabled as VTEP devices to extend the Layer 2 segments between racks. The FabricPath IS-IS control plane builds reachability information about how to reach other FabricPath switches. 
Its control plane protocol is FabricPath IS-IS, which is designed to determine FabricPath switch ID reachability information. Up to four FabricPath anycast gateways can be enabled in the design with routing at the border leaf. MSDCs are highly automated, using automation to deploy configurations on the devices, discover any new devices’ roles in the fabric, monitor and troubleshoot the fabric, and so on. Cisco VXLAN MP-BGP EVPN spine-and-leaf architecture is one of the latest innovations from Cisco at the time of this writing. It doesn’t learn host MAC addresses. This series of articles will focus on the major best practices applicable across all types of data centers, including enterprise, colocation, and internet facilities. FabricPath has no overlay control plane for the overlay network. FabricPath is a Layer 2 network fabric technology that allows you to easily scale network capacity simply by adding more spine nodes and leaf nodes at Layer 2. These IP addresses are exchanged between VTEPs through the BGP EVPN control plane or static configuration. If deviations are necessary because of site limitations, financial limitations, or availability limitations, they should be documented and accepted by all stakeholders of the facility. The data center is a dedicated space where your firm houses its most important information and relies on it being safe and accessible. The spine switch runs MP-BGP EVPN on the inside with the other VTEPs in the VXLAN fabric and exchanges EVPN routes with them. The Layer 2 overlay network is created on top of the Layer 3 IP underlay network by using the VTEP tunneling mechanism to transport Layer 2 packets. Table 4 summarizes the characteristics of a Layer 3 MSDC spine-and-leaf network. 
In most cases, the spine switch is not used to directly connect to the outside world or to other MSDC networks, but it will forward such traffic to specialized leaf switches acting as border leaf switches. A typical FabricPath network uses a spine-and-leaf architecture. Features exist, such as the FabricPath Multitopology feature, to help limit traffic flooding in a subsection of the FabricPath network. The ease of expansion optimizes the IT department’s process of scaling the network. Table 4: Cisco VXLAN flood-and-learn network characteristics. (Note: Ingress replication is supported only on Cisco Nexus 9000 Series Switches; the underlay can run any unicast routing protocol: static routing, Open Shortest Path First [OSPF], IS-IS, External BGP [eBGP], etc.) Each tenant has its own VRF routing instance. With VRF-lite, the number of VLANs supported across the VXLAN flood-and-learn network is 4096. Customer edge links (access and trunk) carry traditional VLAN tagged and untagged frames. The Layer 2 and Layer 3 function is enabled on some FabricPath leaf switches called border leaf switches. The nature of your business will determine which standards are appropriate for your facility. Both designs provide centralized routing: that is, the Layer 3 routing functions are centralized on specific switches. That is definitely not best practice. Please review this table and each section of this document carefully and read the reference documents to obtain additional information to help you choose the technology that best fits your data center environment. At the same time, it runs the normal IPv4 or IPv6 unicast routing in the tenant VRF instances with the external routing device on the outside. Facility ratings are based on Availability Classes, from 1 to 4. The IT industry and the world in general are changing at an exponential pace. We are continuously innovating the design and systems of our data centers to protect them from man-made and natural risks. These are the VN-segment edge ports. Table 3. 
FabricPath links (switch-port mode: fabricpath) carry VN-segment tagged frames for VLANs that have VXLAN network identifiers (VNIs) defined. It provides a simple, flexible, and stable network, with good scalability and fast convergence characteristics, and it can use multiple parallel paths at Layer 2. Each VTEP performs local learning to obtain MAC address information (through traditional MAC address learning) and IP address information (based on Address Resolution Protocol [ARP] snooping) from its locally attached hosts. The VXLAN MP-BGP EVPN spine-and-leaf architecture uses Layer 3 IP for the underlay network. Two major design options are available: internal and external routing at a border spine, and internal and external routing at a border leaf. ●      It provides VTEP peer discovery and authentication, mitigating the risk from rogue VTEPs in the VXLAN overlay network. Multicast group scaling needs to be designed carefully. Each VTEP device is independently configured with this multicast group and participates in PIM routing. With VRF-lite, the number of VLANs supported across the FabricPath network is 4096. For more information on Cisco Network Insights, see https://www.cisco.com/c/en/us/support/data-center-analytics/network-insights-data-center/products-installation-and-configuration-guides-list.html. The Layer 3 routing function is laid on top of the Layer 2 network. (Note: Ingress replication is supported only on Cisco Nexus 9000 Series Switches.) It is clear from history that code minimum is not best practice. It is an industry-standard protocol and uses underlay IP networks. Each VXLAN segment has a VXLAN network identifier (VNID), and the VNID is mapped to an IP multicast group in the transport IP network. These formats include Virtual Extensible LAN (VXLAN), Network Virtualization Using Generic Routing Encapsulation (NVGRE), Transparent Interconnection of Lots of Links (TRILL), and Location/Identifier Separation Protocol (LISP). 
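The VNID-to-multicast-group mapping described above can be sketched in Python. This is only an illustration: the `vni_to_mcast_group` helper, the 239.1.0.0 base address, and the modulo scheme are assumptions for demonstration, since real deployments configure the VNID-to-group mapping explicitly on each VTEP. A pool smaller than the number of VNIDs forces several segments onto one group, which is the group-overloading trade-off the document notes elsewhere.

```python
import ipaddress

def vni_to_mcast_group(vni: int, base: str = "239.1.0.0",
                       pool_size: int = 256) -> str:
    """Map a 24-bit VNID onto a pool of underlay multicast groups.

    Hypothetical scheme: offset into a contiguous block of group
    addresses by (vni mod pool_size), so multiple VNIDs may share a
    group when the pool is small.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNID must fit in 24 bits")
    base_addr = int(ipaddress.IPv4Address(base))
    return str(ipaddress.IPv4Address(base_addr + (vni % pool_size)))
```

For example, VNID 5 maps to 239.1.0.5 under these assumptions, and VNIDs 0 and 256 collide on 239.1.0.0 because the illustrative pool holds only 256 groups.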
Layer 3 IP multicast traffic is forwarded by Layer 3 PIM-based multicast routing. But most networks are not pure Layer 2 networks. Layer 2 multitenancy example with FabricPath VN-Segment feature. It is designed to simplify, optimize, and automate the modern multitenancy data center fabric environment. The requirement to enable multicast capabilities in the underlay network presents a challenge to some organizations because they do not want to enable multicast in their data centers or WANs. Many MSDC customers write scripts to make network changes, using Python, Puppet, Chef, and other DevOps tools, as well as Cisco technologies such as Power-On Auto Provisioning (POAP). The VXLAN flood-and-learn spine-and-leaf network complies with the IETF VXLAN standards (RFC 7348). The spine layer is the backbone of the network and is responsible for interconnecting all leaf switches. Cisco Data Center Network Manager (DCNM) is a management system for the Cisco® Unified Fabric. The Tiers are compared in the table below and can be found in greater definition in UI’s white paper TUI3026E. Please note that TRM is supported only on newer generations of Nexus 9000 switches, such as Cloud Scale ASIC–based switches. These VTEPs are Layer 2 VXLAN gateways for VXLAN-to-VLAN or VLAN-to-VXLAN bridging. Its control-plane protocol, FabricPath IS-IS, is designed to determine FabricPath switch ID reachability information. (Note: The spine switch only needs to run the BGP-EVPN control plane and IP routing.) vPC technology works well in a relatively small data center environment in which most traffic consists of northbound and southbound communication between clients and servers. Figure 17 shows a typical design using a pair of border leaf switches connected to outside routing devices. The FabricPath spine-and-leaf network supports Layer 2 multitenancy with the VXLAN network (VN)-segment feature (Figure 8). 
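The script-driven change workflow mentioned above can be sketched as template-based configuration rendering in Python, the sort of thing an MSDC operator might run before pushing the result with POAP, Puppet, or Chef. Everything here is a hypothetical illustration: the template text and the `render_leaf_config` helper are assumptions, not a Cisco API.

```python
# Hypothetical per-leaf configuration template; field names and the
# config snippet itself are illustrative, not actual NX-OS syntax
# guidance.
LEAF_TEMPLATE = """hostname {name}
interface loopback0
  ip address {loopback}/32
router bgp {asn}
  router-id {loopback}
"""

def render_leaf_config(name: str, loopback: str, asn: int) -> str:
    """Render one leaf switch's configuration from the template."""
    return LEAF_TEMPLATE.format(name=name, loopback=loopback, asn=asn)

# Generating many switch configs from one loop is what makes this
# approach scale: the device count grows, the script does not.
cfg = render_leaf_config("leaf-101", "10.1.1.101", 65101)
```

The design point is modularity: the template changes in one place, and regeneration plus automated push keeps hundreds of devices consistent.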
●      LAN Fabric mode: provides Fabric Builder for automated VXLAN EVPN fabric underlay deployment, overlay deployment, end-to-end flow trace, alarm and troubleshooting, configuration compliance, and device lifecycle management, etc. Cisco spine-and-leaf Layer 2 and Layer 3 fabric comparison: in these fabrics, broadcast and unknown unicast traffic is forwarded by underlay PIM or ingress replication. (Note: Ingress replication is supported only on Cisco Nexus 9000 Series Switches.) Code minimum fire suppression would involve having wet pipe sprinklers in your data center. This scoping allows potential overlap in MAC and IP addresses between tenants. Software management tools such as DCIM (Data Center Infrastructure Management), CMMS (Computerized Maintenance Management System), EPMS (Electrical Power Monitoring System), and DMS (Document Management System) for operations and maintenance can provide a “single pane of glass” to view all required procedures, infrastructure assets, maintenance activities, and operational issues. Explore HED’s integrated architectural and engineering practice. (Note: The spine switch needs to support VXLAN routing VTEP on hardware.) This design complies with IETF VXLAN standards RFC 7348 and draft-ietf-bess-evpn-overlay. To learn end-host reachability information, FabricPath switches rely on initial data-plane traffic flooding. Border leaf switches can inject default routes to attract traffic intended for external destinations. (This mode is not relevant to this white paper.) Data Center Architects are responsible for adequately securing the Data Center and should examine factors such as facility design and architecture. Underlay IP PIM or the ingress replication feature is used to send broadcast and unknown unicast traffic. It is a for-profit entity that will certify a facility to its standard, for which the standard is often criticized. 
Each host is associated with a host subnet and talks with other hosts through Layer 3 routing. The VXLAN MP-BGP EVPN spine-and-leaf architecture uses MP-BGP EVPN for the control plane for VXLAN. The data center architecture specifies where and how the server, storage networking, racks, and other data center resources will be physically placed. ●      Border spine switch for external routing. (Note: The spine switch needs to support VXLAN routing on hardware.) Traditional three-tier data center design: the architecture consists of core routers, aggregation routers (sometimes called distribution routers), and access switches. VXLAN MP-BGP EVPN uses distributed anycast gateways for internal routed traffic. Host mobility and multitenancy are not supported. The most efficient and effective data center designs use relatively new design fundamentals to create the required high-energy-density, high-reliability environment. As the number of hosts in a broadcast domain increases, the negative effects of flooding packets become more pronounced. Data center design, construction, and operational standards should be chosen based on the definition of that mission. Both designs provide centralized routing: that is, the Layer 3 internal and external routing functions are centralized on specific switches. Data centers often have multiple fiber connections to the internet provided by multiple … Will has experience with large US hyperscale clients, serving as project architect for three years on a hyperscale project in Holland, and with some of the largest engineering firms. Common Layer 3 designs use centralized routing: that is, the Layer 3 routing function is centralized on specific switches (spine switches or border leaf switches). It transports Layer 2 frames over a Layer 3 IP underlay network. As the number of hosts in a broadcast domain increases, the negative effects of flooding packets are more pronounced. Connectivity. 
TOP 25 DATA CENTER ARCHITECTURE FIRMS (rank, company, 2016 data center revenue): 1. Jacobs, $58,960,000; 2. Corgan, $38,890,000; 3. Gensler, $23,000,000; 4. HDR, $14,913,721; 5. Page, $14,500,000; 6. Sheehan Partners … (Top 25 data center architecture firms | Building Design + Construction.) About the author: Steven Shapiro has been in the mission critical industry since 1988 and has a diverse background in the study, reporting, design, commissioning, development and management of reliable electrical distribution, emergency power, lighting, and fire protection systems for high tech environments. The choice of standards should be driven by the organization’s business mission. The Certified Data Centre Design Professional (CDCDP®) program is proven to be an essential certification for individuals wishing to demonstrate their technical knowledge of data centre architecture and component operating conditions. Because the fabric network is so large, MSDC customers typically use software-based approaches to introduce more automation and more modularity into the network. Another challenge in a three-tier architecture is that server-to-server latency varies depending on the traffic path used. The original Layer 2 frame is encapsulated in a VXLAN header and then placed in a UDP-IP packet and transported across the IP network. You need to consider MAC address scale to avoid exceeding the scalability limits of your hardware. With IP multicast enabled in the underlay network, each VXLAN segment, or VNID, is mapped to an IP multicast group in the transport IP network. Best practices ensure that you are doing everything possible to keep it that way. The key is to choose a standard and follow it. 
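The VXLAN encapsulation step described above can be made concrete with a short sketch that builds the 8-byte VXLAN header from RFC 7348 and prepends it to an original Layer 2 frame. The header layout and the UDP port 4789 are from the RFC; the function names are my own, and the outer IP/UDP headers a real VTEP would add are deliberately omitted.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header defined in RFC 7348: one flags
    byte (0x08 = the 'I' bit, marking the VNI as valid), three
    reserved bytes, the 24-bit VNI, and one final reserved byte."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the VXLAN header to the original Layer 2 frame. The
    result is what the VTEP carries as the UDP payload of the outer
    UDP-IP packet (outer headers omitted in this sketch)."""
    return vxlan_header(vni) + inner_frame
```

Because only a 24-bit VNI and a fixed flags byte are carried, the overlay adds just 8 bytes on top of the outer UDP/IP headers.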
Benefits of a network virtualization overlay include the following: ●      Optimized device functions: Overlay networks allow the separation (and specialization) of device functions based on where a device is being used in the network. It is part of the underlay Layer 3 IP network and transports the VXLAN encapsulated packets. TIA uses tables within the standard to easily identify the ratings for telecommunications, architectural, electrical, and mechanical systems. The Layer 3 function is laid on top of the Layer 2 network. For a FabricPath network, the FabricPath IS-IS control plane by default creates two multidestination trees that carry broadcast traffic, unknown unicast traffic, and multicast traffic through the FabricPath network. Modern Data Center Design and Architecture. Table 1 summarizes the characteristics of a FabricPath spine-and-leaf network. Between the aggregation routers and access switches, Spanning Tree Protocol is used to build a loop-free topology for the Layer 2 part of the network. The multi-tier data center model is dominated by HTTP-based applications in a multi-tier approach. For feature support and more information about Cisco VXLAN flood-and-learn technology, please refer to the configuration guides, release notes, and reference documents listed at the end of this document. The overlay encapsulation also allows the underlying infrastructure address space to be administered separately from the tenant address space. Note that the maximum number of inter-VXLAN active-active gateways is two with a Hot-Standby Router Protocol (HSRP) and vPC configuration. A good data center design should plan to automate as many of the operational functions that employees perform as possible. Table 2. This architecture has been proven to deliver high-bandwidth, low-latency, nonblocking server-to-server connectivity. The FabricPath network is a Layer 2 network, and Layer 3 SVIs are laid on top of the Layer 2 FabricPath switch. 
With the ingress replication feature, the underlay network is multicast free. The multicast distribution tree for this group is built through the transport network based on the locations of participating VTEPs. But it is still a flood-and-learn-based Layer 2 technology. After MAC-to-VTEP mapping is complete, the VTEPs forward VXLAN traffic in a unicast stream. Should it have the minimum required by code? This approach reduces network flooding for end-host learning and provides better control over end-host reachability information distribution. Example of MSDC Layer 3 spine-and-leaf network with BGP control plane. As the number of hosts in a broadcast domain increases, the negative effects of flooding packets become more pronounced. With this design, the spine switch needs to support VXLAN routing. The spine switch is just part of the underlay Layer 3 IP network to transport the VXLAN encapsulated packets. The placement of a Layer 3 function in a FabricPath network needs to be carefully designed. This traffic needs to be handled efficiently, with low and predictable latency. It also introduces a control-plane protocol called FabricPath Intermediate System to Intermediate System (IS-IS). End-host information in the overlay network is learned through the flood-and-learn mechanism with conversational learning. (Any unicast routing protocol can be used: static, OSPF, IS-IS, eBGP, etc.) Green certifications, such as LEED, Green Globes, and Energy Star, are also considered optional. The architecture consists of core routers, aggregation routers (sometimes called distribution routers), and access switches. Table 1. The maximum number of inter-VXLAN active-active gateways is two with an HSRP and vPC configuration. There are also many operational standards to choose from. 
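The flood-and-learn behavior and the switch to a unicast stream once MAC-to-VTEP mapping completes can be modeled in a few lines. This is a toy illustration, not VTEP software: the class and method names are hypothetical, and real hardware also ages out entries and suppresses flooding per the configured replication mode.

```python
class FloodAndLearnVtep:
    """Toy model of VXLAN flood-and-learn forwarding: frames for
    unknown destination MACs are flooded to every remote VTEP (via the
    multicast group or ingress replication), while mappings learned
    from received traffic let later frames go out as unicast."""

    def __init__(self, remote_vteps):
        self.remote_vteps = set(remote_vteps)  # flood targets
        self.mac_table = {}                    # learned MAC -> VTEP IP

    def learn(self, src_mac, src_vtep):
        """Data-plane learning from decapsulated traffic."""
        self.mac_table[src_mac] = src_vtep

    def forward_targets(self, dst_mac):
        """Return the set of VTEPs that receive a frame for dst_mac."""
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}   # known: unicast stream
        return set(self.remote_vteps)          # unknown: flood (BUM)
```

The same structure explains why flooding hurts at scale: until a mapping is learned, every unknown destination costs one copy per remote VTEP.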
However, the spine switch needs to run the BGP-EVPN control plane and IP routing and the VXLAN VTEP function. Each FabricPath switch is identified by a FabricPath switch ID. The routing protocol can be regular eBGP or any IGP of choice. The FabricPath spine-and-leaf network is proprietary to Cisco but is based on the TRILL standard. The FabricPath spine-and-leaf network also supports Layer 3 multitenancy using Virtual Routing and Forwarding lite (VRF-lite), as shown in Figure 9. Moreover, scalability is another major issue in the three-tier DCN. It also performs internal inter-VXLAN routing and external routing. With overlays used at the fabric edge, the spine and core devices are freed from the need to add end-host information to their forwarding tables. Best practices mean different things to different people and organizations. The result is increased stability and scalability, fast convergence, and the capability to use multiple parallel paths typical in a Layer 3 routed environment. It has modules on all the major sub-systems of a mission critical facility and their interdependencies, including power, cooling, compute and network. Layer 3 multitenancy example with VRF-lite. Cisco FabricPath spine-and-leaf network summary. Mr. Shapiro has extensive experience in the design and management of corporate and mission critical facilities projects with over 4 million square feet of raised floor experience, over 175 MW of UPS experience and over 350 MW of generator experience. Today, most web-based applications are built as multi-tier applications. On each FabricPath leaf switch, the network keeps the 4096 VLAN spaces, but across the whole FabricPath network, it can support up to 16 million VN-segments, at least in theory. 
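The 4096-VLAN and roughly 16-million-segment figures above follow directly from the field widths involved, as this small check shows (the variable names are mine, the arithmetic is standard):

```python
# A traditional VLAN ID is a 12-bit field; a VN-segment / VXLAN VNI is
# a 24-bit field. The document's "4096 VLANs" and "16 million
# VN-segments" numbers are just these powers of two.
VLAN_ID_BITS = 12
VNI_BITS = 24

max_vlans = 2 ** VLAN_ID_BITS       # 4096 VLANs per leaf switch
max_vn_segments = 2 ** VNI_BITS     # 16,777,216 (~16 million) segments
```

So the per-leaf VLAN space stays at 4096 while the fabric-wide segment space grows by a factor of 4096.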
This document reviews several spine-and-leaf architecture designs that Cisco has offered in the recent past as well as current designs and those Cisco expects to offer in the near future to address fabric requirements in the modern virtualized data center: ●      Cisco® FabricPath spine-and-leaf network, ●      Cisco VXLAN flood-and-learn spine-and-leaf network, ●      Cisco VXLAN Multiprotocol Border Gateway Protocol (MP-BGP) Ethernet Virtual Private Network (EVPN) spine-and-leaf network, ●      Cisco Massively Scalable Data Center (MSDC) Layer 3 spine-and-leaf network. Internal and external routing on the spine layer. Telecommunication Infrastructure Standard for Data Centers: This standard is more IT cable and network oriented and has various infrastructure redundancy and reliability concepts based on the Uptime Institute’s Tier Standard. With Layer 2 segments extended across all the pods, the data center administrator can create a central, more flexible resource pool that can be reallocated based on needs. Encapsulation format and standards compliance. The VXLAN flood-and-learn network is a Layer 2 overlay network, and Layer 3 SVIs are laid on top of the Layer 2 overlay network. Distributed anycast gateway for internal routing. For Layer 2 multicast traffic, traffic entering the FabricPath switch is hashed to a multidestination tree to be forwarded. Fidelity is opening a new data center in Nebraska this fall. 
For more information, see Cisco’s Massively Scalable Data Center Network Fabric White Paper and the following resources: https://www.cisco.com/c/en/us/products/cloud-systems-management/prime-data-center-network-manager/index.html, https://www.cisco.com/c/en/us/support/data-center-analytics/network-insights-data-center/products-installation-and-configuration-guides-list.html, https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-730116.html, https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/guide-c07-734107.html, https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-743245.html, https://blogs.cisco.com/datacenter/vxlan-innovations-on-the-nexus-os-part-1-of-2, and Cisco MDS 9000 10-Gbps 8-Port FCoE Module Extends Fibre Channel over Ethernet to the Data Center Core. It provides control-plane and data-plane separation and a unified control plane for both Layer 2 and Layer 3 forwarding in a VXLAN overlay network. The origins of the Uptime Institute as a data center users group established it as the first group to measure and compare a data center’s reliability. Critical facilities are becoming more diverse as technology advances create market shifts. The multi-tier approach includes web, application, and database tiers of servers. Internal and external routing at the border leaf. The standard breaks down as follows. Government regulations for data centers will depend on the nature of the business and can include HIPAA (Health Insurance Portability and Accountability Act), SOX (Sarbanes-Oxley) 2002, SAS 70 Type I or II, and GLBA (Gramm-Leach-Bliley Act), as well as new regulations that may be implemented depending on the nature of your business and the present security situation. When traffic needs to be routed between VXLAN segments or from a VXLAN segment to a VLAN segment and vice versa, the Layer 3 VXLAN gateway function needs to be enabled on some VTEPs. 
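The bridging-versus-routing distinction in the last sentence above can be summarized as a decision function. This is a hypothetical helper for illustration only; real VTEPs make this decision per packet in hardware based on the destination MAC and VRF, not via a function like this.

```python
def gateway_function(src_vni: int, dst_vni: int) -> str:
    """Illustrate which gateway role handles a flow: traffic staying
    inside one VXLAN segment only needs Layer 2 VXLAN gateway
    bridging, while traffic between segments (or between a VXLAN
    segment and a VLAN segment) requires the Layer 3 VXLAN gateway
    function enabled on some VTEPs."""
    if src_vni == dst_vni:
        return "L2 gateway (bridge)"
    return "L3 gateway (route)"
```

For example, two hosts in VNI 5000 are bridged, while a host in VNI 5000 talking to one in VNI 5001 crosses a Layer 3 VXLAN gateway.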
IP multicast traffic is by default constrained to only those FabricPath edge ports that have either an interested multicast receiver or a multicast router attached and use Internet Group Management Protocol (IGMP) snooping. For Layer 3 IP multicast traffic, traffic needs to be forwarded by Layer 3 multicast using Protocol-Independent Multicast (PIM). Enterprise and High Performance Computing users recognize the value of critical facilities; connecting to a brand is as important as connecting to the campus. For those with international facilities or a mix of both, an international standard may be more appropriate. The FabricPath network supports up to four anycast gateways for internal VLAN routing. A data accessor, or a collection of independent components that operate on the central data store, perform computations, and might put back the results. Layer 2 multitenancy example using the VNI. The multi-tier model uses software that runs as separate processes on the same machine using interprocess communication (IPC), or on different machines with communication… Data center design is the process of modeling and designing (Jochim 2017) a data center’s IT resources, architectural layout, and entire infrastructure. This document presented several spine-and-leaf architecture designs from Cisco, including the most important technology components and design considerations for each architecture at the time of the writing of this document. A distributed anycast gateway also offers the benefit of transparent host mobility in the VXLAN overlay network. Here’s a sample from the 2005 standard: TIA has a certification system in place with dedicated vendors that can be retained to provide facility certification. A legacy mindset in data center architecture revolves around the notion of “design now, deploy later.” The approach to creating a versatile, digital-ready data center must involve the deployment of infrastructure during the design session. 
●      It uses the decade-old MP-BGP VPN technology to support scalable multitenant VXLAN overlay networks. But routed traffic needs to traverse two hops: leaf to spine and then to the default gateway on the border leaf to be routed. It provides workflow automation, flow policy management, and third-party studio equipment integration, etc. The common designs used are internal and external routing on the spine layer, and internal and external routing on the leaf layer. The data center is at the foundation of modern software technology, serving a critical role in expanding capabilities for enterprises. A data center is probably going to be the most expensive facility your company ever builds or operates. The routing protocol can be regular eBGP or any Interior Gateway Protocol (IGP) of choice. ●      It provides optimal forwarding for east-west and north-south traffic and supports workload mobility with the distributed anycast function on each ToR switch. It is simple, flexible, and stable; it has good scalability and fast convergence characteristics; and it supports multiple parallel paths at Layer 2. It provides control-plane and data-plane separation and a unified control plane for both Layer 2 and Layer 3 forwarding in a VXLAN overlay network. This technology provides control-plane and data-plane separation and a unified control plane for both Layer 2 and Layer 3 forwarding in a VXLAN overlay network. Common Layer 3 designs provide centralized routing: that is, the Layer 3 routing function is centralized on specific switches (spine switches or border leaf switches). Application and Virtualization Infrastructure Are Directly Linked to Data Center Design. The impact of broadcast and unknown unicast traffic flooding needs to be carefully considered in the FabricPath network design. Facility operations, maintenance, and procedures will be the final topics for the series. This feature uses an increased, 24-bit name space. 
Codes must be followed when designing, building, and operating your data center, but “code” is the minimum performance requirement to ensure life safety and energy efficiency in most cases. The VXLAN flood-and-learn spine-and-leaf network supports up to two active-active gateways with vPC for internal VXLAN routing. There are two types of components − 1. It enables you to provision, monitor, and troubleshoot the data center network infrastructure. This architecture is the physical and logical layout of the resources and equipment within a data center facility. The switch virtual interfaces (SVIs) on the spine switch are performing inter-VLAN routing for east-west internal traffic and exchange routing adjacency information with Layer 3 routed uplinks to route north-south external traffic. The VXLAN MP-BGP EVPN spine-and-leaf architecture uses MP-BGP EVPN for the control plane. Data center architecture and engineering firm Integrated Design Group is merging with national firm HED in a deal that illustrates the rising profile for the data center industry. Examples of MSDCs are large cloud service providers that host thousands of tenants, and web portal and e-commerce providers that host large distributed applications. However, the spine switch only needs to run the BGP-EVPN control plane and IP routing; it doesn’t need to support the VXLAN VTEP function. Data Centered Architecture serves as a blueprint for designing and deploying a data center facility. The modern data center is an exciting place, and it looks nothing like the data center of only 10 years past. The path is randomly chosen so that the traffic load is evenly distributed among the top-tier switches. You can also have multiple VXLAN segments share a single IP multicast group in the core network; however, the overloading of multicast groups leads to suboptimal multicast forwarding. The FabricPath spine-and-leaf network is proprietary to Cisco, but it is mature technology and has been widely deployed. 
Note that ingress replication is supported only on Cisco Nexus 9000 Series Switches. If no oversubscription occurs between the lower-tier switches and their uplinks, then a nonblocking architecture can be achieved. The VXLAN VTEP uses a list of IP addresses of other VTEPs in the network to send broadcast and unknown unicast traffic. Cisco VXLAN flood-and-learn technology complies with the IETF VXLAN standards (RFC 7348), which defined a multicast-based flood-and-learn VXLAN without a control plane. In the VXLAN MP-BGP EVPN spine-and-leaf network, VNIs define the Layer 2 domains and enforce Layer 2 segmentation by not allowing Layer 2 traffic to traverse VNI boundaries. (It supports both Layer 2 multitenancy and Layer 3 multitenancy and complies with RFC 7348 and RFC 8365, previously draft-ietf-bess-evpn-overlay.) The Layer 3 spine-and-leaf design intentionally does not support Layer 2 VLANs across ToR switches because it is a Layer 3 fabric. Broadcast and unknown unicast traffic in FabricPath is flooded to all FabricPath edge ports in the VLAN or broadcast domain. Cisco’s MSDC topology design uses a Layer 3 spine-and-leaf architecture. Data Center Design, Inc. provides customers with projects ranging from new Data Center design and construction to Data Center renovation and expansion with follow-up service. The Cisco Nexus 9000 Series introduced an ingress replication feature, so the underlay network is multicast free. Cisco MSDC Layer 3 spine-and-leaf network. You need to consider MAC address scale to avoid exceeding the scalability limit on the border leaf switch. The border leaf switch learns external routes and advertises them to the EVPN domain as EVPN routes so that other VTEP leaf nodes can also learn about the external routes for sending outbound traffic. 
The three major data center design and infrastructure standards developed for the industry include the Uptime Institute’s Tier Standard. This standard develops a performance-based methodology for the data center during the design, construction, and commissioning phases to determine the resiliency of the facility with respect to four Tiers, or levels of redundancy/reliability. We will discuss best practices with respect to facility conceptual design, space planning, building construction, and physical security, as well as mechanical, electrical, plumbing, and fire protection. Data Center Design and Implementation Best Practices: This standard covers the major aspects of planning, design, construction, and commissioning of the MEP building trades, as well as fire protection, IT, and maintenance. The Cisco FabricPath spine-and-leaf network is proprietary to Cisco. The VXLAN flood-and-learn spine-and-leaf network doesn’t have a control plane for the overlay network. Also, the spine Layer 3 VXLAN gateway learns the host MAC address, so you need to consider the MAC address scale to avoid exceeding the scalability limits of your hardware. In MP-BGP EVPN, multiple tenants can co-exist and share a common IP transport network while having their own separate VPNs in the VXLAN overlay network (Figure 19). However, vPC can provide only two active parallel uplinks, and so bandwidth becomes a bottleneck in a three-tier data center architecture. ●      Media controller mode: manages the Cisco IP Fabric for Media solution and helps transition from an SDI router to an IP-based infrastructure. Also, with SVIs enabled on the spine switch, the spine switch disables conversational learning and learns the MAC address in the corresponding subnet. This section describes Cisco VXLAN flood-and-learn characteristics on these Cisco hardware switches. It retains the easy-configuration, plug-and-play deployment model of a Layer 2 environment. Every leaf switch connects to every spine switch in the fabric. 
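The full-mesh leaf-to-spine wiring has two simple consequences that can be sketched numerically (a back-of-the-envelope illustration, with made-up switch counts): the fabric needs one link per leaf-spine pair, and every leaf-to-leaf flow has exactly one equal-cost two-hop path through each spine.

```python
# Sketch: link and path counts in a two-tier Clos (spine-and-leaf) fabric
# where every leaf connects to every spine. Switch counts are hypothetical.
def fabric_links(leaves: int, spines: int) -> int:
    """Total inter-tier links in a full mesh between the two tiers."""
    return leaves * spines

def ecmp_paths(spines: int) -> int:
    """Equal-cost leaf-to-leaf paths: one two-hop path via each spine."""
    return spines

assert fabric_links(leaves=8, spines=4) == 32
assert ecmp_paths(spines=4) == 4   # adding a spine adds one ECMP path per flow
```

This uniform path length (leaf to spine to leaf) is what gives the design its predictable latency, and the ECMP count explains why capacity grows simply by adding spine switches.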
The original Layer 2 frame is encapsulated with a VXLAN header and then placed in a UDP-IP packet and transported across an IP network. It provides rich telemetry and other advanced analytics information. Data center design with extended Layer 3 domain. VXLAN, one of many available network virtualization overlay technologies, offers several advantages. ●      Its underlay and overlay management tools provide many network management capabilities, simplifying workload visibility, optimizing troubleshooting, automating fabric component provisioning, automating overlay tenant network provisioning, etc. The FabricPath spine-and-leaf network uses Layer 2 FabricPath MAC-in-MAC frame encapsulation, and it uses FabricPath IS-IS for the control plane in the underlay network. Table 5. There is no single way to build a data center. AWS pioneered cloud computing in 2006, creating cloud infrastructure that allows you to securely build and innovate faster. The border leaf router is enabled with the Layer 3 VXLAN gateway and performs internal inter-VXLAN routing and external routing. These are the VN-segment core ports. The VLAN has local significance on the FabricPath leaf switch, and VN-segments have global significance across the FabricPath network. For more information about Cisco DCNM, see https://www.cisco.com/c/en/us/products/cloud-systems-management/prime-data-center-network-manager/index.html. The VXLAN flood-and-learn spine-and-leaf network supports Layer 2 multitenancy (Figure 14). Spine devices are responsible for learning infrastructure routes and end-host subnet routes. Gensler, Corgan, and HDR top Building Design+Construction’s annual ranking of the nation’s largest data center sector architecture and A/E firms, as reported in the 2016 Giants 300 Report. In this two-tier Clos architecture, every lower-tier switch (leaf layer) is connected to each of the top-tier switches (spine layer) in a full-mesh topology. 
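The VXLAN encapsulation described above follows RFC 7348: an 8-byte VXLAN header (a flags byte with the VNI-valid bit set, plus a 24-bit VNI) is prepended to the original Ethernet frame, and the result becomes the payload of a UDP datagram to destination port 4789. A minimal sketch of just the header packing, not a full datapath:

```python
# Sketch of RFC 7348 VXLAN header handling. The 8-byte header layout is:
# 1 byte flags, 3 reserved bytes, 3-byte VNI, 1 reserved byte.
import struct

VXLAN_FLAGS = 0x08       # "I" bit set: the VNI field is valid
VXLAN_UDP_PORT = 4789    # IANA-assigned VXLAN UDP destination port

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend a VXLAN header; caller would place the result in UDP/IP."""
    assert 0 <= vni < 2**24, "VNI is a 24-bit value"
    header = struct.pack("!B3xI", VXLAN_FLAGS, vni << 8)  # VNI in top 3 bytes
    return header + inner_frame

def vxlan_decap(packet: bytes) -> tuple:
    """Return (vni, inner_frame) from a VXLAN-encapsulated payload."""
    flags, word = struct.unpack("!B3xI", packet[:8])
    assert flags & 0x08, "VNI-valid flag must be set"
    return word >> 8, packet[8:]
```

The 24-bit VNI is what gives VXLAN its roughly 16 million segment identifiers, compared with 4096 traditional VLAN IDs.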
Cisco VXLAN flood-and-learn spine-and-leaf network. The design encourages the overlap of these functions and creates a public route through the building. Most customers use eBGP because of its scalability and stability. Design for external routing at the border leaf. Similarly, Layer 3 segmentation among VXLAN tenants is achieved by applying Layer 3 VRF technology and enforcing routing isolation among tenants by using a separate Layer 3 VNI mapped to each VRF instance. FabricPath technology uses many of the best characteristics of traditional Layer 2 and Layer 3 technologies. The architect must demonstrate the capacity to develop a robust server and storage architecture. Most users do not understand how critical the floor layout is to the performance of a data center, or they only understand its importance after a problem occurs. The VXLAN flood-and-learn spine-and-leaf network also supports Layer 3 multitenancy using VRF-lite (Figure 15). The Layer 3 internal routed traffic is routed directly by the distributed anycast gateway on each ToR switch in a scale-out fashion. NIA constantly scans the customer’s network and provides proactive advice with a focus on maintaining availability and alerting customers about potential issues that can impact uptime. IP subnets of the VNIs for a given tenant are in the same Layer 3 VRF instance that separates the Layer 3 routing domain from the other tenants. Cisco Layer 3 MSDC network characteristics. Data Center fabric management and automation. Hyperscale users and increased demand have turned data into the new utility, making quicker, leaner facilities a must. Many different tools are available from Cisco, third parties, and the open-source community that can be used to monitor, manage, automate, and troubleshoot the data center fabric. The VTEP then distributes this information through the MP-BGP EVPN control plane. 
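The per-tenant routing isolation that VRFs provide can be illustrated with separate route tables, one per tenant. This is a conceptual sketch (tenant names and VNI labels are made up, and a real device does far more): the key property is that a lookup is confined to one tenant's table, so overlapping IP prefixes in different tenants never conflict.

```python
# Sketch: VRF-style per-tenant route isolation. Each tenant owns its own
# routing table, so the same prefix can appear in two tenants without clash.
import ipaddress

vrfs = {  # hypothetical tenants, prefixes, and next-hop labels
    "tenant-a": {ipaddress.ip_network("10.1.0.0/16"): "VNI 50001"},
    "tenant-b": {ipaddress.ip_network("10.1.0.0/16"): "VNI 50002"},  # same prefix, no conflict
}

def lookup(vrf: str, dst: str):
    """Longest-prefix match performed only within the tenant's own table."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in vrfs[vrf] if addr in net]
    if not matches:
        return None
    return vrfs[vrf][max(matches, key=lambda net: net.prefixlen)]

print(lookup("tenant-a", "10.1.2.3"))   # resolved in tenant-a's table only
print(lookup("tenant-b", "10.1.2.3"))   # same address, different tenant, different result
```

Mapping a separate Layer 3 VNI to each VRF instance, as the text describes, is what carries this table separation across the fabric between VTEPs.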
Cisco DCNM can be installed in four modes: ●      Classic LAN mode: manages Cisco Nexus Data Center infrastructure deployed in legacy designs, such as vPC design, FabricPath design, etc. This course encompasses the basic principles of data center design, tracking its history from the early days of the mainframe to the modern enterprise data center in its many forms and the future. It transports Layer 2 frames over the Layer 3 IP underlay network. VNIs are used to provide isolation at Layer 2 for each tenant. For more details regarding MSDC designs with Cisco Nexus 9000 and 3000 switches, please refer to the “Cisco Massively Scalable Data Center Network Fabric White Paper.” The VXLAN MP-BGP EVPN spine-and-leaf architecture uses MP-BGP EVPN as the control plane for the VXLAN overlay network. Network overlays are virtual networks of interconnected nodes that share an underlying physical network, allowing deployment of applications that require specific network topologies without the need to modify the underlying network (Figure 5). If oversubscription of a link occurs (that is, if more traffic is generated than can be aggregated on the active link at one time), the process for expanding capacity is straightforward. In 2010, Cisco introduced virtual port channel (vPC) technology to overcome the limitations of Spanning Tree Protocol. As shown in the design for internal and external routing at the border leaf in Figure 7, the spine switch functions as the Layer 2 FabricPath switch and performs intra-VLAN FabricPath frame switching only. It provides real-time health summaries, alarms, visibility information, etc. (This mode is not relevant to this white paper.) VXLAN MP-BGP EVPN supports overlay tenant Layer 2 multicast traffic using underlay IP multicast or the ingress replication feature. External routing with border spine design. The multicast distribution tree for this group is built through the transport network based on the locations of participating VTEPs. 
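The oversubscription point above is easy to quantify: a leaf's oversubscription ratio is its total server-facing bandwidth divided by its total uplink bandwidth, and adding a spine switch adds one uplink per leaf, lowering the ratio. A quick sketch with hypothetical port counts and speeds:

```python
# Sketch: leaf oversubscription ratio in a spine-and-leaf fabric, i.e.
# total server-facing bandwidth divided by total uplink bandwidth.
# All port counts and speeds below are hypothetical examples.
def oversubscription(server_ports: int, server_gbps: float,
                     spines: int, uplink_gbps: float) -> float:
    """Ratio of downlink capacity to uplink capacity (1.0 = nonblocking)."""
    return (server_ports * server_gbps) / (spines * uplink_gbps)

# Example leaf: 48 x 10G server ports, one 40G uplink per spine.
print(oversubscription(48, 10, spines=4, uplink_gbps=40))   # 3.0, i.e. 3:1
print(oversubscription(48, 10, spines=6, uplink_gbps=40))   # 2.0 after adding two spines
```

A ratio of 1.0 corresponds to the nonblocking case mentioned earlier, where uplink capacity matches the aggregate server-facing capacity.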
Internal and external routing at the border spine. Your facility must meet the business mission. For additional information, see the following references: ●      Data center overlay technologies: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-730116.html, ●      VXLAN network with MP-BGP EVPN control plane: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/guide-c07-734107.html, ●      Cisco Massively Scalable Data Center white paper: https://www.cisco.com/c/en/us/products/collateral/switches/nexus-9000-series-switches/white-paper-c11-743245.html, ●      VXLAN EVPN TRM blog: https://blogs.cisco.com/datacenter/vxlan-innovations-on-the-nexus-os-part-1-of-2. Ingress replication is supported only on Cisco Nexus 9000 Series Switches. The Cisco VXLAN flood-and-learn spine-and-leaf network complies with the IETF VXLAN standard (RFC 7348). VN-segments are used to provide isolation at Layer 2 for each tenant. Cisco FabricPath network characteristics: FabricPath (MAC-in-MAC frame encapsulation), flood and learn plus conversational learning, flood by FabricPath IS-IS multidestination tree. That’s the goal of Intel Rack Scale Design (Intel RSD), a blueprint for unleashing industry innovation around a common CDI-based data center architecture. The leaf layer is responsible for advertising server subnets in the network fabric. Internal and external routed traffic needs to travel one underlay hop from the leaf VTEP to the spine switch to be routed. Cisco VXLAN MP-BGP EVPN spine-and-leaf network. Depending on the number of servers that need to be supported, there are different flavors of MSDC designs: two-tiered spine-leaf topology, three-tiered spine-leaf topology, and hyperscale fabric plane Clos design. 
A data center floor plan includes the layout of the boundaries of the room (or rooms) and the layout of IT equipment within the room. It extends Layer 2 segments over a Layer 3 infrastructure to build Layer 2 overlay logical networks. The higher layers of the three-tier DCN are highly oversubscribed. For a FabricPath network, the FabricPath IS-IS control plane by default creates two multidestination trees that carry broadcast traffic, unknown unicast traffic, and multicast traffic through the FabricPath network. The Layer 3 routing function is laid on top of the Layer 2 network. It also addresses how these resources/devices will be interconnected and how physical and logical security workflows are arranged. With this design, tenant traffic needs to take two underlay hops (VTEP to spine to border leaf) to reach the external network. Mecanoo has unveiled their design for the Qianhai Data Center in Shenzhen, China, from which they received second prize in an international design … (2) Tenant Routed Multicast (TRM) for Cisco Nexus 9000 Cloud Scale Series Switches. The VXLAN MP-BGP EVPN spine-and-leaf network needs to provide Layer 3 internal VXLAN routing as well as maintain connectivity with the networks that are external to the VXLAN fabric, including the campus network, WAN, and Internet. The data center design is built on a proven layered approach, which has been verified and improved over the past several years in some of the major data center deployments in the world. 
Typically, data center architecture … Servers are virtualized into sets of virtual machines that can move freely from server to server without the need to change their operating parameters. FabricPath technology currently supports up to four FabricPath anycast gateways. Interest in overlay networks has also increased with the introduction of new encapsulation frame formats specifically built for the data center. As shown in the design for internal and external routing on the border leaf in Figure 13, the leaf ToR VTEP switch is a Layer 2 VXLAN gateway to transport the Layer 2 segment over the underlay Layer 3 IP network. Common Layer 3 designs use centralized routing: that is, the Layer 3 routing function is centralized on specific switches (spine switches or border leaf switches). As shown in the design for internal and external routing at the border spine in Figure 6, the spine switch functions as the Layer 2 and Layer 3 boundary and server subnet gateway. Also, the border leaf Layer 3 VXLAN gateway learns the host MAC address, so you need to consider the MAC address scale to avoid exceeding the scalability limits of your hardware. ●      It reduces network flooding through protocol-based host MAC and IP address route distribution and ARP suppression on the local VTEPs. The Layer 3 internal routed traffic is routed directly by a distributed anycast gateway on each ToR switch in a scale-out fashion. An international series of data center standards in continuous development is the EN 50600 series. Intel RSD is an implementation specification enabling interoperability across hardware and software vendors. Underlay IP multicast is used to reduce the flooding scope of the set of hosts that are participating in the VXLAN segment. However, it is still a flood-and-learn-based Layer 2 technology. The three-tier is the common network architecture used in data centers. 
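The flood-and-learn behavior referred to throughout this document can be sketched in a few lines. This is a toy bridge, not switch firmware: it learns the source MAC of each frame against its ingress port, floods frames whose destination is still unknown, and forwards directly once the destination has been learned.

```python
# Sketch of flood-and-learn forwarding. Port numbers and MAC strings are
# illustrative; a real switch also ages out entries and handles VLANs.
class FloodAndLearnSwitch:
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                          # mac -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the set of egress ports for this frame."""
        self.mac_table[src_mac] = in_port            # learn the source MAC
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}         # known destination: unicast
        return self.ports - {in_port}                # unknown destination: flood

sw = FloodAndLearnSwitch(ports=[1, 2, 3])
assert sw.receive(1, "aa:aa", "bb:bb") == {2, 3}     # unknown dst, flood
assert sw.receive(2, "bb:bb", "aa:aa") == {1}        # source learned, unicast
```

The scaling problem the text mentions follows directly from this model: every unknown destination costs a flood across the broadcast domain, so the pain grows with the number of hosts, which is exactly what the MP-BGP EVPN control plane removes by distributing reachability instead of flooding for it.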
But a FabricPath network is a flood-and-learn-based Layer 2 technology. Related standards and compliance programs include: EN 50600-2-4 Telecommunications cabling infrastructure; EN 50600-2-6 Management and operational information systems; Uptime Institute: Operational Sustainability (with and without Tier certification); ISO 14000 - Environmental Management System; PCI – Payment Card Industry Security Standard; SOC, SAS70 & ISAE 3402 or SSAE16; FFIEC (USA) - Assurance Controls; and AMS-IX – Amsterdam Internet Exchange - Data Centre Business Continuity Standard. 

Settling within the mountainous site of Sejong City, BEHIVE presents the ‘cloud ring’ data center for Naver, the largest internet enterprise in Korea. The traditional data center uses a three-tier architecture, with servers segmented into pods based on location, as shown in Figure 1. It uses FabricPath MAC-in-MAC frame encapsulation. It represents the current state. This design complies with the IETF RFC 7348 and draft-ietf-bess-evpn-overlay standards. ●      Fabric scalability and flexibility: Overlay technologies allow the network to scale by focusing scaling on the network overlay edge devices. In a VXLAN flood-and-learn spine-and-leaf network, overlay tenant Layer 2 multicast traffic is supported using underlay IP PIM or the ingress replication feature. At the same time, it runs the normal IPv4 or IPv6 unicast routing in the tenant VRF instances with the external routing device on the outside. It complies with IETF VXLAN standards RFC 7348 and RFC 8365 (previously draft-ietf-bess-evpn-overlay). As shown in the design for internal and external routing on the spine layer in Figure 12, the leaf ToR VTEP switch is a Layer 2 VXLAN gateway to transport the Layer 2 segment over the underlay Layer 3 IP network. This Shortest-Path First (SPF) routing protocol is used to determine reachability and select the best path or paths to any given destination FabricPath switch in the FabricPath network. Traditional three-tier data center design. The VXLAN VTEP uses a list of IP addresses of other VTEPs in the network to send broadcast and unknown unicast traffic. 
For feature support and more information about VXLAN MP-BGP EVPN, please refer to the configuration guides, release notes, and reference documents listed at the end of this document. Cisco spine-and-leaf layer 2 and layer 3 fabric comparison. As the number of hosts in a broadcast domain increases, it suffers the same flooding challenges as the FabricPath spine-and-leaf network. An additional spine switch can be added, and uplinks can be extended to every leaf switch, resulting in the addition of interlayer bandwidth and reduction of the oversubscription. 