Windows Server 2012 – networking and security Tomáš Kantůrek
NIC Teaming with Load Balancing Technical Overview
NIC Teaming
Built into Windows Server 2012, also known as Load Balancing/Failover (LBFO)
• Allows multiple network interfaces to be placed into a team for the purposes of:
  • Bandwidth aggregation, and/or
  • Traffic failover to prevent connectivity loss in the event of a network component failure
• Available in all Windows Server 2012 SKUs (Server Core and Full Server installations)
• Not available in Windows 8 client SKUs; however, Remote Server Administration Tools can be installed on Windows 8 to manage NIC Teaming on the servers
Architectural Components
Two basic sets of algorithms for NIC teaming:
• Switch-dependent modes – require the switch to participate in the teaming
  • Generic or static teaming
  • Dynamic teaming (LACP)
• Switch-independent modes – do not require the switch to participate in the teaming
Traffic distribution methods:
• Hyper-V switch port
• Address hashing (TransportPorts)
Requirements • 1 NIC to be used for VLAN traffic • At least 2 NICs for all modes that provide fault protection through failover • Up to 32 NICs per team
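As a sketch, a team combining the modes above can be created with the built-in NetLbfo cmdlets; the team and NIC names here are illustrative placeholders:

```powershell
# Create a switch-independent team from two physical NICs,
# distributing outbound traffic by transport-port address hashing.
# "Team1", "NIC1", "NIC2" are example names - substitute your own.
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent `
    -LoadBalancingAlgorithm TransportPorts

# Inspect the resulting team and its members.
Get-NetLbfoTeam -Name "Team1"
Get-NetLbfoTeamMember -Team "Team1"
```

For a switch-dependent team, `-TeamingMode Static` or `-TeamingMode LACP` would be used instead, with matching configuration on the physical switch.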
NIC Teaming in VMs
NIC Teaming in Windows Server 2012 is supported in a VM
• Virtual network adapters connected to more than one Hyper-V switch can keep connectivity even if the network adapter under one of those switches gets disconnected
• Useful when working with SR-IOV
• Each Hyper-V switch port associated with a VM that is using NIC Teaming must be set to allow teaming in the host (parent partition), using PowerShell with administrative permissions: Set-VMNetworkAdapter -VMName <name> -AllowTeaming On
• Teams created in a VM can only run in Switch Independent configuration, Address Hash distribution mode • Only teams where each of the team members is connected to a different Hyper-V switch are supported • Each Hyper-V switch port that is associated with a virtual machine that is using Teaming must be set to allow MAC spoofing • Hyper-V NICs exposed in the parent partition (vNICs) must not be placed in a Team
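The host-side settings described above can be sketched as follows; the VM name is a placeholder:

```powershell
# Run in the host (parent partition) with administrative permissions.
# Allow the VM's network adapters to participate in a guest team and
# enable the MAC spoofing required for teaming in a VM.
# "VM1" is an example name - substitute your own.
Set-VMNetworkAdapter -VMName "VM1" -AllowTeaming On
Set-VMNetworkAdapter -VMName "VM1" -MacAddressSpoofing On
```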
Interactions with Distribution Modes
Switch independent + address hash modes: outbound traffic is spread across all active members. Inbound traffic (from beyond the subnet) arrives on only one interface (the primary member). If the primary member fails, another team member is selected as primary and all inbound traffic moves to that team member.
Switch independent + Hyper-V switch port mode: outbound traffic is tagged with the port on the Hyper-V switch where it originated; all traffic with that port tag is sent on the same team member. Inbound traffic destined for a specific Hyper-V port arrives on the same team member that the traffic from that port is sent out on.
Switch dependent (static and LACP) + address hash modes: outbound traffic is spread across all active members. Inbound traffic is distributed by the switch's load-distribution algorithm.
Switch dependent (static and LACP) + Hyper-V switch port mode: outbound traffic is tagged with the port on the Hyper-V switch where it originated; all traffic with that port tag is sent on the same team member. Inbound traffic is distributed by the switch's load-distribution algorithm.
Note: if a team is put in Hyper-V switch port distribution mode but is not connected to a Hyper-V switch, all outbound traffic is sent on a single team member, and inbound traffic is distributed by the switch's load-distribution algorithm.
Interaction with 3rd-Party Teaming Solutions
It is STRONGLY RECOMMENDED that no system administrator ever run two teaming solutions at the same time on the same server. The teaming solutions are unaware of each other's existence, which can result in serious problems:
• If the system administrator attempts to put a NIC into a 3rd-party team while it is part of a Microsoft NIC Teaming team, the system will become unstable and communications may be lost completely
• If the system administrator attempts to put a NIC into a Microsoft NIC Teaming team while it is part of a 3rd-party teaming solution team, the system will become unstable and communications may be lost completely
Network Virtualization
[Diagram: a Blue VM and a Red VM running on shared infrastructure, each attached to its own isolated virtual network (Blue network, Red network)]
Virtualization
Server virtualization: running many virtual servers on one physical machine. Each virtual machine behaves as if it were a separate physical server.
Network virtualization: running many virtual networks on one physical infrastructure. Each virtual network behaves as if it were a separate physical network.
Address Rewrite
The TCP/IP packet format is unchanged, so existing NIC performance offloads keep working.
[Diagram: customer addresses (10.1.1.11, 10.1.1.12) rewritten to provider addresses (192.168.2.22, 192.168.2.23, 192.168.5.55, 192.168.5.56) as packets cross the physical network]
Encapsulation – NVGRE
Better network utilization thanks to sharing a provider address (PA) among VMs. An explicit customer ID allows customer networks to be distinguished cleanly.
[Diagram: two NVGRE packets with the same inner customer addresses (10.1.1.11 → 10.1.1.12) but different GRE keys (Key=5001, Key=6001), encapsulated in outer provider headers (192.168.2.22 → 192.168.5.55)]
DHCP Failover
High availability for DHCP without clustering; supports multiple subnets.
[Diagram: Site 1 (10.0.0.0/16), Site 2 (20.0.0.0/16), and Site 3 (30.0.0.0/16), each backed by a secondary DHCP server in a central site covering 10.0.0.0/16, 20.0.0.0/16, and 30.0.0.0/16]
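A failover relationship like the one pictured can be set up with the DhcpServer cmdlets; the server names, scope, and shared secret below are illustrative placeholders:

```powershell
# Create a load-balanced failover relationship between a branch DHCP
# server and a central partner for one scope.
# "dhcp-site1", "dhcp-central", and the secret are example values.
Add-DhcpServerv4Failover -ComputerName "dhcp-site1" `
    -PartnerServer "dhcp-central" `
    -Name "Site1-Central-Failover" `
    -ScopeId 10.0.0.0 `
    -LoadBalancePercent 50 `
    -SharedSecret "ExampleSecret"
```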
Hyper-V Extensible Switch
[Diagram: the extensible switch in the root partition connects VM NICs (VM1, VM2) and the host NIC to the physical NIC; the extension stack between the extension protocol and the extension miniport holds capture extensions, WFP extensions, filtering extensions, and forwarding extensions, with the firewall/BFE service attached through a WFP callout to the filtering engine]
• Capture extensions can inspect traffic and generate reporting, but do not modify traffic passing through the extensible switch (for example: sFlow by inMon).
• Windows Filtering Platform (WFP) extensions can inspect, drop, modify, and inject packets using WFP APIs (for example: Windows Firewall, antivirus, DoS prevention, Virtual Firewall by 5NINE Software).
• Filtering extensions can also be implemented using NDIS filtering APIs.
• Forwarding extensions direct traffic and define the destination(s) of individual packets; they can also capture and filter traffic (for example: Cisco Nexus 1000V and UCS software, NEC ProgrammableFlow's OpenFlow, Broadcom).
Minimum Bandwidth
[Diagram: three VMs share a 10 Gbps link with minimum-bandwidth guarantees of 3 Gbps (VM 1), 4 Gbps (VM 2), and 3 Gbps (VM 3); the actual allocations shift over the day (8:00, 14:00, 22:00) as demand changes, but each VM always receives at least its guaranteed minimum]
Benefit: predictable performance even under heavy load on the link.
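Minimum-bandwidth guarantees of this kind can be sketched with the Hyper-V QoS cmdlets; the switch, NIC, and VM names are placeholders, and weight mode is used here rather than absolute bit rates:

```powershell
# Create a virtual switch whose minimum bandwidth is allocated
# by relative weight; "QoSSwitch" and "NIC1" are example names.
New-VMSwitch -Name "QoSSwitch" -NetAdapterName "NIC1" `
    -MinimumBandwidthMode Weight

# Give VM 2 a larger guaranteed share than VM 1 and VM 3
# (weights roughly mirroring the 3/4/3 Gbps split on the slide).
Set-VMNetworkAdapter -VMName "VM1" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -VMName "VM2" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -VMName "VM3" -MinimumBandwidthWeight 30
```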
Network Security
• DHCP guard
• Router guard
• Monitor mode
• Extensible switch
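The per-adapter protections listed above can be enabled from the host; the VM names are placeholders:

```powershell
# Drop DHCP-server replies and router advertisements originating
# from an untrusted VM, and mirror its traffic for monitoring.
# "VM1" and "MonitorVM" are example names.
Set-VMNetworkAdapter -VMName "VM1" -DhcpGuard On -RouterGuard On
Set-VMNetworkAdapter -VMName "VM1" -PortMirroring Source
Set-VMNetworkAdapter -VMName "MonitorVM" -PortMirroring Destination
```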
Network Adapter Hardware Acceleration
Virtual Machine Queue (VMQ)
• Employs hardware packet filtering to deliver packets from an external VM network directly to VMs using DMA transfers
IPsec task offload
• Reduces the load on the system's processors by performing IPsec encryption/decryption on a dedicated processor on the network adapter
Single-Root I/O Virtualization (SR-IOV)
• Enables a device to divide access to its resources among various PCIe hardware functions
Resource Metering
• Uses resource pools
• Fully compatible with all Hyper-V features
• Metering data is migrated together with the VM
• Network metering port access control lists (ACLs)
Metered values:
• Average CPU usage
• Memory: average, minimum, and maximum usage
• Maximum allocated disk space
• Network: inbound and outbound traffic
[Diagram: Virtual Machine Resource Metering — VM1, VM2, VM3 grouped into Resource Pool 1 and Resource Pool 2]
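Resource metering as described above is driven by a small set of built-in cmdlets; the VM name is a placeholder:

```powershell
# Start collecting CPU, memory, disk, and network statistics for a VM,
# read them back, then reset the counters. "VM1" is an example name.
Enable-VMResourceMetering -VMName "VM1"
Measure-VM -VMName "VM1"            # report averages and totals so far
Reset-VMResourceMetering -VMName "VM1"
```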
Storage and Security
Encrypted cluster disks
• Encryption with BitLocker
• Support for traditional failover disks
• Support for Cluster Shared Volumes
BranchCache
• Small cache block size reduces network bandwidth requirements
• Intelligent data compression
• Encryption of the cache
• More scalable
Printing
• The document is sent directly to the local printer, while only the print request is routed to the print server in the datacenter
Benefits
• Users in the branch office can download and print documents faster
• Frees up network bandwidth
• Saves costs – supports more people in branch offices with the same hardware; no WAN optimizers needed
BranchCache: Spend time working, not waiting
Users access data from local caches
• Distributed cache mode
• Hosted cache mode
Improved network and delivery performance
• Pre-load or distribute content
• Reduce print-file data on the network
Deploy, manage, and scale with ease
Streamlined deployment and management
• Multisite deployment via a single GPO
• Client computer configuration is automatic and configurable via GPOs
• Delivers encrypted caches without PKI or hard drive encryption
Scales from home office to large locations
• Multiple hosted cache servers for large offices
• Extensible Storage Engine database technology
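The two cache modes above map to dedicated client cmdlets; the hosted-cache server name is an illustrative placeholder:

```powershell
# Configure a branch client for distributed cache mode
# (peers in the branch share their caches with each other).
Enable-BCDistributed

# Or point clients at a hosted cache server in the branch;
# "hostedcache01" is an example server name.
Enable-BCHostedClient -ServerNames "hostedcache01"

# Check the resulting BranchCache configuration.
Get-BCStatus
```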