M8 - Lesson 1 : Planning a failover cluster
Preparing to implement failover clustering
Plan the distribution of highly available applications from a failed node. When a node fails, its highly available services or applications should be distributed among the remaining nodes to prevent any single node from being overloaded (a placement-configuration sketch follows this list).
Ensure that each node has sufficient capacity to service the highly available services or applications that you allocate to it when another node fails. Build in enough of a buffer that no node runs near capacity after a failover event.
Use hardware with similar capacity for all nodes in the cluster. This simplifies failover planning because the load from a failed node distributes evenly among the surviving nodes.
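Role placement of this kind can be influenced with the FailoverClusters PowerShell module. A minimal sketch, assuming a hypothetical clustered role named SQLServer1 and nodes named NODE1 and NODE2:

    # Set the preferred owner nodes for a clustered role, so that after a
    # failure it is placed on these nodes first (names are illustrative)
    Set-ClusterOwnerNode -Group "SQLServer1" -Owners NODE1, NODE2

    # Review the current placement of all clustered roles
    Get-ClusterGroup | Format-Table Name, OwnerNode, State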
Failover-cluster storage
Failover clusters require shared storage to provide consistent data to a virtual server after failover
Shared serial attached SCSI (SAS).
Shared SAS is the lowest-cost option. However, it is not very flexible because the two cluster nodes must be close together physically.
iSCSI.
iSCSI is a type of storage area network (SAN) that transmits SCSI commands over IP networks. Performance is acceptable for most scenarios when you use 1 gigabit per second (Gbps) or 10 Gbps Ethernet as the physical medium for data transmission. This type of SAN is inexpensive to implement because it does not require any specialized networking hardware (a connection sketch follows this list).
Fibre Channel.
Fibre Channel SANs typically have better performance than iSCSI SANs, but they are significantly more expensive.
Shared virtual hard disk.
You can use a shared virtual hard disk as storage for VM guest clustering. You should locate the shared virtual hard disk on a Cluster Shared Volume (CSV) or on a Scale-Out File Server cluster, and connect it through a virtual SCSI or guest Fibre Channel interface so that you can attach it to two or more VMs that participate in the guest cluster.
Scale-Out File Server.
You can utilize shared Server Message Block (SMB) storage as the shared location for some failover cluster roles, specifically SQL Server and Hyper-V. Nodes that host the SQL Server or Hyper-V roles then do not require local storage; all storage access occurs over SMB 3.0 to the Scale-Out File Server.
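As a minimal sketch of the iSCSI option, the following PowerShell connects a cluster node to an existing iSCSI target; the portal address 10.0.0.50 is a hypothetical placeholder:

    # Start the iSCSI initiator service and have it start automatically
    Start-Service MSiSCSI
    Set-Service MSiSCSI -StartupType Automatic

    # Register the target portal (address is a placeholder)
    New-IscsiTargetPortal -TargetPortalAddress "10.0.0.50"

    # Connect to the discovered target and persist the connection across reboots
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true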
Storage requirements
If you want to use the native disk support that failover clustering includes, you should use basic disks, not dynamic disks.
We recommend that you format the partitions with NTFS or Resilient File System (ReFS). For the disk witness, the partition must be NTFS or ReFS. Scale-Out File Servers do not support ReFS at this time.
For the partition style of the disk, you can use either master boot record (MBR) or GUID partition table (GPT).
The miniport driver that you use for storage components must be compatible with the Microsoft Storport storage driver, which offers a higher-performance architecture
You must isolate storage devices, so that you have only one cluster per device. Servers from different clusters must be unable to access the same storage devices.
Consider using Multipath I/O (MPIO) software to achieve the highest level of redundancy and availability.
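Once shared disks meet these requirements, they can be brought into the cluster from PowerShell. A minimal sketch; the disk name below is illustrative:

    # List disks that are visible to all nodes and eligible for clustering,
    # then add them to the cluster
    Get-ClusterAvailableDisk | Add-ClusterDisk

    # Optionally convert a clustered disk into a Cluster Shared Volume (CSV)
    Add-ClusterSharedVolume -Name "Cluster Disk 1"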
Hardware requirements
You must use server hardware that is certified for Windows Server
Server nodes should all have the same configuration and contain the same or similar components
All servers must pass the tests in the Validate a Configuration Wizard.
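The same tests can be run from PowerShell with the Test-Cluster cmdlet. A minimal sketch, assuming hypothetical node names NODE1 and NODE2:

    # Run the full validation test suite against the prospective cluster nodes
    Test-Cluster -Node NODE1, NODE2

    # Or run only selected test categories, such as storage and networking
    Test-Cluster -Node NODE1, NODE2 -Include "Storage", "Network"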
Network requirements
Your servers should connect to multiple networks to ensure communication redundancy, or to a single network with redundant hardware, to remove single points of failure.
You should ensure that network adapters are identical and that they use the same IP version, speed, duplex setting, and flow-control capabilities.
Your network adapters should support Receive Side Scaling (RSS) and Remote Direct Memory Access (RDMA).
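A quick sketch of checking these capabilities on a node with the built-in NetAdapter cmdlets:

    # Show which adapters have Receive Side Scaling (RSS) enabled
    Get-NetAdapterRss | Format-Table Name, Enabled

    # Show which adapters expose RDMA capability
    Get-NetAdapterRdma | Format-Table Name, Enabled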
Infrastructure & software requirements
Domain controllers should run a supported version of Windows Server, meaning Windows Server 2008 or newer.
The domain functional level and forest functional level should be Windows Server 2008 or newer.
Domain Name System (DNS) servers should run a supported version of Windows Server, meaning Windows Server 2008 or newer.
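A minimal sketch of verifying the functional levels, assuming the ActiveDirectory RSAT module is installed:

    Import-Module ActiveDirectory

    # Display the domain and forest functional levels
    (Get-ADDomain).DomainMode
    (Get-ADForest).ForestMode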
Security
Considerations
Provide a method for authentication and authorisation
Ensure that intra-cluster communication is authenticated with Kerberos v5
Ensure that unauthorised users do not have physical access to the failover cluster nodes
Ensure that you use anti-malware software
Active Directory-detached cluster
AD DS objects for network names are not created
The cluster network name is registered in DNS only, so there is no need to create computer objects for it in AD DS
We do not recommend this for any scenario that requires Kerberos authentication
You must run Windows Server 2012 R2 or newer on all cluster nodes
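A minimal sketch of creating an Active Directory-detached cluster in PowerShell; the node names and static address are hypothetical:

    # Create a cluster whose administrative access point is registered in DNS
    # only, so no computer objects are created in AD DS
    New-Cluster -Name "Cluster1" -Node NODE1, NODE2 `
        -StaticAddress 10.0.0.100 -NoStorage `
        -AdministrativeAccessPoint Dns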
Quorum
Node Majority.
Each node that is available and in communication can vote. The cluster functions only with a majority, that is, more than half of the votes. This model is preferred when the cluster consists of an odd number of server nodes, because no witness is required to maintain or achieve quorum. (A configuration sketch for all four quorum modes follows this list.)
Node and Disk Majority.
Each node can vote, as can a designated disk in the cluster storage (the disk witness) when they are available and in communication. The cluster functions only with a majority (more than half) of votes.
Node and File Share Majority.
Each node can vote, as can a designated file share (file share witness) that an administrator creates, as long as they are available and in communication.
No Majority: Disk Only.
The cluster has quorum if one node is available and in communication with a specific disk in the cluster storage.
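Each of these quorum modes can be configured with the Set-ClusterQuorum cmdlet. A minimal sketch; the disk and share names are hypothetical:

    # Node Majority (no witness)
    Set-ClusterQuorum -NodeMajority

    # Node and Disk Majority, with a clustered disk as the witness
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 2"

    # Node and File Share Majority, with a file share witness
    Set-ClusterQuorum -NodeAndFileShareMajority "\\FileServer\Witness"

    # No Majority: Disk Only
    Set-ClusterQuorum -DiskOnly "Cluster Disk 2"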
Planning for migrating/upgrading failover clusters
Pause the cluster node and drain all cluster resources
Migrate cluster resources to another node in the cluster
Replace the cluster node's operating system with Windows Server 2016, and then add the node back to the cluster
Repeat these steps until all nodes run Windows Server 2016
Run the Update-ClusterFunctionalLevel cmdlet
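A minimal sketch of this cluster rolling-upgrade flow in PowerShell; NODE1 is a hypothetical node name:

    # Pause the node and drain its roles onto the remaining nodes
    Suspend-ClusterNode -Name NODE1 -Drain

    # Evict the node, reinstall it with Windows Server 2016, then rejoin it
    Remove-ClusterNode -Name NODE1
    Add-ClusterNode -Name NODE1

    # After all nodes run Windows Server 2016, raise the functional level
    Update-ClusterFunctionalLevel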