Hi All,
Looking to build an R&D VDI platform across two nodes using local disks.
I'm planning on buying two servers, each with 4 x 1.92 TB 6 Gbps SATA SSDs. My research tells me this:
2 servers, meaning 2-way mirror
all SSDs, so no caching required
auto-calculated reserve space
Usable capacity = ~6.9 TB (rough math below)
file share witness hosted away from the cluster
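For the 6.9 figure, that's just my own back-of-envelope maths rather than anything from an official sizing tool, so shout if the method is wrong:

# My own rough estimate - not from any official calculator
$rawTB     = 2 * 4 * 1.92              # 2 nodes x 4 drives x 1.92 TB = 15.36 TB raw
$mirrorTB  = $rawTB / 2                # two-way mirror keeps two copies = 7.68 TB
$mirrorTiB = ($mirrorTB * 1e12) / 1TB  # PowerShell's 1TB constant is 2^40 bytes, i.e. a tebibyte
"{0:N2} TB (decimal) is roughly {1:N2} TiB before any reserve is carved out" -f $mirrorTB, $mirrorTiB

That ~6.98 TiB is where my 6.9 comes from; the reserve space would come off that.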
This is the first time I've looked into Storage Spaces Direct, as I've always gone the traditional route of Compellent SANs. My servers have an HBA330 card, which is needed for this technology (i.e., no RAID at the hardware level). I'm confused right from the off regarding installing Windows on each server. Usually I go with 2x SSD in RAID 1 for the OS and then map my iSCSI targets for the storage. How do I go about setting up the disks so I can get Windows installed before installing the roles to support the storage? Is it simply a case of speccing the server with, say, 2x 250 GB NVMe (RAID 1) on its own controller card?
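Assuming the answer is the obvious one (boot from a separate mirrored pair and leave the four HBA330-attached SSDs completely untouched), this is roughly the order of operations I've pieced together from the docs for once Windows is on each node. Node names, cluster name, IP and witness path are all placeholders:

Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer -IncludeManagementTools -Restart

# Validate, then build the cluster without claiming any storage yet
Test-Cluster -Node "VDI-N1","VDI-N2" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "VDI-CL01" -Node "VDI-N1","VDI-N2" -NoStorage -StaticAddress 10.0.0.50

# File share witness hosted away from the cluster
Set-ClusterQuorum -FileShareWitness "\\mgmt-fs01\VDI-CL01-Witness"

# S2D claims every eligible (non-boot, non-RAID) disk behind the HBA330 on its own
Enable-ClusterStorageSpacesDirect

# Two-way mirror is the default when there are only two nodes
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" -FileSystem CSVFS_ReFS -Size 3TB

As far as I can tell the capacity SSDs never get touched until Enable-ClusterStorageSpacesDirect runs, which is why I'm asking whether a small dedicated boot mirror is really all the OS needs.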
I'm going with two network cards. The first gives me dual 25 Gbps for the storage (dedicated fibre switch for storage only), and the second is dual 40 Gbps to the LAN. We have plenty of ports available on our fibre core switch, so we might as well make use of it all. Does this sound like a good idea, or should I look into swapping the disks for 12 Gbps SAS ones and upgrading the storage network card from 25 Gbps to 40 Gbps?
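For what it's worth, this is roughly how I picture splitting the traffic once the cluster exists. The network names are placeholders for whatever the cluster auto-names them, and the RDMA lines only apply if the 25 Gbps cards support iWARP/RoCE:

# Storage fabric carries cluster/SMB traffic only; LAN carries cluster and client traffic
(Get-ClusterNetwork -Name "Storage").Role = 1    # cluster only
(Get-ClusterNetwork -Name "LAN").Role     = 3    # cluster and client

# Let SMB Direct use the storage ports if they are RDMA-capable
Enable-NetAdapterRDMA -Name "Storage-P1","Storage-P2"
Get-SmbClientNetworkInterface | Where-Object RdmaCapable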
The two nodes will also be running Hyper-V failover clustering so we can live migrate critical desktop VMs (although not all of them will need to fail over).
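For the live migration side I was assuming something along these lines, with the storage subnet below as a placeholder:

Enable-VMMigration
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB   # let migrations ride SMB (and RDMA if present)
Add-VMMigrationNetwork -Subnet "10.10.10.0/24"             # keep migrations on the 25 Gbps storage fabric
Set-VMHost -MaximumVirtualMachineMigrations 2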
Also, when I add a third (and maybe fourth) server, can I change to a 3-way mirror on the fly?
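In my head the expansion looks something like the lines below, but I honestly don't know whether the existing two-way volumes can be converted in place or whether they'd have to be recreated and the VMs moved over, hence the question. Node and volume names are placeholders:

Add-ClusterNode -Name "VDI-N3"                                # its HBA330 disks should join the pool automatically
Get-StoragePool -FriendlyName "S2D*" | Optimize-StoragePool   # rebalance existing data across the new node

# New volumes default to three-way mirror once there are three or more nodes
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV02" -FileSystem CSVFS_ReFS -Size 2TB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2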
Thanks!!