Channel: High Availability (Clustering) forum

Windows Server 2012-R2 or 2016 Failover cluster manager: multiple online resources


I was wondering if anybody experienced and/or resolved the following issue:

Windows Failover cluster Setup:

  • Two Windows 2016 or 2012-R2 server nodes: A and B with current Windows patches.
  • Generic Application DLL resource: implements IsAlive(), LooksAlive(), Online() and Offline()
  • Virtual IP address resource: as a dependency of the Generic Application
  • Policy: configured to failover at the first failure
    1. Period for Restarts=15:00
    2. Maximum restarts in the specified period=0
    3. Delay between restarts=0

Issue:

When IsAlive() fails on node A (the primary), the cluster service:

  • Does not call Offline() on A (leaving A online)
  • Moves VIP address from A to B
  • Calls Online() on B

As a result, both A and B Application resources are online.
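
For reference, a minimal PowerShell sketch of how the failover policy above can be checked and set on the resource (assuming the Generic Application resource is named "MyApp"); it only confirms the policy in effect on both nodes, it is not a fix for the missing Offline() call:

# Inspect the current failure policy of the resource
$res = Get-ClusterResource -Name "MyApp"
$res | Format-List Name, State, RestartAction, RestartPeriod, RestartThreshold, RestartDelay

# Mirror the policy described above: 15-minute restart period,
# zero restarts before failing over, no delay between restarts
$res.RestartPeriod    = 900000   # milliseconds (15:00)
$res.RestartThreshold = 0
$res.RestartDelay     = 0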




Server 2016 two node cluster network config on virtual hw


I'm setting up a two-node cluster for a SQL AAG on VMware guests running Windows Server 2016. I haven't worked with a cluster in quite a while, and I'm looking for networking best-practice configuration on virtual hardware. Back in the day with physical hardware I set up a separate NIC on each server and ran a crossover cable between them for a dedicated cluster network. What's the best practice (or close to it) in the virtual realm to mimic the function of that crossover cable? I've tried setting up a second NIC on a different subnet than the primary NIC, but I often see 'network unavailable' messages in the event logs.
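
For what it's worth, a minimal sketch of checking and setting the cluster network roles once the second vNIC/subnet is in place (the network name below is a placeholder):

# List the cluster networks and the role the cluster assigned to each
Get-ClusterNetwork | Format-Table Name, Address, State, Role
# Role values: 0 = not used by the cluster, 1 = cluster (heartbeat/CSV) only, 3 = cluster and client

# Dedicate the second subnet to cluster-internal traffic
(Get-ClusterNetwork -Name "Cluster Network 2").Role = 1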

Any advice would be great.

THX> Eric

Disk Accessing Errors


This morning we faced issues with cluster disk failures.

Below is the error we faced.

The validation report was run and found some warnings; can an expert help me understand them?

https://crescentpk-my.sharepoint.com/:u:/g/personal/osama_mansoor_crescent_com_pk/EUN8kP29fxBCs722GXxuLyYBlSX1LoufmDkIxOqBKN9c4Q?e=szzB9w

S2D - to iWARP or to RoCE / Switchless?


Hello all,

I am about to build a new platform and now I have to answer a number of crucial issues to make the right choice for hardware.

3-Node S2D Setup based on Windows 2019 Datacenter:

3 x HPE DL360 Gen10 NVME with 5 x 6.4 TB NVME SSD with an effective 25.6 TB Storage Pool

In every server I have the option of 2 x SFP28 10/25 Gbit adapters with 2 ports, from the following brands:

• Mellanox ConnectX-4 Lx (640FLR-SFP28) preferred

• Broadcom BCM57414 (631FLR-SFP28)

• Marvell QL41401L-A2G (622FLR-SFP28)

So I am trying to figure out what fits best for the most optimal configuration.

For example, RoCE vs. iWARP is one such choice, where one vendor says that iWARP is faster:

https://www.chelsio.com/wp-content/uploads/resources/iwarp-s2d-updates.pdf

And the other vendor says that RoCE v2 is faster:

http://www.mellanox.com/related-docs/whitepapers/WP_RoCE_vs_iWARP.pdf

My questions:

It is advisable to have a witness (FSW) with a 3-node S2D cluster. This is possible because a 4th server, which is not part of the S2D cluster, can host a file share witness. Can a 3-node setup with a witness (FSW) be realized without a switch? Should the witness file share be on the iWARP or RoCE subnet? Or is it advised to always use a switch?
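
If it helps, a minimal sketch of pointing the quorum at a file share witness on that 4th server (server and share names are placeholders; the witness only needs ordinary SMB access over the management network, not the RDMA fabric):

# Configure the file share witness on the server outside the S2D cluster
Set-ClusterQuorum -FileShareWitness "\\WITNESS01\S2D-Witness"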

Which technology should I use, iWARP or RoCE? Which is the fastest?

From Windows 2012R2 to 2016 Cluster operating system rolling upgrade question


Hi everyone, 

I want to upgrade a Windows 2012 R2 guest cluster (File server) to Windows 2016. This cluster is running on Windows 2016 hosts. The VMs that compose this cluster are using shared drives (VHDS). 

The link to perform the upgrade https://docs.microsoft.com/en-us/windows-server/failover-clustering/cluster-operating-system-rolling-upgrade says:

"The following scenario is not supported in Windows Server 2016:
Cluster OS Rolling Upgrade of guest clusters using virtual hard disk (.vhdx file) as shared storage"

I don't think this applies to me. I think it refers to the way Windows 2012 R2 shares drives ('Enable virtual hard disk sharing' on a .vhdx), which is different from the way Windows 2016 does it (a VHD Set, .vhds).

Can I go ahead with the upgrade?
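
For reference, the functional-level steps I'm planning around (a sketch only):

# Before starting: confirm the current functional level (8 = 2012 R2, 9 = 2016)
Get-Cluster | Format-List Name, ClusterFunctionalLevel

# After every node is running Windows Server 2016, commit the upgrade (irreversible)
Update-ClusterFunctionalLevel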

Thanks to everyone,

Ivan Mckenzie

Fault Domains with s2d


Hello all,

I am currently learning virtualization with Windows and S2D.

It's clear that I can define different kinds of fault domains, such as node, chassis, rack and site.

Right now I lack the money to properly test these configurations :)

As far as I understand the concept of fault domains, one fault domain can go offline and the cluster itself will remain online.

Let's say I have defined 2 rack fault domains ("rack-a" and "rack-b") with 4 nodes each (4 x 100 GB disks per node), "hv01"-"hv08".

hv01-04 are assigned to rack-a, the others to rack-b. Then I create the S2D pool after configuring the fault domains, as suggested by MS.
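
Below is a minimal, untested sketch of how I think the rack definitions would look (node names as in the example above):

# Define the two rack fault domains and place the nodes in them
New-ClusterFaultDomain -Name "rack-a" -FaultDomainType Rack
New-ClusterFaultDomain -Name "rack-b" -FaultDomainType Rack

"hv01","hv02","hv03","hv04" | ForEach-Object { Set-ClusterFaultDomain -Name $_ -Parent "rack-a" }
"hv05","hv06","hv07","hv08" | ForEach-Object { Set-ClusterFaultDomain -Name $_ -Parent "rack-b" }

# Only after the fault domains exist, enable S2D so the pool is created rack-aware
Enable-ClusterStorageSpacesDirect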

So for available storage, will I have 1.6 TB or 3.2 TB if I configure the volumes with dual parity?

If I have VMs on rack-a, will they live-migrate over to rack-b after a failure of that rack?

Does the rack itself also have node-level fault tolerance, or does it mean that if just a single node fails the whole rack is treated as failed?

Lastly, is it possible to add a single node to only one rack? How would that affect the storage?

 

hopefully somebody can answer my questions

thanks in advance

Regards

Elmar

Storage Spaces Direct (S2D) - Poor write performance with 5 nodes with 24 Intel P3520 NVME SSDs each over 40Gb IB network


Need a little help with my S2D cluster which is not performing as I had expected.

Details:

5 x Supermicro SSG-2028R-NR48N servers with 2 x Xeon E5-2643v4 CPUs and 96GB RAM

Each node has 24 x Intel P3520 1.2TB NVME SSDs

The servers are connected over an Infiniband 40Gb network, RDMA is enabled and working.

All 120 SSDs are added to S2D storage pool as data disks (no cache disks). There are two 30TB CSVs configured with hybrid tiering (3TB 3-way mirror, 27TB Parity)

I know these are read-intensive SSDs and that parity write performance is generally pretty bad, but I was expecting slightly better numbers than I'm getting:

Tested using CrystalDiskMark and diskspd.exe

Multithreaded Read speeds: < 4GBps (seq) / 150k IOPs (4k rand)

Singlethreaded Read speeds: < 600MBps  (seq) 

Multithreaded Write speeds: < 400MBps  (seq) 

Singlethreaded Write speeds: < 200MBps (seq) / 5k IOPS (4k rand)

I did manage to up these numbers by configuring a 4GB CSV cache on the CSVs and forcing write through on the CSVs:

Max reads: 23 GBps / 500K IOPS (4K), max writes: 2 GBps / 150K IOPS (4K)

That high read performance is due to the CSV cache, which uses memory. Write performance is still pretty bad though. In fact it's only slightly better than the performance I would get from a single one of these NVMe drives. I was expecting much better performance from 120 of them!

I suspect that the issue here is that Storage Spaces is not recognising that these disks have PLP protection which you can see here:

Get-storagepool "*S2D*" | Get-physicaldisk |Get-StorageAdvancedProperty

FriendlyName          SerialNumber       IsPowerProtected IsDeviceCacheEnabled
------------          ------------       ---------------- --------------------                   
NVMe INTEL SSDPE2MX01 CVPF7165003Y1P2NGN            False                     
WARNING: Retrieving IsDeviceCacheEnabled failed with ErrorCode 1.
NVMe INTEL SSDPE2MX01 CVPF717000JR1P2NGN            False                     
WARNING: Retrieving IsDeviceCacheEnabled failed with ErrorCode 1.
NVMe INTEL SSDPE2MX01 CVPF7254009B1P2NGN            False                     
WARNING: Retrieving IsDeviceCacheEnabled failed with ErrorCode 1.
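
One workaround I have seen reported (but not yet tried) is to mark the pool as power protected so Storage Spaces stops forcing write-through on every IO; this is only appropriate if the drives really do have PLP:

# Tell Storage Spaces to treat the pool's device caches as power protected
Get-StoragePool -FriendlyName "*S2D*" | Set-StoragePool -IsPowerProtected $true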

Any help with this issue would be appreciated.

Thanks.


windows 2019 s2d cluster failed to start event id 1809


Hi, I have a lab with an Insider Windows 2019 cluster which I in-place upgraded to the RTM version of Server 2019. The cluster shuts down after a while and event ID 1809 is logged:

This node has been joined to a cluster that has Storage Spaces Direct enabled, which is not validated on the current build. The node will be quarantined.
Microsoft recommends deploying SDDC on WSSD [https://www.microsoft.com/en-us/cloud-platform/software-defined-datacenter] certified hardware offerings for production environments. The WSSD offerings will be pre-validated on Windows Server 2019 in the coming months. In the meantime, we are making the SDDC bits available early to Windows Server 2019 Insiders to allow for testing and evaluation in preparation for WSSD certified hardware becoming available.

Customers interested in upgrading existing WSSD environments to Windows Server 2019 should contact Microsoft for recommendations on how to proceed. Please call Microsoft support [https://support.microsoft.com/en-us/help/4051701/global-customer-service-phone-numbers].

It's kind of weird because my S2D cluster is running in VMs. Is there some registry switch to disable this stupid lock???
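
The only workaround I have seen reported for lab/eval use is an unsupported registry value under the cluster service parameters, followed by restarting the node; I can't confirm it is officially documented, so treat it as an assumption:

# Unsupported, lab/eval only: reported registry switch for the event 1809 quarantine
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\ClusSvc\Parameters" `
                 -Name "S2D" -PropertyType DWord -Value 1 -Force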


The computer is joined to a cluster in Windows Server 2012 and R2


Dear Forum, 

I deployed a Windows failover cluster on Windows Server 2016 with 3 nodes (Node1, Node2, Node3). I then removed one node (Node3) from the failover cluster. A week later I tried to add Node3 back to the cluster, but it can't join; when I add Node3 to the existing cluster, the error message below is shown in the event viewer.

The Cluster service cannot be started. An attempt to read configuration data from the Windows registry failed with error '2'. Please use the Failover Cluster Management snap-in to ensure that this machine is a member of a cluster. If you intend to add this machine to an existing cluster use the Add Node Wizard. Alternatively, if this machine has been configured as a member of a cluster, it will be necessary to restore the missing configuration data that is necessary for the Cluster Service to identify that it is a member of a cluster. Perform a System State Restore of this machine in order to restore the configuration data.

It seems that the record wasn't deleted from this computer's registry after it was removed from the cluster.
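
One thing I am considering is clearing the stale cluster configuration on Node3 before re-adding it; a rough sketch (the cluster name is a placeholder):

# Run locally on Node3 to wipe the leftover cluster configuration from the registry
Clear-ClusterNode -Force

# Then re-add the node from one of the existing cluster members
Add-ClusterNode -Cluster MyCluster -Name Node3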

Could anyone help with this problem?

Cluster resource could not be brought online in windows 2012 R2


Unable to bring the cluster name online in a Windows 2012 R2 file server failover cluster.

Getting error code: 0x8007139a

Cluster events show event IDs: 1214, 1205, 1069, 1254

My cluster is working OK, but it is unable to switch over to the other node; the cluster IP address shows online, but the cluster name does not come online.
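
In case it helps, the diagnostics I can collect (a sketch; the destination folder is arbitrary):

# Grab the last 15 minutes of cluster log from every node right after a failed online attempt
Get-ClusterLog -TimeSpan 15 -Destination C:\Temp

# Look for the network name resource entries around the failure
Select-String -Path C:\Temp\*.log -Pattern "Network Name" | Select-Object -Last 50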

Please provide fixes.

Narender Yadav

S2D in a lab - questions about node failure.....


I set up a 2-node S2D cluster with nested Hyper-V and was doing a few tests. Live and quick migration work fine, but if I 'pull the power' on one of the S2D nodes (to simulate a node failure), the machine on that node never migrates. In Failover Cluster Manager the role shows as 'Unmonitored' and the VM is dead in the water. I do have a file share witness on a machine not impacted by my testing.

I would think that if I pulled the power on a host, the cluster would figure out that the node is offline and the other node would pick up the load.

Did I miss a configuration step somewhere?

[EDIT] After a few minutes the machine came back online, but as if it had been restarted. Is that expected behavior? I was hoping it would be faster, or that the machine would come back without being reset. But I may need to adjust my expectations!

Unable to connect to Failover cluster manager in Windows server 2016

Hi,

I have a 2-node cluster in my environment which I used to be able to manage from Failover Cluster Manager. However, I am now getting an error: "The operation has failed. The following is a list of nodes that encountered this problem when the connection to the cluster was attempted: The remote node". Both nodes are in the same network segment and no local firewall is blocking.

I have seen the URL from below.

1) https://blogs.msdn.microsoft.com/clustering/2010/11/23/trouble-connecting-to-cluster-nodes-check-wmi/
2) https://blogs.technet.microsoft.com/askcore/2013/12/17/unable-to-launch-cluster-failover-manager-on-any-node-of-a-20122012r2-cluster/

I am able to get output when I run wbemtest and "Get-WmiObject -namespace "root\mscluster" -class MSCluster_Resource" locally. When I ran the verification script from the 2nd URL, it shows "WMI query succeeded" for the local node but "WMI query failed // The RPC server is unavailable. (Exception from HRESULT: 0x800706BA)" for the remote node.

I then proceeded to run the remediation steps below, but it is still not working. Both servers seem to be working on their own but cannot connect to each other. There are a lot of event 4683 entries in the FailoverClustering-Manager event log with the message "The error was 'An attempt to connect to the cluster failed due to one or more nodes not responding to WMI calls. This is usually caused by a problem with the WMI infrastructure on the node(s)'". Any suggestions?

MOF Parser
cd c:\windows\system32\wbem
mofcomp.exe cluswmi.mof

Reset WMI Repository
Winmgmt /resetrepository
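
I also plan to run the same WMI query remotely against each node to confirm which direction fails (the node names below are placeholders):

foreach ($node in "NODE1","NODE2") {
    try {
        # Same query as above, but executed against the remote node over DCOM/RPC
        Get-WmiObject -ComputerName $node -Namespace "root\mscluster" -Class MSCluster_Resource -ErrorAction Stop | Out-Null
        Write-Host "WMI query succeeded on $node"
    }
    catch {
        Write-Host "WMI query FAILED on ${node}: $($_.Exception.Message)"
    }
}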

Regards,
Chiew Sheng

S2D 2 node cluster


Hello,

We have a 2-node S2D cluster with Windows Server 2019. Between the two nodes we have a directly connected RDMA storage network (Cluster only) and a client-facing network based on LACP teaming on each node (Cluster and Client). We have done a failover test and it works: when we power off one node, the virtual machines migrate to the other host as expected.

But when we unplug the client-facing adapters (two adapters in LACP) on the node where the VMs reside, VM migration fails and after some time the Cluster Network Name and Cluster IP Address resources also fail. When we plug the client-facing adapters back into the failed node, the cluster IP address recovers and the VM client network works again.

So the problem: migration fails after an unexpected loss of the client-facing network on the node where the VMs reside. The nodes can still communicate with each other through the storage network and all nodes are up in Failover Cluster Manager. When the client network is down, the VMs should migrate to the other node with a working client-facing network, but instead the cluster resources fail and the VMs do not migrate. How can we fix this behaviour? Has anyone seen this before?

SAN HPE SV3200 iScsiIPrt errors crashing VMs and after cascade also Fail Over Server Nodes?


We have a three-node W2012 R2 failover cluster that has been running spotlessly for years with the HPE P4300 SAN, but after adding the HPE StoreVirtual SV3200 as a new SAN we are getting iScsiPrt errors that HPE Support cannot fix, crashing VMs and also two of the three failover nodes.

At first everything seemed to work, but after adding additional disks on the SAN, a SAN controller crashed. That has been replaced under warranty, but now, when moving our servers and especially the SQL 2008 servers to the SAN, problems start to occur. The VHDX volumes of the SQL servers are thin provisioned.

Live storage moves worked fine for non-SQL servers. Some SQL servers froze and operation was halted, so we needed to perform an offline move. Then, during high disk IO and especially during backups, the W2012 R2 failover cluster started to behave erratically, eventually crashing VMs and in one instance rebooting two failover nodes, as a result of a flood of iScsiPrt errors in the event log:

System iScsiPrt event ID 27 error Initiator could not find a match for the initiator task tag in the received PDU. Dump data contains the entire iSCSI header.
System iScsiPrt event 129 warning The description for Event ID 129 from source iScsiPrt cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event:

\Device\RaidPort4

the message resource is present but the message is not found in the string/message table

System iScsiPrt event ID 39 error Initiator sent a task management command to reset the target. The target name is given in the dump data.
System iScsiPrt event ID 9 error Target did not respond in time for a SCSI request. The CDB is given in the dump data.
System iScsiPrt event 129 warning The description for Event ID 129 from source iScsiPrt cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event:

\Device\RaidPort4

the message resource is present but the message is not found in the string/message table
System iScsiPrt event ID 27 error Initiator could not find a match for the initiator task tag in the received PDU. Dump data contains the entire iSCSI header.
System FailOverClustering event id 5121 Information Cluster Shared Volume 'Volume4' ('NEMCL01_CSV04') is no longer directly accessible from this cluster node. I/O access will be redirected to the storage device over the network to the node that owns the volume. If this results in degraded performance, please troubleshoot this node's connectivity to the storage device and I/O will resume to a healthy state once connectivity to the storage device is reestablished.

After a 2-hour period of these events the failover cluster services started to give errors, VMs failed, and finally 2 nodes of our 3-node failover cluster rebooted because of a crash.

So far HPE has not been able to fix this. The SV3200 logs have occasional iSCSI controller errors, but the error logging in the SVMC is minimal.

HPE support blamed our use of a VIP and of Sites (a label); both are supported according to the HPE product documentation. These have been removed and the iSCSI initiator has been pointed at the Eth0 bond IP addresses directly. As the problems persist, they claimed we are using the LeftHand DSM MPIO driver on the initiator connections to the SV3200, which is not the case: it is the standard MS DSM. Yes, the LeftHand driver is on the system for our old SAN, but it is not configured for the SV3200 initiator sessions, which use Round Robin With Subset.

We are currently facing a legal warranty standoff.

Any pointers  or other comparable experiences with the HPE Storevirtual SV3200 SAN?

TIA,

Fred


++ 2 2012R2 Node Hyper-V Cluster Connection Issue ++


Hello,

it seems that we have some kind of connection issue within our Hyper-V cluster. I am using the following script found in: C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2

Content of the script:

##############################################################################
#
#  GetVM.ps1 
#
#    This script will fetch a bunch of info for a specific VM on the specified
#    machine if a name is given, or all VMs if no VM name is given. We'll use
#    this in the case of populating our VM nodes under a server node as well
#    as any time we just need state info on a specific VM.
#
#    Usage:
#         GetVM.ps1 [ComputerName] [-isCluster] [-unclusteredOnly] [VmName]
#
#    ComputerName - name of the host or cluster machine to get VM info from.
#
#    isCluster - specifies that the input computer name is a cluster and
#                the desire is to get a list of VMs that resides inside that cluster
#
#    unclusteredOnly - only list VMs on inputted host that are not part of the local cluster
#                      This option implies you are querying for VMs on the inputted
#                      host (ComputerName) and is *not* compatible with the isCluster option
#
#    VmName - name of a specific VM. This is optional - if not specified, 
#             info for all VMs is returned.
#
##############################################################################
param 
(
   [string]$machine,
   [switch]$isCluster,
   [switch]$unclusteredOnly,
   [string]$vmname,
   [switch]$ignoreRPCError,
   [String[]]$InclusiveVMList,
   [String[]]$ExclusiveVMList

)

$ScriptDir = Split-Path -parent $MyInvocation.MyCommand.Path
Import-Module $ScriptDir\UtilModuleV2
Import-Module $scriptDir\..\UtilModule

$error.clear()

# -----------------------------------------------------------------------------
# PrintVm
#   - prints out VM's information
# -----------------------------------------------------------------------------
Function PrintVm
{
param
(
$myVM
)

write-host ("<vm>")
write-host ("<Name>"+$myVM.VMId.ToString().ToUpper()+"</Name>")
write-host ("<Element Name>"+$myVM.Name+"</Element Name>")
    if ($myVM.State -eq "Off")
    {
    write-host ("<Enabled State>3</Enabled State>")
    }
    else
    {
    write-host ("<Enabled State>2</Enabled State>")
    }
write-host ("<Host Name>"+$myVM.ComputerName+"</Host Name>")
write-host ("</vm>")

}

Function GetSingleVm
{
    if ($isCluster -eq $true)
    {
        foreach ($node in $nodes)
        {
            $vm = Get-VM -ComputerName $node $vmname
            if ($vm -ne $null)
            {
                $vm
                break
            }
        }
    }
    else
    {
    Get-VM -ComputerName $machine $vmname
    }
}

Function GetVmList
{
    $vmlist = New-Object System.Collections.ArrayList

    $iExecuteQuery = 1

    if ([string]::IsNullOrEmpty($InclusiveVMList) -and [string]::IsNullOrEmpty($ExclusiveVMList))
    {
        $iExecuteQuery = 0
        if ($isCluster -eq $true)
        {
            foreach ($node in $nodes)
            {
                $vmList.AddRange(@(Get-VM -ComputerName $node | where {$_.IsClustered -eq $true}))
            }
        }
        else
        {
            $vmList = @(Get-VM -ComputerName $machine)
        }
    } 
       

    if ($iExecuteQuery -eq 1)
    {
if (-not [string]::IsNullOrEmpty($InclusiveVMList))
{
$iFlag = 1
$inclusiveQuery = ""
foreach ($vmPattern in $InclusiveVMList)
{
if ($iFlag)
{
$iFlag = 0
}
else
{
$inclusiveQuery = $inclusiveQuery + " -or "      
}
$inclusiveQuery = $inclusiveQuery + '$_.Name' + " -clike `"" + $vmPattern + "`""         
}
}

if (-not [string]::IsNullOrEmpty($ExclusiveVMList))
{
$iFlag = 1
$exclusiveQuery = ""
foreach ($vmPattern in $ExclusiveVMList)
{
if ($iFlag)
{
$iFlag = 0
}
else
{
$exclusiveQuery = $exclusiveQuery + " -and "      
}            
$exclusiveQuery = $exclusiveQuery + '$_.Name' + " -cnotlike `"" + $vmPattern + "`""
}
}

if (-not [string]::IsNullOrEmpty($InclusiveVMList))
{
if ([string]::IsNullOrEmpty($ExclusiveVMList)) 
{             
$query = $inclusiveQuery     
}
else
{
$query = $inclusiveQuery + " -and " + $exclusiveQuery
}         
}    
else
{
            $query = $exclusiveQuery
}

if ($isCluster -eq $true)
{
foreach ($node in $nodes)
{
$command = "Get-VM -ComputerName $node"
$cluster = "{"+'$_.IsClustered'+" -eq 'true'"+"}"
$vmListCommandOnCluster = $command +" | where "+ $cluster
$vmListCommandOnQuery = $vmListCommandOnCluster+" | where "+ "{" +$query +"}"
write-host ("<vmListCommandOnQuery>"+$vmListCommandOnQuery+"</vmListCommandOnQuery>")
$vmListCommandOutput = $null
$vmListCommandOutput = Invoke-Expression $vmListCommandOnQuery          
    if (-not [string]::IsNullOrEmpty($vmListCommandOutput))
{
$count = $vmListCommandOutput.Count
if ($count -ne 0)
{
if ($count -eq "1")
{
$vmList.Add($vmListCommandOutput) 
}
else
{
$vmList.AddRange($vmListCommandOutput)
}
}
}
}
}
else
{
$command = "Get-VM -ComputerName localhost"
$vmListQuery = "$command | where { $query" + "}"
$vmList = Invoke-Expression $vmListQuery
}
    }

foreach ($vm in $vmlist)
    {
        if (($unclusteredOnly -eq $true) -and ($vm.IsClustered -eq $true))
        {
            continue
        }

        [void]$vmlist_out.Add($vm)
    }
}

# -----------------------------------------------------------------------------
# START
# -----------------------------------------------------------------------------

# verify machine parameter has been specified
if ([string]::IsNullOrEmpty($machine))
{
  LogError "Missing argument.  Host or cluster machine name required."
  exit
}

# if isCluster, . won't work since it's always the host's name
#   Not the cluster name even if we're on the cluster manager
if (($isCluster -eq $true) -and (($machine -eq ".") -or ($machine -eq "localhost")))
{
LogError "Invalid arguments.  Cluster queries cannot use `".`" for machine name."
exit
}

# FIXME:  if isCluster, how do we make sure the input is in fact a cluster name?


if ($isCluster -eq $true)
{
# specifying a cluster name, but requesting 
#   VMs *not* in a cluster makes no sense
if ($unclusteredOnly -eq $true)
{
LogError "Invalid arguments.  Cannot specify unclusteredOnly for a Cluster query."
exit
}
}
else
{
if ($unclusteredOnly -eq $true)
{
$cluster = $(Get-Cluster).Name
if ([string]::IsNullOrEmpty($cluster))
{
$unclusteredOnly = $false
$machine = gc env:computername
}
}
if ((($machine -eq ".") -or ($machine -eq "localhost")))
{
$machine = gc env:computername
}
}

$error.clear()

# get list of cluster nodes
$nodes = Get-ClusterNode -Cluster $machine

# initialize array of vms to be outputted
$vmlist_out = New-Object System.Collections.ArrayList
$vmlist_out.clear()

# get the vm(s)
if (-not [string]::IsNullOrEmpty($vmname))
{
    $vmlist_out = GetSingleVm

#  Log an error if a specific VM was requested, but not found
if ($vmlist_out.count -eq 0)
{
LogError ("Couldn't Find Requested VM: " + $vmname)
exit
}
}
else
{
    GetVmList
}

# sort the list of vm's
$vmlist_out = $vmlist_out | Sort-Object Name

write-host ("<start>")
if (($vmlist_out -ne $null) -and ($vmlist_out.Count -ne 0))
{
# first print count of VMs outputted
write-host ("<VmCount>"+$vmlist_out.Count+"</VmCount>")

# now loop through our list 
foreach ($vm in $vmlist_out)
{
PrintVm($vm)
}
}
else
{
write-host ("<VmCount>0</VmCount>")
}
write-host ("<stop>")


Node 1 - VMS01
Node 2 - VMS02
Cluster - VMSCL
Running the script shows only the machines on the current node, and shows an error for the machines hosted on node two.

PS C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2> ./GetVms.ps1 "VMSCL" "APP01" -isCluster
<start>
<VmCount>1</VmCount>
<vm>
<Name>C2648FA1-B83B-4CC5-BC0E-CCCC7B8A4EF7</Name>
<Element Name>APP01</Element Name>
<Enabled State>2</Enabled State>
<Host Name>VMS01</Host Name>
</vm>
<stop>
PS C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2> ./GetVms.ps1 "VMSCL" "ARC01" -isCluster
<start>
<VmCount>1</VmCount>
<vm>
<Name>8F77588D-9B15-4E58-9294-0340289517B2</Name>
<Element Name>ARC01</Element Name>
<Enabled State>2</Enabled State>
<Host Name>VMS01</Host Name>
</vm>
<stop>
PS C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2> ./GetVms.ps1 "VMSCL" "BI01" -isCluster
Get-VM : Ein Parameter ist ungültig. Ein virtueller Computer mit dem Namen "BI01" konnte von Hyper-V nicht gefunden
werden.
In C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2\GetVms.ps1:77 Zeichen:19
+             $vm = Get-VM -ComputerName $node $vmname
+                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (BI01:String) [Get-VM], VirtualizationInvalidArgumentException
    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVMCommand

<start>
<VmCount>1</VmCount>
<vm>
<Name>C04FEFDA-5F30-4480-94C0-1C5D55050374</Name>
<Element Name>BI01</Element Name>
<Enabled State>2</Enabled State>
<Host Name>VMS02</Host Name>
</vm>
<stop>
PS C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2> ./GetVms.ps1 "VMSCL" "CLOUD" -isCluster
<start>
<VmCount>1</VmCount>
<vm>
<Name>6B8E7B4F-E61C-48D3-A6BA-7B33314B6EF9</Name>
<Element Name>CLOUD</Element Name>
<Enabled State>2</Enabled State>
<Host Name>VMS01</Host Name>
</vm>
<stop>
PS C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2> ./GetVms.ps1 "VMSCL" "CTI" -isCluster
<start>
<VmCount>1</VmCount>
<vm>
<Name>4ACE33FB-1E93-4C05-80DC-055171178A60</Name>
<Element Name>CTI</Element Name>
<Enabled State>3</Enabled State>
<Host Name>VMS01</Host Name>
</vm>
<stop>
PS C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2> ./GetVms.ps1 "VMSCL" "CTI01" -isCluster
Get-VM : Ein Parameter ist ungültig. Ein virtueller Computer mit dem Namen "CTI01" konnte von Hyper-V nicht gefunden
werden.
In C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2\GetVms.ps1:77 Zeichen:19
+             $vm = Get-VM -ComputerName $node $vmname
+                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (CTI01:String) [Get-VM], VirtualizationInvalidArgumentException
    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVMCommand

<start>
<VmCount>1</VmCount>
<vm>
<Name>B44260B6-3156-4600-85E1-58168A3CC3D5</Name>
<Element Name>CTI01</Element Name>
<Enabled State>2</Enabled State>
<Host Name>VMS02</Host Name>
</vm>
<stop>
PS C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2> ./GetVms.ps1 "VMSCL" "CX" -isCluster
<start>
<VmCount>1</VmCount>
<vm>
<Name>1200962C-E875-45F9-B4D9-E7BBA2BBBCFD</Name>
<Element Name>CX</Element Name>
<Enabled State>2</Enabled State>
<Host Name>VMS01</Host Name>
</vm>
<stop>
PS C:\Program Files (x86)\Quest\NetVault Backup\scripts\HyperV\v2> ./GetVms.ps1 "VMSCL" "DC01" -isCluster
<start>
<VmCount>1</VmCount>
<vm>
<Name>16A20330-7ABA-4694-A068-FCCA3D7E1A6F</Name>
<Element Name>DC01</Element Name>
<Enabled State>2</Enabled State>
<Host Name>VMS01</Host Name>
</vm>
<stop>
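
The errors above come from the node loop in GetSingleVm: Get-VM raises an error on every node that does not host the requested VM before the loop reaches the node that does. A possible tweak (to the vendor script, so it may be unsupported by Quest) is to suppress that per-node error:

# Inside GetSingleVm: ignore "not found" on nodes that don't host the VM
foreach ($node in $nodes)
{
    $vm = Get-VM -ComputerName $node -Name $vmname -ErrorAction SilentlyContinue
    if ($vm -ne $null)
    {
        $vm
        break
    }
}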

Setup a Cluster with CSV storage as a target for a DFS-R


Trying to set up a Windows Server 2016 cluster to replicate with a file server.

This is the setup:
Primary site:
- 2 DFS
- 1 cluster, 2 nodes, set up with CSV storage

Secondary Site (DR)
- 1 DFS
- 1 File Server

The goal: I want to replicate from the cluster file server at the primary site to the secondary site.

When I add the folder to replicate, I get this message:

"The volume file system cannot be determined the netword name cannot be found"

Thank you,

Jasmin

Live Migrate fails with event 21502 (2019-->2016 host)


I have a 2016 functional level cluster with Server 2019 nodes (basically in the process of replacing the 2016 hosts with 2019).

If a VM is running on a 2019 host I can power it off, quick migrate it to a 2016 host, power it on, and all is good.

But live migration always gives me the error in the title.

All I am getting in Event Data is (very descriptive?!):

Live migration of 'Virtual Machine Test' failed.

Nothing else, no reason.

If a VM is running on a 2016 host I CAN live migrate it to 2019 fine! (albeit with the errors reported in this thread, but I am NOT using VMM!)

vm\service\ethernet\vmethernetswitchutilities.cpp(124)\vmms.exe!00007FF7EA3C2030: (caller: 00007FF7EA40EC65) ReturnHr(138) tid(2980) 80070002 The system cannot find the file specified.
    Msg:[vm\service\ethernet\vmethernetswitchutilities.cpp(78)\vmms.exe!00007FF7EA423BE0: (caller: 00007FF7EA328FEE) Exception(7525) tid(2980) 80070002 The system cannot find the file specified.
] 

Both hosts are IDENTICAL hardware on the same firmware level for every component!

There is NOTHING relating to even an attempted migration in the local host Hyper-V VMMS Admin/Operational logs.

In Hyper-V High Availability/Admin I get the same error but with Event ID 21111.
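
Since the trace points at the virtual switch plumbing, one check worth doing is comparing the virtual switch names on both hosts, because live migration needs a matching switch on the destination (host names below are placeholders):

# Any output from Compare-Object means the switch names differ between the hosts
Compare-Object (Get-VMSwitch -ComputerName HOST2019).Name (Get-VMSwitch -ComputerName HOST2016).Name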

Seb


I am wondering if it is easier to ditch 2019 & stick with 2016 for now

Cannot create checkpoint when shared vhdset (.vhds) is used by VM - 'not part of a checkpoint collection' error


We are trying to deploy a 'guest cluster' scenario over Hyper-V with a shared VHD Set on SOFS. By design the .vhds format should fully support the backup feature.

All machines (Hyper-V, guest, SOFS) are installed with Windows Server 2016 Datacenter. Two Hyper-V virtual machines are configured to use a shared disk in .vhds format, located on an SOFS cluster formed of two nodes. The SOFS cluster has a share configured for applications, and Hyper-V uses the \\sofs_server\share_name\disk.vhds path to the SOFS remote storage. The guest machines are configured with the 'File Server' role and the 'Failover Clustering' feature to form a guest cluster. There are two disks configured on each guest cluster node: 1 - a private system disk in .vhdx format (OS) and 2 - the shared .vhds disk on SOFS.

While trying to take a checkpoint of a guest machine, I get the following error:

Cannot take checkpoint for 'guest-cluster-node0' because one or more sharable VHDX are attached and this is not part of a checkpoint collection.

Production checkpoints are enabled for VM + 'Create standard checkpoint if it's not possible to create a production checkpoint' option is set. All integration services (including backup) are enabled for VM.

When I remove the shared .vhds disk from the VM's SCSI controller, checkpoints are created normally (for the private OS disk).

It is not clear what a 'checkpoint collection' is or how to add the shared .vhds disk to such a collection. Please advise.

Thanks.

Server 2016 HV Cluster - Storage migration for virtual machine ' failed with error 'Unspecified error' (0x80004005).


Storage is on the same array (different volume) that the server connects to

The VM in question has some disks on Volume1 and some on Volume2.

So I want to consolidate them all onto Volume1.

And I am presented with this Unspecified error (I have had enough of this totally amateurish Microsoft approach to something that should be a business solution!)

I can disconnect a disk, move it by hand and re-attach it, and it works perfectly fine, but it is stupid to have live storage migration and not be able to use it!
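
As a fallback while this is broken, a hedged sketch of moving just the disks that live on Volume2 with Move-VMStorage, leaving the rest of the VM in place (the VM name and paths are placeholders):

# Move only the listed VHDX files to Volume1; the VM keeps running
Move-VMStorage -VMName "MyVM" -VHDs @(
    @{ "SourceFilePath" = "C:\ClusterStorage\Volume2\MyVM\Disk1.vhdx";
       "DestinationFilePath" = "C:\ClusterStorage\Volume1\MyVM\Disk1.vhdx" }
)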

Seb

