Channel: High Availability (Clustering) forum
Viewing all 4519 articles

Event Id 1069 The error code was '0x139f' ('The group or resource is not in the correct state to perform the requested operation.').


We have a 2-node Server 2019 S2D cluster that live migrates perfectly well when you initiate the migration gracefully.

Cluster validation shows no problems

When we power off a server for testing, the migration gets to 59%, hangs for a bit, then stops, and the following is logged:

Event ID 1069: Cluster resource 'Virtual Machine Configuration TEST-VM' of type 'Virtual Machine Configuration' in clustered role 'TEST-VM' failed. The error code was '0x139f' ('The group or resource is not in the correct state to perform the requested operation.'). The migration then times out and TEST-VM restarts on the other node.

Further testing shows that this only happens when TEST-VM has 8 or 10 vCPUs and 32 GB of RAM.
If we use only 1 vCPU and 1024 MB of RAM, the live migration on power-off works as it should.

The hosts are Dell PowerEdge R640s, each with 2x 8-core CPUs and 128 GB of RAM.

We're getting stuck here, as 1069 is a generic "something isn't available" error. We think the virtual machine configuration file cannot be found or moved. The only other events relate to the networking going down, but since we turned a node off, that is to be expected.
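
In case anyone wants to dig into the same failure, the cluster debug log around the failure window may show why the configuration resource could not come online. A sketch (adjust the time span; C:\Temp is a hypothetical destination):

# Collect the cluster debug log from every node for the last 15 minutes
Get-ClusterLog -UseLocalTime -TimeSpan 15 -Destination C:\Temp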


IPv6 VM Property in Virtual Machine Manager


Hello All,

I have a 3-node 2016 Hyper-V failover cluster running 20 VMs. When looking at the Networking tab at the bottom of VMM (not the individual VM network adapter settings), five of the VMs display IPv6 addresses starting with fe80::. Four of the VMs are using dynamic adapters; one is using static. All VMs are also Windows Server 2016. Four of the VMs are on the same node, and one VM is on a second node. There are a total of 4 virtual networks, but DNS, AD, etc. exist only on the "Management" virtual network.

I assume these are link-local addresses and not in use. I don't understand why all the VMs don't exhibit this same behavior.

There are no IPv6 addresses in DNS; I just looked. For every NIC in every VM, I unchecked the "Register this connection's addresses in DNS" box in the Advanced DNS properties page of the Internet Protocol Version 6 (TCP/IPv6) protocol.

I am wondering how this happened and how to get rid of the IPv6 addresses in VMM.
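
If it helps compare notes: the host side sees the guest addresses via the integration services, which is what VMM surfaces. A quick way to list what each guest NIC is reporting (a sketch, run on a Hyper-V host):

# List the addresses each guest NIC reports to the Hyper-V host
Get-VM | Get-VMNetworkAdapter | Select-Object VMName, IPAddresses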

Thanks, Vint

Unable to create hard link locally on cluster volume when file is created and open via cluster share


Our Windows Server 2016 file cluster has an issue: it is not possible to locally create a hard link to a file on CsvFS while the file was created and is still held open remotely via a cluster share.

Everything works fine after the file is closed, but we need to create the hard link as quickly as possible and cannot wait for the file to be closed.

The issue is easily reproduced using the copy/mklink command-line tools.
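
A minimal PowerShell repro sketch, assuming a hypothetical share \\fs-cluster\share backed by C:\ClusterStorage\Volume1\share:

# From a remote client: create the file via the cluster share and keep the handle open
$handle = [System.IO.File]::Open('\\fs-cluster\share\test.dat', 'Create', 'ReadWrite', 'None')

# On the CSV owner node: try to hard-link against the local CsvFS path
New-Item -ItemType HardLink -Path 'C:\ClusterStorage\Volume1\share\test.lnk' -Target 'C:\ClusterStorage\Volume1\share\test.dat'
# Fails while the remote handle is open; succeeds after $handle.Dispose()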

I created a Premier support case three months ago but still have not seen any progress from technical support at Convergys, which was assigned to this case. It seems information about this issue is not reaching the Microsoft development team at all.

I hope some Microsoft folks will see this message and help resolve the issue.


understand dynamic quorum


Please explain the statement below and the context in which it is used.

Can survive one server failure, then another: Yes

https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/understand-quorum
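
For reference, the per-node votes can be inspected on a running cluster (a sketch; with dynamic quorum the cluster recalculates votes after each failure, which is what the docs statement refers to):

# NodeWeight is the configured vote; DynamicWeight is the vote after adjustment
Get-ClusterNode | Format-Table Name, State, NodeWeight, DynamicWeight
(Get-Cluster).DynamicQuorum   # 1 = dynamic quorum enabled (the default since 2012)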

Windows Server 2012 R2 Cluster Mount Points and Cluster Shared Volumes SQL Server 2016


We have an environment as follows:

A 2-node SQL Server cluster running SQL Server 2016 instances. The failover cluster is running on Windows Server 2012 R2. All storage resides on a Compellent SAN. We have just about run out of drive letters for new instances and want to move to mount points, but I have run into an issue when trying to change the mount-point volumes to Cluster Shared Volumes.

On each node of the cluster, I set up a mount-point base drive and assigned it the same drive letter, created the mount-point volumes and mapped them to the base drive, then added the disks as disk resources in the cluster. However, I believe I need to change the disks to Cluster Shared Volume resources so each node in the cluster can access them during a failover.

In Failover Cluster Manager, as soon as I change a mount point from Available Storage to Cluster Shared Volume, it loses the mapping to the base drive.

This is my first time setting up mount points in a cluster. I have looked at so many websites and haven't found any resolution. Any advice will help.
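
A sketch of the conversion step in PowerShell (the disk name is hypothetical). Note that a CSV is surfaced under C:\ClusterStorage on every node rather than at a drive-letter mount point, which may be why the base-drive mapping disappears:

# Convert an Available Storage disk to a Cluster Shared Volume
Add-ClusterSharedVolume -Name 'Cluster Disk 2'
# The volume then appears as C:\ClusterStorage\VolumeN on all nodes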

Public IPv6 address


Hi 

I've been troubleshooting an issue with a Hyper-V failover cluster and have been going through the network config on the VM hosts.

It's a 2-node cluster comprised of 2x Dell FC630s in an FX2 chassis. The only difference in config that I can see is that on one node, when the NICs presented to the OS are placed into a NIC team, the team gets an IPv6 address that is not a link-local (fe80::) address.

The other node gets only a link-local address, which is what I would expect, as the VLAN required to get out of the local subnet is not configured.

One of the outcomes is that this cluster network appears incorrectly in my cluster configuration, with the wrong address.
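
A quick way to see where the unexpected address came from (a sketch; the interface alias is hypothetical):

# PrefixOrigin/SuffixOrigin indicate whether an address came from router
# advertisements, DHCPv6, or manual configuration
Get-NetIPAddress -InterfaceAlias 'NIC-Team' -AddressFamily IPv6 |
    Format-Table IPAddress, PrefixOrigin, SuffixOrigin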

VM shutdown, when try add highly available role in Windows 2016 Hyper-V Failover Cluster

Hi! We have a Hyper-V failover cluster on Windows Server 2016 (after a clean upgrade from 2012 R2). We have an SoFS cluster on 2016 as file storage for the VMs, and we also have VMM.
Let's walk through the problem step by step:
1.) Create a non-highly-available VM "vmtest1" in VMM on the 2016 Hyper-V cluster and start it.
2.) Add the Virtual Machine role for VM "vmtest1" in the Failover Cluster snap-in (see the PowerShell equivalent below).
3.) After ~5-10 minutes we get an error in the Hyper-V-SynthStor event log (Event ID 12630): 'vmtest1': Virtual hard disk resiliency failed to recover the drive '\\test1.test.consto.ru\VD0\test1\vmtest1.vhdx'. The virtual machine will be powered off. Current status: Permanent Failure.
4.) Virtual machine "vmtest1" is now powered off.
The same situation repeats on another Windows Server 2016 cluster.
We have had this problem since upgrading the clusters from Windows 2012 R2 to Windows 2016; on 2012 R2 clusters the problem was never observed.
It happens only when we add the "highly available" role while the VM is running. If we simply create the VM as highly available in VMM from the start, everything goes well.
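For reference, step 2 is equivalent to this PowerShell (a sketch; the cluster name is hypothetical):

# Make the already-running VM highly available
Add-ClusterVirtualMachineRole -VMName 'vmtest1' -Cluster 'hvcluster1'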
All cluster and SoFS servers have the latest updates. Some events:


Microsoft-Windows-Hyper-V-Worker/Admin:
'vmtest1': Virtual hard disk '\\test1.test.consto.ru\VD0\test1\vmtest1.vhdx' received a resiliency status notification. Current status: Disconnected.
'vmtest1': Virtual hard disk '\\test1.test.consto.ru\VD0\test1\vmtest1.vhdx' has detected a recoverable error. Current status: Disconnected.
'vmtest1': Virtual hard disk resiliency failed to recover the drive '\\test1.test.consto.ru\VD0\test1\vmtest1.vhdx'. The virtual machine will be powered off. Current status: Permanent Failure.
'vmtest1' was paused for critical error
'vmtest1' was turned off as it could not recover from a critical error. 

Microsoft-Windows-Hyper-V-StorageVSP/Microsoft-Hyper-V-StorageVSP-Admin:
Storage device '\\?\UNC\test1.test.consto.ru\VD0\test1\vmtest1.vhdx' changed recovery state. Previous state = Recoverable Error Detected, New state = Unrecoverable Error.
Storage device '\\?\UNC\test1.test.consto.ru\VD0\test1\vmtest1.vhdx' received a recovery status notification. Current device state = Recoverable Error Detected, Last status = Disconnected, New status = Permanent Failure.
Storage device '\\?\UNC\test1.test.consto.ru\VD0\test1\vmtest1.vhdx' received a recovery status notification. Current device state = No Errors, Last status = No Errors, New status = Disconnected.
Storage device '\\?\UNC\test1.test.consto.ru\VD0\test1\vmtest1.vhdx' changed recovery state. Previous state = No Errors, New state = Recoverable Error Detected.

Microsoft-Windows-FailoverClustering/Diagnostic:
[RES] Virtual Machine Configuration <Virtual Machine Configuration vmtest1>: Current state 'Online', event 'UpdateVmConfigurationProperties'
[RES] Virtual Machine Configuration <Virtual Machine Configuration vmtest1>: Updated VmStoreRootPath property to '\\?\UNC\test1.test.consto.ru\VD0\test1\vmtest1.vhdx'
[RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine Configuration vmtest1', gen(0) result 0/0.
[RCM] Virtual Machine Configuration vmtest1: Flags 1 added to StatusInformation. New StatusInformation 1 
[RCM] vmtest1: Added Flags 1 to StatusInformation. New StatusInformation 1 
[RHS] Resource Virtual Machine vmtest1 called SetResourceLockedMode. LockedModeEnabled1, LockedModeReason0.
[RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine vmtest1', gen(0) result 0/0.
[RCM] Virtual Machine vmtest1: Flags 1 added to StatusInformation. New StatusInformation 1 
[GUM] Node 16: Processing RequestLock 16:1953
[RCM] HandleMonitorReply: INMEMORY_NODELOCAL_PROPERTIES for 'Virtual Machine vmtest1', gen(0) result 0/0.
[RHS] Resource Virtual Machine Configuration vmtest1 called SetResourceLockedMode. LockedModeEnabled0, LockedModeReason0.
[RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine Configuration vmtest1', gen(0) result 0/0.
[RCM] Virtual Machine Configuration vmtest1: Flags 1 removed from StatusInformation. New StatusInformation 0 
[RHS] Resource Virtual Machine vmtest1 called SetResourceLockedMode. LockedModeEnabled0, LockedModeReason0.
[RCM] HandleMonitorReply: LOCKEDMODE for 'Virtual Machine vmtest1', gen(0) result 0/0.
[RCM] Virtual Machine vmtest1: Flags 1 removed from StatusInformation. New StatusInformation 0 
[RCM] vmtest1: Removed Flags 1 from StatusInformation. New StatusInformation 0 
[RCM] HandleMonitorReply: INMEMORY_NODELOCAL_PROPERTIES for 'Virtual Machine vmtest1', gen(0) result 0/0.
[RCM] Virtual Machine vmtest1: Flags 1 removed from StatusInformation. New StatusInformation 0 
[RES] Virtual Machine <Virtual Machine vmtest1>: Current state 'Online', event 'VmStopped'
[RCM] vmtest1: Removed Flags 1 from StatusInformation. New StatusInformation 0 
[RES] Virtual Machine <Virtual Machine vmtest1>: State change 'Online' -> 'Offline'
[RCM] rcm::RcmApi::OfflineResource: (Virtual Machine vmtest1, 1)
[RCM] Res Virtual Machine vmtest1: Online -> WaitingToGoOffline( StateUnknown )
[RCM] TransitionToState(Virtual Machine vmtest1) Online-->WaitingToGoOffline.
[RCM] rcm::RcmGroup::UpdateStateIfChanged: (vmtest1, Online --> Pending)
[RCM] Res Virtual Machine vmtest1: WaitingToGoOffline -> OfflineCallIssued( StateUnknown )
[RCM] TransitionToState(Virtual Machine vmtest1) WaitingToGoOffline-->OfflineCallIssued.
[RCM] HandleMonitorReply: INMEMORY_NODELOCAL_PROPERTIES for 'Virtual Machine vmtest1', gen(0) result 0/0.








change possible nodes of cluster role


Hi,

I have a 2-node Windows failover cluster on Windows Server 2012 R2 (NOT a Hyper-V cluster).

I am interested in restricting a specific role so it can only run on a specific node (I know that in a 2-node cluster this might not make a lot of sense, but it is supposed to be a temporary thing).

However, using the GUI I am only able to view and change the preferred owners, and I am interested in changing the possible owners. Strangely, I was unable to find an answer that worked on Google.

Can anyone assist me, please? (GUI and/or PowerShell are both good options.)
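
A sketch of the PowerShell side, in case it is useful (role and node names are hypothetical). Possible owners are a per-resource setting, while the GUI's preferred owners are per-group:

# Restrict every resource in the role to a single possible owner
Get-ClusterGroup 'MyRole' | Get-ClusterResource | Set-ClusterOwnerNode -Owners 'Node1'
# Verify the change
Get-ClusterGroup 'MyRole' | Get-ClusterResource | Get-ClusterOwnerNode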


Network Load Balancing with rdp

Hi everyone. I installed two terminal servers, host A and host B, then installed the NLB feature on these servers and created a cluster named RDS.test.local. NLB works properly in my test lab with 5-6 users. If I deploy this in production with 60 users, will it work normally? How many connections can NLB accept per cluster?

System administrator

Add Node to Cluster - Keyset does not exist


Hi,

I am trying to add a third node to a Windows 2012 failover cluster, but I get the following error:

The server 'DR.domain.com' could not be added to the cluster.
An error occurred while adding node 'DR.domain.com' to cluster 'domain-fc'.

Keyset does not exist

The user I am using to add the node is a Domain Admin, so it should not be a permission issue.

All nodes are Windows 2012 R2 VMs on Azure.
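
For completeness, this is the operation in PowerShell terms (a sketch, using the names from the error above):

# Add the third node to the existing cluster
Add-ClusterNode -Name 'DR.domain.com' -Cluster 'domain-fc'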


Usman Shaheen MCTS BizTalk Server http://usmanshaheen.wordpress.com




Validate Windows Firewall Configuration error -Failover Clusters' rule group is not enabled


Hi 

I'm starting a new thread for this previous discussion:

 https://social.technet.microsoft.com/Forums/windowsserver/en-US/b3d17d2a-de43-4d89-8031-64bddbc2cca8/validate-windows-firewall-configuration-error?referrer=https://social.technet.microsoft.com/Forums/windowsserver/en-US/b3d17d2a-de43-4d89-8031-64bddbc2cca8/validate-windows-firewall-configuration-error?forum=winserverClustering

I'm still facing the same problem. I have redeployed the Azure VM three times :( but it's still the same issue.
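
For anyone hitting the same validation error, the rule group named in the report can be re-enabled from PowerShell (a sketch; run on each node):

# Enable the built-in Failover Clusters firewall rule group
Enable-NetFirewallRule -DisplayGroup 'Failover Clusters'
# Confirm the rules are enabled
Get-NetFirewallRule -DisplayGroup 'Failover Clusters' | Format-Table DisplayName, Enabled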

New-Volume cmdlet does not create requested size volume.


Hello,

I don't know if this is the best forum for this since it is PowerShell, but it is a cluster volume I am creating with 2 VMs in Azure.

When I create a volume for DTC of 500MB, the volume actually gets created as 8GB. I would like to know why, and what to do to fix this behavior.

Command I ran:

New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VDiskDBDTC -FileSystem CSVFS_REFS -Size 500MB

Result:

DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining    Size
----------- --------------- ---------- --------- ------------ ----------------- -------------    ----
            VDiskDBDTC      CSVFS      Fixed     Healthy      OK                      7.21 GB 7.94 GB

Also, I tried to create it as 1GB and got the following:

PS C:\windows\system32> New-Volume -StoragePoolFriendlyName S2D* -FriendlyName VDiskDBDTC -FileSystem CSVFS_REFS -Size 1GB


DriveLetter FileSystemLabel FileSystem DriveType HealthStatus OperationalStatus SizeRemaining    Size
----------- --------------- ---------- --------- ------------ ----------------- -------------    ----
            VDiskDBDTC      CSVFS      Fixed     Healthy      OK                      7.21 GB 7.94 GB

The other volumes I created, of various sizes in GB, are the correct sizes. The help page for this command states that you can use MB when specifying size, so I don't know why it won't work correctly.

https://docs.microsoft.com/en-us/powershell/module/storage/new-volume?view=win10-ps
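
One way to narrow down where the extra capacity went (a sketch, using the virtual disk created above):

# Compare the requested size with what the pool actually provisioned
Get-VirtualDisk -FriendlyName VDiskDBDTC |
    Format-List Size, AllocatedSize, FootprintOnPool, ResiliencySettingName
# S2D provisions capacity in fixed-size slabs per data copy, so very small
# volumes may be rounded up well past the requested size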

I also attempted this through the GUI, and got slightly different, but still incorrect, results.

[Screenshots omitted: the attempt to create a 500MB virtual disk; what Get-Disk showed after the creation; what the cluster GUI showed.]

Note the disk number in the cluster GUI, and the fact that I already have a disk 10. Wonder what problems THAT will cause. Also note that PowerShell does not show a disk number. What's up with that?

Perhaps I have made mistakes in this creation. I only hope someone can point them out and help me correct the problem.

Thanks,
Chris





Possible clients for Infrastructure Scale-Out File Server in Windows Server 2019?


Windows Server 2019 has a new Scale-Out File Server role type named Infrastructure, which lets us provide continuously available shares from volumes in a hyper-converged S2D cluster. We can then use these shares to store VHDX files from a separate Hyper-V failover cluster.

Described here:

https://techcommunity.microsoft.com/t5/Failover-Clustering/Scale-Out-File-Server-Improvements-in-Windows-Server-2019/ba-p/372156

Created like this:

Add-ClusterScaleOutFileServerRole -Cluster MyCluster -Infrastructure -Name InfraSOFSName

I understand that the S2D cluster that provides such shares needs to be a Windows Server 2019 cluster, but can earlier clients use these shares?

Specifically: can I run a Windows Server 2016 Hyper-V failover cluster and store the VHDX files on the new infrastructure shares?

Storage Spaces Direct Volume Creation


We have implemented S2D and are looking at setting up multiple volumes. Upon creation I'm taken to a 'Specify the size of the virtual disk' page with options for tiering. Does anyone know what the best practice is here? As an example, the volume is going to be 20TB in size.
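
For comparison, the PowerShell equivalent of that wizard page (a sketch; the tier names assume the default Performance/Capacity tiers that Enable-ClusterStorageSpacesDirect creates, and the split is illustrative):

# Create a 20TB tiered CSV volume: 2TB mirror tier + 18TB parity tier
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName Volume01 -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 2TB, 18TB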



Hyper-V 2016 Cluster some VMs rebooted unexpectedly


Hello ,

We have a two-node Hyper-V 2016 cluster with Dell storage.

Suddenly, some VMs rebooted unexpectedly.

The following results were found in Event Viewer; I tried searching on Google but could not find the root cause of why they rebooted.

I also checked the Dell storage events and the hosts' hardware events in iDRAC, but nothing suspicious was found on the storage or host side.

Both hosts have an equal amount of RAM and an even distribution of running VMs, and the rebooted VMs were on different CSVs.

Any help would be more appreciated.

  • The Cluster service failed to bring clustered role My-VM1 completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered role. Event ID: 1205
  • Cluster resource 'Virtual Machine My-VM1' of type 'Virtual Machine' in clustered role My-VM1 failed.

Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it.  Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet. Event ID : 1069

  • 'Virtual Machine My-VM1' failed to start.

Failed to initiate the startup of the virtual machine: Element not found. (0x00000490). Event ID : 21502

  • 'Virtual Machine My-VM1' failed to terminate. Event ID : 21502
  • The Virtual Machine Management Service failed to start the virtual machine 'CA7C23FD-2052-45D2-A45B-B8BC535452F8': Element not found. (0x80070490).Event ID : 20108
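
If it helps correlate across both hosts, the System-log IDs above can be pulled in one pass (a sketch; adjust the time window):

# Gather the cluster events above from the last 24 hours
Get-WinEvent -FilterHashtable @{ LogName='System'; StartTime=(Get-Date).AddHours(-24); Id=1069,1205 } |
    Format-Table TimeCreated, Id, Message -Wrap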

Thanks


Thanks, Prakash. Please note: my posts are provided "AS IS" without warranty of any kind, either expressed or implied.


Move-ClusterSetVM command in Server 2019


Hi there,

I am currently setting up a lab to test out cluster sets in Server 2019 by following https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/Cluster-Sets

Right now I am stuck on the step for testing live migration between cluster-set member clusters by running the Move-ClusterSetVM command. The example command shown was:

Move-ClusterSetVM -CimSession CSMASTER -VMName CSVM1 -ClusterName CLUSTER3

But -ClusterName is not even a valid parameter. There is another parameter called -Node, but when I tried to specify a node in a different member cluster, all I got was an error saying the move encountered a terminal failure.

So, has anyone managed to get live migration working for cluster sets?

GPO policy item stops cluster services from starting


We have a 2-node multi-subnet Windows Server 2016 failover cluster running SQL Server 2016.

A GPO is applied whose policy item “Deny access to this computer from the network” has “NT AUTHORITY\Local account” and “BUILTIN\Guests” listed in the setting.

The Windows Server 2016 cluster service will not start with this policy item in place.
After removing “NT AUTHORITY\Local account” from this setting, the cluster service started successfully.

Is this expected behaviour?

Is there a modification we can make to the policy setting that will retain the setting to deny local accounts but enable cluster services to start?

Is there an option to run the cluster service on Windows Server 2016 under a domain service account instead of CLIUSR?

