vMotion Testing Guide
Overview
This guide documents the process for testing VM migration across ESXi clusters backed by shared storage. The tests cold-migrate VMs between hosts and verify that no data loss or corruption occurs, which validates the shared-storage prerequisite for vMotion.
Environment Details
Intel NUC Cluster:
- Hosts: esxi-nuc-01 (192.168.10.8), esxi-nuc-02 (192.168.10.9), esxi-nuc-03 (192.168.10.10)
- Shared Storage: Synology DS918+ iSCSI LUNs
- NUC-High-Performance: 4TB (UUID: 6885a5ae-a0b6b20a-2d9e-54b20311bba8)
- Development-Testing: 1TB (UUID: 6885a5cb-1342a686-933c-54b20311bba8)
- Archive-Storage: 900GB (UUID: 6885a5d9-b4148fde-cac7-54b20311bba8)
MS-A2 Cluster:
- Hosts: esxi-ms-a2-01 (192.168.10.12), esxi-ms-a2-02 (192.168.10.13)
- Shared Storage: Tanzu datastore 1.3TB (UUID: 68a85be8-1a3ecb65-f2dd-5847ca7fd869)
Prerequisites
Network Configuration
- SSH key authentication configured: ~/.ssh/esxi_homelab
- VMkernel interfaces configured for vMotion (typically vmk2 on VLAN 20)
- Network connectivity between all cluster hosts
Storage Requirements
- Shared storage accessible from all hosts in cluster
- Sufficient free space for test VMs (minimum 2GB per test VM)
- VMFS6 datastores properly mounted
Manual Testing Steps
Phase 1: Verify Shared Storage Access
Intel NUC Cluster
# Check shared storage accessibility on all NUC hosts
for host in 192.168.10.8 192.168.10.9 192.168.10.10; do
echo "=== Testing $host ==="
ssh -i ~/.ssh/esxi_homelab root@$host "esxcli storage filesystem list | grep vmfs | grep -E '(NUC-High-Performance|Development-Testing|Archive-Storage)'"
ssh -i ~/.ssh/esxi_homelab root@$host "ls -la /vmfs/volumes/ | grep -E '(NUC-High-Performance|Development-Testing|Archive-Storage)'"
done
MS-A2 Cluster
# Check shared storage accessibility on MS-A2 hosts
for host in 192.168.10.12 192.168.10.13; do
echo "=== Testing $host ==="
ssh -i ~/.ssh/esxi_homelab root@$host "esxcli storage filesystem list | grep vmfs | grep Tanzu"
ssh -i ~/.ssh/esxi_homelab root@$host "ls -la /vmfs/volumes/ | grep Tanzu"
done
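Beyond matching the labels, it is worth confirming that each label resolves to the same VMFS UUID on every host: identical labels pointing at different UUIDs would mean the hosts are not actually sharing a LUN. A minimal sketch (host list and key path from this guide; `uuids_match` is a hypothetical helper, not VMware tooling):

```shell
# Return success only if every argument is the same UUID string
uuids_match() {
  first=$1
  for u in "$@"; do
    [ "$u" = "$first" ] || return 1
  done
  return 0
}

# Collect the UUID each host resolves for a datastore label; on ESXi,
# /vmfs/volumes/<label> is a symlink to the UUID directory:
# uuids=""
# for host in 192.168.10.8 192.168.10.9 192.168.10.10; do
#   uuids="$uuids $(ssh -i ~/.ssh/esxi_homelab root@$host \
#     'readlink /vmfs/volumes/Development-Testing')"
# done
# uuids_match $uuids && echo "shared" || echo "UUID mismatch: $uuids"
```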
Phase 2: Create Test VM
Create VM Directory and Virtual Disk
# For NUC cluster (using Development-Testing datastore)
SOURCE_HOST="192.168.10.8"
DATASTORE="/vmfs/volumes/Development-Testing"
VM_NAME="vMotionTest"
ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST "mkdir -p $DATASTORE/$VM_NAME"
ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST "vmkfstools -c 1G $DATASTORE/$VM_NAME/$VM_NAME.vmdk"
Create VM Configuration File
# The outer double quotes let the local shell expand $DATASTORE and $VM_NAME
# before ssh runs; the quoted 'EOF' stops the ESXi shell from re-expanding them
ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST "cat > $DATASTORE/$VM_NAME/$VM_NAME.vmx << 'EOF'
#!/usr/bin/vmware
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "11"
vmci0.present = "TRUE"
displayName = "$VM_NAME"
guestOS = "other"
memSize = "512"
numvcpus = "1"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "$VM_NAME.vmdk"
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-raw"
ide1:0.startConnected = "FALSE"
tools.syncTime = "FALSE"
uuid.action = "create"
EOF"
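Before registering, a quick sanity check that the heredoc was written with all expected keys can catch a truncated or mis-expanded .vmx. `vmx_has_key` below is a hypothetical helper, not part of any VMware tooling:

```shell
# Succeed if the key named in $1 is assigned somewhere in the vmx text on stdin
vmx_has_key() {
  grep -q "^$1[[:space:]]*="
}

# usage against the freshly written file:
# for key in displayName guestOS memSize scsi0:0.fileName; do
#   ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST \
#     "cat $DATASTORE/$VM_NAME/$VM_NAME.vmx" | vmx_has_key "$key" \
#     || echo "missing key: $key"
# done
```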
Phase 3: Register and Start VM
# Register VM on source host
VMID=$(ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST "vim-cmd solo/registervm $DATASTORE/$VM_NAME/$VM_NAME.vmx")
echo "VM registered with ID: $VMID"
# Power on VM
ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST "vim-cmd vmsvc/power.on $VMID"
# Verify VM is running
ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST "vim-cmd vmsvc/power.getstate $VMID"
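power.on returns before the VM is fully up, so a short polling loop avoids racing the next step. A sketch, assuming `vim-cmd vmsvc/power.getstate` output containing the string "Powered on"; `wait_for_state` is a hypothetical helper:

```shell
# Poll a state command until its output contains the wanted string (max ~20 s)
wait_for_state() {
  want=$1; shift
  for i in 1 2 3 4 5 6 7 8 9 10; do
    if "$@" | grep -q "$want"; then
      echo "reached state: $want"
      return 0
    fi
    sleep 2
  done
  echo "timed out waiting for: $want" >&2
  return 1
}

# usage:
# wait_for_state "Powered on" \
#   ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST "vim-cmd vmsvc/power.getstate $VMID"
```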
Phase 4: Test VM Migration
Verify Files Accessible from Destination Host
DEST_HOST="192.168.10.9"
ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "ls -la $DATASTORE/$VM_NAME/"
Perform Migration (Cold Migration Test)
# Power off VM on source host
ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST "vim-cmd vmsvc/power.off $VMID"
# Unregister from source
ssh -i ~/.ssh/esxi_homelab root@$SOURCE_HOST "vim-cmd vmsvc/unregister $VMID"
# Register on destination
NEW_VMID=$(ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "vim-cmd solo/registervm $DATASTORE/$VM_NAME/$VM_NAME.vmx")
# Power on at destination
ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "vim-cmd vmsvc/power.on $NEW_VMID"
# Verify VM is running on new host
ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "vim-cmd vmsvc/power.getstate $NEW_VMID"
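The four steps above can be wrapped into one function so repeated passes (e.g. nuc-01 → nuc-02 → nuc-03) stay consistent. A sketch; `cold_migrate` is a hypothetical helper, and $SSH can be pointed at a stub command for a dry run:

```shell
SSH=${SSH:-"ssh -i $HOME/.ssh/esxi_homelab"}

# Cold-migrate: power off and unregister on the source, then register and
# power on at the destination. Prints the new VM ID on stdout; output from
# the intermediate vim-cmd calls is sent to stderr.
cold_migrate() {
  src=$1; dst=$2; vmid=$3; vmx=$4
  $SSH root@"$src" "vim-cmd vmsvc/power.off $vmid" >&2
  $SSH root@"$src" "vim-cmd vmsvc/unregister $vmid" >&2
  newid=$($SSH root@"$dst" "vim-cmd solo/registervm $vmx")
  $SSH root@"$dst" "vim-cmd vmsvc/power.on $newid" >&2
  echo "$newid"
}

# usage (values from this guide):
# NEW_VMID=$(cold_migrate 192.168.10.8 192.168.10.9 "$VMID" \
#   /vmfs/volumes/Development-Testing/vMotionTest/vMotionTest.vmx)
```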
Phase 5: Verify Data Integrity
# Check VM files are intact
ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "ls -la $DATASTORE/$VM_NAME/ | grep -E '\.(vmx|vmdk|log)'"
# Verify VM configuration
ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "grep -E '(displayName|memSize|numvcpus)' $DATASTORE/$VM_NAME/$VM_NAME.vmx"
# Check VM runtime info
ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "vim-cmd vmsvc/get.summary $NEW_VMID"
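A stronger integrity check than listing files is hashing the flat extent from both hosts and comparing (the VM must be powered off so the disk is quiescent). A sketch assuming sha1sum is available in the ESXi shell (substitute md5sum if not); both helper names are hypothetical:

```shell
# Print only the checksum of a file on a remote ESXi host
remote_hash() {
  ssh -i ~/.ssh/esxi_homelab root@"$1" "sha1sum '$2'" | awk '{print $1}'
}

# Succeed only when both hashes are non-empty and identical
hashes_match() {
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# usage:
# before=$(remote_hash "$SOURCE_HOST" "$DATASTORE/$VM_NAME/$VM_NAME-flat.vmdk")
# after=$(remote_hash "$DEST_HOST"   "$DATASTORE/$VM_NAME/$VM_NAME-flat.vmdk")
# hashes_match "$before" "$after" && echo "integrity OK" || echo "HASH MISMATCH"
```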
Phase 6: Cleanup
# Power off and unregister VM
ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "vim-cmd vmsvc/power.off $NEW_VMID"
ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "vim-cmd vmsvc/unregister $NEW_VMID"
# Remove VM files
ssh -i ~/.ssh/esxi_homelab root@$DEST_HOST "rm -rf $DATASTORE/$VM_NAME"
Test Results Log
Successful Test - August 23, 2025
Intel NUC Cluster
- Test Path: esxi-nuc-01 → esxi-nuc-02 → esxi-nuc-03
- Result: ✅ SUCCESS
- Storage Used: Development-Testing (1TB iSCSI LUN)
- Findings:
- All three iSCSI LUNs accessible from all NUC hosts
- VM files (.vmx, .vmdk, .vswp, logs) properly shared
- No file corruption or access issues
- Cold migration successful between all host combinations
MS-A2 Cluster
- Test Path: esxi-ms-a2-01 → esxi-ms-a2-02
- Result: ✅ SUCCESS
- Storage Used: Tanzu (1.3TB shared datastore)
- Findings:
- Tanzu datastore accessible from both MS-A2 hosts
- VM files properly shared and accessible
- No file corruption or access issues
- Cold migration successful between hosts
Common Issues and Troubleshooting
VM Power-On Failures
Issue: UnsupportedGuest error during power-on
Power on failed: (vim.fault.UnsupportedGuest)
Guest operating system 'ubuntu64Guest' is not supported
Solution: Use compatible guest OS and hardware version
# Fix guest OS
sed -i 's/guestOS = "ubuntu64Guest"/guestOS = "other"/' /path/to/vm.vmx
# Fix hardware version (use version 11 for older ESXi)
sed -i 's/virtualHW.version = "19"/virtualHW.version = "11"/' /path/to/vm.vmx
Issue: PCIe slot errors with network adapters
No PCIe slot available for Ethernet0
Solution: Remove network adapter from test VM
sed -i '/ethernet0/d' /path/to/vm.vmx
Storage Access Issues
Issue: Shared storage not visible on destination host
# Verify iSCSI sessions
esxcli iscsi session list
# Rescan storage adapters
esxcli storage core adapter rescan --all
# Check multipathing
esxcli storage core path list
Issue: Permission denied accessing VM files
# Check file ownership and permissions
ls -la /vmfs/volumes/datastore/vm-folder/
# Fix permissions if needed (rarely required)
chmod 644 /vmfs/volumes/datastore/vm-folder/*.vmx
Network Connectivity Issues
Issue: SSH connection failures
# Test basic connectivity
ping 192.168.10.8
# Verify the SSH service on ESXi (only possible from a working session;
# if SSH is down, enable it via the Host Client or DCUI instead)
ssh -i ~/.ssh/esxi_homelab root@192.168.10.8 '/etc/init.d/SSH status'
# Check firewall rules
ssh -i ~/.ssh/esxi_homelab root@192.168.10.8 'esxcli network firewall get'
Performance Considerations
Migration Speed Factors
- Network Bandwidth: 1GbE limits migration speed to ~100MB/s
- Storage Latency: iSCSI over 1GbE adds ~1-2ms latency
- Memory Size: Larger VMs take longer to migrate (not applicable for cold migration)
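As a worked example of the bandwidth figure: shared-storage cold migration copies no disk data (it is only a re-registration), but when a disk does have to move between datastores, a rough copy time is size divided by throughput. A sketch using integer arithmetic; `est_seconds` is a hypothetical helper:

```shell
# Rough seconds to copy a disk of $1 GB at ~100 MB/s over 1GbE
est_seconds() {
  echo $(( $1 * 1024 / 100 ))
}

est_seconds 40   # a 40 GB disk takes roughly 409 s, just under 7 minutes
```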
Best Practices
- Use dedicated vMotion network (VLAN 20) when possible
- Schedule migrations during low-activity periods
- Monitor storage I/O during migrations
- Verify cluster resources before migration
Automation Script
See scripts/test-vmotion-functionality.sh for an automated testing script that:
- Validates prerequisites
- Creates test VMs on shared storage
- Performs migration tests
- Verifies data integrity
- Cleans up test artifacts
- Generates detailed reports
Related Documentation
- ESXi iSCSI Configuration Guide
- Synology Storage Configuration
- vSAN 2-Node Deployment Guide
- Network VLAN Configuration
Document Version: 1.0
Last Updated: August 23, 2025
Tested Environment: ESXi 8.0.3, Intel NUC6i7KYK, MINISFORUM MS-A2