MINISFORUM MS-A2 Migration Guide
Comprehensive guide for migrating from Intel NUC6i7KYK cluster to MINISFORUM MS-A2 systems with minimal downtime.
Table of Contents
- Overview
- Hardware Comparison
- Migration Strategy
- Pre-Migration Preparation
- Hardware Setup
- vSphere Configuration
- Workload Migration
- Network Configuration
- Performance Validation
- Troubleshooting
- Migration Timeline
- References
Overview
This document provides a detailed migration plan from the current Intel NUC-based vSphere cluster to a new MINISFORUM MS-A2 AMD-based cluster, focusing on maintaining service continuity and optimizing performance.
Migration Objectives
- Zero data loss: Comprehensive backup and validation procedures
- Minimal downtime: Phased migration with rolling updates
- Performance improvement: 4x compute capacity increase
- Future readiness: Support for TKG/TAS workloads
- Network enhancement: Leverage native 10G capabilities
Hardware Comparison
Current Platform: Intel NUC6i7KYK
| Component | Specification | Total (3 units) |
|---|---|---|
| CPU | Intel Core i7-6770HQ (4C/8T @ 2.6-3.5GHz) | 12C/24T |
| Memory | 64GB DDR4 each | 192GB DDR4 |
| Storage | 250GB NVMe each | 750GB total |
| Network | 1x built-in GbE + 1x USB GbE | 6x 1G ports |
| Form Factor | 8.3” x 4.6” x 1.1” | Mini desktop |
| Power | ~35-50W each | ~150W total |
Target Platform: MINISFORUM MS-A2
| Component | Specification | Total (3 units) |
|---|---|---|
| CPU | AMD Ryzen 9 7945HX (16C/32T @ 2.5-5.4GHz) | 48C/96T |
| Memory | 32GB DDR5 each (expandable to 96GB) | 96GB DDR5 |
| Storage | 1TB NVMe each + 2x M.2 slots | 3TB+ potential |
| Network | 2x SFP+ 10G + 2x 2.5G ethernet | Native 10G |
| Form Factor | 8.3” x 8.3” x 2.1” | Rackmountable |
| Power | ~60-80W typical, 129W peak | ~240W typical |
Performance Improvement Summary
- CPU Performance: 4x cores with roughly 2x per-core throughput = ~8x total
- Memory Bandwidth: DDR4-2400 → DDR5-4800 = 2x improvement
- Storage: 4x capacity, similar NVMe performance
- Network: 1G → 10G = 10x bandwidth
- Efficiency: Better performance per watt
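The headline multipliers can be sanity-checked with simple arithmetic, using the per-unit figures from the tables above:

```shell
# Core counts: 3 NUCs x 4 cores vs 3 MS-A2 units x 16 cores
nuc_cores=$((3 * 4))
msa2_cores=$((3 * 16))
core_multiplier=$((msa2_cores / nuc_cores))

# Memory transfer rate: DDR4-2400 vs DDR5-4800 (MT/s)
bw_multiplier=$((4800 / 2400))

echo "cores: ${nuc_cores} -> ${msa2_cores} (${core_multiplier}x)"
echo "memory transfer rate: ${bw_multiplier}x"
```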
Migration Strategy
Phased Approach
Phase 1: Infrastructure Preparation (Week 1)
- Install rack infrastructure
- Deploy 10G networking
- Setup first MS-A2 unit
- Validate hardware configuration
Phase 2: vSphere Integration (Week 2)
- Add MS-A2 hosts to vCenter
- Configure the mixed Intel/AMD environment (EVC cannot span CPU vendors)
- Set an AMD EVC baseline for the new hosts
- Validate cold migration between platforms (cross-vendor live vMotion is not supported)
Phase 3: Workload Migration (Weeks 3-4)
- Begin VM migration to MS-A2
- Validate application performance
- Monitor for compatibility issues
- Migrate storage-intensive workloads
Phase 4: Cluster Expansion (Weeks 5-6)
- Deploy second MS-A2 unit
- Continue workload migration
- Begin decommissioning first NUC
Phase 5: Completion (Weeks 7-8)
- Deploy third MS-A2 unit
- Complete workload migration
- Remove all NUCs from cluster
- Optimize new configuration
Pre-Migration Preparation
Documentation and Assessment
Current Environment Inventory
```powershell
# vSphere cluster assessment
Get-Cluster | Get-VM | Select Name, NumCpu, MemoryGB, PowerState
Get-Cluster | Get-VMHost | Select Name, Version, Model

# Network configuration
Get-VMHost | Get-VirtualSwitch | ft
Get-VMHost | Get-VMHostNetworkAdapter | ft

# Storage configuration
Get-Datastore | Select Name, CapacityGB, FreeSpaceGB
Get-VM | Get-HardDisk | ft
```
Compatibility Validation
- EVC Mode: Determine compatible CPU feature set
- OS Compatibility: Validate guest OS support on AMD
- Application testing: Test critical applications on AMD
- Driver validation: Ensure hardware driver compatibility
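One way to capture the current CPU feature baseline before mixing vendors is to query each host's processor type and highest supported EVC mode (a PowerCLI sketch using standard `VMHost` object properties):

```powershell
# List CPU model and the highest EVC mode each host supports
Get-VMHost | Select-Object Name, ProcessorType, MaxEVCMode
```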
Backup Procedures
Complete Environment Backup
- vCenter configuration: Full vCenter backup
- VM backups: Individual VM exports or snapshots
- Host configurations: ESXi configuration backups
- Network settings: Switch and router configurations
- Storage: NAS snapshot before migration
Recovery Testing
- Test restore procedures on isolated environment
- Validate backup integrity
- Document recovery time objectives
Hardware Setup
MS-A2 Physical Installation
Rack Mounting
- Install rack shelf for MS-A2 unit
- Secure unit with appropriate mounting hardware
- Verify clearance for airflow and cable access
- Label unit for identification
Power Configuration
Power requirements per unit:
- Idle: 23-26W
- Typical: 60-80W
- Peak: 129W
- Power connector: Standard IEC C13

Total cluster power:
- 3x MS-A2: ~240W typical, ~400W peak
- Plus network gear: ~40W
- Total: ~280W typical, ~440W peak
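The cluster totals follow from simple addition; a sketch using the per-unit figures above (the ~440W planning figure includes a little headroom over the raw sum):

```shell
units=3
typical_per_unit=80      # W, upper end of the typical range
peak_per_unit=129        # W
network_gear=40          # W, switches and other gear

typical_total=$((units * typical_per_unit + network_gear))
peak_total=$((units * peak_per_unit + network_gear))

echo "typical: ${typical_total}W, peak: ${peak_total}W"
```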
Network Configuration
10G Connectivity Setup
MS-A2 network interfaces per unit:
- 2x SFP+ 10G ports (primary networking)
- 2x 2.5G ethernet ports (management/backup)

Connection plan:
- Port 1 (SFP+): Primary 10G to USW-Aggregation
- Port 2 (SFP+): Spare/future use
- Port 3 (2.5G): Management network
- Port 4 (2.5G): Backup/OOB management
VLAN Configuration
VLAN assignments for MS-A2 hosts:
- VLAN 10: Management (2.5G interface)
- VLAN 20: vMotion (10G interface)
- VLAN 30: Storage (10G interface)
- VLAN 100: VM Workload (10G interface)
- VLAN 110: TKG Frontend (10G interface)
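On a standard vSwitch, these VLAN assignments are applied per port group; a sketch for the vMotion VLAN (the port group, vSwitch, interface names, and addresses are illustrative):

```shell
# Create a port group for vMotion and tag it with VLAN 20
esxcli network vswitch standard portgroup add -p vMotion -v vSwitch1
esxcli network vswitch standard portgroup set -p vMotion --vlan-id 20

# Attach a vmkernel interface and give it a static address
esxcli network ip interface add -i vmk1 -p vMotion
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.20.11 -N 255.255.255.0
```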
Storage Configuration
Local Storage Setup
MS-A2 storage configuration:
- Slot 1: 1TB NVMe (included) - ESXi boot + local datastore
- Slot 2: Available for expansion
- Slot 3: Available for vSAN or additional storage

ESXi partitioning:
- ESXi boot: 64GB
- Scratch partition: 4GB
- Local datastore: ~900GB
Shared Storage Integration
- NAS connectivity: 10G connection to Synology
- iSCSI configuration: Maintain existing LUN structure
- NFS shares: Leverage improved 10G performance
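Attaching the existing Synology exports over the 10G path might look like this (a sketch; the addresses, adapter name, share path, and volume label are placeholders):

```shell
# Mount an NFS export from the NAS
esxcli storage nfs add --host 192.168.30.5 --share /volume1/vmware --volume-name nas-nfs-01

# Enable the software iSCSI adapter and point it at the NAS
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.30.5:3260
```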
vSphere Configuration
ESXi Installation and Setup
ESXi 8.0 Installation
- Boot from the USB installer
- Perform a custom installation for MS-A2 hardware
- Configure the management network on a 2.5G interface
- Enable SSH and DCUI access
Host Configuration
```shell
# Configure hostname and DNS
esxcli system hostname set --host="esxi-ms-a2-01"
esxcli network ip dns search add --domain="markalston.net"

# Configure NTP
esxcli system ntp set --server="pool.ntp.org"
esxcli system ntp set --enabled=true

# Configure advanced settings for AMD optimization
esxcli system settings advanced set -o /Power/CpuPolicy -s "Balanced"
esxcli system settings advanced set -o /Numa/CoreCapRatioThreshold -i 50
```
Cluster Integration
EVC Mode Configuration
EVC baselines are vendor-specific: no EVC mode spans both Intel and AMD, and live vMotion between Intel and AMD hosts is not supported regardless of EVC settings. Plan accordingly:
- Set an AMD EVC baseline (e.g. "AMD Zen 2 Generation" or newer) once the cluster contains only MS-A2 hosts
- Use cold migration (VM powered off) to move workloads from the Intel NUCs to the AMD hosts
- Test representative workloads after each move
Network Configuration
```powershell
# Create distributed virtual switches
New-VDSwitch -Name "dvSwitch-10G" -Location (Get-Datacenter)

# Configure port groups for VLANs
New-VDPortgroup -VDSwitch "dvSwitch-10G" -Name "Management" -VlanId 10
New-VDPortgroup -VDSwitch "dvSwitch-10G" -Name "vMotion" -VlanId 20
New-VDPortgroup -VDSwitch "dvSwitch-10G" -Name "Storage" -VlanId 30
New-VDPortgroup -VDSwitch "dvSwitch-10G" -Name "Workload" -VlanId 100

# Add MS-A2 hosts to distributed switch
$VMHost = Get-VMHost "esxi-ms-a2-01"
Add-VDSwitchVMHost -VDSwitch "dvSwitch-10G" -VMHost $VMHost
```
Workload Migration
Migration Planning
Workload Prioritization
- Non-critical VMs: Test migration first
- Development environments: Early migration candidates
- Production services: Migrate during maintenance windows
- Storage-intensive workloads: Benefit most from 10G
Migration Methods
```powershell
# vMotion (running VMs; supported only between hosts with the same CPU vendor)
Move-VM -VM "test-vm" -Destination "esxi-ms-a2-01"

# Cold migration (VM powered off; required for Intel-to-AMD moves; -RunAsync queues the task)
Move-VM -VM "test-vm" -Destination "esxi-ms-a2-01" -RunAsync

# Storage vMotion (change datastore)
Move-VM -VM "test-vm" -Datastore "ms-a2-local-01"
```
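Because live vMotion cannot cross the Intel/AMD boundary, an Intel-to-AMD move can be scripted as shutdown, relocate, restart (a PowerCLI sketch; the VM and host names are illustrative):

```powershell
# Gracefully shut down, wait for power-off, move, then restart
$vm = Get-VM "test-vm"
Shutdown-VMGuest -VM $vm -Confirm:$false
while ((Get-VM -Name $vm.Name).PowerState -ne "PoweredOff") { Start-Sleep -Seconds 5 }
Move-VM -VM $vm -Destination (Get-VMHost "esxi-ms-a2-01")
Start-VM -VM $vm
```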
Performance Validation
Pre-Migration Baseline
- CPU performance testing
- Memory bandwidth testing
- Storage IOPS measurement
- Network throughput validation
Post-Migration Validation
- Application performance comparison
- Resource utilization monitoring
- User experience validation
- Automated testing where possible
Rollback Procedures
- VM snapshots before migration
- Cold-migrate VMs back to the original hosts if issues arise (cross-vendor live vMotion is not supported)
- Documented rollback steps for each workload
Network Configuration
10G Network Optimization
ESXi Network Tuning
```shell
# Enable jumbo frames for storage traffic (set MTU on both the vSwitch and the vmkernel interface)
esxcli network vswitch standard set -v vSwitch0 -m 9000
esxcli network ip interface set -i vmk2 -m 9000

# Increase TCP/IP heap limits (TcpipHeapMax is in MB; 1536 is the maximum)
esxcli system settings advanced set -o /Net/TcpipHeapMax -i 1536
esxcli system settings advanced set -o /Net/TcpipHeapSize -i 32

# Configure SR-IOV if supported (module name depends on the NIC driver, e.g. i40en for Intel X710)
esxcli system module parameters set -m i40en -p "max_vfs=8"
```
VLAN Configuration Validation
```shell
# Test connectivity between VLANs
vmkping -I vmk1 192.168.20.10   # vMotion network
vmkping -I vmk2 192.168.30.10   # Storage network

# Validate routing between networks
esxcli network ip route ipv4 list
```
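With jumbo frames enabled, it is worth verifying the end-to-end MTU with a don't-fragment ping sized just under 9000 bytes (8972 bytes of payload plus IP/ICMP headers; the address is the storage example above):

```shell
# -d sets the don't-fragment bit; -s is the payload size
vmkping -I vmk2 -d -s 8972 192.168.30.10
```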
NSX-T Preparation
Network Requirements for NSX-T
- Management network: Existing VLAN 10
- Overlay transport: VLAN 20 (vMotion network)
- Edge uplink: VLAN 100 (workload network)
- TEP pool: Dedicated IP range for tunnel endpoints
Edge VM Placement Planning
- Edge VMs: Deploy on MS-A2 hosts for performance
- Resource allocation: 4 vCPU, 8GB RAM minimum per edge
- Placement constraints: Anti-affinity rules for HA
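The anti-affinity constraint for the edge pair can be expressed as a DRS rule (a PowerCLI sketch; the cluster and VM names are placeholders):

```powershell
# Keep the two edge VMs on different hosts
New-DrsRule -Cluster (Get-Cluster "homelab") -Name "edge-anti-affinity" `
  -KeepTogether $false -VM (Get-VM "edge-01", "edge-02")
```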
Performance Validation
Benchmark Testing
CPU Performance
- Single-threaded performance
- Multi-threaded scaling
- Virtual machine density testing
- Comparison with Intel baseline
Memory Performance
- Memory bandwidth testing
- Memory latency measurement
- NUMA topology validation
- Memory overcommit testing
Storage Performance
- Local NVMe testing
- NAS connectivity (10G)
- IOPS measurement
- Latency characterization
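For the local NVMe numbers, `fio` gives repeatable IOPS and latency figures (a sketch, assuming fio is installed on a test VM; the file path and sizes are illustrative):

```shell
# 4K random read, 60s, queue depth 32 - reports IOPS and completion latency
fio --name=randread --filename=/tmp/fio.test --size=4G \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --direct=1 --ioengine=libaio --runtime=60 --time_based --group_reporting
```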
Network Performance
```shell
# 10G throughput testing
iperf3 -s                       # Server mode
iperf3 -c <target> -t 60 -P 4   # Client test

# Expected results:
#   10G DAC: 9.4+ Gbps
#   Storage: 8+ Gbps sustained
#   VM-to-VM: 9+ Gbps local
```
Production Validation
Application Testing
- Web applications: Response time validation
- Database workloads: Query performance comparison
- Development tools: Build time improvements
- Container workloads: Startup and scaling performance
Monitoring and Alerting
- vCenter performance monitoring
- ESXi host resource utilization
- Network bandwidth utilization
- Storage performance metrics
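Host-level utilization can be pulled from vCenter for trending (a PowerCLI sketch; the stat names are standard vCenter counters):

```powershell
# Last day of CPU and memory usage per host, 5-minute samples
Get-Stat -Entity (Get-VMHost) -Stat cpu.usage.average, mem.usage.average `
  -Start (Get-Date).AddDays(-1) -IntervalMins 5
```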
Troubleshooting
Common Migration Issues
EVC Compatibility Problems
- Symptoms: vMotion failures between Intel and AMD hosts
- Cause: live vMotion across CPU vendors is not supported; no EVC mode bridges Intel and AMD
- Solution: cold-migrate (power off, move, power on) or keep each vendor in its own cluster
- Validation: test with representative workloads
Network Connectivity Issues
- Symptoms: Loss of network connectivity after migration
- Check: VLAN configuration, uplink status
- Validation: Layer 2/3 connectivity testing
Performance Degradation
- Symptoms: Slower performance on AMD hosts
- Check: CPU scheduling, NUMA affinity
- Tuning: VM hardware version, resource allocation
Performance Optimization
AMD-Specific Tuning
```shell
# CPU scheduler optimization
esxcli system settings advanced set -o /Cpu/PreemptionTimer -i 100000

# Memory optimization
esxcli system settings advanced set -o /Mem/ShareScanTime -i 60

# Power management
esxcli system settings advanced set -o /Power/CpuPolicy -s "High Performance"
```
Migration Timeline
Detailed Schedule
Week 1: Infrastructure Preparation
- Day 1-2: Rack installation and cable management
- Day 3-4: 10G network deployment and testing
- Day 5-7: First MS-A2 setup and ESXi installation
Week 2: Cluster Integration
- Day 1-2: Add MS-A2 to vSphere cluster
- Day 3-4: Configure networking and storage
- Day 5-7: Validation testing and performance baseline
Week 3-4: Initial Migration
- Week 3: Migrate development and test VMs
- Week 4: Migrate non-critical production workloads
Week 5-6: Cluster Expansion
- Week 5: Deploy second MS-A2, continue migration
- Week 6: Deploy third MS-A2, accelerate migration
Week 7-8: Completion
- Week 7: Migrate remaining workloads
- Week 8: Decommission NUCs, optimize configuration
Risk Mitigation
- Parallel clusters: Maintain NUCs until migration complete
- Rollback capability: Keep original environment functional
- Extended timeline: Allow flexibility for issues
- Support resources: Engage vendor support if needed
References
Internal Documentation
Vendor Resources
Performance Testing
Last Updated: 2025-01-11 Maintained by: Mark Alston