MS-A2 Network Design and Integration
Overview
Network architecture for integrating MINISFORUM MS-A2 systems with native 10G connectivity into existing 1G Ubiquiti infrastructure.
Current Network Topology
Existing Infrastructure
Internet
│
┌───▼────┐
│Arris   │ S34 Cable Modem
│ Modem  │
└───┬────┘
│
┌───▼────┐
│UXG-Lite│ Gateway (192.168.10.1)
│Gateway │
└───┬────┘
│
┌───▼────┐
│Garage │ US-8 Switch
│Switch │ Port 2 → Office Uplink
└───┬────┘
│ 1G Trunk
┌───▼────┐
│Office │ US-8 Switch (Current NUC Switch)
│Switch │ Ports 1-6: Intel NUCs
└────────┘ Port 8: Uplink to Garage
VLAN Configuration
| VLAN | Purpose | Network | Interfaces |
|---|---|---|---|
| 10 | Management | 192.168.10.0/24 | Untagged/Native |
| 20 | vMotion | 192.168.20.0/24 | Tagged |
| 30 | vSAN | 192.168.30.0/24 | Tagged |
| 40 | NSX-TEP | 192.168.40.0/24 | Tagged |
| 50 | NSX-Edge-Uplink | 192.168.50.0/24 | Tagged |
| 100 | TKG-Management | 192.168.100.0/24 | Tagged |
| 110 | TKG-Workload | 192.168.110.0/24 | Tagged |
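The addressing convention in the table is easy to verify: each VLAN's /24 uses the VLAN ID as its third octet. A quick shell sanity check of that convention:

```shell
# Check that every VLAN's subnet uses the VLAN ID as its third octet,
# which keeps addresses readable at a glance.
mismatches=0
for pair in 10:192.168.10.0/24 20:192.168.20.0/24 30:192.168.30.0/24 \
            40:192.168.40.0/24 50:192.168.50.0/24 \
            100:192.168.100.0/24 110:192.168.110.0/24; do
  vid=${pair%%:*}
  octet=$(echo "${pair#*:}" | cut -d. -f3)
  [ "$octet" = "$vid" ] || { echo "mismatch: $pair"; mismatches=$((mismatches+1)); }
done
echo "$mismatches mismatches"
```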
MS-A2 Network Interfaces
Physical Interface Layout
MS-A2 Rear Panel:
┌─────────────────────────────────────┐
│ [SFP+] [SFP+] [2.5G] [2.5G] [HDMI] │
│ 1 2 3 4 │
└─────────────────────────────────────┘
Interface Mapping:
- Port 1 (SFP+): vmnic2 - Storage/vMotion (10G)
- Port 2 (SFP+): vmnic3 - Storage/vMotion (10G)
- Port 3 (2.5G): vmnic0 - Management/VM Traffic
- Port 4 (2.5G): vmnic1 - Backup/Additional VM Traffic
Display outputs are provided by the HDMI port as well as the two USB Type-C ports, which can run in DisplayPort Alt Mode. Note that those Type-C ports are not USB4; they are USB 3.2 Gen 2 10Gbps ports.
For networking, there are two SFP+ ports via an Intel X710 NIC, plus two 2.5GbE ports. The 2.5GbE pair is a bit strange: one is an Intel i226-V and the other is a Realtek RTL8125, which means you need three NIC drivers from two vendors to bring up all of the rear wired networking.
On the left side there are two more USB 3.2 ports, one 10Gbps and one 5Gbps, and they are not labeled well. On the top rear are the low-profile expansion slot cutout and a vent for the CPU and memory.
ESXi Network Configuration Strategy
Option 1: Current Infrastructure (1G)
For initial deployment with existing US-8 switches:
vSwitch0 (Management):
Uplinks: vmnic0 (2.5G)
Port Groups:
- Management Network (VLAN 10, Untagged)
- VM Network (VLAN 10)
vSwitch1 (High Performance):
Uplinks: vmnic1 (2.5G)
Port Groups:
- vMotion-PG (VLAN 20)
- Storage-PG (VLAN 30)
- NSX-TEP-PG (VLAN 40)
10G Interfaces (vmnic2, vmnic3):
Status: Unused until 10G switch deployment
Future: Dedicated storage and vMotion
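The Option 1 layout can be sketched with esxcli on the host shell. This is a config sketch, not a tested runbook; it assumes the installer already created vSwitch0 with vmnic0 attached, and uses the port-group names and VLAN IDs from the plan above.

```shell
# Option 1 sketch: build vSwitch1 on vmnic1 and tag the three port groups.
# Run in the ESXi host shell; adjust names for your environment.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
for pg in vMotion-PG:20 Storage-PG:30 NSX-TEP-PG:40; do
  esxcli network vswitch standard portgroup add --portgroup-name="${pg%%:*}" --vswitch-name=vSwitch1
  esxcli network vswitch standard portgroup set --portgroup-name="${pg%%:*}" --vlan-id="${pg#*:}"
done
```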
Option 2: 10G Switch Integration (Future)
With USW-Aggregation deployment:
vSwitch0 (Management):
Uplinks: vmnic0 (2.5G)
Port Groups:
- Management Network (VLAN 10, Untagged)
Distributed Switch (10G):
Uplinks: vmnic2, vmnic3 (10G SFP+)
Port Groups:
- vMotion-PG (VLAN 20)
- vSAN-PG (VLAN 30)
- NSX-TEP-PG (VLAN 40)
- NSX-Edge-Uplink-PG (VLAN 50)
- TKG-Management-PG (VLAN 100)
- TKG-Workload-PG (VLAN 110)
Redundancy:
Management: vmnic1 (2.5G) as backup
Storage: vmnic2 + vmnic3 in active/active
Network Migration Phases
Phase 1: Initial Deployment (Current State)
Connectivity: 2.5G NICs into the existing US-8 switch (links negotiate to 1G)
MS-A2 Connectivity:
┌─────────┐     1G      ┌─────────┐
│  MS-A2  │─────────────│ Office  │
│ vmnic0  │             │  US-8   │
└─────────┘             │Ports 5-7│
                        └─────────┘
Configuration:
- Single 2.5G NIC connection per MS-A2, linked at 1G
- All traffic through vmnic0
- VLANs tagged on switch port
- 10G interfaces unused
Switch Port Configuration:
Office US-8 Ports 5-7 (MS-A2 hosts):
Profile: ESXi-Host-Trunk
Native VLAN: 10 (Management)
Tagged VLANs: 20,30,40,50,100,110
Speed: Auto (the US-8 ports are gigabit, so links negotiate to 1G)
Phase 2: 10G Switch Deployment
Target Architecture:
Internet
│
┌───▼────┐
│UXG-Lite│ Gateway
└───┬────┘
│
┌───▼────┐
│Garage │ US-8 Switch
│Switch │
└───┬────┘
│ 10G SFP+ Trunk
┌───▼────┐
│USW- │ 10G Aggregation Switch
│Aggrega-│ SFP+ 1: Garage uplink
│tion │ SFP+ 2-4: MS-A2 hosts
└───┬────┘ SFP+ 5-8: Future expansion
│
┌───▼────┐
│Office │ US-8 Switch (NUC only)
│US-8 │ 1G uplink to USW-Aggregation
└────────┘
Cabling Requirements:
10G Connections:
USW-Aggregation ↔ Garage Switch: 10G SFP+ DAC
USW-Aggregation ↔ MS-A2 #1: 10G SFP+ DAC
USW-Aggregation ↔ MS-A2 #2: 10G SFP+ DAC
USW-Aggregation ↔ MS-A2 #3: 10G SFP+ DAC
DAC Cable Specifications:
Length: 1-3 meters
Type: SFP+ to SFP+ Direct Attach Copper
Speed: 10Gbps
Estimated Cost: $20-30 each
IP Address Allocation
Management Network (VLAN 10)
Current Assignments:
192.168.10.1 - UXG-Lite Gateway
192.168.10.7 - Mac Pro ESXi
192.168.10.8 - esxi-nuc-01
192.168.10.9 - esxi-nuc-02
192.168.10.10 - esxi-nuc-03
192.168.10.11 - vCenter Server
MS-A2 Assignments:
192.168.10.12 - esxi-ms-a2-01
192.168.10.13 - esxi-ms-a2-02
192.168.10.14 - esxi-ms-a2-03
High-Performance Networks
vMotion Network (VLAN 20):
192.168.20.12 - esxi-ms-a2-01 vmk1
192.168.20.13 - esxi-ms-a2-02 vmk1
192.168.20.14 - esxi-ms-a2-03 vmk1
vSAN Network (VLAN 30):
192.168.30.12 - esxi-ms-a2-01 vmk2
192.168.30.13 - esxi-ms-a2-02 vmk2
192.168.30.14 - esxi-ms-a2-03 vmk2
NSX TEP Network (VLAN 40):
192.168.40.12 - esxi-ms-a2-01 vmk3
192.168.40.13 - esxi-ms-a2-02 vmk3
192.168.40.14 - esxi-ms-a2-03 vmk3
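Because each host keeps the same final octet (12-14) across every VLAN, the vmk addresses above can be derived mechanically. A sketch with a hypothetical `vmk_ip` helper (first argument selects the VLAN/third octet, second the host octet):

```shell
# Derive a vmk address from the VLAN's third octet and the host's octet.
vmk_ip() { echo "192.168.$1.$2"; }

vmk_ip 20 12   # esxi-ms-a2-01 vmk1 (vMotion)
vmk_ip 30 13   # esxi-ms-a2-02 vmk2 (vSAN)
vmk_ip 40 14   # esxi-ms-a2-03 vmk3 (NSX-TEP)
```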
Performance Considerations
Network Bandwidth Analysis
Current Infrastructure (1G):
Intel NUCs:
Management: 1G built-in + 1G USB adapter
Total per host: 2G aggregate
Cluster total: 6G bandwidth
MS-A2 on 1G Switch:
Management: 2.5G (limited to 1G by switch)
Performance: Underutilized NICs
Bottleneck: 1G switch ports and the 1G inter-switch uplink
10G Infrastructure:
MS-A2 with 10G Switch:
Management: 2.5G dedicated
Storage/vMotion: 2x 10G = 20G aggregate
Total per host: 22.5G potential
Cluster total: 67.5G bandwidth
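The aggregates above are simple link-capacity sums for three hosts (a sketch of potential capacity, not measured throughput); in shell integer math, expressed in Mbps:

```shell
# Per-host and cluster link-capacity totals (Mbps) for three MS-A2 hosts.
mgmt=2500                       # dedicated 2.5G management NIC
storage=$((2 * 10000))          # two 10G SFP+ uplinks
per_host=$((mgmt + storage))    # 22500 Mbps = 22.5G
cluster=$((3 * per_host))       # 67500 Mbps = 67.5G
echo "$per_host $cluster"
```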
Performance Gains:
vMotion: 10x faster VM migrations
Storage: High-throughput workloads
Network: Reduced contention
Jumbo Frames Configuration
MTU Settings by Network:
Management (VLAN 10): 1500 (standard)
vMotion (VLAN 20): 9000 (jumbo frames)
vSAN (VLAN 30): 9000 (jumbo frames)
NSX-TEP (VLAN 40): 1600 (overlay overhead)
NSX-Edge-Uplink (VLAN 50): 1500 (external compatibility)
TKG Networks (VLAN 100,110): 1500 (standard)
Configuration Requirements:
- Enable jumbo frames on switch ports
- Configure VMkernel adapters with appropriate MTU
- Verify end-to-end MTU path
- Test with vmkping -d -s 8972 (don't fragment) for 9000 MTU
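The 8972-byte figure comes straight from the MTU arithmetic: vmkping's -s sets the ICMP payload, and the 9000-byte frame must also carry the IPv4 and ICMP headers.

```shell
# Largest ICMP payload that fits a 9000-byte MTU:
# 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972.
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"
```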
Redundancy and High Availability
Network Failover Strategy
Current Limitations:
Intel NUCs:
Built-in NIC: 1G reliable
USB NIC: 1G, potential reliability issues
Failover: Manual intervention may be required
MS-A2 Advantages:
All NICs: Native PCIe interfaces
Reliability: No USB dependencies
Failover: Faster, more reliable
Recommended Failover Configuration:
Management Network:
Primary: vmnic0 (2.5G)
Failover: vmnic1 (2.5G)
Policy: Explicit failover order
Storage/vMotion Networks:
Primary: vmnic2 (10G)
Secondary: vmnic3 (10G)
Policy: Load balancing based on virtual port
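The failover policies above can be expressed with esxcli for standard vSwitches; a distributed switch would be configured in vCenter instead. A sketch, assuming the vSwitch names from Option 1:

```shell
# Management: explicit active/standby order on the 2.5G pair.
esxcli network vswitch standard policy failover set \
  --vswitch-name=vSwitch0 --active-uplinks=vmnic0 --standby-uplinks=vmnic1
# Storage/vMotion: both 10G uplinks active, hashed by originating port ID.
esxcli network vswitch standard policy failover set \
  --vswitch-name=vSwitch1 --active-uplinks=vmnic2,vmnic3 --load-balancing=portid
```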
Switch Configuration
UniFi Controller Settings
Port Profile for MS-A2 (10G Switch):
Profile Name: MS-A2-Host-Trunk
Port Configuration:
Native Network: Management (VLAN 10)
Tagged Networks:
- vMotion (VLAN 20)
- vSAN (VLAN 30)
- NSX-TEP (VLAN 40)
- NSX-Edge-Uplink (VLAN 50)
- TKG-Management (VLAN 100)
- TKG-Workload (VLAN 110)
Advanced Settings:
Auto PoE: Off
Port Isolation: Off
Jumbo Frames: Enabled (9000)
Storm Control: Enabled
Link Speed: 10 Gbps
USW-Aggregation Port Assignment:
Physical Port Mapping:
SFP+ 1: Switch-Trunk (Uplink to Garage)
SFP+ 2: MS-A2-Host-Trunk (esxi-ms-a2-01)
SFP+ 3: MS-A2-Host-Trunk (esxi-ms-a2-02)
SFP+ 4: MS-A2-Host-Trunk (esxi-ms-a2-03)
SFP+ 5: Reserved (Future expansion)
SFP+ 6: Reserved (Future expansion)
SFP+ 7: Reserved (Storage/NAS)
SFP+ 8: Reserved (Future uplink)
Management: In-band over the trunk (the USW-Aggregation has no RJ45 ports)
Migration Strategy
Coexistence Phase
Mixed Environment Considerations:
Network Segregation:
Intel NUCs: 1G switch (US-8)
MS-A2s: 10G switch (USW-Aggregation)
Interconnect: 1G uplink between switches
vCenter Configuration:
Single vCenter: Manages both clusters
Separate Clusters: Intel vs AMD
Shared Storage: Synology NAS via NFS
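Mounting the shared NFS datastore on each new MS-A2 host is one esxcli call; the NAS address, export path, and datastore name below are placeholders for your Synology.

```shell
# Mount the shared Synology NFS export as a datastore on this host.
# <synology-ip> and /volume1/datastore are placeholders -- substitute yours.
esxcli storage nfs add --host=<synology-ip> --share=/volume1/datastore --volume-name=synology-nfs
```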
Workload Migration
Phase 1: Infrastructure VMs:
- vCenter Server (remain on Mac Pro)
- NSX Manager (move to MS-A2)
- DNS/DHCP services (move to MS-A2)
Phase 2: Production Workloads:
- Storage-intensive VMs (leverage 10G)
- CPU-intensive workloads (AMD Zen 4)
- Container workloads (TKG on MS-A2)
Monitoring and Troubleshooting
Network Performance Monitoring
Key Metrics to Track:
Throughput:
vmnic utilization per interface
Total cluster bandwidth usage
Peak vs average performance
Latency:
vMotion completion times
Storage I/O response times
Network ping times between hosts
Errors:
Dropped packets per interface
CRC errors on physical links
VLAN configuration mismatches
Troubleshooting Tools
ESXi Commands:
# Interface status
esxcfg-nics -l
# Network configuration
esxcfg-vswitch -l
# Performance testing (jumbo frame path check; -d sets don't-fragment)
vmkping -I vmk1 -d -s 8972 192.168.20.13
# Traffic monitoring
vsish -e cat /net/portsets/DvsPortset-0/ports/*/clientStats
UniFi Monitoring:
- Port statistics and utilization
- VLAN traffic analysis
- Error rate monitoring
- Switch performance graphs
Cost Analysis
10G Switch Investment
USW-Aggregation: $300-400
SFP+ DAC Cables: $20-30 × 4 = $80-120
Total Network Upgrade: ~$380-520
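A straight sum of the quoted ranges (a sketch; street prices vary by market and cable length):

```shell
# Low and high ends of the upgrade cost: switch plus four DAC cables.
low=$((300 + 4 * 20))
high=$((400 + 4 * 30))
echo "\$$low-\$$high"
```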
Performance ROI:
- 10x bandwidth increase for storage/vMotion
- Reduced vMotion times (minutes → seconds)
- Support for bandwidth-intensive workloads
- Future-proofing for additional MS-A2 systems
Next Steps
- Deploy first MS-A2 on existing 1G infrastructure
- Test mixed Intel/AMD cluster functionality
- Order USW-Aggregation and SFP+ cables
- Implement 10G migration when hardware arrives
- Monitor performance improvements and optimize configuration