VCF 9.0 Single Host Installation Plan

Based on current hardware purchases and William Lam’s single-host VCF deployment method.

Current Hardware Status

MS-A2 Configuration (2 units) ✅ PURCHASED

  • 2x AMD Ryzen 9 9955HX (16C/32T each) = 64 vCPU total
  • 2x 128GB DDR5 = 256GB RAM total
  • Storage per unit:
    • Slot 1: Samsung 980 Pro 500GB (Boot/ESX-OSData)
    • Slot 2: Samsung 990 PRO 4TB (NVMe Tiering)
    • Slot 3: WD_BLACK 4TB SN850X (vSAN ESA)

Rack Infrastructure ✅ PURCHASED

  • DeskPi RackMate T1 8U Cabinet
  • Patch panel, cable management, PDU, shelves

Current Network Limitations

  • No 10GbE backbone yet; using existing 1GbE switches
  • USW Lite 16 PoE in rack (1GbE RJ45 only, no SFP+ ports)
  • MS-A2 10GbE SFP+ ports sit idle for now; the built-in 2.5GbE RJ45 ports will negotiate down to 1GbE

Phase 1: Initial Deployment (1GbE Network)

MS-A2 VCF Host Configuration

Network Setup (Initial):

  • 2.5G RJ45 Port 1: Management (to USW Lite 16 PoE, links at 1GbE)
  • 2.5G RJ45 Port 2: vSAN/vMotion, NSX TEP (links at 1GbE)
  • SFP+ Ports: reserved for the Phase 3 10GbE upgrade

VLAN Configuration:

VLAN 10: Management (VCF Management)
VLAN 20: vMotion
VLAN 30: vSAN
VLAN 40: NSX TEP (Tunnel Endpoints)
VLAN 100: TKG Management
VLAN 110: TKG Workload
VLAN 200: Intel NUC Management
VLAN 210: NUC Workload Network

Current Rack Configuration

Intel NUC Rack (Existing):

┌─────────────────────────────┐
│ US-8-60W Switch             │ ← Connected to garage
│ 3x Intel NUC6i7KYK          │ ← 64GB each, dual NIC
│ PDU                         │
└─────────────────────────────┘

MS-A2 Rack (New DeskPi RackMate):

┌─────────────────────────────┐ 8U Total
│ 0.5U - Patch Panel          │
│ 0.5U - D-Ring Cable Manager │ 1U
│ 1U   - USW Lite 16 PoE      │ 2U ← New switch for MS-A2s
│ 1U   - Rack Shelf (MS-A2 #1)│ 3U ← VCF Host
│ 1U   - Rack Shelf (MS-A2 #2)│ 4U ← Future workload
│ 1U   - [Available]          │ 5U
│ 1U   - [Available]          │ 6U
│ 1U   - [Available]          │ 7U
│ 1U   - AC PDU               │ 8U
└─────────────────────────────┘

VCF 9.0 Deployment Steps

1. Hardware Preparation

  • Install 128GB DDR5 in first MS-A2
  • Install all 3 NVMe drives (500GB + 2x 4TB)
  • Mount MS-A2 in rack shelf
  • Connect the USW Lite 16 PoE to the MS-A2’s built-in 2.5G RJ45 ports (they will negotiate down to 1GbE)

2. ESXi Installation

  • Install ESX 9.0 on the MS-A2 (VCF 9.0 requires the matching ESX 9.0 build)
  • Configure networking with VLANs
  • Set up management IP on VLAN 10
  • Configure multiple VMkernel adapters
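The VMkernel setup above can be sketched from the ESXi Shell. This is a minimal example assuming a default standard switch (vSwitch0); the portgroup name, VLAN, and subnet follow the sample JSON in this plan, while vmk1 and the host IP are placeholders:

```shell
# Sketch only: create the vMotion portgroup on VLAN 20 and tag a new VMkernel NIC.
# Repeat the same pattern for vSAN (VLAN 30) and the NSX TEP portgroup (VLAN 40).
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=VCF-vMotion
esxcli network vswitch standard portgroup set --portgroup-name=VCF-vMotion --vlan-id=20
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=VCF-vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
  --ipv4=192.168.20.100 --netmask=255.255.255.0
esxcli network ip interface tag add -i vmk1 -t VMotion
```

The VCF bring-up will create its own distributed switch and migrate these networks, so only the management VMkernel strictly needs to exist beforehand.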

3. VCF 9.0 Single-Host Deployment

  • Download VCF 9.0 installer OVA
  • Deploy VCF installer VM
  • Apply William Lam’s single-host override:

    echo "feature.vcf.internal.single.host.domain=true" >> /home/vcf/feature.properties
    echo 'y' | /opt/vmware/vcf/operationsmanager/scripts/cli/sddcmanager_restart_services.sh
    
  • Create deployment JSON with single host configuration
  • Upload JSON to bypass UI validation
  • Execute VCF deployment

4. Sample JSON Configuration

{
  "managementDomain": {
    "name": "mgmt01",
    "hosts": [
      {
        "hostname": "esx-ms-a2-01.lab.local",
        "ip": "192.168.10.100",
        "username": "root",
        "password": "Cl0udFoundry!"
      }
    ],
    "networkSpecs": {
      "management": {
        "subnet": "192.168.10.0/24",
        "gateway": "192.168.10.1",
        "vlanId": 10,
        "mtu": 1500,
        "portGroupName": "VCF-Management"
      },
      "vmotion": {
        "subnet": "192.168.20.0/24",
        "vlanId": 20,
        "mtu": 9000,
        "portGroupName": "VCF-vMotion"
      },
      "vsan": {
        "subnet": "192.168.30.0/24",
        "vlanId": 30,
        "mtu": 9000,
        "portGroupName": "VCF-vSAN"
      },
      "nsxtOverlay": {
        "subnet": "192.168.40.0/24",
        "vlanId": 40,
        "mtu": 9000,
        "portGroupName": "VCF-NSX-TEP"
      }
    },
    "vcenterSpec": {
      "name": "mgmt-vc01",
      "datacenterName": "mgmt-dc01",
      "vmSize": "medium",
      "storageSize": "lstorage",
      "rootPassword": "Cl0udFoundry!"
    },
    "nsxSpec": {
      "nsxManagerSpecs": [
        {
          "name": "mgmt-nsx01",
          "vmSize": "medium"
        }
      ],
      "rootPassword": "Cl0udFoundry!",
      "adminPassword": "Cl0udFoundry!"
    },
    "vsanSpec": {
      "datastoreName": "mgmt-vsan-datastore",
      "licenseKey": "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX",
      "esaConfig": {
        "enabled": true
      }
    },
    "dvsSpecs": [
      {
        "name": "mgmt-vds01",
        "mtu": 9000,
        "portGroupSpecs": [
          {
            "name": "VCF-Management",
            "vlanId": 10
          },
          {
            "name": "VCF-vMotion",
            "vlanId": 20
          },
          {
            "name": "VCF-vSAN",
            "vlanId": 30
          },
          {
            "name": "VCF-NSX-TEP",
            "vlanId": 40
          }
        ]
      }
    ],
    "storageType": "VSAN_ESA"
  }
}
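Before uploading, it is worth a quick pre-flight parse of the spec; a typo in the JSON fails late in the installer otherwise. A minimal sketch, assuming the spec is saved locally as /tmp/vcf-deploy.json (stubbed here to a fragment):

```shell
#!/bin/sh
# Pre-flight: confirm the bring-up spec is valid JSON before uploading it.
# /tmp/vcf-deploy.json stands in for a full copy of the spec above (stubbed here).
cat > /tmp/vcf-deploy.json <<'EOF'
{"managementDomain": {"storageType": "VSAN_ESA"}}
EOF
if python3 -m json.tool /tmp/vcf-deploy.json > /dev/null 2>&1; then
  echo "spec parses: OK"
else
  echo "spec parses: FAILED" >&2
  exit 1
fi
```

This only catches syntax errors; field-level validation (subnets, VLAN IDs, passwords) still happens in the installer itself.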

Phase 2: Intel NUC Integration

NUC Cluster Setup (Already Racked) ✅

Current Status:

  • 3x Intel NUCs racked with US-8-60W switch
  • Connected to garage network infrastructure
  • Dedicated PDU for power

Next Steps:

  • Install ESXi on the 3x Intel NUCs (64GB each); a VCF workload domain requires ESX 9.0, while a standalone cluster can stay on 8.0 U3
  • Configure as VCF workload domain OR standalone cluster
  • Deploy Tanzu Platform for Cloud Foundry

Network Configuration:

  • Use VLAN 200-210 for NUC management and workloads
  • Leverage existing US-8-60W → Garage US-8-60W → UXG-Lite path

Phase 3: 10GbE Upgrade (Future)

When Budget Allows

  • Purchase USW-Aggregation or alternative 10GbE switch
  • Purchase SFP+ DAC cables
  • Purchase SFP+ to RJ45 transceivers for Synology
  • Migrate to 10GbE backbone

Updated Rack Layout (Post-10GbE)

MS-A2 Rack with 10GbE:

┌─────────────────────────────┐ 8U Total
│ 0.5U - Patch Panel          │
│ 0.5U - D-Ring Cable Manager │ 1U
│ 1U   - USW-Aggregation      │ 2U ← 10GbE switch
│ 1U   - Rack Shelf (MS-A2 #1)│ 3U ← VCF Host
│ 1U   - Rack Shelf (MS-A2 #2)│ 4U ← Workload host
│ 1U   - [Available]          │ 5U
│ 1U   - [Available]          │ 6U
│ 1U   - [Available]          │ 7U
│ 1U   - AC PDU               │ 8U
└─────────────────────────────┘

Intel NUC Rack (Unchanged):

  • NUCs remain on existing US-8-60W infrastructure
  • 10GbE uplink connects NUC rack to MS-A2 rack via garage

Implementation Timeline

Week 1: VCF Deployment ⭐ PRIORITY

  • Rack and cable first MS-A2
  • Install ESX 9.0
  • Deploy VCF 9.0 with single-host method
  • Validate basic VCF functionality

Week 2-3: Optimization & Testing

  • Configure TKG management cluster
  • Test workload deployment capabilities
  • Performance baseline testing
  • Documentation and backup procedures
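For the performance baseline, a simple sketch using iperf3 between two test VMs on the vSAN VLAN (the addresses are hypothetical, and iperf3 must be installed on both ends):

```shell
# Sketch: baseline the vSAN/vMotion network with iperf3.
# 192.168.30.100 is a hypothetical test VM on the vSAN VLAN.
# On the first VM, start a server:
#   iperf3 -s
# On the second VM, drive 30 seconds of traffic over 4 parallel streams:
iperf3 -c 192.168.30.100 -t 30 -P 4
# Expect roughly 0.94 Gbit/s aggregate on the 1GbE network.
```

Record the result now so the Phase 3 upgrade has a before/after comparison.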

Future Phases: Scale-Out

  • Add second MS-A2 as workload host
  • Integrate Intel NUC cluster
  • Upgrade to 10GbE networking
  • Deploy Tanzu Platform for Cloud Foundry

Key Benefits of This Approach

  1. Immediate VCF Experience: Get full VCF 9.0 running with current hardware
  2. Phased Investment: Spread networking upgrades over time
  3. Performance Baseline: Understand 1GbE limitations before 10GbE upgrade
  4. Learning Path: Master VCF basics before complexity increases
  5. Budget Friendly: Use existing infrastructure while planning upgrades

Network Performance Expectations

With 1GbE Initially:

  • VCF deployment: Functional but slower
  • vSAN performance: Limited by network bandwidth
  • vMotion: Adequate for small VMs
  • NSX overlays: Basic functionality

After 10GbE Upgrade:

  • Significant performance improvement
  • Better suited for production workloads
  • Enhanced vSAN capabilities
  • Optimal NSX performance
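A back-of-envelope calculation makes the gap concrete: at line rate, ignoring protocol overhead, moving a 100GB image takes roughly 800 seconds on 1GbE versus 80 on 10GbE:

```shell
#!/bin/sh
# Ideal transfer time at line rate; real-world numbers will be somewhat worse.
size_gb=100
for link_gbps in 1 10; do
  secs=$(( size_gb * 8 / link_gbps ))   # GB -> gigabits, divided by link speed in Gb/s
  echo "${size_gb}GB over ${link_gbps}GbE: ~${secs}s"
done
# Prints ~800s for 1GbE and ~80s for 10GbE.
```

The same ratio applies to vSAN resyncs and large vMotions, which is why the 10GbE upgrade matters most once the second MS-A2 joins.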

This plan maximizes your current investment while providing a clear upgrade path.


This project is for educational and home lab purposes.