vCenter Server 8.0.2 Initial Configuration Guide
Overview
This guide covers the initial configuration of vCenter Server 8.0.2 for a mixed ESXi environment with Intel NUCs (ESXi 8.0.3) and Mac Pro (ESXi 7.0.3).
Environment Details
VCSA Information
- FQDN: vcsa.markalston.net
- IP: 192.168.10.11
- Version: vCenter Server 8.0.2-22617221
- Login: administrator@vsphere.local / Cl0udFoundry!
ESXi Hosts
| Host | IP | Hardware | ESXi Version | Role |
|---|---|---|---|---|
| esxi-nuc-01.markalston.net | 192.168.10.8 | Intel NUC6i7KYK | 8.0.3 | Workload |
| esxi-nuc-02.markalston.net | 192.168.10.9 | Intel NUC6i7KYK | 8.0.3 | Workload |
| esxi-nuc-03.markalston.net | 192.168.10.10 | Intel NUC6i7KYK | 8.0.3 | Workload |
| macpro.markalston.net | 192.168.10.7 | Mac Pro Late 2013 | 7.0.3 | Management/vCenter |
Step 1: Initial vCenter Access
1.1 Access vSphere Client
URL: https://vcsa.markalston.net
Username: administrator@vsphere.local
Password: Cl0udFoundry!
1.2 Accept Certificates
- Accept any SSL certificate warnings
- Consider configuring proper certificates later
Step 2: Create Datacenter Structure
2.1 Create Datacenter
- Right-click on vCenter server in inventory
- Select: New Datacenter
- Name: Homelab-DC
- Click: OK
2.2 Create Folders (Optional but Recommended)
Under Homelab-DC, create folders for organization:
- Right-click on Homelab-DC
- New Folder → New VM and Template Folder
  - Name: Management-VMs
  - Name: Workload-VMs
- New Folder → New Host and Cluster Folder
  - Name: Compute-Hosts
  - Name: Management-Hosts
Step 3: Add ESXi Hosts
3.1 Add Intel NUC Hosts (ESXi 8.0.3)
For each Intel NUC:
- Right-click on Homelab-DC (or Compute-Hosts folder)
- Select: Add Host
- Host Configuration:
  - Host name: esxi-nuc-01.markalston.net (etc.)
  - User name: root
  - Password: Cl0udFoundry!
- Click: Next through security certificate warning
- Host Summary: Review and click Next
- Assign License: Use evaluation license for now
- Lockdown Mode: Disabled (recommended for homelab)
- VM Location: Select appropriate folder (Workload-VMs)
- Ready to Complete: Click Finish
Repeat for:
- esxi-nuc-02.markalston.net
- esxi-nuc-03.markalston.net
3.2 Add Mac Pro Host (ESXi 7.0.3)
- Right-click on Management-Hosts folder (or Homelab-DC)
- Select: Add Host
- Host Configuration:
  - Host name: macpro.markalston.net
  - User name: root
  - Password: Cl0udFoundry!
- Follow same process as Intel NUCs
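The host additions above can also be scripted with PowerCLI; a minimal sketch, assuming an active Connect-VIServer session to vcsa.markalston.net and the hostnames from the table above:

```powershell
# -Force accepts each host's self-signed certificate, the scripted
# equivalent of clicking through the security warning in the wizard.
$cred = Get-Credential -UserName root -Message "ESXi root password"
$dc   = Get-Datacenter -Name "Homelab-DC"

foreach ($name in "esxi-nuc-01", "esxi-nuc-02", "esxi-nuc-03", "macpro") {
    Add-VMHost -Name "$name.markalston.net" -Location $dc -Credential $cred -Force
}
```

Hosts land directly under Homelab-DC; move them into folders or clusters afterward.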
Step 4: Create Clusters
4.1 Create Compute Cluster (Intel NUCs Only)
- Right-click on Homelab-DC
- Select: New Cluster
- Basic Configuration:
  - Name: NUC-Cluster
  - DRS: Enabled (Fully Automated)
  - HA: Enabled
  - vSphere Lifecycle Manager: Select “Manage all hosts in the cluster with a single image”
- Image Setup Options (appears when single image selected):
- For existing hosts: Choose “Import image from an existing host in the vCenter inventory”
- Select host: Choose one of your Intel NUCs (e.g., esxi-nuc-01) as the reference
- Configuration Management: ✅ Check “Manage configuration at the cluster level”
- Click: Next through remaining options
- Click: Finish
Image Import Options Explained:
- Compose a New Image: Start from scratch (not needed - your hosts are already configured)
- Import from existing host: Best option - uses your current NUC configuration as template
- Import from new host: For hosts not yet added to vCenter
Note: The “Manage configuration at the cluster level” option ensures consistent host settings across all cluster members.
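The basic cluster creation can be sketched in PowerCLI as well; note this covers only the DRS/HA settings, as the vSphere Lifecycle Manager single-image setup is easiest to complete in the UI wizard described above:

```powershell
# Creates NUC-Cluster with DRS fully automated and HA enabled.
New-Cluster -Name "NUC-Cluster" -Location (Get-Datacenter -Name "Homelab-DC") `
    -DrsEnabled -DrsAutomationLevel FullyAutomated -HAEnabled
```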
4.2 Move Intel NUCs to Cluster
- Select all three Intel NUC hosts
- Drag and drop into NUC-Cluster
- Confirm the move
4.3 Configure EVC Mode (Enhanced vMotion Compatibility)
EVC is optional within this cluster since all three NUCs are identical NUC6i7KYK models, but enabling it now makes adding different hardware later easier:
- Right-click on NUC-Cluster
- Settings → EVC
- Enable EVC
- Select: Intel “Skylake” Generation (matches the NUC6i7KYK CPUs) or a more conservative baseline such as “Haswell”
- Apply
- Apply
Note: Keep the Mac Pro out of this cluster due to its different ESXi version (7.0.3) and its older Ivy Bridge-EP CPU generation
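Moving the hosts and applying EVC can be done in one short PowerCLI sketch; hosts should have no running VMs (or be in maintenance mode) when EVC is first enabled:

```powershell
# "intel-skylake" matches the NUC6i7KYK CPUs; use "intel-haswell" for a
# more conservative baseline.
Get-VMHost -Name "esxi-nuc-0*" | Move-VMHost -Destination (Get-Cluster -Name "NUC-Cluster")
Set-Cluster -Cluster "NUC-Cluster" -EVCMode "intel-skylake" -Confirm:$false
```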
Step 5: Configure Networking
5.1 Create Distributed Switch (Optional but Recommended)
- Navigate: Networking tab
- Right-click on Homelab-DC
- Select: Distributed Switch → New Distributed Switch
- Configuration:
  - Name: Homelab-DvS
  - Version: 7.0.0 (compatible with ESXi 7.0.3)
  - Number of uplinks: 2 per host
5.2 Add Hosts to Distributed Switch
- Right-click on Homelab-DvS
- Add and Manage Hosts
- Select: Add hosts
- Select all ESXi hosts
- Configure uplinks:
  - Map physical NICs to uplink ports:
    - vmnic0 → Uplink 1
    - vusb0 → Uplink 2
  - For Intel NUCs: vmnic0 (built-in) and vusb0 (USB adapter)
  - For Mac Pro: both built-in NICs
  - Note: USB NICs appear as vusb0, not vmnic1
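Switch creation and host attachment can be sketched in PowerCLI as follows. Caution: migrating vmnic0 without moving vmk0 in the same operation can drop management connectivity, which is why the UI wizard (which migrates both together) is the safer route for the management uplink:

```powershell
$dc  = Get-Datacenter -Name "Homelab-DC"
$dvs = New-VDSwitch -Name "Homelab-DvS" -Location $dc -Version "7.0.0" -NumUplinkPorts 2

foreach ($esx in Get-VMHost) {
    Add-VDSwitchVMHost -VDSwitch $dvs -VMHost $esx
    # Pick up whichever uplink NICs this host actually has:
    # vmnic0/vusb0 on the NUCs, vmnic0/vmnic1 on the Mac Pro.
    $nics = Get-VMHostNetworkAdapter -VMHost $esx -Physical |
        Where-Object { $_.Name -in @("vmnic0", "vmnic1", "vusb0") }
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $dvs `
        -VMHostPhysicalNic $nics -Confirm:$false
}
```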
5.3 Create Port Groups
Create port groups for different VLANs based on VCF requirements:
- Management Network:
  - Name: Management-PG
  - Port Binding: Static
  - Port Allocation: Elastic
  - Number of Ports: 8
  - Network Resource Pool: Default
  - VLAN Type: VLAN
  - VLAN ID: 10
  - Description: ESXi management, vCenter Server, and infrastructure management traffic
  - Usage: VMkernel adapters for management (vmk0)
  - Advanced Policies: Default security policy (Promiscuous mode: Reject, MAC changes: Reject, Forged transmits: Reject)
- vMotion Network:
  - Name: vMotion-PG
  - Port Binding: Static
  - Port Allocation: Elastic
  - Number of Ports: 8
  - Network Resource Pool: Default
  - VLAN Type: VLAN
  - VLAN ID: 20
  - Description: Live migration of VMs between hosts; requires low latency and high bandwidth
  - Usage: VMkernel adapters for vMotion (vmk1)
  - Advanced Policies: MTU 9000 (jumbo frames), Load balancing: Route based on physical NIC load, Network failure detection: Beacon probing
- vSAN Network:
  - Name: vSAN-PG
  - Port Binding: Static
  - Port Allocation: Elastic
  - Number of Ports: 8
  - Network Resource Pool: Default
  - VLAN Type: VLAN
  - VLAN ID: 30
  - Description: Storage traffic for vSAN cluster communication and data replication
  - Usage: VMkernel adapters for vSAN (vmk2)
  - Advanced Policies: MTU 9000 (jumbo frames), Traffic shaping enabled for guaranteed bandwidth
  - Note: Only needed if using vSAN storage
- NSX TEP Network:
  - Name: NSX-TEP-PG
  - Port Binding: Static
  - Port Allocation: Elastic
  - Number of Ports: 8
  - Network Resource Pool: Default
  - VLAN Type: VLAN
  - VLAN ID: 40
  - Description: NSX Tunnel Endpoints for overlay network encapsulation (GENEVE/VXLAN)
  - Usage: VMkernel adapters for NSX TEPs
  - Advanced Policies: MTU 1600+ (increased for encapsulation overhead)
  - Note: Required for NSX-T deployment
- NSX Edge Uplink Network:
  - Name: NSX-Edge-Uplink-PG
  - Port Binding: Ephemeral
  - Port Allocation: Elastic
  - Number of Ports: 16
  - Network Resource Pool: Default
  - VLAN Type: VLAN
  - VLAN ID: 50
  - Description: NSX Edge north-south connectivity to physical network/router
  - Usage: NSX Edge VM uplink interfaces for external connectivity
  - Advanced Policies: Security policy (Promiscuous mode: Accept, MAC changes: Accept, Forged transmits: Accept)
  - Note: This is where Edge nodes connect to the physical network for routing
- TKG Management Network:
  - Name: TKG-Management-PG
  - Port Binding: Static
  - Port Allocation: Elastic
  - Number of Ports: 16
  - Network Resource Pool: Default
  - VLAN Type: VLAN
  - VLAN ID: 100
  - Description: Tanzu Kubernetes Grid management cluster and control plane components
  - Usage: VMs for TKG management cluster
  - Advanced Policies: Default security policy
- TKG Workload Network:
  - Name: TKG-Workload-PG
  - Port Binding: Static
  - Port Allocation: Elastic
  - Number of Ports: 32
  - Network Resource Pool: Default
  - VLAN Type: VLAN
  - VLAN ID: 110
  - Description: Tanzu Kubernetes Grid workload clusters and application deployments
  - Usage: VMs for TKG workload clusters
  - Advanced Policies: Default security policy
- NUC Management Network (Optional - Not Recommended for Production):
  - Name: NUC-Management-PG
  - Port Binding: Static
  - Port Allocation: Elastic
  - Number of Ports: 8
  - Network Resource Pool: Default
  - VLAN Type: VLAN
  - VLAN ID: 200
  - Description: Alternative management network for testing/troubleshooting only
  - Usage: Temporary management access during network reconfiguration
  - Advanced Policies: Default security policy
  - Note: Keep NUCs on the main management network (VLAN 10) for normal operation
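Creating the port groups above is repetitive in the UI; a PowerCLI sketch covering the standard (non-optional) ones, assuming Homelab-DvS already exists:

```powershell
$dvs = Get-VDSwitch -Name "Homelab-DvS"
$portGroups = @(
    @{ Name = "Management-PG";      Vlan = 10;  Ports = 8  },
    @{ Name = "vMotion-PG";         Vlan = 20;  Ports = 8  },
    @{ Name = "vSAN-PG";            Vlan = 30;  Ports = 8  },
    @{ Name = "NSX-TEP-PG";         Vlan = 40;  Ports = 8  },
    @{ Name = "NSX-Edge-Uplink-PG"; Vlan = 50;  Ports = 16 },
    @{ Name = "TKG-Management-PG";  Vlan = 100; Ports = 16 },
    @{ Name = "TKG-Workload-PG";    Vlan = 110; Ports = 32 }
)
foreach ($pg in $portGroups) {
    New-VDPortgroup -VDSwitch $dvs -Name $pg.Name -VlanId $pg.Vlan -NumPorts $pg.Ports
}
# NSX-Edge-Uplink-PG calls for ephemeral binding per the list above;
# create that one with -PortBinding Ephemeral instead of the default.
```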
Step 6: Configure Storage
6.1 Review Existing Datastores
Each host should show its local datastore:
- Intel NUCs: datastore names like esxi-nuc-01-ssd-local
- Mac Pro: MAC_LOCAL
6.2 Create Datastore Clusters (Optional)
For Intel NUCs with similar storage:
- Navigate: Storage tab
- Right-click on Homelab-DC
- Storage → New Datastore Cluster
- Name: NUC-Storage-Cluster
- Enable Storage DRS
- Add datastores from Intel NUCs
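The datastore cluster can also be built with PowerCLI; a sketch assuming the NUC datastores follow the esxi-nuc-*-ssd-local naming shown in 6.1:

```powershell
$dc  = Get-Datacenter -Name "Homelab-DC"
$dsc = New-DatastoreCluster -Name "NUC-Storage-Cluster" -Location $dc
# Enable Storage DRS, then pull the NUC local datastores into the cluster.
Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated
Get-Datastore -Name "esxi-nuc-*-ssd-local" | Move-Datastore -Destination $dsc
```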
6.3 Configure Storage Policies
- Navigate: Policies and Profiles
- VM Storage Policies → Create
- Create policies for:
- High-performance workloads (NUC local storage)
- Management VMs (Mac Pro storage)
Step 7: Configure Advanced Settings
7.1 Configure vMotion Networks
For each Intel NUC host:
- Select host → Configure → VMkernel adapters
- Add Networking
- VMkernel Network Adapter
- Select vMotion port group
- Enable vMotion service
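The per-host VMkernel steps above reduce to a short PowerCLI loop; the 192.168.20.x addresses are assumptions for the VLAN 20 vMotion network, so substitute your own addressing:

```powershell
$ip = 1
foreach ($esx in Get-Cluster -Name "NUC-Cluster" | Get-VMHost) {
    # Creates a vMotion-enabled VMkernel adapter on the vMotion port group.
    New-VMHostNetworkAdapter -VMHost $esx -VirtualSwitch "Homelab-DvS" `
        -PortGroup "vMotion-PG" -IP "192.168.20.$ip" `
        -SubnetMask "255.255.255.0" -VMotionEnabled:$true
    $ip++
}
```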
7.2 Set Up HA/DRS
- Select NUC-Cluster
- Configure → Configuration
- DRS Settings:
- Automation Level: Fully Automated
- Migration Threshold: Apply Priority 1, Priority 2, and Priority 3 recommendations
- HA Settings:
- Host failures cluster tolerates: 1
- Admission Control: Enabled
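The HA/DRS settings above map to a single PowerCLI call, assuming the cluster was created as NUC-Cluster:

```powershell
# Fully automated DRS; HA tolerating one host failure with admission control.
Set-Cluster -Cluster "NUC-Cluster" -DrsEnabled:$true -DrsAutomationLevel FullyAutomated `
    -HAEnabled:$true -HAAdmissionControlEnabled:$true -HAFailoverLevel 1 -Confirm:$false
```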
7.3 Configure Alarms and Monitoring
- Monitor → Alarms
- Review default alarms
- Create custom alarms for:
- Host disconnection
- Datastore usage
- VM resource usage
Step 8: Post-Configuration Tasks
8.1 Update Host Profiles (Optional)
- Extract host profile from a reference host
- Apply to similar hosts for consistency
8.2 Configure Backup
- Menu → Administration → Configuration → Backup
- Configure backup schedule
- Set backup location (to Mac Pro datastore)
8.3 Install vCenter Plugins
Consider installing:
- vSphere Lifecycle Manager (built into vCenter 8; successor to Update Manager)
- Aria Operations for Logs (formerly vRealize Log Insight), if available
Mixed Environment Best Practices
EVC Considerations
- Intel NUCs: Can use newer EVC modes
- Mac Pro: Limited by Ivy Bridge architecture
- Recommendation: Keep clusters separate or use conservative EVC
vMotion Limitations
- Within a cluster: enable EVC so hosts with different CPU generations present a common feature set
- Cross-version (7.0.3 ↔ 8.0.3): supported under one vCenter, but run the compatibility checks before migrating
- Storage vMotion: Works across versions
Resource Management
- Mac Pro: Ideal for management VMs and VCSA
- Intel NUCs: Workload VMs with HA/DRS
- Mixed workloads: Distribute based on requirements
Verification Checklist
- All hosts added and connected
- Clusters created and configured
- Networking configured (standard or distributed switches)
- Storage visible and accessible
- HA/DRS functional
- vMotion working within cluster
- Alarms and monitoring configured
- Backup schedule configured
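Most of this checklist can be verified from a PowerCLI session with a few read-only queries:

```powershell
# Hosts connected, clusters configured, port groups present, storage visible.
Get-VMHost    | Select-Object Name, ConnectionState, Version
Get-Cluster   | Select-Object Name, HAEnabled, DrsEnabled, EVCMode
Get-VDSwitch  | Get-VDPortgroup | Select-Object Name, VlanConfiguration, NumPorts
Get-Datastore | Select-Object Name, FreeSpaceGB, CapacityGB
```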
Troubleshooting
Host Connection Issues
# Test connectivity
ping esxi-nuc-01.markalston.net
# Check ESXi management agents (hostd/vpxa) over SSH
ssh root@esxi-nuc-01.markalston.net "/etc/init.d/hostd status; /etc/init.d/vpxa status"
vMotion Issues
- Check VMkernel networking
- Verify shared storage access
- Review EVC settings
HA/DRS Issues
- Check cluster configuration
- Verify resource pools
- Review constraints and rules
Useful PowerCLI Commands
# Connect to vCenter
Connect-VIServer -Server vcsa.markalston.net
# Get all hosts
Get-VMHost
# Get cluster information
Get-Cluster
# Check vMotion configuration
Get-VMHost | Get-VMHostNetworkAdapter -VMKernel | Where-Object {$_.VMotionEnabled}
Next Steps
Once basic configuration is complete:
- Deploy test VMs on each host
- Test vMotion within cluster
- Configure monitoring and alerting
- Set up automated backup for VMs
- Configure SSL certificates for production use
Configuration URLs:
- vSphere Client: https://vcsa.markalston.net
- VAMI: https://vcsa.markalston.net:5480
- ESXi Web UIs: https://[host-ip]