Homelab Conversation: Network and Storage Upgrade

This document captures a conversation about upgrading a home lab, focusing on network infrastructure and storage solutions for a Synology DS918+.


Part 1: 10G Networking for the Home Lab

User’s Goal

The user wants to upgrade their office lab to 10G networking while maintaining a 1G uplink to the garage switch.

Key Recommendations

  • 10G Uplink to Garage: It was determined that a 10G uplink to the garage is not necessary at this time. The lab traffic will be mostly self-contained, and NSX-T will handle the software-defined networking, minimizing the need for high bandwidth to the rest of the network.
  • Focus on the Lab: The 10G upgrade should be confined to the office lab, where the high-performance compute and storage will reside.

Shopping List for 10G Lab Upgrade

Component                  Recommendation                                  Price (Approx.)
Core 10G Switch            Ubiquiti USW-Aggregation (8 x 10G SFP+ ports)   $300-400
SFP+ DAC Cables            10GTek SFP+ DAC Cable Kit (various lengths)     $107
Synology NAS 10G Upgrade   Synology E10G18-T1 (10GBase-T adapter)          $120-150
1G Uplink Component        1G SFP 1000Base-T module                        $20-30
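Summing the list gives a rough budget; the low and high ends of each price range add up as follows:

```shell
# Rough budget check for the parts list above (USD, approximate).
low=$((300 + 107 + 120 + 20))
high=$((400 + 107 + 150 + 30))
echo "Estimated total: \$${low}-\$${high}"   # $547-$687
```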

Part 2: Synology DS918+ Storage Upgrade

User’s Current Setup

  • Drives: 2 x 4TB WD Red Pro in a RAID 0 configuration (8TB raw, ~7TB usable).
  • Cache: 2 x 500GB Crucial P3 Plus NVMe SSDs (1TB total cache).
  • Drive Bays: 2 of 4 bays are in use.

The RAID 0 Problem

  • The user’s current RAID 0 setup offers no data redundancy. If one drive fails, all data is lost.
  • With 5.6TB of data in use, the existing drives cannot be converted in place: a RAID 1 mirror of the two 4TB drives would provide only 4TB of usable space.

Recommended Upgrade Path

  1. Purchase New Drives:
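The capacity math behind the RAID 0 vs. RAID 1 trade-off can be sanity-checked in a few lines (decimal TB, ignoring filesystem overhead):

```shell
# RAID 0 stripes across both drives, so capacities add up;
# RAID 1 mirrors, so usable space equals a single drive.
drive_tb=4
raid0_tb=$((drive_tb * 2))   # 8TB usable, zero redundancy
raid1_tb=$drive_tb           # 4TB usable, survives one drive failure
used_gb=5600                 # ~5.6TB currently in use
if [ "$used_gb" -gt $((raid1_tb * 1000)) ]; then
  echo "5.6TB in use will not fit on a ${raid1_tb}TB mirror"
fi
```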
    • Recommendation: 2 x 8TB Seagate IronWolf drives. This is a cost-effective choice for a home lab, providing a good balance of performance and capacity.
    • RAID Configuration: The new drives should be set up in a RAID 1 configuration, which will provide 8TB of usable, redundant storage.
  2. Data Migration:
    • Install the two new 8TB drives into the empty bays of the DS918+.
    • Create a new RAID 1 storage volume with the new drives.
    • Move the 5.6TB of data from the old RAID 0 volume to the new, redundant 8TB volume.
  3. Reconfigure Old Drives:
    • Once the data is safely migrated, delete the old RAID 0 volume.
    • Reconfigure the original 2 x 4TB drives into a new RAID 1 volume. This will create a 4TB redundant volume that can be used for backups or less critical data.

Memory Upgrade

  • Recommendation: Add a single 4GB DDR3L-1866 SODIMM to the empty memory slot in the DS918+.
  • Benefit: This will bring the total RAM to 8GB, improving performance for Docker containers and other services running on the NAS.

Part 3: iSCSI LUN Migration

User’s Current Setup

  • 3 iSCSI targets connected to 3 LUNs, each 1.7TB in size.
  • No initiator-level permissions (IQN allow-lists) are configured; WWPNs apply to Fibre Channel, not iSCSI.

Recommendations

  • Security: For a home lab, open access is acceptable, but for better security, the user should consider adding ESXi host IQNs to the LUN permissions in the DSM SAN Manager.
  • Migration: iSCSI LUNs cannot simply be copied at the file level the way shared folders can. The recommended method is Storage vMotion:
    1. Create new iSCSI LUNs on the new 8TB storage volume.
    2. Add the new LUNs as datastores in vSphere.
    3. Use Storage vMotion to migrate the VMs from the old datastores to the new ones.
    4. Once all VMs are migrated, decommission the old datastores and LUNs.
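Step 3 can also be scripted with VMware's govc CLI rather than clicking through the vSphere client; a sketch, assuming GOVC_URL and credentials are already exported, and where "nas8tb-lun1", the datacenter name "dc", and the VM name are all placeholders:

```shell
# Storage vMotion a single VM onto the new datastore.
govc vm.migrate -ds nas8tb-lun1 my-vm

# Or sweep every VM in the datacenter onto it in a loop.
for vm in $(govc ls /dc/vm); do
  govc vm.migrate -ds nas8tb-lun1 "$vm"
done
```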
  • Optimization:
    • Right-size the LUNs: Check the actual usage in vSphere and create smaller, more appropriately sized LUNs on the new volume.
    • Thin Provisioning: Use thin-provisioned LUNs to save space, but monitor storage usage to avoid over-allocation.
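For the over-allocation caveat, a small cron-able check on the NAS can watch the backing volume; a sketch assuming GNU df, a /volume2 mount point, and a hypothetical 80% threshold:

```shell
# Warn when the volume backing the thin LUNs crosses the threshold.
THRESHOLD=80
usage=$(df --output=pcent /volume2 2>/dev/null | tail -1 | tr -dc '0-9')
if [ "${usage:-0}" -gt "$THRESHOLD" ]; then
  echo "WARNING: /volume2 is at ${usage}% - thin LUNs may be over-allocated"
fi
```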

This project is for educational and home lab purposes.