Critical incompatibility alert: DS918+ cannot accept E10G18-T1 adapter

The Synology DS918+ NAS is fundamentally incompatible with the E10G18-T1 10GbE network adapter. The DS918+ lacks the required PCIe expansion slot for this adapter. However, since you asked about safe shutdown procedures for iSCSI-connected VMs, this report provides comprehensive guidance on those procedures, which remain valuable for other maintenance scenarios or if you upgrade to a compatible NAS model.

Hardware incompatibility explained

The DS918+ has no PCIe expansion slot where the E10G18-T1 could be installed. Its only expansion options are two M.2 NVMe slots on the bottom panel, which DSM dedicates exclusively to SSD caching; they use the M.2 form factor and cannot accept an expansion card. The E10G18-T1 requires a standard PCIe 3.0 x4 slot that simply does not exist in the DS918+ chassis. Synology's official compatibility list accordingly omits the DS918+, listing only models such as the DS1517+, DS1618+, DS1621+, DS1821+, and various RackStation models.

The PCIe lanes inside the DS918+ are committed to the storage subsystem and the M.2 cache slots; there is nowhere to attach an expansion card without breaking the NAS's core functionality. No workaround exists using M.2-to-PCIe adapters, due to physical constraints and the lack of driver support in DSM.

Safe shutdown procedure for iSCSI-connected VMs

Despite the hardware incompatibility, the proper shutdown sequence for a Synology NAS with active iSCSI connections and running VMs remains critically important knowledge. The fundamental principle: VMs and initiators must disconnect BEFORE the NAS shuts down. The iSCSI protocol is stateful, meaning forcibly shutting down a target with active connections leaves clients in a hanging state, risking severe data corruption.

The complete shutdown sequence

Phase 1: Prepare and verify (5-10 minutes)

Before beginning shutdown, verify backup status of all VMs and document current iSCSI connections. Notify users of impending downtime and identify all VMs using iSCSI storage. Check that no critical operations are running: Hyper Backup tasks, Plex transcoding, Docker containers, active file transfers, or surveillance recordings. In DSM, navigate to Resource Monitor to check SMB/CIFS/NFS connection counts and active operations.

Phase 2: Gracefully shut down VMs (10-30 minutes)

Shut down guest VMs using proper OS shutdown commands—never use force shutdown or power-off options. In VMware, use “Shut Down Guest OS”; in Hyper-V, use “Shut Down” (not “Turn Off”); in Proxmox, use qm shutdown <VMID>. Wait for all VMs to completely power off before proceeding. Verify no VM processes remain active. Simply powering off VMs is insufficient because the hypervisor host maintains iSCSI connections independent of VM state.
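For Proxmox specifically, the per-VM step above can be scripted. A minimal sketch, dry-run by default (RUN=echo prints each command instead of executing it); the VMIDs passed in are examples, and on a real Proxmox host you would set RUN to empty so qm actually runs:

```shell
# Gracefully shut down a list of Proxmox VMIDs, one at a time.
# Dry run by default: RUN=echo just prints each command.
# Set RUN= (empty) on a real Proxmox host to execute for real.
shutdown_vms() {
    run="${RUN-echo}"   # default to echo unless RUN is explicitly set
    for vmid in "$@"; do
        # --timeout gives the guest OS up to 120 s to stop cleanly
        $run qm shutdown "$vmid" --timeout 120
    done
}
```

Running `shutdown_vms 101 102` in dry-run mode prints the two qm commands that would be issued, which is a convenient way to review the shutdown order before committing to it.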

Phase 3: Disconnect iSCSI initiators (5-15 minutes)

This is the most critical phase. Each hypervisor requires specific disconnection procedures.

For VMware ESXi, unmount datastores from all hosts by right-clicking the datastore and selecting “Unmount” for all hosts. Then detach storage devices from each host via Host → Configure → Storage Devices → Select device → Actions → Detach. For multipath configurations, disable all paths before shutdown to ensure orderly termination of open iSCSI transactions. Verify no iSCSI sessions remain using esxcli iscsi session list, which should return empty output. Optionally remove detached devices using esxcli storage core device detached remove.
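The "empty output" check above can be wrapped in a small guard. A hedged sketch: SESSION_CMD defaults to the real esxcli call, but since esxcli exists only in the ESXi shell, the variable can be overridden for a dry run elsewhere:

```shell
# Refuse to proceed while any iSCSI session is still listed.
# On a real ESXi host the default command below is used; override
# SESSION_CMD for a dry run (esxcli exists only on ESXi).
check_no_sessions() {
    cmd="${SESSION_CMD:-esxcli iscsi session list}"
    if [ -n "$($cmd 2>/dev/null)" ]; then
        echo "BLOCKED: active iSCSI sessions remain"
        return 1
    fi
    echo "OK: no active iSCSI sessions"
}
```

A non-zero exit status here makes the check easy to chain: `check_no_sessions && proceed_with_shutdown`.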

For Microsoft Hyper-V, gracefully shut down all VMs first, then disconnect iSCSI sessions via the iSCSI Initiator control panel: select the target, click Disconnect, and ensure buffered data is committed before disconnecting. If disconnection fails, stop the Windows Backup service, as it can hold iSCSI targets open. Verify disconnection using Get-IscsiSession in PowerShell; no active sessions should remain. If needed, take disks offline in Disk Management or with Set-Disk -Number <DiskNumber> -IsOffline $true.

For Proxmox VE, stop all VMs using qm shutdown <VMID>, then disable the iSCSI storage in the GUI (Datacenter → Storage → select the iSCSI entry → Disable). Log out of iSCSI sessions using iscsiadm -m node -T <target_IQN> -u, then delete the node configuration with iscsiadm -m node -T <target_IQN> -o delete. If using multipath, flush the device map with multipath -f <map> (or multipath -F to flush all unused maps). If shutting down the host, stop the open-iscsi service: systemctl stop open-iscsi.
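The logout-then-delete pair for open-iscsi can be sketched per target. Dry-run by default (RUN=echo); set RUN to empty on a real host, and note that any IQN you pass is specific to your NAS:

```shell
# Log an open-iscsi initiator out of one target and remove its
# node record. RUN=echo makes this a dry run that only prints the
# iscsiadm commands; set RUN= (empty) on a real Proxmox host.
teardown_target() {
    run="${RUN-echo}"
    iqn="$1"
    $run iscsiadm -m node -T "$iqn" -u          # log out of the session
    $run iscsiadm -m node -T "$iqn" -o delete   # remove the node config
}
```

For example, `teardown_target iqn.2000-01.com.example:target-1` (a placeholder IQN) prints the two commands in the order they must run: logout first, configuration removal second.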

Phase 4: Verification (2-5 minutes)

Verify no active iSCSI sessions remain on any initiator. Check the NAS for zero active iSCSI connections in DSM by opening iSCSI Manager (SAN Manager in DSM 7) and checking the "Connections" tab; it should show 0 active connections. Wait 30-60 seconds for session cleanup to complete. Do not proceed with NAS shutdown if any active connections remain.
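The wait-for-cleanup step can be automated as a short poll. A sketch under the assumption that the command passed in prints the current session count; on a Linux initiator something like `iscsiadm -m session 2>/dev/null | wc -l` could fill that role, but the hook itself is an assumption, not a Synology interface:

```shell
# Poll until the given command reports a session count of zero,
# up to a maximum number of tries. The count command is an assumed
# hook supplied by the caller, not a built-in Synology interface.
wait_for_zero_sessions() {
    max="${2:-30}"
    i=0
    while [ "$i" -lt "$max" ]; do
        n=$($1)
        if [ "$n" -eq 0 ]; then
            echo "clear"
            return 0
        fi
        i=$((i + 1))
        sleep 2
    done
    echo "sessions still active"
    return 1
}
```

Only proceed to Phase 5 when this reports "clear"; a non-zero exit status means connections were still present after the timeout.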

Phase 5: NAS shutdown (5-10 minutes)

Optionally stop the iSCSI service on the NAS first via iSCSI Manager, or over SSH with synoservice --stop iscsi (DSM 6 only; DSM 7 removed the synoservice utility). Then initiate NAS shutdown through the DSM interface: click your profile icon in the upper-right corner, select "Shutdown," and confirm. This is the only recommended method, as it performs pre-shutdown system checks and alerts you to running services or potential issues. Allow 2-5 minutes for complete shutdown, and wait for all LED indicators to turn off before proceeding with any hardware work.

Why explicit disconnection is critical

The iSCSI protocol has no built-in “graceful target shutdown” mechanism. Forcibly stopping a target leaves initiators in a hanging state with potentially catastrophic consequences: write cache data loss, VMFS/NTFS/ext4 filesystem corruption, virtual disk damage, guest OS corruption requiring fsck/chkdsk, and host instability. Documented cases include Windows iSCSI data corruption during session recovery (Microsoft KB 2928678, 2955164), VMware ESXi hosts becoming unmanageable and requiring reboots, Linux/Proxmox ZFS/BTRFS pool corruption, and Hyper-V guest VM BSODs.

When write cache is enabled on the NAS, data acknowledged as “written” may only exist in volatile cache. Power loss or shutdown without proper cache flush results in data loss. Multiple initiators connecting to the same iSCSI LUN without cluster-aware filesystems (VMFS, OCFS2, GFS2) guarantees data corruption.

Pre-shutdown checks for Synology DiskStation

Before initiating any shutdown, verify volume health in Storage Manager; all volumes must show "Normal" status. Never shut down during "Degraded" or "Critical" states. Check the parity consistency check status: while you can technically shut down during a parity check, it will restart from 0% upon reboot and can take days to complete, so community guidance strongly suggests letting parity checks finish first.

Verify no active operations: Hyper Backup tasks, Docker containers, Virtual Machines (DSM will warn), file transfers in File Station, or users uploading/downloading files. Use Resource Monitor to check CPU/memory load—high load can slow shutdown or cause hangs. Check SMART status for any drive warnings and ensure system temperatures are within normal range. If a UPS is connected, verify its status in Control Panel → Hardware & Power → UPS.

Volume synchronization and RAID considerations

During Synology shutdown, DSM follows a structured sequence: flushes all dirty caches (write cache) to disk, verifies all data is synced to physical drives, locks volumes to prevent new writes, safely unmounts all RAID volumes, and ensures parity is consistent before power-off. Improper shutdown can cause RAID 5/SHR volumes to enter “Degraded” mode, force parity consistency checks on next boot (taking 24-48+ hours), and risk data loss if a second drive fails during rebuild.

You can safely shut down during RAID rebuild or expansion operations; they will restart from 0% after reboot, with expected durations reset accordingly. Parity consistency checks likewise restart from 0%, so waiting for completion is recommended when possible.

What never to do

Never pull the power plug during normal operation: this causes immediate power loss with no graceful shutdown sequence and a high risk of RAID degradation. Never hold the power button for 10+ seconds unless the system is completely unresponsive, as this forces a hard shutdown and bypasses critical service termination. Never shut down during DSM updates, which can brick the system. Never ignore DSM warnings about running VMs, backups, or services. Never disconnect network cables before shutdown while remote users are connected or file transfers are active, as this can corrupt open files.

For hardware modifications specifically, never touch internal hardware before disconnecting all cables (power, network, USB, eSATA)—official Synology guides emphasize this prevents electrical damage and shock. Never interrupt the memory validation process on first boot after RAM installation; wait the full 15-20 minutes even if the system seems frozen.

Post-hardware installation procedures (general guidance)

While not applicable to the E10G18-T1 on the DS918+, general post-hardware installation procedures for Synology NAS include: disconnecting all cables, waiting 10-30 seconds for capacitor discharge, performing the hardware modification, reconnecting all cables, and powering on. The first boot may take 10-15 minutes for memory validation (if RAM was added), with the network LED blinking continuously during checks. After restart, check system logs in Log Center for errors, verify volume health in Storage Manager, confirm all services started in Package Center, test network connectivity, and verify new hardware is recognized in DSM.

For models that do support the E10G18-T1 (not DS918+), DSM automatically detects the adapter with built-in drivers. Configuration involves navigating to Control Panel → Network, configuring the new 10GbE interface, assigning static IP or DHCP, setting MTU to 9000 for jumbo frames if supported, testing connectivity at 10000 Mbps full duplex, and updating firewall rules as needed.

Alternative solutions for DS918+ network upgrades

Since the E10G18-T1 cannot be installed, consider these alternatives:

Link aggregation (officially supported): Combine both 1GbE ports for 2Gbps aggregate throughput via Control Panel → Network → Network Interface → Create Bond. This requires an 802.3ad-compatible switch and provides improved performance for multiple simultaneous clients, though single-client speed remains limited to 1Gbps per connection.

Upgrade to compatible NAS model: Synology models with PCIe slots include DS1522+ (5-bay, ~$700, PCIe 3.0 x8 slot), DS923+ (4-bay, ~$600, dedicated 10GbE expansion slot for E10G22-T1-Mini), DS1621+ (6-bay, ~$850), DS1821+ (8-bay, ~$1,100), or used DS1517+/DS1618+ models with PCIe expansion.

USB-to-Ethernet adapters (unsupported workaround, use at own risk): 2.5GbE or 5GbE USB adapters exist but require third-party community drivers from GitHub projects like bb-qq, void warranty, may break with DSM updates, and provide inconsistent performance due to USB 3.0 bandwidth limitations. This approach is not recommended and receives no Synology support.

Startup sequence after shutdown

When powering back on, follow this reverse order: power on the NAS and wait 2-5 minutes for the iSCSI service to be fully online. Verify iSCSI targets are available in iSCSI Manager. Start hypervisor hosts if they were shut down. Wait for iSCSI initiators to automatically reconnect (this may take 1-2 minutes). Verify datastores and storage are visible to all hosts. Finally, start VMs in priority order, ensuring iSCSI connections are established before VMs attempt to access storage.
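The "wait for the iSCSI service to be fully online" step can be gated on the target's TCP port, since iSCSI listens on port 3260 by default. A sketch: PROBE defaults to `nc -z` but is overridable where nc is unavailable, and any hostname you pass is a placeholder for your NAS:

```shell
# Block until the NAS answers on the iSCSI port (3260) before
# starting VMs. PROBE defaults to `nc -z host port`; override it
# for a dry run or where nc is not installed.
wait_for_target() {
    probe="${PROBE:-nc -z}"
    host="$1"
    tries="${2:-60}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if $probe "$host" 3260 2>/dev/null; then
            echo "target reachable"
            return 0
        fi
        i=$((i + 1))
        sleep 2
    done
    echo "target unreachable"
    return 1
}
```

A startup script might call `wait_for_target nas.example && qm start 101` (placeholder hostname and VMID) so the VM only starts once the target port answers.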

Timing considerations for safe operations

Allow sufficient time for each phase: 30-60 seconds per VM for graceful shutdown, additional time for database VMs to commit transactions, 10-15 seconds after iSCSI logout for session cleanup, and 60-120 seconds after issuing the NAS shutdown command for all services to stop gracefully. Write-heavy workloads need additional cache flush time, and network latency adds to timeout durations. Rushing these steps significantly increases corruption risk.

Given the DS918+ hardware limitations, you have three options. First, continue using the DS918+ with its built-in 1GbE networking (125MB/s maximum per port): use link aggregation for multiple-client scenarios, add SSD caching via the M.2 slots to improve IOPS, and defer a NAS upgrade until 10GbE becomes essential. Second, if 10GbE connectivity is needed immediately, upgrade to a compatible Synology model such as the DS1522+, DS923+, or DS1621+ with proper PCIe expansion capabilities. Third, as a risky and unrecommended workaround, use a USB-to-2.5GbE adapter with community drivers, accepting a voided warranty, limited support, and potential instability.

The DS918+ remains an excellent 4-bay NAS for 1GbE environments, media transcoding, BTRFS with snapshots, and general file serving. It is simply not suitable for 10GbE networking requirements, direct 4K video editing over network, or high-bandwidth single-client workflows.

Conclusion and final recommendations

While the E10G18-T1 installation cannot proceed on the DS918+, the comprehensive shutdown procedures outlined here remain essential knowledge for any maintenance scenario involving your NAS with iSCSI-connected VMs. The core principle—always disconnect initiators before shutting down the target—prevents data corruption and ensures smooth restarts. If 10GbE connectivity is critical for your workflow, upgrading to a compatible Synology model is the only officially supported path forward. The DS918+ will continue serving reliably in 1GbE environments, and proper shutdown procedures will protect your data integrity during all future maintenance operations.

