Tactical Capacitor

Building an Enterprise-Level Home Lab (Without the Enterprise Rack Gear)

Updated: Mar 18, 2022



Over the decades, enthusiastic and motivated IT professionals from all over have gone to great lengths to recreate at home the environments and features they're surrounded by at work. Until recently, though, that meant you'd have to either:


Search Craigslist or other secondhand sources to find old, full-size rack gear for cheap:

-This does get you true enterprise functions, but you're doing it on aged gear with WAY more capacity than a home network needs (e.g. multi-socket servers, 48+ port PoE switches, etc.). With the age, you're getting out-of-date software and firmware that's either nearly impossible to find updates for or completely unsupported by (and incompatible with) the rest of the industry at this point. Then, worst of all, you have the size, space, power, and heat issues. Rack gear is heavy, power hungry, and hot running, and it will make all but the most sound-proofed homes sound like there's a 747 jumbo jet roaring to life 24/7.


OR


Go out and jury-rig systems you already have at home, like repurposing old laptops or desktops as virtualization hosts, running nested hypervisors *within* hypervisors for clustering, and otherwise achieving the concept of 'enterprise level' without the 'enterprise deployment' steps:

-This is fun, but when you really want to try some cool new stuff and your desktop is out of RAM, or you want to game on your main rig and that means shutting down your VMs to free up CPU and memory (or just shutting the PC down entirely), you can say goodbye to that whole virtual enterprise environment. You'll also run into compatibility issues for some software and functionality when your hardware isn't up to snuff on enterprise features (things like TPMs, Secure Boot options, out-of-band management, and chipset support, plus, in really niche but relevant cases, ECC memory or instruction-set and feature-set support that your consumer CPU might not have).


But now, in the early 2020s, there's finally a better way! We can get the best of all worlds, and I'm being serious: this isn't some cop-out 'we can spin stuff up in the cloud!' approach. This is all physical, affordable, new equipment available for use right now.


HARDWARE:


Dell T140 Tower Server:

CPU: Xeon 6-core/12-thread E-2246G

RAM: 48GB (expandable to 64GB) DDR4 2666MHz

Drives:

(vSAN cache tier): Kingston 240GB M.2 SATA (on a Dell BOSS card)

(vSAN capacity tier): Crucial 500GB M.2 NVMe

Reasoning: Traditionally you'd put the NVMe drive in the cache tier, but on a home network I won't see its extra throughput even over 2.5GbE connections, so I put the higher-capacity 500GB drives in the capacity tier instead (quick throughput math at the end of the hardware list).

Networking: Built-in 2x 1GbE ports, PCIe dual-port 2.5GbE NIC

Video: On-board motherboard video output, and E-2246G integrated graphics


HPE ML30 Gen10 Tower Server:

CPU: Xeon 6-core/12-thread E-2246G

RAM: 48GB (expandable to 64GB) DDR4 2666MHz

Drives:

(vSAN cache tier): Kingston 240GB M.2 SATA

(vSAN capacity tier): Crucial 500GB M.2 NVMe

Reasoning: Same as the T140 above; the higher-capacity NVMe drive goes in the capacity tier, since the network would bottleneck it as cache anyway.

Networking: Built-in 2x 1GbE ports, PCIe dual-port 2.5GbE NIC

Video: On-board motherboard video output, and E-2246G integrated graphics


Asus Custom C246-based Tower Server (built from spare/unused hardware from the other two servers):

CPU: Xeon 4-core/4-thread E-2224

RAM: 32GB (expandable to 64GB) DDR4 2666MHz

Drives:

(vSAN cache tier): Kingston 240GB M.2 SATA

(vSAN capacity tier): Crucial 500GB M.2 NVMe

Reasoning: Same as the T140 above; the higher-capacity NVMe drive goes in the capacity tier, since the network would bottleneck it as cache anyway.

Networking: Built-in 2x 1GbE ports, PCIe dual-port 2.5GbE NIC

Video: Cheap old spare ATI PCIe GPU with VGA output (the E-2224 has no integrated graphics, and the motherboard has video output connectors but no onboard video chip to drive them)
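
To put rough numbers on the cache-tier reasoning above: a 2.5GbE link tops out around 312 MB/s before overhead, which is less than even a SATA SSD can push, so the NVMe drives' extra speed would be wasted as cache anyway. Here's a quick sketch of that back-of-the-napkin math; the drive figures are typical ballpark numbers, not benchmarks of these exact Kingston/Crucial models:

```python
# Back-of-the-napkin check: does the network or the drive bottleneck first?
# Drive figures are rough, typical sequential numbers, not measured values.

def gbe_to_mb_s(gigabits_per_sec: float) -> float:
    """Convert a link speed in Gb/s to MB/s (ignoring protocol overhead)."""
    return gigabits_per_sec * 1000 / 8

link_mb_s = gbe_to_mb_s(2.5)   # ~312 MB/s for a 2.5GbE connection
sata_ssd_mb_s = 500            # typical SATA SSD sequential throughput
nvme_ssd_mb_s = 2000           # typical entry-level NVMe sequential throughput

print(f"2.5GbE link: ~{link_mb_s:.0f} MB/s")
print("SATA cache bottleneck:", "network" if link_mb_s < sata_ssd_mb_s else "drive")
print("NVMe cache bottleneck:", "network" if link_mb_s < nvme_ssd_mb_s else "drive")
```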



DESIGN:

(Architecture diagrams):



Out of Band Management:

-Dell iDRAC9: x.x.1.201

-HPE iLO5: x.x.1.202

-Lifecycle Managers and review on each

-theoretical iKVM
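
Since both BMCs speak Redfish (iDRAC9 and iLO5 each expose it), a small script can poll them the same way. This is just a hypothetical sketch; the addresses mirror the redacted ones above and the credentials are placeholders, not anything from the actual lab:

```python
# Hypothetical sketch: poll both BMCs over Redfish for model and power state.
# IPs and credentials are placeholders; iDRAC9 and iLO5 both expose /redfish/v1.
import requests
from requests.auth import HTTPBasicAuth
import urllib3

urllib3.disable_warnings()  # home-lab BMCs with self-signed certs

BMCS = {
    "Dell iDRAC9": "https://x.x.1.201",   # placeholder, per the addressing above
    "HPE iLO5":    "https://x.x.1.202",
}
AUTH = HTTPBasicAuth("admin", "password")  # placeholder credentials

for name, base in BMCS.items():
    # The Systems collection lists the server(s) this BMC manages
    systems = requests.get(f"{base}/redfish/v1/Systems",
                           auth=AUTH, verify=False, timeout=10).json()
    for member in systems.get("Members", []):
        system = requests.get(base + member["@odata.id"],
                              auth=AUTH, verify=False, timeout=10).json()
        print(f"{name}: {system.get('Model')} is {system.get('PowerState')}")
```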


Virtualization (VMware vSphere 7):

Clusters:

  • 3x Coffee Lake Intel Xeon-based x86 SMB towers

  • 3x 8GB Raspberry Pi 4s running the experimental Arm-based ESXi Fling, with PoE HATs for a single-cable connection per host

x86-based Xeon tower cluster:

  • Single vCenter Instance with all 6 servers in vCenter inventory

  • vSAN enabled and licensed

  • vMotion and DRS enabled

  • Redundant 1GbE management connections

  • Redundant 2.5GbE vSAN/vMotion connections

  • SRM enabled as a proof of concept, with the RPi 4s as the secondary site

  • vRealize Operations

Raspberry Pi cluster:

  • Only used to test proof of concept performance and deployment capability for simple ARM-based operating system options like Ubuntu

  • Boot ESXi from the on-board SD card

  • Load data from independent USB-attached SD cards on each host. I originally tried to implement vSAN, but it's documented as non-functional on the current Fling (as of 2021); I still tried as hard as I could to get it running anyway, with multiple SD cards (cache and capacity volumes) per host.
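
For sanity-checking the x86 cluster config, something like the pyVmomi sketch below can confirm DRS and vSAN are actually enabled per cluster. The vCenter hostname and credentials are placeholders, and this is an illustrative sketch rather than anything from the actual build:

```python
# Minimal pyVmomi sketch: report host count, DRS, and vSAN status per cluster.
# vCenter address and credentials below are placeholders for this lab.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # home lab, self-signed vCenter cert
si = SmartConnect(host="vcenter.camski.lab",            # placeholder hostname
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Walk every ClusterComputeResource in the inventory
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    cfg = cluster.configurationEx
    print(f"{cluster.name}: {len(cluster.host)} hosts, "
          f"DRS={'on' if cfg.drsConfig.enabled else 'off'}, "
          f"vSAN={'on' if cfg.vsanConfigInfo.enabled else 'off'}")

view.Destroy()
Disconnect(si)
```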


Domain Controller:

Server 2019 Intel NUC- Native DC role managing all computer and user objects in the camski (lab) domain. Home machines not associated with the lab network segment are not AD-integrated. I've done this many times over the years, and eventually, when you want to mess around with, restore, or rebuild something specifically for personal/home use, you don't want the hassle of having to authenticate (and possibly contaminate) everything with Active Directory credentials; it's better to keep the home network independent and make the lab network the AD-managed one. The biggest issue is trying to remote into an AD-managed PC and troubleshooting authentication hiccups, or even worse, trying to log in with your AD account and realizing your DC is down, upgrading, or otherwise offline.


DNS:

Server 2019 Intel NUC- DNS role managing DNS resolution for all devices and IPs (home and lab) for clean-looking and universal hostnames from a centrally managed server

DHCP:

Server 2019 Intel NUC- DHCP role managing the IP pool on the lab range only (192.168.2.0/24)
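
For illustration, here's how the lab /24 could break down into a static range and a DHCP pool using nothing but Python's standard library. The actual split below (.1-.49 static, .50-.200 DHCP) is a made-up example; the post only specifies the /24 itself:

```python
# Sketch of the lab scope: the static/DHCP split here is a hypothetical
# illustration, not the actual reservation plan from the lab.
import ipaddress

lab_net = ipaddress.ip_network("192.168.2.0/24")
hosts = list(lab_net.hosts())        # 192.168.2.1 .. 192.168.2.254

static_range = hosts[:49]            # e.g. .1-.49 reserved for infrastructure
dhcp_pool = hosts[49:200]            # e.g. .50-.200 handed out by the NUC

print(f"Lab network:  {lab_net} ({lab_net.num_addresses} addresses)")
print(f"Static range: {static_range[0]} - {static_range[-1]}")
print(f"DHCP pool:    {dhcp_pool[0]} - {dhcp_pool[-1]}")
```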


Firewall:

Server 2019 Intel NUC- Native Windows Firewall


SQL:

Server 2019 Intel NUC, with static TCP ports allowed through the firewall for remote access from domain servers such as the external Veeam ONE server
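
A quick way to verify those static firewall rules from a domain server is a plain TCP connect test, sketched below. The hostname is a placeholder, and 1433 is just the SQL Server default, assumed here since the actual port isn't listed:

```python
# Verify the static SQL port is reachable through the Windows Firewall
# from another domain server (e.g. the Veeam ONE box).
# Hostname is a placeholder; 1433 is the assumed SQL Server default port.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("SQL reachable:", port_open("sql.camski.lab", 1433))
```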


Networking:

-VLANs:

-Subnets:

-Switches: (PoE, Cisco SG350, managed 8-port TP-Link, 2x 5-port 2.5GbE TP-Link)


Backup:

Veeam Availability Suite 10a

Veeam ONE


Certificate Authority:

Server 2019 Intel NUC- Native Windows CA Role

NAS:

Server 2019 Intel NUC, File and Storage Services with deduplication enabled on an NFS-shared 1TB M.2 NVMe drive over a 2.5GbE connection


Secure Boot Functions:

On-board TPMs on all hosts


Physical/DR Redundancy:

-1-host fault tolerance (compute, storage, and memory; quick capacity math below)

-Redundant networking on all hosts, with each of the two connections going to a separate physical switch (making the hosts easier to physically move and tolerant of losing power to one switch)

-1500VA AVR UPS from CyberPower
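
Worth noting what 1-host fault tolerance costs in storage: with vSAN's default FTT=1 mirroring, every object is written twice, so the three 500GB capacity drives yield roughly half their raw total as usable space. A rough sketch of that math (ignoring slack-space and metadata overhead):

```python
# Rough usable-capacity estimate for the 3-node vSAN cluster with FTT=1.
# RAID-1 mirroring stores every object twice, so raw capacity roughly halves;
# slack-space and metadata overhead are ignored here for simplicity.
capacity_drives_gb = [500, 500, 500]   # one Crucial NVMe per host (capacity tier)

raw_gb = sum(capacity_drives_gb)
usable_gb = raw_gb / 2                 # FTT=1 mirror -> 2x capacity consumption

print(f"Raw capacity tier: {raw_gb} GB")
print(f"Usable with FTT=1: ~{usable_gb:.0f} GB (before slack/metadata overhead)")
```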


Network Management:

SolarWinds Orion- used in the lab for my actual profession/job (the company used Orion until it was deauthorized when the vulnerabilities came out, to my excitement, because I have ALWAYS hated SolarWinds)

Attempted Pulseway (not recommended; no easy setup, requires a CA for the base install, and if you use their cloud instance you're transmitting possibly sensitive company info to their cloud servers)

PRTG- used after the SolarWinds exploits were found


Monitoring:

vRealize Operations

Veeam ONE


Azure Service Integrations:

None as of 03/2022 (no budget for continuous use services, new job does not have any Azure agreements for labs)


Azure Host/Network Integrations:

None as of 03/2022 (no budget for continuous use services, new job does not have any Azure agreements for labs)




