If you have read any of my previous DIY server guides, you might have noticed that I am a fan of VMware’s ESXi product. I know there are other alternatives, such as Unraid and Proxmox, but ESXi is the solution I know best and is the one that I encounter most often in my daily professional life. I am also a big fan of pfSense, and some years ago, I put two of my favorite software solutions together and built myself a (very) DIY home router, running ESXi as a hypervisor with pfSense as a VM, and it has worked very well ever since.
Recently, Rohit has been covering some Topton passive firewall systems, and the Intel Core i7 Fanless units caught my eye. The hardware in my home ESXi/router system was getting old, and I saw an opportunity for an upgrade, so I requested one of the units. Patrick sent it over, and here we are.
In this article, I am going to cover a few topics. First, I will explain why I set my home router up in this fashion. Next up, I am going to discuss my reasons for upgrading to the new i7 unit. Lastly, I will cover the actual upgrade process I went through to move from my old system to the new one. Let us get started!
VMware ESXi + pfSense?
I was running pfSense long before I put it onto an ESXi host, but I wanted my router to do more for me. The original impetus was Pi-hole. If you are unaware, Pi-hole is a network-wide, DNS-based blocker for ads, telemetry, and malware domains. The name comes from the hardware it is traditionally installed on, a Raspberry Pi, but I did not want another device that I had to keep powered on and plugged into my network. Besides, Pi-hole operates as a DNS server on your local network, and to my mind, that was exactly the kind of thing my router should do. Later on, Patrick introduced me to Guacamole, a clientless remote desktop gateway that just seemed super cool, and again I was struck with the thought that this was something I wanted my router to do. And so the idea of rebuilding my router as an ESXi host was born, with a VM for each function I wanted.
My first solution was one of the most DIY things I have ever done. In addition to assembling the system from inexpensive second-hand hardware and stuff I had lying around, I was also going through a burst of COVID-isolation-inspired dreams of learning woodworking, so I built the case myself.
It is not exactly a work of art, but it got the job done. The internals were just as basic as the exterior:
My humble ESXi system was based on an ASUS H110T/CSM motherboard. This little motherboard has a header that accepts a DC 12V/19V power supply, and I happened to have an old laptop power brick hanging around. Combined with a second-hand Core i5-6500T CPU and 32GB of laptop RAM, I had myself a system. ESXi was installed onto a 16GB M.2 SSD salvaged from a Chromebook, with an old Samsung 850 Pro 250GB SSD in there to hold the VMs. All of this was shoved into the little wooden box.
It worked great and gave me zero problems. Over time, I found even more advantages to the ESXi router. For example, when pfSense 2.5.x came out, I could install it alongside my existing 2.4.x installation and easily swap between the two while I got everything in 2.5.x configured correctly. More recently, when 2.6.0 came out, I could upgrade with confidence because I took a snapshot of my 2.5.x VM before pressing the upgrade button. If things had gone poorly, I could simply roll back to the snapshot.
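For anyone who prefers the command line over the web UI, the same pre-upgrade snapshot can be taken from the ESXi shell with `vim-cmd`. This is only a sketch: the VM ID (12) and snapshot names below are hypothetical placeholders for my pfSense VM, and the script defaults to a dry run that just prints the commands so you can sanity-check them before running anything for real.

```shell
#!/bin/sh
# Sketch of taking a pre-upgrade VM snapshot from the ESXi shell, the same
# thing the web UI's snapshot button does. The VM ID (12) and the names are
# hypothetical; find the real ID with `vim-cmd vmsvc/getallvms`.
# DRY_RUN=1 (the default) prints each command instead of executing it.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# List registered VMs and their IDs
run vim-cmd vmsvc/getallvms
# snapshot.create <vmid> <name> <description> <includeMemory> <quiesced>
run vim-cmd vmsvc/snapshot.create 12 "pre-2.6.0" "Before pfSense upgrade" 0 0
```

Rolling back is then a matter of reverting to that snapshot from the same `vim-cmd vmsvc/snapshot` namespace, or simply from the web UI.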
Lastly, a few more VMs have found a home on my little server. I run Home Assistant for all the smart home things in my life, and it happily hums along on this little system. I also fire up test VMs from time to time, and this gives me a good place to do it.
Time for an upgrade
Alas, the time comes for all things. While my system was working fine, there were a few problems. First, one of the onboard NICs on the H110T/CSM is a Realtek model, and VMware dropped the legacy vmklinux driver layer in ESXi 7.0, so the Realtek NIC drivers cannot be installed on 7.0 or higher.
Second, my house is beginning to transition to 2.5GbE networking, and everything in my existing system was 1GbE. I also had an idea to run my Plex transcoder on my little ESXi host via iGPU passthrough, and the i5-6500T’s Quick Sync encoder is not particularly stellar.
Additionally, I wanted some more and faster storage for my host, and there just was not a great way to do that on the H110T/CSM.
Lastly, I just saw the opportunity with the little fanless i7 units. On some of the i7-1165G7 benchmark graphs, that CPU was significantly leading my old i5-6500T while presumably consuming less power.
Once Patrick sent me the device, it was time to get to work.
Installing hardware and ESXi
Patrick sent me the Topton unit without any storage or RAM installed, so the first task was installing those. I had 16GB of spare DDR4 hanging around to use temporarily; once the new host goes into full-time service, I will move over the 32GB of compatible memory from my old system. For the SSD, this system gave me the option of an NVMe drive, so I installed a WD Red SN700 1TB. In my review, I found the SN700 to be a very nice drive with the potential to run a bit hot under sustained load. Thankfully, my home router is not a heavy-load environment, so I do not expect any problems.
Once my RAM and SSD were installed, it was time to install ESXi. I am not going to cover the actual ESXi installation process as it is not particularly complicated, but I did want to note that the Intel i225-V NICs on the Topton need an additional driver integrated into the ESXi installation to be supported. That driver can be found here, and once I had it on my installation thumb drive, all went normally.
With ESXi installed, I could log in via the web interface. From there, the first things I always do are configure NTP, enable the Autostart feature, and apply my licensing. I own a paid ESXi host license, which will come into play when I use Veeam, but for most purposes, a free ESXi license would work fine here.
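The Autostart piece in particular can also be done from the ESXi shell, which is handy if you script your host setup. Again, this is just a sketch, not exactly what I did: the VM ID (12) and the delay values are hypothetical, and the script defaults to printing the commands rather than running them.

```shell
#!/bin/sh
# Sketch of enabling VM autostart from the ESXi shell (equivalent to the
# Autostart feature in the web UI). The VM ID (12) and timings are
# hypothetical examples. DRY_RUN=1 (the default) only prints the commands.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Turn the autostart manager on for the whole host...
run vim-cmd hostsvc/autostartmanager/enable_autostart true
# ...then register a VM with it. Arguments, in order:
# <vmid> <startAction> <startDelay> <startOrder> <stopAction> <stopDelay> <waitForHeartbeat>
run vim-cmd hostsvc/autostartmanager/update_autostartentry 12 powerOn 120 1 guestShutdown 120 systemDefault
```

Giving pfSense the lowest start order means the router comes up before the VMs that depend on it, like Pi-hole.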
Next up was to configure my networking.
ESXi happily saw all six of the network interfaces on my system. My plan was for vmnic0 to act as the uplink to my home LAN and for vmnic1 to be the physical WAN port where my internet connection plugs in. Doing that required defining a new virtual switch.
I named my new virtual switch vSwitch-WAN and assigned vmnic1 to it. I then swapped over to the Port groups tab, and set up two port groups.
I renamed the original “VM Network” port group associated with vSwitch0 to LAN, and I created a new port group associated with vSwitch-WAN, which I called WAN. Now, when I get my pfSense VM running, I can assign it two network cards, one on the LAN and one on the WAN.
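The same virtual switch and port group setup can be scripted with esxcli instead of clicking through the web UI. A couple of caveats on this sketch: the vSwitch, port group, and NIC names match my setup and will differ on yours, esxcli cannot rename an existing port group (so the sketch adds a LAN port group rather than renaming “VM Network” as I did in the UI), and the script defaults to a dry run that only prints the commands.

```shell
#!/bin/sh
# Sketch of the vSwitch/port group setup via esxcli instead of the web UI.
# Names (vSwitch-WAN, vmnic1, WAN, LAN) match my setup; adjust for yours.
# Note: esxcli adds a LAN port group here rather than renaming "VM Network",
# since renaming is a UI operation. DRY_RUN=1 (default) only prints commands.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# Create the WAN virtual switch and attach the physical WAN NIC to it
run esxcli network vswitch standard add --vswitch-name=vSwitch-WAN
run esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch-WAN
# Create the two port groups the pfSense VM's virtual NICs will connect to
run esxcli network vswitch standard portgroup add --portgroup-name=WAN --vswitch-name=vSwitch-WAN
run esxcli network vswitch standard portgroup add --portgroup-name=LAN --vswitch-name=vSwitch0
```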
Migrating with Veeam
Since I am moving from an existing ESXi server to a new one, and I have a paid ESXi license, I had a few options for migrating VMs between hosts. I could have set up a vCenter server and used VMware’s built-in vMotion, but honestly, that is a lot of trouble for a one-time move.
The better option for me was to use Veeam. I already run Veeam on a server at my house to back up the VMs running on my ESXi host, so it was a relatively simple thing to set up a replication job between the old and new host.
I set up the new rep job, selected all my VMs, and fired it off.
A little while later, my new ESXi host had a copy of all of the relevant VMs from the old host. All that was left to do was to power down the existing ESXi host, move the RAM to the new host, plug the new host in and boot it up.
Once the new host was powered on, I had to edit each of the VMs and choose the network that each virtual NIC would connect to. In my picture here, you can see my pfSense VM, which has two NICs.
And with that, I was done! I powered all my VMs up, starting with pfSense and Pi-hole, and went to make sure everything was working.
My internet was up and working just fine. All of the other VMs booted up just as easily, and I have a ton more storage and performance available to play around with. I have started working on the iGPU passthrough, which is not something I have ever done before, and I have high hopes to offload my Plex transcoding work to this little box.
Keeping it Cool
Now, STH readers who saw Rohit’s article may remember that there were two models of the Topton i7-1165G7 unit, one of which performed well thermally and one that did not. I received one of the units that did not, and it gets uncomfortably hot to the touch. My solution to this problem cost me less than $20, though it did add a bit of “DIY jank” back to my setup.
As you can see from the picture, I simply put a 140mm fan blowing straight down onto the unit, with fan grilles attached to protect against fingers and wires getting caught. The fan is powered via a USB-to-PWM adapter that I already had on hand from a different project. It runs at a very low RPM because it is a 12V fan operating on USB’s 5V output, which has the benefit of making it dead silent. Eventually, I will properly attach the fan via some kind of 3D-printed scaffolding, but for now, it is just resting atop the system. Thanks to this “solution,” my thermals dropped nearly 30°C, and the whole unit is nice and cool.
Unfortunately, this also means that there is now a fan, albeit one that is quiet.
Final thoughts
Overall, I am very happy with how the process went and with the end result. Thanks to the Veeam replication process, the total downtime on my home internet was less than 20 minutes. The new system takes up less physical space and uses less power than the old Core i5-6500T system. The trade-off, perhaps, is that it looks less cool than my old wooden box. Most importantly for me, the new system is significantly faster, has more storage, is equipped with better network uplinks, and allows me to move to a much newer version of ESXi. Armed with this new capacity, I can think of a bunch of different projects to test, starting with the iGPU passthrough testing. Once more 2.5GbE devices come to my house, I will also be prepared for that.
Setting up your router on a virtualization host is not for everyone, but hopefully, my journey was at least interesting to read about. I will be around to answer questions in the comments or forums if anyone has any, as well as listening for suggestions on what other interesting services can be added to my router host!