My research group ordered a shiny new server with some grant money, for crunching through models of security protocols with our model checker. We requested a machine suitable for lots of heavy lifting, with many cores and lots of memory, and eventually ordered a Dell PowerEdge R630 with 48 cores and 512GB of memory.
The initial plan was to run a hypervisor on the server, with one big VM using ~75% of the server's resources, and a couple of smaller VMs on the side for other purposes, e.g. a file-server, monitoring, and the like. We were strongly advised to install ESXi for this purpose, and did so, only to discover after installing it that on the free licence version of ESXi each VM is limited to a maximum of 8 vCPUs: not so useful. A licence to do what we wanted was only about £400, but the grant money had already been spent, so that wasn't going to happen.
Enter XenServer. This seems to be pretty well respected as a hypervisor, and unlikely to have any artificial software restrictions as it's open source. In spite of taking guidance from various online walkthroughs, installing it from a USB stick simply didn't work on our server, so we went for an old-school bootable DVD, which worked perfectly.
Installation was pretty painless once I eventually found a DVD burner and some blank DVDs; the only downside to XenServer seems to be the lack of a Linux/Mac client for the hypervisor. That said, it was easy enough to spin up a local Windows VM on my Mac to access XenCenter, through which you can configure new VMs and all the routing configurations. Adding ISOs to the XenServer install wasn't as straightforward as I feel it ought to have been (and indeed not possible through XenCenter's GUI), but perfectly possible in the end. This guide by Simon Barnes was very helpful in that respect. (Mirrored here)
TL;DR: SSH into the XenServer host. Create a folder called `ISO_Storage` in `/var/run/sr-mount/<id-of-main-hypervisor's-storage>/` and `wget` any desired ISOs into this folder. Add this folder as an ISO Library to your hypervisor's management tool; the magic incantations are to be found in the above blog post. Note that just `wget`-ting the ISO into another location will be restricted to 4GB of total space, so you need to choose the right location in the file-system. Click 'Rescan' in XenCenter after downloading a new ISO.
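For reference, the whole sequence can be sketched as below. The storage uuid and the Ubuntu ISO URL are placeholders for whatever your setup actually uses; the `xe sr-create` flags are the standard ones for registering a local directory as an ISO library. The hypervisor-only commands are guarded so the script does nothing destructive elsewhere.

```shell
#!/bin/sh
# Sketch of the ISO-library incantations, run over SSH on the XenServer host.
# SR_UUID is a placeholder -- find the real one with `xe sr-list`.
SR_UUID="${SR_UUID:-replace-with-local-storage-uuid}"
ISO_DIR="/var/run/sr-mount/${SR_UUID}/ISO_Storage"

if command -v xe >/dev/null 2>&1; then   # these only make sense on the host
    mkdir -p "$ISO_DIR"
    # fetch whichever ISOs you need into the big storage mount:
    wget -P "$ISO_DIR" http://releases.ubuntu.com/16.04/ubuntu-16.04-server-amd64.iso
    # register the folder as an ISO library (legacy_mode = plain directory):
    xe sr-create name-label="ISO Storage" type=iso content-type=iso \
        device-config:location="$ISO_DIR" device-config:legacy_mode=true
fi
echo "$ISO_DIR"
```

After this, the new library appears in XenCenter; hit 'Rescan' whenever you add another ISO.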
Now we can fire up Virtual Machines, installing their OS from a local ISO image. I configured one VM to be our main heavy lifting VM (so we can hit things with a Tamarin Hammer™), but found that in the Windows XenCenter software, the maximum number of cores per VM is 16. This is easily rectified: do the normal wizard-like configuration in XenCenter, and then once it's configured (but before installing an OS) you can change the number of cores through the Linux interface either in the main hypervisor's console, or over SSH, as described in the following useful guide by Jan Sipke van der Veen.
TL;DR: Create a VM with the wizard and, e.g., 16 cores. Make sure the VM is halted. Run `xe vm-list` to find your VM's `<uuid>`. Then run `xe vm-param-set VCPUs-max=32 uuid=[replace_with_uuid]` followed by `xe vm-param-set VCPUs-at-startup=32 uuid=[replace_with_uuid]`.
Start the VM and you're good to go.
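Wrapped up as a script, the process might look like this. The VM name and core count are examples, it must run on the XenServer host (or over SSH to it), and the VM must be halted first; the `xe` calls are guarded so the sketch is harmless elsewhere.

```shell
#!/bin/sh
# Hypothetical helper: raise a halted VM's vCPU count past XenCenter's cap.
VM_NAME="${1:-heavy-lifting-vm}"   # example VM name
CORES="${2:-32}"                   # example core count

if command -v xe >/dev/null 2>&1; then   # only meaningful on the host
    UUID=$(xe vm-list name-label="$VM_NAME" --minimal)
    xe vm-param-set uuid="$UUID" VCPUs-max="$CORES"         # raise the cap first
    xe vm-param-set uuid="$UUID" VCPUs-at-startup="$CORES"  # must be <= VCPUs-max
fi
echo "requested $CORES vCPUs for $VM_NAME"
```

Note the ordering: `VCPUs-at-startup` may not exceed `VCPUs-max`, so the cap has to be raised first.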
There's a useful XenServer cheatsheet to be found here as well.
Now came the fun bit: networking. It was suggested that we set up one VM as a firewall and router (with a public IP connected to NIC1), and create a separate private network for the VMs, at least partially separated from the internet; NAT then bridges that gap externally, but only on specific ports. We have a separate IP address for accessing the management console, connected to NIC0. This, to my mind, is a very sensible configuration.
The recommendation we were given was to use pfSense, as this is a well known and respected firewall operating system. Unfortunately, for some unknown reason, pfSense and XenServer don't currently seem to play nicely together, or at least they certainly didn't in our case. We configured the network as follows:
- Hypervisor level: configure XenServer such that pfSense was connected to both NIC1 and 'Private Network', and the VMs underneath only connected to 'Private Network'
- Firewall/pfSense level: WAN connected to NIC1, LAN connected to 'Private Network', and DNS, DHCP, and external IP address set up correctly.
We then had some very odd behaviour:
- Pinging external IP addresses and e.g. `google.com` worked perfectly from the pfSense box, as did `curl -v google.com`.
- Pinging external IP addresses (e.g. `8.8.8.8`) from any of the VMs worked absolutely fine (sub-5ms), but pinging `google.com` timed out completely if you set the VM's DNS servers to external ones (e.g. our department's DNS servers, or Google's `8.8.8.8` and `8.8.4.4`), yet worked fine if you set the VM's DNS server to the pfSense box, in this case `192.168.1.254`. I verified that nothing was actively blocking DNS traffic on port 53.
- Attempting to browse to (or `curl`) `google.com` from within a VM failed whether you went direct to the URL or to the IP address returned by the name-servers.
- Setting up a proxy on the pfSense box (*shudder*) and then configuring the VM's web-browsers to use this proxy worked fine, but was such an ugly, crazy hack that I vetoed it, as this cured the symptoms, not the underlying problem.
- Looking at `traceroute`/`tracepath`, `ping`, and `nslookup` led us to conclude that, in spite of the correct connections existing, packets were timing out somewhere between the VMs and their eventual destination, perhaps inside pfSense (in spite of it being under minimal load).
- Some comments from other users with the same problem suggested it might be related to how checksum offloading is handled between pfSense and Xen's virtual NICs, but I never found conclusive evidence for or against this before giving up on this route.
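For anyone reproducing this diagnosis, the checks above boil down to something like the following. Here `8.8.8.8` stands in for any external IP, and `192.168.1.254` is the pfSense box's LAN address; the commands sit behind an environment-variable guard so the script is safe to source anywhere.

```shell
#!/bin/sh
# The checks that narrowed things down, roughly as run from inside a VM.
PFSENSE="192.168.1.254"   # the pfSense box's LAN address

if [ "${RUN_NET_CHECKS:-0}" = "1" ]; then
    ping -c 3 8.8.8.8                  # raw IP reachability: worked (sub-5ms)
    nslookup google.com "$PFSENSE"     # resolution via the pfSense box: worked
    nslookup google.com 8.8.8.8        # resolution via external DNS: timed out
    traceroute -n google.com           # where exactly do packets die?
fi
```

Run with `RUN_NET_CHECKS=1` from inside a VM; the asymmetry between the second and third lookups was what pointed the finger at pfSense's handling of forwarded DNS.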
None of the suggested fixes in the above links worked, so my one personal recommendation from this experience is: given the poor performance that I and others seem to have experienced, do not use pfSense with XenServer. This is a real shame, as I otherwise really liked pfSense's setup, interfaces, and user-friendliness.
Looking into alternatives to pfSense produced a range of options; in the end I tried IPFire, and thanks to its easy installation and successful configuration, stuck with it. (Sadly I've just noticed they don't have a valid TLS/SSL certificate for their site. Logged as a bug.)
Installing and configuring IPFire was a dream: simply set the WAN side as RED and the LAN side as GREEN, then turn on DHCP; this was sufficient to give the VMs internet access. Enabling port-forwarding using NAT (in our case to SSH into any of the VMs from outside the server, e.g. external port 5555 -> internal port 22 on a specific LAN IP) was exceptionally easy. Once these rules were configured and working, it was just a matter of adding a rule blocking all other incoming traffic through the firewall.
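As a concrete (hypothetical) example of using such a NAT rule, with `server.example.ac.uk` standing in for the firewall's public address:

```shell
#!/bin/sh
# Placeholder names: server.example.ac.uk is the firewall's public address,
# and external port 5555 is NAT-forwarded to port 22 on one VM's LAN IP.
EXT_HOST="server.example.ac.uk"
EXT_PORT=5555
# From any machine outside the server, this command lands on that VM's sshd:
echo "ssh -p ${EXT_PORT} user@${EXT_HOST}"
```

One forwarded port per VM keeps things simple, and the final deny-all rule means nothing else gets through.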
Having been given a second IP address for the VMs' private network via the firewall (rather than for hypervisor management), I mistakenly 'added' this IP address to the hypervisor in XenCenter. Doing this is not only unnecessary, it means the hypervisor itself claims that IP, so SSHing to it lands you in the hypervisor's management console rather than at the firewall VM. Leaving the hypervisor unaware of this IP, and telling the (IPFire) firewall VM that it was responsible for it, worked perfectly.
With all the routing and firewall configuration complete, the next step was to set up some basic security precautions on the VMs: Bryan Kennedy's post on "My First 5 Minutes On A Server; Or, Essential Security for Linux Servers" is an excellent starting point.
Put your public key into `~/.ssh/authorized_keys`. Lock down SSH logins to public-key only in `/etc/ssh/sshd_config`. Enable automatic security updates. Install and configure `fail2ban`.
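For the SSH step specifically, the relevant `/etc/ssh/sshd_config` lines are roughly these (a minimal sketch; restart the SSH service afterwards, and confirm your key works before closing your current session):

```
# /etc/ssh/sshd_config -- key-based logins only
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
```

On Ubuntu, the `unattended-upgrades` and `fail2ban` packages then cover the automatic-updates and brute-force points respectively.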
Once this was all done, the VMs were ready for use by our group, so I set up personal user accounts, and let them loose. One shiny VM (of many) with 32 cores and 378GB of memory ready and waiting for that Tamarin Hammer!
Update 04/10/2016: Accessing the IPFire control panel is only possible from the GREEN side of the firewall. This can't be done from a normal computer, even within e.g. the university network, as the only machines inside the firewall are VMs on the main server. There are two ways to get around this. The first is to set up a new VM (of e.g. Ubuntu Desktop) in XenCenter, enter its console tab, fire up a browser, access the firewall's internal IP address on port 444, and log in with the credentials you set up. The alternative is to log in to a VM on the main server over SSH with X11 forwarding enabled, and then open `chromium-browser` or similar:
`ssh -X username@hostVMAddress chromium-browser` (then access e.g. `192.168.0.1:444` in the X11 GUI browser)
This then opens an X11 window locally, running a browser from the server, and you will be able to log in to the firewall on the GREEN side, assuming the VM is configured on the GREEN side of the firewall. You will probably have to use one of these two routes to configure the IPFire firewall in the first place.
Email me [martin.dehnel-wild@...OxfordCS] if you have any specific questions or queries about this setup. I've also noticed that XenServer's website doesn't have a valid TLS certificate. Reported.