Need test servers in office + ESX Noob + Help me + Recommendations

marley1

Alright guys, lately our office (a small IT company servicing home and small-business clients in the area) has been needing some test equipment. We would like to get a machine going where we can run multiple operating systems or servers.

What we would like to do is build an ESX server, since it's free, and have multiple servers/workstations in a virtual environment. The few times I have set up ESX I was a bit in over my head and could never get the networking portion working correctly. The system I tested on only had a single NIC (maybe that was the problem).

So anyway, I think what we need is a powerful machine that can handle multiple virtual OSes.

What would you all recommend for a server like that? It needs to be rackmountable (I've got 8U free).

I want to do this right, so if I need two machines (one for storage, one for ESX?) that could be done. I want to have enough storage space for each virtual OS so I can do whatever I want with it. Do I need multiple network cards so I can assign a NIC to each OS?

Also, say I have an SBS 03 guest OS running and then create another SBS 03 guest OS; I do not want them to interfere with each other. I want them both to be able to get online using the same WAN but stay separate, so I can have multiple DHCP servers and such running.

Anyway, I know it's a lot of stuff to ask. I can go off the ESX HCL, but I want some opinions from you guys.

I was thinking the machine should be either single quad-core or dual proc, with loads of memory, fast storage, and a good power supply.

This will not be production grade, meaning if a power supply fails it's not a big deal; I can replace it. I don't need redundancy like that, but I don't want cheap crap either.

Thanks!
 
I won't tell you what hardware to go with; even my test server here at home is a full-blown Dell PE1950 (dual quad-cores with 16GB memory, connected to an iSCSI SAN via 10GbE). But for the networking, you can get by with a single NIC. The VM guests aren't tied to a NIC; they are tied to vSwitches (virtual switches). A vSwitch can be tied to a physical NIC for access to the rest of your network, or it can be left without a physical NIC for guest-to-guest access only.

Say you do want to run 2 SBS03 servers. Create a guest with 2 NICs. Put the primary WAN NIC on the vSwitch that has WAN access. Then create a second vSwitch and put the LAN NIC on it. If you then want a second SBS03 server, put its WAN NIC on the same WAN vSwitch as the first server's, then create another vSwitch and put its LAN NIC on that.
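
If it helps to see that concretely, here's roughly what it looks like from the ESX service console with esxcfg-vswitch (the vSwitch numbers and port group names are just examples):

# see what you have first
esxcfg-nics -l
esxcfg-vswitch -l

# vSwitch0 normally exists already with your uplink; add a WAN port group to it
esxcfg-vswitch -A "WAN Network" vSwitch0

# internal-only vSwitch for the first SBS03's LAN (no -L means no physical uplink)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A "SBS03-A LAN" vSwitch1

# and another internal-only vSwitch for the second SBS03's LAN
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -A "SBS03-B LAN" vSwitch2

Then in each guest's settings you just point the WAN vNIC at "WAN Network" and the LAN vNIC at that guest's own LAN port group.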

The choice of a second server for storage is up to you. If you're building a whitebox ESXi server, though, you can put the storage in it for a lot less money.


Here's a screenshot so you can see vSwitches that are and aren't connected to physical NICs. The vSwitch not connected to a physical NIC is the private heartbeat network for a SQL cluster and an Exchange cluster (which aren't powered on right now).

[screenshot: esxnetworkpn3.jpg]
 
^ So the second vSwitch I create for the LAN: when I create guest workstations, I would add them to that vSwitch?
 
Yep, just put them on the LAN vSwitch of whichever SBS03 server you want them to use.

Also, axan has a great worklog of the whitebox ESX server he built; you can find it here.
 
Dell 2950/1950 and HP DL350/etc. work great as well.

All are cheap, fully supported, and will give you no trouble.

If you want to use IP storage, make sure you have a second NIC in the system. NFS or iSCSI works great; use OpenFiler for iSCSI. If you have 2 spare NICs, I can show you how to do load balancing with OpenFiler. Either way, you don't want storage traffic sharing a NIC with VM/service console traffic.
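
For the iSCSI route, the storage traffic gets its own vSwitch on the spare NIC with a VMkernel port on it; something along these lines from the service console (the IP, netmask, and names are made up for the example):

# dedicated storage vSwitch, uplinked to the spare NIC
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic1 vSwitch2
esxcfg-vswitch -A "iSCSI" vSwitch2

# VMkernel port on that port group for the storage traffic
esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 "iSCSI"

Keep in mind that on full ESX the software iSCSI initiator also wants a service console connection on the storage network.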

If you're using ESXi, you don't have a service console, btw; you only have VMkernel ports.
 
^ So the second vSwitch I create for the LAN: when I create guest workstations, I would add them to that vSwitch?

Most people tie the guests straight to a normal vSwitch with a physical connection. If it's internal-only, you won't be able to VMotion if you later add extra servers.
 
I was thinking the machine should be either single quad-core or dual proc, with loads of memory, fast storage, and a good power supply.

The thing to consider with single-CPU setups is that currently ESX is only sold as a dual-socket license. So unless you are going ESXi, you really ought to set up a dual-socket box; otherwise you are not fully utilizing the cash you just spent on ESX.
 
I wanna use whatever ESX is free =)

The thing I don't get, or am confused about: I have an extra public static IP from my ISP and plan to put this server behind that address (maybe behind a virtual pfSense box or something?).

so anyway

public IP > physical NIC > vSwitch0 - SBS 03 install - 192.168.1.10 - runs DHCP
                         > vSwitch1 - SBS 08 install - 192.168.1.10 - runs DHCP
                         > vSwitch0 - 2 XP Pro workstations - DHCP from the SBS server - 192.168.1.100 and 192.168.1.101

So if I have something like that, would I be able to give the virtual machines on different vSwitches the same internal IPs?

Now what happens if I want to open ports on the firewall so the SBS 03 server on vSwitch0 can receive email and such? How the hell do you do that?

The networking confuses me a bit.
 
Also, as far as hard drive configuration, I am thinking RAID with SATA drives. I do not need all that much storage; I primarily will just have a guest OS with some storage. So I am thinking I can do a bunch of fast SATA drives to save some money over SAS.

All I want to do is pull up a browser or RDP, hit the server, do some configuration, and mess around.

My office network already has an SBS box, and on the DMZ VLAN (client machines) I just have a NAS box for files and backups.
 
I wanna use whatever ESX is free =)

The thing I don't get, or am confused about: I have an extra public static IP from my ISP and plan to put this server behind that address (maybe behind a virtual pfSense box or something?).

so anyway

public IP > physical NIC > vSwitch0 - SBS 03 install - 192.168.1.10 - runs DHCP
                         > vSwitch1 - SBS 08 install - 192.168.1.10 - runs DHCP
                         > vSwitch0 - 2 XP Pro workstations - DHCP from the SBS server - 192.168.1.100 and 192.168.1.101

So if I have something like that, would I be able to give the virtual machines on different vSwitches the same internal IPs?

Now what happens if I want to open ports on the firewall so the SBS 03 server on vSwitch0 can receive email and such? How the hell do you do that?

The networking confuses me a bit.

Pretend virtual switches are physical switches; they function the same. You hook VM network cards to them and assign them an uplink to the real world as needed. The best way to look at it is the opposite of how you are now:

VM server 1 > vSwitch0 -> pNIC0
VM server 1 > vSwitch1

VM server 2 > vSwitch1

and so on. You can't easily assign a pNIC to more than one vSwitch.

I'd hook the physical NIC up to the pfSense server or the like, have that go to a public/DMZ vSwitch, and give the VM gateway server a VM NIC on each vSwitch. Everything else goes on the internal vSwitch.

This works fine unless you later add more servers and want to VMotion VMs.
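
A rough sketch of that layout in esxcfg-vswitch terms (the vSwitch and port group names here are mine, not gospel):

# public/DMZ vSwitch with the physical uplink on it
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "DMZ" vSwitch0

# internal vSwitch with no uplink at all
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A "Internal" vSwitch1

The pfSense VM gets one vNIC on "DMZ" (your public IP) and one on "Internal" (192.168.1.1 or whatever), and every other guest gets a single vNIC on "Internal" with the pfSense VM as its gateway.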
 
Can't easily assign, or can easily?

So all the virtual switches can tie into a single physical NIC and get online; the only thing I don't get is port forwarding.

But if I set this up, I should be able to open up ports to each VM server under different switches?

Should I just get one of those quad-port NICs and connect each port to the outside?

Sorry for the noobage; usually I have a firewall going to a single switch, which is just one network.
 
pNIC -> one vSwitch. One vSwitch may have more than one NIC for failover.

No port forwarding on the host: everything is forwarded by default, and each VM will have its own IP. The physical NIC, as odd as it sounds, is just a portal; it has no network address.

Just try it; it'll make more sense as you do it. It's not nearly as complex as you think it is.
 
So the ESX server will get an IP, let's just say 192.168.1.2.

In the firewall I use (I may just have a hardware firewall instead of a virtualized one; good idea?) I just open whatever ports I need (25, 4125, 443) to 192.168.1.2, and that opens up the ports for every VM server or every vSwitch??

And I think I keep reading posts wrong. Can I assign multiple vSwitches to a single physical NIC? Or do I need a physical NIC connected to the WAN for each vSwitch?

So what I am thinking:
ESX server - dual quad-core Xeon, 16GB, SATA RAID with a bunch of 640GB drives, and just the 2 onboard NICs; I will keep it on the last remaining static IP that I have.
 
The service console will get an IP. That's it. The VMs all get their own IPs. They access the network through the physical network port, which has no IP; as far as the network is concerned, it doesn't exist.

You'll need to open ports for the individual VMs as if they were physically connected to the network. You're thinking too much like a physical server; ESX is not one computer, it's a collection of virtual machines. Each machine operates 100% independently and has its own "network card", MAC address, etc. Each will need its own IP.

Each vSwitch can have more than one pNIC, but each pNIC cannot belong to more than one vSwitch.
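
For example, teaming two pNICs onto one vSwitch for failover is just a matter of linking both uplinks (the vmnic names are examples):

esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -l

Both NICs should then show up under that vSwitch's uplinks.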



ESX is not like VMware Server, Workstation, etc.: the VMs don't SHARE a network connection, except at the physical layer. Each acts 100% like a physical machine. Treat them as such :)
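
So on whatever firewall sits in front, you open ports to each VM's own IP exactly as if it were a physical box. pfSense does it in the GUI under Firewall > NAT; purely as an illustration, the equivalent rule on a Linux-based firewall would be a plain DNAT (the interface name and IP are made up):

# forward inbound SMTP arriving on the WAN interface to the SBS03 VM's own IP
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 -j DNAT --to-destination 192.168.1.10:25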
 
Why not get a Dell 2900 out of the outlet and just add the RAM and hard drives from elsewhere? The server is supported with ESX, and those hard drive trays are cheap.
 
I :heart: my 2900s. I have 4 of them at clients; solid as a rock. Trays direct from Dell were $15.xx as of 4 months ago.
 
A 2950 is coming our way =)

Going to order today: probably brand new, dual quad-core, 8GB, and 5 x 146GB SAS drives in RAID 5 should be enough.

How does virtualizing pfSense work?

Any other ideas?

I do not need a crapload of storage; these will pretty much be stock installs of the OS, just to run some testing on. Do I want to just run 2 x 146GB SAS in RAID 1 for VMware and then the rest SATA in RAID 5 for images?
 
You'll be happy with the 2950; it'll just work with ESXi.

pfSense releases VMware images now, but you'll find a lot of people telling you not to run a firewall in a VM.

I'd pick up 2 SAS drives and 4 large SATA drives. The 2 in RAID 1 don't need to be large for just the ESX install. Then I'd put all the VMs on the RAID 5 SATA drives. You'll get more space and still have decent speed.
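
If the SATA array shows up as its own LUN, creating the VM datastore on it is one command from the service console, or just use the Add Storage wizard in the VI Client, which does the same thing. The vmhba path below is a made-up example; check esxcfg-vmhbadevs for your real device:

# hypothetical device: partition 1 on the RAID 5 SATA LUN behind the PERC
vmkfstools -C vmfs3 -S SATA-VMs vmhba1:0:1:1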
 
You'll be happy with the 2950; it'll just work with ESXi.

pfSense releases VMware images now, but you'll find a lot of people telling you not to run a firewall in a VM.

I'd pick up 2 SAS drives and 4 large SATA drives. The 2 in RAID 1 don't need to be large for just the ESX install. Then I'd put all the VMs on the RAID 5 SATA drives. You'll get more space and still have decent speed.

I agree on the storage. The PERC card can do SATA RAID that is fully compatible.
 
I agree on the storage. The PERC card can do SATA RAID that is fully compatible.

It handles SATA fine, but they don't support SATA and SAS on the same RAID card. Not sure if the card will let it run anyway or not.
 