10GbE

Update: FCoE


So I have done some more work on the Fibre Channel over Ethernet whitebox switch build! I got enough cheapo cards to test this idea, and man is it wonderful and a piece of crap at the same time. I'm pretty sure it's because I'm testing this idea on some really old hardware, but problems just keep cropping up. For example, if I reboot the machine, the LAN bridge defaults to 10.10.10.1 instead of my 192.168 scheme for some reason. Then the SFP+ ports will randomly stop working, while the same links just work off the two SFP+ ports on my gigabit switch (hence showing why whitebox builds are a bad idea: they just don't work sometimes).
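If you end up fighting the same reboot gremlins, a dumb little check like this can at least tell you when the bridge has fallen back to its default. This is just a sketch for my situation: the 10.10.10.1 fallback is what my box does, but the 192.168.1.1 "expected" address is a placeholder, so swap in whatever your scheme actually is.

```python
#!/usr/bin/env python3
"""Tiny watchdog sketch: warn me when the LAN bridge has fallen back to its
10.10.10.1 default instead of the normal 192.168.x.x address.
The EXPECTED address below is an assumption -- use your own."""

import subprocess

EXPECTED = "192.168.1.1"   # where the bridge *should* be (assumed address)
FALLBACK = "10.10.10.1"    # where it ends up after a bad reboot

def answers_ping(host: str) -> bool:
    """Return True if a single ping to the host succeeds."""
    result = subprocess.run(
        ["ping", "-c", "1", host],          # Linux/BSD-style ping flags
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    if answers_ping(EXPECTED):
        print(f"Bridge is answering at {EXPECTED}, all good.")
    elif answers_ping(FALLBACK):
        print(f"Bridge reset itself to {FALLBACK} -- time to fix the config (again).")
    else:
        print("Bridge is not answering at either address.")
```

You could throw something like that in cron so you find out about the reset before your transfers start failing.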

So what does this mean? I will still build the box, test it out, and giggle a little, but I will eventually just turn that box into my next-gen Hyper-V environment. I will put my router on it as a VM, have it be the 10gbps connection to my switch, and then just save up money to buy a small SFP+ switch to connect my SAN and servers together. Right now I just have my gaming / rendering box and my media server connected to the two SFP+ ports on my switch so I can have that 10gbps goodness for editing pictures and videos on the fly.

Would I recommend this project to people? No; unless you really want to play around with what you can do with such a build, I would not. It was nice when it worked, and I got up to five computers with 200 MB/s speeds between them! But it got to the point where I was rebooting that box twice a day to get it to work because it was running out of memory for the transfers. Not worth it, since I needed that box to stay up and provide my services to the outside world, such as web and VPN. I will make a video about it eventually, but only when I get the new build up and running. Till then! :)

Fibre Channel over Ethernet


So I have been looking into getting 10gbps connections between my servers and gaming rig, with a 10gbps backbone link for the rest of the 1gbps devices on my network, such as gaming consoles, wireless access points, and so on. I want the 10gig because I would like to consolidate all my storage onto one or two servers and have those servers be iSCSI targets for my VMs and services to store data on. That would also let me deduplicate data and make everything easier to compress and back up to redundant storage, then to the cloud via Backblaze. Right now my VM server has its own SSD storage for databases and HDDs for websites and other services, backed up manually. The media server has a boatload of HDDs for storing and serving videos, music, and so on; this is where I store most of my data and backups. And then there is my cold storage server, which has 7x500GB drives for secondary copies of VMs, games, and impossible-to-re-download data. It's a big mess, really.
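None of that is built yet, but once those boxes are serving iSCSI, a quick reachability check like this is handy for making sure the VM hosts can actually see the targets before anything tries to mount storage. The addresses below are made up; 3260 is just the standard iSCSI target port.

```python
#!/usr/bin/env python3
"""Quick-and-dirty check that the storage boxes answer on the standard
iSCSI port (TCP 3260). Hostnames/addresses are placeholders."""

import socket

ISCSI_PORT = 3260                            # default iSCSI target port
TARGETS = ["192.168.1.20", "192.168.1.21"]   # example storage server addresses

def portal_up(host: str, port: int = ISCSI_PORT, timeout: float = 2.0) -> bool:
    """Return True if we can open a TCP connection to host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in TARGETS:
        state = "up" if portal_up(host) else "DOWN"
        print(f"iSCSI portal {host}:{ISCSI_PORT} is {state}")
```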

So I started looking at prices for network cards, cabling, and switches to see what one would have to pay to get some 10gig awesomeness for my homelab. I have discovered that fiber cable costs $3 for 50 ft and $7 for 100 ft if you look on eBay. Awesome. I bought one of each length and am looking to get some shorter 1–3 ft cables as well that have the SFP+ modules already attached on the ends. I will use the 100 ft cable to connect my gaming rig and the 50 ft to connect anything else that I want across the house. Maybe a cheap 16-port switch with an SFP+ uplink for the gaming consoles?

NIC prices vary based on condition. I got two Intel X520-DA2 NICs for $100 because one of the clips that holds the SFP+ module in place is broken, but the cards still work. These cards usually go for $150 – $200 new. I have also gotten an Intel X520-DA1, a single-SFP+-port card that I will be testing in my servers.

Now for the SFP+ modules. These are usually expensive too, but you can usually find them for around $25 on eBay as well, so I have gotten four of those to test transfer speeds. Still waiting on them to arrive.

Now for the switch. I have determined that getting a pure SFP+-only switch is out of the question, because the prices for even an 8-port unit are so insane that I could do better at 1/5 the cost and have better control over the traffic. So I have decided to make my own whitebox SFP+ switch. I have taken my old i7-920 computer, put both of the two-port NICs in it, and bridged the ports together to act as a switch, which I currently have connected to my Ubiquiti ES-48-US Lite switch that has a 10gig SFP+ backbone port. Great cheap switch, can't say enough good things about it.

I have now connected my media server to this whitebox build using the single-port NIC that I have. Can't test anything else until I get those SFP+ modules and another single-port NIC.

Running on this whitebox switch is pfSense, which is really meant to be a firewall but can act as a bridge and route traffic as well. This will add some latency to these connections, but I don't mind so long as I can get above the 1gbps limit (~100 MB/s) during transfers.
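For anyone curious what the "switch" part actually is under the hood: pfSense sets the bridge up through its web GUI, but since pfSense sits on top of FreeBSD, it boils down to if_bridge and a few ifconfig calls. Here is a rough Python sketch of the equivalent commands, just to show the idea (pfSense manages this itself, so don't run this on a live pfSense box). The ix0–ix3 interface names are an assumption based on what Intel X520s usually show up as under FreeBSD; check what your system actually calls them first.

```python
#!/usr/bin/env python3
"""Rough sketch of what the whitebox 'switch' boils down to on FreeBSD:
create an if_bridge interface and add the SFP+ ports as members.
Interface names below are assumptions. Needs root to actually run."""

import subprocess

MEMBERS = ["ix0", "ix1", "ix2", "ix3"]  # the four SFP+ ports (assumed names)

def run(*cmd: str) -> str:
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

if __name__ == "__main__":
    # Create the bridge interface; FreeBSD prints the new name (e.g. bridge0).
    bridge = run("ifconfig", "bridge", "create")

    # Add every SFP+ port as a bridge member and bring it up.
    for nic in MEMBERS:
        run("ifconfig", bridge, "addm", nic)
        run("ifconfig", nic, "up")

    # Finally bring the bridge itself up so it starts forwarding frames.
    run("ifconfig", bridge, "up")
    print(f"{bridge} is up with members: {', '.join(MEMBERS)}")
```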

So far the bridge between the switch and the media server works! I have also noticed a much snappier connection and faster transfers between my gaming rig and the media server, such as sustained speeds of 100 MB/s whereas before I was getting 60–75 MB/s. I really blame that on the crappy unmanaged switches I had everywhere just to connect everything; all those hops were a nightmare. Having everything connected to a single managed switch has been the best upgrade I have bought for my homelab in a long, long time.
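If you want to sanity-check numbers like these yourself without copying giant files around, one quick way is to push a pile of data over a plain TCP socket and time it. Here is a minimal Python sketch of that kind of test; the port number and transfer size are arbitrary picks, and it is no iperf, but it is enough to tell a gigabit link from a 10gig one.

```python
#!/usr/bin/env python3
"""Poor man's throughput test: push data over TCP and report MB/s.
Run 'speedtest.py server' on one box and 'speedtest.py client <ip>' on the other.
Port and sizes are arbitrary choices for this sketch."""

import socket
import sys
import time

PORT = 5201               # arbitrary test port
CHUNK = 1024 * 1024       # send/receive in 1 MiB chunks
TOTAL = 2 * 1024 ** 3     # move 2 GiB per run

def server() -> None:
    """Accept one connection, swallow everything sent, and report the rate."""
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        with conn:
            received = 0
            start = time.monotonic()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.monotonic() - start
            print(f"{addr[0]}: {received / 1e6:.0f} MB in {elapsed:.1f}s "
                  f"= {received / 1e6 / elapsed:.0f} MB/s")

def client(host: str) -> None:
    """Blast TOTAL bytes at the server and report the send-side rate."""
    payload = b"\0" * CHUNK
    start = time.monotonic()
    with socket.create_connection((host, PORT)) as sock:
        sent = 0
        while sent < TOTAL:
            sock.sendall(payload)
            sent += len(payload)
    elapsed = time.monotonic() - start
    print(f"sent {sent / 1e6:.0f} MB in {elapsed:.1f}s = {sent / 1e6 / elapsed:.0f} MB/s")

if __name__ == "__main__":
    if len(sys.argv) >= 2 and sys.argv[1] == "server":
        server()
    elif len(sys.argv) >= 3 and sys.argv[1] == "client":
        client(sys.argv[2])
    else:
        print("usage: speedtest.py server | client <server-ip>")
```

Anything sustained well above ~110 MB/s means you are actually using the 10gig link and not falling back to gigabit somewhere.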

Well, that's all I've got for now. I will upload pictures when things start to come together! I will also post more specifics about what I have done so you can be better informed if you want to try this yourself! 😀