Thursday, September 13, 2018

Infiniband Upgrade: FDR

In a previous post, I showed that I had perfect performance scaling with QDR Infiniband, which means the interconnect was no longer the performance bottleneck and I didn't need anything faster. So, naturally, I upgraded to a faster FDR Infiniband system. ...shhh...

I purchased 5x Sun 7046442 rev. A3 HCAs. These are dual-port ConnectX-3 cards (PCIe 3.0, unlike the ConnectX-2, which was PCIe 2.0), re-branded Mellanox CX354A-Q HCAs. They're pretty cheap now; I got these for an average of about $28 each. You can reflash them with stock Mellanox firmware of the -F variety, which is the FDR-speed version (see one of my previous Infiniband posts on how to burn new firmware to these), and that's what I was planning to do. I also picked up 5 FDR-rated cables for $18 each, and an EMC SX6005 "FDR 56Gb" switch (these are going for under $100 now, with the managed versions going for just over).
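
In case it's useful, the reflash I was planning is roughly the following (just a sketch; the MST device name and firmware file name are placeholders for whatever mst status and the Mellanox download page actually give you):

    mst start                                     # load the MFT access drivers
    mst status                                    # list devices, e.g. /dev/mst/mt4099_pci_cr0
    flint -d /dev/mst/mt4099_pci_cr0 query full   # note the current FW version and PSID
    flint -d /dev/mst/mt4099_pci_cr0 -i fw-ConnectX3.bin -allow_psid_change burn

The -allow_psid_change flag is only needed because the stock Mellanox image's PSID differs from the card's OEM Sun PSID.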

The first thing I tested was all of the Sun HCAs' ports and the cables. To my surprise, ibstat showed a full FDR 56 Gbit/s link. I guess the Sun firmware (2.11.1280) supports FDR. Lucky! Now I don't need to reflash anything. All of the cards and cables just worked.
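
For reference, the check itself is trivial (assuming the HCA shows up as mlx4_0, which is typical for ConnectX-3 under the mlx4 driver):

    ibstat                                          # each active port should show "Rate: 56"
    cat /sys/class/infiniband/mlx4_0/ports/1/rate   # should report 56 Gb/sec (4X FDR)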

Bench testing HCAs and cables
I didn't have such luck with the switch. The first one arrived with both PSUs half dead: it would pulse on and off when plugged in, so I had to send it back, and they sent a replacement. The replacement worked, but the links would not negotiate anything faster than FDR10. ibportstate (lid) (port) is a good tool for checking what speeds your HCAs and switch ports should support (ibswitches gives the LID of the switch, and ibstat gives the LIDs of the HCAs). I tried forcing the port speed with ibportstate (lid) (port) espeed 31 and a few other things (see the opensm.conf file for details), but nothing worked. Then I did some research. This is an interesting thread on the managed EMC switches: it turns out you can burn MLNX-OS onto them, overwriting the crappy EMC OS. That doesn't really apply to me, though, since the SX6005 is unmanaged and I'm running OpenSM.
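
For anyone following along, the diagnosis went roughly like this (a sketch; LID 3 and port 1 are placeholders for the values ibswitches and ibstat actually report):

    ibswitches                # list fabric switches and their LIDs
    ibstat                    # local HCA LIDs, state, and link rate
    ibportstate 3 1           # query the width/speed capabilities of port 1 on LID 3
    ibportstate 3 1 espeed 31 # try to enable all extended link speeds (didn't help here)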

I installed MFT and read the MFT manual and the SX6005 manual. I found the LID of the switch with ibswitches, then queried it over the fabric with flint: flint -d lid-X query full. That showed slightly outdated firmware as well as the PSID: EMC1260110026. Cross-referencing that against the Mellanox firmware downloads, the SX6005T (the FDR10 version) image has PSID MT_1260110026, so the switch was clearly running the FDR10 firmware. THAT's why it was auto-negotiating to FDR10 and not FDR. It turns out you can update the firmware "inband", i.e. across an active Infiniband connection, and what's cooler, it's the exact same process as for the HCAs! HA! I'm in business. I downloaded the MSX6005F (not MSX6005T) firmware, PSID MT_1260110021, and followed my previous instructions with a slight modification to the burn step:
flint -d lid-X -i fw.bin -allow_psid_change burn
where X is the LID of the switch. I rebooted the switch (pulled the plugs, waited a minute, plugged it back in, waited a few minutes), then queried it again, and it showed the new firmware and new PSID. I then checked ibstat, and BAM: 56 Gbit/s, full FDR. I posted this solution to the "beware of EMC switches" thread I linked earlier.
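
Putting it all together, the whole in-band switch update amounted to something like this (a sketch; the firmware file name is a placeholder for the MSX6005F image you download):

    ibswitches                                   # get the switch LID, call it X
    flint -d lid-X query full                    # old firmware, PSID EMC1260110026
    flint -d lid-X -i fw-SX6005F.bin -allow_psid_change burn
    # power-cycle the switch, then verify:
    flint -d lid-X query full                    # should now show PSID MT_1260110021
    ibstat                                       # HCA ports should report Rate: 56 (FDR)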

Another advantage of this switch over my current QDR switch is that it only has 12 ports, so it's much smaller. It's also quieter, though that's like comparing a large jet engine to a small jet engine.

Now I just have to integrate all the new hardware into the cluster. 

Before I sell the QDR cables, I'm going to try running a dual-rail setup (2 Infiniband cables from each HCA) just to see what happens. Supposedly OpenMPI will automatically use both ports, which would be awesome because two FDR links would saturate the roughly 64 Gbit/s of usable bandwidth a PCIe 3.0 x8 slot provides. We'll see...
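
If OpenMPI doesn't pick up the second port on its own, it can be told about both explicitly via the openib BTL. A minimal sketch, assuming the HCA shows up as mlx4_0, with a made-up hostfile and benchmark binary:

    mpirun -np 32 --hostfile hosts \
        --mca btl openib,vader,self \
        --mca btl_openib_if_include mlx4_0:1,mlx4_0:2 \
        ./my_mpi_benchmark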

Wednesday, September 5, 2018

Supermicro X10DAi with big quiet fan coolers, new processors

Writing papers and studying for comprehensive exams has been eating most of my time recently, but I've done a few homelab-y things.

I've been trying to sell the X10DAi motherboard for a few months now to no avail. Super low demand for them for some reason. Anyways, it's a great motherboard, so I decided to use it to test some new (used) processors and CPU coolers.

I got a great deal on 2x E5-2690 V4 ES chips (they might actually be QS). These are 14c/28t with a 2.4 GHz base frequency, and they're in perfect condition, which is rare for ES processors. I haven't benchmarked them yet, but the all-core turbo is probably about 3.0 GHz, the same as the 10c/10t E5-4627 v3 QS. They can also run the 2400 MHz memory I have at full speed. All in all, I should get a speed boost of somewhere between 10% (memory only) and 50% (memory and extra cores). This will make the headnode significantly faster than the compute nodes, which generally isn't useful, but if I allocate more tasks to it, the load should balance out okay. There's also the possibility that the extra cores will end up being useless due to the memory bottleneck: with the E5-4627 v3s and the OpenFOAM benchmark, going from 16 to 20 cores only improved performance by about 5%. The extra memory bandwidth will help some, but I expect the performance difference between 24 and 28 cores to be roughly zero.

Anyways, on to the coolers. The headnode workstation isn't really loud per se, but it isn't quiet either. That hasn't mattered until now because the noise from the compute nodes plus the Infiniband switch is on par with an R/C turbojet engine. Since the upgraded fans for the soundproof server cabinet are also loud, I haven't actually closed its doors yet. I'm planning to fix that soon, so I decided to try a quieter CPU cooling solution for the headnode. I looked into water coolers, but I only had space for two 140x140mm radiators, which means they wouldn't cool any better than a good fan cooler. That, plus a price comparison, led me back to fan coolers. It's a large case with plenty of head space, but since it's a dual-socket motherboard, I can't fit two gigantic fan coolers, and I also wanted to be able to access the RAM with the coolers installed. That limited me to 120 or 140mm single-fan coolers. I purchased two Cooler Master Hyper 212 Evos (brand new from eBay was cheaper than Amazon), which, at under $30 each, have an incredible price/performance ratio.

I installed one CPU and one cooler in the X10DAi to test both out. When I turned it on, I got the dreaded boot error beeps, 5 to be precise, which means it either can't detect a keyboard or can't detect a graphics card. The X10DAi has no onboard graphics, so it requires a graphics card, which I had installed. I did find a bad DVI cable, but replacing it didn't fix the problem. After about an hour of head scratching, I realized I hadn't cleared the CMOS, which is necessary when changing CPUs. I removed power, pulled the CMOS battery, shorted the CMOS-clear pins, put the battery back in, powered it on, and it booted right up. I then ran memtest86, installed the second CPU and cooler, ran memtest86 again... blah blah blah.

Anyways, the Cooler Master Hyper 212 Evo coolers work great on this dual-socket motherboard. Note that you have to deviate from the instructions: the cooler mounting holes are not drilled all the way through this motherboard, so you can't install the backplate. Instead, you screw the shorter standoffs into the threaded holes (same thread, luckily), then attach the CPU cooler bracket to those. Make sure not to over-tighten the bracket screws, because they pull up on the CPU plate, which is only attached to the surface of the motherboard PCB. Tighten them too much and you could flex the plate enough to break it off the PCB.

Short standoffs installed on socket 2. You can see a post from socket 1 in upper right.

Both installed and running!
In case you're interested: the coolers could also have been installed rotated 90 degrees. They're tall enough to clear the RAM, and I'm pretty sure they'd still allow RAM access even when rotated. Since the ASUS Z10PE-D8 is roughly the same size, I think they'll work on that board too, which is where they'll ultimately be installed.



Coming soon:

  • Quieter cabinet air extraction fans
  • FDR Infiniband (oooo, shiny)