BenseBuilt Liquid-cooled MicroATX Mini Workstation / Riser Card Research / X99 / X299

Bense

Limp Gawd
Joined
Aug 7, 2013
Messages
130
----------------------------------------------------------------------------------------------------
July-August 2018 Update: I have upgraded this to X299 with the MSI X299M 'Gaming Pro Carbon' mATX board, along with an i7-7800x. X299 updates begin on post #34
----------------------------------------------------------------------------------------------------


Here is the latest iteration of my liquid-cooled μATX X99 system. The original contents of this (first) post of the thread have been moved here.

Objective: To build as powerful a workstation in as small a chassis as possible without necessitating high-speed (above 4,000 RPM) fans. Specifically, to make the ratio of capabilities / volume as great as possible.

Components
Cooling System

Here are the latest images of the setup. Clicking on images directs to full-resolution versions.
bensebuilt_x99m_apr23_side02.jpg



bensebuilt_x99m_apr23_side01.jpg




Questions, comments, suggestions, criticisms, are encouraged and appreciated.
 
PCIe splitters are real tricky. I've looked into them, but there is pretty much no solid info on them to be had for consumer motherboards. They're mostly meant for server stuff.

Gigabyte has a mATX X99 board coming out in the next week or two that may have a better slot layout for you. Here are the supposed specs: http://compare.eu/uk/gigabyte-ga-x99m-gaming-5-a1159272.html

Take those with a grain of salt, the specs seem plausible but there are several other price aggregators with obviously incorrect specs listed.
 
PCIe splitters are real tricky. I've looked into them, but there is pretty much no solid info on them to be had for consumer motherboards. They're mostly meant for server stuff.

Gigabyte has a mATX X99 board coming out in the next week or two that may have a better slot layout for you. Here are the supposed specs: http://compare.eu/uk/gigabyte-ga-x99m-gaming-5-a1159272.html

Take those with a grain of salt, the specs seem plausible but there are several other price aggregators with obviously incorrect specs listed.

Thank you for your reply. At this point, I wish that I had gone with the EVGA board instead, even though I would have had to use a CPU with 40 PCIe 3.0 lanes to use all three PCIe 3.0 x8 slots.

I have concluded that M.2 is garbage, and that it will take at least five years of industry production of M.2 devices to convince me that it is a viable option worth considering for any device intended to be used primarily in a desktop (read: non-laptop) machine.

All matters aside, a complete, robust system seems to necessitate:
* 1 AMD Radeon GPU
* 1 Nvidia GeForce GPU
* 1 true hardware RAID controller - attempting RAID in software is a joke; don't even bother. Why spend hundreds on a "PCIe SSD controller" when all it is is a hardware RAID controller striping your SSDs in RAID0?
* 1 WiFi network adapter
* 1 NIC that is faster than 1000BASE-T. Seriously. Gigabit Ethernet was cutting edge a decade ago. Why something along the lines of SFP+ has not become standard is a mystery.
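The wish list above adds up to a serious PCIe lane budget. Here's a rough sketch; the per-device link widths are my illustrative assumptions, not figures from the list:

```python
# Rough PCIe lane budget for the "complete, robust system" above.
# Link widths per device are assumptions for illustration only.
devices = {
    "Radeon GPU": 16,
    "GeForce GPU": 8,          # often acceptable at x8 on HEDT boards
    "HW RAID controller": 8,
    "WiFi adapter": 1,
    ">1GbE NIC (e.g. SFP+)": 4,
}
total = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name}: x{lanes}")
print(f"total lanes wanted: {total}")
print(f"fits a 28-lane i7-5820K: {total <= 28}")
print(f"fits a 40-lane CPU: {total <= 40}")
```

Under these assumptions the full list wants ~37 lanes, which is exactly why a 28-lane chip forces slots to fall back and a 40-lane CPU starts to look attractive.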
 
* 1 true hardware RAID controller - attempting RAID in software is a joke; don't even bother. Why spend hundreds on a "PCIe SSD controller" when all it is is a hardware RAID controller striping your SSDs in RAID0?
While not viable for a dual purpose gaming machine, ZFS' 'software RAID' is far superior to any hardware RAID controller for reliability, data stability, and ease of recovery. No way would I ever trust my data to any hardware controller again, no matter how high end.
 
I forgot to ask, why are you using both a Radeon and a GeForce?

There are certain tasks where an Nvidia card is more optimal than a Radeon, and vice versa.

If you saw the link to my HP xw8600 workstation, you'll have seen that I have been on a building/hardware 'rampage' over the last several months. I accomplished all my objectives, and now I want one "simple" build that is relatively small and can cover just about all bases.


While not viable for a dual purpose gaming machine, ZFS' 'software RAID' is far superior to any hardware RAID controller for reliability, data stability, and ease of recovery. No way would I ever trust my data to any hardware controller again, no matter how high end.

I agree with you. At a later point I want to look into a ZFS build. The majority of my "critical data" on this machine is either in my Dropbox or my Google Drive.
 
Supermicro RSC-R2UU-2E2E4R. - PCIe 2.0 x8 split to three open ended x8 slots. One operates at x4, the other two operate at x2. This could be perfect, if the first x4 lanes of the riser card are split to two of the x2 slots. If a PCIe 2.0 x4 quadport gigabit adapter were to fall back to PCIe 2.0 x2, this should still be enough bandwidth to not bottleneck throughput. However if the first x4 lanes go to the x4 slot, then this is worthless.
RSC-R2UU-2E2E4R.jpg

I think I am going to take the plunge and just order this riser and hope that the slots are configured in this manner. *sigh.
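The x2 fallback math is worth checking before ordering. A quick sketch, using the publicly documented PCIe 2.0 signaling rate and 8b/10b encoding (and ignoring further protocol overhead):

```python
# Can a quad-port gigabit NIC survive falling back to PCIe 2.0 x2?
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> 4 Gbit/s usable per lane.
GEN2_GBPS_PER_LANE = 5.0 * 8 / 10   # 4.0 Gbit/s per lane
lanes = 2
link_gbps = GEN2_GBPS_PER_LANE * lanes   # 8.0 Gbit/s for an x2 link
nic_gbps = 4 * 1.0                       # four aggregated gigabit ports, flat out
print(f"PCIe 2.0 x{lanes} link: {link_gbps} Gbit/s")
print(f"NIC worst case: {nic_gbps} Gbit/s")
print(f"bottleneck: {nic_gbps > link_gbps}")
```

So even at x2 the link has roughly double the headroom the four aggregated gigabit ports can demand, which matches the reasoning above; the open question remains which slots get the first four lanes.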
 
Just realized something, wouldn't you need one facing the other way?

I plan on using a PCIe ribbon cable, so I'd have flexibility to route it however I see fit.

Good news though, I have finally selected a case: the Fractal Design Arc Mini R2.

Since I already have two radiators, one 360mm x 120mm and the other 240mm x 120mm, this seems like the easiest route.

I received my Swiftech MCP35x2 dual pump today. I'm hoping that I can mount this thing at the bottom of the case without having to occupy one of my 5.25" bays.

Also, I will be using one of these to control the fans. I put a link to it in my HP xw8600 workstation thread that I believe I linked earlier. The Evercool EC-DF001 splits one 4-pin PWM fan header into (up to) five separate fan connections, and draws power from a conventional "molex" plug.
http://www.newegg.com/Product/Product.aspx?Item=N82E16812311001

I haven't decided which fans I will use, but I will find some good fans that an OEM vendor uses (Delta, Nidec, etc.).

And there's also the visual flow meter that I got... I'll post pics later, but here are two links about them. This thing is heavy. I'm not sure if I'm going to use it on this project.
http://www.sightflowindicator.com/sight_flow_indicators.html
http://www.mpcdayton.com/htmldocs/vfimain.html
 
So the new Gigabyte X99 mATX board is out and it looks awesome. Fellow Aibohphobia has an awesome thread over at overclock.net with an excellent look at it.

I am debating whether I should put my ASRock X99M Extreme4 on eBay and just get the Gigabyte board. I did, however, find a good solution to be able to use the ASRock's M.2 slot. Which, by the way, the ASRock X99M boards (Extreme4 & Fatal1ty) have the best M.2 slot of the current four mATX X99 boards that I've seen. It is a socket 3, M key M.2 (NGFF) slot, and it supports 4 lanes of PCIe 3.0. There are currently no M.2 SSDs that are PCIe 3.0 x4, however. But in the future, when there are more M.2 offerings, that SSD form factor has matured, and the prices have dropped, that slot will become more useful.

But this is today. And I am impatient, damnit. :D It appears that bplus is working on another card that will be a perfect solution for what I need. It is the M2J34, shown on the right in this image that I pulled from a PDF off of the hwtools.net website. The current status is "Develop", so I am assuming that they will be releasing it sometime soon. It would then allow me to run four mHDMI cables (I love how they select an off-the-shelf, standardized cable rather than developing a whole new proprietary one) to what I assume will be a daughter card with a female PCIe 3.0 x4 slot. I'm assuming that it will be an open slot much like their P4SM2 card.
bplus_m2j34.jpg


And this is now a perfect segue to posting my latest pictures of my setup. As can be seen here, my Fractal Design Arc Mini R2 case provides an extra expansion slot for devices that aren't directly connected to the onboard expansion slots. The white bracket can be seen here on the left.
atbense_x99_02.jpg


If I were to upgrade my Dell PERC H310 RAID controller with something like this LSI MegaRAID SAS 9341-8i, or an OEM derivative from Dell (PERC H330) or HP when they become more available, that would give me a card that can operate in PCIe 3.0 mode. This would provide me with more comfortable bandwidth to work with for my striped SSD RAID array. Here's a table of PCIe bandwidth values that I made from the values retrieved from Wikipedia, as well as the slot layouts of the current mATX X99 boards. It shows what my current PERC H310 (Dell's rebadged LSI 9240-8i) is operating at in the PCIe 2.0 x4 slot #3, which is limited to 2GB/s.
ATB-PCIe_lanes_X99M_M.2_slots.jpg
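For anyone who wants to reproduce the bandwidth side of that table, the numbers fall out of the per-generation signaling rate and line encoding. A quick sketch using the publicly documented PCIe figures:

```python
# Rebuild the PCIe usable-bandwidth table (GB/s per link width).
# Gen 1/2 use 8b/10b line encoding; Gen 3 uses 128b/130b.
gens = {  # generation -> (GT/s per lane, encoding efficiency)
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}
widths = [1, 2, 4, 8, 16]
table = {}
for gen, (gtps, eff) in gens.items():
    per_lane_gbs = gtps * eff / 8   # GB/s per lane (8 bits per byte)
    table[gen] = {w: round(per_lane_gbs * w, 2) for w in widths}
    row = "  ".join(f"x{w}:{table[gen][w]:5.2f}" for w in widths)
    print(f"PCIe {gen}  {row}  (GB/s)")
print(f"H310 ceiling in the PCIe 2.0 x4 slot: {table['2.0'][4]} GB/s")
```

The PCIe 2.0 x4 row confirms the 2 GB/s ceiling the H310 is stuck at, while a PCIe 3.0 x8 card like the 9341-8i would get nearly 8 GB/s.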


Oh, and here's a picture of my new R9 290 with Heatkiller waterblock.
r9_290x_atbns.jpg


And another picture of how things are currently setup.
atbense_x99_01.jpg
 
If that's what mHDMI means that answers my question I posted for you over on OCN.

The M.2 has to be 2.0 x2. It's shared with the SATA Express port (you can only use one or the other) and that's x2.

It's also labeled in the manual as M2_10G presumably meaning 10Gb/s.
 
If that's what mHDMI means that answers my question I posted for you over on OCN.

The M.2 has to be 2.0 x2. It's shared with the SATA Express port (you can only use one or the other) and that's x2.

It's also labeled in the manual as M2_10G presumably meaning 10Gb/s.

Ah, yes :D
 
Next to the PCIe Gen3.0 table in that spreadsheet, did you mean LSI 9341-8i rather than 9241-8i? Anyway, I see in your explanation M2J34 will provide a PCIe 3.0 x4 slot converted from an M Key M.2 slot (that currently ASRock boards provide), but I remember looking at P4SM2 and finding that Bplus advertises it as Gen 2.0 compliant, so I don't know if you can optimistically expect M2J34 to be Gen 3.0 compliant? I admit I have no idea what the difference b/w 2.0 and 3.0 is when it comes to adapters as opposed to the add-on device and the motherboard slot so maybe it doesn't matter? I really want to be educated on this thing but I'm not competent enough to be able to comprehend PCIe spec documents. :(
 
Next to the PCIe Gen3.0 table in that spreadsheet, did you mean LSI 9341-8i rather than 9241-8i? Anyway, I see in your explanation M2J34 will provide a PCIe 3.0 x4 slot converted from an M Key M.2 slot (that currently ASRock boards provide), but I remember looking at P4SM2 and finding that Bplus advertises it as Gen 2.0 compliant, so I don't know if you can optimistically expect M2J34 to be Gen 3.0 compliant? I admit I have no idea what the difference b/w 2.0 and 3.0 is when it comes to adapters as opposed to the add-on device and the motherboard slot so maybe it doesn't matter? I really want to be educated on this thing but I'm not competent enough to be able to comprehend PCIe spec documents. :(

Yes, good catch. I meant the LSI 9341-8i. I think that since these are passive adapters, it really comes down to whatever the M.2 slot supports.

I emailed the bplus sales about when the M2J34 would become available. I received a response at 2:28am this morning :D Hopefully it is available. If I do decide to go with the Gigabyte board though, the M2J32 would be needed instead. Hmmm
 
I've been looking at RAID controllers today. It seems like the Intel cards below would be a cheap PCIe 3.0 solution. However, they are limited to 6Gb/s drives only. It could work, but at ~$120 it seems like it would be better to just wait. Plus, they don't support RAID5 or RAID6. It's a good SSD controller, but I don't like that it doesn't cover all bases.
RMS25KB040
RMS25KB080

Card is advertised as PCIe 2.0 though. heheh

I was looking at the HP H220, H221, and H222 adapters, and found a good page about cross-flashing these HBA cards with LSI IR firmware. Apparently some of these cards are LSI 9205-8* based, which is only PCIe 2.0. The ones that are LSI 9207-8* based support PCIe 3.0. Good thing I checked; I was about to order a card that seemed like a good deal. Glad I dodged that bullet.
 
Original Contents of my initial first post, which contained much more preliminary considerations:

I am currently building a MicroATX X99 system based on the ASRock X99M Extreme4 motherboard. My objective for this build is to see just how powerful I can make a MicroATX platform. I am aware that most (if not all) of the obstacles that I am encountering in this build could easily be resolved by using an ATX board; however, that defeats the whole purpose of this build.

Objective: To build an extremely powerful workstation in as small a tower (non-rackmount) chassis as possible. I am aiming to maximize this machine's capabilities and minimize the cubic feet that it occupies. In short, I want the ratio of capabilities / volume to be as great as possible.

Current setup:
* board - ASRock X99M Extreme4 MicroATX
asrock_x99m_extreme4-900x877.jpg

* cpu - Intel Core i7-5820K
* cpu waterblock - Koolance CPU-370SI
* ram - 4x4GB of Crucial DDR4-2400
* pcie slot 1 - Nvidia GTX 770 2GB with 2nd DVI port removed, modified to occupy single slot, with Watercool Heatkiller GPU-X GTX 770 full coverage waterblock
* m.2 - awaiting parts
* pcie slot 2 - Dell PERC H310 (re-badged LSI 9240-8i) PCIe 2.0 x8 RAID controller - with 2x256GB mSATA SSDs striped in RAID0
* pcie slot 3 - PCIe x1 -> MiniPCIe wifi adapter with Atheros AR9380 802.11abgn dual band 3x3:3 "N900" Wi-Fi adapter.

Current plans:
* PCIe Slot 1 (PCIe 3.0 x16) - I have a Heatkiller R9 280X waterblock currently en route, and there are several of these cards that can be easily modified to only occupy a single slot.
However, I may decide to go with an R9 290X instead. If so, I'll find a waterblock, move this card to the top PCIe slot, and avoid hacking off the 2nd DVI plug by using an M.2 extender (see below) and mounting my RAID controller elsewhere.

* M.2 Slot (PCIe 3.0 x4) - PCIe x4 female -> M.2 (NGFF) adapter. I would rather have an open-ended PCIe 3.0 x4 slot than M.2. M.2 is very cool, yet very young, and current M.2 options are very limited. If needed, I can extend this slot with this P12S-P12F M.2 (NGFF) Extender Board. Once I have this, I plan to move my RAID controller to this slot. However, with my controller being only PCIe 2.0, placing it in this x4 slot will bottleneck its I/O. This shouldn't be an issue with me only having a small SSD RAID0 array, but I am mindful of it. The newer LSI-based controllers are PCIe 3.0 x8, so they would provide me with more bandwidth should I want to scale out my SSD array in the future. I anticipate that re-badged adapters from OEM vendors will become available either before, or around the same time that, I am nearing the PCIe 2.0 x4 bandwidth threshold.

* PCIe Slot 2 (PCIe 3.0 x8*) - Will be occupied by my single slot, water cooled Nvidia GTX 770.
* = With my 28-lane i7-5820K, this x16 slot only operates at PCIe 3.0 x8

* PCIe slot 3 (PCIe 2.0 x4) - This is where it gets frustrating. My extensively modified HP xw8600 workstation hosts a 6x3TB RAID5 array. It has a quad-port gigabit adapter with all ports aggregated into a 4.0 Gbps connection to my switch. I would like to occupy this slot with another quad-port gigabit Ethernet adapter. My original reason for selecting this ASRock board (as opposed to the EVGA X99 Micro) was the onboard dual LAN. Unfortunately, I later came to realize that teaming these ports together was not feasible.

The other device that I wanted to incorporate is a good WiFi adapter. I am somewhat of a WiFi hardware enthusiast and enjoy tinkering with the latest WiFi hardware. One of the first things that you learn about bleeding-edge WiFi hardware is that Mini PCIe is THE standard interface for WiFi radios. It typically takes another 18-24 months before the newest chipsets are offered with another interface (USB, ExpressCard, etc). For this reason, if possible, I would like to incorporate a Mini-PCIe -> PCIe x1 adapter. This way I could use something like this readily available Broadcom 802.11abgn+ac adapter with dual band, 3x3:3 MIMO, and Bluetooth 4.0. These cards + adapters typically sell for < $50 shipped on eBay.

So how could I incorporate both of these? Well, this is the inconclusive point that my research has led me to. There are riser cards that I could probably use in that 3rd PCIe 2.0 x4 slot. Here's what I have discovered, with considerations for each.

Supermicro RSC-R2UU-2E2E4R. - PCIe 2.0 x8 split to three open ended x8 slots. One operates at x4, the other two operate at x2. This could be perfect, if the first x4 lanes of the riser card are split to two of the x2 slots. If a PCIe 2.0 x4 quadport gigabit adapter were to fall back to PCIe 2.0 x2, this should still be enough bandwidth to not bottleneck throughput. However if the first x4 lanes go to the x4 slot, then this is worthless.
RSC-R2UU-2E2E4R.jpg


Advantech AIMB-R431F-21A1E
This splits PCIe x4 into one x16 slot (operating at x1) and two x1 slots. Expensive, but I could put a PCIe 2.0 x4 quad-port gigabit adapter in it and see how it runs. This less desirable option is not worth pursuing.
AIMB-R431F-21A1E.jpg


Advantech AIMB-R4301-03A1E
This splits the PCIe x4 into three x1 slots. Looks like this could be found for less than $40 shipped. Unlike the Supermicro riser, this would certainly provide the ability to use a MiniPCIe adapter, and there are PCIe 2.0 x1 dual-port gigabit adapters. This would work, but I would sacrifice some Ethernet throughput by downscaling from 4.0 Gbps to 2.0 Gbps. Unfortunately, the direction of this riser would make it difficult to get everything mounted in my case. Blah.
AIMB-R4301-03A1E.jpg


I wonder if it is possible to aggregate links across separate adapters, such as using two matching PCIe x1 dual gigabit adapters and teaming all four connections into one 4.0Gbps connection.

I wonder how difficult it would be to make my own riser card. What if I were to get a PCIe x4 riser cable, desolder the pins corresponding to the second x2 lanes of the female plug, then solder them to another female x4 plug? I would have to find a way to supply power to the second plug, but this _could_ give me an x4 to dual x2 riser. But can the quad-port gigabit adapter operate in PCIe 2.0 x2 mode? I guess I could get one, then physically tape off the terminals to test, however that is not worth all the trouble.
Pci-e-x4-flexible-cable-card-1u-2u-pcie-4x-extension-cable-pcie-cable-18cm.jpg
 
I think I hit the bandwidth ceiling with my SSD RAID array that's connected to the slow PCIe 2.0 x4 slot...

I still don't understand why ASRock did this...
raid5_4x256_SSD.jpg
 
Well, I certainly have come full circle. These M.2 NVMe cards are insane.
Ars Technica's 950 Pro review: Samsung’s first PCIe M.2 NVMe SSD is an absolute monster

I am impatiently awaiting the 1TB version's release. I started a thread documenting my findings - Nothing yet.

However, I've seen a few people mentioning these M.2 to U.2 adapters such as this one. But what is the bandwidth of this U.2?
Addonics M2 SFF-8643 converter

Also looks like ASRock makes one.
ASRock > U.2 Kit

Gigabyte does as well..
GIGABYTE - Motherboard - Accessories - GC-M2-U2-MiniSAS

MSI...
TURBO U.2 HOST CARD | MSI Global | Motherboard - The world leader in motherboard design

Asus...
Hyper Kit - Overview


LOOKS LIKE I'M SEEING A STANDARD CATCHING ON FOR EXTENDING M.2 PCIe 3.0 x4!!!!
Yesssss!!
 
Got a 1TB Toshiba NVMe M.2 SSD for $330. I need to adjust a few settings; I think I can get higher benchmark scores.
msWOI_Svee3YpOLEfvoYsPmG5UKc2wlvYA21TWTB_vwDEw9mAgX41S4wGqsmMWEVWIuveDBUV-4DXENfo1PsDVPTtGxlDSjUO8p8Kw=w1920-h1200-rw-no
 
Upgrading to MSI X299M Gaming Pro Carbon AC with i7-7800x.

A large update is imminent.
 
Upgrading to MSI X299M Gaming Pro Carbon AC with i7-7800x.

A large update is imminent.

I never noticed this thread until you resurrected it today. You've got an intense setup. X299 will definitely be an upgrade over the aging X99. Though I have to ask, why the 7800x? Aside from quad-channel memory support, it gets trounced by an 8700K. If you're going X299, why not something like the 7900x, which has no mainstream competition?

Also, maybe I missed it but are you upgrading your case too? The fractal design arc mini r2 is pretty big by most standards these days, at 40+ liters. The Cerberus is a super cool compact matx case that doesn't seem too restrictive. Of course there's tons of options, it's just one of the smallest that comes to mind. It is the sff section after all ;)
 
Thank you for your reply Ej24.

I strongly prefer not to use the i7-7800x. Me using it is mostly due to current life circumstances. Last fall I purchased the MSI motherboard, a CPU, and faster memory.

Something came up in my life, and I returned the memory and the CPU for refund. (I hadn't even opened them) -- I wasn't able to return my motherboard.

I listed the board on eBay for a few months. It only received one offer, which was too low for me to accept.

Fast forward to a few weeks ago. Something messed up on my ASRock X99M board; I think it may have been damaged during a move? Not sure. It was either buy a refurb X99M for $175ish, or sell my i7-5820K and 'scrape the pots' for some extra cash so that I could get a CPU for the MSI board.

Either way, I'd have to spend about $175, as my i7-5820K sold on eBay for $200.

I needed a functional computer; I had gotten behind with all the places I'd been interviewing at because I'd been operating entirely from my phone.

My build has certainly become a bit dated. However my desire to innovate and find solutions is through the roof, so that's why I'm updating this thread. Perhaps some of the solutions that I've found might be helpful for others.

While the i7-7800x is admittedly lackluster, at the very least it seems to be an easy way for me to get my computer working again. My older, 'performance' memory is the standard that the 7800x uses, and the waterblock I've got works for it as well.

While it may only be marginally faster than my older 5820K, it puts me in the conversation with the other X299M builds, which is where I want to be.

Not familiar with that case, but I'll certainly look into it. But I will say that that ASRock Mini ITX X299 motherboard is mindblowing!
 
In regards to the 8700k, how does it compare when both chips are overclocked?

Stock clocks on an unlocked CPU only need to be fast enough for me to spend a day reading online about voltage limits, downloading and installing stress test software.

Yet most of the reviews I've found only seem to benchmark stock clocks.


VRM cooling appears to be of concern for most of these X299 boards. -- Heatkiller universal VRM cooler is about $70 shipped. I'll probably try to order one in the next month or so, along with a DW1830 WiFi card.

Pull the board out, install those parts, and then forget that VRM cooling was ever a concern. :D
 
I think the case you linked me to was designed by Aibohphobia, the first reply to this thread.

Some day, I'd like to build a system that mounts entirely on the underside of a table/desk, one that is based on a 1U server, yet has no fans and is entirely liquid cooled.
 
In regards to the 8700k, how does it compare when both chips are overclocked?

Stock clocks on an unlocked CPU only need to be fast enough for me to spend a day reading online about voltage limits, downloading and installing stress test software.

Yet most of the reviews I've found only seem to benchmark stock clocks.


VRM cooling appears to be of concern for most of these X299 boards. -- Heatkiller universal VRM cooler is about $70 shipped. I'll probably try to order one in the next month or so, along with a DW1830 WiFi card.

Pull the board out, install those parts, and then forget that VRM cooling was ever a concern. :D

The 7800x is a strange CPU in that it has only 6 cores but uses a mesh bus instead of a ring bus. For low core counts the mesh bus is slower and has more latency than the ring bus. For higher core counts, like 10+ cores, the ring bus gets overwhelmed by cross-core communication and the mesh bus is superior, thus the use of the mesh bus in Intel HEDT and Xeon. So in the realm of 6 cores, the 8700k will have superior performance, overclocking, and single-core performance. Obviously the 8700k won't work for you though, because it's got so few PCIe lanes available. It's a great CPU for gamers with a single GPU and no other PCIe devices. But if you want 2+ PCIe NVMe drives and 2+ GPUs, you need HEDT, from either Intel or AMD.

It's unfortunate money is so tight for you right now. I understand though; as a graduate student, I'm not exactly flush with cash either. My main system is still rocking an i7-4790K after all. Maybe hold out and see if you can pick up a used 7900x for a reasonable price? It's the cheapest Intel CPU with 44 PCIe lanes for maximum expansion. Intel used to offer 44 PCIe lanes for around $600 with the 6850k, but with X299 they bumped it up to almost $1000 to get 44 PCIe lanes!
 
My plan is to upgrade the CPU within the next few months when I get a chance. One of the nice things about these CPUs is that they more or less retain their value on eBay.

Nov 3, 2014 - I spent $373 for i7-5820K
Jul 6, 2018 - I sold my i7-5820K for $205
-----------------
So that's what... (373 - 205)/(44 months) = $3.81 / month
That's not bad at all lol.
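For anyone who wants to run the same cost-of-ownership math on their own parts, the calculation above is just:

```python
# Sanity check on the i7-5820K cost-of-ownership math above.
bought, sold = 373, 205   # USD: purchased Nov 2014, sold Jul 2018
months = 44               # Nov 2014 -> Jul 2018
cost_per_month = (bought - sold) / months
print(f"${cost_per_month:.2f} / month")   # a hair over $3.81/month
```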

As far as the case... I didn't realize until just now that I never posted this pic in this thread. I called it my 'bling shot'.
My build is (or at least at one point was) a MicroATX case with: 2 GPUs, 4 radiators, 6 fans, a dual water pump, 3 SSDs (2.5"), a PCIe RAID controller, and a 3.5" SAS hard drive zip-tied to the case...
Along with a 'Machine Products - Visual Flow Indicator'' (1/4" NPT size) -- see here: http://www.sightflowindicator.com/sight_flow_indicators.html
All in an mATX case.
To put it into perspective: since ~2005, one of my hobbies has been modifying and redesigning Honda/Acura manual transmissions. Between that and spending a few years in my early 20s working as a mechanic, I 'feel' like I've got relatively decent hands-on mechanical assembly/disassembly experience. I can fully disassemble a transmission, replace all the bearings/seals and other parts, use a die grinder to 'machine' the trans housing for more clearance, modify a gear or two on a bench sander, and fully reassemble everything in about 4 hours.

I've completely disassembled this thing about 4 times. It takes me over 14 hours to fully reassemble this thing.
https://lh3.googleusercontent.com/f...gQgT_4ZLsyi0t27j4ivRP-m719g=w1800-h1230-rw-no

If anyone were to ever consider pursuing a project like this, I would strongly discourage it.

The biggest reason that I wanted to upgrade from x99 to x299 -- was due to this:

ASRock X99m Extreme4
2 x PCIe 3.0 x16 slots (x16/x8 with 28-lane CPU)
1 x M.2 typeM (PCIe 3.0x4)
1 x PCIe 2.0 x4
No on-board Wi-Fi.
I speculate that ASRock chose that PCIe 2.0 x4 configuration for their Thunderbolt 2 AIC card that they were selling at the time (https://www.asrock.com/mb/spec/card.asp?Model=Thunderbolt 2 AIC)

MSI X299M Gaming Pro Carbon AC
2 x PCIe 3.0 x16 slots (x16/x8 with 28-lane cpu)
2 x M.2 typeM (PCIe 3.0x4)
1 x PCIe 3.0 x8 (x4 with 28-lane CPU)
Does have on-board Wi-Fi.

I really like that the X299 chipset provides two PCIe 3.0 x4 slots for M.2 SSDs that do not count against the lanes that the CPU provides. I suppose the biggest difference is that it provides the opportunity to have three GPUs.
In theory, it is possible for one to get something like three Radeon RX Vega 64 GPUs, waterblocks for each of them, along with single slot adapters such as these: http://shop.watercool.de/Single-Slot-Cover-R9-Fury-X (I'd prob just use a bandsaw), and one of these bad boys: http://shop.watercool.de/epages/WatercooleK.sf/en_GB/?ObjectPath=/Shops/WatercooleK/Products/10196

Of all the mATX X299 offerings, this one does have the most robust slot layout. I didn't realize that Intel decided to segregate the i7/i9 LGA-2066 CPUs from the Xeon CPUs with X299. That really annoys me.

As far as mATX is concerned, the other board that I am most impressed with is the ASRock X399M 'Taichi'
Looks like ASRock is defining 'Taichi' as: "Taichi" represents the philosophical state of undifferentiated absolute and infinite potential. It's ASRock's biggest offering in the easy-to-use, rock-solidly stable line of motherboards that fulfills every task – with style! Specially designed for the all-round PC user who wants a motherboard packed with premium features.

Right. Anyway, I like how the X399M Chai Latte offers a slot layout that's x16/x16/x16, and it looks like it offers 3x M.2 slots (type 'M', PCIe 3.0 x4) as well as on-board Wi-Fi (I guess technically that's 4 M.2 slots). Also, the dual on-board gigabit NICs do support teaming. Why these Intel boards provide dual gigabit Ethernet without teaming, I do not know.

###################
On a much different note, I'm starting to think that it wasn't my ASRock X99M motherboard that failed. After doing everything on this new setup, I still had the same issues. I may have done something that shorted out my PCIe devices. Whenever I tried to install the graphics driver for my dated Radeon R9 290, Windows crashed every time. BSOD.

Being on a tight budget right now and not able to afford a new GPU + waterblock, I managed to pick up an R9 390 for $160 (sans heatsink + fan). Granted, going from an R9 290 to an R9 390 typically isn't much of an upgrade.
However, going from a dead R9 290 to working R9 390, is a substantial upgrade. It allows me to squeeze a little more life out of this heatkiller waterblock, and provides me with a functional computer.
https://lh3.googleusercontent.com/J...swFc6lrJMh2Sh1cQfZg83ImwBQw=w1800-h1754-rw-no

I'm having problems with my M.2 SSD. I fear that I may have killed it :(
I just placed an order for an adapter, however I'm not holding my breath.
generic PCIe M.2 SSD to PCIe 3.0 x4 adapter
PS/2 keyboard & mouse splitter adapter Which will help with my IBM SpaceSaver II keyboard.


Imminent Upgrades
Wi-Fi: 802.11ac 3x3:3 upgrade options
* Dell DW1830

Another option could be: BPlus R5y Series With a 4x4:4 Mimo card

Or I could get silly and do a BPlus P11S-P11F along with:
* A pair of these: Linux 4x4 WiFi Module + PCIe Adapter Bundle
* Compex WLE1216V5-20 - 5GHz 80+80MHz 4×4 802.11ac Wave 2 Wireless Module
* Compex WLE1216V2-20 - 2.4GHz 4×4 MU-MIMO 802.11ac Wireless Module

One can only dream though lol.

Before I installed the board in my case, I pulled the VRM heatsink off and recorded the dimensions with a micrometer.
I'll prob get a heatkiller waterblock for the VRM / Mosfets, such as this Heatkiller SW-X 80 DIY Universal MOSFET Water Block
 
As an Amazon Associate, HardForum may earn from qualifying purchases.
To put it into perspective: since ~2005, one of my hobbies has been modifying and redesigning Honda/Acura manual transmissions. Between that and spending a few years in my early 20s working as a mechanic, I 'feel' like I've got relatively decent hands-on mechanical assembly/disassembly experience. I can fully disassemble a transmission, replace all the bearings/seals and other parts, use a die grinder to 'machine' the trans housing for more clearance, modify a gear or two on a bench sander, and fully reassemble everything in about 4 hours.

I've completely disassembled this thing about 4 times. It takes me over 14 hours to fully reassemble this thing.
DSC_0208-2.JPG


Being on a tight budget right now and not able to afford a new GPU + waterblock, I managed to pick up an R9 390 for $160 (sans heatsink + fan). Granted, going from an R9 290 to an R9 390 typically isn't much of an upgrade.
However, going from a dead R9 290 to working R9 390, is a substantial upgrade. It allows me to squeeze a little more life out of this heatkiller waterblock, and provides me with a functional computer.
IMG_20180731_025322_669__1800x1754.jpg




Some overclock testing.
IMG_20180806_053441_883__1800x1592.jpg
 
My next plans for this build entail purchasing a pair of 512 GB M.2 NVMe SSDs and striping them together to form a fast RAID0 volume. I haven't yet decided between the XPG SX8200 Pro 512GB ($100) and the WD Black M.2 NVMe 500GB ($120). I would prefer the WD, but with the current price disparity between the two, I may choose the XPG. I'll decide when I'm ready to purchase (hopefully within the next 30 days).
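Since RAID0 striping keeps coming up in this build, here's a minimal sketch of how a stripe set maps a logical byte offset onto two drives; the stripe size and drive count are illustrative assumptions, not the settings of any particular controller:

```python
# Minimal sketch of RAID0 address mapping: logical offset -> (drive, offset).
STRIPE = 128 * 1024   # 128 KiB stripe size (assumed for illustration)
DRIVES = 2            # e.g. two 512 GB NVMe SSDs

def map_lba(byte_offset: int) -> tuple[int, int]:
    """Return (drive index, byte offset on that drive) for a logical offset."""
    stripe_no, within = divmod(byte_offset, STRIPE)
    drive = stripe_no % DRIVES                      # stripes alternate drives
    local = (stripe_no // DRIVES) * STRIPE + within
    return drive, local

# Consecutive stripes land on alternating drives, which is where the roughly
# 2x sequential throughput of a two-drive RAID0 comes from.
print(map_lba(0))            # (0, 0)
print(map_lba(STRIPE))       # (1, 0)
print(map_lba(2 * STRIPE))   # (0, 131072)
```

Note this is exactly why RAID0 doubles the failure surface too: every large file is interleaved across both drives, so losing either one loses the volume.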

A few months ago, I learned about a potential obstacle that Intel has imposed on X299 motherboards, by (supposedly) preventing non-Intel M.2 NVMe SSDs from being striped together in RAID0 to form a single BOOTABLE volume using the on-board M.2 slots. I cannot seem to find a consistent answer as to whether or not this limitation exists. If it does, I have a few ideas as to how I can get around it, or at least not let it impede me from building the most ridiculous (read: over-engineered) μATX Mini-Workstation.

If this limitation (inability to create a bootable RAID0 volume from two M.2 NVMe SSDs in the on-board M.2 slots) does exist:
1) See if I can use a bootloader installed on another drive (USB thumb drive). I would look at bootloaders like the ones that I used in my previous OSX86 / Hackintosh projects, where I used the Chameleon bootloader to boot Lenovo R61 / T61 / X61 series laptops to OSX. I recall them having a lot of flexibility with UEFI. However, it was 7+ years ago when I did this; I'm sure a lot has changed since then.
2) Use one of these Dell M.2 NVMe PCIe adapters.


Other considerations / Further Expansion: For the sake of my amusement (or in the event that there's someone else out there who wishes to build a MicroATX monster), and if this Intel RAID limitation does exist, here are some of the considerations I made 4+ years ago about breaking out an on-board M.2 Type M slot (PCIe 3.0 x4) using risers/cables/adapters to create a PCIe 3.0 x4 slot that another expansion card can use (if needed):
* Bplus R4y Series (M.2 -> Female PCIe)
* M.2 to U.2 adapter, then SFF-8643 to Female PCIe (looks like there's even more options than there were when I researched this a while back)
 