I just mean that, specifically on the Titan V, NVIDIA decided to disable one of the four HBM modules because it hurts their soul to give consumers fully functional hardware. On these WUs, 16 GB of memory = zero errors, and 12 GB = ~15-20% errors.
I'm running 4 tasks per card, but it is on an older CPU that is definitely holding it back. I just tried to start up another VII on a newer CPU, but now they seem to be out of work. Just like old times!
So what is good at Einstein these days? I fired up my Radeon VIIs, which only seemed to get GW work. They don't seem to be running well; however, it's too early to tell for sure.
Because of QRB, F@h rarely benefits from running multiple units. How does power consumption look during the "low PPD" events? I do agree with Holdolin's sentiment; after running F@h on Linux, it is really hard to go back to Windows, lol.
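To put a rough number on why QRB punishes running multiple units at once, here's a toy sketch of the quick-return bonus math. The base points, k-factor, and deadline below are made-up placeholders (real values vary per project), but the shape of the formula — credit scaling with the square root of how fast the unit comes back — is the part that matters:

```python
import math

def qrb_points(base_points, k_factor, deadline_days, days_to_return):
    """Quick-return bonus: credit scales with the square root of how
    quickly the unit is returned relative to its deadline."""
    bonus = max(1.0, math.sqrt(k_factor * deadline_days / days_to_return))
    return base_points * bonus

def ppd(base_points, k_factor, deadline_days, days_per_unit, concurrent=1):
    """PPD when 'concurrent' units share the GPU. Each unit takes roughly
    'concurrent' times longer, so its bonus shrinks even though the number
    of units finished per day stays the same."""
    per_unit_days = days_per_unit * concurrent
    per_unit_points = qrb_points(base_points, k_factor, deadline_days,
                                 per_unit_days)
    units_per_day = concurrent / per_unit_days
    return units_per_day * per_unit_points

# One unit at a time vs. two at once: same throughput, lower PPD,
# because each unit's return time (and thus its bonus) is worse.
solo = ppd(10000, 0.75, 5.0, 0.25, concurrent=1)
dual = ppd(10000, 0.75, 5.0, 0.25, concurrent=2)
```

With these placeholder numbers, doubling up costs you a factor of sqrt(2) in PPD while finishing the same number of units per day — which is why F@h is the odd project out here.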
I'd like to propose a rules amendment: WFeather - and only WFeather - is still eligible to be nommed for HDCOTM. He can't win of course, but he can be nommed and lose every month!
You know when you have to do something, even though you know it's wrong, because you may have originally nommed that person. Yeah... Anyway, as an apology I updated one of my old memes to fit the occasion.
Nice 'trudging' so far wareyore! Certainly putting up impressive numbers!
I'll try to post my table this evening. These units are slow and with two projects (randomly assigned) it takes a little while to build a dataset.
Yep - definitely use multithreading. I would start with what you used for the last challenge and test higher core counts to see how they scale. Unfortunately I just started testing, so all I can confirm is that they are LONG. LOL
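For anyone who wants to pin the thread count per task rather than taking the project default, a BOINC app_config.xml along these lines does it. The app name below is a placeholder - check client_state.xml for the real name of whichever sub-project you're running - and the `-t` switch is what the LLR-based apps take; other apps may use a different flag:

```xml
<app_config>
    <app_version>
        <app_name>example_llr_app</app_name>
        <avg_ncpus>4</avg_ncpus>
        <cmdline>-t 4</cmdline>
    </app_version>
</app_config>
```

Drop it in the project's directory under your BOINC data folder, then re-read config files (or restart the client) for it to take effect.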
Hmm, I wonder what power it was drawing. If it was at the full 450 W, that's not all that impressive - but properly power limited, I imagine it is still very impressive.
These are definitely weird - but I think that makes them fun! I think the Xeon Phi may be the most efficient CPU I have. How often can you say that a failed product from several years ago does great? Cache and memory limitations also made these a blast to tune - each architecture and...
It was perfect for me - just headed in to work a few minutes late. I know PG says they try to move them around throughout the day to help folks in different timezones/schedules.
Almost done with the benchmarking - I just have to decide if I want to let the 1T/unit run finish (it runs so badly - I think the memory can't keep up), but most of the serious configurations are complete.
Initial results seem to indicate very good scaling up to higher core counts. I'll post them soon, but I'll likely be running most of my systems in the 4-10 thread range (it can vary a bit based on CPU arch) and I'll try to do a full scaling test with my 10980XE.
I can confirm these appear to be long units. I'm starting testing with 4T/unit, 1T/core, but it looks like I have a lot of testing to do this weekend :D