How to get 20k PPD without bigadv :p

Status
Not open for further replies.
That will not guarantee you GPU2 only WUs, unfortunately. There are many non-Fermi units out there for Core 15, with more coming, so you still run a high risk of receiving some of them. Core 15 seems to favor G80/G92 for whatever reason.
If I'm reading this correctly then it seems the 200-series cards are the odd ones out if they can't perform well with the GPU3 WUs. Would it be better in the future if one owns for example, a GX2 instead of GTX295 card, or is this but a temporary situation?
 
remember guys this is only worth running on G92 or Fermi cards.. the G200 cards absolutely suck on GPU3 WUs..
Well, this just sucks! All I have are 260s.
 
edit: ...
 
OK, I retried this on my card and all I've been getting is "server is not available". This has actually been the case since last night, when I did a test run: I was only able to receive one core15 WU before the problem appeared. It's the reason I never got the second GPU to receive these units. When I switch back to the -forcegpu nvidia_g80 flag I instantly receive work. Anyone know why the -forcegpu nvidia_fermi flag no longer works with my system?
 
maybe this was an unintended result so PG laid the smack down?
 
maybe this was an unintended result so PG laid the smack down?
I doubt it because everyone else is getting work. This problem occurred last night and is still in effect.
 
Just checked prices on a 9800GT; those things cost way too much for their age.
 
Speaking of sisters, I wish Syribo was still around.. *sniff* At least we still have Elledan. :3
 
Got one of these on my 8800GT. Niiiiiiiiice
Project: 11233 (R1, C4, G19)
2m 11s TPF, 6,015 PPD which is a good 1-1.5k boost from normal (if not more).
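As a sanity check, TPF converts to PPD like this (a minimal sketch assuming the usual 100 frames per WU; the ~912-point credit is back-calculated from the numbers above, not a published value):

```python
def ppd(credit_points, tpf_seconds, frames=100):
    """Points per day, assuming a fixed per-WU credit and 100 frames per WU."""
    seconds_per_wu = tpf_seconds * frames
    wus_per_day = 86400 / seconds_per_wu
    return credit_points * wus_per_day

# 2m 11s TPF = 131 s; ~912 points is an assumed credit that
# reproduces the ~6,015 PPD quoted above.
print(round(ppd(912, 131)))  # → 6015
```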
 
I just retried using the -forcegpu nvidia_fermi flag again after a WU uploaded and still having the "no appropriate server was available" message come up. Anyone else see this? What can be the issue?? It worked once last night, but nothing since... :confused:
 
Nope, 8 successful WUs from my 9800 GTX; you're missing the -advmethods flag though
 
Nope, 8 successful WUs from my 9800 GTX; you're missing the -advmethods flag though
I tried with and without it, no difference. The issue is the "nvidia_fermi" part of the flag. I receive WUs if I use -forcegpu nvidia_g80 with or without -advmethods, but as soon as I change it to "nvidia_fermi" the error message comes up. It did work, but only once.
 
OK, maybe I should post some additional info. This is a Win XP system with the very latest drivers (v256.98), and the full set of flags I'm using is -gpu 0 -forcegpu nvidia_fermi -advmethods -verbosity 9. Both clients are in their respective folders. This is the console version of the GPU3 client. Could I be using an older version of the GPU3 client, and might that be the problem?

EDIT: confirmed, for some reason I had the old executable which obviously wouldn't work with the "fermi" flag but did work with the "g80" flag. How it got into the folder I have no idea. Must have been when I was moving stuff around... :eek:
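For what it's worth, the full command line above can be sketched as an argument list (a minimal sketch; "fah6.exe" is an assumed executable name, so substitute whatever binary actually ships in your GPU3 console client folder):

```python
import subprocess

# Assumed executable name for the GPU3 console client; check your own folder.
FAH_EXE = "fah6.exe"

cmd = [
    FAH_EXE,
    "-gpu", "0",                  # bind this client instance to GPU index 0
    "-forcegpu", "nvidia_fermi",  # request Fermi-class (core 15) work
    "-advmethods",                # accept advanced/early work units
    "-verbosity", "9",            # maximum log detail
]

print(" ".join(cmd))
# subprocess.run(cmd)  # uncomment to actually launch the client
```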
 
I got this installed on my GX2 box. It looks like I'm averaging 6K per WU, or 48K for the whole box. I don't have any of the cards overclocked. It's now beating my triple GTX 295 box. :eek:

The systray installer does overwrite the existing folders, but I made copies of everything before I installed.
 
I got this installed on my GX2 box. It looks like I'm averaging 6K per WU, or 48K for the whole box. I don't have any of the cards overclocked. It's now beating my triple GTX 295 box. :eek:
It might be a little too early, but I think you should be seeing closer to 8000 PPD per GPU. I have my card OCed one shader step higher than yours, if you're running stock. My card is an EVGA.

Does anyone know if the GX2 cards work according to the shader domain profile of other G92 cards or is it different? I never found a conclusive answer to this when I searched for it a while back.
 
The PPD does seem to be going up. Initially (around 4% complete) the WUs were ranging from 5.8K to 6.2K, and now it's 6K to 6.5K at about 12% complete. Also, the 10968 WUs seem faster than the 112xx ones, but I only have two of the former and six of the latter.
 
The PPD does seem to be going up. Initially (around 4% complete) the WUs were ranging from 5.8K to 6.2K, and now it's 6K to 6.5K at about 12% complete. Also, the 10968 WUs seem faster than the 112xx ones, but I only have two of the former and six of the latter.
Hmm, my 8800GT in another system is averaging 7200 PPD on a P11257 with a 1728MHz shader clock. My GX2 is working on a P10962 on GPU 0 and doing just under 8000 PPD, and on GPU 1 it's working on a P11229 and doing 7900 PPD. When only one GPU was working on a core15 WU, I saw 8200 PPD but for some reason it dropped when both GPUs received core 15 WUs.

I think your cards should be seeing higher PPD; I only OCed my card one step higher than stock. You should be getting well over 60k PPD :eek: provided nothing else is a factor I'm currently unaware of...
 
I'm running Muon on the same machine, but that doesn't use all of the CPU (maybe 85%). I'll stop it and see if there are any changes.

My shader clock is 1500MHz (edit: EVGA Precision shows it's actually 1512MHz).

edit: OK, apparently Muon was choking it. My PPD has already shot up to 7.1-7.4K per GPU.
 
This doesn't seem to be working on an older 8800 GTS 640MB (G80). It wouldn't download a WU for me.
 
My shader clock is 1500MHz (edit: EVGA Precision shows it's actually 1512MHz).
That's probably why, would be my guess. I checked with RT and the shader clocks are set at 1728MHz on my GX2 (which answers my question about the shader domain), and that's OCed a bit over stock. Your card's stock shader frequency is much lower than mine, plus I increased mine a bit beyond that. What happens if you try increasing the shader clocks?
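If PPD scales roughly linearly with shader clock (an assumption for a ballpark figure, not a measured rule), bumping 1512MHz up to 1728MHz would project like this:

```python
def scaled_ppd(ppd_ref, clock_ref_mhz, clock_new_mhz):
    """Naive linear estimate: PPD assumed proportional to shader clock."""
    return ppd_ref * clock_new_mhz / clock_ref_mhz

# ~7,200 PPD observed at 1512 MHz, projected up to 1728 MHz.
print(round(scaled_ppd(7200, 1512, 1728)))  # → 8229
```

That lands right around the ~8000 PPD figures quoted above, which is consistent with the shader clock being the main difference.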

This doesn't seem to be working on an older 8800 GTS 640MB (G80). It wouldn't download a WU for me.
What did you get?
 
What did you get?

I kept getting an error saying it was trying to get a WU from the server. At first it did download a core15 WU (I think) but it never actually started folding. I already uninstalled it and went back to GPU2 and it's working fine. I'm wondering if the G80 is too old to do this.
 
I kept getting an error saying it was trying to get a WU from the server. At first it did download a core15 WU (I think) but it never actually started folding. I already uninstalled it and went back to GPU2 and it's working fine. I'm wondering if the G80 is too old to do this.
No, I think you are having the exact same problem I encountered which I described a few posts up. I am certain you are using the older GPU2 executable. I only noticed this when I compared the contents of my folders to another machine's GPU client and saw that it wasn't the proper file. I would go to Stanford's site and download the GPU3 client again. Wipe out this folder's files.
 
My 9800GTX+ and GTS250 can both do this. They are identical (750/1900/1000) and are doing like 8.2-8.6k PPD. Thanks for the tip!

My 8800GTX is failing at it. It downloads the core as needed, but fails to run it with a "Core Status 63(99)" error.

I think this is because the 8800GTX's GPU is a G80. Can anyone clarify?
 
No, I think you are having the exact same problem I encountered which I described a few posts up. I am certain you are using the older GPU2 executable. I only noticed this when I compared the contents of my folders to another machine's GPU client and saw that it wasn't the proper file. I would go to Stanford's site and download the GPU3 client again. Wipe out this folder's files.

Hmm, I may try this again after the current WU I just started finishes (or maybe tomorrow). Although Brak710 above also has a G80 based GPU that isn't working for him. Has anyone been able to do this with a G80?
 
I'm going to have to make the quick conclusion that a G80 does not run Core_15.
 
OK, here's a question: will an 850TX support an i7 at 3.6GHz folding A3s on 6 cores, plus a 470 and a GX2, all in the same box? If so I could build a killer little box: 30k from GPUs alone, plus maybe 10k from the CPU.

Or should I not even fold A3s?
 
OK, here's a question: will an 850TX support an i7 at 3.6GHz folding A3s on 6 cores, plus a 470 and a GX2, all in the same box? If so I could build a killer little box: 30k from GPUs alone, plus maybe 10k from the CPU.

Or should I not even fold A3s?

I do believe if you made the SMP client use all but one core and then put the GPUs on that dedicated one, you would see good results.

AFAIK, all cores in the SMP run only as fast as the slowest one, so you don't want that 3.6GHz getting dragged down on all of them by the GPU clients.
 
Would my PSU support it?
 
Would my PSU support it?

Oh, PSU, right...

It should. I don't think that system would pull more than 600W at the wall, but I'm just going by what I've read about multiple-GPU systems.
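As a rough sanity check, here is a back-of-the-envelope power budget; every wattage below is my own TDP-class assumption for illustration, not a measurement of that box:

```python
# Hypothetical component draw estimates in watts, assumed for illustration.
draw = {
    "GTX 470": 215,          # reference-card TDP-class figure
    "9800 GX2": 197,
    "i7 @ 3.6 GHz": 150,     # overclocked CPU estimate
    "board/RAM/drives": 75,
}

total = sum(draw.values())
psu_watts = 850
headroom = psu_watts - total
print(total, headroom)  # → 637 213
```

Even with these pessimistic guesses the total stays comfortably under the 850TX's rating, which matches the "should be fine" call above.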
 
I'm going to have to make the quick conclusion that a G80 does not run Core_15.
You guys could be right. I honestly don't know but it was exactly what I saw up until I replaced the folder with the proper files. Also, I'm using the console client. /shrug
 
You guys could be right. I honestly don't know but it was exactly what I saw up until I replaced the folder with the proper files. Also, I'm using the console client. /shrug

I tried to install GPU3 on my G80 card, did a fresh client install, and got the same results as above: it would download core15 and then nothing.
 