9th Annual BOINC Pentathlon hosted by SETI.Germany

Posted on the PrimeGrid web site.



What does this mean?

PrimeGrid has two "manual" programs where the work isn't sent through the BOINC system but still earns BOINC credit. In the past, people abused this by stockpiling manual work and turning it all in during the contest.

The two programs are:

Manual Sieving: http://www.primegrid.com/forum_forum.php?id=22
PRPNet: http://www.primegrid.com/forum_forum.php?id=24

The most important thing to know is that if you don't know what manual credit is, this isn't going to affect you.

If PrimeGrid gets chosen, then stick with the main selections on the project page, not these obscure manual programs.

Edit: if you are a badge collector, these manual sieving points do get you another PrimeGrid badge; they just won't help with the Pentathlon race.
 
Ah I see. Makes sense then. Thanks for clarifying that.
 
Rosetta@home is the Sprint. Starts on the 11th.
 
Rosetta@home is CPU only. I have updated the first post. Yes, you can adjust the run time (size) of the work units. For bunkering, go with a full day; after releasing the bunker, drop it back down to 1 hour.
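One way to handle the bunker cache locally, if you'd rather not change it on the website, is a global_prefs_override.xml in the BOINC data directory. This is only a minimal sketch with an assumed one-day buffer; BOINC Manager picks it up via Options → Read local prefs file (or a client restart):

<global_preferences>
<!-- keep roughly one day of work queued while bunkering -->
<work_buf_min_days>1.0</work_buf_min_days>
<!-- small extra buffer on top of the minimum -->
<work_buf_additional_days>0.25</work_buf_additional_days>
</global_preferences>

Once the bunker is released, drop work_buf_min_days back to something small so the client stops hoarding. Note the "1 hour" above refers to the Rosetta run-time preference on the project website, which is a separate setting.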
 
Last Pentathlon, pututu verified the 100-task limit per host as well
I set a 6-hour task length and there doesn't appear to be any limit yet. So far 221 tasks have been downloaded on a 16-thread machine. The 100-task limit was based on the 2016 Pentathlon, so it's probably outdated.


 
Some numbers: project, team, credits, unique members

Code:
NumberFields@home  
SETI.USA               42939952    33
Planet 3DNow!          28441721   115
[H]ard|OCP             21746544    37
SETI.Germany           17964952    74
Team China             17774932    55
TeAm AnandTech         10142328    14
Overclock.net           8995658    25
Czech National Team     7961384    50
L'Alliance Francophone  4963616    50
Meisterkuehler.de Team  4004827    10

Rechenkraft.net         3174402    33
USA                     2513436    14
Chinese Dream           1742917     2
Crunching@EVGA          1672820     6
BOINC.Italy             1587918    15
AMD Users               1577222    12
Ukraine                  937290     7
BOINC@AUSTRALIA          843624    21
Das Kartell              518489     3
UK BOINC Team            496632     7

BOINC@MIXI               272711     3
Team 2ch                 266870    11
LITOMYSL                 264240     4
BOINCstats               235552     9
Crystal Dream            132023     3
LinusTechTips_Team        87977     5
SETIKAH@KOREA             84159     2
U.S. Army                 82489     1
BOINC@Pfalz               53053     1
BOINC@Poland              51048     6

BOINC Confederation       17057     1




Universe@Home  
SETI.USA               41552000     43
Planet 3DNow!          35534016    117
L'Alliance Francophone 15020672    118
Team China             13114000     81
[H]ard|OCP             11550000     26
SETI.Germany           11209328    108
Overclock.net          10446000     31
Czech National Team     7116000     87
Rechenkraft.net         6836664     51
BOINC@Poland            6778624    143

TeAm AnandTech          6449336     18
BOINC@AUSTRALIA         4335332     37
BOINC.Italy             3090000     62
Meisterkuehler.de Team  3048000      9
Chinese Dream           2896666      1
Crunching@EVGA          2649332     16
USA                     2608000     65
Ukraine                 1206000     15
BOINCstats              1197336     29
LinusTechTips_Team      1151333     14

LITOMYSL                1014000      4
AMD Users                800668     18
SETIKAH@KOREA            739336     86
Das Kartell              695333      3
Team 2ch                 672667     17
BOINC@MIXI               354667      3
UK BOINC Team            277334     23
BOINC Confederation      221333      3
BOINC@Pfalz              109333      2
Crystal Dream             94667      7
U.S. Army                 29333      8
 
Do we know the exact time that the sprint starts? Or can it start at any time on May 11? Sorry, still trying to figure this out.
 

It starts on the 11th at midnight UTC (= GMT). What time that is locally depends on where you live. For me in Tokyo (UTC+9) it starts at 9 AM, while some of my hardware is in UTC+1.

In which area of the world are your systems running? What timezone?
 
CPU and GPU tasks spend about the same amount of time crunching and earn about the same number of points.

However, if your GPUs are sitting idle right now (or mining and you want to move them over), then running Asteroids right now on JUST the GPUs, and blocking the project after you load up a bunch of tasks, will let us dump a decent amount of points when the project starts.

The key here is that since it's running on your GPUs, it won't take CPU time away from the projects that are currently running.

I hope that makes sense.
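One extra sketch, not from the post above: if you want the Asteroids GPU tasks to stay off the CPU threads, an app_config.xml in the Asteroids project folder (same format as the PrimeGrid config later in this thread) can shrink the CPU reservation per GPU task. The app name period_search below is an assumption on my part; check client_state.xml or the event log for the exact name before using it:

<app_config>
<app>
<!-- app name is assumed; verify it in client_state.xml -->
<name>period_search</name>
<gpu_versions>
<!-- one task per GPU -->
<gpu_usage>1.0</gpu_usage>
<!-- reserve only a sliver of a CPU thread so CPU projects keep running -->
<cpu_usage>0.05</cpu_usage>
</gpu_versions>
</app>
</app_config>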
 

Makes sense, but it only runs on Nvidia?
 
I am going to go with yes, they only run on Nvidia. If you have AMD GPUs, which I think you do, then help us in the Formula BOINC race that we're kind of sidestepping at the moment for the Pentathlon. Those of us who have AMD GPUs (not me) will be running it, but most of the team is on Nvidia.

http://formula-boinc.org/sprint.py?sprint=5&lang=&year=2018

The project there is Moo! Wrapper. It starts tomorrow. If you load it up now, let it gather 250 tasks per GPU and then block the IP: 199.217.115.142

Then unleash it tomorrow around this time and let it keep crunching.
 
Awesome thanks!
 
PPS Sieve prefers Nvidia devices over AMD.
GFN uses double precision, and thus AMD outproduces Nvidia.

You can use multithreading to speed up the CPU applications, but you have to use an app_config.xml to do so.

App_config for all sub-projects - let me know if something needs to be added.
Change N to the number of CPU threads, or the number of GPUs, to use per work unit. Typically people use a fraction of a GPU to run multiple work units; for example, setting <gpu_usage>0.5</gpu_usage> tells BOINC to use half the GPU for each work unit, so two work units run on the card. The line <cpu_usage>1</cpu_usage> tells BOINC to dedicate a full CPU thread to each of those work units. The config is very long and can be trimmed down to only the sub-projects you plan on running; however, if you plan on running other PrimeGrid tasks in the future, it is good to have it already set up. :)

<app_config>
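<!-- Replace every N below:
     gpu_usage = fraction of a GPU per task (e.g. 0.5 = two tasks per card),
     cpu_usage = CPU threads reserved per GPU task,
     -t N and avg_ncpus = CPU threads used by each multi-threaded LLR task. -->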
<app>
<name>pps_sr2sieve</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>genefer17low</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>genefer17mega</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>genefer16</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>genefer18</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>llrSOB</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrSOB</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrMEGA</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrMEGA</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrTRP</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrTRP</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llr321</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llr321</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrPSP</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrPSP</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrTPS</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrTPS</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrWOO</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrWOO</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrCUL</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrCUL</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrPPS</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrPPS</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>ap27</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>genefer</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>genefer_wr</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>llrPPSE</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrPPSE</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrSR5</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrSR5</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrESP</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrESP</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
<max_ncpus>N</max_ncpus>
</app_version>
<app>
<name>genefer15</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>genefer19</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>genefer20</name>
<gpu_versions>
<gpu_usage>N</gpu_usage>
<cpu_usage>N</cpu_usage>
</gpu_versions>
</app>
<app>
<name>gcw_sieve</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>gcw_sieve</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
<app>
<name>llrGCW</name>
<fraction_done_exact/>
</app>
<app_version>
<app_name>llrGCW</app_name>
<cmdline>-t N</cmdline>
<avg_ncpus>N</avg_ncpus>
</app_version>
</app_config>
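One usage note for the config above: app_config.xml goes in the PrimeGrid project folder inside the BOINC data directory (the folder named after www.primegrid.com), and you can load it without restarting via Options → Read config files in BOINC Manager's advanced view.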
 
Stick with PPS Sieve on Nvidia for max points in this race. With higher-end cards you definitely want to run two tasks per GPU. I don't have experience with AMD.

If you have a GPU and want to find primes, go with the Genefer tasks.

Here is a table of how long the tasks take on a 1080 Ti, and the potential points per day.

[attached image: table of task run times and points per day on a GTX 1080 Ti]

Genefer 15 can be run with two tasks per GPU (which kind of sort of helps with 16 and 17 too), but it doesn't help with the higher Genefer numbers. The code is programmed really well.
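If all you want is the two-per-GPU PPS Sieve setup, a cut-down sketch of the big app_config above would look like this (pps_sr2sieve is the app name already used in that config; one CPU thread per GPU task is just a reasonable starting point):

<app_config>
<app>
<name>pps_sr2sieve</name>
<gpu_versions>
<!-- half a GPU per task = two sieve tasks per card -->
<gpu_usage>0.5</gpu_usage>
<!-- budget one CPU thread per GPU task -->
<cpu_usage>1.0</cpu_usage>
</gpu_versions>
</app>
</app_config>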
 
My TR rig is pulling a ton of watts for these projects $$$. Gonna have to get into PrimeGrid config files when I get home from work.
 
Posted from their website.

Lots of teams voted for Rosetta, hence it is the Sprint, as described in the rules. Thanks to SG for being transparent.
The GPU discipline goes to PrimeGrid due to its popularity compared to SETI.

Rules:
  1. Marathon
    • The project is set by the organizers.
  2. Sprint
    • The project with the most votes that provides workunits with a quorum of 1.
  3. City Run
    • The CPU project with the most votes that is not already chosen for Marathon or Sprint.
  4. Cross Country
    • The GPU project with the most votes.
  5. Swimming
    • The CPU project with the most votes that is not already chosen for another discipline.


List of project suggestions:

  • 16 votes for Rosetta@home [⇒ chosen for Sprint] (AMD Users, BOINC.Italy, BOINC@AUSTRALIA, BOINC@MIXI, BOINCstats, L'Alliance Francophone, Overclock.net, Planet 3DNow!, Rechenkraft.net, SETI.Germany, SETIKAH@KOREA, Team 2ch, TeAm AnandTech, Team China, USA, [H]ard|OCP)
  • 14 votes for Asteroids@home [⇒ chosen for City Run] (BOINC.Italy, BOINC@MIXI, Crunching@EVGA, Czech National Team, L'Alliance Francophone, LITOMYSL, Planet 3DNow!, SETI.Bitola, SETI.Germany, SETIKAH@KOREA, Team 2ch, TeAm AnandTech, Team China, Ukraine)
  • 13 votes for PrimeGrid [⇒ chosen for Cross Country] (AMD Users, BOINC@MIXI, BOINC@Pfalz, BOINCstats, Crunching@EVGA, Czech National Team, LITOMYSL, Planet 3DNow!, SETIKAH@KOREA, Team 2ch, Team China, USA, [H]ard|OCP)
  • 11 votes for SETI@home [not chosen because another GPU project got more votes] (BOINC.Italy, BOINC@AUSTRALIA, BOINC@Pfalz, SETI.Bitola, SETI.Germany, SETIKAH@KOREA, Team 2ch, TeAm AnandTech, Team China, Ukraine, [H]ard|OCP)
  • 10 votes for Universe@Home [⇒ chosen for Swimming] (BOINC@MIXI, BOINCstats, L'Alliance Francophone, Overclock.net, Planet 3DNow!, Rechenkraft.net, SETI.Bitola, SETI.Germany, Ukraine, [H]ard|OCP)
  • 9 votes for Citizen Science Grid (AMD Users, BOINC@AUSTRALIA, Crunching@EVGA, Czech National Team, LITOMYSL, Rechenkraft.net, SETI.Bitola, TeAm AnandTech, Ukraine)
  • 7 votes for Amicable Numbers (BOINC Confederation, BOINC@AUSTRALIA, L'Alliance Francophone, Overclock.net, Rechenkraft.net, SETI.USA, USA)
  • 6 votes for RakeSearch (AMD Users, BOINC Confederation, BOINC@Pfalz, Czech National Team, LITOMYSL, SETI.USA)
  • 5 votes for SRBase (BOINC Confederation, BOINC.Italy, BOINC@Pfalz, Crunching@EVGA, Overclock.net)
  • 4 votes for NFS@Home (BOINCstats, SETI.USA x2, USA)
  • 1 vote for YAFU (BOINC Confederation)
 
If someone is intimidated by the super long app_config above, please post what sub projects you would like to run and we will tailor an app_config specifically to those apps. It is a lot simpler than it looks and doesn't need to include everything. Knowing what hardware you plan to run it on may help as well.
 
What should I be running on my AMD rig? Preferably something FP64-heavy so my FirePros can take full advantage.
 
GFN work would be better served by an AMD GPU.
Best bang for your buck, if time permits, are the GFN world record (WR) work units; IIRC they are worth 500k points each, but they are monster work units.
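For GFN-WR on an AMD card, the relevant piece is just the genefer_wr entry from the long app_config earlier in the thread; a minimal sketch, with one task per GPU and a full CPU thread reserved as an assumed starting point:

<app_config>
<app>
<name>genefer_wr</name>
<gpu_versions>
<!-- one world-record task per GPU -->
<gpu_usage>1.0</gpu_usage>
<!-- reserve a full CPU thread to keep the GPU fed; adjust as needed -->
<cpu_usage>1.0</cpu_usage>
</gpu_versions>
</app>
</app_config>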
 