BOINC Pentathlon 2015

They were informed in advance of the correct names. They should be using HardOCP.
 
Unless someone is bunkering, we are going to have a major position drop once YoYo's portion of the challenge begins. We don't have much going on at that project right now. I still encourage people to do all they can for MalariaControl. It is still our best chance for a medal.
 
For anyone who might be tempted to help with positioning on YoYo: don't quote me on it, but I believe Muon has the quicker-running work units. If someone has better info, please share, as it has been several months since I pushed YoYo.

Also, I would appreciate it if anyone with an account, or who may make an account, adds [H] to their name at YoYo, as that affects how our team shows up in Muon1's rankings. It does not affect your points/stats/etc. at BOINCStats or Free-DC.
 
Day 5
[H]ard|OCP (#8) have not slowed down and are launching an attack.


However, [H]ard|OCP are unable to continue at that speed. Is that a temporary state of weakness or is #7 all that can be achieved?

Rank 11 is currently being claimed by Ukraine and [H]ard|OCP who have lost ground due to their GPU-weakness.
 
Day 6
From Rank 16 onwards we find TeAm AnandTech (#16), OcUK (#17), RKN (#18), Gridcoin (#19), AMD Users (#20), [H]ard|OCP (#21) and TitanesDC (#22) continuing on their way.

[H]ard|OCP appear to have smelled blood, they don’t seem to be content with #7 as unsettling times are on the cards.

This would be a chance for TitanesDC (#20), SETIKAH@KOREA (#21), [H]ard|OCP (#22) and the AMD Users (#23) who are currently letting this slip through their fingers.
 
They were informed in advance of the correct names. They should be using HardOCP.

Isn't there a way to combine or get rid of the confusing team names and get everything the same? This has got to be one of the biggest deterrents for folks running BOINC. :confused:
 
brilong, there is. However, you have projects like WCG that claim they can't do it. I think they just don't want to, as they see it as a waste of their time, and they have tons of things that need fixing that they never seem to have time for. Then the admins who are willing to do it must get approval from the different team founders and make sure they want to be merged. That can be a delicate issue, as none of us has "official" [H] authority in the matter, and I doubt Kyle will want to go through any of that trouble.

Something else that can be done: whoever is the team founder at those other teams can change the settings so that the unofficial teams no longer accept new members. That is an option, but you have to get those users on board, and some may not even be all that active. Perhaps I will use the HardOCPtest account and initiate a founder change at those teams to see if we can get a response. If they don't respond, founder status will automatically be assigned to the HardOCPtest account, but that can take months. This also brings to light another issue: none of our teams outside of FAH are technically "official," as [H] only has one official DC project that it supports....
 
Isn't there a way to combine or get rid of the confusing team names and get everything the same? This has got to be one of the biggest deterrents for folks running BOINC. :confused:

I have also contacted Eric at LHC to see if they will be willing to merge them as I am an admin/founder on both.

CPDN is down again. Shock! :eek: Oh wait... that isn't that big of a surprise.

I will also check with SETI as I don't see why that couldn't be adjusted since I'm also the founder/admin at both teams there too.

WCG as mentioned has given us the big finger when asked in the past.

Einstein, I am waiting for the test account to get the option to initiate transfer and then I will see what we can do from there.
 
Time to start switching off of MalariaControl. If you have any work to upload, get it done.
 
Looks like we finished in 6th place for MalariaControl. It goes to show how important bunkering is for challenges like this; we would have been in the top 3 for sure if other teams hadn't bunkered.

YoYo is also currently down. So, make sure WCG is set as a backup if you have plans to run YoYo. We should also be hearing about the last project soon.
 
Day 7
AF continue to secure Rank 4, whereas SG (#5), Team China (#6), CNT (#7) and [H]ard|OCP (#8) are running at the same pace but independently of each other.

The team from [H]ard|OCP continues to be very courageous. AF (now at #7) were overtaken and handed over Rank 6.

The newbies of Gridcoin and [H]ard|OCP are currently both paddling towards #19, followed by OcUK (#20) who seem to be taking in the underwater world.

There are no changes behind them, only [H]ard|OCP were able to win a place (#21) and thus move past SETIKAH@KOREA.

On equal points at #13 we find Crunching@EVGA and [H]ard|OCP, followed by Gridcoin at #15.
 
Looks like I'm late to the party.

I'll look into firing up the 2P tonight; until then, my old phone and laptop are all I can throw at WCG.
 
Day 8

Corks are popping too at [H]ard|OCP. Rank 6 is a super place, especially as the Pentathlon-newbies have managed to keep the whole of L'Alliance Francophone (#7) behind them. Looks like Obelix will need an extra wild boar to counter the frustration.

Only 300 credits are currently separating BOINC@MIXI (#19) and [H]ard|OCP (#20)

Crunching@EVGA and [H]ard|OCP continue to fight for #13, a target that remains in both RKN (#15) and Gridcoin’s (also at #15) sight.

Of some importance might be that a short while ago, [H]ard|OCP were pushed off #7 by CNT; that should please SG.

And maybe you have noticed something else. There’s still a discipline missing. The creators of the Pentathlon are keeping us on tenterhooks. The sprint project will be announced at the last possible date which will be tomorrow. That’s really mean, after such a hard Pentathlon our valiant teams must now line up for the Sprint.
 
LHC@home Six Track was announced as the last project.
 
LHC@home currently uses HardOCP, but don't worry. I have locked the other team to prevent anyone else from joining it by accident. However, I may need to ask the admin to hold off on merging our teams until after the challenge, as that could be a headache...
 
I am doing the sprint thing; hopefully it does not hurt us in WCG too badly. All rigs are currently on LHC.
 
I was in the process of moving a bunch of boxes over to LHC when the computer I use to remotely control everything went offline. No idea what's going on, but I do know it's completely unresponsive on the network. I did log in to the IPMI of a system I could recall; it grabbed a decent amount of work before the system running my SSH tunnel died. On the bright side, none of them can send work, so presuming the system isn't dead or blacklisted from the network, I should be able to prevent all the LHC work from reporting early.

This sucks big time and I'm pretty stressed out about it (really hope something bad didn't happen to that system; I have a shitload of work on it)... but I'm tired! :(
 
OK, I got my small 2P running WCG today. It's not close to what Grandpa_01 generates, but it's better than nothing.

I added the laptop to WCG yesterday, it should be able to generate some points as well.

My desktop is running LHC as usual, but I'm not bunkering on this machine; there simply aren't enough points in this low-end AMD box to make it worth the trouble. Still, everything helps.

I am doing the sprint thing; hopefully it does not hurt us in WCG too badly. All rigs are currently on LHC.

Now you just have to bunker in deep ;-)

I was in the process of moving a bunch of boxes over to LHC when the computer I use to remotely control everything went offline. No idea what's going on, but I do know it's completely unresponsive on the network. I did log in to the IPMI of a system I could recall; it grabbed a decent amount of work before the system running my SSH tunnel died. On the bright side, none of them can send work, so presuming the system isn't dead or blacklisted from the network, I should be able to prevent all the LHC work from reporting early.

This sucks big time and I'm pretty stressed out about it (really hope something bad didn't happen to that system; I have a shitload of work on it)... but I'm tired! :(

damn not good, I hope you get through to those machines soon.
 
Well... you could still bunker; you would just need to load up with WCG as well to keep the cores fed in between. Your deadlines for the LHC work should be past the bunker-drop time period.
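For anyone scripting that hold-and-release cycle, here is a minimal sketch using BOINC's stock `boinccmd` RPC tool. Assumptions: the client is local and allows RPC, and you have fetched your cache first, since setting the network mode to `never` also pauses downloads (which is exactly why the hosts-file trick discussed later in the thread exists).

```python
import subprocess

def network_mode_cmd(mode, host="localhost", password=None):
    """Build a boinccmd invocation that switches the client's network mode.

    'never' holds all uploads/reports (the bunker); 'auto' releases them.
    Returned as an argument list so it can be inspected before running.
    """
    if mode not in ("always", "auto", "never"):
        raise ValueError("mode must be 'always', 'auto', or 'never'")
    cmd = ["boinccmd", "--host", host]
    if password:
        cmd += ["--passwd", password]
    cmd += ["--set_network_mode", mode]
    return cmd

# Hold results while bunkering:
#   subprocess.run(network_mode_cmd("never"), check=True)
# Release everything at drop time:
#   subprocess.run(network_mode_cmd("auto"), check=True)
```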
 
Damn, LHC has a daily limit, no big bunkering. :(

And I believe that limit is 4 per core. If you need more, I recommend loading a few VMs and bunkering those as well. That way you can get your fill.

Work units are not consistent in size either; some take a few hours, others several. So your estimated times may be waaay off as well.
 
Also, keep in mind that Einstein finishes up tonight at 7PM CST.
 
Thankfully, the system in question was just frozen. After installing a ton of updates (I hate rebooting this thing), it finally came back to life. Unfortunately I'm slammed at work, so I won't be able to sort through the cruncher systems until later.
 
You can't.... unless someone has some advanced solution I'm not aware of...

The good news is that you can only bunker a handful of LHC work anyway.... so you probably have time to do it again....
 
Day 9

A lively up and down can be seen further back. An exchange of places took place between TeAm AnandTech (#15) and AMD Users (#16), between Gridcoin (#17) and 2ch (#18), between [H]ard|OCP (#19) and BOINC@MIXI (#20) and between LITOMYSL (#21) and OcUK (#22). No changes behind them.

It will be a little bit more difficult for BOINC.Italy (#12) however, but at least they have managed to put some distance between them and the squabblers of Crunching@EVGA and [H]ard|OCP at #13.
 
The only solution I can think of is to block the LHC server via a Hosts file.

127.0.0.1 lhcathomeclassic.cern.ch

Obviously this will prohibit downloading additional data too. I have a bunch of WUs waiting to be reported, so I'll give this a try.

If one can block an actual URL, then blacklisting
lhcathomeclassic.cern.ch/sixtrack_cgi/file_upload_hander
would be even better, as it should still allow downloading additional WUs.
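A small helper to toggle that blackhole entry in and out. This is a sketch that only transforms the file text; writing the result back to /etc/hosts (or C:\Windows\System32\drivers\etc\hosts on Windows) needs admin rights. One caveat: a hosts file can only redirect whole hostnames, not individual URL paths, so blocking just the upload handler would require a filtering proxy instead.

```python
# The blackhole entry from the post above.
BLACKHOLE = "127.0.0.1 lhcathomeclassic.cern.ch"

def toggle_blackhole(hosts_text, enable):
    """Return hosts-file text with the LHC blackhole entry added or removed."""
    # Drop any existing copy of the entry so the toggle is idempotent.
    lines = [ln for ln in hosts_text.splitlines() if ln.strip() != BLACKHOLE]
    if enable:
        lines.append(BLACKHOLE)
    return "\n".join(lines) + "\n"
```

Read the hosts file, pass its text through `toggle_blackhole`, and write it back; flip `enable` to False at drop time to let the uploads flow.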
 
The other thing I just tried doing, in hopes of being clever, was to edit the client_state.xml file and change

<upload_url>lhcathomeclassic.cern.ch/sixtrack_cgi/file_upload_hander</upload_url>

to

<upload_url>127.0.0.1</upload_url>

While this worked, BOINC won't download any new WUs due to "too many uploads pending" or some such thing.
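For reference, that edit can also be scripted. A sketch, with one assumption worth stating: the BOINC client must be stopped while client_state.xml is modified, or it will overwrite the change on its next state save.

```python
import re

def redirect_uploads(xml_text, new_url):
    """Rewrite every <upload_url> element in client_state.xml text."""
    return re.sub(r"<upload_url>.*?</upload_url>",
                  "<upload_url>%s</upload_url>" % new_url,
                  xml_text)

# Point uploads at loopback, keeping the original text around for undo:
#   original = open("client_state.xml").read()
#   patched = redirect_uploads(original, "127.0.0.1")
```

Keep a copy of the original text so the real URL can be restored once the client starts refusing new work.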
 
Einstein's portion of the challenge is over. You may return your GPUs to their normal routine.
 
Have determined, to the best of my ability at least, that LHC won't give a system more than 256 WUs no matter what one tries to do. Just FYI.
 
Day 10
A GPU-weakness could be seen at Team 2ch (#20) and [H]ard|OCP (#21).

The distance to the front runners is getting too large and there's not really any pressure from [H]ard|OCP (#7) nor from CNT (#8) behind them. It would be important for CNT to gain a place in view of the Overall Standings. Thus far, all efforts to advance have been energetically thwarted by [H]ard|OCP.

Having sped up and claimed Rank 17, [H]ard|OCP are setting their sights on more.

The reason is that neither Ukraine (#11) nor BOINC.Italy (#12), and maybe not even [H]ard|OCP (#13), can be written off.
 
We are making progress; we are about one rank change at a project away from 12th place overall. Don't forget that LHC's portion goes live at 7PM CST tonight. Their server status suggests there isn't much work available right now, but I have PM'd with Eric and he says there will be a News posting soon, as well as more work to come. Stay tuned, crunch hard, and have fun.

At this point, everyone should be crunching WCG, YoYo, or attaching to LHC. You can run any combination you like, as there is still need in all three. YoYo's portion finishes tomorrow night at 7PM CST, so if you are just now hopping on board, you may want to stick with WCG or LHC SixTrack. Until LHC feeds out more work, it may be rough getting a bunker or cache loaded up; if you run into that, just activate another project until the work starts to flow again.

As always, if you aren't sure about something, just ask. This LHC project does NOT require VirtualBox like the others; it is a traditional BOINC project.
 
I contacted Eric about making some cache-setting changes to help accommodate the higher-end servers and keep them fed with plenty of work. This is what he replied:

Hello again; I don't quite follow all this BUT my colleague Igor
says:

I increased the following parameters in the config.xml:
<daily_result_quota> 250 </daily_result_quota> (was 50)
<max_wus_to_send> 14 </max_wus_to_send> (was 4)
<max_wus_in_progress> 14 </max_wus_in_progress> (was 4)

This may increase the amount of work processed by the machines out there.

Boinc is restarted.

Hope that is OK. Eric.

So, if anyone could test and see what they can pull now, I will update things. :)
 
Less than 3 hours before LHC's portion of the challenge begins...
 
As many of you know LHC@home has been selected to host
the Sprint event of the BOINC Pentathlon organised by
Seti.Germany. Information can be found at
http://www.seti-germany.de/boinc_pentathlon/22_en_Welcome.html
The event starts at midnight and will last for three days.

This is rather exciting for us and will be a real test of
our BOINC server setup at CERN. Although this is the weekend
following Ascension my colleagues are making a big effort to
submit lots of work, and I am seeing a new record number of active WUs
every time I look. The latest number was over 270,000 and the Sprint
has not yet officially started.

We have done our best to be ready without making any last minute changes
and while this should be fun I must confess to being rather worried
about our infrastructure. We shall see.

We still have our problems, for a year now.

I am having great difficulties building new executables since Windows XP
was deprecated and I am now trying to switch to gfortran on Cygwin.
It would seem to be appropriate to use the free compiler on our
volunteer project.

We are seeing too many null/empty result files. While an empty result can
be valid if the initial conditions for tracking are invalid, I am hoping
to treat these results as invalid. These errors are making it extremely
difficult for me to track down the few real validated but wrong results.
I have seen at least one case where a segment violation occurred, a clear
error, but an empty result was returned. The problem does not seem to
be OS or hardware or case dependent.

I am also working on cleaning the database of ancient WUs. We had not
properly deprecated old versions of executables until very recently.

I am currently using boinctest/sixtracktest to try a SixTrack which will return the full results giving more functionality and also allowing a case to be automatically handled as a series of subcases.

Then we must finally get back MacOS executables, AVX support, etc

Still an enormous amount of production is being carried out successfully
thanks to your support.

I shall say no more until we see how it goes for the next three days. Eric.
____________

http://lhcathomeclassic.cern.ch/sixtrack/forum_thread.php?id=3942#27450
 
EVGA has already started confirming that the 14 WUs per core/thread setting is in effect. If the other settings are working, that would mean you could get up to 1250 work units per machine now instead of the 250 mentioned earlier. Let me know if anyone confirms this.
 