Thomas Koch (Project administrator, developer, scientist)
Joined: 17 Feb 12, Posts: 436, Credit: 37,847, RAC: 0
I've just finished a major update to our server scripts, which will simplify the job creation and release of BOINC jobs for members of our group, and thus hopefully lead to a more stable work unit delivery. Much of the server code has been consolidated, and should run faster now.
The client side should not be affected by our changes. Let me know if there are any problems.
---
Ran out of work.
---
Going in search of a project with enough work readily available to keep users busy. I have left this project before and now I remember why. Bye.
---
Don't put all the blame on the project. Between 14 May, 0:00 UTC and 19 May, 0:00 UTC, POEM is part of this year's BOINC Pentathlon, so demand is currently much higher than usual. According to the server status, there are more than 200,000 tasks in progress.
Unfortunately, the new GPU apps have not been fixed and re-released yet, so all Pentathlon participants are trying to grab CPU jobs. :)
---
All the blame on POEM? Nope. But it's been widely known for days, if not weeks, that POEM was going to be the final project in the Pentathlon. They could have been ready for it, or made an announcement that they did not have any more work ready to go. Instead we get the usual silence here on the forums from the admins.
Funny how the more popular BOINC projects, with lots of people donating computer time, are the ones that take the time to post regular project updates on their forums and generally engage with the people running tasks for the project. PrimeGrid, SkyNet POGS, and GPUGRID are all good examples of projects with good donor relations and active forums. You didn't notice POEM in that list for a good reason.
---
Agree with Khali!
I have never received any answer to questions related to the GPU apps' bugs.
+ No answer to your post: http://boinc.fzk.de/poem/forum_thread.php?id=1041
Maybe they are busy with the new GPU apps, LOL. I hope these WUs will be "real bio's" this time.
Have Fun and Take Care!
NB: After the Pentathlon, I will crunch again for projects such as the ones you mentioned ;)
Thomas Koch (Project administrator)
Hi there,
I was informed about the BOINC Pentathlon early enough, and re-enabled all frozen POEM@HOME work units for BOINC last week. With the numbers from recent challenges in mind, I thought a queue size of more than 200,000 work units would suffice.
Unfortunately, I underestimated the huge run on POEM work units that started with the official announcement two days ago.
Our team worked hard to create new work units yesterday. I expected the new jobs to be ready before the announced Showbag period of the Pentathlon began. However, the jobs failed a final test on our BOINC test server, and I did not want to risk a large batch of failing jobs during the challenge.
After investigating this today, I'm almost sure the problem isn't the jobs themselves but our testing environment. The new jobs will go live as soon as I'm sure about that.
I'm sorry for the inconvenience.
---
@Thomas: How come I am not surprised by that? ;)
From discouraging power crunchers, to entirely giving up a successful GPU client without any replacement in sight... and now finally coming to a point where you cannot serve a few days' CPU demand. This concept seems completely logical and consistent to me. It's always bad for a distributed computing project when crunchers become a surplus.
---
I seem to be alone in my belief that the purpose of the crunchers is to serve the project (i.e., the science) and not the other way around. But I don't do video games either.
---
I wouldn't be so sure of that, Jim.
Cheers,
MarkR
---
Hi, Jim: you are not alone... :)
---
Makes perfect sense to me, Jim. And I do a video game from time to time. ;)
And to my fellow Pentathlon participants I'd say: Check your cache/bunker size! I see that the number of jobs in progress went up, but the number of users in the last 24 hrs went down and we have only 3.5 days left to go. I've a hunch some people take more than they need. Keep in mind that we have no quorum here, so there's absolutely no need to get more work than you can finish in time.
I got a few dozen new work units today, which was all I needed to keep my rigs busy, so I'm happy for now. It would be cool to have some more work for the weekend, though, when Enigma is finished as a discipline. :) And if not, well, that's the salt in the Pentathlon soup.
Thanks to Thomas for keeping us up to date.
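As a side note on cache sizes: the BOINC client's work buffer is governed by two preferences, settable in the manager's computing preferences or via a global_prefs_override.xml file in the client's data directory. A minimal sketch with illustrative values (roughly a quarter-day buffer), not a recommendation for any specific host:

```xml
<global_preferences>
  <!-- keep at least this many days of work buffered -->
  <work_buf_min_days>0.1</work_buf_min_days>
  <!-- extra days of work fetched on top of the minimum -->
  <work_buf_additional_days>0.25</work_buf_additional_days>
</global_preferences>
```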
Thomas Koch (Project administrator)
It's always difficult to make decisions that satisfy both crunchers and project scientists.
As I stated, we are currently running every possible job, and we also started a new simulation yesterday. It would have been possible to create more copies of that job to keep the queue filled during the challenge, but that would have been of no use to the project.
I won't start wasting computing power just to keep clients fed.
As in many BOINC projects, our work units are created consecutively, each depending on its antecedent. New work units are created as soon as we receive completed ones.
IMHO, the problem at the moment is the large number of buffered work units on clients. I've just reduced the maximum number of tasks a single client can download.
I hope this will lead to a more even job distribution during the challenge.
Our GPU application has not been given up. In fact, a replacement for the old GPU jobs has already been prepared; however, after the failed start in March, I don't want to act rashly, and I want to make sure that won't happen again.
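The consecutive work-unit scheme Thomas describes can be sketched in a few lines: each new work unit is derived from the result of its predecessor, so the queue can only grow as fast as results come back. All names below are illustrative assumptions, not POEM@HOME's actual code:

```python
def next_work_unit(completed_result):
    """Derive the follow-up work unit from a completed result.

    Because the successor needs the predecessor's output, work units
    cannot be mass-produced in advance without wasting computation.
    """
    return {
        # the chain position advances by one per completed result
        "generation": completed_result["generation"] + 1,
        # the successor continues from the best state found so far
        "seed_conformation": completed_result["best_conformation"],
    }

# Each returned result seeds exactly one successor:
result = {"generation": 3, "best_conformation": "state-xyz"}
successor = next_work_unit(result)
print(successor["generation"])  # → 4
```

This is why simply duplicating jobs would fill the queue but add nothing to the science: only completed results unlock new, distinct work.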
---
Thanks Thomas, I know I for one appreciate all the work you guys put in. Although I am sad that the GPU work is not around at the moment (POEM is my fav GPU project), I know it will be released when it's ready, and I wouldn't want it any other way.
---
+1
---
Thanks Thomas,
> ...It would have been possible to create more copies of that job, to keep the queue filled during the challenge, but it would have been of no use for the project. I won't start wasting computing power just to keep clients fed.
Very good point! If BOINC crunching became a pure competition over who can waste the most energy, it would turn into the opposite of its original intention.
> IMHO, the problem at the moment is the large amount of buffered work units on clients. I've just reduced the number of maximum tasks a single client can download.
Another good idea. When some people become greedy and so afraid of running out of work that they buffer large amounts, it benefits no one but their own credit totals. Better to prevent this.
> Our GPU application has not been given up. In fact a replacement for the old GPU jobs has already been prepared; however after the failed start in March, I don't want to act rashly and make sure that won't happen again.
Take your time. I can hardly wait to start GPU work for POEM again, but your attitude is absolutely right. Thanks for taking the time to share with us.
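For context, the BOINC server software supports exactly this kind of per-host cap through the project's config.xml. A minimal sketch of the relevant fragment, with an illustrative value rather than POEM's actual setting:

```xml
<config>
  <!-- maximum number of in-progress jobs per host (illustrative value) -->
  <max_wus_in_progress>16</max_wus_in_progress>
</config>
```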
Thomas Koch (Project administrator)
I noticed the shrinking queue size for poempp jobs at noon today. I thought the job sorter might be stuck, as happens from time to time, but it is showing some new behaviour and fails frequently and randomly. I'm looking for the cause...
Thomas Koch (Project administrator)
Our server had some issues with its database. Fortunately, no data was corrupted, and after restarting the service, our sorter is running fine.
After completing one manual run, I'll restart the other daemons.
---
Thank you Dr. Koch, the jobs are running again without issue!