We ran SETI as a test load when I first built our Beowulf, but I found it increasingly difficult to justify using all that electricity, so I switched to running Folding@home until we got into the top 1,000.
Isn't Stanford looking into a cluster client? I was under the impression one would be available at some point.

Supposedly there is a lot of data traffic between SMP threads, making an F@H cluster unrealistic due to the high latency of a network interface (milliseconds versus nanoseconds). I looked into it a while ago. The problem is that with a molecular dynamics (MD) simulation like F@H, the process as a whole isn't very parallel.
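A rough back-of-envelope sketch of the latency argument above. The numbers here are illustrative assumptions, not measurements from the actual F@H client: it assumes the engine needs one latency-bound exchange per timestep, and compares a shared-memory exchange against a network round trip.

```python
# Illustrative numbers only -- not F@H measurements.
TIMESTEPS_PER_SECOND = 1_000_000   # assumed rate for a fast in-memory engine
SHARED_MEM_LATENCY_S = 100e-9      # ~100 ns between threads on one machine
ETHERNET_LATENCY_S   = 100e-6      # ~100 us round trip on commodity Ethernet

def comm_overhead(latency_s, steps_per_s=TIMESTEPS_PER_SECOND):
    """Fraction of wall-clock time spent waiting if every timestep
    needs one latency-bound exchange of forces."""
    compute_time = 1.0 / steps_per_s
    return latency_s / (latency_s + compute_time)

print(f"threads:  {comm_overhead(SHARED_MEM_LATENCY_S):.1%} of time waiting")
print(f"ethernet: {comm_overhead(ETHERNET_LATENCY_S):.1%} of time waiting")
```

With these assumed figures the threaded case wastes under 10% of its time on communication while the networked case wastes over 99%, which is the intuition behind calling a networked MD cluster unrealistic.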
How would you be able to get enough bandwidth between the machines for them to communicate effectively on a single WU?
Isn't Stanford looking into a cluster client? I was under the impression one would be available at some point.
It is called (or will be called) Folding@Cluster if I remember correctly, which I would guess would be a completely different project.
How would you be able to get enough bandwidth between the machines for them to communicate effectively on a single WU?
Well, currently the Pande Group is moving away from MPI and going to a threads-based client (aka A3), so the chances of seeing a cluster-based client for the masses are slim to none.
That said, if someone came to them and said "I have a supercomputer," then they might spend the time to develop a client for some A2 WUs.
You would need fiber or InfiniBand.
Although four bonded gigabit Ethernet links might do OK.
Cost? You can get cheap fiber gear... check CL.
It's relative, but I figure if you are serious about a cluster...
Here's the first entry from a fiber search:
Dual-port 1 Gb fiber NIC - $100
It's obviously possible, since if you look at the current Folding@home network, the latency between most of the individual units runs into the hundreds of milliseconds, seeing as it's over the internet. It's so high that it's prohibitive to let the computers communicate with each other at all; each one operates independently of the rest for a certain amount of time, then sends the results back independently. Since it's obviously possible to split proteins up into smaller work units for individual computers, it may (should?) be possible to split the work up even further.

There's a reason why clusters are used for embarrassingly parallel calculations. I really fail to see how F@H with its MD algorithm could be adapted to work with this.
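The "each one operates independently, then sends results back" pattern described above can be sketched in a few lines. This is a toy stand-in, not the real F@H protocol: each "work unit" gets its own seed and runs a fake trajectory to completion with zero communication until it reports its result.

```python
import random

def run_work_unit(seed, steps=1000):
    """Stand-in for one independent trajectory: no communication with
    any other work unit until the final result is sent back."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += rng.gauss(0, 1)   # toy dynamics, not real MD
    return seed, x

# Each "client" runs its assigned WU in complete isolation -- the
# embarrassingly parallel pattern the post describes.
results = [run_work_unit(seed) for seed in range(4)]
for seed, endpoint in results:
    print(f"WU {seed}: endpoint {endpoint:+.2f}")
```

The catch the reply raises is exactly what this sketch hides: in real MD, the forces on every atom depend on other atoms each timestep, so you can't just hand out seeds and walk away below a certain granularity.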
So, let's say I built 4 headless units to PXE boot. At that point, from my understanding, the FAH process would have to be distributed/assigned to each processor (from each node) individually.
Am I wrong in my understanding? And if I'm not, would there be any advantage to that, other than the reduced power draw from not having four hard drives connected to the headless units?
Stanford has run F@H on supercomputer clusters before, and if you have access to a supercomputing setup they will personally work with you on setting it up, but they are looking for 100+ CPU clusters. Setups with 3-4 nodes are not worth the effort and don't have the network infrastructure to support F@H.
F@H is already set up to break large projects down into work units, which is what we run now.
Is there any detailed information available on how exactly this work is split up?
Each CPU core would run a thread, with high-speed networking connecting all the systems.
Think -smp 100 or higher. Instead of MPICH running on the local network stack, it would run over a real network, which is what it was designed for in the first place.
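The threads-based (A3-style) model mentioned above can be sketched with shared memory replacing the messages MPI would send. This is an illustrative toy, not the actual client: each thread owns a slice of the "atoms" and a barrier stands in for the per-timestep synchronization.

```python
import threading

N_THREADS = 4
positions = [0.0] * 8                   # toy shared state: 8 "atoms"
barrier = threading.Barrier(N_THREADS)  # all threads sync each timestep

def worker(tid, steps=3):
    # Each thread owns two atoms; updates go through shared memory
    # instead of the MPI messages the older SMP client relied on.
    chunk = range(tid * 2, tid * 2 + 2)
    for _ in range(steps):
        for i in chunk:
            positions[i] += 1.0         # stand-in for a force/position update
        barrier.wait()                  # no thread starts the next step early

threads = [threading.Thread(target=worker, args=(t,)) for t in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(positions)  # every atom advanced 3 steps
```

The barrier is the point of the sketch: on one machine it costs nanoseconds to microseconds, while over a LAN the equivalent synchronization pays a full network round trip every timestep, which is why the thread-per-core design stays on a single box.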