DSL Ideas and Suggestions :: Would you donate your DSL system ...



Below is my vision.  How many of you would be interested in donating processor cycles if we had such a system?  Please reply.  You also have my permission to copy my signature block into your signature block as a show of support.

My vision is of a lightweight Linux distribution with openMosix as an option.  The user could then go to a website and click a checkbox to choose a project to donate processor cycles to.  Payment, if any, could go to a charity and/or to the open-source development team(s) for the distribution and tools used.

If I were a university administrator, for example, I would give a 3 inch CD to every incoming freshman.  Confidential data would be restricted to the university's secure clusters.  Research labs and students would have the option to use all the processing power of the whole university community.

(A side effect would be turning the Humanities Department computers into a super computer for the science departments.  Science computers could also end up running textual analysis for the Religious Studies department as well!  <laugh>)

(Every program put on the cluster would have to be vetted, otherwise the university would rapidly become one huge CD and DVD ripping engine ... and one bug could bring the whole mess to a grinding halt!)  8-O

Still, I'm sure SETI@Home has addressed those same issues.

As a real-world example:  My CPU usage as I'm typing here is running between 3 and 25 percent, and my RAM usage is running at 12 percent.  I'd love to be able to donate 50 percent to discovering a cure for AIDS or cancer or whatever.

Fearmongers may tell me that such a system would be used to do horrible things.  Well, so has the fountain pen in the hands of a crook!  Homeland Security isn't telling us to turn in our fountain pens -- yet.

Nothing wrong with a little vision; most good things start with a wish, then a bit of work, and in the end, success.
I would support this cause with unused cycles ... I have a lot of computers around. :)

Check out this post for clusters.

http://damnsmalllinux.org/cgi-bin....;hl=pvm

The original site for clusters:
http://www.beowulf.org/

This is a link for building SoftType clusters; DSL would be best for this type.  You can use ssh to access all of your nodes and MPI or PVM to control them.

http://fscked.org/writings/clusters/cluster-3.html#ss3.3
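To make the ssh part concrete, here's a minimal sketch of fanning a command out to every node in a soft cluster.  The hostnames are hypothetical stand-ins for a real hosts file; by default the function only shows what it would run, so nothing actually needs ssh access to try it.

```python
# Minimal sketch: run the same command on every node over ssh.
# Node names are hypothetical; a real cluster would read them
# from a hosts file (e.g. /etc/hosts or a machines list).
import subprocess

NODES = ["node01", "node02", "node03"]  # hypothetical hostnames

def run_on_nodes(command, nodes=NODES, dry_run=True):
    """Fan a shell command out to each node via ssh and collect results."""
    results = {}
    for node in nodes:
        argv = ["ssh", node, command]
        if dry_run:
            # Just show the command that would be run on each node.
            results[node] = " ".join(argv)
        else:
            results[node] = subprocess.run(
                argv, capture_output=True, text=True
            ).stdout.strip()
    return results

print(run_on_nodes("uptime"))
```

With `dry_run=False` and passwordless ssh keys set up, the same loop would really execute `uptime` on every node; MPI or PVM would then take over for actual parallel work.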

I have built a cluster with Red Hat 9 and used it once for a random number generator.  Sixteen machines were quite fast for a 1024-digit random number.  I never had a use for it afterwards.

Thanks for the moral support guys!

My vision is of a system that would work out of the box for the ordinary user.  Beowulf, though a great system, requires way too much programming and configuration support.

That's why I'm focused on openMosix, which handles load balancing in the background.  openMosix, however, works at the local LAN level.  My vision is to hack it to work at the Internet level.  This would require the openMosix kernel patch _and_ another patch as yet unwritten.

Since I am not a C hacker, I need to interest someone who is.  Two or three components seem to be the minimum needed:

1. The kernel patch that does what the openMosix patch does, but across the Internet, through one's firewall/router;

2. A CGI script that accesses a database of information about projects, connects nodes to the projects they select and tracks the amount of time donated; and

3. A Firefox plugin to do anything needed on the node end, like maintain a database of previously selected projects and a database of servers, and report the amount of time donated.
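For component 2, here's a hypothetical sketch of the bookkeeping such a script would sit on top of: connecting nodes to the projects they select and tracking donated time.  The class, project names, and node IDs are all invented for illustration; a real version would live behind a CGI script with a persistent database.

```python
# Hypothetical sketch of component 2's bookkeeping: a registry that
# connects nodes to selected projects and tracks time donated.
# All names here are made up for illustration.

class ProjectRegistry:
    def __init__(self):
        self.projects = {}   # project name -> set of node ids
        self.donated = {}    # (node, project) -> seconds donated

    def add_project(self, name):
        self.projects.setdefault(name, set())

    def join(self, node, project):
        """Connect a node to a project it selected."""
        self.projects[project].add(node)

    def record_time(self, node, project, seconds):
        """Accumulate the amount of time a node has donated."""
        key = (node, project)
        self.donated[key] = self.donated.get(key, 0) + seconds

    def total_for(self, project):
        """Total seconds donated to one project across all nodes."""
        return sum(s for (n, p), s in self.donated.items() if p == project)

registry = ProjectRegistry()
registry.add_project("cancer-research")
registry.join("node-a", "cancer-research")
registry.record_time("node-a", "cancer-research", 3600)
print(registry.total_for("cancer-research"))  # 3600
```

The Firefox plugin in component 3 would keep a node-side mirror of the same kind of records.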

I'm working on finding a scripting language that I like.  (Others could, and will, use other languages.)

However, I am _not_ a C hacker and need to find one enthusiastic enough to take this on.  Once started, advice could be had from the openMosix community.

So, if you've written a few C programs and kernel hacking would be just the thing to stretch your abilities, hop on board.


P.S. to John & Roberts:  Once working, the patch(es) will be drop-in code that will require little more than an extra line in your makefile.  As best as I can tell, the kernel patch adds just a few hundred bytes and does not affect the functioning of the kernel unless the rest of the openMosix software is turned on.  What I'm saying is, the kernel patch could be painlessly added to the base distro and the rest could be a *.dsl or a *.tar.gz loaded by the user.

Quote (newby @ July 20 2006,00:25)
(A side effect would be turning the Humanities Department computers into a super computer for the science departments.  Science computers could also end up running textual analysis for the Religious Studies department as well!  <laugh>)

(Every program put on the cluster would have to be vetted, otherwise the university would rapidly become one huge CD and DVD ripping engine ... and one bug could bring the whole mess to a grinding halt!)  8-O

Still, I'm sure SETI@Home has addressed those same issues.

Not a bad idea.  I've run SETI@Home and Folding@Home in the past.  Your idea sounds similar to Folding (see http://folding.stanford.edu/ ), only more specific.

SETI and Folding are not big clustered supercomputing projects, though.  The way they deal with what I quoted up there is that each "donating" computer runs a program that processes a chunk of data, then uploads the result back to the project when finished.  There's no interconnection between the users' computers like there is in a clustered network.
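That work-unit model can be sketched in a few lines.  The queue, the unit IDs, and the "processing" below are stand-ins for a real project's server and science code; the point is just that each chunk is independent, so donors never need to talk to each other.

```python
# Sketch of the work-unit model: each donor fetches an independent
# chunk, processes it locally, and uploads the result.  No donor-to-donor
# communication is needed.  Data and computation here are stand-ins.

def fetch_work_unit(queue):
    """Stand-in for downloading a chunk from the project server."""
    return queue.pop(0) if queue else None

def process(chunk):
    """Stand-in for the real computation (e.g. signal analysis)."""
    return sum(chunk)

def upload(results, unit_id, value):
    """Stand-in for posting the finished result back to the server."""
    results[unit_id] = value

server_queue = [(1, [1, 2, 3]), (2, [4, 5, 6])]  # (id, data) pairs
server_results = {}

while (unit := fetch_work_unit(server_queue)) is not None:
    unit_id, data = unit
    upload(server_results, unit_id, process(data))

print(server_results)  # {1: 6, 2: 15}
```

An openMosix-style cluster is the opposite design: processes migrate between interconnected machines, which is why it needs the kernel patch and LAN-level trust that the @Home projects avoid.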
