multi-PC tasking

by samizdat (Vicar)
on Aug 26, 2005 at 15:31 UTC ( [id://486892]=perlmeditation )

Listening in on the chatterbox, I was inspired by Nevtlathiel's question, "Why would one ever need two machines running the same OS?", to write this ode to progress.

As I sit here, I've got a FreeBSD machine with a 3.8GHz processor and 2GB of DDR RAM. It's running Gnome as fast as my old boxen used to run FVWM 1.24. ;-]

Two gigabytes of RAM... DAMN!

When I started, we made punch decks for our high school's downtown IBM 360. In college, we waited overnight for our printouts, run as batch jobs on a CDC 7600.

In 1986, Intel 8080A/64K/360K floppy iPDS luggables were the bleeding edge because they had plug-in chip emulators that could monitor microcontroller data registers in real time. In 1988, I used 286 PCs in place of the custom Intel jobs. At one point I had a dual-286 passive-bus PC-like machine running DOS and a C program, acting as a daisy-chain serial-bus 'server'; another regular 286 hosting a Periscope debugger watching the first one; an 8051 security system keyboard with an emulator plugged in; another PC hosting that emulator plus my assembler, linker, and editor; and yet another PC running a serial-snooping monitor watching the serial bus. Hmmm, that's only four PCs... I remember six at one time, so there were probably some more, but you get the idea.

Today, my one BSD box has more DRAM than all of the machines in that whole department had RAM and disk combined! It's just finished a CVSup from a FreeBSD.org mirror and is now happily make buildworlding away with four threads going. At the same time, I've got a window open into our test lab's machine with an 'is there new data?' Perl script (there, see? this IS Perl related!) ready to alert me of more updates from the testers. I've got two other desktops full of emacs windows and terminals with tail and other goodies ready for me to get back to work on two completely different projects' worth of code, and in my dropdowns this one machine has a complete productivity suite, 3D graphics, morphing, image manipulation, and access (through X and the ethernet) to all the chip design and sim programs in our department's systems.
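
A minimal sketch of what such a poller might look like -- the directory, polling interval, and terminal-bell alert are invented for illustration, not lifted from the real script:

    #!/usr/bin/perl
    # Sketch of an 'is there new data?' poller. The watched directory and
    # the bell-in-the-terminal alert are placeholder choices.
    use strict;
    use warnings;

    my $dir       = '/net/testlab/results';   # hypothetical NFS mount
    my $interval  = 300;                      # seconds between checks
    my $last_seen = time;                     # ignore anything already there
    $| = 1;                                   # flush alerts immediately

    while (1) {
        opendir my $dh, $dir or die "can't read $dir: $!";
        my @fresh = grep {
            my $mtime = (stat "$dir/$_")[9];
            !/^\./ && defined $mtime && $mtime > $last_seen;
        } readdir $dh;
        closedir $dh;

        if (@fresh) {
            $last_seen = time;
            print "\a*** new data from the testers: @fresh\n";
        }
        sleep $interval;
    }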

I'm not even going to speculate. There's a lot of work going on with optical interconnect now (including here) and MEMS; bio-neural interfaces are starting to actually work, and IPv6 is going to let us connect a heckofalot more little machines into one mesh. As Jeff Harrow says, "Don't blink!"

Replies are listed 'Best First'.
Re: multi-PC tasking
by radiantmatrix (Parson) on Aug 26, 2005 at 16:29 UTC

    I'm young by most coder standards (only 25), but I remember doing punch-key coding of hex that represented assembler instructions, and toggling boot code on a PDP-11.

    The amount of computing power on the average desktop never ceases to amaze me -- and neither does the mentality I see where someone who plays Solitaire, uses Word, and surfs the 'Net decides a 2GHz machine with 1GB of RAM is "too slow" and drops $2K on a new box.

    At home, I have a 2.4GHz box w/1GB of RAM -- I've turned swap off, because I wasn't using it. Granted, that's Linux. But at work, I have a 600MHz box with 512MB and WinXP that I've been happily using to develop Perl (using Eclipse, no less!). Most people have far more computing power available to them than they will ever need. My PHB's PDA has a 400MHz processor -- 400MHz in a glorified address book, my GOD!

    I blame two groups: marketing and development. Marketing is responsible for what they usually do: "but, you need the newest stuff, or you won't get laid!" But "new-world" developers who learned to code at universities that used dual-Xeon boxes with 4GB of RAM don't appreciate optimization and conservation of resources. It is the "well, it will be slow on anything under 1GHz, but who uses that anymore?" attitude that drives the rush to faster, hotter, more power-hungry devices.

    Now, to a certain extent, processor time is cheaper than programmer time: no one needs to optimize Word for a 33MHz machine anymore. But when a 500-user web application that merely displays rows from a (separate) database needs a quad-processor Xeon to perform acceptably, we have problems. Especially when someone else (yeah, toot, toot, it was me) can re-write the thing in a week, using an Interpreted Language (Perl) via CGI and move it to a single-processor 333MHz machine.
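
    The shape of that kind of rewrite is nothing exotic. Here's a stripped-down sketch -- the DSN, credentials, and table are invented for illustration, not the real app's -- of a plain CGI script that pulls the rows with DBI and prints them as an HTML table:

        #!/usr/bin/perl
        # Hypothetical report page: SELECT some rows, emit an HTML table.
        use strict;
        use warnings;
        use CGI qw(header escapeHTML);
        use DBI;

        # Placeholder connection details.
        my $dbh = DBI->connect('dbi:mysql:database=reports;host=dbhost',
                               'reader', 'secret', { RaiseError => 1 });

        my $sth = $dbh->prepare('SELECT id, item, status FROM orders ORDER BY id');
        $sth->execute;

        print header('text/html'),
              "<html><body><table border='1'>\n",
              "<tr><th>ID</th><th>Item</th><th>Status</th></tr>\n";

        while (my @row = $sth->fetchrow_array) {
            print '<tr>',
                  (map { '<td>' . escapeHTML(defined $_ ? $_ : '') . '</td>' } @row),
                  "</tr>\n";
        }

        print "</table></body></html>\n";
        $dbh->disconnect;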

    The big question is this: "what can we do about it?" I don't think anything, except refusing to buy/use software that is needlessly bloated. But that even excludes Gnome these days...

    <-radiant.matrix->
    Larry Wall is Yoda: there is no try{} (ok, except in Perl6; way to ruin a joke, Larry! ;P)
    The Code that can be seen is not the true Code
    "In any sufficiently large group of people, most are idiots" - Kaa's Law
      I often wonder what the impetus will be for today's CS students to grok efficiency in cycles, bandwidth, or context switching. I had to throw away (well, recycle) a whole bunch of donated P-II's because schools in the South Valley turned up their noses at them. Never mind that they rendered BSD-based Blender3D faster than P-4's on XP could refresh Corel! No, they were "old".

      Besides the laziness, there's an IT mentality (Re: On the wane?) that thrives on making out PO's for bigger iron and more Microsoft. I'm not sure if it evolved from bureaucracy or whether it's a parallel development. The CYA/job security aspect is surely evident in both. Time and again I see the same story about successful replacement using open source as you relate, and, more often than not, within six months the guy who reports it has moved on to a more stimulating job/culture.

        That's a sad comment on the state of education today. I'd have guessed it would be easy to find a bunch of teenage computer club geeks, give them a bunch of old hardware, throw in a couple books on Beowulf clusters and watch them happily build their own supercomputer...

        -xdg

        Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

      The big question is this: "what can we do about it?" I don't think anything, except refusing to buy/use software that is needlessly bloated. But that even excludes Gnome these days...

      You can't get around a simple fact: programmer time is a resource, too.

      It takes time to optimize code for size/speed/performance, and some programmer needs to give up things they could otherwise be doing to do those optimizations.

      When the benefits of that time investment outweighed the costs, it was a good practice. Typically, those benefits were the freeing up of scarce resources, like RAM, processor time, etc.

      Now that those resources are no longer scarce, the benefits of doing all that extra optimization work become increasingly less worthwhile from a cost/benefit standpoint.

      If it costs the developer valuable time, and doesn't save anyone any time or money in return, how is it of value? In any company that values profits, it's not.

      Also: if an obvious, inefficient, dead-simple brute-force algorithm will do the job, and a tricky, complex, brittle, hard-to-understand algorithm will do it far faster, you code it the brute-force way. Why? Because it's good enough, and the cost of maintainer time matters far more than the fact that the program technically runs in ten milliseconds instead of half a millisecond.
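
      A deliberately tame Perl illustration of that trade-off (the data and sub names are invented): a linear scan is the obvious brute-force answer, a precomputed hash is the "clever" one, and on a handful of entries nobody will ever notice the difference.

          use strict;
          use warnings;

          my @allowed = qw(alice bob carol dave);

          # Brute force: scan the list every time. Obvious and hard to get wrong.
          sub is_allowed_scan {
              my ($name) = @_;
              return grep { $_ eq $name } @allowed;
          }

          # "Clever": precompute a hash for O(1) lookups. Faster for many queries,
          # but it's extra machinery -- a cache you must rebuild if @allowed changes.
          my %allowed = map { $_ => 1 } @allowed;
          sub is_allowed_hash { return $allowed{ $_[0] } }

          print is_allowed_scan('carol')   ? "yes\n" : "no\n";   # yes
          print is_allowed_hash('mallory') ? "yes\n" : "no\n";   # no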

      It's just simple economics, really. Don't expend expensive resources trying to save cheap ones; do the opposite. As computing resources have grown exponentially cheaper, our costing priorities have had to shift to keep up.

      It's no longer worth it to spend an hour of programmer time to save an hour of computing time, because the programmer's time costs much more than the computer's time. On the early mainframes, it was the exact opposite. -- AC

        Well, quite often you would not need any advanced optimization at all. Often all you'd need is someone who's been around for a while to glance over the youngsters' code and tell them not to quote variables they wanna pass to a function (hey, this is Perl, not a shell script), to add a few indexes in the database here and there, to use Int instead of Numeric(9) in the database, to use this or that module instead of wasting time trying to control MS Excel or MS Word via Win32::OLE, ...

        No one expects people to rewrite parts of their code in assembly to speed it up, or to spend hours trying to find the most efficient way to do something just to save a few cycles. Even the very basic, easy-to-implement things can help a lot. And all it would take is for management to understand that the time spent teaching & learning, and the time spent reviewing each other's code, is not wasted.
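
        To make the first of those concrete, a contrived sketch (the sub and variable names are invented): quoting a variable in a call forces stringification, which at best wastes a copy and, if the variable held a reference, breaks the call outright.

            use strict;
            use warnings;

            sub report {
                my ($rows) = @_;            # expects an array reference
                print scalar(@$rows), " rows\n";
            }

            my @data = (1 .. 1000);
            my $ref  = \@data;

            report($ref);                   # right: the reference arrives untouched
            # report("$ref");               # wrong: "$ref" stringifies to 'ARRAY(0x...)'
                                            # and @$rows then dies under strict refs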

        Jenda
        XML sucks. Badly. SOAP on the other hand is the most powerful vacuum pump ever invented.

      Your point is well taken. Programs have grown fat and lazy because there is little incentive to optimize them. Even a badly-written implementation will seem to shine on good, fast hardware.

      I can understand this a bit in commercial shops where profits matter. Programmer time is far more expensive than CPU time and memory. Worse yet, the time needed to optimize could delay the product, allowing a competitor to beat you to market. Let's face it - most software companies don't even do a thorough job of testing and debugging their products. The attitude seems to be, "Hey, it compiles! Ship it."

      Unfortunately, it's not just market pressure that drives this bloat. We see it in Open Source programs, too. Much as I'd love to blame it all on Microsoft, it's pervasive throughout the industry.

      But I'm wondering whether this is really a Bad Thing. Yes, it goes against the grain. It bothers me that programs are bloated and sluggish - but if it's easier to just use fast hardware to compensate, does it really make much difference? Isn't that just using the resources in the most economical way? I don't know.

      So my question is, why should we optimize, when that's so much more expensive than just using faster machines?

        So my question is, why should we optimize, when that's so much more expensive than just using faster machines?

        All I can offer is my personal take on the matter. There are a few reasons, IMO, that optimizing is preferable to "just using faster machines":

        Pride in quality. I think one of the values that our society is slowly losing is pride in creating something of quality for its own sake. Quality workmanship ends up priced out of the range of most people, and the rush to commoditize and profit (and to consume, on the other end) creates an environment where shininess outweighs quality. That trend will someday lead to the commoditization of development (it's already happening in some places) and a lack of demand for developers capable of quality. All that means is that I will command a lower salary.

        Economic enlightenment. It's all well and good to target deep pockets. On the other hand, selling lots of something at a low price has profit potential as well -- not everyone can afford the latest, greatest machine, especially in developing nations. I'd love to see any major software company that's trying to build markets in developing countries explain why they're not working on making sure their products perform acceptably on the machines that people can actually get there (e.g. P-II class machines).

        Environmental awareness. Unfortunately, there are a lot of hazardous materials that go into the manufacture of computer equipment; a good chunk of them stay in the machine and live in your home. That's all fine, but disposal is an issue -- and recycling these materials isn't the ultimate solution, because that process in itself creates hazardous wastes (not as much, though: recycle if you can). Why should I be forced to get rid of a 900MHz machine that I know is capable of running the type of apps my employer uses simply because the people who developed the apps were careless?

        Granted, there are things we can do to mitigate these issues, and I do encourage them. For example, that "old" 900MHz machine might find its way to a high-availability web cluster or to the test lab for the integrators to poke at. But I still think that development organizations have a degree of responsibility to be reasonably aware and careful -- to hire programmers who can (and encourage them to) think ahead and be reasonably conservative about resource use.

        And, to beat this dead horse a little harder, I remind you that I'm not necessarily talking about optimizing or refactoring: just about thinking ahead and avoiding needless resource use.

        <-radiant.matrix->
        Larry Wall is Yoda: there is no try{} (ok, except in Perl6; way to ruin a joke, Larry! ;P)
        The Code that can be seen is not the true Code
        "In any sufficiently large group of people, most are idiots" - Kaa's Law
Re: multi-PC tasking
by tilly (Archbishop) on Aug 27, 2005 at 04:21 UTC
    Here are some serious reasons why we need multiple machines running the same OS.

    1. Millions of page hits a day will tax a PC somewhat. While we could optimize, programmers cost more than computers.
    2. Load balancers and failover are a business requirement. By definition, you can't failover from a machine to itself.
    3. When we update the code on the live site, it is nice to be able to take half the machines out of service, update them, then cut over and update the rest.
    4. The production site is at a nice hosting facility. Developers work at a different site. Latency sucks.
    5. When installing experimental software, modules, etc, it is nice to do that in a physically isolated environment. Ditto when testing out things like OS upgrades.
    And those are just the reasons why we have multiple Linux machines. There are plenty more that I can think of, but they don't happen to apply to us.
Re: multi-PC tasking
by g0n (Priest) on Aug 26, 2005 at 18:16 UTC
    I use 3 machines with essentially the same OS:
    • A Shuttle box that acts as a web/print/general server, and is always left on.
    • A Thinkpad with a nice big comfortable screen, which is my normal, day to day machine.
    • A Sharp Zaurus, which allows me to do without carrying the laptop a lot of the time.

    The Zaurus amazes me. In a space small enough to carry in a pocket, I can run perl, read & write OpenOffice documents, ssh into the server through wifi or a mobile phone, and do countless other things. It runs for five hours plus on an 1800mAh 5V battery. My first home machine was an ICL 1603 in the '70s, and it was huge! A 5-inch green CRT, spools of 1/16-inch magnetic tape. The comparison between the two is startling.

    Fifteen years ago, my father (who's worked in the industry since the sixties) had a habitual verbal slip, referring to disks of, e.g., 20MB as 20K. Chuckle as I did at the time, I now find myself doing the same with GB and MB.

    But with the speed of proliferation of computers, and the pace at which power is increasing, should we be at least a little concerned?

    On the one hand, sales of laptops are rocketing, and they generally use less power than desktops. Sales of LCD displays, which use a fraction of the power of CRTs, are also shooting up. Improvements in manufacturing techniques to allow more powerful machines also allow those same machines to run cooler, and hence use less power, at lower clock speeds. When a home user goes to the PC shop, and the sales assistant asks what they want the machine for, it's usually "word processing, internet, and games". The games are getting more and more sophisticated, the graphics are getting more and more realistic. From that point of view, the future of computing is looking constantly cooler and more exciting.

    OTOH, PCs and servers are proliferating at an astonishing rate, and the energy use attributable to them is going up, not down (not to mention the energy needed to manufacture them, the energy needed to recycle the components of old machines, or the pollution caused by dumping them). Those amazingly sophisticated games are part of what drives the need to constantly upgrade.

    OTOOH, those games are probably what will drive the future. When we're putting 'trode sets on our heads to access the net instead of relying on clumsy old keyboards & mice, it will probably have been those resource hungry games that got us there.

    So what's my point? If there is one, it's that the future has cool things in it, but those cool things have consequences, but they're still cool, but etc. We should probably keep both things in mind.

    --------------------------------------------------------------

    g0n, backpropagated monk

      So true, g0n. I just took down a rack I had at my house, and I think my power bill will drop by over $140USD each month! The 1U's alone kept the whole household awake 'cept for me; I'm mostly deaf already. Most of those 5 rack and 2 desktop machines were there just as replicated failover servers, and their disks were mostly empty except when tar'ing nfs backups of each other. Not only do we abuse CPU cycles these days, we abuse watts and watts of power for less and less important purposes. We've since put the replicated machines in the same colo (http://www.dataconnectla.com, highly recommended, 1/2 rack inc b/w & pwr for $330/month!) with two separate 1Mbit feeds from different second/third-tier providers. Consolidated the usage, too; even though we have the space, we're on four machines now, not 12.
Re: multi-PC tasking
by jcoxen (Deacon) on Aug 26, 2005 at 16:21 UTC
    My reason for having two machines running the same OS is simple.

    Box #1 - Development
    Box #2 - Production

    Jack

      Ahhh, but even that (at least for Web) can be 8080'd with two apaches and two MySQL's. :D Good point, tho!
Re: multi-PC tasking
by neniro (Priest) on Aug 26, 2005 at 17:06 UTC
    There is one thing I really dislike about up-to-date PCs: the noise of the fans needed to cool those systems. My Mac mini is silent and as fast as needed for all my apps (but I don't play games on it). Increases in speed are something I barely notice nowadays -- back in the days of 68k boxes this was different, but today nearly every machine is fast enough for normal work.
Re: multi-PC tasking
by GrandFather (Saint) on Aug 27, 2005 at 02:28 UTC

    Having read most of the replies with interest, I'm prompted to note that at work we haven't updated our computers, beyond changing to LCD monitors, for about three years now and don't expect to any time soon. It used to be that processor speed was increasing right in line with Moore's law, and we were getting a useful increase in processing speed about every two years that made it worth upgrading. At present, processor speed has run into a brick wall. Hard drives are still getting larger and faster, and that makes some difference, but not a lot.

    What I am really noticing at the moment is that the prices for computers are plummeting. I guess that now that processor speeds have pretty much stabilised, chip prices and manufacturing costs are dropping, and we are seeing that, plus the effects of competition, in the bottom line. I guess that helps with finding the heckofalot of machines that dwildesnl is after :).


    Perl is Huffman encoded by design.
      Even more than just 'machines', think chips. Now that the peripheral architecture has shifted to USB, and PCIe/x is much more decoupled from the creaky 8086 bus that drove 'ISA', we're seeing single-chip PCs with ARMs and other more RISC-y CPU architectures, where you have gobs of memory and FLASH on one chip. Once you get away from interconnect, disk drives, and multi-part packaging, the price of computers goes way down. I've seen single-chippers with 256M of RAM and 256M of FLASH, gigabit ethernet, and multiple USB ports, and they're going to _retail_ for a hundred bucks or less (current models here). Oh, yeah, and they don't need fans, either! What's your programming going to look like when you have a stack of them in a box? Or, even more so, when your _wall_ is covered with them?

      We've already seen the beginnings of this with modern cars. Now that serial data busses like CAN are common, virtually every idiot light in a car has its own little CPU. The same thing will happen with houses, although it won't be ubiquitous. The thing to realize is that once you put _any_ silicon in a widget, you can add a CPU for no additional cost. Once the bugs are out of power-line networking (or, at least, once everyone standardizes on a minimally buggy implementation), you will see _everything_ having connected processing power. The truth of things, these days, is that an ARM with gobs of memory and a multi-hundred-megahertz clock costs very little more than an 8051 running at 11.059 MHz. And once the English work on static asynchronous CPU designs hits production, power usage will cease to be as big an issue or cost factor. Likewise, shrinking geometries lead to lower voltage swings. The only remaining barrier to ubiquitous computing will be the toxicity of the semiconductors being doped, but economy of scale will solve that, too. Even without considering the coming of photonic computing as a replacement for semiconductors, I can see a not-too-distant future where every manufactured thing, from structural framing to toys, has a networked wireless CPU in it, even if the thing itself has nothing needing control. This fabric of CPUs will enable any house to have a plethora of active agents in residence.

      Why? Ask rather, why not?
