mandog has asked for the wisdom of the Perl Monks concerning the following question:

For a Microsoft Win2K system admin class, I've written a trivial script to help the students practice performance monitoring. The script loops forever alternating a few seconds of massive CPU use with a few seconds of idleness.

However, I’m afraid I’m making it too easy for the students

Is there a way for me to precisely suck CPU? I'd like to suck say 20% of CPU for a few seconds, then suck 85% for a few more seconds, then suck 95% for a few more seconds, etc...

I didn’t find any CPU monitoring modules on CPAN.

#!/usr/bin/perl -w
# Script to drive CPU use to max for a specific period.
use strict;

my $busy_period  = 4;    # seconds of spinning
my $sleep_period = 4;    # seconds of idleness

print "Press Ctrl-C or use Task Manager to kill me\n";
while (1) {
    my $end = time() + $busy_period;
    my ($n1, $n2, $n3, $n4) = (3.14) x 4;
    while (time() < $end) {
        $n1 = $n2 = $n3 = $n4 = $n1 * $n2 * $n3 * $n4;
    }
    sleep($sleep_period);
}

(Note: the original compared against the seconds field of localtime, which wraps back to zero at the top of each minute and can spin forever; comparing epoch seconds from time() avoids that.)
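One way to get the percentages the question asks for is a duty cycle: spin for a fraction of each short window and sleep for the rest, so the load averages out to the target over any window longer than the cycle. This is a sketch of my own, not code from the thread; the sub names and the 100 ms window size are arbitrary choices.

```perl
#!/usr/bin/perl
# Sketch: hold an approximate CPU percentage by busy-waiting for a
# fraction of each short window and sleeping for the remainder.
# The window should be shorter than the monitor's sampling interval.
use strict;
use warnings;
use Time::HiRes qw( time sleep );

# Split one window into (spin, idle) durations for a target percentage.
sub split_window {
    my ($percent, $window) = @_;
    my $spin = $window * $percent / 100;
    return ($spin, $window - $spin);
}

# Burn roughly $percent of one CPU for $seconds, using $window-sized cycles.
sub burn_cpu {
    my ($percent, $seconds, $window) = @_;
    $window ||= 0.1;                      # 100 ms duty-cycle window
    my ($spin, $idle) = split_window($percent, $window);
    my $stop = time() + $seconds;
    while (time() < $stop) {
        my $spin_end = time() + $spin;
        1 while time() < $spin_end;       # busy-wait phase
        sleep($idle) if $idle > 0;        # idle phase (HiRes fractional sleep)
    }
}

# e.g. burn_cpu(20, 5); burn_cpu(85, 5); burn_cpu(95, 5);
```

The percentage is only an average, as blakem points out below each instant is still all-or-nothing, but with a 100 ms cycle the Performance Monitor graph comes out looking steady.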

Replies are listed 'Best First'.
Re: gulping CPU with precision
by John M. Dlugosz (Monsignor) on Aug 14, 2001 at 05:01 UTC
    In general, you want your duty cycle to be faster than the Performance Monitor's sampling rate.

    To mimic real workloads, block on a kernel object. Normally that's the file or whatever the process needs to complete, but you can make it a "waitable timer" and set it to go off at a precise time. So your loop would chew CPU for so many quantums, then block for so many. Look up the quantum value (how long a timeslice is), and use high-precision timing primitives rather than localtime, since you need an order of magnitude (or two) more granularity.

    The waitable timer is specified to millisecond resolution. The native 64-bit time value (GetSystemTimeAsFileTime) is updated once per quantum, so you can watch it jump if you read it in a loop. That tells you what the timeslice size is, too!

    A quick check:

    use strict;
    use warnings;
    use Win32::API;

    my $f = new Win32::API ('Kernel32.dll', 'GetSystemTimeAsFileTime', ['P'], 'V');
    my $buffer = 'x' x 8;    # 8-byte FILETIME value
    for (1..100000000) {
        $f->Call($buffer);
    }
    # printf "%vx\n", $buffer;
    under PerfMon shows that it does stay in user mode. That is, the constant calling of GetSystemTimeAsFileTime doesn't mess things up by constantly switching to kernel mode. It should be simple, if the internal variable is simply accessed and returned.

    I'm getting roughly 300,000 iterations per second, which is quite enough for high granularity: spin for a few milliseconds, block for a few milliseconds.
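    The jump-watching trick described above can be approximated portably. This is a sketch of my own using Time::HiRes's clock rather than GetSystemTimeAsFileTime, so what it measures is the finest clock increment Perl can see, which is not necessarily the scheduler quantum; the sub name and sample count are arbitrary.

    ```perl
    #!/usr/bin/perl
    # Sketch: read a high-resolution clock in a tight loop and record the
    # smallest step between distinct readings. On a clock that only ticks
    # once per timeslice this would reveal the quantum; Time::HiRes usually
    # shows much finer steps. Illustrative only.
    use strict;
    use warnings;
    use Time::HiRes qw( time );

    sub smallest_jump {
        my ($samples) = @_;
        $samples ||= 100_000;
        my $last = time();
        my $min;
        for (1 .. $samples) {
            my $now = time();
            if ($now > $last) {
                my $jump = $now - $last;
                $min = $jump if !defined $min || $jump < $min;
                $last = $now;
            }
        }
        return $min;    # smallest observed clock increment, in seconds
    }

    printf "smallest observed jump: %.9f s\n", smallest_jump();
    ```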

    —John

Re: gulping CPU with precision
by blakem (Monsignor) on Aug 14, 2001 at 05:16 UTC
    There really isn't such a thing as "sucking 20% of the CPU for a few seconds." The CPU is either crunching on your code with 100% of its might at this precise moment, or it's not. The way you phrase it makes it sound like you'd like to gobble 20% of the CPU for a continuous block of seconds. It's the difference between "1 in 5 women are pregnant" and "this woman is 20% pregnant."

    I'd recommend using a much finer granularity than sleep, perhaps Time::HiRes has what you are looking for, though I'm not sure if it works under Windows.

    use Time::HiRes qw( usleep );
    usleep($microseconds);

    -Blake

      If you've seen the tool he's talking about, you'd know what he means.

      I agree it's 100% at any given instant, but averaged over a chosen time segment (e.g. 1 second), a mix of tasks will each be running a certain percentage of the time. If they don't add up to 100%, you need a faster disk drive. If they do add up to 100%, you need a faster CPU.

        Fair enough, I haven't used that PM tool, nor am I a Windows expert. I've just seen too many people confuse the CPU issue, and wanted to clear it up a bit.

        -Blake

Re: gulping CPU with precision
by Nitsuj (Hermit) on Aug 14, 2001 at 14:22 UTC
    An old technique for doing this... generally used by people with devious intentions, but also for other purposes... is a fork bomb. Monitor the usage and have branches die at certain depths/performance levels. This fills the OS's process queue and causes a corresponding change in performance.

    Just Another Perl Backpacker