in reply to initial walkthrough

Sounds reasonable, but I would consider 200+ processes too many, since you might end up running 200 copies of your program, each just waiting for a pair of SNMP requests.

What is the fixed time basis? You might be able to collect status information from each IP sequentially if you don't need to update the histogram too frequently. Depending on the SNMP modules, using select() (you seem to target a *NIX platform) might be an approach to parallelise the networking part in a single thread, since most of the time will presumably be spent waiting for an SNMP response.
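As a minimal sketch of the select() idea: the core IO::Select module can watch many sockets in one thread and hand you whichever has a reply ready. The "agents" below are local socketpairs standing in for real SNMP agents (with Net::SNMP you would instead use its non-blocking sessions), so all names and data here are illustrative:

```perl
#!/usr/bin/perl
# Multiplex several sockets with select() in a single thread via
# IO::Select. Local socketpairs simulate two remote agents.
use strict;
use warnings;
use IO::Select;
use Socket;

# Create two local socket pairs to simulate two remote agents.
my (@querier, @agent);
for my $i (0 .. 1) {
    socketpair(my $q, my $s, AF_UNIX, SOCK_DGRAM, PF_UNSPEC)
        or die "socketpair: $!";
    push @querier, $q;
    push @agent,   $s;
}

# Fire off both "requests" without waiting for replies.
syswrite($querier[$_], "GET status $_") for 0 .. 1;

# Each "agent" answers (in a real setup this happens remotely).
for my $i (0 .. 1) {
    sysread($agent[$i], my $req, 1024);
    syswrite($agent[$i], "REPLY $i");
}

# One select() loop collects the replies in whatever order they arrive.
my $sel = IO::Select->new(@querier);
my %replies;
while (keys %replies < 2) {
    for my $fh ($sel->can_read(5)) {
        sysread($fh, my $buf, 1024);
        $replies{$buf} = 1;
    }
}
print join(",", sort keys %replies), "\n";   # REPLY 0,REPLY 1
```

The point is that no process or thread ever blocks on a single slow peer: the loop services whichever socket becomes readable first.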

I cannot see much parallelism between the thread that collects the information and the thread that processes the statistics. It seems you need to update the statistics only after the collection thread has completed a round?

Update: s/coupling/parallelism/

Replies are listed 'Best First'.
Re^2: initial walkthrough
by vortmax (Acolyte) on Oct 09, 2008 at 20:12 UTC
    The fixed time basis is variable (set with a switch). It defaults to 1 minute for checking modems and 5 minutes for refreshing the IP cache.

    It sounds like running the SNMP queries sequentially is the way to go. If I am passing a hash reference to the forked subs, wouldn't that allow all of them to access the data in 'real time'?

    I.e., the SNMP polling child is halfway through, so half the values are updated and half are stale, but the analysis/plotting child would still access that hash without waiting for the first child to finish. Would Perl complain that the resource was being used by two processes?

    Secondly, I'm starting to convert code over (I have all of the polling stuff written for another project), but I guess I'm not understanding how to pass/use references in a sub.

    I have this sub:

    sub loadIPs {
        my $datahashref = shift;
        my $parmhashref = shift;
        my $IPs = '1.3.6.1.2.1.10.127.1.3.3.1.3';
        my ($session, $error) = Net::SNMP->session($parmhashref);
        my $reply = $session->get_table(-baseoid => $IPs);
        my %reply = %$reply;
        while ( my ($oid, $ip) = each(%reply) ) {
            my %tmpHash = (rx => NULL, tx => NULL);
            %$datahashref{$ip} = %tmpHash unless exists $datahashref{$ip};
        }
    }


    Basically the master hash is a hash of hashes. This sub gets a list of IPs and updates the master list with a blank entry.

    However, it doesn't like how I'm referencing %$datahashref

      Please see threads::shared on how to share information between two threads. Usually a newly created copy of a process (fork) or a newly created copy of the Perl interpreter (threads) does not share data structures by default.

      Perl would not complain while accessing shared data (nor warn you) - but you usually do not want to read information that is in the process of being manipulated by another thread at the same time. So, you need to synchronise access to this data (more details via the link above). Is there a requirement that the SNMP thread clears the information before starting another round? Otherwise, I do not understand the stale argument, since each piece of information belonging to a certain IP remains valid until it is updated. You might want to mark updated values with a flag or a time-stamp, but I would consider this too complicated. Why not simply let the SNMP thread use a callback to update the GUI whenever a network entry has been updated?
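      To make the synchronisation point concrete, here is a minimal sketch using threads and threads::shared (assuming a threads-enabled perl; the IP and rx/tx values are made up for illustration). lock() serialises access so the reader never sees a half-written entry, and shared_clone() is needed because nested structures must themselves be shared:

```perl
#!/usr/bin/perl
# Share a hash-of-hashes between a poller thread and the main thread.
use strict;
use warnings;
use threads;
use threads::shared;

my %data :shared;    # master hash, visible to all threads

# Poller thread: update one IP's entry under a lock.
my $t = threads->create(sub {
    lock(%data);
    # nested structures must be shared too, hence shared_clone()
    $data{'10.0.0.1'} = shared_clone({ rx => 100, tx => 42 });
});
$t->join;

# Main thread: safe to read once the lock is available.
{
    lock(%data);
    printf "rx=%d tx=%d\n",
        $data{'10.0.0.1'}{rx}, $data{'10.0.0.1'}{tx};
}
```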

      Sorry, I don't understand your sub. A HoH entry is set to something seemingly constant unless the entry already exists. So the only information conveyed here between the threads is the existence of a newly found IP? Maybe you can describe a little bit more what is updated (the sum of RX/TX values per IP?) and how it must be treated to produce a proper GUI update?

        I actually figured out the sub. All it does is check the list of active IPs pulled in via SNMP against the hash and add the new ones. I still need to add the part that strips out the old ones.

        anyway, the working code is:
        sub loadIPs {
            my $datahashref = shift;
            my $parmhashref = shift;
            my $IPs = '1.3.6.1.2.1.10.127.1.3.3.1.3';
            my ($session, $error) = Net::SNMP->session(%$parmhashref);
            my $reply = $session->get_table(-baseoid => $IPs);
            my %reply = %$reply;
            while ( my ($oid, $ip) = each(%reply) ) {
                my %tmpHash = ('rx' => undef, 'tx' => undef);
                $$datahashref{$ip} = \%tmpHash unless exists $$datahashref{$ip};
            }
        }

        (Note the backslash: storing \%tmpHash keeps the entry a hash reference; assigning %tmpHash to a scalar would not store the hash itself.)

        it was the $$ reference I wasn't grasping.
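        For anyone else tripped up by the same thing, here is a tiny self-contained demonstration (with made-up keys) of the equivalent ways to write through a hash reference; the arrow form is the most common style:

```perl
#!/usr/bin/perl
# Three equivalent spellings for storing a sub-hash under a key
# of a hash reference.
use strict;
use warnings;

my %master;
my $datahashref = \%master;

$$datahashref{'1.1.1.1'}   = { rx => undef, tx => undef };  # $$ref{key}
${$datahashref}{'2.2.2.2'} = { rx => undef, tx => undef };  # explicit block
$datahashref->{'3.3.3.3'}  = { rx => undef, tx => undef };  # arrow syntax

# All three write through to %master:
print scalar(keys %master), "\n";   # 3
```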

        When I mentioned stale data, I didn't mean it as a bad thing; just a word to differentiate it from the new stuff, and I want the script to just update the hash on the fly. No clearing out the whole thing and starting fresh.

        I was actually hoping to keep the GUI refresh rate higher than the time it takes to query all the modems. On a large plant, the query process can take a full minute or two, and I'd like to be able to watch the changes trickle in.

        I'll have to read up on the threads::shared module.

        thanks