Sounds reasonable, but I would
consider 200+ processes too much since you might end up running 200 copies of
your program just waiting for a pair of SNMP requests.
What is the fixed time basis? You might be able to collect status information from
each IP sequentially if you don't need to update the histogram too frequently.
Depending on the SNMP modules, using select()
(you seem to target a *NIX platform) might be an approach to
parallelise the networking part in a single thread, since most of the time will presumably
be spent waiting for an SNMP response.
I cannot see much parallelism between the thread that collects the information and the thread that processes the statistics. It seems that you need to update the statistics only after the
collection thread has completed a round?
Update: s/coupling/parallelism/
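For what it's worth, Net::SNMP has a non-blocking mode of its own (create the session with -nonblocking => 1, pass a -callback to get_table(), then run snmp_dispatcher()), which does this kind of select()-based multiplexing for you. The general idea can be sketched without any SNMP at all, using plain UDP sockets and IO::Select; the local "agent" socket here is just a stand-in for your cable modems:

```perl
use strict;
use warnings;
use IO::Socket::INET;
use IO::Select;

# A local UDP "responder" standing in for the SNMP agents.
my $agent = IO::Socket::INET->new(Proto => 'udp', LocalAddr => '127.0.0.1')
    or die "agent socket: $!";

# One client socket per "device"; all are watched by a single select() loop.
my @clients = map {
    IO::Socket::INET->new(
        Proto    => 'udp',
        PeerAddr => '127.0.0.1',
        PeerPort => $agent->sockport,
    ) or die "client socket: $!";
} 1 .. 3;

$_->send("query") for @clients;    # fire all requests at once

# Echo each query back so the clients have something to read.
for (1 .. @clients) {
    my $peer = $agent->recv(my $q, 512) or die "recv: $!";
    $agent->send("reply to $q", 0, $peer) or die "send: $!";
}

# One thread, many sockets: collect whatever is readable.
my $sel = IO::Select->new(@clients);
my %replies;
while (keys %replies < @clients) {
    my @ready = $sel->can_read(5) or last;    # give up after 5s of silence
    for my $sock (@ready) {
        $sock->recv(my $answer, 512);
        $replies{ $sock->sockport } = $answer;
    }
}
printf "collected %d replies in a single thread\n", scalar keys %replies;
```

The point is only the shape of the loop: all requests go out first, then one select() wait services whichever reply arrives next, so 200 slow agents cost you waiting time, not 200 processes.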
The fixed time basis is variable (set with a switch). It defaults to 1 minute for checking modems and 5 minutes for refreshing the IP cache.
It sounds like running the SNMP queries sequentially is the way to go. If I am passing a hash reference to the forked subs, wouldn't that allow all of them to access the data in 'real time'?
I.e., the SNMP polling child is halfway through, so half the values are updated and half are stale, but the analysis/plotting child would still access that hash without waiting for the first child to finish. Would Perl complain that the resource was being used by two processes?
Secondly,
I'm starting to convert code over (I have all of the polling stuff written for another project), but I guess I'm not understanding how to pass/use references in a sub.
I have this sub:
sub loadIPs {
my $datahashref = shift;
my $parmhashref = shift;
my $IPs = '1.3.6.1.2.1.10.127.1.3.3.1.3';
my ($session, $error) = Net::SNMP->session($parmhashref);
my $reply = $session->get_table(-baseoid=> $IPs);
my %reply = %$reply;
while ( my ($oid, $ip) = each(%reply) ) {
my %tmpHash = (rx=>NULL, tx=>NULL);
%$datahashref{$ip} = %tmpHash unless exists $datahashref{$ip};
}
}
Basically the master hash is a hash of hashes. This sub gets a list of IPs and updates the master list with a blank entry.
However, Perl doesn't like how I'm referencing %$datahashref.
Please see threads::shared on how to share information between two threads.
A newly created copy of a process (fork) or a newly created copy of the Perl interpreter (threads) does not share data structures by default.
Perl will not complain (or even warn you) when you access shared data - but you usually do not want to
read information that is in the process of being manipulated by another thread at the
same time. So you need to synchronise access to this data (more details via the link above).
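A minimal sketch of the threads::shared approach (the hash name and values here are made up; lock() provides the synchronisation mentioned above):

```perl
use strict;
use warnings;
use threads;
use threads::shared;

# %stats is visible to all threads; without ':shared' each thread
# would operate on its own private copy.
my %stats :shared;

my $collector = threads->create(sub {
    for my $ip ('10.0.0.1', '10.0.0.2') {
        lock(%stats);        # synchronise access while writing
        $stats{$ip} = 42;    # stand-in for an SNMP result
    }
});
$collector->join;

# The main (plotting) thread sees the collector's updates.
{
    lock(%stats);
    printf "%s => %d\n", $_, $stats{$_} for sort keys %stats;
}
```

Note that for your hash of hashes the inner hashes must be shared as well - see shared_clone() in threads::shared, since assigning a plain anonymous hash into a shared hash will fail.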
Is there a requirement that the SNMP thread clears the information before starting another round? Otherwise, I do not understand the 'stale' argument, since the information belonging to a certain IP remains valid until it is updated. You might want to mark updated values with a flag or a time-stamp, but I would consider this too complicated. Why not simply let the SNMP thread use a callback to update the GUI whenever a network entry has been updated?
Sorry, I don't understand your sub. A HoH entry is set to something seemingly constant unless that entry already exists. So the only information conveyed here between the threads is the existence of a newly found IP? Maybe you can describe in a little more detail what is
updated (a sum of RX/TX values per IP?) and how it must be treated to produce a proper GUI update?
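As for the syntax error itself: %$datahashref{$ip} is not valid Perl; with a hash reference you want the arrow notation, and the per-IP entry should itself be a hash reference (NULL is not a Perl keyword either - undef is the closest equivalent). A sketch of just the hash-update part of your sub, with the SNMP call replaced by made-up data:

```perl
use strict;
use warnings;

# Stand-in for the OID => IP table that get_table() would return.
my %reply = (
    '1.3.6.1.2.1.10.127.1.3.3.1.3.1' => '10.1.1.1',
    '1.3.6.1.2.1.10.127.1.3.3.1.3.2' => '10.1.1.2',
);

# Master HoH with one IP already known.
my $datahashref = { '10.1.1.1' => { rx => 5, tx => 7 } };

while ( my ($oid, $ip) = each %reply ) {
    # Arrow notation dereferences the hash ref; the blank entry is
    # an anonymous hash ref, not a flattened hash.
    $datahashref->{$ip} = { rx => undef, tx => undef }
        unless exists $datahashref->{$ip};
}
# Existing entries keep their counters; only the new IP gets a blank record.
```

The same arrow fix applies everywhere you currently write %$datahashref{...} or $datahashref{...} inside the sub.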