httptech has asked for the wisdom of the Perl Monks concerning the following question:
My sort routine works, but it's kind of clumsy. I'm using IPC::Shareable to store the sorted MX servers, but I ran into problems when trying to store references as hash values, because IPC::Shareable tries to tie the references too, and I don't see a way to disable that behavior. So I end up doing some splits/joins to get the job done (amateurishly).
Anyone care to give me some pointers on how I can improve this? I am posting the code as a reply to this message, since it's fairly long.
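For the curious, the split/join workaround looks roughly like this (a minimal sketch only, not the posted code; mx() is Net::DNS's exported lookup, while the glue string 'mxsh' and the space separator are assumptions):

    use strict;
    use IPC::Shareable;
    use Net::DNS;                 # exports mx()

    tie my %mxs, 'IPC::Shareable', 'mxsh', { create => 1, mode => 0644 }
        or die "could not tie: $!";

    # Store the MX list as one joined string, since storing an array
    # reference would make IPC::Shareable try to tie it as well.
    my $dom = 'example.com';
    $mxs{$dom} = join ' ', map { $_->exchange } mx($dom);

    # ...and split it back apart wherever the list is needed.
    my @servers = split ' ', $mxs{$dom};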
Re: Forking and sharing variables
by perlmonkey (Hermit) on May 21, 2000 at 06:00 UTC
I added Data::Dumper just to make it easy to display the data in our multidimensional hash %mxs, and I added 'strict' as every good human should :)

The next part is the most drastic change. This is the rest of the main function (the part the parent will execute). First, since we want each hash value to be an anonymous array (to get rid of the split/join junk), it seems we have to create the anonymous arrays in the parent. I always got fatal errors when I tried to do it from the child. I am not saying it can't be done, because I don't know; I just couldn't figure out how to do it.

In the first for loop we fork a child for each domain as long as there are more domains to search for. If there are more domains than $PREFORK, we have to wait for one child to die before we fork off another one; 'wait' blocks until any child dies. Then we go into an anonymous block and keep 'redo'ing until we have forked off a child for every domain and all the children have died. Finally, once all the children are dead, we display the results via Data::Dumper.

The last bit is the child function. This code is similar to yours, just trimmed a bit. First we open the shared memory segment, then call the mx function with the passed-in domain. The list result is mapped to pull out the data we want, and the list returned from map is pushed onto the end of the anonymous array for that domain in the %mxs hash.

I have to mention, though, that I have never done IPC stuff before, so I might be missing some of the subtleties, but this code worked like a champ for me. I hope this helps. And here is the code all together, just to make it easier to cut and paste:
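(A minimal sketch along those lines: mx() is exported by Net::DNS, while the glue string 'mxsh', the $PREFORK value of 5, and the sample domain list are stand-ins, and a simple reaping loop stands in for the redo block.)

    #!/usr/bin/perl -w
    use strict;
    use Data::Dumper;
    use IPC::Shareable;
    use Net::DNS;                    # exports mx()

    my $PREFORK = 5;                 # stand-in cap on simultaneous children
    my @domains = qw(example.com example.org example.net);

    # The parent creates the shared hash...
    my %options = (create => 1, exclusive => 0, mode => 0644);
    tie my %mxs, 'IPC::Shareable', 'mxsh', \%options
        or die "parent: could not tie the shared hash: $!";

    # ...and the anonymous arrays, since creating them from a child
    # produced fatal errors.
    $mxs{$_} = [] for @domains;

    my $running = 0;
    for my $dom (@domains) {
        if ($running >= $PREFORK) {
            wait;                    # blocks until any child dies
            $running--;
        }
        defined(my $pid = fork) or die "fork: $!";
        if ($pid == 0) {             # in the child
            child($dom);
            exit 0;
        }
        $running++;                  # in the parent
    }
    1 while wait != -1;              # reap the stragglers

    print Dumper(\%mxs);             # display the collected results
    IPC::Shareable->clean_up;        # remove the shared segments

    sub child {
        my $dom = shift;
        # Attach to the segment the parent created (create => 0).
        tie my %mxs, 'IPC::Shareable', 'mxsh', { create => 0 }
            or die "child: could not tie the shared hash: $!";
        # mx() returns Net::DNS::RR::MX records sorted by preference;
        # keep only the exchange host names and push them onto the
        # anonymous array for this domain.
        push @{ $mxs{$dom} }, map { $_->exchange } mx($dom);
    }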
Re: Forking and sharing variables
by httptech (Chaplain) on May 21, 2000 at 16:38 UTC
Also, from what I understand, at the end of the program you need to call IPC::Shareable->clean_up, or the shared memory segment persists after the program ends, which is probably not desirable.

The last thing, which is the one I couldn't get past, is that I seem to run out of memory when creating the tied %mxs hash using your method of defining it. It doesn't happen on a small list, but it fails when I try it on a list of 200 addresses. So I added size => 8000000 to %options, and that just led me to a plain old "Out of memory!" I can't see why it should take more than 8 megabytes of memory to store MX servers for 200 domains. It didn't do this in my example code, and I don't see anything radically different about defining the %mxs hash ahead of time that would cause this. Any ideas?
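For reference, a minimal sketch of where those two pieces fit, assuming the same 'mxsh' glue string as above (size is given in bytes):

    use strict;
    use IPC::Shareable;

    # Ask for a larger segment up front via the size option.
    my %options = (create => 1, mode => 0644, size => 8_000_000);
    tie my %mxs, 'IPC::Shareable', 'mxsh', \%options
        or die "could not tie: $!";

    # Remove this process's segments even if we die early.
    END { IPC::Shareable->clean_up; }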
RE: Forking and sharing variables
by httptech (Chaplain) on May 20, 2000 at 23:01 UTC
For now I am not worrying about storing the preference of the MX server. And when a domain like foo.bar.com doesn't have an MX record, I just use foo.bar.com itself as the MX server, because I don't know how to use Net::DNS recursively.
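A minimal sketch of that fallback (mx() is Net::DNS's exported lookup; mx_hosts is a hypothetical helper name):

    use strict;
    use Net::DNS;                  # exports mx()

    # Return a domain's MX hosts, or the domain itself when it has none,
    # since mail falls back to the host itself when no MX record exists.
    sub mx_hosts {
        my $dom = shift;
        my @hosts = map { $_->exchange } mx($dom);
        return @hosts ? @hosts : ($dom);
    }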