in reply to Re: Re (tilly) 1: Segfault with Storable ( updated w/ code )
in thread Segfault with Storable
The trick is getting the reading code to give up if it cannot get everything in the required time. At least two safe ways to do that come to mind; both are a little heavyweight.
One is to use a select (or IO::Select)/sysread loop that reads character by character and dies if the data stops arriving in time. If you write this correctly you can actually multiplex several such loops. But note that you are now running a Perl-level loop for every incoming character, which is not the most efficient thing to do...
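The select/sysread loop might look roughly like this minimal sketch. The helper name `read_with_timeout` and its interface are my own invention for illustration; the essential points are that can_read only promises one ready byte, so each iteration sysreads a single character, and the whole read dies if any byte is late:

```perl
use strict;
use warnings;
use IO::Select;

# Hypothetical helper: read exactly $want bytes from $fh, dying if no
# byte arrives within $timeout seconds.  A sketch, not a complete
# protocol implementation.
sub read_with_timeout {
    my ($fh, $want, $timeout) = @_;
    my $sel = IO::Select->new($fh);
    my $buf = '';
    while (length($buf) < $want) {
        # can_read guarantees only that at least one byte is ready...
        $sel->can_read($timeout)
            or die "timed out waiting for data\n";
        # ...so sysread a single char per pass to avoid blocking.
        my $n = sysread($fh, my $chunk, 1);
        die "unexpected EOF\n" unless $n;
        $buf .= $chunk;
    }
    return $buf;
}
```

Because you are handling the bytes yourself here, you also have to handle the framing yourself (e.g. a length prefix) rather than handing the filehandle straight to Storable.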
The other is to put a process between you and the other end which you don't mind dying. You could launch it with IPC::Open2, passing an argument saying how long it gets to live. It sets an alarm that exits on read failure (unsafe signal handling doesn't matter now, since you want it to die), processes one object off of the pipe, echoes it back to you, and then dies. You can now do a blocking read of that object, safe in the knowledge that the other end is going away. After it goes away you can reap it with wait or waitpid. Launching a process per object is extreme, but if the objects are very large it may be more efficient.
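Here is one possible sketch of that arrangement. The 5-second lifetime and the simple length-prefixed framing are assumptions of mine to keep the example self-contained; a real relay would more likely use Storable's store_fd/fd_retrieve on its own side of the pipe:

```perl
use strict;
use warnings;
use IPC::Open2;
use Storable qw(freeze thaw);

my $ttl = 5;    # assumed child lifetime in seconds
# Child: arm an alarm that simply exits (we *want* it to die), pull one
# length-prefixed frame off stdin, echo it back, then exit.
my $child_code = q{
    my $ttl = shift @ARGV;
    $SIG{ALRM} = sub { exit 1 };   # unsafe signal handling is fine here
    alarm($ttl);
    binmode STDIN; binmode STDOUT;
    read(STDIN, my $len, 4) == 4          or exit 1;
    read(STDIN, my $frozen, unpack("N", $len)) or exit 1;
    print $len, $frozen;           # echo the object back
    exit 0;
};
my $pid = open2(my $from, my $to, $^X, '-e', $child_code, $ttl);

# Parent: send one frozen object, then do a blocking read of the echo,
# safe in the knowledge that the child will die on its own if it stalls.
my $frozen = freeze({ answer => 42 });
print $to pack("N", length $frozen), $frozen;
close $to;
read($from, my $len, 4);
read($from, my $echo, unpack("N", $len));
close $from;
waitpid($pid, 0);                  # reap the finished child
my $obj = thaw($echo);
```

The parent's reads can block freely because the alarm in the child guarantees the pipe will close within $ttl seconds no matter what.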
UPDATE
Response to your update. All that select will do is guarantee a single incoming byte. That is why you have to sysread single chars in a loop: you don't know that there will be more than one byte available, and you don't want to block. And since you have to handle the read at a low level, you can't punt to Storable's processing method, which is why I said up front that you would want to write your own communication protocol at each end.