If pipes to and from your sub-process are giving you so much pain under mod_perl, is there a reason why you don't just put the input in a file and run ghostscript so it reads from that file and puts the output in another?
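For the simple one-page case that's just a plain system() call, with no pipes anywhere. A minimal sketch (paths, page number, and output device are made up for illustration):

    use strict;
    use warnings;

    # Render page 5 of a PDF to a PNG: file in, file out, no pipes.
    my @cmd = (
        'gs', '-q', '-dBATCH', '-dNOPAUSE',
        '-sDEVICE=png16m',
        '-dFirstPage=5', '-dLastPage=5',
        '-sOutputFile=/tmp/page5.png',
        '/tmp/input.pdf',
    );
    system(@cmd) == 0 or die "gs failed: $?";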
To get the thing running, I'm just doing one command (generate this page image), but eventually, yes, I'll be initializing a Ghostscript for a particular document, then coming back to it later to say, OK, now give me a page image for page 5. OK, I'm back, give me page 6. Me again, page 7 please.
Eventually I'll have a hash holding references to waiting Ghostscript processes (one for each document) and some intelligence that goes around and says, "OK, you've waited long enough without anything to do, away you go." I can't do that until I can get IPC::Run figured out.
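Once IPC::Run does click, the registry itself isn't much code. A rough sketch of the shape, assuming IPC::Run and making up the gs flags and the idle limit:

    use strict;
    use warnings;
    use IPC::Run ();

    my %gs_for;               # document id => waiting gs process
    my $IDLE_LIMIT = 300;     # seconds before an idle gs is reaped (made up)

    # Start (or fetch) the long-lived gs for one document.
    sub gs_for_doc {
        my ($doc) = @_;
        return $gs_for{$doc} if $gs_for{$doc};

        my ($in, $out, $err) = ('', '', '');
        my $h = IPC::Run::start(
            # gs left at its interactive prompt; the real flags will differ
            [ 'gs', '-q', '-dNOPAUSE', '-sDEVICE=png16m',
              '-sOutputFile=/tmp/page%d.png' ],
            \$in, \$out, \$err,
        );
        return $gs_for{$doc} = {
            harness   => $h,
            in        => \$in,
            out       => \$out,
            last_used => time,
        };
    }

    # "You've waited long enough without anything to do, away you go."
    sub reap_idle {
        for my $doc (keys %gs_for) {
            next if time - $gs_for{$doc}{last_used} < $IDLE_LIMIT;
            $gs_for{$doc}{harness}->finish;   # or ->kill_kill if gs is stuck
            delete $gs_for{$doc};
        }
    }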
Are you trying to keep a ghostscript process running over the long-term, handling multiple requests, or something like that?
Yes.
Alex / talexb / Toronto
"Groklaw is the open-source mentality applied to legal research" ~ Linus Torvalds
| [reply] |
Uh, actually yes, he said that in the first sentence. :)
--Brig
| [reply] |
Ouch. Sorry about that. Buffer overflow? No, this is perl. Curses.
Well, it seems likely that the problems are related to mod_perl's fun and games with stdin/stdout. <googles> Is this any good to you? "Apache2::SubProcess provides the Perl API for running and communicating with processes spawned from mod_perl handlers."
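If that module does the trick, the shape would be roughly this; the gs arguments are invented, and reading the returned handles directly assumes a perlio-enabled Perl (the mod_perl docs show an APR-based read otherwise):

    use Apache2::SubProcess ();

    sub render_page {
        my ($r, $pdf, $page) = @_;

        # Ask gs for one page of the PDF on stdout (flags illustrative).
        my @argv = (
            '-q', '-dBATCH', '-dNOPAUSE', '-sDEVICE=png16m',
            "-dFirstPage=$page", "-dLastPage=$page",
            '-sOutputFile=-', $pdf,
        );
        my ($in_fh, $out_fh, $err_fh) =
            $r->spawn_proc_prog('/usr/bin/gs', \@argv);

        local $/;                 # slurp the whole page image
        return scalar <$out_fh>;
    }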
And if it does all end up too painful, you can still use file-based queueing with a long-lived process. Have a dropoff dir for a work queue to the long-running gs and a pickup dir for finished pages. You'll need a little wrapper around ghostscript, but it would be started as an independent daemon (see the sketch after the list below).
File-based queueing does involve some tedious details though:
- Be sure to write to a temp file and rename(), to avoid half-written files (on both queues)
- You have a new failure mode where the ghostscript daemon isn't running and work just quietly piles up in the dropoff queue.
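A bare-bones sketch of such a wrapper daemon, with the directory names and job format (one PDF per file, page 1 only) all invented for the example:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Basename ();

    my $in_dir  = '/var/spool/gsq/in';     # dropoff dir (work queue)
    my $out_dir = '/var/spool/gsq/out';    # pickup dir (finished pages)

    while (1) {
        for my $job (glob "$in_dir/*.pdf") {
            my $name = File::Basename::basename($job, '.pdf');
            my $tmp  = "$out_dir/.$name.png.tmp";   # dot-file: hidden from pickup globs
            my $done = "$out_dir/$name.png";

            if (system('gs', '-q', '-dBATCH', '-dNOPAUSE',
                       '-sDEVICE=png16m', '-dFirstPage=1', '-dLastPage=1',
                       "-sOutputFile=$tmp", $job) == 0) {
                # Write-then-rename so readers never see a half-written file.
                rename $tmp, $done or warn "rename failed: $!";
            }
            else {
                warn "gs failed on $job: $?";
            }
            unlink $job;
        }
        sleep 1;    # crude poll; inotify would be nicer
    }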
| [reply] |