Conquistadog has asked for the wisdom of the Perl Monks concerning the following question:
Update:
Anyone using perl from nginx, keep HTTP::Body handy! (and client_body_in_file_only on; in your nginx config)
my $file = $r->request_body_file();
my $body = HTTP::Body->new(
    $r->header_in('Content-Type'),
    $r->header_in('Content-Length'),
);
my $fh = IO::File->new($file, '<') or die "can't open $file: $!";
binmode $fh;  # uploaded parts may be binary
my $len = $r->header_in('Content-Length');
while ($len > 0) {
    $fh->read(my $buf, ($len < 8192 ? $len : 8192)) or last;
    $len -= length($buf);
    $body->add($buf);
}
After that point, $body has all the stuff and info needed.
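For anyone following along, here's roughly how to get at the parsed parts afterwards. Conveniently for the on-disk requirement, HTTP::Body spools each file upload to its own temp file, so no whole part ever sits in memory. A minimal sketch (assuming one upload per field name; a field posted multiple times comes back as an arrayref instead):

my $params  = $body->param;    # hashref of the ordinary form fields
my $uploads = $body->upload;   # hashref of the file parts

for my $field (keys %$uploads) {
    my $u = $uploads->{$field};
    # each upload hashref has filename, headers, size, and
    # tempname -- the path of the extracted part on disk
    print "$field: $u->{filename} ($u->{size} bytes) at $u->{tempname}\n";
}

$body->cleanup(1);  # have HTTP::Body delete its temp files on destruction

If you want to keep a part, rename() its tempname somewhere permanent before the $body object goes out of scope.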
Thanks for the help, everyone!
Greetings fellow faithfuls!
Today my quest is for an approach to extract body "parts" from an HTTP request body of type multipart/form-data -- but to do it all on-disk and in-place without loading any entire "part" into memory at any time.
Rationale: I am using nginx, which politely stores the request body in a file for me before invoking my perl handler module. In the present case, incoming HTTP requests sometimes contain the contents of files being uploaded, as part(s) of a multipart/form-data message body. Consequently, the request and its parts are often very large. My previous approach using HTTP::Request (i.e. HTTP::Message and friends) fails in the large-body-part case, presumably because that approach requires (or results in) the entire request body being held in a scalar -- and indeed more than once, in some cases.
Regardless of the approach, the need is ultimately this:
Given the name of a file containing the HTTP request body, I need to produce (or have produced for me) an appropriate number of files corresponding to and containing the extracted multipart/form-data parts.
Has anyone done something like this? Does anyone know of a workable approach, library, or tool to use?
Many thanks,
Conquistadog