in reply to Re: SOLVED: HTTP-POST with IO::Socket -- Header problem
in thread HTTP-POST with IO::Socket -- Header problem

No, because when I merge the header array with the data via

join("\r\n", @head).$data

the $data variable has no "\r\n" at the beginning, so the header array needs to end in two line breaks so that the packet as a whole looks like this:
[...]
Content-Type: multipart/form-data; boundary=".$boundary."

--".$boundary."
Content-Disposition: form-data; name=\"username\"
[...]
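For illustration, a minimal sketch of that construction (only @head, $boundary and $data are the actual variables from above; the boundary string and field values are made up). The two empty elements at the end of @head make join() emit the two line breaks, so $data can be appended as-is:

my $boundary = "xYzZY1234";
my @head = (
    'Content-Type: multipart/form-data; boundary=' . $boundary,
    '',                                                 # blank line that ends the HTTP headers
    '--' . $boundary,
    'Content-Disposition: form-data; name="username"',
    '',                                                 # first of the two line breaks ...
    '',                                                 # ... and the second, so $data needs no leading "\r\n"
);
my $data   = "kay\r\n--$boundary--\r\n";
my $packet = join("\r\n", @head) . $data;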

PS: If you had read my first post, you would know that all these nice, comfortable modules share the problem that they copy the file completely into RAM while uploading it.
You can work around that by setting "$HTTP::Request::Common::DYNAMIC_FILE_UPLOAD = 1", but then the speed drops drastically, from 5 MB/s to about 100 KB/s, and there is nowhere to set the buffer size to improve that.
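For reference, the streaming setup I mean looks like this (URL and file path are placeholders):

use LWP::UserAgent;
use HTTP::Request::Common qw(POST);

# Stream the file from disk instead of copying it into RAM first
$HTTP::Request::Common::DYNAMIC_FILE_UPLOAD = 1;

my $ua  = LWP::UserAgent->new;
my $res = $ua->request(POST 'http://example.com/upload',
    Content_Type => 'form-data',
    Content      => [ file => ['/path/to/big.file'] ],
);

With that flag set, the request content becomes a code ref that reads the file in small 2048-byte chunks, which is exactly where the speed loss comes from.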

Best regards,
Kay

Re^3: SOLVED: HTTP-POST with IO::Socket -- Header problem
by Corion (Patriarch) on Jun 23, 2011 at 10:13 UTC

    I'm sorry that I didn't read your original post closely enough.

    It would seem to me that a very simple approach to finding where the bottleneck lies would be to patch or replace the subroutine HTTP::Request::Common::form_data so that it reads the data in chunks larger than 2048 bytes. Unfortunately, ->form_data is very large and monolithic, and there is no easy way to change it other than copying it into your own source code and replacing it:

    use HTTP::Request::Common;

    sub my_post_file {
        my $bufsize = 10_240_000;   # read in ~10 MB chunks instead of 2048 bytes

        # Temporarily replace form_data with a copy that honours $bufsize;
        # the elided parts (...) are the rest of the original subroutine.
        local *HTTP::Request::Common::form_data = sub {
            ...
            my $buflength = length $buf;
            my $n = read($fh, $buf, $bufsize, $buflength);
            if ($n) {
                $buflength += $n;
                unshift(@parts, ["", $fh]);
            }
            ...
        };
        ...
    }

    If this change alone brings "enough" of a speedup, it might be worth submitting a patch upstream that makes the POST buffer size configurable.
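    Such a patch might boil down to something like this ($READ_SIZE is an invented name for the sake of the example, not an existing variable of the module):

    # Hypothetical patch sketch inside HTTP::Request::Common
    our $READ_SIZE = 2048;    # the old hard-coded chunk size becomes a default

    # ... and in form_data, the hard-coded read length is replaced:
    my $n = read($fh, $buf, $READ_SIZE, length($buf));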