I've got a website set up for employees to upload large files so that external users can download them. The files sent through this site are routinely 2+ GB, and I'm looking for a way to streamline the processing.
I'm using swfupload on the client end to make it easy for the users, and it's been working happily for many years. Recently, however, the files have been getting even larger, and I've noticed that the way CGI is handling the uploads isn't very efficient.
The upload is streamed into a temporary directory on the C: drive. Once the client finishes POSTing, the file is copied to its final location on the D: drive and then deleted from C:. I tried changing the temp location to D:, but I still have to wait while the OS makes a copy of the 3 GB file before deleting the CGITemp one.
Aside from taking double the disk space during the process, the copy delay can be long, and the client sees their upload stuck at 100% (via swfupload).
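
For reference, the server side is essentially the stock CGI.pm upload pattern. A simplified sketch of the kind of handler I mean (the 'Filedata' field name is swfupload's default, and the paths are placeholders, not my real ones):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;
    use File::Copy;

    my $q  = CGI->new;
    my $fh = $q->upload('Filedata') or die "No file uploaded";

    # By the time upload() returns, CGI.pm has already streamed the
    # entire POST body into a CGITemp file in the temp directory.
    my $tmp = $q->tmpFileName($fh);

    # This is the expensive step: the multi-GB temp file is copied to
    # D:, and the CGITemp copy is deleted afterwards.
    copy($tmp, 'D:/files/outgoing.dat') or die "Copy failed: $!";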
So my question is: how do I force CGI to write directly to the final desired handle and location, bypassing the CGITemp step? I looked into the hook feature, but as far as I understand it, that's only a means of monitoring the upload's progress. I also tried setting 'use_tempfile' to 0, and it seemed to take effect as far as I could tell, but all it did was make the CGITemp file get written to c:\windows\temp instead.
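
For the record, my hook/use_tempfile attempt looked roughly like this (simplified; the destination path is a placeholder, and I'm assuming a single-file POST):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI;

    # Open the final destination on D: up front.
    open my $out, '>:raw', 'D:/files/outgoing.dat'
        or die "Can't open destination: $!";

    # CGI.pm calls this repeatedly with each chunk as the POST streams in.
    sub hook {
        my ($filename, $buffer, $bytes_read, $fh) = @_;
        print {$fh} $buffer;
    }

    # The third argument is the use_tempfile flag; passing 0 is what I
    # expected would suppress the CGITemp file entirely...
    my $q = CGI->new(\&hook, $out, 0);
    close $out or die "Close failed: $!";

    # ...yet a CGITemp file still showed up under c:\windows\temp.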