It was not written to cope with the number of file modification requests it now receives.
Then you need to identify where the limitation lies.
If this is the case, then there are three possible remedies:

1. Use a better web server. Or, run multiple copies of the existing one on different ports and have the front-end (running on the current port) redirect the incoming connections to the other ports for service.

2. Purchase bigger hardware.

3. Employ or purchase a second (or more) box(es) and have the web server hand off (distribute) the CPU-intensive processing across the boxes.
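The redirect-to-other-ports idea can be sketched in a few lines. This is a minimal illustration, not anything from the original post: the port numbers (8080 for the front-end, 8081-8083 for the back-end copies) and the round-robin choice of back-end are assumptions for the example; a real deployment would pick whatever ports its server copies actually listen on.

```python
# Minimal front-end sketch: listen on one port and redirect each
# incoming request to one of several back-end copies, round-robin.
# Ports 8080/8081-8083 are hypothetical, chosen for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
from itertools import cycle

BACKEND_PORTS = cycle([8081, 8082, 8083])  # the other server copies

class RedirectFrontEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        port = next(BACKEND_PORTS)  # pick the next back-end in rotation
        self.send_response(307)     # temporary redirect; client retries there
        self.send_header("Location",
                         f"http://{self.server.server_name}:{port}{self.path}")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), RedirectFrontEnd).serve_forever()
```

Note that a plain HTTP redirect pushes the re-connect cost onto the client; a proxying front-end avoids that round trip at the price of shuttling the response bytes itself.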
You are probably better off using an existing web server than trying to write your own in Perl. It doesn't have to be a behemoth like Apache; something simple and efficient like thttpd is probably a better choice for something like this.
In reply to Re^3: secure and scalable client server file transfers
by BrowserUk
in thread secure and scalable client server file transfers
by derekw