I've written my own SMTP server in Perl. I know there are a lot out there already, but I wanted to write my own, OK? (Defensive mode off now.)
Over time, I've been adding functionality (i.e., more modules) and the server footprint is getting big (approx. 15-16 MB of memory). Mail::SpamAssassin adds about 9 MB to the footprint, IO::Socket::SSL adds a few more MB, etc.
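For what it's worth, a quick way to see roughly what a given module costs is to compare the process's resident size before and after loading it. This is just a Linux-only sketch (the 4 KB page size and the /proc parsing are assumptions, not something from my server code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Linux-only: read resident memory from /proc/$$/statm before and
    # after loading a module.  Assumes 4 KB pages; numbers are rough.
    sub rss_kb {
        open my $fh, '<', "/proc/$$/statm" or die "can't read statm: $!";
        my (undef, $resident_pages) = split ' ', scalar <$fh>;
        return $resident_pages * 4;
    }

    my $before = rss_kb();
    require Mail::SpamAssassin;
    printf "Mail::SpamAssassin added roughly %d KB resident\n", rss_kb() - $before;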
When each request comes in, I fork a new server process to handle it. When I run top, I see that nearly all the memory used by the forked processes is shared (i.e., 15 MB out of 16 MB). I take this to mean that the overhead of forking a new process is fairly low.
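Here's a stripped-down sketch of the pattern I mean: load the heavy modules once in the parent, then fork per connection, so the compiled code stays shared copy-on-write. The port number and the connection handling are just placeholders, not my real code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Heavy modules are loaded once, in the parent, before any fork.
    # Children get the same pages copy-on-write, which is why top
    # reports most of the footprint as shared.
    use Mail::SpamAssassin;
    use IO::Socket::SSL;
    use IO::Socket::INET;

    $SIG{CHLD} = 'IGNORE';              # auto-reap exited children

    my $listener = IO::Socket::INET->new(
        LocalPort => 2525,              # placeholder port
        Listen    => 128,
        ReuseAddr => 1,
    ) or die "listen: $!";

    while (my $client = $listener->accept) {
        my $pid = fork;
        die "fork: $!" unless defined $pid;
        if ($pid == 0) {                # child handles one connection
            close $listener;
            print $client "220 mail.example.com ESMTP\r\n";
            # ... rest of the SMTP dialogue goes here ...
            close $client;
            exit 0;
        }
        close $client;                  # parent goes back to accept()
    }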
My question is: is there anything fundamentally wrong with having a server with such a large footprint? I notice that most servers I see are considerably smaller (e.g. httpd and sendmail are 3-4 MB). I am assuming that loading everything at startup is the most efficient way to run the server, but I wonder if I'd be better off loading some modules on demand (like SSL) or running a separate daemon for some (like SpamAssassin); a sketch of what I mean by on-demand loading follows.
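By "loading on demand" I mean deferring the require until the feature is first used, something like the snippet below. The helper name and the cert/key paths are made up for illustration. The catch with a forking design is that a require done in a child isn't shared: every child that needs the module pays the compile time and the memory again, whereas a use at startup is compiled once in the parent and shared copy-on-write.

    # Deferred loading: IO::Socket::SSL is only compiled the first time
    # STARTTLS is actually requested.  upgrade_to_tls() and the cert/key
    # paths are placeholders for illustration.
    my $ssl_loaded = 0;

    sub upgrade_to_tls {
        my ($socket) = @_;
        unless ($ssl_loaded) {
            require IO::Socket::SSL;
            $ssl_loaded = 1;
        }
        my $tls = IO::Socket::SSL->start_SSL(
            $socket,
            SSL_server    => 1,
            SSL_cert_file => '/etc/ssl/certs/mail.crt',
            SSL_key_file  => '/etc/ssl/private/mail.key',
        ) or die "STARTTLS failed: $IO::Socket::SSL::SSL_ERROR";
        return $tls;
    }

For SpamAssassin specifically, the stock spamd/spamc split is the usual way to do the "separate server" approach, so that's one option I'm weighing against keeping Mail::SpamAssassin loaded in-process.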
I want this to be a high-performance server. It currently handles on the order of 50,000 messages a day, and the hardware it's on is mostly sitting idle. I don't see any performance bottlenecks yet, but I'm thinking about the future. It's hard to test different configurations because everything goes so fast that I can't really notice any differences.
Everything is working great; I'm just wondering if I'm approaching this correctly. Any feedback is appreciated.
- Alex Hart