Re: speed factor
by dragonchild (Archbishop) on Nov 13, 2007 at 17:34 UTC
The cost of a new webapp machine (including all provisioning, bandwidth, sysadmin costs, etc.) is generally the cost of 1 week of developer time. The cost of a new database machine (about 4x as powerful) is generally 2 weeks of developer time. If I can write something in 1 week (in Perl) that requires 2 machines, or in 4 weeks (in C++) that requires 1, I have saved two weeks of my salary. This assumes that the C++ version even does everything the Perl one does and that the C++ one requires half the resources of the Perl one. (Both have proven to be faulty assumptions.)
Furthermore, the Perl version is going to be easier and safer to extend than the C++ version. New developers are going to be productive more quickly. And, there is more battle-tested code for Perl (via CPAN) than there is for C++.
The key is determining where your costs are. 30 years ago, the significant cost was hardware, so you optimized for hardware speed. Today, the significant cost is developer time, so you optimize for developer speed. This is the origin of "throwing hardware" at a problem. It's usually the right business decision.
My criteria for good software:
- Does it work?
- Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
There are tons of hidden costs in those extra servers of course, such as "we'll need another server room"...
Plus, of course, every new box adds a maintenance burden.
Of course. Some you didn't mention are:
- Power costs
- disaster recovery planning
- load balancers
- proxy servers
- a SAN/NAS (and its backup)
- Additional internal gigabit networking
- retooling apps to live on multiple servers vs. just one
- planning and handling failover between servers
The point is that, in general, the TCO of a new server tends to average between 5 and 7 days of developer time. That's, roughly, $4000-$6000. (Yes, a good developer will have an average TCO of $800-$850/day.) That gets you a nice server-class dual-dual CPU, 2G RAM, a decent set of disks, and 2 gigabit NICs. As the server numbers drop and as clustering technologies improve, those hardware numbers keep going down. The cost of that (good) developer is only going to go up.
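The trade-off above is easy to sanity-check with a back-of-the-envelope script. The dollar figures are the estimates from this post; the project sizes (1 week in Perl needing 2 servers vs. 4 weeks in C++ needing 1) are the hypothetical from the parent post, not measurements:

```perl
use strict;
use warnings;

my $dev_day_cost  = 825;                   # midpoint of the $800-$850/day TCO figure
my $server_tco    = 6 * $dev_day_cost;     # server TCO averages 5-7 days of developer time
my $perl_dev_days = 5;                     # hypothetical: 1 week in Perl, needs 2 servers
my $cpp_dev_days  = 20;                    # hypothetical: 4 weeks in C++, needs 1 server

my $perl_total = $perl_dev_days * $dev_day_cost + 2 * $server_tco;
my $cpp_total  = $cpp_dev_days  * $dev_day_cost + 1 * $server_tco;

printf "Perl: \$%d   C++: \$%d\n", $perl_total, $cpp_total;
```

Even with the Perl version charged for a second server, the developer-time term dominates; the gap only widens as hardware gets cheaper and developers get more expensive.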
My criteria for good software:
- Does it work?
- Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
Re: speed factor
by gamache (Friar) on Nov 13, 2007 at 17:00 UTC
Perl is not designed to be as fast as possible on computers. It's designed to be fast enough on computers, and as fast as possible on the human writing the program. Sometimes using a strictly compiled language will save CPU time compared to a Perl alternative; sometimes not. The Perl program, however, will almost always be easier to write, debug, maintain and extend. And Perl itself is written in very fast C, so the speed hit is not as hard as you might think.
Regarding your CGI example, there are ways to avoid the performance hit of spawning a new interpreter for each request; mod_perl is the most popular.
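For the curious, a mod_perl 2 response handler looks something like the sketch below. The package name and URL path are made up for illustration; the point is that the interpreter and the compiled code stay resident in the Apache worker, so a request costs a subroutine call rather than a fork-and-compile:

```perl
# Minimal mod_perl 2 response handler sketch (hypothetical package name).
# Wired up in httpd.conf with something like:
#   PerlModule MyApp::Hello
#   <Location /hello>
#       SetHandler perl-script
#       PerlResponseHandler MyApp::Hello
#   </Location>
package MyApp::Hello;

use strict;
use warnings;
use Apache2::RequestRec ();               # $r->content_type
use Apache2::RequestIO  ();               # $r->print
use Apache2::Const -compile => qw(OK);

sub handler {
    my $r = shift;                        # the Apache request object
    $r->content_type('text/plain');
    $r->print("Hello, world\n");
    return Apache2::Const::OK;            # tell Apache the response is done
}

1;
```

This requires a running Apache with mod_perl 2 loaded, so it is a configuration sketch rather than a standalone script.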
Finally, your comment about "other better options are out there" seems to indicate you have your mind made up already...
Re: speed factor
by zentara (Cardinal) on Nov 13, 2007 at 17:00 UTC
Perl has mod_perl, a module built into the Apache web server, which addresses this speed issue. It requires a slightly different programming technique, but works well. For more mundane speed improvements, there is FastCGI, which Perl has an interface to. If you want pure speed, at the expense of developer time, use C or C++.
Re: speed factor
by jbert (Priest) on Nov 13, 2007 at 16:56 UTC
Startup time isn't a factor for any modern language used for web serving.
Any site taking a reasonable load will move to having persistent application processes. With perl, this is normally achieved with Apache mod_perl or FastCGI. The application server processes are then either long-lived and standalone (FastCGI) or the code is linked into the web server (mod_perl).
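The FastCGI flavor of this model, assuming the CPAN FCGI module is installed, is just an accept loop: the process starts and compiles once, then serves requests until the server shuts it down. A minimal sketch:

```perl
# FastCGI persistent-process sketch. Compilation cost is paid once, at
# startup; each request is one trip around the loop. Needs the CPAN FCGI
# module and a FastCGI-aware web server in front of it.
use strict;
use warnings;
use FCGI;

my $request = FCGI::Request();            # binds to STDIN/STDOUT/STDERR by default

while ($request->Accept() >= 0) {         # blocks until the server hands us a request
    print "Content-Type: text/plain\r\n\r\n";
    print "Hello from a persistent Perl process\n";
}
```

Expensive setup (loading modules, opening database handles) goes above the loop, which is exactly where the CGI model forces you to pay it per request.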
I think (not sure) that perl led the way in this sort of persistence (as it led the way for CGI processing before that) but most languages used in any kind of performance-sensitive web serving environment these days would use this model (or a related model where the web server itself is written in the language in question too, allowing the application code to be even more simply embedded and persistent).
Re: speed factor
by perrin (Chancellor) on Nov 13, 2007 at 17:53 UTC
Re: speed factor
by Your Mother (Archbishop) on Nov 13, 2007 at 19:24 UTC
Nobody big is doing it the way you suggest (non-persistent). And once it's persistent, Perl is really quite zippy. Here is a trivial example (Hello world), but probably about as valuable as any benchmark that doesn't compare real-world use:
use Benchmark qw(:all);

my $count = 1_000;
cmpthese($count, {
    'C++'  => 'qx{curl -i http://localhost/cgi/hello.cgi}',
    'Perl' => 'qx{curl -i http://localhost/cgi/hello2.cgi}',
});
__END__
Perl under FCGI. C++ as plain executable.
# first run
Rate C++ Perl
C++ 340/s -- -5%
Perl 360/s 6% --
# second run b/c the first seemed too good
Rate C++ Perl
C++ 287/s -- 17%
Perl 246/s -14% --
When someone says Perl is slow, always check the context and take it with a grain of salt. What makes a web app slow can be unrelated app code (DB, network, server, etc). Also, what dragonchild said. Server slots are cheaper than developers.
update: added a missing apostrophe.
Re: speed factor
by RaduH (Scribe) on Nov 13, 2007 at 17:55 UTC
For a web application, the bottleneck is generally elsewhere, not in your Perl scripts being interpreted. It's either the bandwidth (oh, yes...) or the database your Perl script is accessing... While you do take a performance hit, it's not something that really affects you. In exchange, with interpreted languages (Perl in particular) you gain access to a level of flexibility that requires a lot more work in other languages. You pay a price you usually don't care about, because other factors dominate your performance, and in return you get development-time flexibility and ease of doing things.
Think about this: A has $10 billion in the bank and B has $11 billion. Who is richer? Well, B, but really, at that level of fortune does the extra $1B make a difference? They both have more than they need (I was about to say "can spend" but I stopped). Similarly, you have other bottlenecks that make the price negligible compared to the benefits (many/most times).
Re: speed factor
by Anonymous Monk on Nov 14, 2007 at 01:49 UTC
CGI doesn't require resident memory.
I remember one particular intranet server at a large corporation I worked at, where Perl and CGI were the standard intranet development process for 5 years.
They had 300+ CGI applications running on one 64-cpu sun box, doing all sorts of stuff.
Then the company got the Java religion, and an edict came down from on high all new apps must be Java.
So they bought an entirely new 64-cpu sun box to hold the Java applications. It was filled to capacity in 3-6 months.
CGI may be worse performance-wise on a per-call basis, but it has much LOWER resource usage on a per-application basis.
So CGI is ideal for applications that don't see a lot of usage. I have some web apps I use no more than once a month. Why should I allocate permanent memory to them?
For high-traffic applications, once an application can consume an entire CPU on its own, the economics reverse.
By that point, you should be well and truly switched over to some sort of memory-resident run-mode.
Re: speed factor
by locked_user sundialsvc4 (Abbot) on Nov 13, 2007 at 19:38 UTC
The "cost" of creating a new instance of the Perl interpreter, even in a "brute-force" CGI situation, is actually not as severe as you might suppose. Operating systems are designed to be "lazy" about loading and unloading program segments and file data from memory.
The "speed differences" that you might be concerned about are for the most part measured in microseconds, or at most a few milliseconds. Which usually makes them "a mere trifle." Of far more importance is the fact that your web-application does its job efficiently and well. Choose good, sensible algorithms (leveraging CPAN anywhere you can), and you should be good-to-go.
Re: speed factor
by girarde (Hermit) on Nov 14, 2007 at 15:52 UTC
Another excellent reminder that the universal answer is "it depends".
Re: speed factor
by Anonymous Monk on Nov 15, 2007 at 20:28 UTC
Programmer productivity and time to market.
They are making computers faster and cheaper.
They are not making programmers faster or cheaper.
A modern fast web server can be bought for the price of 10 hours of development. Unless it is an extremely high-volume situation, it is way more cost-effective to buy more machines than to force programmers to use a less productive environment. Studies show Perl is not just percentages but factors more productive than Java or C#.
You can always use mod_perl if you don't want all those interpreter instances.