RE (tilly) 6: Load Balancing fun..
by tilly (Archbishop) on Oct 03, 2000 at 01:47 UTC
My list of real problems is the stuff that in an ideal world would never be discovered after the fact, but in the real world frequently is.
As for your "whoa", you read correctly. If I am setting up a hash with 100 elements that I expect to access a million times, the cost of returning it as a list may indeed be outweighed by the cost of looking up that reference a million times.
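For instance (a minimal sketch; the numbers and sub names are made up), compare the two return styles:

    # Returning the hash as a list copies every key and value on each call.
    sub table_as_list {
        my %table = map { $_ => 2 * $_ } 1 .. 100;
        return %table;             # copies ~200 scalars to the caller
    }

    # Returning a reference copies one scalar, but every later access
    # pays an extra dereference.
    sub table_as_ref {
        my %table = map { $_ => 2 * $_ } 1 .. 100;
        return \%table;
    }

    my %copy = table_as_list();    # pay the copy once up front...
    my $v    = $copy{42};          # ...then plain hash lookups

    my $ref = table_as_ref();      # cheap to return...
    $v = $ref->{42};               # ...but a million of these add up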
Continuing on, garbage collector? What garbage collector? You didn't know that Perl has no garbage collector? It uses reference counting, and if you create circular references, that is your problem. Oh, and last I heard the only OS under which Perl actually returns memory to the OS is the Macintosh.
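Here is a quick sketch of the kind of cycle that reference counting alone will never reclaim, plus the conventional way out via weaken (Scalar::Util became core in Perl 5.8; before that it was a CPAN module):

    use Scalar::Util qw(weaken);

    {
        my $parent = {};
        my $child  = { parent => $parent };
        $parent->{child} = $child;
    }   # both refcounts are still 1 here: neither hash is ever freed

    {
        my $parent = {};
        my $child  = { parent => $parent };
        $parent->{child} = $child;
        weaken($child->{parent});  # a weak link doesn't count toward the
    }   # refcount, so both hashes are freed normally at scope exit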
As for the benchmarks, you are talking to the wrong guy if you expect me to care. I routinely put in run-time checks that get called every time my function runs and verify that the argument list really did make sense. This makes my code easier to develop, and for what I do debugging time is worth more than computer time.
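Along the lines of this hypothetical checker (the function name and rules are invented for illustration):

    use Carp;

    sub set_weight {
        my ($node, $weight) = @_;
        # These checks run on every call, but they catch bad callers at
        # the call site instead of three modules later.
        croak "set_weight: expected 2 arguments" unless @_ == 2;
        croak "set_weight: node must be a hash ref"
            unless ref($node) eq 'HASH';
        croak "set_weight: weight must be a non-negative number"
            unless defined($weight) && $weight =~ /^\d+(?:\.\d+)?$/;
        $node->{weight} = $weight;
    }

croak (from the standard Carp module) reports the error from the caller's point of view, which is exactly what you want for this style of check.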
If you really care about performance, Perl is the wrong language. It is fast for an interpreted language, sure, but compared to C it is pathetic. I mean, in C you don't waste time on such silly things as worrying about whether your string is too large and needs to be moved somewhere it has more room; no, you just let it quickly overflow that buffer!
Really, my opinion on optimizing is that it is like running. If you are so eager to run that you start flailing your feet, you will fall over. First make sure you are upright and moving...
But you can only use aliasing like that with a global variable. Of course, as of 5.6 you can immediately turn around with our and give it a lexical-looking name, but it is still a global underneath, and accessing true lexicals is faster than accessing globals. (Of course we are quibbling over speed at this point; I believe the hash lookups are slower than either by a good margin.)
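A sketch of what I mean (the variable names are invented):

    my %big = map { $_ => 1 } 1 .. 100_000;

    our %alias;          # 'our' gives the global a lexical-looking name
    *alias = \%big;      # glob aliasing: no copy, but only package
                         # variables have a symbol-table entry to alias

    print $alias{42};    # still a global access under the hood

    my %lex = %big;      # a true 'my' variable is found by pad offset,
    print $lex{42};      # which is the faster lookup (at the cost of a copy)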
As for refcounting being garbage collection, yes. But there is no real (tracing, mark-and-sweep) garbage collector per se.
And "interpreted language" is somewhat a matter of semantics. Certainly you don't compile to native binaries. Certainly you don't interpret source code directly. But the opcodes are run through an interpreter, and that causes a lot of overhead. And there is this nice thing called "eval" which is associated with interpreted languages...
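For example:

    my $code   = 'my $sum = 0; $sum += $_ for 1 .. 10; $sum';
    my $result = eval $code;   # the string is compiled *and* run at run time
    die $@ if $@;              # compile or run-time errors land in $@
    print "$result\n";         # prints 55

You simply cannot do that in a language that commits to machine code ahead of time without dragging the whole compiler along.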
Really, my opinion on optimizing is that it is like running. ...
And many agree with you, to quote Knuth: 'Premature optimization is the root of all evil' :)
Code should be optimized for readability first. Then if a performance problem occurs, locate the source of the problem and optimize only that part of the code. As another saying goes, the processor usually spends 99 percent of its time in 1 percent of the code.
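One way to find that 1 percent is with tools that shipped with Perl at the time (Devel::DProf; the script name here is made up):

    $ perl -d:DProf myscript.pl    # writes profile data to tmon.out
    $ dprofpp                      # prints time spent per subroutine

Optimize whatever floats to the top of that report, and leave the rest readable.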
Have Fun
I like your acidic tone. It makes me think better. What you describe as the reference counter is most certainly the garbage collector. I've done the reading; I know that. Yes, MacOS is currently one of the few OSs whose memory manager actually decides whether to purge, move, or allocate memory and return RAM to the system. Nifty, huh? I actually don't care about the benchmarks either. Perl is one of the faster interpreted languages and... cliche, cliche, cliche. Oh well. I'm bored with this commentary; I don't actually see what we're arguing about. By the way, if you make a "real world discovery after the fact", then most of the program will need to be rewritten, not just a return statement. Oh well, that's all I have to say. Thanks for the info on your program.
Sorry for the tone. A stressful day at work, I am sick, and I was taking that out on you. Apologies.
As for garbage collection, there are serious arguments about whether Perl should do real garbage collection. So far it does not, but I suspect that Perl 6 will vary between implementations, and to my eyes that will be a very bad thing. Given the choice between truly reliable destruction semantics and cleaning up circular references, I know which one matters in my code...
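What "truly reliable destruction semantics" buys you, in a small sketch (the class is invented):

    package Handle;
    sub new     { my ($class, $name) = @_; bless { name => $name }, $class }
    sub DESTROY { my $self = shift; print "releasing $self->{name}\n" }

    package main;
    {
        my $h = Handle->new("lockfile");
    }   # refcount hits zero right HERE, so DESTROY runs right here
    print "after the block\n";

    # Refcounted Perl guarantees the order:
    #   releasing lockfile
    #   after the block
    # A tracing collector would release $h at some unpredictable later point.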
As for your claim that you would have to rewrite the program when you hit a performance "duh", my experience contradicts that. Of course you need to make an effort to modularize, and an ongoing effort to clean up and organize your code-base, but then overall development goes faster, you wind up with less buggy software, and when you find the inevitable performance "duh", you can generally fix it fairly easily.
Of course, to get to that point you have to be willing to constantly pay the penalty of calling lots of small functions, loading modules, having some sort of centralized development system, and all sorts of other things that slow you down in the short term. I have seen (read: had it demonstrated to me) how much this speeds you up in the end so often that I now firmly believe in putting ease of development and maintenance well above raw performance...