Re: Space taken by a coderef
by chromatic (Archbishop) on Jan 18, 2002 at 22:10 UTC
Good questions, but I'm not convinced of the practical utility. What's on your mind?
- That depends. You can look at the definition of the xpvcv struct in cv.h in the Perl sources for the basic information any subroutine (anonymous or not) has to carry around. Of course, a reference to a subroutine is just a scalar, and takes as much memory as any other reference. (A rough way to measure this is sketched just after this list.)
- Yes, the number of operations in the function and the pads it uses all eat up a little bit of memory.
- Yes, anything that you'd think costs memory probably does. :)
- I believe so, based on the existence of some funky closure-like bugs (one is illustrated after this list), but I'm not an authority on the subject. I do know that the pad is reused on subsequent subroutine calls, if possible, to avoid the malloc/free cycle. I would imagine recursion has its own opportunities and challenges for optimization.
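On the first point, if you want rough numbers for your particular perl, something like this will do. It's a quick sketch, assuming the Devel::Size module from CPAN is installed; the exact figures vary by perl version and build.

    use strict;
    use warnings;
    use Devel::Size qw(size total_size);

    my $anon = sub { my ($x, $y) = @_; $x + $y };

    # size() reports the CV structure itself; total_size() also walks the pad
    # and anything else reachable from the sub.
    printf "CV alone:       %d bytes\n", size($anon);
    printf "CV + reachable: %d bytes\n", total_size($anon);

The scalar you store $anon in is just an ordinary reference on top of that.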
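And on the last point, one of the closure-like oddities in question is the classic "will not stay shared" behaviour of a named sub nested inside another sub. A small illustration (not from the original thread):

    use strict;
    use warnings;    # compile-time warning: Variable "$x" will not stay shared

    sub outer {
        my $x = shift;
        sub inner { return $x }          # named sub binds to outer's first pad only
        my $anon = sub { return $x };    # anonymous sub re-captures $x on every call
        printf "outer=%d inner=%d anon=%d\n", $x, inner(), $anon->();
    }

    outer(1);    # outer=1 inner=1 anon=1
    outer(2);    # outer=2 inner=1 anon=2 -- inner() still sees the old value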
I'm not sure there is practical utility. The first attraction is that closures provide "hard encapsulation": the only way to get at the internals of the object is through its interface. That comes at a small CPU hit, naturally.
My immediate thought is for flyweight objects that need more than one scalar's worth of info. Instead of keeping the info in class-level parallel arrays, I was thinking that closures might be more memory-efficient. (That is, compared with arrays or hashes.)
Now, yes, I know that optimizing for memory isn't necessarily a good thing, especially this early in the game. But I just want to keep my options open and see what the comparison looks like.
An example of what I'm talking about would be:
sub new {
    my $class = shift;
    return undef if ref $class;    # refuse to be called on an instance
    my ($first, $second) = @_;

    # The object is a closure over $first and $second; callers can only
    # reach the data through this sub.
    my $self = sub {
        my $var = shift;
        my ($mode, $newval) = @_;
        if ($var == 0) {
            return $mode ? $first = $newval : $first;
        } elsif ($var == 1) {
            return $mode ? $second = $newval : $second;
        }
        return undef;              # unknown attribute index
    };

    bless $self, $class;
    return $self;
}
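To make the calling convention concrete, using that constructor might look like this. It's a hypothetical usage sketch: the package name Flyweight and the values are made up, and the 0/1 index plus the true "set" flag follow the closure's argument handling above.

    # Assuming the constructor above lives in a (made-up) package Flyweight.
    my $obj = Flyweight->new('alpha', 'beta');

    print $obj->(0), "\n";     # get $first  -> 'alpha'
    print $obj->(1), "\n";     # get $second -> 'beta'
    $obj->(0, 1, 'gamma');     # set $first to 'gamma'
    print $obj->(0), "\n";     # -> 'gamma'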
------ We are the carpenters and bricklayers of the Information Age. Don't go borrowing trouble. For programmers, this means Worry only about what you need to implement.
Ahh, I see. In this case, Perl compiles the closure once and only needs one copy of the optree. It only has to attach to the lexical scope once for each unique closure created, so you'll have the cost of a scratchpad that holds $first and $second, as well as the cost of the lexical variables within the closure.
It wouldn't surprise me if this were slightly more efficient than a hash, but you'd probably need to get above five or six member variables before it pays off. You'll also save a little on accessors, though that's probably just the cost of their optrees, and you can avoid it if you're clever.
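To put numbers on that break-even point, one could compare the two layouts directly. A rough sketch, again assuming Devel::Size; the variable names just mirror the constructor above, and the absolute figures depend on your perl:

    use strict;
    use warnings;
    use Devel::Size qw(total_size);

    # Closure-based flyweight with two fields, as in the constructor above.
    my ($first, $second) = ('alpha', 'beta');
    my $closure_obj = sub {
        my ($var, $mode, $newval) = @_;
        return $mode ? $first = $newval : $first   if $var == 0;
        return $mode ? $second = $newval : $second if $var == 1;
        return undef;
    };

    # Conventional hash-based object with the same two fields.
    my $hash_obj = { first => 'alpha', second => 'beta' };

    printf "closure object: %d bytes\n", total_size($closure_obj);
    printf "hash object:    %d bytes\n", total_size($hash_obj);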
Re: Space taken by a coderef
by hakkr (Chaplain) on Jan 18, 2002 at 20:52 UTC
Surely all references are just scalar memory addresses, so the size of a ref depends on how much memory you're addressing. I think that means they should all take up the same amount of space. I'm not sure, but I don't see how what a ref points to makes any difference to its size.
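That's true of the reference scalar itself; what differs is the thing behind it, which is really what the question is after. For instance (a rough sketch, again assuming Devel::Size):

    use strict;
    use warnings;
    use Devel::Size qw(total_size);

    my $array_ref = [ 1 .. 1000 ];
    my $code_ref  = sub { my ($x, $y) = @_; $x + $y };

    # Two references that cost the same in the scalars holding them,
    # but with very different amounts of memory behind them:
    printf "behind the array ref: %d bytes\n", total_size($array_ref);
    printf "behind the code ref:  %d bytes\n", total_size($code_ref);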
Re: Space taken by a coderef
by dragonchild (Archbishop) on Jan 18, 2002 at 21:59 UTC
I was probably unclear as to what I was curious about. What I'm looking for isn't the different sizes of references, but how much space a closure takes as opposed to a hash or array.
As for why I'm asking ... I'm just curious. It's something I don't know about, so I want to know about it. :-)
------ We are the carpenters and bricklayers of the Information Age. Don't go borrowing trouble. For programmers, this means Worry only about what you need to implement.
Re: Space taken by a coderef
by Stegalex (Chaplain) on Jan 18, 2002 at 21:54 UTC
I'm interested to know why you're concerned about this. Memory is cheap and plentiful, and Perl hogs memory anyway, so worrying about the small stuff usually isn't important, IMHO.
I like chicken.
Thoughts like this are what will keep Perl from being taken seriously as a production language. At my company we recently had to completely recode our entire backend because the code was written so loosely that, once it went into production with actual customers, it was eating up gigabytes of RAM across all the forking and background processes. RAM is not THAT cheap when you have to buy terabytes of it just to work around sloppy code.
Cheap memory is no excuse for sloppy code. 'Nuff said.
I really don't think that is 'nuff said. In fact I said quite a bit more about the topic at Re (tilly) 1: What does "efficient" mean?!?, as well as in other nodes.
As obvious as it may be to you that you should never be knowingly inefficient, optimizing for the wrong problem is a waste of overall resources.
If you disagree, then I invite you to look at the memory use of Perl and decide to work in C and assembler until you see the light.