in reply to Runtime introspection: What good is it?

Multiple thoughts. First of all, Why monkeypatching is destroying Ruby has a decent, if obvious, discussion of the dangers of run-time code that modifies other existing code. Certainly dynamic run-time generation of code has dangers which need to be clearly understood and avoided.

Secondly, there are often conflicts between different forms of run-time dynamism. For instance, see Why breaking can() is acceptable, where I try to explain the conflict between how Perl defines UNIVERSAL::can and AUTOLOAD. So you can't really use dynamic code introspection in Perl unless you know that other techniques are not being used, or at least have an idea of how they might impact you.

Thirdly, I'm going to dispute the claim that it is better to do things at compile time than at run time. The reason is that at run time you have more information than you do at compile time. For example, you know which code paths you will and will not execute, so you don't waste time dealing with what you don't need. Yes, I am talking about JIT, but JIT goes a lot further than most people realize. Dynamic Languages Strike Back has a lot to say on this topic that you might like. In particular, if you combine introspection with aggressive JITing, you get the opportunity to achieve more aggressive optimizations than you could afford at compile time. Why? Well, at compile time there is no end to the number of combinations you might have to worry about, and if you try to optimize all of them you wind up with an extremely large executable that hits performance problems because it takes up too much memory. But when you go JIT you can see the 2-4 combinations that really get used and optimize just those.

Of course using that as an argument for using introspection in Perl is seriously disingenuous since Perl 5 does not do JIT and is unlikely to ever do JIT. :-)

Now where have I, personally, done stuff at run time using things like introspection and reflection? Truthfully, not often. But when I have, it has been useful. For example, in one place in my current reporting framework I have a way for objects from lots of modules to be passed into a particular method in another module. There are several useful methods that they might implement. When I load the first module I don't know what others might exist, and I don't know what will be passed in, so I leave the decision about whether to call the method until run time, where I check that the method exists by calling can, and then do one thing if it does and another if it doesn't.
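
Roughly, it looks like the following sketch. (The package and method names here are made up for illustration; they are not the real ones from my framework.)

    package Report::Section;

    sub render_from {
        my ( $self, $obj ) = @_;

        # Run-time decision: does this object provide the optional hook?
        if ( my $method = $obj->can( 'extra_headers' ) ) {
            # It does: call it through the code ref that can() returned.
            $self->add_headers( $obj->$method() );
        }
        else {
            # It doesn't: fall back to a sensible default.
            $self->add_headers( $self->default_headers() );
        }

        return;
    }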

Were there other ways to accomplish the same thing? Of course. But it seemed to me that the best way to do it was at run time, since at compile time I simply did not have enough information to know what might be passed in. In another language there would have been more information at compile time, and it could have been done then. However, it is in the nature of the beast that this method is only called once per program run. Unless you want to add a separate compilation phase (which introduces its own overheads and problems), doing this at compile time instead of run time gains you nothing and would require more overhead. So it stands as a counterexample to your thesis that it is always better to do things at compile time.

Re^2: Runtime introspection: What good is it?
by BrowserUk (Patriarch) on Jul 12, 2008 at 08:53 UTC

    1. JIT.

      I'm going to reject JIT as a counter argument to my premise on the basis that:

      • If you do what JIT does at compile-time, it isn't Just In Time.

        Java bytecode is frequently compiled on a different platform to where it is run. It's not practical to translate to machine code for an unknown (number of) target platform(s).

      • What JIT does is not under the control of the (application) programmer.

        Whilst it is possible to adjust one's application programming style to gain (more) benefit from JIT on a specific platform, and a particular implementation of the runtime on that platform, generically JIT is beyond the control of the application programmer.

    2. ... so I leave the decision as to whether to call the method to run time where I check that the method exists by calling can, and then do one thing if it does and another if it doesn't.

      This is the 'plug-in' scenario.

      You could also do:

      sub Another::Module::particularMethod {
          my $o = shift;
          ...
          # Try the method and trap anything it throws.
          eval { $o->method( ... ); };
          if ( $@ =~ m[^Can't locate object method "method"] ) {
              # The object doesn't provide the method.
              oneThing();
          }
          else {
              # Otherwise, assume the method exists.
              anotherThing();
          }
          ...
      }

      Still a run-time decision. But, it can be done this way in any language that supports exceptions. No need for the inclusion of RTTI tables, or picking apart the bytecode.

      Is there any advantage to doing it this way?

      I think yes. Just because a class has a method named X doesn't mean that X is what you think it is; for example, you can't know:

      1. That it takes the same number of parameters as you're expecting.
      2. Or the same types of parameters you're expecting.

        With some reflection APIs (e.g. Java's), you can discover both of these, at the considerable cost of decompiling the byte code at run-time, and at the further considerable cost of writing the logic to iterate the known public methods with the particular name you're interested in and then check the number and types of the parameters they expect, and the type they return.

        But even then, having done all of that discovery, you still don't know whether it:

      3. Will actually implement the same semantics as you want it to.

      Even after you've been through the laborious process of run-time discovery, when you (or whoever) eventually gets around to invoking the method, it may still raise an exception--either an 'expected' one due to bad input, or an unexpected one due to its semantics being entirely different from what you are hoping for. I.e. instead of calculating some statistics, it tries to wipe your hard drive.

      So, when you eventually do get around to calling the method, you're going to have to wrap the call in an exception handler anyway. So why not skip all the slow, laborious and run-time costly discovery, and just try invoking it?

      Simpler (less) and clearer code: either it worked or it didn't, rather than: it might work (or not); it still might work (or not); it still might work (or not); it worked (or not).

      Same final effect.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      I'm going to reject your rejection of JIT on the basis that there are things you can do with JIT that you simply cannot do at compile time. And furthermore, while it is true that changing what JIT does is beyond the control of the programmer, deliberately taking advantage of its full capabilities is not.

      A runtime type check followed by a runtime branching operation is exactly the kind of code that JIT can optimize away if you have a good JIT system.

      However I reiterate that JIT is a red herring in the case of languages like Perl that don't have it.

      Turning to your exception solution: it has a major drawback. There are lots of possible reasons why there could be an exception, and your code has swallowed most of them. Easily fixable, granted, but not without adding more code and obscuring what is going on. And it is easy for a programmer to forget that they need to do that--I've seen many forget exactly that, including you just now.
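
      For instance (sticking with the hypothetical names from your sketch--method(), oneThing() and anotherThing()--and leaving the arguments out), the eval version really needs to grow into something like this to avoid eating unrelated errors:

          # Call the method, then separate "no such method" from every other failure.
          eval { $o->method() };
          if ( my $err = $@ ) {
              if ( $err =~ m[^Can't locate object method "method"] ) {
                  # The object simply doesn't provide the method.
                  oneThing();
              }
              else {
                  # Some unrelated failure: re-throw it rather than
                  # silently treating it like one of the expected cases.
                  die $err;
              }
          }
          else {
              # The method exists and ran without dying.
              anotherThing();
          }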

      Not to mention the fact that if Perl made a minor change to its error message, then your code would break. Not that Perl is likely to do that, but they haven't promised they won't, and they have documented how UNIVERSAL::can works.

      Furthermore, your criticisms strike me as unrealistic. If I define a plugin API, I expect to have things passed into it that are designed to be plugins. Yes, it is possible (but unlikely if you use descriptive method names, which I try to) that some random module might implement methods with the same names as the ones I expect in my plugins. But even if one did, it still wouldn't matter, because no sane programmer is going to pass it into my module as a plugin. (I can't solve the problem of insane programmers, and I refuse to try.)

      Thus trying to use something that isn't a plugin as a plugin is not a problem that I'm going to waste code protecting against.

      Now we have the problem of dealing with a badly designed plugin that doesn't do what it is supposed to do. Before you even consider guarding against that, you need to understand your problem domain. My problem domain is that I am writing plugins for use in my own module. If the plugin doesn't do what it is supposed to, that is a bug that I will fix. There is, therefore, no need for me to protect against that case. The same would apply for many of us.

      A problem domain that more closely mirrors what you're saying is one where you're writing a popular application which random third parties will add plugins to. But even there you can defend the position that it is the responsibility of the plugin author to make sure they follow your API, and not yours to code against the possibility that they didn't.