PerlMonks  

Re^4: Runtime introspection: What good is it?

by BrowserUk (Patriarch)
on Jul 07, 2008 at 14:59 UTC ( [id://696007] )


in reply to Re^3: Runtime introspection: What good is it?
in thread Runtime introspection: What good is it?

At this point we could (or not) get into a deep discussion about what constitutes "runtime introspection". For example (coded in any suitable language), is this runtime introspection?

open FILE, '<', ...;
my @numbers;
while( <FILE> ) {
    push @numbers,
          m[^[ ]*[0-9]+[ ]*$]     ? 0 + $_
        : m[^[ ]*[0-9A-F]+[ ]*$]i ? hex()
        : die "Bad number at $.";
}
## That hex() would do all of that for you and more is irrelevant

I would say not. I would call that data driven code. And this is the reasoning behind my "special casing" of syphilis' example of IV/UV/NV/PV branching within the Perl core. The type is SCALAR. The method called depends entirely upon the form in which the data in the scalar is currently stored, not its 'type', in Perl terms. You can equally apply substr to a scalar currently containing the NV = 1234.56789, as you can if it contains the PV = '1234.56789'. It just takes a little more (internal) work. In Perl terms there is no type change, just a storage format conversion. Like converting an integer to a double in order to call pow(), because that's all that is available in the library.
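
That storage-format-versus-type point is directly observable in core Perl; a minimal sketch of the substr/NV example above:

```perl
use strict;
use warnings;

my $x = 1234.56789;                 # currently stored as an NV (a double)
print substr( $x, 0, 4 ), "\n";     # string op on a number: prints "1234"

my $y = '1234.56789';               # currently stored as a PV (a string)
print $y + 1, "\n";                 # numeric op on a string: prints "1235.56789"
```

In Perl terms, $x and $y are both of type SCALAR throughout; only the internal storage format changes as each operator demands.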

Of course, you might cause an exception if you try to apply + to 'fred', but no more so than if you try to divide by 0. Both methods ('+' and '/') are available to every scalar at any time. Whether they are applicable at any given moment depends upon the value the scalar currently holds. And if the source of the data is external, there is no way to make a compile-time decision about it. It (the potential exception) can only be dealt with at runtime, regardless of what language you use.
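
That runtime-only nature is easy to demonstrate: the same division is applied to every value, and only an eval at runtime can discover whether it is applicable to the value currently held. A minimal sketch:

```perl
use strict;
use warnings;

for my $v ( '10', '0', 'fred' ) {
    # '/' is available to every scalar; whether it is *applicable*
    # depends on the value held right now, so the check is at runtime.
    my $r = eval { no warnings 'numeric'; 100 / $v };
    print defined $r ? "100/$v = $r\n" : "100/$v failed: $@";
}
```

Here 'fred' numifies to 0, so both it and '0' raise the same division-by-zero exception, which no compile-time analysis of the loop could have predicted from external data.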

Then we come to the discussion of whether a parser is an interpreter. Or perhaps more correctly, at what point a parser becomes an interpreter. That's the sort of thing CS guys will continue to debate until well after I'm going through my final type conversion, and is something that you are probably far better suited to argue than I--but I'll have a go :)

Does your "new type of rule" for your trading system, require a Turing complete language?

If so, you need an interpreter. If not, you only need a parser. Whether that interpreter needs runtime introspection to function is an open question, that could only be answered by a full definition of the rules, and the language to define them.
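
As a sketch of the parser-only case (the rule grammar, field names and subroutines here are hypothetical, not from the thread): a fixed, non-Turing-complete grammar of threshold rules needs only to be parsed into a closure once; no interpreter runs afterwards:

```perl
use strict;
use warnings;

my %ops = (
    '>' => sub { $_[0] > $_[1] },
    '<' => sub { $_[0] < $_[1] },
);

# Parse "field OP number" into a closure; die on anything else.
sub parse_rule {
    my ($text) = @_;
    my ( $field, $op, $num ) = $text =~ /^(\w+)\s*([<>])\s*([\d.]+)$/
        or die "Bad rule: $text";
    return sub { my ($tick) = @_; $ops{$op}->( $tick->{$field}, $num ) };
}

my $rule = parse_rule('price > 100');
print $rule->( { price => 150 } ) ? "trigger\n" : "no trigger\n";
```

The moment the rule language grows loops or user-defined definitions, the closure factory above stops being enough and something has to interpret.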

Indeed, we've now come full circle. Because, in that sense, perl is a statically-typed (by some definition), compiled program that parses and subsequently interprets runtime data, in the form of Perl.

Could you write a Perl parser and interpreter in Haskell? We know you can. (Whether it's a good idea is another matter :). But would you need to endow either Haskell, or the Perl dialect you implement, with runtime introspection in order to do it?

Now we're full circle yet again, because it depends upon how you define runtime introspection (see above :).

The only trading system with which I have any familiarity was an Excel spreadsheet! A very large and complex one, but still just a spreadsheet. And traders would add their own pages to their private workbooks to look for new ways of discerning patterns and raising triggers. Occasionally, these would become useful enough and complex enough that they were 'adopted', and the meat of the calculations they performed would be converted to subroutines by professional programmers, written in Fortran for performance, packaged in a DLL, and accessible to the spreadsheets via OLE.

The system was extensible, dynamic and reactive, but was any runtime introspection involved? I'd have to say not. Not in any sense that a Ruby programmer would understand anyway.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^5: Runtime introspection: What good is it?
by dragonchild (Archbishop) on Jul 08, 2008 at 00:54 UTC
    Data-driven code is the solution if you only have the first two requirements. You hard-code a set of rule patterns, then you allow new rules based on the existing patterns. In fact, you would (essentially) have a UI that allowed you to pick the rule, then fill in the blanks. Your datastore, if it's an RDBMS, would have a table for each rule pattern and just go from there.

    The moment you have some sort of run-time parsing (required for my third requirement), I would argue that you have run-time introspection. You might not be introspecting the language you are written in, but you are introspecting the language you are parsing. So, yes, your example does run-time introspection over the language of hex numbers. In that sense, string eval is a basis for all forms of run-time introspection in those languages that provide one.
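
    The string-eval point can be made concrete in Perl: the program parses and compiles code that exists only as runtime data. A minimal sketch:

```perl
use strict;
use warnings;

# Code that exists only as runtime data, parsed and compiled by eval.
my $src    = 'sub { $_[0] * 2 }';
my $double = eval $src or die $@;
print $double->(21), "\n";    # prints 42
```

    Every higher-level introspection facility ultimately performs this same move: treating the program, or some description of it, as data to be examined at runtime.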

    But, I think that's a bit of an easy way out. The question here is the ability to make decisions based on the qualities and attributes of the run-time environment. That's always going to be a set of data structures (in Perl, it's the symbol table). So, you can always view run-time introspection as data-driven programming. The big key, imho, is whether or not the language natively provides facilities for that (such as Perl and Ruby) or forces you to write a DSL that does (like C or Haskell).
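
    In Perl that "set of data structures" really is the symbol table, and it can be walked like any other hash; a minimal sketch (the greet sub is just an example):

```perl
use strict;
use warnings;

sub greet { "hello" }

# The symbol table is an ordinary hash, so runtime
# introspection here is just a hash walk.
{
    no strict 'refs';
    for my $name ( sort keys %main:: ) {
        print "sub: $name\n" if defined &{"main::$name"};
    }
}

# Or ask the question directly:
print "greet is callable\n" if main->can('greet');
```

    Languages like C would instead force you to build and maintain an equivalent table of function pointers by hand, which is the DSL-writing burden described above.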


    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
      The big key, imho, is whether or not the language natively provides facilities for that (such as Perl and Ruby) or forces you to write a DSL that does (like C or Haskell).

      The other side of that argument is: why burden all your programs, classes and instances with all the infrastructural overhead of a Domain Generic Language, which is all the greater because it is generic and therefore must cater for all foreseeable eventualities, when all you achieve by doing so is the deferral of the construction of the required DSLs--one for each use case--on top of the DGL (MOP), until runtime?

      When you could construct one or more DSLs, at compile-time, that have far lower infrastructural requirements because they only build what they need for this particular use, and then apply them to just those programs, classes and instances that need it.
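
      A sketch of that compile-time route in Perl (the Point class is hypothetical): generate exactly the accessors this one class needs, once, at compile time, leaving no generic MOP machinery around at runtime:

```perl
use strict;
use warnings;

package Point;

# Build only the accessors this class needs, at compile time.
BEGIN {
    no strict 'refs';
    for my $field (qw( x y )) {
        *{"Point::$field"} = sub {
            my $self = shift;
            $self->{$field} = shift if @_;
            return $self->{$field};
        };
    }
}

sub new { my ( $class, %args ) = @_; return bless {%args}, $class }

package main;

my $p = Point->new( x => 3, y => 4 );
print $p->x, ",", $p->y, "\n";    # prints 3,4
```

      Once the BEGIN block has run, the generated subs are indistinguishable from hand-written ones; the "DSL" existed only during compilation.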

      And despite my difficulties with both of the languages (Haskell and Lisp (see Paul Graham's Arc)) that exemplify this approach to code construction, I have a great affinity for the principles they embody: brevity == clarity, and YAGNI. And it is hard to challenge the results of the approach(*).

      (*) Anyone at a loose end next weekend?


        Your point is well-made. This is why all these languages are embeddable in each other. :-)

