Yep, PPI worked. Prototypes are half done. The definitions of constants and prototypes are collected and passed as a parameter to the expression parser. (When parsing lists, for example, you of course need to know whether a sub is declared to take n parameters.)
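A tiny sketch of why the expression parser needs the prototypes (with_block here is a hypothetical helper, not from the code above): a (&;@) prototype lets the caller pass a bare block with no sub keyword and no comma after it, which changes how the following list is parsed.

```perl
use strict;
use warnings;

# Hypothetical helper: the (&;@) prototype tells the parser that the
# first argument is a bare block, followed by an optional list.
sub with_block (&;@) {
    my ($code, @rest) = @_;
    return ($code->(), @rest);
}

# This parses only because the prototype is known at compile time;
# note: no 'sub' keyword and no comma after the block.
my @r = with_block { 42 } 'tail';
print "@r\n";    # prints "42 tail"
```

Without the prototype in scope, the same call site is a syntax error, which is exactly why a parser has to track prototypes before it can parse argument lists.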
> Yep, PPI worked
Not sure if you got my point.
PPI is a static parser; this (should?) mean that when it sees something like
- use Try::Tiny;
at "compile" time, it won't do what perl does, namely:
- search @INC
- require the first .../Try/Tiny.pm found there
- run Try::Tiny->import()
- "export" try (&) {...} (among others) with an accurate prototype
all of which perl needs in order to keep parsing a subsequent try {...} correctly.
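For reference, the compile-time sequence above is roughly what use expands to. A sketch using the core module List::Util in place of Try::Tiny, so it runs without extra dependencies:

```perl
use strict;
use warnings;

# 'use List::Util qw(sum);' is roughly equivalent to this BEGIN block:
BEGIN {
    require List::Util;          # searches @INC, loads List/Util.pm
    List::Util->import('sum');   # runs at compile time, exports sum()
}

# By the time the rest of the file is parsed, sum() is already known
# (and any prototype it carries is already in effect).
print sum(1, 2, 3), "\n";        # prints 6
```

A purely static parser sees only the one `use` line; it never executes the BEGIN-time require/import, so any prototypes the module would export stay invisible to it.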
I haven't tested this, but I bet you will need to patch PPI to handle this.°
At least if you want to construct a decent Perl dialect and not just some interesting demos.
I say dialect because it's very hard to achieve full compatibility.
OTOH once your parser and compiler really work, you can try to transpile it into your target languages to bootstrap a stand-alone dialect which is consistent in all those languages.
At least in theory...
Update:
°) There is a whole bunch of other things happening at compile time...
Sorry, I was unclear. Yes, the use dependencies are loaded, and the prototypes (like the ones in Try::Tiny) etc. are extracted.
I had a problem with this, by the way, and realized why Perl doesn't cache its bytecode. I wanted to cache compiled code, keyed by a hash of the file content, but it just seems too fragile. You would also need a cache of the used files (so they could be loaded), and everything would need a recompile if a file further down in the dependency stack changed.
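A minimal sketch of that fragility, with a hypothetical cache_key() helper (not from the project above): a content-hash key is only sound if it covers the main file plus every transitive dependency, so one edited dependency invalidates everything compiled on top of it.

```perl
use strict;
use warnings;
use Digest::SHA ();   # core module since Perl 5.9.3

# Hypothetical: build a cache key from the main file AND all files it
# (transitively) uses; miss one dependency and the cache serves stale code.
sub cache_key {
    my @files = @_;                       # main file plus dependencies
    my $sha = Digest::SHA->new(256);
    for my $path (sort @files) {
        open my $fh, '<', $path or die "$path: $!";
        local $/;                         # slurp mode
        $sha->add(scalar <$fh>);
    }
    return $sha->hexdigest;
}
```

Editing any file in the list changes the key, which is exactly the "everything needs a recompile" cascade described above.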
Instead, I put it on the todo list to compile a program together with its dependencies into a single standalone code file.
Edit: If you tell me that Try::Tiny works with require, then I am totally wrong. :-) That should only work with use, right? (I'd have to read the code; there was some claim that it was slow, so I never Try:ed.)
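The distinction matters because import (and with it the prototype) has to run before the call site is compiled; a runtime require is too late for code already parsed. A self-contained sketch using string eval to control compilation order (later_sub is a hypothetical stand-in for try):

```perl
use strict;
use warnings;

# This call site is compiled before any prototype for later_sub exists,
# so the block-without-comma syntax is a parse error.
my $before = eval 'later_sub { 42 } "x"; 1' ? 'ok' : 'fail';

# Declare the sub with its (&;@) prototype only now, at runtime --
# analogous to a plain require with no compile-time import.
eval 'sub later_sub (&;@) { my $c = shift; return $c->() } 1' or die $@;

# The same source compiled again, now that the prototype is visible: parses.
my $after = eval 'later_sub { 42 } "x"; 1' ? 'ok' : 'fail';

print "$before $after\n";    # prints "fail ok"
```

So require alone doesn't break Try::Tiny, but the prototype only helps code compiled after the import has run, which is what use's implicit BEGIN guarantees.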