in reply to snakes and ladders
Help would be very much appreciated.
Two things will help you more than anything else.
The first is a decent understanding of algorithmic complexity, usually discussed in terms of "big O notation". The value of this is being able to analyze a piece of code and have a sense of how well or poorly it will scale. This is technically a science, but there's an art to it, and the basic question is "How much work does it take to do things this way?" (The corollary is "How much data do I expect to process?" If that's small, a big big O doesn't matter very much.)
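To make that concrete, here's a small illustration of my own (not code from the thread): both subs below count how many lines have already appeared earlier in the input, but the first does O(n²) comparisons while the second does O(n) work with a hash, and the gap only shows up once the input gets big.

```perl
use strict;
use warnings;

# O(n^2): for each line, scan all earlier lines to see if it's a repeat.
sub count_repeats_quadratic {
    my @lines   = @_;
    my $repeats = 0;
    for my $i (0 .. $#lines) {
        for my $j (0 .. $i - 1) {
            if ($lines[$i] eq $lines[$j]) {
                $repeats++;
                last;
            }
        }
    }
    return $repeats;
}

# O(n): one pass, with a hash remembering what we've already seen.
sub count_repeats_linear {
    my @lines   = @_;
    my %seen;
    my $repeats = 0;
    for my $line (@lines) {
        $repeats++ if $seen{$line}++;
    }
    return $repeats;
}

# Same answer either way; the difference is how the work grows with input size.
my @lines = ('a', 'b', 'a', 'c', 'b', 'a');
print count_repeats_quadratic(@lines), "\n";   # 3
print count_repeats_linear(@lines), "\n";      # 3
```

For a ten-line input the difference is invisible; for a million-line document, the quadratic version is the one that falls over.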
The second is knowing how to combine an efficient tokenizer with a finite state machine. As I've mentioned before, this is an important concept covered in SICP and HOP. In short, you want to process your input document once, probably character by character (and you can make that more efficient if you want), to build an intermediate data structure which represents your document. You can do this even when some parts of the document can only be evaluated fully after you've processed earlier parts (it's how Perl's eval works, after all).
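Here's a minimal sketch of that idea (again my own illustration, assuming a made-up `<name>`-style tag syntax, not the actual markup under discussion): the input is read once, character by character, a two-state machine decides whether each character belongs to plain text or to a tag, and the output is a flat token list that a later pass can walk and evaluate.

```perl
use strict;
use warnings;

# One pass over the input, character by character, switching between two
# states and emitting tokens as we go.  The token list is the intermediate
# structure a later pass would walk.
sub tokenize {
    my ($input) = @_;
    my @tokens;
    my $state  = 'text';   # either 'text' or 'tag'
    my $buffer = '';

    for my $char (split //, $input) {
        if ($state eq 'text') {
            if ($char eq '<') {
                push @tokens, [ text => $buffer ] if length $buffer;
                $buffer = '';
                $state  = 'tag';
            }
            else {
                $buffer .= $char;
            }
        }
        else {  # inside a tag
            if ($char eq '>') {
                push @tokens, [ tag => $buffer ];
                $buffer = '';
                $state  = 'text';
            }
            else {
                $buffer .= $char;
            }
        }
    }
    push @tokens, [ text => $buffer ] if $state eq 'text' and length $buffer;
    return \@tokens;
}

# Example: turns the document into a list of [type, value] pairs.
my $tokens = tokenize('Hello <name>, your total is <total>.');
print "$_->[0]: $_->[1]\n" for @$tokens;
```

The intermediate structure here is just an array of [type, value] pairs; a real implementation would probably build a tree and defer evaluation of some nodes, but the single pass over the input is the part that keeps it efficient.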
I won't promise you that this will make your code beloved by other hackers (what I've seen doesn't fit my needs from a human factors perspective, but I admit I don't have the experience with it that you do), but I can promise you that this is the technique favored by compiler writers as reasonably straightforward yet efficient and effective. What you're doing is, essentially, writing a compiler.
Replies are listed 'Best First'.

Re^2: snakes and ladders
  by Logicus (Initiate) on Aug 25, 2011 at 05:03 UTC
  by chromatic (Archbishop) on Aug 25, 2011 at 07:03 UTC
  by Logicus (Initiate) on Aug 25, 2011 at 07:16 UTC
  by pemungkah (Priest) on Aug 25, 2011 at 08:04 UTC
  by Logicus (Initiate) on Aug 25, 2011 at 11:52 UTC
  by JavaFan (Canon) on Aug 25, 2011 at 23:12 UTC
  by Logicus (Initiate) on Aug 25, 2011 at 05:59 UTC