in reply to Tokenizer / Lexical Analyzer for parsing Wiki

I want to write my own wiki formatter using some Perl module for the tokenizing / lexical analyzing.
If speed/performance is your main focus, and Text::WikiFormat is too slow for you, I'd say you don't want to use Perl at all.

MJD says "you can't just make shit up and expect the computer to know what you mean, retardo!"
I run a Win32 PPM repository for perl 5.6.x and 5.8.x -- I take requests (README).
** The third rule of perl club is a statement of fact: pod is sexy.

Re^2: Tokenizer / Lexical Analyzer for parsing Wiki
by Jaap (Curate) on Sep 02, 2004 at 11:15 UTC
    That is a rather good point. But I know that Text::WikiFormat is not very speed-oriented, so I can write something a lot speedier in Perl. My entire Wiki is in Perl too.

    Currently looking at Parse::Lex.
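
    A minimal sketch of the kind of thing I have in mind, assuming Parse::Lex's standard name/pattern token list; the token names and the handful of wiki rules (bold, italics, [[links]]) are made up for illustration, nowhere near a complete wiki grammar:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Parse::Lex;

        # Illustrative token table: patterns are tried in order,
        # so ''' must be listed before ''.
        my @tokens = (
            BOLD    => q{'''},             # ''' toggles bold in many wiki dialects
            ITALIC  => q{''},              # '' toggles italics
            LINK    => q{\[\[[^\]]+\]\]},  # [[internal link]]
            NEWLINE => q{\n},
            TEXT    => q{[^'\[\n]+},       # a run of ordinary text
            OTHER   => q{.},               # catch-all for a stray ' or [
        );

        my $lexer = Parse::Lex->new(@tokens);
        $lexer->skip('');   # whitespace matters in wiki text, so don't skip any

        $lexer->from("Some '''bold''' text and a [[Wiki Link]].\n");

        while (1) {
            my $token = $lexer->next;
            last if $lexer->eoi;
            printf "%-8s %s\n", $token->name, $token->text;
        }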
      That is a rather good point. But I know that Text::WikiFormat is not very speed-oriented, so I can write something a lot speedier in Perl

      Is Text::WikiFormat a lost cause? Or do you think it could be sped up? If so, why not see if the author is interested in your attempting to do so?

      However, if you really feel Text::WikiFormat is a lost cause, then I would recommend writing your own lexer and tokenizer; that way you can be sure to squeeze out all the speed you need.
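
      In pure Perl that usually means a dispatch loop of anchored regexes using \G and the /gc flags. A minimal sketch, with the caveat that the token set is again made up for illustration:

          use strict;
          use warnings;

          # Hand-rolled tokenizer: try each anchored pattern at the current
          # position; /gc advances pos() on a match and leaves it put on failure.
          sub tokenize {
              my ($text) = @_;
              my @tokens;
              pos($text) = 0;
              while (pos($text) < length $text) {
                  if    ($text =~ /\G'''/gc)              { push @tokens, [ BOLD    => "'''" ] }
                  elsif ($text =~ /\G''/gc)               { push @tokens, [ ITALIC  => "''"  ] }
                  elsif ($text =~ /\G(\[\[[^\]]+\]\])/gc) { push @tokens, [ LINK    => $1 ] }
                  elsif ($text =~ /\G\n/gc)               { push @tokens, [ NEWLINE => "\n" ] }
                  elsif ($text =~ /\G([^'\[\n]+)/gc)      { push @tokens, [ TEXT    => $1 ] }
                  elsif ($text =~ /\G(.)/gcs)             { push @tokens, [ OTHER   => $1 ] }  # stray ' or [
              }
              return @tokens;
          }

          printf "%-8s %s\n", @$_
              for tokenize("Some '''bold''' text and a [[Wiki Link]].\n");

      With no method calls or callbacks in the inner loop, a loop like this tends to be noticeably faster than a general-purpose lexer generator.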

      -stvn