in reply to Parsing the command line: manual or module?

In a way it is like parsing CSV or HTML/XML with regexen - it's easy for the easy stuff, but you will get bitten by the edge cases. With a good module someone has already thought about the edge cases and provided ways of managing them.

The downside with the few command line parsing modules I've glanced at is that they all focus on *nix-style command line conventions. They simply don't handle DOS/Windows conventions (none that I've noticed, anyway). Because of that I tend to use a command line parsing "template" chunk of code that gets pasted (along with a help/error exit routine) into whatever new script I'm writing that needs command line processing. I should at least generate a module from it, but it hasn't happened yet.
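
For illustration only, a stripped-down sketch of that sort of template might look like the following. The option names and structure here are made up for the example, not my actual template; the point is just accepting -, -- and DOS-style / switch prefixes in one hand-rolled loop:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Illustrative sketch: a minimal hand-rolled parser that accepts
    # Unix-style (-v, --verbose) and DOS/Windows-style (/v) switches.
    # Note the slash prefix will collide with absolute Unix paths, which
    # is one reason this stays a Windows-oriented template.
    my %opts = (verbose => 0, outfile => '');
    my @args;

    sub usage {
        my ($msg) = @_;
        print STDERR "$msg\n" if defined $msg;
        print STDERR "usage: $0 [-v|/v] [-o file|/o file] args...\n";
        exit 1;
    }

    while (@ARGV) {
        my $arg = shift @ARGV;
        if ($arg =~ m{^(?:-{1,2}|/)(.+)$}) {    # leading -, -- or /
            my $flag = lc $1;
            if    ($flag eq 'v' or $flag eq 'verbose') { $opts{verbose}++ }
            elsif ($flag eq 'o' or $flag eq 'outfile') {
                @ARGV or usage("-o requires a file name");
                $opts{outfile} = shift @ARGV;
            }
            elsif ($flag eq 'h' or $flag eq 'help') { usage() }
            else  { usage("unknown option: $arg") }
        }
        else {
            push @args, $arg;   # bare parameter, may be interspersed with flags
        }
    }

    print "verbose=$opts{verbose} outfile=$opts{outfile} args=@args\n";

Even this toy version is already a screenful, and it punts on quoting, "--" end-of-options handling and duplicate detection.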

The fish hooks in command line processing come from duplicate flag handling, quoted parameters, and flags interspersed with ordinary parameters. Handling defaults, required parameters, help output and error reporting tend to be related issues. By the time you've handled all that lot there is a fair chunk of code involved. Add in the test suite and you really have something worth a decent-sized module. At that point letting someone else do the work starts to seem worthwhile!
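
In Perl land "letting someone else do the work" usually means Getopt::Long (which ships with perl) plus Pod::Usage for the help text. A rough sketch of the usual pattern, with made-up option names and no attempt at the DOS-switch problem above:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Getopt::Long;
    use Pod::Usage;

    # Defaults are just the initial values of ordinary variables.
    my $verbose = 0;
    my $outfile = 'out.txt';
    my $help    = 0;

    # Getopt::Long handles repeated flags (-v -v), "--outfile=name" vs
    # "--outfile name", and flags mixed in with ordinary arguments;
    # whatever is left over stays in @ARGV.
    GetOptions(
        'verbose|v+'  => \$verbose,   # counting flag: -v -v gives 2
        'outfile|o=s' => \$outfile,   # string value with a default
        'help|h|?'    => \$help,
    ) or pod2usage(2);

    pod2usage(1) if $help;

    # "Required" parameters are still your job: check and bail out.
    pod2usage("$0: no input files given.\n") unless @ARGV;

    print "verbose=$verbose outfile=$outfile files=@ARGV\n";

    __END__

    =head1 SYNOPSIS

    example.pl [-v] [-o outfile] file ...

    =cut

Defaults, repeated flags and interspersed arguments become the module's problem, the help text doubles as documentation, and the only hand-rolled bit left is the "required parameter" check.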


DWIM is Perl's answer to Gödel