in reply to Re^3: Parsing of serialized data
in thread Parsing of serialized data

Aha, now I understand what generalized means :)

Obviously, this is all true, and it might be the reason why some modules never end up on CPAN. On the other hand, there are thousands of modules there, yet the ones that provide the interface I've described are a small minority. So the effort of generalization alone can't explain this situation.

Just to be clear: I'm not trying to find an excuse for not sharing some of my work. I'm trying to understand why nobody has shared similar work already. There are lots of modules that handle HTTP by reading sockets themselves, so why are there almost no modules that just parse HTTP without handling sockets? :) So far, my guess is that such modules are considered too complex for "drop-in" usage, so nobody feels like going that way. Effectively, this could be called a "design flaw".
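
To illustrate the kind of interface I mean, here is a minimal sketch of a push-style HTTP parser. The module name HTTP::PushParser, its methods, and read_from_anywhere() are all made up for the example; the point is only that the caller owns the I/O (socket, file, test string) and just feeds raw bytes to the parser:

    use strict;
    use warnings;
    use HTTP::PushParser;   # hypothetical module, for illustration only

    my $parser = HTTP::PushParser->new(
        # called once a complete request has been parsed
        on_request => sub {
            my ($method, $uri, $headers) = @_;
            print "$method $uri\n";
        },
    );

    # the caller does the reading however it likes; the parser never
    # touches a socket, it only consumes whatever chunks it is given
    while ( my $chunk = read_from_anywhere() ) {
        $parser->feed($chunk);
    }
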

Another example: XML parsers. The Perl module XML::Parser uses the underlying expat library. The library itself uses the "parser" approach, i.e. it processes data as the user provides it. Nevertheless, XML::Parser does not expose that same interface by default; instead it provides a front-end that does the reading for the user. (Well, there is the XML::Parser::ExpatNB module that provides the "pure" expat interface.) The point is, the creator of the XML::Parser wrapper also considered the approach taken by the expat author to be too complex. Needless to say, the most popular way to parse XML is to let the parser build an object and then search that object for the interesting nodes, which is what most modern parsers offer.
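
For comparison, here is roughly how the "pure" expat-style interface looks through XML::Parser: parse_start returns an XML::Parser::ExpatNB object, and the caller feeds it data in whatever chunks happen to arrive (the handler bodies here are just placeholders):

    use strict;
    use warnings;
    use XML::Parser;

    my $parser = XML::Parser->new(
        Handlers => {
            Start => sub { my ($expat, $elem) = @_; print "start: $elem\n" },
            End   => sub { my ($expat, $elem) = @_; print "end:   $elem\n" },
        },
    );

    # parse_start returns an XML::Parser::ExpatNB object; the caller
    # decides where the data comes from and feeds it in chunks
    my $nb = $parser->parse_start;
    $nb->parse_more('<root><item>hel');
    $nb->parse_more('lo</item></root>');
    $nb->parse_done;
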

So it seems that even though the "divide and conquer" approach to data parsing promises a lot of flexibility, in reality it is rarely used, because the majority of developers favor simpler interfaces over that flexibility.