in reply to Re^6: API complexity measures
in thread API complexity measures

You seem to conflate understanding what a module does with measuring the complexity of its API.

I'd re-word that slightly and aim it generally: Many people conflate what a module can do with measuring its interface complexity.

As with my TV analogy elsewhere, it doesn't really matter how complex (rich) the interface is if some large percentage of that complexity is rarely needed. If an appropriate abstraction and defaults are chosen to model the underlying complexity, then a large percentage of the applications using the API can be written with a small, frequently or always used subset of the full API.

This encourages simple starting points that are quick to put together and get the basics operating. Refinements of the application can then be made incrementally as the need arises.

The best Perl example I can think of is IO::Socket::INET. Putting together simple clients and servers is almost trivial. Essentially, it's just IO::Socket::INET->new( 'hostname:port' ) plus Perl's print and diamond <> operators. Far simpler than using the underlying system calls: accept, bind, connect, getpeername, getsockname, getsockopt, listen, recv, send, setsockopt, shutdown, socket, socketpair; endprotoent, endservent, gethostbyaddr, gethostbyname, gethostent, getnetbyaddr, getnetbyname, getnetent, getprotobyname, getprotobynumber, getprotoent, getservbyname, getservbyport, getservent, sethostent, setnetent, setprotoent, setservent.
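To make that concrete, here's a minimal sketch of an echo client and server on the loopback interface, using nothing but IO::Socket::INET, print and the diamond operator; not a raw socket(2), bind(2), listen(2) or accept(2) call in sight. (The fork-based layout is just one way to demo both ends in one script.)

```perl
use strict;
use warnings;
use IO::Socket::INET;

# Listen on an OS-chosen ephemeral port on the loopback interface.
my $server = IO::Socket::INET->new(
    LocalAddr => '127.0.0.1',
    Listen    => 1,
    Proto     => 'tcp',
) or die "listen: $!";
my $port = $server->sockport;

my $reply;
if ( my $pid = fork ) {                         # parent: the client
    my $client = IO::Socket::INET->new("127.0.0.1:$port")
        or die "connect: $!";
    print {$client} "hello\n";                  # plain print on a socket
    $reply = <$client>;                         # diamond operator on a socket
    close $client;
    waitpid $pid, 0;
}
else {                                          # child: the server
    my $conn = $server->accept or exit 1;
    my $line = <$conn>;
    print {$conn} "echo: $line";
    close $conn;
    exit 0;
}

print $reply;    # prints "echo: hello"
```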

All of that complexity is still there; you simply do not need to use it most of the time. Another win of the whole IO::* and IO::Socket::* suite of modules is their consistency. Need to move from an INET socket to a UNIX-domain socket or vice versa? No problem: 90% of your code needn't change at all. Of course, much of that is helped by the consistency of the underlying Unix IO model in the first place, but still, Graham Barr's abstraction and implementation of those modules is the very essence of good OO and consistency.
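A small sketch of that consistency: the helper below only touches the common IO::Socket read/write interface, so it works unchanged on INET and UNIX-domain handles. It's demonstrated here with a UNIX-domain socketpair because that needs no address setup; swapping in IO::Socket::INET->new('host:port') would change nothing in ask() itself.

```perl
use strict;
use warnings;
use IO::Socket::UNIX;
use Socket qw(AF_UNIX SOCK_STREAM PF_UNSPEC);

# Transport-agnostic: works on any connected IO::Socket handle.
sub ask {
    my ( $sock, $msg ) = @_;
    print {$sock} "$msg\n";
    return scalar <$sock>;
}

my ( $one, $two ) = IO::Socket::UNIX->socketpair( AF_UNIX, SOCK_STREAM, PF_UNSPEC )
    or die "socketpair: $!";

my $reply;
if ( my $pid = fork ) {                 # parent asks
    close $two;
    $reply = ask( $one, 'hello' );
    waitpid $pid, 0;
}
else {                                  # child echoes
    close $one;
    my $line = <$two>;
    print {$two} "echo: $line";
    exit 0;
}

print $reply;    # prints "echo: hello"
```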

This is also where the much maligned and underused tie mechanism wins. A huge proportion of CS algorithms and data structures can be encapsulated beneath the simple and well-known interfaces presented by tied scalars, arrays and hashes. If more modules used these interfaces, for many applications the programmers wouldn't even need to read the documentation, because they already know how to use Perl's native constructs. Code is smaller and simpler, learning curves flatter, productivity higher. Documentation effort can be confined to the differences and inconsistencies, rather than having to re-iterate the basics of how to instantiate and access things.
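For instance, here's a toy tied hash (the class name Tie::FoldHash is made up for illustration) that folds keys to lower case. The point is on the caller's side: it is ordinary hash syntax all the way down, with no new accessor API to learn or document.

```perl
use strict;
use warnings;

package Tie::FoldHash;
use Tie::Hash;                         # provides the Tie::StdHash base class
our @ISA = ('Tie::StdHash');

# Delegate to the standard tied-hash behaviour, folding keys first.
sub STORE  { my ( $self, $k, $v ) = @_; $self->SUPER::STORE( lc $k, $v ) }
sub FETCH  { my ( $self, $k )     = @_; $self->SUPER::FETCH( lc $k ) }
sub EXISTS { my ( $self, $k )     = @_; $self->SUPER::EXISTS( lc $k ) }

package main;

tie my %h, 'Tie::FoldHash';
$h{HeLLo} = 42;                        # plain hash syntax throughout
print $h{hello}, "\n";                 # prints 42
print exists $h{HELLO} ? "yes\n" : "no\n";    # prints "yes"
```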

Too often, interface design is done from the perspective, and with the knowledge, of what goes on inside the library or class, and of what is available and possible there, when it should always be done from the perspective of the application user.

It is probably the single biggest flaw in the XP program development model that people write tests to test the interface, instead of writing the interface to suit the application(s). Individual tests are written to exercise individual function or method calls, which leaves the big picture of how those APIs fit together to construct a full application often unexercised until the module is so far along that it's too late to make fundamental changes.
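One way to restore the big picture is to include at least one test that drives the module the way a real application would, alongside the per-call tests. A sketch with Test::More, against a hypothetical Stack class invented here purely for illustration:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical module under test: a minimal stack class.
package Stack;
sub new  { bless { items => [] }, shift }
sub push { my $s = shift; push @{ $s->{items} }, @_; $s }
sub pop  { my $s = shift; pop @{ $s->{items} } }

package main;

# Unit-style test: exercises one call pair in isolation.
is( Stack->new->push(1)->pop, 1, 'push then pop returns the item' );

# Application-style test: chains the calls the way real code would,
# exercising how the API's pieces fit together.
my $s = Stack->new;
$s->push($_) for 1 .. 3;
my @out;
while ( defined( my $x = $s->pop ) ) { push @out, $x }
is_deeply( \@out, [ 3, 2, 1 ], 'stack empties in LIFO order' );
```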

APIs should be written to fit the requirements of a (real) application. With a mind to future application requirements for sure, but not attempting to cater for every possible future application--because half of them will never be written.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."