One of the problems with many OO interfaces is that they allow the author to conceal a lot of cruft in addition to providing legitimate encapsulation. Another is that they tend to encourage the duplication of data.
For example: a while ago I was collaborating on writing a module to efficiently determine the N maximum or N minimum values of a list or array. I wrote this as two functions that took N, and either a list or an array reference, and returned the N required values.
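A minimal sketch of what that procedural interface looks like (the names and the naive sort-based implementation here are illustrative, not the actual module's code, which selected the values far more efficiently than a full sort):

    use strict;
    use warnings;

    # N largest values, from either a flat list or a single array reference.
    sub nMax {
        my ( $n, @data ) = @_;
        @data = @{ $data[0] } if @data == 1 and ref $data[0] eq 'ARRAY';
        return ( sort { $b <=> $a } @data )[ 0 .. $n - 1 ];
    }

    # nMin is the same with the sort order reversed.
    sub nMin {
        my ( $n, @data ) = @_;
        @data = @{ $data[0] } if @data == 1 and ref $data[0] eq 'ARRAY';
        return ( sort { $a <=> $b } @data )[ 0 .. $n - 1 ];
    }

    my @values  = ( 12, 3, 45, 7, 19, 2, 33 );
    my @top3    = nMax( 3, \@values );   # 45, 33, 19
    my @bottom3 = nMin( 3, @values );    # 2, 3, 7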
My collaborator wanted to turn this into an OO interface. I asked why. The response was that it would allow users to accumulate their values in the object before asking for the N max or N min values. I said they could just as easily accumulate them in an array before calling the functions. He said they might want to obtain the values again without having them re-calculated. I said they could just as easily store and re-use them externally. That went on for a bit.
For me, the final nail in the OO coffin is that for the majority of uses, the data would not exist solely for the purposes of this module; it would usually be part of some other dataset that is also used for other purposes. Perhaps it is part of a larger structure, say a hash of arrays, each of which must be N-maxed. Storing the data internally to the module would just duplicate data that is (and must be) stored elsewhere, and that extra storage could have a substantial impact on users' programs.
Sure, you could construct the object, call the nMax() method, and then destroy the object immediately for each dataset. But then, where is the gain over just calling a function?
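To make that construct-call-destroy comparison concrete, here is the shape of the two calling conventions, assuming a hash of arrays and a hypothetical NMaxMin class:

    my %data = ( a => [ 1 .. 10 ], b => [ 5 .. 20 ] );

    # OO style: construct, copy the data in, ask, throw away -- once per dataset.
    for my $key ( keys %data ) {
        my $obj  = NMaxMin->new( @{ $data{$key} } );   # hypothetical class
        my @top5 = $obj->nMax( 5 );
        # $obj goes out of scope here; its private copy of the data bought nothing.
    }

    # Procedural style: just ask.
    for my $key ( keys %data ) {
        my @top5 = nMax( 5, $data{$key} );
    }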
The trouble with OO doctrine is that it only considers data for the life of the object. But data rarely lives in isolation. It comes from somewhere, it goes somewhere, and in between it has many operations, often not obviously related, applied to it.
So you start by accumulating data into a hash of arrays; make copies of it into an nMaxMin object; then copy it into a stats object to obtain the standard deviation; then copy it into a GD::Data object to draw a graph; then copy it into a CSV object in order to write it to disk interspersed with commas. But all of these operations can be accomplished better by applying functions to a standard array, without all the creation and destruction of myriad objects or the duplication of the data.
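The same point in code: with plain functions, each array in the hash can be run through every stage without intermediate objects. Here std_dev is a hypothetical stand-in for the stats step, and the CSV step is just a join:

    my %data = ( widgets => [ 3, 9, 4, 7 ], gadgets => [ 8, 1, 6, 2 ] );

    for my $key ( sort keys %data ) {
        my $values = $data{$key};                    # stored once, used everywhere
        my @top2   = nMax( 2, $values );             # extremes, as above
        my $sd     = std_dev( @$values );            # hypothetical stats function
        print join( ',', $key, @$values ), "\n";     # the "CSV" output step
    }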
Now, (auto)boxing enthusiasts might say that if we could add methods to standard arrays, that would solve the duplication problem. But if all it does is allow my @min5 = @a->nMin( 5 ); rather than my @min5 = nMin( 5, \@a );, is the overhead, however small, worth it?
The basic criterion for whether an interface should be OO or procedural is: do the basic operations of that interface require the retention of supplementary state that cannot easily be contained in the basic datatype?
A good example of bad OO is the FileHandle module. This wraps the basic built-in file-handling facilities, but does nothing to address their problems. It doesn't provide per-handle supplementary state; that is, you cannot set $/, $\, or a raft of other state that you'd wish were settable per handle. So, like that other wrapover, it does nothing to fix the underlying problem, but rather just draws attention to it.
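For instance, since $/ remains a single global even with FileHandle loaded, the usual workaround is still to localise it around each read rather than setting it on the handle itself (file name and record terminator here are made up for illustration):

    use FileHandle;

    my $fh = FileHandle->new( 'records.dat', '<' ) or die "open: $!";

    # No true per-handle record separator: $/ is still one global variable,
    # so we localise it for the duration of this read only.
    my @records = do {
        local $/ = "\n%%\n";    # record terminator for this particular file
        <$fh>;
    };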
Bad OO is much worse than bad procedural. For example: FileHandle gives you the ability to set autoflush on a handle using a nicely named method, but getting that ability involves loading thousands of lines of code and a great deal of overhead. Sure it is nicer than