It sounds like the abstraction is going to leak heavily.
First of all, the idea of creating a model that you can implement portably and program to is a good one. It is the right way to make a complex application cross-platform. The wrong way is to scatter the equivalents of #ifdef everywhere.
If you can't create a model like that, there is a problem right there. As you try to document differences, they will quickly explode in complexity.
For instance, take transaction semantics. In every relational database that I know of that supports transactions, except Oracle, if you try to read a row that is being updated in another transaction, you block. A key part of Oracle's design is that queries are non-blocking and give you a consistent view of the database at a specific point in time. So if the row has been updated (partially or completely) since your transaction started, Oracle goes to its undo data to reconstruct what the value was when you opened your transaction. No blocking.
As you can imagine, this is a major difference. Oracle's behaviour is consistent with the standards, and its approach is great for concurrency. However, it introduces tons of possibilities for race conditions that people who are experienced with other databases would never think of. For instance, suppose that you open a transaction, read a value, update it, and commit. The transaction guarantees an atomic update, so that is perfectly safe, right? Not in Oracle! Two transactions can start at about the same time and both read the row. One updates and commits; the other's write blocks until the first transaction finishes, then it gets to write and commit. The second transaction never saw the first one's update, so that update is silently lost.
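To make that concrete, here is a minimal Perl DBI sketch of the lost-update pattern. The DSN, credentials, table and column names are all invented for illustration:

    use strict;
    use warnings;
    use DBI;

    # Two sessions run this same code at about the same time.  Because
    # Oracle's reads never block, both SELECTs see the old balance, and
    # whichever session commits last silently overwrites the other's update.
    my $dbh = DBI->connect('dbi:Oracle:mydb', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 0 });

    my ($balance) = $dbh->selectrow_array(
        'SELECT balance FROM accounts WHERE id = ?', undef, 42);

    # ... the other session reads the same (now stale) value here ...

    $dbh->do('UPDATE accounts SET balance = ? WHERE id = ?',
             undef, $balance + 100, 42);
    $dbh->commit;    # one of the two deposits is quietly lost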
I can easily see someone with experience on multiple databases using your module and never realizing that they have to do something different for Oracle than for everyone else. Worse yet, it seems to work (that is always the fun with race conditions). And then when it goes wrong, if they can track it down at all, they'll blame Oracle for working exactly as Oracle has always been documented to work.
So there is the problem for you. You can add an option that tells people when a database implements Oracle's semantics. You can try to add an explanation of the issue. Of course someone who reads that can't know whether they have really forced serialization to happen where they need it without having Oracle to play with. (And testing heavily; race conditions are notoriously hard to detect.)
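For what it's worth, one common way to force that serialization on Oracle is SELECT ... FOR UPDATE, which takes the row lock at read time instead of at write time. A sketch of the same code with that change (same invented names as above):

    # With FOR UPDATE the read itself locks the row, so the second session
    # blocks on its SELECT and, once the first commits, sees the new value.
    my ($balance) = $dbh->selectrow_array(
        'SELECT balance FROM accounts WHERE id = ? FOR UPDATE', undef, 42);

    $dbh->do('UPDATE accounts SET balance = ? WHERE id = ?',
             undef, $balance + 100, 42);
    $dbh->commit;

Of course that gives up some of the concurrency that Oracle's design buys you, and the point is that you have to know to ask for it.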
But that is just one issue with one database. The Sybase issue that I mentioned is nasty. My solution when I worked with Sybase was always to prepare and finish one statement handle at a time, and never, ever use prepare_cached. It wasn't hard to avoid the problem. But fixing a system that had already made the mistake would be a lot more fun.
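Roughly, the discipline looked like this (a sketch; the query and names are made up):

    # One active statement handle at a time: prepare, drain the results,
    # finish, and only then prepare the next statement on the same $dbh.
    # No prepare_cached anywhere.
    my $sth = $dbh->prepare('SELECT id, name FROM customers WHERE region = ?');
    $sth->execute('west');
    while (my ($id, $name) = $sth->fetchrow_array) {
        # ... work with the row ...
    }
    $sth->finish;    # release the handle before doing anything else with $dbh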
And no, I didn't get around it by opening up two database handles, because many versions of Sybase (including the one that I was on) do page-level locking. (I think that they implemented row-level locking in version 12.) Page-level locking means that all sorts of things that shouldn't deadlock can in Sybase. Deliberately setting up races where you can readily deadlock yourself didn't strike me as a good idea.
So you document this as more stuff that people need to know to really program portably.
Before long you wind up with a document that explains tons of details of how lots of different databases work. And it all matters.
In the end, DBMS portability is not a checkbox that you can just put in a comparison list, because the abstraction leaks badly. You can offer assistance to people who want to write portable code. You can define a specific portability problem that you will address, and address it. But you can't solve DBMS portability itself, because it is intrinsically unsolvable except by forcing people down to a very low lowest common denominator.