in reply to removing reversed elements

I do not know how many pairs you have. If the number of pairs is small enough that you can reasonably store all of them in memory, a hash table is a convenient way to look for duplicates: store each pair under a key built from its two elements in a canonical order (say, sorted), so that (A,B) and (B,A) produce the same key.
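
For instance, a minimal sketch, assuming the pairs arrive as array references and that you want to keep the first occurrence of each unordered pair (the sample data is made up):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my @pairs = ( [ 'a', 'b' ], [ 'b', 'a' ], [ 'c', 'd' ] );   # illustrative input

    my %seen;
    my @unique;
    for my $pair (@pairs) {
        # Canonical key: the two elements in sorted order, so that
        # (A,B) and (B,A) hash to the same key.
        my $key = join "\0", sort @$pair;
        push @unique, $pair unless $seen{$key}++;
    }
    # @unique now holds [ 'a', 'b' ] and [ 'c', 'd' ].

If instead you want to drop both members of a reversed pair, count the keys in a first pass and keep only the pairs whose key was seen exactly once.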

A more general solution is to write all (A,B) pairs to one external disk file and all (B,A) pairs to a second. Sort both files with an external disk sort, then merge them and discard all matching entries. You only ever need to consider the one tuple most recently read from each file: if the two are identical, throw them away, and also throw away every identical occurrence that follows. Since the files are sorted, all such occurrences will be adjacent in each file.
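
Here is a rough sketch of that merge step, assuming the pairs have already been written out as tab-separated lines to pairs.sorted (original order) and reversed.sorted (reversed order), and that both files have already been run through an external sort such as the system sort(1); the file names are only illustrative:

    #!/usr/bin/perl
    use strict;
    use warnings;

    open my $fwd, '<', 'pairs.sorted'    or die "pairs.sorted: $!";
    open my $rev, '<', 'reversed.sorted' or die "reversed.sorted: $!";

    my $f = <$fwd>;
    my $r = <$rev>;
    while ( defined $f ) {
        if ( defined $r and $f eq $r ) {
            # This pair also occurs reversed: drop every adjacent copy
            # from both files (they are contiguous because of the sort).
            my $dup = $f;
            $f = <$fwd> while defined $f and $f eq $dup;
            $r = <$rev> while defined $r and $r eq $dup;
        }
        elsif ( defined $r and $r lt $f ) {
            $r = <$rev>;      # reversed line with no partner here; skip it
        }
        else {
            print $f;         # no reversed twin exists: keep this pair
            $f = <$fwd>;
        }
    }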

(The latter technique, which goes straight back to COBOL and, in fact, to the days before digital computers, works for arbitrary quantities of data, has a completion-time curve that is close to linear, and requires almost no RAM.)

For smaller quantities of data that are known to fit comfortably in available RAM, the same technique can be used with in-memory Perl lists and the built-in sort.
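
A minimal in-memory rendering of the same sort-and-merge idea (the sample data and the tab-separated pair format are illustrative):

    use strict;
    use warnings;

    my @pairs = ( [ 1, 2 ], [ 2, 1 ], [ 3, 4 ] );   # illustrative input

    my @fwd = sort map { join "\t", @$_ }         @pairs;
    my @rev = sort map { join "\t", reverse @$_ } @pairs;

    my @keep;
    my ( $i, $j ) = ( 0, 0 );
    while ( $i < @fwd ) {
        if ( $j < @rev and $fwd[$i] eq $rev[$j] ) {
            my $dup = $fwd[$i];                       # pair also exists reversed: drop all copies
            $i++ while $i < @fwd and $fwd[$i] eq $dup;
            $j++ while $j < @rev and $rev[$j] eq $dup;
        }
        elsif ( $j < @rev and $rev[$j] lt $fwd[$i] ) {
            $j++;                                     # reversed line with no partner; skip it
        }
        else {
            push @keep, $fwd[ $i++ ];                 # no reversed twin: keep it
        }
    }
    # @keep is ("3\t4") for the sample data.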