I would think that getting the file size is faster than computing the hash: the size is a single metadata lookup, while hashing requires reading the entire file.
You shouldn't be doing either. It should have been done for free when the file was written.
If you didn't, you could compare files in a clever order and compute their hashes as they are being read for comparison; caching those hashes may save you from having to repeat compares later.
So it seems to me that comparing file sizes first, and computing hashes only for files whose sizes collide, would be faster, especially for large numbers of files.
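A minimal sketch of that pruning idea (the function name and chunk size are my own choices, not from the thread): group files by size, and hash only the files whose size collides with another file's.

```python
import hashlib
import os
from collections import defaultdict

def find_duplicate_candidates(paths):
    """Return groups of paths with identical content, hashing only
    files whose size collides with at least one other file."""
    # Step 1: cheap metadata lookup — bucket paths by file size.
    by_size = defaultdict(list)
    for p in paths:
        by_size[os.path.getsize(p)].append(p)

    # Step 2: hash only the buckets with more than one member;
    # a file with a unique size cannot have a duplicate.
    by_hash = defaultdict(list)
    for group in by_size.values():
        if len(group) < 2:
            continue
        for p in group:
            h = hashlib.sha256()
            with open(p, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(p)

    return [g for g in by_hash.values() if len(g) > 1]
```

With many files of distinct sizes, step 2 touches only a small fraction of the data, which is where the speedup comes from.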
As the number of files grows, so does the number of size collisions, which shrinks the benefit of pruning by size.