@cameron give jdupes a try:
jdupes -rSs ./ > dupes.txt
That will list every duplicate in your current folder, recursing into subfolders, and write the results to a text file
Or include the -d flag to let you pick which files to keep and which to delete
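For example (just a rough sketch, check the man page for the exact flag combination you want), the interactive cleanup variant would look something like:
jdupes -rd ./
That walks the same tree, but for each set of duplicates it prompts you to choose which copies to preserve before deleting the rest.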
Believe me, I've been clearing out hundreds of gigabytes of files for a few years now, most of them dupes, and jdupes is invaluable.
I did, yes, and thank you! I have been using a combination of GThumb, the file manager, and fslint to dig through, but I was grumpy because I had to check each one manually no matter how I did it. It's all good though. Almost done. I cleared out about 20 GB of duplicates yesterday. :)
@cameron Ah! True. :) Cool! But how did you end up with duplicates worth 20 GB of disk space? :P
Some weird stuff with file name capitalization, giving me file.jpg and file.JPG in the same folder. And somehow, copying files in the past automatically appended a numeral to the file name rather than replacing the existing file. So I had file_1, file_2 and so on... Nightmares!
@cameron I have probably a few of those as well. ;)
Linux Geeks doing what Linux Geeks do...