Dealing with duplicates
Often, you need to eliminate duplicates from an input file. This could be based on the entire line content or on certain fields. Such tasks are typically solved with the sort and uniq commands. Advantages of Perl include a regexp-based field separator, record separators other than newline, no need for sorted input and, in general, more flexibility since it is a programming language.
The example_files directory has all the files used in the examples.
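As a quick illustration of the regexp-based field separator advantage, here's a sketch using a hypothetical config.txt (not part of the example_files directory) where keys and values are separated by either : or =:

# hypothetical input, keys separated from values by : or =
$ cat config.txt
name:foo
name=bar
mode=fast
mode:slow

# retain only the first assignment for each key
$ perl -F'[:=]' -ane 'print if !$h{$F[0]}++' config.txt
name:foo
mode=fast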
Whole line duplicates
You can use the uniq function from the List::Util module or use a hash to retain only the first copy of duplicates from one or more input files.
$ cat purchases.txt
coffee
tea
washing powder
coffee
toothpaste
tea
soap
tea
# same as: perl -MList::Util=uniq -e 'print uniq <>' purchases.txt
# can also use: perl -ne 'print if !exists $h{$_}; $h{$_}=1'
$ perl -ne 'print if !$h{$_}++' purchases.txt
coffee
tea
washing powder
toothpaste
soap
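Since the hash persists across files, the same command works unchanged for multiple input files. Here's a sketch assuming a second hypothetical file more.txt (not part of the example_files directory):

$ cat more.txt
soap
shampoo
tea

$ perl -ne 'print if !$h{$_}++' purchases.txt more.txt
coffee
tea
washing powder
toothpaste
soap
shampoo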
See also huniq, a faster alternative for removing line based duplicates.
Column wise duplicates
The hash-based solution is easy to adapt for removing field-based duplicates. Just change $_ to the required fields after setting the appropriate field separator.
$ cat duplicates.txt
brown,toy,bread,42
dark red,ruby,rose,111
blue,ruby,water,333
dark red,sky,rose,555
yellow,toy,flower,333
white,sky,bread,111
light red,purse,rose,333
# based on the last field
# -l isn't needed if all the lines end with a newline character
$ perl -F, -ane 'print if !$h{$F[-1]}++' duplicates.txt
brown,toy,bread,42
dark red,ruby,rose,111
blue,ruby,water,333
dark red,sky,rose,555
Here's an example with multiple fields. As seen in the Comparing fields section, you can either use comma-separated values to construct the hash key or use a hash of hashes.
# based on the first and third fields
# can also use: perl -F, -ane 'print if !$h{$F[0]}{$F[2]}++'
$ perl -F, -ane 'print if !$h{$F[0],$F[2]}++' duplicates.txt
brown,toy,bread,42
dark red,ruby,rose,111
blue,ruby,water,333
yellow,toy,flower,333
white,sky,bread,111
light red,purse,rose,333
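Note that $h{$F[0],$F[2]} relies on Perl's multidimensional hash emulation: the comma-separated keys are joined using the $; special variable (SUBSEP, "\034" by default) to form a single string key. A minimal sketch of the equivalence:

# both subscripts produce the same underlying hash key
$ perl -e '$h{"a","b"} = 1; print exists $h{join($;, "a", "b")} ? "same\n" : "diff\n"'
same

If a field value can itself contain the $; character, the joined keys can collide, which is when the hash of hashes form is the safer choice.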
Duplicate count
In this section, how many times a duplicate record is found plays a role in determining the output. First up, printing only a specific numbered duplicate.
# print only the second occurrence of duplicates based on the second field
$ perl -F, -ane 'print if ++$h{$F[1]} == 2' duplicates.txt
blue,ruby,water,333
yellow,toy,flower,333
white,sky,bread,111
# print only the third occurrence of duplicates based on the last field
$ perl -F, -ane 'print if ++$h{$F[-1]} == 3' duplicates.txt
light red,purse,rose,333
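The comparison can also be relaxed instead of matching an exact count. For example, here's a sketch that retains at most two copies of duplicates:

# retain up to two occurrences based on the last field
$ perl -F, -ane 'print if ++$h{$F[-1]} <= 2' duplicates.txt
brown,toy,bread,42
dark red,ruby,rose,111
blue,ruby,water,333
dark red,sky,rose,555
yellow,toy,flower,333
white,sky,bread,111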
Next, printing only the last copy of duplicates. Since the count isn't known in advance, the tac command comes in handy again.
# reverse the input line-wise, retain the first copy and then reverse again
$ tac duplicates.txt | perl -F, -ane 'print if !$h{$F[-1]}++' | tac
brown,toy,bread,42
dark red,sky,rose,555
white,sky,bread,111
light red,purse,rose,333
To get all the records based on a duplicate count, you can pass the input file twice. Then use the two file processing tricks to make decisions ($#ARGV is 0 while the first file argument is being read and -1 for the second, so !$#ARGV separates the counting pass from the printing pass).
# all duplicates based on the last column
$ perl -F, -ane '!$#ARGV ? $h{$F[-1]}++ :
$h{$F[-1]}>1 && print' duplicates.txt duplicates.txt
dark red,ruby,rose,111
blue,ruby,water,333
yellow,toy,flower,333
white,sky,bread,111
light red,purse,rose,333
# all duplicates based on the last column, minimum 3 duplicates
$ perl -F, -ane '!$#ARGV ? $h{$F[-1]}++ :
$h{$F[-1]}>2 && print' duplicates.txt duplicates.txt
blue,ruby,water,333
yellow,toy,flower,333
light red,purse,rose,333
# only unique lines based on the third column
$ perl -F, -ane '!$#ARGV ? $h{$F[2]}++ :
$h{$F[2]}==1 && print' duplicates.txt duplicates.txt
blue,ruby,water,333
yellow,toy,flower,333
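The two-file trick also provides a tac-free alternative for getting the last copy of duplicates: count the total occurrences in the first pass, then print a line when its running count reaches that total. A sketch:

# last copy of duplicates based on the last field, without using tac
$ perl -F, -ane '!$#ARGV ? $c{$F[-1]}++ :
                 ++$h{$F[-1]} == $c{$F[-1]} && print' duplicates.txt duplicates.txt
brown,toy,bread,42
dark red,sky,rose,555
white,sky,bread,111
light red,purse,rose,333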
Summary
This chapter showed how to work with duplicate contents for records and fields. If you don't need regexp-based separators and your input is too big to fit in memory, then specialized command line tools like sort and uniq will be better suited.
Exercises
The exercises directory has all the files used in this section.
1) Retain only the first copy of a line for the input file lines.txt. Case should be ignored while comparing the lines. For example, hi there and HI TheRE should be considered as duplicates.
$ cat lines.txt
Go There
come on
go there
---
2 apples and 5 mangoes
come on!
---
2 Apples
COME ON
##### add your solution here
Go There
come on
---
2 apples and 5 mangoes
come on!
2 Apples
2) Retain only the first copy of a line for the input file twos.txt. Assume space as the field separator with exactly two fields per line. Compare the lines irrespective of the order of the fields. For example, hehe haha and haha hehe should be considered as duplicates.
$ cat twos.txt
hehe haha
door floor
haha hehe
6;8 3-4
true blue
hehe bebe
floor door
3-4 6;8
tru eblue
haha hehe
##### add your solution here
hehe haha
door floor
6;8 3-4
true blue
hehe bebe
tru eblue
3) For the input file twos.txt, display only the unique lines. Assume space as the field separator with exactly two fields per line. Compare the lines irrespective of the order of the fields. For example, hehe haha and haha hehe should be considered as duplicates.
##### add your solution here
true blue
hehe bebe
tru eblue