Sort and display unique IPs in a file containing many different IPs
I have a file containing many lines in the following format.
The first column holds IP addresses, and an IP can appear more than once. The other columns don't need to be sorted. If the first column were just a number, I could use "sort -u -k1,1". However, in this case an IP consists of four numbers. Can you please help me sort the lines in IP order, remove duplicates, and list only the lines with unique IPs?
Thank you in advance!
Solution 1:[1]
Let's say your file containing the data is called data.txt; you can do:
awk '{print $1}' data.txt | sort | uniq
- awk: keeps only the first column, the IP addresses
- sort: sorts the IPs
- uniq: removes the duplicates
If you need to know how many times each IP appears in the file, you can add the -c option to uniq.
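For illustration, with a hypothetical data.txt whose first column is the IP:
$ cat data.txt
10.0.0.2 serverB up
10.0.0.1 serverA up
10.0.0.2 serverB down
$ awk '{print $1}' data.txt | sort | uniq
10.0.0.1
10.0.0.2
$ awk '{print $1}' data.txt | sort | uniq -c
      1 10.0.0.1
      2 10.0.0.2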
Solution 2:[2]
This should work, sorting each octet of the IP address individually in numeric order:
awk '{print $1}' file.txt | sort -u -t. -k1,1n -k2,2n -k3,3n -k4,4n
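Here -t. makes sort split fields on the dots, -u keeps only one line per key, and each -kN,Nn key compares the Nth octet numerically. With hypothetical first-column values 10.0.0.10, 10.0.0.9, and 10.0.0.9 in file.txt, it prints:
10.0.0.9
10.0.0.10
A plain lexical sort would instead place 10.0.0.10 before 10.0.0.9, which is why the per-octet numeric keys matter.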
Solution 3:[3]
Assuming you want to sort on the IP address while removing duplicates based only on the IP, the script below sorts the file and then walks the sorted output, skipping each line whose IP matches the previous line's:
#!/bin/bash

originalFile=/path/to/original/file
outputFile=/path/to/intermediate/file
cleanFile=/path/to/final/file

# Sort so that lines sharing an IP become adjacent
sort "$originalFile" > "$outputFile"

lastIP=""
while read -r line; do
    # Split the line on whitespace; the first word is the IP
    read -r -a words <<< "$line"
    # Keep the line only if its IP differs from the previous line's
    if [ "${words[0]}" != "$lastIP" ]
    then
        printf "%s\n" "$line" >> "$cleanFile"
    fi
    lastIP="${words[0]}"
done < "$outputFile"
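The same result can also be had in a single pipeline with a common awk idiom (not part of the original answer), which keeps only the first line seen for each value of the first field:
sort /path/to/original/file | awk '!seen[$1]++' > /path/to/final/file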
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Nic3500 |
| Solution 2 | Diego Torres Milano |
| Solution 3 | Haeri Yoon |
