Remove all occurrences of duplicate lines in bash or Python, keeping only the unique lines

I have already tried the solution here, but it gives me an empty file, even though my input does contain non-duplicated unique lines.

I have a large (2 GB) text file containing a very long string on each line.

AB02819380213.   : (( 00 99   -   MO:ASKDJIO*U* HIUGHUHAHUHHA AUCCGTCTTCTTTTTTA FFFFFFFFFFFFFFFFFFFFF:FFFFFFFFFFFFFFFFF
a01219f8b
NJSAJDH*)8888-   + 99 100.    -   NKJJABHASDGASGYUOISADIJIJA  TCTCTCTTTCTACACTAATCACAATACTACA FFFFFFFFFFF
a023129ab
NJSAJDH*)8888-   + 99 100.    -   NKJJABHASDGASGYUOISADIJIJA  TCTCTCTTTCTACACTAATCACAATACTACA FFFFFFFFFFF
000axa2381a
AB02819380213.   : (( 00 99   -   MO:ASKDJIO*U* HIUGHUHAHUHHA AUCCGTCTTCTTTTTTA FFFFFFFFFFFFFFFFFFFFF:FFFFFFFFFFFFFFFFF

The expected output here would be:

a01219f8b
a023129ab
000axa2381a

How can I do this in bash or Python?
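A minimal sketch of one Python approach, assuming the goal is to keep only lines that occur exactly once (the sample data below is a shortened stand-in for the actual file contents):

```python
from collections import Counter

def unique_lines(lines):
    # Count how many times each line occurs, then keep only the
    # lines seen exactly once, preserving their original order.
    counts = Counter(lines)
    return [line for line in lines if counts[line] == 1]

lines = [
    "AB02819380213.",
    "a01219f8b",
    "NJSAJDH*)8888-",
    "a023129ab",
    "NJSAJDH*)8888-",
    "000axa2381a",
    "AB02819380213.",
]
print(unique_lines(lines))  # ['a01219f8b', 'a023129ab', '000axa2381a']
```

For a 2 GB file you would read the file twice (one pass to build the counts, one pass to emit lines with a count of 1) rather than load everything into a list. In bash, `sort file.txt | uniq -u` prints only the lines that are not repeated, though sorting first means the original line order is not preserved.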



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow