AWK very slow when splitting large file based on column value and using close command

I need to split a large log file into smaller ones based on the id found in the first column. This one-liner worked wonders and was very fast for months:

awk -v dir="$nome" -F\; '{print>dir"/"dir"_"$1".log"}' "${nome}.all";

Here $nome holds both the directory name and the file-name prefix.
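
For illustration (hypothetical data, not from my real logs), with $nome set to app, an input file app.all containing

42;first line for id 42
17;a line for id 17
42;second line for id 42

ends up split into app/app_42.log (two lines) and app/app_17.log (one line).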

It was very fast and worked well until the log file reached several million lines (a 2+ GB text file); then every run started to fail with

"Too many open files"

The fix for that is indeed very simple: add a close() call after each write:

awk -v dir="$nome" -F\; '{print>dir"/"dir"_"$1".log"; close(dir"/"dir"_"$1".log")}' "${nome}.all";

The problem is that it is now VERY slow: it takes forever to do something that used to finish in seconds, and I need to optimize this.

AWK is not mandatory; I can use an alternative, I just don't know what.
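
One workaround I am considering (a sketch, not tested on the real data, assuming the same semicolon-delimited layout as above): the per-line close() is slow because it forces an open/close pair of system calls for every single record. Sorting the input on the first field first makes all lines of an id contiguous, so only one output file is ever open at a time, and each file is opened and closed exactly once:

sort -t';' -k1,1 -s "${nome}.all" |
awk -v dir="$nome" -F\; '
    $1 != prev {                 # id changed: finish the previous file
        if (out != "") close(out)
        prev = $1
        out  = dir "/" dir "_" $1 ".log"
    }
    { print > out }'

The -s keeps the sort stable, so lines sharing an id stay in their original relative order; and since each file is opened only once per run, the truncating > redirection is safe here.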



Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow
