How to Add Multiple Prefixes to Multiple Paths
I have two files. One has URLs like
https://example.com/
https://example2.com/
https://example3.com/
and the other has paths:
abc efg xyz
I am looking for output like this:
https://example.com/abc
https://example.com/efg
https://example.com/xyz
https://example2.com/abc
https://example2.com/efg
https://example2.com/xyz
https://example3.com/abc
https://example3.com/efg
https://example3.com/xyz
I have tried these commands, but they only produce output for the first URL:
awk 'NR==FNR{p=$0; next} {print p $0}' dodo.txt xoxo.txt
sed 'r dodo.txt' xoxo.txt | sed -E 'N;s/(.*)\/(.*)/\2\\1/'
How can I do it with bash, sed, or awk?
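For reference, the desired cross product can also be produced with a plain bash loop. This is a sketch, assuming one URL per line in dodo.txt and whitespace-separated paths in xoxo.txt, as in the question:

```shell
#!/bin/sh
# Cross-join every URL (one per line) with every whitespace-separated path.
while IFS= read -r url; do
  for path in $(cat xoxo.txt); do   # unquoted expansion splits on whitespace
    printf '%s%s\n' "$url" "$path"
  done
done < dodo.txt
```

The unquoted `$(cat xoxo.txt)` relies on word splitting, so it works whether the paths sit on one line or several, as long as none contain whitespace or glob characters.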
Solution 1:[1]
You can use awk in the following manner:
[user@host ~]$ awk 'NR==FNR { a[$0]; next } { for (i in a) print i $0 }' dodo.txt xoxo.txt
https://example.com/abc
https://example3.com/abc
https://example2.com/abc
https://example.com/efg
https://example3.com/efg
https://example2.com/efg
https://example.com/xyz
https://example3.com/xyz
https://example2.com/xyz
This gives the lines in a slightly different order, but all the required lines are there.
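The order varies because `for (i in a)` iterates over an awk array in an unspecified order. If the original URL order matters, a numerically indexed array preserves it (a sketch, using the same file names):

```shell
# Store the URL prefixes in insertion order, then print them in that
# order for every path line.
awk 'NR==FNR { a[++n] = $0; next } { for (i = 1; i <= n; i++) print a[i] $0 }' dodo.txt xoxo.txt
```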
I personally prefer using join for this task because the syntax is simpler:
[user@host ~]$ join -j 2 dodo.txt xoxo.txt | sed -e "s/\s//g"
https://example.com/abc
https://example.com/efg
https://example.com/xyz
https://example2.com/abc
https://example2.com/efg
https://example2.com/xyz
https://example3.com/abc
https://example3.com/efg
https://example3.com/xyz
The output is piped into sed to remove the whitespace that join inserts between fields.
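Note that `\s` is a GNU sed extension; the POSIX character class `[[:space:]]` does the same job portably. A sketch, again assuming one path per line, since the join trick relies on the (empty) second field matching on every line of both files:

```shell
# Join on the nonexistent field 2 (empty key on every line), producing
# the cross product, then strip the separating whitespace.
join -j 2 dodo.txt xoxo.txt | sed 's/[[:space:]]//g'
```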
Solution 2:[2]
If the paths file only has a single line, you might use awk and first split that line on the field separator:
awk '
NR==FNR {
    nr = split($0, a, FS)
    next
}
{
    for (i = 1; i <= nr; i++) {
        print $0 a[i]
    }
}
' paths.txt urls.txt
Output
https://example.com/abc
https://example.com/efg
https://example.com/xyz
https://example2.com/abc
https://example2.com/efg
https://example2.com/xyz
https://example3.com/abc
https://example3.com/efg
https://example3.com/xyz
If the paths file can have multiple lines, loop over all the fields of every path line, collect the values in an array, and then loop over that array for every URL:
awk '
NR==FNR {
    for (i = 1; i <= NF; i++) {
        arr[++j] = $i
    }
    next
}
{
    for (x = 1; x <= j; x++) {
        print $0 arr[x]
    }
}
' paths.txt urls.txt
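A quick sanity check for the multi-line case (a sketch; the file names follow the answer, and the paths file deliberately mixes both layouts):

```shell
printf 'abc efg\nxyz\n' > paths.txt
printf 'https://example.com/\n' > urls.txt
awk '
NR==FNR {
    for (i = 1; i <= NF; i++) arr[++j] = $i   # collect every field of every path line
    next
}
{
    for (x = 1; x <= j; x++) print $0 arr[x]  # append each collected path to the URL
}
' paths.txt urls.txt
# prints:
# https://example.com/abc
# https://example.com/efg
# https://example.com/xyz
```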
Solution 3:[3]
This might work for you (GNU sed):
sed 'y/ /\n/;s#.*#g;s/$/&/p#mg' file2 | sed -ne 'h' -f - file1
Convert file2 into a sed script and invoke it against file1.
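To see how this works, run the first sed on its own: each whitespace-separated path becomes one generated command, where `g` copies the held URL back into the pattern space before the path is appended and printed. A sketch, assuming GNU sed (the `m` substitution flag and `-f -` are GNU extensions):

```shell
printf 'abc efg xyz\n' > file2
# Turn spaces into newlines, then turn each resulting line into a
# sed command of the form:  g;s/$/<path>/p
sed 'y/ /\n/;s#.*#g;s/$/&/p#mg' file2
# generated script:
# g;s/$/abc/p
# g;s/$/efg/p
# g;s/$/xyz/p
```

The second sed (`-ne 'h' -f -`) then holds each URL from file1 and runs that generated script against it, printing one URL+path combination per command.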
Or perhaps use gnu parallel:
parallel -a file1 echo '{1}{2}' ::: $(cat file2)
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | |
| Solution 2 | The fourth bird |
| Solution 3 | potong |
