golang build constraints exclude all Go files in
I build a simple "Hello, World" wasm app like:
GOARCH=wasm GOOS=js go build -o lib.wasm main.go
All is well. But then I want to test my code in my local (Linux) environment. So I change it to a pretty standard:
```go
package main

import "fmt"

func main() {
	fmt.Println("test")
}
```
And do a simple:
go run .
but now I get:
package xxx: build constraints exclude all Go files in xxx
I understand that if I build for wasm, I can't expect to run the result locally. But I'm not trying to do a wasm build anymore. There are no build constraints in my .go source file, and my go.mod file just has the module name and Go version.
If I copy these files to a new directory and only do the `go run .`, without the wasm build, all is good.
And if I do:
go build filename.go
it works properly.
It's like something "remembers" that this directory was supposed to be built with wasm, but I can't find what or where.
Is there a cache or something elsewhere that's remembering some build settings that I need to clear? I've tried `go clean` and `go clean -cache`, with no luck!
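One thing worth checking (an assumption on my part, not something the question confirms): `GOARCH`/`GOOS` outlive a single build if they were `export`ed in the shell rather than set for just one command, or if they were persisted with `go env -w`. A minimal sketch of the difference between one-shot and exported assignments:

```shell
# One-shot assignment: the variable applies only to that single command
# (assumes GOARCH is not already set in this shell)
GOARCH=wasm true
echo "GOARCH is now: ${GOARCH:-unset}"

# Exported assignment: sticks for the rest of the shell session,
# so a later plain `go run .` would still target wasm
export GOARCH=wasm
echo "GOARCH is now: ${GOARCH:-unset}"
unset GOARCH GOOS    # clear the session exports

# Values persisted with `go env -w` survive even new shells;
# list the effective values with `go env GOOS GOARCH`,
# and clear persisted ones with:  go env -u GOOS GOARCH
```

If `go env GOOS GOARCH` still prints `js`/`wasm` in a fresh shell, the setting came from `go env -w` and `go clean` will not touch it.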
Solution 1:[1]
> I understand that I can just split up the big list file with something like `split` and then have the script background each task with `&`, but I cannot wrap my head around that part.
Put the stuff to execute in a function. Then use GNU parallel or xargs.
```shell
doit() {
    # build the command in an array so each part can be commented
    # check your scripts with shellcheck
    cmd=(
        curl --silent --request GET
        --url "https://api.example.com/$1"
        --header "authorization: Bearer $auth_key"
        --data '{}'
    )
    # execute it and store the output in a variable
    tmp=$("${cmd[@]}")
    # output in a single call, so that hopefully buffering does not bite us that much
    # if it does, use a separate file for each call, or use GNU parallel
    printf "%s\n" "$tmp"
}
export auth_key    # export needed variables
export -f doit     # export needed functions
# I replaced /home/debian by ~
# run xargs, which runs bash, which runs the command with the passed argument
# see man xargs, man bash
xargs -d '\n' -P 1000 -n 1 bash -c 'doit "$@"' _ < ~/names.list > ~/results.list
```
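The `bash -c 'doit "$@"' _` part can look cryptic: with `bash -c`, the first argument after the script text becomes `$0` (here the placeholder `_`) and the remaining arguments become `$1`, `$2`, and so on. A standalone sketch (the word `hello` stands in for one line from the list):

```shell
# $0 gets the placeholder "_", $1 gets the argument xargs would pass in
bash -c 'echo "got: $1"' _ hello    # prints: got: hello
```

So each line of `names.list` arrives in the exported `doit` function as `$1`, exactly as the function expects.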
> …of bursting the ~10,000 calls in batches of 1 or 2 thousand.
You would have to write a manual loop for that:
```shell
trap 'kill $(jobs -p)' EXIT    # kill all background jobs on ctrl+c
n=0
max=1000
# see https://mywiki.wooledge.org/BashFAQ/001
while IFS= read -r line; do
    if ((++n > max)); then
        wait    # wait for all the currently running processes
        n=0
    fi
    doit "$line" &
done < ~/names.list > ~/results.list
wait
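The same batch-then-`wait` pattern can be tried safely with a stand-in for `doit`; the `echo` body, the batch size of 2, and the three-name input list below are made up purely for illustration:

```shell
doit() { echo "processed: $1"; }    # harmless stand-in for the curl call
n=0
max=2
while IFS= read -r line; do
    if ((++n > max)); then
        wait    # batch is full: let it finish before starting the next one
        n=0
    fi
    doit "$line" &
done <<'EOF'
alice
bob
carol
EOF
wait    # wait for the final, possibly partial batch
```

Note that within a batch the output order is not guaranteed, since the calls run concurrently.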
Problems with your script:

- Use shellcheck to check your scripts.
- Do not use `for i in $(cat ..)`; also do not use it in the form of `tmp=$(cat ..); for i in $tmp`. See https://mywiki.wooledge.org/BashFAQ/001
- `Users` is not an array.
- `$n` and `$auth_key` are not quoted. In particular, an unquoted `*` in `auth_key=***` is replaced by all the files in your current directory.
- Your second script does not `wait` for the background jobs.
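The quoting point is easy to demonstrate; in the sketch below the scratch directory and the file names are made up for illustration:

```shell
cd "$(mktemp -d)"        # empty scratch directory
touch file1 file2
var='*'
echo $var      # unquoted: the * is glob-expanded into file1 file2
echo "$var"    # quoted: the * stays a literal asterisk
```

Quoting every expansion (`"$n"`, `"$auth_key"`, `"$line"`) avoids both glob expansion and word splitting, which is exactly what shellcheck flags.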
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | KamilCuk |
