Can I use wget to check, but not download?
Can I use wget to check for a 404 and not actually download the resource? If so, how? Thanks
Solution 1:[1]
There is a command-line parameter, --spider, exactly for this. In this mode, wget does not download the file; its return value is zero if the resource was found and non-zero if it was not. Try this (in your favorite shell):
wget -q --spider address
echo $?
Or, if you want full output, leave the -q off, so just wget --spider address. The -nv flag shows some output, but not as much as the default.
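The exit-status check above can be wrapped in a small helper. A minimal sketch, assuming a POSIX shell; the function name and URL are placeholders, and note that a non-zero status can also mean a DNS or connection failure, not just a 404:

```shell
#!/bin/sh
# url_exists is a hypothetical helper name, not part of wget itself.
# --spider: check the resource without downloading it; -q: suppress output.
# Returns wget's own exit status: 0 if found, non-zero otherwise.
url_exists() {
    wget -q --spider "$1"
}

# Usage (example.com is a placeholder URL):
if url_exists "https://example.com/"; then
    echo "resource exists"
else
    echo "resource missing or unreachable"
fi
```

The same pattern works inline: `wget -q --spider "$url" && echo found`.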
Solution 2:[2]
If you want to check quietly via $? without the hassle of grep'ing wget's output you can use:
wget -q "http://blah.meh.com/my/path" -O /dev/null
This works even on URLs with just a path, but it has the disadvantage that the resource is actually downloaded, so it is not recommended when checking big files for existence.
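As a sketch of that approach (the URL is the placeholder from the answer, and a 404 is only one of several reasons the exit status can be non-zero):

```shell
#!/bin/sh
# The downloaded body is discarded to /dev/null; the branches key off
# wget's exit status ($?), so nothing is kept on disk.
if wget -q "http://blah.meh.com/my/path" -O /dev/null; then
    echo "exists"
else
    echo "missing or unreachable"
fi
```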
Solution 3:[3]
You can use the --delete-after option to check for a file; wget downloads the resource and deletes it immediately afterwards, leaving the exit status for you to inspect:
wget --delete-after URL
Solution 4:[4]
Yes, it's easy:
wget --spider www.bluespark.co.nz
That will give you output like:
Resolving www.bluespark.co.nz... 210.48.79.121
Connecting to www.bluespark.co.nz|210.48.79.121|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
200 OK
Solution 5:[5]
Yes, to use wget to check the target URL/file without downloading it, just run the following (-S additionally prints the server's response headers, including the status code):
wget --spider -S www.example.com
Sources
This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.
Source: Stack Overflow
| Solution | Source |
|---|---|
| Solution 1 | Shadikka |
| Solution 2 | |
| Solution 3 | Adiii |
| Solution 4 | |
| Solution 5 | O.Caliari |
