2.11 Recursive Retrieval Options
================================
‘-r’
‘--recursive’
Turn on recursive retrieving. ⇒Recursive Download, for more
details. The default maximum depth is 5.
‘-l DEPTH’
‘--level=DEPTH’
Set the maximum number of subdirectories that Wget will recurse
into to DEPTH. To prevent accidentally downloading very large
websites when using recursion, this is limited to a depth of 5 by
default, i.e., Wget will traverse at most 5 directories deep
starting from the provided URL. Set ‘-l 0’ or ‘-l inf’ for
infinite recursion depth.
wget -r -l 0 http://SITE/1.html
Ideally, one would expect this to download just ‘1.html’, but
unfortunately this is not the case, because ‘-l 0’ is equivalent to
‘-l inf’—that is, infinite recursion. To download a single HTML
page (or a handful of them), specify them all on the command line
and leave off ‘-r’ and ‘-l’, as shown below. To download the
essential items needed to view a single HTML page, see
‘--page-requisites’ below.
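For instance, reusing the placeholder URLs from above, the
following fetches exactly two pages and recurses into nothing:
wget http://SITE/1.html http://SITE/2.html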
‘--delete-after’
This option tells Wget to delete every single file it downloads,
_after_ having done so. It is useful for pre-fetching popular
pages through a proxy, e.g.:
wget -r -nd --delete-after http://whatever.com/~popular/page/
The ‘-r’ option is to retrieve recursively, and ‘-nd’ to not create
directories.
Note that ‘--delete-after’ deletes files on the local machine. It
does not issue the ‘DELE’ command to remote FTP sites, for
instance. Also note that when ‘--delete-after’ is specified,
‘--convert-links’ is ignored, so ‘.orig’ files are simply not
created in the first place.
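Wget honors the standard ‘http_proxy’ environment variable, so a
pre-fetching run through a caching proxy might look like the
sketch below (assuming a POSIX shell; ‘proxy.example:3128’ is a
hypothetical host and port):
http_proxy=http://proxy.example:3128/ wget -r -nd --delete-after http://whatever.com/~popular/page/
Every page travels through the proxy, warming its cache, while the
local copies are deleted as soon as they arrive.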
‘-k’
‘--convert-links’
After the download is complete, convert the links in the document
to make them suitable for local viewing. This affects not only the
visible hyperlinks, but any part of the document that links to
external content, such as embedded images, links to style sheets,
hyperlinks to non-HTML content, etc.
Each link will be changed in one of two ways:
• The links to files that have been downloaded by Wget will be
changed to refer to the file they point to as a relative link.
Example: if the downloaded file ‘/foo/doc.html’ links to
‘/bar/img.gif’, also downloaded, then the link in ‘doc.html’
will be modified to point to ‘../bar/img.gif’. This kind of
transformation works reliably for arbitrary combinations of
directories.
• The links to files that have not been downloaded by Wget will
be changed to include host name and absolute path of the
location they point to.
Example: if the downloaded file ‘/foo/doc.html’ links to
‘/bar/img.gif’ (or to ‘../bar/img.gif’), then the link in
‘doc.html’ will be modified to point to
‘http://HOSTNAME/bar/img.gif’.
Because of this, local browsing works reliably: if a linked file
was downloaded, the link will refer to its local name; if it was
not downloaded, the link will refer to its full Internet address
rather than presenting a broken link. The fact that the former
links are converted to relative links ensures that you can move the
downloaded hierarchy to another directory.
Note that only at the end of the download can Wget know which links
have been downloaded. Because of that, the work done by ‘-k’ will
be performed at the end of all the downloads.
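A minimal sketch, again using the manual’s placeholder site, that
retrieves one level of links and then converts them for offline
viewing:
wget -r -l 1 -k http://SITE/foo/doc.html
Once all downloads have finished, links in the saved ‘doc.html’
that point to files Wget also retrieved become relative paths,
while all others become absolute ‘http://’ URLs, exactly as
described above.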
‘--convert-file-only’
This option converts only the filename part of the URLs, leaving
the rest of the URLs untouched. This filename part is sometimes
referred to as the "basename", although we avoid that term here in
order not to cause confusion.
It works particularly well in conjunction with
‘--adjust-extension’, although this coupling is not enforced. It
proves useful to populate Internet caches with files downloaded
from different hosts.
Example: if some link points to ‘//foo.com/bar.cgi?xyz’ with
‘--adjust-extension’ asserted and its local destination is intended
to be ‘./foo.com/bar.cgi?xyz.css’, then the link would be converted
to ‘//foo.com/bar.cgi?xyz.css’. Note that only the filename part
has been modified. The rest of the URL has been left untouched,
including the net path (‘//’) which would otherwise be processed by
Wget and converted to the effective scheme (i.e., ‘http://’).
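One plausible combination, matching the cache-population use case
above (the URL is a placeholder; ‘-H’ enables host spanning and
‘-E’ is ‘--adjust-extension’):
wget -r -H -E --convert-file-only http://SITE/page.html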
‘-K’
‘--backup-converted’
When converting a file, back up the original version with a ‘.orig’
suffix. Affects the behavior of ‘-N’ (⇒HTTP Time-Stamping
Internals).
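For instance, a sketch pairing it with link conversion:
wget -r -l 1 -k -K http://SITE/doc.html
Each converted file, such as ‘doc.html’, is then accompanied by an
untouched ‘doc.html.orig’, which is what ‘-N’ compares time-stamps
against on a later run (⇒HTTP Time-Stamping Internals).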
‘-m’
‘--mirror’
Turn on options suitable for mirroring. This option turns on
recursion and time-stamping, sets infinite recursion depth and
keeps FTP directory listings. It is currently equivalent to ‘-r -N
-l inf --no-remove-listing’.
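In other words, the two invocations below behave identically
(placeholder URL):
wget -m http://SITE/
wget -r -N -l inf --no-remove-listing http://SITE/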
‘-p’
‘--page-requisites’
This option causes Wget to download all the files that are
necessary to properly display a given HTML page. This includes
such things as inlined images, sounds, and referenced stylesheets.
Ordinarily, when downloading a single HTML page, any requisite
documents that may be needed to display it properly are not
downloaded. Using ‘-r’ together with ‘-l’ can help, but since Wget
does not ordinarily distinguish between external and inlined
documents, one is generally left with “leaf documents” that are
missing their requisites.
For instance, say document ‘1.html’ contains an ‘<IMG>’ tag
referencing ‘1.gif’ and an ‘<A>’ tag pointing to external document
‘2.html’. Say that ‘2.html’ is similar but that its image is
‘2.gif’ and it links to ‘3.html’. Say this continues up to some
arbitrarily high number.
If one executes the command:
wget -r -l 2 http://SITE/1.html
then ‘1.html’, ‘1.gif’, ‘2.html’, ‘2.gif’, and ‘3.html’ will be
downloaded. As you can see, ‘3.html’ is without its requisite
‘3.gif’ because Wget is simply counting the number of hops (up to
2) away from ‘1.html’ in order to determine where to stop the
recursion. However, with this command:
wget -r -l 2 -p http://SITE/1.html
all the above files _and_ ‘3.html’’s requisite ‘3.gif’ will be
downloaded. Similarly,
wget -r -l 1 -p http://SITE/1.html
will cause ‘1.html’, ‘1.gif’, ‘2.html’, and ‘2.gif’ to be
downloaded. One might think that:
wget -r -l 0 -p http://SITE/1.html
would download just ‘1.html’ and ‘1.gif’, but unfortunately this is
not the case, because ‘-l 0’ is equivalent to ‘-l inf’—that is,
infinite recursion. To download a single HTML page (or a handful
of them, all specified on the command-line or in a ‘-i’ URL input
file) and its (or their) requisites, simply leave off ‘-r’ and
‘-l’:
wget -p http://SITE/1.html
Note that Wget will behave as if ‘-r’ had been specified, but only
that single page and its requisites will be downloaded. Links from
that page to external documents will not be followed. Actually, to
download a single page and all its requisites (even if they exist
on separate websites), and make sure the lot displays properly
locally, this author likes to use a few options in addition to
‘-p’:
wget -E -H -k -K -p http://SITE/DOCUMENT
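(For reference: ‘-E’ is ‘--adjust-extension’, ‘-H’ is
‘--span-hosts’, ‘-k’ is ‘--convert-links’, ‘-K’ is
‘--backup-converted’, and ‘-p’ is ‘--page-requisites’.)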
To finish off this topic, it’s worth knowing that Wget’s idea of an
external document link is any URL specified in an ‘<A>’ tag, an
‘<AREA>’ tag, or a ‘<LINK>’ tag other than ‘<LINK
REL="stylesheet">’.
‘--strict-comments’
Turn on strict parsing of HTML comments. The default is to
terminate comments at the first occurrence of ‘-->’.
According to specifications, HTML comments are expressed as SGML
“declarations”. A declaration is special markup that begins with
‘<!’ and ends with ‘>’, such as ‘<!DOCTYPE ...>’, that may contain
comments between a pair of ‘--’ delimiters. HTML comments are
“empty declarations”, SGML declarations without any non-comment
text. Therefore, ‘<!--foo-->’ is a valid comment, and so is
‘<!--one-- --two-->’, but ‘<!--1--2-->’ is not.
On the other hand, most HTML writers don’t perceive comments as
anything other than text delimited with ‘<!--’ and ‘-->’, which is
not quite the same. For example, something like ‘<!------------>’
works as a valid comment as long as the number of dashes is a
multiple of four (!). If not, the comment technically lasts until
the next ‘--’, which may be at the other end of the document.
Because of this, many popular browsers completely ignore the
specification and implement what users have come to expect:
comments delimited with ‘<!--’ and ‘-->’.
Until version 1.9, Wget interpreted comments strictly, which
resulted in missing links in many web pages that displayed fine in
browsers, but had the misfortune of containing non-compliant
comments. Beginning with version 1.9, Wget has joined the ranks of
clients that implement “naive” comment parsing, terminating each
comment at the first occurrence of ‘-->’.
If, for whatever reason, you want strict comment parsing, use this
option to turn it on.
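The option combines with any retrieval mode, for instance
(placeholder URL):
wget --strict-comments -r http://SITE/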