
With the reference from this, I tried to download the entire tutorial website from https://www.guru99.com/ by executing the following commands, without any success:

wget -r --mirror -p --convert-links -P . https://www.guru99.com

wget -r https://www.guru99.com

wget -r -l 0 https://www.guru99.com

The return from terminal console is as below

--2019-04-17 08:33:48--  https://www.guru99.com/
Resolving www.guru99.com (www.guru99.com)... 72.52.251.71
Connecting to www.guru99.com (www.guru99.com)|72.52.251.71|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘www.guru99.com/index.html’

www.guru99.com/index.html                [  <=>                                                                 ]  13.31K  43.4KB/s    in 0.3s    

2019-04-17 08:33:50 (43.4 KB/s) - ‘www.guru99.com/index.html’ saved [13633]

FINISHED --2019-04-17 08:33:50--
Total wall clock time: 1.7s
Downloaded: 1 files, 13K in 0.3s (43.4 KB/s)

The download contains only index.html. What is the problem, and how can I download this website for offline use? Thanks.

2 Answers


The program “httrack” will do exactly what you're looking for. For more info, see Ubuntu httrack.

Install it with sudo apt install httrack and start it by entering httrack in your terminal.
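If you'd rather skip the interactive wizard, here is a minimal non-interactive sketch (the output directory name and the filter are my own choices, not something httrack requires):

httrack "https://www.guru99.com/" -O ./guru99-mirror "+*.guru99.com/*" -v

Here -O sets the directory the mirror is saved to, the "+*.guru99.com/*" filter keeps the crawl on that domain, and -v prints verbose output.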

For that particular site, it will take a very long time and doesn’t show any indication of progress. Be patient ;)

bashBedlam
  • 1,022

You can try doing it this way:

wget \
     --recursive \
     --no-clobber \
     --page-requisites \
     --html-extension \
     --convert-links \
     --restrict-file-names=windows \
     --domains guru99.com \
     --no-parent \
     www.guru99.com/index.html
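In case it helps, a rough summary of those options from the wget manual: --recursive follows links and downloads recursively, --no-clobber skips files that already exist locally, --page-requisites also fetches the images, CSS and scripts each page needs, --html-extension saves pages with an .html suffix, --convert-links rewrites links so the copy works offline, --restrict-file-names=windows keeps file names safe on Windows, --domains guru99.com stops wget from following links to other domains, and --no-parent prevents it from ascending above the starting directory.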

Reference: https://www.linuxjournal.com/content/downloading-entire-web-site-wget

pomsky
  • 68,507