User:Jekyll Grim Payne/Downloading the ZDoom Wiki

Instructions for downloading the ZDoom Wiki so that it is viewable off-line:

  1. Download HTTrack from the HTTrack Homepage: http://www.httrack.com
  2. Configure the filter under the "Scan Rules" tab in the Set Options menu to include this (a command-line equivalent is sketched after this list):
  • -*Special:*
  • -*Talk:*
  • -*User:*
  • -*&action=*
  • -*&printable=*
  • -*&oldid=*
  3. In the "Spider" tab, make sure the drop-down field containing "robots.txt" is set to "no robots.txt rules", and make sure that the “Force old HTTP/1.0 requests (no 1.1)” box is checked.
  4. Point HTTrack to the following web address: http://www.zdoom.org/wiki
  5. Run HTTrack.

[1]
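
If you are using the command-line version of HTTrack instead of the GUI, the settings above translate roughly into the call below. This is only a sketch: the output directory ./zdoom-wiki is an arbitrary choice, -O (output path), -s0 ("no robots.txt rules"), and trailing +/- scan-rule patterns are standard HTTrack options, but verify the exact flags against httrack --help; the "Force old HTTP/1.0 requests" setting may still need to be enabled separately.

httrack "http://www.zdoom.org/wiki" -O ./zdoom-wiki -s0 \
    "-*Special:*" "-*Talk:*" "-*User:*" \
    "-*&action=*" "-*&printable=*" "-*&oldid=*"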

The program will rip all the pages from the site and store them in a directory on your hard drive. Download times will vary depending on your connection, anywhere from two minutes to an hour.

After that, you can browse the ZDoom Wiki off-line just like you would online.

If you have wget installed, you can also use the following command to download the wiki:

wget -rk http://www.zdoom.org/wiki/Main_Page
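
Note that the plain -rk command above will also follow action, printable, and old-revision links, which is exactly the kind of traffic the warning below is about. A possible refinement, assuming a GNU wget build new enough to support --reject-regex (1.14 or later), is to mirror the HTTrack scan rules:

# --no-parent keeps the crawl under /wiki/; --reject-regex skips special,
# talk, user, action, printable, and old-revision URLs (a sketch, not a tested recipe).
wget -rk --no-parent \
    --reject-regex '(Special:|Talk:|User:|&action=|&printable=|&oldid=)' \
    http://www.zdoom.org/wiki/Main_Page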

Warning

Failure to properly configure your web leeching program is liable to get you blocked from the site. If I see unusually high network traffic for hours and, when I check my logs, find that somebody is downloading every single page on the wiki, including discussion pages, edit pages, history pages, and every special page possible, I will block them at my own discretion. - Randy Heit (talk) 19:34, 18 January 2015 (CST)

References

  1. Configuration and settings obtained from Grubber.