diff --git a/README.md b/README.md
index e4e7ebb..c7457c1 100644
--- a/README.md
+++ b/README.md
@@ -9,7 +9,7 @@ Included here is partial content from other forks, namely those @ [ShiftaDeband]
 Download a website's latest snapshot:
 
 ```bash
-ruby wayback_machine_downloader https://example.com
+wayback_machine_downloader https://example.com
 ```
 
 Your files will save to `./websites/example.com/` with their original structure preserved.
@@ -27,6 +27,7 @@ To run most commands, just like in the original WMD, you can use:
 ```bash
 wayback_machine_downloader https://example.com
 ```
+Note that you can also download this repository manually and run commands from the checkout by prefixing them with `ruby`, e.g. `ruby wayback_machine_downloader https://example.com`.
 
 **Note**: this gem may conflict with hartator's wayback_machine_downloader gem, and so you may have to uninstall it for this WMD fork to work. A good way to know is if a command fails; it will list the gem version as 2.3.1 or earlier, while this WMD fork uses 2.3.2 or above.
 ### Step-by-step setup
@@ -63,15 +64,14 @@
 docker build -t wayback_machine_downloader .
 docker run -it --rm wayback_machine_downloader [options] URL
 ```
 
-or the example without cloning the repo - fetching smallrockets.com until the year 2013:
+As an example of how this works without cloning the repo, this command fetches smallrockets.com up to the year 2013:
 
 ```bash
 docker run -v .:/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
 ```
 
 ### 🐳 Using Docker Compose
-
-We can also use it with Docker Compose, which provides a lot of benefits for extending more functionalities (such as implementing storing previous downloads in a database):
+You can also use Docker Compose, which makes it easier to extend the downloader (for example, by storing previous downloads in a database):
 
 ```yaml
 # docker-compose.yml
 services:
@@ -120,6 +120,7 @@ STATE_DB_FILENAME = '.downloaded.txt' # Tracks completed downloads
 | `-t TS`, `--to TS` | Stop at timestamp |
 | `-e`, `--exact-url` | Download exact URL only |
 | `-r`, `--rewritten` | Download rewritten Wayback Archive files only |
+| `--rt NUM`, `--retry NUM` | Maximum number of retries for a failed download (default: 3) |
 
 **Example** - Download files to `downloaded-backup` folder
 ```bash
@@ -165,6 +166,8 @@ ruby wayback_machine_downloader https://example.com --rewritten
 ```
 Useful if you want to download the rewritten files from the Wayback Machine instead of the original ones.
 
+---
+
 ### Filtering Content
 | Option | Description |
 |--------|-------------|
@@ -199,6 +202,8 @@ Or if you want to download everything except images:
 ruby wayback_machine_downloader https://example.com --exclude "/\.(gif|jpg|jpeg)$/i"
 ```
 
+---
+
 ### Performance
 | Option | Description |
 |--------|-------------|
@@ -217,6 +222,8 @@ ruby wayback_machine_downloader https://example.com --snapshot-pages 300
 ```
 Will specify the maximum number of snapshot pages to consider. Count an average of 150,000 snapshots per page. 100 is the default maximum number of snapshot pages and should be sufficient for most websites. Use a bigger number if you want to download a very large website.
 
+---
+
 ### Diagnostics
 | Option | Description |
 |--------|-------------|
@@ -235,6 +242,8 @@ ruby wayback_machine_downloader https://example.com --list
 ```
 It will just display the files to be downloaded with their snapshot timestamps and urls. The output format is JSON. It won't download anything.
 It's useful for debugging or to connect to another application.
 
+---
+
 ### Job management
 
 The downloader automatically saves its progress (`.cdx.json` for snapshot list, `.downloaded.txt` for completed files) in the output directory. If you run the same command again pointing to the same output directory, it will resume where it left off, skipping already downloaded files.
@@ -258,6 +267,8 @@ ruby wayback_machine_downloader https://example.com --keep
 ```
 This can be useful for debugging or if you plan to extend the download later with different parameters (e.g., adding `--to` timestamp) while leveraging the existing snapshot list.
 
+---
+
 ## 🤝 Contributing
 1. Fork the repository
 2. Create a feature branch
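The README hunks above lean on the `.downloaded.txt` state file (`STATE_DB_FILENAME`) for resuming interrupted jobs. As a rough sketch of how that resume logic can work: this is illustrative only, with hypothetical helper names, and it assumes the state file records one completed `file_id` per line, which the diff itself does not show.

```ruby
require 'set'

STATE_DB_FILENAME = '.downloaded.txt' # matches the README's job-management notes

# Load the set of already-downloaded file ids recorded by a previous run.
def load_downloaded_ids(backup_path)
  state_file = File.join(backup_path, STATE_DB_FILENAME)
  return Set.new unless File.exist?(state_file)
  Set.new(File.readlines(state_file, chomp: true))
end

# Append a completed file id so a rerun can skip it.
def record_downloaded(backup_path, file_id)
  File.open(File.join(backup_path, STATE_DB_FILENAME), 'a') { |f| f.puts(file_id) }
end

downloaded = load_downloaded_ids('websites/example.com')
downloaded.include?('index.html') # => true once index.html has been recorded
```

Appending rather than rewriting the file keeps the bookkeeping cheap and crash-safe: a partially written run still leaves every recorded id intact for the next resume.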
diff --git a/bin/wayback_machine_downloader b/bin/wayback_machine_downloader
index f990789..1265f2e 100755
--- a/bin/wayback_machine_downloader
+++ b/bin/wayback_machine_downloader
@@ -74,6 +74,10 @@ option_parser = OptionParser.new do |opts|
     options[:keep] = true
   end
 
+  opts.on("--rt", "--retry N", Integer, "Maximum number of retries for failed downloads (default: 3)") do |t|
+    options[:max_retries] = t
+  end
+
   opts.on("--recursive-subdomains", "Recursively download content from subdomains") do |t|
     options[:recursive_subdomains] = true
   end
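The new `--retry` switch only stores `options[:max_retries]`; the code that consumes it is not part of this diff. Below is a minimal sketch of what a consumer could look like, reusing the `MAX_RETRIES` and `RETRY_DELAY` constants defined in the library diff that follows. The `fetch_snapshot` name and the linear backoff are assumptions, not the gem's actual implementation.

```ruby
MAX_RETRIES = 3 # mirrors lib/wayback_machine_downloader.rb
RETRY_DELAY = 2

# Retry a block up to max_retries times, sleeping a little longer each attempt.
def with_retries(max_retries = MAX_RETRIES)
  attempts = 0
  begin
    yield
  rescue StandardError
    attempts += 1
    raise if attempts > max_retries
    sleep(RETRY_DELAY * attempts) # assumed linear backoff, not from the diff
    retry
  end
end

# Hypothetical usage inside the downloader:
# with_retries(options[:max_retries] || MAX_RETRIES) { fetch_snapshot(file_url) }
```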
diff --git a/lib/wayback_machine_downloader.rb b/lib/wayback_machine_downloader.rb
index 488165a..bdcf3b0 100644
--- a/lib/wayback_machine_downloader.rb
+++ b/lib/wayback_machine_downloader.rb
@@ -11,7 +11,6 @@ require 'concurrent-ruby'
 require 'logger'
 require 'zlib'
 require 'stringio'
-require 'digest'
 require_relative 'wayback_machine_downloader/tidy_bytes'
 require_relative 'wayback_machine_downloader/to_regex'
 require_relative 'wayback_machine_downloader/archive_api'
@@ -117,7 +116,7 @@ class WaybackMachineDownloader
   include ArchiveAPI
   include SubdomainProcessor
 
-  VERSION = "2.4.3"
+  VERSION = "2.4.0"
   DEFAULT_TIMEOUT = 30
   MAX_RETRIES = 3
   RETRY_DELAY = 2
@@ -172,19 +171,12 @@ class WaybackMachineDownloader
   def backup_name
     url_to_process = @base_url.end_with?('/*') ? @base_url.chomp('/*') : @base_url
-    raw = if url_to_process.include?('//')
+
+    if url_to_process.include? '//'
       url_to_process.split('/')[2]
     else
       url_to_process
     end
-
-    # sanitize for Windows (and safe cross-platform) to avoid ENOTDIR on mkdir (colon in host:port)
-    if Gem.win_platform?
-      raw = raw.gsub(/[:*?"<>|]/, '_')
-      raw = raw.gsub(/[ .]+\z/, '')
-    end
-    raw = 'site' if raw.nil? || raw.empty?
-    raw
   end
 
   def backup_path
@@ -348,15 +340,15 @@ class WaybackMachineDownloader
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
       next if file_timestamp.to_i > target_timestamp
-
-      raw_tail = file_url.split('/')[3..-1]&.join('/')
-      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      file_id = file_url.split('/')[3..-1].join('/')
+      file_id = CGI::unescape file_id
+      file_id = file_id.tidy_bytes unless file_id == ""
       next if file_id.nil?
       next if match_exclude_filter(file_url)
       next unless match_only_filter(file_url)
-
+      # Select the most recent version <= target_timestamp
       if !file_versions[file_id] || file_versions[file_id][:timestamp].to_i < file_timestamp.to_i
-        file_versions[file_id] = { file_url: file_url, timestamp: file_timestamp, file_id: file_id }
+        file_versions[file_id] = {file_url: file_url, timestamp: file_timestamp, file_id: file_id}
       end
     end
     file_versions.values
@@ -376,27 +368,22 @@
     file_list_curated = Hash.new
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
-
-      raw_tail = file_url.split('/')[3..-1]&.join('/')
-      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      file_id = file_url.split('/')[3..-1].join('/')
+      file_id = CGI::unescape file_id
+      file_id = file_id.tidy_bytes unless file_id == ""
       if file_id.nil?
         puts "Malformed file url, ignoring: #{file_url}"
-        next
-      end
-
-      if file_id.include?('<') || file_id.include?('>')
-        puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
       else
         if match_exclude_filter(file_url)
           puts "File url matches exclude filter, ignoring: #{file_url}"
-        elsif !match_only_filter(file_url)
+        elsif not match_only_filter(file_url)
           puts "File url doesn't match only filter, ignoring: #{file_url}"
         elsif file_list_curated[file_id]
           unless file_list_curated[file_id][:timestamp] > file_timestamp
-            file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
+            file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
           end
         else
-          file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
+          file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
         end
       end
     end
@@ -407,32 +394,21 @@
     file_list_curated = Hash.new
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
-
-      raw_tail = file_url.split('/')[3..-1]&.join('/')
-      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      file_id = file_url.split('/')[3..-1].join('/')
+      file_id_and_timestamp = [file_timestamp, file_id].join('/')
+      file_id_and_timestamp = CGI::unescape file_id_and_timestamp
+      file_id_and_timestamp = file_id_and_timestamp.tidy_bytes unless file_id_and_timestamp == ""
       if file_id.nil?
         puts "Malformed file url, ignoring: #{file_url}"
-        next
-      end
-
-      file_id_and_timestamp_raw = [file_timestamp, file_id].join('/')
-      file_id_and_timestamp = sanitize_and_prepare_id(file_id_and_timestamp_raw, file_url)
-      if file_id_and_timestamp.nil?
- puts "Malformed file id/timestamp combo, ignoring: #{file_url}" - next - end - - if file_id_and_timestamp.include?('<') || file_id_and_timestamp.include?('>') - puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}" else if match_exclude_filter(file_url) puts "File url matches exclude filter, ignoring: #{file_url}" - elsif !match_only_filter(file_url) + elsif not match_only_filter(file_url) puts "File url doesn't match only filter, ignoring: #{file_url}" elsif file_list_curated[file_id_and_timestamp] - # duplicate combo, ignore silently (verbose flag not shown here) + puts "Duplicate file and timestamp combo, ignoring: #{file_id}" if @verbose else - file_list_curated[file_id_and_timestamp] = { file_url: file_url, timestamp: file_timestamp } + file_list_curated[file_id_and_timestamp] = {file_url: file_url, timestamp: file_timestamp} end end end @@ -773,86 +749,6 @@ class WaybackMachineDownloader end logger end - - # safely sanitize a file id (or id+timestamp) - def sanitize_and_prepare_id(raw, file_url) - return nil if raw.nil? - return "" if raw.empty? - original = raw.dup - begin - # work on a binary copy to avoid premature encoding errors - raw = raw.dup.force_encoding(Encoding::BINARY) - - # percent-decode (repeat until stable in case of double-encoding) - loop do - decoded = raw.gsub(/%([0-9A-Fa-f]{2})/) { [$1].pack('H2') } - break if decoded == raw - raw = decoded - end - - # try tidy_bytes - begin - raw = raw.tidy_bytes - rescue StandardError - # fallback: scrub to UTF-8 - raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '') - end - - # ensure UTF-8 and scrub again - unless raw.encoding == Encoding::UTF_8 && raw.valid_encoding? - raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '') - end - - # strip HTML/comment artifacts & control chars - raw.gsub!(/