Added retry command

fixes [Feature request] Retry flag
Fixes StrawberryMaster/wayback-machine-downloader#31
Felipe 2025-08-20 01:21:29 +00:00 committed by GitHub
parent fa306ac92b
commit fc8d8a9441
3 changed files with 40 additions and 129 deletions

View File

@@ -9,7 +9,7 @@ Included here is partial content from other forks, namely those @ [ShiftaDeband]
 Download a website's latest snapshot:
 ```bash
-ruby wayback_machine_downloader https://example.com
+wayback_machine_downloader https://example.com
 ```
 Your files will save to `./websites/example.com/` with their original structure preserved.
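For example, once the command above finishes you can inspect the preserved layout (the file names below are hypothetical):

```bash
# list everything saved for the site
find websites/example.com -type f
# e.g. websites/example.com/index.html
#      websites/example.com/assets/style.css
```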
@@ -27,6 +27,7 @@ To run most commands, just like in the original WMD, you can use:
 ```bash
 wayback_machine_downloader https://example.com
 ```
+Do note that you can also manually download this repository and run commands here by appending `ruby` before a command, e.g. `ruby wayback_machine_downloader https://example.com`.
 **Note**: this gem may conflict with hartator's wayback_machine_downloader gem, and so you may have to uninstall it for this WMD fork to work. A good way to know is if a command fails; it will list the gem version as 2.3.1 or earlier, while this WMD fork uses 2.3.2 or above.
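A quick way to check for the conflict described in the note above is to ask RubyGems what it has installed (this assumes the older gem was installed under its usual name):

```bash
# 2.3.1 or earlier here means hartator's gem is still installed
gem list wayback_machine_downloader
# remove the older gem so this fork takes precedence
gem uninstall wayback_machine_downloader
```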
 ### Step-by-step setup
@@ -63,15 +64,14 @@ docker build -t wayback_machine_downloader .
 docker run -it --rm wayback_machine_downloader [options] URL
 ```
-or the example without cloning the repo - fetching smallrockets.com until the year 2013:
+As an example of how this works without cloning this repo, this command fetches smallrockets.com until the year 2013:
 ```bash
 docker run -v .:/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
 ```
 ### 🐳 Using Docker Compose
-We can also use it with Docker Compose, which provides a lot of benefits for extending more functionalities (such as implementing storing previous downloads in a database):
+You can also use Docker Compose, which provides a lot of benefits for extending more functionalities (such as implementing storing previous downloads in a database):
 ```yaml
 # docker-compose.yml
 services:
@@ -120,6 +120,7 @@ STATE_DB_FILENAME = '.downloaded.txt' # Tracks completed downloads
 | `-t TS`, `--to TS` | Stop at timestamp |
 | `-e`, `--exact-url` | Download exact URL only |
 | `-r`, `--rewritten` | Download rewritten Wayback Archive files only |
+| `-rt`, `--retry NUM` | Number of tries in case a download fails (default: 1) |
 **Example** - Download files to `downloaded-backup` folder
 ```bash
@@ -165,6 +166,8 @@ ruby wayback_machine_downloader https://example.com --rewritten
 ```
 Useful if you want to download the rewritten files from the Wayback Machine instead of the original ones.
+---
 ### Filtering Content
 | Option | Description |
 |--------|-------------|
@@ -199,6 +202,8 @@ Or if you want to download everything except images:
 ruby wayback_machine_downloader https://example.com --exclude "/\.(gif|jpg|jpeg)$/i"
 ```
+---
 ### Performance
 | Option | Description |
 |--------|-------------|
@@ -217,6 +222,8 @@ ruby wayback_machine_downloader https://example.com --snapshot-pages 300
 ```
 Will specify the maximum number of snapshot pages to consider. Count an average of 150,000 snapshots per page. 100 is the default maximum number of snapshot pages and should be sufficient for most websites. Use a bigger number if you want to download a very large website.
+---
 ### Diagnostics
 | Option | Description |
 |--------|-------------|
@@ -235,6 +242,8 @@ ruby wayback_machine_downloader https://example.com --list
 ```
 It will just display the files to be downloaded with their snapshot timestamps and urls. The output format is JSON. It won't download anything. It's useful for debugging or to connect to another application.
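Because the listing is emitted as JSON, it can be captured and handed to another tool; for example:

```bash
# save the snapshot listing without downloading anything
ruby wayback_machine_downloader https://example.com --list > snapshot_list.json
```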
+---
 ### Job management
 The downloader automatically saves its progress (`.cdx.json` for snapshot list, `.downloaded.txt` for completed files) in the output directory. If you run the same command again pointing to the same output directory, it will resume where it left off, skipping already downloaded files.
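In practice, resuming just means re-running the same command against the same output directory:

```bash
# first run, interrupted partway through
ruby wayback_machine_downloader https://example.com -d downloaded-backup
# second run resumes, skipping files already recorded in .downloaded.txt
ruby wayback_machine_downloader https://example.com -d downloaded-backup
```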
@@ -258,6 +267,8 @@ ruby wayback_machine_downloader https://example.com --keep
 ```
 This can be useful for debugging or if you plan to extend the download later with different parameters (e.g., adding `--to` timestamp) while leveraging the existing snapshot list.
+---
 ## 🤝 Contributing
 1. Fork the repository
 2. Create a feature branch

View File

@@ -74,6 +74,10 @@ option_parser = OptionParser.new do |opts|
     options[:keep] = true
   end
+  opts.on("--rt", "--retry N", Integer, "Maximum number of retries for failed downloads (default: 3)") do |t|
+    options[:max_retries] = t
+  end
   opts.on("--recursive-subdomains", "Recursively download content from subdomains") do |t|
     options[:recursive_subdomains] = true
   end
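With this option in place, the retry count is passed like any other flag; for instance, allowing up to five attempts per file (the number here is arbitrary):

```bash
ruby wayback_machine_downloader https://example.com --retry 5
```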

View File

@@ -11,7 +11,6 @@ require 'concurrent-ruby'
 require 'logger'
 require 'zlib'
 require 'stringio'
-require 'digest'
 require_relative 'wayback_machine_downloader/tidy_bytes'
 require_relative 'wayback_machine_downloader/to_regex'
 require_relative 'wayback_machine_downloader/archive_api'
@@ -117,7 +116,7 @@ class WaybackMachineDownloader
   include ArchiveAPI
   include SubdomainProcessor
-  VERSION = "2.4.3"
+  VERSION = "2.4.0"
   DEFAULT_TIMEOUT = 30
   MAX_RETRIES = 3
   RETRY_DELAY = 2
@@ -172,19 +171,12 @@ class WaybackMachineDownloader
   def backup_name
     url_to_process = @base_url.end_with?('/*') ? @base_url.chomp('/*') : @base_url
-    raw = if url_to_process.include?('//')
+    if url_to_process.include? '//'
       url_to_process.split('/')[2]
     else
       url_to_process
     end
-    # sanitize for Windows (and safe cross-platform) to avoid ENOTDIR on mkdir (colon in host:port)
-    if Gem.win_platform?
-      raw = raw.gsub(/[:*?"<>|]/, '_')
-      raw = raw.gsub(/[ .]+\z/, '')
-    end
-    raw = 'site' if raw.nil? || raw.empty?
-    raw
   end
   def backup_path
@@ -348,15 +340,15 @@ class WaybackMachineDownloader
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
       next if file_timestamp.to_i > target_timestamp
-      raw_tail = file_url.split('/')[3..-1]&.join('/')
-      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      file_id = file_url.split('/')[3..-1].join('/')
+      file_id = CGI::unescape file_id
+      file_id = file_id.tidy_bytes unless file_id == ""
       next if file_id.nil?
       next if match_exclude_filter(file_url)
       next unless match_only_filter(file_url)
-      # Select the most recent version <= target_timestamp
       if !file_versions[file_id] || file_versions[file_id][:timestamp].to_i < file_timestamp.to_i
-        file_versions[file_id] = { file_url: file_url, timestamp: file_timestamp, file_id: file_id }
+        file_versions[file_id] = {file_url: file_url, timestamp: file_timestamp, file_id: file_id}
       end
     end
     file_versions.values
@@ -376,27 +368,22 @@ class WaybackMachineDownloader
     file_list_curated = Hash.new
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
-      raw_tail = file_url.split('/')[3..-1]&.join('/')
-      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      file_id = file_url.split('/')[3..-1].join('/')
+      file_id = CGI::unescape file_id
+      file_id = file_id.tidy_bytes unless file_id == ""
       if file_id.nil?
         puts "Malformed file url, ignoring: #{file_url}"
-        next
-      end
-      if file_id.include?('<') || file_id.include?('>')
-        puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
       else
         if match_exclude_filter(file_url)
           puts "File url matches exclude filter, ignoring: #{file_url}"
-        elsif !match_only_filter(file_url)
+        elsif not match_only_filter(file_url)
           puts "File url doesn't match only filter, ignoring: #{file_url}"
         elsif file_list_curated[file_id]
           unless file_list_curated[file_id][:timestamp] > file_timestamp
-            file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
+            file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
           end
         else
-          file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
+          file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
         end
       end
     end
@@ -407,32 +394,21 @@ class WaybackMachineDownloader
     file_list_curated = Hash.new
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
-      raw_tail = file_url.split('/')[3..-1]&.join('/')
-      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      file_id = file_url.split('/')[3..-1].join('/')
+      file_id_and_timestamp = [file_timestamp, file_id].join('/')
+      file_id_and_timestamp = CGI::unescape file_id_and_timestamp
+      file_id_and_timestamp = file_id_and_timestamp.tidy_bytes unless file_id_and_timestamp == ""
       if file_id.nil?
         puts "Malformed file url, ignoring: #{file_url}"
-        next
-      end
-      file_id_and_timestamp_raw = [file_timestamp, file_id].join('/')
-      file_id_and_timestamp = sanitize_and_prepare_id(file_id_and_timestamp_raw, file_url)
-      if file_id_and_timestamp.nil?
-        puts "Malformed file id/timestamp combo, ignoring: #{file_url}"
-        next
-      end
-      if file_id_and_timestamp.include?('<') || file_id_and_timestamp.include?('>')
-        puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
       else
         if match_exclude_filter(file_url)
           puts "File url matches exclude filter, ignoring: #{file_url}"
-        elsif !match_only_filter(file_url)
+        elsif not match_only_filter(file_url)
           puts "File url doesn't match only filter, ignoring: #{file_url}"
         elsif file_list_curated[file_id_and_timestamp]
-          # duplicate combo, ignore silently (verbose flag not shown here)
+          puts "Duplicate file and timestamp combo, ignoring: #{file_id}" if @verbose
         else
-          file_list_curated[file_id_and_timestamp] = { file_url: file_url, timestamp: file_timestamp }
+          file_list_curated[file_id_and_timestamp] = {file_url: file_url, timestamp: file_timestamp}
         end
       end
     end
@@ -774,86 +750,6 @@ class WaybackMachineDownloader
     logger
   end
-  # safely sanitize a file id (or id+timestamp)
-  def sanitize_and_prepare_id(raw, file_url)
-    return nil if raw.nil?
-    return "" if raw.empty?
-    original = raw.dup
-    begin
-      # work on a binary copy to avoid premature encoding errors
-      raw = raw.dup.force_encoding(Encoding::BINARY)
-      # percent-decode (repeat until stable in case of double-encoding)
-      loop do
-        decoded = raw.gsub(/%([0-9A-Fa-f]{2})/) { [$1].pack('H2') }
-        break if decoded == raw
-        raw = decoded
-      end
-      # try tidy_bytes
-      begin
-        raw = raw.tidy_bytes
-      rescue StandardError
-        # fallback: scrub to UTF-8
-        raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
-      end
-      # ensure UTF-8 and scrub again
-      unless raw.encoding == Encoding::UTF_8 && raw.valid_encoding?
-        raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
-      end
-      # strip HTML/comment artifacts & control chars
-      raw.gsub!(/<!--+/, '')
-      raw.gsub!(/[\x00-\x1F]/, '')
-      # split query; hash it for stable short name
-      path_part, query_part = raw.split('?', 2)
-      if query_part && !query_part.empty?
-        q_digest = Digest::SHA256.hexdigest(query_part)[0, 12]
-        if path_part.include?('.')
-          pre, _sep, post = path_part.rpartition('.')
-          path_part = "#{pre}__q#{q_digest}.#{post}"
-        else
-          path_part = "#{path_part}__q#{q_digest}"
-        end
-      end
-      raw = path_part
-      # collapse slashes & trim leading slash
-      raw.gsub!(%r{/+}, '/')
-      raw.sub!(%r{\A/}, '')
-      # segment-wise sanitation
-      raw = raw.split('/').map do |segment|
-        seg = segment.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
-        seg = seg.gsub(/[:*?"<>|\\]/) { |c| "%#{c.ord.to_s(16).upcase}" }
-        seg = seg.gsub(/[ .]+\z/, '') if Gem.win_platform?
-        seg.empty? ? '_' : seg
-      end.join('/')
-      # remove any remaining angle brackets
-      raw.tr!('<>', '')
-      # final fallback if empty
-      raw = "file__#{Digest::SHA1.hexdigest(original)[0,10]}" if raw.nil? || raw.empty?
-      raw
-    rescue => e
-      @logger&.warn("Failed to sanitize file id from #{file_url}: #{e.message}")
-      # deterministic fallback never return nil so caller wont mark malformed
-      "file__#{Digest::SHA1.hexdigest(original)[0,10]}"
-    end
-  end
-  # wrap URL in parentheses if it contains characters that commonly break unquoted
-  # Windows CMD usage (e.g., &). This is only for display; user still must quote
-  # when invoking manually.
-  def safe_display_url(url)
-    return url unless url && url.match?(/[&]/)
-    "(#{url})"
-  end
   def download_with_retry(file_path, file_url, file_timestamp, connection, redirect_count = 0)
     retries = 0
     begin