Compare commits

...

23 Commits

Author SHA1 Message Date
Felipe
b2fc748c2c another page_requisites fix 2025-12-10 12:16:26 +00:00
Felipe
8632050c45 page requisites fix 2025-12-10 12:13:39 +00:00
Felipe
2aa694eed0 Initial implementation of --page-requisites
see StrawberryMaster/wayback-machine-downloader#39
2025-12-10 11:59:00 +00:00
Felipe
4d2513eca8 Be a bit more tolerant of timeouts here 2025-11-15 12:59:07 +00:00
Felipe
67685b781e Improve handling for wildcard URLs
fixes #38
2025-11-15 12:45:34 +00:00
Felipe
f7c0f1a964 Better support for .php, .asp, and other files when using --local
see #37
2025-11-04 23:18:04 +00:00
Nicolai Weitkemper
99da3ca48e
Fix Docker command volume mount path in README (#35) 2025-10-28 15:30:19 -03:00
Felipe
34f22c128c Bump to 2.4.4 2025-10-27 16:51:58 +00:00
Felipe
71bdc7c2de Use explicit current directory to avoid ambiguity
see `Results saved in /build/websites` but nothing is saved :(
Fixes StrawberryMaster/wayback-machine-downloader#34
2025-10-27 16:48:15 +00:00
Felipe
4b1ec1e1cc Added troubleshooting section
includes a workaround fix for SSL CRL error
Fixes StrawberryMaster/wayback-machine-downloader#33
2025-10-08 11:33:50 +00:00
Felipe
d7a63361e3 Use a FixedThreadPool for concurrent API calls 2025-09-24 21:05:22 +00:00
Felipe
b1974a8dfa Refactor ConnectionPool to use SizedQueue for connection management and improve cleanup logic 2025-09-24 20:50:10 +00:00
Huw Fulcher
012b295aed
Corrected wrong flag in example (#32)
Example 2 in Performance section incorrectly stated to use `--snapshot-pages` whereas the parameter is actually `--maximum-snapshot`
2025-09-10 08:06:57 -03:00
adampweb
dec9083b43 Fix: Fixed trivial mistake with function call 2025-09-04 19:24:44 +00:00
Felipe
c517bd20d3 Actual retry implementation
seems I pushed an older revision of this apparently
2025-09-04 19:16:52 +00:00
Felipe
fc8d8a9441 Added retry command
fixes [Feature request] Retry flag
Fixes StrawberryMaster/wayback-machine-downloader#31
2025-08-20 01:21:29 +00:00
Felipe
fa306ac92b Bumped version 2025-08-19 16:17:53 +00:00
Felipe
8c27aaebc9 Fix issue with index.html pages not loading
we were rejecting empty paths, causing these files to be skipped. How did I miss this?
2025-08-19 16:16:24 +00:00
Felipe
40e9c9bb51 Bumped version 2025-08-16 19:38:01 +00:00
Felipe
6bc08947b7
More aggressive sanitization
this should deal with some of the issues we've seen, luckily. What a ride!
2025-08-12 18:55:00 -03:00
Felipe
c731e0c7bd Bumped version 2025-08-12 11:46:03 +00:00
Felipe
9fd2a7f8d1
Minor refactoring of HTML tag sanitization 2025-08-12 08:42:27 -03:00
Felipe
6ad312f31f Sanitizing HTML tags
some sites contain tags *in* their URL, and fail to save on some devices like Windows
2025-08-05 23:44:34 +00:00
7 changed files with 607 additions and 214 deletions

README.md

@@ -9,7 +9,7 @@ Included here is partial content from other forks, namely those @ [ShiftaDeband]
Download a website's latest snapshot:
```bash
ruby wayback_machine_downloader https://example.com
wayback_machine_downloader https://example.com
```
Your files will save to `./websites/example.com/` with their original structure preserved.
@@ -27,6 +27,7 @@ To run most commands, just like in the original WMD, you can use:
```bash
wayback_machine_downloader https://example.com
```
Do note that you can also download this repository manually and run commands from it by prefixing them with `ruby`, e.g. `ruby wayback_machine_downloader https://example.com`.
**Note**: this gem may conflict with hartator's wayback_machine_downloader gem, so you may have to uninstall that one for this WMD fork to work. A telltale sign is a failing command that reports the gem version as 2.3.1 or earlier; this fork uses 2.3.2 or above.
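If you are unsure which versions are installed, here is a quick, illustrative RubyGems check (gem names assumed as above):
```ruby
require 'rubygems'

%w[wayback_machine_downloader wayback_machine_downloader_straw].each do |name|
  Gem::Specification.find_all_by_name(name).each do |spec|
    puts "#{spec.name} #{spec.version}"
  end
end
```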
### Step-by-step setup
@@ -63,15 +64,14 @@ docker build -t wayback_machine_downloader .
docker run -it --rm wayback_machine_downloader [options] URL
```
or the example without cloning the repo - fetching smallrockets.com until the year 2013:
As an example of how this works without cloning this repo, this command fetches smallrockets.com until the year 2013:
```bash
docker run -v .:/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
docker run -v .:/build/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
```
### 🐳 Using Docker Compose
We can also use it with Docker Compose, which provides a lot of benefits for extending more functionalities (such as implementing storing previous downloads in a database):
You can also use Docker Compose, which makes it easier to extend functionality (such as storing previous downloads in a database):
```yaml
# docker-compose.yml
services:
@@ -120,6 +120,7 @@ STATE_DB_FILENAME = '.downloaded.txt' # Tracks completed downloads
| `-t TS`, `--to TS` | Stop at timestamp |
| `-e`, `--exact-url` | Download exact URL only |
| `-r`, `--rewritten` | Download rewritten Wayback Archive files only |
| `-rt`, `--retry NUM` | Maximum number of retries for a failed download (default: 3) |
**Example** - Download files to `downloaded-backup` folder
```bash
@@ -165,6 +166,8 @@ ruby wayback_machine_downloader https://example.com --rewritten
```
Useful if you want to download the rewritten files from the Wayback Machine instead of the original ones.
---
### Filtering Content
| Option | Description |
|--------|-------------|
@@ -199,6 +202,8 @@ Or if you want to download everything except images:
ruby wayback_machine_downloader https://example.com --exclude "/\.(gif|jpg|jpeg)$/i"
```
---
### Performance
| Option | Description |
|--------|-------------|
@@ -213,10 +218,12 @@ Will specify the number of multiple files you want to download at the same time.
**Example 2** - 300 snapshot pages:
```bash
ruby wayback_machine_downloader https://example.com --snapshot-pages 300
ruby wayback_machine_downloader https://example.com --maximum-snapshot 300
```
Will specify the maximum number of snapshot pages to consider; count on an average of 150,000 snapshots per page. The default maximum of 100 pages should be sufficient for most websites; use a larger number if you want to download a very large website.
---
### Diagnostics
| Option | Description |
|--------|-------------|
@@ -235,6 +242,8 @@ ruby wayback_machine_downloader https://example.com --list
```
It will just display the files to be downloaded, with their snapshot timestamps and URLs, in JSON format; nothing will be downloaded. This is useful for debugging or for connecting the output to another application.
---
### Job management
The downloader automatically saves its progress (`.cdx.json` for snapshot list, `.downloaded.txt` for completed files) in the output directory. If you run the same command again pointing to the same output directory, it will resume where it left off, skipping already downloaded files.
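For the curious, the resume check is essentially a set difference over file IDs. A minimal sketch, assuming `files_to_download` holds the curated list of `{file_id:, file_url:, timestamp:}` hashes and that it runs from the output directory:
```ruby
# illustrative only; mirrors the skip logic, not the actual internals verbatim
downloaded_ids = File.exist?('.downloaded.txt') ? File.readlines('.downloaded.txt', chomp: true) : []
files_to_process = files_to_download.reject { |f| downloaded_ids.include?(f[:file_id]) }
```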
@@ -258,6 +267,47 @@ ruby wayback_machine_downloader https://example.com --keep
```
This can be useful for debugging or if you plan to extend the download later with different parameters (e.g., adding `--to` timestamp) while leveraging the existing snapshot list.
---
## Troubleshooting
### SSL certificate errors
If you encounter an SSL error like:
```
SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get certificate CRL)
```
This is a known issue with **OpenSSL 3.6.0** when used with certain Ruby installations, not a bug in this WMD fork specifically. (See [ruby/openssl#949](https://github.com/ruby/openssl/issues/949) for details.)
The workaround is to create a file named `fix_ssl_store.rb` with the following content:
```ruby
require "openssl"
store = OpenSSL::X509::Store.new.tap(&:set_default_paths)
OpenSSL::SSL::SSLContext::DEFAULT_PARAMS[:cert_store] = store
```
and run wayback_machine_downloader with:
```bash
RUBYOPT="-r./fix_ssl_store.rb" wayback_machine_downloader "http://example.com"
```
#### Verifying the issue
You can test if your Ruby environment has this issue by running:
```ruby
require "net/http"
require "uri"
uri = URI("https://web.archive.org/")
Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
resp = http.get("/")
puts "GET / => #{resp.code}"
end
```
If this fails with the same SSL error, the workaround above will fix it.
---
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch

bin/wayback_machine_downloader

@@ -74,6 +74,10 @@ option_parser = OptionParser.new do |opts|
options[:keep] = true
end
opts.on("--rt", "--retry N", Integer, "Maximum number of retries for failed downloads (default: 3)") do |t|
options[:max_retries] = t
end
opts.on("--recursive-subdomains", "Recursively download content from subdomains") do |t|
options[:recursive_subdomains] = true
end
@@ -82,6 +86,10 @@ option_parser = OptionParser.new do |opts|
options[:subdomain_depth] = t
end
opts.on("--page-requisites", "Download related assets (images, css, js) for downloaded HTML pages") do |t|
options[:page_requisites] = true
end
opts.on("-v", "--version", "Display version") do |t|
options[:version] = t
end

lib/wayback_machine_downloader.rb

@@ -11,9 +11,11 @@ require 'concurrent-ruby'
require 'logger'
require 'zlib'
require 'stringio'
require 'digest'
require_relative 'wayback_machine_downloader/tidy_bytes'
require_relative 'wayback_machine_downloader/to_regex'
require_relative 'wayback_machine_downloader/archive_api'
require_relative 'wayback_machine_downloader/page_requisites'
require_relative 'wayback_machine_downloader/subdom_processor'
require_relative 'wayback_machine_downloader/url_rewrite'
@@ -24,58 +26,90 @@ class ConnectionPool
MAX_RETRIES = 3
def initialize(size)
@size = size
@pool = Concurrent::Map.new
@creation_times = Concurrent::Map.new
@pool = SizedQueue.new(size)
size.times { @pool << build_connection_entry }
@cleanup_thread = schedule_cleanup
end
def with_connection(&block)
conn = acquire_connection
def with_connection
entry = acquire_connection
begin
yield conn
yield entry[:http]
ensure
release_connection(conn)
release_connection(entry)
end
end
def shutdown
@cleanup_thread&.exit
@pool.each_value { |conn| conn.finish if conn&.started? }
@pool.clear
@creation_times.clear
drain_pool { |entry| safe_finish(entry[:http]) }
end
private
def acquire_connection
thread_id = Thread.current.object_id
conn = @pool[thread_id]
if should_create_new?(conn)
conn&.finish if conn&.started?
conn = create_connection
@pool[thread_id] = conn
@creation_times[thread_id] = Time.now
entry = @pool.pop
if stale?(entry)
safe_finish(entry[:http])
entry = build_connection_entry
end
entry
end
conn
def release_connection(entry)
if stale?(entry)
safe_finish(entry[:http])
entry = build_connection_entry
end
@pool << entry
end
def release_connection(conn)
return unless conn
if conn.started? && Time.now - @creation_times[Thread.current.object_id] > MAX_AGE
conn.finish
@pool.delete(Thread.current.object_id)
@creation_times.delete(Thread.current.object_id)
def stale?(entry)
http = entry[:http]
!http.started? || (Time.now - entry[:created_at] > MAX_AGE)
end
def build_connection_entry
{ http: create_connection, created_at: Time.now }
end
def safe_finish(http)
http.finish if http&.started?
rescue StandardError
nil
end
def drain_pool
loop do
entry = begin
@pool.pop(true)
rescue ThreadError
break
end
yield(entry)
end
end
def should_create_new?(conn)
return true if conn.nil?
return true unless conn.started?
return true if Time.now - @creation_times[Thread.current.object_id] > MAX_AGE
false
def cleanup_old_connections
entry = begin
@pool.pop(true)
rescue ThreadError
return
end
if stale?(entry)
safe_finish(entry[:http])
entry = build_connection_entry
end
@pool << entry
end
def schedule_cleanup
Thread.new do
loop do
cleanup_old_connections
sleep CLEANUP_INTERVAL
end
end
end
def create_connection
@@ -88,35 +122,15 @@ class ConnectionPool
http.start
http
end
def schedule_cleanup
Thread.new do
loop do
cleanup_old_connections
sleep CLEANUP_INTERVAL
end
end
end
def cleanup_old_connections
current_time = Time.now
@creation_times.each do |thread_id, creation_time|
if current_time - creation_time > MAX_AGE
conn = @pool[thread_id]
conn&.finish if conn&.started?
@pool.delete(thread_id)
@creation_times.delete(thread_id)
end
end
end
end
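# Editor's sketch (illustration, not part of this diff): the SizedQueue-based
# pool above reduces to a blocking check-out/check-in cycle:
#
#   pool = SizedQueue.new(2)
#   2.times { pool << Object.new }   # pre-filled entries
#   entry = pool.pop                 # blocks while all entries are checked out
#   begin
#     # ... use entry ...
#   ensure
#     pool << entry                  # always return it, even on error
#   end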
class WaybackMachineDownloader
include ArchiveAPI
include SubdomainProcessor
include URLRewrite
VERSION = "2.4.0"
VERSION = "2.4.5"
DEFAULT_TIMEOUT = 30
MAX_RETRIES = 3
RETRY_DELAY = 2
@@ -130,7 +144,7 @@ class WaybackMachineDownloader
attr_accessor :base_url, :exact_url, :directory, :all_timestamps,
:from_timestamp, :to_timestamp, :only_filter, :exclude_filter,
:all, :maximum_pages, :threads_count, :logger, :reset, :keep, :rewrite,
:snapshot_at
:snapshot_at, :page_requisites
def initialize params
validate_params(params)
@@ -162,6 +176,9 @@ class WaybackMachineDownloader
@recursive_subdomains = params[:recursive_subdomains] || false
@subdomain_depth = params[:subdomain_depth] || 1
@snapshot_at = params[:snapshot_at] ? params[:snapshot_at].to_i : nil
@max_retries = params[:max_retries] ? params[:max_retries].to_i : MAX_RETRIES
@page_requisites = params[:page_requisites] || false
@pending_jobs = Concurrent::AtomicFixnum.new(0)
# URL for rejecting invalid/unencoded wayback urls
@url_regexp = /^(([A-Za-z][A-Za-z0-9+.-]*):((\/\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=]))+)(:([0-9]*))?)(((\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))|((\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)?))|((((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))(\?((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?(\#((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?)$/
@@ -170,13 +187,31 @@ class WaybackMachineDownloader
end
def backup_name
url_to_process = @base_url.end_with?('/*') ? @base_url.chomp('/*') : @base_url
url_to_process = @base_url
url_to_process = url_to_process.chomp('/*') if url_to_process&.end_with?('/*')
if url_to_process.include? '//'
raw = if url_to_process.include?('//')
url_to_process.split('/')[2]
else
url_to_process
end
# if it looks like a wildcard pattern, normalize to a safe host-ish name
if raw&.start_with?('*.')
raw = raw.sub(/\A\*\./, 'all-')
end
# sanitize for Windows (and for cross-platform safety) to avoid ENOTDIR on mkdir (e.g. a colon in host:port)
if Gem.win_platform?
raw = raw.gsub(/[:*?"<>|]/, '_')
raw = raw.gsub(/[ .]+\z/, '')
else
# still good practice to strip path separators (and maybe '*' for POSIX too)
raw = raw.gsub(/[\/:*?"<>|]/, '_')
end
raw = 'site' if raw.nil? || raw.empty?
raw
end
def backup_path
@@ -185,7 +220,8 @@ class WaybackMachineDownloader
@directory
else
# ensure the default path is absolute and normalized
File.expand_path(File.join('websites', backup_name))
cwd = Dir.pwd
File.expand_path(File.join(cwd, 'websites', backup_name))
end
end
@@ -269,7 +305,8 @@ class WaybackMachineDownloader
page_index = 0
batch_size = [@threads_count, 5].min
continue_fetching = true
fetch_pool = Concurrent::FixedThreadPool.new([@threads_count, 1].max)
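# editor's note: the dedicated pool bounds concurrent CDX page fetches; a bare
# Concurrent::Future.execute would fall back to concurrent-ruby's shared global executor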
begin
while continue_fetching && page_index < @maximum_pages
# Determine the range of pages to fetch in this batch
end_index = [page_index + batch_size, @maximum_pages].min
@@ -277,7 +314,7 @@ class WaybackMachineDownloader
# Create futures for concurrent API calls
futures = current_batch.map do |page|
Concurrent::Future.execute do
Concurrent::Future.execute(executor: fetch_pool) do
result = nil
@connection_pool.with_connection do |connection|
result = get_raw_list_from_api("#{@base_url}/*", page, connection)
@@ -317,6 +354,10 @@ class WaybackMachineDownloader
sleep(RATE_LIMIT) if continue_fetching
end
ensure
fetch_pool.shutdown
fetch_pool.wait_for_termination
end
end
puts " found #{snapshot_list_to_consider.length} snapshots."
@@ -340,13 +381,13 @@ class WaybackMachineDownloader
get_all_snapshots_to_consider.each do |file_timestamp, file_url|
next unless file_url.include?('/')
next if file_timestamp.to_i > target_timestamp
file_id = file_url.split('/')[3..-1].join('/')
file_id = CGI::unescape file_id
file_id = file_id.tidy_bytes unless file_id == ""
raw_tail = file_url.split('/')[3..-1]&.join('/')
file_id = sanitize_and_prepare_id(raw_tail, file_url)
next if file_id.nil?
next if match_exclude_filter(file_url)
next unless match_only_filter(file_url)
# Select the most recent version <= target_timestamp
if !file_versions[file_id] || file_versions[file_id][:timestamp].to_i < file_timestamp.to_i
file_versions[file_id] = { file_url: file_url, timestamp: file_timestamp, file_id: file_id }
end
@@ -368,15 +409,20 @@ class WaybackMachineDownloader
file_list_curated = Hash.new
get_all_snapshots_to_consider.each do |file_timestamp, file_url|
next unless file_url.include?('/')
file_id = file_url.split('/')[3..-1].join('/')
file_id = CGI::unescape file_id
file_id = file_id.tidy_bytes unless file_id == ""
raw_tail = file_url.split('/')[3..-1]&.join('/')
file_id = sanitize_and_prepare_id(raw_tail, file_url)
if file_id.nil?
puts "Malformed file url, ignoring: #{file_url}"
next
end
if file_id.include?('<') || file_id.include?('>')
puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
else
if match_exclude_filter(file_url)
puts "File url matches exclude filter, ignoring: #{file_url}"
elsif not match_only_filter(file_url)
elsif !match_only_filter(file_url)
puts "File url doesn't match only filter, ignoring: #{file_url}"
elsif file_list_curated[file_id]
unless file_list_curated[file_id][:timestamp] > file_timestamp
@@ -394,19 +440,30 @@ class WaybackMachineDownloader
file_list_curated = Hash.new
get_all_snapshots_to_consider.each do |file_timestamp, file_url|
next unless file_url.include?('/')
file_id = file_url.split('/')[3..-1].join('/')
file_id_and_timestamp = [file_timestamp, file_id].join('/')
file_id_and_timestamp = CGI::unescape file_id_and_timestamp
file_id_and_timestamp = file_id_and_timestamp.tidy_bytes unless file_id_and_timestamp == ""
raw_tail = file_url.split('/')[3..-1]&.join('/')
file_id = sanitize_and_prepare_id(raw_tail, file_url)
if file_id.nil?
puts "Malformed file url, ignoring: #{file_url}"
next
end
file_id_and_timestamp_raw = [file_timestamp, file_id].join('/')
file_id_and_timestamp = sanitize_and_prepare_id(file_id_and_timestamp_raw, file_url)
if file_id_and_timestamp.nil?
puts "Malformed file id/timestamp combo, ignoring: #{file_url}"
next
end
if file_id_and_timestamp.include?('<') || file_id_and_timestamp.include?('>')
puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
else
if match_exclude_filter(file_url)
puts "File url matches exclude filter, ignoring: #{file_url}"
elsif not match_only_filter(file_url)
elsif !match_only_filter(file_url)
puts "File url doesn't match only filter, ignoring: #{file_url}"
elsif file_list_curated[file_id_and_timestamp]
puts "Duplicate file and timestamp combo, ignoring: #{file_id}" if @verbose
# duplicate combo, ignore silently (verbose flag not shown here)
else
file_list_curated[file_id_and_timestamp] = { file_url: file_url, timestamp: file_timestamp }
end
@@ -528,6 +585,12 @@ class WaybackMachineDownloader
# Load IDs of already downloaded files
downloaded_ids = load_downloaded_ids
# We use a thread-safe Set to track what we have queued/downloaded in this session
# to avoid infinite loops with page requisites
@session_downloaded_ids = Concurrent::Set.new
downloaded_ids.each { |id| @session_downloaded_ids.add(id) }
files_to_process = files_to_download.reject do |file_info|
downloaded_ids.include?(file_info[:file_id])
end
@@ -539,7 +602,7 @@ class WaybackMachineDownloader
puts "Found #{skipped_count} previously downloaded files, skipping them."
end
if remaining_count == 0
if remaining_count == 0 && !@page_requisites
puts "All matching files have already been downloaded."
cleanup
return
@@ -552,12 +615,22 @@ class WaybackMachineDownloader
@download_mutex = Mutex.new
thread_count = [@threads_count, CONNECTION_POOL_SIZE].min
pool = Concurrent::FixedThreadPool.new(thread_count)
@worker_pool = Concurrent::FixedThreadPool.new(thread_count)
processing_files(pool, files_to_process)
# initial batch
files_to_process.each do |file_remote_info|
@session_downloaded_ids.add(file_remote_info[:file_id])
submit_download_job(file_remote_info)
end
pool.shutdown
pool.wait_for_termination
# wait for all jobs to finish
loop do
sleep 0.5
break if @pending_jobs.value == 0
end
@worker_pool.shutdown
@worker_pool.wait_for_termination
end_time = Time.now
puts "\nDownload finished in #{(end_time - start_time).round(2)}s."
@@ -575,6 +648,125 @@ class WaybackMachineDownloader
cleanup
end
# helper to submit jobs and increment the counter
def submit_download_job(file_remote_info)
@pending_jobs.increment
@worker_pool.post do
begin
process_single_file(file_remote_info)
ensure
@pending_jobs.decrement
end
end
end
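# Editor's sketch (illustration, not part of this diff): the pending-jobs
# pattern in miniature, using the same concurrent-ruby primitives:
#
#   pending = Concurrent::AtomicFixnum.new(0)
#   pool    = Concurrent::FixedThreadPool.new(4)
#   submit  = lambda do |job|
#     pending.increment
#     pool.post { begin; job.call; ensure; pending.decrement; end }
#   end
#   submit.call(-> { submit.call(-> { puts "nested job" }) })
#   sleep 0.1 until pending.value.zero?   # safe even when jobs enqueue more jobs
#   pool.shutdown; pool.wait_for_termination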
def process_single_file(file_remote_info)
download_success = false
downloaded_path = nil
@connection_pool.with_connection do |connection|
result_message, path = download_file(file_remote_info, connection)
downloaded_path = path
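# editor's note: download_file (below) embeds " -> " only in success messages,
# so the substring doubles as a success flag here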
if result_message && result_message.include?(' -> ')
download_success = true
end
@download_mutex.synchronize do
@processed_file_count += 1 if @processed_file_count < @total_to_download
# print the per-file result (a user-requested file or a discovered requisite)
puts result_message if result_message
end
end
if download_success
append_to_db(file_remote_info[:file_id])
if @page_requisites && downloaded_path && File.extname(downloaded_path) =~ /\.(html?|php|asp|aspx|jsp)$/i
process_page_requisites(downloaded_path, file_remote_info)
end
end
rescue => e
@logger.error("Error processing file #{file_remote_info[:file_url]}: #{e.message}")
end
def process_page_requisites(file_path, parent_remote_info)
return unless File.exist?(file_path)
content = File.read(file_path)
content = content.force_encoding('UTF-8').scrub
assets = PageRequisites.extract(content)
# prepare base URI for resolving relative paths
parent_raw = parent_remote_info[:file_url]
parent_raw = "http://#{parent_raw}" unless parent_raw.match?(/^https?:\/\//)
begin
base_uri = URI(parent_raw)
# calculate the "root" host of the site we are downloading to compare later
current_project_host = URI("http://" + @base_url.gsub(%r{^https?://}, '')).host
rescue URI::InvalidURIError
return
end
parent_timestamp = parent_remote_info[:timestamp]
assets.each do |asset_rel_url|
begin
# resolve full URL (handles relative paths like "../img/logo.png")
resolved_uri = base_uri + asset_rel_url
# filter out navigation links (pages) vs assets
# skip if extension is empty or looks like an HTML page
path = resolved_uri.path
ext = File.extname(path).downcase
if ext.empty? || ['.html', '.htm', '.php', '.asp', '.aspx'].include?(ext)
next
end
# construct the URL for the Wayback API
asset_wbm_url = resolved_uri.host + resolved_uri.path
asset_wbm_url += "?#{resolved_uri.query}" if resolved_uri.query
# construct the local file ID
# if the asset is on the SAME domain, strip the domain from the folder path
# if it's on a DIFFERENT domain (e.g. cdn.jquery.com), keep the domain folder
if resolved_uri.host == current_project_host
# e.g. /static/css/style.css
asset_file_id = resolved_uri.path
asset_file_id = asset_file_id[1..-1] if asset_file_id.start_with?('/')
else
# e.g. cdn.google.com/jquery.js
asset_file_id = asset_wbm_url
end
rescue URI::InvalidURIError, StandardError
next
end
# sanitize and queue
asset_id = sanitize_and_prepare_id(asset_file_id, asset_wbm_url)
unless @session_downloaded_ids.include?(asset_id)
@session_downloaded_ids.add(asset_id)
new_file_info = {
file_url: asset_wbm_url,
timestamp: parent_timestamp,
file_id: asset_id
}
@download_mutex.synchronize do
@total_to_download += 1
puts "Queued requisite: #{asset_file_id}"
end
submit_download_job(new_file_info)
end
end
end
def structure_dir_path dir_path
begin
FileUtils::mkdir_p dir_path unless File.exist? dir_path
@@ -606,7 +798,8 @@ class WaybackMachineDownloader
begin
content = File.binread(file_path)
if file_ext == '.html' || file_ext == '.htm'
# detect encoding for HTML files
if file_ext == '.html' || file_ext == '.htm' || file_ext == '.php' || file_ext == '.asp'
encoding = content.match(/<meta\s+charset=["']?([^"'>]+)/i)&.captures&.first || 'UTF-8'
content.force_encoding(encoding) rescue content.force_encoding('UTF-8')
else
@@ -614,21 +807,21 @@ class WaybackMachineDownloader
end
# URLs in HTML attributes
rewrite_html_attr_urls(content)
content = rewrite_html_attr_urls(content)
# URLs in CSS
rewrite_css_urls(content)
content = rewrite_css_urls(content)
# URLs in JavaScript
rewrite_js_urls(content)
content = rewrite_js_urls(content)
# for URLs in HTML attributes that start with a single slash
# for URLs that start with a single slash, make them relative
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])\/([^"'\/][^"']*)(["'])/i) do
prefix, path, suffix = $1, $2, $3
"#{prefix}./#{path}#{suffix}"
end
# for URLs in CSS that start with a single slash
# for URLs in CSS that start with a single slash, make them relative
content.gsub!(/url\(\s*["']?\/([^"'\)\/][^"'\)]*?)["']?\s*\)/i) do
path = $1
"url(\"./#{path}\")"
@@ -681,7 +874,7 @@ class WaybackMachineDownloader
# check existence *before* download attempt
# this handles cases where a file was created manually or by a previous partial run without a .db entry
if File.exist? file_path
return "#{file_url} # #{file_path} already exists. (#{@processed_file_count + 1}/#{@total_to_download})"
return ["#{file_url} # #{file_path} already exists. (#{@processed_file_count + 1}/#{@total_to_download})", file_path]
end
begin
@@ -693,13 +886,13 @@ class WaybackMachineDownloader
if @rewrite && File.extname(file_path) =~ /\.(html?|css|js)$/i
rewrite_urls_to_relative(file_path)
end
"#{file_url} -> #{file_path} (#{@processed_file_count + 1}/#{@total_to_download})"
return ["#{file_url} -> #{file_path} (#{@processed_file_count + 1}/#{@total_to_download})", file_path]
when :skipped_not_found
"Skipped (not found): #{file_url} (#{@processed_file_count + 1}/#{@total_to_download})"
return ["Skipped (not found): #{file_url} (#{@processed_file_count + 1}/#{@total_to_download})", nil]
else
# ideally, this case should not be reached if download_with_retry behaves as expected.
@logger.warn("Unknown status from download_with_retry for #{file_url}: #{status}")
"Unknown status for #{file_url}: #{status} (#{@processed_file_count + 1}/#{@total_to_download})"
return ["Unknown status for #{file_url}: #{status} (#{@processed_file_count + 1}/#{@total_to_download})", nil]
end
rescue StandardError => e
msg = "Failed: #{file_url} # #{e} (#{@processed_file_count + 1}/#{@total_to_download})"
@@ -707,7 +900,7 @@ class WaybackMachineDownloader
File.delete(file_path)
msg += "\n#{file_path} was empty and was removed."
end
msg
return [msg, nil]
end
end
@@ -750,6 +943,86 @@ class WaybackMachineDownloader
logger
end
# safely sanitize a file id (or id+timestamp)
def sanitize_and_prepare_id(raw, file_url)
return nil if raw.nil?
return "" if raw.empty?
original = raw.dup
begin
# work on a binary copy to avoid premature encoding errors
raw = raw.dup.force_encoding(Encoding::BINARY)
# percent-decode (repeat until stable in case of double-encoding)
loop do
decoded = raw.gsub(/%([0-9A-Fa-f]{2})/) { [$1].pack('H2') }
break if decoded == raw
raw = decoded
end
# try tidy_bytes
begin
raw = raw.tidy_bytes
rescue StandardError
# fallback: scrub to UTF-8
raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
end
# ensure UTF-8 and scrub again
unless raw.encoding == Encoding::UTF_8 && raw.valid_encoding?
raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
end
# strip HTML/comment artifacts & control chars
raw.gsub!(/<!--+/, '')
raw.gsub!(/[\x00-\x1F]/, '')
# split query; hash it for stable short name
path_part, query_part = raw.split('?', 2)
if query_part && !query_part.empty?
q_digest = Digest::SHA256.hexdigest(query_part)[0, 12]
if path_part.include?('.')
pre, _sep, post = path_part.rpartition('.')
path_part = "#{pre}__q#{q_digest}.#{post}"
else
path_part = "#{path_part}__q#{q_digest}"
end
end
raw = path_part
# collapse slashes & trim leading slash
raw.gsub!(%r{/+}, '/')
raw.sub!(%r{\A/}, '')
# segment-wise sanitization
raw = raw.split('/').map do |segment|
seg = segment.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
seg = seg.gsub(/[:*?"<>|\\]/) { |c| "%#{c.ord.to_s(16).upcase}" }
seg = seg.gsub(/[ .]+\z/, '') if Gem.win_platform?
seg.empty? ? '_' : seg
end.join('/')
# remove any remaining angle brackets
raw.tr!('<>', '')
# final fallback if empty
raw = "file__#{Digest::SHA1.hexdigest(original)[0,10]}" if raw.nil? || raw.empty?
raw
rescue => e
@logger&.warn("Failed to sanitize file id from #{file_url}: #{e.message}")
# deterministic fallback: never return nil, so the caller won't mark it malformed
"file__#{Digest::SHA1.hexdigest(original)[0,10]}"
end
end
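# Editor's note (illustration, not part of this diff): the net effect is a
# deterministic, filesystem-safe id; e.g. for a URL tail with a query string,
#   sanitize_and_prepare_id("img/pic.jpg?v=2", file_url)
#   # => "img/pic__q<first 12 hex chars of SHA-256("v=2")>.jpg"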
# wrap URL in parentheses if it contains characters that commonly break unquoted
# Windows CMD usage (e.g., &). This is only for display; user still must quote
# when invoking manually.
def safe_display_url(url)
return url unless url && url.match?(/[&]/)
"(#{url})"
end
def download_with_retry(file_path, file_url, file_timestamp, connection, redirect_count = 0)
retries = 0
begin
@@ -830,9 +1103,9 @@ class WaybackMachineDownloader
end
rescue StandardError => e
if retries < MAX_RETRIES
if retries < @max_retries
retries += 1
@logger.warn("Retry #{retries}/#{MAX_RETRIES} for #{file_url}: #{e.message}")
@logger.warn("Retry #{retries}/#{@max_retries} for #{file_url}: #{e.message}")
sleep(RETRY_DELAY * retries)
retry
else

lib/wayback_machine_downloader/archive_api.rb

@@ -16,6 +16,10 @@ module ArchiveAPI
params = [["output", "json"], ["url", url]] + parameters_for_api(page_index)
request_url.query = URI.encode_www_form(params)
retries = 0
max_retries = (@max_retries || 3)
delay = WaybackMachineDownloader::RETRY_DELAY rescue 2
begin
response = http.get(request_url)
body = response.body.to_s.strip
@@ -26,7 +30,21 @@ module ArchiveAPI
json.shift if json.first == ["timestamp", "original"]
json
rescue JSON::ParserError => e
warn "Failed to fetch data from API: #{e.message}"
warn "Failed to parse JSON from API for #{url}: #{e.message}"
[]
rescue Net::ReadTimeout, Net::OpenTimeout => e
if retries < max_retries
retries += 1
warn "Timeout talking to Wayback CDX API (#{e.class}: #{e.message}) for #{url}, retry #{retries}/#{max_retries}..."
sleep(delay * retries)
retry
else
warn "Giving up on Wayback CDX API for #{url} after #{max_retries} timeouts."
[]
end
rescue StandardError => e
# treat any other transient-ish error similarly, though without retries for now
warn "Error fetching CDX data for #{url}: #{e.message}"
[]
end
end

lib/wayback_machine_downloader/page_requisites.rb

@@ -0,0 +1,33 @@
module PageRequisites
# regex to find links in href, src, url(), and srcset
# this ignores data: URIs, mailto:, and anchors
ASSET_REGEX = /(?:href|src|data-src|data-url)\s*=\s*["']([^"']+)["']|url\(\s*["']?([^"'\)]+)["']?\s*\)|srcset\s*=\s*["']([^"']+)["']/i
def self.extract(html_content)
assets = []
html_content.scan(ASSET_REGEX) do |match|
# match is an array of capture groups; find the one that matched
url = match.compact.first
next unless url
# handle srcset (e.g. comma separated values like "image.jpg 1x, image2.jpg 2x")
if url.include?(',') && (url.include?(' 1x') || url.include?(' 2w'))
url.split(',').each do |src_def|
src_url = src_def.strip.split(' ').first
assets << src_url if valid_asset?(src_url)
end
else
assets << url if valid_asset?(url)
end
end
assets.uniq
end
def self.valid_asset?(url)
return false if url.strip.empty?
return false if url.start_with?('data:', 'mailto:', '#', 'javascript:')
true
end
end
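# Editor's note (illustrative usage, not part of this diff):
#   html = '<img src="/img/logo.png" srcset="a.png 1x, b.png 2x">' \
#          '<a href="mailto:x@y.z">mail</a>'
#   PageRequisites.extract(html)
#   # => ["/img/logo.png", "a.png", "b.png"]   (the mailto: link is rejected)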

lib/wayback_machine_downloader/url_rewrite.rb

@@ -1,74 +1,85 @@
# frozen_string_literal: true
# URLs in HTML attributes
module URLRewrite
# server-side extensions that should work locally
SERVER_SIDE_EXTS = %w[.php .asp .aspx .jsp .cgi .pl .py].freeze
def rewrite_html_attr_urls(content)
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
prefix, url, suffix = $1, $2, $3
if url.start_with?('http')
begin
uri = URI.parse(url)
path = uri.path
path = path[1..-1] if path.start_with?('/')
# rewrite URLs to relative paths
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/web\.archive\.org\/web\/\d+(?:id_)?\/https?:\/\/[^\/]+([^"']*)(["'])/i) do
prefix, path, suffix = $1, $2, $3
path = normalize_path_for_local(path)
"#{prefix}#{path}#{suffix}"
rescue
"#{prefix}#{url}#{suffix}"
end
elsif url.start_with?('/')
"#{prefix}./#{url[1..-1]}#{suffix}"
else
"#{prefix}#{url}#{suffix}"
end
# rewrite remaining absolute URLs (typically same-domain links) as relative paths
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/[^\/]+([^"']*)(["'])/i) do
prefix, path, suffix = $1, $2, $3
path = normalize_path_for_local(path)
"#{prefix}#{path}#{suffix}"
end
content
end
# URLs in CSS
def rewrite_css_urls(content)
content.gsub!(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"'\)]+)["']?\s*\)/i) do
url = $1
if url.start_with?('http')
begin
uri = URI.parse(url)
path = uri.path
path = path[1..-1] if path.start_with?('/')
# rewrite URLs in CSS
content.gsub!(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/\d+(?:id_)?\/https?:\/\/[^\/]+([^"'\)]*?)["']?\s*\)/i) do
path = normalize_path_for_local($1)
"url(\"#{path}\")"
rescue
"url(\"#{url}\")"
end
elsif url.start_with?('/')
"url(\"./#{url[1..-1]}\")"
else
"url(\"#{url}\")"
end
# rewrite absolute URLs in CSS
content.gsub!(/url\(\s*["']?https?:\/\/[^\/]+([^"'\)]*?)["']?\s*\)/i) do
path = normalize_path_for_local($1)
"url(\"#{path}\")"
end
content
end
# URLs in JavaScript
def rewrite_js_urls(content)
content.gsub!(/(["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
quote_start, url, quote_end = $1, $2, $3
if url.start_with?('http')
begin
uri = URI.parse(url)
path = uri.path
path = path[1..-1] if path.start_with?('/')
# rewrite archive.org URLs in JavaScript strings
content.gsub!(/(["'])https?:\/\/web\.archive\.org\/web\/\d+(?:id_)?\/https?:\/\/[^\/]+([^"']*)(["'])/i) do
quote_start, path, quote_end = $1, $2, $3
path = normalize_path_for_local(path)
"#{quote_start}#{path}#{quote_end}"
rescue
"#{quote_start}#{url}#{quote_end}"
end
elsif url.start_with?('/')
"#{quote_start}./#{url[1..-1]}#{quote_end}"
else
"#{quote_start}#{url}#{quote_end}"
end
# rewrite absolute URLs in JavaScript
content.gsub!(/(["'])https?:\/\/[^\/]+([^"']*)(["'])/i) do
quote_start, path, quote_end = $1, $2, $3
next "#{quote_start}http#{$2}#{quote_end}" if $2.start_with?('s://', '://')
path = normalize_path_for_local(path)
"#{quote_start}#{path}#{quote_end}"
end
content
end
private
def normalize_path_for_local(path)
return "./index.html" if path.empty? || path == "/"
# handle query strings - they're already part of the filename
path = path.split('?').first if path.include?('?')
# check if this is a server-side script
ext = File.extname(path).downcase
if SERVER_SIDE_EXTS.include?(ext)
# keep the path as-is but ensure it starts with ./
path = "./#{path}" unless path.start_with?('./', '/')
else
# regular file handling
path = "./#{path}" unless path.start_with?('./', '/')
# if it looks like a directory, add index.html
if path.end_with?('/') || !path.include?('.')
path = "#{path.chomp('/')}/index.html"
end
end
path
end
end
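# Editor's note (illustration, not part of this diff): how normalize_path_for_local
# maps paths:
#   ""             # => "./index.html"
#   "/about"       # => "/about/index.html"    (no extension, so a directory index)
#   "css/site.css" # => "./css/site.css"
#   "/p.php?id=3"  # => "/p.php"               (query dropped; server-side ext kept as a file)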

View File

@@ -1,12 +1,12 @@
Gem::Specification.new do |s|
s.name = "wayback_machine_downloader_straw"
s.version = "2.4.0"
s.version = "2.4.5"
s.executables << "wayback_machine_downloader"
s.summary = "Download an entire website from the Wayback Machine."
s.description = "Download complete websites from the Internet Archive's Wayback Machine. While the Wayback Machine (archive.org) excellently preserves web history, it lacks a built-in export functionality; this gem does just that, allowing you to download entire archived websites. (This is a significant rewrite of the original wayback_machine_downloader gem by hartator, with enhanced features and performance improvements.)"
s.authors = ["strawberrymaster"]
s.email = "strawberrymaster@vivaldi.net"
s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb", "lib/wayback_machine_downloader/subdom_processor.rb", "lib/wayback_machine_downloader/url_rewrite.rb"]
s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb", "lib/wayback_machine_downloader/page_requisites.rb", "lib/wayback_machine_downloader/subdom_processor.rb", "lib/wayback_machine_downloader/url_rewrite.rb"]
s.homepage = "https://github.com/StrawberryMaster/wayback-machine-downloader"
s.license = "MIT"
s.required_ruby_version = ">= 3.4.3"