40 Commits

Author SHA1 Message Date
Felipe
34f22c128c Bump to 2.4.4 2025-10-27 16:51:58 +00:00
Felipe
71bdc7c2de Use explicit current directory to avoid ambiguity
see `Results saved in /build/websites` but nothing is saved :(
Fixes StrawberryMaster/wayback-machine-downloader#34
2025-10-27 16:48:15 +00:00
Felipe
4b1ec1e1cc Added troubleshooting section
includes a workaround fix for SSL CRL error
Fixes StrawberryMaster/wayback-machine-downloader#33
2025-10-08 11:33:50 +00:00
Felipe
d7a63361e3 Use a FixedThreadPool for concurrent API calls 2025-09-24 21:05:22 +00:00
Felipe
b1974a8dfa Refactor ConnectionPool to use SizedQueue for connection management and improve cleanup logic 2025-09-24 20:50:10 +00:00
Huw Fulcher
012b295aed Corrected wrong flag in example (#32)
Example 2 in Performance section incorrectly stated to use `--snapshot-pages` whereas the parameter is actually `--maximum-snapshot`
2025-09-10 08:06:57 -03:00
adampweb
dec9083b43 Fix: Fixed trivial mistake with function call 2025-09-04 19:24:44 +00:00
Felipe
c517bd20d3 Actual retry implementation
seems I pushed an older revision of this apparently
2025-09-04 19:16:52 +00:00
Felipe
fc8d8a9441 Added retry command
fixes [Feature request] Retry flag
Fixes StrawberryMaster/wayback-machine-downloader#31
2025-08-20 01:21:29 +00:00
Felipe
fa306ac92b Bumped version 2025-08-19 16:17:53 +00:00
Felipe
8c27aaebc9 Fix issue with index.html pages not loading
we were rejecting empty paths, causing these files to be skipped. How did I miss this?
2025-08-19 16:16:24 +00:00
Felipe
40e9c9bb51 Bumped version 2025-08-16 19:38:01 +00:00
Felipe
6bc08947b7 More aggressive sanitization
this should deal with some of the issues we've seen, luckily. What a ride!
2025-08-12 18:55:00 -03:00
Felipe
c731e0c7bd Bumped version 2025-08-12 11:46:03 +00:00
Felipe
9fd2a7f8d1 Minor refactoring of HTML tag sanitization 2025-08-12 08:42:27 -03:00
Felipe
6ad312f31f Sanitizing HTML tags
some sites contain tags *in* their URL, and fail to save on some devices like Windows
2025-08-05 23:44:34 +00:00
Felipe
62ea35daa6 Bumping version 2025-08-04 21:23:48 +00:00
Felipe
1f4202908f Fixes for tidy_bytes
admittedly not the cleanest way to do this, although it works for #25.
2025-07-31 12:58:22 -03:00
Felipe
bed3f6101c Added missing gemspec file 2025-07-31 12:57:03 -03:00
Felipe
754df6b8d6 Merge pull request #27 from adampweb/master
Refactored huge functions & cleanup
2025-07-29 18:09:51 -03:00
adampweb
801fb77f79 Perf: Refactored a huge function into smaller subprocesses 2025-07-29 21:12:20 +02:00
adampweb
e9849e6c9c Cleanup: I removed the obsolete options.
The classic way provides more flexibility
2025-07-29 20:55:10 +02:00
Felipe
bc868e6b39 Refactor tidy_bytes.rb
I'm not sure if we can easily determine the encoding behind each site (and I don't think Wayback Machine does that), *but* we can at least translate it and get it to download. This should be mostly useful for other, non-Western European languages. See #25
2025-07-29 10:10:56 -03:00
Felipe
2bf04aff48 Sanitize base_url and directory parameters
this might be the cause of #25, at least from what it appears
2025-07-27 17:18:57 +00:00
Felipe
51becde916 Minor fix 2025-07-26 21:01:40 +00:00
Felipe
c30ee73977 Sanitize file_id
we were not consistently handling non-UTF-8 characters here, especially after commit e4487baafc. This also fixes #25
2025-07-26 20:58:50 +00:00
Felipe
d3466b3387 Bumping version
normally I would've yanked the old gem, but that's not working here
2025-07-22 12:41:26 +00:00
Felipe
0250579f0e Added missing file 2025-07-22 12:38:12 +00:00
Felipe
0663c1c122 Merge pull request #23 from adampweb/master
Fixed base image vulnerability
2025-07-21 14:44:43 -03:00
adampweb
93115f70ec Merge pull request #5 from adampweb/snyk-fix-88576ceadf7e0c41b63a2af504a3c8ae
[Snyk] Security upgrade ruby from 3.4.4-alpine to 3.4.5-alpine
2025-07-21 18:46:03 +02:00
snyk-bot
3d37ae10fd fix: Dockerfile to reduce vulnerabilities
The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-ALPINE322-OPENSSL-10597997
2025-07-21 16:45:10 +00:00
Felipe
bff10e7260 Initial implementation of a composite snapshot
see issue #22. TBF
2025-07-21 15:30:49 +00:00
Felipe
3d181ce84c Bumped version 2025-07-21 13:48:34 +00:00
Alfonso Corrado
999aa211ae fix match filters 2025-07-21 13:42:44 +00:00
adampweb
ffdce7e4ec Exclude dev environment config 2025-07-20 17:14:09 +02:00
adampweb
e4487baafc Fix: Handle default case in tidy_bytes 2025-07-20 17:13:36 +02:00
Felipe
82ff2de3dc Added brief note for users with both WMD gems here 2025-07-14 08:12:38 -03:00
Felipe
fd329afdd2 Merge pull request #20 from underarchiver/rfc3968-url-validity-check
Prevent fetching off non RFC3968-compliant URLs
2025-07-11 10:55:12 -03:00
Felipe
038785557d Ability to recursively download across subdomains
this is quite experimental. Fixes #15 but still needs more testing
2025-07-09 12:53:58 +00:00
underarchiver
f03d92a3c4 Prevent fetching off non RFC3968-compliant URLs 2025-06-17 13:27:10 +02:00
11 changed files with 810 additions and 303 deletions

.gitignore

@@ -32,3 +32,7 @@ tmp
 *.rbc
 test.rb
+
+# Dev environment
+.vscode
+*.code-workspace

Dockerfile

@@ -1,4 +1,4 @@
-FROM ruby:3.4.4-alpine
+FROM ruby:3.4.5-alpine
 USER root

 WORKDIR /build
@@ -6,10 +6,9 @@ COPY Gemfile /build/
 COPY *.gemspec /build/
 RUN bundle config set jobs "$(nproc)" \
-    && bundle config set without 'development test' \
     && bundle install
 COPY . /build

-WORKDIR /
+WORKDIR /build
-ENTRYPOINT [ "/build/bin/wayback_machine_downloader" ]
+ENTRYPOINT [ "/build/bin/wayback_machine_downloader", "--directory", "/build/websites" ]

README.md

@@ -9,7 +9,7 @@ Included here is partial content from other forks, namely those @ [ShiftaDeband]
 Download a website's latest snapshot:

 ```bash
-ruby wayback_machine_downloader https://example.com
+wayback_machine_downloader https://example.com
 ```

 Your files will save to `./websites/example.com/` with their original structure preserved.
@@ -27,6 +27,8 @@ To run most commands, just like in the original WMD, you can use:
 ```bash
 wayback_machine_downloader https://example.com
 ```
+Do note that you can also manually download this repository and run commands from it by prepending `ruby`, e.g. `ruby wayback_machine_downloader https://example.com`.
+**Note**: this gem may conflict with hartator's wayback_machine_downloader gem, so you may have to uninstall that one for this WMD fork to work. A failing command is a good tell: it will report the gem version as 2.3.1 or earlier, while this WMD fork uses 2.3.2 or above.

 ### Step-by-step setup
 1. **Install Ruby**:
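
If you suspect the gem conflict described in the note above, RubyGems itself can confirm and resolve it; a sketch using standard commands:

```bash
# 2.3.1 or earlier indicates hartator's original gem is shadowing this fork
gem list wayback_machine_downloader

# remove the original so the fork's executable wins
gem uninstall wayback_machine_downloader
```
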
@@ -62,15 +64,14 @@ docker build -t wayback_machine_downloader .
 docker run -it --rm wayback_machine_downloader [options] URL
 ```

-or the example without cloning the repo - fetching smallrockets.com until the year 2013:
+As an example of how this works without cloning this repo, this command fetches smallrockets.com until the year 2013:

 ```bash
 docker run -v .:/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
 ```

 ### 🐳 Using Docker Compose

-We can also use it with Docker Compose, which provides a lot of benefits for extending more functionalities (such as implementing storing previous downloads in a database):
+You can also use Docker Compose, which makes it easier to extend functionality (such as storing previous downloads in a database):

 ```yaml
 # docker-compose.yml
 services:
@@ -80,36 +81,19 @@ services:
     tty: true
     image: wayback_machine_downloader:latest
     container_name: wayback_machine_downloader
-    environment:
-      - ENVIRONMENT=${ENVIRONMENT:-development}
-      - OPTIONS=${OPTIONS:-""}
-      - TARGET_URL=${TARGET_URL}
     volumes:
       - .:/build:rw
       - ./websites:/build/websites:rw
-    command: --directory /build/websites ${OPTIONS} ${TARGET_URL}
 ```

 #### Usage:
-Now You can create a Docker image as named "wayback_machine_downloader" with the following command:
+Now you can build a Docker image named "wayback_machine_downloader" with the following command:

 ```bash
 docker compose up -d --build
 ```

-After that you must set the TARGET_URL environment variable:
-```bash
-export TARGET_URL="https://example.com/"
-```
-The **OPTIONS** env. variable is optional; it may include additional settings, which are found in the "**Advanced usage**" section below.
-Example:
-```bash
-export OPTIONS="--list -f 20060121"
-```
 After that you can run the existing container with the following command:

 ```bash
-docker compose run --rm wayback_machine_downloader https://example.com
+docker compose run --rm wayback_machine_downloader https://example.com [options]
 ```

 ## ⚙️ Configuration
@@ -136,6 +120,7 @@ STATE_DB_FILENAME = '.downloaded.txt' # Tracks completed downloads
 | `-t TS`, `--to TS` | Stop at timestamp |
 | `-e`, `--exact-url` | Download exact URL only |
 | `-r`, `--rewritten` | Download rewritten Wayback Archive files only |
+| `-rt`, `--retry NUM` | Number of tries in case a download fails (default: 3) |

 **Example** - Download files to `downloaded-backup` folder
 ```bash
@@ -181,6 +166,8 @@ ruby wayback_machine_downloader https://example.com --rewritten
 ```
 Useful if you want to download the rewritten files from the Wayback Machine instead of the original ones.

+---
+
 ### Filtering Content
 | Option | Description |
 |--------|-------------|
@@ -215,6 +202,8 @@ Or if you want to download everything except images:
 ruby wayback_machine_downloader https://example.com --exclude "/\.(gif|jpg|jpeg)$/i"
 ```

+---
+
 ### Performance
 | Option | Description |
 |--------|-------------|
@@ -229,10 +218,12 @@ Will specify the number of multiple files you want to download at the same time.
 **Example 2** - 300 snapshot pages:
 ```bash
-ruby wayback_machine_downloader https://example.com --snapshot-pages 300
+ruby wayback_machine_downloader https://example.com --maximum-snapshot 300
 ```
 Will specify the maximum number of snapshot pages to consider. Count an average of 150,000 snapshots per page. 100 is the default maximum number of snapshot pages and should be sufficient for most websites. Use a bigger number if you want to download a very large website.

+---
+
 ### Diagnostics
 | Option | Description |
 |--------|-------------|
@@ -251,6 +242,8 @@ ruby wayback_machine_downloader https://example.com --list
 ```
 It will just display the files to be downloaded with their snapshot timestamps and urls. The output format is JSON. It won't download anything. It's useful for debugging or to connect to another application.

+---
+
 ### Job management
 The downloader automatically saves its progress (`.cdx.json` for snapshot list, `.downloaded.txt` for completed files) in the output directory. If you run the same command again pointing to the same output directory, it will resume where it left off, skipping already downloaded files.
@@ -274,6 +267,47 @@ ruby wayback_machine_downloader https://example.com --keep
 ```
 This can be useful for debugging or if you plan to extend the download later with different parameters (e.g., adding `--to` timestamp) while leveraging the existing snapshot list.

+---
+
+## Troubleshooting
+
+### SSL certificate errors
+If you encounter an SSL error like:
+```
+SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get certificate CRL)
+```
+this is a known issue with **OpenSSL 3.6.0** when used with certain Ruby installations, not a bug in this WMD fork specifically. (See [ruby/openssl#949](https://github.com/ruby/openssl/issues/949) for details.)
+
+The workaround is to create a file named `fix_ssl_store.rb` with the following content:
+```ruby
+require "openssl"
+
+store = OpenSSL::X509::Store.new.tap(&:set_default_paths)
+OpenSSL::SSL::SSLContext::DEFAULT_PARAMS[:cert_store] = store
+```
+and run wayback-machine-downloader with:
+```bash
+RUBYOPT="-r./fix_ssl_store.rb" wayback_machine_downloader "http://example.com"
+```
+
+#### Verifying the issue
+You can test whether your Ruby environment has this issue by running:
+```ruby
+require "net/http"
+require "uri"
+
+uri = URI("https://web.archive.org/")
+Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
+  resp = http.get("/")
+  puts "GET / => #{resp.code}"
+end
+```
+If this fails with the same SSL error, the workaround above will fix it.
+
+---
+
 ## 🤝 Contributing
 1. Fork the repository
 2. Create a feature branch

bin/wayback_machine_downloader

@@ -74,6 +74,18 @@ option_parser = OptionParser.new do |opts|
     options[:keep] = true
   end

+  opts.on("--rt", "--retry N", Integer, "Maximum number of retries for failed downloads (default: 3)") do |t|
+    options[:max_retries] = t
+  end
+
+  opts.on("--recursive-subdomains", "Recursively download content from subdomains") do |t|
+    options[:recursive_subdomains] = true
+  end
+
+  opts.on("--subdomain-depth DEPTH", Integer, "Maximum depth for subdomain recursion (default: 1)") do |t|
+    options[:subdomain_depth] = t
+  end
+
   opts.on("-v", "--version", "Display version") do |t|
     options[:version] = t
   end
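
Combined, the new options read like this; a usage sketch assuming the gem's executable is on PATH:

```bash
# retry failed downloads up to 5 times and crawl one level of subdomains
wayback_machine_downloader https://example.com --retry 5 --recursive-subdomains --subdomain-depth 1
```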

docker-compose.yml

@@ -5,11 +5,6 @@ services:
     tty: true
     image: wayback_machine_downloader:latest
     container_name: wayback_machine_downloader
-    environment:
-      - ENVIRONMENT=${DEVELOPMENT:-production}
-      - OPTIONS=${OPTIONS:-""}
-      - TARGET_URL=${TARGET_URL}
     volumes:
       - .:/build:rw
-      - ./websites:/websites:rw
+      - ./websites:/build/websites:rw
-    command: /build/bin/wayback_machine_downloader ${TARGET_URL} ${OPTIONS}

lib/wayback_machine_downloader.rb

@@ -11,9 +11,12 @@ require 'concurrent-ruby'
 require 'logger'
 require 'zlib'
 require 'stringio'
+require 'digest'
 require_relative 'wayback_machine_downloader/tidy_bytes'
 require_relative 'wayback_machine_downloader/to_regex'
 require_relative 'wayback_machine_downloader/archive_api'
+require_relative 'wayback_machine_downloader/subdom_processor'
+require_relative 'wayback_machine_downloader/url_rewrite'

 class ConnectionPool
   MAX_AGE = 300
@@ -22,58 +25,90 @@ class ConnectionPool
   MAX_RETRIES = 3

   def initialize(size)
-    @size = size
-    @pool = Concurrent::Map.new
-    @creation_times = Concurrent::Map.new
+    @pool = SizedQueue.new(size)
+    size.times { @pool << build_connection_entry }
     @cleanup_thread = schedule_cleanup
   end

-  def with_connection(&block)
-    conn = acquire_connection
+  def with_connection
+    entry = acquire_connection
     begin
-      yield conn
+      yield entry[:http]
     ensure
-      release_connection(conn)
+      release_connection(entry)
     end
   end

   def shutdown
     @cleanup_thread&.exit
-    @pool.each_value { |conn| conn.finish if conn&.started? }
-    @pool.clear
-    @creation_times.clear
+    drain_pool { |entry| safe_finish(entry[:http]) }
   end

   private

   def acquire_connection
-    thread_id = Thread.current.object_id
-    conn = @pool[thread_id]
-    if should_create_new?(conn)
-      conn&.finish if conn&.started?
-      conn = create_connection
-      @pool[thread_id] = conn
-      @creation_times[thread_id] = Time.now
+    entry = @pool.pop
+    if stale?(entry)
+      safe_finish(entry[:http])
+      entry = build_connection_entry
     end
-    conn
+    entry
   end

-  def release_connection(conn)
-    return unless conn
-    if conn.started? && Time.now - @creation_times[Thread.current.object_id] > MAX_AGE
-      conn.finish
-      @pool.delete(Thread.current.object_id)
-      @creation_times.delete(Thread.current.object_id)
+  def release_connection(entry)
+    if stale?(entry)
+      safe_finish(entry[:http])
+      entry = build_connection_entry
     end
+    @pool << entry
   end

-  def should_create_new?(conn)
-    return true if conn.nil?
-    return true unless conn.started?
-    return true if Time.now - @creation_times[Thread.current.object_id] > MAX_AGE
-    false
+  def stale?(entry)
+    http = entry[:http]
+    !http.started? || (Time.now - entry[:created_at] > MAX_AGE)
+  end
+
+  def build_connection_entry
+    { http: create_connection, created_at: Time.now }
+  end
+
+  def safe_finish(http)
+    http.finish if http&.started?
+  rescue StandardError
+    nil
+  end
+
+  def drain_pool
+    loop do
+      entry = begin
+        @pool.pop(true)
+      rescue ThreadError
+        break
+      end
+      yield(entry)
+    end
+  end
+
+  def cleanup_old_connections
+    entry = begin
+      @pool.pop(true)
+    rescue ThreadError
+      return
+    end
+    if stale?(entry)
+      safe_finish(entry[:http])
+      entry = build_connection_entry
+    end
+    @pool << entry
+  end
+
+  def schedule_cleanup
+    Thread.new do
+      loop do
+        cleanup_old_connections
+        sleep CLEANUP_INTERVAL
+      end
+    end
   end

   def create_connection
@@ -86,34 +121,14 @@ class ConnectionPool
     http.start
     http
   end
-
-  def schedule_cleanup
-    Thread.new do
-      loop do
-        cleanup_old_connections
-        sleep CLEANUP_INTERVAL
-      end
-    end
-  end
-
-  def cleanup_old_connections
-    current_time = Time.now
-    @creation_times.each do |thread_id, creation_time|
-      if current_time - creation_time > MAX_AGE
-        conn = @pool[thread_id]
-        conn&.finish if conn&.started?
-        @pool.delete(thread_id)
-        @creation_times.delete(thread_id)
-      end
-    end
-  end
 end

 class WaybackMachineDownloader

   include ArchiveAPI
+  include SubdomainProcessor

-  VERSION = "2.3.10"
+  VERSION = "2.4.4"
   DEFAULT_TIMEOUT = 30
   MAX_RETRIES = 3
   RETRY_DELAY = 2
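
The refactored pool now checks out `{http:, created_at:}` entries from a `SizedQueue` and hands the block only the `Net::HTTP` object, so callers are unchanged. A minimal usage sketch (the target host is whatever `create_connection` opens, as in the library):

```ruby
pool = ConnectionPool.new(4)
begin
  pool.with_connection do |http|
    # http is a started Net::HTTP instance; stale entries are rebuilt transparently
    response = http.get("/")
    puts response.code
  end
ensure
  pool.shutdown # drains the queue and finishes every pooled connection
end
```
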
@@ -123,16 +138,19 @@ class WaybackMachineDownloader
   STATE_CDX_FILENAME = ".cdx.json"
   STATE_DB_FILENAME = ".downloaded.txt"

   attr_accessor :base_url, :exact_url, :directory, :all_timestamps,
     :from_timestamp, :to_timestamp, :only_filter, :exclude_filter,
-    :all, :maximum_pages, :threads_count, :logger, :reset, :keep, :rewrite
+    :all, :maximum_pages, :threads_count, :logger, :reset, :keep, :rewrite,
+    :snapshot_at

   def initialize params
     validate_params(params)
-    @base_url = params[:base_url]
+    @base_url = params[:base_url]&.tidy_bytes
     @exact_url = params[:exact_url]
     if params[:directory]
-      @directory = File.expand_path(params[:directory])
+      sanitized_dir = params[:directory].tidy_bytes
+      @directory = File.expand_path(sanitized_dir)
     else
       @directory = nil
     end
@@ -153,18 +171,32 @@ class WaybackMachineDownloader
     @connection_pool = ConnectionPool.new(CONNECTION_POOL_SIZE)
     @db_mutex = Mutex.new
     @rewrite = params[:rewrite] || false
+    @recursive_subdomains = params[:recursive_subdomains] || false
+    @subdomain_depth = params[:subdomain_depth] || 1
+    @snapshot_at = params[:snapshot_at] ? params[:snapshot_at].to_i : nil
+    @max_retries = params[:max_retries] ? params[:max_retries].to_i : MAX_RETRIES
+
+    # URL for rejecting invalid/unencoded wayback urls
+    @url_regexp = /^(([A-Za-z][A-Za-z0-9+.-]*):((\/\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=]))+)(:([0-9]*))?)(((\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))|((\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)?))|((((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))(\?((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?(\#((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?)$/
+
     handle_reset
   end

   def backup_name
     url_to_process = @base_url.end_with?('/*') ? @base_url.chomp('/*') : @base_url
-
-    if url_to_process.include? '//'
-      url_to_process.split('/')[2]
-    else
-      url_to_process
-    end
+    raw = if url_to_process.include?('//')
+      url_to_process.split('/')[2]
+    else
+      url_to_process
+    end
+
+    # sanitize for Windows (and safe cross-platform) to avoid ENOTDIR on mkdir (colon in host:port)
+    if Gem.win_platform?
+      raw = raw.gsub(/[:*?"<>|]/, '_')
+      raw = raw.gsub(/[ .]+\z/, '')
+    end
+    raw = 'site' if raw.nil? || raw.empty?
+    raw
   end

   def backup_path
@@ -173,7 +205,8 @@ class WaybackMachineDownloader
       @directory
     else
       # ensure the default path is absolute and normalized
-      File.expand_path(File.join('websites', backup_name))
+      cwd = Dir.pwd
+      File.expand_path(File.join(cwd, 'websites', backup_name))
     end
   end
@@ -196,7 +229,7 @@ class WaybackMachineDownloader
   def match_only_filter file_url
     if @only_filter
-      only_filter_regex = @only_filter.to_regex
+      only_filter_regex = @only_filter.to_regex(detect: true)
       if only_filter_regex
         only_filter_regex =~ file_url
       else
@@ -209,7 +242,7 @@ class WaybackMachineDownloader
   def match_exclude_filter file_url
     if @exclude_filter
-      exclude_filter_regex = @exclude_filter.to_regex
+      exclude_filter_regex = @exclude_filter.to_regex(detect: true)
       if exclude_filter_regex
         exclude_filter_regex =~ file_url
       else
@@ -257,53 +290,58 @@ class WaybackMachineDownloader
     page_index = 0
     batch_size = [@threads_count, 5].min
     continue_fetching = true

+    fetch_pool = Concurrent::FixedThreadPool.new([@threads_count, 1].max)
+    begin
       while continue_fetching && page_index < @maximum_pages
         # Determine the range of pages to fetch in this batch
         end_index = [page_index + batch_size, @maximum_pages].min
         current_batch = (page_index...end_index).to_a

         # Create futures for concurrent API calls
         futures = current_batch.map do |page|
-          Concurrent::Future.execute do
+          Concurrent::Future.execute(executor: fetch_pool) do
             result = nil
             @connection_pool.with_connection do |connection|
               result = get_raw_list_from_api("#{@base_url}/*", page, connection)
             end
             result ||= []
             [page, result]
           end
         end

         results = []
         futures.each do |future|
           begin
             results << future.value
           rescue => e
             puts "\nError fetching page #{future}: #{e.message}"
           end
         end

         # Sort results by page number to maintain order
         results.sort_by! { |page, _| page }

         # Process results and check for empty pages
         results.each do |page, result|
           if result.nil? || result.empty?
             continue_fetching = false
             break
           else
             mutex.synchronize do
               snapshot_list_to_consider.concat(result)
               print "."
             end
           end
         end

         page_index = end_index
         sleep(RATE_LIMIT) if continue_fetching
       end
+    ensure
+      fetch_pool.shutdown
+      fetch_pool.wait_for_termination
+    end
   end
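
The substantive change in this hunk is that futures now run on a bounded executor instead of concurrent-ruby's global thread pool. The same pattern in isolation, as a sketch:

```ruby
require 'concurrent-ruby'

fetch_pool = Concurrent::FixedThreadPool.new(4)
begin
  futures = (0...8).map do |page|
    # each future is scheduled on fetch_pool, so at most 4 run at once
    Concurrent::Future.execute(executor: fetch_pool) { [page, page * page] }
  end
  futures.each { |f| p f.value } # value blocks until that future resolves
ensure
  fetch_pool.shutdown
  fetch_pool.wait_for_termination
end
```
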
@@ -322,26 +360,61 @@ class WaybackMachineDownloader
     snapshot_list_to_consider
   end

+  # Get a composite snapshot file list for a specific timestamp
+  def get_composite_snapshot_file_list(target_timestamp)
+    file_versions = {}
+    get_all_snapshots_to_consider.each do |file_timestamp, file_url|
+      next unless file_url.include?('/')
+      next if file_timestamp.to_i > target_timestamp
+      raw_tail = file_url.split('/')[3..-1]&.join('/')
+      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      next if file_id.nil?
+      next if match_exclude_filter(file_url)
+      next unless match_only_filter(file_url)
+      if !file_versions[file_id] || file_versions[file_id][:timestamp].to_i < file_timestamp.to_i
+        file_versions[file_id] = { file_url: file_url, timestamp: file_timestamp, file_id: file_id }
+      end
+    end
+    file_versions.values
+  end
+
+  # Returns a list of files for the composite snapshot
+  def get_file_list_composite_snapshot(target_timestamp)
+    file_list = get_composite_snapshot_file_list(target_timestamp)
+    file_list = file_list.sort_by { |_, v| v[:timestamp].to_s }.reverse
+    file_list.map do |file_remote_info|
+      file_remote_info[1][:file_id] = file_remote_info[0]
+      file_remote_info[1]
+    end
+  end
+
   def get_file_list_curated
     file_list_curated = Hash.new
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
-      file_id = file_url.split('/')[3..-1].join('/')
-      file_id = CGI::unescape file_id
-      file_id = file_id.tidy_bytes unless file_id == ""
+      raw_tail = file_url.split('/')[3..-1]&.join('/')
+      file_id = sanitize_and_prepare_id(raw_tail, file_url)
       if file_id.nil?
         puts "Malformed file url, ignoring: #{file_url}"
+        next
+      end
+      if file_id.include?('<') || file_id.include?('>')
+        puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
       else
         if match_exclude_filter(file_url)
           puts "File url matches exclude filter, ignoring: #{file_url}"
-        elsif not match_only_filter(file_url)
+        elsif !match_only_filter(file_url)
           puts "File url doesn't match only filter, ignoring: #{file_url}"
         elsif file_list_curated[file_id]
           unless file_list_curated[file_id][:timestamp] > file_timestamp
-            file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
+            file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
           end
         else
-          file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
+          file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
         end
       end
     end
@@ -352,21 +425,32 @@ class WaybackMachineDownloader
     file_list_curated = Hash.new
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
-      file_id = file_url.split('/')[3..-1].join('/')
-      file_id_and_timestamp = [file_timestamp, file_id].join('/')
-      file_id_and_timestamp = CGI::unescape file_id_and_timestamp
-      file_id_and_timestamp = file_id_and_timestamp.tidy_bytes unless file_id_and_timestamp == ""
+      raw_tail = file_url.split('/')[3..-1]&.join('/')
+      file_id = sanitize_and_prepare_id(raw_tail, file_url)
       if file_id.nil?
         puts "Malformed file url, ignoring: #{file_url}"
+        next
+      end
+      file_id_and_timestamp_raw = [file_timestamp, file_id].join('/')
+      file_id_and_timestamp = sanitize_and_prepare_id(file_id_and_timestamp_raw, file_url)
+      if file_id_and_timestamp.nil?
+        puts "Malformed file id/timestamp combo, ignoring: #{file_url}"
+        next
+      end
+      if file_id_and_timestamp.include?('<') || file_id_and_timestamp.include?('>')
+        puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
       else
         if match_exclude_filter(file_url)
           puts "File url matches exclude filter, ignoring: #{file_url}"
-        elsif not match_only_filter(file_url)
+        elsif !match_only_filter(file_url)
           puts "File url doesn't match only filter, ignoring: #{file_url}"
         elsif file_list_curated[file_id_and_timestamp]
-          puts "Duplicate file and timestamp combo, ignoring: #{file_id}" if @verbose
+          # duplicate combo, ignore silently (verbose flag not shown here)
        else
-          file_list_curated[file_id_and_timestamp] = {file_url: file_url, timestamp: file_timestamp}
+          file_list_curated[file_id_and_timestamp] = { file_url: file_url, timestamp: file_timestamp }
         end
       end
     end
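
The composite-snapshot selection above keeps, per file id, the newest capture that is not after the target timestamp. The rule in miniature, with made-up sample captures:

```ruby
target = 20130101000000
captures = [
  ["20120101000000", "index.html"],
  ["20121231235959", "index.html"],
  ["20140101000000", "index.html"], # newer than the target: skipped
]

best = {}
captures.each do |ts, id|
  next if ts.to_i > target
  best[id] = ts if !best[id] || best[id].to_i < ts.to_i
end

p best # => {"index.html"=>"20121231235959"}
```
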
@@ -376,7 +460,9 @@ class WaybackMachineDownloader
   def get_file_list_by_timestamp
-    if @all_timestamps
+    if @snapshot_at
+      @file_list_by_snapshot_at ||= get_composite_snapshot_file_list(@snapshot_at)
+    elsif @all_timestamps
       file_list_curated = get_file_list_all_timestamps
       file_list_curated.map do |file_remote_info|
         file_remote_info[1][:file_id] = file_remote_info[0]
@@ -431,6 +517,39 @@ class WaybackMachineDownloader
     end
   end

+  def processing_files(pool, files_to_process)
+    files_to_process.each do |file_remote_info|
+      pool.post do
+        download_success = false
+        begin
+          @connection_pool.with_connection do |connection|
+            result_message = download_file(file_remote_info, connection)
+            # assume download success if the result message contains ' -> '
+            if result_message && result_message.include?(' -> ')
+              download_success = true
+            end
+            @download_mutex.synchronize do
+              @processed_file_count += 1
+              # adjust progress message to reflect remaining files
+              progress_message = result_message.sub(/\(#{@processed_file_count}\/\d+\)/, "(#{@processed_file_count}/#{@total_to_download})") if result_message
+              puts progress_message if progress_message
+            end
+          end
+          # append to DB only after a successful download, outside the connection block
+          if download_success
+            append_to_db(file_remote_info[:file_id])
+          end
+        rescue => e
+          @logger.error("Error processing file #{file_remote_info[:file_url]}: #{e.message}")
+          @download_mutex.synchronize do
+            @processed_file_count += 1
+          end
+        end
+        sleep(RATE_LIMIT)
+      end
+    end
+  end
+
   def download_files
     start_time = Time.now
     puts "Downloading #{@base_url} to #{backup_path} from Wayback Machine archives."
@@ -477,42 +596,23 @@ class WaybackMachineDownloader
     thread_count = [@threads_count, CONNECTION_POOL_SIZE].min
     pool = Concurrent::FixedThreadPool.new(thread_count)

-    files_to_process.each do |file_remote_info|
-      pool.post do
-        download_success = false
-        begin
-          @connection_pool.with_connection do |connection|
-            result_message = download_file(file_remote_info, connection)
-            # assume download success if the result message contains ' -> '
-            if result_message && result_message.include?(' -> ')
-              download_success = true
-            end
-            @download_mutex.synchronize do
-              @processed_file_count += 1
-              # adjust progress message to reflect remaining files
-              progress_message = result_message.sub(/\(#{@processed_file_count}\/\d+\)/, "(#{@processed_file_count}/#{@total_to_download})") if result_message
-              puts progress_message if progress_message
-            end
-          end
-          # append to DB only after a successful download, outside the connection block
-          if download_success
-            append_to_db(file_remote_info[:file_id])
-          end
-        rescue => e
-          @logger.error("Error processing file #{file_remote_info[:file_url]}: #{e.message}")
-          @download_mutex.synchronize do
-            @processed_file_count += 1
-          end
-        end
-        sleep(RATE_LIMIT)
-      end
-    end
+    processing_files(pool, files_to_process)

     pool.shutdown
     pool.wait_for_termination

     end_time = Time.now
     puts "\nDownload finished in #{(end_time - start_time).round(2)}s."
+
+    # process subdomains if enabled
+    if @recursive_subdomains
+      subdomain_start_time = Time.now
+      process_subdomains
+      subdomain_end_time = Time.now
+      subdomain_time = (subdomain_end_time - subdomain_start_time).round(2)
+      puts "Subdomain processing finished in #{subdomain_time}s."
+    end
+
     puts "Results saved in #{backup_path}"
     cleanup
   end
@@ -556,64 +656,13 @@ class WaybackMachineDownloader
     end

     # URLs in HTML attributes
-    content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
-      prefix, url, suffix = $1, $2, $3
-      if url.start_with?('http')
-        begin
-          uri = URI.parse(url)
-          path = uri.path
-          path = path[1..-1] if path.start_with?('/')
-          "#{prefix}#{path}#{suffix}"
-        rescue
-          "#{prefix}#{url}#{suffix}"
-        end
-      elsif url.start_with?('/')
-        "#{prefix}./#{url[1..-1]}#{suffix}"
-      else
-        "#{prefix}#{url}#{suffix}"
-      end
-    end
+    content = rewrite_html_attr_urls(content)

     # URLs in CSS
-    content.gsub!(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"'\)]+)["']?\s*\)/i) do
-      url = $1
-      if url.start_with?('http')
-        begin
-          uri = URI.parse(url)
-          path = uri.path
-          path = path[1..-1] if path.start_with?('/')
-          "url(\"#{path}\")"
-        rescue
-          "url(\"#{url}\")"
-        end
-      elsif url.start_with?('/')
-        "url(\"./#{url[1..-1]}\")"
-      else
-        "url(\"#{url}\")"
-      end
-    end
+    content = rewrite_css_urls(content)

     # URLs in JavaScript
-    content.gsub!(/(["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
-      quote_start, url, quote_end = $1, $2, $3
-      if url.start_with?('http')
-        begin
-          uri = URI.parse(url)
-          path = uri.path
-          path = path[1..-1] if path.start_with?('/')
-          "#{quote_start}#{path}#{quote_end}"
-        rescue
-          "#{quote_start}#{url}#{quote_end}"
-        end
-      elsif url.start_with?('/')
-        "#{quote_start}./#{url[1..-1]}#{quote_end}"
-      else
-        "#{quote_start}#{url}#{quote_end}"
-      end
-    end
+    content = rewrite_js_urls(content)

     # for URLs in HTML attributes that start with a single slash
     content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])\/([^"'\/][^"']*)(["'])/i) do
@@ -709,7 +758,22 @@ class WaybackMachineDownloader
   end

   def file_list_by_timestamp
-    @file_list_by_timestamp ||= get_file_list_by_timestamp
+    if @snapshot_at
+      @file_list_by_snapshot_at ||= get_composite_snapshot_file_list(@snapshot_at)
+    elsif @all_timestamps
+      file_list_curated = get_file_list_all_timestamps
+      file_list_curated.map do |file_remote_info|
+        file_remote_info[1][:file_id] = file_remote_info[0]
+        file_remote_info[1]
+      end
+    else
+      file_list_curated = get_file_list_curated
+      file_list_curated = file_list_curated.sort_by { |_, v| v[:timestamp].to_s }.reverse
+      file_list_curated.map do |file_remote_info|
+        file_remote_info[1][:file_id] = file_remote_info[0]
+        file_remote_info[1]
+      end
+    end
   end

   private
@@ -727,6 +791,86 @@ class WaybackMachineDownloader
     end
     logger
   end

+  # safely sanitize a file id (or id+timestamp)
+  def sanitize_and_prepare_id(raw, file_url)
+    return nil if raw.nil?
+    return "" if raw.empty?
+
+    original = raw.dup
+    begin
+      # work on a binary copy to avoid premature encoding errors
+      raw = raw.dup.force_encoding(Encoding::BINARY)
+
+      # percent-decode (repeat until stable in case of double-encoding)
+      loop do
+        decoded = raw.gsub(/%([0-9A-Fa-f]{2})/) { [$1].pack('H2') }
+        break if decoded == raw
+        raw = decoded
+      end
+
+      # try tidy_bytes
+      begin
+        raw = raw.tidy_bytes
+      rescue StandardError
+        # fallback: scrub to UTF-8
+        raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
+      end
+
+      # ensure UTF-8 and scrub again
+      unless raw.encoding == Encoding::UTF_8 && raw.valid_encoding?
+        raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
+      end
+
+      # strip HTML/comment artifacts & control chars
+      raw.gsub!(/<!--+/, '')
+      raw.gsub!(/[\x00-\x1F]/, '')
+
+      # split query; hash it for stable short name
+      path_part, query_part = raw.split('?', 2)
+      if query_part && !query_part.empty?
+        q_digest = Digest::SHA256.hexdigest(query_part)[0, 12]
+        if path_part.include?('.')
+          pre, _sep, post = path_part.rpartition('.')
+          path_part = "#{pre}__q#{q_digest}.#{post}"
+        else
+          path_part = "#{path_part}__q#{q_digest}"
+        end
+      end
+      raw = path_part
+
+      # collapse slashes & trim leading slash
+      raw.gsub!(%r{/+}, '/')
+      raw.sub!(%r{\A/}, '')
+
+      # segment-wise sanitation
+      raw = raw.split('/').map do |segment|
+        seg = segment.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
+        seg = seg.gsub(/[:*?"<>|\\]/) { |c| "%#{c.ord.to_s(16).upcase}" }
+        seg = seg.gsub(/[ .]+\z/, '') if Gem.win_platform?
+        seg.empty? ? '_' : seg
+      end.join('/')
+
+      # remove any remaining angle brackets
+      raw.tr!('<>', '')
+
+      # final fallback if empty
+      raw = "file__#{Digest::SHA1.hexdigest(original)[0,10]}" if raw.nil? || raw.empty?
+      raw
+    rescue => e
+      @logger&.warn("Failed to sanitize file id from #{file_url}: #{e.message}")
+      # deterministic fallback: never return nil, so the caller won't mark it malformed
+      "file__#{Digest::SHA1.hexdigest(original)[0,10]}"
+    end
+  end
+
+  # wrap URL in parentheses if it contains characters that commonly break unquoted
+  # Windows CMD usage (e.g., &). This is only for display; the user still must quote
+  # when invoking manually.
+  def safe_display_url(url)
+    return url unless url && url.match?(/[&]/)
+    "(#{url})"
+  end
+
   def download_with_retry(file_path, file_url, file_timestamp, connection, redirect_count = 0)
     retries = 0
@@ -740,6 +884,12 @@ class WaybackMachineDownloader
       # Escape square brackets because they are not valid in URI()
       wayback_url = wayback_url.gsub('[', '%5B').gsub(']', '%5D')

+      # reject an invalid/unencoded wayback_url, behaving as if the resource weren't found
+      if not @url_regexp.match?(wayback_url)
+        @logger.warn("Skipped #{file_url}: invalid URL")
+        return :skipped_not_found
+      end
+
       request = Net::HTTP::Get.new(URI(wayback_url))
       request["Connection"] = "keep-alive"
       request["User-Agent"] = "WaybackMachineDownloader/#{VERSION}"
@@ -802,9 +952,9 @@ class WaybackMachineDownloader
       end

     rescue StandardError => e
-      if retries < MAX_RETRIES
+      if retries < @max_retries
         retries += 1
-        @logger.warn("Retry #{retries}/#{MAX_RETRIES} for #{file_url}: #{e.message}")
+        @logger.warn("Retry #{retries}/#{@max_retries} for #{file_url}: #{e.message}")
         sleep(RETRY_DELAY * retries)
         retry
       else
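
To see what the query-hashing step of `sanitize_and_prepare_id` buys, here is the same idea in a standalone sketch (input URL hypothetical; the 12-hex digest varies with the query):

```ruby
require 'digest'

raw = "search/results.php?q=ruby&page=2"
path, query = raw.split('?', 2)
digest = Digest::SHA256.hexdigest(query)[0, 12]
pre, _sep, ext = path.rpartition('.')
puts "#{pre}__q#{digest}.#{ext}"
# prints something like "search/results__q1a2b3c4d5e6f.php",
# so distinct query strings land in distinct filesystem-safe files
```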

lib/wayback_machine_downloader/archive_api.rb

@@ -25,7 +25,7 @@ module ArchiveAPI
       # Check if the response contains the header ["timestamp", "original"]
       json.shift if json.first == ["timestamp", "original"]
       json
-    rescue JSON::ParserError, StandardError => e
+    rescue JSON::ParserError => e
       warn "Failed to fetch data from API: #{e.message}"
       []
     end

lib/wayback_machine_downloader/subdom_processor.rb

@@ -0,0 +1,238 @@
# frozen_string_literal: true

module SubdomainProcessor
  def process_subdomains
    return unless @recursive_subdomains

    puts "Starting subdomain processing..."

    # extract base domain from the URL for comparison
    base_domain = extract_base_domain(@base_url)
    @processed_domains = Set.new([base_domain])
    @subdomain_queue = Queue.new

    # scan downloaded files for subdomain links
    initial_files = Dir.glob(File.join(backup_path, "**/*.{html,htm,css,js}"))
    puts "Scanning #{initial_files.size} downloaded files for subdomain links..."

    subdomains_found = scan_files_for_subdomains(initial_files, base_domain)

    if subdomains_found.empty?
      puts "No subdomains found in downloaded content."
      return
    end

    puts "Found #{subdomains_found.size} subdomains to process: #{subdomains_found.join(', ')}"

    # add found subdomains to the queue
    subdomains_found.each do |subdomain|
      full_domain = "#{subdomain}.#{base_domain}"
      @subdomain_queue << "https://#{full_domain}/"
    end

    # process the subdomain queue
    download_subdomains(base_domain)

    # after all downloads, rewrite all URLs to make local references
    rewrite_subdomain_links(base_domain) if @rewrite
  end

  private

  def extract_base_domain(url)
    uri = URI.parse(url.gsub(/^https?:\/\//, '').split('/').first) rescue nil
    return nil unless uri

    host = uri.host || uri.path.split('/').first
    host = host.downcase

    # extract the base domain (e.g., "example.com" from "sub.example.com")
    parts = host.split('.')
    return host if parts.size <= 2

    # for domains like co.uk, we want to keep the last 3 parts
    if parts[-2].length <= 3 && parts[-1].length <= 3 && parts.size > 2
      parts.last(3).join('.')
    else
      parts.last(2).join('.')
    end
  end

  def scan_files_for_subdomains(files, base_domain)
    return [] unless base_domain

    subdomains = Set.new

    files.each do |file_path|
      next unless File.exist?(file_path)

      begin
        content = File.read(file_path)

        # extract URLs from HTML href/src attributes
        content.scan(/(?:href|src|action|data-src)=["']https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
          subdomain = match[0].downcase
          next if subdomain == 'www' # skip www subdomain
          subdomains.add(subdomain)
        end

        # extract URLs from CSS
        content.scan(/url\(["']?https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
          subdomain = match[0].downcase
          next if subdomain == 'www' # skip www subdomain
          subdomains.add(subdomain)
        end

        # extract URLs from JavaScript strings
        content.scan(/["']https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
          subdomain = match[0].downcase
          next if subdomain == 'www' # skip www subdomain
          subdomains.add(subdomain)
        end
      rescue => e
        puts "Error scanning file #{file_path}: #{e.message}"
      end
    end

    subdomains.to_a
  end

  def download_subdomains(base_domain)
    puts "Starting subdomain downloads..."
    depth = 0
    max_depth = @subdomain_depth || 1

    while depth < max_depth && !@subdomain_queue.empty?
      current_batch = []

      # get all subdomains at current depth
      while !@subdomain_queue.empty?
        current_batch << @subdomain_queue.pop
      end

      puts "Processing #{current_batch.size} subdomains at depth #{depth + 1}..."

      # download each subdomain
      current_batch.each do |subdomain_url|
        download_subdomain(subdomain_url, base_domain)
      end

      # if we need to go deeper, scan the newly downloaded files
      if depth + 1 < max_depth
        # get all files in the subdomains directory
        new_files = Dir.glob(File.join(backup_path, "subdomains", "**/*.{html,htm,css,js}"))
        new_subdomains = scan_files_for_subdomains(new_files, base_domain)

        # filter out already processed subdomains
        new_subdomains.each do |subdomain|
          full_domain = "#{subdomain}.#{base_domain}"
          unless @processed_domains.include?(full_domain)
            @processed_domains.add(full_domain)
            @subdomain_queue << "https://#{full_domain}/"
          end
        end

        puts "Found #{@subdomain_queue.size} new subdomains at depth #{depth + 1}" if !@subdomain_queue.empty?
      end

      depth += 1
    end
  end

  def download_subdomain(subdomain_url, base_domain)
    begin
      uri = URI.parse(subdomain_url)
      subdomain_host = uri.host

      # skip if already processed
      if @processed_domains.include?(subdomain_host)
        puts "Skipping already processed subdomain: #{subdomain_host}"
        return
      end

      @processed_domains.add(subdomain_host)
      puts "Downloading subdomain: #{subdomain_url}"

      # create the directory for this subdomain
      subdomain_dir = File.join(backup_path, "subdomains", subdomain_host)
      FileUtils.mkdir_p(subdomain_dir)

      # create subdomain downloader with appropriate options
      subdomain_options = {
        base_url: subdomain_url,
        directory: subdomain_dir,
        from_timestamp: @from_timestamp,
        to_timestamp: @to_timestamp,
        all: @all,
        threads_count: @threads_count,
        maximum_pages: [@maximum_pages / 2, 10].max,
        rewrite: @rewrite,
        # don't recursively process subdomains from here
        recursive_subdomains: false
      }

      # download the subdomain content
      subdomain_downloader = WaybackMachineDownloader.new(subdomain_options)
      subdomain_downloader.download_files

      puts "Completed download of subdomain: #{subdomain_host}"
    rescue => e
      puts "Error downloading subdomain #{subdomain_url}: #{e.message}"
    end
  end

  def rewrite_subdomain_links(base_domain)
    puts "Rewriting all files to use local subdomain references..."

    all_files = Dir.glob(File.join(backup_path, "**/*.{html,htm,css,js}"))
    subdomains = @processed_domains.reject { |domain| domain == base_domain }

    puts "Found #{all_files.size} files to check for rewriting"
    puts "Will rewrite links for subdomains: #{subdomains.join(', ')}"

    rewritten_count = 0

    all_files.each do |file_path|
      next unless File.exist?(file_path)

      begin
        content = File.read(file_path)
        original_content = content.dup

        # replace subdomain URLs with local paths
        subdomains.each do |subdomain_host|
          # for HTML attributes (href, src, etc.)
          content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/#{Regexp.escape(subdomain_host)}([^"']*)(["'])/i) do
            prefix, path, suffix = $1, $2, $3
            path = "/index.html" if path.empty? || path == "/"
            "#{prefix}../subdomains/#{subdomain_host}#{path}#{suffix}"
          end

          # for CSS url()
          content.gsub!(/url\(\s*["']?https?:\/\/#{Regexp.escape(subdomain_host)}([^"'\)]*?)["']?\s*\)/i) do
            path = $1
            path = "/index.html" if path.empty? || path == "/"
            "url(\"../subdomains/#{subdomain_host}#{path}\")"
          end

          # for JavaScript strings
          content.gsub!(/(["'])https?:\/\/#{Regexp.escape(subdomain_host)}([^"']*)(["'])/i) do
            quote_start, path, quote_end = $1, $2, $3
            path = "/index.html" if path.empty? || path == "/"
            "#{quote_start}../subdomains/#{subdomain_host}#{path}#{quote_end}"
          end
        end

        # save if modified
        if content != original_content
          File.write(file_path, content)
          rewritten_count += 1
        end
      rescue => e
        puts "Error rewriting file #{file_path}: #{e.message}"
      end
    end

    puts "Rewrote links in #{rewritten_count} files"
  end
end
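
The base-domain heuristic in `extract_base_domain` keeps three labels when the last two are short enough to look like a ccTLD pair. The rule re-stated as a standalone sketch, with hypothetical hosts:

```ruby
def base_domain(host)
  parts = host.downcase.split('.')
  return host if parts.size <= 2
  # two short trailing labels suggest a ccTLD pair such as co.uk
  if parts[-2].length <= 3 && parts[-1].length <= 3
    parts.last(3).join('.')
  else
    parts.last(2).join('.')
  end
end

p base_domain("blog.example.com")   # => "example.com"
p base_domain("shop.example.co.uk") # => "example.co.uk"
```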

lib/wayback_machine_downloader/tidy_bytes.rb

@@ -1,73 +1,74 @@
 # frozen_string_literal: true

+# essentially, this is for converting a string with a potentially
+# broken or unknown encoding into a valid UTF-8 string
+# @todo: consider using charlock_holmes for this in the future
 module TidyBytes
-  # precomputing CP1252 to UTF-8 mappings for bytes 128-159
-  CP1252_MAP = (128..159).map do |byte|
-    case byte
-    when 128 then [226, 130, 172] # EURO SIGN
-    when 130 then [226, 128, 154] # SINGLE LOW-9 QUOTATION MARK
-    when 131 then [198, 146] # LATIN SMALL LETTER F WITH HOOK
-    when 132 then [226, 128, 158] # DOUBLE LOW-9 QUOTATION MARK
-    when 133 then [226, 128, 166] # HORIZONTAL ELLIPSIS
-    when 134 then [226, 128, 160] # DAGGER
-    when 135 then [226, 128, 161] # DOUBLE DAGGER
-    when 136 then [203, 134] # MODIFIER LETTER CIRCUMFLEX ACCENT
-    when 137 then [226, 128, 176] # PER MILLE SIGN
-    when 138 then [197, 160] # LATIN CAPITAL LETTER S WITH CARON
-    when 139 then [226, 128, 185] # SINGLE LEFT-POINTING ANGLE QUOTATION MARK
-    when 140 then [197, 146] # LATIN CAPITAL LIGATURE OE
-    when 142 then [197, 189] # LATIN CAPITAL LETTER Z WITH CARON
-    when 145 then [226, 128, 152] # LEFT SINGLE QUOTATION MARK
-    when 146 then [226, 128, 153] # RIGHT SINGLE QUOTATION MARK
-    when 147 then [226, 128, 156] # LEFT DOUBLE QUOTATION MARK
-    when 148 then [226, 128, 157] # RIGHT DOUBLE QUOTATION MARK
-    when 149 then [226, 128, 162] # BULLET
-    when 150 then [226, 128, 147] # EN DASH
-    when 151 then [226, 128, 148] # EM DASH
-    when 152 then [203, 156] # SMALL TILDE
-    when 153 then [226, 132, 162] # TRADE MARK SIGN
-    when 154 then [197, 161] # LATIN SMALL LETTER S WITH CARON
-    when 155 then [226, 128, 186] # SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
-    when 156 then [197, 147] # LATIN SMALL LIGATURE OE
-    when 158 then [197, 190] # LATIN SMALL LETTER Z WITH CARON
-    when 159 then [197, 184] # LATIN SMALL LETTER Y WITH DIAERESIS
-    end
-  end.freeze
+  UNICODE_REPLACEMENT_CHARACTER = "�"

-  # precomputing all possible byte conversions
-  CP1252_TO_UTF8 = Array.new(256) do |b|
-    if (128..159).cover?(b)
-      CP1252_MAP[b - 128]&.pack('C*')
-    elsif b < 128
-      b.chr
-    else
-      b < 192 ? [194, b].pack('C*') : [195, b - 64].pack('C*')
-    end
-  end.freeze
+  # common encodings to try for best multilingual compatibility
+  COMMON_ENCODINGS = [
+    Encoding::UTF_8,
+    Encoding::Windows_1251, # Cyrillic/Russian legacy
+    Encoding::GB18030,      # Simplified Chinese
+    Encoding::Shift_JIS,    # Japanese
+    Encoding::EUC_KR,       # Korean
+    Encoding::ISO_8859_1,   # Western European
+    Encoding::Windows_1252  # Western European/Latin1 superset
+  ].select { |enc| Encoding.name_list.include?(enc.name) }
+
+  # returns true if the string appears to be binary (has null bytes)
+  def binary_data?
+    self.include?("\x00".b)
+  end
+
+  # attempts to return a valid UTF-8 version of the string
+  def tidy_bytes
+    return self if self.encoding == Encoding::UTF_8 && self.valid_encoding?
+    return self.dup.force_encoding("BINARY") if binary_data?
+
+    str = self.dup
+    COMMON_ENCODINGS.each do |enc|
+      str.force_encoding(enc)
+      begin
+        utf8 = str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
+        return utf8 if utf8.valid_encoding? && !utf8.include?(UNICODE_REPLACEMENT_CHARACTER)
+      rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
+        # try next encoding
+      end
+    end
+
+    # if no clean conversion found, try again but accept replacement characters
+    str = self.dup
+    COMMON_ENCODINGS.each do |enc|
+      str.force_encoding(enc)
+      begin
+        utf8 = str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
+        return utf8 if utf8.valid_encoding?
+      rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
+        # try next encoding
+      end
+    end
+
+    # fallback: replace all invalid/undefined bytes
+    str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
+  end
+
+  def tidy_bytes!
+    replace(self.tidy_bytes)
+  end

   def self.included(base)
-    base.class_eval do
-      def tidy_bytes(force = false)
-        return nil if empty?
-        if force
-          buffer = String.new(capacity: bytesize)
-          each_byte { |b| buffer << CP1252_TO_UTF8[b] }
-          return buffer.force_encoding(Encoding::UTF_8)
-        end
-        begin
-          encode('UTF-8')
-        rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
-          buffer = String.new(capacity: bytesize)
-          scrub { |b| CP1252_TO_UTF8[b.ord] }
-        end
-      end
+    base.send(:include, InstanceMethods)
+  end

-      def tidy_bytes!(force = false)
-        result = tidy_bytes(force)
-        result ? replace(result) : self
-      end
+  module InstanceMethods
+    def tidy_bytes
+      TidyBytes.instance_method(:tidy_bytes).bind(self).call
+    end
+
+    def tidy_bytes!
+      TidyBytes.instance_method(:tidy_bytes!).bind(self).call
     end
   end
 end
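
In practice the rewritten helper recovers legacy single-byte text by trial conversion; a hedged example, assuming the module is mixed into String as the gem does:

```ruby
legacy = "\xEF\xF0\xE8\xE2\xE5\xF2".b # "привет" in Windows-1251 bytes
puts legacy.tidy_bytes               # => "привет"
# UTF-8 is tried first but yields replacement characters,
# so Windows-1251 wins as the first clean conversion
```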

lib/wayback_machine_downloader/url_rewrite.rb

@@ -0,0 +1,74 @@
# frozen_string_literal: true

# URLs in HTML attributes
def rewrite_html_attr_urls(content)
  content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
    prefix, url, suffix = $1, $2, $3
    if url.start_with?('http')
      begin
        uri = URI.parse(url)
        path = uri.path
        path = path[1..-1] if path.start_with?('/')
        "#{prefix}#{path}#{suffix}"
      rescue
        "#{prefix}#{url}#{suffix}"
      end
    elsif url.start_with?('/')
      "#{prefix}./#{url[1..-1]}#{suffix}"
    else
      "#{prefix}#{url}#{suffix}"
    end
  end
  content
end

# URLs in CSS
def rewrite_css_urls(content)
  content.gsub!(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"'\)]+)["']?\s*\)/i) do
    url = $1
    if url.start_with?('http')
      begin
        uri = URI.parse(url)
        path = uri.path
        path = path[1..-1] if path.start_with?('/')
        "url(\"#{path}\")"
      rescue
        "url(\"#{url}\")"
      end
    elsif url.start_with?('/')
      "url(\"./#{url[1..-1]}\")"
    else
      "url(\"#{url}\")"
    end
  end
  content
end

# URLs in JavaScript
def rewrite_js_urls(content)
  content.gsub!(/(["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
    quote_start, url, quote_end = $1, $2, $3
    if url.start_with?('http')
      begin
        uri = URI.parse(url)
        path = uri.path
        path = path[1..-1] if path.start_with?('/')
        "#{quote_start}#{path}#{quote_end}"
      rescue
        "#{quote_start}#{url}#{quote_end}"
      end
    elsif url.start_with?('/')
      "#{quote_start}./#{url[1..-1]}#{quote_end}"
    else
      "#{quote_start}#{url}#{quote_end}"
    end
  end
  content
end
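
As a concrete check of the rewrite behavior, a small demo with a made-up archive URL (run with url_rewrite.rb loaded):

```ruby
require 'uri'

css = 'body { background: url("https://web.archive.org/web/20130101000000/http://example.com/img/bg.png"); }'
puts rewrite_css_urls(css.dup)
# => body { background: url("img/bg.png"); }
```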

wayback_machine_downloader_straw.gemspec

@@ -1,12 +1,12 @@
 Gem::Specification.new do |s|
   s.name = "wayback_machine_downloader_straw"
-  s.version = "2.3.10"
+  s.version = "2.4.4"
   s.executables << "wayback_machine_downloader"
   s.summary = "Download an entire website from the Wayback Machine."
   s.description = "Download complete websites from the Internet Archive's Wayback Machine. While the Wayback Machine (archive.org) excellently preserves web history, it lacks a built-in export functionality; this gem does just that, allowing you to download entire archived websites. (This is a significant rewrite of the original wayback_machine_downloader gem by hartator, with enhanced features and performance improvements.)"
   s.authors = ["strawberrymaster"]
   s.email = "strawberrymaster@vivaldi.net"
-  s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb"]
+  s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb", "lib/wayback_machine_downloader/subdom_processor.rb", "lib/wayback_machine_downloader/url_rewrite.rb"]
   s.homepage = "https://github.com/StrawberryMaster/wayback-machine-downloader"
   s.license = "MIT"
   s.required_ruby_version = ">= 3.4.3"