mirror of
https://github.com/StrawberryMaster/wayback-machine-downloader.git
synced 2025-12-29 16:16:06 +00:00
Compare commits
90 Commits
`.dockerignore` (new file, +5)

```diff
@@ -0,0 +1,5 @@
+*.md
+*.yml
+
+.github
+websites
```
`.env.example` (new file, +4)

```diff
@@ -0,0 +1,4 @@
+DB_HOST="db"
+DB_USER="root"
+DB_PASSWORD="example1234"
+DB_NAME="wayback"
```
`.gitignore` (vendored)

```diff
@@ -18,6 +18,11 @@ Gemfile.lock
 .ruby-version
 .rbenv*
 
+## ENV
+*.env*
+!.env*.example
+
+
 ## RCOV
 coverage.data
@@ -27,3 +32,7 @@ tmp
 *.rbc
 
 test.rb
+
+# Dev environment
+.vscode
+*.code-workspace
```
`Dockerfile`

```diff
@@ -1,7 +1,14 @@
-FROM ruby:2.3-alpine
+FROM ruby:3.4.5-alpine
+USER root
 WORKDIR /build
 
 COPY Gemfile /build/
+COPY *.gemspec /build/
+
+RUN bundle config set jobs "$(nproc)" \
+    && bundle install
 
 COPY . /build
 
-WORKDIR /
-ENTRYPOINT [ "/build/bin/wayback_machine_downloader" ]
+WORKDIR /build
+ENTRYPOINT [ "/build/bin/wayback_machine_downloader", "--directory", "/build/websites" ]
```
`README.md`

```diff
@@ -1,6 +1,5 @@
 # Wayback Machine Downloader
 
-
 [](https://rubygems.org/gems/wayback_machine_downloader_straw)
 
 This is a fork of the [Wayback Machine Downloader](https://github.com/hartator/wayback-machine-downloader). With this, you can download a website from the Internet Archive Wayback Machine.
 
@@ -19,6 +18,17 @@ Your files will save to `./websites/example.com/` with their original structure
 - Ruby 2.3+ ([download Ruby here](https://www.ruby-lang.org/en/downloads/))
 - Bundler gem (`gem install bundler`)
 
+### Quick install
+It took a while, but we have a gem for this! Install it with:
+```bash
+gem install wayback_machine_downloader_straw
+```
+To run most commands, just like in the original WMD, you can use:
+```bash
+wayback_machine_downloader https://example.com
+```
+**Note**: this gem may conflict with hartator's wayback_machine_downloader gem, so you may have to uninstall that one for this WMD fork to work. A good way to tell is when a command fails: it will list the gem version as 2.3.1 or earlier, while this WMD fork uses 2.3.2 or above.
+
```
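The version check in the note above can be sketched with `Gem::Version` from the Ruby standard library. This is an illustrative sketch only: the `installed` value below is a stand-in, and in practice you would substitute the version reported by `gem list wayback_machine_downloader`.

```ruby
# Hypothetical sketch: classify an installed gem version against 2.3.2,
# the threshold mentioned in the note above.
require 'rubygems'

installed = Gem::Version.new('2.3.1') # example value only; read yours from `gem list`
which_gem = installed >= Gem::Version.new('2.3.2') ? 'this fork' : "hartator's original"
puts which_gem
```

`Gem::Version` handles multi-segment versions correctly (e.g. `2.3.10 > 2.3.2`), which a plain string comparison would get wrong.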
```diff
 ### Step-by-step setup
 1. **Install Ruby**:
 ```bash
@@ -31,6 +41,11 @@ Your files will save to `./websites/example.com/` with their original structure
    bundle install
    ```
 
+   If you encounter an error like `cannot load such file -- concurrent-ruby`, manually install the missing gem:
+   ```bash
+   gem install concurrent-ruby
+   ```
+
 3. **Run it**:
    ```bash
    cd path/to/wayback-machine-downloader/bin
@@ -48,16 +63,50 @@ docker build -t wayback_machine_downloader .
 docker run -it --rm wayback_machine_downloader [options] URL
 ```
 
+or, an example without cloning the repo - fetching smallrockets.com up to the year 2013:
+
+```bash
+docker run -v .:/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
+```
+
+### 🐳 Using Docker Compose
+
+We can also use it with Docker Compose, which makes it easier to extend functionality (such as storing previous downloads in a database):
+```yaml
+# docker-compose.yml
+services:
+  wayback_machine_downloader:
+    build:
+      context: .
+    tty: true
+    image: wayback_machine_downloader:latest
+    container_name: wayback_machine_downloader
+    volumes:
+      - .:/build:rw
+      - ./websites:/build/websites:rw
+```
+#### Usage:
+Now you can build a Docker image named "wayback_machine_downloader" with the following command:
+```bash
+docker compose up -d --build
+```
+
+After that, you can run the existing container with the following command:
+```bash
+docker compose run --rm wayback_machine_downloader https://example.com [options]
+```
 
 ## ⚙️ Configuration
 There are a few constants that can be edited in the `wayback_machine_downloader.rb` file for your convenience. The default values may be conservative, so you can adjust them to your needs. They are:
 
 ```ruby
 DEFAULT_TIMEOUT = 30 # HTTP timeout (in seconds)
-MAX_RETRIES = 3 # Failed request retries
-RETRY_DELAY = 2 # Wait between retries
-RATE_LIMIT = 0.25 # Throttle between requests
-CONNECTION_POOL_SIZE = 10 # No. of simultaneous connections
-MEMORY_BUFFER_SIZE = 16384 # Size of download buffer
+MAX_RETRIES = 3 # Number of times to retry failed requests
+RETRY_DELAY = 2 # Wait time between retries (seconds)
+RATE_LIMIT = 0.25 # Throttle between requests (seconds)
+CONNECTION_POOL_SIZE = 10 # Maximum simultaneous connections
+MEMORY_BUFFER_SIZE = 16384 # Download buffer size (bytes)
+STATE_CDX_FILENAME = '.cdx.json' # Stores snapshot listing
+STATE_DB_FILENAME = '.downloaded.txt' # Tracks completed downloads
 ```
```
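As an illustration of how `MAX_RETRIES`/`RETRY_DELAY`-style constants are typically consumed, here is a minimal, self-contained retry loop. This is a sketch, not the gem's actual download code; `RETRY_DELAY` is set to 0 here so it runs instantly (the gem defaults to 2 seconds).

```ruby
MAX_RETRIES = 3
RETRY_DELAY = 0 # seconds; the real config uses 2

# Simulate a request that fails twice and then succeeds.
def fetch_with_retries
  attempts = 0
  begin
    attempts += 1
    raise 'transient error' if attempts < 3
    "ok after #{attempts} attempts"
  rescue
    if attempts < MAX_RETRIES
      sleep(RETRY_DELAY) # back off before retrying
      retry
    end
    raise # give up after MAX_RETRIES attempts
  end
end

puts fetch_with_retries
```

Raising `MAX_RETRIES` or `RETRY_DELAY` trades total run time for resilience against transient Wayback Machine errors; lowering `RATE_LIMIT` speeds things up but risks throttling.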
```diff
 ## 🛠️ Advanced usage
 
@@ -186,6 +235,29 @@ ruby wayback_machine_downloader https://example.com --list
 ```
 It will just display the files to be downloaded with their snapshot timestamps and urls. The output format is JSON. It won't download anything. It's useful for debugging or to connect to another application.
 
+### Job management
+The downloader automatically saves its progress (`.cdx.json` for the snapshot list, `.downloaded.txt` for completed files) in the output directory. If you run the same command again pointing to the same output directory, it will resume where it left off, skipping already downloaded files.
+
+> [!NOTE]
+> Automatic resumption can be affected by changing the URL, mode selection (like `--all-timestamps`), filtering selections, or other options. If you want to ensure a clean start, use the `--reset` option.
+
```
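The `.downloaded.txt` state file described above is a plain list of completed file ids, one per line. A minimal sketch of appending to it and reading it back on resume (file name as documented; the directory and ids below are placeholders):

```ruby
require 'fileutils'
require 'set'
require 'tmpdir'

dir = Dir.mktmpdir                      # stand-in for the output directory
db = File.join(dir, '.downloaded.txt')  # state file name used by the downloader

# Append completed ids, one per line, as each download finishes.
File.open(db, 'a') { |f| f.puts('index.html'); f.puts('assets/logo.png') }

# On resume, the file is read back into a set of ids to skip.
downloaded_ids = Set.new
File.foreach(db) { |line| downloaded_ids.add(line.strip) }
puts downloaded_ids.size
```

Because the file is append-only plain text, a crashed run loses at most the entry for the file being written, and `--reset` can simply delete it.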
```diff
+| Option | Description |
+|--------|-------------|
+| `--reset` | Delete state files (`.cdx.json`, `.downloaded.txt`) and restart the download from scratch. Does not delete already downloaded website files. |
+| `--keep` | Keep state files (`.cdx.json`, `.downloaded.txt`) even after a successful download. By default, these are deleted upon successful completion. |
+
+**Example** - Restart a download job from the beginning:
+```bash
+ruby wayback_machine_downloader https://example.com --reset
+```
+This is useful if you suspect the state files are corrupted or want to ensure a completely fresh download process without deleting the files you already have.
+
+**Example 2** - Keep state files after download:
+```bash
+ruby wayback_machine_downloader https://example.com --keep
+```
+This can be useful for debugging or if you plan to extend the download later with different parameters (e.g., adding a `--to` timestamp) while leveraging the existing snapshot list.
+
 ## 🤝 Contributing
 1. Fork the repository
 2. Create a feature branch
```
```diff
@@ -59,7 +59,27 @@ option_parser = OptionParser.new do |opts|
   end
 
   opts.on("-r", "--rewritten", "Downloads the rewritten Wayback Machine files instead of the original files") do |t|
-    options[:rewritten] = t
+    options[:rewritten] = true
   end
 
+  opts.on("--local", "Rewrite URLs to make them relative for local browsing") do |t|
+    options[:rewrite] = true
+  end
+
+  opts.on("--reset", "Delete state files (.cdx.json, .downloaded.txt) and restart the download from scratch") do |t|
+    options[:reset] = true
+  end
+
+  opts.on("--keep", "Keep state files (.cdx.json, .downloaded.txt) after a successful download") do |t|
+    options[:keep] = true
+  end
+
+  opts.on("--recursive-subdomains", "Recursively download content from subdomains") do |t|
+    options[:recursive_subdomains] = true
+  end
+
+  opts.on("--subdomain-depth DEPTH", Integer, "Maximum depth for subdomain recursion (default: 1)") do |t|
+    options[:subdomain_depth] = t
+  end
+
   opts.on("-v", "--version", "Display version") do |t|
```
`docker-compose.yml` (new file, +10)

```diff
@@ -0,0 +1,10 @@
+services:
+  wayback_machine_downloader:
+    build:
+      context: .
+    tty: true
+    image: wayback_machine_downloader:latest
+    container_name: wayback_machine_downloader
+    volumes:
+      - .:/build:rw
+      - ./websites:/build/websites:rw
```
`entrypoint.sh` (new file, +9)

```diff
@@ -0,0 +1,9 @@
+#!/bin/bash
+
+if [ "$ENVIRONMENT" == "development" ]; then
+    echo "Running in development mode. Starting rerun..."
+    exec rerun --dir /build --ignore "websites/*" -- /build/bin/wayback_machine_downloader "$@"
+else
+    echo "Not in development mode. Skipping rerun."
+    exec /build/bin/wayback_machine_downloader "$@"
+fi
```
```diff
@@ -9,9 +9,14 @@ require 'json'
 require 'time'
 require 'concurrent-ruby'
 require 'logger'
+require 'zlib'
+require 'stringio'
+require 'digest'
 require_relative 'wayback_machine_downloader/tidy_bytes'
 require_relative 'wayback_machine_downloader/to_regex'
 require_relative 'wayback_machine_downloader/archive_api'
+require_relative 'wayback_machine_downloader/subdom_processor'
+require_relative 'wayback_machine_downloader/url_rewrite'
 
 class ConnectionPool
   MAX_AGE = 300
```
```diff
@@ -110,24 +115,34 @@ end
 class WaybackMachineDownloader
 
   include ArchiveAPI
+  include SubdomainProcessor
 
-  VERSION = "2.3.3"
+  VERSION = "2.4.2"
   DEFAULT_TIMEOUT = 30
   MAX_RETRIES = 3
   RETRY_DELAY = 2
   RATE_LIMIT = 0.25 # Delay between requests in seconds
   CONNECTION_POOL_SIZE = 10
   MEMORY_BUFFER_SIZE = 16384 # 16KB chunks
+  STATE_CDX_FILENAME = ".cdx.json"
+  STATE_DB_FILENAME = ".downloaded.txt"
 
   attr_accessor :base_url, :exact_url, :directory, :all_timestamps,
     :from_timestamp, :to_timestamp, :only_filter, :exclude_filter,
-    :all, :maximum_pages, :threads_count, :logger
+    :all, :maximum_pages, :threads_count, :logger, :reset, :keep, :rewrite,
+    :snapshot_at
 
   def initialize params
     validate_params(params)
-    @base_url = params[:base_url]
+    @base_url = params[:base_url]&.tidy_bytes
     @exact_url = params[:exact_url]
-    @directory = params[:directory]
+    if params[:directory]
+      sanitized_dir = params[:directory].tidy_bytes
+      @directory = File.expand_path(sanitized_dir)
+    else
+      @directory = nil
+    end
     @all_timestamps = params[:all_timestamps]
     @from_timestamp = params[:from_timestamp].to_i
     @to_timestamp = params[:to_timestamp].to_i
```
```diff
@@ -137,35 +152,71 @@ class WaybackMachineDownloader
     @maximum_pages = params[:maximum_pages] ? params[:maximum_pages].to_i : 100
     @threads_count = [params[:threads_count].to_i, 1].max
     @rewritten = params[:rewritten]
+    @reset = params[:reset]
+    @keep = params[:keep]
     @timeout = params[:timeout] || DEFAULT_TIMEOUT
     @logger = setup_logger
     @failed_downloads = Concurrent::Array.new
     @connection_pool = ConnectionPool.new(CONNECTION_POOL_SIZE)
     @db_mutex = Mutex.new
+    @rewrite = params[:rewrite] || false
+    @recursive_subdomains = params[:recursive_subdomains] || false
+    @subdomain_depth = params[:subdomain_depth] || 1
+    @snapshot_at = params[:snapshot_at] ? params[:snapshot_at].to_i : nil
+
+    # URL for rejecting invalid/unencoded wayback urls
+    @url_regexp = /^(([A-Za-z][A-Za-z0-9+.-]*):((\/\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=]))+)(:([0-9]*))?)(((\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))|((\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)?))|((((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))(\?((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?(\#((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?)$/
+
+    handle_reset
   end
 
   def backup_name
-    if @base_url.include? '//'
-      @base_url.split('/')[2]
+    url_to_process = @base_url.end_with?('/*') ? @base_url.chomp('/*') : @base_url
+    raw = if url_to_process.include?('//')
+      url_to_process.split('/')[2]
     else
-      @base_url
+      url_to_process
     end
+
+    # sanitize for Windows (and safe cross-platform) to avoid ENOTDIR on mkdir (colon in host:port)
+    if Gem.win_platform?
+      raw = raw.gsub(/[:*?"<>|]/, '_')
+      raw = raw.gsub(/[ .]+\z/, '')
+    end
+    raw = 'site' if raw.nil? || raw.empty?
+    raw
   end
 
   def backup_path
     if @directory
-      if @directory[-1] == '/'
-        @directory
-      else
-        @directory + '/'
-      end
+      # because @directory is already an absolute path, we just ensure it exists
+      @directory
     else
-      'websites/' + backup_name + '/'
+      # ensure the default path is absolute and normalized
+      File.expand_path(File.join('websites', backup_name))
     end
   end
 
+  def cdx_path
+    File.join(backup_path, STATE_CDX_FILENAME)
+  end
+
+  def db_path
+    File.join(backup_path, STATE_DB_FILENAME)
+  end
+
+  def handle_reset
+    if @reset
+      puts "Resetting download state..."
+      FileUtils.rm_f(cdx_path)
+      FileUtils.rm_f(db_path)
+      puts "Removed state files: #{cdx_path}, #{db_path}"
+    end
+  end
+
   def match_only_filter file_url
     if @only_filter
-      only_filter_regex = @only_filter.to_regex
+      only_filter_regex = @only_filter.to_regex(detect: true)
       if only_filter_regex
         only_filter_regex =~ file_url
       else
```
```diff
@@ -178,7 +229,7 @@ class WaybackMachineDownloader
 
   def match_exclude_filter file_url
     if @exclude_filter
-      exclude_filter_regex = @exclude_filter.to_regex
+      exclude_filter_regex = @exclude_filter.to_regex(detect: true)
       if exclude_filter_regex
         exclude_filter_regex =~ file_url
       else
```
```diff
@@ -190,53 +241,162 @@ class WaybackMachineDownloader
   end
 
   def get_all_snapshots_to_consider
-    snapshot_list_to_consider = []
-
-    @connection_pool.with_connection do |connection|
-      puts "Getting snapshot pages"
-
-      # Fetch the initial set of snapshots
-      snapshot_list_to_consider += get_raw_list_from_api(@base_url, nil, connection)
-      print "."
-
-      # Fetch additional pages if the exact URL flag is not set
-      unless @exact_url
-        @maximum_pages.times do |page_index|
-          snapshot_list = get_raw_list_from_api("#{@base_url}/*", page_index, connection)
-          break if snapshot_list.empty?
-
-          snapshot_list_to_consider += snapshot_list
-          print "."
-        end
+    if File.exist?(cdx_path) && !@reset
+      puts "Loading snapshot list from #{cdx_path}"
+      begin
+        snapshot_list_to_consider = JSON.parse(File.read(cdx_path))
+        puts "Loaded #{snapshot_list_to_consider.length} snapshots from cache."
+        puts
+        return Concurrent::Array.new(snapshot_list_to_consider)
+      rescue JSON::ParserError => e
+        puts "Error reading snapshot cache file #{cdx_path}: #{e.message}. Refetching..."
+        FileUtils.rm_f(cdx_path)
+      rescue => e
+        puts "Error loading snapshot cache #{cdx_path}: #{e.message}. Refetching..."
+        FileUtils.rm_f(cdx_path)
       end
     end
 
-    puts " found #{snapshot_list_to_consider.length} snapshots to consider."
+    snapshot_list_to_consider = Concurrent::Array.new
+    mutex = Mutex.new
+
+    puts "Getting snapshot pages from Wayback Machine API..."
+
+    # Fetch the initial set of snapshots, sequentially
+    @connection_pool.with_connection do |connection|
+      initial_list = get_raw_list_from_api(@base_url, nil, connection)
+      initial_list ||= []
+      mutex.synchronize do
+        snapshot_list_to_consider.concat(initial_list)
+        print "."
+      end
+    end
+
+    # Fetch additional pages if the exact URL flag is not set
+    unless @exact_url
+      page_index = 0
+      batch_size = [@threads_count, 5].min
+      continue_fetching = true
+
+      while continue_fetching && page_index < @maximum_pages
+        # Determine the range of pages to fetch in this batch
+        end_index = [page_index + batch_size, @maximum_pages].min
+        current_batch = (page_index...end_index).to_a
+
+        # Create futures for concurrent API calls
+        futures = current_batch.map do |page|
+          Concurrent::Future.execute do
+            result = nil
+            @connection_pool.with_connection do |connection|
+              result = get_raw_list_from_api("#{@base_url}/*", page, connection)
+            end
+            result ||= []
+            [page, result]
+          end
+        end
+
+        results = []
+
+        futures.each do |future|
+          begin
+            results << future.value
+          rescue => e
+            puts "\nError fetching page #{future}: #{e.message}"
+          end
+        end
+
+        # Sort results by page number to maintain order
+        results.sort_by! { |page, _| page }
+
+        # Process results and check for empty pages
+        results.each do |page, result|
+          if result.nil? || result.empty?
+            continue_fetching = false
+            break
+          else
+            mutex.synchronize do
+              snapshot_list_to_consider.concat(result)
+              print "."
+            end
+          end
+        end
+
+        page_index = end_index
+
+        sleep(RATE_LIMIT) if continue_fetching
+      end
+    end
+
+    puts " found #{snapshot_list_to_consider.length} snapshots."
+
+    # Save the fetched list to the cache file
+    begin
+      FileUtils.mkdir_p(File.dirname(cdx_path))
+      File.write(cdx_path, JSON.pretty_generate(snapshot_list_to_consider.to_a)) # Convert Concurrent::Array back to Array for JSON
+      puts "Saved snapshot list to #{cdx_path}"
+    rescue => e
+      puts "Error saving snapshot cache to #{cdx_path}: #{e.message}"
+    end
     puts
 
     snapshot_list_to_consider
   end
 
+  # Get a composite snapshot file list for a specific timestamp
+  def get_composite_snapshot_file_list(target_timestamp)
+    file_versions = {}
+    get_all_snapshots_to_consider.each do |file_timestamp, file_url|
+      next unless file_url.include?('/')
+      next if file_timestamp.to_i > target_timestamp
+
+      raw_tail = file_url.split('/')[3..-1]&.join('/')
+      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      next if file_id.nil?
+      next if match_exclude_filter(file_url)
+      next unless match_only_filter(file_url)
+
+      if !file_versions[file_id] || file_versions[file_id][:timestamp].to_i < file_timestamp.to_i
+        file_versions[file_id] = { file_url: file_url, timestamp: file_timestamp, file_id: file_id }
+      end
+    end
+    file_versions.values
+  end
+
+  # Returns a list of files for the composite snapshot
+  def get_file_list_composite_snapshot(target_timestamp)
+    file_list = get_composite_snapshot_file_list(target_timestamp)
+    file_list = file_list.sort_by { |_,v| v[:timestamp].to_s }.reverse
+    file_list.map do |file_remote_info|
+      file_remote_info[1][:file_id] = file_remote_info[0]
+      file_remote_info[1]
+    end
+  end
+
   def get_file_list_curated
     file_list_curated = Hash.new
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
-      file_id = file_url.split('/')[3..-1].join('/')
-      file_id = CGI::unescape file_id
-      file_id = file_id.tidy_bytes unless file_id == ""
+
+      raw_tail = file_url.split('/')[3..-1]&.join('/')
+      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      if file_id.nil?
+        puts "Malformed file url, ignoring: #{file_url}"
+        next
+      end
 
       if file_id.include?('<') || file_id.include?('>')
         puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
       else
         if match_exclude_filter(file_url)
           puts "File url matches exclude filter, ignoring: #{file_url}"
-        elsif not match_only_filter(file_url)
+        elsif !match_only_filter(file_url)
           puts "File url doesn't match only filter, ignoring: #{file_url}"
         elsif file_list_curated[file_id]
           unless file_list_curated[file_id][:timestamp] > file_timestamp
-            file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
+            file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
           end
         else
-          file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
+          file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
         end
       end
     end
```
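The composite-snapshot selection above keeps, per file URL, the newest snapshot whose timestamp does not exceed the target. The rule can be illustrated with plain data (the URLs and timestamps below are hypothetical):

```ruby
# Sketch of the selection rule in get_composite_snapshot_file_list:
# per URL, keep the newest snapshot at or before the target timestamp.
snapshots = [
  ['20120101000000', 'http://example.com/a'],
  ['20140101000000', 'http://example.com/a'], # newer than target, skipped
  ['20130101000000', 'http://example.com/b'],
]
target = 20131231235959

best = {}
snapshots.each do |ts, url|
  next if ts.to_i > target                       # too new for this snapshot
  best[url] = ts if !best[url] || best[url].to_i < ts.to_i
end
puts best.values.join(',')
```

Wayback timestamps are `YYYYMMDDHHMMSS`, so numeric comparison orders them chronologically.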
```diff
@@ -247,21 +407,32 @@ class WaybackMachineDownloader
     file_list_curated = Hash.new
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
-      file_id = file_url.split('/')[3..-1].join('/')
-      file_id_and_timestamp = [file_timestamp, file_id].join('/')
-      file_id_and_timestamp = CGI::unescape file_id_and_timestamp
-      file_id_and_timestamp = file_id_and_timestamp.tidy_bytes unless file_id_and_timestamp == ""
+
+      raw_tail = file_url.split('/')[3..-1]&.join('/')
+      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      if file_id.nil?
+        puts "Malformed file url, ignoring: #{file_url}"
+        next
+      end
+
+      file_id_and_timestamp_raw = [file_timestamp, file_id].join('/')
+      file_id_and_timestamp = sanitize_and_prepare_id(file_id_and_timestamp_raw, file_url)
+      if file_id_and_timestamp.nil?
+        puts "Malformed file id/timestamp combo, ignoring: #{file_url}"
+        next
+      end
 
       if file_id_and_timestamp.include?('<') || file_id_and_timestamp.include?('>')
         puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
       else
         if match_exclude_filter(file_url)
           puts "File url matches exclude filter, ignoring: #{file_url}"
-        elsif not match_only_filter(file_url)
+        elsif !match_only_filter(file_url)
           puts "File url doesn't match only filter, ignoring: #{file_url}"
         elsif file_list_curated[file_id_and_timestamp]
-          puts "Duplicate file and timestamp combo, ignoring: #{file_id}" if @verbose
+          # duplicate combo, ignore silently (verbose flag not shown here)
         else
-          file_list_curated[file_id_and_timestamp] = {file_url: file_url, timestamp: file_timestamp}
+          file_list_curated[file_id_and_timestamp] = { file_url: file_url, timestamp: file_timestamp }
         end
       end
     end
```
```diff
@@ -271,7 +442,9 @@ class WaybackMachineDownloader
 
 
   def get_file_list_by_timestamp
-    if @all_timestamps
+    if @snapshot_at
+      @file_list_by_snapshot_at ||= get_composite_snapshot_file_list(@snapshot_at)
+    elsif @all_timestamps
       file_list_curated = get_file_list_all_timestamps
       file_list_curated.map do |file_remote_info|
         file_remote_info[1][:file_id] = file_remote_info[0]
@@ -279,7 +452,7 @@ class WaybackMachineDownloader
       end
     else
       file_list_curated = get_file_list_curated
-      file_list_curated = file_list_curated.sort_by { |k,v| v[:timestamp] }.reverse
+      file_list_curated = file_list_curated.sort_by { |_,v| v[:timestamp].to_s }.reverse
       file_list_curated.map do |file_remote_info|
         file_remote_info[1][:file_id] = file_remote_info[0]
         file_remote_info[1]
```
```diff
@@ -301,42 +474,128 @@ class WaybackMachineDownloader
     puts "]"
   end
 
-  def download_files
-    start_time = Time.now
-    puts "Downloading #{@base_url} to #{backup_path} from Wayback Machine archives."
-
-    if file_list_by_timestamp.empty?
-      puts "No files to download."
-      return
-    end
-
-    total_files = file_list_by_timestamp.count
-    puts "#{total_files} files to download:"
-
-    @processed_file_count = 0
-    @download_mutex = Mutex.new
-
-    thread_count = [@threads_count, CONNECTION_POOL_SIZE].min
-    pool = Concurrent::FixedThreadPool.new(thread_count)
-
-    file_list_by_timestamp.each do |file_remote_info|
-      pool.post do
-        @connection_pool.with_connection do |connection|
-          result = download_file(file_remote_info, connection)
-          @download_mutex.synchronize do
-            @processed_file_count += 1
-            puts result if result
-          end
-        end
-        sleep(RATE_LIMIT)
-      end
-    end
+  def load_downloaded_ids
+    downloaded_ids = Set.new
+    if File.exist?(db_path) && !@reset
+      puts "Loading list of already downloaded files from #{db_path}"
+      begin
+        File.foreach(db_path) { |line| downloaded_ids.add(line.strip) }
+      rescue => e
+        puts "Error reading downloaded files list #{db_path}: #{e.message}. Assuming no files downloaded."
+        downloaded_ids.clear
+      end
+    end
+    downloaded_ids
+  end
+
+  def append_to_db(file_id)
+    @db_mutex.synchronize do
+      begin
+        FileUtils.mkdir_p(File.dirname(db_path))
+        File.open(db_path, 'a') { |f| f.puts(file_id) }
+      rescue => e
+        @logger.error("Failed to append downloaded file ID #{file_id} to #{db_path}: #{e.message}")
+      end
+    end
+  end
+
+  def processing_files(pool, files_to_process)
+    files_to_process.each do |file_remote_info|
+      pool.post do
+        download_success = false
+        begin
+          @connection_pool.with_connection do |connection|
+            result_message = download_file(file_remote_info, connection)
+            # assume download success if the result message contains ' -> '
+            if result_message && result_message.include?(' -> ')
+              download_success = true
+            end
+            @download_mutex.synchronize do
+              @processed_file_count += 1
+              # adjust progress message to reflect remaining files
+              progress_message = result_message.sub(/\(#{@processed_file_count}\/\d+\)/, "(#{@processed_file_count}/#{@total_to_download})") if result_message
+              puts progress_message if progress_message
+            end
+          end
+          # append to DB only after successful download, outside the connection block
+          if download_success
+            append_to_db(file_remote_info[:file_id])
+          end
+        rescue => e
+          @logger.error("Error processing file #{file_remote_info[:file_url]}: #{e.message}")
+          @download_mutex.synchronize do
+            @processed_file_count += 1
+          end
+        end
+        sleep(RATE_LIMIT)
+      end
+    end
+  end
+
+  def download_files
+    start_time = Time.now
+    puts "Downloading #{@base_url} to #{backup_path} from Wayback Machine archives."
+
+    FileUtils.mkdir_p(backup_path)
+
+    # Load the list of files to potentially download
+    files_to_download = file_list_by_timestamp
+
+    if files_to_download.empty?
+      puts "No files found matching criteria."
+      cleanup
+      return
+    end
+
+    total_files = files_to_download.count
+    puts "#{total_files} files found matching criteria."
+
+    # Load IDs of already downloaded files
+    downloaded_ids = load_downloaded_ids
+    files_to_process = files_to_download.reject do |file_info|
+      downloaded_ids.include?(file_info[:file_id])
+    end
+
+    remaining_count = files_to_process.count
+    skipped_count = total_files - remaining_count
+
+    if skipped_count > 0
+      puts "Found #{skipped_count} previously downloaded files, skipping them."
+    end
+
+    if remaining_count == 0
+      puts "All matching files have already been downloaded."
+      cleanup
+      return
+    end
+
+    puts "#{remaining_count} files to download:"
+
+    @processed_file_count = 0
+    @total_to_download = remaining_count
+    @download_mutex = Mutex.new
+
+    thread_count = [@threads_count, CONNECTION_POOL_SIZE].min
+    pool = Concurrent::FixedThreadPool.new(thread_count)
+
+    processing_files(pool, files_to_process)
 
     pool.shutdown
     pool.wait_for_termination
 
     end_time = Time.now
-    puts "\nDownload completed in #{(end_time - start_time).round(2)}s, saved in #{backup_path}"
+    puts "\nDownload finished in #{(end_time - start_time).round(2)}s."
+
+    # process subdomains if enabled
+    if @recursive_subdomains
+      subdomain_start_time = Time.now
+      process_subdomains
+      subdomain_end_time = Time.now
+      subdomain_time = (subdomain_end_time - subdomain_start_time).round(2)
+      puts "Subdomain processing finished in #{subdomain_time}s."
+    end
+
+    puts "Results saved in #{backup_path}"
     cleanup
   end
```
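The resume logic in `download_files` above boils down to a set difference over file ids: reject every candidate whose id already appears in `.downloaded.txt`. A stripped-down sketch (ids below are hypothetical):

```ruby
require 'set'

# ids recorded in .downloaded.txt on a previous run
downloaded_ids = Set.new(%w[index.html about.html])

# candidate files from the snapshot listing
files_to_download = [
  { file_id: 'index.html' },   # already downloaded, will be skipped
  { file_id: 'contact.html' }, # still pending
]

files_to_process = files_to_download.reject do |file_info|
  downloaded_ids.include?(file_info[:file_id])
end

puts files_to_process.map { |f| f[:file_id] }.join(',')
```

Using a `Set` makes each membership check O(1), so resuming stays cheap even for sites with hundreds of thousands of snapshots.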
@@ -363,42 +622,116 @@ class WaybackMachineDownloader
    end
  end

  def rewrite_urls_to_relative(file_path)
    return unless File.exist?(file_path)

    file_ext = File.extname(file_path).downcase

    begin
      content = File.binread(file_path)

      if file_ext == '.html' || file_ext == '.htm'
        encoding = content.match(/<meta\s+charset=["']?([^"'>]+)/i)&.captures&.first || 'UTF-8'
        content.force_encoding(encoding) rescue content.force_encoding('UTF-8')
      else
        content.force_encoding('UTF-8')
      end

      # URLs in HTML attributes
      rewrite_html_attr_urls(content)

      # URLs in CSS
      rewrite_css_urls(content)

      # URLs in JavaScript
      rewrite_js_urls(content)

      # for URLs in HTML attributes that start with a single slash
      content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])\/([^"'\/][^"']*)(["'])/i) do
        prefix, path, suffix = $1, $2, $3
        "#{prefix}./#{path}#{suffix}"
      end

      # for URLs in CSS that start with a single slash
      content.gsub!(/url\(\s*["']?\/([^"'\)\/][^"'\)]*?)["']?\s*\)/i) do
        path = $1
        "url(\"./#{path}\")"
      end

      # save the modified content back to the file
      File.binwrite(file_path, content)
      puts "Rewrote URLs in #{file_path} to be relative."
    rescue Errno::ENOENT => e
      @logger.warn("Error reading file #{file_path}: #{e.message}")
    end
  end

  def download_file (file_remote_info, http)
    current_encoding = "".encoding
    file_url = file_remote_info[:file_url].encode(current_encoding)
    file_id = file_remote_info[:file_id]
    file_timestamp = file_remote_info[:timestamp]
    file_path_elements = file_id.split('/')

    # sanitize file_id to ensure it is a valid path component
    raw_path_elements = file_id.split('/')

    sanitized_path_elements = raw_path_elements.map do |element|
      if Gem.win_platform?
        # for Windows, we need to sanitize path components to avoid invalid characters
        # this prevents issues with file names that contain characters not allowed in
        # Windows file systems. See https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file#naming-conventions
        element.gsub(/[:\*?"<>\|\&\=\/\\]/) { |match| '%' + match.ord.to_s(16).upcase }
      else
        element
      end
    end

    current_backup_path = backup_path

    if file_id == ""
      dir_path = backup_path
      file_path = backup_path + 'index.html'
    elsif file_url[-1] == '/' or not file_path_elements[-1].include? '.'
      dir_path = backup_path + file_path_elements[0..-1].join('/')
      file_path = backup_path + file_path_elements[0..-1].join('/') + '/index.html'
      dir_path = current_backup_path
      file_path = File.join(dir_path, 'index.html')
    elsif file_url[-1] == '/' || (sanitized_path_elements.last && !sanitized_path_elements.last.include?('.'))
      # if file_id is a directory, we treat it as such
      dir_path = File.join(current_backup_path, *sanitized_path_elements)
      file_path = File.join(dir_path, 'index.html')
    else
      dir_path = backup_path + file_path_elements[0..-2].join('/')
      file_path = backup_path + file_path_elements[0..-1].join('/')
      # if file_id is a file, we treat it as such
      filename = sanitized_path_elements.pop
      dir_path = File.join(current_backup_path, *sanitized_path_elements)
      file_path = File.join(dir_path, filename)
    end
    if Gem.win_platform?
      dir_path = dir_path.gsub(/[:*?&=<>\\|]/) {|s| '%' + s.ord.to_s(16) }
      file_path = file_path.gsub(/[:*?&=<>\\|]/) {|s| '%' + s.ord.to_s(16) }

    # check existence *before* download attempt
    # this handles cases where a file was created manually or by a previous partial run without a .db entry
    if File.exist? file_path
      return "#{file_url} # #{file_path} already exists. (#{@processed_file_count + 1}/#{@total_to_download})"
    end
    unless File.exist? file_path
      begin
        structure_dir_path dir_path
        download_with_retry(file_path, file_url, file_timestamp, http)
        "#{file_url} -> #{file_path} (#{@processed_file_count + 1}/#{file_list_by_timestamp.size})"
      rescue StandardError => e
        msg = "#{file_url} # #{e}"
        if not @all and File.exist?(file_path) and File.size(file_path) == 0
          File.delete(file_path)
          msg += "\n#{file_path} was empty and was removed."

    begin
      structure_dir_path dir_path
      status = download_with_retry(file_path, file_url, file_timestamp, http)

      case status
      when :saved
        if @rewrite && File.extname(file_path) =~ /\.(html?|css|js)$/i
          rewrite_urls_to_relative(file_path)
        end
        msg
        "#{file_url} -> #{file_path} (#{@processed_file_count + 1}/#{@total_to_download})"
      when :skipped_not_found
        "Skipped (not found): #{file_url} (#{@processed_file_count + 1}/#{@total_to_download})"
      else
        # ideally, this case should not be reached if download_with_retry behaves as expected.
        @logger.warn("Unknown status from download_with_retry for #{file_url}: #{status}")
        "Unknown status for #{file_url}: #{status} (#{@processed_file_count + 1}/#{@total_to_download})"
      end
    else
      "#{file_url} # #{file_path} already exists. (#{@processed_file_count + 1}/#{file_list_by_timestamp.size})"
    rescue StandardError => e
      msg = "Failed: #{file_url} # #{e} (#{@processed_file_count + 1}/#{@total_to_download})"
      if File.exist?(file_path) and File.size(file_path) == 0
        File.delete(file_path)
        msg += "\n#{file_path} was empty and was removed."
      end
      msg
    end
  end

@@ -407,7 +740,22 @@ class WaybackMachineDownloader
  end

  def file_list_by_timestamp
    @file_list_by_timestamp ||= get_file_list_by_timestamp
    if @snapshot_at
      @file_list_by_snapshot_at ||= get_composite_snapshot_file_list(@snapshot_at)
    elsif @all_timestamps
      file_list_curated = get_file_list_all_timestamps
      file_list_curated.map do |file_remote_info|
        file_remote_info[1][:file_id] = file_remote_info[0]
        file_remote_info[1]
      end
    else
      file_list_curated = get_file_list_curated
      file_list_curated = file_list_curated.sort_by { |_,v| v[:timestamp].to_s }.reverse
      file_list_curated.map do |file_remote_info|
        file_remote_info[1][:file_id] = file_remote_info[0]
        file_remote_info[1]
      end
    end
  end

  private
@@ -425,46 +773,165 @@ class WaybackMachineDownloader
    end
    logger
  end

  # safely sanitize a file id (or id+timestamp)
  def sanitize_and_prepare_id(raw, file_url)
    return nil if raw.nil? || raw.empty?
    original = raw.dup
    begin
      # work on a binary copy to avoid premature encoding errors
      raw = raw.dup.force_encoding(Encoding::BINARY)

      # percent-decode (repeat until stable in case of double-encoding)
      loop do
        decoded = raw.gsub(/%([0-9A-Fa-f]{2})/) { [$1].pack('H2') }
        break if decoded == raw
        raw = decoded
      end

      # try tidy_bytes
      begin
        raw = raw.tidy_bytes
      rescue StandardError
        # fallback: scrub to UTF-8
        raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
      end

      # ensure UTF-8 and scrub again
      unless raw.encoding == Encoding::UTF_8 && raw.valid_encoding?
        raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
      end

      # strip HTML/comment artifacts & control chars
      raw.gsub!(/<!--+/, '')
      raw.gsub!(/[\x00-\x1F]/, '')

      # split query; hash it for stable short name
      path_part, query_part = raw.split('?', 2)
      if query_part && !query_part.empty?
        q_digest = Digest::SHA256.hexdigest(query_part)[0, 12]
        if path_part.include?('.')
          pre, _sep, post = path_part.rpartition('.')
          path_part = "#{pre}__q#{q_digest}.#{post}"
        else
          path_part = "#{path_part}__q#{q_digest}"
        end
      end
      raw = path_part

      # collapse slashes & trim leading slash
      raw.gsub!(%r{/+}, '/')
      raw.sub!(%r{\A/}, '')

      # segment-wise sanitation
      raw = raw.split('/').map do |segment|
        seg = segment.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
        seg = seg.gsub(/[:*?"<>|\\]/) { |c| "%#{c.ord.to_s(16).upcase}" }
        seg = seg.gsub(/[ .]+\z/, '') if Gem.win_platform?
        seg.empty? ? '_' : seg
      end.join('/')

      # remove any remaining angle brackets
      raw.tr!('<>', '')

      # final fallback if empty
      raw = "file__#{Digest::SHA1.hexdigest(original)[0,10]}" if raw.nil? || raw.empty?

      raw
    rescue => e
      @logger&.warn("Failed to sanitize file id from #{file_url}: #{e.message}")
      # deterministic fallback – never return nil so caller won't mark malformed
      "file__#{Digest::SHA1.hexdigest(original)[0,10]}"
    end
  end

  # wrap URL in parentheses if it contains characters that commonly break unquoted
  # Windows CMD usage (e.g., &). This is only for display; user still must quote
  # when invoking manually.
  def safe_display_url(url)
    return url unless url && url.match?(/[&]/)
    "(#{url})"
  end

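The query-hashing step in `sanitize_and_prepare_id` above is the interesting part: instead of encoding the whole query string into the filename, it keeps the path and appends a short SHA-256 digest of the query, so distinct query strings get distinct but stable names. A standalone approximation of just that step (the helper name `hashed_name` is mine, not the gem's):

```ruby
require 'digest'

# Standalone sketch of the "__q<digest>" naming scheme: hash the query
# string and splice the digest in before the file extension, if any.
def hashed_name(raw)
  path, query = raw.split('?', 2)
  return path unless query && !query.empty?

  digest = Digest::SHA256.hexdigest(query)[0, 12]
  if path.include?('.')
    pre, _sep, post = path.rpartition('.')
    "#{pre}__q#{digest}.#{post}"
  else
    "#{path}__q#{digest}"
  end
end

puts hashed_name('page.php?id=1')   # e.g. page__q<12 hex chars>.php
puts hashed_name('plain.html')      # unchanged: no query string
```

The digest is deterministic, so re-running the downloader maps the same URL to the same local file, which is what the resume logic relies on.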
  def download_with_retry(file_path, file_url, file_timestamp, connection, redirect_count = 0)
    retries = 0
    begin
      wayback_url = if @rewritten
        "https://web.archive.org/web/#{file_timestamp}/#{file_url}"
      else
        "https://web.archive.org/web/#{file_timestamp}id_/#{file_url}"
      end

      # Escape square brackets because they are not valid in URI()
      wayback_url = wayback_url.gsub('[', '%5B').gsub(']', '%5D')

      # reject invalid/unencoded wayback_url, behaving as if the resource weren't found
      if not @url_regexp.match?(wayback_url)
        @logger.warn("Skipped #{file_url}: invalid URL")
        return :skipped_not_found
      end

      request = Net::HTTP::Get.new(URI(wayback_url))
      request["Connection"] = "keep-alive"
      request["User-Agent"] = "WaybackMachineDownloader/#{VERSION}"
      request["Accept-Encoding"] = "gzip, deflate"

      response = connection.request(request)

      case response
      when Net::HTTPSuccess
      save_response_body = lambda do
        File.open(file_path, "wb") do |file|
          if block_given?
            yield(response, file)
          body = response.body
          if response['content-encoding'] == 'gzip' && body && !body.empty?
            begin
              gz = Zlib::GzipReader.new(StringIO.new(body))
              decompressed_body = gz.read
              gz.close
              file.write(decompressed_body)
            rescue Zlib::GzipFile::Error => e
              @logger.warn("Failure decompressing gzip file #{file_url}: #{e.message}. Writing raw body.")
              file.write(body)
            end
          else
            file.write(response.body)
            file.write(body) if body
          end
        end
      when Net::HTTPRedirection
        raise "Too many redirects for #{file_url}" if redirect_count >= 2
        location = response['location']
        @logger.warn("Redirect found for #{file_url} -> #{location}")
        return download_with_retry(file_path, location, file_timestamp, connection, redirect_count + 1)
      when Net::HTTPTooManyRequests
        sleep(RATE_LIMIT * 2)
        raise "Rate limited, retrying..."
      when Net::HTTPNotFound
        @logger.warn("File not found, skipping: #{file_url}")
        return
      else
        raise "HTTP Error: #{response.code} #{response.message}"
      end

      if @all
        case response
        when Net::HTTPSuccess, Net::HTTPRedirection, Net::HTTPClientError, Net::HTTPServerError
          save_response_body.call
          if response.is_a?(Net::HTTPRedirection)
            @logger.info("Saved redirect page for #{file_url} (status #{response.code}).")
          elsif response.is_a?(Net::HTTPClientError) || response.is_a?(Net::HTTPServerError)
            @logger.info("Saved error page for #{file_url} (status #{response.code}).")
          end
          return :saved
        else
          # for any other response type when --all is true, treat as an error to be retried or failed
          raise "Unhandled HTTP response: #{response.code} #{response.message}"
        end
      else # not @all (our default behavior)
        case response
        when Net::HTTPSuccess
          save_response_body.call
          return :saved
        when Net::HTTPRedirection
          raise "Too many redirects for #{file_url}" if redirect_count >= 2
          location = response['location']
          @logger.warn("Redirect found for #{file_url} -> #{location}")
          return download_with_retry(file_path, location, file_timestamp, connection, redirect_count + 1)
        when Net::HTTPTooManyRequests
          sleep(RATE_LIMIT * 2)
          raise "Rate limited, retrying..."
        when Net::HTTPNotFound
          @logger.warn("File not found, skipping: #{file_url}")
          return :skipped_not_found
        else
          raise "HTTP Error: #{response.code} #{response.message}"
        end
      end

    rescue StandardError => e
      if retries < MAX_RETRIES
        retries += 1
@@ -480,12 +947,25 @@ class WaybackMachineDownloader

  def cleanup
    @connection_pool.shutdown

    if @failed_downloads.any?
      @logger.error("Download completed with errors.")
      @logger.error("Failed downloads summary:")
      @failed_downloads.each do |failure|
        @logger.error("  #{failure[:url]} - #{failure[:error]}")
      end
      unless @reset
        puts "State files kept due to download errors: #{cdx_path}, #{db_path}"
        return
      end
    end

    if !@keep || @reset
      puts "Cleaning up state files..." unless @keep && !@reset
      FileUtils.rm_f(cdx_path)
      FileUtils.rm_f(db_path)
    elsif @keep
      puts "Keeping state files as requested: #{cdx_path}, #{db_path}"
    end
  end
end

@@ -4,7 +4,15 @@ require 'uri'
module ArchiveAPI

  def get_raw_list_from_api(url, page_index, http)
    request_url = URI("https://web.archive.org/cdx/search/xd")
    # Automatically append /* if the URL doesn't contain a path after the domain
    # This is a workaround for an issue with the API and *some* domains.
    # See https://github.com/StrawberryMaster/wayback-machine-downloader/issues/6
    # But don't do this when exact_url flag is set
    if url && !url.match(/^https?:\/\/.*\//i) && !@exact_url
      url = "#{url}/*"
    end

    request_url = URI("https://web.archive.org/cdx/search/cdx")
    params = [["output", "json"], ["url", url]] + parameters_for_api(page_index)
    request_url.query = URI.encode_www_form(params)

@@ -17,7 +25,7 @@ module ArchiveAPI
    # Check if the response contains the header ["timestamp", "original"]
    json.shift if json.first == ["timestamp", "original"]
    json
  rescue JSON::ParserError, StandardError => e
  rescue JSON::ParserError => e
    warn "Failed to fetch data from API: #{e.message}"
    []
  end

lib/wayback_machine_downloader/subdom_processor.rb (new file)
@@ -0,0 +1,238 @@
# frozen_string_literal: true

module SubdomainProcessor
  def process_subdomains
    return unless @recursive_subdomains

    puts "Starting subdomain processing..."

    # extract base domain from the URL for comparison
    base_domain = extract_base_domain(@base_url)
    @processed_domains = Set.new([base_domain])
    @subdomain_queue = Queue.new

    # scan downloaded files for subdomain links
    initial_files = Dir.glob(File.join(backup_path, "**/*.{html,htm,css,js}"))
    puts "Scanning #{initial_files.size} downloaded files for subdomain links..."

    subdomains_found = scan_files_for_subdomains(initial_files, base_domain)

    if subdomains_found.empty?
      puts "No subdomains found in downloaded content."
      return
    end

    puts "Found #{subdomains_found.size} subdomains to process: #{subdomains_found.join(', ')}"

    # add found subdomains to the queue
    subdomains_found.each do |subdomain|
      full_domain = "#{subdomain}.#{base_domain}"
      @subdomain_queue << "https://#{full_domain}/"
    end

    # process the subdomain queue
    download_subdomains(base_domain)

    # after all downloads, rewrite all URLs to make local references
    rewrite_subdomain_links(base_domain) if @rewrite
  end

  private

  def extract_base_domain(url)
    uri = URI.parse(url.gsub(/^https?:\/\//, '').split('/').first) rescue nil
    return nil unless uri

    host = uri.host || uri.path.split('/').first
    host = host.downcase

    # extract the base domain (e.g., "example.com" from "sub.example.com")
    parts = host.split('.')
    return host if parts.size <= 2

    # for domains like co.uk, we want to keep the last 3 parts
    if parts[-2].length <= 3 && parts[-1].length <= 3 && parts.size > 2
      parts.last(3).join('.')
    else
      parts.last(2).join('.')
    end
  end

  def scan_files_for_subdomains(files, base_domain)
    return [] unless base_domain

    subdomains = Set.new

    files.each do |file_path|
      next unless File.exist?(file_path)

      begin
        content = File.read(file_path)

        # extract URLs from HTML href/src attributes
        content.scan(/(?:href|src|action|data-src)=["']https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
          subdomain = match[0].downcase
          next if subdomain == 'www' # skip www subdomain
          subdomains.add(subdomain)
        end

        # extract URLs from CSS
        content.scan(/url\(["']?https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
          subdomain = match[0].downcase
          next if subdomain == 'www' # skip www subdomain
          subdomains.add(subdomain)
        end

        # extract URLs from JavaScript strings
        content.scan(/["']https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
          subdomain = match[0].downcase
          next if subdomain == 'www' # skip www subdomain
          subdomains.add(subdomain)
        end
      rescue => e
        puts "Error scanning file #{file_path}: #{e.message}"
      end
    end

    subdomains.to_a
  end

  def download_subdomains(base_domain)
    puts "Starting subdomain downloads..."
    depth = 0
    max_depth = @subdomain_depth || 1

    while depth < max_depth && !@subdomain_queue.empty?
      current_batch = []

      # get all subdomains at current depth
      while !@subdomain_queue.empty?
        current_batch << @subdomain_queue.pop
      end

      puts "Processing #{current_batch.size} subdomains at depth #{depth + 1}..."

      # download each subdomain
      current_batch.each do |subdomain_url|
        download_subdomain(subdomain_url, base_domain)
      end

      # if we need to go deeper, scan the newly downloaded files
      if depth + 1 < max_depth
        # get all files in the subdomains directory
        new_files = Dir.glob(File.join(backup_path, "subdomains", "**/*.{html,htm,css,js}"))
        new_subdomains = scan_files_for_subdomains(new_files, base_domain)

        # filter out already processed subdomains
        new_subdomains.each do |subdomain|
          full_domain = "#{subdomain}.#{base_domain}"
          unless @processed_domains.include?(full_domain)
            @processed_domains.add(full_domain)
            @subdomain_queue << "https://#{full_domain}/"
          end
        end

        puts "Found #{@subdomain_queue.size} new subdomains at depth #{depth + 1}" if !@subdomain_queue.empty?
      end

      depth += 1
    end
  end

  def download_subdomain(subdomain_url, base_domain)
    begin
      uri = URI.parse(subdomain_url)
      subdomain_host = uri.host

      # skip if already processed
      if @processed_domains.include?(subdomain_host)
        puts "Skipping already processed subdomain: #{subdomain_host}"
        return
      end

      @processed_domains.add(subdomain_host)
      puts "Downloading subdomain: #{subdomain_url}"

      # create the directory for this subdomain
      subdomain_dir = File.join(backup_path, "subdomains", subdomain_host)
      FileUtils.mkdir_p(subdomain_dir)

      # create subdomain downloader with appropriate options
      subdomain_options = {
        base_url: subdomain_url,
        directory: subdomain_dir,
        from_timestamp: @from_timestamp,
        to_timestamp: @to_timestamp,
        all: @all,
        threads_count: @threads_count,
        maximum_pages: [@maximum_pages / 2, 10].max,
        rewrite: @rewrite,
        # don't recursively process subdomains from here
        recursive_subdomains: false
      }

      # download the subdomain content
      subdomain_downloader = WaybackMachineDownloader.new(subdomain_options)
      subdomain_downloader.download_files

      puts "Completed download of subdomain: #{subdomain_host}"
    rescue => e
      puts "Error downloading subdomain #{subdomain_url}: #{e.message}"
    end
  end

  def rewrite_subdomain_links(base_domain)
    puts "Rewriting all files to use local subdomain references..."

    all_files = Dir.glob(File.join(backup_path, "**/*.{html,htm,css,js}"))
    subdomains = @processed_domains.reject { |domain| domain == base_domain }

    puts "Found #{all_files.size} files to check for rewriting"
    puts "Will rewrite links for subdomains: #{subdomains.join(', ')}"

    rewritten_count = 0

    all_files.each do |file_path|
      next unless File.exist?(file_path)

      begin
        content = File.read(file_path)
        original_content = content.dup

        # replace subdomain URLs with local paths
        subdomains.each do |subdomain_host|
          # for HTML attributes (href, src, etc.)
          content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/#{Regexp.escape(subdomain_host)}([^"']*)(["'])/i) do
            prefix, path, suffix = $1, $2, $3
            path = "/index.html" if path.empty? || path == "/"
            "#{prefix}../subdomains/#{subdomain_host}#{path}#{suffix}"
          end

          # for CSS url()
          content.gsub!(/url\(\s*["']?https?:\/\/#{Regexp.escape(subdomain_host)}([^"'\)]*?)["']?\s*\)/i) do
            path = $1
            path = "/index.html" if path.empty? || path == "/"
            "url(\"../subdomains/#{subdomain_host}#{path}\")"
          end

          # for JavaScript strings
          content.gsub!(/(["'])https?:\/\/#{Regexp.escape(subdomain_host)}([^"']*)(["'])/i) do
            quote_start, path, quote_end = $1, $2, $3
            path = "/index.html" if path.empty? || path == "/"
            "#{quote_start}../subdomains/#{subdomain_host}#{path}#{quote_end}"
          end
        end

        # save if modified
        if content != original_content
          File.write(file_path, content)
          rewritten_count += 1
        end
      rescue => e
        puts "Error rewriting file #{file_path}: #{e.message}"
      end
    end

    puts "Rewrote links in #{rewritten_count} files"
  end
end

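The `extract_base_domain` heuristic above deserves a concrete illustration: it keeps the last two DNS labels, or three when both trailing labels are short (a rough stand-in for public suffixes like `co.uk`). A standalone sketch of just that heuristic (the function name `base_domain` is mine, and it takes a bare host rather than a URL):

```ruby
# Standalone sketch of the base-domain heuristic: last two labels,
# or last three when both trailing labels are <= 3 characters
# (so "shop.example.co.uk" keeps "example.co.uk").
def base_domain(host)
  parts = host.downcase.split('.')
  return host.downcase if parts.size <= 2

  if parts[-2].length <= 3 && parts[-1].length <= 3
    parts.last(3).join('.')
  else
    parts.last(2).join('.')
  end
end

puts base_domain('sub.example.com')     # => example.com
puts base_domain('shop.example.co.uk')  # => example.co.uk
```

Note this is only a heuristic: it misclassifies suffixes with a long second-level label (e.g. `com.au` passes, but `example.city` style TLDs are fine only by luck); a public-suffix list would be the rigorous alternative.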
@@ -1,73 +1,74 @@
# frozen_string_literal: true

# essentially, this is for converting a string with a potentially
# broken or unknown encoding into a valid UTF-8 string
# @todo: consider using charlock_holmes for this in the future
module TidyBytes
  # precomputing CP1252 to UTF-8 mappings for bytes 128-159
  CP1252_MAP = (128..159).map do |byte|
    case byte
    when 128 then [226, 130, 172] # EURO SIGN
    when 130 then [226, 128, 154] # SINGLE LOW-9 QUOTATION MARK
    when 131 then [198, 146]      # LATIN SMALL LETTER F WITH HOOK
    when 132 then [226, 128, 158] # DOUBLE LOW-9 QUOTATION MARK
    when 133 then [226, 128, 166] # HORIZONTAL ELLIPSIS
    when 134 then [226, 128, 160] # DAGGER
    when 135 then [226, 128, 161] # DOUBLE DAGGER
    when 136 then [203, 134]      # MODIFIER LETTER CIRCUMFLEX ACCENT
    when 137 then [226, 128, 176] # PER MILLE SIGN
    when 138 then [197, 160]      # LATIN CAPITAL LETTER S WITH CARON
    when 139 then [226, 128, 185] # SINGLE LEFT-POINTING ANGLE QUOTATION MARK
    when 140 then [197, 146]      # LATIN CAPITAL LIGATURE OE
    when 142 then [197, 189]      # LATIN CAPITAL LETTER Z WITH CARON
    when 145 then [226, 128, 152] # LEFT SINGLE QUOTATION MARK
    when 146 then [226, 128, 153] # RIGHT SINGLE QUOTATION MARK
    when 147 then [226, 128, 156] # LEFT DOUBLE QUOTATION MARK
    when 148 then [226, 128, 157] # RIGHT DOUBLE QUOTATION MARK
    when 149 then [226, 128, 162] # BULLET
    when 150 then [226, 128, 147] # EN DASH
    when 151 then [226, 128, 148] # EM DASH
    when 152 then [203, 156]      # SMALL TILDE
    when 153 then [226, 132, 162] # TRADE MARK SIGN
    when 154 then [197, 161]      # LATIN SMALL LETTER S WITH CARON
    when 155 then [226, 128, 186] # SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
    when 156 then [197, 147]      # LATIN SMALL LIGATURE OE
    when 158 then [197, 190]      # LATIN SMALL LETTER Z WITH CARON
    when 159 then [197, 184]      # LATIN SMALL LETTER Y WITH DIAERESIS
    end
  end.freeze
  UNICODE_REPLACEMENT_CHARACTER = "�"

  # precomputing all possible byte conversions
  CP1252_TO_UTF8 = Array.new(256) do |b|
    if (128..159).cover?(b)
      CP1252_MAP[b - 128]&.pack('C*')
    elsif b < 128
      b.chr
    else
      b < 192 ? [194, b].pack('C*') : [195, b - 64].pack('C*')
  # common encodings to try for best multilingual compatibility
  COMMON_ENCODINGS = [
    Encoding::UTF_8,
    Encoding::Windows_1251, # Cyrillic/Russian legacy
    Encoding::GB18030,      # Simplified Chinese
    Encoding::Shift_JIS,    # Japanese
    Encoding::EUC_KR,       # Korean
    Encoding::ISO_8859_1,   # Western European
    Encoding::Windows_1252  # Western European/Latin1 superset
  ].select { |enc| Encoding.name_list.include?(enc.name) }

  # returns true if the string appears to be binary (has null bytes)
  def binary_data?
    self.include?("\x00".b)
  end

  # attempts to return a valid UTF-8 version of the string
  def tidy_bytes
    return self if self.encoding == Encoding::UTF_8 && self.valid_encoding?
    return self.dup.force_encoding("BINARY") if binary_data?

    str = self.dup
    COMMON_ENCODINGS.each do |enc|
      str.force_encoding(enc)
      begin
        utf8 = str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
        return utf8 if utf8.valid_encoding? && !utf8.include?(UNICODE_REPLACEMENT_CHARACTER)
      rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
        # try next encoding
      end
    end
  end.freeze

    # if no clean conversion found, try again but accept replacement characters
    str = self.dup
    COMMON_ENCODINGS.each do |enc|
      str.force_encoding(enc)
      begin
        utf8 = str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
        return utf8 if utf8.valid_encoding?
      rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
        # try next encoding
      end
    end

    # fallback: replace all invalid/undefined bytes
    str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
  end

  def tidy_bytes!
    replace(self.tidy_bytes)
  end

  def self.included(base)
    base.class_eval do
      def tidy_bytes(force = false)
        return nil if empty?

        if force
          buffer = String.new(capacity: bytesize)
          each_byte { |b| buffer << CP1252_TO_UTF8[b] }
          return buffer.force_encoding(Encoding::UTF_8)
        end
    base.send(:include, InstanceMethods)
  end

        begin
          encode('UTF-8')
        rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
          buffer = String.new(capacity: bytesize)
          scrub { |b| CP1252_TO_UTF8[b.ord] }
        end
      end
  module InstanceMethods
    def tidy_bytes
      TidyBytes.instance_method(:tidy_bytes).bind(self).call
    end

    def tidy_bytes!(force = false)
      result = tidy_bytes(force)
      result ? replace(result) : self
    end
    def tidy_bytes!
      TidyBytes.instance_method(:tidy_bytes!).bind(self).call
    end
  end
end

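The new `tidy_bytes` strategy above boils down to: try each candidate legacy encoding in order and accept the first one that converts to UTF-8 without needing replacement characters. A minimal standalone sketch of that idea (helper name and the shortened encoding list are mine; the gem tries a longer list):

```ruby
# Standalone sketch of the try-encodings-in-order fallback: force each
# candidate encoding on the raw bytes, convert to UTF-8, and return the
# first clean result. Order matters: an earlier single-byte encoding can
# "win" even when the text was really in a later one.
CANDIDATE_ENCODINGS = [Encoding::UTF_8, Encoding::Windows_1251, Encoding::Windows_1252]

def to_utf8(bytes)
  CANDIDATE_ENCODINGS.each do |enc|
    candidate = bytes.dup.force_encoding(enc)
    next unless candidate.valid_encoding?
    utf8 = candidate.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
    return utf8 if utf8.valid_encoding?
  end
  # last resort: scrub anything that cannot be represented
  bytes.dup.force_encoding(Encoding::BINARY)
       .encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
end
```

Since every single-byte encoding makes every byte "valid", this can only guess; that is why the module's real implementation also rejects conversions that introduced replacement characters before falling back.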
lib/wayback_machine_downloader/url_rewrite.rb (new file)
@@ -0,0 +1,74 @@
# frozen_string_literal: true

# URLs in HTML attributes
def rewrite_html_attr_urls(content)
  content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
    prefix, url, suffix = $1, $2, $3

    if url.start_with?('http')
      begin
        uri = URI.parse(url)
        path = uri.path
        path = path[1..-1] if path.start_with?('/')
        "#{prefix}#{path}#{suffix}"
      rescue
        "#{prefix}#{url}#{suffix}"
      end
    elsif url.start_with?('/')
      "#{prefix}./#{url[1..-1]}#{suffix}"
    else
      "#{prefix}#{url}#{suffix}"
    end
  end
  content
end

# URLs in CSS
def rewrite_css_urls(content)
  content.gsub!(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"'\)]+)["']?\s*\)/i) do
    url = $1

    if url.start_with?('http')
      begin
        uri = URI.parse(url)
        path = uri.path
        path = path[1..-1] if path.start_with?('/')
        "url(\"#{path}\")"
      rescue
        "url(\"#{url}\")"
      end
    elsif url.start_with?('/')
      "url(\"./#{url[1..-1]}\")"
    else
      "url(\"#{url}\")"
    end
  end
  content
end

# URLs in JavaScript
def rewrite_js_urls(content)
  content.gsub!(/(["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
    quote_start, url, quote_end = $1, $2, $3

    if url.start_with?('http')
      begin
        uri = URI.parse(url)
        path = uri.path
        path = path[1..-1] if path.start_with?('/')
        "#{quote_start}#{path}#{quote_end}"
      rescue
        "#{quote_start}#{url}#{quote_end}"
      end
    elsif url.start_with?('/')
      "#{quote_start}./#{url[1..-1]}#{quote_end}"
    else
      "#{quote_start}#{url}#{quote_end}"
    end
  end

  content
end

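All three rewriters above share one core move: peel off the `https://web.archive.org/web/<timestamp>[id_]/` prefix so the original URL captured by the Wayback Machine is exposed and can then be relativized. A quick standalone check of that prefix pattern (constant and helper names are mine, not the gem's):

```ruby
# Standalone check of the Wayback-prefix pattern used by the three
# rewriters: match "https://web.archive.org/web/<digits>[id_]/" and
# strip it, leaving the originally archived URL.
WAYBACK_PREFIX = %r{https?://web\.archive\.org/web/[0-9]+(?:id_)?/}i

def strip_wayback(url)
  url.sub(WAYBACK_PREFIX, '')
end

puts strip_wayback('https://web.archive.org/web/20200101000000id_/https://example.com/style.css')
# => https://example.com/style.css
```

The `(?:id_)?` alternative matters: `id_` snapshots serve the raw archived bytes, while plain timestamps serve the Wayback-wrapped page, and both prefix forms appear in downloaded content.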
@@ -1,17 +1,15 @@
 require './lib/wayback_machine_downloader'
 
 Gem::Specification.new do |s|
-  s.name = "wayback_machine_downloader"
-  s.version = WaybackMachineDownloader::VERSION
+  s.name = "wayback_machine_downloader_straw"
+  s.version = "2.4.2"
   s.executables << "wayback_machine_downloader"
   s.summary = "Download an entire website from the Wayback Machine."
-  s.description = "Download an entire website from the Wayback Machine. Wayback Machine by Internet Archive (archive.org) is an awesome tool to view any website at any point of time but lacks an export feature. Wayback Machine Downloader brings exactly this."
-  s.authors = ["hartator"]
-  s.email = "hartator@gmail.com"
-  s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb"]
-  s.homepage = "https://github.com/hartator/wayback-machine-downloader"
+  s.description = "Download complete websites from the Internet Archive's Wayback Machine. While the Wayback Machine (archive.org) excellently preserves web history, it lacks a built-in export functionality; this gem does just that, allowing you to download entire archived websites. (This is a significant rewrite of the original wayback_machine_downloader gem by hartator, with enhanced features and performance improvements.)"
+  s.authors = ["strawberrymaster"]
+  s.email = "strawberrymaster@vivaldi.net"
+  s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb", "lib/wayback_machine_downloader/subdom_processor.rb", "lib/wayback_machine_downloader/url_rewrite.rb"]
+  s.homepage = "https://github.com/StrawberryMaster/wayback-machine-downloader"
   s.license = "MIT"
-  s.required_ruby_version = ">= 1.9.2"
+  s.required_ruby_version = ">= 3.4.3"
+  s.add_runtime_dependency "concurrent-ruby", "~> 1.3", ">= 1.3.4"
   s.add_development_dependency "rake", "~> 12.2"
   s.add_development_dependency "minitest", "~> 5.2"