95 Commits

Author SHA1 Message Date
Felipe
40e9c9bb51 Bumped version 2025-08-16 19:38:01 +00:00
Felipe
6bc08947b7 More aggressive sanitization
this should deal with some of the issues we've seen, luckily. What a ride!
2025-08-12 18:55:00 -03:00
Felipe
c731e0c7bd Bumped version 2025-08-12 11:46:03 +00:00
Felipe
9fd2a7f8d1 Minor refactoring of HTML tag sanitization 2025-08-12 08:42:27 -03:00
Felipe
6ad312f31f Sanitizing HTML tags
some sites contain tags *in* their URL, and fail to save on some devices like Windows
2025-08-05 23:44:34 +00:00
Felipe
62ea35daa6 Bumping version 2025-08-04 21:23:48 +00:00
Felipe
1f4202908f Fixes for tidy_bytes
admittedly not the cleanest way to do this, although it works for #25.
2025-07-31 12:58:22 -03:00
Felipe
bed3f6101c Added missing gemspec file 2025-07-31 12:57:03 -03:00
Felipe
754df6b8d6 Merge pull request #27 from adampweb/master
Refactored huge functions & cleanup
2025-07-29 18:09:51 -03:00
adampweb
801fb77f79 Perf: Refactored a huge function into smaller subprocesses 2025-07-29 21:12:20 +02:00
adampweb
e9849e6c9c Cleanup: I removed the obsolete options.
The classic way provides more flexibility
2025-07-29 20:55:10 +02:00
Felipe
bc868e6b39 Refactor tidy_bytes.rb
I'm not sure if we can easily determine the encoding behind each site (and I don't think Wayback Machine does that), *but* we can at least translate it and get it to download. This should be mostly useful for other, non-Western European languages. See #25
2025-07-29 10:10:56 -03:00
Felipe
2bf04aff48 Sanitize base_url and directory parameters
this might be the cause of #25, at least from what it appears
2025-07-27 17:18:57 +00:00
Felipe
51becde916 Minor fix 2025-07-26 21:01:40 +00:00
Felipe
c30ee73977 Sanitize file_id
we were not consistently handling non-UTF-8 characters here, especially after commit e4487baafc. This also fixes #25
2025-07-26 20:58:50 +00:00
Felipe
d3466b3387 Bumping version
normally I would've yanked the old gem, but that's not working here
2025-07-22 12:41:26 +00:00
Felipe
0250579f0e Added missing file 2025-07-22 12:38:12 +00:00
Felipe
0663c1c122 Merge pull request #23 from adampweb/master
Fixed base image vulnerability
2025-07-21 14:44:43 -03:00
adampweb
93115f70ec Merge pull request #5 from adampweb/snyk-fix-88576ceadf7e0c41b63a2af504a3c8ae
[Snyk] Security upgrade ruby from 3.4.4-alpine to 3.4.5-alpine
2025-07-21 18:46:03 +02:00
snyk-bot
3d37ae10fd fix: Dockerfile to reduce vulnerabilities
The following vulnerabilities are fixed with an upgrade:
- https://snyk.io/vuln/SNYK-ALPINE322-OPENSSL-10597997
2025-07-21 16:45:10 +00:00
Felipe
bff10e7260 Initial implementation of a composite snapshot
see issue #22. TBF
2025-07-21 15:30:49 +00:00
Felipe
3d181ce84c Bumped version 2025-07-21 13:48:34 +00:00
Alfonso Corrado
999aa211ae fix match filters 2025-07-21 13:42:44 +00:00
adampweb
ffdce7e4ec Exclude dev environment config 2025-07-20 17:14:09 +02:00
adampweb
e4487baafc Fix: Handle default case in tidy_bytes 2025-07-20 17:13:36 +02:00
Felipe
82ff2de3dc Added brief note for users with both WMD gems here 2025-07-14 08:12:38 -03:00
Felipe
fd329afdd2 Merge pull request #20 from underarchiver/rfc3968-url-validity-check
Prevent fetching off non RFC3968-compliant URLs
2025-07-11 10:55:12 -03:00
Felipe
038785557d Ability to recursively download across subdomains
this is quite experimental. Fixes #15 but still needs more testing
2025-07-09 12:53:58 +00:00
Felipe
2eead8cc27 Bumping version 2025-06-27 19:50:39 +00:00
cybercode3
7e5cdd54fb Fix: path sanitizer and timestamp sorting errors

( I encountered these issues with the script on Windows 11. Changing these two lines got the script to work for me. )

- Fixed a bug in Windows path sanitizer where String#gsub was incorrectly called with a Proc as the replacement. Replaced with block form to ensure proper character escaping for Windows-incompatible file path characters.
- Fixed an ArgumentError in file sorting when a file snapshot’s timestamp was nil. Updated sort logic to safely handle nil timestamps by converting them to strings or integers, preventing comparison errors between NilClass and String/Integer.

These changes prevent fatal runtime errors when downloading files with certain URLs or incomplete metadata, improving robustness for sites with inconsistent archive data.
2025-06-25 02:07:20 +00:00
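The two fixes this commit describes are easy to illustrate. The method names below are hypothetical, not the repository's actual code:

```ruby
# 1) String#gsub block form: passing a Proc as the second positional
#    argument raises TypeError (gsub accepts a String or Hash there);
#    the block form is the supported way to compute a replacement per match.
def sanitize_windows_path(path)
  path.gsub(/[:*?"<>|]/) { |_match| '_' }
end

# 2) nil-safe timestamp sorting: comparing NilClass with String inside
#    sort_by raises ArgumentError, so coerce the timestamp to a string first.
def sort_snapshots(snapshots)
  snapshots.sort_by { |snap| snap[:timestamp].to_s }
end

puts sanitize_windows_path('archive:2025*test?.html')
puts sort_snapshots([{ timestamp: '20250101' }, { timestamp: nil }]).inspect
```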
Felipe
4160ff5e4a Bumping version 2025-06-18 18:05:31 +00:00
underarchiver
f03d92a3c4 Prevent fetching off non RFC3968-compliant URLs 2025-06-17 13:27:10 +02:00
Felipe
2490109cfe Merge pull request #17 from elidickinson/fix-exact-url
don't append /* when using --exact-url
2025-06-15 22:18:40 -03:00
Eli Dickinson
c3c5b8446a don't append /* when --exact-url 2025-06-15 13:26:11 -04:00
Felipe
18357a77ed Correct file path and sanitization in Windows
Not only were we not normalizing the file directories, we were also aggressively sanitizing incorrect characters, leading to some funny stuff on Windows. Fixes #16
2025-06-15 13:48:11 +00:00
Felipe
3fdfd70fc1 Bump version 2025-06-05 22:34:40 +00:00
Felipe
2bf74b4173 Merge pull request #14 from elidickinson/fix-bracket-urls
Fix bug with archive urls containing square brackets
2025-06-03 23:12:07 -03:00
Eli Dickinson
79cbb639e7 Fix bug with archive urls containing square brackets 2025-06-03 16:36:03 -04:00
Felipe
071d208b31 Merge pull request #13 from elidickinson/master
workaround for API only showing html files for some domains (fixes #6)
2025-05-30 14:34:32 -03:00
Eli Dickinson
1681a12579 workaround for API only showing html files for some domains
See https://github.com/StrawberryMaster/wayback-machine-downloader/issues/6
2025-05-30 12:50:48 -04:00
Felipe
f38756dd76 Correction for downloaded data folder
if you downloaded content from example.org/*, it would be listed in a folder titled * instead of the sitename. See #6 (and thanks to elidickinson for pointing it out!)
2025-05-30 14:00:32 +00:00
Felipe
9452411e32 Added nil checks 2025-05-30 13:52:25 +00:00
Felipe
61e22cfe25 Bump versions 2025-05-27 18:10:09 +00:00
Felipe
183ed61104 Attempt at fixing --all
I honestly don't recall if this was implemented in the original code, and I'm guessing this worked at *some point* during this fork. It seems to work correctly now, however. See #6 and #11
2025-05-27 17:17:34 +00:00
Felipe
e6ecf32a43 Dockerfile test 2
I really should not be using deprecated parameters.
2025-05-21 21:34:36 -03:00
Felipe
375c6314ad Dockerfile test
...again
2025-05-21 21:26:37 -03:00
Felipe
6e2739f5a8 Testing 2025-05-18 18:00:10 +00:00
Felipe
caba6a665f Rough attempt to make this more efficient 2025-05-18 17:52:28 +00:00
Felipe
ab4324c0eb Bumping to 2.3.6 2025-05-18 16:49:44 +00:00
Felipe
e28d7d578b Experimental ability to rewrite URLs to local browsing 2025-05-18 16:48:50 +00:00
Felipe
a7a25574cf Merge pull request #10 from adampweb/master
Using ghcr.io for pulling Docker image
2025-05-15 08:50:33 -03:00
Felipe
23cc3d69b1 Merge pull request #9 from adampweb/feature/increase-performance
Increase performance of Bundler processes
2025-05-15 08:50:04 -03:00
adampweb
01fa1f8c9f Merge pull request #2 from vitaly-zdanevich/patch-1
README.md: add docker example without cloning the repo
2025-05-14 21:19:11 +02:00
adampweb
d2f98d9428 Merge remote-tracking branch 'upstream/master' into feature/increase-performance 2025-05-14 15:41:07 +02:00
adampweb
c7a5381eaf Using nproc in Bundler processes 2025-05-14 15:03:22 +02:00
Felipe
9709834e20 Merge pull request #8 from adampweb/master
Fix: delete empty files, Compose command fixes
2025-05-12 10:36:10 -03:00
adampweb
77998372cb Docker: If you load any component of the app before (or during) the Docker build process, it may cause failures 2025-05-11 20:05:00 +02:00
adampweb
2c789b7df6 Restructure Docker Compose config 2025-05-11 11:27:08 +02:00
adampweb
1ef8c14c48 Removed unused variable from if condition 2025-05-11 10:57:36 +02:00
Felipe
780e45343f Merge pull request #7 from adampweb/master
Vulnerability fix: Ruby 3.x
2025-05-10 11:34:07 -03:00
adampweb
42e6d62284 Merge remote-tracking branch 'upstream/master' 2025-05-09 20:17:01 +02:00
adampweb
543161d7fb Supplement of docs 2025-05-09 19:54:15 +02:00
adampweb
99a6de981e Env. vars: set default values and related docs 2025-05-09 19:38:39 +02:00
adampweb
d85c880d23 Vulnerability fix:
Updates Ruby version to address vulnerability

Updates the Ruby version in the Dockerfile
to the latest stable release in the 3.x series
to address identified vulnerabilities.

Details: https://hub.docker.com/layers/library/ruby/3.1.6-alpine/images/sha256-7ff1261ca74033c38e86b04e30a6078567ec17e59d465d96250665897fb52180
2025-05-09 18:32:47 +02:00
Felipe
917f4f8798 Bumping version 2025-04-30 13:05:30 +00:00
Felipe
787bc2e535 Added missing configs 2025-04-30 13:05:21 +00:00
Felipe
4db13a7792 Fix --all-timestamps
we were accidentally removing the timestamp prefix from `file_id`, rendering that option useless in 2.3.4. This should work again now. This will fix #4
2025-04-30 13:01:29 +00:00
Felipe
31d51728af Bump version 2025-04-19 14:07:05 +00:00
Felipe
febffe5de4 Added support for resuming incomplete downloads 2025-04-19 13:40:14 +00:00
Felipe
27dd619aa4 gzip support 2025-04-19 13:07:07 +00:00
Felipe
576298dca8 License fix 2025-04-19 13:05:09 +00:00
Felipe
dc71d1d167 Merge pull request #3 from adampweb/master
Using Docker Compose
2025-04-14 12:06:37 -03:00
Vitaly Zdanevich
13e88ce04a README.md: add -v .:/websites 2025-04-14 10:56:01 +04:00
Vitaly Zdanevich
c7fc7c7b58 README.md: add docker example without cloning the repo 2025-04-14 10:43:49 +04:00
adampweb
5aebf83fca Add interactivity by CLI 2025-04-06 17:02:39 +02:00
adampweb
b1080f0219 Keep secrets :) 2025-04-06 16:56:59 +02:00
adampweb
dde36ea840 Merge branch 'StrawberryMaster:master' into master 2025-04-06 12:46:04 +02:00
adampweb
acec026ce1 Using Docker Compose 2025-04-06 12:36:31 +02:00
Felipe
ec3fd2dcaa Merge pull request #2 from adampweb/master
Upgrade Ruby version + install necessary gem
2025-04-02 22:52:25 -03:00
adampweb
6518ecf215 Install concurrent-ruby gem to avoid errors like cannot load such file -- concurrent-ruby 2025-04-02 15:23:24 +02:00
adampweb
f5572d6129 Merge branch 'master' of https://github.com/adampweb/wayback-machine-downloader 2025-04-02 14:50:14 +02:00
adampweb
fc4ccf62e2 Upgraded Ruby version 2025-04-02 14:48:50 +02:00
adampweb
84bf76363c Merge pull request #1 from StrawberryMaster/master
Fetching API calls sequentially
2025-04-02 14:27:31 +02:00
Felipe
0c701ee890 Fetching API calls sequentially
although the WM API is particularly wonky and this will not prevent all errors, this aligns better with what we have here.
2025-03-29 22:27:01 +00:00
Felipe
c953d038e2 Fixed banner for gem version 2025-03-09 21:15:29 -03:00
Felipe
b726e94947 start → install
This is more accurate
2025-03-08 22:06:48 +00:00
Felipe
f86302e7aa Updated README 2025-03-08 21:25:00 +00:00
Felipe
791068e9bd Updated gemspec 2025-03-08 21:21:23 +00:00
Felipe
456e08e745 Merge pull request #1 from idkanymoreforone/master
Add missing instructions for concurrent-ruby install
2025-02-13 09:06:04 -03:00
idkanymoreforone
90069fad41 issue 2025-02-12 20:55:57 -06:00
Felipe
2243958643 Fixes in cases of too many redirects or files not found 2025-02-09 16:48:52 +00:00
Felipe
e25732e19c Bumping to 2.3.3 2025-02-09 16:48:33 +00:00
Felipe
46450d7c20 Refactoring tidy_bytes, part 2 2025-02-09 16:47:29 +00:00
Felipe
019534794c Taking care of empty responses
fixes "unexpected token at ''" appearing after fetching a list of snapshots
2025-02-09 16:24:02 +00:00
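The fix this commit describes — guarding against empty API responses before JSON parsing — can be sketched as follows (illustrative helper, not the gem's actual code):

```ruby
require 'json'

# JSON.parse("") raises JSON::ParserError ("unexpected token at ''"),
# so treat an empty or whitespace-only response body as an empty
# snapshot list instead of letting the parse error surface.
def parse_snapshot_list(body)
  return [] if body.nil? || body.strip.empty?
  JSON.parse(body)
end

puts parse_snapshot_list("").inspect
puts parse_snapshot_list('[["20250101","https://example.com/"]]').inspect
```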
Felipe
7142be5c16 Fixed license link 2025-02-09 15:42:31 +00:00
15 changed files with 1146 additions and 268 deletions

.dockerignore Normal file

@@ -0,0 +1,5 @@
*.md
*.yml
.github
websites

.env.example Normal file

@@ -0,0 +1,4 @@
DB_HOST="db"
DB_USER="root"
DB_PASSWORD="example1234"
DB_NAME="wayback"

.gitignore vendored

@@ -18,6 +18,11 @@ Gemfile.lock
 .ruby-version
 .rbenv*
+## ENV
+*.env*
+!.env*.example
 ## RCOV
 coverage.data
@@ -27,3 +32,7 @@ tmp
 *.rbc
 test.rb
+
+# Dev environment
+.vscode
+*.code-workspace

Dockerfile

@@ -1,7 +1,14 @@
-FROM ruby:2.3-alpine
+FROM ruby:3.4.5-alpine
 USER root
 WORKDIR /build
+COPY Gemfile /build/
+COPY *.gemspec /build/
+RUN bundle config set jobs "$(nproc)" \
+    && bundle install
 COPY . /build
-WORKDIR /
-ENTRYPOINT [ "/build/bin/wayback_machine_downloader" ]
+WORKDIR /build
+ENTRYPOINT [ "/build/bin/wayback_machine_downloader", "--directory", "/build/websites" ]

README.md

@@ -1,6 +1,5 @@
 # Wayback Machine Downloader
-[![Gem Version](https://badge.fury.io/rb/wayback_machine_downloader.svg)](https://rubygems.org/gems/wayback_machine_downloader/)
+[![version](https://badge.fury.io/rb/wayback_machine_downloader_straw.svg)](https://rubygems.org/gems/wayback_machine_downloader_straw)
 This is a fork of the [Wayback Machine Downloader](https://github.com/hartator/wayback-machine-downloader). With this, you can download a website from the Internet Archive Wayback Machine.
@@ -19,6 +18,17 @@ Your files will save to `./websites/example.com/` with their original structure
 - Ruby 2.3+ ([download Ruby here](https://www.ruby-lang.org/en/downloads/))
 - Bundler gem (`gem install bundler`)
+### Quick install
+It took a while, but we have a gem for this! Install it with:
+```bash
+gem install wayback_machine_downloader_straw
+```
+To run most commands, just like in the original WMD, you can use:
+```bash
+wayback_machine_downloader https://example.com
+```
+**Note**: this gem may conflict with hartator's wayback_machine_downloader gem, and so you may have to uninstall it for this WMD fork to work. A good way to know is if a command fails; it will list the gem version as 2.3.1 or earlier, while this WMD fork uses 2.3.2 or above.
 ### Step-by-step setup
 1. **Install Ruby**:
 ```bash
@@ -31,6 +41,11 @@ Your files will save to `./websites/example.com/` with their original structure
 bundle install
 ```
+If you encounter an error like cannot load such file -- concurrent-ruby, manually install the missing gem:
+```bash
+gem install concurrent-ruby
+```
 3. **Run it**:
 ```bash
 cd path/to/wayback-machine-downloader/bin
@@ -48,16 +63,50 @@ docker build -t wayback_machine_downloader .
 docker run -it --rm wayback_machine_downloader [options] URL
 ```
+or the example without cloning the repo - fetching smallrockets.com until the year 2013:
+```bash
+docker run -v .:/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
+```
+### 🐳 Using Docker Compose
+We can also use it with Docker Compose, which makes it easier to extend functionality (such as storing previous downloads in a database):
+```yaml
+# docker-compose.yml
+services:
+  wayback_machine_downloader:
+    build:
+      context: .
+    tty: true
+    image: wayback_machine_downloader:latest
+    container_name: wayback_machine_downloader
+    volumes:
+      - .:/build:rw
+      - ./websites:/build/websites:rw
+```
+#### Usage:
+Build a Docker image named "wayback_machine_downloader" with the following command:
+```bash
+docker compose up -d --build
+```
+After that, you can run the existing container with the following command:
+```bash
+docker compose run --rm wayback_machine_downloader https://example.com [options]
+```
 ## ⚙️ Configuration
 There are a few constants that can be edited in the `wayback_machine_downloader.rb` file for your convenience. The default values may be conservative, so you can adjust them to your needs. They are:
 ```ruby
 DEFAULT_TIMEOUT = 30 # HTTP timeout (in seconds)
-MAX_RETRIES = 3 # Failed request retries
-RETRY_DELAY = 2 # Wait between retries
-RATE_LIMIT = 0.25 # Throttle between requests
-CONNECTION_POOL_SIZE = 10 # No. of simultaneous connections
-MEMORY_BUFFER_SIZE = 16384 # Size of download buffer
+MAX_RETRIES = 3 # Number of times to retry failed requests
+RETRY_DELAY = 2 # Wait time between retries (seconds)
+RATE_LIMIT = 0.25 # Throttle between requests (seconds)
+CONNECTION_POOL_SIZE = 10 # Maximum simultaneous connections
+MEMORY_BUFFER_SIZE = 16384 # Download buffer size (bytes)
+STATE_CDX_FILENAME = '.cdx.json' # Stores snapshot listing
+STATE_DB_FILENAME = '.downloaded.txt' # Tracks completed downloads
 ```
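As a rough illustration of how constants like these interact — this is a generic sketch, not the gem's implementation — a retry loop typically consumes them like so:

```ruby
# Illustrative only: how MAX_RETRIES, RETRY_DELAY and RATE_LIMIT are
# commonly wired into a request helper. Not the gem's actual code.
MAX_RETRIES = 3     # give up after this many attempts
RETRY_DELAY = 2     # seconds to wait before retrying a failed request
RATE_LIMIT  = 0.25  # seconds to wait between successive requests

def fetch_with_retries
  attempts = 0
  begin
    attempts += 1
    result = yield
    sleep(RATE_LIMIT) # throttle before the next request
    result
  rescue StandardError
    if attempts < MAX_RETRIES
      sleep(RETRY_DELAY)
      retry
    end
    raise
  end
end

puts fetch_with_retries { "ok" }
```

Raising `RATE_LIMIT` is the gentlest way to reduce load on the Wayback Machine API; lowering it speeds things up at the cost of more throttling errors.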
 ## 🛠️ Advanced usage
@@ -186,6 +235,29 @@ ruby wayback_machine_downloader https://example.com --list
 ```
 It will just display the files to be downloaded with their snapshot timestamps and urls. The output format is JSON. It won't download anything. It's useful for debugging or to connect to another application.
+### Job management
+The downloader automatically saves its progress (`.cdx.json` for the snapshot list, `.downloaded.txt` for completed files) in the output directory. If you run the same command again pointing to the same output directory, it will resume where it left off, skipping already downloaded files.
+> [!NOTE]
+> Automatic resumption can be affected by changing the URL, mode selection (like `--all-timestamps`), filtering selections, or other options. If you want to ensure a clean start, use the `--reset` option.
+| Option | Description |
+|--------|-------------|
+| `--reset` | Delete state files (`.cdx.json`, `.downloaded.txt`) and restart the download from scratch. Does not delete already downloaded website files. |
+| `--keep` | Keep state files (`.cdx.json`, `.downloaded.txt`) even after a successful download. By default, these are deleted upon successful completion. |
+**Example** - Restart a download job from the beginning:
+```bash
+ruby wayback_machine_downloader https://example.com --reset
+```
+This is useful if you suspect the state files are corrupted or want to ensure a completely fresh download process without deleting the files you already have.
+**Example 2** - Keep state files after download:
+```bash
+ruby wayback_machine_downloader https://example.com --keep
+```
+This can be useful for debugging or if you plan to extend the download later with different parameters (e.g., adding a `--to` timestamp) while leveraging the existing snapshot list.
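The resume mechanism described above boils down to loading the completed-file list into a set and skipping matches. A minimal sketch, assuming `.downloaded.txt` holds one file id per line (the helper name is illustrative, not the gem's code):

```ruby
require 'set'
require 'tmpdir'

# Skip any file already recorded in the .downloaded.txt state file.
# Returns the file ids still left to download.
def remaining_files(all_file_ids, state_db_path)
  done = if File.exist?(state_db_path)
    Set.new(File.readlines(state_db_path, chomp: true))
  else
    Set.new # no state file yet: everything remains
  end
  all_file_ids.reject { |id| done.include?(id) }
end

Dir.mktmpdir do |dir|
  db = File.join(dir, '.downloaded.txt')
  File.write(db, "index.html\nabout.html\n")
  puts remaining_files(%w[index.html about.html contact.html], db).inspect
  # → ["contact.html"]
end
```

Deleting the state file (what `--reset` does) makes every file "remaining" again, while the already-downloaded website files on disk are untouched.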
 ## 🤝 Contributing
 1. Fork the repository
 2. Create a feature branch

bin/wayback_machine_downloader

@@ -59,7 +59,27 @@ option_parser = OptionParser.new do |opts|
   end
   opts.on("-r", "--rewritten", "Downloads the rewritten Wayback Machine files instead of the original files") do |t|
-    options[:rewritten] = t
+    options[:rewritten] = true
   end
+  opts.on("--local", "Rewrite URLs to make them relative for local browsing") do |t|
+    options[:rewrite] = true
+  end
+  opts.on("--reset", "Delete state files (.cdx.json, .downloaded.txt) and restart the download from scratch") do |t|
+    options[:reset] = true
+  end
+  opts.on("--keep", "Keep state files (.cdx.json, .downloaded.txt) after a successful download") do |t|
+    options[:keep] = true
+  end
+  opts.on("--recursive-subdomains", "Recursively download content from subdomains") do |t|
+    options[:recursive_subdomains] = true
+  end
+  opts.on("--subdomain-depth DEPTH", Integer, "Maximum depth for subdomain recursion (default: 1)") do |t|
+    options[:subdomain_depth] = t
+  end
   opts.on("-v", "--version", "Display version") do |t|
docker-compose.yml Normal file

@@ -0,0 +1,10 @@
@@ -0,0 +1,10 @@
services:
  wayback_machine_downloader:
    build:
      context: .
    tty: true
    image: wayback_machine_downloader:latest
    container_name: wayback_machine_downloader
    volumes:
      - .:/build:rw
      - ./websites:/build/websites:rw

entrypoint.sh Normal file

@@ -0,0 +1,9 @@
#!/bin/bash
if [ "$ENVIRONMENT" == "development" ]; then
  echo "Running in development mode. Starting rerun..."
  exec rerun --dir /build --ignore "websites/*" -- /build/bin/wayback_machine_downloader "$@"
else
  echo "Not in development mode. Skipping rerun."
  exec /build/bin/wayback_machine_downloader "$@"
fi

lib/wayback_machine_downloader.rb

@@ -9,9 +9,14 @@ require 'json'
 require 'time'
 require 'concurrent-ruby'
 require 'logger'
+require 'zlib'
+require 'stringio'
+require 'digest'
 require_relative 'wayback_machine_downloader/tidy_bytes'
 require_relative 'wayback_machine_downloader/to_regex'
 require_relative 'wayback_machine_downloader/archive_api'
+require_relative 'wayback_machine_downloader/subdom_processor'
+require_relative 'wayback_machine_downloader/url_rewrite'
 class ConnectionPool
   MAX_AGE = 300
@@ -110,24 +115,34 @@ end
 class WaybackMachineDownloader
   include ArchiveAPI
+  include SubdomainProcessor
-  VERSION = "2.3.2"
+  VERSION = "2.4.2"
   DEFAULT_TIMEOUT = 30
   MAX_RETRIES = 3
   RETRY_DELAY = 2
   RATE_LIMIT = 0.25 # Delay between requests in seconds
   CONNECTION_POOL_SIZE = 10
   MEMORY_BUFFER_SIZE = 16384 # 16KB chunks
+  STATE_CDX_FILENAME = ".cdx.json"
+  STATE_DB_FILENAME = ".downloaded.txt"
   attr_accessor :base_url, :exact_url, :directory, :all_timestamps,
     :from_timestamp, :to_timestamp, :only_filter, :exclude_filter,
-    :all, :maximum_pages, :threads_count, :logger
+    :all, :maximum_pages, :threads_count, :logger, :reset, :keep, :rewrite,
+    :snapshot_at
   def initialize params
     validate_params(params)
-    @base_url = params[:base_url]
+    @base_url = params[:base_url]&.tidy_bytes
     @exact_url = params[:exact_url]
-    @directory = params[:directory]
+    if params[:directory]
+      sanitized_dir = params[:directory].tidy_bytes
+      @directory = File.expand_path(sanitized_dir)
+    else
+      @directory = nil
+    end
     @all_timestamps = params[:all_timestamps]
     @from_timestamp = params[:from_timestamp].to_i
     @to_timestamp = params[:to_timestamp].to_i
@@ -137,35 +152,71 @@ class WaybackMachineDownloader
     @maximum_pages = params[:maximum_pages] ? params[:maximum_pages].to_i : 100
     @threads_count = [params[:threads_count].to_i, 1].max
     @rewritten = params[:rewritten]
+    @reset = params[:reset]
+    @keep = params[:keep]
     @timeout = params[:timeout] || DEFAULT_TIMEOUT
     @logger = setup_logger
     @failed_downloads = Concurrent::Array.new
     @connection_pool = ConnectionPool.new(CONNECTION_POOL_SIZE)
+    @db_mutex = Mutex.new
+    @rewrite = params[:rewrite] || false
+    @recursive_subdomains = params[:recursive_subdomains] || false
+    @subdomain_depth = params[:subdomain_depth] || 1
+    @snapshot_at = params[:snapshot_at] ? params[:snapshot_at].to_i : nil
+    # URL for rejecting invalid/unencoded wayback urls
+    @url_regexp = /^(([A-Za-z][A-Za-z0-9+.-]*):((\/\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=]))+)(:([0-9]*))?)(((\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))|((\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)?))|((((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))(\?((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?(\#((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?)$/
+    handle_reset
   end
   def backup_name
-    if @base_url.include? '//'
-      @base_url.split('/')[2]
+    url_to_process = @base_url.end_with?('/*') ? @base_url.chomp('/*') : @base_url
+    raw = if url_to_process.include?('//')
+      url_to_process.split('/')[2]
     else
-      @base_url
+      url_to_process
     end
+    # sanitize for Windows (and safe cross-platform) to avoid ENOTDIR on mkdir (colon in host:port)
+    if Gem.win_platform?
+      raw = raw.gsub(/[:*?"<>|]/, '_')
+      raw = raw.gsub(/[ .]+\z/, '')
+    end
+    raw = 'site' if raw.nil? || raw.empty?
+    raw
   end
   def backup_path
     if @directory
-      if @directory[-1] == '/'
-        @directory
-      else
-        @directory + '/'
-      end
+      # because @directory is already an absolute path, we just ensure it exists
+      @directory
     else
-      'websites/' + backup_name + '/'
+      # ensure the default path is absolute and normalized
+      File.expand_path(File.join('websites', backup_name))
     end
   end
+  def cdx_path
+    File.join(backup_path, STATE_CDX_FILENAME)
+  end
+  def db_path
+    File.join(backup_path, STATE_DB_FILENAME)
+  end
+  def handle_reset
+    if @reset
+      puts "Resetting download state..."
+      FileUtils.rm_f(cdx_path)
+      FileUtils.rm_f(db_path)
+      puts "Removed state files: #{cdx_path}, #{db_path}"
+    end
+  end
   def match_only_filter file_url
     if @only_filter
-      only_filter_regex = @only_filter.to_regex
+      only_filter_regex = @only_filter.to_regex(detect: true)
       if only_filter_regex
         only_filter_regex =~ file_url
       else
@@ -178,7 +229,7 @@ class WaybackMachineDownloader
   def match_exclude_filter file_url
     if @exclude_filter
-      exclude_filter_regex = @exclude_filter.to_regex
+      exclude_filter_regex = @exclude_filter.to_regex(detect: true)
       if exclude_filter_regex
         exclude_filter_regex =~ file_url
       else
@@ -190,53 +241,162 @@ class WaybackMachineDownloader
   end
   def get_all_snapshots_to_consider
-    snapshot_list_to_consider = []
-    @connection_pool.with_connection do |connection|
-      puts "Getting snapshot pages"
-      # Fetch the initial set of snapshots
-      snapshot_list_to_consider += get_raw_list_from_api(@base_url, nil, connection)
-      print "."
-      # Fetch additional pages if the exact URL flag is not set
-      unless @exact_url
-        @maximum_pages.times do |page_index|
-          snapshot_list = get_raw_list_from_api("#{@base_url}/*", page_index, connection)
-          break if snapshot_list.empty?
-          snapshot_list_to_consider += snapshot_list
-          print "."
-        end
-      end
-    end
-    puts " found #{snapshot_list_to_consider.length} snapshots to consider."
+    if File.exist?(cdx_path) && !@reset
+      puts "Loading snapshot list from #{cdx_path}"
+      begin
+        snapshot_list_to_consider = JSON.parse(File.read(cdx_path))
+        puts "Loaded #{snapshot_list_to_consider.length} snapshots from cache."
+        puts
+        return Concurrent::Array.new(snapshot_list_to_consider)
+      rescue JSON::ParserError => e
+        puts "Error reading snapshot cache file #{cdx_path}: #{e.message}. Refetching..."
+        FileUtils.rm_f(cdx_path)
+      rescue => e
+        puts "Error loading snapshot cache #{cdx_path}: #{e.message}. Refetching..."
+        FileUtils.rm_f(cdx_path)
+      end
+    end
+    snapshot_list_to_consider = Concurrent::Array.new
+    mutex = Mutex.new
+    puts "Getting snapshot pages from Wayback Machine API..."
+    # Fetch the initial set of snapshots, sequentially
+    @connection_pool.with_connection do |connection|
+      initial_list = get_raw_list_from_api(@base_url, nil, connection)
+      initial_list ||= []
+      mutex.synchronize do
+        snapshot_list_to_consider.concat(initial_list)
+        print "."
+      end
+    end
+    # Fetch additional pages if the exact URL flag is not set
+    unless @exact_url
+      page_index = 0
+      batch_size = [@threads_count, 5].min
+      continue_fetching = true
+      while continue_fetching && page_index < @maximum_pages
+        # Determine the range of pages to fetch in this batch
+        end_index = [page_index + batch_size, @maximum_pages].min
+        current_batch = (page_index...end_index).to_a
+        # Create futures for concurrent API calls
+        futures = current_batch.map do |page|
+          Concurrent::Future.execute do
+            result = nil
+            @connection_pool.with_connection do |connection|
+              result = get_raw_list_from_api("#{@base_url}/*", page, connection)
+            end
+            result ||= []
+            [page, result]
+          end
+        end
+        results = []
+        futures.each do |future|
+          begin
+            results << future.value
+          rescue => e
+            puts "\nError fetching page #{future}: #{e.message}"
+          end
+        end
+        # Sort results by page number to maintain order
+        results.sort_by! { |page, _| page }
+        # Process results and check for empty pages
+        results.each do |page, result|
+          if result.nil? || result.empty?
+            continue_fetching = false
+            break
+          else
+            mutex.synchronize do
+              snapshot_list_to_consider.concat(result)
+              print "."
+            end
+          end
+        end
+        page_index = end_index
+        sleep(RATE_LIMIT) if continue_fetching
+      end
+    end
+    puts " found #{snapshot_list_to_consider.length} snapshots."
+    # Save the fetched list to the cache file
+    begin
+      FileUtils.mkdir_p(File.dirname(cdx_path))
+      File.write(cdx_path, JSON.pretty_generate(snapshot_list_to_consider.to_a)) # Convert Concurrent::Array back to Array for JSON
+      puts "Saved snapshot list to #{cdx_path}"
+    rescue => e
+      puts "Error saving snapshot cache to #{cdx_path}: #{e.message}"
+    end
     puts
     snapshot_list_to_consider
   end
+  # Get a composite snapshot file list for a specific timestamp
+  def get_composite_snapshot_file_list(target_timestamp)
+    file_versions = {}
+    get_all_snapshots_to_consider.each do |file_timestamp, file_url|
+      next unless file_url.include?('/')
+      next if file_timestamp.to_i > target_timestamp
+      raw_tail = file_url.split('/')[3..-1]&.join('/')
+      file_id = sanitize_and_prepare_id(raw_tail, file_url)
+      next if file_id.nil?
+      next if match_exclude_filter(file_url)
+      next unless match_only_filter(file_url)
+      if !file_versions[file_id] || file_versions[file_id][:timestamp].to_i < file_timestamp.to_i
+        file_versions[file_id] = { file_url: file_url, timestamp: file_timestamp, file_id: file_id }
+      end
+    end
+    file_versions.values
+  end
+  # Returns a list of files for the composite snapshot
+  def get_file_list_composite_snapshot(target_timestamp)
+    file_list = get_composite_snapshot_file_list(target_timestamp)
+    file_list = file_list.sort_by { |_,v| v[:timestamp].to_s }.reverse
+    file_list.map do |file_remote_info|
+      file_remote_info[1][:file_id] = file_remote_info[0]
+      file_remote_info[1]
+    end
+  end
   def get_file_list_curated
     file_list_curated = Hash.new
     get_all_snapshots_to_consider.each do |file_timestamp, file_url|
       next unless file_url.include?('/')
-      file_id = file_url.split('/')[3..-1].join('/')
-      file_id = CGI::unescape file_id
-      file_id = file_id.tidy_bytes unless file_id == ""
+      raw_tail = file_url.split('/')[3..-1]&.join('/')
+      file_id = sanitize_and_prepare_id(raw_tail, file_url)
       if file_id.nil?
         puts "Malformed file url, ignoring: #{file_url}"
+        next
+      end
+      if file_id.include?('<') || file_id.include?('>')
+        puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
       else
         if match_exclude_filter(file_url)
           puts "File url matches exclude filter, ignoring: #{file_url}"
-        elsif not match_only_filter(file_url)
+        elsif !match_only_filter(file_url)
           puts "File url doesn't match only filter, ignoring: #{file_url}"
         elsif file_list_curated[file_id]
           unless file_list_curated[file_id][:timestamp] > file_timestamp
-            file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
+            file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
           end
         else
-          file_list_curated[file_id] = {file_url: file_url, timestamp: file_timestamp}
+          file_list_curated[file_id] = { file_url: file_url, timestamp: file_timestamp }
         end
       end
     end
@@ -247,21 +407,32 @@ class WaybackMachineDownloader
file_list_curated = Hash.new
get_all_snapshots_to_consider.each do |file_timestamp, file_url|
next unless file_url.include?('/')
raw_tail = file_url.split('/')[3..-1]&.join('/')
file_id = sanitize_and_prepare_id(raw_tail, file_url)
if file_id.nil?
puts "Malformed file url, ignoring: #{file_url}"
next
end
file_id_and_timestamp_raw = [file_timestamp, file_id].join('/')
file_id_and_timestamp = sanitize_and_prepare_id(file_id_and_timestamp_raw, file_url)
if file_id_and_timestamp.nil?
puts "Malformed file id/timestamp combo, ignoring: #{file_url}"
next
end
if file_id_and_timestamp.include?('<') || file_id_and_timestamp.include?('>')
puts "Invalid characters in file_id after sanitization, ignoring: #{file_url}"
else
if match_exclude_filter(file_url)
puts "File url matches exclude filter, ignoring: #{file_url}"
elsif !match_only_filter(file_url)
puts "File url doesn't match only filter, ignoring: #{file_url}"
elsif file_list_curated[file_id_and_timestamp]
# duplicate file/timestamp combo, ignore silently
else
file_list_curated[file_id_and_timestamp] = { file_url: file_url, timestamp: file_timestamp }
end
end
end
@@ -271,7 +442,9 @@ class WaybackMachineDownloader
def get_file_list_by_timestamp
if @snapshot_at
@file_list_by_snapshot_at ||= get_composite_snapshot_file_list(@snapshot_at)
elsif @all_timestamps
file_list_curated = get_file_list_all_timestamps
file_list_curated.map do |file_remote_info|
file_remote_info[1][:file_id] = file_remote_info[0]
@@ -279,7 +452,7 @@ class WaybackMachineDownloader
end
else
file_list_curated = get_file_list_curated
file_list_curated = file_list_curated.sort_by { |_,v| v[:timestamp].to_s }.reverse
file_list_curated.map do |file_remote_info|
file_remote_info[1][:file_id] = file_remote_info[0]
file_remote_info[1]
@@ -301,42 +474,128 @@ class WaybackMachineDownloader
puts "]"
end
def load_downloaded_ids
downloaded_ids = Set.new
if File.exist?(db_path) && !@reset
puts "Loading list of already downloaded files from #{db_path}"
begin
File.foreach(db_path) { |line| downloaded_ids.add(line.strip) }
rescue => e
puts "Error reading downloaded files list #{db_path}: #{e.message}. Assuming no files downloaded."
downloaded_ids.clear
end
end
downloaded_ids
end
def append_to_db(file_id)
@db_mutex.synchronize do
begin
FileUtils.mkdir_p(File.dirname(db_path))
File.open(db_path, 'a') { |f| f.puts(file_id) }
rescue => e
@logger.error("Failed to append downloaded file ID #{file_id} to #{db_path}: #{e.message}")
end
end
end
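The resume mechanism above is an append-only text file of downloaded file IDs, one per line, reloaded into a Set on restart. A minimal sketch of that round trip (Tempfile stands in for the real db_path):

```ruby
require 'set'
require 'tempfile'

# Append downloaded IDs one per line, as append_to_db does.
db = Tempfile.new('downloaded.db')
%w[index.html css/style.css].each { |file_id| db.puts(file_id) }
db.flush

# Reload them into a Set on the next run, as load_downloaded_ids does.
downloaded_ids = Set.new
File.foreach(db.path) { |line| downloaded_ids.add(line.strip) }
puts downloaded_ids.include?('css/style.css')  # true
```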
def processing_files(pool, files_to_process)
files_to_process.each do |file_remote_info|
pool.post do
download_success = false
begin
@connection_pool.with_connection do |connection|
result_message = download_file(file_remote_info, connection)
# assume download success if the result message contains ' -> '
if result_message && result_message.include?(' -> ')
download_success = true
end
@download_mutex.synchronize do
@processed_file_count += 1
# adjust progress message to reflect remaining files
progress_message = result_message.sub(/\(#{@processed_file_count}\/\d+\)/, "(#{@processed_file_count}/#{@total_to_download})") if result_message
puts progress_message if progress_message
end
end
# append to DB only after a successful download, outside the connection block
if download_success
append_to_db(file_remote_info[:file_id])
end
rescue => e
@logger.error("Error processing file #{file_remote_info[:file_url]}: #{e.message}")
@download_mutex.synchronize do
@processed_file_count += 1
end
end
sleep(RATE_LIMIT)
end
end
end
def download_files
start_time = Time.now
puts "Downloading #{@base_url} to #{backup_path} from Wayback Machine archives."
FileUtils.mkdir_p(backup_path)
# Load the list of files to potentially download
files_to_download = file_list_by_timestamp
if files_to_download.empty?
puts "No files found matching criteria."
cleanup
return
end
total_files = files_to_download.count
puts "#{total_files} files found matching criteria."
# Load IDs of already downloaded files
downloaded_ids = load_downloaded_ids
files_to_process = files_to_download.reject do |file_info|
downloaded_ids.include?(file_info[:file_id])
end
remaining_count = files_to_process.count
skipped_count = total_files - remaining_count
if skipped_count > 0
puts "Found #{skipped_count} previously downloaded files, skipping them."
end
if remaining_count == 0
puts "All matching files have already been downloaded."
cleanup
return
end
puts "#{remaining_count} files to download:"
@processed_file_count = 0
@total_to_download = remaining_count
@download_mutex = Mutex.new
thread_count = [@threads_count, CONNECTION_POOL_SIZE].min
pool = Concurrent::FixedThreadPool.new(thread_count)
processing_files(pool, files_to_process)
pool.shutdown
pool.wait_for_termination
end_time = Time.now
puts "\nDownload finished in #{(end_time - start_time).round(2)}s."
# process subdomains if enabled
if @recursive_subdomains
subdomain_start_time = Time.now
process_subdomains
subdomain_end_time = Time.now
subdomain_time = (subdomain_end_time - subdomain_start_time).round(2)
puts "Subdomain processing finished in #{subdomain_time}s."
end
puts "Results saved in #{backup_path}"
cleanup
end
@@ -363,42 +622,116 @@ class WaybackMachineDownloader
end
end
def rewrite_urls_to_relative(file_path)
return unless File.exist?(file_path)
file_ext = File.extname(file_path).downcase
begin
content = File.binread(file_path)
if file_ext == '.html' || file_ext == '.htm'
encoding = content.match(/<meta\s+charset=["']?([^"'>]+)/i)&.captures&.first || 'UTF-8'
content.force_encoding(encoding) rescue content.force_encoding('UTF-8')
else
content.force_encoding('UTF-8')
end
# URLs in HTML attributes
rewrite_html_attr_urls(content)
# URLs in CSS
rewrite_css_urls(content)
# URLs in JavaScript
rewrite_js_urls(content)
# for URLs in HTML attributes that start with a single slash
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])\/([^"'\/][^"']*)(["'])/i) do
prefix, path, suffix = $1, $2, $3
"#{prefix}./#{path}#{suffix}"
end
# for URLs in CSS that start with a single slash
content.gsub!(/url\(\s*["']?\/([^"'\)\/][^"'\)]*?)["']?\s*\)/i) do
path = $1
"url(\"./#{path}\")"
end
# save the modified content back to the file
File.binwrite(file_path, content)
puts "Rewrote URLs in #{file_path} to be relative."
rescue Errno::ENOENT => e
@logger.warn("Error reading file #{file_path}: #{e.message}")
end
end
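The attribute rewrite above can be exercised in isolation: root-relative href/src values gain a leading "./", while protocol-relative "//" URLs are deliberately left alone by the `[^"'\/]` guard. A small sketch on hypothetical markup:

```ruby
html = %(<a href="/about.html">About</a> <img src="//cdn.example.com/x.png">)

# Same pattern as in rewrite_urls_to_relative: a single leading slash
# becomes "./", but a double slash (protocol-relative URL) is skipped.
rewritten = html.gsub(/(\s(?:href|src|action|data-src|data-url)=["'])\/([^"'\/][^"']*)(["'])/i) do
  "#{$1}./#{$2}#{$3}"
end
puts rewritten
# <a href="./about.html">About</a> <img src="//cdn.example.com/x.png">
```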
def download_file (file_remote_info, http)
current_encoding = "".encoding
file_url = file_remote_info[:file_url].encode(current_encoding)
file_id = file_remote_info[:file_id]
file_timestamp = file_remote_info[:timestamp]
# sanitize file_id to ensure it is a valid path component
raw_path_elements = file_id.split('/')
sanitized_path_elements = raw_path_elements.map do |element|
if Gem.win_platform?
# for Windows, we need to sanitize path components to avoid invalid characters
# this prevents issues with file names that contain characters not allowed in
# Windows file systems. See https://docs.microsoft.com/en-us/windows/win32/fileio/naming-a-file#naming-conventions
element.gsub(/[:\*?"<>\|\&\=\/\\]/) { |match| '%' + match.ord.to_s(16).upcase }
else
element
end
end
current_backup_path = backup_path
if file_id == ""
dir_path = current_backup_path
file_path = File.join(dir_path, 'index.html')
elsif file_url[-1] == '/' || (sanitized_path_elements.last && !sanitized_path_elements.last.include?('.'))
# if file_id is a directory, we treat it as such
dir_path = File.join(current_backup_path, *sanitized_path_elements)
file_path = File.join(dir_path, 'index.html')
else
# if file_id is a file, we treat it as such
filename = sanitized_path_elements.pop
dir_path = File.join(current_backup_path, *sanitized_path_elements)
file_path = File.join(dir_path, filename)
end
# check existence *before* the download attempt
# this handles cases where a file was created manually or by a previous partial run without a .db entry
if File.exist? file_path
return "#{file_url} # #{file_path} already exists. (#{@processed_file_count + 1}/#{@total_to_download})"
end
begin
structure_dir_path dir_path
status = download_with_retry(file_path, file_url, file_timestamp, http)
case status
when :saved
if @rewrite && File.extname(file_path) =~ /\.(html?|css|js)$/i
rewrite_urls_to_relative(file_path)
end
"#{file_url} -> #{file_path} (#{@processed_file_count + 1}/#{@total_to_download})"
when :skipped_not_found
"Skipped (not found): #{file_url} (#{@processed_file_count + 1}/#{@total_to_download})"
else
# ideally, this case should not be reached if download_with_retry behaves as expected
@logger.warn("Unknown status from download_with_retry for #{file_url}: #{status}")
"Unknown status for #{file_url}: #{status} (#{@processed_file_count + 1}/#{@total_to_download})"
end
rescue StandardError => e
msg = "Failed: #{file_url} # #{e} (#{@processed_file_count + 1}/#{@total_to_download})"
if File.exist?(file_path) and File.size(file_path) == 0
File.delete(file_path)
msg += "\n#{file_path} was empty and was removed."
end
msg
end
end
@@ -407,7 +740,22 @@ class WaybackMachineDownloader
end
def file_list_by_timestamp
if @snapshot_at
@file_list_by_snapshot_at ||= get_composite_snapshot_file_list(@snapshot_at)
elsif @all_timestamps
file_list_curated = get_file_list_all_timestamps
file_list_curated.map do |file_remote_info|
file_remote_info[1][:file_id] = file_remote_info[0]
file_remote_info[1]
end
else
file_list_curated = get_file_list_curated
file_list_curated = file_list_curated.sort_by { |_,v| v[:timestamp].to_s }.reverse
file_list_curated.map do |file_remote_info|
file_remote_info[1][:file_id] = file_remote_info[0]
file_remote_info[1]
end
end
end
private
@@ -426,7 +774,86 @@ class WaybackMachineDownloader
logger
end
# safely sanitize a file id (or id+timestamp)
def sanitize_and_prepare_id(raw, file_url)
return nil if raw.nil? || raw.empty?
original = raw.dup
begin
# work on a binary copy to avoid premature encoding errors
raw = raw.dup.force_encoding(Encoding::BINARY)
# percent-decode (repeat until stable in case of double-encoding)
loop do
decoded = raw.gsub(/%([0-9A-Fa-f]{2})/) { [$1].pack('H2') }
break if decoded == raw
raw = decoded
end
# try tidy_bytes
begin
raw = raw.tidy_bytes
rescue StandardError
# fallback: scrub to UTF-8
raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
end
# ensure UTF-8 and scrub again
unless raw.encoding == Encoding::UTF_8 && raw.valid_encoding?
raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
end
# strip HTML/comment artifacts & control chars
raw.gsub!(/<!--+/, '')
raw.gsub!(/[\x00-\x1F]/, '')
# split query; hash it for stable short name
path_part, query_part = raw.split('?', 2)
if query_part && !query_part.empty?
q_digest = Digest::SHA256.hexdigest(query_part)[0, 12]
if path_part.include?('.')
pre, _sep, post = path_part.rpartition('.')
path_part = "#{pre}__q#{q_digest}.#{post}"
else
path_part = "#{path_part}__q#{q_digest}"
end
end
raw = path_part
# collapse slashes & trim leading slash
raw.gsub!(%r{/+}, '/')
raw.sub!(%r{\A/}, '')
# segment-wise sanitation
raw = raw.split('/').map do |segment|
seg = segment.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
seg = seg.gsub(/[:*?"<>|\\]/) { |c| "%#{c.ord.to_s(16).upcase}" }
seg = seg.gsub(/[ .]+\z/, '') if Gem.win_platform?
seg.empty? ? '_' : seg
end.join('/')
# remove any remaining angle brackets
raw.tr!('<>', '')
# final fallback if empty
raw = "file__#{Digest::SHA1.hexdigest(original)[0,10]}" if raw.nil? || raw.empty?
raw
rescue => e
@logger&.warn("Failed to sanitize file id from #{file_url}: #{e.message}")
# deterministic fallback: never return nil, so the caller won't mark it malformed
"file__#{Digest::SHA1.hexdigest(original)[0,10]}"
end
end
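The query-string handling above hashes the query so that, for example, "page.php?id=1" and "page.php?id=2" map to distinct, filesystem-safe names. A sketch of just that step, on a hypothetical URL tail:

```ruby
require 'digest'

raw = "search.php?q=ruby&page=2"
# Split off the query and fold it into a short, stable digest suffix,
# inserted before the extension, as sanitize_and_prepare_id does.
path_part, query_part = raw.split('?', 2)
q_digest = Digest::SHA256.hexdigest(query_part)[0, 12]
pre, _sep, post = path_part.rpartition('.')
result = "#{pre}__q#{q_digest}.#{post}"
puts result
```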
# wrap URL in parentheses if it contains characters that commonly break unquoted
# Windows CMD usage (e.g., &). This is only for display; user still must quote
# when invoking manually.
def safe_display_url(url)
return url unless url && url.match?(/[&]/)
"(#{url})"
end
def download_with_retry(file_path, file_url, file_timestamp, connection, redirect_count = 0)
retries = 0
begin
wayback_url = if @rewritten
@@ -435,26 +862,74 @@ class WaybackMachineDownloader
"https://web.archive.org/web/#{file_timestamp}id_/#{file_url}"
end
# Escape square brackets because they are not valid in URI()
wayback_url = wayback_url.gsub('[', '%5B').gsub(']', '%5D')
# reject invalid/unencoded wayback_url, behaving as if the resource weren't found
if not @url_regexp.match?(wayback_url)
@logger.warn("Skipped #{file_url}: invalid URL")
return :skipped_not_found
end
request = Net::HTTP::Get.new(URI(wayback_url))
request["Connection"] = "keep-alive"
request["User-Agent"] = "WaybackMachineDownloader/#{VERSION}"
request["Accept-Encoding"] = "gzip, deflate"
response = connection.request(request)
save_response_body = lambda do
File.open(file_path, "wb") do |file|
body = response.body
if response['content-encoding'] == 'gzip' && body && !body.empty?
begin
gz = Zlib::GzipReader.new(StringIO.new(body))
decompressed_body = gz.read
gz.close
file.write(decompressed_body)
rescue Zlib::GzipFile::Error => e
@logger.warn("Failure decompressing gzip file #{file_url}: #{e.message}. Writing raw body.")
file.write(body)
end
else
file.write(body) if body
end
end
end
if @all
case response
when Net::HTTPSuccess, Net::HTTPRedirection, Net::HTTPClientError, Net::HTTPServerError
save_response_body.call
if response.is_a?(Net::HTTPRedirection)
@logger.info("Saved redirect page for #{file_url} (status #{response.code}).")
elsif response.is_a?(Net::HTTPClientError) || response.is_a?(Net::HTTPServerError)
@logger.info("Saved error page for #{file_url} (status #{response.code}).")
end
return :saved
else
# for any other response type when --all is true, treat as an error to be retried or failed
raise "Unhandled HTTP response: #{response.code} #{response.message}"
end
else # not @all (the default behavior)
case response
when Net::HTTPSuccess
save_response_body.call
return :saved
when Net::HTTPRedirection
raise "Too many redirects for #{file_url}" if redirect_count >= 2
location = response['location']
@logger.warn("Redirect found for #{file_url} -> #{location}")
return download_with_retry(file_path, location, file_timestamp, connection, redirect_count + 1)
when Net::HTTPTooManyRequests
sleep(RATE_LIMIT * 2)
raise "Rate limited, retrying..."
when Net::HTTPNotFound
@logger.warn("File not found, skipping: #{file_url}")
return :skipped_not_found
else
raise "HTTP Error: #{response.code} #{response.message}"
end
end
rescue StandardError => e
@@ -474,10 +949,23 @@ class WaybackMachineDownloader
@connection_pool.shutdown
if @failed_downloads.any?
@logger.error("Download completed with errors.")
@logger.error("Failed downloads summary:")
@failed_downloads.each do |failure|
@logger.error("  #{failure[:url]} - #{failure[:error]}")
end
unless @reset
puts "State files kept due to download errors: #{cdx_path}, #{db_path}"
return
end
end
if !@keep || @reset
puts "Cleaning up state files..." unless @keep && !@reset
FileUtils.rm_f(cdx_path)
FileUtils.rm_f(db_path)
elsif @keep
puts "Keeping state files as requested: #{cdx_path}, #{db_path}"
end
end
end


@@ -4,18 +4,28 @@ require 'uri'
module ArchiveAPI
def get_raw_list_from_api(url, page_index, http)
# Automatically append /* if the URL doesn't contain a path after the domain
# This is a workaround for an issue with the API and *some* domains.
# See https://github.com/StrawberryMaster/wayback-machine-downloader/issues/6
# But don't do this when the exact_url flag is set
if url && !url.match(/^https?:\/\/.*\//i) && !@exact_url
url = "#{url}/*"
end
request_url = URI("https://web.archive.org/cdx/search/cdx")
params = [["output", "json"], ["url", url]] + parameters_for_api(page_index)
request_url.query = URI.encode_www_form(params)
begin
response = http.get(request_url)
body = response.body.to_s.strip
return [] if body.empty?
json = JSON.parse(body)
# Check if the response contains the header ["timestamp", "original"]
json.shift if json.first == ["timestamp", "original"]
json
rescue JSON::ParserError => e
warn "Failed to fetch data from API: #{e.message}"
[]
end
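How the CDX request URL is assembled can be shown without a network call; the extra `parameters_for_api` options are omitted here for brevity:

```ruby
require 'uri'

# Build the CDX query string the same way as above; note that
# URI.encode_www_form percent-encodes "/" but leaves "*" intact.
url = "example.com/*"
request_url = URI("https://web.archive.org/cdx/search/cdx")
request_url.query = URI.encode_www_form([["output", "json"], ["url", url]])
puts request_url
```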


@@ -0,0 +1,238 @@
# frozen_string_literal: true
module SubdomainProcessor
def process_subdomains
return unless @recursive_subdomains
puts "Starting subdomain processing..."
# extract base domain from the URL for comparison
base_domain = extract_base_domain(@base_url)
@processed_domains = Set.new([base_domain])
@subdomain_queue = Queue.new
# scan downloaded files for subdomain links
initial_files = Dir.glob(File.join(backup_path, "**/*.{html,htm,css,js}"))
puts "Scanning #{initial_files.size} downloaded files for subdomain links..."
subdomains_found = scan_files_for_subdomains(initial_files, base_domain)
if subdomains_found.empty?
puts "No subdomains found in downloaded content."
return
end
puts "Found #{subdomains_found.size} subdomains to process: #{subdomains_found.join(', ')}"
# add found subdomains to the queue
subdomains_found.each do |subdomain|
full_domain = "#{subdomain}.#{base_domain}"
@subdomain_queue << "https://#{full_domain}/"
end
# process the subdomain queue
download_subdomains(base_domain)
# after all downloads, rewrite all URLs to make local references
rewrite_subdomain_links(base_domain) if @rewrite
end
private
def extract_base_domain(url)
uri = URI.parse(url.gsub(/^https?:\/\//, '').split('/').first) rescue nil
return nil unless uri
host = uri.host || uri.path.split('/').first
host = host.downcase
# extract the base domain (e.g., "example.com" from "sub.example.com")
parts = host.split('.')
return host if parts.size <= 2
# for domains like co.uk, we want to keep the last 3 parts
if parts[-2].length <= 3 && parts[-1].length <= 3 && parts.size > 2
parts.last(3).join('.')
else
parts.last(2).join('.')
end
end
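The base-domain heuristic above keeps two labels, or three when the second-level label is short (co.uk-style registries). A standalone sketch of just that rule, with the URI parsing stripped out:

```ruby
# Sketch of the extract_base_domain heuristic: short second- and top-level
# labels (both <= 3 chars) indicate a compound registry like "co.uk".
def base_domain(host)
  parts = host.downcase.split('.')
  return host if parts.size <= 2
  if parts[-2].length <= 3 && parts[-1].length <= 3
    parts.last(3).join('.')
  else
    parts.last(2).join('.')
  end
end

puts base_domain("blog.example.co.uk")  # example.co.uk
puts base_domain("sub.example.com")     # example.com
```

Like any suffix heuristic, this misfires on registries that don't follow the short-label pattern; a public-suffix list would be the exhaustive alternative.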
def scan_files_for_subdomains(files, base_domain)
return [] unless base_domain
subdomains = Set.new
files.each do |file_path|
next unless File.exist?(file_path)
begin
content = File.read(file_path)
# extract URLs from HTML href/src attributes
content.scan(/(?:href|src|action|data-src)=["']https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
subdomain = match[0].downcase
next if subdomain == 'www' # skip www subdomain
subdomains.add(subdomain)
end
# extract URLs from CSS
content.scan(/url\(["']?https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
subdomain = match[0].downcase
next if subdomain == 'www' # skip www subdomain
subdomains.add(subdomain)
end
# extract URLs from JavaScript strings
content.scan(/["']https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
subdomain = match[0].downcase
next if subdomain == 'www' # skip www subdomain
subdomains.add(subdomain)
end
rescue => e
puts "Error scanning file #{file_path}: #{e.message}"
end
end
subdomains.to_a
end
def download_subdomains(base_domain)
puts "Starting subdomain downloads..."
depth = 0
max_depth = @subdomain_depth || 1
while depth < max_depth && !@subdomain_queue.empty?
current_batch = []
# get all subdomains at current depth
while !@subdomain_queue.empty?
current_batch << @subdomain_queue.pop
end
puts "Processing #{current_batch.size} subdomains at depth #{depth + 1}..."
# download each subdomain
current_batch.each do |subdomain_url|
download_subdomain(subdomain_url, base_domain)
end
# if we need to go deeper, scan the newly downloaded files
if depth + 1 < max_depth
# get all files in the subdomains directory
new_files = Dir.glob(File.join(backup_path, "subdomains", "**/*.{html,htm,css,js}"))
new_subdomains = scan_files_for_subdomains(new_files, base_domain)
# filter out already processed subdomains
new_subdomains.each do |subdomain|
full_domain = "#{subdomain}.#{base_domain}"
unless @processed_domains.include?(full_domain)
@processed_domains.add(full_domain)
@subdomain_queue << "https://#{full_domain}/"
end
end
puts "Found #{@subdomain_queue.size} new subdomains at depth #{depth + 1}" if !@subdomain_queue.empty?
end
depth += 1
end
end
def download_subdomain(subdomain_url, base_domain)
begin
uri = URI.parse(subdomain_url)
subdomain_host = uri.host
# skip if already processed
if @processed_domains.include?(subdomain_host)
puts "Skipping already processed subdomain: #{subdomain_host}"
return
end
@processed_domains.add(subdomain_host)
puts "Downloading subdomain: #{subdomain_url}"
# create the directory for this subdomain
subdomain_dir = File.join(backup_path, "subdomains", subdomain_host)
FileUtils.mkdir_p(subdomain_dir)
# create subdomain downloader with appropriate options
subdomain_options = {
base_url: subdomain_url,
directory: subdomain_dir,
from_timestamp: @from_timestamp,
to_timestamp: @to_timestamp,
all: @all,
threads_count: @threads_count,
maximum_pages: [@maximum_pages / 2, 10].max,
rewrite: @rewrite,
# don't recursively process subdomains from here
recursive_subdomains: false
}
# download the subdomain content
subdomain_downloader = WaybackMachineDownloader.new(subdomain_options)
subdomain_downloader.download_files
puts "Completed download of subdomain: #{subdomain_host}"
rescue => e
puts "Error downloading subdomain #{subdomain_url}: #{e.message}"
end
end
def rewrite_subdomain_links(base_domain)
puts "Rewriting all files to use local subdomain references..."
all_files = Dir.glob(File.join(backup_path, "**/*.{html,htm,css,js}"))
subdomains = @processed_domains.reject { |domain| domain == base_domain }
puts "Found #{all_files.size} files to check for rewriting"
puts "Will rewrite links for subdomains: #{subdomains.join(', ')}"
rewritten_count = 0
all_files.each do |file_path|
next unless File.exist?(file_path)
begin
content = File.read(file_path)
original_content = content.dup
# replace subdomain URLs with local paths
subdomains.each do |subdomain_host|
# for HTML attributes (href, src, etc.)
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/#{Regexp.escape(subdomain_host)}([^"']*)(["'])/i) do
prefix, path, suffix = $1, $2, $3
path = "/index.html" if path.empty? || path == "/"
"#{prefix}../subdomains/#{subdomain_host}#{path}#{suffix}"
end
# for CSS url()
content.gsub!(/url\(\s*["']?https?:\/\/#{Regexp.escape(subdomain_host)}([^"'\)]*?)["']?\s*\)/i) do
path = $1
path = "/index.html" if path.empty? || path == "/"
"url(\"../subdomains/#{subdomain_host}#{path}\")"
end
# for JavaScript strings
content.gsub!(/(["'])https?:\/\/#{Regexp.escape(subdomain_host)}([^"']*)(["'])/i) do
quote_start, path, quote_end = $1, $2, $3
path = "/index.html" if path.empty? || path == "/"
"#{quote_start}../subdomains/#{subdomain_host}#{path}#{quote_end}"
end
end
# save if modified
if content != original_content
File.write(file_path, content)
rewritten_count += 1
end
rescue => e
puts "Error rewriting file #{file_path}: #{e.message}"
end
end
puts "Rewrote links in #{rewritten_count} files"
end
end


@@ -1,140 +1,74 @@
# frozen_string_literal: true # frozen_string_literal: true
# essentially, this is for converting a string with a potentially
# broken or unknown encoding into a valid UTF-8 string
# @todo: consider using charlock_holmes for this in the future
module TidyBytes module TidyBytes
# Using a frozen array so we have a O(1) lookup time UNICODE_REPLACEMENT_CHARACTER = "<EFBFBD>"
CP1252_MAP = Array.new(160) do |i|
case i # common encodings to try for best multilingual compatibility
when 128 then [226, 130, 172] COMMON_ENCODINGS = [
when 130 then [226, 128, 154] Encoding::UTF_8,
when 131 then [198, 146] Encoding::Windows_1251, # Cyrillic/Russian legacy
when 132 then [226, 128, 158] Encoding::GB18030, # Simplified Chinese
when 133 then [226, 128, 166] Encoding::Shift_JIS, # Japanese
when 134 then [226, 128, 160] Encoding::EUC_KR, # Korean
when 135 then [226, 128, 161] Encoding::ISO_8859_1, # Western European
when 136 then [203, 134] Encoding::Windows_1252 # Western European/Latin1 superset
when 137 then [226, 128, 176] ].select { |enc| Encoding.name_list.include?(enc.name) }
when 138 then [197, 160]
when 139 then [226, 128, 185] # returns true if the string appears to be binary (has null bytes)
when 140 then [197, 146] def binary_data?
when 142 then [197, 189] self.include?("\x00".b)
when 145 then [226, 128, 152] end
when 146 then [226, 128, 153]
when 147 then [226, 128, 156] # attempts to return a valid UTF-8 version of the string
when 148 then [226, 128, 157] def tidy_bytes
when 149 then [226, 128, 162] return self if self.encoding == Encoding::UTF_8 && self.valid_encoding?
when 150 then [226, 128, 147] return self.dup.force_encoding("BINARY") if binary_data?
when 151 then [226, 128, 148]
when 152 then [203, 156] str = self.dup
when 153 then [226, 132, 162] COMMON_ENCODINGS.each do |enc|
when 154 then [197, 161] str.force_encoding(enc)
when 155 then [226, 128, 186] begin
when 156 then [197, 147] utf8 = str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
when 158 then [197, 190] return utf8 if utf8.valid_encoding? && !utf8.include?(UNICODE_REPLACEMENT_CHARACTER)
when 159 then [197, 184] rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
# try next encoding
end
end end
end.freeze
# if no clean conversion found, try again but accept replacement characters
str = self.dup
COMMON_ENCODINGS.each do |enc|
str.force_encoding(enc)
begin
utf8 = str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
return utf8 if utf8.valid_encoding?
rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
# try next encoding
end
end
# fallback: replace all invalid/undefined bytes
str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
end
def tidy_bytes!
replace(self.tidy_bytes)
end
# (new implementation)
  def self.included(base)
    base.send(:include, InstanceMethods)
  end

  module InstanceMethods
    def tidy_bytes
      TidyBytes.instance_method(:tidy_bytes).bind(self).call
    end

    def tidy_bytes!
      TidyBytes.instance_method(:tidy_bytes!).bind(self).call
    end
  end
end

# (previous implementation)
  def self.included(base)
    base.class_eval do
      private

      def tidy_byte(byte)
        if byte < 160
          CP1252_MAP[byte]
        else
          byte < 192 ? [194, byte] : [195, byte - 64]
        end
      end

      public
# Attempt to replace invalid UTF-8 bytes with valid ones. This method
# naively assumes if you have invalid UTF8 bytes, they are either Windows
# CP-1252 or ISO8859-1. In practice this isn't a bad assumption, but may not
# always work.
#
# Passing +true+ will forcibly tidy all bytes, assuming that the string's
# encoding is CP-1252 or ISO-8859-1.
def tidy_bytes(force = false)
return nil if empty?
if force
buffer = String.new(capacity: bytesize)
each_byte do |b|
cleaned = tidy_byte(b)
buffer << cleaned.pack("C*") if cleaned
end
return buffer.force_encoding(Encoding::UTF_8)
end
buffer = String.new(capacity: bytesize)
bytes = each_byte.to_a
conts_expected = 0
last_lead = 0
bytes.each_with_index do |byte, i|
if byte < 128 # ASCII
buffer << byte
next
end
if byte > 244 || byte > 240 # invalid bytes
cleaned = tidy_byte(byte)
buffer << cleaned.pack("C*") if cleaned
next
end
is_cont = byte > 127 && byte < 192
is_lead = byte > 191 && byte < 245
if is_cont
# Not expecting continuation byte? Clean up. Otherwise, now expect one less.
if conts_expected == 0
cleaned = tidy_byte(byte)
buffer << cleaned.pack("C*") if cleaned
else
buffer << byte
conts_expected -= 1
end
else
if conts_expected > 0
# Expected continuation, but got ASCII or leading? Clean backwards up to
# the leading byte.
(1..(i - last_lead)).each do |j|
back_byte = bytes[i - j]
cleaned = tidy_byte(back_byte)
buffer << cleaned.pack("C*") if cleaned
end
conts_expected = 0
end
if is_lead
# Final byte is leading? Clean it.
if i == bytes.length - 1
cleaned = tidy_byte(byte)
buffer << cleaned.pack("C*") if cleaned
else
# Valid leading byte? Expect continuations determined by position of
# first zero bit, with max of 3.
buffer << byte
conts_expected = byte < 224 ? 1 : byte < 240 ? 2 : 3
last_lead = i
end
end
end
end
buffer.force_encoding(Encoding::UTF_8)
rescue
nil
end
# Tidy bytes in place.
def tidy_bytes!(force = false)
result = tidy_bytes(force)
result ? replace(result) : self
end
    end
  end
end
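The fallback strategy in the new `tidy_bytes` — force each candidate encoding onto the raw bytes and keep the first conversion that yields clean UTF-8 — can be exercised in isolation. A minimal sketch of the idea (the `to_utf8` name and the shortened candidate list are ours, for illustration; because both `invalid:` and `undef:` replacements are supplied, `encode` never raises here, so the rescue clause from the full implementation is omitted):

```ruby
# Minimal sketch of the encoding-fallback idea: try candidate encodings
# in order and return the first conversion that produces valid UTF-8
# without any U+FFFD replacement characters.
CANDIDATES = [Encoding::UTF_8, Encoding::Shift_JIS, Encoding::ISO_8859_1].freeze

def to_utf8(raw)
  CANDIDATES.each do |enc|
    utf8 = raw.dup.force_encoding(enc)
              .encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: "\uFFFD")
    # reject conversions that had to substitute replacement characters
    return utf8 if utf8.valid_encoding? && !utf8.include?("\uFFFD")
  end
  # last resort: keep what we can, replacement characters and all
  raw.dup.force_encoding(Encoding::ISO_8859_1)
     .encode(Encoding::UTF_8, invalid: :replace, undef: :replace)
end
```

With this ordering, `"caf\xE9"` (Latin-1 bytes) fails the UTF-8 and Shift_JIS passes but converts cleanly as ISO-8859-1, while already-valid UTF-8 input passes through on the first candidate.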

View File

@@ -0,0 +1,74 @@
# frozen_string_literal: true
# URLs in HTML attributes
def rewrite_html_attr_urls(content)
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
prefix, url, suffix = $1, $2, $3
if url.start_with?('http')
begin
uri = URI.parse(url)
path = uri.path
path = path[1..-1] if path.start_with?('/')
"#{prefix}#{path}#{suffix}"
rescue
"#{prefix}#{url}#{suffix}"
end
elsif url.start_with?('/')
"#{prefix}./#{url[1..-1]}#{suffix}"
else
"#{prefix}#{url}#{suffix}"
end
end
content
end
# URLs in CSS
def rewrite_css_urls(content)
content.gsub!(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"'\)]+)["']?\s*\)/i) do
url = $1
if url.start_with?('http')
begin
uri = URI.parse(url)
path = uri.path
path = path[1..-1] if path.start_with?('/')
"url(\"#{path}\")"
rescue
"url(\"#{url}\")"
end
elsif url.start_with?('/')
"url(\"./#{url[1..-1]}\")"
else
"url(\"#{url}\")"
end
end
content
end
# URLs in JavaScript
def rewrite_js_urls(content)
content.gsub!(/(["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
quote_start, url, quote_end = $1, $2, $3
if url.start_with?('http')
begin
uri = URI.parse(url)
path = uri.path
path = path[1..-1] if path.start_with?('/')
"#{quote_start}#{path}#{quote_end}"
rescue
"#{quote_start}#{url}#{quote_end}"
end
elsif url.start_with?('/')
"#{quote_start}./#{url[1..-1]}#{quote_end}"
else
"#{quote_start}#{url}#{quote_end}"
end
end
content
end
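All three rewriters apply the same transformation: strip the `https://web.archive.org/web/<timestamp>/` prefix from a snapshot URL and emit a site-relative path. The CSS variant can be exercised standalone — a sketch using the same regex and branching, with our own hypothetical `rewrite_css_snippet` wrapper and sample input:

```ruby
require 'uri'

# Reduce Wayback Machine snapshot URLs inside CSS url(...) references
# to the original site's path, so saved assets resolve locally.
def rewrite_css_snippet(css)
  css.gsub(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"'\)]+)["']?\s*\)/i) do
    url = Regexp.last_match(1)
    if url.start_with?('http')
      # absolute original URL: keep only its path component
      path = URI.parse(url).path rescue url
      path = path[1..-1] if path.start_with?('/')
      "url(\"#{path}\")"
    elsif url.start_with?('/')
      # root-relative URL: make it explicitly relative
      "url(\"./#{url[1..-1]}\")"
    else
      "url(\"#{url}\")"
    end
  end
end
```

For example, `url(https://web.archive.org/web/20200101000000/http://example.com/img/bg.png)` becomes `url("img/bg.png")`.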

View File

@@ -1,17 +1,15 @@
-require './lib/wayback_machine_downloader'
 Gem::Specification.new do |s|
-  s.name = "wayback_machine_downloader"
+  s.name = "wayback_machine_downloader_straw"
-  s.version = WaybackMachineDownloader::VERSION
+  s.version = "2.4.2"
   s.executables << "wayback_machine_downloader"
   s.summary = "Download an entire website from the Wayback Machine."
-  s.description = "Download an entire website from the Wayback Machine. Wayback Machine by Internet Archive (archive.org) is an awesome tool to view any website at any point of time but lacks an export feature. Wayback Machine Downloader brings exactly this."
+  s.description = "Download complete websites from the Internet Archive's Wayback Machine. While the Wayback Machine (archive.org) excellently preserves web history, it lacks a built-in export functionality; this gem does just that, allowing you to download entire archived websites. (This is a significant rewrite of the original wayback_machine_downloader gem by hartator, with enhanced features and performance improvements.)"
-  s.authors = ["hartator"]
+  s.authors = ["strawberrymaster"]
-  s.email = "hartator@gmail.com"
+  s.email = "strawberrymaster@vivaldi.net"
-  s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb"]
+  s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb", "lib/wayback_machine_downloader/subdom_processor.rb", "lib/wayback_machine_downloader/url_rewrite.rb"]
-  s.homepage = "https://github.com/hartator/wayback-machine-downloader"
+  s.homepage = "https://github.com/StrawberryMaster/wayback-machine-downloader"
   s.license = "MIT"
-  s.required_ruby_version = ">= 1.9.2"
+  s.required_ruby_version = ">= 3.4.3"
   s.add_runtime_dependency "concurrent-ruby", "~> 1.3", ">= 1.3.4"
   s.add_development_dependency "rake", "~> 12.2"
   s.add_development_dependency "minitest", "~> 5.2"