mirror of https://github.com/StrawberryMaster/wayback-machine-downloader.git
synced 2025-12-17 17:56:44 +00:00

Compare commits (20 commits)
| SHA1 |
|---|
| b2fc748c2c |
| 8632050c45 |
| 2aa694eed0 |
| 4d2513eca8 |
| 67685b781e |
| f7c0f1a964 |
| 99da3ca48e |
| 34f22c128c |
| 71bdc7c2de |
| 4b1ec1e1cc |
| d7a63361e3 |
| b1974a8dfa |
| 012b295aed |
| dec9083b43 |
| c517bd20d3 |
| fc8d8a9441 |
| fa306ac92b |
| 8c27aaebc9 |
| 40e9c9bb51 |
| 6bc08947b7 |
README.md
````diff
@@ -9,7 +9,7 @@ Included here is partial content from other forks, namely those @[ShiftaDeband]
 Download a website's latest snapshot:
 ```bash
-ruby wayback_machine_downloader https://example.com
+wayback_machine_downloader https://example.com
 ```
 Your files will save to `./websites/example.com/` with their original structure preserved.
 
````
````diff
@@ -27,6 +27,7 @@ To run most commands, just like in the original WMD, you can use:
 ```bash
 wayback_machine_downloader https://example.com
 ```
+Do note that you can also manually download this repository and run commands here by appending `ruby` before a command, e.g. `ruby wayback_machine_downloader https://example.com`.
 **Note**: this gem may conflict with hartator's wayback_machine_downloader gem, and so you may have to uninstall it for this WMD fork to work. A good way to know is if a command fails; it will list the gem version as 2.3.1 or earlier, while this WMD fork uses 2.3.2 or above.
 
 ### Step-by-step setup
````
````diff
@@ -63,15 +64,14 @@ docker build -t wayback_machine_downloader .
 docker run -it --rm wayback_machine_downloader [options] URL
 ```
 
-or the example without cloning the repo - fetching smallrockets.com until the year 2013:
+As an example of how this works without cloning this repo, this command fetches smallrockets.com until the year 2013:
 
 ```bash
-docker run -v .:/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
+docker run -v .:/build/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
 ```
 
 ### 🐳 Using Docker Compose
 
-We can also use it with Docker Compose, which provides a lot of benefits for extending more functionalities (such as implementing storing previous downloads in a database):
+You can also use Docker Compose, which provides a lot of benefits for extending more functionalities (such as implementing storing previous downloads in a database):
 
 ```yaml
 # docker-compose.yml
 services:
````
````diff
@@ -120,6 +120,7 @@ STATE_DB_FILENAME = '.downloaded.txt' # Tracks completed downloads
 | `-t TS`, `--to TS` | Stop at timestamp |
 | `-e`, `--exact-url` | Download exact URL only |
 | `-r`, `--rewritten` | Download rewritten Wayback Archive files only |
+| `-rt`, `--retry NUM` | Number of tries in case a download fails (default: 1) |
 
 **Example** - Download files to `downloaded-backup` folder
 ```bash
````
````diff
@@ -165,6 +166,8 @@ ruby wayback_machine_downloader https://example.com --rewritten
 ```
 Useful if you want to download the rewritten files from the Wayback Machine instead of the original ones.
 
+---
+
 ### Filtering Content
 | Option | Description |
 |--------|-------------|
````
````diff
@@ -199,6 +202,8 @@ Or if you want to download everything except images:
 ruby wayback_machine_downloader https://example.com --exclude "/\.(gif|jpg|jpeg)$/i"
 ```
 
+---
+
 ### Performance
 | Option | Description |
 |--------|-------------|
````
````diff
@@ -213,10 +218,12 @@ Will specify the number of multiple files you want to download at the same time.
 
 **Example 2** - 300 snapshot pages:
 ```bash
-ruby wayback_machine_downloader https://example.com --snapshot-pages 300
+ruby wayback_machine_downloader https://example.com --maximum-snapshot 300
 ```
 Will specify the maximum number of snapshot pages to consider. Count an average of 150,000 snapshots per page. 100 is the default maximum number of snapshot pages and should be sufficient for most websites. Use a bigger number if you want to download a very large website.
 
+---
+
 ### Diagnostics
 | Option | Description |
 |--------|-------------|
````
````diff
@@ -235,6 +242,8 @@ ruby wayback_machine_downloader https://example.com --list
 ```
 It will just display the files to be downloaded with their snapshot timestamps and urls. The output format is JSON. It won't download anything. It's useful for debugging or to connect to another application.
 
+---
+
 ### Job management
 The downloader automatically saves its progress (`.cdx.json` for snapshot list, `.downloaded.txt` for completed files) in the output directory. If you run the same command again pointing to the same output directory, it will resume where it left off, skipping already downloaded files.
 
````
````diff
@@ -258,6 +267,47 @@ ruby wayback_machine_downloader https://example.com --keep
 ```
 This can be useful for debugging or if you plan to extend the download later with different parameters (e.g., adding `--to` timestamp) while leveraging the existing snapshot list.
 
+---
+
+## Troubleshooting
+
+### SSL certificate errors
+If you encounter an SSL error like:
+```
+SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get certificate CRL)
+```
+
+This is a known issue with **OpenSSL 3.6.0** when used with certain Ruby installations, and not a bug with this WMD fork specifically. (See [ruby/openssl#949](https://github.com/ruby/openssl/issues/949) for details.)
+
+The workaround is to create a file named `fix_ssl_store.rb` with the following content:
+```ruby
+require "openssl"
+store = OpenSSL::X509::Store.new.tap(&:set_default_paths)
+OpenSSL::SSL::SSLContext::DEFAULT_PARAMS[:cert_store] = store
+```
+
+and run wayback-machine-downloader with:
+```bash
+RUBYOPT="-r./fix_ssl_store.rb" wayback_machine_downloader "http://example.com"
+```
+
+#### Verifying the issue
+You can test if your Ruby environment has this issue by running:
+
+```ruby
+require "net/http"
+require "uri"
+
+uri = URI("https://web.archive.org/")
+Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
+  resp = http.get("/")
+  puts "GET / => #{resp.code}"
+end
+```
+
+If this fails with the same SSL error, the workaround above will fix it.
+
+---
+
 ## 🤝 Contributing
 1. Fork the repository
 2. Create a feature branch
````
```diff
@@ -74,6 +74,10 @@ option_parser = OptionParser.new do |opts|
     options[:keep] = true
   end
 
+  opts.on("--rt", "--retry N", Integer, "Maximum number of retries for failed downloads (default: 3)") do |t|
+    options[:max_retries] = t
+  end
+
   opts.on("--recursive-subdomains", "Recursively download content from subdomains") do |t|
     options[:recursive_subdomains] = true
   end
```
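The new `--retry` and `--page-requisites` flags above are stock OptionParser wiring: an `Integer` coercion for the retry count and a boolean toggle. A minimal standalone sketch of that pattern (a hypothetical script, not the fork's actual bin file):

```ruby
require 'optparse'

# Parse a numeric retry flag and a boolean toggle into an options hash,
# mirroring the shape of the fork's option_parser block.
options = {}
parser = OptionParser.new do |opts|
  opts.on("--retry N", Integer, "Maximum number of retries for failed downloads") do |n|
    options[:max_retries] = n # OptionParser has already coerced n to Integer
  end
  opts.on("--page-requisites", "Download related assets for downloaded HTML pages") do
    options[:page_requisites] = true
  end
end

parser.parse!(["--retry", "5", "--page-requisites"])
options # => {:max_retries=>5, :page_requisites=>true}
```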
```diff
@@ -82,6 +86,10 @@ option_parser = OptionParser.new do |opts|
     options[:subdomain_depth] = t
   end
 
+  opts.on("--page-requisites", "Download related assets (images, css, js) for downloaded HTML pages") do |t|
+    options[:page_requisites] = true
+  end
+
   opts.on("-v", "--version", "Display version") do |t|
     options[:version] = t
   end
```
```diff
@@ -11,9 +11,11 @@ require 'concurrent-ruby'
 require 'logger'
 require 'zlib'
 require 'stringio'
+require 'digest'
 require_relative 'wayback_machine_downloader/tidy_bytes'
 require_relative 'wayback_machine_downloader/to_regex'
 require_relative 'wayback_machine_downloader/archive_api'
+require_relative 'wayback_machine_downloader/page_requisites'
 require_relative 'wayback_machine_downloader/subdom_processor'
 require_relative 'wayback_machine_downloader/url_rewrite'
 
```
```diff
@@ -24,58 +26,90 @@ class ConnectionPool
   MAX_RETRIES = 3
 
   def initialize(size)
-    @size = size
-    @pool = Concurrent::Map.new
-    @creation_times = Concurrent::Map.new
+    @pool = SizedQueue.new(size)
+    size.times { @pool << build_connection_entry }
     @cleanup_thread = schedule_cleanup
   end
 
-  def with_connection(&block)
-    conn = acquire_connection
+  def with_connection
+    entry = acquire_connection
     begin
-      yield conn
+      yield entry[:http]
     ensure
-      release_connection(conn)
+      release_connection(entry)
     end
   end
 
   def shutdown
     @cleanup_thread&.exit
-    @pool.each_value { |conn| conn.finish if conn&.started? }
-    @pool.clear
-    @creation_times.clear
+    drain_pool { |entry| safe_finish(entry[:http]) }
   end
 
   private
 
   def acquire_connection
-    thread_id = Thread.current.object_id
-    conn = @pool[thread_id]
-
-    if should_create_new?(conn)
-      conn&.finish if conn&.started?
-      conn = create_connection
-      @pool[thread_id] = conn
-      @creation_times[thread_id] = Time.now
+    entry = @pool.pop
+    if stale?(entry)
+      safe_finish(entry[:http])
+      entry = build_connection_entry
     end
-
-    conn
+    entry
   end
 
-  def release_connection(conn)
-    return unless conn
-    if conn.started? && Time.now - @creation_times[Thread.current.object_id] > MAX_AGE
-      conn.finish
-      @pool.delete(Thread.current.object_id)
-      @creation_times.delete(Thread.current.object_id)
+  def release_connection(entry)
+    if stale?(entry)
+      safe_finish(entry[:http])
+      entry = build_connection_entry
     end
+    @pool << entry
+  end
+
+  def stale?(entry)
+    http = entry[:http]
+    !http.started? || (Time.now - entry[:created_at] > MAX_AGE)
+  end
+
+  def build_connection_entry
+    { http: create_connection, created_at: Time.now }
+  end
+
+  def safe_finish(http)
+    http.finish if http&.started?
+  rescue StandardError
+    nil
+  end
+
+  def drain_pool
+    loop do
+      entry = begin
+        @pool.pop(true)
+      rescue ThreadError
+        break
+      end
+      yield(entry)
+    end
   end
 
-  def should_create_new?(conn)
-    return true if conn.nil?
-    return true unless conn.started?
-    return true if Time.now - @creation_times[Thread.current.object_id] > MAX_AGE
-    false
+  def cleanup_old_connections
+    entry = begin
+      @pool.pop(true)
+    rescue ThreadError
+      return
+    end
+    if stale?(entry)
+      safe_finish(entry[:http])
+      entry = build_connection_entry
+    end
+    @pool << entry
+  end
+
+  def schedule_cleanup
+    Thread.new do
+      loop do
+        cleanup_old_connections
+        sleep CLEANUP_INTERVAL
+      end
+    end
   end
 
   def create_connection
```
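The ConnectionPool rewrite above replaces per-thread maps with a bounded `SizedQueue` of timestamped entries, so checkout blocks when the pool is exhausted and stale entries are rebuilt on checkout. A stdlib-only sketch of that pattern (illustrative names, not the fork's exact API):

```ruby
# A bounded pool: SizedQueue#pop blocks once all entries are checked out,
# and entries older than MAX_AGE are rebuilt instead of being reused.
class TinyPool
  MAX_AGE = 300 # seconds before an entry is considered stale

  def initialize(size, &factory)
    @factory = factory
    @pool = SizedQueue.new(size)
    size.times { @pool << build_entry }
  end

  def with_resource
    entry = @pool.pop
    entry = build_entry if stale?(entry) # recycle stale entries on checkout
    yield entry[:resource]
  ensure
    @pool << entry if entry # always return an entry so the pool never shrinks
  end

  private

  def stale?(entry)
    Time.now - entry[:created_at] > MAX_AGE
  end

  def build_entry
    { resource: @factory.call, created_at: Time.now }
  end
end

pool = TinyPool.new(2) { Object.new }
pool.with_resource { |res| res } # checkout, yield, automatic return
```

Because the queue is seeded with exactly `size` entries and every checkout returns one in `ensure`, the pool's capacity is invariant, which is what the `drain_pool`/`@pool << entry` pairing in the real diff preserves.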
```diff
@@ -88,35 +122,15 @@ class ConnectionPool
     http.start
     http
   end
-
-  def schedule_cleanup
-    Thread.new do
-      loop do
-        cleanup_old_connections
-        sleep CLEANUP_INTERVAL
-      end
-    end
-  end
-
-  def cleanup_old_connections
-    current_time = Time.now
-    @creation_times.each do |thread_id, creation_time|
-      if current_time - creation_time > MAX_AGE
-        conn = @pool[thread_id]
-        conn&.finish if conn&.started?
-        @pool.delete(thread_id)
-        @creation_times.delete(thread_id)
-      end
-    end
-  end
 end
 
 class WaybackMachineDownloader
 
   include ArchiveAPI
   include SubdomainProcessor
+  include URLRewrite
 
-  VERSION = "2.4.1"
+  VERSION = "2.4.5"
   DEFAULT_TIMEOUT = 30
   MAX_RETRIES = 3
   RETRY_DELAY = 2
```
```diff
@@ -130,7 +144,7 @@ class WaybackMachineDownloader
   attr_accessor :base_url, :exact_url, :directory, :all_timestamps,
     :from_timestamp, :to_timestamp, :only_filter, :exclude_filter,
     :all, :maximum_pages, :threads_count, :logger, :reset, :keep, :rewrite,
-    :snapshot_at
+    :snapshot_at, :page_requisites
 
   def initialize params
     validate_params(params)
```
```diff
@@ -162,6 +176,9 @@ class WaybackMachineDownloader
     @recursive_subdomains = params[:recursive_subdomains] || false
     @subdomain_depth = params[:subdomain_depth] || 1
     @snapshot_at = params[:snapshot_at] ? params[:snapshot_at].to_i : nil
+    @max_retries = params[:max_retries] ? params[:max_retries].to_i : MAX_RETRIES
+    @page_requisites = params[:page_requisites] || false
+    @pending_jobs = Concurrent::AtomicFixnum.new(0)
 
     # URL for rejecting invalid/unencoded wayback urls
     @url_regexp = /^(([A-Za-z][A-Za-z0-9+.-]*):((\/\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=]))+)(:([0-9]*))?)(((\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))|((\/(((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)?))|((((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)+)(\/((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)*))*)))(\?((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?(\#((([A-Za-z0-9._~-])|(%[ABCDEFabcdef0-9][ABCDEFabcdef0-9])|([!$&'('')'*+,;=])|:|@)|\/|\?)*)?)$/
```
```diff
@@ -170,13 +187,31 @@ class WaybackMachineDownloader
   end
 
   def backup_name
-    url_to_process = @base_url.end_with?('/*') ? @base_url.chomp('/*') : @base_url
-    if url_to_process.include? '//'
-      url_to_process.split('/')[2]
-    else
-      url_to_process
-    end
+    url_to_process = @base_url
+    url_to_process = url_to_process.chomp('/*') if url_to_process&.end_with?('/*')
+
+    raw = if url_to_process.include?('//')
+      url_to_process.split('/')[2]
+    else
+      url_to_process
+    end
+
+    # if it looks like a wildcard pattern, normalize to a safe host-ish name
+    if raw&.start_with?('*.')
+      raw = raw.sub(/\A\*\./, 'all-')
+    end
+
+    # sanitize for Windows (and safe cross-platform) to avoid ENOTDIR on mkdir (colon in host:port)
+    if Gem.win_platform?
+      raw = raw.gsub(/[:*?"<>|]/, '_')
+      raw = raw.gsub(/[ .]+\z/, '')
+    else
+      # still good practice to strip path separators (and maybe '*' for POSIX too)
+      raw = raw.gsub(/[\/:*?"<>|]/, '_')
+    end
+
+    raw = 'site' if raw.nil? || raw.empty?
+    raw
   end
 
   def backup_path
```
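The sanitization rules the new `backup_name` applies (wildcard hosts become `all-`, filesystem-hostile characters become `_`, empty names fall back to `site`) can be exercised in isolation. `sanitize_host` below is an illustrative stand-in, not a method of the fork:

```ruby
# Normalize a host extracted from a URL into a directory-safe name.
def sanitize_host(raw, windows: false)
  raw = raw.sub(/\A\*\./, 'all-') if raw&.start_with?('*.')
  if windows
    raw = raw.gsub(/[:*?"<>|]/, '_') # characters invalid in Windows paths
    raw = raw.gsub(/[ .]+\z/, '')    # trailing dots/spaces are also invalid
  else
    raw = raw.gsub(/[\/:*?"<>|]/, '_') # strip path separators on POSIX too
  end
  raw.nil? || raw.empty? ? 'site' : raw
end

sanitize_host('*.example.com')                   # => "all-example.com"
sanitize_host('example.com:8080', windows: true) # => "example.com_8080"
```

The `host:port` case is the one the diff comment calls out: a colon in the directory name raises ENOTDIR on `mkdir` under Windows.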
```diff
@@ -185,7 +220,8 @@ class WaybackMachineDownloader
       @directory
     else
       # ensure the default path is absolute and normalized
-      File.expand_path(File.join('websites', backup_name))
+      cwd = Dir.pwd
+      File.expand_path(File.join(cwd, 'websites', backup_name))
     end
   end
```
```diff
@@ -269,53 +305,58 @@ class WaybackMachineDownloader
     page_index = 0
     batch_size = [@threads_count, 5].min
     continue_fetching = true
+    fetch_pool = Concurrent::FixedThreadPool.new([@threads_count, 1].max)
 
-    while continue_fetching && page_index < @maximum_pages
-      # Determine the range of pages to fetch in this batch
-      end_index = [page_index + batch_size, @maximum_pages].min
-      current_batch = (page_index...end_index).to_a
+    begin
+      while continue_fetching && page_index < @maximum_pages
+        # Determine the range of pages to fetch in this batch
+        end_index = [page_index + batch_size, @maximum_pages].min
+        current_batch = (page_index...end_index).to_a
 
-      # Create futures for concurrent API calls
-      futures = current_batch.map do |page|
-        Concurrent::Future.execute do
-          result = nil
-          @connection_pool.with_connection do |connection|
-            result = get_raw_list_from_api("#{@base_url}/*", page, connection)
-          end
-          result ||= []
-          [page, result]
-        end
-      end
+        # Create futures for concurrent API calls
+        futures = current_batch.map do |page|
+          Concurrent::Future.execute(executor: fetch_pool) do
+            result = nil
+            @connection_pool.with_connection do |connection|
+              result = get_raw_list_from_api("#{@base_url}/*", page, connection)
+            end
+            result ||= []
+            [page, result]
+          end
+        end
 
-      results = []
-      futures.each do |future|
-        begin
-          results << future.value
-        rescue => e
-          puts "\nError fetching page #{future}: #{e.message}"
-        end
-      end
+        results = []
+        futures.each do |future|
+          begin
+            results << future.value
+          rescue => e
+            puts "\nError fetching page #{future}: #{e.message}"
+          end
+        end
 
-      # Sort results by page number to maintain order
-      results.sort_by! { |page, _| page }
+        # Sort results by page number to maintain order
+        results.sort_by! { |page, _| page }
 
-      # Process results and check for empty pages
-      results.each do |page, result|
-        if result.nil? || result.empty?
-          continue_fetching = false
-          break
-        else
-          mutex.synchronize do
-            snapshot_list_to_consider.concat(result)
-            print "."
-          end
-        end
-      end
+        # Process results and check for empty pages
+        results.each do |page, result|
+          if result.nil? || result.empty?
+            continue_fetching = false
+            break
+          else
+            mutex.synchronize do
+              snapshot_list_to_consider.concat(result)
+              print "."
+            end
+          end
+        end
 
-      page_index = end_index
-      sleep(RATE_LIMIT) if continue_fetching
+        page_index = end_index
+        sleep(RATE_LIMIT) if continue_fetching
+      end
+    ensure
+      fetch_pool.shutdown
+      fetch_pool.wait_for_termination
     end
   end
```
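The batched-fetch rewrite above runs each CDX page request concurrently, collects `[page, result]` pairs, and re-sorts them so results are processed in page order. A plain-threads sketch of the same shape (the body of each thread is a stand-in for `get_raw_list_from_api`; the real code uses concurrent-ruby futures on a bounded pool):

```ruby
# Fetch a batch of pages concurrently and return results ordered by page
# number, regardless of which thread finished first.
def fetch_batch(pages)
  threads = pages.map do |page|
    Thread.new do
      result = [page * 10] # stand-in for the per-page API call
      [page, result]
    end
  end
  # Thread#value joins the thread and returns (or re-raises from) its block.
  threads.map(&:value).sort_by { |page, _| page }
end

fetch_batch([2, 0, 1]) # => [[0, [0]], [1, [10]], [2, [20]]]
```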
```diff
@@ -523,7 +564,7 @@ class WaybackMachineDownloader
       end
     end
   end
 
   def download_files
     start_time = Time.now
     puts "Downloading #{@base_url} to #{backup_path} from Wayback Machine archives."
```
```diff
@@ -544,6 +585,12 @@ class WaybackMachineDownloader
 
     # Load IDs of already downloaded files
     downloaded_ids = load_downloaded_ids
+
+    # We use a thread-safe Set to track what we have queued/downloaded in this session
+    # to avoid infinite loops with page requisites
+    @session_downloaded_ids = Concurrent::Set.new
+    downloaded_ids.each { |id| @session_downloaded_ids.add(id) }
 
     files_to_process = files_to_download.reject do |file_info|
       downloaded_ids.include?(file_info[:file_id])
     end
```
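The session-level dedup introduced above seeds a set with already-downloaded IDs so each page requisite is queued at most once, even when the same asset is referenced by many pages. A stdlib `Set` sketch of the idea (the real code uses `Concurrent::Set` because multiple workers touch it):

```ruby
require 'set'

# Seed the session set with IDs that are already on disk, then queue only
# IDs that are neither downloaded nor already queued.
downloaded_ids = ['index.html', 'about.html']
session = Set.new(downloaded_ids)

queue = []
['index.html', 'css/style.css', 'css/style.css'].each do |id|
  next if session.include?(id) # already downloaded or already queued
  session.add(id)
  queue << id
end

queue # => ["css/style.css"]
```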
```diff
@@ -554,8 +601,8 @@ class WaybackMachineDownloader
     if skipped_count > 0
       puts "Found #{skipped_count} previously downloaded files, skipping them."
     end
 
-    if remaining_count == 0
+    if remaining_count == 0 && !@page_requisites
       puts "All matching files have already been downloaded."
       cleanup
       return
```
```diff
@@ -568,12 +615,22 @@ class WaybackMachineDownloader
     @download_mutex = Mutex.new
 
     thread_count = [@threads_count, CONNECTION_POOL_SIZE].min
-    pool = Concurrent::FixedThreadPool.new(thread_count)
+    @worker_pool = Concurrent::FixedThreadPool.new(thread_count)
 
-    processing_files(pool, files_to_process)
+    # initial batch
+    files_to_process.each do |file_remote_info|
+      @session_downloaded_ids.add(file_remote_info[:file_id])
+      submit_download_job(file_remote_info)
+    end
 
-    pool.shutdown
-    pool.wait_for_termination
+    # wait for all jobs to finish
+    loop do
+      sleep 0.5
+      break if @pending_jobs.value == 0
+    end
+
+    @worker_pool.shutdown
+    @worker_pool.wait_for_termination
 
     end_time = Time.now
     puts "\nDownload finished in #{(end_time - start_time).round(2)}s."
```
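The pending-jobs wait loop above works because workers can themselves submit new jobs (requisites discovered mid-run), so a plain `pool.shutdown`/`wait_for_termination` would race with late submissions; instead a counter tracks outstanding work. A sketch of that pattern with a Mutex-guarded integer (the real code uses `Concurrent::AtomicFixnum`):

```ruby
# A thread-safe counter: increment on submit, decrement when a job finishes,
# and the main thread polls until it drains to zero.
class JobCounter
  def initialize
    @count = 0
    @mutex = Mutex.new
  end

  def increment
    @mutex.synchronize { @count += 1 }
  end

  def decrement
    @mutex.synchronize { @count -= 1 }
  end

  def value
    @mutex.synchronize { @count }
  end
end

pending = JobCounter.new
3.times { pending.increment }                        # submit three jobs
workers = 3.times.map { Thread.new { pending.decrement } }
sleep 0.01 until pending.value == 0                  # drain, as in the diff
workers.each(&:join)
```

The key detail mirrored from the diff is that `increment` happens at submission time and `decrement` in an `ensure`, so the counter never undercounts even when a job raises.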
@ -591,6 +648,125 @@ class WaybackMachineDownloader
|
|||||||
cleanup
|
cleanup
|
||||||
end
|
end
|
||||||
|
|
||||||
|
# helper to submit jobs and increment the counter
|
||||||
|
def submit_download_job(file_remote_info)
|
||||||
|
@pending_jobs.increment
|
||||||
|
@worker_pool.post do
|
||||||
|
begin
|
||||||
|
process_single_file(file_remote_info)
|
||||||
|
ensure
|
||||||
|
@pending_jobs.decrement
|
||||||
|
end
|
||||||
|
end
|
||||||
|
end
|
||||||
|
|
||||||
|
def process_single_file(file_remote_info)
|
||||||
|
download_success = false
|
||||||
|
downloaded_path = nil
|
||||||
|
|
||||||
|
@connection_pool.with_connection do |connection|
|
||||||
|
result_message, path = download_file(file_remote_info, connection)
|
||||||
|
downloaded_path = path
|
||||||
|
|
||||||
|
if result_message && result_message.include?(' -> ')
|
||||||
|
download_success = true
|
||||||
|
end
|
||||||
|
|
||||||
|
@download_mutex.synchronize do
|
||||||
|
@processed_file_count += 1 if @processed_file_count < @total_to_download
|
||||||
|
# only print if it's a "User" file or a requisite we found
|
||||||
|
puts result_message if result_message
|
||||||
|
end
|
||||||
|
end
|
||||||
|
|
||||||
|
if download_success
|
||||||
|
append_to_db(file_remote_info[:file_id])
|
||||||
|
|
||||||
|
if @page_requisites && downloaded_path && File.extname(downloaded_path) =~ /\.(html?|php|asp|aspx|jsp)$/i
|
||||||
|
process_page_requisites(downloaded_path, file_remote_info)
|
||||||
|
end
|
||||||
|
end
|
||||||
|
rescue => e
|
||||||
|
@logger.error("Error processing file #{file_remote_info[:file_url]}: #{e.message}")
|
||||||
|
end
|
||||||
|
|
||||||
|
def process_page_requisites(file_path, parent_remote_info)
|
||||||
|
return unless File.exist?(file_path)
|
||||||
|
|
||||||
|
content = File.read(file_path)
|
||||||
|
content = content.force_encoding('UTF-8').scrub
|
||||||
|
|
||||||
|
assets = PageRequisites.extract(content)
|
||||||
|
|
||||||
|
# prepare base URI for resolving relative paths
|
||||||
|
parent_raw = parent_remote_info[:file_url]
|
||||||
|
+      parent_raw = "http://#{parent_raw}" unless parent_raw.match?(/^https?:\/\//)
+
+      begin
+        base_uri = URI(parent_raw)
+        # calculate the "root" host of the site we are downloading to compare later
+        current_project_host = URI("http://" + @base_url.gsub(%r{^https?://}, '')).host
+      rescue URI::InvalidURIError
+        return
+      end
+
+      parent_timestamp = parent_remote_info[:timestamp]
+
+      assets.each do |asset_rel_url|
+        begin
+          # resolve full URL (handles relative paths like "../img/logo.png")
+          resolved_uri = base_uri + asset_rel_url
+
+          # filter out navigation links (pages) vs assets:
+          # skip if extension is empty or looks like an HTML page
+          path = resolved_uri.path
+          ext = File.extname(path).downcase
+          if ext.empty? || ['.html', '.htm', '.php', '.asp', '.aspx'].include?(ext)
+            next
+          end
+
+          # construct the URL for the Wayback API
+          asset_wbm_url = resolved_uri.host + resolved_uri.path
+          asset_wbm_url += "?#{resolved_uri.query}" if resolved_uri.query
+
+          # construct the local file ID:
+          # if the asset is on the SAME domain, strip the domain from the folder path;
+          # if it's on a DIFFERENT domain (e.g. cdn.jquery.com), keep the domain folder
+          if resolved_uri.host == current_project_host
+            # e.g. /static/css/style.css
+            asset_file_id = resolved_uri.path
+            asset_file_id = asset_file_id[1..-1] if asset_file_id.start_with?('/')
+          else
+            # e.g. cdn.google.com/jquery.js
+            asset_file_id = asset_wbm_url
+          end
+        rescue URI::InvalidURIError, StandardError
+          next
+        end
+
+        # sanitize and queue
+        asset_id = sanitize_and_prepare_id(asset_file_id, asset_wbm_url)
+
+        unless @session_downloaded_ids.include?(asset_id)
+          @session_downloaded_ids.add(asset_id)
+
+          new_file_info = {
+            file_url: asset_wbm_url,
+            timestamp: parent_timestamp,
+            file_id: asset_id
+          }
+
+          @download_mutex.synchronize do
+            @total_to_download += 1
+            puts "Queued requisite: #{asset_file_id}"
+          end
+
+          submit_download_job(new_file_info)
+        end
+      end
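The resolution step above leans on Ruby's `URI#+`, which performs RFC 3986 reference merging, so relative paths, root-relative paths, and scheme-relative URLs all come out as absolute URIs. A quick sketch with a hypothetical page URL:

```ruby
require 'uri'

# Hypothetical parent page and asset references, mirroring how the block
# above resolves each extracted asset against the page's base URI.
base_uri = URI("http://example.com/blog/post.html")

a = base_uri + "../img/logo.png"        # dot segments collapse against the base path
b = base_uri + "/static/css/style.css"  # root-relative: keeps the host, replaces the path
c = base_uri + "//cdn.example.net/a.js" # scheme-relative: the host changes

puts a  # http://example.com/img/logo.png
puts b  # http://example.com/static/css/style.css
puts c  # http://cdn.example.net/a.js
```

The third case is why the same-host check against `current_project_host` matters: a scheme-relative reference can silently point at a different domain.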
   def structure_dir_path dir_path
     begin
       FileUtils::mkdir_p dir_path unless File.exist? dir_path
@@ -622,7 +798,8 @@ class WaybackMachineDownloader
     begin
       content = File.binread(file_path)

-      if file_ext == '.html' || file_ext == '.htm'
+      # detect encoding for HTML files
+      if file_ext == '.html' || file_ext == '.htm' || file_ext == '.php' || file_ext == '.asp'
         encoding = content.match(/<meta\s+charset=["']?([^"'>]+)/i)&.captures&.first || 'UTF-8'
         content.force_encoding(encoding) rescue content.force_encoding('UTF-8')
       else
@@ -630,21 +807,21 @@ class WaybackMachineDownloader
       end

       # URLs in HTML attributes
-      rewrite_html_attr_urls(content)
+      content = rewrite_html_attr_urls(content)

       # URLs in CSS
-      rewrite_css_urls(content)
+      content = rewrite_css_urls(content)

       # URLs in JavaScript
-      rewrite_js_urls(content)
+      content = rewrite_js_urls(content)

-      # for URLs in HTML attributes that start with a single slash
+      # for URLs that start with a single slash, make them relative
       content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])\/([^"'\/][^"']*)(["'])/i) do
         prefix, path, suffix = $1, $2, $3
         "#{prefix}./#{path}#{suffix}"
       end

-      # for URLs in CSS that start with a single slash
+      # for URLs in CSS that start with a single slash, make them relative
       content.gsub!(/url\(\s*["']?\/([^"'\)\/][^"'\)]*?)["']?\s*\)/i) do
         path = $1
         "url(\"./#{path}\")"
       end
@@ -697,7 +874,7 @@ class WaybackMachineDownloader
     # check existence *before* download attempt
     # this handles cases where a file was created manually or by a previous partial run without a .db entry
     if File.exist? file_path
-      return "#{file_url} # #{file_path} already exists. (#{@processed_file_count + 1}/#{@total_to_download})"
+      return ["#{file_url} # #{file_path} already exists. (#{@processed_file_count + 1}/#{@total_to_download})", file_path]
     end

     begin
@@ -709,13 +886,13 @@ class WaybackMachineDownloader
         if @rewrite && File.extname(file_path) =~ /\.(html?|css|js)$/i
           rewrite_urls_to_relative(file_path)
         end
-        "#{file_url} -> #{file_path} (#{@processed_file_count + 1}/#{@total_to_download})"
+        return ["#{file_url} -> #{file_path} (#{@processed_file_count + 1}/#{@total_to_download})", file_path]
       when :skipped_not_found
-        "Skipped (not found): #{file_url} (#{@processed_file_count + 1}/#{@total_to_download})"
+        return ["Skipped (not found): #{file_url} (#{@processed_file_count + 1}/#{@total_to_download})", nil]
       else
         # ideally, this case should not be reached if download_with_retry behaves as expected.
         @logger.warn("Unknown status from download_with_retry for #{file_url}: #{status}")
-        "Unknown status for #{file_url}: #{status} (#{@processed_file_count + 1}/#{@total_to_download})"
+        return ["Unknown status for #{file_url}: #{status} (#{@processed_file_count + 1}/#{@total_to_download})", nil]
       end
     rescue StandardError => e
       msg = "Failed: #{file_url} # #{e} (#{@processed_file_count + 1}/#{@total_to_download})"
@@ -723,7 +900,7 @@ class WaybackMachineDownloader
         File.delete(file_path)
         msg += "\n#{file_path} was empty and was removed."
       end
-      msg
+      return [msg, nil]
     end
   end

@@ -769,17 +946,83 @@ class WaybackMachineDownloader
   # safely sanitize a file id (or id+timestamp)
   def sanitize_and_prepare_id(raw, file_url)
     return nil if raw.nil?
+    return "" if raw.empty?
+    original = raw.dup
     begin
-      raw = CGI.unescape(raw) rescue raw
-      raw.gsub!(/<[^>]*>/, '')
-      raw = raw.tidy_bytes unless raw.empty?
+      # work on a binary copy to avoid premature encoding errors
+      raw = raw.dup.force_encoding(Encoding::BINARY)
+
+      # percent-decode (repeat until stable in case of double-encoding)
+      loop do
+        decoded = raw.gsub(/%([0-9A-Fa-f]{2})/) { [$1].pack('H2') }
+        break if decoded == raw
+        raw = decoded
+      end
+
+      # try tidy_bytes
+      begin
+        raw = raw.tidy_bytes
+      rescue StandardError
+        # fallback: scrub to UTF-8
+        raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
+      end
+
+      # ensure UTF-8 and scrub again
+      unless raw.encoding == Encoding::UTF_8 && raw.valid_encoding?
+        raw = raw.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
+      end
+
+      # strip HTML/comment artifacts & control chars
+      raw.gsub!(/<!--+/, '')
+      raw.gsub!(/[\x00-\x1F]/, '')
+
+      # split query; hash it for stable short name
+      path_part, query_part = raw.split('?', 2)
+      if query_part && !query_part.empty?
+        q_digest = Digest::SHA256.hexdigest(query_part)[0, 12]
+        if path_part.include?('.')
+          pre, _sep, post = path_part.rpartition('.')
+          path_part = "#{pre}__q#{q_digest}.#{post}"
+        else
+          path_part = "#{path_part}__q#{q_digest}"
+        end
+      end
+      raw = path_part
+
+      # collapse slashes & trim leading slash
+      raw.gsub!(%r{/+}, '/')
+      raw.sub!(%r{\A/}, '')
+
+      # segment-wise sanitation
+      raw = raw.split('/').map do |segment|
+        seg = segment.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: '')
+        seg = seg.gsub(/[:*?"<>|\\]/) { |c| "%#{c.ord.to_s(16).upcase}" }
+        seg = seg.gsub(/[ .]+\z/, '') if Gem.win_platform?
+        seg.empty? ? '_' : seg
+      end.join('/')
+
+      # remove any remaining angle brackets
+      raw.tr!('<>', '')
+
+      # final fallback if empty
+      raw = "file__#{Digest::SHA1.hexdigest(original)[0,10]}" if raw.nil? || raw.empty?
+
       raw
     rescue => e
       @logger&.warn("Failed to sanitize file id from #{file_url}: #{e.message}")
-      nil
+      # deterministic fallback - never return nil so the caller won't mark it malformed
+      "file__#{Digest::SHA1.hexdigest(original)[0,10]}"
     end
   end
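The query-hashing step deserves a closer look: it keeps `page.php?id=1` and `page.php?id=2` as distinct, stable local filenames without embedding filesystem-hostile characters. A minimal standalone sketch of just that step (the helper name `query_hashed_name` is illustrative, not part of the gem):

```ruby
require 'digest'

# Isolate the query-hashing rule from sanitize_and_prepare_id: split off the
# query string, hash it, and splice a short digest before the extension.
def query_hashed_name(raw)
  path, query = raw.split('?', 2)
  return path unless query && !query.empty?
  digest = Digest::SHA256.hexdigest(query)[0, 12]
  if path.include?('.')
    pre, _sep, post = path.rpartition('.')
    "#{pre}__q#{digest}.#{post}"   # e.g. page__q<12 hex chars>.php
  else
    "#{path}__q#{digest}"
  end
end
```

Because the digest depends only on the query string, re-running a download maps the same URL to the same file, which is what lets the existence check skip it next time.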
+  # wrap URL in parentheses if it contains characters that commonly break unquoted
+  # Windows CMD usage (e.g., &). This is only for display; the user still must quote
+  # when invoking manually.
+  def safe_display_url(url)
+    return url unless url && url.match?(/[&]/)
+    "(#{url})"
+  end
+
   def download_with_retry(file_path, file_url, file_timestamp, connection, redirect_count = 0)
     retries = 0
     begin
@@ -860,9 +1103,9 @@ class WaybackMachineDownloader
       end

     rescue StandardError => e
-      if retries < MAX_RETRIES
+      if retries < @max_retries
         retries += 1
-        @logger.warn("Retry #{retries}/#{MAX_RETRIES} for #{file_url}: #{e.message}")
+        @logger.warn("Retry #{retries}/#{@max_retries} for #{file_url}: #{e.message}")
         sleep(RETRY_DELAY * retries)
         retry
       else
@@ -16,6 +16,10 @@ module ArchiveAPI
     params = [["output", "json"], ["url", url]] + parameters_for_api(page_index)
     request_url.query = URI.encode_www_form(params)

+    retries = 0
+    max_retries = (@max_retries || 3)
+    delay = WaybackMachineDownloader::RETRY_DELAY rescue 2
+
     begin
       response = http.get(request_url)
       body = response.body.to_s.strip
@@ -26,7 +30,21 @@ module ArchiveAPI
       json.shift if json.first == ["timestamp", "original"]
       json
     rescue JSON::ParserError => e
-      warn "Failed to fetch data from API: #{e.message}"
+      warn "Failed to parse JSON from API for #{url}: #{e.message}"
+      []
+    rescue Net::ReadTimeout, Net::OpenTimeout => e
+      if retries < max_retries
+        retries += 1
+        warn "Timeout talking to Wayback CDX API (#{e.class}: #{e.message}) for #{url}, retry #{retries}/#{max_retries}..."
+        sleep(delay * retries)
+        retry
+      else
+        warn "Giving up on Wayback CDX API for #{url} after #{max_retries} timeouts."
+        []
+      end
+    rescue StandardError => e
+      # treat any other transient-ish error similarly, though without retries for now
+      warn "Error fetching CDX data for #{url}: #{e.message}"
       []
     end
   end
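The CDX timeout handling above is a retry loop with a linearly growing delay that ultimately degrades to an empty array, so callers never have to distinguish "no snapshots" from "API unreachable". A self-contained sketch of the same shape (the helper `fetch_with_retry` and the fake responses are illustrative, not the gem's API):

```ruby
require 'timeout'

# Retry timeouts up to max_retries with a linearly growing delay,
# then give up and return [] so callers always get an array back.
def fetch_with_retry(max_retries: 3, delay: 2)
  retries = 0
  begin
    yield
  rescue Timeout::Error
    if retries < max_retries
      retries += 1
      sleep(delay * retries)  # 1x delay, then 2x, then 3x...
      retry
    else
      []
    end
  end
end

attempts = 0
rows = fetch_with_retry(delay: 0) do
  attempts += 1
  raise Timeout::Error if attempts < 3   # fail twice, succeed on the third try
  [["20240101000000", "http://example.com/"]]
end
```

Returning `[]` instead of re-raising is a deliberate trade-off: a flaky CDX endpoint degrades a run instead of aborting it, at the cost of possibly downloading fewer files than exist.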
lib/wayback_machine_downloader/page_requisites.rb (new file, 33 lines)
@@ -0,0 +1,33 @@
+module PageRequisites
+  # regex to find links in href, src, url(), and srcset;
+  # this ignores data: URIs, mailto:, and anchors
+  ASSET_REGEX = /(?:href|src|data-src|data-url)\s*=\s*["']([^"']+)["']|url\(\s*["']?([^"'\)]+)["']?\s*\)|srcset\s*=\s*["']([^"']+)["']/i
+
+  def self.extract(html_content)
+    assets = []
+
+    html_content.scan(ASSET_REGEX) do |match|
+      # match is an array of capture groups; find the one that matched
+      url = match.compact.first
+      next unless url
+
+      # handle srcset (e.g. comma separated values like "image.jpg 1x, image2.jpg 2x")
+      if url.include?(',') && (url.include?(' 1x') || url.include?(' 2w'))
+        url.split(',').each do |src_def|
+          src_url = src_def.strip.split(' ').first
+          assets << src_url if valid_asset?(src_url)
+        end
+      else
+        assets << url if valid_asset?(url)
+      end
+    end
+
+    assets.uniq
+  end
+
+  def self.valid_asset?(url)
+    return false if url.strip.empty?
+    return false if url.start_with?('data:', 'mailto:', '#', 'javascript:')
+    true
+  end
+end
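The `srcset` branch above reduces each comma-separated candidate to its URL token, dropping density or width descriptors such as `1x` or `480w`. The splitting logic in isolation:

```ruby
# Same split the srcset branch performs: take each comma-separated
# candidate, strip whitespace, and keep only the token before the
# density/width descriptor.
srcset = "logo.png 1x, logo@2x.png 2x"
urls = srcset.split(',').map { |src_def| src_def.strip.split(' ').first }
puts urls.inspect  # ["logo.png", "logo@2x.png"]
```

Both resolution variants end up queued, so the page renders correctly offline whichever candidate the browser picks.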
@@ -1,74 +1,85 @@
 # frozen_string_literal: true

-# URLs in HTML attributes
-def rewrite_html_attr_urls(content)
-  content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
-    prefix, url, suffix = $1, $2, $3
-    if url.start_with?('http')
-      begin
-        uri = URI.parse(url)
-        path = uri.path
-        path = path[1..-1] if path.start_with?('/')
-        "#{prefix}#{path}#{suffix}"
-      rescue
-        "#{prefix}#{url}#{suffix}"
-      end
-    elsif url.start_with?('/')
-      "#{prefix}./#{url[1..-1]}#{suffix}"
-    else
-      "#{prefix}#{url}#{suffix}"
-    end
-  end
-  content
-end
-
-# URLs in CSS
-def rewrite_css_urls(content)
-  content.gsub!(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"'\)]+)["']?\s*\)/i) do
-    url = $1
-    if url.start_with?('http')
-      begin
-        uri = URI.parse(url)
-        path = uri.path
-        path = path[1..-1] if path.start_with?('/')
-        "url(\"#{path}\")"
-      rescue
-        "url(\"#{url}\")"
-      end
-    elsif url.start_with?('/')
-      "url(\"./#{url[1..-1]}\")"
-    else
-      "url(\"#{url}\")"
-    end
-  end
-  content
-end
-
-# URLs in JavaScript
-def rewrite_js_urls(content)
-  content.gsub!(/(["'])https?:\/\/web\.archive\.org\/web\/[0-9]+(?:id_)?\/([^"']+)(["'])/i) do
-    quote_start, url, quote_end = $1, $2, $3
-    if url.start_with?('http')
-      begin
-        uri = URI.parse(url)
-        path = uri.path
-        path = path[1..-1] if path.start_with?('/')
-        "#{quote_start}#{path}#{quote_end}"
-      rescue
-        "#{quote_start}#{url}#{quote_end}"
-      end
-    elsif url.start_with?('/')
-      "#{quote_start}./#{url[1..-1]}#{quote_end}"
-    else
-      "#{quote_start}#{url}#{quote_end}"
-    end
-  end
-  content
-end
+module URLRewrite
+  # server-side extensions that should work locally
+  SERVER_SIDE_EXTS = %w[.php .asp .aspx .jsp .cgi .pl .py].freeze
+
+  def rewrite_html_attr_urls(content)
+    # rewrite URLs to relative paths
+    content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/web\.archive\.org\/web\/\d+(?:id_)?\/https?:\/\/[^\/]+([^"']*)(["'])/i) do
+      prefix, path, suffix = $1, $2, $3
+      path = normalize_path_for_local(path)
+      "#{prefix}#{path}#{suffix}"
+    end
+
+    # rewrite absolute URLs to same domain as relative
+    content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/[^\/]+([^"']*)(["'])/i) do
+      prefix, path, suffix = $1, $2, $3
+      path = normalize_path_for_local(path)
+      "#{prefix}#{path}#{suffix}"
+    end
+
+    content
+  end
+
+  def rewrite_css_urls(content)
+    # rewrite URLs in CSS
+    content.gsub!(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/\d+(?:id_)?\/https?:\/\/[^\/]+([^"'\)]*?)["']?\s*\)/i) do
+      path = normalize_path_for_local($1)
+      "url(\"#{path}\")"
+    end
+
+    # rewrite absolute URLs in CSS
+    content.gsub!(/url\(\s*["']?https?:\/\/[^\/]+([^"'\)]*?)["']?\s*\)/i) do
+      path = normalize_path_for_local($1)
+      "url(\"#{path}\")"
+    end
+
+    content
+  end
+
+  def rewrite_js_urls(content)
+    # rewrite archive.org URLs in JavaScript strings
+    content.gsub!(/(["'])https?:\/\/web\.archive\.org\/web\/\d+(?:id_)?\/https?:\/\/[^\/]+([^"']*)(["'])/i) do
+      quote_start, path, quote_end = $1, $2, $3
+      path = normalize_path_for_local(path)
+      "#{quote_start}#{path}#{quote_end}"
+    end
+
+    # rewrite absolute URLs in JavaScript
+    content.gsub!(/(["'])https?:\/\/[^\/]+([^"']*)(["'])/i) do
+      quote_start, path, quote_end = $1, $2, $3
+      next "#{quote_start}http#{$2}#{quote_end}" if $2.start_with?('s://', '://')
+      path = normalize_path_for_local(path)
+      "#{quote_start}#{path}#{quote_end}"
+    end
+
+    content
+  end
+
+  private
+
+  def normalize_path_for_local(path)
+    return "./index.html" if path.empty? || path == "/"
+
+    # handle query strings - they're already part of the filename
+    path = path.split('?').first if path.include?('?')
+
+    # check if this is a server-side script
+    ext = File.extname(path).downcase
+    if SERVER_SIDE_EXTS.include?(ext)
+      # keep the path as-is but ensure it starts with ./
+      path = "./#{path}" unless path.start_with?('./', '/')
+    else
+      # regular file handling
+      path = "./#{path}" unless path.start_with?('./', '/')
+
+      # if it looks like a directory, add index.html
+      if path.end_with?('/') || !path.include?('.')
+        path = "#{path.chomp('/')}/index.html"
+      end
+    end
+
+    path
+  end
+end
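`normalize_path_for_local`'s rules can be exercised in isolation. This condensed sketch restates them (the server-side extension list is trimmed to two entries for brevity, and the lambda is a standalone stand-in for the private method):

```ruby
# Condensed restatement of normalize_path_for_local: root becomes index.html,
# query strings are dropped, server-side scripts keep their name, and
# directory-like paths gain a trailing index.html.
server_side = %w[.php .asp]
normalize = lambda do |path|
  next "./index.html" if path.empty? || path == "/"
  path = path.split('?').first if path.include?('?')
  path = "./#{path}" unless path.start_with?('./', '/')
  ext = File.extname(path).downcase
  unless server_side.include?(ext)
    path = "#{path.chomp('/')}/index.html" if path.end_with?('/') || !path.include?('.')
  end
  path
end

puts normalize.call("/about/")        # /about/index.html
puts normalize.call("assets/app.js")  # ./assets/app.js
puts normalize.call("page.php?id=3")  # ./page.php
```

Appending `index.html` to extensionless paths is what makes archived "pretty URLs" openable from a local filesystem, where a bare directory path has no default document.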
@@ -1,12 +1,12 @@
 Gem::Specification.new do |s|
   s.name = "wayback_machine_downloader_straw"
-  s.version = "2.4.1"
+  s.version = "2.4.5"
   s.executables << "wayback_machine_downloader"
   s.summary = "Download an entire website from the Wayback Machine."
   s.description = "Download complete websites from the Internet Archive's Wayback Machine. While the Wayback Machine (archive.org) excellently preserves web history, it lacks a built-in export functionality; this gem does just that, allowing you to download entire archived websites. (This is a significant rewrite of the original wayback_machine_downloader gem by hartator, with enhanced features and performance improvements.)"
   s.authors = ["strawberrymaster"]
   s.email = "strawberrymaster@vivaldi.net"
-  s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb", "lib/wayback_machine_downloader/subdom_processor.rb", "lib/wayback_machine_downloader/url_rewrite.rb"]
+  s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb", "lib/wayback_machine_downloader/page_requisites.rb", "lib/wayback_machine_downloader/subdom_processor.rb", "lib/wayback_machine_downloader/url_rewrite.rb"]
   s.homepage = "https://github.com/StrawberryMaster/wayback-machine-downloader"
   s.license = "MIT"
   s.required_ruby_version = ">= 3.4.3"
|
|||||||
Loading…
x
Reference in New Issue
Block a user