Compare commits


No commits in common. "master" and "v2.3.7" have entirely different histories.

12 changed files with 348 additions and 1116 deletions

.gitignore

@@ -32,7 +32,3 @@ tmp
*.rbc
test.rb
# Dev environment
.vscode
*.code-workspace

Dockerfile

@@ -1,4 +1,4 @@
FROM ruby:3.4.5-alpine
FROM ruby:3.4.4-alpine
USER root
WORKDIR /build
@@ -6,9 +6,10 @@ COPY Gemfile /build/
COPY *.gemspec /build/
RUN bundle config set jobs "$(nproc)" \
&& bundle config set without 'development test' \
&& bundle install
COPY . /build
WORKDIR /build
ENTRYPOINT [ "/build/bin/wayback_machine_downloader", "--directory", "/build/websites" ]
WORKDIR /
ENTRYPOINT [ "/build/bin/wayback_machine_downloader" ]

README.md

@@ -9,7 +9,7 @@ Included here is partial content from other forks, namely those @ [ShiftaDeband]
Download a website's latest snapshot:
```bash
wayback_machine_downloader https://example.com
ruby wayback_machine_downloader https://example.com
```
Your files will save to `./websites/example.com/` with their original structure preserved.
@@ -27,8 +27,6 @@ To run most commands, just like in the original WMD, you can use:
```bash
wayback_machine_downloader https://example.com
```
Do note that you can also manually download this repository and run commands here by appending `ruby` before a command, e.g. `ruby wayback_machine_downloader https://example.com`.
**Note**: this gem may conflict with hartator's wayback_machine_downloader gem, and so you may have to uninstall it for this WMD fork to work. A good way to know is if a command fails; it will list the gem version as 2.3.1 or earlier, while this WMD fork uses 2.3.2 or above.
### Step-by-step setup
1. **Install Ruby**:
@@ -64,14 +62,15 @@ docker build -t wayback_machine_downloader .
docker run -it --rm wayback_machine_downloader [options] URL
```
As an example of how this works without cloning this repo, this command fetches smallrockets.com until the year 2013:
or the example without cloning the repo - fetching smallrockets.com until the year 2013:
```bash
docker run -v .:/build/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
docker run -v .:/websites ghcr.io/strawberrymaster/wayback-machine-downloader:master wayback_machine_downloader --to 20130101 smallrockets.com
```
### 🐳 Using Docker Compose
You can also use Docker Compose, which provides a lot of benefits for extending more functionalities (such as implementing storing previous downloads in a database):
We can also use it with Docker Compose, which provides a lot of benefits for extending more functionalities (such as implementing storing previous downloads in a database):
```yaml
# docker-compose.yml
services:
@@ -81,19 +80,36 @@ services:
tty: true
image: wayback_machine_downloader:latest
container_name: wayback_machine_downloader
environment:
- ENVIRONMENT=${ENVIRONMENT:-development}
- OPTIONS=${OPTIONS:-""}
- TARGET_URL=${TARGET_URL}
volumes:
- .:/build:rw
- ./websites:/build/websites:rw
command: --directory /build/websites ${OPTIONS} ${TARGET_URL}
```
#### Usage:
Now you can create a Docker image as named "wayback_machine_downloader" with the following command:
Now You can create a Docker image as named "wayback_machine_downloader" with the following command:
```bash
docker compose up -d --build
```
After that, you must set the TARGET_URL environment variable:
```bash
export TARGET_URL="https://example.com/"
```
The **OPTIONS** environment variable is optional; it may include additional settings, which are found in the "**Advanced usage**" section below.
Example:
```bash
export OPTIONS="--list -f 20060121"
```
After that, you can run the existing container with the following command:
```bash
docker compose run --rm wayback_machine_downloader https://example.com [options]
docker compose run --rm wayback_machine_downloader https://example.com
```
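Since Compose substitutes `${TARGET_URL}` and `${OPTIONS}` from your environment, you can also keep them in a `.env` file next to `docker-compose.yml`, which Docker Compose reads automatically for variable substitution. A minimal sketch (the values are only examples):
```bash
# .env, picked up by `docker compose` for ${...} substitution
TARGET_URL="https://example.com/"
OPTIONS="--list -f 20060121"
```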
## ⚙️ Configuration
@@ -120,7 +136,6 @@ STATE_DB_FILENAME = '.downloaded.txt' # Tracks completed downloads
| `-t TS`, `--to TS` | Stop at timestamp |
| `-e`, `--exact-url` | Download exact URL only |
| `-r`, `--rewritten` | Download rewritten Wayback Archive files only |
| `-rt`, `--retry NUM` | Number of tries in case a download fails (default: 1) |
**Example** - Download files to `downloaded-backup` folder
```bash
@@ -166,8 +181,6 @@ ruby wayback_machine_downloader https://example.com --rewritten
```
Useful if you want to download the rewritten files from the Wayback Machine instead of the original ones.
---
### Filtering Content
| Option | Description |
|--------|-------------|
@@ -202,8 +215,6 @@ Or if you want to download everything except images:
ruby wayback_machine_downloader https://example.com --exclude "/\.(gif|jpg|jpeg)$/i"
```
---
### Performance
| Option | Description |
|--------|-------------|
@@ -218,12 +229,10 @@ Will specify the number of multiple files you want to download at the same time.
**Example 2** - 300 snapshot pages:
```bash
ruby wayback_machine_downloader https://example.com --maximum-snapshot 300
ruby wayback_machine_downloader https://example.com --snapshot-pages 300
```
Will specify the maximum number of snapshot pages to consider. Count an average of 150,000 snapshots per page. 100 is the default maximum number of snapshot pages and should be sufficient for most websites. Use a bigger number if you want to download a very large website.
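At those defaults, that is on the order of 100 × 150,000 ≈ 15 million snapshot records considered.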
---
### Diagnostics
| Option | Description |
|--------|-------------|
@@ -242,8 +251,6 @@ ruby wayback_machine_downloader https://example.com --list
```
It will just display the files to be downloaded with their snapshot timestamps and urls. The output format is JSON. It won't download anything. It's useful for debugging or to connect to another application.
---
### Job management
The downloader automatically saves its progress (`.cdx.json` for snapshot list, `.downloaded.txt` for completed files) in the output directory. If you run the same command again pointing to the same output directory, it will resume where it left off, skipping already downloaded files.
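For instance, interrupting a run and repeating the same command against the same output directory resumes it; a sketch (using the `--directory` flag that appears elsewhere in this README):
```bash
# first run, interrupted partway through (Ctrl+C)
ruby wayback_machine_downloader https://example.com --directory websites/example.com

# second run reuses .cdx.json and .downloaded.txt in that directory,
# so already-downloaded files are skipped
ruby wayback_machine_downloader https://example.com --directory websites/example.com
```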
@@ -267,47 +274,6 @@ ruby wayback_machine_downloader https://example.com --keep
```
This can be useful for debugging or if you plan to extend the download later with different parameters (e.g., adding `--to` timestamp) while leveraging the existing snapshot list.
---
## Troubleshooting
### SSL certificate errors
If you encounter an SSL error like:
```
SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get certificate CRL)
```
This is a known issue with **OpenSSL 3.6.0** when used with certain Ruby installations, and not a bug with this WMD fork specifically. (See [ruby/openssl#949](https://github.com/ruby/openssl/issues/949) for details.)
The workaround is to create a file named `fix_ssl_store.rb` with the following content:
```ruby
require "openssl"
store = OpenSSL::X509::Store.new.tap(&:set_default_paths)
OpenSSL::SSL::SSLContext::DEFAULT_PARAMS[:cert_store] = store
```
and run wayback-machine-downloader with:
```bash
RUBYOPT="-r./fix_ssl_store.rb" wayback_machine_downloader "http://example.com"
```
#### Verifying the issue
You can test if your Ruby environment has this issue by running:
```ruby
require "net/http"
require "uri"
uri = URI("https://web.archive.org/")
Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
resp = http.get("/")
puts "GET / => #{resp.code}"
end
```
If this fails with the same SSL error, the workaround above will fix it.
---
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch

bin/wayback_machine_downloader

@@ -74,22 +74,6 @@ option_parser = OptionParser.new do |opts|
options[:keep] = true
end
opts.on("--rt", "--retry N", Integer, "Maximum number of retries for failed downloads (default: 3)") do |t|
options[:max_retries] = t
end
opts.on("--recursive-subdomains", "Recursively download content from subdomains") do |t|
options[:recursive_subdomains] = true
end
opts.on("--subdomain-depth DEPTH", Integer, "Maximum depth for subdomain recursion (default: 1)") do |t|
options[:subdomain_depth] = t
end
opts.on("--page-requisites", "Download related assets (images, css, js) for downloaded HTML pages") do |t|
options[:page_requisites] = true
end
opts.on("-v", "--version", "Display version") do |t|
options[:version] = t
end

docker-compose.yml

@@ -5,6 +5,11 @@ services:
tty: true
image: wayback_machine_downloader:latest
container_name: wayback_machine_downloader
environment:
- ENVIRONMENT=${DEVELOPMENT:-production}
- OPTIONS=${OPTIONS:-""}
- TARGET_URL=${TARGET_URL}
volumes:
- .:/build:rw
- ./websites:/build/websites:rw
- ./websites:/websites:rw
command: /build/bin/wayback_machine_downloader ${TARGET_URL} ${OPTIONS}

lib/wayback_machine_downloader.rb

File diff suppressed because it is too large

lib/wayback_machine_downloader/archive_api.rb

@@ -4,22 +4,10 @@ require 'uri'
module ArchiveAPI
def get_raw_list_from_api(url, page_index, http)
# Automatically append /* if the URL doesn't contain a path after the domain
# This is a workaround for an issue with the API and *some* domains.
# See https://github.com/StrawberryMaster/wayback-machine-downloader/issues/6
# But don't do this when exact_url flag is set
if url && !url.match(/^https?:\/\/.*\//i) && !@exact_url
url = "#{url}/*"
end
request_url = URI("https://web.archive.org/cdx/search/cdx")
params = [["output", "json"], ["url", url]] + parameters_for_api(page_index)
request_url.query = URI.encode_www_form(params)
retries = 0
max_retries = (@max_retries || 3)
delay = WaybackMachineDownloader::RETRY_DELAY rescue 2
begin
response = http.get(request_url)
body = response.body.to_s.strip
@@ -29,22 +17,8 @@ module ArchiveAPI
# Check if the response contains the header ["timestamp", "original"]
json.shift if json.first == ["timestamp", "original"]
json
rescue JSON::ParserError => e
warn "Failed to parse JSON from API for #{url}: #{e.message}"
[]
rescue Net::ReadTimeout, Net::OpenTimeout => e
if retries < max_retries
retries += 1
warn "Timeout talking to Wayback CDX API (#{e.class}: #{e.message}) for #{url}, retry #{retries}/#{max_retries}..."
sleep(delay * retries)
retry
else
warn "Giving up on Wayback CDX API for #{url} after #{max_retries} timeouts."
[]
end
rescue StandardError => e
# treat any other transient-ish error similarly, though without retries for now
warn "Error fetching CDX data for #{url}: #{e.message}"
rescue JSON::ParserError, StandardError => e
warn "Failed to fetch data from API: #{e.message}"
[]
end
end
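For reference, a stripped-down version of the CDX request this method builds looks roughly like the following. It is only a sketch: the `fl` and `page` parameters stand in for whatever `parameters_for_api` (not shown here) actually adds.
```ruby
require "json"
require "net/http"
require "uri"

url = "https://example.com"
url = "#{url}/*" unless url.match?(%r{^https?://.*/}i) # same workaround as above

request_url = URI("https://web.archive.org/cdx/search/cdx")
request_url.query = URI.encode_www_form(
  [["output", "json"], ["url", url], ["fl", "timestamp,original"], ["page", "0"]]
)

rows = JSON.parse(Net::HTTP.get_response(request_url).body.to_s.strip)
rows.shift if rows.first == ["timestamp", "original"] # drop the CDX header row
puts rows.take(3).inspect
```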

lib/wayback_machine_downloader/page_requisites.rb

@@ -1,33 +0,0 @@
module PageRequisites
# regex to find links in href, src, url(), and srcset
# this ignores data: URIs, mailto:, and anchors
ASSET_REGEX = /(?:href|src|data-src|data-url)\s*=\s*["']([^"']+)["']|url\(\s*["']?([^"'\)]+)["']?\s*\)|srcset\s*=\s*["']([^"']+)["']/i
def self.extract(html_content)
assets = []
html_content.scan(ASSET_REGEX) do |match|
# match is an array of capture groups; find the one that matched
url = match.compact.first
next unless url
# handle srcset (e.g. comma separated values like "image.jpg 1x, image2.jpg 2x")
if url.include?(',') && (url.include?(' 1x') || url.include?(' 2w'))
url.split(',').each do |src_def|
src_url = src_def.strip.split(' ').first
assets << src_url if valid_asset?(src_url)
end
else
assets << url if valid_asset?(url)
end
end
assets.uniq
end
def self.valid_asset?(url)
return false if url.strip.empty?
return false if url.start_with?('data:', 'mailto:', '#', 'javascript:')
true
end
end
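A usage sketch for the module above (the `require_relative` path assumes you are in the repository root, matching the path listed in the gemspec):
```ruby
require_relative "lib/wayback_machine_downloader/page_requisites"

html = <<~HTML
  <img src="/img/logo.png" srcset="/img/logo.png 1x, /img/logo@2x.png 2x">
  <link rel="stylesheet" href="/css/site.css">
  <a href="mailto:someone@example.com">contact</a>
  <div style="background: url('/img/bg.jpg')"></div>
HTML

p PageRequisites.extract(html)
# => ["/img/logo.png", "/img/logo@2x.png", "/css/site.css", "/img/bg.jpg"]
# data:, mailto:, javascript: and bare #anchor URLs are rejected by valid_asset?
```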

lib/wayback_machine_downloader/subdom_processor.rb

@@ -1,238 +0,0 @@
# frozen_string_literal: true
module SubdomainProcessor
def process_subdomains
return unless @recursive_subdomains
puts "Starting subdomain processing..."
# extract base domain from the URL for comparison
base_domain = extract_base_domain(@base_url)
@processed_domains = Set.new([base_domain])
@subdomain_queue = Queue.new
# scan downloaded files for subdomain links
initial_files = Dir.glob(File.join(backup_path, "**/*.{html,htm,css,js}"))
puts "Scanning #{initial_files.size} downloaded files for subdomain links..."
subdomains_found = scan_files_for_subdomains(initial_files, base_domain)
if subdomains_found.empty?
puts "No subdomains found in downloaded content."
return
end
puts "Found #{subdomains_found.size} subdomains to process: #{subdomains_found.join(', ')}"
# add found subdomains to the queue
subdomains_found.each do |subdomain|
full_domain = "#{subdomain}.#{base_domain}"
@subdomain_queue << "https://#{full_domain}/"
end
# process the subdomain queue
download_subdomains(base_domain)
# after all downloads, rewrite all URLs to make local references
rewrite_subdomain_links(base_domain) if @rewrite
end
private
def extract_base_domain(url)
uri = URI.parse(url.gsub(/^https?:\/\//, '').split('/').first) rescue nil
return nil unless uri
host = uri.host || uri.path.split('/').first
host = host.downcase
# extract the base domain (e.g., "example.com" from "sub.example.com")
parts = host.split('.')
return host if parts.size <= 2
# for domains like co.uk, we want to keep the last 3 parts
if parts[-2].length <= 3 && parts[-1].length <= 3 && parts.size > 2
parts.last(3).join('.')
else
parts.last(2).join('.')
end
end
def scan_files_for_subdomains(files, base_domain)
return [] unless base_domain
subdomains = Set.new
files.each do |file_path|
next unless File.exist?(file_path)
begin
content = File.read(file_path)
# extract URLs from HTML href/src attributes
content.scan(/(?:href|src|action|data-src)=["']https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
subdomain = match[0].downcase
next if subdomain == 'www' # skip www subdomain
subdomains.add(subdomain)
end
# extract URLs from CSS
content.scan(/url\(["']?https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
subdomain = match[0].downcase
next if subdomain == 'www' # skip www subdomain
subdomains.add(subdomain)
end
# extract URLs from JavaScript strings
content.scan(/["']https?:\/\/([^\/."']+)\.#{Regexp.escape(base_domain)}[\/"]/) do |match|
subdomain = match[0].downcase
next if subdomain == 'www' # skip www subdomain
subdomains.add(subdomain)
end
rescue => e
puts "Error scanning file #{file_path}: #{e.message}"
end
end
subdomains.to_a
end
def download_subdomains(base_domain)
puts "Starting subdomain downloads..."
depth = 0
max_depth = @subdomain_depth || 1
while depth < max_depth && !@subdomain_queue.empty?
current_batch = []
# get all subdomains at current depth
while !@subdomain_queue.empty?
current_batch << @subdomain_queue.pop
end
puts "Processing #{current_batch.size} subdomains at depth #{depth + 1}..."
# download each subdomain
current_batch.each do |subdomain_url|
download_subdomain(subdomain_url, base_domain)
end
# if we need to go deeper, scan the newly downloaded files
if depth + 1 < max_depth
# get all files in the subdomains directory
new_files = Dir.glob(File.join(backup_path, "subdomains", "**/*.{html,htm,css,js}"))
new_subdomains = scan_files_for_subdomains(new_files, base_domain)
# filter out already processed subdomains
new_subdomains.each do |subdomain|
full_domain = "#{subdomain}.#{base_domain}"
unless @processed_domains.include?(full_domain)
@processed_domains.add(full_domain)
@subdomain_queue << "https://#{full_domain}/"
end
end
puts "Found #{@subdomain_queue.size} new subdomains at depth #{depth + 1}" if !@subdomain_queue.empty?
end
depth += 1
end
end
def download_subdomain(subdomain_url, base_domain)
begin
uri = URI.parse(subdomain_url)
subdomain_host = uri.host
# skip if already processed
if @processed_domains.include?(subdomain_host)
puts "Skipping already processed subdomain: #{subdomain_host}"
return
end
@processed_domains.add(subdomain_host)
puts "Downloading subdomain: #{subdomain_url}"
# create the directory for this subdomain
subdomain_dir = File.join(backup_path, "subdomains", subdomain_host)
FileUtils.mkdir_p(subdomain_dir)
# create subdomain downloader with appropriate options
subdomain_options = {
base_url: subdomain_url,
directory: subdomain_dir,
from_timestamp: @from_timestamp,
to_timestamp: @to_timestamp,
all: @all,
threads_count: @threads_count,
maximum_pages: [@maximum_pages / 2, 10].max,
rewrite: @rewrite,
# don't recursively process subdomains from here
recursive_subdomains: false
}
# download the subdomain content
subdomain_downloader = WaybackMachineDownloader.new(subdomain_options)
subdomain_downloader.download_files
puts "Completed download of subdomain: #{subdomain_host}"
rescue => e
puts "Error downloading subdomain #{subdomain_url}: #{e.message}"
end
end
def rewrite_subdomain_links(base_domain)
puts "Rewriting all files to use local subdomain references..."
all_files = Dir.glob(File.join(backup_path, "**/*.{html,htm,css,js}"))
subdomains = @processed_domains.reject { |domain| domain == base_domain }
puts "Found #{all_files.size} files to check for rewriting"
puts "Will rewrite links for subdomains: #{subdomains.join(', ')}"
rewritten_count = 0
all_files.each do |file_path|
next unless File.exist?(file_path)
begin
content = File.read(file_path)
original_content = content.dup
# replace subdomain URLs with local paths
subdomains.each do |subdomain_host|
# for HTML attributes (href, src, etc.)
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/#{Regexp.escape(subdomain_host)}([^"']*)(["'])/i) do
prefix, path, suffix = $1, $2, $3
path = "/index.html" if path.empty? || path == "/"
"#{prefix}../subdomains/#{subdomain_host}#{path}#{suffix}"
end
# for CSS url()
content.gsub!(/url\(\s*["']?https?:\/\/#{Regexp.escape(subdomain_host)}([^"'\)]*?)["']?\s*\)/i) do
path = $1
path = "/index.html" if path.empty? || path == "/"
"url(\"../subdomains/#{subdomain_host}#{path}\")"
end
# for JavaScript strings
content.gsub!(/(["'])https?:\/\/#{Regexp.escape(subdomain_host)}([^"']*)(["'])/i) do
quote_start, path, quote_end = $1, $2, $3
path = "/index.html" if path.empty? || path == "/"
"#{quote_start}../subdomains/#{subdomain_host}#{path}#{quote_end}"
end
end
# save if modified
if content != original_content
File.write(file_path, content)
rewritten_count += 1
end
rescue => e
puts "Error rewriting file #{file_path}: #{e.message}"
end
end
puts "Rewrote links in #{rewritten_count} files"
end
end
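These methods are driven by the `--recursive-subdomains` and `--subdomain-depth` flags shown in the option parser earlier in this diff; a usage sketch:
```bash
# download example.com plus any subdomains referenced from its pages,
# following newly discovered subdomains up to two levels deep
ruby wayback_machine_downloader https://example.com --recursive-subdomains --subdomain-depth 2
```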

lib/wayback_machine_downloader/tidy_bytes.rb

@@ -1,74 +1,73 @@
# frozen_string_literal: true
# essentially, this is for converting a string with a potentially
# broken or unknown encoding into a valid UTF-8 string
# @todo: consider using charlock_holmes for this in the future
module TidyBytes
UNICODE_REPLACEMENT_CHARACTER = "�"
# common encodings to try for best multilingual compatibility
COMMON_ENCODINGS = [
Encoding::UTF_8,
Encoding::Windows_1251, # Cyrillic/Russian legacy
Encoding::GB18030, # Simplified Chinese
Encoding::Shift_JIS, # Japanese
Encoding::EUC_KR, # Korean
Encoding::ISO_8859_1, # Western European
Encoding::Windows_1252 # Western European/Latin1 superset
].select { |enc| Encoding.name_list.include?(enc.name) }
# returns true if the string appears to be binary (has null bytes)
def binary_data?
self.include?("\x00".b)
end
# attempts to return a valid UTF-8 version of the string
def tidy_bytes
return self if self.encoding == Encoding::UTF_8 && self.valid_encoding?
return self.dup.force_encoding("BINARY") if binary_data?
str = self.dup
COMMON_ENCODINGS.each do |enc|
str.force_encoding(enc)
begin
utf8 = str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
return utf8 if utf8.valid_encoding? && !utf8.include?(UNICODE_REPLACEMENT_CHARACTER)
rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
# try next encoding
end
# precomputing CP1252 to UTF-8 mappings for bytes 128-159
CP1252_MAP = (128..159).map do |byte|
case byte
when 128 then [226, 130, 172] # EURO SIGN
when 130 then [226, 128, 154] # SINGLE LOW-9 QUOTATION MARK
when 131 then [198, 146] # LATIN SMALL LETTER F WITH HOOK
when 132 then [226, 128, 158] # DOUBLE LOW-9 QUOTATION MARK
when 133 then [226, 128, 166] # HORIZONTAL ELLIPSIS
when 134 then [226, 128, 160] # DAGGER
when 135 then [226, 128, 161] # DOUBLE DAGGER
when 136 then [203, 134] # MODIFIER LETTER CIRCUMFLEX ACCENT
when 137 then [226, 128, 176] # PER MILLE SIGN
when 138 then [197, 160] # LATIN CAPITAL LETTER S WITH CARON
when 139 then [226, 128, 185] # SINGLE LEFT-POINTING ANGLE QUOTATION MARK
when 140 then [197, 146] # LATIN CAPITAL LIGATURE OE
when 142 then [197, 189] # LATIN CAPITAL LETTER Z WITH CARON
when 145 then [226, 128, 152] # LEFT SINGLE QUOTATION MARK
when 146 then [226, 128, 153] # RIGHT SINGLE QUOTATION MARK
when 147 then [226, 128, 156] # LEFT DOUBLE QUOTATION MARK
when 148 then [226, 128, 157] # RIGHT DOUBLE QUOTATION MARK
when 149 then [226, 128, 162] # BULLET
when 150 then [226, 128, 147] # EN DASH
when 151 then [226, 128, 148] # EM DASH
when 152 then [203, 156] # SMALL TILDE
when 153 then [226, 132, 162] # TRADE MARK SIGN
when 154 then [197, 161] # LATIN SMALL LETTER S WITH CARON
when 155 then [226, 128, 186] # SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
when 156 then [197, 147] # LATIN SMALL LIGATURE OE
when 158 then [197, 190] # LATIN SMALL LETTER Z WITH CARON
when 159 then [197, 184] # LATIN SMALL LETTER Y WITH DIAERESIS
end
end.freeze
# if no clean conversion found, try again but accept replacement characters
str = self.dup
COMMON_ENCODINGS.each do |enc|
str.force_encoding(enc)
begin
utf8 = str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
return utf8 if utf8.valid_encoding?
rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
# try next encoding
end
# precomputing all possible byte conversions
CP1252_TO_UTF8 = Array.new(256) do |b|
if (128..159).cover?(b)
CP1252_MAP[b - 128]&.pack('C*')
elsif b < 128
b.chr
else
b < 192 ? [194, b].pack('C*') : [195, b - 64].pack('C*')
end
# fallback: replace all invalid/undefined bytes
str.encode(Encoding::UTF_8, invalid: :replace, undef: :replace, replace: UNICODE_REPLACEMENT_CHARACTER)
end
def tidy_bytes!
replace(self.tidy_bytes)
end
end.freeze
def self.included(base)
base.send(:include, InstanceMethods)
end
base.class_eval do
def tidy_bytes(force = false)
return nil if empty?
module InstanceMethods
def tidy_bytes
TidyBytes.instance_method(:tidy_bytes).bind(self).call
end
if force
buffer = String.new(capacity: bytesize)
each_byte { |b| buffer << CP1252_TO_UTF8[b] }
return buffer.force_encoding(Encoding::UTF_8)
end
def tidy_bytes!
TidyBytes.instance_method(:tidy_bytes!).bind(self).call
begin
encode('UTF-8')
rescue Encoding::UndefinedConversionError, Encoding::InvalidByteSequenceError
buffer = String.new(capacity: bytesize)
scrub { |b| CP1252_TO_UTF8[b.ord] }
end
end
def tidy_bytes!(force = false)
result = tidy_bytes(force)
result ? replace(result) : self
end
end
end
end

lib/wayback_machine_downloader/url_rewrite.rb

@@ -1,85 +0,0 @@
# frozen_string_literal: true
module URLRewrite
# server-side extensions that should work locally
SERVER_SIDE_EXTS = %w[.php .asp .aspx .jsp .cgi .pl .py].freeze
def rewrite_html_attr_urls(content)
# rewrite URLs to relative paths
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/web\.archive\.org\/web\/\d+(?:id_)?\/https?:\/\/[^\/]+([^"']*)(["'])/i) do
prefix, path, suffix = $1, $2, $3
path = normalize_path_for_local(path)
"#{prefix}#{path}#{suffix}"
end
# rewrite absolute URLs to same domain as relative
content.gsub!(/(\s(?:href|src|action|data-src|data-url)=["'])https?:\/\/[^\/]+([^"']*)(["'])/i) do
prefix, path, suffix = $1, $2, $3
path = normalize_path_for_local(path)
"#{prefix}#{path}#{suffix}"
end
content
end
def rewrite_css_urls(content)
# rewrite URLs in CSS
content.gsub!(/url\(\s*["']?https?:\/\/web\.archive\.org\/web\/\d+(?:id_)?\/https?:\/\/[^\/]+([^"'\)]*?)["']?\s*\)/i) do
path = normalize_path_for_local($1)
"url(\"#{path}\")"
end
# rewrite absolute URLs in CSS
content.gsub!(/url\(\s*["']?https?:\/\/[^\/]+([^"'\)]*?)["']?\s*\)/i) do
path = normalize_path_for_local($1)
"url(\"#{path}\")"
end
content
end
def rewrite_js_urls(content)
# rewrite archive.org URLs in JavaScript strings
content.gsub!(/(["'])https?:\/\/web\.archive\.org\/web\/\d+(?:id_)?\/https?:\/\/[^\/]+([^"']*)(["'])/i) do
quote_start, path, quote_end = $1, $2, $3
path = normalize_path_for_local(path)
"#{quote_start}#{path}#{quote_end}"
end
# rewrite absolute URLs in JavaScript
content.gsub!(/(["'])https?:\/\/[^\/]+([^"']*)(["'])/i) do
quote_start, path, quote_end = $1, $2, $3
next "#{quote_start}http#{$2}#{quote_end}" if $2.start_with?('s://', '://')
path = normalize_path_for_local(path)
"#{quote_start}#{path}#{quote_end}"
end
content
end
private
def normalize_path_for_local(path)
return "./index.html" if path.empty? || path == "/"
# handle query strings - they're already part of the filename
path = path.split('?').first if path.include?('?')
# check if this is a server-side script
ext = File.extname(path).downcase
if SERVER_SIDE_EXTS.include?(ext)
# keep the path as-is but ensure it starts with ./
path = "./#{path}" unless path.start_with?('./', '/')
else
# regular file handling
path = "./#{path}" unless path.start_with?('./', '/')
# if it looks like a directory, add index.html
if path.end_with?('/') || !path.include?('.')
path = "#{path.chomp('/')}/index.html"
end
end
path
end
end
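A sketch of what these rewrites produce (the including class is hypothetical; the `require_relative` path matches the gemspec):
```ruby
require_relative "lib/wayback_machine_downloader/url_rewrite"

class DemoRewriter
  include URLRewrite
end

html = '<a href="https://web.archive.org/web/20130101000000/http://example.com/about/">About</a>'
css  = 'body { background: url(https://web.archive.org/web/20130101000000/http://example.com/img/bg.png); }'

rw = DemoRewriter.new
puts rw.rewrite_html_attr_urls(html.dup)
# => <a href="/about/index.html">About</a>
puts rw.rewrite_css_urls(css.dup)
# => body { background: url("/img/bg.png"); }
```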

wayback_machine_downloader_straw.gemspec

@@ -1,12 +1,12 @@
Gem::Specification.new do |s|
s.name = "wayback_machine_downloader_straw"
s.version = "2.4.5"
s.version = "2.3.7"
s.executables << "wayback_machine_downloader"
s.summary = "Download an entire website from the Wayback Machine."
s.description = "Download complete websites from the Internet Archive's Wayback Machine. While the Wayback Machine (archive.org) excellently preserves web history, it lacks a built-in export functionality; this gem does just that, allowing you to download entire archived websites. (This is a significant rewrite of the original wayback_machine_downloader gem by hartator, with enhanced features and performance improvements.)"
s.authors = ["strawberrymaster"]
s.email = "strawberrymaster@vivaldi.net"
s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb", "lib/wayback_machine_downloader/page_requisites.rb", "lib/wayback_machine_downloader/subdom_processor.rb", "lib/wayback_machine_downloader/url_rewrite.rb"]
s.files = ["lib/wayback_machine_downloader.rb", "lib/wayback_machine_downloader/tidy_bytes.rb", "lib/wayback_machine_downloader/to_regex.rb", "lib/wayback_machine_downloader/archive_api.rb"]
s.homepage = "https://github.com/StrawberryMaster/wayback-machine-downloader"
s.license = "MIT"
s.required_ruby_version = ">= 3.4.3"