62 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Yuvi63771 | 257111d462 | Update main_window.py | 2025-11-02 09:40:25 +05:30 |
| Yuvi63771 | 9563ce82db | Commit | 2025-11-01 10:41:00 +05:30 |
| Yuvi63771 | 169ded3fd8 | Commit | 2025-10-30 08:05:45 +05:30 |
| Yuvi63771 | 7e8e8a59e2 | commit | 2025-10-26 12:08:48 +05:30 |
| Yuvi63771 | 0acd433920 | commit | 2025-10-25 08:19:06 +05:30 |
| Yuvi63771 | cef4211d7b | Commit | 2025-10-20 13:37:27 +05:30 |
| Yuvi63771 | 9fe0c37127 | Commit | 2025-10-18 16:03:34 +05:30 |
| Yuvi63771 | 5d4e08f794 | Commit | 2025-10-08 19:38:29 +05:30 |
| Yuvi63771 | 8239fdb8f3 | Commit | 2025-10-08 17:02:46 +05:30 |
| Yuvi9587 | df8a305e81 | Update security.md | 2025-09-08 08:25:26 -07:00 |
| Yuvi9587 | 090f1a638d | Update readme.md | 2025-09-08 08:24:37 -07:00 |
| Yuvi9587 | 871ee75a2a | Update readme.md | 2025-09-08 08:24:06 -07:00 |
| Yuvi9587 | fea59c7903 | Update readme.md | 2025-09-08 08:23:18 -07:00 |
| Yuvi9587 | a9b210b2ba | Commit | 2025-09-07 08:16:43 -07:00 |
| Yuvi9587 | ec94417569 | Update main.py | 2025-09-07 05:24:54 -07:00 |
| Yuvi9587 | 0a902895a8 | Update main_window.py | 2025-09-07 05:23:44 -07:00 |
| Yuvi9587 | 7217bfdb39 | Commit | 2025-09-07 04:56:08 -07:00 |
| Yuvi9587 | 24880b5042 | Update readme.md | 2025-08-28 04:34:37 -07:00 |
| Yuvi9587 | 510ae5e1d1 | Update readme.md | 2025-08-27 19:59:11 -07:00 |
| Yuvi9587 | 65b4759bad | Commit | 2025-08-27 19:51:42 -07:00 |
| Yuvi9587 | 6e993d88de | Commit | 2025-08-27 19:50:13 -07:00 |
| Yuvi9587 | cc3565b12b | Commit | 2025-08-27 07:21:30 -07:00 |
| Yuvi9587 | f8b150dfdb | commit | 2025-08-17 08:43:27 -07:00 |
| Yuvi9587 | 5f7b526852 | Commit | 2025-08-17 05:51:25 -07:00 |
| Yuvi9587 | b0a6c264e1 | Commit | 2025-08-15 20:22:40 -07:00 |
| Yuvi9587 | d9364f4f91 | commit | 2025-08-14 09:48:55 -07:00 |
| Yuvi9587 | 9cd48bb63a | Update main_window.py | 2025-08-13 19:49:10 -07:00 |
| Yuvi9587 | d0f11c4a06 | Commit | 2025-08-13 19:38:33 -07:00 |
| Yuvi9587 | 26fa3b9bc1 | Commit | 2025-08-10 09:16:31 -07:00 |
| Yuvi9587 | f7c4d892a8 | commit | 2025-08-07 21:42:04 -07:00 |
| Yuvi9587 | 661b97aa16 | Commit | 2025-08-06 06:56:49 -07:00 |
| Yuvi9587 | 3704fece2b | Update main_window.py | 2025-08-04 04:53:52 -07:00 |
| Yuvi9587 | bdb7ac93c4 | Update readme.md | 2025-08-03 09:16:25 -07:00 |
| Yuvi9587 | 76d4a3ea8a | Update main_window.py | 2025-08-03 09:15:01 -07:00 |
| Yuvi9587 | ccc7804505 | Update readme.md | 2025-08-03 09:13:47 -07:00 |
| Yuvi9587 | 4ee750c5d4 | Update drive_downloader.py | 2025-08-03 09:11:27 -07:00 |
| Yuvi9587 | e9be13c4e3 | Update readme.md | 2025-08-03 09:07:29 -07:00 |
| Yuvi9587 | a5cb04ea6f | Update features.md | 2025-08-03 06:46:30 -07:00 |
| Yuvi9587 | 842f18d70d | Update features.md | 2025-08-03 06:32:32 -07:00 |
| Yuvi9587 | fb3f0e8913 | Update features.md | 2025-08-03 06:11:05 -07:00 |
| Yuvi9587 | 0758887154 | Update features.md | 2025-08-03 06:07:05 -07:00 |
| Yuvi9587 | e752d881e7 | Update features.md | 2025-08-03 06:01:32 -07:00 |
| Yuvi9587 | a776d1abe9 | Update features.md | 2025-08-03 06:01:15 -07:00 |
| Yuvi9587 | 21d1ce4fa9 | Commit | 2025-08-03 05:46:51 -07:00 |
| Yuvi9587 | d5112a25ee | Commit | 2025-08-01 09:42:10 -07:00 |
| Yuvi9587 | 791ce503ff | Update main_window.py | 2025-08-01 07:57:32 -07:00 |
| Yuvi9587 | e5b519d5ce | Commit | 2025-08-01 06:33:36 -07:00 |
| Yuvi9587 | 9888ed0862 | Update multipart_downloader.py | 2025-07-30 22:28:05 -07:00 |
| Yuvi9587 | 9e996bf682 | Commit | 2025-07-30 21:31:02 -07:00 |
| Yuvi9587 | e7a6a91542 | commit | 2025-07-30 19:30:50 -07:00 |
| Yuvi9587 | d7faccce18 | Commit | 2025-07-29 06:37:28 -07:00 |
| Yuvi9587 | a78c01c4f6 | Update workers.py | 2025-07-27 07:44:14 -07:00 |
| Yuvi9587 | 6de9967e0b | Commit | 2025-07-27 07:18:08 -07:00 |
| Yuvi9587 | e3dd0e70b6 | commit | 2025-07-27 06:32:15 -07:00 |
| Yuvi9587 | 9db89cfad0 | Commit | 2025-07-25 11:00:33 -07:00 |
| Yuvi9587 | 0a6034a632 | Update features.md | 2025-07-25 10:49:51 -07:00 |
| Yuvi9587 | 2da69e7017 | Update features.md | 2025-07-25 10:45:50 -07:00 |
| Yuvi9587 | 3209770d00 | Update LICENSE | 2025-07-23 20:14:01 -07:00 |
| Yuvi9587 | 337cdd342c | Commit | 2025-07-23 20:08:44 -07:00 |
| Yuvi9587 | d54b013bbc | commit | 2025-07-22 07:00:34 -07:00 |
| Yuvi9587 | 2785fc1121 | Update EmptyPopupDialog.py | 2025-07-19 20:27:55 -07:00 |
| Yuvi9587 | fbdae61b80 | Commit | 2025-07-19 03:28:32 -07:00 |
76 changed files with 11998 additions and 3160 deletions

LICENSE (24 changed lines)

@@ -1,11 +1,21 @@
-Custom License - No Commercial Use
+MIT License
-Copyright [Yuvi9587] [2025]
+Copyright (c) [2025] [Yuvi9587]
-Permission is hereby granted to any person obtaining a copy of this software and associated documentation files (the "Software"), to use, copy, modify, and distribute the Software for **non-commercial purposes only**, subject to the following conditions:
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
-1. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-2. Proper credit must be given to the original author in any public use, distribution, or derivative works.
-3. Commercial use, resale, or sublicensing of the Software or any derivative works is strictly prohibited without explicit written permission.
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND...
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.

BIN assets/Ko-fi.png (new binary file, not shown; 2.9 KiB)

BIN assets/buymeacoffee.png (new binary file, not shown; 3.2 KiB)

BIN assets/patreon.png (new binary file, not shown; 978 B)

features.md

@@ -1,192 +1,159 @@
# Kemono Downloader - Feature Guide
This guide provides a comprehensive overview of all user interface elements, input fields, buttons, popups, and functionalities available in the Kemono Downloader.
## 1. Main Interface & Workflow
These are the primary controls you'll interact with to initiate and manage downloads.
### 1.1. Core Inputs
**🔗 Creator/Post URL Input Field**  
- **Purpose**: Paste the URL of the content you want to download.  
- **Supported Sites**: Kemono.su, Coomer.party, Simpcity.su.  
- **Supported URL Types**:  
  - Creator pages (e.g., `https://kemono.su/patreon/user/12345`).  
  - Individual posts (e.g., `https://kemono.su/patreon/user/12345/post/98765`).  
- **Note**: When ⭐ Favorite Mode is active, this field is disabled. For Simpcity.su URLs, the "Use Cookie" option is mandatory and auto-enabled.
**🎨 Creator Selection Button**  
- **Icon**: 🎨 (Artist Palette)  
- **Purpose**: Opens the "Creator Selection" dialog to browse and queue downloads from known creators.  
- **Dialog Features**:  
  - Loads creators from `creators.json`.  
  - **Search Bar**: Filter creators by name.  
  - **Creator List**: Displays creators with their service (e.g., Patreon, Fanbox).  
  - **Selection**: Checkboxes to select one or more creators.  
  - **Download Scope**: Organize downloads by Characters or Creators.  
  - **Add to Queue**: Adds selected creators or their posts to the download queue.
**Page Range (Start to End) Input Fields**  
- **Purpose**: Specify a range of pages to fetch for creator URLs.  
- **Usage**: Enter the starting and ending page numbers.  
- **Behavior**:  
  - If blank, all pages are processed.  
  - Disabled for single post URLs.
**📁 Download Location Input Field & Browse Button**  
- **Purpose**: Specify the main directory for downloaded files.  
- **Usage**: Type the path or click "Browse..." to select a folder.  
- **Requirement**: Mandatory for all download operations.
### 1.2. Action Buttons
**⬇️ Start Download / 🔗 Extract Links Button**  
- **Purpose**: Initiates downloading or link extraction.  
- **Behavior**:  
  - Shows "🔗 Extract Links" if "Only Links" is selected.  
  - Otherwise, shows "⬇️ Start Download".  
  - Supports single-threaded or multi-threaded downloads based on settings.
**🔄 Restore Download Button**  
- **Visibility**: Appears if an incomplete session is detected on startup.  
- **Purpose**: Resumes a previously interrupted download session.
**⏸️ Pause / ▶️ Resume Download Button**  
- **Purpose**: Pause or resume the ongoing download.  
- **Behavior**: Toggles between "Pause" and "Resume". Some UI settings can be changed while paused.
**❌ Cancel & Reset UI Button**  
- **Purpose**: Stops the current operation and performs a "soft" reset.  
- **Behavior**: Halts background threads, preserves URL and Download Location inputs, resets other settings.
**🔄 Reset Button (in the log area)**  
- **Purpose**: Performs a "hard" reset when no operation is active.  
- **Behavior**: Clears all inputs, resets options to default, and clears logs.
## 2. Filtering & Content Selection
These options allow precise control over downloaded content.
### 2.1. Content Filtering
**🎯 Filter by Character(s) Input Field**  
- **Purpose**: Download content related to specific characters or series.  
- **Usage**: Enter comma-separated character names.  
- **Advanced Syntax**:  
  - `Nami`: Simple filter.  
  - `(Vivi, Ulti)`: Grouped filter. Matches posts with "Vivi" OR "Ulti". Creates a shared folder like `Vivi Ulti` if subfolders are enabled.  
  - `(Boa, Hancock)~`: Aliased filter. Treats "Boa" and "Hancock" as the same entity.
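
The grouped and aliased syntax above lends itself to a small parser. The sketch below is illustrative only: the function name `parse_character_filters` and the rule shape are assumptions rather than the app's actual code, and it assumes an aliased group is foldered under its first name.

```python
import re

def parse_character_filters(filter_text):
    """Parse the documented filter syntax into matching/folder rules (sketch)."""
    rules = []
    # Split on commas that are not inside a parenthesised group.
    for token in re.split(r',\s*(?![^()]*\))', filter_text):
        token = token.strip()
        if not token:
            continue
        if token.startswith('('):
            is_alias = token.endswith('~')          # '(A, B)~' form
            inner = token.rstrip('~').strip('()')
            names = [n.strip() for n in inner.split(',') if n.strip()]
            # Assumption: alias groups use the first name as the folder.
            folder = names[0] if is_alias else ' '.join(names)
            rules.append({'names': names, 'folder': folder})
        else:
            rules.append({'names': [token], 'folder': token})
    return rules

print(parse_character_filters('Nami, (Vivi, Ulti), (Boa, Hancock)~'))
# [{'names': ['Nami'], 'folder': 'Nami'},
#  {'names': ['Vivi', 'Ulti'], 'folder': 'Vivi Ulti'},
#  {'names': ['Boa', 'Hancock'], 'folder': 'Boa'}]
```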
**Filter: [Type] Button (Character Filter Scope)**  
- **Purpose**: Defines where the character filter is applied. Cycles on click.  
- **Options**:  
  - **Filter: Title** (Default): Matches post titles.  
  - **Filter: Files**: Matches filenames.  
  - **Filter: Both**: Checks title first, then filenames.  
  - **Filter: Comments (Beta)**: Checks filenames, then post comments.
**🚫 Skip with Words Input Field**  
- **Purpose**: Exclude posts/files with specified keywords (e.g., `WIP`, `sketch`).
**Scope: [Type] Button (Skip Words Scope)**  
- **Purpose**: Defines where skip words are applied. Cycles on click.  
- **Options**:  
  - **Scope: Posts** (Default): Skips posts if the title contains a skip word.  
  - **Scope: Files**: Skips files if the filename contains a skip word.  
  - **Scope: Both**: Applies both rules.
**✂️ Remove Words from Name Input Field**  
- **Purpose**: Remove unwanted text from filenames (e.g., `patreon`, `[HD]`).
### 2.2. File Type Filtering
**Filter Files (Radio Buttons)**  
- **Purpose**: Select file types to download.  
- **Options**:  
  - **All**: All file types.  
  - **Images/GIFs**: Common image formats.  
  - **Videos**: Common video formats.  
  - **🎧 Only Audio**: Common audio formats.  
  - **📦 Only Archives**: Only `.zip` and `.rar` files.  
  - **🔗 Only Links**: Extracts external links without downloading files.
**Skip .zip / Skip .rar Checkboxes**  
- **Purpose**: Skip downloading `.zip` or `.rar` files.  
- **Behavior**: Disabled when "📦 Only Archives" is active.
## 3. Download Customization
Options to refine the download process and output.
- **Download Thumbnails Only**: Downloads small preview images instead of full-resolution files.  
- **Scan Content for Images**: Scans post HTML for `<img>` tags, crucial for images in descriptions.  
- **Compress to WebP**: Converts images to WebP format (requires the Pillow library). A conversion sketch follows this list.
- **Keep Duplicates**: Normally, if a post contains multiple files with the same name, only the first is downloaded. Checking this option will download all of them, renaming subsequent unique files with a numeric suffix (e.g., `image_1.jpg`).
- **🗄️ Custom Folder Name (Single Post Only)**: Specify a custom folder name for a single post's content (appears if subfolders are enabled).
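
The WebP option above depends on Pillow, which the readme lists as an optional dependency. A minimal conversion sketch; the 1.5 MB threshold comes from the revised guide further down, while the quality value and replacing the original file are assumptions:

```python
from pathlib import Path

from PIL import Image  # optional dependency: pip install pillow

SIZE_THRESHOLD = int(1.5 * 1024 * 1024)  # only convert images over ~1.5 MB

def compress_to_webp(image_path, quality=80):
    """Convert one image to WebP and return the resulting path (sketch)."""
    src = Path(image_path)
    if src.stat().st_size < SIZE_THRESHOLD:
        return str(src)  # small files are left untouched
    dst = src.with_suffix('.webp')
    with Image.open(src) as im:
        im.save(dst, 'WEBP', quality=quality)
    src.unlink()  # assumption: the original is removed after conversion
    return str(dst)
```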
## 4. 📖 Manga/Comic Mode
A mode for downloading creator feeds in chronological order, ideal for sequential content.
- **Activation**: Active when downloading a creator's entire feed (not a single post).  
- **Core Behavior**: Fetches all posts, processing from oldest to newest.  
- **Filename Style Toggle Button (in the log area)**:  
  - **Purpose**: Controls file naming in Manga Mode. Cycles on click. A naming sketch follows this list.  
  - **Options**:  
    - **Name: Post Title**: First file named after post title; others keep original names.  
    - **Name: Original File**: Files keep server-provided names, with optional prefix.  
    - **Name: Title+G.Num**: Global numbering with post title prefix (e.g., `Chapter 1_001.jpg`).  
    - **Name: Date Based**: Sequential naming by post date (e.g., `001.jpg`), with optional prefix.  
    - **Name: Post ID**: Files named after post ID to avoid clashes.  
    - **Name: Date + Title**: Combines post date and title for filenames.
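
The style names below mirror the string constants visible later in this changeset (`original_name`, `date_based`, `post_id`, and so on). The mapping itself is an illustrative sketch: the post fields and zero-padding are assumptions.

```python
import os

def manga_filename(style, post, page_index, global_index, original_name):
    """Map a Manga Mode style key to a concrete filename (sketch only)."""
    ext = os.path.splitext(original_name)[1]
    if style == 'post_title':
        # First file takes the post title; the rest keep their names.
        return f"{post['title']}{ext}" if page_index == 0 else original_name
    if style == 'original_name':
        return original_name
    if style == 'post_title_global_numbering':
        return f"{post['title']}_{global_index:03d}{ext}"
    if style == 'date_based':
        # Sequential across the date-ordered feed, e.g. 001.jpg.
        return f"{global_index:03d}{ext}"
    if style == 'post_id':
        return f"{post['id']}_{page_index}{ext}"
    if style == 'date_post_title':
        return f"{post['published'][:10]} {post['title']}{ext}"
    return original_name
```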
## 5. Folder Organization & Known.txt
Controls for structuring downloaded content.
- **Separate Folders by Name/Title Checkbox**: Enables automatic subfolder creation.  
- **Subfolder per Post Checkbox**: Creates subfolders for each post, named after the post title.  
- **Date Prefix for Post Subfolders Checkbox**: When used with "Subfolder per Post," this option prefixes the folder name with the post's upload date (e.g., `2025-07-11 Post Title`), allowing for chronological sorting.
- **Known.txt Management UI (Bottom Left)**:  
  - **Purpose**: Manages a local `Known.txt` file for series, characters, or terms used in folder creation.  
  - **List Display**: Shows primary names from `Known.txt`.  
  - **Add Button**: Adds names or groups (e.g., `(Character A, Alias B)~`).  
  - **⤵️ Add to Filter Button**: Select names from `Known.txt` for the character filter.  
  - **🗑️ Delete Selected Button**: Removes selected names from `Known.txt`.  
  - **Open Known.txt Button**: Opens the file in the default text editor.  
  - **❓ Help Button**: Opens this feature guide.  
  - **📜 History Button**: Views recent download history.
## 6. ⭐ Favorite Mode (Kemono.su Only)
Download from favorited artists/posts on Kemono.su.
- **Enable Checkbox ("⭐ Favorite Mode")**:  
  - Switches to Favorite Mode.  
  - Disables the main URL input.  
  - Changes action buttons to "Favorite Artists" and "Favorite Posts".  
  - Requires cookies.  
- **🖼️ Favorite Artists Button**: Select and download from favorited artists.  
- **📄 Favorite Posts Button**: Select and download specific favorited posts.  
- **Favorite Download Scope Button**:  
  - **Scope: Selected Location**: Downloads favorites to the main directory.  
  - **Scope: Artist Folders**: Creates subfolders per artist.
## 7. Advanced Settings & Performance
- **🍪 Cookie Management**:  
  - **Use Cookie Checkbox**: Enables cookies for restricted content.  
  - **Cookie Text Field**: Paste cookie string.  
  - **Browse... Button**: Select a `cookies.txt` file (Netscape format).  
- **Use Multithreading Checkbox & Threads Input**:  
  - **Purpose**: Configures simultaneous operations.  
  - **Behavior**: Sets concurrent post processing (creator feeds) or file downloads (single posts).  
- **Multi-part Download Toggle Button**:  
  - **Purpose**: Enables/disables multi-segment downloading for large files.  
  - **Note**: Best for large files; less efficient for small files.
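
Multi-part downloading is the standard HTTP Range technique: split the file into byte ranges and fetch them in parallel. A minimal sketch, assuming the server reports `Content-Length` and honours `Range`; the real `multipart_downloader.py` touched in this changeset's commits would additionally need retries, a part cap, and progress reporting:

```python
import threading

import requests

def multipart_download(url, path, parts=4):
    """Download `url` to `path` in `parts` parallel byte-range segments."""
    head = requests.head(url, allow_redirects=True, timeout=30)
    size = int(head.headers['Content-Length'])
    with open(path, 'wb') as f:
        f.truncate(size)  # pre-allocate so every thread can write its slice

    def fetch(start, end):
        r = requests.get(url, headers={'Range': f'bytes={start}-{end}'},
                         stream=True, timeout=60)
        with open(path, 'r+b') as f:
            f.seek(start)
            for chunk in r.iter_content(chunk_size=64 * 1024):
                f.write(chunk)

    step = size // parts
    threads = []
    for i in range(parts):
        start = i * step
        end = size - 1 if i == parts - 1 else (i + 1) * step - 1
        t = threading.Thread(target=fetch, args=(start, end))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
```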
## 8. Logging, Monitoring & Error Handling
- **📜 Progress Log Area**: Displays messages, progress, and errors.  
- **👁️ / 🙈 Log View Toggle Button**: Switches between Progress Log and Missed Character Log (skipped posts).  
- **Show External Links in Log**: Displays external links (e.g., Mega, Google Drive) in a secondary panel.  
- **Export Links Button**: Saves extracted links to a `.txt` file in "Only Links" mode.  
- **Download Extracted Links Button**: Downloads files from supported external links in "Only Links" mode.  
- **🆘 Error Button & Dialog**:  
  - **Purpose**: Active if files fail to download. The button will display a live count of failed files (e.g., **(3) Error**).  
  - **Dialog Features**:  
    - Lists failed files.  
    - Retry failed downloads.  
    - Export failed URLs to a text file.
## 9. Application Settings (⚙️)
- **Appearance**: Switch between Light and Dark themes.  
- **Language**: Change UI language (restart required).
<h1>Kemono Downloader - Comprehensive Feature Guide</h1>
<p>This guide provides a detailed overview of all user interface elements, input fields, buttons, popups, and functionalities available in the application.</p>
<hr>
<h2>1. Core Concepts &amp; Supported Sites</h2>
<h3>URL Input (🔗)</h3>
<p>This is the primary input field where you specify the content you want to download.</p>
<p><strong>Supported URL Types:</strong></p>
<ul>
<li><strong>Creator URL</strong>: A link to a creator's main page. Downloads all posts from that creator.</li>
<li><strong>Post URL</strong>: A direct link to a specific post. Downloads only that single post.</li>
<li><strong>Batch Command</strong>: Special keywords to trigger bulk downloading from a text file (see Batch Downloading section).</li>
</ul>
<p><strong>Supported Websites:</strong></p>
<ul>
<li>Kemono (<code>kemono.su</code>, <code>kemono.party</code>, etc.)</li>
<li>Coomer (<code>coomer.su</code>, <code>coomer.party</code>, etc.)</li>
<li>Discord (via Kemono/Coomer API)</li>
<li>Bunkr</li>
<li>Erome</li>
<li>Saint2.su</li>
<li>nhentai</li>
</ul>
<hr>
<h2>2. Main Download Controls &amp; Inputs</h2>
<h3>Download Location (📁)</h3>
<p>This input defines the main folder where your files will be saved.</p>
<ul>
<li><strong>Browse Button</strong>: Opens a system dialog to choose a folder.</li>
<li><strong>Directory Creation</strong>: If the folder doesn't exist, the app will ask for confirmation to create it.</li>
</ul>
<h3>Filter by Character(s) &amp; Scope</h3>
<p>Used to download content for specific characters or series and organize them into subfolders.</p>
<ul>
<li><strong>Input Field</strong>: Enter comma-separated names (e.g., <code>Tifa, Aerith</code>). Group aliases using parentheses for folder naming (e.g., <code>(Cloud, Zack)</code>).</li>
<li><strong>Scope Button</strong>: Cycles through where to look for name matches:
<ul>
<li><strong>Filter: Title</strong>: Matches names in the post title.</li>
<li><strong>Filter: Files</strong>: Matches names in the filenames.</li>
<li><strong>Filter: Both</strong>: Checks the title first, then filenames.</li>
<li><strong>Filter: Comments</strong>: Checks filenames first, then post comments.</li>
</ul>
</li>
</ul>
<h3>Skip with Words &amp; Scope</h3>
<p>Prevents downloading content based on keywords or file size.</p>
<ul>
<li><strong>Input Field</strong>: Enter comma-separated keywords (e.g., <code>WIP, sketch, preview</code>).</li>
<li><strong>Skip by Size</strong>: Enter a number in square brackets to skip any file <strong>smaller than</strong> that size in MB. For example, <code>WIP, [200]</code> skips files with "WIP" in the name AND any file smaller than 200 MB. (A parsing sketch follows this list.)</li>
<li><strong>Scope Button</strong>: Cycles through where to apply keyword filters:
<ul>
<li><strong>Scope: Posts</strong>: Skips the entire post if the title matches.</li>
<li><strong>Scope: Files</strong>: Skips individual files if the filename matches.</li>
<li><strong>Scope: Both</strong>: Checks the post title first, then individual files.</li>
</ul>
</li>
</ul>
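
<p>A possible reading of the skip syntax above, as a sketch. The helper names are hypothetical, and whether multiple <code>[N]</code> tokens are allowed is an assumption (the last one wins here):</p>

```python
def parse_skip_filters(raw):
    """Split the skip input into lowercase keywords and an optional MB floor."""
    keywords, min_size_mb = [], None
    for token in (t.strip() for t in raw.split(',')):
        if not token:
            continue
        if token.startswith('[') and token.endswith(']') and token[1:-1].isdigit():
            min_size_mb = int(token[1:-1])  # e.g. '[200]' -> 200 MB
        else:
            keywords.append(token.lower())
    return keywords, min_size_mb

def should_skip(filename, size_bytes, keywords, min_size_mb):
    if any(k in filename.lower() for k in keywords):
        return True
    return min_size_mb is not None and size_bytes < min_size_mb * 1024 * 1024

keywords, min_mb = parse_skip_filters('WIP, [200]')
print(should_skip('final_WIP.mp4', 500 * 1024 ** 2, keywords, min_mb))  # True (keyword)
print(should_skip('clip.mp4', 50 * 1024 ** 2, keywords, min_mb))        # True (under 200 MB)
```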
<h3>Remove Words from Name (✂️)</h3>
<p>Enter comma-separated words to remove from final filenames (e.g., <code>patreon, [HD]</code>). This helps clean up file naming.</p>
<hr>
<h2>3. Primary Download Modes (Filter File Section)</h2>
<p>This section uses radio buttons to set the main download mode. Only one can be active at a time.</p>
<ul>
<li><strong>All</strong>: Default mode. Downloads every file and attachment.</li>
<li><strong>Images/GIFs</strong>: Downloads only common image formats.</li>
<li><strong>Videos</strong>: Downloads only common video formats.</li>
<li><strong>Only Archives</strong>: Downloads only <code>.zip</code>, <code>.rar</code>, etc.</li>
<li><strong>Only Audio</strong>: Downloads only common audio formats.</li>
<li><strong>Only Links</strong>: Extracts external hyperlinks (e.g., Mega, Google Drive) from post descriptions instead of downloading files. <strong>This mode unlocks special features</strong> (see section 6).</li>
<li><strong>More</strong>: Opens a dialog to download text-based content.
<ul>
<li><strong>Scope</strong>: Choose to extract text from the post description or comments.</li>
<li><strong>Export Format</strong>: Save as PDF, DOCX, or TXT.</li>
<li><strong>Single PDF</strong>: Compile all text from the session into one consolidated PDF file.</li>
</ul>
</li>
</ul>
<hr>
<h2>4. Advanced Features &amp; Toggles (Checkboxes)</h2>
<h3>Folder Organization</h3>
<ul>
<li><strong>Separate folders by Known.txt</strong>: Automatically organizes downloads into subfolders based on name matches from your <code>Known.txt</code> list or the "Filter by Character(s)" input.</li>
<li><strong>Subfolder per post</strong>: Creates a unique folder for each post, named after the post's title. This prevents files from different posts from mixing.</li>
<li><strong>Date prefix</strong>: (Only available with "Subfolder per post") Prepends the post date to the folder name (e.g., <code>2025-08-03 My Post Title</code>) for chronological sorting.</li>
</ul>
<h3>Special Modes</h3>
<ul>
<li><strong>⭐ Favorite Mode</strong>: Switches the UI to download from your personal favorites list instead of using the URL input.</li>
<li><strong>Manga/Comic mode</strong>: Sorts a creator's posts from oldest to newest before downloading, ensuring correct page order. A scope button appears to control the filename style (e.g., using post title, date, or a global number).</li>
</ul>
<h3>File Handling</h3>
<ul>
<li><strong>Skip Archives</strong>: Ignores <code>.zip</code> and <code>.rar</code> files during downloads.</li>
<li><strong>Download Thumbnail Only</strong>: Saves only the small preview images instead of full-resolution files.</li>
<li><strong>Scan Content for Images</strong>: Parses post HTML to find embedded images that may not be listed in the API data.</li>
<li><strong>Compress to WebP</strong>: Converts large images (over 1.5 MB) to the space-saving WebP format.</li>
<li><strong>Keep Duplicates</strong>: Opens a dialog to control how duplicate files are handled (skip by default, keep all, or keep a specific number of copies). A renaming sketch follows this list.</li>
</ul>
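
<p>The numbered-suffix behaviour behind "Keep Duplicates" reduces to a name-collision loop. Illustrative only; the real dialog also offers skipping and a hash-based mode (compare <code>DUPLICATE_HANDLING_HASH</code> in the constants diff further down):</p>

```python
import os

def dedupe_name(save_dir, filename):
    """Return a non-clashing filename: image.jpg, image_1.jpg, image_2.jpg..."""
    stem, ext = os.path.splitext(filename)
    candidate, counter = filename, 1
    while os.path.exists(os.path.join(save_dir, candidate)):
        candidate = f"{stem}_{counter}{ext}"
        counter += 1
    return candidate
```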
<h3>General Functionality</h3>
<ul>
<li><strong>Use cookie</strong>: Enables login-based access. You can paste a cookie string or browse for a <code>cookies.txt</code> file. (A loading sketch follows this list.)</li>
<li><strong>Use Multithreading</strong>: Enables parallel processing of posts for faster downloads. You can set the number of concurrent worker threads.</li>
<li><strong>Show external links in log</strong>: Opens a secondary log panel that displays external links found in post descriptions.</li>
</ul>
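
<p>Because <code>cookies.txt</code> uses the Netscape format, Python's standard library can load it directly. A minimal sketch (the helper name is hypothetical):</p>

```python
from http.cookiejar import MozillaCookieJar

import requests

def session_with_cookies(cookies_txt_path):
    """Build a requests session from a Netscape-format cookies.txt file."""
    jar = MozillaCookieJar(cookies_txt_path)
    jar.load(ignore_discard=True, ignore_expires=True)
    session = requests.Session()
    session.cookies.update(jar)  # copy every cookie into the session jar
    return session
```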
<hr>
<h2>5. Specialized Downloaders &amp; Batch Mode</h2>
<h3>Discord Features</h3>
<ul>
<li>When a Discord URL is entered, a <strong>Scope</strong> button appears.
<ul>
<li><strong>Scope: Files</strong>: Downloads all files from the channel/server.</li>
<li><strong>Scope: Messages</strong>: Saves the entire message history of the channel/server as a formatted PDF.</li>
</ul>
</li>
<li>A <strong>"Save as PDF"</strong> button also appears as a shortcut for the message saving feature; a rendering sketch follows this list.</li>
</ul>
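
<p>The readme's optional dependencies include <code>fpdf</code>, which fits the message-export feature. A hedged sketch of such an export; the message schema and the layout are assumptions, not the app's actual formatting:</p>

```python
from fpdf import FPDF  # optional dependency listed in the readme

def messages_to_pdf(messages, out_path):
    """Write {'author', 'timestamp', 'content'} dicts to a simple PDF (sketch).

    Core PDF fonts only cover Latin-1 text; a real exporter would register a
    Unicode font via add_font() first.
    """
    pdf = FPDF()
    pdf.add_page()
    pdf.set_font('Helvetica', size=11)
    for msg in messages:
        pdf.multi_cell(0, 6, f"{msg['author']} [{msg['timestamp']}]")
        pdf.multi_cell(0, 6, msg['content'])
        pdf.ln(2)  # small gap between messages
    pdf.output(out_path)
```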
<h3>Batch Downloading (<code>nhentai</code> &amp; <code>saint2.su</code>)</h3>
<p>This feature allows you to download hundreds of galleries or videos from a simple text file; a loading sketch follows the steps below.</p>
<ol>
<li>In the <code>appdata</code> folder, create <code>nhentai.txt</code> or <code>saint2.su.txt</code>.</li>
<li>Add one full URL per line to the corresponding file.</li>
<li>In the app's URL input, type either <code>nhentai.net</code> or <code>saint2.su</code> and click "Start Download".</li>
<li>The app will read the file and process every URL in the queue.</li>
</ol>
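
<p>A sketch of the batch-file step, assuming the typed keyword maps one-to-one onto the documented file names:</p>

```python
from pathlib import Path

BATCH_FILES = {'nhentai.net': 'nhentai.txt', 'saint2.su': 'saint2.su.txt'}

def load_batch_urls(appdata_dir, site_keyword):
    """Read one URL per line from the matching batch file in appdata."""
    batch_file = Path(appdata_dir) / BATCH_FILES[site_keyword]
    return [line.strip()
            for line in batch_file.read_text(encoding='utf-8').splitlines()
            if line.strip()]
```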
<hr>
<h2>6. "Only Links" Mode: Extraction &amp; Direct Download</h2>
<p>When you select the <strong>"Only Links"</strong> radio button, the application's behavior changes significantly.</p>
<ul>
<li><strong>Link Extraction</strong>: Instead of downloading files, the main log panel will fill with all external links found (Mega, Google Drive, Dropbox, etc.). A matching sketch follows this list.</li>
<li><strong>Export Links</strong>: An "Export Links" button appears, allowing you to save the full list of extracted URLs to a <code>.txt</code> file.</li>
<li><strong>Direct Cloud Download</strong>: A <strong>"Download"</strong> button appears next to the export button.
<ul>
<li>Clicking this opens a new dialog listing all supported cloud links (Mega, G-Drive, Dropbox).</li>
<li>You can select which files you want to download from this list.</li>
<li>The application will then download the selected files directly from the cloud service to your chosen download location.</li>
</ul>
</li>
</ul>
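
<p>Extraction of this kind is typically a URL regex over the post HTML. A sketch with a hypothetical pattern covering the hosts named above; the app's actual matcher may differ:</p>

```python
import re

# Hypothetical pattern; the real extractor may cover more hosts and shapes.
CLOUD_LINK_RE = re.compile(
    r'https?://(?:www\.)?'
    r'(?:mega(?:\.co)?\.nz|drive\.google\.com|dropbox\.com|gofile\.io)'
    r'[^\s"\'<>]*',
    re.IGNORECASE,
)

def extract_cloud_links(post_html):
    """Return the unique supported cloud-host URLs found in a post's HTML."""
    return sorted(set(CLOUD_LINK_RE.findall(post_html)))

print(extract_cloud_links('<a href="https://mega.nz/file/abc123">backup</a>'))
# ['https://mega.nz/file/abc123']
```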
<hr>
<h2>7. Session &amp; Process Management</h2>
<h3>Main Action Buttons</h3>
<ul>
<li><strong>Start Download</strong>: Begins the download process. This button's text changes contextually (e.g., "Extract Links", "Check for Updates").</li>
<li><strong>Pause / Resume</strong>: Pauses or resumes the ongoing download. When paused, you can safely change some settings.</li>
<li><strong>Cancel &amp; Reset UI</strong>: Stops the current download and performs a soft reset of the UI, preserving your URL and download location.</li>
</ul>
<h3>Restore Interrupted Download</h3>
<p>If the application is closed unexpectedly during a download, it will save its progress; a state-file sketch follows this list.</p>
<ul>
<li>On the next launch, the UI will be pre-filled with the settings from the interrupted session.</li>
<li>The <strong>Pause</strong> button will change to <strong>"🔄 Restore Download"</strong>. Clicking it will resume the download exactly where it left off, skipping already processed posts.</li>
<li>The <strong>Cancel</strong> button will change to <strong>"🗑️ Discard Session"</strong>, allowing you to clear the saved state and start fresh.</li>
</ul>
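
<p>A possible shape for that saved state: the session file only needs the run's inputs plus the IDs of posts already processed. The file name and schema here are assumptions:</p>

```python
import json
from pathlib import Path

STATE_FILE = Path('appdata') / 'session.json'  # hypothetical location

def save_session(settings, processed_post_ids):
    """Persist enough state for '🔄 Restore Download' to resume (sketch)."""
    STATE_FILE.write_text(json.dumps({
        'settings': settings,
        'processed_posts': sorted(processed_post_ids),
    }), encoding='utf-8')

def load_session():
    """Return (settings, processed-post set), or None if nothing is pending."""
    if not STATE_FILE.exists():
        return None
    data = json.loads(STATE_FILE.read_text(encoding='utf-8'))
    return data['settings'], set(data['processed_posts'])
```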
<h3>Other UI Controls</h3>
<ul>
<li><strong>Error Button</strong>: Shows a count of failed files. Clicking it opens a dialog where you can view, export, or retry the failed downloads.</li>
<li><strong>History Button</strong>: Shows a log of recently downloaded files and processed posts.</li>
<li><strong>Settings Button</strong>: Opens the settings dialog where you can change the theme, language, and <strong>check for application updates</strong>.</li>
<li><strong>Support Button</strong>: Opens a dialog with links to the project's source code and developer support pages.</li>
</ul>

main.py

@@ -107,4 +107,4 @@ def main():
 if __name__ == '__main__':
-    main()
+    main()

readme.md (324 changed lines)

@@ -1,217 +1,151 @@
<h1 align="center">Kemono Downloader v6.0.0</h1>
<h1 align="center">Kemono Downloader</h1>
<p>A powerful, feature-rich GUI application for downloading content from a wide array of sites, including <strong>Kemono</strong>, <strong>Coomer</strong>, <strong>Bunkr</strong>, <strong>Erome</strong>, <strong>Saint2.su</strong>, and <strong>nhentai</strong>.</p>
<p>Built with PyQt5, this tool is designed for users who want deep filtering capabilities, customizable folder structures, efficient downloads, and intelligent automation — all within a modern and user-friendly graphical interface.</p>
<div align="center">
<table>
<tr>
<td align="center">
<img src="Read/Read.png" alt="Default Mode" width="400"><br>
<strong>Default</strong>
</td>
<td align="center">
<img src="Read/Read1.png" alt="Favorite Mode" width="400"><br>
<strong>Favorite Mode</strong>
</td>
</tr>
<tr>
<td align="center">
<img src="Read/Read2.png" alt="Single Post" width="400"><br>
<strong>Single Post</strong>
</td>
<td align="center">
<img src="Read/Read3.png" alt="Manga/Comic Mode" width="400"><br>
<strong>Manga/Comic Mode</strong>
</td>
</tr>
</table>
<a href="features.md"><img src="https://img.shields.io/badge/📚%20Full%20Feature%20List-FFD700?style=for-the-badge&logoColor=black&color=FFD700" alt="Full Feature List"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/📝%20License-90EE90?style=for-the-badge&logoColor=black&color=90EE90" alt="License"></a>
</div>
---
<h2>Core Capabilities Overview</h2>
<h3>High-Performance &amp; Resilient Downloading</h3>
<ul>
<li><strong>Multi-threading:</strong> Processes multiple posts simultaneously to greatly accelerate downloads from large creator profiles.</li>
<li><strong>Multi-part Downloading:</strong> Splits large files into chunks and downloads them in parallel to maximize speed.</li>
<li><strong>Session Management:</strong> Supports pausing, resuming, and <strong>restoring downloads</strong> after crashes or interruptions, so you never lose your progress.</li>
</ul>
<h3>Expanded Site Support</h3>
<ul>
<li><strong>Direct Downloading:</strong> Full support for Kemono, Coomer, Bunkr, Erome, Saint2.su, and nhentai.</li>
<li><strong>Batch Mode:</strong> Download hundreds of URLs at once from <code>nhentai.txt</code> or <code>saint2.su.txt</code> files.</li>
<li><strong>Discord Support:</strong> Download files or save entire channel histories as PDFs directly through the API.</li>
</ul>
<h3>Advanced Filtering &amp; Content Control</h3>
<ul>
<li><strong>Content Type Filtering:</strong> Select whether to download all files or limit to images, videos, audio, or archives only.</li>
<li><strong>Keyword Skipping:</strong> Automatically skips posts or files containing certain keywords (e.g., "WIP", "sketch").</li>
<li><strong>Skip by Size:</strong> Avoid small files by setting a minimum size threshold in MB (e.g., <code>[200]</code>).</li>
<li><strong>Character Filtering:</strong> Restricts downloads to posts that match specific character or series names, with scope controls for title, filename, or comments.</li>
</ul>
<h3>Intelligent File Organization</h3>
<ul>
<li><strong>Automated Subfolders:</strong> Automatically organizes downloaded files into subdirectories based on character names or per post.</li>
<li><strong>Advanced File Renaming:</strong> Flexible renaming options, especially in Manga Mode, including by post title, date, sequential numbering, or post ID.</li>
<li><strong>Filename Cleaning:</strong> Automatically removes unwanted text from filenames.</li>
</ul>
<h3>Specialized Modes</h3>
<ul>
<li><strong>Manga/Comic Mode:</strong> Sorts posts chronologically before downloading to ensure pages appear in the correct sequence.</li>
<li><strong>Favorite Mode:</strong> Connects to your account and downloads from your favorites list (artists or posts).</li>
<li><strong>Link Extraction Mode:</strong> Extracts external links (Mega, Google Drive) from posts for export or <strong>direct in-app downloading</strong>.</li>
<li><strong>Text Extraction Mode:</strong> Saves post descriptions or comment sections as <code>PDF</code>, <code>DOCX</code>, or <code>TXT</code> files.</li>
</ul>
<h3>Utility &amp; Advanced Features</h3>
<ul>
<li><strong>In-App Updater:</strong> Check for new versions directly from the settings menu.</li>
<li><strong>Cookie Support:</strong> Enables access to subscriber-only content via browser session cookies.</li>
<li><strong>Duplicate Detection:</strong> Prevents saving duplicate files using content-based comparison, with configurable limits.</li>
<li><strong>Image Compression:</strong> Automatically converts large images to <code>.webp</code> to reduce disk usage.</li>
<li><strong>Creator Management:</strong> Built-in creator browser and update checker for downloading only new posts from saved profiles.</li>
<li><strong>Error Handling:</strong> Tracks failed downloads and provides a retry dialog with options to export or redownload missing files.</li>
</ul>
<section aria-labelledby="supported-sites">
<h2 id="supported-sites">Supported Sites</h2>
A powerful, feature-rich GUI application for downloading content from **[Kemono.su](https://kemono.su)** (and its mirrors like kemono.party) and **[Coomer.party](https://coomer.party)** (and its mirrors like coomer.su).
<h3>Main Platforms</h3>
<p>
The downloader is primarily built to archive content from the platforms below.
</p>
<ul>
<li>
<strong>Kemono &amp; Coomer</strong> — Core supported sites; download posts and files from creators on services such as
<em>Patreon, Fanbox, OnlyFans, Fansly</em>, and similar platforms.
</li>
<li>
<strong>Discord</strong> — Two modes for a channel URL:
<ul>
<li>Download all files and attachments.</li>
<li>Save the entire message history as a formatted PDF.</li>
</ul>
</li>
</ul>
Built with PyQt5, this tool is designed for users who want deep filtering capabilities, customizable folder structures, efficient downloads, and intelligent automation — all within a modern and user-friendly graphical interface.
<hr>
<div align="center">
<h3>Specialized Site Support</h3>
<p>Paste a link from any of the following and the app will handle the download automatically:</p>
[![](https://img.shields.io/badge/📚%20Full%20Feature%20List-FFD700?style=for-the-badge&logoColor=black&color=FFD700)](features.md)
[![](https://img.shields.io/badge/📝%20License-90EE90?style=for-the-badge&logoColor=black&color=90EE90)](LICENSE)
[![](https://img.shields.io/badge/⚠️%20Important%20Note-FFCCCB?style=for-the-badge&logoColor=black&color=FFCCCB)](note.md)
<details>
<summary>Supported specialized sites (click to expand)</summary>
<ul>
<li>AllPornComic</li>
<li>Bunkr</li>
<li>Erome</li>
<li>Fap-Nation</li>
<li>Hentai2Read</li>
<li>nhentai</li>
<li>Pixeldrain</li>
<li>Saint2</li>
<li>Toonily</li>
</ul>
</details>
</div>
<hr>
<h3>Direct File Hosting</h3>
<p>
You may paste direct links from these file hosting services to download content without using the
<code>&quot;Only Links&quot;</code> mode:
</p>
<ul>
<li>Dropbox</li>
<li>Gofile</li>
<li>Google Drive</li>
<li>Mega</li>
</ul>
</section>
---
<h2>💻 Installation</h2>
<h3>Requirements</h3>
<ul>
<li>Python 3.6 or higher</li>
<li>pip (Python package installer)</li>
</ul>
<h3>Install Dependencies</h3>
<pre><code># Required
pip install PyQt5 requests packaging cloudscraper bs4 pycryptodome
</code></pre>
## Feature Overview
<pre><code># Optional
pip install gdown pillow fpdf python-docx
</code></pre>
Kemono Downloader offers a range of features to streamline your content downloading experience:
<h3>Running the Application</h3>
<p>Navigate to the application's directory in your terminal and run:</p>
<pre><code>python main.py
</code></pre>
<h2>Contribution</h2>
<p>Feel free to fork this repo and submit pull requests for bug fixes, new features, or UI improvements!</p>
<h2>License</h2>
<p>This project is licensed under the MIT License.</p>
### Included Third-Party Tools
- **User-Friendly Interface:** A modern PyQt5 GUI for easy navigation and operation.
- **Flexible Downloading:**
- Download content from Kemono.su (and mirrors) and Coomer.party (and mirrors).
- Supports creator pages (with page range selection) and individual post URLs.
- Standard download controls: Start, Pause, Resume, and Cancel.
- **Powerful Filtering:**
- **Character Filtering:** Filter content by character names. Supports simple comma-separated names and grouped names for shared folders.
- **Keyword Skipping:** Skip posts or files based on specified keywords.
- **Filename Cleaning:** Remove unwanted words or phrases from downloaded filenames.
- **File Type Selection:** Choose to download all files, or limit to images/GIFs, videos, audio, or archives. Can also extract external links only.
- **Customizable Downloads:**
- **Thumbnails Only:** Option to download only small preview images.
- **Content Scanning:** Scan post HTML for `<img>` tags and direct image links, useful for images embedded in descriptions.
- **WebP Conversion:** Convert images to WebP format for smaller file sizes (requires Pillow library).
- **Organized Output:**
- **Automatic Subfolders:** Create subfolders based on character names (from filters or `Known.txt`) or post titles.
- **Per-Post Subfolders:** Option to create an additional subfolder for each individual post.
- **Manga/Comic Mode:**
- Downloads posts from a creator's feed in chronological order (oldest to newest).
- Offers various filename styling options for sequential reading (e.g., post title, original name, global numbering).
- **⭐ Favorite Mode:**
- Directly download from your favorited artists and posts on Kemono.su.
- Requires a valid cookie and adapts the UI for easy selection from your favorites.
- Supports downloading into a single location or artist-specific subfolders.
- **Performance & Advanced Options:**
- **Cookie Support:** Use cookies (paste string or load from `cookies.txt`) to access restricted content.
- **Multithreading:** Configure the number of simultaneous downloads/post processing threads for improved speed.
- **Logging:**
- A detailed progress log displays download activity, errors, and summaries.
- **Multi-language Interface:** Choose from several languages for the UI (English, Japanese, French, Spanish, German, Russian, Korean, Chinese Simplified).
- **Theme Customization:** Selectable Light and Dark themes for user comfort.
---
## ✨ What's New in v6.0.0
This release focuses on providing more granular control over file organization and improving at-a-glance status monitoring.
### New Features
- **Live Error Count on Button**
The **"Error" button** now dynamically displays the number of failed files during a download. Instead of opening the dialog, you can quickly see a live count like `(3) Error`, helping you track issues at a glance.
- **Date Prefix for Post Subfolders**
A new checkbox labeled **"Date Prefix"** is now available in the advanced settings.
When enabled alongside **"Subfolder per Post"**, it prepends the post's upload date to the folder name (e.g., `2025-07-11 Post Title`).
This makes your downloads sortable and easier to browse chronologically.
- **Keep Duplicates Within a Post**
A **"Keep Duplicates"** option has been added to preserve all files from a post — even if some have the same name.
Instead of skipping or overwriting, the downloader will save duplicates with numbered suffixes (e.g., `image.jpg`, `image_1.jpg`, etc.), which is especially useful when the same file name points to different media.
### Bug Fixes
- The downloader now correctly renames large `.part` files when completed, avoiding leftover temp files.
- The list of failed files shown in the Error Dialog is now saved and restored with your session — so no errors get lost if you close the app.
- Your selected download location is remembered, even after pressing the **Reset** button.
- The **Cancel** button is now enabled when restoring a pending session, so you can abort stuck jobs more easily.
- Internal cleanup logs (like "Deleting post cache") are now excluded from the final download summary for clarity.
---
## 📅 Next Update Plans
### 🔖 Post Tag Filtering (Planned for v6.1.0)
A powerful new **"Filter by Post Tags"** feature is planned:
- Filter and download content based on specific post tags.
- Combine tag filtering with current filters (character, file type, etc.).
- Use tag presets to automate frequent downloads.
This will provide **much greater control** over what gets downloaded, especially for creators who use tags consistently.
### 📁 Creator Download History (.json Save)
To streamline incremental downloads, a new system will allow the app to:
- Save a `.json` file with metadata about already-downloaded posts.
- Compare that file on future runs, so only **new** posts are downloaded.
- Avoids duplication and makes regular syncs fast and efficient.
Ideal for users managing large collections or syncing favorites regularly.
---
## 💻 Installation
### Requirements
- Python 3.6 or higher
- pip (Python package installer)
### Install Dependencies
```bash
pip install PyQt5 requests Pillow mega.py
```
### Running the Application
Navigate to the application's directory in your terminal and run:
```bash
python main.py
```
### Optional Setup
- **Main Inputs:**
- Place your `cookies.txt` in the root directory (if using cookies).
- Prepare your `Known.txt` and `creators.json` in the same directory for advanced filtering and selection features.
---
## Troubleshooting
### AttributeError: module 'asyncio' has no attribute 'coroutine'
If you encounter an error message similar to:
```
AttributeError: module 'asyncio' has no attribute 'coroutine'. Did you mean: 'coroutines'?
```
This usually means that a dependency, often `tenacity` (used by `mega.py`), is an older version that's incompatible with your Python version (typically Python 3.10+).
To fix this, activate your virtual environment and run the following commands to upgrade the libraries:
```bash
pip install --upgrade tenacity
pip install --upgrade mega.py
```
---
## Contribution
Feel free to fork this repo and submit pull requests for bug fixes, new features, or UI improvements!
---
## License
This project is under the Custom Licence
## Star History
This project includes a pre-compiled binary of `yt-dlp` for handling certain video downloads. `yt-dlp` is in the public domain. For more information or to get the latest version, please visit the official [yt-dlp GitHub repository](https://github.com/yt-dlp/yt-dlp).
<h2>Star History</h2>
<table align="center" style="border-collapse: collapse; border: none; margin-left: auto; margin-right: auto;">
<tr>
<td align="center" valign="middle" style="padding: 10px; border: none;">
<a href="https://www.star-history.com/#Yuvi9587/Kemono-Downloader&Date">
<img src="https://api.star-history.com/svg?repos=Yuvi9587/Kemono-Downloader&type=Date" alt="Star History Chart" width="650">
</a>
<tbody>
<tr>
<td align="center" valign="middle" style="padding: 10px; border: none;">
<a href="https://www.star-history.com/#Yuvi9587/Kemono-Downloader&amp;Date">
<img src="https://api.star-history.com/svg?repos=Yuvi9587/Kemono-Downloader&amp;type=Date" alt="Star History Chart" width="650">
</a>
</td>
</tr>
</tbody>
</table>
<p align="center">
<a href="https://buymeacoffee.com/yuvi9587">
<img src="https://img.shields.io/badge/🍺%20Buy%20Me%20a%20Coffee-FFCCCB?style=for-the-badge&logoColor=black&color=FFDD00" alt="Buy Me a Coffee">
<img src="https://img.shields.io/badge/🍺%20Buy%20Me%20a%20Coffee-FFCCCB?style=for-the-badge&amp;logoColor=black&amp;color=FFDD00" alt="Buy Me a Coffee">
</a>
</p>

security.md

@@ -6,9 +6,9 @@ We are committed to maintaining and improving the Kemono Downloader. For the bes
 | Version | Supported Status |
 | -------------- | ------------------------------------ |
-| >= 5.0.0 | :white_check_mark: Actively Supported |
-| 4.0.0 - 4.x.x | :warning: Supported (Limited Features) |
-| < 4.0.0 | :x: End of Life (EOL) |
+| >= 7.0.0 | :white_check_mark: Actively Supported |
+| 6.0.0 - 6.x.x | :warning: Supported (Limited Features) |
+| < 5.0.0 | :x: End of Life (EOL) |
Users are encouraged to update to **v5.0.0 or newer**.

src/config/constants.py

@@ -1,4 +1,3 @@
-# --- Application Metadata ---
 CONFIG_ORGANIZATION_NAME = "KemonoDownloader"
 CONFIG_APP_NAME_MAIN = "ApplicationSettings"
 CONFIG_APP_NAME_TOUR = "ApplicationTour"
@@ -9,7 +8,7 @@ STYLE_ORIGINAL_NAME = "original_name"
 STYLE_DATE_BASED = "date_based"
 STYLE_DATE_POST_TITLE = "date_post_title"
 STYLE_POST_TITLE_GLOBAL_NUMBERING = "post_title_global_numbering"
-STYLE_POST_ID = "post_id" # Add this line
+STYLE_POST_ID = "post_id"
 MANGA_DATE_PREFIX_DEFAULT = ""
 # --- Download Scopes ---
@@ -48,6 +47,8 @@ MAX_PARTS_FOR_MULTIPART_DOWNLOAD = 15
 # --- UI and Settings Keys (for QSettings) ---
 TOUR_SHOWN_KEY = "neverShowTourAgainV19"
 MANGA_FILENAME_STYLE_KEY = "mangaFilenameStyleV1"
+MANGA_CUSTOM_FORMAT_KEY = "mangaCustomFormatV1"
+MANGA_CUSTOM_DATE_FORMAT_KEY = "mangaCustomDateFormatV1"
 SKIP_WORDS_SCOPE_KEY = "skipWordsScopeV1"
 ALLOW_MULTIPART_DOWNLOAD_KEY = "allowMultipartDownloadV1"
 USE_COOKIE_KEY = "useCookieV1"
@@ -59,6 +60,13 @@ LANGUAGE_KEY = "currentLanguageV1"
 DOWNLOAD_LOCATION_KEY = "downloadLocationV1"
 RESOLUTION_KEY = "window_resolution"
 UI_SCALE_KEY = "ui_scale_factor"
+SAVE_CREATOR_JSON_KEY = "saveCreatorJsonProfile"
+DATE_PREFIX_FORMAT_KEY = "datePrefixFormatV1"
+AUTO_RETRY_ON_FINISH_KEY = "auto_retry_on_finish"
+FETCH_FIRST_KEY = "fetchAllPostsFirst"
+DISCORD_TOKEN_KEY = "discord/token"
+POST_DOWNLOAD_ACTION_KEY = "postDownloadAction"
 # --- UI Constants and Identifiers ---
 HTML_PREFIX = "<!HTML!>"
@@ -80,7 +88,7 @@ VIDEO_EXTENSIONS = {
     '.mpg', '.m4v', '.3gp', '.ogv', '.ts', '.vob'
 }
 ARCHIVE_EXTENSIONS = {
-    '.zip', '.rar', '.7z', '.tar', '.gz', '.bz2'
+    '.zip', '.rar', '.7z', '.tar', '.gz', '.bz2', '.bin'
 }
 AUDIO_EXTENSIONS = {
     '.mp3', '.wav', '.aac', '.flac', '.ogg', '.wma', '.m4a', '.opus',
@@ -96,7 +104,7 @@ FOLDER_NAME_STOP_WORDS = {
     "for", "he", "her", "his", "i", "im", "in", "is", "it", "its",
     "me", "my", "net", "not", "of", "on", "or", "org", "our",
     "s", "she", "so", "the", "their", "they", "this",
-    "to", "ve", "was", "we", "were", "with", "www", "you", "your",
+    "to", "ve", "was", "we", "were", "with", "www", "you", "your", "nsfw", "sfw",
     # add more according to need
 }
@@ -110,10 +118,13 @@ CREATOR_DOWNLOAD_DEFAULT_FOLDER_IGNORE_WORDS = {
     "may", "jun", "june", "jul", "july", "aug", "august", "sep", "september",
     "oct", "october", "nov", "november", "dec", "december",
     "mon", "monday", "tue", "tuesday", "wed", "wednesday", "thu", "thursday",
-    "fri", "friday", "sat", "saturday", "sun", "sunday"
+    "fri", "friday", "sat", "saturday", "sun", "sunday", "Pack", "tier", "spoiler",
     # add more according to need
 }
 # --- Duplicate Handling Modes ---
 DUPLICATE_HANDLING_HASH = "hash"
-DUPLICATE_HANDLING_KEEP_ALL = "keep_all"
+DUPLICATE_HANDLING_KEEP_ALL = "keep_all"
+STYLE_CUSTOM = "custom"

Hentai2Read client (new file)

@@ -0,0 +1,234 @@
import re
import os
import time
import cloudscraper
from bs4 import BeautifulSoup
from urllib.parse import urljoin
from concurrent.futures import ThreadPoolExecutor
import queue
def run_hentai2read_download(start_url, output_dir, progress_callback, overall_progress_callback, check_pause_func):
"""
Orchestrates the download process using a producer-consumer model.
The main thread scrapes image URLs and puts them in a queue.
A pool of worker threads consumes from the queue to download images concurrently.
"""
scraper = cloudscraper.create_scraper()
try:
progress_callback(" [Hentai2Read] Scraping series page for all metadata...")
top_level_folder_name, chapters_to_process = _get_series_metadata(start_url, progress_callback, scraper)
if not chapters_to_process:
progress_callback("❌ No chapters found to download. Aborting.")
return 0, 0
total_chapters = len(chapters_to_process)
overall_progress_callback(total_chapters, 0)
total_downloaded_count = 0
total_skipped_count = 0
for idx, chapter in enumerate(chapters_to_process):
if check_pause_func(): break
progress_callback(f"\n-- Processing and Downloading Chapter {idx + 1}/{total_chapters}: '{chapter['title']}' --")
series_folder = re.sub(r'[\\/*?:"<>|]', "", top_level_folder_name).strip()
chapter_folder = re.sub(r'[\\/*?:"<>|]', "", chapter['title']).strip()
final_save_path = os.path.join(output_dir, series_folder, chapter_folder)
os.makedirs(final_save_path, exist_ok=True)
# This function now scrapes and downloads simultaneously
dl_count, skip_count = _process_and_download_chapter(
chapter_url=chapter['url'],
save_path=final_save_path,
scraper=scraper,
progress_callback=progress_callback,
check_pause_func=check_pause_func
)
total_downloaded_count += dl_count
total_skipped_count += skip_count
overall_progress_callback(total_chapters, idx + 1)
if check_pause_func(): break
return total_downloaded_count, total_skipped_count
except Exception as e:
progress_callback(f"❌ A critical error occurred in the Hentai2Read client: {e}")
return 0, 0
def _get_series_metadata(start_url, progress_callback, scraper):
"""
Scrapes the main series page to get the Artist Name, Series Title, and chapter list.
Includes a retry mechanism for the initial connection.
"""
max_retries = 4 # Total number of attempts (1 initial + 3 retries)
last_exception = None
soup = None
for attempt in range(max_retries):
try:
if attempt > 0:
progress_callback(f" [Hentai2Read] ⚠️ Retrying connection (Attempt {attempt + 1}/{max_retries})...")
response = scraper.get(start_url, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
# If successful, clear exception and break the loop
last_exception = None
break
except Exception as e:
last_exception = e
progress_callback(f" [Hentai2Read] ⚠️ Connection attempt {attempt + 1} failed: {e}")
if attempt < max_retries - 1:
time.sleep(2 * (attempt + 1)) # Wait 2s, 4s, 6s
continue # Try again
if last_exception:
progress_callback(f" [Hentai2Read] ❌ Error getting series metadata after {max_retries} attempts: {last_exception}")
return "Unknown Series", []
try:
series_title = "Unknown Series"
artist_name = None
metadata_list = soup.select_one("ul.list.list-simple-mini")
if metadata_list:
first_li = metadata_list.find('li', recursive=False)
if first_li and not first_li.find('a'):
series_title = first_li.get_text(strip=True)
for b_tag in metadata_list.find_all('b'):
label = b_tag.get_text(strip=True)
if label in ("Artist", "Author"):
a_tag = b_tag.find_next_sibling('a')
if a_tag:
artist_name = a_tag.get_text(strip=True)
if label == "Artist":
break
top_level_folder_name = artist_name if artist_name else series_title
chapter_links = soup.select("div.media a.pull-left.font-w600")
if not chapter_links:
chapters_to_process = [{'url': start_url, 'title': series_title}]
else:
chapters_to_process = [
{'url': urljoin(start_url, link['href']), 'title': " ".join(link.stripped_strings)}
for link in chapter_links
]
chapters_to_process.reverse()
progress_callback(f" [Hentai2Read] ✅ Found Artist/Series: '{top_level_folder_name}'")
progress_callback(f" [Hentai2Read] ✅ Found {len(chapters_to_process)} chapters to process.")
return top_level_folder_name, chapters_to_process
except Exception as e:
progress_callback(f" [Hentai2Read] ❌ Error parsing metadata after successful connection: {e}")
return "Unknown Series", []
def _process_and_download_chapter(chapter_url, save_path, scraper, progress_callback, check_pause_func):
"""
Uses a producer-consumer pattern to download a chapter.
The main thread (producer) scrapes URLs one by one.
Worker threads (consumers) download the URLs as they are found.
"""
task_queue = queue.Queue()
num_download_threads = 8
download_stats = {'downloaded': 0, 'skipped': 0}
def downloader_worker():
"""The function that each download thread will run."""
worker_scraper = cloudscraper.create_scraper()
while True:
try:
# Get a task from the queue
task = task_queue.get()
# The sentinel value to signal the end
if task is None:
break
filepath, img_url = task
if os.path.exists(filepath):
progress_callback(f" -> Skip: '{os.path.basename(filepath)}'")
download_stats['skipped'] += 1
else:
progress_callback(f" Downloading: '{os.path.basename(filepath)}'...")
response = worker_scraper.get(img_url, stream=True, timeout=60, headers={'Referer': chapter_url})
response.raise_for_status()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
f.write(chunk)
download_stats['downloaded'] += 1
except Exception as e:
progress_callback(f" ❌ Download failed for task. Error: {e}")
download_stats['skipped'] += 1
finally:
task_queue.task_done()
executor = ThreadPoolExecutor(max_workers=num_download_threads, thread_name_prefix='H2R_Downloader')
for _ in range(num_download_threads):
executor.submit(downloader_worker)
page_number = 1
while True:
if check_pause_func(): break
if page_number > 300: # Safety break
progress_callback(" [Hentai2Read] ⚠️ Safety break: Reached 300 pages.")
break
page_url_to_check = f"{chapter_url}{page_number}/"
try:
page_response = None
page_last_exception = None
for page_attempt in range(3): # 3 attempts for sub-pages
try:
page_response = scraper.get(page_url_to_check, timeout=30)
page_last_exception = None
break
except Exception as e:
page_last_exception = e
time.sleep(1) # Short delay for page scraping retries
if page_last_exception:
raise page_last_exception # Give up after 3 tries
if page_response.history or page_response.status_code != 200:
progress_callback(f" [Hentai2Read] End of chapter detected on page {page_number}.")
break
soup = BeautifulSoup(page_response.text, 'html.parser')
img_tag = soup.select_one("img#arf-reader")
img_src = img_tag.get("src") if img_tag else None
if not img_tag or img_src == "https://static.hentai.direct/hentai":
progress_callback(f" [Hentai2Read] End of chapter detected (Placeholder image on page {page_number}).")
break
normalized_img_src = urljoin(page_response.url, img_src)
ext = os.path.splitext(normalized_img_src.split('/')[-1])[-1] or ".jpg"
filename = f"{page_number:03d}{ext}"
filepath = os.path.join(save_path, filename)
task_queue.put((filepath, normalized_img_src))
page_number += 1
time.sleep(0.1) # Small delay between scraping pages
except Exception as e:
progress_callback(f" [Hentai2Read] ❌ Error while scraping page {page_number}: {e}")
break
for _ in range(num_download_threads):
task_queue.put(None)
executor.shutdown(wait=True)
progress_callback(f" Found and processed {page_number - 1} images for this chapter.")
return download_stats['downloaded'], download_stats['skipped']

src/core/allcomic_client.py (new file, 121 lines)

@@ -0,0 +1,121 @@
import requests
import re
from bs4 import BeautifulSoup
import time
import random
from urllib.parse import urlparse
def get_chapter_list(scraper, series_url, logger_func):
"""
Checks if a URL is a series page and returns a list of all chapter URLs if it is.
Relies on a passed-in scraper session for connection.
"""
logger_func(f" [AllComic] Checking for chapter list at: {series_url}")
headers = {'Referer': 'https://allporncomic.com/'}
response = None
max_retries = 8
for attempt in range(max_retries):
try:
response = scraper.get(series_url, headers=headers, timeout=30)
response.raise_for_status()
logger_func(f" [AllComic] Successfully connected to series page on attempt {attempt + 1}.")
break
except requests.RequestException as e:
logger_func(f" [AllComic] ⚠️ Series page check attempt {attempt + 1}/{max_retries} failed: {e}")
if attempt < max_retries - 1:
wait_time = (2 ** attempt) + random.uniform(0, 2)
logger_func(f" Retrying in {wait_time:.1f} seconds...")
time.sleep(wait_time)
else:
logger_func(f" [AllComic] ❌ All attempts to check series page failed.")
return []
if not response:
return []
try:
soup = BeautifulSoup(response.text, 'html.parser')
chapter_links = soup.select('li.wp-manga-chapter a')
if not chapter_links:
logger_func(" [AllComic] No chapter list found. Assuming this is a single chapter page.")
return []
chapter_urls = [link['href'] for link in chapter_links]
chapter_urls.reverse()
logger_func(f" [AllComic] ✅ Found {len(chapter_urls)} chapters.")
return chapter_urls
except Exception as e:
logger_func(f" [AllComic] ❌ Error parsing chapters after successful connection: {e}")
return []
def fetch_chapter_data(scraper, chapter_url, logger_func):
"""
Fetches the comic title, chapter title, and image URLs for a single chapter page.
Relies on a passed-in scraper session for connection.
"""
logger_func(f" [AllComic] Fetching page: {chapter_url}")
headers = {'Referer': 'https://allporncomic.com/'}
response = None
max_retries = 8
for attempt in range(max_retries):
try:
response = scraper.get(chapter_url, headers=headers, timeout=30)
response.raise_for_status()
break
except requests.RequestException as e:
logger_func(f" [AllComic] ⚠️ Chapter page connection attempt {attempt + 1}/{max_retries} failed: {e}")
if attempt < max_retries - 1:
wait_time = (2 ** attempt) + random.uniform(0, 2)
logger_func(f" Retrying in {wait_time:.1f} seconds...")
time.sleep(wait_time)
else:
logger_func(f" [AllComic] ❌ All connection attempts failed for chapter: {chapter_url}")
return None, None, None
if not response:
return None, None, None
try:
soup = BeautifulSoup(response.text, 'html.parser')
comic_title = "Unknown Comic"
title_element = soup.find('h1', class_='post-title')
if title_element:
comic_title = title_element.text.strip()
else:
try:
path_parts = urlparse(chapter_url).path.strip('/').split('/')
if len(path_parts) >= 3 and path_parts[-3] == 'porncomic':
comic_slug = path_parts[-2]
comic_title = comic_slug.replace('-', ' ').title()
except Exception:
pass
chapter_slug = chapter_url.strip('/').split('/')[-1]
chapter_title = chapter_slug.replace('-', ' ').title()
reading_container = soup.find('div', class_='reading-content')
list_of_image_urls = []
if reading_container:
image_elements = reading_container.find_all('img', class_='wp-manga-chapter-img')
for img in image_elements:
img_url = (img.get('data-src') or img.get('src', '')).strip()
if img_url:
list_of_image_urls.append(img_url)
if not list_of_image_urls:
logger_func(f" [AllComic] ❌ Could not find any images on the page.")
return None, None, None
return comic_title, chapter_title, list_of_image_urls
except Exception as e:
logger_func(f" [AllComic] ❌ An unexpected error occurred while parsing the page: {e}")
return None, None, None
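Taken together, the two helpers form a small pipeline: resolve a series URL into chapter URLs, then fetch each chapter's titles and image list. A minimal usage sketch (hypothetical URL, `print` as the logger; the session is assumed to be a cloudscraper instance, matching the other clients in this repo):

    import cloudscraper

    scraper = cloudscraper.create_scraper()
    series_url = "https://allporncomic.com/porncomic/example-comic/"  # hypothetical

    chapters = get_chapter_list(scraper, series_url, print)
    for chapter_url in chapters or [series_url]:  # empty list => treat input as a single chapter
        comic, chapter, images = fetch_chapter_data(scraper, chapter_url, print)
        if images:
            print(f"{comic} / {chapter}: {len(images)} images")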


@@ -1,10 +1,9 @@
import time
import traceback
from urllib.parse import urlparse
-import json # Ensure json is imported
+import json
import requests
-# (Keep the rest of your imports)
import cloudscraper
from ..utils.network_utils import extract_post_info, prepare_cookies_for_request
from ..config.constants import (
STYLE_DATE_POST_TITLE
@@ -14,7 +13,6 @@ from ..config.constants import (
def fetch_posts_paginated(api_url_base, headers, offset, logger, cancellation_event=None, pause_event=None, cookies_dict=None):
"""
Fetches a single page of posts from the API with robust retry logic.
-NEW: Requests only essential fields to keep the response size small and reliable.
"""
if cancellation_event and cancellation_event.is_set():
raise RuntimeError("Fetch operation cancelled by user.")
@@ -25,9 +23,6 @@ def fetch_posts_paginated(api_url_base, headers, offset, logger, cancellation_ev
raise RuntimeError("Fetch operation cancelled by user while paused.")
time.sleep(0.5)
logger(" Post fetching resumed.")
-# --- MODIFICATION: Added `fields` to the URL to request only metadata ---
-# This prevents the large 'content' field from being included in the list, avoiding timeouts.
fields_to_request = "id,user,service,title,shared_file,added,published,edited,file,attachments,tags"
paginated_url = f'{api_url_base}?o={offset}&fields={fields_to_request}'
@@ -38,18 +33,31 @@ def fetch_posts_paginated(api_url_base, headers, offset, logger, cancellation_ev
if cancellation_event and cancellation_event.is_set():
raise RuntimeError("Fetch operation cancelled by user during retry loop.")
log_message = f" Fetching post list: {api_url_base}?o={offset} (Page approx. {offset // 50 + 1})"
log_message = f" Fetching post list: {api_url_base} (Page approx. {offset // 50 + 1})"
if attempt > 0:
log_message += f" (Attempt {attempt + 1}/{max_retries})"
logger(log_message)
try:
-# We can now remove the streaming logic as the response will be small and fast.
response = requests.get(paginated_url, headers=headers, timeout=(15, 60), cookies=cookies_dict)
response.raise_for_status()
response.encoding = 'utf-8'
return response.json()
except requests.exceptions.RequestException as e:
+# Handle 403 error on the FIRST page as a rate limit/block
+if e.response is not None and e.response.status_code == 403 and offset == 0:
+logger(" ❌ Access Denied (403 Forbidden) on the first page.")
+logger(" This is likely a rate limit or a Cloudflare block.")
+logger(" 💡 SOLUTION: Wait a while, use a VPN, or provide a valid session cookie.")
+return [] # Stop the process gracefully
# Handle 400 error as the end of pages
if e.response is not None and e.response.status_code == 400:
logger(f" ✅ Reached end of posts (API returned 400 Bad Request for offset {offset}).")
return []
# Handle all other network errors with a retry
logger(f" ⚠️ Retryable network error on page fetch (Attempt {attempt + 1}): {e}")
if attempt < max_retries - 1:
delay = retry_delay * (2 ** attempt)
@@ -71,28 +79,28 @@ def fetch_posts_paginated(api_url_base, headers, offset, logger, cancellation_ev
raise RuntimeError(f"Failed to fetch page {paginated_url} after all attempts.")
def fetch_single_post_data(api_domain, service, user_id, post_id, headers, logger, cookies_dict=None):
"""
-    --- NEW FUNCTION ---
-    Fetches the full data, including the 'content' field, for a single post.
+    --- MODIFIED FUNCTION ---
+    Fetches the full data, including the 'content' field, for a single post using cloudscraper.
"""
post_api_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}/post/{post_id}"
logger(f" Fetching full content for post ID {post_id}...")
+scraper = cloudscraper.create_scraper()
try:
-# Use streaming here as a precaution for single posts that are still very large.
-with requests.get(post_api_url, headers=headers, timeout=(15, 300), cookies=cookies_dict, stream=True) as response:
-response.raise_for_status()
-response_body = b""
-for chunk in response.iter_content(chunk_size=8192):
-response_body += chunk
-full_post_data = json.loads(response_body)
-# The API sometimes wraps the post in a list, handle that.
-if isinstance(full_post_data, list) and full_post_data:
-return full_post_data[0]
-return full_post_data
+response = scraper.get(post_api_url, headers=headers, timeout=(15, 300), cookies=cookies_dict)
+response.raise_for_status()
+full_post_data = response.json()
+if isinstance(full_post_data, list) and full_post_data:
+return full_post_data[0]
+if isinstance(full_post_data, dict) and 'post' in full_post_data:
+return full_post_data['post']
+return full_post_data
except Exception as e:
logger(f" ❌ Failed to fetch full content for post {post_id}: {e}")
return None
@@ -109,6 +117,7 @@ def fetch_post_comments(api_domain, service, user_id, post_id, headers, logger,
try:
response = requests.get(comments_api_url, headers=headers, timeout=(10, 30), cookies=cookies_dict)
response.raise_for_status()
+response.encoding = 'utf-8'
return response.json()
except requests.exceptions.RequestException as e:
raise RuntimeError(f"Error fetching comments for post {post_id}: {e}")
@@ -128,20 +137,22 @@ def download_from_api(
selected_cookie_file=None,
app_base_dir=None,
manga_filename_style_for_sort_check=None,
-processed_post_ids=None # --- ADD THIS ARGUMENT ---
-):
+processed_post_ids=None,
+fetch_all_first=False
+):
parsed_input_url_for_domain = urlparse(api_url_input)
api_domain = parsed_input_url_for_domain.netloc
headers = {
-'User-Agent': 'Mozilla/5.0',
-'Accept': 'application/json'
+'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
+'Referer': f'https://{api_domain}/',
+'Accept': 'text/css'
}
-# --- ADD THIS BLOCK ---
# Ensure processed_post_ids is a set for fast lookups
if processed_post_ids is None:
processed_post_ids = set()
else:
processed_post_ids = set(processed_post_ids)
-# --- END OF ADDITION ---
service, user_id, target_post_id = extract_post_info(api_url_input)
@@ -149,25 +160,25 @@ def download_from_api(
logger(" Download_from_api cancelled at start.")
return
-parsed_input_url_for_domain = urlparse(api_url_input)
-api_domain = parsed_input_url_for_domain.netloc
-if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
+# The code that defined api_domain was moved from here to the top of the function
+if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
logger(f"⚠️ Unrecognized domain '{api_domain}' from input URL. Defaulting to kemono.su for API calls.")
api_domain = "kemono.su"
cookies_for_api = None
if use_cookie and app_base_dir:
cookies_for_api = prepare_cookies_for_request(use_cookie, cookie_text, selected_cookie_file, app_base_dir, logger, target_domain=api_domain)
if target_post_id:
-# --- ADD THIS CHECK FOR RESTORE ---
if target_post_id in processed_post_ids:
logger(f" Skipping already processed target post ID: {target_post_id}")
return
-# --- END OF ADDITION ---
direct_post_api_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}/post/{target_post_id}"
logger(f" Attempting direct fetch for target post: {direct_post_api_url}")
try:
direct_response = requests.get(direct_post_api_url, headers=headers, timeout=(10, 30), cookies=cookies_for_api)
direct_response.raise_for_status()
+direct_response.encoding = 'utf-8'
direct_post_data = direct_response.json()
if isinstance(direct_post_data, list) and direct_post_data:
direct_post_data = direct_post_data[0]
@@ -192,7 +203,8 @@ def download_from_api(
logger("⚠️ Page range (start/end page) is ignored when a specific post URL is provided (searching all pages for the post).")
is_manga_mode_fetch_all_and_sort_oldest_first = manga_mode and (manga_filename_style_for_sort_check != STYLE_DATE_POST_TITLE) and not target_post_id
api_base_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}"
should_fetch_all = fetch_all_first or is_manga_mode_fetch_all_and_sort_oldest_first
api_base_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}/posts"
page_size = 50
if is_manga_mode_fetch_all_and_sort_oldest_first:
logger(f" Manga Mode (Style: {manga_filename_style_for_sort_check if manga_filename_style_for_sort_check else 'Default'} - Oldest First Sort Active): Fetching all posts to sort by date...")
@@ -234,6 +246,9 @@ def download_from_api(
logger(f" Manga Mode: No posts found within the specified page range ({start_page or 1}-{end_page}).")
break
all_posts_for_manga_mode.extend(posts_batch_manga)
logger(f"RENAMING_MODE_FETCH_PROGRESS:{len(all_posts_for_manga_mode)}:{current_page_num_manga}")
current_offset_manga += page_size
time.sleep(0.6)
except RuntimeError as e:
@@ -246,16 +261,19 @@ def download_from_api(
logger(f"❌ Unexpected error during manga mode fetch: {e}")
traceback.print_exc()
break
if cancellation_event and cancellation_event.is_set(): return
+if all_posts_for_manga_mode:
+logger(f"RENAMING_MODE_FETCH_COMPLETE:{len(all_posts_for_manga_mode)}")
if all_posts_for_manga_mode:
-# --- ADD THIS BLOCK TO FILTER POSTS IN MANGA MODE ---
if processed_post_ids:
original_count = len(all_posts_for_manga_mode)
all_posts_for_manga_mode = [post for post in all_posts_for_manga_mode if post.get('id') not in processed_post_ids]
skipped_count = original_count - len(all_posts_for_manga_mode)
if skipped_count > 0:
logger(f" Manga Mode: Skipped {skipped_count} already processed post(s) before sorting.")
-# --- END OF ADDITION ---
logger(f" Manga Mode: Fetched {len(all_posts_for_manga_mode)} total posts. Sorting by publication date (oldest first)...")
def sort_key_tuple(post):
@@ -326,15 +344,12 @@ def download_from_api(
logger(f"❌ Unexpected error fetching page {current_page_num} (offset {current_offset}): {e}")
traceback.print_exc()
break
-# --- ADD THIS BLOCK TO FILTER POSTS IN STANDARD MODE ---
if processed_post_ids:
original_count = len(posts_batch)
posts_batch = [post for post in posts_batch if post.get('id') not in processed_post_ids]
skipped_count = original_count - len(posts_batch)
if skipped_count > 0:
logger(f" Skipped {skipped_count} already processed post(s) from page {current_page_num}.")
-# --- END OF ADDITION ---
if not posts_batch:
if target_post_id and not processed_target_post_flag:
@@ -360,3 +375,4 @@ def download_from_api(
time.sleep(0.6)
if target_post_id and not processed_target_post_flag and not (cancellation_event and cancellation_event.is_set()):
logger(f"❌ Target post {target_post_id} could not be found after checking all relevant pages (final check after loop).")

src/core/booru_client.py

@@ -0,0 +1,374 @@
import os
import re
import time
import datetime
import urllib.parse
import requests
import logging
import cloudscraper
# --- Start of Combined Code from 1.py ---
# Part 1: Essential Utilities & Exceptions
class BooruClientException(Exception):
"""Base class for exceptions in this client."""
pass
class HttpError(BooruClientException):
"""HTTP request during data extraction failed."""
def __init__(self, message="", response=None):
self.response = response
self.status = response.status_code if response else 0
if response and not message:
message = f"'{response.status_code} {response.reason}' for '{response.url}'"
super().__init__(message)
class NotFoundError(BooruClientException):
pass
def unquote(s):
return urllib.parse.unquote(s)
def parse_datetime(date_string, fmt):
try:
# Assumes date_string is in a format that strptime can handle with timezone
return datetime.datetime.strptime(date_string, fmt)
except (ValueError, TypeError):
return None
def nameext_from_url(url, data=None):
if data is None: data = {}
try:
path = urllib.parse.urlparse(url).path
filename = unquote(os.path.basename(path))
if '.' in filename:
name, ext = filename.rsplit('.', 1)
data["filename"], data["extension"] = name, ext.lower()
else:
data["filename"], data["extension"] = filename, ""
except Exception:
data["filename"], data["extension"] = "", ""
return data
USERAGENT_FIREFOX = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/118.0"
# Part 2: Core Extractor Logic
class Extractor:
category = ""
subcategory = ""
directory_fmt = ("{category}", "{id}")
filename_fmt = "{filename}.{extension}"
_retries = 3
_timeout = 30
def __init__(self, match, logger_func=print):
self.url = match.string
self.match = match
self.groups = match.groups()
self.session = cloudscraper.create_scraper()
self.session.headers["User-Agent"] = USERAGENT_FIREFOX
self.log = logger_func
self.api_key = None
self.user_id = None
def set_auth(self, api_key, user_id):
self.api_key = api_key
self.user_id = user_id
self._init_auth()
def _init_auth(self):
"""Placeholder for extractor-specific auth setup."""
pass
def request(self, url, method="GET", fatal=True, **kwargs):
for attempt in range(self._retries + 1):
try:
response = self.session.request(method, url, timeout=self._timeout, **kwargs)
if response.status_code < 400:
return response
if response.status_code == 404 and fatal:
raise NotFoundError(f"Resource not found at {url}")
self.log(f"Request for {url} failed with status {response.status_code}. Retrying...")
except requests.exceptions.RequestException as e:
self.log(f"Request for {url} failed: {e}. Retrying...")
if attempt < self._retries:
time.sleep(2 ** attempt)
if fatal:
raise HttpError(f"Failed to retrieve {url} after {self._retries} retries.")
return None
def request_json(self, url, **kwargs):
response = self.request(url, **kwargs)
# Guard against request() returning None when called with fatal=False.
if response is None:
raise BooruClientException(f"No response received from {url}")
try:
return response.json()
except (ValueError, TypeError) as exc:
self.log(f"Failed to decode JSON from {url}: {exc}")
raise BooruClientException("Invalid JSON response")
def items(self):
data = self.metadata()
for item in self.posts():
# Check for our special page update message
if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
yield item
continue
# Otherwise, process it as a post
post = item
url = post.get("file_url")
if not url: continue
nameext_from_url(url, post)
post["date"] = parse_datetime(post.get("created_at"), "%Y-%m-%dT%H:%M:%S.%f%z")
if url.startswith("/"):
url = self.root + url
post['file_url'] = url # Ensure full URL
post.update(data)
yield post
class BaseExtractor(Extractor):
instances = ()
def __init__(self, match, logger_func=print):
super().__init__(match, logger_func)
self._init_category()
def _init_category(self):
parsed_url = urllib.parse.urlparse(self.url)
self.root = f"{parsed_url.scheme}://{parsed_url.netloc}"
for i, group in enumerate(self.groups):
if group is not None:
try:
self.category = self.instances[i][0]
return
except IndexError:
continue
@classmethod
def update(cls, instances):
pattern_list = []
instance_list = cls.instances = []
for category, info in instances.items():
root = info["root"].rstrip("/") if info["root"] else ""
instance_list.append((category, root, info))
pattern = info.get("pattern", re.escape(root.partition("://")[2]))
pattern_list.append(f"({pattern})")
return r"(?:https?://)?(?:" + "|".join(pattern_list) + r")"
# Part 3: Danbooru Extractor
class DanbooruExtractor(BaseExtractor):
filename_fmt = "{category}_{id}_{filename}.{extension}"
per_page = 200
def __init__(self, match, logger_func=print):
super().__init__(match, logger_func)
self._auth_logged = False
def _init_auth(self):
if self.user_id and self.api_key:
if not self._auth_logged:
self.log("Danbooru auth set.")
self._auth_logged = True
self.session.auth = (self.user_id, self.api_key)
def items(self):
data = self.metadata()
for item in self.posts():
# Check for our special page update message
if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
yield item
continue
# Otherwise, process it as a post
post = item
url = post.get("file_url")
if not url: continue
nameext_from_url(url, post)
post["date"] = parse_datetime(post.get("created_at"), "%Y-%m-%dT%H:%M:%S.%f%z")
if url.startswith("/"):
url = self.root + url
post['file_url'] = url # Ensure full URL
post.update(data)
yield post
def metadata(self):
return {}
def posts(self):
return []
def _pagination(self, endpoint, params, prefix="b"):
url = self.root + endpoint
params["limit"] = self.per_page
params["page"] = 1
threshold = self.per_page - 20
while True:
posts = self.request_json(url, params=params)
if not posts: break
yield ('PAGE_UPDATE', len(posts))
yield from posts
if len(posts) < threshold: return
if prefix:
params["page"] = f"{prefix}{posts[-1]['id']}"
else:
params["page"] += 1
BASE_PATTERN = DanbooruExtractor.update({
"danbooru": {"root": None, "pattern": r"(?:danbooru|safebooru)\.donmai\.us"},
})
class DanbooruTagExtractor(DanbooruExtractor):
subcategory = "tag"
directory_fmt = ("{category}", "{search_tags}")
pattern = BASE_PATTERN + r"(/posts\?(?:[^&#]*&)*tags=([^&#]*))"
def metadata(self):
self.tags = unquote(self.groups[-1].replace("+", " ")).strip()
sanitized_tags = re.sub(r'[\\/*?:"<>|]', "_", self.tags)
return {"search_tags": sanitized_tags}
def posts(self):
return self._pagination("/posts.json", {"tags": self.tags})
class DanbooruPostExtractor(DanbooruExtractor):
subcategory = "post"
pattern = BASE_PATTERN + r"(/post(?:s|/show)/(\d+))"
def posts(self):
post_id = self.groups[-1]
url = f"{self.root}/posts/{post_id}.json"
post = self.request_json(url)
return (post,) if post else ()
class GelbooruBase(Extractor):
category = "gelbooru"
root = "https://gelbooru.com"
def __init__(self, match, logger_func=print):
super().__init__(match, logger_func)
self._auth_logged = False
def _api_request(self, params, key="post"):
# Auth is now added dynamically
if self.api_key and self.user_id:
if not self._auth_logged:
self.log("Gelbooru auth set.")
self._auth_logged = True
params.update({"api_key": self.api_key, "user_id": self.user_id})
url = self.root + "/index.php?page=dapi&q=index&json=1"
data = self.request_json(url, params=params)
if not key: return data
posts = data.get(key, [])
return posts if isinstance(posts, list) else [posts] if posts else []
def items(self):
base_data = self.metadata()
base_data['category'] = self.category
for item in self.posts():
# Check for our special page update message
if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
yield item
continue
# Otherwise, process it as a post
post = item
url = post.get("file_url")
if not url: continue
data = base_data.copy()
data.update(post)
nameext_from_url(url, data)
yield data
def metadata(self): return {}
def posts(self): return []
GELBOORU_PATTERN = r"(?:https?://)?(?:www\.)?gelbooru\.com"
class GelbooruTagExtractor(GelbooruBase):
subcategory = "tag"
directory_fmt = ("{category}", "{search_tags}")
filename_fmt = "{category}_{id}_{md5}.{extension}"
pattern = GELBOORU_PATTERN + r"(/index\.php\?page=post&s=list&tags=([^&#]*))"
def metadata(self):
self.tags = unquote(self.groups[-1].replace("+", " ")).strip()
sanitized_tags = re.sub(r'[\\/*?:"<>|]', "_", self.tags)
return {"search_tags": sanitized_tags}
def posts(self):
"""Scrapes HTML search pages as API can be restrictive for tags."""
pid = 0
posts_per_page = 42
search_url = self.root + "/index.php"
params = {"page": "post", "s": "list", "tags": self.tags}
while True:
params['pid'] = pid
self.log(f"Scraping search results page (offset: {pid})...")
response = self.request(search_url, params=params)
html_content = response.text
post_ids = re.findall(r'id="p(\d+)"', html_content)
if not post_ids:
self.log("No more posts found on page. Ending scrape.")
break
yield ('PAGE_UPDATE', len(post_ids))
for post_id in post_ids:
post_data = self._api_request({"s": "post", "id": post_id})
yield from post_data
pid += posts_per_page
class GelbooruPostExtractor(GelbooruBase):
subcategory = "post"
filename_fmt = "{category}_{id}_{md5}.{extension}"
pattern = GELBOORU_PATTERN + r"(/index\.php\?page=post&s=view&id=(\d+))"
def posts(self):
post_id = self.groups[-1]
return self._api_request({"s": "post", "id": post_id})
# --- Main Entry Point ---
EXTRACTORS = [
DanbooruTagExtractor,
DanbooruPostExtractor,
GelbooruTagExtractor,
GelbooruPostExtractor,
]
def find_extractor(url, logger_func):
for extractor_cls in EXTRACTORS:
match = re.search(extractor_cls.pattern, url)
if match:
return extractor_cls(match, logger_func)
return None
def fetch_booru_data(url, api_key, user_id, logger_func):
"""
Main function to find an extractor and yield image data.
"""
extractor = find_extractor(url, logger_func)
if not extractor:
logger_func(f"No suitable Booru extractor found for URL: {url}")
return
logger_func(f"Using extractor: {extractor.__class__.__name__}")
extractor.set_auth(api_key, user_id)
# The 'items' method will now yield the data dictionaries directly
yield from extractor.items()
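Because `items()` interleaves `('PAGE_UPDATE', n)` progress tuples with post dictionaries, consumers must branch on the item type. A minimal consumer sketch (anonymous access, so both auth values are `None`; the search URL is hypothetical):

    url = "https://danbooru.donmai.us/posts?tags=landscape"  # hypothetical search
    for item in fetch_booru_data(url, api_key=None, user_id=None, logger_func=print):
        if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
            print(f"  page yielded {item[1]} posts")
            continue
        # item is a post dict with file_url/filename/extension filled in
        print(item.get("file_url"))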

src/core/bunkr_client.py

@@ -0,0 +1,282 @@
import logging
import os
import re
import requests
import html
import time
import datetime
import urllib.parse
import json
import random
import binascii
import itertools
class MockMessage:
Directory = 1
Url = 2
Version = 3
class AlbumException(Exception): pass
class ExtractionError(AlbumException): pass
class HttpError(ExtractionError):
def __init__(self, message="", response=None):
self.response = response
self.status = response.status_code if response is not None else 0
super().__init__(message)
class ControlException(AlbumException): pass
class AbortExtraction(ExtractionError, ControlException): pass
try:
re_compile = re._compiler.compile
except AttributeError:
re_compile = re.sre_compile.compile
HTML_RE = re_compile(r"<[^>]+>")
def extr(txt, begin, end, default=""):
try:
first = txt.index(begin) + len(begin)
return txt[first:txt.index(end, first)]
except Exception: return default
def extract_iter(txt, begin, end, pos=None):
try:
index = txt.index
lbeg = len(begin)
lend = len(end)
while True:
first = index(begin, pos) + lbeg
last = index(end, first)
pos = last + lend
yield txt[first:last]
except Exception: return
def split_html(txt):
try: return [html.unescape(x).strip() for x in HTML_RE.split(txt) if x and not x.isspace()]
except TypeError: return []
def parse_datetime(date_string, format="%Y-%m-%dT%H:%M:%S%z", utcoffset=0):
try:
d = datetime.datetime.strptime(date_string, format)
o = d.utcoffset()
if o is not None: d = d.replace(tzinfo=None, microsecond=0) - o
else:
if d.microsecond: d = d.replace(microsecond=0)
if utcoffset: d += datetime.timedelta(0, utcoffset * -3600)
return d
except (TypeError, IndexError, KeyError, ValueError, OverflowError): return None
unquote = urllib.parse.unquote
unescape = html.unescape
def decrypt_xor(encrypted, key, base64=True, fromhex=False):
if base64: encrypted = binascii.a2b_base64(encrypted)
if fromhex: encrypted = bytes.fromhex(encrypted.decode())
div = len(key)
return bytes([encrypted[i] ^ key[i % div] for i in range(len(encrypted))]).decode()
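# A quick round-trip of the repeating-key XOR above (made-up key and plaintext;
# the real key is derived per-file from the API timestamp in _extract_file):
#   key = b"SECRET_KEY_123456"
#   plain = b"https://example.com/file.mp4"
#   cipher = binascii.b2a_base64(bytes(p ^ key[i % len(key)] for i, p in enumerate(plain)))
#   assert decrypt_xor(cipher, key) == plain.decode()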
def advance(iterable, num):
iterator = iter(iterable)
next(itertools.islice(iterator, num, num), None)
return iterator
def json_loads(s): return json.loads(s)
def json_dumps(obj): return json.dumps(obj, separators=(",", ":"))
class Extractor:
def __init__(self, match, logger):
self.log = logger
self.url = match.string
self.match = match
self.groups = match.groups()
self.session = requests.Session()
self.session.headers["User-Agent"] = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0"
@classmethod
def from_url(cls, url, logger):
if isinstance(cls.pattern, str): cls.pattern = re.compile(cls.pattern)
match = cls.pattern.match(url)
return cls(match, logger) if match else None
def __iter__(self): return self.items()
def items(self): yield MockMessage.Version, 1
def request(self, url, method="GET", fatal=True, **kwargs):
tries = 1
while True:
try:
response = self.session.request(method, url, **kwargs)
if response.status_code < 400: return response
msg = f"'{response.status_code} {response.reason}' for '{response.url}'"
except requests.exceptions.RequestException as exc:
msg = str(exc)
self.log.info("%s (retrying...)", msg)
if tries > 4: break
time.sleep(tries)
tries += 1
if not fatal: return None
raise HttpError(msg)
def request_json(self, url, **kwargs):
response = self.request(url, **kwargs)
try: return json_loads(response.text)
except Exception as exc:
self.log.warning("%s: %s", exc.__class__.__name__, exc)
if not kwargs.get("fatal", True): return {}
raise
BASE_PATTERN_BUNKR = r"(?:https?://)?(?:[a-zA-Z0-9-]+\.)?(bunkr\.(?:si|la|ws|red|black|media|site|is|to|ac|cr|ci|fi|pk|ps|sk|ph|su)|bunkrr\.ru)"
DOMAINS = ["bunkr.si", "bunkr.ws", "bunkr.la", "bunkr.red", "bunkr.black", "bunkr.media", "bunkr.site"]
CF_DOMAINS = set()
class BunkrAlbumExtractor(Extractor):
category = "bunkr"
root = "https://bunkr.si"
root_dl = "https://get.bunkrr.su"
root_api = "https://apidl.bunkr.ru"
pattern = re.compile(BASE_PATTERN_BUNKR + r"/a/([^/?#]+)")
def __init__(self, match, logger):
super().__init__(match, logger)
domain_match = re.search(BASE_PATTERN_BUNKR, match.string)
if domain_match:
self.root = "https://" + domain_match.group(1)
self.endpoint = self.root_api + "/api/_001_v2"
self.album_id = self.groups[-1]
def items(self):
page = self.request(self.url).text
title = unescape(unescape(extr(page, 'property="og:title" content="', '"')))
items_html = list(extract_iter(page, '<div class="grid-images_box', "</a>"))
album_data = {
"album_id": self.album_id,
"album_name": title,
"count": len(items_html),
}
yield MockMessage.Directory, album_data, {}
for item_html in items_html:
try:
webpage_url = unescape(extr(item_html, ' href="', '"'))
if webpage_url.startswith("/"):
webpage_url = self.root + webpage_url
file_data = self._extract_file(webpage_url)
info = split_html(item_html)
if not file_data.get("name"):
file_data["name"] = info[-3]
yield MockMessage.Url, file_data, {}
except Exception as exc:
self.log.error("%s: %s", exc.__class__.__name__, exc)
def _extract_file(self, webpage_url):
page = self.request(webpage_url).text
data_id = extr(page, 'data-file-id="', '"')
# This referer is for the API call only
api_referer = self.root_dl + "/file/" + data_id
headers = {"Referer": api_referer, "Origin": self.root_dl}
data = self.request_json(self.endpoint, method="POST", headers=headers, json={"id": data_id})
# Get the raw file URL (no domain replacement)
file_url = decrypt_xor(data["url"], f"SECRET_KEY_{data['timestamp'] // 3600}".encode()) if data.get("encrypted") else data["url"]
file_name = extr(page, "<h1", "<").rpartition(">")[2]
# --- NEW FIX ---
# The download thread uses a new `requests` call, so we must
# explicitly pass BOTH the User-Agent and the correct Referer.
# 1. Get the User-Agent from this extractor's session
user_agent = self.session.headers.get("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0")
# 2. Use the original album URL as the Referer
download_referer = self.url
return {
"url": file_url,
"name": unescape(file_name),
"_http_headers": {
"Referer": download_referer,
"User-Agent": user_agent
}
}
class BunkrMediaExtractor(BunkrAlbumExtractor):
pattern = re.compile(BASE_PATTERN_BUNKR + r"(/[fvid]/[^/?#]+)")
def items(self):
try:
media_path = self.groups[-1]
file_data = self._extract_file(self.root + media_path)
album_data = {"album_name": file_data.get("name", "bunkr_media"), "count": 1}
yield MockMessage.Directory, album_data, {}
yield MockMessage.Url, file_data, {}
except Exception as exc:
self.log.error("%s: %s", exc.__class__.__name__, exc)
yield MockMessage.Directory, {"album_name": "error", "count": 0}, {}
def get_bunkr_extractor(url, logger):
"""Selects the correct Bunkr extractor based on the URL pattern."""
if BunkrAlbumExtractor.pattern.match(url):
logger.info("Bunkr Album URL detected.")
return BunkrAlbumExtractor.from_url(url, logger)
elif BunkrMediaExtractor.pattern.match(url):
logger.info("Bunkr Media URL detected.")
return BunkrMediaExtractor.from_url(url, logger)
else:
logger.error(f"No suitable Bunkr extractor found for URL: {url}")
return None
def fetch_bunkr_data(url, logger):
"""
Main function to be called from the GUI.
It extracts all file information from a Bunkr URL, now handling both albums and direct file links.
Returns:
A tuple of (album_name, list_of_files)
- album_name (str): The name of the album.
- list_of_files (list): A list of dicts, each containing 'url', 'name', and '_http_headers'.
Returns (None, None) on failure.
"""
# --- START: New logic to handle direct CDN file URLs ---
try:
parsed_url = urllib.parse.urlparse(url)
# Check if the hostname contains 'cdn' and the path has a common file extension
is_direct_cdn_file = (parsed_url.hostname and 'cdn' in parsed_url.hostname and 'bunkr' in parsed_url.hostname and
any(parsed_url.path.lower().endswith(ext) for ext in ['.mp4', '.mkv', '.webm', '.jpg', '.jpeg', '.png', '.gif', '.zip', '.rar']))
if is_direct_cdn_file:
logger.info("Bunkr direct file URL detected.")
filename = os.path.basename(parsed_url.path)
# Use the filename (without extension) as a sensible album name
album_name = os.path.splitext(filename)[0]
files_to_download = [{
'url': url,
'name': filename,
'_http_headers': {'Referer': 'https://bunkr.ru/'} # Use a generic Referer
}]
return album_name, files_to_download
except Exception as e:
logger.warning(f"Could not parse Bunkr URL for direct file check: {e}")
# --- END: New logic ---
# This is the original logic for album and media pages
extractor = get_bunkr_extractor(url, logger)
if not extractor:
return None, None
try:
album_name = "default_bunkr_album"
files_to_download = []
for msg_type, data, metadata in extractor:
if msg_type == MockMessage.Directory:
raw_album_name = data.get('album_name', 'untitled')
album_name = re.sub(r'[<>:"/\\|?*]', '_', raw_album_name).strip() or "untitled"
logger.info(f"Processing Bunkr album: {album_name}")
elif msg_type == MockMessage.Url:
files_to_download.append(data)
if not files_to_download:
logger.warning("No files found to download from the Bunkr URL.")
return None, None
return album_name, files_to_download
except Exception as e:
logger.error(f"An error occurred while extracting Bunkr info: {e}", exc_info=True)
return None, None
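Downstream download code must forward each file's `_http_headers` (Referer and User-Agent) on the actual transfer, as the comments in `_extract_file` note, or the CDN is likely to refuse the request. A minimal hand-off sketch with a standard `logging` logger and a hypothetical album id:

    import logging
    import requests

    logger = logging.getLogger("bunkr")
    album, files = fetch_bunkr_data("https://bunkr.si/a/abc123", logger)  # hypothetical id
    for f in files or []:
        with requests.get(f["url"], headers=f["_http_headers"], stream=True, timeout=60) as r:
            r.raise_for_status()
            with open(f["name"], "wb") as out:
                for chunk in r.iter_content(chunk_size=65536):
                    out.write(chunk)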


@@ -0,0 +1,88 @@
import time
import cloudscraper
import json
def fetch_server_channels(server_id, logger=print, cookies_dict=None):
"""
Fetches all channels for a given Discord server ID from the API.
Uses cloudscraper to bypass Cloudflare.
"""
api_url = f"https://kemono.cr/api/v1/discord/server/{server_id}"
logger(f" Fetching channels for server: {api_url}")
scraper = cloudscraper.create_scraper()
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Referer': f'https://kemono.cr/discord/server/{server_id}',
'Accept': 'text/css'
}
try:
response = scraper.get(api_url, headers=headers, cookies=cookies_dict, timeout=30)
response.raise_for_status()
channels = response.json()
if isinstance(channels, list):
logger(f" ✅ Found {len(channels)} channels for server {server_id}.")
return channels
return None
except Exception as e:
logger(f" ❌ Error fetching server channels for {server_id}: {e}")
return None
def fetch_channel_messages(channel_id, logger=print, cancellation_event=None, pause_event=None, cookies_dict=None):
"""
A generator that fetches all messages for a specific Discord channel, handling pagination.
Uses cloudscraper and proper headers to bypass server protection.
"""
scraper = cloudscraper.create_scraper()
base_url = f"https://kemono.cr/api/v1/discord/channel/{channel_id}"
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Referer': f'https://kemono.cr/discord/channel/{channel_id}',
'Accept': 'text/css'
}
offset = 0
page_size = 150
while True:
if cancellation_event and cancellation_event.is_set():
logger(" Discord message fetching cancelled.")
break
if pause_event and pause_event.is_set():
logger(" Discord message fetching paused...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set():
break
time.sleep(0.5)
if cancellation_event and cancellation_event.is_set():
logger(" Discord message fetching cancelled while paused.")
break
logger(" Discord message fetching resumed.")
paginated_url = f"{base_url}?o={offset}"
logger(f" Fetching messages from API: page starting at offset {offset}")
try:
response = scraper.get(paginated_url, headers=headers, cookies=cookies_dict, timeout=30)
response.raise_for_status()
messages_batch = response.json()
if not messages_batch:
logger(f" ✅ Reached end of messages for channel {channel_id}.")
break
logger(f" Fetched {len(messages_batch)} messages...")
yield messages_batch
if len(messages_batch) < page_size:
logger(f" ✅ Last page of messages received for channel {channel_id}.")
break
offset += page_size
time.sleep(0.5) # Be respectful to the API
except (cloudscraper.exceptions.CloudflareException, json.JSONDecodeError) as e:
logger(f" ❌ Error fetching messages at offset {offset}: {e}")
break
except Exception as e:
logger(f" ❌ An unexpected error occurred while fetching messages: {e}")
break
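`fetch_channel_messages` is a generator that yields one batch per API page, so callers can begin processing while later pages are still downloading. A minimal sketch (hypothetical channel id; the per-message field names are an assumption based on the usual Kemono message shape):

    total = 0
    for batch in fetch_channel_messages("123456789"):  # hypothetical channel id
        for message in batch:
            pass  # e.g. inspect message.get('content') and message.get('attachments')
        total += len(batch)
    print(f"Fetched {total} messages.")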

src/core/erome_client.py

@@ -0,0 +1,131 @@
import os
import re
import html
import time
import urllib.parse
import requests
from datetime import datetime
import cloudscraper
def extr(txt, begin, end, default=""):
"""Stripped-down version of 'extract()' to find text between two delimiters."""
try:
first = txt.index(begin) + len(begin)
return txt[first:txt.index(end, first)]
except (ValueError, IndexError):
return default
def extract_iter(txt, begin, end):
"""Yields all occurrences of text between two delimiters."""
try:
index = txt.index
lbeg = len(begin)
lend = len(end)
pos = 0
while True:
first = index(begin, pos) + lbeg
last = index(end, first)
pos = last + lend
yield txt[first:last]
except (ValueError, IndexError):
return
def nameext_from_url(url):
"""Extracts filename and extension from a URL."""
data = {}
filename = urllib.parse.unquote(url.partition("?")[0].rpartition("/")[2])
name, _, ext = filename.rpartition(".")
if name and len(ext) <= 16:
data["filename"], data["extension"] = name, ext.lower()
else:
data["filename"], data["extension"] = filename, ""
return data
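# Example: nameext_from_url("https://cdn.example.com/v/abc.mp4?token=x")
# returns {"filename": "abc", "extension": "mp4"}; names without a short
# (<= 16 character) extension fall back to the full name and an empty extension.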
def parse_timestamp(ts, default=None):
"""Creates a datetime object from a Unix timestamp."""
try:
return datetime.fromtimestamp(int(ts))
except (ValueError, TypeError):
return default
def fetch_erome_data(url, logger):
"""
Identifies and extracts all media files from an Erome album URL.
Args:
url (str): The Erome album URL (e.g., https://www.erome.com/a/albumID).
logger (function): A function to log progress messages.
Returns:
tuple: A tuple containing (album_folder_name, list_of_file_dicts).
Returns (None, []) if data extraction fails.
"""
album_id_match = re.search(r"/a/(\w+)", url)
if not album_id_match:
logger(f"Error: The URL '{url}' does not appear to be a valid Erome album link.")
return None, []
album_id = album_id_match.group(1)
page_url = f"https://www.erome.com/a/{album_id}"
session = cloudscraper.create_scraper()
try:
logger(f" Fetching Erome album page: {page_url}")
for attempt in range(5):
response = session.get(page_url, timeout=30)
response.raise_for_status()
page_content = response.text
if "<title>Please wait a few moments</title>" in page_content:
logger(f" Cloudflare check detected. Waiting 5 seconds... (Attempt {attempt + 1}/5)")
time.sleep(5)
continue
break
else:
logger(" Error: Could not bypass Cloudflare check after several attempts.")
return None, []
title = html.unescape(extr(page_content, 'property="og:title" content="', '"'))
user = urllib.parse.unquote(extr(page_content, 'href="https://www.erome.com/', '"', default="unknown_user"))
sanitized_title = re.sub(r'[<>:"/\\|?*]', '_', title).strip()
sanitized_user = re.sub(r'[<>:"/\\|?*]', '_', user).strip()
album_folder_name = f"Erome - {sanitized_user} - {sanitized_title} [{album_id}]"
urls = []
media_groups = page_content.split('<div class="media-group"')
for group in media_groups[1:]:
media_url = extr(group, '<source src="', '"') or extr(group, 'data-src="', '"')  # video <source> or image data-src
if media_url:
urls.append(media_url)
if not urls:
logger(" Warning: No media URLs found on the album page.")
return album_folder_name, []
logger(f" Found {len(urls)} media files in album '{title}'.")
file_list = []
for i, file_url in enumerate(urls, 1):
filename_info = nameext_from_url(file_url)
filename = f"{album_id}_{sanitized_title}_{i:03d}.{filename_info.get('extension', 'mp4')}"
file_data = {
"url": file_url,
"filename": filename,
"headers": {"Referer": page_url},
}
file_list.append(file_data)
return album_folder_name, file_list
except requests.exceptions.RequestException as e:
logger(f" Error fetching Erome page: {e}")
return None, []
except Exception as e:
logger(f" An unexpected error occurred during Erome extraction: {e}")
return None, []


@@ -0,0 +1,138 @@
import re
import os
import cloudscraper
from urllib.parse import urlparse, urljoin
from ..utils.file_utils import clean_folder_name
def fetch_fap_nation_data(album_url, logger_func):
"""
Scrapes a fap-nation page, searching for HLS streams first and falling
back to direct download links. Selects the highest quality available.
"""
logger_func(f" [Fap-Nation] Fetching album data from: {album_url}")
scraper = cloudscraper.create_scraper()
try:
response = scraper.get(album_url, timeout=45)
response.raise_for_status()
html_content = response.text
title_match = re.search(r'<h1[^>]*itemprop="name"[^>]*>(.*?)</h1>', html_content, re.IGNORECASE)
album_slug = clean_folder_name(os.path.basename(urlparse(album_url).path.strip('/')))
album_title = clean_folder_name(title_match.group(1).strip()) if title_match else album_slug
files_to_download = []
final_url = None
link_type = None
filename_from_video_tag = None
video_tag_title_match = re.search(r'data-plyr-config=.*?&quot;title&quot;:.*?&quot;([^&]+?\.mp4)&quot;', html_content, re.IGNORECASE)
if video_tag_title_match:
filename_from_video_tag = clean_folder_name(video_tag_title_match.group(1))
logger_func(f" [Fap-Nation] Found high-quality filename in video tag: {filename_from_video_tag}")
# --- REVISED LOGIC: HLS FIRST ---
# 1. Prioritize finding an HLS stream.
logger_func(" [Fap-Nation] Priority 1: Searching for HLS stream...")
iframe_match = re.search(r'<iframe[^>]+src="([^"]+mediadelivery\.net[^"]+)"', html_content, re.IGNORECASE)
if iframe_match:
iframe_url = iframe_match.group(1)
logger_func(f" [Fap-Nation] Found video iframe. Visiting: {iframe_url}")
try:
iframe_response = scraper.get(iframe_url, timeout=30)
iframe_response.raise_for_status()
iframe_html = iframe_response.text
playlist_match = re.search(r'<source[^>]+src="([^"]+\.m3u8)"', iframe_html, re.IGNORECASE)
if playlist_match:
final_url = playlist_match.group(1)
link_type = 'hls'
logger_func(f" [Fap-Nation] Found embedded HLS stream in iframe: {final_url}")
except Exception as e:
logger_func(f" [Fap-Nation] ⚠️ Error fetching or parsing iframe content: {e}")
if not final_url:
logger_func(" [Fap-Nation] No stream found in iframe. Checking main page content as a last resort...")
js_var_match = re.search(r'"(https?://[^"]+\.m3u8)"', html_content, re.IGNORECASE)
if js_var_match:
final_url = js_var_match.group(1)
link_type = 'hls'
logger_func(f" [Fap-Nation] Found HLS stream on main page: {final_url}")
# 2. Fallback: If no HLS stream was found, search for direct links.
if not final_url:
logger_func(" [Fap-Nation] No HLS stream found. Priority 2 (Fallback): Searching for direct download links...")
direct_link_pattern = r'<a\s+[^>]*href="([^"]+\.(?:mp4|webm|mkv|mov))"[^>]*>'
direct_links_found = re.findall(direct_link_pattern, html_content, re.IGNORECASE)
if direct_links_found:
logger_func(f" [Fap-Nation] Found {len(direct_links_found)} direct media link(s). Selecting the best quality...")
best_link = None
# Define qualities from highest to lowest
qualities_to_check = ['1080p', '720p', '480p', '360p']
# Find the best quality link by iterating through preferred qualities
for quality in qualities_to_check:
for link in direct_links_found:
if quality in link.lower():
best_link = link
logger_func(f" [Fap-Nation] Found '{quality}' link: {best_link}")
break # Found the best link for this quality level
if best_link:
break # Found the highest quality available
# Fallback if no quality string was found in any link
if not best_link:
best_link = direct_links_found[0]
logger_func(f" [Fap-Nation] ⚠️ No quality tags (1080p, 720p, etc.) found in links. Defaulting to first link: {best_link}")
final_url = best_link
link_type = 'direct'
logger_func(f" [Fap-Nation] Identified direct media link: {final_url}")
# If after all checks, we still have no URL, then fail.
if not final_url:
logger_func(" [Fap-Nation] ❌ Stage 1 Failed: Could not find any HLS stream or direct link.")
return None, []
# --- HLS Quality Selection Logic ---
if link_type == 'hls' and final_url:
logger_func(" [Fap-Nation] HLS stream found. Checking for higher quality variants...")
try:
master_playlist_response = scraper.get(final_url, timeout=20)
master_playlist_response.raise_for_status()
playlist_content = master_playlist_response.text
streams = re.findall(r'#EXT-X-STREAM-INF:.*?RESOLUTION=(\d+)x(\d+).*?\n(.*?)\s', playlist_content)
if streams:
best_stream = max(streams, key=lambda s: int(s[0]) * int(s[1]))
height = best_stream[1]
relative_path = best_stream[2]
new_final_url = urljoin(final_url, relative_path)
logger_func(f" [Fap-Nation] ✅ Best quality found: {height}p. Updating URL to: {new_final_url}")
final_url = new_final_url
else:
logger_func(" [Fap-Nation] No alternate quality streams found in playlist. Using original.")
except Exception as e:
logger_func(f" [Fap-Nation] ⚠️ Could not parse HLS master playlist for quality selection: {e}. Using original URL.")
if final_url and link_type:
if filename_from_video_tag:
base_name, _ = os.path.splitext(filename_from_video_tag)
new_filename = f"{base_name}.mp4"
else:
new_filename = f"{album_slug}.mp4"
files_to_download.append({'url': final_url, 'filename': new_filename, 'type': link_type})
logger_func(f" [Fap-Nation] ✅ Ready to download '{new_filename}' ({link_type} method).")
return album_title, files_to_download
logger_func(f" [Fap-Nation] ❌ Could not determine a valid download link.")
return None, []
except Exception as e:
logger_func(f" [Fap-Nation] ❌ Error fetching Fap-Nation data: {e}")
return None, []
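The quality-selection step parses `#EXT-X-STREAM-INF` entries out of the HLS master playlist. Against a made-up master playlist like the one below, the `max()` over width x height picks the 1080p variant, and `urljoin` then resolves its relative path against the master playlist URL:

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
    360p/video.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
    1080p/video.m3u8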


@@ -1,19 +1,14 @@
# --- Standard Library Imports ---
import threading
import time
import os
import json
import traceback
from concurrent.futures import ThreadPoolExecutor, as_completed, Future
# --- Local Application Imports ---
# These imports reflect the new, organized project structure.
from .api_client import download_from_api
-from .workers import PostProcessorWorker, DownloadThread
+from .workers import PostProcessorWorker
from ..config.constants import (
STYLE_DATE_BASED, STYLE_POST_TITLE_GLOBAL_NUMBERING,
-MAX_THREADS, POST_WORKER_BATCH_THRESHOLD, POST_WORKER_NUM_BATCHES,
-POST_WORKER_BATCH_DELAY_SECONDS
+MAX_THREADS
)
from ..utils.file_utils import clean_folder_name
@@ -36,8 +31,6 @@ class DownloadManager:
self.progress_queue = progress_queue
self.thread_pool = None
self.active_futures = []
-# --- Session State ---
self.cancellation_event = threading.Event()
self.pause_event = threading.Event()
self.is_running = False
@@ -47,6 +40,10 @@ class DownloadManager:
self.total_downloads = 0
self.total_skips = 0
self.all_kept_original_filenames = []
+self.creator_profiles_dir = None
+self.current_creator_name_for_profile = None
+self.current_creator_profile_path = None
+self.session_file_path = None
def _log(self, message):
"""Puts a progress message into the queue for the UI."""
@@ -65,7 +62,16 @@ class DownloadManager:
self._log("❌ Cannot start a new session: A session is already in progress.")
return
# --- Reset state for the new session ---
+self.session_file_path = config.get('session_file_path')
+creator_profile_data = self._setup_creator_profile(config)
+# Save settings to profile at the start of the session
+if self.current_creator_profile_path:
+creator_profile_data['settings'] = config
+creator_profile_data.setdefault('processed_post_ids', [])
+self._save_creator_profile(creator_profile_data)
+self._log(f"✅ Loaded/created profile for '{self.current_creator_name_for_profile}'. Settings saved.")
self.is_running = True
self.cancellation_event.clear()
self.pause_event.clear()
@@ -75,121 +81,122 @@ class DownloadManager:
self.total_downloads = 0
self.total_skips = 0
self.all_kept_original_filenames = []
# --- Decide execution strategy (multi-threaded vs. single-threaded) ---
is_single_post = bool(config.get('target_post_id_from_initial_url'))
use_multithreading = config.get('use_multithreading', True)
is_manga_sequential = config.get('manga_mode_active') and config.get('manga_filename_style') in [STYLE_DATE_BASED, STYLE_POST_TITLE_GLOBAL_NUMBERING]
should_use_multithreading_for_posts = use_multithreading and not is_single_post and not is_manga_sequential
if should_use_multithreading_for_posts:
# Start a separate thread to manage fetching and queuing to the thread pool
fetcher_thread = threading.Thread(
target=self._fetch_and_queue_posts_for_pool,
-args=(config, restore_data),
+args=(config, restore_data, creator_profile_data),
daemon=True
)
fetcher_thread.start()
else:
-# For single posts or sequential manga mode, use a single worker thread
-# which is simpler and ensures order.
-self._start_single_threaded_session(config)
+# Single-threaded mode does not use the manager's complex logic
+self._log(" Manager is handing off to a single-threaded worker...")
+# The single-threaded worker will manage its own lifecycle and signals.
+# The manager's role for this session is effectively over.
+self.is_running = False # Allow another session to start if needed
+self.progress_queue.put({'type': 'handoff_to_single_thread', 'payload': (config,)})
-def _start_single_threaded_session(self, config):
-"""Handles downloads that are best processed by a single worker thread."""
-self._log(" Initializing single-threaded download process...")
-# The original DownloadThread is now a pure Python thread, not a QThread.
-# We run its `run` method in a standard Python thread.
-self.worker_thread = threading.Thread(
-target=self._run_single_worker,
-args=(config,),
-daemon=True
-)
-self.worker_thread.start()
-def _run_single_worker(self, config):
-"""Target function for the single-worker thread."""
-try:
-# Pass the queue directly to the worker for it to send updates
-worker = DownloadThread(config, self.progress_queue)
-worker.run() # This is the main blocking call for this thread
-except Exception as e:
-self._log(f"❌ CRITICAL ERROR in single-worker thread: {e}")
-self._log(traceback.format_exc())
-finally:
-self.is_running = False
-def _fetch_and_queue_posts_for_pool(self, config, restore_data):
+def _fetch_and_queue_posts_for_pool(self, config, restore_data, creator_profile_data):
"""
-Fetches all posts from the API and submits them as tasks to a thread pool.
-This method runs in its own dedicated thread to avoid blocking.
+Fetches posts from the API in batches and submits them as tasks to a thread pool.
+This method runs in its own dedicated thread to avoid blocking the UI.
+It provides immediate feedback as soon as the first batch of posts is found.
"""
try:
num_workers = min(config.get('num_threads', 4), MAX_THREADS)
self.thread_pool = ThreadPoolExecutor(max_workers=num_workers, thread_name_prefix='PostWorker_')
-# Fetch posts
-# In a real implementation, this would call `api_client.download_from_api`
-if restore_data:
+session_processed_ids = set(restore_data.get('processed_post_ids', [])) if restore_data else set()
+profile_processed_ids = set(creator_profile_data.get('processed_post_ids', []))
+processed_ids = session_processed_ids.union(profile_processed_ids)
+if restore_data and 'all_posts_data' in restore_data:
# This logic for session restore remains as it relies on a pre-fetched list
all_posts = restore_data['all_posts_data']
-processed_ids = set(restore_data['processed_post_ids'])
posts_to_process = [p for p in all_posts if p.get('id') not in processed_ids]
self.total_posts = len(all_posts)
self.processed_posts = len(processed_ids)
self._log(f"🔄 Restoring session. {len(posts_to_process)} posts remaining.")
self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
if not posts_to_process:
self._log("✅ No new posts to process from restored session.")
return
for post_data in posts_to_process:
if self.cancellation_event.is_set(): break
worker = PostProcessorWorker(post_data, config, self.progress_queue)
future = self.thread_pool.submit(worker.process)
future.add_done_callback(self._handle_future_result)
self.active_futures.append(future)
else:
-posts_to_process = self._get_all_posts(config)
-self.total_posts = len(posts_to_process)
+# --- START: REFACTORED STREAMING LOGIC ---
+post_generator = download_from_api(
+api_url_input=config['api_url'],
+logger=self._log,
+start_page=config.get('start_page'),
+end_page=config.get('end_page'),
+manga_mode=config.get('manga_mode_active', False),
+cancellation_event=self.cancellation_event,
+pause_event=self.pause_event,
+use_cookie=config.get('use_cookie', False),
+cookie_text=config.get('cookie_text', ''),
+selected_cookie_file=config.get('selected_cookie_file'),
+app_base_dir=config.get('app_base_dir'),
+manga_filename_style_for_sort_check=config.get('manga_filename_style'),
+processed_post_ids=list(processed_ids)
+)
+self.total_posts = 0
+self.processed_posts = 0
self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
-if not posts_to_process:
-self._log("✅ No new posts to process.")
-return
+# Process posts in batches as they are yielded by the API client
+for batch in post_generator:
+if self.cancellation_event.is_set():
+self._log(" Post fetching cancelled.")
+break
+# Filter out any posts that might have been processed since the start
+posts_in_batch_to_process = [p for p in batch if p.get('id') not in processed_ids]
+if not posts_in_batch_to_process:
+continue
+# Update total count and immediately inform the UI
+self.total_posts += len(posts_in_batch_to_process)
+self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
+for post_data in posts_in_batch_to_process:
+if self.cancellation_event.is_set(): break
+worker = PostProcessorWorker(post_data, config, self.progress_queue)
+future = self.thread_pool.submit(worker.process)
+future.add_done_callback(self._handle_future_result)
+self.active_futures.append(future)
+if self.total_posts == 0 and not self.cancellation_event.is_set():
+self._log("✅ No new posts found to process.")
-# Submit tasks to the pool
-for post_data in posts_to_process:
-if self.cancellation_event.is_set():
-break
-# Each PostProcessorWorker gets the queue to send its own updates
-worker = PostProcessorWorker(post_data, config, self.progress_queue)
-future = self.thread_pool.submit(worker.process)
-future.add_done_callback(self._handle_future_result)
-self.active_futures.append(future)
except Exception as e:
self._log(f"❌ CRITICAL ERROR in post fetcher thread: {e}")
self._log(traceback.format_exc())
finally:
# Wait for all submitted tasks to complete before shutting down
if self.thread_pool:
self.thread_pool.shutdown(wait=True)
self.is_running = False
self._log("🏁 All processing tasks have completed.")
# Emit final signal
self._log("🏁 All processing tasks have completed or been cancelled.")
self.progress_queue.put({
'type': 'finished',
'payload': (self.total_downloads, self.total_skips, self.cancellation_event.is_set(), self.all_kept_original_filenames)
})
-def _get_all_posts(self, config):
-"""Helper to fetch all posts using the API client."""
-all_posts = []
-# This generator yields batches of posts
-post_generator = download_from_api(
-api_url_input=config['api_url'],
-logger=self._log,
-# ... pass other relevant config keys ...
-cancellation_event=self.cancellation_event,
-pause_event=self.pause_event
-)
-for batch in post_generator:
-all_posts.extend(batch)
-return all_posts
def _handle_future_result(self, future: Future):
"""Callback executed when a worker task completes."""
if self.cancellation_event.is_set():
@@ -203,39 +210,76 @@ class DownloadManager:
self.total_skips += 1
else:
result = future.result()
# Unpack result tuple from the worker
(dl_count, skip_count, kept_originals,
retryable, permanent, history) = result
self.total_downloads += dl_count
self.total_skips += skip_count
self.all_kept_original_filenames.extend(kept_originals)
# Queue up results for UI to handle
if retryable:
self.progress_queue.put({'type': 'retryable_failure', 'payload': (retryable,)})
if permanent:
self.progress_queue.put({'type': 'permanent_failure', 'payload': (permanent,)})
if history:
self.progress_queue.put({'type': 'post_processed_history', 'payload': (history,)})
+post_id = history.get('post_id')
+if post_id and self.current_creator_profile_path:
+profile_data = self._setup_creator_profile({'creator_name_for_profile': self.current_creator_name_for_profile, 'session_file_path': self.session_file_path})
+if post_id not in profile_data.get('processed_post_ids', []):
+profile_data.setdefault('processed_post_ids', []).append(post_id)
+self._save_creator_profile(profile_data)
except Exception as e:
self._log(f"❌ Worker task resulted in an exception: {e}")
self.total_skips += 1 # Count errored posts as skipped
# Update overall progress
self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
+def _setup_creator_profile(self, config):
+"""Prepares the path and loads data for the current creator's profile."""
+self.current_creator_name_for_profile = config.get('creator_name_for_profile')
+if not self.current_creator_name_for_profile:
+self._log("⚠️ Cannot create creator profile: Name not provided in config.")
+return {}
+appdata_dir = os.path.dirname(config.get('session_file_path', '.'))
+self.creator_profiles_dir = os.path.join(appdata_dir, "creator_profiles")
+os.makedirs(self.creator_profiles_dir, exist_ok=True)
+safe_filename = clean_folder_name(self.current_creator_name_for_profile) + ".json"
+self.current_creator_profile_path = os.path.join(self.creator_profiles_dir, safe_filename)
+if os.path.exists(self.current_creator_profile_path):
+try:
+with open(self.current_creator_profile_path, 'r', encoding='utf-8') as f:
+return json.load(f)
+except (json.JSONDecodeError, OSError) as e:
+self._log(f"❌ Error loading creator profile '{safe_filename}': {e}. Starting fresh.")
+return {}
+def _save_creator_profile(self, data):
+"""Saves the provided data to the current creator's profile file."""
+if not self.current_creator_profile_path:
+return
+try:
+temp_path = self.current_creator_profile_path + ".tmp"
+with open(temp_path, 'w', encoding='utf-8') as f:
+json.dump(data, f, indent=2)
+os.replace(temp_path, self.current_creator_profile_path)
+except OSError as e:
+self._log(f"❌ Error saving creator profile to '{self.current_creator_profile_path}': {e}")
def cancel_session(self):
"""Cancels the current running session."""
if not self.is_running:
return
if self.cancellation_event.is_set():
self._log(" Cancellation already in progress.")
return
self._log("⚠️ Cancellation requested by user...")
self.cancellation_event.set()
-# For single thread mode, the worker checks the event
-# For multi-thread mode, shut down the pool
if self.thread_pool:
-# Don't wait, just cancel pending futures and let the fetcher thread exit
-self.thread_pool.shutdown(wait=False, cancel_futures=True)
-self.is_running = False
+self._log(" Signaling all worker threads to stop and shutting down pool...")
+self.thread_pool.shutdown(wait=False)

src/core/mangadex_client.py

@@ -0,0 +1,189 @@
# src/core/mangadex_client.py
import os
import re
import time
import cloudscraper
from collections import defaultdict
from ..utils.file_utils import clean_folder_name
def fetch_mangadex_data(start_url, output_dir, logger_func, file_progress_callback, overall_progress_callback, pause_event, cancellation_event):
"""
Fetches and downloads all content from a MangaDex series or chapter URL.
Returns a tuple of (downloaded_count, skipped_count).
"""
grand_total_dl = 0
grand_total_skip = 0
api = _MangadexAPI(logger_func)
def _check_pause():
if cancellation_event and cancellation_event.is_set(): return True
if pause_event and pause_event.is_set():
logger_func(" Download paused...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set(): return True
time.sleep(0.5)
logger_func(" Download resumed.")
return cancellation_event.is_set()
series_match = re.search(r"mangadex\.org/(?:title|manga)/([0-9a-f-]+)", start_url)
chapter_match = re.search(r"mangadex\.org/chapter/([0-9a-f-]+)", start_url)
chapters_to_process = []
if series_match:
series_id = series_match.group(1)
logger_func(f" Series detected. Fetching chapter list for ID: {series_id}")
chapters_to_process = api.get_manga_chapters(series_id, cancellation_event, pause_event)
elif chapter_match:
chapter_id = chapter_match.group(1)
logger_func(f" Single chapter detected. Fetching info for ID: {chapter_id}")
chapter_info = api.get_chapter_info(chapter_id)
if chapter_info:
chapters_to_process = [chapter_info]
if not chapters_to_process:
logger_func("❌ No chapters found or failed to fetch chapter info.")
return 0, 0
logger_func(f"✅ Found {len(chapters_to_process)} chapter(s) to download.")
if overall_progress_callback:
overall_progress_callback.emit(len(chapters_to_process), 0)
for chap_idx, chapter_json in enumerate(chapters_to_process):
if _check_pause(): break
try:
metadata = api.transform_chapter_data(chapter_json)
logger_func("-" * 40)
logger_func(f"Processing Chapter {chap_idx + 1}/{len(chapters_to_process)}: Vol. {metadata['volume']} Ch. {metadata['chapter']}{metadata['chapter_minor']} - {metadata['title']}")
server_info = api.get_at_home_server(chapter_json["id"])
if not server_info:
logger_func(" ❌ Could not get image server for this chapter. Skipping.")
continue
base_url = f"{server_info['baseUrl']}/data/{server_info['chapter']['hash']}/"
image_files = server_info['chapter']['data']
series_folder = clean_folder_name(metadata['manga'])
chapter_folder_title = metadata['title'] or ''
chapter_folder = clean_folder_name(f"Vol {metadata['volume']:02d} Chap {metadata['chapter']:03d}{metadata['chapter_minor']} - {chapter_folder_title}".strip().strip('-').strip())
final_save_path = os.path.join(output_dir, series_folder, chapter_folder)
os.makedirs(final_save_path, exist_ok=True)
for img_idx, filename in enumerate(image_files):
if _check_pause(): break
full_img_url = base_url + filename
img_path = os.path.join(final_save_path, f"{img_idx + 1:03d}{os.path.splitext(filename)[1]}")
if os.path.exists(img_path):
logger_func(f" -> Skip ({img_idx+1}/{len(image_files)}): '{os.path.basename(img_path)}' already exists.")
grand_total_skip += 1
continue
logger_func(f" Downloading ({img_idx+1}/{len(image_files)}): '{os.path.basename(img_path)}'...")
try:
response = api.session.get(full_img_url, stream=True, timeout=60, headers={'Referer': 'https://mangadex.org/'})
response.raise_for_status()
total_size = int(response.headers.get('content-length', 0))
if file_progress_callback:
file_progress_callback.emit(os.path.basename(img_path), (0, total_size))
with open(img_path, 'wb') as f:
downloaded_bytes = 0
for chunk in response.iter_content(chunk_size=8192):
if _check_pause(): break
f.write(chunk)
downloaded_bytes += len(chunk)
if file_progress_callback:
file_progress_callback.emit(os.path.basename(img_path), (downloaded_bytes, total_size))
if _check_pause():
if os.path.exists(img_path): os.remove(img_path)
break
grand_total_dl += 1
except Exception as e:
logger_func(f" ❌ Failed to download page {img_idx+1}: {e}")
grand_total_skip += 1
if overall_progress_callback:
overall_progress_callback.emit(len(chapters_to_process), chap_idx + 1)
time.sleep(1)
except Exception as e:
logger_func(f" ❌ An unexpected error occurred while processing chapter {chapter_json.get('id')}: {e}")
return grand_total_dl, grand_total_skip
class _MangadexAPI:
def __init__(self, logger_func):
self.logger_func = logger_func
self.session = cloudscraper.create_scraper()
self.root = "https://api.mangadex.org"
def _call(self, endpoint, params=None, cancellation_event=None):
if cancellation_event and cancellation_event.is_set(): return None
try:
response = self.session.get(f"{self.root}{endpoint}", params=params, timeout=30)
if response.status_code == 429:
retry_after = int(response.headers.get("X-RateLimit-Retry-After", 5))
self.logger_func(f" ⚠️ Rate limited. Waiting for {retry_after} seconds...")
time.sleep(retry_after)
return self._call(endpoint, params, cancellation_event)
response.raise_for_status()
return response.json()
except Exception as e:
self.logger_func(f" ❌ API call to '{endpoint}' failed: {e}")
return None
def get_manga_chapters(self, series_id, cancellation_event, pause_event):
all_chapters = []
offset = 0
limit = 500
base_params = {
"limit": limit, "order[volume]": "asc", "order[chapter]": "asc",
"translatedLanguage[]": ["en"], "includes[]": ["scanlation_group", "user", "manga"]
}
while True:
if cancellation_event.is_set(): break
while pause_event.is_set(): time.sleep(0.5)
params = {**base_params, "offset": offset}
data = self._call(f"/manga/{series_id}/feed", params, cancellation_event)
if not data or data.get("result") != "ok": break
results = data.get("data", [])
all_chapters.extend(results)
if (offset + limit) >= data.get("total", 0): break
offset += limit
return all_chapters
def get_chapter_info(self, chapter_id):
params = {"includes[]": ["scanlation_group", "user", "manga"]}
data = self._call(f"/chapter/{chapter_id}", params)
return data.get("data") if data and data.get("result") == "ok" else None
def get_at_home_server(self, chapter_id):
return self._call(f"/at-home/server/{chapter_id}")
def transform_chapter_data(self, chapter):
relationships = {item["type"]: item for item in chapter.get("relationships", [])}
manga = relationships.get("manga", {})
c_attrs = chapter.get("attributes", {})
m_attrs = manga.get("attributes", {})
chapter_num_str = c_attrs.get("chapter", "0") or "0"
chnum, sep, minor = chapter_num_str.partition(".")
return {
"manga": (m_attrs.get("title", {}).get("en") or next(iter(m_attrs.get("title", {}).values()), "Unknown Series")),
"title": c_attrs.get("title", ""),
"volume": int(float(c_attrs.get("volume", 0) or 0)),
"chapter": int(float(chnum or 0)),
"chapter_minor": sep + minor if minor else ""
}
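# Hedged usage sketch for fetch_mangadex_data, assuming the package layout from
# the diff header (src/core/mangadex_client.py) and no GUI: plain
# threading.Event objects stand in for the app's pause/cancel signals, print()
# is the logger, and both progress callbacks are optional (None disables them).
# The URL placeholder must be replaced with a real series UUID.
import threading
from src.core.mangadex_client import fetch_mangadex_data
pause, cancel = threading.Event(), threading.Event()
downloaded, skipped = fetch_mangadex_data(
    "https://mangadex.org/title/<series-uuid>",  # placeholder
    output_dir="downloads",
    logger_func=print,
    file_progress_callback=None,
    overall_progress_callback=None,
    pause_event=pause,
    cancellation_event=cancel,
)
print(f"Done: {downloaded} downloaded, {skipped} skipped")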


@@ -0,0 +1,44 @@
import cloudscraper
def fetch_nhentai_gallery(gallery_id, logger=print):
"""
Fetches the metadata for a single nhentai gallery using cloudscraper to bypass Cloudflare.
Args:
gallery_id (str or int): The ID of the nhentai gallery.
logger (function): A function to log progress and error messages.
Returns:
dict: A dictionary containing the gallery's metadata if successful, otherwise None.
"""
api_url = f"https://nhentai.net/api/gallery/{gallery_id}"
scraper = cloudscraper.create_scraper()
logger(f" Fetching nhentai gallery metadata from: {api_url}")
try:
# Use the scraper to make the GET request
response = scraper.get(api_url, timeout=20)
if response.status_code == 404:
logger(f" ❌ Gallery not found (404): ID {gallery_id}")
return None
response.raise_for_status()
gallery_data = response.json()
if "id" in gallery_data and "media_id" in gallery_data and "images" in gallery_data:
logger(f" ✅ Successfully fetched metadata for '{gallery_data['title']['english']}'")
gallery_data['pages'] = gallery_data.pop('images')['pages']
return gallery_data
else:
logger(" ❌ API response is missing essential keys (id, media_id, or images).")
return None
except Exception as e:
logger(f" ❌ An error occurred while fetching gallery {gallery_id}: {e}")
return None
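# Hedged usage sketch: fetch_nhentai_gallery returns the raw API dict with the
# 'images' key renamed to 'pages'; the gallery ID below is a placeholder.
# Download logic is intentionally out of scope for this module.
meta = fetch_nhentai_gallery(123456, logger=print)
if meta:
    print(meta['title']['english'])
    print(f"media_id={meta['media_id']}, {len(meta['pages'])} page(s)")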


@@ -0,0 +1,93 @@
import os
import re
import cloudscraper
from ..utils.file_utils import clean_folder_name
# --- ADDED IMPORTS ---
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
def fetch_pixeldrain_data(url: str, logger):
"""
Scrapes a given Pixeldrain URL to extract album or file information.
Handles single files (/u/), albums/lists (/l/), and folders (/d/).
"""
logger(f"Fetching data for Pixeldrain URL: {url}")
scraper = cloudscraper.create_scraper()
root = "https://pixeldrain.com"
# --- START OF FIX: Add a robust retry strategy ---
try:
retry_strategy = Retry(
total=5, # Total number of retries
backoff_factor=1, # Wait 1s, 2s, 4s, 8s between retries
status_forcelist=[429, 500, 502, 503, 504], # Retry on these server errors
allowed_methods=["HEAD", "GET"]
)
adapter = HTTPAdapter(max_retries=retry_strategy)
scraper.mount("https://", adapter)
scraper.mount("http://", adapter)
logger(" [Pixeldrain] Configured retry strategy for network requests.")
except Exception as e:
logger(f" [Pixeldrain] ⚠️ Could not configure retry strategy: {e}")
# --- END OF FIX ---
file_match = re.search(r"/u/(\w+)", url)
album_match = re.search(r"/l/(\w+)", url)
folder_match = re.search(r"/d/([^?]+)", url)
try:
if file_match:
file_id = file_match.group(1)
logger(f" Detected Pixeldrain File ID: {file_id}")
api_url = f"{root}/api/file/{file_id}/info"
data = scraper.get(api_url).json()
title = data.get("name", file_id)
files = [{
'url': f"{root}/api/file/{file_id}?download",
'filename': data.get("name", f"{file_id}.tmp")
}]
return title, files
elif album_match:
album_id = album_match.group(1)
logger(f" Detected Pixeldrain Album ID: {album_id}")
api_url = f"{root}/api/list/{album_id}"
data = scraper.get(api_url).json()
title = data.get("title", album_id)
files = []
for file_info in data.get("files", []):
files.append({
'url': f"{root}/api/file/{file_info['id']}?download",
'filename': file_info.get("name", f"{file_info['id']}.tmp")
})
return title, files
elif folder_match:
path_id = folder_match.group(1)
logger(f" Detected Pixeldrain Folder Path: {path_id}")
api_url = f"{root}/api/filesystem/{path_id}?stat"
data = scraper.get(api_url).json()
path_info = data["path"][data["base_index"]]
title = path_info.get("name", path_id)
files = []
for child in data.get("children", []):
if child.get("type") == "file":
files.append({
'url': f"{root}/api/filesystem{child['path']}?attach",
'filename': child.get("name")
})
return title, files
else:
logger(" ❌ Could not identify Pixeldrain URL type (file, album, or folder).")
return None, []
except Exception as e:
logger(f"❌ An error occurred while fetching Pixeldrain data: {e}")
return None, []
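# Hedged usage sketch: fetch_pixeldrain_data returns a title plus a list of
# {'url', 'filename'} dicts that any HTTP downloader can consume; the album ID
# below is a placeholder.
title, files = fetch_pixeldrain_data("https://pixeldrain.com/l/<album-id>", print)
if title:
    print(f"{title}: {len(files)} file(s)")
    for item in files:
        print(item['filename'], '->', item['url'])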


@@ -0,0 +1,107 @@
import cloudscraper
from bs4 import BeautifulSoup
import re
import html
def fetch_rule34video_data(video_url, logger_func):
"""
Scrapes a rule34video.com page by specifically finding the 'Download' div,
then selecting the best available quality link.
Args:
video_url (str): The full URL to the rule34video.com page.
logger_func (callable): Function to use for logging progress.
Returns:
tuple: (video_title, final_video_url) or (None, None) on failure.
"""
logger_func(f" [Rule34Video] Fetching page: {video_url}")
scraper = cloudscraper.create_scraper()
try:
main_page_response = scraper.get(video_url, timeout=20)
main_page_response.raise_for_status()
soup = BeautifulSoup(main_page_response.text, 'html.parser')
page_title_tag = soup.find('title')
video_title = page_title_tag.text.strip() if page_title_tag else "rule34video_file"
# --- START OF FINAL FIX ---
# 1. Find the SPECIFIC "Download" label first. This is the key.
download_label = soup.find('div', class_='label', string='Download')
if not download_label:
logger_func(" [Rule34Video] ❌ Could not find the 'Download' label. Unable to locate the correct links div.")
return None, None
# 2. The correct container is the parent of this label.
download_div = download_label.parent
# 3. Now, find the links ONLY within this correct container.
link_tags = download_div.find_all('a', class_='tag_item')
if not link_tags:
logger_func(" [Rule34Video] ❌ Found the 'Download' div, but no download links were inside it.")
return None, None
# --- END OF FINAL FIX ---
links_by_quality = {}
quality_pattern = re.compile(r'(\d+p|4k)')
for tag in link_tags:
href = tag.get('href')
if not href:
continue
quality = None
text_match = quality_pattern.search(tag.text)
if text_match:
quality = text_match.group(1)
else:
href_match = quality_pattern.search(href)
if href_match:
quality = href_match.group(1)
if quality:
links_by_quality[quality] = href
if not links_by_quality:
logger_func(" [Rule34Video] ⚠️ Could not parse specific qualities. Using first available link as a fallback.")
final_video_url = link_tags[0].get('href')
if not final_video_url:
logger_func(" [Rule34Video] ❌ Fallback failed: First link tag had no href attribute.")
return None, None
final_video_url = html.unescape(final_video_url)
logger_func(f" [Rule34Video] ✅ Selected first available link as fallback: {final_video_url}")
return video_title, final_video_url
logger_func(f" [Rule34Video] Found available qualities: {list(links_by_quality.keys())}")
final_video_url = None
if '1080p' in links_by_quality:
final_video_url = links_by_quality['1080p']
logger_func(" [Rule34Video] ✅ Selected preferred 1080p link.")
elif '720p' in links_by_quality:
final_video_url = links_by_quality['720p']
logger_func(" [Rule34Video] ✅ 1080p not found. Selected fallback 720p link.")
else:
fallback_order = ['480p', '360p']
for quality in fallback_order:
if quality in links_by_quality:
final_video_url = links_by_quality[quality]
logger_func(f" [Rule34Video] ⚠️ 1080p/720p not found. Selected best available fallback: {quality}")
break
if not final_video_url:
logger_func(" [Rule34Video] ❌ Could not find a suitable download link.")
return None, None
final_video_url = html.unescape(final_video_url)
logger_func(f" [Rule34Video] ✅ Selected direct download URL: {final_video_url}")
return video_title, final_video_url
except Exception as e:
logger_func(f" [Rule34Video] ❌ An error occurred: {e}")
return None, None
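# Hedged usage sketch: the scraper above only resolves (title, direct_url);
# saving the file is the caller's job. cloudscraper is reused here on the
# assumption the direct link sits behind the same Cloudflare protection.
import cloudscraper
title, direct_url = fetch_rule34video_data("https://rule34video.com/video/<id>/", print)
if direct_url:
    scraper = cloudscraper.create_scraper()
    with scraper.get(direct_url, stream=True, timeout=60) as r:
        r.raise_for_status()
        with open("video.mp4", "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)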

src/core/saint2_client.py Normal file

@@ -0,0 +1,163 @@
import os
import re as re_module
import html
import urllib.parse
import requests
PATTERN_CACHE = {}
def re(pattern):
"""Compile a regular expression pattern and cache it."""
try:
return PATTERN_CACHE[pattern]
except KeyError:
p = PATTERN_CACHE[pattern] = re_module.compile(pattern)
return p
def extract_from(txt, pos=None, default=""):
"""Returns a function that extracts text between two delimiters from 'txt'."""
def extr(begin, end, index=txt.find, txt=txt):
nonlocal pos
try:
start_pos = pos if pos is not None else 0
first = index(begin, start_pos) + len(begin)
last = index(end, first)
if pos is not None:
pos = last + len(end)
return txt[first:last]
except (ValueError, IndexError):
return default
return extr
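# Worked example of the closure above: with pos=None each call searches from
# the start of 'txt', so repeated calls with the same delimiters return the
# first match rather than advancing through the text.
#   extr = extract_from('<a href="x">one</a><a href="y">two</a>')
#   extr('href="', '"')  -> 'x'
#   extr('>', '<')       -> 'one'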
def nameext_from_url(url):
"""Extract filename and extension from a URL."""
data = {}
filename = urllib.parse.unquote(url.partition("?")[0].rpartition("/")[2])
name, _, ext = filename.rpartition(".")
if name and len(ext) <= 16:
data["filename"], data["extension"] = name, ext.lower()
else:
data["filename"], data["extension"] = filename, ""
return data
class BaseExtractor:
"""A simplified base class for extractors."""
def __init__(self, match, session, logger):
self.match = match
self.groups = match.groups()
self.session = session
self.log = logger
def request(self, url, **kwargs):
"""Makes an HTTP request using the session."""
try:
response = self.session.get(url, **kwargs)
response.raise_for_status()
return response
except requests.exceptions.RequestException as e:
self.log(f"Error making request to {url}: {e}")
return None
class SaintAlbumExtractor(BaseExtractor):
"""Extractor for saint.su albums."""
root = "https://saint2.su"
pattern = re(r"(?:https?://)?saint\d*\.(?:su|pk|cr|to)/a/([^/?#]+)")
def items(self):
"""Generator that yields all files from an album."""
album_id = self.groups[0]
response = self.request(f"{self.root}/a/{album_id}")
if not response:
return None, []
extr = extract_from(response.text)
title = extr("<title>", "<").rpartition(" - ")[0]
self.log(f"Downloading album: {title}")
files_html = re_module.findall(r'<a class="image".*?</a>', response.text, re_module.DOTALL)
file_list = []
for i, file_html in enumerate(files_html, 1):
file_extr = extract_from(file_html)
file_url = html.unescape(file_extr("onclick=\"play('", "'"))
if not file_url:
continue
filename_info = nameext_from_url(file_url)
filename = f"{filename_info['filename']}.{filename_info['extension']}"
file_data = {
"url": file_url,
"filename": filename,
"headers": {"Referer": response.url},
}
file_list.append(file_data)
return title, file_list
class SaintMediaExtractor(BaseExtractor):
"""Extractor for single saint.su media links."""
root = "https://saint2.su"
pattern = re(r"(?:https?://)?saint\d*\.(?:su|pk|cr|to)(/(embe)?d/([^/?#]+))")
def items(self):
"""Generator that yields the single file from a media page."""
path, embed, media_id = self.groups
url = self.root + path
response = self.request(url)
if not response:
return None, []
extr = extract_from(response.text)
file_url = ""
title = extr("<title>", "<").rpartition(" - ")[0] or media_id
if embed: # /embed/ link
file_url = html.unescape(extr('<source src="', '"'))
else: # /d/ link
file_url = html.unescape(extr('<a href="', '"'))
if not file_url:
self.log("Could not find video URL on the page.")
return title, []
filename_info = nameext_from_url(file_url)
filename = f"{filename_info['filename'] or media_id}.{filename_info['extension'] or 'mp4'}"
file_data = {
"url": file_url,
"filename": filename,
"headers": {"Referer": response.url}
}
return title, [file_data]
def fetch_saint2_data(url, logger):
"""
Identifies the correct extractor for a saint2.su URL and returns the data.
Args:
url (str): The saint2.su URL.
logger (function): A function to log progress messages.
Returns:
tuple: A tuple containing (album_title, list_of_file_dicts).
Returns (None, []) if no data could be fetched.
"""
extractors = [SaintMediaExtractor, SaintAlbumExtractor]
session = requests.Session()
session.headers.update({
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
})
for extractor_cls in extractors:
match = extractor_cls.pattern.match(url)
if match:
extractor = extractor_cls(match, session, logger)
album_title, files = extractor.items()
sanitized_title = re_module.sub(r'[<>:"/\\|?*]', '_', album_title) if album_title else "saint2_download"
return sanitized_title, files
logger(f"Error: The URL '{url}' does not match a known saint2 pattern.")
return None, []
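# Hedged usage sketch: fetch_saint2_data picks the matching extractor and
# returns (sanitized_title, files); each file dict carries the Referer header
# the host expects, so downstream downloaders should pass headers through.
title, files = fetch_saint2_data("https://saint2.su/a/<album-id>", print)
for item in files:
    print(item['filename'], item['url'], item['headers'])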

src/core/simpcity_client.py Normal file

@@ -0,0 +1,102 @@
# src/core/simpcity_client.py
import cloudscraper
from bs4 import BeautifulSoup
from urllib.parse import urlparse, unquote
import os
import re
from ..utils.file_utils import clean_folder_name
import urllib.parse
def fetch_single_simpcity_page(url, logger_func, cookies=None, post_id=None):
"""
Scrapes a single SimpCity page for images, external links, video tags, and iframes.
"""
scraper = cloudscraper.create_scraper()
headers = {'Referer': 'https://simpcity.cr/'}
try:
response = scraper.get(url, timeout=30, headers=headers, cookies=cookies)
final_url = response.url # Capture the final URL after any redirects
if response.status_code == 404:
return None, [], final_url
response.raise_for_status()
soup = BeautifulSoup(response.text, 'html.parser')
album_title = None
title_element = soup.find('h1', class_='p-title-value')
if title_element:
album_title = title_element.text.strip()
search_scope = soup
if post_id:
post_content_container = soup.find('div', attrs={'data-lb-id': f'post-{post_id}'})
if post_content_container:
logger_func(f" [SimpCity] ✅ Isolating search to post content container for ID {post_id}.")
search_scope = post_content_container
else:
logger_func(f" [SimpCity] ⚠️ Could not find content container for post ID {post_id}.")
jobs_on_page = []
# Find native SimpCity images
image_tags = search_scope.find_all('img', class_='bbImage')
for img_tag in image_tags:
thumbnail_url = img_tag.get('src')
if not thumbnail_url or not isinstance(thumbnail_url, str) or 'saint2.su' in thumbnail_url: continue
full_url = thumbnail_url.replace('.md.', '.')
filename = img_tag.get('alt', '').replace('.md.', '.') or os.path.basename(unquote(urlparse(full_url).path))
jobs_on_page.append({'type': 'image', 'filename': filename, 'url': full_url})
# Find links in <a> tags, now with redirect handling
link_tags = search_scope.find_all('a', href=True)
for link in link_tags:
href = link.get('href', '')
actual_url = href
if '/misc/goto?url=' in href:
try:
# Extract and decode the real URL from the 'url' parameter
parsed_href = urlparse(href)
query_params = dict(urllib.parse.parse_qsl(parsed_href.query))
if 'url' in query_params:
actual_url = unquote(query_params['url'])
except Exception:
actual_url = href # Fallback if parsing fails
# Perform all checks on the 'actual_url' which is now the real destination
if re.search(r'pixeldrain\.com/[lud]/', actual_url): jobs_on_page.append({'type': 'pixeldrain', 'url': actual_url})
elif re.search(r'saint2\.(su|pk|cr|to)/embed/', actual_url): jobs_on_page.append({'type': 'saint2', 'url': actual_url})
elif re.search(r'bunkr\.(?:cr|si|la|ws|is|ru|su|red|black|media|site|to|ac|ci|fi|pk|ps|sk|ph)|bunkrr\.ru', actual_url): jobs_on_page.append({'type': 'bunkr', 'url': actual_url})
elif re.search(r'mega\.(nz|io)', actual_url): jobs_on_page.append({'type': 'mega', 'url': actual_url})
elif re.search(r'gofile\.io', actual_url): jobs_on_page.append({'type': 'gofile', 'url': actual_url})
# Find direct Saint2 video embeds in <video> tags
video_tags = search_scope.find_all('video')
for video in video_tags:
source_tag = video.find('source')
if source_tag and source_tag.get('src'):
src_url = source_tag['src']
if re.search(r'saint2\.(su|pk|cr|to)', src_url):
jobs_on_page.append({'type': 'saint2_direct', 'url': src_url})
# Find embeds in <iframe> tags (as a fallback)
iframe_tags = search_scope.find_all('iframe')
for iframe in iframe_tags:
src_url = iframe.get('src')
if src_url and isinstance(src_url, str):
if re.search(r'saint2\.(su|pk|cr|to)/embed/', src_url):
jobs_on_page.append({'type': 'saint2', 'url': src_url})
if jobs_on_page:
# We use a set to remove duplicate URLs that might be found in multiple ways
unique_jobs = list({job['url']: job for job in jobs_on_page}.values())
logger_func(f" [SimpCity] Scraper found jobs: {[job['type'] for job in unique_jobs]}")
return album_title, unique_jobs, final_url
return album_title, [], final_url
except Exception as e:
logger_func(f" [SimpCity] ❌ Error fetching page {url}: {e}")
raise e
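# Hedged usage sketch: the scraper returns (album_title, jobs, final_url), and
# each job is a {'type': ..., 'url': ...} dict the caller is expected to route
# to the matching client (pixeldrain, saint2, bunkr, mega, gofile, ...). Note
# the function re-raises on failure, so callers should wrap it.
try:
    title, jobs, final_url = fetch_single_simpcity_page(
        "https://simpcity.cr/threads/<thread>/", print, cookies=None, post_id=None)
    for job in jobs:
        print(job['type'], '->', job['url'])
except Exception as e:
    print(f"Page fetch failed: {e}")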


@@ -0,0 +1,73 @@
import cloudscraper
from bs4 import BeautifulSoup
import time
def get_chapter_list(series_url, logger_func):
logger_func(f" [Toonily] Scraping series page for chapter list: {series_url}")
scraper = cloudscraper.create_scraper()
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36',
'Referer': 'https://toonily.com/'
}
try:
response = scraper.get(series_url, timeout=30, headers=headers)
response.raise_for_status()
soup = BeautifulSoup(response.content, 'html.parser')
chapter_links = soup.select('li.wp-manga-chapter > a')
if not chapter_links:
logger_func(" [Toonily] ❌ Could not find any chapter links on the page.")
return []
urls = [link['href'] for link in chapter_links]
urls.reverse()
logger_func(f" [Toonily] Found {len(urls)} chapters.")
return urls
except Exception as e:
logger_func(f" [Toonily] ❌ Error getting chapter list: {e}")
return []
def fetch_chapter_data(chapter_url, logger_func, scraper_session):
"""
Scrapes a single Toonily.com chapter page for its title and image URLs.
"""
main_series_url = chapter_url.rsplit('/', 2)[0] + '/'
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
'Accept-Language': 'en-US,en;q=0.9',
'Referer': main_series_url
}
try:
response = scraper_session.get(chapter_url, timeout=30, headers=headers)
response.raise_for_status()
soup = BeautifulSoup(response.content, 'html.parser')
title_element = soup.select_one('h1#chapter-heading')
image_container = soup.select_one('div.reading-content')
if not title_element or not image_container:
logger_func(" [Toonily] ❌ Page structure invalid. Could not find title or image container.")
return None, None, []
full_chapter_title = title_element.text.strip()
if " - Chapter" in full_chapter_title:
series_title = full_chapter_title.split(" - Chapter")[0].strip()
else:
series_title = full_chapter_title.strip()
chapter_title = full_chapter_title # The full string is best for the chapter folder name
image_elements = image_container.select('img')
image_urls = [img.get('data-src', img.get('src')).strip() for img in image_elements if img.get('data-src') or img.get('src')]
return series_title, chapter_title, image_urls
except Exception as e:
logger_func(f" [Toonily] ❌ An error occurred scraping chapter '{chapter_url}': {e}")
return None, None, []
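# Hedged usage sketch: the chapter list and per-chapter scraping can share one
# cloudscraper session so Cloudflare clearance cookies persist across requests.
import cloudscraper
session = cloudscraper.create_scraper()
for chapter_url in get_chapter_list("https://toonily.com/webtoon/<series>/", print):
    series, chapter, images = fetch_chapter_data(chapter_url, print, session)
    if images:
        print(f"{series} / {chapter}: {len(images)} page(s)")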

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long


@@ -3,161 +3,701 @@ import os
import re
import traceback
import json
import base64
import time
import zipfile
import struct
import sys
import io
import hashlib
from contextlib import redirect_stdout
from urllib.parse import urlparse, urlunparse, parse_qs, urlencode
from concurrent.futures import ThreadPoolExecutor, as_completed
from threading import Lock
# --- Third-Party Library Imports ---
import requests
import cloudscraper
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from ..utils.file_utils import clean_folder_name
try:
from Crypto.Cipher import AES
PYCRYPTODOME_AVAILABLE = True
except ImportError:
PYCRYPTODOME_AVAILABLE = False
try:
import gdown
GDRIVE_AVAILABLE = True
except ImportError:
GDRIVE_AVAILABLE = False
# --- Helper Functions ---
MEGA_API_URL = "https://g.api.mega.co.nz"
MIN_SIZE_FOR_MULTIPART_MEGA = 20 * 1024 * 1024 # 20 MB
NUM_PARTS_FOR_MEGA = 5
def _get_filename_from_headers(headers):
"""
Extracts a filename from the Content-Disposition header.
Args:
headers (dict): A dictionary of HTTP response headers.
Returns:
str or None: The extracted filename, or None if not found.
"""
cd = headers.get('content-disposition')
if not cd:
return None
fname_match = re.findall('filename="?([^"]+)"?', cd)
if fname_match:
# Sanitize the filename to prevent directory traversal issues
# and remove invalid characters for most filesystems.
sanitized_name = re.sub(r'[<>:"/\\|?*]', '_', fname_match[0].strip())
return sanitized_name
return None
# --- Main Service Downloader Functions ---
def urlb64_to_b64(s):
s += '=' * (-len(s) % 4)
return s.replace('-', '+').replace('_', '/')
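# Example: Mega uses URL-safe base64 without padding, so '-'/'_' are mapped
# back to '+'/'/' and '=' padding is restored to a multiple of four:
#   urlb64_to_b64('ab-_') -> 'ab+/'
#   urlb64_to_b64('abc')  -> 'abc='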
def b64_to_bytes(s):
return base64.b64decode(urlb64_to_b64(s))
def bytes_to_b64(b):
return base64.b64encode(b).decode('utf-8')
def _decrypt_mega_attribute(encrypted_attr_b64, key_bytes):
try:
attr_bytes = b64_to_bytes(encrypted_attr_b64)
padded_len = (len(attr_bytes) + 15) & ~15
padded_attr_bytes = attr_bytes.ljust(padded_len, b'\0')
iv = b'\0' * 16
cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
decrypted_attr = cipher.decrypt(padded_attr_bytes)
json_str = decrypted_attr.strip(b'\0').decode('utf-8')
if json_str.startswith('MEGA'):
return json.loads(json_str[4:])
return json.loads(json_str)
except Exception:
return {}
def _decrypt_mega_key(encrypted_key_b64, master_key_bytes):
key_bytes = b64_to_bytes(encrypted_key_b64)
iv = b'\0' * 16
cipher = AES.new(master_key_bytes, AES.MODE_ECB)
return cipher.decrypt(key_bytes)
def _parse_mega_key(key_b64):
key_bytes = b64_to_bytes(key_b64)
key_parts = struct.unpack('>' + 'I' * (len(key_bytes) // 4), key_bytes)
if len(key_parts) == 8:
final_key = (key_parts[0] ^ key_parts[4], key_parts[1] ^ key_parts[5], key_parts[2] ^ key_parts[6], key_parts[3] ^ key_parts[7])
iv = (key_parts[4], key_parts[5], 0, 0)
key_bytes = struct.pack('>' + 'I' * 4, *final_key)
iv_bytes = struct.pack('>' + 'I' * 4, *iv)
return key_bytes, iv_bytes, None
elif len(key_parts) == 4:
return key_bytes, None, None
raise ValueError("Invalid Mega key length")
def _process_file_key(file_key_bytes):
key_parts = struct.unpack('>' + 'I' * 8, file_key_bytes)
final_key_parts = (key_parts[0] ^ key_parts[4], key_parts[1] ^ key_parts[5], key_parts[2] ^ key_parts[6], key_parts[3] ^ key_parts[7])
return struct.pack('>' + 'I' * 4, *final_key_parts)
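# Per Mega's key layout, a 32-byte file key is eight 32-bit words: the real
# 16-byte AES key is words[0..3] XOR words[4..7] (the fold implemented by
# _process_file_key above), words[4..5] seed the CTR nonce, and words[6..7]
# hold the integrity (MAC) data, which this downloader does not verify.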
def _download_and_decrypt_chunk(args):
url, temp_path, start_byte, end_byte, key, nonce, part_num, progress_data, progress_callback_func, file_name, cancellation_event, pause_event = args
try:
headers = {'Range': f'bytes={start_byte}-{end_byte}'}
initial_counter = start_byte // 16
cipher = AES.new(key, AES.MODE_CTR, nonce=nonce, initial_value=initial_counter)
with requests.get(url, headers=headers, stream=True, timeout=(15, 300)) as r:
r.raise_for_status()
with open(temp_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
if cancellation_event and cancellation_event.is_set():
return False
while pause_event and pause_event.is_set():
time.sleep(0.5)
if cancellation_event and cancellation_event.is_set():
return False
decrypted_chunk = cipher.decrypt(chunk)
f.write(decrypted_chunk)
with progress_data['lock']:
progress_data['downloaded'] += len(chunk)
if progress_callback_func and (time.time() - progress_data['last_update'] > 1):
progress_callback_func(file_name, (progress_data['downloaded'], progress_data['total_size']))
progress_data['last_update'] = time.time()
return True
except Exception:
return False
def download_and_decrypt_mega_file(info, download_path, logger_func, progress_callback_func=None, cancellation_event=None, pause_event=None):
file_name = info['file_name']
file_size = info['file_size']
dl_url = info['dl_url']
final_path = os.path.join(download_path, file_name)
if os.path.exists(final_path) and os.path.getsize(final_path) == file_size:
logger_func(f" [Mega] File '{file_name}' already exists with the correct size. Skipping.")
return
os.makedirs(download_path, exist_ok=True)
key, iv, _ = _parse_mega_key(urlb64_to_b64(info['file_key']))
nonce = iv[:8]
# Check for cancellation before starting
if cancellation_event and cancellation_event.is_set():
logger_func(f" [Mega] Download for '{file_name}' cancelled before starting.")
return
if file_size < MIN_SIZE_FOR_MULTIPART_MEGA:
logger_func(f" [Mega] Downloading '{file_name}' (Single Stream)...")
try:
cipher = AES.new(key, AES.MODE_CTR, nonce=nonce, initial_value=0)
with requests.get(dl_url, stream=True, timeout=(15, 300)) as r:
r.raise_for_status()
downloaded_bytes = 0
last_update_time = time.time()
with open(final_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
if cancellation_event and cancellation_event.is_set():
break
while pause_event and pause_event.is_set():
time.sleep(0.5)
if cancellation_event and cancellation_event.is_set():
break
if cancellation_event and cancellation_event.is_set():
break
decrypted_chunk = cipher.decrypt(chunk)
f.write(decrypted_chunk)
downloaded_bytes += len(chunk)
current_time = time.time()
if current_time - last_update_time > 1:
if progress_callback_func:
progress_callback_func(file_name, (downloaded_bytes, file_size))
last_update_time = time.time()
if cancellation_event and cancellation_event.is_set():
logger_func(f" [Mega] ❌ Download cancelled for '{file_name}'. Deleting partial file.")
if os.path.exists(final_path): os.remove(final_path)
else:
logger_func(f" [Mega] ✅ Successfully downloaded '{file_name}'")
except Exception as e:
logger_func(f" [Mega] ❌ Download failed for '{file_name}': {e}")
if os.path.exists(final_path): os.remove(final_path)
else:
logger_func(f" [Mega] Downloading '{file_name}' ({NUM_PARTS_FOR_MEGA} Parts)...")
# Align chunk size to the 16-byte AES block so each worker's CTR counter (start_byte // 16) lands exactly on a block boundary.
chunk_size = (file_size // NUM_PARTS_FOR_MEGA) // 16 * 16
chunks = []
for i in range(NUM_PARTS_FOR_MEGA):
start = i * chunk_size
end = start + chunk_size - 1 if i < NUM_PARTS_FOR_MEGA - 1 else file_size - 1
chunks.append((start, end))
progress_data = {'downloaded': 0, 'total_size': file_size, 'lock': Lock(), 'last_update': time.time()}
tasks = []
for i, (start, end) in enumerate(chunks):
temp_path = f"{final_path}.part{i}"
tasks.append((dl_url, temp_path, start, end, key, nonce, i, progress_data, progress_callback_func, file_name, cancellation_event, pause_event))
all_parts_successful = True
with ThreadPoolExecutor(max_workers=NUM_PARTS_FOR_MEGA) as executor:
if cancellation_event and cancellation_event.is_set():
executor.shutdown(wait=False, cancel_futures=True)
all_parts_successful = False
else:
results = executor.map(_download_and_decrypt_chunk, tasks)
for result in results:
if not result:
all_parts_successful = False
# Check for cancellation after threads finish/are cancelled
if cancellation_event and cancellation_event.is_set():
all_parts_successful = False
logger_func(f" [Mega] ❌ Multipart download cancelled for '{file_name}'.")
if all_parts_successful:
logger_func(f" [Mega] All parts for '{file_name}' downloaded. Assembling file...")
try:
with open(final_path, 'wb') as f_out:
for i in range(NUM_PARTS_FOR_MEGA):
part_path = f"{final_path}.part{i}"
with open(part_path, 'rb') as f_in:
f_out.write(f_in.read())
os.remove(part_path)
logger_func(f" [Mega] ✅ Successfully downloaded and assembled '{file_name}'")
except Exception as e:
logger_func(f" [Mega] ❌ File assembly failed for '{file_name}': {e}")
else:
logger_func(f" [Mega] ❌ Multipart download failed or was cancelled for '{file_name}'. Cleaning up partial files.")
for i in range(NUM_PARTS_FOR_MEGA):
part_path = f"{final_path}.part{i}"
if os.path.exists(part_path):
os.remove(part_path)
def _process_mega_folder(folder_id, folder_key, session, logger_func):
try:
master_key_bytes, _, _ = _parse_mega_key(folder_key)
payload = [{"a": "f", "c": 1, "r": 1}]
params = {'n': folder_id}
response = session.post(f"{MEGA_API_URL}/cs", params=params, json=payload, timeout=30)
response.raise_for_status()
res_json = response.json()
if isinstance(res_json, int) or (isinstance(res_json, list) and res_json and isinstance(res_json[0], int)):
error_code = res_json if isinstance(res_json, int) else res_json[0]
logger_func(f" [Mega Folder] ❌ API returned error code: {error_code}. The folder may be invalid or removed.")
return None, None
if not isinstance(res_json, list) or not res_json or not isinstance(res_json[0], dict) or 'f' not in res_json[0]:
logger_func(f" [Mega Folder] ❌ Invalid folder data received: {str(res_json)[:200]}")
return None, None
nodes = res_json[0]['f']
decrypted_nodes = {}
for node in nodes:
try:
encrypted_key_b64 = node['k'].split(':')[-1]
decrypted_key_raw = _decrypt_mega_key(encrypted_key_b64, master_key_bytes)
attr_key = _process_file_key(decrypted_key_raw) if node.get('t') == 0 else decrypted_key_raw
attributes = _decrypt_mega_attribute(node['a'], attr_key)
name = re.sub(r'[<>:"/\\|?*]', '_', attributes.get('n', f"unknown_{node['h']}"))
decrypted_nodes[node['h']] = {"name": name, "parent": node.get('p'), "type": node.get('t'), "size": node.get('s'), "raw_key_b64": urlb64_to_b64(bytes_to_b64(decrypted_key_raw))}
except Exception as e:
logger_func(f" [Mega Folder] ⚠️ Could not process node {node.get('h')}: {e}")
root_name = decrypted_nodes.get(folder_id, {}).get("name", "Mega_Folder")
files_to_download = []
for handle, node_info in decrypted_nodes.items():
if node_info.get("type") == 0:
path_parts = [node_info['name']]
current_parent_id = node_info.get('parent')
while current_parent_id in decrypted_nodes:
parent_node = decrypted_nodes[current_parent_id]
path_parts.insert(0, parent_node['name'])
current_parent_id = parent_node.get('parent')
if current_parent_id == folder_id:
break
files_to_download.append({'h': handle, 's': node_info['size'], 'key': node_info['raw_key_b64'], 'relative_path': os.path.join(*path_parts)})
return root_name, files_to_download
except Exception as e:
logger_func(f" [Mega Folder] ❌ Failed to get folder info: {e}")
return None, None
def download_mega_file(mega_url, download_path, logger_func=print, progress_callback_func=None, overall_progress_callback=None, cancellation_event=None, pause_event=None):
if not PYCRYPTODOME_AVAILABLE:
logger_func("❌ Mega download failed: 'pycryptodome' library is not installed.")
if overall_progress_callback: overall_progress_callback(1, 1)
return
logger_func(f" [Mega] Initializing download for: {mega_url}")
folder_match = re.search(r'mega(?:\.co)?\.nz/folder/([a-zA-Z0-9]+)#([a-zA-Z0-9_.-]+)', mega_url)
file_match = re.search(r'mega(?:\.co)?\.nz/(?:file/|#!)?([a-zA-Z0-9]+)(?:#|!)([a-zA-Z0-9_.-]+)', mega_url)
session = requests.Session()
session.headers.update({'User-Agent': 'Kemono-Downloader-PyQt/1.0'})
if folder_match:
folder_id, folder_key = folder_match.groups()
logger_func(f" [Mega] Folder link detected. Starting crawl...")
root_folder_name, files = _process_mega_folder(folder_id, folder_key, session, logger_func)
if root_folder_name is None or files is None:
logger_func(" [Mega Folder] ❌ Crawling failed. Aborting.")
if overall_progress_callback: overall_progress_callback(1, 1)
return
if not files:
logger_func(" [Mega Folder] Folder is empty. Nothing to download.")
if overall_progress_callback: overall_progress_callback(0, 0)
return
logger_func(" [Mega Folder] Prioritizing largest files first...")
files.sort(key=lambda f: f.get('s', 0), reverse=True)
total_files = len(files)
logger_func(f" [Mega Folder] ✅ Crawl complete. Found {total_files} file(s) in folder '{root_folder_name}'.")
if overall_progress_callback: overall_progress_callback(total_files, 0)
folder_download_path = os.path.join(download_path, root_folder_name)
os.makedirs(folder_download_path, exist_ok=True)
progress_lock = Lock()
processed_count = 0
MAX_WORKERS = 3
logger_func(f" [Mega Folder] Starting concurrent download with up to {MAX_WORKERS} workers...")
def _download_worker(file_data):
nonlocal processed_count
try:
if cancellation_event and cancellation_event.is_set():
return
params = {'n': folder_id}
payload = [{"a": "g", "g": 1, "n": file_data['h']}]
response = session.post(f"{MEGA_API_URL}/cs", params=params, json=payload, timeout=20)
response.raise_for_status()
res_json = response.json()
if isinstance(res_json, int) or (isinstance(res_json, list) and res_json and isinstance(res_json[0], int)):
error_code = res_json if isinstance(res_json, int) else res_json[0]
logger_func(f" [Mega Worker] ❌ API Error {error_code} for '{file_data['relative_path']}'. Skipping.")
return
dl_temp_url = res_json[0]['g']
file_info = {'file_name': os.path.basename(file_data['relative_path']), 'file_size': file_data['s'], 'dl_url': dl_temp_url, 'file_key': file_data['key']}
file_specific_path = os.path.dirname(file_data['relative_path'])
final_download_dir = os.path.join(folder_download_path, file_specific_path)
download_and_decrypt_mega_file(file_info, final_download_dir, logger_func, progress_callback_func, cancellation_event, pause_event)
except Exception as e:
# Don't log error if it was a cancellation
if not (cancellation_event and cancellation_event.is_set()):
logger_func(f" [Mega Worker] ❌ Failed to process '{file_data['relative_path']}': {e}")
finally:
with progress_lock:
processed_count += 1
if overall_progress_callback:
overall_progress_callback(total_files, processed_count)
with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
futures = [executor.submit(_download_worker, file_data) for file_data in files]
for future in as_completed(futures):
if cancellation_event and cancellation_event.is_set():
# Attempt to cancel remaining futures
for f in futures:
if not f.done():
f.cancel()
break
try:
future.result()
except Exception as e:
if not (cancellation_event and cancellation_event.is_set()):
logger_func(f" [Mega Folder] A download worker failed with an error: {e}")
logger_func(" [Mega Folder] ✅ All concurrent downloads complete or cancelled.")
elif file_match:
if overall_progress_callback: overall_progress_callback(1, 0)
file_id, file_key = file_match.groups()
try:
payload = [{"a": "g", "p": file_id}]
response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
res_json = response.json()
if isinstance(res_json, list) and res_json and isinstance(res_json[0], int):
logger_func(f" [Mega] ❌ API Error {res_json[0]}. Link may be invalid or removed.")
if overall_progress_callback: overall_progress_callback(1, 1)
return
file_size = res_json[0]['s']
at_b64 = res_json[0]['at']
raw_file_key_bytes = b64_to_bytes(file_key)
attr_key_bytes = _process_file_key(raw_file_key_bytes)
attrs = _decrypt_mega_attribute(at_b64, attr_key_bytes)
file_name = attrs.get('n', f"unknown_file_{file_id}")
payload_dl = [{"a": "g", "g": 1, "p": file_id}]
response_dl = session.post(f"{MEGA_API_URL}/cs", json=payload_dl, timeout=20)
dl_temp_url = response_dl.json()[0]['g']
file_info_obj = {'file_name': file_name, 'file_size': file_size, 'dl_url': dl_temp_url, 'file_key': file_key}
download_and_decrypt_mega_file(file_info_obj, download_path, logger_func, progress_callback_func, cancellation_event, pause_event)
if overall_progress_callback: overall_progress_callback(1, 1)
except Exception as e:
if not (cancellation_event and cancellation_event.is_set()):
logger_func(f" [Mega] ❌ Failed to process single file: {e}")
if overall_progress_callback: overall_progress_callback(1, 1)
else:
logger_func(f" [Mega] ❌ Error: Invalid or unsupported Mega URL format.")
if '/folder/' in mega_url and '/file/' in mega_url:
logger_func(" [Mega] This looks like a link to a file inside a folder. Please use a direct, shareable link to the individual file.")
if overall_progress_callback: overall_progress_callback(1, 1)
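# Hedged usage sketch: download_mega_file handles both /file/ and /folder/
# links; the events and callbacks are optional and None simply disables them.
import threading
download_mega_file(
    "https://mega.nz/file/<id>#<key>",  # placeholder link
    download_path="downloads",
    logger_func=print,
    cancellation_event=threading.Event(),
)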
def download_gdrive_file(url, download_path, logger_func=print, progress_callback_func=None, overall_progress_callback=None, use_post_subfolder=False, post_title=None):
if not GDRIVE_AVAILABLE:
logger_func("❌ Google Drive download failed: 'gdown' library is not installed.")
return
# --- Subfolder Logic ---
final_download_path = download_path
if use_post_subfolder and post_title:
subfolder_name = clean_folder_name(post_title)
final_download_path = os.path.join(download_path, subfolder_name)
logger_func(f" [G-Drive] Using post subfolder: '{subfolder_name}'")
os.makedirs(final_download_path, exist_ok=True)
# --- End Subfolder Logic ---
original_stdout = sys.stdout
original_stderr = sys.stderr
captured_output_buffer = io.StringIO()
paths = None
try:
logger_func(f" [G-Drive] Starting folder download for: {url}")
sys.stdout = captured_output_buffer
sys.stderr = captured_output_buffer
paths = gdown.download_folder(url, output=final_download_path, quiet=False, use_cookies=False, remaining_ok=True)
except Exception as e:
logger_func(f" [G-Drive] ❌ An unexpected error occurred: {e}")
logger_func(" [G-Drive] This can happen if the folder is private, deleted, or you have been rate-limited by Google.")
finally:
sys.stdout = original_stdout
sys.stderr = original_stderr
captured_output = captured_output_buffer.getvalue()
if captured_output:
processed_files_count = 0
current_filename = None
if overall_progress_callback:
overall_progress_callback(-1, 0)
lines = captured_output.splitlines()
for i, line in enumerate(lines):
cleaned_line = line.strip('\r').strip()
if not cleaned_line:
continue
if cleaned_line.startswith("To: "):
try:
if current_filename:
logger_func(f" [G-Drive] ✅ Saved '{current_filename}'")
filepath = cleaned_line[4:]
current_filename = os.path.basename(filepath)
processed_files_count += 1
logger_func(f" [G-Drive] ({processed_files_count}/?) Downloading '{current_filename}'...")
if progress_callback_func:
progress_callback_func(current_filename, "In Progress...")
if overall_progress_callback:
overall_progress_callback(-1, processed_files_count -1)
except Exception:
logger_func(f" [gdown] {cleaned_line}")
if current_filename:
logger_func(f" [G-Drive] ✅ Saved '{current_filename}'")
if overall_progress_callback:
overall_progress_callback(-1, processed_files_count)
if paths and all(os.path.exists(p) for p in paths):
final_folder_path = os.path.dirname(paths[0]) if paths else final_download_path
logger_func(f" [G-Drive] ✅ Finished. Downloaded {len(paths)} file(s) to folder '{final_folder_path}'")
else:
logger_func(f" [G-Drive] ❌ Download failed or folder was empty. Check the log above for details from gdown.")
def download_dropbox_file(dropbox_link, download_path=".", logger_func=print, progress_callback_func=None, use_post_subfolder=False, post_title=None):
logger_func(f" [Dropbox] Attempting to download: {dropbox_link}")
final_download_path = download_path
if use_post_subfolder and post_title:
subfolder_name = clean_folder_name(post_title)
final_download_path = os.path.join(download_path, subfolder_name)
logger_func(f" [Dropbox] Using post subfolder: '{subfolder_name}'")
parsed_url = urlparse(dropbox_link)
query_params = parse_qs(parsed_url.query)
query_params['dl'] = ['1']
new_query = urlencode(query_params, doseq=True)
direct_download_url = urlunparse(parsed_url._replace(query=new_query))
logger_func(f" [Dropbox] Using direct download URL: {direct_download_url}")
scraper = cloudscraper.create_scraper()
try:
os.makedirs(final_download_path, exist_ok=True)
with scraper.get(direct_download_url, stream=True, allow_redirects=True, timeout=(20, 600)) as r:
r.raise_for_status()
# Determine filename from headers or URL
filename = _get_filename_from_headers(r.headers) or os.path.basename(parsed_url.path) or "dropbox_download"
if not os.path.splitext(filename)[1]:
filename += ".zip"
full_save_path = os.path.join(final_download_path, filename)
logger_func(f" [Dropbox] Starting download of '{filename}'...")
# Write file to disk in chunks
total_size = int(r.headers.get('content-length', 0))
downloaded_bytes = 0
last_log_time = time.time()
with open(full_save_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
logger_func(f" [Dropbox] ✅ Dropbox file downloaded successfully: {full_save_path}")
downloaded_bytes += len(chunk)
current_time = time.time()
if current_time - last_log_time > 1:
if progress_callback_func:
progress_callback_func(filename, (downloaded_bytes, total_size))
last_log_time = current_time
logger_func(f" [Dropbox] ✅ Download complete: {full_save_path}")
if zipfile.is_zipfile(full_save_path):
logger_func(f" [Dropbox] ዚ Detected zip file. Attempting to extract...")
extract_folder_name = os.path.splitext(filename)[0]
extract_path = os.path.join(final_download_path, extract_folder_name)
os.makedirs(extract_path, exist_ok=True)
with zipfile.ZipFile(full_save_path, 'r') as zip_ref:
zip_ref.extractall(extract_path)
logger_func(f" [Dropbox] ✅ Successfully extracted to folder: '{extract_path}'")
try:
os.remove(full_save_path)
logger_func(f" [Dropbox] 🗑️ Removed original zip file.")
except OSError as e:
logger_func(f" [Dropbox] ⚠️ Could not remove original zip file: {e}")
except Exception as e:
logger_func(f" [Dropbox] ❌ An error occurred during Dropbox download: {e}")
traceback.print_exc(limit=2)
raise
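# Example of the URL rewrite performed above: forcing dl=1 turns Dropbox's
# HTML preview page into a direct file download, e.g.
#   https://www.dropbox.com/s/abc123/file.zip?dl=0
#   -> https://www.dropbox.com/s/abc123/file.zip?dl=1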
def _get_gofile_api_token(session, logger_func):
"""Creates a temporary guest account to get an API token."""
try:
logger_func(" [Gofile] Creating temporary guest account for API token...")
response = session.post("https://api.gofile.io/accounts", timeout=20)
response.raise_for_status()
data = response.json()
if data.get("status") == "ok":
token = data["data"]["token"]
logger_func(" [Gofile] ✅ Successfully obtained API token.")
return token
else:
logger_func(f" [Gofile] ❌ Failed to get API token, status: {data.get('status')}")
return None
except Exception as e:
logger_func(f" [Gofile] ❌ Error creating guest account: {e}")
return None
def _get_gofile_website_token(session, logger_func):
"""Fetches the 'wt' (website token) from Gofile's global JS file."""
try:
logger_func(" [Gofile] Fetching website token (wt)...")
response = session.get("https://gofile.io/dist/js/global.js", timeout=20)
response.raise_for_status()
match = re.search(r'\.wt = "([^"]+)"', response.text)
if match:
wt = match.group(1)
logger_func(" [Gofile] ✅ Successfully fetched website token.")
return wt
logger_func(" [Gofile] ❌ Could not find website token in JS file.")
return None
except Exception as e:
logger_func(f" [Gofile] ❌ Error fetching website token: {e}")
return None
def download_gofile_folder(gofile_url, download_path, logger_func=print, progress_callback_func=None, overall_progress_callback=None):
"""Downloads all files from a Gofile folder URL."""
logger_func(f" [Gofile] Initializing download for: {gofile_url}")
match = re.search(r"gofile\.io/d/([^/?#]+)", gofile_url)
if not match:
logger_func(" [Gofile] ❌ Invalid Gofile folder URL format.")
if overall_progress_callback: overall_progress_callback(1, 1)
return
content_id = match.group(1)
scraper = cloudscraper.create_scraper()
try:
retry_strategy = Retry(
total=5,
backoff_factor=1,
status_forcelist=[429, 500, 502, 503, 504],
allowed_methods=["HEAD", "GET", "POST"]
)
adapter = HTTPAdapter(max_retries=retry_strategy)
scraper.mount("http://", adapter)
scraper.mount("https://", adapter)
logger_func(" [Gofile] 🔧 Configured robust retry strategy for network requests.")
except Exception as e:
logger_func(f" [Gofile] ⚠️ Could not configure retry strategy: {e}")
api_token = _get_gofile_api_token(scraper, logger_func)
if not api_token:
if overall_progress_callback: overall_progress_callback(1, 1)
return
website_token = _get_gofile_website_token(scraper, logger_func)
if not website_token:
if overall_progress_callback: overall_progress_callback(1, 1)
return
try:
scraper.cookies.set("accountToken", api_token, domain=".gofile.io")
scraper.headers.update({"Authorization": f"Bearer {api_token}"})
api_url = f"https://api.gofile.io/contents/{content_id}?wt={website_token}"
logger_func(f" [Gofile] Fetching folder contents for ID: {content_id}")
response = scraper.get(api_url, timeout=30)
response.raise_for_status()
data = response.json()
if data.get("status") != "ok":
if data.get("status") == "error-passwordRequired":
logger_func(" [Gofile] ❌ This folder is password protected. Downloading password-protected folders is not supported.")
else:
logger_func(f" [Gofile] ❌ API Error: {data.get('status')}. The folder may be expired or invalid.")
if overall_progress_callback: overall_progress_callback(1, 1)
return
folder_info = data.get("data", {})
folder_name = clean_folder_name(folder_info.get("name", content_id))
files_to_download = [item for item in folder_info.get("children", {}).values() if item.get("type") == "file"]
if not files_to_download:
logger_func(" [Gofile] No files found in this Gofile folder.")
if overall_progress_callback: overall_progress_callback(0, 0)
return
final_download_path = os.path.join(download_path, folder_name)
os.makedirs(final_download_path, exist_ok=True)
logger_func(f" [Gofile] Found {len(files_to_download)} file(s). Saving to folder: '{folder_name}'")
if overall_progress_callback: overall_progress_callback(len(files_to_download), 0)
download_session = requests.Session()
adapter = HTTPAdapter(max_retries=Retry(
total=5, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504]
))
download_session.mount("http://", adapter)
download_session.mount("https://", adapter)
for i, file_info in enumerate(files_to_download):
filename = file_info.get("name")
file_url = file_info.get("link")
file_size = file_info.get("size", 0)
filepath = os.path.join(final_download_path, filename)
if os.path.exists(filepath) and os.path.getsize(filepath) == file_size:
logger_func(f" [Gofile] ({i+1}/{len(files_to_download)}) ⏩ Skipping existing file: '{filename}'")
if overall_progress_callback: overall_progress_callback(len(files_to_download), i + 1)
continue
logger_func(f" [Gofile] ({i+1}/{len(files_to_download)}) 🔽 Downloading: '{filename}'")
with download_session.get(file_url, stream=True, timeout=(60, 600)) as r:
r.raise_for_status()
if progress_callback_func:
progress_callback_func(filename, (0, file_size))
downloaded_bytes = 0
last_log_time = time.time()
with open(filepath, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
downloaded_bytes += len(chunk)
current_time = time.time()
if current_time - last_log_time > 0.5: # Update slightly faster
if progress_callback_func:
progress_callback_func(filename, (downloaded_bytes, file_size))
last_log_time = current_time
if progress_callback_func:
progress_callback_func(filename, (file_size, file_size))
logger_func(f" [Gofile] ✅ Finished '{filename}'")
if overall_progress_callback: overall_progress_callback(len(files_to_download), i + 1)
time.sleep(1)
except Exception as e:
logger_func(f" [Gofile] ❌ An error occurred during Gofile download: {e}")
if not isinstance(e, requests.exceptions.RequestException):
traceback.print_exc()
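# Hedged usage sketch: only the folder URL, a destination directory, and a
# logger are required; the progress callbacks are optional.
download_gofile_folder(
    "https://gofile.io/d/<content-id>",  # placeholder
    download_path="downloads",
    logger_func=print,
)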


@@ -1,4 +1,5 @@
# --- Standard Library Imports ---
import os
import time
import hashlib
@@ -10,28 +11,49 @@ from concurrent.futures import ThreadPoolExecutor, as_completed
# --- Third-Party Library Imports ---
import requests
# --- Module Constants ---
CHUNK_DOWNLOAD_RETRY_DELAY = 2
MAX_CHUNK_DOWNLOAD_RETRIES = 5
DOWNLOAD_CHUNK_SIZE_ITER = 1024 * 256 # 256 KB per iteration chunk
# Flag to indicate if this module and its dependencies are available.
# This was missing and caused the ImportError.
MULTIPART_DOWNLOADER_AVAILABLE = True
def _download_individual_chunk(
chunk_url, chunk_temp_file_path, start_byte, end_byte, headers,
part_num, total_parts, progress_data, cancellation_event,
skip_event, pause_event, global_emit_time_ref, cookies_for_chunk,
logger_func, emitter=None, api_original_filename=None
):
"""
Downloads a single segment (chunk) of a larger file to its own unique part file.
This function is intended to be run in a separate thread by a ThreadPoolExecutor.
It handles retries, pauses, and cancellations for its specific chunk. If a
download fails, the partial chunk file is removed, allowing a clean retry later.
Args:
chunk_url (str): The URL to download the file from.
chunk_temp_file_path (str): The unique path to save this specific chunk
(e.g., 'my_video.mp4.part0').
start_byte (int): The starting byte for the Range header.
end_byte (int): The ending byte for the Range header.
headers (dict): The HTTP headers to use for the request.
part_num (int): The index of this chunk (e.g., 0 for the first part).
total_parts (int): The total number of chunks for the entire file.
progress_data (dict): A thread-safe dictionary for sharing progress.
cancellation_event (threading.Event): Event to signal cancellation.
skip_event (threading.Event): Event to signal skipping the file.
pause_event (threading.Event): Event to signal pausing the download.
global_emit_time_ref (list): A mutable list with one element (a timestamp)
to rate-limit UI updates.
cookies_for_chunk (dict): Cookies to use for the request.
logger_func (function): A function to log messages.
emitter (queue.Queue or QObject): Emitter for sending progress to the UI.
api_original_filename (str): The original filename for UI display.
Returns:
tuple: A tuple containing (bytes_downloaded, success_flag).
"""
# --- Pre-download checks for control events ---
if cancellation_event and cancellation_event.is_set():
@@ -49,103 +71,135 @@ def _download_individual_chunk(
time.sleep(0.2)
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Download resumed.")
    bytes_this_chunk = 0
    last_speed_calc_time = time.time()
    bytes_at_last_speed_calc = 0
    # Set this chunk's status to 'active' before starting the download.
    with progress_data['lock']:
        progress_data['chunks_status'][part_num]['active'] = True
    try:
        # Prepare headers for the specific byte range of this chunk
        chunk_headers = headers.copy()
        if end_byte != -1:
            chunk_headers['Range'] = f"bytes={start_byte}-{end_byte}"
        # --- Retry Loop ---
        for attempt in range(MAX_CHUNK_DOWNLOAD_RETRIES + 1):
            if cancellation_event and cancellation_event.is_set():
                return bytes_this_chunk, False
            try:
                if attempt > 0:
                    logger_func(f"   [Chunk {part_num + 1}/{total_parts}] Retrying (Attempt {attempt + 1}/{MAX_CHUNK_DOWNLOAD_RETRIES + 1})...")
                    time.sleep(CHUNK_DOWNLOAD_RETRY_DELAY * (2 ** (attempt - 1)))
                    last_speed_calc_time = time.time()
                    bytes_at_last_speed_calc = bytes_this_chunk
                logger_func(f"   🚀 [Chunk {part_num + 1}/{total_parts}] Starting download: bytes {start_byte}-{end_byte if end_byte != -1 else 'EOF'}")
                response = requests.get(chunk_url, headers=chunk_headers, timeout=(10, 120), stream=True, cookies=cookies_for_chunk)
                response.raise_for_status()
# --- Data Writing Loop ---
# We open the unique chunk file in write-binary ('wb') mode.
# No more seeking is required.
with open(chunk_temp_file_path, 'wb') as f:
for data_segment in response.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE_ITER):
if cancellation_event and cancellation_event.is_set():
return bytes_this_chunk, False
if pause_event and pause_event.is_set():
# Handle pausing during the download stream
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Paused...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set(): return bytes_this_chunk, False
time.sleep(0.2)
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Resumed.")
if data_segment:
f.write(data_segment)
bytes_this_chunk += len(data_segment)
# Update shared progress data structure
with progress_data['lock']:
progress_data['total_downloaded_so_far'] += len(data_segment)
progress_data['chunks_status'][part_num]['downloaded'] = bytes_this_chunk
# Calculate and update speed for this chunk
current_time = time.time()
time_delta = current_time - last_speed_calc_time
if time_delta > 0.5:
bytes_delta = bytes_this_chunk - bytes_at_last_speed_calc
current_speed_bps = (bytes_delta * 8) / time_delta if time_delta > 0 else 0
progress_data['chunks_status'][part_num]['speed_bps'] = current_speed_bps
last_speed_calc_time = current_time
bytes_at_last_speed_calc = bytes_this_chunk
# Emit progress signal to the UI via the queue
if emitter and (current_time - global_emit_time_ref[0] > 0.25):
global_emit_time_ref[0] = current_time
status_list_copy = [dict(s) for s in progress_data['chunks_status']]
if isinstance(emitter, queue.Queue):
emitter.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
elif hasattr(emitter, 'file_progress_signal'):
emitter.file_progress_signal.emit(api_original_filename, status_list_copy)
# If we get here, the download for this chunk is successful
return bytes_this_chunk, True
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout, http.client.IncompleteRead) as e:
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Retryable error: {e}")
except requests.exceptions.RequestException as e:
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Non-retryable error: {e}")
return bytes_this_chunk, False # Break loop on non-retryable errors
except Exception as e:
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Unexpected error: {e}\n{traceback.format_exc(limit=1)}")
return bytes_this_chunk, False
# If the retry loop finishes without a successful download
return bytes_this_chunk, False
finally:
# This block runs whether the download succeeded or failed
with progress_data['lock']:
progress_data['chunks_status'][part_num]['active'] = False
progress_data['chunks_status'][part_num]['speed_bps'] = 0.0
def download_file_in_parts(file_url, save_path, total_size, num_parts, headers, api_original_filename,
emitter_for_multipart, cookies_for_chunk_session,
cancellation_event, skip_event, logger_func, pause_event):
logger_func(f"⬇️ Initializing Multi-part Download ({num_parts} parts) for: '{api_original_filename}' (Size: {total_size / (1024*1024):.2f} MB)")
temp_file_path = save_path + ".part"
"""
Manages a resilient, multipart file download by saving each chunk to a separate file.
try:
with open(temp_file_path, 'wb') as f_temp:
if total_size > 0:
f_temp.truncate(total_size)
except IOError as e:
logger_func(f" ❌ Error creating/truncating temp file '{temp_file_path}': {e}")
return False, 0, None, None
This function orchestrates the download process by:
1. Checking for already completed chunk files to resume a previous download.
2. Submitting only the missing chunks to a thread pool for parallel download.
3. Assembling the final file from the individual chunks upon successful completion.
4. Cleaning up temporary chunk files after assembly.
5. Leaving completed chunks on disk if the download fails, allowing for a future resume.
Args:
file_url (str): The URL of the file to download.
save_path (str): The final desired path for the downloaded file (e.g., 'my_video.mp4').
total_size (int): The total size of the file in bytes.
num_parts (int): The number of parts to split the download into.
headers (dict): HTTP headers for the download requests.
api_original_filename (str): The original filename for UI progress display.
emitter_for_multipart (queue.Queue or QObject): Emitter for UI signals.
cookies_for_chunk_session (dict): Cookies for the download requests.
cancellation_event (threading.Event): Event to signal cancellation.
skip_event (threading.Event): Event to signal skipping the file.
logger_func (function): A function for logging messages.
pause_event (threading.Event): Event to signal pausing the download.
Returns:
tuple: A tuple containing (success_flag, total_bytes_downloaded, md5_hash, file_handle).
The file_handle will be for the final assembled file if successful, otherwise None.
"""
logger_func(f"⬇️ Initializing Resumable Multi-part Download ({num_parts} parts) for: '{api_original_filename}' (Size: {total_size / (1024*1024):.2f} MB)")
# Calculate the byte range for each chunk
chunk_size_calc = total_size // num_parts
chunks_ranges = []
for i in range(num_parts):
@@ -153,76 +207,119 @@ def download_file_in_parts(file_url, save_path, total_size, num_parts, headers,
end = start + chunk_size_calc - 1 if i < num_parts - 1 else total_size - 1
if start <= end:
chunks_ranges.append((start, end))
        elif total_size == 0 and i == 0:  # Handle zero-byte files
chunks_ranges.append((0, -1))
# Calculate the expected size of each chunk
chunk_actual_sizes = []
for start, end in chunks_ranges:
        chunk_actual_sizes.append(end - start + 1 if end != -1 else 0)
if not chunks_ranges and total_size > 0:
logger_func(f" ⚠️ No valid chunk ranges for multipart download of '{api_original_filename}'. Aborting multipart.")
if os.path.exists(temp_file_path): os.remove(temp_file_path)
logger_func(f" ⚠️ No valid chunk ranges for multipart download of '{api_original_filename}'. Aborting.")
return False, 0, None, None
# --- Resumption Logic: Check for existing complete chunks ---
chunks_to_download = []
total_bytes_resumed = 0
for i, (start, end) in enumerate(chunks_ranges):
chunk_part_path = f"{save_path}.part{i}"
expected_chunk_size = chunk_actual_sizes[i]
if os.path.exists(chunk_part_path) and os.path.getsize(chunk_part_path) == expected_chunk_size:
logger_func(f" [Chunk {i + 1}/{num_parts}] Resuming with existing complete chunk file.")
total_bytes_resumed += expected_chunk_size
else:
chunks_to_download.append({'index': i, 'start': start, 'end': end})
# Setup the shared progress data structure
progress_data = {
'total_file_size': total_size,
'total_downloaded_so_far': total_bytes_resumed,
'chunks_status': [],
'lock': threading.Lock(),
'last_global_emit_time': [time.time()]
}
for i in range(num_parts):
is_resumed = not any(c['index'] == i for c in chunks_to_download)
progress_data['chunks_status'].append({
'id': i,
'downloaded': chunk_actual_sizes[i] if is_resumed else 0,
'total': chunk_actual_sizes[i],
'active': False,
'speed_bps': 0.0
})
# --- Download Phase ---
chunk_futures = []
all_chunks_successful = True
total_bytes_from_threads = 0
with ThreadPoolExecutor(max_workers=num_parts, thread_name_prefix=f"MPChunk_{api_original_filename[:10]}_") as chunk_pool:
for chunk_info in chunks_to_download:
if cancellation_event and cancellation_event.is_set():
all_chunks_successful = False
break
i, start, end = chunk_info['index'], chunk_info['start'], chunk_info['end']
chunk_part_path = f"{save_path}.part{i}"
future = chunk_pool.submit(
_download_individual_chunk,
chunk_url=file_url,
chunk_temp_file_path=chunk_part_path,
start_byte=start, end_byte=end, headers=headers, part_num=i, total_parts=num_parts,
progress_data=progress_data, cancellation_event=cancellation_event,
skip_event=skip_event, global_emit_time_ref=progress_data['last_global_emit_time'],
pause_event=pause_event, cookies_for_chunk=cookies_for_chunk_session,
logger_func=logger_func, emitter=emitter_for_multipart,
api_original_filename=api_original_filename
            )
chunk_futures.append(future)
for future in as_completed(chunk_futures):
            if cancellation_event and cancellation_event.is_set():
                all_chunks_successful = False
                break
            bytes_downloaded, success = future.result()
            total_bytes_from_threads += bytes_downloaded
            if not success:
                all_chunks_successful = False
total_bytes_final = total_bytes_resumed + total_bytes_from_threads
if cancellation_event and cancellation_event.is_set():
logger_func(f" Multi-part download for '{api_original_filename}' cancelled by main event.")
all_chunks_successful = False
if emitter_for_multipart:
with progress_data['lock']:
status_list_copy = [dict(s) for s in progress_data['chunks_status']]
if isinstance(emitter_for_multipart, queue.Queue):
emitter_for_multipart.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
elif hasattr(emitter_for_multipart, 'file_progress_signal'):
emitter_for_multipart.file_progress_signal.emit(api_original_filename, status_list_copy)
# --- Assembly and Cleanup Phase ---
if all_chunks_successful and (total_bytes_final == total_size or total_size == 0):
logger_func(f" ✅ All {num_parts} chunks complete. Assembling final file...")
md5_hasher = hashlib.md5()
try:
with open(save_path, 'wb') as final_file:
for i in range(num_parts):
chunk_part_path = f"{save_path}.part{i}"
with open(chunk_part_path, 'rb') as chunk_file:
content = chunk_file.read()
final_file.write(content)
md5_hasher.update(content)
calculated_hash = md5_hasher.hexdigest()
logger_func(f" ✅ Assembly successful for '{api_original_filename}'. Total bytes: {total_bytes_final}")
return True, total_bytes_final, calculated_hash, open(save_path, 'rb')
except Exception as e:
logger_func(f" ❌ Critical error during file assembly: {e}. Cleaning up.")
return False, total_bytes_final, None, None
finally:
# Cleanup all individual chunk files after successful assembly
for i in range(num_parts):
chunk_part_path = f"{save_path}.part{i}"
if os.path.exists(chunk_part_path):
try:
os.remove(chunk_part_path)
except OSError as e:
logger_func(f" ⚠️ Failed to remove temp part file '{chunk_part_path}': {e}")
else:
logger_func(f" ❌ Multi-part download failed for '{api_original_filename}'. Success: {all_chunks_successful}, Bytes: {total_bytes_from_chunks}/{total_size}. Cleaning up.")
if os.path.exists(temp_file_path):
try: os.remove(temp_file_path)
except OSError as e: logger_func(f" Failed to remove temp part file '{temp_file_path}': {e}")
return False, total_bytes_from_chunks, None, None
# If download failed, we do NOT clean up, allowing for resumption later
logger_func(f" ❌ Multi-part download failed for '{api_original_filename}'. Success: {all_chunks_successful}, Bytes: {total_bytes_final}/{total_size}. Partial chunks saved for future resumption.")
return False, total_bytes_final, None, None
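The resume logic above rests on two small pieces of arithmetic: splitting the file into contiguous, inclusive byte ranges, and counting a .partN file as complete only when its on-disk size matches its range exactly. A standalone sketch of both, standard library only (function names are illustrative):

import os

def plan_chunks(total_size, num_parts):
    """Split [0, total_size) into inclusive (start, end) byte ranges,
    matching the Range-header convention used by the downloader above."""
    chunk = total_size // num_parts
    ranges = []
    for i in range(num_parts):
        start = i * chunk
        end = start + chunk - 1 if i < num_parts - 1 else total_size - 1
        if start <= end:
            ranges.append((start, end))
    return ranges

def missing_parts(save_path, ranges):
    """Return indices of .partN files that are absent or incomplete,
    i.e. the chunks a resumed download still needs to fetch."""
    todo = []
    for i, (start, end) in enumerate(ranges):
        part_path = f"{save_path}.part{i}"
        expected = end - start + 1
        if not (os.path.exists(part_path) and os.path.getsize(part_path) == expected):
            todo.append(i)
    return todo

# A 10 MB file in 4 parts -> [(0, 2621439), (2621440, 5242879), ...]
print(plan_chunks(10 * 1024 * 1024, 4))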

src/services/updater.py (new file, 120 lines)

@@ -0,0 +1,120 @@
import sys
import os
import requests
import subprocess # Keep this for now, though it's not used in the final command
from packaging.version import parse as parse_version
from PyQt5.QtCore import QThread, pyqtSignal
# Constants for the updater
GITHUB_REPO_URL = "https://api.github.com/repos/Yuvi63771/Kemono-Downloader/releases/latest"
EXE_NAME = "Kemono.Downloader.exe"
class UpdateChecker(QThread):
"""Checks for a new version on GitHub in a background thread."""
update_available = pyqtSignal(str, str) # new_version, download_url
up_to_date = pyqtSignal(str)
update_error = pyqtSignal(str)
def __init__(self, current_version):
super().__init__()
self.current_version_str = current_version.lstrip('v')
def run(self):
try:
response = requests.get(GITHUB_REPO_URL, timeout=15)
response.raise_for_status()
data = response.json()
latest_version_str = data['tag_name'].lstrip('v')
current_version = parse_version(self.current_version_str)
latest_version = parse_version(latest_version_str)
if latest_version > current_version:
for asset in data.get('assets', []):
if asset['name'] == EXE_NAME:
self.update_available.emit(latest_version_str, asset['browser_download_url'])
return
self.update_error.emit(f"Update found, but '{EXE_NAME}' is missing from the release assets.")
else:
self.up_to_date.emit("You are on the latest version.")
except requests.exceptions.RequestException as e:
self.update_error.emit(f"Network error: {e}")
except Exception as e:
self.update_error.emit(f"An error occurred: {e}")
class UpdateDownloader(QThread):
"""
Downloads the new executable and runs an updater script that kills the old process,
replaces the file, and displays a message in the terminal.
"""
download_finished = pyqtSignal()
download_error = pyqtSignal(str)
def __init__(self, download_url, parent_app):
super().__init__()
self.download_url = download_url
self.parent_app = parent_app
def run(self):
try:
app_path = sys.executable
app_dir = os.path.dirname(app_path)
temp_path = os.path.join(app_dir, f"{EXE_NAME}.tmp")
old_path = os.path.join(app_dir, f"{EXE_NAME}.old")
updater_script_path = os.path.join(app_dir, "updater.bat")
pid_file_path = os.path.join(app_dir, "updater.pid")
with requests.get(self.download_url, stream=True, timeout=300) as r:
r.raise_for_status()
with open(temp_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
with open(pid_file_path, "w") as f:
f.write(str(os.getpid()))
script_content = f"""
@echo off
SETLOCAL
echo.
echo Reading process information...
set /p PID=<{pid_file_path}
echo Closing the old application (PID: %PID%)...
taskkill /F /PID %PID%
echo Waiting for files to unlock...
timeout /t 2 /nobreak > nul
echo Replacing application files...
if exist "{old_path}" del /F /Q "{old_path}"
rename "{app_path}" "{os.path.basename(old_path)}"
rename "{temp_path}" "{EXE_NAME}"
echo.
echo ============================================================
echo Update Complete!
echo You can now close this window and run {EXE_NAME}.
echo ============================================================
echo.
pause
echo Cleaning up helper files...
del "{pid_file_path}"
del "%~f0"
ENDLOCAL
"""
with open(updater_script_path, "w") as f:
f.write(script_content)
# --- Go back to the os.startfile command that we know works ---
os.startfile(updater_script_path)
self.download_finished.emit()
except Exception as e:
self.download_error.emit(f"Failed to download or run updater: {e}")


@@ -1,8 +1,5 @@
# --- Standard Library Imports ---
import os
import sys
# --- PyQt5 Imports ---
from PyQt5.QtGui import QIcon
_app_icon_cache = None


@@ -0,0 +1,137 @@
import os
import threading
import time
from urllib.parse import urlparse
import cloudscraper
import requests
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.allcomic_client import (fetch_chapter_data as allcomic_fetch_data,
get_chapter_list as allcomic_get_list)
from ...utils.file_utils import clean_folder_name
class AllcomicDownloadThread(QThread):
"""A dedicated QThread for handling allcomic.com downloads."""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool)
overall_progress_signal = pyqtSignal(int, int)
def __init__(self, url, output_dir, parent=None):
super().__init__(parent)
self.comic_url = url
self.output_dir = output_dir
self.is_cancelled = False
self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()
def _check_pause(self):
if self.is_cancelled: return True
if self.pause_event and self.pause_event.is_set():
self.progress_signal.emit(" Download paused...")
while self.pause_event.is_set():
if self.is_cancelled: return True
time.sleep(0.5)
self.progress_signal.emit(" Download resumed.")
return self.is_cancelled
def run(self):
grand_total_dl = 0
grand_total_skip = 0
# Create the scraper session ONCE for the entire job
scraper = cloudscraper.create_scraper(
browser={'browser': 'firefox', 'platform': 'windows', 'desktop': True}
)
# Pass the scraper to the function
chapters_to_download = allcomic_get_list(scraper, self.comic_url, self.progress_signal.emit)
if not chapters_to_download:
chapters_to_download = [self.comic_url]
self.progress_signal.emit(f"--- Starting download of {len(chapters_to_download)} chapter(s) ---")
for chapter_idx, chapter_url in enumerate(chapters_to_download):
if self._check_pause(): break
self.progress_signal.emit(f"\n-- Processing Chapter {chapter_idx + 1}/{len(chapters_to_download)} --")
# Pass the scraper to the function
comic_title, chapter_title, image_urls = allcomic_fetch_data(scraper, chapter_url, self.progress_signal.emit)
if not image_urls:
self.progress_signal.emit(f"❌ Failed to get data for chapter. Skipping.")
continue
series_folder_name = clean_folder_name(comic_title)
chapter_folder_name = clean_folder_name(chapter_title)
final_save_path = os.path.join(self.output_dir, series_folder_name, chapter_folder_name)
try:
os.makedirs(final_save_path, exist_ok=True)
self.progress_signal.emit(f" Saving to folder: '{os.path.join(series_folder_name, chapter_folder_name)}'")
except OSError as e:
self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
grand_total_skip += len(image_urls)
continue
total_files_in_chapter = len(image_urls)
self.overall_progress_signal.emit(total_files_in_chapter, 0)
headers = {'Referer': chapter_url}
for i, img_url in enumerate(image_urls):
if self._check_pause(): break
file_extension = os.path.splitext(urlparse(img_url).path)[1] or '.jpg'
filename = f"{i+1:03d}{file_extension}"
filepath = os.path.join(final_save_path, filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip ({i+1}/{total_files_in_chapter}): '{filename}' already exists.")
grand_total_skip += 1
else:
download_successful = False
max_retries = 8
for attempt in range(max_retries):
if self._check_pause(): break
try:
self.progress_signal.emit(f" Downloading ({i+1}/{total_files_in_chapter}): '{filename}' (Attempt {attempt + 1})...")
# Use the persistent scraper object
response = scraper.get(img_url, stream=True, headers=headers, timeout=60)
response.raise_for_status()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self._check_pause(): break
f.write(chunk)
if self._check_pause():
if os.path.exists(filepath): os.remove(filepath)
break
download_successful = True
grand_total_dl += 1
break
except requests.RequestException as e:
self.progress_signal.emit(f" ⚠️ Attempt {attempt + 1} failed for '{filename}': {e}")
if attempt < max_retries - 1:
wait_time = 2 * (attempt + 1)
self.progress_signal.emit(f" Retrying in {wait_time} seconds...")
time.sleep(wait_time)
else:
self.progress_signal.emit(f" ❌ All attempts failed for '{filename}'. Skipping.")
grand_total_skip += 1
self.overall_progress_signal.emit(total_files_in_chapter, i + 1)
time.sleep(0.5) # Increased delay between images for this site
if self._check_pause(): break
self.file_progress_signal.emit("", None)
self.finished_signal.emit(grand_total_dl, grand_total_skip, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by AllComic thread.")


@@ -0,0 +1,133 @@
import os
import threading
import time
import datetime
import requests
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.booru_client import fetch_booru_data, BooruClientException
from ...utils.file_utils import clean_folder_name
_ff_ver = (datetime.date.today().toordinal() - 735506) // 28
USERAGENT_FIREFOX = (f"Mozilla/5.0 (Windows NT 10.0; Win64; x64; "
f"rv:{_ff_ver}.0) Gecko/20100101 Firefox/{_ff_ver}.0")
class BooruDownloadThread(QThread):
"""A dedicated QThread for handling Danbooru and Gelbooru downloads."""
progress_signal = pyqtSignal(str)
overall_progress_signal = pyqtSignal(int, int)
finished_signal = pyqtSignal(int, int, bool) # dl_count, skip_count, cancelled
def __init__(self, url, output_dir, api_key, user_id, parent=None):
super().__init__(parent)
self.booru_url = url
self.output_dir = output_dir
self.api_key = api_key
self.user_id = user_id
self.is_cancelled = False
self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()
def run(self):
download_count = 0
skip_count = 0
processed_count = 0
cumulative_total = 0
def logger(msg):
self.progress_signal.emit(str(msg))
try:
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Booru Download for: {self.booru_url}")
item_generator = fetch_booru_data(self.booru_url, self.api_key, self.user_id, logger)
download_path = self.output_dir # Default path
path_initialized = False
session = requests.Session()
session.headers["User-Agent"] = USERAGENT_FIREFOX
for item in item_generator:
if self.is_cancelled:
break
if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
newly_found = item[1]
cumulative_total += newly_found
self.progress_signal.emit(f" Found {newly_found} more posts. Total so far: {cumulative_total}")
self.overall_progress_signal.emit(cumulative_total, processed_count)
continue
post_data = item
processed_count += 1
if not path_initialized:
base_folder_name = post_data.get('search_tags', 'booru_download')
download_path = os.path.join(self.output_dir, clean_folder_name(base_folder_name))
os.makedirs(download_path, exist_ok=True)
path_initialized = True
if self.pause_event.is_set():
self.progress_signal.emit(" Download paused...")
while self.pause_event.is_set():
if self.is_cancelled: break
time.sleep(0.5)
if self.is_cancelled: break
self.progress_signal.emit(" Download resumed.")
file_url = post_data.get('file_url')
if not file_url:
skip_count += 1
self.progress_signal.emit(f" -> Skip ({processed_count}/{cumulative_total}): Post ID {post_data.get('id')} has no file URL.")
continue
cat = post_data.get('category', 'booru')
post_id = post_data.get('id', 'unknown')
md5 = post_data.get('md5', '')
fname = post_data.get('filename', f"file_{post_id}")
ext = post_data.get('extension', 'jpg')
final_filename = f"{cat}_{post_id}_{md5 or fname}.{ext}"
filepath = os.path.join(download_path, final_filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip ({processed_count}/{cumulative_total}): '{final_filename}' already exists.")
skip_count += 1
else:
try:
self.progress_signal.emit(f" Downloading ({processed_count}/{cumulative_total}): '{final_filename}'...")
response = session.get(file_url, stream=True, timeout=60)
response.raise_for_status()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self.is_cancelled: break
f.write(chunk)
if not self.is_cancelled:
download_count += 1
else:
if os.path.exists(filepath): os.remove(filepath)
skip_count += 1
except Exception as e:
self.progress_signal.emit(f" ❌ Failed to download '{final_filename}': {e}")
skip_count += 1
self.overall_progress_signal.emit(cumulative_total, processed_count)
time.sleep(0.2)
if not path_initialized:
self.progress_signal.emit("No posts found for the given URL/tags.")
except BooruClientException as e:
self.progress_signal.emit(f"❌ A Booru client error occurred: {e}")
except Exception as e:
self.progress_signal.emit(f"❌ An unexpected error occurred in Booru thread: {e}")
finally:
self.finished_signal.emit(download_count, skip_count, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Booru thread.")


@@ -0,0 +1,195 @@
import os
import re
import time
import requests
import threading
from concurrent.futures import ThreadPoolExecutor
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.bunkr_client import fetch_bunkr_data
# Define image extensions
IMG_EXTS = ('.jpg', '.jpeg', '.png', '.gif', '.webp', '.bmp', '.avif')
BUNKR_IMG_THREADS = 6 # Hardcoded thread count for images
class BunkrDownloadThread(QThread):
"""A dedicated QThread for handling Bunkr downloads."""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool, list)
def __init__(self, url, output_dir, parent=None):
super().__init__(parent)
self.bunkr_url = url
self.output_dir = output_dir
self.is_cancelled = False
# --- NEW: Threading members ---
self.lock = threading.Lock()
self.download_count = 0
self.skip_count = 0
self.file_index = 0 # Use a shared index for logging
class ThreadLogger:
def __init__(self, signal_emitter):
self.signal_emitter = signal_emitter
def info(self, msg, *args, **kwargs):
self.signal_emitter.emit(str(msg))
def error(self, msg, *args, **kwargs):
self.signal_emitter.emit(f"❌ ERROR: {msg}")
def warning(self, msg, *args, **kwargs):
self.signal_emitter.emit(f"⚠️ WARNING: {msg}")
def debug(self, msg, *args, **kwargs):
pass
self.logger = ThreadLogger(self.progress_signal)
def _download_file(self, file_data, total_files, album_path, is_image_task=False):
"""
A thread-safe method to download a single file.
This function will be called by the main thread (for videos)
and worker threads (for images).
"""
# Stop if a cancellation signal was received before starting
if self.is_cancelled:
return
# --- Thread-safe index for logging ---
with self.lock:
self.file_index += 1
current_file_num = self.file_index
try:
filename = file_data.get('name', 'untitled_file')
file_url = file_data.get('url')
headers = file_data.get('_http_headers')
filename = re.sub(r'[<>:"/\\|?*]', '_', filename).strip()
filepath = os.path.join(album_path, filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip ({current_file_num}/{total_files}): '{filename}' already exists.")
with self.lock:
self.skip_count += 1
return
self.progress_signal.emit(f" Downloading ({current_file_num}/{total_files}): '{filename}'...")
response = requests.get(file_url, stream=True, headers=headers, timeout=60)
response.raise_for_status()
total_size = int(response.headers.get('content-length', 0))
downloaded_size = 0
last_update_time = time.time()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self.is_cancelled:
break
if chunk:
f.write(chunk)
downloaded_size += len(chunk)
# For videos/other files, send frequent progress
# For images, don't send progress to avoid UI flicker
if not is_image_task:
current_time = time.time()
if total_size > 0 and (current_time - last_update_time) > 0.5:
self.file_progress_signal.emit(filename, (downloaded_size, total_size))
last_update_time = current_time
if self.is_cancelled:
self.progress_signal.emit(f" Download cancelled for '{filename}'.")
if os.path.exists(filepath): os.remove(filepath)
return
if total_size > 0:
self.file_progress_signal.emit(filename, (total_size, total_size))
with self.lock:
self.download_count += 1
except requests.exceptions.RequestException as e:
self.progress_signal.emit(f" ❌ Failed to download '{filename}'. Error: {e}")
if os.path.exists(filepath): os.remove(filepath)
with self.lock:
self.skip_count += 1
except Exception as e:
self.progress_signal.emit(f" ❌ An unexpected error occurred with '{filename}': {e}")
if os.path.exists(filepath): os.remove(filepath)
with self.lock:
self.skip_count += 1
def run(self):
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Bunkr Download for: {self.bunkr_url}")
album_name, files_to_download = fetch_bunkr_data(self.bunkr_url, self.logger)
if not files_to_download:
self.progress_signal.emit("❌ Failed to extract file information from Bunkr. Aborting.")
self.finished_signal.emit(0, 0, self.is_cancelled, [])
return
album_path = os.path.join(self.output_dir, album_name)
try:
os.makedirs(album_path, exist_ok=True)
self.progress_signal.emit(f" Saving to folder: '{album_path}'")
except OSError as e:
self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
self.finished_signal.emit(0, len(files_to_download), self.is_cancelled, [])
return
total_files = len(files_to_download)
# --- NEW: Separate files into images and others ---
image_files = []
other_files = []
for f in files_to_download:
name = f.get('name', '').lower()
if name.endswith(IMG_EXTS):
image_files.append(f)
else:
other_files.append(f)
self.progress_signal.emit(f" Found {len(image_files)} images and {len(other_files)} other files.")
# --- 1. Process videos and other files sequentially (one by one) ---
if other_files:
self.progress_signal.emit(f" Downloading {len(other_files)} videos/other files sequentially...")
for file_data in other_files:
if self.is_cancelled:
break
# Call the new download helper method
self._download_file(file_data, total_files, album_path, is_image_task=False)
# --- 2. Process images concurrently using a fixed 6-thread pool ---
if image_files and not self.is_cancelled:
self.progress_signal.emit(f" Downloading {len(image_files)} images concurrently ({BUNKR_IMG_THREADS} threads)...")
with ThreadPoolExecutor(max_workers=BUNKR_IMG_THREADS, thread_name_prefix='BunkrImg') as executor:
# Submit all image download tasks
futures = {executor.submit(self._download_file, file_data, total_files, album_path, is_image_task=True): file_data for file_data in image_files}
try:
# Wait for tasks to complete, but check for cancellation
for future in futures:
if self.is_cancelled:
future.cancel() # Try to cancel running/pending tasks
else:
future.result() # Wait for the task to finish (or raise exception)
except Exception as e:
self.progress_signal.emit(f" ❌ A thread pool error occurred: {e}")
if self.is_cancelled:
self.progress_signal.emit(" Download cancelled by user.")
# Update skip count to reflect all non-downloaded files
self.skip_count = total_files - self.download_count
self.file_progress_signal.emit("", None) # Clear file progress
self.finished_signal.emit(self.download_count, self.skip_count, self.is_cancelled, [])
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Bunkr thread.")


@@ -0,0 +1,189 @@
import os
import time
import datetime
import requests
from PyQt5.QtCore import QThread, pyqtSignal
# Assuming discord_pdf_generator is in the dialogs folder, sibling to the classes folder
from ..dialogs.discord_pdf_generator import create_pdf_from_discord_messages
# This constant is needed for the thread to function independently
_ff_ver = (datetime.date.today().toordinal() - 735506) // 28
USERAGENT_FIREFOX = (f"Mozilla/5.0 (Windows NT 10.0; Win64; x64; "
f"rv:{_ff_ver}.0) Gecko/20100101 Firefox/{_ff_ver}.0")
class DiscordDownloadThread(QThread):
"""A dedicated QThread for handling all official Discord downloads."""
progress_signal = pyqtSignal(str)
progress_label_signal = pyqtSignal(str)
finished_signal = pyqtSignal(int, int, bool, list)
def __init__(self, mode, session, token, output_dir, server_id, channel_id, url, app_base_dir, limit=None, parent=None):
super().__init__(parent)
self.mode = mode
self.session = session
self.token = token
self.output_dir = output_dir
self.server_id = server_id
self.channel_id = channel_id
self.api_url = url
self.message_limit = limit
self.app_base_dir = app_base_dir # Path to app's base directory
self.is_cancelled = False
self.is_paused = False
def run(self):
if self.mode == 'pdf':
self._run_pdf_creation()
else:
self._run_file_download()
def cancel(self):
self.progress_signal.emit(" Cancellation signal received by Discord thread.")
self.is_cancelled = True
def pause(self):
self.progress_signal.emit(" Pausing Discord download...")
self.is_paused = True
def resume(self):
self.progress_signal.emit(" Resuming Discord download...")
self.is_paused = False
def _check_events(self):
if self.is_cancelled:
return True
while self.is_paused:
time.sleep(0.5)
if self.is_cancelled:
return True
return False
def _fetch_all_messages(self):
all_messages = []
last_message_id = None
headers = {'Authorization': self.token, 'User-Agent': USERAGENT_FIREFOX}
while True:
if self._check_events(): break
endpoint = f"/channels/{self.channel_id}/messages?limit=100"
if last_message_id:
endpoint += f"&before={last_message_id}"
try:
resp = self.session.get(f"https://discord.com/api/v10{endpoint}", headers=headers, timeout=30)
resp.raise_for_status()
message_batch = resp.json()
except Exception as e:
self.progress_signal.emit(f" ❌ Error fetching message batch: {e}")
break
if not message_batch:
break
all_messages.extend(message_batch)
if self.message_limit and len(all_messages) >= self.message_limit:
self.progress_signal.emit(f" Reached message limit of {self.message_limit}. Halting fetch.")
all_messages = all_messages[:self.message_limit]
break
last_message_id = message_batch[-1]['id']
self.progress_label_signal.emit(f"Fetched {len(all_messages)} messages...")
time.sleep(1) # API Rate Limiting
return all_messages
def _run_pdf_creation(self):
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Discord PDF export for: {self.api_url}")
self.progress_label_signal.emit("Fetching messages...")
all_messages = self._fetch_all_messages()
if self.is_cancelled:
self.finished_signal.emit(0, 0, True, [])
return
self.progress_label_signal.emit(f"Collected {len(all_messages)} total messages. Generating PDF...")
all_messages.reverse()
font_path = os.path.join(self.app_base_dir, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
output_filepath = os.path.join(self.output_dir, f"discord_{self.server_id}_{self.channel_id or 'server'}.pdf")
success = create_pdf_from_discord_messages(
all_messages, self.server_id, self.channel_id,
output_filepath, font_path, logger=self.progress_signal.emit,
cancellation_event=self, pause_event=self
)
if success:
self.progress_label_signal.emit(f"✅ PDF export complete!")
elif not self.is_cancelled:
self.progress_label_signal.emit(f"❌ PDF export failed. Check log for details.")
self.finished_signal.emit(0, len(all_messages), self.is_cancelled, [])
def _run_file_download(self):
download_count = 0
skip_count = 0
try:
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Discord download for channel: {self.channel_id}")
self.progress_label_signal.emit("Fetching messages...")
all_messages = self._fetch_all_messages()
if self.is_cancelled:
self.finished_signal.emit(0, 0, True, [])
return
self.progress_label_signal.emit(f"Collected {len(all_messages)} messages. Starting downloads...")
total_attachments = sum(len(m.get('attachments', [])) for m in all_messages)
for message in reversed(all_messages):
if self._check_events(): break
for attachment in message.get('attachments', []):
if self._check_events(): break
file_url = attachment['url']
original_filename = attachment['filename']
filepath = os.path.join(self.output_dir, original_filename)
filename_to_use = original_filename
counter = 1
base_name, extension = os.path.splitext(original_filename)
while os.path.exists(filepath):
filename_to_use = f"{base_name} ({counter}){extension}"
filepath = os.path.join(self.output_dir, filename_to_use)
counter += 1
if filename_to_use != original_filename:
self.progress_signal.emit(f" -> Duplicate name '{original_filename}'. Saving as '{filename_to_use}'.")
try:
self.progress_signal.emit(f" Downloading ({download_count+1}/{total_attachments}): '{filename_to_use}'...")
response = requests.get(file_url, stream=True, timeout=60)
response.raise_for_status()
download_cancelled_mid_file = False
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self._check_events():
download_cancelled_mid_file = True
break
f.write(chunk)
if download_cancelled_mid_file:
self.progress_signal.emit(f" Download cancelled for '{filename_to_use}'. Deleting partial file.")
if os.path.exists(filepath):
os.remove(filepath)
continue
download_count += 1
except Exception as e:
self.progress_signal.emit(f" ❌ Failed to download '{filename_to_use}': {e}")
skip_count += 1
finally:
self.finished_signal.emit(download_count, skip_count, self.is_cancelled, [])
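_fetch_all_messages walks the channel newest-to-oldest, using the id of each batch's last message as a before cursor. The same pagination shape in isolation (token and channel ID are placeholders; the endpoint matches the one used above):

import time
import requests

def fetch_all_messages(session, token, channel_id, limit=None):
    """Page through a Discord channel via ?before=<message_id>."""
    messages, before = [], None
    headers = {'Authorization': token}
    while True:
        url = f"https://discord.com/api/v10/channels/{channel_id}/messages?limit=100"
        if before:
            url += f"&before={before}"
        batch = session.get(url, headers=headers, timeout=30).json()
        if not batch:
            return messages
        messages.extend(batch)
        if limit and len(messages) >= limit:
            return messages[:limit]
        before = batch[-1]['id']  # oldest message in this batch
        time.sleep(1)             # crude rate limiting, as in the thread above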


@@ -0,0 +1,183 @@
import re
import requests
from urllib.parse import urlparse
# Utility Imports
from ...utils.network_utils import prepare_cookies_for_request
from ...utils.file_utils import clean_folder_name
# Downloader Thread Imports (Alphabetical Order Recommended)
from .allcomic_downloader_thread import AllcomicDownloadThread
from .booru_downloader_thread import BooruDownloadThread
from .bunkr_downloader_thread import BunkrDownloadThread
from .discord_downloader_thread import DiscordDownloadThread # Official Discord
from .drive_downloader_thread import DriveDownloadThread
from .erome_downloader_thread import EromeDownloadThread
from .external_link_downloader_thread import ExternalLinkDownloadThread
from .fap_nation_downloader_thread import FapNationDownloadThread
from .hentai2read_downloader_thread import Hentai2readDownloadThread
from .kemono_discord_downloader_thread import KemonoDiscordDownloadThread
from .mangadex_downloader_thread import MangaDexDownloadThread
from .nhentai_downloader_thread import NhentaiDownloadThread
from .pixeldrain_downloader_thread import PixeldrainDownloadThread
from .rule34video_downloader_thread import Rule34VideoDownloadThread
from .saint2_downloader_thread import Saint2DownloadThread
from .simp_city_downloader_thread import SimpCityDownloadThread
from .toonily_downloader_thread import ToonilyDownloadThread
def create_downloader_thread(main_app, api_url, service, id1, id2, effective_output_dir_for_run):
"""
Factory function to create and configure the correct QThread for a given URL.
Returns a configured QThread instance, a specific error string ("COOKIE_ERROR", "FETCH_ERROR"),
or None if no special handler is found (indicating fallback to generic BackendDownloadThread).
"""
# Handler for Booru sites (Danbooru, Gelbooru)
if service in ['danbooru', 'gelbooru']:
api_key = main_app.api_key_input.text().strip()
user_id = main_app.user_id_input.text().strip()
return BooruDownloadThread(
url=api_url, output_dir=effective_output_dir_for_run,
api_key=api_key, user_id=user_id, parent=main_app
)
# Handler for cloud storage sites (Mega, GDrive, Dropbox, GoFile)
platform = None
if 'mega.nz' in api_url or 'mega.io' in api_url: platform = 'mega'
elif 'drive.google.com' in api_url: platform = 'gdrive'
elif 'dropbox.com' in api_url: platform = 'dropbox'
elif 'gofile.io' in api_url: platform = 'gofile'
if platform:
use_post_subfolder = main_app.use_subfolder_per_post_checkbox.isChecked()
return DriveDownloadThread(
api_url, effective_output_dir_for_run, platform, use_post_subfolder,
main_app.cancellation_event, main_app.pause_event, main_app.log_signal.emit,
parent=main_app # Pass parent for consistency
)
# Handler for Erome
if 'erome.com' in api_url:
return EromeDownloadThread(api_url, effective_output_dir_for_run, main_app)
# Handler for MangaDex
if 'mangadex.org' in api_url:
return MangaDexDownloadThread(api_url, effective_output_dir_for_run, main_app)
# Handler for Saint2
is_saint2_url = service == 'saint2' or 'saint2.su' in api_url or 'saint2.pk' in api_url # Add more domains if needed
if is_saint2_url and api_url.strip().lower() != 'saint2.su': # Exclude batch mode trigger if using URL input
return Saint2DownloadThread(api_url, effective_output_dir_for_run, main_app)
# Handler for SimpCity
if service == 'simpcity':
cookies = prepare_cookies_for_request(
use_cookie_flag=True, # SimpCity requires cookies
cookie_text_input=main_app.simpcity_cookie_text_input.text(), # Use dedicated input
selected_cookie_file_path=main_app.selected_cookie_filepath, # Use shared selection
app_base_dir=main_app.app_base_dir,
logger_func=main_app.log_signal.emit,
target_domain='simpcity.cr' # Specific domain
)
if not cookies:
main_app.log_signal.emit("❌ SimpCity requires valid cookies. Please provide them.")
return "COOKIE_ERROR" # Sentinel value for cookie failure
return SimpCityDownloadThread(api_url, id2, effective_output_dir_for_run, cookies, main_app)
# Handler for Rule34Video
if service == 'rule34video':
main_app.log_signal.emit(" Rule34Video.com URL detected. Starting dedicated downloader.")
return Rule34VideoDownloadThread(api_url, effective_output_dir_for_run, main_app) # id1 (video_id) is used inside the thread
# HANDLER FOR KEMONO DISCORD (Place BEFORE official Discord)
elif service == 'discord' and any(domain in api_url for domain in ['kemono.cr', 'kemono.su', 'kemono.party']):
main_app.log_signal.emit(" Kemono Discord URL detected. Starting dedicated downloader.")
cookies = prepare_cookies_for_request(
use_cookie_flag=main_app.use_cookie_checkbox.isChecked(), # Respect UI setting
cookie_text_input=main_app.cookie_text_input.text(),
selected_cookie_file_path=main_app.selected_cookie_filepath,
app_base_dir=main_app.app_base_dir,
logger_func=main_app.log_signal.emit,
target_domain='kemono.cr' # Primary Kemono domain, adjust if needed
)
# KemonoDiscordDownloadThread expects parent for events
return KemonoDiscordDownloadThread(
server_id=id1,
channel_id=id2,
output_dir=effective_output_dir_for_run,
cookies_dict=cookies,
parent=main_app
)
# Handler for official Discord URLs
elif service == 'discord' and 'discord.com' in api_url:
main_app.log_signal.emit(" Official Discord URL detected. Starting dedicated downloader.")
token = main_app.remove_from_filename_input.text().strip() # Token is in the "Remove Words" field for Discord
if not token:
main_app.log_signal.emit("❌ Official Discord requires an Authorization Token in the 'Remove Words' field.")
return None # Or a specific error sentinel
limit_text = main_app.discord_message_limit_input.text().strip()
message_limit = int(limit_text) if limit_text.isdigit() else None
mode = main_app.discord_download_scope # Should be 'pdf' or 'files'
return DiscordDownloadThread(
mode=mode,
session=requests.Session(), # Create a session for this thread
token=token,
output_dir=effective_output_dir_for_run,
server_id=id1,
channel_id=id2,
url=api_url,
app_base_dir=main_app.app_base_dir,
limit=message_limit,
parent=main_app # Pass main_app for events/signals
)
# Check specific domains or rely on service name if extract_post_info provides it
if service == 'allcomic' or 'allcomic.com' in api_url or 'allporncomic.com' in api_url:
return AllcomicDownloadThread(api_url, effective_output_dir_for_run, main_app)
# Handler for Hentai2Read
if service == 'hentai2read' or 'hentai2read.com' in api_url:
return Hentai2readDownloadThread(api_url, effective_output_dir_for_run, main_app)
# Handler for Fap-Nation
if service == 'fap-nation' or 'fap-nation.com' in api_url or 'fap-nation.org' in api_url:
use_post_subfolder = main_app.use_subfolder_per_post_checkbox.isChecked()
# Ensure signals are passed correctly if needed by the thread
return FapNationDownloadThread(
api_url, effective_output_dir_for_run, use_post_subfolder,
main_app.pause_event, main_app.cancellation_event, main_app.actual_gui_signals, main_app
)
# Handler for Pixeldrain
if service == 'pixeldrain' or 'pixeldrain.com' in api_url:
return PixeldrainDownloadThread(api_url, effective_output_dir_for_run, main_app) # URL contains the ID
# Handler for nHentai
if service == 'nhentai':
from ...core.nhentai_client import fetch_nhentai_gallery
main_app.log_signal.emit(f" nHentai gallery ID {id1} detected. Fetching gallery data...")
gallery_data = fetch_nhentai_gallery(id1, main_app.log_signal.emit)
if not gallery_data:
main_app.log_signal.emit(f"❌ Failed to fetch nHentai gallery data for ID {id1}.")
return "FETCH_ERROR" # Sentinel value for fetch failure
return NhentaiDownloadThread(gallery_data, effective_output_dir_for_run, main_app)
# Handler for Toonily
if service == 'toonily' or 'toonily.com' in api_url:
return ToonilyDownloadThread(api_url, effective_output_dir_for_run, main_app)
# Handler for Bunkr
if service == 'bunkr':
# id1 contains the full URL or album ID from extract_post_info
return BunkrDownloadThread(id1, effective_output_dir_for_run, main_app)
# --- Fallback ---
# If no specific handler matched based on service name or URL pattern, return None.
# This signals main_window.py to use the generic BackendDownloadThread/PostProcessorWorker
# which uses the standard Kemono/Coomer post API.
main_app.log_signal.emit(f" No specialized downloader found for service '{service}' and URL '{api_url[:50]}...'. Using generic downloader.")
return None
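A caller therefore has to branch on four outcomes: a configured thread, either sentinel string, or None. A hedged sketch of that dispatch; only the BackendDownloadThread name comes from the docstring above, its constructor arguments here are assumptions:

def start_download(main_app, api_url, service, id1, id2, output_dir):
    thread = create_downloader_thread(main_app, api_url, service, id1, id2, output_dir)
    if thread in ("COOKIE_ERROR", "FETCH_ERROR"):
        return  # the factory already logged the reason; abort this run
    if thread is None:
        # No specialized handler: fall back to the generic post-API downloader.
        thread = BackendDownloadThread(main_app, api_url, output_dir)  # hypothetical signature
    main_app.download_thread = thread  # keep a reference so it is not collected
    thread.start()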


@@ -0,0 +1,77 @@
from PyQt5.QtCore import QThread, pyqtSignal
from ...services.drive_downloader import (
download_dropbox_file,
download_gdrive_file,
download_gofile_folder,
download_mega_file as drive_download_mega_file,
)
class DriveDownloadThread(QThread):
"""A dedicated QThread for handling direct Mega, GDrive, and Dropbox links."""
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool, list)
overall_progress_signal = pyqtSignal(int, int)
def __init__(self, url, output_dir, platform, use_post_subfolder, cancellation_event, pause_event, logger_func, parent=None):
super().__init__(parent)
self.drive_url = url
self.output_dir = output_dir
self.platform = platform
self.use_post_subfolder = use_post_subfolder
self.is_cancelled = False
self.cancellation_event = cancellation_event
self.pause_event = pause_event
self.logger_func = logger_func
def run(self):
self.logger_func("=" * 40)
self.logger_func(f"🚀 Starting direct {self.platform.capitalize()} Download for: {self.drive_url}")
try:
if self.platform == 'mega':
drive_download_mega_file(
self.drive_url, self.output_dir,
logger_func=self.logger_func,
progress_callback_func=self.file_progress_signal.emit,
overall_progress_callback=self.overall_progress_signal.emit,
cancellation_event=self.cancellation_event,
pause_event=self.pause_event
)
elif self.platform == 'gdrive':
download_gdrive_file(
self.drive_url, self.output_dir,
logger_func=self.logger_func,
progress_callback_func=self.file_progress_signal.emit,
overall_progress_callback=self.overall_progress_signal.emit,
use_post_subfolder=self.use_post_subfolder,
post_title="Google Drive Download"
)
elif self.platform == 'dropbox':
download_dropbox_file(
self.drive_url, self.output_dir,
logger_func=self.logger_func,
progress_callback_func=self.file_progress_signal.emit,
use_post_subfolder=self.use_post_subfolder,
post_title="Dropbox Download"
)
elif self.platform == 'gofile':
download_gofile_folder(
self.drive_url, self.output_dir,
logger_func=self.logger_func,
progress_callback_func=self.file_progress_signal.emit,
overall_progress_callback=self.overall_progress_signal.emit
)
self.finished_signal.emit(1, 0, self.is_cancelled, [])
except Exception as e:
self.logger_func(f"❌ An unexpected error occurred in DriveDownloadThread: {e}")
self.finished_signal.emit(0, 1, self.is_cancelled, [])
def cancel(self):
self.is_cancelled = True
if self.cancellation_event:
self.cancellation_event.set()
self.logger_func(f" Cancellation signal received by {self.platform.capitalize()} thread.")


@@ -0,0 +1,106 @@
import os
import time
import requests
import cloudscraper
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.erome_client import fetch_erome_data
class EromeDownloadThread(QThread):
"""A dedicated QThread for handling erome.com downloads."""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool) # dl_count, skip_count, cancelled
def __init__(self, url, output_dir, parent=None):
super().__init__(parent)
self.erome_url = url
self.output_dir = output_dir
self.is_cancelled = False
def run(self):
download_count = 0
skip_count = 0
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Erome.com Download for: {self.erome_url}")
album_name, files_to_download = fetch_erome_data(self.erome_url, self.progress_signal.emit)
if not files_to_download:
self.progress_signal.emit("❌ Failed to extract file information from Erome. Aborting.")
self.finished_signal.emit(0, 0, self.is_cancelled)
return
album_path = os.path.join(self.output_dir, album_name)
try:
os.makedirs(album_path, exist_ok=True)
self.progress_signal.emit(f" Saving to folder: '{album_path}'")
except OSError as e:
self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
self.finished_signal.emit(0, len(files_to_download), self.is_cancelled)
return
total_files = len(files_to_download)
session = cloudscraper.create_scraper()
for i, file_data in enumerate(files_to_download):
if self.is_cancelled:
self.progress_signal.emit(" Download cancelled by user.")
skip_count = total_files - download_count
break
filename = file_data.get('filename', f'untitled_{i+1}.mp4')
file_url = file_data.get('url')
headers = file_data.get('headers')
filepath = os.path.join(album_path, filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip ({i+1}/{total_files}): '{filename}' already exists.")
skip_count += 1
continue
self.progress_signal.emit(f" Downloading ({i+1}/{total_files}): '{filename}'...")
try:
response = session.get(file_url, stream=True, headers=headers, timeout=60)
response.raise_for_status()
total_size = int(response.headers.get('content-length', 0))
downloaded_size = 0
last_update_time = time.time()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self.is_cancelled:
break
if chunk:
f.write(chunk)
downloaded_size += len(chunk)
current_time = time.time()
if total_size > 0 and (current_time - last_update_time) > 0.5:
self.file_progress_signal.emit(filename, (downloaded_size, total_size))
last_update_time = current_time
if self.is_cancelled:
if os.path.exists(filepath): os.remove(filepath)
continue
if total_size > 0:
self.file_progress_signal.emit(filename, (total_size, total_size))
download_count += 1
except requests.exceptions.RequestException as e:
self.progress_signal.emit(f" ❌ Failed to download '{filename}'. Error: {e}")
if os.path.exists(filepath): os.remove(filepath)
skip_count += 1
except Exception as e:
self.progress_signal.emit(f" ❌ An unexpected error occurred with '{filename}': {e}")
if os.path.exists(filepath): os.remove(filepath)
skip_count += 1
self.file_progress_signal.emit("", None)
self.finished_signal.emit(download_count, skip_count, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Erome thread.")

View File

@@ -0,0 +1,86 @@
from PyQt5.QtCore import QThread, pyqtSignal
from ...services.drive_downloader import (
download_dropbox_file,
download_gdrive_file,
download_mega_file as drive_download_mega_file,
)
class ExternalLinkDownloadThread(QThread):
"""A QThread to handle downloading multiple external links sequentially."""
progress_signal = pyqtSignal(str)
file_complete_signal = pyqtSignal(str, bool)
finished_signal = pyqtSignal()
overall_progress_signal = pyqtSignal(int, int)
file_progress_signal = pyqtSignal(str, object)
def __init__(self, tasks_to_download, download_base_path, parent_logger_func, parent=None, use_post_subfolder=False):
super().__init__(parent)
self.tasks = tasks_to_download
self.download_base_path = download_base_path
self.parent_logger_func = parent_logger_func
self.is_cancelled = False
self.use_post_subfolder = use_post_subfolder
def run(self):
total_tasks = len(self.tasks)
self.progress_signal.emit(f" Starting external link download thread for {total_tasks} link(s).")
self.overall_progress_signal.emit(total_tasks, 0)
for i, task_info in enumerate(self.tasks):
if self.is_cancelled:
self.progress_signal.emit("External link download cancelled by user.")
break
            # Report the task now in flight (1-based) against the total.
            self.overall_progress_signal.emit(total_tasks, i + 1)
platform = task_info.get('platform', 'unknown').lower()
full_url = task_info['url']
post_title = task_info['title']
self.progress_signal.emit(f"Download ({i + 1}/{total_tasks}): Starting '{post_title}' ({platform.upper()}) from {full_url}")
try:
if platform == 'mega':
drive_download_mega_file(
full_url,
self.download_base_path,
logger_func=self.parent_logger_func,
progress_callback_func=self.file_progress_signal.emit,
overall_progress_callback=self.overall_progress_signal.emit
)
elif platform == 'google drive':
download_gdrive_file(
full_url,
self.download_base_path,
logger_func=self.parent_logger_func,
progress_callback_func=self.file_progress_signal.emit,
overall_progress_callback=self.overall_progress_signal.emit,
use_post_subfolder=self.use_post_subfolder,
post_title=post_title
)
elif platform == 'dropbox':
download_dropbox_file(
full_url,
self.download_base_path,
logger_func=self.parent_logger_func,
progress_callback_func=self.file_progress_signal.emit,
use_post_subfolder=self.use_post_subfolder,
post_title=post_title
)
else:
self.progress_signal.emit(f"⚠️ Unsupported platform '{platform}' for link: {full_url}")
self.file_complete_signal.emit(full_url, False)
continue
self.file_complete_signal.emit(full_url, True)
except Exception as e:
self.progress_signal.emit(f"❌ Error downloading ({platform.upper()}) link '{full_url}': {e}")
self.file_complete_signal.emit(full_url, False)
self.finished_signal.emit()
def cancel(self):
"""Sets the cancellation flag to stop the thread gracefully."""
self.progress_signal.emit(" [External Links] Cancellation signal received by thread.")
self.is_cancelled = True
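
The thread reads only three keys from each task dict: 'platform' (matched case-insensitively), 'url', and 'title'. A sketch of building the task list and launching the thread (URLs are placeholders; any str-accepting callable works as the logger):

import sys
from PyQt5.QtWidgets import QApplication

app = QApplication(sys.argv)
tasks = [
    {"platform": "Mega", "url": "https://mega.nz/file/EXAMPLE", "title": "Post A"},
    {"platform": "Google Drive", "url": "https://drive.google.com/file/d/EXAMPLE", "title": "Post B"},
    {"platform": "Dropbox", "url": "https://www.dropbox.com/s/EXAMPLE?dl=0", "title": "Post C"},
]
thread = ExternalLinkDownloadThread(
    tasks_to_download=tasks,
    download_base_path="downloads",
    parent_logger_func=print,
    use_post_subfolder=True,
)
thread.file_complete_signal.connect(
    lambda url, ok: print("OK " if ok else "FAIL", url))
thread.finished_signal.connect(app.quit)
thread.start()
sys.exit(app.exec_())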

View File

@@ -0,0 +1,162 @@
import os
import sys
import re
from PyQt5.QtCore import QThread, pyqtSignal, QProcess
import cloudscraper
from ...core.fap_nation_client import fetch_fap_nation_data
from ...services.multipart_downloader import download_file_in_parts
class FapNationDownloadThread(QThread):
"""
A dedicated QThread for Fap-Nation that uses a hybrid approach, choosing
between yt-dlp for HLS streams and a multipart downloader for direct links.
"""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool)
overall_progress_signal = pyqtSignal(int, int)
def __init__(self, url, output_dir, use_post_subfolder, pause_event, cancellation_event, gui_signals, parent=None):
super().__init__(parent)
self.album_url = url
self.output_dir = output_dir
self.use_post_subfolder = use_post_subfolder
self.is_cancelled = False
self.current_filename = "Unknown File"
self.album_name = "fap-nation_album"
self.pause_event = pause_event
self.cancellation_event = cancellation_event
self.gui_signals = gui_signals
self._is_finished = False
self.process = QProcess(self)
self.process.readyReadStandardOutput.connect(self.handle_ytdlp_output)
def run(self):
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Fap-Nation Download for: {self.album_url}")
self.album_name, files_to_download = fetch_fap_nation_data(self.album_url, self.progress_signal.emit)
if self.is_cancelled or not files_to_download:
self.progress_signal.emit("❌ Failed to extract file information. Aborting.")
self.finished_signal.emit(0, 1, self.is_cancelled)
return
self.overall_progress_signal.emit(1, 0)
save_path = self.output_dir
if self.use_post_subfolder:
save_path = os.path.join(self.output_dir, self.album_name)
self.progress_signal.emit(f" Subfolder per Post is ON. Saving to: '{self.album_name}'")
os.makedirs(save_path, exist_ok=True)
file_data = files_to_download[0]
self.current_filename = file_data.get('filename')
download_url = file_data.get('url')
link_type = file_data.get('type')
filepath = os.path.join(save_path, self.current_filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip: '{self.current_filename}' already exists.")
self.overall_progress_signal.emit(1, 1)
self.finished_signal.emit(0, 1, self.is_cancelled)
return
if link_type == 'hls':
self.download_with_ytdlp(filepath, download_url)
elif link_type == 'direct':
self.download_with_multipart(filepath, download_url)
else:
self.progress_signal.emit(f" ❌ Unknown link type '{link_type}'. Aborting.")
self._on_ytdlp_finished(-1)
def download_with_ytdlp(self, filepath, playlist_url):
self.progress_signal.emit(f" Downloading (HLS Stream): '{self.current_filename}' using yt-dlp...")
try:
            # When frozen by PyInstaller, bundled binaries are unpacked to sys._MEIPASS;
            # otherwise expect yt-dlp.exe in the current working directory.
            if getattr(sys, 'frozen', False):
                base_path = sys._MEIPASS
                ytdlp_path = os.path.join(base_path, "yt-dlp.exe")
            else:
                ytdlp_path = "yt-dlp.exe"
if not os.path.exists(ytdlp_path):
self.progress_signal.emit(f" ❌ ERROR: yt-dlp.exe not found at '{ytdlp_path}'.")
self._on_ytdlp_finished(-1)
return
command = [ytdlp_path, '--no-warnings', '--progress', '--output', filepath, '--merge-output-format', 'mp4', playlist_url]
self.process.start(command[0], command[1:])
self.process.waitForFinished(-1)
self._on_ytdlp_finished(self.process.exitCode())
except Exception as e:
self.progress_signal.emit(f" ❌ Failed to start yt-dlp: {e}")
self._on_ytdlp_finished(-1)
def download_with_multipart(self, filepath, direct_url):
self.progress_signal.emit(f" Downloading (Direct Link): '{self.current_filename}' using multipart downloader...")
try:
session = cloudscraper.create_scraper()
head_response = session.head(direct_url, allow_redirects=True, timeout=20)
head_response.raise_for_status()
total_size = int(head_response.headers.get('content-length', 0))
success, _, _, _ = download_file_in_parts(
file_url=direct_url, save_path=filepath, total_size=total_size, num_parts=5,
headers=session.headers, api_original_filename=self.current_filename,
emitter_for_multipart=self.gui_signals,
cookies_for_chunk_session=session.cookies,
cancellation_event=self.cancellation_event,
skip_event=None, logger_func=self.progress_signal.emit, pause_event=self.pause_event
)
self._on_ytdlp_finished(0 if success else 1)
except Exception as e:
self.progress_signal.emit(f" ❌ Multipart download failed: {e}")
self._on_ytdlp_finished(1)
def handle_ytdlp_output(self):
if not self.process:
return
output = self.process.readAllStandardOutput().data().decode('utf-8', errors='ignore')
for line in reversed(output.strip().splitlines()):
line = line.strip()
progress_match = re.search(r'\[download\]\s+([\d.]+)%\s+of\s+~?\s*([\d.]+\w+B)', line)
if progress_match:
percent, size = progress_match.groups()
self.file_progress_signal.emit("yt-dlp:", f"{percent}% of {size}")
break
def _on_ytdlp_finished(self, exit_code):
if self._is_finished:
return
self._is_finished = True
download_count, skip_count = 0, 0
if self.is_cancelled:
self.progress_signal.emit(f" Download of '{self.current_filename}' was cancelled.")
skip_count = 1
elif exit_code == 0:
self.progress_signal.emit(f" ✅ Download process finished successfully for '{self.current_filename}'.")
download_count = 1
else:
self.progress_signal.emit(f" ❌ Download process exited with an error (Code: {exit_code}) for '{self.current_filename}'.")
skip_count = 1
self.overall_progress_signal.emit(1, 1)
self.process = None
self.finished_signal.emit(download_count, skip_count, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
self.cancellation_event.set()
if self.process and self.process.state() == QProcess.Running:
self.progress_signal.emit(" Cancellation signal received, terminating yt-dlp process.")
self.process.kill()
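
The progress regex in handle_ytdlp_output can be checked in isolation against a typical yt-dlp progress line (the sample line below is illustrative):

import re

sample = "[download]  42.7% of ~ 315.42MiB at 2.10MiB/s ETA 04:12"
m = re.search(r'\[download\]\s+([\d.]+)%\s+of\s+~?\s*([\d.]+\w+B)', sample)
if m:
    percent, size = m.groups()
    print(f"{percent}% of {size}")  # -> 42.7% of 315.42MiB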

View File

@@ -0,0 +1,51 @@
import threading
import time
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.Hentai2read_client import run_hentai2read_download as h2r_run_download
class Hentai2readDownloadThread(QThread):
"""
A dedicated QThread that calls the self-contained Hentai2Read client to
perform scraping and downloading.
"""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool)
overall_progress_signal = pyqtSignal(int, int)
def __init__(self, url, output_dir, parent=None):
super().__init__(parent)
self.start_url = url
self.output_dir = output_dir
self.is_cancelled = False
self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()
def _check_pause(self):
"""Helper to handle pausing and cancellation events."""
if self.is_cancelled: return True
if self.pause_event and self.pause_event.is_set():
self.progress_signal.emit(" Download paused...")
while self.pause_event.is_set():
if self.is_cancelled: return True
time.sleep(0.5)
self.progress_signal.emit(" Download resumed.")
return self.is_cancelled
    def run(self):
        """
        Executes the main download logic by calling the dedicated client function.
        """
        downloaded, skipped = 0, 0
        try:
            downloaded, skipped = h2r_run_download(
                start_url=self.start_url,
                output_dir=self.output_dir,
                progress_callback=self.progress_signal.emit,
                overall_progress_callback=self.overall_progress_signal.emit,
                check_pause_func=self._check_pause
            )
        except Exception as e:
            self.progress_signal.emit(f"❌ A critical error occurred in the Hentai2Read thread: {e}")
            skipped = 1
        finally:
            self.finished_signal.emit(downloaded, skipped, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
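
_check_pause doubles as the client's interruption hook: it blocks while pause_event is set and returns True once cancellation is requested. The same protocol reduced to a standalone sketch (names are illustrative):

import threading
import time

pause_event = threading.Event()
cancelled = threading.Event()

def check_pause():
    # Return True to abort; otherwise block while paused.
    while pause_event.is_set():
        if cancelled.is_set():
            return True
        time.sleep(0.1)
    return cancelled.is_set()

def worker():
    for page in range(5):
        if check_pause():
            return
        print(f"downloaded page {page}")
        time.sleep(0.05)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.12)
pause_event.set()     # pause ...
time.sleep(0.3)
pause_event.clear()   # ... and resume
t.join()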

View File

@@ -0,0 +1,549 @@
# kemono_discord_downloader_thread.py
import os
import time
import uuid
import threading
import cloudscraper
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from PyQt5.QtCore import QThread, pyqtSignal
# --- Assuming these files are in the correct relative path ---
# Adjust imports if your project structure is different
try:
from ...core.discord_client import fetch_server_channels, fetch_channel_messages
from ...utils.file_utils import clean_filename
except ImportError as e:
# Basic fallback logging if signals aren't ready
print(f"ERROR: Failed to import required modules for Kemono Discord thread: {e}")
# Re-raise to prevent the thread from being created incorrectly
raise
# Custom exception for clean cancellation/pausing.
# NOTE: this intentionally shadows the built-in InterruptedError within this module.
class InterruptedError(Exception):
"""Custom exception for handling cancellations/pausing gracefully within download loops."""
pass
class KemonoDiscordDownloadThread(QThread):
"""
A dedicated QThread for downloading files from Kemono Discord server/channel pages,
using the Kemono API via discord_client and multithreading for file downloads.
Includes a single retry attempt after a 15-second delay for specific errors.
"""
# --- Signals ---
progress_signal = pyqtSignal(str) # General log messages
progress_label_signal = pyqtSignal(str) # Update main progress label (e.g., "Fetching messages...")
file_progress_signal = pyqtSignal(str, object) # Update file progress bar (filename, (downloaded_bytes, total_bytes | None))
permanent_file_failed_signal = pyqtSignal(list) # To report failures to main window
finished_signal = pyqtSignal(int, int, bool, list) # (downloaded_count, skipped_count, was_cancelled, [])
def __init__(self, server_id, channel_id, output_dir, cookies_dict, parent):
"""
Initializes the Kemono Discord downloader thread.
Args:
server_id (str): The Discord server ID from Kemono.
channel_id (str | None): The specific Discord channel ID from Kemono, if provided.
output_dir (str): The base directory to save downloaded files.
cookies_dict (dict | None): Cookies to use for requests.
parent (QWidget): The parent widget (main_app) to access events/settings.
"""
super().__init__(parent)
self.server_id = server_id
self.target_channel_id = channel_id # The specific channel from URL, if any
self.output_dir = output_dir
self.cookies_dict = cookies_dict
self.parent_app = parent # Access main app's events and settings
# --- Shared Events & Internal State ---
self.cancellation_event = getattr(parent, 'cancellation_event', threading.Event())
self.pause_event = getattr(parent, 'pause_event', threading.Event())
self._is_cancelled_internal = False # Internal flag for quick breaking
# --- Thread-Safe Counters ---
self.download_count = 0
self.skip_count = 0
self.count_lock = threading.Lock()
# --- List to Store Failure Details ---
self.permanently_failed_details = []
# --- Multithreading Configuration ---
self.num_file_threads = 1 # Default
try:
use_mt = getattr(self.parent_app, 'use_multithreading_checkbox', None)
thread_input = getattr(self.parent_app, 'thread_count_input', None)
if use_mt and use_mt.isChecked() and thread_input:
thread_count_ui = int(thread_input.text().strip())
# Apply a reasonable cap specific to this downloader type (adjust as needed)
self.num_file_threads = max(1, min(thread_count_ui, 20)) # Cap at 20 file threads
except (ValueError, AttributeError, TypeError):
try: self.progress_signal.emit("⚠️ Warning: Could not read thread count setting, defaulting to 1.")
except: pass
self.num_file_threads = 1 # Fallback on error getting setting
# --- Network Client ---
try:
self.scraper = cloudscraper.create_scraper(browser={'browser': 'firefox', 'platform': 'windows', 'mobile': False})
except Exception as e:
try: self.progress_signal.emit(f"❌ ERROR: Failed to initialize cloudscraper: {e}")
except: pass
self.scraper = None
# --- Control Methods (cancel, pause, resume - same as before) ---
def cancel(self):
self._is_cancelled_internal = True
self.cancellation_event.set()
try: self.progress_signal.emit(" Cancellation requested for Kemono Discord download.")
except: pass
def pause(self):
if not self.pause_event.is_set():
self.pause_event.set()
try: self.progress_signal.emit(" Pausing Kemono Discord download...")
except: pass
def resume(self):
if self.pause_event.is_set():
self.pause_event.clear()
try: self.progress_signal.emit(" Resuming Kemono Discord download...")
except: pass
# --- Helper: Check Cancellation/Pause (same as before) ---
def _check_events(self):
if self._is_cancelled_internal or self.cancellation_event.is_set():
if not self._is_cancelled_internal:
self._is_cancelled_internal = True
try: self.progress_signal.emit(" Cancellation detected by Kemono Discord thread check.")
except: pass
return True # Cancelled
was_paused = False
while self.pause_event.is_set():
if not was_paused:
try: self.progress_signal.emit(" Kemono Discord operation paused...")
except: pass
was_paused = True
if self.cancellation_event.is_set():
self._is_cancelled_internal = True
try: self.progress_signal.emit(" Cancellation detected while paused.")
except: pass
return True
time.sleep(0.5)
return False
# --- REVISED Helper: Download Single File with ONE Retry ---
def _download_single_kemono_file(self, file_info):
"""
Downloads a single file, handles collisions after download,
and automatically retries ONCE after 15s for specific network errors.
Returns:
tuple: (bool_success, dict_error_details_or_None)
"""
# --- Constants for Retry Logic ---
MAX_ATTEMPTS = 2 # 1 initial attempt + 1 retry
RETRY_DELAY_SECONDS = 15
# --- Extract info ---
channel_dir = file_info['channel_dir']
original_filename = file_info['original_filename']
file_url = file_info['file_url']
channel_id = file_info['channel_id']
post_title = file_info.get('post_title', f"Message in channel {channel_id}")
original_post_id_for_log = file_info.get('message_id', 'N/A')
base_kemono_domain = "kemono.cr"
if not self.scraper:
try: self.progress_signal.emit(f" ❌ Cannot download '{original_filename}': Cloudscraper not initialized.")
except: pass
failure_details = { 'file_info': {'url': file_url, 'name': original_filename}, 'post_title': post_title, 'original_post_id_for_log': original_post_id_for_log, 'target_folder_path': channel_dir, 'error': 'Cloudscraper not initialized', 'service': 'discord', 'user_id': self.server_id }
return False, failure_details
if self._check_events(): return False, None # Interrupted before start
# --- Determine filenames ---
cleaned_original_filename = clean_filename(original_filename)
intended_final_filename = cleaned_original_filename
unique_suffix = uuid.uuid4().hex[:8]
temp_filename = f"{intended_final_filename}.{unique_suffix}.part"
temp_filepath = os.path.join(channel_dir, temp_filename)
# --- Download Attempt Loop ---
download_successful = False
last_exception = None
should_retry = False # Flag to indicate if the first attempt failed with a retryable error
for attempt in range(1, MAX_ATTEMPTS + 1):
response = None
try:
# --- Pre-attempt checks ---
if self._check_events(): raise InterruptedError("Cancelled/Paused before attempt")
if attempt == 2 and should_retry: # Only delay *before* the retry
try: self.progress_signal.emit(f" ⏳ Retrying '{original_filename}' (Attempt {attempt}/{MAX_ATTEMPTS}) after {RETRY_DELAY_SECONDS}s...")
except: pass
for _ in range(RETRY_DELAY_SECONDS):
if self._check_events(): raise InterruptedError("Cancelled/Paused during retry delay")
time.sleep(1)
# If it's attempt 2 but should_retry is False, it means the first error was non-retryable, so skip
elif attempt == 2 and not should_retry:
break # Exit loop, failure already determined
# --- Log attempt ---
log_prefix = f" ⬇️ Downloading:" if attempt == 1 else f" 🔄 Retrying:"
try: self.progress_signal.emit(f"{log_prefix} '{original_filename}' (Attempt {attempt}/{MAX_ATTEMPTS})...")
except: pass
if attempt == 1:
try: self.file_progress_signal.emit(original_filename, (0, 0))
except: pass
# --- Perform Download ---
headers = { 'User-Agent': 'Mozilla/5.0 ...', 'Referer': f'https://{base_kemono_domain}/discord/channel/{channel_id}'} # Shortened for brevity
response = self.scraper.get(file_url, headers=headers, cookies=self.cookies_dict, stream=True, timeout=(15, 120))
response.raise_for_status()
total_size = int(response.headers.get('content-length', 0))
downloaded_size = 0
last_progress_emit_time = time.time()
with open(temp_filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=1024*1024):
if self._check_events(): raise InterruptedError("Cancelled/Paused during chunk writing")
if chunk:
f.write(chunk)
downloaded_size += len(chunk)
current_time = time.time()
if total_size > 0 and (current_time - last_progress_emit_time > 0.5 or downloaded_size == total_size):
try: self.file_progress_signal.emit(original_filename, (downloaded_size, total_size))
except: pass
last_progress_emit_time = current_time
elif total_size == 0 and (current_time - last_progress_emit_time > 0.5):
try: self.file_progress_signal.emit(original_filename, (downloaded_size, 0))
except: pass
last_progress_emit_time = current_time
response.close()
# --- Verification ---
if self._check_events(): raise InterruptedError("Cancelled/Paused after download completion")
if total_size > 0 and downloaded_size != total_size:
try: self.progress_signal.emit(f" ⚠️ Size mismatch on attempt {attempt} for '{original_filename}'. Expected {total_size}, got {downloaded_size}.")
except: pass
last_exception = IOError(f"Size mismatch: Expected {total_size}, got {downloaded_size}")
if os.path.exists(temp_filepath):
try: os.remove(temp_filepath)
except OSError: pass
should_retry = (attempt == 1) # Only retry if it was the first attempt
continue # Try again if attempt 1, otherwise loop finishes
else:
download_successful = True
break # Success!
# --- Error Handling within Loop ---
except InterruptedError as e:
last_exception = e
should_retry = False # Don't retry if interrupted
break
except (requests.exceptions.Timeout, requests.exceptions.ConnectionError, cloudscraper.exceptions.CloudflareException) as e:
last_exception = e
try: self.progress_signal.emit(f" ❌ Network/Cloudflare error on attempt {attempt} for '{original_filename}': {e}")
except: pass
should_retry = (attempt == 1) # Retry only if first attempt
except requests.exceptions.RequestException as e:
status_code = getattr(e.response, 'status_code', None)
if status_code and 500 <= status_code <= 599: # Retry on 5xx
last_exception = e
try: self.progress_signal.emit(f" ❌ Server error ({status_code}) on attempt {attempt} for '{original_filename}'. Will retry...")
except: pass
should_retry = (attempt == 1) # Retry only if first attempt
else: # Don't retry on 4xx or other request errors
last_exception = e
try: self.progress_signal.emit(f" ❌ Non-retryable HTTP error for '{original_filename}': {e}")
except: pass
should_retry = False
break
except OSError as e:
last_exception = e
try: self.progress_signal.emit(f" ❌ OS error during download attempt {attempt} for '{original_filename}': {e}")
except: pass
should_retry = False
break
except Exception as e:
last_exception = e
try: self.progress_signal.emit(f" ❌ Unexpected error on attempt {attempt} for '{original_filename}': {e}")
except: pass
should_retry = False
break
finally:
if response:
try: response.close()
except Exception: pass
# --- End Download Attempt Loop ---
try: self.file_progress_signal.emit(original_filename, None) # Clear progress
except: pass
# --- Post-Download Processing ---
if download_successful:
# --- Rename Logic ---
final_filename_to_use = intended_final_filename
final_filepath_on_disk = os.path.join(channel_dir, final_filename_to_use)
counter = 1
base_name, extension = os.path.splitext(intended_final_filename)
while os.path.exists(final_filepath_on_disk):
final_filename_to_use = f"{base_name} ({counter}){extension}"
final_filepath_on_disk = os.path.join(channel_dir, final_filename_to_use)
counter += 1
if final_filename_to_use != intended_final_filename:
try: self.progress_signal.emit(f" -> Name conflict for '{intended_final_filename}'. Renaming to '{final_filename_to_use}'.")
except: pass
try:
os.rename(temp_filepath, final_filepath_on_disk)
try: self.progress_signal.emit(f" ✅ Saved: '{final_filename_to_use}'")
except: pass
return True, None # SUCCESS
except OSError as e:
try: self.progress_signal.emit(f" ❌ OS error renaming temp file to '{final_filename_to_use}': {e}")
except: pass
if os.path.exists(temp_filepath):
try: os.remove(temp_filepath)
except OSError: pass
# ---> RETURN FAILURE TUPLE (Rename Failed) <---
failure_details = { 'file_info': {'url': file_url, 'name': original_filename}, 'post_title': post_title, 'original_post_id_for_log': original_post_id_for_log, 'target_folder_path': channel_dir, 'intended_filename': intended_final_filename, 'error': f"Rename failed: {e}", 'service': 'discord', 'user_id': self.server_id }
return False, failure_details
else:
# Download failed or was interrupted
if not isinstance(last_exception, InterruptedError):
try: self.progress_signal.emit(f" ❌ FAILED to download '{original_filename}' after {MAX_ATTEMPTS} attempts. Last error: {last_exception}")
except: pass
if os.path.exists(temp_filepath):
try: os.remove(temp_filepath)
except OSError as e_rem:
try: self.progress_signal.emit(f" (Failed to remove temp file '{temp_filename}': {e_rem})")
except: pass
# ---> RETURN FAILURE TUPLE (Download Failed/Interrupted) <---
# Only generate details if it wasn't interrupted by user
failure_details = None
if not isinstance(last_exception, InterruptedError):
failure_details = {
'file_info': {'url': file_url, 'name': original_filename},
'post_title': post_title, 'original_post_id_for_log': original_post_id_for_log,
'target_folder_path': channel_dir, 'intended_filename': intended_final_filename,
'error': f"Failed after {MAX_ATTEMPTS} attempts: {last_exception}",
'service': 'discord', 'user_id': self.server_id,
'forced_filename_override': intended_final_filename,
'file_index_in_post': file_info.get('file_index', 0),
'num_files_in_this_post': file_info.get('num_files', 1)
}
return False, failure_details # Return None details if interrupted
# --- Main Thread Execution ---
def run(self):
"""Main execution logic: Fetches channels/messages and dispatches file downloads."""
self.download_count = 0
self.skip_count = 0
self._is_cancelled_internal = False
self.permanently_failed_details = [] # Reset failed list
if not self.scraper:
try: self.progress_signal.emit("❌ Aborting Kemono Discord download: Cloudscraper failed to initialize.")
except: pass
self.finished_signal.emit(0, 0, False, [])
return
try:
# --- Log Start ---
try:
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Kemono Discord download for server: {self.server_id}")
self.progress_signal.emit(f" Using {self.num_file_threads} thread(s) for file downloads.")
except: pass
# --- Channel Fetching (same as before) ---
            channels_to_process = []
            # Populate channels_to_process: either the single channel from the URL,
            # or every channel the Kemono API reports for the server.
if self.target_channel_id:
channels_to_process.append({'id': self.target_channel_id, 'name': self.target_channel_id})
try: self.progress_signal.emit(f" Targeting specific channel: {self.target_channel_id}")
except: pass
else:
try: self.progress_label_signal.emit("Fetching server channels via Kemono API...")
except: pass
channels_data = fetch_server_channels(self.server_id, logger=self.progress_signal.emit, cookies_dict=self.cookies_dict)
if self._check_events(): return
if channels_data is not None:
channels_to_process = channels_data
try: self.progress_signal.emit(f" Found {len(channels_to_process)} channels.")
except: pass
else:
try: self.progress_signal.emit(f" ❌ Failed to fetch channels for server {self.server_id} via Kemono API.")
except: pass
return
# --- Process Each Channel ---
for channel in channels_to_process:
if self._check_events(): break
channel_id = channel['id']
channel_name = clean_filename(channel.get('name', channel_id))
channel_dir = os.path.join(self.output_dir, channel_name)
try:
os.makedirs(channel_dir, exist_ok=True)
except OSError as e:
try: self.progress_signal.emit(f" ❌ Failed to create directory for channel '{channel_name}': {e}. Skipping channel.")
except: pass
continue
try:
self.progress_signal.emit(f"\n--- Processing Channel: #{channel_name} ({channel_id}) ---")
self.progress_label_signal.emit(f"Fetching messages for #{channel_name}...")
except: pass
# --- Collect File Download Tasks ---
file_tasks = []
message_generator = fetch_channel_messages(
channel_id, logger=self.progress_signal.emit,
cancellation_event=self.cancellation_event, pause_event=self.pause_event,
cookies_dict=self.cookies_dict
)
try:
message_index = 0
for message_batch in message_generator:
if self._check_events(): break
for message in message_batch:
message_id = message.get('id', f'msg_{message_index}')
post_title_context = (message.get('content') or f"Message {message_id}")[:50] + "..."
attachments = message.get('attachments', [])
file_index_in_message = 0
num_files_in_message = len(attachments)
for attachment in attachments:
if self._check_events(): raise InterruptedError
file_path = attachment.get('path')
original_filename = attachment.get('name')
if file_path and original_filename:
base_kemono_domain = "kemono.cr"
if not file_path.startswith('/'): file_path = '/' + file_path
file_url = f"https://{base_kemono_domain}/data{file_path}"
file_tasks.append({
'channel_dir': channel_dir, 'original_filename': original_filename,
'file_url': file_url, 'channel_id': channel_id,
'message_id': message_id, 'post_title': post_title_context,
'file_index': file_index_in_message, 'num_files': num_files_in_message
})
file_index_in_message += 1
message_index += 1
if self._check_events(): raise InterruptedError
if self._check_events(): raise InterruptedError
except InterruptedError:
try: self.progress_signal.emit(" Interrupted while collecting file tasks.")
except: pass
break # Exit channel processing
except Exception as e_msg:
try: self.progress_signal.emit(f" ❌ Error fetching messages for channel {channel_name}: {e_msg}")
except: pass
continue # Continue to next channel
if self._check_events(): break
if not file_tasks:
try: self.progress_signal.emit(" No downloadable file attachments found in this channel's messages.")
except: pass
continue
try:
self.progress_signal.emit(f" Found {len(file_tasks)} potential file attachments. Starting downloads...")
self.progress_label_signal.emit(f"Downloading {len(file_tasks)} files for #{channel_name}...")
except: pass
# --- Execute Downloads Concurrently ---
files_processed_in_channel = 0
with ThreadPoolExecutor(max_workers=self.num_file_threads, thread_name_prefix=f"KDC_{channel_id[:4]}_") as executor:
futures = {executor.submit(self._download_single_kemono_file, task): task for task in file_tasks}
try:
for future in as_completed(futures):
files_processed_in_channel += 1
task_info = futures[future]
try:
success, details = future.result() # Unpack result
with self.count_lock:
if success:
self.download_count += 1
else:
self.skip_count += 1
if details: # Append details if the download permanently failed
self.permanently_failed_details.append(details)
except Exception as e_future:
filename = task_info.get('original_filename', 'unknown file')
try: self.progress_signal.emit(f" ❌ System error processing download future for '{filename}': {e_future}")
except: pass
with self.count_lock:
self.skip_count += 1
# Append details on system failure
failure_details = { 'file_info': {'url': task_info.get('file_url'), 'name': filename}, 'post_title': task_info.get('post_title', 'N/A'), 'original_post_id_for_log': task_info.get('message_id', 'N/A'), 'target_folder_path': task_info.get('channel_dir'), 'error': f"Future execution error: {e_future}", 'service': 'discord', 'user_id': self.server_id, 'forced_filename_override': clean_filename(filename), 'file_index_in_post': task_info.get('file_index', 0), 'num_files_in_this_post': task_info.get('num_files', 1) }
self.permanently_failed_details.append(failure_details)
try: self.progress_label_signal.emit(f"#{channel_name}: {files_processed_in_channel}/{len(file_tasks)} files processed")
except: pass
if self._check_events():
try: self.progress_signal.emit(" Cancelling remaining file downloads for this channel...")
except: pass
executor.shutdown(wait=False, cancel_futures=True)
break # Exit as_completed loop
except InterruptedError:
try: self.progress_signal.emit(" Download processing loop interrupted.")
except: pass
executor.shutdown(wait=False, cancel_futures=True)
if self._check_events(): break # Check between channels
# --- End Channel Loop ---
except Exception as e:
# Catch unexpected errors in the main run logic
try:
self.progress_signal.emit(f"❌ Unexpected critical error in Kemono Discord thread run loop: {e}")
import traceback
self.progress_signal.emit(traceback.format_exc())
except: pass # Avoid errors if signals fail at the very end
finally:
# --- Final Cleanup and Signal ---
try:
try: self.progress_signal.emit("=" * 40)
except: pass
cancelled = self._is_cancelled_internal or self.cancellation_event.is_set()
# --- EMIT FAILED FILES SIGNAL ---
if self.permanently_failed_details:
try:
self.progress_signal.emit(f" Reporting {len(self.permanently_failed_details)} permanently failed files...")
self.permanent_file_failed_signal.emit(list(self.permanently_failed_details)) # Emit a copy
except Exception as e_emit_fail:
print(f"ERROR emitting permanent_file_failed_signal: {e_emit_fail}")
# Log final status
try:
if cancelled and not self._is_cancelled_internal:
self.progress_signal.emit(" Kemono Discord download cancelled externally.")
elif self._is_cancelled_internal:
self.progress_signal.emit(" Kemono Discord download finished due to cancellation.")
else:
self.progress_signal.emit("✅ Kemono Discord download process finished.")
except: pass
# Clear file progress
try: self.file_progress_signal.emit("", None)
except: pass
# Get final counts safely
with self.count_lock:
final_download_count = self.download_count
final_skip_count = self.skip_count
# Emit finished signal
self.finished_signal.emit(final_download_count, final_skip_count, cancelled, [])
except Exception as e_final:
# Log final signal emission error if possible
print(f"ERROR in KemonoDiscordDownloadThread finally block: {e_final}")

View File

@@ -0,0 +1,45 @@
import threading
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.mangadex_client import fetch_mangadex_data
class MangaDexDownloadThread(QThread):
"""A wrapper QThread for running the MangaDex client function."""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool)
overall_progress_signal = pyqtSignal(int, int)
def __init__(self, url, output_dir, parent=None):
super().__init__(parent)
self.start_url = url
self.output_dir = output_dir
self.is_cancelled = False
self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()
self.cancellation_event = parent.cancellation_event if hasattr(parent, 'cancellation_event') else threading.Event()
def run(self):
downloaded = 0
skipped = 0
try:
downloaded, skipped = fetch_mangadex_data(
self.start_url,
self.output_dir,
logger_func=self.progress_signal.emit,
                # Note: unlike the other threads here, the client receives the bound
                # signal objects themselves (not .emit); presumably it calls .emit on them.
                file_progress_callback=self.file_progress_signal,
                overall_progress_callback=self.overall_progress_signal,
pause_event=self.pause_event,
cancellation_event=self.cancellation_event
)
except Exception as e:
self.progress_signal.emit(f"❌ A critical error occurred in the MangaDex thread: {e}")
skipped = 1 # Mark as skipped if there was a critical failure
finally:
self.finished_signal.emit(downloaded, skipped, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
if self.cancellation_event:
self.cancellation_event.set()
self.progress_signal.emit(" Cancellation signal received by MangaDex thread.")

View File

@@ -0,0 +1,105 @@
import os
import time
import cloudscraper
from PyQt5.QtCore import QThread, pyqtSignal
from ...utils.file_utils import clean_folder_name
class NhentaiDownloadThread(QThread):
progress_signal = pyqtSignal(str)
finished_signal = pyqtSignal(int, int, bool)
IMAGE_SERVERS = [
"https://i.nhentai.net", "https://i2.nhentai.net", "https://i3.nhentai.net",
"https://i5.nhentai.net", "https://i7.nhentai.net"
]
EXTENSION_MAP = {'j': 'jpg', 'p': 'png', 'g': 'gif', 'w': 'webp' }
def __init__(self, gallery_data, output_dir, parent=None):
super().__init__(parent)
self.gallery_data = gallery_data
self.output_dir = output_dir
self.is_cancelled = False
def run(self):
title = self.gallery_data.get("title", {}).get("english", f"gallery_{self.gallery_data.get('id')}")
gallery_id = self.gallery_data.get("id")
media_id = self.gallery_data.get("media_id")
pages_info = self.gallery_data.get("pages", [])
folder_name = clean_folder_name(title)
gallery_path = os.path.join(self.output_dir, folder_name)
try:
os.makedirs(gallery_path, exist_ok=True)
except OSError as e:
self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
self.finished_signal.emit(0, len(pages_info), False)
return
self.progress_signal.emit(f"⬇️ Downloading '{title}' to folder '{folder_name}'...")
scraper = cloudscraper.create_scraper()
download_count = 0
skip_count = 0
for i, page_data in enumerate(pages_info):
if self.is_cancelled:
break
page_num = i + 1
ext_char = page_data.get('t', 'j')
extension = self.EXTENSION_MAP.get(ext_char, 'jpg')
relative_path = f"/galleries/{media_id}/{page_num}.{extension}"
local_filename = f"{page_num:03d}.{extension}"
filepath = os.path.join(gallery_path, local_filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip (Exists): {local_filename}")
skip_count += 1
continue
download_successful = False
for server in self.IMAGE_SERVERS:
if self.is_cancelled:
break
full_url = f"{server}{relative_path}"
try:
self.progress_signal.emit(f" Downloading page {page_num}/{len(pages_info)} from {server} ...")
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36',
'Referer': f'https://nhentai.net/g/{gallery_id}/'
}
response = scraper.get(full_url, headers=headers, timeout=60, stream=True)
                    if response.status_code == 200:
                        with open(filepath, 'wb') as f:
                            for chunk in response.iter_content(chunk_size=8192):
                                # Stay responsive to cancellation even mid-file.
                                if self.is_cancelled:
                                    break
                                f.write(chunk)
                        if self.is_cancelled:
                            # Remove the partial file so a re-run does not treat it as complete.
                            if os.path.exists(filepath):
                                os.remove(filepath)
                            break
                        download_count += 1
                        download_successful = True
                        break
else:
self.progress_signal.emit(f" -> {server} returned status {response.status_code}. Trying next server...")
except Exception as e:
self.progress_signal.emit(f" -> {server} failed to connect or timed out: {e}. Trying next server...")
            if not download_successful and not self.is_cancelled:
                self.progress_signal.emit(f" ❌ Failed to download {local_filename} from all servers.")
                skip_count += 1
time.sleep(0.5)
self.finished_signal.emit(download_count, skip_count, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
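
The server loop is plain mirror failover: try each host in order and stop at the first 200. The same idea as a small function with a fake fetcher (hosts and fetcher are illustrative):

def fetch_from_mirrors(path, mirrors, get):
    # get is any callable url -> (status_code, body); return the first 200 body.
    for host in mirrors:
        status, body = get(f"{host}{path}")
        if status == 200:
            return body
    return None

fake_responses = {"https://b.example/1.jpg": (200, b"jpeg-bytes")}
print(fetch_from_mirrors(
    "/1.jpg",
    ["https://a.example", "https://b.example"],
    lambda url: fake_responses.get(url, (503, b""))))  # -> b'jpeg-bytes'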

View File

@@ -0,0 +1,101 @@
import os
import time
import requests
import cloudscraper
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.pixeldrain_client import fetch_pixeldrain_data
from ...utils.file_utils import clean_folder_name
class PixeldrainDownloadThread(QThread):
"""A dedicated QThread for handling pixeldrain.com downloads."""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool) # dl_count, skip_count, cancelled
def __init__(self, url, output_dir, parent=None):
super().__init__(parent)
self.pixeldrain_url = url
self.output_dir = output_dir
self.is_cancelled = False
def run(self):
download_count = 0
skip_count = 0
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Pixeldrain.com Download for: {self.pixeldrain_url}")
album_title_raw, files_to_download = fetch_pixeldrain_data(self.pixeldrain_url, self.progress_signal.emit)
if not files_to_download:
self.progress_signal.emit("❌ Failed to extract file information from Pixeldrain. Aborting.")
self.finished_signal.emit(0, 0, self.is_cancelled)
return
album_folder_name = clean_folder_name(album_title_raw)
album_path = os.path.join(self.output_dir, album_folder_name)
try:
os.makedirs(album_path, exist_ok=True)
self.progress_signal.emit(f" Saving to folder: '{album_path}'")
except OSError as e:
self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
self.finished_signal.emit(0, len(files_to_download), self.is_cancelled)
return
total_files = len(files_to_download)
session = cloudscraper.create_scraper()
for i, file_data in enumerate(files_to_download):
if self.is_cancelled:
self.progress_signal.emit(" Download cancelled by user.")
skip_count = total_files - download_count
break
            filename = file_data.get('filename', f'untitled_{i+1}')
file_url = file_data.get('url')
filepath = os.path.join(album_path, filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip ({i+1}/{total_files}): '{filename}' already exists.")
skip_count += 1
continue
self.progress_signal.emit(f" Downloading ({i+1}/{total_files}): '{filename}'...")
try:
response = session.get(file_url, stream=True, timeout=90)
response.raise_for_status()
total_size = int(response.headers.get('content-length', 0))
downloaded_size = 0
last_update_time = time.time()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self.is_cancelled:
break
if chunk:
f.write(chunk)
downloaded_size += len(chunk)
current_time = time.time()
if total_size > 0 and (current_time - last_update_time) > 0.5:
self.file_progress_signal.emit(filename, (downloaded_size, total_size))
last_update_time = current_time
                if self.is_cancelled:
                    if os.path.exists(filepath): os.remove(filepath)
                    continue
                if total_size > 0:
                    self.file_progress_signal.emit(filename, (total_size, total_size))
                download_count += 1
            except requests.exceptions.RequestException as e:
                self.progress_signal.emit(f" ❌ Failed to download '{filename}'. Error: {e}")
                if os.path.exists(filepath): os.remove(filepath)
                skip_count += 1
            except Exception as e:
                self.progress_signal.emit(f" ❌ An unexpected error occurred with '{filename}': {e}")
                if os.path.exists(filepath): os.remove(filepath)
                skip_count += 1
self.file_progress_signal.emit("", None)
self.finished_signal.emit(download_count, skip_count, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Pixeldrain thread.")

View File

@@ -0,0 +1,87 @@
import os
import time
from PyQt5.QtCore import QThread, pyqtSignal
import cloudscraper
from ...core.rule34video_client import fetch_rule34video_data
from ...utils.file_utils import clean_folder_name
class Rule34VideoDownloadThread(QThread):
"""A dedicated QThread for handling rule34video.com downloads."""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool) # dl_count, skip_count, cancelled
def __init__(self, url, output_dir, parent=None):
super().__init__(parent)
self.video_url = url
self.output_dir = output_dir
self.is_cancelled = False
def run(self):
download_count = 0
skip_count = 0
video_title, final_video_url = fetch_rule34video_data(self.video_url, self.progress_signal.emit)
if not final_video_url:
self.progress_signal.emit("❌ Failed to get video data. Aborting.")
self.finished_signal.emit(0, 1, self.is_cancelled)
return
# Create a safe filename from the title, defaulting if needed
safe_title = clean_folder_name(video_title if video_title else "rule34video_file")
filename = f"{safe_title}.mp4"
filepath = os.path.join(self.output_dir, filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip: '{filename}' already exists.")
self.finished_signal.emit(0, 1, self.is_cancelled)
return
self.progress_signal.emit(f" Downloading: '{filename}'...")
try:
scraper = cloudscraper.create_scraper()
# The CDN link might not require special headers, but a referer is good practice
headers = {'Referer': 'https://rule34video.com/'}
response = scraper.get(final_video_url, stream=True, headers=headers, timeout=90)
response.raise_for_status()
total_size = int(response.headers.get('content-length', 0))
downloaded_size = 0
last_update_time = time.time()
with open(filepath, 'wb') as f:
# Use a larger chunk size for video files
for chunk in response.iter_content(chunk_size=8192 * 4):
if self.is_cancelled:
break
if chunk:
f.write(chunk)
downloaded_size += len(chunk)
current_time = time.time()
if total_size > 0 and (current_time - last_update_time) > 0.5:
self.file_progress_signal.emit(filename, (downloaded_size, total_size))
last_update_time = current_time
if self.is_cancelled:
if os.path.exists(filepath):
os.remove(filepath)
skip_count = 1
self.progress_signal.emit(f" Download cancelled for '{filename}'.")
else:
download_count = 1
except Exception as e:
self.progress_signal.emit(f" ❌ Failed to download '{filename}': {e}")
if os.path.exists(filepath):
os.remove(filepath)
skip_count = 1
self.file_progress_signal.emit("", None)
self.finished_signal.emit(download_count, skip_count, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Rule34Video thread.")

View File

@@ -0,0 +1,105 @@
import os
import time
import requests
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.saint2_client import fetch_saint2_data
class Saint2DownloadThread(QThread):
"""A dedicated QThread for handling saint2.su downloads."""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool) # dl_count, skip_count, cancelled
def __init__(self, url, output_dir, parent=None):
super().__init__(parent)
self.saint2_url = url
self.output_dir = output_dir
self.is_cancelled = False
def run(self):
download_count = 0
skip_count = 0
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Saint2.su Download for: {self.saint2_url}")
album_name, files_to_download = fetch_saint2_data(self.saint2_url, self.progress_signal.emit)
if not files_to_download:
self.progress_signal.emit("❌ Failed to extract file information from Saint2. Aborting.")
self.finished_signal.emit(0, 0, self.is_cancelled)
return
album_path = os.path.join(self.output_dir, album_name)
try:
os.makedirs(album_path, exist_ok=True)
self.progress_signal.emit(f" Saving to folder: '{album_path}'")
except OSError as e:
self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
self.finished_signal.emit(0, len(files_to_download), self.is_cancelled)
return
total_files = len(files_to_download)
session = requests.Session()
for i, file_data in enumerate(files_to_download):
if self.is_cancelled:
self.progress_signal.emit(" Download cancelled by user.")
skip_count = total_files - download_count
break
filename = file_data.get('filename', f'untitled_{i+1}.mp4')
file_url = file_data.get('url')
headers = file_data.get('headers')
filepath = os.path.join(album_path, filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip ({i+1}/{total_files}): '{filename}' already exists.")
skip_count += 1
continue
self.progress_signal.emit(f" Downloading ({i+1}/{total_files}): '{filename}'...")
try:
response = session.get(file_url, stream=True, headers=headers, timeout=60)
response.raise_for_status()
total_size = int(response.headers.get('content-length', 0))
downloaded_size = 0
last_update_time = time.time()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self.is_cancelled:
break
if chunk:
f.write(chunk)
downloaded_size += len(chunk)
current_time = time.time()
if total_size > 0 and (current_time - last_update_time) > 0.5:
self.file_progress_signal.emit(filename, (downloaded_size, total_size))
last_update_time = current_time
if self.is_cancelled:
if os.path.exists(filepath): os.remove(filepath)
continue
if total_size > 0:
self.file_progress_signal.emit(filename, (total_size, total_size))
download_count += 1
except requests.exceptions.RequestException as e:
self.progress_signal.emit(f" ❌ Failed to download '{filename}'. Error: {e}")
if os.path.exists(filepath): os.remove(filepath)
skip_count += 1
except Exception as e:
self.progress_signal.emit(f" ❌ An unexpected error occurred with '{filename}': {e}")
if os.path.exists(filepath): os.remove(filepath)
skip_count += 1
self.file_progress_signal.emit("", None)
self.finished_signal.emit(download_count, skip_count, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Saint2 thread.")

View File

@@ -0,0 +1,347 @@
import os
import queue
import re
import threading
import time
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse
import cloudscraper
import requests
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.bunkr_client import fetch_bunkr_data
from ...core.pixeldrain_client import fetch_pixeldrain_data
from ...core.saint2_client import fetch_saint2_data
from ...core.simpcity_client import fetch_single_simpcity_page
from ...services.drive_downloader import (
download_mega_file as drive_download_mega_file,
download_gofile_folder
)
from ...utils.file_utils import clean_folder_name
class SimpCityDownloadThread(QThread):
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool, list)
overall_progress_signal = pyqtSignal(int, int)
def __init__(self, url, post_id, output_dir, cookies, parent=None):
super().__init__(parent)
self.start_url = url
self.post_id = post_id
self.output_dir = output_dir
self.cookies = cookies
self.is_cancelled = False
self.parent_app = parent
self.image_queue = queue.Queue()
self.service_queue = queue.Queue()
self.counter_lock = threading.Lock()
self.total_dl_count = 0
self.total_skip_count = 0
self.total_jobs_found = 0
self.total_jobs_processed = 0
self.processed_job_urls = set()
def cancel(self):
self.is_cancelled = True
class _ServiceLoggerAdapter:
"""Wraps the progress signal to provide .info(), .error(), .warning() methods for other clients."""
def __init__(self, signal_emitter, prefix=""):
self.emit = signal_emitter
self.prefix = prefix
def __call__(self, msg, *args, **kwargs):
# Make the logger callable, defaulting to the info method.
self.info(msg, *args, **kwargs)
        def _fmt(self, msg, args):
            # Only apply %-formatting when args are given, so messages that
            # happen to contain a literal '%' do not raise ValueError.
            return str(msg) % args if args else str(msg)
        def info(self, msg, *args, **kwargs): self.emit(f"{self.prefix}{self._fmt(msg, args)}")
        def error(self, msg, *args, **kwargs): self.emit(f"{self.prefix}❌ ERROR: {self._fmt(msg, args)}")
        def warning(self, msg, *args, **kwargs): self.emit(f"{self.prefix}⚠️ WARNING: {self._fmt(msg, args)}")
def _log_interceptor(self, message):
"""Filters out verbose log messages from the simpcity_client."""
if "[SimpCity] Scraper found" in message or "[SimpCity] Scraping page" in message:
pass
else:
self.progress_signal.emit(message)
def _get_enriched_jobs(self, jobs_to_check):
"""Performs a pre-flight check on jobs to get an accurate total file count and summary."""
if not jobs_to_check:
return []
enriched_jobs = []
bunkr_logger = self._ServiceLoggerAdapter(self.progress_signal.emit, prefix=" ")
pixeldrain_logger = self._ServiceLoggerAdapter(self.progress_signal.emit, prefix=" ")
saint2_logger = self._ServiceLoggerAdapter(self.progress_signal.emit, prefix=" ")
for job in jobs_to_check:
job_type = job.get('type')
job_url = job.get('url')
if job_type in ['image', 'saint2_direct']:
enriched_jobs.append(job)
elif (job_type == 'bunkr' and self.should_dl_bunkr) or \
(job_type == 'pixeldrain' and self.should_dl_pixeldrain) or \
(job_type == 'saint2' and self.should_dl_saint2):
self.progress_signal.emit(f" -> Checking {job_type} album for file count...")
fetch_map = {
'bunkr': (fetch_bunkr_data, bunkr_logger),
'pixeldrain': (fetch_pixeldrain_data, pixeldrain_logger),
'saint2': (fetch_saint2_data, saint2_logger)
}
fetch_func, logger_adapter = fetch_map[job_type]
album_name, files = fetch_func(job_url, logger_adapter)
if files:
job['prefetched_files'] = files
job['prefetched_album_name'] = album_name
enriched_jobs.append(job)
if enriched_jobs:
summary_counts = Counter()
current_page_file_count = 0
for job in enriched_jobs:
if job.get('prefetched_files'):
file_count = len(job['prefetched_files'])
summary_counts[job['type']] += file_count
current_page_file_count += file_count
else:
summary_counts[job['type']] += 1
current_page_file_count += 1
summary_parts = [f"{job_type} ({count})" for job_type, count in summary_counts.items()]
self.progress_signal.emit(f" [SimpCity] Content Found: {' | '.join(summary_parts)}")
with self.counter_lock: self.total_jobs_found += current_page_file_count
self.overall_progress_signal.emit(self.total_jobs_found, self.total_jobs_processed)
return enriched_jobs
def _download_single_image(self, job, album_path, session):
"""Downloads one image file; this is run by the image thread pool."""
filename = job['filename']
filepath = os.path.join(album_path, filename)
try:
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip (Image): '{filename}'")
with self.counter_lock: self.total_skip_count += 1
return
self.progress_signal.emit(f" -> Downloading (Image): '{filename}'...")
response = session.get(job['url'], stream=True, timeout=90, headers={'Referer': self.start_url})
response.raise_for_status()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self.is_cancelled: break
f.write(chunk)
if not self.is_cancelled:
with self.counter_lock: self.total_dl_count += 1
except Exception as e:
self.progress_signal.emit(f" -> ❌ Image download failed for '{filename}': {e}")
with self.counter_lock: self.total_skip_count += 1
finally:
if not self.is_cancelled:
with self.counter_lock: self.total_jobs_processed += 1
self.overall_progress_signal.emit(self.total_jobs_found, self.total_jobs_processed)
def _image_worker(self, album_path):
"""Target function for the image thread pool that pulls jobs from the queue."""
session = cloudscraper.create_scraper()
while True:
if self.is_cancelled: break
try:
job = self.image_queue.get(timeout=1)
if job is None: break
self._download_single_image(job, album_path, session)
self.image_queue.task_done()
except queue.Empty:
continue
def _service_worker(self, album_path):
"""Target function for the single service thread, ensuring sequential downloads."""
while True:
if self.is_cancelled: break
try:
job = self.service_queue.get(timeout=1)
if job is None: break
job_type = job['type']
job_url = job['url']
if job_type in ['pixeldrain', 'saint2', 'bunkr']:
if (job_type == 'pixeldrain' and self.should_dl_pixeldrain) or \
(job_type == 'saint2' and self.should_dl_saint2) or \
(job_type == 'bunkr' and self.should_dl_bunkr):
self.progress_signal.emit(f"\n--- Processing Service ({job_type.capitalize()}): {job_url} ---")
self._download_album(job.get('prefetched_files', []), job_url, album_path)
elif job_type == 'mega' and self.should_dl_mega:
self.progress_signal.emit(f"\n--- Processing Service (Mega): {job_url} ---")
drive_download_mega_file(job_url, album_path, self.progress_signal.emit, self.file_progress_signal.emit)
elif job_type == 'gofile' and self.should_dl_gofile:
self.progress_signal.emit(f"\n--- Processing Service (Gofile): {job_url} ---")
download_gofile_folder(job_url, album_path, self.progress_signal.emit, self.file_progress_signal.emit)
elif job_type == 'saint2_direct' and self.should_dl_saint2:
self.progress_signal.emit(f"\n--- Processing Service (Saint2 Direct): {job_url} ---")
try:
filename = os.path.basename(urlparse(job_url).path)
filepath = os.path.join(album_path, filename)
if os.path.exists(filepath):
with self.counter_lock: self.total_skip_count += 1
else:
response = cloudscraper.create_scraper().get(job_url, stream=True, timeout=120, headers={'Referer': self.start_url})
response.raise_for_status()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self.is_cancelled: break
f.write(chunk)
if not self.is_cancelled:
with self.counter_lock: self.total_dl_count += 1
                    except Exception as e:
                        self.progress_signal.emit(f" -> ❌ Saint2 direct download failed: {e}")
                        with self.counter_lock: self.total_skip_count += 1
finally:
if not self.is_cancelled:
with self.counter_lock: self.total_jobs_processed += 1
self.overall_progress_signal.emit(self.total_jobs_found, self.total_jobs_processed)
self.service_queue.task_done()
except queue.Empty:
continue
def _download_album(self, files_to_process, source_url, album_path):
"""Helper to download all files from a pre-fetched album list."""
if not files_to_process: return
session = cloudscraper.create_scraper()
for file_data in files_to_process:
if self.is_cancelled: return
filename = file_data.get('filename') or file_data.get('name')
filepath = os.path.join(album_path, filename)
try:
if os.path.exists(filepath):
with self.counter_lock: self.total_skip_count += 1
else:
self.progress_signal.emit(f" -> Downloading: '{filename}'...")
headers = file_data.get('headers', {'Referer': source_url})
response = session.get(file_data.get('url'), stream=True, timeout=90, headers=headers)
response.raise_for_status()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self.is_cancelled: break
f.write(chunk)
if not self.is_cancelled:
with self.counter_lock: self.total_dl_count += 1
            except Exception as e:
                self.progress_signal.emit(f" -> ❌ Download failed for '{filename}': {e}")
                with self.counter_lock: self.total_skip_count += 1
finally:
if not self.is_cancelled:
with self.counter_lock: self.total_jobs_processed += 1
self.overall_progress_signal.emit(self.total_jobs_found, self.total_jobs_processed)
def run(self):
"""Main entry point for the thread, orchestrates the entire download."""
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting SimpCity Download for: {self.start_url}")
self.should_dl_pixeldrain = self.parent_app.simpcity_dl_pixeldrain_cb.isChecked()
self.should_dl_saint2 = self.parent_app.simpcity_dl_saint2_cb.isChecked()
self.should_dl_mega = self.parent_app.simpcity_dl_mega_cb.isChecked()
self.should_dl_bunkr = self.parent_app.simpcity_dl_bunkr_cb.isChecked()
self.should_dl_gofile = self.parent_app.simpcity_dl_gofile_cb.isChecked()
is_single_post_mode = self.post_id or '/post-' in self.start_url
album_path = ""
try:
if is_single_post_mode:
self.progress_signal.emit(" Mode: Single Post detected.")
album_title, _, _ = fetch_single_simpcity_page(self.start_url, self._log_interceptor, cookies=self.cookies, post_id=self.post_id)
album_path = os.path.join(self.output_dir, clean_folder_name(album_title or "simpcity_post"))
else:
self.progress_signal.emit(" Mode: Full Thread detected.")
first_page_url = re.sub(r'(/page-\d+)|(/post-\d+)', '', self.start_url).split('#')[0].strip('/')
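# e.g. .../threads/example.12345/page-4#post-678 -> .../threads/example.12345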
album_title, _, _ = fetch_single_simpcity_page(first_page_url, self._log_interceptor, cookies=self.cookies)
album_path = os.path.join(self.output_dir, clean_folder_name(album_title or "simpcity_album"))
os.makedirs(album_path, exist_ok=True)
self.progress_signal.emit(f" Saving all content to folder: '{os.path.basename(album_path)}'")
except Exception as e:
self.progress_signal.emit(f"❌ Could not process the initial page. Aborting. Error: {e}")
self.finished_signal.emit(0, 0, self.is_cancelled, []); return
service_thread = threading.Thread(target=self._service_worker, args=(album_path,), daemon=True)
service_thread.start()
num_image_threads = 15
image_executor = ThreadPoolExecutor(max_workers=num_image_threads, thread_name_prefix='SimpCityImage')
for _ in range(num_image_threads): image_executor.submit(self._image_worker, album_path)
try:
if is_single_post_mode:
_, jobs, _ = fetch_single_simpcity_page(self.start_url, self._log_interceptor, cookies=self.cookies, post_id=self.post_id)
enriched_jobs = self._get_enriched_jobs(jobs)
if enriched_jobs:
for job in enriched_jobs:
if job['type'] == 'image': self.image_queue.put(job)
else: self.service_queue.put(job)
else:
base_url = re.sub(r'(/page-\d+)|(/post-\d+)', '', self.start_url).split('#')[0].strip('/')
page_counter = 1; end_of_thread = False; MAX_RETRIES = 3
while not end_of_thread:
if self.is_cancelled: break
page_url = f"{base_url}/page-{page_counter}"; retries = 0; page_fetch_successful = False
while retries < MAX_RETRIES:
if self.is_cancelled: end_of_thread = True; break
self.progress_signal.emit(f"\n--- Analyzing page {page_counter} (Attempt {retries + 1}/{MAX_RETRIES}) ---")
try:
page_title, jobs_on_page, final_url = fetch_single_simpcity_page(page_url, self._log_interceptor, cookies=self.cookies)
if final_url != page_url:
self.progress_signal.emit(f" -> Redirect detected from {page_url} to {final_url}")
try:
req_page_match = re.search(r'/page-(\d+)', page_url)
final_page_match = re.search(r'/page-(\d+)', final_url)
if req_page_match and final_page_match and int(final_page_match.group(1)) < int(req_page_match.group(1)):
self.progress_signal.emit(" -> Redirected to an earlier page. Reached end of thread.")
end_of_thread = True
except (ValueError, TypeError):
pass
if end_of_thread:
page_fetch_successful = True; break
if page_counter > 1 and not page_title:
self.progress_signal.emit(f" -> Page {page_counter} is invalid or has no title. Reached end of thread.")
end_of_thread = True
elif not jobs_on_page:
end_of_thread = True
else:
new_jobs = [job for job in jobs_on_page if job.get('url') not in self.processed_job_urls]
if not new_jobs and page_counter > 1:
end_of_thread = True
else:
enriched_jobs = self._get_enriched_jobs(new_jobs)
for job in enriched_jobs:
self.processed_job_urls.add(job.get('url'))
if job['type'] == 'image': self.image_queue.put(job)
else: self.service_queue.put(job)
page_fetch_successful = True; break
except requests.exceptions.HTTPError as e:
if e.response.status_code in [403, 404]: end_of_thread = True; break
elif e.response.status_code == 429: time.sleep(5 * (retries + 2)); retries += 1
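# Linear backoff on rate limits: waits 10s after the first 429, 15s after the second, 20s after the third.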
else: end_of_thread = True; break
except Exception as e:
self.progress_signal.emit(f" Stopping crawl due to error on page {page_counter}: {e}"); end_of_thread = True; break
if not page_fetch_successful and not end_of_thread: end_of_thread = True
if not end_of_thread: page_counter += 1
except Exception as e:
self.progress_signal.emit(f"❌ A critical error occurred during the main fetch phase: {e}")
self.progress_signal.emit("\n--- All pages analyzed. Waiting for background downloads to complete... ---")
for _ in range(num_image_threads): self.image_queue.put(None)
self.service_queue.put(None)
image_executor.shutdown(wait=True)
service_thread.join()
self.finished_signal.emit(self.total_dl_count, self.total_skip_count, self.is_cancelled, [])
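run() tears the pipeline down with sentinel values: one None per image worker plus a single None for the service worker, so every blocked queue.get() returns before the final join. A minimal, self-contained sketch of that shutdown pattern; worker and job_queue are illustrative names, not taken from this codebase:

import queue
import threading

job_queue = queue.Queue()

def worker():
    while True:
        job = job_queue.get()
        if job is None:          # sentinel: no more work is coming
            job_queue.task_done()
            break
        # ... process the job here ...
        job_queue.task_done()

threads = [threading.Thread(target=worker, daemon=True) for _ in range(3)]
for t in threads:
    t.start()
for job in range(10):
    job_queue.put(job)
for _ in threads:                # one sentinel per worker
    job_queue.put(None)
for t in threads:
    t.join()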

View File

@@ -0,0 +1,128 @@
import os
import threading
import time
from urllib.parse import urlparse
import cloudscraper
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.toonily_client import (
fetch_chapter_data as toonily_fetch_data,
get_chapter_list as toonily_get_list
)
from ...utils.file_utils import clean_folder_name
class ToonilyDownloadThread(QThread):
"""A dedicated QThread for handling toonily.com series or single chapters."""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool)
overall_progress_signal = pyqtSignal(int, int) # Signal for chapter progress
def __init__(self, url, output_dir, parent=None):
super().__init__(parent)
self.start_url = url
self.output_dir = output_dir
self.is_cancelled = False
# Get access to the pause event from the main app
self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()
def _check_pause(self):
# Helper function to check for pause/cancel events
if self.is_cancelled: return True
if self.pause_event and self.pause_event.is_set():
self.progress_signal.emit(" Download paused...")
while self.pause_event.is_set():
if self.is_cancelled: return True
time.sleep(0.5)
self.progress_signal.emit(" Download resumed.")
return self.is_cancelled
def run(self):
grand_total_dl = 0
grand_total_skip = 0
# Check if the URL is a series or a chapter
if '/chapter-' in self.start_url:
# It's a single chapter URL
chapters_to_download = [self.start_url]
self.progress_signal.emit(" Single Toonily chapter URL detected.")
else:
# It's a series URL, so get all chapters
chapters_to_download = toonily_get_list(self.start_url, self.progress_signal.emit)
if not chapters_to_download:
self.progress_signal.emit("❌ No chapters found to download.")
self.finished_signal.emit(0, 0, self.is_cancelled)
return
self.progress_signal.emit(f"--- Starting download of {len(chapters_to_download)} chapter(s) ---")
self.overall_progress_signal.emit(len(chapters_to_download), 0)
scraper = cloudscraper.create_scraper()
for chapter_idx, chapter_url in enumerate(chapters_to_download):
if self._check_pause(): break
self.progress_signal.emit(f"\n-- Processing Chapter {chapter_idx + 1}/{len(chapters_to_download)} --")
series_title, chapter_title, image_urls = toonily_fetch_data(chapter_url, self.progress_signal.emit, scraper)
if not image_urls:
self.progress_signal.emit(f"❌ Failed to get data for chapter. Skipping.")
continue
# Create folders like: /Downloads/Series Name/Chapter 01/
series_folder_name = clean_folder_name(series_title)
# Make a safe folder name from the full chapter title
chapter_folder_name = clean_folder_name(chapter_title)
final_save_path = os.path.join(self.output_dir, series_folder_name, chapter_folder_name)
try:
os.makedirs(final_save_path, exist_ok=True)
self.progress_signal.emit(f" Saving to folder: '{os.path.join(series_folder_name, chapter_folder_name)}'")
except OSError as e:
self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
grand_total_skip += len(image_urls)
continue
for i, img_url in enumerate(image_urls):
if self._check_pause(): break
try:
file_extension = os.path.splitext(urlparse(img_url).path)[1] or '.jpg'
filename = f"{i+1:03d}{file_extension}"
filepath = os.path.join(final_save_path, filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip ({i+1}/{len(image_urls)}): '{filename}' already exists.")
grand_total_skip += 1
else:
self.progress_signal.emit(f" Downloading ({i+1}/{len(image_urls)}): '{filename}'...")
response = scraper.get(img_url, stream=True, timeout=60, headers={'Referer': chapter_url})
response.raise_for_status()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self._check_pause(): break
f.write(chunk)
if self._check_pause():
if os.path.exists(filepath): os.remove(filepath)
break
grand_total_dl += 1
time.sleep(0.2)
except Exception as e:
self.progress_signal.emit(f" ❌ Failed to download '{filename}': {e}")
grand_total_skip += 1
self.overall_progress_signal.emit(len(chapters_to_download), chapter_idx + 1)
time.sleep(1) # Wait a second between chapters
self.file_progress_signal.emit("", None)
self.finished_signal.emit(grand_total_dl, grand_total_skip, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Toonily thread.")

View File

@@ -1,18 +1,10 @@
# --- PyQt5 Imports ---
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QListWidget, QListWidgetItem,
QPushButton, QVBoxLayout
)
# --- Local Application Imports ---
# This assumes the new project structure is in place.
from ...i18n.translator import get_translation
# get_app_icon_object is defined in the main window module in this refactoring plan.
from ..main_window import get_app_icon_object
# --- Constants for Dialog Choices ---
# These were moved from main.py to be self-contained within this module's context.
CONFIRM_ADD_ALL_ACCEPTED = 1
CONFIRM_ADD_ALL_SKIP_ADDING = 2
CONFIRM_ADD_ALL_CANCEL_DOWNLOAD = 3
@@ -38,23 +30,16 @@ class ConfirmAddAllDialog(QDialog):
self.parent_app = parent_app
self.setModal(True)
self.new_filter_objects_list = new_filter_objects_list
# Default choice if the dialog is closed without a button press
self.user_choice = CONFIRM_ADD_ALL_CANCEL_DOWNLOAD
# --- Basic Window Setup ---
app_icon = get_app_icon_object()
if app_icon and not app_icon.isNull():
self.setWindowIcon(app_icon)
# Set window size dynamically
screen_height = QApplication.primaryScreen().availableGeometry().height() if QApplication.primaryScreen() else 768
scale_factor = screen_height / 768.0
base_min_w, base_min_h = 480, 350
scaled_min_w = int(base_min_w * scale_factor)
scaled_min_h = int(base_min_h * scale_factor)
self.setMinimumSize(scaled_min_w, scaled_min_h)
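# e.g. on a 1080p screen: scale_factor = 1080 / 768 ≈ 1.41, so the 480x350
# base minimum becomes 675x492.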
# --- Initialize UI and Apply Theming ---
self._init_ui()
self._retranslate_ui()
self._apply_theme()
@@ -70,8 +55,6 @@ class ConfirmAddAllDialog(QDialog):
self.names_list_widget = QListWidget()
self._populate_list()
main_layout.addWidget(self.names_list_widget)
# --- Selection Buttons ---
selection_buttons_layout = QHBoxLayout()
self.select_all_button = QPushButton()
self.select_all_button.clicked.connect(self._select_all_items)
@@ -82,8 +65,6 @@ class ConfirmAddAllDialog(QDialog):
selection_buttons_layout.addWidget(self.deselect_all_button)
selection_buttons_layout.addStretch()
main_layout.addLayout(selection_buttons_layout)
# --- Action Buttons ---
buttons_layout = QHBoxLayout()
self.add_selected_button = QPushButton()
self.add_selected_button.clicked.connect(self._accept_add_selected)
@@ -171,7 +152,6 @@ class ConfirmAddAllDialog(QDialog):
sensible default if no items are selected but the "Add" button is clicked.
"""
super().exec_()
# If the user clicked "Add Selected" but didn't select any items, treat it as skipping.
if isinstance(self.user_choice, list) and not self.user_choice:
return CONFIRM_ADD_ALL_SKIP_ADDING
return self.user_choice
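# Illustrative call site (not from this file): exec_() returns either the
# list of accepted filter objects or one of the CONFIRM_ADD_ALL_* constants.
#   choice = ConfirmAddAllDialog(new_filters, app).exec_()
#   if choice == CONFIRM_ADD_ALL_CANCEL_DOWNLOAD:
#       return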

View File

@@ -16,7 +16,6 @@ class CookieHelpDialog(QDialog):
It can be displayed as a simple informational popup or as a modal choice
when cookies are required but not found.
"""
# Constants to define the user's choice from the dialog
CHOICE_PROCEED_WITHOUT_COOKIES = 1
CHOICE_CANCEL_DOWNLOAD = 2
CHOICE_OK_INFO_ONLY = 3
@@ -64,7 +63,6 @@ class CookieHelpDialog(QDialog):
button_layout.addStretch(1)
if self.offer_download_without_option:
# Add buttons for making a choice
self.download_without_button = QPushButton()
self.download_without_button.clicked.connect(self._proceed_without_cookies)
button_layout.addWidget(self.download_without_button)
@@ -73,7 +71,6 @@ class CookieHelpDialog(QDialog):
self.cancel_button.clicked.connect(self._cancel_download)
button_layout.addWidget(self.cancel_button)
else:
# Add a simple OK button for informational display
self.ok_button = QPushButton()
self.ok_button.clicked.connect(self._ok_info_only)
button_layout.addWidget(self.ok_button)

View File

@@ -0,0 +1,89 @@
from PyQt5.QtWidgets import (
QDialog, QVBoxLayout, QHBoxLayout, QLabel, QLineEdit, QPushButton,
QDialogButtonBox, QTextEdit
)
from PyQt5.QtCore import Qt
class CustomFilenameDialog(QDialog):
"""A dialog for creating a custom filename format string."""
# --- REPLACE THE 'AVAILABLE_KEYS' LIST WITH THIS DICTIONARY ---
DISPLAY_KEY_MAP = {
"PostID": "id",
"CreatorName": "creator_name",
"service": "service",
"title": "title",
"added": "added",
"published": "published",
"edited": "edited",
"name": "name"
}
def __init__(self, current_format, current_date_format, parent=None):
super().__init__(parent)
self.setWindowTitle("Custom Filename Format")
self.setMinimumWidth(500)
self.current_format = current_format
self.current_date_format = current_date_format
# --- Main Layout ---
layout = QVBoxLayout(self)
# --- Description ---
description_label = QLabel(
"Create a filename format using placeholders. The date/time values for 'added', 'published', and 'edited' will be automatically shortened to your specified format."
)
description_label.setWordWrap(True)
layout.addWidget(description_label)
# --- Format Input ---
format_label = QLabel("Filename Format:")
layout.addWidget(format_label)
self.format_input = QLineEdit(self)
self.format_input.setText(self.current_format)
self.format_input.setPlaceholderText("e.g., {published} {title} {id}")
layout.addWidget(self.format_input)
# --- Date Format Input ---
date_format_label = QLabel("Date Format (for {added}, {published}, {edited}):")
layout.addWidget(date_format_label)
self.date_format_input = QLineEdit(self)
self.date_format_input.setText(self.current_date_format)
self.date_format_input.setPlaceholderText("e.g., YYYY-MM-DD or DD-MM-YYYY")
layout.addWidget(self.date_format_input)
# --- Available Keys Display ---
keys_label = QLabel("Click to add a placeholder:")
layout.addWidget(keys_label)
keys_layout = QHBoxLayout()
keys_layout.setSpacing(5)
for display_key, internal_key in self.DISPLAY_KEY_MAP.items():
key_button = QPushButton(f"{{{display_key}}}")
# Use a lambda to pass the correct internal key when the button is clicked
key_button.clicked.connect(lambda checked, key=internal_key: self.add_key_to_input(key))
keys_layout.addWidget(key_button)
keys_layout.addStretch()
layout.addLayout(keys_layout)
# --- OK/Cancel Buttons ---
button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel)
button_box.accepted.connect(self.accept)
button_box.rejected.connect(self.reject)
layout.addWidget(button_box)
def add_key_to_input(self, key_to_insert):
"""Adds the corresponding internal key placeholder to the input field."""
self.format_input.insert(f" {{{key_to_insert}}} ")
self.format_input.setFocus()
def get_format_string(self):
"""Returns the final format string from the input field."""
return self.format_input.text().strip()
def get_date_format_string(self):
"""Returns the date format string from its input field."""
return self.date_format_input.text().strip()
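A hedged sketch of how a template and date format returned by this dialog might be applied downstream; render_filename and the YYYY/MM/DD-to-strftime translation are illustrative assumptions, not the application's actual rendering code:

from datetime import datetime

def render_filename(template: str, date_format: str, post: dict) -> str:
    # Translate the user-facing date tokens into strftime directives (assumed mapping).
    strftime_fmt = (date_format.replace("YYYY", "%Y")
                               .replace("MM", "%m")
                               .replace("DD", "%d"))
    values = dict(post)
    for key in ("added", "published", "edited"):
        if values.get(key):
            values[key] = datetime.fromisoformat(values[key]).strftime(strftime_fmt)
    return template.format(**values)

post = {"id": "12345", "title": "Sketch Dump", "published": "2025-08-03T06:01:32"}
print(render_filename("{published} {title} {id}", "YYYY-MM-DD", post))
# -> 2025-08-03 Sketch Dump 12345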

View File

@@ -1,14 +1,9 @@
# --- Standard Library Imports ---
from collections import defaultdict
# --- PyQt5 Imports ---
from PyQt5.QtCore import pyqtSignal, Qt
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QListWidget, QListWidgetItem,
QMessageBox, QPushButton, QVBoxLayout, QAbstractItemView
)
# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ..main_window import get_app_icon_object
from ...utils.resolution import get_dark_theme
@@ -18,8 +13,6 @@ class DownloadExtractedLinksDialog(QDialog):
A dialog to select and initiate the download for extracted, supported links
from external cloud services like Mega, Google Drive, and Dropbox.
"""
# Signal emitted with a list of selected link information dictionaries
download_requested = pyqtSignal(list)
def __init__(self, links_data, parent_app, parent=None):
@@ -34,23 +27,13 @@ class DownloadExtractedLinksDialog(QDialog):
super().__init__(parent)
self.links_data = links_data
self.parent_app = parent_app
# --- Basic Window Setup ---
app_icon = get_app_icon_object()
if not app_icon.isNull():
self.setWindowIcon(app_icon)
# --- START OF FIX ---
# Get the user-defined scale factor from the parent application.
scale_factor = getattr(self.parent_app, 'scale_factor', 1.0)
# Define base dimensions and apply the correct scale factor.
base_width, base_height = 600, 450
self.setMinimumSize(int(base_width * scale_factor), int(base_height * scale_factor))
self.resize(int(base_width * scale_factor * 1.1), int(base_height * scale_factor * 1.1))
# --- END OF FIX ---
# --- Initialize UI and Apply Theming ---
self._init_ui()
self._retranslate_ui()
self._apply_theme()
@@ -68,8 +51,6 @@ class DownloadExtractedLinksDialog(QDialog):
self.links_list_widget.setSelectionMode(QAbstractItemView.NoSelection)
self._populate_list()
layout.addWidget(self.links_list_widget)
# --- Control Buttons ---
button_layout = QHBoxLayout()
self.select_all_button = QPushButton()
self.select_all_button.clicked.connect(lambda: self._set_all_items_checked(Qt.Checked))
@@ -100,7 +81,6 @@ class DownloadExtractedLinksDialog(QDialog):
sorted_post_titles = sorted(grouped_links.keys(), key=lambda x: x.lower())
for post_title_key in sorted_post_titles:
# Add a non-selectable header for each post
header_item = QListWidgetItem(f"{post_title_key}")
header_item.setFlags(Qt.NoItemFlags)
font = header_item.font()
@@ -108,8 +88,6 @@ class DownloadExtractedLinksDialog(QDialog):
font.setPointSize(font.pointSize() + 1)
header_item.setFont(font)
self.links_list_widget.addItem(header_item)
# Add checkable items for each link within that post
for link_info_data in grouped_links[post_title_key]:
platform_display = link_info_data.get('platform', 'unknown').upper()
display_text = f" [{platform_display}] {link_info_data['link_text']} ({link_info_data['url']})"
@@ -139,19 +117,13 @@ class DownloadExtractedLinksDialog(QDialog):
is_dark_theme = self.parent_app and self.parent_app.current_theme == "dark"
if is_dark_theme:
# Get the scale factor from the parent app
scale = getattr(self.parent_app, 'scale_factor', 1)
# Call the imported function with the correct scale
self.setStyleSheet(get_dark_theme(scale))
else:
# Explicitly set a blank stylesheet for light mode
self.setStyleSheet("")
# Set header text color based on theme
header_color = Qt.cyan if is_dark_theme else Qt.blue
for i in range(self.links_list_widget.count()):
item = self.links_list_widget.item(i)
# Headers are not checkable (they have no checkable flag)
if not item.flags() & Qt.ItemIsUserCheckable:
item.setForeground(header_color)

View File

@@ -1,16 +1,12 @@
# --- Standard Library Imports ---
import os
import time
import json
# --- PyQt5 Imports ---
from PyQt5.QtCore import Qt, QStandardPaths, QTimer
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QScrollArea,
QPushButton, QVBoxLayout, QSplitter, QWidget, QGroupBox,
QFileDialog, QMessageBox
)
# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ..main_window import get_app_icon_object
from ...utils.resolution import get_dark_theme
@@ -25,17 +21,14 @@ class DownloadHistoryDialog (QDialog ):
self .first_processed_entries =first_processed_entries
self .setModal (True )
self._apply_theme()
# Patch missing creator_display_name and creator_name using parent_app.creator_name_cache if available
creator_name_cache = getattr(parent_app, 'creator_name_cache', None)
if creator_name_cache:
# Patch left pane (files)
for entry in self.last_3_downloaded_entries:
if not entry.get('creator_display_name'):
service = entry.get('service', '').lower()
user_id = str(entry.get('user_id', ''))
key = (service, user_id)
entry['creator_display_name'] = creator_name_cache.get(key, entry.get('folder_context_name', 'Unknown Creator/Series'))
# Patch right pane (posts)
for entry in self.first_processed_entries:
if not entry.get('creator_name'):
service = entry.get('service', '').lower()

View File

@@ -13,7 +13,7 @@ from PyQt5.QtCore import pyqtSignal, QCoreApplication, QSize, QThread, QTimer, Q
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QLineEdit, QListWidget,
QListWidgetItem, QMessageBox, QPushButton, QVBoxLayout, QAbstractItemView,
QSplitter, QProgressBar, QWidget
QSplitter, QProgressBar, QWidget, QFileDialog
)
# --- Local Application Imports ---
@@ -151,6 +151,8 @@ class EmptyPopupDialog (QDialog ):
app_icon =get_app_icon_object ()
if app_icon and not app_icon .isNull ():
self .setWindowIcon (app_icon )
self.update_profile_data = None
self.update_creator_name = None
self .selected_creators_for_queue =[]
self .globally_selected_creators ={}
self .fetched_posts_data ={}
@@ -205,6 +207,9 @@ class EmptyPopupDialog (QDialog ):
self .scope_button .clicked .connect (self ._toggle_scope_mode )
left_bottom_buttons_layout .addWidget (self .scope_button )
left_pane_layout .addLayout (left_bottom_buttons_layout )
self.update_button = QPushButton()
self.update_button.clicked.connect(self._handle_update_check)
left_bottom_buttons_layout.addWidget(self.update_button)
self .right_pane_widget =QWidget ()
@@ -315,6 +320,31 @@ class EmptyPopupDialog (QDialog ):
except AttributeError :
pass
def _handle_update_check(self):
"""Opens a dialog to select a creator profile and loads it for an update session."""
appdata_dir = os.path.join(self.app_base_dir, "appdata")
profiles_dir = os.path.join(appdata_dir, "creator_profiles")
if not os.path.isdir(profiles_dir):
QMessageBox.warning(self, "Directory Not Found", f"The creator profiles directory does not exist yet.\n\nPath: {profiles_dir}")
return
filepath, _ = QFileDialog.getOpenFileName(self, "Select Creator Profile for Update", profiles_dir, "JSON Files (*.json)")
if filepath:
try:
with open(filepath, 'r', encoding='utf-8') as f:
data = json.load(f)
if 'creator_url' not in data or 'processed_post_ids' not in data:
raise ValueError("Invalid profile format.")
self.update_profile_data = data
self.update_creator_name = os.path.basename(filepath).replace('.json', '')
self.accept() # Close the dialog and signal success
except Exception as e:
QMessageBox.critical(self, "Error Loading Profile", f"Could not load or parse the selected profile file:\n\n{e}")
def _handle_fetch_posts_click (self ):
selected_creators =list (self .globally_selected_creators .values ())
print(f"[DEBUG] Selected creators for fetch: {selected_creators}")
@@ -370,6 +400,7 @@ class EmptyPopupDialog (QDialog ):
self .add_selected_button .setText (self ._tr ("creator_popup_add_selected_button","Add Selected"))
self .fetch_posts_button .setText (self ._tr ("fetch_posts_button_text","Fetch Posts"))
self ._update_scope_button_text_and_tooltip ()
self.update_button.setText(self._tr("check_for_updates_button", "Check for Updates"))
self .posts_search_input .setPlaceholderText (self ._tr ("creator_popup_posts_search_placeholder","Search fetched posts by title..."))
@@ -929,15 +960,19 @@ class EmptyPopupDialog (QDialog ):
self .parent_app .log_signal .emit (f" Added {num_just_added_posts } selected posts to the download queue. Total in queue: {total_in_queue }.")
# --- START: MODIFIED LOGIC ---
# Block signals only around setText so the placeholder update below still refreshes the main window's UI.
if self .parent_app .link_input :
self .parent_app .link_input .blockSignals (True )
self .parent_app .link_input .setText (
self .parent_app ._tr ("popup_posts_selected_text","Posts - {count} selected").format (count =num_just_added_posts )
)
self .parent_app .link_input .blockSignals (False )
self .parent_app .link_input .setPlaceholderText (
self .parent_app ._tr ("items_in_queue_placeholder","{count} items in queue from popup.").format (count =total_in_queue )
)
# --- END: MODIFIED LOGIC ---
self.selected_creators_for_queue.clear()
self .accept ()
else :
QMessageBox .information (self ,self ._tr ("no_selection_title","No Selection"),
@@ -955,9 +990,6 @@ class EmptyPopupDialog (QDialog ):
self .add_selected_button .setEnabled (True )
self .setWindowTitle (self ._tr ("creator_popup_title","Creator Selection"))
def _get_domain_for_service (self ,service_name ):
"""Determines the base domain for a given service."""
service_lower =service_name .lower ()
@@ -1003,4 +1035,4 @@ class EmptyPopupDialog (QDialog ):
else :
if unique_key in self .globally_selected_creators :
del self .globally_selected_creators [unique_key ]
self .fetch_posts_button .setEnabled (bool (self .globally_selected_creators ))

View File

@@ -2,73 +2,55 @@
from PyQt5.QtCore import pyqtSignal, Qt
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QListWidget, QListWidgetItem,
QMessageBox, QPushButton, QVBoxLayout, QAbstractItemView, QFileDialog
QMessageBox, QPushButton, QVBoxLayout, QAbstractItemView, QFileDialog, QCheckBox
)
# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ..assets import get_app_icon_object
# Corrected Import: The filename uses PascalCase.
from .ExportOptionsDialog import ExportOptionsDialog
from ...utils.resolution import get_dark_theme
from ...config.constants import AUTO_RETRY_ON_FINISH_KEY
class ErrorFilesDialog(QDialog):
"""
Dialog to display files that were skipped due to errors and
allows the user to retry downloading them or export the list of URLs.
"""
# Signal emitted with a list of file info dictionaries to retry
retry_selected_signal = pyqtSignal(list)
def __init__(self, error_files_info_list, parent_app, parent=None):
"""
Initializes the dialog.
Args:
error_files_info_list (list): A list of dictionaries, each containing
info about a failed file.
parent_app (DownloaderApp): A reference to the main application window
for theming and translations.
parent (QWidget, optional): The parent widget. Defaults to None.
"""
super().__init__(parent)
self.parent_app = parent_app
self.setModal(True)
self.error_files = error_files_info_list
# --- Basic Window Setup ---
app_icon = get_app_icon_object()
if app_icon and not app_icon.isNull():
self.setWindowIcon(app_icon)
scale_factor = getattr(self.parent_app, 'scale_factor', 1.0)
base_width, base_height = 550, 400
base_width, base_height = 600, 450
self.setMinimumSize(int(base_width * scale_factor), int(base_height * scale_factor))
self.resize(int(base_width * scale_factor * 1.1), int(base_height * scale_factor * 1.1))
# --- Initialize UI and Apply Theming ---
self._init_ui()
self._retranslate_ui()
self._apply_theme()
def _init_ui(self):
"""Initializes all UI components and layouts for the dialog."""
main_layout = QVBoxLayout(self)
self.info_label = QLabel()
self.info_label.setWordWrap(True)
main_layout.addWidget(self.info_label)
if self.error_files:
self.files_list_widget = QListWidget()
self.files_list_widget.setSelectionMode(QAbstractItemView.NoSelection)
self._populate_list()
main_layout.addWidget(self.files_list_widget)
self.files_list_widget = QListWidget()
self.files_list_widget.setSelectionMode(QAbstractItemView.ExtendedSelection)
main_layout.addWidget(self.files_list_widget)
self._populate_list()
# --- Control Buttons ---
buttons_layout = QHBoxLayout()
self.select_all_button = QPushButton()
self.select_all_button.clicked.connect(self._select_all_items)
buttons_layout.addWidget(self.select_all_button)
@@ -77,94 +59,170 @@ class ErrorFilesDialog(QDialog):
self.retry_button.clicked.connect(self._handle_retry_selected)
buttons_layout.addWidget(self.retry_button)
self.load_button = QPushButton()
self.load_button.clicked.connect(self._handle_load_errors_from_txt)
buttons_layout.addWidget(self.load_button)
self.export_button = QPushButton()
self.export_button.clicked.connect(self._handle_export_errors_to_txt)
buttons_layout.addWidget(self.export_button)
# The stretch will push everything added after this point to the right
buttons_layout.addStretch(1)
# --- MOVED: Auto Retry Checkbox ---
self.auto_retry_checkbox = QCheckBox()
auto_retry_enabled = self.parent_app.settings.value(AUTO_RETRY_ON_FINISH_KEY, False, type=bool)
self.auto_retry_checkbox.setChecked(auto_retry_enabled)
self.auto_retry_checkbox.toggled.connect(self._save_auto_retry_setting)
buttons_layout.addWidget(self.auto_retry_checkbox)
# --- END ---
self.ok_button = QPushButton()
self.ok_button.clicked.connect(self.accept)
self.ok_button.setDefault(True)
buttons_layout.addWidget(self.ok_button)
main_layout.addLayout(buttons_layout)
# Enable/disable buttons based on whether there are errors
has_errors = bool(self.error_files)
self.select_all_button.setEnabled(has_errors)
self.retry_button.setEnabled(has_errors)
self.export_button.setEnabled(has_errors)
def _populate_list(self):
"""Populates the list widget with details of the failed files."""
self.files_list_widget.clear()
for error_info in self.error_files:
filename = error_info.get('forced_filename_override',
error_info.get('file_info', {}).get('name', 'Unknown Filename'))
post_title = error_info.get('post_title', 'Unknown Post')
post_id = error_info.get('original_post_id_for_log', 'N/A')
self._add_item_to_list(error_info)
item_text = f"File: {filename}\nFrom Post: '{post_title}' (ID: {post_id})"
list_item = QListWidgetItem(item_text)
list_item.setData(Qt.UserRole, error_info)
list_item.setFlags(list_item.flags() | Qt.ItemIsUserCheckable)
list_item.setCheckState(Qt.Unchecked)
self.files_list_widget.addItem(list_item)
def _handle_load_errors_from_txt(self):
"""Opens a file dialog to load URLs from a .txt file."""
import re
filepath, _ = QFileDialog.getOpenFileName(
self,
self._tr("error_files_load_dialog_title", "Load Error File URLs"),
"",
"Text Files (*.txt);;All Files (*)"
)
if not filepath:
return
try:
detailed_pattern = re.compile(r"^(https?://[^\s]+)\s*\[Post: '(.*?)' \(ID: (.*?)\), File: '(.*?)'\]$")
simple_pattern = re.compile(r'^(https?://[^\s]+)')
with open(filepath, 'r', encoding='utf-8') as f:
for line in f:
line = line.strip()
if not line: continue
url, post_title, post_id, filename = None, 'Loaded from .txt', 'N/A', None
detailed_match = detailed_pattern.match(line)
if detailed_match:
url, post_title, post_id, filename = detailed_match.groups()
else:
simple_match = simple_pattern.match(line)
if simple_match:
url = simple_match.group(1)
filename = url.split('/')[-1]
if url:
simple_error_info = {
'is_loaded_from_txt': True, 'file_info': {'url': url, 'name': filename},
'post_title': post_title, 'original_post_id_for_log': post_id,
'target_folder_path': self.parent_app.dir_input.text().strip(),
'forced_filename_override': filename, 'file_index_in_post': 0,
'num_files_in_this_post': 1, 'service': None, 'user_id': None, 'api_url_input': ''
}
self.error_files.append(simple_error_info)
self._add_item_to_list(simple_error_info)
self.info_label.setText(self._tr("error_files_found_label", "The following {count} file(s)...").format(count=len(self.error_files)))
has_errors = bool(self.error_files)
self.select_all_button.setEnabled(has_errors)
self.retry_button.setEnabled(has_errors)
self.export_button.setEnabled(has_errors)
except Exception as e:
QMessageBox.critical(self, self._tr("error_files_load_error_title", "Load Error"),
self._tr("error_files_load_error_message", "Could not load or parse the file: {error}").format(error=str(e)))
def _tr(self, key, default_text=""):
"""Helper to get translation based on the main application's current language."""
if callable(get_translation) and self.parent_app:
return get_translation(self.parent_app.current_selected_language, key, default_text)
return default_text
def _retranslate_ui(self):
"""Sets the text for all translatable UI elements."""
self.setWindowTitle(self._tr("error_files_dialog_title", "Files Skipped Due to Errors"))
if not self.error_files:
self.info_label.setText(self._tr("error_files_no_errors_label", "No files were recorded as skipped..."))
else:
self.info_label.setText(self._tr("error_files_found_label", "The following {count} file(s)...").format(count=len(self.error_files)))
self.select_all_button.setText(self._tr("error_files_select_all_button", "Select All"))
self.auto_retry_checkbox.setText(self._tr("error_files_auto_retry_checkbox", "Auto Retry at End"))
self.select_all_button.setText(self._tr("error_files_select_all_button", "Select/Deselect All"))
self.retry_button.setText(self._tr("error_files_retry_selected_button", "Retry Selected"))
self.load_button.setText(self._tr("error_files_load_urls_button", "Load URLs from .txt"))
self.export_button.setText(self._tr("error_files_export_urls_button", "Export URLs to .txt"))
self.ok_button.setText(self._tr("ok_button", "OK"))
def _apply_theme(self):
"""Applies the current theme from the parent application."""
if self.parent_app and self.parent_app.current_theme == "dark":
# Get the scale factor from the parent app
scale = getattr(self.parent_app, 'scale_factor', 1)
# Call the imported function with the correct scale
self.setStyleSheet(get_dark_theme(scale))
else:
# Explicitly set a blank stylesheet for light mode
self.setStyleSheet("")
def _save_auto_retry_setting(self, checked):
"""Saves the state of the auto-retry checkbox to QSettings."""
self.parent_app.settings.setValue(AUTO_RETRY_ON_FINISH_KEY, checked)
def _add_item_to_list(self, error_info):
"""Creates and adds a single QListWidgetItem based on error_info content."""
if error_info.get('is_loaded_from_txt'):
filename = error_info.get('file_info', {}).get('name', 'Unknown Filename')
post_title = error_info.get('post_title', 'N/A')
post_id = error_info.get('original_post_id_for_log', 'N/A')
item_text = f"File: {filename}\nPost: '{post_title}' (ID: {post_id}) [Loaded from .txt]"
else:
filename = error_info.get('forced_filename_override', error_info.get('file_info', {}).get('name', 'Unknown Filename'))
post_title = error_info.get('post_title', 'Unknown Post')
post_id = error_info.get('original_post_id_for_log', 'N/A')
creator_name = "Unknown Creator"
service, user_id = error_info.get('service'), error_info.get('user_id')
if service and user_id and hasattr(self.parent_app, 'creator_name_cache'):
creator_name = self.parent_app.creator_name_cache.get((service.lower(), str(user_id)), user_id)
item_text = f"File: {filename}\nCreator: {creator_name} - Post: '{post_title}' (ID: {post_id})"
list_item = QListWidgetItem(item_text)
list_item.setData(Qt.UserRole, error_info)
list_item.setFlags(list_item.flags() | Qt.ItemIsUserCheckable)
list_item.setCheckState(Qt.Unchecked) # Start as unchecked
self.files_list_widget.addItem(list_item)
def _select_all_items(self):
"""Checks all items in the list."""
if hasattr(self, 'files_list_widget'):
for i in range(self.files_list_widget.count()):
self.files_list_widget.item(i).setCheckState(Qt.Checked)
"""Toggles checking all items in the list."""
# Determine if we should check or uncheck all based on the first item's state
is_currently_checked = self.files_list_widget.item(0).checkState() == Qt.Checked if self.files_list_widget.count() > 0 else False
new_state = Qt.Unchecked if is_currently_checked else Qt.Checked
for i in range(self.files_list_widget.count()):
self.files_list_widget.item(i).setCheckState(new_state)
def _handle_retry_selected(self):
"""Gathers selected files and emits the retry signal."""
if not hasattr(self, 'files_list_widget'):
return
selected_files_for_retry = [
self.files_list_widget.item(i).data(Qt.UserRole)
for i in range(self.files_list_widget.count())
if self.files_list_widget.item(i).checkState() == Qt.Checked
]
if selected_files_for_retry:
self.retry_selected_signal.emit(selected_files_for_retry)
self.accept()
else:
QMessageBox.information(
self,
self._tr("fav_artists_no_selection_title", "No Selection"),
self._tr("error_files_no_selection_retry_message", "Please select at least one file to retry.")
)
QMessageBox.information(self, self._tr("fav_artists_no_selection_title", "No Selection"),
self._tr("error_files_no_selection_retry_message", "Please check the box next to at least one file to retry."))
def _handle_export_errors_to_txt(self):
"""Exports the URLs of failed files to a text file."""
@@ -189,10 +247,13 @@ class ErrorFilesDialog(QDialog):
if url:
if export_option == ExportOptionsDialog.EXPORT_MODE_WITH_DETAILS:
original_filename = file_info.get('name', 'Unknown Filename')
post_title = error_item.get('post_title', 'Unknown Post')
post_id = error_item.get('original_post_id_for_log', 'N/A')
details_string = f" [Post: '{post_title}' (ID: {post_id}), File: '{original_filename}']"
# Prioritize the final renamed filename, but fall back to the original from the API
filename_to_display = error_item.get('forced_filename_override') or file_info.get('name', 'Unknown Filename')
details_string = f" [Post: '{post_title}' (ID: {post_id}), File: '{filename_to_display}']"
lines_to_export.append(f"{url}{details_string}")
else:
lines_to_export.append(url)
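The detailed line format written here is the same shape _handle_load_errors_from_txt parses back in with detailed_pattern, so an exported list can be re-imported. A small round-trip check under that assumption (the sample URL and values are made up):

import re

detailed_pattern = re.compile(
    r"^(https?://[^\s]+)\s*\[Post: '(.*?)' \(ID: (.*?)\), File: '(.*?)'\]$")

line = "https://example.com/data/f.zip [Post: 'My Post' (ID: 42), File: 'f.zip']"
m = detailed_pattern.match(line)
assert m and m.groups() == ("https://example.com/data/f.zip", "My Post", "42", "f.zip")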

View File

@@ -0,0 +1,226 @@
import os
import json
import re
from collections import defaultdict
from PyQt5.QtWidgets import (
QApplication, QWidget, QLabel, QLineEdit, QTextEdit, QPushButton,
QVBoxLayout, QHBoxLayout, QFileDialog, QMessageBox, QListWidget, QRadioButton,
QButtonGroup, QCheckBox, QSplitter, QGroupBox, QDialog, QStackedWidget,
QScrollArea, QListWidgetItem, QSizePolicy, QProgressBar, QAbstractItemView, QFrame,
QMainWindow, QAction, QGridLayout,
)
from PyQt5.QtCore import Qt
class ExportLinksDialog(QDialog):
"""
A dialog for exporting extracted links with various format options, including custom templates.
"""
def __init__(self, links_data, parent=None):
super().__init__(parent)
self.links_data = links_data
self.setWindowTitle("Export Extracted Links")
self.setMinimumWidth(550)
self._setup_ui()
self._update_options_visibility()
def _setup_ui(self):
"""Initializes the UI components of the dialog."""
main_layout = QVBoxLayout(self)
# Format Selection (Top Level)
format_group = QGroupBox("Export Format")
format_layout = QHBoxLayout()
self.radio_txt = QRadioButton("Plain Text (.txt)")
self.radio_json = QRadioButton("JSON (.json)")
self.radio_txt.setChecked(True)
format_layout.addWidget(self.radio_txt)
format_layout.addWidget(self.radio_json)
format_group.setLayout(format_layout)
main_layout.addWidget(format_group)
# TXT Options Group
self.txt_options_group = QGroupBox("TXT Options")
txt_options_layout = QVBoxLayout()
self.txt_mode_group = QButtonGroup(self)
self.radio_simple = QRadioButton("Simple (URL only, one per line)")
self.radio_detailed = QRadioButton("Detailed (with checkboxes)")
self.radio_custom = QRadioButton("Custom Format Template")
self.txt_mode_group.addButton(self.radio_simple)
self.txt_mode_group.addButton(self.radio_detailed)
self.txt_mode_group.addButton(self.radio_custom)
txt_options_layout.addWidget(self.radio_simple)
txt_options_layout.addWidget(self.radio_detailed)
self.detailed_options_widget = QWidget()
detailed_layout = QVBoxLayout(self.detailed_options_widget)
detailed_layout.setContentsMargins(20, 5, 0, 5)
self.check_include_titles = QCheckBox("Include post titles as separators")
self.check_include_link_text = QCheckBox("Include link text/description")
self.check_include_platform = QCheckBox("Include platform (e.g., Mega, GDrive)")
detailed_layout.addWidget(self.check_include_titles)
detailed_layout.addWidget(self.check_include_link_text)
detailed_layout.addWidget(self.check_include_platform)
txt_options_layout.addWidget(self.detailed_options_widget)
txt_options_layout.addWidget(self.radio_custom)
self.custom_format_widget = QWidget()
custom_layout = QVBoxLayout(self.custom_format_widget)
custom_layout.setContentsMargins(20, 5, 0, 5)
placeholders_label = QLabel("Available placeholders: <b>{url} {post_title} {link_text} {platform} {key}</b>")
self.custom_format_input = QTextEdit()
self.custom_format_input.setAcceptRichText(False)
self.custom_format_input.setPlaceholderText("Enter your format, e.g., ({url}) or Title: {post_title}\\nLink: {url}")
self.custom_format_input.setText("{url}")
self.custom_format_input.setFixedHeight(80)
custom_layout.addWidget(placeholders_label)
custom_layout.addWidget(self.custom_format_input)
txt_options_layout.addWidget(self.custom_format_widget)
separator = QLabel("-" * 70)
txt_options_layout.addWidget(separator)
self.check_separate_files = QCheckBox("Save each platform to a separate file (e.g., export_mega.txt)")
txt_options_layout.addWidget(self.check_separate_files)
self.txt_options_group.setLayout(txt_options_layout)
main_layout.addWidget(self.txt_options_group)
# File Path Selection
path_layout = QHBoxLayout()
self.path_input = QLineEdit()
self.browse_button = QPushButton("Browse...")
path_layout.addWidget(self.path_input)
path_layout.addWidget(self.browse_button)
main_layout.addLayout(path_layout)
# Action Buttons
button_layout = QHBoxLayout()
button_layout.addStretch(1)
self.export_button = QPushButton("Export")
self.cancel_button = QPushButton("Cancel")
button_layout.addWidget(self.export_button)
button_layout.addWidget(self.cancel_button)
main_layout.addLayout(button_layout)
# Connections
self.radio_txt.toggled.connect(self._update_options_visibility)
self.radio_simple.toggled.connect(self._update_options_visibility)
self.radio_detailed.toggled.connect(self._update_options_visibility)
self.radio_custom.toggled.connect(self._update_options_visibility)
self.browse_button.clicked.connect(self._browse)
self.export_button.clicked.connect(self._accept_and_export)
self.cancel_button.clicked.connect(self.reject)
self.radio_simple.setChecked(True)
def _update_options_visibility(self):
is_txt = self.radio_txt.isChecked()
self.txt_options_group.setVisible(is_txt)
self.detailed_options_widget.setVisible(is_txt and self.radio_detailed.isChecked())
self.custom_format_widget.setVisible(is_txt and self.radio_custom.isChecked())
def _browse(self):
is_separate_files_mode = self.radio_txt.isChecked() and self.check_separate_files.isChecked()
if is_separate_files_mode:
dir_path = QFileDialog.getExistingDirectory(self, "Select Folder to Save Files")
if dir_path:
self.path_input.setText(os.path.join(dir_path, "exported_links"))
else:
default_filename = "exported_links"
file_filter = "Text Files (*.txt)"
if self.radio_json.isChecked():
default_filename += ".json"
file_filter = "JSON Files (*.json)"
else:
default_filename += ".txt"
filepath, _ = QFileDialog.getSaveFileName(self, "Save Links", default_filename, file_filter)
if filepath:
self.path_input.setText(filepath)
def _accept_and_export(self):
filepath = self.path_input.text().strip()
if not filepath:
QMessageBox.warning(self, "Input Error", "Please select a file path or folder.")
return
try:
if self.radio_txt.isChecked():
self._write_txt_file(filepath)
else:
self._write_json_file(filepath)
QMessageBox.information(self, "Export Successful", "Links successfully exported!")
self.accept()
except OSError as e:
QMessageBox.critical(self, "Export Error", f"Could not write to file:\n{e}")
def _write_txt_file(self, base_filepath):
if self.check_separate_files.isChecked():
links_by_platform = defaultdict(list)
for _, _, link_url, platform, _ in self.links_data:
sanitized_platform = re.sub(r'[<>:"/\\|?*]', '_', platform.lower().replace(' ', '_'))
links_by_platform[sanitized_platform].append(link_url)
base, ext = os.path.splitext(base_filepath)
if not ext: ext = ".txt"
for platform_key, links in links_by_platform.items():
platform_filepath = f"{base}_{platform_key}{ext}"
with open(platform_filepath, 'w', encoding='utf-8') as f:
for url in links:
f.write(url + "\n")
return
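# e.g. with base path "exports/links.txt" and platforms "Mega" and "GDrive",
# this writes exports/links_mega.txt and exports/links_gdrive.txt.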
with open(base_filepath, 'w', encoding='utf-8') as f:
if self.radio_simple.isChecked():
for _, _, link_url, _, _ in self.links_data:
f.write(link_url + "\n")
elif self.radio_detailed.isChecked():
include_titles = self.check_include_titles.isChecked()
include_text = self.check_include_link_text.isChecked()
include_platform = self.check_include_platform.isChecked()
current_title = None
for post_title, link_text, link_url, platform, _ in self.links_data:
if include_titles and post_title != current_title:
if current_title is not None: f.write("\n" + "="*60 + "\n\n")
f.write(f"# Post: {post_title}\n")
current_title = post_title
line_parts = [link_url]
if include_platform: line_parts.append(f"Platform: {platform}")
if include_text and link_text: line_parts.append(f"Description: {link_text}")
f.write(" | ".join(line_parts) + "\n")
elif self.radio_custom.isChecked():
template = self.custom_format_input.toPlainText().replace("\\n", "\n")
for post_title, link_text, link_url, platform, decryption_key in self.links_data:
formatted_line = template.format(
url=link_url,
post_title=post_title,
link_text=link_text,
platform=platform,
key=decryption_key or ""
)
f.write(formatted_line)
if not template.endswith('\n'):
f.write('\n')
def _write_json_file(self, filepath):
output_data = []
for post_title, link_text, link_url, platform, decryption_key in self.links_data:
output_data.append({
"post_title": post_title,
"url": link_url,
"link_text": link_text,
"platform": platform,
"key": decryption_key or None
})
with open(filepath, 'w', encoding='utf-8') as f:
json.dump(output_data, f, indent=2)
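For reference, one links_data row and the entry the JSON writer above would produce for it; rows are (post_title, link_text, link_url, platform, decryption_key) tuples and the values below are illustrative:

links_data = [("My Post", "full set", "https://mega.nz/folder/abc", "Mega", None)]
# The exported file would then contain:
# [
#   {
#     "post_title": "My Post",
#     "url": "https://mega.nz/folder/abc",
#     "link_text": "full set",
#     "platform": "Mega",
#     "key": null
#   }
# ]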

View File

@@ -3,7 +3,7 @@ import html
import re
# --- Third-Party Library Imports ---
import requests
import cloudscraper # MODIFIED: Import cloudscraper
from PyQt5.QtCore import QCoreApplication, Qt
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QLineEdit, QListWidget,
@@ -12,7 +12,6 @@ from PyQt5.QtWidgets import (
# --- Local Application Imports ---
from ...i18n.translator import get_translation
# Corrected Import: Get the icon from the new assets utility module
from ..assets import get_app_icon_object
from ...utils.network_utils import prepare_cookies_for_request
from .CookieHelpDialog import CookieHelpDialog
@@ -37,13 +36,13 @@ class FavoriteArtistsDialog (QDialog ):
self ._init_ui ()
self ._fetch_favorite_artists ()
def _get_domain_for_service (self ,service_name ):
service_lower =service_name .lower ()
coomer_primary_services ={'onlyfans','fansly','manyvids','candfans'}
if service_lower in coomer_primary_services :
return "coomer.su"
else :
return "kemono.su"
def _get_domain_for_service(self, service_name):
service_lower = service_name.lower()
coomer_primary_services = {'onlyfans', 'fansly', 'manyvids', 'candfans'}
if service_lower in coomer_primary_services:
return "coomer.st"
else:
return "kemono.cr"
def _tr (self ,key ,default_text =""):
"""Helper to get translation based on current app language."""
@@ -126,32 +125,49 @@ class FavoriteArtistsDialog (QDialog ):
self .artist_list_widget .setVisible (show )
def _fetch_favorite_artists (self ):
# --- FIX: Use cloudscraper and add proper headers ---
scraper = cloudscraper.create_scraper()
# --- END FIX ---
if self.cookies_config['use_cookie']:
# Check if we can load cookies for at least one of the services.
kemono_cookies = prepare_cookies_for_request(True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'], self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.su")
coomer_cookies = prepare_cookies_for_request(True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'], self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.su")
kemono_cookies = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.cr"
)
if not kemono_cookies:
self._logger("No cookies for kemono.cr, trying fallback kemono.su...")
kemono_cookies = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.su"
)
coomer_cookies = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.st"
)
if not coomer_cookies:
self._logger("No cookies for coomer.st, trying fallback coomer.su...")
coomer_cookies = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.su"
)
if not kemono_cookies and not coomer_cookies:
# If cookies are enabled but none could be loaded, show help and stop.
self.status_label.setText(self._tr("fav_artists_cookies_required_status", "Error: Cookies enabled but could not be loaded for any source."))
self._logger("Error: Cookies enabled but no valid cookies were loaded. Showing help dialog.")
cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
cookie_help_dialog.exec_()
self.download_button.setEnabled(False)
return # Stop further execution
kemono_fav_url ="https://kemono.su/api/v1/account/favorites?type=artist"
coomer_fav_url ="https://coomer.su/api/v1/account/favorites?type=artist"
return
self .all_fetched_artists =[]
fetched_any_successfully =False
errors_occurred =[]
any_cookies_loaded_successfully_for_any_source =False
api_sources =[
{"name":"Kemono.su","url":kemono_fav_url ,"domain":"kemono.su"},
{"name":"Coomer.su","url":coomer_fav_url ,"domain":"coomer.su"}
api_sources = [
{"name": "Kemono.cr", "url": "https://kemono.cr/api/v1/account/favorites?type=artist", "domain": "kemono.cr"},
{"name": "Coomer.st", "url": "https://coomer.st/api/v1/account/favorites?type=artist", "domain": "coomer.st"}
]
for source in api_sources :
@@ -159,23 +175,39 @@ class FavoriteArtistsDialog (QDialog ):
self .status_label .setText (self ._tr ("fav_artists_loading_from_source_status","⏳ Loading favorites from {source_name}...").format (source_name =source ['name']))
QCoreApplication .processEvents ()
cookies_dict_for_source =None
if self .cookies_config ['use_cookie']:
cookies_dict_for_source =prepare_cookies_for_request (
True ,
self .cookies_config ['cookie_text'],
self .cookies_config ['selected_cookie_file'],
self .cookies_config ['app_base_dir'],
self ._logger ,
target_domain =source ['domain']
cookies_dict_for_source = None
if self.cookies_config['use_cookie']:
primary_domain = source['domain']
fallback_domain = "kemono.su" if "kemono" in primary_domain else "coomer.su"
cookies_dict_for_source = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain=primary_domain
)
if cookies_dict_for_source :
any_cookies_loaded_successfully_for_any_source =True
else :
self ._logger (f"Warning ({source ['name']}): Cookies enabled but could not be loaded for this domain. Fetch might fail if cookies are required.")
if not cookies_dict_for_source:
self._logger(f"Warning ({source['name']}): No cookies for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
cookies_dict_for_source = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain=fallback_domain
)
if cookies_dict_for_source:
any_cookies_loaded_successfully_for_any_source = True
else:
self._logger(f"Warning ({source['name']}): Cookies enabled but not loaded for this source. Fetch may fail.")
try :
headers ={'User-Agent':'Mozilla/5.0'}
response =requests .get (source ['url'],headers =headers ,cookies =cookies_dict_for_source ,timeout =20 )
# --- FIX: Add Referer and Accept headers ---
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Referer': f"https://{source['domain']}/favorites",
'Accept': 'text/css'
}
# --- END FIX ---
# --- FIX: Use scraper instead of requests ---
response = scraper.get(source['url'], headers=headers, cookies=cookies_dict_for_source, timeout=20)
# --- END FIX ---
response .raise_for_status ()
artists_data_from_api =response .json ()
@@ -210,20 +242,15 @@ class FavoriteArtistsDialog (QDialog ):
fetched_any_successfully =True
self ._logger (f"Fetched {processed_artists_from_source } artists from {source ['name']}.")
except requests .exceptions .RequestException as e :
except Exception as e :
error_msg =f"Error fetching favorites from {source ['name']}: {e }"
self ._logger (error_msg )
errors_occurred .append (error_msg )
except Exception as e :
error_msg =f"An unexpected error occurred with {source ['name']}: {e }"
self ._logger (error_msg )
errors_occurred .append (error_msg )
if self .cookies_config ['use_cookie']and not any_cookies_loaded_successfully_for_any_source :
self .status_label .setText (self ._tr ("fav_artists_cookies_required_status","Error: Cookies enabled but could not be loaded for any source."))
self ._logger ("Error: Cookies enabled but no cookies loaded for any source. Showing help dialog.")
cookie_help_dialog =CookieHelpDialog (self )
cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
cookie_help_dialog .exec_ ()
self .download_button .setEnabled (False )
if not fetched_any_successfully :
@@ -244,7 +271,7 @@ class FavoriteArtistsDialog (QDialog ):
self ._show_content_elements (True )
self .download_button .setEnabled (True )
elif not fetched_any_successfully and not errors_occurred :
self .status_label .setText (self ._tr ("fav_artists_none_found_status","No favorite artists found on Kemono.su or Coomer.su."))
self .status_label .setText (self ._tr ("fav_artists_none_found_status","No favorite artists found on Kemono or Coomer."))
self ._show_content_elements (False )
self .download_button .setEnabled (False )
else :
@@ -300,4 +327,4 @@ class FavoriteArtistsDialog (QDialog ):
self .accept ()
def get_selected_artists (self ):
return self .selected_artists_data
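Both favorites fetchers now repeat the same lookup: try cookies for the migrated domain first (kemono.cr, coomer.st), then fall back to the old one (kemono.su, coomer.su). A generic helper expressing that pattern could look like the sketch below; load_cookies_with_fallback is hypothetical, while prepare_cookies_for_request is the module's real helper:

# Assumes the module's existing import:
# from ...utils.network_utils import prepare_cookies_for_request
def load_cookies_with_fallback(cfg, primary_domain, fallback_domain, log):
    cookies = prepare_cookies_for_request(
        True, cfg['cookie_text'], cfg['selected_cookie_file'],
        cfg['app_base_dir'], log, target_domain=primary_domain)
    if not cookies:
        log(f"No cookies for {primary_domain}, trying fallback {fallback_domain}...")
        cookies = prepare_cookies_for_request(
            True, cfg['cookie_text'], cfg['selected_cookie_file'],
            cfg['app_base_dir'], log, target_domain=fallback_domain)
    return cookies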

View File

@@ -1,4 +1,3 @@
# --- Standard Library Imports ---
import html
import os
import sys
@@ -8,21 +7,16 @@ import traceback
import json
import re
from collections import defaultdict
# --- Third-Party Library Imports ---
import requests
import cloudscraper # MODIFIED: Import cloudscraper
from PyQt5.QtCore import QCoreApplication, Qt, pyqtSignal, QThread
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QLineEdit, QListWidget,
QListWidgetItem, QMessageBox, QPushButton, QVBoxLayout, QProgressBar,
QWidget, QCheckBox
)
# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ..assets import get_app_icon_object
from ...utils.network_utils import prepare_cookies_for_request
# Corrected Import: Import CookieHelpDialog directly from its own module
from .CookieHelpDialog import CookieHelpDialog
from ...core.api_client import download_from_api
from ...utils.resolution import get_dark_theme
@@ -40,28 +34,29 @@ class FavoritePostsFetcherThread (QThread ):
self .target_domain_preference =target_domain_preference
self .cancellation_event =threading .Event ()
self .error_key_map ={
"Kemono.su":"kemono_su",
"Coomer.su":"coomer_su"
"kemono.cr":"kemono_su",
"coomer.st":"coomer_su"
}
def _logger (self ,message ):
self .parent_logger_func (f"[FavPostsFetcherThread] {message }")
def run (self ):
kemono_fav_posts_url ="https://kemono.su/api/v1/account/favorites?type=post"
coomer_fav_posts_url ="https://coomer.su/api/v1/account/favorites?type=post"
def run(self):
# --- FIX: Use cloudscraper and add proper headers ---
scraper = cloudscraper.create_scraper()
# --- END FIX ---
all_fetched_posts_temp =[]
error_messages_for_summary =[]
fetched_any_successfully =False
any_cookies_loaded_successfully_for_any_source =False
all_fetched_posts_temp = []
error_messages_for_summary = []
fetched_any_successfully = False
any_cookies_loaded_successfully_for_any_source = False
self .status_update .emit ("key_fetching_fav_post_list_init")
self .progress_bar_update .emit (0 ,0 )
self.status_update.emit("key_fetching_fav_post_list_init")
self.progress_bar_update.emit(0, 0)
api_sources =[
{"name":"Kemono.su","url":kemono_fav_posts_url ,"domain":"kemono.su"},
{"name":"Coomer.su","url":coomer_fav_posts_url ,"domain":"coomer.su"}
api_sources = [
{"name": "Kemono.cr", "url": "https://kemono.cr/api/v1/account/favorites?type=post", "domain": "kemono.cr"},
{"name": "Coomer.st", "url": "https://coomer.st/api/v1/account/favorites?type=post", "domain": "coomer.st"}
]
api_sources_to_try =[]
@@ -82,20 +77,27 @@ class FavoritePostsFetcherThread(QThread):
if self.cancellation_event.is_set():
self.finished.emit([], "KEY_FETCH_CANCELLED_DURING")
return
cookies_dict_for_source = None
if self.cookies_config['use_cookie']:
primary_domain = source['domain']
fallback_domain = "kemono.su" if "kemono" in primary_domain else "coomer.su"
cookies_dict_for_source = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain=primary_domain
)
if not cookies_dict_for_source and fallback_domain:
self._logger(f"Warning ({source['name']}): No cookies for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
cookies_dict_for_source = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain=fallback_domain
)
if cookies_dict_for_source:
any_cookies_loaded_successfully_for_any_source = True
else:
self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this domain. Fetch might fail if cookies are required.")
self._logger(f"Attempting to fetch favorite posts from: {source['name']} ({source['url']})")
source_key_part = self.error_key_map.get(source['name'], source['name'].lower().replace('.', '_'))
@@ -103,8 +105,18 @@ class FavoritePostsFetcherThread (QThread ):
QCoreApplication.processEvents()
try:
# --- FIX: Add Referer and Accept headers ---
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Referer': f"https://{source['domain']}/favorites",
'Accept': 'text/css'
}
# --- END FIX ---
# --- FIX: Use scraper instead of requests ---
response = scraper.get(source['url'], headers=headers, cookies=cookies_dict_for_source, timeout=20)
# --- END FIX ---
response.raise_for_status()
posts_data_from_api = response.json()
@@ -136,33 +148,24 @@ class FavoritePostsFetcherThread (QThread ):
fetched_any_successfully = True
self._logger(f"Fetched {processed_posts_from_source} posts from {source['name']}.")
except Exception as e:
err_detail = f"Error fetching favorite posts from {source['name']}: {e}"
self._logger(err_detail)
error_messages_for_summary.append(err_detail)
if hasattr(e, 'response') and e.response is not None and e.response.status_code == 401:
self.finished.emit([], "KEY_AUTH_FAILED")
self._logger(f"Authorization failed for {source['name']}, emitting KEY_AUTH_FAILED.")
return
if self.cancellation_event.is_set():
self.finished.emit([], "KEY_FETCH_CANCELLED_AFTER")
return
if self.target_domain_preference and not any_cookies_loaded_successfully_for_any_source:
domain_key_part = self.error_key_map.get(self.target_domain_preference, self.target_domain_preference.lower().replace('.', '_'))
self.finished.emit([], f"KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_FOR_DOMAIN_{domain_key_part}")
return
self.finished.emit([], "KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_GENERIC")
return
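Taken together, these hunks replace plain requests calls with a cloudscraper session plus browser-like headers. A minimal standalone sketch of that fetch pattern, outside the Qt thread; the cookie dict and example domain are assumptions for illustration:

import cloudscraper

def fetch_favorites(domain, cookies=None):
    # cloudscraper behaves like a requests.Session but solves Cloudflare JS challenges
    scraper = cloudscraper.create_scraper()
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                      '(KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
        'Referer': f"https://{domain}/favorites",
        'Accept': 'text/css',
    }
    url = f"https://{domain}/api/v1/account/favorites?type=post"
    response = scraper.get(url, headers=headers, cookies=cookies, timeout=20)
    response.raise_for_status()  # surfaces 401 when the session cookie is missing or stale
    return response.json()

# Example (assumed cookie name): posts = fetch_favorites("kemono.cr", cookies={"session": "..."})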
@@ -415,14 +418,14 @@ class FavoritePostsDialog(QDialog):
if status_key.startswith("KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_FOR_DOMAIN_") or status_key == "KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_GENERIC":
status_label_text_key = "fav_posts_cookies_required_error"
self._logger(f"Cookie error: {status_key}. Showing help dialog.")
cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
cookie_help_dialog.exec_()
elif status_key == "KEY_AUTH_FAILED":
status_label_text_key = "fav_posts_auth_failed_title"
self._logger(f"Auth error: {status_key}. Showing help dialog.")
QMessageBox.warning(self, self._tr("fav_posts_auth_failed_title", "Authorization Failed (Posts)"),
self._tr("fav_posts_auth_failed_message_generic", "...").format(domain_specific_part=specific_domain_msg_part))
cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
cookie_help_dialog.exec_()
elif status_key == "KEY_NO_FAVORITES_FOUND_ALL_PLATFORMS":
status_label_text_key = "fav_posts_no_posts_found_status"
@@ -626,4 +629,4 @@ class FavoritePostsDialog(QDialog):
self.accept()
def get_selected_posts(self):
return self.selected_posts_data

View File

@@ -1,23 +1,112 @@
# --- Standard Library Imports ---
import os
import json
import sys
# --- PyQt5 Imports ---
from PyQt5.QtCore import Qt, QStandardPaths, QTimer
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
QGroupBox, QComboBox, QMessageBox, QGridLayout, QCheckBox, QLineEdit
)
# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ...utils.resolution import get_dark_theme
from ..main_window import get_app_icon_object
from ...config.constants import (
THEME_KEY, LANGUAGE_KEY, DOWNLOAD_LOCATION_KEY,
RESOLUTION_KEY, UI_SCALE_KEY, SAVE_CREATOR_JSON_KEY,
DATE_PREFIX_FORMAT_KEY,
COOKIE_TEXT_KEY, USE_COOKIE_KEY,
FETCH_FIRST_KEY, DISCORD_TOKEN_KEY, POST_DOWNLOAD_ACTION_KEY
)
from ...services.updater import UpdateChecker, UpdateDownloader
class CountdownMessageBox(QDialog):
"""
A custom message box that includes a countdown timer for the 'Yes' button,
which automatically accepts the dialog when the timer reaches zero.
"""
def __init__(self, title, text, countdown_seconds=10, parent_app=None, parent=None):
super().__init__(parent)
self.parent_app = parent_app
self.countdown = countdown_seconds
# --- Basic Window Setup ---
self.setWindowTitle(title)
self.setModal(True)
app_icon = get_app_icon_object()
if app_icon and not app_icon.isNull():
self.setWindowIcon(app_icon)
self._init_ui(text)
self._apply_theme()
# --- Timer Setup ---
self.timer = QTimer(self)
self.timer.setInterval(1000) # Tick every second
self.timer.timeout.connect(self._update_countdown)
self.timer.start()
def _init_ui(self, text):
"""Initializes the UI components of the dialog."""
main_layout = QVBoxLayout(self)
self.message_label = QLabel(text)
self.message_label.setWordWrap(True)
self.message_label.setAlignment(Qt.AlignCenter)
main_layout.addWidget(self.message_label)
buttons_layout = QHBoxLayout()
buttons_layout.addStretch(1)
self.yes_button = QPushButton()
self.yes_button.clicked.connect(self.accept)
self.yes_button.setDefault(True)
self.no_button = QPushButton()
self.no_button.clicked.connect(self.reject)
buttons_layout.addWidget(self.yes_button)
buttons_layout.addWidget(self.no_button)
buttons_layout.addStretch(1)
main_layout.addLayout(buttons_layout)
self._retranslate_ui()
self._update_countdown() # Initial text setup
def _tr(self, key, default_text=""):
"""Helper for translations."""
if self.parent_app and hasattr(self.parent_app, 'current_selected_language'):
return get_translation(self.parent_app.current_selected_language, key, default_text)
return default_text
def _retranslate_ui(self):
"""Sets translated text for UI elements."""
self.no_button.setText(self._tr("no_button_text", "No"))
# The 'yes' button text is handled by the countdown
def _update_countdown(self):
"""Updates the countdown and button text each second."""
if self.countdown <= 0:
self.timer.stop()
self.accept() # Automatically accept when countdown finishes
return
yes_text = self._tr("yes_button_text", "Yes")
self.yes_button.setText(f"{yes_text} ({self.countdown})")
self.countdown -= 1
def _apply_theme(self):
"""Applies the current theme from the parent application."""
if self.parent_app and hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
scale = getattr(self.parent_app, 'scale_factor', 1)
self.setStyleSheet(get_dark_theme(scale))
else:
self.setStyleSheet("")
class FutureSettingsDialog(QDialog):
"""
@@ -28,6 +117,7 @@ class FutureSettingsDialog(QDialog):
super().__init__(parent)
self.parent_app = parent_app_ref
self.setModal(True)
self.update_downloader_thread = None # To keep a reference
app_icon = get_app_icon_object()
if app_icon and not app_icon.isNull():
@@ -35,7 +125,7 @@ class FutureSettingsDialog(QDialog):
screen_height = QApplication.primaryScreen().availableGeometry().height() if QApplication.primaryScreen() else 800
scale_factor = screen_height / 800.0
base_min_w, base_min_h = 420, 520 # Increased height for new options
scaled_min_w = int(base_min_w * scale_factor)
scaled_min_h = int(base_min_h * scale_factor)
self.setMinimumSize(scaled_min_w, scaled_min_h)
@@ -48,25 +138,21 @@ class FutureSettingsDialog(QDialog):
"""Initializes all UI components and layouts for the dialog."""
main_layout = QVBoxLayout(self)
# --- Group 1: Interface Settings ---
self.interface_group_box = QGroupBox()
interface_layout = QGridLayout(self.interface_group_box)
# Theme
self.theme_label = QLabel()
self.theme_toggle_button = QPushButton()
self.theme_toggle_button.clicked.connect(self._toggle_theme)
interface_layout.addWidget(self.theme_label, 0, 0)
interface_layout.addWidget(self.theme_toggle_button, 0, 1)
# UI Scale
self.ui_scale_label = QLabel()
self.ui_scale_combo_box = QComboBox()
self.ui_scale_combo_box.currentIndexChanged.connect(self._display_setting_changed)
interface_layout.addWidget(self.ui_scale_label, 1, 0)
interface_layout.addWidget(self.ui_scale_combo_box, 1, 1)
# Language
self.language_label = QLabel()
self.language_combo_box = QComboBox()
self.language_combo_box.currentIndexChanged.connect(self._language_selection_changed)
@@ -75,64 +161,173 @@ class FutureSettingsDialog(QDialog):
main_layout.addWidget(self.interface_group_box)
# --- Group 2: Download & Window Settings ---
self.download_window_group_box = QGroupBox()
download_window_layout = QGridLayout(self.download_window_group_box)
# Window Size (Resolution)
self.window_size_label = QLabel()
self.resolution_combo_box = QComboBox()
self.resolution_combo_box.currentIndexChanged.connect(self._display_setting_changed)
download_window_layout.addWidget(self.window_size_label, 0, 0)
download_window_layout.addWidget(self.resolution_combo_box, 0, 1)
# Default Path
self.default_path_label = QLabel()
self.save_path_button = QPushButton()
self.save_path_button.clicked.connect(self._save_settings)
download_window_layout.addWidget(self.default_path_label, 1, 0)
download_window_layout.addWidget(self.save_path_button, 1, 1)
self.date_prefix_format_label = QLabel()
self.date_prefix_format_input = QLineEdit()
self.date_prefix_format_input.textChanged.connect(self._date_prefix_format_changed)
download_window_layout.addWidget(self.date_prefix_format_label, 2, 0)
download_window_layout.addWidget(self.date_prefix_format_input, 2, 1)
self.post_download_action_label = QLabel()
self.post_download_action_combo = QComboBox()
self.post_download_action_combo.currentIndexChanged.connect(self._post_download_action_changed)
download_window_layout.addWidget(self.post_download_action_label, 3, 0)
download_window_layout.addWidget(self.post_download_action_combo, 3, 1)
self.save_creator_json_checkbox = QCheckBox()
self.save_creator_json_checkbox.stateChanged.connect(self._creator_json_setting_changed)
download_window_layout.addWidget(self.save_creator_json_checkbox, 4, 0, 1, 2)
self.fetch_first_checkbox = QCheckBox()
self.fetch_first_checkbox.stateChanged.connect(self._fetch_first_setting_changed)
download_window_layout.addWidget(self.fetch_first_checkbox, 5, 0, 1, 2)
main_layout.addWidget(self.download_window_group_box)
self.update_group_box = QGroupBox()
update_layout = QGridLayout(self.update_group_box)
self.version_label = QLabel()
self.update_status_label = QLabel()
self.check_update_button = QPushButton()
self.check_update_button.clicked.connect(self._check_for_updates)
update_layout.addWidget(self.version_label, 0, 0)
update_layout.addWidget(self.update_status_label, 0, 1)
update_layout.addWidget(self.check_update_button, 1, 0, 1, 2)
main_layout.addWidget(self.update_group_box)
main_layout.addStretch(1)
# --- OK Button ---
self.ok_button = QPushButton()
self.ok_button.clicked.connect(self.accept)
main_layout.addWidget(self.ok_button, 0, Qt.AlignRight | Qt.AlignBottom)
def _retranslate_ui(self):
self.setWindowTitle(self._tr("settings_dialog_title", "Settings"))
self.interface_group_box.setTitle(self._tr("interface_group_title", "Interface Settings"))
self.download_window_group_box.setTitle(self._tr("download_window_group_title", "Download & Window Settings"))
self.theme_label.setText(self._tr("theme_label", "Theme:"))
self.ui_scale_label.setText(self._tr("ui_scale_label", "UI Scale:"))
self.language_label.setText(self._tr("language_label", "Language:"))
self.window_size_label.setText(self._tr("window_size_label", "Window Size:"))
self.default_path_label.setText(self._tr("default_path_label", "Default Path:"))
self.date_prefix_format_label.setText(self._tr("date_prefix_format_label", "Post Subfolder Format:"))
# Update placeholder to include {post}
self.date_prefix_format_input.setPlaceholderText(self._tr("date_prefix_format_placeholder", "e.g., YYYY-MM-DD {post} {postid}"))
# Add the tooltip to explain usage
self.date_prefix_format_input.setToolTip(self._tr(
"date_prefix_format_tooltip",
"Create a custom folder name using placeholders:\n"
"• YYYY, MM, DD: for the date\n"
"{post}: for the post title\n"
"{postid}: for the post's unique ID\n\n"
"Example: {post} [{postid}] [YYYY-MM-DD]"
))
self.post_download_action_label.setText(self._tr("post_download_action_label", "Action After Download:"))
self.save_creator_json_checkbox.setText(self._tr("save_creator_json_label", "Save Creator.json file"))
self.fetch_first_checkbox.setText(self._tr("fetch_first_label", "Fetch First (Download after all pages are found)"))
self.fetch_first_checkbox.setToolTip(self._tr("fetch_first_tooltip", "If checked, the downloader will find all posts from a creator first before starting any downloads.\nThis can be slower to start but provides a more accurate progress bar."))
self._update_theme_toggle_button_text()
self.save_path_button.setText(self._tr("settings_save_all_button", "Save Path + Cookie + Token"))
self.save_path_button.setToolTip(self._tr("settings_save_all_tooltip", "Save the current 'Download Location', Cookie, and Discord Token settings for future sessions."))
self.ok_button.setText(self._tr("ok_button", "OK"))
self.update_group_box.setTitle(self._tr("update_group_title", "Application Updates"))
current_version = self.parent_app.windowTitle().split(' v')[-1]
self.version_label.setText(self._tr("current_version_label", f"Current Version: v{current_version}"))
self.update_status_label.setText(self._tr("update_status_ready", "Ready to check."))
self.check_update_button.setText(self._tr("check_for_updates_button", "Check for Updates"))
self._populate_display_combo_boxes()
self._populate_language_combo_box()
self._populate_post_download_action_combo()
self._load_date_prefix_format()
self._load_checkbox_states()
def _check_for_updates(self):
self.check_update_button.setEnabled(False)
self.update_status_label.setText(self._tr("update_status_checking", "Checking..."))
current_version = self.parent_app.windowTitle().split(' v')[-1]
self.update_checker_thread = UpdateChecker(current_version)
self.update_checker_thread.update_available.connect(self._on_update_available)
self.update_checker_thread.up_to_date.connect(self._on_up_to_date)
self.update_checker_thread.update_error.connect(self._on_update_error)
self.update_checker_thread.start()
def _on_update_available(self, new_version, download_url):
self.update_status_label.setText(self._tr("update_status_found", f"Update found: v{new_version}"))
self.check_update_button.setEnabled(True)
reply = QMessageBox.question(self, self._tr("update_available_title", "Update Available"),
self._tr("update_available_message", f"A new version (v{new_version}) is available.\nWould you like to download and install it now?"),
QMessageBox.Yes | QMessageBox.No, QMessageBox.Yes)
if reply == QMessageBox.Yes:
self.ok_button.setEnabled(False)
self.check_update_button.setEnabled(False)
self.update_status_label.setText(self._tr("update_status_downloading", "Downloading update..."))
self.update_downloader_thread = UpdateDownloader(download_url, self.parent_app)
self.update_downloader_thread.download_finished.connect(self._on_download_finished)
self.update_downloader_thread.download_error.connect(self._on_update_error)
self.update_downloader_thread.start()
def _on_download_finished(self):
QApplication.instance().quit()
def _on_up_to_date(self, message):
self.update_status_label.setText(self._tr("update_status_latest", message))
self.check_update_button.setEnabled(True)
def _on_update_error(self, message):
self.update_status_label.setText(self._tr("update_status_error", f"Error: {message}"))
self.check_update_button.setEnabled(True)
self.ok_button.setEnabled(True)
def _load_checkbox_states(self):
self.save_creator_json_checkbox.blockSignals(True)
should_save = self.parent_app.settings.value(SAVE_CREATOR_JSON_KEY, True, type=bool)
self.save_creator_json_checkbox.setChecked(should_save)
self.save_creator_json_checkbox.blockSignals(False)
self.fetch_first_checkbox.blockSignals(True)
should_fetch_first = self.parent_app.settings.value(FETCH_FIRST_KEY, False, type=bool)
self.fetch_first_checkbox.setChecked(should_fetch_first)
self.fetch_first_checkbox.blockSignals(False)
def _creator_json_setting_changed(self, state):
is_checked = state == Qt.Checked
self.parent_app.settings.setValue(SAVE_CREATOR_JSON_KEY, is_checked)
self.parent_app.settings.sync()
def _fetch_first_setting_changed(self, state):
is_checked = state == Qt.Checked
self.parent_app.settings.setValue(FETCH_FIRST_KEY, is_checked)
self.parent_app.settings.sync()
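The two handlers above follow a common QSettings pattern: block signals while restoring a checkbox so the restore doesn't immediately re-write settings, then write-and-sync on each change. A reduced sketch under assumed organization/key names (a QApplication must already be running to create widgets):

from PyQt5.QtCore import QSettings, Qt
from PyQt5.QtWidgets import QCheckBox

settings = QSettings("Org", "App")  # names are placeholders
box = QCheckBox("Fetch first")

box.blockSignals(True)  # avoid a spurious write while restoring saved state
box.setChecked(settings.value("fetch_first", False, type=bool))
box.blockSignals(False)

box.stateChanged.connect(
    lambda s: (settings.setValue("fetch_first", s == Qt.Checked), settings.sync())
)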
def _tr(self, key, default_text=""):
if callable(get_translation) and self.parent_app:
return get_translation(self.parent_app.current_selected_language, key, default_text)
return default_text
def _apply_theme(self):
if self.parent_app and self.parent_app.current_theme == "dark":
scale = getattr(self.parent_app, 'scale_factor', 1)
@@ -158,14 +353,7 @@ class FutureSettingsDialog(QDialog):
def _populate_display_combo_boxes(self):
self.resolution_combo_box.blockSignals(True)
self.resolution_combo_box.clear()
resolutions = [("Auto", "Auto"), ("1280x720", "1280x720"), ("1600x900", "1600x900"), ("1920x1080", "1920x1080")]
current_res = self.parent_app.settings.value(RESOLUTION_KEY, "Auto")
for res_key, res_name in resolutions:
self.resolution_combo_box.addItem(res_name, res_key)
@@ -176,43 +364,24 @@ class FutureSettingsDialog(QDialog):
self.ui_scale_combo_box.blockSignals(True)
self.ui_scale_combo_box.clear()
scales = [
(0.5, "50%"), (0.7, "70%"), (0.9, "90%"), (1.0, "100% (Default)"),
(1.25, "125%"), (1.50, "150%"), (1.75, "175%"), (2.0, "200%")
]
current_scale = self.parent_app.settings.value(UI_SCALE_KEY, 1.0)
for scale_val, scale_name in scales:
self.ui_scale_combo_box.addItem(scale_name, scale_val)
if abs(float(current_scale) - scale_val) < 0.01:
self.ui_scale_combo_box.setCurrentIndex(self.ui_scale_combo_box.count() - 1)
self.ui_scale_combo_box.blockSignals(False)
def _display_setting_changed(self):
selected_res = self.resolution_combo_box.currentData()
selected_scale = self.ui_scale_combo_box.currentData()
self.parent_app.settings.setValue(RESOLUTION_KEY, selected_res)
self.parent_app.settings.setValue(UI_SCALE_KEY, selected_scale)
self.parent_app.settings.sync()
QMessageBox.information(self, self._tr("display_change_title", "Display Settings Changed"),
self._tr("language_change_message", "A restart is required..."))
def _populate_language_combo_box(self):
self.language_combo_box.blockSignals(True)
@@ -236,40 +405,89 @@ class FutureSettingsDialog(QDialog):
self.parent_app.settings.setValue(LANGUAGE_KEY, selected_lang_code)
self.parent_app.settings.sync()
self.parent_app.current_selected_language = selected_lang_code
self._retranslate_ui()
if hasattr(self.parent_app, '_retranslate_main_ui'):
self.parent_app._retranslate_main_ui()
QMessageBox.information(self, self._tr("language_change_title", "Language Changed"),
self._tr("language_change_message", "A restart is required..."))
def _populate_post_download_action_combo(self):
"""Populates the action dropdown and sets the current selection from settings."""
self.post_download_action_combo.blockSignals(True)
self.post_download_action_combo.clear()
actions = [
(self._tr("action_off", "Off"), "off"),
(self._tr("action_notify", "Notify with Sound"), "notify"),
(self._tr("action_sleep", "Sleep"), "sleep"),
(self._tr("action_shutdown", "Shutdown"), "shutdown")
]
current_action = self.parent_app.settings.value(POST_DOWNLOAD_ACTION_KEY, "off")
for text, key in actions:
self.post_download_action_combo.addItem(text, key)
if current_action == key:
self.post_download_action_combo.setCurrentIndex(self.post_download_action_combo.count() - 1)
self.post_download_action_combo.blockSignals(False)
def _post_download_action_changed(self):
"""Saves the selected post-download action to settings."""
selected_action = self.post_download_action_combo.currentData()
self.parent_app.settings.setValue(POST_DOWNLOAD_ACTION_KEY, selected_action)
self.parent_app.settings.sync()
def _load_date_prefix_format(self):
"""Loads the saved date prefix format and sets it in the input field."""
self.date_prefix_format_input.blockSignals(True)
current_format = self.parent_app.settings.value(DATE_PREFIX_FORMAT_KEY, "YYYY-MM-DD {post}", type=str)
self.date_prefix_format_input.setText(current_format)
self.date_prefix_format_input.blockSignals(False)
def _date_prefix_format_changed(self, text):
"""Saves the date prefix format whenever it's changed."""
self.parent_app.settings.setValue(DATE_PREFIX_FORMAT_KEY, text)
self.parent_app.settings.sync()
# Also update the live value in the parent app
if hasattr(self.parent_app, 'date_prefix_format'):
self.parent_app.date_prefix_format = text
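The placeholders documented in the tooltip (YYYY, MM, DD, {post}, {postid}) are expanded elsewhere in the app; the helper below is only an illustrative sketch of one possible expansion, not the repository's implementation:

from datetime import date

def expand_prefix(fmt, post_title, post_id, d=None):
    # Replace date tokens first, then the literal {post}/{postid} placeholders.
    d = d or date.today()
    return (fmt.replace("YYYY", f"{d.year:04d}")
               .replace("MM", f"{d.month:02d}")
               .replace("DD", f"{d.day:02d}")
               .replace("{post}", post_title)
               .replace("{postid}", str(post_id)))

# expand_prefix("YYYY-MM-DD {post}", "My Post", 123) -> e.g. "2025-01-31 My Post"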
def _save_settings(self):
path_saved = False
cookie_saved = False
token_saved = False
if hasattr(self.parent_app, 'dir_input') and self.parent_app.dir_input:
current_path = self.parent_app.dir_input.text().strip()
if current_path and os.path.isdir(current_path):
self.parent_app.settings.setValue(DOWNLOAD_LOCATION_KEY, current_path)
path_saved = True
if hasattr(self.parent_app, 'use_cookie_checkbox'):
use_cookie = self.parent_app.use_cookie_checkbox.isChecked()
cookie_content = self.parent_app.cookie_text_input.text().strip()
if use_cookie and cookie_content:
self.parent_app.settings.setValue(USE_COOKIE_KEY, True)
self.parent_app.settings.setValue(COOKIE_TEXT_KEY, cookie_content)
cookie_saved = True
else:
self.parent_app.settings.setValue(USE_COOKIE_KEY, False)
self.parent_app.settings.setValue(COOKIE_TEXT_KEY, "")
if (hasattr(self.parent_app, 'remove_from_filename_input') and
hasattr(self.parent_app, 'remove_from_filename_label_widget')):
label_text = self.parent_app.remove_from_filename_label_widget.text()
if "Token" in label_text:
discord_token = self.parent_app.remove_from_filename_input.text().strip()
if discord_token:
self.parent_app.settings.setValue(DISCORD_TOKEN_KEY, discord_token)
token_saved = True
self.parent_app.settings.sync()
if path_saved or cookie_saved or token_saved:
QMessageBox.information(self, "Settings Saved", "Settings have been saved successfully.")
else:
QMessageBox.warning(self, "Nothing to Save", "No valid settings were found to save.")

View File

@@ -1,16 +1,11 @@
# --- Standard Library Imports ---
import os
import sys
# --- PyQt5 Imports ---
from PyQt5.QtCore import QUrl, QSize, Qt
from PyQt5.QtGui import QIcon, QDesktopServices
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
QStackedWidget, QListWidget, QFrame, QWidget, QScrollArea
)
# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ..main_window import get_app_icon_object
from ...utils.resolution import get_dark_theme
@@ -51,13 +46,12 @@ class TourStepWidget(QWidget):
layout.addWidget(scroll_area, 1)
class HelpGuideDialog(QDialog):
"""A multi-page dialog for displaying the feature guide with a navigation list."""
def __init__(self, steps_data, parent_app, parent=None):
super().__init__(parent)
self.steps_data = steps_data
self.parent_app = parent_app
scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
@@ -66,7 +60,7 @@ class HelpGuideDialog(QDialog):
self.setWindowIcon(app_icon)
self.setModal(True)
self.resize(int(800 * scale), int(650 * scale))
dialog_font_size = int(11 * scale)
@@ -74,6 +68,7 @@ class HelpGuideDialog (QDialog ):
if hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
current_theme_style = get_dark_theme(scale)
else:
# Basic light theme fallback
current_theme_style = f"""
QDialog {{ background-color: #F0F0F0; border: 1px solid #B0B0B0; }}
QLabel {{ color: #1E1E1E; }}
@@ -91,118 +86,107 @@ class HelpGuideDialog (QDialog ):
"""
self.setStyleSheet(current_theme_style)
self._init_ui()
if self.parent_app:
self.move(self.parent_app.geometry().center() - self.rect().center())
def _tr(self, key, default_text=""):
"""Helper to get translation based on current app language."""
if callable(get_translation) and self.parent_app:
return get_translation(self.parent_app.current_selected_language, key, default_text)
return default_text
def _init_ui(self):
main_layout = QVBoxLayout(self)
main_layout.setContentsMargins(15, 15, 15, 15)
main_layout.setSpacing(10)
# Title
title_label = QLabel(self._tr("help_guide_dialog_title", "Kemono Downloader - Feature Guide"))
scale = getattr(self.parent_app, 'scale_factor', 1.0)
title_font_size = int(16 * scale)
title_label.setStyleSheet(f"font-size: {title_font_size}pt; font-weight: bold; color: #E0E0E0;")
title_label.setAlignment(Qt.AlignCenter)
main_layout.addWidget(title_label)
# Content Layout (Navigation + Stacked Pages)
content_layout = QHBoxLayout()
main_layout.addLayout(content_layout, 1)
scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
self.nav_list = QListWidget()
self.nav_list.setFixedWidth(int(220 * scale))
self.nav_list.setStyleSheet(f"""
QListWidget {{
background-color: #2E2E2E;
border: 1px solid #4A4A4A;
border-radius: 4px;
font-size: {int(11 * scale)}pt;
}}
QListWidget::item {{
padding: 10px;
border-bottom: 1px solid #4A4A4A;
}}
QListWidget::item:selected {{
background-color: #87CEEB;
color: #2E2E2E;
font-weight: bold;
}}
""")
content_layout.addWidget(self.nav_list)
self.stacked_widget = QStackedWidget()
content_layout.addWidget(self.stacked_widget)
for title_key, content_key in self.steps_data:
title = self._tr(title_key, title_key)
content = self._tr(content_key, f"Content for {content_key} not found.")
self.nav_list.addItem(title)
step_widget = TourStepWidget(title, content, scale=scale)
self.stacked_widget.addWidget(step_widget)
self.setWindowTitle(self._tr("help_guide_dialog_title", "Kemono Downloader - Feature Guide"))
self.nav_list.currentRowChanged.connect(self.stacked_widget.setCurrentIndex)
if self.nav_list.count() > 0:
self.nav_list.setCurrentRow(0)
# Footer Layout (Social links and Close button)
footer_layout = QHBoxLayout()
footer_layout.setContentsMargins(0, 10, 0, 0)
# Social Media Icons
if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
assets_base_dir = sys._MEIPASS
else:
assets_base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
github_icon_path = os.path.join(assets_base_dir, "assets", "github.png")
instagram_icon_path = os.path.join(assets_base_dir, "assets", "instagram.png")
discord_icon_path = os.path.join(assets_base_dir, "assets", "discord.png")
self.github_button = QPushButton(QIcon(github_icon_path), "")
self.instagram_button = QPushButton(QIcon(instagram_icon_path), "")
self.discord_button = QPushButton(QIcon(discord_icon_path), "")
scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
icon_dim = int(24 * scale)
icon_size = QSize(icon_dim, icon_dim)
for button, tooltip_key, url in [
(self.github_button, "help_guide_github_tooltip", "https://github.com/Yuvi63771/Kemono-Downloader"),
(self.instagram_button, "help_guide_instagram_tooltip", "https://www.instagram.com/uvi.arts/"),
(self.discord_button, "help_guide_discord_tooltip", "https://discord.gg/BqP64XTdJN")
]:
button.setIconSize(icon_size)
button.setToolTip(self._tr(tooltip_key))
button.setFixedSize(icon_size.width() + 8, icon_size.height() + 8)
button.setStyleSheet("background-color: transparent; border: none;")
button.clicked.connect(lambda _, u=url: QDesktopServices.openUrl(QUrl(u)))
footer_layout.addWidget(button)
footer_layout.addStretch(1)
self.finish_button = QPushButton(self._tr("tour_dialog_finish_button", "Finish"))
self.finish_button.clicked.connect(self.accept)
footer_layout.addWidget(self.finish_button)
main_layout.addLayout(footer_layout)
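The rewritten dialog replaces Back/Next paging with a QListWidget whose currentRowChanged signal drives a QStackedWidget. The pattern in isolation, as a runnable sketch with placeholder page contents:

from PyQt5.QtWidgets import QApplication, QHBoxLayout, QLabel, QListWidget, QStackedWidget, QWidget

app = QApplication([])
root = QWidget()
layout = QHBoxLayout(root)
nav, pages = QListWidget(), QStackedWidget()
for title in ["Intro", "Downloads", "Filters"]:  # placeholder step titles
    nav.addItem(title)
    pages.addWidget(QLabel(f"Page: {title}"))
nav.currentRowChanged.connect(pages.setCurrentIndex)  # one signal wires the whole nav
nav.setCurrentRow(0)
layout.addWidget(nav)
layout.addWidget(pages)
root.show()
app.exec_()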

View File

@@ -1,13 +1,8 @@
# KeepDuplicatesDialog.py
# --- PyQt5 Imports ---
from PyQt5.QtWidgets import (
QDialog, QVBoxLayout, QGroupBox, QRadioButton,
QPushButton, QHBoxLayout, QButtonGroup, QLabel, QLineEdit
)
from PyQt5.QtGui import QIntValidator
# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ...config.constants import DUPLICATE_HANDLING_HASH, DUPLICATE_HANDLING_KEEP_ALL
@@ -25,8 +20,6 @@ class KeepDuplicatesDialog(QDialog):
if self.parent_app and hasattr(self.parent_app, '_apply_theme_to_widget'):
self.parent_app._apply_theme_to_widget(self)
# Set the initial state based on current settings
if current_mode == DUPLICATE_HANDLING_KEEP_ALL:
self.radio_keep_everything.setChecked(True)
self.limit_input.setText(str(current_limit) if current_limit > 0 else "")
@@ -44,13 +37,9 @@ class KeepDuplicatesDialog(QDialog):
options_group = QGroupBox()
options_layout = QVBoxLayout(options_group)
self.button_group = QButtonGroup(self)
# --- Skip by Hash Option ---
self.radio_skip_by_hash = QRadioButton()
self.button_group.addButton(self.radio_skip_by_hash)
options_layout.addWidget(self.radio_skip_by_hash)
# --- Keep Everything Option with Limit Input ---
keep_everything_layout = QHBoxLayout()
self.radio_keep_everything = QRadioButton()
self.button_group.addButton(self.radio_keep_everything)
@@ -66,8 +55,6 @@ class KeepDuplicatesDialog(QDialog):
options_layout.addLayout(keep_everything_layout)
main_layout.addWidget(options_group)
# --- OK and Cancel buttons ---
button_layout = QHBoxLayout()
self.ok_button = QPushButton()
self.cancel_button = QPushButton()
@@ -75,8 +62,6 @@ class KeepDuplicatesDialog(QDialog):
button_layout.addWidget(self.ok_button)
button_layout.addWidget(self.cancel_button)
main_layout.addLayout(button_layout)
# --- Connections ---
self.ok_button.clicked.connect(self.accept)
self.cancel_button.clicked.connect(self.reject)
self.radio_keep_everything.toggled.connect(self.limit_input.setEnabled)

View File

@@ -37,12 +37,16 @@ class KnownNamesFilterDialog(QDialog):
if app_icon and not app_icon.isNull():
self.setWindowIcon(app_icon)
# Set window size dynamically
screen_geometry = QApplication.primaryScreen().availableGeometry()
# --- START OF FIX ---
# Get the user-defined scale factor from the parent application
# instead of calculating an independent one.
scale_factor = getattr(self.parent_app, 'scale_factor', 1.0)
# Define base size and apply the correct scale factor
base_width, base_height = 460, 450
self.setMinimumSize(int(base_width * scale_factor), int(base_height * scale_factor))
self.resize(int(base_width * scale_factor * 1.1), int(base_height * scale_factor * 1.1))
# --- END OF FIX ---
# --- Initialize UI and Apply Theming ---
self._init_ui()

View File

@@ -24,7 +24,7 @@ class MoreOptionsDialog(QDialog):
layout.addWidget(self.description_label)
self.radio_button_group = QButtonGroup(self)
self.radio_content = QRadioButton("Description/Content")
self.radio_comments = QRadioButton("Comments (Not Working)")
self.radio_comments = QRadioButton("Comments")
self.radio_button_group.addButton(self.radio_content)
self.radio_button_group.addButton(self.radio_comments)
layout.addWidget(self.radio_content)

View File

@@ -0,0 +1,118 @@
# multipart_scope_dialog.py
from PyQt5.QtWidgets import (
QDialog, QVBoxLayout, QGroupBox, QRadioButton, QDialogButtonBox, QButtonGroup,
QLabel, QLineEdit, QHBoxLayout, QFrame
)
from PyQt5.QtGui import QIntValidator
from PyQt5.QtCore import Qt
# It's good practice to get this constant from the source
# but for this example, we will define it here.
MAX_PARTS = 16
class MultipartScopeDialog(QDialog):
"""
A dialog to let the user select the scope, number of parts, and minimum size for multipart downloads.
"""
SCOPE_VIDEOS = 'videos'
SCOPE_ARCHIVES = 'archives'
SCOPE_BOTH = 'both'
def __init__(self, current_scope='both', current_parts=4, current_min_size_mb=100, parent=None):
super().__init__(parent)
self.setWindowTitle("Multipart Download Options")
self.setWindowFlags(self.windowFlags() & ~Qt.WindowContextHelpButtonHint)
self.setMinimumWidth(350)
# Main Layout
layout = QVBoxLayout(self)
# --- Options Group for Scope ---
self.options_group_box = QGroupBox("Apply multipart downloads to:")
options_layout = QVBoxLayout()
# ... (Radio buttons and button group code remains unchanged) ...
self.radio_videos = QRadioButton("Videos Only")
self.radio_archives = QRadioButton("Archives Only (.zip, .rar, etc.)")
self.radio_both = QRadioButton("Both Videos and Archives")
if current_scope == self.SCOPE_VIDEOS:
self.radio_videos.setChecked(True)
elif current_scope == self.SCOPE_ARCHIVES:
self.radio_archives.setChecked(True)
else:
self.radio_both.setChecked(True)
self.button_group = QButtonGroup(self)
self.button_group.addButton(self.radio_videos)
self.button_group.addButton(self.radio_archives)
self.button_group.addButton(self.radio_both)
options_layout.addWidget(self.radio_videos)
options_layout.addWidget(self.radio_archives)
options_layout.addWidget(self.radio_both)
self.options_group_box.setLayout(options_layout)
layout.addWidget(self.options_group_box)
# --- START: MODIFIED Download Settings Group ---
self.settings_group_box = QGroupBox("Download settings:")
settings_layout = QVBoxLayout()
# Layout for Parts count
parts_layout = QHBoxLayout()
self.parts_label = QLabel("Number of download parts per file:")
self.parts_input = QLineEdit(str(current_parts))
self.parts_input.setValidator(QIntValidator(2, MAX_PARTS, self))
self.parts_input.setFixedWidth(40)
self.parts_input.setToolTip(f"Set the number of concurrent connections per file (2-{MAX_PARTS}).")
parts_layout.addWidget(self.parts_label)
parts_layout.addStretch()
parts_layout.addWidget(self.parts_input)
settings_layout.addLayout(parts_layout)
# Layout for Minimum Size
size_layout = QHBoxLayout()
self.size_label = QLabel("Minimum file size for multipart (MB):")
self.size_input = QLineEdit(str(current_min_size_mb))
self.size_input.setValidator(QIntValidator(10, 10000, self)) # Min 10MB, Max ~10GB
self.size_input.setFixedWidth(40)
self.size_input.setToolTip("Files smaller than this will use a normal, single-part download.")
size_layout.addWidget(self.size_label)
size_layout.addStretch()
size_layout.addWidget(self.size_input)
settings_layout.addLayout(size_layout)
self.settings_group_box.setLayout(settings_layout)
layout.addWidget(self.settings_group_box)
# --- END: MODIFIED Download Settings Group ---
# OK and Cancel Buttons
self.button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel)
self.button_box.accepted.connect(self.accept)
self.button_box.rejected.connect(self.reject)
layout.addWidget(self.button_box)
self.setLayout(layout)
def get_selected_scope(self):
# ... (This method remains unchanged) ...
if self.radio_videos.isChecked():
return self.SCOPE_VIDEOS
if self.radio_archives.isChecked():
return self.SCOPE_ARCHIVES
return self.SCOPE_BOTH
def get_selected_parts(self):
# ... (This method remains unchanged) ...
try:
parts = int(self.parts_input.text())
return max(2, min(parts, MAX_PARTS))
except (ValueError, TypeError):
return 4
def get_selected_min_size(self):
"""Returns the selected minimum size in MB as an integer."""
try:
size = int(self.size_input.text())
return max(10, min(size, 10000)) # Enforce valid range
except (ValueError, TypeError):
return 100 # Return a safe default
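A sketch of how a call site might use this new dialog, assuming QDialog is in scope from PyQt5.QtWidgets; the initial values are arbitrary:

# Open the dialog pre-filled with saved values, read the clamped selections back.
dialog = MultipartScopeDialog(current_scope='videos', current_parts=8, current_min_size_mb=200)
if dialog.exec_() == QDialog.Accepted:
    scope = dialog.get_selected_scope()      # 'videos' | 'archives' | 'both'
    parts = dialog.get_selected_parts()      # clamped to 2..MAX_PARTS
    min_mb = dialog.get_selected_min_size()  # clamped to 10..10000 MB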

View File

@@ -1,34 +1,39 @@
# SinglePDF.py
import os
import re
try:
from fpdf import FPDF
FPDF_AVAILABLE = True
# --- FIX: Move the class definition inside the try block ---
class PDF(FPDF):
"""Custom PDF class to handle headers and footers."""
def header(self):
pass
def footer(self):
self.set_y(-15)
if self.font_family:
self.set_font(self.font_family, '', 8)
else:
self.set_font('Arial', '', 8)
self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'C')
except ImportError:
FPDF_AVAILABLE = False
# If the import fails, FPDF and PDF will not be defined,
# but the program won't crash here.
FPDF = None
PDF = None
def strip_html_tags(text):
if not text:
return ""
clean = re.compile('<.*?>')
return re.sub(clean, '', text)
def create_single_pdf_from_content(posts_data, output_filename, font_path, logger=print):
"""
Creates a single, continuous PDF, correctly formatting both descriptions and comments.
"""
if not FPDF_AVAILABLE:
logger("❌ PDF Creation failed: 'fpdf2' library is not installed. Please run: pip install fpdf2")
@@ -39,34 +44,62 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
return False
pdf = PDF()
default_font_family = 'DejaVu'
bold_font_path = ""
if font_path:
bold_font_path = font_path.replace("DejaVuSans.ttf", "DejaVuSans-Bold.ttf")
try:
if not os.path.exists(font_path): raise RuntimeError(f"Font file not found: {font_path}")
if not os.path.exists(bold_font_path): raise RuntimeError(f"Bold font file not found: {bold_font_path}")
pdf.add_font('DejaVu', '', font_path, uni=True)
pdf.add_font('DejaVu', 'B', bold_font_path, uni=True)
except Exception as font_error:
logger(f" ⚠️ Could not load DejaVu font: {font_error}. Falling back to Arial.")
default_font_family = 'Arial'
pdf.add_page()
logger(f" Starting continuous PDF creation with content from {len(posts_data)} posts...")
for i, post in enumerate(posts_data):
if i > 0:
# This ensures every post after the first gets its own page.
pdf.add_page()
pdf.set_font(default_font_family, 'B', 16)
pdf.multi_cell(w=0, h=10, txt=post.get('title', 'Untitled Post'), align='L')
pdf.ln(5)
if 'comments' in post and post['comments']:
comments_list = post['comments']
for comment_index, comment in enumerate(comments_list):
user = comment.get('commenter_name', 'Unknown User')
timestamp = comment.get('published', 'No Date')
body = strip_html_tags(comment.get('content', ''))
pdf.set_font(default_font_family, '', 10)
pdf.write(8, "Comment by: ")
if user is not None:
pdf.set_font(default_font_family, 'B', 10)
pdf.write(8, str(user))
pdf.set_font(default_font_family, '', 10)
pdf.write(8, f" on {timestamp}")
pdf.ln(10)
pdf.set_font(default_font_family, '', 11)
pdf.multi_cell(w=0, h=7, txt=body)
if comment_index < len(comments_list) - 1:
pdf.ln(3)
pdf.cell(w=0, h=0, border='T')
pdf.ln(3)
elif 'content' in post:
pdf.set_font(default_font_family, '', 12)
pdf.multi_cell(w=0, h=7, txt=post.get('content', 'No Content'))
try:
pdf.output(output_filename)
@@ -74,4 +107,4 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
return True
except Exception as e:
logger(f"❌ A critical error occurred while saving the final PDF: {e}")
return False
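A hypothetical driver for create_single_pdf_from_content; the post data and font path below are assumptions, and fpdf2 must be installed (pip install fpdf2):

posts = [
    {"title": "Post One", "content": "Hello world."},
    {"title": "Post Two", "comments": [
        {"commenter_name": "anon", "published": "2025-01-01", "content": "<p>Nice!</p>"},
    ]},
]
ok = create_single_pdf_from_content(
    posts_data=posts,
    output_filename="posts.pdf",
    font_path="assets/DejaVuSans.ttf",  # DejaVuSans-Bold.ttf is expected beside it
)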

View File

@@ -1,71 +1,144 @@
# src/app/dialogs/SupportDialog.py
# --- Standard Library Imports ---
import sys
import os
# --- PyQt5 Imports ---
from PyQt5.QtWidgets import (
QDialog, QVBoxLayout, QHBoxLayout, QLabel, QFrame,
QPushButton, QSizePolicy
)
from PyQt5.QtCore import Qt, QSize, QUrl
from PyQt5.QtGui import QPixmap, QDesktopServices
# --- Local Application Imports ---
from ...utils.resolution import get_dark_theme
class SupportDialog(QDialog):
"""
A polished dialog showcasing support and community options in a
clean, modern card-based layout.
"""
def __init__(self, parent=None):
super().__init__(parent)
self.parent_app = parent
self.setWindowTitle("❤️ Support the Developer")
self.setMinimumWidth(450)
self.setWindowTitle("❤️ Support & Community")
self.setMinimumWidth(560)
self._init_ui()
self._apply_theme()
def _create_card_button(
self, icon_path, title, subtitle, url,
hover_color="#2E2E2E", min_height=110, icon_size=44
):
"""Reusable clickable card widget with icon, title, and subtitle."""
button = QPushButton()
button.setCursor(Qt.PointingHandCursor)
button.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding)
button.setMinimumHeight(min_height)
# Consistent style
button.setStyleSheet(f"""
QPushButton {{
background-color: #3A3A3A;
border: 1px solid #555;
border-radius: 10px;
text-align: center;
padding: 12px;
}}
QPushButton:hover {{
background-color: {hover_color};
border: 1px solid #777;
}}
""")
layout = QVBoxLayout(button)
layout.setSpacing(6)
# Icon
icon_label = QLabel()
pixmap = QPixmap(icon_path)
if not pixmap.isNull():
scale = getattr(self.parent_app, 'scale_factor', 1.0)
scaled_size = int(icon_size * scale)
icon_label.setPixmap(
pixmap.scaled(QSize(scaled_size, scaled_size), Qt.KeepAspectRatio, Qt.SmoothTransformation)
)
icon_label.setAlignment(Qt.AlignCenter)
layout.addWidget(icon_label)
# Title
title_label = QLabel(title)
font = self.font()
font.setPointSize(11)
font.setBold(True)
title_label.setFont(font)
title_label.setAlignment(Qt.AlignCenter)
title_label.setStyleSheet("background-color: transparent; border: none;")
layout.addWidget(title_label)
# Subtitle
if subtitle:
subtitle_label = QLabel(subtitle)
subtitle_label.setStyleSheet("color: #A8A8A8; background-color: transparent; border: none;")
subtitle_label.setAlignment(Qt.AlignCenter)
layout.addWidget(subtitle_label)
button.clicked.connect(lambda: QDesktopServices.openUrl(QUrl(url)))
return button
def _create_section_title(self, text):
"""Stylized section heading."""
label = QLabel(text)
font = label.font()
font.setPointSize(13)
font.setBold(True)
label.setFont(font)
label.setAlignment(Qt.AlignCenter)
label.setStyleSheet("margin-top: 10px; margin-bottom: 5px;")
return label
def _init_ui(self):
main_layout = QVBoxLayout(self)
main_layout.setSpacing(18)
main_layout.setContentsMargins(20, 20, 20, 20)
# Header
header_label = QLabel("Support the Project")
font = header_label.font()
font.setPointSize(17)
font.setBold(True)
header_label.setFont(font)
header_label.setAlignment(Qt.AlignCenter)
main_layout.addWidget(header_label)
subtext = QLabel(
"If you enjoy this application, consider supporting its development. "
"Your help keeps the project alive and growing!"
)
subtext.setWordWrap(True)
subtext.setAlignment(Qt.AlignCenter)
main_layout.addWidget(subtext)
# Financial Support
main_layout.addWidget(self._create_section_title("Contribute Financially"))
donation_layout = QHBoxLayout()
donation_layout.setSpacing(15)
donation_layout.addWidget(self._create_card_button(
get_asset_path("ko-fi.png"), "Ko-fi", "One-time ",
"https://ko-fi.com/yuvi427183", "#2B2F36"
))
donation_layout.addWidget(self._create_card_button(
get_asset_path("patreon.png"), "Patreon", "Soon ",
"https://www.patreon.com/Yuvi102", "#3A2E2B"
))
donation_layout.addWidget(self._create_card_button(
get_asset_path("buymeacoffee.png"), "Buy Me a Coffee", "One-time",
"https://buymeacoffee.com/yuvi9587", "#403520"
))
main_layout.addLayout(donation_layout)
# Separator
line = QFrame()
@@ -73,83 +146,62 @@ class SupportDialog(QDialog):
line.setFrameShadow(QFrame.Sunken)
main_layout.addWidget(line)
# Community Section
main_layout.addWidget(self._create_section_title("Get Help & Connect"))
community_layout = QHBoxLayout()
community_layout.setSpacing(15)
community_layout.addWidget(self._create_card_button(
get_asset_path("github.png"), "GitHub", "Report issues",
"https://github.com/Yuvi9587/Kemono-Downloader", "#2E2E2E",
min_height=100, icon_size=36
))
community_layout.addWidget(self._create_card_button(
get_asset_path("discord.png"), "Discord", "Join the server",
"https://discord.gg/BqP64XTdJN", "#2C2F33",
min_height=100, icon_size=36
))
community_layout.addWidget(self._create_card_button(
get_asset_path("instagram.png"), "Instagram", "Follow me",
"https://www.instagram.com/uvi.arts/", "#3B2E40",
min_height=100, icon_size=36
))
main_layout.addLayout(community_layout)
# Close Button
close_button = QPushButton("Close")
close_button.setMinimumWidth(100)
close_button.clicked.connect(self.accept)
close_button.setStyleSheet("""
QPushButton {
padding: 6px 14px;
border-radius: 6px;
background-color: #444;
color: white;
}
QPushButton:hover {
background-color: #555;
}
""")
self.setLayout(main_layout)
button_layout = QHBoxLayout()
button_layout.addStretch()
button_layout.addWidget(close_button)
button_layout.addStretch()
main_layout.addLayout(button_layout)
def _apply_theme(self):
"""Applies the current theme from the parent application."""
if self.parent_app and hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
scale = getattr(self.parent_app, 'scale_factor', 1)
self.setStyleSheet(get_dark_theme(scale))
else:
self.setStyleSheet("")
self.setStyleSheet("")
def get_asset_path(filename):
"""Return the path to an asset, works in both dev and packaged environments."""
if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
base_path = sys._MEIPASS
else:
base_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
return os.path.join(base_path, 'assets', filename)
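A minimal usage sketch, assuming the assets/ folder resolved above actually contains the file (the icon name is taken from this dialog):

# Hypothetical call site: resolve a bundled icon in both dev and PyInstaller builds.
icon_path = get_asset_path("ko-fi.png")
pixmap = QPixmap(icon_path)
if pixmap.isNull():
    print(f"Asset missing or unreadable: {icon_path}")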

View File

@@ -1,97 +1,87 @@
# --- Standard Library Imports ---
import os
import sys
# --- PyQt5 Imports ---
from PyQt5.QtCore import pyqtSignal, Qt, QSettings
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
QStackedWidget, QScrollArea, QFrame, QWidget, QCheckBox
)
# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ..main_window import get_app_icon_object
from ...utils.resolution import get_dark_theme
from ...config.constants import CONFIG_ORGANIZATION_NAME
class TourStepWidget(QWidget):
"""
A custom widget for a single tour page, with improved styling for titles and content.
"""
def __init__(self, title_text, content_text, parent=None):
super().__init__(parent)
layout = QVBoxLayout(self)
layout.setContentsMargins(25, 20, 25, 20)
layout.setSpacing(15)
layout.setAlignment(Qt.AlignHCenter)
title_label = QLabel(title_text)
title_label.setAlignment(Qt.AlignCenter)
title_label.setStyleSheet("font-size: 18px; font-weight: bold; color: #E0E0E0; padding-bottom: 15px;")
title_label.setWordWrap(True)
title_label.setStyleSheet("font-size: 18pt; font-weight: bold; color: #E0E0E0; padding-bottom: 10px;")
layout.addWidget(title_label)
# Frame for the content area to give it a nice border
content_frame = QFrame()
content_frame.setObjectName("contentFrame")
content_layout = QVBoxLayout(content_frame)
scroll_area = QScrollArea()
scroll_area.setWidgetResizable(True)
scroll_area.setFrameShape(QFrame.NoFrame)
scroll_area.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff)
scroll_area.setVerticalScrollBarPolicy(Qt.ScrollBarAsNeeded)
scroll_area.setStyleSheet("background-color: transparent;")
content_label = QLabel(content_text)
content_label.setWordWrap(True)
content_label.setAlignment(Qt.AlignLeft | Qt.AlignTop)
content_label.setTextFormat(Qt.RichText)
content_label.setOpenExternalLinks(True)
content_label.setStyleSheet("font-size: 11pt; color: #C8C8C8; line-height: 1.8;")
# Indent the content slightly for better readability
content_label.setStyleSheet("font-size: 11pt; color: #C8C8C8; padding-left: 5px; padding-right: 5px;")
scroll_area.setWidget(content_label)
content_layout.addWidget(scroll_area)
layout.addWidget(content_frame, 1)
class TourDialog(QDialog):
"""
A redesigned, multi-page tour dialog with a visual progress indicator.
Shown on first launch; the "Never show this again" choice is persisted via QSettings.
"""
tour_finished_normally = pyqtSignal()
tour_skipped = pyqtSignal()
# Constants for QSettings
CONFIG_APP_NAME_TOUR = "ApplicationTour"
TOUR_SHOWN_KEY = "neverShowTourAgainV19"
TOUR_SHOWN_KEY = "neverShowTourAgainV20" # Version bumped to ensure new tour shows once
CONFIG_ORGANIZATION_NAME = CONFIG_ORGANIZATION_NAME
def __init__(self, parent_app, parent=None):
"""
Initializes the dialog.
Args:
parent_app (DownloaderApp): A reference to the main application window.
parent (QWidget, optional): The parent widget. Defaults to None.
"""
super().__init__(parent)
self.settings = QSettings(self.CONFIG_ORGANIZATION_NAME, self.CONFIG_APP_NAME_TOUR)
self.current_step = 0
self.parent_app = parent_app
self.progress_dots = []
self.setWindowIcon(get_app_icon_object())
self.setModal(True)
self.setFixedSize(680, 650)
self._init_ui()
self._apply_theme()
self._center_on_screen()
def _tr(self, key, default_text=""):
"""Helper for translation."""
if callable(get_translation) and self.parent_app:
return get_translation(self.parent_app.current_selected_language, key, default_text)
return default_text
def _init_ui(self):
"""Initializes all UI components and layouts."""
main_layout = QVBoxLayout(self)
main_layout.setContentsMargins(0, 0, 0, 0)
main_layout.setSpacing(0)
@@ -99,7 +89,7 @@ class TourDialog(QDialog):
self.stacked_widget = QStackedWidget()
main_layout.addWidget(self.stacked_widget, 1)
# Load the content for all 8 tour steps defined in translator.py
steps_content = [
("tour_dialog_step1_title", "tour_dialog_step1_content"),
("tour_dialog_step2_title", "tour_dialog_step2_content"),
@@ -111,54 +101,105 @@ class TourDialog(QDialog):
("tour_dialog_step8_title", "tour_dialog_step8_content"),
]
self.tour_steps_widgets = []
for title_key, content_key in steps_content:
title = self._tr(title_key, title_key)
content = self._tr(content_key, "Content not found.")
step_widget = TourStepWidget(title, content)
self.tour_steps_widgets.append(step_widget)
self.stacked_widget.addWidget(step_widget)
self.setWindowTitle(self._tr("tour_dialog_title", "Welcome to Kemono Downloader!"))
# --- Bottom Controls Area ---
bottom_frame = QFrame()
bottom_frame.setObjectName("bottomFrame")
main_layout.addWidget(bottom_frame)
bottom_controls_layout = QVBoxLayout(bottom_frame)
bottom_controls_layout.setContentsMargins(20, 15, 20, 20)
bottom_controls_layout.setSpacing(15)
self.never_show_again_checkbox = QCheckBox(self._tr("tour_dialog_never_show_checkbox", "Never show this tour again"))
bottom_controls_layout.addWidget(self.never_show_again_checkbox, 0, Qt.AlignLeft)
# --- Progress Indicator ---
progress_layout = QHBoxLayout()
progress_layout.addStretch()
for i in range(len(steps_content)):
dot = QLabel()
dot.setObjectName("progressDot")
dot.setFixedSize(12, 12)
self.progress_dots.append(dot)
progress_layout.addWidget(dot)
progress_layout.addStretch()
bottom_controls_layout.addLayout(progress_layout)
# --- Buttons and Checkbox ---
buttons_and_check_layout = QHBoxLayout()
self.never_show_again_checkbox = QCheckBox(self._tr("tour_dialog_never_show_checkbox", "Never show this again"))
buttons_and_check_layout.addWidget(self.never_show_again_checkbox, 0, Qt.AlignLeft)
buttons_and_check_layout.addStretch()
self.skip_button = QPushButton(self._tr("tour_dialog_skip_button", "Skip"))
self.skip_button.clicked.connect(self._skip_tour_action)
self.back_button = QPushButton(self._tr("tour_dialog_back_button", "Back"))
self.back_button.clicked.connect(self._previous_step)
self.next_button = QPushButton(self._tr("tour_dialog_next_button", "Next"))
self.next_button.clicked.connect(self._next_step_action)
self.next_button.setDefault(True)
self.next_button.setObjectName("nextButton") # For special styling
buttons_and_check_layout.addWidget(self.skip_button)
buttons_and_check_layout.addWidget(self.back_button)
buttons_and_check_layout.addWidget(self.next_button)
bottom_controls_layout.addLayout(buttons_and_check_layout)
self._update_ui_states()
def _apply_theme(self):
"""Applies the current theme from the parent application."""
if self.parent_app and self.parent_app.current_theme == "dark":
scale = getattr(self.parent_app, 'scale_factor', 1)
dark_theme_base = get_dark_theme(scale)
tour_styles = """
QDialog {
background-color: #2D2D30;
}
#bottomFrame {
background-color: #252526;
border-top: 1px solid #3E3E42;
}
#contentFrame {
border: 1px solid #3E3E42;
border-radius: 5px;
}
QScrollArea {
background-color: transparent;
border: none;
}
#progressDot {
background-color: #555;
border-radius: 6px;
border: 1px solid #4F4F4F;
}
#progressDot[active="true"] {
background-color: #007ACC;
border: 1px solid #005A9E;
}
#nextButton {
background-color: #007ACC;
border: 1px solid #005A9E;
padding: 8px 18px;
font-weight: bold;
}
#nextButton:hover {
background-color: #1E90FF;
}
#nextButton:disabled {
background-color: #444;
border-color: #555;
}
"""
self.setStyleSheet(dark_theme_base + tour_styles)
else:
self.setStyleSheet("QDialog { background-color: #f0f0f0; }")
def _center_on_screen(self):
"""Centers the dialog on the screen."""
try:
screen_geo = QApplication.primaryScreen().availableGeometry()
self.move(screen_geo.center() - self.rect().center())
@@ -166,54 +207,49 @@ class TourDialog(QDialog):
print(f"[TourDialog] Error centering dialog: {e}")
def _next_step_action(self):
"""Moves to the next step or finishes the tour."""
if self.current_step < self.stacked_widget.count() - 1:
self.current_step += 1
self.stacked_widget.setCurrentIndex(self.current_step)
else:
self._finish_tour_action()
self._update_ui_states()
def _previous_step(self):
"""Moves to the previous step."""
if self.current_step > 0:
self.current_step -= 1
self.stacked_widget.setCurrentIndex(self.current_step)
self._update_ui_states()
def _update_ui_states(self):
"""Updates the state and text of the navigation buttons and the progress dots."""
is_last_step = self.current_step == self.stacked_widget.count() - 1
self.next_button.setText(self._tr("tour_dialog_finish_button", "Finish") if is_last_step else self._tr("tour_dialog_next_button", "Next"))
self.back_button.setEnabled(self.current_step > 0)
self.skip_button.setVisible(not is_last_step)
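# Toggle the dynamic "active" property and re-polish each dot so the
# #progressDot[active="true"] style sheet rule above is re-evaluated.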
for i, dot in enumerate(self.progress_dots):
dot.setProperty("active", i == self.current_step)
dot.style().polish(dot)
def _skip_tour_action(self):
"""Handles the action when the tour is skipped."""
self._save_settings_if_checked()
self.tour_skipped.emit()
self.reject()
def _finish_tour_action(self):
"""Handles the action when the tour is finished normally."""
self._save_settings_if_checked()
self.tour_finished_normally.emit()
self.accept()
def _save_settings_if_checked(self):
"""Saves the 'never show again' preference to QSettings."""
self.settings.setValue(self.TOUR_SHOWN_KEY, self.never_show_again_checkbox.isChecked())
self.settings.sync()
@staticmethod
def should_show_tour():
"""Checks QSettings to see if the tour should be shown on startup."""
settings = QSettings(TourDialog.CONFIG_ORGANIZATION_NAME, TourDialog.CONFIG_APP_NAME_TOUR)
never_show = settings.value(TourDialog.TOUR_SHOWN_KEY, False, type=bool)
return not never_show
def closeEvent(self, event):
"""Ensures settings are saved if the dialog is closed via the 'X' button."""
self._skip_tour_action()
super().closeEvent(event)
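A small sketch of how a caller might gate the tour on startup; main_window here is a placeholder for the application window that exposes the attributes TourDialog reads (current_theme, current_selected_language, scale_factor):

# Hypothetical startup hook: show the tour only if the user has not opted out.
if TourDialog.should_show_tour():
    tour = TourDialog(parent_app=main_window)
    tour.tour_skipped.connect(lambda: print("Tour skipped"))
    tour.tour_finished_normally.connect(lambda: print("Tour finished"))
    tour.exec_()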

View File

@@ -0,0 +1,160 @@
import os
import re
import datetime
import time
try:
from fpdf import FPDF
FPDF_AVAILABLE = True
class PDF(FPDF):
"""Custom PDF class for Discord chat logs."""
def __init__(self, server_name, channel_name, *args, **kwargs):
super().__init__(*args, **kwargs)
self.server_name = server_name
self.channel_name = channel_name
self.default_font_family = 'DejaVu' # Can be changed to Arial if font fails
def header(self):
if self.page_no() == 1:
return # No header on the title page
self.set_font(self.default_font_family, '', 8)
self.cell(0, 10, f'{self.server_name} - #{self.channel_name}', 0, 0, 'L')
self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'R')
self.ln(10)
def footer(self):
pass # No footer needed, header has page number
except ImportError:
FPDF_AVAILABLE = False
FPDF = None
PDF = None
def create_pdf_from_discord_messages(messages_data, server_name, channel_name, output_filename, font_path, logger=print, cancellation_event=None, pause_event=None):
"""
Creates a single PDF from a list of Discord message objects, formatted as a chat log.
UPDATED to include clickable links for attachments and embeds.
"""
if not FPDF_AVAILABLE:
logger("❌ PDF Creation failed: 'fpdf2' library is not installed.")
return False
if not messages_data:
logger(" No messages were found or fetched to create a PDF.")
return False
# --- FIX: This helper function now correctly accepts and checks the event objects ---
def check_events(c_event, p_event):
"""Helper to safely check for pause and cancel events."""
if c_event and hasattr(c_event, 'is_cancelled') and c_event.is_cancelled:
return True # Stop
if p_event and hasattr(p_event, 'is_paused'):
while p_event.is_paused:
time.sleep(0.5)
if c_event and hasattr(c_event, 'is_cancelled') and c_event.is_cancelled:
return True
return False
logger(" Sorting messages by date (oldest first)...")
messages_data.sort(key=lambda m: m.get('published', m.get('timestamp', '')))
pdf = PDF(server_name, channel_name)
default_font_family = 'DejaVu'
try:
bold_font_path = font_path.replace("DejaVuSans.ttf", "DejaVuSans-Bold.ttf")
if not os.path.exists(font_path) or not os.path.exists(bold_font_path):
raise RuntimeError("Font files not found")
pdf.add_font('DejaVu', '', font_path, uni=True)
pdf.add_font('DejaVu', 'B', bold_font_path, uni=True)
except Exception as font_error:
logger(f" ⚠️ Could not load DejaVu font: {font_error}. Falling back to Arial.")
default_font_family = 'Arial'
pdf.default_font_family = 'Arial'
# --- Title Page ---
pdf.add_page()
pdf.set_font(default_font_family, 'B', 24)
pdf.cell(w=0, h=20, text="Discord Chat Log", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.ln(10)
pdf.set_font(default_font_family, '', 16)
pdf.cell(w=0, h=10, text=f"Server: {server_name}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.cell(w=0, h=10, text=f"Channel: #{channel_name}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.ln(5)
pdf.set_font(default_font_family, '', 10)
pdf.cell(w=0, h=10, text=f"Generated on: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.cell(w=0, h=10, text=f"Total Messages: {len(messages_data)}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.add_page()
logger(f" Starting PDF creation with {len(messages_data)} messages...")
for i, message in enumerate(messages_data):
# --- FIX: Pass the event objects to the helper function ---
if i % 50 == 0:
if check_events(cancellation_event, pause_event):
logger(" PDF generation cancelled by user.")
return False
author = message.get('author', {}).get('global_name') or message.get('author', {}).get('username', 'Unknown User')
timestamp_str = message.get('published', message.get('timestamp', ''))
content = message.get('content', '')
attachments = message.get('attachments', [])
embeds = message.get('embeds', [])
try:
if timestamp_str.endswith('Z'):
timestamp_str = timestamp_str[:-1] + '+00:00'
dt_obj = datetime.datetime.fromisoformat(timestamp_str)
formatted_timestamp = dt_obj.strftime('%Y-%m-%d %H:%M:%S')
except (ValueError, TypeError):
formatted_timestamp = timestamp_str
if i > 0:
pdf.ln(2)
pdf.set_draw_color(200, 200, 200)
pdf.cell(0, 0, '', border='T')
pdf.ln(2)
pdf.set_font(default_font_family, 'B', 11)
pdf.write(5, f"{author} ")
pdf.set_font(default_font_family, '', 9)
pdf.set_text_color(128, 128, 128)
pdf.write(5, f"({formatted_timestamp})")
pdf.set_text_color(0, 0, 0)
pdf.ln(6)
if content:
pdf.set_font(default_font_family, '', 10)
pdf.multi_cell(w=0, h=5, text=content)
if attachments or embeds:
pdf.ln(1)
pdf.set_font(default_font_family, '', 9)
pdf.set_text_color(22, 119, 219)
for att in attachments:
file_name = att.get('filename', 'untitled')
full_url = att.get('url', '#')
pdf.write(5, text=f"[Attachment: {file_name}]", link=full_url)
pdf.ln()
for embed in embeds:
embed_url = embed.get('url', 'no url')
pdf.write(5, text=f"[Embed: {embed_url}]", link=embed_url)
pdf.ln()
pdf.set_text_color(0, 0, 0)
if check_events(cancellation_event, pause_event):
logger(" PDF generation cancelled by user before final save.")
return False
try:
pdf.output(output_filename)
logger(f"✅ Successfully created Discord chat log PDF: '{os.path.basename(output_filename)}'")
return True
except Exception as e:
logger(f"❌ A critical error occurred while saving the final PDF: {e}")
return False
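A minimal invocation sketch, assuming fpdf2 is installed and a DejaVuSans.ttf exists at the given path; the message dict simply mirrors the keys the function reads (author, published, content, attachments, embeds):

# Hypothetical example data and output path.
messages = [{
    "author": {"username": "alice"},
    "published": "2025-07-01T12:00:00Z",
    "content": "Hello world",
    "attachments": [{"filename": "pic.png", "url": "https://example.com/pic.png"}],
    "embeds": [],
}]
create_pdf_from_discord_messages(
    messages, "My Server", "general", "chat_log.pdf",
    font_path="fonts/DejaVuSans.ttf",  # falls back to Arial if the font is missing
)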

File diff suppressed because it is too large

src/utils/command.py Normal file
View File

@@ -0,0 +1,49 @@
import re
# Command constants
CMD_ARCHIVE_ONLY = 'ao'
CMD_DOMAIN_OVERRIDE_PREFIX = '.'
CMD_SFP_PREFIX = 'sfp-'
CMD_UNKNOWN = 'unknown' # New command constant
def parse_commands_from_text(raw_text: str):
"""
Parses special commands from a text string and returns the cleaned text
and a dictionary of found commands.
Commands are in the format [command].
Example: "Tifa, (Cloud, Zack) [.st] [sfp-10] [unknown]"
Returns:
tuple[str, dict]: A tuple containing:
- The text string with commands removed.
- A dictionary of commands and their values.
"""
command_pattern = re.compile(r'\[(.*?)\]')
commands = {}
def command_replacer(match):
command_str = match.group(1).strip().lower()
if command_str.startswith(CMD_DOMAIN_OVERRIDE_PREFIX):
tld = command_str[len(CMD_DOMAIN_OVERRIDE_PREFIX):]
if 'domain_override' not in commands:
commands['domain_override'] = tld
elif command_str == CMD_ARCHIVE_ONLY:
commands['archive_only'] = True
elif command_str.startswith(CMD_SFP_PREFIX):
try:
threshold_str = command_str[len(CMD_SFP_PREFIX):]
threshold = int(threshold_str)
if 'sfp_threshold' not in commands:
commands['sfp_threshold'] = threshold
except (ValueError, IndexError):
pass
elif command_str == CMD_UNKNOWN: # Logic to handle the new command
commands['handle_unknown'] = True
return ''
text_without_commands = command_pattern.sub(command_replacer, raw_text).strip()
return text_without_commands, commands
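A quick sketch of the expected behaviour, reusing the example string from the docstring:

text, cmds = parse_commands_from_text("Tifa, (Cloud, Zack) [.st] [sfp-10] [unknown]")
# text -> "Tifa, (Cloud, Zack)"
# cmds -> {'domain_override': 'st', 'sfp_threshold': 10, 'handle_unknown': True}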

View File

@@ -20,7 +20,7 @@ VIDEO_EXTENSIONS = {
'.mpg', '.m4v', '.3gp', '.ogv', '.ts', '.vob'
}
ARCHIVE_EXTENSIONS = {
'.zip', '.rar', '.7z', '.tar', '.gz', '.bz2', '.bin'
}
AUDIO_EXTENSIONS = {
'.mp3', '.wav', '.aac', '.flac', '.ogg', '.wma', '.m4a', '.opus',
@@ -140,3 +140,5 @@ def is_audio(filename):
if not filename: return False
_, ext = os.path.splitext(filename)
return ext.lower() in AUDIO_EXTENSIONS

View File

@@ -1,14 +1,7 @@
# --- Standard Library Imports ---
import os
import re
from urllib.parse import urlparse
def parse_cookie_string(cookie_string):
"""
Parses a 'name=value; name2=value2' cookie string into a dictionary.
@@ -106,13 +99,11 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
if not use_cookie_flag:
return None
# Priority 1: Use the specifically browsed file first
if selected_cookie_file_path and os.path.exists(selected_cookie_file_path):
cookies = load_cookies_from_netscape_file(selected_cookie_file_path, logger_func, target_domain)
if cookies:
return cookies
# Priority 2: Look for a domain-specific cookie file
if app_base_dir and target_domain:
domain_specific_path = os.path.join(app_base_dir, "data", f"{target_domain}_cookies.txt")
if os.path.exists(domain_specific_path):
@@ -120,7 +111,6 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
if cookies:
return cookies
# Priority 3: Look for a generic cookies.txt
if app_base_dir:
default_path = os.path.join(app_base_dir, "appdata", "cookies.txt")
if os.path.exists(default_path):
@@ -128,7 +118,6 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
if cookies:
return cookies
# Priority 4: Fall back to manually entered text
if cookie_text_input:
cookies = parse_cookie_string(cookie_text_input)
if cookies:
@@ -141,28 +130,78 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
def extract_post_info(url_string):
"""
Parses a URL string to extract the service, user ID, and post ID.
UPDATED to support Hentai2Read series and chapters.
Args:
url_string (str): The URL to parse.
Returns:
tuple: A tuple containing (service, user_id, post_id). Any element can be None.
"""
if not isinstance(url_string, str) or not url_string.strip():
return None, None, None
stripped_url = url_string.strip()
# --- Rule34Video Check ---
rule34video_match = re.search(r'rule34video\.com/video/(\d+)', stripped_url)
if rule34video_match:
video_id = rule34video_match.group(1)
return 'rule34video', video_id, None
# --- Danbooru Check ---
danbooru_match = re.search(r'danbooru\.donmai\.us|safebooru\.donmai\.us', stripped_url)
if danbooru_match:
return 'danbooru', None, None
# --- Gelbooru Check ---
gelbooru_match = re.search(r'gelbooru\.com', stripped_url)
if gelbooru_match:
return 'gelbooru', None, None
# --- Bunkr Check ---
bunkr_pattern = re.compile(
r"(?:https?://)?(?:[a-zA-Z0-9-]+\.)?(?:bunkr\.(?:si|la|ws|red|black|media|site|is|to|ac|cr|ci|fi|pk|ps|sk|ph|su|ru)|bunkrr\.ru)"
)
if bunkr_pattern.search(stripped_url):
return 'bunkr', stripped_url, None
# --- SimpCity Check (Corrected version) ---
simpcity_match = re.search(r'simpcity\.cr/threads/([^/]+)(?:/post-(\d+))?', stripped_url)
if simpcity_match:
thread_info = simpcity_match.group(1)
post_id = simpcity_match.group(2)
return 'simpcity', thread_info, post_id
# --- nhentai Check ---
nhentai_match = re.search(r'nhentai\.net/g/(\d+)', stripped_url)
if nhentai_match:
return 'nhentai', nhentai_match.group(1), None
# --- Hentai2Read Check (Corrected to match series, chapter, and image URLs) ---
hentai2read_match = re.search(r'hentai2read\.com/([^/]+)(?:/(\d+))?/?', stripped_url)
if hentai2read_match:
manga_slug, chapter_num = hentai2read_match.groups()
return 'hentai2read', manga_slug, chapter_num
# --- Pixeldrain Check ---
pixeldrain_match = re.search(r'pixeldrain\.com/[lud]/([^/?#]+)', stripped_url)
if pixeldrain_match:
return 'pixeldrain', stripped_url, None
discord_channel_match = re.search(r'discord\.com/channels/(@me|\d+)/(\d+)', stripped_url)
if discord_channel_match:
server_id, channel_id = discord_channel_match.groups()
return 'discord', server_id, channel_id
# --- Kemono/Coomer/Discord Parsing ---
try:
parsed_url = urlparse(stripped_url)
path_parts = [part for part in parsed_url.path.strip('/').split('/') if part]
if len(path_parts) >= 3 and path_parts[0].lower() == 'discord' and path_parts[1].lower() == 'server':
return 'discord', path_parts[2], path_parts[3] if len(path_parts) >= 4 else None
# Standard format: /<service>/user/<user_id>/post/<post_id>
if len(path_parts) >= 3 and path_parts[1].lower() == 'user':
service = path_parts[0]
user_id = path_parts[2]
post_id = path_parts[4] if len(path_parts) >= 5 and path_parts[3].lower() == 'post' else None
return service, user_id, post_id
# API format: /api/v1/<service>/user/<user_id>...
if len(path_parts) >= 5 and path_parts[0:2] == ['api', 'v1'] and path_parts[3].lower() == 'user':
service = path_parts[2]
user_id = path_parts[4]
@@ -173,8 +212,7 @@ def extract_post_info(url_string):
print(f"Debug: Exception during URL parsing for '{url_string}': {e}")
return None, None, None
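Illustrative calls and the tuples the rules above produce (URLs are examples only):

extract_post_info("https://rule34video.com/video/12345/")         # ('rule34video', '12345', None)
extract_post_info("https://nhentai.net/g/177013/")                # ('nhentai', '177013', None)
extract_post_info("https://kemono.su/patreon/user/123/post/456")  # ('patreon', '123', '456')
extract_post_info("not a url")                                    # (None, None, None)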
def get_link_platform(url):
"""
Identifies the platform of a given URL based on its domain.
@@ -196,10 +234,9 @@ def get_link_platform(url):
if 'twitter.com' in domain or 'x.com' in domain: return 'twitter/x'
if 'discord.gg' in domain or 'discord.com/invite' in domain: return 'discord invite'
if 'pixiv.net' in domain: return 'pixiv'
if 'kemono.su' in domain or 'kemono.party' in domain or 'kemono.cr' in domain: return 'kemono'
if 'coomer.su' in domain or 'coomer.party' in domain or 'coomer.st' in domain: return 'coomer'
# Fallback to a generic name for other domains
parts = domain.split('.')
if len(parts) >= 2:
return parts[-2]

View File

@@ -28,19 +28,12 @@ def setup_ui(main_app):
main_app.scale_factor = scale
default_font = QApplication.font()
base_font_size = 9
default_font.setPointSize(int(base_font_size * scale))
main_app.setFont(default_font)
# --- END: Improved Scaling Logic ---
main_app.main_splitter = QSplitter(Qt.Horizontal)
# --- Use a scroll area for the left panel for consistency ---
left_scroll_area = QScrollArea()
left_scroll_area.setWidgetResizable(True)
left_scroll_area.setFrameShape(QFrame.NoFrame)
@@ -75,7 +68,7 @@ def setup_ui(main_app):
main_app.empty_popup_button.clicked.connect(main_app._show_empty_popup)
url_input_layout.addWidget(main_app.empty_popup_button)
main_app.page_range_label = QLabel(main_app._tr("page_range_label_text", "Page Range:"))
main_app.page_range_label.setStyleSheet("font-weight: bold; padding-left: 10px;")
url_input_layout.addWidget(main_app.page_range_label)
main_app.start_page_input = QLineEdit()
main_app.start_page_input.setPlaceholderText(main_app._tr("start_page_input_placeholder", "Start"))
@@ -134,8 +127,6 @@ def setup_ui(main_app):
main_app._update_char_filter_scope_button_text()
char_input_and_button_layout.addWidget(main_app.char_filter_scope_toggle_button, 1)
character_filter_v_layout.addLayout(char_input_and_button_layout)
# --- Custom Folder Widget Definition ---
main_app.custom_folder_widget = QWidget()
custom_folder_v_layout = QVBoxLayout(main_app.custom_folder_widget)
custom_folder_v_layout.setContentsMargins(0, 0, 0, 0)
@@ -146,7 +137,6 @@ def setup_ui(main_app):
custom_folder_v_layout.addWidget(main_app.custom_folder_label)
custom_folder_v_layout.addWidget(main_app.custom_folder_input)
main_app.custom_folder_widget.setVisible(False)
filters_and_custom_folder_layout.addWidget(main_app.character_filter_widget, 1)
filters_and_custom_folder_layout.addWidget(main_app.custom_folder_widget, 1)
left_layout.addWidget(main_app.filters_and_custom_folder_container_widget)
@@ -199,7 +189,6 @@ def setup_ui(main_app):
main_app.radio_only_audio = QRadioButton("🎧 Only Audio")
main_app.radio_only_links = QRadioButton("🔗 Only Links")
main_app.radio_more = QRadioButton("More")
main_app.radio_all.setChecked(True)
for btn in [main_app.radio_all, main_app.radio_images, main_app.radio_videos, main_app.radio_only_archives, main_app.radio_only_audio, main_app.radio_only_links, main_app.radio_more]:
main_app.radio_group.addButton(btn)
@@ -211,6 +200,24 @@ def setup_ui(main_app):
file_filter_layout.addLayout(radio_button_layout)
left_layout.addLayout(file_filter_layout)
# --- Booru Inputs Container ---
main_app.booru_inputs_widget = QWidget()
booru_inputs_layout = QHBoxLayout(main_app.booru_inputs_widget)
booru_inputs_layout.setContentsMargins(0, 5, 0, 0)
main_app.api_key_label = QLabel("API Key:")
main_app.api_key_input = QLineEdit()
main_app.api_key_input.setPlaceholderText("Danbooru or Gelbooru API Key")
main_app.user_id_label = QLabel("User ID:")
main_app.user_id_input = QLineEdit()
main_app.user_id_input.setPlaceholderText("Danbooru Username or Gelbooru User ID")
booru_inputs_layout.addWidget(main_app.api_key_label)
booru_inputs_layout.addWidget(main_app.api_key_input, 1)
booru_inputs_layout.addSpacing(10)
booru_inputs_layout.addWidget(main_app.user_id_label)
booru_inputs_layout.addWidget(main_app.user_id_input, 1)
left_layout.addWidget(main_app.booru_inputs_widget)
main_app.booru_inputs_widget.setVisible(False)
# --- Checkboxes Group ---
checkboxes_group_layout = QVBoxLayout()
checkboxes_group_layout.setSpacing(10)
@@ -234,33 +241,42 @@ def setup_ui(main_app):
row1_layout.addStretch(1)
checkboxes_group_layout.addLayout(row1_layout)
# --- Advanced Settings ---
# --- Advanced Settings Container ---
main_app.advanced_settings_widget = QWidget()
advanced_settings_layout = QVBoxLayout(main_app.advanced_settings_widget)
advanced_settings_layout.setContentsMargins(0, 0, 0, 0)
advanced_settings_layout.setSpacing(10)
advanced_settings_label = QLabel("⚙️ Advanced Settings:")
advanced_settings_layout.addWidget(advanced_settings_label)
main_app.advanced_row1_layout = QHBoxLayout()
main_app.advanced_row1_layout.setSpacing(10)
main_app.use_subfolder_per_post_checkbox = QCheckBox("Subfolder per Post")
main_app.use_subfolder_per_post_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
main_app.use_subfolder_per_post_checkbox.setChecked(True)
main_app.advanced_row1_layout.addWidget(main_app.use_subfolder_per_post_checkbox)
main_app.date_prefix_checkbox = QCheckBox("Date Prefix")
main_app.date_prefix_checkbox.setToolTip("When 'Subfolder per Post' is active, prefix the folder name with the post's upload date.")
main_app.advanced_row1_layout.addWidget(main_app.date_prefix_checkbox)
main_app.use_subfolders_checkbox = QCheckBox("Separate Folders by Known.txt")
main_app.use_subfolders_checkbox.setChecked(False)
main_app.use_subfolders_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
main_app.advanced_row1_layout.addWidget(main_app.use_subfolders_checkbox)
# --- Original Cookie Controls (for non-SimpCity sites) ---
main_app.use_cookie_checkbox = QCheckBox("Use Cookie")
main_app.use_cookie_checkbox.setChecked(main_app.use_cookie_setting)
main_app.cookie_text_input = QLineEdit()
main_app.cookie_text_input.setPlaceholderText("if no Select cookies.txt)")
main_app.cookie_text_input.setPlaceholderText("Cookie string or path from Browse...")
main_app.cookie_text_input.setText(main_app.cookie_text_setting)
main_app.cookie_browse_button = QPushButton("Browse...")
main_app.cookie_browse_button.setFixedWidth(int(80 * scale))
main_app.advanced_row1_layout.addWidget(main_app.use_cookie_checkbox)
main_app.advanced_row1_layout.addWidget(main_app.cookie_text_input, 2)
main_app.advanced_row1_layout.addWidget(main_app.cookie_browse_button)
main_app.advanced_row1_layout.addStretch(1)
advanced_settings_layout.addLayout(main_app.advanced_row1_layout)
advanced_row2_layout = QHBoxLayout()
advanced_row2_layout.setSpacing(10)
multithreading_layout = QHBoxLayout()
@@ -277,13 +293,58 @@ def setup_ui(main_app):
advanced_row2_layout.addLayout(multithreading_layout)
main_app.external_links_checkbox = QCheckBox("Show External Links in Log")
advanced_row2_layout.addWidget(main_app.external_links_checkbox)
main_app.manga_mode_checkbox = QCheckBox("Manga/Comic Mode")
main_app.manga_mode_checkbox = QCheckBox("Renaming Mode")
advanced_row2_layout.addWidget(main_app.manga_mode_checkbox)
advanced_row2_layout.addStretch(1)
advanced_settings_layout.addLayout(advanced_row2_layout)
checkboxes_group_layout.addWidget(main_app.advanced_settings_widget)
# --- SimpCity Settings Container (with its own cookie controls) ---
main_app.simpcity_settings_widget = QWidget()
simpcity_settings_layout = QVBoxLayout(main_app.simpcity_settings_widget)
simpcity_settings_layout.setContentsMargins(0, 0, 0, 0)
simpcity_settings_layout.setSpacing(10)
simpcity_settings_label = QLabel("⚙️ SimpCity Download Options:")
simpcity_settings_layout.addWidget(simpcity_settings_label)
# Checkbox row
simpcity_checkboxes_layout = QHBoxLayout()
main_app.simpcity_dl_pixeldrain_cb = QCheckBox("Download Pixeldrain")
main_app.simpcity_dl_saint2_cb = QCheckBox("Download Saint2.su")
main_app.simpcity_dl_mega_cb = QCheckBox("Download Mega")
main_app.simpcity_dl_bunkr_cb = QCheckBox("Download Bunkr")
main_app.simpcity_dl_gofile_cb = QCheckBox("Download Gofile")
simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_pixeldrain_cb)
simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_saint2_cb)
simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_mega_cb)
simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_bunkr_cb)
simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_gofile_cb)
simpcity_checkboxes_layout.addStretch(1)
simpcity_settings_layout.addLayout(simpcity_checkboxes_layout)
# --- START NEW CODE ---
# Create the second, dedicated set of cookie controls for SimpCity
simpcity_cookie_layout = QHBoxLayout()
simpcity_cookie_layout.setContentsMargins(0, 5, 0, 0) # Add some top margin
simpcity_cookie_label = QLabel("Cookie:")
main_app.simpcity_cookie_text_input = QLineEdit()
main_app.simpcity_cookie_text_input.setPlaceholderText("Cookie string or path... (Required)")
main_app.simpcity_cookie_browse_button = QPushButton("Browse...")
main_app.simpcity_cookie_browse_button.setFixedWidth(int(80 * scale))
simpcity_cookie_layout.addWidget(simpcity_cookie_label)
simpcity_cookie_layout.addWidget(main_app.simpcity_cookie_text_input, 1) # Stretch factor
simpcity_cookie_layout.addWidget(main_app.simpcity_cookie_browse_button)
simpcity_settings_layout.addLayout(simpcity_cookie_layout)
checkboxes_group_layout.addWidget(main_app.simpcity_settings_widget)
main_app.simpcity_settings_widget.setVisible(False)
left_layout.addLayout(checkboxes_group_layout)
# --- Action Buttons ---
# --- Action Buttons & Remaining UI ---
main_app.standard_action_buttons_widget = QWidget()
btn_layout = QHBoxLayout(main_app.standard_action_buttons_widget)
btn_layout.setContentsMargins(0, 10, 0, 0)
@@ -319,8 +380,6 @@ def setup_ui(main_app):
main_app.bottom_action_buttons_stack.addWidget(main_app.favorite_action_buttons_widget)
left_layout.addWidget(main_app.bottom_action_buttons_stack)
left_layout.addSpacing(10)
# --- Known Names Layout ---
known_chars_label_layout = QHBoxLayout()
known_chars_label_layout.setSpacing(10)
main_app.known_chars_label = QLabel("🎭 Known Shows/Characters (for Folder Names):")
@@ -369,8 +428,6 @@ def setup_ui(main_app):
char_manage_layout.addWidget(main_app.support_button, 0)
left_layout.addLayout(char_manage_layout)
left_layout.addStretch(0)
# --- Right Panel (Logs) ---
right_panel_widget.setLayout(right_layout)
log_title_layout = QHBoxLayout()
main_app.progress_log_label = QLabel("📜 Progress Log:")
@@ -380,15 +437,31 @@ def setup_ui(main_app):
main_app.link_search_input.setPlaceholderText("Search Links...")
main_app.link_search_input.setVisible(False)
log_title_layout.addWidget(main_app.link_search_input)
main_app.link_search_button = QPushButton("🔍")
main_app.link_search_button = QPushButton("🔎")
main_app.link_search_button.setVisible(False)
main_app.link_search_button.setFixedWidth(int(30 * scale))
log_title_layout.addWidget(main_app.link_search_button)
discord_controls_layout = QHBoxLayout()
main_app.discord_scope_toggle_button = QPushButton("Scope: Files")
main_app.discord_scope_toggle_button.setVisible(False)
discord_controls_layout.addWidget(main_app.discord_scope_toggle_button)
main_app.discord_message_limit_input = QLineEdit(main_app)
main_app.discord_message_limit_input.setPlaceholderText("Msg Limit")
main_app.discord_message_limit_input.setToolTip("Optional: Limit the number of recent messages to process.")
main_app.discord_message_limit_input.setValidator(QIntValidator(1, 9999999, main_app))
main_app.discord_message_limit_input.setFixedWidth(int(80 * scale))
main_app.discord_message_limit_input.setVisible(False)
discord_controls_layout.addWidget(main_app.discord_message_limit_input)
log_title_layout.addLayout(discord_controls_layout)
main_app.manga_rename_toggle_button = QPushButton()
main_app.manga_rename_toggle_button.setVisible(False)
main_app.manga_rename_toggle_button.setFixedWidth(int(140 * scale))
main_app._update_manga_filename_style_button_text()
log_title_layout.addWidget(main_app.manga_rename_toggle_button)
main_app.custom_rename_dialog_button = QPushButton("Open Dialog")
main_app.custom_rename_dialog_button.setVisible(False)
main_app.custom_rename_dialog_button.clicked.connect(main_app._show_custom_rename_dialog)
log_title_layout.addWidget(main_app.custom_rename_dialog_button)
main_app.manga_date_prefix_input = QLineEdit()
main_app.manga_date_prefix_input.setPlaceholderText("Prefix for Manga Filenames")
main_app.manga_date_prefix_input.setVisible(False)
@@ -451,26 +524,17 @@ def setup_ui(main_app):
main_app.file_progress_label.setWordWrap(True)
main_app.file_progress_label.setStyleSheet("padding-top: 2px; font-style: italic; color: #A0A0A0;")
right_layout.addWidget(main_app.file_progress_label)
# --- Final Assembly ---
main_app.main_splitter.addWidget(left_scroll_area)
main_app.main_splitter.addWidget(right_panel_widget)
if main_app.width() >= 1920:
# For wider resolutions, give more space to the log panel (right).
main_app.main_splitter.setStretchFactor(0, 4)
main_app.main_splitter.setStretchFactor(1, 6)
else:
# Default for lower resolutions, giving more space to controls (left).
main_app.main_splitter.setStretchFactor(0, 7)
main_app.main_splitter.setStretchFactor(1, 3)
top_level_layout = QHBoxLayout(main_app)
top_level_layout.setContentsMargins(0, 0, 0, 0)
top_level_layout.addWidget(main_app.main_splitter)
# --- Initial UI State Updates ---
main_app.update_ui_for_subfolders(main_app.use_subfolders_checkbox.isChecked())
main_app.update_external_links_setting(main_app.external_links_checkbox.isChecked())
main_app.update_multithreading_label(main_app.thread_count_input.text())
@@ -486,7 +550,6 @@ def setup_ui(main_app):
if hasattr(main_app, 'radio_group') and main_app.radio_group.checkedButton():
main_app._handle_filter_mode_change(main_app.radio_group.checkedButton(), True)
main_app.radio_group.buttonToggled.connect(main_app._handle_more_options_toggled)
main_app._update_manga_filename_style_button_text()
main_app._update_skip_scope_button_text()
main_app._update_char_filter_scope_button_text()
@@ -528,6 +591,9 @@ def get_dark_theme(scale=1):
border-radius: 4px;
font-size: {font_size}pt;
}}
QLineEdit::placeholder {{
color: #8A8A8A; /* A muted grey color for placeholder text */
}}
QTextEdit {{
font-family: Consolas, Courier New, monospace;
}}

View File

@@ -26,6 +26,16 @@ KNOWN_TXT_MATCH_CLEANUP_PATTERNS = [
r'\bPreview\b',
]
# --- START NEW CODE ---
# Regular expression to detect CJK characters
# Covers Hiragana, Katakana, Half/Full width forms, CJK Unified Ideographs, Hangul Syllables, etc.
cjk_pattern = re.compile(r'[\u3000-\u303f\u3040-\u309f\u30a0-\u30ff\uff00-\uffef\u4e00-\u9fff\uac00-\ud7af]')
def contains_cjk(text):
"""Checks if the text contains any CJK characters."""
return bool(cjk_pattern.search(text))
# --- END NEW CODE ---
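A quick sanity check of the detector, using literals covered by the ranges above:

assert contains_cjk("ナルト")               # Katakana -> True
assert contains_cjk("진격의 거인")           # Hangul -> True
assert not contains_cjk("Attack on Titan")  # Latin only -> False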
# --- Text Matching and Manipulation Utilities ---
def is_title_match_for_character(post_title, character_name_filter):
@@ -42,7 +52,7 @@ def is_title_match_for_character(post_title, character_name_filter):
"""
if not post_title or not character_name_filter:
return False
# Use word boundaries (\b) to match whole words only
pattern = r"(?i)\b" + re.escape(str(character_name_filter).strip()) + r"\b"
return bool(re.search(pattern, post_title))
@@ -62,7 +72,7 @@ def is_filename_match_for_character(filename, character_name_filter):
"""
if not filename or not character_name_filter:
return False
return str(character_name_filter).strip().lower() in filename.lower()
@@ -101,16 +111,16 @@ def extract_folder_name_from_title(title, unwanted_keywords):
"""
if not title:
return 'Uncategorized'
title_lower = title.lower()
# Find all whole words in the title
tokens = re.findall(r'\b[\w\-]+\b', title_lower)
for token in tokens:
clean_token = clean_folder_name(token)
if clean_token and clean_token.lower() not in unwanted_keywords:
return clean_token
# Fallback to cleaning the full title if no single significant word is found
cleaned_full_title = clean_folder_name(title)
return cleaned_full_title if cleaned_full_title else 'Uncategorized'
@@ -120,6 +130,7 @@ def match_folders_from_title(title, names_to_match, unwanted_keywords):
"""
Matches folder names from a title based on a list of known name objects.
Each name object is a dict: {'name': 'PrimaryName', 'aliases': ['alias1', ...]}
MODIFIED: Uses substring matching for CJK aliases, word boundary for others.
Args:
title (str): The post title to check.
@@ -137,10 +148,11 @@ def match_folders_from_title(title, names_to_match, unwanted_keywords):
for pat_str in KNOWN_TXT_MATCH_CLEANUP_PATTERNS:
cleaned_title = re.sub(pat_str, ' ', cleaned_title, flags=re.IGNORECASE)
cleaned_title = re.sub(r'\s+', ' ', cleaned_title).strip()
# Store both original case cleaned title and lower case for different matching
title_lower = cleaned_title.lower()
matched_cleaned_names = set()
# Sort by name length descending to match longer names first (e.g., "Cloud Strife" before "Cloud")
sorted_name_objects = sorted(names_to_match, key=lambda x: len(x.get("name", "")), reverse=True)
@@ -149,25 +161,52 @@ def match_folders_from_title(title, names_to_match, unwanted_keywords):
aliases = name_obj.get("aliases", [])
if not primary_folder_name or not aliases:
continue
# <<< START MODIFICATION >>>
cleaned_primary_name = clean_folder_name(primary_folder_name)
if not cleaned_primary_name or cleaned_primary_name.lower() in unwanted_keywords:
continue # Skip this entry entirely if its primary name is unwanted or empty
match_found_for_this_object = False
for alias in aliases:
if not alias: continue
alias_lower = alias.lower()
if not alias_lower: continue
# Check if the alias contains CJK characters
if contains_cjk(alias):
# Use simple substring matching for CJK
if alias_lower in title_lower:
matched_cleaned_names.add(cleaned_primary_name)
match_found_for_this_object = True
break # Move to the next name object
else:
# Use original word boundary matching for non-CJK
try:
# Compile pattern for efficiency if used repeatedly, though here it changes each loop
pattern = r'\b' + re.escape(alias_lower) + r'\b'
if re.search(pattern, title_lower):
matched_cleaned_names.add(cleaned_primary_name)
match_found_for_this_object = True
break # Move to the next name object
except re.error as e:
# Log error if the alias creates an invalid regex (unlikely with escape)
print(f"Regex error for alias '{alias}': {e}") # Or use proper logging
continue
# <<< END MODIFICATION >>>
return sorted(list(matched_cleaned_names))
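A sketch of the behaviour difference, with a hypothetical known-names entry (and assuming clean_folder_name leaves "Naruto" unchanged): the CJK alias matches as a substring, while the Latin alias still requires a whole-word match.

names = [{"name": "Naruto", "aliases": ["Naruto", "ナルト"]}]
match_folders_from_title("ナルトの新作イラスト", names, unwanted_keywords=set())  # ['Naruto']
match_folders_from_title("Narutomaki recipe", names, unwanted_keywords=set())    # [] (no whole word)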
def match_folders_from_filename_enhanced(filename, names_to_match, unwanted_keywords):
"""
Matches folder names from a filename, prioritizing longer and more specific aliases.
It returns immediately after finding the first (longest) match.
MODIFIED: Prioritizes boundary-aware matches for Latin characters,
falls back to substring search for CJK compatibility.
Args:
filename (str): The filename to check.
@@ -175,33 +214,61 @@ def match_folders_from_filename_enhanced(filename, names_to_match, unwanted_keyw
unwanted_keywords (set): A set of folder names to ignore.
Returns:
list: A list containing the single best folder name match, or an empty list.
"""
if not filename or not names_to_match:
return []
filename_lower = filename.lower()
# Create a flat list of (alias, primary_name) tuples
alias_map_to_primary = []
for name_obj in names_to_match:
primary_name = name_obj.get("name")
if not primary_name: continue
cleaned_primary_name = clean_folder_name(primary_name)
if not cleaned_primary_name or cleaned_primary_name.lower() in unwanted_keywords:
continue
for alias in name_obj.get("aliases", []):
if alias: # Check if alias is not None and not an empty string
alias_lower_val = alias.lower()
if alias_lower_val: # Check again after lowercasing
alias_map_to_primary.append((alias_lower_val, cleaned_primary_name))
# Sort by alias length, descending, to match longer aliases first
alias_map_to_primary.sort(key=lambda x: len(x[0]), reverse=True)
# Return the FIRST match found, which will be the longest
for alias_lower, primary_name_for_alias in alias_map_to_primary:
try:
# 1. Attempt boundary-aware match first (good for English/Latin)
# Matches alias if it's at the start/end or surrounded by common separators
# We use word boundaries (\b) and also check for common non-word separators like +_-
pattern = r'(?:^|[\s_+-])' + re.escape(alias_lower) + r'(?:[\s_+-]|$)'
if re.search(pattern, filename_lower):
# Found a precise, boundary-aware match. This is the best case.
return [primary_name_for_alias]
# 2. Fallback: Simple substring check (for CJK or other cases)
# This executes ONLY if the boundary match above failed.
# We check if the alias contains CJK OR if the filename does.
# This avoids applying the simple 'in' check for Latin-only aliases in Latin-only filenames.
elif (contains_cjk(alias_lower) or contains_cjk(filename_lower)) and alias_lower in filename_lower:
# This is the fallback for CJK compatibility.
return [primary_name_for_alias]
# If alias is "ul" and filename is "sin+título":
# 1. re.search(r'(?:^|[\s_+-])ul(?:[\s_+-]|$)', "sin+título") -> Fails (good)
# 2. contains_cjk("ul") -> False
# 3. contains_cjk("sin+título") -> False
# 4. No match is found for "ul". (correct)
except re.error as e:
print(f"Regex error matching alias '{alias_lower}' in filename '{filename_lower}': {e}")
continue # Skip this alias if regex fails
# If the loop finishes without any matches, return an empty list.
return []
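And a matching sketch for the filename variant, under the same hypothetical known-names entry: the boundary pattern accepts the alias at the start/end of the name or next to a space, underscore, plus, or hyphen, so "cloud_wallpaper" matches while "mccloud" does not.

names = [{"name": "Cloud Strife", "aliases": ["cloud strife", "cloud"]}]
match_folders_from_filename_enhanced("cloud_wallpaper.png", names, set())  # ['Cloud Strife']
match_folders_from_filename_enhanced("mccloud.png", names, set())          # [] (no separator boundary)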

BIN
yt-dlp.exe Normal file

Binary file not shown.