34 Commits

Author SHA1 Message Date
Yuvi9587
56a83195b2 Update readme.md 2025-08-11 09:31:53 -07:00
Yuvi9587
26fa3b9bc1 Commit 2025-08-10 09:16:31 -07:00
Yuvi9587
f7c4d892a8 commit 2025-08-07 21:42:04 -07:00
Yuvi9587
661b97aa16 Commit 2025-08-06 06:56:49 -07:00
Yuvi9587
3704fece2b Update main_window.py 2025-08-04 04:53:52 -07:00
Yuvi9587
bdb7ac93c4 Update readme.md 2025-08-03 09:16:25 -07:00
Yuvi9587
76d4a3ea8a Update main_window.py 2025-08-03 09:15:01 -07:00
Yuvi9587
ccc7804505 Update readme.md 2025-08-03 09:13:47 -07:00
Yuvi9587
4ee750c5d4 Update drive_downloader.py 2025-08-03 09:11:27 -07:00
Yuvi9587
e9be13c4e3 Update readme.md 2025-08-03 09:07:29 -07:00
Yuvi9587
a5cb04ea6f Update features.md 2025-08-03 06:46:30 -07:00
Yuvi9587
842f18d70d Update features.md 2025-08-03 06:32:32 -07:00
Yuvi9587
fb3f0e8913 Update features.md 2025-08-03 06:11:05 -07:00
Yuvi9587
0758887154 Update features.md 2025-08-03 06:07:05 -07:00
Yuvi9587
e752d881e7 Update features.md 2025-08-03 06:01:32 -07:00
Yuvi9587
a776d1abe9 Update features.md 2025-08-03 06:01:15 -07:00
Yuvi9587
21d1ce4fa9 Commit 2025-08-03 05:46:51 -07:00
Yuvi9587
d5112a25ee Commit 2025-08-01 09:42:10 -07:00
Yuvi9587
791ce503ff Update main_window.py 2025-08-01 07:57:32 -07:00
Yuvi9587
e5b519d5ce Commit 2025-08-01 06:33:36 -07:00
Yuvi9587
9888ed0862 Update multipart_downloader.py 2025-07-30 22:28:05 -07:00
Yuvi9587
9e996bf682 Commit 2025-07-30 21:31:02 -07:00
Yuvi9587
e7a6a91542 commit 2025-07-30 19:30:50 -07:00
Yuvi9587
d7faccce18 Commit 2025-07-29 06:37:28 -07:00
Yuvi9587
a78c01c4f6 Update workers.py 2025-07-27 07:44:14 -07:00
Yuvi9587
6de9967e0b Commit 2025-07-27 07:18:08 -07:00
Yuvi9587
e3dd0e70b6 commit 2025-07-27 06:32:15 -07:00
Yuvi9587
9db89cfad0 Commit 2025-07-25 11:00:33 -07:00
Yuvi9587
0a6034a632 Update features.md 2025-07-25 10:49:51 -07:00
Yuvi9587
2da69e7017 Update features.md 2025-07-25 10:45:50 -07:00
Yuvi9587
3209770d00 Update LICENSE 2025-07-23 20:14:01 -07:00
Yuvi9587
337cdd342c Commit 2025-07-23 20:08:44 -07:00
Yuvi9587
d54b013bbc commit 2025-07-22 07:00:34 -07:00
Yuvi9587
2785fc1121 Update EmptyPopupDialog.py 2025-07-19 20:27:55 -07:00
22 changed files with 3657 additions and 1617 deletions

24
LICENSE
View File

@@ -1,11 +1,21 @@
Custom License - No Commercial Use
MIT License
Copyright [Yuvi9587] [2025]
Copyright (c) [2025] [Yuvi9587]
Permission is hereby granted to any person obtaining a copy of this software and associated documentation files (the "Software"), to use, copy, modify, and distribute the Software for **non-commercial purposes only**, subject to the following conditions:
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
1. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
2. Proper credit must be given to the original author in any public use, distribution, or derivative works.
3. Commercial use, resale, or sublicensing of the Software or any derivative works is strictly prohibited without explicit written permission.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND...
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,192 +1,391 @@
# Kemono Downloader - Feature Guide
This guide provides a comprehensive overview of all user interface elements, input fields, buttons, popups, and functionalities available in the Kemono Downloader.
<div>
<h1>Kemono Downloader - Comprehensive Feature Guide</h1>
<p>This guide provides a detailed overview of all user interface elements, input fields, buttons, popups, and functionalities available in the application.</p>
<hr>
## 1. Main Interface & Workflow
These are the primary controls you'll interact with to initiate and manage downloads.
<h2><strong>1. URL Input (🔗)</strong></h2>
<p>This is the primary input field where you specify the content you want to download.</p>
### 1.1. Core Inputs
**🔗 Creator/Post URL Input Field**  
- **Purpose**: Paste the URL of the content you want to download.  
- **Supported Sites**: Kemono.su, Coomer.party, Simpcity.su.  
- **Supported URL Types**:  
  - Creator pages (e.g., `https://kemono.su/patreon/user/12345`).  
  - Individual posts (e.g., `https://kemono.su/patreon/user/12345/post/98765`).  
- **Note**: When ⭐ Favorite Mode is active, this field is disabled. For Simpcity.su URLs, the "Use Cookie" option is mandatory and auto-enabled.
<p><strong>Functionality:</strong></p>
<ul>
<li><strong>Creator URL:</strong> A link to a creator's main page (e.g., https://kemono.su/patreon/user/12345). Downloads all posts from the creator.</li>
<li><strong>Post URL:</strong> A direct link to a specific post (e.g., .../post/98765). Downloads only the specified post.</li>
</ul>
**🎨 Creator Selection Button**  
- **Icon**: 🎨 (Artist Palette)  
- **Purpose**: Opens the "Creator Selection" dialog to browse and queue downloads from known creators.  
- **Dialog Features**:  
  - Loads creators from `creators.json`.  
  - **Search Bar**: Filter creators by name.  
  - **Creator List**: Displays creators with their service (e.g., Patreon, Fanbox).  
  - **Selection**: Checkboxes to select one or more creators.  
  - **Download Scope**: Organize downloads by Characters or Creators.  
  - **Add to Queue**: Adds selected creators or their posts to the download queue.
<p><strong>Interaction with Other Features:</strong> The content of this field influences "Manga Mode" and "Page Range". "Page Range" is enabled only with a creator URL.</p>
**Page Range (Start to End) Input Fields**  
- **Purpose**: Specify a range of pages to fetch for creator URLs.  
- **Usage**: Enter the starting and ending page numbers.  
- **Behavior**:  
  - If blank, all pages are processed.  
  - Disabled for single post URLs.
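As a rough illustration, the 1-based page range maps onto the API's offset-style pagination. This is a minimal sketch assuming the posts endpoint paginates with an `o` offset parameter and 50 posts per page, as the `api_client.py` hunk later in this diff suggests; the helper name is hypothetical.

```python
# Hypothetical helper: translate the UI's 1-based page range into API offsets.
# Assumes an "o" offset parameter and a page size of 50 (see page_size = 50
# in the api_client.py hunk below).
PAGE_SIZE = 50

def offsets_for_page_range(start_page=None, end_page=None):
    """Yield offsets for the inclusive page range; blank bounds mean all pages."""
    page = start_page or 1
    while end_page is None or page <= end_page:
        yield (page - 1) * PAGE_SIZE
        page += 1

# Pages 2-3 correspond to offsets 50 and 100.
assert list(offsets_for_page_range(2, 3)) == [50, 100]
```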
<hr>
**📁 Download Location Input Field & Browse Button**  
- **Purpose**: Specify the main directory for downloaded files.  
- **Usage**: Type the path or click "Browse..." to select a folder.  
- **Requirement**: Mandatory for all download operations.
<h2><strong>2. Creator Selection & Update (🎨)</strong></h2>
<p>The color palette emoji button opens the Creator Selection & Update dialog. This allows managing and downloading from a local creator database.</p>
### 1.2. Action Buttons
**⬇️ Start Download / 🔗 Extract Links Button**  
- **Purpose**: Initiates downloading or link extraction.  
- **Behavior**:  
  - Shows "🔗 Extract Links" if "Only Links" is selected.  
  - Otherwise, shows "⬇️ Start Download".  
  - Supports single-threaded or multi-threaded downloads based on settings.
<p><strong>Functionality:</strong></p>
<ul>
<li><strong>Creator Browser:</strong> Loads a list from <code>creators.json</code>. Search by name, service, or paste a URL to find creators.</li>
<li><strong>Batch Selection:</strong> Select multiple creators and click "Add Selected" to add them to the batch download session.</li>
<li><strong>Update Checker:</strong> Use a saved profile (.json) to download only new content based on previously fetched posts.</li>
<li><strong>Post Fetching & Filtering:</strong> "Fetch Posts" loads post titles, allowing you to choose specific posts for download.</li>
</ul>
**🔄 Restore Download Button**  
- **Visibility**: Appears if an incomplete session is detected on startup.  
- **Purpose**: Resumes a previously interrupted download session.
<hr>
**⏸️ Pause / ▶️ Resume Download Button**  
- **Purpose**: Pause or resume the ongoing download.  
- **Behavior**: Toggles between "Pause" and "Resume". Some UI settings can be changed while paused.
<h2><strong>3. Download Location Input (📁)</strong></h2>
<p>This input defines the destination directory for downloaded files.</p>
**❌ Cancel & Reset UI Button**  
- **Purpose**: Stops the current operation and performs a "soft" reset.  
- **Behavior**: Halts background threads, preserves URL and Download Location inputs, resets other settings.
<p><strong>Functionality:</strong></p>
<ul>
<li><strong>Manual Entry:</strong> Enter or paste the folder path.</li>
<li><strong>Browse Button:</strong> Opens a system dialog to choose a folder.</li>
<li><strong>Directory Creation:</strong> If the folder doesn't exist, the app can create it after user confirmation.</li>
</ul>
**🔄 Reset Button (in the log area)**  
- **Purpose**: Performs a "hard" reset when no operation is active.  
- **Behavior**: Clears all inputs, resets options to default, and clears logs.
<hr>
## 2. Filtering & Content Selection
These options allow precise control over downloaded content.
<h2><strong>4. Filter by Character(s) & Scope Button</strong></h2>
<p>Used to download content for specific characters or series and organize them into subfolders.</p>
### 2.1. Content Filtering
**🎯 Filter by Character(s) Input Field**  
- **Purpose**: Download content related to specific characters or series.  
- **Usage**: Enter comma-separated character names.  
- **Advanced Syntax**:  
  - `Nami`: Simple filter.  
  - `(Vivi, Ulti)`: Grouped filter. Matches posts with "Vivi" OR "Ulti". Creates a shared folder like `Vivi Ulti` if subfolders are enabled.  
  - `(Boa, Hancock)~`: Aliased filter. Treats "Boa" and "Hancock" as the same entity.
<p><strong>Input Field (Filter by Character(s)):</strong></p>
<ul>
<li>Enter comma-separated names (e.g., <code>Tifa, Aerith</code>).</li>
<li>Group aliases using parentheses (e.g., <code>(Cloud, Zack)</code>).</li>
<li>Names are matched against titles, filenames, or comments.</li>
<li>If "Separate Folders by Known.txt" is enabled, the name becomes the subfolder name.</li>
</ul>
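A minimal sketch of how this syntax could be tokenized, covering the simple, grouped, and aliased forms described above; `parse_character_filter` is an illustrative name, not the application's actual function:

```python
import re

def parse_character_filter(raw):
    """Parse filter text into (names, folder, aliased) tuples.

    'Nami'            -> simple filter
    '(Vivi, Ulti)'    -> grouped: match any name, share one folder
    '(Boa, Hancock)~' -> aliased: both names treated as the same entity
    """
    entries = []
    for token in re.findall(r'\([^)]*\)~?|[^,\s][^,]*', raw):
        token = token.strip()
        aliased = token.endswith('~')
        if token.startswith('('):
            inner = token.rstrip('~')[1:-1]
            names = [n.strip() for n in inner.split(',') if n.strip()]
            folder = ' '.join(names)  # e.g. the shared "Vivi Ulti" subfolder
        else:
            names, folder = [token], token
        entries.append((names, folder, aliased))
    return entries

print(parse_character_filter('Nami, (Vivi, Ulti), (Boa, Hancock)~'))
```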
**Filter: [Type] Button (Character Filter Scope)**  
- **Purpose**: Defines where the character filter is applied. Cycles on click.  
- **Options**:  
  - **Filter: Title** (Default): Matches post titles.  
  - **Filter: Files**: Matches filenames.  
  - **Filter: Both**: Checks title first, then filenames.  
  - **Filter: Comments (Beta)**: Checks filenames, then post comments.
<p><strong>Scope Button Modes:</strong></p>
<ul>
<li><strong>Filter: Title</strong> (default): Matches names in post titles only.</li>
<li><strong>Filter: Files</strong>: Matches names in filenames only.</li>
<li><strong>Filter: Both</strong>: Tries a title match first, then filenames.</li>
<li><strong>Filter: Comments</strong>: Tries filenames first, then post comments if no match.</li>
</ul>
**🚫 Skip with Words Input Field**  
- **Purpose**: Exclude posts/files with specified keywords (e.g., `WIP`, `sketch`).
<hr>
**Scope: [Type] Button (Skip Words Scope)**  
- **Purpose**: Defines where skip words are applied. Cycles on click.  
- **Options**:  
  - **Scope: Posts** (Default): Skips posts if the title contains a skip word.  
  - **Scope: Files**: Skips files if the filename contains a skip word.  
  - **Scope: Both**: Applies both rules.
<h2><strong>5. Skip with Words & Scope Button</strong></h2>
<p>Prevents downloading content based on keywords.</p>
**✂️ Remove Words from Name Input Field**  
- **Purpose**: Remove unwanted text from filenames (e.g., `patreon`, `[HD]`).
<p><strong>Input Field (Skip with Words):</strong></p>
<ul>
<li>Enter comma-separated keywords (e.g., <code>WIP, sketch, preview</code>).</li>
<li>Matching is case-insensitive.</li>
<li>If a keyword matches, the file or post is skipped.</li>
</ul>
### 2.2. File Type Filtering
**Filter Files (Radio Buttons)**  
- **Purpose**: Select file types to download.  
- **Options**:  
  - **All**: All file types.  
  - **Images/GIFs**: Common image formats.  
  - **Videos**: Common video formats.  
  - **🎧 Only Audio**: Common audio formats.  
  - **📦 Only Archives**: Only `.zip` and `.rar` files.  
  - **🔗 Only Links**: Extracts external links without downloading files.
<p><strong>Scope Button Modes:</strong></p>
<ul>
<li><strong>Scope: Posts</strong> (default): Skips the post if its title contains a keyword.</li>
<li><strong>Scope: Files</strong>: Skips individual files whose names match a keyword.</li>
<li><strong>Scope: Both</strong>: Skips the entire post if the title matches; otherwise filters individual files.</li>
</ul>
</div>
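The scope logic above reduces to a few lines. A sketch with illustrative names, assuming case-insensitive substring matching as documented:

```python
def should_skip(post_title, filename, skip_words, scope="posts"):
    """Return True when this post/file pair should be skipped under the scope."""
    words = [w.strip().lower() for w in skip_words if w.strip()]
    title_hit = any(w in post_title.lower() for w in words)
    file_hit = any(w in filename.lower() for w in words)
    if scope == "posts":
        return title_hit          # skip the whole post on a title match
    if scope == "files":
        return file_hit           # skip only the matching files
    return title_hit or file_hit  # "both": post rule first, then per-file

print(should_skip("WIP page 3", "final.png", ["WIP", "sketch"]))  # True
```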
<div>
<h2><strong>Filter File Section (Radio Buttons)</strong></h2>
<p>This section uses a group of radio buttons to control the primary download mode, dictating which types of files are targeted. Only one of these modes can be active at a time.</p>
**Skip .zip / Skip .rar Checkboxes**  
- **Purpose**: Skip downloading `.zip` or `.rar` files.  
- **Behavior**: Disabled when "📦 Only Archives" is active.
<ul>
<li>
<strong>All:</strong> Default mode. Downloads every file and attachment provided by the API, regardless of type.
</li>
<li>
<strong>Images/GIFs:</strong> Filters for common image formats (<code>.jpg</code>, <code>.png</code>, <code>.gif</code>, <code>.webp</code>), skipping non-image files.
</li>
<li>
<strong>Videos:</strong> Filters for common video formats like <code>.mp4</code>, <code>.webm</code>, and <code>.mov</code>, skipping all others.
</li>
<li>
<strong>Only Archives:</strong> Downloads only archive files (<code>.zip</code>, <code>.rar</code>). Disables "Compress to WebP" and unchecks "Skip Archives".
</li>
<li>
<strong>Only Audio:</strong> Filters for common audio formats like <code>.mp3</code>, <code>.wav</code>, and <code>.flac</code>.
</li>
<li>
<strong>Only Links:</strong> Extracts external hyperlinks from post descriptions (e.g., Mega, Google Drive) and displays them in the log. Disables all download options.
</li>
<li>
<strong>More:</strong> Opens the "More Options" dialog to download text-based content instead of media files.
<ul>
<li><strong>Scope:</strong> Choose to extract from post description or comments.</li>
<li><strong>Export Format:</strong> Save text as PDF, DOCX, or TXT.</li>
<li><strong>Single PDF:</strong> Optionally compile all text into one PDF.</li>
</ul>
</li>
</ul>
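In effect, each radio button selects an allow-list of extensions. A sketch under the assumption that the formats named above are the complete sets; the real application may recognize more types:

```python
import os

# Illustrative extension sets based on the formats named in this guide.
FILTERS = {
    "images": {".jpg", ".jpeg", ".png", ".gif", ".webp"},
    "videos": {".mp4", ".webm", ".mov"},
    "audio": {".mp3", ".wav", ".flac"},
    "archives": {".zip", ".rar"},
}

def passes_file_filter(filename, mode="all"):
    """Return True when the file matches the selected radio-button mode."""
    if mode == "all":
        return True
    ext = os.path.splitext(filename)[1].lower()
    return ext in FILTERS.get(mode, set())

print(passes_file_filter("page01.PNG", mode="images"))  # True
print(passes_file_filter("bonus.zip", mode="videos"))   # False
```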
## 3. Download Customization
Options to refine the download process and output.
<hr>
- **Download Thumbnails Only**: Downloads small preview images instead of full-resolution files.  
- **Scan Content for Images**: Scans post HTML for `<img>` tags, crucial for images in descriptions.  
- **Compress to WebP**: Converts images to WebP format (requires Pillow library).
- **Keep Duplicates**: Normally, if a post contains multiple files with the same name, only the first is downloaded. Checking this option downloads all of them, renaming subsequent files with a numeric suffix (e.g., `image_1.jpg`).
- **🗄️ Custom Folder Name (Single Post Only)**: Specify a custom folder name for a single post's content (appears if subfolders are enabled).
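A minimal sketch of the WebP conversion, assuming Pillow and the 1.5 MB threshold mentioned later in this guide; the function name, quality setting, and the decision to delete the original are illustrative assumptions:

```python
import os
from PIL import Image  # the Pillow dependency noted above

SIZE_THRESHOLD = int(1.5 * 1024 * 1024)  # 1.5 MB cutoff per this guide

def compress_to_webp(path, quality=80):
    """Convert a large image to WebP; smaller files are left untouched."""
    if os.path.getsize(path) < SIZE_THRESHOLD:
        return path
    target = os.path.splitext(path)[0] + ".webp"
    with Image.open(path) as img:
        img.save(target, "WEBP", quality=quality)
    os.remove(path)  # assumption: the original is replaced after conversion
    return target
```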
<h2><strong>Check Box Buttons</strong></h2>
<p>These checkboxes provide additional toggles to refine the download behavior and enable special features.</p>
## 4. 📖 Manga/Comic Mode
A mode for downloading creator feeds in chronological order, ideal for sequential content.
<ul>
<li>
<strong>⭐ Favorite Mode:</strong> Changes workflow to download from your personal favorites. Disables the URL input.
<ul>
<li><strong>Favorite Artists:</strong> Opens a dialog to select from your favorited creators.</li>
<li><strong>Favorite Posts:</strong> Opens a dialog to select from your favorited posts on Kemono and Coomer.</li>
</ul>
</li>
<li>
<strong>Skip Archives:</strong> When checked, archive files (<code>.zip</code>, <code>.rar</code>) are ignored. Disabled in "Only Archives" mode.
</li>
<li>
<strong>Download Thumbnail Only:</strong> Saves only thumbnail previews, not full-resolution files. Enables "Scan Content for Images".
</li>
<li>
<strong>Scan Content for Images:</strong> Parses post HTML for embedded images not listed in the API. Looks for <code>&lt;img&gt;</code> tags and direct image links.
</li>
<li>
<strong>Compress to WebP:</strong> Converts large images (over 1.5 MB) to WebP format using the Pillow library for space-saving.
</li>
<li>
<strong>Keep Duplicates:</strong> Provides control over duplicate handling via the "Duplicate Handling Options" dialog.
<ul>
<li><strong>Skip by Hash:</strong> The default; skips identical files.</li>
<li><strong>Keep Everything:</strong> Save all files regardless of duplication.</li>
<li><strong>Limit:</strong> Set a limit on how many copies of the same file are saved. A limit of <code>0</code> means no limit.</li>
</ul>
</li>
</ul>
</div>
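The three policies in the "Duplicate Handling Options" dialog boil down to a per-file decision keyed on a content hash, plus the `_1`, `_2`, ... suffixing described for kept duplicates. A sketch assuming SHA-256 (the docs only say "content-based comparison"):

```python
import hashlib
import os

def keep_file(path, seen, mode="skip_by_hash", copy_limit=0):
    """Decide whether to keep a newly downloaded file.

    copy_limit=0 means "no limit", matching the dialog described above.
    SHA-256 is an assumption; the guide only specifies content-based comparison.
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    count = seen.get(digest, 0)
    seen[digest] = count + 1
    if mode == "keep_everything":
        return True
    if mode == "limit":
        return copy_limit == 0 or count < copy_limit
    return count == 0  # skip_by_hash: keep only the first identical file

def unique_name(folder, filename):
    """Append _1, _2, ... until the name is free (e.g. image_1.jpg)."""
    base, ext = os.path.splitext(filename)
    candidate, n = filename, 1
    while os.path.exists(os.path.join(folder, candidate)):
        candidate = f"{base}_{n}{ext}"
        n += 1
    return candidate
```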
<h2><strong>Folder Organization Checkboxes</strong></h2>
<ul>
<li>
<strong>Separate folders by Known.txt:</strong> Automatically organizes downloads into folders based on name matches.
<ul>
<li>Uses "Filter by Character(s)" input first, if available.</li>
<li>Then checks names in <code>Known.txt</code>.</li>
<li>Falls back to extracting from post title.</li>
</ul>
</li>
<li>
<strong>Subfolder per post:</strong> Creates a unique folder per post, using the post's title.
<ul>
<li>Prevents mixing files from multiple posts.</li>
<li>Can be combined with Known.txt-based folders.</li>
<li>Ensures uniqueness (e.g., <code>My Post Title_1</code>).</li>
<li>Automatically removes empty folders.</li>
</ul>
</li>
<li>
<strong>Date prefix:</strong> Enabled only with "Subfolder per post". Prepends the post date (e.g., <code>2025-08-03 My Post Title</code>) for chronological sorting.
</li>
</ul>
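Putting these checkboxes together, a per-post folder name might be built like this; a sketch only, with character cleaning deferred to a helper such as the `robust_clean_name` function shown in the `workers.py` hunk at the end of this diff:

```python
def post_folder_name(title, date_str=None, existing=()):
    """Build a per-post folder name with optional date prefix and _N suffixing.

    date_str (e.g. "2025-08-03") is used when "Date prefix" is enabled;
    removing folders that end up empty is left to the caller, as noted above.
    """
    name = f"{date_str} {title}" if date_str else title
    candidate, n = name, 1
    while candidate in existing:
        candidate = f"{name}_{n}"
        n += 1
    return candidate

print(post_folder_name("My Post Title", "2025-08-03",
                       existing={"2025-08-03 My Post Title"}))
# -> 2025-08-03 My Post Title_1
```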
- **Activation**: Active when downloading a creator's entire feed (not a single post).  
- **Core Behavior**: Fetches all posts, processing from oldest to newest.  
- **Filename Style Toggle Button (in the log area)**:  
  - **Purpose**: Controls file naming in Manga Mode. Cycles on click.  
  - **Options**:  
    - **Name: Post Title**: First file named after post title; others keep original names.  
    - **Name: Original File**: Files keep server-provided names, with optional prefix.  
    - **Name: Title+G.Num**: Global numbering with post title prefix (e.g., `Chapter 1_001.jpg`).  
    - **Name: Date Based**: Sequential naming by post date (e.g., `001.jpg`), with optional prefix.  
    - **Name: Post ID**: Files named after post ID to avoid clashes.  
    - **Name: Date + Title**: Combines post date and title for filenames.
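The six styles reduce to a small naming function. The style keys below are illustrative stand-ins for the application's internal constants (e.g. `STYLE_DATE_BASED` in the imports seen later in this diff):

```python
def manga_filename(style, title, date, post_id, original, num):
    """Illustrative mapping of the Manga Mode filename styles listed above."""
    ext = original.rsplit(".", 1)[-1]
    if style == "post_title":
        return f"{title}.{ext}"                 # first file; others keep names
    if style == "original":
        return original                          # server-provided name
    if style == "title_global_num":
        return f"{title}_{num:03d}.{ext}"        # e.g. Chapter 1_001.jpg
    if style == "date_based":
        return f"{num:03d}.{ext}"                # e.g. 001.jpg, date-ordered
    if style == "post_id":
        return f"{post_id}.{ext}"
    return f"{date}_{title}.{ext}"               # date + title

print(manga_filename("title_global_num", "Chapter 1", "2025-08-03",
                     "98765", "raw.jpg", 1))     # Chapter 1_001.jpg
```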
<h2><strong>General Functionality Checkboxes</strong></h2>
<ul>
<li>
<strong>Use cookie:</strong> Enables login-based access via cookies.
<ul>
<li>Paste a cookie string directly, or browse to select a <code>cookies.txt</code> file (see the cookie-loading sketch after this list).</li>
<li>Cookies are used in all authenticated API requests.</li>
</ul>
</li>
<li>
<strong>Use Multithreading:</strong> Enables parallel downloading of posts.
<ul>
<li>Specify the number of worker threads (e.g., 10).</li>
<li>Disabled for Manga Mode and Only Links mode.</li>
</ul>
</li>
<li>
<strong>Show external links in log:</strong> Adds a secondary log that displays links (e.g., Mega, Dropbox) found in post text.
</li>
<li>
<strong>Manga/Comic mode:</strong> Sorts posts chronologically before download.
<ul>
<li>Ensures correct page order for comics/manga.</li>
</ul>
<strong>Scope Button (Name: ...):</strong> Controls filename style:
<ul>
<li><strong>Name: Post Title</strong> — e.g., <code>Chapter-1.jpg</code></li>
<li><strong>Name: Date + Original</strong> — e.g., <code>2025-08-03_filename.png</code></li>
<li><strong>Name: Date + Title</strong> — e.g., <code>2025-08-03_Chapter-1.jpg</code></li>
<li><strong>Name: Title+G.Num</strong> — e.g., <code>Page_001.jpg</code></li>
<li><strong>Name: Date Based</strong> — e.g., <code>001.jpg</code>, with optional prefix</li>
<li><strong>Name: Post ID</strong> — uses unique post ID as filename</li>
</ul>
</li>
</ul>
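A minimal sketch of loading either cookie source for use with `requests`; the real application routes this through `prepare_cookies_for_request` (imported in the `api_client.py` hunk below), so the helper here is purely illustrative:

```python
import http.cookiejar

def load_cookies(cookie_string=None, cookies_txt_path=None):
    """Return cookies from a pasted string or a Netscape-format cookies.txt."""
    if cookies_txt_path:
        jar = http.cookiejar.MozillaCookieJar(cookies_txt_path)
        jar.load(ignore_discard=True, ignore_expires=True)
        return jar  # requests accepts a CookieJar directly
    if cookie_string:  # e.g. "session=abc; cf_clearance=xyz"
        return dict(part.strip().split("=", 1)
                    for part in cookie_string.split(";") if "=" in part)
    return None

# Usage: requests.get(url, cookies=load_cookies(cookie_string="session=abc"))
```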
<h2><strong>Start Download</strong></h2>
<ul>
<li>
<strong>Default State ("⬇️ Start Download"):</strong> When idle, this button gathers all current settings (URL, filters, checkboxes, etc.) and begins the download process via the DownloadManager.
</li>
<li>
<strong>Restore State:</strong> If an interrupted session is detected, the tooltip will indicate that starting a new download will discard previous session progress.
</li>
<li>
<strong>Update Mode (Phase 1 - "🔄 Check For Updates"):</strong> If a creator profile is loaded, clicking this button will fetch the creator's posts and compare them against your saved profile to identify new content.
</li>
<li>
<strong>Update Mode (Phase 2 - "⬇️ Start Download (X new)"):</strong> After new posts are found, the button text updates to reflect the number. Clicking it downloads only the new content.
</li>
</ul>
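Phase 1 of Update Mode amounts to a set difference between fetched post IDs and the saved profile. A sketch consistent with the `processed_post_ids` field shown in the `DownloadManager` hunk below:

```python
import json

def find_new_posts(profile_path, fetched_posts):
    """Return only posts whose IDs are not in the saved creator profile."""
    try:
        with open(profile_path, "r", encoding="utf-8") as f:
            seen = set(json.load(f).get("processed_post_ids", []))
    except (OSError, json.JSONDecodeError):
        seen = set()  # no profile yet: everything counts as new
    return [p for p in fetched_posts if p.get("id") not in seen]
```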
## 5. Folder Organization & Known.txt
Controls for structuring downloaded content.
<h2><strong>Pause / Resume Download</strong></h2>
<ul>
<li>
<strong>While Downloading:</strong> The button toggles between:
<ul>
<li><strong>"⏸️ Pause Download":</strong> Sets a <code>pause_event</code>, which tells all worker threads to halt their current task and wait.</li>
<li><strong>"▶️ Resume Download":</strong> Clears the <code>pause_event</code>, allowing threads to resume their work.</li>
</ul>
</li>
<li>
<strong>While Idle:</strong> The button is disabled.
</li>
<li>
<strong>Restore State:</strong> Changes to "🔄 Restore Download", which resumes the last session from saved data.
</li>
</ul>
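The two events compose into a simple gate that every worker checks between tasks; a sketch mirroring the `pause_event`/`cancellation_event` semantics described above:

```python
import threading
import time

pause_event = threading.Event()         # set by "⏸️ Pause Download"
cancellation_event = threading.Event()  # set by "❌ Cancel & Reset UI"

def worker_loop(tasks):
    """Run tasks until cancelled, idling while paused."""
    for task in tasks:
        while pause_event.is_set():
            if cancellation_event.is_set():
                return              # cancel wins even while paused
            time.sleep(0.5)
        if cancellation_event.is_set():
            return                  # finish gracefully; start no new tasks
        task()
```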
- **Separate Folders by Name/Title Checkbox**: Enables automatic subfolder creation.  
- **Subfolder per Post Checkbox**: Creates subfolders for each post, named after the post title.  
- **Date Prefix for Post Subfolders Checkbox**: When used with "Subfolder per Post," this option prefixes the folder name with the post's upload date (e.g., `2025-07-11 Post Title`), allowing for chronological sorting.
- **Known.txt Management UI (Bottom Left)**:  
  - **Purpose**: Manages a local `Known.txt` file for series, characters, or terms used in folder creation.  
  - **List Display**: Shows primary names from `Known.txt`.  
  - **Add Button**: Adds names or groups (e.g., `(Character A, Alias B)~`).  
  - **⤵️ Add to Filter Button**: Select names from `Known.txt` for the character filter.  
  - **🗑️ Delete Selected Button**: Removes selected names from `Known.txt`.  
  - **Open Known.txt Button**: Opens the file in the default text editor.  
  - **❓ Help Button**: Opens this feature guide.  
  - **📜 History Button**: Views recent download history.
<h2><strong>Cancel & Reset UI</strong></h2>
<ul>
<li>
<strong>Functionality:</strong> Stops downloads gracefully using a <code>cancellation_event</code>. Threads finish current tasks before shutting down.
</li>
<li>
<strong>The Soft Reset:</strong> After cancellation is confirmed by background threads, the UI resets via the <code>download_finished</code> function. Input fields (URL and Download Location) are preserved for convenience.
</li>
<li>
<strong>Restore State:</strong> Changes to "🗑️ Discard Session", which deletes <code>session.json</code> and resets the UI.
</li>
<li>
<strong>Update State:</strong> Changes to "🗑️ Clear Selection", unloading the selected creator profile and returning to normal UI state.
</li>
</ul>
## 6. ⭐ Favorite Mode (Kemono.su Only)
Download from favorited artists/posts on Kemono.su.
<h2><strong>Error Button</strong></h2>
<ul>
<li>
<strong>Error Counter:</strong> Shows how many files failed to download (e.g., <code>(3) Error</code>). Disabled if there are no errors.
</li>
<li>
<strong>Error Dialog:</strong> Clicking opens the "Files Skipped Due to Errors" dialog (defined in <code>ErrorFilesDialog.py</code>), listing all failed files.
</li>
<li>
<strong>Dialog Features:</strong>
<ul>
<li><strong>View Failed Files:</strong> Shows filenames and related post info.</li>
<li><strong>Select and Retry:</strong> Retry selected failed files in a focused download session.</li>
<li><strong>Export URLs:</strong> Save a <code>.txt</code> file of direct download links. Optionally include post metadata with each URL.</li>
</ul>
</li>
</ul>
<h2><strong>"Known Area" and its Controls</strong></h2>
<p>This section, located on the right side of the main window, manages your personal name database (<code>Known.txt</code>), which the app uses to organize downloads into subfolders.</p>
- **Enable Checkbox ("⭐ Favorite Mode")**:  
  - Switches to Favorite Mode.  
  - Disables the main URL input.  
  - Changes action buttons to "Favorite Artists" and "Favorite Posts".  
  - Requires cookies.  
- **🖼️ Favorite Artists Button**: Select and download from favorited artists.  
- **📄 Favorite Posts Button**: Select and download specific favorited posts.  
- **Favorite Download Scope Button**:  
  - **Scope: Selected Location**: Downloads favorites to the main directory.  
  - **Scope: Artist Folders**: Creates subfolders per artist.
<ul>
<li>
<strong>Open Known.txt:</strong> Opens the <code>Known.txt</code> file in your system's default text editor for manual editing, such as bulk changes or cleanup.
</li>
<li>
<strong>Search character input:</strong> A live search filter that hides any list items not matching your input text. Useful for quickly locating specific names in large lists.
</li>
<li>
<strong>Known Series/Characters Area:</strong> Displays all names currently stored in your <code>Known.txt</code>. These names are used when "Separate folders by Known.txt" is enabled.
</li>
<li>
<strong>Input at bottom & Add button:</strong> Type a new character or series name into the input field, then click "Add". The app checks for duplicates, updates the list, and saves to <code>Known.txt</code>.
</li>
<li>
<strong>Add to Filter:</strong> Opens a dialog showing all entries from <code>Known.txt</code> with checkboxes. You can select one or more to auto-fill the "Filter by Character(s)" field at the top of the app.
</li>
<li>
<strong>Delete Selected:</strong> Select one or more entries from the list and click "🗑️ Delete Selected" to remove them from the app and update <code>Known.txt</code> accordingly.
</li>
</ul>
## 7. Advanced Settings & Performance
- **🍪 Cookie Management**:  
  - **Use Cookie Checkbox**: Enables cookies for restricted content.  
  - **Cookie Text Field**: Paste cookie string.  
  - **Browse... Button**: Select a `cookies.txt` file (Netscape format).  
- **Use Multithreading Checkbox & Threads Input**:  
  - **Purpose**: Configures simultaneous operations.  
  - **Behavior**: Sets concurrent post processing (creator feeds) or file downloads (single posts).  
- **Multi-part Download Toggle Button**:  
  - **Purpose**: Enables/disables multi-segment downloading for large files.  
  - **Note**: Best for large files; less efficient for small files.
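Conceptually, multi-part downloading splits the file into byte ranges fetched in parallel. The repository's `multipart_downloader.py` presumably implements this; the sketch below is only an illustration, with the part count and size threshold as placeholder parameters:

```python
import threading
import requests

def multipart_download(url, dest, parts=4, min_size_mb=10):
    """Fetch byte ranges in parallel; fall back to a plain download when the
    file is small or the server does not advertise Range support."""
    head = requests.head(url, allow_redirects=True, timeout=15)
    size = int(head.headers.get("Content-Length", 0))
    if size < min_size_mb * 1024 * 1024 or head.headers.get("Accept-Ranges") != "bytes":
        with open(dest, "wb") as f:
            f.write(requests.get(url, timeout=60).content)
        return
    with open(dest, "wb") as f:
        f.truncate(size)            # pre-allocate so each part can seek-write
    chunk = size // parts
    lock = threading.Lock()

    def fetch(i):
        start = i * chunk
        end = size - 1 if i == parts - 1 else start + chunk - 1
        r = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, timeout=60)
        with lock, open(dest, "r+b") as out:
            out.seek(start)
            out.write(r.content)

    threads = [threading.Thread(target=fetch, args=(i,)) for i in range(parts)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```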
<h2><strong>Other Buttons</strong></h2>
<ul>
<li>
<strong>(?_?) mark button (Help Guide):</strong> Opens a multi-page help dialog with step-by-step instructions and explanations for all app features. Useful for new users.
</li>
<li>
<strong>History Button:</strong> Opens the Download History dialog (from <code>DownloadHistoryDialog.py</code>), showing:
<ul>
<li>Recently downloaded files</li>
<li>The first few posts processed in the last session</li>
</ul>
This allows for a quick review of recent activity.
</li>
<li>
<strong>Settings Button:</strong> Opens the Settings dialog (from <code>FutureSettingsDialog.py</code>), where you can change app-wide settings such as theme (light/dark) and language.
</li>
<li>
<strong>Support Button:</strong> Opens the Support dialog (from <code>SupportDialog.py</code>), which includes developer info, source links, and donation platforms like Ko-fi or Patreon.
</li>
</ul>
<h2><strong>Log Area Controls</strong></h2>
<p>These controls are located around the main log panel and offer tools for managing downloads, configuring advanced options, and resetting the application.</p>
## 8. Logging, Monitoring & Error Handling
- **📜 Progress Log Area**: Displays messages, progress, and errors.  
- **👁️ / 🙈 Log View Toggle Button**: Switches between Progress Log and Missed Character Log (skipped posts).  
- **Show External Links in Log**: Displays external links (e.g., Mega, Google Drive) in a secondary panel.  
- **Export Links Button**: Saves extracted links to a `.txt` file in "Only Links" mode.  
- **Download Extracted Links Button**: Downloads files from supported external links in "Only Links" mode.  
- **🆘 Error Button & Dialog**:  
  - **Purpose**: Active if files fail to download. The button will display a live count of failed files (e.g., **(3) Error**).  
  - **Dialog Features**:  
    - Lists failed files.  
    - Retry failed downloads.  
    - Export failed URLs to a text file.
<ul>
<li>
<strong>Multi-part: OFF</strong><br>
This button acts as both a status indicator and a configuration panel for multi-part downloading (parallel downloading of large files).
<ul>
<li><strong>Function:</strong> Opens the <code>Multipart Download Options</code> dialog (defined in <code>MultipartScopeDialog.py</code>).</li>
<li><strong>Scope Options:</strong> Choose between "Videos Only", "Archives Only", or "Both".</li>
<li><strong>Number of parts:</strong> Set how many simultaneous connections to use (2–16).</li>
<li><strong>Minimum file size:</strong> Set a threshold (MB) below which files are downloaded normally.</li>
<li><strong>Status:</strong> After applying settings, the button's text updates (e.g., <code>Multi-part: Both</code>); otherwise, it resets to <code>Multi-part: OFF</code>.</li>
</ul>
</li>
## 9. Application Settings (⚙️)
- **Appearance**: Switch between Light and Dark themes.  
- **Language**: Change UI language (restart required).
<li>
<strong>👁️ Eye Emoji Button (Log View Toggle)</strong><br>
Switches between two views in the log panel:
<ul>
<li><strong>👁️ Progress Log View:</strong> Shows real-time download progress, status messages, and errors.</li>
<li><strong>🚫 Missed Character View:</strong> Displays names detected in posts that didn't match the current filter — useful for updating <code>Known.txt</code>.</li>
</ul>
</li>
<li>
<strong>Reset Button</strong><br>
Performs a full "soft reset" of the UI when the application is idle.
<ul>
<li>Clears all inputs (except saved Download Location)</li>
<li>Resets checkboxes, buttons, and logs</li>
<li>Clears counters, queues, and restores the UI to its default state</li>
<li><strong>Note:</strong> This is different from <em>Cancel & Reset UI</em>, which halts active downloads</li>
</ul>
</li>
</ul>
<h3><strong>The Progress Log and "Only Links" Mode Controls</strong></h3>
<ul>
<li>
<strong>Standard Mode (Progress Log)</strong><br>
This is the default behavior. The <code>main_log_output</code> field displays:
<ul>
<li>Post processing steps</li>
<li>Download/skipped file notifications</li>
<li>Error messages</li>
<li>Session summaries</li>
</ul>
</li>
<li>
<strong>"Only Links" Mode</strong><br>
When enabled, the log panel switches modes and reveals new controls.
<ul>
<li><strong>📜 Extracted Links Log:</strong> Replaces progress info with a list of found external links (e.g., Mega, Dropbox).</li>
<li><strong>Export Links Button:</strong> Saves the extracted links to a <code>.txt</code> file.</li>
<li><strong>Download Button:</strong> Opens the <code>Download Selected External Links</code> dialog (from <code>DownloadExtractedLinksDialog.py</code>), where you can:
<ul>
<li>View all supported external links</li>
<li>Select which ones to download</li>
<li>Begin download directly from cloud services</li>
</ul>
</li>
<li><strong>Links View Button:</strong> Toggles log display between:
<ul>
<li><strong>🔗 Links View:</strong> Shows all extracted links</li>
<li><strong>⬇️ Progress View:</strong> Shows download progress from external services (e.g., Mega)</li>
</ul>
</li>
</ul>
</li>
</ul>

145
readme.md
View File

@@ -1,4 +1,4 @@
<h1 align="center">Kemono Downloader v6.0.0</h1>
<h1 align="center">Kemono Downloader </h1>
<div align="center">
@@ -41,108 +41,53 @@ Built with PyQt5, this tool is designed for users who want deep filtering capabi
</div>
<h2><strong>Core Capabilities Overview</strong></h2>
---
<h3><strong>High-Performance Downloading</strong></h3>
<ul>
<li><strong>Multi-threading:</strong> Processes multiple posts simultaneously to greatly accelerate downloads from large creator profiles.</li>
<li><strong>Multi-part Downloading:</strong> Splits large files into chunks and downloads them in parallel to maximize speed.</li>
<li><strong>Resilience:</strong> Supports pausing, resuming, and restoring downloads after crashes or interruptions.</li>
</ul>
## Feature Overview
<h3><strong>Advanced Filtering & Content Control</strong></h3>
<ul>
<li><strong>Content Type Filtering:</strong> Select whether to download all files or limit to images, videos, audio, or archives only.</li>
<li><strong>Keyword Skipping:</strong> Automatically skips posts or files containing certain keywords (e.g., "WIP", "sketch").</li>
<li><strong>Character Filtering:</strong> Restricts downloads to posts that match specific character or series names.</li>
</ul>
Kemono Downloader offers a range of features to streamline your content downloading experience:
<h3><strong>File Organization & Renaming</strong></h3>
<ul>
<li><strong>Automated Subfolders:</strong> Automatically organizes downloaded files into subdirectories based on character names or per post.</li>
<li><strong>Advanced File Renaming:</strong> Flexible renaming options, especially in Manga Mode, including:
<ul>
<li><strong>Post Title:</strong> Uses the post's title (e.g., <code>Chapter-One.jpg</code>).</li>
<li><strong>Date + Original Name:</strong> Prepends the publication date to the original filename.</li>
<li><strong>Date + Title:</strong> Combines the date with the post title.</li>
<li><strong>Sequential Numbering (Date Based):</strong> Simple sequence numbers (e.g., <code>001.jpg</code>, <code>002.jpg</code>).</li>
<li><strong>Title + Global Numbering:</strong> Uses post title with a globally incrementing number across the session.</li>
<li><strong>Post ID:</strong> Names files using the post's unique ID.</li>
</ul>
</li>
</ul>
- **User-Friendly Interface:** A modern PyQt5 GUI for easy navigation and operation.
<h3><strong>Specialized Modes</strong></h3>
<ul>
<li><strong>Manga/Comic Mode:</strong> Sorts posts chronologically before downloading to ensure pages appear in the correct sequence.</li>
<li><strong>Favorite Mode:</strong> Connects to your account and downloads from your favorites list (artists or posts).</li>
<li><strong>Link Extraction Mode:</strong> Extracts external links from posts for export or targeted downloading.</li>
<li><strong>Text Extraction Mode:</strong> Saves post descriptions or comment sections as <code>PDF</code>, <code>DOCX</code>, or <code>TXT</code> files.</li>
</ul>
- **Flexible Downloading:**
- Download content from Kemono.su (and mirrors) and Coomer.party (and mirrors).
- Supports creator pages (with page range selection) and individual post URLs.
- Standard download controls: Start, Pause, Resume, and Cancel.
- **Powerful Filtering:**
- **Character Filtering:** Filter content by character names. Supports simple comma-separated names and grouped names for shared folders.
- **Keyword Skipping:** Skip posts or files based on specified keywords.
- **Filename Cleaning:** Remove unwanted words or phrases from downloaded filenames.
- **File Type Selection:** Choose to download all files, or limit to images/GIFs, videos, audio, or archives. Can also extract external links only.
- **Customizable Downloads:**
- **Thumbnails Only:** Option to download only small preview images.
- **Content Scanning:** Scan post HTML for `<img>` tags and direct image links, useful for images embedded in descriptions.
- **WebP Conversion:** Convert images to WebP format for smaller file sizes (requires Pillow library).
- **Organized Output:**
- **Automatic Subfolders:** Create subfolders based on character names (from filters or `Known.txt`) or post titles.
- **Per-Post Subfolders:** Option to create an additional subfolder for each individual post.
- **Manga/Comic Mode:**
- Downloads posts from a creator's feed in chronological order (oldest to newest).
- Offers various filename styling options for sequential reading (e.g., post title, original name, global numbering).
- **⭐ Favorite Mode:**
- Directly download from your favorited artists and posts on Kemono.su.
- Requires a valid cookie and adapts the UI for easy selection from your favorites.
- Supports downloading into a single location or artist-specific subfolders.
- **Performance & Advanced Options:**
- **Cookie Support:** Use cookies (paste string or load from `cookies.txt`) to access restricted content.
- **Multithreading:** Configure the number of simultaneous downloads/post processing threads for improved speed.
- **Logging:**
- A detailed progress log displays download activity, errors, and summaries.
- **Multi-language Interface:** Choose from several languages for the UI (English, Japanese, French, Spanish, German, Russian, Korean, Chinese Simplified).
- **Theme Customization:** Selectable Light and Dark themes for user comfort.
---
## ✨ What's New in v6.0.0
This release focuses on providing more granular control over file organization and improving at-a-glance status monitoring.
### New Features
- **Live Error Count on Button**
The **"Error" button** now dynamically displays the number of failed files during a download. Instead of opening the dialog, you can quickly see a live count like `(3) Error`, helping you track issues at a glance.
- **Date Prefix for Post Subfolders**
A new checkbox labeled **"Date Prefix"** is now available in the advanced settings.
When enabled alongside **"Subfolder per Post"**, it prepends the post's upload date to the folder name (e.g., `2025-07-11 Post Title`).
This makes your downloads sortable and easier to browse chronologically.
- **Keep Duplicates Within a Post**
A **"Keep Duplicates"** option has been added to preserve all files from a post — even if some have the same name.
Instead of skipping or overwriting, the downloader will save duplicates with numbered suffixes (e.g., `image.jpg`, `image_1.jpg`, etc.), which is especially useful when the same file name points to different media.
### Bug Fixes
- The downloader now correctly renames large `.part` files when completed, avoiding leftover temp files.
- The list of failed files shown in the Error Dialog is now saved and restored with your session — so no errors get lost if you close the app.
- Your selected download location is remembered, even after pressing the **Reset** button.
- The **Cancel** button is now enabled when restoring a pending session, so you can abort stuck jobs more easily.
- Internal cleanup logs (like "Deleting post cache") are now excluded from the final download summary for clarity.
---
## 📅 Next Update Plans
### 🔖 Post Tag Filtering (Planned for v6.1.0)
A powerful new **"Filter by Post Tags"** feature is planned:
- Filter and download content based on specific post tags.
- Combine tag filtering with current filters (character, file type, etc.).
- Use tag presets to automate frequent downloads.
This will provide **much greater control** over what gets downloaded, especially for creators who use tags consistently.
### 📁 Creator Download History (.json Save)
To streamline incremental downloads, a new system will allow the app to:
- Save a `.json` file with metadata about already-downloaded posts.
- Compare that file on future runs, so only **new** posts are downloaded.
- Avoids duplication and makes regular syncs fast and efficient.
Ideal for users managing large collections or syncing favorites regularly.
---
<h3><strong>Utility & Advanced Features</strong></h3>
<ul>
<li><strong>Cookie Support:</strong> Enables access to subscriber-only content via browser session cookies.</li>
<li><strong>Duplicate Detection:</strong> Prevents saving duplicate files using content-based comparison, with configurable limits.</li>
<li><strong>Image Compression:</strong> Automatically converts large images to <code>.webp</code> to reduce disk usage.</li>
<li><strong>Creator Management:</strong> Built-in creator browser and update checker for downloading only new posts from saved profiles.</li>
<li><strong>Error Handling:</strong> Tracks failed downloads and provides a retry dialog with options to export or redownload missing files.</li>
</ul>
## 💻 Installation
@@ -154,7 +99,7 @@ Ideal for users managing large collections or syncing favorites regularly.
### Install Dependencies
```bash
pip install PyQt5 requests Pillow mega.py
pip install PyQt5 requests Pillow mega.py fpdf python-docx
```
### Running the Application
@@ -197,7 +142,7 @@ Feel free to fork this repo and submit pull requests for bug fixes, new features
## License
This project is under the Custom License
This project is under the MIT License
## Star History

View File

@@ -59,6 +59,8 @@ LANGUAGE_KEY = "currentLanguageV1"
DOWNLOAD_LOCATION_KEY = "downloadLocationV1"
RESOLUTION_KEY = "window_resolution"
UI_SCALE_KEY = "ui_scale_factor"
SAVE_CREATOR_JSON_KEY = "saveCreatorJsonProfile"
FETCH_FIRST_KEY = "fetchAllPostsFirst"
# --- UI Constants and Identifiers ---
HTML_PREFIX = "<!HTML!>"
@@ -96,7 +98,7 @@ FOLDER_NAME_STOP_WORDS = {
"for", "he", "her", "his", "i", "im", "in", "is", "it", "its",
"me", "my", "net", "not", "of", "on", "or", "org", "our",
"s", "she", "so", "the", "their", "they", "this",
"to", "ve", "was", "we", "were", "with", "www", "you", "your",
"to", "ve", "was", "we", "were", "with", "www", "you", "your", "nsfw", "sfw",
# add more as needed
}
@@ -110,7 +112,9 @@ CREATOR_DOWNLOAD_DEFAULT_FOLDER_IGNORE_WORDS = {
"may", "jun", "june", "jul", "july", "aug", "august", "sep", "september",
"oct", "october", "nov", "november", "dec", "december",
"mon", "monday", "tue", "tuesday", "wed", "wednesday", "thu", "thursday",
"fri", "friday", "sat", "saturday", "sun", "sunday"
"fri", "friday", "sat", "saturday", "sun", "sunday", "Pack", "tier", "spoiler",
# add more according to need
}

View File

@@ -1,7 +1,7 @@
import time
import traceback
from urllib.parse import urlparse
import json # Ensure json is imported
import json
import requests
from ..utils.network_utils import extract_post_info, prepare_cookies_for_request
from ..config.constants import (
@@ -120,7 +120,8 @@ def download_from_api(
selected_cookie_file=None,
app_base_dir=None,
manga_filename_style_for_sort_check=None,
processed_post_ids=None # --- ADD THIS ARGUMENT ---
processed_post_ids=None,
fetch_all_first=False
):
headers = {
'User-Agent': 'Mozilla/5.0',
@@ -139,9 +140,14 @@ def download_from_api(
parsed_input_url_for_domain = urlparse(api_url_input)
api_domain = parsed_input_url_for_domain.netloc
if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
# --- START: MODIFIED LOGIC ---
# This list is updated to include the new .cr and .st mirrors for validation.
if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
logger(f"⚠️ Unrecognized domain '{api_domain}' from input URL. Defaulting to kemono.su for API calls.")
api_domain = "kemono.su"
# --- END: MODIFIED LOGIC ---
cookies_for_api = None
if use_cookie and app_base_dir:
cookies_for_api = prepare_cookies_for_request(use_cookie, cookie_text, selected_cookie_file, app_base_dir, logger, target_domain=api_domain)
@@ -178,6 +184,7 @@ def download_from_api(
logger("⚠️ Page range (start/end page) is ignored when a specific post URL is provided (searching all pages for the post).")
is_manga_mode_fetch_all_and_sort_oldest_first = manga_mode and (manga_filename_style_for_sort_check != STYLE_DATE_POST_TITLE) and not target_post_id
should_fetch_all = fetch_all_first or is_manga_mode_fetch_all_and_sort_oldest_first
api_base_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}"
page_size = 50
if is_manga_mode_fetch_all_and_sort_oldest_first:
@@ -220,6 +227,9 @@ def download_from_api(
logger(f" Manga Mode: No posts found within the specified page range ({start_page or 1}-{end_page}).")
break
all_posts_for_manga_mode.extend(posts_batch_manga)
logger(f"MANGA_FETCH_PROGRESS:{len(all_posts_for_manga_mode)}:{current_page_num_manga}")
current_offset_manga += page_size
time.sleep(0.6)
except RuntimeError as e:
@@ -232,7 +242,12 @@ def download_from_api(
logger(f"❌ Unexpected error during manga mode fetch: {e}")
traceback.print_exc()
break
if cancellation_event and cancellation_event.is_set(): return
if all_posts_for_manga_mode:
logger(f"MANGA_FETCH_COMPLETE:{len(all_posts_for_manga_mode)}")
if all_posts_for_manga_mode:
if processed_post_ids:
original_count = len(all_posts_for_manga_mode)

View File

@@ -0,0 +1,80 @@
import time
import requests
import json
from urllib.parse import urlparse
def fetch_server_channels(server_id, logger, cookies=None, cancellation_event=None, pause_event=None):
"""
Fetches the list of channels for a given Discord server ID from the Kemono API.
UPDATED to be pausable and cancellable.
"""
domains_to_try = ["kemono.cr", "kemono.su"]
for domain in domains_to_try:
if cancellation_event and cancellation_event.is_set():
logger(" Channel fetching cancelled by user.")
return None
while pause_event and pause_event.is_set():
if cancellation_event and cancellation_event.is_set(): break
time.sleep(0.5)
lookup_url = f"https://{domain}/api/v1/discord/channel/lookup/{server_id}"
logger(f" Attempting to fetch channel list from: {lookup_url}")
try:
response = requests.get(lookup_url, cookies=cookies, timeout=15)
response.raise_for_status()
channels = response.json()
if isinstance(channels, list):
logger(f" ✅ Found {len(channels)} channels for server {server_id}.")
return channels
except (requests.exceptions.RequestException, json.JSONDecodeError):
# This is a silent failure, we'll just try the next domain
pass
logger(f" ❌ Failed to fetch channel list for server {server_id} from all available domains.")
return None
def fetch_channel_messages(channel_id, logger, cancellation_event, pause_event, cookies=None):
"""
Fetches all messages from a Discord channel by looping through API pages (pagination).
Uses a page size of 150 and handles the specific offset logic.
"""
offset = 0
page_size = 150  # Page size matching observed API behavior
api_base_url = f"https://kemono.cr/api/v1/discord/channel/{channel_id}"
while not (cancellation_event and cancellation_event.is_set()):
if pause_event and pause_event.is_set():
logger(" Message fetching paused...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set(): break
time.sleep(0.5)
logger(" Message fetching resumed.")
if cancellation_event and cancellation_event.is_set():
break
paginated_url = f"{api_base_url}?o={offset}"
logger(f" Fetching messages from API: page starting at offset {offset}")
try:
response = requests.get(paginated_url, cookies=cookies, timeout=20)
response.raise_for_status()
messages_batch = response.json()
if not messages_batch:
logger(f" ✅ Reached end of messages for channel {channel_id}.")
break
logger(f" Fetched {len(messages_batch)} messages...")
yield messages_batch
if len(messages_batch) < page_size:
logger(f" ✅ Last page of messages received for channel {channel_id}.")
break
offset += page_size
time.sleep(0.5)
except (requests.exceptions.RequestException, json.JSONDecodeError) as e:
logger(f" ❌ Error fetching messages at offset {offset}: {e}")
break
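Since `fetch_channel_messages` is a generator that yields message batches, a caller would consume it roughly like this (hypothetical usage; the channel ID is a placeholder, and message fields are treated as opaque since they are not shown in this diff):

```python
import threading

cancel = threading.Event()
pause = threading.Event()

total = 0
for batch in fetch_channel_messages("123456789", print, cancel, pause):
    total += len(batch)  # message dicts are opaque here; just count them
print(f"Fetched {total} messages.")
```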

View File

@@ -5,11 +5,10 @@ import json
import traceback
from concurrent.futures import ThreadPoolExecutor, as_completed, Future
from .api_client import download_from_api
from .workers import PostProcessorWorker, DownloadThread
from .workers import PostProcessorWorker
from ..config.constants import (
STYLE_DATE_BASED, STYLE_POST_TITLE_GLOBAL_NUMBERING,
MAX_THREADS, POST_WORKER_BATCH_THRESHOLD, POST_WORKER_NUM_BATCHES,
POST_WORKER_BATCH_DELAY_SECONDS
MAX_THREADS
)
from ..utils.file_utils import clean_folder_name
@@ -41,6 +40,10 @@ class DownloadManager:
self.total_downloads = 0
self.total_skips = 0
self.all_kept_original_filenames = []
self.creator_profiles_dir = None
self.current_creator_name_for_profile = None
self.current_creator_profile_path = None
self.session_file_path = None
def _log(self, message):
"""Puts a progress message into the queue for the UI."""
@@ -58,6 +61,17 @@ class DownloadManager:
if self.is_running:
self._log("❌ Cannot start a new session: A session is already in progress.")
return
self.session_file_path = config.get('session_file_path')
creator_profile_data = self._setup_creator_profile(config)
# Save settings to profile at the start of the session
if self.current_creator_profile_path:
creator_profile_data['settings'] = config
creator_profile_data.setdefault('processed_post_ids', [])
self._save_creator_profile(creator_profile_data)
self._log(f"✅ Loaded/created profile for '{self.current_creator_name_for_profile}'. Settings saved.")
self.is_running = True
self.cancellation_event.clear()
self.pause_event.clear()
@@ -67,76 +81,109 @@ class DownloadManager:
self.total_downloads = 0
self.total_skips = 0
self.all_kept_original_filenames = []
is_single_post = bool(config.get('target_post_id_from_initial_url'))
use_multithreading = config.get('use_multithreading', True)
is_manga_sequential = config.get('manga_mode_active') and config.get('manga_filename_style') in [STYLE_DATE_BASED, STYLE_POST_TITLE_GLOBAL_NUMBERING]
should_use_multithreading_for_posts = use_multithreading and not is_single_post and not is_manga_sequential
if should_use_multithreading_for_posts:
fetcher_thread = threading.Thread(
target=self._fetch_and_queue_posts_for_pool,
args=(config, restore_data),
args=(config, restore_data, creator_profile_data),
daemon=True
)
fetcher_thread.start()
else:
self._start_single_threaded_session(config)
# Single-threaded mode does not use the manager's complex logic
self._log(" Manager is handing off to a single-threaded worker...")
# The single-threaded worker will manage its own lifecycle and signals.
# The manager's role for this session is effectively over.
self.is_running = False # Allow another session to start if needed
self.progress_queue.put({'type': 'handoff_to_single_thread', 'payload': (config,)})
def _start_single_threaded_session(self, config):
"""Handles downloads that are best processed by a single worker thread."""
self._log(" Initializing single-threaded download process...")
self.worker_thread = threading.Thread(
target=self._run_single_worker,
args=(config,),
daemon=True
)
self.worker_thread.start()
def _run_single_worker(self, config):
"""Target function for the single-worker thread."""
try:
worker = DownloadThread(config, self.progress_queue)
worker.run() # This is the main blocking call for this thread
except Exception as e:
self._log(f"❌ CRITICAL ERROR in single-worker thread: {e}")
self._log(traceback.format_exc())
finally:
self.is_running = False
def _fetch_and_queue_posts_for_pool(self, config, restore_data):
def _fetch_and_queue_posts_for_pool(self, config, restore_data, creator_profile_data):
"""
Fetches all posts from the API and submits them as tasks to a thread pool.
This method runs in its own dedicated thread to avoid blocking.
Fetches posts from the API in batches and submits them as tasks to a thread pool.
This method runs in its own dedicated thread to avoid blocking the UI.
It provides immediate feedback as soon as the first batch of posts is found.
"""
try:
num_workers = min(config.get('num_threads', 4), MAX_THREADS)
self.thread_pool = ThreadPoolExecutor(max_workers=num_workers, thread_name_prefix='PostWorker_')
if restore_data:
session_processed_ids = set(restore_data.get('processed_post_ids', [])) if restore_data else set()
profile_processed_ids = set(creator_profile_data.get('processed_post_ids', []))
processed_ids = session_processed_ids.union(profile_processed_ids)
if restore_data and 'all_posts_data' in restore_data:
# This logic for session restore remains as it relies on a pre-fetched list
all_posts = restore_data['all_posts_data']
processed_ids = set(restore_data['processed_post_ids'])
posts_to_process = [p for p in all_posts if p.get('id') not in processed_ids]
self.total_posts = len(all_posts)
self.processed_posts = len(processed_ids)
self._log(f"🔄 Restoring session. {len(posts_to_process)} posts remaining.")
self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
if not posts_to_process:
self._log("✅ No new posts to process from restored session.")
return
for post_data in posts_to_process:
if self.cancellation_event.is_set(): break
worker = PostProcessorWorker(post_data, config, self.progress_queue)
future = self.thread_pool.submit(worker.process)
future.add_done_callback(self._handle_future_result)
self.active_futures.append(future)
else:
posts_to_process = self._get_all_posts(config)
self.total_posts = len(posts_to_process)
# --- START: REFACTORED STREAMING LOGIC ---
post_generator = download_from_api(
api_url_input=config['api_url'],
logger=self._log,
start_page=config.get('start_page'),
end_page=config.get('end_page'),
manga_mode=config.get('manga_mode_active', False),
cancellation_event=self.cancellation_event,
pause_event=self.pause_event,
use_cookie=config.get('use_cookie', False),
cookie_text=config.get('cookie_text', ''),
selected_cookie_file=config.get('selected_cookie_file'),
app_base_dir=config.get('app_base_dir'),
manga_filename_style_for_sort_check=config.get('manga_filename_style'),
processed_post_ids=list(processed_ids)
)
self.total_posts = 0
self.processed_posts = 0
self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
if not posts_to_process:
self._log("✅ No new posts to process.")
return
for post_data in posts_to_process:
if self.cancellation_event.is_set():
break
worker = PostProcessorWorker(post_data, config, self.progress_queue)
future = self.thread_pool.submit(worker.process)
future.add_done_callback(self._handle_future_result)
self.active_futures.append(future)
# Process posts in batches as they are yielded by the API client
for batch in post_generator:
if self.cancellation_event.is_set():
self._log(" Post fetching cancelled.")
break
# Filter out any posts that might have been processed since the start
posts_in_batch_to_process = [p for p in batch if p.get('id') not in processed_ids]
if not posts_in_batch_to_process:
continue
# Update total count and immediately inform the UI
self.total_posts += len(posts_in_batch_to_process)
self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
for post_data in posts_in_batch_to_process:
if self.cancellation_event.is_set(): break
worker = PostProcessorWorker(post_data, config, self.progress_queue)
future = self.thread_pool.submit(worker.process)
future.add_done_callback(self._handle_future_result)
self.active_futures.append(future)
if self.total_posts == 0 and not self.cancellation_event.is_set():
self._log("✅ No new posts found to process.")
except Exception as e:
self._log(f"❌ CRITICAL ERROR in post fetcher thread: {e}")
self._log(traceback.format_exc())
@@ -144,33 +191,11 @@ class DownloadManager:
if self.thread_pool:
self.thread_pool.shutdown(wait=True)
self.is_running = False
self._log("🏁 All processing tasks have completed.")
self._log("🏁 All processing tasks have completed or been cancelled.")
self.progress_queue.put({
'type': 'finished',
'payload': (self.total_downloads, self.total_skips, self.cancellation_event.is_set(), self.all_kept_original_filenames)
})
def _get_all_posts(self, config):
"""Helper to fetch all posts using the API client."""
all_posts = []
post_generator = download_from_api(
api_url_input=config['api_url'],
logger=self._log,
start_page=config.get('start_page'),
end_page=config.get('end_page'),
manga_mode=config.get('manga_mode_active', False),
cancellation_event=self.cancellation_event,
pause_event=self.pause_event,
use_cookie=config.get('use_cookie', False),
cookie_text=config.get('cookie_text', ''),
selected_cookie_file=config.get('selected_cookie_file'),
app_base_dir=config.get('app_base_dir'),
manga_filename_style_for_sort_check=config.get('manga_filename_style'),
processed_post_ids=config.get('processed_post_ids', [])
)
for batch in post_generator:
all_posts.extend(batch)
return all_posts
def _handle_future_result(self, future: Future):
"""Callback executed when a worker task completes."""
@@ -196,19 +221,65 @@ class DownloadManager:
self.progress_queue.put({'type': 'permanent_failure', 'payload': (permanent,)})
if history:
self.progress_queue.put({'type': 'post_processed_history', 'payload': (history,)})
post_id = history.get('post_id')
if post_id and self.current_creator_profile_path:
profile_data = self._setup_creator_profile({'creator_name_for_profile': self.current_creator_name_for_profile, 'session_file_path': self.session_file_path})
if post_id not in profile_data.get('processed_post_ids', []):
profile_data.setdefault('processed_post_ids', []).append(post_id)
self._save_creator_profile(profile_data)
except Exception as e:
self._log(f"❌ Worker task resulted in an exception: {e}")
self.total_skips += 1 # Count errored posts as skipped
self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
def _setup_creator_profile(self, config):
"""Prepares the path and loads data for the current creator's profile."""
self.current_creator_name_for_profile = config.get('creator_name_for_profile')
if not self.current_creator_name_for_profile:
self._log("⚠️ Cannot create creator profile: Name not provided in config.")
return {}
appdata_dir = os.path.dirname(config.get('session_file_path', '.'))
self.creator_profiles_dir = os.path.join(appdata_dir, "creator_profiles")
os.makedirs(self.creator_profiles_dir, exist_ok=True)
safe_filename = clean_folder_name(self.current_creator_name_for_profile) + ".json"
self.current_creator_profile_path = os.path.join(self.creator_profiles_dir, safe_filename)
if os.path.exists(self.current_creator_profile_path):
try:
with open(self.current_creator_profile_path, 'r', encoding='utf-8') as f:
return json.load(f)
except (json.JSONDecodeError, OSError) as e:
self._log(f"❌ Error loading creator profile '{safe_filename}': {e}. Starting fresh.")
return {}
def _save_creator_profile(self, data):
"""Saves the provided data to the current creator's profile file."""
if not self.current_creator_profile_path:
return
try:
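            # Write to a temp file first and atomically swap it in via os.replace,
            # so a crash mid-write cannot corrupt the existing profile JSON.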
temp_path = self.current_creator_profile_path + ".tmp"
with open(temp_path, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2)
os.replace(temp_path, self.current_creator_profile_path)
except OSError as e:
self._log(f"❌ Error saving creator profile to '{self.current_creator_profile_path}': {e}")
def cancel_session(self):
"""Cancels the current running session."""
if not self.is_running:
return
if self.cancellation_event.is_set():
self._log(" Cancellation already in progress.")
return
self._log("⚠️ Cancellation requested by user...")
self.cancellation_event.set()
        if self.thread_pool:
            self._log("   Signaling all worker threads to stop and shutting down pool...")
            self.thread_pool.shutdown(wait=False, cancel_futures=True)
        self.is_running = False


@@ -1,4 +1,5 @@
import os
import sys
import queue
import re
import threading
@@ -53,6 +54,24 @@ from ..utils.text_utils import (
)
from ..config.constants import *
def robust_clean_name(name):
"""A more robust function to remove illegal characters for filenames and folders."""
if not name:
return ""
# Removes illegal characters for Windows, macOS, and Linux: < > : " / \ | ? *
# Also removes control characters (ASCII 0-31) which are invisible but invalid.
illegal_chars_pattern = r'[\x00-\x1f<>:"/\\|?*]'
cleaned_name = re.sub(illegal_chars_pattern, '', name)
# Remove leading/trailing spaces or periods, which can cause issues.
cleaned_name = cleaned_name.strip(' .')
# If the name is empty after cleaning (e.g., it was only illegal chars),
# provide a safe fallback name.
if not cleaned_name:
return "untitled_folder" # Or "untitled_file" depending on context
return cleaned_name
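A few illustrative inputs and outputs for this cleaner (assuming the function exactly as defined above):
robust_clean_name('My<Post>: "Part 1"?')   # -> 'MyPost Part 1'
robust_clean_name(' report.pdf ')          # -> 'report.pdf'
robust_clean_name('a\x00b\x1fc')           # -> 'abc'   (control characters stripped)
robust_clean_name('...')                   # -> 'untitled_folder' (fallback)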
class PostProcessorSignals(QObject):
    progress_signal = pyqtSignal(str)
    file_download_status_signal = pyqtSignal(bool)
@@ -63,7 +82,6 @@ class PostProcessorSignals (QObject ):
worker_finished_signal = pyqtSignal(tuple)
class PostProcessorWorker:
def __init__(self, post_data, download_root, known_names,
filter_character_list, emitter,
unwanted_keywords, filter_mode, skip_zip,
@@ -103,7 +121,10 @@ class PostProcessorWorker:
text_export_format='txt',
single_pdf_mode=False,
project_root_dir=None,
processed_post_ids=None
processed_post_ids=None,
multipart_scope='both',
multipart_parts_count=4,
multipart_min_size_mb=100
):
self.post = post_data
self.download_root = download_root
@@ -165,7 +186,9 @@ class PostProcessorWorker:
self.single_pdf_mode = single_pdf_mode
self.project_root_dir = project_root_dir
self.processed_post_ids = processed_post_ids if processed_post_ids is not None else []
self.multipart_scope = multipart_scope
self.multipart_parts_count = multipart_parts_count
self.multipart_min_size_mb = multipart_min_size_mb
if self.compress_images and Image is None:
self.logger("⚠️ Image compression disabled: Pillow library not found.")
self.compress_images = False
@@ -199,8 +222,38 @@ class PostProcessorWorker:
        if self.dynamic_filter_holder:
            return self.dynamic_filter_holder.get_filters()
        return self.filter_character_list_objects_initial
def _download_single_file(self, file_info, target_folder_path, headers, original_post_id_for_log, skip_event,
def _find_valid_subdomain(self, url: str, max_subdomains: int = 4) -> str:
"""
Attempts to find a working subdomain for a Kemono/Coomer URL that returned a 403 error.
Returns the original URL if no other valid subdomain is found.
"""
        self.logger("   Probing for a valid subdomain...")
parsed_url = urlparse(url)
original_domain = parsed_url.netloc
for i in range(1, max_subdomains + 1):
domain_parts = original_domain.split('.')
if len(domain_parts) > 1:
base_domain = ".".join(domain_parts[-2:])
new_domain = f"n{i}.{base_domain}"
else:
continue
new_url = parsed_url._replace(netloc=new_domain).geturl()
try:
with requests.head(new_url, headers={'User-Agent': 'Mozilla/5.0'}, timeout=5, allow_redirects=True) as resp:
if resp.status_code == 200:
self.logger(f" ✅ Valid subdomain found: {new_domain}")
return new_url
except requests.RequestException:
continue
        self.logger("   ⚠️ No other valid subdomain found. Sticking with the original.")
return url
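The netloc swap above leans on the fact that urlparse returns an immutable namedtuple whose _replace produces a modified copy. The idiom in isolation (hostnames are hypothetical):
from urllib.parse import urlparse

url = "https://c3.kemono.cr/data/ab/cd/file.mp4"   # hypothetical file host
parsed = urlparse(url)
base = ".".join(parsed.netloc.split(".")[-2:])      # "kemono.cr"
candidates = [parsed._replace(netloc=f"n{i}.{base}").geturl() for i in range(1, 5)]
# candidates[0] == "https://n1.kemono.cr/data/ab/cd/file.mp4"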
def _download_single_file(self, file_info, target_folder_path, post_page_url, original_post_id_for_log, skip_event,
post_title="", file_index_in_post=0, num_files_in_this_post=1,
manga_date_file_counter_ref=None,
forced_filename_override=None,
@@ -214,6 +267,11 @@ class PostProcessorWorker:
if self.check_cancel() or (skip_event and skip_event.is_set()):
return 0, 1, "", False, FILE_DOWNLOAD_STATUS_SKIPPED, None
file_download_headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36',
'Referer': post_page_url
}
file_url = file_info.get('url')
cookies_to_use_for_file = None
if self.use_cookie:
@@ -232,23 +290,28 @@ class PostProcessorWorker:
self.logger(f" -> Skip File (Keyword in Original Name '{skip_word}'): '{api_original_filename}'. Scope: {self.skip_words_scope}")
return 0, 1, api_original_filename, False, FILE_DOWNLOAD_STATUS_SKIPPED, None
cleaned_original_api_filename = clean_filename(api_original_filename)
cleaned_original_api_filename = robust_clean_name(api_original_filename)
original_filename_cleaned_base, original_ext = os.path.splitext(cleaned_original_api_filename)
if not original_ext.startswith('.'): original_ext = '.' + original_ext if original_ext else ''
if self.manga_mode_active:
if self.manga_filename_style == STYLE_ORIGINAL_NAME:
filename_to_save_in_main_path = cleaned_original_api_filename
if self.manga_date_prefix and self.manga_date_prefix.strip():
cleaned_prefix = clean_filename(self.manga_date_prefix.strip())
if cleaned_prefix:
filename_to_save_in_main_path = f"{cleaned_prefix} {filename_to_save_in_main_path}"
else:
self.logger(f"⚠️ Manga Original Name Mode: Provided prefix '{self.manga_date_prefix}' was empty after cleaning. Using original name only.")
published_date_str = self.post.get('published')
added_date_str = self.post.get('added')
formatted_date_str = "nodate"
date_to_use_str = published_date_str or added_date_str
if date_to_use_str:
try:
formatted_date_str = date_to_use_str.split('T')[0]
except Exception:
self.logger(f" ⚠️ Could not parse date '{date_to_use_str}'. Using 'nodate' prefix.")
else:
self.logger(f" ⚠️ Post ID {original_post_id_for_log} has no date. Using 'nodate' prefix.")
filename_to_save_in_main_path = f"{formatted_date_str}_{cleaned_original_api_filename}"
was_original_name_kept_flag = True
elif self.manga_filename_style == STYLE_POST_TITLE:
if post_title and post_title.strip():
cleaned_post_title_base = clean_filename(post_title.strip())
cleaned_post_title_base = robust_clean_name(post_title.strip())
if num_files_in_this_post > 1:
if file_index_in_post == 0:
filename_to_save_in_main_path = f"{cleaned_post_title_base}{original_ext}"
@@ -269,7 +332,7 @@ class PostProcessorWorker:
manga_date_file_counter_ref[0] += 1
base_numbered_name = f"{counter_val_for_filename:03d}"
if self.manga_date_prefix and self.manga_date_prefix.strip():
cleaned_prefix = clean_filename(self.manga_date_prefix.strip())
cleaned_prefix = robust_clean_name(self.manga_date_prefix.strip())
if cleaned_prefix:
filename_to_save_in_main_path = f"{cleaned_prefix} {base_numbered_name}{original_ext}"
else:
@@ -286,7 +349,7 @@ class PostProcessorWorker:
with counter_lock:
counter_val_for_filename = manga_global_file_counter_ref[0]
manga_global_file_counter_ref[0] += 1
cleaned_post_title_base_for_global = clean_filename(post_title.strip() if post_title and post_title.strip() else "post")
cleaned_post_title_base_for_global = robust_clean_name(post_title.strip() if post_title and post_title.strip() else "post")
filename_to_save_in_main_path = f"{cleaned_post_title_base_for_global}_{counter_val_for_filename:03d}{original_ext}"
else:
self.logger(f"⚠️ Manga Title+GlobalNum Mode: Counter ref not provided or malformed for '{api_original_filename}'. Using original. Ref: {manga_global_file_counter_ref}")
@@ -318,8 +381,8 @@ class PostProcessorWorker:
self.logger(f" ⚠️ Post ID {original_post_id_for_log} missing both 'published' and 'added' dates for STYLE_DATE_POST_TITLE. Using 'nodate'.")
if post_title and post_title.strip():
temp_cleaned_title = clean_filename(post_title.strip())
if not temp_cleaned_title or temp_cleaned_title.startswith("untitled_file"):
temp_cleaned_title = robust_clean_name(post_title.strip())
if not temp_cleaned_title or temp_cleaned_title.startswith("untitled_folder"):
self.logger(f"⚠️ Manga mode (Date+PostTitle Style): Post title for post {original_post_id_for_log} ('{post_title}') was empty or generic after cleaning. Using 'post' as title part.")
cleaned_post_title_for_filename = "post"
else:
@@ -398,6 +461,33 @@ class PostProcessorWorker:
unique_id_for_part_file = uuid.uuid4().hex[:8]
unique_part_file_stem_on_disk = f"{temp_file_base_for_unique_part}_{unique_id_for_part_file}"
max_retries = 3
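                    # Resume support (assumed intent): if the final file is already on disk,
                    # verify its size against a HEAD request and fold its hash into the
                    # session's duplicate tracking before skipping the download.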
if not self.keep_in_post_duplicates:
final_save_path_check = os.path.join(target_folder_path, filename_to_save_in_main_path)
if os.path.exists(final_save_path_check):
try:
with requests.head(file_url, headers=file_download_headers, timeout=15, cookies=cookies_to_use_for_file, allow_redirects=True) as head_response:
head_response.raise_for_status()
expected_size = int(head_response.headers.get('Content-Length', -1))
actual_size = os.path.getsize(final_save_path_check)
if expected_size != -1 and actual_size == expected_size:
self.logger(f" -> Skip (File Exists & Complete): '{filename_to_save_in_main_path}' is already on disk with the correct size.")
try:
md5_hasher = hashlib.md5()
with open(final_save_path_check, 'rb') as f_verify:
for chunk in iter(lambda: f_verify.read(8192), b""):
md5_hasher.update(chunk)
with self.downloaded_hash_counts_lock:
self.downloaded_hash_counts[md5_hasher.hexdigest()] += 1
except Exception as hash_exc:
self.logger(f" ⚠️ Could not hash existing file '{filename_to_save_in_main_path}' for session: {hash_exc}")
return 0, 1, filename_to_save_in_main_path, was_original_name_kept_flag, FILE_DOWNLOAD_STATUS_SKIPPED, None
else:
self.logger(f" ⚠️ File '{filename_to_save_in_main_path}' exists but is incomplete (Expected: {expected_size}, Actual: {actual_size}). Re-downloading.")
except requests.RequestException as e:
self.logger(f" ⚠️ Could not verify size of existing file '{filename_to_save_in_main_path}': {e}. Proceeding with download.")
retry_delay = 5
downloaded_size_bytes = 0
calculated_file_hash = None
@@ -417,14 +507,44 @@ class PostProcessorWorker:
if attempt_num_single_stream > 0:
self.logger(f" Retrying download for '{api_original_filename}' (Overall Attempt {attempt_num_single_stream + 1}/{max_retries + 1})...")
time.sleep(retry_delay * (2 ** (attempt_num_single_stream - 1)))
self._emit_signal('file_download_status', True)
response = requests.get(file_url, headers=headers, timeout=(15, 300), stream=True, cookies=cookies_to_use_for_file)
current_url_to_try = file_url
response = requests.get(current_url_to_try, headers=file_download_headers, timeout=(30, 300), stream=True, cookies=cookies_to_use_for_file)
if response.status_code == 403 and ('kemono.cr' in current_url_to_try or 'coomer.st' in current_url_to_try):
self.logger(f" ⚠️ Got 403 Forbidden for '{api_original_filename}'. Attempting subdomain rotation...")
new_url = self._find_valid_subdomain(current_url_to_try)
if new_url != current_url_to_try:
self.logger(f" Retrying with new URL: {new_url}")
file_url = new_url # Update the main file_url for subsequent retries
response = requests.get(new_url, headers=file_download_headers, timeout=(30, 300), stream=True, cookies=cookies_to_use_for_file)
response.raise_for_status()
total_size_bytes = int(response.headers.get('Content-Length', 0))
num_parts_for_file = min(self.num_file_threads, MAX_PARTS_FOR_MULTIPART_DOWNLOAD)
num_parts_for_file = min(self.multipart_parts_count, MAX_PARTS_FOR_MULTIPART_DOWNLOAD)
file_is_eligible_by_scope = False
if self.multipart_scope == 'videos':
if is_video(api_original_filename):
file_is_eligible_by_scope = True
elif self.multipart_scope == 'archives':
if is_archive(api_original_filename):
file_is_eligible_by_scope = True
elif self.multipart_scope == 'both':
if is_video(api_original_filename) or is_archive(api_original_filename):
file_is_eligible_by_scope = True
min_size_in_bytes = self.multipart_min_size_mb * 1024 * 1024
attempt_multipart = (self.allow_multipart_download and MULTIPART_DOWNLOADER_AVAILABLE and
num_parts_for_file > 1 and total_size_bytes > MIN_SIZE_FOR_MULTIPART_DOWNLOAD and
file_is_eligible_by_scope and
num_parts_for_file > 1 and total_size_bytes > min_size_in_bytes and
'bytes' in response.headers.get('Accept-Ranges', '').lower())
if self._check_pause(f"Multipart decision for '{api_original_filename}'"): break
if attempt_multipart:
@@ -433,7 +553,7 @@ class PostProcessorWorker:
response_for_this_attempt = None
mp_save_path_for_unique_part_stem_arg = os.path.join(target_folder_path, f"{unique_part_file_stem_on_disk}{temp_file_ext_for_unique_part}")
mp_success, mp_bytes, mp_hash, mp_file_handle = download_file_in_parts(
file_url, mp_save_path_for_unique_part_stem_arg, total_size_bytes, num_parts_for_file, headers, api_original_filename,
file_url, mp_save_path_for_unique_part_stem_arg, total_size_bytes, num_parts_for_file, file_download_headers, api_original_filename,
emitter_for_multipart=self.emitter, cookies_for_chunk_session=cookies_to_use_for_file,
cancellation_event=self.cancellation_event, skip_event=skip_event, logger_func=self.logger,
pause_event=self.pause_event
@@ -442,7 +562,7 @@ class PostProcessorWorker:
download_successful_flag = True
downloaded_size_bytes = mp_bytes
calculated_file_hash = mp_hash
downloaded_part_file_path = mp_save_path_for_unique_part_stem_arg + ".part"
downloaded_part_file_path = mp_save_path_for_unique_part_stem_arg
if mp_file_handle: mp_file_handle.close()
break
else:
@@ -508,12 +628,15 @@ class PostProcessorWorker:
if isinstance(e, requests.exceptions.ConnectionError) and ("Failed to resolve" in str(e) or "NameResolutionError" in str(e)):
self.logger(" 💡 This looks like a DNS resolution problem. Please check your internet connection, DNS settings, or VPN.")
except requests.exceptions.RequestException as e:
self.logger(f" ❌ Download Error (Non-Retryable): {api_original_filename}. Error: {e}")
last_exception_for_retry_later = e
is_permanent_error = True
if ("Failed to resolve" in str(e) or "NameResolutionError" in str(e)):
self.logger(" 💡 This looks like a DNS resolution problem. Please check your internet connection, DNS settings, or VPN.")
break
if e.response is not None and e.response.status_code == 403:
self.logger(f" ⚠️ Download Error (403 Forbidden): {api_original_filename}. This often requires valid cookies.")
self.logger(f" Will retry... Check your 'Use Cookie' settings if this persists.")
last_exception_for_retry_later = e
else:
self.logger(f" ❌ Download Error (Non-Retryable): {api_original_filename}. Error: {e}")
last_exception_for_retry_later = e
is_permanent_error = True
break
except Exception as e:
self.logger(f" ❌ Unexpected Download Error: {api_original_filename}: {e}\n{traceback.format_exc(limit=2)}")
last_exception_for_retry_later = e
@@ -582,7 +705,30 @@ class PostProcessorWorker:
os.remove(downloaded_part_file_path)
except OSError: pass
return 0, 1, filename_to_save_in_main_path, was_original_name_kept_flag, FILE_DOWNLOAD_STATUS_SKIPPED, None
if (self.compress_images and downloaded_part_file_path and
is_image(api_original_filename) and
os.path.getsize(downloaded_part_file_path) > 1.5 * 1024 * 1024):
self.logger(f" 🔄 Compressing '{api_original_filename}' to WebP...")
try:
with Image.open(downloaded_part_file_path) as img:
if img.mode not in ('RGB', 'RGBA'):
img = img.convert('RGBA')
output_buffer = BytesIO()
img.save(output_buffer, format='WebP', quality=85)
data_to_write_io = output_buffer
base, _ = os.path.splitext(filename_to_save_in_main_path)
filename_to_save_in_main_path = f"{base}.webp"
self.logger(f" ✅ Compression successful. New size: {len(data_to_write_io.getvalue()) / (1024*1024):.2f} MB")
except Exception as e_compress:
self.logger(f" ⚠️ Failed to compress '{api_original_filename}': {e_compress}. Saving original file instead.")
data_to_write_io = None
effective_save_folder = target_folder_path
base_name, extension = os.path.splitext(filename_to_save_in_main_path)
counter = 1
@@ -600,7 +746,6 @@ class PostProcessorWorker:
try:
if data_to_write_io:
with open(final_save_path, 'wb') as f_out:
time.sleep(0.05)
f_out.write(data_to_write_io.getvalue())
if downloaded_part_file_path and os.path.exists(downloaded_part_file_path):
try:
@@ -654,7 +799,7 @@ class PostProcessorWorker:
self.logger(f" -> Failed to remove partially saved file: {final_save_path}")
permanent_failure_details = {
'file_info': file_info, 'target_folder_path': target_folder_path, 'headers': headers,
'file_info': file_info, 'target_folder_path': target_folder_path, 'headers': file_download_headers,
'original_post_id_for_log': original_post_id_for_log, 'post_title': post_title,
'file_index_in_post': file_index_in_post, 'num_files_in_this_post': num_files_in_this_post,
'forced_filename_override': filename_to_save_in_main_path,
@@ -668,7 +813,7 @@ class PostProcessorWorker:
details_for_failure = {
'file_info': file_info,
'target_folder_path': target_folder_path,
'headers': headers,
'headers': file_download_headers,
'original_post_id_for_log': original_post_id_for_log,
'post_title': post_title,
'file_index_in_post': file_index_in_post,
@@ -680,45 +825,81 @@ class PostProcessorWorker:
else:
return 0, 1, filename_to_save_in_main_path, was_original_name_kept_flag, FILE_DOWNLOAD_STATUS_FAILED_RETRYABLE_LATER, details_for_failure
def process(self):
# --- START: REFACTORED PROCESS METHOD ---
# 1. DATA MAPPING: Map Discord Message or Creator Post fields to a consistent set of variables.
if self.service == 'discord':
# For Discord, self.post is a MESSAGE object from the API.
post_title = self.post.get('content', '') or f"Message {self.post.get('id', 'N/A')}"
post_id = self.post.get('id', 'unknown_id')
post_main_file_info = {} # Discord messages don't have a single main file
post_attachments = self.post.get('attachments', [])
post_content_html = self.post.get('content', '')
post_data = self.post # Keep a reference to the original message object
log_prefix = "Message"
else:
# Existing logic for standard creator posts
post_title = self.post.get('title', '') or 'untitled_post'
post_id = self.post.get('id', 'unknown_id')
post_main_file_info = self.post.get('file')
post_attachments = self.post.get('attachments', [])
post_content_html = self.post.get('content', '')
post_data = self.post # Reference to the post object
log_prefix = "Post"
# 2. SHARED PROCESSING LOGIC: The rest of the function now uses the consistent variables from above.
result_tuple = (0, 0, [], [], [], None, None)
total_downloaded_this_post = 0
total_skipped_this_post = 0
determined_post_save_path_for_history = self.override_output_dir if self.override_output_dir else self.download_root
try:
if self._check_pause(f"Post processing for ID {self.post.get('id', 'N/A')}"):
result_tuple = (0, 0, [], [], [], None, None)
return result_tuple
if self._check_pause(f"{log_prefix} processing for ID {post_id}"):
return (0, 0, [], [], [], None, None)
if self.check_cancel():
result_tuple = (0, 0, [], [], [], None, None)
return result_tuple
return (0, 0, [], [], [], None, None)
current_character_filters = self._get_current_character_filters()
kept_original_filenames_for_log = []
retryable_failures_this_post = []
permanent_failures_this_post = []
total_downloaded_this_post = 0
total_skipped_this_post = 0
history_data_for_this_post = None
parsed_api_url = urlparse(self.api_url_input)
referer_url = f"https://{parsed_api_url.netloc}/"
headers = {'User-Agent': 'Mozilla/5.0', 'Referer': referer_url, 'Accept': '*/*'}
link_pattern = re.compile(r"""<a\s+.*?href=["'](https?://[^"']+)["'][^>]*>(.*?)</a>""", re.IGNORECASE | re.DOTALL)
post_data = self.post
post_title = post_data.get('title', '') or 'untitled_post'
post_id = post_data.get('id', 'unknown_id')
post_main_file_info = post_data.get('file')
post_attachments = post_data.get('attachments', [])
# CONTEXT-AWARE URL for Referer Header
if self.service == 'discord':
server_id = self.user_id
channel_id = self.post.get('channel', 'unknown_channel')
post_page_url = f"https://{parsed_api_url.netloc}/discord/server/{server_id}/{channel_id}"
else:
post_page_url = f"https://{parsed_api_url.netloc}/{self.service}/user/{self.user_id}/post/{post_id}"
headers = {'User-Agent': 'Mozilla/5.0', 'Referer': post_page_url, 'Accept': '*/*'}
link_pattern = re.compile(r"""<a\s+.*?href=["'](https?://[^"']+)["'][^>]*>(.*?)</a>""", re.IGNORECASE | re.DOTALL)
effective_unwanted_keywords_for_folder_naming = self.unwanted_keywords.copy()
is_full_creator_download_no_char_filter = not self.target_post_id_from_initial_url and not current_character_filters
if (self.show_external_links or self.extract_links_only):
embed_data = post_data.get('embed')
if isinstance(embed_data, dict) and embed_data.get('url'):
embed_url = embed_data['url']
embed_subject = embed_data.get('subject', embed_url) # Use subject as link text, fallback to URL
platform = get_link_platform(embed_url)
self.logger(f" 🔗 Found embed link: {embed_url}")
self._emit_signal('external_link', post_title, embed_subject, embed_url, platform, "")
if is_full_creator_download_no_char_filter and self.creator_download_folder_ignore_words:
self.logger(f" Applying creator download specific folder ignore words ({len(self.creator_download_folder_ignore_words)} words).")
effective_unwanted_keywords_for_folder_naming.update(self.creator_download_folder_ignore_words)
post_content_html = post_data.get('content', '')
if not self.extract_links_only:
self.logger(f"\n--- Processing Post {post_id} ('{post_title[:50]}...') (Thread: {threading.current_thread().name}) ---")
self.logger(f"\n--- Processing {log_prefix} {post_id} ('{post_title[:50]}...') (Thread: {threading.current_thread().name}) ---")
num_potential_files_in_post = len(post_attachments or []) + (1 if post_main_file_info and post_main_file_info.get('path') else 0)
post_is_candidate_by_title_char_match = False
@@ -750,8 +931,8 @@ class PostProcessorWorker:
all_files_from_post_api_for_char_check = []
api_file_domain_for_char_check = urlparse(self.api_url_input).netloc
if not api_file_domain_for_char_check or not any(d in api_file_domain_for_char_check.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
api_file_domain_for_char_check = "kemono.su" if "kemono" in self.service.lower() else "coomer.party"
if not api_file_domain_for_char_check or not any(d in api_file_domain_for_char_check.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
api_file_domain_for_char_check = "kemono.cr" if "kemono" in self.service.lower() else "coomer.st"
if post_main_file_info and isinstance(post_main_file_info, dict) and post_main_file_info.get('path'):
original_api_name = post_main_file_info.get('name') or os.path.basename(post_main_file_info['path'].lstrip('/'))
if original_api_name:
@@ -762,7 +943,7 @@ class PostProcessorWorker:
if original_api_att_name:
all_files_from_post_api_for_char_check.append({'_original_name_for_log': original_api_att_name})
if current_character_filters and self.char_filter_scope == CHAR_SCOPE_COMMENTS:
if current_character_filters and self.char_filter_scope == CHAR_SCOPE_COMMENTS and self.service != 'discord':
self.logger(f" [Char Scope: Comments] Phase 1: Checking post files for matches before comments for post ID '{post_id}'.")
if self._check_pause(f"File check (comments scope) for post {post_id}"):
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
@@ -785,7 +966,7 @@ class PostProcessorWorker:
if post_is_candidate_by_file_char_match_in_comment_scope: break
self.logger(f" [Char Scope: Comments] Phase 1 Result: post_is_candidate_by_file_char_match_in_comment_scope = {post_is_candidate_by_file_char_match_in_comment_scope}")
if current_character_filters and self.char_filter_scope == CHAR_SCOPE_COMMENTS:
if current_character_filters and self.char_filter_scope == CHAR_SCOPE_COMMENTS and self.service != 'discord':
if not post_is_candidate_by_file_char_match_in_comment_scope:
if self._check_pause(f"Comment check for post {post_id}"):
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
@@ -794,9 +975,9 @@ class PostProcessorWorker:
try:
parsed_input_url_for_comments = urlparse(self.api_url_input)
api_domain_for_comments = parsed_input_url_for_comments.netloc
if not any(d in api_domain_for_comments.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
if not any(d in api_domain_for_comments.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
self.logger(f"⚠️ Unrecognized domain '{api_domain_for_comments}' for comment API. Defaulting based on service.")
api_domain_for_comments = "kemono.su" if "kemono" in self.service.lower() else "coomer.party"
api_domain_for_comments = "kemono.cr" if "kemono" in self.service.lower() else "coomer.st"
comments_data = fetch_post_comments(
api_domain_for_comments, self.service, self.user_id, post_id,
headers, self.logger, self.cancellation_event, self.pause_event,
@@ -848,27 +1029,17 @@ class PostProcessorWorker:
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
return result_tuple
if self.skip_words_list and (self.skip_words_scope == SKIP_SCOPE_POSTS or self.skip_words_scope == SKIP_SCOPE_BOTH):
if self._check_pause(f"Skip words (post title) for post {post_id}"):
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
return result_tuple
post_title_lower = post_title.lower()
for skip_word in self.skip_words_list:
if skip_word.lower() in post_title_lower:
self.logger(f" -> Skip Post (Keyword in Title '{skip_word}'): '{post_title[:50]}...'. Scope: {self.skip_words_scope}")
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
return result_tuple
if not self.extract_links_only and self.manga_mode_active and current_character_filters and (self.char_filter_scope == CHAR_SCOPE_TITLE or self.char_filter_scope == CHAR_SCOPE_BOTH) and not post_is_candidate_by_title_char_match:
self.logger(f" -> Skip Post (Manga Mode with Title/Both Scope - No Title Char Match): Title '{post_title[:50]}' doesn't match filters.")
self._emit_signal('missed_character_post', post_title, "Manga Mode: No title match for character filter (Title/Both scope)")
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
return result_tuple
self.logger(f" -> Skip Post (Manga Mode with Title/Both Scope - No Title Char Match): Title '{post_title[:50]}' doesn't match filters.")
self._emit_signal('missed_character_post', post_title, "Manga Mode: No title match for character filter (Title/Both scope)")
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
return result_tuple
if not isinstance(post_attachments, list):
self.logger(f"⚠️ Corrupt attachment data for post {post_id} (expected list, got {type(post_attachments)}). Skipping attachments.")
post_attachments = []
# CORRECTED LOGIC: Determine folder path BEFORE skip checks
base_folder_names_for_post_content = []
determined_post_save_path_for_history = self.override_output_dir if self.override_output_dir else self.download_root
if not self.extract_links_only and self.use_subfolders:
@@ -967,7 +1138,10 @@ class PostProcessorWorker:
determined_post_save_path_for_history = os.path.join(determined_post_save_path_for_history, base_folder_names_for_post_content[0])
if not self.extract_links_only and self.use_post_subfolders:
cleaned_post_title_for_sub = clean_folder_name(post_title)
cleaned_post_title_for_sub = robust_clean_name(post_title)
max_folder_len = 100
if len(cleaned_post_title_for_sub) > max_folder_len:
cleaned_post_title_for_sub = cleaned_post_title_for_sub[:max_folder_len].strip()
post_id_for_fallback = self.post.get('id', 'unknown_id')
if not cleaned_post_title_for_sub or cleaned_post_title_for_sub == "untitled_folder":
@@ -992,31 +1166,74 @@ class PostProcessorWorker:
suffix_counter = 0
final_post_subfolder_name = ""
while True:
suffix_counter = 0
folder_creation_successful = False
final_post_subfolder_name = ""
post_id_for_folder = str(self.post.get('id', 'unknown_id'))
while not folder_creation_successful:
if suffix_counter == 0:
name_candidate = original_cleaned_post_title_for_sub
else:
name_candidate = f"{original_cleaned_post_title_for_sub}_{suffix_counter}"
potential_post_subfolder_path = os.path.join(base_path_for_post_subfolder, name_candidate)
try:
os.makedirs(potential_post_subfolder_path, exist_ok=False)
final_post_subfolder_name = name_candidate
if suffix_counter > 0:
self.logger(f" Post subfolder name conflict: Using '{final_post_subfolder_name}' instead of '{original_cleaned_post_title_for_sub}' to avoid mixing posts.")
break
except FileExistsError:
suffix_counter += 1
if suffix_counter > 100:
self.logger(f" ⚠️ Exceeded 100 attempts to find unique subfolder name for '{original_cleaned_post_title_for_sub}'. Using UUID.")
final_post_subfolder_name = f"{original_cleaned_post_title_for_sub}_{uuid.uuid4().hex[:8]}"
os.makedirs(os.path.join(base_path_for_post_subfolder, final_post_subfolder_name), exist_ok=True)
id_file_path = os.path.join(potential_post_subfolder_path, f".postid_{post_id_for_folder}")
if not os.path.isdir(potential_post_subfolder_path):
# Folder does not exist, create it and its ID file
try:
os.makedirs(potential_post_subfolder_path)
with open(id_file_path, 'w') as f:
f.write(post_id_for_folder)
final_post_subfolder_name = name_candidate
folder_creation_successful = True
if suffix_counter > 0:
self.logger(f" Post subfolder name conflict: Using '{final_post_subfolder_name}' to avoid mixing posts.")
except OSError as e_mkdir:
self.logger(f" ❌ Error creating directory '{potential_post_subfolder_path}': {e_mkdir}.")
final_post_subfolder_name = original_cleaned_post_title_for_sub
break
except OSError as e_mkdir:
self.logger(f" ❌ Error creating directory '{potential_post_subfolder_path}': {e_mkdir}. Files for this post might be saved in parent or fail.")
final_post_subfolder_name = original_cleaned_post_title_for_sub
break
else:
# Folder exists, check if it's for this post or a different one
if os.path.exists(id_file_path):
# ID file matches! This is a restore scenario. Reuse the folder.
self.logger(f" Re-using existing post subfolder: '{name_candidate}'")
final_post_subfolder_name = name_candidate
folder_creation_successful = True
else:
# Folder exists but ID file does not match (or is missing). This is a normal name collision.
suffix_counter += 1
if suffix_counter > 100: # Safety break
self.logger(f" ⚠️ Exceeded 100 attempts to find unique subfolder for '{original_cleaned_post_title_for_sub}'.")
final_post_subfolder_name = f"{original_cleaned_post_title_for_sub}_{uuid.uuid4().hex[:8]}"
os.makedirs(os.path.join(base_path_for_post_subfolder, final_post_subfolder_name), exist_ok=True)
break
determined_post_save_path_for_history = os.path.join(base_path_for_post_subfolder, final_post_subfolder_name)
if self.skip_words_list and (self.skip_words_scope == SKIP_SCOPE_POSTS or self.skip_words_scope == SKIP_SCOPE_BOTH):
if self._check_pause(f"Skip words (post title) for post {post_id}"):
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
return result_tuple
post_title_lower = post_title.lower()
for skip_word in self.skip_words_list:
if skip_word.lower() in post_title_lower:
self.logger(f" -> Skip Post (Keyword in Title '{skip_word}'): '{post_title[:50]}...'. Scope: {self.skip_words_scope}")
# Create a history object for the skipped post to record its ID
history_data_for_skipped_post = {
'post_id': post_id,
'service': self.service,
'user_id': self.user_id,
'post_title': post_title,
'top_file_name': "N/A (Post Skipped)",
'num_files': num_potential_files_in_post,
'upload_date_str': post_data.get('published') or post_data.get('added') or "Unknown",
'download_location': determined_post_save_path_for_history
}
result_tuple = (0, num_potential_files_in_post, [], [], [], history_data_for_skipped_post, None)
return result_tuple
if self.filter_mode == 'text_only' and not self.extract_links_only:
self.logger(f" Mode: Text Only (Scope: {self.text_only_scope})")
post_title_lower = post_title.lower()
@@ -1124,11 +1341,18 @@ class PostProcessorWorker:
if FPDF:
self.logger(f" Creating formatted PDF for {'comments' if self.text_only_scope == 'comments' else 'content'}...")
pdf = PDF()
if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
# If the application is run as a bundled exe, _MEIPASS is the temp folder
base_path = sys._MEIPASS
else:
# If running as a normal .py script, use the project_root_dir
base_path = self.project_root_dir
font_path = ""
bold_font_path = ""
if self.project_root_dir:
font_path = os.path.join(self.project_root_dir, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
bold_font_path = os.path.join(self.project_root_dir, 'data', 'dejavu-sans', 'DejaVuSans-Bold.ttf')
if base_path:
font_path = os.path.join(base_path, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
bold_font_path = os.path.join(base_path, 'data', 'dejavu-sans', 'DejaVuSans-Bold.ttf')
try:
if not os.path.exists(font_path): raise RuntimeError(f"Font file not found: {font_path}")
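The sys.frozen/_MEIPASS branching above is the standard PyInstaller idiom for locating bundled data files. It is often factored into a small helper; a sketch under that assumption (not code from this repo):
import os
import sys

def resource_path(*parts):
    """Resolve a bundled data file whether running frozen (PyInstaller) or from source."""
    if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
        base = sys._MEIPASS  # PyInstaller extracts bundled data to this temp dir
    else:
        base = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(base, *parts)

# e.g. resource_path('data', 'dejavu-sans', 'DejaVuSans.ttf')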
@@ -1261,9 +1485,8 @@ class PostProcessorWorker:
all_files_from_post_api = []
api_file_domain = urlparse(self.api_url_input).netloc
if not api_file_domain or not any(d in api_file_domain.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
api_file_domain = "kemono.su" if "kemono" in self.service.lower() else "coomer.party"
if not api_file_domain or not any(d in api_file_domain.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
api_file_domain = "kemono.cr" if "kemono" in self.service.lower() else "coomer.st"
if post_main_file_info and isinstance(post_main_file_info, dict) and post_main_file_info.get('path'):
file_path = post_main_file_info['path'].lstrip('/')
original_api_name = post_main_file_info.get('name') or os.path.basename(file_path)
@@ -1385,7 +1608,17 @@ class PostProcessorWorker:
if not all_files_from_post_api:
self.logger(f" No files found to download for post {post_id}.")
result_tuple = (0, 0, [], [], [], None, None)
history_data_for_no_files_post = {
'post_title': post_title,
'post_id': post_id,
'service': self.service,
'user_id': self.user_id,
'top_file_name': "N/A (No Files)",
'num_files': 0,
'upload_date_str': post_data.get('published') or post_data.get('added') or "Unknown",
'download_location': determined_post_save_path_for_history
}
result_tuple = (0, 0, [], [], [], history_data_for_no_files_post, None)
return result_tuple
files_to_download_info_list = []
@@ -1511,7 +1744,7 @@ class PostProcessorWorker:
self._download_single_file,
file_info=file_info_to_dl,
target_folder_path=current_path_for_file_instance,
headers=headers, original_post_id_for_log=post_id, skip_event=self.skip_current_file_flag,
post_page_url=post_page_url, original_post_id_for_log=post_id, skip_event=self.skip_current_file_flag,
post_title=post_title, manga_date_file_counter_ref=manga_date_counter_to_pass,
manga_global_file_counter_ref=manga_global_counter_to_pass, folder_context_name_for_history=folder_context_for_file,
file_index_in_post=file_idx, num_files_in_this_post=len(files_to_download_info_list)
@@ -1605,10 +1838,12 @@ class PostProcessorWorker:
if not self.extract_links_only and self.use_post_subfolders and total_downloaded_this_post == 0:
path_to_check_for_emptiness = determined_post_save_path_for_history
try:
# Check if the path is a directory and if it's empty
if os.path.isdir(path_to_check_for_emptiness) and not os.listdir(path_to_check_for_emptiness):
self.logger(f" 🗑️ Removing empty post-specific subfolder: '{path_to_check_for_emptiness}'")
os.rmdir(path_to_check_for_emptiness)
except OSError as e_rmdir:
# Log if removal fails for any reason (e.g., permissions)
self.logger(f" ⚠️ Could not remove empty post-specific subfolder '{path_to_check_for_emptiness}': {e_rmdir}")
result_tuple = (total_downloaded_this_post, total_skipped_this_post,
@@ -1616,7 +1851,25 @@ class PostProcessorWorker:
permanent_failures_this_post, history_data_for_this_post,
None)
except Exception as main_thread_err:
self.logger(f"\n❌ Critical error within Worker process for {log_prefix} {post_id}: {main_thread_err}")
self.logger(traceback.format_exc())
# Ensure we still return a valid tuple to prevent the app from stalling
result_tuple = (0, 1, [], [], [{'error': str(main_thread_err)}], None, None)
finally:
# This block ALWAYS executes, ensuring that every task signals its completion.
# This is critical for the main thread to know when all work is done.
if not self.extract_links_only and self.use_post_subfolders and total_downloaded_this_post == 0:
path_to_check_for_emptiness = determined_post_save_path_for_history
try:
# Check if the path is a directory and if it's empty
if os.path.isdir(path_to_check_for_emptiness) and not os.listdir(path_to_check_for_emptiness):
self.logger(f" 🗑️ Removing empty post-specific subfolder: '{path_to_check_for_emptiness}'")
os.rmdir(path_to_check_for_emptiness)
except OSError as e_rmdir:
# Log if removal fails for any reason (e.g., permissions)
self.logger(f" ⚠️ Could not remove potentially empty subfolder '{path_to_check_for_emptiness}': {e_rmdir}")
self._emit_signal('worker_finished', result_tuple)
return result_tuple
@@ -1657,6 +1910,8 @@ class DownloadThread(QThread):
remove_from_filename_words_list=None,
manga_date_prefix='',
allow_multipart_download=True,
multipart_parts_count=4,
multipart_min_size_mb=100,
selected_cookie_file=None,
override_output_dir=None,
app_base_dir=None,
@@ -1679,7 +1934,8 @@ class DownloadThread(QThread):
single_pdf_mode=False,
project_root_dir=None,
processed_post_ids=None,
start_offset=0):
start_offset=0,
fetch_first=False):
super().__init__()
self.api_url_input = api_url_input
self.output_dir = output_dir
@@ -1719,6 +1975,8 @@ class DownloadThread(QThread):
self.remove_from_filename_words_list = remove_from_filename_words_list
self.manga_date_prefix = manga_date_prefix
self.allow_multipart_download = allow_multipart_download
self.multipart_parts_count = multipart_parts_count
self.multipart_min_size_mb = multipart_min_size_mb
self.selected_cookie_file = selected_cookie_file
self.app_base_dir = app_base_dir
self.cookie_text = cookie_text
@@ -1743,6 +2001,7 @@ class DownloadThread(QThread):
self.project_root_dir = project_root_dir
self.processed_post_ids_set = set(processed_post_ids) if processed_post_ids is not None else set()
self.start_offset = start_offset
self.fetch_first = fetch_first
if self.compress_images and Image is None:
self.logger("⚠️ Image compression disabled: Pillow library not found (DownloadThread).")
@@ -1789,7 +2048,8 @@ class DownloadThread(QThread):
selected_cookie_file=self.selected_cookie_file,
app_base_dir=self.app_base_dir,
manga_filename_style_for_sort_check=self.manga_filename_style if self.manga_mode_active else None,
processed_post_ids=self.processed_post_ids_set
processed_post_ids=self.processed_post_ids_set,
fetch_all_first=self.fetch_first
)
for posts_batch_data in post_generator:
@@ -1860,6 +2120,8 @@ class DownloadThread(QThread):
'text_only_scope': self.text_only_scope,
'text_export_format': self.text_export_format,
'single_pdf_mode': self.single_pdf_mode,
'multipart_parts_count': self.multipart_parts_count,
'multipart_min_size_mb': self.multipart_min_size_mb,
'project_root_dir': self.project_root_dir,
}


@@ -3,33 +3,30 @@ import os
import re
import traceback
import json
import base64
import time
from urllib.parse import urlparse, urlunparse, parse_qs, urlencode
# --- Third-Party Library Imports ---
import requests
try:
from mega import Mega
MEGA_AVAILABLE = True
from Crypto.Cipher import AES
PYCRYPTODOME_AVAILABLE = True
except ImportError:
MEGA_AVAILABLE = False
PYCRYPTODOME_AVAILABLE = False
try:
import gdown
GDOWN_AVAILABLE = True
GDRIVE_AVAILABLE = True
except ImportError:
GDOWN_AVAILABLE = False
GDRIVE_AVAILABLE = False
# --- Helper Functions ---
MEGA_API_URL = "https://g.api.mega.co.nz"
def _get_filename_from_headers(headers):
"""
Extracts a filename from the Content-Disposition header.
Args:
headers (dict): A dictionary of HTTP response headers.
Returns:
str or None: The extracted filename, or None if not found.
    (Unchanged in this commit; still used for Dropbox downloads.)
"""
cd = headers.get('content-disposition')
if not cd:
@@ -37,97 +34,205 @@ def _get_filename_from_headers(headers):
fname_match = re.findall('filename="?([^"]+)"?', cd)
if fname_match:
# Sanitize the filename to prevent directory traversal issues
# and remove invalid characters for most filesystems.
sanitized_name = re.sub(r'[<>:"/\\|?*]', '_', fname_match[0].strip())
return sanitized_name
return None
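For illustration, given a typical response header the helper behaves as follows (a hedged example, not taken from the project's tests):
headers = {'content-disposition': 'attachment; filename="my report?.pdf"'}
# The regex captures 'my report?.pdf'; the sanitizer then maps '?' to '_':
# _get_filename_from_headers(headers)  ->  'my report_.pdf'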
# --- Main Service Downloader Functions ---
# --- OLD (removed): previous mega.py-based implementation ---
def download_mega_file(mega_link, download_path=".", logger_func=print):
    """
    Downloads a file from a public Mega.nz link.
    Args:
        mega_link (str): The public Mega.nz link to the file.
        download_path (str): The directory to save the downloaded file.
        logger_func (callable): Function to use for logging.
    """
    if not MEGA_AVAILABLE:
        logger_func("❌ Error: mega.py library is not installed. Cannot download from Mega.")
        logger_func("   Please install it: pip install mega.py")
        raise ImportError("mega.py library not found.")
    logger_func(f"   [Mega] Initializing Mega client...")
    try:
        mega_client = Mega()
        m = mega_client.login()
        logger_func(f"   [Mega] Attempting to download from: {mega_link}")
        if not os.path.exists(download_path):
            os.makedirs(download_path, exist_ok=True)
            logger_func(f"   [Mega] Created download directory: {download_path}")
        # The download_url method handles file info fetching and saving internally.
        downloaded_file_path = m.download_url(mega_link, dest_path=download_path)
        if downloaded_file_path and os.path.exists(downloaded_file_path):
            logger_func(f"   [Mega] ✅ File downloaded successfully! Saved as: {downloaded_file_path}")
        else:
            raise Exception(f"Mega download failed or file not found. Returned: {downloaded_file_path}")
    except Exception as e:
        logger_func(f"   [Mega] ❌ An unexpected error occurred during Mega download: {e}")
        traceback.print_exc(limit=2)
        raise  # Re-raise the exception to be handled by the calling worker
# --- NEW: Helper functions for Mega decryption ---
def urlb64_to_b64(s):
    """Converts a URL-safe base64 string to a standard base64 string."""
    s = s.replace('-', '+').replace('_', '/')
    s += '=' * (-len(s) % 4)
    return s
def b64_to_bytes(s):
    """Decodes a URL-safe base64 string to bytes."""
    return base64.b64decode(urlb64_to_b64(s))
def bytes_to_hex(b):
"""Converts bytes to a hex string."""
return b.hex()
def hex_to_bytes(h):
"""Converts a hex string to bytes."""
return bytes.fromhex(h)
def hrk2hk(hex_raw_key):
"""Derives the final AES key from the raw key components for Mega."""
key_part1 = int(hex_raw_key[0:16], 16)
key_part2 = int(hex_raw_key[16:32], 16)
key_part3 = int(hex_raw_key[32:48], 16)
key_part4 = int(hex_raw_key[48:64], 16)
final_key_part1 = key_part1 ^ key_part3
final_key_part2 = key_part2 ^ key_part4
return f'{final_key_part1:016x}{final_key_part2:016x}'
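This fold matches the Mega node-key layout the module assumes: the 32-byte raw key holds the effective AES-128 key as (first 16 bytes) XOR (last 16 bytes), while bytes 16..24 seed the CTR IV used further down. A toy check with a dummy key:
raw = '00112233445566778899aabbccddeeff' + 'ffeeddccbbaa99887766554433221100'
k1 = int(raw[0:16], 16) ^ int(raw[32:48], 16)   # first quadword XOR third
k2 = int(raw[16:32], 16) ^ int(raw[48:64], 16)  # second quadword XOR fourth
assert hrk2hk(raw) == f'{k1:016x}{k2:016x}'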
def decrypt_at(at_b64, key_bytes):
"""Decrypts the 'at' attribute to get file metadata."""
at_bytes = b64_to_bytes(at_b64)
iv = b'\0' * 16
cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
decrypted_at = cipher.decrypt(at_bytes)
return decrypted_at.decode('utf-8').strip('\0').replace('MEGA', '')
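The attribute block decrypt_at undoes is, per this implementation's assumptions, the literal prefix 'MEGA' plus JSON, zero-padded to the AES block size and CBC-encrypted with a zero IV. A round-trip sketch with a dummy key:
from Crypto.Cipher import AES
import base64

key = bytes(16)                                    # dummy all-zero key, illustration only
plain = b'MEGA{"n":"example.bin"}'
plain += b'\0' * (-len(plain) % 16)                # zero-pad to the 16-byte block size
enc = AES.new(key, AES.MODE_CBC, b'\0' * 16).encrypt(plain)
at_b64 = base64.b64encode(enc).decode().replace('+', '-').replace('/', '_')
print(decrypt_at(at_b64, key))                     # -> {"n":"example.bin"}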
# --- NEW: Core Logic for Mega Downloads ---
def get_mega_file_info(file_id, file_key, session, logger_func):
"""Fetches file metadata and the temporary download URL from the Mega API."""
try:
hex_raw_key = bytes_to_hex(b64_to_bytes(file_key))
hex_key = hrk2hk(hex_raw_key)
key_bytes = hex_to_bytes(hex_key)
# Request file attributes
payload = [{"a": "g", "p": file_id}]
response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
response.raise_for_status()
res_json = response.json()
if isinstance(res_json, list) and isinstance(res_json[0], int) and res_json[0] < 0:
logger_func(f" [Mega] ❌ API Error: {res_json[0]}. The link may be invalid or removed.")
return None
file_size = res_json[0]['s']
at_b64 = res_json[0]['at']
# Decrypt attributes to get the file name
at_dec_json_str = decrypt_at(at_b64, key_bytes)
at_dec_json = json.loads(at_dec_json_str)
file_name = at_dec_json['n']
# Request the temporary download URL
payload = [{"a": "g", "g": 1, "p": file_id}]
response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
response.raise_for_status()
res_json = response.json()
dl_temp_url = res_json[0]['g']
return {
'file_name': file_name,
'file_size': file_size,
'dl_url': dl_temp_url,
'hex_raw_key': hex_raw_key
}
except (requests.RequestException, json.JSONDecodeError, KeyError, ValueError) as e:
logger_func(f" [Mega] ❌ Failed to get file info: {e}")
return None
def download_and_decrypt_mega_file(info, download_path, logger_func):
"""Downloads the file and decrypts it chunk by chunk, reporting progress."""
file_name = info['file_name']
file_size = info['file_size']
dl_url = info['dl_url']
hex_raw_key = info['hex_raw_key']
final_path = os.path.join(download_path, file_name)
if os.path.exists(final_path) and os.path.getsize(final_path) == file_size:
logger_func(f" [Mega] File '{file_name}' already exists with the correct size. Skipping.")
return
# Prepare for decryption
key = hex_to_bytes(hrk2hk(hex_raw_key))
iv_hex = hex_raw_key[32:48] + '0000000000000000'
iv_bytes = hex_to_bytes(iv_hex)
cipher = AES.new(key, AES.MODE_CTR, initial_value=iv_bytes, nonce=b'')
try:
with requests.get(dl_url, stream=True, timeout=(15, 300)) as r:
r.raise_for_status()
downloaded_bytes = 0
last_log_time = time.time()
with open(final_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
if not chunk:
continue
decrypted_chunk = cipher.decrypt(chunk)
f.write(decrypted_chunk)
downloaded_bytes += len(chunk)
# Log progress every second
current_time = time.time()
if current_time - last_log_time > 1:
progress_percent = (downloaded_bytes / file_size) * 100 if file_size > 0 else 0
logger_func(f" [Mega] Downloading '{file_name}': {downloaded_bytes/1024/1024:.2f}MB / {file_size/1024/1024:.2f}MB ({progress_percent:.1f}%)")
last_log_time = current_time
logger_func(f" [Mega] ✅ Successfully downloaded '{file_name}' to '{download_path}'")
except requests.RequestException as e:
logger_func(f" [Mega] ❌ Download failed for '{file_name}': {e}")
except IOError as e:
logger_func(f" [Mega] ❌ Could not write to file '{final_path}': {e}")
except Exception as e:
logger_func(f" [Mega] ❌ An unexpected error occurred during download/decryption: {e}")
# --- OLD (removed): previous gdown wrapper ---
def download_gdrive_file(gdrive_link, download_path=".", logger_func=print):
    """
    Downloads a file from a public Google Drive link using the gdown library.
    Args:
        gdrive_link (str): The public Google Drive link to the file.
        download_path (str): The directory to save the downloaded file.
        logger_func (callable): Function to use for logging.
    """
    if not GDOWN_AVAILABLE:
        logger_func("Error: gdown library is not installed. Cannot download from Google Drive.")
        logger_func("   Please install it: pip install gdown")
        raise ImportError("gdown library not found.")
    logger_func(f"   [GDrive] Attempting to download: {gdrive_link}")
    try:
        if not os.path.exists(download_path):
            os.makedirs(download_path, exist_ok=True)
            logger_func(f"   [GDrive] Created download directory: {download_path}")
        # gdown handles finding the file ID and downloading. 'fuzzy=True' helps with various URL formats.
        output_file_path = gdown.download(gdrive_link, output=download_path, quiet=False, fuzzy=True)
        if output_file_path and os.path.exists(output_file_path):
            logger_func(f"   [GDrive] ✅ Google Drive file downloaded successfully: {output_file_path}")
        else:
            raise Exception(f"gdown download failed or file not found. Returned: {output_file_path}")
    except Exception as e:
        logger_func(f"   [GDrive] ❌ An error occurred during Google Drive download: {e}")
        traceback.print_exc(limit=2)
        raise
# --- REPLACEMENT Main Service Downloader Function for Mega ---
def download_mega_file(mega_url, download_path, logger_func=print):
    """
    Downloads a file from a Mega.nz URL using direct requests and decryption.
    This replaces the old mega.py implementation.
    """
    if not PYCRYPTODOME_AVAILABLE:
        logger_func("Mega download failed: 'pycryptodome' library is not installed. Please run: pip install pycryptodome")
        return
    logger_func(f" [Mega] Initializing download for: {mega_url}")
# Regex to capture file ID and key from both old and new URL formats
match = re.search(r'mega(?:\.co)?\.nz/(?:file/|#!)?([a-zA-Z0-9]+)(?:#|!)([a-zA-Z0-9_.-]+)', mega_url)
if not match:
logger_func(f" [Mega] ❌ Error: Invalid Mega URL format.")
return
file_id = match.group(1)
file_key = match.group(2)
session = requests.Session()
session.headers.update({'User-Agent': 'Kemono-Downloader-PyQt/1.0'})
file_info = get_mega_file_info(file_id, file_key, session, logger_func)
if not file_info:
logger_func(f" [Mega] ❌ Failed to get file info. The link may be invalid or expired. Aborting.")
return
logger_func(f" [Mega] File found: '{file_info['file_name']}' (Size: {file_info['file_size'] / 1024 / 1024:.2f} MB)")
download_and_decrypt_mega_file(file_info, download_path, logger_func)
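Typical usage, with a fabricated file ID and key (do not treat this link as real):
# download_mega_file("https://mega.nz/file/AbCdEfGh#0123456789abcdefGHIJKLMNopqrstuv",
#                    download_path="./downloads", logger_func=print)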
# --- Google Drive and Dropbox downloaders (Dropbox unchanged) ---
def download_gdrive_file(url, download_path, logger_func=print):
"""Downloads a file from a Google Drive link."""
if not GDRIVE_AVAILABLE:
logger_func("❌ Google Drive download failed: 'gdown' library is not installed.")
return
try:
logger_func(f" [G-Drive] Starting download for: {url}")
logger_func(" [G-Drive] Download in progress... This may take some time. Please wait.")
output_path = gdown.download(url, output=download_path, quiet=True, fuzzy=True)
if output_path and os.path.exists(output_path):
logger_func(f" [G-Drive] ✅ Successfully downloaded to '{output_path}'")
else:
logger_func(f" [G-Drive] ❌ Download failed. The file may have been moved, deleted, or is otherwise inaccessible.")
except Exception as e:
logger_func(f" [G-Drive] ❌ An unexpected error occurred: {e}")
def download_dropbox_file(dropbox_link, download_path=".", logger_func=print):
"""
Downloads a file from a public Dropbox link by modifying the URL for direct download.
Args:
dropbox_link (str): The public Dropbox link to the file.
download_path (str): The directory to save the downloaded file.
logger_func (callable): Function to use for logging.
"""
logger_func(f" [Dropbox] Attempting to download: {dropbox_link}")
# Modify the Dropbox URL to force a direct download instead of showing the preview page.
parsed_url = urlparse(dropbox_link)
query_params = parse_qs(parsed_url.query)
query_params['dl'] = ['1']
@@ -144,13 +249,11 @@ def download_dropbox_file(dropbox_link, download_path=".", logger_func=print):
with requests.get(direct_download_url, stream=True, allow_redirects=True, timeout=(10, 300)) as r:
r.raise_for_status()
# Determine filename from headers or URL
filename = _get_filename_from_headers(r.headers) or os.path.basename(parsed_url.path) or "dropbox_file"
full_save_path = os.path.join(download_path, filename)
logger_func(f" [Dropbox] Starting download of '{filename}'...")
# Write file to disk in chunks
with open(full_save_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
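The dl=1 rewrite above is Dropbox's documented way to turn a share page into a direct file response. The transformation in isolation (the share link is hypothetical):
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

link = "https://www.dropbox.com/s/abc123/video.mp4?dl=0"   # hypothetical share link
p = urlparse(link)
q = parse_qs(p.query)
q['dl'] = ['1']
direct = urlunparse(p._replace(query=urlencode(q, doseq=True)))
# direct == "https://www.dropbox.com/s/abc123/video.mp4?dl=1"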


@@ -1,4 +1,5 @@
# --- Standard Library Imports ---
import os
import time
import hashlib
@@ -10,28 +11,49 @@ from concurrent.futures import ThreadPoolExecutor, as_completed
# --- Third-Party Library Imports ---
import requests
# --- Module Constants ---
CHUNK_DOWNLOAD_RETRY_DELAY = 2
MAX_CHUNK_DOWNLOAD_RETRIES = 1
DOWNLOAD_CHUNK_SIZE_ITER = 1024 * 256 # 256 KB per iteration chunk
# Flag to indicate if this module and its dependencies are available.
# This was missing and caused the ImportError.
MULTIPART_DOWNLOADER_AVAILABLE = True
def _download_individual_chunk(
chunk_url, temp_file_path, start_byte, end_byte, headers,
chunk_url, chunk_temp_file_path, start_byte, end_byte, headers,
part_num, total_parts, progress_data, cancellation_event,
skip_event, pause_event, global_emit_time_ref, cookies_for_chunk,
logger_func, emitter=None, api_original_filename=None
):
"""
    Downloads a single segment (chunk) of a larger file to its own unique part file.
    This function is intended to be run in a separate thread by a ThreadPoolExecutor.
    It handles retries, pauses, and cancellations for its specific chunk. If a
    download fails, the partial chunk file is removed, allowing a clean retry later.
Args:
chunk_url (str): The URL to download the file from.
chunk_temp_file_path (str): The unique path to save this specific chunk
(e.g., 'my_video.mp4.part0').
start_byte (int): The starting byte for the Range header.
end_byte (int): The ending byte for the Range header.
headers (dict): The HTTP headers to use for the request.
part_num (int): The index of this chunk (e.g., 0 for the first part).
total_parts (int): The total number of chunks for the entire file.
progress_data (dict): A thread-safe dictionary for sharing progress.
cancellation_event (threading.Event): Event to signal cancellation.
skip_event (threading.Event): Event to signal skipping the file.
pause_event (threading.Event): Event to signal pausing the download.
global_emit_time_ref (list): A mutable list with one element (a timestamp)
to rate-limit UI updates.
cookies_for_chunk (dict): Cookies to use for the request.
logger_func (function): A function to log messages.
emitter (queue.Queue or QObject): Emitter for sending progress to the UI.
api_original_filename (str): The original filename for UI display.
Returns:
tuple: A tuple containing (bytes_downloaded, success_flag).
"""
# --- Pre-download checks for control events ---
if cancellation_event and cancellation_event.is_set():
@@ -49,103 +71,135 @@ def _download_individual_chunk(
time.sleep(0.2)
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Download resumed.")
# Prepare headers for the specific byte range of this chunk
chunk_headers = headers.copy()
if end_byte != -1:
chunk_headers['Range'] = f"bytes={start_byte}-{end_byte}"
bytes_this_chunk = 0
last_speed_calc_time = time.time()
bytes_at_last_speed_calc = 0
# Set this chunk's status to 'active' before starting the download.
with progress_data['lock']:
progress_data['chunks_status'][part_num]['active'] = True
# --- Retry Loop ---
for attempt in range(MAX_CHUNK_DOWNLOAD_RETRIES + 1):
if cancellation_event and cancellation_event.is_set():
return bytes_this_chunk, False
try:
# Prepare headers for the specific byte range of this chunk
chunk_headers = headers.copy()
if end_byte != -1:
chunk_headers['Range'] = f"bytes={start_byte}-{end_byte}"
try:
if attempt > 0:
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Retrying (Attempt {attempt + 1}/{MAX_CHUNK_DOWNLOAD_RETRIES + 1})...")
time.sleep(CHUNK_DOWNLOAD_RETRY_DELAY * (2 ** (attempt - 1)))
last_speed_calc_time = time.time()
bytes_at_last_speed_calc = bytes_this_chunk
bytes_this_chunk = 0
last_speed_calc_time = time.time()
bytes_at_last_speed_calc = 0
logger_func(f" 🚀 [Chunk {part_num + 1}/{total_parts}] Starting download: bytes {start_byte}-{end_byte if end_byte != -1 else 'EOF'}")
response = requests.get(chunk_url, headers=chunk_headers, timeout=(10, 120), stream=True, cookies=cookies_for_chunk)
response.raise_for_status()
# --- Retry Loop ---
for attempt in range(MAX_CHUNK_DOWNLOAD_RETRIES + 1):
if cancellation_event and cancellation_event.is_set():
return bytes_this_chunk, False
# --- Data Writing Loop ---
with open(temp_file_path, 'r+b') as f:
f.seek(start_byte)
for data_segment in response.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE_ITER):
if cancellation_event and cancellation_event.is_set():
return bytes_this_chunk, False
if pause_event and pause_event.is_set():
# Handle pausing during the download stream
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Paused...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set(): return bytes_this_chunk, False
time.sleep(0.2)
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Resumed.")
try:
if attempt > 0:
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Retrying (Attempt {attempt + 1}/{MAX_CHUNK_DOWNLOAD_RETRIES + 1})...")
time.sleep(CHUNK_DOWNLOAD_RETRY_DELAY * (2 ** (attempt - 1)))
last_speed_calc_time = time.time()
bytes_at_last_speed_calc = bytes_this_chunk
if data_segment:
f.write(data_segment)
bytes_this_chunk += len(data_segment)
# Update shared progress data structure
with progress_data['lock']:
progress_data['total_downloaded_so_far'] += len(data_segment)
progress_data['chunks_status'][part_num]['downloaded'] = bytes_this_chunk
# Calculate and update speed for this chunk
current_time = time.time()
time_delta = current_time - last_speed_calc_time
if time_delta > 0.5:
bytes_delta = bytes_this_chunk - bytes_at_last_speed_calc
current_speed_bps = (bytes_delta * 8) / time_delta if time_delta > 0 else 0
progress_data['chunks_status'][part_num]['speed_bps'] = current_speed_bps
last_speed_calc_time = current_time
bytes_at_last_speed_calc = bytes_this_chunk
# Emit progress signal to the UI via the queue
if emitter and (current_time - global_emit_time_ref[0] > 0.25):
global_emit_time_ref[0] = current_time
status_list_copy = [dict(s) for s in progress_data['chunks_status']]
if isinstance(emitter, queue.Queue):
emitter.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
elif hasattr(emitter, 'file_progress_signal'):
emitter.file_progress_signal.emit(api_original_filename, status_list_copy)
# If we reach here, the download for this chunk was successful
return bytes_this_chunk, True
logger_func(f" 🚀 [Chunk {part_num + 1}/{total_parts}] Starting download: bytes {start_byte}-{end_byte if end_byte != -1 else 'EOF'}")
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout, http.client.IncompleteRead) as e:
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Retryable error: {e}")
except requests.exceptions.RequestException as e:
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Non-retryable error: {e}")
return bytes_this_chunk, False # Break loop on non-retryable errors
except Exception as e:
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Unexpected error: {e}\n{traceback.format_exc(limit=1)}")
return bytes_this_chunk, False
response = requests.get(chunk_url, headers=chunk_headers, timeout=(10, 120), stream=True, cookies=cookies_for_chunk)
response.raise_for_status()
return bytes_this_chunk, False
# --- Data Writing Loop ---
# We open the unique chunk file in write-binary ('wb') mode.
# No more seeking is required.
with open(chunk_temp_file_path, 'wb') as f:
for data_segment in response.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE_ITER):
if cancellation_event and cancellation_event.is_set():
return bytes_this_chunk, False
if pause_event and pause_event.is_set():
# Handle pausing during the download stream
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Paused...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set(): return bytes_this_chunk, False
time.sleep(0.2)
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Resumed.")
if data_segment:
f.write(data_segment)
bytes_this_chunk += len(data_segment)
# Update shared progress data structure
with progress_data['lock']:
progress_data['total_downloaded_so_far'] += len(data_segment)
progress_data['chunks_status'][part_num]['downloaded'] = bytes_this_chunk
# Calculate and update speed for this chunk
current_time = time.time()
time_delta = current_time - last_speed_calc_time
if time_delta > 0.5:
bytes_delta = bytes_this_chunk - bytes_at_last_speed_calc
current_speed_bps = (bytes_delta * 8) / time_delta if time_delta > 0 else 0
progress_data['chunks_status'][part_num]['speed_bps'] = current_speed_bps
last_speed_calc_time = current_time
bytes_at_last_speed_calc = bytes_this_chunk
# Emit progress signal to the UI via the queue
if emitter and (current_time - global_emit_time_ref[0] > 0.25):
global_emit_time_ref[0] = current_time
status_list_copy = [dict(s) for s in progress_data['chunks_status']]
if isinstance(emitter, queue.Queue):
emitter.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
elif hasattr(emitter, 'file_progress_signal'):
emitter.file_progress_signal.emit(api_original_filename, status_list_copy)
# If we get here, the download for this chunk is successful
return bytes_this_chunk, True
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout, http.client.IncompleteRead) as e:
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Retryable error: {e}")
except requests.exceptions.RequestException as e:
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Non-retryable error: {e}")
return bytes_this_chunk, False # Break loop on non-retryable errors
except Exception as e:
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Unexpected error: {e}\n{traceback.format_exc(limit=1)}")
return bytes_this_chunk, False
# If the retry loop finishes without a successful download
return bytes_this_chunk, False
finally:
# This block runs whether the download succeeded or failed
with progress_data['lock']:
progress_data['chunks_status'][part_num]['active'] = False
progress_data['chunks_status'][part_num]['speed_bps'] = 0.0
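The chunk worker above relies on HTTP Range requests. A standalone sketch of just that mechanism, with a placeholder URL; a server that honors the header answers 206 Partial Content, which is what makes the per-chunk `.partN` files possible:

```python
import requests

url = "https://example.com/big_file.bin"  # placeholder
headers = {"Range": "bytes=0-1048575"}    # request only the first 1 MiB
resp = requests.get(url, headers=headers, stream=True, timeout=(10, 120))
assert resp.status_code == 206, "server must support byte ranges for multipart downloads"
with open("big_file.bin.part0", "wb") as f:
    for segment in resp.iter_content(chunk_size=1024 * 256):
        f.write(segment)
```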
def download_file_in_parts(file_url, save_path, total_size, num_parts, headers, api_original_filename,
                           emitter_for_multipart, cookies_for_chunk_session,
                           cancellation_event, skip_event, logger_func, pause_event):
    """
    Manages a resilient, multipart file download by saving each chunk to a separate file.

    This function orchestrates the download process by:
    1. Checking for already completed chunk files to resume a previous download.
    2. Submitting only the missing chunks to a thread pool for parallel download.
    3. Assembling the final file from the individual chunks upon successful completion.
    4. Cleaning up temporary chunk files after assembly.
    5. Leaving completed chunks on disk if the download fails, allowing for a future resume.

    Args:
        file_url (str): The URL of the file to download.
        save_path (str): The final desired path for the downloaded file (e.g., 'my_video.mp4').
        total_size (int): The total size of the file in bytes.
        num_parts (int): The number of parts to split the download into.
        headers (dict): HTTP headers for the download requests.
        api_original_filename (str): The original filename for UI progress display.
        emitter_for_multipart (queue.Queue or QObject): Emitter for UI signals.
        cookies_for_chunk_session (dict): Cookies for the download requests.
        cancellation_event (threading.Event): Event to signal cancellation.
        skip_event (threading.Event): Event to signal skipping the file.
        logger_func (function): A function for logging messages.
        pause_event (threading.Event): Event to signal pausing the download.

    Returns:
        tuple: A tuple containing (success_flag, total_bytes_downloaded, md5_hash, file_handle).
            The file_handle will be for the final assembled file if successful, otherwise None.
    """
    logger_func(f"⬇️ Initializing Resumable Multi-part Download ({num_parts} parts) for: '{api_original_filename}' (Size: {total_size / (1024*1024):.2f} MB)")

    # Calculate the byte range for each chunk
    chunk_size_calc = total_size // num_parts
    chunks_ranges = []
    for i in range(num_parts):
        start = i * chunk_size_calc
        end = start + chunk_size_calc - 1 if i < num_parts - 1 else total_size - 1
        if start <= end:
            chunks_ranges.append((start, end))
        elif total_size == 0 and i == 0:  # Handle zero-byte files
            chunks_ranges.append((0, -1))

    # Calculate the expected size of each chunk
    chunk_actual_sizes = []
    for start, end in chunks_ranges:
        chunk_actual_sizes.append(end - start + 1 if end != -1 else 0)

    if not chunks_ranges and total_size > 0:
        logger_func(f"   ⚠️ No valid chunk ranges for multipart download of '{api_original_filename}'. Aborting.")
        return False, 0, None, None

    # --- Resumption Logic: Check for existing complete chunks ---
    chunks_to_download = []
    total_bytes_resumed = 0
    for i, (start, end) in enumerate(chunks_ranges):
        chunk_part_path = f"{save_path}.part{i}"
        expected_chunk_size = chunk_actual_sizes[i]
        if os.path.exists(chunk_part_path) and os.path.getsize(chunk_part_path) == expected_chunk_size:
            logger_func(f"   [Chunk {i + 1}/{num_parts}] Resuming with existing complete chunk file.")
            total_bytes_resumed += expected_chunk_size
        else:
            chunks_to_download.append({'index': i, 'start': start, 'end': end})

    # Setup the shared progress data structure
    progress_data = {
        'total_file_size': total_size,
        'total_downloaded_so_far': total_bytes_resumed,
        'chunks_status': [],
        'lock': threading.Lock(),
        'last_global_emit_time': [time.time()]
    }
    for i in range(num_parts):
        is_resumed = not any(c['index'] == i for c in chunks_to_download)
        progress_data['chunks_status'].append({
            'id': i,
            'downloaded': chunk_actual_sizes[i] if is_resumed else 0,
            'total': chunk_actual_sizes[i],
            'active': False,
            'speed_bps': 0.0
        })

    # --- Download Phase ---
    chunk_futures = []
    all_chunks_successful = True
    total_bytes_from_threads = 0
    with ThreadPoolExecutor(max_workers=num_parts, thread_name_prefix=f"MPChunk_{api_original_filename[:10]}_") as chunk_pool:
        for chunk_info in chunks_to_download:
            if cancellation_event and cancellation_event.is_set():
                all_chunks_successful = False
                break
            i, start, end = chunk_info['index'], chunk_info['start'], chunk_info['end']
            chunk_part_path = f"{save_path}.part{i}"
            future = chunk_pool.submit(
                _download_individual_chunk,
                chunk_url=file_url,
                chunk_temp_file_path=chunk_part_path,
                start_byte=start, end_byte=end, headers=headers, part_num=i, total_parts=num_parts,
                progress_data=progress_data, cancellation_event=cancellation_event,
                skip_event=skip_event, global_emit_time_ref=progress_data['last_global_emit_time'],
                pause_event=pause_event, cookies_for_chunk=cookies_for_chunk_session,
                logger_func=logger_func, emitter=emitter_for_multipart,
                api_original_filename=api_original_filename
            )
            chunk_futures.append(future)
        for future in as_completed(chunk_futures):
            if cancellation_event and cancellation_event.is_set():
                all_chunks_successful = False
            bytes_downloaded, success = future.result()
            total_bytes_from_threads += bytes_downloaded
            if not success:
                all_chunks_successful = False

    total_bytes_final = total_bytes_resumed + total_bytes_from_threads
    if cancellation_event and cancellation_event.is_set():
        logger_func(f"   Multi-part download for '{api_original_filename}' cancelled by main event.")
        all_chunks_successful = False

    if emitter_for_multipart:
        with progress_data['lock']:
            status_list_copy = [dict(s) for s in progress_data['chunks_status']]
        if isinstance(emitter_for_multipart, queue.Queue):
            emitter_for_multipart.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
        elif hasattr(emitter_for_multipart, 'file_progress_signal'):
            emitter_for_multipart.file_progress_signal.emit(api_original_filename, status_list_copy)

    # --- Assembly and Cleanup Phase ---
    if all_chunks_successful and (total_bytes_final == total_size or total_size == 0):
        logger_func(f"   ✅ All {num_parts} chunks complete. Assembling final file...")
        md5_hasher = hashlib.md5()
        try:
            with open(save_path, 'wb') as final_file:
                for i in range(num_parts):
                    chunk_part_path = f"{save_path}.part{i}"
                    with open(chunk_part_path, 'rb') as chunk_file:
                        content = chunk_file.read()
                        final_file.write(content)
                        md5_hasher.update(content)
            calculated_hash = md5_hasher.hexdigest()
            logger_func(f"   ✅ Assembly successful for '{api_original_filename}'. Total bytes: {total_bytes_final}")
            return True, total_bytes_final, calculated_hash, open(save_path, 'rb')
        except Exception as e:
            logger_func(f"   ❌ Critical error during file assembly: {e}. Cleaning up.")
            return False, total_bytes_final, None, None
        finally:
            # Cleanup all individual chunk files after assembly (or a failed
            # assembly attempt)
            for i in range(num_parts):
                chunk_part_path = f"{save_path}.part{i}"
                if os.path.exists(chunk_part_path):
                    try:
                        os.remove(chunk_part_path)
                    except OSError as e:
                        logger_func(f"   ⚠️ Failed to remove temp part file '{chunk_part_path}': {e}")
    else:
        # If the download failed, we do NOT clean up, allowing for resumption later
        logger_func(f"   ❌ Multi-part download failed for '{api_original_filename}'. Success: {all_chunks_successful}, Bytes: {total_bytes_final}/{total_size}. Partial chunks saved for future resumption.")
        return False, total_bytes_final, None, None
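A usage sketch under stated assumptions: the URL and save path are placeholders, and the emitter is the plain queue.Queue variant handled above. With total_size of 10,485,760 bytes and num_parts=4, chunk_size_calc is 2,621,440, giving ranges 0-2621439, 2621440-5242879, 5242880-7864319, and 7864320-10485759:

```python
import queue
import threading

ui_queue = queue.Queue()
ok, nbytes, md5_hash, fh = download_file_in_parts(
    file_url="https://example.com/video.mp4",  # placeholder
    save_path="downloads/video.mp4",           # placeholder
    total_size=10_485_760,
    num_parts=4,
    headers={"User-Agent": "Mozilla/5.0"},
    api_original_filename="video.mp4",
    emitter_for_multipart=ui_queue,
    cookies_for_chunk_session=None,
    cancellation_event=threading.Event(),
    skip_event=threading.Event(),
    logger_func=print,
    pause_event=threading.Event(),
)
if ok:
    fh.close()  # the caller owns the returned handle to the assembled file
```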
@@ -13,7 +13,7 @@ from PyQt5.QtCore import pyqtSignal, QCoreApplication, QSize, QThread, QTimer, Q
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QLineEdit, QListWidget,
QListWidgetItem, QMessageBox, QPushButton, QVBoxLayout, QAbstractItemView,
QSplitter, QProgressBar, QWidget, QFileDialog
)
# --- Local Application Imports ---
@@ -151,6 +151,8 @@ class EmptyPopupDialog (QDialog ):
app_icon =get_app_icon_object ()
if app_icon and not app_icon .isNull ():
self .setWindowIcon (app_icon )
self.update_profile_data = None
self.update_creator_name = None
self .selected_creators_for_queue =[]
self .globally_selected_creators ={}
self .fetched_posts_data ={}
@@ -205,6 +207,9 @@ class EmptyPopupDialog (QDialog ):
self .scope_button .clicked .connect (self ._toggle_scope_mode )
left_bottom_buttons_layout .addWidget (self .scope_button )
left_pane_layout .addLayout (left_bottom_buttons_layout )
self.update_button = QPushButton()
self.update_button.clicked.connect(self._handle_update_check)
left_bottom_buttons_layout.addWidget(self.update_button)
self .right_pane_widget =QWidget ()
@@ -315,6 +320,31 @@ class EmptyPopupDialog (QDialog ):
except AttributeError :
pass
def _handle_update_check(self):
"""Opens a dialog to select a creator profile and loads it for an update session."""
appdata_dir = os.path.join(self.app_base_dir, "appdata")
profiles_dir = os.path.join(appdata_dir, "creator_profiles")
if not os.path.isdir(profiles_dir):
QMessageBox.warning(self, "Directory Not Found", f"The creator profiles directory does not exist yet.\n\nPath: {profiles_dir}")
return
filepath, _ = QFileDialog.getOpenFileName(self, "Select Creator Profile for Update", profiles_dir, "JSON Files (*.json)")
if filepath:
try:
with open(filepath, 'r', encoding='utf-8') as f:
data = json.load(f)
if 'creator_url' not in data or 'processed_post_ids' not in data:
raise ValueError("Invalid profile format.")
self.update_profile_data = data
self.update_creator_name = os.path.basename(filepath).replace('.json', '')
self.accept() # Close the dialog and signal success
except Exception as e:
QMessageBox.critical(self, "Error Loading Profile", f"Could not load or parse the selected profile file:\n\n{e}")
def _handle_fetch_posts_click (self ):
selected_creators =list (self .globally_selected_creators .values ())
print(f"[DEBUG] Selected creators for fetch: {selected_creators}")
@@ -370,6 +400,7 @@ class EmptyPopupDialog (QDialog ):
self .add_selected_button .setText (self ._tr ("creator_popup_add_selected_button","Add Selected"))
self .fetch_posts_button .setText (self ._tr ("fetch_posts_button_text","Fetch Posts"))
self ._update_scope_button_text_and_tooltip ()
self.update_button.setText(self._tr("check_for_updates_button", "Check for Updates"))
self .posts_search_input .setPlaceholderText (self ._tr ("creator_popup_posts_search_placeholder","Search fetched posts by title..."))
@@ -929,15 +960,19 @@ class EmptyPopupDialog (QDialog ):
self .parent_app .log_signal .emit (f" Added {num_just_added_posts } selected posts to the download queue. Total in queue: {total_in_queue }.")
# Keep the main window's link input in sync with the newly queued posts.
if self .parent_app .link_input :
self .parent_app .link_input .blockSignals (True )
self .parent_app .link_input .setText (
self .parent_app ._tr ("popup_posts_selected_text","Posts - {count} selected").format (count =num_just_added_posts )
)
self .parent_app .link_input .blockSignals (False )
self .parent_app .link_input .setPlaceholderText (
self .parent_app ._tr ("items_in_queue_placeholder","{count} items in queue from popup.").format (count =total_in_queue )
)
self.selected_creators_for_queue.clear()
self .accept ()
else :
QMessageBox .information (self ,self ._tr ("no_selection_title","No Selection"),
@@ -955,9 +990,6 @@ class EmptyPopupDialog (QDialog ):
self .add_selected_button .setEnabled (True )
self .setWindowTitle (self ._tr ("creator_popup_title","Creator Selection"))
def _get_domain_for_service (self ,service_name ):
"""Determines the base domain for a given service."""
service_lower =service_name .lower ()
@@ -1003,4 +1035,4 @@ class EmptyPopupDialog (QDialog ):
else :
if unique_key in self .globally_selected_creators :
del self .globally_selected_creators [unique_key ]
self .fetch_posts_button .setEnabled (bool (self .globally_selected_creators ))
@@ -37,13 +37,13 @@ class FavoriteArtistsDialog (QDialog ):
self ._init_ui ()
self ._fetch_favorite_artists ()
def _get_domain_for_service(self, service_name):
    service_lower = service_name.lower()
    coomer_primary_services = {'onlyfans', 'fansly', 'manyvids', 'candfans'}
    if service_lower in coomer_primary_services:
        return "coomer.st"  # Use the new domain
    else:
        return "kemono.cr"  # Use the new domain
def _tr (self ,key ,default_text =""):
"""Helper to get translation based on current app language."""
@@ -128,9 +128,29 @@ class FavoriteArtistsDialog (QDialog ):
def _fetch_favorite_artists (self ):
if self.cookies_config['use_cookie']:
# Check if we can load cookies for at least one of the services.
# --- Kemono Check with Fallback ---
kemono_cookies = prepare_cookies_for_request(
    True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
    self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.cr"
)
if not kemono_cookies:
    self._logger("No cookies for kemono.cr, trying fallback kemono.su...")
    kemono_cookies = prepare_cookies_for_request(
        True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
        self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.su"
    )
# --- Coomer Check with Fallback ---
coomer_cookies = prepare_cookies_for_request(
    True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
    self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.st"
)
if not coomer_cookies:
    self._logger("No cookies for coomer.st, trying fallback coomer.su...")
    coomer_cookies = prepare_cookies_for_request(
        True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
        self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.su"
    )
if not kemono_cookies and not coomer_cookies:
# If cookies are enabled but none could be loaded, show help and stop.
@@ -139,7 +159,7 @@ class FavoriteArtistsDialog (QDialog ):
cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
cookie_help_dialog.exec_()
self.download_button.setEnabled(False)
return  # Stop further execution
kemono_cr_fav_url = "https://kemono.cr/api/v1/account/favorites?type=artist"
coomer_st_fav_url = "https://coomer.st/api/v1/account/favorites?type=artist"
errors_occurred = []
any_cookies_loaded_successfully_for_any_source = False
api_sources = [
    {"name": "Kemono.cr", "url": kemono_cr_fav_url, "domain": "kemono.cr"},
    {"name": "Coomer.st", "url": coomer_st_fav_url, "domain": "coomer.st"}
]
for source in api_sources :
@@ -159,20 +182,41 @@ class FavoriteArtistsDialog (QDialog ):
self .status_label .setText (self ._tr ("fav_artists_loading_from_source_status","⏳ Loading favorites from {source_name}...").format (source_name =source ['name']))
QCoreApplication .processEvents ()
cookies_dict_for_source = None
if self.cookies_config['use_cookie']:
    primary_domain = source['domain']
    fallback_domain = None
    if primary_domain == "kemono.cr":
        fallback_domain = "kemono.su"
    elif primary_domain == "coomer.st":
        fallback_domain = "coomer.su"
    # First, try the primary domain
    cookies_dict_for_source = prepare_cookies_for_request(
        True,
        self.cookies_config['cookie_text'],
        self.cookies_config['selected_cookie_file'],
        self.cookies_config['app_base_dir'],
        self._logger,
        target_domain=primary_domain
    )
    # If no cookies found, try the fallback domain
    if not cookies_dict_for_source and fallback_domain:
        self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
        cookies_dict_for_source = prepare_cookies_for_request(
            True,
            self.cookies_config['cookie_text'],
            self.cookies_config['selected_cookie_file'],
            self.cookies_config['app_base_dir'],
            self._logger,
            target_domain=fallback_domain
        )
    if cookies_dict_for_source:
        any_cookies_loaded_successfully_for_any_source = True
    else:
        self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this source (including fallbacks). Fetch might fail.")
try :
headers ={'User-Agent':'Mozilla/5.0'}
response =requests .get (source ['url'],headers =headers ,cookies =cookies_dict_for_source ,timeout =20 )
@@ -223,7 +267,7 @@ class FavoriteArtistsDialog (QDialog ):
if self .cookies_config ['use_cookie']and not any_cookies_loaded_successfully_for_any_source :
self .status_label .setText (self ._tr ("fav_artists_cookies_required_status","Error: Cookies enabled but could not be loaded for any source."))
self ._logger ("Error: Cookies enabled but no cookies loaded for any source. Showing help dialog.")
cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
cookie_help_dialog .exec_ ()
self .download_button .setEnabled (False )
if not fetched_any_successfully :
@@ -34,28 +34,30 @@ class FavoritePostsFetcherThread (QThread ):
self .target_domain_preference =target_domain_preference
self .cancellation_event =threading .Event ()
self .error_key_map ={
"Kemono.su":"kemono_su",
"Coomer.su":"coomer_su"
"kemono.cr":"kemono_su",
"coomer.st":"coomer_su"
}
def _logger (self ,message ):
self .parent_logger_func (f"[FavPostsFetcherThread] {message }")
def run(self):
    kemono_su_fav_posts_url = "https://kemono.su/api/v1/account/favorites?type=post"
    coomer_su_fav_posts_url = "https://coomer.su/api/v1/account/favorites?type=post"
    kemono_cr_fav_posts_url = "https://kemono.cr/api/v1/account/favorites?type=post"
    coomer_st_fav_posts_url = "https://coomer.st/api/v1/account/favorites?type=post"

    all_fetched_posts_temp = []
    error_messages_for_summary = []
    fetched_any_successfully = False
    any_cookies_loaded_successfully_for_any_source = False

    self.status_update.emit("key_fetching_fav_post_list_init")
    self.progress_bar_update.emit(0, 0)

    api_sources = [
        {"name": "Kemono.cr", "url": kemono_cr_fav_posts_url, "domain": "kemono.cr"},
        {"name": "Coomer.st", "url": coomer_st_fav_posts_url, "domain": "coomer.st"}
    ]
api_sources_to_try =[]
@@ -76,20 +78,41 @@ class FavoritePostsFetcherThread (QThread ):
if self .cancellation_event .is_set ():
self .finished .emit ([],"KEY_FETCH_CANCELLED_DURING")
return
cookies_dict_for_source = None
if self.cookies_config['use_cookie']:
    primary_domain = source['domain']
    fallback_domain = None
    if primary_domain == "kemono.cr":
        fallback_domain = "kemono.su"
    elif primary_domain == "coomer.st":
        fallback_domain = "coomer.su"
    # First, try the primary domain
    cookies_dict_for_source = prepare_cookies_for_request(
        True,
        self.cookies_config['cookie_text'],
        self.cookies_config['selected_cookie_file'],
        self.cookies_config['app_base_dir'],
        self._logger,
        target_domain=primary_domain
    )
    # If no cookies found, try the fallback domain
    if not cookies_dict_for_source and fallback_domain:
        self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
        cookies_dict_for_source = prepare_cookies_for_request(
            True,
            self.cookies_config['cookie_text'],
            self.cookies_config['selected_cookie_file'],
            self.cookies_config['app_base_dir'],
            self._logger,
            target_domain=fallback_domain
        )
    if cookies_dict_for_source:
        any_cookies_loaded_successfully_for_any_source = True
    else:
        self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this domain. Fetch might fail if cookies are required.")
self ._logger (f"Attempting to fetch favorite posts from: {source ['name']} ({source ['url']})")
source_key_part =self .error_key_map .get (source ['name'],source ['name'].lower ().replace ('.','_'))
@@ -409,14 +432,14 @@ class FavoritePostsDialog (QDialog ):
if status_key .startswith ("KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_FOR_DOMAIN_")or status_key =="KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_GENERIC":
status_label_text_key ="fav_posts_cookies_required_error"
self ._logger (f"Cookie error: {status_key }. Showing help dialog.")
cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
cookie_help_dialog .exec_ ()
elif status_key =="KEY_AUTH_FAILED":
status_label_text_key ="fav_posts_auth_failed_title"
self ._logger (f"Auth error: {status_key }. Showing help dialog.")
QMessageBox .warning (self ,self ._tr ("fav_posts_auth_failed_title","Authorization Failed (Posts)"),
self ._tr ("fav_posts_auth_failed_message_generic","...").format (domain_specific_part =specific_domain_msg_part ))
cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
cookie_help_dialog .exec_ ()
elif status_key =="KEY_NO_FAVORITES_FOUND_ALL_PLATFORMS":
status_label_text_key ="fav_posts_no_posts_found_status"
@@ -6,7 +6,7 @@ import json
from PyQt5.QtCore import Qt, QStandardPaths
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
QGroupBox, QComboBox, QMessageBox, QGridLayout, QCheckBox
)
# --- Local Application Imports ---
@@ -15,7 +15,9 @@ from ...utils.resolution import get_dark_theme
from ..main_window import get_app_icon_object
from ...config.constants import (
THEME_KEY, LANGUAGE_KEY, DOWNLOAD_LOCATION_KEY,
RESOLUTION_KEY, UI_SCALE_KEY, SAVE_CREATOR_JSON_KEY,
COOKIE_TEXT_KEY, USE_COOKIE_KEY,
FETCH_FIRST_KEY
)
@@ -35,7 +37,7 @@ class FutureSettingsDialog(QDialog):
screen_height = QApplication.primaryScreen().availableGeometry().height() if QApplication.primaryScreen() else 800
scale_factor = screen_height / 800.0
base_min_w, base_min_h = 420, 390
scaled_min_w = int(base_min_w * scale_factor)
scaled_min_h = int(base_min_h * scale_factor)
self.setMinimumSize(scaled_min_w, scaled_min_h)
@@ -48,7 +50,6 @@ class FutureSettingsDialog(QDialog):
"""Initializes all UI components and layouts for the dialog."""
main_layout = QVBoxLayout(self)
# --- Group 1: Interface Settings ---
self.interface_group_box = QGroupBox()
interface_layout = QGridLayout(self.interface_group_box)
@@ -75,33 +76,60 @@ class FutureSettingsDialog(QDialog):
main_layout.addWidget(self.interface_group_box)
# --- Group 2: Download & Window Settings ---
self.download_window_group_box = QGroupBox()
download_window_layout = QGridLayout(self.download_window_group_box)
# Window Size (Resolution)
self.window_size_label = QLabel()
self.resolution_combo_box = QComboBox()
self.resolution_combo_box.currentIndexChanged.connect(self._display_setting_changed)
download_window_layout.addWidget(self.window_size_label, 0, 0)
download_window_layout.addWidget(self.resolution_combo_box, 0, 1)
# Default Path
self.default_path_label = QLabel()
self.save_path_button = QPushButton()
self.save_path_button.clicked.connect(self._save_cookie_and_path)
download_window_layout.addWidget(self.default_path_label, 1, 0)
download_window_layout.addWidget(self.save_path_button, 1, 1)
self.save_creator_json_checkbox = QCheckBox()
self.save_creator_json_checkbox.stateChanged.connect(self._creator_json_setting_changed)
download_window_layout.addWidget(self.save_creator_json_checkbox, 2, 0, 1, 2)
self.fetch_first_checkbox = QCheckBox()
self.fetch_first_checkbox.stateChanged.connect(self._fetch_first_setting_changed)
download_window_layout.addWidget(self.fetch_first_checkbox, 3, 0, 1, 2)
main_layout.addWidget(self.download_window_group_box)
main_layout.addStretch(1)
# --- OK Button ---
self.ok_button = QPushButton()
self.ok_button.clicked.connect(self.accept)
main_layout.addWidget(self.ok_button, 0, Qt.AlignRight | Qt.AlignBottom)
def _load_checkbox_states(self):
"""Loads the initial state for all checkboxes from settings."""
self.save_creator_json_checkbox.blockSignals(True)
should_save = self.parent_app.settings.value(SAVE_CREATOR_JSON_KEY, True, type=bool)
self.save_creator_json_checkbox.setChecked(should_save)
self.save_creator_json_checkbox.blockSignals(False)
self.fetch_first_checkbox.blockSignals(True)
should_fetch_first = self.parent_app.settings.value(FETCH_FIRST_KEY, False, type=bool)
self.fetch_first_checkbox.setChecked(should_fetch_first)
self.fetch_first_checkbox.blockSignals(False)
def _creator_json_setting_changed(self, state):
"""Saves the state of the 'Save Creator.json' checkbox."""
is_checked = state == Qt.Checked
self.parent_app.settings.setValue(SAVE_CREATOR_JSON_KEY, is_checked)
self.parent_app.settings.sync()
def _fetch_first_setting_changed(self, state):
"""Saves the state of the 'Fetch First' checkbox."""
is_checked = state == Qt.Checked
self.parent_app.settings.setValue(FETCH_FIRST_KEY, is_checked)
self.parent_app.settings.sync()
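Both handlers persist immediately through QSettings; a sketch of the matching read at a hypothetical consumer site (the organization and application names here are assumptions):

```python
from PyQt5.QtCore import QSettings

settings = QSettings("Yuvi9587", "KemonoDownloader")  # org/app names assumed
fetch_first = settings.value(FETCH_FIRST_KEY, False, type=bool)
save_creator_json = settings.value(SAVE_CREATOR_JSON_KEY, True, type=bool)
```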
def _tr(self, key, default_text=""):
if callable(get_translation) and self.parent_app:
return get_translation(self.parent_app.current_selected_language, key, default_text)
@@ -110,28 +138,30 @@ class FutureSettingsDialog(QDialog):
def _retranslate_ui(self):
self.setWindowTitle(self._tr("settings_dialog_title", "Settings"))
# Group Box Titles
self.interface_group_box.setTitle(self._tr("interface_group_title", "Interface Settings"))
self.download_window_group_box.setTitle(self._tr("download_window_group_title", "Download & Window Settings"))
# Interface Group Labels
self.theme_label.setText(self._tr("theme_label", "Theme:"))
self.ui_scale_label.setText(self._tr("ui_scale_label", "UI Scale:"))
self.language_label.setText(self._tr("language_label", "Language:"))
# Download & Window Group Labels
self.window_size_label.setText(self._tr("window_size_label", "Window Size:"))
self.default_path_label.setText(self._tr("default_path_label", "Default Path:"))
self.save_creator_json_checkbox.setText(self._tr("save_creator_json_label", "Save Creator.json file"))
self.fetch_first_checkbox.setText(self._tr("fetch_first_label", "Fetch First (Download after all pages are found)"))
self.fetch_first_checkbox.setToolTip(self._tr("fetch_first_tooltip", "If checked, the downloader will find all posts from a creator first before starting any downloads.\nThis can be slower to start but provides a more accurate progress bar."))
# Buttons and Controls
self._update_theme_toggle_button_text()
self.save_path_button.setText(self._tr("settings_save_path_button", "Save Current Download Path"))
self.save_path_button.setToolTip(self._tr("settings_save_path_tooltip", "Save the current 'Download Location' for future sessions."))
self.save_path_button.setText(self._tr("settings_save_cookie_path_button", "Save Cookie + Download Path"))
self.save_path_button.setToolTip(self._tr("settings_save_cookie_path_tooltip", "Save the current 'Download Location' and Cookie settings for future sessions."))
self.ok_button.setText(self._tr("ok_button", "OK"))
# Populate dropdowns
self._populate_display_combo_boxes()
self._populate_language_combo_box()
self._load_checkbox_states()
# --- (The rest of the file remains unchanged) ---
def _apply_theme(self):
if self.parent_app and self.parent_app.current_theme == "dark":
@@ -254,22 +284,41 @@ class FutureSettingsDialog(QDialog):
if msg_box.clickedButton() == restart_button:
self.parent_app._request_restart_application()
def _save_cookie_and_path(self):
    """Saves the current download path and/or cookie settings from the main window."""
    path_saved = False
    cookie_saved = False
    if hasattr(self.parent_app, 'dir_input') and self.parent_app.dir_input:
        current_path = self.parent_app.dir_input.text().strip()
        if current_path and os.path.isdir(current_path):
            self.parent_app.settings.setValue(DOWNLOAD_LOCATION_KEY, current_path)
            path_saved = True
    if hasattr(self.parent_app, 'use_cookie_checkbox'):
        use_cookie = self.parent_app.use_cookie_checkbox.isChecked()
        cookie_content = self.parent_app.cookie_text_input.text().strip()
        if use_cookie and cookie_content:
            self.parent_app.settings.setValue(USE_COOKIE_KEY, True)
            self.parent_app.settings.setValue(COOKIE_TEXT_KEY, cookie_content)
            cookie_saved = True
        else:
            self.parent_app.settings.setValue(USE_COOKIE_KEY, False)
            self.parent_app.settings.setValue(COOKIE_TEXT_KEY, "")
    self.parent_app.settings.sync()
    # --- User Feedback ---
    if path_saved and cookie_saved:
        message = self._tr("settings_save_both_success", "Download location and cookie settings saved.")
    elif path_saved:
        message = self._tr("settings_save_path_only_success", "Download location saved. No cookie settings were active to save.")
    elif cookie_saved:
        message = self._tr("settings_save_cookie_only_success", "Cookie settings saved. Download location was not set.")
    else:
        QMessageBox.warning(self, self._tr("settings_save_nothing_title", "Nothing to Save"),
                            self._tr("settings_save_nothing_message", "The download location is not a valid directory and no cookie was active."))
        return
    QMessageBox.information(self, self._tr("settings_save_success_title", "Settings Saved"), message)
@@ -4,7 +4,7 @@ from PyQt5.QtCore import QUrl, QSize, Qt
from PyQt5.QtGui import QIcon, QDesktopServices
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
QStackedWidget, QListWidget, QFrame, QWidget, QScrollArea
)
from ...i18n.translator import get_translation
from ..main_window import get_app_icon_object
@@ -46,13 +46,12 @@ class TourStepWidget(QWidget):
layout.addWidget(scroll_area, 1)
class HelpGuideDialog(QDialog):
    """A multi-page dialog for displaying the feature guide with a navigation list."""
    def __init__(self, steps_data, parent_app, parent=None):
        super().__init__(parent)
        self.steps_data = steps_data
        self.parent_app = parent_app
        scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
@@ -61,7 +60,7 @@ class HelpGuideDialog (QDialog ):
self.setWindowIcon(app_icon)
self.setModal(True)
self.resize(int(800 * scale), int(650 * scale))
dialog_font_size = int(11 * scale)
@@ -69,6 +68,7 @@ class HelpGuideDialog (QDialog ):
if hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
current_theme_style = get_dark_theme(scale)
else:
# Basic light theme fallback
current_theme_style = f"""
QDialog {{ background-color: #F0F0F0; border: 1px solid #B0B0B0; }}
QLabel {{ color: #1E1E1E; }}
@@ -86,118 +86,107 @@ class HelpGuideDialog (QDialog ):
"""
self.setStyleSheet(current_theme_style)
        self._init_ui()
        if self.parent_app:
            self.move(self.parent_app.geometry().center() - self.rect().center())

    def _tr(self, key, default_text=""):
        """Helper to get translation based on current app language."""
        if callable(get_translation) and self.parent_app:
            return get_translation(self.parent_app.current_selected_language, key, default_text)
        return default_text

    def _init_ui(self):
        main_layout = QVBoxLayout(self)
        main_layout.setContentsMargins(15, 15, 15, 15)
        main_layout.setSpacing(10)
        # Title
        title_label = QLabel(self._tr("help_guide_dialog_title", "Kemono Downloader - Feature Guide"))
        scale = getattr(self.parent_app, 'scale_factor', 1.0)
        title_font_size = int(16 * scale)
        title_label.setStyleSheet(f"font-size: {title_font_size}pt; font-weight: bold; color: #E0E0E0;")
        title_label.setAlignment(Qt.AlignCenter)
        main_layout.addWidget(title_label)
        # Content Layout (Navigation + Stacked Pages)
        content_layout = QHBoxLayout()
        main_layout.addLayout(content_layout, 1)
        self.nav_list = QListWidget()
        self.nav_list.setFixedWidth(int(220 * scale))
        self.nav_list.setStyleSheet(f"""
            QListWidget {{
                background-color: #2E2E2E;
                border: 1px solid #4A4A4A;
                border-radius: 4px;
                font-size: {int(11 * scale)}pt;
            }}
            QListWidget::item {{
                padding: 10px;
                border-bottom: 1px solid #4A4A4A;
            }}
            QListWidget::item:selected {{
                background-color: #87CEEB;
                color: #2E2E2E;
                font-weight: bold;
            }}
        """)
        content_layout.addWidget(self.nav_list)
        self.stacked_widget = QStackedWidget()
        content_layout.addWidget(self.stacked_widget)
        for title_key, content_key in self.steps_data:
            title = self._tr(title_key, title_key)
            content = self._tr(content_key, f"Content for {content_key} not found.")
            self.nav_list.addItem(title)
            step_widget = TourStepWidget(title, content, scale=scale)
            self.stacked_widget.addWidget(step_widget)
        self.setWindowTitle(self._tr("help_guide_dialog_title", "Kemono Downloader - Feature Guide"))
        self.nav_list.currentRowChanged.connect(self.stacked_widget.setCurrentIndex)
        if self.nav_list.count() > 0:
            self.nav_list.setCurrentRow(0)
        # Footer Layout (Social links and Close button)
        footer_layout = QHBoxLayout()
        footer_layout.setContentsMargins(0, 10, 0, 0)
        # Social Media Icons
        if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
            assets_base_dir = sys._MEIPASS
        else:
            assets_base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
        github_icon_path = os.path.join(assets_base_dir, "assets", "github.png")
        instagram_icon_path = os.path.join(assets_base_dir, "assets", "instagram.png")
        discord_icon_path = os.path.join(assets_base_dir, "assets", "discord.png")
        self.github_button = QPushButton(QIcon(github_icon_path), "")
        self.instagram_button = QPushButton(QIcon(instagram_icon_path), "")
        self.discord_button = QPushButton(QIcon(discord_icon_path), "")
        icon_dim = int(24 * scale)
        icon_size = QSize(icon_dim, icon_dim)
        for button, tooltip_key, url in [
            (self.github_button, "help_guide_github_tooltip", "https://github.com/Yuvi9587"),
            (self.instagram_button, "help_guide_instagram_tooltip", "https://www.instagram.com/uvi.arts/"),
            (self.discord_button, "help_guide_discord_tooltip", "https://discord.gg/BqP64XTdJN")
        ]:
            button.setIconSize(icon_size)
            button.setToolTip(self._tr(tooltip_key))
            button.setFixedSize(icon_size.width() + 8, icon_size.height() + 8)
            button.setStyleSheet("background-color: transparent; border: none;")
            button.clicked.connect(lambda _, u=url: QDesktopServices.openUrl(QUrl(u)))
            footer_layout.addWidget(button)
        footer_layout.addStretch(1)
        self.finish_button = QPushButton(self._tr("tour_dialog_finish_button", "Finish"))
        self.finish_button.clicked.connect(self.accept)
        footer_layout.addWidget(self.finish_button)
        main_layout.addLayout(footer_layout)
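The page switching above is the stock QListWidget-drives-QStackedWidget pairing; a self-contained sketch of just that pattern, with placeholder page titles:

```python
import sys
from PyQt5.QtWidgets import (QApplication, QHBoxLayout, QLabel, QListWidget,
                             QStackedWidget, QWidget)

app = QApplication(sys.argv)
window = QWidget()
layout = QHBoxLayout(window)
nav = QListWidget()
pages = QStackedWidget()
for name in ("Intro", "Downloads", "Settings"):  # placeholder page titles
    nav.addItem(name)
    pages.addWidget(QLabel(f"{name} page"))
nav.currentRowChanged.connect(pages.setCurrentIndex)  # one signal drives the pages
nav.setCurrentRow(0)
layout.addWidget(nav)
layout.addWidget(pages)
window.show()
sys.exit(app.exec_())
```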
@@ -24,7 +24,7 @@ class MoreOptionsDialog(QDialog):
layout.addWidget(self.description_label)
self.radio_button_group = QButtonGroup(self)
self.radio_content = QRadioButton("Description/Content")
self.radio_comments = QRadioButton("Comments (Not Working)")
self.radio_comments = QRadioButton("Comments")
self.radio_button_group.addButton(self.radio_content)
self.radio_button_group.addButton(self.radio_comments)
layout.addWidget(self.radio_content)
@@ -0,0 +1,118 @@
# multipart_scope_dialog.py
from PyQt5.QtWidgets import (
QDialog, QVBoxLayout, QGroupBox, QRadioButton, QDialogButtonBox, QButtonGroup,
QLabel, QLineEdit, QHBoxLayout, QFrame
)
from PyQt5.QtGui import QIntValidator
from PyQt5.QtCore import Qt
# It's good practice to get this constant from the source
# but for this example, we will define it here.
MAX_PARTS = 16
class MultipartScopeDialog(QDialog):
"""
A dialog to let the user select the scope, number of parts, and minimum size for multipart downloads.
"""
SCOPE_VIDEOS = 'videos'
SCOPE_ARCHIVES = 'archives'
SCOPE_BOTH = 'both'
def __init__(self, current_scope='both', current_parts=4, current_min_size_mb=100, parent=None):
super().__init__(parent)
self.setWindowTitle("Multipart Download Options")
self.setWindowFlags(self.windowFlags() & ~Qt.WindowContextHelpButtonHint)
self.setMinimumWidth(350)
# Main Layout
layout = QVBoxLayout(self)
# --- Options Group for Scope ---
self.options_group_box = QGroupBox("Apply multipart downloads to:")
options_layout = QVBoxLayout()
# ... (Radio buttons and button group code remains unchanged) ...
self.radio_videos = QRadioButton("Videos Only")
self.radio_archives = QRadioButton("Archives Only (.zip, .rar, etc.)")
self.radio_both = QRadioButton("Both Videos and Archives")
if current_scope == self.SCOPE_VIDEOS:
self.radio_videos.setChecked(True)
elif current_scope == self.SCOPE_ARCHIVES:
self.radio_archives.setChecked(True)
else:
self.radio_both.setChecked(True)
self.button_group = QButtonGroup(self)
self.button_group.addButton(self.radio_videos)
self.button_group.addButton(self.radio_archives)
self.button_group.addButton(self.radio_both)
options_layout.addWidget(self.radio_videos)
options_layout.addWidget(self.radio_archives)
options_layout.addWidget(self.radio_both)
self.options_group_box.setLayout(options_layout)
layout.addWidget(self.options_group_box)
# --- START: MODIFIED Download Settings Group ---
self.settings_group_box = QGroupBox("Download settings:")
settings_layout = QVBoxLayout()
# Layout for Parts count
parts_layout = QHBoxLayout()
self.parts_label = QLabel("Number of download parts per file:")
self.parts_input = QLineEdit(str(current_parts))
self.parts_input.setValidator(QIntValidator(2, MAX_PARTS, self))
self.parts_input.setFixedWidth(40)
self.parts_input.setToolTip(f"Set the number of concurrent connections per file (2-{MAX_PARTS}).")
parts_layout.addWidget(self.parts_label)
parts_layout.addStretch()
parts_layout.addWidget(self.parts_input)
settings_layout.addLayout(parts_layout)
# Layout for Minimum Size
size_layout = QHBoxLayout()
self.size_label = QLabel("Minimum file size for multipart (MB):")
self.size_input = QLineEdit(str(current_min_size_mb))
self.size_input.setValidator(QIntValidator(10, 10000, self)) # Min 10MB, Max ~10GB
self.size_input.setFixedWidth(40)
self.size_input.setToolTip("Files smaller than this will use a normal, single-part download.")
size_layout.addWidget(self.size_label)
size_layout.addStretch()
size_layout.addWidget(self.size_input)
settings_layout.addLayout(size_layout)
self.settings_group_box.setLayout(settings_layout)
layout.addWidget(self.settings_group_box)
# --- END: MODIFIED Download Settings Group ---
# OK and Cancel Buttons
self.button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel)
self.button_box.accepted.connect(self.accept)
self.button_box.rejected.connect(self.reject)
layout.addWidget(self.button_box)
self.setLayout(layout)
def get_selected_scope(self):
# ... (This method remains unchanged) ...
if self.radio_videos.isChecked():
return self.SCOPE_VIDEOS
if self.radio_archives.isChecked():
return self.SCOPE_ARCHIVES
return self.SCOPE_BOTH
def get_selected_parts(self):
# ... (This method remains unchanged) ...
try:
parts = int(self.parts_input.text())
return max(2, min(parts, MAX_PARTS))
except (ValueError, TypeError):
return 4
def get_selected_min_size(self):
"""Returns the selected minimum size in MB as an integer."""
try:
size = int(self.size_input.text())
return max(10, min(size, 10000)) # Enforce valid range
except (ValueError, TypeError):
return 100 # Return a safe default
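A minimal usage sketch; the parent wiring and the chosen values are hypothetical:

```python
dialog = MultipartScopeDialog(current_scope='videos', current_parts=8, current_min_size_mb=250)
if dialog.exec_() == QDialog.Accepted:
    scope = dialog.get_selected_scope()        # 'videos', 'archives', or 'both'
    parts = dialog.get_selected_parts()        # clamped to 2..MAX_PARTS
    min_size = dialog.get_selected_min_size()  # clamped to 10..10000 (MB)
```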
@@ -3,8 +3,27 @@ import re
try:
from fpdf import FPDF
FPDF_AVAILABLE = True
# --- FIX: Move the class definition inside the try block ---
class PDF(FPDF):
"""Custom PDF class to handle headers and footers."""
def header(self):
pass
def footer(self):
self.set_y(-15)
if self.font_family:
self.set_font(self.font_family, '', 8)
else:
self.set_font('Arial', '', 8)
self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'C')
except ImportError:
FPDF_AVAILABLE = False
# If the import fails, FPDF and PDF will not be defined,
# but the program won't crash here.
FPDF = None
PDF = None
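With the class moved inside the try block, `PDF` is None whenever fpdf2 is absent, so callers must gate on the flag. A minimal guard sketch; the wrapper function itself is hypothetical:

```python
def export_posts_pdf(posts_data, output_filename, font_path, logger=print):
    """Hypothetical call site illustrating the FPDF_AVAILABLE guard."""
    if not FPDF_AVAILABLE:
        logger("❌ 'fpdf2' is not installed; skipping PDF export.")
        return False
    return create_single_pdf_from_content(posts_data, output_filename, font_path, logger)
```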
def strip_html_tags(text):
if not text:
@@ -12,19 +31,6 @@ def strip_html_tags(text):
clean = re.compile('<.*?>')
return re.sub(clean, '', text)
def create_single_pdf_from_content(posts_data, output_filename, font_path, logger=print):
"""
Creates a single, continuous PDF, correctly formatting both descriptions and comments.
@@ -68,7 +74,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
pdf.ln(10)
pdf.set_font(default_font_family, 'B', 16)
pdf.multi_cell(w=0, h=10, txt=post.get('title', 'Untitled Post'), align='L')
pdf.ln(5)
if 'comments' in post and post['comments']:
@@ -89,7 +95,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
pdf.ln(10)
pdf.set_font(default_font_family, '', 11)
pdf.multi_cell(0, 7, body)
pdf.multi_cell(w=0, h=7, txt=body)
if comment_index < len(comments_list) - 1:
pdf.ln(3)
@@ -97,7 +103,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
pdf.ln(3)
elif 'content' in post:
pdf.set_font(default_font_family, '', 12)
pdf.multi_cell(w=0, h=7, text=post.get('content', 'No Content'))
pdf.multi_cell(w=0, h=7, txt=post.get('content', 'No Content'))
try:
pdf.output(output_filename)
@@ -105,4 +111,4 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
return True
except Exception as e:
logger(f"❌ A critical error occurred while saving the final PDF: {e}")
return False
return False
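A hedged usage sketch for create_single_pdf_from_content; the post dictionaries below only use the keys visible in this diff ('title', 'content', 'comments'), and the font path is an assumption:

posts = [
    {'title': 'First post', 'content': 'Plain-text body of the first post.'},
    {'title': 'Second post', 'content': 'Another body.', 'comments': []},
]
ok = create_single_pdf_from_content(
    posts_data=posts,
    output_filename='posts.pdf',
    font_path='DejaVuSans.ttf',  # assumed path; font loading is outside this hunk
    logger=print,
)
print('PDF written' if ok else 'PDF creation failed')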

View File

@@ -0,0 +1,146 @@
import os
import re
import datetime
try:
from fpdf import FPDF
FPDF_AVAILABLE = True
class PDF(FPDF):
"""Custom PDF class for Discord chat logs."""
def __init__(self, server_name, channel_name, *args, **kwargs):
super().__init__(*args, **kwargs)
self.server_name = server_name
self.channel_name = channel_name
self.default_font_family = 'DejaVu' # Can be changed to Arial if font fails
def header(self):
if self.page_no() == 1:
return # No header on the title page
self.set_font(self.default_font_family, '', 8)
self.cell(0, 10, f'{self.server_name} - #{self.channel_name}', 0, 0, 'L')
self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'R')
self.ln(10)
def footer(self):
pass # No footer needed, header has page number
except ImportError:
FPDF_AVAILABLE = False
FPDF = None
PDF = None
def create_pdf_from_discord_messages(messages_data, server_name, channel_name, output_filename, font_path, logger=print):
"""
Creates a single PDF from a list of Discord message objects, formatted as a chat log.
UPDATED to include clickable links for attachments and embeds.
"""
if not FPDF_AVAILABLE:
logger("❌ PDF Creation failed: 'fpdf2' library is not installed.")
return False
if not messages_data:
logger(" No messages were found or fetched to create a PDF.")
return False
logger(" Sorting messages by date (oldest first)...")
messages_data.sort(key=lambda m: m.get('published', ''))
pdf = PDF(server_name, channel_name)
default_font_family = 'DejaVu'
try:
bold_font_path = font_path.replace("DejaVuSans.ttf", "DejaVuSans-Bold.ttf")
if not os.path.exists(font_path) or not os.path.exists(bold_font_path):
raise RuntimeError("Font files not found")
pdf.add_font('DejaVu', '', font_path, uni=True)
pdf.add_font('DejaVu', 'B', bold_font_path, uni=True)
except Exception as font_error:
logger(f" ⚠️ Could not load DejaVu font: {font_error}. Falling back to Arial.")
default_font_family = 'Arial'
pdf.default_font_family = 'Arial'
# --- Title Page ---
pdf.add_page()
pdf.set_font(default_font_family, 'B', 24)
pdf.cell(w=0, h=20, text="Discord Chat Log", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.ln(10)
pdf.set_font(default_font_family, '', 16)
pdf.cell(w=0, h=10, text=f"Server: {server_name}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.cell(w=0, h=10, text=f"Channel: #{channel_name}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.ln(5)
pdf.set_font(default_font_family, '', 10)
pdf.cell(w=0, h=10, text=f"Generated on: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.cell(w=0, h=10, text=f"Total Messages: {len(messages_data)}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.add_page()
logger(f" Starting PDF creation with {len(messages_data)} messages...")
for i, message in enumerate(messages_data):
author = message.get('author', {}).get('global_name') or message.get('author', {}).get('username', 'Unknown User')
timestamp_str = message.get('published', '')
content = message.get('content', '')
attachments = message.get('attachments', [])
embeds = message.get('embeds', [])
try:
# Handle timezone information correctly
if timestamp_str.endswith('Z'):
timestamp_str = timestamp_str[:-1] + '+00:00'
dt_obj = datetime.datetime.fromisoformat(timestamp_str)
formatted_timestamp = dt_obj.strftime('%Y-%m-%d %H:%M:%S')
except (ValueError, TypeError):
formatted_timestamp = timestamp_str
# Draw a separator line
if i > 0:
pdf.ln(2)
pdf.set_draw_color(200, 200, 200) # Light grey line
pdf.cell(0, 0, '', border='T')
pdf.ln(2)
# Message Header
pdf.set_font(default_font_family, 'B', 11)
pdf.write(5, f"{author} ")
pdf.set_font(default_font_family, '', 9)
pdf.set_text_color(128, 128, 128)
pdf.write(5, f"({formatted_timestamp})")
pdf.set_text_color(0, 0, 0)
pdf.ln(6)
# Message Content
if content:
pdf.set_font(default_font_family, '', 10)
pdf.multi_cell(w=0, h=5, text=content)
# --- START: MODIFIED ATTACHMENT AND EMBED LOGIC ---
if attachments or embeds:
pdf.ln(1)
pdf.set_font(default_font_family, '', 9)
pdf.set_text_color(22, 119, 219) # A nice blue for links
for att in attachments:
file_name = att.get('name', 'untitled')
file_path = att.get('path', '')
# Construct the full, clickable URL for the attachment
full_url = f"https://kemono.cr/data{file_path}"
pdf.write(5, text=f"[Attachment: {file_name}]", link=full_url)
pdf.ln() # New line after each attachment
for embed in embeds:
embed_url = embed.get('url', 'no url')
# The embed URL is already a full URL
pdf.write(5, text=f"[Embed: {embed_url}]", link=embed_url)
pdf.ln() # New line after each embed
pdf.set_text_color(0, 0, 0) # Reset color to black
# --- END: MODIFIED ATTACHMENT AND EMBED LOGIC ---
try:
pdf.output(output_filename)
logger(f"✅ Successfully created Discord chat log PDF: '{os.path.basename(output_filename)}'")
return True
except Exception as e:
logger(f"❌ A critical error occurred while saving the final PDF: {e}")
return False
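A minimal sketch of calling this function; the message dict mirrors the keys the loop above reads ('author', 'published', 'content', 'attachments', 'embeds'), with placeholder values:

messages = [
    {
        'author': {'username': 'alice', 'global_name': 'Alice'},
        'published': '2025-07-30T21:31:02Z',  # ISO timestamp; trailing 'Z' is handled above
        'content': 'Hello channel!',
        'attachments': [{'name': 'pic.png', 'path': '/aa/bb/pic.png'}],
        'embeds': [{'url': 'https://example.com/article'}],
    },
]
create_pdf_from_discord_messages(
    messages_data=messages,
    server_name='My Server',
    channel_name='general',
    output_filename='chatlog.pdf',
    font_path='DejaVuSans.ttf',  # DejaVuSans-Bold.ttf is expected alongside it
)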

File diff suppressed because it is too large

View File

@@ -141,12 +141,15 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
def extract_post_info(url_string):
"""
Parses a URL string to extract the service, user ID, and post ID.
UPDATED to support Discord server/channel URLs.
Args:
url_string (str): The URL to parse.
Returns:
tuple: A tuple containing (service, user_id, post_id). Any can be None.
tuple: A tuple containing (service, id1, id2).
For posts: (service, user_id, post_id).
For Discord: ('discord', server_id, channel_id).
"""
if not isinstance(url_string, str) or not url_string.strip():
return None, None, None
@@ -155,7 +158,15 @@ def extract_post_info(url_string):
parsed_url = urlparse(url_string.strip())
path_parts = [part for part in parsed_url.path.strip('/').split('/') if part]
# Standard format: /<service>/user/<user_id>/post/<post_id>
# Check for new Discord URL format first
# e.g., /discord/server/891670433978531850/1252332668805189723
if len(path_parts) >= 3 and path_parts[0].lower() == 'discord' and path_parts[1].lower() == 'server':
service = 'discord'
server_id = path_parts[2]
channel_id = path_parts[3] if len(path_parts) >= 4 else None
return service, server_id, channel_id
# Standard creator/post format: /<service>/user/<user_id>/post/<post_id>
if len(path_parts) >= 3 and path_parts[1].lower() == 'user':
service = path_parts[0]
user_id = path_parts[2]
@@ -174,7 +185,6 @@ def extract_post_info(url_string):
return None, None, None
def get_link_platform(url):
"""
Identifies the platform of a given URL based on its domain.
@@ -196,10 +206,9 @@ def get_link_platform(url):
if 'twitter.com' in domain or 'x.com' in domain: return 'twitter/x'
if 'discord.gg' in domain or 'discord.com/invite' in domain: return 'discord invite'
if 'pixiv.net' in domain: return 'pixiv'
if 'kemono.su' in domain or 'kemono.party' in domain: return 'kemono'
if 'coomer.su' in domain or 'coomer.party' in domain: return 'coomer'
if 'kemono.su' in domain or 'kemono.party' in domain or 'kemono.cr' in domain: return 'kemono'
if 'coomer.su' in domain or 'coomer.party' in domain or 'coomer.st' in domain: return 'coomer'
# Fallback to a generic name for other domains
parts = domain.split('.')
if len(parts) >= 2:
return parts[-2]
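A sketch of the updated parsers' expected behaviour; the Discord example IDs come from the comment in the diff above, and the return values follow the code paths shown:

# Creator/post URL -> (service, user_id, post_id)
print(extract_post_info('https://kemono.cr/patreon/user/12345/post/67890'))
# ('patreon', '12345', '67890')

# Discord URL -> ('discord', server_id, channel_id)
print(extract_post_info(
    'https://kemono.cr/discord/server/891670433978531850/1252332668805189723'))
# ('discord', '891670433978531850', '1252332668805189723')

print(get_link_platform('https://kemono.cr/patreon/user/12345'))  # 'kemono'
print(get_link_platform('https://coomer.st/onlyfans/user/name'))  # 'coomer'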

View File

@@ -239,16 +239,23 @@ def setup_ui(main_app):
checkboxes_group_layout.addWidget(advanced_settings_label)
advanced_row1_layout = QHBoxLayout()
advanced_row1_layout.setSpacing(10)
main_app.use_subfolders_checkbox = QCheckBox("Separate Folders by Known.txt")
main_app.use_subfolders_checkbox.setChecked(True)
main_app.use_subfolders_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
advanced_row1_layout.addWidget(main_app.use_subfolders_checkbox)
# --- REORDERED CHECKBOXES ---
main_app.use_subfolder_per_post_checkbox = QCheckBox("Subfolder per Post")
main_app.use_subfolder_per_post_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
main_app.use_subfolder_per_post_checkbox.setChecked(True)
advanced_row1_layout.addWidget(main_app.use_subfolder_per_post_checkbox)
main_app.date_prefix_checkbox = QCheckBox("Date Prefix")
main_app.date_prefix_checkbox.setToolTip("When 'Subfolder per Post' is active, prefix the folder name with the post's upload date.")
advanced_row1_layout.addWidget(main_app.date_prefix_checkbox)
main_app.use_subfolders_checkbox = QCheckBox("Separate Folders by Known.txt")
main_app.use_subfolders_checkbox.setChecked(False)
main_app.use_subfolders_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
advanced_row1_layout.addWidget(main_app.use_subfolders_checkbox)
# --- END REORDER ---
main_app.use_cookie_checkbox = QCheckBox("Use Cookie")
main_app.use_cookie_checkbox.setChecked(main_app.use_cookie_setting)
main_app.cookie_text_input = QLineEdit()
@@ -380,10 +387,14 @@ def setup_ui(main_app):
main_app.link_search_input.setPlaceholderText("Search Links...")
main_app.link_search_input.setVisible(False)
log_title_layout.addWidget(main_app.link_search_input)
main_app.link_search_button = QPushButton("🔍")
main_app.link_search_button.setVisible(False)
main_app.link_search_button.setFixedWidth(int(30 * scale))
log_title_layout.addWidget(main_app.link_search_button)
main_app.discord_scope_toggle_button = QPushButton("Scope: Files")
main_app.discord_scope_toggle_button.setVisible(False) # Hidden by default
main_app.discord_scope_toggle_button.setFixedWidth(int(140 * scale))
log_title_layout.addWidget(main_app.discord_scope_toggle_button)
main_app.manga_rename_toggle_button = QPushButton()
main_app.manga_rename_toggle_button.setVisible(False)
main_app.manga_rename_toggle_button.setFixedWidth(int(140 * scale))