Mirror of https://github.com/Yuvi9587/Kemono-Downloader.git, synced 2025-12-29 16:14:44 +00:00
Compare commits: v6.2.0...e5b519d5ce (14 commits)
| SHA1 |
|---|
| e5b519d5ce |
| 9888ed0862 |
| 9e996bf682 |
| e7a6a91542 |
| d7faccce18 |
| a78c01c4f6 |
| 6de9967e0b |
| e3dd0e70b6 |
| 9db89cfad0 |
| 0a6034a632 |
| 2da69e7017 |
| 3209770d00 |
| 337cdd342c |
| d54b013bbc |
LICENSE (24 changed lines)
@@ -1,11 +1,21 @@
-Custom License - No Commercial Use
+MIT License
 
-Copyright [Yuvi9587] [2025]
+Copyright (c) [2025] [Yuvi9587]
 
-Permission is hereby granted to any person obtaining a copy of this software and associated documentation files (the "Software"), to use, copy, modify, and distribute the Software for **non-commercial purposes only**, subject to the following conditions:
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
 
-1. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
-2. Proper credit must be given to the original author in any public use, distribution, or derivative works.
-3. Commercial use, resale, or sublicensing of the Software or any derivative works is strictly prohibited without explicit written permission.
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
 
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND...
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
features.md (339 changed lines)
@@ -1,192 +1,147 @@
-# Kemono Downloader - Feature Guide
-This guide provides a comprehensive overview of all user interface elements, input fields, buttons, popups, and functionalities available in the Kemono Downloader.
-## 1. Main Interface & Workflow
-These are the primary controls you'll interact with to initiate and manage downloads.
-### 1.1. Core Inputs
-**🔗 Creator/Post URL Input Field**
-- **Purpose**: Paste the URL of the content you want to download.
-- **Supported Sites**: Kemono.su, Coomer.party, Simpcity.su.
-- **Supported URL Types**:
-  - Creator pages (e.g., `https://kemono.su/patreon/user/12345`).
-  - Individual posts (e.g., `https://kemono.su/patreon/user/12345/post/98765`).
-- **Note**: When ⭐ Favorite Mode is active, this field is disabled. For Simpcity.su URLs, the "Use Cookie" option is mandatory and auto-enabled.
-**🎨 Creator Selection Button**
-- **Icon**: 🎨 (Artist Palette)
-- **Purpose**: Opens the "Creator Selection" dialog to browse and queue downloads from known creators.
-- **Dialog Features**:
-  - Loads creators from `creators.json`.
-  - **Search Bar**: Filter creators by name.
-  - **Creator List**: Displays creators with their service (e.g., Patreon, Fanbox).
-  - **Selection**: Checkboxes to select one or more creators.
-  - **Download Scope**: Organize downloads by Characters or Creators.
-  - **Add to Queue**: Adds selected creators or their posts to the download queue.
-**Page Range (Start to End) Input Fields**
-- **Purpose**: Specify a range of pages to fetch for creator URLs.
-- **Usage**: Enter the starting and ending page numbers.
-- **Behavior**:
-  - If blank, all pages are processed.
-  - Disabled for single post URLs.
-**📁 Download Location Input Field & Browse Button**
-- **Purpose**: Specify the main directory for downloaded files.
-- **Usage**: Type the path or click "Browse..." to select a folder.
-- **Requirement**: Mandatory for all download operations.
-### 1.2. Action Buttons
-**⬇️ Start Download / 🔗 Extract Links Button**
-- **Purpose**: Initiates downloading or link extraction.
-- **Behavior**:
-  - Shows "🔗 Extract Links" if "Only Links" is selected.
-  - Otherwise, shows "⬇️ Start Download".
-  - Supports single-threaded or multi-threaded downloads based on settings.
-**🔄 Restore Download Button**
-- **Visibility**: Appears if an incomplete session is detected on startup.
-- **Purpose**: Resumes a previously interrupted download session.
-**⏸️ Pause / ▶️ Resume Download Button**
-- **Purpose**: Pause or resume the ongoing download.
-- **Behavior**: Toggles between "Pause" and "Resume". Some UI settings can be changed while paused.
-**❌ Cancel & Reset UI Button**
-- **Purpose**: Stops the current operation and performs a "soft" reset.
-- **Behavior**: Halts background threads, preserves URL and Download Location inputs, resets other settings.
-**🔄 Reset Button (in the log area)**
-- **Purpose**: Performs a "hard" reset when no operation is active.
-- **Behavior**: Clears all inputs, resets options to default, and clears logs.
-## 2. Filtering & Content Selection
-These options allow precise control over downloaded content.
-### 2.1. Content Filtering
-**🎯 Filter by Character(s) Input Field**
-- **Purpose**: Download content related to specific characters or series.
-- **Usage**: Enter comma-separated character names.
-- **Advanced Syntax**:
-  - `Nami`: Simple filter.
-  - `(Vivi, Ulti)`: Grouped filter. Matches posts with "Vivi" OR "Ulti". Creates a shared folder like `Vivi Ulti` if subfolders are enabled.
-  - `(Boa, Hancock)~`: Aliased filter. Treats "Boa" and "Hancock" as the same entity.
-**Filter: [Type] Button (Character Filter Scope)**
-- **Purpose**: Defines where the character filter is applied. Cycles on click.
-- **Options**:
-  - **Filter: Title** (Default): Matches post titles.
-  - **Filter: Files**: Matches filenames.
-  - **Filter: Both**: Checks title first, then filenames.
-  - **Filter: Comments (Beta)**: Checks filenames, then post comments.
-**🚫 Skip with Words Input Field**
-- **Purpose**: Exclude posts/files with specified keywords (e.g., `WIP`, `sketch`).
-**Scope: [Type] Button (Skip Words Scope)**
-- **Purpose**: Defines where skip words are applied. Cycles on click.
-- **Options**:
-  - **Scope: Posts** (Default): Skips posts if the title contains a skip word.
-  - **Scope: Files**: Skips files if the filename contains a skip word.
-  - **Scope: Both**: Applies both rules.
-**✂️ Remove Words from Name Input Field**
-- **Purpose**: Remove unwanted text from filenames (e.g., `patreon`, `[HD]`).
-### 2.2. File Type Filtering
-**Filter Files (Radio Buttons)**
-- **Purpose**: Select file types to download.
-- **Options**:
-  - **All**: All file types.
-  - **Images/GIFs**: Common image formats.
-  - **Videos**: Common video formats.
-  - **🎧 Only Audio**: Common audio formats.
-  - **📦 Only Archives**: Only `.zip` and `.rar` files.
-  - **🔗 Only Links**: Extracts external links without downloading files.
-**Skip .zip / Skip .rar Checkboxes**
-- **Purpose**: Skip downloading `.zip` or `.rar` files.
-- **Behavior**: Disabled when "📦 Only Archives" is active.
-## 3. Download Customization
-Options to refine the download process and output.
-- **Download Thumbnails Only**: Downloads small preview images instead of full-resolution files.
-- **Scan Content for Images**: Scans post HTML for `<img>` tags, crucial for images in descriptions.
-- **Compress to WebP**: Converts images to WebP format (requires Pillow library).
-- **Keep Duplicates**: Normally, if a post contains multiple files with the same name, only the first is downloaded. Checking this option will download all of them, renaming subsequent unique files with a numeric suffix (e.g., `image_1.jpg`).
-- **🗄️ Custom Folder Name (Single Post Only)**: Specify a custom folder name for a single post's content (appears if subfolders are enabled).
-## 4. 📖 Manga/Comic Mode
-A mode for downloading creator feeds in chronological order, ideal for sequential content.
-- **Activation**: Active when downloading a creator's entire feed (not a single post).
-- **Core Behavior**: Fetches all posts, processing from oldest to newest.
-- **Filename Style Toggle Button (in the log area)**:
-  - **Purpose**: Controls file naming in Manga Mode. Cycles on click.
-  - **Options**:
-    - **Name: Post Title**: First file named after post title; others keep original names.
-    - **Name: Original File**: Files keep server-provided names, with optional prefix.
-    - **Name: Title+G.Num**: Global numbering with post title prefix (e.g., `Chapter 1_001.jpg`).
-    - **Name: Date Based**: Sequential naming by post date (e.g., `001.jpg`), with optional prefix.
-    - **Name: Post ID**: Files named after post ID to avoid clashes.
-    - **Name: Date + Title**: Combines post date and title for filenames.
-## 5. Folder Organization & Known.txt
-Controls for structuring downloaded content.
-- **Separate Folders by Name/Title Checkbox**: Enables automatic subfolder creation.
-- **Subfolder per Post Checkbox**: Creates subfolders for each post, named after the post title.
-- **Date Prefix for Post Subfolders Checkbox**: When used with "Subfolder per Post," this option prefixes the folder name with the post's upload date (e.g., `2025-07-11 Post Title`), allowing for chronological sorting.
-- **Known.txt Management UI (Bottom Left)**:
-  - **Purpose**: Manages a local `Known.txt` file for series, characters, or terms used in folder creation.
-  - **List Display**: Shows primary names from `Known.txt`.
-  - **➕ Add Button**: Adds names or groups (e.g., `(Character A, Alias B)~`).
-  - **⤵️ Add to Filter Button**: Select names from `Known.txt` for the character filter.
-  - **🗑️ Delete Selected Button**: Removes selected names from `Known.txt`.
-  - **Open Known.txt Button**: Opens the file in the default text editor.
-- **❓ Help Button**: Opens this feature guide.
-- **📜 History Button**: Views recent download history.
-
-## 6. ⭐ Favorite Mode (Kemono.su Only)
-Download from favorited artists/posts on Kemono.su.
-- **Enable Checkbox ("⭐ Favorite Mode")**:
-  - Switches to Favorite Mode.
-  - Disables the main URL input.
-  - Changes action buttons to "Favorite Artists" and "Favorite Posts".
-  - Requires cookies.
-- **🖼️ Favorite Artists Button**: Select and download from favorited artists.
-- **📄 Favorite Posts Button**: Select and download specific favorited posts.
-- **Favorite Download Scope Button**:
-  - **Scope: Selected Location**: Downloads favorites to the main directory.
-  - **Scope: Artist Folders**: Creates subfolders per artist.
-
-## 7. Advanced Settings & Performance
-- **🍪 Cookie Management**:
-  - **Use Cookie Checkbox**: Enables cookies for restricted content.
-  - **Cookie Text Field**: Paste cookie string.
-  - **Browse... Button**: Select a `cookies.txt` file (Netscape format).
-- **Use Multithreading Checkbox & Threads Input**:
-  - **Purpose**: Configures simultaneous operations.
-  - **Behavior**: Sets concurrent post processing (creator feeds) or file downloads (single posts).
-- **Multi-part Download Toggle Button**:
-  - **Purpose**: Enables/disables multi-segment downloading for large files.
-  - **Note**: Best for large files; less efficient for small files.
-
-## 8. Logging, Monitoring & Error Handling
-- **📜 Progress Log Area**: Displays messages, progress, and errors.
-- **👁️ / 🙈 Log View Toggle Button**: Switches between Progress Log and Missed Character Log (skipped posts).
-- **Show External Links in Log**: Displays external links (e.g., Mega, Google Drive) in a secondary panel.
-- **Export Links Button**: Saves extracted links to a `.txt` file in "Only Links" mode.
-- **Download Extracted Links Button**: Downloads files from supported external links in "Only Links" mode.
-- **🆘 Error Button & Dialog**:
-  - **Purpose**: Active if files fail to download. The button will display a live count of failed files (e.g., **(3) Error**).
-  - **Dialog Features**:
-    - Lists failed files.
-    - Retry failed downloads.
-    - Export failed URLs to a text file.
-
-## 9. Application Settings (⚙️)
-- **Appearance**: Switch between Light and Dark themes.
-- **Language**: Change UI language (restart required).
+<div>
+<h1>Kemono Downloader - Comprehensive Feature Guide</h1>
+<p>This guide provides a detailed overview of all user interface elements, input fields, buttons, popups, and functionalities available in the application.</p>
+<hr>
+<h2><strong>Main Window: Core Functionality</strong></h2>
+<p>The application is divided into a configuration panel on the left and a status/log panel on the right.</p>
+<h3><strong>Primary Inputs (Top-Left)</strong></h3>
+<ul>
+<li><strong>URL Input Field</strong>: This is the starting point for most downloads. You can paste a URL for a specific post or for an entire creator's feed. The application's behavior adapts based on the URL type.</li>
+<li><strong>🎨 Creator Selection Popup</strong>: This button opens a powerful dialog listing all known creators. From here, you can:
+<ul>
+<li><strong>Search and Queue</strong>: Search for creators and check multiple names. Clicking "Add Selected" populates the main input field, preparing a batch download.</li>
+<li><strong>Check for Updates</strong>: Select a single creator's saved profile. This loads their information and switches the main download button to "Check for Updates" mode, allowing you to download only new content since your last session.</li>
+</ul>
+</li>
+<li><strong>Download Location</strong>: The primary folder where all content will be saved. The <strong>Browse...</strong> button lets you select this folder from your computer.</li>
+<li><strong>Page Range (Start/End)</strong>: These fields activate only for creator feed URLs. They allow you to download a specific slice of a creator's history (e.g., pages 5 through 10) instead of their entire feed.</li>
+</ul>
+<hr>
+<h2><strong>Filtering & Naming (Left Panel)</strong></h2>
+<p>These features give you precise control over what gets downloaded and how it's named and organized.</p>
+<ul>
+<li><strong>Filter by Character(s)</strong>: A powerful tool to download content featuring specific characters. You can enter multiple names separated by commas.
+<ul>
+<li><strong>Filter: [Scope] Button</strong>: This button changes how the character filter works:
+<ul>
+<li><strong>Title</strong>: Downloads posts only if a character's name is in the post title.</li>
+<li><strong>Files</strong>: Downloads posts if a character's name is in any of the filenames within the post.</li>
+<li><strong>Both</strong>: Combines the "Title" and "Files" logic.</li>
+<li><strong>Comments (Beta)</strong>: Downloads a post if a character's name is mentioned in the comments section.</li>
+</ul>
+</li>
+</ul>
+</li>
+<li><strong>Skip with Words</strong>: A keyword-based filter to avoid unwanted content (e.g., <code>WIP</code>, <code>sketch</code>).
+<ul>
+<li><strong>Scope: [Type] Button</strong>: This button changes how the skip filter works:
+<ul>
+<li><strong>Posts</strong>: Skips the entire post if a keyword is found in the title.</li>
+<li><strong>Files</strong>: Skips only individual files if a keyword is found in the filename.</li>
+<li><strong>Both</strong>: Applies both levels of skipping.</li>
+</ul>
+</li>
+</ul>
+</li>
+<li><strong>Remove Words from name</strong>: Automatically cleans downloaded filenames by removing any specified words (e.g., "patreon," "HD").</li>
+</ul>
+<h3><strong>File Type Filter (Radio Buttons)</strong></h3>
+<p>This section lets you choose the kind of content you want:</p>
+<ul>
+<li><strong>All, Images/GIFs, Videos, 🎧 Only Audio, 📦 Only Archives</strong>: These options filter the downloads to only include the selected file types.</li>
+<li><strong>🔗 Only Links</strong>: This special mode doesn't download any files. Instead, it scans post descriptions and lists all external links (like Mega, Google Drive) in the log panel.</li>
+<li><strong>More</strong>: Opens a dialog for text-only downloads. You can choose to save post <strong>descriptions</strong> or <strong>comments</strong> as formatted <strong>PDF, DOCX, or TXT</strong> files. A key feature here is the <strong>"Single PDF"</strong> option, which compiles the text from all downloaded posts into one continuous, sorted PDF document.</li>
+</ul>
+<hr>
+<h2><strong>Download Options & Advanced Settings (Checkboxes)</strong></h2>
+<ul>
+<li><strong>Skip .zip</strong>: A simple toggle to ignore archive files during downloads.</li>
+<li><strong>Download Thumbnails Only</strong>: Downloads only the small preview images instead of the full-resolution files.</li>
+<li><strong>Scan Content for Images</strong>: A crucial feature that scans the post's text content for embedded images that may not be listed in the API, ensuring a more complete download.</li>
+<li><strong>Compress to WebP</strong>: Saves disk space by automatically converting large images into the efficient WebP format.</li>
+<li><strong>Keep Duplicates</strong>: Opens a dialog to control how files with identical content are handled. The default is to skip duplicates, but you can choose to keep all of them or set a specific limit (e.g., "keep up to 2 copies of the same file").</li>
+<li><strong>Subfolder per Post</strong>: Organizes downloads by creating a unique folder for each post, named after the post's title.</li>
+<li><strong>Date Prefix</strong>: When "Subfolder per Post" is on, this adds the post's date to the beginning of the folder name (e.g., <code>2025-07-25 Post Title</code>).</li>
+<li><strong>Separate Folders by Known.txt</strong>: This enables the automatic folder organization system based on your "Known Names" list.</li>
+<li><strong>Use Cookie</strong>: Allows the application to use browser cookies to access content that might be behind a paywall or login. You can paste a cookie string directly or use <strong>Browse...</strong> to select a <code>cookies.txt</code> file.</li>
+<li><strong>Use Multithreading</strong>: Greatly speeds up downloads of creator feeds by processing multiple posts at once. The number of <strong>Threads</strong> can be configured.</li>
+<li><strong>Show External Links in Log</strong>: When checked, a secondary log panel appears at the bottom of the right side, dedicated to listing any external links found.</li>
+</ul>
+<hr>
+<h2><strong>Known Names Management (Bottom-Left)</strong></h2>
+<p>This powerful feature automates the creation of organized, named folders.</p>
+<ul>
+<li><strong>Known Shows/Characters List</strong>: Displays all the names and groups you've saved.</li>
+<li><strong>Search...</strong>: Filters the list to quickly find a name.</li>
+<li><strong>Open Known.txt</strong>: Opens the source file in a text editor for advanced manual editing.</li>
+<li><strong>Add New Name</strong>:
+<ul>
+<li><strong>Single Name</strong>: Typing <code>Tifa Lockhart</code> and clicking <strong>➕ Add</strong> creates an entry that will match "Tifa Lockhart".</li>
+<li><strong>Group</strong>: Typing <code>(Boa, Hancock, Snake Princess)~</code> and clicking <strong>➕ Add</strong> creates a single entry named "Boa Hancock Snake Princess". The application will then look for "Boa," "Hancock," OR "Snake Princess" in titles/filenames and save any matches into that combined folder.</li>
+</ul>
+</li>
+<li><strong>⤵️ Add to Filter</strong>: Opens a dialog with your full Known Names list, allowing you to check multiple entries and add them all to the "Filter by Character(s)" field at once.</li>
+<li><strong>🗑️ Delete Selected</strong>: Removes highlighted names from your list.</li>
+</ul>
+<hr>
+<h2><strong>Action Buttons & Status Controls</strong></h2>
+<ul>
+<li><strong>⬇️ Start Download / 🔗 Extract Links</strong>: The main action button. Its function is dynamic:
+<ul>
+<li><strong>Normal Mode</strong>: Starts the download based on the current settings.</li>
+<li><strong>Update Mode</strong>: After selecting a creator profile, this button changes to <strong>🔄 Check for Updates</strong>.</li>
+<li><strong>Update Confirmation</strong>: After new posts are found, it changes to <strong>⬇️ Start Download (X new)</strong>.</li>
+<li><strong>Link Extraction Mode</strong>: The text changes to <strong>🔗 Extract Links</strong>.</li>
+</ul>
+</li>
+<li><strong>⏸️ Pause / ▶️ Resume Download</strong>: Pauses the ongoing download, allowing you to change certain settings (like filters) on the fly. Click again to resume.</li>
+<li><strong>❌ Cancel & Reset UI</strong>: Immediately stops all download activity and resets the UI to a clean state, preserving your URL and Download Location inputs.</li>
+<li><strong>Error Button</strong>: If files fail to download, they are logged. This button opens a dialog listing all failed files and will show a count of errors (e.g., <strong>(5) Error</strong>). From the dialog, you can:
+<ul>
+<li>Select specific files to <strong>Retry</strong> downloading.</li>
+<li><strong>Export</strong> the list of failed URLs to a <code>.txt</code> file.</li>
+</ul>
+</li>
+<li><strong>🔄 Reset (Top-Right)</strong>: A hard reset that clears all logs and returns every single UI element to its default state.</li>
+<li><strong>⚙️ (Settings)</strong>: Opens the main Settings dialog.</li>
+<li><strong>📜 (History)</strong>: Opens the Download History dialog.</li>
+<li><strong>? (Help)</strong>: Opens a helpful guide explaining the application's features.</li>
+<li><strong>❤️ Support</strong>: Opens a dialog with information on how to support the developer.</li>
+</ul>
+<hr>
+<h2><strong>Specialized Modes & Features</strong></h2>
+<h3><strong>⭐ Favorite Mode</strong></h3>
+<p>Activating this mode transforms the UI for managing saved collections:</p>
+<ul>
+<li>The URL input is disabled.</li>
+<li>The main action buttons are replaced with:
+<ul>
+<li><strong>🖼️ Favorite Artists</strong>: Opens a dialog to browse and queue downloads from your saved favorite creators.</li>
+<li><strong>📄 Favorite Posts</strong>: Opens a dialog to browse and queue downloads for specific saved favorite posts.</li>
+</ul>
+</li>
+<li><strong>Scope: [Location] Button</strong>: Toggles where the favorited content is saved:
+<ul>
+<li><strong>Selected Location</strong>: Saves all content directly into the main "Download Location".</li>
+<li><strong>Artist Folders</strong>: Creates a subfolder for each artist inside the main "Download Location".</li>
+</ul>
+</li>
+</ul>
+<h3><strong>📖 Manga/Comic Mode</strong></h3>
+<p>This mode is designed for sequential content and has several effects:</p>
+<ul>
+<li><strong>Reverses Download Order</strong>: It fetches and downloads posts from <strong>oldest to newest</strong>.</li>
+<li><strong>Enables Special Naming</strong>: A <strong><code>Name: [Style]</code></strong> button appears, allowing you to choose how files are named to maintain their correct order (e.g., by Post Title, by Date, or simple sequential numbering like <code>001, 002, 003...</code>).</li>
+<li><strong>Disables Multithreading (for certain styles)</strong>: To guarantee perfect sequential numbering, multithreading for posts is automatically disabled for certain naming styles.</li>
+</ul>
+<h3><strong>Session & Error Management</strong></h3>
+<ul>
+<li><strong>Session Restore</strong>: If the application is closed unexpectedly during a download, it will detect the incomplete session on the next launch. The UI will present a <strong>🔄 Restore Download</strong> button to resume exactly where you left off. You can also choose to discard the session.</li>
+<li><strong>Update Checking</strong>: By selecting a creator profile via the <strong>🎨 Creator Selection Popup</strong>, you can run an update check. The application compares the posts on the server with your download history for that creator and will prompt you to download only the new content.</li>
+</ul>
+<h3><strong>Logging & Monitoring</strong></h3>
+<ul>
+<li><strong>Progress Log</strong>: The main log provides real-time feedback on the download process, including status messages, file saves, skips, and errors.</li>
+<li><strong>👁️ Log View Toggle</strong>: Switches the log view between the standard <strong>Progress Log</strong> and a <strong>Missed Character Log</strong>, which shows potential character names from posts that were skipped by your filters, helping you discover new names to add to your list.</li>
+</ul>
+</div>
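Both versions of the guide document the same grouped-name syntax for filters and Known.txt entries: `(Boa, Hancock, Snake Princess)~` becomes one entry whose folder name joins the aliases and which matches any one of them (the guide distinguishes plain `(A, B)` groups from `(A, B)~` aliases; the sketch below treats both as one alias set). A minimal sketch of parsing such an entry; the helper names `parse_known_entry` and `matches` are hypothetical, not the repository's actual functions:

    import re

    def parse_known_entry(raw: str):
        """Parse 'Name' or '(Alias1, Alias2, ...)~' into (folder_name, aliases)."""
        raw = raw.strip()
        group = re.fullmatch(r'\((?P<names>[^)]+)\)~?', raw)
        if group:
            aliases = [n.strip() for n in group.group('names').split(',') if n.strip()]
            # Folder name joins all aliases, e.g. "Boa Hancock Snake Princess"
            return " ".join(aliases), aliases
        return raw, [raw]

    def matches(entry_aliases, text: str) -> bool:
        """Case-insensitive check whether any alias appears in a title or filename."""
        lowered = text.lower()
        return any(alias.lower() in lowered for alias in entry_aliases)

    folder, aliases = parse_known_entry("(Boa, Hancock, Snake Princess)~")
    assert folder == "Boa Hancock Snake Princess"
    assert matches(aliases, "Hancock beach set.zip")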
@@ -59,6 +59,7 @@ LANGUAGE_KEY = "currentLanguageV1"
 DOWNLOAD_LOCATION_KEY = "downloadLocationV1"
 RESOLUTION_KEY = "window_resolution"
 UI_SCALE_KEY = "ui_scale_factor"
+SAVE_CREATOR_JSON_KEY = "saveCreatorJsonProfile"
 
 # --- UI Constants and Identifiers ---
 HTML_PREFIX = "<!HTML!>"
@@ -120,7 +120,7 @@ def download_from_api(
     selected_cookie_file=None,
     app_base_dir=None,
     manga_filename_style_for_sort_check=None,
-    processed_post_ids=None # --- ADD THIS ARGUMENT ---
+    processed_post_ids=None
 ):
     headers = {
         'User-Agent': 'Mozilla/5.0',
@@ -139,9 +139,14 @@ def download_from_api(
 
     parsed_input_url_for_domain = urlparse(api_url_input)
     api_domain = parsed_input_url_for_domain.netloc
-    if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
+    # --- START: MODIFIED LOGIC ---
+    # This list is updated to include the new .cr and .st mirrors for validation.
+    if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
         logger(f"⚠️ Unrecognized domain '{api_domain}' from input URL. Defaulting to kemono.su for API calls.")
         api_domain = "kemono.su"
+    # --- END: MODIFIED LOGIC ---
 
     cookies_for_api = None
     if use_cookie and app_base_dir:
         cookies_for_api = prepare_cookies_for_request(use_cookie, cookie_text, selected_cookie_file, app_base_dir, logger, target_domain=api_domain)
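The hunk above only extends a recognized-domain allowlist; any other host falls back to kemono.su for API calls. A standalone sketch of the same normalization step, runnable outside the app (the function name `normalize_api_domain` is illustrative, not the repository's):

    from urllib.parse import urlparse

    KNOWN_DOMAINS = ['kemono.su', 'kemono.party', 'kemono.cr',
                     'coomer.su', 'coomer.party', 'coomer.st']

    def normalize_api_domain(url: str) -> str:
        """Return the URL's host if it is a known mirror, else fall back to kemono.su."""
        host = urlparse(url).netloc.lower()
        return host if any(d in host for d in KNOWN_DOMAINS) else "kemono.su"

    assert normalize_api_domain("https://coomer.st/onlyfans/user/example") == "coomer.st"
    assert normalize_api_domain("https://example.org/whatever") == "kemono.su"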
@@ -220,6 +225,9 @@ def download_from_api(
                 logger(f"   Manga Mode: No posts found within the specified page range ({start_page or 1}-{end_page}).")
                 break
             all_posts_for_manga_mode.extend(posts_batch_manga)
+
+            logger(f"MANGA_FETCH_PROGRESS:{len(all_posts_for_manga_mode)}:{current_page_num_manga}")
+
             current_offset_manga += page_size
             time.sleep(0.6)
         except RuntimeError as e:
@@ -232,7 +240,12 @@ def download_from_api(
             logger(f"❌ Unexpected error during manga mode fetch: {e}")
             traceback.print_exc()
             break
 
     if cancellation_event and cancellation_event.is_set(): return
 
+    if all_posts_for_manga_mode:
+        logger(f"MANGA_FETCH_COMPLETE:{len(all_posts_for_manga_mode)}")
+
     if all_posts_for_manga_mode:
         if processed_post_ids:
             original_count = len(all_posts_for_manga_mode)
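The `MANGA_FETCH_PROGRESS:<total>:<page>` and `MANGA_FETCH_COMPLETE:<total>` lines added in the two hunks above are structured log messages: machine-readable markers pushed through the ordinary logger so the UI side can recognize them and drive a progress indicator instead of printing them verbatim. The consuming side is not part of this diff; a plausible parser sketch:

    def handle_log_line(line: str):
        """Route structured progress markers; everything else is a plain log message."""
        if line.startswith("MANGA_FETCH_PROGRESS:"):
            _, posts, page = line.split(":", 2)
            return ("progress", int(posts), int(page))
        if line.startswith("MANGA_FETCH_COMPLETE:"):
            return ("complete", int(line.split(":", 1)[1]), None)
        return ("message", line, None)

    assert handle_log_line("MANGA_FETCH_PROGRESS:150:3") == ("progress", 150, 3)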
@@ -5,11 +5,10 @@ import json
 import traceback
 from concurrent.futures import ThreadPoolExecutor, as_completed, Future
 from .api_client import download_from_api
-from .workers import PostProcessorWorker, DownloadThread
+from .workers import PostProcessorWorker
 from ..config.constants import (
     STYLE_DATE_BASED, STYLE_POST_TITLE_GLOBAL_NUMBERING,
-    MAX_THREADS, POST_WORKER_BATCH_THRESHOLD, POST_WORKER_NUM_BATCHES,
-    POST_WORKER_BATCH_DELAY_SECONDS
+    MAX_THREADS
 )
 from ..utils.file_utils import clean_folder_name
 
@@ -41,6 +40,10 @@ class DownloadManager:
         self.total_downloads = 0
         self.total_skips = 0
         self.all_kept_original_filenames = []
+        self.creator_profiles_dir = None
+        self.current_creator_name_for_profile = None
+        self.current_creator_profile_path = None
+        self.session_file_path = None
 
     def _log(self, message):
         """Puts a progress message into the queue for the UI."""
@@ -58,6 +61,17 @@ class DownloadManager:
         if self.is_running:
             self._log("❌ Cannot start a new session: A session is already in progress.")
             return
 
+        self.session_file_path = config.get('session_file_path')
+        creator_profile_data = self._setup_creator_profile(config)
+
+        # Save settings to profile at the start of the session
+        if self.current_creator_profile_path:
+            creator_profile_data['settings'] = config
+            creator_profile_data.setdefault('processed_post_ids', [])
+            self._save_creator_profile(creator_profile_data)
+            self._log(f"✅ Loaded/created profile for '{self.current_creator_name_for_profile}'. Settings saved.")
+
         self.is_running = True
         self.cancellation_event.clear()
         self.pause_event.clear()
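The hunk above persists the whole session `config` plus a `processed_post_ids` list into a per-creator JSON profile (via `_setup_creator_profile` / `_save_creator_profile`, defined in a later hunk). Assuming only the keys visible in this diff, a profile file would look roughly like:

    # Rough shape of a creator_profiles/<name>.json file as implied by this diff;
    # the keys inside "settings" are examples drawn from config.get(...) calls.
    example_profile = {
        "settings": {
            "api_url": "https://kemono.su/patreon/user/12345",
            "use_multithreading": True,
            "num_threads": 4,
        },
        "processed_post_ids": ["98765", "98766"],  # grows as posts finish
    }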
@@ -67,76 +81,109 @@ class DownloadManager:
         self.total_downloads = 0
         self.total_skips = 0
         self.all_kept_original_filenames = []
 
         is_single_post = bool(config.get('target_post_id_from_initial_url'))
         use_multithreading = config.get('use_multithreading', True)
         is_manga_sequential = config.get('manga_mode_active') and config.get('manga_filename_style') in [STYLE_DATE_BASED, STYLE_POST_TITLE_GLOBAL_NUMBERING]
 
         should_use_multithreading_for_posts = use_multithreading and not is_single_post and not is_manga_sequential
 
         if should_use_multithreading_for_posts:
             fetcher_thread = threading.Thread(
                 target=self._fetch_and_queue_posts_for_pool,
-                args=(config, restore_data),
+                args=(config, restore_data, creator_profile_data),
                 daemon=True
             )
             fetcher_thread.start()
         else:
-            self._start_single_threaded_session(config)
-
-    def _start_single_threaded_session(self, config):
-        """Handles downloads that are best processed by a single worker thread."""
-        self._log("ℹ️ Initializing single-threaded download process...")
-        self.worker_thread = threading.Thread(
-            target=self._run_single_worker,
-            args=(config,),
-            daemon=True
-        )
-        self.worker_thread.start()
-
-    def _run_single_worker(self, config):
-        """Target function for the single-worker thread."""
-        try:
-            worker = DownloadThread(config, self.progress_queue)
-            worker.run() # This is the main blocking call for this thread
-        except Exception as e:
-            self._log(f"❌ CRITICAL ERROR in single-worker thread: {e}")
-            self._log(traceback.format_exc())
-        finally:
-            self.is_running = False
-
-    def _fetch_and_queue_posts_for_pool(self, config, restore_data):
+            # Single-threaded mode does not use the manager's complex logic
+            self._log("ℹ️ Manager is handing off to a single-threaded worker...")
+            # The single-threaded worker will manage its own lifecycle and signals.
+            # The manager's role for this session is effectively over.
+            self.is_running = False # Allow another session to start if needed
+            self.progress_queue.put({'type': 'handoff_to_single_thread', 'payload': (config,)})
+
+    def _fetch_and_queue_posts_for_pool(self, config, restore_data, creator_profile_data):
         """
-        Fetches all posts from the API and submits them as tasks to a thread pool.
-        This method runs in its own dedicated thread to avoid blocking.
+        Fetches posts from the API in batches and submits them as tasks to a thread pool.
+        This method runs in its own dedicated thread to avoid blocking the UI.
+        It provides immediate feedback as soon as the first batch of posts is found.
         """
         try:
             num_workers = min(config.get('num_threads', 4), MAX_THREADS)
             self.thread_pool = ThreadPoolExecutor(max_workers=num_workers, thread_name_prefix='PostWorker_')
-            if restore_data:
+            session_processed_ids = set(restore_data.get('processed_post_ids', [])) if restore_data else set()
+            profile_processed_ids = set(creator_profile_data.get('processed_post_ids', []))
+            processed_ids = session_processed_ids.union(profile_processed_ids)
+
+            if restore_data and 'all_posts_data' in restore_data:
+                # This logic for session restore remains as it relies on a pre-fetched list
                 all_posts = restore_data['all_posts_data']
-                processed_ids = set(restore_data['processed_post_ids'])
                 posts_to_process = [p for p in all_posts if p.get('id') not in processed_ids]
                 self.total_posts = len(all_posts)
                 self.processed_posts = len(processed_ids)
                 self._log(f"🔄 Restoring session. {len(posts_to_process)} posts remaining.")
+                self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
+
+                if not posts_to_process:
+                    self._log("✅ No new posts to process from restored session.")
+                    return
+
+                for post_data in posts_to_process:
+                    if self.cancellation_event.is_set(): break
+                    worker = PostProcessorWorker(post_data, config, self.progress_queue)
+                    future = self.thread_pool.submit(worker.process)
+                    future.add_done_callback(self._handle_future_result)
+                    self.active_futures.append(future)
             else:
-                posts_to_process = self._get_all_posts(config)
-                self.total_posts = len(posts_to_process)
-                self.processed_posts = 0
-                self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
-            if not posts_to_process:
-                self._log("✅ No new posts to process.")
-                return
-            for post_data in posts_to_process:
-                if self.cancellation_event.is_set():
-                    break
-                worker = PostProcessorWorker(post_data, config, self.progress_queue)
-                future = self.thread_pool.submit(worker.process)
-                future.add_done_callback(self._handle_future_result)
-                self.active_futures.append(future)
+                # --- START: REFACTORED STREAMING LOGIC ---
+                post_generator = download_from_api(
+                    api_url_input=config['api_url'],
+                    logger=self._log,
+                    start_page=config.get('start_page'),
+                    end_page=config.get('end_page'),
+                    manga_mode=config.get('manga_mode_active', False),
+                    cancellation_event=self.cancellation_event,
+                    pause_event=self.pause_event,
+                    use_cookie=config.get('use_cookie', False),
+                    cookie_text=config.get('cookie_text', ''),
+                    selected_cookie_file=config.get('selected_cookie_file'),
+                    app_base_dir=config.get('app_base_dir'),
+                    manga_filename_style_for_sort_check=config.get('manga_filename_style'),
+                    processed_post_ids=list(processed_ids)
+                )
+
+                self.total_posts = 0
+                self.processed_posts = 0
+
+                # Process posts in batches as they are yielded by the API client
+                for batch in post_generator:
+                    if self.cancellation_event.is_set():
+                        self._log("   Post fetching cancelled.")
+                        break
+
+                    # Filter out any posts that might have been processed since the start
+                    posts_in_batch_to_process = [p for p in batch if p.get('id') not in processed_ids]
+
+                    if not posts_in_batch_to_process:
+                        continue
+
+                    # Update total count and immediately inform the UI
+                    self.total_posts += len(posts_in_batch_to_process)
+                    self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
+
+                    for post_data in posts_in_batch_to_process:
+                        if self.cancellation_event.is_set(): break
+                        worker = PostProcessorWorker(post_data, config, self.progress_queue)
+                        future = self.thread_pool.submit(worker.process)
+                        future.add_done_callback(self._handle_future_result)
+                        self.active_futures.append(future)
+
+                if self.total_posts == 0 and not self.cancellation_event.is_set():
+                    self._log("✅ No new posts found to process.")
+
         except Exception as e:
             self._log(f"❌ CRITICAL ERROR in post fetcher thread: {e}")
             self._log(traceback.format_exc())
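The refactor above replaces "fetch every post, then submit" with a producer generator feeding a ThreadPoolExecutor as batches arrive, so the first downloads can start before pagination finishes. A minimal self-contained sketch of that pattern; the generator and worker here are stand-ins, not the repository's classes:

    from concurrent.futures import ThreadPoolExecutor
    import time

    def fetch_batches():
        """Stand-in for download_from_api: yields pages of posts as they arrive."""
        for page in range(3):
            time.sleep(0.1)  # simulated network latency per page
            yield [f"post-{page}-{i}" for i in range(5)]

    def process(post):
        return f"done {post}"

    seen = set()
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = []
        for batch in fetch_batches():  # consume batches while fetching continues
            new = [p for p in batch if p not in seen]
            seen.update(new)
            futures.extend(pool.submit(process, p) for p in new)
        results = [f.result() for f in futures]

    print(len(results))  # 15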
@@ -144,33 +191,11 @@ class DownloadManager:
         if self.thread_pool:
             self.thread_pool.shutdown(wait=True)
         self.is_running = False
-        self._log("🏁 All processing tasks have completed.")
+        self._log("🏁 All processing tasks have completed or been cancelled.")
         self.progress_queue.put({
             'type': 'finished',
             'payload': (self.total_downloads, self.total_skips, self.cancellation_event.is_set(), self.all_kept_original_filenames)
         })
 
-    def _get_all_posts(self, config):
-        """Helper to fetch all posts using the API client."""
-        all_posts = []
-        post_generator = download_from_api(
-            api_url_input=config['api_url'],
-            logger=self._log,
-            start_page=config.get('start_page'),
-            end_page=config.get('end_page'),
-            manga_mode=config.get('manga_mode_active', False),
-            cancellation_event=self.cancellation_event,
-            pause_event=self.pause_event,
-            use_cookie=config.get('use_cookie', False),
-            cookie_text=config.get('cookie_text', ''),
-            selected_cookie_file=config.get('selected_cookie_file'),
-            app_base_dir=config.get('app_base_dir'),
-            manga_filename_style_for_sort_check=config.get('manga_filename_style'),
-            processed_post_ids=config.get('processed_post_ids', [])
-        )
-        for batch in post_generator:
-            all_posts.extend(batch)
-        return all_posts
-
     def _handle_future_result(self, future: Future):
         """Callback executed when a worker task completes."""
@@ -196,19 +221,65 @@ class DownloadManager:
             self.progress_queue.put({'type': 'permanent_failure', 'payload': (permanent,)})
             if history:
                 self.progress_queue.put({'type': 'post_processed_history', 'payload': (history,)})
+                post_id = history.get('post_id')
+                if post_id and self.current_creator_profile_path:
+                    profile_data = self._setup_creator_profile({'creator_name_for_profile': self.current_creator_name_for_profile, 'session_file_path': self.session_file_path})
+                    if post_id not in profile_data.get('processed_post_ids', []):
+                        profile_data.setdefault('processed_post_ids', []).append(post_id)
+                        self._save_creator_profile(profile_data)
+
         except Exception as e:
             self._log(f"❌ Worker task resulted in an exception: {e}")
             self.total_skips += 1 # Count errored posts as skipped
         self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
 
+    def _setup_creator_profile(self, config):
+        """Prepares the path and loads data for the current creator's profile."""
+        self.current_creator_name_for_profile = config.get('creator_name_for_profile')
+        if not self.current_creator_name_for_profile:
+            self._log("⚠️ Cannot create creator profile: Name not provided in config.")
+            return {}
+
+        appdata_dir = os.path.dirname(config.get('session_file_path', '.'))
+        self.creator_profiles_dir = os.path.join(appdata_dir, "creator_profiles")
+        os.makedirs(self.creator_profiles_dir, exist_ok=True)
+
+        safe_filename = clean_folder_name(self.current_creator_name_for_profile) + ".json"
+        self.current_creator_profile_path = os.path.join(self.creator_profiles_dir, safe_filename)
+
+        if os.path.exists(self.current_creator_profile_path):
+            try:
+                with open(self.current_creator_profile_path, 'r', encoding='utf-8') as f:
+                    return json.load(f)
+            except (json.JSONDecodeError, OSError) as e:
+                self._log(f"❌ Error loading creator profile '{safe_filename}': {e}. Starting fresh.")
+        return {}
+
+    def _save_creator_profile(self, data):
+        """Saves the provided data to the current creator's profile file."""
+        if not self.current_creator_profile_path:
+            return
+        try:
+            temp_path = self.current_creator_profile_path + ".tmp"
+            with open(temp_path, 'w', encoding='utf-8') as f:
+                json.dump(data, f, indent=2)
+            os.replace(temp_path, self.current_creator_profile_path)
+        except OSError as e:
+            self._log(f"❌ Error saving creator profile to '{self.current_creator_profile_path}': {e}")
+
     def cancel_session(self):
         """Cancels the current running session."""
         if not self.is_running:
             return
 
+        if self.cancellation_event.is_set():
+            self._log("ℹ️ Cancellation already in progress.")
+            return
+
         self._log("⚠️ Cancellation requested by user...")
         self.cancellation_event.set()
 
         if self.thread_pool:
-            self.thread_pool.shutdown(wait=False, cancel_futures=True)
-        self.is_running = False
+            self._log("   Signaling all worker threads to stop and shutting down pool...")
+            self.thread_pool.shutdown(wait=False)
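`_save_creator_profile` above writes to a `.tmp` sibling and then calls `os.replace` to swap it over the real file, so a crash mid-write can never leave a half-written JSON profile (`os.replace` is atomic on POSIX, and on Windows for same-volume paths). The pattern in isolation:

    import json
    import os

    def atomic_write_json(path: str, data) -> None:
        """Write JSON to a temp file, then atomically swap it into place."""
        tmp_path = path + ".tmp"
        with open(tmp_path, "w", encoding="utf-8") as f:
            json.dump(data, f, indent=2)
        os.replace(tmp_path, path)  # readers see either the old or the new file, never a mix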
@@ -1,4 +1,5 @@
 import os
+import sys
 import queue
 import re
 import threading
@@ -53,6 +54,24 @@ from ..utils.text_utils import (
 )
 from ..config.constants import *
 
+def robust_clean_name(name):
+    """A more robust function to remove illegal characters for filenames and folders."""
+    if not name:
+        return ""
+    # Removes illegal characters for Windows, macOS, and Linux: < > : " / \ | ? *
+    # Also removes control characters (ASCII 0-31) which are invisible but invalid.
+    illegal_chars_pattern = r'[\x00-\x1f<>:"/\\|?*]'
+    cleaned_name = re.sub(illegal_chars_pattern, '', name)
+
+    # Remove leading/trailing spaces or periods, which can cause issues.
+    cleaned_name = cleaned_name.strip(' .')
+
+    # If the name is empty after cleaning (e.g., it was only illegal chars),
+    # provide a safe fallback name.
+    if not cleaned_name:
+        return "untitled_folder" # Or "untitled_file" depending on context
+    return cleaned_name
+
 class PostProcessorSignals (QObject ):
     progress_signal =pyqtSignal (str )
     file_download_status_signal =pyqtSignal (bool )
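A few inputs and outputs for the `robust_clean_name` helper added above, computed from its regex (note that, as written, it does not special-case Windows reserved device names such as `CON` or `NUL`):

    assert robust_clean_name('Chapter 3: "The Fall" / Part 1?') == 'Chapter 3 The Fall  Part 1'
    assert robust_clean_name('...') == 'untitled_folder'   # only periods, so the fallback applies
    assert robust_clean_name(' trailing dot. ') == 'trailing dot'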
@@ -63,7 +82,6 @@ class PostProcessorSignals (QObject ):
     worker_finished_signal = pyqtSignal(tuple)
 
 class PostProcessorWorker:
-
     def __init__(self, post_data, download_root, known_names,
                  filter_character_list, emitter,
                  unwanted_keywords, filter_mode, skip_zip,
@@ -103,7 +121,10 @@ class PostProcessorWorker:
                 text_export_format='txt',
                 single_pdf_mode=False,
                 project_root_dir=None,
-                processed_post_ids=None
+                processed_post_ids=None,
+                multipart_scope='both',
+                multipart_parts_count=4,
+                multipart_min_size_mb=100
     ):
         self.post = post_data
         self.download_root = download_root
@@ -165,7 +186,9 @@ class PostProcessorWorker:
         self.single_pdf_mode = single_pdf_mode
         self.project_root_dir = project_root_dir
         self.processed_post_ids = processed_post_ids if processed_post_ids is not None else []
+        self.multipart_scope = multipart_scope
+        self.multipart_parts_count = multipart_parts_count
+        self.multipart_min_size_mb = multipart_min_size_mb
         if self.compress_images and Image is None:
             self.logger("⚠️ Image compression disabled: Pillow library not found.")
             self.compress_images = False
@@ -200,7 +223,7 @@ class PostProcessorWorker:
             return self .dynamic_filter_holder .get_filters ()
         return self .filter_character_list_objects_initial
 
-    def _download_single_file(self, file_info, target_folder_path, headers, original_post_id_for_log, skip_event,
+    def _download_single_file(self, file_info, target_folder_path, post_page_url, original_post_id_for_log, skip_event,
                               post_title="", file_index_in_post=0, num_files_in_this_post=1,
                               manga_date_file_counter_ref=None,
                               forced_filename_override=None,
@@ -238,17 +261,28 @@ class PostProcessorWorker:
 
         if self.manga_mode_active:
             if self.manga_filename_style == STYLE_ORIGINAL_NAME:
-                filename_to_save_in_main_path = cleaned_original_api_filename
-                if self.manga_date_prefix and self.manga_date_prefix.strip():
-                    cleaned_prefix = clean_filename(self.manga_date_prefix.strip())
-                    if cleaned_prefix:
-                        filename_to_save_in_main_path = f"{cleaned_prefix} {filename_to_save_in_main_path}"
-                    else:
-                        self.logger(f"⚠️ Manga Original Name Mode: Provided prefix '{self.manga_date_prefix}' was empty after cleaning. Using original name only.")
+                # Get the post's publication or added date
+                published_date_str = self.post.get('published')
+                added_date_str = self.post.get('added')
+                formatted_date_str = "nodate" # Fallback if no date is found
+
+                date_to_use_str = published_date_str or added_date_str
+
+                if date_to_use_str:
+                    try:
+                        # Extract just the YYYY-MM-DD part from the timestamp
+                        formatted_date_str = date_to_use_str.split('T')[0]
+                    except Exception:
+                        self.logger(f"   ⚠️ Could not parse date '{date_to_use_str}'. Using 'nodate' prefix.")
+                else:
+                    self.logger(f"   ⚠️ Post ID {original_post_id_for_log} has no date. Using 'nodate' prefix.")
+
+                # Combine the date with the cleaned original filename
+                filename_to_save_in_main_path = f"{formatted_date_str}_{cleaned_original_api_filename}"
                 was_original_name_kept_flag = True
             elif self.manga_filename_style == STYLE_POST_TITLE:
                 if post_title and post_title.strip():
-                    cleaned_post_title_base = clean_filename(post_title.strip())
+                    cleaned_post_title_base = robust_clean_name(post_title.strip())
                     if num_files_in_this_post > 1:
                         if file_index_in_post == 0:
                             filename_to_save_in_main_path = f"{cleaned_post_title_base}{original_ext}"
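The new `STYLE_ORIGINAL_NAME` branch derives its prefix by splitting the post's ISO-style timestamp on `'T'`, so files sort chronologically by name. The same logic in isolation (the function name `date_prefixed` is illustrative only):

    def date_prefixed(original_name: str, published: str | None, added: str | None) -> str:
        date_part = (published or added or "").split("T")[0] or "nodate"
        return f"{date_part}_{original_name}"

    assert date_prefixed("page01.png", "2025-07-25T12:34:56", None) == "2025-07-25_page01.png"
    assert date_prefixed("page01.png", None, None) == "nodate_page01.png"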
@@ -318,7 +352,7 @@ class PostProcessorWorker:
                     self.logger(f"   ⚠️ Post ID {original_post_id_for_log} missing both 'published' and 'added' dates for STYLE_DATE_POST_TITLE. Using 'nodate'.")
 
                 if post_title and post_title.strip():
-                    temp_cleaned_title = clean_filename(post_title.strip())
+                    temp_cleaned_title = robust_clean_name(post_title.strip())
                     if not temp_cleaned_title or temp_cleaned_title.startswith("untitled_file"):
                         self.logger(f"⚠️ Manga mode (Date+PostTitle Style): Post title for post {original_post_id_for_log} ('{post_title}') was empty or generic after cleaning. Using 'post' as title part.")
                         cleaned_post_title_for_filename = "post"
@@ -398,6 +432,44 @@ class PostProcessorWorker:
         unique_id_for_part_file = uuid.uuid4().hex[:8]
         unique_part_file_stem_on_disk = f"{temp_file_base_for_unique_part}_{unique_id_for_part_file}"
         max_retries = 3
+        if not self.keep_in_post_duplicates:
+            final_save_path_check = os.path.join(target_folder_path, filename_to_save_in_main_path)
+            if os.path.exists(final_save_path_check):
+                try:
+                    # Use a HEAD request to get the expected size without downloading the body
+                    with requests.head(file_url, headers=file_download_headers, timeout=15, cookies=cookies_to_use_for_file, allow_redirects=True) as head_response:
+                        head_response.raise_for_status()
+                        expected_size = int(head_response.headers.get('Content-Length', -1))
+
+                    actual_size = os.path.getsize(final_save_path_check)
+
+                    if expected_size != -1 and actual_size == expected_size:
+                        self.logger(f" -> Skip (File Exists & Complete): '{filename_to_save_in_main_path}' is already on disk with the correct size.")
+
+                        # We still need to add its hash to the session to prevent duplicates in other modes
+                        # This is a quick hash calculation for the already existing file
+                        try:
+                            md5_hasher = hashlib.md5()
+                            with open(final_save_path_check, 'rb') as f_verify:
+                                for chunk in iter(lambda: f_verify.read(8192), b""):
+                                    md5_hasher.update(chunk)
+
+                            with self.downloaded_hash_counts_lock:
+                                self.downloaded_hash_counts[md5_hasher.hexdigest()] += 1
+                        except Exception as hash_exc:
+                            self.logger(f" ⚠️ Could not hash existing file '{filename_to_save_in_main_path}' for session: {hash_exc}")
+
+                        return 0, 1, filename_to_save_in_main_path, was_original_name_kept_flag, FILE_DOWNLOAD_STATUS_SKIPPED, None
+                    else:
+                        self.logger(f" ⚠️ File '{filename_to_save_in_main_path}' exists but is incomplete (Expected: {expected_size}, Actual: {actual_size}). Re-downloading.")
+
+                except requests.RequestException as e:
+                    self.logger(f" ⚠️ Could not verify size of existing file '{filename_to_save_in_main_path}': {e}. Proceeding with download.")
+
+        file_download_headers = {
+            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36',
+            'Referer': post_page_url
+        }
+
         retry_delay = 5
         downloaded_size_bytes = 0
         calculated_file_hash = None
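The skip check added in this hunk avoids re-downloading by comparing the on-disk size with the server-reported Content-Length from a HEAD request. A self-contained sketch of the same idea (the function name and parameters are illustrative; the real code also records the file's MD5 for session-wide duplicate tracking):

    import os
    import requests

    def existing_file_is_complete(url, path, headers=None, cookies=None):
        # Returns True only when the server reports a size and it matches disk.
        if not os.path.exists(path):
            return False
        resp = requests.head(url, headers=headers, cookies=cookies,
                             timeout=15, allow_redirects=True)
        resp.raise_for_status()
        expected = int(resp.headers.get('Content-Length', -1))
        return expected != -1 and os.path.getsize(path) == expected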
@@ -418,13 +490,31 @@ class PostProcessorWorker:
                     self.logger(f" Retrying download for '{api_original_filename}' (Overall Attempt {attempt_num_single_stream + 1}/{max_retries + 1})...")
                     time.sleep(retry_delay * (2 ** (attempt_num_single_stream - 1)))
                 self._emit_signal('file_download_status', True)
-                response = requests.get(file_url, headers=headers, timeout=(15, 300), stream=True, cookies=cookies_to_use_for_file)
+                response = requests.get(file_url, headers=file_download_headers, timeout=(15, 300), stream=True, cookies=cookies_to_use_for_file)
                 response.raise_for_status()
                 total_size_bytes = int(response.headers.get('Content-Length', 0))
-                num_parts_for_file = min(self.num_file_threads, MAX_PARTS_FOR_MULTIPART_DOWNLOAD)
+                # Use the dedicated parts count from the dialog, not the main thread count
+                num_parts_for_file = min(self.multipart_parts_count, MAX_PARTS_FOR_MULTIPART_DOWNLOAD)
+
+                file_is_eligible_by_scope = False
+                if self.multipart_scope == 'videos':
+                    if is_video(api_original_filename):
+                        file_is_eligible_by_scope = True
+                elif self.multipart_scope == 'archives':
+                    if is_archive(api_original_filename):
+                        file_is_eligible_by_scope = True
+                elif self.multipart_scope == 'both':
+                    if is_video(api_original_filename) or is_archive(api_original_filename):
+                        file_is_eligible_by_scope = True
+
+                min_size_in_bytes = self.multipart_min_size_mb * 1024 * 1024
+
                 attempt_multipart = (self.allow_multipart_download and MULTIPART_DOWNLOADER_AVAILABLE and
-                                     num_parts_for_file > 1 and total_size_bytes > MIN_SIZE_FOR_MULTIPART_DOWNLOAD and
+                                     file_is_eligible_by_scope and
+                                     num_parts_for_file > 1 and total_size_bytes > min_size_in_bytes and
                                      'bytes' in response.headers.get('Accept-Ranges', '').lower())

                 if self._check_pause(f"Multipart decision for '{api_original_filename}'"): break

                 if attempt_multipart:
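The multipart decision above now has four gates: the user opt-in, file type (scope), a configurable minimum size, and server support for byte ranges. Condensed as a predicate (a sketch; `is_video`/`is_archive` stand in for the project's helpers and are stubbed here by extension):

    import os

    VIDEO_EXTS = {'.mp4', '.mkv', '.webm', '.mov'}
    ARCHIVE_EXTS = {'.zip', '.rar', '.7z'}

    def is_video(name):    # illustrative stand-in for the project's helper
        return os.path.splitext(name)[1].lower() in VIDEO_EXTS

    def is_archive(name):  # illustrative stand-in for the project's helper
        return os.path.splitext(name)[1].lower() in ARCHIVE_EXTS

    def multipart_eligible(filename, size_bytes, scope, min_size_mb, parts, accept_ranges):
        by_scope = ((scope == 'videos' and is_video(filename)) or
                    (scope == 'archives' and is_archive(filename)) or
                    (scope == 'both' and (is_video(filename) or is_archive(filename))))
        return (by_scope and parts > 1
                and size_bytes > min_size_mb * 1024 * 1024
                and 'bytes' in (accept_ranges or '').lower())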
@@ -433,7 +523,7 @@ class PostProcessorWorker:
                     response_for_this_attempt = None
                     mp_save_path_for_unique_part_stem_arg = os.path.join(target_folder_path, f"{unique_part_file_stem_on_disk}{temp_file_ext_for_unique_part}")
                     mp_success, mp_bytes, mp_hash, mp_file_handle = download_file_in_parts(
-                        file_url, mp_save_path_for_unique_part_stem_arg, total_size_bytes, num_parts_for_file, headers, api_original_filename,
+                        file_url, mp_save_path_for_unique_part_stem_arg, total_size_bytes, num_parts_for_file, file_download_headers, api_original_filename,
                         emitter_for_multipart=self.emitter, cookies_for_chunk_session=cookies_to_use_for_file,
                         cancellation_event=self.cancellation_event, skip_event=skip_event, logger_func=self.logger,
                         pause_event=self.pause_event
@@ -508,12 +598,15 @@ class PostProcessorWorker:
                 if isinstance(e, requests.exceptions.ConnectionError) and ("Failed to resolve" in str(e) or "NameResolutionError" in str(e)):
                     self.logger(" 💡 This looks like a DNS resolution problem. Please check your internet connection, DNS settings, or VPN.")
             except requests.exceptions.RequestException as e:
-                self.logger(f" ❌ Download Error (Non-Retryable): {api_original_filename}. Error: {e}")
-                last_exception_for_retry_later = e
-                is_permanent_error = True
-                if ("Failed to resolve" in str(e) or "NameResolutionError" in str(e)):
-                    self.logger(" 💡 This looks like a DNS resolution problem. Please check your internet connection, DNS settings, or VPN.")
-                break
+                if e.response is not None and e.response.status_code == 403:
+                    self.logger(f" ⚠️ Download Error (403 Forbidden): {api_original_filename}. This often requires valid cookies.")
+                    self.logger(f" Will retry... Check your 'Use Cookie' settings if this persists.")
+                    last_exception_for_retry_later = e
+                else:
+                    self.logger(f" ❌ Download Error (Non-Retryable): {api_original_filename}. Error: {e}")
+                    last_exception_for_retry_later = e
+                    is_permanent_error = True
+                    break
             except Exception as e:
                 self.logger(f" ❌ Unexpected Download Error: {api_original_filename}: {e}\n{traceback.format_exc(limit=2)}")
                 last_exception_for_retry_later = e
@@ -582,6 +675,33 @@ class PostProcessorWorker:
                         os.remove(downloaded_part_file_path)
                     except OSError: pass
                 return 0, 1, filename_to_save_in_main_path, was_original_name_kept_flag, FILE_DOWNLOAD_STATUS_SKIPPED, None
+
+            if (self.compress_images and downloaded_part_file_path and
+                    is_image(api_original_filename) and
+                    os.path.getsize(downloaded_part_file_path) > 1.5 * 1024 * 1024):
+
+                self.logger(f" 🔄 Compressing '{api_original_filename}' to WebP...")
+                try:
+                    with Image.open(downloaded_part_file_path) as img:
+                        # Convert to RGB to avoid issues with paletted images or alpha channels in WebP
+                        if img.mode not in ('RGB', 'RGBA'):
+                            img = img.convert('RGBA')
+
+                        # Use an in-memory buffer to save the compressed image
+                        output_buffer = BytesIO()
+                        img.save(output_buffer, format='WebP', quality=85)
+
+                        # This buffer now holds the compressed data
+                        data_to_write_io = output_buffer
+
+                        # Update the filename to use the .webp extension
+                        base, _ = os.path.splitext(filename_to_save_in_main_path)
+                        filename_to_save_in_main_path = f"{base}.webp"
+                        self.logger(f" ✅ Compression successful. New size: {len(data_to_write_io.getvalue()) / (1024*1024):.2f} MB")
+
+                except Exception as e_compress:
+                    self.logger(f" ⚠️ Failed to compress '{api_original_filename}': {e_compress}. Saving original file instead.")
+                    data_to_write_io = None  # Ensure we fall back to saving the original
+
             effective_save_folder = target_folder_path
             base_name, extension = os.path.splitext(filename_to_save_in_main_path)
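The compression branch above re-encodes oversized images to WebP entirely in memory, so the original .part file is only removed once the buffer has been written out. The core Pillow round-trip, as a standalone sketch (quality 85 as in the patch; `src_path` is a placeholder):

    from io import BytesIO
    from PIL import Image

    def compress_to_webp(src_path, quality=85):
        # Returns a BytesIO holding the WebP-encoded image.
        with Image.open(src_path) as img:
            # Palette and other exotic modes can fail on WebP save; normalize first.
            if img.mode not in ('RGB', 'RGBA'):
                img = img.convert('RGBA')
            buf = BytesIO()
            img.save(buf, format='WebP', quality=quality)
        return buf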
@@ -599,15 +719,17 @@ class PostProcessorWorker:

             try:
                 if data_to_write_io:
+                    # Write the compressed data from the in-memory buffer
                     with open(final_save_path, 'wb') as f_out:
-                        time.sleep(0.05)
                         f_out.write(data_to_write_io.getvalue())
+                    # Clean up the original downloaded part file
                     if downloaded_part_file_path and os.path.exists(downloaded_part_file_path):
                         try:
                             os.remove(downloaded_part_file_path)
                         except OSError as e_rem:
                             self.logger(f" -> Failed to remove .part after compression: {e_rem}")
                 else:
+                    # No compression was done, just rename the original file
                     if downloaded_part_file_path and os.path.exists(downloaded_part_file_path):
                         time.sleep(0.1)
                         os.rename(downloaded_part_file_path, final_save_path)
@@ -654,7 +776,7 @@ class PostProcessorWorker:
                     self.logger(f" -> Failed to remove partially saved file: {final_save_path}")

             permanent_failure_details = {
-                'file_info': file_info, 'target_folder_path': target_folder_path, 'headers': headers,
+                'file_info': file_info, 'target_folder_path': target_folder_path, 'headers': file_download_headers,
                 'original_post_id_for_log': original_post_id_for_log, 'post_title': post_title,
                 'file_index_in_post': file_index_in_post, 'num_files_in_this_post': num_files_in_this_post,
                 'forced_filename_override': filename_to_save_in_main_path,
@@ -668,7 +790,7 @@ class PostProcessorWorker:
             details_for_failure = {
                 'file_info': file_info,
                 'target_folder_path': target_folder_path,
-                'headers': headers,
+                'headers': file_download_headers,
                 'original_post_id_for_log': original_post_id_for_log,
                 'post_title': post_title,
                 'file_index_in_post': file_index_in_post,
@@ -680,7 +802,6 @@ class PostProcessorWorker:
         else:
             return 0, 1, filename_to_save_in_main_path, was_original_name_kept_flag, FILE_DOWNLOAD_STATUS_FAILED_RETRYABLE_LATER, details_for_failure
-

     def process(self):

         result_tuple = (0, 0, [], [], [], None, None)
@@ -701,8 +822,11 @@ class PostProcessorWorker:
         history_data_for_this_post = None

         parsed_api_url = urlparse(self.api_url_input)
-        referer_url = f"https://{parsed_api_url.netloc}/"
-        headers = {'User-Agent': 'Mozilla/5.0', 'Referer': referer_url, 'Accept': '*/*'}
+        post_data = self.post
+        post_id = post_data.get('id', 'unknown_id')
+
+        post_page_url = f"https://{parsed_api_url.netloc}/{self.service}/user/{self.user_id}/post/{post_id}"
+        headers = {'User-Agent': 'Mozilla/5.0', 'Referer': post_page_url, 'Accept': '*/*'}
         link_pattern = re.compile(r"""<a\s+.*?href=["'](https?://[^"']+)["'][^>]*>(.*?)</a>""", re.IGNORECASE | re.DOTALL)
         post_data = self.post
         post_title = post_data.get('title', '') or 'untitled_post'
@@ -712,6 +836,17 @@ class PostProcessorWorker:

         effective_unwanted_keywords_for_folder_naming = self.unwanted_keywords.copy()
         is_full_creator_download_no_char_filter = not self.target_post_id_from_initial_url and not current_character_filters
+
+        if (self.show_external_links or self.extract_links_only):
+            embed_data = post_data.get('embed')
+            if isinstance(embed_data, dict) and embed_data.get('url'):
+                embed_url = embed_data['url']
+                embed_subject = embed_data.get('subject', embed_url)  # Use subject as link text, fallback to URL
+                platform = get_link_platform(embed_url)
+
+                self.logger(f" 🔗 Found embed link: {embed_url}")
+                self._emit_signal('external_link', post_title, embed_subject, embed_url, platform, "")
+
         if is_full_creator_download_no_char_filter and self.creator_download_folder_ignore_words:
             self.logger(f" Applying creator download specific folder ignore words ({len(self.creator_download_folder_ignore_words)} words).")
             effective_unwanted_keywords_for_folder_naming.update(self.creator_download_folder_ignore_words)
@@ -750,8 +885,8 @@ class PostProcessorWorker:

         all_files_from_post_api_for_char_check = []
         api_file_domain_for_char_check = urlparse(self.api_url_input).netloc
-        if not api_file_domain_for_char_check or not any(d in api_file_domain_for_char_check.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
-            api_file_domain_for_char_check = "kemono.su" if "kemono" in self.service.lower() else "coomer.party"
+        if not api_file_domain_for_char_check or not any(d in api_file_domain_for_char_check.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
+            api_file_domain_for_char_check = "kemono.cr" if "kemono" in self.service.lower() else "coomer.st"
         if post_main_file_info and isinstance(post_main_file_info, dict) and post_main_file_info.get('path'):
             original_api_name = post_main_file_info.get('name') or os.path.basename(post_main_file_info['path'].lstrip('/'))
             if original_api_name:
@@ -794,9 +929,9 @@ class PostProcessorWorker:
             try:
                 parsed_input_url_for_comments = urlparse(self.api_url_input)
                 api_domain_for_comments = parsed_input_url_for_comments.netloc
-                if not any(d in api_domain_for_comments.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
+                if not any(d in api_domain_for_comments.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
                     self.logger(f"⚠️ Unrecognized domain '{api_domain_for_comments}' for comment API. Defaulting based on service.")
-                    api_domain_for_comments = "kemono.su" if "kemono" in self.service.lower() else "coomer.party"
+                    api_domain_for_comments = "kemono.cr" if "kemono" in self.service.lower() else "coomer.st"
                 comments_data = fetch_post_comments(
                     api_domain_for_comments, self.service, self.user_id, post_id,
                     headers, self.logger, self.cancellation_event, self.pause_event,
@@ -848,17 +983,6 @@ class PostProcessorWorker:
                 result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
                 return result_tuple

-        if self.skip_words_list and (self.skip_words_scope == SKIP_SCOPE_POSTS or self.skip_words_scope == SKIP_SCOPE_BOTH):
-            if self._check_pause(f"Skip words (post title) for post {post_id}"):
-                result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
-                return result_tuple
-            post_title_lower = post_title.lower()
-            for skip_word in self.skip_words_list:
-                if skip_word.lower() in post_title_lower:
-                    self.logger(f" -> Skip Post (Keyword in Title '{skip_word}'): '{post_title[:50]}...'. Scope: {self.skip_words_scope}")
-                    result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
-                    return result_tuple
-
         if not self.extract_links_only and self.manga_mode_active and current_character_filters and (self.char_filter_scope == CHAR_SCOPE_TITLE or self.char_filter_scope == CHAR_SCOPE_BOTH) and not post_is_candidate_by_title_char_match:
             self.logger(f" -> Skip Post (Manga Mode with Title/Both Scope - No Title Char Match): Title '{post_title[:50]}' doesn't match filters.")
             self._emit_signal('missed_character_post', post_title, "Manga Mode: No title match for character filter (Title/Both scope)")
@@ -869,6 +993,7 @@ class PostProcessorWorker:
             self.logger(f"⚠️ Corrupt attachment data for post {post_id} (expected list, got {type(post_attachments)}). Skipping attachments.")
             post_attachments = []

+        # CORRECTED LOGIC: Determine folder path BEFORE skip checks
         base_folder_names_for_post_content = []
         determined_post_save_path_for_history = self.override_output_dir if self.override_output_dir else self.download_root
         if not self.extract_links_only and self.use_subfolders:
@@ -967,7 +1092,7 @@ class PostProcessorWorker:
                 determined_post_save_path_for_history = os.path.join(determined_post_save_path_for_history, base_folder_names_for_post_content[0])

         if not self.extract_links_only and self.use_post_subfolders:
-            cleaned_post_title_for_sub = clean_folder_name(post_title)
+            cleaned_post_title_for_sub = robust_clean_name(post_title)
             post_id_for_fallback = self.post.get('id', 'unknown_id')

             if not cleaned_post_title_for_sub or cleaned_post_title_for_sub == "untitled_folder":
@@ -1017,6 +1142,28 @@ class PostProcessorWorker:
                     break
             determined_post_save_path_for_history = os.path.join(base_path_for_post_subfolder, final_post_subfolder_name)

+        if self.skip_words_list and (self.skip_words_scope == SKIP_SCOPE_POSTS or self.skip_words_scope == SKIP_SCOPE_BOTH):
+            if self._check_pause(f"Skip words (post title) for post {post_id}"):
+                result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
+                return result_tuple
+            post_title_lower = post_title.lower()
+            for skip_word in self.skip_words_list:
+                if skip_word.lower() in post_title_lower:
+                    self.logger(f" -> Skip Post (Keyword in Title '{skip_word}'): '{post_title[:50]}...'. Scope: {self.skip_words_scope}")
+                    # Create a history object for the skipped post to record its ID
+                    history_data_for_skipped_post = {
+                        'post_id': post_id,
+                        'service': self.service,
+                        'user_id': self.user_id,
+                        'post_title': post_title,
+                        'top_file_name': "N/A (Post Skipped)",
+                        'num_files': num_potential_files_in_post,
+                        'upload_date_str': post_data.get('published') or post_data.get('added') or "Unknown",
+                        'download_location': determined_post_save_path_for_history
+                    }
+                    result_tuple = (0, num_potential_files_in_post, [], [], [], history_data_for_skipped_post, None)
+                    return result_tuple
+
         if self.filter_mode == 'text_only' and not self.extract_links_only:
             self.logger(f" Mode: Text Only (Scope: {self.text_only_scope})")
             post_title_lower = post_title.lower()
@@ -1124,11 +1271,18 @@ class PostProcessorWorker:
                 if FPDF:
                     self.logger(f" Creating formatted PDF for {'comments' if self.text_only_scope == 'comments' else 'content'}...")
                     pdf = PDF()
+                    if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
+                        # If the application is run as a bundled exe, _MEIPASS is the temp folder
+                        base_path = sys._MEIPASS
+                    else:
+                        # If running as a normal .py script, use the project_root_dir
+                        base_path = self.project_root_dir
+
                     font_path = ""
                     bold_font_path = ""
-                    if self.project_root_dir:
-                        font_path = os.path.join(self.project_root_dir, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
-                        bold_font_path = os.path.join(self.project_root_dir, 'data', 'dejavu-sans', 'DejaVuSans-Bold.ttf')
+                    if base_path:
+                        font_path = os.path.join(base_path, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
+                        bold_font_path = os.path.join(base_path, 'data', 'dejavu-sans', 'DejaVuSans-Bold.ttf')

                     try:
                         if not os.path.exists(font_path): raise RuntimeError(f"Font file not found: {font_path}")
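Background for the base-path change above: PyInstaller one-file builds unpack bundled data to a temporary directory and expose it as `sys._MEIPASS`, so paths relative to the source tree break inside the frozen exe. The generic pattern (a common idiom, not project code):

    import os
    import sys

    def resource_path(*relative_parts):
        # Resolve a bundled data file both in a PyInstaller exe and a .py run.
        if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
            base = sys._MEIPASS                                # PyInstaller temp dir
        else:
            base = os.path.dirname(os.path.abspath(__file__))  # plain script run
        return os.path.join(base, *relative_parts)

    # e.g. resource_path('data', 'dejavu-sans', 'DejaVuSans.ttf')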
@@ -1261,9 +1415,8 @@ class PostProcessorWorker:

         all_files_from_post_api = []
         api_file_domain = urlparse(self.api_url_input).netloc
-        if not api_file_domain or not any(d in api_file_domain.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
-            api_file_domain = "kemono.su" if "kemono" in self.service.lower() else "coomer.party"
-
+        if not api_file_domain or not any(d in api_file_domain.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
+            api_file_domain = "kemono.cr" if "kemono" in self.service.lower() else "coomer.st"
         if post_main_file_info and isinstance(post_main_file_info, dict) and post_main_file_info.get('path'):
             file_path = post_main_file_info['path'].lstrip('/')
             original_api_name = post_main_file_info.get('name') or os.path.basename(file_path)
@@ -1385,7 +1538,17 @@ class PostProcessorWorker:

         if not all_files_from_post_api:
             self.logger(f" No files found to download for post {post_id}.")
-            result_tuple = (0, 0, [], [], [], None, None)
+            history_data_for_no_files_post = {
+                'post_title': post_title,
+                'post_id': post_id,
+                'service': self.service,
+                'user_id': self.user_id,
+                'top_file_name': "N/A (No Files)",
+                'num_files': 0,
+                'upload_date_str': post_data.get('published') or post_data.get('added') or "Unknown",
+                'download_location': determined_post_save_path_for_history
+            }
+            result_tuple = (0, 0, [], [], [], history_data_for_no_files_post, None)
             return result_tuple

         files_to_download_info_list = []
@@ -1511,7 +1674,7 @@ class PostProcessorWorker:
                     self._download_single_file,
                     file_info=file_info_to_dl,
                     target_folder_path=current_path_for_file_instance,
-                    headers=headers, original_post_id_for_log=post_id, skip_event=self.skip_current_file_flag,
+                    post_page_url=post_page_url, original_post_id_for_log=post_id, skip_event=self.skip_current_file_flag,
                     post_title=post_title, manga_date_file_counter_ref=manga_date_counter_to_pass,
                     manga_global_file_counter_ref=manga_global_counter_to_pass, folder_context_name_for_history=folder_context_for_file,
                     file_index_in_post=file_idx, num_files_in_this_post=len(files_to_download_info_list)
@@ -1605,10 +1768,12 @@ class PostProcessorWorker:
             if not self.extract_links_only and self.use_post_subfolders and total_downloaded_this_post == 0:
                 path_to_check_for_emptiness = determined_post_save_path_for_history
                 try:
+                    # Check if the path is a directory and if it's empty
                     if os.path.isdir(path_to_check_for_emptiness) and not os.listdir(path_to_check_for_emptiness):
                         self.logger(f" 🗑️ Removing empty post-specific subfolder: '{path_to_check_for_emptiness}'")
                         os.rmdir(path_to_check_for_emptiness)
                 except OSError as e_rmdir:
+                    # Log if removal fails for any reason (e.g., permissions)
                     self.logger(f" ⚠️ Could not remove empty post-specific subfolder '{path_to_check_for_emptiness}': {e_rmdir}")

             result_tuple = (total_downloaded_this_post, total_skipped_this_post,
@@ -1617,6 +1782,15 @@ class PostProcessorWorker:
                             None)

         finally:
+            if not self.extract_links_only and self.use_post_subfolders and total_downloaded_this_post == 0:
+                path_to_check_for_emptiness = determined_post_save_path_for_history
+                try:
+                    if os.path.isdir(path_to_check_for_emptiness) and not os.listdir(path_to_check_for_emptiness):
+                        self.logger(f" 🗑️ Removing empty post-specific subfolder: '{path_to_check_for_emptiness}'")
+                        os.rmdir(path_to_check_for_emptiness)
+                except OSError as e_rmdir:
+                    self.logger(f" ⚠️ Could not remove potentially empty subfolder '{path_to_check_for_emptiness}': {e_rmdir}")
+
             self._emit_signal('worker_finished', result_tuple)

         return result_tuple
@@ -1657,6 +1831,8 @@ class DownloadThread(QThread):
                  remove_from_filename_words_list=None,
                  manga_date_prefix='',
                  allow_multipart_download=True,
+                 multipart_parts_count=4,
+                 multipart_min_size_mb=100,
                  selected_cookie_file=None,
                  override_output_dir=None,
                  app_base_dir=None,
@@ -1719,6 +1895,8 @@ class DownloadThread(QThread):
         self.remove_from_filename_words_list = remove_from_filename_words_list
         self.manga_date_prefix = manga_date_prefix
         self.allow_multipart_download = allow_multipart_download
+        self.multipart_parts_count = multipart_parts_count
+        self.multipart_min_size_mb = multipart_min_size_mb
         self.selected_cookie_file = selected_cookie_file
         self.app_base_dir = app_base_dir
         self.cookie_text = cookie_text
@@ -1860,6 +2038,8 @@ class DownloadThread(QThread):
             'text_only_scope': self.text_only_scope,
             'text_export_format': self.text_export_format,
             'single_pdf_mode': self.single_pdf_mode,
+            'multipart_parts_count': self.multipart_parts_count,
+            'multipart_min_size_mb': self.multipart_min_size_mb,
             'project_root_dir': self.project_root_dir,
         }
@@ -3,33 +3,35 @@ import os
 import re
 import traceback
 import json
+import base64
+import time
 from urllib.parse import urlparse, urlunparse, parse_qs, urlencode

 # --- Third-Party Library Imports ---
+# Make sure to install these: pip install requests pycryptodome gdown
 import requests

 try:
-    from mega import Mega
-    MEGA_AVAILABLE = True
+    from Crypto.Cipher import AES
+    PYCRYPTODOME_AVAILABLE = True
 except ImportError:
-    MEGA_AVAILABLE = False
+    PYCRYPTODOME_AVAILABLE = False

 try:
     import gdown
-    GDOWN_AVAILABLE = True
+    GDRIVE_AVAILABLE = True
 except ImportError:
-    GDOWN_AVAILABLE = False
+    GDRIVE_AVAILABLE = False

-# --- Helper Functions ---
+# --- Constants ---
+MEGA_API_URL = "https://g.api.mega.co.nz"
+
+# --- Helper Functions (Original and New) ---

 def _get_filename_from_headers(headers):
     """
     Extracts a filename from the Content-Disposition header.
-
-    Args:
-        headers (dict): A dictionary of HTTP response headers.
-
-    Returns:
-        str or None: The extracted filename, or None if not found.
+    (This is from your original file and is kept for Dropbox downloads)
     """
     cd = headers.get('content-disposition')
     if not cd:
@@ -37,97 +39,205 @@ def _get_filename_from_headers(headers):

     fname_match = re.findall('filename="?([^"]+)"?', cd)
     if fname_match:
-        # Sanitize the filename to prevent directory traversal issues
-        # and remove invalid characters for most filesystems.
         sanitized_name = re.sub(r'[<>:"/\\|?*]', '_', fname_match[0].strip())
         return sanitized_name
     return None

-# --- Main Service Downloader Functions ---
-
-def download_mega_file(mega_link, download_path=".", logger_func=print):
-    """
-    Downloads a file from a public Mega.nz link.
-
-    Args:
-        mega_link (str): The public Mega.nz link to the file.
-        download_path (str): The directory to save the downloaded file.
-        logger_func (callable): Function to use for logging.
-    """
-    if not MEGA_AVAILABLE:
-        logger_func("❌ Error: mega.py library is not installed. Cannot download from Mega.")
-        logger_func(" Please install it: pip install mega.py")
-        raise ImportError("mega.py library not found.")
-
-    logger_func(f" [Mega] Initializing Mega client...")
+# --- NEW: Helper functions for Mega decryption ---
+
+def urlb64_to_b64(s):
+    """Converts a URL-safe base64 string to a standard base64 string."""
+    s = s.replace('-', '+').replace('_', '/')
+    s += '=' * (-len(s) % 4)
+    return s
+
+def b64_to_bytes(s):
+    """Decodes a URL-safe base64 string to bytes."""
+    return base64.b64decode(urlb64_to_b64(s))
+
+def bytes_to_hex(b):
+    """Converts bytes to a hex string."""
+    return b.hex()
+
+def hex_to_bytes(h):
+    """Converts a hex string to bytes."""
+    return bytes.fromhex(h)
+
+def hrk2hk(hex_raw_key):
+    """Derives the final AES key from the raw key components for Mega."""
+    key_part1 = int(hex_raw_key[0:16], 16)
+    key_part2 = int(hex_raw_key[16:32], 16)
+    key_part3 = int(hex_raw_key[32:48], 16)
+    key_part4 = int(hex_raw_key[48:64], 16)
+
+    final_key_part1 = key_part1 ^ key_part3
+    final_key_part2 = key_part2 ^ key_part4
+
+    return f'{final_key_part1:016x}{final_key_part2:016x}'
+
+def decrypt_at(at_b64, key_bytes):
+    """Decrypts the 'at' attribute to get file metadata."""
+    at_bytes = b64_to_bytes(at_b64)
+    iv = b'\0' * 16
+    cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
+    decrypted_at = cipher.decrypt(at_bytes)
+    return decrypted_at.decode('utf-8').strip('\0').replace('MEGA', '')
+
+# --- NEW: Core Logic for Mega Downloads ---
+
+def get_mega_file_info(file_id, file_key, session, logger_func):
+    """Fetches file metadata and the temporary download URL from the Mega API."""
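How the helpers above fit together: a Mega file key decodes to 32 bytes; XOR-folding its two 16-byte halves (what `hrk2hk` does) yields the 16-byte AES key, and hex digits 32-48 of the raw key seed the CTR counter. A worked example on a dummy key (all values made up):

    raw = bytes(range(32))                  # pretend 32-byte decoded node key
    hex_raw_key = raw.hex()                 # 64 hex digits

    k1 = int(hex_raw_key[0:16], 16)
    k2 = int(hex_raw_key[16:32], 16)
    k3 = int(hex_raw_key[32:48], 16)
    k4 = int(hex_raw_key[48:64], 16)

    aes_key_hex = f'{k1 ^ k3:016x}{k2 ^ k4:016x}'  # what hrk2hk returns
    ctr_iv_hex = hex_raw_key[32:48] + '0' * 16     # nonce half + zeroed counter
    print(aes_key_hex, ctr_iv_hex)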
-    try:
-        mega_client = Mega()
-        m = mega_client.login()
-        logger_func(f" [Mega] Attempting to download from: {mega_link}")
-
-        if not os.path.exists(download_path):
-            os.makedirs(download_path, exist_ok=True)
-            logger_func(f" [Mega] Created download directory: {download_path}")
-
-        # The download_url method handles file info fetching and saving internally.
-        downloaded_file_path = m.download_url(mega_link, dest_path=download_path)
-
-        if downloaded_file_path and os.path.exists(downloaded_file_path):
-            logger_func(f" [Mega] ✅ File downloaded successfully! Saved as: {downloaded_file_path}")
-        else:
-            raise Exception(f"Mega download failed or file not found. Returned: {downloaded_file_path}")
-    except Exception as e:
-        logger_func(f" [Mega] ❌ An unexpected error occurred during Mega download: {e}")
-        traceback.print_exc(limit=2)
-        raise  # Re-raise the exception to be handled by the calling worker
+    try:
+        hex_raw_key = bytes_to_hex(b64_to_bytes(file_key))
+        hex_key = hrk2hk(hex_raw_key)
+        key_bytes = hex_to_bytes(hex_key)
+
+        # Request file attributes
+        payload = [{"a": "g", "p": file_id}]
+        response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
+        response.raise_for_status()
+        res_json = response.json()
+
+        if isinstance(res_json, list) and isinstance(res_json[0], int) and res_json[0] < 0:
+            logger_func(f" [Mega] ❌ API Error: {res_json[0]}. The link may be invalid or removed.")
+            return None
+
+        file_size = res_json[0]['s']
+        at_b64 = res_json[0]['at']
+
+        # Decrypt attributes to get the file name
+        at_dec_json_str = decrypt_at(at_b64, key_bytes)
+        at_dec_json = json.loads(at_dec_json_str)
+        file_name = at_dec_json['n']
+
+        # Request the temporary download URL
+        payload = [{"a": "g", "g": 1, "p": file_id}]
+        response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
+        response.raise_for_status()
+        res_json = response.json()
+        dl_temp_url = res_json[0]['g']
+
+        return {
+            'file_name': file_name,
+            'file_size': file_size,
+            'dl_url': dl_temp_url,
+            'hex_raw_key': hex_raw_key
+        }
+    except (requests.RequestException, json.JSONDecodeError, KeyError, ValueError) as e:
+        logger_func(f" [Mega] ❌ Failed to get file info: {e}")
+        return None
+
+def download_and_decrypt_mega_file(info, download_path, logger_func):
+    """Downloads the file and decrypts it chunk by chunk, reporting progress."""
+    file_name = info['file_name']
+    file_size = info['file_size']
+    dl_url = info['dl_url']
+    hex_raw_key = info['hex_raw_key']
+
+    final_path = os.path.join(download_path, file_name)
+
+    if os.path.exists(final_path) and os.path.getsize(final_path) == file_size:
+        logger_func(f" [Mega] ℹ️ File '{file_name}' already exists with the correct size. Skipping.")
+        return
+
+    # Prepare for decryption
+    key = hex_to_bytes(hrk2hk(hex_raw_key))
+    iv_hex = hex_raw_key[32:48] + '0000000000000000'
+    iv_bytes = hex_to_bytes(iv_hex)
+    cipher = AES.new(key, AES.MODE_CTR, initial_value=iv_bytes, nonce=b'')
+
+    try:
+        with requests.get(dl_url, stream=True, timeout=(15, 300)) as r:
+            r.raise_for_status()
+            downloaded_bytes = 0
+            last_log_time = time.time()
+
+            with open(final_path, 'wb') as f:
+                for chunk in r.iter_content(chunk_size=8192):
+                    if not chunk:
+                        continue
+                    decrypted_chunk = cipher.decrypt(chunk)
+                    f.write(decrypted_chunk)
+                    downloaded_bytes += len(chunk)

+                    # Log progress every second
+                    current_time = time.time()
+                    if current_time - last_log_time > 1:
+                        progress_percent = (downloaded_bytes / file_size) * 100 if file_size > 0 else 0
+                        logger_func(f" [Mega] Downloading '{file_name}': {downloaded_bytes/1024/1024:.2f}MB / {file_size/1024/1024:.2f}MB ({progress_percent:.1f}%)")
+                        last_log_time = current_time
+
+        logger_func(f" [Mega] ✅ Successfully downloaded '{file_name}' to '{download_path}'")
+    except requests.RequestException as e:
+        logger_func(f" [Mega] ❌ Download failed for '{file_name}': {e}")
+    except IOError as e:
+        logger_func(f" [Mega] ❌ Could not write to file '{final_path}': {e}")
+    except Exception as e:
+        logger_func(f" [Mega] ❌ An unexpected error occurred during download/decryption: {e}")
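The replacement entry point below accepts both the legacy '#!id!key' and the newer '/file/id#key' Mega link shapes with a single regex. For example (dummy IDs and keys):

    import re

    MEGA_LINK_RE = re.compile(
        r'mega(?:\.co)?\.nz/(?:file/|#!)?([a-zA-Z0-9]+)(?:#|!)([a-zA-Z0-9_.-]+)')

    for link in ("https://mega.nz/file/AbCd1234#some_key-here",   # new format
                 "https://mega.co.nz/#!AbCd1234!some_key-here"):  # legacy format
        m = MEGA_LINK_RE.search(link)
        print(m.group(1), m.group(2))  # -> file id and file key in both cases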
-def download_gdrive_file(gdrive_link, download_path=".", logger_func=print):
-    """
-    Downloads a file from a public Google Drive link using the gdown library.
-
-    Args:
-        gdrive_link (str): The public Google Drive link to the file.
-        download_path (str): The directory to save the downloaded file.
-        logger_func (callable): Function to use for logging.
-    """
-    if not GDOWN_AVAILABLE:
-        logger_func("❌ Error: gdown library is not installed. Cannot download from Google Drive.")
-        logger_func(" Please install it: pip install gdown")
-        raise ImportError("gdown library not found.")
-
-    logger_func(f" [GDrive] Attempting to download: {gdrive_link}")
-    try:
-        if not os.path.exists(download_path):
-            os.makedirs(download_path, exist_ok=True)
-            logger_func(f" [GDrive] Created download directory: {download_path}")
-
-        # gdown handles finding the file ID and downloading. 'fuzzy=True' helps with various URL formats.
-        output_file_path = gdown.download(gdrive_link, output=download_path, quiet=False, fuzzy=True)
-
-        if output_file_path and os.path.exists(output_file_path):
-            logger_func(f" [GDrive] ✅ Google Drive file downloaded successfully: {output_file_path}")
-        else:
-            raise Exception(f"gdown download failed or file not found. Returned: {output_file_path}")
-    except Exception as e:
-        logger_func(f" [GDrive] ❌ An error occurred during Google Drive download: {e}")
-        traceback.print_exc(limit=2)
-        raise
+# --- REPLACEMENT Main Service Downloader Function for Mega ---
+
+def download_mega_file(mega_url, download_path, logger_func=print):
+    """
+    Downloads a file from a Mega.nz URL using direct requests and decryption.
+    This replaces the old mega.py implementation.
+    """
+    if not PYCRYPTODOME_AVAILABLE:
+        logger_func("❌ Mega download failed: 'pycryptodome' library is not installed. Please run: pip install pycryptodome")
+        return
+
+    logger_func(f" [Mega] Initializing download for: {mega_url}")
+
+    # Regex to capture file ID and key from both old and new URL formats
+    match = re.search(r'mega(?:\.co)?\.nz/(?:file/|#!)?([a-zA-Z0-9]+)(?:#|!)([a-zA-Z0-9_.-]+)', mega_url)
+    if not match:
+        logger_func(f" [Mega] ❌ Error: Invalid Mega URL format.")
+        return
+
+    file_id = match.group(1)
+    file_key = match.group(2)
+
+    session = requests.Session()
+    session.headers.update({'User-Agent': 'Kemono-Downloader-PyQt/1.0'})
+
+    file_info = get_mega_file_info(file_id, file_key, session, logger_func)
+    if not file_info:
+        logger_func(f" [Mega] ❌ Failed to get file info. The link may be invalid or expired. Aborting.")
+        return
+
+    logger_func(f" [Mega] File found: '{file_info['file_name']}' (Size: {file_info['file_size'] / 1024 / 1024:.2f} MB)")
+
+    download_and_decrypt_mega_file(file_info, download_path, logger_func)
+
+# --- ORIGINAL Functions for Google Drive and Dropbox (Unchanged) ---
+
+def download_gdrive_file(url, download_path, logger_func=print):
+    """Downloads a file from a Google Drive link."""
+    if not GDRIVE_AVAILABLE:
+        logger_func("❌ Google Drive download failed: 'gdown' library is not installed.")
+        return
+    try:
+        logger_func(f" [G-Drive] Starting download for: {url}")
+        logger_func(" [G-Drive] Download in progress... This may take some time. Please wait.")
+
+        output_path = gdown.download(url, output=download_path, quiet=True, fuzzy=True)
+
+        if output_path and os.path.exists(output_path):
+            logger_func(f" [G-Drive] ✅ Successfully downloaded to '{output_path}'")
+        else:
+            logger_func(f" [G-Drive] ❌ Download failed. The file may have been moved, deleted, or is otherwise inaccessible.")
+    except Exception as e:
+        logger_func(f" [G-Drive] ❌ An unexpected error occurred: {e}")

 def download_dropbox_file(dropbox_link, download_path=".", logger_func=print):
     """
     Downloads a file from a public Dropbox link by modifying the URL for direct download.
-
-    Args:
-        dropbox_link (str): The public Dropbox link to the file.
-        download_path (str): The directory to save the downloaded file.
-        logger_func (callable): Function to use for logging.
     """
     logger_func(f" [Dropbox] Attempting to download: {dropbox_link}")

-    # Modify the Dropbox URL to force a direct download instead of showing the preview page.
     parsed_url = urlparse(dropbox_link)
     query_params = parse_qs(parsed_url.query)
     query_params['dl'] = ['1']
@@ -144,13 +254,11 @@ def download_dropbox_file(dropbox_link, download_path=".", logger_func=print):
         with requests.get(direct_download_url, stream=True, allow_redirects=True, timeout=(10, 300)) as r:
             r.raise_for_status()

-            # Determine filename from headers or URL
             filename = _get_filename_from_headers(r.headers) or os.path.basename(parsed_url.path) or "dropbox_file"
             full_save_path = os.path.join(download_path, filename)

             logger_func(f" [Dropbox] Starting download of '{filename}'...")

-            # Write file to disk in chunks
             with open(full_save_path, 'wb') as f:
                 for chunk in r.iter_content(chunk_size=8192):
                     f.write(chunk)
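The Dropbox downloader above works because a share page and the raw file differ only by the `dl` query parameter. The URL rewrite in isolation (mirrors the urlparse/parse_qs flow already in the function):

    from urllib.parse import urlparse, urlunparse, parse_qs, urlencode

    def to_direct_dropbox_url(share_url):
        # Rewrite ?dl=0 (preview page) to ?dl=1 (direct file download).
        parts = urlparse(share_url)
        query = parse_qs(parts.query)
        query['dl'] = ['1']
        return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

    print(to_direct_dropbox_url("https://www.dropbox.com/s/abc123/file.zip?dl=0"))
    # -> https://www.dropbox.com/s/abc123/file.zip?dl=1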
@@ -1,4 +1,5 @@
 # --- Standard Library Imports ---
+# --- Standard Library Imports ---
 import os
 import time
 import hashlib
@@ -10,28 +11,49 @@ from concurrent.futures import ThreadPoolExecutor, as_completed

 # --- Third-Party Library Imports ---
 import requests
+MULTIPART_DOWNLOADER_AVAILABLE = True

 # --- Module Constants ---
 CHUNK_DOWNLOAD_RETRY_DELAY = 2
 MAX_CHUNK_DOWNLOAD_RETRIES = 1
 DOWNLOAD_CHUNK_SIZE_ITER = 1024 * 256  # 256 KB per iteration chunk

-# Flag to indicate if this module and its dependencies are available.
-# This was missing and caused the ImportError.
-MULTIPART_DOWNLOADER_AVAILABLE = True

 def _download_individual_chunk(
-    chunk_url, temp_file_path, start_byte, end_byte, headers,
+    chunk_url, chunk_temp_file_path, start_byte, end_byte, headers,
     part_num, total_parts, progress_data, cancellation_event,
     skip_event, pause_event, global_emit_time_ref, cookies_for_chunk,
     logger_func, emitter=None, api_original_filename=None
 ):
     """
-    Downloads a single segment (chunk) of a larger file. This function is
-    intended to be run in a separate thread by a ThreadPoolExecutor.
+    Downloads a single segment (chunk) of a larger file to its own unique part file.
+    This function is intended to be run in a separate thread by a ThreadPoolExecutor.

-    It handles retries, pauses, and cancellations for its specific chunk.
+    It handles retries, pauses, and cancellations for its specific chunk. If a
+    download fails, the partial chunk file is removed, allowing a clean retry later.
+
+    Args:
+        chunk_url (str): The URL to download the file from.
+        chunk_temp_file_path (str): The unique path to save this specific chunk
+            (e.g., 'my_video.mp4.part0').
+        start_byte (int): The starting byte for the Range header.
+        end_byte (int): The ending byte for the Range header.
+        headers (dict): The HTTP headers to use for the request.
+        part_num (int): The index of this chunk (e.g., 0 for the first part).
+        total_parts (int): The total number of chunks for the entire file.
+        progress_data (dict): A thread-safe dictionary for sharing progress.
+        cancellation_event (threading.Event): Event to signal cancellation.
+        skip_event (threading.Event): Event to signal skipping the file.
+        pause_event (threading.Event): Event to signal pausing the download.
+        global_emit_time_ref (list): A mutable list with one element (a timestamp)
+            to rate-limit UI updates.
+        cookies_for_chunk (dict): Cookies to use for the request.
+        logger_func (function): A function to log messages.
+        emitter (queue.Queue or QObject): Emitter for sending progress to the UI.
+        api_original_filename (str): The original filename for UI display.
+
+    Returns:
+        tuple: A tuple containing (bytes_downloaded, success_flag).
     """
     # --- Pre-download checks for control events ---
     if cancellation_event and cancellation_event.is_set():
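Each chunk worker described in the docstring above fetches one byte window via an HTTP Range header into its own part file. The essential request, as a sketch (`url`, the offsets, and the part filename are placeholders):

    import requests

    def fetch_range(url, start_byte, end_byte, part_path, headers=None):
        # Download bytes [start_byte, end_byte] of url into one .partN file.
        range_headers = dict(headers or {})
        range_headers['Range'] = f"bytes={start_byte}-{end_byte}"
        with requests.get(url, headers=range_headers, stream=True,
                          timeout=(10, 120)) as resp:
            resp.raise_for_status()  # expect 206 Partial Content
            with open(part_path, 'wb') as f:
                for segment in resp.iter_content(chunk_size=256 * 1024):
                    if segment:
                        f.write(segment)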
@@ -49,103 +71,135 @@ def _download_individual_chunk(
             time.sleep(0.2)
         logger_func(f"   [Chunk {part_num + 1}/{total_parts}] Download resumed.")
 
-    # Prepare headers for the specific byte range of this chunk
-    chunk_headers = headers.copy()
-    if end_byte != -1:
-        chunk_headers['Range'] = f"bytes={start_byte}-{end_byte}"
-
-    bytes_this_chunk = 0
-    last_speed_calc_time = time.time()
-    bytes_at_last_speed_calc = 0
-
-    # --- Retry Loop ---
-    for attempt in range(MAX_CHUNK_DOWNLOAD_RETRIES + 1):
-        if cancellation_event and cancellation_event.is_set():
-            return bytes_this_chunk, False
-
-        try:
-            if attempt > 0:
-                logger_func(f"   [Chunk {part_num + 1}/{total_parts}] Retrying (Attempt {attempt + 1}/{MAX_CHUNK_DOWNLOAD_RETRIES + 1})...")
-                time.sleep(CHUNK_DOWNLOAD_RETRY_DELAY * (2 ** (attempt - 1)))
-                last_speed_calc_time = time.time()
-                bytes_at_last_speed_calc = bytes_this_chunk
-
-            logger_func(f"   🚀 [Chunk {part_num + 1}/{total_parts}] Starting download: bytes {start_byte}-{end_byte if end_byte != -1 else 'EOF'}")
-            response = requests.get(chunk_url, headers=chunk_headers, timeout=(10, 120), stream=True, cookies=cookies_for_chunk)
-            response.raise_for_status()
-
-            # --- Data Writing Loop ---
-            with open(temp_file_path, 'r+b') as f:
-                f.seek(start_byte)
-                for data_segment in response.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE_ITER):
-                    if cancellation_event and cancellation_event.is_set():
-                        return bytes_this_chunk, False
-                    if pause_event and pause_event.is_set():
-                        # Handle pausing during the download stream
-                        logger_func(f"   [Chunk {part_num + 1}/{total_parts}] Paused...")
-                        while pause_event.is_set():
-                            if cancellation_event and cancellation_event.is_set(): return bytes_this_chunk, False
-                            time.sleep(0.2)
-                        logger_func(f"   [Chunk {part_num + 1}/{total_parts}] Resumed.")
-
-                    if data_segment:
-                        f.write(data_segment)
-                        bytes_this_chunk += len(data_segment)
-
-                        # Update shared progress data structure
-                        with progress_data['lock']:
-                            progress_data['total_downloaded_so_far'] += len(data_segment)
-                            progress_data['chunks_status'][part_num]['downloaded'] = bytes_this_chunk
-
-                            # Calculate and update speed for this chunk
-                            current_time = time.time()
-                            time_delta = current_time - last_speed_calc_time
-                            if time_delta > 0.5:
-                                bytes_delta = bytes_this_chunk - bytes_at_last_speed_calc
-                                current_speed_bps = (bytes_delta * 8) / time_delta if time_delta > 0 else 0
-                                progress_data['chunks_status'][part_num]['speed_bps'] = current_speed_bps
-                                last_speed_calc_time = current_time
-                                bytes_at_last_speed_calc = bytes_this_chunk
-
-                            # Emit progress signal to the UI via the queue
-                            if emitter and (current_time - global_emit_time_ref[0] > 0.25):
-                                global_emit_time_ref[0] = current_time
-                                status_list_copy = [dict(s) for s in progress_data['chunks_status']]
-                                if isinstance(emitter, queue.Queue):
-                                    emitter.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
-                                elif hasattr(emitter, 'file_progress_signal'):
-                                    emitter.file_progress_signal.emit(api_original_filename, status_list_copy)
-
-            # If we reach here, the download for this chunk was successful
-            return bytes_this_chunk, True
-
-        except (requests.exceptions.ConnectionError, requests.exceptions.Timeout, http.client.IncompleteRead) as e:
-            logger_func(f"   ❌ [Chunk {part_num + 1}/{total_parts}] Retryable error: {e}")
-        except requests.exceptions.RequestException as e:
-            logger_func(f"   ❌ [Chunk {part_num + 1}/{total_parts}] Non-retryable error: {e}")
-            return bytes_this_chunk, False # Break loop on non-retryable errors
-        except Exception as e:
-            logger_func(f"   ❌ [Chunk {part_num + 1}/{total_parts}] Unexpected error: {e}\n{traceback.format_exc(limit=1)}")
-            return bytes_this_chunk, False
-
-    return bytes_this_chunk, False
+    # Set this chunk's status to 'active' before starting the download.
+    with progress_data['lock']:
+        progress_data['chunks_status'][part_num]['active'] = True
+
+    try:
+        # Prepare headers for the specific byte range of this chunk
+        chunk_headers = headers.copy()
+        if end_byte != -1:
+            chunk_headers['Range'] = f"bytes={start_byte}-{end_byte}"
+
+        bytes_this_chunk = 0
+        last_speed_calc_time = time.time()
+        bytes_at_last_speed_calc = 0
+
+        # --- Retry Loop ---
+        for attempt in range(MAX_CHUNK_DOWNLOAD_RETRIES + 1):
+            if cancellation_event and cancellation_event.is_set():
+                return bytes_this_chunk, False
+
+            try:
+                if attempt > 0:
+                    logger_func(f"   [Chunk {part_num + 1}/{total_parts}] Retrying (Attempt {attempt + 1}/{MAX_CHUNK_DOWNLOAD_RETRIES + 1})...")
+                    time.sleep(CHUNK_DOWNLOAD_RETRY_DELAY * (2 ** (attempt - 1)))
+                    last_speed_calc_time = time.time()
+                    bytes_at_last_speed_calc = bytes_this_chunk
+
+                logger_func(f"   🚀 [Chunk {part_num + 1}/{total_parts}] Starting download: bytes {start_byte}-{end_byte if end_byte != -1 else 'EOF'}")
+                response = requests.get(chunk_url, headers=chunk_headers, timeout=(10, 120), stream=True, cookies=cookies_for_chunk)
+                response.raise_for_status()
+
+                # --- Data Writing Loop ---
+                # We open the unique chunk file in write-binary ('wb') mode.
+                # No more seeking is required.
+                with open(chunk_temp_file_path, 'wb') as f:
+                    for data_segment in response.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE_ITER):
+                        if cancellation_event and cancellation_event.is_set():
+                            return bytes_this_chunk, False
+                        if pause_event and pause_event.is_set():
+                            # Handle pausing during the download stream
+                            logger_func(f"   [Chunk {part_num + 1}/{total_parts}] Paused...")
+                            while pause_event.is_set():
+                                if cancellation_event and cancellation_event.is_set(): return bytes_this_chunk, False
+                                time.sleep(0.2)
+                            logger_func(f"   [Chunk {part_num + 1}/{total_parts}] Resumed.")
+
+                        if data_segment:
+                            f.write(data_segment)
+                            bytes_this_chunk += len(data_segment)
+
+                            # Update shared progress data structure
+                            with progress_data['lock']:
+                                progress_data['total_downloaded_so_far'] += len(data_segment)
+                                progress_data['chunks_status'][part_num]['downloaded'] = bytes_this_chunk
+
+                                # Calculate and update speed for this chunk
+                                current_time = time.time()
+                                time_delta = current_time - last_speed_calc_time
+                                if time_delta > 0.5:
+                                    bytes_delta = bytes_this_chunk - bytes_at_last_speed_calc
+                                    current_speed_bps = (bytes_delta * 8) / time_delta if time_delta > 0 else 0
+                                    progress_data['chunks_status'][part_num]['speed_bps'] = current_speed_bps
+                                    last_speed_calc_time = current_time
+                                    bytes_at_last_speed_calc = bytes_this_chunk
+
+                                # Emit progress signal to the UI via the queue
+                                if emitter and (current_time - global_emit_time_ref[0] > 0.25):
+                                    global_emit_time_ref[0] = current_time
+                                    status_list_copy = [dict(s) for s in progress_data['chunks_status']]
+                                    if isinstance(emitter, queue.Queue):
+                                        emitter.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
+                                    elif hasattr(emitter, 'file_progress_signal'):
+                                        emitter.file_progress_signal.emit(api_original_filename, status_list_copy)
+
+                # If we get here, the download for this chunk is successful
+                return bytes_this_chunk, True
+
+            except (requests.exceptions.ConnectionError, requests.exceptions.Timeout, http.client.IncompleteRead) as e:
+                logger_func(f"   ❌ [Chunk {part_num + 1}/{total_parts}] Retryable error: {e}")
+            except requests.exceptions.RequestException as e:
+                logger_func(f"   ❌ [Chunk {part_num + 1}/{total_parts}] Non-retryable error: {e}")
+                return bytes_this_chunk, False # Break loop on non-retryable errors
+            except Exception as e:
+                logger_func(f"   ❌ [Chunk {part_num + 1}/{total_parts}] Unexpected error: {e}\n{traceback.format_exc(limit=1)}")
+                return bytes_this_chunk, False
+
+        # If the retry loop finishes without a successful download
+        return bytes_this_chunk, False
+    finally:
+        # This block runs whether the download succeeded or failed
+        with progress_data['lock']:
+            progress_data['chunks_status'][part_num]['active'] = False
+            progress_data['chunks_status'][part_num]['speed_bps'] = 0.0
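The retry loop above issues ranged GET requests so each worker fetches only its own byte slice. This only works when the server honors RFC 7233 byte ranges and answers 206 Partial Content; a quick standalone probe of that behavior, using only the standard `requests` API and a hypothetical URL:

```python
import requests

# Hypothetical URL; any server that honors RFC 7233 byte ranges will do.
url = "https://example.com/big.bin"
headers = {"Range": "bytes=0-1023"}          # first 1 KiB
resp = requests.get(url, headers=headers, timeout=(10, 120), stream=True)

# 206 means the server honored the range; 200 means it ignored it and
# returned the whole body, in which case multi-part splitting is pointless.
assert resp.status_code in (200, 206)
print(resp.headers.get("Content-Range"))     # e.g. "bytes 0-1023/1048576"
```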
 def download_file_in_parts(file_url, save_path, total_size, num_parts, headers, api_original_filename,
                            emitter_for_multipart, cookies_for_chunk_session,
                            cancellation_event, skip_event, logger_func, pause_event):
-    logger_func(f"⬇️ Initializing Multi-part Download ({num_parts} parts) for: '{api_original_filename}' (Size: {total_size / (1024*1024):.2f} MB)")
-    temp_file_path = save_path + ".part"
-
-    try:
-        with open(temp_file_path, 'wb') as f_temp:
-            if total_size > 0:
-                f_temp.truncate(total_size)
-    except IOError as e:
-        logger_func(f"   ❌ Error creating/truncating temp file '{temp_file_path}': {e}")
-        return False, 0, None, None
+    """
+    Manages a resilient, multipart file download by saving each chunk to a separate file.
+
+    This function orchestrates the download process by:
+    1. Checking for already completed chunk files to resume a previous download.
+    2. Submitting only the missing chunks to a thread pool for parallel download.
+    3. Assembling the final file from the individual chunks upon successful completion.
+    4. Cleaning up temporary chunk files after assembly.
+    5. Leaving completed chunks on disk if the download fails, allowing for a future resume.
+
+    Args:
+        file_url (str): The URL of the file to download.
+        save_path (str): The final desired path for the downloaded file (e.g., 'my_video.mp4').
+        total_size (int): The total size of the file in bytes.
+        num_parts (int): The number of parts to split the download into.
+        headers (dict): HTTP headers for the download requests.
+        api_original_filename (str): The original filename for UI progress display.
+        emitter_for_multipart (queue.Queue or QObject): Emitter for UI signals.
+        cookies_for_chunk_session (dict): Cookies for the download requests.
+        cancellation_event (threading.Event): Event to signal cancellation.
+        skip_event (threading.Event): Event to signal skipping the file.
+        logger_func (function): A function for logging messages.
+        pause_event (threading.Event): Event to signal pausing the download.
+
+    Returns:
+        tuple: A tuple containing (success_flag, total_bytes_downloaded, md5_hash, file_handle).
+            The file_handle will be for the final assembled file if successful, otherwise None.
+    """
+    logger_func(f"⬇️ Initializing Resumable Multi-part Download ({num_parts} parts) for: '{api_original_filename}' (Size: {total_size / (1024*1024):.2f} MB)")
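Step 1 of the orchestration described in the docstring above, detecting which `.partN` chunk files are already complete, reduces to a per-part size comparison. A minimal sketch under the same naming convention; the helper name and the demo sizes are assumptions:

```python
import os

def find_missing_chunks(save_path, chunk_sizes):
    """Return indices of chunks that still need downloading.

    A chunk counts as complete only when its '<save_path>.part<i>' file
    exists and has exactly the expected size; anything else is re-fetched.
    """
    missing = []
    for i, expected in enumerate(chunk_sizes):
        part = f"{save_path}.part{i}"
        if not (os.path.exists(part) and os.path.getsize(part) == expected):
            missing.append(i)
    return missing

# e.g. a 10-byte file split into 3 parts of sizes 4, 3, 3:
print(find_missing_chunks("demo.bin", [4, 3, 3]))  # -> [0, 1, 2] on a clean run
```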
     # Calculate the byte range for each chunk
     chunk_size_calc = total_size // num_parts
     chunks_ranges = []
     for i in range(num_parts):
@@ -153,76 +207,119 @@ def download_file_in_parts(file_url, save_path, total_size, num_parts, headers,
         end = start + chunk_size_calc - 1 if i < num_parts - 1 else total_size - 1
         if start <= end:
             chunks_ranges.append((start, end))
-        elif total_size == 0 and i == 0:
+        elif total_size == 0 and i == 0: # Handle zero-byte files
             chunks_ranges.append((0, -1))
 
+    # Calculate the expected size of each chunk
     chunk_actual_sizes = []
     for start, end in chunks_ranges:
-        if end == -1 and start == 0:
-            chunk_actual_sizes.append(0)
-        else:
-            chunk_actual_sizes.append(end - start + 1)
+        chunk_actual_sizes.append(end - start + 1 if end != -1 else 0)
 
     if not chunks_ranges and total_size > 0:
-        logger_func(f"   ⚠️ No valid chunk ranges for multipart download of '{api_original_filename}'. Aborting multipart.")
-        if os.path.exists(temp_file_path): os.remove(temp_file_path)
+        logger_func(f"   ⚠️ No valid chunk ranges for multipart download of '{api_original_filename}'. Aborting.")
         return False, 0, None, None
 
+    # --- Resumption Logic: Check for existing complete chunks ---
+    chunks_to_download = []
+    total_bytes_resumed = 0
+    for i, (start, end) in enumerate(chunks_ranges):
+        chunk_part_path = f"{save_path}.part{i}"
+        expected_chunk_size = chunk_actual_sizes[i]
+
+        if os.path.exists(chunk_part_path) and os.path.getsize(chunk_part_path) == expected_chunk_size:
+            logger_func(f"   [Chunk {i + 1}/{num_parts}] Resuming with existing complete chunk file.")
+            total_bytes_resumed += expected_chunk_size
+        else:
+            chunks_to_download.append({'index': i, 'start': start, 'end': end})
+
+    # Setup the shared progress data structure
     progress_data = {
         'total_file_size': total_size,
-        'total_downloaded_so_far': 0,
-        'chunks_status': [
-            {'id': i, 'downloaded': 0, 'total': chunk_actual_sizes[i] if i < len(chunk_actual_sizes) else 0, 'active': False, 'speed_bps': 0.0}
-            for i in range(num_parts)
-        ],
+        'total_downloaded_so_far': total_bytes_resumed,
+        'chunks_status': [],
         'lock': threading.Lock(),
         'last_global_emit_time': [time.time()]
     }
+    for i in range(num_parts):
+        is_resumed = not any(c['index'] == i for c in chunks_to_download)
+        progress_data['chunks_status'].append({
+            'id': i,
+            'downloaded': chunk_actual_sizes[i] if is_resumed else 0,
+            'total': chunk_actual_sizes[i],
+            'active': False,
+            'speed_bps': 0.0
+        })
+
+    # --- Download Phase ---
     chunk_futures = []
     all_chunks_successful = True
-    total_bytes_from_chunks = 0
+    total_bytes_from_threads = 0
 
     with ThreadPoolExecutor(max_workers=num_parts, thread_name_prefix=f"MPChunk_{api_original_filename[:10]}_") as chunk_pool:
-        for i, (start, end) in enumerate(chunks_ranges):
-            if cancellation_event and cancellation_event.is_set(): all_chunks_successful = False; break
-            chunk_futures.append(chunk_pool.submit(
-                _download_individual_chunk, chunk_url=file_url, temp_file_path=temp_file_path,
+        for chunk_info in chunks_to_download:
+            if cancellation_event and cancellation_event.is_set():
+                all_chunks_successful = False
+                break
+
+            i, start, end = chunk_info['index'], chunk_info['start'], chunk_info['end']
+            chunk_part_path = f"{save_path}.part{i}"
+
+            future = chunk_pool.submit(
+                _download_individual_chunk,
+                chunk_url=file_url,
+                chunk_temp_file_path=chunk_part_path,
                 start_byte=start, end_byte=end, headers=headers, part_num=i, total_parts=num_parts,
-                progress_data=progress_data, cancellation_event=cancellation_event, skip_event=skip_event, global_emit_time_ref=progress_data['last_global_emit_time'],
-                pause_event=pause_event, cookies_for_chunk=cookies_for_chunk_session, logger_func=logger_func, emitter=emitter_for_multipart,
+                progress_data=progress_data, cancellation_event=cancellation_event,
+                skip_event=skip_event, global_emit_time_ref=progress_data['last_global_emit_time'],
+                pause_event=pause_event, cookies_for_chunk=cookies_for_chunk_session,
+                logger_func=logger_func, emitter=emitter_for_multipart,
                 api_original_filename=api_original_filename
-            ))
+            )
+            chunk_futures.append(future)
 
         for future in as_completed(chunk_futures):
-            if cancellation_event and cancellation_event.is_set(): all_chunks_successful = False; break
-            bytes_downloaded_this_chunk, success_this_chunk = future.result()
-            total_bytes_from_chunks += bytes_downloaded_this_chunk
-            if not success_this_chunk:
+            if cancellation_event and cancellation_event.is_set():
                 all_chunks_successful = False
+            bytes_downloaded, success = future.result()
+            total_bytes_from_threads += bytes_downloaded
+            if not success:
+                all_chunks_successful = False
+
+    total_bytes_final = total_bytes_resumed + total_bytes_from_threads
 
     if cancellation_event and cancellation_event.is_set():
         logger_func(f"   Multi-part download for '{api_original_filename}' cancelled by main event.")
         all_chunks_successful = False
-        if emitter_for_multipart:
-            with progress_data['lock']:
-                status_list_copy = [dict(s) for s in progress_data['chunks_status']]
-            if isinstance(emitter_for_multipart, queue.Queue):
-                emitter_for_multipart.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
-            elif hasattr(emitter_for_multipart, 'file_progress_signal'):
-                emitter_for_multipart.file_progress_signal.emit(api_original_filename, status_list_copy)
 
-    if all_chunks_successful and (total_bytes_from_chunks == total_size or total_size == 0):
-        logger_func(f"   ✅ Multi-part download successful for '{api_original_filename}'. Total bytes: {total_bytes_from_chunks}")
+    # --- Assembly and Cleanup Phase ---
+    if all_chunks_successful and (total_bytes_final == total_size or total_size == 0):
+        logger_func(f"   ✅ All {num_parts} chunks complete. Assembling final file...")
         md5_hasher = hashlib.md5()
-        with open(temp_file_path, 'rb') as f_hash:
-            for buf in iter(lambda: f_hash.read(4096*10), b''):
-                md5_hasher.update(buf)
-        calculated_hash = md5_hasher.hexdigest()
-        return True, total_bytes_from_chunks, calculated_hash, open(temp_file_path, 'rb')
+        try:
+            with open(save_path, 'wb') as final_file:
+                for i in range(num_parts):
+                    chunk_part_path = f"{save_path}.part{i}"
+                    with open(chunk_part_path, 'rb') as chunk_file:
+                        content = chunk_file.read()
+                        final_file.write(content)
+                        md5_hasher.update(content)
+
+            calculated_hash = md5_hasher.hexdigest()
+            logger_func(f"   ✅ Assembly successful for '{api_original_filename}'. Total bytes: {total_bytes_final}")
+            return True, total_bytes_final, calculated_hash, open(save_path, 'rb')
+        except Exception as e:
+            logger_func(f"   ❌ Critical error during file assembly: {e}. Cleaning up.")
+            return False, total_bytes_final, None, None
+        finally:
+            # Cleanup all individual chunk files after successful assembly
+            for i in range(num_parts):
+                chunk_part_path = f"{save_path}.part{i}"
+                if os.path.exists(chunk_part_path):
+                    try:
+                        os.remove(chunk_part_path)
+                    except OSError as e:
+                        logger_func(f"   ⚠️ Failed to remove temp part file '{chunk_part_path}': {e}")
     else:
-        logger_func(f"   ❌ Multi-part download failed for '{api_original_filename}'. Success: {all_chunks_successful}, Bytes: {total_bytes_from_chunks}/{total_size}. Cleaning up.")
-        if os.path.exists(temp_file_path):
-            try: os.remove(temp_file_path)
-            except OSError as e: logger_func(f"   Failed to remove temp part file '{temp_file_path}': {e}")
-        return False, total_bytes_from_chunks, None, None
+        # If download failed, we do NOT clean up, allowing for resumption later
+        logger_func(f"   ❌ Multi-part download failed for '{api_original_filename}'. Success: {all_chunks_successful}, Bytes: {total_bytes_final}/{total_size}. Partial chunks saved for future resumption.")
+        return False, total_bytes_final, None, None
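The assembly phase above hashes each chunk while concatenating, so the MD5 comes for free without a second pass over the finished file. A condensed standalone sketch of that loop; the function name is assumed:

```python
import hashlib

def assemble(save_path, num_parts):
    """Concatenate '<save_path>.partN' files into save_path, hashing on the fly.

    Returns the MD5 hex digest of the assembled file. Chunks are read whole,
    matching the diff above; for very large chunks a buffered read loop would
    be kinder to memory.
    """
    md5 = hashlib.md5()
    with open(save_path, 'wb') as out:
        for i in range(num_parts):
            with open(f"{save_path}.part{i}", 'rb') as part:
                content = part.read()
                out.write(content)
                md5.update(content)
    return md5.hexdigest()
```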
@@ -13,7 +13,7 @@ from PyQt5.QtCore import pyqtSignal, QCoreApplication, QSize, QThread, QTimer, Q
 from PyQt5.QtWidgets import (
     QApplication, QDialog, QHBoxLayout, QLabel, QLineEdit, QListWidget,
     QListWidgetItem, QMessageBox, QPushButton, QVBoxLayout, QAbstractItemView,
-    QSplitter, QProgressBar, QWidget
+    QSplitter, QProgressBar, QWidget, QFileDialog
 )
 
 # --- Local Application Imports ---
@@ -151,6 +151,8 @@ class EmptyPopupDialog (QDialog ):
         app_icon =get_app_icon_object ()
         if app_icon and not app_icon .isNull ():
             self .setWindowIcon (app_icon )
+        self.update_profile_data = None
+        self.update_creator_name = None
         self .selected_creators_for_queue =[]
         self .globally_selected_creators ={}
         self .fetched_posts_data ={}
@@ -205,6 +207,9 @@ class EmptyPopupDialog (QDialog ):
         self .scope_button .clicked .connect (self ._toggle_scope_mode )
         left_bottom_buttons_layout .addWidget (self .scope_button )
         left_pane_layout .addLayout (left_bottom_buttons_layout )
+        self.update_button = QPushButton()
+        self.update_button.clicked.connect(self._handle_update_check)
+        left_bottom_buttons_layout.addWidget(self.update_button)
 
         self .right_pane_widget =QWidget ()
@@ -315,6 +320,31 @@ class EmptyPopupDialog (QDialog ):
         except AttributeError :
             pass
 
+    def _handle_update_check(self):
+        """Opens a dialog to select a creator profile and loads it for an update session."""
+        appdata_dir = os.path.join(self.app_base_dir, "appdata")
+        profiles_dir = os.path.join(appdata_dir, "creator_profiles")
+
+        if not os.path.isdir(profiles_dir):
+            QMessageBox.warning(self, "Directory Not Found", f"The creator profiles directory does not exist yet.\n\nPath: {profiles_dir}")
+            return
+
+        filepath, _ = QFileDialog.getOpenFileName(self, "Select Creator Profile for Update", profiles_dir, "JSON Files (*.json)")
+
+        if filepath:
+            try:
+                with open(filepath, 'r', encoding='utf-8') as f:
+                    data = json.load(f)
+
+                if 'creator_url' not in data or 'processed_post_ids' not in data:
+                    raise ValueError("Invalid profile format.")
+
+                self.update_profile_data = data
+                self.update_creator_name = os.path.basename(filepath).replace('.json', '')
+                self.accept() # Close the dialog and signal success
+            except Exception as e:
+                QMessageBox.critical(self, "Error Loading Profile", f"Could not load or parse the selected profile file:\n\n{e}")
+
     def _handle_fetch_posts_click (self ):
         selected_creators =list (self .globally_selected_creators .values ())
         print(f"[DEBUG] Selected creators for fetch: {selected_creators}")
@@ -370,6 +400,7 @@ class EmptyPopupDialog (QDialog ):
         self .add_selected_button .setText (self ._tr ("creator_popup_add_selected_button","Add Selected"))
         self .fetch_posts_button .setText (self ._tr ("fetch_posts_button_text","Fetch Posts"))
         self ._update_scope_button_text_and_tooltip ()
+        self.update_button.setText(self._tr("check_for_updates_button", "Check for Updates"))
 
         self .posts_search_input .setPlaceholderText (self ._tr ("creator_popup_posts_search_placeholder","Search fetched posts by title..."))
@@ -929,15 +960,19 @@ class EmptyPopupDialog (QDialog ):
 
             self .parent_app .log_signal .emit (f"ℹ️ Added {num_just_added_posts } selected posts to the download queue. Total in queue: {total_in_queue }.")
 
+            # --- START: MODIFIED LOGIC ---
+            # Removed the blockSignals(True/False) calls to allow the main window's UI to update correctly.
             if self .parent_app .link_input :
-                self .parent_app .link_input .blockSignals (True )
                 self .parent_app .link_input .setText (
                     self .parent_app ._tr ("popup_posts_selected_text","Posts - {count} selected").format (count =num_just_added_posts )
                 )
-                self .parent_app .link_input .blockSignals (False )
                 self .parent_app .link_input .setPlaceholderText (
                     self .parent_app ._tr ("items_in_queue_placeholder","{count} items in queue from popup.").format (count =total_in_queue )
                 )
+            # --- END: MODIFIED LOGIC ---
+
+            self.selected_creators_for_queue.clear()
 
             self .accept ()
         else :
             QMessageBox .information (self ,self ._tr ("no_selection_title","No Selection"),
@@ -955,9 +990,6 @@ class EmptyPopupDialog (QDialog ):
         self .add_selected_button .setEnabled (True )
         self .setWindowTitle (self ._tr ("creator_popup_title","Creator Selection"))
-
-
-
 
     def _get_domain_for_service (self ,service_name ):
         """Determines the base domain for a given service."""
         service_lower =service_name .lower ()
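`_handle_update_check` above treats a profile as valid only when it carries both `creator_url` and `processed_post_ids`. The same load-and-validate step as a standalone sketch; the helper name is an assumption:

```python
import json
import os

def load_creator_profile(filepath):
    """Load and sanity-check a creator profile saved by the downloader.

    The two required keys mirror the check in _handle_update_check; the rest
    of the schema is not validated here.
    """
    with open(filepath, 'r', encoding='utf-8') as f:
        data = json.load(f)
    if 'creator_url' not in data or 'processed_post_ids' not in data:
        raise ValueError("Invalid profile format.")
    name = os.path.basename(filepath).replace('.json', '')
    return name, data
```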
@@ -37,13 +37,13 @@ class FavoriteArtistsDialog (QDialog ):
         self ._init_ui ()
         self ._fetch_favorite_artists ()
 
-    def _get_domain_for_service (self ,service_name ):
-        service_lower =service_name .lower ()
-        coomer_primary_services ={'onlyfans','fansly','manyvids','candfans'}
-        if service_lower in coomer_primary_services :
-            return "coomer.su"
-        else :
-            return "kemono.su"
+    def _get_domain_for_service(self, service_name):
+        service_lower = service_name.lower()
+        coomer_primary_services = {'onlyfans', 'fansly', 'manyvids', 'candfans'}
+        if service_lower in coomer_primary_services:
+            return "coomer.st" # Use the new domain
+        else:
+            return "kemono.cr" # Use the new domain
 
     def _tr (self ,key ,default_text =""):
         """Helper to get translation based on current app language."""
@@ -128,9 +128,29 @@ class FavoriteArtistsDialog (QDialog ):
     def _fetch_favorite_artists (self ):
 
         if self.cookies_config['use_cookie']:
-            # Check if we can load cookies for at least one of the services.
-            kemono_cookies = prepare_cookies_for_request(True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'], self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.su")
-            coomer_cookies = prepare_cookies_for_request(True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'], self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.su")
+            # --- Kemono Check with Fallback ---
+            kemono_cookies = prepare_cookies_for_request(
+                True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
+                self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.cr"
+            )
+            if not kemono_cookies:
+                self._logger("No cookies for kemono.cr, trying fallback kemono.su...")
+                kemono_cookies = prepare_cookies_for_request(
+                    True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
+                    self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.su"
+                )
+
+            # --- Coomer Check with Fallback ---
+            coomer_cookies = prepare_cookies_for_request(
+                True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
+                self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.st"
+            )
+            if not coomer_cookies:
+                self._logger("No cookies for coomer.st, trying fallback coomer.su...")
+                coomer_cookies = prepare_cookies_for_request(
+                    True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
+                    self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.su"
+                )
+
             if not kemono_cookies and not coomer_cookies:
                 # If cookies are enabled but none could be loaded, show help and stop.
@@ -139,7 +159,7 @@ class FavoriteArtistsDialog (QDialog ):
                 cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
                 cookie_help_dialog.exec_()
                 self.download_button.setEnabled(False)
                 return # Stop further execution
 
         kemono_fav_url ="https://kemono.su/api/v1/account/favorites?type=artist"
         coomer_fav_url ="https://coomer.su/api/v1/account/favorites?type=artist"
@@ -149,9 +169,12 @@ class FavoriteArtistsDialog (QDialog ):
         errors_occurred =[]
         any_cookies_loaded_successfully_for_any_source =False
 
-        api_sources =[
-            {"name":"Kemono.su","url":kemono_fav_url ,"domain":"kemono.su"},
-            {"name":"Coomer.su","url":coomer_fav_url ,"domain":"coomer.su"}
+        kemono_cr_fav_url = "https://kemono.cr/api/v1/account/favorites?type=artist"
+        coomer_st_fav_url = "https://coomer.st/api/v1/account/favorites?type=artist"
+
+        api_sources = [
+            {"name": "Kemono.cr", "url": kemono_cr_fav_url, "domain": "kemono.cr"},
+            {"name": "Coomer.st", "url": coomer_st_fav_url, "domain": "coomer.st"}
         ]
 
         for source in api_sources :
@@ -159,20 +182,41 @@ class FavoriteArtistsDialog (QDialog ):
             self .status_label .setText (self ._tr ("fav_artists_loading_from_source_status","⏳ Loading favorites from {source_name}...").format (source_name =source ['name']))
             QCoreApplication .processEvents ()
 
-            cookies_dict_for_source =None
-            if self .cookies_config ['use_cookie']:
-                cookies_dict_for_source =prepare_cookies_for_request (
-                    True ,
-                    self .cookies_config ['cookie_text'],
-                    self .cookies_config ['selected_cookie_file'],
-                    self .cookies_config ['app_base_dir'],
-                    self ._logger ,
-                    target_domain =source ['domain']
-                )
-                if cookies_dict_for_source :
-                    any_cookies_loaded_successfully_for_any_source =True
-                else :
-                    self ._logger (f"Warning ({source ['name']}): Cookies enabled but could not be loaded for this domain. Fetch might fail if cookies are required.")
+            cookies_dict_for_source = None
+            if self.cookies_config['use_cookie']:
+                primary_domain = source['domain']
+                fallback_domain = None
+                if primary_domain == "kemono.cr":
+                    fallback_domain = "kemono.su"
+                elif primary_domain == "coomer.st":
+                    fallback_domain = "coomer.su"
+
+                # First, try the primary domain
+                cookies_dict_for_source = prepare_cookies_for_request(
+                    True,
+                    self.cookies_config['cookie_text'],
+                    self.cookies_config['selected_cookie_file'],
+                    self.cookies_config['app_base_dir'],
+                    self._logger,
+                    target_domain=primary_domain
+                )
+
+                # If no cookies found, try the fallback domain
+                if not cookies_dict_for_source and fallback_domain:
+                    self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
+                    cookies_dict_for_source = prepare_cookies_for_request(
+                        True,
+                        self.cookies_config['cookie_text'],
+                        self.cookies_config['selected_cookie_file'],
+                        self.cookies_config['app_base_dir'],
+                        self._logger,
+                        target_domain=fallback_domain
+                    )
+
+                if cookies_dict_for_source:
+                    any_cookies_loaded_successfully_for_any_source = True
+                else:
+                    self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this source (including fallbacks). Fetch might fail.")
             try :
                 headers ={'User-Agent':'Mozilla/5.0'}
                 response =requests .get (source ['url'],headers =headers ,cookies =cookies_dict_for_source ,timeout =20 )
@@ -223,7 +267,7 @@ class FavoriteArtistsDialog (QDialog ):
         if self .cookies_config ['use_cookie']and not any_cookies_loaded_successfully_for_any_source :
             self .status_label .setText (self ._tr ("fav_artists_cookies_required_status","Error: Cookies enabled but could not be loaded for any source."))
             self ._logger ("Error: Cookies enabled but no cookies loaded for any source. Showing help dialog.")
-            cookie_help_dialog =CookieHelpDialog (self )
+            cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
             cookie_help_dialog .exec_ ()
             self .download_button .setEnabled (False )
         if not fetched_any_successfully :
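The primary-then-fallback cookie lookup above (kemono.cr falling back to kemono.su, and coomer.st to coomer.su) appears in nearly identical form in two dialogs. A sketch of how it could be factored into one helper; `prepare_cookies_for_request` is the project's own function, called with the same arguments as in the diff, while the helper name and `FALLBACKS` table are assumptions:

```python
# Hypothetical helper; `prepare_cookies_for_request` comes from the project.
FALLBACKS = {"kemono.cr": "kemono.su", "coomer.st": "coomer.su"}

def load_cookies_with_fallback(cfg, logger, primary_domain):
    cookies = prepare_cookies_for_request(
        True, cfg['cookie_text'], cfg['selected_cookie_file'],
        cfg['app_base_dir'], logger, target_domain=primary_domain
    )
    fallback = FALLBACKS.get(primary_domain)
    if not cookies and fallback:
        logger(f"No cookies for {primary_domain}, trying fallback {fallback}...")
        cookies = prepare_cookies_for_request(
            True, cfg['cookie_text'], cfg['selected_cookie_file'],
            cfg['app_base_dir'], logger, target_domain=fallback
        )
    return cookies
```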
@@ -34,28 +34,30 @@ class FavoritePostsFetcherThread (QThread ):
         self .target_domain_preference =target_domain_preference
         self .cancellation_event =threading .Event ()
         self .error_key_map ={
-            "Kemono.su":"kemono_su",
-            "Coomer.su":"coomer_su"
+            "kemono.cr":"kemono_su",
+            "coomer.st":"coomer_su"
         }
 
     def _logger (self ,message ):
         self .parent_logger_func (f"[FavPostsFetcherThread] {message }")
 
-    def run (self ):
-        kemono_fav_posts_url ="https://kemono.su/api/v1/account/favorites?type=post"
-        coomer_fav_posts_url ="https://coomer.su/api/v1/account/favorites?type=post"
-
-        all_fetched_posts_temp =[]
-        error_messages_for_summary =[]
-        fetched_any_successfully =False
-        any_cookies_loaded_successfully_for_any_source =False
-
-        self .status_update .emit ("key_fetching_fav_post_list_init")
-        self .progress_bar_update .emit (0 ,0 )
-
-        api_sources =[
-            {"name":"Kemono.su","url":kemono_fav_posts_url ,"domain":"kemono.su"},
-            {"name":"Coomer.su","url":coomer_fav_posts_url ,"domain":"coomer.su"}
+    def run(self):
+        kemono_su_fav_posts_url = "https://kemono.su/api/v1/account/favorites?type=post"
+        coomer_su_fav_posts_url = "https://coomer.su/api/v1/account/favorites?type=post"
+        kemono_cr_fav_posts_url = "https://kemono.cr/api/v1/account/favorites?type=post"
+        coomer_st_fav_posts_url = "https://coomer.st/api/v1/account/favorites?type=post"
+
+        all_fetched_posts_temp = []
+        error_messages_for_summary = []
+        fetched_any_successfully = False
+        any_cookies_loaded_successfully_for_any_source = False
+
+        self.status_update.emit("key_fetching_fav_post_list_init")
+        self.progress_bar_update.emit(0, 0)
+
+        api_sources = [
+            {"name": "Kemono.cr", "url": kemono_cr_fav_posts_url, "domain": "kemono.cr"},
+            {"name": "Coomer.st", "url": coomer_st_fav_posts_url, "domain": "coomer.st"}
         ]
 
         api_sources_to_try =[]
@@ -76,20 +78,41 @@ class FavoritePostsFetcherThread (QThread ):
             if self .cancellation_event .is_set ():
                 self .finished .emit ([],"KEY_FETCH_CANCELLED_DURING")
                 return
-            cookies_dict_for_source =None
-            if self .cookies_config ['use_cookie']:
-                cookies_dict_for_source =prepare_cookies_for_request (
-                    True ,
-                    self .cookies_config ['cookie_text'],
-                    self .cookies_config ['selected_cookie_file'],
-                    self .cookies_config ['app_base_dir'],
-                    self ._logger ,
-                    target_domain =source ['domain']
-                )
-                if cookies_dict_for_source :
-                    any_cookies_loaded_successfully_for_any_source =True
-                else :
-                    self ._logger (f"Warning ({source ['name']}): Cookies enabled but could not be loaded for this domain. Fetch might fail if cookies are required.")
+            cookies_dict_for_source = None
+            if self.cookies_config['use_cookie']:
+                primary_domain = source['domain']
+                fallback_domain = None
+                if primary_domain == "kemono.cr":
+                    fallback_domain = "kemono.su"
+                elif primary_domain == "coomer.st":
+                    fallback_domain = "coomer.su"
+
+                # First, try the primary domain
+                cookies_dict_for_source = prepare_cookies_for_request(
+                    True,
+                    self.cookies_config['cookie_text'],
+                    self.cookies_config['selected_cookie_file'],
+                    self.cookies_config['app_base_dir'],
+                    self._logger,
+                    target_domain=primary_domain
+                )
+
+                # If no cookies found, try the fallback domain
+                if not cookies_dict_for_source and fallback_domain:
+                    self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
+                    cookies_dict_for_source = prepare_cookies_for_request(
+                        True,
+                        self.cookies_config['cookie_text'],
+                        self.cookies_config['selected_cookie_file'],
+                        self.cookies_config['app_base_dir'],
+                        self._logger,
+                        target_domain=fallback_domain
+                    )
+
+                if cookies_dict_for_source:
+                    any_cookies_loaded_successfully_for_any_source = True
+                else:
+                    self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this domain. Fetch might fail if cookies are required.")
 
             self ._logger (f"Attempting to fetch favorite posts from: {source ['name']} ({source ['url']})")
             source_key_part =self .error_key_map .get (source ['name'],source ['name'].lower ().replace ('.','_'))
@@ -409,14 +432,14 @@ class FavoritePostsDialog (QDialog ):
         if status_key .startswith ("KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_FOR_DOMAIN_")or status_key =="KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_GENERIC":
             status_label_text_key ="fav_posts_cookies_required_error"
             self ._logger (f"Cookie error: {status_key }. Showing help dialog.")
-            cookie_help_dialog =CookieHelpDialog (self )
+            cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
            cookie_help_dialog .exec_ ()
         elif status_key =="KEY_AUTH_FAILED":
             status_label_text_key ="fav_posts_auth_failed_title"
             self ._logger (f"Auth error: {status_key }. Showing help dialog.")
             QMessageBox .warning (self ,self ._tr ("fav_posts_auth_failed_title","Authorization Failed (Posts)"),
                                  self ._tr ("fav_posts_auth_failed_message_generic","...").format (domain_specific_part =specific_domain_msg_part ))
-            cookie_help_dialog =CookieHelpDialog (self )
+            cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
             cookie_help_dialog .exec_ ()
         elif status_key =="KEY_NO_FAVORITES_FOUND_ALL_PLATFORMS":
             status_label_text_key ="fav_posts_no_posts_found_status"
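The fetcher thread above keeps all network work off the GUI thread and reports back exclusively through Qt signals (`status_update`, `progress_bar_update`, `finished`). A minimal sketch of that worker pattern; the class name is illustrative, and this sketch names its completion signal `fetch_finished` to avoid shadowing QThread's built-in `finished` signal, whereas the project's thread emits its own `finished(list, str)`:

```python
from PyQt5.QtCore import QThread, pyqtSignal

class FetcherThread(QThread):
    """Sketch of the worker pattern used above; names are illustrative."""
    status_update = pyqtSignal(str)
    progress_bar_update = pyqtSignal(int, int)
    fetch_finished = pyqtSignal(list, str)

    def run(self):
        # Runs in the worker thread; only signals cross back to the GUI thread.
        self.status_update.emit("key_fetching_fav_post_list_init")
        self.progress_bar_update.emit(0, 0)   # (0, 0) renders an indeterminate bar
        posts = []                            # network fetching would populate this
        self.fetch_finished.emit(posts, "OK")
```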
@@ -6,7 +6,7 @@ import json
 from PyQt5.QtCore import Qt, QStandardPaths
 from PyQt5.QtWidgets import (
     QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
-    QGroupBox, QComboBox, QMessageBox, QGridLayout
+    QGroupBox, QComboBox, QMessageBox, QGridLayout, QCheckBox
 )
 
 # --- Local Application Imports ---
@@ -15,7 +15,8 @@ from ...utils.resolution import get_dark_theme
 from ..main_window import get_app_icon_object
 from ...config.constants import (
     THEME_KEY, LANGUAGE_KEY, DOWNLOAD_LOCATION_KEY,
-    RESOLUTION_KEY, UI_SCALE_KEY
+    RESOLUTION_KEY, UI_SCALE_KEY, SAVE_CREATOR_JSON_KEY,
+    COOKIE_TEXT_KEY, USE_COOKIE_KEY
 )
@@ -35,7 +36,7 @@ class FutureSettingsDialog(QDialog):
 
         screen_height = QApplication.primaryScreen().availableGeometry().height() if QApplication.primaryScreen() else 800
         scale_factor = screen_height / 800.0
-        base_min_w, base_min_h = 420, 320 # Adjusted height for new layout
+        base_min_w, base_min_h = 420, 360 # Adjusted height for new layout
         scaled_min_w = int(base_min_w * scale_factor)
         scaled_min_h = int(base_min_h * scale_factor)
         self.setMinimumSize(scaled_min_w, scaled_min_h)
@@ -89,10 +90,17 @@ class FutureSettingsDialog(QDialog):
         # Default Path
         self.default_path_label = QLabel()
         self.save_path_button = QPushButton()
-        self.save_path_button.clicked.connect(self._save_download_path)
+        # --- START: MODIFIED LOGIC ---
+        self.save_path_button.clicked.connect(self._save_cookie_and_path)
+        # --- END: MODIFIED LOGIC ---
         download_window_layout.addWidget(self.default_path_label, 1, 0)
         download_window_layout.addWidget(self.save_path_button, 1, 1)
 
+        # Save Creator.json Checkbox
+        self.save_creator_json_checkbox = QCheckBox()
+        self.save_creator_json_checkbox.stateChanged.connect(self._creator_json_setting_changed)
+        download_window_layout.addWidget(self.save_creator_json_checkbox, 2, 0, 1, 2)
+
         main_layout.addWidget(self.download_window_group_box)
 
         main_layout.addStretch(1)
@@ -102,6 +110,20 @@ class FutureSettingsDialog(QDialog):
         self.ok_button.clicked.connect(self.accept)
         main_layout.addWidget(self.ok_button, 0, Qt.AlignRight | Qt.AlignBottom)
 
+    def _load_checkbox_states(self):
+        """Loads the initial state for all checkboxes from settings."""
+        self.save_creator_json_checkbox.blockSignals(True)
+        # Default to True so the feature is on by default for users
+        should_save = self.parent_app.settings.value(SAVE_CREATOR_JSON_KEY, True, type=bool)
+        self.save_creator_json_checkbox.setChecked(should_save)
+        self.save_creator_json_checkbox.blockSignals(False)
+
+    def _creator_json_setting_changed(self, state):
+        """Saves the state of the 'Save Creator.json' checkbox."""
+        is_checked = state == Qt.Checked
+        self.parent_app.settings.setValue(SAVE_CREATOR_JSON_KEY, is_checked)
+        self.parent_app.settings.sync()
+
     def _tr(self, key, default_text=""):
         if callable(get_translation) and self.parent_app:
             return get_translation(self.parent_app.current_selected_language, key, default_text)
@@ -122,16 +144,20 @@ class FutureSettingsDialog(QDialog):
         # Download & Window Group Labels
         self.window_size_label.setText(self._tr("window_size_label", "Window Size:"))
         self.default_path_label.setText(self._tr("default_path_label", "Default Path:"))
+        self.save_creator_json_checkbox.setText(self._tr("save_creator_json_label", "Save Creator.json file"))
 
+        # --- START: MODIFIED LOGIC ---
         # Buttons and Controls
         self._update_theme_toggle_button_text()
-        self.save_path_button.setText(self._tr("settings_save_path_button", "Save Current Download Path"))
-        self.save_path_button.setToolTip(self._tr("settings_save_path_tooltip", "Save the current 'Download Location' for future sessions."))
+        self.save_path_button.setText(self._tr("settings_save_cookie_path_button", "Save Cookie + Download Path"))
+        self.save_path_button.setToolTip(self._tr("settings_save_cookie_path_tooltip", "Save the current 'Download Location' and Cookie settings for future sessions."))
         self.ok_button.setText(self._tr("ok_button", "OK"))
+        # --- END: MODIFIED LOGIC ---
 
         # Populate dropdowns
         self._populate_display_combo_boxes()
         self._populate_language_combo_box()
+        self._load_checkbox_states()
 
     def _apply_theme(self):
         if self.parent_app and self.parent_app.current_theme == "dark":
@@ -254,22 +280,43 @@ class FutureSettingsDialog(QDialog):
         if msg_box.clickedButton() == restart_button:
             self.parent_app._request_restart_application()
 
-    def _save_download_path(self):
+    def _save_cookie_and_path(self):
+        """Saves the current download path and/or cookie settings from the main window."""
+        path_saved = False
+        cookie_saved = False
+
+        # --- Save Download Path Logic ---
         if hasattr(self.parent_app, 'dir_input') and self.parent_app.dir_input:
             current_path = self.parent_app.dir_input.text().strip()
             if current_path and os.path.isdir(current_path):
                 self.parent_app.settings.setValue(DOWNLOAD_LOCATION_KEY, current_path)
-                self.parent_app.settings.sync()
-                QMessageBox.information(self,
-                                        self._tr("settings_save_path_success_title", "Path Saved"),
-                                        self._tr("settings_save_path_success_message", "Download location '{path}' saved.").format(path=current_path))
-            elif not current_path:
-                QMessageBox.warning(self,
-                                    self._tr("settings_save_path_empty_title", "Empty Path"),
-                                    self._tr("settings_save_path_empty_message", "Download location cannot be empty."))
-            else:
-                QMessageBox.warning(self,
-                                    self._tr("settings_save_path_invalid_title", "Invalid Path"),
-                                    self._tr("settings_save_path_invalid_message", "The path '{path}' is not a valid directory.").format(path=current_path))
-        else:
-            QMessageBox.critical(self, "Error", "Could not access download path input from main application.")
+                path_saved = True
+
+        # --- Save Cookie Logic ---
+        if hasattr(self.parent_app, 'use_cookie_checkbox'):
+            use_cookie = self.parent_app.use_cookie_checkbox.isChecked()
+            cookie_content = self.parent_app.cookie_text_input.text().strip()
+
+            if use_cookie and cookie_content:
+                self.parent_app.settings.setValue(USE_COOKIE_KEY, True)
+                self.parent_app.settings.setValue(COOKIE_TEXT_KEY, cookie_content)
+                cookie_saved = True
+            else: # Also save the 'off' state
+                self.parent_app.settings.setValue(USE_COOKIE_KEY, False)
+                self.parent_app.settings.setValue(COOKIE_TEXT_KEY, "")
+
+        self.parent_app.settings.sync()
+
+        # --- User Feedback ---
+        if path_saved and cookie_saved:
+            message = self._tr("settings_save_both_success", "Download location and cookie settings saved.")
+        elif path_saved:
+            message = self._tr("settings_save_path_only_success", "Download location saved. No cookie settings were active to save.")
+        elif cookie_saved:
+            message = self._tr("settings_save_cookie_only_success", "Cookie settings saved. Download location was not set.")
+        else:
+            QMessageBox.warning(self, self._tr("settings_save_nothing_title", "Nothing to Save"),
+                                self._tr("settings_save_nothing_message", "The download location is not a valid directory and no cookie was active."))
+            return
+
+        QMessageBox.information(self, self._tr("settings_save_success_title", "Settings Saved"), message)
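The two new methods above persist the checkbox through `QSettings`, reading with an explicit `type=bool` so the stored value round-trips as a boolean rather than a string. A standalone sketch of the same pattern; the organization, application, and key strings here are assumptions, since the project reads its keys from `config.constants` and its settings object from the main window:

```python
from PyQt5.QtCore import QSettings

# Hypothetical organization/application names for illustration only.
settings = QSettings("Yuvi9587", "KemonoDownloader")
SAVE_CREATOR_JSON_KEY = "save_creator_json"   # assumed key string

# Load with a default (True keeps the feature on for new users),
# forcing bool so a stored "false" string does not read back as truthy.
should_save = settings.value(SAVE_CREATOR_JSON_KEY, True, type=bool)

# Save on checkbox change and flush to disk immediately.
settings.setValue(SAVE_CREATOR_JSON_KEY, False)
settings.sync()
```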
@@ -4,7 +4,7 @@ from PyQt5.QtCore import QUrl, QSize, Qt
|
|||||||
from PyQt5.QtGui import QIcon, QDesktopServices
|
from PyQt5.QtGui import QIcon, QDesktopServices
|
||||||
from PyQt5.QtWidgets import (
|
from PyQt5.QtWidgets import (
|
||||||
QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
|
QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
|
||||||
QStackedWidget, QScrollArea, QFrame, QWidget
|
QStackedWidget, QListWidget, QFrame, QWidget, QScrollArea
|
||||||
)
|
)
|
||||||
from ...i18n.translator import get_translation
|
from ...i18n.translator import get_translation
|
||||||
from ..main_window import get_app_icon_object
|
from ..main_window import get_app_icon_object
|
||||||
@@ -46,13 +46,12 @@ class TourStepWidget(QWidget):
|
|||||||
layout.addWidget(scroll_area, 1)
|
layout.addWidget(scroll_area, 1)
|
||||||
|
|
||||||
|
|
||||||
class HelpGuideDialog (QDialog ):
|
class HelpGuideDialog(QDialog):
|
||||||
"""A multi-page dialog for displaying the feature guide."""
|
"""A multi-page dialog for displaying the feature guide with a navigation list."""
|
||||||
def __init__ (self ,steps_data ,parent_app ,parent =None ):
|
def __init__(self, steps_data, parent_app, parent=None):
|
||||||
super ().__init__ (parent )
|
super().__init__(parent)
|
||||||
self .current_step =0
|
self.steps_data = steps_data
|
||||||
self .steps_data =steps_data
|
self.parent_app = parent_app
|
||||||
self .parent_app =parent_app
|
|
||||||
|
|
||||||
scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
|
scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
|
||||||
|
|
||||||
@@ -61,7 +60,7 @@ class HelpGuideDialog (QDialog ):
|
|||||||
self.setWindowIcon(app_icon)
|
self.setWindowIcon(app_icon)
|
||||||
|
|
||||||
self.setModal(True)
|
self.setModal(True)
|
||||||
self.resize(int(650 * scale), int(600 * scale))
|
self.resize(int(800 * scale), int(650 * scale))
|
||||||
|
|
||||||
dialog_font_size = int(11 * scale)
|
dialog_font_size = int(11 * scale)
|
||||||
|
|
||||||
@@ -69,6 +68,7 @@ class HelpGuideDialog (QDialog ):
         if hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
             current_theme_style = get_dark_theme(scale)
         else:
+            # Basic light theme fallback
             current_theme_style = f"""
                 QDialog {{ background-color: #F0F0F0; border: 1px solid #B0B0B0; }}
                 QLabel {{ color: #1E1E1E; }}
@@ -86,118 +86,107 @@ class HelpGuideDialog (QDialog ):
             """

         self.setStyleSheet(current_theme_style)
-        self ._init_ui ()
-        if self .parent_app :
-            self .move (self .parent_app .geometry ().center ()-self .rect ().center ())
+        self._init_ui()
+        if self.parent_app:
+            self.move(self.parent_app.geometry().center() - self.rect().center())

-    def _tr (self ,key ,default_text =""):
+    def _tr(self, key, default_text=""):
         """Helper to get translation based on current app language."""
-        if callable (get_translation )and self .parent_app :
-            return get_translation (self .parent_app .current_selected_language ,key ,default_text )
+        if callable(get_translation) and self.parent_app:
+            return get_translation(self.parent_app.current_selected_language, key, default_text)
         return default_text

-    def _init_ui (self ):
-        main_layout =QVBoxLayout (self )
-        main_layout .setContentsMargins (0 ,0 ,0 ,0 )
-        main_layout .setSpacing (0 )
-        self .stacked_widget =QStackedWidget ()
-        main_layout .addWidget (self .stacked_widget ,1 )
-        self .tour_steps_widgets =[]
-        scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
-        for title, content in self.steps_data:
-            step_widget = TourStepWidget(title, content, scale=scale)
-            self.tour_steps_widgets.append(step_widget)
+    def _init_ui(self):
+        main_layout = QVBoxLayout(self)
+        main_layout.setContentsMargins(15, 15, 15, 15)
+        main_layout.setSpacing(10)
+
+        # Title
+        title_label = QLabel(self._tr("help_guide_dialog_title", "Kemono Downloader - Feature Guide"))
+        scale = getattr(self.parent_app, 'scale_factor', 1.0)
+        title_font_size = int(16 * scale)
+        title_label.setStyleSheet(f"font-size: {title_font_size}pt; font-weight: bold; color: #E0E0E0;")
+        title_label.setAlignment(Qt.AlignCenter)
+        main_layout.addWidget(title_label)
+
+        # Content Layout (Navigation + Stacked Pages)
+        content_layout = QHBoxLayout()
+        main_layout.addLayout(content_layout, 1)
+
+        self.nav_list = QListWidget()
+        self.nav_list.setFixedWidth(int(220 * scale))
+        self.nav_list.setStyleSheet(f"""
+            QListWidget {{
+                background-color: #2E2E2E;
+                border: 1px solid #4A4A4A;
+                border-radius: 4px;
+                font-size: {int(11 * scale)}pt;
+            }}
+            QListWidget::item {{
+                padding: 10px;
+                border-bottom: 1px solid #4A4A4A;
+            }}
+            QListWidget::item:selected {{
+                background-color: #87CEEB;
+                color: #2E2E2E;
+                font-weight: bold;
+            }}
+        """)
+        content_layout.addWidget(self.nav_list)
+
+        self.stacked_widget = QStackedWidget()
+        content_layout.addWidget(self.stacked_widget)
+
+        for title_key, content_key in self.steps_data:
+            title = self._tr(title_key, title_key)
+            content = self._tr(content_key, f"Content for {content_key} not found.")
+
+            self.nav_list.addItem(title)
+
+            step_widget = TourStepWidget(title, content, scale=scale)
             self.stacked_widget.addWidget(step_widget)

-        self .setWindowTitle (self ._tr ("help_guide_dialog_title","Kemono Downloader - Feature Guide"))
-        buttons_layout =QHBoxLayout ()
-        buttons_layout .setContentsMargins (15 ,10 ,15 ,15 )
-        buttons_layout .setSpacing (10 )
-        self .back_button =QPushButton (self ._tr ("tour_dialog_back_button","Back"))
-        self .back_button .clicked .connect (self ._previous_step )
-        self .back_button .setEnabled (False )
-        if getattr (sys ,'frozen',False )and hasattr (sys ,'_MEIPASS'):
-            assets_base_dir =sys ._MEIPASS
-        else :
-            assets_base_dir =os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
-        github_icon_path =os .path .join (assets_base_dir ,"assets","github.png")
-        instagram_icon_path =os .path .join (assets_base_dir ,"assets","instagram.png")
-        discord_icon_path =os .path .join (assets_base_dir ,"assets","discord.png")
-        self .github_button =QPushButton (QIcon (github_icon_path ),"")
-        self .instagram_button =QPushButton (QIcon (instagram_icon_path ),"")
-        self .Discord_button =QPushButton (QIcon (discord_icon_path ),"")
-        scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
+        self.nav_list.currentRowChanged.connect(self.stacked_widget.setCurrentIndex)
+        if self.nav_list.count() > 0:
+            self.nav_list.setCurrentRow(0)
+
+        # Footer Layout (Social links and Close button)
+        footer_layout = QHBoxLayout()
+        footer_layout.setContentsMargins(0, 10, 0, 0)
+
+        # Social Media Icons
+        if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
+            assets_base_dir = sys._MEIPASS
+        else:
+            assets_base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
+
+        github_icon_path = os.path.join(assets_base_dir, "assets", "github.png")
+        instagram_icon_path = os.path.join(assets_base_dir, "assets", "instagram.png")
+        discord_icon_path = os.path.join(assets_base_dir, "assets", "discord.png")
+
+        self.github_button = QPushButton(QIcon(github_icon_path), "")
+        self.instagram_button = QPushButton(QIcon(instagram_icon_path), "")
+        self.discord_button = QPushButton(QIcon(discord_icon_path), "")
+
         icon_dim = int(24 * scale)
         icon_size = QSize(icon_dim, icon_dim)
-        self .github_button .setIconSize (icon_size )
-        self .instagram_button .setIconSize (icon_size )
-        self .Discord_button .setIconSize (icon_size )
-        self .next_button =QPushButton (self ._tr ("tour_dialog_next_button","Next"))
-        self .next_button .clicked .connect (self ._next_step_action )
-        self .next_button .setDefault (True )
-        self .github_button .clicked .connect (self ._open_github_link )
-        self .instagram_button .clicked .connect (self ._open_instagram_link )
-        self .Discord_button .clicked .connect (self ._open_Discord_link )
-        self .github_button .setToolTip (self ._tr ("help_guide_github_tooltip","Visit project's GitHub page (Opens in browser)"))
-        self .instagram_button .setToolTip (self ._tr ("help_guide_instagram_tooltip","Visit our Instagram page (Opens in browser)"))
-        self .Discord_button .setToolTip (self ._tr ("help_guide_discord_tooltip","Visit our Discord community (Opens in browser)"))
-        social_layout =QHBoxLayout ()
-        social_layout .setSpacing (10 )
-        social_layout .addWidget (self .github_button )
-        social_layout .addWidget (self .instagram_button )
-        social_layout .addWidget (self .Discord_button )
-        while buttons_layout .count ():
-            item =buttons_layout .takeAt (0 )
-            if item .widget ():
-                item .widget ().setParent (None )
-            elif item .layout ():
-                pass
-        buttons_layout .addLayout (social_layout )
-        buttons_layout .addStretch (1 )
-        buttons_layout .addWidget (self .back_button )
-        buttons_layout .addWidget (self .next_button )
-        main_layout .addLayout (buttons_layout )
-        self ._update_button_states ()
-
-    def _next_step_action (self ):
-        if self .current_step <len (self .tour_steps_widgets )-1 :
-            self .current_step +=1
-            self .stacked_widget .setCurrentIndex (self .current_step )
-        else :
-            self .accept ()
-        self ._update_button_states ()
-
-    def _previous_step (self ):
-        if self .current_step >0 :
-            self .current_step -=1
-            self .stacked_widget .setCurrentIndex (self .current_step )
-        self ._update_button_states ()
-
-    def _update_button_states (self ):
-        if self .current_step ==len (self .tour_steps_widgets )-1 :
-            self .next_button .setText (self ._tr ("tour_dialog_finish_button","Finish"))
-        else :
-            self .next_button .setText (self ._tr ("tour_dialog_next_button","Next"))
-        self .back_button .setEnabled (self .current_step >0 )
-
-    def _open_github_link (self ):
-        QDesktopServices .openUrl (QUrl ("https://github.com/Yuvi9587"))
-
-    def _open_instagram_link (self ):
-        QDesktopServices .openUrl (QUrl ("https://www.instagram.com/uvi.arts/"))
-
-    def _open_Discord_link (self ):
-        QDesktopServices .openUrl (QUrl ("https://discord.gg/BqP64XTdJN"))
+
+        for button, tooltip_key, url in [
+            (self.github_button, "help_guide_github_tooltip", "https://github.com/Yuvi9587"),
+            (self.instagram_button, "help_guide_instagram_tooltip", "https://www.instagram.com/uvi.arts/"),
+            (self.discord_button, "help_guide_discord_tooltip", "https://discord.gg/BqP64XTdJN")
+        ]:
+            button.setIconSize(icon_size)
+            button.setToolTip(self._tr(tooltip_key))
+            button.setFixedSize(icon_size.width() + 8, icon_size.height() + 8)
+            button.setStyleSheet("background-color: transparent; border: none;")
+            button.clicked.connect(lambda _, u=url: QDesktopServices.openUrl(QUrl(u)))
+            footer_layout.addWidget(button)
+
+        footer_layout.addStretch(1)
+
+        self.finish_button = QPushButton(self._tr("tour_dialog_finish_button", "Finish"))
+        self.finish_button.clicked.connect(self.accept)
+        footer_layout.addWidget(self.finish_button)
+
+        main_layout.addLayout(footer_layout)
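Aside: the heart of this refactor is a single signal/slot connection — QListWidget.currentRowChanged emits the new row index, and QStackedWidget.setCurrentIndex accepts exactly that, so pages flip with no custom Back/Next bookkeeping. A minimal, self-contained sketch of the pattern (hypothetical page titles, not from the repository):

    # nav_stack_demo.py - minimal sketch of the QListWidget/QStackedWidget pattern
    import sys
    from PyQt5.QtWidgets import (QApplication, QWidget, QHBoxLayout,
                                 QListWidget, QStackedWidget, QLabel)

    app = QApplication(sys.argv)
    window = QWidget()
    layout = QHBoxLayout(window)

    nav_list = QListWidget()
    stack = QStackedWidget()
    for title in ("Basics", "Filters", "Advanced"):  # hypothetical page titles
        nav_list.addItem(title)
        stack.addWidget(QLabel(f"Page: {title}"))

    # currentRowChanged carries the new row index, which maps 1:1 onto the
    # stacked widget's page index because items are added in lockstep.
    nav_list.currentRowChanged.connect(stack.setCurrentIndex)
    nav_list.setCurrentRow(0)

    layout.addWidget(nav_list)
    layout.addWidget(stack, 1)
    window.show()
    sys.exit(app.exec_())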
@@ -24,7 +24,7 @@ class MoreOptionsDialog(QDialog):
         layout.addWidget(self.description_label)
         self.radio_button_group = QButtonGroup(self)
         self.radio_content = QRadioButton("Description/Content")
-        self.radio_comments = QRadioButton("Comments (Not Working)")
+        self.radio_comments = QRadioButton("Comments")
         self.radio_button_group.addButton(self.radio_content)
         self.radio_button_group.addButton(self.radio_comments)
         layout.addWidget(self.radio_content)
src/ui/dialogs/MultipartScopeDialog.py (new file, 118 lines)
@@ -0,0 +1,118 @@
+# multipart_scope_dialog.py
+from PyQt5.QtWidgets import (
+    QDialog, QVBoxLayout, QGroupBox, QRadioButton, QDialogButtonBox, QButtonGroup,
+    QLabel, QLineEdit, QHBoxLayout, QFrame
+)
+from PyQt5.QtGui import QIntValidator
+from PyQt5.QtCore import Qt
+
+# It's good practice to get this constant from the source
+# but for this example, we will define it here.
+MAX_PARTS = 16
+
+class MultipartScopeDialog(QDialog):
+    """
+    A dialog to let the user select the scope, number of parts, and minimum size for multipart downloads.
+    """
+    SCOPE_VIDEOS = 'videos'
+    SCOPE_ARCHIVES = 'archives'
+    SCOPE_BOTH = 'both'
+
+    def __init__(self, current_scope='both', current_parts=4, current_min_size_mb=100, parent=None):
+        super().__init__(parent)
+        self.setWindowTitle("Multipart Download Options")
+        self.setWindowFlags(self.windowFlags() & ~Qt.WindowContextHelpButtonHint)
+        self.setMinimumWidth(350)
+
+        # Main Layout
+        layout = QVBoxLayout(self)
+
+        # --- Options Group for Scope ---
+        self.options_group_box = QGroupBox("Apply multipart downloads to:")
+        options_layout = QVBoxLayout()
+        # ... (Radio buttons and button group code remains unchanged) ...
+        self.radio_videos = QRadioButton("Videos Only")
+        self.radio_archives = QRadioButton("Archives Only (.zip, .rar, etc.)")
+        self.radio_both = QRadioButton("Both Videos and Archives")
+
+        if current_scope == self.SCOPE_VIDEOS:
+            self.radio_videos.setChecked(True)
+        elif current_scope == self.SCOPE_ARCHIVES:
+            self.radio_archives.setChecked(True)
+        else:
+            self.radio_both.setChecked(True)
+
+        self.button_group = QButtonGroup(self)
+        self.button_group.addButton(self.radio_videos)
+        self.button_group.addButton(self.radio_archives)
+        self.button_group.addButton(self.radio_both)
+
+        options_layout.addWidget(self.radio_videos)
+        options_layout.addWidget(self.radio_archives)
+        options_layout.addWidget(self.radio_both)
+        self.options_group_box.setLayout(options_layout)
+        layout.addWidget(self.options_group_box)
+
+        # --- START: MODIFIED Download Settings Group ---
+        self.settings_group_box = QGroupBox("Download settings:")
+        settings_layout = QVBoxLayout()
+
+        # Layout for Parts count
+        parts_layout = QHBoxLayout()
+        self.parts_label = QLabel("Number of download parts per file:")
+        self.parts_input = QLineEdit(str(current_parts))
+        self.parts_input.setValidator(QIntValidator(2, MAX_PARTS, self))
+        self.parts_input.setFixedWidth(40)
+        self.parts_input.setToolTip(f"Set the number of concurrent connections per file (2-{MAX_PARTS}).")
+        parts_layout.addWidget(self.parts_label)
+        parts_layout.addStretch()
+        parts_layout.addWidget(self.parts_input)
+        settings_layout.addLayout(parts_layout)
+
+        # Layout for Minimum Size
+        size_layout = QHBoxLayout()
+        self.size_label = QLabel("Minimum file size for multipart (MB):")
+        self.size_input = QLineEdit(str(current_min_size_mb))
+        self.size_input.setValidator(QIntValidator(10, 10000, self)) # Min 10MB, Max ~10GB
+        self.size_input.setFixedWidth(40)
+        self.size_input.setToolTip("Files smaller than this will use a normal, single-part download.")
+        size_layout.addWidget(self.size_label)
+        size_layout.addStretch()
+        size_layout.addWidget(self.size_input)
+        settings_layout.addLayout(size_layout)
+
+        self.settings_group_box.setLayout(settings_layout)
+        layout.addWidget(self.settings_group_box)
+        # --- END: MODIFIED Download Settings Group ---
+
+        # OK and Cancel Buttons
+        self.button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel)
+        self.button_box.accepted.connect(self.accept)
+        self.button_box.rejected.connect(self.reject)
+        layout.addWidget(self.button_box)
+
+        self.setLayout(layout)
+
+    def get_selected_scope(self):
+        # ... (This method remains unchanged) ...
+        if self.radio_videos.isChecked():
+            return self.SCOPE_VIDEOS
+        if self.radio_archives.isChecked():
+            return self.SCOPE_ARCHIVES
+        return self.SCOPE_BOTH
+
+    def get_selected_parts(self):
+        # ... (This method remains unchanged) ...
+        try:
+            parts = int(self.parts_input.text())
+            return max(2, min(parts, MAX_PARTS))
+        except (ValueError, TypeError):
+            return 4
+
+    def get_selected_min_size(self):
+        """Returns the selected minimum size in MB as an integer."""
+        try:
+            size = int(self.size_input.text())
+            return max(10, min(size, 10000)) # Enforce valid range
+        except (ValueError, TypeError):
+            return 100 # Return a safe default
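A sketch of how a caller might drive this dialog (hypothetical wiring, not shown in the diff; assumes a QApplication already exists and the imports above):

    # Hypothetical caller for MultipartScopeDialog.
    dialog = MultipartScopeDialog(current_scope='videos', current_parts=8,
                                  current_min_size_mb=200)
    if dialog.exec_() == QDialog.Accepted:
        scope = dialog.get_selected_scope()           # 'videos', 'archives', or 'both'
        parts = dialog.get_selected_parts()           # clamped to 2..MAX_PARTS
        min_size_mb = dialog.get_selected_min_size()  # clamped to 10..10000
        print(scope, parts, min_size_mb)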
@@ -3,8 +3,27 @@ import re
 try:
     from fpdf import FPDF
     FPDF_AVAILABLE = True
+
+    # --- FIX: Move the class definition inside the try block ---
+    class PDF(FPDF):
+        """Custom PDF class to handle headers and footers."""
+        def header(self):
+            pass
+
+        def footer(self):
+            self.set_y(-15)
+            if self.font_family:
+                self.set_font(self.font_family, '', 8)
+            else:
+                self.set_font('Arial', '', 8)
+            self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'C')
+
 except ImportError:
     FPDF_AVAILABLE = False
+    # If the import fails, FPDF and PDF will not be defined,
+    # but the program won't crash here.
+    FPDF = None
+    PDF = None

 def strip_html_tags(text):
     if not text:
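The reason for the move above: `class PDF(FPDF)` is evaluated at import time, so with the subclass at module level a missing fpdf package left FPDF undefined and raised a NameError even when PDF export was never used. Defining the class inside the try block, and stubbing the names in the except branch, confines the failure to the feature that needs it. A minimal sketch of this optional-dependency pattern (export_pdf is a hypothetical caller, not the repository's function):

    # Optional-dependency guard: subclass only if the library imported.
    try:
        from fpdf import FPDF
        FPDF_AVAILABLE = True

        class PDF(FPDF):
            pass  # customizations go here
    except ImportError:
        FPDF_AVAILABLE = False
        FPDF = None
        PDF = None

    def export_pdf(path):
        # Callers check the flag instead of triggering a NameError.
        if not FPDF_AVAILABLE:
            raise RuntimeError("fpdf is not installed; PDF export disabled")
        pdf = PDF()
        pdf.add_page()
        pdf.output(path)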
@@ -12,19 +31,6 @@ def strip_html_tags(text):
     clean = re.compile('<.*?>')
     return re.sub(clean, '', text)

-class PDF(FPDF):
-    """Custom PDF class to handle headers and footers."""
-    def header(self):
-        pass
-
-    def footer(self):
-        self.set_y(-15)
-        if self.font_family:
-            self.set_font(self.font_family, '', 8)
-        else:
-            self.set_font('Arial', '', 8)
-        self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'C')
-
 def create_single_pdf_from_content(posts_data, output_filename, font_path, logger=print):
     """
     Creates a single, continuous PDF, correctly formatting both descriptions and comments.
@@ -68,7 +74,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
         pdf.ln(10)

         pdf.set_font(default_font_family, 'B', 16)
-        pdf.multi_cell(w=0, h=10, text=post.get('title', 'Untitled Post'), align='L')
+        pdf.multi_cell(w=0, h=10, txt=post.get('title', 'Untitled Post'), align='L')
         pdf.ln(5)

         if 'comments' in post and post['comments']:
@@ -89,7 +95,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
                 pdf.ln(10)

             pdf.set_font(default_font_family, '', 11)
-            pdf.multi_cell(0, 7, body)
+            pdf.multi_cell(w=0, h=7, txt=body)

             if comment_index < len(comments_list) - 1:
                 pdf.ln(3)
@@ -97,7 +103,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
             pdf.ln(3)
     elif 'content' in post:
         pdf.set_font(default_font_family, '', 12)
-        pdf.multi_cell(w=0, h=7, text=post.get('content', 'No Content'))
+        pdf.multi_cell(w=0, h=7, txt=post.get('content', 'No Content'))

     try:
         pdf.output(output_filename)
@@ -105,4 +111,4 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
         return True
     except Exception as e:
         logger(f"❌ A critical error occurred while saving the final PDF: {e}")
         return False
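On the text= → txt= renames above: classic PyFPDF (1.x) only accepts the txt keyword, while newer fpdf2 releases renamed the parameter to text (txt was deprecated around fpdf2 2.7.9), so the keyword has to match whichever library version is pinned. If both had to be supported, a small shim could pick the keyword at runtime — a hypothetical helper, not part of the repository:

    import inspect
    from fpdf import FPDF

    def multi_cell_compat(pdf, w, h, content, **kwargs):
        """Call FPDF.multi_cell with whichever text keyword this fpdf build uses."""
        params = inspect.signature(pdf.multi_cell).parameters
        key = 'text' if 'text' in params else 'txt'
        return pdf.multi_cell(w, h, **{key: content}, **kwargs)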
(File diff suppressed because it is too large)
@@ -196,10 +196,9 @@ def get_link_platform(url):
     if 'twitter.com' in domain or 'x.com' in domain: return 'twitter/x'
     if 'discord.gg' in domain or 'discord.com/invite' in domain: return 'discord invite'
     if 'pixiv.net' in domain: return 'pixiv'
-    if 'kemono.su' in domain or 'kemono.party' in domain: return 'kemono'
-    if 'coomer.su' in domain or 'coomer.party' in domain: return 'coomer'
+    if 'kemono.su' in domain or 'kemono.party' in domain or 'kemono.cr' in domain: return 'kemono'
+    if 'coomer.su' in domain or 'coomer.party' in domain or 'coomer.st' in domain: return 'coomer'

-    # Fallback to a generic name for other domains
     parts = domain.split('.')
     if len(parts) >= 2:
         return parts[-2]
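To illustrate the widened domain checks, a standalone sketch of the matching logic (the real get_link_platform's domain extraction lives outside this hunk; _platform_sketch is hypothetical):

    from urllib.parse import urlparse

    def _platform_sketch(url):
        # Mirrors the hunk's logic for the two services plus the generic fallback.
        domain = urlparse(url).netloc.lower()
        if 'kemono.su' in domain or 'kemono.party' in domain or 'kemono.cr' in domain:
            return 'kemono'
        if 'coomer.su' in domain or 'coomer.party' in domain or 'coomer.st' in domain:
            return 'coomer'
        parts = domain.split('.')
        return parts[-2] if len(parts) >= 2 else domain

    assert _platform_sketch("https://kemono.cr/patreon/user/123") == 'kemono'
    assert _platform_sketch("https://coomer.st/onlyfans/user/abc") == 'coomer'
    assert _platform_sketch("https://example.com/post/1") == 'example'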
@@ -239,16 +239,23 @@ def setup_ui(main_app):
     checkboxes_group_layout.addWidget(advanced_settings_label)
     advanced_row1_layout = QHBoxLayout()
     advanced_row1_layout.setSpacing(10)
-    main_app.use_subfolders_checkbox = QCheckBox("Separate Folders by Known.txt")
-    main_app.use_subfolders_checkbox.setChecked(True)
-    main_app.use_subfolders_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
-    advanced_row1_layout.addWidget(main_app.use_subfolders_checkbox)
+
+    # --- REORDERED CHECKBOXES ---
     main_app.use_subfolder_per_post_checkbox = QCheckBox("Subfolder per Post")
     main_app.use_subfolder_per_post_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
+    main_app.use_subfolder_per_post_checkbox.setChecked(True)
     advanced_row1_layout.addWidget(main_app.use_subfolder_per_post_checkbox)
+
     main_app.date_prefix_checkbox = QCheckBox("Date Prefix")
     main_app.date_prefix_checkbox.setToolTip("When 'Subfolder per Post' is active, prefix the folder name with the post's upload date.")
     advanced_row1_layout.addWidget(main_app.date_prefix_checkbox)
+
+    main_app.use_subfolders_checkbox = QCheckBox("Separate Folders by Known.txt")
+    main_app.use_subfolders_checkbox.setChecked(False)
+    main_app.use_subfolders_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
+    advanced_row1_layout.addWidget(main_app.use_subfolders_checkbox)
+    # --- END REORDER ---
+
     main_app.use_cookie_checkbox = QCheckBox("Use Cookie")
     main_app.use_cookie_checkbox.setChecked(main_app.use_cookie_setting)
     main_app.cookie_text_input = QLineEdit()
@@ -380,7 +387,7 @@ def setup_ui(main_app):
     main_app.link_search_input.setPlaceholderText("Search Links...")
     main_app.link_search_input.setVisible(False)
     log_title_layout.addWidget(main_app.link_search_input)
-    main_app.link_search_button = QPushButton("🔍")
+    main_app.link_search_button = QPushButton("�")
     main_app.link_search_button.setVisible(False)
     main_app.link_search_button.setFixedWidth(int(30 * scale))
     log_title_layout.addWidget(main_app.link_search_button)