Mirror of https://github.com/Yuvi9587/Kemono-Downloader.git (synced 2025-12-29 16:14:44 +00:00)

Compare commits: 3209770d00 ... v6.3.0 (10 commits)
| Author | SHA1 | Date |
|---|---|---|
| | 9888ed0862 | |
| | 9e996bf682 | |
| | e7a6a91542 | |
| | d7faccce18 | |
| | a78c01c4f6 | |
| | 6de9967e0b | |
| | e3dd0e70b6 | |
| | 9db89cfad0 | |
| | 0a6034a632 | |
| | 2da69e7017 | |

(The Author and Date columns were empty in this mirror.)
features.md (339 lines changed)
@@ -1,192 +1,147 @@
-# Kemono Downloader - Feature Guide
+<div>
-This guide provides a comprehensive overview of all user interface elements, input fields, buttons, popups, and functionalities available in the Kemono Downloader.
+<h1>Kemono Downloader - Comprehensive Feature Guide</h1>
+<p>This guide provides a detailed overview of all user interface elements, input fields, buttons, popups, and functionalities available in the application.</p>
-## 1. Main Interface & Workflow
+<hr>
-These are the primary controls you'll interact with to initiate and manage downloads.
+<h2><strong>Main Window: Core Functionality</strong></h2>
+<p>The application is divided into a configuration panel on the left and a status/log panel on the right.</p>
-### 1.1. Core Inputs
+<h3><strong>Primary Inputs (Top-Left)</strong></h3>
-**🔗 Creator/Post URL Input Field**
+<ul>
-- **Purpose**: Paste the URL of the content you want to download.
+<li><strong>URL Input Field</strong>: This is the starting point for most downloads. You can paste a URL for a specific post or for an entire creator's feed. The application's behavior adapts based on the URL type.</li>
-- **Supported Sites**: Kemono.su, Coomer.party, Simpcity.su.
+<li><strong>🎨 Creator Selection Popup</strong>: This button opens a powerful dialog listing all known creators. From here, you can:
-- **Supported URL Types**:
+<ul>
-  - Creator pages (e.g., `https://kemono.su/patreon/user/12345`).
+<li><strong>Search and Queue</strong>: Search for creators and check multiple names. Clicking "Add Selected" populates the main input field, preparing a batch download.</li>
-  - Individual posts (e.g., `https://kemono.su/patreon/user/12345/post/98765`).
+<li><strong>Check for Updates</strong>: Select a single creator's saved profile. This loads their information and switches the main download button to "Check for Updates" mode, allowing you to download only new content since your last session.</li>
-- **Note**: When ⭐ Favorite Mode is active, this field is disabled. For Simpcity.su URLs, the "Use Cookie" option is mandatory and auto-enabled.
+</ul>
+</li>
-**🎨 Creator Selection Button**
+<li><strong>Download Location</strong>: The primary folder where all content will be saved. The <strong>Browse...</strong> button lets you select this folder from your computer.</li>
-- **Icon**: 🎨 (Artist Palette)
+<li><strong>Page Range (Start/End)</strong>: These fields activate only for creator feed URLs. They allow you to download a specific slice of a creator's history (e.g., pages 5 through 10) instead of their entire feed.</li>
-- **Purpose**: Opens the "Creator Selection" dialog to browse and queue downloads from known creators.
+</ul>
-- **Dialog Features**:
+<hr>
-  - Loads creators from `creators.json`.
+<h2><strong>Filtering & Naming (Left Panel)</strong></h2>
-  - **Search Bar**: Filter creators by name.
+<p>These features give you precise control over what gets downloaded and how it's named and organized.</p>
-  - **Creator List**: Displays creators with their service (e.g., Patreon, Fanbox).
+<ul>
-  - **Selection**: Checkboxes to select one or more creators.
+<li><strong>Filter by Character(s)</strong>: A powerful tool to download content featuring specific characters. You can enter multiple names separated by commas.
-  - **Download Scope**: Organize downloads by Characters or Creators.
+<ul>
-  - **Add to Queue**: Adds selected creators or their posts to the download queue.
+<li><strong>Filter: [Scope] Button</strong>: This button changes how the character filter works:
+<ul>
-**Page Range (Start to End) Input Fields**
+<li><strong>Title</strong>: Downloads posts only if a character's name is in the post title.</li>
-- **Purpose**: Specify a range of pages to fetch for creator URLs.
+<li><strong>Files</strong>: Downloads posts if a character's name is in any of the filenames within the post.</li>
-- **Usage**: Enter the starting and ending page numbers.
+<li><strong>Both</strong>: Combines the "Title" and "Files" logic.</li>
-- **Behavior**:
+<li><strong>Comments (Beta)</strong>: Downloads a post if a character's name is mentioned in the comments section.</li>
-  - If blank, all pages are processed.
+</ul>
-  - Disabled for single post URLs.
+</li>
+</ul>
-**📁 Download Location Input Field & Browse Button**
+</li>
-- **Purpose**: Specify the main directory for downloaded files.
+<li><strong>Skip with Words</strong>: A keyword-based filter to avoid unwanted content (e.g., <code>WIP</code>, <code>sketch</code>).
-- **Usage**: Type the path or click "Browse..." to select a folder.
+<ul>
-- **Requirement**: Mandatory for all download operations.
+<li><strong>Scope: [Type] Button</strong>: This button changes how the skip filter works:
+<ul>
-### 1.2. Action Buttons
+<li><strong>Posts</strong>: Skips the entire post if a keyword is found in the title.</li>
-**⬇️ Start Download / 🔗 Extract Links Button**
+<li><strong>Files</strong>: Skips only individual files if a keyword is found in the filename.</li>
-- **Purpose**: Initiates downloading or link extraction.
+<li><strong>Both</strong>: Applies both levels of skipping.</li>
-- **Behavior**:
+</ul>
-  - Shows "🔗 Extract Links" if "Only Links" is selected.
+</li>
-  - Otherwise, shows "⬇️ Start Download".
+</ul>
-  - Supports single-threaded or multi-threaded downloads based on settings.
+</li>
+<li><strong>Remove Words from name</strong>: Automatically cleans downloaded filenames by removing any specified words (e.g., "patreon," "HD").</li>
-**🔄 Restore Download Button**
+</ul>
-- **Visibility**: Appears if an incomplete session is detected on startup.
+<h3><strong>File Type Filter (Radio Buttons)</strong></h3>
-- **Purpose**: Resumes a previously interrupted download session.
+<p>This section lets you choose the kind of content you want:</p>
+<ul>
-**⏸️ Pause / ▶️ Resume Download Button**
+<li><strong>All, Images/GIFs, Videos, 🎧 Only Audio, 📦 Only Archives</strong>: These options filter the downloads to only include the selected file types.</li>
-- **Purpose**: Pause or resume the ongoing download.
+<li><strong>🔗 Only Links</strong>: This special mode doesn't download any files. Instead, it scans post descriptions and lists all external links (like Mega, Google Drive) in the log panel.</li>
-- **Behavior**: Toggles between "Pause" and "Resume". Some UI settings can be changed while paused.
+<li><strong>More</strong>: Opens a dialog for text-only downloads. You can choose to save post <strong>descriptions</strong> or <strong>comments</strong> as formatted <strong>PDF, DOCX, or TXT</strong> files. A key feature here is the <strong>"Single PDF"</strong> option, which compiles the text from all downloaded posts into one continuous, sorted PDF document.</li>
+</ul>
-**❌ Cancel & Reset UI Button**
+<hr>
-- **Purpose**: Stops the current operation and performs a "soft" reset.
+<h2><strong>Download Options & Advanced Settings (Checkboxes)</strong></h2>
-- **Behavior**: Halts background threads, preserves URL and Download Location inputs, resets other settings.
+<ul>
+<li><strong>Skip .zip</strong>: A simple toggle to ignore archive files during downloads.</li>
-**🔄 Reset Button (in the log area)**
+<li><strong>Download Thumbnails Only</strong>: Downloads only the small preview images instead of the full-resolution files.</li>
-- **Purpose**: Performs a "hard" reset when no operation is active.
+<li><strong>Scan Content for Images</strong>: A crucial feature that scans the post's text content for embedded images that may not be listed in the API, ensuring a more complete download.</li>
-- **Behavior**: Clears all inputs, resets options to default, and clears logs.
+<li><strong>Compress to WebP</strong>: Saves disk space by automatically converting large images into the efficient WebP format.</li>
+<li><strong>Keep Duplicates</strong>: Opens a dialog to control how files with identical content are handled. The default is to skip duplicates, but you can choose to keep all of them or set a specific limit (e.g., "keep up to 2 copies of the same file").</li>
-## 2. Filtering & Content Selection
+<li><strong>Subfolder per Post</strong>: Organizes downloads by creating a unique folder for each post, named after the post's title.</li>
-These options allow precise control over downloaded content.
+<li><strong>Date Prefix</strong>: When "Subfolder per Post" is on, this adds the post's date to the beginning of the folder name (e.g., <code>2025-07-25 Post Title</code>).</li>
+<li><strong>Separate Folders by Known.txt</strong>: This enables the automatic folder organization system based on your "Known Names" list.</li>
-### 2.1. Content Filtering
+<li><strong>Use Cookie</strong>: Allows the application to use browser cookies to access content that might be behind a paywall or login. You can paste a cookie string directly or use <strong>Browse...</strong> to select a <code>cookies.txt</code> file.</li>
-**🎯 Filter by Character(s) Input Field**
+<li><strong>Use Multithreading</strong>: Greatly speeds up downloads of creator feeds by processing multiple posts at once. The number of <strong>Threads</strong> can be configured.</li>
-- **Purpose**: Download content related to specific characters or series.
+<li><strong>Show External Links in Log</strong>: When checked, a secondary log panel appears at the bottom of the right side, dedicated to listing any external links found.</li>
-- **Usage**: Enter comma-separated character names.
+</ul>
-- **Advanced Syntax**:
+<hr>
-  - `Nami`: Simple filter.
+<h2><strong>Known Names Management (Bottom-Left)</strong></h2>
-  - `(Vivi, Ulti)`: Grouped filter. Matches posts with "Vivi" OR "Ulti". Creates a shared folder like `Vivi Ulti` if subfolders are enabled.
+<p>This powerful feature automates the creation of organized, named folders.</p>
-  - `(Boa, Hancock)~`: Aliased filter. Treats "Boa" and "Hancock" as the same entity.
+<ul>
+<li><strong>Known Shows/Characters List</strong>: Displays all the names and groups you've saved.</li>
-**Filter: [Type] Button (Character Filter Scope)**
+<li><strong>Search...</strong>: Filters the list to quickly find a name.</li>
-- **Purpose**: Defines where the character filter is applied. Cycles on click.
+<li><strong>Open Known.txt</strong>: Opens the source file in a text editor for advanced manual editing.</li>
-- **Options**:
+<li><strong>Add New Name</strong>:
-  - **Filter: Title** (Default): Matches post titles.
+<ul>
-  - **Filter: Files**: Matches filenames.
+<li><strong>Single Name</strong>: Typing <code>Tifa Lockhart</code> and clicking <strong>➕ Add</strong> creates an entry that will match "Tifa Lockhart".</li>
-  - **Filter: Both**: Checks title first, then filenames.
+<li><strong>Group</strong>: Typing <code>(Boa, Hancock, Snake Princess)~</code> and clicking <strong>➕ Add</strong> creates a single entry named "Boa Hancock Snake Princess". The application will then look for "Boa," "Hancock," OR "Snake Princess" in titles/filenames and save any matches into that combined folder.</li>
-  - **Filter: Comments (Beta)**: Checks filenames, then post comments.
+</ul>
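The filter grammar described above (`Nami`, `(Vivi, Ulti)`, `(Boa, Hancock)~`) can be sketched as a small parser. This is an illustrative reconstruction, not the project's actual code: `parse_filter_term`, its return shape, and `post_matches` are hypothetical helpers.

```python
import re

def parse_filter_term(term):
    """Parse one character-filter term into (names, folder_name, aliased).

    Examples from the guide:
      "Nami"             -> (["Nami"], "Nami", False)          simple filter
      "(Vivi, Ulti)"     -> (["Vivi", "Ulti"], "Vivi Ulti", False)  grouped
      "(Boa, Hancock)~"  -> (["Boa", "Hancock"], "Boa Hancock", True)  aliased
    """
    term = term.strip()
    m = re.fullmatch(r"\((.*?)\)(~?)", term)
    if m:
        names = [n.strip() for n in m.group(1).split(",") if n.strip()]
        # Grouped/aliased terms share one folder named after all members.
        return names, " ".join(names), m.group(2) == "~"
    return [term], term, False

def post_matches(title, term):
    """True if any of the term's names appears in the title (case-insensitive)."""
    names, _folder, _aliased = parse_filter_term(term)
    low = title.lower()
    return any(n.lower() in low for n in names)
```

A grouped term matches on any member name but routes all matches into the shared folder, which is the behavior both versions of the guide describe.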
+</li>
-**🚫 Skip with Words Input Field**
+<li><strong>⤵️ Add to Filter</strong>: Opens a dialog with your full Known Names list, allowing you to check multiple entries and add them all to the "Filter by Character(s)" field at once.</li>
-- **Purpose**: Exclude posts/files with specified keywords (e.g., `WIP`, `sketch`).
+<li><strong>🗑️ Delete Selected</strong>: Removes highlighted names from your list.</li>
+</ul>
-**Scope: [Type] Button (Skip Words Scope)**
+<hr>
-- **Purpose**: Defines where skip words are applied. Cycles on click.
+<h2><strong>Action Buttons & Status Controls</strong></h2>
-- **Options**:
+<ul>
-  - **Scope: Posts** (Default): Skips posts if the title contains a skip word.
+<li><strong>⬇️ Start Download / 🔗 Extract Links</strong>: The main action button. Its function is dynamic:
-  - **Scope: Files**: Skips files if the filename contains a skip word.
+<ul>
-  - **Scope: Both**: Applies both rules.
+<li><strong>Normal Mode</strong>: Starts the download based on the current settings.</li>
+<li><strong>Update Mode</strong>: After selecting a creator profile, this button changes to <strong>🔄 Check for Updates</strong>.</li>
-**✂️ Remove Words from Name Input Field**
+<li><strong>Update Confirmation</strong>: After new posts are found, it changes to <strong>⬇️ Start Download (X new)</strong>.</li>
-- **Purpose**: Remove unwanted text from filenames (e.g., `patreon`, `[HD]`).
+<li><strong>Link Extraction Mode</strong>: The text changes to <strong>🔗 Extract Links</strong>.</li>
+</ul>
-### 2.2. File Type Filtering
+</li>
-**Filter Files (Radio Buttons)**
+<li><strong>⏸️ Pause / ▶️ Resume Download</strong>: Pauses the ongoing download, allowing you to change certain settings (like filters) on the fly. Click again to resume.</li>
-- **Purpose**: Select file types to download.
+<li><strong>❌ Cancel & Reset UI</strong>: Immediately stops all download activity and resets the UI to a clean state, preserving your URL and Download Location inputs.</li>
-- **Options**:
+<li><strong>Error Button</strong>: If files fail to download, they are logged. This button opens a dialog listing all failed files and will show a count of errors (e.g., <strong>(5) Error</strong>). From the dialog, you can:
-  - **All**: All file types.
+<ul>
-  - **Images/GIFs**: Common image formats.
+<li>Select specific files to <strong>Retry</strong> downloading.</li>
-  - **Videos**: Common video formats.
+<li><strong>Export</strong> the list of failed URLs to a <code>.txt</code> file.</li>
-  - **🎧 Only Audio**: Common audio formats.
+</ul>
-  - **📦 Only Archives**: Only `.zip` and `.rar` files.
+</li>
-  - **🔗 Only Links**: Extracts external links without downloading files.
+<li><strong>🔄 Reset (Top-Right)</strong>: A hard reset that clears all logs and returns every single UI element to its default state.</li>
+<li><strong>⚙️ (Settings)</strong>: Opens the main Settings dialog.</li>
-**Skip .zip / Skip .rar Checkboxes**
+<li><strong>📜 (History)</strong>: Opens the Download History dialog.</li>
-- **Purpose**: Skip downloading `.zip` or `.rar` files.
+<li><strong>? (Help)</strong>: Opens a helpful guide explaining the application's features.</li>
-- **Behavior**: Disabled when "📦 Only Archives" is active.
+<li><strong>❤️ Support</strong>: Opens a dialog with information on how to support the developer.</li>
+</ul>
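The Posts/Files/Both scope rules for skip words described above can be expressed as a pure function. This is a sketch of the documented behavior, not the app's implementation; `should_skip` and its return shape are assumptions.

```python
def should_skip(post_title, filenames, skip_words, scope):
    """Apply the skip-word filter at the given scope.

    scope:
      'posts' - skip the entire post when a keyword appears in the title
      'files' - drop only filenames containing a keyword
      'both'  - apply both rules in turn
    Returns (skip_entire_post, kept_filenames).
    """
    words = [w.lower() for w in skip_words]
    title_hit = any(w in post_title.lower() for w in words)
    if scope in ("posts", "both") and title_hit:
        return True, []  # whole post skipped, nothing kept
    if scope in ("files", "both"):
        kept = [f for f in filenames
                if not any(w in f.lower() for w in words)]
        return False, kept
    return False, list(filenames)
```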
-## 3. Download Customization
+<hr>
-Options to refine the download process and output.
+<h2><strong>Specialized Modes & Features</strong></h2>
+<h3><strong>⭐ Favorite Mode</strong></h3>
-- **Download Thumbnails Only**: Downloads small preview images instead of full-resolution files.
+<p>Activating this mode transforms the UI for managing saved collections:</p>
-- **Scan Content for Images**: Scans post HTML for `<img>` tags, crucial for images in descriptions.
+<ul>
-- **Compress to WebP**: Converts images to WebP format (requires Pillow library).
+<li>The URL input is disabled.</li>
-- **Keep Duplicates**: Normally, if a post contains multiple files with the same name, only the first is downloaded. Checking this option will download all of them, renaming subsequent unique files with a numeric suffix (e.g., `image_1.jpg`).
+<li>The main action buttons are replaced with:
-- **🗄️ Custom Folder Name (Single Post Only)**: Specify a custom folder name for a single post's content (appears if subfolders are enabled).
+<ul>
+<li><strong>🖼️ Favorite Artists</strong>: Opens a dialog to browse and queue downloads from your saved favorite creators.</li>
-## 4. 📖 Manga/Comic Mode
+<li><strong>📄 Favorite Posts</strong>: Opens a dialog to browse and queue downloads for specific saved favorite posts.</li>
-A mode for downloading creator feeds in chronological order, ideal for sequential content.
+</ul>
+</li>
-- **Activation**: Active when downloading a creator's entire feed (not a single post).
+<li><strong>Scope: [Location] Button</strong>: Toggles where the favorited content is saved:
-- **Core Behavior**: Fetches all posts, processing from oldest to newest.
+<ul>
-- **Filename Style Toggle Button (in the log area)**:
+<li><strong>Selected Location</strong>: Saves all content directly into the main "Download Location".</li>
-  - **Purpose**: Controls file naming in Manga Mode. Cycles on click.
+<li><strong>Artist Folders</strong>: Creates a subfolder for each artist inside the main "Download Location".</li>
-  - **Options**:
+</ul>
-    - **Name: Post Title**: First file named after post title; others keep original names.
+</li>
-    - **Name: Original File**: Files keep server-provided names, with optional prefix.
+</ul>
-    - **Name: Title+G.Num**: Global numbering with post title prefix (e.g., `Chapter 1_001.jpg`).
+<h3><strong>📖 Manga/Comic Mode</strong></h3>
-    - **Name: Date Based**: Sequential naming by post date (e.g., `001.jpg`), with optional prefix.
+<p>This mode is designed for sequential content and has several effects:</p>
-    - **Name: Post ID**: Files named after post ID to avoid clashes.
+<ul>
-    - **Name: Date + Title**: Combines post date and title for filenames.
+<li><strong>Reverses Download Order</strong>: It fetches and downloads posts from <strong>oldest to newest</strong>.</li>
+<li><strong>Enables Special Naming</strong>: A <strong><code>Name: [Style]</code></strong> button appears, allowing you to choose how files are named to maintain their correct order (e.g., by Post Title, by Date, or simple sequential numbering like <code>001, 002, 003...</code>).</li>
-## 5. Folder Organization & Known.txt
+<li><strong>Disables Multithreading (for certain styles)</strong>: To guarantee perfect sequential numbering, multithreading for posts is automatically disabled for certain naming styles.</li>
-Controls for structuring downloaded content.
+</ul>
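The "Date Based" manga naming style above (oldest-first, `001.jpg`, `002.jpg`, ..., with an optional prefix) can be illustrated with a small sketch. The post dict shape and the `manga_sequence_names` helper are assumptions for illustration; the project's actual naming code is not shown in this diff.

```python
def manga_sequence_names(posts, prefix=""):
    """Assign sequential names across all files of all posts, oldest post
    first, mirroring the 'Date Based' style (e.g. 001.jpg, 002.png, ...)."""
    # Sort posts chronologically; ISO date strings sort lexicographically.
    ordered = sorted(posts, key=lambda p: p["published"])
    names, counter = [], 1
    for post in ordered:
        for filename in post["files"]:
            ext = filename.rsplit(".", 1)[-1]  # keep each file's extension
            names.append(f"{prefix}{counter:03d}.{ext}")
            counter += 1
    return names
```

This also makes clear why some styles force single-threaded post processing: the global counter only stays correct if posts are handled strictly in order.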
+<h3><strong>Session & Error Management</strong></h3>
-- **Separate Folders by Name/Title Checkbox**: Enables automatic subfolder creation.
+<ul>
-- **Subfolder per Post Checkbox**: Creates subfolders for each post, named after the post title.
+<li><strong>Session Restore</strong>: If the application is closed unexpectedly during a download, it will detect the incomplete session on the next launch. The UI will present a <strong>🔄 Restore Download</strong> button to resume exactly where you left off. You can also choose to discard the session.</li>
-- **Date Prefix for Post Subfolders Checkbox**: When used with "Subfolder per Post," this option prefixes the folder name with the post's upload date (e.g., `2025-07-11 Post Title`), allowing for chronological sorting.
+<li><strong>Update Checking</strong>: By selecting a creator profile via the <strong>🎨 Creator Selection Popup</strong>, you can run an update check. The application compares the posts on the server with your download history for that creator and will prompt you to download only the new content.</li>
-- **Known.txt Management UI (Bottom Left)**:
+</ul>
-  - **Purpose**: Manages a local `Known.txt` file for series, characters, or terms used in folder creation.
+<h3><strong>Logging & Monitoring</strong></h3>
-  - **List Display**: Shows primary names from `Known.txt`.
+<ul>
-  - **➕ Add Button**: Adds names or groups (e.g., `(Character A, Alias B)~`).
+<li><strong>Progress Log</strong>: The main log provides real-time feedback on the download process, including status messages, file saves, skips, and errors.</li>
-  - **⤵️ Add to Filter Button**: Select names from `Known.txt` for the character filter.
+<li><strong>👁️ Log View Toggle</strong>: Switches the log view between the standard <strong>Progress Log</strong> and a <strong>Missed Character Log</strong>, which shows potential character names from posts that were skipped by your filters, helping you discover new names to add to your list.</li>
-  - **🗑️ Delete Selected Button**: Removes selected names from `Known.txt`.
+</ul>
-  - **Open Known.txt Button**: Opens the file in the default text editor.
+</div>
-- **❓ Help Button**: Opens this feature guide.
-- **📜 History Button**: Views recent download history.
-
-## 6. ⭐ Favorite Mode (Kemono.su Only)
-Download from favorited artists/posts on Kemono.su.
-
-- **Enable Checkbox ("⭐ Favorite Mode")**:
-  - Switches to Favorite Mode.
-  - Disables the main URL input.
-  - Changes action buttons to "Favorite Artists" and "Favorite Posts".
-  - Requires cookies.
-- **🖼️ Favorite Artists Button**: Select and download from favorited artists.
-- **📄 Favorite Posts Button**: Select and download specific favorited posts.
-- **Favorite Download Scope Button**:
-  - **Scope: Selected Location**: Downloads favorites to the main directory.
-  - **Scope: Artist Folders**: Creates subfolders per artist.
-
-## 7. Advanced Settings & Performance
-- **🍪 Cookie Management**:
-  - **Use Cookie Checkbox**: Enables cookies for restricted content.
-  - **Cookie Text Field**: Paste cookie string.
-  - **Browse... Button**: Select a `cookies.txt` file (Netscape format).
-- **Use Multithreading Checkbox & Threads Input**:
-  - **Purpose**: Configures simultaneous operations.
-  - **Behavior**: Sets concurrent post processing (creator feeds) or file downloads (single posts).
-- **Multi-part Download Toggle Button**:
-  - **Purpose**: Enables/disables multi-segment downloading for large files.
-  - **Note**: Best for large files; less efficient for small files.
-
-## 8. Logging, Monitoring & Error Handling
-- **📜 Progress Log Area**: Displays messages, progress, and errors.
-- **👁️ / 🙈 Log View Toggle Button**: Switches between Progress Log and Missed Character Log (skipped posts).
-- **Show External Links in Log**: Displays external links (e.g., Mega, Google Drive) in a secondary panel.
-- **Export Links Button**: Saves extracted links to a `.txt` file in "Only Links" mode.
-- **Download Extracted Links Button**: Downloads files from supported external links in "Only Links" mode.
-- **🆘 Error Button & Dialog**:
-  - **Purpose**: Active if files fail to download. The button will display a live count of failed files (e.g., **(3) Error**).
-  - **Dialog Features**:
-    - Lists failed files.
-    - Retry failed downloads.
-    - Export failed URLs to a text file.
-
-## 9. Application Settings (⚙️)
-- **Appearance**: Switch between Light and Dark themes.
-- **Language**: Change UI language (restart required).
@@ -120,7 +120,7 @@ def download_from_api(
     selected_cookie_file=None,
     app_base_dir=None,
     manga_filename_style_for_sort_check=None,
-    processed_post_ids=None # --- ADD THIS ARGUMENT ---
+    processed_post_ids=None
 ):
     headers = {
         'User-Agent': 'Mozilla/5.0',
@@ -139,9 +139,14 @@ def download_from_api(
     parsed_input_url_for_domain = urlparse(api_url_input)
     api_domain = parsed_input_url_for_domain.netloc
-    if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
+    # --- START: MODIFIED LOGIC ---
+    # This list is updated to include the new .cr and .st mirrors for validation.
+    if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
         logger(f"⚠️ Unrecognized domain '{api_domain}' from input URL. Defaulting to kemono.su for API calls.")
         api_domain = "kemono.su"
+    # --- END: MODIFIED LOGIC ---
 
     cookies_for_api = None
     if use_cookie and app_base_dir:
         cookies_for_api = prepare_cookies_for_request(use_cookie, cookie_text, selected_cookie_file, app_base_dir, logger, target_domain=api_domain)
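The hunk above keeps the input URL's host only when it matches a known mirror, otherwise falling back to kemono.su. The same check can be factored into a standalone helper; the name and signature here are ours, for illustration only.

```python
from urllib.parse import urlparse

# Mirror list matching the updated validation in the diff above.
KNOWN_DOMAINS = ['kemono.su', 'kemono.party', 'kemono.cr',
                 'coomer.su', 'coomer.party', 'coomer.st']

def resolve_api_domain(url, fallback="kemono.su"):
    """Return the URL's host if it belongs to a known mirror,
    else the fallback domain used for API calls."""
    netloc = urlparse(url).netloc
    if any(d in netloc.lower() for d in KNOWN_DOMAINS):
        return netloc
    return fallback
```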
@@ -220,6 +225,9 @@ def download_from_api(
                 logger(f"   Manga Mode: No posts found within the specified page range ({start_page or 1}-{end_page}).")
                 break
             all_posts_for_manga_mode.extend(posts_batch_manga)
+
+            logger(f"MANGA_FETCH_PROGRESS:{len(all_posts_for_manga_mode)}:{current_page_num_manga}")
+
             current_offset_manga += page_size
             time.sleep(0.6)
         except RuntimeError as e:
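The added logger call emits a machine-readable `MANGA_FETCH_PROGRESS:<total_posts>:<page>` line rather than human-readable text. A UI-side consumer might split it like this; the parser below is a sketch (the actual consumer is not shown in this diff).

```python
def parse_manga_progress(message):
    """Parse 'MANGA_FETCH_PROGRESS:<total>:<page>' into (total, page) ints,
    or return None for ordinary log lines."""
    prefix = "MANGA_FETCH_PROGRESS:"
    if not message.startswith(prefix):
        return None
    total, page = message[len(prefix):].split(":")
    return int(total), int(page)
```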
@@ -232,7 +240,12 @@ def download_from_api(
             logger(f"❌ Unexpected error during manga mode fetch: {e}")
             traceback.print_exc()
             break
 
     if cancellation_event and cancellation_event.is_set(): return
 
+    if all_posts_for_manga_mode:
+        logger(f"MANGA_FETCH_COMPLETE:{len(all_posts_for_manga_mode)}")
+
     if all_posts_for_manga_mode:
         if processed_post_ids:
             original_count = len(all_posts_for_manga_mode)
@@ -5,11 +5,10 @@ import json
 import traceback
 from concurrent.futures import ThreadPoolExecutor, as_completed, Future
 from .api_client import download_from_api
-from .workers import PostProcessorWorker, DownloadThread
+from .workers import PostProcessorWorker
 from ..config.constants import (
     STYLE_DATE_BASED, STYLE_POST_TITLE_GLOBAL_NUMBERING,
-    MAX_THREADS, POST_WORKER_BATCH_THRESHOLD, POST_WORKER_NUM_BATCHES,
-    POST_WORKER_BATCH_DELAY_SECONDS
+    MAX_THREADS
 )
 from ..utils.file_utils import clean_folder_name
@@ -44,6 +43,7 @@ class DownloadManager:
         self.creator_profiles_dir = None
         self.current_creator_name_for_profile = None
         self.current_creator_profile_path = None
+        self.session_file_path = None
 
     def _log(self, message):
         """Puts a progress message into the queue for the UI."""
@@ -62,7 +62,11 @@ class DownloadManager:
             self._log("❌ Cannot start a new session: A session is already in progress.")
             return
 
+        self.session_file_path = config.get('session_file_path')
         creator_profile_data = self._setup_creator_profile(config)
 
+        # Save settings to profile at the start of the session
+        if self.current_creator_profile_path:
             creator_profile_data['settings'] = config
             creator_profile_data.setdefault('processed_post_ids', [])
             self._save_creator_profile(creator_profile_data)
@@ -77,6 +81,7 @@ class DownloadManager:
         self.total_downloads = 0
         self.total_skips = 0
         self.all_kept_original_filenames = []
 
         is_single_post = bool(config.get('target_post_id_from_initial_url'))
         use_multithreading = config.get('use_multithreading', True)
         is_manga_sequential = config.get('manga_mode_active') and config.get('manga_filename_style') in [STYLE_DATE_BASED, STYLE_POST_TITLE_GLOBAL_NUMBERING]
@@ -86,88 +91,54 @@ class DownloadManager:
         if should_use_multithreading_for_posts:
             fetcher_thread = threading.Thread(
                 target=self._fetch_and_queue_posts_for_pool,
-                args=(config, restore_data, creator_profile_data), # Add argument here
+                args=(config, restore_data, creator_profile_data),
                 daemon=True
             )
             fetcher_thread.start()
         else:
-            self._start_single_threaded_session(config)
+            # Single-threaded mode does not use the manager's complex logic
+            self._log("ℹ️ Manager is handing off to a single-threaded worker...")
+            # The single-threaded worker will manage its own lifecycle and signals.
+            # The manager's role for this session is effectively over.
+            self.is_running = False # Allow another session to start if needed
+            self.progress_queue.put({'type': 'handoff_to_single_thread', 'payload': (config,)})

-    def _start_single_threaded_session(self, config):
-        """Handles downloads that are best processed by a single worker thread."""
-        self._log("ℹ️ Initializing single-threaded download process...")
-        self.worker_thread = threading.Thread(
-            target=self._run_single_worker,
-            args=(config,),
-            daemon=True
-        )
-        self.worker_thread.start()

-    def _run_single_worker(self, config):
-        """Target function for the single-worker thread."""
-        try:
-            worker = DownloadThread(config, self.progress_queue)
-            worker.run() # This is the main blocking call for this thread
-        except Exception as e:
-            self._log(f"❌ CRITICAL ERROR in single-worker thread: {e}")
-            self._log(traceback.format_exc())
-        finally:
-            self.is_running = False
-
-    def _fetch_and_queue_posts_for_pool(self, config, restore_data):
+    def _fetch_and_queue_posts_for_pool(self, config, restore_data, creator_profile_data):
         """
-        Fetches all posts from the API and submits them as tasks to a thread pool.
-        This method runs in its own dedicated thread to avoid blocking.
+        Fetches posts from the API in batches and submits them as tasks to a thread pool.
+        This method runs in its own dedicated thread to avoid blocking the UI.
+        It provides immediate feedback as soon as the first batch of posts is found.
         """
         try:
             num_workers = min(config.get('num_threads', 4), MAX_THREADS)
             self.thread_pool = ThreadPoolExecutor(max_workers=num_workers, thread_name_prefix='PostWorker_')

-            session_processed_ids = set(restore_data['processed_post_ids']) if restore_data else set()
+            session_processed_ids = set(restore_data.get('processed_post_ids', [])) if restore_data else set()
             profile_processed_ids = set(creator_profile_data.get('processed_post_ids', []))
             processed_ids = session_processed_ids.union(profile_processed_ids)

-            if restore_data:
+            if restore_data and 'all_posts_data' in restore_data:
+                # This logic for session restore remains as it relies on a pre-fetched list
                 all_posts = restore_data['all_posts_data']
-                processed_ids = set(restore_data['processed_post_ids'])
                 posts_to_process = [p for p in all_posts if p.get('id') not in processed_ids]
                 self.total_posts = len(all_posts)
                 self.processed_posts = len(processed_ids)
                 self._log(f"🔄 Restoring session. {len(posts_to_process)} posts remaining.")
-            else:
-                posts_to_process = self._get_all_posts(config)
-                self.total_posts = len(posts_to_process)
-                self.processed_posts = 0

                 self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})

                 if not posts_to_process:
-                    self._log("✅ No new posts to process.")
+                    self._log("✅ No new posts to process from restored session.")
                     return

                 for post_data in posts_to_process:
-                    if self.cancellation_event.is_set():
-                        break
+                    if self.cancellation_event.is_set(): break
                     worker = PostProcessorWorker(post_data, config, self.progress_queue)
                     future = self.thread_pool.submit(worker.process)
                     future.add_done_callback(self._handle_future_result)
                     self.active_futures.append(future)
-        except Exception as e:
-            self._log(f"❌ CRITICAL ERROR in post fetcher thread: {e}")
-            self._log(traceback.format_exc())
-        finally:
-            if self.thread_pool:
-                self.thread_pool.shutdown(wait=True)
-            self.is_running = False
-            self._log("🏁 All processing tasks have completed.")
-            self.progress_queue.put({
-                'type': 'finished',
-                'payload': (self.total_downloads, self.total_skips, self.cancellation_event.is_set(), self.all_kept_original_filenames)
-            })
-
-    def _get_all_posts(self, config):
-        """Helper to fetch all posts using the API client."""
-        all_posts = []
+            else:
+                # --- START: REFACTORED STREAMING LOGIC ---
                 post_generator = download_from_api(
                     api_url_input=config['api_url'],
                     logger=self._log,
@@ -181,11 +152,50 @@ class DownloadManager:
                     selected_cookie_file=config.get('selected_cookie_file'),
                     app_base_dir=config.get('app_base_dir'),
                     manga_filename_style_for_sort_check=config.get('manga_filename_style'),
-                    processed_post_ids=config.get('processed_post_ids', [])
+                    processed_post_ids=list(processed_ids)
                 )

+                self.total_posts = 0
+                self.processed_posts = 0
+
+                # Process posts in batches as they are yielded by the API client
                 for batch in post_generator:
-                    all_posts.extend(batch)
-                return all_posts
+                    if self.cancellation_event.is_set():
+                        self._log(" Post fetching cancelled.")
+                        break
+
+                    # Filter out any posts that might have been processed since the start
+                    posts_in_batch_to_process = [p for p in batch if p.get('id') not in processed_ids]
+
+                    if not posts_in_batch_to_process:
+                        continue
+
+                    # Update total count and immediately inform the UI
+                    self.total_posts += len(posts_in_batch_to_process)
+                    self.progress_queue.put({'type': 'overall_progress', 'payload': (self.total_posts, self.processed_posts)})
+
+                    for post_data in posts_in_batch_to_process:
+                        if self.cancellation_event.is_set(): break
+                        worker = PostProcessorWorker(post_data, config, self.progress_queue)
+                        future = self.thread_pool.submit(worker.process)
+                        future.add_done_callback(self._handle_future_result)
+                        self.active_futures.append(future)
+
+                if self.total_posts == 0 and not self.cancellation_event.is_set():
+                    self._log("✅ No new posts found to process.")
+
+        except Exception as e:
+            self._log(f"❌ CRITICAL ERROR in post fetcher thread: {e}")
+            self._log(traceback.format_exc())
+        finally:
+            if self.thread_pool:
+                self.thread_pool.shutdown(wait=True)
+            self.is_running = False
+            self._log("🏁 All processing tasks have completed or been cancelled.")
+            self.progress_queue.put({
+                'type': 'finished',
+                'payload': (self.total_downloads, self.total_skips, self.cancellation_event.is_set(), self.all_kept_original_filenames)
+            })

     def _handle_future_result(self, future: Future):
         """Callback executed when a worker task completes."""
@@ -261,9 +271,15 @@ class DownloadManager:
         """Cancels the current running session."""
         if not self.is_running:
             return

+        if self.cancellation_event.is_set():
+            self._log("ℹ️ Cancellation already in progress.")
+            return
+
         self._log("⚠️ Cancellation requested by user...")
         self.cancellation_event.set()
-        if self.thread_pool:
-            self.thread_pool.shutdown(wait=False, cancel_futures=True)

-        self.is_running = False
+        if self.thread_pool:
+            self._log(" Signaling all worker threads to stop and shutting down pool...")
+            self.thread_pool.shutdown(wait=False)
@@ -1,4 +1,5 @@
 import os
+import sys
 import queue
 import re
 import threading
@@ -409,6 +410,39 @@ class PostProcessorWorker:
         unique_id_for_part_file = uuid.uuid4().hex[:8]
         unique_part_file_stem_on_disk = f"{temp_file_base_for_unique_part}_{unique_id_for_part_file}"
         max_retries = 3
+        if not self.keep_in_post_duplicates:
+            final_save_path_check = os.path.join(target_folder_path, filename_to_save_in_main_path)
+            if os.path.exists(final_save_path_check):
+                try:
+                    # Use a HEAD request to get the expected size without downloading the body
+                    with requests.head(file_url, headers=headers, timeout=15, cookies=cookies_to_use_for_file, allow_redirects=True) as head_response:
+                        head_response.raise_for_status()
+                        expected_size = int(head_response.headers.get('Content-Length', -1))
+
+                    actual_size = os.path.getsize(final_save_path_check)
+
+                    if expected_size != -1 and actual_size == expected_size:
+                        self.logger(f" -> Skip (File Exists & Complete): '{filename_to_save_in_main_path}' is already on disk with the correct size.")
+
+                        # We still need to add its hash to the session to prevent duplicates in other modes
+                        # This is a quick hash calculation for the already existing file
+                        try:
+                            md5_hasher = hashlib.md5()
+                            with open(final_save_path_check, 'rb') as f_verify:
+                                for chunk in iter(lambda: f_verify.read(8192), b""):
+                                    md5_hasher.update(chunk)
+
+                            with self.downloaded_hash_counts_lock:
+                                self.downloaded_hash_counts[md5_hasher.hexdigest()] += 1
+                        except Exception as hash_exc:
+                            self.logger(f" ⚠️ Could not hash existing file '{filename_to_save_in_main_path}' for session: {hash_exc}")
+
+                        return 0, 1, filename_to_save_in_main_path, was_original_name_kept_flag, FILE_DOWNLOAD_STATUS_SKIPPED, None
+                    else:
+                        self.logger(f" ⚠️ File '{filename_to_save_in_main_path}' exists but is incomplete (Expected: {expected_size}, Actual: {actual_size}). Re-downloading.")
+
+                except requests.RequestException as e:
+                    self.logger(f" ⚠️ Could not verify size of existing file '{filename_to_save_in_main_path}': {e}. Proceeding with download.")
         retry_delay = 5
         downloaded_size_bytes = 0
         calculated_file_hash = None
@@ -740,8 +774,11 @@ class PostProcessorWorker:
         history_data_for_this_post = None

         parsed_api_url = urlparse(self.api_url_input)
-        referer_url = f"https://{parsed_api_url.netloc}/"
-        headers = {'User-Agent': 'Mozilla/5.0', 'Referer': referer_url, 'Accept': '*/*'}
+        post_data = self.post
+        post_id = post_data.get('id', 'unknown_id')
+
+        post_page_url = f"https://{parsed_api_url.netloc}/{self.service}/user/{self.user_id}/post/{post_id}"
+        headers = {'User-Agent': 'Mozilla/5.0', 'Referer': post_page_url, 'Accept': '*/*'}
         link_pattern = re.compile(r"""<a\s+.*?href=["'](https?://[^"']+)["'][^>]*>(.*?)</a>""", re.IGNORECASE | re.DOTALL)
         post_data = self.post
         post_title = post_data.get('title', '') or 'untitled_post'
@@ -751,6 +788,17 @@ class PostProcessorWorker:

         effective_unwanted_keywords_for_folder_naming = self.unwanted_keywords.copy()
         is_full_creator_download_no_char_filter = not self.target_post_id_from_initial_url and not current_character_filters
+
+        if (self.show_external_links or self.extract_links_only):
+            embed_data = post_data.get('embed')
+            if isinstance(embed_data, dict) and embed_data.get('url'):
+                embed_url = embed_data['url']
+                embed_subject = embed_data.get('subject', embed_url) # Use subject as link text, fallback to URL
+                platform = get_link_platform(embed_url)
+
+                self.logger(f" 🔗 Found embed link: {embed_url}")
+                self._emit_signal('external_link', post_title, embed_subject, embed_url, platform, "")
+
         if is_full_creator_download_no_char_filter and self.creator_download_folder_ignore_words:
             self.logger(f" Applying creator download specific folder ignore words ({len(self.creator_download_folder_ignore_words)} words).")
             effective_unwanted_keywords_for_folder_naming.update(self.creator_download_folder_ignore_words)
@@ -789,8 +837,8 @@ class PostProcessorWorker:

         all_files_from_post_api_for_char_check = []
         api_file_domain_for_char_check = urlparse(self.api_url_input).netloc
-        if not api_file_domain_for_char_check or not any(d in api_file_domain_for_char_check.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
-            api_file_domain_for_char_check = "kemono.su" if "kemono" in self.service.lower() else "coomer.party"
+        if not api_file_domain_for_char_check or not any(d in api_file_domain_for_char_check.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
+            api_file_domain_for_char_check = "kemono.cr" if "kemono" in self.service.lower() else "coomer.st"
         if post_main_file_info and isinstance(post_main_file_info, dict) and post_main_file_info.get('path'):
             original_api_name = post_main_file_info.get('name') or os.path.basename(post_main_file_info['path'].lstrip('/'))
             if original_api_name:
@@ -833,9 +881,9 @@ class PostProcessorWorker:
         try:
             parsed_input_url_for_comments = urlparse(self.api_url_input)
             api_domain_for_comments = parsed_input_url_for_comments.netloc
-            if not any(d in api_domain_for_comments.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
+            if not any(d in api_domain_for_comments.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
                 self.logger(f"⚠️ Unrecognized domain '{api_domain_for_comments}' for comment API. Defaulting based on service.")
-                api_domain_for_comments = "kemono.su" if "kemono" in self.service.lower() else "coomer.party"
+                api_domain_for_comments = "kemono.cr" if "kemono" in self.service.lower() else "coomer.st"
             comments_data = fetch_post_comments(
                 api_domain_for_comments, self.service, self.user_id, post_id,
                 headers, self.logger, self.cancellation_event, self.pause_event,
@@ -887,17 +935,6 @@ class PostProcessorWorker:
                 result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
                 return result_tuple

-        if self.skip_words_list and (self.skip_words_scope == SKIP_SCOPE_POSTS or self.skip_words_scope == SKIP_SCOPE_BOTH):
-            if self._check_pause(f"Skip words (post title) for post {post_id}"):
-                result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
-                return result_tuple
-            post_title_lower = post_title.lower()
-            for skip_word in self.skip_words_list:
-                if skip_word.lower() in post_title_lower:
-                    self.logger(f" -> Skip Post (Keyword in Title '{skip_word}'): '{post_title[:50]}...'. Scope: {self.skip_words_scope}")
-                    result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
-                    return result_tuple
-
         if not self.extract_links_only and self.manga_mode_active and current_character_filters and (self.char_filter_scope == CHAR_SCOPE_TITLE or self.char_filter_scope == CHAR_SCOPE_BOTH) and not post_is_candidate_by_title_char_match:
             self.logger(f" -> Skip Post (Manga Mode with Title/Both Scope - No Title Char Match): Title '{post_title[:50]}' doesn't match filters.")
             self._emit_signal('missed_character_post', post_title, "Manga Mode: No title match for character filter (Title/Both scope)")
@@ -908,6 +945,7 @@ class PostProcessorWorker:
             self.logger(f"⚠️ Corrupt attachment data for post {post_id} (expected list, got {type(post_attachments)}). Skipping attachments.")
             post_attachments = []

+        # CORRECTED LOGIC: Determine folder path BEFORE skip checks
         base_folder_names_for_post_content = []
         determined_post_save_path_for_history = self.override_output_dir if self.override_output_dir else self.download_root
         if not self.extract_links_only and self.use_subfolders:
@@ -1056,6 +1094,28 @@ class PostProcessorWorker:
                         break
                 determined_post_save_path_for_history = os.path.join(base_path_for_post_subfolder, final_post_subfolder_name)

+        if self.skip_words_list and (self.skip_words_scope == SKIP_SCOPE_POSTS or self.skip_words_scope == SKIP_SCOPE_BOTH):
+            if self._check_pause(f"Skip words (post title) for post {post_id}"):
+                result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
+                return result_tuple
+            post_title_lower = post_title.lower()
+            for skip_word in self.skip_words_list:
+                if skip_word.lower() in post_title_lower:
+                    self.logger(f" -> Skip Post (Keyword in Title '{skip_word}'): '{post_title[:50]}...'. Scope: {self.skip_words_scope}")
+                    # Create a history object for the skipped post to record its ID
+                    history_data_for_skipped_post = {
+                        'post_id': post_id,
+                        'service': self.service,
+                        'user_id': self.user_id,
+                        'post_title': post_title,
+                        'top_file_name': "N/A (Post Skipped)",
+                        'num_files': num_potential_files_in_post,
+                        'upload_date_str': post_data.get('published') or post_data.get('added') or "Unknown",
+                        'download_location': determined_post_save_path_for_history
+                    }
+                    result_tuple = (0, num_potential_files_in_post, [], [], [], history_data_for_skipped_post, None)
+                    return result_tuple
+
         if self.filter_mode == 'text_only' and not self.extract_links_only:
             self.logger(f" Mode: Text Only (Scope: {self.text_only_scope})")
             post_title_lower = post_title.lower()
@@ -1163,11 +1223,18 @@ class PostProcessorWorker:
                 if FPDF:
                     self.logger(f" Creating formatted PDF for {'comments' if self.text_only_scope == 'comments' else 'content'}...")
                     pdf = PDF()
+                    if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
+                        # If the application is run as a bundled exe, _MEIPASS is the temp folder
+                        base_path = sys._MEIPASS
+                    else:
+                        # If running as a normal .py script, use the project_root_dir
+                        base_path = self.project_root_dir
+
                     font_path = ""
                     bold_font_path = ""
-                    if self.project_root_dir:
-                        font_path = os.path.join(self.project_root_dir, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
-                        bold_font_path = os.path.join(self.project_root_dir, 'data', 'dejavu-sans', 'DejaVuSans-Bold.ttf')
+                    if base_path:
+                        font_path = os.path.join(base_path, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
+                        bold_font_path = os.path.join(base_path, 'data', 'dejavu-sans', 'DejaVuSans-Bold.ttf')

                     try:
                         if not os.path.exists(font_path): raise RuntimeError(f"Font file not found: {font_path}")
@@ -1300,9 +1367,8 @@ class PostProcessorWorker:

         all_files_from_post_api = []
         api_file_domain = urlparse(self.api_url_input).netloc
-        if not api_file_domain or not any(d in api_file_domain.lower() for d in ['kemono.su', 'kemono.party', 'coomer.su', 'coomer.party']):
-            api_file_domain = "kemono.su" if "kemono" in self.service.lower() else "coomer.party"
-
+        if not api_file_domain or not any(d in api_file_domain.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
+            api_file_domain = "kemono.cr" if "kemono" in self.service.lower() else "coomer.st"
         if post_main_file_info and isinstance(post_main_file_info, dict) and post_main_file_info.get('path'):
             file_path = post_main_file_info['path'].lstrip('/')
             original_api_name = post_main_file_info.get('name') or os.path.basename(file_path)
@@ -1654,10 +1720,12 @@ class PostProcessorWorker:
            if not self.extract_links_only and self.use_post_subfolders and total_downloaded_this_post == 0:
                path_to_check_for_emptiness = determined_post_save_path_for_history
                try:
+                    # Check if the path is a directory and if it's empty
                    if os.path.isdir(path_to_check_for_emptiness) and not os.listdir(path_to_check_for_emptiness):
                        self.logger(f" 🗑️ Removing empty post-specific subfolder: '{path_to_check_for_emptiness}'")
                        os.rmdir(path_to_check_for_emptiness)
                except OSError as e_rmdir:
+                    # Log if removal fails for any reason (e.g., permissions)
                    self.logger(f" ⚠️ Could not remove empty post-specific subfolder '{path_to_check_for_emptiness}': {e_rmdir}")

            result_tuple = (total_downloaded_this_post, total_skipped_this_post,
@@ -1666,6 +1734,15 @@ class PostProcessorWorker:
                             None)

         finally:
+            if not self.extract_links_only and self.use_post_subfolders and total_downloaded_this_post == 0:
+                path_to_check_for_emptiness = determined_post_save_path_for_history
+                try:
+                    if os.path.isdir(path_to_check_for_emptiness) and not os.listdir(path_to_check_for_emptiness):
+                        self.logger(f" 🗑️ Removing empty post-specific subfolder: '{path_to_check_for_emptiness}'")
+                        os.rmdir(path_to_check_for_emptiness)
+                except OSError as e_rmdir:
+                    self.logger(f" ⚠️ Could not remove potentially empty subfolder '{path_to_check_for_emptiness}': {e_rmdir}")
+
             self._emit_signal('worker_finished', result_tuple)

             return result_tuple
|
|||||||
import re
|
import re
|
||||||
import traceback
|
import traceback
|
||||||
import json
|
import json
|
||||||
|
import base64
|
||||||
|
import time
|
||||||
from urllib.parse import urlparse, urlunparse, parse_qs, urlencode
|
from urllib.parse import urlparse, urlunparse, parse_qs, urlencode
|
||||||
|
|
||||||
# --- Third-Party Library Imports ---
|
# --- Third-Party Library Imports ---
|
||||||
|
# Make sure to install these: pip install requests pycryptodome gdown
|
||||||
import requests
|
import requests
|
||||||
|
|
||||||
try:
|
try:
|
||||||
from mega import Mega
|
from Crypto.Cipher import AES
|
||||||
MEGA_AVAILABLE = True
|
PYCRYPTODOME_AVAILABLE = True
|
||||||
except ImportError:
|
except ImportError:
|
||||||
MEGA_AVAILABLE = False
|
PYCRYPTODOME_AVAILABLE = False
|
||||||
|
|
||||||
try:
|
try:
|
||||||
import gdown
|
import gdown
|
||||||
@@ -19,17 +23,15 @@ try:
 except ImportError:
     GDRIVE_AVAILABLE = False

-# --- Helper Functions ---
+# --- Constants ---
+MEGA_API_URL = "https://g.api.mega.co.nz"
+
+# --- Helper Functions (Original and New) ---
+
 def _get_filename_from_headers(headers):
     """
     Extracts a filename from the Content-Disposition header.
-
-    Args:
-        headers (dict): A dictionary of HTTP response headers.
-
-    Returns:
-        str or None: The extracted filename, or None if not found.
+    (This is from your original file and is kept for Dropbox downloads)
     """
     cd = headers.get('content-disposition')
     if not cd:
@@ -37,64 +39,180 @@ def _get_filename_from_headers(headers):
|
|||||||
|
|
||||||
fname_match = re.findall('filename="?([^"]+)"?', cd)
|
fname_match = re.findall('filename="?([^"]+)"?', cd)
|
     if fname_match:
         # Sanitize the filename to prevent directory traversal issues
         # and remove invalid characters for most filesystems.
         sanitized_name = re.sub(r'[<>:"/\\|?*]', '_', fname_match[0].strip())
         return sanitized_name
 
     return None
 
-# --- Main Service Downloader Functions ---
+# --- NEW: Helper functions for Mega decryption ---
+
+def urlb64_to_b64(s):
+    """Converts a URL-safe base64 string to a standard base64 string."""
+    s = s.replace('-', '+').replace('_', '/')
+    s += '=' * (-len(s) % 4)
+    return s
+
+
+def b64_to_bytes(s):
+    """Decodes a URL-safe base64 string to bytes."""
+    return base64.b64decode(urlb64_to_b64(s))
+
+
+def bytes_to_hex(b):
+    """Converts bytes to a hex string."""
+    return b.hex()
+
+
+def hex_to_bytes(h):
+    """Converts a hex string to bytes."""
+    return bytes.fromhex(h)
+
+
+def hrk2hk(hex_raw_key):
+    """Derives the final AES key from the raw key components for Mega."""
+    key_part1 = int(hex_raw_key[0:16], 16)
+    key_part2 = int(hex_raw_key[16:32], 16)
+    key_part3 = int(hex_raw_key[32:48], 16)
+    key_part4 = int(hex_raw_key[48:64], 16)
+
+    final_key_part1 = key_part1 ^ key_part3
+    final_key_part2 = key_part2 ^ key_part4
+
+    return f'{final_key_part1:016x}{final_key_part2:016x}'
+
+
+def decrypt_at(at_b64, key_bytes):
+    """Decrypts the 'at' attribute to get file metadata."""
+    at_bytes = b64_to_bytes(at_b64)
+    iv = b'\0' * 16
+    cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
+    decrypted_at = cipher.decrypt(at_bytes)
+    return decrypted_at.decode('utf-8').strip('\0').replace('MEGA', '')
+
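The key folding in `hrk2hk` can be sanity-checked in isolation. The following is a standalone restatement of the two pure helpers above (not an import from the project): a Mega file key arrives as URL-safe base64, and once decoded to 64 hex characters, the 256-bit value is folded into a 128-bit AES key by XORing its two halves quarter-by-quarter.

```python
# Standalone sketch of the helpers above, for a quick sanity check.
def urlb64_to_b64(s):
    # URL-safe base64 uses '-' and '_' and drops '=' padding.
    s = s.replace('-', '+').replace('_', '/')
    return s + '=' * (-len(s) % 4)

def hrk2hk(hex_raw_key):
    # Fold a 256-bit raw key into a 128-bit AES key via XOR of halves.
    k1 = int(hex_raw_key[0:16], 16)
    k2 = int(hex_raw_key[16:32], 16)
    k3 = int(hex_raw_key[32:48], 16)
    k4 = int(hex_raw_key[48:64], 16)
    return f'{k1 ^ k3:016x}{k2 ^ k4:016x}'

# A raw key whose halves are identical folds to the all-zero AES key,
# which makes the XOR structure easy to verify by hand.
folded = hrk2hk('00112233445566778899aabbccddeeff' * 2)
```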
+
+# --- NEW: Core Logic for Mega Downloads ---
+
+def get_mega_file_info(file_id, file_key, session, logger_func):
+    """Fetches file metadata and the temporary download URL from the Mega API."""
+    try:
+        hex_raw_key = bytes_to_hex(b64_to_bytes(file_key))
+        hex_key = hrk2hk(hex_raw_key)
+        key_bytes = hex_to_bytes(hex_key)
+
+        # Request file attributes
+        payload = [{"a": "g", "p": file_id}]
+        response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
+        response.raise_for_status()
+        res_json = response.json()
+
+        if isinstance(res_json, list) and isinstance(res_json[0], int) and res_json[0] < 0:
+            logger_func(f" [Mega] ❌ API Error: {res_json[0]}. The link may be invalid or removed.")
+            return None
+
+        file_size = res_json[0]['s']
+        at_b64 = res_json[0]['at']
+
+        # Decrypt attributes to get the file name
+        at_dec_json_str = decrypt_at(at_b64, key_bytes)
+        at_dec_json = json.loads(at_dec_json_str)
+        file_name = at_dec_json['n']
+
+        # Request the temporary download URL
+        payload = [{"a": "g", "g": 1, "p": file_id}]
+        response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
+        response.raise_for_status()
+        res_json = response.json()
+        dl_temp_url = res_json[0]['g']
+
+        return {
+            'file_name': file_name,
+            'file_size': file_size,
+            'dl_url': dl_temp_url,
+            'hex_raw_key': hex_raw_key
+        }
+    except (requests.RequestException, json.JSONDecodeError, KeyError, ValueError) as e:
+        logger_func(f" [Mega] ❌ Failed to get file info: {e}")
+        return None
+
+
+def download_and_decrypt_mega_file(info, download_path, logger_func):
+    """Downloads the file and decrypts it chunk by chunk, reporting progress."""
+    file_name = info['file_name']
+    file_size = info['file_size']
+    dl_url = info['dl_url']
+    hex_raw_key = info['hex_raw_key']
+
+    final_path = os.path.join(download_path, file_name)
+
+    if os.path.exists(final_path) and os.path.getsize(final_path) == file_size:
+        logger_func(f" [Mega] ℹ️ File '{file_name}' already exists with the correct size. Skipping.")
+        return
+
+    # Prepare for decryption
+    key = hex_to_bytes(hrk2hk(hex_raw_key))
+    iv_hex = hex_raw_key[32:48] + '0000000000000000'
+    iv_bytes = hex_to_bytes(iv_hex)
+    cipher = AES.new(key, AES.MODE_CTR, initial_value=iv_bytes, nonce=b'')
+
+    try:
+        with requests.get(dl_url, stream=True, timeout=(15, 300)) as r:
+            r.raise_for_status()
+            downloaded_bytes = 0
+            last_log_time = time.time()
+
+            with open(final_path, 'wb') as f:
+                for chunk in r.iter_content(chunk_size=8192):
+                    if not chunk:
+                        continue
+                    decrypted_chunk = cipher.decrypt(chunk)
+                    f.write(decrypted_chunk)
+                    downloaded_bytes += len(chunk)
+
+                    # Log progress every second
+                    current_time = time.time()
+                    if current_time - last_log_time > 1:
+                        progress_percent = (downloaded_bytes / file_size) * 100 if file_size > 0 else 0
+                        logger_func(f" [Mega] Downloading '{file_name}': {downloaded_bytes/1024/1024:.2f}MB / {file_size/1024/1024:.2f}MB ({progress_percent:.1f}%)")
+                        last_log_time = current_time
+
+        logger_func(f" [Mega] ✅ Successfully downloaded '{file_name}' to '{download_path}'")
+    except requests.RequestException as e:
+        logger_func(f" [Mega] ❌ Download failed for '{file_name}': {e}")
+    except IOError as e:
+        logger_func(f" [Mega] ❌ Could not write to file '{final_path}': {e}")
+    except Exception as e:
+        logger_func(f" [Mega] ❌ An unexpected error occurred during download/decryption: {e}")
+
+
+# --- REPLACEMENT Main Service Downloader Function for Mega ---
+
 def download_mega_file(mega_url, download_path, logger_func=print):
     """
-    Downloads a file from a Mega.nz URL.
-    Handles both public links and links that include a decryption key.
+    Downloads a file from a Mega.nz URL using direct requests and decryption.
+    This replaces the old mega.py implementation.
     """
-    if not MEGA_AVAILABLE:
-        logger_func("❌ Mega download failed: 'mega.py' library is not installed.")
+    if not PYCRYPTODOME_AVAILABLE:
+        logger_func("❌ Mega download failed: 'pycryptodome' library is not installed. Please run: pip install pycryptodome")
         return
 
-    logger_func(f" [Mega] Initializing Mega client...")
-    try:
-        mega = Mega()
-        # Anonymous login is sufficient for public links
-        m = mega.login()
-
-        # --- MODIFIED PART: Added error handling for invalid links ---
-        try:
-            file_details = m.find(mega_url)
-            if file_details is None:
-                logger_func(f" [Mega] ❌ Download failed. The link appears to be invalid or has been taken down: {mega_url}")
-                return
-        except (ValueError, json.JSONDecodeError) as e:
-            # This block catches the "Expecting value" error
-            logger_func(f" [Mega] ❌ Download failed. The link is likely invalid or expired. Error: {e}")
-            return
-        except Exception as e:
-            # Catch other potential errors from the mega.py library
-            logger_func(f" [Mega] ❌ An unexpected error occurred trying to access the link: {e}")
-            return
-        # --- END OF MODIFIED PART ---
-
-        filename = file_details[1]['a']['n']
-        logger_func(f" [Mega] File found: '{filename}'. Starting download...")
-
-        # Sanitize filename before saving
-        safe_filename = "".join([c for c in filename if c.isalpha() or c.isdigit() or c in (' ', '.', '_', '-')]).rstrip()
-        final_path = os.path.join(download_path, safe_filename)
-
-        # Check if file already exists
-        if os.path.exists(final_path):
-            logger_func(f" [Mega] ℹ️ File '{safe_filename}' already exists. Skipping download.")
-            return
-
-        # Start the download
-        m.download_url(mega_url, dest_path=download_path, dest_filename=safe_filename)
-        logger_func(f" [Mega] ✅ Successfully downloaded '{safe_filename}' to '{download_path}'")
-
-    except Exception as e:
-        logger_func(f" [Mega] ❌ An unexpected error occurred during the Mega download process: {e}")
+    logger_func(f" [Mega] Initializing download for: {mega_url}")
+
+    # Regex to capture file ID and key from both old and new URL formats
+    match = re.search(r'mega(?:\.co)?\.nz/(?:file/|#!)?([a-zA-Z0-9]+)(?:#|!)([a-zA-Z0-9_.-]+)', mega_url)
+    if not match:
+        logger_func(f" [Mega] ❌ Error: Invalid Mega URL format.")
+        return
+
+    file_id = match.group(1)
+    file_key = match.group(2)
+
+    session = requests.Session()
+    session.headers.update({'User-Agent': 'Kemono-Downloader-PyQt/1.0'})
+
+    file_info = get_mega_file_info(file_id, file_key, session, logger_func)
+    if not file_info:
+        logger_func(f" [Mega] ❌ Failed to get file info. The link may be invalid or expired. Aborting.")
+        return
+
+    logger_func(f" [Mega] File found: '{file_info['file_name']}' (Size: {file_info['file_size'] / 1024 / 1024:.2f} MB)")
+
+    download_and_decrypt_mega_file(file_info, download_path, logger_func)
+
+
+# --- ORIGINAL Functions for Google Drive and Dropbox (Unchanged) ---
 
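The link-parsing step of `download_mega_file` is easy to exercise on its own. This sketch reuses the exact regex from the diff above; it accepts both the legacy `#!id!key` format and the newer `/file/id#key` format.

```python
import re

# Pattern copied verbatim from download_mega_file() in the diff above.
MEGA_LINK_RE = re.compile(
    r'mega(?:\.co)?\.nz/(?:file/|#!)?([a-zA-Z0-9]+)(?:#|!)([a-zA-Z0-9_.-]+)'
)

def parse_mega_link(url):
    """Return (file_id, file_key) for a Mega link, or None if it doesn't match."""
    m = MEGA_LINK_RE.search(url)
    return (m.group(1), m.group(2)) if m else None
```

Note that the file ID class `[a-zA-Z0-9]+` cannot consume the `#`/`!` separator, so the same pattern cleanly splits both URL styles.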
 def download_gdrive_file(url, download_path, logger_func=print):
     """Downloads a file from a Google Drive link."""
@@ -103,12 +221,9 @@ def download_gdrive_file(url, download_path, logger_func=print):
         return
     try:
         logger_func(f" [G-Drive] Starting download for: {url}")
-        # --- MODIFIED PART: Added a message and set quiet=True ---
         logger_func(" [G-Drive] Download in progress... This may take some time. Please wait.")
 
-        # By setting quiet=True, the progress bar will no longer be printed to the terminal.
         output_path = gdown.download(url, output=download_path, quiet=True, fuzzy=True)
-        # --- END OF MODIFIED PART ---
 
         if output_path and os.path.exists(output_path):
             logger_func(f" [G-Drive] ✅ Successfully downloaded to '{output_path}'")
@@ -120,15 +235,9 @@ def download_gdrive_file(url, download_path, logger_func=print):
 def download_dropbox_file(dropbox_link, download_path=".", logger_func=print):
     """
     Downloads a file from a public Dropbox link by modifying the URL for direct download.
-
-    Args:
-        dropbox_link (str): The public Dropbox link to the file.
-        download_path (str): The directory to save the downloaded file.
-        logger_func (callable): Function to use for logging.
     """
     logger_func(f" [Dropbox] Attempting to download: {dropbox_link}")
 
-    # Modify the Dropbox URL to force a direct download instead of showing the preview page.
     parsed_url = urlparse(dropbox_link)
     query_params = parse_qs(parsed_url.query)
     query_params['dl'] = ['1']
@@ -145,13 +254,11 @@ def download_dropbox_file(dropbox_link, download_path=".", logger_func=print):
     with requests.get(direct_download_url, stream=True, allow_redirects=True, timeout=(10, 300)) as r:
         r.raise_for_status()
 
-        # Determine filename from headers or URL
         filename = _get_filename_from_headers(r.headers) or os.path.basename(parsed_url.path) or "dropbox_file"
         full_save_path = os.path.join(download_path, filename)
 
         logger_func(f" [Dropbox] Starting download of '{filename}'...")
 
-        # Write file to disk in chunks
         with open(full_save_path, 'wb') as f:
             for chunk in r.iter_content(chunk_size=8192):
                 f.write(chunk)
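The Dropbox hunk above rewrites the link's `dl` query parameter so Dropbox serves the file bytes instead of its HTML preview page. A minimal standalone sketch of that rewrite (the rebuilt-URL step falls outside the hunk, so `urlunparse`/`urlencode` here are an assumption about how `direct_download_url` is produced):

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def to_direct_dropbox_url(link):
    """Force dl=1 on a public Dropbox link to get a direct file response."""
    parsed = urlparse(link)
    query = parse_qs(parsed.query)
    query['dl'] = ['1']  # overrides dl=0 or adds the parameter if absent
    return urlunparse(parsed._replace(query=urlencode(query, doseq=True)))
```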
@@ -1,4 +1,5 @@
 # --- Standard Library Imports ---
+# --- Standard Library Imports ---
 import os
 import time
 import hashlib
@@ -10,28 +11,49 @@ from concurrent.futures import ThreadPoolExecutor, as_completed
 
 # --- Third-Party Library Imports ---
 import requests
+MULTIPART_DOWNLOADER_AVAILABLE = True
 
 # --- Module Constants ---
 CHUNK_DOWNLOAD_RETRY_DELAY = 2
 MAX_CHUNK_DOWNLOAD_RETRIES = 1
 DOWNLOAD_CHUNK_SIZE_ITER = 1024 * 256  # 256 KB per iteration chunk
 
-# Flag to indicate if this module and its dependencies are available.
-# This was missing and caused the ImportError.
-MULTIPART_DOWNLOADER_AVAILABLE = True
-
 
 def _download_individual_chunk(
-    chunk_url, temp_file_path, start_byte, end_byte, headers,
+    chunk_url, chunk_temp_file_path, start_byte, end_byte, headers,
     part_num, total_parts, progress_data, cancellation_event,
     skip_event, pause_event, global_emit_time_ref, cookies_for_chunk,
     logger_func, emitter=None, api_original_filename=None
 ):
     """
-    Downloads a single segment (chunk) of a larger file. This function is
-    intended to be run in a separate thread by a ThreadPoolExecutor.
+    Downloads a single segment (chunk) of a larger file to its own unique part file.
+    This function is intended to be run in a separate thread by a ThreadPoolExecutor.
 
-    It handles retries, pauses, and cancellations for its specific chunk.
+    It handles retries, pauses, and cancellations for its specific chunk. If a
+    download fails, the partial chunk file is removed, allowing a clean retry later.
+
+    Args:
+        chunk_url (str): The URL to download the file from.
+        chunk_temp_file_path (str): The unique path to save this specific chunk
+            (e.g., 'my_video.mp4.part0').
+        start_byte (int): The starting byte for the Range header.
+        end_byte (int): The ending byte for the Range header.
+        headers (dict): The HTTP headers to use for the request.
+        part_num (int): The index of this chunk (e.g., 0 for the first part).
+        total_parts (int): The total number of chunks for the entire file.
+        progress_data (dict): A thread-safe dictionary for sharing progress.
+        cancellation_event (threading.Event): Event to signal cancellation.
+        skip_event (threading.Event): Event to signal skipping the file.
+        pause_event (threading.Event): Event to signal pausing the download.
+        global_emit_time_ref (list): A mutable list with one element (a timestamp)
+            to rate-limit UI updates.
+        cookies_for_chunk (dict): Cookies to use for the request.
+        logger_func (function): A function to log messages.
+        emitter (queue.Queue or QObject): Emitter for sending progress to the UI.
+        api_original_filename (str): The original filename for UI display.
+
+    Returns:
+        tuple: A tuple containing (bytes_downloaded, success_flag).
     """
     # --- Pre-download checks for control events ---
     if cancellation_event and cancellation_event.is_set():
@@ -49,6 +71,11 @@ def _download_individual_chunk(
             time.sleep(0.2)
         logger_func(f" [Chunk {part_num + 1}/{total_parts}] Download resumed.")
 
+    # Set this chunk's status to 'active' before starting the download.
+    with progress_data['lock']:
+        progress_data['chunks_status'][part_num]['active'] = True
+
+    try:
         # Prepare headers for the specific byte range of this chunk
         chunk_headers = headers.copy()
         if end_byte != -1:
@@ -76,8 +103,9 @@ def _download_individual_chunk(
         response.raise_for_status()
 
         # --- Data Writing Loop ---
-        with open(temp_file_path, 'r+b') as f:
-            f.seek(start_byte)
+        # We open the unique chunk file in write-binary ('wb') mode.
+        # No more seeking is required.
+        with open(chunk_temp_file_path, 'wb') as f:
             for data_segment in response.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE_ITER):
                 if cancellation_event and cancellation_event.is_set():
                     return bytes_this_chunk, False
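The actual `Range` header assignment sits just past the `if end_byte != -1:` guard and is cut off by the hunk boundary, so the header string here is an assumption based on the standard HTTP byte-range syntax. A plausible sketch of how each worker derives its per-chunk headers:

```python
def build_range_header(base_headers, start_byte, end_byte):
    """Mirror the chunk_headers logic above: only request a closed byte
    range when the chunk has a real end offset (end_byte != -1).
    The exact 'bytes=start-end' string is assumed, not shown in the diff."""
    headers = dict(base_headers)  # copy, like headers.copy() in the source
    if end_byte != -1:
        headers['Range'] = f'bytes={start_byte}-{end_byte}'
    return headers
```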
@@ -117,7 +145,7 @@ def _download_individual_chunk(
                     elif hasattr(emitter, 'file_progress_signal'):
                         emitter.file_progress_signal.emit(api_original_filename, status_list_copy)
 
-        # If we reach here, the download for this chunk was successful
+        # If we get here, the download for this chunk is successful
         return bytes_this_chunk, True
 
     except (requests.exceptions.ConnectionError, requests.exceptions.Timeout, http.client.IncompleteRead) as e:
@@ -129,23 +157,49 @@ def _download_individual_chunk(
             logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Unexpected error: {e}\n{traceback.format_exc(limit=1)}")
             return bytes_this_chunk, False
 
+        # If the retry loop finishes without a successful download
         return bytes_this_chunk, False
+    finally:
+        # This block runs whether the download succeeded or failed
+        with progress_data['lock']:
+            progress_data['chunks_status'][part_num]['active'] = False
+            progress_data['chunks_status'][part_num]['speed_bps'] = 0.0
 
 
 def download_file_in_parts(file_url, save_path, total_size, num_parts, headers, api_original_filename,
                            emitter_for_multipart, cookies_for_chunk_session,
                            cancellation_event, skip_event, logger_func, pause_event):
-    logger_func(f"⬇️ Initializing Multi-part Download ({num_parts} parts) for: '{api_original_filename}' (Size: {total_size / (1024*1024):.2f} MB)")
-    temp_file_path = save_path + ".part"
-
-    try:
-        with open(temp_file_path, 'wb') as f_temp:
-            if total_size > 0:
-                f_temp.truncate(total_size)
-    except IOError as e:
-        logger_func(f" ❌ Error creating/truncating temp file '{temp_file_path}': {e}")
-        return False, 0, None, None
+    """
+    Manages a resilient, multipart file download by saving each chunk to a separate file.
+
+    This function orchestrates the download process by:
+    1. Checking for already completed chunk files to resume a previous download.
+    2. Submitting only the missing chunks to a thread pool for parallel download.
+    3. Assembling the final file from the individual chunks upon successful completion.
+    4. Cleaning up temporary chunk files after assembly.
+    5. Leaving completed chunks on disk if the download fails, allowing for a future resume.
+
+    Args:
+        file_url (str): The URL of the file to download.
+        save_path (str): The final desired path for the downloaded file (e.g., 'my_video.mp4').
+        total_size (int): The total size of the file in bytes.
+        num_parts (int): The number of parts to split the download into.
+        headers (dict): HTTP headers for the download requests.
+        api_original_filename (str): The original filename for UI progress display.
+        emitter_for_multipart (queue.Queue or QObject): Emitter for UI signals.
+        cookies_for_chunk_session (dict): Cookies for the download requests.
+        cancellation_event (threading.Event): Event to signal cancellation.
+        skip_event (threading.Event): Event to signal skipping the file.
+        logger_func (function): A function for logging messages.
+        pause_event (threading.Event): Event to signal pausing the download.
+
+    Returns:
+        tuple: A tuple containing (success_flag, total_bytes_downloaded, md5_hash, file_handle).
+            The file_handle will be for the final assembled file if successful, otherwise None.
+    """
+    logger_func(f"⬇️ Initializing Resumable Multi-part Download ({num_parts} parts) for: '{api_original_filename}' (Size: {total_size / (1024*1024):.2f} MB)")
+
+    # Calculate the byte range for each chunk
     chunk_size_calc = total_size // num_parts
     chunks_ranges = []
     for i in range(num_parts):
@@ -153,76 +207,119 @@ def download_file_in_parts(file_url, save_path, total_size, num_parts, headers,
         end = start + chunk_size_calc - 1 if i < num_parts - 1 else total_size - 1
         if start <= end:
             chunks_ranges.append((start, end))
-        elif total_size == 0 and i == 0:
+        elif total_size == 0 and i == 0:  # Handle zero-byte files
             chunks_ranges.append((0, -1))
 
+    # Calculate the expected size of each chunk
     chunk_actual_sizes = []
     for start, end in chunks_ranges:
-        if end == -1 and start == 0:
-            chunk_actual_sizes.append(0)
-        else:
-            chunk_actual_sizes.append(end - start + 1)
+        chunk_actual_sizes.append(end - start + 1 if end != -1 else 0)
 
     if not chunks_ranges and total_size > 0:
-        logger_func(f" ⚠️ No valid chunk ranges for multipart download of '{api_original_filename}'. Aborting multipart.")
-        if os.path.exists(temp_file_path): os.remove(temp_file_path)
+        logger_func(f" ⚠️ No valid chunk ranges for multipart download of '{api_original_filename}'. Aborting.")
         return False, 0, None, None
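A worked example of the chunk math above, assuming `start = i * chunk_size_calc` (the `start` assignment itself sits outside this hunk): a 10 MiB file split into 4 parts, where the last chunk absorbs any division remainder.

```python
# Recompute the byte ranges for a 10 MiB file split into 4 parts,
# following the same logic as download_file_in_parts() above.
total_size, num_parts = 10 * 1024 * 1024, 4
chunk_size_calc = total_size // num_parts  # 2_621_440 bytes per chunk
chunks_ranges = []
for i in range(num_parts):
    start = i * chunk_size_calc  # assumed; not shown inside the hunk
    end = start + chunk_size_calc - 1 if i < num_parts - 1 else total_size - 1
    if start <= end:
        chunks_ranges.append((start, end))

# Expected on-disk size of each .partN file, zero-byte files mapping to 0.
chunk_actual_sizes = [end - start + 1 if end != -1 else 0
                      for start, end in chunks_ranges]
```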
 
+    # --- Resumption Logic: Check for existing complete chunks ---
+    chunks_to_download = []
+    total_bytes_resumed = 0
+    for i, (start, end) in enumerate(chunks_ranges):
+        chunk_part_path = f"{save_path}.part{i}"
+        expected_chunk_size = chunk_actual_sizes[i]
+
+        if os.path.exists(chunk_part_path) and os.path.getsize(chunk_part_path) == expected_chunk_size:
+            logger_func(f" [Chunk {i + 1}/{num_parts}] Resuming with existing complete chunk file.")
+            total_bytes_resumed += expected_chunk_size
+        else:
+            chunks_to_download.append({'index': i, 'start': start, 'end': end})
+
+    # Setup the shared progress data structure
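The resumption check above treats a `.partN` file as reusable only when it both exists and matches the expected size exactly; anything else (missing or truncated) is re-queued. A small self-contained sketch of that rule, with invented file names:

```python
import os
import tempfile

def is_chunk_complete(path, expected_size):
    """A chunk counts as resumable only if its part file exists AND
    has exactly the expected size (same test as in the diff above)."""
    return os.path.exists(path) and os.path.getsize(path) == expected_size

with tempfile.TemporaryDirectory() as d:
    part0 = os.path.join(d, 'video.mp4.part0')
    with open(part0, 'wb') as f:
        f.write(b'\0' * 1024)  # simulate a fully downloaded 1 KiB chunk

    complete = is_chunk_complete(part0, 1024)       # reused as resumed bytes
    truncated = is_chunk_complete(part0, 2048)      # size mismatch -> re-download
    missing = is_chunk_complete(part0 + 'x', 1024)  # absent -> re-download
```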
||||||
progress_data = {
|
progress_data = {
|
||||||
'total_file_size': total_size,
|
'total_file_size': total_size,
|
||||||
'total_downloaded_so_far': 0,
|
'total_downloaded_so_far': total_bytes_resumed,
|
||||||
'chunks_status': [
|
'chunks_status': [],
|
||||||
{'id': i, 'downloaded': 0, 'total': chunk_actual_sizes[i] if i < len(chunk_actual_sizes) else 0, 'active': False, 'speed_bps': 0.0}
|
|
||||||
for i in range(num_parts)
|
|
||||||
],
|
|
||||||
'lock': threading.Lock(),
|
'lock': threading.Lock(),
|
||||||
'last_global_emit_time': [time.time()]
|
'last_global_emit_time': [time.time()]
|
||||||
}
|
}
|
||||||
|
for i in range(num_parts):
|
||||||
|
is_resumed = not any(c['index'] == i for c in chunks_to_download)
|
||||||
|
progress_data['chunks_status'].append({
|
||||||
|
'id': i,
|
||||||
|
'downloaded': chunk_actual_sizes[i] if is_resumed else 0,
|
||||||
|
'total': chunk_actual_sizes[i],
|
||||||
|
'active': False,
|
||||||
|
'speed_bps': 0.0
|
||||||
|
})
|
||||||
|
|
||||||
|
# --- Download Phase ---
|
||||||
chunk_futures = []
|
chunk_futures = []
|
||||||
all_chunks_successful = True
|
all_chunks_successful = True
|
||||||
total_bytes_from_chunks = 0
|
total_bytes_from_threads = 0
|
||||||
|
|
||||||
with ThreadPoolExecutor(max_workers=num_parts, thread_name_prefix=f"MPChunk_{api_original_filename[:10]}_") as chunk_pool:
|
with ThreadPoolExecutor(max_workers=num_parts, thread_name_prefix=f"MPChunk_{api_original_filename[:10]}_") as chunk_pool:
|
||||||
for i, (start, end) in enumerate(chunks_ranges):
|
for chunk_info in chunks_to_download:
|
||||||
if cancellation_event and cancellation_event.is_set(): all_chunks_successful = False; break
|
if cancellation_event and cancellation_event.is_set():
|
||||||
chunk_futures.append(chunk_pool.submit(
|
all_chunks_successful = False
|
||||||
_download_individual_chunk, chunk_url=file_url, temp_file_path=temp_file_path,
|
break
|
||||||
|
|
||||||
|
i, start, end = chunk_info['index'], chunk_info['start'], chunk_info['end']
|
||||||
|
chunk_part_path = f"{save_path}.part{i}"
|
||||||
|
|
||||||
|
future = chunk_pool.submit(
|
||||||
|
_download_individual_chunk,
|
||||||
|
chunk_url=file_url,
|
||||||
|
chunk_temp_file_path=chunk_part_path,
|
||||||
start_byte=start, end_byte=end, headers=headers, part_num=i, total_parts=num_parts,
|
start_byte=start, end_byte=end, headers=headers, part_num=i, total_parts=num_parts,
|
||||||
progress_data=progress_data, cancellation_event=cancellation_event, skip_event=skip_event, global_emit_time_ref=progress_data['last_global_emit_time'],
|
progress_data=progress_data, cancellation_event=cancellation_event,
|
||||||
pause_event=pause_event, cookies_for_chunk=cookies_for_chunk_session, logger_func=logger_func, emitter=emitter_for_multipart,
|
skip_event=skip_event, global_emit_time_ref=progress_data['last_global_emit_time'],
|
||||||
|
pause_event=pause_event, cookies_for_chunk=cookies_for_chunk_session,
|
||||||
|
logger_func=logger_func, emitter=emitter_for_multipart,
|
||||||
api_original_filename=api_original_filename
|
api_original_filename=api_original_filename
|
||||||
))
|
)
|
||||||
|
chunk_futures.append(future)
|
||||||
|
|
||||||
for future in as_completed(chunk_futures):
|
for future in as_completed(chunk_futures):
|
||||||
if cancellation_event and cancellation_event.is_set(): all_chunks_successful = False; break
|
if cancellation_event and cancellation_event.is_set():
|
||||||
bytes_downloaded_this_chunk, success_this_chunk = future.result()
|
|
||||||
total_bytes_from_chunks += bytes_downloaded_this_chunk
|
|
||||||
if not success_this_chunk:
|
|
||||||
all_chunks_successful = False
|
all_chunks_successful = False
|
||||||
|
bytes_downloaded, success = future.result()
|
||||||
|
total_bytes_from_threads += bytes_downloaded
|
||||||
|
if not success:
|
||||||
|
all_chunks_successful = False
|
||||||
|
|
||||||
|
total_bytes_final = total_bytes_resumed + total_bytes_from_threads
|
||||||
|
|
||||||
if cancellation_event and cancellation_event.is_set():
|
if cancellation_event and cancellation_event.is_set():
|
||||||
logger_func(f" Multi-part download for '{api_original_filename}' cancelled by main event.")
|
logger_func(f" Multi-part download for '{api_original_filename}' cancelled by main event.")
|
||||||
all_chunks_successful = False
|
all_chunks_successful = False
|
||||||
if emitter_for_multipart:
|
|
||||||
with progress_data['lock']:
|
|
||||||
status_list_copy = [dict(s) for s in progress_data['chunks_status']]
|
|
||||||
if isinstance(emitter_for_multipart, queue.Queue):
|
|
||||||
emitter_for_multipart.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
|
|
||||||
elif hasattr(emitter_for_multipart, 'file_progress_signal'):
|
|
||||||
emitter_for_multipart.file_progress_signal.emit(api_original_filename, status_list_copy)
|
|
||||||
|
|
||||||
if all_chunks_successful and (total_bytes_from_chunks == total_size or total_size == 0):
|
# --- Assembly and Cleanup Phase ---
|
||||||
logger_func(f" ✅ Multi-part download successful for '{api_original_filename}'. Total bytes: {total_bytes_from_chunks}")
|
if all_chunks_successful and (total_bytes_final == total_size or total_size == 0):
|
||||||
|
logger_func(f" ✅ All {num_parts} chunks complete. Assembling final file...")
|
||||||
md5_hasher = hashlib.md5()
|
md5_hasher = hashlib.md5()
|
||||||
with open(temp_file_path, 'rb') as f_hash:
|
try:
|
||||||
for buf in iter(lambda: f_hash.read(4096*10), b''):
|
with open(save_path, 'wb') as final_file:
|
||||||
-                md5_hasher.update(buf)
-            calculated_hash = md5_hasher.hexdigest()
-            return True, total_bytes_from_chunks, calculated_hash, open(temp_file_path, 'rb')
+            for i in range(num_parts):
+                chunk_part_path = f"{save_path}.part{i}"
+                with open(chunk_part_path, 'rb') as chunk_file:
+                    content = chunk_file.read()
+                    final_file.write(content)
+                    md5_hasher.update(content)
+
+            calculated_hash = md5_hasher.hexdigest()
+            logger_func(f"   ✅ Assembly successful for '{api_original_filename}'. Total bytes: {total_bytes_final}")
+            return True, total_bytes_final, calculated_hash, open(save_path, 'rb')
+        except Exception as e:
+            logger_func(f"   ❌ Critical error during file assembly: {e}. Cleaning up.")
+            return False, total_bytes_final, None, None
+        finally:
+            # Cleanup all individual chunk files after successful assembly
+            for i in range(num_parts):
+                chunk_part_path = f"{save_path}.part{i}"
+                if os.path.exists(chunk_part_path):
+                    try:
+                        os.remove(chunk_part_path)
+                    except OSError as e:
+                        logger_func(f"   ⚠️ Failed to remove temp part file '{chunk_part_path}': {e}")
     else:
-        logger_func(f"   ❌ Multi-part download failed for '{api_original_filename}'. Success: {all_chunks_successful}, Bytes: {total_bytes_from_chunks}/{total_size}. Cleaning up.")
-        if os.path.exists(temp_file_path):
-            try: os.remove(temp_file_path)
-            except OSError as e: logger_func(f"   Failed to remove temp part file '{temp_file_path}': {e}")
-        return False, total_bytes_from_chunks, None, None
+        # If download failed, we do NOT clean up, allowing for resumption later
+        logger_func(f"   ❌ Multi-part download failed for '{api_original_filename}'. Success: {all_chunks_successful}, Bytes: {total_bytes_final}/{total_size}. Partial chunks saved for future resumption.")
+        return False, total_bytes_final, None, None
@@ -960,15 +960,19 @@ class EmptyPopupDialog(QDialog):
             self.parent_app.log_signal.emit(f"ℹ️ Added {num_just_added_posts} selected posts to the download queue. Total in queue: {total_in_queue}.")
 
+            # --- START: MODIFIED LOGIC ---
+            # Removed the blockSignals(True/False) calls to allow the main window's UI to update correctly.
             if self.parent_app.link_input:
-                self.parent_app.link_input.blockSignals(True)
                 self.parent_app.link_input.setText(
                     self.parent_app._tr("popup_posts_selected_text", "Posts - {count} selected").format(count=num_just_added_posts)
                 )
-                self.parent_app.link_input.blockSignals(False)
                 self.parent_app.link_input.setPlaceholderText(
                     self.parent_app._tr("items_in_queue_placeholder", "{count} items in queue from popup.").format(count=total_in_queue)
                 )
+            # --- END: MODIFIED LOGIC ---
+
+            self.selected_creators_for_queue.clear()
             self.accept()
         else:
             QMessageBox.information(self, self._tr("no_selection_title", "No Selection"),
@@ -986,9 +990,6 @@ class EmptyPopupDialog(QDialog):
             self.add_selected_button.setEnabled(True)
             self.setWindowTitle(self._tr("creator_popup_title", "Creator Selection"))
 
-
-
-
     def _get_domain_for_service(self, service_name):
         """Determines the base domain for a given service."""
         service_lower = service_name.lower()
@@ -37,13 +37,13 @@ class FavoriteArtistsDialog(QDialog):
         self._init_ui()
         self._fetch_favorite_artists()
 
     def _get_domain_for_service(self, service_name):
         service_lower = service_name.lower()
         coomer_primary_services = {'onlyfans', 'fansly', 'manyvids', 'candfans'}
         if service_lower in coomer_primary_services:
-            return "coomer.su"
+            return "coomer.st"  # Use the new domain
         else:
-            return "kemono.su"
+            return "kemono.cr"  # Use the new domain
 
     def _tr(self, key, default_text=""):
         """Helper to get translation based on current app language."""
@@ -128,9 +128,29 @@ class FavoriteArtistsDialog(QDialog):
     def _fetch_favorite_artists(self):
 
         if self.cookies_config['use_cookie']:
-            # Check if we can load cookies for at least one of the services.
-            kemono_cookies = prepare_cookies_for_request(True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'], self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.su")
-            coomer_cookies = prepare_cookies_for_request(True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'], self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.su")
+            # --- Kemono Check with Fallback ---
+            kemono_cookies = prepare_cookies_for_request(
+                True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
+                self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.cr"
+            )
+            if not kemono_cookies:
+                self._logger("No cookies for kemono.cr, trying fallback kemono.su...")
+                kemono_cookies = prepare_cookies_for_request(
+                    True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
+                    self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.su"
+                )
+
+            # --- Coomer Check with Fallback ---
+            coomer_cookies = prepare_cookies_for_request(
+                True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
+                self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.st"
+            )
+            if not coomer_cookies:
+                self._logger("No cookies for coomer.st, trying fallback coomer.su...")
+                coomer_cookies = prepare_cookies_for_request(
+                    True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
+                    self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.su"
+                )
+
             if not kemono_cookies and not coomer_cookies:
                 # If cookies are enabled but none could be loaded, show help and stop.
@@ -149,9 +169,12 @@ class FavoriteArtistsDialog(QDialog):
         errors_occurred = []
         any_cookies_loaded_successfully_for_any_source = False
 
+        kemono_cr_fav_url = "https://kemono.cr/api/v1/account/favorites?type=artist"
+        coomer_st_fav_url = "https://coomer.st/api/v1/account/favorites?type=artist"
+
         api_sources = [
-            {"name": "Kemono.su", "url": kemono_fav_url, "domain": "kemono.su"},
-            {"name": "Coomer.su", "url": coomer_fav_url, "domain": "coomer.su"}
+            {"name": "Kemono.cr", "url": kemono_cr_fav_url, "domain": "kemono.cr"},
+            {"name": "Coomer.st", "url": coomer_st_fav_url, "domain": "coomer.st"}
         ]
 
         for source in api_sources:
@@ -159,20 +182,41 @@ class FavoriteArtistsDialog(QDialog):
             self.status_label.setText(self._tr("fav_artists_loading_from_source_status", "⏳ Loading favorites from {source_name}...").format(source_name=source['name']))
             QCoreApplication.processEvents()
 
             cookies_dict_for_source = None
             if self.cookies_config['use_cookie']:
-                cookies_dict_for_source = prepare_cookies_for_request(
-                    True,
-                    self.cookies_config['cookie_text'],
-                    self.cookies_config['selected_cookie_file'],
-                    self.cookies_config['app_base_dir'],
-                    self._logger,
-                    target_domain=source['domain']
-                )
-                if cookies_dict_for_source:
-                    any_cookies_loaded_successfully_for_any_source = True
-                else:
-                    self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this domain. Fetch might fail if cookies are required.")
+                primary_domain = source['domain']
+                fallback_domain = None
+                if primary_domain == "kemono.cr":
+                    fallback_domain = "kemono.su"
+                elif primary_domain == "coomer.st":
+                    fallback_domain = "coomer.su"
+
+                # First, try the primary domain
+                cookies_dict_for_source = prepare_cookies_for_request(
+                    True,
+                    self.cookies_config['cookie_text'],
+                    self.cookies_config['selected_cookie_file'],
+                    self.cookies_config['app_base_dir'],
+                    self._logger,
+                    target_domain=primary_domain
+                )
+
+                # If no cookies found, try the fallback domain
+                if not cookies_dict_for_source and fallback_domain:
+                    self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
+                    cookies_dict_for_source = prepare_cookies_for_request(
+                        True,
+                        self.cookies_config['cookie_text'],
+                        self.cookies_config['selected_cookie_file'],
+                        self.cookies_config['app_base_dir'],
+                        self._logger,
+                        target_domain=fallback_domain
+                    )
+
+                if cookies_dict_for_source:
+                    any_cookies_loaded_successfully_for_any_source = True
+                else:
+                    self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this source (including fallbacks). Fetch might fail.")
             try:
                 headers = {'User-Agent': 'Mozilla/5.0'}
                 response = requests.get(source['url'], headers=headers, cookies=cookies_dict_for_source, timeout=20)
@@ -223,7 +267,7 @@ class FavoriteArtistsDialog(QDialog):
         if self.cookies_config['use_cookie'] and not any_cookies_loaded_successfully_for_any_source:
             self.status_label.setText(self._tr("fav_artists_cookies_required_status", "Error: Cookies enabled but could not be loaded for any source."))
             self._logger("Error: Cookies enabled but no cookies loaded for any source. Showing help dialog.")
-            cookie_help_dialog = CookieHelpDialog(self)
+            cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
             cookie_help_dialog.exec_()
             self.download_button.setEnabled(False)
         if not fetched_any_successfully:
@@ -34,28 +34,30 @@ class FavoritePostsFetcherThread(QThread):
         self.target_domain_preference = target_domain_preference
         self.cancellation_event = threading.Event()
         self.error_key_map = {
-            "Kemono.su": "kemono_su",
-            "Coomer.su": "coomer_su"
+            "kemono.cr": "kemono_su",
+            "coomer.st": "coomer_su"
         }
 
     def _logger(self, message):
         self.parent_logger_func(f"[FavPostsFetcherThread] {message}")
 
     def run(self):
-        kemono_fav_posts_url = "https://kemono.su/api/v1/account/favorites?type=post"
-        coomer_fav_posts_url = "https://coomer.su/api/v1/account/favorites?type=post"
+        kemono_su_fav_posts_url = "https://kemono.su/api/v1/account/favorites?type=post"
+        coomer_su_fav_posts_url = "https://coomer.su/api/v1/account/favorites?type=post"
+        kemono_cr_fav_posts_url = "https://kemono.cr/api/v1/account/favorites?type=post"
+        coomer_st_fav_posts_url = "https://coomer.st/api/v1/account/favorites?type=post"
 
         all_fetched_posts_temp = []
         error_messages_for_summary = []
         fetched_any_successfully = False
         any_cookies_loaded_successfully_for_any_source = False
 
         self.status_update.emit("key_fetching_fav_post_list_init")
         self.progress_bar_update.emit(0, 0)
 
         api_sources = [
-            {"name": "Kemono.su", "url": kemono_fav_posts_url, "domain": "kemono.su"},
-            {"name": "Coomer.su", "url": coomer_fav_posts_url, "domain": "coomer.su"}
+            {"name": "Kemono.cr", "url": kemono_cr_fav_posts_url, "domain": "kemono.cr"},
+            {"name": "Coomer.st", "url": coomer_st_fav_posts_url, "domain": "coomer.st"}
         ]
 
         api_sources_to_try = []
@@ -76,20 +78,41 @@ class FavoritePostsFetcherThread(QThread):
             if self.cancellation_event.is_set():
                 self.finished.emit([], "KEY_FETCH_CANCELLED_DURING")
                 return
             cookies_dict_for_source = None
             if self.cookies_config['use_cookie']:
-                cookies_dict_for_source = prepare_cookies_for_request(
-                    True,
-                    self.cookies_config['cookie_text'],
-                    self.cookies_config['selected_cookie_file'],
-                    self.cookies_config['app_base_dir'],
-                    self._logger,
-                    target_domain=source['domain']
-                )
-                if cookies_dict_for_source:
-                    any_cookies_loaded_successfully_for_any_source = True
-                else:
-                    self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this domain. Fetch might fail if cookies are required.")
+                primary_domain = source['domain']
+                fallback_domain = None
+                if primary_domain == "kemono.cr":
+                    fallback_domain = "kemono.su"
+                elif primary_domain == "coomer.st":
+                    fallback_domain = "coomer.su"
+
+                # First, try the primary domain
+                cookies_dict_for_source = prepare_cookies_for_request(
+                    True,
+                    self.cookies_config['cookie_text'],
+                    self.cookies_config['selected_cookie_file'],
+                    self.cookies_config['app_base_dir'],
+                    self._logger,
+                    target_domain=primary_domain
+                )
+
+                # If no cookies found, try the fallback domain
+                if not cookies_dict_for_source and fallback_domain:
+                    self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
+                    cookies_dict_for_source = prepare_cookies_for_request(
+                        True,
+                        self.cookies_config['cookie_text'],
+                        self.cookies_config['selected_cookie_file'],
+                        self.cookies_config['app_base_dir'],
+                        self._logger,
+                        target_domain=fallback_domain
+                    )
+
+                if cookies_dict_for_source:
+                    any_cookies_loaded_successfully_for_any_source = True
+                else:
+                    self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this domain. Fetch might fail if cookies are required.")
 
             self._logger(f"Attempting to fetch favorite posts from: {source['name']} ({source['url']})")
             source_key_part = self.error_key_map.get(source['name'], source['name'].lower().replace('.', '_'))
@@ -409,14 +432,14 @@ class FavoritePostsDialog(QDialog):
         if status_key.startswith("KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_FOR_DOMAIN_") or status_key == "KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_GENERIC":
             status_label_text_key = "fav_posts_cookies_required_error"
             self._logger(f"Cookie error: {status_key}. Showing help dialog.")
-            cookie_help_dialog = CookieHelpDialog(self)
+            cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
             cookie_help_dialog.exec_()
         elif status_key == "KEY_AUTH_FAILED":
             status_label_text_key = "fav_posts_auth_failed_title"
             self._logger(f"Auth error: {status_key}. Showing help dialog.")
             QMessageBox.warning(self, self._tr("fav_posts_auth_failed_title", "Authorization Failed (Posts)"),
                                 self._tr("fav_posts_auth_failed_message_generic", "...").format(domain_specific_part=specific_domain_msg_part))
-            cookie_help_dialog = CookieHelpDialog(self)
+            cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
             cookie_help_dialog.exec_()
         elif status_key == "KEY_NO_FAVORITES_FOUND_ALL_PLATFORMS":
             status_label_text_key = "fav_posts_no_posts_found_status"
@@ -15,7 +15,8 @@ from ...utils.resolution import get_dark_theme
 from ..main_window import get_app_icon_object
 from ...config.constants import (
     THEME_KEY, LANGUAGE_KEY, DOWNLOAD_LOCATION_KEY,
-    RESOLUTION_KEY, UI_SCALE_KEY, SAVE_CREATOR_JSON_KEY
+    RESOLUTION_KEY, UI_SCALE_KEY, SAVE_CREATOR_JSON_KEY,
+    COOKIE_TEXT_KEY, USE_COOKIE_KEY
 )
 
 
@@ -89,7 +90,9 @@ class FutureSettingsDialog(QDialog):
         # Default Path
         self.default_path_label = QLabel()
         self.save_path_button = QPushButton()
-        self.save_path_button.clicked.connect(self._save_download_path)
+        # --- START: MODIFIED LOGIC ---
+        self.save_path_button.clicked.connect(self._save_cookie_and_path)
+        # --- END: MODIFIED LOGIC ---
        download_window_layout.addWidget(self.default_path_label, 1, 0)
         download_window_layout.addWidget(self.save_path_button, 1, 1)
 
@@ -143,11 +146,13 @@ class FutureSettingsDialog(QDialog):
         self.default_path_label.setText(self._tr("default_path_label", "Default Path:"))
         self.save_creator_json_checkbox.setText(self._tr("save_creator_json_label", "Save Creator.json file"))
 
+        # --- START: MODIFIED LOGIC ---
         # Buttons and Controls
         self._update_theme_toggle_button_text()
-        self.save_path_button.setText(self._tr("settings_save_path_button", "Save Current Download Path"))
-        self.save_path_button.setToolTip(self._tr("settings_save_path_tooltip", "Save the current 'Download Location' for future sessions."))
+        self.save_path_button.setText(self._tr("settings_save_cookie_path_button", "Save Cookie + Download Path"))
+        self.save_path_button.setToolTip(self._tr("settings_save_cookie_path_tooltip", "Save the current 'Download Location' and Cookie settings for future sessions."))
         self.ok_button.setText(self._tr("ok_button", "OK"))
+        # --- END: MODIFIED LOGIC ---
 
         # Populate dropdowns
         self._populate_display_combo_boxes()
@@ -275,22 +280,43 @@ class FutureSettingsDialog(QDialog):
         if msg_box.clickedButton() == restart_button:
             self.parent_app._request_restart_application()
 
-    def _save_download_path(self):
+    def _save_cookie_and_path(self):
+        """Saves the current download path and/or cookie settings from the main window."""
+        path_saved = False
+        cookie_saved = False
+
+        # --- Save Download Path Logic ---
         if hasattr(self.parent_app, 'dir_input') and self.parent_app.dir_input:
             current_path = self.parent_app.dir_input.text().strip()
             if current_path and os.path.isdir(current_path):
                 self.parent_app.settings.setValue(DOWNLOAD_LOCATION_KEY, current_path)
-                self.parent_app.settings.sync()
-                QMessageBox.information(self,
-                    self._tr("settings_save_path_success_title", "Path Saved"),
-                    self._tr("settings_save_path_success_message", "Download location '{path}' saved.").format(path=current_path))
-            elif not current_path:
-                QMessageBox.warning(self,
-                    self._tr("settings_save_path_empty_title", "Empty Path"),
-                    self._tr("settings_save_path_empty_message", "Download location cannot be empty."))
-            else:
-                QMessageBox.warning(self,
-                    self._tr("settings_save_path_invalid_title", "Invalid Path"),
-                    self._tr("settings_save_path_invalid_message", "The path '{path}' is not a valid directory.").format(path=current_path))
-        else:
-            QMessageBox.critical(self, "Error", "Could not access download path input from main application.")
+                path_saved = True
+
+        # --- Save Cookie Logic ---
+        if hasattr(self.parent_app, 'use_cookie_checkbox'):
+            use_cookie = self.parent_app.use_cookie_checkbox.isChecked()
+            cookie_content = self.parent_app.cookie_text_input.text().strip()
+
+            if use_cookie and cookie_content:
+                self.parent_app.settings.setValue(USE_COOKIE_KEY, True)
+                self.parent_app.settings.setValue(COOKIE_TEXT_KEY, cookie_content)
+                cookie_saved = True
+            else:  # Also save the 'off' state
+                self.parent_app.settings.setValue(USE_COOKIE_KEY, False)
+                self.parent_app.settings.setValue(COOKIE_TEXT_KEY, "")
+
+        self.parent_app.settings.sync()
+
+        # --- User Feedback ---
+        if path_saved and cookie_saved:
+            message = self._tr("settings_save_both_success", "Download location and cookie settings saved.")
+        elif path_saved:
+            message = self._tr("settings_save_path_only_success", "Download location saved. No cookie settings were active to save.")
+        elif cookie_saved:
+            message = self._tr("settings_save_cookie_only_success", "Cookie settings saved. Download location was not set.")
+        else:
+            QMessageBox.warning(self, self._tr("settings_save_nothing_title", "Nothing to Save"),
+                self._tr("settings_save_nothing_message", "The download location is not a valid directory and no cookie was active."))
+            return
+
+        QMessageBox.information(self, self._tr("settings_save_success_title", "Settings Saved"), message)
@@ -4,7 +4,7 @@ from PyQt5.QtCore import QUrl, QSize, Qt
 from PyQt5.QtGui import QIcon, QDesktopServices
 from PyQt5.QtWidgets import (
     QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
-    QStackedWidget, QScrollArea, QFrame, QWidget
+    QStackedWidget, QListWidget, QFrame, QWidget, QScrollArea
 )
 from ...i18n.translator import get_translation
 from ..main_window import get_app_icon_object
@@ -46,13 +46,12 @@ class TourStepWidget(QWidget):
         layout.addWidget(scroll_area, 1)
 
 
-class HelpGuideDialog (QDialog ):
-    """A multi-page dialog for displaying the feature guide."""
-    def __init__ (self ,steps_data ,parent_app ,parent =None ):
-        super ().__init__ (parent )
-        self .current_step =0
-        self .steps_data =steps_data
-        self .parent_app =parent_app
+class HelpGuideDialog(QDialog):
+    """A multi-page dialog for displaying the feature guide with a navigation list."""
+    def __init__(self, steps_data, parent_app, parent=None):
+        super().__init__(parent)
+        self.steps_data = steps_data
+        self.parent_app = parent_app
 
         scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
 
@@ -61,7 +60,7 @@ class HelpGuideDialog(QDialog):
         self.setWindowIcon(app_icon)
 
         self.setModal(True)
-        self.resize(int(650 * scale), int(600 * scale))
+        self.resize(int(800 * scale), int(650 * scale))
 
         dialog_font_size = int(11 * scale)
 
@@ -69,6 +68,7 @@ class HelpGuideDialog(QDialog):
         if hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
             current_theme_style = get_dark_theme(scale)
         else:
+            # Basic light theme fallback
             current_theme_style = f"""
                 QDialog {{ background-color: #F0F0F0; border: 1px solid #B0B0B0; }}
                 QLabel {{ color: #1E1E1E; }}
@@ -86,118 +86,107 @@ class HelpGuideDialog(QDialog):
             """
 
         self.setStyleSheet(current_theme_style)
-        self ._init_ui ()
-        if self .parent_app :
-            self .move (self .parent_app .geometry ().center ()-self .rect ().center ())
+        self._init_ui()
+        if self.parent_app:
+            self.move(self.parent_app.geometry().center() - self.rect().center())
 
-    def _tr (self ,key ,default_text =""):
+    def _tr(self, key, default_text=""):
         """Helper to get translation based on current app language."""
-        if callable (get_translation )and self .parent_app :
-            return get_translation (self .parent_app .current_selected_language ,key ,default_text )
+        if callable(get_translation) and self.parent_app:
+            return get_translation(self.parent_app.current_selected_language, key, default_text)
         return default_text
 
-    def _init_ui (self ):
-        main_layout =QVBoxLayout (self )
-        main_layout .setContentsMargins (0 ,0 ,0 ,0 )
-        main_layout .setSpacing (0 )
-
-        self .stacked_widget =QStackedWidget ()
-        main_layout .addWidget (self .stacked_widget ,1 )
-
-        self .tour_steps_widgets =[]
-        scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0
-        for title, content in self.steps_data:
-            step_widget = TourStepWidget(title, content, scale=scale)
-            self.tour_steps_widgets.append(step_widget)
-            self.stacked_widget.addWidget(step_widget)
-
-        self .setWindowTitle (self ._tr ("help_guide_dialog_title","Kemono Downloader - Feature Guide"))
-
-        buttons_layout =QHBoxLayout ()
-        buttons_layout .setContentsMargins (15 ,10 ,15 ,15 )
-        buttons_layout .setSpacing (10 )
-
-        self .back_button =QPushButton (self ._tr ("tour_dialog_back_button","Back"))
-        self .back_button .clicked .connect (self ._previous_step )
-        self .back_button .setEnabled (False )
+    def _init_ui(self):
+        main_layout = QVBoxLayout(self)
+        main_layout.setContentsMargins(15, 15, 15, 15)
+        main_layout.setSpacing(10)
+
+        # Title
+        title_label = QLabel(self._tr("help_guide_dialog_title", "Kemono Downloader - Feature Guide"))
+        scale = getattr(self.parent_app, 'scale_factor', 1.0)
+        title_font_size = int(16 * scale)
+        title_label.setStyleSheet(f"font-size: {title_font_size}pt; font-weight: bold; color: #E0E0E0;")
+        title_label.setAlignment(Qt.AlignCenter)
+        main_layout.addWidget(title_label)
+
+        # Content Layout (Navigation + Stacked Pages)
+        content_layout = QHBoxLayout()
+        main_layout.addLayout(content_layout, 1)
+
+        self.nav_list = QListWidget()
+        self.nav_list.setFixedWidth(int(220 * scale))
+        self.nav_list.setStyleSheet(f"""
+            QListWidget {{
+                background-color: #2E2E2E;
+                border: 1px solid #4A4A4A;
+                border-radius: 4px;
+                font-size: {int(11 * scale)}pt;
+            }}
+            QListWidget::item {{
+                padding: 10px;
+                border-bottom: 1px solid #4A4A4A;
+            }}
+            QListWidget::item:selected {{
+                background-color: #87CEEB;
+                color: #2E2E2E;
+                font-weight: bold;
+            }}
+        """)
+        content_layout.addWidget(self.nav_list)
+
+        self.stacked_widget = QStackedWidget()
+        content_layout.addWidget(self.stacked_widget)
+
+        for title_key, content_key in self.steps_data:
+            title = self._tr(title_key, title_key)
+            content = self._tr(content_key, f"Content for {content_key} not found.")
+
+            self.nav_list.addItem(title)
+            step_widget = TourStepWidget(title, content, scale=scale)
+            self.stacked_widget.addWidget(step_widget)
+
+        self.nav_list.currentRowChanged.connect(self.stacked_widget.setCurrentIndex)
+        if self.nav_list.count() > 0:
+            self.nav_list.setCurrentRow(0)
+
+        # Footer Layout (Social links and Close button)
+        footer_layout = QHBoxLayout()
+        footer_layout.setContentsMargins(0, 10, 0, 0)
 
+        # Social Media Icons
-        if getattr (sys ,'frozen',False )and hasattr (sys ,'_MEIPASS'):
-            assets_base_dir =sys ._MEIPASS
-        else :
-            assets_base_dir =os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
+        if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
+            assets_base_dir = sys._MEIPASS
+        else:
+            assets_base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
 
-        github_icon_path =os .path .join (assets_base_dir ,"assets","github.png")
-        instagram_icon_path =os .path .join (assets_base_dir ,"assets","instagram.png")
-        discord_icon_path =os .path .join (assets_base_dir ,"assets","discord.png")
+        github_icon_path = os.path.join(assets_base_dir, "assets", "github.png")
+        instagram_icon_path = os.path.join(assets_base_dir, "assets", "instagram.png")
+        discord_icon_path = os.path.join(assets_base_dir, "assets", "discord.png")
 
-        self .github_button =QPushButton (QIcon (github_icon_path ),"")
-        self .instagram_button =QPushButton (QIcon (instagram_icon_path ),"")
-        self .Discord_button =QPushButton (QIcon (discord_icon_path ),"")
+        self.github_button = QPushButton(QIcon(github_icon_path), "")
+        self.instagram_button = QPushButton(QIcon(instagram_icon_path), "")
+        self.discord_button = QPushButton(QIcon(discord_icon_path), "")
 
         icon_dim = int(24 * scale)
         icon_size = QSize(icon_dim, icon_dim)
-        self .github_button .setIconSize (icon_size )
-        self .instagram_button .setIconSize (icon_size )
-        self .Discord_button .setIconSize (icon_size )
-
-        self .next_button =QPushButton (self ._tr ("tour_dialog_next_button","Next"))
-        self .next_button .clicked .connect (self ._next_step_action )
-        self .next_button .setDefault (True )
-        self .github_button .clicked .connect (self ._open_github_link )
+
+        for button, tooltip_key, url in [
+            (self.github_button, "help_guide_github_tooltip", "https://github.com/Yuvi9587"),
+            (self.instagram_button, "help_guide_instagram_tooltip", "https://www.instagram.com/uvi.arts/"),
|
(self.discord_button, "help_guide_discord_tooltip", "https://discord.gg/BqP64XTdJN")
|
||||||
self .instagram_button .clicked .connect (self ._open_instagram_link )
|
]:
|
||||||
self .Discord_button .clicked .connect (self ._open_Discord_link )
|
button.setIconSize(icon_size)
|
||||||
self .github_button .setToolTip (self ._tr ("help_guide_github_tooltip","Visit project's GitHub page (Opens in browser)"))
|
button.setToolTip(self._tr(tooltip_key))
|
||||||
self .instagram_button .setToolTip (self ._tr ("help_guide_instagram_tooltip","Visit our Instagram page (Opens in browser)"))
|
button.setFixedSize(icon_size.width() + 8, icon_size.height() + 8)
|
||||||
self .Discord_button .setToolTip (self ._tr ("help_guide_discord_tooltip","Visit our Discord community (Opens in browser)"))
|
button.setStyleSheet("background-color: transparent; border: none;")
|
||||||
|
button.clicked.connect(lambda _, u=url: QDesktopServices.openUrl(QUrl(u)))
|
||||||
|
footer_layout.addWidget(button)
|
||||||
|
|
||||||
|
footer_layout.addStretch(1)
|
||||||
|
|
||||||
social_layout =QHBoxLayout ()
|
self.finish_button = QPushButton(self._tr("tour_dialog_finish_button", "Finish"))
|
||||||
social_layout .setSpacing (10 )
|
self.finish_button.clicked.connect(self.accept)
|
||||||
social_layout .addWidget (self .github_button )
|
footer_layout.addWidget(self.finish_button)
|
||||||
social_layout .addWidget (self .instagram_button )
|
|
||||||
social_layout .addWidget (self .Discord_button )
|
|
||||||
|
|
||||||
while buttons_layout .count ():
|
main_layout.addLayout(footer_layout)
|
||||||
item =buttons_layout .takeAt (0 )
|
|
||||||
if item .widget ():
|
|
||||||
item .widget ().setParent (None )
|
|
||||||
elif item .layout ():
|
|
||||||
pass
|
|
||||||
buttons_layout .addLayout (social_layout )
|
|
||||||
buttons_layout .addStretch (1 )
|
|
||||||
buttons_layout .addWidget (self .back_button )
|
|
||||||
buttons_layout .addWidget (self .next_button )
|
|
||||||
main_layout .addLayout (buttons_layout )
|
|
||||||
self ._update_button_states ()
|
|
||||||
|
|
||||||
def _next_step_action (self ):
|
|
||||||
if self .current_step <len (self .tour_steps_widgets )-1 :
|
|
||||||
self .current_step +=1
|
|
||||||
self .stacked_widget .setCurrentIndex (self .current_step )
|
|
||||||
else :
|
|
||||||
self .accept ()
|
|
||||||
self ._update_button_states ()
|
|
||||||
|
|
||||||
def _previous_step (self ):
|
|
||||||
if self .current_step >0 :
|
|
||||||
self .current_step -=1
|
|
||||||
self .stacked_widget .setCurrentIndex (self .current_step )
|
|
||||||
self ._update_button_states ()
|
|
||||||
|
|
||||||
def _update_button_states (self ):
|
|
||||||
if self .current_step ==len (self .tour_steps_widgets )-1 :
|
|
||||||
self .next_button .setText (self ._tr ("tour_dialog_finish_button","Finish"))
|
|
||||||
else :
|
|
||||||
self .next_button .setText (self ._tr ("tour_dialog_next_button","Next"))
|
|
||||||
self .back_button .setEnabled (self .current_step >0 )
|
|
||||||
|
|
||||||
def _open_github_link (self ):
|
|
||||||
QDesktopServices .openUrl (QUrl ("https://github.com/Yuvi9587"))
|
|
||||||
|
|
||||||
def _open_instagram_link (self ):
|
|
||||||
QDesktopServices .openUrl (QUrl ("https://www.instagram.com/uvi.arts/"))
|
|
||||||
|
|
||||||
def _open_Discord_link (self ):
|
|
||||||
QDesktopServices .openUrl (QUrl ("https://discord.gg/BqP64XTdJN"))
|
|
||||||
@@ -24,7 +24,7 @@ class MoreOptionsDialog(QDialog):
         layout.addWidget(self.description_label)
         self.radio_button_group = QButtonGroup(self)
         self.radio_content = QRadioButton("Description/Content")
-        self.radio_comments = QRadioButton("Comments (Not Working)")
+        self.radio_comments = QRadioButton("Comments")
         self.radio_button_group.addButton(self.radio_content)
         self.radio_button_group.addButton(self.radio_comments)
         layout.addWidget(self.radio_content)
@@ -105,6 +105,7 @@ class DownloaderApp (QWidget ):
         self.active_update_profile = None
         self.new_posts_for_update = []
         self.is_finishing = False
+        self.finish_lock = threading.Lock()
         saved_res = self.settings.value(RESOLUTION_KEY, "Auto")
         if saved_res != "Auto":
@@ -233,6 +234,7 @@ class DownloaderApp (QWidget ):
         self.downloaded_hash_counts = defaultdict(int)
         self.downloaded_hash_counts_lock = threading.Lock()
         self.session_temp_files = []
+        self.single_pdf_mode = False
         self.save_creator_json_enabled_this_session = True
         print(f"ℹ️ Known.txt will be loaded/saved at: {self.config_file}")
@@ -265,7 +267,7 @@ class DownloaderApp (QWidget ):
         self.download_location_label_widget = None
         self.remove_from_filename_label_widget = None
         self.skip_words_label_widget = None
-        self.setWindowTitle("Kemono Downloader v6.2.0")
+        self.setWindowTitle("Kemono Downloader v6.2.1")
         setup_ui(self)
         self._connect_signals()
         self.log_signal.emit("ℹ️ Local API server functionality has been removed.")
@@ -283,6 +285,7 @@ class DownloaderApp (QWidget ):
         self._retranslate_main_ui()
         self._load_persistent_history()
         self._load_saved_download_location()
+        self._load_saved_cookie_settings()
         self._update_button_states_and_connections()
         self._check_for_interrupted_session()
@@ -1429,15 +1432,21 @@ class DownloaderApp (QWidget ):
     def _check_if_all_work_is_done(self):
         """
-        Checks if the fetcher thread is done AND if all submitted tasks have been processed.
-        If so, finalizes the download.
+        Checks if the fetcher thread is done AND if all submitted tasks have been processed OR if a cancellation was requested.
+        If so, finalizes the download. This is the central point for completion logic.
         """
         fetcher_is_done = not self.is_fetcher_thread_running
-        all_workers_are_done = (self.total_posts_to_process > 0 and self.processed_posts_count >= self.total_posts_to_process)
+        all_workers_are_done = (self.processed_posts_count >= self.total_posts_to_process)
+        is_cancelled = self.cancellation_event.is_set()

-        if fetcher_is_done and all_workers_are_done:
-            self.log_signal.emit("🏁 All fetcher and worker tasks complete.")
-            self.finished_signal.emit(self.download_counter, self.skip_counter, self.cancellation_event.is_set(), self.all_kept_original_filenames)
+        if fetcher_is_done and (all_workers_are_done or is_cancelled):
+            if not self.is_finishing:
+                if is_cancelled:
+                    self.log_signal.emit("🏁 Fetcher cancelled. Finalizing...")
+                else:
+                    self.log_signal.emit("🏁 All fetcher and worker tasks complete. Finalizing...")
+                self.finished_signal.emit(self.download_counter, self.skip_counter, is_cancelled, self.all_kept_original_filenames)

     def _sync_queue_with_link_input (self ,current_text ):
         """
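The revised `_check_if_all_work_is_done` gates finalization on the fetcher being finished plus either all workers being done or a cancellation request. A minimal standalone sketch of that predicate (the function and argument names are simplified stand-ins, not the app's real attributes):

```python
def all_work_done(fetcher_running: bool, processed: int, total: int, cancelled: bool) -> bool:
    """Mirror of the completion predicate: the fetcher must have stopped,
    and either every queued post was processed or a cancel was requested."""
    fetcher_is_done = not fetcher_running
    all_workers_are_done = processed >= total
    return fetcher_is_done and (all_workers_are_done or cancelled)

# Normal completion: fetcher stopped and all 10 posts processed.
print(all_work_done(False, 10, 10, False))  # True
# Cancellation short-circuits even with work outstanding.
print(all_work_done(False, 3, 10, True))    # True
# A still-running fetcher blocks finalization regardless.
print(all_work_done(True, 10, 10, False))   # False
```

Dropping the old `total > 0` guard is what lets a cancelled run with zero fetched posts still reach the finalizer.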
@@ -1563,6 +1572,31 @@ class DownloaderApp (QWidget ):
             QMessageBox .critical (self ,"Dialog Error",f"An unexpected error occurred with the folder selection dialog: {e }")

     def handle_main_log(self, message):
+        if isinstance(message, str) and message.startswith("MANGA_FETCH_PROGRESS:"):
+            try:
+                parts = message.split(":")
+                fetched_count = int(parts[1])
+                page_num = int(parts[2])
+                self.progress_label.setText(self._tr("progress_fetching_manga_pages", "Progress: Fetching Page {page} ({count} posts found)...").format(page=page_num, count=fetched_count))
+                QCoreApplication.processEvents()
+            except (ValueError, IndexError):
+                try:
+                    fetched_count = int(message.split(":")[1])
+                    self.progress_label.setText(self._tr("progress_fetching_manga_posts", "Progress: Fetching Manga Posts ({count})...").format(count=fetched_count))
+                    QCoreApplication.processEvents()
+                except (ValueError, IndexError):
+                    pass
+            return
+        elif isinstance(message, str) and message.startswith("MANGA_FETCH_COMPLETE:"):
+            try:
+                total_posts = int(message.split(":")[1])
+                self.total_posts_to_process = total_posts
+                self.processed_posts_count = 0
+                self.update_progress_display(self.total_posts_to_process, self.processed_posts_count)
+            except (ValueError, IndexError):
+                pass
+            return

         if message.startswith("TEMP_FILE_PATH:"):
             filepath = message.split(":", 1)[1]
             if self.single_pdf_setting:
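The new branches treat certain log messages as an in-band protocol: `MANGA_FETCH_PROGRESS:<count>:<page>` (page optional) and `MANGA_FETCH_COMPLETE:<total>`. A sketch of that parsing, separated from the UI; `parse_manga_fetch` is a hypothetical helper, not code from the patch:

```python
def parse_manga_fetch(message: str):
    """Return ('progress', count, page), ('progress', count, None),
    ('complete', total, None), or None for unrelated messages."""
    if message.startswith("MANGA_FETCH_PROGRESS:"):
        parts = message.split(":")
        try:
            return ("progress", int(parts[1]), int(parts[2]))
        except (ValueError, IndexError):
            # Older two-field form without a page number.
            try:
                return ("progress", int(parts[1]), None)
            except (ValueError, IndexError):
                return None
    if message.startswith("MANGA_FETCH_COMPLETE:"):
        try:
            return ("complete", int(message.split(":")[1]), None)
        except (ValueError, IndexError):
            return None
    return None
```

Keeping the parse separate from the `progress_label` update makes the fallback path (count only, no page) easy to exercise without a Qt event loop.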
@@ -2554,8 +2588,27 @@ class DownloaderApp (QWidget ):
         self .manga_rename_toggle_button .setToolTip ("Click to cycle Manga Filename Style (when Manga Mode is active for a creator feed).")

     def _toggle_manga_filename_style (self ):
-        current_style =self .manga_filename_style
-        new_style =""
+        url_text = self.link_input.text().strip() if self.link_input else ""
+        _, _, post_id = extract_post_info(url_text)
+        is_single_post = bool(post_id)
+
+        current_style = self.manga_filename_style
+        new_style = ""
+
+        if is_single_post:
+            # Cycle through a limited set of styles suitable for single posts
+            if current_style == STYLE_POST_TITLE:
+                new_style = STYLE_DATE_POST_TITLE
+            elif current_style == STYLE_DATE_POST_TITLE:
+                new_style = STYLE_ORIGINAL_NAME
+            elif current_style == STYLE_ORIGINAL_NAME:
+                new_style = STYLE_POST_ID
+            elif current_style == STYLE_POST_ID:
+                new_style = STYLE_POST_TITLE
+            else: # Fallback for any other style
+                new_style = STYLE_POST_TITLE
+        else:
+            # Original cycling logic for creator feeds
             if current_style ==STYLE_POST_TITLE :
                 new_style =STYLE_ORIGINAL_NAME
             elif current_style ==STYLE_ORIGINAL_NAME :
@@ -2565,8 +2618,8 @@ class DownloaderApp (QWidget ):
             elif current_style ==STYLE_POST_TITLE_GLOBAL_NUMBERING :
                 new_style =STYLE_DATE_BASED
             elif current_style ==STYLE_DATE_BASED :
-                new_style =STYLE_POST_ID # Change this line
-            elif current_style ==STYLE_POST_ID: # Add this block
+                new_style =STYLE_POST_ID
+            elif current_style ==STYLE_POST_ID:
                 new_style =STYLE_POST_TITLE
             else :
                 self .log_signal .emit (f"⚠️ Unknown current manga filename style: {current_style }. Resetting to default ('{STYLE_POST_TITLE }').")
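The single-post branch above is a fixed ring of styles walked one step per click. The same cycling can be expressed once with modular indexing instead of an `if`/`elif` chain; a sketch using placeholder strings in place of the app's `STYLE_*` constants:

```python
# Placeholder names standing in for the app's STYLE_* constants.
SINGLE_POST_CYCLE = ["post_title", "date_post_title", "original_name", "post_id"]

def next_style(current: str, cycle=SINGLE_POST_CYCLE) -> str:
    """Advance to the next filename style, wrapping at the end and
    falling back to the first entry for unknown styles."""
    try:
        return cycle[(cycle.index(current) + 1) % len(cycle)]
    except ValueError:
        # Unknown style: reset to the default, mirroring the patch's fallback.
        return cycle[0]
```

The creator-feed branch would simply pass a different `cycle` list, keeping one code path for both contexts.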
@@ -2636,16 +2689,32 @@ class DownloaderApp (QWidget ):
         url_text =self .link_input .text ().strip ()if self .link_input else ""
         _ ,_ ,post_id =extract_post_info (url_text )

+        # --- START: MODIFIED LOGIC ---
         is_creator_feed =not post_id if url_text else False
+        is_single_post = bool(post_id)
         is_favorite_mode_on =self .favorite_mode_checkbox .isChecked ()if self .favorite_mode_checkbox else False

+        # If the download queue contains items selected from the popup, treat it as a single-post context for UI purposes.
+        if self.favorite_download_queue and all(item.get('type') == 'single_post_from_popup' for item in self.favorite_download_queue):
+            is_single_post = True
+
+        # Allow Manga Mode checkbox for any valid URL (creator or single post) or if single posts are queued.
+        can_enable_manga_checkbox = (is_creator_feed or is_single_post) and not is_favorite_mode_on

         if self .manga_mode_checkbox :
-            self .manga_mode_checkbox .setEnabled (is_creator_feed and not is_favorite_mode_on )
-            if not is_creator_feed and self .manga_mode_checkbox .isChecked ():
+            self .manga_mode_checkbox .setEnabled (can_enable_manga_checkbox)
+            if not can_enable_manga_checkbox and self .manga_mode_checkbox .isChecked ():
                 self .manga_mode_checkbox .setChecked (False )
         checked =self .manga_mode_checkbox .isChecked ()

-        manga_mode_effectively_on =is_creator_feed and checked
+        manga_mode_effectively_on = can_enable_manga_checkbox and checked

+        # If it's a single post context, prevent sequential styles from being selected as they don't apply.
+        sequential_styles = [STYLE_DATE_BASED, STYLE_POST_TITLE_GLOBAL_NUMBERING]
+        if is_single_post and self.manga_filename_style in sequential_styles:
+            self.manga_filename_style = STYLE_POST_TITLE # Default to a safe, non-sequential style
+            self._update_manga_filename_style_button_text()
+        # --- END: MODIFIED LOGIC ---

         if self .manga_rename_toggle_button :
             self .manga_rename_toggle_button .setVisible (manga_mode_effectively_on and not (is_only_links_mode or is_only_archives_mode or is_only_audio_mode ))
@@ -2755,7 +2824,9 @@ class DownloaderApp (QWidget ):
         if total_posts >0 or processed_posts >0 :
             self .file_progress_label .setText ("")

-    def start_download(self, direct_api_url=None, override_output_dir=None, is_restore=False, is_continuation=False):
+    def start_download(self, direct_api_url=None, override_output_dir=None, is_restore=False, is_continuation=False, item_type_from_queue=None):
+        self.finish_lock = threading.Lock()
+        self.is_finishing = False
         if self.active_update_profile:
             if not self.new_posts_for_update:
                 return self._check_for_updates()
@@ -2881,17 +2952,30 @@ class DownloaderApp (QWidget ):
         self.cancellation_message_logged_this_session = False

         service, user_id, post_id_from_url = extract_post_info(api_url)

+        # --- START: MODIFIED SECTION ---
+        # This check is now smarter. It only triggers the error if the item from the queue
+        # was supposed to be a post ('single_post_from_popup', etc.) but couldn't be parsed.
+        if direct_api_url and not post_id_from_url and item_type_from_queue and 'post' in item_type_from_queue:
+            self.log_signal.emit(f"❌ CRITICAL ERROR: Could not parse post ID from the queued POST URL: {api_url}")
+            self.log_signal.emit("   Skipping this item. This might be due to an unsupported URL format or a temporary issue.")
+            self.download_finished(
+                total_downloaded=0,
+                total_skipped=1,
+                cancelled_by_user=False,
+                kept_original_names_list=[]
+            )
+            return False
+        # --- END: MODIFIED SECTION ---

         if not service or not user_id:
             QMessageBox.critical(self, "Input Error", "Invalid or unsupported URL format.")
             return False

-        # Read the setting at the start of the download
         self.save_creator_json_enabled_this_session = self.settings.value(SAVE_CREATOR_JSON_KEY, True, type=bool)

-        profile_processed_ids = set() # Default to an empty set
+        creator_profile_data = {}

         if self.save_creator_json_enabled_this_session:
-            # --- CREATOR PROFILE LOGIC ---
             creator_name_for_profile = None
             if self.is_processing_favorites_queue and self.current_processing_favorite_item_info:
                 creator_name_for_profile = self.current_processing_favorite_item_info.get('name_for_folder')
@@ -2905,7 +2989,6 @@ class DownloaderApp (QWidget ):

             creator_profile_data = self._setup_creator_profile(creator_name_for_profile, self.session_file_path)

-            # Get all current UI settings and add them to the profile
             current_settings = self._get_current_ui_settings_as_dict(api_url_override=api_url, output_dir_override=effective_output_dir_for_run)
             creator_profile_data['settings'] = current_settings

@@ -2917,10 +3000,17 @@ class DownloaderApp (QWidget ):
             self._save_creator_profile(creator_name_for_profile, creator_profile_data, self.session_file_path)
             self.log_signal.emit(f"✅ Profile for '{creator_name_for_profile}' loaded/created. Settings saved.")

-            profile_processed_ids = set(creator_profile_data.get('processed_post_ids', []))
-        # --- END OF PROFILE LOGIC ---
+        profile_processed_ids = set()

+        if self.active_update_profile:
+            self.log_signal.emit("   Update session active: Loading existing processed post IDs to find new content.")
+            profile_processed_ids = set(creator_profile_data.get('processed_post_ids', []))
+        elif not is_restore:
+            self.log_signal.emit("   Fresh download session: Clearing previous post history for this creator to re-download all.")
+            if 'processed_post_ids' in creator_profile_data:
+                creator_profile_data['processed_post_ids'] = []

-        # The rest of this logic runs regardless, but uses the profile data if it was loaded
         session_processed_ids = set(processed_post_ids_for_restore)
         combined_processed_ids = session_processed_ids.union(profile_processed_ids)
         processed_post_ids_for_this_run = list(combined_processed_ids)
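The new guard only aborts a queued item when it was typed as a post but no post ID could be parsed from its URL; creator-feed items with no post ID pass through untouched. A sketch of that predicate in isolation (argument names follow the patch, the function itself is illustrative):

```python
def should_skip_queued_item(direct_api_url, post_id_from_url, item_type_from_queue) -> bool:
    """Skip only when the item came from the queue, claims to be a post
    (its type contains 'post'), and no post ID was parsed from its URL."""
    return bool(
        direct_api_url
        and not post_id_from_url
        and item_type_from_queue
        and 'post' in item_type_from_queue
    )
```

Because `'post' in item_type_from_queue` is a substring check, it matches types such as `'single_post_from_popup'` without enumerating them.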
@@ -3048,7 +3138,7 @@ class DownloaderApp (QWidget ):
         elif backend_filter_mode == 'audio': current_mode_log_text = "Audio Download"

         current_char_filter_scope = self.get_char_filter_scope()
-        manga_mode = manga_mode_is_checked and not post_id_from_url
+        manga_mode = manga_mode_is_checked

         manga_date_prefix_text = ""
         if manga_mode and (self.manga_filename_style == STYLE_DATE_BASED or self.manga_filename_style == STYLE_ORIGINAL_NAME) and hasattr(self, 'manga_date_prefix_input'):
@@ -3471,6 +3561,7 @@ class DownloaderApp (QWidget ):
         if hasattr (self .download_thread ,'file_progress_signal'):self .download_thread .file_progress_signal .connect (self .update_file_progress_display )
         if hasattr (self .download_thread ,'missed_character_post_signal'):
            self .download_thread .missed_character_post_signal .connect (self .handle_missed_character_post )
+        if hasattr(self.download_thread, 'overall_progress_signal'): self.download_thread.overall_progress_signal.connect(self.update_progress_display)
         if hasattr (self .download_thread ,'retryable_file_failed_signal'):

         if hasattr (self .download_thread ,'file_successfully_downloaded_signal'):
@@ -3855,7 +3946,12 @@ class DownloaderApp (QWidget ):
         if not filepath.lower().endswith('.pdf'):
             filepath += '.pdf'

-        font_path = os.path.join(self.app_base_dir, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
+        if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
+            base_path = sys._MEIPASS
+        else:
+            base_path = self.app_base_dir
+
+        font_path = os.path.join(base_path, 'data', 'dejavu-sans', 'DejaVuSans.ttf')

         self.log_signal.emit("   Sorting collected posts by date (oldest first)...")
         sorted_content = sorted(posts_content_data, key=lambda x: x.get('published', 'Z'))
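The font-path fix follows the standard PyInstaller pattern: when running as a frozen executable, bundled data lives under the unpack directory exposed as `sys._MEIPASS`. A generic sketch of that resolution (the function name is mine; the unfrozen fallback here uses the working directory as a stand-in for the app's base dir):

```python
import os
import sys

def resource_path(*parts: str) -> str:
    """Resolve a bundled resource, preferring PyInstaller's unpack
    directory (sys._MEIPASS) when running as a frozen executable."""
    if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
        base = sys._MEIPASS
    else:
        # Stand-in for the application's base directory in dev runs.
        base = os.getcwd()
    return os.path.join(base, *parts)
```

Usage mirrors the hunk: `resource_path('data', 'dejavu-sans', 'DejaVuSans.ttf')` yields the same relative layout whether frozen or not, which is exactly why the hardcoded `self.app_base_dir` join broke in packaged builds.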
@@ -4160,70 +4256,77 @@ class DownloaderApp (QWidget ):
|
|||||||
self ._update_log_display_mode_button_text ()
|
self ._update_log_display_mode_button_text ()
|
||||||
self ._filter_links_log ()
|
self ._filter_links_log ()
|
||||||
|
|
||||||
def cancel_download_button_action (self ):
|
def cancel_download_button_action(self):
|
||||||
self.is_finishing = True
|
"""
|
||||||
if not self .cancel_btn .isEnabled ()and not self .cancellation_event .is_set ():self .log_signal .emit ("ℹ️ No active download to cancel or already cancelling.");return
|
Signals all active download processes to cancel but DOES NOT reset the UI.
|
||||||
self .log_signal .emit ("⚠️ Requesting cancellation of download process (soft reset)...")
|
The UI reset is now handled by the 'download_finished' method.
|
||||||
self._cleanup_temp_files()
|
"""
|
||||||
self._clear_session_file() # Clear session file on explicit cancel
|
if self.cancellation_event.is_set():
|
||||||
if self .external_link_download_thread and self .external_link_download_thread .isRunning ():
|
self.log_signal.emit("ℹ️ Cancellation is already in progress.")
|
||||||
self .log_signal .emit (" Cancelling active External Link download thread...")
|
return
|
||||||
self .external_link_download_thread .cancel ()
|
|
||||||
|
|
||||||
current_url =self .link_input .text ()
|
self.log_signal.emit("⚠️ Requesting cancellation of download process...")
|
||||||
current_dir =self .dir_input .text ()
|
self.cancellation_event.set()
|
||||||
|
|
||||||
self .cancellation_event .set ()
|
# Update UI to "Cancelling" state
|
||||||
self .is_fetcher_thread_running =False
|
self.pause_btn.setEnabled(False)
|
||||||
if self .download_thread and self .download_thread .isRunning ():self .download_thread .requestInterruption ();self .log_signal .emit (" Signaled single download thread to interrupt.")
|
self.cancel_btn.setEnabled(False)
|
||||||
if self .thread_pool :
|
|
||||||
self .log_signal .emit (" Initiating non-blocking shutdown and cancellation of worker pool tasks...")
|
|
||||||
self .thread_pool .shutdown (wait =False ,cancel_futures =True )
|
|
||||||
self .thread_pool =None
|
|
||||||
self .active_futures =[]
|
|
||||||
|
|
||||||
self .external_link_queue .clear ();self ._is_processing_external_link_queue =False ;self ._current_link_post_title =None
|
if hasattr(self, 'reset_button'):
|
||||||
|
self.reset_button.setEnabled(False)
|
||||||
|
|
||||||
self ._perform_soft_ui_reset (preserve_url =current_url ,preserve_dir =current_dir )
|
self.progress_label.setText(self._tr("status_cancelling", "Cancelling... Please wait."))
|
||||||
|
|
||||||
self .progress_label .setText (f"{self ._tr ('status_cancelled_by_user','Cancelled by user')}. {self ._tr ('ready_for_new_task_text','Ready for new task.')}")
|
if self.download_thread and self.download_thread.isRunning():
|
||||||
self .file_progress_label .setText ("")
|
self.download_thread.requestInterruption()
|
||||||
if self .pause_event :self .pause_event .clear ()
|
self.log_signal.emit(" Signaled single download thread to interrupt.")
|
||||||
self .log_signal .emit ("ℹ️ UI reset. Ready for new operation. Background tasks are being terminated.")
|
|
||||||
self .is_paused =False
|
|
||||||
-        if hasattr (self ,'retryable_failed_files_info')and self .retryable_failed_files_info :
-            self .log_signal .emit (f"   Discarding {len (self .retryable_failed_files_info )} pending retryable file(s) due to cancellation.")
-        self .cancellation_message_logged_this_session =False
-        self .retryable_failed_files_info .clear ()
-        self .favorite_download_queue .clear ()
-        self .permanently_failed_files_for_dialog .clear ()
-        self .is_processing_favorites_queue =False
-        self .favorite_download_scope =FAVORITE_SCOPE_SELECTED_LOCATION
-        self ._update_favorite_scope_button_text ()
-        if hasattr (self ,'link_input'):
-            self .last_link_input_text_for_queue_sync =self .link_input .text ()
-        self .cancellation_message_logged_this_session =False
+        if self.thread_pool:
+            self.log_signal.emit("   Signaling worker pool to cancel futures...")
+        if self.external_link_download_thread and self.external_link_download_thread.isRunning():
+            self.log_signal.emit("   Cancelling active External Link download thread...")
+            self.external_link_download_thread.cancel()

-    def _get_domain_for_service (self ,service_name :str )->str :
+    def _get_domain_for_service(self, service_name: str) -> str:
         """Determines the base domain for a given service."""
-        if not isinstance (service_name ,str ):
-            return "kemono.su"
+        if not isinstance(service_name, str):
+            return "kemono.cr"  # Default fallback
-        service_lower =service_name .lower ()
+        service_lower = service_name.lower()
-        coomer_primary_services ={'onlyfans','fansly','manyvids','candfans','gumroad','patreon','subscribestar','dlsite','discord','fantia','boosty','pixiv','fanbox'}
+        coomer_primary_services = {'onlyfans', 'fansly', 'manyvids', 'candfans', 'gumroad', 'subscribestar', 'dlsite'}
-        if service_lower in coomer_primary_services and service_lower not in ['patreon','discord','fantia','boosty','pixiv','fanbox']:
+        if service_lower in coomer_primary_services:
-            return "coomer.su"
+            return "coomer.st"
-        return "kemono.su"
+        return "kemono.cr"
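The new mapping can be exercised on its own. This standalone sketch (names are illustrative, not the app's actual imports) mirrors the rule the hunk introduces: a short list of Coomer-hosted services resolves to the new `coomer.st` domain, and everything else, including bad input, falls back to `kemono.cr`:

```python
def get_domain_for_service(service_name):
    """Mirror of the updated mapping: Coomer-hosted services resolve to
    coomer.st; everything else (and non-string input) to kemono.cr."""
    if not isinstance(service_name, str):
        return "kemono.cr"  # default fallback
    coomer_primary_services = {'onlyfans', 'fansly', 'manyvids',
                               'candfans', 'gumroad', 'subscribestar', 'dlsite'}
    if service_name.lower() in coomer_primary_services:
        return "coomer.st"
    return "kemono.cr"

print(get_domain_for_service("OnlyFans"))  # coomer.st
print(get_domain_for_service("patreon"))   # kemono.cr
```

Note the behavioral change: services like `patreon` and `fanbox`, previously listed in the set but filtered out again by the `not in [...]` clause, are simply absent from the new set.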
     def download_finished(self, total_downloaded, total_skipped, cancelled_by_user, kept_original_names_list=None):
+        if not self.finish_lock.acquire(blocking=False):
+            return
+        try:
             if self.is_finishing:
                 return
             self.is_finishing = True
+            if cancelled_by_user:
+                self.log_signal.emit("✅ Cancellation complete. Resetting UI.")
+                self._clear_session_file()
+                self.interrupted_session_data = None
+                self.is_restore_pending = False
+                current_url = self.link_input.text()
+                current_dir = self.dir_input.text()
+                self._perform_soft_ui_reset(preserve_url=current_url, preserve_dir=current_dir)
+                self.progress_label.setText(f"{self._tr('status_cancelled_by_user', 'Cancelled by user')}. {self._tr('ready_for_new_task_text', 'Ready for new task.')}")
+                self.file_progress_label.setText("")
+                if self.pause_event: self.pause_event.clear()
+                self.is_paused = False
+                return
             self.log_signal.emit("🏁 Download of current item complete.")
             if self.is_processing_favorites_queue and self.favorite_download_queue:
                 self.log_signal.emit("✅ Item finished. Processing next in queue...")
+                self.is_finishing = False
+                self.finish_lock.release()
                 self._process_next_favorite_download()
                 return
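The non-blocking `finish_lock.acquire(blocking=False)` added here is a re-entrancy guard: if a second finish fires while one is already running, it bails out instead of running the teardown twice. A minimal, non-Qt model of the idea (the class and field names are illustrative; the real method keeps the lock on some paths and releases it explicitly before retry/queue continuations):

```python
import threading

class FinishGuard:
    """Toy model of the double-finish guard in download_finished."""
    def __init__(self):
        self.finish_lock = threading.Lock()
        self.teardown_runs = 0

    def download_finished(self):
        # A second, overlapping call fails to acquire and becomes a no-op.
        if not self.finish_lock.acquire(blocking=False):
            return False
        try:
            self.teardown_runs += 1  # stand-in for the real teardown work
            return True
        finally:
            # Simplification for the sketch: always release so the object
            # can be reused.
            self.finish_lock.release()
```

Combined with the existing `is_finishing` flag, this closes the window where two signals delivered close together could both pass the flag check before either set it.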
@@ -4237,10 +4340,7 @@ class DownloaderApp (QWidget ):
         self.is_restore_pending = False
         self._finalize_download_history()
-        status_message = self._tr("status_cancelled_by_user", "Cancelled by user") if cancelled_by_user else self._tr("status_completed", "Completed")
+        status_message = self._tr("status_completed", "Completed")
-        if cancelled_by_user and self.retryable_failed_files_info:
-            self.log_signal.emit(f"   Download cancelled, discarding {len(self.retryable_failed_files_info)} file(s) that were pending retry.")
-            self.retryable_failed_files_info.clear()
         summary_log = "=" * 40
         summary_log += f"\n🏁 Download {status_message}!\n   Summary: Downloaded Files={total_downloaded}, Skipped Files={total_skipped}\n"
@@ -4249,17 +4349,14 @@ class DownloaderApp (QWidget ):
         self.log_signal.emit("")
         if self.thread_pool:
-            self.log_signal.emit("   Shutting down worker thread pool...")
             self.thread_pool.shutdown(wait=False)
             self.thread_pool = None
-            self.log_signal.emit("   Thread pool shut down.")
-        if self.single_pdf_setting and self.session_temp_files and not cancelled_by_user:
+        if self.single_pdf_setting and self.session_temp_files:
             try:
                 self._trigger_single_pdf_creation()
             finally:
                 self._cleanup_temp_files()
-                self.single_pdf_setting = False
         else:
             self._cleanup_temp_files()
             self.single_pdf_setting = False
@@ -4317,8 +4414,10 @@ class DownloaderApp (QWidget ):
                 "Would you like to attempt to download these failed files again?",
                 QMessageBox.Yes | QMessageBox.No, QMessageBox.Yes)
             if reply == QMessageBox.Yes:
+                self.is_finishing = False  # Allow retry session to start
+                self.finish_lock.release()  # Release lock for the retry session
                 self._start_failed_files_retry_session()
-                return
+                return  # Exit to allow retry session to run
             else:
                 self.log_signal.emit("ℹ️ User chose not to retry failed files.")
                 self.permanently_failed_files_for_dialog.extend(self.retryable_failed_files_info)
@@ -4333,6 +4432,8 @@ class DownloaderApp (QWidget ):
         self._update_button_states_and_connections()
         self.cancellation_message_logged_this_session = False
         self.active_update_profile = None
+        finally:
+            pass

     def _handle_keep_duplicates_toggled(self, checked):
         """Shows the duplicate handling dialog when the checkbox is checked."""
@@ -5162,6 +5263,31 @@ class DownloaderApp (QWidget ):
         if hasattr(self, 'link_input'):
             self.last_link_input_text_for_queue_sync = self.link_input.text()
+        # --- START: MODIFIED LOGIC ---
+        # Manually trigger the UI update now that the queue is populated and the dialog is closed.
+        self.update_ui_for_manga_mode(self.manga_mode_checkbox.isChecked() if self.manga_mode_checkbox else False)
+        # --- END: MODIFIED LOGIC ---
+
+    def _load_saved_cookie_settings(self):
+        """Loads and applies saved cookie settings on startup."""
+        try:
+            use_cookie_saved = self.settings.value(USE_COOKIE_KEY, False, type=bool)
+            cookie_content_saved = self.settings.value(COOKIE_TEXT_KEY, "", type=str)
+            if use_cookie_saved and cookie_content_saved:
+                self.use_cookie_checkbox.setChecked(True)
+                self.cookie_text_input.setText(cookie_content_saved)
+                # Check if the saved content is a file path and update UI accordingly
+                if os.path.exists(cookie_content_saved):
+                    self.selected_cookie_filepath = cookie_content_saved
+                    self.cookie_text_input.setReadOnly(True)
+                self._update_cookie_input_placeholders_and_tooltips()
+                self.log_signal.emit(f"ℹ️ Loaded saved cookie settings.")
+        except Exception as e:
+            self.log_signal.emit(f"⚠️ Could not load saved cookie settings: {e}")

     def _show_favorite_artists_dialog (self ):
         if self ._is_download_active ()or self .is_processing_favorites_queue :
             QMessageBox .warning (self ,"Busy","Another download operation is already in progress.")
@@ -5217,41 +5343,53 @@ class DownloaderApp (QWidget ):
         target_domain_preference_for_fetch =None
-        if cookies_config ['use_cookie']:
-            self .log_signal .emit ("Favorite Posts: 'Use Cookie' is checked. Determining target domain...")
+        if cookies_config['use_cookie']:
+            self.log_signal.emit("Favorite Posts: 'Use Cookie' is checked. Determining target domain...")
-            kemono_cookies =prepare_cookies_for_request (
-                cookies_config ['use_cookie'],
-                cookies_config ['cookie_text'],
-                cookies_config ['selected_cookie_file'],
-                cookies_config ['app_base_dir'],
-                lambda msg :self .log_signal .emit (f"[FavPosts Cookie Check - Kemono] {msg }"),
-                target_domain ="kemono.su"
-            )
-            coomer_cookies =prepare_cookies_for_request (
-                cookies_config ['use_cookie'],
-                cookies_config ['cookie_text'],
-                cookies_config ['selected_cookie_file'],
-                cookies_config ['app_base_dir'],
-                lambda msg :self .log_signal .emit (f"[FavPosts Cookie Check - Coomer] {msg }"),
-                target_domain ="coomer.su"
-            )
+            # --- Kemono Check with Fallback ---
+            kemono_cookies = prepare_cookies_for_request(
+                cookies_config['use_cookie'], cookies_config['cookie_text'], cookies_config['selected_cookie_file'],
+                cookies_config['app_base_dir'], lambda msg: self.log_signal.emit(f"[FavPosts Cookie Check] {msg}"),
+                target_domain="kemono.cr"
+            )
+            if not kemono_cookies:
+                self.log_signal.emit("   ↳ No cookies for kemono.cr, trying fallback kemono.su...")
+                kemono_cookies = prepare_cookies_for_request(
+                    cookies_config['use_cookie'], cookies_config['cookie_text'], cookies_config['selected_cookie_file'],
+                    cookies_config['app_base_dir'], lambda msg: self.log_signal.emit(f"[FavPosts Cookie Check] {msg}"),
+                    target_domain="kemono.su"
+                )
+            # --- Coomer Check with Fallback ---
+            coomer_cookies = prepare_cookies_for_request(
+                cookies_config['use_cookie'], cookies_config['cookie_text'], cookies_config['selected_cookie_file'],
+                cookies_config['app_base_dir'], lambda msg: self.log_signal.emit(f"[FavPosts Cookie Check] {msg}"),
+                target_domain="coomer.st"
+            )
+            if not coomer_cookies:
+                self.log_signal.emit("   ↳ No cookies for coomer.st, trying fallback coomer.su...")
+                coomer_cookies = prepare_cookies_for_request(
+                    cookies_config['use_cookie'], cookies_config['cookie_text'], cookies_config['selected_cookie_file'],
+                    cookies_config['app_base_dir'], lambda msg: self.log_signal.emit(f"[FavPosts Cookie Check] {msg}"),
+                    target_domain="coomer.su"
+                )
-            kemono_ok =bool (kemono_cookies )
-            coomer_ok =bool (coomer_cookies )
+            kemono_ok = bool(kemono_cookies)
+            coomer_ok = bool(coomer_cookies)
-            if kemono_ok and not coomer_ok :
-                target_domain_preference_for_fetch ="kemono.su"
-                self .log_signal .emit ("   ↳ Only Kemono.su cookies loaded. Will fetch favorites from Kemono.su only.")
-            elif coomer_ok and not kemono_ok :
-                target_domain_preference_for_fetch ="coomer.su"
-                self .log_signal .emit ("   ↳ Only Coomer.su cookies loaded. Will fetch favorites from Coomer.su only.")
-            elif kemono_ok and coomer_ok :
-                target_domain_preference_for_fetch =None
-                self .log_signal .emit ("   ↳ Cookies for both Kemono.su and Coomer.su loaded. Will attempt to fetch from both.")
-            else :
-                self .log_signal .emit ("   ↳ No valid cookies loaded for Kemono.su or Coomer.su.")
-                cookie_help_dialog =CookieHelpDialog (self ,self )
-                cookie_help_dialog .exec_ ()
+            if kemono_ok and not coomer_ok:
+                target_domain_preference_for_fetch = "kemono.cr"
+                self.log_signal.emit("   ↳ Only Kemono cookies loaded. Will fetch favorites from Kemono.cr only.")
+            elif coomer_ok and not kemono_ok:
+                target_domain_preference_for_fetch = "coomer.st"
+                self.log_signal.emit("   ↳ Only Coomer cookies loaded. Will fetch favorites from Coomer.st only.")
+            elif kemono_ok and coomer_ok:
+                target_domain_preference_for_fetch = None
+                self.log_signal.emit("   ↳ Cookies for both Kemono and Coomer loaded. Will attempt to fetch from both.")
+            else:
+                self.log_signal.emit("   ↳ No valid cookies loaded for Kemono.cr or Coomer.st.")
+                cookie_help_dialog = CookieHelpDialog(self, self)
+                cookie_help_dialog.exec_()
                 return
         else :
             self .log_signal .emit ("Favorite Posts: 'Use Cookie' is NOT checked. Cookies are required.")
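Both cookie checks in this hunk follow the same pattern: try the site's current domain first, then retry with the legacy domain. Stripped of the app-specific plumbing, the pattern is just (here `loader` stands in for `prepare_cookies_for_request(..., target_domain=...)`):

```python
def load_cookies_with_fallback(loader, primary_domain, fallback_domain):
    """Try the current domain first, then the legacy one (mirrors the
    kemono.cr -> kemono.su and coomer.st -> coomer.su checks above)."""
    cookies = loader(primary_domain)
    if not cookies:
        cookies = loader(fallback_domain)
    return cookies

# e.g. cookies saved under the old kemono.su domain are still found:
found = load_cookies_with_fallback(
    lambda d: {"session": "abc"} if d == "kemono.su" else None,
    "kemono.cr", "kemono.su")
```

This keeps sessions exported before the domain migration working without forcing users to re-export their cookies.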
@@ -5283,7 +5421,7 @@ class DownloaderApp (QWidget ):
         else :
             self .log_signal .emit ("ℹ️ Favorite posts selection cancelled.")

-    def _process_next_favorite_download (self ):
+    def _process_next_favorite_download(self):
         if self.favorite_download_queue and not self.is_processing_favorites_queue:
             manga_mode_is_checked = self.manga_mode_checkbox.isChecked() if self.manga_mode_checkbox else False
@@ -5328,33 +5466,43 @@ class DownloaderApp (QWidget ):
             next_url =self .current_processing_favorite_item_info ['url']
             item_display_name =self .current_processing_favorite_item_info .get ('name','Unknown Item')
-            item_type =self .current_processing_favorite_item_info .get ('type','artist')
-            self .log_signal .emit (f"▶️ Processing next favorite from queue: '{item_display_name }' ({next_url })")
+            # --- START: MODIFIED SECTION ---
+            # Get the type of item from the queue to help start_download make smarter decisions.
+            item_type = self.current_processing_favorite_item_info.get('type', 'artist')
+            self.log_signal.emit(f"▶️ Processing next favorite from queue ({item_type}): '{item_display_name}' ({next_url})")
-            override_dir =None
-            item_scope =self .current_processing_favorite_item_info .get ('scope_from_popup')
-            if item_scope is None :
-                item_scope =self .favorite_download_scope
-            main_download_dir =self .dir_input .text ().strip ()
-            should_create_artist_folder =False
-            if item_type =='creator_popup_selection'and item_scope ==EmptyPopupDialog .SCOPE_CREATORS :
-                should_create_artist_folder =True
-            elif item_type !='creator_popup_selection'and self .favorite_download_scope ==FAVORITE_SCOPE_ARTIST_FOLDERS :
-                should_create_artist_folder =True
-            if should_create_artist_folder and main_download_dir :
-                folder_name_key =self .current_processing_favorite_item_info .get ('name_for_folder','Unknown_Folder')
-                item_specific_folder_name =clean_folder_name (folder_name_key )
-                override_dir =os .path .normpath (os .path .join (main_download_dir ,item_specific_folder_name ))
-                self .log_signal .emit (f"   Scope requires artist folder. Target directory: '{override_dir }'")
+            override_dir = None
+            item_scope = self.current_processing_favorite_item_info.get('scope_from_popup')
+            if item_scope is None:
+                item_scope = self.favorite_download_scope
+            main_download_dir = self.dir_input.text().strip()
+            should_create_artist_folder = False
+            if item_type == 'creator_popup_selection' and item_scope == EmptyPopupDialog.SCOPE_CREATORS:
+                should_create_artist_folder = True
+            elif item_type != 'creator_popup_selection' and self.favorite_download_scope == FAVORITE_SCOPE_ARTIST_FOLDERS:
+                should_create_artist_folder = True
+            if should_create_artist_folder and main_download_dir:
+                folder_name_key = self.current_processing_favorite_item_info.get('name_for_folder', 'Unknown_Folder')
+                item_specific_folder_name = clean_folder_name(folder_name_key)
+                override_dir = os.path.normpath(os.path.join(main_download_dir, item_specific_folder_name))
+                self.log_signal.emit(f"   Scope requires artist folder. Target directory: '{override_dir}'")
-            success_starting_download =self .start_download (direct_api_url =next_url ,override_output_dir =override_dir, is_continuation=True )
+            # Pass the item_type to the start_download function
+            success_starting_download = self.start_download(
+                direct_api_url=next_url,
+                override_output_dir=override_dir,
+                is_continuation=True,
+                item_type_from_queue=item_type
+            )
+            # --- END: MODIFIED SECTION ---
-            if not success_starting_download :
-                self .log_signal .emit (f"⚠️ Failed to initiate download for '{item_display_name }'. Skipping this item in queue.")
-                self .download_finished (total_downloaded =0 ,total_skipped =1 ,cancelled_by_user =True ,kept_original_names_list =[])
+            if not success_starting_download:
+                self.log_signal.emit(f"⚠️ Failed to initiate download for '{item_display_name}'. Skipping and moving to the next item in queue.")
+                # Use a QTimer to avoid deep recursion and correctly move to the next item.
+                QTimer.singleShot(100, self._process_next_favorite_download)
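The failure path changes meaningfully here: instead of routing a failed start through `download_finished(..., cancelled_by_user=True)`, the queue simply advances, and `QTimer.singleShot(100, ...)` defers that step so the call stack stays shallow. A non-Qt analogue of the resulting behavior (names are illustrative; a plain loop plays the role the timer plays in the GUI):

```python
from collections import deque

def drain_queue(items, start_download):
    """A failed start no longer aborts the session; the queue just moves
    on, so one bad URL cannot unwind the remaining favorites."""
    results = []
    queue = deque(items)
    while queue:
        item = queue.popleft()
        # In the GUI this step is scheduled via QTimer.singleShot(100, ...)
        # to avoid deep recursion between the start and finish handlers.
        results.append((item, start_download(item)))
    return results

print(drain_queue(["a", "b", "c"], lambda u: u != "b"))
# [('a', True), ('b', False), ('c', True)]
```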

 class ExternalLinkDownloadThread (QThread ):
     """A QThread to handle downloading multiple external links sequentially."""
@@ -196,10 +196,9 @@ def get_link_platform(url):
     if 'twitter.com' in domain or 'x.com' in domain: return 'twitter/x'
     if 'discord.gg' in domain or 'discord.com/invite' in domain: return 'discord invite'
     if 'pixiv.net' in domain: return 'pixiv'
-    if 'kemono.su' in domain or 'kemono.party' in domain: return 'kemono'
+    if 'kemono.su' in domain or 'kemono.party' in domain or 'kemono.cr' in domain: return 'kemono'
-    if 'coomer.su' in domain or 'coomer.party' in domain: return 'coomer'
+    if 'coomer.su' in domain or 'coomer.party' in domain or 'coomer.st' in domain: return 'coomer'
-    # Fallback to a generic name for other domains
     parts = domain.split('.')
     if len(parts) >= 2:
         return parts[-2]
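A condensed, self-contained version of the updated platform detection (simplified to the hunks shown; the real function also handles Twitter/X, Discord invites, and Pixiv): the new `kemono.cr` / `coomer.st` hosts map to the same platform labels, and unknown hosts fall back to their second-level domain.

```python
from urllib.parse import urlparse

def get_link_platform_sketch(url):
    """Map a link's host to a platform name; unrecognized hosts fall
    back to the second-level domain (e.g. mega.nz -> 'mega')."""
    domain = urlparse(url).netloc.lower()
    if any(d in domain for d in ('kemono.su', 'kemono.party', 'kemono.cr')):
        return 'kemono'
    if any(d in domain for d in ('coomer.su', 'coomer.party', 'coomer.st')):
        return 'coomer'
    parts = domain.split('.')
    return parts[-2] if len(parts) >= 2 else domain

print(get_link_platform_sketch("https://kemono.cr/patreon/user/1"))  # kemono
print(get_link_platform_sketch("https://mega.nz/file/abc"))          # mega
```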