7 Commits

Author     SHA1         Message                Date
Yuvi9587   ec94417569   Update main.py         2025-09-07 05:24:54 -07:00
Yuvi9587   0a902895a8   Update main_window.py  2025-09-07 05:23:44 -07:00
Yuvi9587   7217bfdb39   Commit                 2025-09-07 04:56:08 -07:00
Yuvi9587   24880b5042   Update readme.md       2025-08-28 04:34:37 -07:00
Yuvi9587   510ae5e1d1   Update readme.md       2025-08-27 19:59:11 -07:00
Yuvi9587   65b4759bad   Commit                 2025-08-27 19:51:42 -07:00
Yuvi9587   6e993d88de   Commit                 2025-08-27 19:50:13 -07:00

17 changed files with 1290 additions and 673 deletions


@@ -1,391 +1,159 @@
<div>
<h1>Kemono Downloader - Comprehensive Feature Guide</h1>
<p>This guide provides a detailed overview of all user interface elements, input fields, buttons, popups, and functionalities available in the application.</p>
<hr>
<h2>1. Core Concepts &amp; Supported Sites</h2>
<h3>URL Input (🔗)</h3>
<p>This is the primary input field where you specify the content you want to download.</p>
<p><strong>Supported URL Types:</strong></p>
<ul>
<li><strong>Creator URL</strong>: A link to a creator's main page. Downloads all posts from that creator.</li>
<li><strong>Post URL</strong>: A direct link to a specific post. Downloads only that single post.</li>
<li><strong>Batch Command</strong>: Special keywords to trigger bulk downloading from a text file (see Batch Downloading section).</li>
</ul>
<p><strong>Supported Websites:</strong></p>
<ul>
<li>Kemono (<code>kemono.su</code>, <code>kemono.party</code>, etc.)</li>
<li>Coomer (<code>coomer.su</code>, <code>coomer.party</code>, etc.)</li>
<li>Discord (via Kemono/Coomer API)</li>
<li>Bunkr</li>
<li>Erome</li>
<li>Saint2.su</li>
<li>nhentai</li>
</ul>
<hr>
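<p>The distinction between creator and post URLs can be sketched as a small classifier. This is an illustrative example, not the application's actual code; it assumes the <code>/service/user/ID</code> and <code>/service/user/ID/post/ID</code> path layout that Kemono/Coomer-style sites commonly use.</p>

```python
import re
from urllib.parse import urlparse

# Hypothetical patterns for Kemono/Coomer-style paths.
CREATOR_RE = re.compile(r"^/([^/]+)/user/([^/]+)/?$")
POST_RE = re.compile(r"^/([^/]+)/user/([^/]+)/post/([^/]+)/?$")

def classify_url(url: str) -> dict:
    """Classify a URL as a creator page, a single post, or unknown."""
    path = urlparse(url).path
    m = POST_RE.match(path)
    if m:
        return {"type": "post", "service": m.group(1),
                "user": m.group(2), "post_id": m.group(3)}
    m = CREATOR_RE.match(path)
    if m:
        return {"type": "creator", "service": m.group(1), "user": m.group(2)}
    return {"type": "unknown"}
```

A creator URL would then trigger a full-profile crawl, while a post URL fetches only one post.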
<h2>2. Main Download Controls &amp; Inputs</h2>
<h3>Download Location (📁)</h3>
<p>This input defines the main folder where your files will be saved.</p>
<ul>
<li><strong>Browse Button</strong>: Opens a system dialog to choose a folder.</li>
<li><strong>Directory Creation</strong>: If the folder doesn't exist, the app will ask for confirmation to create it.</li>
</ul>
<h3>Filter by Character(s) &amp; Scope</h3>
<p>Used to download content for specific characters or series and organize them into subfolders.</p>
<ul>
<li><strong>Input Field</strong>: Enter comma-separated names (e.g., <code>Tifa, Aerith</code>). Group aliases using parentheses for folder naming (e.g., <code>(Cloud, Zack)</code>).</li>
<li><strong>Scope Button</strong>: Cycles through where to look for name matches:
<ul>
<li><strong>Filter: Title</strong>: Matches names in the post title.</li>
<li><strong>Filter: Files</strong>: Matches names in the filenames.</li>
<li><strong>Filter: Both</strong>: Checks the title first, then filenames.</li>
<li><strong>Filter: Comments</strong>: Checks filenames first, then post comments.</li>
</ul>
</li>
</ul>
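<p>A filter string like the one above might be parsed into alias groups as follows. This is a hedged sketch under the syntax described here (commas separate names, parentheses group aliases that share one folder); the function names are hypothetical, not the app's internals.</p>

```python
import re

def parse_character_filter(raw: str):
    """Split 'Tifa, Aerith, (Cloud, Zack)' into (folder_name, aliases) groups.

    A parenthesized group shares one folder named after its joined aliases.
    """
    groups = []
    # Grab '(...)' groups first, then plain comma-separated names.
    for match in re.finditer(r"\(([^)]+)\)|([^,()]+)", raw):
        if match.group(1):  # parenthesized alias group
            aliases = [a.strip() for a in match.group(1).split(",") if a.strip()]
            if aliases:
                groups.append((" ".join(aliases), aliases))
        else:
            name = match.group(2).strip()
            if name:
                groups.append((name, [name]))
    return groups

def matches_title(title: str, groups) -> bool:
    """Case-insensitive 'Filter: Title' check against any alias."""
    low = title.lower()
    return any(alias.lower() in low for _, aliases in groups for alias in aliases)
```

The same alias groups can drive both the match decision and the subfolder name.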
<h3>Skip with Words &amp; Scope</h3>
<p>Prevents downloading content based on keywords or file size.</p>
<ul>
<li><strong>Input Field</strong>: Enter comma-separated keywords (e.g., <code>WIP, sketch, preview</code>).</li>
<li><strong>Skip by Size</strong>: Enter a number in square brackets to skip any file <strong>smaller than</strong> that size in MB. For example, <code>WIP, [200]</code> skips files with "WIP" in the name AND any file smaller than 200 MB.</li>
<li><strong>Scope Button</strong>: Cycles through where to apply keyword filters:
<ul>
<li><strong>Scope: Posts</strong>: Skips the entire post if the title matches.</li>
<li><strong>Scope: Files</strong>: Skips individual files if the filename matches.</li>
<li><strong>Scope: Both</strong>: Checks the post title first, then individual files.</li>
</ul>
</li>
</ul>
<h3>Remove Words from Name (✂️)</h3>
<p>Enter comma-separated words to remove from final filenames (e.g., <code>patreon, [HD]</code>). This helps clean up file naming.</p>
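<p>The "Skip with Words" syntax above, including the bracketed size threshold, could be parsed along these lines. An illustrative sketch only; the function names and exact semantics are assumptions based on the description (keywords are case-insensitive, <code>[N]</code> sets a minimum size in MB).</p>

```python
import re

def parse_skip_rules(raw: str):
    """Parse a string like 'WIP, sketch, [200]' into keywords + size floor (MB)."""
    keywords, min_mb = [], None
    for part in (p.strip() for p in raw.split(",")):
        m = re.fullmatch(r"\[(\d+)\]", part)
        if m:
            min_mb = int(m.group(1))  # bracketed number = size threshold
        elif part:
            keywords.append(part.lower())
    return keywords, min_mb

def should_skip(filename: str, size_bytes: int, keywords, min_mb) -> bool:
    """'Scope: Files' decision: keyword in name, or file below the size floor."""
    if any(k in filename.lower() for k in keywords):
        return True
    return min_mb is not None and size_bytes < min_mb * 1024 * 1024
```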
<hr>
<h2>3. Primary Download Modes (Filter File Section)</h2>
<p>This section uses radio buttons to set the main download mode. Only one can be active at a time.</p>
<ul>
<li><strong>All</strong>: Default mode. Downloads every file and attachment.</li>
<li><strong>Images/GIFs</strong>: Downloads only common image formats.</li>
<li><strong>Videos</strong>: Downloads only common video formats.</li>
<li><strong>Only Archives</strong>: Downloads only <code>.zip</code>, <code>.rar</code>, etc.</li>
<li><strong>Only Audio</strong>: Downloads only common audio formats.</li>
<li><strong>Only Links</strong>: Extracts external hyperlinks (e.g., Mega, Google Drive) from post descriptions instead of downloading files. <strong>This mode unlocks special features</strong> (see section 6).</li>
<li><strong>More</strong>: Opens a dialog to download text-based content.
<ul>
<li><strong>Scope</strong>: Choose to extract text from the post description or comments.</li>
<li><strong>Export Format</strong>: Save as PDF, DOCX, or TXT.</li>
<li><strong>Single PDF</strong>: Compile all text from the session into one consolidated PDF file.</li>
</ul>
</li>
</ul>
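<p>A mode like "Videos" or "Only Archives" usually reduces to an extension check. The sketch below illustrates one way to do that; the extension sets are assumptions (the real app may recognize more formats), and the mode labels follow the radio buttons listed above.</p>

```python
import os

# Hypothetical extension sets per category.
CATEGORIES = {
    "image": {".jpg", ".jpeg", ".png", ".gif", ".webp"},
    "video": {".mp4", ".webm", ".mov", ".mkv"},
    "archive": {".zip", ".rar", ".7z"},
    "audio": {".mp3", ".wav", ".flac", ".ogg"},
}

def classify_file(filename: str) -> str:
    """Map a filename to a coarse category via its extension."""
    ext = os.path.splitext(filename)[1].lower()
    for category, exts in CATEGORIES.items():
        if ext in exts:
            return category
    return "other"

def wanted(filename: str, mode: str) -> bool:
    """Apply a radio-button mode ('All', 'Images/GIFs', 'Videos', ...)."""
    mode_map = {"Images/GIFs": "image", "Videos": "video",
                "Only Archives": "archive", "Only Audio": "audio"}
    return mode == "All" or classify_file(filename) == mode_map.get(mode)
```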
<hr>
<h2>4. Advanced Features &amp; Toggles (Checkboxes)</h2>
<h3>Folder Organization</h3>
<ul>
<li><strong>Separate folders by Known.txt</strong>: Automatically organizes downloads into subfolders based on name matches from your <code>Known.txt</code> list or the "Filter by Character(s)" input.</li>
<li><strong>Subfolder per post</strong>: Creates a unique folder for each post, named after the post's title. This prevents files from different posts from mixing.</li>
<li><strong>Date prefix</strong>: (Only available with "Subfolder per post") Prepends the post date to the folder name (e.g., <code>2025-08-03 My Post Title</code>) for chronological sorting.</li>
</ul>
<h3>Special Modes</h3>
<ul>
<li><strong>Favorite Mode</strong>: Switches the UI to download from your personal favorites list instead of using the URL input.</li>
<li><strong>Manga/Comic mode</strong>: Sorts a creator's posts from oldest to newest before downloading, ensuring correct page order. A scope button appears to control the filename style (e.g., using post title, date, or a global number).</li>
</ul>
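<p>The oldest-to-newest ordering and the date-prefixed folder names described above can be sketched together. The <code>title</code>/<code>published</code> field names are assumptions about the post data, not the app's actual schema.</p>

```python
import re
from datetime import datetime

def manga_order(posts):
    """Sort posts oldest-to-newest by their 'published' timestamp."""
    return sorted(posts, key=lambda p: p["published"])

def post_folder_name(post, date_prefix=True):
    """Build a per-post folder name, e.g. '2025-08-03 My Post Title'."""
    # Strip characters that are invalid in Windows/Unix folder names.
    title = re.sub(r'[<>:"/\\|?*]', "", post["title"]).strip()
    if date_prefix:
        day = datetime.fromisoformat(post["published"]).strftime("%Y-%m-%d")
        return f"{day} {title}"
    return title
```

Because the date prefix sorts lexicographically, a file manager lists the folders in reading order.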
<h3>File Handling</h3>
<ul>
<li><strong>Skip Archives</strong>: Ignores <code>.zip</code> and <code>.rar</code> files during downloads.</li>
<li><strong>Download Thumbnail Only</strong>: Saves only the small preview images instead of full-resolution files.</li>
<li><strong>Scan Content for Images</strong>: Parses post HTML to find embedded images that may not be listed in the API data.</li>
<li><strong>Compress to WebP</strong>: Converts large images (over 1.5 MB) to the space-saving WebP format.</li>
<li><strong>Keep Duplicates</strong>: Opens a dialog to control how duplicate files are handled (skip by default, keep all, or keep a specific number of copies).</li>
</ul>
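<p>The duplicate-handling options above (skip by default, keep all, or keep a limited number of copies) suggest content hashing. Below is a hedged sketch of that idea; the class and its policy knobs are illustrative, not the dialog's actual implementation.</p>

```python
import hashlib

class DuplicateTracker:
    """Hash-based duplicate policy: keep up to `limit` copies of identical
    content. limit=1 behaves like 'skip duplicates'; limit=None keeps all."""

    def __init__(self, limit=1):
        self.limit = limit
        self.seen = {}  # md5 hex digest -> copies kept so far

    def should_keep(self, data: bytes) -> bool:
        if self.limit is None:
            return True  # 'keep everything'
        digest = hashlib.md5(data).hexdigest()
        count = self.seen.get(digest, 0)
        if count >= self.limit:
            return False
        self.seen[digest] = count + 1
        return True
```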
<h3>General Functionality</h3>
<ul>
<li><strong>Use cookie</strong>: Enables login-based access. You can paste a cookie string or browse for a <code>cookies.txt</code> file.</li>
<li><strong>Use Multithreading</strong>: Enables parallel processing of posts for faster downloads. You can set the number of concurrent worker threads.</li>
<li><strong>Show external links in log</strong>: Opens a secondary log panel that displays external links found in post descriptions.</li>
</ul>
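<p>A browsed <code>cookies.txt</code> file is conventionally in the Netscape format (seven tab-separated fields, <code>#</code> comment lines). This minimal parser is a sketch of how such a file could be read; a robust loader could instead use <code>http.cookiejar.MozillaCookieJar</code> from the standard library.</p>

```python
def load_cookies_txt(text: str) -> dict:
    """Parse Netscape-format cookies.txt content into a name -> value dict."""
    cookies = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip comments and blank lines
        fields = line.split("\t")
        if len(fields) == 7:  # domain, flag, path, secure, expiry, name, value
            cookies[fields[5]] = fields[6]
    return cookies
```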
<hr>
<h2>5. Specialized Downloaders &amp; Batch Mode</h2>
<h3>Discord Features</h3>
<ul>
<li>When a Discord URL is entered, a <strong>Scope</strong> button appears.
<ul>
<li><strong>Scope: Files</strong>: Downloads all files from the channel/server.</li>
<li><strong>Scope: Messages</strong>: Saves the entire message history of the channel/server as a formatted PDF.</li>
</ul>
</li>
<li>A <strong>"Save as PDF"</strong> button also appears as a shortcut for the message saving feature.</li>
</ul>
<h3>Batch Downloading (<code>nhentai</code> &amp; <code>saint2.su</code>)</h3>
<p>This feature allows you to download hundreds of galleries or videos from a simple text file.</p>
<ol>
<li>In the <code>appdata</code> folder, create <code>nhentai.txt</code> or <code>saint2.su.txt</code>.</li>
<li>Add one full URL per line to the corresponding file.</li>
<li>In the app's URL input, type either <code>nhentai.net</code> or <code>saint2.su</code> and click "Start Download".</li>
<li>The app will read the file and process every URL in the queue.</li>
</ol>
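<p>The batch-file step above amounts to reading one URL per line into a queue. A minimal sketch, assuming the <code>appdata/&lt;site&gt;.txt</code> layout described in the list; the helper name is hypothetical.</p>

```python
from pathlib import Path

def load_batch_urls(appdata_dir: str, site: str):
    """Read a batch list such as appdata/nhentai.txt into a URL queue.

    One URL per line; blank lines are ignored; a missing file yields an
    empty queue.
    """
    batch_file = Path(appdata_dir) / f"{site}.txt"
    if not batch_file.exists():
        return []
    lines = batch_file.read_text(encoding="utf-8").splitlines()
    return [line.strip() for line in lines if line.strip()]
```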
<hr>
<h2>6. "Only Links" Mode: Extraction &amp; Direct Download</h2>
<p>When you select the <strong>"Only Links"</strong> radio button, the application's behavior changes significantly.</p>
<ul>
<li><strong>Link Extraction</strong>: Instead of downloading files, the main log panel will fill with all external links found (Mega, Google Drive, Dropbox, etc.).</li>
<li><strong>Export Links</strong>: An "Export Links" button appears, allowing you to save the full list of extracted URLs to a <code>.txt</code> file.</li>
<li><strong>Direct Cloud Download</strong>: A <strong>"Download"</strong> button appears next to the export button.
<ul>
<li>Clicking this opens a new dialog listing all supported cloud links (Mega, G-Drive, Dropbox).</li>
<li>You can select which files you want to download from this list.</li>
<li>The application will then download the selected files directly from the cloud service to your chosen download location.</li>
</ul>
</li>
</ul>
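<p>Link extraction from a post description can be approximated with a URL regex plus a host allow-list for the "supported cloud links" distinction. A regex-based sketch under those assumptions; the app may well use a real HTML parser instead.</p>

```python
import re

# Hosts treated as directly downloadable, per the dialog described above.
CLOUD_HOSTS = ("mega.nz", "drive.google.com", "dropbox.com")

def extract_links(description_html: str):
    """Return (url, is_supported_cloud_host) pairs found in the HTML."""
    urls = re.findall(r'https?://[^\s"\'<>]+', description_html)
    return [(u, any(h in u for h in CLOUD_HOSTS)) for u in urls]
```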
<hr>
<h2>7. Session &amp; Process Management</h2>
<h3>Main Action Buttons</h3>
<ul>
<li><strong>Start Download</strong>: Begins the download process. This button's text changes contextually (e.g., "Extract Links", "Check for Updates").</li>
<li><strong>Pause / Resume</strong>: Pauses or resumes the ongoing download. When paused, you can safely change some settings.</li>
<li><strong>Cancel &amp; Reset UI</strong>: Stops the current download and performs a soft reset of the UI, preserving your URL and download location.</li>
</ul>
<h3>Restore Interrupted Download</h3>
<p>If the application is closed unexpectedly during a download, it will save its progress.</p>
<ul>
<li>On the next launch, the UI will be pre-filled with the settings from the interrupted session.</li>
<li>The <strong>Pause</strong> button will change to <strong>"🔄 Restore Download"</strong>. Clicking it will resume the download exactly where it left off, skipping already processed posts.</li>
<li>The <strong>Cancel</strong> button will change to <strong>"🗑️ Discard Session"</strong>, allowing you to clear the saved state and start fresh.</li>
</ul>
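<p>Crash-safe restore of this kind typically persists the run settings plus the set of already-processed post IDs. The shape of the saved state below is a hypothetical sketch; the app's real <code>session.json</code> schema is not documented here.</p>

```python
import json
from pathlib import Path

def save_session(path, url, download_dir, processed_ids):
    """Persist enough state to resume: inputs + already-processed post IDs."""
    Path(path).write_text(json.dumps({
        "url": url,
        "download_dir": download_dir,
        "processed_ids": sorted(processed_ids),
    }), encoding="utf-8")

def restore_session(path):
    """Load saved state, or None if there is nothing to restore."""
    p = Path(path)
    if not p.exists():
        return None
    data = json.loads(p.read_text(encoding="utf-8"))
    data["processed_ids"] = set(data["processed_ids"])  # skip these on resume
    return data
```

On resume, any post whose ID is in <code>processed_ids</code> is skipped, which is what makes restarting cheap.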
<h3>Other UI Controls</h3>
<ul>
<li><strong>Error Button</strong>: Shows a count of failed files. Clicking it opens a dialog where you can view, export, or retry the failed downloads.</li>
<li><strong>History Button</strong>: Shows a log of recently downloaded files and processed posts.</li>
<li><strong>Settings Button</strong>: Opens the settings dialog where you can change the theme, language, and <strong>check for application updates</strong>.</li>
<li><strong>Support Button</strong>: Opens a dialog with links to the project's source code and developer support pages.</li>
</ul>
<h2><strong>"Known Area" and its Controls</strong></h2>
<p>This section, located on the right side of the main window, manages your personal name database (<code>Known.txt</code>), which the app uses to organize downloads into subfolders.</p>
<ul>
<li>
<strong>Open Known.txt:</strong> Opens the <code>Known.txt</code> file in your system's default text editor for manual editing, such as bulk changes or cleanup.
</li>
<li>
<strong>Search character input:</strong> A live search filter that hides any list items not matching your input text. Useful for quickly locating specific names in large lists.
</li>
<li>
<strong>Known Series/Characters Area:</strong> Displays all names currently stored in your <code>Known.txt</code>. These names are used when "Separate folders by Known.txt" is enabled.
</li>
<li>
<strong>Input at bottom &amp; Add button:</strong> Type a new character or series name into the input field, then click "Add". The app checks for duplicates, updates the list, and saves to <code>Known.txt</code>.
</li>
<li>
<strong>Add to Filter:</strong> Opens a dialog showing all entries from <code>Known.txt</code> with checkboxes. You can select one or more to auto-fill the "Filter by Character(s)" field at the top of the app.
</li>
<li>
<strong>Delete Selected:</strong> Select one or more entries from the list and click "🗑️ Delete Selected" to remove them from the app and update <code>Known.txt</code> accordingly.
</li>
</ul>
<h2><strong>Log Area Controls</strong></h2>
<p>These controls are located around the main log panel and offer tools for managing downloads, configuring advanced options, and resetting the application.</p>
<ul>
<li>
<strong>Multi-part: OFF</strong><br>
This button acts as both a status indicator and a configuration panel for multi-part downloading (parallel downloading of large files).
<ul>
<li><strong>Function:</strong> Opens the <code>Multipart Download Options</code> dialog (defined in <code>MultipartScopeDialog.py</code>).</li>
<li><strong>Scope Options:</strong> Choose between "Videos Only", "Archives Only", or "Both".</li>
<li><strong>Number of parts:</strong> Set how many simultaneous connections to use (2–16).</li>
<li><strong>Minimum file size:</strong> Set a threshold (MB) below which files are downloaded normally.</li>
<li><strong>Status:</strong> After applying settings, the button's text updates (e.g., <code>Multi-part: Both</code>); otherwise, it resets to <code>Multi-part: OFF</code>.</li>
</ul>
</li>
<li>
<strong>👁️ Eye Emoji Button (Log View Toggle)</strong><br>
Switches between two views in the log panel:
<ul>
<li><strong>👁️ Progress Log View:</strong> Shows real-time download progress, status messages, and errors.</li>
<li><strong>🚫 Missed Character View:</strong> Displays names detected in posts that didn't match the current filter — useful for updating <code>Known.txt</code>.</li>
</ul>
</li>
<li>
<strong>Reset Button</strong><br>
Performs a full "soft reset" of the UI when the application is idle.
<ul>
<li>Clears all inputs (except saved Download Location)</li>
<li>Resets checkboxes, buttons, and logs</li>
<li>Clears counters, queues, and restores the UI to its default state</li>
<li><strong>Note:</strong> This is different from <em>Cancel & Reset UI</em>, which halts active downloads</li>
</ul>
</li>
</ul>
</div>

readme.md

@@ -1,12 +1,12 @@
<h1 align="center">Kemono Downloader</h1>
<div align="center">
<table>
<tbody>
<tr>
<td align="center">
<img src="Read/Read.png" alt="Default Mode" width="400"><br>
<strong>Default Mode</strong>
</td>
<td align="center">
<img src="Read/Read1.png" alt="Favorite Mode" width="400"><br>
@@ -23,140 +23,94 @@
<strong>Manga/Comic Mode</strong>
</td>
</tr>
</tbody>
</table>
</div>
<hr>
<p>A powerful, feature-rich GUI application for downloading content from a wide array of sites, including <strong>Kemono</strong>, <strong>Coomer</strong>, <strong>Bunkr</strong>, <strong>Erome</strong>, <strong>Saint2.su</strong>, and <strong>nhentai</strong>.</p>
<p>Built with PyQt5, this tool is designed for users who want deep filtering capabilities, customizable folder structures, efficient downloads, and intelligent automation — all within a modern and user-friendly graphical interface.</p>
<div align="center">
<a href="features.md"><img src="https://img.shields.io/badge/📚%20Full%20Feature%20List-FFD700?style=for-the-badge&logoColor=black&color=FFD700" alt="Full Feature List"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/📝%20License-90EE90?style=for-the-badge&logoColor=black&color=90EE90" alt="License"></a>
<a href="note.md"><img src="https://img.shields.io/badge/⚠️%20Important%20Note-FFCCCB?style=for-the-badge&logoColor=black&color=FFCCCB" alt="Important Note"></a>
</div>
<h2>Core Capabilities Overview</h2>
<h3>High-Performance &amp; Resilient Downloading</h3>
<ul>
<li><strong>Multi-threading:</strong> Processes multiple posts simultaneously to greatly accelerate downloads from large creator profiles.</li>
<li><strong>Multi-part Downloading:</strong> Splits large files into chunks and downloads them in parallel to maximize speed.</li>
<li><strong>Session Management:</strong> Supports pausing, resuming, and <strong>restoring downloads</strong> after crashes or interruptions, so you never lose your progress.</li>
</ul>
<h3>Expanded Site Support</h3>
<ul>
<li><strong>Direct Downloading:</strong> Full support for Kemono, Coomer, Bunkr, Erome, Saint2.su, and nhentai.</li>
<li><strong>Batch Mode:</strong> Download hundreds of URLs at once from <code>nhentai.txt</code> or <code>saint2.su.txt</code> files.</li>
<li><strong>Discord Support:</strong> Download files or save entire channel histories as PDFs directly through the API.</li>
</ul>
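<p>Batch mode works by reading one URL per line from the listed text file. The sketch below is illustrative only and not the application's actual code; skipping blank lines and <code>#</code> comments is an assumed convention.</p>

```python
# Minimal sketch of batch-mode URL parsing (an assumption, not the app's
# actual implementation): one URL per line, blanks and '#' comments ignored.
def parse_batch_urls(text):
    urls = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):  # skip blanks and comments
            urls.append(line)
    return urls

# Usage:
#   with open("nhentai.txt", encoding="utf-8") as f:
#       urls = parse_batch_urls(f.read())
```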
<h3>Advanced Filtering &amp; Content Control</h3>
<ul>
<li><strong>Content Type Filtering:</strong> Select whether to download all files or limit to images, videos, audio, or archives only.</li>
<li><strong>Keyword Skipping:</strong> Automatically skips posts or files containing certain keywords (e.g., "WIP", "sketch").</li>
<li><strong>Skip by Size:</strong> Avoid small files by setting a minimum size threshold in MB (e.g., <code>[200]</code>).</li>
<li><strong>Character Filtering:</strong> Restricts downloads to posts that match specific character or series names, with scope controls for title, filename, or comments.</li>
</ul>
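<p>The keyword and size rules above combine into a simple skip decision. This is a hypothetical sketch of that logic (function name and signature are assumptions, not the application's real API):</p>

```python
# Illustrative skip rule: a file is skipped if its name contains any skip
# keyword (case-insensitive) or if it is smaller than the minimum size in MB.
def should_skip(filename, size_bytes, skip_keywords, min_size_mb=0):
    name = filename.lower()
    if any(kw.lower() in name for kw in skip_keywords):
        return True
    if min_size_mb and size_bytes < min_size_mb * 1024 * 1024:
        return True
    return False
```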
<h3>Intelligent File Organization</h3>
<ul>
<li><strong>Automated Subfolders:</strong> Automatically organizes downloaded files into subdirectories based on character names or per post.</li>
<li><strong>Advanced File Renaming:</strong> Flexible renaming options, especially in Manga Mode, including by post title, date, sequential numbering, or post ID.</li>
<li><strong>Filename Cleaning:</strong> Automatically removes unwanted text from filenames.</li>
</ul>
<h3>Specialized Modes</h3>
<ul>
<li><strong>Manga/Comic Mode:</strong> Sorts posts chronologically before downloading to ensure pages appear in the correct sequence.</li>
<li><strong>Favorite Mode:</strong> Connects to your account and downloads from your favorites list (artists or posts).</li>
<li><strong>Link Extraction Mode:</strong> Extracts external links (Mega, Google Drive) from posts for export or <strong>direct in-app downloading</strong>.</li>
<li><strong>Text Extraction Mode:</strong> Saves post descriptions or comment sections as <code>PDF</code>, <code>DOCX</code>, or <code>TXT</code> files.</li>
</ul>
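<p>Manga/Comic Mode's core idea is an oldest-first sort before any download starts. A minimal sketch, assuming posts carry an ISO-formatted <code>published</code> date (the field name is an assumption):</p>

```python
# Sorting posts oldest-first so pages download in reading order.
# ISO 8601 date strings sort correctly as plain strings.
def sort_posts_chronologically(posts):
    return sorted(posts, key=lambda p: p["published"])

posts = [
    {"title": "Ch. 2", "published": "2024-02-01"},
    {"title": "Ch. 1", "published": "2024-01-01"},
]
ordered = sort_posts_chronologically(posts)
```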
<h3>Utility &amp; Advanced Features</h3>
<ul>
<li><strong>In-App Updater:</strong> Check for new versions directly from the settings menu.</li>
<li><strong>Cookie Support:</strong> Enables access to subscriber-only content via browser session cookies.</li>
<li><strong>Duplicate Detection:</strong> Prevents saving duplicate files using content-based comparison, with configurable limits.</li>
<li><strong>Image Compression:</strong> Automatically converts large images to <code>.webp</code> to reduce disk usage.</li>
<li><strong>Creator Management:</strong> Built-in creator browser and update checker for downloading only new posts from saved profiles.</li>
<li><strong>Error Handling:</strong> Tracks failed downloads and provides a retry dialog with options to export or redownload missing files.</li>
</ul>
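<p>"Content-based comparison" for duplicate detection is commonly implemented by hashing file bytes and remembering seen digests. The sketch below illustrates that idea under this assumption; the application's actual scheme may differ:</p>

```python
import hashlib

# Remember digests of files already saved; identical bytes -> identical digest.
_seen_digests = set()

def is_duplicate(data: bytes) -> bool:
    """Return True if this exact content was seen before (assumed scheme)."""
    digest = hashlib.sha256(data).hexdigest()
    if digest in _seen_digests:
        return True
    _seen_digests.add(digest)
    return False
```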
<h2>💻 Installation</h2>
<h3>Requirements</h3>
<ul>
<li>Python 3.6 or higher</li>
<li>pip (Python package installer)</li>
</ul>
<h3>Install Dependencies</h3>
<pre><code>pip install PyQt5 requests cloudscraper Pillow fpdf2 python-docx
</code></pre>
<h3>Running the Application</h3>
<p>Navigate to the application's directory in your terminal and run:</p>
<pre><code>python main.py
</code></pre>
<h2>Contribution</h2>
<p>Feel free to fork this repo and submit pull requests for bug fixes, new features, or UI improvements!</p>
<h2>License</h2>
<p>This project is licensed under the MIT License.</p>
<h2>Star History</h2>
<table align="center" style="border-collapse: collapse; border: none; margin-left: auto; margin-right: auto;">
<tbody>
<tr>
<td align="center" valign="middle" style="padding: 10px; border: none;">
<a href="https://www.star-history.com/#Yuvi9587/Kemono-Downloader&amp;Date">
<img src="https://api.star-history.com/svg?repos=Yuvi9587/Kemono-Downloader&amp;type=Date" alt="Star History Chart" width="650">
</a>
</td>
</tr>
</tbody>
</table>
<p align="center">
<a href="https://buymeacoffee.com/yuvi9587">
<img src="https://img.shields.io/badge/🍺%20Buy%20Me%20a%20Coffee-FFCCCB?style=for-the-badge&amp;logoColor=black&amp;color=FFDD00" alt="Buy Me a Coffee">
</a>
</p>
View File
@@ -1,4 +1,3 @@
-# --- Application Metadata ---
 CONFIG_ORGANIZATION_NAME = "KemonoDownloader"
 CONFIG_APP_NAME_MAIN = "ApplicationSettings"
 CONFIG_APP_NAME_TOUR = "ApplicationTour"
@@ -9,7 +8,7 @@ STYLE_ORIGINAL_NAME = "original_name"
 STYLE_DATE_BASED = "date_based"
 STYLE_DATE_POST_TITLE = "date_post_title"
 STYLE_POST_TITLE_GLOBAL_NUMBERING = "post_title_global_numbering"
-STYLE_POST_ID = "post_id" # Add this line
+STYLE_POST_ID = "post_id"
 MANGA_DATE_PREFIX_DEFAULT = ""
 # --- Download Scopes ---
@@ -61,6 +60,10 @@ RESOLUTION_KEY = "window_resolution"
 UI_SCALE_KEY = "ui_scale_factor"
 SAVE_CREATOR_JSON_KEY = "saveCreatorJsonProfile"
 FETCH_FIRST_KEY = "fetchAllPostsFirst"
+# --- FIX: Add the missing key for the Discord token ---
+DISCORD_TOKEN_KEY = "discord/token"
+POST_DOWNLOAD_ACTION_KEY = "postDownloadAction"
 # --- UI Constants and Identifiers ---
 HTML_PREFIX = "<!HTML!>"
View File
@@ -0,0 +1,72 @@
# src/core/Hentai2read_client.py
import re
import os
import json
import requests
import cloudscraper
from bs4 import BeautifulSoup

def fetch_hentai2read_data(url, logger, session):
    """
    Scrapes a SINGLE Hentai2Read chapter page using a provided session.
    """
    logger(f"Attempting to fetch chapter data from: {url}")
    try:
        response = session.get(url, timeout=30)
        response.raise_for_status()
        page_content_text = response.text
        soup = BeautifulSoup(page_content_text, 'html.parser')
        album_title = ""
        title_tags = soup.select('span[itemprop="name"]')
        if title_tags:
            album_title = title_tags[-1].text.strip()
        if not album_title:
            title_tag = soup.select_one('h1.title')
            if title_tag:
                album_title = title_tag.text.strip()
        if not album_title:
            logger("❌ Could not find album title on page.")
            return None, None
        image_urls = []
        try:
            start_index = page_content_text.index("'images' : ") + len("'images' : ")
            end_index = page_content_text.index(",\n", start_index)
            images_json_str = page_content_text[start_index:end_index]
            image_paths = json.loads(images_json_str)
            image_urls = ["https://hentaicdn.com/hentai" + part for part in image_paths]
        except (ValueError, json.JSONDecodeError):
            logger("❌ Could not find or parse image JSON data for this chapter.")
            return None, None
        if not image_urls:
            logger("❌ No image URLs found for this chapter.")
            return None, None
        logger(f"   Found {len(image_urls)} images for album '{album_title}'.")
        files_to_download = []
        for i, img_url in enumerate(image_urls):
            page_num = i + 1
            extension = os.path.splitext(img_url)[1].split('?')[0]
            if not extension:
                extension = ".jpg"
            filename = f"{page_num:03d}{extension}"
            files_to_download.append({'url': img_url, 'filename': filename})
        return album_title, files_to_download
    except requests.exceptions.HTTPError as e:
        if e.response.status_code == 404:
            logger("   Chapter not found (404 Error). This likely marks the end of the series.")
        else:
            logger(f"❌ An HTTP error occurred: {e}")
        return None, None
    except Exception as e:
        logger(f"❌ An unexpected error occurred while fetching data: {e}")
        return None, None
View File
@@ -63,7 +63,6 @@ def parse_datetime(date_string, format="%Y-%m-%dT%H:%M:%S%z", utcoffset=0):
 unquote = urllib.parse.unquote
 unescape = html.unescape
-# --- From: util.py ---
 def decrypt_xor(encrypted, key, base64=True, fromhex=False):
     if base64: encrypted = binascii.a2b_base64(encrypted)
     if fromhex: encrypted = bytes.fromhex(encrypted.decode())
@@ -76,7 +75,6 @@ def advance(iterable, num):
 def json_loads(s): return json.loads(s)
 def json_dumps(obj): return json.dumps(obj, separators=(",", ":"))
-# --- From: common.py ---
 class Extractor:
     def __init__(self, match, logger):
         self.log = logger
@@ -116,7 +114,6 @@ class Extractor:
         if not kwargs.get("fatal", True): return {}
         raise
-# --- From: bunkr.py (Adapted) ---
 BASE_PATTERN_BUNKR = r"(?:https?://)?(?:[a-zA-Z0-9-]+\.)?(bunkr\.(?:si|la|ws|red|black|media|site|is|to|ac|cr|ci|fi|pk|ps|sk|ph|su)|bunkrr\.ru)"
 DOMAINS = ["bunkr.si", "bunkr.ws", "bunkr.la", "bunkr.red", "bunkr.black", "bunkr.media", "bunkr.site"]
 CF_DOMAINS = set()
@@ -195,10 +192,6 @@ class BunkrMediaExtractor(BunkrAlbumExtractor):
             self.log.error("%s: %s", exc.__class__.__name__, exc)
             yield MockMessage.Directory, {"album_name": "error", "count": 0}, {}
-# ==============================================================================
-# --- PUBLIC API FOR THE GUI ---
-# ==============================================================================
 def get_bunkr_extractor(url, logger):
     """Selects the correct Bunkr extractor based on the URL pattern."""
     if BunkrAlbumExtractor.pattern.match(url):
@@ -235,7 +228,6 @@ def fetch_bunkr_data(url, logger):
             album_name = re.sub(r'[<>:"/\\|?*]', '_', raw_album_name).strip() or "untitled"
             logger.info(f"Processing Bunkr album: {album_name}")
         elif msg_type == MockMessage.Url:
-            # data here is the file_data dictionary
             files_to_download.append(data)
     if not files_to_download:
View File
@@ -1,4 +1,3 @@
-# src/core/erome_client.py
 import os
 import re
@@ -7,10 +6,8 @@ import time
 import urllib.parse
 import requests
 from datetime import datetime
+import cloudscraper
-# #############################################################################
-# SECTION: Utility functions adapted from the original script
-# #############################################################################
 def extr(txt, begin, end, default=""):
     """Stripped-down version of 'extract()' to find text between two delimiters."""
@@ -49,14 +46,10 @@ def nameext_from_url(url):
 def parse_timestamp(ts, default=None):
     """Creates a datetime object from a Unix timestamp."""
     try:
-        # Use fromtimestamp for simplicity and compatibility
         return datetime.fromtimestamp(int(ts))
     except (ValueError, TypeError):
         return default
-# #############################################################################
-# SECTION: Main Erome Fetching Logic
-# #############################################################################
 def fetch_erome_data(url, logger):
     """
@@ -78,15 +71,10 @@ def fetch_erome_data(url, logger):
     album_id = album_id_match.group(1)
     page_url = f"https://www.erome.com/a/{album_id}"
-    session = requests.Session()
-    session.headers.update({
-        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
-        "Referer": "https://www.erome.com/"
-    })
+    session = cloudscraper.create_scraper()
     try:
         logger(f"   Fetching Erome album page: {page_url}")
-        # Add a loop to handle "Please wait" pages
         for attempt in range(5):
             response = session.get(page_url, timeout=30)
             response.raise_for_status()
@@ -103,17 +91,14 @@ def fetch_erome_data(url, logger):
         title = html.unescape(extr(page_content, 'property="og:title" content="', '"'))
         user = urllib.parse.unquote(extr(page_content, 'href="https://www.erome.com/', '"', default="unknown_user"))
-        # Sanitize title and user for folder creation
         sanitized_title = re.sub(r'[<>:"/\\|?*]', '_', title).strip()
         sanitized_user = re.sub(r'[<>:"/\\|?*]', '_', user).strip()
         album_folder_name = f"Erome - {sanitized_user} - {sanitized_title} [{album_id}]"
         urls = []
-        # Split the page content by media groups to find all videos
         media_groups = page_content.split('<div class="media-group"')
-        for group in media_groups[1:]: # Skip the part before the first media group
-            # Prioritize <source> tag, fall back to data-src for images
+        for group in media_groups[1:]:
             video_url = extr(group, '<source src="', '"') or extr(group, 'data-src="', '"')
             if video_url:
                 urls.append(video_url)
@@ -127,7 +112,6 @@ def fetch_erome_data(url, logger):
         file_list = []
         for i, file_url in enumerate(urls, 1):
             filename_info = nameext_from_url(file_url)
-            # Create a clean, descriptive filename
             filename = f"{album_id}_{sanitized_title}_{i:03d}.{filename_info.get('extension', 'mp4')}"
             file_data = {
View File
@@ -15,7 +15,6 @@ def fetch_nhentai_gallery(gallery_id, logger=print):
     """
     api_url = f"https://nhentai.net/api/gallery/{gallery_id}"
-    # Create a cloudscraper instance
     scraper = cloudscraper.create_scraper()
     logger(f"   Fetching nhentai gallery metadata from: {api_url}")
View File
@@ -1,14 +1,9 @@
-# src/core/saint2_client.py
 import os
 import re as re_module
 import html
 import urllib.parse
 import requests
-# ##############################################################################
-# SECTION: Utility functions adapted from the original script
-# ##############################################################################
 PATTERN_CACHE = {}
@@ -46,10 +41,6 @@ def nameext_from_url(url):
     data["filename"], data["extension"] = filename, ""
     return data
-# ##############################################################################
-# SECTION: Extractor Logic adapted for the main application
-# ##############################################################################
 class BaseExtractor:
     """A simplified base class for extractors."""
     def __init__(self, match, session, logger):
@@ -165,7 +156,6 @@ def fetch_saint2_data(url, logger):
         if match:
             extractor = extractor_cls(match, session, logger)
             album_title, files = extractor.items()
-            # Sanitize the album title to be a valid folder name
             sanitized_title = re_module.sub(r'[<>:"/\\|?*]', '_', album_title) if album_title else "saint2_download"
             return sanitized_title, files
View File
@@ -848,6 +848,8 @@ class PostProcessorWorker:
             'original_post_id_for_log': original_post_id_for_log, 'post_title': post_title,
             'file_index_in_post': file_index_in_post, 'num_files_in_this_post': num_files_in_this_post,
             'forced_filename_override': filename_to_save_in_main_path,
+            'service': self.service,
+            'user_id': self.user_id
         }
         return 0, 1, final_filename_saved_for_return, was_original_name_kept_flag, FILE_DOWNLOAD_STATUS_FAILED_PERMANENTLY_THIS_SESSION, permanent_failure_details
     finally:
File diff suppressed because one or more lines are too long

View File
@@ -106,7 +106,17 @@ class ErrorFilesDialog(QDialog):
         post_title = error_info.get('post_title', 'Unknown Post')
         post_id = error_info.get('original_post_id_for_log', 'N/A')
-        item_text = f"File: (unknown)\nFrom Post: '{post_title}' (ID: {post_id})"
+        creator_name = "Unknown Creator"
+        service = error_info.get('service')
+        user_id = error_info.get('user_id')
+        # Check if we have the necessary info and access to the cache
+        if service and user_id and hasattr(self.parent_app, 'creator_name_cache'):
+            creator_key = (service.lower(), str(user_id))
+            # Look up the name, fall back to the user_id if not found
+            creator_name = self.parent_app.creator_name_cache.get(creator_key, user_id)
+        item_text = f"File: (unknown)\nCreator: {creator_name} - Post: '{post_title}' (ID: {post_id})"
         list_item = QListWidgetItem(item_text)
         list_item.setData(Qt.UserRole, error_info)
         list_item.setFlags(list_item.flags() | Qt.ItemIsUserCheckable)
View File
@@ -4,24 +4,109 @@ import json
 import sys
 # --- PyQt5 Imports ---
-from PyQt5.QtCore import Qt, QStandardPaths
+from PyQt5.QtCore import Qt, QStandardPaths, QTimer
 from PyQt5.QtWidgets import (
     QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
     QGroupBox, QComboBox, QMessageBox, QGridLayout, QCheckBox
 )
 # --- Local Application Imports ---
 from ...i18n.translator import get_translation
 from ...utils.resolution import get_dark_theme
+from ..assets import get_app_icon_object
 from ..main_window import get_app_icon_object
 from ...config.constants import (
     THEME_KEY, LANGUAGE_KEY, DOWNLOAD_LOCATION_KEY,
     RESOLUTION_KEY, UI_SCALE_KEY, SAVE_CREATOR_JSON_KEY,
     COOKIE_TEXT_KEY, USE_COOKIE_KEY,
-    FETCH_FIRST_KEY
+    FETCH_FIRST_KEY, DISCORD_TOKEN_KEY, POST_DOWNLOAD_ACTION_KEY
 )
 from ...services.updater import UpdateChecker, UpdateDownloader
+class CountdownMessageBox(QDialog):
+    """
+    A custom message box that includes a countdown timer for the 'Yes' button,
+    which automatically accepts the dialog when the timer reaches zero.
+    """
+    def __init__(self, title, text, countdown_seconds=10, parent_app=None, parent=None):
+        super().__init__(parent)
+        self.parent_app = parent_app
+        self.countdown = countdown_seconds
+        # --- Basic Window Setup ---
+        self.setWindowTitle(title)
+        self.setModal(True)
+        app_icon = get_app_icon_object()
+        if app_icon and not app_icon.isNull():
+            self.setWindowIcon(app_icon)
+        self._init_ui(text)
+        self._apply_theme()
+        # --- Timer Setup ---
+        self.timer = QTimer(self)
+        self.timer.setInterval(1000)  # Tick every second
+        self.timer.timeout.connect(self._update_countdown)
+        self.timer.start()
+    def _init_ui(self, text):
+        """Initializes the UI components of the dialog."""
+        main_layout = QVBoxLayout(self)
+        self.message_label = QLabel(text)
+        self.message_label.setWordWrap(True)
+        self.message_label.setAlignment(Qt.AlignCenter)
+        main_layout.addWidget(self.message_label)
+        buttons_layout = QHBoxLayout()
+        buttons_layout.addStretch(1)
+        self.yes_button = QPushButton()
+        self.yes_button.clicked.connect(self.accept)
+        self.yes_button.setDefault(True)
+        self.no_button = QPushButton()
+        self.no_button.clicked.connect(self.reject)
+        buttons_layout.addWidget(self.yes_button)
+        buttons_layout.addWidget(self.no_button)
+        buttons_layout.addStretch(1)
+        main_layout.addLayout(buttons_layout)
+        self._retranslate_ui()
+        self._update_countdown()  # Initial text setup
+    def _tr(self, key, default_text=""):
+        """Helper for translations."""
+        if self.parent_app and hasattr(self.parent_app, 'current_selected_language'):
+            return get_translation(self.parent_app.current_selected_language, key, default_text)
+        return default_text
+    def _retranslate_ui(self):
+        """Sets translated text for UI elements."""
+        self.no_button.setText(self._tr("no_button_text", "No"))
+        # The 'yes' button text is handled by the countdown
+    def _update_countdown(self):
+        """Updates the countdown and button text each second."""
+        if self.countdown <= 0:
+            self.timer.stop()
+            self.accept()  # Automatically accept when countdown finishes
+            return
+        yes_text = self._tr("yes_button_text", "Yes")
+        self.yes_button.setText(f"{yes_text} ({self.countdown})")
+        self.countdown -= 1
+    def _apply_theme(self):
+        """Applies the current theme from the parent application."""
+        if self.parent_app and hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
+            scale = getattr(self.parent_app, 'scale_factor', 1)
+            self.setStyleSheet(get_dark_theme(scale))
+        else:
+            self.setStyleSheet("")
 class FutureSettingsDialog(QDialog):
     """
     A dialog for managing application-wide settings like theme, language,
@@ -39,7 +124,7 @@ class FutureSettingsDialog(QDialog):
         screen_height = QApplication.primaryScreen().availableGeometry().height() if QApplication.primaryScreen() else 800
         scale_factor = screen_height / 800.0
-        base_min_w, base_min_h = 420, 480  # Increased height for update section
+        base_min_w, base_min_h = 420, 520  # Increased height for new options
         scaled_min_w = int(base_min_w * scale_factor)
         scaled_min_h = int(base_min_h * scale_factor)
         self.setMinimumSize(scaled_min_w, scaled_min_h)
@@ -55,7 +140,6 @@ class FutureSettingsDialog(QDialog):
         self.interface_group_box = QGroupBox()
         interface_layout = QGridLayout(self.interface_group_box)
-        # Theme, UI Scale, Language (unchanged)...
         self.theme_label = QLabel()
         self.theme_toggle_button = QPushButton()
         self.theme_toggle_button.clicked.connect(self._toggle_theme)
@@ -87,21 +171,26 @@ class FutureSettingsDialog(QDialog):
         self.default_path_label = QLabel()
         self.save_path_button = QPushButton()
-        self.save_path_button.clicked.connect(self._save_cookie_and_path)
+        self.save_path_button.clicked.connect(self._save_settings)
         download_window_layout.addWidget(self.default_path_label, 1, 0)
         download_window_layout.addWidget(self.save_path_button, 1, 1)
+        self.post_download_action_label = QLabel()
+        self.post_download_action_combo = QComboBox()
+        self.post_download_action_combo.currentIndexChanged.connect(self._post_download_action_changed)
+        download_window_layout.addWidget(self.post_download_action_label, 2, 0)
+        download_window_layout.addWidget(self.post_download_action_combo, 2, 1)
         self.save_creator_json_checkbox = QCheckBox()
         self.save_creator_json_checkbox.stateChanged.connect(self._creator_json_setting_changed)
-        download_window_layout.addWidget(self.save_creator_json_checkbox, 2, 0, 1, 2)
+        download_window_layout.addWidget(self.save_creator_json_checkbox, 3, 0, 1, 2)
         self.fetch_first_checkbox = QCheckBox()
         self.fetch_first_checkbox.stateChanged.connect(self._fetch_first_setting_changed)
-        download_window_layout.addWidget(self.fetch_first_checkbox, 3, 0, 1, 2)
+        download_window_layout.addWidget(self.fetch_first_checkbox, 4, 0, 1, 2)
         main_layout.addWidget(self.download_window_group_box)
-        # --- NEW: Update Section ---
         self.update_group_box = QGroupBox()
         update_layout = QGridLayout(self.update_group_box)
         self.version_label = QLabel()
@@ -112,7 +201,6 @@ class FutureSettingsDialog(QDialog):
update_layout.addWidget(self.update_status_label, 0, 1) update_layout.addWidget(self.update_status_label, 0, 1)
update_layout.addWidget(self.check_update_button, 1, 0, 1, 2) update_layout.addWidget(self.check_update_button, 1, 0, 1, 2)
main_layout.addWidget(self.update_group_box) main_layout.addWidget(self.update_group_box)
# --- END: New Section ---
main_layout.addStretch(1) main_layout.addStretch(1)
@@ -129,28 +217,27 @@ class FutureSettingsDialog(QDialog):
    self.language_label.setText(self._tr("language_label", "Language:"))
    self.window_size_label.setText(self._tr("window_size_label", "Window Size:"))
    self.default_path_label.setText(self._tr("default_path_label", "Default Path:"))
    self.post_download_action_label.setText(self._tr("post_download_action_label", "Action After Download:"))
    self.save_creator_json_checkbox.setText(self._tr("save_creator_json_label", "Save Creator.json file"))
    self.fetch_first_checkbox.setText(self._tr("fetch_first_label", "Fetch First (Download after all pages are found)"))
    self.fetch_first_checkbox.setToolTip(self._tr("fetch_first_tooltip", "If checked, the downloader will find all posts from a creator first before starting any downloads.\nThis can be slower to start but provides a more accurate progress bar."))
    self._update_theme_toggle_button_text()
    self.save_path_button.setText(self._tr("settings_save_all_button", "Save Path + Cookie + Token"))
    self.save_path_button.setToolTip(self._tr("settings_save_all_tooltip", "Save the current 'Download Location', Cookie, and Discord Token settings for future sessions."))
    self.ok_button.setText(self._tr("ok_button", "OK"))
    self.update_group_box.setTitle(self._tr("update_group_title", "Application Updates"))
    current_version = self.parent_app.windowTitle().split(' v')[-1]
    self.version_label.setText(self._tr("current_version_label", f"Current Version: v{current_version}"))
    self.update_status_label.setText(self._tr("update_status_ready", "Ready to check."))
    self.check_update_button.setText(self._tr("check_for_updates_button", "Check for Updates"))
    self._populate_display_combo_boxes()
    self._populate_language_combo_box()
    self._populate_post_download_action_combo()
    self._load_checkbox_states()
def _check_for_updates(self):
    """Starts the update check thread."""
    self.check_update_button.setEnabled(False)
    self.update_status_label.setText(self._tr("update_status_checking", "Checking..."))
    current_version = self.parent_app.windowTitle().split(' v')[-1]
@@ -189,7 +276,6 @@ class FutureSettingsDialog(QDialog):
    self.check_update_button.setEnabled(True)
    self.ok_button.setEnabled(True)
def _load_checkbox_states(self):
    self.save_creator_json_checkbox.blockSignals(True)
    should_save = self.parent_app.settings.value(SAVE_CREATOR_JSON_KEY, True, type=bool)
@@ -252,14 +338,8 @@ class FutureSettingsDialog(QDialog):
    self.ui_scale_combo_box.blockSignals(True)
    self.ui_scale_combo_box.clear()
    scales = [
        (0.5, "50%"), (0.7, "70%"), (0.9, "90%"), (1.0, "100% (Default)"),
        (1.25, "125%"), (1.50, "150%"), (1.75, "175%"), (2.0, "200%")
    ]
    current_scale = self.parent_app.settings.value(UI_SCALE_KEY, 1.0)
    for scale_val, scale_name in scales:
@@ -305,14 +385,44 @@ class FutureSettingsDialog(QDialog):
    QMessageBox.information(self, self._tr("language_change_title", "Language Changed"),
                            self._tr("language_change_message", "A restart is required..."))
def _populate_post_download_action_combo(self):
    """Populates the action dropdown and sets the current selection from settings."""
    self.post_download_action_combo.blockSignals(True)
    self.post_download_action_combo.clear()
    actions = [
        (self._tr("action_off", "Off"), "off"),
        (self._tr("action_notify", "Notify with Sound"), "notify"),
        (self._tr("action_sleep", "Sleep"), "sleep"),
        (self._tr("action_shutdown", "Shutdown"), "shutdown")
    ]
    current_action = self.parent_app.settings.value(POST_DOWNLOAD_ACTION_KEY, "off")
    for text, key in actions:
        self.post_download_action_combo.addItem(text, key)
        if current_action == key:
            self.post_download_action_combo.setCurrentIndex(self.post_download_action_combo.count() - 1)
    self.post_download_action_combo.blockSignals(False)

def _post_download_action_changed(self):
    """Saves the selected post-download action to settings."""
    selected_action = self.post_download_action_combo.currentData()
    self.parent_app.settings.setValue(POST_DOWNLOAD_ACTION_KEY, selected_action)
    self.parent_app.settings.sync()

def _save_settings(self):
    path_saved = False
    cookie_saved = False
    token_saved = False
    if hasattr(self.parent_app, 'dir_input') and self.parent_app.dir_input:
        current_path = self.parent_app.dir_input.text().strip()
        if current_path and os.path.isdir(current_path):
            self.parent_app.settings.setValue(DOWNLOAD_LOCATION_KEY, current_path)
            path_saved = True
    if hasattr(self.parent_app, 'use_cookie_checkbox'):
        use_cookie = self.parent_app.use_cookie_checkbox.isChecked()
        cookie_content = self.parent_app.cookie_text_input.text().strip()
@@ -323,8 +433,20 @@ class FutureSettingsDialog(QDialog):
        else:
            self.parent_app.settings.setValue(USE_COOKIE_KEY, False)
            self.parent_app.settings.setValue(COOKIE_TEXT_KEY, "")
    if (hasattr(self.parent_app, 'remove_from_filename_input') and
            hasattr(self.parent_app, 'remove_from_filename_label_widget')):
        label_text = self.parent_app.remove_from_filename_label_widget.text()
        if "Token" in label_text:
            discord_token = self.parent_app.remove_from_filename_input.text().strip()
            if discord_token:
                self.parent_app.settings.setValue(DISCORD_TOKEN_KEY, discord_token)
                token_saved = True
    self.parent_app.settings.sync()
    if path_saved or cookie_saved or token_saved:
        QMessageBox.information(self, "Settings Saved", "Settings have been saved successfully.")
    else:
        QMessageBox.warning(self, "Nothing to Save", "No valid settings were found to save.")
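The token branch in `_save_settings` above reuses a shared text field and decides what it currently holds by inspecting the label text. A minimal standalone sketch of that gating logic (function and return values are illustrative, not from the project):

```python
def classify_shared_input(label_text, value):
    """Decide what the shared input field holds, mirroring the label check above."""
    if "Token" in label_text and value.strip():
        return ("discord_token", value.strip())
    return ("remove_words", value.strip())

# The same widget serves two purposes depending on the active URL type:
assert classify_shared_input("🔑 Discord Token:", " abc123 ") == ("discord_token", "abc123")
assert classify_shared_input("✂️ Remove Words from name:", "patreon") == ("remove_words", "patreon")
```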
View File
@@ -1,6 +1,7 @@
import os
import re
import datetime
import time

try:
    from fpdf import FPDF
    FPDF_AVAILABLE = True
@@ -29,7 +30,7 @@
except ImportError:
    FPDF = None
    PDF = None

def create_pdf_from_discord_messages(messages_data, server_name, channel_name, output_filename, font_path, logger=print, cancellation_event=None, pause_event=None):
    """
    Creates a single PDF from a list of Discord message objects, formatted as a chat log.
    UPDATED to include clickable links for attachments and embeds.
@@ -42,8 +43,20 @@ def create_pdf_from_discord_messages(messages_data, server_name, channel_name, o
        logger("   No messages were found or fetched to create a PDF.")
        return False

    # --- FIX: This helper function now correctly accepts and checks the event objects ---
    def check_events(c_event, p_event):
        """Helper to safely check for pause and cancel events."""
        if c_event and hasattr(c_event, 'is_cancelled') and c_event.is_cancelled:
            return True  # Stop
        if p_event and hasattr(p_event, 'is_paused'):
            while p_event.is_paused:
                time.sleep(0.5)
                if c_event and hasattr(c_event, 'is_cancelled') and c_event.is_cancelled:
                    return True
        return False

    logger("   Sorting messages by date (oldest first)...")
    messages_data.sort(key=lambda m: m.get('published', m.get('timestamp', '')))
    pdf = PDF(server_name, channel_name)
    default_font_family = 'DejaVu'
@@ -78,14 +91,19 @@ def create_pdf_from_discord_messages(messages_data, server_name, channel_name, o
    logger(f"   Starting PDF creation with {len(messages_data)} messages...")
    for i, message in enumerate(messages_data):
        # --- FIX: Pass the event objects to the helper function ---
        if i % 50 == 0:
            if check_events(cancellation_event, pause_event):
                logger("   PDF generation cancelled by user.")
                return False
        author = message.get('author', {}).get('global_name') or message.get('author', {}).get('username', 'Unknown User')
        timestamp_str = message.get('published', message.get('timestamp', ''))
        content = message.get('content', '')
        attachments = message.get('attachments', [])
        embeds = message.get('embeds', [])
        try:
            if timestamp_str.endswith('Z'):
                timestamp_str = timestamp_str[:-1] + '+00:00'
            dt_obj = datetime.datetime.fromisoformat(timestamp_str)
@@ -93,14 +111,12 @@ def create_pdf_from_discord_messages(messages_data, server_name, channel_name, o
        except (ValueError, TypeError):
            formatted_timestamp = timestamp_str
        if i > 0:
            pdf.ln(2)
            pdf.set_draw_color(200, 200, 200)
            pdf.cell(0, 0, '', border='T')
            pdf.ln(2)
        pdf.set_font(default_font_family, 'B', 11)
        pdf.write(5, f"{author} ")
        pdf.set_font(default_font_family, '', 9)
@@ -109,33 +125,31 @@ def create_pdf_from_discord_messages(messages_data, server_name, channel_name, o
        pdf.set_text_color(0, 0, 0)
        pdf.ln(6)
        if content:
            pdf.set_font(default_font_family, '', 10)
            pdf.multi_cell(w=0, h=5, text=content)
        if attachments or embeds:
            pdf.ln(1)
            pdf.set_font(default_font_family, '', 9)
            pdf.set_text_color(22, 119, 219)
            for att in attachments:
                file_name = att.get('filename', 'untitled')
                full_url = att.get('url', '#')
                pdf.write(5, text=f"[Attachment: {file_name}]", link=full_url)
                pdf.ln()
            for embed in embeds:
                embed_url = embed.get('url', 'no url')
                pdf.write(5, text=f"[Embed: {embed_url}]", link=embed_url)
                pdf.ln()
            pdf.set_text_color(0, 0, 0)
    if check_events(cancellation_event, pause_event):
        logger("   PDF generation cancelled by user before final save.")
        return False
    try:
        pdf.output(output_filename)
View File
@@ -2,6 +2,7 @@ import sys
import os
import time
import queue
import random
import traceback
import html
import http
@@ -41,6 +42,7 @@ from ..core.nhentai_client import fetch_nhentai_gallery
from ..core.bunkr_client import fetch_bunkr_data
from ..core.saint2_client import fetch_saint2_data
from ..core.erome_client import fetch_erome_data
from ..core.Hentai2read_client import fetch_hentai2read_data
from .assets import get_app_icon_object
from ..config.constants import *
from ..utils.file_utils import KNOWN_NAMES, clean_folder_name
@@ -53,7 +55,7 @@ from .dialogs.CookieHelpDialog import CookieHelpDialog
from .dialogs.FavoriteArtistsDialog import FavoriteArtistsDialog
from .dialogs.KnownNamesFilterDialog import KnownNamesFilterDialog
from .dialogs.HelpGuideDialog import HelpGuideDialog
from .dialogs.FutureSettingsDialog import FutureSettingsDialog, CountdownMessageBox
from .dialogs.ErrorFilesDialog import ErrorFilesDialog
from .dialogs.DownloadHistoryDialog import DownloadHistoryDialog
from .dialogs.DownloadExtractedLinksDialog import DownloadExtractedLinksDialog
@@ -67,6 +69,10 @@ from .dialogs.SupportDialog import SupportDialog
from .dialogs.KeepDuplicatesDialog import KeepDuplicatesDialog
from .dialogs.MultipartScopeDialog import MultipartScopeDialog

_ff_ver = (datetime.date.today().toordinal() - 735506) // 28
USERAGENT_FIREFOX = (f"Mozilla/5.0 (Windows NT 10.0; Win64; x64; "
                     f"rv:{_ff_ver}.0) Gecko/20100101 Firefox/{_ff_ver}.0")
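The `_ff_ver` expression above approximates the current Firefox major version from today's date, assuming a 28-day release cadence anchored at ordinal 735506. A quick sanity check of that formula for a fixed date:

```python
import datetime

def firefox_version(day: datetime.date) -> int:
    """Rolling Firefox-version heuristic from the constant above."""
    return (day.toordinal() - 735506) // 28

# For a fixed date the result is deterministic:
# 2024-01-01 has ordinal 738886, (738886 - 735506) // 28 == 120,
# i.e. a "Firefox/120.0" user agent string.
assert firefox_version(datetime.date(2024, 1, 1)) == 120
```

Because the value drifts forward over time, the user-agent string stays plausibly current without hardcoding a version that would eventually look stale.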
class DynamicFilterHolder:
    """A thread-safe class to hold and update character filters during a download."""
    def __init__(self, initial_filters=None):
@@ -286,7 +292,7 @@ class DownloaderApp(QWidget):
        self.download_location_label_widget = None
        self.remove_from_filename_label_widget = None
        self.skip_words_label_widget = None
        self.setWindowTitle("Kemono Downloader v7.1.0")
        setup_ui(self)
        self._connect_signals()
        if hasattr(self, 'character_input'):
@@ -305,6 +311,127 @@ class DownloaderApp(QWidget):
        self._check_for_interrupted_session()
        self._cleanup_after_update()
def _run_discord_file_download_thread(self, session, server_id, channel_id, token, output_dir, message_limit=None):
    """
    Runs in a background thread to fetch and download all files from a Discord channel.
    """
    def queue_logger(message):
        self.worker_to_gui_queue.put({'type': 'progress', 'payload': (message,)})

    def queue_progress_label_update(message):
        self.worker_to_gui_queue.put({'type': 'set_progress_label', 'payload': (message,)})

    def check_events():
        if self.cancellation_event.is_set():
            return True  # Stop
        while self.pause_event.is_set():
            time.sleep(0.5)  # Wait while paused
            if self.cancellation_event.is_set():
                return True  # Allow cancelling while paused
        return False  # Continue

    download_count = 0
    skip_count = 0
    try:
        queue_logger("=" * 40)
        queue_logger(f"🚀 Starting Discord download for channel: {channel_id}")
        queue_progress_label_update("Fetching messages...")

        def fetch_discord_api(endpoint):
            headers = {
                'Authorization': token,
                'User-Agent': USERAGENT_FIREFOX,
                'Accept': '*/*',
                'Accept-Language': 'en-US,en;q=0.5',
            }
            try:
                response = session.get(f"https://discord.com/api/v10{endpoint}", headers=headers)
                response.raise_for_status()
                return response.json()
            except Exception:
                return None

        last_message_id = None
        all_messages = []
        while True:
            if check_events():
                break
            url_endpoint = f"/channels/{channel_id}/messages?limit=100"
            if last_message_id:
                url_endpoint += f"&before={last_message_id}"
            message_batch = fetch_discord_api(url_endpoint)
            if not message_batch:
                break
            all_messages.extend(message_batch)
            if message_limit and len(all_messages) >= message_limit:
                queue_logger(f"   Reached message limit of {message_limit}. Halting fetch.")
                all_messages = all_messages[:message_limit]
                break
            last_message_id = message_batch[-1]['id']
            queue_progress_label_update(f"Fetched {len(all_messages)} messages...")
            time.sleep(1)

        if self.cancellation_event.is_set():
            self.finished_signal.emit(0, 0, True, [])
            return

        queue_progress_label_update(f"Collected {len(all_messages)} messages. Starting downloads...")
        total_attachments = sum(len(m.get('attachments', [])) for m in all_messages)
        for message in reversed(all_messages):
            if check_events():
                break
            for attachment in message.get('attachments', []):
                if check_events():
                    break
                file_url = attachment['url']
                original_filename = attachment['filename']
                filepath = os.path.join(output_dir, original_filename)
                filename_to_use = original_filename
                counter = 1
                base_name, extension = os.path.splitext(original_filename)
                while os.path.exists(filepath):
                    filename_to_use = f"{base_name} ({counter}){extension}"
                    filepath = os.path.join(output_dir, filename_to_use)
                    counter += 1
                if filename_to_use != original_filename:
                    queue_logger(f"   -> Duplicate name '{original_filename}'. Saving as '{filename_to_use}'.")
                try:
                    queue_logger(f"   Downloading ({download_count+1}/{total_attachments}): '{filename_to_use}'...")
                    # --- FIX: Stream the download in chunks for responsive controls ---
                    response = requests.get(file_url, stream=True, timeout=60)
                    response.raise_for_status()
                    download_cancelled = False
                    with open(filepath, 'wb') as f:
                        for chunk in response.iter_content(chunk_size=8192):
                            if check_events():
                                download_cancelled = True
                                break
                            f.write(chunk)
                    if download_cancelled:
                        queue_logger(f"   Download cancelled for '{filename_to_use}'. Deleting partial file.")
                        if os.path.exists(filepath):
                            os.remove(filepath)
                        continue  # Move to the next attachment
                    download_count += 1
                except Exception as e:
                    queue_logger(f"   ❌ Failed to download '{filename_to_use}': {e}")
                    skip_count += 1
    finally:
        self.finished_signal.emit(download_count, skip_count, self.cancellation_event.is_set(), [])
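The duplicate-name loop in the method above appends ` (1)`, ` (2)`, … to a filename until it no longer collides with an existing file. Extracted into a standalone helper (the function name is illustrative):

```python
import os
import tempfile

def unique_path(output_dir, filename):
    """Same collision strategy as the download loop above: name, name (1), name (2), ..."""
    base_name, extension = os.path.splitext(filename)
    filepath = os.path.join(output_dir, filename)
    counter = 1
    while os.path.exists(filepath):
        filepath = os.path.join(output_dir, f"{base_name} ({counter}){extension}")
        counter += 1
    return filepath

# Demonstrate against a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    first = unique_path(d, "file.png")
    open(first, "wb").close()                      # occupy "file.png"
    second = unique_path(d, "file.png")            # collides, so gets a suffix
    assert os.path.basename(second) == "file (1).png"
```

Checking `os.path.exists` per candidate keeps the logic simple, at the cost of a race window if another writer targets the same directory; for a single download thread per folder that trade-off is fine.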
def _cleanup_after_update(self):
    """Deletes the old executable after a successful update."""
    try:
@@ -805,7 +932,7 @@ class DownloaderApp(QWidget):
        if hasattr(self, 'use_cookie_checkbox'): self.use_cookie_checkbox.setText(self._tr("use_cookie_checkbox_label", "Use Cookie"))
        if hasattr(self, 'use_multithreading_checkbox'): self.update_multithreading_label(self.thread_count_input.text() if hasattr(self, 'thread_count_input') else "1")
        if hasattr(self, 'external_links_checkbox'): self.external_links_checkbox.setText(self._tr("show_external_links_checkbox_label", "Show External Links in Log"))
        if hasattr(self, 'manga_mode_checkbox'): self.manga_mode_checkbox.setText(self._tr("manga_comic_mode_checkbox_label", "Renaming Mode"))
        if hasattr(self, 'thread_count_label'): self.thread_count_label.setText(self._tr("threads_label", "Threads:"))
        if hasattr(self, 'character_input'):
@@ -1202,64 +1329,83 @@ class DownloaderApp(QWidget):
        )
        pdf_thread.start()

def _run_discord_pdf_creation_thread(self, session, api_url, server_id, channel_id, output_filepath, message_limit=None):
    def queue_logger(message):
        self.worker_to_gui_queue.put({'type': 'progress', 'payload': (message,)})

    def queue_progress_label_update(message):
        self.worker_to_gui_queue.put({'type': 'set_progress_label', 'payload': (message,)})

    token = self.remove_from_filename_input.text().strip()
    headers = {
        'Authorization': token,
        'User-Agent': USERAGENT_FIREFOX,
    }
    self.set_ui_enabled(False)
    queue_logger("=" * 40)
    queue_logger(f"🚀 Starting Discord PDF export for: {api_url}")
    queue_progress_label_update("Fetching messages...")
    all_messages = []
    channels_to_process = []
    server_name_for_pdf = server_id
    if channel_id:
        channels_to_process.append({'id': channel_id, 'name': channel_id})
    else:
        # This logic can be expanded later to fetch all channels in a server if needed
        pass

    for i, channel in enumerate(channels_to_process):
        queue_progress_label_update(f"Fetching from channel {i+1}/{len(channels_to_process)}: #{channel.get('name', '')}")
        last_message_id = None
        while not self.cancellation_event.is_set():
            url_endpoint = f"/channels/{channel['id']}/messages?limit=100"
            if last_message_id:
                url_endpoint += f"&before={last_message_id}"
            try:
                resp = session.get(f"https://discord.com/api/v10{url_endpoint}", headers=headers)
                resp.raise_for_status()
                message_batch = resp.json()
            except Exception:
                message_batch = []
            if not message_batch:
                break
            all_messages.extend(message_batch)
            if message_limit and len(all_messages) >= message_limit:
                queue_logger(f"   Reached message limit of {message_limit}. Halting fetch.")
                all_messages = all_messages[:message_limit]
                break
            last_message_id = message_batch[-1]['id']
            queue_progress_label_update(f"Fetched {len(all_messages)} messages...")
            time.sleep(1)
        if message_limit and len(all_messages) >= message_limit:
            break

    queue_progress_label_update(f"Collected {len(all_messages)} total messages. Generating PDF...")
    all_messages.reverse()
    if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
        base_path = sys._MEIPASS
    else:
        base_path = self.app_base_dir
    font_path = os.path.join(base_path, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
    success = create_pdf_from_discord_messages(
        all_messages,
        server_name_for_pdf,
        channels_to_process[0].get('name', channel_id) if len(channels_to_process) == 1 else "All Channels",
        output_filepath,
        font_path,
        logger=queue_logger
    )
    if success:
@@ -1267,9 +1413,7 @@ class DownloaderApp(QWidget):
    else:
        queue_progress_label_update(f"❌ PDF export failed. Check log for details.")
    self.finished_signal.emit(0, len(all_messages), self.cancellation_event.is_set(), [])
def save_known_names(self):
    """
@@ -3149,7 +3293,8 @@ class DownloaderApp(QWidget):
        self.use_cookie_checkbox, self.keep_duplicates_checkbox, self.date_prefix_checkbox,
        self.manga_rename_toggle_button, self.manga_date_prefix_input,
        self.multipart_toggle_button, self.custom_folder_input, self.custom_folder_label,
        self.discord_scope_toggle_button
        # --- FIX: REMOVED self.save_discord_as_pdf_btn from this list ---
    ]
    enable_state = not is_specialized
@@ -3190,20 +3335,41 @@ class DownloaderApp(QWidget):
    url_text = self.link_input.text().strip()
    service, _, _ = extract_post_info(url_text)

    # --- FIX: Use two separate flags for better control ---
    # This is true for BOTH kemono.cr/discord and discord.com
    is_any_discord_url = (service == 'discord')
    # This is ONLY true for official discord.com
    is_official_discord_url = 'discord.com' in url_text and is_any_discord_url
    if is_official_discord_url:
        # Show the token input only for the official site
        self.remove_from_filename_label_widget.setText("🔑 Discord Token:")
        self.remove_from_filename_input.setPlaceholderText("Enter your Discord Authorization Token here")
        self.remove_from_filename_input.setEchoMode(QLineEdit.Password)
        saved_token = self.settings.value(DISCORD_TOKEN_KEY, "")
        if saved_token:
            self.remove_from_filename_input.setText(saved_token)
    else:
        # Revert to the standard input for Kemono, Coomer, etc.
        self.remove_from_filename_label_widget.setText(self._tr("remove_words_from_name_label", "✂️ Remove Words from name:"))
        self.remove_from_filename_input.setPlaceholderText(self._tr("remove_from_filename_input_placeholder_text", "e.g., patreon, HD"))
        self.remove_from_filename_input.setEchoMode(QLineEdit.Normal)

    # Handle other specialized downloaders (Bunkr, nhentai, etc.)
    is_saint2 = 'saint2.su' in url_text or 'saint2.pk' in url_text
    is_erome = 'erome.com' in url_text
    is_specialized = service in ['bunkr', 'nhentai', 'hentai2read'] or is_saint2 or is_erome
    self._set_ui_for_specialized_downloader(is_specialized)

    # --- FIX: Show the Scope button for ANY Discord URL (Kemono or official) ---
    self.discord_scope_toggle_button.setVisible(is_any_discord_url)
    if hasattr(self, 'discord_message_limit_input'):
        # Only show the message limit for the official site, as it's an API feature
        self.discord_message_limit_input.setVisible(is_official_discord_url)
    if is_any_discord_url:
        self._update_discord_scope_button_text()
    elif not is_specialized:
        self.download_btn.setText(self._tr("start_download_button_text", "⬇️ Start Download"))

def _update_discord_scope_button_text(self):
@@ -3450,6 +3616,48 @@ class DownloaderApp (QWidget ):
    self._process_next_favorite_download()
    return True
if api_url.strip().lower() in ['hentai2read.com', 'https://hentai2read.com', 'http://hentai2read.com']:
self.log_signal.emit("=" * 40)
self.log_signal.emit("🚀 Hentai2Read batch download mode detected.")
h2r_txt_path = os.path.join(self.app_base_dir, "appdata", "hentai2read.txt")
self.log_signal.emit(f" Looking for batch file at: {h2r_txt_path}")
if not os.path.exists(h2r_txt_path):
QMessageBox.warning(self, "File Not Found", f"To use batch mode, create a file named 'hentai2read.txt' in your 'appdata' folder.\n\nPlace one Hentai2Read gallery URL on each line.")
self.log_signal.emit(f"'hentai2read.txt' not found. Aborting batch download.")
return False
urls_to_download = []
try:
with open(h2r_txt_path, 'r', encoding='utf-8') as f:
for line in f:
found_urls = re.findall(r'https?://hentai2read\.com/[^/]+/\d+/?', line)
if found_urls:
urls_to_download.extend(found_urls)
except Exception as e:
QMessageBox.critical(self, "File Error", f"Could not read 'hentai2read.txt':\n{e}")
self.log_signal.emit(f" ❌ Error reading 'hentai2read.txt': {e}")
return False
if not urls_to_download:
QMessageBox.information(self, "Empty File", "No valid Hentai2Read gallery URLs were found in 'hentai2read.txt'.")
self.log_signal.emit(" 'hentai2read.txt' was found but contained no valid URLs.")
return False
self.log_signal.emit(f" Found {len(urls_to_download)} URLs to process.")
self.favorite_download_queue.clear()
for url in urls_to_download:
self.favorite_download_queue.append({
'url': url,
'name': f"Hentai2Read gallery from batch",
'type': 'post'
})
if not self.is_processing_favorites_queue:
self._process_next_favorite_download()
return True
if not is_restore:
    self._create_initial_session_file(api_url, effective_output_dir_for_run, remaining_queue=self.favorite_download_queue)
@@ -3475,6 +3683,72 @@ class DownloaderApp (QWidget ):
service, id1, id2 = extract_post_info(api_url)
if 'discord.com' in api_url and service == 'discord':
server_id, channel_id = id1, id2
token = self.remove_from_filename_input.text().strip()
output_dir = self.dir_input.text().strip()
if not token or not output_dir:
QMessageBox.critical(self, "Input Error", "A Discord Token and Download Location are required.")
return False
limit_text = self.discord_message_limit_input.text().strip()
message_limit = int(limit_text) if limit_text else None
if message_limit:
self.log_signal.emit(f" Applying message limit: will fetch up to {message_limit} latest messages.")
mode = 'pdf' if self.discord_download_scope == 'messages' else 'files'
# 1. Create the thread object
self.download_thread = DiscordDownloadThread(
mode=mode, session=requests.Session(), token=token, output_dir=output_dir,
server_id=server_id, channel_id=channel_id, url=api_url, limit=message_limit, parent=self
)
# 2. Connect its signals to the main window's functions
self.download_thread.progress_signal.connect(self.handle_main_log)
self.download_thread.progress_label_signal.connect(self.progress_label.setText)
self.download_thread.finished_signal.connect(self.download_finished)
# --- FIX: Start the thread BEFORE updating the UI ---
# 3. Start the download process in the background
self.download_thread.start()
# 4. NOW, update the UI. The app knows a download is active.
self.set_ui_enabled(False)
self._update_button_states_and_connections()
return True
if service == 'hentai2read':
self.log_signal.emit("=" * 40)
self.log_signal.emit(f"🚀 Detected Hentai2Read gallery: {id1}")
if not effective_output_dir_for_run or not os.path.isdir(effective_output_dir_for_run):
QMessageBox.critical(self, "Input Error", "A valid Download Location is required.")
return False
self.set_ui_enabled(False)
self.download_thread = Hentai2readDownloadThread(
base_url="https://hentai2read.com",
manga_slug=id1,
chapter_num=id2,
output_dir=effective_output_dir_for_run,
pause_event=self.pause_event,
parent=self
)
self.download_thread.progress_signal.connect(self.handle_main_log)
self.download_thread.file_progress_signal.connect(self.update_file_progress_display)
self.download_thread.overall_progress_signal.connect(self.update_progress_display)
self.download_thread.finished_signal.connect(
lambda dl, skip, cancelled: self.download_finished(dl, skip, cancelled, [])
)
self.download_thread.start()
self._update_button_states_and_connections()
return True
if service == 'nhentai':
    gallery_id = id1
    self.log_signal.emit("=" * 40)
@@ -3874,11 +4148,11 @@ class DownloaderApp (QWidget ):
msg_box.setIcon(QMessageBox.Warning)
msg_box.setWindowTitle("Manga Mode & Page Range Warning")
msg_box.setText(
-   "You have enabled <b>Manga/Comic Mode</b> and also specified a <b>Page Range</b>.\n\n"
-   "Manga Mode processes posts from oldest to newest across all available pages by default.\n"
-   "If you use a page range, you might miss parts of the manga/comic if it starts before your 'Start Page' or continues after your 'End Page'.\n\n"
-   "However, if you are certain the content you want is entirely within this page range (e.g., a short series, or you know the specific pages for a volume), then proceeding is okay.\n\n"
-   "Do you want to proceed with this page range in Manga Mode?"
+   "You have enabled <b>Renaming Mode</b> with a sequential naming style (<b>Date Based</b> or <b>Title + G.Num</b>) and also specified a <b>Page Range</b>.\n\n"
+   "These modes rely on processing all posts from the beginning to create a correct sequence. "
+   "Using a page range may result in an incomplete or incorrectly ordered download.\n\n"
+   "It is recommended to use these styles without a page range.\n\n"
+   "Do you want to proceed anyway?"
)
proceed_button = msg_box.addButton("Proceed Anyway", QMessageBox.AcceptRole)
cancel_button = msg_box.addButton("Cancel Download", QMessageBox.RejectRole)
@@ -4345,7 +4619,8 @@ class DownloaderApp (QWidget ):
self.discord_scope_toggle_button.setVisible(is_discord)
if hasattr(self, 'save_discord_as_pdf_btn'):
    self.save_discord_as_pdf_btn.setVisible(is_discord)
+if hasattr(self, 'discord_message_limit_input'):
+    self.discord_message_limit_input.setVisible(is_discord)
if is_discord:
    self._update_discord_scope_button_text()
else:
@@ -4910,14 +5185,27 @@ class DownloaderApp (QWidget ):
    self ._handle_favorite_mode_toggle (is_fav_mode_active )

def _handle_pause_resume_action(self):
-   if self ._is_download_active ():
-       self.is_paused = not self.is_paused
-       if self.is_paused:
-           if self .pause_event :self .pause_event .set ()
-           self .log_signal .emit (" Download paused by user. Some settings can now be changed for subsequent operations.")
-       else:
-           if self .pause_event :self .pause_event .clear ()
-           self .log_signal .emit (" Download resumed by user.")
+   # --- FIX: Simplified and corrected the pause/resume logic ---
+   if not self._is_download_active():
+       return
+   # Toggle the main app's pause state tracker
+   self.is_paused = not self.is_paused
+   # Call the correct method on the thread based on the new state
+   if isinstance(self.download_thread, DiscordDownloadThread):
+       if self.is_paused:
+           self.download_thread.pause()
+       else:
+           self.download_thread.resume()
+   else:
+       # Fallback for older download types
+       if self.is_paused:
+           self.pause_event.set()
+       else:
+           self.pause_event.clear()
+   # This call correctly updates the button's text to "Pause" or "Resume"
    self.set_ui_enabled(False)

def _perform_soft_ui_reset (self ,preserve_url =None ,preserve_dir =None ):
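The rewritten handler above delegates pausing to the thread object instead of toggling a shared event from the UI side. The cooperative pattern the worker threads rely on — poll a flag between units of work, block while paused — can be sketched in isolation (class and method names here are illustrative, not the app's actual API):

```python
import time

class PausableWorker:
    """Minimal sketch of the poll-a-flag pause/cancel pattern used by
    the download threads: long-running work calls should_abort()
    between units of work."""

    def __init__(self):
        self.is_paused = False
        self.is_cancelled = False

    def pause(self):
        self.is_paused = True

    def resume(self):
        self.is_paused = False

    def cancel(self):
        self.is_cancelled = True

    def should_abort(self):
        # Block while paused; return True once cancelled.
        if self.is_cancelled:
            return True
        while self.is_paused:
            time.sleep(0.05)
            if self.is_cancelled:
                return True
        return False
```

The key design point mirrored from the diff: `pause()`/`resume()`/`cancel()` only flip flags, so they are safe to call from the GUI thread, while the worker checks the flags at its own safe points.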
@@ -5016,15 +5304,11 @@ class DownloaderApp (QWidget ):
    self ._filter_links_log ()

def cancel_download_button_action(self):
-   """
-   Signals all active download processes to cancel but DOES NOT reset the UI.
-   The UI reset is now handled by the 'download_finished' method.
-   """
-   if self.cancellation_event.is_set():
-       self.log_signal.emit(" Cancellation is already in progress.")
-       return
-   self.log_signal.emit("⚠️ Requesting cancellation of download process...")
-   self.cancellation_event.set()
+   if self._is_download_active() and hasattr(self.download_thread, 'cancel'):
+       self.progress_label.setText(self._tr("status_cancelling", "Cancelling... Please wait."))
+       self.download_thread.cancel()
+   else:
+       # Fallback for other download types
+       self.cancellation_event.set()
    # Update UI to "Cancelling" state
@@ -5064,6 +5348,10 @@ class DownloaderApp (QWidget ):
self.log_signal.emit(" Signaling Erome download thread to cancel.")
self.download_thread.cancel()
if isinstance(self.download_thread, Hentai2readDownloadThread):
self.log_signal.emit(" Signaling Hentai2Read download thread to cancel.")
self.download_thread.cancel()
def _get_domain_for_service(self, service_name: str) -> str:
    """Determines the base domain for a given service."""
    if not isinstance(service_name, str):
@@ -5206,12 +5494,73 @@ class DownloaderApp (QWidget ):
self.is_fetcher_thread_running = False
# --- This is where the post-download action is triggered ---
if not cancelled_by_user and not self.is_processing_favorites_queue:
self._execute_post_download_action()
    self.set_ui_enabled(True)
    self._update_button_states_and_connections()
    self.cancellation_message_logged_this_session = False
    self.active_update_profile = None
finally:
-   pass
+   self.is_finishing = False
self.finish_lock.release()
def _execute_post_download_action(self):
"""Checks the settings and performs the chosen action after downloads complete."""
action = self.settings.value(POST_DOWNLOAD_ACTION_KEY, "off")
if action == "off":
return
elif action == "notify":
QApplication.beep()
self.log_signal.emit("✅ Download complete! Notification sound played.")
return
# --- FIX: Ensure confirm_title is defined before it is used ---
confirm_title = self._tr("action_confirmation_title", "Action After Download")
confirm_text = ""
if action == "sleep":
confirm_text = self._tr("confirm_sleep_text", "All downloads are complete. The computer will now go to sleep.")
elif action == "shutdown":
confirm_text = self._tr("confirm_shutdown_text", "All downloads are complete. The computer will now shut down.")
dialog = CountdownMessageBox(
title=confirm_title,
text=confirm_text,
countdown_seconds=10,
parent_app=self,
parent=self
)
if dialog.exec_() == QDialog.Accepted:
# The rest of the logic only runs if the dialog is accepted (by click or timeout)
self.log_signal.emit(f" Performing post-download action: {action.capitalize()}")
try:
if sys.platform == "win32":
if action == "sleep":
os.system("powercfg -hibernate off")
os.system("rundll32.exe powrprof.dll,SetSuspendState 0,1,0")
os.system("powercfg -hibernate on")
elif action == "shutdown":
os.system("shutdown /s /t 1")
elif sys.platform == "darwin": # macOS
if action == "sleep":
os.system("pmset sleepnow")
elif action == "shutdown":
os.system("osascript -e 'tell app \"System Events\" to shut down'")
else: # Linux
if action == "sleep":
os.system("systemctl suspend")
elif action == "shutdown":
os.system("systemctl poweroff")
except Exception as e:
self.log_signal.emit(f"❌ Failed to execute post-download action '{action}': {e}")
else:
# This block runs if the user clicks "No"
self.log_signal.emit(f" Post-download '{action}' cancelled by user.")
def _handle_keep_duplicates_toggled(self, checked):
    """Shows the duplicate handling dialog when the checkbox is checked."""
@@ -6178,6 +6527,190 @@ class DownloaderApp (QWidget ):
# Use a QTimer to avoid deep recursion and correctly move to the next item.
QTimer.singleShot(100, self._process_next_favorite_download)
class DiscordDownloadThread(QThread):
"""A dedicated QThread for handling all official Discord downloads."""
progress_signal = pyqtSignal(str)
progress_label_signal = pyqtSignal(str)
finished_signal = pyqtSignal(int, int, bool, list)
def __init__(self, mode, session, token, output_dir, server_id, channel_id, url, limit=None, parent=None):
super().__init__(parent)
self.mode = mode
self.session = session
self.token = token
self.output_dir = output_dir
self.server_id = server_id
self.channel_id = channel_id
self.api_url = url
self.message_limit = limit
self.is_cancelled = False
self.is_paused = False
def run(self):
if self.mode == 'pdf':
self._run_pdf_creation()
else:
self._run_file_download()
def cancel(self):
self.progress_signal.emit(" Cancellation signal received by Discord thread.")
self.is_cancelled = True
def pause(self):
self.progress_signal.emit(" Pausing Discord download...")
self.is_paused = True
def resume(self):
self.progress_signal.emit(" Resuming Discord download...")
self.is_paused = False
def _check_events(self):
if self.is_cancelled:
return True
while self.is_paused:
time.sleep(0.5)
if self.is_cancelled:
return True
return False
def _fetch_all_messages(self):
all_messages = []
last_message_id = None
headers = {'Authorization': self.token, 'User-Agent': USERAGENT_FIREFOX}
while True:
if self._check_events(): break
endpoint = f"/channels/{self.channel_id}/messages?limit=100"
if last_message_id:
endpoint += f"&before={last_message_id}"
try:
# This is a blocking call, but it has a timeout
resp = self.session.get(f"https://discord.com/api/v10{endpoint}", headers=headers, timeout=30)
resp.raise_for_status()
message_batch = resp.json()
except Exception as e:
self.progress_signal.emit(f" ❌ Error fetching message batch: {e}")
break
if not message_batch:
break
all_messages.extend(message_batch)
if self.message_limit and len(all_messages) >= self.message_limit:
self.progress_signal.emit(f" Reached message limit of {self.message_limit}. Halting fetch.")
all_messages = all_messages[:self.message_limit]
break
last_message_id = message_batch[-1]['id']
self.progress_label_signal.emit(f"Fetched {len(all_messages)} messages...")
time.sleep(1) # API Rate Limiting
return all_messages
def _run_pdf_creation(self):
# ... (This method remains the same as the previous version)
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Discord PDF export for: {self.api_url}")
self.progress_label_signal.emit("Fetching messages...")
all_messages = self._fetch_all_messages()
if self.is_cancelled:
self.finished_signal.emit(0, 0, True, [])
return
self.progress_label_signal.emit(f"Collected {len(all_messages)} total messages. Generating PDF...")
all_messages.reverse()
base_path = os.path.dirname(sys.executable) if getattr(sys, 'frozen', False) else os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))
font_path = os.path.join(base_path, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
output_filepath = os.path.join(self.output_dir, f"discord_{self.server_id}_{self.channel_id or 'server'}.pdf")
# The PDF generator itself now also checks for events
success = create_pdf_from_discord_messages(
all_messages, self.server_id, self.channel_id,
output_filepath, font_path, logger=self.progress_signal.emit,
cancellation_event=self, pause_event=self
)
if success:
self.progress_label_signal.emit(f"✅ PDF export complete!")
elif not self.is_cancelled:
self.progress_label_signal.emit(f"❌ PDF export failed. Check log for details.")
self.finished_signal.emit(0, len(all_messages), self.is_cancelled, [])
def _run_file_download(self):
# ... (This method remains the same as the previous version)
download_count = 0
skip_count = 0
try:
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Discord download for channel: {self.channel_id}")
self.progress_label_signal.emit("Fetching messages...")
all_messages = self._fetch_all_messages()
if self.is_cancelled:
self.finished_signal.emit(0, 0, True, [])
return
self.progress_label_signal.emit(f"Collected {len(all_messages)} messages. Starting downloads...")
total_attachments = sum(len(m.get('attachments', [])) for m in all_messages)
for message in reversed(all_messages):
if self._check_events(): break
for attachment in message.get('attachments', []):
if self._check_events(): break
file_url = attachment['url']
original_filename = attachment['filename']
filepath = os.path.join(self.output_dir, original_filename)
filename_to_use = original_filename
counter = 1
base_name, extension = os.path.splitext(original_filename)
while os.path.exists(filepath):
filename_to_use = f"{base_name} ({counter}){extension}"
filepath = os.path.join(self.output_dir, filename_to_use)
counter += 1
if filename_to_use != original_filename:
self.progress_signal.emit(f" -> Duplicate name '{original_filename}'. Saving as '{filename_to_use}'.")
try:
self.progress_signal.emit(f" Downloading ({download_count+1}/{total_attachments}): '{filename_to_use}'...")
response = requests.get(file_url, stream=True, timeout=60)
response.raise_for_status()
download_cancelled_mid_file = False
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self._check_events():
download_cancelled_mid_file = True
break
f.write(chunk)
if download_cancelled_mid_file:
self.progress_signal.emit(f" Download cancelled for '{filename_to_use}'. Deleting partial file.")
if os.path.exists(filepath):
os.remove(filepath)
continue
download_count += 1
except Exception as e:
self.progress_signal.emit(f" ❌ Failed to download '{filename_to_use}': {e}")
skip_count += 1
finally:
self.finished_signal.emit(download_count, skip_count, self.is_cancelled, [])
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Discord thread.")
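The attachment loop in `_run_file_download` above avoids overwriting duplicates by appending ` (n)` before the extension until a free name is found. That collision check can be factored into a small standalone helper — a sketch of the scheme, not a function that exists in the app:

```python
import os

def unique_path(directory, filename):
    """Return a path inside `directory` that does not collide with an
    existing file, using the "name (1).ext" convention from the
    Discord attachment loop above."""
    base, ext = os.path.splitext(filename)
    candidate = os.path.join(directory, filename)
    counter = 1
    while os.path.exists(candidate):
        # Try "name (1).ext", "name (2).ext", ... until one is free.
        candidate = os.path.join(directory, f"{base} ({counter}){ext}")
        counter += 1
    return candidate
```

Note the race inherent in check-then-write (`os.path.exists` followed by a later `open`): it is acceptable here because a single thread writes to the output directory.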
class Saint2DownloadThread(QThread):
    """A dedicated QThread for handling saint2.su downloads."""
    progress_signal = pyqtSignal(str)
@@ -6314,7 +6847,7 @@ class EromeDownloadThread(QThread):
        return

    total_files = len(files_to_download)
-   session = requests.Session()
+   session = cloudscraper.create_scraper()

    for i, file_data in enumerate(files_to_download):
        if self.is_cancelled:
@@ -6497,6 +7030,167 @@ class BunkrDownloadThread(QThread):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Bunkr thread.")
class Hentai2readDownloadThread(QThread):
"""
A dedicated QThread for Hentai2Read that uses a two-phase process:
1. Fetch Phase: Scans all chapters to get total image count.
2. Download Phase: Downloads all found images with overall progress.
"""
progress_signal = pyqtSignal(str)
file_progress_signal = pyqtSignal(str, object)
finished_signal = pyqtSignal(int, int, bool)
overall_progress_signal = pyqtSignal(int, int)
def __init__(self, base_url, manga_slug, chapter_num, output_dir, pause_event, parent=None):
super().__init__(parent)
self.base_url = base_url
self.manga_slug = manga_slug
self.start_chapter = int(chapter_num) if chapter_num else 1
self.output_dir = output_dir
self.pause_event = pause_event
self.is_cancelled = False
# Store the original chapter number to detect single-chapter mode
self.original_chapter_num = chapter_num
def _check_pause(self):
if self.is_cancelled: return True
if self.pause_event and self.pause_event.is_set():
self.progress_signal.emit(" Download paused...")
while self.pause_event.is_set():
if self.is_cancelled: return True
time.sleep(0.5)
self.progress_signal.emit(" Download resumed.")
return self.is_cancelled
def run(self):
# --- SETUP ---
is_single_chapter_mode = self.original_chapter_num is not None
self.progress_signal.emit("=" * 40)
self.progress_signal.emit(f"🚀 Starting Hentai2Read Download for: {self.manga_slug}")
session = cloudscraper.create_scraper(
browser={'browser': 'firefox', 'platform': 'windows', 'desktop': True}
)
# --- PHASE 1: FETCH METADATA FOR ALL CHAPTERS ---
self.progress_signal.emit("--- Phase 1: Fetching metadata for all chapters... ---")
all_chapters_to_download = []
chapter_counter = self.start_chapter
while True:
if self._check_pause():
self.finished_signal.emit(0, 0, True)
return
chapter_url = f"{self.base_url}/{self.manga_slug}/{chapter_counter}/"
album_name, files_to_download = fetch_hentai2read_data(chapter_url, self.progress_signal.emit, session)
if not files_to_download:
break # End of series found
all_chapters_to_download.append({
'album_name': album_name,
'files': files_to_download,
'chapter_num': chapter_counter,
'chapter_url': chapter_url
})
if is_single_chapter_mode:
break # If user specified one chapter, only fetch that one
chapter_counter += 1
if self._check_pause():
self.finished_signal.emit(0, 0, True)
return
# --- PHASE 2: CALCULATE TOTALS & START DOWNLOAD ---
if not all_chapters_to_download:
self.progress_signal.emit("❌ No downloadable chapters found for this series.")
self.finished_signal.emit(0, 0, self.is_cancelled)
return
total_images = sum(len(chap['files']) for chap in all_chapters_to_download)
self.progress_signal.emit(f"✅ Fetch complete. Found {len(all_chapters_to_download)} chapter(s) with a total of {total_images} images.")
self.progress_signal.emit("--- Phase 2: Starting image downloads... ---")
self.overall_progress_signal.emit(total_images, 0)
grand_total_dl = 0
grand_total_skip = 0
images_processed = 0
for chapter_data in all_chapters_to_download:
if self._check_pause(): break
chapter_album_name = chapter_data['album_name']
self.progress_signal.emit("-" * 40)
self.progress_signal.emit(f"Downloading Chapter {chapter_data['chapter_num']}: '{chapter_album_name}'")
series_name_raw = chapter_album_name.split(' Chapter')[0]
series_folder_name = clean_folder_name(series_name_raw)
MAX_FOLDER_LEN = 100
if len(series_folder_name) > MAX_FOLDER_LEN:
series_folder_name = series_folder_name[:MAX_FOLDER_LEN].strip()
chapter_part_raw = "Chapter " + str(chapter_data['chapter_num'])
chapter_folder_name = clean_folder_name(chapter_part_raw)
final_save_path = os.path.join(self.output_dir, series_folder_name, chapter_folder_name)
os.makedirs(final_save_path, exist_ok=True)
for file_data in chapter_data['files']:
if self._check_pause(): break
images_processed += 1
filename = file_data.get('filename')
filepath = os.path.join(final_save_path, filename)
if os.path.exists(filepath):
self.progress_signal.emit(f" -> Skip ({images_processed}/{total_images}): '{filename}' already exists.")
grand_total_skip += 1
continue
self.progress_signal.emit(f" Downloading ({images_processed}/{total_images}): '{filename}'...")
download_successful = False
for attempt in range(3):
if self._check_pause(): break
try:
headers = {'Referer': chapter_data['chapter_url']}
response = session.get(file_data.get('url'), stream=True, timeout=60, headers=headers)
response.raise_for_status()
with open(filepath, 'wb') as f:
for chunk in response.iter_content(chunk_size=8192):
if self._check_pause(): break
f.write(chunk)
if not self._check_pause():
download_successful = True
break
except (requests.exceptions.RequestException, ConnectionResetError):
if attempt < 2: time.sleep(2 * (attempt + 1))
if self._check_pause(): break
if download_successful:
grand_total_dl += 1
else:
self.progress_signal.emit(f" ❌ Download failed for '{filename}' after 3 attempts. Skipping.")
if os.path.exists(filepath): os.remove(filepath)
grand_total_skip += 1
self.overall_progress_signal.emit(total_images, images_processed)
time.sleep(random.uniform(0.2, 0.7))
if not is_single_chapter_mode:
time.sleep(random.uniform(1.5, 4.0))
self.file_progress_signal.emit("", None)
self.finished_signal.emit(grand_total_dl, grand_total_skip, self.is_cancelled)
def cancel(self):
self.is_cancelled = True
self.progress_signal.emit(" Cancellation signal received by Hentai2Read thread.")
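The per-image loop in `Hentai2readDownloadThread.run` above retries each request up to three times with a growing delay (`time.sleep(2 * (attempt + 1))` between failures). The retry scheme in isolation — with the delay made a parameter for illustration, and the builtin `ConnectionError` standing in for the `requests` exceptions the real thread catches:

```python
import time

def fetch_with_retries(fetch, attempts=3, delay_base=2):
    """Call fetch() up to `attempts` times, sleeping 2s, then 4s, ...
    between failures, mirroring the download loop above.
    Returns the result, or None if every attempt raised."""
    for attempt in range(attempts):
        try:
            return fetch()
        except ConnectionError:
            # Linearly growing backoff before the next try.
            if attempt < attempts - 1:
                time.sleep(delay_base * (attempt + 1))
    return None
```

Returning `None` on exhaustion matches the thread's behavior of marking the file as skipped and deleting any partial write rather than raising.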
class ExternalLinkDownloadThread (QThread ):
    """A QThread to handle downloading multiple external links sequentially."""
    progress_signal =pyqtSignal (str )

View File

@@ -138,22 +138,10 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
return None
# In src/utils/network_utils.py
def extract_post_info(url_string):
    """
    Parses a URL string to extract the service, user ID, and post ID.
-   UPDATED to support Discord, Bunkr, and nhentai URLs.
-   Args:
-       url_string (str): The URL to parse.
-   Returns:
-       tuple: A tuple containing (service, id1, id2).
-              For posts: (service, user_id, post_id).
-              For Discord: ('discord', server_id, channel_id).
-              For Bunkr: ('bunkr', full_url, None).
-              For nhentai: ('nhentai', gallery_id, None).
+   UPDATED to support Hentai2Read series and chapters.
    """
    if not isinstance(url_string, str) or not url_string.strip():
        return None, None, None
@@ -172,6 +160,18 @@ def extract_post_info(url_string):
if nhentai_match:
    return 'nhentai', nhentai_match.group(1), None
# --- Hentai2Read Check (Updated) ---
# This regex now captures the manga slug (id1) and optionally the chapter number (id2)
hentai2read_match = re.search(r'hentai2read\.com/([^/]+)(?:/(\d+))?/?', stripped_url)
if hentai2read_match:
manga_slug, chapter_num = hentai2read_match.groups()
return 'hentai2read', manga_slug, chapter_num # chapter_num will be None for series URLs
discord_channel_match = re.search(r'discord\.com/channels/(@me|\d+)/(\d+)', stripped_url)
if discord_channel_match:
server_id, channel_id = discord_channel_match.groups()
return 'discord', server_id, channel_id
# --- Kemono/Coomer/Discord Parsing ---
try:
    parsed_url = urlparse(stripped_url)
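The two patterns added above can be exercised in isolation. A minimal sketch of the new matching behavior (a simplified stand-in for `extract_post_info`, not the full function):

```python
import re

def classify(url):
    # Hentai2Read: manga slug plus optional chapter number
    # (the chapter group is None for series URLs).
    m = re.search(r'hentai2read\.com/([^/]+)(?:/(\d+))?/?', url)
    if m:
        return ('hentai2read',) + m.groups()
    # Official Discord: server ID (or @me) and channel ID.
    m = re.search(r'discord\.com/channels/(@me|\d+)/(\d+)', url)
    if m:
        return ('discord',) + m.groups()
    return (None, None, None)
```

Ordering matters, as in the diff: the Hentai2Read check runs before the generic Kemono/Coomer parsing so that its slug is not misread as a service/user pair.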

View File

@@ -284,7 +284,7 @@ def setup_ui(main_app):
advanced_row2_layout.addLayout(multithreading_layout)
main_app.external_links_checkbox = QCheckBox("Show External Links in Log")
advanced_row2_layout.addWidget(main_app.external_links_checkbox)
-main_app.manga_mode_checkbox = QCheckBox("Manga/Comic Mode")
+main_app.manga_mode_checkbox = QCheckBox("Renaming Mode")
advanced_row2_layout.addWidget(main_app.manga_mode_checkbox)
advanced_row2_layout.addStretch(1)
checkboxes_group_layout.addLayout(advanced_row2_layout)
@@ -391,10 +391,23 @@ def setup_ui(main_app):
main_app.link_search_button.setVisible(False)
main_app.link_search_button.setFixedWidth(int(30 * scale))
log_title_layout.addWidget(main_app.link_search_button)
+discord_controls_layout = QHBoxLayout()
main_app.discord_scope_toggle_button = QPushButton("Scope: Files")
main_app.discord_scope_toggle_button.setVisible(False)  # Hidden by default
-main_app.discord_scope_toggle_button.setFixedWidth(int(140 * scale))
-log_title_layout.addWidget(main_app.discord_scope_toggle_button)
+discord_controls_layout.addWidget(main_app.discord_scope_toggle_button)
main_app.discord_message_limit_input = QLineEdit(main_app)
main_app.discord_message_limit_input.setPlaceholderText("Msg Limit")
main_app.discord_message_limit_input.setToolTip("Optional: Limit the number of recent messages to process.")
main_app.discord_message_limit_input.setValidator(QIntValidator(1, 9999999, main_app))
main_app.discord_message_limit_input.setFixedWidth(int(80 * scale))
main_app.discord_message_limit_input.setVisible(False) # Hide it by default
discord_controls_layout.addWidget(main_app.discord_message_limit_input)
log_title_layout.addLayout(discord_controls_layout)
main_app.manga_rename_toggle_button = QPushButton()
main_app.manga_rename_toggle_button.setVisible(False)
main_app.manga_rename_toggle_button.setFixedWidth(int(140 * scale))