26 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Yuvi9587 | cc3565b12b | Commit | 2025-08-27 07:21:30 -07:00 |
| Yuvi9587 | f8b150dfdb | commit | 2025-08-17 08:43:27 -07:00 |
| Yuvi9587 | 5f7b526852 | Commit | 2025-08-17 05:51:25 -07:00 |
| Yuvi9587 | b0a6c264e1 | Commit | 2025-08-15 20:22:40 -07:00 |
| Yuvi9587 | d9364f4f91 | commit | 2025-08-14 09:48:55 -07:00 |
| Yuvi9587 | 9cd48bb63a | Update main_window.py | 2025-08-13 19:49:10 -07:00 |
| Yuvi9587 | d0f11c4a06 | Commit | 2025-08-13 19:38:33 -07:00 |
| Yuvi9587 | 26fa3b9bc1 | Commit | 2025-08-10 09:16:31 -07:00 |
| Yuvi9587 | f7c4d892a8 | commit | 2025-08-07 21:42:04 -07:00 |
| Yuvi9587 | 661b97aa16 | Commit | 2025-08-06 06:56:49 -07:00 |
| Yuvi9587 | 3704fece2b | Update main_window.py | 2025-08-04 04:53:52 -07:00 |
| Yuvi9587 | bdb7ac93c4 | Update readme.md | 2025-08-03 09:16:25 -07:00 |
| Yuvi9587 | 76d4a3ea8a | Update main_window.py | 2025-08-03 09:15:01 -07:00 |
| Yuvi9587 | ccc7804505 | Update readme.md | 2025-08-03 09:13:47 -07:00 |
| Yuvi9587 | 4ee750c5d4 | Update drive_downloader.py | 2025-08-03 09:11:27 -07:00 |
| Yuvi9587 | e9be13c4e3 | Update readme.md | 2025-08-03 09:07:29 -07:00 |
| Yuvi9587 | a5cb04ea6f | Update features.md | 2025-08-03 06:46:30 -07:00 |
| Yuvi9587 | 842f18d70d | Update features.md | 2025-08-03 06:32:32 -07:00 |
| Yuvi9587 | fb3f0e8913 | Update features.md | 2025-08-03 06:11:05 -07:00 |
| Yuvi9587 | 0758887154 | Update features.md | 2025-08-03 06:07:05 -07:00 |
| Yuvi9587 | e752d881e7 | Update features.md | 2025-08-03 06:01:32 -07:00 |
| Yuvi9587 | a776d1abe9 | Update features.md | 2025-08-03 06:01:15 -07:00 |
| Yuvi9587 | 21d1ce4fa9 | Commit | 2025-08-03 05:46:51 -07:00 |
| Yuvi9587 | d5112a25ee | Commit | 2025-08-01 09:42:10 -07:00 |
| Yuvi9587 | 791ce503ff | Update main_window.py | 2025-08-01 07:57:32 -07:00 |
| Yuvi9587 | e5b519d5ce | Commit | 2025-08-01 06:33:36 -07:00 |
21 changed files with 3752 additions and 1118 deletions


@@ -1,147 +1,391 @@
<div>
<h1>Kemono Downloader - Comprehensive Feature Guide</h1>
<p>This guide provides a detailed overview of all user interface elements, input fields, buttons, popups, and functionalities available in the application.</p>
<hr>
<h2><strong>Main Window: Core Functionality</strong></h2>
<p>The application is divided into a configuration panel on the left and a status/log panel on the right.</p>
<h3><strong>Primary Inputs (Top-Left)</strong></h3>
<ul>
<li><strong>URL Input Field</strong>: This is the starting point for most downloads. You can paste a URL for a specific post or for an entire creator's feed. The application's behavior adapts based on the URL type.</li>
<li><strong>🎨 Creator Selection Popup</strong>: This button opens a powerful dialog listing all known creators. From here, you can:
<ul>
<li><strong>Search and Queue</strong>: Search for creators and check multiple names. Clicking "Add Selected" populates the main input field, preparing a batch download.</li>
<li><strong>Check for Updates</strong>: Select a single creator's saved profile. This loads their information and switches the main download button to "Check for Updates" mode, allowing you to download only new content since your last session.</li>
</ul>
</li>
<li><strong>Download Location</strong>: The primary folder where all content will be saved. The <strong>Browse...</strong> button lets you select this folder from your computer.</li>
<li><strong>Page Range (Start/End)</strong>: These fields activate only for creator feed URLs. They allow you to download a specific slice of a creator's history (e.g., pages 5 through 10) instead of their entire feed.</li>
</ul>
<hr>
<h2><strong>Filtering & Naming (Left Panel)</strong></h2>
<p>These features give you precise control over what gets downloaded and how it's named and organized.</p>
<ul>
<li><strong>Filter by Character(s)</strong>: A powerful tool to download content featuring specific characters. You can enter multiple names separated by commas.
<ul>
<li><strong>Filter: [Scope] Button</strong>: This button changes how the character filter works:
<ul>
<li><strong>Title</strong>: Downloads posts only if a character's name is in the post title.</li>
<li><strong>Files</strong>: Downloads posts if a character's name is in any of the filenames within the post.</li>
<li><strong>Both</strong>: Combines the "Title" and "Files" logic.</li>
<li><strong>Comments (Beta)</strong>: Downloads a post if a character's name is mentioned in the comments section.</li>
</ul>
</li>
</ul>
</li>
<li><strong>Skip with Words</strong>: A keyword-based filter to avoid unwanted content (e.g., <code>WIP</code>, <code>sketch</code>).
<ul>
<li><strong>Scope: [Type] Button</strong>: This button changes how the skip filter works:
<ul>
<li><strong>Posts</strong>: Skips the entire post if a keyword is found in the title.</li>
<li><strong>Files</strong>: Skips only individual files if a keyword is found in the filename.</li>
<li><strong>Both</strong>: Applies both levels of skipping.</li>
</ul>
</li>
</ul>
</li>
<li><strong>Remove Words from name</strong>: Automatically cleans downloaded filenames by removing any specified words (e.g., "patreon," "HD").</li>
</ul>
<h3><strong>File Type Filter (Radio Buttons)</strong></h3>
<p>This section lets you choose the kind of content you want:</p>
<ul>
<li><strong>All, Images/GIFs, Videos, 🎧 Only Audio, 📦 Only Archives</strong>: These options filter the downloads to only include the selected file types.</li>
<li><strong>🔗 Only Links</strong>: This special mode doesn't download any files. Instead, it scans post descriptions and lists all external links (like Mega, Google Drive) in the log panel.</li>
<li><strong>More</strong>: Opens a dialog for text-only downloads. You can choose to save post <strong>descriptions</strong> or <strong>comments</strong> as formatted <strong>PDF, DOCX, or TXT</strong> files. A key feature here is the <strong>"Single PDF"</strong> option, which compiles the text from all downloaded posts into one continuous, sorted PDF document.</li>
</ul>
<hr>
<h2><strong>Download Options & Advanced Settings (Checkboxes)</strong></h2>
<ul>
<li><strong>Skip .zip</strong>: A simple toggle to ignore archive files during downloads.</li>
<li><strong>Download Thumbnails Only</strong>: Downloads only the small preview images instead of the full-resolution files.</li>
<li><strong>Scan Content for Images</strong>: A crucial feature that scans the post's text content for embedded images that may not be listed in the API, ensuring a more complete download.</li>
<li><strong>Compress to WebP</strong>: Saves disk space by automatically converting large images into the efficient WebP format.</li>
<li><strong>Keep Duplicates</strong>: Opens a dialog to control how files with identical content are handled. The default is to skip duplicates, but you can choose to keep all of them or set a specific limit (e.g., "keep up to 2 copies of the same file").</li>
<li><strong>Subfolder per Post</strong>: Organizes downloads by creating a unique folder for each post, named after the post's title.</li>
<li><strong>Date Prefix</strong>: When "Subfolder per Post" is on, this adds the post's date to the beginning of the folder name (e.g., <code>2025-07-25 Post Title</code>).</li>
<li><strong>Separate Folders by Known.txt</strong>: This enables the automatic folder organization system based on your "Known Names" list.</li>
<li><strong>Use Cookie</strong>: Allows the application to use browser cookies to access content that might be behind a paywall or login. You can paste a cookie string directly or use <strong>Browse...</strong> to select a <code>cookies.txt</code> file.</li>
<li><strong>Use Multithreading</strong>: Greatly speeds up downloads of creator feeds by processing multiple posts at once. The number of <strong>Threads</strong> can be configured.</li>
<li><strong>Show External Links in Log</strong>: When checked, a secondary log panel appears at the bottom of the right side, dedicated to listing any external links found.</li>
</ul>
<hr>
<h2><strong>Known Names Management (Bottom-Left)</strong></h2>
<p>This powerful feature automates the creation of organized, named folders.</p>
<ul>
<li><strong>Known Shows/Characters List</strong>: Displays all the names and groups you've saved.</li>
<li><strong>Search...</strong>: Filters the list to quickly find a name.</li>
<li><strong>Open Known.txt</strong>: Opens the source file in a text editor for advanced manual editing.</li>
<li><strong>Add New Name</strong>:
<ul>
<li><strong>Single Name</strong>: Typing <code>Tifa Lockhart</code> and clicking <strong>Add</strong> creates an entry that will match "Tifa Lockhart".</li>
<li><strong>Group</strong>: Typing <code>(Boa, Hancock, Snake Princess)~</code> and clicking <strong>Add</strong> creates a single entry named "Boa Hancock Snake Princess". The application will then look for "Boa," "Hancock," OR "Snake Princess" in titles/filenames and save any matches into that combined folder (see the sketch after this list).</li>
</ul>
</li>
<li><strong>⤵️ Add to Filter</strong>: Opens a dialog with your full Known Names list, allowing you to check multiple entries and add them all to the "Filter by Character(s)" field at once.</li>
<li><strong>🗑️ Delete Selected</strong>: Removes highlighted names from your list.</li>
</ul>
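<p>The grouped-name syntax above lends itself to a simple parser. Below is a minimal sketch, assuming one entry per <code>Known.txt</code> line; the helper names are illustrative, not the application's actual code:</p>
<pre><code>
def parse_known_entry(entry: str):
    """Return (folder_name, aliases) for one Known.txt line."""
    entry = entry.strip()
    if entry.startswith("(") and entry.endswith(")~"):
        aliases = [a.strip() for a in entry[1:-2].split(",") if a.strip()]
        return " ".join(aliases), aliases      # e.g. "Boa Hancock Snake Princess"
    return entry, [entry]                      # a single name matches itself

def matching_folder(text: str, known_entries):
    """Return the folder name of the first entry whose alias appears in text."""
    lowered = text.lower()
    for entry in known_entries:
        folder, aliases = parse_known_entry(entry)
        if any(alias.lower() in lowered for alias in aliases):
            return folder
    return None

print(matching_folder("Snake Princess beach set",
                      ["Tifa Lockhart", "(Boa, Hancock, Snake Princess)~"]))
# prints: Boa Hancock Snake Princess
</code></pre>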
<hr>
<h2><strong>Action Buttons & Status Controls</strong></h2>
<ul>
<li><strong>⬇️ Start Download / 🔗 Extract Links</strong>: The main action button. Its function is dynamic:
<ul>
<li><strong>Normal Mode</strong>: Starts the download based on the current settings.</li>
<li><strong>Update Mode</strong>: After selecting a creator profile, this button changes to <strong>🔄 Check for Updates</strong>.</li>
<li><strong>Update Confirmation</strong>: After new posts are found, it changes to <strong>⬇️ Start Download (X new)</strong>.</li>
<li><strong>Link Extraction Mode</strong>: The text changes to <strong>🔗 Extract Links</strong>.</li>
</ul>
</li>
<li><strong>⏸️ Pause / ▶️ Resume Download</strong>: Pauses the ongoing download, allowing you to change certain settings (like filters) on the fly. Click again to resume.</li>
<li><strong>❌ Cancel & Reset UI</strong>: Immediately stops all download activity and resets the UI to a clean state, preserving your URL and Download Location inputs.</li>
<li><strong>Error Button</strong>: If files fail to download, they are logged. This button opens a dialog listing all failed files and will show a count of errors (e.g., <strong>(5) Error</strong>). From the dialog, you can:
<ul>
<li>Select specific files to <strong>Retry</strong> downloading.</li>
<li><strong>Export</strong> the list of failed URLs to a <code>.txt</code> file.</li>
</ul>
</li>
<li><strong>🔄 Reset (Top-Right)</strong>: A hard reset that clears all logs and returns every single UI element to its default state.</li>
<li><strong>⚙️ (Settings)</strong>: Opens the main Settings dialog.</li>
<li><strong>📜 (History)</strong>: Opens the Download History dialog.</li>
<li><strong>? (Help)</strong>: Opens a helpful guide explaining the application's features.</li>
<li><strong>❤️ Support</strong>: Opens a dialog with information on how to support the developer.</li>
</ul>
<hr>
<h2><strong>Specialized Modes & Features</strong></h2>
<h3><strong>⭐ Favorite Mode</strong></h3>
<p>Activating this mode transforms the UI for managing saved collections:</p>
<ul>
<li>The URL input is disabled.</li>
<li>The main action buttons are replaced with:
<ul>
<li><strong>🖼️ Favorite Artists</strong>: Opens a dialog to browse and queue downloads from your saved favorite creators.</li>
<li><strong>📄 Favorite Posts</strong>: Opens a dialog to browse and queue downloads for specific saved favorite posts.</li>
</ul>
</li>
<li><strong>Scope: [Location] Button</strong>: Toggles where the favorited content is saved:
<ul>
<li><strong>Selected Location</strong>: Saves all content directly into the main "Download Location".</li>
<li><strong>Artist Folders</strong>: Creates a subfolder for each artist inside the main "Download Location".</li>
</ul>
</li>
</ul>
<h3><strong>📖 Manga/Comic Mode</strong></h3>
<p>This mode is designed for sequential content and has several effects:</p>
<ul>
<li><strong>Reverses Download Order</strong>: It fetches and downloads posts from <strong>oldest to newest</strong>.</li>
<li><strong>Enables Special Naming</strong>: A <strong><code>Name: [Style]</code></strong> button appears, allowing you to choose how files are named to maintain their correct order (e.g., by Post Title, by Date, or simple sequential numbering like <code>001, 002, 003...</code>).</li>
<li><strong>Disables Multithreading (for certain styles)</strong>: To guarantee perfect sequential numbering, multithreading for posts is automatically disabled for certain naming styles.</li>
</ul>
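<p>A minimal sketch of the oldest-first ordering and sequential numbering described for Manga/Comic Mode. The <code>published</code> field mirrors the post data returned by the API; everything else here is illustrative:</p>
<pre><code>
# Fetch everything first, then sort oldest-to-newest so pages stay in order.
posts = [
    {"id": "98765", "title": "Chapter 2", "published": "2025-07-20T10:00:00"},
    {"id": "98700", "title": "Chapter 1", "published": "2025-07-01T09:00:00"},
]
posts.sort(key=lambda p: p["published"])

for index, post in enumerate(posts, start=1):
    filename = f"{index:03d}.jpg"   # "001.jpg", "002.jpg", ...
    print(post["title"], "->", filename)
</code></pre>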
<h3><strong>Session & Error Management</strong></h3>
<ul>
<li><strong>Session Restore</strong>: If the application is closed unexpectedly during a download, it will detect the incomplete session on the next launch. The UI will present a <strong>🔄 Restore Download</strong> button to resume exactly where you left off. You can also choose to discard the session.</li>
<li><strong>Update Checking</strong>: By selecting a creator profile via the <strong>🎨 Creator Selection Popup</strong>, you can run an update check. The application compares the posts on the server with your download history for that creator and will prompt you to download only the new content.</li>
</ul>
<h3><strong>Logging & Monitoring</strong></h3>
<ul>
<li><strong>Progress Log</strong>: The main log provides real-time feedback on the download process, including status messages, file saves, skips, and errors.</li>
<li><strong>👁️ Log View Toggle</strong>: Switches the log view between the standard <strong>Progress Log</strong> and a <strong>Missed Character Log</strong>, which shows potential character names from posts that were skipped by your filters, helping you discover new names to add to your list.</li>
</ul>
<h1>Kemono Downloader - Comprehensive Feature Guide</h1>
<p>This guide provides a detailed overview of all user interface elements, input fields, buttons, popups, and functionalities available in the application.</p>
<hr>
<h2><strong>1. URL Input (🔗)</strong></h2>
<p>This is the primary input field where you specify the content you want to download.</p>
<p><strong>Functionality:</strong></p>
<ul>
<li><strong>Creator URL:</strong> A link to a creator's main page (e.g., https://kemono.su/patreon/user/12345). Downloads all posts from the creator.</li>
<li><strong>Post URL:</strong> A direct link to a specific post (e.g., .../post/98765). Downloads only the specified post.</li>
</ul>
<p><strong>Interaction with Other Features:</strong> The content of this field influences "Manga Mode" and "Page Range". "Page Range" is enabled only with a creator URL.</p>
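<p>As a rough illustration of this URL handling, a single regular expression can tell the two URL types apart. The pattern below is an assumption based on the example URLs, not the application's own parser:</p>
<pre><code>
import re

URL_RE = re.compile(
    r"https?://(?:www\.)?(kemono|coomer)\.\w+"
    r"/(\w+)/user/([^/?#]+)"
    r"(?:/post/([^/?#]+))?"
)

def classify(url):
    """Return ('creator', info) or ('post', info), or None if unrecognised."""
    m = URL_RE.match(url)
    if not m:
        return None
    site, service, user_id, post_id = m.groups()
    info = {"site": site, "service": service, "user_id": user_id, "post_id": post_id}
    return ("post" if post_id else "creator"), info

print(classify("https://kemono.su/patreon/user/12345"))
print(classify("https://kemono.su/patreon/user/12345/post/98765"))
</code></pre>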
<hr>
<h2><strong>2. Creator Selection & Update (🎨)</strong></h2>
<p>The color palette emoji button opens the Creator Selection & Update dialog. This allows managing and downloading from a local creator database.</p>
<p><strong>Functionality:</strong></p>
<ul>
<li><strong>Creator Browser:</strong> Loads a list from <code>creators.json</code>. Search by name, service, or paste a URL to find creators.</li>
<li><strong>Batch Selection:</strong> Select multiple creators and click "Add Selected" to add them to the batch download session.</li>
<li><strong>Update Checker:</strong> Use a saved profile (.json) to download only new content based on previously fetched posts.</li>
<li><strong>Post Fetching & Filtering:</strong> "Fetch Posts" loads post titles, allowing you to choose specific posts for download.</li>
</ul>
<hr>
<h2><strong>3. Download Location Input (📁)</strong></h2>
<p>This input defines the destination directory for downloaded files.</p>
<p><strong>Functionality:</strong></p>
<ul>
<li><strong>Manual Entry:</strong> Enter or paste the folder path.</li>
<li><strong>Browse Button:</strong> Opens a system dialog to choose a folder.</li>
<li><strong>Directory Creation:</strong> If the folder doesn't exist, the app can create it after user confirmation.</li>
</ul>
<hr>
<h2><strong>4. Filter by Character(s) & Scope Button</strong></h2>
<p>Used to download content for specific characters or series and organize them into subfolders.</p>
<p><strong>Input Field (Filter by Character(s)):</strong></p>
<ul>
<li>Enter comma-separated names (e.g., <code>Tifa, Aerith</code>).</li>
<li>Group aliases using parentheses (e.g., <code>(Cloud, Zack)</code>).</li>
<li>Names are matched against titles, filenames, or comments.</li>
<li>If "Separate Folders by Known.txt" is enabled, the name becomes the subfolder name.</li>
</ul>
<p><strong>Scope Button Modes:</strong></p>
<ul>
<li><strong>Filter: Title</strong> (default): Matches names in post titles only.</li>
<li><strong>Filter: Files</strong>: Matches names in filenames only.</li>
<li><strong>Filter: Both</strong>: Tries a title match first, then filenames.</li>
<li><strong>Filter: Comments</strong>: Tries filenames first, then post comments if no match is found.</li>
</ul>
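<p>A minimal sketch of how the four scope modes might be evaluated for a single post. The post structure and helper names are assumptions for illustration only:</p>
<pre><code>
def names_in(text, names):
    lowered = text.lower()
    return any(name.lower() in lowered for name in names)

def post_matches(post, names, scope="title"):
    title_hit = names_in(post.get("title", ""), names)
    file_hit = any(names_in(f, names) for f in post.get("filenames", []))
    if scope == "title":
        return title_hit
    if scope == "files":
        return file_hit
    if scope == "both":                      # title first, then filenames
        return title_hit or file_hit
    if scope == "comments":                  # filenames first, then comments
        return file_hit or names_in(post.get("comments", ""), names)
    return False

post = {"title": "Beach day", "filenames": ["tifa_01.png"], "comments": ""}
print(post_matches(post, ["Tifa", "Aerith"], scope="files"))   # True
</code></pre>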
<hr>
<h2><strong>5. Skip with Words & Scope Button</strong></h2>
<p>Prevents downloading content based on keywords.</p>
<p><strong>Input Field (Skip with Words):</strong></p>
<ul>
<li>Enter comma-separated keywords (e.g., <code>WIP, sketch, preview</code>).</li>
<li>Matching is case-insensitive.</li>
<li>If a keyword matches, the file or post is skipped.</li>
</ul>
<p><strong>Scope Button Modes:</strong></p>
<ul>
<li><strong>Scope: Posts</strong> (default): Skips the entire post if its title contains a keyword.</li>
<li><strong>Scope: Files</strong>: Skips individual files whose names contain a keyword.</li>
<li><strong>Scope: Both</strong>: Skips the entire post if the title matches; otherwise filters out individual matching files.</li>
</ul>
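<p>A minimal sketch of the keyword-skipping logic in its three scopes. The data shapes and names are illustrative assumptions:</p>
<pre><code>
SKIP_WORDS = ["wip", "sketch", "preview"]

def contains_skip_word(text):
    lowered = text.lower()
    return any(word in lowered for word in SKIP_WORDS)

def plan_download(post, scope="posts"):
    """Return the list of filenames that should still be downloaded."""
    files = post.get("filenames", [])
    if scope in ("posts", "both") and contains_skip_word(post.get("title", "")):
        return []                                   # skip the whole post
    if scope in ("files", "both"):
        return [f for f in files if not contains_skip_word(f)]
    return files

post = {"title": "Final piece", "filenames": ["final.png", "wip_02.png"]}
print(plan_download(post, scope="both"))            # ['final.png']
</code></pre>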
</div>
<div>
<h2><strong>Filter File Section (Radio Buttons)</strong></h2>
<p>This section uses a group of radio buttons to control the primary download mode, dictating which types of files are targeted. Only one of these modes can be active at a time.</p>
<ul>
<li>
<strong>All:</strong> Default mode. Downloads every file and attachment provided by the API, regardless of type.
</li>
<li>
<strong>Images/GIFs:</strong> Filters for common image formats (<code>.jpg</code>, <code>.png</code>, <code>.gif</code>, <code>.webp</code>), skipping non-image files.
</li>
<li>
<strong>Videos:</strong> Filters for common video formats like <code>.mp4</code>, <code>.webm</code>, and <code>.mov</code>, skipping all others.
</li>
<li>
<strong>Only Archives:</strong> Downloads only archive files (<code>.zip</code>, <code>.rar</code>). Disables "Compress to WebP" and unchecks "Skip Archives".
</li>
<li>
<strong>Only Audio:</strong> Filters for common audio formats like <code>.mp3</code>, <code>.wav</code>, and <code>.flac</code>.
</li>
<li>
<strong>Only Links:</strong> Extracts external hyperlinks from post descriptions (e.g., Mega, Google Drive) and displays them in the log. Disables all download options.
</li>
<li>
<strong>More:</strong> Opens the "More Options" dialog to download text-based content instead of media files.
<ul>
<li><strong>Scope:</strong> Choose to extract from post description or comments.</li>
<li><strong>Export Format:</strong> Save text as PDF, DOCX, or TXT.</li>
<li><strong>Single PDF:</strong> Optionally compile all text into one PDF.</li>
</ul>
</li>
</ul>
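<p>Behind these radio buttons, the filtering can be thought of as a simple extension check. The sketch below uses the formats named above; any extension not listed in this section is an assumption, and this is not the application's exact implementation:</p>
<pre><code>
import os

EXTENSIONS = {
    "images":   {".jpg", ".jpeg", ".png", ".gif", ".webp"},
    "videos":   {".mp4", ".webm", ".mov"},
    "archives": {".zip", ".rar"},
    "audio":    {".mp3", ".wav", ".flac"},
}

def wanted(filename, mode):
    if mode == "all":
        return True
    ext = os.path.splitext(filename)[1].lower()
    return ext in EXTENSIONS.get(mode, set())

print(wanted("page_01.png", "images"))   # True
print(wanted("bonus.zip", "images"))     # False
</code></pre>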
<hr>
<h2><strong>Check Box Buttons</strong></h2>
<p>These checkboxes provide additional toggles to refine the download behavior and enable special features.</p>
<ul>
<li>
<strong>⭐ Favorite Mode:</strong> Changes workflow to download from your personal favorites. Disables the URL input.
<ul>
<li><strong>Favorite Artists:</strong> Opens a dialog to select from your favorited creators.</li>
<li><strong>Favorite Posts:</strong> Opens a dialog to select from your favorited posts on Kemono and Coomer.</li>
</ul>
</li>
<li>
<strong>Skip Archives:</strong> When checked, archive files (<code>.zip</code>, <code>.rar</code>) are ignored. Disabled in "Only Archives" mode.
</li>
<li>
<strong>Download Thumbnail Only:</strong> Saves only thumbnail previews, not full-resolution files. Enables "Scan Content for Images".
</li>
<li>
<strong>Scan Content for Images:</strong> Parses post HTML for embedded images not listed in the API. Looks for <code>&lt;img&gt;</code> tags and direct image links.
</li>
<li>
<strong>Compress to WebP:</strong> Converts large images (over 1.5 MB) to WebP format using the Pillow library for space-saving.
</li>
<li>
<strong>Keep Duplicates:</strong> Provides control over duplicate handling via the "Duplicate Handling Options" dialog.
<ul>
<li><strong>Skip by Hash:</strong> (default) Skip files with identical content.</li>
<li><strong>Keep Everything:</strong> Save all files regardless of duplication.</li>
<li><strong>Limit:</strong> Set a limit on how many copies of the same file are saved. A limit of <code>0</code> means no limit.</li>
</ul>
</li>
</ul>
</div>
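<p>As a rough illustration of the "Compress to WebP" behaviour described above, here is a minimal sketch using Pillow. The 1.5 MB threshold comes from the text; the quality setting and the decision to delete the original are assumptions, not the application's exact settings:</p>
<pre><code>
import os
from PIL import Image

SIZE_THRESHOLD = int(1.5 * 1024 * 1024)   # 1.5 MB, per the description above

def maybe_convert_to_webp(path, quality=85):
    """Convert a large image to .webp and return the path that was kept."""
    if os.path.getsize(path) > SIZE_THRESHOLD:
        webp_path = os.path.splitext(path)[0] + ".webp"
        with Image.open(path) as img:
            img.save(webp_path, "WEBP", quality=quality)
        os.remove(path)                     # replace the original with the WebP
        return webp_path
    return path                             # small enough, keep the original
</code></pre>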
<h2><strong>Folder Organization Checkboxes</strong></h2>
<ul>
<li>
<strong>Separate folders by Known.txt:</strong> Automatically organizes downloads into folders based on name matches.
<ul>
<li>Uses "Filter by Character(s)" input first, if available.</li>
<li>Then checks names in <code>Known.txt</code>.</li>
<li>Falls back to extracting from post title.</li>
</ul>
</li>
<li>
<strong>Subfolder per post:</strong> Creates a unique folder per post, using the post's title.
<ul>
<li>Prevents mixing files from multiple posts.</li>
<li>Can be combined with Known.txt-based folders.</li>
<li>Ensures uniqueness (e.g., <code>My Post Title_1</code>).</li>
<li>Automatically removes empty folders.</li>
</ul>
</li>
<li>
<strong>Date prefix:</strong> Enabled only with "Subfolder per post". Prepends the post date (e.g., <code>2025-08-03 My Post Title</code>) for chronological sorting.
</li>
</ul>
<h2><strong>General Functionality Checkboxes</strong></h2>
<ul>
<li>
<strong>Use cookie:</strong> Enables login-based access via cookies.
<ul>
<li>Paste cookie string directly, or browse to select a <code>cookies.txt</code> file.</li>
<li>Cookies are used in all authenticated API requests.</li>
</ul>
</li>
<li>
<strong>Use Multithreading:</strong> Enables parallel downloading of posts.
<ul>
<li>Specify the number of worker threads (e.g., 10).</li>
<li>Disabled for Manga Mode and Only Links mode.</li>
</ul>
</li>
<li>
<strong>Show external links in log:</strong> Adds a secondary log that displays links (e.g., Mega, Dropbox) found in post text.
</li>
<li>
<strong>Manga/Comic mode:</strong> Sorts posts chronologically before download.
<ul>
<li>Ensures correct page order for comics/manga.</li>
</ul>
<strong>Scope Button (Name: ...):</strong> Controls filename style:
<ul>
<li><strong>Name: Post Title</strong> — e.g., <code>Chapter-1.jpg</code></li>
<li><strong>Name: Date + Original</strong> — e.g., <code>2025-08-03_filename.png</code></li>
<li><strong>Name: Date + Title</strong> — e.g., <code>2025-08-03_Chapter-1.jpg</code></li>
<li><strong>Name: Title+G.Num</strong> — e.g., <code>Page_001.jpg</code></li>
<li><strong>Name: Date Based</strong> — e.g., <code>001.jpg</code>, with optional prefix</li>
<li><strong>Name: Post ID</strong> — uses unique post ID as filename</li>
</ul>
</li>
</ul>
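<p>The "Use cookie" option above accepts either a pasted cookie string or a <code>cookies.txt</code> file. A minimal sketch of loading both into a dictionary usable with <code>requests</code>, assuming the standard Netscape cookies.txt layout (this is not the application's own loader):</p>
<pre><code>
def cookies_from_txt(path):
    cookies = {}
    with open(path, "r", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                     # skip comments and blank lines
            parts = line.split("\t")
            if len(parts) == 7:              # domain, flag, path, secure, expiry, name, value
                cookies[parts[5]] = parts[6]
    return cookies

def cookies_from_string(raw):
    pairs = (item.split("=", 1) for item in raw.split(";") if "=" in item)
    return {name.strip(): value.strip() for name, value in pairs}

# Example use with requests (hypothetical call):
# requests.get(url, cookies=cookies_from_txt("cookies.txt"))
</code></pre>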
<h2><strong>Start Download</strong></h2>
<ul>
<li>
<strong>Default State ("⬇️ Start Download"):</strong> When idle, this button gathers all current settings (URL, filters, checkboxes, etc.) and begins the download process via the DownloadManager.
</li>
<li>
<strong>Restore State:</strong> If an interrupted session is detected, the tooltip will indicate that starting a new download will discard previous session progress.
</li>
<li>
<strong>Update Mode (Phase 1 - "🔄 Check For Updates"):</strong> If a creator profile is loaded, clicking this button will fetch the creator's posts and compare them against your saved profile to identify new content.
</li>
<li>
<strong>Update Mode (Phase 2 - "⬇️ Start Download (X new)"):</strong> After new posts are found, the button text updates to reflect the number. Clicking it downloads only the new content.
</li>
</ul>
<h2><strong>Pause / Resume Download</strong></h2>
<ul>
<li>
<strong>While Downloading:</strong> The button toggles between:
<ul>
<li><strong>"⏸️ Pause Download":</strong> Sets a <code>pause_event</code>, which tells all worker threads to halt their current task and wait.</li>
<li><strong>"▶️ Resume Download":</strong> Clears the <code>pause_event</code>, allowing threads to resume their work.</li>
</ul>
</li>
<li>
<strong>While Idle:</strong> The button is disabled.
</li>
<li>
<strong>Restore State:</strong> Changes to "🔄 Restore Download", which resumes the last session from saved data.
</li>
</ul>
<h2><strong>Cancel & Reset UI</strong></h2>
<ul>
<li>
<strong>Functionality:</strong> Stops downloads gracefully using a <code>cancellation_event</code>. Threads finish current tasks before shutting down.
</li>
<li>
<strong>The Soft Reset:</strong> After cancellation is confirmed by background threads, the UI resets via the <code>download_finished</code> function. Input fields (URL and Download Location) are preserved for convenience.
</li>
<li>
<strong>Restore State:</strong> Changes to "🗑️ Discard Session", which deletes <code>session.json</code> and resets the UI.
</li>
<li>
<strong>Update State:</strong> Changes to "🗑️ Clear Selection", unloading the selected creator profile and returning to normal UI state.
</li>
</ul>
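<p>The <code>pause_event</code> and <code>cancellation_event</code> mentioned above are standard threading events. A minimal sketch of how a worker loop might honour them (an illustrative pattern, not the application's exact worker code):</p>
<pre><code>
import threading
import time

pause_event = threading.Event()
cancellation_event = threading.Event()

def worker(tasks):
    for task in tasks:
        while pause_event.is_set():          # "⏸️ Pause Download" sets this event
            if cancellation_event.is_set():
                return
            time.sleep(0.2)
        if cancellation_event.is_set():      # cancel stops after the current task
            return
        print("downloading", task)

threading.Thread(target=worker, args=(["a.png", "b.png"],)).start()
</code></pre>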
<h2><strong>Error Button</strong></h2>
<ul>
<li>
<strong>Error Counter:</strong> Shows how many files failed to download (e.g., <code>(3) Error</code>). Disabled if there are no errors.
</li>
<li>
<strong>Error Dialog:</strong> Clicking opens the "Files Skipped Due to Errors" dialog (defined in <code>ErrorFilesDialog.py</code>), listing all failed files.
</li>
<li>
<strong>Dialog Features:</strong>
<ul>
<li><strong>View Failed Files:</strong> Shows filenames and related post info.</li>
<li><strong>Select and Retry:</strong> Retry selected failed files in a focused download session.</li>
<li><strong>Export URLs:</strong> Save a <code>.txt</code> file of direct download links. Optionally include post metadata with each URL.</li>
</ul>
</li>
</ul>
<h2><strong>"Known Area" and its Controls</strong></h2>
<p>This section, located on the right side of the main window, manages your personal name database (<code>Known.txt</code>), which the app uses to organize downloads into subfolders.</p>
<ul>
<li>
<strong>Open Known.txt:</strong> Opens the <code>Known.txt</code> file in your system's default text editor for manual editing, such as bulk changes or cleanup.
</li>
<li>
<strong>Search character input:</strong> A live search filter that hides any list items not matching your input text. Useful for quickly locating specific names in large lists.
</li>
<li>
<strong>Known Series/Characters Area:</strong> Displays all names currently stored in your <code>Known.txt</code>. These names are used when "Separate folders by Known.txt" is enabled.
</li>
<li>
<strong>Input at bottom & Add button:</strong> Type a new character or series name into the input field, then click "Add". The app checks for duplicates, updates the list, and saves to <code>Known.txt</code>.
</li>
<li>
<strong>Add to Filter:</strong> Opens a dialog showing all entries from <code>Known.txt</code> with checkboxes. You can select one or more to auto-fill the "Filter by Character(s)" field at the top of the app.
</li>
<li>
<strong>Delete Selected:</strong> Select one or more entries from the list and click "🗑️ Delete Selected" to remove them from the app and update <code>Known.txt</code> accordingly.
</li>
</ul>
<h2><strong>Other Buttons</strong></h2>
<ul>
<li>
<strong>(?_?) mark button (Help Guide):</strong> Opens a multi-page help dialog with step-by-step instructions and explanations for all app features. Useful for new users.
</li>
<li>
<strong>History Button:</strong> Opens the Download History dialog (from <code>DownloadHistoryDialog.py</code>), showing:
<ul>
<li>Recently downloaded files</li>
<li>The first few posts processed in the last session</li>
</ul>
This allows for a quick review of recent activity.
</li>
<li>
<strong>Settings Button:</strong> Opens the Settings dialog (from <code>FutureSettingsDialog.py</code>), where you can change app-wide settings such as theme (light/dark) and language.
</li>
<li>
<strong>Support Button:</strong> Opens the Support dialog (from <code>SupportDialog.py</code>), which includes developer info, source links, and donation platforms like Ko-fi or Patreon.
</li>
</ul>
<h2><strong>Log Area Controls</strong></h2>
<p>These controls are located around the main log panel and offer tools for managing downloads, configuring advanced options, and resetting the application.</p>
<ul>
<li>
<strong>Multi-part: OFF</strong><br>
This button acts as both a status indicator and a configuration panel for multi-part downloading (parallel downloading of large files).
<ul>
<li><strong>Function:</strong> Opens the <code>Multipart Download Options</code> dialog (defined in <code>MultipartScopeDialog.py</code>).</li>
<li><strong>Scope Options:</strong> Choose between "Videos Only", "Archives Only", or "Both".</li>
<li><strong>Number of parts:</strong> Set how many simultaneous connections to use (2–16).</li>
<li><strong>Minimum file size:</strong> Set a threshold (MB) below which files are downloaded normally.</li>
<li><strong>Status:</strong> After applying settings, the button's text updates (e.g., <code>Multi-part: Both</code>); otherwise, it resets to <code>Multi-part: OFF</code>.</li>
</ul>
</li>
<li>
<strong>👁️ Eye Emoji Button (Log View Toggle)</strong><br>
Switches between two views in the log panel:
<ul>
<li><strong>👁️ Progress Log View:</strong> Shows real-time download progress, status messages, and errors.</li>
<li><strong>🚫 Missed Character View:</strong> Displays names detected in posts that didn't match the current filter — useful for updating <code>Known.txt</code>.</li>
</ul>
</li>
<li>
<strong>Reset Button</strong><br>
Performs a full "soft reset" of the UI when the application is idle.
<ul>
<li>Clears all inputs (except saved Download Location)</li>
<li>Resets checkboxes, buttons, and logs</li>
<li>Clears counters, queues, and restores the UI to its default state</li>
<li><strong>Note:</strong> This is different from <em>Cancel & Reset UI</em>, which halts active downloads</li>
</ul>
</li>
</ul>
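<p>Multi-part downloading splits one large file into byte ranges fetched in parallel. A minimal sketch using HTTP <code>Range</code> requests; it assumes the server supports ranges and reports a <code>Content-Length</code>, and all names are illustrative:</p>
<pre><code>
import concurrent.futures
import requests

def download_multipart(url, dest, parts=4):
    size = int(requests.head(url, allow_redirects=True).headers["Content-Length"])
    chunk = size // parts

    def fetch(index):
        start = index * chunk
        end = size - 1 if index == parts - 1 else start + chunk - 1
        headers = {"Range": f"bytes={start}-{end}"}
        return requests.get(url, headers=headers, timeout=60).content

    with concurrent.futures.ThreadPoolExecutor(max_workers=parts) as pool:
        pieces = list(pool.map(fetch, range(parts)))
    with open(dest, "wb") as fh:
        for piece in pieces:
            fh.write(piece)
</code></pre>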
<h3><strong>The Progress Log and "Only Links" Mode Controls</strong></h3>
<ul>
<li>
<strong>Standard Mode (Progress Log)</strong><br>
This is the default behavior. The <code>main_log_output</code> field displays:
<ul>
<li>Post processing steps</li>
<li>Download/skipped file notifications</li>
<li>Error messages</li>
<li>Session summaries</li>
</ul>
</li>
<li>
<strong>"Only Links" Mode</strong><br>
When enabled, the log panel switches modes and reveals new controls.
<ul>
<li><strong>📜 Extracted Links Log:</strong> Replaces progress info with a list of found external links (e.g., Mega, Dropbox).</li>
<li><strong>Export Links Button:</strong> Saves the extracted links to a <code>.txt</code> file.</li>
<li><strong>Download Button:</strong> Opens the <code>Download Selected External Links</code> dialog (from <code>DownloadExtractedLinksDialog.py</code>), where you can:
<ul>
<li>View all supported external links</li>
<li>Select which ones to download</li>
<li>Begin download directly from cloud services</li>
</ul>
</li>
<li><strong>Links View Button:</strong> Toggles log display between:
<ul>
<li><strong>🔗 Links View:</strong> Shows all extracted links</li>
<li><strong>⬇️ Progress View:</strong> Shows download progress from external services (e.g., Mega)</li>
</ul>
</li>
</ul>
</li>
</ul>

readme.md

@@ -1,4 +1,4 @@
<h1 align="center">Kemono Downloader v6.0.0</h1>
<h1 align="center">Kemono Downloader </h1>
<div align="center">
@@ -41,108 +41,53 @@ Built with PyQt5, this tool is designed for users who want deep filtering capabi
</div>
<h2><strong>Core Capabilities Overview</strong></h2>
---
<h3><strong>High-Performance Downloading</strong></h3>
<ul>
<li><strong>Multi-threading:</strong> Processes multiple posts simultaneously to greatly accelerate downloads from large creator profiles.</li>
<li><strong>Multi-part Downloading:</strong> Splits large files into chunks and downloads them in parallel to maximize speed.</li>
<li><strong>Resilience:</strong> Supports pausing, resuming, and restoring downloads after crashes or interruptions.</li>
</ul>
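A minimal sketch of the post-level multi-threading described above, using a thread pool; the worker count and function names are illustrative assumptions, not the application's own code:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_post(post_id):
    # ... fetch the post and download its files (placeholder) ...
    return f"done {post_id}"

post_ids = ["101", "102", "103", "104"]
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process_post, pid) for pid in post_ids]
    for future in as_completed(futures):
        print(future.result())
```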
## Feature Overview
<h3><strong>Advanced Filtering & Content Control</strong></h3>
<ul>
<li><strong>Content Type Filtering:</strong> Select whether to download all files or limit to images, videos, audio, or archives only.</li>
<li><strong>Keyword Skipping:</strong> Automatically skips posts or files containing certain keywords (e.g., "WIP", "sketch").</li>
<li><strong>Character Filtering:</strong> Restricts downloads to posts that match specific character or series names.</li>
</ul>
Kemono Downloader offers a range of features to streamline your content downloading experience:
<h3><strong>File Organization & Renaming</strong></h3>
<ul>
<li><strong>Automated Subfolders:</strong> Automatically organizes downloaded files into subdirectories based on character names or per post.</li>
<li><strong>Advanced File Renaming:</strong> Flexible renaming options, especially in Manga Mode, including:
<ul>
<li><strong>Post Title:</strong> Uses the post's title (e.g., <code>Chapter-One.jpg</code>).</li>
<li><strong>Date + Original Name:</strong> Prepends the publication date to the original filename.</li>
<li><strong>Date + Title:</strong> Combines the date with the post title.</li>
<li><strong>Sequential Numbering (Date Based):</strong> Simple sequence numbers (e.g., <code>001.jpg</code>, <code>002.jpg</code>).</li>
<li><strong>Title + Global Numbering:</strong> Uses post title with a globally incrementing number across the session.</li>
<li><strong>Post ID:</strong> Names files using the post's unique ID.</li>
</ul>
</li>
</ul>
- **User-Friendly Interface:** A modern PyQt5 GUI for easy navigation and operation.
<h3><strong>Specialized Modes</strong></h3>
<ul>
<li><strong>Manga/Comic Mode:</strong> Sorts posts chronologically before downloading to ensure pages appear in the correct sequence.</li>
<li><strong>Favorite Mode:</strong> Connects to your account and downloads from your favorites list (artists or posts).</li>
<li><strong>Link Extraction Mode:</strong> Extracts external links from posts for export or targeted downloading.</li>
<li><strong>Text Extraction Mode:</strong> Saves post descriptions or comment sections as <code>PDF</code>, <code>DOCX</code>, or <code>TXT</code> files.</li>
</ul>
- **Flexible Downloading:**
- Download content from Kemono.su (and mirrors) and Coomer.party (and mirrors).
- Supports creator pages (with page range selection) and individual post URLs.
- Standard download controls: Start, Pause, Resume, and Cancel.
- **Powerful Filtering:**
- **Character Filtering:** Filter content by character names. Supports simple comma-separated names and grouped names for shared folders.
- **Keyword Skipping:** Skip posts or files based on specified keywords.
- **Filename Cleaning:** Remove unwanted words or phrases from downloaded filenames.
- **File Type Selection:** Choose to download all files, or limit to images/GIFs, videos, audio, or archives. Can also extract external links only.
- **Customizable Downloads:**
- **Thumbnails Only:** Option to download only small preview images.
- **Content Scanning:** Scan post HTML for `<img>` tags and direct image links, useful for images embedded in descriptions.
- **WebP Conversion:** Convert images to WebP format for smaller file sizes (requires Pillow library).
- **Organized Output:**
- **Automatic Subfolders:** Create subfolders based on character names (from filters or `Known.txt`) or post titles.
- **Per-Post Subfolders:** Option to create an additional subfolder for each individual post.
- **Manga/Comic Mode:**
- Downloads posts from a creator's feed in chronological order (oldest to newest).
- Offers various filename styling options for sequential reading (e.g., post title, original name, global numbering).
- **⭐ Favorite Mode:**
- Directly download from your favorited artists and posts on Kemono.su.
- Requires a valid cookie and adapts the UI for easy selection from your favorites.
- Supports downloading into a single location or artist-specific subfolders.
- **Performance & Advanced Options:**
- **Cookie Support:** Use cookies (paste string or load from `cookies.txt`) to access restricted content.
- **Multithreading:** Configure the number of simultaneous downloads/post processing threads for improved speed.
- **Logging:**
- A detailed progress log displays download activity, errors, and summaries.
- **Multi-language Interface:** Choose from several languages for the UI (English, Japanese, French, Spanish, German, Russian, Korean, Chinese Simplified).
- **Theme Customization:** Selectable Light and Dark themes for user comfort.
---
## ✨ What's New in v6.0.0
This release focuses on providing more granular control over file organization and improving at-a-glance status monitoring.
### New Features
- **Live Error Count on Button**
The **"Error" button** now dynamically displays the number of failed files during a download. Instead of opening the dialog, you can quickly see a live count like `(3) Error`, helping you track issues at a glance.
- **Date Prefix for Post Subfolders**
A new checkbox labeled **"Date Prefix"** is now available in the advanced settings.
When enabled alongside **"Subfolder per Post"**, it prepends the post's upload date to the folder name (e.g., `2025-07-11 Post Title`).
This makes your downloads sortable and easier to browse chronologically.
- **Keep Duplicates Within a Post**
A **"Keep Duplicates"** option has been added to preserve all files from a post — even if some have the same name.
Instead of skipping or overwriting, the downloader will save duplicates with numbered suffixes (e.g., `image.jpg`, `image_1.jpg`, etc.), which is especially useful when the same file name points to different media.
### Bug Fixes
- The downloader now correctly renames large `.part` files when completed, avoiding leftover temp files.
- The list of failed files shown in the Error Dialog is now saved and restored with your session — so no errors get lost if you close the app.
- Your selected download location is remembered, even after pressing the **Reset** button.
- The **Cancel** button is now enabled when restoring a pending session, so you can abort stuck jobs more easily.
- Internal cleanup logs (like "Deleting post cache") are now excluded from the final download summary for clarity.
---
## 📅 Next Update Plans
### 🔖 Post Tag Filtering (Planned for v6.1.0)
A powerful new **"Filter by Post Tags"** feature is planned:
- Filter and download content based on specific post tags.
- Combine tag filtering with current filters (character, file type, etc.).
- Use tag presets to automate frequent downloads.
This will provide **much greater control** over what gets downloaded, especially for creators who use tags consistently.
### 📁 Creator Download History (.json Save)
To streamline incremental downloads, a new system will allow the app to:
- Save a `.json` file with metadata about already-downloaded posts.
- Compare that file on future runs, so only **new** posts are downloaded.
- Avoids duplication and makes regular syncs fast and efficient.
Ideal for users managing large collections or syncing favorites regularly.
---
<h3><strong>Utility & Advanced Features</strong></h3>
<ul>
<li><strong>Cookie Support:</strong> Enables access to subscriber-only content via browser session cookies.</li>
<li><strong>Duplicate Detection:</strong> Prevents saving duplicate files using content-based comparison, with configurable limits.</li>
<li><strong>Image Compression:</strong> Automatically converts large images to <code>.webp</code> to reduce disk usage.</li>
<li><strong>Creator Management:</strong> Built-in creator browser and update checker for downloading only new posts from saved profiles.</li>
<li><strong>Error Handling:</strong> Tracks failed downloads and provides a retry dialog with options to export or redownload missing files.</li>
</ul>
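A minimal sketch of content-based duplicate detection with a configurable copy limit, as described above; hash-based comparison is assumed, and the application's actual bookkeeping may differ:

```python
import hashlib

class DuplicateTracker:
    def __init__(self, limit=1):
        self.limit = limit            # 0 means "no limit" (keep everything)
        self.seen = {}                # content hash -> copies already saved

    def should_save(self, data: bytes) -> bool:
        digest = hashlib.sha256(data).hexdigest()
        count = self.seen.get(digest, 0)
        if self.limit and count >= self.limit:
            return False              # already have enough copies of this content
        self.seen[digest] = count + 1
        return True

tracker = DuplicateTracker(limit=2)   # keep up to 2 copies of identical content
print(tracker.should_save(b"abc"), tracker.should_save(b"abc"), tracker.should_save(b"abc"))
# True True False
```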
## 💻 Installation
@@ -154,7 +99,7 @@ Ideal for users managing large collections or syncing favorites regularly.
### Install Dependencies
```bash
pip install PyQt5 requests Pillow mega.py
pip install PyQt5 requests Pillow mega.py fpdf2 python-docx
```
### Running the Application
@@ -197,7 +142,7 @@ Feel free to fork this repo and submit pull requests for bug fixes, new features
## License
This project is under the Custom Licence
This project is under the MIT Licence
## Star History


@@ -60,6 +60,7 @@ DOWNLOAD_LOCATION_KEY = "downloadLocationV1"
RESOLUTION_KEY = "window_resolution"
UI_SCALE_KEY = "ui_scale_factor"
SAVE_CREATOR_JSON_KEY = "saveCreatorJsonProfile"
FETCH_FIRST_KEY = "fetchAllPostsFirst"
# --- UI Constants and Identifiers ---
HTML_PREFIX = "<!HTML!>"
@@ -97,7 +98,7 @@ FOLDER_NAME_STOP_WORDS = {
"for", "he", "her", "his", "i", "im", "in", "is", "it", "its",
"me", "my", "net", "not", "of", "on", "or", "org", "our",
"s", "she", "so", "the", "their", "they", "this",
"to", "ve", "was", "we", "were", "with", "www", "you", "your",
"to", "ve", "was", "we", "were", "with", "www", "you", "your", "nsfw", "sfw",
# add more according to need
}
@@ -111,7 +112,9 @@ CREATOR_DOWNLOAD_DEFAULT_FOLDER_IGNORE_WORDS = {
"may", "jun", "june", "jul", "july", "aug", "august", "sep", "september",
"oct", "october", "nov", "november", "dec", "december",
"mon", "monday", "tue", "tuesday", "wed", "wednesday", "thu", "thursday",
"fri", "friday", "sat", "saturday", "sun", "sunday"
"fri", "friday", "sat", "saturday", "sun", "sunday", "Pack", "tier", "spoiler",
# add more according to need
}


@@ -1,8 +1,9 @@
import time
import traceback
from urllib.parse import urlparse
import json # Ensure json is imported
import json
import requests
import cloudscraper
from ..utils.network_utils import extract_post_info, prepare_cookies_for_request
from ..config.constants import (
STYLE_DATE_POST_TITLE
@@ -12,7 +13,6 @@ from ..config.constants import (
def fetch_posts_paginated(api_url_base, headers, offset, logger, cancellation_event=None, pause_event=None, cookies_dict=None):
"""
Fetches a single page of posts from the API with robust retry logic.
NEW: Requests only essential fields to keep the response size small and reliable.
"""
if cancellation_event and cancellation_event.is_set():
raise RuntimeError("Fetch operation cancelled by user.")
@@ -33,7 +33,7 @@ def fetch_posts_paginated(api_url_base, headers, offset, logger, cancellation_ev
if cancellation_event and cancellation_event.is_set():
raise RuntimeError("Fetch operation cancelled by user during retry loop.")
log_message = f" Fetching post list: {api_url_base}?o={offset} (Page approx. {offset // 50 + 1})"
log_message = f" Fetching post list: {paginated_url} (Page approx. {offset // 50 + 1})"
if attempt > 0:
log_message += f" (Attempt {attempt + 1}/{max_retries})"
logger(log_message)
@@ -41,9 +41,23 @@ def fetch_posts_paginated(api_url_base, headers, offset, logger, cancellation_ev
try:
response = requests.get(paginated_url, headers=headers, timeout=(15, 60), cookies=cookies_dict)
response.raise_for_status()
response.encoding = 'utf-8'
return response.json()
except requests.exceptions.RequestException as e:
# Handle 403 error on the FIRST page as a rate limit/block
if e.response is not None and e.response.status_code == 403 and offset == 0:
logger(" ❌ Access Denied (403 Forbidden) on the first page.")
logger(" This is likely a rate limit or a Cloudflare block.")
logger(" 💡 SOLUTION: Wait a while, use a VPN, or provide a valid session cookie.")
return [] # Stop the process gracefully
# Handle 400 error as the end of pages
if e.response is not None and e.response.status_code == 400:
logger(f" ✅ Reached end of posts (API returned 400 Bad Request for offset {offset}).")
return []
# Handle all other network errors with a retry
logger(f" ⚠️ Retryable network error on page fetch (Attempt {attempt + 1}): {e}")
if attempt < max_retries - 1:
delay = retry_delay * (2 ** attempt)
@@ -65,26 +79,28 @@ def fetch_posts_paginated(api_url_base, headers, offset, logger, cancellation_ev
raise RuntimeError(f"Failed to fetch page {paginated_url} after all attempts.")
def fetch_single_post_data(api_domain, service, user_id, post_id, headers, logger, cookies_dict=None):
"""
--- NEW FUNCTION ---
Fetches the full data, including the 'content' field, for a single post.
--- MODIFIED FUNCTION ---
Fetches the full data, including the 'content' field, for a single post using cloudscraper.
"""
post_api_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}/post/{post_id}"
logger(f" Fetching full content for post ID {post_id}...")
scraper = cloudscraper.create_scraper()
try:
with requests.get(post_api_url, headers=headers, timeout=(15, 300), cookies=cookies_dict, stream=True) as response:
response.raise_for_status()
response_body = b""
for chunk in response.iter_content(chunk_size=8192):
response_body += chunk
full_post_data = json.loads(response_body)
if isinstance(full_post_data, list) and full_post_data:
return full_post_data[0]
return full_post_data
response = scraper.get(post_api_url, headers=headers, timeout=(15, 300), cookies=cookies_dict)
response.raise_for_status()
full_post_data = response.json()
if isinstance(full_post_data, list) and full_post_data:
return full_post_data[0]
if isinstance(full_post_data, dict) and 'post' in full_post_data:
return full_post_data['post']
return full_post_data
except Exception as e:
logger(f" ❌ Failed to fetch full content for post {post_id}: {e}")
return None
@@ -101,6 +117,7 @@ def fetch_post_comments(api_domain, service, user_id, post_id, headers, logger,
try:
response = requests.get(comments_api_url, headers=headers, timeout=(10, 30), cookies=cookies_dict)
response.raise_for_status()
response.encoding = 'utf-8'
return response.json()
except requests.exceptions.RequestException as e:
raise RuntimeError(f"Error fetching comments for post {post_id}: {e}")
@@ -120,12 +137,18 @@ def download_from_api(
selected_cookie_file=None,
app_base_dir=None,
manga_filename_style_for_sort_check=None,
processed_post_ids=None
):
processed_post_ids=None,
fetch_all_first=False
):
parsed_input_url_for_domain = urlparse(api_url_input)
api_domain = parsed_input_url_for_domain.netloc
headers = {
'User-Agent': 'Mozilla/5.0',
'Accept': 'application/json'
'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
'Referer': f'https://{api_domain}/',
'Accept': 'text/css'
}
if processed_post_ids is None:
processed_post_ids = set()
else:
@@ -137,15 +160,11 @@ def download_from_api(
logger(" Download_from_api cancelled at start.")
return
parsed_input_url_for_domain = urlparse(api_url_input)
api_domain = parsed_input_url_for_domain.netloc
# The code that defined api_domain was moved from here to the top of the function
# --- START: MODIFIED LOGIC ---
# This list is updated to include the new .cr and .st mirrors for validation.
if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
logger(f"⚠️ Unrecognized domain '{api_domain}' from input URL. Defaulting to kemono.su for API calls.")
api_domain = "kemono.su"
# --- END: MODIFIED LOGIC ---
cookies_for_api = None
if use_cookie and app_base_dir:
@@ -159,6 +178,7 @@ def download_from_api(
try:
direct_response = requests.get(direct_post_api_url, headers=headers, timeout=(10, 30), cookies=cookies_for_api)
direct_response.raise_for_status()
direct_response.encoding = 'utf-8'
direct_post_data = direct_response.json()
if isinstance(direct_post_data, list) and direct_post_data:
direct_post_data = direct_post_data[0]
@@ -183,7 +203,8 @@ def download_from_api(
logger("⚠️ Page range (start/end page) is ignored when a specific post URL is provided (searching all pages for the post).")
is_manga_mode_fetch_all_and_sort_oldest_first = manga_mode and (manga_filename_style_for_sort_check != STYLE_DATE_POST_TITLE) and not target_post_id
api_base_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}"
should_fetch_all = fetch_all_first or is_manga_mode_fetch_all_and_sort_oldest_first
api_base_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}/posts"
page_size = 50
if is_manga_mode_fetch_all_and_sort_oldest_first:
logger(f" Manga Mode (Style: {manga_filename_style_for_sort_check if manga_filename_style_for_sort_check else 'Default'} - Oldest First Sort Active): Fetching all posts to sort by date...")
@@ -354,3 +375,4 @@ def download_from_api(
time.sleep(0.6)
if target_post_id and not processed_target_post_flag and not (cancellation_event and cancellation_event.is_set()):
logger(f"❌ Target post {target_post_id} could not be found after checking all relevant pages (final check after loop).")

src/core/bunkr_client.py (new file)

@@ -0,0 +1,249 @@
import logging
import os
import re
import requests
import html
import time
import datetime
import urllib.parse
import json
import random
import binascii
import itertools
class MockMessage:
Directory = 1
Url = 2
Version = 3
class AlbumException(Exception): pass
class ExtractionError(AlbumException): pass
class HttpError(ExtractionError):
def __init__(self, message="", response=None):
self.response = response
self.status = response.status_code if response is not None else 0
super().__init__(message)
class ControlException(AlbumException): pass
class AbortExtraction(ExtractionError, ControlException): pass
try:
re_compile = re._compiler.compile
except AttributeError:
re_compile = re.sre_compile.compile
HTML_RE = re_compile(r"<[^>]+>")
def extr(txt, begin, end, default=""):
try:
first = txt.index(begin) + len(begin)
return txt[first:txt.index(end, first)]
except Exception: return default
def extract_iter(txt, begin, end, pos=None):
try:
index = txt.index
lbeg = len(begin)
lend = len(end)
while True:
first = index(begin, pos) + lbeg
last = index(end, first)
pos = last + lend
yield txt[first:last]
except Exception: return
def split_html(txt):
try: return [html.unescape(x).strip() for x in HTML_RE.split(txt) if x and not x.isspace()]
except TypeError: return []
def parse_datetime(date_string, format="%Y-%m-%dT%H:%M:%S%z", utcoffset=0):
try:
d = datetime.datetime.strptime(date_string, format)
o = d.utcoffset()
if o is not None: d = d.replace(tzinfo=None, microsecond=0) - o
else:
if d.microsecond: d = d.replace(microsecond=0)
if utcoffset: d += datetime.timedelta(0, utcoffset * -3600)
return d
except (TypeError, IndexError, KeyError, ValueError, OverflowError): return None
unquote = urllib.parse.unquote
unescape = html.unescape
# --- From: util.py ---
def decrypt_xor(encrypted, key, base64=True, fromhex=False):
if base64: encrypted = binascii.a2b_base64(encrypted)
if fromhex: encrypted = bytes.fromhex(encrypted.decode())
div = len(key)
return bytes([encrypted[i] ^ key[i % div] for i in range(len(encrypted))]).decode()
def advance(iterable, num):
iterator = iter(iterable)
next(itertools.islice(iterator, num, num), None)
return iterator
def json_loads(s): return json.loads(s)
def json_dumps(obj): return json.dumps(obj, separators=(",", ":"))
# --- From: common.py ---
class Extractor:
def __init__(self, match, logger):
self.log = logger
self.url = match.string
self.match = match
self.groups = match.groups()
self.session = requests.Session()
self.session.headers["User-Agent"] = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0"
@classmethod
def from_url(cls, url, logger):
if isinstance(cls.pattern, str): cls.pattern = re.compile(cls.pattern)
match = cls.pattern.match(url)
return cls(match, logger) if match else None
def __iter__(self): return self.items()
def items(self): yield MockMessage.Version, 1
def request(self, url, method="GET", fatal=True, **kwargs):
tries = 1
while True:
try:
response = self.session.request(method, url, **kwargs)
if response.status_code < 400: return response
msg = f"'{response.status_code} {response.reason}' for '{response.url}'"
except requests.exceptions.RequestException as exc:
msg = str(exc)
self.log.info("%s (retrying...)", msg)
if tries > 4: break
time.sleep(tries)
tries += 1
if not fatal: return None
raise HttpError(msg)
def request_json(self, url, **kwargs):
response = self.request(url, **kwargs)
try: return json_loads(response.text)
except Exception as exc:
self.log.warning("%s: %s", exc.__class__.__name__, exc)
if not kwargs.get("fatal", True): return {}
raise
# --- From: bunkr.py (Adapted) ---
BASE_PATTERN_BUNKR = r"(?:https?://)?(?:[a-zA-Z0-9-]+\.)?(bunkr\.(?:si|la|ws|red|black|media|site|is|to|ac|cr|ci|fi|pk|ps|sk|ph|su)|bunkrr\.ru)"
DOMAINS = ["bunkr.si", "bunkr.ws", "bunkr.la", "bunkr.red", "bunkr.black", "bunkr.media", "bunkr.site"]
CF_DOMAINS = set()
class BunkrAlbumExtractor(Extractor):
category = "bunkr"
root = "https://bunkr.si"
root_dl = "https://get.bunkrr.su"
root_api = "https://apidl.bunkr.ru"
pattern = re.compile(BASE_PATTERN_BUNKR + r"/a/([^/?#]+)")
def __init__(self, match, logger):
super().__init__(match, logger)
domain_match = re.search(BASE_PATTERN_BUNKR, match.string)
if domain_match:
self.root = "https://" + domain_match.group(1)
self.endpoint = self.root_api + "/api/_001_v2"
self.album_id = self.groups[-1]
def items(self):
page = self.request(self.url).text
title = unescape(unescape(extr(page, 'property="og:title" content="', '"')))
items_html = list(extract_iter(page, '<div class="grid-images_box', "</a>"))
album_data = {
"album_id": self.album_id,
"album_name": title,
"count": len(items_html),
}
yield MockMessage.Directory, album_data, {}
for item_html in items_html:
try:
webpage_url = unescape(extr(item_html, ' href="', '"'))
if webpage_url.startswith("/"):
webpage_url = self.root + webpage_url
file_data = self._extract_file(webpage_url)
info = split_html(item_html)
if not file_data.get("name"):
file_data["name"] = info[-3]
yield MockMessage.Url, file_data, {}
except Exception as exc:
self.log.error("%s: %s", exc.__class__.__name__, exc)
def _extract_file(self, webpage_url):
page = self.request(webpage_url).text
data_id = extr(page, 'data-file-id="', '"')
referer = self.root_dl + "/file/" + data_id
headers = {"Referer": referer, "Origin": self.root_dl}
data = self.request_json(self.endpoint, method="POST", headers=headers, json={"id": data_id})
file_url = decrypt_xor(data["url"], f"SECRET_KEY_{data['timestamp'] // 3600}".encode()) if data.get("encrypted") else data["url"]
file_name = extr(page, "<h1", "<").rpartition(">")[2]
return {
"url": file_url,
"name": unescape(file_name),
"_http_headers": {"Referer": referer}
}
class BunkrMediaExtractor(BunkrAlbumExtractor):
pattern = re.compile(BASE_PATTERN_BUNKR + r"(/[fvid]/[^/?#]+)")
def items(self):
try:
media_path = self.groups[-1]
file_data = self._extract_file(self.root + media_path)
album_data = {"album_name": file_data.get("name", "bunkr_media"), "count": 1}
yield MockMessage.Directory, album_data, {}
yield MockMessage.Url, file_data, {}
except Exception as exc:
self.log.error("%s: %s", exc.__class__.__name__, exc)
yield MockMessage.Directory, {"album_name": "error", "count": 0}, {}
# ==============================================================================
# --- PUBLIC API FOR THE GUI ---
# ==============================================================================
def get_bunkr_extractor(url, logger):
"""Selects the correct Bunkr extractor based on the URL pattern."""
if BunkrAlbumExtractor.pattern.match(url):
logger.info("Bunkr Album URL detected.")
return BunkrAlbumExtractor.from_url(url, logger)
elif BunkrMediaExtractor.pattern.match(url):
logger.info("Bunkr Media URL detected.")
return BunkrMediaExtractor.from_url(url, logger)
else:
logger.error(f"No suitable Bunkr extractor found for URL: {url}")
return None
def fetch_bunkr_data(url, logger):
"""
Main function to be called from the GUI.
It extracts all file information from a Bunkr URL.
Returns:
A tuple of (album_name, list_of_files)
- album_name (str): The name of the album.
- list_of_files (list): A list of dicts, each containing 'url', 'name', and '_http_headers'.
Returns (None, None) on failure.
"""
extractor = get_bunkr_extractor(url, logger)
if not extractor:
return None, None
try:
album_name = "default_bunkr_album"
files_to_download = []
for msg_type, data, metadata in extractor:
if msg_type == MockMessage.Directory:
raw_album_name = data.get('album_name', 'untitled')
album_name = re.sub(r'[<>:"/\\|?*]', '_', raw_album_name).strip() or "untitled"
logger.info(f"Processing Bunkr album: {album_name}")
elif msg_type == MockMessage.Url:
# data here is the file_data dictionary
files_to_download.append(data)
if not files_to_download:
logger.warning("No files found to download from the Bunkr URL.")
return None, None
return album_name, files_to_download
except Exception as e:
logger.error(f"An error occurred while extracting Bunkr info: {e}", exc_info=True)
return None, None
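# --- Usage sketch (illustrative only, not part of the module) ---
# A minimal example of how a caller such as the GUI might consume
# fetch_bunkr_data(). The URL is a placeholder and the print loop stands
# in for the application's real download pipeline.
def _example_list_bunkr_album():
    import logging
    logging.basicConfig(level=logging.INFO)
    album_name, files = fetch_bunkr_data("https://bunkr.si/a/EXAMPLE_ALBUM", logging.getLogger("bunkr"))
    if not files:
        return
    for entry in files:
        # Each dict carries 'url', 'name' and '_http_headers' (Referer),
        # exactly as produced by _extract_file() above.
        print(entry["name"], "->", entry["url"])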


@@ -0,0 +1,90 @@
import time
import cloudscraper
import json
def fetch_server_channels(server_id, logger=print, cookies_dict=None):
"""
Fetches all channels for a given Discord server ID from the API.
Uses cloudscraper to bypass Cloudflare.
"""
api_url = f"https://kemono.cr/api/v1/discord/server/{server_id}"
logger(f" Fetching channels for server: {api_url}")
scraper = cloudscraper.create_scraper()
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Referer': f'https://kemono.cr/discord/server/{server_id}',
'Accept': 'text/css'
}
try:
response = scraper.get(api_url, headers=headers, cookies=cookies_dict, timeout=30)
response.raise_for_status()
channels = response.json()
if isinstance(channels, list):
logger(f" ✅ Found {len(channels)} channels for server {server_id}.")
return channels
return None
except Exception as e:
logger(f" ❌ Error fetching server channels for {server_id}: {e}")
return None
def fetch_channel_messages(channel_id, logger=print, cancellation_event=None, pause_event=None, cookies_dict=None):
"""
A generator that fetches all messages for a specific Discord channel, handling pagination.
Uses cloudscraper and proper headers to bypass server protection.
"""
scraper = cloudscraper.create_scraper()
base_url = f"https://kemono.cr/api/v1/discord/channel/{channel_id}"
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Referer': f'https://kemono.cr/discord/channel/{channel_id}',
'Accept': 'text/css'
}
offset = 0
# --- FIX: Corrected the page size for Discord API pagination ---
page_size = 150
# --- END FIX ---
while True:
if cancellation_event and cancellation_event.is_set():
logger(" Discord message fetching cancelled.")
break
if pause_event and pause_event.is_set():
logger(" Discord message fetching paused...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set():
break
time.sleep(0.5)
if not (cancellation_event and cancellation_event.is_set()):
logger(" Discord message fetching resumed.")
paginated_url = f"{base_url}?o={offset}"
logger(f" Fetching messages from API: page starting at offset {offset}")
try:
response = scraper.get(paginated_url, headers=headers, cookies=cookies_dict, timeout=30)
response.raise_for_status()
messages_batch = response.json()
if not messages_batch:
logger(f" ✅ Reached end of messages for channel {channel_id}.")
break
logger(f" Fetched {len(messages_batch)} messages...")
yield messages_batch
if len(messages_batch) < page_size:
logger(f" ✅ Last page of messages received for channel {channel_id}.")
break
offset += page_size
time.sleep(0.5) # Be respectful to the API
except (cloudscraper.exceptions.CloudflareException, json.JSONDecodeError) as e:
logger(f" ❌ Error fetching messages at offset {offset}: {e}")
break
except Exception as e:
logger(f" ❌ An unexpected error occurred while fetching messages: {e}")
break
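# --- Usage sketch (illustrative only) ---
# Shows how the two helpers above can be combined: list a server's channels,
# then stream every message batch per channel. The server ID is a placeholder,
# and the assumption that each channel entry exposes an 'id' field is mine.
def _example_walk_discord_server(server_id="123456789"):
    channels = fetch_server_channels(server_id, logger=print)
    for channel in channels or []:
        channel_id = channel.get("id")
        if not channel_id:
            continue
        for batch in fetch_channel_messages(channel_id, logger=print):
            for message in batch:
                # Attachments, when present, live under the 'attachments' key,
                # which is what the post processor reads for Discord messages.
                print(message.get("id"), len(message.get("attachments") or []))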

src/core/erome_client.py

@@ -0,0 +1,147 @@
# src/core/erome_client.py
import os
import re
import html
import time
import urllib.parse
import requests
from datetime import datetime
# #############################################################################
# SECTION: Utility functions adapted from the original script
# #############################################################################
def extr(txt, begin, end, default=""):
"""Stripped-down version of 'extract()' to find text between two delimiters."""
try:
first = txt.index(begin) + len(begin)
return txt[first:txt.index(end, first)]
except (ValueError, IndexError):
return default
def extract_iter(txt, begin, end):
"""Yields all occurrences of text between two delimiters."""
try:
index = txt.index
lbeg = len(begin)
lend = len(end)
pos = 0
while True:
first = index(begin, pos) + lbeg
last = index(end, first)
pos = last + lend
yield txt[first:last]
except (ValueError, IndexError):
return
def nameext_from_url(url):
"""Extracts filename and extension from a URL."""
data = {}
filename = urllib.parse.unquote(url.partition("?")[0].rpartition("/")[2])
name, _, ext = filename.rpartition(".")
if name and len(ext) <= 16:
data["filename"], data["extension"] = name, ext.lower()
else:
data["filename"], data["extension"] = filename, ""
return data
def parse_timestamp(ts, default=None):
"""Creates a datetime object from a Unix timestamp."""
try:
# Use fromtimestamp for simplicity and compatibility
return datetime.fromtimestamp(int(ts))
except (ValueError, TypeError):
return default
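# Quick demonstration of the helpers above (illustrative, doctest-style):
# extr() returns the first span between two delimiters, extract_iter() yields
# every span, and nameext_from_url() splits a URL's final path component.
# >>> extr('<p id="x">hi</p>', '<p id="', '"')
# 'x'
# >>> list(extract_iter('<a>1</a><a>2</a>', '<a>', '</a>'))
# ['1', '2']
# >>> nameext_from_url('https://example.com/media/clip.MP4?x=1')
# {'filename': 'clip', 'extension': 'mp4'}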
# #############################################################################
# SECTION: Main Erome Fetching Logic
# #############################################################################
def fetch_erome_data(url, logger):
"""
Identifies and extracts all media files from an Erome album URL.
Args:
url (str): The Erome album URL (e.g., https://www.erome.com/a/albumID).
logger (function): A function to log progress messages.
Returns:
tuple: A tuple containing (album_folder_name, list_of_file_dicts).
Returns (None, []) if data extraction fails.
"""
album_id_match = re.search(r"/a/(\w+)", url)
if not album_id_match:
logger(f"Error: The URL '{url}' does not appear to be a valid Erome album link.")
return None, []
album_id = album_id_match.group(1)
page_url = f"https://www.erome.com/a/{album_id}"
session = requests.Session()
session.headers.update({
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36",
"Referer": "https://www.erome.com/"
})
try:
logger(f" Fetching Erome album page: {page_url}")
# Add a loop to handle "Please wait" pages
for attempt in range(5):
response = session.get(page_url, timeout=30)
response.raise_for_status()
page_content = response.text
if "<title>Please wait a few moments</title>" in page_content:
logger(f" Cloudflare check detected. Waiting 5 seconds... (Attempt {attempt + 1}/5)")
time.sleep(5)
continue
break
else:
logger(" Error: Could not bypass Cloudflare check after several attempts.")
return None, []
title = html.unescape(extr(page_content, 'property="og:title" content="', '"'))
user = urllib.parse.unquote(extr(page_content, 'href="https://www.erome.com/', '"', default="unknown_user"))
# Sanitize title and user for folder creation
sanitized_title = re.sub(r'[<>:"/\\|?*]', '_', title).strip()
sanitized_user = re.sub(r'[<>:"/\\|?*]', '_', user).strip()
album_folder_name = f"Erome - {sanitized_user} - {sanitized_title} [{album_id}]"
urls = []
# Split the page content by media groups to find all videos
media_groups = page_content.split('<div class="media-group"')
for group in media_groups[1:]: # Skip the part before the first media group
# Prioritize <source> tag, fall back to data-src for images
video_url = extr(group, '<source src="', '"') or extr(group, 'data-src="', '"')
if video_url:
urls.append(video_url)
if not urls:
logger(" Warning: No media URLs found on the album page.")
return album_folder_name, []
logger(f" Found {len(urls)} media files in album '{title}'.")
file_list = []
for i, file_url in enumerate(urls, 1):
filename_info = nameext_from_url(file_url)
# Create a clean, descriptive filename (fall back to 'mp4' when the URL has no usable extension)
filename = f"{album_id}_{sanitized_title}_{i:03d}.{filename_info.get('extension') or 'mp4'}"
file_data = {
"url": file_url,
"filename": filename,
"headers": {"Referer": page_url},
}
file_list.append(file_data)
return album_folder_name, file_list
except requests.exceptions.RequestException as e:
logger(f" Error fetching Erome page: {e}")
return None, []
except Exception as e:
logger(f" An unexpected error occurred during Erome extraction: {e}")
return None, []
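# --- Usage sketch (illustrative only, placeholder URL) ---
# Fetches the album metadata and downloads each file with requests, passing
# along the Referer header the extractor prepared. A real caller would hand
# the file list to the application's own download pipeline instead.
def _example_download_erome(dest_dir="erome_downloads"):
    folder, files = fetch_erome_data("https://www.erome.com/a/EXAMPLEID", print)
    if not folder or not files:
        return
    target = os.path.join(dest_dir, folder)
    os.makedirs(target, exist_ok=True)
    for item in files:
        resp = requests.get(item["url"], headers=item["headers"], stream=True, timeout=60)
        resp.raise_for_status()
        with open(os.path.join(target, item["filename"]), "wb") as fh:
            for chunk in resp.iter_content(chunk_size=1024 * 1024):
                if chunk:
                    fh.write(chunk)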


@@ -0,0 +1,45 @@
import requests
import cloudscraper
import json
def fetch_nhentai_gallery(gallery_id, logger=print):
"""
Fetches the metadata for a single nhentai gallery using cloudscraper to bypass Cloudflare.
Args:
gallery_id (str or int): The ID of the nhentai gallery.
logger (function): A function to log progress and error messages.
Returns:
dict: A dictionary containing the gallery's metadata if successful, otherwise None.
"""
api_url = f"https://nhentai.net/api/gallery/{gallery_id}"
# Create a cloudscraper instance
scraper = cloudscraper.create_scraper()
logger(f" Fetching nhentai gallery metadata from: {api_url}")
try:
# Use the scraper to make the GET request
response = scraper.get(api_url, timeout=20)
if response.status_code == 404:
logger(f" ❌ Gallery not found (404): ID {gallery_id}")
return None
response.raise_for_status()
gallery_data = response.json()
if "id" in gallery_data and "media_id" in gallery_data and "images" in gallery_data:
logger(f" ✅ Successfully fetched metadata for '{gallery_data['title']['english']}'")
gallery_data['pages'] = gallery_data.pop('images')['pages']
return gallery_data
else:
logger(" ❌ API response is missing essential keys (id, media_id, or images).")
return None
except Exception as e:
logger(f" ❌ An error occurred while fetching gallery {gallery_id}: {e}")
return None
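# --- Usage sketch (illustrative only, placeholder gallery ID) ---
# Reads only the keys fetch_nhentai_gallery() guarantees ('id', 'media_id'
# and 'pages' after the rename above); turning pages into image URLs is left
# to the caller.
def _example_inspect_gallery(gallery_id="123456"):
    data = fetch_nhentai_gallery(gallery_id, logger=print)
    if not data:
        return
    print("Gallery:", data["id"], "| media_id:", data["media_id"])
    print("Pages:", len(data["pages"]))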

src/core/saint2_client.py

@@ -0,0 +1,173 @@
# src/core/saint2_client.py
import os
import re as re_module
import html
import urllib.parse
import requests
# ##############################################################################
# SECTION: Utility functions adapted from the original script
# ##############################################################################
PATTERN_CACHE = {}
def re(pattern):
"""Compile a regular expression pattern and cache it."""
try:
return PATTERN_CACHE[pattern]
except KeyError:
p = PATTERN_CACHE[pattern] = re_module.compile(pattern)
return p
def extract_from(txt, pos=None, default=""):
"""Returns a function that extracts text between two delimiters from 'txt'."""
def extr(begin, end, index=txt.find, txt=txt):
nonlocal pos
try:
start_pos = pos if pos is not None else 0
first = index(begin, start_pos) + len(begin)
last = index(end, first)
if pos is not None:
pos = last + len(end)
return txt[first:last]
except (ValueError, IndexError):
return default
return extr
def nameext_from_url(url):
"""Extract filename and extension from a URL."""
data = {}
filename = urllib.parse.unquote(url.partition("?")[0].rpartition("/")[2])
name, _, ext = filename.rpartition(".")
if name and len(ext) <= 16:
data["filename"], data["extension"] = name, ext.lower()
else:
data["filename"], data["extension"] = filename, ""
return data
# ##############################################################################
# SECTION: Extractor Logic adapted for the main application
# ##############################################################################
class BaseExtractor:
"""A simplified base class for extractors."""
def __init__(self, match, session, logger):
self.match = match
self.groups = match.groups()
self.session = session
self.log = logger
def request(self, url, **kwargs):
"""Makes an HTTP request using the session."""
try:
response = self.session.get(url, **kwargs)
response.raise_for_status()
return response
except requests.exceptions.RequestException as e:
self.log(f"Error making request to {url}: {e}")
return None
class SaintAlbumExtractor(BaseExtractor):
"""Extractor for saint.su albums."""
root = "https://saint2.su"
pattern = re(r"(?:https?://)?saint\d*\.(?:su|pk|cr|to)/a/([^/?#]+)")
def items(self):
"""Generator that yields all files from an album."""
album_id = self.groups[0]
response = self.request(f"{self.root}/a/{album_id}")
if not response:
return None, []
extr = extract_from(response.text)
title = extr("<title>", "<").rpartition(" - ")[0]
self.log(f"Downloading album: {title}")
files_html = re_module.findall(r'<a class="image".*?</a>', response.text, re_module.DOTALL)
file_list = []
for i, file_html in enumerate(files_html, 1):
file_extr = extract_from(file_html)
file_url = html.unescape(file_extr("onclick=\"play('", "'"))
if not file_url:
continue
filename_info = nameext_from_url(file_url)
filename = f"{filename_info['filename']}.{filename_info['extension']}"
file_data = {
"url": file_url,
"filename": filename,
"headers": {"Referer": response.url},
}
file_list.append(file_data)
return title, file_list
class SaintMediaExtractor(BaseExtractor):
"""Extractor for single saint.su media links."""
root = "https://saint2.su"
pattern = re(r"(?:https?://)?saint\d*\.(?:su|pk|cr|to)(/(embe)?d/([^/?#]+))")
def items(self):
"""Generator that yields the single file from a media page."""
path, embed, media_id = self.groups
url = self.root + path
response = self.request(url)
if not response:
return None, []
extr = extract_from(response.text)
file_url = ""
title = extr("<title>", "<").rpartition(" - ")[0] or media_id
if embed: # /embed/ link
file_url = html.unescape(extr('<source src="', '"'))
else: # /d/ link
file_url = html.unescape(extr('<a href="', '"'))
if not file_url:
self.log("Could not find video URL on the page.")
return title, []
filename_info = nameext_from_url(file_url)
filename = f"{filename_info['filename'] or media_id}.{filename_info['extension'] or 'mp4'}"
file_data = {
"url": file_url,
"filename": filename,
"headers": {"Referer": response.url}
}
return title, [file_data]
def fetch_saint2_data(url, logger):
"""
Identifies the correct extractor for a saint2.su URL and returns the data.
Args:
url (str): The saint2.su URL.
logger (function): A function to log progress messages.
Returns:
tuple: A tuple containing (album_title, list_of_file_dicts).
Returns (None, []) if no data could be fetched.
"""
extractors = [SaintMediaExtractor, SaintAlbumExtractor]
session = requests.Session()
session.headers.update({
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
})
for extractor_cls in extractors:
match = extractor_cls.pattern.match(url)
if match:
extractor = extractor_cls(match, session, logger)
album_title, files = extractor.items()
# Sanitize the album title to be a valid folder name
sanitized_title = re_module.sub(r'[<>:"/\\|?*]', '_', album_title) if album_title else "saint2_download"
return sanitized_title, files
logger(f"Error: The URL '{url}' does not match a known saint2 pattern.")
return None, []
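# --- Usage sketch (illustrative only, placeholder URL) ---
# fetch_saint2_data() returns a sanitized folder name plus a list of file
# dicts with 'url', 'filename' and 'headers'; here they are only printed
# rather than handed to the real download pipeline.
def _example_list_saint2_album():
    title, files = fetch_saint2_data("https://saint2.su/a/EXAMPLE_ID", print)
    if not files:
        return
    print(f"Album '{title}' with {len(files)} file(s):")
    for entry in files:
        print(" ", entry["filename"], "<-", entry["url"])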


@@ -15,6 +15,8 @@ from concurrent.futures import ThreadPoolExecutor, as_completed, CancelledError,
from io import BytesIO
from urllib .parse import urlparse
import requests
import cloudscraper
try:
from PIL import Image
except ImportError:
@@ -37,7 +39,7 @@ try:
except ImportError:
Document = None
from PyQt5 .QtCore import Qt ,QThread ,pyqtSignal ,QMutex ,QMutexLocker ,QObject ,QTimer ,QSettings ,QStandardPaths ,QCoreApplication ,QUrl ,QSize ,QProcess
from .api_client import download_from_api, fetch_post_comments
from .api_client import download_from_api, fetch_post_comments, fetch_single_post_data
from ..services.multipart_downloader import download_file_in_parts, MULTIPART_DOWNLOADER_AVAILABLE
from ..services.drive_downloader import (
download_mega_file, download_gdrive_file, download_dropbox_file
@@ -54,6 +56,19 @@ from ..utils.text_utils import (
)
from ..config.constants import *
def robust_clean_name(name):
"""A more robust function to remove illegal characters for filenames and folders."""
if not name:
return ""
illegal_chars_pattern = r'[\x00-\x1f<>:"/\\|?*\']'
cleaned_name = re.sub(illegal_chars_pattern, '', name)
cleaned_name = cleaned_name.strip(' .')
if not cleaned_name:
return "untitled_folder"
return cleaned_name
class PostProcessorSignals (QObject ):
progress_signal =pyqtSignal (str )
file_download_status_signal =pyqtSignal (bool )
@@ -64,7 +79,6 @@ class PostProcessorSignals (QObject ):
worker_finished_signal = pyqtSignal(tuple)
class PostProcessorWorker:
def __init__(self, post_data, download_root, known_names,
filter_character_list, emitter,
unwanted_keywords, filter_mode, skip_zip,
@@ -104,7 +118,11 @@ class PostProcessorWorker:
text_export_format='txt',
single_pdf_mode=False,
project_root_dir=None,
processed_post_ids=None
processed_post_ids=None,
multipart_scope='both',
multipart_parts_count=4,
multipart_min_size_mb=100,
skip_file_size_mb=None
):
self.post = post_data
self.download_root = download_root
@@ -166,7 +184,10 @@ class PostProcessorWorker:
self.single_pdf_mode = single_pdf_mode
self.project_root_dir = project_root_dir
self.processed_post_ids = processed_post_ids if processed_post_ids is not None else []
self.multipart_scope = multipart_scope
self.multipart_parts_count = multipart_parts_count
self.multipart_min_size_mb = multipart_min_size_mb
self.skip_file_size_mb = skip_file_size_mb
if self.compress_images and Image is None:
self.logger("⚠️ Image compression disabled: Pillow library not found.")
self.compress_images = False
@@ -200,8 +221,38 @@ class PostProcessorWorker:
if self .dynamic_filter_holder :
return self .dynamic_filter_holder .get_filters ()
return self .filter_character_list_objects_initial
def _download_single_file(self, file_info, target_folder_path, headers, original_post_id_for_log, skip_event,
def _find_valid_subdomain(self, url: str, max_subdomains: int = 4) -> str:
"""
Attempts to find a working subdomain for a Kemono/Coomer URL that returned a 403 error.
Returns the original URL if no other valid subdomain is found.
"""
self.logger(f" probing for a valid subdomain...")
parsed_url = urlparse(url)
original_domain = parsed_url.netloc
for i in range(1, max_subdomains + 1):
domain_parts = original_domain.split('.')
if len(domain_parts) > 1:
base_domain = ".".join(domain_parts[-2:])
new_domain = f"n{i}.{base_domain}"
else:
continue
new_url = parsed_url._replace(netloc=new_domain).geturl()
try:
with requests.head(new_url, headers={'User-Agent': 'Mozilla/5.0'}, timeout=5, allow_redirects=True) as resp:
if resp.status_code == 200:
self.logger(f" ✅ Valid subdomain found: {new_domain}")
return new_url
except requests.RequestException:
continue
self.logger(f" ⚠️ No other valid subdomain found. Sticking with the original.")
return url
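# Illustration (not part of the worker): a 403 from, say,
# https://c1.kemono.cr/data/ab/cd/file.bin would be retried against
# n1.kemono.cr, n2.kemono.cr, ... up to n4.kemono.cr with the path kept
# unchanged; the first candidate whose HEAD request answers 200 is used.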
def _download_single_file(self, file_info, target_folder_path, post_page_url, original_post_id_for_log, skip_event,
post_title="", file_index_in_post=0, num_files_in_this_post=1,
manga_date_file_counter_ref=None,
forced_filename_override=None,
@@ -215,11 +266,36 @@ class PostProcessorWorker:
if self.check_cancel() or (skip_event and skip_event.is_set()):
return 0, 1, "", False, FILE_DOWNLOAD_STATUS_SKIPPED, None
file_download_headers = {
'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
'Referer': post_page_url,
'Accept': 'text/css'
}
file_url = file_info.get('url')
cookies_to_use_for_file = None
if self.use_cookie:
cookies_to_use_for_file = prepare_cookies_for_request(self.use_cookie, self.cookie_text, self.selected_cookie_file, self.app_base_dir, self.logger)
if self.skip_file_size_mb is not None:
api_original_filename_for_size_check = file_info.get('_original_name_for_log', file_info.get('name'))
try:
# Use a HEAD request to read the response headers without downloading the body
with requests.head(file_url, headers=file_download_headers, timeout=15, cookies=cookies_to_use_for_file, allow_redirects=True) as head_response:
head_response.raise_for_status()
content_length = head_response.headers.get('Content-Length')
if content_length:
file_size_bytes = int(content_length)
file_size_mb = file_size_bytes / (1024 * 1024)
if file_size_mb < self.skip_file_size_mb:
self.logger(f" -> Skip File (Size): '{api_original_filename_for_size_check}' is {file_size_mb:.2f} MB, which is smaller than the {self.skip_file_size_mb} MB limit.")
return 0, 1, api_original_filename_for_size_check, False, FILE_DOWNLOAD_STATUS_SKIPPED, None
else:
self.logger(f" ⚠️ Could not determine file size for '{api_original_filename_for_size_check}' to check against size limit. Proceeding with download.")
except requests.RequestException as e:
self.logger(f" ⚠️ Could not fetch file headers to check size for '{api_original_filename_for_size_check}': {e}. Proceeding with download.")
api_original_filename = file_info.get('_original_name_for_log', file_info.get('name'))
filename_to_save_in_main_path = ""
if forced_filename_override:
@@ -233,34 +309,28 @@ class PostProcessorWorker:
self.logger(f" -> Skip File (Keyword in Original Name '{skip_word}'): '{api_original_filename}'. Scope: {self.skip_words_scope}")
return 0, 1, api_original_filename, False, FILE_DOWNLOAD_STATUS_SKIPPED, None
cleaned_original_api_filename = clean_filename(api_original_filename)
cleaned_original_api_filename = robust_clean_name(api_original_filename)
original_filename_cleaned_base, original_ext = os.path.splitext(cleaned_original_api_filename)
if not original_ext.startswith('.'): original_ext = '.' + original_ext if original_ext else ''
if self.manga_mode_active:
if self.manga_filename_style == STYLE_ORIGINAL_NAME:
# Get the post's publication or added date
published_date_str = self.post.get('published')
added_date_str = self.post.get('added')
formatted_date_str = "nodate" # Fallback if no date is found
formatted_date_str = "nodate"
date_to_use_str = published_date_str or added_date_str
if date_to_use_str:
try:
# Extract just the YYYY-MM-DD part from the timestamp
formatted_date_str = date_to_use_str.split('T')[0]
except Exception:
self.logger(f" ⚠️ Could not parse date '{date_to_use_str}'. Using 'nodate' prefix.")
else:
self.logger(f" ⚠️ Post ID {original_post_id_for_log} has no date. Using 'nodate' prefix.")
# Combine the date with the cleaned original filename
filename_to_save_in_main_path = f"{formatted_date_str}_{cleaned_original_api_filename}"
was_original_name_kept_flag = True
elif self.manga_filename_style == STYLE_POST_TITLE:
if post_title and post_title.strip():
cleaned_post_title_base = clean_filename(post_title.strip())
cleaned_post_title_base = robust_clean_name(post_title.strip())
if num_files_in_this_post > 1:
if file_index_in_post == 0:
filename_to_save_in_main_path = f"{cleaned_post_title_base}{original_ext}"
@@ -281,7 +351,7 @@ class PostProcessorWorker:
manga_date_file_counter_ref[0] += 1
base_numbered_name = f"{counter_val_for_filename:03d}"
if self.manga_date_prefix and self.manga_date_prefix.strip():
cleaned_prefix = clean_filename(self.manga_date_prefix.strip())
cleaned_prefix = robust_clean_name(self.manga_date_prefix.strip())
if cleaned_prefix:
filename_to_save_in_main_path = f"{cleaned_prefix} {base_numbered_name}{original_ext}"
else:
@@ -298,7 +368,7 @@ class PostProcessorWorker:
with counter_lock:
counter_val_for_filename = manga_global_file_counter_ref[0]
manga_global_file_counter_ref[0] += 1
cleaned_post_title_base_for_global = clean_filename(post_title.strip() if post_title and post_title.strip() else "post")
cleaned_post_title_base_for_global = robust_clean_name(post_title.strip() if post_title and post_title.strip() else "post")
filename_to_save_in_main_path = f"{cleaned_post_title_base_for_global}_{counter_val_for_filename:03d}{original_ext}"
else:
self.logger(f"⚠️ Manga Title+GlobalNum Mode: Counter ref not provided or malformed for '{api_original_filename}'. Using original. Ref: {manga_global_file_counter_ref}")
@@ -330,8 +400,8 @@ class PostProcessorWorker:
self.logger(f" ⚠️ Post ID {original_post_id_for_log} missing both 'published' and 'added' dates for STYLE_DATE_POST_TITLE. Using 'nodate'.")
if post_title and post_title.strip():
temp_cleaned_title = clean_filename(post_title.strip())
if not temp_cleaned_title or temp_cleaned_title.startswith("untitled_file"):
temp_cleaned_title = robust_clean_name(post_title.strip())
if not temp_cleaned_title or temp_cleaned_title.startswith("untitled_folder"):
self.logger(f"⚠️ Manga mode (Date+PostTitle Style): Post title for post {original_post_id_for_log} ('{post_title}') was empty or generic after cleaning. Using 'post' as title part.")
cleaned_post_title_for_filename = "post"
else:
@@ -358,8 +428,26 @@ class PostProcessorWorker:
self.logger(f"⚠️ Manga mode: Generated filename was empty. Using generic fallback: '{filename_to_save_in_main_path}'.")
was_original_name_kept_flag = False
else:
filename_to_save_in_main_path = cleaned_original_api_filename
was_original_name_kept_flag = True
is_url_like = 'http' in api_original_filename.lower()
is_too_long = len(cleaned_original_api_filename) > 100
if is_url_like or is_too_long:
self.logger(f" ⚠️ Original filename is a URL or too long. Generating a shorter name.")
name_hash = hashlib.md5(api_original_filename.encode()).hexdigest()[:12]
_, ext = os.path.splitext(cleaned_original_api_filename)
if not ext:
try:
path = urlparse(api_original_filename).path
ext = os.path.splitext(path)[1] or ".file"
except Exception:
ext = ".file"
cleaned_post_title = robust_clean_name(post_title.strip() if post_title else "post")[:40]
filename_to_save_in_main_path = f"{cleaned_post_title}_{name_hash}{ext}"
was_original_name_kept_flag = False
else:
filename_to_save_in_main_path = cleaned_original_api_filename
was_original_name_kept_flag = True
if self.remove_from_filename_words_list and filename_to_save_in_main_path:
base_name_for_removal, ext_for_removal = os.path.splitext(filename_to_save_in_main_path)
@@ -414,8 +502,7 @@ class PostProcessorWorker:
final_save_path_check = os.path.join(target_folder_path, filename_to_save_in_main_path)
if os.path.exists(final_save_path_check):
try:
# Use a HEAD request to get the expected size without downloading the body
with requests.head(file_url, headers=headers, timeout=15, cookies=cookies_to_use_for_file, allow_redirects=True) as head_response:
with requests.head(file_url, headers=file_download_headers, timeout=15, cookies=cookies_to_use_for_file, allow_redirects=True) as head_response:
head_response.raise_for_status()
expected_size = int(head_response.headers.get('Content-Length', -1))
@@ -423,62 +510,96 @@ class PostProcessorWorker:
if expected_size != -1 and actual_size == expected_size:
self.logger(f" -> Skip (File Exists & Complete): '{filename_to_save_in_main_path}' is already on disk with the correct size.")
# We still need to add its hash to the session to prevent duplicates in other modes
# This is a quick hash calculation for the already existing file
try:
md5_hasher = hashlib.md5()
with open(final_save_path_check, 'rb') as f_verify:
for chunk in iter(lambda: f_verify.read(8192), b""):
md5_hasher.update(chunk)
with self.downloaded_hash_counts_lock:
self.downloaded_hash_counts[md5_hasher.hexdigest()] += 1
except Exception as hash_exc:
self.logger(f" ⚠️ Could not hash existing file '{filename_to_save_in_main_path}' for session: {hash_exc}")
return 0, 1, filename_to_save_in_main_path, was_original_name_kept_flag, FILE_DOWNLOAD_STATUS_SKIPPED, None
else:
self.logger(f" ⚠️ File '{filename_to_save_in_main_path}' exists but is incomplete (Expected: {expected_size}, Actual: {actual_size}). Re-downloading.")
except requests.RequestException as e:
self.logger(f" ⚠️ Could not verify size of existing file '{filename_to_save_in_main_path}': {e}. Proceeding with download.")
max_retries = 3
retry_delay = 5
downloaded_size_bytes = 0
calculated_file_hash = None
downloaded_part_file_path = None
total_size_bytes = 0
download_successful_flag = False
last_exception_for_retry_later = None
is_permanent_error = False
data_to_write_io = None
response_for_this_attempt = None
for attempt_num_single_stream in range(max_retries + 1):
response_for_this_attempt = None
response = None
if self._check_pause(f"File download attempt for '{api_original_filename}'"): break
if self.check_cancel() or (skip_event and skip_event.is_set()): break
try:
if attempt_num_single_stream > 0:
self.logger(f" Retrying download for '{api_original_filename}' (Overall Attempt {attempt_num_single_stream + 1}/{max_retries + 1})...")
time.sleep(retry_delay * (2 ** (attempt_num_single_stream - 1)))
self._emit_signal('file_download_status', True)
response = requests.get(file_url, headers=headers, timeout=(15, 300), stream=True, cookies=cookies_to_use_for_file)
current_url_to_try = file_url
response = requests.get(current_url_to_try, headers=file_download_headers, timeout=(30, 300), stream=True, cookies=cookies_to_use_for_file)
if response.status_code == 403 and ('kemono.cr' in current_url_to_try or 'coomer.st' in current_url_to_try):
self.logger(f" ⚠️ Got 403 Forbidden for '{api_original_filename}'. Attempting subdomain rotation...")
new_url = self._find_valid_subdomain(current_url_to_try)
if new_url != current_url_to_try:
self.logger(f" Retrying with new URL: {new_url}")
file_url = new_url
response.close() # Close the old response
response = requests.get(new_url, headers=file_download_headers, timeout=(30, 300), stream=True, cookies=cookies_to_use_for_file)
response.raise_for_status()
# --- REVISED AND MOVED SIZE CHECK LOGIC ---
total_size_bytes = int(response.headers.get('Content-Length', 0))
num_parts_for_file = min(self.num_file_threads, MAX_PARTS_FOR_MULTIPART_DOWNLOAD)
if self.skip_file_size_mb is not None:
if total_size_bytes > 0:
file_size_mb = total_size_bytes / (1024 * 1024)
if file_size_mb < self.skip_file_size_mb:
self.logger(f" -> Skip File (Size): '{api_original_filename}' is {file_size_mb:.2f} MB, which is smaller than the {self.skip_file_size_mb} MB limit.")
return 0, 1, api_original_filename, False, FILE_DOWNLOAD_STATUS_SKIPPED, None
# If Content-Length is missing, we can't check, so we no longer log a warning here and just proceed.
# --- END OF REVISED LOGIC ---
num_parts_for_file = min(self.multipart_parts_count, MAX_PARTS_FOR_MULTIPART_DOWNLOAD)
file_is_eligible_by_scope = False
if self.multipart_scope == 'videos':
if is_video(api_original_filename):
file_is_eligible_by_scope = True
elif self.multipart_scope == 'archives':
if is_archive(api_original_filename):
file_is_eligible_by_scope = True
elif self.multipart_scope == 'both':
if is_video(api_original_filename) or is_archive(api_original_filename):
file_is_eligible_by_scope = True
min_size_in_bytes = self.multipart_min_size_mb * 1024 * 1024
attempt_multipart = (self.allow_multipart_download and MULTIPART_DOWNLOADER_AVAILABLE and
num_parts_for_file > 1 and total_size_bytes > MIN_SIZE_FOR_MULTIPART_DOWNLOAD and
file_is_eligible_by_scope and
num_parts_for_file > 1 and total_size_bytes > min_size_in_bytes and
'bytes' in response.headers.get('Accept-Ranges', '').lower())
if self._check_pause(f"Multipart decision for '{api_original_filename}'"): break
if attempt_multipart:
if response_for_this_attempt:
response_for_this_attempt.close()
response_for_this_attempt = None
response.close() # Close the initial connection before starting multipart
mp_save_path_for_unique_part_stem_arg = os.path.join(target_folder_path, f"{unique_part_file_stem_on_disk}{temp_file_ext_for_unique_part}")
mp_success, mp_bytes, mp_hash, mp_file_handle = download_file_in_parts(
file_url, mp_save_path_for_unique_part_stem_arg, total_size_bytes, num_parts_for_file, headers, api_original_filename,
file_url, mp_save_path_for_unique_part_stem_arg, total_size_bytes, num_parts_for_file, file_download_headers, api_original_filename,
emitter_for_multipart=self.emitter, cookies_for_chunk_session=cookies_to_use_for_file,
cancellation_event=self.cancellation_event, skip_event=skip_event, logger_func=self.logger,
pause_event=self.pause_event
@@ -487,7 +608,7 @@ class PostProcessorWorker:
download_successful_flag = True
downloaded_size_bytes = mp_bytes
calculated_file_hash = mp_hash
downloaded_part_file_path = mp_save_path_for_unique_part_stem_arg + ".part"
downloaded_part_file_path = mp_save_path_for_unique_part_stem_arg
if mp_file_handle: mp_file_handle.close()
break
else:
@@ -501,7 +622,6 @@ class PostProcessorWorker:
current_attempt_downloaded_bytes = 0
md5_hasher = hashlib.md5()
last_progress_time = time.time()
single_stream_exception = None
try:
with open(current_single_stream_part_path, 'wb') as f_part:
for chunk in response.iter_content(chunk_size=1 * 1024 * 1024):
@@ -553,20 +673,23 @@ class PostProcessorWorker:
if isinstance(e, requests.exceptions.ConnectionError) and ("Failed to resolve" in str(e) or "NameResolutionError" in str(e)):
self.logger(" 💡 This looks like a DNS resolution problem. Please check your internet connection, DNS settings, or VPN.")
except requests.exceptions.RequestException as e:
self.logger(f" ❌ Download Error (Non-Retryable): {api_original_filename}. Error: {e}")
last_exception_for_retry_later = e
is_permanent_error = True
if ("Failed to resolve" in str(e) or "NameResolutionError" in str(e)):
self.logger(" 💡 This looks like a DNS resolution problem. Please check your internet connection, DNS settings, or VPN.")
break
if e.response is not None and e.response.status_code == 403:
self.logger(f" ⚠️ Download Error (403 Forbidden): {api_original_filename}. This often requires valid cookies.")
self.logger(f" Will retry... Check your 'Use Cookie' settings if this persists.")
last_exception_for_retry_later = e
else:
self.logger(f" ❌ Download Error (Non-Retryable): {api_original_filename}. Error: {e}")
last_exception_for_retry_later = e
is_permanent_error = True
break
except Exception as e:
self.logger(f" ❌ Unexpected Download Error: {api_original_filename}: {e}\n{traceback.format_exc(limit=2)}")
last_exception_for_retry_later = e
is_permanent_error = True
break
finally:
if response_for_this_attempt:
response_for_this_attempt.close()
if response:
response.close()
self._emit_signal('file_download_status', False)
final_total_for_progress = total_size_bytes if download_successful_flag and total_size_bytes > 0 else downloaded_size_bytes
@@ -635,26 +758,22 @@ class PostProcessorWorker:
self.logger(f" 🔄 Compressing '{api_original_filename}' to WebP...")
try:
with Image.open(downloaded_part_file_path) as img:
# Convert to RGB to avoid issues with paletted images or alpha channels in WebP
if img.mode not in ('RGB', 'RGBA'):
img = img.convert('RGBA')
# Use an in-memory buffer to save the compressed image
output_buffer = BytesIO()
img.save(output_buffer, format='WebP', quality=85)
# This buffer now holds the compressed data
data_to_write_io = output_buffer
# Update the filename to use the .webp extension
base, _ = os.path.splitext(filename_to_save_in_main_path)
filename_to_save_in_main_path = f"{base}.webp"
self.logger(f" ✅ Compression successful. New size: {len(data_to_write_io.getvalue()) / (1024*1024):.2f} MB")
except Exception as e_compress:
self.logger(f" ⚠️ Failed to compress '{api_original_filename}': {e_compress}. Saving original file instead.")
data_to_write_io = None # Ensure we fall back to saving the original
data_to_write_io = None
effective_save_folder = target_folder_path
base_name, extension = os.path.splitext(filename_to_save_in_main_path)
counter = 1
@@ -671,17 +790,14 @@ class PostProcessorWorker:
try:
if data_to_write_io:
# Write the compressed data from the in-memory buffer
with open(final_save_path, 'wb') as f_out:
f_out.write(data_to_write_io.getvalue())
# Clean up the original downloaded part file
if downloaded_part_file_path and os.path.exists(downloaded_part_file_path):
try:
os.remove(downloaded_part_file_path)
except OSError as e_rem:
self.logger(f" -> Failed to remove .part after compression: {e_rem}")
else:
# No compression was done, just rename the original file
if downloaded_part_file_path and os.path.exists(downloaded_part_file_path):
time.sleep(0.1)
os.rename(downloaded_part_file_path, final_save_path)
@@ -728,7 +844,7 @@ class PostProcessorWorker:
self.logger(f" -> Failed to remove partially saved file: {final_save_path}")
permanent_failure_details = {
'file_info': file_info, 'target_folder_path': target_folder_path, 'headers': headers,
'file_info': file_info, 'target_folder_path': target_folder_path, 'headers': file_download_headers,
'original_post_id_for_log': original_post_id_for_log, 'post_title': post_title,
'file_index_in_post': file_index_in_post, 'num_files_in_this_post': num_files_in_this_post,
'forced_filename_override': filename_to_save_in_main_path,
@@ -742,7 +858,7 @@ class PostProcessorWorker:
details_for_failure = {
'file_info': file_info,
'target_folder_path': target_folder_path,
'headers': headers,
'headers': file_download_headers,
'original_post_id_for_log': original_post_id_for_log,
'post_title': post_title,
'file_index_in_post': file_index_in_post,
@@ -756,36 +872,97 @@ class PostProcessorWorker:
def process(self):
if self.service == 'discord':
# For Discord, self.post is a MESSAGE object from the API.
post_title = self.post.get('content', '') or f"Message {self.post.get('id', 'N/A')}"
post_id = self.post.get('id', 'unknown_id')
post_main_file_info = {} # Discord messages don't have a single main file
post_attachments = self.post.get('attachments', [])
post_content_html = self.post.get('content', '')
post_data = self.post # Keep a reference to the original message object
log_prefix = "Message"
else:
# Existing logic for standard creator posts
post_title = self.post.get('title', '') or 'untitled_post'
post_id = self.post.get('id', 'unknown_id')
post_main_file_info = self.post.get('file')
post_attachments = self.post.get('attachments', [])
post_content_html = self.post.get('content', '')
post_data = self.post # Reference to the post object
log_prefix = "Post"
# --- FIX: FETCH FULL POST DATA IF CONTENT IS MISSING BUT NEEDED ---
content_is_needed = (
self.show_external_links or
self.extract_links_only or
self.scan_content_for_images or
(self.filter_mode == 'text_only' and self.text_only_scope == 'content')
)
if content_is_needed and self.post.get('content') is None and self.service != 'discord':
self.logger(f" Post {post_id} is missing 'content' field, fetching full data...")
parsed_url = urlparse(self.api_url_input)
api_domain = parsed_url.netloc
creator_page_url = f"https://{api_domain}/{self.service}/user/{self.user_id}"
headers = {
'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
'Referer': creator_page_url,
'Accept': 'text/css'
}
cookies = prepare_cookies_for_request(self.use_cookie, self.cookie_text, self.selected_cookie_file, self.app_base_dir, self.logger, target_domain=api_domain)
full_post_data = fetch_single_post_data(api_domain, self.service, self.user_id, post_id, headers, self.logger, cookies_dict=cookies)
if full_post_data:
self.logger(" ✅ Full post data fetched successfully.")
self.post = full_post_data
post_title = self.post.get('title', '') or 'untitled_post'
post_main_file_info = self.post.get('file')
post_attachments = self.post.get('attachments', [])
post_content_html = self.post.get('content', '')
post_data = self.post
else:
self.logger(f" ⚠️ Failed to fetch full content for post {post_id}. Content-dependent features may not work for this post.")
result_tuple = (0, 0, [], [], [], None, None)
total_downloaded_this_post = 0
total_skipped_this_post = 0
determined_post_save_path_for_history = self.override_output_dir if self.override_output_dir else self.download_root
try:
if self._check_pause(f"Post processing for ID {self.post.get('id', 'N/A')}"):
result_tuple = (0, 0, [], [], [], None, None)
return result_tuple
if self._check_pause(f"{log_prefix} processing for ID {post_id}"):
return (0, 0, [], [], [], None, None)
if self.check_cancel():
result_tuple = (0, 0, [], [], [], None, None)
return result_tuple
return (0, 0, [], [], [], None, None)
current_character_filters = self._get_current_character_filters()
kept_original_filenames_for_log = []
retryable_failures_this_post = []
permanent_failures_this_post = []
total_downloaded_this_post = 0
total_skipped_this_post = 0
history_data_for_this_post = None
parsed_api_url = urlparse(self.api_url_input)
post_data = self.post
post_id = post_data.get('id', 'unknown_id')
# CONTEXT-AWARE URL for Referer Header
if self.service == 'discord':
server_id = self.user_id
channel_id = self.post.get('channel', 'unknown_channel')
post_page_url = f"https://{parsed_api_url.netloc}/discord/server/{server_id}/{channel_id}"
else:
post_page_url = f"https://{parsed_api_url.netloc}/{self.service}/user/{self.user_id}/post/{post_id}"
post_page_url = f"https://{parsed_api_url.netloc}/{self.service}/user/{self.user_id}/post/{post_id}"
headers = {'User-Agent': 'Mozilla/5.0', 'Referer': post_page_url, 'Accept': '*/*'}
headers = {
'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
'Referer': post_page_url,
'Accept': 'text/css'
}
link_pattern = re.compile(r"""<a\s+.*?href=["'](https?://[^"']+)["'][^>]*>(.*?)</a>""", re.IGNORECASE | re.DOTALL)
post_data = self.post
post_title = post_data.get('title', '') or 'untitled_post'
post_id = post_data.get('id', 'unknown_id')
post_main_file_info = post_data.get('file')
post_attachments = post_data.get('attachments', [])
effective_unwanted_keywords_for_folder_naming = self.unwanted_keywords.copy()
is_full_creator_download_no_char_filter = not self.target_post_id_from_initial_url and not current_character_filters
@@ -803,9 +980,9 @@ class PostProcessorWorker:
self.logger(f" Applying creator download specific folder ignore words ({len(self.creator_download_folder_ignore_words)} words).")
effective_unwanted_keywords_for_folder_naming.update(self.creator_download_folder_ignore_words)
post_content_html = post_data.get('content', '')
if not self.extract_links_only:
self.logger(f"\n--- Processing Post {post_id} ('{post_title[:50]}...') (Thread: {threading.current_thread().name}) ---")
self.logger(f"\n--- Processing {log_prefix} {post_id} ('{post_title[:50]}...') (Thread: {threading.current_thread().name}) ---")
num_potential_files_in_post = len(post_attachments or []) + (1 if post_main_file_info and post_main_file_info.get('path') else 0)
post_is_candidate_by_title_char_match = False
@@ -849,7 +1026,7 @@ class PostProcessorWorker:
if original_api_att_name:
all_files_from_post_api_for_char_check.append({'_original_name_for_log': original_api_att_name})
if current_character_filters and self.char_filter_scope == CHAR_SCOPE_COMMENTS:
if current_character_filters and self.char_filter_scope == CHAR_SCOPE_COMMENTS and self.service != 'discord':
self.logger(f" [Char Scope: Comments] Phase 1: Checking post files for matches before comments for post ID '{post_id}'.")
if self._check_pause(f"File check (comments scope) for post {post_id}"):
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
@@ -872,7 +1049,7 @@ class PostProcessorWorker:
if post_is_candidate_by_file_char_match_in_comment_scope: break
self.logger(f" [Char Scope: Comments] Phase 1 Result: post_is_candidate_by_file_char_match_in_comment_scope = {post_is_candidate_by_file_char_match_in_comment_scope}")
if current_character_filters and self.char_filter_scope == CHAR_SCOPE_COMMENTS:
if current_character_filters and self.char_filter_scope == CHAR_SCOPE_COMMENTS and self.service != 'discord':
if not post_is_candidate_by_file_char_match_in_comment_scope:
if self._check_pause(f"Comment check for post {post_id}"):
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
@@ -936,10 +1113,10 @@ class PostProcessorWorker:
return result_tuple
if not self.extract_links_only and self.manga_mode_active and current_character_filters and (self.char_filter_scope == CHAR_SCOPE_TITLE or self.char_filter_scope == CHAR_SCOPE_BOTH) and not post_is_candidate_by_title_char_match:
self.logger(f" -> Skip Post (Manga Mode with Title/Both Scope - No Title Char Match): Title '{post_title[:50]}' doesn't match filters.")
self._emit_signal('missed_character_post', post_title, "Manga Mode: No title match for character filter (Title/Both scope)")
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
return result_tuple
self.logger(f" -> Skip Post (Manga Mode with Title/Both Scope - No Title Char Match): Title '{post_title[:50]}' doesn't match filters.")
self._emit_signal('missed_character_post', post_title, "Manga Mode: No title match for character filter (Title/Both scope)")
result_tuple = (0, num_potential_files_in_post, [], [], [], None, None)
return result_tuple
if not isinstance(post_attachments, list):
self.logger(f"⚠️ Corrupt attachment data for post {post_id} (expected list, got {type(post_attachments)}). Skipping attachments.")
@@ -1044,7 +1221,10 @@ class PostProcessorWorker:
determined_post_save_path_for_history = os.path.join(determined_post_save_path_for_history, base_folder_names_for_post_content[0])
if not self.extract_links_only and self.use_post_subfolders:
cleaned_post_title_for_sub = clean_folder_name(post_title)
cleaned_post_title_for_sub = robust_clean_name(post_title)
max_folder_len = 100
if len(cleaned_post_title_for_sub) > max_folder_len:
cleaned_post_title_for_sub = cleaned_post_title_for_sub[:max_folder_len].strip()
post_id_for_fallback = self.post.get('id', 'unknown_id')
if not cleaned_post_title_for_sub or cleaned_post_title_for_sub == "untitled_folder":
@@ -1069,29 +1249,50 @@ class PostProcessorWorker:
suffix_counter = 0
final_post_subfolder_name = ""
while True:
suffix_counter = 0
folder_creation_successful = False
final_post_subfolder_name = ""
post_id_for_folder = str(self.post.get('id', 'unknown_id'))
while not folder_creation_successful:
if suffix_counter == 0:
name_candidate = original_cleaned_post_title_for_sub
else:
name_candidate = f"{original_cleaned_post_title_for_sub}_{suffix_counter}"
potential_post_subfolder_path = os.path.join(base_path_for_post_subfolder, name_candidate)
try:
os.makedirs(potential_post_subfolder_path, exist_ok=False)
final_post_subfolder_name = name_candidate
if suffix_counter > 0:
self.logger(f" Post subfolder name conflict: Using '{final_post_subfolder_name}' instead of '{original_cleaned_post_title_for_sub}' to avoid mixing posts.")
break
except FileExistsError:
suffix_counter += 1
if suffix_counter > 100:
self.logger(f" ⚠️ Exceeded 100 attempts to find unique subfolder name for '{original_cleaned_post_title_for_sub}'. Using UUID.")
final_post_subfolder_name = f"{original_cleaned_post_title_for_sub}_{uuid.uuid4().hex[:8]}"
os.makedirs(os.path.join(base_path_for_post_subfolder, final_post_subfolder_name), exist_ok=True)
id_file_path = os.path.join(potential_post_subfolder_path, f".postid_{post_id_for_folder}")
if not os.path.isdir(potential_post_subfolder_path):
# Folder does not exist, create it and its ID file
try:
os.makedirs(potential_post_subfolder_path)
with open(id_file_path, 'w') as f:
f.write(post_id_for_folder)
final_post_subfolder_name = name_candidate
folder_creation_successful = True
if suffix_counter > 0:
self.logger(f" Post subfolder name conflict: Using '{final_post_subfolder_name}' to avoid mixing posts.")
except OSError as e_mkdir:
self.logger(f" ❌ Error creating directory '{potential_post_subfolder_path}': {e_mkdir}.")
final_post_subfolder_name = original_cleaned_post_title_for_sub
break
except OSError as e_mkdir:
self.logger(f" ❌ Error creating directory '{potential_post_subfolder_path}': {e_mkdir}. Files for this post might be saved in parent or fail.")
final_post_subfolder_name = original_cleaned_post_title_for_sub
break
else:
# Folder exists, check if it's for this post or a different one
if os.path.exists(id_file_path):
# ID file matches! This is a restore scenario. Reuse the folder.
self.logger(f" Re-using existing post subfolder: '{name_candidate}'")
final_post_subfolder_name = name_candidate
folder_creation_successful = True
else:
# Folder exists but ID file does not match (or is missing). This is a normal name collision.
suffix_counter += 1
if suffix_counter > 100: # Safety break
self.logger(f" ⚠️ Exceeded 100 attempts to find unique subfolder for '{original_cleaned_post_title_for_sub}'.")
final_post_subfolder_name = f"{original_cleaned_post_title_for_sub}_{uuid.uuid4().hex[:8]}"
os.makedirs(os.path.join(base_path_for_post_subfolder, final_post_subfolder_name), exist_ok=True)
break
determined_post_save_path_for_history = os.path.join(base_path_for_post_subfolder, final_post_subfolder_name)
if self.skip_words_list and (self.skip_words_scope == SKIP_SCOPE_POSTS or self.skip_words_scope == SKIP_SCOPE_BOTH):
@@ -1140,7 +1341,6 @@ class PostProcessorWorker:
parsed_url = urlparse(self.api_url_input)
api_domain = parsed_url.netloc
cookies = prepare_cookies_for_request(self.use_cookie, self.cookie_text, self.selected_cookie_file, self.app_base_dir, self.logger, target_domain=api_domain)
from .api_client import fetch_single_post_data
full_data = fetch_single_post_data(api_domain, self.service, self.user_id, post_id, headers, self.logger, cookies_dict=cookies)
if full_data:
final_post_data = full_data
@@ -1626,7 +1826,7 @@ class PostProcessorWorker:
self._download_single_file,
file_info=file_info_to_dl,
target_folder_path=current_path_for_file_instance,
headers=headers, original_post_id_for_log=post_id, skip_event=self.skip_current_file_flag,
post_page_url=post_page_url, original_post_id_for_log=post_id, skip_event=self.skip_current_file_flag,
post_title=post_title, manga_date_file_counter_ref=manga_date_counter_to_pass,
manga_global_file_counter_ref=manga_global_counter_to_pass, folder_context_name_for_history=folder_context_for_file,
file_index_in_post=file_idx, num_files_in_this_post=len(files_to_download_info_list)
@@ -1733,14 +1933,23 @@ class PostProcessorWorker:
permanent_failures_this_post, history_data_for_this_post,
None)
except Exception as main_thread_err:
self.logger(f"\n❌ Critical error within Worker process for {log_prefix} {post_id}: {main_thread_err}")
self.logger(traceback.format_exc())
# Ensure we still return a valid tuple to prevent the app from stalling
result_tuple = (0, 1, [], [], [{'error': str(main_thread_err)}], None, None)
finally:
# This block ALWAYS executes, ensuring that every task signals its completion.
# This is critical for the main thread to know when all work is done.
if not self.extract_links_only and self.use_post_subfolders and total_downloaded_this_post == 0:
path_to_check_for_emptiness = determined_post_save_path_for_history
try:
# Check if the path is a directory and if it's empty
if os.path.isdir(path_to_check_for_emptiness) and not os.listdir(path_to_check_for_emptiness):
self.logger(f" 🗑️ Removing empty post-specific subfolder: '{path_to_check_for_emptiness}'")
os.rmdir(path_to_check_for_emptiness)
except OSError as e_rmdir:
# Log if removal fails for any reason (e.g., permissions)
self.logger(f" ⚠️ Could not remove potentially empty subfolder '{path_to_check_for_emptiness}': {e_rmdir}")
self._emit_signal('worker_finished', result_tuple)
@@ -1783,6 +1992,8 @@ class DownloadThread(QThread):
remove_from_filename_words_list=None,
manga_date_prefix='',
allow_multipart_download=True,
multipart_parts_count=4,
multipart_min_size_mb=100,
selected_cookie_file=None,
override_output_dir=None,
app_base_dir=None,
@@ -1805,7 +2016,10 @@ class DownloadThread(QThread):
single_pdf_mode=False,
project_root_dir=None,
processed_post_ids=None,
start_offset=0):
start_offset=0,
fetch_first=False,
skip_file_size_mb=None
):
super().__init__()
self.api_url_input = api_url_input
self.output_dir = output_dir
@@ -1845,6 +2059,8 @@ class DownloadThread(QThread):
self.remove_from_filename_words_list = remove_from_filename_words_list
self.manga_date_prefix = manga_date_prefix
self.allow_multipart_download = allow_multipart_download
self.multipart_parts_count = multipart_parts_count
self.multipart_min_size_mb = multipart_min_size_mb
self.selected_cookie_file = selected_cookie_file
self.app_base_dir = app_base_dir
self.cookie_text = cookie_text
@@ -1869,6 +2085,8 @@ class DownloadThread(QThread):
self.project_root_dir = project_root_dir
self.processed_post_ids_set = set(processed_post_ids) if processed_post_ids is not None else set()
self.start_offset = start_offset
self.fetch_first = fetch_first
self.skip_file_size_mb = skip_file_size_mb
if self.compress_images and Image is None:
self.logger("⚠️ Image compression disabled: Pillow library not found (DownloadThread).")
@@ -1915,7 +2133,8 @@ class DownloadThread(QThread):
selected_cookie_file=self.selected_cookie_file,
app_base_dir=self.app_base_dir,
manga_filename_style_for_sort_check=self.manga_filename_style if self.manga_mode_active else None,
processed_post_ids=self.processed_post_ids_set
processed_post_ids=self.processed_post_ids_set,
fetch_all_first=self.fetch_first
)
for posts_batch_data in post_generator:
@@ -1986,6 +2205,9 @@ class DownloadThread(QThread):
'text_only_scope': self.text_only_scope,
'text_export_format': self.text_export_format,
'single_pdf_mode': self.single_pdf_mode,
'multipart_parts_count': self.multipart_parts_count,
'multipart_min_size_mb': self.multipart_min_size_mb,
'skip_file_size_mb': self.skip_file_size_mb,
'project_root_dir': self.project_root_dir,
}


@@ -5,11 +5,12 @@ import traceback
import json
import base64
import time
import zipfile
from urllib.parse import urlparse, urlunparse, parse_qs, urlencode
# --- Third-Party Library Imports ---
# Make sure to install these: pip install requests pycryptodome gdown
# --- Third-party Library Imports ---
import requests
import cloudscraper
try:
from Crypto.Cipher import AES
@@ -23,20 +24,17 @@ try:
except ImportError:
GDRIVE_AVAILABLE = False
# --- Constants ---
MEGA_API_URL = "https://g.api.mega.co.nz"
# --- Helper Functions (Original and New) ---
def _get_filename_from_headers(headers):
"""
Extracts a filename from the Content-Disposition header.
(Kept from the original implementation; still used for Dropbox downloads.)
"""
cd = headers.get('content-disposition')
if not cd:
return None
# Handles both filename="file.zip" and filename*=UTF-8''file%20name.zip
fname_match = re.findall('filename="?([^"]+)"?', cd)
if fname_match:
sanitized_name = re.sub(r'[<>:"/\\|?*]', '_', fname_match[0].strip())
@@ -44,28 +42,23 @@ def _get_filename_from_headers(headers):
return None
# --- NEW: Helper functions for Mega decryption ---
# --- Helper functions for Mega decryption ---
def urlb64_to_b64(s):
"""Converts a URL-safe base64 string to a standard base64 string."""
s = s.replace('-', '+').replace('_', '/')
s += '=' * (-len(s) % 4)
return s
def b64_to_bytes(s):
"""Decodes a URL-safe base64 string to bytes."""
return base64.b64decode(urlb64_to_b64(s))
def bytes_to_hex(b):
"""Converts bytes to a hex string."""
return b.hex()
def hex_to_bytes(h):
"""Converts a hex string to bytes."""
return bytes.fromhex(h)
def hrk2hk(hex_raw_key):
"""Derives the final AES key from the raw key components for Mega."""
key_part1 = int(hex_raw_key[0:16], 16)
key_part2 = int(hex_raw_key[16:32], 16)
key_part3 = int(hex_raw_key[32:48], 16)
@@ -77,23 +70,20 @@ def hrk2hk(hex_raw_key):
return f'{final_key_part1:016x}{final_key_part2:016x}'
def decrypt_at(at_b64, key_bytes):
"""Decrypts the 'at' attribute to get file metadata."""
at_bytes = b64_to_bytes(at_b64)
iv = b'\0' * 16
cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
decrypted_at = cipher.decrypt(at_bytes)
return decrypted_at.decode('utf-8').strip('\0').replace('MEGA', '')
# --- NEW: Core Logic for Mega Downloads ---
# --- Core Logic for Mega Downloads ---
def get_mega_file_info(file_id, file_key, session, logger_func):
"""Fetches file metadata and the temporary download URL from the Mega API."""
try:
hex_raw_key = bytes_to_hex(b64_to_bytes(file_key))
hex_key = hrk2hk(hex_raw_key)
key_bytes = hex_to_bytes(hex_key)
# Request file attributes
payload = [{"a": "g", "p": file_id}]
response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
response.raise_for_status()
@@ -105,13 +95,10 @@ def get_mega_file_info(file_id, file_key, session, logger_func):
file_size = res_json[0]['s']
at_b64 = res_json[0]['at']
# Decrypt attributes to get the file name
at_dec_json_str = decrypt_at(at_b64, key_bytes)
at_dec_json = json.loads(at_dec_json_str)
file_name = at_dec_json['n']
# Request the temporary download URL
payload = [{"a": "g", "g": 1, "p": file_id}]
response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
response.raise_for_status()
@@ -129,19 +116,16 @@ def get_mega_file_info(file_id, file_key, session, logger_func):
return None
def download_and_decrypt_mega_file(info, download_path, logger_func):
"""Downloads the file and decrypts it chunk by chunk, reporting progress."""
file_name = info['file_name']
file_size = info['file_size']
dl_url = info['dl_url']
hex_raw_key = info['hex_raw_key']
final_path = os.path.join(download_path, file_name)
if os.path.exists(final_path) and os.path.getsize(final_path) == file_size:
logger_func(f" [Mega] File '{file_name}' already exists with the correct size. Skipping.")
return
# Prepare for decryption
key = hex_to_bytes(hrk2hk(hex_raw_key))
iv_hex = hex_raw_key[32:48] + '0000000000000000'
iv_bytes = hex_to_bytes(iv_hex)
@@ -155,13 +139,11 @@ def download_and_decrypt_mega_file(info, download_path, logger_func):
with open(final_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
if not chunk:
continue
if not chunk: continue
decrypted_chunk = cipher.decrypt(chunk)
f.write(decrypted_chunk)
downloaded_bytes += len(chunk)
# Log progress every second
current_time = time.time()
if current_time - last_log_time > 1:
progress_percent = (downloaded_bytes / file_size) * 100 if file_size > 0 else 0
@@ -169,28 +151,16 @@ def download_and_decrypt_mega_file(info, download_path, logger_func):
last_log_time = current_time
logger_func(f" [Mega] ✅ Successfully downloaded '{file_name}' to '{download_path}'")
except requests.RequestException as e:
logger_func(f" [Mega] ❌ Download failed for '{file_name}': {e}")
except IOError as e:
logger_func(f" [Mega] ❌ Could not write to file '{final_path}': {e}")
except Exception as e:
logger_func(f" [Mega] ❌ An unexpected error occurred during download/decryption: {e}")
# --- REPLACEMENT Main Service Downloader Function for Mega ---
def download_mega_file(mega_url, download_path, logger_func=print):
"""
Downloads a file from a Mega.nz URL using direct requests and decryption.
This replaces the old mega.py implementation.
"""
if not PYCRYPTODOME_AVAILABLE:
logger_func("❌ Mega download failed: 'pycryptodome' library is not installed. Please run: pip install pycryptodome")
return
logger_func(f" [Mega] Initializing download for: {mega_url}")
# Regex to capture file ID and key from both old and new URL formats
match = re.search(r'mega(?:\.co)?\.nz/(?:file/|#!)?([a-zA-Z0-9]+)(?:#|!)([a-zA-Z0-9_.-]+)', mega_url)
if not match:
logger_func(f" [Mega] ❌ Error: Invalid Mega URL format.")
@@ -204,18 +174,14 @@ def download_mega_file(mega_url, download_path, logger_func=print):
file_info = get_mega_file_info(file_id, file_key, session, logger_func)
if not file_info:
logger_func(f" [Mega] ❌ Failed to get file info. The link may be invalid or expired. Aborting.")
logger_func(f" [Mega] ❌ Failed to get file info. Aborting.")
return
logger_func(f" [Mega] File found: '{file_info['file_name']}' (Size: {file_info['file_size'] / 1024 / 1024:.2f} MB)")
download_and_decrypt_mega_file(file_info, download_path, logger_func)
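For reference, a hedged illustration of the two public link shapes the regex above is intended to capture; the IDs and keys are invented.
```python
import re

MEGA_LINK_RE = re.compile(
    r'mega(?:\.co)?\.nz/(?:file/|#!)?([a-zA-Z0-9]+)(?:#|!)([a-zA-Z0-9_.-]+)'
)

for url in (
    "https://mega.nz/file/AbCdEf12#k3y-k3y_k3y",  # current format (invented values)
    "https://mega.nz/#!AbCdEf12!k3y-k3y_k3y",     # legacy format
):
    m = MEGA_LINK_RE.search(url)
    if m:
        file_id, file_key = m.group(1), m.group(2)
```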
# --- ORIGINAL Functions for Google Drive and Dropbox (Unchanged) ---
def download_gdrive_file(url, download_path, logger_func=print):
"""Downloads a file from a Google Drive link."""
if not GDRIVE_AVAILABLE:
logger_func("❌ Google Drive download failed: 'gdown' library is not installed.")
return
@@ -232,12 +198,15 @@ def download_gdrive_file(url, download_path, logger_func=print):
except Exception as e:
logger_func(f" [G-Drive] ❌ An unexpected error occurred: {e}")
# --- MODIFIED DROPBOX DOWNLOADER ---
def download_dropbox_file(dropbox_link, download_path=".", logger_func=print):
"""
Downloads a file from a public Dropbox link by modifying the URL for direct download.
Downloads a file or a folder (as a zip) from a public Dropbox link.
Uses cloudscraper to handle potential browser checks and auto-extracts zip files.
"""
logger_func(f" [Dropbox] Attempting to download: {dropbox_link}")
# Modify URL to force download (works for both files and folders)
parsed_url = urlparse(dropbox_link)
query_params = parse_qs(parsed_url.query)
query_params['dl'] = ['1']
@@ -246,26 +215,60 @@ def download_dropbox_file(dropbox_link, download_path=".", logger_func=print):
logger_func(f" [Dropbox] Using direct download URL: {direct_download_url}")
scraper = cloudscraper.create_scraper()
try:
if not os.path.exists(download_path):
os.makedirs(download_path, exist_ok=True)
logger_func(f" [Dropbox] Created download directory: {download_path}")
with requests.get(direct_download_url, stream=True, allow_redirects=True, timeout=(10, 300)) as r:
with scraper.get(direct_download_url, stream=True, allow_redirects=True, timeout=(20, 600)) as r:
r.raise_for_status()
filename = _get_filename_from_headers(r.headers) or os.path.basename(parsed_url.path) or "dropbox_file"
filename = _get_filename_from_headers(r.headers) or os.path.basename(parsed_url.path) or "dropbox_download"
# If it's a folder, Dropbox will name it FolderName.zip
if not os.path.splitext(filename)[1]:
filename += ".zip"
full_save_path = os.path.join(download_path, filename)
logger_func(f" [Dropbox] Starting download of '{filename}'...")
total_size = int(r.headers.get('content-length', 0))
downloaded_bytes = 0
last_log_time = time.time()
with open(full_save_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
downloaded_bytes += len(chunk)
current_time = time.time()
if total_size > 0 and current_time - last_log_time > 1:
progress = (downloaded_bytes / total_size) * 100
logger_func(f" -> Downloading '{filename}'... {downloaded_bytes/1024/1024:.2f}MB / {total_size/1024/1024:.2f}MB ({progress:.1f}%)")
last_log_time = current_time
logger_func(f" [Dropbox] ✅ Dropbox file downloaded successfully: {full_save_path}")
logger_func(f" [Dropbox] ✅ Download complete: {full_save_path}")
# --- NEW: Auto-extraction logic ---
if zipfile.is_zipfile(full_save_path):
logger_func(f" [Dropbox] ዚ Detected zip file. Attempting to extract...")
extract_folder_name = os.path.splitext(filename)[0]
extract_path = os.path.join(download_path, extract_folder_name)
os.makedirs(extract_path, exist_ok=True)
with zipfile.ZipFile(full_save_path, 'r') as zip_ref:
zip_ref.extractall(extract_path)
logger_func(f" [Dropbox] ✅ Successfully extracted to folder: '{extract_path}'")
# Optional: remove the zip file after extraction
try:
os.remove(full_save_path)
logger_func(f" [Dropbox] 🗑️ Removed original zip file.")
except OSError as e:
logger_func(f" [Dropbox] ⚠️ Could not remove original zip file: {e}")
except Exception as e:
logger_func(f" [Dropbox] ❌ An error occurred during Dropbox download: {e}")
traceback.print_exc(limit=2)
raise
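A minimal sketch of the URL rewrite used above (forcing dl=1 so Dropbox serves the file, or a folder as a zip, directly); the share link is hypothetical.
```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def force_direct_download(share_link):
    parsed = urlparse(share_link)
    query = parse_qs(parsed.query)
    query['dl'] = ['1']  # dl=0 shows a preview page; dl=1 forces a download
    return urlunparse(parsed._replace(query=urlencode(query, doseq=True)))

# e.g. (hypothetical link):
# force_direct_download("https://www.dropbox.com/s/abc123/example.zip?dl=0")
#   -> "https://www.dropbox.com/s/abc123/example.zip?dl=1"
```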

125
src/services/updater.py Normal file
View File

@@ -0,0 +1,125 @@
import sys
import os
import requests
import subprocess  # currently unused; the final handoff uses os.startfile instead
from packaging.version import parse as parse_version
from PyQt5.QtCore import QThread, pyqtSignal
# Constants for the updater
GITHUB_REPO_URL = "https://api.github.com/repos/Yuvi9587/Kemono-Downloader/releases/latest"
EXE_NAME = "Kemono.Downloader.exe"
class UpdateChecker(QThread):
"""Checks for a new version on GitHub in a background thread."""
update_available = pyqtSignal(str, str) # new_version, download_url
up_to_date = pyqtSignal(str)
update_error = pyqtSignal(str)
def __init__(self, current_version):
super().__init__()
self.current_version_str = current_version.lstrip('v')
def run(self):
try:
response = requests.get(GITHUB_REPO_URL, timeout=15)
response.raise_for_status()
data = response.json()
latest_version_str = data['tag_name'].lstrip('v')
current_version = parse_version(self.current_version_str)
latest_version = parse_version(latest_version_str)
if latest_version > current_version:
for asset in data.get('assets', []):
if asset['name'] == EXE_NAME:
self.update_available.emit(latest_version_str, asset['browser_download_url'])
return
self.update_error.emit(f"Update found, but '{EXE_NAME}' is missing from the release assets.")
else:
self.up_to_date.emit("You are on the latest version.")
except requests.exceptions.RequestException as e:
self.update_error.emit(f"Network error: {e}")
except Exception as e:
self.update_error.emit(f"An error occurred: {e}")
class UpdateDownloader(QThread):
"""
Downloads the new executable and runs an updater script that kills the old process,
replaces the file, and displays a message in the terminal.
"""
download_finished = pyqtSignal()
download_error = pyqtSignal(str)
def __init__(self, download_url, parent_app):
super().__init__()
self.download_url = download_url
self.parent_app = parent_app
def run(self):
try:
app_path = sys.executable
app_dir = os.path.dirname(app_path)
temp_path = os.path.join(app_dir, f"{EXE_NAME}.tmp")
old_path = os.path.join(app_dir, f"{EXE_NAME}.old")
updater_script_path = os.path.join(app_dir, "updater.bat")
# --- NEW: Path for the PID file ---
pid_file_path = os.path.join(app_dir, "updater.pid")
# Download the new executable
with requests.get(self.download_url, stream=True, timeout=300) as r:
r.raise_for_status()
with open(temp_path, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
# --- NEW: Write the current Process ID to the pid file ---
with open(pid_file_path, "w") as f:
f.write(str(os.getpid()))
# --- NEW BATCH SCRIPT ---
# This script now reads the PID from the "updater.pid" file.
script_content = f"""
@echo off
SETLOCAL
echo.
echo Reading process information...
set /p PID=<{pid_file_path}
echo Closing the old application (PID: %PID%)...
taskkill /F /PID %PID%
echo Waiting for files to unlock...
timeout /t 2 /nobreak > nul
echo Replacing application files...
if exist "{old_path}" del /F /Q "{old_path}"
rename "{app_path}" "{os.path.basename(old_path)}"
rename "{temp_path}" "{EXE_NAME}"
echo.
echo ============================================================
echo Update Complete!
echo You can now close this window and run {EXE_NAME}.
echo ============================================================
echo.
pause
echo Cleaning up helper files...
del "{pid_file_path}"
del "%~f0"
ENDLOCAL
"""
with open(updater_script_path, "w") as f:
f.write(script_content)
# --- Launch the updater script via os.startfile, the approach known to work reliably ---
os.startfile(updater_script_path)
self.download_finished.emit()
except Exception as e:
self.download_error.emit(f"Failed to download or run updater: {e}")

View File

@@ -3,7 +3,7 @@ import html
import re
# --- Third-Party Library Imports ---
import requests
import cloudscraper # MODIFIED: Import cloudscraper
from PyQt5.QtCore import QCoreApplication, Qt
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QLineEdit, QListWidget,
@@ -12,7 +12,6 @@ from PyQt5.QtWidgets import (
# --- Local Application Imports ---
from ...i18n.translator import get_translation
# Corrected Import: Get the icon from the new assets utility module
from ..assets import get_app_icon_object
from ...utils.network_utils import prepare_cookies_for_request
from .CookieHelpDialog import CookieHelpDialog
@@ -41,9 +40,9 @@ class FavoriteArtistsDialog (QDialog ):
service_lower = service_name.lower()
coomer_primary_services = {'onlyfans', 'fansly', 'manyvids', 'candfans'}
if service_lower in coomer_primary_services:
return "coomer.st" # Use the new domain
return "coomer.st"
else:
return "kemono.cr" # Use the new domain
return "kemono.cr"
def _tr (self ,key ,default_text =""):
"""Helper to get translation based on current app language."""
@@ -126,9 +125,11 @@ class FavoriteArtistsDialog (QDialog ):
self .artist_list_widget .setVisible (show )
def _fetch_favorite_artists (self ):
# --- FIX: Use cloudscraper and add proper headers ---
scraper = cloudscraper.create_scraper()
# --- END FIX ---
if self.cookies_config['use_cookie']:
# --- Kemono Check with Fallback ---
kemono_cookies = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.cr"
@@ -140,7 +141,6 @@ class FavoriteArtistsDialog (QDialog ):
self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.su"
)
# --- Coomer Check with Fallback ---
coomer_cookies = prepare_cookies_for_request(
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.st"
@@ -153,28 +153,21 @@ class FavoriteArtistsDialog (QDialog ):
)
if not kemono_cookies and not coomer_cookies:
# If cookies are enabled but none could be loaded, show help and stop.
self.status_label.setText(self._tr("fav_artists_cookies_required_status", "Error: Cookies enabled but could not be loaded for any source."))
self._logger("Error: Cookies enabled but no valid cookies were loaded. Showing help dialog.")
cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
cookie_help_dialog.exec_()
self.download_button.setEnabled(False)
return # Stop further execution
kemono_fav_url ="https://kemono.su/api/v1/account/favorites?type=artist"
coomer_fav_url ="https://coomer.su/api/v1/account/favorites?type=artist"
return
self .all_fetched_artists =[]
fetched_any_successfully =False
errors_occurred =[]
any_cookies_loaded_successfully_for_any_source =False
kemono_cr_fav_url = "https://kemono.cr/api/v1/account/favorites?type=artist"
coomer_st_fav_url = "https://coomer.st/api/v1/account/favorites?type=artist"
api_sources = [
{"name": "Kemono.cr", "url": kemono_cr_fav_url, "domain": "kemono.cr"},
{"name": "Coomer.st", "url": coomer_st_fav_url, "domain": "coomer.st"}
{"name": "Kemono.cr", "url": "https://kemono.cr/api/v1/account/favorites?type=artist", "domain": "kemono.cr"},
{"name": "Coomer.st", "url": "https://coomer.st/api/v1/account/favorites?type=artist", "domain": "coomer.st"}
]
for source in api_sources :
@@ -185,41 +178,36 @@ class FavoriteArtistsDialog (QDialog ):
cookies_dict_for_source = None
if self.cookies_config['use_cookie']:
primary_domain = source['domain']
fallback_domain = None
if primary_domain == "kemono.cr":
fallback_domain = "kemono.su"
elif primary_domain == "coomer.st":
fallback_domain = "coomer.su"
fallback_domain = "kemono.su" if "kemono" in primary_domain else "coomer.su"
# First, try the primary domain
cookies_dict_for_source = prepare_cookies_for_request(
True,
self.cookies_config['cookie_text'],
self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'],
self._logger,
target_domain=primary_domain
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain=primary_domain
)
# If no cookies found, try the fallback domain
if not cookies_dict_for_source and fallback_domain:
self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
if not cookies_dict_for_source:
self._logger(f"Warning ({source['name']}): No cookies for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
cookies_dict_for_source = prepare_cookies_for_request(
True,
self.cookies_config['cookie_text'],
self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'],
self._logger,
target_domain=fallback_domain
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain=fallback_domain
)
if cookies_dict_for_source:
any_cookies_loaded_successfully_for_any_source = True
else:
self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this source (including fallbacks). Fetch might fail.")
self._logger(f"Warning ({source['name']}): Cookies enabled but not loaded for this source. Fetch may fail.")
try :
headers ={'User-Agent':'Mozilla/5.0'}
response =requests .get (source ['url'],headers =headers ,cookies =cookies_dict_for_source ,timeout =20 )
# --- FIX: Add Referer and Accept headers ---
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Referer': f"https://{source['domain']}/favorites",
'Accept': 'text/css'
}
# --- END FIX ---
# --- FIX: Use scraper instead of requests ---
response = scraper.get(source['url'], headers=headers, cookies=cookies_dict_for_source, timeout=20)
# --- END FIX ---
response .raise_for_status ()
artists_data_from_api =response .json ()
@@ -254,15 +242,10 @@ class FavoriteArtistsDialog (QDialog ):
fetched_any_successfully =True
self ._logger (f"Fetched {processed_artists_from_source } artists from {source ['name']}.")
except requests .exceptions .RequestException as e :
except Exception as e :
error_msg =f"Error fetching favorites from {source ['name']}: {e }"
self ._logger (error_msg )
errors_occurred .append (error_msg )
except Exception as e :
error_msg =f"An unexpected error occurred with {source ['name']}: {e }"
self ._logger (error_msg )
errors_occurred .append (error_msg )
if self .cookies_config ['use_cookie']and not any_cookies_loaded_successfully_for_any_source :
self .status_label .setText (self ._tr ("fav_artists_cookies_required_status","Error: Cookies enabled but could not be loaded for any source."))
@@ -288,7 +271,7 @@ class FavoriteArtistsDialog (QDialog ):
self ._show_content_elements (True )
self .download_button .setEnabled (True )
elif not fetched_any_successfully and not errors_occurred :
self .status_label .setText (self ._tr ("fav_artists_none_found_status","No favorite artists found on Kemono.su or Coomer.su."))
self .status_label .setText (self ._tr ("fav_artists_none_found_status","No favorite artists found on Kemono or Coomer."))
self ._show_content_elements (False )
self .download_button .setEnabled (False )
else :
@@ -344,4 +327,4 @@ class FavoriteArtistsDialog (QDialog ):
self .accept ()
def get_selected_artists (self ):
return self .selected_artists_data
return self .selected_artists_data
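A condensed sketch of the fetch pattern the fix above adopts (a cloudscraper session plus browser-like headers); the domain and cookie dict are placeholders, not the dialog's exact code.
```python
import cloudscraper

def fetch_favorites_sketch(domain, cookies=None):
    scraper = cloudscraper.create_scraper()
    headers = {
        'User-Agent': 'Mozilla/5.0',
        'Referer': f"https://{domain}/favorites",
    }
    resp = scraper.get(
        f"https://{domain}/api/v1/account/favorites?type=artist",
        headers=headers, cookies=cookies, timeout=20,
    )
    resp.raise_for_status()
    return resp.json()  # a list of artist dicts on success

# e.g. fetch_favorites_sketch("kemono.cr", cookies=loaded_cookie_dict)
# where loaded_cookie_dict is whatever prepare_cookies_for_request returned.
```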

View File

@@ -7,7 +7,7 @@ import traceback
import json
import re
from collections import defaultdict
import requests
import cloudscraper # MODIFIED: Import cloudscraper
from PyQt5.QtCore import QCoreApplication, Qt, pyqtSignal, QThread
from PyQt5.QtWidgets import (
QApplication, QDialog, QHBoxLayout, QLabel, QLineEdit, QListWidget,
@@ -42,10 +42,9 @@ class FavoritePostsFetcherThread (QThread ):
self .parent_logger_func (f"[FavPostsFetcherThread] {message }")
def run(self):
kemono_su_fav_posts_url = "https://kemono.su/api/v1/account/favorites?type=post"
coomer_su_fav_posts_url = "https://coomer.su/api/v1/account/favorites?type=post"
kemono_cr_fav_posts_url = "https://kemono.cr/api/v1/account/favorites?type=post"
coomer_st_fav_posts_url = "https://coomer.st/api/v1/account/favorites?type=post"
# --- FIX: Use cloudscraper and add proper headers ---
scraper = cloudscraper.create_scraper()
# --- END FIX ---
all_fetched_posts_temp = []
error_messages_for_summary = []
@@ -56,8 +55,8 @@ class FavoritePostsFetcherThread (QThread ):
self.progress_bar_update.emit(0, 0)
api_sources = [
{"name": "Kemono.cr", "url": kemono_cr_fav_posts_url, "domain": "kemono.cr"},
{"name": "Coomer.st", "url": coomer_st_fav_posts_url, "domain": "coomer.st"}
{"name": "Kemono.cr", "url": "https://kemono.cr/api/v1/account/favorites?type=post", "domain": "kemono.cr"},
{"name": "Coomer.st", "url": "https://coomer.st/api/v1/account/favorites?type=post", "domain": "coomer.st"}
]
api_sources_to_try =[]
@@ -81,32 +80,18 @@ class FavoritePostsFetcherThread (QThread ):
cookies_dict_for_source = None
if self.cookies_config['use_cookie']:
primary_domain = source['domain']
fallback_domain = None
if primary_domain == "kemono.cr":
fallback_domain = "kemono.su"
elif primary_domain == "coomer.st":
fallback_domain = "coomer.su"
fallback_domain = "kemono.su" if "kemono" in primary_domain else "coomer.su"
# First, try the primary domain
cookies_dict_for_source = prepare_cookies_for_request(
True,
self.cookies_config['cookie_text'],
self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'],
self._logger,
target_domain=primary_domain
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain=primary_domain
)
# If no cookies found, try the fallback domain
if not cookies_dict_for_source and fallback_domain:
self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
self._logger(f"Warning ({source['name']}): No cookies for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
cookies_dict_for_source = prepare_cookies_for_request(
True,
self.cookies_config['cookie_text'],
self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'],
self._logger,
target_domain=fallback_domain
True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
self.cookies_config['app_base_dir'], self._logger, target_domain=fallback_domain
)
if cookies_dict_for_source:
@@ -120,8 +105,18 @@ class FavoritePostsFetcherThread (QThread ):
QCoreApplication .processEvents ()
try :
headers ={'User-Agent':'Mozilla/5.0'}
response =requests .get (source ['url'],headers =headers ,cookies =cookies_dict_for_source ,timeout =20 )
# --- FIX: Add Referer and Accept headers ---
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
'Referer': f"https://{source['domain']}/favorites",
'Accept': 'text/css'
}
# --- END FIX ---
# --- FIX: Use scraper instead of requests ---
response = scraper.get(source['url'], headers=headers, cookies=cookies_dict_for_source, timeout=20)
# --- END FIX ---
response .raise_for_status ()
posts_data_from_api =response .json ()
@@ -153,33 +148,24 @@ class FavoritePostsFetcherThread (QThread ):
fetched_any_successfully =True
self ._logger (f"Fetched {processed_posts_from_source } posts from {source ['name']}.")
except requests .exceptions .RequestException as e :
except Exception as e :
err_detail =f"Error fetching favorite posts from {source ['name']}: {e }"
self ._logger (err_detail )
error_messages_for_summary .append (err_detail )
if e .response is not None and e .response .status_code ==401 :
if hasattr(e, 'response') and e.response is not None and e.response.status_code == 401:
self .finished .emit ([],"KEY_AUTH_FAILED")
self ._logger (f"Authorization failed for {source ['name']}, emitting KEY_AUTH_FAILED.")
return
except Exception as e :
err_detail =f"An unexpected error occurred with {source ['name']}: {e }"
self ._logger (err_detail )
error_messages_for_summary .append (err_detail )
if self .cancellation_event .is_set ():
self .finished .emit ([],"KEY_FETCH_CANCELLED_AFTER")
return
if self .cookies_config ['use_cookie']and not any_cookies_loaded_successfully_for_any_source :
if self .target_domain_preference and not any_cookies_loaded_successfully_for_any_source :
domain_key_part =self .error_key_map .get (self .target_domain_preference ,self .target_domain_preference .lower ().replace ('.','_'))
self .finished .emit ([],f"KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_FOR_DOMAIN_{domain_key_part }")
return
self .finished .emit ([],"KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_GENERIC")
return
@@ -643,4 +629,4 @@ class FavoritePostsDialog (QDialog ):
self .accept ()
def get_selected_posts (self ):
return self .selected_posts_data
return self .selected_posts_data
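Because the broad `except Exception` above also catches errors that carry no HTTP response, the 401 check has to probe for the attribute first; a minimal, self-contained illustration (the URL is a placeholder):
```python
import requests

def is_auth_failure(exc):
    # RequestException instances may carry the HTTP response; other
    # exceptions (e.g. JSON decode errors) will not, hence getattr.
    response = getattr(exc, 'response', None)
    return response is not None and response.status_code == 401

try:
    r = requests.get("https://example.com/api", timeout=20)  # placeholder URL
    r.raise_for_status()
    data = r.json()
except Exception as e:
    if is_auth_failure(e):
        pass  # emit KEY_AUTH_FAILED and stop, as the fetcher thread does
```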

View File

@@ -1,6 +1,7 @@
# --- Standard Library Imports ---
import os
import json
import sys
# --- PyQt5 Imports ---
from PyQt5.QtCore import Qt, QStandardPaths
@@ -16,9 +17,10 @@ from ..main_window import get_app_icon_object
from ...config.constants import (
THEME_KEY, LANGUAGE_KEY, DOWNLOAD_LOCATION_KEY,
RESOLUTION_KEY, UI_SCALE_KEY, SAVE_CREATOR_JSON_KEY,
COOKIE_TEXT_KEY, USE_COOKIE_KEY
COOKIE_TEXT_KEY, USE_COOKIE_KEY,
FETCH_FIRST_KEY
)
from ...services.updater import UpdateChecker, UpdateDownloader
class FutureSettingsDialog(QDialog):
"""
@@ -29,6 +31,7 @@ class FutureSettingsDialog(QDialog):
super().__init__(parent)
self.parent_app = parent_app_ref
self.setModal(True)
self.update_downloader_thread = None # To keep a reference
app_icon = get_app_icon_object()
if app_icon and not app_icon.isNull():
@@ -36,7 +39,7 @@ class FutureSettingsDialog(QDialog):
screen_height = QApplication.primaryScreen().availableGeometry().height() if QApplication.primaryScreen() else 800
scale_factor = screen_height / 800.0
base_min_w, base_min_h = 420, 360 # Adjusted height for new layout
base_min_w, base_min_h = 420, 480 # Increased height for update section
scaled_min_w = int(base_min_w * scale_factor)
scaled_min_h = int(base_min_h * scale_factor)
self.setMinimumSize(scaled_min_w, scaled_min_h)
@@ -49,25 +52,22 @@ class FutureSettingsDialog(QDialog):
"""Initializes all UI components and layouts for the dialog."""
main_layout = QVBoxLayout(self)
# --- Group 1: Interface Settings ---
self.interface_group_box = QGroupBox()
interface_layout = QGridLayout(self.interface_group_box)
# Theme
# Theme, UI Scale, Language (unchanged)...
self.theme_label = QLabel()
self.theme_toggle_button = QPushButton()
self.theme_toggle_button.clicked.connect(self._toggle_theme)
interface_layout.addWidget(self.theme_label, 0, 0)
interface_layout.addWidget(self.theme_toggle_button, 0, 1)
# UI Scale
self.ui_scale_label = QLabel()
self.ui_scale_combo_box = QComboBox()
self.ui_scale_combo_box.currentIndexChanged.connect(self._display_setting_changed)
interface_layout.addWidget(self.ui_scale_label, 1, 0)
interface_layout.addWidget(self.ui_scale_combo_box, 1, 1)
# Language
self.language_label = QLabel()
self.language_combo_box = QComboBox()
self.language_combo_box.currentIndexChanged.connect(self._language_selection_changed)
@@ -76,89 +76,146 @@ class FutureSettingsDialog(QDialog):
main_layout.addWidget(self.interface_group_box)
# --- Group 2: Download & Window Settings ---
self.download_window_group_box = QGroupBox()
download_window_layout = QGridLayout(self.download_window_group_box)
# Window Size (Resolution)
self.window_size_label = QLabel()
self.resolution_combo_box = QComboBox()
self.resolution_combo_box.currentIndexChanged.connect(self._display_setting_changed)
download_window_layout.addWidget(self.window_size_label, 0, 0)
download_window_layout.addWidget(self.resolution_combo_box, 0, 1)
# Default Path
self.default_path_label = QLabel()
self.save_path_button = QPushButton()
# --- START: MODIFIED LOGIC ---
self.save_path_button.clicked.connect(self._save_cookie_and_path)
# --- END: MODIFIED LOGIC ---
download_window_layout.addWidget(self.default_path_label, 1, 0)
download_window_layout.addWidget(self.save_path_button, 1, 1)
# Save Creator.json Checkbox
self.save_creator_json_checkbox = QCheckBox()
self.save_creator_json_checkbox.stateChanged.connect(self._creator_json_setting_changed)
self.save_creator_json_checkbox.stateChanged.connect(self._creator_json_setting_changed)
download_window_layout.addWidget(self.save_creator_json_checkbox, 2, 0, 1, 2)
self.fetch_first_checkbox = QCheckBox()
self.fetch_first_checkbox.stateChanged.connect(self._fetch_first_setting_changed)
download_window_layout.addWidget(self.fetch_first_checkbox, 3, 0, 1, 2)
main_layout.addWidget(self.download_window_group_box)
# --- NEW: Update Section ---
self.update_group_box = QGroupBox()
update_layout = QGridLayout(self.update_group_box)
self.version_label = QLabel()
self.update_status_label = QLabel()
self.check_update_button = QPushButton()
self.check_update_button.clicked.connect(self._check_for_updates)
update_layout.addWidget(self.version_label, 0, 0)
update_layout.addWidget(self.update_status_label, 0, 1)
update_layout.addWidget(self.check_update_button, 1, 0, 1, 2)
main_layout.addWidget(self.update_group_box)
# --- END: New Section ---
main_layout.addStretch(1)
# --- OK Button ---
self.ok_button = QPushButton()
self.ok_button.clicked.connect(self.accept)
main_layout.addWidget(self.ok_button, 0, Qt.AlignRight | Qt.AlignBottom)
def _retranslate_ui(self):
self.setWindowTitle(self._tr("settings_dialog_title", "Settings"))
self.interface_group_box.setTitle(self._tr("interface_group_title", "Interface Settings"))
self.download_window_group_box.setTitle(self._tr("download_window_group_title", "Download & Window Settings"))
self.theme_label.setText(self._tr("theme_label", "Theme:"))
self.ui_scale_label.setText(self._tr("ui_scale_label", "UI Scale:"))
self.language_label.setText(self._tr("language_label", "Language:"))
self.window_size_label.setText(self._tr("window_size_label", "Window Size:"))
self.default_path_label.setText(self._tr("default_path_label", "Default Path:"))
self.save_creator_json_checkbox.setText(self._tr("save_creator_json_label", "Save Creator.json file"))
self.fetch_first_checkbox.setText(self._tr("fetch_first_label", "Fetch First (Download after all pages are found)"))
self.fetch_first_checkbox.setToolTip(self._tr("fetch_first_tooltip", "If checked, the downloader will find all posts from a creator first before starting any downloads.\nThis can be slower to start but provides a more accurate progress bar."))
self._update_theme_toggle_button_text()
self.save_path_button.setText(self._tr("settings_save_cookie_path_button", "Save Cookie + Download Path"))
self.save_path_button.setToolTip(self._tr("settings_save_cookie_path_tooltip", "Save the current 'Download Location' and Cookie settings for future sessions."))
self.ok_button.setText(self._tr("ok_button", "OK"))
# --- NEW: Translations for Update Section ---
self.update_group_box.setTitle(self._tr("update_group_title", "Application Updates"))
current_version = self.parent_app.windowTitle().split(' v')[-1]
self.version_label.setText(self._tr("current_version_label", f"Current Version: v{current_version}"))
self.update_status_label.setText(self._tr("update_status_ready", "Ready to check."))
self.check_update_button.setText(self._tr("check_for_updates_button", "Check for Updates"))
# --- END: New Translations ---
self._populate_display_combo_boxes()
self._populate_language_combo_box()
self._load_checkbox_states()
def _check_for_updates(self):
"""Starts the update check thread."""
self.check_update_button.setEnabled(False)
self.update_status_label.setText(self._tr("update_status_checking", "Checking..."))
current_version = self.parent_app.windowTitle().split(' v')[-1]
self.update_checker_thread = UpdateChecker(current_version)
self.update_checker_thread.update_available.connect(self._on_update_available)
self.update_checker_thread.up_to_date.connect(self._on_up_to_date)
self.update_checker_thread.update_error.connect(self._on_update_error)
self.update_checker_thread.start()
def _on_update_available(self, new_version, download_url):
self.update_status_label.setText(self._tr("update_status_found", f"Update found: v{new_version}"))
self.check_update_button.setEnabled(True)
reply = QMessageBox.question(self, self._tr("update_available_title", "Update Available"),
self._tr("update_available_message", f"A new version (v{new_version}) is available.\nWould you like to download and install it now?"),
QMessageBox.Yes | QMessageBox.No, QMessageBox.Yes)
if reply == QMessageBox.Yes:
self.ok_button.setEnabled(False)
self.check_update_button.setEnabled(False)
self.update_status_label.setText(self._tr("update_status_downloading", "Downloading update..."))
self.update_downloader_thread = UpdateDownloader(download_url, self.parent_app)
self.update_downloader_thread.download_finished.connect(self._on_download_finished)
self.update_downloader_thread.download_error.connect(self._on_update_error)
self.update_downloader_thread.start()
def _on_download_finished(self):
QApplication.instance().quit()
def _on_up_to_date(self, message):
self.update_status_label.setText(self._tr("update_status_latest", message))
self.check_update_button.setEnabled(True)
def _on_update_error(self, message):
self.update_status_label.setText(self._tr("update_status_error", f"Error: {message}"))
self.check_update_button.setEnabled(True)
self.ok_button.setEnabled(True)
# --- (The rest of the file remains unchanged from your provided code) ---
def _load_checkbox_states(self):
"""Loads the initial state for all checkboxes from settings."""
self.save_creator_json_checkbox.blockSignals(True)
# Default to True so the feature is on by default for users
should_save = self.parent_app.settings.value(SAVE_CREATOR_JSON_KEY, True, type=bool)
self.save_creator_json_checkbox.setChecked(should_save)
self.save_creator_json_checkbox.blockSignals(False)
self.fetch_first_checkbox.blockSignals(True)
should_fetch_first = self.parent_app.settings.value(FETCH_FIRST_KEY, False, type=bool)
self.fetch_first_checkbox.setChecked(should_fetch_first)
self.fetch_first_checkbox.blockSignals(False)
def _creator_json_setting_changed(self, state):
"""Saves the state of the 'Save Creator.json' checkbox."""
is_checked = state == Qt.Checked
self.parent_app.settings.setValue(SAVE_CREATOR_JSON_KEY, is_checked)
self.parent_app.settings.sync()
def _fetch_first_setting_changed(self, state):
is_checked = state == Qt.Checked
self.parent_app.settings.setValue(FETCH_FIRST_KEY, is_checked)
self.parent_app.settings.sync()
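A small hedged sketch of why `type=bool` is passed when these settings are read back: some QSettings storage backends persist values as strings, so a bare read may come back as "false", which is truthy. The organization/application names below are placeholders.
```python
from PyQt5.QtCore import QSettings

settings = QSettings("ExampleOrg", "ExampleApp")  # names are placeholders
settings.setValue("fetch_first", False)

# Depending on the backend and whether the value was persisted, a bare read
# may return the string 'false' (truthy); type=bool makes it round-trip.
raw = settings.value("fetch_first")
flag = settings.value("fetch_first", False, type=bool)  # -> False
```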
def _tr(self, key, default_text=""):
if callable(get_translation) and self.parent_app:
return get_translation(self.parent_app.current_selected_language, key, default_text)
return default_text
def _retranslate_ui(self):
self.setWindowTitle(self._tr("settings_dialog_title", "Settings"))
# Group Box Titles
self.interface_group_box.setTitle(self._tr("interface_group_title", "Interface Settings"))
self.download_window_group_box.setTitle(self._tr("download_window_group_title", "Download & Window Settings"))
# Interface Group Labels
self.theme_label.setText(self._tr("theme_label", "Theme:"))
self.ui_scale_label.setText(self._tr("ui_scale_label", "UI Scale:"))
self.language_label.setText(self._tr("language_label", "Language:"))
# Download & Window Group Labels
self.window_size_label.setText(self._tr("window_size_label", "Window Size:"))
self.default_path_label.setText(self._tr("default_path_label", "Default Path:"))
self.save_creator_json_checkbox.setText(self._tr("save_creator_json_label", "Save Creator.json file"))
# --- START: MODIFIED LOGIC ---
# Buttons and Controls
self._update_theme_toggle_button_text()
self.save_path_button.setText(self._tr("settings_save_cookie_path_button", "Save Cookie + Download Path"))
self.save_path_button.setToolTip(self._tr("settings_save_cookie_path_tooltip", "Save the current 'Download Location' and Cookie settings for future sessions."))
self.ok_button.setText(self._tr("ok_button", "OK"))
# --- END: MODIFIED LOGIC ---
# Populate dropdowns
self._populate_display_combo_boxes()
self._populate_language_combo_box()
self._load_checkbox_states()
def _apply_theme(self):
if self.parent_app and self.parent_app.current_theme == "dark":
scale = getattr(self.parent_app, 'scale_factor', 1)
@@ -184,14 +241,7 @@ class FutureSettingsDialog(QDialog):
def _populate_display_combo_boxes(self):
self.resolution_combo_box.blockSignals(True)
self.resolution_combo_box.clear()
resolutions = [
("Auto", self._tr("auto_resolution", "Auto (System Default)")),
("1280x720", "1280 x 720"),
("1600x900", "1600 x 900"),
("1920x1080", "1920 x 1080 (Full HD)"),
("2560x1440", "2560 x 1440 (2K)"),
("3840x2160", "3840 x 2160 (4K)")
]
resolutions = [("Auto", "Auto"), ("1280x720", "1280x720"), ("1600x900", "1600x900"), ("1920x1080", "1920x1080")]
current_res = self.parent_app.settings.value(RESOLUTION_KEY, "Auto")
for res_key, res_name in resolutions:
self.resolution_combo_box.addItem(res_name, res_key)
@@ -210,35 +260,22 @@ class FutureSettingsDialog(QDialog):
(1.50, "150%"),
(1.75, "175%"),
(2.0, "200%")
]
current_scale = float(self.parent_app.settings.value(UI_SCALE_KEY, 1.0))
]
current_scale = self.parent_app.settings.value(UI_SCALE_KEY, 1.0)
for scale_val, scale_name in scales:
self.ui_scale_combo_box.addItem(scale_name, scale_val)
if abs(current_scale - scale_val) < 0.01:
if abs(float(current_scale) - scale_val) < 0.01:
self.ui_scale_combo_box.setCurrentIndex(self.ui_scale_combo_box.count() - 1)
self.ui_scale_combo_box.blockSignals(False)
def _display_setting_changed(self):
selected_res = self.resolution_combo_box.currentData()
selected_scale = self.ui_scale_combo_box.currentData()
self.parent_app.settings.setValue(RESOLUTION_KEY, selected_res)
self.parent_app.settings.setValue(UI_SCALE_KEY, selected_scale)
self.parent_app.settings.sync()
msg_box = QMessageBox(self)
msg_box.setIcon(QMessageBox.Information)
msg_box.setWindowTitle(self._tr("display_change_title", "Display Settings Changed"))
msg_box.setText(self._tr("language_change_message", "A restart is required for these changes to take effect."))
msg_box.setInformativeText(self._tr("language_change_informative", "Would you like to restart now?"))
restart_button = msg_box.addButton(self._tr("restart_now_button", "Restart Now"), QMessageBox.ApplyRole)
ok_button = msg_box.addButton(self._tr("ok_button", "OK"), QMessageBox.AcceptRole)
msg_box.setDefaultButton(ok_button)
msg_box.exec_()
if msg_box.clickedButton() == restart_button:
self.parent_app._request_restart_application()
QMessageBox.information(self, self._tr("display_change_title", "Display Settings Changed"),
self._tr("language_change_message", "A restart is required..."))
def _populate_language_combo_box(self):
self.language_combo_box.blockSignals(True)
@@ -248,7 +285,7 @@ class FutureSettingsDialog(QDialog):
("de", "Deutsch (German)"), ("es", "Español (Spanish)"), ("pt", "Português (Portuguese)"),
("ru", "Русский (Russian)"), ("zh_CN", "简体中文 (Simplified Chinese)"),
("zh_TW", "繁體中文 (Traditional Chinese)"), ("ko", "한국어 (Korean)")
]
]
current_lang = self.parent_app.current_selected_language
for lang_code, lang_name in languages:
self.language_combo_box.addItem(lang_name, lang_code)
@@ -262,61 +299,32 @@ class FutureSettingsDialog(QDialog):
self.parent_app.settings.setValue(LANGUAGE_KEY, selected_lang_code)
self.parent_app.settings.sync()
self.parent_app.current_selected_language = selected_lang_code
self._retranslate_ui()
if hasattr(self.parent_app, '_retranslate_main_ui'):
self.parent_app._retranslate_main_ui()
msg_box = QMessageBox(self)
msg_box.setIcon(QMessageBox.Information)
msg_box.setWindowTitle(self._tr("language_change_title", "Language Changed"))
msg_box.setText(self._tr("language_change_message", "A restart is required..."))
msg_box.setInformativeText(self._tr("language_change_informative", "Would you like to restart now?"))
restart_button = msg_box.addButton(self._tr("restart_now_button", "Restart Now"), QMessageBox.ApplyRole)
ok_button = msg_box.addButton(self._tr("ok_button", "OK"), QMessageBox.AcceptRole)
msg_box.setDefaultButton(ok_button)
msg_box.exec_()
if msg_box.clickedButton() == restart_button:
self.parent_app._request_restart_application()
self.parent_app._retranslate_main_ui()
QMessageBox.information(self, self._tr("language_change_title", "Language Changed"),
self._tr("language_change_message", "A restart is required..."))
def _save_cookie_and_path(self):
"""Saves the current download path and/or cookie settings from the main window."""
path_saved = False
cookie_saved = False
# --- Save Download Path Logic ---
if hasattr(self.parent_app, 'dir_input') and self.parent_app.dir_input:
current_path = self.parent_app.dir_input.text().strip()
if current_path and os.path.isdir(current_path):
self.parent_app.settings.setValue(DOWNLOAD_LOCATION_KEY, current_path)
path_saved = True
# --- Save Cookie Logic ---
if hasattr(self.parent_app, 'use_cookie_checkbox'):
use_cookie = self.parent_app.use_cookie_checkbox.isChecked()
cookie_content = self.parent_app.cookie_text_input.text().strip()
if use_cookie and cookie_content:
self.parent_app.settings.setValue(USE_COOKIE_KEY, True)
self.parent_app.settings.setValue(COOKIE_TEXT_KEY, cookie_content)
cookie_saved = True
else: # Also save the 'off' state
else:
self.parent_app.settings.setValue(USE_COOKIE_KEY, False)
self.parent_app.settings.setValue(COOKIE_TEXT_KEY, "")
self.parent_app.settings.sync()
# --- User Feedback ---
if path_saved and cookie_saved:
message = self._tr("settings_save_both_success", "Download location and cookie settings saved.")
elif path_saved:
message = self._tr("settings_save_path_only_success", "Download location saved. No cookie settings were active to save.")
elif cookie_saved:
message = self._tr("settings_save_cookie_only_success", "Cookie settings saved. Download location was not set.")
if path_saved or cookie_saved:
QMessageBox.information(self, "Settings Saved", "Settings have been saved.")
else:
QMessageBox.warning(self, self._tr("settings_save_nothing_title", "Nothing to Save"),
self._tr("settings_save_nothing_message", "The download location is not a valid directory and no cookie was active."))
return
QMessageBox.information(self, self._tr("settings_save_success_title", "Settings Saved"), message)
QMessageBox.warning(self, "Nothing to Save", "No valid settings to save.")

View File

@@ -0,0 +1,118 @@
# multipart_scope_dialog.py
from PyQt5.QtWidgets import (
QDialog, QVBoxLayout, QGroupBox, QRadioButton, QDialogButtonBox, QButtonGroup,
QLabel, QLineEdit, QHBoxLayout, QFrame
)
from PyQt5.QtGui import QIntValidator
from PyQt5.QtCore import Qt
# Ideally MAX_PARTS would be imported from its canonical module,
# but it is defined here so this dialog stays self-contained.
MAX_PARTS = 16
class MultipartScopeDialog(QDialog):
"""
A dialog to let the user select the scope, number of parts, and minimum size for multipart downloads.
"""
SCOPE_VIDEOS = 'videos'
SCOPE_ARCHIVES = 'archives'
SCOPE_BOTH = 'both'
def __init__(self, current_scope='both', current_parts=4, current_min_size_mb=100, parent=None):
super().__init__(parent)
self.setWindowTitle("Multipart Download Options")
self.setWindowFlags(self.windowFlags() & ~Qt.WindowContextHelpButtonHint)
self.setMinimumWidth(350)
# Main Layout
layout = QVBoxLayout(self)
# --- Options Group for Scope ---
self.options_group_box = QGroupBox("Apply multipart downloads to:")
options_layout = QVBoxLayout()
# ... (Radio buttons and button group code remains unchanged) ...
self.radio_videos = QRadioButton("Videos Only")
self.radio_archives = QRadioButton("Archives Only (.zip, .rar, etc.)")
self.radio_both = QRadioButton("Both Videos and Archives")
if current_scope == self.SCOPE_VIDEOS:
self.radio_videos.setChecked(True)
elif current_scope == self.SCOPE_ARCHIVES:
self.radio_archives.setChecked(True)
else:
self.radio_both.setChecked(True)
self.button_group = QButtonGroup(self)
self.button_group.addButton(self.radio_videos)
self.button_group.addButton(self.radio_archives)
self.button_group.addButton(self.radio_both)
options_layout.addWidget(self.radio_videos)
options_layout.addWidget(self.radio_archives)
options_layout.addWidget(self.radio_both)
self.options_group_box.setLayout(options_layout)
layout.addWidget(self.options_group_box)
# --- START: MODIFIED Download Settings Group ---
self.settings_group_box = QGroupBox("Download settings:")
settings_layout = QVBoxLayout()
# Layout for Parts count
parts_layout = QHBoxLayout()
self.parts_label = QLabel("Number of download parts per file:")
self.parts_input = QLineEdit(str(current_parts))
self.parts_input.setValidator(QIntValidator(2, MAX_PARTS, self))
self.parts_input.setFixedWidth(40)
self.parts_input.setToolTip(f"Set the number of concurrent connections per file (2-{MAX_PARTS}).")
parts_layout.addWidget(self.parts_label)
parts_layout.addStretch()
parts_layout.addWidget(self.parts_input)
settings_layout.addLayout(parts_layout)
# Layout for Minimum Size
size_layout = QHBoxLayout()
self.size_label = QLabel("Minimum file size for multipart (MB):")
self.size_input = QLineEdit(str(current_min_size_mb))
self.size_input.setValidator(QIntValidator(10, 10000, self)) # Min 10MB, Max ~10GB
self.size_input.setFixedWidth(40)
self.size_input.setToolTip("Files smaller than this will use a normal, single-part download.")
size_layout.addWidget(self.size_label)
size_layout.addStretch()
size_layout.addWidget(self.size_input)
settings_layout.addLayout(size_layout)
self.settings_group_box.setLayout(settings_layout)
layout.addWidget(self.settings_group_box)
# --- END: MODIFIED Download Settings Group ---
# OK and Cancel Buttons
self.button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel)
self.button_box.accepted.connect(self.accept)
self.button_box.rejected.connect(self.reject)
layout.addWidget(self.button_box)
self.setLayout(layout)
def get_selected_scope(self):
# ... (This method remains unchanged) ...
if self.radio_videos.isChecked():
return self.SCOPE_VIDEOS
if self.radio_archives.isChecked():
return self.SCOPE_ARCHIVES
return self.SCOPE_BOTH
def get_selected_parts(self):
# ... (This method remains unchanged) ...
try:
parts = int(self.parts_input.text())
return max(2, min(parts, MAX_PARTS))
except (ValueError, TypeError):
return 4
def get_selected_min_size(self):
"""Returns the selected minimum size in MB as an integer."""
try:
size = int(self.size_input.text())
return max(10, min(size, 10000)) # Enforce valid range
except (ValueError, TypeError):
return 100 # Return a safe default
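A possible call site for the dialog above, shown only as a usage sketch; the parent widget and variable names are assumptions.
```python
from PyQt5.QtWidgets import QDialog
# assuming MultipartScopeDialog is importable from the module above

dialog = MultipartScopeDialog(
    current_scope=MultipartScopeDialog.SCOPE_BOTH,
    current_parts=4,
    current_min_size_mb=100,
    parent=main_window,  # any QWidget parent; name is illustrative
)
if dialog.exec_() == QDialog.Accepted:
    scope = dialog.get_selected_scope()          # 'videos' | 'archives' | 'both'
    parts = dialog.get_selected_parts()          # clamped to 2..MAX_PARTS
    min_size_mb = dialog.get_selected_min_size() # clamped to 10..10000
```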

View File

@@ -3,8 +3,27 @@ import re
try:
from fpdf import FPDF
FPDF_AVAILABLE = True
# --- FIX: Move the class definition inside the try block ---
class PDF(FPDF):
"""Custom PDF class to handle headers and footers."""
def header(self):
pass
def footer(self):
self.set_y(-15)
if self.font_family:
self.set_font(self.font_family, '', 8)
else:
self.set_font('Arial', '', 8)
self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'C')
except ImportError:
FPDF_AVAILABLE = False
# If the import fails, FPDF and PDF will not be defined,
# but the program won't crash here.
FPDF = None
PDF = None
def strip_html_tags(text):
if not text:
@@ -12,19 +31,6 @@ def strip_html_tags(text):
clean = re.compile('<.*?>')
return re.sub(clean, '', text)
class PDF(FPDF):
"""Custom PDF class to handle headers and footers."""
def header(self):
pass
def footer(self):
self.set_y(-15)
if self.font_family:
self.set_font(self.font_family, '', 8)
else:
self.set_font('Arial', '', 8)
self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'C')
def create_single_pdf_from_content(posts_data, output_filename, font_path, logger=print):
"""
Creates a single, continuous PDF, correctly formatting both descriptions and comments.
@@ -68,7 +74,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logger=print):
pdf.ln(10)
pdf.set_font(default_font_family, 'B', 16)
pdf.multi_cell(w=0, h=10, text=post.get('title', 'Untitled Post'), align='L')
pdf.multi_cell(w=0, h=10, txt=post.get('title', 'Untitled Post'), align='L')
pdf.ln(5)
if 'comments' in post and post['comments']:
@@ -89,7 +95,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logger=print):
pdf.ln(10)
pdf.set_font(default_font_family, '', 11)
pdf.multi_cell(0, 7, body)
pdf.multi_cell(w=0, h=7, txt=body)
if comment_index < len(comments_list) - 1:
pdf.ln(3)
@@ -97,7 +103,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logger=print):
pdf.ln(3)
elif 'content' in post:
pdf.set_font(default_font_family, '', 12)
pdf.multi_cell(w=0, h=7, text=post.get('content', 'No Content'))
pdf.multi_cell(w=0, h=7, txt=post.get('content', 'No Content'))
try:
pdf.output(output_filename)
@@ -105,4 +111,4 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
return True
except Exception as e:
logger(f"❌ A critical error occurred while saving the final PDF: {e}")
return False
return False
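The relocation above matters because subclassing at module scope fails at import time when the base class is missing; the guarded pattern in general form (all names illustrative, not taken from the project):
```python
try:
    from fpdf import FPDF
    FPDF_AVAILABLE = True

    class ReportPDF(FPDF):
        # Defined only when the optional dependency imported successfully,
        # so importing this module never raises NameError without fpdf2.
        pass

except ImportError:
    FPDF_AVAILABLE = False
    FPDF = None
    ReportPDF = None

def export_report(path):
    if not FPDF_AVAILABLE:
        raise RuntimeError("fpdf2 is not installed; run: pip install fpdf2")
    pdf = ReportPDF()
    pdf.add_page()
    pdf.output(path)
```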

View File

@@ -0,0 +1,146 @@
import os
import re
import datetime
try:
from fpdf import FPDF
FPDF_AVAILABLE = True
class PDF(FPDF):
"""Custom PDF class for Discord chat logs."""
def __init__(self, server_name, channel_name, *args, **kwargs):
super().__init__(*args, **kwargs)
self.server_name = server_name
self.channel_name = channel_name
self.default_font_family = 'DejaVu' # Can be changed to Arial if font fails
def header(self):
if self.page_no() == 1:
return # No header on the title page
self.set_font(self.default_font_family, '', 8)
self.cell(0, 10, f'{self.server_name} - #{self.channel_name}', 0, 0, 'L')
self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'R')
self.ln(10)
def footer(self):
pass # No footer needed, header has page number
except ImportError:
FPDF_AVAILABLE = False
FPDF = None
PDF = None
def create_pdf_from_discord_messages(messages_data, server_name, channel_name, output_filename, font_path, logger=print):
"""
Creates a single PDF from a list of Discord message objects, formatted as a chat log.
UPDATED to include clickable links for attachments and embeds.
"""
if not FPDF_AVAILABLE:
logger("❌ PDF Creation failed: 'fpdf2' library is not installed.")
return False
if not messages_data:
logger(" No messages were found or fetched to create a PDF.")
return False
logger(" Sorting messages by date (oldest first)...")
messages_data.sort(key=lambda m: m.get('published', ''))
pdf = PDF(server_name, channel_name)
default_font_family = 'DejaVu'
try:
bold_font_path = font_path.replace("DejaVuSans.ttf", "DejaVuSans-Bold.ttf")
if not os.path.exists(font_path) or not os.path.exists(bold_font_path):
raise RuntimeError("Font files not found")
pdf.add_font('DejaVu', '', font_path, uni=True)
pdf.add_font('DejaVu', 'B', bold_font_path, uni=True)
except Exception as font_error:
logger(f" ⚠️ Could not load DejaVu font: {font_error}. Falling back to Arial.")
default_font_family = 'Arial'
pdf.default_font_family = 'Arial'
# --- Title Page ---
pdf.add_page()
pdf.set_font(default_font_family, 'B', 24)
pdf.cell(w=0, h=20, text="Discord Chat Log", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.ln(10)
pdf.set_font(default_font_family, '', 16)
pdf.cell(w=0, h=10, text=f"Server: {server_name}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.cell(w=0, h=10, text=f"Channel: #{channel_name}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.ln(5)
pdf.set_font(default_font_family, '', 10)
pdf.cell(w=0, h=10, text=f"Generated on: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.cell(w=0, h=10, text=f"Total Messages: {len(messages_data)}", align='C', new_x="LMARGIN", new_y="NEXT")
pdf.add_page()
logger(f" Starting PDF creation with {len(messages_data)} messages...")
for i, message in enumerate(messages_data):
author = message.get('author', {}).get('global_name') or message.get('author', {}).get('username', 'Unknown User')
timestamp_str = message.get('published', '')
content = message.get('content', '')
attachments = message.get('attachments', [])
embeds = message.get('embeds', [])
try:
# Handle timezone information correctly
if timestamp_str.endswith('Z'):
timestamp_str = timestamp_str[:-1] + '+00:00'
dt_obj = datetime.datetime.fromisoformat(timestamp_str)
formatted_timestamp = dt_obj.strftime('%Y-%m-%d %H:%M:%S')
except (ValueError, TypeError):
formatted_timestamp = timestamp_str
# Draw a separator line
if i > 0:
pdf.ln(2)
pdf.set_draw_color(200, 200, 200) # Light grey line
pdf.cell(0, 0, '', border='T')
pdf.ln(2)
# Message Header
pdf.set_font(default_font_family, 'B', 11)
pdf.write(5, f"{author} ")
pdf.set_font(default_font_family, '', 9)
pdf.set_text_color(128, 128, 128)
pdf.write(5, f"({formatted_timestamp})")
pdf.set_text_color(0, 0, 0)
pdf.ln(6)
# Message Content
if content:
pdf.set_font(default_font_family, '', 10)
pdf.multi_cell(w=0, h=5, text=content)
# --- START: MODIFIED ATTACHMENT AND EMBED LOGIC ---
if attachments or embeds:
pdf.ln(1)
pdf.set_font(default_font_family, '', 9)
pdf.set_text_color(22, 119, 219) # A nice blue for links
for att in attachments:
file_name = att.get('name', 'untitled')
file_path = att.get('path', '')
# Construct the full, clickable URL for the attachment
full_url = f"https://kemono.cr/data{file_path}"
pdf.write(5, text=f"[Attachment: {file_name}]", link=full_url)
pdf.ln() # New line after each attachment
for embed in embeds:
embed_url = embed.get('url', 'no url')
# The embed URL is already a full URL
pdf.write(5, text=f"[Embed: {embed_url}]", link=embed_url)
pdf.ln() # New line after each embed
pdf.set_text_color(0, 0, 0) # Reset color to black
# --- END: MODIFIED ATTACHMENT AND EMBED LOGIC ---
try:
pdf.output(output_filename)
logger(f"✅ Successfully created Discord chat log PDF: '{os.path.basename(output_filename)}'")
return True
except Exception as e:
logger(f"❌ A critical error occurred while saving the final PDF: {e}")
return False
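The timestamp handling above exists because `datetime.fromisoformat` only accepts a trailing 'Z' from Python 3.11 onward; roughly, as a standalone sketch (the sample string is invented):
```python
import datetime

def parse_discord_timestamp(ts):
    # Older Pythons reject a trailing 'Z', so rewrite it as an explicit
    # UTC offset before parsing.
    if ts.endswith('Z'):
        ts = ts[:-1] + '+00:00'
    try:
        return datetime.datetime.fromisoformat(ts).strftime('%Y-%m-%d %H:%M:%S')
    except (ValueError, TypeError):
        return ts  # fall back to the raw string, as the exporter does

# parse_discord_timestamp("2025-08-01T06:33:36.000Z") -> "2025-08-01 06:33:36"
```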

File diff suppressed because it is too large

View File

@@ -138,31 +138,54 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
return None
# In src/utils/network_utils.py
def extract_post_info(url_string):
"""
Parses a URL string to extract the service, user ID, and post ID.
UPDATED to support Discord, Bunkr, and nhentai URLs.
Args:
url_string (str): The URL to parse.
Returns:
tuple: A tuple containing (service, user_id, post_id). Any can be None.
tuple: A tuple containing (service, id1, id2).
For posts: (service, user_id, post_id).
For Discord: ('discord', server_id, channel_id).
For Bunkr: ('bunkr', full_url, None).
For nhentai: ('nhentai', gallery_id, None).
"""
if not isinstance(url_string, str) or not url_string.strip():
return None, None, None
stripped_url = url_string.strip()
# --- Bunkr Check ---
bunkr_pattern = re.compile(
r"(?:https?://)?(?:[a-zA-Z0-9-]+\.)?bunkr\.(?:si|la|ws|red|black|media|site|is|to|ac|cr|ci|fi|pk|ps|sk|ph|su|ru)|bunkrr\.ru"
)
if bunkr_pattern.search(stripped_url):
return 'bunkr', stripped_url, None
# --- nhentai Check ---
nhentai_match = re.search(r'nhentai\.net/g/(\d+)', stripped_url)
if nhentai_match:
return 'nhentai', nhentai_match.group(1), None
# --- Kemono/Coomer/Discord Parsing ---
try:
parsed_url = urlparse(url_string.strip())
parsed_url = urlparse(stripped_url)
path_parts = [part for part in parsed_url.path.strip('/').split('/') if part]
# Standard format: /<service>/user/<user_id>/post/<post_id>
if len(path_parts) >= 3 and path_parts[0].lower() == 'discord' and path_parts[1].lower() == 'server':
return 'discord', path_parts[2], path_parts[3] if len(path_parts) >= 4 else None
if len(path_parts) >= 3 and path_parts[1].lower() == 'user':
service = path_parts[0]
user_id = path_parts[2]
post_id = path_parts[4] if len(path_parts) >= 5 and path_parts[3].lower() == 'post' else None
return service, user_id, post_id
# API format: /api/v1/<service>/user/<user_id>...
if len(path_parts) >= 5 and path_parts[0:2] == ['api', 'v1'] and path_parts[3].lower() == 'user':
service = path_parts[2]
user_id = path_parts[4]
@@ -173,8 +196,7 @@ def extract_post_info(url_string):
print(f"Debug: Exception during URL parsing for '{url_string}': {e}")
return None, None, None
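Based on the branches above, the return shapes look roughly like this (all IDs are invented):
```python
# Illustrative return shapes for extract_post_info:
#   extract_post_info("https://kemono.cr/patreon/user/12345/post/67890")
#       -> ('patreon', '12345', '67890')
#   extract_post_info("https://kemono.cr/patreon/user/12345")
#       -> ('patreon', '12345', None)
#   extract_post_info("https://kemono.cr/discord/server/111222333/444555666")
#       -> ('discord', '111222333', '444555666')
#   extract_post_info("https://nhentai.net/g/123456")
#       -> ('nhentai', '123456', None)
#   extract_post_info("https://bunkr.si/a/abc123")
#       -> ('bunkr', 'https://bunkr.si/a/abc123', None)
```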
def get_link_platform(url):
"""
Identifies the platform of a given URL based on its domain.

View File

@@ -391,6 +391,10 @@ def setup_ui(main_app):
main_app.link_search_button.setVisible(False)
main_app.link_search_button.setFixedWidth(int(30 * scale))
log_title_layout.addWidget(main_app.link_search_button)
main_app.discord_scope_toggle_button = QPushButton("Scope: Files")
main_app.discord_scope_toggle_button.setVisible(False) # Hidden by default
main_app.discord_scope_toggle_button.setFixedWidth(int(140 * scale))
log_title_layout.addWidget(main_app.discord_scope_toggle_button)
main_app.manga_rename_toggle_button = QPushButton()
main_app.manga_rename_toggle_button.setVisible(False)
main_app.manga_rename_toggle_button.setFixedWidth(int(140 * scale))