Mirror of https://github.com/Yuvi9587/Kemono-Downloader.git
Synced 2025-12-29 16:14:44 +00:00

Compare commits: e5b519d5ce...v7.8.0 (52 commits)
Commit SHAs in this comparison:

b5b6c1bc46, 67faea0992, be03f914ef, ec9900b90f, 55ebfdb980, 4a93b721e2, 257111d462,
9563ce82db, 169ded3fd8, 7e8e8a59e2, 0acd433920, cef4211d7b, 9fe0c37127, 5d4e08f794,
8239fdb8f3, df8a305e81, 090f1a638d, 871ee75a2a, fea59c7903, a9b210b2ba, ec94417569,
0a902895a8, 7217bfdb39, 24880b5042, 510ae5e1d1, 65b4759bad, 6e993d88de, cc3565b12b,
f8b150dfdb, 5f7b526852, b0a6c264e1, d9364f4f91, 9cd48bb63a, d0f11c4a06, 26fa3b9bc1,
f7c4d892a8, 661b97aa16, 3704fece2b, bdb7ac93c4, 76d4a3ea8a, ccc7804505, 4ee750c5d4,
e9be13c4e3, a5cb04ea6f, 842f18d70d, fb3f0e8913, 0758887154, e752d881e7, a776d1abe9,
21d1ce4fa9, d5112a25ee, 791ce503ff
BIN assets/Ko-fi.png (new file) — binary file not shown. After: Size: 2.9 KiB
BIN assets/buymeacoffee.png (new file) — binary file not shown. After: Size: 3.2 KiB
BIN assets/patreon.png (new file) — binary file not shown. After: Size: 978 B
features.md (286 lines changed)

@@ -1,147 +1,159 @@
<div>
<h1>Kemono Downloader - Comprehensive Feature Guide</h1>
<p>This guide provides a detailed overview of all user interface elements, input fields, buttons, popups, and functionalities available in the application.</p>
<hr>
<h2>1. Core Concepts & Supported Sites</h2>
<h3>URL Input (🔗)</h3>
<p>This is the primary input field where you specify the content you want to download.</p>
<p><strong>Supported URL Types:</strong></p>
<ul>
<li><strong>Creator URL</strong>: A link to a creator's main page. Downloads all posts from that creator.</li>
<li><strong>Post URL</strong>: A direct link to a specific post. Downloads only that single post.</li>
<li><strong>Batch Command</strong>: Special keywords that trigger bulk downloading from a text file (see the Batch Downloading section).</li>
</ul>
<p><strong>Supported Websites:</strong></p>
<ul>
<li>Kemono (<code>kemono.su</code>, <code>kemono.party</code>, etc.)</li>
<li>Coomer (<code>coomer.su</code>, <code>coomer.party</code>, etc.)</li>
<li>Discord (via the Kemono/Coomer API)</li>
<li>Bunkr</li>
<li>Erome</li>
<li>Saint2.su</li>
<li>nhentai</li>
</ul>
<hr>
<h2>2. Main Download Controls & Inputs</h2>
<h3>Download Location (📁)</h3>
<p>This input defines the main folder where your files will be saved.</p>
<ul>
<li><strong>Browse Button</strong>: Opens a system dialog to choose a folder.</li>
<li><strong>Directory Creation</strong>: If the folder doesn't exist, the app will ask for confirmation to create it.</li>
<li><strong>Page Range (Start/End)</strong>: These fields activate only for creator feed URLs. They allow you to download a specific slice of a creator's history (e.g., pages 5 through 10) instead of the entire feed.</li>
</ul>
<h3>🎨 Creator Selection Popup</h3>
<p>This button opens a powerful dialog listing all known creators. From here, you can:</p>
<ul>
<li><strong>Search and Queue</strong>: Search for creators and check multiple names. Clicking "Add Selected" populates the main input field, preparing a batch download.</li>
<li><strong>Check for Updates</strong>: Select a single creator's saved profile. This loads their information and switches the main download button to "Check for Updates" mode, allowing you to download only new content since your last session.</li>
</ul>
<h3>Filter by Character(s) & Scope</h3>
<p>Used to download content for specific characters or series and organize them into subfolders.</p>
<ul>
<li><strong>Input Field</strong>: Enter comma-separated names (e.g., <code>Tifa, Aerith</code>). Group aliases using parentheses for folder naming (e.g., <code>(Cloud, Zack)</code>).</li>
<li><strong>Scope Button</strong>: Cycles through where to look for name matches:
<ul>
<li><strong>Filter: Title</strong>: Matches names in the post title.</li>
<li><strong>Filter: Files</strong>: Matches names in the filenames.</li>
<li><strong>Filter: Both</strong>: Checks the title first, then filenames.</li>
<li><strong>Filter: Comments</strong> (Beta): Checks filenames first, then post comments.</li>
</ul>
</li>
</ul>
<h3>Skip with Words & Scope</h3>
<p>Prevents downloading content based on keywords or file size.</p>
<ul>
<li><strong>Input Field</strong>: Enter comma-separated keywords (e.g., <code>WIP, sketch, preview</code>).</li>
<li><strong>Skip by Size</strong>: Enter a number in square brackets to skip any file <strong>smaller than</strong> that size in MB. For example, <code>WIP, [200]</code> skips files with "WIP" in the name AND any file smaller than 200 MB (see the sketch after this list).</li>
<li><strong>Scope Button</strong>: Cycles through where to apply keyword filters:
<ul>
<li><strong>Scope: Posts</strong>: Skips the entire post if the title matches.</li>
<li><strong>Scope: Files</strong>: Skips individual files if the filename matches.</li>
<li><strong>Scope: Both</strong>: Checks the post title first, then individual files.</li>
</ul>
</li>
</ul>
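<p>For illustration, a minimal sketch of how such a skip list could be parsed. The helper is hypothetical, not the application's actual implementation:</p>
<pre><code>import re

def parse_skip_list(raw):
    """Split 'WIP, sketch, [200]' into keywords and a minimum size in MB (hypothetical helper)."""
    keywords, min_size_mb = [], None
    for token in (t.strip() for t in raw.split(",")):
        m = re.fullmatch(r"\[(\d+)\]", token)  # size rule, e.g. [200]
        if m:
            min_size_mb = int(m.group(1))
        elif token:
            keywords.append(token.lower())
    return keywords, min_size_mb

# parse_skip_list("WIP, [200]") -> (["wip"], 200)
</code></pre>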
<h3>Remove Words from Name (✂️)</h3>
<p>Enter comma-separated words to remove from final filenames (e.g., <code>patreon, [HD]</code>). This helps clean up file naming.</p>
<hr>
<h2>3. Primary Download Modes (Filter File Section)</h2>
<p>This section uses radio buttons to set the main download mode. Only one can be active at a time.</p>
<ul>
<li><strong>All</strong>: Default mode. Downloads every file and attachment.</li>
<li><strong>Images/GIFs</strong>: Downloads only common image formats.</li>
<li><strong>Videos</strong>: Downloads only common video formats.</li>
<li><strong>Only Archives</strong>: Downloads only <code>.zip</code>, <code>.rar</code>, etc.</li>
<li><strong>Only Audio</strong>: Downloads only common audio formats.</li>
<li><strong>Only Links</strong>: Extracts external hyperlinks (e.g., Mega, Google Drive) from post descriptions instead of downloading files. <strong>This mode unlocks special features</strong> (see section 6).</li>
<li><strong>More</strong>: Opens a dialog to download text-based content (see the sketch after this list).
<ul>
<li><strong>Scope</strong>: Choose to extract text from the post description or comments.</li>
<li><strong>Export Format</strong>: Save as PDF, DOCX, or TXT.</li>
<li><strong>Single PDF</strong>: Compiles the text from all downloaded posts in the session into one continuous, sorted PDF document.</li>
</ul>
</li>
</ul>
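<p>The readme lists <code>fpdf</code> and <code>python-docx</code> as optional dependencies for these exports. A minimal sketch of compiling several posts into a single PDF with the fpdf library might look like this; the function name, file name, and post structure are illustrative assumptions:</p>
<pre><code>from fpdf import FPDF  # optional dependency (pip install fpdf)

def compile_single_pdf(posts, out_path="session_posts.pdf"):
    """posts: list of (title, text) tuples, already sorted (hypothetical helper)."""
    pdf = FPDF()
    pdf.set_auto_page_break(auto=True, margin=15)
    for title, text in posts:
        pdf.add_page()
        pdf.set_font("helvetica", "B", 14)
        pdf.multi_cell(0, 8, title)
        pdf.set_font("helvetica", size=11)
        # Core fonts are latin-1 only; non-latin text would need a Unicode TTF via add_font().
        pdf.multi_cell(0, 6, text)
    pdf.output(out_path)
</code></pre>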
<hr>
<h2>4. Advanced Features & Toggles (Checkboxes)</h2>
<h3>Folder Organization</h3>
<ul>
<li><strong>Separate folders by Known.txt</strong>: Automatically organizes downloads into subfolders based on name matches from your <code>Known.txt</code> list or the "Filter by Character(s)" input.</li>
<li><strong>Subfolder per post</strong>: Creates a unique folder for each post, named after the post's title. This prevents files from different posts from mixing.</li>
<li><strong>Date prefix</strong>: (Only available with "Subfolder per post") Prepends the post date to the folder name (e.g., <code>2025-08-03 My Post Title</code>) for chronological sorting.</li>
</ul>
<h3>Special Modes</h3>
<ul>
<li><strong>⭐ Favorite Mode</strong>: Switches the UI to download from your personal favorites list instead of using the URL input.</li>
<li><strong>Manga/Comic mode</strong>: Sorts a creator's posts from oldest to newest before downloading, ensuring correct page order. A scope button appears to control the filename style (e.g., using post title, date, or a global number).</li>
</ul>
<h3>File Handling</h3>
<ul>
<li><strong>Skip Archives</strong>: Ignores <code>.zip</code> and <code>.rar</code> files during downloads.</li>
<li><strong>Download Thumbnail Only</strong>: Saves only the small preview images instead of full-resolution files.</li>
<li><strong>Scan Content for Images</strong>: Parses post HTML to find embedded images that may not be listed in the API data, ensuring a more complete download.</li>
<li><strong>Compress to WebP</strong>: Converts large images (over 1.5 MB) to the space-saving WebP format (see the sketch after this list).</li>
<li><strong>Keep Duplicates</strong>: Opens a dialog to control how duplicate files are handled (skip by default, keep all, or keep a specific number of copies).</li>
</ul>
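<p>As an illustration, converting an oversized image to WebP with Pillow (the optional <code>pillow</code> dependency) could look roughly like this; the 1.5 MB threshold mirrors the description above, and the helper name is hypothetical:</p>
<pre><code>import os
from PIL import Image  # optional dependency (pip install pillow)

SIZE_THRESHOLD = int(1.5 * 1024 * 1024)  # 1.5 MB, per the feature description

def maybe_compress_to_webp(path):
    """Convert an image to WebP only if it exceeds the size threshold (hypothetical helper)."""
    if os.path.getsize(path) <= SIZE_THRESHOLD:
        return path
    webp_path = os.path.splitext(path)[0] + ".webp"
    with Image.open(path) as im:
        im.save(webp_path, "WEBP", quality=80)
    os.remove(path)  # keep only the compressed copy
    return webp_path
</code></pre>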
<h3>General Functionality</h3>
<ul>
<li><strong>Use cookie</strong>: Enables login-based access to content behind a paywall or login. You can paste a cookie string or browse for a <code>cookies.txt</code> file.</li>
<li><strong>Use Multithreading</strong>: Enables parallel processing of posts for faster downloads. You can set the number of concurrent worker threads.</li>
<li><strong>Show external links in log</strong>: Opens a secondary log panel that displays external links found in post descriptions.</li>
</ul>
<hr>
<h2>5. Specialized Downloaders & Batch Mode</h2>
<h3>Discord Features</h3>
<ul>
<li>When a Discord URL is entered, a <strong>Scope</strong> button appears:
<ul>
<li><strong>Scope: Files</strong>: Downloads all files from the channel/server.</li>
<li><strong>Scope: Messages</strong>: Saves the entire message history of the channel/server as a formatted PDF.</li>
</ul>
</li>
<li>A <strong>"Save as PDF"</strong> button also appears as a shortcut for the message-saving feature.</li>
</ul>
<h3>Batch Downloading (<code>nhentai</code> & <code>saint2.su</code>)</h3>
<p>This feature allows you to download hundreds of galleries or videos from a simple text file (see the sketch after this list).</p>
<ol>
<li>In the <code>appdata</code> folder, create <code>nhentai.txt</code> or <code>saint2.su.txt</code>.</li>
<li>Add one full URL per line to the corresponding file.</li>
<li>In the app's URL input, type either <code>nhentai.net</code> or <code>saint2.su</code> and click "Start Download".</li>
<li>The app will read the file and process every URL in the queue.</li>
</ol>
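<p>Conceptually, the batch queue is just a newline-separated URL list; a minimal sketch of reading it (the helper and path handling are illustrative, not the application's exact code):</p>
<pre><code>import os

def read_batch_file(appdata_dir, name="nhentai.txt"):
    """Return the list of URLs queued in a batch file, one per line (hypothetical helper)."""
    path = os.path.join(appdata_dir, name)
    if not os.path.isfile(path):
        return []
    with open(path, "r", encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

# Each returned URL is then processed like a single pasted URL.
</code></pre>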
<hr>
<h2>6. "Only Links" Mode: Extraction & Direct Download</h2>
<p>When you select the <strong>"Only Links"</strong> radio button, the application's behavior changes significantly.</p>
<ul>
<li><strong>Link Extraction</strong>: Instead of downloading files, the main log panel fills with all external links found (Mega, Google Drive, Dropbox, etc.).</li>
<li><strong>Export Links</strong>: An "Export Links" button appears, allowing you to save the full list of extracted URLs to a <code>.txt</code> file.</li>
<li><strong>Direct Cloud Download</strong>: A <strong>"Download"</strong> button appears next to the export button.
<ul>
<li>Clicking it opens a new dialog listing all supported cloud links (Mega, G-Drive, Dropbox).</li>
<li>You can select which files you want to download from this list.</li>
<li>The application then downloads the selected files directly from the cloud service to your chosen download location.</li>
</ul>
</li>
</ul>
<h2>Known Names Management (Bottom-Left)</h2>
<p>This powerful feature automates the creation of organized, named folders.</p>
<ul>
<li><strong>Known Shows/Characters List</strong>: Displays all the names and groups you've saved.</li>
<li><strong>Search...</strong>: Filters the list to quickly find a name.</li>
<li><strong>Open Known.txt</strong>: Opens the source file in a text editor for advanced manual editing.</li>
<li><strong>Add New Name</strong> (see the parsing sketch after this list):
<ul>
<li><strong>Single Name</strong>: Typing <code>Tifa Lockhart</code> and clicking <strong>➕ Add</strong> creates an entry that will match "Tifa Lockhart".</li>
<li><strong>Group</strong>: Typing <code>(Boa, Hancock, Snake Princess)~</code> and clicking <strong>➕ Add</strong> creates a single entry named "Boa Hancock Snake Princess". The application will then look for "Boa", "Hancock", OR "Snake Princess" in titles/filenames and save any matches into that combined folder.</li>
</ul>
</li>
<li><strong>⤵️ Add to Filter</strong>: Opens a dialog with your full Known Names list, allowing you to check multiple entries and add them all to the "Filter by Character(s)" field at once.</li>
<li><strong>🗑️ Delete Selected</strong>: Removes highlighted names from your list.</li>
</ul>
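<p>A rough sketch of how a grouped entry could be parsed into a folder name plus its aliases (a hypothetical helper, shown only to illustrate the syntax):</p>
<pre><code>import re

def parse_known_entry(entry):
    """'(Boa, Hancock, Snake Princess)~' -> ('Boa Hancock Snake Princess', [aliases])."""
    m = re.fullmatch(r"\((.+)\)~?", entry.strip())
    if m:
        aliases = [a.strip() for a in m.group(1).split(",") if a.strip()]
        return " ".join(aliases), aliases
    name = entry.strip()
    return name, [name]
</code></pre>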
<hr>
<h2>7. Session & Process Management</h2>
<h3>Main Action Buttons</h3>
<ul>
<li><strong>⬇️ Start Download / 🔗 Extract Links</strong>: The main action button. Its function is dynamic:
<ul>
<li><strong>Normal Mode</strong>: Starts the download based on the current settings.</li>
<li><strong>Update Mode</strong>: After selecting a creator profile, this button changes to <strong>🔄 Check for Updates</strong>.</li>
<li><strong>Update Confirmation</strong>: After new posts are found, it changes to <strong>⬇️ Start Download (X new)</strong>.</li>
<li><strong>Link Extraction Mode</strong>: The text changes to <strong>🔗 Extract Links</strong>.</li>
</ul>
</li>
<li><strong>⏸️ Pause / ▶️ Resume Download</strong>: Pauses the ongoing download, allowing you to safely change certain settings (like filters) on the fly. Click again to resume.</li>
<li><strong>❌ Cancel & Reset UI</strong>: Immediately stops all download activity and performs a soft reset of the UI, preserving your URL and Download Location inputs.</li>
</ul>
<h3>Restore Interrupted Download</h3>
<p>If the application is closed unexpectedly during a download, it will save its progress.</p>
<ul>
<li>On the next launch, the UI will be pre-filled with the settings from the interrupted session.</li>
<li>The <strong>Pause</strong> button will change to <strong>"🔄 Restore Download"</strong>. Clicking it will resume the download exactly where it left off, skipping already processed posts.</li>
<li>The <strong>Cancel</strong> button will change to <strong>"🗑️ Discard Session"</strong>, allowing you to clear the saved state and start fresh.</li>
</ul>
<h3>Update Checking</h3>
<p>By selecting a creator profile via the <strong>🎨 Creator Selection Popup</strong>, you can run an update check. The application compares the posts on the server with your download history for that creator and prompts you to download only the new content.</p>
<h3>Other UI Controls</h3>
<ul>
<li><strong>Error Button</strong>: Shows a live count of failed files (e.g., <strong>(5) Error</strong>). Clicking it opens a dialog listing all failed files, where you can select specific files to <strong>Retry</strong> or <strong>Export</strong> the list of failed URLs to a <code>.txt</code> file.</li>
<li><strong>🔄 Reset (Top-Right)</strong>: A hard reset that clears all logs and returns every UI element to its default state.</li>
<li><strong>📜 History Button</strong>: Shows a log of recently downloaded files and processed posts.</li>
<li><strong>⚙️ Settings Button</strong>: Opens the settings dialog where you can change the theme, language, and <strong>check for application updates</strong>.</li>
<li><strong>? (Help)</strong>: Opens a guide explaining the application's features.</li>
<li><strong>❤️ Support Button</strong>: Opens a dialog with links to the project's source code and developer support pages.</li>
</ul>
<h3>⭐ Favorite Mode (Details)</h3>
<p>Activating this mode transforms the UI for managing saved collections:</p>
<ul>
<li>The URL input is disabled.</li>
<li>The main action buttons are replaced with:
<ul>
<li><strong>🖼️ Favorite Artists</strong>: Opens a dialog to browse and queue downloads from your saved favorite creators.</li>
<li><strong>📄 Favorite Posts</strong>: Opens a dialog to browse and queue downloads for specific saved favorite posts.</li>
</ul>
</li>
<li><strong>Scope: [Location] Button</strong>: Toggles where the favorited content is saved:
<ul>
<li><strong>Selected Location</strong>: Saves all content directly into the main "Download Location".</li>
<li><strong>Artist Folders</strong>: Creates a subfolder for each artist inside the main "Download Location".</li>
</ul>
</li>
</ul>
<h3>📖 Manga/Comic Mode (Details)</h3>
<p>This mode is designed for sequential content and has several effects:</p>
<ul>
<li><strong>Reverses Download Order</strong>: It fetches and downloads posts from <strong>oldest to newest</strong>.</li>
<li><strong>Enables Special Naming</strong>: A <strong><code>Name: [Style]</code></strong> button appears, letting you choose how files are named to maintain their correct order (e.g., by Post Title, by Date, or simple sequential numbering like <code>001, 002, 003...</code>).</li>
<li><strong>Disables Multithreading (for certain styles)</strong>: To guarantee perfect sequential numbering, multithreading for posts is automatically disabled for certain naming styles.</li>
</ul>
<h3>Logging & Monitoring</h3>
<ul>
<li><strong>Progress Log</strong>: The main log provides real-time feedback on the download process, including status messages, file saves, skips, and errors.</li>
<li><strong>👁️ Log View Toggle</strong>: Switches the log view between the standard <strong>Progress Log</strong> and a <strong>Missed Character Log</strong>, which shows potential character names from posts that were skipped by your filters, helping you discover new names to add to your list.</li>
</ul>
</div>
readme.md (324 lines changed)

@@ -1,217 +1,151 @@
<h1 align="center">Kemono Downloader</h1>

<p>A powerful, feature-rich GUI application for downloading content from a wide array of sites, including <strong>Kemono</strong>, <strong>Coomer</strong>, <strong>Bunkr</strong>, <strong>Erome</strong>, <strong>Saint2.su</strong>, and <strong>nhentai</strong>.</p>
<p>Built with PyQt5, this tool is designed for users who want deep filtering capabilities, customizable folder structures, efficient downloads, and intelligent automation — all within a modern and user-friendly graphical interface.</p>

<div align="center">

<table>
<tr>
<td align="center">
<img src="Read/Read.png" alt="Default Mode" width="400"><br>
<strong>Default</strong>
</td>
<td align="center">
<img src="Read/Read1.png" alt="Favorite Mode" width="400"><br>
<strong>Favorite Mode</strong>
</td>
</tr>
<tr>
<td align="center">
<img src="Read/Read2.png" alt="Single Post" width="400"><br>
<strong>Single Post</strong>
</td>
<td align="center">
<img src="Read/Read3.png" alt="Manga/Comic Mode" width="400"><br>
<strong>Manga/Comic Mode</strong>
</td>
</tr>
</table>

<a href="features.md"><img src="https://img.shields.io/badge/📚%20Full%20Feature%20List-FFD700?style=for-the-badge&logoColor=black&color=FFD700" alt="Full Feature List"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/📝%20License-90EE90?style=for-the-badge&logoColor=black&color=90EE90" alt="License"></a>
</div>

---
<h2>Core Capabilities Overview</h2>
<h3>High-Performance & Resilient Downloading</h3>
<ul>
<li><strong>Multi-threading:</strong> Processes multiple posts simultaneously to greatly accelerate downloads from large creator profiles.</li>
<li><strong>Multi-part Downloading:</strong> Splits large files into chunks and downloads them in parallel to maximize speed (see the sketch after this list).</li>
<li><strong>Session Management:</strong> Supports pausing, resuming, and <strong>restoring downloads</strong> after crashes or interruptions, so you never lose your progress.</li>
</ul>
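<p>A minimal sketch of the multi-part idea using HTTP Range requests (illustrative only; the chunk count and function name are assumptions, not the app's actual downloader):</p>
<pre><code>import requests
from concurrent.futures import ThreadPoolExecutor

def download_multipart(url, out_path, parts=4):
    """Fetch a file in `parts` ranged chunks, then stitch them in order (hypothetical helper)."""
    # Requires a server that honors Range requests and reports Content-Length.
    size = int(requests.head(url, allow_redirects=True).headers["Content-Length"])
    bounds = [(i * size // parts, (i + 1) * size // parts - 1) for i in range(parts)]

    def fetch(rng):
        start, end = rng
        r = requests.get(url, headers={"Range": f"bytes={start}-{end}"}, timeout=60)
        r.raise_for_status()
        return r.content

    with ThreadPoolExecutor(max_workers=parts) as pool:
        chunks = list(pool.map(fetch, bounds))  # map() preserves chunk order
    with open(out_path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
</code></pre>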
<h3>Expanded Site Support</h3>
<ul>
<li><strong>Direct Downloading:</strong> Full support for Kemono, Coomer, Bunkr, Erome, Saint2.su, and nhentai.</li>
<li><strong>Batch Mode:</strong> Download hundreds of URLs at once from <code>nhentai.txt</code> or <code>saint2.su.txt</code> files.</li>
<li><strong>Discord Support:</strong> Download files or save entire channel histories as PDFs directly through the API.</li>
</ul>
<h3>Advanced Filtering & Content Control</h3>
<ul>
<li><strong>Content Type Filtering:</strong> Select whether to download all files or limit to images, videos, audio, or archives only.</li>
<li><strong>Keyword Skipping:</strong> Automatically skips posts or files containing certain keywords (e.g., "WIP", "sketch").</li>
<li><strong>Skip by Size:</strong> Avoid small files by setting a minimum size threshold in MB (e.g., <code>[200]</code>).</li>
<li><strong>Character Filtering:</strong> Restricts downloads to posts that match specific character or series names, with scope controls for title, filename, or comments.</li>
</ul>
<h3>Intelligent File Organization</h3>
<ul>
<li><strong>Automated Subfolders:</strong> Automatically organizes downloaded files into subdirectories based on character names or per post.</li>
<li><strong>Advanced File Renaming:</strong> Flexible renaming options, especially in Manga Mode, including by post title, date, sequential numbering, or post ID.</li>
<li><strong>Filename Cleaning:</strong> Automatically removes unwanted text from filenames.</li>
</ul>
<h3>Specialized Modes</h3>
<ul>
<li><strong>Renaming Mode:</strong> Sorts posts chronologically before downloading to ensure pages appear in the correct sequence.</li>
<li><strong>Favorite Mode:</strong> Connects to your account and downloads from your favorites list (artists or posts).</li>
<li><strong>Link Extraction Mode:</strong> Extracts external links (Mega, Google Drive) from posts for export or <strong>direct in-app downloading</strong>.</li>
<li><strong>Text Extraction Mode:</strong> Saves post descriptions or comment sections as <code>PDF</code>, <code>DOCX</code>, or <code>TXT</code> files.</li>
</ul>
<h3>Utility & Advanced Features</h3>
<ul>
<li><strong>In-App Updater:</strong> Check for new versions directly from the settings menu.</li>
<li><strong>Cookie Support:</strong> Enables access to subscriber-only content via browser session cookies.</li>
<li><strong>Duplicate Detection:</strong> Prevents saving duplicate files using content-based comparison, with configurable limits (see the sketch after this list).</li>
<li><strong>Image Compression:</strong> Automatically converts large images to <code>.webp</code> to reduce disk usage.</li>
<li><strong>Creator Management:</strong> Built-in creator browser and update checker for downloading only new posts from saved profiles.</li>
<li><strong>Error Handling:</strong> Tracks failed downloads and provides a retry dialog with options to export or redownload missing files.</li>
</ul>
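<p>Content-based duplicate detection is typically a file-hash comparison; a simplified sketch, where the hash choice and bookkeeping are assumptions for illustration:</p>
<pre><code>import hashlib

def file_digest(path, chunk_size=1024 * 1024):
    """SHA-256 of a file's contents, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

seen = {}  # digest -> number of copies kept so far
def should_keep(path, max_copies=1):
    """Keep a file only until its content has been seen max_copies times (hypothetical helper)."""
    digest = file_digest(path)
    seen[digest] = seen.get(digest, 0) + 1
    return seen[digest] <= max_copies
</code></pre>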
<section aria-labelledby="supported-sites">
<h2 id="supported-sites">Supported Sites</h2>
<h3>Main Platforms</h3>
<p>
The downloader is primarily built to archive content from the platforms below.
</p>
<ul>
<li>
<strong>Kemono & Coomer</strong> — Core supported sites; download posts and files from creators on services such as
<em>Patreon, Fanbox, OnlyFans, Fansly</em>, and similar platforms. Mirrors such as <a href="https://kemono.su">kemono.su</a>/kemono.party and <a href="https://coomer.party">coomer.party</a>/coomer.su are supported.
</li>
<li>
<strong>Discord</strong> — Two modes for a channel URL:
<ul>
<li>Download all files and attachments.</li>
<li>Save the entire message history as a formatted PDF.</li>
</ul>
</li>
</ul>
<h3>Specialized Site Support</h3>
<p>Paste a link from any of the following and the app will handle the download automatically:</p>
<details>
<summary>Supported specialized sites (click to expand)</summary>
<ul>
<li>AllPornComic</li>
<li>Bunkr</li>
<li>Erome</li>
<li>Fap-Nation</li>
<li>Hentai2Read</li>
<li>nhentai</li>
<li>Pixeldrain</li>
<li>Saint2</li>
<li>Toonily</li>
</ul>
</details>
<h3>Direct File Hosting</h3>
<p>
You may paste direct links from these file hosting services to download content without using
<code>"Only Links"</code> mode:
</p>
<ul>
<li>Dropbox</li>
<li>Gofile</li>
<li>Google Drive</li>
<li>Mega</li>
</ul>
</section>

---
<h2>💻 Installation</h2>
<h3>Requirements</h3>
<ul>
<li>Python 3.6 or higher</li>
<li>pip (Python package installer)</li>
</ul>
<h3>Install Dependencies</h3>
<p>Required:</p>
<pre><code>pip install PyQt5 requests packaging cloudscraper bs4 pycryptodome
</code></pre>
<p>Optional:</p>
<pre><code>pip install gdown pillow fpdf python-docx
</code></pre>
<h3>Running the Application</h3>
<p>Navigate to the application's directory in your terminal and run:</p>
<pre><code>python main.py
</code></pre>
<h3>Optional Setup</h3>
<ul>
<li>Place your <code>cookies.txt</code> in the root directory (if using cookies).</li>
<li>Prepare your <code>Known.txt</code> and <code>creators.json</code> in the same directory for advanced filtering and selection features.</li>
</ul>
<h2>Contribution</h2>
<p>Feel free to fork this repo and submit pull requests for bug fixes, new features, or UI improvements!</p>
<h2>License</h2>
<p>This project is under the MIT Licence.</p>

---
## Troubleshooting

### AttributeError: module 'asyncio' has no attribute 'coroutine'

If you encounter an error message similar to:
```
AttributeError: module 'asyncio' has no attribute 'coroutine'. Did you mean: 'coroutines'?
```
This usually means that a dependency, often `tenacity` (used by `mega.py`), is an older version that's incompatible with your Python version (typically Python 3.10+).

To fix this, activate your virtual environment and run the following commands to upgrade the libraries:

```bash
pip install --upgrade tenacity
pip install --upgrade mega.py
```

---
### Included Third-Party Tools

This project includes a pre-compiled binary of `yt-dlp` for handling certain video downloads. `yt-dlp` is in the public domain. For more information or to get the latest version, please visit the official [yt-dlp GitHub repository](https://github.com/yt-dlp/yt-dlp).

<h2>Star History</h2>
<table align="center" style="border-collapse: collapse; border: none; margin-left: auto; margin-right: auto;">
<tbody>
<tr>
<td align="center" valign="middle" style="padding: 10px; border: none;">
<a href="https://www.star-history.com/#Yuvi9587/Kemono-Downloader&Date">
<img src="https://api.star-history.com/svg?repos=Yuvi9587/Kemono-Downloader&type=Date" alt="Star History Chart" width="650">
</a>
</td>
</tr>
</tbody>
</table>

<p align="center">
<a href="https://buymeacoffee.com/yuvi9587">
<img src="https://img.shields.io/badge/🍺%20Buy%20Me%20a%20Coffee-FFCCCB?style=for-the-badge&logoColor=black&color=FFDD00" alt="Buy Me a Coffee">
</a>
</p>
@@ -6,9 +6,9 @@ We are committed to maintaining and improving the Kemono Downloader. For the bes

 | Version | Supported Status |
 | -------------- | ------------------------------------ |
-| >= 5.0.0 | :white_check_mark: Actively Supported |
-| 4.0.0 - 4.x.x | :warning: Supported (Limited Features) |
-| < 4.0.0 | :x: End of Life (EOL) |
+| >= 7.0.0 | :white_check_mark: Actively Supported |
+| 6.0.0 - 6.x.x | :warning: Supported (Limited Features) |
+| < 5.0.0 | :x: End of Life (EOL) |

 Users are encouraged to update to **v5.0.0 or newer** versions.
@@ -1,4 +1,3 @@
 # --- Application Metadata ---
 CONFIG_ORGANIZATION_NAME = "KemonoDownloader"
 CONFIG_APP_NAME_MAIN = "ApplicationSettings"
 CONFIG_APP_NAME_TOUR = "ApplicationTour"

@@ -9,7 +8,7 @@ STYLE_ORIGINAL_NAME = "original_name"
 STYLE_DATE_BASED = "date_based"
 STYLE_DATE_POST_TITLE = "date_post_title"
 STYLE_POST_TITLE_GLOBAL_NUMBERING = "post_title_global_numbering"
-STYLE_POST_ID = "post_id" # Add this line
+STYLE_POST_ID = "post_id"
 MANGA_DATE_PREFIX_DEFAULT = ""

 # --- Download Scopes ---

@@ -48,6 +47,8 @@ MAX_PARTS_FOR_MULTIPART_DOWNLOAD = 15
 # --- UI and Settings Keys (for QSettings) ---
 TOUR_SHOWN_KEY = "neverShowTourAgainV19"
 MANGA_FILENAME_STYLE_KEY = "mangaFilenameStyleV1"
+MANGA_CUSTOM_FORMAT_KEY = "mangaCustomFormatV1"
+MANGA_CUSTOM_DATE_FORMAT_KEY = "mangaCustomDateFormatV1"
 SKIP_WORDS_SCOPE_KEY = "skipWordsScopeV1"
 ALLOW_MULTIPART_DOWNLOAD_KEY = "allowMultipartDownloadV1"
 USE_COOKIE_KEY = "useCookieV1"

@@ -60,6 +61,12 @@ DOWNLOAD_LOCATION_KEY = "downloadLocationV1"
 RESOLUTION_KEY = "window_resolution"
 UI_SCALE_KEY = "ui_scale_factor"
 SAVE_CREATOR_JSON_KEY = "saveCreatorJsonProfile"
+DATE_PREFIX_FORMAT_KEY = "datePrefixFormatV1"
+AUTO_RETRY_ON_FINISH_KEY = "auto_retry_on_finish"
+FETCH_FIRST_KEY = "fetchAllPostsFirst"
+DISCORD_TOKEN_KEY = "discord/token"
+
+POST_DOWNLOAD_ACTION_KEY = "postDownloadAction"

 # --- UI Constants and Identifiers ---
 HTML_PREFIX = "<!HTML!>"

@@ -81,7 +88,7 @@ VIDEO_EXTENSIONS = {
     '.mpg', '.m4v', '.3gp', '.ogv', '.ts', '.vob'
 }
 ARCHIVE_EXTENSIONS = {
-    '.zip', '.rar', '.7z', '.tar', '.gz', '.bz2'
+    '.zip', '.rar', '.7z', '.tar', '.gz', '.bz2', '.bin'
 }
 AUDIO_EXTENSIONS = {
     '.mp3', '.wav', '.aac', '.flac', '.ogg', '.wma', '.m4a', '.opus',

@@ -97,7 +104,7 @@ FOLDER_NAME_STOP_WORDS = {
     "for", "he", "her", "his", "i", "im", "in", "is", "it", "its",
     "me", "my", "net", "not", "of", "on", "or", "org", "our",
     "s", "she", "so", "the", "their", "they", "this",
-    "to", "ve", "was", "we", "were", "with", "www", "you", "your",
+    "to", "ve", "was", "we", "were", "with", "www", "you", "your", "nsfw", "sfw",
     # add more according to need
 }

@@ -111,10 +118,13 @@ CREATOR_DOWNLOAD_DEFAULT_FOLDER_IGNORE_WORDS = {
     "may", "jun", "june", "jul", "july", "aug", "august", "sep", "september",
     "oct", "october", "nov", "november", "dec", "december",
     "mon", "monday", "tue", "tuesday", "wed", "wednesday", "thu", "thursday",
-    "fri", "friday", "sat", "saturday", "sun", "sunday"
+    "fri", "friday", "sat", "saturday", "sun", "sunday", "Pack", "tier", "spoiler",
+
+
     # add more according to need
 }

 # --- Duplicate Handling Modes ---
 DUPLICATE_HANDLING_HASH = "hash"
 DUPLICATE_HANDLING_KEEP_ALL = "keep_all"
+STYLE_CUSTOM = "custom"
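For context, extension sets like these typically drive the file-type radio filters; a toy illustration of classifying a filename against the updated archive set (the helper is hypothetical):

```python
import os

ARCHIVE_EXTENSIONS = {'.zip', '.rar', '.7z', '.tar', '.gz', '.bz2', '.bin'}

def is_archive(filename):
    """True if the filename's extension is in the archive set."""
    return os.path.splitext(filename)[1].lower() in ARCHIVE_EXTENSIONS

# is_archive("pack.BIN") -> True with the newly added '.bin' entry
```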
src/core/Hentai2read_client.py (new file, 292 lines)

@@ -0,0 +1,292 @@
|
||||
import re
|
||||
import os
|
||||
import time
|
||||
import cloudscraper
|
||||
from bs4 import BeautifulSoup
|
||||
from urllib.parse import urljoin
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
import queue
|
||||
|
||||
def run_hentai2read_download(start_url, output_dir, progress_callback, overall_progress_callback, check_pause_func):
|
||||
"""
|
||||
Orchestrates the download process using a producer-consumer model.
|
||||
"""
|
||||
scraper = cloudscraper.create_scraper()
|
||||
all_failed_files = [] # Track all failures across chapters
|
||||
|
||||
try:
|
||||
progress_callback(" [Hentai2Read] Scraping series page for all metadata...")
|
||||
top_level_folder_name, chapters_to_process = _get_series_metadata(start_url, progress_callback, scraper)
|
||||
|
||||
if not chapters_to_process:
|
||||
progress_callback("❌ No chapters found to download. Aborting.")
|
||||
return 0, 0
|
||||
|
||||
total_chapters = len(chapters_to_process)
|
||||
overall_progress_callback(total_chapters, 0)
|
||||
|
||||
total_downloaded_count = 0
|
||||
total_skipped_count = 0
|
||||
|
||||
for idx, chapter in enumerate(chapters_to_process):
|
||||
if check_pause_func(): break
|
||||
|
||||
progress_callback(f"\n-- Processing and Downloading Chapter {idx + 1}/{total_chapters}: '{chapter['title']}' --")
|
||||
|
||||
series_folder = re.sub(r'[\\/*?:"<>|]', "", top_level_folder_name).strip()
|
||||
chapter_folder = re.sub(r'[\\/*?:"<>|]', "", chapter['title']).strip()
|
||||
final_save_path = os.path.join(output_dir, series_folder, chapter_folder)
|
||||
os.makedirs(final_save_path, exist_ok=True)
|
||||
|
||||
dl_count, skip_count, chapter_failures = _process_and_download_chapter(
|
||||
chapter_url=chapter['url'],
|
||||
save_path=final_save_path,
|
||||
scraper=scraper,
|
||||
progress_callback=progress_callback,
|
||||
check_pause_func=check_pause_func
|
||||
)
|
||||
|
||||
total_downloaded_count += dl_count
|
||||
total_skipped_count += skip_count
|
||||
|
||||
if chapter_failures:
|
||||
all_failed_files.extend(chapter_failures)
|
||||
|
||||
overall_progress_callback(total_chapters, idx + 1)
|
||||
if check_pause_func(): break
|
||||
|
||||
# --- FINAL SUMMARY OF FAILURES ---
|
||||
if all_failed_files:
|
||||
progress_callback("\n" + "="*40)
|
||||
progress_callback(f"❌ SUMMARY: {len(all_failed_files)} files failed permanently after 10 retries:")
|
||||
for fail_msg in all_failed_files:
|
||||
progress_callback(f" • {fail_msg}")
|
||||
progress_callback("="*40 + "\n")
|
||||
else:
|
||||
progress_callback("\n✅ All chapters processed successfully with no permanent failures.")
|
||||
|
||||
return total_downloaded_count, total_skipped_count
|
||||
|
||||
except Exception as e:
|
||||
progress_callback(f"❌ A critical error occurred in the Hentai2Read client: {e}")
|
||||
return 0, 0
|
||||
|
||||
def _get_series_metadata(start_url, progress_callback, scraper):
|
||||
"""
|
||||
Scrapes the main series page to get the Artist Name, Series Title, and chapter list.
|
||||
"""
|
||||
max_retries = 4
|
||||
last_exception = None
|
||||
soup = None
|
||||
|
||||
for attempt in range(max_retries):
|
||||
try:
|
||||
if attempt > 0:
|
||||
progress_callback(f" [Hentai2Read] ⚠️ Retrying connection (Attempt {attempt + 1}/{max_retries})...")
|
||||
|
||||
response = scraper.get(start_url, timeout=30)
|
||||
response.raise_for_status()
|
||||
soup = BeautifulSoup(response.text, 'html.parser')
|
||||
last_exception = None
|
||||
break
|
||||
|
||||
except Exception as e:
|
||||
last_exception = e
|
||||
progress_callback(f" [Hentai2Read] ⚠️ Connection attempt {attempt + 1} failed: {e}")
|
||||
if attempt < max_retries - 1:
|
||||
time.sleep(2 * (attempt + 1))
|
||||
continue
|
||||
|
||||
if last_exception:
|
||||
progress_callback(f" [Hentai2Read] ❌ Error getting series metadata after {max_retries} attempts: {last_exception}")
|
||||
return "Unknown Series", []
|
||||
|
||||
try:
|
||||
series_title = "Unknown Series"
|
||||
artist_name = None
|
||||
|
||||
# 1. Try fetching Title
|
||||
title_tag = soup.select_one("h3.block-title a")
|
||||
if title_tag:
|
||||
series_title = title_tag.get_text(strip=True)
|
||||
else:
|
||||
meta_title = soup.select_one("meta[property='og:title']")
|
||||
if meta_title:
|
||||
series_title = meta_title.get("content", "Unknown Series").replace(" - Hentai2Read", "")
|
||||
|
||||
# 2. Try fetching Artist
|
||||
metadata_list = soup.select_one("ul.list.list-simple-mini")
|
||||
if metadata_list:
|
||||
for b_tag in metadata_list.find_all('b'):
|
||||
label = b_tag.get_text(strip=True)
|
||||
if "Artist" in label or "Author" in label:
|
||||
a_tag = b_tag.find_next_sibling('a')
|
||||
if a_tag:
|
||||
artist_name = a_tag.get_text(strip=True)
|
||||
break
|
||||
|
||||
if not artist_name:
|
||||
artist_link = soup.find('a', href=re.compile(r'/hentai-list/artist/'))
|
||||
if artist_link:
|
||||
artist_name = artist_link.get_text(strip=True)
|
||||
|
||||
if artist_name:
|
||||
top_level_folder_name = f"{artist_name} - {series_title}"
|
||||
else:
|
||||
top_level_folder_name = series_title
|
||||
|
||||
chapter_links = soup.select("div.media a.pull-left.font-w600")
|
||||
if not chapter_links:
|
||||
chapters_to_process = [{'url': start_url, 'title': series_title}]
|
||||
else:
|
||||
chapters_to_process = [
|
||||
{'url': urljoin(start_url, link['href']), 'title': " ".join(link.stripped_strings)}
|
||||
for link in chapter_links
|
||||
]
|
||||
chapters_to_process.reverse()
|
||||
|
||||
progress_callback(f" [Hentai2Read] ✅ Found Metadata: '{top_level_folder_name}'")
|
||||
progress_callback(f" [Hentai2Read] ✅ Found {len(chapters_to_process)} chapters to process.")
|
||||
|
||||
return top_level_folder_name, chapters_to_process
|
||||
|
||||
except Exception as e:
|
||||
progress_callback(f" [Hentai2Read] ❌ Error parsing metadata after successful connection: {e}")
|
||||
return "Unknown Series", []
|
||||
|
||||
def _process_and_download_chapter(chapter_url, save_path, scraper, progress_callback, check_pause_func):
|
||||
"""
|
||||
Uses a producer-consumer pattern to download a chapter.
|
||||
Includes RETRY LOGIC and ACTIVE LOGGING.
|
||||
"""
|
||||
task_queue = queue.Queue()
|
||||
num_download_threads = 8
|
||||
|
||||
download_stats = {'downloaded': 0, 'skipped': 0}
|
||||
failed_files_list = []
|
||||
|
||||
def downloader_worker():
|
||||
worker_scraper = cloudscraper.create_scraper()
|
||||
while True:
|
||||
task = task_queue.get()
|
||||
if task is None:
|
||||
task_queue.task_done()
|
||||
break
|
||||
|
||||
filepath, img_url = task
|
||||
filename = os.path.basename(filepath)
|
||||
|
||||
if os.path.exists(filepath):
|
||||
# We log skips to show it's checking files
|
||||
progress_callback(f" -> Skip (Exists): '{filename}'")
|
||||
download_stats['skipped'] += 1
|
||||
task_queue.task_done()
|
||||
continue
|
||||
|
||||
# --- RETRY LOGIC START ---
|
||||
success = False
|
||||
# UNCOMMENTED: Log the start of download so you see activity
|
||||
progress_callback(f" Downloading: '{filename}'...")
|
||||
|
||||
for attempt in range(10): # Try 10 times
|
||||
try:
|
||||
if attempt > 0:
|
||||
progress_callback(f" ⚠️ Retrying '{filename}' (Attempt {attempt+1}/10)...")
|
||||
time.sleep(2)
|
||||
|
||||
response = worker_scraper.get(img_url, stream=True, timeout=60, headers={'Referer': chapter_url})
|
||||
response.raise_for_status()
|
||||
|
||||
with open(filepath, 'wb') as f:
|
||||
for chunk in response.iter_content(chunk_size=8192):
|
||||
f.write(chunk)
|
||||
|
||||
download_stats['downloaded'] += 1
|
||||
success = True
|
||||
# UNCOMMENTED: Log success
|
||||
progress_callback(f" ✅ Downloaded: '{filename}'")
|
||||
break
|
||||
|
||||
except Exception as e:
|
||||
if attempt == 9:
|
||||
progress_callback(f" ❌ Failed '{filename}' after 10 attempts: {e}")
|
||||
|
||||
if not success:
|
||||
failed_files_list.append(f"{filename} (Chapter: {os.path.basename(save_path)})")
|
||||
# Clean up empty file if failed
|
||||
if os.path.exists(filepath):
|
||||
try:
|
||||
os.remove(filepath)
|
||||
except OSError: pass
|
||||
|
||||
task_queue.task_done()
|
||||
|
||||
executor = ThreadPoolExecutor(max_workers=num_download_threads, thread_name_prefix='H2R_Downloader')
|
||||
for _ in range(num_download_threads):
|
||||
executor.submit(downloader_worker)
|
||||
|
||||
page_number = 1
|
||||
progress_callback(" [Hentai2Read] Scanning pages...") # Initial log
|
||||
|
||||
while True:
|
||||
if check_pause_func(): break
|
||||
if page_number > 300:
|
||||
progress_callback(" [Hentai2Read] ⚠️ Safety break: Reached 300 pages.")
|
||||
break
|
||||
|
||||
# Log occasionally to show scanning is alive
|
||||
if page_number % 10 == 0:
|
||||
progress_callback(f" [Hentai2Read] Scanned {page_number} pages so far...")
|
||||
|
||||
page_url_to_check = f"{chapter_url}{page_number}/"
|
||||
try:
|
||||
page_response = None
|
||||
page_last_exception = None
|
||||
for page_attempt in range(3):
|
||||
try:
|
||||
page_response = scraper.get(page_url_to_check, timeout=30)
|
||||
page_last_exception = None
|
||||
break
|
||||
except Exception as e:
|
||||
page_last_exception = e
|
||||
time.sleep(1)
|
||||
|
||||
if page_last_exception:
|
||||
raise page_last_exception
|
||||
|
||||
if page_response.history or page_response.status_code != 200:
|
||||
progress_callback(f" [Hentai2Read] End of chapter detected on page {page_number}.")
|
||||
break
|
||||
|
||||
soup = BeautifulSoup(page_response.text, 'html.parser')
|
||||
img_tag = soup.select_one("img#arf-reader")
|
||||
img_src = img_tag.get("src") if img_tag else None
|
||||
|
||||
if not img_tag or img_src == "https://static.hentai.direct/hentai":
|
||||
progress_callback(f" [Hentai2Read] End of chapter detected (Last page reached at {page_number}).")
|
||||
break
|
||||
|
||||
normalized_img_src = urljoin(page_response.url, img_src)
|
||||
ext = os.path.splitext(normalized_img_src.split('/')[-1])[-1] or ".jpg"
|
||||
filename = f"{page_number:03d}{ext}"
|
||||
filepath = os.path.join(save_path, filename)
|
||||
|
||||
task_queue.put((filepath, normalized_img_src))
|
||||
|
||||
page_number += 1
|
||||
time.sleep(0.1)
|
||||
except Exception as e:
|
||||
progress_callback(f" [Hentai2Read] ❌ Error while scraping page {page_number}: {e}")
|
||||
break
|
||||
|
||||
# Signal workers to exit
|
||||
for _ in range(num_download_threads):
|
||||
task_queue.put(None)
|
||||
|
||||
# Wait for all tasks to complete
|
||||
task_queue.join()
|
||||
executor.shutdown(wait=True)
|
||||
|
||||
progress_callback(f" Chapter complete. Processed {page_number - 1} images.")
|
||||
|
||||
return download_stats['downloaded'], download_stats['skipped'], failed_files_list
|
||||
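The function above is a textbook producer-consumer pipeline: the page-scanning loop produces `(filepath, img_url)` tasks, eight worker threads drain the queue, and one `None` sentinel per worker shuts the pool down once `task_queue.join()` confirms every `put()` was matched by `task_done()`. A minimal standalone sketch of the same shutdown protocol, with a hypothetical `do_download` stub in place of the real scraper call:

```python
import queue
from concurrent.futures import ThreadPoolExecutor

NUM_WORKERS = 8  # mirrors num_download_threads above

def do_download(task):
    # Hypothetical stand-in for the scraper call in downloader_worker.
    print(f"downloading {task}")

def worker(task_queue):
    while True:
        task = task_queue.get()
        if task is None:              # sentinel: exactly one per worker
            task_queue.task_done()
            break
        try:
            do_download(task)
        finally:
            task_queue.task_done()    # must run on every path, or join() hangs

task_queue = queue.Queue()
executor = ThreadPoolExecutor(max_workers=NUM_WORKERS)
for _ in range(NUM_WORKERS):
    executor.submit(worker, task_queue)

for task in ["001.jpg", "002.jpg", "003.jpg"]:  # producer side
    task_queue.put(task)

for _ in range(NUM_WORKERS):                    # one sentinel per worker
    task_queue.put(None)
task_queue.join()                               # blocks until every put() is task_done()
executor.shutdown(wait=True)
```

Note that the real worker above calls `task_done()` on all three exit paths (sentinel, skip, download); missing any one of them would deadlock `task_queue.join()`.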
src/core/allcomic_client.py (new file, 121 lines)
@@ -0,0 +1,121 @@
import requests
import re
from bs4 import BeautifulSoup
import time
import random
from urllib.parse import urlparse


def get_chapter_list(scraper, series_url, logger_func):
    """
    Checks if a URL is a series page and returns a list of all chapter URLs if it is.
    Relies on a passed-in scraper session for connection.
    """
    logger_func(f"   [AllComic] Checking for chapter list at: {series_url}")

    headers = {'Referer': 'https://allporncomic.com/'}
    response = None
    max_retries = 8

    for attempt in range(max_retries):
        try:
            response = scraper.get(series_url, headers=headers, timeout=30)
            response.raise_for_status()
            logger_func(f"   [AllComic] Successfully connected to series page on attempt {attempt + 1}.")
            break
        except requests.RequestException as e:
            logger_func(f"   [AllComic] ⚠️ Series page check attempt {attempt + 1}/{max_retries} failed: {e}")
            if attempt < max_retries - 1:
                wait_time = (2 ** attempt) + random.uniform(0, 2)
                logger_func(f"      Retrying in {wait_time:.1f} seconds...")
                time.sleep(wait_time)
            else:
                logger_func(f"   [AllComic] ❌ All attempts to check series page failed.")
                return []

    if not response:
        return []

    try:
        soup = BeautifulSoup(response.text, 'html.parser')
        chapter_links = soup.select('li.wp-manga-chapter a')

        if not chapter_links:
            logger_func("   [AllComic] ℹ️ No chapter list found. Assuming this is a single chapter page.")
            return []

        chapter_urls = [link['href'] for link in chapter_links]
        chapter_urls.reverse()

        logger_func(f"   [AllComic] ✅ Found {len(chapter_urls)} chapters.")
        return chapter_urls

    except Exception as e:
        logger_func(f"   [AllComic] ❌ Error parsing chapters after successful connection: {e}")
        return []


def fetch_chapter_data(scraper, chapter_url, logger_func):
    """
    Fetches the comic title, chapter title, and image URLs for a single chapter page.
    Relies on a passed-in scraper session for connection.
    """
    logger_func(f"   [AllComic] Fetching page: {chapter_url}")

    headers = {'Referer': 'https://allporncomic.com/'}

    response = None
    max_retries = 8
    for attempt in range(max_retries):
        try:
            response = scraper.get(chapter_url, headers=headers, timeout=30)
            response.raise_for_status()
            break
        except requests.RequestException as e:
            logger_func(f"   [AllComic] ⚠️ Chapter page connection attempt {attempt + 1}/{max_retries} failed: {e}")
            if attempt < max_retries - 1:
                wait_time = (2 ** attempt) + random.uniform(0, 2)
                logger_func(f"      Retrying in {wait_time:.1f} seconds...")
                time.sleep(wait_time)
            else:
                logger_func(f"   [AllComic] ❌ All connection attempts failed for chapter: {chapter_url}")
                return None, None, None

    if not response:
        return None, None, None

    try:
        soup = BeautifulSoup(response.text, 'html.parser')

        comic_title = "Unknown Comic"
        title_element = soup.find('h1', class_='post-title')
        if title_element:
            comic_title = title_element.text.strip()
        else:
            try:
                path_parts = urlparse(chapter_url).path.strip('/').split('/')
                if len(path_parts) >= 3 and path_parts[-3] == 'porncomic':
                    comic_slug = path_parts[-2]
                    comic_title = comic_slug.replace('-', ' ').title()
            except Exception:
                pass

        chapter_slug = chapter_url.strip('/').split('/')[-1]
        chapter_title = chapter_slug.replace('-', ' ').title()

        reading_container = soup.find('div', class_='reading-content')
        list_of_image_urls = []
        if reading_container:
            image_elements = reading_container.find_all('img', class_='wp-manga-chapter-img')
            for img in image_elements:
                img_url = (img.get('data-src') or img.get('src', '')).strip()
                if img_url:
                    list_of_image_urls.append(img_url)

        if not list_of_image_urls:
            logger_func(f"   [AllComic] ❌ Could not find any images on the page.")
            return None, None, None

        return comic_title, chapter_title, list_of_image_urls

    except Exception as e:
        logger_func(f"   [AllComic] ❌ An unexpected error occurred while parsing the page: {e}")
        return None, None, None
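Both functions retry with exponential backoff plus jitter, sleeping `(2 ** attempt) + random.uniform(0, 2)` seconds between attempts. A small sketch of the schedule this produces over the eight attempts used above:

```python
import random

def backoff_delays(max_retries=8):
    """Yield the waits used above: 2**attempt seconds plus 0-2s of jitter."""
    for attempt in range(max_retries - 1):  # no sleep after the final attempt
        yield (2 ** attempt) + random.uniform(0, 2)

# Worst case the loop sleeps roughly 1+2+4+8+16+32+64 = 127s (plus jitter)
# before giving up, so a flaky series page can stall a chapter for ~2 minutes.
print([round(d, 1) for d in backoff_delays()])
```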
@@ -1,8 +1,9 @@
 import time
 import traceback
 from urllib.parse import urlparse
-import json # Ensure json is imported
+import json
 import requests
+import cloudscraper
 from ..utils.network_utils import extract_post_info, prepare_cookies_for_request
 from ..config.constants import (
     STYLE_DATE_POST_TITLE
@@ -12,7 +13,6 @@ from ..config.constants import (
 def fetch_posts_paginated(api_url_base, headers, offset, logger, cancellation_event=None, pause_event=None, cookies_dict=None):
     """
     Fetches a single page of posts from the API with robust retry logic.
-    NEW: Requests only essential fields to keep the response size small and reliable.
     """
     if cancellation_event and cancellation_event.is_set():
         raise RuntimeError("Fetch operation cancelled by user.")
@@ -23,7 +23,7 @@
                 raise RuntimeError("Fetch operation cancelled by user while paused.")
             time.sleep(0.5)
         logger("   Post fetching resumed.")
-    fields_to_request = "id,user,service,title,shared_file,added,published,edited,file,attachments,tags"
+    fields_to_request = "id,user,service,title,shared_file,added,published,edited,file,attachments,tags,content"
     paginated_url = f'{api_url_base}?o={offset}&fields={fields_to_request}'

     max_retries = 3
@@ -33,17 +33,31 @@
         if cancellation_event and cancellation_event.is_set():
             raise RuntimeError("Fetch operation cancelled by user during retry loop.")

-        log_message = f"   Fetching post list: {api_url_base}?o={offset} (Page approx. {offset // 50 + 1})"
+        log_message = f"   Fetching post list: {api_url_base} (Page approx. {offset // 50 + 1})"
         if attempt > 0:
             log_message += f" (Attempt {attempt + 1}/{max_retries})"
         logger(log_message)

         try:
-            response = requests.get(paginated_url, headers=headers, timeout=(15, 60), cookies=cookies_dict)
-            response.raise_for_status()
-            return response.json()
+            with requests.get(paginated_url, headers=headers, timeout=(15, 60), cookies=cookies_dict) as response:
+                response.raise_for_status()
+                response.encoding = 'utf-8'
+                return response.json()

         except requests.exceptions.RequestException as e:
+            # Handle 403 error on the FIRST page as a rate limit/block
+            if e.response is not None and e.response.status_code == 403 and offset == 0:
+                logger("   ❌ Access Denied (403 Forbidden) on the first page.")
+                logger("   This is likely a rate limit or a Cloudflare block.")
+                logger("   💡 SOLUTION: Wait a while, use a VPN, or provide a valid session cookie.")
+                return []  # Stop the process gracefully
+
+            # Handle 400 error as the end of pages
+            if e.response is not None and e.response.status_code == 400:
+                logger(f"   ✅ Reached end of posts (API returned 400 Bad Request for offset {offset}).")
+                return []
+
+            # Handle all other network errors with a retry
             logger(f"   ⚠️ Retryable network error on page fetch (Attempt {attempt + 1}): {e}")
             if attempt < max_retries - 1:
                 delay = retry_delay * (2 ** attempt)
@@ -65,29 +79,36 @@

     raise RuntimeError(f"Failed to fetch page {paginated_url} after all attempts.")


 def fetch_single_post_data(api_domain, service, user_id, post_id, headers, logger, cookies_dict=None):
     """
-    --- NEW FUNCTION ---
-    Fetches the full data, including the 'content' field, for a single post.
+    --- MODIFIED FUNCTION ---
+    Fetches the full data, including the 'content' field, for a single post using cloudscraper.
     """
     post_api_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}/post/{post_id}"
     logger(f"   Fetching full content for post ID {post_id}...")

+    # FIX: Ensure scraper session is closed after use
+    scraper = None
     try:
-        with requests.get(post_api_url, headers=headers, timeout=(15, 300), cookies=cookies_dict, stream=True) as response:
-            response.raise_for_status()
-            response_body = b""
-            for chunk in response.iter_content(chunk_size=8192):
-                response_body += chunk
-
-            full_post_data = json.loads(response_body)
-            if isinstance(full_post_data, list) and full_post_data:
-                return full_post_data[0]
-            return full_post_data
-
+        scraper = cloudscraper.create_scraper()
+        response = scraper.get(post_api_url, headers=headers, timeout=(15, 300), cookies=cookies_dict)
+        response.raise_for_status()
+
+        full_post_data = response.json()
+
+        if isinstance(full_post_data, list) and full_post_data:
+            return full_post_data[0]
+        if isinstance(full_post_data, dict) and 'post' in full_post_data:
+            return full_post_data['post']
+        return full_post_data
+
     except Exception as e:
         logger(f"   ❌ Failed to fetch full content for post {post_id}: {e}")
         return None
+    finally:
+        # CRITICAL FIX: Close the scraper session to free file descriptors and memory
+        if scraper:
+            scraper.close()


 def fetch_post_comments(api_domain, service, user_id, post_id, headers, logger, cancellation_event=None, pause_event=None, cookies_dict=None):
@@ -99,9 +120,11 @@
     logger(f"   Fetching comments: {comments_api_url}")

     try:
-        response = requests.get(comments_api_url, headers=headers, timeout=(10, 30), cookies=cookies_dict)
-        response.raise_for_status()
-        return response.json()
+        # FIX: Use context manager
+        with requests.get(comments_api_url, headers=headers, timeout=(10, 30), cookies=cookies_dict) as response:
+            response.raise_for_status()
+            response.encoding = 'utf-8'
+            return response.json()
     except requests.exceptions.RequestException as e:
         raise RuntimeError(f"Error fetching comments for post {post_id}: {e}")
     except ValueError as e:
@@ -120,12 +143,18 @@ def download_from_api(
     selected_cookie_file=None,
     app_base_dir=None,
     manga_filename_style_for_sort_check=None,
-    processed_post_ids=None
-):
+    processed_post_ids=None,
+    fetch_all_first=False
+):
+    parsed_input_url_for_domain = urlparse(api_url_input)
+    api_domain = parsed_input_url_for_domain.netloc

     headers = {
-        'User-Agent': 'Mozilla/5.0',
-        'Accept': 'application/json'
+        'User-Agent': 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
+        'Referer': f'https://{api_domain}/',
+        'Accept': 'text/css'
     }

     if processed_post_ids is None:
         processed_post_ids = set()
     else:
@@ -136,16 +165,10 @@
     if cancellation_event and cancellation_event.is_set():
         logger("   Download_from_api cancelled at start.")
         return

-    parsed_input_url_for_domain = urlparse(api_url_input)
-    api_domain = parsed_input_url_for_domain.netloc
-
-    # --- START: MODIFIED LOGIC ---
-    # This list is updated to include the new .cr and .st mirrors for validation.
     if not any(d in api_domain.lower() for d in ['kemono.su', 'kemono.party', 'kemono.cr', 'coomer.su', 'coomer.party', 'coomer.st']):
         logger(f"⚠️ Unrecognized domain '{api_domain}' from input URL. Defaulting to kemono.su for API calls.")
         api_domain = "kemono.su"
-    # --- END: MODIFIED LOGIC ---

     cookies_for_api = None
     if use_cookie and app_base_dir:
@@ -157,9 +180,12 @@
         direct_post_api_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}/post/{target_post_id}"
         logger(f"   Attempting direct fetch for target post: {direct_post_api_url}")
         try:
-            direct_response = requests.get(direct_post_api_url, headers=headers, timeout=(10, 30), cookies=cookies_for_api)
-            direct_response.raise_for_status()
-            direct_post_data = direct_response.json()
+            # FIX: Use context manager
+            with requests.get(direct_post_api_url, headers=headers, timeout=(10, 30), cookies=cookies_for_api) as direct_response:
+                direct_response.raise_for_status()
+                direct_response.encoding = 'utf-8'
+                direct_post_data = direct_response.json()

             if isinstance(direct_post_data, list) and direct_post_data:
                 direct_post_data = direct_post_data[0]
             if isinstance(direct_post_data, dict) and 'post' in direct_post_data and isinstance(direct_post_data['post'], dict):
@@ -183,7 +209,8 @@
         logger("⚠️ Page range (start/end page) is ignored when a specific post URL is provided (searching all pages for the post).")

     is_manga_mode_fetch_all_and_sort_oldest_first = manga_mode and (manga_filename_style_for_sort_check != STYLE_DATE_POST_TITLE) and not target_post_id
-    api_base_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}"
+    should_fetch_all = fetch_all_first or is_manga_mode_fetch_all_and_sort_oldest_first
+    api_base_url = f"https://{api_domain}/api/v1/{service}/user/{user_id}/posts"
     page_size = 50
     if is_manga_mode_fetch_all_and_sort_oldest_first:
         logger(f"   Manga Mode (Style: {manga_filename_style_for_sort_check if manga_filename_style_for_sort_check else 'Default'} - Oldest First Sort Active): Fetching all posts to sort by date...")
@@ -226,7 +253,7 @@
                 break
             all_posts_for_manga_mode.extend(posts_batch_manga)

-            logger(f"MANGA_FETCH_PROGRESS:{len(all_posts_for_manga_mode)}:{current_page_num_manga}")
+            logger(f"RENAMING_MODE_FETCH_PROGRESS:{len(all_posts_for_manga_mode)}:{current_page_num_manga}")

             current_offset_manga += page_size
             time.sleep(0.6)
@@ -244,7 +271,7 @@
         if cancellation_event and cancellation_event.is_set(): return

         if all_posts_for_manga_mode:
-            logger(f"MANGA_FETCH_COMPLETE:{len(all_posts_for_manga_mode)}")
+            logger(f"RENAMING_MODE_FETCH_COMPLETE:{len(all_posts_for_manga_mode)}")

         if all_posts_for_manga_mode:
             if processed_post_ids:
@@ -291,6 +318,7 @@
         current_offset = (start_page - 1) * page_size
         current_page_num = start_page
         logger(f"   Starting from page {current_page_num} (calculated offset {current_offset}).")
+
     while True:
         if pause_event and pause_event.is_set():
             logger("   Post fetching loop paused...")
@@ -300,18 +328,22 @@
                 break
             time.sleep(0.5)
             if not (cancellation_event and cancellation_event.is_set()): logger("   Post fetching loop resumed.")

         if cancellation_event and cancellation_event.is_set():
             logger("   Post fetching loop cancelled.")
             break

         if target_post_id and processed_target_post_flag:
             break

+        if not target_post_id and end_page and current_page_num > end_page:
+            logger(f"✅ Reached specified end page ({end_page}) for creator feed. Stopping.")
+            break
+
         try:
-            posts_batch = fetch_posts_paginated(api_base_url, headers, current_offset, logger, cancellation_event, pause_event, cookies_dict=cookies_for_api)
-            if not isinstance(posts_batch, list):
-                logger(f"❌ API Error: Expected list of posts, got {type(posts_batch)} at page {current_page_num} (offset {current_offset}).")
+            raw_posts_batch = fetch_posts_paginated(api_base_url, headers, current_offset, logger, cancellation_event, pause_event, cookies_dict=cookies_for_api)
+            if not isinstance(raw_posts_batch, list):
+                logger(f"❌ API Error: Expected list of posts, got {type(raw_posts_batch)} at page {current_page_num} (offset {current_offset}).")
                 break
         except RuntimeError as e:
             if "cancelled by user" in str(e).lower():
@@ -323,14 +355,8 @@
             logger(f"❌ Unexpected error fetching page {current_page_num} (offset {current_offset}): {e}")
             traceback.print_exc()
             break
-        if processed_post_ids:
-            original_count = len(posts_batch)
-            posts_batch = [post for post in posts_batch if post.get('id') not in processed_post_ids]
-            skipped_count = original_count - len(posts_batch)
-            if skipped_count > 0:
-                logger(f"   Skipped {skipped_count} already processed post(s) from page {current_page_num}.")

-        if not posts_batch:
+        if not raw_posts_batch:
             if target_post_id and not processed_target_post_flag:
                 logger(f"❌ Target post {target_post_id} not found after checking all available pages (API returned no more posts at offset {current_offset}).")
             elif not target_post_id:
@@ -339,18 +365,34 @@
         else:
             logger(f"✅ Reached end of posts (no more content from API at offset {current_offset}).")
             break

+        posts_batch_to_yield = raw_posts_batch
+        original_count = len(raw_posts_batch)
+
+        if processed_post_ids:
+            posts_batch_to_yield = [post for post in raw_posts_batch if post.get('id') not in processed_post_ids]
+            skipped_count = original_count - len(posts_batch_to_yield)
+            if skipped_count > 0:
+                logger(f"   Skipped {skipped_count} already processed post(s) from page {current_page_num}.")
+
         if target_post_id and not processed_target_post_flag:
-            matching_post = next((p for p in posts_batch if str(p.get('id')) == str(target_post_id)), None)
+            matching_post = next((p for p in posts_batch_to_yield if str(p.get('id')) == str(target_post_id)), None)
             if matching_post:
                 logger(f"🎯 Found target post {target_post_id} on page {current_page_num} (offset {current_offset}).")
                 yield [matching_post]
                 processed_target_post_flag = True
         elif not target_post_id:
-            yield posts_batch
+            if posts_batch_to_yield:
+                yield posts_batch_to_yield
+            elif original_count > 0:
+                logger(f"   No new posts found on page {current_page_num}. Checking next page...")

         if processed_target_post_flag:
             break

         current_offset += page_size
         current_page_num += 1
         time.sleep(0.6)

     if target_post_id and not processed_target_post_flag and not (cancellation_event and cancellation_event.is_set()):
         logger(f"❌ Target post {target_post_id} could not be found after checking all relevant pages (final check after loop).")
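The paginated fetch walks the API in fixed steps of `page_size = 50`, so offset `o` maps to page `o // 50 + 1` and a start/end page range translates directly into offsets. A sketch of that arithmetic, under the assumption that the loop above is the only thing advancing the offset:

```python
PAGE_SIZE = 50  # page_size in download_from_api

def page_for_offset(offset):
    # Matches the "Page approx. offset // 50 + 1" log line above.
    return offset // PAGE_SIZE + 1

def offsets_for_page_range(start_page=1, end_page=None):
    """Offsets the fetch loop walks: (start_page-1)*50, then +50 per page."""
    offset = (start_page - 1) * PAGE_SIZE
    page = start_page
    while end_page is None or page <= end_page:
        yield offset
        offset += PAGE_SIZE
        page += 1

assert page_for_offset(0) == 1 and page_for_offset(150) == 4
assert list(offsets_for_page_range(2, 4)) == [50, 100, 150]
```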
src/core/booru_client.py (new file, 374 lines)
@@ -0,0 +1,374 @@
import os
import re
import time
import datetime
import urllib.parse
import requests
import logging
import cloudscraper

# --- Start of Combined Code from 1.py ---

# Part 1: Essential Utilities & Exceptions

class BooruClientException(Exception):
    """Base class for exceptions in this client."""
    pass

class HttpError(BooruClientException):
    """HTTP request during data extraction failed."""
    def __init__(self, message="", response=None):
        self.response = response
        self.status = response.status_code if response else 0
        if response and not message:
            message = f"'{response.status_code} {response.reason}' for '{response.url}'"
        super().__init__(message)

class NotFoundError(BooruClientException):
    pass

def unquote(s):
    return urllib.parse.unquote(s)

def parse_datetime(date_string, fmt):
    try:
        # Assumes date_string is in a format that strptime can handle with timezone
        return datetime.datetime.strptime(date_string, fmt)
    except (ValueError, TypeError):
        return None

def nameext_from_url(url, data=None):
    if data is None: data = {}
    try:
        path = urllib.parse.urlparse(url).path
        filename = unquote(os.path.basename(path))
        if '.' in filename:
            name, ext = filename.rsplit('.', 1)
            data["filename"], data["extension"] = name, ext.lower()
        else:
            data["filename"], data["extension"] = filename, ""
    except Exception:
        data["filename"], data["extension"] = "", ""
    return data

USERAGENT_FIREFOX = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/118.0"

# Part 2: Core Extractor Logic

class Extractor:
    category = ""
    subcategory = ""
    directory_fmt = ("{category}", "{id}")
    filename_fmt = "{filename}.{extension}"
    _retries = 3
    _timeout = 30

    def __init__(self, match, logger_func=print):
        self.url = match.string
        self.match = match
        self.groups = match.groups()
        self.session = cloudscraper.create_scraper()
        self.session.headers["User-Agent"] = USERAGENT_FIREFOX
        self.log = logger_func
        self.api_key = None
        self.user_id = None

    def set_auth(self, api_key, user_id):
        self.api_key = api_key
        self.user_id = user_id
        self._init_auth()

    def _init_auth(self):
        """Placeholder for extractor-specific auth setup."""
        pass

    def request(self, url, method="GET", fatal=True, **kwargs):
        for attempt in range(self._retries + 1):
            try:
                response = self.session.request(method, url, timeout=self._timeout, **kwargs)
                if response.status_code < 400:
                    return response
                if response.status_code == 404 and fatal:
                    raise NotFoundError(f"Resource not found at {url}")
                self.log(f"Request for {url} failed with status {response.status_code}. Retrying...")
            except requests.exceptions.RequestException as e:
                self.log(f"Request for {url} failed: {e}. Retrying...")
            if attempt < self._retries:
                time.sleep(2 ** attempt)
        if fatal:
            raise HttpError(f"Failed to retrieve {url} after {self._retries} retries.")
        return None

    def request_json(self, url, **kwargs):
        response = self.request(url, **kwargs)
        try:
            return response.json()
        except (ValueError, TypeError) as exc:
            self.log(f"Failed to decode JSON from {url}: {exc}")
            raise BooruClientException("Invalid JSON response")

    def items(self):
        data = self.metadata()
        for item in self.posts():
            # Check for our special page update message
            if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
                yield item
                continue

            # Otherwise, process it as a post
            post = item
            url = post.get("file_url")
            if not url: continue

            nameext_from_url(url, post)
            post["date"] = parse_datetime(post.get("created_at"), "%Y-%m-%dT%H:%M:%S.%f%z")

            if url.startswith("/"):
                url = self.root + url
                post['file_url'] = url  # Ensure full URL

            post.update(data)
            yield post

class BaseExtractor(Extractor):
    instances = ()

    def __init__(self, match, logger_func=print):
        super().__init__(match, logger_func)
        self._init_category()

    def _init_category(self):
        parsed_url = urllib.parse.urlparse(self.url)
        self.root = f"{parsed_url.scheme}://{parsed_url.netloc}"
        for i, group in enumerate(self.groups):
            if group is not None:
                try:
                    self.category = self.instances[i][0]
                    return
                except IndexError:
                    continue

    @classmethod
    def update(cls, instances):
        pattern_list = []
        instance_list = cls.instances = []
        for category, info in instances.items():
            root = info["root"].rstrip("/") if info["root"] else ""
            instance_list.append((category, root, info))
            pattern = info.get("pattern", re.escape(root.partition("://")[2]))
            pattern_list.append(f"({pattern})")
        return r"(?:https?://)?(?:" + "|".join(pattern_list) + r")"

# Part 3: Danbooru Extractor

class DanbooruExtractor(BaseExtractor):
    filename_fmt = "{category}_{id}_{filename}.{extension}"
    per_page = 200

    def __init__(self, match, logger_func=print):
        super().__init__(match, logger_func)
        self._auth_logged = False

    def _init_auth(self):
        if self.user_id and self.api_key:
            if not self._auth_logged:
                self.log("Danbooru auth set.")
                self._auth_logged = True
            self.session.auth = (self.user_id, self.api_key)

    def items(self):
        data = self.metadata()
        for item in self.posts():
            # Check for our special page update message
            if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
                yield item
                continue

            # Otherwise, process it as a post
            post = item
            url = post.get("file_url")
            if not url: continue

            nameext_from_url(url, post)
            post["date"] = parse_datetime(post.get("created_at"), "%Y-%m-%dT%H:%M:%S.%f%z")

            if url.startswith("/"):
                url = self.root + url
                post['file_url'] = url  # Ensure full URL

            post.update(data)
            yield post

    def metadata(self):
        return {}

    def posts(self):
        return []

    def _pagination(self, endpoint, params, prefix="b"):
        url = self.root + endpoint
        params["limit"] = self.per_page
        params["page"] = 1
        threshold = self.per_page - 20

        while True:
            posts = self.request_json(url, params=params)
            if not posts: break
            yield ('PAGE_UPDATE', len(posts))
            yield from posts
            if len(posts) < threshold: return
            if prefix:
                params["page"] = f"{prefix}{posts[-1]['id']}"
            else:
                params["page"] += 1

BASE_PATTERN = DanbooruExtractor.update({
    "danbooru": {"root": None, "pattern": r"(?:danbooru|safebooru)\.donmai\.us"},
})

class DanbooruTagExtractor(DanbooruExtractor):
    subcategory = "tag"
    directory_fmt = ("{category}", "{search_tags}")
    pattern = BASE_PATTERN + r"(/posts\?(?:[^&#]*&)*tags=([^&#]*))"

    def metadata(self):
        self.tags = unquote(self.groups[-1].replace("+", " ")).strip()
        sanitized_tags = re.sub(r'[\\/*?:"<>|]', "_", self.tags)
        return {"search_tags": sanitized_tags}

    def posts(self):
        return self._pagination("/posts.json", {"tags": self.tags})

class DanbooruPostExtractor(DanbooruExtractor):
    subcategory = "post"
    pattern = BASE_PATTERN + r"(/post(?:s|/show)/(\d+))"

    def posts(self):
        post_id = self.groups[-1]
        url = f"{self.root}/posts/{post_id}.json"
        post = self.request_json(url)
        return (post,) if post else ()

class GelbooruBase(Extractor):
    category = "gelbooru"
    root = "https://gelbooru.com"

    def __init__(self, match, logger_func=print):
        super().__init__(match, logger_func)
        self._auth_logged = False

    def _api_request(self, params, key="post"):
        # Auth is now added dynamically
        if self.api_key and self.user_id:
            if not self._auth_logged:
                self.log("Gelbooru auth set.")
                self._auth_logged = True
            params.update({"api_key": self.api_key, "user_id": self.user_id})

        url = self.root + "/index.php?page=dapi&q=index&json=1"
        data = self.request_json(url, params=params)

        if not key: return data
        posts = data.get(key, [])
        return posts if isinstance(posts, list) else [posts] if posts else []

    def items(self):
        base_data = self.metadata()
        base_data['category'] = self.category

        for item in self.posts():
            # Check for our special page update message
            if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
                yield item
                continue

            # Otherwise, process it as a post
            post = item
            url = post.get("file_url")
            if not url: continue

            data = base_data.copy()
            data.update(post)
            nameext_from_url(url, data)
            yield data

    def metadata(self): return {}
    def posts(self): return []

GELBOORU_PATTERN = r"(?:https?://)?(?:www\.)?gelbooru\.com"

class GelbooruTagExtractor(GelbooruBase):
    subcategory = "tag"
    directory_fmt = ("{category}", "{search_tags}")
    filename_fmt = "{category}_{id}_{md5}.{extension}"
    pattern = GELBOORU_PATTERN + r"(/index\.php\?page=post&s=list&tags=([^&#]*))"

    def metadata(self):
        self.tags = unquote(self.groups[-1].replace("+", " ")).strip()
        sanitized_tags = re.sub(r'[\\/*?:"<>|]', "_", self.tags)
        return {"search_tags": sanitized_tags}

    def posts(self):
        """Scrapes HTML search pages as API can be restrictive for tags."""
        pid = 0
        posts_per_page = 42
        search_url = self.root + "/index.php"
        params = {"page": "post", "s": "list", "tags": self.tags}

        while True:
            params['pid'] = pid
            self.log(f"Scraping search results page (offset: {pid})...")
            response = self.request(search_url, params=params)
            html_content = response.text
            post_ids = re.findall(r'id="p(\d+)"', html_content)

            if not post_ids:
                self.log("No more posts found on page. Ending scrape.")
                break
            yield ('PAGE_UPDATE', len(post_ids))
            for post_id in post_ids:
                post_data = self._api_request({"s": "post", "id": post_id})
                yield from post_data

            pid += posts_per_page

class GelbooruPostExtractor(GelbooruBase):
    subcategory = "post"
    filename_fmt = "{category}_{id}_{md5}.{extension}"
    pattern = GELBOORU_PATTERN + r"(/index\.php\?page=post&s=view&id=(\d+))"

    def posts(self):
        post_id = self.groups[-1]
        return self._api_request({"s": "post", "id": post_id})

# --- Main Entry Point ---

EXTRACTORS = [
    DanbooruTagExtractor,
    DanbooruPostExtractor,
    GelbooruTagExtractor,
    GelbooruPostExtractor,
]

def find_extractor(url, logger_func):
    for extractor_cls in EXTRACTORS:
        match = re.search(extractor_cls.pattern, url)
        if match:
            return extractor_cls(match, logger_func)
    return None

def fetch_booru_data(url, api_key, user_id, logger_func):
    """
    Main function to find an extractor and yield image data.
    """
    extractor = find_extractor(url, logger_func)
    if not extractor:
        logger_func(f"No suitable Booru extractor found for URL: {url}")
        return

    logger_func(f"Using extractor: {extractor.__class__.__name__}")
    extractor.set_auth(api_key, user_id)

    # The 'items' method will now yield the data dictionaries directly
    yield from extractor.items()
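`fetch_booru_data` is a generator that interleaves `('PAGE_UPDATE', n)` progress tuples with post dictionaries carrying `'file_url'`. A hedged consumption sketch; the `collect_booru_urls` helper is illustrative, not part of the module:

```python
def collect_booru_urls(url, api_key=None, user_id=None, log=print):
    """Drain the generator above, logging page progress as it arrives."""
    urls = []
    for item in fetch_booru_data(url, api_key, user_id, log):
        if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
            log(f"  page yielded {item[1]} posts")
            continue
        if item.get("file_url"):
            urls.append(item["file_url"])
    return urls
```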
src/core/bunkr_client.py (new file, 282 lines)
@@ -0,0 +1,282 @@
import logging
import os
import re
import requests
import html
import time
import datetime
import urllib.parse
import json
import random
import binascii
import itertools

class MockMessage:
    Directory = 1
    Url = 2
    Version = 3

class AlbumException(Exception): pass
class ExtractionError(AlbumException): pass
class HttpError(ExtractionError):
    def __init__(self, message="", response=None):
        self.response = response
        self.status = response.status_code if response is not None else 0
        super().__init__(message)
class ControlException(AlbumException): pass
class AbortExtraction(ExtractionError, ControlException): pass

try:
    re_compile = re._compiler.compile
except AttributeError:
    re_compile = re.sre_compile.compile
HTML_RE = re_compile(r"<[^>]+>")

def extr(txt, begin, end, default=""):
    try:
        first = txt.index(begin) + len(begin)
        return txt[first:txt.index(end, first)]
    except Exception: return default

def extract_iter(txt, begin, end, pos=None):
    try:
        index = txt.index
        lbeg = len(begin)
        lend = len(end)
        while True:
            first = index(begin, pos) + lbeg
            last = index(end, first)
            pos = last + lend
            yield txt[first:last]
    except Exception: return

def split_html(txt):
    try: return [html.unescape(x).strip() for x in HTML_RE.split(txt) if x and not x.isspace()]
    except TypeError: return []

def parse_datetime(date_string, format="%Y-%m-%dT%H:%M:%S%z", utcoffset=0):
    try:
        d = datetime.datetime.strptime(date_string, format)
        o = d.utcoffset()
        if o is not None: d = d.replace(tzinfo=None, microsecond=0) - o
        else:
            if d.microsecond: d = d.replace(microsecond=0)
            if utcoffset: d += datetime.timedelta(0, utcoffset * -3600)
        return d
    except (TypeError, IndexError, KeyError, ValueError, OverflowError): return None

unquote = urllib.parse.unquote
unescape = html.unescape

def decrypt_xor(encrypted, key, base64=True, fromhex=False):
    if base64: encrypted = binascii.a2b_base64(encrypted)
    if fromhex: encrypted = bytes.fromhex(encrypted.decode())
    div = len(key)
    return bytes([encrypted[i] ^ key[i % div] for i in range(len(encrypted))]).decode()

def advance(iterable, num):
    iterator = iter(iterable)
    next(itertools.islice(iterator, num, num), None)
    return iterator

def json_loads(s): return json.loads(s)
def json_dumps(obj): return json.dumps(obj, separators=(",", ":"))

class Extractor:
    def __init__(self, match, logger):
        self.log = logger
        self.url = match.string
        self.match = match
        self.groups = match.groups()
        self.session = requests.Session()
        self.session.headers["User-Agent"] = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0"

    @classmethod
    def from_url(cls, url, logger):
        if isinstance(cls.pattern, str): cls.pattern = re.compile(cls.pattern)
        match = cls.pattern.match(url)
        return cls(match, logger) if match else None

    def __iter__(self): return self.items()

    def items(self): yield MockMessage.Version, 1

    def request(self, url, method="GET", fatal=True, **kwargs):
        tries = 1
        while True:
            try:
                response = self.session.request(method, url, **kwargs)
                if response.status_code < 400: return response
                msg = f"'{response.status_code} {response.reason}' for '{response.url}'"
            except requests.exceptions.RequestException as exc:
                msg = str(exc)

            self.log.info("%s (retrying...)", msg)
            if tries > 4: break
            time.sleep(tries)
            tries += 1
        if not fatal: return None
        raise HttpError(msg)

    def request_json(self, url, **kwargs):
        response = self.request(url, **kwargs)
        try: return json_loads(response.text)
        except Exception as exc:
            self.log.warning("%s: %s", exc.__class__.__name__, exc)
            if not kwargs.get("fatal", True): return {}
            raise

BASE_PATTERN_BUNKR = r"(?:https?://)?(?:[a-zA-Z0-9-]+\.)?(bunkr\.(?:si|la|ws|red|black|media|site|is|to|ac|cr|ci|fi|pk|ps|sk|ph|su)|bunkrr\.ru)"
DOMAINS = ["bunkr.si", "bunkr.ws", "bunkr.la", "bunkr.red", "bunkr.black", "bunkr.media", "bunkr.site"]
CF_DOMAINS = set()

class BunkrAlbumExtractor(Extractor):
    category = "bunkr"
    root = "https://bunkr.si"
    root_dl = "https://get.bunkrr.su"
    root_api = "https://apidl.bunkr.ru"
    pattern = re.compile(BASE_PATTERN_BUNKR + r"/a/([^/?#]+)")

    def __init__(self, match, logger):
        super().__init__(match, logger)
        domain_match = re.search(BASE_PATTERN_BUNKR, match.string)
        if domain_match:
            self.root = "https://" + domain_match.group(1)
        self.endpoint = self.root_api + "/api/_001_v2"
        self.album_id = self.groups[-1]

    def items(self):
        page = self.request(self.url).text
        title = unescape(unescape(extr(page, 'property="og:title" content="', '"')))
        items_html = list(extract_iter(page, '<div class="grid-images_box', "</a>"))

        album_data = {
            "album_id": self.album_id,
            "album_name": title,
            "count": len(items_html),
        }
        yield MockMessage.Directory, album_data, {}

        for item_html in items_html:
            try:
                webpage_url = unescape(extr(item_html, ' href="', '"'))
                if webpage_url.startswith("/"):
                    webpage_url = self.root + webpage_url

                file_data = self._extract_file(webpage_url)
                info = split_html(item_html)

                if not file_data.get("name"):
                    file_data["name"] = info[-3]

                yield MockMessage.Url, file_data, {}
            except Exception as exc:
                self.log.error("%s: %s", exc.__class__.__name__, exc)

    def _extract_file(self, webpage_url):
        page = self.request(webpage_url).text
        data_id = extr(page, 'data-file-id="', '"')

        # This referer is for the API call only
        api_referer = self.root_dl + "/file/" + data_id
        headers = {"Referer": api_referer, "Origin": self.root_dl}
        data = self.request_json(self.endpoint, method="POST", headers=headers, json={"id": data_id})

        # Get the raw file URL (no domain replacement)
        file_url = decrypt_xor(data["url"], f"SECRET_KEY_{data['timestamp'] // 3600}".encode()) if data.get("encrypted") else data["url"]

        file_name = extr(page, "<h1", "<").rpartition(">")[2]

        # --- NEW FIX ---
        # The download thread uses a new `requests` call, so we must
        # explicitly pass BOTH the User-Agent and the correct Referer.

        # 1. Get the User-Agent from this extractor's session
        user_agent = self.session.headers.get("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101 Firefox/102.0")

        # 2. Use the original album URL as the Referer
        download_referer = self.url

        return {
            "url": file_url,
            "name": unescape(file_name),
            "_http_headers": {
                "Referer": download_referer,
                "User-Agent": user_agent
            }
        }

class BunkrMediaExtractor(BunkrAlbumExtractor):
    pattern = re.compile(BASE_PATTERN_BUNKR + r"(/[fvid]/[^/?#]+)")

    def items(self):
        try:
            media_path = self.groups[-1]
            file_data = self._extract_file(self.root + media_path)
            album_data = {"album_name": file_data.get("name", "bunkr_media"), "count": 1}

            yield MockMessage.Directory, album_data, {}
            yield MockMessage.Url, file_data, {}

        except Exception as exc:
            self.log.error("%s: %s", exc.__class__.__name__, exc)
            yield MockMessage.Directory, {"album_name": "error", "count": 0}, {}

def get_bunkr_extractor(url, logger):
    """Selects the correct Bunkr extractor based on the URL pattern."""
    if BunkrAlbumExtractor.pattern.match(url):
        logger.info("Bunkr Album URL detected.")
        return BunkrAlbumExtractor.from_url(url, logger)
    elif BunkrMediaExtractor.pattern.match(url):
        logger.info("Bunkr Media URL detected.")
        return BunkrMediaExtractor.from_url(url, logger)
    else:
        logger.error(f"No suitable Bunkr extractor found for URL: {url}")
        return None

def fetch_bunkr_data(url, logger):
    """
    Main function to be called from the GUI.
    It extracts all file information from a Bunkr URL, now handling both albums and direct file links.

    Returns:
        A tuple of (album_name, list_of_files)
        - album_name (str): The name of the album.
        - list_of_files (list): A list of dicts, each containing 'url', 'name', and '_http_headers'.
        Returns (None, None) on failure.
    """
    # --- START: New logic to handle direct CDN file URLs ---
    try:
        parsed_url = urllib.parse.urlparse(url)
        # Check if the hostname contains 'cdn' and the path has a common file extension
        is_direct_cdn_file = (parsed_url.hostname and 'cdn' in parsed_url.hostname and 'bunkr' in parsed_url.hostname and
                              any(parsed_url.path.lower().endswith(ext) for ext in ['.mp4', '.mkv', '.webm', '.jpg', '.jpeg', '.png', '.gif', '.zip', '.rar']))

        if is_direct_cdn_file:
            logger.info("Bunkr direct file URL detected.")
            filename = os.path.basename(parsed_url.path)
            # Use the filename (without extension) as a sensible album name
            album_name = os.path.splitext(filename)[0]

            files_to_download = [{
                'url': url,
                'name': filename,
                '_http_headers': {'Referer': 'https://bunkr.ru/'}  # Use a generic Referer
            }]
            return album_name, files_to_download
    except Exception as e:
        logger.warning(f"Could not parse Bunkr URL for direct file check: {e}")
    # --- END: New logic ---

    # This is the original logic for album and media pages
    extractor = get_bunkr_extractor(url, logger)
    if not extractor:
        return None, None

    try:
        album_name = "default_bunkr_album"
        files_to_download = []
        for msg_type, data, metadata in extractor:
            if msg_type == MockMessage.Directory:
                raw_album_name = data.get('album_name', 'untitled')
                album_name = re.sub(r'[<>:"/\\|?*]', '_', raw_album_name).strip() or "untitled"
                logger.info(f"Processing Bunkr album: {album_name}")
            elif msg_type == MockMessage.Url:
                files_to_download.append(data)

        if not files_to_download:
            logger.warning("No files found to download from the Bunkr URL.")
            return None, None

        return album_name, files_to_download

    except Exception as e:
        logger.error(f"An error occurred while extracting Bunkr info: {e}", exc_info=True)
        return None, None
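`decrypt_xor` relies on XOR being its own inverse: applying the same keystream twice returns the plaintext, which is why a single helper serves for both directions. A round-trip sketch with a made-up key of the same `SECRET_KEY_{timestamp // 3600}` shape:

```python
import binascii

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Same keystream rule as decrypt_xor above: byte i XOR key[i % len(key)].
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"SECRET_KEY_480000"  # hypothetical hour bucket: f"SECRET_KEY_{timestamp // 3600}"
cipher_b64 = binascii.b2a_base64(xor_bytes(b"https://example.test/f.mp4", key))
plain = xor_bytes(binascii.a2b_base64(cipher_b64), key).decode()
assert plain == "https://example.test/f.mp4"  # XOR twice restores the plaintext
```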
src/core/deviantart_client.py (new file, 174 lines)
@@ -0,0 +1,174 @@
import requests
import re
import os
import time
import threading
from urllib.parse import urlparse

class DeviantArtClient:
    # Public Client Credentials
    CLIENT_ID = "5388"
    CLIENT_SECRET = "76b08c69cfb27f26d6161f9ab6d061a1"
    BASE_API = "https://www.deviantart.com/api/v1/oauth2"

    def __init__(self, logger_func=print):
        self.session = requests.Session()
        self.session.headers.update({
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
            'Accept': '*/*',
        })
        self.access_token = None
        self.logger = logger_func

        # --- DEDUPLICATION LOGIC ---
        self.logged_waits = set()
        self.log_lock = threading.Lock()

    def authenticate(self):
        """Authenticates using the client credentials flow."""
        try:
            url = "https://www.deviantart.com/oauth2/token"
            data = {
                "grant_type": "client_credentials",
                "client_id": self.CLIENT_ID,
                "client_secret": self.CLIENT_SECRET
            }
            resp = self.session.post(url, data=data, timeout=10)
            resp.raise_for_status()
            data = resp.json()
            self.access_token = data.get("access_token")
            return True
        except Exception as e:
            self.logger(f"DA Auth Error: {e}")
            return False

    def _api_call(self, endpoint, params=None):
        if not self.access_token:
            if not self.authenticate():
                raise Exception("Authentication failed")

        url = f"{self.BASE_API}{endpoint}"
        params = params or {}
        params['access_token'] = self.access_token
        params['mature_content'] = 'true'

        retries = 0
        max_retries = 4
        backoff_delay = 2

        while True:
            try:
                resp = self.session.get(url, params=params, timeout=20)

                # Handle Token Expiration (401)
                if resp.status_code == 401:
                    self.logger("   [DeviantArt] Token expired. Refreshing...")
                    if self.authenticate():
                        params['access_token'] = self.access_token
                        continue
                    else:
                        raise Exception("Failed to refresh token")

                # Handle Rate Limiting (429)
                if resp.status_code == 429:
                    if retries < max_retries:
                        retry_after = resp.headers.get('Retry-After')

                        if retry_after:
                            sleep_time = int(retry_after) + 1
                            msg = f"   [DeviantArt] ⚠️ Rate limit (Server says wait {sleep_time}s)."
                        else:
                            sleep_time = backoff_delay * (2 ** retries)
                            msg = f"   [DeviantArt] ⚠️ Rate limit reached. Retrying in {sleep_time}s..."

                        # --- THREAD-SAFE LOGGING CHECK ---
                        should_log = False
                        with self.log_lock:
                            if sleep_time not in self.logged_waits:
                                self.logged_waits.add(sleep_time)
                                should_log = True

                        if should_log:
                            self.logger(msg)

                        time.sleep(sleep_time)
                        retries += 1
                        continue
                    else:
                        resp.raise_for_status()

                resp.raise_for_status()

                # Clear log history on success so we get warned again if limits return later
                with self.log_lock:
                    if self.logged_waits:
                        self.logged_waits.clear()

                return resp.json()

            except requests.exceptions.RequestException as e:
                if retries < max_retries:
                    # Using the lock here too to prevent connection error spam
                    should_log = False
                    with self.log_lock:
                        if "conn_error" not in self.logged_waits:
                            self.logged_waits.add("conn_error")
                            should_log = True

                    if should_log:
                        self.logger(f"   [DeviantArt] Connection error: {e}. Retrying...")

                    time.sleep(2)
                    retries += 1
                    continue
                raise e

    def get_deviation_uuid(self, url):
        """Scrapes the deviation page to find the UUID."""
        try:
            resp = self.session.get(url, timeout=15)
            match = re.search(r'"deviationUuid":"([^"]+)"', resp.text)
            if match:
                return match.group(1)
            match = re.search(r'-(\d+)$', url)
            if match:
                return match.group(1)
        except Exception as e:
            self.logger(f"Error scraping UUID: {e}")
        return None

    def get_deviation_content(self, uuid):
        """Fetches download info."""
        try:
            data = self._api_call(f"/deviation/download/{uuid}")
            if 'src' in data:
                return data
        except Exception:
            pass

        try:
            meta = self._api_call(f"/deviation/{uuid}")
            if 'content' in meta:
                return meta['content']
        except Exception:
            pass
        return None

    def get_gallery_folder(self, username, offset=0, limit=24):
        """Fetches items from a user's gallery."""
        return self._api_call("/gallery/all", {"username": username, "offset": offset, "limit": limit})

    @staticmethod
    def extract_info_from_url(url):
        parsed = urlparse(url)
        path = parsed.path.strip('/')
        parts = path.split('/')

        if len(parts) >= 3 and parts[1] == 'art':
            return 'post', parts[0], parts[2]
        elif len(parts) >= 2 and parts[1] == 'gallery':
            return 'gallery', parts[0], None
        elif len(parts) == 1:
            return 'gallery', parts[0], None

        return None, None, None
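A hedged usage sketch for the client above. The URL is a placeholder, and the `'results'` key is an assumption about the shape of the gallery response payload:

```python
client = DeviantArtClient(logger_func=print)

kind, username, slug = DeviantArtClient.extract_info_from_url(
    "https://www.deviantart.com/someartist/art/example-piece-123456789")  # placeholder URL

if kind == 'post':
    uuid = client.get_deviation_uuid(
        f"https://www.deviantart.com/{username}/art/{slug}")
    content = client.get_deviation_content(uuid) if uuid else None
    if content and content.get('src'):
        print("download URL:", content['src'])
elif kind == 'gallery':
    page = client.get_gallery_folder(username, offset=0, limit=24)
    for item in page.get('results', []):  # 'results' is an assumed API field
        print(item.get('url'))
```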
src/core/discord_client.py (new file, 88 lines)
@@ -0,0 +1,88 @@
import time
import cloudscraper
import json

def fetch_server_channels(server_id, logger=print, cookies_dict=None):
    """
    Fetches all channels for a given Discord server ID from the API.
    Uses cloudscraper to bypass Cloudflare.
    """
    api_url = f"https://kemono.cr/api/v1/discord/server/{server_id}"
    logger(f"   Fetching channels for server: {api_url}")

    scraper = cloudscraper.create_scraper()
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
        'Referer': f'https://kemono.cr/discord/server/{server_id}',
        'Accept': 'text/css'
    }

    try:
        response = scraper.get(api_url, headers=headers, cookies=cookies_dict, timeout=30)
        response.raise_for_status()
        channels = response.json()
        if isinstance(channels, list):
            logger(f"   ✅ Found {len(channels)} channels for server {server_id}.")
            return channels
        return None
    except Exception as e:
        logger(f"   ❌ Error fetching server channels for {server_id}: {e}")
        return None

def fetch_channel_messages(channel_id, logger=print, cancellation_event=None, pause_event=None, cookies_dict=None):
    """
    A generator that fetches all messages for a specific Discord channel, handling pagination.
    Uses cloudscraper and proper headers to bypass server protection.
    """
    scraper = cloudscraper.create_scraper()
    base_url = f"https://kemono.cr/api/v1/discord/channel/{channel_id}"
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
        'Referer': f'https://kemono.cr/discord/channel/{channel_id}',
        'Accept': 'text/css'
    }

    offset = 0
    page_size = 150

    while True:
        if cancellation_event and cancellation_event.is_set():
            logger("   Discord message fetching cancelled.")
            break
        if pause_event and pause_event.is_set():
            logger("   Discord message fetching paused...")
            while pause_event.is_set():
                if cancellation_event and cancellation_event.is_set():
                    break
                time.sleep(0.5)
            if not (cancellation_event and cancellation_event.is_set()):
                logger("   Discord message fetching resumed.")

        paginated_url = f"{base_url}?o={offset}"
        logger(f"   Fetching messages from API: page starting at offset {offset}")

        try:
            response = scraper.get(paginated_url, headers=headers, cookies=cookies_dict, timeout=30)
            response.raise_for_status()
            messages_batch = response.json()

            if not messages_batch:
                logger(f"   ✅ Reached end of messages for channel {channel_id}.")
                break

            logger(f"   Fetched {len(messages_batch)} messages...")
            yield messages_batch

            if len(messages_batch) < page_size:
                logger(f"   ✅ Last page of messages received for channel {channel_id}.")
                break

            offset += page_size
            time.sleep(0.5)  # Be respectful to the API

        except (cloudscraper.exceptions.CloudflareException, json.JSONDecodeError) as e:
            logger(f"   ❌ Error fetching messages at offset {offset}: {e}")
            break
        except Exception as e:
            logger(f"   ❌ An unexpected error occurred while fetching messages: {e}")
            break
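Both functions are normally driven from the GUI, but a minimal sketch shows the intended call pattern. The server and channel IDs are placeholders, and the `'id'`/`'name'` keys are assumptions about the channel payload:

```python
channels = fetch_server_channels("1234567890") or []  # placeholder server ID
for channel in channels:
    print("channel:", channel.get("name"), channel.get("id"))

total = 0
for batch in fetch_channel_messages("9876543210"):    # placeholder channel ID
    total += len(batch)                               # generator yields message batches
print(f"fetched {total} messages")
```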
src/core/erome_client.py (new file, 131 lines)
@@ -0,0 +1,131 @@
import re
import html
import time
import urllib.parse
import requests
from datetime import datetime
import cloudscraper


def extr(txt, begin, end, default=""):
    """Stripped-down version of 'extract()' to find text between two delimiters."""
    try:
        first = txt.index(begin) + len(begin)
        return txt[first:txt.index(end, first)]
    except (ValueError, IndexError):
        return default


def extract_iter(txt, begin, end):
    """Yields all occurrences of text between two delimiters."""
    try:
        index = txt.index
        lbeg = len(begin)
        lend = len(end)
        pos = 0
        while True:
            first = index(begin, pos) + lbeg
            last = index(end, first)
            pos = last + lend
            yield txt[first:last]
    except (ValueError, IndexError):
        return


def nameext_from_url(url):
    """Extracts filename and extension from a URL."""
    data = {}
    filename = urllib.parse.unquote(url.partition("?")[0].rpartition("/")[2])
    name, _, ext = filename.rpartition(".")
    if name and len(ext) <= 16:
        data["filename"], data["extension"] = name, ext.lower()
    else:
        data["filename"], data["extension"] = filename, ""
    return data


def parse_timestamp(ts, default=None):
    """Creates a datetime object from a Unix timestamp."""
    try:
        return datetime.fromtimestamp(int(ts))
    except (ValueError, TypeError):
        return default


def fetch_erome_data(url, logger):
    """
    Identifies and extracts all media files from an Erome album URL.

    Args:
        url (str): The Erome album URL (e.g., https://www.erome.com/a/albumID).
        logger (function): A function to log progress messages.

    Returns:
        tuple: A tuple containing (album_folder_name, list_of_file_dicts).
               Returns (None, []) if data extraction fails.
    """
    album_id_match = re.search(r"/a/(\w+)", url)
    if not album_id_match:
        logger(f"Error: The URL '{url}' does not appear to be a valid Erome album link.")
        return None, []

    album_id = album_id_match.group(1)
    page_url = f"https://www.erome.com/a/{album_id}"

    session = cloudscraper.create_scraper()

    try:
        logger(f"   Fetching Erome album page: {page_url}")
        for attempt in range(5):
            response = session.get(page_url, timeout=30)
            response.raise_for_status()
            page_content = response.text
            if "<title>Please wait a few moments</title>" in page_content:
                logger(f"   Cloudflare check detected. Waiting 5 seconds... (Attempt {attempt + 1}/5)")
                time.sleep(5)
                continue
            break
        else:
            logger("   Error: Could not bypass Cloudflare check after several attempts.")
            return None, []

        title = html.unescape(extr(page_content, 'property="og:title" content="', '"'))
        user = urllib.parse.unquote(extr(page_content, 'href="https://www.erome.com/', '"', default="unknown_user"))

        sanitized_title = re.sub(r'[<>:"/\\|?*]', '_', title).strip()
        sanitized_user = re.sub(r'[<>:"/\\|?*]', '_', user).strip()

        album_folder_name = f"Erome - {sanitized_user} - {sanitized_title} [{album_id}]"

        urls = []
        media_groups = page_content.split('<div class="media-group"')
        for group in media_groups[1:]:
            # Videos expose a <source src="...">; images use data-src.
            media_url = extr(group, '<source src="', '"') or extr(group, 'data-src="', '"')
            if media_url:
                urls.append(media_url)

        if not urls:
            logger("   Warning: No media URLs found on the album page.")
            return album_folder_name, []

        logger(f"   Found {len(urls)} media files in album '{title}'.")

        file_list = []
        for i, file_url in enumerate(urls, 1):
            filename_info = nameext_from_url(file_url)
            # Fall back to .mp4 when the URL carries no usable extension.
            filename = f"{album_id}_{sanitized_title}_{i:03d}.{filename_info.get('extension') or 'mp4'}"

            file_data = {
                "url": file_url,
                "filename": filename,
                "headers": {"Referer": page_url},
            }
            file_list.append(file_data)

        return album_folder_name, file_list

    except requests.exceptions.RequestException as e:
        logger(f"   Error fetching Erome page: {e}")
        return None, []
    except Exception as e:
        logger(f"   An unexpected error occurred during Erome extraction: {e}")
        return None, []
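A brief usage sketch for the extractor above (the album ID is a placeholder); each returned entry already carries the Referer header that the download request needs:

# Placeholder album URL; fetch_erome_data returns (folder_name, file_list).
folder_name, files = fetch_erome_data("https://www.erome.com/a/abc123", logger=print)
for item in files:
    # Pass item["headers"] (the Referer) along with the GET for item["url"].
    print(item["filename"], "<-", item["url"])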
138
src/core/fap_nation_client.py
Normal file
@@ -0,0 +1,138 @@
import re
import os
import cloudscraper
from urllib.parse import urlparse, urljoin
from ..utils.file_utils import clean_folder_name


def fetch_fap_nation_data(album_url, logger_func):
    """
    Scrapes a fap-nation page by prioritizing HLS streams first, then falling
    back to direct download links. Selects the highest quality available.
    """
    logger_func(f"   [Fap-Nation] Fetching album data from: {album_url}")
    scraper = cloudscraper.create_scraper()

    try:
        response = scraper.get(album_url, timeout=45)
        response.raise_for_status()
        html_content = response.text

        title_match = re.search(r'<h1[^>]*itemprop="name"[^>]*>(.*?)</h1>', html_content, re.IGNORECASE)
        album_slug = clean_folder_name(os.path.basename(urlparse(album_url).path.strip('/')))
        album_title = clean_folder_name(title_match.group(1).strip()) if title_match else album_slug

        files_to_download = []
        final_url = None
        link_type = None
        filename_from_video_tag = None

        video_tag_title_match = re.search(r'data-plyr-config=.*?"title":.*?"([^&]+?\.mp4)"', html_content, re.IGNORECASE)
        if video_tag_title_match:
            filename_from_video_tag = clean_folder_name(video_tag_title_match.group(1))
            logger_func(f"   [Fap-Nation] Found high-quality filename in video tag: {filename_from_video_tag}")

        # --- REVISED LOGIC: HLS FIRST ---

        # 1. Prioritize finding an HLS stream.
        logger_func("   [Fap-Nation] Priority 1: Searching for HLS stream...")
        iframe_match = re.search(r'<iframe[^>]+src="([^"]+mediadelivery\.net[^"]+)"', html_content, re.IGNORECASE)

        if iframe_match:
            iframe_url = iframe_match.group(1)
            logger_func(f"   [Fap-Nation] Found video iframe. Visiting: {iframe_url}")
            try:
                iframe_response = scraper.get(iframe_url, timeout=30)
                iframe_response.raise_for_status()
                iframe_html = iframe_response.text

                playlist_match = re.search(r'<source[^>]+src="([^"]+\.m3u8)"', iframe_html, re.IGNORECASE)
                if playlist_match:
                    final_url = playlist_match.group(1)
                    link_type = 'hls'
                    logger_func(f"   [Fap-Nation] Found embedded HLS stream in iframe: {final_url}")
            except Exception as e:
                logger_func(f"   [Fap-Nation] ⚠️ Error fetching or parsing iframe content: {e}")

        if not final_url:
            logger_func("   [Fap-Nation] No stream found in iframe. Checking main page content as a last resort...")
            js_var_match = re.search(r'"(https?://[^"]+\.m3u8)"', html_content, re.IGNORECASE)
            if js_var_match:
                final_url = js_var_match.group(1)
                link_type = 'hls'
                logger_func(f"   [Fap-Nation] Found HLS stream on main page: {final_url}")

        # 2. Fallback: If no HLS stream was found, search for direct links.
        if not final_url:
            logger_func("   [Fap-Nation] No HLS stream found. Priority 2 (Fallback): Searching for direct download links...")
            direct_link_pattern = r'<a\s+[^>]*href="([^"]+\.(?:mp4|webm|mkv|mov))"[^>]*>'
            direct_links_found = re.findall(direct_link_pattern, html_content, re.IGNORECASE)

            if direct_links_found:
                logger_func(f"   [Fap-Nation] Found {len(direct_links_found)} direct media link(s). Selecting the best quality...")
                best_link = None
                # Define qualities from highest to lowest
                qualities_to_check = ['1080p', '720p', '480p', '360p']

                # Find the best quality link by iterating through preferred qualities
                for quality in qualities_to_check:
                    for link in direct_links_found:
                        if quality in link.lower():
                            best_link = link
                            logger_func(f"   [Fap-Nation] Found '{quality}' link: {best_link}")
                            break  # Found the best link for this quality level
                    if best_link:
                        break  # Found the highest quality available

                # Fallback if no quality string was found in any link
                if not best_link:
                    best_link = direct_links_found[0]
                    logger_func(f"   [Fap-Nation] ⚠️ No quality tags (1080p, 720p, etc.) found in links. Defaulting to first link: {best_link}")

                final_url = best_link
                link_type = 'direct'
                logger_func(f"   [Fap-Nation] Identified direct media link: {final_url}")

        # If, after all checks, we still have no URL, then fail.
        if not final_url:
            logger_func("   [Fap-Nation] ❌ Stage 1 Failed: Could not find any HLS stream or direct link.")
            return None, []

        # --- HLS Quality Selection Logic ---
        if link_type == 'hls' and final_url:
            logger_func("   [Fap-Nation] HLS stream found. Checking for higher quality variants...")
            try:
                master_playlist_response = scraper.get(final_url, timeout=20)
                master_playlist_response.raise_for_status()
                playlist_content = master_playlist_response.text

                streams = re.findall(r'#EXT-X-STREAM-INF:.*?RESOLUTION=(\d+)x(\d+).*?\n(.*?)\s', playlist_content)

                if streams:
                    best_stream = max(streams, key=lambda s: int(s[0]) * int(s[1]))
                    height = best_stream[1]
                    relative_path = best_stream[2]
                    new_final_url = urljoin(final_url, relative_path)

                    logger_func(f"   [Fap-Nation] ✅ Best quality found: {height}p. Updating URL to: {new_final_url}")
                    final_url = new_final_url
                else:
                    logger_func("   [Fap-Nation] ℹ️ No alternate quality streams found in playlist. Using original.")
            except Exception as e:
                logger_func(f"   [Fap-Nation] ⚠️ Could not parse HLS master playlist for quality selection: {e}. Using original URL.")

        if final_url and link_type:
            if filename_from_video_tag:
                base_name, _ = os.path.splitext(filename_from_video_tag)
                new_filename = f"{base_name}.mp4"
            else:
                new_filename = f"{album_slug}.mp4"

            files_to_download.append({'url': final_url, 'filename': new_filename, 'type': link_type})
            logger_func(f"   [Fap-Nation] ✅ Ready to download '{new_filename}' ({link_type} method).")
            return album_title, files_to_download

        logger_func("   [Fap-Nation] ❌ Could not determine a valid download link.")
        return None, []

    except Exception as e:
        logger_func(f"   [Fap-Nation] ❌ Error fetching Fap-Nation data: {e}")
        return None, []
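A hedged usage sketch (the URL is a placeholder): because the function can return either an HLS playlist or a direct file, the caller has to branch on the 'type' field and hand 'hls' entries to an HLS-capable downloader such as ffmpeg rather than a plain HTTP GET:

title, files = fetch_fap_nation_data("https://fap-nation.com/example-video/", print)  # placeholder URL
for f in files:
    if f["type"] == "hls":
        print("HLS playlist, needs an HLS downloader:", f["url"])
    else:
        print("Direct download:", f["filename"], f["url"])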
189
src/core/mangadex_client.py
Normal file
@@ -0,0 +1,189 @@
# src/core/mangadex_client.py

import os
import re
import time
import cloudscraper
from ..utils.file_utils import clean_folder_name


def fetch_mangadex_data(start_url, output_dir, logger_func, file_progress_callback, overall_progress_callback, pause_event, cancellation_event):
    """
    Fetches and downloads all content from a MangaDex series or chapter URL.
    Returns a tuple of (downloaded_count, skipped_count).
    """
    grand_total_dl = 0
    grand_total_skip = 0

    api = _MangadexAPI(logger_func)

    def _check_pause():
        if cancellation_event and cancellation_event.is_set():
            return True
        if pause_event and pause_event.is_set():
            logger_func("   Download paused...")
            while pause_event.is_set():
                if cancellation_event and cancellation_event.is_set():
                    return True
                time.sleep(0.5)
            logger_func("   Download resumed.")
        # Guard against a missing cancellation event.
        return bool(cancellation_event and cancellation_event.is_set())

    series_match = re.search(r"mangadex\.org/(?:title|manga)/([0-9a-f-]+)", start_url)
    chapter_match = re.search(r"mangadex\.org/chapter/([0-9a-f-]+)", start_url)

    chapters_to_process = []
    if series_match:
        series_id = series_match.group(1)
        logger_func(f"   Series detected. Fetching chapter list for ID: {series_id}")
        chapters_to_process = api.get_manga_chapters(series_id, cancellation_event, pause_event)
    elif chapter_match:
        chapter_id = chapter_match.group(1)
        logger_func(f"   Single chapter detected. Fetching info for ID: {chapter_id}")
        chapter_info = api.get_chapter_info(chapter_id)
        if chapter_info:
            chapters_to_process = [chapter_info]

    if not chapters_to_process:
        logger_func("❌ No chapters found or failed to fetch chapter info.")
        return 0, 0

    logger_func(f"✅ Found {len(chapters_to_process)} chapter(s) to download.")
    if overall_progress_callback:
        overall_progress_callback.emit(len(chapters_to_process), 0)

    for chap_idx, chapter_json in enumerate(chapters_to_process):
        if _check_pause():
            break
        try:
            metadata = api.transform_chapter_data(chapter_json)
            logger_func("-" * 40)
            logger_func(f"Processing Chapter {chap_idx + 1}/{len(chapters_to_process)}: Vol. {metadata['volume']} Ch. {metadata['chapter']}{metadata['chapter_minor']} - {metadata['title']}")

            server_info = api.get_at_home_server(chapter_json["id"])
            if not server_info:
                logger_func("   ❌ Could not get image server for this chapter. Skipping.")
                continue

            base_url = f"{server_info['baseUrl']}/data/{server_info['chapter']['hash']}/"
            image_files = server_info['chapter']['data']

            series_folder = clean_folder_name(metadata['manga'])
            chapter_folder_title = metadata['title'] or ''
            chapter_folder = clean_folder_name(f"Vol {metadata['volume']:02d} Chap {metadata['chapter']:03d}{metadata['chapter_minor']} - {chapter_folder_title}".strip().strip('-').strip())
            final_save_path = os.path.join(output_dir, series_folder, chapter_folder)
            os.makedirs(final_save_path, exist_ok=True)

            for img_idx, filename in enumerate(image_files):
                if _check_pause():
                    break

                full_img_url = base_url + filename
                img_path = os.path.join(final_save_path, f"{img_idx + 1:03d}{os.path.splitext(filename)[1]}")

                if os.path.exists(img_path):
                    logger_func(f"   -> Skip ({img_idx+1}/{len(image_files)}): '{os.path.basename(img_path)}' already exists.")
                    grand_total_skip += 1
                    continue

                logger_func(f"   Downloading ({img_idx+1}/{len(image_files)}): '{os.path.basename(img_path)}'...")

                try:
                    response = api.session.get(full_img_url, stream=True, timeout=60, headers={'Referer': 'https://mangadex.org/'})
                    response.raise_for_status()
                    total_size = int(response.headers.get('content-length', 0))

                    if file_progress_callback:
                        file_progress_callback.emit(os.path.basename(img_path), (0, total_size))

                    with open(img_path, 'wb') as f:
                        downloaded_bytes = 0
                        for chunk in response.iter_content(chunk_size=8192):
                            if _check_pause():
                                break
                            f.write(chunk)
                            downloaded_bytes += len(chunk)
                            if file_progress_callback:
                                file_progress_callback.emit(os.path.basename(img_path), (downloaded_bytes, total_size))

                    if _check_pause():
                        # Remove the partial file left behind by a cancelled download.
                        if os.path.exists(img_path):
                            os.remove(img_path)
                        break

                    grand_total_dl += 1
                except Exception as e:
                    logger_func(f"   ❌ Failed to download page {img_idx+1}: {e}")
                    grand_total_skip += 1

            if overall_progress_callback:
                overall_progress_callback.emit(len(chapters_to_process), chap_idx + 1)
            time.sleep(1)

        except Exception as e:
            logger_func(f"   ❌ An unexpected error occurred while processing chapter {chapter_json.get('id')}: {e}")

    return grand_total_dl, grand_total_skip


class _MangadexAPI:
    def __init__(self, logger_func):
        self.logger_func = logger_func
        self.session = cloudscraper.create_scraper()
        self.root = "https://api.mangadex.org"

    def _call(self, endpoint, params=None, cancellation_event=None):
        if cancellation_event and cancellation_event.is_set():
            return None
        try:
            response = self.session.get(f"{self.root}{endpoint}", params=params, timeout=30)
            if response.status_code == 429:
                retry_after = int(response.headers.get("X-RateLimit-Retry-After", 5))
                self.logger_func(f"   ⚠️ Rate limited. Waiting for {retry_after} seconds...")
                time.sleep(retry_after)
                return self._call(endpoint, params, cancellation_event)
            response.raise_for_status()
            return response.json()
        except Exception as e:
            self.logger_func(f"   ❌ API call to '{endpoint}' failed: {e}")
            return None

    def get_manga_chapters(self, series_id, cancellation_event, pause_event):
        all_chapters = []
        offset = 0
        limit = 500
        base_params = {
            "limit": limit, "order[volume]": "asc", "order[chapter]": "asc",
            "translatedLanguage[]": ["en"], "includes[]": ["scanlation_group", "user", "manga"]
        }
        while True:
            if cancellation_event.is_set():
                break
            while pause_event.is_set():
                time.sleep(0.5)

            params = {**base_params, "offset": offset}
            data = self._call(f"/manga/{series_id}/feed", params, cancellation_event)
            if not data or data.get("result") != "ok":
                break

            results = data.get("data", [])
            all_chapters.extend(results)

            if (offset + limit) >= data.get("total", 0):
                break
            offset += limit
        return all_chapters

    def get_chapter_info(self, chapter_id):
        params = {"includes[]": ["scanlation_group", "user", "manga"]}
        data = self._call(f"/chapter/{chapter_id}", params)
        return data.get("data") if data and data.get("result") == "ok" else None

    def get_at_home_server(self, chapter_id):
        return self._call(f"/at-home/server/{chapter_id}")

    def transform_chapter_data(self, chapter):
        relationships = {item["type"]: item for item in chapter.get("relationships", [])}
        manga = relationships.get("manga", {})
        c_attrs = chapter.get("attributes", {})
        m_attrs = manga.get("attributes", {})

        chapter_num_str = c_attrs.get("chapter", "0") or "0"
        chnum, sep, minor = chapter_num_str.partition(".")

        return {
            "manga": (m_attrs.get("title", {}).get("en") or next(iter(m_attrs.get("title", {}).values()), "Unknown Series")),
            "title": c_attrs.get("title", ""),
            "volume": int(float(c_attrs.get("volume", 0) or 0)),
            "chapter": int(float(chnum or 0)),
            "chapter_minor": sep + minor if minor else ""
        }
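A minimal sketch of driving the downloader outside the GUI, assuming plain threading.Event objects in place of the app's pause/cancel signals (the progress callbacks expect Qt-style objects with .emit, so they are left as None here; the series UUID is a placeholder):

import threading

pause_event = threading.Event()
cancel_event = threading.Event()
downloaded, skipped = fetch_mangadex_data(
    "https://mangadex.org/title/00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "downloads",
    print,
    file_progress_callback=None,
    overall_progress_callback=None,
    pause_event=pause_event,
    cancellation_event=cancel_event,
)
print(f"Downloaded {downloaded} pages, skipped {skipped}.")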
44
src/core/nhentai_client.py
Normal file
@@ -0,0 +1,44 @@
import cloudscraper


def fetch_nhentai_gallery(gallery_id, logger=print):
    """
    Fetches the metadata for a single nhentai gallery using cloudscraper to bypass Cloudflare.

    Args:
        gallery_id (str or int): The ID of the nhentai gallery.
        logger (function): A function to log progress and error messages.

    Returns:
        dict: A dictionary containing the gallery's metadata if successful, otherwise None.
    """
    api_url = f"https://nhentai.net/api/gallery/{gallery_id}"

    scraper = cloudscraper.create_scraper()

    logger(f"   Fetching nhentai gallery metadata from: {api_url}")

    try:
        # Use the scraper to make the GET request
        response = scraper.get(api_url, timeout=20)

        if response.status_code == 404:
            logger(f"   ❌ Gallery not found (404): ID {gallery_id}")
            return None

        response.raise_for_status()

        gallery_data = response.json()

        if "id" in gallery_data and "media_id" in gallery_data and "images" in gallery_data:
            logger(f"   ✅ Successfully fetched metadata for '{gallery_data['title']['english']}'")
            # Flatten the page list so callers can read gallery_data['pages'] directly.
            gallery_data['pages'] = gallery_data.pop('images')['pages']
            return gallery_data
        else:
            logger("   ❌ API response is missing essential keys (id, media_id, or images).")
            return None

    except Exception as e:
        logger(f"   ❌ An error occurred while fetching gallery {gallery_id}: {e}")
        return None
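Usage sketch (the gallery ID is arbitrary); after the flattening step above, page descriptors are available directly under gallery['pages']:

gallery = fetch_nhentai_gallery(123456, logger=print)  # placeholder ID
if gallery:
    print(gallery["title"]["english"], "-", len(gallery["pages"]), "pages")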
93
src/core/pixeldrain_client.py
Normal file
@@ -0,0 +1,93 @@
import re
import cloudscraper
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def fetch_pixeldrain_data(url: str, logger):
    """
    Scrapes a given Pixeldrain URL to extract album or file information.
    Handles single files (/u/), albums/lists (/l/), and folders (/d/).
    """
    logger(f"Fetching data for Pixeldrain URL: {url}")
    scraper = cloudscraper.create_scraper()
    root = "https://pixeldrain.com"

    # Add a robust retry strategy for flaky network conditions.
    try:
        retry_strategy = Retry(
            total=5,                                     # Total number of retries
            backoff_factor=1,                            # Wait 1s, 2s, 4s, 8s between retries
            status_forcelist=[429, 500, 502, 503, 504],  # Retry on these server errors
            allowed_methods=["HEAD", "GET"]
        )
        adapter = HTTPAdapter(max_retries=retry_strategy)
        scraper.mount("https://", adapter)
        scraper.mount("http://", adapter)
        logger("   [Pixeldrain] Configured retry strategy for network requests.")
    except Exception as e:
        logger(f"   [Pixeldrain] ⚠️ Could not configure retry strategy: {e}")

    file_match = re.search(r"/u/(\w+)", url)
    album_match = re.search(r"/l/(\w+)", url)
    folder_match = re.search(r"/d/([^?]+)", url)

    try:
        if file_match:
            file_id = file_match.group(1)
            logger(f"   Detected Pixeldrain File ID: {file_id}")
            api_url = f"{root}/api/file/{file_id}/info"
            data = scraper.get(api_url, timeout=30).json()

            title = data.get("name", file_id)

            files = [{
                'url': f"{root}/api/file/{file_id}?download",
                'filename': data.get("name", f"{file_id}.tmp")
            }]
            return title, files

        elif album_match:
            album_id = album_match.group(1)
            logger(f"   Detected Pixeldrain Album ID: {album_id}")
            api_url = f"{root}/api/list/{album_id}"
            data = scraper.get(api_url, timeout=30).json()

            title = data.get("title", album_id)

            files = []
            for file_info in data.get("files", []):
                files.append({
                    'url': f"{root}/api/file/{file_info['id']}?download",
                    'filename': file_info.get("name", f"{file_info['id']}.tmp")
                })
            return title, files

        elif folder_match:
            path_id = folder_match.group(1)
            logger(f"   Detected Pixeldrain Folder Path: {path_id}")
            api_url = f"{root}/api/filesystem/{path_id}?stat"
            data = scraper.get(api_url, timeout=30).json()

            path_info = data["path"][data["base_index"]]
            title = path_info.get("name", path_id)

            files = []
            for child in data.get("children", []):
                if child.get("type") == "file":
                    files.append({
                        'url': f"{root}/api/filesystem{child['path']}?attach",
                        'filename': child.get("name")
                    })
            return title, files

        else:
            logger("   ❌ Could not identify Pixeldrain URL type (file, album, or folder).")
            return None, []

    except Exception as e:
        logger(f"❌ An error occurred while fetching Pixeldrain data: {e}")
        return None, []
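A usage sketch covering the three URL shapes the function recognizes (all IDs are placeholders):

for url in ("https://pixeldrain.com/u/abc123",        # single file
            "https://pixeldrain.com/l/def456",        # album/list
            "https://pixeldrain.com/d/some_folder"):  # filesystem folder
    title, files = fetch_pixeldrain_data(url, print)
    print(title, "->", len(files), "file(s)")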
107
src/core/rule34video_client.py
Normal file
@@ -0,0 +1,107 @@
import cloudscraper
from bs4 import BeautifulSoup
import re
import html


def fetch_rule34video_data(video_url, logger_func):
    """
    Scrapes a rule34video.com page by specifically finding the 'Download' div,
    then selecting the best available quality link.

    Args:
        video_url (str): The full URL to the rule34video.com page.
        logger_func (callable): Function to use for logging progress.

    Returns:
        tuple: (video_title, final_video_url) or (None, None) on failure.
    """
    logger_func(f"   [Rule34Video] Fetching page: {video_url}")
    scraper = cloudscraper.create_scraper()

    try:
        main_page_response = scraper.get(video_url, timeout=20)
        main_page_response.raise_for_status()

        soup = BeautifulSoup(main_page_response.text, 'html.parser')

        page_title_tag = soup.find('title')
        video_title = page_title_tag.text.strip() if page_title_tag else "rule34video_file"

        # 1. Find the SPECIFIC "Download" label first. This is the key.
        download_label = soup.find('div', class_='label', string='Download')

        if not download_label:
            logger_func("   [Rule34Video] ❌ Could not find the 'Download' label. Unable to locate the correct links div.")
            return None, None

        # 2. The correct container is the parent of this label.
        download_div = download_label.parent

        # 3. Now, find the links ONLY within this correct container.
        link_tags = download_div.find_all('a', class_='tag_item')
        if not link_tags:
            logger_func("   [Rule34Video] ❌ Found the 'Download' div, but no download links were inside it.")
            return None, None

        links_by_quality = {}
        quality_pattern = re.compile(r'(\d+p|4k)')

        for tag in link_tags:
            href = tag.get('href')
            if not href:
                continue

            quality = None
            text_match = quality_pattern.search(tag.text)
            if text_match:
                quality = text_match.group(1)
            else:
                href_match = quality_pattern.search(href)
                if href_match:
                    quality = href_match.group(1)

            if quality:
                links_by_quality[quality] = href

        if not links_by_quality:
            logger_func("   [Rule34Video] ⚠️ Could not parse specific qualities. Using first available link as a fallback.")
            final_video_url = link_tags[0].get('href')
            if not final_video_url:
                logger_func("   [Rule34Video] ❌ Fallback failed: First link tag had no href attribute.")
                return None, None

            final_video_url = html.unescape(final_video_url)
            logger_func(f"   [Rule34Video] ✅ Selected first available link as fallback: {final_video_url}")
            return video_title, final_video_url

        logger_func(f"   [Rule34Video] Found available qualities: {list(links_by_quality.keys())}")

        final_video_url = None
        if '1080p' in links_by_quality:
            final_video_url = links_by_quality['1080p']
            logger_func("   [Rule34Video] ✅ Selected preferred 1080p link.")
        elif '720p' in links_by_quality:
            final_video_url = links_by_quality['720p']
            logger_func("   [Rule34Video] ✅ 1080p not found. Selected fallback 720p link.")
        else:
            fallback_order = ['480p', '360p']
            for quality in fallback_order:
                if quality in links_by_quality:
                    final_video_url = links_by_quality[quality]
                    logger_func(f"   [Rule34Video] ⚠️ 1080p/720p not found. Selected best available fallback: {quality}")
                    break

        if not final_video_url:
            logger_func("   [Rule34Video] ❌ Could not find a suitable download link.")
            return None, None

        final_video_url = html.unescape(final_video_url)
        logger_func(f"   [Rule34Video] ✅ Selected direct download URL: {final_video_url}")

        return video_title, final_video_url

    except Exception as e:
        logger_func(f"   [Rule34Video] ❌ An error occurred: {e}")
        return None, None
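Usage sketch (placeholder URL); unlike the album clients, this function resolves to a single direct media URL rather than a file list:

title, direct_url = fetch_rule34video_data("https://rule34video.com/video/12345/", print)  # placeholder URL
if direct_url:
    print(f"{title}: {direct_url}")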
163
src/core/saint2_client.py
Normal file
@@ -0,0 +1,163 @@
import re as re_module
import html
import urllib.parse
import requests


PATTERN_CACHE = {}


def re(pattern):
    """Compile a regular expression pattern and cache it."""
    try:
        return PATTERN_CACHE[pattern]
    except KeyError:
        p = PATTERN_CACHE[pattern] = re_module.compile(pattern)
        return p


def extract_from(txt, pos=None, default=""):
    """Returns a function that extracts text between two delimiters from 'txt'."""
    def extr(begin, end, index=txt.find, txt=txt):
        nonlocal pos
        try:
            start_pos = pos if pos is not None else 0
            first = index(begin, start_pos) + len(begin)
            last = index(end, first)
            if pos is not None:
                pos = last + len(end)
            return txt[first:last]
        except (ValueError, IndexError):
            return default
    return extr


def nameext_from_url(url):
    """Extract filename and extension from a URL."""
    data = {}
    filename = urllib.parse.unquote(url.partition("?")[0].rpartition("/")[2])
    name, _, ext = filename.rpartition(".")
    if name and len(ext) <= 16:
        data["filename"], data["extension"] = name, ext.lower()
    else:
        data["filename"], data["extension"] = filename, ""
    return data


class BaseExtractor:
    """A simplified base class for extractors."""
    def __init__(self, match, session, logger):
        self.match = match
        self.groups = match.groups()
        self.session = session
        self.log = logger

    def request(self, url, **kwargs):
        """Makes an HTTP request using the session."""
        try:
            response = self.session.get(url, **kwargs)
            response.raise_for_status()
            return response
        except requests.exceptions.RequestException as e:
            self.log(f"Error making request to {url}: {e}")
            return None


class SaintAlbumExtractor(BaseExtractor):
    """Extractor for saint.su albums."""
    root = "https://saint2.su"
    pattern = re(r"(?:https?://)?saint\d*\.(?:su|pk|cr|to)/a/([^/?#]+)")

    def items(self):
        """Returns the album title and a list of all files in the album."""
        album_id = self.groups[0]
        response = self.request(f"{self.root}/a/{album_id}")
        if not response:
            return None, []

        extr = extract_from(response.text)
        title = extr("<title>", "<").rpartition(" - ")[0]
        self.log(f"Downloading album: {title}")

        files_html = re_module.findall(r'<a class="image".*?</a>', response.text, re_module.DOTALL)
        file_list = []
        for i, file_html in enumerate(files_html, 1):
            file_extr = extract_from(file_html)
            file_url = html.unescape(file_extr("onclick=\"play('", "'"))
            if not file_url:
                continue

            filename_info = nameext_from_url(file_url)
            filename = f"{filename_info['filename']}.{filename_info['extension']}"

            file_data = {
                "url": file_url,
                "filename": filename,
                "headers": {"Referer": response.url},
            }
            file_list.append(file_data)

        return title, file_list


class SaintMediaExtractor(BaseExtractor):
    """Extractor for single saint.su media links."""
    root = "https://saint2.su"
    pattern = re(r"(?:https?://)?saint\d*\.(?:su|pk|cr|to)(/(embe)?d/([^/?#]+))")

    def items(self):
        """Returns the page title and the single file from a media page."""
        path, embed, media_id = self.groups
        url = self.root + path
        response = self.request(url)
        if not response:
            return None, []

        extr = extract_from(response.text)
        file_url = ""
        title = extr("<title>", "<").rpartition(" - ")[0] or media_id

        if embed:  # /embed/ link
            file_url = html.unescape(extr('<source src="', '"'))
        else:  # /d/ link
            file_url = html.unescape(extr('<a href="', '"'))

        if not file_url:
            self.log("Could not find video URL on the page.")
            return title, []

        filename_info = nameext_from_url(file_url)
        filename = f"{filename_info['filename'] or media_id}.{filename_info['extension'] or 'mp4'}"

        file_data = {
            "url": file_url,
            "filename": filename,
            "headers": {"Referer": response.url}
        }

        return title, [file_data]


def fetch_saint2_data(url, logger):
    """
    Identifies the correct extractor for a saint2.su URL and returns the data.

    Args:
        url (str): The saint2.su URL.
        logger (function): A function to log progress messages.

    Returns:
        tuple: A tuple containing (album_title, list_of_file_dicts).
               Returns (None, []) if no data could be fetched.
    """
    extractors = [SaintMediaExtractor, SaintAlbumExtractor]
    session = requests.Session()
    session.headers.update({
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
    })

    for extractor_cls in extractors:
        match = extractor_cls.pattern.match(url)
        if match:
            extractor = extractor_cls(match, session, logger)
            album_title, files = extractor.items()
            sanitized_title = re_module.sub(r'[<>:"/\\|?*]', '_', album_title) if album_title else "saint2_download"
            return sanitized_title, files

    logger(f"Error: The URL '{url}' does not match a known saint2 pattern.")
    return None, []
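Usage sketch; album (/a/) and single-media (/d/ or /embed/) links both route through the same entry point, which picks the matching extractor:

title, files = fetch_saint2_data("https://saint2.su/a/abc123", print)  # placeholder album ID
for f in files:
    print(f["filename"], "<-", f["url"])  # send f["headers"] (Referer) with the request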
102
src/core/simpcity_client.py
Normal file
@@ -0,0 +1,102 @@
# src/core/simpcity_client.py

import cloudscraper
from bs4 import BeautifulSoup
from urllib.parse import urlparse, unquote
import os
import re
import urllib.parse


def fetch_single_simpcity_page(url, logger_func, cookies=None, post_id=None):
    """
    Scrapes a single SimpCity page for images, external links, video tags, and iframes.
    """
    scraper = cloudscraper.create_scraper()
    headers = {'Referer': 'https://simpcity.cr/'}

    try:
        response = scraper.get(url, timeout=30, headers=headers, cookies=cookies)
        final_url = response.url  # Capture the final URL after any redirects

        if response.status_code == 404:
            return None, [], final_url
        response.raise_for_status()
        soup = BeautifulSoup(response.text, 'html.parser')

        album_title = None
        title_element = soup.find('h1', class_='p-title-value')
        if title_element:
            album_title = title_element.text.strip()

        search_scope = soup
        if post_id:
            post_content_container = soup.find('div', attrs={'data-lb-id': f'post-{post_id}'})
            if post_content_container:
                logger_func(f"   [SimpCity] ✅ Isolating search to post content container for ID {post_id}.")
                search_scope = post_content_container
            else:
                logger_func(f"   [SimpCity] ⚠️ Could not find content container for post ID {post_id}.")

        jobs_on_page = []

        # Find native SimpCity images
        image_tags = search_scope.find_all('img', class_='bbImage')
        for img_tag in image_tags:
            thumbnail_url = img_tag.get('src')
            if not thumbnail_url or not isinstance(thumbnail_url, str) or 'saint2.su' in thumbnail_url:
                continue
            full_url = thumbnail_url.replace('.md.', '.')
            filename = img_tag.get('alt', '').replace('.md.', '.') or os.path.basename(unquote(urlparse(full_url).path))
            jobs_on_page.append({'type': 'image', 'filename': filename, 'url': full_url})

        # Find links in <a> tags, with redirect handling
        link_tags = search_scope.find_all('a', href=True)
        for link in link_tags:
            href = link.get('href', '')

            actual_url = href
            if '/misc/goto?url=' in href:
                try:
                    # Extract and decode the real URL from the 'url' parameter
                    parsed_href = urlparse(href)
                    query_params = dict(urllib.parse.parse_qsl(parsed_href.query))
                    if 'url' in query_params:
                        actual_url = unquote(query_params['url'])
                except Exception:
                    actual_url = href  # Fallback if parsing fails

            # Perform all checks on 'actual_url', which is now the real destination
            if re.search(r'pixeldrain\.com/[lud]/', actual_url):
                jobs_on_page.append({'type': 'pixeldrain', 'url': actual_url})
            elif re.search(r'saint2\.(su|pk|cr|to)/embed/', actual_url):
                jobs_on_page.append({'type': 'saint2', 'url': actual_url})
            elif re.search(r'bunkr\.(?:cr|si|la|ws|is|ru|su|red|black|media|site|to|ac|ci|fi|pk|ps|sk|ph)|bunkrr\.ru', actual_url):
                jobs_on_page.append({'type': 'bunkr', 'url': actual_url})
            elif re.search(r'mega\.(nz|io)', actual_url):
                jobs_on_page.append({'type': 'mega', 'url': actual_url})
            elif re.search(r'gofile\.io', actual_url):
                jobs_on_page.append({'type': 'gofile', 'url': actual_url})

        # Find direct Saint2 video embeds in <video> tags
        video_tags = search_scope.find_all('video')
        for video in video_tags:
            source_tag = video.find('source')
            if source_tag and source_tag.get('src'):
                src_url = source_tag['src']
                if re.search(r'saint2\.(su|pk|cr|to)', src_url):
                    jobs_on_page.append({'type': 'saint2_direct', 'url': src_url})

        # Find embeds in <iframe> tags (as a fallback)
        iframe_tags = search_scope.find_all('iframe')
        for iframe in iframe_tags:
            src_url = iframe.get('src')
            if src_url and isinstance(src_url, str):
                if re.search(r'saint2\.(su|pk|cr|to)/embed/', src_url):
                    jobs_on_page.append({'type': 'saint2', 'url': src_url})

        if jobs_on_page:
            # Use a dict keyed by URL to drop duplicates found via multiple routes
            unique_jobs = list({job['url']: job for job in jobs_on_page}.values())
            logger_func(f"   [SimpCity] Scraper found jobs: {[job['type'] for job in unique_jobs]}")
            return album_title, unique_jobs, final_url

        return album_title, [], final_url

    except Exception as e:
        logger_func(f"   [SimpCity] ❌ Error fetching page {url}: {e}")
        raise
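Usage sketch (placeholder thread URL). Note that, unlike the other clients, this function re-raises on fetch errors instead of returning an empty result, so callers should wrap it:

try:
    title, jobs, final_url = fetch_single_simpcity_page("https://simpcity.cr/threads/example.123/", print)
    print(title, "->", [job["type"] for job in jobs])
except Exception as e:
    print(f"Page fetch failed: {e}")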
73
src/core/toonily_client.py
Normal file
@@ -0,0 +1,73 @@
import cloudscraper
from bs4 import BeautifulSoup


def get_chapter_list(series_url, logger_func):
    logger_func(f"   [Toonily] Scraping series page for chapter list: {series_url}")
    scraper = cloudscraper.create_scraper()
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36',
        'Referer': 'https://toonily.com/'
    }

    try:
        response = scraper.get(series_url, timeout=30, headers=headers)
        response.raise_for_status()
        soup = BeautifulSoup(response.content, 'html.parser')
        chapter_links = soup.select('li.wp-manga-chapter > a')

        if not chapter_links:
            logger_func("   [Toonily] ❌ Could not find any chapter links on the page.")
            return []

        urls = [link['href'] for link in chapter_links]
        urls.reverse()  # The page lists newest first; reverse to reading order.
        logger_func(f"   [Toonily] Found {len(urls)} chapters.")
        return urls

    except Exception as e:
        logger_func(f"   [Toonily] ❌ Error getting chapter list: {e}")
        return []


def fetch_chapter_data(chapter_url, logger_func, scraper_session):
    """
    Scrapes a single Toonily.com chapter page for its title and image URLs.
    """
    main_series_url = chapter_url.rsplit('/', 2)[0] + '/'
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7',
        'Accept-Language': 'en-US,en;q=0.9',
        'Referer': main_series_url
    }

    try:
        response = scraper_session.get(chapter_url, timeout=30, headers=headers)
        response.raise_for_status()
        soup = BeautifulSoup(response.content, 'html.parser')

        title_element = soup.select_one('h1#chapter-heading')
        image_container = soup.select_one('div.reading-content')

        if not title_element or not image_container:
            logger_func("   [Toonily] ❌ Page structure invalid. Could not find title or image container.")
            return None, None, []

        full_chapter_title = title_element.text.strip()

        if " - Chapter" in full_chapter_title:
            series_title = full_chapter_title.split(" - Chapter")[0].strip()
        else:
            series_title = full_chapter_title.strip()

        chapter_title = full_chapter_title  # The full string is best for the chapter folder name

        image_elements = image_container.select('img')
        image_urls = [img.get('data-src', img.get('src')).strip() for img in image_elements if img.get('data-src') or img.get('src')]

        return series_title, chapter_title, image_urls

    except Exception as e:
        logger_func(f"   [Toonily] ❌ An error occurred scraping chapter '{chapter_url}': {e}")
        return None, None, []
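Usage sketch chaining the two helpers with a shared cloudscraper session (placeholder series URL):

import cloudscraper

chapters = get_chapter_list("https://toonily.com/webtoon/example-series/", print)  # placeholder URL
session = cloudscraper.create_scraper()
for chapter_url in chapters[:1]:  # first chapter only, as a demo
    series, chapter, images = fetch_chapter_data(chapter_url, print, session)
    print(series, "/", chapter, "-", len(images), "image(s)")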
1530
src/core/workers.py
File diff suppressed because it is too large
File diff suppressed because one or more lines are too long
@@ -5,11 +5,22 @@ import traceback
import json
import base64
import time
import zipfile
import struct
import sys
import io
import hashlib
from contextlib import redirect_stdout
from urllib.parse import urlparse, urlunparse, parse_qs, urlencode
from concurrent.futures import ThreadPoolExecutor, as_completed
from threading import Lock

# --- Third-Party Library Imports ---
# Make sure to install these: pip install requests pycryptodome gdown
# --- Third-party Library Imports ---
import requests
import cloudscraper
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from ..utils.file_utils import clean_folder_name

try:
    from Crypto.Cipher import AES
@@ -23,249 +34,672 @@ try:
except ImportError:
    GDRIVE_AVAILABLE = False

# --- Constants ---
MEGA_API_URL = "https://g.api.mega.co.nz"

# --- Helper Functions (Original and New) ---
MIN_SIZE_FOR_MULTIPART_MEGA = 20 * 1024 * 1024  # 20 MB
NUM_PARTS_FOR_MEGA = 5

def _get_filename_from_headers(headers):
    """
    Extracts a filename from the Content-Disposition header.
    (This is from your original file and is kept for Dropbox downloads.)
    """
    cd = headers.get('content-disposition')
    if not cd:
        return None

    fname_match = re.findall('filename="?([^"]+)"?', cd)
    if fname_match:
        sanitized_name = re.sub(r'[<>:"/\\|?*]', '_', fname_match[0].strip())
        return sanitized_name

    return None

# --- NEW: Helper functions for Mega decryption ---

def urlb64_to_b64(s):
    """Converts a URL-safe base64 string to a standard base64 string."""
    s = s.replace('-', '+').replace('_', '/')
    s += '=' * (-len(s) % 4)
    return s
    return s.replace('-', '+').replace('_', '/')

def b64_to_bytes(s):
    """Decodes a URL-safe base64 string to bytes."""
    return base64.b64decode(urlb64_to_b64(s))

def bytes_to_hex(b):
    """Converts bytes to a hex string."""
    return b.hex()

def bytes_to_b64(b):
    return base64.b64encode(b).decode('utf-8')

def hex_to_bytes(h):
    """Converts a hex string to bytes."""
    return bytes.fromhex(h)

def hrk2hk(hex_raw_key):
    """Derives the final AES key from the raw key components for Mega."""
    key_part1 = int(hex_raw_key[0:16], 16)
    key_part2 = int(hex_raw_key[16:32], 16)
    key_part3 = int(hex_raw_key[32:48], 16)
    key_part4 = int(hex_raw_key[48:64], 16)

    final_key_part1 = key_part1 ^ key_part3
    final_key_part2 = key_part2 ^ key_part4

    return f'{final_key_part1:016x}{final_key_part2:016x}'

def decrypt_at(at_b64, key_bytes):
    """Decrypts the 'at' attribute to get file metadata."""
    at_bytes = b64_to_bytes(at_b64)
    iv = b'\0' * 16
    cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
    decrypted_at = cipher.decrypt(at_bytes)
    return decrypted_at.decode('utf-8').strip('\0').replace('MEGA', '')
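The helpers above implement Mega's key fold: the 256-bit raw node key is split into two 128-bit halves that are XORed together to produce the 128-bit AES key (the upper half also supplies the CTR IV material). A worked example using hrk2hk from above; the hex value is fabricated purely for illustration:

hex_raw_key = "000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f"  # made-up key
print(hrk2hk(hex_raw_key))
# -> 10101010101010101010101010101010
# Each 64-bit word of the first half is XORed with the matching word of the
# second half (e.g. 0001020304050607 ^ 1011121314151617 = 1010101010101010).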
# --- NEW: Core Logic for Mega Downloads ---
|
||||
|
||||
def get_mega_file_info(file_id, file_key, session, logger_func):
|
||||
"""Fetches file metadata and the temporary download URL from the Mega API."""
|
||||
def _decrypt_mega_attribute(encrypted_attr_b64, key_bytes):
|
||||
try:
|
||||
hex_raw_key = bytes_to_hex(b64_to_bytes(file_key))
|
||||
hex_key = hrk2hk(hex_raw_key)
|
||||
key_bytes = hex_to_bytes(hex_key)
|
||||
attr_bytes = b64_to_bytes(encrypted_attr_b64)
|
||||
padded_len = (len(attr_bytes) + 15) & ~15
|
||||
padded_attr_bytes = attr_bytes.ljust(padded_len, b'\0')
|
||||
iv = b'\0' * 16
|
||||
cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
|
||||
decrypted_attr = cipher.decrypt(padded_attr_bytes)
|
||||
json_str = decrypted_attr.strip(b'\0').decode('utf-8')
|
||||
if json_str.startswith('MEGA'):
|
||||
return json.loads(json_str[4:])
|
||||
return json.loads(json_str)
|
||||
except Exception:
|
||||
return {}
|
||||
|
||||
# Request file attributes
|
||||
payload = [{"a": "g", "p": file_id}]
|
||||
response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
|
||||
response.raise_for_status()
|
||||
res_json = response.json()
|
||||
def _decrypt_mega_key(encrypted_key_b64, master_key_bytes):
|
||||
key_bytes = b64_to_bytes(encrypted_key_b64)
|
||||
iv = b'\0' * 16
|
||||
cipher = AES.new(master_key_bytes, AES.MODE_ECB)
|
||||
return cipher.decrypt(key_bytes)
|
||||
|
||||
if isinstance(res_json, list) and isinstance(res_json[0], int) and res_json[0] < 0:
|
||||
logger_func(f" [Mega] ❌ API Error: {res_json[0]}. The link may be invalid or removed.")
|
||||
return None
|
||||
def _parse_mega_key(key_b64):
|
||||
key_bytes = b64_to_bytes(key_b64)
|
||||
key_parts = struct.unpack('>' + 'I' * (len(key_bytes) // 4), key_bytes)
|
||||
if len(key_parts) == 8:
|
||||
final_key = (key_parts[0] ^ key_parts[4], key_parts[1] ^ key_parts[5], key_parts[2] ^ key_parts[6], key_parts[3] ^ key_parts[7])
|
||||
iv = (key_parts[4], key_parts[5], 0, 0)
|
||||
key_bytes = struct.pack('>' + 'I' * 4, *final_key)
|
||||
iv_bytes = struct.pack('>' + 'I' * 4, *iv)
|
||||
return key_bytes, iv_bytes, None
|
||||
elif len(key_parts) == 4:
|
||||
return key_bytes, None, None
|
||||
raise ValueError("Invalid Mega key length")
|
||||
|
||||
file_size = res_json[0]['s']
|
||||
at_b64 = res_json[0]['at']
|
||||
def _process_file_key(file_key_bytes):
|
||||
key_parts = struct.unpack('>' + 'I' * 8, file_key_bytes)
|
||||
final_key_parts = (key_parts[0] ^ key_parts[4], key_parts[1] ^ key_parts[5], key_parts[2] ^ key_parts[6], key_parts[3] ^ key_parts[7])
|
||||
return struct.pack('>' + 'I' * 4, *final_key_parts)
|
||||
|
||||
# Decrypt attributes to get the file name
|
||||
at_dec_json_str = decrypt_at(at_b64, key_bytes)
|
||||
at_dec_json = json.loads(at_dec_json_str)
|
||||
file_name = at_dec_json['n']
|
||||
def _download_and_decrypt_chunk(args):
|
||||
url, temp_path, start_byte, end_byte, key, nonce, part_num, progress_data, progress_callback_func, file_name, cancellation_event, pause_event = args
|
||||
try:
|
||||
headers = {'Range': f'bytes={start_byte}-{end_byte}'}
|
||||
initial_counter = start_byte // 16
|
||||
cipher = AES.new(key, AES.MODE_CTR, nonce=nonce, initial_value=initial_counter)
|
||||
|
||||
with requests.get(url, headers=headers, stream=True, timeout=(15, 300)) as r:
|
||||
r.raise_for_status()
|
||||
with open(temp_path, 'wb') as f:
|
||||
for chunk in r.iter_content(chunk_size=8192):
|
||||
if cancellation_event and cancellation_event.is_set():
|
||||
return False
|
||||
while pause_event and pause_event.is_set():
|
||||
time.sleep(0.5)
|
||||
if cancellation_event and cancellation_event.is_set():
|
||||
return False
|
||||
|
||||
# Request the temporary download URL
|
||||
payload = [{"a": "g", "g": 1, "p": file_id}]
|
||||
response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
|
||||
response.raise_for_status()
|
||||
res_json = response.json()
|
||||
dl_temp_url = res_json[0]['g']
|
||||
decrypted_chunk = cipher.decrypt(chunk)
|
||||
f.write(decrypted_chunk)
|
||||
with progress_data['lock']:
|
||||
progress_data['downloaded'] += len(chunk)
|
||||
if progress_callback_func and (time.time() - progress_data['last_update'] > 1):
|
||||
progress_callback_func(file_name, (progress_data['downloaded'], progress_data['total_size']))
|
||||
progress_data['last_update'] = time.time()
|
||||
return True
|
||||
except Exception as e:
|
||||
return False
|
||||
|
||||
return {
|
||||
'file_name': file_name,
|
||||
'file_size': file_size,
|
||||
'dl_url': dl_temp_url,
|
||||
'hex_raw_key': hex_raw_key
|
||||
}
|
||||
except (requests.RequestException, json.JSONDecodeError, KeyError, ValueError) as e:
|
||||
logger_func(f" [Mega] ❌ Failed to get file info: {e}")
|
||||
return None
|
||||
|
||||
def download_and_decrypt_mega_file(info, download_path, logger_func):
|
||||
"""Downloads the file and decrypts it chunk by chunk, reporting progress."""
|
||||
def download_and_decrypt_mega_file(info, download_path, logger_func, progress_callback_func=None, cancellation_event=None, pause_event=None):
|
||||
file_name = info['file_name']
|
||||
file_size = info['file_size']
|
||||
dl_url = info['dl_url']
|
||||
hex_raw_key = info['hex_raw_key']
|
||||
|
||||
final_path = os.path.join(download_path, file_name)
|
||||
|
||||
if os.path.exists(final_path) and os.path.getsize(final_path) == file_size:
|
||||
logger_func(f" [Mega] ℹ️ File '{file_name}' already exists with the correct size. Skipping.")
|
||||
return
|
||||
|
||||
# Prepare for decryption
|
||||
key = hex_to_bytes(hrk2hk(hex_raw_key))
|
||||
iv_hex = hex_raw_key[32:48] + '0000000000000000'
|
||||
iv_bytes = hex_to_bytes(iv_hex)
|
||||
cipher = AES.new(key, AES.MODE_CTR, initial_value=iv_bytes, nonce=b'')
|
||||
os.makedirs(download_path, exist_ok=True)
|
||||
key, iv, _ = _parse_mega_key(urlb64_to_b64(info['file_key']))
|
||||
nonce = iv[:8]
|
||||
|
||||
# Check for cancellation before starting
|
||||
if cancellation_event and cancellation_event.is_set():
|
||||
logger_func(f" [Mega] Download for '{file_name}' cancelled before starting.")
|
||||
return
|
||||
|
||||
try:
|
||||
with requests.get(dl_url, stream=True, timeout=(15, 300)) as r:
|
||||
r.raise_for_status()
|
||||
downloaded_bytes = 0
|
||||
last_log_time = time.time()
|
||||
|
||||
# i tried to make the mega download multipart for big file but it didnt work you can try if you can fix this to make it multipart replace "if true" with this "if file_size < MIN_SIZE_FOR_MULTIPART_MEGA:" to activate multipart
|
||||
if True:
|
||||
logger_func(f" [Mega] Downloading '{file_name}' (Single Stream)...")
|
||||
try:
|
||||
cipher = AES.new(key, AES.MODE_CTR, nonce=nonce, initial_value=0)
|
||||
with requests.get(dl_url, stream=True, timeout=(15, 300)) as r:
|
||||
r.raise_for_status()
|
||||
downloaded_bytes = 0
|
||||
last_update_time = time.time()
|
||||
with open(final_path, 'wb') as f:
|
||||
for chunk in r.iter_content(chunk_size=8192):
|
||||
if cancellation_event and cancellation_event.is_set():
|
||||
break
|
||||
while pause_event and pause_event.is_set():
|
||||
time.sleep(0.5)
|
||||
if cancellation_event and cancellation_event.is_set():
|
||||
break
|
||||
if cancellation_event and cancellation_event.is_set():
|
||||
break
|
||||
|
||||
decrypted_chunk = cipher.decrypt(chunk)
|
||||
f.write(decrypted_chunk)
|
||||
downloaded_bytes += len(chunk)
|
||||
current_time = time.time()
|
||||
if current_time - last_update_time > 1:
|
||||
if progress_callback_func:
|
||||
progress_callback_func(file_name, (downloaded_bytes, file_size))
|
||||
last_update_time = time.time()
|
||||
|
||||
with open(final_path, 'wb') as f:
|
||||
for chunk in r.iter_content(chunk_size=8192):
|
||||
if not chunk:
|
||||
continue
|
||||
decrypted_chunk = cipher.decrypt(chunk)
|
||||
f.write(decrypted_chunk)
|
||||
downloaded_bytes += len(chunk)
|
||||
|
||||
# Log progress every second
|
||||
current_time = time.time()
|
||||
if current_time - last_log_time > 1:
|
||||
progress_percent = (downloaded_bytes / file_size) * 100 if file_size > 0 else 0
|
||||
logger_func(f" [Mega] Downloading '{file_name}': {downloaded_bytes/1024/1024:.2f}MB / {file_size/1024/1024:.2f}MB ({progress_percent:.1f}%)")
|
||||
last_log_time = current_time
|
||||
if cancellation_event and cancellation_event.is_set():
|
||||
logger_func(f" [Mega] ❌ Download cancelled for '{file_name}'. Deleting partial file.")
|
||||
if os.path.exists(final_path): os.remove(final_path)
|
||||
else:
|
||||
logger_func(f" [Mega] ✅ Successfully downloaded '{file_name}'")
|
||||
|
||||
except Exception as e:
|
||||
logger_func(f" [Mega] ❌ Download failed for '{file_name}': {e}")
|
||||
if os.path.exists(final_path): os.remove(final_path)
|
||||
else:
|
||||
logger_func(f" [Mega] Downloading '{file_name}' ({NUM_PARTS_FOR_MEGA} Parts)...")
|
||||
chunk_size = file_size // NUM_PARTS_FOR_MEGA
|
||||
chunks = []
|
||||
for i in range(NUM_PARTS_FOR_MEGA):
|
||||
start = i * chunk_size
|
||||
end = start + chunk_size - 1 if i < NUM_PARTS_FOR_MEGA - 1 else file_size - 1
|
||||
chunks.append((start, end))
|
||||
|
||||
progress_data = {'downloaded': 0, 'total_size': file_size, 'lock': Lock(), 'last_update': time.time()}
|
||||
|
||||
logger_func(f" [Mega] ✅ Successfully downloaded '{file_name}' to '{download_path}'")
|
||||
except requests.RequestException as e:
|
||||
logger_func(f" [Mega] ❌ Download failed for '{file_name}': {e}")
|
||||
except IOError as e:
|
||||
logger_func(f" [Mega] ❌ Could not write to file '{final_path}': {e}")
|
||||
tasks = []
|
||||
for i, (start, end) in enumerate(chunks):
|
||||
temp_path = f"{final_path}.part{i}"
|
||||
tasks.append((dl_url, temp_path, start, end, key, nonce, i, progress_data, progress_callback_func, file_name, cancellation_event, pause_event))
|
||||
|
||||
all_parts_successful = True
|
||||
with ThreadPoolExecutor(max_workers=NUM_PARTS_FOR_MEGA) as executor:
|
||||
if cancellation_event and cancellation_event.is_set():
|
||||
executor.shutdown(wait=False, cancel_futures=True)
|
||||
all_parts_successful = False
|
||||
else:
|
||||
results = executor.map(_download_and_decrypt_chunk, tasks)
|
||||
for result in results:
|
||||
if not result:
|
||||
all_parts_successful = False
|
||||
|
||||
# Check for cancellation after threads finish/are cancelled
|
||||
if cancellation_event and cancellation_event.is_set():
|
||||
all_parts_successful = False
|
||||
logger_func(f" [Mega] ❌ Multipart download cancelled for '{file_name}'.")
|
||||
|
||||
if all_parts_successful:
|
||||
logger_func(f" [Mega] All parts for '{file_name}' downloaded. Assembling file...")
|
||||
try:
|
||||
with open(final_path, 'wb') as f_out:
|
||||
for i in range(NUM_PARTS_FOR_MEGA):
|
||||
part_path = f"{final_path}.part{i}"
|
||||
with open(part_path, 'rb') as f_in:
|
||||
f_out.write(f_in.read())
|
||||
os.remove(part_path)
|
||||
logger_func(f" [Mega] ✅ Successfully downloaded and assembled '{file_name}'")
|
||||
except Exception as e:
|
||||
logger_func(f" [Mega] ❌ File assembly failed for '{file_name}': {e}")
|
||||
else:
|
||||
logger_func(f" [Mega] ❌ Multipart download failed or was cancelled for '{file_name}'. Cleaning up partial files.")
|
||||
for i in range(NUM_PARTS_FOR_MEGA):
|
||||
part_path = f"{final_path}.part{i}"
|
||||
if os.path.exists(part_path):
|
||||
os.remove(part_path)

def _process_mega_folder(folder_id, folder_key, session, logger_func):
    try:
        master_key_bytes, _, _ = _parse_mega_key(folder_key)
        payload = [{"a": "f", "c": 1, "r": 1}]
        params = {'n': folder_id}
        response = session.post(f"{MEGA_API_URL}/cs", params=params, json=payload, timeout=30)
        response.raise_for_status()
        res_json = response.json()

        if isinstance(res_json, int) or (isinstance(res_json, list) and res_json and isinstance(res_json[0], int)):
            error_code = res_json if isinstance(res_json, int) else res_json[0]
            logger_func(f" [Mega Folder] ❌ API returned error code: {error_code}. The folder may be invalid or removed.")
            return None, None
        if not isinstance(res_json, list) or not res_json or not isinstance(res_json[0], dict) or 'f' not in res_json[0]:
            logger_func(f" [Mega Folder] ❌ Invalid folder data received: {str(res_json)[:200]}")
            return None, None

        nodes = res_json[0]['f']
        decrypted_nodes = {}
        for node in nodes:
            try:
                encrypted_key_b64 = node['k'].split(':')[-1]
                decrypted_key_raw = _decrypt_mega_key(encrypted_key_b64, master_key_bytes)

                attr_key = _process_file_key(decrypted_key_raw) if node.get('t') == 0 else decrypted_key_raw
                attributes = _decrypt_mega_attribute(node['a'], attr_key)
                name = re.sub(r'[<>:"/\\|?*]', '_', attributes.get('n', f"unknown_{node['h']}"))

                decrypted_nodes[node['h']] = {"name": name, "parent": node.get('p'), "type": node.get('t'), "size": node.get('s'), "raw_key_b64": urlb64_to_b64(bytes_to_b64(decrypted_key_raw))}
            except Exception as e:
                logger_func(f" [Mega Folder] ⚠️ Could not process node {node.get('h')}: {e}")

        root_name = decrypted_nodes.get(folder_id, {}).get("name", "Mega_Folder")
        files_to_download = []
        for handle, node_info in decrypted_nodes.items():
            if node_info.get("type") == 0:
                path_parts = [node_info['name']]
                current_parent_id = node_info.get('parent')
                while current_parent_id in decrypted_nodes:
                    parent_node = decrypted_nodes[current_parent_id]
                    path_parts.insert(0, parent_node['name'])
                    current_parent_id = parent_node.get('parent')
                    if current_parent_id == folder_id:
                        break
                files_to_download.append({'h': handle, 's': node_info['size'], 'key': node_info['raw_key_b64'], 'relative_path': os.path.join(*path_parts)})

        return root_name, files_to_download
    except Exception as e:
        logger_func(f" [Mega Folder] ❌ Failed to get folder info: {e}")
        return None, None
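
# The folder listing ("a": "f") returns a flat array of nodes, each carrying its
# parent handle, which is why relative paths above are rebuilt by walking parent
# links up to the share root. Usage sketch (handle and key are illustrative):
#
#   root, files = _process_mega_folder("AbCdEfGh", "sOmEsHaReDkEy",
#                                      requests.Session(), print)
#   if files:
#       print(root, files[0]['relative_path'], files[0]['s'])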

# --- REPLACEMENT Main Service Downloader Function for Mega ---

def download_mega_file(mega_url, download_path, logger_func=print, progress_callback_func=None, overall_progress_callback=None, cancellation_event=None, pause_event=None):
    """
    Downloads a file or folder from a Mega.nz URL using direct requests and decryption.
    This replaces the old mega.py implementation.
    """
    if not PYCRYPTODOME_AVAILABLE:
        logger_func("❌ Mega download failed: 'pycryptodome' library is not installed. Please run: pip install pycryptodome")
        if overall_progress_callback: overall_progress_callback(1, 1)
        return

    logger_func(f" [Mega] Initializing download for: {mega_url}")

    # Regexes to capture the ID and key from both old and new URL formats
    folder_match = re.search(r'mega(?:\.co)?\.nz/folder/([a-zA-Z0-9]+)#([a-zA-Z0-9_.-]+)', mega_url)
    file_match = re.search(r'mega(?:\.co)?\.nz/(?:file/|#!)?([a-zA-Z0-9]+)(?:#|!)([a-zA-Z0-9_.-]+)', mega_url)
    session = requests.Session()
    session.headers.update({'User-Agent': 'Kemono-Downloader-PyQt/1.0'})
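
    # Link shapes these regexes are meant to cover (IDs/keys illustrative):
    #   https://mega.nz/folder/AbCdEfGh#sOmEkEy   -> folder_match
    #   https://mega.nz/file/AbCdEfGh#sOmEkEy     -> file_match (current format)
    #   https://mega.nz/#!AbCdEfGh!sOmEkEy        -> file_match (legacy format)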
    if folder_match:
        folder_id, folder_key = folder_match.groups()
        logger_func(f" [Mega] Folder link detected. Starting crawl...")
        root_folder_name, files = _process_mega_folder(folder_id, folder_key, session, logger_func)

        if root_folder_name is None or files is None:
            logger_func(" [Mega Folder] ❌ Crawling failed. Aborting.")
            if overall_progress_callback: overall_progress_callback(1, 1)
            return

        if not files:
            logger_func(" [Mega Folder] ℹ️ Folder is empty. Nothing to download.")
            if overall_progress_callback: overall_progress_callback(0, 0)
            return

        logger_func(" [Mega Folder] Prioritizing largest files first...")
        files.sort(key=lambda f: f.get('s', 0), reverse=True)

        total_files = len(files)
        logger_func(f" [Mega Folder] ✅ Crawl complete. Found {total_files} file(s) in folder '{root_folder_name}'.")

        if overall_progress_callback: overall_progress_callback(total_files, 0)
        folder_download_path = os.path.join(download_path, root_folder_name)
        os.makedirs(folder_download_path, exist_ok=True)

        progress_lock = Lock()
        processed_count = 0
        MAX_WORKERS = 3

        logger_func(f" [Mega Folder] Starting concurrent download with up to {MAX_WORKERS} workers...")

        def _download_worker(file_data):
            nonlocal processed_count
            try:
                if cancellation_event and cancellation_event.is_set():
                    return

                params = {'n': folder_id}
                payload = [{"a": "g", "g": 1, "n": file_data['h']}]
                response = session.post(f"{MEGA_API_URL}/cs", params=params, json=payload, timeout=20)
                response.raise_for_status()
                res_json = response.json()

                if isinstance(res_json, int) or (isinstance(res_json, list) and res_json and isinstance(res_json[0], int)):
                    error_code = res_json if isinstance(res_json, int) else res_json[0]
                    logger_func(f" [Mega Worker] ❌ API Error {error_code} for '{file_data['relative_path']}'. Skipping.")
                    return

                dl_temp_url = res_json[0]['g']
                file_info = {'file_name': os.path.basename(file_data['relative_path']), 'file_size': file_data['s'], 'dl_url': dl_temp_url, 'file_key': file_data['key']}
                file_specific_path = os.path.dirname(file_data['relative_path'])
                final_download_dir = os.path.join(folder_download_path, file_specific_path)

                download_and_decrypt_mega_file(file_info, final_download_dir, logger_func, progress_callback_func, cancellation_event, pause_event)

            except Exception as e:
                # Don't log an error if it was a cancellation
                if not (cancellation_event and cancellation_event.is_set()):
                    logger_func(f" [Mega Worker] ❌ Failed to process '{file_data['relative_path']}': {e}")
            finally:
                with progress_lock:
                    processed_count += 1
                    if overall_progress_callback:
                        overall_progress_callback(total_files, processed_count)

        with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
            futures = [executor.submit(_download_worker, file_data) for file_data in files]
            for future in as_completed(futures):
                if cancellation_event and cancellation_event.is_set():
                    # Attempt to cancel remaining futures
                    for f in futures:
                        if not f.done():
                            f.cancel()
                    break
                try:
                    future.result()
                except Exception as e:
                    if not (cancellation_event and cancellation_event.is_set()):
                        logger_func(f" [Mega Folder] A download worker failed with an error: {e}")

        logger_func(" [Mega Folder] ✅ All concurrent downloads complete or cancelled.")

    elif file_match:
        if overall_progress_callback: overall_progress_callback(1, 0)
        file_id, file_key = file_match.groups()
        try:
            payload = [{"a": "g", "p": file_id}]
            response = session.post(f"{MEGA_API_URL}/cs", json=payload, timeout=20)
            res_json = response.json()
            if isinstance(res_json, list) and res_json and isinstance(res_json[0], int):
                logger_func(f" [Mega] ❌ API Error {res_json[0]}. Link may be invalid or removed.")
                if overall_progress_callback: overall_progress_callback(1, 1)
                return

            file_size = res_json[0]['s']
            at_b64 = res_json[0]['at']
            raw_file_key_bytes = b64_to_bytes(file_key)
            attr_key_bytes = _process_file_key(raw_file_key_bytes)
            attrs = _decrypt_mega_attribute(at_b64, attr_key_bytes)

            file_name = attrs.get('n', f"unknown_file_{file_id}")
            payload_dl = [{"a": "g", "g": 1, "p": file_id}]
            response_dl = session.post(f"{MEGA_API_URL}/cs", json=payload_dl, timeout=20)
            dl_temp_url = response_dl.json()[0]['g']
            file_info_obj = {'file_name': file_name, 'file_size': file_size, 'dl_url': dl_temp_url, 'file_key': file_key}

            download_and_decrypt_mega_file(file_info_obj, download_path, logger_func, progress_callback_func, cancellation_event, pause_event)

            if overall_progress_callback: overall_progress_callback(1, 1)
        except Exception as e:
            if not (cancellation_event and cancellation_event.is_set()):
                logger_func(f" [Mega] ❌ Failed to process single file: {e}")
            if overall_progress_callback: overall_progress_callback(1, 1)
    else:
        logger_func(f" [Mega] ❌ Error: Invalid or unsupported Mega URL format.")
        if '/folder/' in mega_url and '/file/' in mega_url:
            logger_func(" [Mega] ℹ️ This looks like a link to a file inside a folder. Please use a direct, shareable link to the individual file.")
        if overall_progress_callback: overall_progress_callback(1, 1)
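
# Single-file flow above, for reference: the first "a":"g" request (with "p")
# returns the size and encrypted attributes so the real filename can be recovered
# before downloading; the second request with "g": 1 returns a short-lived direct
# download URL. Usage sketch (illustrative link):
#
#   download_mega_file("https://mega.nz/file/AbCdEfGh#sOmEkEy", "/tmp/downloads",
#                      logger_func=print)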

def download_gdrive_file(url, download_path, logger_func=print, progress_callback_func=None, overall_progress_callback=None, use_post_subfolder=False, post_title=None):
    """Downloads a file or folder from a Google Drive link."""
    if not GDRIVE_AVAILABLE:
        logger_func("❌ Google Drive download failed: 'gdown' library is not installed.")
        return

    # --- Subfolder Logic ---
    final_download_path = download_path
    if use_post_subfolder and post_title:
        subfolder_name = clean_folder_name(post_title)
        final_download_path = os.path.join(download_path, subfolder_name)
        logger_func(f" [G-Drive] Using post subfolder: '{subfolder_name}'")
    os.makedirs(final_download_path, exist_ok=True)
    # --- End Subfolder Logic ---

    original_stdout = sys.stdout
    original_stderr = sys.stderr
    captured_output_buffer = io.StringIO()

    paths = None
    try:
        logger_func(f" [G-Drive] Starting folder download for: {url}")

        sys.stdout = captured_output_buffer
        sys.stderr = captured_output_buffer

        paths = gdown.download_folder(url, output=final_download_path, quiet=False, use_cookies=False, remaining_ok=True)

    except Exception as e:
        logger_func(f" [G-Drive] ❌ An unexpected error occurred: {e}")
        logger_func(" [G-Drive] ℹ️ This can happen if the folder is private, deleted, or you have been rate-limited by Google.")
    finally:
        sys.stdout = original_stdout
        sys.stderr = original_stderr

        captured_output = captured_output_buffer.getvalue()
        if captured_output:
            processed_files_count = 0
            current_filename = None

            if overall_progress_callback:
                overall_progress_callback(-1, 0)

            lines = captured_output.splitlines()
            for i, line in enumerate(lines):
                cleaned_line = line.strip('\r').strip()
                if not cleaned_line:
                    continue

                if cleaned_line.startswith("To: "):
                    try:
                        if current_filename:
                            logger_func(f" [G-Drive] ✅ Saved '{current_filename}'")

                        filepath = cleaned_line[4:]
                        current_filename = os.path.basename(filepath)
                        processed_files_count += 1

                        logger_func(f" [G-Drive] ({processed_files_count}/?) Downloading '{current_filename}'...")
                        if progress_callback_func:
                            progress_callback_func(current_filename, "In Progress...")
                        if overall_progress_callback:
                            overall_progress_callback(-1, processed_files_count - 1)

                    except Exception:
                        logger_func(f" [gdown] {cleaned_line}")

            if current_filename:
                logger_func(f" [G-Drive] ✅ Saved '{current_filename}'")
                if overall_progress_callback:
                    overall_progress_callback(-1, processed_files_count)

        if paths and all(os.path.exists(p) for p in paths):
            final_folder_path = os.path.dirname(paths[0]) if paths else final_download_path
            logger_func(f" [G-Drive] ✅ Finished. Downloaded {len(paths)} file(s) to folder '{final_folder_path}'")
        else:
            logger_func(f" [G-Drive] ❌ Download failed or folder was empty. Check the log above for details from gdown.")

def download_dropbox_file(dropbox_link, download_path=".", logger_func=print, progress_callback_func=None, use_post_subfolder=False, post_title=None):
    """Downloads a file from a public Dropbox link by modifying the URL for direct download."""
    logger_func(f" [Dropbox] Attempting to download: {dropbox_link}")

    final_download_path = download_path
    if use_post_subfolder and post_title:
        subfolder_name = clean_folder_name(post_title)
        final_download_path = os.path.join(download_path, subfolder_name)
        logger_func(f" [Dropbox] Using post subfolder: '{subfolder_name}'")

    parsed_url = urlparse(dropbox_link)
    query_params = parse_qs(parsed_url.query)
    query_params['dl'] = ['1']
    new_query = urlencode(query_params, doseq=True)
    direct_download_url = urlunparse(parsed_url._replace(query=new_query))

    logger_func(f" [Dropbox] Using direct download URL: {direct_download_url}")

    scraper = cloudscraper.create_scraper()
    try:
        os.makedirs(final_download_path, exist_ok=True)
        with scraper.get(direct_download_url, stream=True, allow_redirects=True, timeout=(20, 600)) as r:
            r.raise_for_status()

            filename = _get_filename_from_headers(r.headers) or os.path.basename(parsed_url.path) or "dropbox_download"
            if not os.path.splitext(filename)[1]:
                filename += ".zip"
            full_save_path = os.path.join(final_download_path, filename)
            logger_func(f" [Dropbox] Starting download of '{filename}'...")

            total_size = int(r.headers.get('content-length', 0))
            downloaded_bytes = 0
            last_log_time = time.time()
            with open(full_save_path, 'wb') as f:
                for chunk in r.iter_content(chunk_size=8192):
                    f.write(chunk)
                    downloaded_bytes += len(chunk)
                    current_time = time.time()
                    if current_time - last_log_time > 1:
                        if progress_callback_func:
                            progress_callback_func(filename, (downloaded_bytes, total_size))
                        last_log_time = current_time

        logger_func(f" [Dropbox] ✅ Download complete: {full_save_path}")
        if zipfile.is_zipfile(full_save_path):
            logger_func(f" [Dropbox] 📦 Detected zip file. Attempting to extract...")
            extract_folder_name = os.path.splitext(filename)[0]
            extract_path = os.path.join(final_download_path, extract_folder_name)
            os.makedirs(extract_path, exist_ok=True)
            with zipfile.ZipFile(full_save_path, 'r') as zip_ref:
                zip_ref.extractall(extract_path)
            logger_func(f" [Dropbox] ✅ Successfully extracted to folder: '{extract_path}'")
            try:
                os.remove(full_save_path)
                logger_func(f" [Dropbox] 🗑️ Removed original zip file.")
            except OSError as e:
                logger_func(f" [Dropbox] ⚠️ Could not remove original zip file: {e}")
    except Exception as e:
        logger_func(f" [Dropbox] ❌ An error occurred during Dropbox download: {e}")
        traceback.print_exc(limit=2)
        raise
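
# Example of the dl=1 rewrite performed above (illustrative share link):
#   in : https://www.dropbox.com/s/abc123/archive.zip?dl=0
#   out: https://www.dropbox.com/s/abc123/archive.zip?dl=1
# With dl=1 Dropbox serves the raw bytes (a zip archive for folder links) instead
# of the HTML preview page, which is also why a missing extension defaults to
# ".zip" above.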

def _get_gofile_api_token(session, logger_func):
    """Creates a temporary guest account to get an API token."""
    try:
        logger_func(" [Gofile] Creating temporary guest account for API token...")
        response = session.post("https://api.gofile.io/accounts", timeout=20)
        response.raise_for_status()
        data = response.json()
        if data.get("status") == "ok":
            token = data["data"]["token"]
            logger_func(" [Gofile] ✅ Successfully obtained API token.")
            return token
        else:
            logger_func(f" [Gofile] ❌ Failed to get API token, status: {data.get('status')}")
            return None
    except Exception as e:
        logger_func(f" [Gofile] ❌ Error creating guest account: {e}")
        return None

def _get_gofile_website_token(session, logger_func):
    """Fetches the 'wt' (website token) from Gofile's global JS file."""
    try:
        logger_func(" [Gofile] Fetching website token (wt)...")
        response = session.get("https://gofile.io/dist/js/global.js", timeout=20)
        response.raise_for_status()
        match = re.search(r'\.wt = "([^"]+)"', response.text)
        if match:
            wt = match.group(1)
            logger_func(" [Gofile] ✅ Successfully fetched website token.")
            return wt
        logger_func(" [Gofile] ❌ Could not find website token in JS file.")
        return None
    except Exception as e:
        logger_func(f" [Gofile] ❌ Error fetching website token: {e}")
        return None
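
# Gofile's /contents endpoint needs both values fetched above: the guest account
# token (sent as a cookie and Bearer header) and the 'wt' website token scraped
# from global.js (passed as a query parameter). Neither is an official public
# API, so a site-side change to global.js is likely to break the regex above
# first.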

def download_gofile_folder(gofile_url, download_path, logger_func=print, progress_callback_func=None, overall_progress_callback=None):
    """Downloads all files from a Gofile folder URL."""
    logger_func(f" [Gofile] Initializing download for: {gofile_url}")

    match = re.search(r"gofile\.io/d/([^/?#]+)", gofile_url)
    if not match:
        logger_func(" [Gofile] ❌ Invalid Gofile folder URL format.")
        if overall_progress_callback: overall_progress_callback(1, 1)
        return

    content_id = match.group(1)

    scraper = cloudscraper.create_scraper()

    try:
        retry_strategy = Retry(
            total=5,
            backoff_factor=1,
            status_forcelist=[429, 500, 502, 503, 504],
            allowed_methods=["HEAD", "GET", "POST"]
        )
        adapter = HTTPAdapter(max_retries=retry_strategy)
        scraper.mount("http://", adapter)
        scraper.mount("https://", adapter)
        logger_func(" [Gofile] 🔧 Configured robust retry strategy for network requests.")
    except Exception as e:
        logger_func(f" [Gofile] ⚠️ Could not configure retry strategy: {e}")

    api_token = _get_gofile_api_token(scraper, logger_func)
    if not api_token:
        if overall_progress_callback: overall_progress_callback(1, 1)
        return

    website_token = _get_gofile_website_token(scraper, logger_func)
    if not website_token:
        if overall_progress_callback: overall_progress_callback(1, 1)
        return

    try:
        scraper.cookies.set("accountToken", api_token, domain=".gofile.io")
        scraper.headers.update({"Authorization": f"Bearer {api_token}"})

        api_url = f"https://api.gofile.io/contents/{content_id}?wt={website_token}"
        logger_func(f" [Gofile] Fetching folder contents for ID: {content_id}")
        response = scraper.get(api_url, timeout=30)
        response.raise_for_status()
        data = response.json()

        if data.get("status") != "ok":
            if data.get("status") == "error-passwordRequired":
                logger_func(" [Gofile] ❌ This folder is password protected. Downloading password-protected folders is not supported.")
            else:
                logger_func(f" [Gofile] ❌ API Error: {data.get('status')}. The folder may be expired or invalid.")
            if overall_progress_callback: overall_progress_callback(1, 1)
            return

        folder_info = data.get("data", {})
        folder_name = clean_folder_name(folder_info.get("name", content_id))
        files_to_download = [item for item in folder_info.get("children", {}).values() if item.get("type") == "file"]

        if not files_to_download:
            logger_func(" [Gofile] ℹ️ No files found in this Gofile folder.")
            if overall_progress_callback: overall_progress_callback(0, 0)
            return

        final_download_path = os.path.join(download_path, folder_name)
        os.makedirs(final_download_path, exist_ok=True)
        logger_func(f" [Gofile] Found {len(files_to_download)} file(s). Saving to folder: '{folder_name}'")
        if overall_progress_callback: overall_progress_callback(len(files_to_download), 0)

        download_session = requests.Session()
        adapter = HTTPAdapter(max_retries=Retry(
            total=5, backoff_factor=1, status_forcelist=[429, 500, 502, 503, 504]
        ))
        download_session.mount("http://", adapter)
        download_session.mount("https://", adapter)

        for i, file_info in enumerate(files_to_download):
            filename = file_info.get("name")
            file_url = file_info.get("link")
            file_size = file_info.get("size", 0)
            filepath = os.path.join(final_download_path, filename)

            if os.path.exists(filepath) and os.path.getsize(filepath) == file_size:
                logger_func(f" [Gofile] ({i+1}/{len(files_to_download)}) ⏩ Skipping existing file: '{filename}'")
                if overall_progress_callback: overall_progress_callback(len(files_to_download), i + 1)
                continue

            logger_func(f" [Gofile] ({i+1}/{len(files_to_download)}) 🔽 Downloading: '{filename}'")
            with download_session.get(file_url, stream=True, timeout=(60, 600)) as r:
                r.raise_for_status()

                if progress_callback_func:
                    progress_callback_func(filename, (0, file_size))

                downloaded_bytes = 0
                last_log_time = time.time()
                with open(filepath, 'wb') as f:
                    for chunk in r.iter_content(chunk_size=8192):
                        f.write(chunk)
                        downloaded_bytes += len(chunk)
                        current_time = time.time()
                        if current_time - last_log_time > 0.5:  # Update slightly faster
                            if progress_callback_func:
                                progress_callback_func(filename, (downloaded_bytes, file_size))
                            last_log_time = current_time

            if progress_callback_func:
                progress_callback_func(filename, (file_size, file_size))

            logger_func(f" [Gofile] ✅ Finished '{filename}'")
            if overall_progress_callback: overall_progress_callback(len(files_to_download), i + 1)
            time.sleep(1)

    except Exception as e:
        logger_func(f" [Gofile] ❌ An error occurred during Gofile download: {e}")
        if not isinstance(e, requests.exceptions.RequestException):
            traceback.print_exc()
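
# Usage sketch for the function above (illustrative folder code; callbacks are
# optional):
#
#   download_gofile_folder("https://gofile.io/d/AbCdEf", "/tmp/downloads",
#                          logger_func=print)
#
# Note the existing-file check compares on-disk size against the API-reported
# size, so interrupted partial files are re-downloaded rather than skipped.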

@@ -15,7 +15,7 @@ MULTIPART_DOWNLOADER_AVAILABLE = True

 # --- Module Constants ---
 CHUNK_DOWNLOAD_RETRY_DELAY = 2
-MAX_CHUNK_DOWNLOAD_RETRIES = 1
+MAX_CHUNK_DOWNLOAD_RETRIES = 5
 DOWNLOAD_CHUNK_SIZE_ITER = 1024 * 256  # 256 KB per iteration chunk

120
src/services/updater.py
Normal file
@@ -0,0 +1,120 @@
import sys
import os
import requests
import subprocess  # Kept for now, though it's not used in the final command
from packaging.version import parse as parse_version
from PyQt5.QtCore import QThread, pyqtSignal

# Constants for the updater
GITHUB_REPO_URL = "https://api.github.com/repos/Yuvi9587/Kemono-Downloader/releases/latest"
EXE_NAME = "Kemono.Downloader.exe"

class UpdateChecker(QThread):
    """Checks for a new version on GitHub in a background thread."""
    update_available = pyqtSignal(str, str)  # new_version, download_url
    up_to_date = pyqtSignal(str)
    update_error = pyqtSignal(str)

    def __init__(self, current_version):
        super().__init__()
        self.current_version_str = current_version.lstrip('v')

    def run(self):
        try:
            response = requests.get(GITHUB_REPO_URL, timeout=15)
            response.raise_for_status()
            data = response.json()

            latest_version_str = data['tag_name'].lstrip('v')
            current_version = parse_version(self.current_version_str)
            latest_version = parse_version(latest_version_str)

            if latest_version > current_version:
                for asset in data.get('assets', []):
                    if asset['name'] == EXE_NAME:
                        self.update_available.emit(latest_version_str, asset['browser_download_url'])
                        return
                self.update_error.emit(f"Update found, but '{EXE_NAME}' is missing from the release assets.")
            else:
                self.up_to_date.emit("You are on the latest version.")

        except requests.exceptions.RequestException as e:
            self.update_error.emit(f"Network error: {e}")
        except Exception as e:
            self.update_error.emit(f"An error occurred: {e}")


class UpdateDownloader(QThread):
    """
    Downloads the new executable and runs an updater script that kills the old process,
    replaces the file, and displays a message in the terminal.
    """
    download_finished = pyqtSignal()
    download_error = pyqtSignal(str)

    def __init__(self, download_url, parent_app):
        super().__init__()
        self.download_url = download_url
        self.parent_app = parent_app

    def run(self):
        try:
            app_path = sys.executable
            app_dir = os.path.dirname(app_path)
            temp_path = os.path.join(app_dir, f"{EXE_NAME}.tmp")
            old_path = os.path.join(app_dir, f"{EXE_NAME}.old")
            updater_script_path = os.path.join(app_dir, "updater.bat")

            pid_file_path = os.path.join(app_dir, "updater.pid")

            with requests.get(self.download_url, stream=True, timeout=300) as r:
                r.raise_for_status()
                with open(temp_path, 'wb') as f:
                    for chunk in r.iter_content(chunk_size=8192):
                        f.write(chunk)

            with open(pid_file_path, "w") as f:
                f.write(str(os.getpid()))

            script_content = f"""
@echo off
SETLOCAL

echo.
echo Reading process information...
set /p PID=<{pid_file_path}

echo Closing the old application (PID: %PID%)...
taskkill /F /PID %PID%

echo Waiting for files to unlock...
timeout /t 2 /nobreak > nul

echo Replacing application files...
if exist "{old_path}" del /F /Q "{old_path}"
rename "{app_path}" "{os.path.basename(old_path)}"
rename "{temp_path}" "{EXE_NAME}"

echo.
echo ============================================================
echo Update Complete!
echo You can now close this window and run {EXE_NAME}.
echo ============================================================
echo.
pause

echo Cleaning up helper files...
del "{pid_file_path}"
del "%~f0"
ENDLOCAL
"""
            with open(updater_script_path, "w") as f:
                f.write(script_content)

            # Use os.startfile to launch the script in its own console (known to work here)
            os.startfile(updater_script_path)

            self.download_finished.emit()

        except Exception as e:
            self.download_error.emit(f"Failed to download or run updater: {e}")
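
# Self-update handoff, step by step: the running exe writes its own PID to
# updater.pid, then launches updater.bat in a separate console. The script kills
# the recorded PID, renames the still-locked exe to .old (Windows allows renaming,
# but not deleting, a running executable), renames the downloaded .tmp into
# place, then deletes the PID file and finally itself ("%~f0" expands to the
# script's own path). Cleanup of the leftover .old file is assumed to happen on
# the next launch.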

137
src/ui/classes/allcomic_downloader_thread.py
Normal file
@@ -0,0 +1,137 @@
import os
import threading
import time
from urllib.parse import urlparse

import cloudscraper
import requests
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.allcomic_client import (fetch_chapter_data as allcomic_fetch_data,
                                     get_chapter_list as allcomic_get_list)
from ...utils.file_utils import clean_folder_name


class AllcomicDownloadThread(QThread):
    """A dedicated QThread for handling allcomic.com downloads."""
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool)
    overall_progress_signal = pyqtSignal(int, int)

    def __init__(self, url, output_dir, parent=None):
        super().__init__(parent)
        self.comic_url = url
        self.output_dir = output_dir
        self.is_cancelled = False
        self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()

    def _check_pause(self):
        if self.is_cancelled: return True
        if self.pause_event and self.pause_event.is_set():
            self.progress_signal.emit(" Download paused...")
            while self.pause_event.is_set():
                if self.is_cancelled: return True
                time.sleep(0.5)
            self.progress_signal.emit(" Download resumed.")
        return self.is_cancelled

    def run(self):
        grand_total_dl = 0
        grand_total_skip = 0

        # Create the scraper session ONCE for the entire job
        scraper = cloudscraper.create_scraper(
            browser={'browser': 'firefox', 'platform': 'windows', 'desktop': True}
        )

        # Pass the scraper to the client function
        chapters_to_download = allcomic_get_list(scraper, self.comic_url, self.progress_signal.emit)

        if not chapters_to_download:
            chapters_to_download = [self.comic_url]

        self.progress_signal.emit(f"--- Starting download of {len(chapters_to_download)} chapter(s) ---")

        for chapter_idx, chapter_url in enumerate(chapters_to_download):
            if self._check_pause(): break

            self.progress_signal.emit(f"\n-- Processing Chapter {chapter_idx + 1}/{len(chapters_to_download)} --")
            # Pass the scraper to the client function
            comic_title, chapter_title, image_urls = allcomic_fetch_data(scraper, chapter_url, self.progress_signal.emit)

            if not image_urls:
                self.progress_signal.emit(f"❌ Failed to get data for chapter. Skipping.")
                continue

            series_folder_name = clean_folder_name(comic_title)
            chapter_folder_name = clean_folder_name(chapter_title)
            final_save_path = os.path.join(self.output_dir, series_folder_name, chapter_folder_name)

            try:
                os.makedirs(final_save_path, exist_ok=True)
                self.progress_signal.emit(f" Saving to folder: '{os.path.join(series_folder_name, chapter_folder_name)}'")
            except OSError as e:
                self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
                grand_total_skip += len(image_urls)
                continue

            total_files_in_chapter = len(image_urls)
            self.overall_progress_signal.emit(total_files_in_chapter, 0)
            headers = {'Referer': chapter_url}

            for i, img_url in enumerate(image_urls):
                if self._check_pause(): break

                file_extension = os.path.splitext(urlparse(img_url).path)[1] or '.jpg'
                filename = f"{i+1:03d}{file_extension}"
                filepath = os.path.join(final_save_path, filename)

                if os.path.exists(filepath):
                    self.progress_signal.emit(f" -> Skip ({i+1}/{total_files_in_chapter}): '{filename}' already exists.")
                    grand_total_skip += 1
                else:
                    download_successful = False
                    max_retries = 8
                    for attempt in range(max_retries):
                        if self._check_pause(): break
                        try:
                            self.progress_signal.emit(f" Downloading ({i+1}/{total_files_in_chapter}): '{filename}' (Attempt {attempt + 1})...")
                            # Use the persistent scraper object
                            response = scraper.get(img_url, stream=True, headers=headers, timeout=60)
                            response.raise_for_status()

                            with open(filepath, 'wb') as f:
                                for chunk in response.iter_content(chunk_size=8192):
                                    if self._check_pause(): break
                                    f.write(chunk)

                            if self._check_pause():
                                if os.path.exists(filepath): os.remove(filepath)
                                break

                            download_successful = True
                            grand_total_dl += 1
                            break

                        except requests.RequestException as e:
                            self.progress_signal.emit(f" ⚠️ Attempt {attempt + 1} failed for '{filename}': {e}")
                            if attempt < max_retries - 1:
                                wait_time = 2 * (attempt + 1)
                                self.progress_signal.emit(f" Retrying in {wait_time} seconds...")
                                time.sleep(wait_time)
                            else:
                                self.progress_signal.emit(f" ❌ All attempts failed for '{filename}'. Skipping.")
                                grand_total_skip += 1

                self.overall_progress_signal.emit(total_files_in_chapter, i + 1)
                time.sleep(0.5)  # Increased delay between images for this site

            if self._check_pause(): break

        self.file_progress_signal.emit("", None)
        self.finished_signal.emit(grand_total_dl, grand_total_skip, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
        self.progress_signal.emit(" Cancellation signal received by AllComic thread.")

133
src/ui/classes/booru_downloader_thread.py
Normal file
@@ -0,0 +1,133 @@
import os
import threading
import time
import datetime
import requests
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.booru_client import fetch_booru_data, BooruClientException
from ...utils.file_utils import clean_folder_name

_ff_ver = (datetime.date.today().toordinal() - 735506) // 28
USERAGENT_FIREFOX = (f"Mozilla/5.0 (Windows NT 10.0; Win64; x64; "
                     f"rv:{_ff_ver}.0) Gecko/20100101 Firefox/{_ff_ver}.0")

class BooruDownloadThread(QThread):
    """A dedicated QThread for handling Danbooru and Gelbooru downloads."""
    progress_signal = pyqtSignal(str)
    overall_progress_signal = pyqtSignal(int, int)
    finished_signal = pyqtSignal(int, int, bool)  # dl_count, skip_count, cancelled

    def __init__(self, url, output_dir, api_key, user_id, parent=None):
        super().__init__(parent)
        self.booru_url = url
        self.output_dir = output_dir
        self.api_key = api_key
        self.user_id = user_id
        self.is_cancelled = False
        self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()

    def run(self):
        download_count = 0
        skip_count = 0
        processed_count = 0
        cumulative_total = 0

        def logger(msg):
            self.progress_signal.emit(str(msg))

        try:
            self.progress_signal.emit("=" * 40)
            self.progress_signal.emit(f"🚀 Starting Booru Download for: {self.booru_url}")

            item_generator = fetch_booru_data(self.booru_url, self.api_key, self.user_id, logger)

            download_path = self.output_dir  # Default path
            path_initialized = False

            session = requests.Session()
            session.headers["User-Agent"] = USERAGENT_FIREFOX

            for item in item_generator:
                if self.is_cancelled:
                    break

                if isinstance(item, tuple) and item[0] == 'PAGE_UPDATE':
                    newly_found = item[1]
                    cumulative_total += newly_found
                    self.progress_signal.emit(f" Found {newly_found} more posts. Total so far: {cumulative_total}")
                    self.overall_progress_signal.emit(cumulative_total, processed_count)
                    continue

                post_data = item
                processed_count += 1

                if not path_initialized:
                    base_folder_name = post_data.get('search_tags', 'booru_download')
                    download_path = os.path.join(self.output_dir, clean_folder_name(base_folder_name))
                    os.makedirs(download_path, exist_ok=True)
                    path_initialized = True

                if self.pause_event.is_set():
                    self.progress_signal.emit(" Download paused...")
                    while self.pause_event.is_set():
                        if self.is_cancelled: break
                        time.sleep(0.5)
                    if self.is_cancelled: break
                    self.progress_signal.emit(" Download resumed.")

                file_url = post_data.get('file_url')
                if not file_url:
                    skip_count += 1
                    self.progress_signal.emit(f" -> Skip ({processed_count}/{cumulative_total}): Post ID {post_data.get('id')} has no file URL.")
                    continue

                cat = post_data.get('category', 'booru')
                post_id = post_data.get('id', 'unknown')
                md5 = post_data.get('md5', '')
                fname = post_data.get('filename', f"file_{post_id}")
                ext = post_data.get('extension', 'jpg')

                final_filename = f"{cat}_{post_id}_{md5 or fname}.{ext}"
                filepath = os.path.join(download_path, final_filename)

                if os.path.exists(filepath):
                    self.progress_signal.emit(f" -> Skip ({processed_count}/{cumulative_total}): '{final_filename}' already exists.")
                    skip_count += 1
                else:
                    try:
                        self.progress_signal.emit(f" Downloading ({processed_count}/{cumulative_total}): '{final_filename}'...")
                        response = session.get(file_url, stream=True, timeout=60)
                        response.raise_for_status()

                        with open(filepath, 'wb') as f:
                            for chunk in response.iter_content(chunk_size=8192):
                                if self.is_cancelled: break
                                f.write(chunk)

                        if not self.is_cancelled:
                            download_count += 1
                        else:
                            if os.path.exists(filepath): os.remove(filepath)
                            skip_count += 1

                    except Exception as e:
                        self.progress_signal.emit(f" ❌ Failed to download '{final_filename}': {e}")
                        skip_count += 1

                self.overall_progress_signal.emit(cumulative_total, processed_count)
                time.sleep(0.2)

            if not path_initialized:
                self.progress_signal.emit("No posts found for the given URL/tags.")

        except BooruClientException as e:
            self.progress_signal.emit(f"❌ A Booru client error occurred: {e}")
        except Exception as e:
            self.progress_signal.emit(f"❌ An unexpected error occurred in Booru thread: {e}")
        finally:
            self.finished_signal.emit(download_count, skip_count, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
        self.progress_signal.emit(" Cancellation signal received by Booru thread.")

195
src/ui/classes/bunkr_downloader_thread.py
Normal file
@@ -0,0 +1,195 @@
import os
import re
import time
import requests
import threading
from concurrent.futures import ThreadPoolExecutor
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.bunkr_client import fetch_bunkr_data

# Image extensions handled by the concurrent image pool
IMG_EXTS = ('.jpg', '.jpeg', '.png', '.gif', '.webp', '.bmp', '.avif')
BUNKR_IMG_THREADS = 6  # Hardcoded thread count for images

class BunkrDownloadThread(QThread):
    """A dedicated QThread for handling Bunkr downloads."""
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool, list)

    def __init__(self, url, output_dir, parent=None):
        super().__init__(parent)
        self.bunkr_url = url
        self.output_dir = output_dir
        self.is_cancelled = False

        # Threading members shared across workers
        self.lock = threading.Lock()
        self.download_count = 0
        self.skip_count = 0
        self.file_index = 0  # Shared index for logging

        class ThreadLogger:
            def __init__(self, signal_emitter):
                self.signal_emitter = signal_emitter
            def info(self, msg, *args, **kwargs):
                self.signal_emitter.emit(str(msg))
            def error(self, msg, *args, **kwargs):
                self.signal_emitter.emit(f"❌ ERROR: {msg}")
            def warning(self, msg, *args, **kwargs):
                self.signal_emitter.emit(f"⚠️ WARNING: {msg}")
            def debug(self, msg, *args, **kwargs):
                pass

        self.logger = ThreadLogger(self.progress_signal)

    def _download_file(self, file_data, total_files, album_path, is_image_task=False):
        """
        A thread-safe method to download a single file.
        Called by the main thread (for videos) and by worker threads (for images).
        """
        # Stop if a cancellation signal was received before starting
        if self.is_cancelled:
            return

        # Thread-safe index for logging
        with self.lock:
            self.file_index += 1
            current_file_num = self.file_index

        try:
            filename = file_data.get('name', 'untitled_file')
            file_url = file_data.get('url')
            headers = file_data.get('_http_headers')

            filename = re.sub(r'[<>:"/\\|?*]', '_', filename).strip()
            filepath = os.path.join(album_path, filename)

            if os.path.exists(filepath):
                self.progress_signal.emit(f" -> Skip ({current_file_num}/{total_files}): '{filename}' already exists.")
                with self.lock:
                    self.skip_count += 1
                return

            self.progress_signal.emit(f" Downloading ({current_file_num}/{total_files}): '{filename}'...")

            response = requests.get(file_url, stream=True, headers=headers, timeout=60)
            response.raise_for_status()

            total_size = int(response.headers.get('content-length', 0))
            downloaded_size = 0
            last_update_time = time.time()

            with open(filepath, 'wb') as f:
                for chunk in response.iter_content(chunk_size=8192):
                    if self.is_cancelled:
                        break
                    if chunk:
                        f.write(chunk)
                        downloaded_size += len(chunk)

                        # Videos/other files report frequent progress;
                        # images skip it to avoid UI flicker.
                        if not is_image_task:
                            current_time = time.time()
                            if total_size > 0 and (current_time - last_update_time) > 0.5:
                                self.file_progress_signal.emit(filename, (downloaded_size, total_size))
                                last_update_time = current_time

            if self.is_cancelled:
                self.progress_signal.emit(f" Download cancelled for '{filename}'.")
                if os.path.exists(filepath): os.remove(filepath)
                return

            if total_size > 0:
                self.file_progress_signal.emit(filename, (total_size, total_size))

            with self.lock:
                self.download_count += 1

        except requests.exceptions.RequestException as e:
            self.progress_signal.emit(f" ❌ Failed to download '{filename}'. Error: {e}")
            if os.path.exists(filepath): os.remove(filepath)
            with self.lock:
                self.skip_count += 1
        except Exception as e:
            self.progress_signal.emit(f" ❌ An unexpected error occurred with '{filename}': {e}")
            if os.path.exists(filepath): os.remove(filepath)
            with self.lock:
                self.skip_count += 1

    def run(self):
        self.progress_signal.emit("=" * 40)
        self.progress_signal.emit(f"🚀 Starting Bunkr Download for: {self.bunkr_url}")

        album_name, files_to_download = fetch_bunkr_data(self.bunkr_url, self.logger)

        if not files_to_download:
            self.progress_signal.emit("❌ Failed to extract file information from Bunkr. Aborting.")
            self.finished_signal.emit(0, 0, self.is_cancelled, [])
            return

        album_path = os.path.join(self.output_dir, album_name)
        try:
            os.makedirs(album_path, exist_ok=True)
            self.progress_signal.emit(f" Saving to folder: '{album_path}'")
        except OSError as e:
            self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
            self.finished_signal.emit(0, len(files_to_download), self.is_cancelled, [])
            return

        total_files = len(files_to_download)

        # Separate files into images and everything else
        image_files = []
        other_files = []
        for f in files_to_download:
            name = f.get('name', '').lower()
            if name.endswith(IMG_EXTS):
                image_files.append(f)
            else:
                other_files.append(f)

        self.progress_signal.emit(f" Found {len(image_files)} images and {len(other_files)} other files.")

        # 1. Process videos and other files sequentially (one by one)
        if other_files:
            self.progress_signal.emit(f" Downloading {len(other_files)} videos/other files sequentially...")
            for file_data in other_files:
                if self.is_cancelled:
                    break
                self._download_file(file_data, total_files, album_path, is_image_task=False)

        # 2. Process images concurrently using a fixed 6-thread pool
        if image_files and not self.is_cancelled:
            self.progress_signal.emit(f" Downloading {len(image_files)} images concurrently ({BUNKR_IMG_THREADS} threads)...")
            with ThreadPoolExecutor(max_workers=BUNKR_IMG_THREADS, thread_name_prefix='BunkrImg') as executor:

                # Submit all image download tasks
                futures = {executor.submit(self._download_file, file_data, total_files, album_path, is_image_task=True): file_data for file_data in image_files}

                try:
                    # Wait for tasks to complete, but check for cancellation
                    for future in futures:
                        if self.is_cancelled:
                            future.cancel()  # Try to cancel running/pending tasks
                        else:
                            future.result()  # Wait for the task to finish (or raise)
                except Exception as e:
                    self.progress_signal.emit(f" ❌ A thread pool error occurred: {e}")

        if self.is_cancelled:
            self.progress_signal.emit(" Download cancelled by user.")
            # Update skip count to reflect all non-downloaded files
            self.skip_count = total_files - self.download_count

        self.file_progress_signal.emit("", None)  # Clear file progress
        self.finished_signal.emit(self.download_count, self.skip_count, self.is_cancelled, [])

    def cancel(self):
        self.is_cancelled = True
        self.progress_signal.emit(" Cancellation signal received by Bunkr thread.")

208
src/ui/classes/deviantart_downloader_thread.py
Normal file
@@ -0,0 +1,208 @@
import os
import time
import requests
import re
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor, wait
from PyQt5.QtCore import QThread, pyqtSignal
from ...core.deviantart_client import DeviantArtClient
from ...utils.file_utils import clean_folder_name

class DeviantArtDownloadThread(QThread):
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    overall_progress_signal = pyqtSignal(int, int)
    finished_signal = pyqtSignal(int, int, bool, list)

    def __init__(self, url, output_dir, pause_event, cancellation_event, parent=None):
        super().__init__(parent)
        self.url = url
        self.output_dir = output_dir
        self.pause_event = pause_event
        self.cancellation_event = cancellation_event

        # Pass the logger to the client so client logs reach the UI,
        # not just the console window
        self.client = DeviantArtClient(logger_func=self.progress_signal.emit)

        self.parent_app = parent
        self.download_count = 0
        self.skip_count = 0

        # Thread settings
        self.max_threads = 10

    def run(self):
        self.progress_signal.emit("=" * 40)
        self.progress_signal.emit(f"🚀 Starting DeviantArt download for: {self.url}")
        self.progress_signal.emit(f" ℹ️ Using {self.max_threads} parallel threads.")

        try:
            if not self.client.authenticate():
                self.progress_signal.emit("❌ Failed to authenticate with DeviantArt API.")
                self.finished_signal.emit(0, 0, True, [])
                return

            mode, username, _ = self.client.extract_info_from_url(self.url)

            if mode == 'post':
                self._process_single_post(self.url)
            elif mode == 'gallery':
                self._process_gallery(username)
            else:
                self.progress_signal.emit("❌ Could not parse DeviantArt URL type.")

        except Exception as e:
            self.progress_signal.emit(f"❌ Error during download: {e}")
            self.skip_count += 1
        finally:
            self.finished_signal.emit(self.download_count, self.skip_count, self.cancellation_event.is_set(), [])

    def _check_pause_cancel(self):
        if self.cancellation_event.is_set(): return True
        while self.pause_event.is_set():
            time.sleep(0.5)
            if self.cancellation_event.is_set(): return True
        return False

    def _process_single_post(self, url):
        self.progress_signal.emit(f" Fetching deviation info...")
        uuid = self.client.get_deviation_uuid(url)
        if not uuid:
            self.progress_signal.emit("❌ Could not find Deviation UUID.")
            self.skip_count += 1
            return

        meta = self.client._api_call(f"/deviation/{uuid}")
        content = self.client.get_deviation_content(uuid)
        if not content:
            self.progress_signal.emit("❌ Could not retrieve download URL.")
            self.skip_count += 1
            return

        self._download_file(content['src'], meta)

    def _process_gallery(self, username):
        self.progress_signal.emit(f" Fetching gallery for user: {username}...")
        offset = 0
        has_more = True

        base_folder = os.path.join(self.output_dir, clean_folder_name(username))
        if not os.path.exists(base_folder):
            os.makedirs(base_folder, exist_ok=True)

        with ThreadPoolExecutor(max_workers=self.max_threads) as executor:
            while has_more:
                if self._check_pause_cancel(): break

                data = self.client.get_gallery_folder(username, offset=offset)
                results = data.get('results', [])
                has_more = data.get('has_more', False)
                offset = data.get('next_offset')

                if not results: break

                futures = []
                for deviation in results:
                    if self._check_pause_cancel(): break
                    future = executor.submit(self._process_deviation_task, deviation, base_folder)
                    futures.append(future)

                wait(futures)

                time.sleep(1)

    def _process_deviation_task(self, deviation, base_folder):
        if self._check_pause_cancel(): return

        dev_id = deviation.get('deviationid')
        title = deviation.get('title', 'Unknown')

        try:
            content = self.client.get_deviation_content(dev_id)
            if content:
                self._download_file(content['src'], deviation, override_dir=base_folder)
            else:
                self.skip_count += 1
        except Exception as e:
            self.progress_signal.emit(f" ❌ Error processing {title}: {e}")
            self.skip_count += 1

    def _format_date(self, timestamp):
        if not timestamp: return "NoDate"
        try:
            fmt_setting = self.parent_app.manga_custom_date_format
            strftime_fmt = fmt_setting.replace("YYYY", "%Y").replace("MM", "%m").replace("DD", "%d")
            dt_obj = datetime.fromtimestamp(int(timestamp))
            return dt_obj.strftime(strftime_fmt)
        except Exception:
            return "InvalidDate"

    def _download_file(self, file_url, metadata, override_dir=None):
        if self._check_pause_cancel(): return

        parsed = requests.utils.urlparse(file_url)
        path_filename = os.path.basename(parsed.path)
        if '?' in path_filename: path_filename = path_filename.split('?')[0]
        _, ext = os.path.splitext(path_filename)

        title = metadata.get('title', 'Untitled')
        safe_title = clean_folder_name(title)
        if not safe_title: safe_title = "Untitled"

        final_filename = f"{safe_title}{ext}"

        if self.parent_app and self.parent_app.manga_mode_checkbox.isChecked():
            try:
                creator_name = metadata.get('author', {}).get('username', 'Unknown')
                published_ts = metadata.get('published_time')

                fmt_data = {
                    "creator_name": creator_name,
                    "title": title,
                    "published": self._format_date(published_ts),
                    "added": self._format_date(published_ts),
                    "edited": self._format_date(published_ts),
                    "id": metadata.get('deviationid', ''),
                    "service": "deviantart",
                    "name": safe_title
                }

                custom_fmt = self.parent_app.custom_manga_filename_format
                new_name = custom_fmt.format(**fmt_data)
                final_filename = f"{clean_folder_name(new_name)}{ext}"

            except Exception as e:
                self.progress_signal.emit(f" ⚠️ Renaming failed ({e}), using default.")

        save_dir = override_dir if override_dir else self.output_dir
        if not os.path.exists(save_dir):
            try:
                os.makedirs(save_dir, exist_ok=True)
            except OSError: pass

        filepath = os.path.join(save_dir, final_filename)

        if os.path.exists(filepath):
            return

        try:
            self.progress_signal.emit(f" ⬇️ Downloading: {final_filename}")

            with requests.get(file_url, stream=True, timeout=30) as r:
                r.raise_for_status()

                with open(filepath, 'wb') as f:
                    for chunk in r.iter_content(chunk_size=8192):
                        if self._check_pause_cancel():
                            f.close()
                            os.remove(filepath)
                            return
                        if chunk:
                            f.write(chunk)

            self.download_count += 1

        except Exception as e:
            self.progress_signal.emit(f" ❌ Download failed: {e}")
            self.skip_count += 1
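
# Filename templating example for the manga-mode branch above (format string and
# timestamp are illustrative): with custom_manga_filename_format set to
# "{creator_name} - {published} - {title}" and manga_custom_date_format set to
# "YYYY-MM-DD", a deviation published at Unix time 1700000000 by "artist" titled
# "Sketch" would be saved as "artist - 2023-11-14 - Sketch.png" (date rendered in
# local time).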

189
src/ui/classes/discord_downloader_thread.py
Normal file
@@ -0,0 +1,189 @@
import os
import time
import datetime
import requests
from PyQt5.QtCore import QThread, pyqtSignal

# discord_pdf_generator lives in the dialogs folder, sibling to this classes folder
from ..dialogs.discord_pdf_generator import create_pdf_from_discord_messages

# This constant is needed for the thread to function independently
_ff_ver = (datetime.date.today().toordinal() - 735506) // 28
USERAGENT_FIREFOX = (f"Mozilla/5.0 (Windows NT 10.0; Win64; x64; "
                     f"rv:{_ff_ver}.0) Gecko/20100101 Firefox/{_ff_ver}.0")

class DiscordDownloadThread(QThread):
    """A dedicated QThread for handling all official Discord downloads."""
    progress_signal = pyqtSignal(str)
    progress_label_signal = pyqtSignal(str)
    finished_signal = pyqtSignal(int, int, bool, list)

    def __init__(self, mode, session, token, output_dir, server_id, channel_id, url, app_base_dir, limit=None, parent=None):
        super().__init__(parent)
        self.mode = mode
        self.session = session
        self.token = token
        self.output_dir = output_dir
        self.server_id = server_id
        self.channel_id = channel_id
        self.api_url = url
        self.message_limit = limit
        self.app_base_dir = app_base_dir  # Path to the app's base directory

        self.is_cancelled = False
        self.is_paused = False

    def run(self):
        if self.mode == 'pdf':
            self._run_pdf_creation()
        else:
            self._run_file_download()

    def cancel(self):
        self.progress_signal.emit(" Cancellation signal received by Discord thread.")
        self.is_cancelled = True

    def pause(self):
        self.progress_signal.emit(" Pausing Discord download...")
        self.is_paused = True

    def resume(self):
        self.progress_signal.emit(" Resuming Discord download...")
        self.is_paused = False

    def _check_events(self):
        if self.is_cancelled:
            return True
        while self.is_paused:
            time.sleep(0.5)
            if self.is_cancelled:
                return True
        return False

    def _fetch_all_messages(self):
        all_messages = []
        last_message_id = None
        headers = {'Authorization': self.token, 'User-Agent': USERAGENT_FIREFOX}

        while True:
            if self._check_events(): break

            endpoint = f"/channels/{self.channel_id}/messages?limit=100"
            if last_message_id:
                endpoint += f"&before={last_message_id}"

            try:
                resp = self.session.get(f"https://discord.com/api/v10{endpoint}", headers=headers, timeout=30)
                resp.raise_for_status()
                message_batch = resp.json()
            except Exception as e:
                self.progress_signal.emit(f" ❌ Error fetching message batch: {e}")
                break

            if not message_batch:
                break

            all_messages.extend(message_batch)

            if self.message_limit and len(all_messages) >= self.message_limit:
                self.progress_signal.emit(f" Reached message limit of {self.message_limit}. Halting fetch.")
                all_messages = all_messages[:self.message_limit]
                break

            last_message_id = message_batch[-1]['id']
            self.progress_label_signal.emit(f"Fetched {len(all_messages)} messages...")
            time.sleep(1)  # API rate limiting

        return all_messages

    def _run_pdf_creation(self):
        self.progress_signal.emit("=" * 40)
        self.progress_signal.emit(f"🚀 Starting Discord PDF export for: {self.api_url}")
        self.progress_label_signal.emit("Fetching messages...")

        all_messages = self._fetch_all_messages()

        if self.is_cancelled:
            self.finished_signal.emit(0, 0, True, [])
            return

        self.progress_label_signal.emit(f"Collected {len(all_messages)} total messages. Generating PDF...")
        all_messages.reverse()

        font_path = os.path.join(self.app_base_dir, 'data', 'dejavu-sans', 'DejaVuSans.ttf')
        output_filepath = os.path.join(self.output_dir, f"discord_{self.server_id}_{self.channel_id or 'server'}.pdf")

        success = create_pdf_from_discord_messages(
            all_messages, self.server_id, self.channel_id,
            output_filepath, font_path, logger=self.progress_signal.emit,
            cancellation_event=self, pause_event=self
        )

        if success:
            self.progress_label_signal.emit(f"✅ PDF export complete!")
        elif not self.is_cancelled:
            self.progress_label_signal.emit(f"❌ PDF export failed. Check log for details.")

        self.finished_signal.emit(0, len(all_messages), self.is_cancelled, [])

    def _run_file_download(self):
        download_count = 0
        skip_count = 0
        try:
            self.progress_signal.emit("=" * 40)
            self.progress_signal.emit(f"🚀 Starting Discord download for channel: {self.channel_id}")
            self.progress_label_signal.emit("Fetching messages...")
            all_messages = self._fetch_all_messages()

            if self.is_cancelled:
                self.finished_signal.emit(0, 0, True, [])
                return

            self.progress_label_signal.emit(f"Collected {len(all_messages)} messages. Starting downloads...")
            total_attachments = sum(len(m.get('attachments', [])) for m in all_messages)

            for message in reversed(all_messages):
                if self._check_events(): break
                for attachment in message.get('attachments', []):
                    if self._check_events(): break

                    file_url = attachment['url']
                    original_filename = attachment['filename']
                    filepath = os.path.join(self.output_dir, original_filename)
                    filename_to_use = original_filename

                    counter = 1
                    base_name, extension = os.path.splitext(original_filename)
                    while os.path.exists(filepath):
                        filename_to_use = f"{base_name} ({counter}){extension}"
                        filepath = os.path.join(self.output_dir, filename_to_use)
                        counter += 1
|
||||
if filename_to_use != original_filename:
|
||||
self.progress_signal.emit(f" -> Duplicate name '{original_filename}'. Saving as '{filename_to_use}'.")
|
||||
|
||||
try:
|
||||
self.progress_signal.emit(f" Downloading ({download_count+1}/{total_attachments}): '{filename_to_use}'...")
|
||||
response = requests.get(file_url, stream=True, timeout=60)
|
||||
response.raise_for_status()
|
||||
|
||||
download_cancelled_mid_file = False
|
||||
with open(filepath, 'wb') as f:
|
||||
for chunk in response.iter_content(chunk_size=8192):
|
||||
if self._check_events():
|
||||
download_cancelled_mid_file = True
|
||||
break
|
||||
f.write(chunk)
|
||||
|
||||
if download_cancelled_mid_file:
|
||||
self.progress_signal.emit(f" Download cancelled for '{filename_to_use}'. Deleting partial file.")
|
||||
if os.path.exists(filepath):
|
||||
os.remove(filepath)
|
||||
continue
|
||||
|
||||
download_count += 1
|
||||
except Exception as e:
|
||||
self.progress_signal.emit(f" ❌ Failed to download '{filename_to_use}': {e}")
|
||||
skip_count += 1
|
||||
finally:
|
||||
self.finished_signal.emit(download_count, skip_count, self.is_cancelled, [])
|
||||
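A minimal wiring sketch of how this thread is meant to be used from the GUI side; the slot names and the placeholder token/IDs below are illustrative, not part of this diff:

# Sketch only: the thread behaves like the app's other QThread downloaders.
import requests

def on_log(line):                                   # slot for progress_signal(str)
    print(line)

def on_finished(dl, skipped, cancelled, extras):    # slot for finished_signal
    print(f"done: {dl} downloaded, {skipped} skipped, cancelled={cancelled}")

thread = DiscordDownloadThread(
    mode='files',                                   # 'pdf' routes to _run_pdf_creation()
    session=requests.Session(),
    token='YOUR_DISCORD_TOKEN',                     # placeholder, never a real token
    output_dir='downloads',
    server_id='123', channel_id='456',              # placeholder IDs
    url='https://discord.com/channels/123/456',
    app_base_dir='.', limit=500,
)
thread.progress_signal.connect(on_log)
thread.finished_signal.connect(on_finished)
thread.start()                                      # run() executes off the GUI thread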
src/ui/classes/downloader_factory.py (Normal file, 194 lines)
@@ -0,0 +1,194 @@
import re
import requests
from urllib.parse import urlparse

# Utility Imports
from ...utils.network_utils import prepare_cookies_for_request
from ...utils.file_utils import clean_folder_name

# Downloader Thread Imports (Alphabetical Order Recommended)
from .allcomic_downloader_thread import AllcomicDownloadThread
from .booru_downloader_thread import BooruDownloadThread
from .bunkr_downloader_thread import BunkrDownloadThread
from .discord_downloader_thread import DiscordDownloadThread  # Official Discord
from .drive_downloader_thread import DriveDownloadThread
from .erome_downloader_thread import EromeDownloadThread
from .external_link_downloader_thread import ExternalLinkDownloadThread
from .fap_nation_downloader_thread import FapNationDownloadThread
from .hentai2read_downloader_thread import Hentai2readDownloadThread
from .kemono_discord_downloader_thread import KemonoDiscordDownloadThread
from .mangadex_downloader_thread import MangaDexDownloadThread
from .nhentai_downloader_thread import NhentaiDownloadThread
from .pixeldrain_downloader_thread import PixeldrainDownloadThread
from .rule34video_downloader_thread import Rule34VideoDownloadThread
from .saint2_downloader_thread import Saint2DownloadThread
from .simp_city_downloader_thread import SimpCityDownloadThread
from .toonily_downloader_thread import ToonilyDownloadThread
from .deviantart_downloader_thread import DeviantArtDownloadThread


def create_downloader_thread(main_app, api_url, service, id1, id2, effective_output_dir_for_run):
    """
    Factory function to create and configure the correct QThread for a given URL.
    Returns a configured QThread instance, a specific error string ("COOKIE_ERROR", "FETCH_ERROR"),
    or None if no special handler is found (indicating fallback to generic BackendDownloadThread).
    """

    # Handler for Booru sites (Danbooru, Gelbooru)
    if service in ['danbooru', 'gelbooru']:
        api_key = main_app.api_key_input.text().strip()
        user_id = main_app.user_id_input.text().strip()
        return BooruDownloadThread(
            url=api_url, output_dir=effective_output_dir_for_run,
            api_key=api_key, user_id=user_id, parent=main_app
        )

    # Handler for cloud storage sites (Mega, GDrive, Dropbox, GoFile)
    platform = None
    if 'mega.nz' in api_url or 'mega.io' in api_url: platform = 'mega'
    elif 'drive.google.com' in api_url: platform = 'gdrive'
    elif 'dropbox.com' in api_url: platform = 'dropbox'
    elif 'gofile.io' in api_url: platform = 'gofile'
    if platform:
        use_post_subfolder = main_app.use_subfolder_per_post_checkbox.isChecked()
        return DriveDownloadThread(
            api_url, effective_output_dir_for_run, platform, use_post_subfolder,
            main_app.cancellation_event, main_app.pause_event, main_app.log_signal.emit,
            parent=main_app  # Pass parent for consistency
        )

    # Handler for Erome
    if 'erome.com' in api_url:
        return EromeDownloadThread(api_url, effective_output_dir_for_run, main_app)

    # Handler for MangaDex
    if 'mangadex.org' in api_url:
        return MangaDexDownloadThread(api_url, effective_output_dir_for_run, main_app)

    # Handler for Saint2
    is_saint2_url = service == 'saint2' or 'saint2.su' in api_url or 'saint2.pk' in api_url  # Add more domains if needed
    if is_saint2_url and api_url.strip().lower() != 'saint2.su':  # Exclude batch mode trigger if using URL input
        return Saint2DownloadThread(api_url, effective_output_dir_for_run, main_app)

    # Handler for SimpCity
    if service == 'simpcity':
        cookies = prepare_cookies_for_request(
            use_cookie_flag=True,  # SimpCity requires cookies
            cookie_text_input=main_app.simpcity_cookie_text_input.text(),  # Use dedicated input
            selected_cookie_file_path=main_app.selected_cookie_filepath,  # Use shared selection
            app_base_dir=main_app.app_base_dir,
            logger_func=main_app.log_signal.emit,
            target_domain='simpcity.cr'  # Specific domain
        )
        if not cookies:
            main_app.log_signal.emit("❌ SimpCity requires valid cookies. Please provide them.")
            return "COOKIE_ERROR"  # Sentinel value for cookie failure
        return SimpCityDownloadThread(api_url, id2, effective_output_dir_for_run, cookies, main_app)

    # Handler for Rule34Video
    if service == 'rule34video':
        main_app.log_signal.emit("ℹ️ Rule34Video.com URL detected. Starting dedicated downloader.")
        return Rule34VideoDownloadThread(api_url, effective_output_dir_for_run, main_app)  # id1 (video_id) is used inside the thread

    # HANDLER FOR KEMONO DISCORD (Place BEFORE official Discord)
    elif service == 'discord' and any(domain in api_url for domain in ['kemono.cr', 'kemono.su', 'kemono.party']):
        main_app.log_signal.emit("ℹ️ Kemono Discord URL detected. Starting dedicated downloader.")
        cookies = prepare_cookies_for_request(
            use_cookie_flag=main_app.use_cookie_checkbox.isChecked(),  # Respect UI setting
            cookie_text_input=main_app.cookie_text_input.text(),
            selected_cookie_file_path=main_app.selected_cookie_filepath,
            app_base_dir=main_app.app_base_dir,
            logger_func=main_app.log_signal.emit,
            target_domain='kemono.cr'  # Primary Kemono domain, adjust if needed
        )
        # KemonoDiscordDownloadThread expects parent for events
        return KemonoDiscordDownloadThread(
            server_id=id1,
            channel_id=id2,
            output_dir=effective_output_dir_for_run,
            cookies_dict=cookies,
            parent=main_app
        )

    # Handler for official Discord URLs
    elif service == 'discord' and 'discord.com' in api_url:
        main_app.log_signal.emit("ℹ️ Official Discord URL detected. Starting dedicated downloader.")
        token = main_app.remove_from_filename_input.text().strip()  # Token is in the "Remove Words" field for Discord
        if not token:
            main_app.log_signal.emit("❌ Official Discord requires an Authorization Token in the 'Remove Words' field.")
            return None  # Or a specific error sentinel

        limit_text = main_app.discord_message_limit_input.text().strip()
        message_limit = int(limit_text) if limit_text.isdigit() else None
        mode = main_app.discord_download_scope  # Should be 'pdf' or 'files'

        return DiscordDownloadThread(
            mode=mode,
            session=requests.Session(),  # Create a session for this thread
            token=token,
            output_dir=effective_output_dir_for_run,
            server_id=id1,
            channel_id=id2,
            url=api_url,
            app_base_dir=main_app.app_base_dir,
            limit=message_limit,
            parent=main_app  # Pass main_app for events/signals
        )

    # Check specific domains or rely on service name if extract_post_info provides it
    if service == 'allcomic' or 'allcomic.com' in api_url or 'allporncomic.com' in api_url:
        return AllcomicDownloadThread(api_url, effective_output_dir_for_run, main_app)

    # Handler for Hentai2Read
    if service == 'hentai2read' or 'hentai2read.com' in api_url:
        return Hentai2readDownloadThread(api_url, effective_output_dir_for_run, main_app)

    # Handler for Fap-Nation
    if service == 'fap-nation' or 'fap-nation.com' in api_url or 'fap-nation.org' in api_url:
        use_post_subfolder = main_app.use_subfolder_per_post_checkbox.isChecked()
        # Ensure signals are passed correctly if needed by the thread
        return FapNationDownloadThread(
            api_url, effective_output_dir_for_run, use_post_subfolder,
            main_app.pause_event, main_app.cancellation_event, main_app.actual_gui_signals, main_app
        )

    # Handler for Pixeldrain
    if service == 'pixeldrain' or 'pixeldrain.com' in api_url:
        return PixeldrainDownloadThread(api_url, effective_output_dir_for_run, main_app)  # URL contains the ID

    # Handler for nHentai
    if service == 'nhentai':
        from ...core.nhentai_client import fetch_nhentai_gallery
        main_app.log_signal.emit(f"ℹ️ nHentai gallery ID {id1} detected. Fetching gallery data...")
        gallery_data = fetch_nhentai_gallery(id1, main_app.log_signal.emit)
        if not gallery_data:
            main_app.log_signal.emit(f"❌ Failed to fetch nHentai gallery data for ID {id1}.")
            return "FETCH_ERROR"  # Sentinel value for fetch failure
        return NhentaiDownloadThread(gallery_data, effective_output_dir_for_run, main_app)

    # Handler for Toonily
    if service == 'toonily' or 'toonily.com' in api_url:
        return ToonilyDownloadThread(api_url, effective_output_dir_for_run, main_app)

    # Handler for Bunkr
    if service == 'bunkr':
        # id1 contains the full URL or album ID from extract_post_info
        return BunkrDownloadThread(id1, effective_output_dir_for_run, main_app)

    # Handler for DeviantArt
    if service == 'deviantart':
        main_app.log_signal.emit(f"ℹ️ DeviantArt URL detected. Starting dedicated downloader.")
        return DeviantArtDownloadThread(
            url=api_url,
            output_dir=effective_output_dir_for_run,
            pause_event=main_app.pause_event,
            cancellation_event=main_app.cancellation_event,
            parent=main_app
        )
    # ----------------------
    # --- Fallback ---
    # If no specific handler matched based on service name or URL pattern, return None.
    # This signals main_window.py to use the generic BackendDownloadThread/PostProcessorWorker
    # which uses the standard Kemono/Coomer post API.
    main_app.log_signal.emit(f"ℹ️ No specialized downloader found for service '{service}' and URL '{api_url[:50]}...'. Using generic downloader.")
    return None
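A caller-side sketch of how the factory's three return shapes are meant to be consumed; the dispatch shown here is illustrative (the real handling lives in main_window.py, which is not part of this hunk), and `start_generic_download` is a hypothetical stand-in for the generic fallback:

result = create_downloader_thread(main_app, api_url, service, id1, id2, out_dir)
if result in ("COOKIE_ERROR", "FETCH_ERROR"):
    pass                        # error already logged via main_app.log_signal; abort this run
elif result is None:
    start_generic_download()    # hypothetical fallback to the generic BackendDownloadThread
else:
    result.start()              # a fully configured QThread subclass, ready to run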
src/ui/classes/drive_downloader_thread.py (Normal file, 77 lines)
@@ -0,0 +1,77 @@
from PyQt5.QtCore import QThread, pyqtSignal

from ...services.drive_downloader import (
    download_dropbox_file,
    download_gdrive_file,
    download_gofile_folder,
    download_mega_file as drive_download_mega_file,
)


class DriveDownloadThread(QThread):
    """A dedicated QThread for handling direct Mega, GDrive, and Dropbox links."""
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool, list)
    overall_progress_signal = pyqtSignal(int, int)

    def __init__(self, url, output_dir, platform, use_post_subfolder, cancellation_event, pause_event, logger_func, parent=None):
        super().__init__(parent)
        self.drive_url = url
        self.output_dir = output_dir
        self.platform = platform
        self.use_post_subfolder = use_post_subfolder
        self.is_cancelled = False
        self.cancellation_event = cancellation_event
        self.pause_event = pause_event
        self.logger_func = logger_func

    def run(self):
        self.logger_func("=" * 40)
        self.logger_func(f"🚀 Starting direct {self.platform.capitalize()} Download for: {self.drive_url}")

        try:
            if self.platform == 'mega':
                drive_download_mega_file(
                    self.drive_url, self.output_dir,
                    logger_func=self.logger_func,
                    progress_callback_func=self.file_progress_signal.emit,
                    overall_progress_callback=self.overall_progress_signal.emit,
                    cancellation_event=self.cancellation_event,
                    pause_event=self.pause_event
                )
            elif self.platform == 'gdrive':
                download_gdrive_file(
                    self.drive_url, self.output_dir,
                    logger_func=self.logger_func,
                    progress_callback_func=self.file_progress_signal.emit,
                    overall_progress_callback=self.overall_progress_signal.emit,
                    use_post_subfolder=self.use_post_subfolder,
                    post_title="Google Drive Download"
                )
            elif self.platform == 'dropbox':
                download_dropbox_file(
                    self.drive_url, self.output_dir,
                    logger_func=self.logger_func,
                    progress_callback_func=self.file_progress_signal.emit,
                    use_post_subfolder=self.use_post_subfolder,
                    post_title="Dropbox Download"
                )
            elif self.platform == 'gofile':
                download_gofile_folder(
                    self.drive_url, self.output_dir,
                    logger_func=self.logger_func,
                    progress_callback_func=self.file_progress_signal.emit,
                    overall_progress_callback=self.overall_progress_signal.emit
                )

            self.finished_signal.emit(1, 0, self.is_cancelled, [])

        except Exception as e:
            self.logger_func(f"❌ An unexpected error occurred in DriveDownloadThread: {e}")
            self.finished_signal.emit(0, 1, self.is_cancelled, [])

    def cancel(self):
        self.is_cancelled = True
        if self.cancellation_event:
            self.cancellation_event.set()
        self.logger_func(f" Cancellation signal received by {self.platform.capitalize()} thread.")
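Note that finished_signal reports the whole run as one logical job, (1, 0, ...) on success and (0, 1, ...) on any exception, rather than per-file counts, even for multi-file GoFile folders. A minimal connection sketch (the slot names and threading events here are illustrative, not part of this diff):

import threading
thread = DriveDownloadThread(url, out_dir, 'gdrive', False,
                             threading.Event(), threading.Event(), print)
thread.overall_progress_signal.connect(update_overall_bar)  # (total, current); slot is hypothetical
thread.file_progress_signal.connect(update_file_bar)        # (filename, progress payload)
thread.finished_signal.connect(on_drive_finished)           # (dl, skip, cancelled, [])
thread.start()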
src/ui/classes/erome_downloader_thread.py (Normal file, 106 lines)
@@ -0,0 +1,106 @@
import os
import time
import requests
import cloudscraper
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.erome_client import fetch_erome_data

class EromeDownloadThread(QThread):
    """A dedicated QThread for handling erome.com downloads."""
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool)  # dl_count, skip_count, cancelled

    def __init__(self, url, output_dir, parent=None):
        super().__init__(parent)
        self.erome_url = url
        self.output_dir = output_dir
        self.is_cancelled = False

    def run(self):
        download_count = 0
        skip_count = 0
        self.progress_signal.emit("=" * 40)
        self.progress_signal.emit(f"🚀 Starting Erome.com Download for: {self.erome_url}")

        album_name, files_to_download = fetch_erome_data(self.erome_url, self.progress_signal.emit)

        if not files_to_download:
            self.progress_signal.emit("❌ Failed to extract file information from Erome. Aborting.")
            self.finished_signal.emit(0, 0, self.is_cancelled)
            return

        album_path = os.path.join(self.output_dir, album_name)
        try:
            os.makedirs(album_path, exist_ok=True)
            self.progress_signal.emit(f" Saving to folder: '{album_path}'")
        except OSError as e:
            self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
            self.finished_signal.emit(0, len(files_to_download), self.is_cancelled)
            return

        total_files = len(files_to_download)
        session = cloudscraper.create_scraper()

        for i, file_data in enumerate(files_to_download):
            if self.is_cancelled:
                self.progress_signal.emit(" Download cancelled by user.")
                skip_count = total_files - download_count
                break

            filename = file_data.get('filename', f'untitled_{i+1}.mp4')
            file_url = file_data.get('url')
            headers = file_data.get('headers')
            filepath = os.path.join(album_path, filename)

            if os.path.exists(filepath):
                self.progress_signal.emit(f" -> Skip ({i+1}/{total_files}): '{filename}' already exists.")
                skip_count += 1
                continue

            self.progress_signal.emit(f" Downloading ({i+1}/{total_files}): '{filename}'...")

            try:
                response = session.get(file_url, stream=True, headers=headers, timeout=60)
                response.raise_for_status()

                total_size = int(response.headers.get('content-length', 0))
                downloaded_size = 0
                last_update_time = time.time()

                with open(filepath, 'wb') as f:
                    for chunk in response.iter_content(chunk_size=8192):
                        if self.is_cancelled:
                            break
                        if chunk:
                            f.write(chunk)
                            downloaded_size += len(chunk)
                            current_time = time.time()
                            if total_size > 0 and (current_time - last_update_time) > 0.5:
                                self.file_progress_signal.emit(filename, (downloaded_size, total_size))
                                last_update_time = current_time

                if self.is_cancelled:
                    if os.path.exists(filepath): os.remove(filepath)
                    continue

                if total_size > 0:
                    self.file_progress_signal.emit(filename, (total_size, total_size))

                download_count += 1
            except requests.exceptions.RequestException as e:
                self.progress_signal.emit(f" ❌ Failed to download '{filename}'. Error: {e}")
                if os.path.exists(filepath): os.remove(filepath)
                skip_count += 1
            except Exception as e:
                self.progress_signal.emit(f" ❌ An unexpected error occurred with '{filename}': {e}")
                if os.path.exists(filepath): os.remove(filepath)
                skip_count += 1

        self.file_progress_signal.emit("", None)
        self.finished_signal.emit(download_count, skip_count, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
        self.progress_signal.emit(" Cancellation signal received by Erome thread.")
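From the way run() consumes it, fetch_erome_data is expected to return an (album_name, files) pair where each entry carries 'filename', 'url', and optional per-file 'headers'. A sketch of that assumed shape (the concrete values are illustrative only):

album_name, files_to_download = (
    "my_album",
    [{'filename': 'clip.mp4',
      'url': 'https://v.erome.com/videos/clip.mp4',        # hypothetical URL
      'headers': {'Referer': 'https://www.erome.com/'}}],  # optional per-file headers
)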
src/ui/classes/external_link_downloader_thread.py (Normal file, 86 lines)
@@ -0,0 +1,86 @@
from PyQt5.QtCore import QThread, pyqtSignal

from ...services.drive_downloader import (
    download_dropbox_file,
    download_gdrive_file,
    download_mega_file as drive_download_mega_file,
)


class ExternalLinkDownloadThread(QThread):
    """A QThread to handle downloading multiple external links sequentially."""
    progress_signal = pyqtSignal(str)
    file_complete_signal = pyqtSignal(str, bool)
    finished_signal = pyqtSignal()
    overall_progress_signal = pyqtSignal(int, int)
    file_progress_signal = pyqtSignal(str, object)

    def __init__(self, tasks_to_download, download_base_path, parent_logger_func, parent=None, use_post_subfolder=False):
        super().__init__(parent)
        self.tasks = tasks_to_download
        self.download_base_path = download_base_path
        self.parent_logger_func = parent_logger_func
        self.is_cancelled = False
        self.use_post_subfolder = use_post_subfolder

    def run(self):
        total_tasks = len(self.tasks)
        self.progress_signal.emit(f"ℹ️ Starting external link download thread for {total_tasks} link(s).")
        self.overall_progress_signal.emit(total_tasks, 0)

        for i, task_info in enumerate(self.tasks):
            if self.is_cancelled:
                self.progress_signal.emit("External link download cancelled by user.")
                break

            self.overall_progress_signal.emit(total_tasks, i + 1)

            platform = task_info.get('platform', 'unknown').lower()
            full_url = task_info['url']
            post_title = task_info['title']

            self.progress_signal.emit(f"Download ({i + 1}/{total_tasks}): Starting '{post_title}' ({platform.upper()}) from {full_url}")

            try:
                if platform == 'mega':
                    drive_download_mega_file(
                        full_url,
                        self.download_base_path,
                        logger_func=self.parent_logger_func,
                        progress_callback_func=self.file_progress_signal.emit,
                        overall_progress_callback=self.overall_progress_signal.emit
                    )
                elif platform == 'google drive':
                    download_gdrive_file(
                        full_url,
                        self.download_base_path,
                        logger_func=self.parent_logger_func,
                        progress_callback_func=self.file_progress_signal.emit,
                        overall_progress_callback=self.overall_progress_signal.emit,
                        use_post_subfolder=self.use_post_subfolder,
                        post_title=post_title
                    )
                elif platform == 'dropbox':
                    download_dropbox_file(
                        full_url,
                        self.download_base_path,
                        logger_func=self.parent_logger_func,
                        progress_callback_func=self.file_progress_signal.emit,
                        use_post_subfolder=self.use_post_subfolder,
                        post_title=post_title
                    )
                else:
                    self.progress_signal.emit(f"⚠️ Unsupported platform '{platform}' for link: {full_url}")
                    self.file_complete_signal.emit(full_url, False)
                    continue
                self.file_complete_signal.emit(full_url, True)
            except Exception as e:
                self.progress_signal.emit(f"❌ Error downloading ({platform.upper()}) link '{full_url}': {e}")
                self.file_complete_signal.emit(full_url, False)

        self.finished_signal.emit()

    def cancel(self):
        """Sets the cancellation flag to stop the thread gracefully."""
        self.progress_signal.emit(" [External Links] Cancellation signal received by thread.")
        self.is_cancelled = True
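Each entry in tasks_to_download is a plain dict; from the keys read in run(), one task looks like the following (the concrete values are illustrative):

task = {
    'platform': 'Google Drive',  # lowercased before dispatch: 'mega' / 'google drive' / 'dropbox'
    'url': 'https://drive.google.com/file/d/FILE_ID/view',
    'title': 'Post title, used for the optional per-post subfolder',
}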
src/ui/classes/fap_nation_downloader_thread.py (Normal file, 162 lines)
@@ -0,0 +1,162 @@
import os
import sys
import re
import threading
import time
from PyQt5.QtCore import QThread, pyqtSignal, QProcess
import cloudscraper

from ...core.fap_nation_client import fetch_fap_nation_data
from ...services.multipart_downloader import download_file_in_parts

class FapNationDownloadThread(QThread):
    """
    A dedicated QThread for Fap-Nation that uses a hybrid approach, choosing
    between yt-dlp for HLS streams and a multipart downloader for direct links.
    """
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool)
    overall_progress_signal = pyqtSignal(int, int)

    def __init__(self, url, output_dir, use_post_subfolder, pause_event, cancellation_event, gui_signals, parent=None):
        super().__init__(parent)
        self.album_url = url
        self.output_dir = output_dir
        self.use_post_subfolder = use_post_subfolder
        self.is_cancelled = False
        self.process = None
        self.current_filename = "Unknown File"
        self.album_name = "fap-nation_album"
        self.pause_event = pause_event
        self.cancellation_event = cancellation_event
        self.gui_signals = gui_signals
        self._is_finished = False

        self.process = QProcess(self)
        self.process.readyReadStandardOutput.connect(self.handle_ytdlp_output)

    def run(self):
        self.progress_signal.emit("=" * 40)
        self.progress_signal.emit(f"🚀 Starting Fap-Nation Download for: {self.album_url}")

        self.album_name, files_to_download = fetch_fap_nation_data(self.album_url, self.progress_signal.emit)

        if self.is_cancelled or not files_to_download:
            self.progress_signal.emit("❌ Failed to extract file information. Aborting.")
            self.finished_signal.emit(0, 1, self.is_cancelled)
            return

        self.overall_progress_signal.emit(1, 0)

        save_path = self.output_dir
        if self.use_post_subfolder:
            save_path = os.path.join(self.output_dir, self.album_name)
            self.progress_signal.emit(f" Subfolder per Post is ON. Saving to: '{self.album_name}'")
            os.makedirs(save_path, exist_ok=True)

        file_data = files_to_download[0]
        self.current_filename = file_data.get('filename')
        download_url = file_data.get('url')
        link_type = file_data.get('type')
        filepath = os.path.join(save_path, self.current_filename)

        if os.path.exists(filepath):
            self.progress_signal.emit(f" -> Skip: '{self.current_filename}' already exists.")
            self.overall_progress_signal.emit(1, 1)
            self.finished_signal.emit(0, 1, self.is_cancelled)
            return

        if link_type == 'hls':
            self.download_with_ytdlp(filepath, download_url)
        elif link_type == 'direct':
            self.download_with_multipart(filepath, download_url)
        else:
            self.progress_signal.emit(f" ❌ Unknown link type '{link_type}'. Aborting.")
            self._on_ytdlp_finished(-1)

    def download_with_ytdlp(self, filepath, playlist_url):
        self.progress_signal.emit(f" Downloading (HLS Stream): '{self.current_filename}' using yt-dlp...")
        try:
            if getattr(sys, 'frozen', False):
                base_path = sys._MEIPASS
                ytdlp_path = os.path.join(base_path, "yt-dlp.exe")
            else:
                ytdlp_path = "yt-dlp.exe"

            if not os.path.exists(ytdlp_path):
                self.progress_signal.emit(f" ❌ ERROR: yt-dlp.exe not found at '{ytdlp_path}'.")
                self._on_ytdlp_finished(-1)
                return

            command = [ytdlp_path, '--no-warnings', '--progress', '--output', filepath, '--merge-output-format', 'mp4', playlist_url]

            self.process.start(command[0], command[1:])
            self.process.waitForFinished(-1)
            self._on_ytdlp_finished(self.process.exitCode())

        except Exception as e:
            self.progress_signal.emit(f" ❌ Failed to start yt-dlp: {e}")
            self._on_ytdlp_finished(-1)

    def download_with_multipart(self, filepath, direct_url):
        self.progress_signal.emit(f" Downloading (Direct Link): '{self.current_filename}' using multipart downloader...")
        try:
            session = cloudscraper.create_scraper()
            head_response = session.head(direct_url, allow_redirects=True, timeout=20)
            head_response.raise_for_status()
            total_size = int(head_response.headers.get('content-length', 0))

            success, _, _, _ = download_file_in_parts(
                file_url=direct_url, save_path=filepath, total_size=total_size, num_parts=5,
                headers=session.headers, api_original_filename=self.current_filename,
                emitter_for_multipart=self.gui_signals,
                cookies_for_chunk_session=session.cookies,
                cancellation_event=self.cancellation_event,
                skip_event=None, logger_func=self.progress_signal.emit, pause_event=self.pause_event
            )
            self._on_ytdlp_finished(0 if success else 1)
        except Exception as e:
            self.progress_signal.emit(f" ❌ Multipart download failed: {e}")
            self._on_ytdlp_finished(1)

    def handle_ytdlp_output(self):
        if not self.process:
            return

        output = self.process.readAllStandardOutput().data().decode('utf-8', errors='ignore')
        for line in reversed(output.strip().splitlines()):
            line = line.strip()
            progress_match = re.search(r'\[download\]\s+([\d.]+)%\s+of\s+~?\s*([\d.]+\w+B)', line)
            if progress_match:
                percent, size = progress_match.groups()
                self.file_progress_signal.emit("yt-dlp:", f"{percent}% of {size}")
                break

    def _on_ytdlp_finished(self, exit_code):
        if self._is_finished:
            return
        self._is_finished = True

        download_count, skip_count = 0, 0

        if self.is_cancelled:
            self.progress_signal.emit(f" Download of '{self.current_filename}' was cancelled.")
            skip_count = 1
        elif exit_code == 0:
            self.progress_signal.emit(f" ✅ Download process finished successfully for '{self.current_filename}'.")
            download_count = 1
        else:
            self.progress_signal.emit(f" ❌ Download process exited with an error (Code: {exit_code}) for '{self.current_filename}'.")
            skip_count = 1

        self.overall_progress_signal.emit(1, 1)
        self.process = None
        self.finished_signal.emit(download_count, skip_count, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
        self.cancellation_event.set()
        if self.process and self.process.state() == QProcess.Running:
            self.progress_signal.emit(" Cancellation signal received, terminating yt-dlp process.")
            self.process.kill()
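The progress regex only needs the most recent "[download]" line, which is why the splitlines() output is scanned in reverse. A quick check of what it captures (the sample line is illustrative of typical yt-dlp --progress output, not a logged value from this project):

import re
line = "[download]  42.7% of ~ 120.50MiB at 3.21MiB/s ETA 00:25"
m = re.search(r'\[download\]\s+([\d.]+)%\s+of\s+~?\s*([\d.]+\w+B)', line)
print(m.groups())  # -> ('42.7', '120.50MiB')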
src/ui/classes/hentai2read_downloader_thread.py (Normal file, 51 lines)
@@ -0,0 +1,51 @@
import threading
import time
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.Hentai2read_client import run_hentai2read_download as h2r_run_download


class Hentai2readDownloadThread(QThread):
    """
    A dedicated QThread that calls the self-contained Hentai2Read client to
    perform scraping and downloading.
    """
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool)
    overall_progress_signal = pyqtSignal(int, int)

    def __init__(self, url, output_dir, parent=None):
        super().__init__(parent)
        self.start_url = url
        self.output_dir = output_dir
        self.is_cancelled = False
        self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()

    def _check_pause(self):
        """Helper to handle pausing and cancellation events."""
        if self.is_cancelled: return True
        if self.pause_event and self.pause_event.is_set():
            self.progress_signal.emit(" Download paused...")
            while self.pause_event.is_set():
                if self.is_cancelled: return True
                time.sleep(0.5)
            self.progress_signal.emit(" Download resumed.")
        return self.is_cancelled

    def run(self):
        """
        Executes the main download logic by calling the dedicated client function.
        """
        downloaded, skipped = h2r_run_download(
            start_url=self.start_url,
            output_dir=self.output_dir,
            progress_callback=self.progress_signal.emit,
            overall_progress_callback=self.overall_progress_signal.emit,
            check_pause_func=self._check_pause
        )

        self.finished_signal.emit(downloaded, skipped, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
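The client receives _check_pause as check_pause_func and, judging from its implementation here, is expected to poll it between units of work and treat a True return as "stop now". A sketch of that assumed contract (the loop body and helper names are illustrative, not the real Hentai2read client):

def run_download_loop(pages, check_pause_func):
    for page in pages:
        if check_pause_func():   # blocks internally while paused; True means cancelled
            break
        download_page(page)      # hypothetical per-page worker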
src/ui/classes/kemono_discord_downloader_thread.py (Normal file, 549 lines)
@@ -0,0 +1,549 @@
# kemono_discord_downloader_thread.py
import os
import time
import uuid
import threading
import cloudscraper
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
from PyQt5.QtCore import QThread, pyqtSignal

# --- Assuming these files are in the correct relative path ---
# Adjust imports if your project structure is different
try:
    from ...core.discord_client import fetch_server_channels, fetch_channel_messages
    from ...utils.file_utils import clean_filename
except ImportError as e:
    # Basic fallback logging if signals aren't ready
    print(f"ERROR: Failed to import required modules for Kemono Discord thread: {e}")
    # Re-raise to prevent the thread from being created incorrectly
    raise

# Custom exception for clean cancellation/pausing
class InterruptedError(Exception):
    """Custom exception for handling cancellations/pausing gracefully within download loops."""
    pass

class KemonoDiscordDownloadThread(QThread):
    """
    A dedicated QThread for downloading files from Kemono Discord server/channel pages,
    using the Kemono API via discord_client and multithreading for file downloads.
    Includes a single retry attempt after a 15-second delay for specific errors.
    """
    # --- Signals ---
    progress_signal = pyqtSignal(str)  # General log messages
    progress_label_signal = pyqtSignal(str)  # Update main progress label (e.g., "Fetching messages...")
    file_progress_signal = pyqtSignal(str, object)  # Update file progress bar (filename, (downloaded_bytes, total_bytes | None))
    permanent_file_failed_signal = pyqtSignal(list)  # To report failures to main window
    finished_signal = pyqtSignal(int, int, bool, list)  # (downloaded_count, skipped_count, was_cancelled, [])

    def __init__(self, server_id, channel_id, output_dir, cookies_dict, parent):
        """
        Initializes the Kemono Discord downloader thread.

        Args:
            server_id (str): The Discord server ID from Kemono.
            channel_id (str | None): The specific Discord channel ID from Kemono, if provided.
            output_dir (str): The base directory to save downloaded files.
            cookies_dict (dict | None): Cookies to use for requests.
            parent (QWidget): The parent widget (main_app) to access events/settings.
        """
        super().__init__(parent)
        self.server_id = server_id
        self.target_channel_id = channel_id  # The specific channel from URL, if any
        self.output_dir = output_dir
        self.cookies_dict = cookies_dict
        self.parent_app = parent  # Access main app's events and settings

        # --- Shared Events & Internal State ---
        self.cancellation_event = getattr(parent, 'cancellation_event', threading.Event())
        self.pause_event = getattr(parent, 'pause_event', threading.Event())
        self._is_cancelled_internal = False  # Internal flag for quick breaking

        # --- Thread-Safe Counters ---
        self.download_count = 0
        self.skip_count = 0
        self.count_lock = threading.Lock()

        # --- List to Store Failure Details ---
        self.permanently_failed_details = []

        # --- Multithreading Configuration ---
        self.num_file_threads = 1  # Default
        try:
            use_mt = getattr(self.parent_app, 'use_multithreading_checkbox', None)
            thread_input = getattr(self.parent_app, 'thread_count_input', None)
            if use_mt and use_mt.isChecked() and thread_input:
                thread_count_ui = int(thread_input.text().strip())
                # Apply a reasonable cap specific to this downloader type (adjust as needed)
                self.num_file_threads = max(1, min(thread_count_ui, 20))  # Cap at 20 file threads
        except (ValueError, AttributeError, TypeError):
            try: self.progress_signal.emit("⚠️ Warning: Could not read thread count setting, defaulting to 1.")
            except: pass
            self.num_file_threads = 1  # Fallback on error getting setting

        # --- Network Client ---
        try:
            self.scraper = cloudscraper.create_scraper(browser={'browser': 'firefox', 'platform': 'windows', 'mobile': False})
        except Exception as e:
            try: self.progress_signal.emit(f"❌ ERROR: Failed to initialize cloudscraper: {e}")
            except: pass
            self.scraper = None

    # --- Control Methods (cancel, pause, resume) ---
    def cancel(self):
        self._is_cancelled_internal = True
        self.cancellation_event.set()
        try: self.progress_signal.emit(" Cancellation requested for Kemono Discord download.")
        except: pass

    def pause(self):
        if not self.pause_event.is_set():
            self.pause_event.set()
            try: self.progress_signal.emit(" Pausing Kemono Discord download...")
            except: pass

    def resume(self):
        if self.pause_event.is_set():
            self.pause_event.clear()
            try: self.progress_signal.emit(" Resuming Kemono Discord download...")
            except: pass

    # --- Helper: Check Cancellation/Pause ---
    def _check_events(self):
        if self._is_cancelled_internal or self.cancellation_event.is_set():
            if not self._is_cancelled_internal:
                self._is_cancelled_internal = True
                try: self.progress_signal.emit(" Cancellation detected by Kemono Discord thread check.")
                except: pass
            return True  # Cancelled

        was_paused = False
        while self.pause_event.is_set():
            if not was_paused:
                try: self.progress_signal.emit(" Kemono Discord operation paused...")
                except: pass
                was_paused = True
            if self.cancellation_event.is_set():
                self._is_cancelled_internal = True
                try: self.progress_signal.emit(" Cancellation detected while paused.")
                except: pass
                return True
            time.sleep(0.5)
        return False

    # --- REVISED Helper: Download Single File with ONE Retry ---
    def _download_single_kemono_file(self, file_info):
        """
        Downloads a single file, handles collisions after download,
        and automatically retries ONCE after 15s for specific network errors.

        Returns:
            tuple: (bool_success, dict_error_details_or_None)
        """
        # --- Constants for Retry Logic ---
        MAX_ATTEMPTS = 2  # 1 initial attempt + 1 retry
        RETRY_DELAY_SECONDS = 15

        # --- Extract info ---
        channel_dir = file_info['channel_dir']
        original_filename = file_info['original_filename']
        file_url = file_info['file_url']
        channel_id = file_info['channel_id']
        post_title = file_info.get('post_title', f"Message in channel {channel_id}")
        original_post_id_for_log = file_info.get('message_id', 'N/A')
        base_kemono_domain = "kemono.cr"

        if not self.scraper:
            try: self.progress_signal.emit(f" ❌ Cannot download '{original_filename}': Cloudscraper not initialized.")
            except: pass
            failure_details = { 'file_info': {'url': file_url, 'name': original_filename}, 'post_title': post_title, 'original_post_id_for_log': original_post_id_for_log, 'target_folder_path': channel_dir, 'error': 'Cloudscraper not initialized', 'service': 'discord', 'user_id': self.server_id }
            return False, failure_details

        if self._check_events(): return False, None  # Interrupted before start

        # --- Determine filenames ---
        cleaned_original_filename = clean_filename(original_filename)
        intended_final_filename = cleaned_original_filename
        unique_suffix = uuid.uuid4().hex[:8]
        temp_filename = f"{intended_final_filename}.{unique_suffix}.part"
        temp_filepath = os.path.join(channel_dir, temp_filename)

        # --- Download Attempt Loop ---
        download_successful = False
        last_exception = None
        should_retry = False  # Flag to indicate if the first attempt failed with a retryable error

        for attempt in range(1, MAX_ATTEMPTS + 1):
            response = None
            try:
                # --- Pre-attempt checks ---
                if self._check_events(): raise InterruptedError("Cancelled/Paused before attempt")
                if attempt == 2 and should_retry:  # Only delay *before* the retry
                    try: self.progress_signal.emit(f" ⏳ Retrying '{original_filename}' (Attempt {attempt}/{MAX_ATTEMPTS}) after {RETRY_DELAY_SECONDS}s...")
                    except: pass
                    for _ in range(RETRY_DELAY_SECONDS):
                        if self._check_events(): raise InterruptedError("Cancelled/Paused during retry delay")
                        time.sleep(1)
                # If it's attempt 2 but should_retry is False, it means the first error was non-retryable, so skip
                elif attempt == 2 and not should_retry:
                    break  # Exit loop, failure already determined

                # --- Log attempt ---
                log_prefix = f" ⬇️ Downloading:" if attempt == 1 else f" 🔄 Retrying:"
                try: self.progress_signal.emit(f"{log_prefix} '{original_filename}' (Attempt {attempt}/{MAX_ATTEMPTS})...")
                except: pass
                if attempt == 1:
                    try: self.file_progress_signal.emit(original_filename, (0, 0))
                    except: pass

                # --- Perform Download ---
                headers = { 'User-Agent': 'Mozilla/5.0 ...', 'Referer': f'https://{base_kemono_domain}/discord/channel/{channel_id}'}  # Shortened for brevity
                response = self.scraper.get(file_url, headers=headers, cookies=self.cookies_dict, stream=True, timeout=(15, 120))
                response.raise_for_status()

                total_size = int(response.headers.get('content-length', 0))
                downloaded_size = 0
                last_progress_emit_time = time.time()

                with open(temp_filepath, 'wb') as f:
                    for chunk in response.iter_content(chunk_size=1024*1024):
                        if self._check_events(): raise InterruptedError("Cancelled/Paused during chunk writing")
                        if chunk:
                            f.write(chunk)
                            downloaded_size += len(chunk)
                            current_time = time.time()
                            if total_size > 0 and (current_time - last_progress_emit_time > 0.5 or downloaded_size == total_size):
                                try: self.file_progress_signal.emit(original_filename, (downloaded_size, total_size))
                                except: pass
                                last_progress_emit_time = current_time
                            elif total_size == 0 and (current_time - last_progress_emit_time > 0.5):
                                try: self.file_progress_signal.emit(original_filename, (downloaded_size, 0))
                                except: pass
                                last_progress_emit_time = current_time
                response.close()

                # --- Verification ---
                if self._check_events(): raise InterruptedError("Cancelled/Paused after download completion")

                if total_size > 0 and downloaded_size != total_size:
                    try: self.progress_signal.emit(f" ⚠️ Size mismatch on attempt {attempt} for '{original_filename}'. Expected {total_size}, got {downloaded_size}.")
                    except: pass
                    last_exception = IOError(f"Size mismatch: Expected {total_size}, got {downloaded_size}")
                    if os.path.exists(temp_filepath):
                        try: os.remove(temp_filepath)
                        except OSError: pass
                    should_retry = (attempt == 1)  # Only retry if it was the first attempt
                    continue  # Try again if attempt 1, otherwise loop finishes
                else:
                    download_successful = True
                    break  # Success!

            # --- Error Handling within Loop ---
            except InterruptedError as e:
                last_exception = e
                should_retry = False  # Don't retry if interrupted
                break
            except (requests.exceptions.Timeout, requests.exceptions.ConnectionError, cloudscraper.exceptions.CloudflareException) as e:
                last_exception = e
                try: self.progress_signal.emit(f" ❌ Network/Cloudflare error on attempt {attempt} for '{original_filename}': {e}")
                except: pass
                should_retry = (attempt == 1)  # Retry only if first attempt
            except requests.exceptions.RequestException as e:
                status_code = getattr(e.response, 'status_code', None)
                if status_code and 500 <= status_code <= 599:  # Retry on 5xx
                    last_exception = e
                    try: self.progress_signal.emit(f" ❌ Server error ({status_code}) on attempt {attempt} for '{original_filename}'. Will retry...")
                    except: pass
                    should_retry = (attempt == 1)  # Retry only if first attempt
                else:  # Don't retry on 4xx or other request errors
                    last_exception = e
                    try: self.progress_signal.emit(f" ❌ Non-retryable HTTP error for '{original_filename}': {e}")
                    except: pass
                    should_retry = False
                    break
            except OSError as e:
                last_exception = e
                try: self.progress_signal.emit(f" ❌ OS error during download attempt {attempt} for '{original_filename}': {e}")
                except: pass
                should_retry = False
                break
            except Exception as e:
                last_exception = e
                try: self.progress_signal.emit(f" ❌ Unexpected error on attempt {attempt} for '{original_filename}': {e}")
                except: pass
                should_retry = False
                break
            finally:
                if response:
                    try: response.close()
                    except Exception: pass
        # --- End Download Attempt Loop ---

        try: self.file_progress_signal.emit(original_filename, None)  # Clear progress
        except: pass

        # --- Post-Download Processing ---
        if download_successful:
            # --- Rename Logic ---
            final_filename_to_use = intended_final_filename
            final_filepath_on_disk = os.path.join(channel_dir, final_filename_to_use)
            counter = 1
            base_name, extension = os.path.splitext(intended_final_filename)
            while os.path.exists(final_filepath_on_disk):
                final_filename_to_use = f"{base_name} ({counter}){extension}"
                final_filepath_on_disk = os.path.join(channel_dir, final_filename_to_use)
                counter += 1
            if final_filename_to_use != intended_final_filename:
                try: self.progress_signal.emit(f" -> Name conflict for '{intended_final_filename}'. Renaming to '{final_filename_to_use}'.")
                except: pass
            try:
                os.rename(temp_filepath, final_filepath_on_disk)
                try: self.progress_signal.emit(f" ✅ Saved: '{final_filename_to_use}'")
                except: pass
                return True, None  # SUCCESS
            except OSError as e:
                try: self.progress_signal.emit(f" ❌ OS error renaming temp file to '{final_filename_to_use}': {e}")
                except: pass
                if os.path.exists(temp_filepath):
                    try: os.remove(temp_filepath)
                    except OSError: pass
                # ---> RETURN FAILURE TUPLE (Rename Failed) <---
                failure_details = { 'file_info': {'url': file_url, 'name': original_filename}, 'post_title': post_title, 'original_post_id_for_log': original_post_id_for_log, 'target_folder_path': channel_dir, 'intended_filename': intended_final_filename, 'error': f"Rename failed: {e}", 'service': 'discord', 'user_id': self.server_id }
                return False, failure_details
        else:
            # Download failed or was interrupted
            if not isinstance(last_exception, InterruptedError):
                try: self.progress_signal.emit(f" ❌ FAILED to download '{original_filename}' after {MAX_ATTEMPTS} attempts. Last error: {last_exception}")
                except: pass
            if os.path.exists(temp_filepath):
                try: os.remove(temp_filepath)
                except OSError as e_rem:
                    try: self.progress_signal.emit(f" (Failed to remove temp file '{temp_filename}': {e_rem})")
                    except: pass
            # ---> RETURN FAILURE TUPLE (Download Failed/Interrupted) <---
            # Only generate details if it wasn't interrupted by user
            failure_details = None
            if not isinstance(last_exception, InterruptedError):
                failure_details = {
                    'file_info': {'url': file_url, 'name': original_filename},
                    'post_title': post_title, 'original_post_id_for_log': original_post_id_for_log,
                    'target_folder_path': channel_dir, 'intended_filename': intended_final_filename,
                    'error': f"Failed after {MAX_ATTEMPTS} attempts: {last_exception}",
                    'service': 'discord', 'user_id': self.server_id,
                    'forced_filename_override': intended_final_filename,
                    'file_index_in_post': file_info.get('file_index', 0),
                    'num_files_in_this_post': file_info.get('num_files', 1)
                }
            return False, failure_details  # Return None details if interrupted

    # --- Main Thread Execution ---
    def run(self):
        """Main execution logic: Fetches channels/messages and dispatches file downloads."""
        self.download_count = 0
        self.skip_count = 0
        self._is_cancelled_internal = False
        self.permanently_failed_details = []  # Reset failed list

        if not self.scraper:
            try: self.progress_signal.emit("❌ Aborting Kemono Discord download: Cloudscraper failed to initialize.")
            except: pass
            self.finished_signal.emit(0, 0, False, [])
            return

        try:
            # --- Log Start ---
            try:
                self.progress_signal.emit("=" * 40)
                self.progress_signal.emit(f"🚀 Starting Kemono Discord download for server: {self.server_id}")
                self.progress_signal.emit(f" Using {self.num_file_threads} thread(s) for file downloads.")
            except: pass

            # --- Channel Fetching ---
            channels_to_process = []
            # ... (logic to populate channels_to_process using fetch_server_channels or target_channel_id) ...
            if self.target_channel_id:
                channels_to_process.append({'id': self.target_channel_id, 'name': self.target_channel_id})
                try: self.progress_signal.emit(f" Targeting specific channel: {self.target_channel_id}")
                except: pass
            else:
                try: self.progress_label_signal.emit("Fetching server channels via Kemono API...")
                except: pass
                channels_data = fetch_server_channels(self.server_id, logger=self.progress_signal.emit, cookies_dict=self.cookies_dict)
                if self._check_events(): return
                if channels_data is not None:
                    channels_to_process = channels_data
                    try: self.progress_signal.emit(f" Found {len(channels_to_process)} channels.")
                    except: pass
                else:
                    try: self.progress_signal.emit(f" ❌ Failed to fetch channels for server {self.server_id} via Kemono API.")
                    except: pass
                    return

            # --- Process Each Channel ---
            for channel in channels_to_process:
                if self._check_events(): break

                channel_id = channel['id']
                channel_name = clean_filename(channel.get('name', channel_id))
                channel_dir = os.path.join(self.output_dir, channel_name)
                try:
                    os.makedirs(channel_dir, exist_ok=True)
                except OSError as e:
                    try: self.progress_signal.emit(f" ❌ Failed to create directory for channel '{channel_name}': {e}. Skipping channel.")
                    except: pass
                    continue

                try:
                    self.progress_signal.emit(f"\n--- Processing Channel: #{channel_name} ({channel_id}) ---")
                    self.progress_label_signal.emit(f"Fetching messages for #{channel_name}...")
                except: pass

                # --- Collect File Download Tasks ---
                file_tasks = []
                message_generator = fetch_channel_messages(
                    channel_id, logger=self.progress_signal.emit,
                    cancellation_event=self.cancellation_event, pause_event=self.pause_event,
                    cookies_dict=self.cookies_dict
                )

                try:
                    message_index = 0
                    for message_batch in message_generator:
                        if self._check_events(): break
                        for message in message_batch:
                            message_id = message.get('id', f'msg_{message_index}')
                            post_title_context = (message.get('content') or f"Message {message_id}")[:50] + "..."
                            attachments = message.get('attachments', [])
                            file_index_in_message = 0
                            num_files_in_message = len(attachments)

                            for attachment in attachments:
                                if self._check_events(): raise InterruptedError
                                file_path = attachment.get('path')
                                original_filename = attachment.get('name')
                                if file_path and original_filename:
                                    base_kemono_domain = "kemono.cr"
                                    if not file_path.startswith('/'): file_path = '/' + file_path
                                    file_url = f"https://{base_kemono_domain}/data{file_path}"
                                    file_tasks.append({
                                        'channel_dir': channel_dir, 'original_filename': original_filename,
                                        'file_url': file_url, 'channel_id': channel_id,
                                        'message_id': message_id, 'post_title': post_title_context,
                                        'file_index': file_index_in_message, 'num_files': num_files_in_message
                                    })
                                    file_index_in_message += 1
                            message_index += 1
                            if self._check_events(): raise InterruptedError
                        if self._check_events(): raise InterruptedError
                except InterruptedError:
                    try: self.progress_signal.emit(" Interrupted while collecting file tasks.")
                    except: pass
                    break  # Exit channel processing
                except Exception as e_msg:
                    try: self.progress_signal.emit(f" ❌ Error fetching messages for channel {channel_name}: {e_msg}")
                    except: pass
                    continue  # Continue to next channel

                if self._check_events(): break

                if not file_tasks:
                    try: self.progress_signal.emit(" No downloadable file attachments found in this channel's messages.")
                    except: pass
                    continue

                try:
                    self.progress_signal.emit(f" Found {len(file_tasks)} potential file attachments. Starting downloads...")
                    self.progress_label_signal.emit(f"Downloading {len(file_tasks)} files for #{channel_name}...")
                except: pass

                # --- Execute Downloads Concurrently ---
                files_processed_in_channel = 0
                with ThreadPoolExecutor(max_workers=self.num_file_threads, thread_name_prefix=f"KDC_{channel_id[:4]}_") as executor:
                    futures = {executor.submit(self._download_single_kemono_file, task): task for task in file_tasks}
                    try:
                        for future in as_completed(futures):
                            files_processed_in_channel += 1
                            task_info = futures[future]
                            try:
                                success, details = future.result()  # Unpack result
                                with self.count_lock:
                                    if success:
                                        self.download_count += 1
                                    else:
                                        self.skip_count += 1
                                        if details:  # Append details if the download permanently failed
                                            self.permanently_failed_details.append(details)
                            except Exception as e_future:
                                filename = task_info.get('original_filename', 'unknown file')
                                try: self.progress_signal.emit(f" ❌ System error processing download future for '{filename}': {e_future}")
                                except: pass
                                with self.count_lock:
                                    self.skip_count += 1
                                    # Append details on system failure
                                    failure_details = { 'file_info': {'url': task_info.get('file_url'), 'name': filename}, 'post_title': task_info.get('post_title', 'N/A'), 'original_post_id_for_log': task_info.get('message_id', 'N/A'), 'target_folder_path': task_info.get('channel_dir'), 'error': f"Future execution error: {e_future}", 'service': 'discord', 'user_id': self.server_id, 'forced_filename_override': clean_filename(filename), 'file_index_in_post': task_info.get('file_index', 0), 'num_files_in_this_post': task_info.get('num_files', 1) }
                                    self.permanently_failed_details.append(failure_details)

                            try: self.progress_label_signal.emit(f"#{channel_name}: {files_processed_in_channel}/{len(file_tasks)} files processed")
                            except: pass

                            if self._check_events():
                                try: self.progress_signal.emit(" Cancelling remaining file downloads for this channel...")
                                except: pass
                                executor.shutdown(wait=False, cancel_futures=True)
                                break  # Exit as_completed loop
                    except InterruptedError:
                        try: self.progress_signal.emit(" Download processing loop interrupted.")
                        except: pass
                        executor.shutdown(wait=False, cancel_futures=True)

                if self._check_events(): break  # Check between channels

            # --- End Channel Loop ---

        except Exception as e:
            # Catch unexpected errors in the main run logic
            try:
                self.progress_signal.emit(f"❌ Unexpected critical error in Kemono Discord thread run loop: {e}")
                import traceback
                self.progress_signal.emit(traceback.format_exc())
            except: pass  # Avoid errors if signals fail at the very end
        finally:
            # --- Final Cleanup and Signal ---
            try:
                try: self.progress_signal.emit("=" * 40)
                except: pass
                cancelled = self._is_cancelled_internal or self.cancellation_event.is_set()

                # --- EMIT FAILED FILES SIGNAL ---
                if self.permanently_failed_details:
                    try:
                        self.progress_signal.emit(f" Reporting {len(self.permanently_failed_details)} permanently failed files...")
                        self.permanent_file_failed_signal.emit(list(self.permanently_failed_details))  # Emit a copy
                    except Exception as e_emit_fail:
                        print(f"ERROR emitting permanent_file_failed_signal: {e_emit_fail}")

                # Log final status
                try:
                    if cancelled and not self._is_cancelled_internal:
                        self.progress_signal.emit(" Kemono Discord download cancelled externally.")
                    elif self._is_cancelled_internal:
|
||||
self.progress_signal.emit(" Kemono Discord download finished due to cancellation.")
|
||||
else:
|
||||
self.progress_signal.emit("✅ Kemono Discord download process finished.")
|
||||
except: pass
|
||||
|
||||
# Clear file progress
|
||||
try: self.file_progress_signal.emit("", None)
|
||||
except: pass
|
||||
|
||||
# Get final counts safely
|
||||
with self.count_lock:
|
||||
final_download_count = self.download_count
|
||||
final_skip_count = self.skip_count
|
||||
|
||||
# Emit finished signal
|
||||
self.finished_signal.emit(final_download_count, final_skip_count, cancelled, [])
|
||||
except Exception as e_final:
|
||||
# Log final signal emission error if possible
|
||||
print(f"ERROR in KemonoDiscordDownloadThread finally block: {e_final}")
|
||||
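Note on the pattern above: every counter mutation happens under count_lock, and the finished signal is emitted from the finally block so the UI always receives a terminal event even after a crash or cancel. A minimal standalone sketch of that thread-safe counter idea (the class and names below are illustrative, not part of the repository):

import threading

class Counters:
    def __init__(self):
        self.lock = threading.Lock()
        self.downloaded = 0
        self.skipped = 0

    def record(self, success: bool) -> None:
        # Guard both counters with one lock so totals stay consistent
        # when many worker threads report results concurrently.
        with self.lock:
            if success:
                self.downloaded += 1
            else:
                self.skipped += 1

counters = Counters()
counters.record(True)
counters.record(False)
print(counters.downloaded, counters.skipped)  # -> 1 1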
45
src/ui/classes/mangadex_downloader_thread.py
Normal file
@@ -0,0 +1,45 @@
import threading
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.mangadex_client import fetch_mangadex_data


class MangaDexDownloadThread(QThread):
    """A wrapper QThread for running the MangaDex client function."""
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool)
    overall_progress_signal = pyqtSignal(int, int)

    def __init__(self, url, output_dir, parent=None):
        super().__init__(parent)
        self.start_url = url
        self.output_dir = output_dir
        self.is_cancelled = False
        self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()
        self.cancellation_event = parent.cancellation_event if hasattr(parent, 'cancellation_event') else threading.Event()

    def run(self):
        downloaded = 0
        skipped = 0
        try:
            downloaded, skipped = fetch_mangadex_data(
                self.start_url,
                self.output_dir,
                logger_func=self.progress_signal.emit,
                file_progress_callback=self.file_progress_signal,
                overall_progress_callback=self.overall_progress_signal,
                pause_event=self.pause_event,
                cancellation_event=self.cancellation_event
            )
        except Exception as e:
            self.progress_signal.emit(f"❌ A critical error occurred in the MangaDex thread: {e}")
            skipped = 1 # Mark as skipped if there was a critical failure
        finally:
            self.finished_signal.emit(downloaded, skipped, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
        if self.cancellation_event:
            self.cancellation_event.set()
        self.progress_signal.emit(" Cancellation signal received by MangaDex thread.")
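Note: wiring such a thread into a host application only needs three signal connections. A minimal sketch, assuming the repository root is on sys.path and a display is available (the URL and paths are placeholders):

from PyQt5.QtWidgets import QApplication
from src.ui.classes.mangadex_downloader_thread import MangaDexDownloadThread

def on_done(downloaded, skipped, cancelled):
    print(f"done: {downloaded} downloaded, {skipped} skipped, cancelled={cancelled}")

app = QApplication([])
thread = MangaDexDownloadThread("https://mangadex.org/title/<uuid>", "/tmp/manga")
thread.progress_signal.connect(print)              # log lines go straight to stdout
thread.finished_signal.connect(on_done)            # (downloaded, skipped, cancelled)
thread.finished_signal.connect(lambda *a: app.quit())  # stop the event loop when done
thread.start()
app.exec_()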
105
src/ui/classes/nhentai_downloader_thread.py
Normal file
@@ -0,0 +1,105 @@
import os
import time
import cloudscraper
from PyQt5.QtCore import QThread, pyqtSignal

from ...utils.file_utils import clean_folder_name


class NhentaiDownloadThread(QThread):
    progress_signal = pyqtSignal(str)
    finished_signal = pyqtSignal(int, int, bool)

    IMAGE_SERVERS = [
        "https://i.nhentai.net", "https://i2.nhentai.net", "https://i3.nhentai.net",
        "https://i5.nhentai.net", "https://i7.nhentai.net"
    ]

    EXTENSION_MAP = {'j': 'jpg', 'p': 'png', 'g': 'gif', 'w': 'webp' }

    def __init__(self, gallery_data, output_dir, parent=None):
        super().__init__(parent)
        self.gallery_data = gallery_data
        self.output_dir = output_dir
        self.is_cancelled = False

    def run(self):
        title = self.gallery_data.get("title", {}).get("english", f"gallery_{self.gallery_data.get('id')}")
        gallery_id = self.gallery_data.get("id")
        media_id = self.gallery_data.get("media_id")
        pages_info = self.gallery_data.get("pages", [])

        folder_name = clean_folder_name(title)
        gallery_path = os.path.join(self.output_dir, folder_name)

        try:
            os.makedirs(gallery_path, exist_ok=True)
        except OSError as e:
            self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
            self.finished_signal.emit(0, len(pages_info), False)
            return

        self.progress_signal.emit(f"⬇️ Downloading '{title}' to folder '{folder_name}'...")

        scraper = cloudscraper.create_scraper()
        download_count = 0
        skip_count = 0

        for i, page_data in enumerate(pages_info):
            if self.is_cancelled:
                break

            page_num = i + 1

            ext_char = page_data.get('t', 'j')
            extension = self.EXTENSION_MAP.get(ext_char, 'jpg')

            relative_path = f"/galleries/{media_id}/{page_num}.{extension}"

            local_filename = f"{page_num:03d}.{extension}"
            filepath = os.path.join(gallery_path, local_filename)

            if os.path.exists(filepath):
                self.progress_signal.emit(f" -> Skip (Exists): {local_filename}")
                skip_count += 1
                continue

            download_successful = False
            for server in self.IMAGE_SERVERS:
                if self.is_cancelled:
                    break

                full_url = f"{server}{relative_path}"
                try:
                    self.progress_signal.emit(f" Downloading page {page_num}/{len(pages_info)} from {server} ...")

                    headers = {
                        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36',
                        'Referer': f'https://nhentai.net/g/{gallery_id}/'
                    }

                    response = scraper.get(full_url, headers=headers, timeout=60, stream=True)

                    if response.status_code == 200:
                        with open(filepath, 'wb') as f:
                            for chunk in response.iter_content(chunk_size=8192):
                                f.write(chunk)
                        download_count += 1
                        download_successful = True
                        break
                    else:
                        self.progress_signal.emit(f" -> {server} returned status {response.status_code}. Trying next server...")

                except Exception as e:
                    self.progress_signal.emit(f" -> {server} failed to connect or timed out: {e}. Trying next server...")

            if not download_successful:
                self.progress_signal.emit(f" ❌ Failed to download {local_filename} from all servers.")
                skip_count += 1

            time.sleep(0.5)

        self.finished_signal.emit(download_count, skip_count, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
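Note: the URL scheme the thread relies on is fully determined by the gallery's media_id plus one type character per page, which maps to a file extension via EXTENSION_MAP. A standalone sketch restating that construction (same mapping as above):

EXTENSION_MAP = {'j': 'jpg', 'p': 'png', 'g': 'gif', 'w': 'webp'}

def page_url(server: str, media_id: str, page_num: int, type_char: str) -> str:
    # Unknown type characters fall back to jpg, mirroring the thread's default.
    ext = EXTENSION_MAP.get(type_char, 'jpg')
    return f"{server}/galleries/{media_id}/{page_num}.{ext}"

# e.g. the third page of a gallery whose media_id is "123456", stored as PNG:
print(page_url("https://i.nhentai.net", "123456", 3, 'p'))
# -> https://i.nhentai.net/galleries/123456/3.png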
101
src/ui/classes/pixeldrain_downloader_thread.py
Normal file
@@ -0,0 +1,101 @@
import os
import time
import requests
import cloudscraper
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.pixeldrain_client import fetch_pixeldrain_data
from ...utils.file_utils import clean_folder_name


class PixeldrainDownloadThread(QThread):
    """A dedicated QThread for handling pixeldrain.com downloads."""
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool) # dl_count, skip_count, cancelled

    def __init__(self, url, output_dir, parent=None):
        super().__init__(parent)
        self.pixeldrain_url = url
        self.output_dir = output_dir
        self.is_cancelled = False

    def run(self):
        download_count = 0
        skip_count = 0
        self.progress_signal.emit("=" * 40)
        self.progress_signal.emit(f"🚀 Starting Pixeldrain.com Download for: {self.pixeldrain_url}")

        album_title_raw, files_to_download = fetch_pixeldrain_data(self.pixeldrain_url, self.progress_signal.emit)

        if not files_to_download:
            self.progress_signal.emit("❌ Failed to extract file information from Pixeldrain. Aborting.")
            self.finished_signal.emit(0, 0, self.is_cancelled)
            return

        album_folder_name = clean_folder_name(album_title_raw)
        album_path = os.path.join(self.output_dir, album_folder_name)
        try:
            os.makedirs(album_path, exist_ok=True)
            self.progress_signal.emit(f" Saving to folder: '{album_path}'")
        except OSError as e:
            self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
            self.finished_signal.emit(0, len(files_to_download), self.is_cancelled)
            return

        total_files = len(files_to_download)
        session = cloudscraper.create_scraper()

        for i, file_data in enumerate(files_to_download):
            if self.is_cancelled:
                self.progress_signal.emit(" Download cancelled by user.")
                skip_count = total_files - download_count
                break

            filename = file_data.get('filename')
            file_url = file_data.get('url')
            filepath = os.path.join(album_path, filename)

            if os.path.exists(filepath):
                self.progress_signal.emit(f" -> Skip ({i+1}/{total_files}): '{filename}' already exists.")
                skip_count += 1
                continue

            self.progress_signal.emit(f" Downloading ({i+1}/{total_files}): '{filename}'...")

            try:
                response = session.get(file_url, stream=True, timeout=90)
                response.raise_for_status()

                total_size = int(response.headers.get('content-length', 0))
                downloaded_size = 0
                last_update_time = time.time()

                with open(filepath, 'wb') as f:
                    for chunk in response.iter_content(chunk_size=8192):
                        if self.is_cancelled:
                            break
                        if chunk:
                            f.write(chunk)
                            downloaded_size += len(chunk)
                            current_time = time.time()
                            if total_size > 0 and (current_time - last_update_time) > 0.5:
                                self.file_progress_signal.emit(filename, (downloaded_size, total_size))
                                last_update_time = current_time

                if self.is_cancelled:
                    if os.path.exists(filepath): os.remove(filepath)
                    continue

                download_count += 1
            except requests.exceptions.RequestException as e:
                self.progress_signal.emit(f" ❌ Failed to download '{filename}'. Error: {e}")
                if os.path.exists(filepath): os.remove(filepath)
                skip_count += 1

        self.file_progress_signal.emit("", None)
        self.finished_signal.emit(download_count, skip_count, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
        self.progress_signal.emit(" Cancellation signal received by Pixeldrain thread.")
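Note: the 0.5 s check before each file_progress_signal emit keeps signal traffic to the UI thread low. A generic sketch of the same throttling idea as a reusable helper (a hypothetical class, not part of the repository):

import time

class Throttled:
    def __init__(self, emit, min_interval=0.5):
        self.emit = emit
        self.min_interval = min_interval
        self._last = 0.0

    def __call__(self, *payload):
        # Forward the payload only if enough time has passed since the last emit.
        now = time.time()
        if now - self._last >= self.min_interval:
            self.emit(*payload)
            self._last = now

progress = Throttled(print)
for done in range(0, 1000, 10):
    progress("file.bin", done)   # only a few of these calls actually print
    time.sleep(0.01)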
87
src/ui/classes/rule34video_downloader_thread.py
Normal file
@@ -0,0 +1,87 @@
import os
import time
import requests
from PyQt5.QtCore import QThread, pyqtSignal
import cloudscraper

from ...core.rule34video_client import fetch_rule34video_data
from ...utils.file_utils import clean_folder_name

class Rule34VideoDownloadThread(QThread):
    """A dedicated QThread for handling rule34video.com downloads."""
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool) # dl_count, skip_count, cancelled

    def __init__(self, url, output_dir, parent=None):
        super().__init__(parent)
        self.video_url = url
        self.output_dir = output_dir
        self.is_cancelled = False

    def run(self):
        download_count = 0
        skip_count = 0

        video_title, final_video_url = fetch_rule34video_data(self.video_url, self.progress_signal.emit)

        if not final_video_url:
            self.progress_signal.emit("❌ Failed to get video data. Aborting.")
            self.finished_signal.emit(0, 1, self.is_cancelled)
            return

        # Create a safe filename from the title, defaulting if needed
        safe_title = clean_folder_name(video_title if video_title else "rule34video_file")
        filename = f"{safe_title}.mp4"
        filepath = os.path.join(self.output_dir, filename)

        if os.path.exists(filepath):
            self.progress_signal.emit(f" -> Skip: '{filename}' already exists.")
            self.finished_signal.emit(0, 1, self.is_cancelled)
            return

        self.progress_signal.emit(f" Downloading: '{filename}'...")
        try:
            scraper = cloudscraper.create_scraper()
            # The CDN link might not require special headers, but a referer is good practice
            headers = {'Referer': 'https://rule34video.com/'}
            response = scraper.get(final_video_url, stream=True, headers=headers, timeout=90)
            response.raise_for_status()

            total_size = int(response.headers.get('content-length', 0))
            downloaded_size = 0
            last_update_time = time.time()

            with open(filepath, 'wb') as f:
                # Use a larger chunk size for video files
                for chunk in response.iter_content(chunk_size=8192 * 4):
                    if self.is_cancelled:
                        break
                    if chunk:
                        f.write(chunk)
                        downloaded_size += len(chunk)
                        current_time = time.time()
                        if total_size > 0 and (current_time - last_update_time) > 0.5:
                            self.file_progress_signal.emit(filename, (downloaded_size, total_size))
                            last_update_time = current_time

            if self.is_cancelled:
                if os.path.exists(filepath):
                    os.remove(filepath)
                skip_count = 1
                self.progress_signal.emit(f" Download cancelled for '{filename}'.")
            else:
                download_count = 1

        except Exception as e:
            self.progress_signal.emit(f" ❌ Failed to download '{filename}': {e}")
            if os.path.exists(filepath):
                os.remove(filepath)
            skip_count = 1

        self.file_progress_signal.emit("", None)
        self.finished_signal.emit(download_count, skip_count, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
        self.progress_signal.emit(" Cancellation signal received by Rule34Video thread.")
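Note: this thread, like the others, deletes the partial file after a cancel or error. An alternative sketch of the same cleanup concern: stream to a ".part" name and rename only on success, so an interrupted run can never be mistaken for a finished file (illustrative only; the repository writes to the final name directly):

import os
import requests

def download(url: str, filepath: str, timeout: int = 90) -> None:
    part = filepath + ".part"
    try:
        with requests.get(url, stream=True, timeout=timeout) as r:
            r.raise_for_status()
            with open(part, "wb") as f:
                for chunk in r.iter_content(chunk_size=8192 * 4):
                    if chunk:
                        f.write(chunk)
        os.replace(part, filepath)  # atomic rename on the same filesystem
    except Exception:
        # Leave no half-written final file behind.
        if os.path.exists(part):
            os.remove(part)
        raise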
105
src/ui/classes/saint2_downloader_thread.py
Normal file
@@ -0,0 +1,105 @@
import os
import time
import requests
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.saint2_client import fetch_saint2_data

class Saint2DownloadThread(QThread):
    """A dedicated QThread for handling saint2.su downloads."""
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool) # dl_count, skip_count, cancelled

    def __init__(self, url, output_dir, parent=None):
        super().__init__(parent)
        self.saint2_url = url
        self.output_dir = output_dir
        self.is_cancelled = False

    def run(self):
        download_count = 0
        skip_count = 0
        self.progress_signal.emit("=" * 40)
        self.progress_signal.emit(f"🚀 Starting Saint2.su Download for: {self.saint2_url}")

        album_name, files_to_download = fetch_saint2_data(self.saint2_url, self.progress_signal.emit)

        if not files_to_download:
            self.progress_signal.emit("❌ Failed to extract file information from Saint2. Aborting.")
            self.finished_signal.emit(0, 0, self.is_cancelled)
            return

        album_path = os.path.join(self.output_dir, album_name)
        try:
            os.makedirs(album_path, exist_ok=True)
            self.progress_signal.emit(f" Saving to folder: '{album_path}'")
        except OSError as e:
            self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
            self.finished_signal.emit(0, len(files_to_download), self.is_cancelled)
            return

        total_files = len(files_to_download)
        session = requests.Session()

        for i, file_data in enumerate(files_to_download):
            if self.is_cancelled:
                self.progress_signal.emit(" Download cancelled by user.")
                skip_count = total_files - download_count
                break

            filename = file_data.get('filename', f'untitled_{i+1}.mp4')
            file_url = file_data.get('url')
            headers = file_data.get('headers')
            filepath = os.path.join(album_path, filename)

            if os.path.exists(filepath):
                self.progress_signal.emit(f" -> Skip ({i+1}/{total_files}): '{filename}' already exists.")
                skip_count += 1
                continue

            self.progress_signal.emit(f" Downloading ({i+1}/{total_files}): '{filename}'...")

            try:
                response = session.get(file_url, stream=True, headers=headers, timeout=60)
                response.raise_for_status()

                total_size = int(response.headers.get('content-length', 0))
                downloaded_size = 0
                last_update_time = time.time()

                with open(filepath, 'wb') as f:
                    for chunk in response.iter_content(chunk_size=8192):
                        if self.is_cancelled:
                            break
                        if chunk:
                            f.write(chunk)
                            downloaded_size += len(chunk)
                            current_time = time.time()
                            if total_size > 0 and (current_time - last_update_time) > 0.5:
                                self.file_progress_signal.emit(filename, (downloaded_size, total_size))
                                last_update_time = current_time

                if self.is_cancelled:
                    if os.path.exists(filepath): os.remove(filepath)
                    continue

                if total_size > 0:
                    self.file_progress_signal.emit(filename, (total_size, total_size))

                download_count += 1
            except requests.exceptions.RequestException as e:
                self.progress_signal.emit(f" ❌ Failed to download '{filename}'. Error: {e}")
                if os.path.exists(filepath): os.remove(filepath)
                skip_count += 1
            except Exception as e:
                self.progress_signal.emit(f" ❌ An unexpected error occurred with '{filename}': {e}")
                if os.path.exists(filepath): os.remove(filepath)
                skip_count += 1

        self.file_progress_signal.emit("", None)
        self.finished_signal.emit(download_count, skip_count, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
        self.progress_signal.emit(" Cancellation signal received by Saint2 thread.")
386
src/ui/classes/simp_city_downloader_thread.py
Normal file
@@ -0,0 +1,386 @@
import os
import queue
import re
import threading
import time
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse

import cloudscraper
import requests
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.bunkr_client import fetch_bunkr_data
from ...core.pixeldrain_client import fetch_pixeldrain_data
from ...core.saint2_client import fetch_saint2_data
from ...core.simpcity_client import fetch_single_simpcity_page
from ...services.drive_downloader import (
    download_mega_file as drive_download_mega_file,
    download_gofile_folder
)
from ...utils.file_utils import clean_folder_name


class SimpCityDownloadThread(QThread):
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool, list)
    overall_progress_signal = pyqtSignal(int, int)

    def __init__(self, url, post_id, output_dir, cookies, parent=None):
        super().__init__(parent)
        self.start_url = url
        self.post_id = post_id
        self.output_dir = output_dir
        self.cookies = cookies
        self.is_cancelled = False
        self.parent_app = parent
        self.image_queue = queue.Queue()
        self.service_queue = queue.Queue()
        self.counter_lock = threading.Lock()
        self.total_dl_count = 0
        self.total_skip_count = 0
        self.total_jobs_found = 0
        self.total_jobs_processed = 0
        self.processed_job_urls = set()

    def cancel(self):
        self.is_cancelled = True

    class _ServiceLoggerAdapter:
        """Wraps the progress signal to provide .info(), .error(), .warning() methods for other clients."""
        def __init__(self, signal_emitter, prefix=""):
            self.emit = signal_emitter
            self.prefix = prefix

        def __call__(self, msg, *args, **kwargs):
            # Make the logger callable, defaulting to the info method.
            self.info(msg, *args, **kwargs)

        def info(self, msg, *args, **kwargs): self.emit(f"{self.prefix}{str(msg) % args}")
        def error(self, msg, *args, **kwargs): self.emit(f"{self.prefix}❌ ERROR: {str(msg) % args}")
        def warning(self, msg, *args, **kwargs): self.emit(f"{self.prefix}⚠️ WARNING: {str(msg) % args}")

    def _log_interceptor(self, message):
        """Filters out verbose log messages from the simpcity_client."""
        if "[SimpCity] Scraper found" in message or "[SimpCity] Scraping page" in message:
            pass
        else:
            self.progress_signal.emit(message)

    def _get_enriched_jobs(self, jobs_to_check):
        """Performs a pre-flight check on jobs to get an accurate total file count and summary."""
        if not jobs_to_check:
            return []

        enriched_jobs = []

        bunkr_logger = self._ServiceLoggerAdapter(self.progress_signal.emit, prefix=" ")
        pixeldrain_logger = self._ServiceLoggerAdapter(self.progress_signal.emit, prefix=" ")
        saint2_logger = self._ServiceLoggerAdapter(self.progress_signal.emit, prefix=" ")

        for job in jobs_to_check:
            job_type = job.get('type')
            job_url = job.get('url')

            if job_type in ['image', 'saint2_direct']:
                enriched_jobs.append(job)
            elif (job_type == 'bunkr' and self.should_dl_bunkr) or \
                 (job_type == 'pixeldrain' and self.should_dl_pixeldrain) or \
                 (job_type == 'saint2' and self.should_dl_saint2):
                self.progress_signal.emit(f" -> Checking {job_type} album for file count...")

                fetch_map = {
                    'bunkr': (fetch_bunkr_data, bunkr_logger),
                    'pixeldrain': (fetch_pixeldrain_data, pixeldrain_logger),
                    'saint2': (fetch_saint2_data, saint2_logger)
                }
                fetch_func, logger_adapter = fetch_map[job_type]
                album_name, files = fetch_func(job_url, logger_adapter)

                if files:
                    job['prefetched_files'] = files
                    job['prefetched_album_name'] = album_name
                    enriched_jobs.append(job)

        if enriched_jobs:
            summary_counts = Counter()
            current_page_file_count = 0
            for job in enriched_jobs:
                if job.get('prefetched_files'):
                    file_count = len(job['prefetched_files'])
                    summary_counts[job['type']] += file_count
                    current_page_file_count += file_count
                else:
                    summary_counts[job['type']] += 1
                    current_page_file_count += 1

            summary_parts = [f"{job_type} ({count})" for job_type, count in summary_counts.items()]
            self.progress_signal.emit(f" [SimpCity] Content Found: {' | '.join(summary_parts)}")

            with self.counter_lock: self.total_jobs_found += current_page_file_count
            self.overall_progress_signal.emit(self.total_jobs_found, self.total_jobs_processed)

        return enriched_jobs

    def _download_single_image(self, job, album_path, session):
        """Downloads one image file; this is run by the image thread pool."""
        filename = job['filename']
        filepath = os.path.join(album_path, filename)
        try:
            if os.path.exists(filepath):
                self.progress_signal.emit(f" -> Skip (Image): '{filename}'")
                with self.counter_lock: self.total_skip_count += 1
                return
            self.progress_signal.emit(f" -> Downloading (Image): '{filename}'...")
            # --- START MODIFICATION ---
            response = session.get(job['url'], stream=True, timeout=180, headers={'Referer': self.start_url})
            # --- END MODIFICATION ---
            response.raise_for_status()
            with open(filepath, 'wb') as f:
                for chunk in response.iter_content(chunk_size=8192):
                    if self.is_cancelled: break
                    f.write(chunk)
            if not self.is_cancelled:
                with self.counter_lock: self.total_dl_count += 1
        except Exception as e:
            self.progress_signal.emit(f" -> ❌ Image download failed for '{filename}': {e}")
            with self.counter_lock: self.total_skip_count += 1
        finally:
            if not self.is_cancelled:
                with self.counter_lock: self.total_jobs_processed += 1
                self.overall_progress_signal.emit(self.total_jobs_found, self.total_jobs_processed)

    def _image_worker(self, album_path):
        """Target function for the image thread pool that pulls jobs from the queue."""
        session = cloudscraper.create_scraper()
        while True:
            if self.is_cancelled: break
            try:
                job = self.image_queue.get(timeout=1)
                if job is None: break
                self._download_single_image(job, album_path, session)
                self.image_queue.task_done()
            except queue.Empty:
                continue

    def _service_worker(self, album_path):
        """Target function for the single service thread, ensuring sequential downloads."""
        while True:
            if self.is_cancelled: break
            try:
                job = self.service_queue.get(timeout=1)
                if job is None: break

                job_type = job['type']
                job_url = job['url']

                if job_type in ['pixeldrain', 'saint2', 'bunkr']:
                    if (job_type == 'pixeldrain' and self.should_dl_pixeldrain) or \
                       (job_type == 'saint2' and self.should_dl_saint2) or \
                       (job_type == 'bunkr' and self.should_dl_bunkr):
                        self.progress_signal.emit(f"\n--- Processing Service ({job_type.capitalize()}): {job_url} ---")
                        self._download_album(job.get('prefetched_files', []), job_url, album_path)
                elif job_type == 'mega' and self.should_dl_mega:
                    self.progress_signal.emit(f"\n--- Processing Service (Mega): {job_url} ---")
                    drive_download_mega_file(job_url, album_path, self.progress_signal.emit, self.file_progress_signal.emit)
                elif job_type == 'gofile' and self.should_dl_gofile:
                    self.progress_signal.emit(f"\n--- Processing Service (Gofile): {job_url} ---")
                    download_gofile_folder(job_url, album_path, self.progress_signal.emit, self.file_progress_signal.emit)
                elif job_type == 'saint2_direct' and self.should_dl_saint2:
                    self.progress_signal.emit(f"\n--- Processing Service (Saint2 Direct): {job_url} ---")
                    try:
                        filename = os.path.basename(urlparse(job_url).path)
                        filepath = os.path.join(album_path, filename)
                        if os.path.exists(filepath):
                            with self.counter_lock: self.total_skip_count += 1
                        else:
                            response = cloudscraper.create_scraper().get(job_url, stream=True, timeout=120, headers={'Referer': self.start_url})
                            response.raise_for_status()
                            with open(filepath, 'wb') as f:
                                for chunk in response.iter_content(chunk_size=8192):
                                    if self.is_cancelled: break
                                    f.write(chunk)
                            if not self.is_cancelled:
                                with self.counter_lock: self.total_dl_count += 1
                    except Exception as e:
                        with self.counter_lock: self.total_skip_count += 1
                    finally:
                        if not self.is_cancelled:
                            with self.counter_lock: self.total_jobs_processed += 1
                            self.overall_progress_signal.emit(self.total_jobs_found, self.total_jobs_processed)

                self.service_queue.task_done()
            except queue.Empty:
                continue

    def _download_album(self, files_to_process, source_url, album_path):
        """Helper to download all files from a pre-fetched album list."""
        if not files_to_process: return
        session = cloudscraper.create_scraper()
        for file_data in files_to_process:
            if self.is_cancelled: return
            filename = file_data.get('filename') or file_data.get('name')
            filepath = os.path.join(album_path, filename)
            try:
                if os.path.exists(filepath):
                    with self.counter_lock: self.total_skip_count += 1
                else:
                    self.progress_signal.emit(f" -> Downloading: '{filename}'...")
                    headers = file_data.get('headers', {'Referer': source_url})
                    # --- START MODIFICATION ---
                    response = session.get(file_data.get('url'), stream=True, timeout=180, headers=headers)
                    # --- END MODIFICATION ---
                    response.raise_for_status()
                    with open(filepath, 'wb') as f:
                        for chunk in response.iter_content(chunk_size=8192):
                            if self.is_cancelled: break
                            f.write(chunk)
                    if not self.is_cancelled:
                        with self.counter_lock: self.total_dl_count += 1
            except Exception as e:
                with self.counter_lock: self.total_skip_count += 1
            finally:
                if not self.is_cancelled:
                    with self.counter_lock: self.total_jobs_processed += 1
                    self.overall_progress_signal.emit(self.total_jobs_found, self.total_jobs_processed)

    def run(self):
        """Main entry point for the thread, orchestrates the entire download."""
        self.progress_signal.emit("=" * 40)
        self.progress_signal.emit(f"🚀 Starting SimpCity Download for: {self.start_url}")

        self.should_dl_pixeldrain = self.parent_app.simpcity_dl_pixeldrain_cb.isChecked()
        self.should_dl_saint2 = self.parent_app.simpcity_dl_saint2_cb.isChecked()
        self.should_dl_mega = self.parent_app.simpcity_dl_mega_cb.isChecked()
        self.should_dl_images = self.parent_app.simpcity_dl_images_cb.isChecked()
        self.should_dl_bunkr = self.parent_app.simpcity_dl_bunkr_cb.isChecked()
        self.should_dl_gofile = self.parent_app.simpcity_dl_gofile_cb.isChecked()

        is_single_post_mode = self.post_id or '/post-' in self.start_url
        album_path = ""

        try:
            if is_single_post_mode:
                self.progress_signal.emit(" Mode: Single Post detected.")
                album_title, _, _ = fetch_single_simpcity_page(self.start_url, self._log_interceptor, cookies=self.cookies, post_id=self.post_id)
                album_path = os.path.join(self.output_dir, clean_folder_name(album_title or "simpcity_post"))
            else:
                self.progress_signal.emit(" Mode: Full Thread detected.")
                first_page_url = re.sub(r'(/page-\d+)|(/post-\d+)', '', self.start_url).split('#')[0].strip('/')
                album_title, _, _ = fetch_single_simpcity_page(first_page_url, self._log_interceptor, cookies=self.cookies)
                album_path = os.path.join(self.output_dir, clean_folder_name(album_title or "simpcity_album"))
            os.makedirs(album_path, exist_ok=True)
            self.progress_signal.emit(f" Saving all content to folder: '{os.path.basename(album_path)}'")
        except Exception as e:
            self.progress_signal.emit(f"❌ Could not process the initial page. Aborting. Error: {e}")
            self.finished_signal.emit(0, 0, self.is_cancelled, []); return

        service_thread = threading.Thread(target=self._service_worker, args=(album_path,), daemon=True)
        service_thread.start()
        num_image_threads = 15
        image_executor = ThreadPoolExecutor(max_workers=num_image_threads, thread_name_prefix='SimpCityImage')
        for _ in range(num_image_threads): image_executor.submit(self._image_worker, album_path)

        try:
            if is_single_post_mode:
                _, jobs, _ = fetch_single_simpcity_page(self.start_url, self._log_interceptor, cookies=self.cookies, post_id=self.post_id)
                enriched_jobs = self._get_enriched_jobs(jobs)
                if enriched_jobs:
                    for job in enriched_jobs:
                        if job['type'] == 'image':
                            if self.should_dl_images: self.image_queue.put(job)
                        else: self.service_queue.put(job)

            else:
                base_url = re.sub(r'(/page-\d+)|(/post-\d+)', '', self.start_url).split('#')[0].strip('/')
                page_counter = 1; end_of_thread = False; MAX_RETRIES = 3
                while not end_of_thread:
                    if self.is_cancelled: break
                    page_url = f"{base_url}/page-{page_counter}"; retries = 0; page_fetch_successful = False
                    while retries < MAX_RETRIES:
                        if self.is_cancelled: end_of_thread = True; break
                        self.progress_signal.emit(f"\n--- Analyzing page {page_counter} (Attempt {retries + 1}/{MAX_RETRIES}) ---")
                        try:
                            page_title, jobs_on_page, final_url = fetch_single_simpcity_page(page_url, self._log_interceptor, cookies=self.cookies)

                            # --- START: MODIFIED REDIRECT LOGIC ---
                            if final_url != page_url:
                                self.progress_signal.emit(f" -> Redirect detected from {page_url} to {final_url}")
                                try:
                                    req_page_match = re.search(r'/page-(\d+)', page_url)
                                    final_page_match = re.search(r'/page-(\d+)', final_url)

                                    if req_page_match:
                                        req_page_num = int(req_page_match.group(1))

                                        # Scenario 1: Redirect to an earlier page (e.g., page-11 -> page-10)
                                        if final_page_match and int(final_page_match.group(1)) < req_page_num:
                                            self.progress_signal.emit(f" -> Redirected to an earlier page ({final_page_match.group(0)}). Reached end of thread.")
                                            end_of_thread = True

                                        # Scenario 2: Redirect to base URL (e.g., page-11 -> /)
                                        # We check req_page_num > 1 because page-1 often redirects to base URL, which is normal.
                                        elif not final_page_match and req_page_num > 1:
                                            self.progress_signal.emit(f" -> Redirected to base thread URL. Reached end of thread.")
                                            end_of_thread = True

                                except (ValueError, TypeError):
                                    pass # Ignore parsing errors
                            # --- END: MODIFIED REDIRECT LOGIC ---

                            if end_of_thread:
                                page_fetch_successful = True; break

                            if page_counter > 1 and not page_title:
                                self.progress_signal.emit(f" -> Page {page_counter} is invalid or has no title. Reached end of thread.")
                                end_of_thread = True
                            elif not jobs_on_page:
                                self.progress_signal.emit(f" -> Page {page_counter} has no content. Reached end of thread.")
                                end_of_thread = True
                            else:
                                new_jobs = [job for job in jobs_on_page if job.get('url') not in self.processed_job_urls]
                                if not new_jobs and page_counter > 1:
                                    self.progress_signal.emit(f" -> Page {page_counter} contains no new content. Reached end of thread.")
                                    end_of_thread = True
                                else:
                                    enriched_jobs = self._get_enriched_jobs(new_jobs)
                                    if not enriched_jobs and not new_jobs:
                                        # This can happen if all new_jobs were e.g. pixeldrain and it's disabled
                                        self.progress_signal.emit(f" -> Page {page_counter} content was filtered out. Reached end of thread.")
                                        end_of_thread = True

                                    else:
                                        for job in enriched_jobs:
                                            self.processed_job_urls.add(job.get('url'))
                                            if job['type'] == 'image':
                                                if self.should_dl_images: self.image_queue.put(job)
                                            else: self.service_queue.put(job)

                            page_fetch_successful = True; break
                        except requests.exceptions.HTTPError as e:
                            if e.response.status_code in [403, 404]:
                                self.progress_signal.emit(f" -> Page {page_counter} returned {e.response.status_code}. Reached end of thread.")
                                end_of_thread = True; break
                            elif e.response.status_code == 429:
                                self.progress_signal.emit(f" -> Rate limited (429). Waiting...")
                                time.sleep(5 * (retries + 2)); retries += 1
                            else:
                                self.progress_signal.emit(f" -> HTTP Error {e.response.status_code} on page {page_counter}. Stopping crawl.")
                                end_of_thread = True; break
                        except Exception as e:
                            self.progress_signal.emit(f" Stopping crawl due to error on page {page_counter}: {e}"); end_of_thread = True; break
                    if not page_fetch_successful and not end_of_thread:
                        self.progress_signal.emit(f" -> Failed to fetch page {page_counter} after {MAX_RETRIES} attempts. Stopping crawl.")
                        end_of_thread = True
                    if not end_of_thread: page_counter += 1
        except Exception as e:
            self.progress_signal.emit(f"❌ A critical error occurred during the main fetch phase: {e}")

        self.progress_signal.emit("\n--- All pages analyzed. Waiting for background downloads to complete... ---")
        for _ in range(num_image_threads): self.image_queue.put(None)
        self.service_queue.put(None)
        image_executor.shutdown(wait=True)
        service_thread.join()
        self.finished_signal.emit(self.total_dl_count, self.total_skip_count, self.is_cancelled, [])
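Note: the shutdown at the end of run() relies on sentinel values, one None per image worker plus one for the single service worker, so every consumer loop exits cleanly once the queues drain. A minimal standalone sketch of that producer/consumer shutdown pattern (names are hypothetical):

import queue
import threading

def worker(q: queue.Queue) -> None:
    while True:
        job = q.get()
        if job is None:          # sentinel: no more work will arrive
            q.task_done()
            break
        print(f"processing {job}")
        q.task_done()

q = queue.Queue()
threads = [threading.Thread(target=worker, args=(q,)) for _ in range(3)]
for t in threads: t.start()
for job in ["a", "b", "c", "d"]: q.put(job)
for _ in threads: q.put(None)    # one sentinel per worker thread
for t in threads: t.join()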
128
src/ui/classes/toonily_downloader_thread.py
Normal file
@@ -0,0 +1,128 @@
import os
import threading
import time
from urllib.parse import urlparse

import cloudscraper
from PyQt5.QtCore import QThread, pyqtSignal

from ...core.toonily_client import (
    fetch_chapter_data as toonily_fetch_data,
    get_chapter_list as toonily_get_list
)
from ...utils.file_utils import clean_folder_name


class ToonilyDownloadThread(QThread):
    """A dedicated QThread for handling toonily.com series or single chapters."""
    progress_signal = pyqtSignal(str)
    file_progress_signal = pyqtSignal(str, object)
    finished_signal = pyqtSignal(int, int, bool)
    overall_progress_signal = pyqtSignal(int, int) # Signal for chapter progress

    def __init__(self, url, output_dir, parent=None):
        super().__init__(parent)
        self.start_url = url
        self.output_dir = output_dir
        self.is_cancelled = False
        # Get access to the pause event from the main app
        self.pause_event = parent.pause_event if hasattr(parent, 'pause_event') else threading.Event()

    def _check_pause(self):
        # Helper function to check for pause/cancel events
        if self.is_cancelled: return True
        if self.pause_event and self.pause_event.is_set():
            self.progress_signal.emit(" Download paused...")
            while self.pause_event.is_set():
                if self.is_cancelled: return True
                time.sleep(0.5)
            self.progress_signal.emit(" Download resumed.")
        return self.is_cancelled

    def run(self):
        grand_total_dl = 0
        grand_total_skip = 0

        # Check if the URL is a series or a chapter
        if '/chapter-' in self.start_url:
            # It's a single chapter URL
            chapters_to_download = [self.start_url]
            self.progress_signal.emit("ℹ️ Single Toonily chapter URL detected.")
        else:
            # It's a series URL, so get all chapters
            chapters_to_download = toonily_get_list(self.start_url, self.progress_signal.emit)

        if not chapters_to_download:
            self.progress_signal.emit("❌ No chapters found to download.")
            self.finished_signal.emit(0, 0, self.is_cancelled)
            return

        self.progress_signal.emit(f"--- Starting download of {len(chapters_to_download)} chapter(s) ---")
        self.overall_progress_signal.emit(len(chapters_to_download), 0)

        scraper = cloudscraper.create_scraper()

        for chapter_idx, chapter_url in enumerate(chapters_to_download):
            if self._check_pause(): break

            self.progress_signal.emit(f"\n-- Processing Chapter {chapter_idx + 1}/{len(chapters_to_download)} --")
            series_title, chapter_title, image_urls = toonily_fetch_data(chapter_url, self.progress_signal.emit, scraper)

            if not image_urls:
                self.progress_signal.emit(f"❌ Failed to get data for chapter. Skipping.")
                continue

            # Create folders like: /Downloads/Series Name/Chapter 01/
            series_folder_name = clean_folder_name(series_title)
            # Make a safe folder name from the full chapter title
            chapter_folder_name = clean_folder_name(chapter_title)
            final_save_path = os.path.join(self.output_dir, series_folder_name, chapter_folder_name)

            try:
                os.makedirs(final_save_path, exist_ok=True)
                self.progress_signal.emit(f" Saving to folder: '{os.path.join(series_folder_name, chapter_folder_name)}'")
            except OSError as e:
                self.progress_signal.emit(f"❌ Critical error creating directory: {e}")
                grand_total_skip += len(image_urls)
                continue

            for i, img_url in enumerate(image_urls):
                if self._check_pause(): break

                try:
                    file_extension = os.path.splitext(urlparse(img_url).path)[1] or '.jpg'
                    filename = f"{i+1:03d}{file_extension}"
                    filepath = os.path.join(final_save_path, filename)

                    if os.path.exists(filepath):
                        self.progress_signal.emit(f" -> Skip ({i+1}/{len(image_urls)}): '{filename}' already exists.")
                        grand_total_skip += 1
                    else:
                        self.progress_signal.emit(f" Downloading ({i+1}/{len(image_urls)}): '{filename}'...")
                        response = scraper.get(img_url, stream=True, timeout=60, headers={'Referer': chapter_url})
                        response.raise_for_status()

                        with open(filepath, 'wb') as f:
                            for chunk in response.iter_content(chunk_size=8192):
                                if self._check_pause(): break
                                f.write(chunk)

                        if self._check_pause():
                            if os.path.exists(filepath): os.remove(filepath)
                            break

                        grand_total_dl += 1
                        time.sleep(0.2)
                except Exception as e:
                    self.progress_signal.emit(f" ❌ Failed to download '{filename}': {e}")
                    grand_total_skip += 1

            self.overall_progress_signal.emit(len(chapters_to_download), chapter_idx + 1)
            time.sleep(1) # Wait a second between chapters

        self.file_progress_signal.emit("", None)
        self.finished_signal.emit(grand_total_dl, grand_total_skip, self.is_cancelled)

    def cancel(self):
        self.is_cancelled = True
        self.progress_signal.emit(" Cancellation signal received by Toonily thread.")
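Note: _check_pause() above polls a threading.Event in 0.5 s steps and doubles as a cancellation check. A sketch of the same idea as a reusable helper, inverted so the caller just asks "should I abort?" (hypothetical function, not part of the repository):

import threading
import time

def wait_while_paused(pause_event: threading.Event,
                      cancel_event: threading.Event,
                      poll: float = 0.5) -> bool:
    """Block while paused; return True if the caller should abort."""
    while pause_event.is_set():
        if cancel_event.is_set():
            return True
        time.sleep(poll)
    return cancel_event.is_set()

pause, cancel = threading.Event(), threading.Event()
assert wait_while_paused(pause, cancel) is False   # neither set: keep going
cancel.set()
assert wait_while_paused(pause, cancel) is True    # cancelled: abort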
@@ -16,7 +16,6 @@ class CookieHelpDialog(QDialog):
    It can be displayed as a simple informational popup or as a modal choice
    when cookies are required but not found.
    """
    # Constants to define the user's choice from the dialog
    CHOICE_PROCEED_WITHOUT_COOKIES = 1
    CHOICE_CANCEL_DOWNLOAD = 2
    CHOICE_OK_INFO_ONLY = 3
@@ -64,7 +63,6 @@ class CookieHelpDialog(QDialog):
        button_layout.addStretch(1)

        if self.offer_download_without_option:
            # Add buttons for making a choice
            self.download_without_button = QPushButton()
            self.download_without_button.clicked.connect(self._proceed_without_cookies)
            button_layout.addWidget(self.download_without_button)
@@ -73,7 +71,6 @@ class CookieHelpDialog(QDialog):
            self.cancel_button.clicked.connect(self._cancel_download)
            button_layout.addWidget(self.cancel_button)
        else:
            # Add a simple OK button for informational display
            self.ok_button = QPushButton()
            self.ok_button.clicked.connect(self._ok_info_only)
            button_layout.addWidget(self.ok_button)
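Note: the three CHOICE_* constants above are the dialog's public result codes. A sketch of how a caller might branch on them; the constructor arguments and the user_choice attribute below are assumptions for illustration, since only the constants themselves appear in this diff:

def ask_about_cookies(parent_app, parent):
    # Hypothetical call; the real constructor signature is not shown here.
    dialog = CookieHelpDialog(parent_app, parent, offer_download_without_option=True)
    dialog.exec_()
    if dialog.user_choice == CookieHelpDialog.CHOICE_CANCEL_DOWNLOAD:
        return None  # user backed out entirely
    # True -> proceed without cookies, False -> proceed with cookies
    return dialog.user_choice == CookieHelpDialog.CHOICE_PROCEED_WITHOUT_COOKIES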
109
src/ui/dialogs/CustomFilenameDialog.py
Normal file
@@ -0,0 +1,109 @@
from PyQt5.QtWidgets import (
    QDialog, QVBoxLayout, QHBoxLayout, QLabel, QLineEdit, QPushButton,
    QDialogButtonBox, QTextEdit
)
from PyQt5.QtCore import Qt

class CustomFilenameDialog(QDialog):
    """A dialog for creating a custom filename format string."""

    DISPLAY_KEY_MAP = {
        "PostID": "id",
        "CreatorName": "creator_name",
        "service": "service",
        "title": "title",
        "added": "added",
        "published": "published",
        "edited": "edited",
        "name": "name"
    }

    # STRICT LIST: Only these three will be clickable for DeviantArt
    DA_ALLOWED_KEYS = ["creator_name", "title", "published"]

    def __init__(self, current_format, current_date_format, parent=None, is_deviantart=False):
        super().__init__(parent)
        self.setWindowTitle("Custom Filename Format")
        self.setMinimumWidth(500)

        self.current_format = current_format
        self.current_date_format = current_date_format

        # --- Main Layout ---
        layout = QVBoxLayout(self)

        # --- Description ---
        desc_text = "Create a filename format using placeholders. The date/time values will be automatically formatted."
        if is_deviantart:
            desc_text += "\n\n(DeviantArt Mode: Only Creator Name, Title, and Upload Date are available. Other buttons are disabled.)"

        description_label = QLabel(desc_text)
        description_label.setWordWrap(True)
        layout.addWidget(description_label)

        # --- Format Input ---
        format_label = QLabel("Filename Format:")
        layout.addWidget(format_label)
        self.format_input = QLineEdit(self)
        self.format_input.setText(self.current_format)

        if is_deviantart:
            self.format_input.setPlaceholderText("e.g., {published} {title} {creator_name}")
        else:
            self.format_input.setPlaceholderText("e.g., {published} {title} {id}")

        layout.addWidget(self.format_input)

        # --- Date Format Input ---
        date_format_label = QLabel("Date Format (for {published}):")
        layout.addWidget(date_format_label)
        self.date_format_input = QLineEdit(self)
        self.date_format_input.setText(self.current_date_format)
        self.date_format_input.setPlaceholderText("e.g., YYYY-MM-DD")
        layout.addWidget(self.date_format_input)

        # --- Available Keys Display ---
        keys_label = QLabel("Click to add a placeholder:")
        layout.addWidget(keys_label)

        keys_layout = QHBoxLayout()
        keys_layout.setSpacing(5)

        for display_key, internal_key in self.DISPLAY_KEY_MAP.items():
            key_button = QPushButton(f"{{{display_key}}}")

            # --- DeviantArt Logic ---
            if is_deviantart:
                if internal_key in self.DA_ALLOWED_KEYS:
                    # Active buttons: Bold text, enabled
                    key_button.setStyleSheet("font-weight: bold; color: black;")
                    key_button.setEnabled(True)
                else:
                    # Inactive buttons: Disabled (Cannot be clicked)
                    key_button.setEnabled(False)
                    key_button.setToolTip("Not available for DeviantArt")
            # ------------------------

            # Use a lambda to pass the correct internal key when clicked
            key_button.clicked.connect(lambda checked, key=internal_key: self.add_key_to_input(key))
            keys_layout.addWidget(key_button)
        keys_layout.addStretch()

        layout.addLayout(keys_layout)

        # --- OK/Cancel Buttons ---
        button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel)
        button_box.accepted.connect(self.accept)
        button_box.rejected.connect(self.reject)
        layout.addWidget(button_box)

    def add_key_to_input(self, key_to_insert):
        """Adds the corresponding internal key placeholder to the input field."""
        self.format_input.insert(f" {{{key_to_insert}}} ")
        self.format_input.setFocus()

    def get_format_string(self):
        return self.format_input.text().strip()

    def get_date_format_string(self):
        return self.date_format_input.text().strip()
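Note: the dialog only produces two strings, a placeholder template and a YYYY-MM-DD style date pattern. A sketch of how a downstream consumer could apply them; the token-to-strftime conversion below is an assumption about the intended behavior, not code from this diff:

from datetime import datetime

def apply_format(fmt: str, date_fmt: str, post: dict) -> str:
    # Map the dialog's date tokens onto strftime codes.
    strftime_fmt = (date_fmt.replace("YYYY", "%Y")
                            .replace("MM", "%m")
                            .replace("DD", "%d"))
    values = dict(post)
    if "published" in values:
        dt = datetime.fromisoformat(values["published"])
        values["published"] = dt.strftime(strftime_fmt)
    return fmt.format(**values)

post = {"id": "12345", "title": "Beach Day", "published": "2024-06-01T12:00:00"}
print(apply_format("{published} {title} {id}", "YYYY-MM-DD", post))
# -> 2024-06-01 Beach Day 12345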
@@ -22,6 +22,8 @@ from ..main_window import get_app_icon_object
|
||||
from ...core.api_client import download_from_api
|
||||
from ...utils.network_utils import extract_post_info, prepare_cookies_for_request
|
||||
from ...utils.resolution import get_dark_theme
|
||||
# --- IMPORT THE NEW DIALOG ---
|
||||
from .UpdateCheckDialog import UpdateCheckDialog
|
||||
|
||||
|
||||
class PostsFetcherThread (QThread ):
|
||||
@@ -138,7 +140,7 @@ class EmptyPopupDialog (QDialog ):
|
||||
SCOPE_CREATORS ="Creators"
|
||||
|
||||
|
||||
def __init__ (self ,app_base_dir ,parent_app_ref ,parent =None ):
|
||||
def __init__ (self ,user_data_path ,parent_app_ref ,parent =None ):
|
||||
super ().__init__ (parent )
|
||||
self.parent_app = parent_app_ref
|
||||
|
||||
@@ -146,13 +148,21 @@ class EmptyPopupDialog (QDialog ):
|
||||
|
||||
self.setMinimumSize(int(400 * scale_factor), int(300 * scale_factor))
|
||||
self.current_scope_mode = self.SCOPE_CREATORS
|
||||
self .app_base_dir =app_base_dir
|
||||
self.user_data_path = user_data_path
|
||||
|
||||
app_icon =get_app_icon_object ()
|
||||
if app_icon and not app_icon .isNull ():
|
||||
self .setWindowIcon (app_icon )
|
||||
|
||||
# --- MODIFIED: Store a list of profiles now ---
|
||||
self.update_profiles_list = None
|
||||
# --- NEW: Flag to indicate if settings should load to UI ---
|
||||
self.load_settings_into_ui_requested = False
|
||||
|
||||
# --- DEPRECATED (kept for compatibility if needed, but new logic won't use them) ---
|
||||
self.update_profile_data = None
|
||||
self.update_creator_name = None
|
||||
|
||||
self .selected_creators_for_queue =[]
|
||||
self .globally_selected_creators ={}
|
||||
self .fetched_posts_data ={}
|
||||
@@ -321,29 +331,37 @@ class EmptyPopupDialog (QDialog ):
|
||||
pass
|
||||
|
||||
def _handle_update_check(self):
|
||||
"""Opens a dialog to select a creator profile and loads it for an update session."""
|
||||
appdata_dir = os.path.join(self.app_base_dir, "appdata")
|
||||
profiles_dir = os.path.join(appdata_dir, "creator_profiles")
|
||||
"""
|
||||
--- MODIFIED FUNCTION ---
|
||||
Opens the new UpdateCheckDialog instead of a QFileDialog.
|
||||
If a profile is selected, it sets the dialog's result properties
|
||||
and accepts the dialog, just like the old file dialog logic did.
|
||||
"""
|
||||
# --- NEW BEHAVIOR ---
|
||||
# Pass the app_base_dir and a reference to the main app (for translations/theme)
|
||||
dialog = UpdateCheckDialog(self.user_data_path, self.parent_app, self)
|
||||
|
||||
if not os.path.isdir(profiles_dir):
|
||||
QMessageBox.warning(self, "Directory Not Found", f"The creator profiles directory does not exist yet.\n\nPath: {profiles_dir}")
|
||||
return
|
||||
|
||||
filepath, _ = QFileDialog.getOpenFileName(self, "Select Creator Profile for Update", profiles_dir, "JSON Files (*.json)")
|
||||
|
||||
if filepath:
|
||||
try:
|
||||
with open(filepath, 'r', encoding='utf-8') as f:
|
||||
data = json.load(f)
|
||||
|
||||
if 'creator_url' not in data or 'processed_post_ids' not in data:
|
||||
raise ValueError("Invalid profile format.")
|
||||
|
||||
self.update_profile_data = data
|
||||
self.update_creator_name = os.path.basename(filepath).replace('.json', '')
|
||||
self.accept() # Close the dialog and signal success
|
||||
except Exception as e:
|
||||
QMessageBox.critical(self, "Error Loading Profile", f"Could not load or parse the selected profile file:\n\n{e}")
|
||||
if dialog.exec_() == QDialog.Accepted:
|
||||
# --- MODIFIED: Get a list of profiles now ---
|
||||
selected_profiles = dialog.get_selected_profiles()
|
||||
# --- NEW: Get the checkbox state ---
|
||||
self.load_settings_into_ui_requested = dialog.should_load_into_ui()
|
||||
|
||||
if selected_profiles:
|
||||
try:
|
||||
# --- MODIFIED: Store the list ---
|
||||
self.update_profiles_list = selected_profiles
|
||||
|
||||
# --- Set deprecated single-profile fields for backward compatibility (optional) ---
|
||||
# --- This helps if other parts of the main window still expect one profile ---
|
||||
self.update_profile_data = selected_profiles[0]['data']
|
||||
self.update_creator_name = selected_profiles[0]['name']
|
||||
|
||||
self.accept() # Close EmptyPopupDialog and signal success to main_window
|
||||
except Exception as e:
|
||||
QMessageBox.critical(self, "Error Loading Profile",
|
||||
f"Could not process the selected profile data:\n\n{e}")
|
||||
# --- END OF NEW BEHAVIOR ---
|
||||
|
||||
def _handle_fetch_posts_click (self ):
|
||||
selected_creators =list (self .globally_selected_creators .values ())
|
@@ -981,9 +999,14 @@ class EmptyPopupDialog(QDialog):
    def _handle_posts_close_view(self):
        self.right_pane_widget.hide()
        self.main_splitter.setSizes([self.width(), 0])
        self.posts_list_widget.itemChanged.disconnect(self._handle_post_item_check_changed)

        # --- MODIFIED: Added check before disconnect ---
        if hasattr(self, '_handle_post_item_check_changed'):
            self.posts_title_list_widget.itemChanged.disconnect(self._handle_post_item_check_changed)
        try:
            self.posts_title_list_widget.itemChanged.disconnect(self._handle_post_item_check_changed)
        except TypeError:
            pass  # Already disconnected

        self.posts_search_input.setVisible(False)
        self.posts_search_input.clear()
        self.globally_selected_post_ids.clear()
@@ -1035,4 +1058,4 @@ class EmptyPopupDialog(QDialog):
        else:
            if unique_key in self.globally_selected_creators:
                del self.globally_selected_creators[unique_key]
        self.fetch_posts_button.setEnabled(bool(self.globally_selected_creators))
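The try/except around disconnect above is the standard PyQt idiom for a signal that may or may not still be connected. A minimal self-contained sketch of the pattern (the widget and slot names here are stand-ins, not the app's real ones):

import sys
from PyQt5.QtWidgets import QApplication, QListWidget

def disconnect_safely(list_widget, slot):
    """Disconnect `slot` from itemChanged without raising if it was
    never connected (PyQt5 raises TypeError in that case)."""
    try:
        list_widget.itemChanged.disconnect(slot)
    except TypeError:
        pass  # already disconnected

app = QApplication(sys.argv)
widget = QListWidget()
handler = lambda item: None
widget.itemChanged.connect(handler)
disconnect_safely(widget, handler)  # disconnects
disconnect_safely(widget, handler)  # no-op, swallows the TypeError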
@@ -2,77 +2,55 @@
from PyQt5.QtCore import pyqtSignal, Qt
from PyQt5.QtWidgets import (
    QApplication, QDialog, QHBoxLayout, QLabel, QListWidget, QListWidgetItem,
    QMessageBox, QPushButton, QVBoxLayout, QAbstractItemView, QFileDialog
    QMessageBox, QPushButton, QVBoxLayout, QAbstractItemView, QFileDialog, QCheckBox
)

# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ..assets import get_app_icon_object
# Corrected Import: The filename uses PascalCase.
from .ExportOptionsDialog import ExportOptionsDialog
from ...utils.resolution import get_dark_theme
from ...config.constants import AUTO_RETRY_ON_FINISH_KEY

class ErrorFilesDialog(QDialog):
    """
    Dialog to display files that were skipped due to errors and
    allows the user to retry downloading them or export the list of URLs.
    """

    # Signal emitted with a list of file info dictionaries to retry
    retry_selected_signal = pyqtSignal(list)

    def __init__(self, error_files_info_list, parent_app, parent=None):
        """
        Initializes the dialog.

        Args:
            error_files_info_list (list): A list of dictionaries, each containing
                info about a failed file.
            parent_app (DownloaderApp): A reference to the main application window
                for theming and translations.
            parent (QWidget, optional): The parent widget. Defaults to None.
        """
        super().__init__(parent)
        self.parent_app = parent_app
        self.setModal(True)
        self.error_files = error_files_info_list

        # --- Basic Window Setup ---
        app_icon = get_app_icon_object()
        if app_icon and not app_icon.isNull():
            self.setWindowIcon(app_icon)

        # --- START OF FIX ---
        # Get the user-defined scale factor from the parent application.
        scale_factor = getattr(self.parent_app, 'scale_factor', 1.0)

        # Define base dimensions and apply the correct scale factor.
        base_width, base_height = 550, 400
        base_width, base_height = 600, 450
        self.setMinimumSize(int(base_width * scale_factor), int(base_height * scale_factor))
        self.resize(int(base_width * scale_factor * 1.1), int(base_height * scale_factor * 1.1))
        # --- END OF FIX ---

        # --- Initialize UI and Apply Theming ---
        self._init_ui()
        self._retranslate_ui()
        self._apply_theme()

    def _init_ui(self):
        """Initializes all UI components and layouts for the dialog."""
        main_layout = QVBoxLayout(self)

        self.info_label = QLabel()
        self.info_label.setWordWrap(True)
        main_layout.addWidget(self.info_label)

        if self.error_files:
            self.files_list_widget = QListWidget()
            self.files_list_widget.setSelectionMode(QAbstractItemView.NoSelection)
            self._populate_list()
            main_layout.addWidget(self.files_list_widget)
        self.files_list_widget = QListWidget()
        self.files_list_widget.setSelectionMode(QAbstractItemView.ExtendedSelection)
        main_layout.addWidget(self.files_list_widget)
        self._populate_list()

        # --- Control Buttons ---
        buttons_layout = QHBoxLayout()

        self.select_all_button = QPushButton()
        self.select_all_button.clicked.connect(self._select_all_items)
        buttons_layout.addWidget(self.select_all_button)
@@ -81,94 +59,170 @@ class ErrorFilesDialog(QDialog):
        self.retry_button.clicked.connect(self._handle_retry_selected)
        buttons_layout.addWidget(self.retry_button)

        self.load_button = QPushButton()
        self.load_button.clicked.connect(self._handle_load_errors_from_txt)
        buttons_layout.addWidget(self.load_button)

        self.export_button = QPushButton()
        self.export_button.clicked.connect(self._handle_export_errors_to_txt)
        buttons_layout.addWidget(self.export_button)

        # The stretch will push everything added after this point to the right
        buttons_layout.addStretch(1)

        # --- MOVED: Auto Retry Checkbox ---
        self.auto_retry_checkbox = QCheckBox()
        auto_retry_enabled = self.parent_app.settings.value(AUTO_RETRY_ON_FINISH_KEY, False, type=bool)
        self.auto_retry_checkbox.setChecked(auto_retry_enabled)
        self.auto_retry_checkbox.toggled.connect(self._save_auto_retry_setting)
        buttons_layout.addWidget(self.auto_retry_checkbox)
        # --- END ---

        self.ok_button = QPushButton()
        self.ok_button.clicked.connect(self.accept)
        self.ok_button.setDefault(True)
        buttons_layout.addWidget(self.ok_button)
        main_layout.addLayout(buttons_layout)

        # Enable/disable buttons based on whether there are errors
        has_errors = bool(self.error_files)
        self.select_all_button.setEnabled(has_errors)
        self.retry_button.setEnabled(has_errors)
        self.export_button.setEnabled(has_errors)

    def _populate_list(self):
        """Populates the list widget with details of the failed files."""
        self.files_list_widget.clear()
        for error_info in self.error_files:
            filename = error_info.get('forced_filename_override',
                                      error_info.get('file_info', {}).get('name', 'Unknown Filename'))
            post_title = error_info.get('post_title', 'Unknown Post')
            post_id = error_info.get('original_post_id_for_log', 'N/A')
            self._add_item_to_list(error_info)

            item_text = f"File: {filename}\nFrom Post: '{post_title}' (ID: {post_id})"
            list_item = QListWidgetItem(item_text)
            list_item.setData(Qt.UserRole, error_info)
            list_item.setFlags(list_item.flags() | Qt.ItemIsUserCheckable)
            list_item.setCheckState(Qt.Unchecked)
            self.files_list_widget.addItem(list_item)

    def _handle_load_errors_from_txt(self):
        """Opens a file dialog to load URLs from a .txt file."""
        import re

        filepath, _ = QFileDialog.getOpenFileName(
            self,
            self._tr("error_files_load_dialog_title", "Load Error File URLs"),
            "",
            "Text Files (*.txt);;All Files (*)"
        )

        if not filepath:
            return

        try:
            detailed_pattern = re.compile(r"^(https?://[^\s]+)\s*\[Post: '(.*?)' \(ID: (.*?)\), File: '(.*?)'\]$")
            simple_pattern = re.compile(r'^(https?://[^\s]+)')

            with open(filepath, 'r', encoding='utf-8') as f:
                for line in f:
                    line = line.strip()
                    if not line: continue

                    url, post_title, post_id, filename = None, 'Loaded from .txt', 'N/A', None

                    detailed_match = detailed_pattern.match(line)
                    if detailed_match:
                        url, post_title, post_id, filename = detailed_match.groups()
                    else:
                        simple_match = simple_pattern.match(line)
                        if simple_match:
                            url = simple_match.group(1)
                            filename = url.split('/')[-1]

                    if url:
                        simple_error_info = {
                            'is_loaded_from_txt': True, 'file_info': {'url': url, 'name': filename},
                            'post_title': post_title, 'original_post_id_for_log': post_id,
                            'target_folder_path': self.parent_app.dir_input.text().strip(),
                            'forced_filename_override': filename, 'file_index_in_post': 0,
                            'num_files_in_this_post': 1, 'service': None, 'user_id': None, 'api_url_input': ''
                        }
                        self.error_files.append(simple_error_info)
                        self._add_item_to_list(simple_error_info)

            self.info_label.setText(self._tr("error_files_found_label", "The following {count} file(s)...").format(count=len(self.error_files)))

            has_errors = bool(self.error_files)
            self.select_all_button.setEnabled(has_errors)
            self.retry_button.setEnabled(has_errors)
            self.export_button.setEnabled(has_errors)

        except Exception as e:
            QMessageBox.critical(self, self._tr("error_files_load_error_title", "Load Error"),
                                 self._tr("error_files_load_error_message", "Could not load or parse the file: {error}").format(error=str(e)))

    def _tr(self, key, default_text=""):
        """Helper to get translation based on the main application's current language."""
        if callable(get_translation) and self.parent_app:
            return get_translation(self.parent_app.current_selected_language, key, default_text)
        return default_text

    def _retranslate_ui(self):
        """Sets the text for all translatable UI elements."""
        self.setWindowTitle(self._tr("error_files_dialog_title", "Files Skipped Due to Errors"))
        if not self.error_files:
            self.info_label.setText(self._tr("error_files_no_errors_label", "No files were recorded as skipped..."))
        else:
            self.info_label.setText(self._tr("error_files_found_label", "The following {count} file(s)...").format(count=len(self.error_files)))

        self.select_all_button.setText(self._tr("error_files_select_all_button", "Select All"))
        self.auto_retry_checkbox.setText(self._tr("error_files_auto_retry_checkbox", "Auto Retry at End"))
        self.select_all_button.setText(self._tr("error_files_select_all_button", "Select/Deselect All"))
        self.retry_button.setText(self._tr("error_files_retry_selected_button", "Retry Selected"))
        self.load_button.setText(self._tr("error_files_load_urls_button", "Load URLs from .txt"))
        self.export_button.setText(self._tr("error_files_export_urls_button", "Export URLs to .txt"))
        self.ok_button.setText(self._tr("ok_button", "OK"))

    def _apply_theme(self):
        """Applies the current theme from the parent application."""
        if self.parent_app and self.parent_app.current_theme == "dark":
            # Get the scale factor from the parent app
            scale = getattr(self.parent_app, 'scale_factor', 1)
            # Call the imported function with the correct scale
            self.setStyleSheet(get_dark_theme(scale))
        else:
            # Explicitly set a blank stylesheet for light mode
            self.setStyleSheet("")

    def _save_auto_retry_setting(self, checked):
        """Saves the state of the auto-retry checkbox to QSettings."""
        self.parent_app.settings.setValue(AUTO_RETRY_ON_FINISH_KEY, checked)

    def _add_item_to_list(self, error_info):
        """Creates and adds a single QListWidgetItem based on error_info content."""
        if error_info.get('is_loaded_from_txt'):
            filename = error_info.get('file_info', {}).get('name', 'Unknown Filename')
            post_title = error_info.get('post_title', 'N/A')
            post_id = error_info.get('original_post_id_for_log', 'N/A')
            item_text = f"File: {filename}\nPost: '{post_title}' (ID: {post_id}) [Loaded from .txt]"
        else:
            filename = error_info.get('forced_filename_override', error_info.get('file_info', {}).get('name', 'Unknown Filename'))
            post_title = error_info.get('post_title', 'Unknown Post')
            post_id = error_info.get('original_post_id_for_log', 'N/A')
            creator_name = "Unknown Creator"
            service, user_id = error_info.get('service'), error_info.get('user_id')
            if service and user_id and hasattr(self.parent_app, 'creator_name_cache'):
                creator_name = self.parent_app.creator_name_cache.get((service.lower(), str(user_id)), user_id)
            item_text = f"File: {filename}\nCreator: {creator_name} - Post: '{post_title}' (ID: {post_id})"

        list_item = QListWidgetItem(item_text)
        list_item.setData(Qt.UserRole, error_info)
        list_item.setFlags(list_item.flags() | Qt.ItemIsUserCheckable)
        list_item.setCheckState(Qt.Unchecked)  # Start as unchecked
        self.files_list_widget.addItem(list_item)

    def _select_all_items(self):
        """Checks all items in the list."""
        if hasattr(self, 'files_list_widget'):
            for i in range(self.files_list_widget.count()):
                self.files_list_widget.item(i).setCheckState(Qt.Checked)
        """Toggles checking all items in the list."""
        # Determine if we should check or uncheck all based on the first item's state
        is_currently_checked = self.files_list_widget.item(0).checkState() == Qt.Checked if self.files_list_widget.count() > 0 else False
        new_state = Qt.Unchecked if is_currently_checked else Qt.Checked
        for i in range(self.files_list_widget.count()):
            self.files_list_widget.item(i).setCheckState(new_state)

    def _handle_retry_selected(self):
        """Gathers selected files and emits the retry signal."""
        if not hasattr(self, 'files_list_widget'):
            return

        selected_files_for_retry = [
            self.files_list_widget.item(i).data(Qt.UserRole)
            for i in range(self.files_list_widget.count())
            if self.files_list_widget.item(i).checkState() == Qt.Checked
        ]

        if selected_files_for_retry:
            self.retry_selected_signal.emit(selected_files_for_retry)
            self.accept()
        else:
            QMessageBox.information(
                self,
                self._tr("fav_artists_no_selection_title", "No Selection"),
                self._tr("error_files_no_selection_retry_message", "Please select at least one file to retry.")
            )
            QMessageBox.information(self, self._tr("fav_artists_no_selection_title", "No Selection"),
                                    self._tr("error_files_no_selection_retry_message", "Please check the box next to at least one file to retry."))

    def _handle_export_errors_to_txt(self):
        """Exports the URLs of failed files to a text file."""
@@ -193,10 +247,13 @@ class ErrorFilesDialog(QDialog):

        if url:
            if export_option == ExportOptionsDialog.EXPORT_MODE_WITH_DETAILS:
                original_filename = file_info.get('name', 'Unknown Filename')
                post_title = error_item.get('post_title', 'Unknown Post')
                post_id = error_item.get('original_post_id_for_log', 'N/A')
                details_string = f" [Post: '{post_title}' (ID: {post_id}), File: '{original_filename}']"

                # Prioritize the final renamed filename, but fall back to the original from the API
                filename_to_display = error_item.get('forced_filename_override') or file_info.get('name', 'Unknown Filename')

                details_string = f" [Post: '{post_title}' (ID: {post_id}), File: '{filename_to_display}']"
                lines_to_export.append(f"{url}{details_string}")
            else:
                lines_to_export.append(url)
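For reference, the detailed line that _handle_load_errors_from_txt parses back is exactly what the export-with-details branch above writes, so the two form a round trip. A minimal sketch (the URL and field values below are invented examples):

import re

url, post_title, post_id, filename = (
    "https://example.com/data/abcd.zip", "Monthly Pack", "12345", "abcd.zip"
)
# Line format produced by the detailed export:
line = f"{url} [Post: '{post_title}' (ID: {post_id}), File: '{filename}']"

# Pattern used by the loader:
detailed_pattern = re.compile(
    r"^(https?://[^\s]+)\s*\[Post: '(.*?)' \(ID: (.*?)\), File: '(.*?)'\]$"
)
match = detailed_pattern.match(line)
assert match and match.groups() == (url, post_title, post_id, filename)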
226 src/ui/dialogs/ExportLinksDialog.py (new file)
@@ -0,0 +1,226 @@
import os
import json
import re
from collections import defaultdict
from PyQt5.QtWidgets import (
    QApplication, QWidget, QLabel, QLineEdit, QTextEdit, QPushButton,
    QVBoxLayout, QHBoxLayout, QFileDialog, QMessageBox, QListWidget, QRadioButton,
    QButtonGroup, QCheckBox, QSplitter, QGroupBox, QDialog, QStackedWidget,
    QScrollArea, QListWidgetItem, QSizePolicy, QProgressBar, QAbstractItemView, QFrame,
    QMainWindow, QAction, QGridLayout,
)
from PyQt5.QtCore import Qt

class ExportLinksDialog(QDialog):
    """
    A dialog for exporting extracted links with various format options, including custom templates.
    """
    def __init__(self, links_data, parent=None):
        super().__init__(parent)
        self.links_data = links_data
        self.setWindowTitle("Export Extracted Links")
        self.setMinimumWidth(550)
        self._setup_ui()
        self._update_options_visibility()

    def _setup_ui(self):
        """Initializes the UI components of the dialog."""
        main_layout = QVBoxLayout(self)

        # Format Selection (Top Level)
        format_group = QGroupBox("Export Format")
        format_layout = QHBoxLayout()
        self.radio_txt = QRadioButton("Plain Text (.txt)")
        self.radio_json = QRadioButton("JSON (.json)")
        self.radio_txt.setChecked(True)
        format_layout.addWidget(self.radio_txt)
        format_layout.addWidget(self.radio_json)
        format_group.setLayout(format_layout)
        main_layout.addWidget(format_group)

        # TXT Options Group
        self.txt_options_group = QGroupBox("TXT Options")
        txt_options_layout = QVBoxLayout()

        self.txt_mode_group = QButtonGroup(self)
        self.radio_simple = QRadioButton("Simple (URL only, one per line)")
        self.radio_detailed = QRadioButton("Detailed (with checkboxes)")
        self.radio_custom = QRadioButton("Custom Format Template")

        self.txt_mode_group.addButton(self.radio_simple)
        self.txt_mode_group.addButton(self.radio_detailed)
        self.txt_mode_group.addButton(self.radio_custom)

        txt_options_layout.addWidget(self.radio_simple)
        txt_options_layout.addWidget(self.radio_detailed)

        self.detailed_options_widget = QWidget()
        detailed_layout = QVBoxLayout(self.detailed_options_widget)
        detailed_layout.setContentsMargins(20, 5, 0, 5)
        self.check_include_titles = QCheckBox("Include post titles as separators")
        self.check_include_link_text = QCheckBox("Include link text/description")
        self.check_include_platform = QCheckBox("Include platform (e.g., Mega, GDrive)")
        detailed_layout.addWidget(self.check_include_titles)
        detailed_layout.addWidget(self.check_include_link_text)
        detailed_layout.addWidget(self.check_include_platform)
        txt_options_layout.addWidget(self.detailed_options_widget)

        txt_options_layout.addWidget(self.radio_custom)

        self.custom_format_widget = QWidget()
        custom_layout = QVBoxLayout(self.custom_format_widget)
        custom_layout.setContentsMargins(20, 5, 0, 5)
        placeholders_label = QLabel("Available placeholders: <b>{url} {post_title} {link_text} {platform} {key}</b>")
        self.custom_format_input = QTextEdit()
        self.custom_format_input.setAcceptRichText(False)
        self.custom_format_input.setPlaceholderText("Enter your format, e.g., ({url}) or Title: {post_title}\\nLink: {url}")
        self.custom_format_input.setText("{url}")
        self.custom_format_input.setFixedHeight(80)
        custom_layout.addWidget(placeholders_label)
        custom_layout.addWidget(self.custom_format_input)
        txt_options_layout.addWidget(self.custom_format_widget)

        separator = QLabel("-" * 70)
        txt_options_layout.addWidget(separator)
        self.check_separate_files = QCheckBox("Save each platform to a separate file (e.g., export_mega.txt)")
        txt_options_layout.addWidget(self.check_separate_files)

        self.txt_options_group.setLayout(txt_options_layout)
        main_layout.addWidget(self.txt_options_group)

        # File Path Selection
        path_layout = QHBoxLayout()
        self.path_input = QLineEdit()
        self.browse_button = QPushButton("Browse...")
        path_layout.addWidget(self.path_input)
        path_layout.addWidget(self.browse_button)
        main_layout.addLayout(path_layout)

        # Action Buttons
        button_layout = QHBoxLayout()
        button_layout.addStretch(1)
        self.export_button = QPushButton("Export")
        self.cancel_button = QPushButton("Cancel")
        button_layout.addWidget(self.export_button)
        button_layout.addWidget(self.cancel_button)
        main_layout.addLayout(button_layout)

        # Connections
        self.radio_txt.toggled.connect(self._update_options_visibility)
        self.radio_simple.toggled.connect(self._update_options_visibility)
        self.radio_detailed.toggled.connect(self._update_options_visibility)
        self.radio_custom.toggled.connect(self._update_options_visibility)
        self.browse_button.clicked.connect(self._browse)
        self.export_button.clicked.connect(self._accept_and_export)
        self.cancel_button.clicked.connect(self.reject)

        self.radio_simple.setChecked(True)

    def _update_options_visibility(self):
        is_txt = self.radio_txt.isChecked()
        self.txt_options_group.setVisible(is_txt)

        self.detailed_options_widget.setVisible(is_txt and self.radio_detailed.isChecked())
        self.custom_format_widget.setVisible(is_txt and self.radio_custom.isChecked())

    def _browse(self, base_filepath):
        is_separate_files_mode = self.radio_txt.isChecked() and self.check_separate_files.isChecked()

        if is_separate_files_mode:
            dir_path = QFileDialog.getExistingDirectory(self, "Select Folder to Save Files")
            if dir_path:
                self.path_input.setText(os.path.join(dir_path, "exported_links"))
        else:
            default_filename = "exported_links"
            file_filter = "Text Files (*.txt)"
            if self.radio_json.isChecked():
                default_filename += ".json"
                file_filter = "JSON Files (*.json)"
            else:
                default_filename += ".txt"

            filepath, _ = QFileDialog.getSaveFileName(self, "Save Links", default_filename, file_filter)
            if filepath:
                self.path_input.setText(filepath)

    def _accept_and_export(self):
        filepath = self.path_input.text().strip()
        if not filepath:
            QMessageBox.warning(self, "Input Error", "Please select a file path or folder.")
            return

        try:
            if self.radio_txt.isChecked():
                self._write_txt_file(filepath)
            else:
                self._write_json_file(filepath)

            QMessageBox.information(self, "Export Successful", "Links successfully exported!")
            self.accept()
        except OSError as e:
            QMessageBox.critical(self, "Export Error", f"Could not write to file:\n{e}")

    def _write_txt_file(self, base_filepath):
        if self.check_separate_files.isChecked():
            links_by_platform = defaultdict(list)
            for _, _, link_url, platform, _ in self.links_data:
                sanitized_platform = re.sub(r'[<>:"/\\|?*]', '_', platform.lower().replace(' ', '_'))
                links_by_platform[sanitized_platform].append(link_url)

            base, ext = os.path.splitext(base_filepath)
            if not ext: ext = ".txt"

            for platform_key, links in links_by_platform.items():
                platform_filepath = f"{base}_{platform_key}{ext}"
                with open(platform_filepath, 'w', encoding='utf-8') as f:
                    for url in links:
                        f.write(url + "\n")
            return

        with open(base_filepath, 'w', encoding='utf-8') as f:
            if self.radio_simple.isChecked():
                for _, _, link_url, _, _ in self.links_data:
                    f.write(link_url + "\n")

            elif self.radio_detailed.isChecked():
                include_titles = self.check_include_titles.isChecked()
                include_text = self.check_include_link_text.isChecked()
                include_platform = self.check_include_platform.isChecked()
                current_title = None
                for post_title, link_text, link_url, platform, _ in self.links_data:
                    if include_titles and post_title != current_title:
                        if current_title is not None: f.write("\n" + "="*60 + "\n\n")
                        f.write(f"# Post: {post_title}\n")
                        current_title = post_title
                    line_parts = [link_url]
                    if include_platform: line_parts.append(f"Platform: {platform}")
                    if include_text and link_text: line_parts.append(f"Description: {link_text}")
                    f.write(" | ".join(line_parts) + "\n")

            elif self.radio_custom.isChecked():
                template = self.custom_format_input.toPlainText().replace("\\n", "\n")
                for post_title, link_text, link_url, platform, decryption_key in self.links_data:
                    formatted_line = template.format(
                        url=link_url,
                        post_title=post_title,
                        link_text=link_text,
                        platform=platform,
                        key=decryption_key or ""
                    )
                    f.write(formatted_line)
                    if not template.endswith('\n'):
                        f.write('\n')

    def _write_json_file(self, filepath):
        output_data = []
        for post_title, link_text, link_url, platform, decryption_key in self.links_data:
            output_data.append({
                "post_title": post_title,
                "url": link_url,
                "link_text": link_text,
                "platform": platform,
                "key": decryption_key or None
            })

        with open(filepath, 'w', encoding='utf-8') as f:
            json.dump(output_data, f, indent=2)
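As a quick illustration of the custom-template branch in _write_txt_file: each entry in links_data is a (post_title, link_text, link_url, platform, decryption_key) tuple, and the template is plain str.format substitution. The sample data below is invented:

links_data = [
    ("Post A", "Mega folder", "https://mega.nz/folder/abc", "Mega", "somekey"),
    ("Post B", "", "https://drive.google.com/xyz", "GDrive", None),
]
template = "Title: {post_title}\nLink: {url} [{platform}] key={key}"
for post_title, link_text, link_url, platform, key in links_data:
    # Same call the dialog makes; empty string stands in for a missing key.
    print(template.format(url=link_url, post_title=post_title,
                          link_text=link_text, platform=platform, key=key or ""))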
@@ -3,7 +3,7 @@ import html
import re

# --- Third-Party Library Imports ---
import requests
import cloudscraper  # MODIFIED: Import cloudscraper
from PyQt5.QtCore import QCoreApplication, Qt
from PyQt5.QtWidgets import (
    QApplication, QDialog, QHBoxLayout, QLabel, QLineEdit, QListWidget,
@@ -12,7 +12,6 @@ from PyQt5.QtWidgets import (

# --- Local Application Imports ---
from ...i18n.translator import get_translation
# Corrected Import: Get the icon from the new assets utility module
from ..assets import get_app_icon_object
from ...utils.network_utils import prepare_cookies_for_request
from .CookieHelpDialog import CookieHelpDialog
@@ -41,9 +40,9 @@ class FavoriteArtistsDialog(QDialog):
        service_lower = service_name.lower()
        coomer_primary_services = {'onlyfans', 'fansly', 'manyvids', 'candfans'}
        if service_lower in coomer_primary_services:
            return "coomer.st"  # Use the new domain
            return "coomer.st"
        else:
            return "kemono.cr"  # Use the new domain
            return "kemono.cr"

    def _tr(self, key, default_text=""):
        """Helper to get translation based on current app language."""
@@ -126,9 +125,11 @@ class FavoriteArtistsDialog(QDialog):
        self.artist_list_widget.setVisible(show)

    def _fetch_favorite_artists(self):
        # --- FIX: Use cloudscraper and add proper headers ---
        scraper = cloudscraper.create_scraper()
        # --- END FIX ---

        if self.cookies_config['use_cookie']:
            # --- Kemono Check with Fallback ---
            kemono_cookies = prepare_cookies_for_request(
                True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
                self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.cr"
@@ -140,7 +141,6 @@ class FavoriteArtistsDialog(QDialog):
                self.cookies_config['app_base_dir'], self._logger, target_domain="kemono.su"
            )

            # --- Coomer Check with Fallback ---
            coomer_cookies = prepare_cookies_for_request(
                True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
                self.cookies_config['app_base_dir'], self._logger, target_domain="coomer.st"
@@ -153,28 +153,21 @@ class FavoriteArtistsDialog(QDialog):
            )

            if not kemono_cookies and not coomer_cookies:
                # If cookies are enabled but none could be loaded, show help and stop.
                self.status_label.setText(self._tr("fav_artists_cookies_required_status", "Error: Cookies enabled but could not be loaded for any source."))
                self._logger("Error: Cookies enabled but no valid cookies were loaded. Showing help dialog.")
                cookie_help_dialog = CookieHelpDialog(self.parent_app, self)
                cookie_help_dialog.exec_()
                self.download_button.setEnabled(False)
                return  # Stop further execution

        kemono_fav_url = "https://kemono.su/api/v1/account/favorites?type=artist"
        coomer_fav_url = "https://coomer.su/api/v1/account/favorites?type=artist"
                return

        self.all_fetched_artists = []
        fetched_any_successfully = False
        errors_occurred = []
        any_cookies_loaded_successfully_for_any_source = False

        kemono_cr_fav_url = "https://kemono.cr/api/v1/account/favorites?type=artist"
        coomer_st_fav_url = "https://coomer.st/api/v1/account/favorites?type=artist"

        api_sources = [
            {"name": "Kemono.cr", "url": kemono_cr_fav_url, "domain": "kemono.cr"},
            {"name": "Coomer.st", "url": coomer_st_fav_url, "domain": "coomer.st"}
            {"name": "Kemono.cr", "url": "https://kemono.cr/api/v1/account/favorites?type=artist", "domain": "kemono.cr"},
            {"name": "Coomer.st", "url": "https://coomer.st/api/v1/account/favorites?type=artist", "domain": "coomer.st"}
        ]

        for source in api_sources:
@@ -185,41 +178,36 @@ class FavoriteArtistsDialog(QDialog):
            cookies_dict_for_source = None
            if self.cookies_config['use_cookie']:
                primary_domain = source['domain']
                fallback_domain = None
                if primary_domain == "kemono.cr":
                    fallback_domain = "kemono.su"
                elif primary_domain == "coomer.st":
                    fallback_domain = "coomer.su"
                fallback_domain = "kemono.su" if "kemono" in primary_domain else "coomer.su"

                # First, try the primary domain
                cookies_dict_for_source = prepare_cookies_for_request(
                    True,
                    self.cookies_config['cookie_text'],
                    self.cookies_config['selected_cookie_file'],
                    self.cookies_config['app_base_dir'],
                    self._logger,
                    target_domain=primary_domain
                    True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
                    self.cookies_config['app_base_dir'], self._logger, target_domain=primary_domain
                )

                # If no cookies found, try the fallback domain
                if not cookies_dict_for_source and fallback_domain:
                    self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
                if not cookies_dict_for_source:
                    self._logger(f"Warning ({source['name']}): No cookies for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
                    cookies_dict_for_source = prepare_cookies_for_request(
                        True,
                        self.cookies_config['cookie_text'],
                        self.cookies_config['selected_cookie_file'],
                        self.cookies_config['app_base_dir'],
                        self._logger,
                        target_domain=fallback_domain
                        True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
                        self.cookies_config['app_base_dir'], self._logger, target_domain=fallback_domain
                    )

                if cookies_dict_for_source:
                    any_cookies_loaded_successfully_for_any_source = True
                else:
                    self._logger(f"Warning ({source['name']}): Cookies enabled but could not be loaded for this source (including fallbacks). Fetch might fail.")
                    self._logger(f"Warning ({source['name']}): Cookies enabled but not loaded for this source. Fetch may fail.")
            try:
                headers = {'User-Agent': 'Mozilla/5.0'}
                response = requests.get(source['url'], headers=headers, cookies=cookies_dict_for_source, timeout=20)
                # --- FIX: Add Referer and Accept headers ---
                headers = {
                    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
                    'Referer': f"https://{source['domain']}/favorites",
                    'Accept': 'text/css'
                }
                # --- END FIX ---

                # --- FIX: Use scraper instead of requests ---
                response = scraper.get(source['url'], headers=headers, cookies=cookies_dict_for_source, timeout=20)
                # --- END FIX ---

                response.raise_for_status()
                artists_data_from_api = response.json()

@@ -254,15 +242,10 @@ class FavoriteArtistsDialog(QDialog):
                fetched_any_successfully = True
                self._logger(f"Fetched {processed_artists_from_source} artists from {source['name']}.")

            except requests.exceptions.RequestException as e:
            except Exception as e:
                error_msg = f"Error fetching favorites from {source['name']}: {e}"
                self._logger(error_msg)
                errors_occurred.append(error_msg)
            except Exception as e:
                error_msg = f"An unexpected error occurred with {source['name']}: {e}"
                self._logger(error_msg)
                errors_occurred.append(error_msg)

        if self.cookies_config['use_cookie'] and not any_cookies_loaded_successfully_for_any_source:
            self.status_label.setText(self._tr("fav_artists_cookies_required_status", "Error: Cookies enabled but could not be loaded for any source."))
@@ -288,7 +271,7 @@ class FavoriteArtistsDialog(QDialog):
            self._show_content_elements(True)
            self.download_button.setEnabled(True)
        elif not fetched_any_successfully and not errors_occurred:
            self.status_label.setText(self._tr("fav_artists_none_found_status", "No favorite artists found on Kemono.su or Coomer.su."))
            self.status_label.setText(self._tr("fav_artists_none_found_status", "No favorite artists found on Kemono or Coomer."))
            self._show_content_elements(False)
            self.download_button.setEnabled(False)
        else:
@@ -344,4 +327,4 @@ class FavoriteArtistsDialog(QDialog):
        self.accept()

    def get_selected_artists(self):
        return self.selected_artists_data
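Stripped of the dialog plumbing, the essence of the cloudscraper fix above is small. A sketch under the assumption that the cookies dict has already been prepared (the function name here is hypothetical, not part of the app):

import cloudscraper

def fetch_favorites_json(url, domain, cookies=None):
    # cloudscraper returns a requests.Session subclass that transparently
    # solves Cloudflare's JavaScript challenge before the real request.
    scraper = cloudscraper.create_scraper()
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                      'AppleWebKit/537.36 (KHTML, like Gecko) '
                      'Chrome/91.0.4472.124 Safari/537.36',
        'Referer': f"https://{domain}/favorites",
        'Accept': 'text/css',  # same header the fix sends
    }
    response = scraper.get(url, headers=headers, cookies=cookies, timeout=20)
    response.raise_for_status()
    return response.json()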
@@ -7,7 +7,7 @@ import traceback
import json
import re
from collections import defaultdict
import requests
import cloudscraper  # MODIFIED: Import cloudscraper
from PyQt5.QtCore import QCoreApplication, Qt, pyqtSignal, QThread
from PyQt5.QtWidgets import (
    QApplication, QDialog, QHBoxLayout, QLabel, QLineEdit, QListWidget,
@@ -42,10 +42,9 @@ class FavoritePostsFetcherThread(QThread):
        self.parent_logger_func(f"[FavPostsFetcherThread] {message}")

    def run(self):
        kemono_su_fav_posts_url = "https://kemono.su/api/v1/account/favorites?type=post"
        coomer_su_fav_posts_url = "https://coomer.su/api/v1/account/favorites?type=post"
        kemono_cr_fav_posts_url = "https://kemono.cr/api/v1/account/favorites?type=post"
        coomer_st_fav_posts_url = "https://coomer.st/api/v1/account/favorites?type=post"
        # --- FIX: Use cloudscraper and add proper headers ---
        scraper = cloudscraper.create_scraper()
        # --- END FIX ---

        all_fetched_posts_temp = []
        error_messages_for_summary = []
@@ -56,8 +55,8 @@ class FavoritePostsFetcherThread(QThread):
        self.progress_bar_update.emit(0, 0)

        api_sources = [
            {"name": "Kemono.cr", "url": kemono_cr_fav_posts_url, "domain": "kemono.cr"},
            {"name": "Coomer.st", "url": coomer_st_fav_posts_url, "domain": "coomer.st"}
            {"name": "Kemono.cr", "url": "https://kemono.cr/api/v1/account/favorites?type=post", "domain": "kemono.cr"},
            {"name": "Coomer.st", "url": "https://coomer.st/api/v1/account/favorites?type=post", "domain": "coomer.st"}
        ]

        api_sources_to_try = []
@@ -81,32 +80,18 @@ class FavoritePostsFetcherThread(QThread):
            cookies_dict_for_source = None
            if self.cookies_config['use_cookie']:
                primary_domain = source['domain']
                fallback_domain = None
                if primary_domain == "kemono.cr":
                    fallback_domain = "kemono.su"
                elif primary_domain == "coomer.st":
                    fallback_domain = "coomer.su"
                fallback_domain = "kemono.su" if "kemono" in primary_domain else "coomer.su"

                # First, try the primary domain
                cookies_dict_for_source = prepare_cookies_for_request(
                    True,
                    self.cookies_config['cookie_text'],
                    self.cookies_config['selected_cookie_file'],
                    self.cookies_config['app_base_dir'],
                    self._logger,
                    target_domain=primary_domain
                    True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
                    self.cookies_config['app_base_dir'], self._logger, target_domain=primary_domain
                )

                # If no cookies found, try the fallback domain
                if not cookies_dict_for_source and fallback_domain:
                    self._logger(f"Warning ({source['name']}): No cookies found for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
                    self._logger(f"Warning ({source['name']}): No cookies for '{primary_domain}'. Trying fallback '{fallback_domain}'...")
                    cookies_dict_for_source = prepare_cookies_for_request(
                        True,
                        self.cookies_config['cookie_text'],
                        self.cookies_config['selected_cookie_file'],
                        self.cookies_config['app_base_dir'],
                        self._logger,
                        target_domain=fallback_domain
                        True, self.cookies_config['cookie_text'], self.cookies_config['selected_cookie_file'],
                        self.cookies_config['app_base_dir'], self._logger, target_domain=fallback_domain
                    )

                if cookies_dict_for_source:
@@ -120,8 +105,18 @@ class FavoritePostsFetcherThread(QThread):
            QCoreApplication.processEvents()

            try:
                headers = {'User-Agent': 'Mozilla/5.0'}
                response = requests.get(source['url'], headers=headers, cookies=cookies_dict_for_source, timeout=20)
                # --- FIX: Add Referer and Accept headers ---
                headers = {
                    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36',
                    'Referer': f"https://{source['domain']}/favorites",
                    'Accept': 'text/css'
                }
                # --- END FIX ---

                # --- FIX: Use scraper instead of requests ---
                response = scraper.get(source['url'], headers=headers, cookies=cookies_dict_for_source, timeout=20)
                # --- END FIX ---

                response.raise_for_status()
                posts_data_from_api = response.json()

@@ -153,33 +148,24 @@ class FavoritePostsFetcherThread(QThread):
                fetched_any_successfully = True
                self._logger(f"Fetched {processed_posts_from_source} posts from {source['name']}.")

            except requests.exceptions.RequestException as e:
            except Exception as e:
                err_detail = f"Error fetching favorite posts from {source['name']}: {e}"
                self._logger(err_detail)
                error_messages_for_summary.append(err_detail)
                if e.response is not None and e.response.status_code == 401:
                if hasattr(e, 'response') and e.response is not None and e.response.status_code == 401:
                    self.finished.emit([], "KEY_AUTH_FAILED")
                    self._logger(f"Authorization failed for {source['name']}, emitting KEY_AUTH_FAILED.")
                    return
            except Exception as e:
                err_detail = f"An unexpected error occurred with {source['name']}: {e}"
                self._logger(err_detail)
                error_messages_for_summary.append(err_detail)

        if self.cancellation_event.is_set():
            self.finished.emit([], "KEY_FETCH_CANCELLED_AFTER")
            return

        if self.cookies_config['use_cookie'] and not any_cookies_loaded_successfully_for_any_source:

        if self.target_domain_preference and not any_cookies_loaded_successfully_for_any_source:

            domain_key_part = self.error_key_map.get(self.target_domain_preference, self.target_domain_preference.lower().replace('.', '_'))
            self.finished.emit([], f"KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_FOR_DOMAIN_{domain_key_part}")
            return

        self.finished.emit([], "KEY_COOKIES_REQUIRED_BUT_NOT_FOUND_GENERIC")
        return

@@ -643,4 +629,4 @@ class FavoritePostsDialog(QDialog):
        self.accept()

    def get_selected_posts(self):
        return self.selected_posts_data
@@ -1,42 +1,133 @@
# --- Standard Library Imports ---
import os
import json
import sys

# --- PyQt5 Imports ---
from PyQt5.QtCore import Qt, QStandardPaths
from PyQt5.QtCore import Qt, QStandardPaths, QTimer
from PyQt5.QtWidgets import (
    QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
    QGroupBox, QComboBox, QMessageBox, QGridLayout, QCheckBox
    QGroupBox, QComboBox, QMessageBox, QGridLayout, QCheckBox, QLineEdit,
    QTabWidget, QWidget, QFileDialog  # Added QFileDialog
)

# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ...utils.resolution import get_dark_theme
from ..assets import get_app_icon_object

from ..main_window import get_app_icon_object
from ...config.constants import (
    THEME_KEY, LANGUAGE_KEY, DOWNLOAD_LOCATION_KEY,
    RESOLUTION_KEY, UI_SCALE_KEY, SAVE_CREATOR_JSON_KEY,
    COOKIE_TEXT_KEY, USE_COOKIE_KEY
    DATE_PREFIX_FORMAT_KEY,
    COOKIE_TEXT_KEY, USE_COOKIE_KEY,
    FETCH_FIRST_KEY, DISCORD_TOKEN_KEY, POST_DOWNLOAD_ACTION_KEY
)
from ...services.updater import UpdateChecker, UpdateDownloader

class CountdownMessageBox(QDialog):
    """
    A custom message box that includes a countdown timer for the 'Yes' button,
    which automatically accepts the dialog when the timer reaches zero.
    """
    def __init__(self, title, text, countdown_seconds=10, parent_app=None, parent=None):
        super().__init__(parent)
        self.parent_app = parent_app
        self.countdown = countdown_seconds

        # --- Basic Window Setup ---
        self.setWindowTitle(title)
        self.setModal(True)
        app_icon = get_app_icon_object()
        if app_icon and not app_icon.isNull():
            self.setWindowIcon(app_icon)

        self._init_ui(text)
        self._apply_theme()

        # --- Timer Setup ---
        self.timer = QTimer(self)
        self.timer.setInterval(1000)  # Tick every second
        self.timer.timeout.connect(self._update_countdown)
        self.timer.start()

    def _init_ui(self, text):
        """Initializes the UI components of the dialog."""
        main_layout = QVBoxLayout(self)

        self.message_label = QLabel(text)
        self.message_label.setWordWrap(True)
        self.message_label.setAlignment(Qt.AlignCenter)
        main_layout.addWidget(self.message_label)

        buttons_layout = QHBoxLayout()
        buttons_layout.addStretch(1)

        self.yes_button = QPushButton()
        self.yes_button.clicked.connect(self.accept)
        self.yes_button.setDefault(True)

        self.no_button = QPushButton()
        self.no_button.clicked.connect(self.reject)

        buttons_layout.addWidget(self.yes_button)
        buttons_layout.addWidget(self.no_button)
        buttons_layout.addStretch(1)

        main_layout.addLayout(buttons_layout)

        self._retranslate_ui()
        self._update_countdown()  # Initial text setup

    def _tr(self, key, default_text=""):
        """Helper for translations."""
        if self.parent_app and hasattr(self.parent_app, 'current_selected_language'):
            return get_translation(self.parent_app.current_selected_language, key, default_text)
        return default_text

    def _retranslate_ui(self):
        """Sets translated text for UI elements."""
        self.no_button.setText(self._tr("no_button_text", "No"))
        # The 'yes' button text is handled by the countdown

    def _update_countdown(self):
        """Updates the countdown and button text each second."""
        if self.countdown <= 0:
            self.timer.stop()
            self.accept()  # Automatically accept when countdown finishes
            return

        yes_text = self._tr("yes_button_text", "Yes")
        self.yes_button.setText(f"{yes_text} ({self.countdown})")
        self.countdown -= 1

    def _apply_theme(self):
        """Applies the current theme from the parent application."""
        if self.parent_app and hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
            scale = getattr(self.parent_app, 'scale_factor', 1)
            self.setStyleSheet(get_dark_theme(scale))
        else:
            self.setStyleSheet("")
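Typical use of CountdownMessageBox looks like the sketch below; the surrounding names (app, main_window, retry_failed_downloads) are assumptions for illustration, not the app's actual call site:

box = CountdownMessageBox(
    "Auto Retry",
    "Retry failed downloads now? (auto-confirms when the timer expires)",
    countdown_seconds=10,
    parent_app=app,       # hypothetical DownloaderApp instance
    parent=main_window,   # hypothetical parent widget
)
if box.exec_() == QDialog.Accepted:  # also reached when the countdown hits zero
    retry_failed_downloads()         # hypothetical callback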
class FutureSettingsDialog(QDialog):
    """
    A dialog for managing application-wide settings like theme, language,
    and display options, with an organized layout.
    and display options, using a tabbed layout.
    """
    def __init__(self, parent_app_ref, parent=None):
        super().__init__(parent)
        self.parent_app = parent_app_ref
        self.setModal(True)
        self.update_downloader_thread = None  # To keep a reference

        app_icon = get_app_icon_object()
        if app_icon and not app_icon.isNull():
            self.setWindowIcon(app_icon)

        screen_height = QApplication.primaryScreen().availableGeometry().height() if QApplication.primaryScreen() else 800
        scale_factor = screen_height / 800.0
        base_min_w, base_min_h = 420, 360  # Adjusted height for new layout
        # Use a more balanced aspect ratio
        scale_factor = screen_height / 1000.0
        base_min_w, base_min_h = 480, 420  # Wider, less tall
        scaled_min_w = int(base_min_w * scale_factor)
        scaled_min_h = int(base_min_h * scale_factor)
        self.setMinimumSize(scaled_min_w, scaled_min_h)
@@ -48,121 +139,287 @@ class FutureSettingsDialog(QDialog):
|
||||
def _init_ui(self):
|
||||
"""Initializes all UI components and layouts for the dialog."""
|
||||
main_layout = QVBoxLayout(self)
|
||||
|
||||
# --- Create Tab Widget ---
|
||||
self.tab_widget = QTabWidget()
|
||||
main_layout.addWidget(self.tab_widget)
|
||||
|
||||
# --- Group 1: Interface Settings ---
|
||||
self.interface_group_box = QGroupBox()
|
||||
interface_layout = QGridLayout(self.interface_group_box)
|
||||
# --- Create Tabs ---
|
||||
self.display_tab = QWidget()
|
||||
self.downloads_tab = QWidget()
|
||||
self.updates_tab = QWidget()
|
||||
|
||||
# Add tabs to the widget
|
||||
self.tab_widget.addTab(self.display_tab, "Display")
|
||||
self.tab_widget.addTab(self.downloads_tab, "Downloads")
|
||||
self.tab_widget.addTab(self.updates_tab, "Updates")
|
||||
|
||||
# --- Populate Display Tab ---
|
||||
display_tab_layout = QVBoxLayout(self.display_tab)
|
||||
self.display_group_box = QGroupBox()
|
||||
display_layout = QGridLayout(self.display_group_box)
|
||||
|
||||
# Theme
|
||||
self.theme_label = QLabel()
|
||||
self.theme_toggle_button = QPushButton()
|
||||
self.theme_toggle_button.clicked.connect(self._toggle_theme)
|
||||
interface_layout.addWidget(self.theme_label, 0, 0)
|
||||
interface_layout.addWidget(self.theme_toggle_button, 0, 1)
|
||||
display_layout.addWidget(self.theme_label, 0, 0)
|
||||
display_layout.addWidget(self.theme_toggle_button, 0, 1)
|
||||
|
||||
# UI Scale
|
||||
self.ui_scale_label = QLabel()
|
||||
self.ui_scale_combo_box = QComboBox()
|
||||
self.ui_scale_combo_box.currentIndexChanged.connect(self._display_setting_changed)
|
||||
interface_layout.addWidget(self.ui_scale_label, 1, 0)
|
||||
interface_layout.addWidget(self.ui_scale_combo_box, 1, 1)
|
||||
|
||||
# Language
|
||||
display_layout.addWidget(self.ui_scale_label, 1, 0)
|
||||
display_layout.addWidget(self.ui_scale_combo_box, 1, 1)
|
||||
|
||||
self.language_label = QLabel()
|
||||
self.language_combo_box = QComboBox()
|
||||
self.language_combo_box.currentIndexChanged.connect(self._language_selection_changed)
|
||||
interface_layout.addWidget(self.language_label, 2, 0)
|
||||
interface_layout.addWidget(self.language_combo_box, 2, 1)
|
||||
display_layout.addWidget(self.language_label, 2, 0)
|
||||
display_layout.addWidget(self.language_combo_box, 2, 1)
|
||||
|
||||
main_layout.addWidget(self.interface_group_box)
|
||||
|
||||
# --- Group 2: Download & Window Settings ---
|
||||
self.download_window_group_box = QGroupBox()
|
||||
download_window_layout = QGridLayout(self.download_window_group_box)
|
||||
|
||||
# Window Size (Resolution)
|
||||
self.window_size_label = QLabel()
|
||||
self.resolution_combo_box = QComboBox()
|
||||
self.resolution_combo_box.currentIndexChanged.connect(self._display_setting_changed)
|
||||
download_window_layout.addWidget(self.window_size_label, 0, 0)
|
||||
download_window_layout.addWidget(self.resolution_combo_box, 0, 1)
|
||||
display_layout.addWidget(self.window_size_label, 3, 0)
|
||||
display_layout.addWidget(self.resolution_combo_box, 3, 1)
|
||||
|
||||
display_tab_layout.addWidget(self.display_group_box)
|
||||
display_tab_layout.addStretch(1) # Push content to the top
|
||||
|
||||
# --- Populate Downloads Tab ---
|
||||
downloads_tab_layout = QVBoxLayout(self.downloads_tab)
|
||||
self.download_settings_group_box = QGroupBox()
|
||||
download_settings_layout = QGridLayout(self.download_settings_group_box)
|
||||
|
||||
# Default Path
|
||||
self.default_path_label = QLabel()
|
||||
self.save_path_button = QPushButton()
|
||||
# --- START: MODIFIED LOGIC ---
|
||||
self.save_path_button.clicked.connect(self._save_cookie_and_path)
|
||||
# --- END: MODIFIED LOGIC ---
|
||||
download_window_layout.addWidget(self.default_path_label, 1, 0)
|
||||
download_window_layout.addWidget(self.save_path_button, 1, 1)
|
||||
self.save_path_button.clicked.connect(self._save_settings)
|
||||
download_settings_layout.addWidget(self.default_path_label, 0, 0)
|
||||
download_settings_layout.addWidget(self.save_path_button, 0, 1)
|
||||
|
||||
self.post_download_action_label = QLabel()
|
||||
self.post_download_action_combo = QComboBox()
|
||||
self.post_download_action_combo.currentIndexChanged.connect(self._post_download_action_changed)
|
||||
download_settings_layout.addWidget(self.post_download_action_label, 1, 0)
|
||||
download_settings_layout.addWidget(self.post_download_action_combo, 1, 1)
|
||||
|
||||
self.date_prefix_format_label = QLabel()
|
||||
self.date_prefix_format_input = QLineEdit()
|
||||
self.date_prefix_format_input.textChanged.connect(self._date_prefix_format_changed)
|
||||
download_settings_layout.addWidget(self.date_prefix_format_label, 2, 0)
|
||||
download_settings_layout.addWidget(self.date_prefix_format_input, 2, 1)
|
||||
|
||||
# Save Creator.json Checkbox
|
||||
self.save_creator_json_checkbox = QCheckBox()
|
||||
self.save_creator_json_checkbox.stateChanged.connect(self._creator_json_setting_changed)
|
||||
download_window_layout.addWidget(self.save_creator_json_checkbox, 2, 0, 1, 2)
|
||||
self.save_creator_json_checkbox.stateChanged.connect(self._creator_json_setting_changed)
|
||||
download_settings_layout.addWidget(self.save_creator_json_checkbox, 3, 0, 1, 2)
|
||||
|
||||
self.fetch_first_checkbox = QCheckBox()
|
||||
self.fetch_first_checkbox.stateChanged.connect(self._fetch_first_setting_changed)
|
||||
download_settings_layout.addWidget(self.fetch_first_checkbox, 4, 0, 1, 2)
|
||||
|
||||
main_layout.addWidget(self.download_window_group_box)
|
||||
# --- START: Add new Load/Save buttons ---
|
||||
settings_file_layout = QHBoxLayout()
|
||||
self.load_settings_button = QPushButton()
|
||||
self.save_settings_button = QPushButton()
|
||||
settings_file_layout.addWidget(self.load_settings_button)
|
||||
settings_file_layout.addWidget(self.save_settings_button)
|
||||
settings_file_layout.addStretch(1)
|
||||
|
||||
# Add this new layout to the grid
|
||||
download_settings_layout.addLayout(settings_file_layout, 5, 0, 1, 2) # Row 5, span 2 cols
|
||||
|
||||
# Connect signals
|
||||
self.load_settings_button.clicked.connect(self._handle_load_settings)
|
||||
self.save_settings_button.clicked.connect(self._handle_save_settings)
|
||||
# --- END: Add new Load/Save buttons ---
|
||||
|
||||
main_layout.addStretch(1)
|
||||
downloads_tab_layout.addWidget(self.download_settings_group_box)
|
||||
downloads_tab_layout.addStretch(1) # Push content to the top
|
||||
|
||||
# --- OK Button ---
|
||||
# --- Populate Updates Tab ---
|
||||
updates_tab_layout = QVBoxLayout(self.updates_tab)
|
||||
self.update_group_box = QGroupBox()
|
||||
update_layout = QGridLayout(self.update_group_box)
|
||||
self.version_label = QLabel()
|
||||
self.update_status_label = QLabel()
|
||||
self.check_update_button = QPushButton()
|
||||
self.check_update_button.clicked.connect(self._check_for_updates)
|
||||
update_layout.addWidget(self.version_label, 0, 0)
|
||||
update_layout.addWidget(self.update_status_label, 0, 1)
|
||||
update_layout.addWidget(self.check_update_button, 1, 0, 1, 2)
|
||||
|
||||
updates_tab_layout.addWidget(self.update_group_box)
|
||||
updates_tab_layout.addStretch(1) # Push content to the top
|
||||
|
||||
# --- OK Button (outside tabs) ---
|
||||
button_layout = QHBoxLayout()
|
||||
button_layout.addStretch(1)
|
||||
self.ok_button = QPushButton()
|
||||
self.ok_button.clicked.connect(self.accept)
|
||||
main_layout.addWidget(self.ok_button, 0, Qt.AlignRight | Qt.AlignBottom)
|
||||
button_layout.addWidget(self.ok_button)
|
||||
main_layout.addLayout(button_layout)
|
||||
|
||||
|
||||
def _retranslate_ui(self):
|
||||
self.setWindowTitle(self._tr("settings_dialog_title", "Settings"))
|
||||
|
||||
# --- Tab Titles ---
|
||||
self.tab_widget.setTabText(0, self._tr("settings_tab_display", "Display"))
|
||||
self.tab_widget.setTabText(1, self._tr("settings_tab_downloads", "Downloads"))
|
||||
self.tab_widget.setTabText(2, self._tr("settings_tab_updates", "Updates"))
|
||||
|
||||
# --- Display Tab ---
|
||||
self.display_group_box.setTitle(self._tr("display_settings_group_title", "Display Settings"))
|
||||
self.theme_label.setText(self._tr("theme_label", "Theme:"))
|
||||
self.ui_scale_label.setText(self._tr("ui_scale_label", "UI Scale:"))
|
||||
self.language_label.setText(self._tr("language_label", "Language:"))
|
||||
self.window_size_label.setText(self._tr("window_size_label", "Window Size:"))
|
||||
|
||||
# --- Downloads Tab ---
|
||||
self.download_settings_group_box.setTitle(self._tr("download_settings_group_title", "Download Settings"))
|
||||
self.default_path_label.setText(self._tr("default_path_label", "Default Path:"))
|
||||
self.date_prefix_format_label.setText(self._tr("date_prefix_format_label", "Post Subfolder Format:"))
|
||||
self.date_prefix_format_input.setPlaceholderText(self._tr("date_prefix_format_placeholder", "e.g., YYYY-MM-DD {post} {postid}"))
|
||||
self.date_prefix_format_input.setToolTip(self._tr(
|
||||
"date_prefix_format_tooltip",
|
||||
"Create a custom folder name using placeholders:\n"
|
||||
"• YYYY, MM, DD: for the date\n"
|
||||
"• {post}: for the post title\n"
|
||||
"• {postid}: for the post's unique ID\n\n"
|
||||
"Example: {post} [{postid}] [YYYY-MM-DD]"
|
||||
))
        self.post_download_action_label.setText(self._tr("post_download_action_label", "Action After Download:"))
        self.save_creator_json_checkbox.setText(self._tr("save_creator_json_label", "Save Creator.json file"))
        self.fetch_first_checkbox.setText(self._tr("fetch_first_label", "Fetch First (Download after all pages are found)"))
        self.fetch_first_checkbox.setToolTip(self._tr("fetch_first_tooltip", "If checked, the downloader will find all posts from a creator first before starting any downloads.\nThis can be slower to start but provides a more accurate progress bar."))
        self.save_path_button.setText(self._tr("settings_save_all_button", "Save Path + Cookie + Token"))
        self.save_path_button.setToolTip(self._tr("settings_save_all_tooltip", "Save the current 'Download Location', Cookie, and Discord Token settings for future sessions."))

        # --- START: Add new button text ---
        self.load_settings_button.setText(self._tr("load_settings_button", "Load Settings..."))
        self.load_settings_button.setToolTip(self._tr("load_settings_tooltip", "Load all download settings from a .json file."))
        self.save_settings_button.setText(self._tr("save_settings_button", "Save Settings..."))
        self.save_settings_button.setToolTip(self._tr("save_settings_tooltip", "Save all current download settings to a .json file."))
        # --- END: Add new button text ---

        # --- Updates Tab ---
        self.update_group_box.setTitle(self._tr("update_group_title", "Application Updates"))
        current_version = self.parent_app.windowTitle().split(' v')[-1]
        self.version_label.setText(self._tr("current_version_label", f"Current Version: v{current_version}"))
        self.update_status_label.setText(self._tr("update_status_ready", "Ready to check."))
        self.check_update_button.setText(self._tr("check_for_updates_button", "Check for Updates"))

        # --- General ---
        self._update_theme_toggle_button_text()
        self.ok_button.setText(self._tr("ok_button", "OK"))

        # --- Load Data ---
        self._populate_display_combo_boxes()
        self._populate_language_combo_box()
        self._populate_post_download_action_combo()
        self._load_date_prefix_format()
        self._load_checkbox_states()

    def _check_for_updates(self):
        self.check_update_button.setEnabled(False)
        self.update_status_label.setText(self._tr("update_status_checking", "Checking..."))
        current_version = self.parent_app.windowTitle().split(' v')[-1]
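        # e.g., windowTitle() == "Kemono Downloader v7.8.0" -> current_version == "7.8.0".
        # This assumes the title always ends with " v<version>"; everything after
        # the last " v" is treated as the version string.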

        self.update_checker_thread = UpdateChecker(current_version)
        self.update_checker_thread.update_available.connect(self._on_update_available)
        self.update_checker_thread.up_to_date.connect(self._on_up_to_date)
        self.update_checker_thread.update_error.connect(self._on_update_error)
        self.update_checker_thread.start()
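        # UpdateChecker is expected to be a QThread that fetches the latest
        # release tag, compares it with current_version, and emits exactly one
        # of update_available(version, url), up_to_date(message), or
        # update_error(message). A minimal comparison sketch (assumes a
        # packaging-style version parser; not necessarily what the real thread
        # uses):
        #
        #     from packaging import version
        #     if version.parse(latest) > version.parse(current):
        #         self.update_available.emit(latest, asset_url)
        #     else:
        #         self.up_to_date.emit("You are on the latest version.")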

    def _on_update_available(self, new_version, download_url):
        self.update_status_label.setText(self._tr("update_status_found", f"Update found: v{new_version}"))
        self.check_update_button.setEnabled(True)

        reply = QMessageBox.question(self, self._tr("update_available_title", "Update Available"),
                                     self._tr("update_available_message", f"A new version (v{new_version}) is available.\nWould you like to download and install it now?"),
                                     QMessageBox.Yes | QMessageBox.No, QMessageBox.Yes)
        if reply == QMessageBox.Yes:
            self.ok_button.setEnabled(False)
            self.check_update_button.setEnabled(False)
            self.update_status_label.setText(self._tr("update_status_downloading", "Downloading update..."))
            self.update_downloader_thread = UpdateDownloader(download_url, self.parent_app)
            self.update_downloader_thread.download_finished.connect(self._on_download_finished)
            self.update_downloader_thread.download_error.connect(self._on_update_error)
            self.update_downloader_thread.start()

    def _on_download_finished(self):
        QApplication.instance().quit()

    def _on_up_to_date(self, message):
        self.update_status_label.setText(self._tr("update_status_latest", message))
        self.check_update_button.setEnabled(True)

    def _on_update_error(self, message):
        self.update_status_label.setText(self._tr("update_status_error", f"Error: {message}"))
        self.check_update_button.setEnabled(True)
        self.ok_button.setEnabled(True)

    def _load_checkbox_states(self):
        """Loads the initial state for all checkboxes from settings."""
        self.save_creator_json_checkbox.blockSignals(True)
        # Default to True so the feature is on by default for users
        should_save = self.parent_app.settings.value(SAVE_CREATOR_JSON_KEY, True, type=bool)
        self.save_creator_json_checkbox.setChecked(should_save)
        self.save_creator_json_checkbox.blockSignals(False)

        self.fetch_first_checkbox.blockSignals(True)
        should_fetch_first = self.parent_app.settings.value(FETCH_FIRST_KEY, False, type=bool)
        self.fetch_first_checkbox.setChecked(should_fetch_first)
        self.fetch_first_checkbox.blockSignals(False)
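        # The blockSignals(True)/setChecked()/blockSignals(False) bracket keeps
        # the restore from re-firing the *_setting_changed slots, which would
        # otherwise write the same values straight back to QSettings.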

    def _creator_json_setting_changed(self, state):
        """Saves the state of the 'Save Creator.json' checkbox."""
        is_checked = state == Qt.Checked
        self.parent_app.settings.setValue(SAVE_CREATOR_JSON_KEY, is_checked)
        self.parent_app.settings.sync()

    def _fetch_first_setting_changed(self, state):
        is_checked = state == Qt.Checked
        self.parent_app.settings.setValue(FETCH_FIRST_KEY, is_checked)
        self.parent_app.settings.sync()

    def _tr(self, key, default_text=""):
        if callable(get_translation) and self.parent_app:
            return get_translation(self.parent_app.current_selected_language, key, default_text)
        return default_text
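    # _tr() falls back to the hard-coded default_text when no translation layer
    # is available, so every call site doubles as the English source string.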

    def _retranslate_ui(self):
        self.setWindowTitle(self._tr("settings_dialog_title", "Settings"))

        # Group Box Titles
        self.interface_group_box.setTitle(self._tr("interface_group_title", "Interface Settings"))
        self.download_window_group_box.setTitle(self._tr("download_window_group_title", "Download & Window Settings"))

        # Interface Group Labels
        self.theme_label.setText(self._tr("theme_label", "Theme:"))
        self.ui_scale_label.setText(self._tr("ui_scale_label", "UI Scale:"))
        self.language_label.setText(self._tr("language_label", "Language:"))

        # Download & Window Group Labels
        self.window_size_label.setText(self._tr("window_size_label", "Window Size:"))
        self.default_path_label.setText(self._tr("default_path_label", "Default Path:"))
        self.save_creator_json_checkbox.setText(self._tr("save_creator_json_label", "Save Creator.json file"))

        # --- START: MODIFIED LOGIC ---
        # Buttons and Controls
        self._update_theme_toggle_button_text()
        self.save_path_button.setText(self._tr("settings_save_cookie_path_button", "Save Cookie + Download Path"))
        self.save_path_button.setToolTip(self._tr("settings_save_cookie_path_tooltip", "Save the current 'Download Location' and Cookie settings for future sessions."))
        self.ok_button.setText(self._tr("ok_button", "OK"))
        # --- END: MODIFIED LOGIC ---

        # Populate dropdowns
        self._populate_display_combo_boxes()
        self._populate_language_combo_box()
        self._load_checkbox_states()

    def _apply_theme(self):
        if self.parent_app and self.parent_app.current_theme == "dark":
            scale = getattr(self.parent_app, 'scale_factor', 1)
            self.setStyleSheet(get_dark_theme(scale))
            base_stylesheet = get_dark_theme(scale)

            # --- START: Tab Styling Fix ---
            tab_stylesheet = """
                QTabWidget::pane {
                    border-top: 1px solid #444;
                    margin-top: -1px; /* Overlap with tab bar */
                    background-color: #2D2D2D;
                }
                QTabBar::tab {
                    background-color: #3D3D3D;
                    color: #BBBBBB;
                    border: 1px solid #444;
                    border-bottom: none; /* No bottom border for tabs */
                    padding: 6px 12px;
                    margin-right: 2px;
                    border-top-left-radius: 4px;
                    border-top-right-radius: 4px;
                }
                QTabBar::tab:selected {
                    background-color: #2D2D2D; /* Same as pane background */
                    color: #EEEEEE;
                    border-bottom: 1px solid #2D2D2D; /* Hides the pane top border */
                    margin-bottom: -1px; /* Pulls tab down to cover pane border */
                }
                QTabBar::tab:!selected:hover {
                    background-color: #4A4A4A;
                }
            """
            # --- END: Tab Styling Fix ---

            self.setStyleSheet(base_stylesheet + tab_stylesheet)
        else:
            self.setStyleSheet("")

@@ -184,14 +441,7 @@ class FutureSettingsDialog(QDialog):
    def _populate_display_combo_boxes(self):
        self.resolution_combo_box.blockSignals(True)
        self.resolution_combo_box.clear()
        resolutions = [
            ("Auto", self._tr("auto_resolution", "Auto (System Default)")),
            ("1280x720", "1280 x 720"),
            ("1600x900", "1600 x 900"),
            ("1920x1080", "1920 x 1080 (Full HD)"),
            ("2560x1440", "2560 x 1440 (2K)"),
            ("3840x2160", "3840 x 2160 (4K)")
        ]
        resolutions = [("Auto", "Auto"), ("1280x720", "1280x720"), ("1600x900", "1600x900"), ("1920x1080", "1920x1080")]
        current_res = self.parent_app.settings.value(RESOLUTION_KEY, "Auto")
        for res_key, res_name in resolutions:
            self.resolution_combo_box.addItem(res_name, res_key)
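            # addItem(text, userData): the visible label is res_name, while the
            # machine key (e.g., "1920x1080") rides along as userData and is
            # what _display_setting_changed() later reads back via currentData().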
@@ -202,43 +452,24 @@ class FutureSettingsDialog(QDialog):
        self.ui_scale_combo_box.blockSignals(True)
        self.ui_scale_combo_box.clear()
        scales = [
            (0.5, "50%"),
            (0.7, "70%"),
            (0.9, "90%"),
            (1.0, "100% (Default)"),
            (1.25, "125%"),
            (1.50, "150%"),
            (1.75, "175%"),
            (2.0, "200%")
            (0.5, "50%"), (0.7, "70%"), (0.9, "90%"), (1.0, "100% (Default)"),
            (1.25, "125%"), (1.50, "150%"), (1.75, "175%"), (2.0, "200%")
        ]

        current_scale = float(self.parent_app.settings.value(UI_SCALE_KEY, 1.0))
        current_scale = self.parent_app.settings.value(UI_SCALE_KEY, 1.0)
        for scale_val, scale_name in scales:
            self.ui_scale_combo_box.addItem(scale_name, scale_val)
            if abs(current_scale - scale_val) < 0.01:
            if abs(float(current_scale) - scale_val) < 0.01:
                self.ui_scale_combo_box.setCurrentIndex(self.ui_scale_combo_box.count() - 1)
        self.ui_scale_combo_box.blockSignals(False)
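        # QSettings may return the stored scale as a string or a float, so the
        # value is coerced with float() and matched with a small tolerance
        # (abs(...) < 0.01) rather than ==, sidestepping float-equality issues.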

    def _display_setting_changed(self):
        selected_res = self.resolution_combo_box.currentData()
        selected_scale = self.ui_scale_combo_box.currentData()

        self.parent_app.settings.setValue(RESOLUTION_KEY, selected_res)
        self.parent_app.settings.setValue(UI_SCALE_KEY, selected_scale)
        self.parent_app.settings.sync()

        msg_box = QMessageBox(self)
        msg_box.setIcon(QMessageBox.Information)
        msg_box.setWindowTitle(self._tr("display_change_title", "Display Settings Changed"))
        msg_box.setText(self._tr("language_change_message", "A restart is required for these changes to take effect."))
        msg_box.setInformativeText(self._tr("language_change_informative", "Would you like to restart now?"))
        restart_button = msg_box.addButton(self._tr("restart_now_button", "Restart Now"), QMessageBox.ApplyRole)
        ok_button = msg_box.addButton(self._tr("ok_button", "OK"), QMessageBox.AcceptRole)
        msg_box.setDefaultButton(ok_button)
        msg_box.exec_()

        if msg_box.clickedButton() == restart_button:
            self.parent_app._request_restart_application()
        QMessageBox.information(self, self._tr("display_change_title", "Display Settings Changed"),
                                self._tr("language_change_message", "A restart is required..."))

    def _populate_language_combo_box(self):
        self.language_combo_box.blockSignals(True)
@@ -262,61 +493,183 @@ class FutureSettingsDialog(QDialog):
        self.parent_app.settings.setValue(LANGUAGE_KEY, selected_lang_code)
        self.parent_app.settings.sync()
        self.parent_app.current_selected_language = selected_lang_code

        self._retranslate_ui()
        if hasattr(self.parent_app, '_retranslate_main_ui'):
            self.parent_app._retranslate_main_ui()

        msg_box = QMessageBox(self)
        msg_box.setIcon(QMessageBox.Information)
        msg_box.setWindowTitle(self._tr("language_change_title", "Language Changed"))
        msg_box.setText(self._tr("language_change_message", "A restart is required..."))
        msg_box.setInformativeText(self._tr("language_change_informative", "Would you like to restart now?"))
        restart_button = msg_box.addButton(self._tr("restart_now_button", "Restart Now"), QMessageBox.ApplyRole)
        ok_button = msg_box.addButton(self._tr("ok_button", "OK"), QMessageBox.AcceptRole)
        msg_box.setDefaultButton(ok_button)
        msg_box.exec_()
        self.parent_app._retranslate_main_ui()
        QMessageBox.information(self, self._tr("language_change_title", "Language Changed"),
                                self._tr("language_change_message", "A restart is required..."))

        if msg_box.clickedButton() == restart_button:
            self.parent_app._request_restart_application()
    def _populate_post_download_action_combo(self):
        """Populates the action dropdown and sets the current selection from settings."""
        self.post_download_action_combo.blockSignals(True)
        self.post_download_action_combo.clear()

        actions = [
            (self._tr("action_off", "Off"), "off"),
            (self._tr("action_notify", "Notify with Sound"), "notify"),
            (self._tr("action_sleep", "Sleep"), "sleep"),
            (self._tr("action_shutdown", "Shutdown"), "shutdown")
        ]

        current_action = self.parent_app.settings.value(POST_DOWNLOAD_ACTION_KEY, "off")

        for text, key in actions:
            self.post_download_action_combo.addItem(text, key)
            if current_action == key:
                self.post_download_action_combo.setCurrentIndex(self.post_download_action_combo.count() - 1)

        self.post_download_action_combo.blockSignals(False)

    def _save_cookie_and_path(self):
        """Saves the current download path and/or cookie settings from the main window."""
    def _post_download_action_changed(self):
        """Saves the selected post-download action to settings."""
        selected_action = self.post_download_action_combo.currentData()
        self.parent_app.settings.setValue(POST_DOWNLOAD_ACTION_KEY, selected_action)
        self.parent_app.settings.sync()

    def _load_date_prefix_format(self):
        """Loads the saved date prefix format and sets it in the input field."""
        self.date_prefix_format_input.blockSignals(True)
        current_format = self.parent_app.settings.value(DATE_PREFIX_FORMAT_KEY, "YYYY-MM-DD {post}", type=str)
        self.date_prefix_format_input.setText(current_format)
        self.date_prefix_format_input.blockSignals(False)

    def _date_prefix_format_changed(self, text):
        """Saves the date prefix format whenever it's changed."""
        self.parent_app.settings.setValue(DATE_PREFIX_FORMAT_KEY, text)
        self.parent_app.settings.sync()
        # Also update the live value in the parent app
        if hasattr(self.parent_app, 'date_prefix_format'):
            self.parent_app.date_prefix_format = text
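            # Writing to both QSettings and the live attribute keeps the
            # running session in sync, so the new format applies to the next
            # download without a restart.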

    def _save_settings(self):
        path_saved = False
        cookie_saved = False

        # --- Save Download Path Logic ---
        token_saved = False

        if hasattr(self.parent_app, 'dir_input') and self.parent_app.dir_input:
            current_path = self.parent_app.dir_input.text().strip()
            if current_path and os.path.isdir(current_path):
                self.parent_app.settings.setValue(DOWNLOAD_LOCATION_KEY, current_path)
                path_saved = True

        # --- Save Cookie Logic ---
        if hasattr(self.parent_app, 'use_cookie_checkbox'):
            use_cookie = self.parent_app.use_cookie_checkbox.isChecked()
            cookie_content = self.parent_app.cookie_text_input.text().strip()

            if use_cookie and cookie_content:
                self.parent_app.settings.setValue(USE_COOKIE_KEY, True)
                self.parent_app.settings.setValue(COOKIE_TEXT_KEY, cookie_content)
                cookie_saved = True
            else: # Also save the 'off' state
            else:
                self.parent_app.settings.setValue(USE_COOKIE_KEY, False)
                self.parent_app.settings.setValue(COOKIE_TEXT_KEY, "")


        if (hasattr(self.parent_app, 'remove_from_filename_input') and
                hasattr(self.parent_app, 'remove_from_filename_label_widget')):

            label_text = self.parent_app.remove_from_filename_label_widget.text()
            if "Token" in label_text:
                discord_token = self.parent_app.remove_from_filename_input.text().strip()
                if discord_token:
                    self.parent_app.settings.setValue(DISCORD_TOKEN_KEY, discord_token)
                    token_saved = True

        self.parent_app.settings.sync()

        # --- User Feedback ---
        if path_saved and cookie_saved:
            message = self._tr("settings_save_both_success", "Download location and cookie settings saved.")
        elif path_saved:
            message = self._tr("settings_save_path_only_success", "Download location saved. No cookie settings were active to save.")
        elif cookie_saved:
            message = self._tr("settings_save_cookie_only_success", "Cookie settings saved. Download location was not set.")
        if path_saved or cookie_saved or token_saved:
            QMessageBox.information(self, "Settings Saved", "Settings have been saved successfully.")
        else:
            QMessageBox.warning(self, self._tr("settings_save_nothing_title", "Nothing to Save"),
                                self._tr("settings_save_nothing_message", "The download location is not a valid directory and no cookie was active."))
            QMessageBox.warning(self, "Nothing to Save", "No valid settings were found to save.")

    # --- START: New functions for Save/Load ---
    def _get_settings_dir(self):
        """Helper to get a consistent directory for saving/loading profiles."""
        if hasattr(self.parent_app, 'user_data_path'):
            # We use 'user_data_path' which should point to 'appdata'
            settings_dir = os.path.join(self.parent_app.user_data_path, "settings_profiles")
            os.makedirs(settings_dir, exist_ok=True)
            return settings_dir
        # Fallback if user_data_path isn't available
        return QStandardPaths.writableLocation(QStandardPaths.DocumentsLocation)
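        # Resulting layout (assuming user_data_path points at 'appdata'):
        #   appdata/settings_profiles/<profile>.json
        # The Documents folder is only a last-resort fallback location.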

    def _handle_save_settings(self):
        """
        Calls the main app to get all settings, then saves them to a user-chosen JSON file.
        """
        if not hasattr(self.parent_app, '_get_current_ui_settings_as_dict'):
            QMessageBox.critical(self, self._tr("generic_error_title", "Error"),
                                 self._tr("settings_missing_save_func_error", "Parent application is missing the required save function."))
            return

        QMessageBox.information(self, self._tr("settings_save_success_title", "Settings Saved"), message)
        settings_dir = self._get_settings_dir()
        filepath, _ = QFileDialog.getSaveFileName(
            self,
            self._tr("save_settings_dialog_title", "Save Settings Profile"),
            settings_dir,
            self._tr("json_files_filter", "JSON Files (*.json)")
        )

        if filepath:
            if not filepath.endswith('.json'):
                filepath += '.json'

            try:
                # Get all settings from the main window
                settings_data = self.parent_app._get_current_ui_settings_as_dict()

                with open(filepath, 'w', encoding='utf-8') as f:
                    json.dump(settings_data, f, indent=2)

                QMessageBox.information(self,
                                        self._tr("save_settings_success_title", "Settings Saved"),
                                        self._tr("save_settings_success_msg", "Settings successfully saved to:\n{filename}")
                                        .format(filename=os.path.basename(filepath)))
            except Exception as e:
                QMessageBox.critical(self,
                                     self._tr("save_settings_error_title", "Error Saving Settings"),
                                     str(e))
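    # A saved profile is a flat JSON dump of the UI state. Illustrative shape
    # only: the real keys come from _get_current_ui_settings_as_dict(), e.g.
    #
    #     {
    #       "download_location": "D:/Downloads",
    #       "skip_zip": true,
    #       "filter_scope": "title"
    #     }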

    def _handle_load_settings(self):
        """
        Lets the user pick a JSON file, loads it, and applies the settings to the main app.
        """
        if not hasattr(self.parent_app, '_load_ui_from_settings_dict') or \
           not hasattr(self.parent_app, '_update_all_ui_states'):
            QMessageBox.critical(self, self._tr("generic_error_title", "Error"),
                                 self._tr("settings_missing_load_func_error", "Parent application is missing the required load functions."))
            return

        settings_dir = self._get_settings_dir()
        filepath, _ = QFileDialog.getOpenFileName(
            self,
            self._tr("load_settings_dialog_title", "Load Settings Profile"),
            settings_dir,
            self._tr("json_files_filter", "JSON Files (*.json)")
        )

        if filepath:
            try:
                with open(filepath, 'r', encoding='utf-8') as f:
                    settings_data = json.load(f)

                if not isinstance(settings_data, dict):
                    raise ValueError(self._tr("settings_invalid_json_error", "File is not a valid settings dictionary."))

                # Apply all settings to the main window
                self.parent_app._load_ui_from_settings_dict(settings_data)

                # Refresh the main window UI to show changes
                self.parent_app._update_all_ui_states()

                QMessageBox.information(self,
                                        self._tr("load_settings_success_title", "Settings Loaded"),
                                        self._tr("load_settings_success_msg", "Successfully loaded settings from:\n{filename}")
                                        .format(filename=os.path.basename(filepath)))

                # Close the settings dialog after loading
                self.accept()

            except Exception as e:
                QMessageBox.critical(self,
                                     self._tr("load_settings_error_title", "Error Loading Settings"),
                                     str(e))
    # --- END: New functions for Save/Load ---
@@ -6,7 +6,6 @@ from PyQt5.QtWidgets import (
    QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
    QStackedWidget, QListWidget, QFrame, QWidget, QScrollArea
)
from ...i18n.translator import get_translation
from ..main_window import get_app_icon_object
from ...utils.resolution import get_dark_theme

@@ -26,7 +25,8 @@ class TourStepWidget(QWidget):

        title_label = QLabel(title_text)
        title_label.setAlignment(Qt.AlignCenter)
        title_label.setStyleSheet(f"font-size: {title_font_size}pt; font-weight: bold; color: #E0E0E0; padding-bottom: 15px;")
        # Use a consistent color for titles regardless of theme
        title_label.setStyleSheet(f"font-size: {title_font_size}pt; font-weight: bold; color: #87CEEB; padding-bottom: 15px;")
        layout.addWidget(title_label)

        scroll_area = QScrollArea()
@@ -41,17 +41,456 @@ class TourStepWidget(QWidget):
        content_label.setAlignment(Qt.AlignLeft | Qt.AlignTop)
        content_label.setTextFormat(Qt.RichText)
        content_label.setOpenExternalLinks(True)
        content_label.setStyleSheet(f"font-size: {content_font_size}pt; color: #C8C8C8; line-height: 1.8;")
        # Set a base line-height and color
        content_label.setStyleSheet(f"font-size: {content_font_size}pt; color: #C8C8C8; line-height: 1.5;")
        scroll_area.setWidget(content_label)
        layout.addWidget(scroll_area, 1)


class HelpGuideDialog(QDialog):
    """A multi-page dialog for displaying the feature guide with a navigation list."""

    def __init__(self, steps_data, parent_app, parent=None):
        super().__init__(parent)
        self.steps_data = steps_data
        self.parent_app = parent_app
        super().__init__(parent_app)

        self.parent_app = parent_app  # This is the main_window instance

        self.steps_data = [
            ("Welcome!",
            """
            <p style='font-size: 12pt;'>Welcome to the Kemono Downloader! This guide will walk you through the key features to get you started.</p>

            <h3 style='color: #E0E0E0;'>Wide Range of Support</h3>
            <p>This application provides full, direct download support for several popular sites, including:</p>
            <ul>
            <li>Kemono</li>
            <li>Coomer</li>
            <li>Bunkr</li>
            <li>Erome</li>
            <li>Saint2.su</li>
            <li>nhentai.net</li>
            <li>fap-nation.org</li>
            <li>Discord</li>
            <li>allporncomic.com</li>
            <li>hentai2read.com</li>
            <li>mangadex.org</li>
            <li>Simpcity</li>
            <li>gelbooru.com</li>
            <li>Toonily.com</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Powerful Batch Mode</h3>
            <p>Save time by downloading hundreds of URLs at once. Simply type <b>nhentai.net</b> or <b>saint2.su</b> into the URL bar. The app will look for a <b>nhentai.txt</b> or <b>saint2.su.txt</b> file in your 'appdata' folder and process all the URLs inside it.</p>

            <h3 style='color: #E0E0E0;'>Advanced Discord Support</h3>
            <p>Go beyond simple file downloading. The app can connect directly to the Discord API to:</p>
            <ul>
            <li>Download all files from a specific channel.</li>
            <li>Save an entire channel's message history as a fully formatted PDF.</li>
            </ul>
            """),

            ("Advanced Filtering",
            """
            <p>Control exactly what content you download, from broad categories to specific keywords.</p>

            <h3 style='color: #E0E0E0;'>Content Type Filters</h3>
            <p>These radio buttons let you select the main <i>type</i> of content you want:</p>
            <ul>
            <li><b>All:</b> Downloads everything (default).</li>
            <li><b>Images/GIFs:</b> Only downloads static images and GIFs.</li>
            <li><b>Videos:</b> Only downloads video files (MP4, WEBM, MOV, etc.).</li>
            <li><b>Only Archives:</b> Exclusively downloads .zip and .rar files.</li>
            <li><b>Only Links:</b> Extracts external links (Mega, Google Drive) from post descriptions instead of downloading.</li>
            <li><b>Only Audio:</b> Only downloads audio files (MP3, WAV, etc.).</li>
            <li><b>More:</b> Opens a dialog to download post descriptions or comments as text/PDF.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Character Filtering</h3>
            <p>The <b>"Filter by Character(s)"</b> input is your most powerful tool for targeting content.</p>
            <ul>
            <li><b>Basic Use:</b> Enter names, separated by commas (e.g., <code>Tifa, Aerith</code>). This will create folders for "Tifa" and "Aerith" and download posts matching those names.</li>
            <li><b>Grouped Aliases:</b> Use parentheses to group aliases for a single character (e.g., <code>(Tifa, Lockhart)</code>). This still creates a "Tifa" folder, but it will also match posts that just say "Lockhart".</li>
            </ul>
            <p>The <b>"Filter: [Scope]"</b> button changes <i>what</i> is scanned:</p>
            <ul>
            <li><b>Filter: Title (Default):</b> Scans only the post's main title.</li>
            <li><b>Filter: Files:</b> Scans the <i>filenames</i> within the post.</li>
            <li><b>Filter: Both:</b> Scans both the title and the filenames.</li>
            <li><b>Filter: Comments (Beta):</b> Scans the post's comment section for the keywords.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Skip Filters (Avoid Content)</h3>
            <p>The <b>"Skip with Words"</b> input lets you avoid content you don't want.</p>
            <p>The <b>"Scope: [Scope]"</b> button changes <i>how</i> it skips:</p>
            <ul>
            <li><b>Scope: Posts (Default):</b> Skips the <i>entire post</i> if the post's title contains a skip word (e.g., <code>WIP, sketch</code>).</li>
            <li><b>Scope: Files:</b> Scans and skips <i>individual files</i> if their filename contains a skip word.</li>
            <li><b>Scope: Both:</b> Skips the post if the title matches, and if not, still checks individual files.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Other Content Options</h3>
            <ul>
            <li><b>Skip .zip:</b> A quick toggle to ignore all archive files.</li>
            <li><b>Download Thumbnails Only:</b> Downloads the small preview image instead of the full-resolution file.</li>
            <li><b>Scan Content for Images:</b> Scans the post's text description for <code><img></code> tags. Useful for embedded images not in the post's attachment list.</li>
            <li><b>Keep Duplicates:</b> By default, the app skips files with identical content (hash). Check this to open a dialog and configure it to keep duplicate files.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Filename Control</h3>
            <p>The <b>"Remove Words from name"</b> input cleans up filenames. Any text you enter here (comma-separated) will be removed from the final saved filename (e.g., <code>patreon, exclusive</code>).</p>
            """),

            ("Folder Management (Known.txt)",
            """
            <p>This feature, enabled by the <b>"Separate Folders by Known.txt"</b> checkbox, automatically sorts your downloads. It's designed mainly for <b>Kemono</b>, where creators often tag posts with character names in the title.</p>

            <p>When you download from a creator, this feature checks each <b>post title</b> against your `Known.txt` list. If a name matches, a folder is created for that name, and all posts from that creator mentioning the name will be <b>grouped together</b> in that single folder.</p>

            <h3 style='color: #E0E0E0;'>Folder Naming Priority</h3>
            <p>When "Separate Folders" is checked, the app uses this priority to name folders:</p>
            <ol>
            <li><b>Character Filter:</b> If you use the <b>"Filter by Character(s)"</b> input (e.g., <code>Tifa</code>), that name is <b>always</b> used as the folder name. This overrides all other rules.</li>
            <li><b>Known.txt (Post Title):</b> If no filter is used, it checks the <b>post's title</b> for a name in `Known.txt`. (This is the most common use case).</li>
            <li><b>Known.txt (Filename):</b> If the title doesn't match, it checks all <b>filenames</b> in the post for a match in `Known.txt`.</li>
            <li><b>Fallback:</b> If no match is found, it creates a generic folder from the post's title.</li>
            </ol>

            <h3 style='color: #E0E0E0;'>Editing Your Known.txt File</h3>
            <p>You can manage this list using the panel on the right of the main window or by clicking <b>"Open Known.txt"</b> to edit it directly. There are two formats:</p>
            <ul>
            <li><b>Simple Name:</b><br>
            <code>Tifa</code><br>
            This creates a folder named "Tifa" and matches posts/files named "Tifa".
            </li>
            <br>
            <li><b>Grouped Aliases:</b><br>
            <code>(Tifa, Lockhart)</code><br>
            This is the most powerful format. It creates a folder named <b>"Tifa Lockhart"</b> and will match posts/files that contain either "Tifa" <i>or</i> "Lockhart". This is perfect for characters with multiple names.
            </li>
            </ul>

            <h3 style='color: #E0E0E0;'>Important Note:</h3>
            <p>This automatic sorting <b>only works if the creator includes the character names or keywords in the post title</b> (or filename). If they don't, the app has no way of knowing how to sort the post, and it will fall back to a generic folder name.</p>
            """),

            ("Renaming Mode",
            """
            <p>This mode is designed for downloading comics, manga, or any multi-file post where you need files to be in a specific, sequential order. When active, it downloads posts from <b>oldest to newest</b>.</p>

            <p>Activate it by checking the <b>"Renaming Mode"</b> checkbox. This reveals a new button: <b>"Name: [Style]"</b>. Clicking this button cycles through all available naming conventions.</p>

            <h3 style='color: #E0E0E0;'>Available Naming Styles</h3>
            <ul>
            <li><b>Post Title:</b> (Default) Files are named after the post's title, with a number for multi-file posts (e.g., <code>My Comic Page_1.jpg</code>, <code>My Comic Page_2.jpg</code>).</li>

            <li><b>Date + Original:</b> Prepends the post's date to the original filename (e.g., <code>2025-11-16_original_file_name.jpg</code>).</li>

            <li><b>Date + Title:</b> Prepends the date to the post title (e.g., <code>2025-11-16_My Comic Page_1.jpg</code>).</li>

            <li><b>Post ID:</b> Names files using the post's unique ID and the file index (e.g., <code>9876543_0.jpg</code>, <code>9876543_1.jpg</code>).</li>

            <li><b>Date Based:</b> Renames all files to a simple, sequential number (e.g., <code>001.jpg</code>, <code>002.jpg</code>). You can add a prefix in the text box that appears (e.g., "Chapter 1 " to get <code>Chapter 1 001.jpg</code>).
            <br><b style='color: #f0ad4e;'>Note: This mode disables multithreading to guarantee correct file order.</b></li>

            <li><b>Title + G.Num (Global Numbering):</b> Names files by title, but with a *global* counter (e.g., <code>Post A_001.jpg</code>, <code>Post B_002.jpg</code>).
            <br><b style='color: #f0ad4e;'>Note: This mode also disables multithreading.</b></li>

            <li><b>Custom:</b> Lets you design your own filename using a format string. A <b>"..."</b> button will appear to open the custom format dialog.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Custom Format Placeholders</h3>
            <p>When using the "Custom" style, you can use these placeholders (click the buttons in the dialog to add them):</p>
            <ul>
            <li><code>{id}</code> - The unique ID of the post.</li>
            <li><code>{creator_name}</code> - The creator's name.</li>
            <li><code>{service}</code> - The service (e.g., Patreon, Pixiv Fanbox, etc.).</li>
            <li><code>{title}</code> - The title of the post.</li>
            <li><code>{added}</code> - Date the post was added.</li>
            <li><code>{published}</code> - Date the post was published.</li>
            <li><code>{edited}</code> - Date the post was last edited.</li>
            <li><code>{name}</code> - The original name of the file.</li>
            </ul>
            <p>You can also set a custom <b>Date Format</b> (e.g., <code>YYYY-MM-DD</code>) that will apply to the {added}, {published}, and {edited} placeholders.</p>
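            <p><b>Worked example (illustrative values only):</b> the custom format <code>{published} {title} [{id}]_{name}</code> with Date Format <code>YYYY-MM-DD</code> would save a file originally named <code>page01.jpg</code> from the post "My Comic" (id 9876543, published 2025-11-16) as <code>2025-11-16 My Comic [9876543]_page01.jpg</code>.</p>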
            """),

            ("Batch Downloading",
            """
            <p>This feature allows you to download hundreds of URLs from a text file, which is much faster than queuing them one by one.</p>

            <h3 style='color: #E0E0E0;'>How It Works (Step-by-Step)</h3>
            <ol>
            <li><b>Find your 'appdata' folder:</b> This is in the same directory as the downloader's <code>.exe</code> file.</li>
            <li><b>Create a .txt file:</b> Inside the 'appdata' folder, create a text file for the site you want to batch from. The name must be exact (e.g., nhentai.txt, hentai2read.txt).</li>
            <li><b>Add URLs:</b> Open the <code>.txt</code> file and paste one download URL on each line. Save the file.</li>
            <li><b>Start the Batch:</b> In the downloader's main URL bar, type the <b>site's domain name</b> (e.g., <code>nhentai.net</code>) and click "Start Download".</li>
            </ol>
            <p>The app will automatically find your text file, read all the URLs, and download them sequentially.</p>
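            <p><b>Example <code>nhentai.txt</code> (illustrative contents, one URL per line):</b><br>
            <code>https://nhentai.net/g/123456/</code><br>
            <code>https://nhentai.net/g/234567/</code></p>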

            <h3 style='color: #E0E0E0;'>Supported Sites and Filenames</h3>
            <p>The <code>.txt</code> file name must match the site you are triggering:</p>
            <ul>
            <li><b>To trigger, type:</b> <code>allporncomic.com</code><br>
            <b>Text file name:</b> <code>allporncomic.txt</code></li>

            <li><b>To trigger, type:</b> <code>nhentai.net</code><br>
            <b>Text file name:</b> <code>nhentai.txt</code></li>

            <li><b>To trigger, type:</b> <code>fap-nation.com</code> or <code>fap-nation.org</code><br>
            <b>Text file name:</b> <code>fap-nation.txt</code></li>

            <li><b>To trigger, type:</b> <code>saint2.su</code><br>
            <b>Text file name:</b> <code>saint2.su.txt</code></li>

            <li><b>To trigger, type:</b> <code>hentai2read.com</code><br>
            <b>Text file name:</b> <code>hentai2read.txt</code></li>

            <li><b>To trigger, type:</b> <code>rule34video.com</code><br>
            <b>Text file name:</b> <code>rule34video.txt</code></li>
            </ul>
            """),

            ("Special Modes: Text & Links",
            """
            <p>These two modes completely change the downloader's function from downloading files to extracting information.</p>

            <h3 style='color: #E0E0E0;'>🔗 Only Links Mode</h3>
            <p>When you select this, the app <b>stops downloading files</b>. Instead, it scans the post's description for any external URLs (like Mega, Google Drive, Dropbox, etc.) and lists them in the main log.</p>
            <p>This mode also reveals a new set of tools above the log:</p>
            <ul>
            <li><b>Search Bar:</b> Lets you filter the extracted links by keyword (e.g., "mega", "part 1").</li>
            <li><b>Export Links Button:</b> Opens a dialog to save all the found links to a <code>.txt</code> file.</li>
            <li><b>Download Button:</b> Opens a new dialog that lets you selectively download from the supported links (Mega, Google Drive, Dropbox) that were found.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>📄 More (Text Export Mode)</h3>
            <p>This mode downloads the <b>text content</b> from posts instead of the files. When you select it, a dialog appears asking for more details:</p>
            <ul>
            <li><b>Scope:</b>
            <ul>
            <li><b>Description/Content:</b> Saves the text from the post's main body.</li>
            <li><b>Comments:</b> Fetches and saves all the comments from the post.</li>
            </ul>
            </li>
            <li><b>Export as:</b> You can choose to save the text as a <b>PDF</b>, <b>DOCX</b>, or <b>TXT</b> file.</li>
            <li><b>Single PDF:</b> (Only available for PDF format) This powerful option stops the app from saving individual PDF files. Instead, it collects the text from <i>all</i> matching posts, sorts them by date, and compiles them into <b>one single, large PDF file</b> at the end of the download session.</li>
            </ul>
            """),

            ("Special Commands",
            """
            <p>You can add special commands to the <b>"Filter by Character(s)"</b> input field to change download behavior for a single task. Commands are keywords wrapped in square brackets <code>[]</code>.</p>
            <p><b>Example:</b> <code>Tifa, (Cloud, Zack) [ao] [sfp-10]</code></p>

            <h3 style='color: #E0E0E0;'>Filter Commands (in "Filter by Character(s)" input)</h3>
            <ul>
            <li><b><code>[ao]</code> (Archive Only Priority)</b><br>
            This command prioritizes archives.
            <ul>
            <li>If a post contains <b>only images/videos</b>, it will download them normally.</li>
            <li>If a post contains <b>both archives AND images/videos</b>, this command tells the app to <b>only download the archives</b> and skip the other files for that post.</li>
            </ul>
            </li>
            <br>
            <li><b><code>[sfp-N]</code> (Subfolder Per Post Threshold)</b><br>
            This is an override for when "Subfolder per Post" is <b>OFF</b> (and "Separate Folders by Known.txt" is <b>ON</b>).<br>
            For example, if you set <code>[sfp-10]</code>:
            <ul>
            <li>Posts with <b>fewer than 10 files</b> will download normally into the main folder (e.g., <code>/ArtistName/</code>).</li>
            <li>When a post with <b>10 or more files</b> is found, this command will <b>force a subfolder to be created for that one post</b> (e.g., <code>/ArtistName/Comic_Title/</code>) to keep its files grouped together.</li>
            </ul>
            </li>
            <br>
            <li><b><code>[unknown]</code> (Handle Unknown)</b><br>
            Changes how sorting works when "Separate Folders by Known.txt" is on. If a post title doesn't match any name in your <code>Known.txt</code> list, this command will create a folder using the post's title instead of a generic fallback folder.
            </li>
            <br>
            <li><b><code>[.domain]</code> (Domain Override)</b><br>
            An advanced command. For example, <code>[.st]</code> forces the app to download from <code>coomer.st</code>, and <code>[.cr]</code> forces it to download from <code>kemono.cr</code>. This can be useful if one domain is blocked or slow.
            </li>
            </ul>

            <h3 style='color: #E0E0E0;'>Skip Command (in "Skip with Words" input)</h3>
            <p>This command is different and goes into the <b>"Skip with Words"</b> input field, along with any other skip words.</p>
            <ul>
            <li><b><code>[N]</code> (Skip File by Size)</b><br>
            This command skips any file that is <b>smaller</b> than <code>N</code> megabytes (MB).<br>
            <b>Example:</b> Entering <code>WIP, sketch, [200]</code> into the "Skip with Words" input will skip files with "WIP" or "sketch" in their name, AND it will also skip any file smaller than 200MB.
            </li>
            </ul>
            """),

            ("Cloud Storage & Direct Links",
            """
            <p>The downloader has built-in support for popular cloud storage and direct-link sites. You can use this in two main ways.</p>

            <h3 style='color: #E0E0E0;'>Method 1: Direct URL Download</h3>
            <p>You can paste a direct link from these services into the main URL bar and hit "Start Download" just like a Kemono link.</p>
            <ul>
            <li><b>Pixeldrain:</b> Supports single files (<code>/u/...</code>), albums (<code>/l/...</code>), and folders (<code>/d/...</code>).</li>
            <li><b>Mega.nz:</b> Supports both single file links (<code>/file/...</code>) and folder links (<code>/folder/...</code>).</li>
            <li><b>Gofile.io:</b> Supports folder links (<code>/d/...</code>).</li>
            <li><b>Google Drive:</b> Supports shared folder links.</li>
            <li><b>Dropbox:</b> Supports shared <code>.zip</code> file links. It will automatically download, extract, and delete the <code>.zip</code> file.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Method 2: "Only Links" Mode Downloader</h3>
            <p>This is a two-step process for handling posts that have many cloud links in their description.</p>
            <ol>
            <li><b>Step 1: Extract Links</b><br>
            Select the <b>"🔗 Only Links"</b> radio button and run a download on a creator or post page. The app will scan all posts and list the external links (Mega, GDrive, etc.) it finds in the log.
            </li>
            <br>
            <li><b>Step 2: Download Links</b><br>
            After extraction, a <b>"Download"</b> button (next to "Export Links") will become active. This opens a new window where you can selectively download from the supported links (Mega, Google Drive, Dropbox) that were found.
            </li>
            </ol>

            <h3 style='color: #E0E0E0;'>Note: SimpCity Integration</h3>
            <p>SimpCity support relies heavily on this feature. When you download from a SimpCity thread, the app <b>automatically</b> scans the page for links to services like <b>Pixeldrain, Bunkr, Saint2, Mega, and Gofile</b> and then downloads them just as if you had put in those links directly. You can control which of these services are downloaded from the checkboxes in the "SimpCity Settings" section of the main window.</p>
            """),

            ("Creator Selection & Updates",
            """
            <p>Clicking the <b>🎨 button</b> (next to the URL bar) opens the <b>Creator Selection</b> dialog. This is your control for managing creators you've already downloaded from.</p>

            <h3 style='color: #E0E0E0;'>Main List & Searching</h3>
            <p>The main list shows all creators from your <code>creators.json</code> file. You can:</p>
            <ul>
            <li><b>Search:</b> The top search bar filters your creators by name, service, or even a direct URL.</li>
            <li><b>Select:</b> Check the boxes next to creators to select them for an action.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Action Buttons</h3>

            <p><b>Check for Updates</b></p>
            <p>This button opens a new window, "Check for Updates," which lists all your <b>Creator Profiles</b> (the <code>.json</code> files saved in your <code>appdata/creator_profiles</code> folder). These profiles are created automatically when you download a full creator page.</p>
            <p>From this dialog, you can check multiple creators at once. The app will scan all of them and then show a final "Start Download" button on the main window to download <i>only</i> the new posts, using the same settings you used for each creator last time.</p>

            <p><b>Add Selected</b></p>
            <p>This is the simplest action. It takes all the creators you've checked, puts their names in the main URL bar, and closes the dialog. This is a quick way to add multiple creators to the download queue.</p>

            <p><b>Fetch Posts</b></p>
            <p>This is a powerful tool for finding specific posts. When you click it:</p>
            <ol>
            <li>The dialog expands, and a new panel appears on the right.</li>
            <li>The app fetches <i>every single post</i> from all the creators you selected. This may take time.</li>
            <li>The right panel fills with a list of all posts, grouped by creator.</li>
            <li>You can now search this list and check the boxes next to the <i>individual posts</i> you want.</li>
            <li>Clicking <b>"Add Selected Posts to Queue"</b> adds only those specific posts to the download queue.</li>
            </ol>
            """),

            ("⭐ Favorite Mode",
            """
            <p>This mode is a powerful feature for downloading directly from your personal <b>Kemono</b> and <b>Coomer</b> favorites lists. It requires you to be logged in on your browser and to provide your cookies to the app.</p>

            <p><b style='color: #f0ad4e;'>Important:</b> You <b>must</b> check the <b>"Use Cookie"</b> box and provide a valid cookie for this mode to work. If cookies are missing or invalid, the app will show you a help dialog.</p>

            <h3 style='color: #E0E0E0;'>How to Use Favorite Mode</h3>
            <ol>
            <li>Check the <b>"⭐ Favorite Mode"</b> checkbox on the main window. This will lock the URL bar and show two new buttons.</li>
            <li>Click either <b>"🖼️ Favorite Artists"</b> or <b>"📄 Favorite Posts"</b>.</li>
            <li>A new dialog will open and begin fetching all your favorites from both Kemono and Coomer at the same time.</li>
            <li>Once loaded, you can search, filter, and select the artists or posts you want to download.</li>
            <li>Click "Download Selected" to add them to the main download queue and begin processing.</li>
            </ol>

            <h3 style='color: #E0E0E0;'>Favorite Artists</h3>
            <p>The <b>"Favorite Artists"</b> dialog will load your list of followed creators. When you download from here, the app treats it as a full creator download, just as if you had pasted in that artist's URL.</p>

            <h3 style='color: #E0E0E0;'>Favorite Posts</h3>
            <p>The <b>"Favorite Posts"</b> dialog loads a list of every individual post you have favorited. This dialog has some extra features:</p>
            <ul>
            <li><b>Creator Name Resolution:</b> It attempts to match the post's creator ID with the names in your <code>creators.json</code> file to show you a recognizable name.</li>
            <li><b>Known.txt Matching:</b> It highlights posts by showing <code>[Known - Tifa]</code> in the title if the post title matches an entry in your <code>Known.txt</code> list, helping you find specific content.</li>
            <li><b>Grouping:</b> Posts are automatically grouped by creator to keep the list organized.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Download Scope (Artist Folders)</h3>
            <p>In Favorite Mode, the <b>"Scope: [Location]"</b> button becomes very important. It controls <i>where</i> your favorited items are saved:</p>
            <ul>
            <li><b>Scope: Selected Location (Default):</b> Downloads all selected items directly into the main "Download Location" folder you have set.</li>
            <li><b>Scope: Artist Folders:</b> This automatically creates a new subfolder for each artist inside your main "Download Location" (e.g., <code>/Downloads/ArtistName/</code>). This is the best way to keep your favorites organized.</li>
            </ul>
            """),

            ("File & Download Options",
            """
            <p>These checkboxes give you fine-grained control over which files are downloaded and how they are processed.</p>

            <h3 style='color: #E0E0E0;'>File Type & Content</h3>
            <ul>
            <li><b>Skip .zip:</b> A simple toggle. When checked, the downloader will skip all <code>.zip</code> and <code>.rar</code> archive files it finds.</li>
            <br>
            <li><b>Scan Content for Images:</b> This is a powerful feature for posts where images are embedded in the description (<code><img></code> tags) but not listed as attachments. When checked, the app will scan the post's HTML content and try to find and download these embedded images.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Image Processing</h3>
            <ul>
            <li><b>Download Thumbnails Only:</b> Saves bandwidth and time by downloading the small preview/thumbnail version of an image instead of the full-resolution file.</li>
            <br>
            <li><b>Compress to WebP:</b> If an image is over 1.5MB, this option will automatically convert it to the <code>.webp</code> format during the download, which significantly reduces file size while maintaining high quality.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Duplicate Handling</h3>
            <ul>
            <li><b>Keep Duplicates:</b> By default, the app checks the <i>content</i> (hash) of a file and will not re-download a file it already has. Checking this box opens a dialog with more options:
            <ul>
            <li><b>Hash (Default):</b> The standard behavior.</li>
            <li><b>Keep Everything:</b> Disables all duplicate checks and downloads every file from the API, even if you already have it.</li>
            <li><b>Limit:</b> Lets you set a limit (e.g., 2) to how many times a file with the same content can be downloaded.</li>
            </ul>
            </li>
            </ul>
            """),

            ("Utility & Advanced Options",
            """
            <p>These features provide advanced control over your downloads, sessions, and application settings.</p>

            <h3 style='color: #E0E0E0;'>Use Cookie</h3>
            <p>This is essential for downloading from sites that require a login (like <b>SimpCity</b> or accessing your <b>favorites</b> on Kemono/Coomer). You can either:</p>
            <ul>
            <li><b>Paste a cookie string:</b> Copy the "cookie" value from your browser's developer tools and paste it into the text field.</li>
            <li><b>Use a file:</b> Click the "Browse" button to select a <code>cookies.txt</code> file exported from your browser.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Page Range</h3>
            <p>When downloading from a creator's main page (not a single post), these "Start" and "End" fields let you limit the download. For example, entering <code>Start: 1</code> and <code>End: 5</code> will only download posts from the first five pages.</p>

            <h3 style='color: #E0E0E0;'>Multi-part Download</h3>
            <p>Clicking the <b>"Multi-part: OFF"</b> button opens a dialog to enable high-speed downloads for large files. It will split a large file into multiple parts and download them at the same time. You can choose to apply this to videos, archives, or both, and set the minimum file size to trigger it.</p>

            <h3 style='color: #E0E0E0;'>Download History</h3>
            <p>The <b>"History"</b> button opens a dialog showing two lists:</p>
            <ul>
            <li><b>Last 3 Files:</b> The last 3 individual files you successfully downloaded.</li>
            <li><b>First 3 Posts:</b> The first 3 posts processed from your <i>most recent</i> download session.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Settings (Gear Icon)</h3>
            <p>The <b>Gear</b> icon ⚙️ opens the main application settings, which is now organized into tabs:</p>
            <ul>
            <li><b>Display Tab:</b> Change the app's <b>Theme</b> (Light/Dark), <b>Language</b>, <b>UI Scale</b>, and default <b>Window Size</b>.</li>
            <li><b>Downloads Tab:</b>
            <ul>
            <li>Save your current <b>Download Path</b>, <b>Cookie</b>, and <b>Discord Token</b> for future sessions using the "Save Path + Cookie + Token" button.</li>
            <li>Set an <b>Action After Download</b> (e.g., Notify, Sleep, Shutdown).</li>
            <li>Customize the <b>Post Subfolder Format</b> for when the date prefix is used (e.g., <code>YYYY-MM-DD {post}</code>).</li>
            <li>Toggle <b>"Save Creator.json file"</b> (which enables the "Check for Updates" feature).</li>
            <li>Toggle <b>"Fetch First"</b> (to find all posts from a creator before starting any downloads).</li>
            </ul>
            </li>
            <li><b>Updates Tab:</b> Check for and install new application updates.</li>
            </ul>

            <h3 style='color: #E0E0E0;'>Reset Button</h3>
            <p>The <b>"Reset"</b> button (bottom right) is a soft reset. It clears all input fields (except your Download Location), clears the logs, and resets all download options and filters back to their default state. It does <b>not</b> clear your Download History or saved Settings.</p>
            """)
        ]

        scale = self.parent_app.scale_factor if hasattr(self.parent_app, 'scale_factor') else 1.0

@@ -66,7 +505,38 @@ class HelpGuideDialog(QDialog):

        current_theme_style = ""
        if hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
            current_theme_style = get_dark_theme(scale)
            base_style = get_dark_theme(scale)

            list_widget_style = f"""
                QListWidget {{
                    background-color: #2E2E2E;
                    border: 1px solid #4A4A4A;
                    border-radius: 4px;
                    font-size: {int(11 * scale)}pt;
                    color: #DCDCDC;
                }}
                QListWidget::item {{
                    padding: 10px;
                    border-bottom: 1px solid #4A4A4A;
                }}
                QListWidget::item:selected {{
                    background-color: #87CEEB;
                    color: #1E1E1E;
                    font-weight: bold;
                }}
                QListWidget::item:hover:!selected {{
                    background-color: #3A3A3A;
                }}

                /* Style for the TourStepWidget content */
                TourStepWidget QLabel {{
                    color: #DCDCDC;
                }}
                TourStepWidget QScrollArea {{
                    background-color: transparent;
                }}
            """
            current_theme_style = base_style + list_widget_style
        else:
            # Basic light theme fallback
            current_theme_style = f"""
@@ -83,29 +553,50 @@ class HelpGuideDialog(QDialog):
                }}
                QPushButton:hover {{ background-color: #CACACA; }}
                QPushButton:pressed {{ background-color: #B0B0B0; }}
                QListWidget {{
                    background-color: #FFFFFF;
                    border: 1px solid #C0C0C0;
                    border-radius: 4px;
                    font-size: {int(11 * scale)}pt;
                    color: #1E1E1E;
                }}
                QListWidget::item {{
                    padding: 10px;
                    border-bottom: 1px solid #E0E0E0;
                }}
                QListWidget::item:selected {{
                    background-color: #0078D7;
                    color: #FFFFFF;
                    font-weight: bold;
                }}
                QListWidget::item:hover:!selected {{
                    background-color: #F0F0F0;
                }}
                TourStepWidget QLabel {{
                    color: #1E1E1E;
                }}
                TourStepWidget h3 {{
                    color: #005A9E;
                }}
            """

        self.setStyleSheet(current_theme_style)
        self._init_ui()

        if self.parent_app:
            self.move(self.parent_app.geometry().center() - self.rect().center())

    def _tr(self, key, default_text=""):
        """Helper to get translation based on current app language."""
        if callable(get_translation) and self.parent_app:
            return get_translation(self.parent_app.current_selected_language, key, default_text)
        return default_text

    def _init_ui(self):
        main_layout = QVBoxLayout(self)
        main_layout.setContentsMargins(15, 15, 15, 15)
        main_layout.setSpacing(10)

        # Title
        title_label = QLabel(self._tr("help_guide_dialog_title", "Kemono Downloader - Feature Guide"))
        title_label = QLabel("Kemono Downloader - Feature Guide")
        scale = getattr(self.parent_app, 'scale_factor', 1.0)
        title_font_size = int(16 * scale)
        title_label.setStyleSheet(f"font-size: {title_font_size}pt; font-weight: bold; color: #E0E0E0;")
        # Use a consistent color for the main title
        title_label.setStyleSheet(f"font-size: {title_font_size}pt; font-weight: bold; color: #87CEEB;")
        title_label.setAlignment(Qt.AlignCenter)
        main_layout.addWidget(title_label)

@@ -115,34 +606,14 @@ class HelpGuideDialog(QDialog):

        self.nav_list = QListWidget()
        self.nav_list.setFixedWidth(int(220 * scale))
        self.nav_list.setStyleSheet(f"""
            QListWidget {{
                background-color: #2E2E2E;
                border: 1px solid #4A4A4A;
                border-radius: 4px;
                font-size: {int(11 * scale)}pt;
            }}
            QListWidget::item {{
                padding: 10px;
                border-bottom: 1px solid #4A4A4A;
            }}
            QListWidget::item:selected {{
                background-color: #87CEEB;
                color: #2E2E2E;
                font-weight: bold;
            }}
        """)
        # Styles are now set in the __init__ method
        content_layout.addWidget(self.nav_list)

        self.stacked_widget = QStackedWidget()
        content_layout.addWidget(self.stacked_widget)

        for title_key, content_key in self.steps_data:
            title = self._tr(title_key, title_key)
            content = self._tr(content_key, f"Content for {content_key} not found.")

        for title, content in self.steps_data:
            self.nav_list.addItem(title)

            step_widget = TourStepWidget(title, content, scale=scale)
            self.stacked_widget.addWidget(step_widget)

@@ -171,13 +642,19 @@ class HelpGuideDialog(QDialog):
        icon_dim = int(24 * scale)
        icon_size = QSize(icon_dim, icon_dim)

        tooltip_map = {
            "help_guide_github_tooltip": "Visit the project on GitHub",
            "help_guide_instagram_tooltip": "Follow the developer on Instagram",
            "help_guide_discord_tooltip": "Join the official Discord server"
        }

        for button, tooltip_key, url in [
            (self.github_button, "help_guide_github_tooltip", "https://github.com/Yuvi9587"),
            (self.github_button, "help_guide_github_tooltip", "https://github.com/Yuvi63771/Kemono-Downloader"),
            (self.instagram_button, "help_guide_instagram_tooltip", "https://www.instagram.com/uvi.arts/"),
            (self.discord_button, "help_guide_discord_tooltip", "https://discord.gg/BqP64XTdJN")
        ]:
            button.setIconSize(icon_size)
            button.setToolTip(self._tr(tooltip_key))
            button.setToolTip(tooltip_map.get(tooltip_key, ""))
            button.setFixedSize(icon_size.width() + 8, icon_size.height() + 8)
            button.setStyleSheet("background-color: transparent; border: none;")
            button.clicked.connect(lambda _, u=url: QDesktopServices.openUrl(QUrl(u)))
|
||||
@@ -185,7 +662,7 @@ class HelpGuideDialog(QDialog):
|
||||
|
||||
footer_layout.addStretch(1)
|
||||
|
||||
self.finish_button = QPushButton(self._tr("tour_dialog_finish_button", "Finish"))
|
||||
self.finish_button = QPushButton("Finish")
|
||||
self.finish_button.clicked.connect(self.accept)
|
||||
footer_layout.addWidget(self.finish_button)
|
||||
|
||||
|
||||
@@ -11,17 +11,16 @@ class MoreOptionsDialog(QDialog):
    SCOPE_CONTENT = "content"
    SCOPE_COMMENTS = "comments"

    def __init__(self, parent=None, current_scope=None, current_format=None, single_pdf_checked=False):
    def __init__(self, parent=None, current_scope=None, current_format=None, single_pdf_checked=False, add_info_checked=False):
        super().__init__(parent)
        self.parent_app = parent
        self.setWindowTitle("More Options")
        self.setMinimumWidth(350)

        # ... (Layout and other widgets remain the same) ...

        layout = QVBoxLayout(self)
        self.description_label = QLabel("Please choose the scope for the action:")
        layout.addWidget(self.description_label)

        self.radio_button_group = QButtonGroup(self)
        self.radio_content = QRadioButton("Description/Content")
        self.radio_comments = QRadioButton("Comments")
@@ -50,14 +49,20 @@ class MoreOptionsDialog(QDialog):
        export_layout.addStretch()
        layout.addLayout(export_layout)

        # --- UPDATED: Single PDF Checkbox ---
        # --- Single PDF Checkbox ---
        self.single_pdf_checkbox = QCheckBox("Single PDF")
        self.single_pdf_checkbox.setToolTip("If checked, all text from matching posts will be compiled into one single PDF file.")
        self.single_pdf_checkbox.setChecked(single_pdf_checked)
        layout.addWidget(self.single_pdf_checkbox)

        self.format_combo.currentTextChanged.connect(self.update_single_pdf_checkbox_state)
        self.update_single_pdf_checkbox_state(self.format_combo.currentText())
        # --- NEW: Add Info Checkbox ---
        self.add_info_checkbox = QCheckBox("Add info in PDF")
        self.add_info_checkbox.setToolTip("If checked, adds a first page with post details (Title, Date, Link, Creator, Tags, etc.).")
        self.add_info_checkbox.setChecked(add_info_checked)
        layout.addWidget(self.add_info_checkbox)

        self.format_combo.currentTextChanged.connect(self.update_checkbox_states)
        self.update_checkbox_states(self.format_combo.currentText())

        self.button_box = QDialogButtonBox(QDialogButtonBox.Ok | QDialogButtonBox.Cancel)
        self.button_box.accepted.connect(self.accept)
@@ -65,12 +70,18 @@ class MoreOptionsDialog(QDialog):
        layout.addWidget(self.button_box)
        self.setLayout(layout)
        self._apply_theme()

    def update_single_pdf_checkbox_state(self, text):
        """Enable the Single PDF checkbox only if the format is PDF."""

    def update_checkbox_states(self, text):
        """Enable PDF-specific checkboxes only if the format is PDF."""
        is_pdf = (text.upper() == "PDF")
        self.single_pdf_checkbox.setEnabled(is_pdf)
        self.add_info_checkbox.setEnabled(is_pdf)

        if not is_pdf:
            self.single_pdf_checkbox.setChecked(False)
            # Disabling add_info alone would suffice, but unchecking it as well
            # makes it visually clear that no info page will be generated.
            self.add_info_checkbox.setChecked(False)

    def get_selected_scope(self):
        if self.radio_comments.isChecked():
@@ -84,13 +95,14 @@ class MoreOptionsDialog(QDialog):
        """Returns the state of the Single PDF checkbox."""
        return self.single_pdf_checkbox.isChecked() and self.single_pdf_checkbox.isEnabled()

    def get_add_info_state(self):
        """Returns the state of the Add Info checkbox."""
        return self.add_info_checkbox.isChecked() and self.add_info_checkbox.isEnabled()

    def _apply_theme(self):
        """Applies the current theme from the parent application."""
        if self.parent_app and self.parent_app.current_theme == "dark":
        # Get the scale factor from the parent app
        if self.parent_app and hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
            scale = getattr(self.parent_app, 'scale_factor', 1)
            # Call the imported function with the correct scale
            self.setStyleSheet(get_dark_theme(scale))
        else:
            # Explicitly set a blank stylesheet for light mode
            self.setStyleSheet("")
            self.setStyleSheet("")
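
# A minimal usage sketch, assuming the dialog above is importable; the calling
# code is illustrative and not part of this changeset. Both getters return
# False automatically when the format is not PDF, because they also check
# isEnabled():
#
#     dlg = MoreOptionsDialog(parent, current_scope=MoreOptionsDialog.SCOPE_CONTENT,
#                             current_format="PDF", single_pdf_checked=False,
#                             add_info_checked=False)
#     if dlg.exec_() == QDialog.Accepted:
#         scope = dlg.get_selected_scope()
#         single_pdf = dlg.get_single_pdf_state()
#         add_info = dlg.get_add_info_state()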
@@ -4,24 +4,22 @@ try:
    from fpdf import FPDF
    FPDF_AVAILABLE = True

    # --- FIX: Move the class definition inside the try block ---
    class PDF(FPDF):
        """Custom PDF class to handle headers and footers."""
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.font_family_main = 'Arial'

        def header(self):
            pass

        def footer(self):
            self.set_y(-15)
            if self.font_family:
                self.set_font(self.font_family, '', 8)
            else:
                self.set_font('Arial', '', 8)
            self.set_font(self.font_family_main, '', 8)
            self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'C')

except ImportError:
    FPDF_AVAILABLE = False
    # If the import fails, FPDF and PDF will not be defined,
    # but the program won't crash here.
    FPDF = None
    PDF = None

@@ -31,12 +29,169 @@ def strip_html_tags(text):
    clean = re.compile('<.*?>')
    return re.sub(clean, '', text)
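
# A quick illustration of strip_html_tags; the regex removes anything between
# '<' and '>', so it is a heuristic rather than a full HTML parser:
#
#     strip_html_tags('<p>Hello <b>world</b></p>')   # -> 'Hello world'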

def create_single_pdf_from_content(posts_data, output_filename, font_path, logger=print):
def _setup_pdf_fonts(pdf, font_path, logger=print):
    """Helper to setup fonts for the PDF instance."""
    bold_font_path = ""
    default_font = 'Arial'

    if font_path:
        bold_font_path = font_path.replace("DejaVuSans.ttf", "DejaVuSans-Bold.ttf")

    try:
        if font_path and os.path.exists(font_path):
            pdf.add_font('DejaVu', '', font_path, uni=True)
            default_font = 'DejaVu'
            if os.path.exists(bold_font_path):
                pdf.add_font('DejaVu', 'B', bold_font_path, uni=True)
            else:
                pdf.add_font('DejaVu', 'B', font_path, uni=True)
    except Exception as font_error:
        logger(f"   ⚠️ Could not load DejaVu font: {font_error}. Falling back to Arial.")
        default_font = 'Arial'

    pdf.font_family_main = default_font
    return default_font
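
# Usage sketch for _setup_pdf_fonts. The font location below is hypothetical;
# callers are expected to pass the path to a bundled DejaVuSans.ttf, and the
# bold variant is derived from it by the name substitution above:
#
#     pdf = PDF()
#     font_family = _setup_pdf_fonts(pdf, os.path.join("assets", "DejaVuSans.ttf"))
#     pdf.add_page()
#     pdf.set_font(font_family, '', 12)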

def add_metadata_page(pdf, post, font_family):
    """Adds a dedicated metadata page to the PDF with clickable links."""
    pdf.add_page()
    pdf.set_font(font_family, 'B', 16)
    pdf.multi_cell(w=0, h=10, txt=post.get('title', 'Untitled Post'), align='C')
    pdf.ln(10)
    pdf.set_font(font_family, '', 11)

    def add_info_row(label, value, link_url=None):
        if not value: return

        # Write Label (Bold)
        pdf.set_font(font_family, 'B', 11)
        pdf.write(8, f"{label}: ")

        # Write Value
        if link_url:
            # Styling for clickable link: Blue + Underline
            pdf.set_text_color(0, 0, 255)
            # FPDF accepts 'U' in the style string, so we combine it with the font family.
            # Note: depending on the fpdf2 version, the DejaVu font may handle 'U'
            # automatically or ignore it, but the blue text colour alone already
            # marks the value as a link.
            pdf.set_font(font_family, 'U', 11)

            # Pass the URL to the 'link' parameter
            pdf.multi_cell(w=0, h=8, txt=str(value), link=link_url)

            # Reset styles
            pdf.set_text_color(0, 0, 0)
            pdf.set_font(font_family, '', 11)
        else:
            pdf.set_font(font_family, '', 11)
            pdf.multi_cell(w=0, h=8, txt=str(value))

        pdf.ln(2)

    date_str = post.get('published') or post.get('added') or 'Unknown'
    add_info_row("Date Uploaded", date_str)

    creator = post.get('creator_name') or post.get('user') or 'Unknown'
    add_info_row("Creator", creator)

    add_info_row("Service", post.get('service', 'Unknown'))

    link = post.get('original_link')
    if not link and post.get('service') and post.get('user') and post.get('id'):
        link = f"https://kemono.su/{post['service']}/user/{post['user']}/post/{post['id']}"

    # Pass 'link' as both the text value AND the URL target
    add_info_row("Original Link", link, link_url=link)

    tags = post.get('tags')
    if tags:
        tags_str = ", ".join(tags) if isinstance(tags, list) else str(tags)
        add_info_row("Tags", tags_str)

    pdf.ln(10)
    pdf.cell(0, 0, border='T')
    pdf.ln(10)

def create_individual_pdf(post_data, output_filename, font_path, add_info_page=False, add_comments=False, logger=print):
    """
    Creates a single, continuous PDF, correctly formatting both descriptions and comments.
    Creates a PDF for a single post.
    Supports optional metadata page and appending comments.
    """
    if not FPDF_AVAILABLE:
        logger("❌ PDF Creation failed: 'fpdf2' library is not installed. Please run: pip install fpdf2")
        logger("❌ PDF Creation failed: 'fpdf2' library not installed.")
        return False

    pdf = PDF()
    font_family = _setup_pdf_fonts(pdf, font_path, logger)

    if add_info_page:
        # add_metadata_page adds the page start itself
        add_metadata_page(pdf, post_data, font_family)
        # REMOVED: pdf.add_page() <-- This ensures content starts right below the line
    else:
        pdf.add_page()

    # Only add the Title header manually if we didn't add the info page
    # (Because the info page already contains the title at the top)
    if not add_info_page:
        pdf.set_font(font_family, 'B', 16)
        pdf.multi_cell(w=0, h=10, txt=post_data.get('title', 'Untitled Post'), align='L')
        pdf.ln(5)

    content_text = post_data.get('content_text_for_pdf')
    comments_list = post_data.get('comments_list_for_pdf')

    # 1. Write Content
    if content_text:
        pdf.set_font(font_family, '', 12)
        pdf.multi_cell(w=0, h=7, txt=content_text)
        pdf.ln(10)

    # 2. Write Comments (if enabled and present)
    if comments_list and (add_comments or not content_text):
        if add_comments and content_text:
            pdf.add_page()
        pdf.set_font(font_family, 'B', 14)
        pdf.cell(0, 10, "Comments", ln=True)
        pdf.ln(5)

        for i, comment in enumerate(comments_list):
            user = comment.get('commenter_name', 'Unknown User')
            timestamp = comment.get('published', 'No Date')
            body = strip_html_tags(comment.get('content', ''))

            pdf.set_font(font_family, '', 10)
            pdf.write(8, "Comment by: ")
            pdf.set_font(font_family, 'B', 10)
            pdf.write(8, str(user))

            pdf.set_font(font_family, '', 10)
            pdf.write(8, f" on {timestamp}")
            pdf.ln(10)

            pdf.set_font(font_family, '', 11)
            pdf.multi_cell(w=0, h=7, txt=body)

            if i < len(comments_list) - 1:
                pdf.ln(3)
                pdf.cell(w=0, h=0, border='T')
                pdf.ln(3)

    try:
        pdf.output(output_filename)
        return True
    except Exception as e:
        logger(f"❌ Error saving PDF '{os.path.basename(output_filename)}': {e}")
        return False
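
# Illustrative call for create_individual_pdf. The dict keys are the ones the
# function and add_metadata_page read above; the concrete values are invented:
#
#     post = {
#         'title': 'Example Post',
#         'published': '2024-01-01',
#         'service': 'patreon',
#         'user': '12345',
#         'id': '67890',
#         'content_text_for_pdf': 'Post body text...',
#         'comments_list_for_pdf': [
#             {'commenter_name': 'Alice', 'published': '2024-01-02', 'content': '<p>Nice!</p>'},
#         ],
#     }
#     create_individual_pdf(post, 'example.pdf', font_path=None,
#                           add_info_page=True, add_comments=True)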

def create_single_pdf_from_content(posts_data, output_filename, font_path, add_info_page=False, logger=print):
    """
    Creates a single, continuous PDF from multiple posts.
    """
    if not FPDF_AVAILABLE:
        logger("❌ PDF Creation failed: 'fpdf2' library is not installed.")
        return False

    if not posts_data:
@@ -44,38 +199,21 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
        return False

    pdf = PDF()
    default_font_family = 'DejaVu'
    font_family = _setup_pdf_fonts(pdf, font_path, logger)

    bold_font_path = ""
    if font_path:
        bold_font_path = font_path.replace("DejaVuSans.ttf", "DejaVuSans-Bold.ttf")

    try:
        if not os.path.exists(font_path): raise RuntimeError(f"Font file not found: {font_path}")
        if not os.path.exists(bold_font_path): raise RuntimeError(f"Bold font file not found: {bold_font_path}")

        pdf.add_font('DejaVu', '', font_path, uni=True)
        pdf.add_font('DejaVu', 'B', bold_font_path, uni=True)
    except Exception as font_error:
        logger(f"   ⚠️ Could not load DejaVu font: {font_error}. Falling back to Arial.")
        default_font_family = 'Arial'

    pdf.add_page()

    logger(f"   Starting continuous PDF creation with content from {len(posts_data)} posts...")

    for i, post in enumerate(posts_data):
        if i > 0:
            if 'content' in post:
                pdf.add_page()
            elif 'comments' in post:
                pdf.ln(10)
                pdf.cell(0, 0, '', border='T')
                pdf.ln(10)
        if add_info_page:
            add_metadata_page(pdf, post, font_family)
            # REMOVED: pdf.add_page() <-- This ensures content starts right below the line
        else:
            pdf.add_page()

        pdf.set_font(default_font_family, 'B', 16)
        pdf.multi_cell(w=0, h=10, txt=post.get('title', 'Untitled Post'), align='L')
        pdf.ln(5)
        if not add_info_page:
            pdf.set_font(font_family, 'B', 16)
            pdf.multi_cell(w=0, h=10, txt=post.get('title', 'Untitled Post'), align='L')
            pdf.ln(5)

        if 'comments' in post and post['comments']:
            comments_list = post['comments']
@@ -84,17 +222,17 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
                timestamp = comment.get('published', 'No Date')
                body = strip_html_tags(comment.get('content', ''))

                pdf.set_font(default_font_family, '', 10)
                pdf.set_font(font_family, '', 10)
                pdf.write(8, "Comment by: ")
                if user is not None:
                    pdf.set_font(default_font_family, 'B', 10)
                    pdf.set_font(font_family, 'B', 10)
                    pdf.write(8, str(user))

                pdf.set_font(default_font_family, '', 10)
                pdf.set_font(font_family, '', 10)
                pdf.write(8, f" on {timestamp}")
                pdf.ln(10)

                pdf.set_font(default_font_family, '', 11)
                pdf.set_font(font_family, '', 11)
                pdf.multi_cell(w=0, h=7, txt=body)

                if comment_index < len(comments_list) - 1:
@@ -102,7 +240,7 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
                    pdf.cell(w=0, h=0, border='T')
                    pdf.ln(3)
        elif 'content' in post:
            pdf.set_font(default_font_family, '', 12)
            pdf.set_font(font_family, '', 12)
            pdf.multi_cell(w=0, h=7, txt=post.get('content', 'No Content'))

    try:
@@ -111,4 +249,4 @@ def create_single_pdf_from_content(posts_data, output_filename, font_path, logge
        return True
    except Exception as e:
        logger(f"❌ A critical error occurred while saving the final PDF: {e}")
        return False
        return False
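
# Illustrative call for the multi-post variant. Each entry carries either a
# 'content' string or a 'comments' list, which is how the loop above decides
# what to render; the values are invented:
#
#     posts = [
#         {'title': 'Post A', 'content': 'First post body...'},
#         {'title': 'Post B', 'comments': [
#             {'commenter_name': 'Bob', 'published': '2024-02-01', 'content': 'Thanks!'},
#         ]},
#     ]
#     create_single_pdf_from_content(posts, 'combined.pdf', font_path=None,
#                                    add_info_page=False)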
@@ -1,71 +1,144 @@
# src/ui/dialogs/SupportDialog.py
# src/app/dialogs/SupportDialog.py

# --- Standard Library Imports ---
import sys
import os

# --- PyQt5 Imports ---
from PyQt5.QtWidgets import (
    QDialog, QVBoxLayout, QLabel, QFrame, QDialogButtonBox, QGridLayout
    QDialog, QVBoxLayout, QHBoxLayout, QLabel, QFrame,
    QPushButton, QSizePolicy
)
from PyQt5.QtCore import Qt, QSize
from PyQt5.QtGui import QFont, QPixmap
from PyQt5.QtCore import Qt, QSize, QUrl
from PyQt5.QtGui import QPixmap, QDesktopServices

# --- Local Application Imports ---
from ...utils.resolution import get_dark_theme

# --- Helper function for robust asset loading ---
def get_asset_path(filename):
    """
    Gets the absolute path to a file in the assets folder,
    handling both development and frozen (PyInstaller) environments.
    """
    if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
        # Running in a PyInstaller bundle
        base_path = sys._MEIPASS
    else:
        # Running in a normal Python environment from src/ui/dialogs/
        base_path = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

    return os.path.join(base_path, 'assets', filename)
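
# Packaging note (an assumption about the build setup, not shown in this diff):
# for sys._MEIPASS to contain the assets folder, the PyInstaller build must
# bundle it explicitly; 'main.py' below is a placeholder entry script:
#
#     pyinstaller main.py --add-data "assets:assets"    # Linux/macOS (':' separator)
#     pyinstaller main.py --add-data "assets;assets"    # Windows (';' separator)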


class SupportDialog(QDialog):
    """
    A dialog to show support and donation options.
    A polished dialog showcasing support and community options in a
    clean, modern card-based layout.
    """
    def __init__(self, parent=None):
        super().__init__(parent)
        self.parent_app = parent
        self.setWindowTitle("❤️ Support the Developer")
        self.setMinimumWidth(450)

        self.setWindowTitle("❤️ Support & Community")
        self.setMinimumWidth(560)

        self._init_ui()
        self._apply_theme()

    def _init_ui(self):
        """Initializes all UI components and layouts for the dialog."""
        # Main layout
        main_layout = QVBoxLayout(self)
        main_layout.setSpacing(15)
    def _create_card_button(
        self, icon_path, title, subtitle, url,
        hover_color="#2E2E2E", min_height=110, icon_size=44
    ):
        """Reusable clickable card widget with icon, title, and subtitle."""
        button = QPushButton()
        button.setCursor(Qt.PointingHandCursor)
        button.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding)
        button.setMinimumHeight(min_height)

        # Title Label
        title_label = QLabel("Thank You for Your Support!")
        font = title_label.font()
        font.setPointSize(14)
        # Consistent style
        button.setStyleSheet(f"""
            QPushButton {{
                background-color: #3A3A3A;
                border: 1px solid #555;
                border-radius: 10px;
                text-align: center;
                padding: 12px;
            }}
            QPushButton:hover {{
                background-color: {hover_color};
                border: 1px solid #777;
            }}
        """)

        layout = QVBoxLayout(button)
        layout.setSpacing(6)

        # Icon
        icon_label = QLabel()
        pixmap = QPixmap(icon_path)
        if not pixmap.isNull():
            scale = getattr(self.parent_app, 'scale_factor', 1.0)
            scaled_size = int(icon_size * scale)
            icon_label.setPixmap(
                pixmap.scaled(QSize(scaled_size, scaled_size), Qt.KeepAspectRatio, Qt.SmoothTransformation)
            )
        icon_label.setAlignment(Qt.AlignCenter)
        layout.addWidget(icon_label)

        # Title
        title_label = QLabel(title)
        font = self.font()
        font.setPointSize(11)
        font.setBold(True)
        title_label.setFont(font)
        title_label.setAlignment(Qt.AlignCenter)
        main_layout.addWidget(title_label)
        title_label.setStyleSheet("background-color: transparent; border: none;")
        layout.addWidget(title_label)

        # Informational Text
        info_label = QLabel(
            "If you find this application useful, please consider supporting its development. "
            "Your contribution helps cover costs and encourages future updates and features."
        # Subtitle
        if subtitle:
            subtitle_label = QLabel(subtitle)
            subtitle_label.setStyleSheet("color: #A8A8A8; background-color: transparent; border: none;")
            subtitle_label.setAlignment(Qt.AlignCenter)
            layout.addWidget(subtitle_label)

        button.clicked.connect(lambda: QDesktopServices.openUrl(QUrl(url)))
        return button

    def _create_section_title(self, text):
        """Stylized section heading."""
        label = QLabel(text)
        font = label.font()
        font.setPointSize(13)
        font.setBold(True)
        label.setFont(font)
        label.setAlignment(Qt.AlignCenter)
        label.setStyleSheet("margin-top: 10px; margin-bottom: 5px;")
        return label

    def _init_ui(self):
        main_layout = QVBoxLayout(self)
        main_layout.setSpacing(18)
        main_layout.setContentsMargins(20, 20, 20, 20)

        # Header
        header_label = QLabel("Support the Project")
        font = header_label.font()
        font.setPointSize(17)
        font.setBold(True)
        header_label.setFont(font)
        header_label.setAlignment(Qt.AlignCenter)
        main_layout.addWidget(header_label)

        subtext = QLabel(
            "If you enjoy this application, consider supporting its development. "
            "Your help keeps the project alive and growing!"
        )
        info_label.setWordWrap(True)
        info_label.setAlignment(Qt.AlignCenter)
        main_layout.addWidget(info_label)
        subtext.setWordWrap(True)
        subtext.setAlignment(Qt.AlignCenter)
        main_layout.addWidget(subtext)

        # Financial Support
        main_layout.addWidget(self._create_section_title("Contribute Financially"))
        donation_layout = QHBoxLayout()
        donation_layout.setSpacing(15)

        donation_layout.addWidget(self._create_card_button(
            get_asset_path("ko-fi.png"), "Ko-fi", "One-time",
            "https://ko-fi.com/yuvi427183", "#2B2F36"
        ))
        donation_layout.addWidget(self._create_card_button(
            get_asset_path("patreon.png"), "Patreon", "Soon",
            "https://www.patreon.com/Yuvi102", "#3A2E2B"
        ))
        donation_layout.addWidget(self._create_card_button(
            get_asset_path("buymeacoffee.png"), "Buy Me a Coffee", "One-time",
            "https://buymeacoffee.com/yuvi9587", "#403520"
        ))
        main_layout.addLayout(donation_layout)

        # Separator
        line = QFrame()
@@ -73,83 +146,62 @@ class SupportDialog(QDialog):
        line.setFrameShadow(QFrame.Sunken)
        main_layout.addWidget(line)

        # --- Donation Options Layout (using a grid for icons and text) ---
        options_layout = QGridLayout()
        options_layout.setSpacing(18)
        options_layout.setColumnStretch(0, 1)  # Add stretch to center the content horizontally
        options_layout.setColumnStretch(3, 1)
        # Community Section
        main_layout.addWidget(self._create_section_title("Get Help & Connect"))
        community_layout = QHBoxLayout()
        community_layout.setSpacing(15)

        link_font = self.font()
        link_font.setPointSize(12)
        link_font.setBold(True)

        scale = getattr(self.parent_app, 'scale_factor', 1.0)
        icon_size = int(32 * scale)

        # --- Ko-fi ---
        kofi_icon_label = QLabel()
        kofi_pixmap = QPixmap(get_asset_path("kofi.png"))
        if not kofi_pixmap.isNull():
            kofi_icon_label.setPixmap(kofi_pixmap.scaled(QSize(icon_size, icon_size), Qt.KeepAspectRatio, Qt.SmoothTransformation))

        kofi_text_label = QLabel(
            '<a href="https://ko-fi.com/yuvi427183" style="color: #13C2C2; text-decoration: none;">'
            '☕ Buy me a Ko-fi'
            '</a>'
        )
        kofi_text_label.setOpenExternalLinks(True)
        kofi_text_label.setFont(link_font)

        options_layout.addWidget(kofi_icon_label, 0, 1, Qt.AlignRight | Qt.AlignVCenter)
        options_layout.addWidget(kofi_text_label, 0, 2, Qt.AlignLeft | Qt.AlignVCenter)

        # --- GitHub Sponsors ---
        github_icon_label = QLabel()
        github_pixmap = QPixmap(get_asset_path("github_sponsors.png"))
        if not github_pixmap.isNull():
            github_icon_label.setPixmap(github_pixmap.scaled(QSize(icon_size, icon_size), Qt.KeepAspectRatio, Qt.SmoothTransformation))

        github_text_label = QLabel(
            '<a href="https://github.com/sponsors/Yuvi9587" style="color: #EA4AAA; text-decoration: none;">'
            '💜 Sponsor on GitHub'
            '</a>'
        )
        github_text_label.setOpenExternalLinks(True)
        github_text_label.setFont(link_font)

        options_layout.addWidget(github_icon_label, 1, 1, Qt.AlignRight | Qt.AlignVCenter)
        options_layout.addWidget(github_text_label, 1, 2, Qt.AlignLeft | Qt.AlignVCenter)

        # --- Buy Me a Coffee (New) ---
        bmac_icon_label = QLabel()
        bmac_pixmap = QPixmap(get_asset_path("bmac.png"))
        if not bmac_pixmap.isNull():
            bmac_icon_label.setPixmap(bmac_pixmap.scaled(QSize(icon_size, icon_size), Qt.KeepAspectRatio, Qt.SmoothTransformation))

        bmac_text_label = QLabel(
            '<a href="https://buymeacoffee.com/yuvi9587" style="color: #FFDD00; text-decoration: none;">'
            '🍺 Buy Me a Coffee'
            '</a>'
        )
        bmac_text_label.setOpenExternalLinks(True)
        bmac_text_label.setFont(link_font)

        options_layout.addWidget(bmac_icon_label, 2, 1, Qt.AlignRight | Qt.AlignVCenter)
        options_layout.addWidget(bmac_text_label, 2, 2, Qt.AlignLeft | Qt.AlignVCenter)

        main_layout.addLayout(options_layout)
        community_layout.addWidget(self._create_card_button(
            get_asset_path("github.png"), "GitHub", "Report issues",
            "https://github.com/Yuvi9587/Kemono-Downloader", "#2E2E2E",
            min_height=100, icon_size=36
        ))
        community_layout.addWidget(self._create_card_button(
            get_asset_path("discord.png"), "Discord", "Join the server",
            "https://discord.gg/BqP64XTdJN", "#2C2F33",
            min_height=100, icon_size=36
        ))
        community_layout.addWidget(self._create_card_button(
            get_asset_path("instagram.png"), "Instagram", "Follow me",
            "https://www.instagram.com/uvi.arts/", "#3B2E40",
            min_height=100, icon_size=36
        ))
        main_layout.addLayout(community_layout)

        # Close Button
        self.button_box = QDialogButtonBox(QDialogButtonBox.Close)
        self.button_box.rejected.connect(self.reject)
        main_layout.addWidget(self.button_box)
        close_button = QPushButton("Close")
        close_button.setMinimumWidth(100)
        close_button.clicked.connect(self.accept)
        close_button.setStyleSheet("""
            QPushButton {
                padding: 6px 14px;
                border-radius: 6px;
                background-color: #444;
                color: white;
            }
            QPushButton:hover {
                background-color: #555;
            }
        """)

        self.setLayout(main_layout)
        button_layout = QHBoxLayout()
        button_layout.addStretch()
        button_layout.addWidget(close_button)
        button_layout.addStretch()
        main_layout.addLayout(button_layout)

    def _apply_theme(self):
        """Applies the current theme from the parent application."""
        if self.parent_app and hasattr(self.parent_app, 'current_theme') and self.parent_app.current_theme == "dark":
            scale = getattr(self.parent_app, 'scale_factor', 1)
            self.setStyleSheet(get_dark_theme(scale))
        else:
            self.setStyleSheet("")
            self.setStyleSheet("")


def get_asset_path(filename):
    """Return the path to an asset, works in both dev and packaged environments."""
    if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
        base_path = sys._MEIPASS
    else:
        base_path = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..', '..'))
    return os.path.join(base_path, 'assets', filename)

@@ -1,6 +1,6 @@
import os
import sys
from PyQt5.QtCore import pyqtSignal, Qt, QSettings, QCoreApplication
from PyQt5.QtCore import pyqtSignal, Qt, QSettings
from PyQt5.QtWidgets import (
    QApplication, QDialog, QHBoxLayout, QLabel, QPushButton, QVBoxLayout,
    QStackedWidget, QScrollArea, QFrame, QWidget, QCheckBox
@@ -8,89 +8,88 @@ from PyQt5.QtWidgets import (
from ...i18n.translator import get_translation
from ..main_window import get_app_icon_object
from ...utils.resolution import get_dark_theme
from ...config.constants import (
    CONFIG_ORGANIZATION_NAME
)
from ...config.constants import CONFIG_ORGANIZATION_NAME


class TourStepWidget(QWidget):
    """
    A custom widget representing a single step or page in the feature tour.
    It neatly formats a title and its corresponding content.
    A custom widget for a single tour page, with improved styling for titles and content.
    """
    def __init__(self, title_text, content_text, parent=None):
        super().__init__(parent)
        layout = QVBoxLayout(self)
        layout.setContentsMargins(20, 20, 20, 20)
        layout.setSpacing(10)
        layout.setContentsMargins(25, 20, 25, 20)
        layout.setSpacing(15)
        layout.setAlignment(Qt.AlignHCenter)

        title_label = QLabel(title_text)
        title_label.setAlignment(Qt.AlignCenter)
        title_label.setStyleSheet("font-size: 18px; font-weight: bold; color: #E0E0E0; padding-bottom: 15px;")
        title_label.setWordWrap(True)
        title_label.setStyleSheet("font-size: 18pt; font-weight: bold; color: #E0E0E0; padding-bottom: 10px;")
        layout.addWidget(title_label)

        # Frame for the content area to give it a nice border
        content_frame = QFrame()
        content_frame.setObjectName("contentFrame")
        content_layout = QVBoxLayout(content_frame)

        scroll_area = QScrollArea()
        scroll_area.setWidgetResizable(True)
        scroll_area.setFrameShape(QFrame.NoFrame)
        scroll_area.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff)
        scroll_area.setVerticalScrollBarPolicy(Qt.ScrollBarAsNeeded)
        scroll_area.setStyleSheet("background-color: transparent;")

        content_label = QLabel(content_text)
        content_label.setWordWrap(True)
        content_label.setAlignment(Qt.AlignLeft | Qt.AlignTop)
        content_label.setTextFormat(Qt.RichText)
        content_label.setOpenExternalLinks(True)
        content_label.setStyleSheet("font-size: 11pt; color: #C8C8C8; line-height: 1.8;")
        # Indent the content slightly for better readability
        content_label.setStyleSheet("font-size: 11pt; color: #C8C8C8; padding-left: 5px; padding-right: 5px;")

        scroll_area.setWidget(content_label)
        layout.addWidget(scroll_area, 1)
        content_layout.addWidget(scroll_area)
        layout.addWidget(content_frame, 1)


class TourDialog(QDialog):
    """
    A dialog that shows a multi-page tour to the user on first launch.
    Includes a "Never show again" checkbox and uses QSettings to remember this preference.
    A redesigned, multi-page tour dialog with a visual progress indicator.
    """
    tour_finished_normally = pyqtSignal()
    tour_skipped = pyqtSignal()
    CONFIG_APP_NAME_TOUR = "ApplicationTour"
    TOUR_SHOWN_KEY = "neverShowTourAgainV19"
    TOUR_SHOWN_KEY = "neverShowTourAgainV20"  # Version bumped to ensure the new tour shows once
    CONFIG_ORGANIZATION_NAME = CONFIG_ORGANIZATION_NAME

    def __init__(self, parent_app, parent=None):
        """
        Initializes the dialog.

        Args:
            parent_app (DownloaderApp): A reference to the main application window.
            parent (QWidget, optional): The parent widget. Defaults to None.
        """
        super().__init__(parent)
        self.settings = QSettings(CONFIG_ORGANIZATION_NAME, self.CONFIG_APP_NAME_TOUR)
        self.settings = QSettings(self.CONFIG_ORGANIZATION_NAME, self.CONFIG_APP_NAME_TOUR)
        self.current_step = 0
        self.parent_app = parent_app
        self.progress_dots = []

        self.setWindowIcon(get_app_icon_object())
        self.setModal(True)
        self.setFixedSize(600, 620)
        self.setFixedSize(680, 650)

        self._init_ui()
        self._apply_theme()
        self._center_on_screen()

    def _tr(self, key, default_text=""):
        """Helper for translation."""
        if callable(get_translation) and self.parent_app:
            return get_translation(self.parent_app.current_selected_language, key, default_text)
        return default_text

    def _init_ui(self):
        """Initializes all UI components and layouts."""
        main_layout = QVBoxLayout(self)
        main_layout.setContentsMargins(0, 0, 0, 0)
        main_layout.setSpacing(0)

        self.stacked_widget = QStackedWidget()
        main_layout.addWidget(self.stacked_widget, 1)

        # All 8 steps from your translator.py file
        steps_content = [
            ("tour_dialog_step1_title", "tour_dialog_step1_content"),
            ("tour_dialog_step2_title", "tour_dialog_step2_content"),
@@ -102,52 +101,105 @@ class TourDialog(QDialog):
            ("tour_dialog_step8_title", "tour_dialog_step8_content"),
        ]

        self.tour_steps_widgets = []
        for title_key, content_key in steps_content:
            title = self._tr(title_key, title_key)
            content = self._tr(content_key, "Content not found.")
            step_widget = TourStepWidget(title, content)
            self.tour_steps_widgets.append(step_widget)
            self.stacked_widget.addWidget(step_widget)

        self.setWindowTitle(self._tr("tour_dialog_title", "Welcome to Kemono Downloader!"))
        bottom_controls_layout = QVBoxLayout()
        bottom_controls_layout.setContentsMargins(15, 10, 15, 15)
        bottom_controls_layout.setSpacing(12)

        # --- Bottom Controls Area ---
        bottom_frame = QFrame()
        bottom_frame.setObjectName("bottomFrame")
        main_layout.addWidget(bottom_frame)

        self.never_show_again_checkbox = QCheckBox(self._tr("tour_dialog_never_show_checkbox", "Never show this tour again"))
        bottom_controls_layout.addWidget(self.never_show_again_checkbox, 0, Qt.AlignLeft)
        bottom_controls_layout = QVBoxLayout(bottom_frame)
        bottom_controls_layout.setContentsMargins(20, 15, 20, 20)
        bottom_controls_layout.setSpacing(15)

        buttons_layout = QHBoxLayout()
        buttons_layout.setSpacing(10)
        self.skip_button = QPushButton(self._tr("tour_dialog_skip_button", "Skip Tour"))
        # --- Progress Indicator ---
        progress_layout = QHBoxLayout()
        progress_layout.addStretch()
        for i in range(len(steps_content)):
            dot = QLabel()
            dot.setObjectName("progressDot")
            dot.setFixedSize(12, 12)
            self.progress_dots.append(dot)
            progress_layout.addWidget(dot)
        progress_layout.addStretch()
        bottom_controls_layout.addLayout(progress_layout)

        # --- Buttons and Checkbox ---
        buttons_and_check_layout = QHBoxLayout()
        self.never_show_again_checkbox = QCheckBox(self._tr("tour_dialog_never_show_checkbox", "Never show this again"))
        buttons_and_check_layout.addWidget(self.never_show_again_checkbox, 0, Qt.AlignLeft)
        buttons_and_check_layout.addStretch()

        self.skip_button = QPushButton(self._tr("tour_dialog_skip_button", "Skip"))
        self.skip_button.clicked.connect(self._skip_tour_action)
        self.back_button = QPushButton(self._tr("tour_dialog_back_button", "Back"))
        self.back_button.clicked.connect(self._previous_step)
        self.next_button = QPushButton(self._tr("tour_dialog_next_button", "Next"))
        self.next_button.clicked.connect(self._next_step_action)
        self.next_button.setDefault(True)
        self.next_button.setObjectName("nextButton")  # For special styling

        buttons_layout.addWidget(self.skip_button)
        buttons_layout.addStretch(1)
        buttons_layout.addWidget(self.back_button)
        buttons_layout.addWidget(self.next_button)
        buttons_and_check_layout.addWidget(self.skip_button)
        buttons_and_check_layout.addWidget(self.back_button)
        buttons_and_check_layout.addWidget(self.next_button)
        bottom_controls_layout.addLayout(buttons_and_check_layout)

        bottom_controls_layout.addLayout(buttons_layout)
        main_layout.addLayout(bottom_controls_layout)

        self._update_button_states()
        self._update_ui_states()

    def _apply_theme(self):
        """Applies the current theme from the parent application."""
        if self.parent_app and self.parent_app.current_theme == "dark":
            scale = getattr(self.parent_app, 'scale_factor', 1)
            self.setStyleSheet(get_dark_theme(scale))
            dark_theme_base = get_dark_theme(scale)
            tour_styles = """
                QDialog {
                    background-color: #2D2D30;
                }
                #bottomFrame {
                    background-color: #252526;
                    border-top: 1px solid #3E3E42;
                }
                #contentFrame {
                    border: 1px solid #3E3E42;
                    border-radius: 5px;
                }
                QScrollArea {
                    background-color: transparent;
                    border: none;
                }
                #progressDot {
                    background-color: #555;
                    border-radius: 6px;
                    border: 1px solid #4F4F4F;
                }
                #progressDot[active="true"] {
                    background-color: #007ACC;
                    border: 1px solid #005A9E;
                }
                #nextButton {
                    background-color: #007ACC;
                    border: 1px solid #005A9E;
                    padding: 8px 18px;
                    font-weight: bold;
                }
                #nextButton:hover {
                    background-color: #1E90FF;
                }
                #nextButton:disabled {
                    background-color: #444;
                    border-color: #555;
                }
            """
            self.setStyleSheet(dark_theme_base + tour_styles)
        else:
            self.setStyleSheet("QDialog { background-color: #f0f0f0; }")

    def _center_on_screen(self):
        """Centers the dialog on the screen."""
        try:
            screen_geo = QApplication.primaryScreen().availableGeometry()
            self.move(screen_geo.center() - self.rect().center())
@@ -155,54 +207,49 @@ class TourDialog(QDialog):
            print(f"[TourDialog] Error centering dialog: {e}")

    def _next_step_action(self):
        """Moves to the next step or finishes the tour."""
        if self.current_step < len(self.tour_steps_widgets) - 1:
        if self.current_step < self.stacked_widget.count() - 1:
            self.current_step += 1
            self.stacked_widget.setCurrentIndex(self.current_step)
        else:
            self._finish_tour_action()
        self._update_button_states()
        self._update_ui_states()

    def _previous_step(self):
        """Moves to the previous step."""
        if self.current_step > 0:
            self.current_step -= 1
            self.stacked_widget.setCurrentIndex(self.current_step)
        self._update_button_states()
        self._update_ui_states()

    def _update_button_states(self):
        """Updates the state and text of navigation buttons."""
        is_last_step = self.current_step == len(self.tour_steps_widgets) - 1
    def _update_ui_states(self):
        is_last_step = self.current_step == self.stacked_widget.count() - 1
        self.next_button.setText(self._tr("tour_dialog_finish_button", "Finish") if is_last_step else self._tr("tour_dialog_next_button", "Next"))
        self.back_button.setEnabled(self.current_step > 0)
        self.skip_button.setVisible(not is_last_step)

        for i, dot in enumerate(self.progress_dots):
            dot.setProperty("active", i == self.current_step)
            dot.style().polish(dot)

    def _skip_tour_action(self):
        """Handles the action when the tour is skipped."""
        self._save_settings_if_checked()
        self.tour_skipped.emit()
        self.reject()

    def _finish_tour_action(self):
        """Handles the action when the tour is finished normally."""
        self._save_settings_if_checked()
        self.tour_finished_normally.emit()
        self.accept()

    def _save_settings_if_checked(self):
        """Saves the 'never show again' preference to QSettings."""
        self.settings.setValue(self.TOUR_SHOWN_KEY, self.never_show_again_checkbox.isChecked())
        self.settings.sync()

    @staticmethod
    def should_show_tour():
        """Checks QSettings to see if the tour should be shown on startup."""
        settings = QSettings(TourDialog.CONFIG_ORGANIZATION_NAME, TourDialog.CONFIG_APP_NAME_TOUR)
        never_show = settings.value(TourDialog.TOUR_SHOWN_KEY, False, type=bool)
        return not never_show
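
    # Startup usage sketch, assuming an application object like the parent_app
    # referenced above; this caller is illustrative and not part of this changeset:
    #
    #     if TourDialog.should_show_tour():
    #         tour = TourDialog(parent_app=app_window)
    #         tour.exec_()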

    CONFIG_ORGANIZATION_NAME = CONFIG_ORGANIZATION_NAME

    def closeEvent(self, event):
        """Ensures settings are saved if the dialog is closed via the 'X' button."""
        self._skip_tour_action()
        super().closeEvent(event)
        super().closeEvent(event)

src/ui/dialogs/UpdateCheckDialog.py (new file, 226 lines)
@@ -0,0 +1,226 @@
# --- Standard Library Imports ---
import json
import os
import sys

# --- PyQt5 Imports ---
from PyQt5.QtCore import Qt, pyqtSignal
from PyQt5.QtWidgets import (
    QDialog, QVBoxLayout, QHBoxLayout, QListWidget, QListWidgetItem,
    QPushButton, QMessageBox, QAbstractItemView, QLabel, QCheckBox
)

# --- Local Application Imports ---
from ...i18n.translator import get_translation
from ..main_window import get_app_icon_object
from ...utils.resolution import get_dark_theme

class UpdateCheckDialog(QDialog):
    """
    A dialog that lists all creator .json profiles with checkboxes
    and allows the user to select multiple to check for updates.
    """

    def __init__(self, user_data_path, parent_app_ref, parent=None):
        super().__init__(parent)
        self.parent_app = parent_app_ref
        self.user_data_path = user_data_path
        self.selected_profiles_list = []  # Will store a list of {'name': ..., 'data': ...}

        self._default_checkbox_tooltip = (
            "If checked, the settings from the selected profile will be loaded into the main window.\n"
            "You can then modify them. When you start the download, the new settings will be saved to the profile."
        )

        self._init_ui()
        self._load_profiles()
        self._retranslate_ui()

        # Apply theme from parent
        if self.parent_app and self.parent_app.current_theme == "dark":
            scale = getattr(self.parent_app, 'scale_factor', 1)
            self.setStyleSheet(get_dark_theme(scale))
        else:
            self.setStyleSheet("")

    def _init_ui(self):
        """Initializes the UI components."""
        self.setWindowTitle("Check for Updates")
        self.setMinimumSize(400, 450)

        app_icon = get_app_icon_object()
        if app_icon and not app_icon.isNull():
            self.setWindowIcon(app_icon)

        layout = QVBoxLayout(self)

        self.info_label = QLabel("Select creator profiles to check for updates:")
        layout.addWidget(self.info_label)

        # --- List Widget with Checkboxes ---
        self.list_widget = QListWidget()
        # No selection mode, we only care about checkboxes
        self.list_widget.setSelectionMode(QAbstractItemView.NoSelection)
        # Connect signal to handle checkbox state changes
        self.list_widget.itemChanged.connect(self._handle_item_changed)
        layout.addWidget(self.list_widget)

        # --- NEW: Checkbox to Load Settings ---
        self.load_settings_checkbox = QCheckBox("Load profile settings into UI (Edit Settings)")
        self.load_settings_checkbox.setToolTip(self._default_checkbox_tooltip)
        layout.addWidget(self.load_settings_checkbox)
        # -------------------------------------

        # --- All Buttons in One Horizontal Layout ---
        button_layout = QHBoxLayout()
        button_layout.setSpacing(6)  # small, even spacing between all buttons

        self.select_all_button = QPushButton("Select All")
        self.select_all_button.clicked.connect(self._toggle_all_checkboxes)

        self.deselect_all_button = QPushButton("Deselect All")
        self.deselect_all_button.clicked.connect(self._toggle_all_checkboxes)

        self.close_button = QPushButton("Close")
        self.close_button.clicked.connect(self.reject)

        self.check_button = QPushButton("Check Selected")
        self.check_button.clicked.connect(self.on_check_selected)
        self.check_button.setDefault(True)

        # Add buttons without a stretch (so no large gap)
        button_layout.addWidget(self.select_all_button)
        button_layout.addWidget(self.deselect_all_button)
        button_layout.addWidget(self.close_button)
        button_layout.addWidget(self.check_button)

        layout.addLayout(button_layout)

    def _tr(self, key, default_text=""):
        """Helper to get translation based on current app language."""
        if callable(get_translation) and self.parent_app:
            return get_translation(self.parent_app.current_selected_language, key, default_text)
        return default_text

    def _retranslate_ui(self):
        """Translates the UI elements."""
        self.setWindowTitle(self._tr("update_check_dialog_title", "Check for Updates"))
        self.info_label.setText(self._tr("update_check_dialog_info_multiple", "Select creator profiles to check for updates:"))
        self.select_all_button.setText(self._tr("select_all_button_text", "Select All"))
        self.deselect_all_button.setText(self._tr("deselect_all_button_text", "Deselect All"))
        self.check_button.setText(self._tr("update_check_dialog_check_button", "Check Selected"))
        self.close_button.setText(self._tr("update_check_dialog_close_button", "Close"))
        self.load_settings_checkbox.setText(self._tr("update_check_load_settings_checkbox", "Load profile settings into UI (Edit Settings)"))

    def _load_profiles(self):
        """Loads all .json files from the creator_profiles directory as checkable items."""
        appdata_dir = self.user_data_path
        profiles_dir = os.path.join(appdata_dir, "creator_profiles")

        if not os.path.isdir(profiles_dir):
            QMessageBox.warning(self,
                                self._tr("update_check_dir_not_found_title", "Directory Not Found"),
                                self._tr("update_check_dir_not_found_msg",
                                         "The creator profiles directory does not exist yet.\n\nPath: {path}")
                                .format(path=profiles_dir))
            return

        profiles_found = []
        for filename in os.listdir(profiles_dir):
            if filename.endswith(".json"):
                filepath = os.path.join(profiles_dir, filename)
                try:
                    with open(filepath, 'r', encoding='utf-8') as f:
                        data = json.load(f)

                    # Basic validation to ensure it's a valid profile
                    if 'creator_url' in data and 'processed_post_ids' in data:
                        creator_name = os.path.splitext(filename)[0]
                        profiles_found.append({'name': creator_name, 'data': data})
                    else:
                        print(f"Skipping invalid profile: {filename}")
                except Exception as e:
                    print(f"Failed to load profile {filename}: {e}")

        profiles_found.sort(key=lambda x: x['name'].lower())

        for profile_info in profiles_found:
            item = QListWidgetItem(profile_info['name'])
            item.setData(Qt.UserRole, profile_info)
            # --- Make item checkable ---
            item.setFlags(item.flags() | Qt.ItemIsUserCheckable)
            item.setCheckState(Qt.Unchecked)
            self.list_widget.addItem(item)

        if not profiles_found:
            self.list_widget.addItem(self._tr("update_check_no_profiles", "No creator profiles found."))
            self.list_widget.setEnabled(False)
            self.check_button.setEnabled(False)
            self.select_all_button.setEnabled(False)
            self.deselect_all_button.setEnabled(False)
            self.load_settings_checkbox.setEnabled(False)
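
    # Shape of a creator profile file this loader accepts. Only 'creator_url'
    # and 'processed_post_ids' are required by the validation above; the
    # concrete values below are invented for illustration:
    #
    #     creator_profiles/some_creator.json
    #     {
    #         "creator_url": "https://kemono.su/patreon/user/12345",
    #         "processed_post_ids": ["111", "222"]
    #     }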

    def _toggle_all_checkboxes(self):
        """Handles Select All and Deselect All button clicks."""
        sender = self.sender()
        check_state = Qt.Checked if sender == self.select_all_button else Qt.Unchecked

        # Block signals to prevent triggering _handle_item_changed repeatedly
        self.list_widget.blockSignals(True)
        for i in range(self.list_widget.count()):
            item = self.list_widget.item(i)
            if item.flags() & Qt.ItemIsUserCheckable:
                item.setCheckState(check_state)
        self.list_widget.blockSignals(False)

        # Manually trigger the update once after the batch change
        self._handle_item_changed(None)

    def _handle_item_changed(self, item):
        """
        Monitors how many items are checked.
        If more than 1 item is checked, disable the 'Load Settings' checkbox.
        """
        checked_count = 0
        for i in range(self.list_widget.count()):
            if self.list_widget.item(i).checkState() == Qt.Checked:
                checked_count += 1

        if checked_count > 1:
            self.load_settings_checkbox.setChecked(False)
            self.load_settings_checkbox.setEnabled(False)
            self.load_settings_checkbox.setToolTip(
                self._tr("update_check_multi_selection_warning",
                         "Editing settings is disabled when multiple profiles are selected.")
            )
        else:
            self.load_settings_checkbox.setEnabled(True)
            self.load_settings_checkbox.setToolTip(self._default_checkbox_tooltip)

    def on_check_selected(self):
        """Handles the 'Check Selected' button click."""
        self.selected_profiles_list = []

        for i in range(self.list_widget.count()):
            item = self.list_widget.item(i)
            if item.checkState() == Qt.Checked:
                profile_info = item.data(Qt.UserRole)
                if profile_info:
                    self.selected_profiles_list.append(profile_info)

        if not self.selected_profiles_list:
            QMessageBox.warning(self,
                                self._tr("update_check_no_selection_title", "No Selection"),
                                self._tr("update_check_no_selection_msg", "Please select at least one creator to check."))
            return

        self.accept()

    def get_selected_profiles(self):
        """Returns the list of profile data selected by the user."""
        return self.selected_profiles_list

    def should_load_into_ui(self):
        """Returns True if the 'Load settings into UI' checkbox is checked."""
        # Only return True if it's enabled and checked (double safety)
        return self.load_settings_checkbox.isEnabled() and self.load_settings_checkbox.isChecked()

src/ui/dialogs/discord_pdf_generator.py (new file, 160 lines)
@@ -0,0 +1,160 @@
import os
import re
import datetime
import time
try:
    from fpdf import FPDF
    FPDF_AVAILABLE = True

    class PDF(FPDF):
        """Custom PDF class for Discord chat logs."""
        def __init__(self, server_name, channel_name, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.server_name = server_name
            self.channel_name = channel_name
            self.default_font_family = 'DejaVu'  # Can be changed to Arial if font fails

        def header(self):
            if self.page_no() == 1:
                return  # No header on the title page
            self.set_font(self.default_font_family, '', 8)
            self.cell(0, 10, f'{self.server_name} - #{self.channel_name}', 0, 0, 'L')
            self.cell(0, 10, 'Page ' + str(self.page_no()), 0, 0, 'R')
            self.ln(10)

        def footer(self):
            pass  # No footer needed, header has page number

except ImportError:
    FPDF_AVAILABLE = False
    FPDF = None
    PDF = None

def create_pdf_from_discord_messages(messages_data, server_name, channel_name, output_filename, font_path, logger=print, cancellation_event=None, pause_event=None):
    """
    Creates a single PDF from a list of Discord message objects, formatted as a chat log.
    UPDATED to include clickable links for attachments and embeds.
    """
    if not FPDF_AVAILABLE:
        logger("❌ PDF Creation failed: 'fpdf2' library is not installed.")
        return False

    if not messages_data:
        logger("   No messages were found or fetched to create a PDF.")
        return False

    # --- FIX: This helper function now correctly accepts and checks the event objects ---
    def check_events(c_event, p_event):
        """Helper to safely check for pause and cancel events."""
        if c_event and hasattr(c_event, 'is_cancelled') and c_event.is_cancelled:
            return True  # Stop
        if p_event and hasattr(p_event, 'is_paused'):
            while p_event.is_paused:
                time.sleep(0.5)
                if c_event and hasattr(c_event, 'is_cancelled') and c_event.is_cancelled:
                    return True
        return False

    logger("   Sorting messages by date (oldest first)...")
    messages_data.sort(key=lambda m: m.get('published', m.get('timestamp', '')))

    pdf = PDF(server_name, channel_name)
    default_font_family = 'DejaVu'

    try:
        bold_font_path = font_path.replace("DejaVuSans.ttf", "DejaVuSans-Bold.ttf")
        if not os.path.exists(font_path) or not os.path.exists(bold_font_path):
            raise RuntimeError("Font files not found")

        pdf.add_font('DejaVu', '', font_path, uni=True)
        pdf.add_font('DejaVu', 'B', bold_font_path, uni=True)
    except Exception as font_error:
        logger(f"   ⚠️ Could not load DejaVu font: {font_error}. Falling back to Arial.")
        default_font_family = 'Arial'
        pdf.default_font_family = 'Arial'

    # --- Title Page ---
    pdf.add_page()
    pdf.set_font(default_font_family, 'B', 24)
    pdf.cell(w=0, h=20, text="Discord Chat Log", align='C', new_x="LMARGIN", new_y="NEXT")
    pdf.ln(10)
    pdf.set_font(default_font_family, '', 16)
    pdf.cell(w=0, h=10, text=f"Server: {server_name}", align='C', new_x="LMARGIN", new_y="NEXT")
    pdf.cell(w=0, h=10, text=f"Channel: #{channel_name}", align='C', new_x="LMARGIN", new_y="NEXT")
    pdf.ln(5)
    pdf.set_font(default_font_family, '', 10)
    pdf.cell(w=0, h=10, text=f"Generated on: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}", align='C', new_x="LMARGIN", new_y="NEXT")
    pdf.cell(w=0, h=10, text=f"Total Messages: {len(messages_data)}", align='C', new_x="LMARGIN", new_y="NEXT")

    pdf.add_page()

    logger(f"   Starting PDF creation with {len(messages_data)} messages...")

    for i, message in enumerate(messages_data):
        # --- FIX: Pass the event objects to the helper function ---
        if i % 50 == 0:
            if check_events(cancellation_event, pause_event):
                logger("   PDF generation cancelled by user.")
                return False

        author = message.get('author', {}).get('global_name') or message.get('author', {}).get('username', 'Unknown User')
        timestamp_str = message.get('published', message.get('timestamp', ''))
        content = message.get('content', '')
        attachments = message.get('attachments', [])
        embeds = message.get('embeds', [])

        try:
            if timestamp_str.endswith('Z'):
                timestamp_str = timestamp_str[:-1] + '+00:00'
            dt_obj = datetime.datetime.fromisoformat(timestamp_str)
            formatted_timestamp = dt_obj.strftime('%Y-%m-%d %H:%M:%S')
        except (ValueError, TypeError):
            formatted_timestamp = timestamp_str

        if i > 0:
            pdf.ln(2)
            pdf.set_draw_color(200, 200, 200)
            pdf.cell(0, 0, '', border='T')
            pdf.ln(2)

        pdf.set_font(default_font_family, 'B', 11)
        pdf.write(5, f"{author} ")
        pdf.set_font(default_font_family, '', 9)
        pdf.set_text_color(128, 128, 128)
        pdf.write(5, f"({formatted_timestamp})")
        pdf.set_text_color(0, 0, 0)
        pdf.ln(6)

        if content:
            pdf.set_font(default_font_family, '', 10)
            pdf.multi_cell(w=0, h=5, text=content)

        if attachments or embeds:
            pdf.ln(1)
            pdf.set_font(default_font_family, '', 9)
            pdf.set_text_color(22, 119, 219)

            for att in attachments:
                file_name = att.get('filename', 'untitled')
                full_url = att.get('url', '#')
                pdf.write(5, text=f"[Attachment: {file_name}]", link=full_url)
                pdf.ln()

            for embed in embeds:
                embed_url = embed.get('url', 'no url')
                pdf.write(5, text=f"[Embed: {embed_url}]", link=embed_url)
                pdf.ln()

            pdf.set_text_color(0, 0, 0)

    if check_events(cancellation_event, pause_event):
        logger("   PDF generation cancelled by user before final save.")
        return False

    try:
        pdf.output(output_filename)
        logger(f"✅ Successfully created Discord chat log PDF: '{os.path.basename(output_filename)}'")
        return True
    except Exception as e:
        logger(f"❌ A critical error occurred while saving the final PDF: {e}")
        return False
|
||||
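For context, the repeated check_events(cancellation_event, pause_event) calls above implement the generator's cancel/pause protocol. Below is a minimal sketch of what such a helper could look like, assuming threading.Event objects and the convention that a set pause_event means "paused"; the project's actual check_events lives elsewhere in the repository and may differ.

import time

def check_events(cancellation_event, pause_event):
    """Hypothetical sketch: return True if the user cancelled; block while paused."""
    while pause_event is not None and pause_event.is_set():  # assumption: set == paused
        if cancellation_event is not None and cancellation_event.is_set():
            return True  # cancelled while paused
        time.sleep(0.1)  # poll until the user resumes
    return cancellation_event is not None and cancellation_event.is_set()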
(File diff suppressed because it is too large)

src/utils/command.py (new file, 49 lines)
@@ -0,0 +1,49 @@
import re

# Command constants
CMD_ARCHIVE_ONLY = 'ao'
CMD_DOMAIN_OVERRIDE_PREFIX = '.'
CMD_SFP_PREFIX = 'sfp-'
CMD_UNKNOWN = 'unknown' # New command constant

def parse_commands_from_text(raw_text: str):
    """
    Parses special commands from a text string and returns the cleaned text
    and a dictionary of found commands.

    Commands are in the format [command].
    Example: "Tifa, (Cloud, Zack) [.st] [sfp-10] [unknown]"

    Returns:
        tuple[str, dict]: A tuple containing:
            - The text string with commands removed.
            - A dictionary of commands and their values.
    """
    command_pattern = re.compile(r'\[(.*?)\]')
    commands = {}

    def command_replacer(match):
        command_str = match.group(1).strip().lower()

        if command_str.startswith(CMD_DOMAIN_OVERRIDE_PREFIX):
            tld = command_str[len(CMD_DOMAIN_OVERRIDE_PREFIX):]
            if 'domain_override' not in commands:
                commands['domain_override'] = tld
        elif command_str == CMD_ARCHIVE_ONLY:
            commands['archive_only'] = True
        elif command_str.startswith(CMD_SFP_PREFIX):
            try:
                threshold_str = command_str[len(CMD_SFP_PREFIX):]
                threshold = int(threshold_str)
                if 'sfp_threshold' not in commands:
                    commands['sfp_threshold'] = threshold
            except (ValueError, IndexError):
                pass
        elif command_str == CMD_UNKNOWN: # Logic to handle the new command
            commands['handle_unknown'] = True

        return ''

    text_without_commands = command_pattern.sub(command_replacer, raw_text).strip()

    return text_without_commands, commands
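To make the parser's contract concrete, here is a usage sketch (not part of the file) that follows the docstring's own example:

text, cmds = parse_commands_from_text("Tifa, (Cloud, Zack) [.st] [sfp-10] [unknown]")
# text == "Tifa, (Cloud, Zack)"
# cmds == {'domain_override': 'st', 'sfp_threshold': 10, 'handle_unknown': True}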
@@ -20,7 +20,7 @@ VIDEO_EXTENSIONS = {
    '.mpg', '.m4v', '.3gp', '.ogv', '.ts', '.vob'
}
ARCHIVE_EXTENSIONS = {
    '.zip', '.rar', '.7z', '.tar', '.gz', '.bz2'
    '.zip', '.rar', '.7z', '.tar', '.gz', '.bz2', '.bin'
}
AUDIO_EXTENSIONS = {
    '.mp3', '.wav', '.aac', '.flac', '.ogg', '.wma', '.m4a', '.opus',
@@ -140,3 +140,5 @@ def is_audio(filename):
    if not filename: return False
    _, ext = os.path.splitext(filename)
    return ext.lower() in AUDIO_EXTENSIONS
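These helpers classify files purely by extension, case-insensitively; a brief usage sketch:

is_audio("track.MP3")    # True: '.mp3' is in AUDIO_EXTENSIONS
is_audio("backup.bin")   # False: '.bin' now counts as an archive extension instead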
@@ -1,14 +1,7 @@
# --- Standard Library Imports ---
import os
import re
from urllib.parse import urlparse

# --- Third-Party Library Imports ---
# This module might not require third-party libraries directly,
# but 'requests' is a common dependency for network operations.
# import requests

def parse_cookie_string(cookie_string):
    """
    Parses a 'name=value; name2=value2' cookie string into a dictionary.
@@ -106,13 +99,11 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
    if not use_cookie_flag:
        return None

    # Priority 1: Use the specifically browsed file first
    if selected_cookie_file_path and os.path.exists(selected_cookie_file_path):
        cookies = load_cookies_from_netscape_file(selected_cookie_file_path, logger_func, target_domain)
        if cookies:
            return cookies

    # Priority 2: Look for a domain-specific cookie file
    if app_base_dir and target_domain:
        domain_specific_path = os.path.join(app_base_dir, "data", f"{target_domain}_cookies.txt")
        if os.path.exists(domain_specific_path):
@@ -120,7 +111,6 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
            if cookies:
                return cookies

    # Priority 3: Look for a generic cookies.txt
    if app_base_dir:
        default_path = os.path.join(app_base_dir, "appdata", "cookies.txt")
        if os.path.exists(default_path):
@@ -128,7 +118,6 @@ def prepare_cookies_for_request(use_cookie_flag, cookie_text_input, selected_coo
            if cookies:
                return cookies

    # Priority 4: Fall back to manually entered text
    if cookie_text_input:
        cookies = parse_cookie_string(cookie_text_input)
        if cookies:
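In short, cookies are resolved in priority order: a browsed file, then a domain-specific file, then a generic cookies.txt, then pasted text. A sketch of parse_cookie_string's contract, per its docstring (the values are made up):

cookies = parse_cookie_string("session=abc123; theme=dark")
# cookies == {'session': 'abc123', 'theme': 'dark'}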
@@ -141,28 +130,84 @@
def extract_post_info(url_string):
    """
    Parses a URL string to extract the service, user ID, and post ID.

    Args:
        url_string (str): The URL to parse.

    Returns:
        tuple: A tuple containing (service, user_id, post_id). Any can be None.
        UPDATED to support Hentai2Read series and chapters.
    """
    if not isinstance(url_string, str) or not url_string.strip():
        return None, None, None

    try:
        parsed_url = urlparse(url_string.strip())
        path_parts = [part for part in parsed_url.path.strip('/').split('/') if part]

    stripped_url = url_string.strip()

    # --- DeviantArt Check ---
    if 'deviantart.com' in stripped_url.lower() or 'fav.me' in stripped_url.lower():
        # This MUST return 'deviantart' as the first element
        return 'deviantart', 'placeholder_user', 'placeholder_id' # ----------------------

    # --- Rule34Video Check ---
    rule34video_match = re.search(r'rule34video\.com/video/(\d+)', stripped_url)
    if rule34video_match:
        video_id = rule34video_match.group(1)
        return 'rule34video', video_id, None

    # --- Danbooru Check ---
    danbooru_match = re.search(r'danbooru\.donmai\.us|safebooru\.donmai\.us', stripped_url)
    if danbooru_match:
        return 'danbooru', None, None

    # Standard format: /<service>/user/<user_id>/post/<post_id>
    # --- Gelbooru Check ---
    gelbooru_match = re.search(r'gelbooru\.com', stripped_url)
    if gelbooru_match:
        return 'gelbooru', None, None

    # --- Bunkr Check ---
    bunkr_pattern = re.compile(
        r"(?:https?://)?(?:[a-zA-Z0-9-]+\.)?bunkr\.(?:si|la|ws|red|black|media|site|is|to|ac|cr|ci|fi|pk|ps|sk|ph|su|ru)|bunkrr\.ru"
    )
    if bunkr_pattern.search(stripped_url):
        return 'bunkr', stripped_url, None

    # --- SimpCity Check (Corrected version) ---
    simpcity_match = re.search(r'simpcity\.cr/threads/([^/]+)(?:/post-(\d+))?', stripped_url)
    if simpcity_match:
        thread_info = simpcity_match.group(1)
        post_id = simpcity_match.group(2)
        return 'simpcity', thread_info, post_id

    # --- nhentai Check ---
    nhentai_match = re.search(r'nhentai\.net/g/(\d+)', stripped_url)
    if nhentai_match:
        return 'nhentai', nhentai_match.group(1), None

    # --- Hentai2Read Check (Corrected to match series, chapter, and image URLs) ---
    hentai2read_match = re.search(r'hentai2read\.com/([^/]+)(?:/(\d+))?/?', stripped_url)
    if hentai2read_match:
        manga_slug, chapter_num = hentai2read_match.groups()
        return 'hentai2read', manga_slug, chapter_num

    # --- Pixeldrain Check ---
    pixeldrain_match = re.search(r'pixeldrain\.com/[lud]/([^/?#]+)', stripped_url)
    if pixeldrain_match:
        return 'pixeldrain', stripped_url, None

    discord_channel_match = re.search(r'discord\.com/channels/(@me|\d+)/(\d+)', stripped_url)
    if discord_channel_match:
        server_id, channel_id = discord_channel_match.groups()
        return 'discord', server_id, channel_id

    # --- Kemono/Coomer/Discord Parsing ---
    try:
        parsed_url = urlparse(stripped_url)
        path_parts = [part for part in parsed_url.path.strip('/').split('/') if part]

        if len(path_parts) >= 3 and path_parts[0].lower() == 'discord' and path_parts[1].lower() == 'server':
            return 'discord', path_parts[2], path_parts[3] if len(path_parts) >= 4 else None

        if len(path_parts) >= 3 and path_parts[1].lower() == 'user':
            service = path_parts[0]
            user_id = path_parts[2]
            post_id = path_parts[4] if len(path_parts) >= 5 and path_parts[3].lower() == 'post' else None
            return service, user_id, post_id

        # API format: /api/v1/<service>/user/<user_id>...
        if len(path_parts) >= 5 and path_parts[0:2] == ['api', 'v1'] and path_parts[3].lower() == 'user':
            service = path_parts[2]
            user_id = path_parts[4]
@@ -173,8 +218,7 @@ def extract_post_info(url_string):
        print(f"Debug: Exception during URL parsing for '{url_string}': {e}")

    return None, None, None

def get_link_platform(url):
    """
    Identifies the platform of a given URL based on its domain.
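A few illustrative calls against the extract_post_info branches above (the URLs are made-up examples):

extract_post_info("https://kemono.su/patreon/user/12345/post/67890")
# -> ('patreon', '12345', '67890')   # standard /<service>/user/<id>/post/<id> format
extract_post_info("https://nhentai.net/g/177013")
# -> ('nhentai', '177013', None)     # matched by the nhentai regex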
@@ -28,19 +28,12 @@ def setup_ui(main_app):
    main_app.scale_factor = scale

    default_font = QApplication.font()
    base_font_size = 9 # Use a standard base size
    base_font_size = 9
    default_font.setPointSize(int(base_font_size * scale))
    main_app.setFont(default_font)

    default_font = QApplication.font()
    base_font_size = 9 # Use a standard base size
    default_font.setPointSize(int(base_font_size * scale))
    main_app.setFont(default_font)
    # --- END: Improved Scaling Logic ---

    main_app.main_splitter = QSplitter(Qt.Horizontal)

    # --- Use a scroll area for the left panel for consistency ---
    left_scroll_area = QScrollArea()
    left_scroll_area.setWidgetResizable(True)
    left_scroll_area.setFrameShape(QFrame.NoFrame)
@@ -75,7 +68,7 @@ def setup_ui(main_app):
    main_app.empty_popup_button.clicked.connect(main_app._show_empty_popup)
    url_input_layout.addWidget(main_app.empty_popup_button)
    main_app.page_range_label = QLabel(main_app._tr("page_range_label_text", "Page Range:"))
    main_app.page_range_label.setStyleSheet("font-weight: bold; padding-left: 10px;")
    main_app.page_range_label .setStyleSheet("font-weight: bold; padding-left: 10px;")
    url_input_layout.addWidget(main_app.page_range_label)
    main_app.start_page_input = QLineEdit()
    main_app.start_page_input.setPlaceholderText(main_app._tr("start_page_input_placeholder", "Start"))
@@ -134,8 +127,6 @@ def setup_ui(main_app):
    main_app._update_char_filter_scope_button_text()
    char_input_and_button_layout.addWidget(main_app.char_filter_scope_toggle_button, 1)
    character_filter_v_layout.addLayout(char_input_and_button_layout)

    # --- Custom Folder Widget Definition ---
    main_app.custom_folder_widget = QWidget()
    custom_folder_v_layout = QVBoxLayout(main_app.custom_folder_widget)
    custom_folder_v_layout.setContentsMargins(0, 0, 0, 0)
@@ -146,7 +137,6 @@ def setup_ui(main_app):
    custom_folder_v_layout.addWidget(main_app.custom_folder_label)
    custom_folder_v_layout.addWidget(main_app.custom_folder_input)
    main_app.custom_folder_widget.setVisible(False)

    filters_and_custom_folder_layout.addWidget(main_app.character_filter_widget, 1)
    filters_and_custom_folder_layout.addWidget(main_app.custom_folder_widget, 1)
    left_layout.addWidget(main_app.filters_and_custom_folder_container_widget)
@@ -199,7 +189,6 @@ def setup_ui(main_app):
    main_app.radio_only_audio = QRadioButton("🎧 Only Audio")
    main_app.radio_only_links = QRadioButton("🔗 Only Links")
    main_app.radio_more = QRadioButton("More")

    main_app.radio_all.setChecked(True)
    for btn in [main_app.radio_all, main_app.radio_images, main_app.radio_videos, main_app.radio_only_archives, main_app.radio_only_audio, main_app.radio_only_links, main_app.radio_more]:
        main_app.radio_group.addButton(btn)
@@ -211,6 +200,24 @@ def setup_ui(main_app):
    file_filter_layout.addLayout(radio_button_layout)
    left_layout.addLayout(file_filter_layout)

    # --- Booru Inputs Container ---
    main_app.booru_inputs_widget = QWidget()
    booru_inputs_layout = QHBoxLayout(main_app.booru_inputs_widget)
    booru_inputs_layout.setContentsMargins(0, 5, 0, 0)
    main_app.api_key_label = QLabel("API Key:")
    main_app.api_key_input = QLineEdit()
    main_app.api_key_input.setPlaceholderText("Danbooru or Gelbooru API Key")
    main_app.user_id_label = QLabel("User ID:")
    main_app.user_id_input = QLineEdit()
    main_app.user_id_input.setPlaceholderText("Danbooru Username or Gelbooru User ID")
    booru_inputs_layout.addWidget(main_app.api_key_label)
    booru_inputs_layout.addWidget(main_app.api_key_input, 1)
    booru_inputs_layout.addSpacing(10)
    booru_inputs_layout.addWidget(main_app.user_id_label)
    booru_inputs_layout.addWidget(main_app.user_id_input, 1)
    left_layout.addWidget(main_app.booru_inputs_widget)
    main_app.booru_inputs_widget.setVisible(False)

    # --- Checkboxes Group ---
    checkboxes_group_layout = QVBoxLayout()
    checkboxes_group_layout.setSpacing(10)
@@ -234,40 +241,42 @@ def setup_ui(main_app):
    row1_layout.addStretch(1)
    checkboxes_group_layout.addLayout(row1_layout)

    # --- Advanced Settings ---
    # --- Advanced Settings Container ---
    main_app.advanced_settings_widget = QWidget()
    advanced_settings_layout = QVBoxLayout(main_app.advanced_settings_widget)
    advanced_settings_layout.setContentsMargins(0, 0, 0, 0)
    advanced_settings_layout.setSpacing(10)
    advanced_settings_label = QLabel("⚙️ Advanced Settings:")
    checkboxes_group_layout.addWidget(advanced_settings_label)
    advanced_row1_layout = QHBoxLayout()
    advanced_row1_layout.setSpacing(10)
    advanced_settings_layout.addWidget(advanced_settings_label)

    # --- REORDERED CHECKBOXES ---
    main_app.advanced_row1_layout = QHBoxLayout()
    main_app.advanced_row1_layout.setSpacing(10)
    main_app.use_subfolder_per_post_checkbox = QCheckBox("Subfolder per Post")
    main_app.use_subfolder_per_post_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
    main_app.use_subfolder_per_post_checkbox.setChecked(True)
    advanced_row1_layout.addWidget(main_app.use_subfolder_per_post_checkbox)

    main_app.advanced_row1_layout.addWidget(main_app.use_subfolder_per_post_checkbox)
    main_app.date_prefix_checkbox = QCheckBox("Date Prefix")
    main_app.date_prefix_checkbox.setToolTip("When 'Subfolder per Post' is active, prefix the folder name with the post's upload date.")
    advanced_row1_layout.addWidget(main_app.date_prefix_checkbox)

    main_app.advanced_row1_layout.addWidget(main_app.date_prefix_checkbox)
    main_app.use_subfolders_checkbox = QCheckBox("Separate Folders by Known.txt")
    main_app.use_subfolders_checkbox.setChecked(False)
    main_app.use_subfolders_checkbox.toggled.connect(main_app.update_ui_for_subfolders)
    advanced_row1_layout.addWidget(main_app.use_subfolders_checkbox)
    # --- END REORDER ---

    main_app.advanced_row1_layout.addWidget(main_app.use_subfolders_checkbox)

    # --- Original Cookie Controls (for non-SimpCity sites) ---
    main_app.use_cookie_checkbox = QCheckBox("Use Cookie")
    main_app.use_cookie_checkbox.setChecked(main_app.use_cookie_setting)
    main_app.cookie_text_input = QLineEdit()
    main_app.cookie_text_input.setPlaceholderText("if no Select cookies.txt)")
    main_app.cookie_text_input.setPlaceholderText("Cookie string or path from Browse...")
    main_app.cookie_text_input.setText(main_app.cookie_text_setting)
    advanced_row1_layout.addWidget(main_app.use_cookie_checkbox)
    advanced_row1_layout.addWidget(main_app.cookie_text_input, 2)
    main_app.cookie_browse_button = QPushButton("Browse...")
    main_app.cookie_browse_button.setFixedWidth(int(80 * scale))
    advanced_row1_layout.addWidget(main_app.cookie_browse_button)
    advanced_row1_layout.addStretch(1)
    checkboxes_group_layout.addLayout(advanced_row1_layout)
    main_app.advanced_row1_layout.addWidget(main_app.use_cookie_checkbox)
    main_app.advanced_row1_layout.addWidget(main_app.cookie_text_input, 2)
    main_app.advanced_row1_layout.addWidget(main_app.cookie_browse_button)
    main_app.advanced_row1_layout.addStretch(1)
    advanced_settings_layout.addLayout(main_app.advanced_row1_layout)

    advanced_row2_layout = QHBoxLayout()
    advanced_row2_layout.setSpacing(10)
    multithreading_layout = QHBoxLayout()
@@ -284,13 +293,61 @@ def setup_ui(main_app):
    advanced_row2_layout.addLayout(multithreading_layout)
    main_app.external_links_checkbox = QCheckBox("Show External Links in Log")
    advanced_row2_layout.addWidget(main_app.external_links_checkbox)
    main_app.manga_mode_checkbox = QCheckBox("Manga/Comic Mode")
    main_app.manga_mode_checkbox = QCheckBox("Renaming Mode")
    advanced_row2_layout.addWidget(main_app.manga_mode_checkbox)
    advanced_row2_layout.addStretch(1)
    checkboxes_group_layout.addLayout(advanced_row2_layout)
    advanced_settings_layout.addLayout(advanced_row2_layout)
    checkboxes_group_layout.addWidget(main_app.advanced_settings_widget)

    # --- SimpCity Settings Container (with its own cookie controls) ---
    main_app.simpcity_settings_widget = QWidget()
    simpcity_settings_layout = QVBoxLayout(main_app.simpcity_settings_widget)
    simpcity_settings_layout.setContentsMargins(0, 0, 0, 0)
    simpcity_settings_layout.setSpacing(10)
    simpcity_settings_label = QLabel("⚙️ SimpCity Download Options:")
    simpcity_settings_layout.addWidget(simpcity_settings_label)

    # Checkbox row
    simpcity_checkboxes_layout = QHBoxLayout()

    main_app.simpcity_dl_images_cb = QCheckBox("Download Images")
    main_app.simpcity_dl_images_cb.setChecked(True) # Checked by default
    main_app.simpcity_dl_pixeldrain_cb = QCheckBox("Download Pixeldrain")
    main_app.simpcity_dl_saint2_cb = QCheckBox("Download Saint2.su")
    main_app.simpcity_dl_mega_cb = QCheckBox("Download Mega")
    main_app.simpcity_dl_bunkr_cb = QCheckBox("Download Bunkr")
    main_app.simpcity_dl_gofile_cb = QCheckBox("Download Gofile")

    simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_images_cb)
    simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_pixeldrain_cb)
    simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_saint2_cb)
    simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_mega_cb)
    simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_bunkr_cb)
    simpcity_checkboxes_layout.addWidget(main_app.simpcity_dl_gofile_cb)
    simpcity_checkboxes_layout.addStretch(1)
    simpcity_settings_layout.addLayout(simpcity_checkboxes_layout)

    # --- START NEW CODE ---
    simpcity_cookie_layout = QHBoxLayout()
    simpcity_cookie_layout.setContentsMargins(0, 5, 0, 0) # Add some top margin
    simpcity_cookie_label = QLabel("Cookie:")
    main_app.simpcity_cookie_text_input = QLineEdit()
    main_app.simpcity_cookie_text_input.setPlaceholderText("Cookie string or path... (Required)")
    main_app.simpcity_cookie_browse_button = QPushButton("Browse...")
    main_app.simpcity_cookie_browse_button.setFixedWidth(int(80 * scale))

    simpcity_cookie_layout.addWidget(simpcity_cookie_label)
    simpcity_cookie_layout.addWidget(main_app.simpcity_cookie_text_input, 1) # Stretch factor
    simpcity_cookie_layout.addWidget(main_app.simpcity_cookie_browse_button)

    simpcity_settings_layout.addLayout(simpcity_cookie_layout)
    checkboxes_group_layout.addWidget(main_app.simpcity_settings_widget)
    main_app.simpcity_settings_widget.setVisible(False)

    left_layout.addLayout(checkboxes_group_layout)

    # --- Action Buttons ---
    # --- Action Buttons & Remaining UI ---
    # ... (The rest of the setup_ui function remains unchanged)
    main_app.standard_action_buttons_widget = QWidget()
    btn_layout = QHBoxLayout(main_app.standard_action_buttons_widget)
    btn_layout.setContentsMargins(0, 10, 0, 0)
@@ -326,8 +383,6 @@ def setup_ui(main_app):
    main_app.bottom_action_buttons_stack.addWidget(main_app.favorite_action_buttons_widget)
    left_layout.addWidget(main_app.bottom_action_buttons_stack)
    left_layout.addSpacing(10)

    # --- Known Names Layout ---
    known_chars_label_layout = QHBoxLayout()
    known_chars_label_layout.setSpacing(10)
    main_app.known_chars_label = QLabel("🎭 Known Shows/Characters (for Folder Names):")
@@ -376,8 +431,6 @@ def setup_ui(main_app):
    char_manage_layout.addWidget(main_app.support_button, 0)
    left_layout.addLayout(char_manage_layout)
    left_layout.addStretch(0)

    # --- Right Panel (Logs) ---
    right_panel_widget.setLayout(right_layout)
    log_title_layout = QHBoxLayout()
    main_app.progress_log_label = QLabel("📜 Progress Log:")
@@ -387,15 +440,31 @@ def setup_ui(main_app):
    main_app.link_search_input.setPlaceholderText("Search Links...")
    main_app.link_search_input.setVisible(False)
    log_title_layout.addWidget(main_app.link_search_input)
    main_app.link_search_button = QPushButton("�")
    main_app.link_search_button = QPushButton("🔎")
    main_app.link_search_button.setVisible(False)
    main_app.link_search_button.setFixedWidth(int(30 * scale))
    log_title_layout.addWidget(main_app.link_search_button)
    discord_controls_layout = QHBoxLayout()
    main_app.discord_scope_toggle_button = QPushButton("Scope: Files")
    main_app.discord_scope_toggle_button.setVisible(False)
    discord_controls_layout.addWidget(main_app.discord_scope_toggle_button)
    main_app.discord_message_limit_input = QLineEdit(main_app)
    main_app.discord_message_limit_input.setPlaceholderText("Msg Limit")
    main_app.discord_message_limit_input.setToolTip("Optional: Limit the number of recent messages to process.")
    main_app.discord_message_limit_input.setValidator(QIntValidator(1, 9999999, main_app))
    main_app.discord_message_limit_input.setFixedWidth(int(80 * scale))
    main_app.discord_message_limit_input.setVisible(False)
    discord_controls_layout.addWidget(main_app.discord_message_limit_input)
    log_title_layout.addLayout(discord_controls_layout)
    main_app.manga_rename_toggle_button = QPushButton()
    main_app.manga_rename_toggle_button.setVisible(False)
    main_app.manga_rename_toggle_button.setFixedWidth(int(140 * scale))
    main_app._update_manga_filename_style_button_text()
    log_title_layout.addWidget(main_app.manga_rename_toggle_button)
    main_app.custom_rename_dialog_button = QPushButton("Open Dialog")
    main_app.custom_rename_dialog_button.setVisible(False)
    main_app.custom_rename_dialog_button.clicked.connect(main_app._show_custom_rename_dialog)
    log_title_layout.addWidget(main_app.custom_rename_dialog_button)
    main_app.manga_date_prefix_input = QLineEdit()
    main_app.manga_date_prefix_input.setPlaceholderText("Prefix for Manga Filenames")
    main_app.manga_date_prefix_input.setVisible(False)
@@ -458,26 +527,17 @@ def setup_ui(main_app):
    main_app.file_progress_label.setWordWrap(True)
    main_app.file_progress_label.setStyleSheet("padding-top: 2px; font-style: italic; color: #A0A0A0;")
    right_layout.addWidget(main_app.file_progress_label)

    # --- Final Assembly ---
    main_app.main_splitter.addWidget(left_scroll_area)
    main_app.main_splitter.addWidget(right_panel_widget)

    if main_app.width() >= 1920:
        # For wider resolutions, give more space to the log panel (right).
        main_app.main_splitter.setStretchFactor(0, 4)
        main_app.main_splitter.setStretchFactor(1, 6)
    else:
        # Default for lower resolutions, giving more space to controls (left).
        main_app.main_splitter.setStretchFactor(0, 7)
        main_app.main_splitter.setStretchFactor(1, 3)

    top_level_layout = QHBoxLayout(main_app)
    top_level_layout.setContentsMargins(0, 0, 0, 0)
    top_level_layout.addWidget(main_app.main_splitter)

    # --- Initial UI State Updates ---
    main_app.update_ui_for_subfolders(main_app.use_subfolders_checkbox.isChecked())
    main_app.update_external_links_setting(main_app.external_links_checkbox.isChecked())
    main_app.update_multithreading_label(main_app.thread_count_input.text())
@@ -493,7 +553,6 @@ def setup_ui(main_app):
    if hasattr(main_app, 'radio_group') and main_app.radio_group.checkedButton():
        main_app._handle_filter_mode_change(main_app.radio_group.checkedButton(), True)
    main_app.radio_group.buttonToggled.connect(main_app._handle_more_options_toggled)

    main_app._update_manga_filename_style_button_text()
    main_app._update_skip_scope_button_text()
    main_app._update_char_filter_scope_button_text()
@@ -535,6 +594,9 @@ def get_dark_theme(scale=1):
        border-radius: 4px;
        font-size: {font_size}pt;
    }}
    QLineEdit::placeholder {{
        color: #8A8A8A; /* A muted grey color for placeholder text */
    }}
    QTextEdit {{
        font-family: Consolas, Courier New, monospace;
    }}
@@ -26,6 +26,16 @@ KNOWN_TXT_MATCH_CLEANUP_PATTERNS = [
    r'\bPreview\b',
]

# --- START NEW CODE ---
# Regular expression to detect CJK characters
# Covers Hiragana, Katakana, Half/Full width forms, CJK Unified Ideographs, Hangul Syllables, etc.
cjk_pattern = re.compile(r'[\u3000-\u303f\u3040-\u309f\u30a0-\u30ff\uff00-\uffef\u4e00-\u9fff\uac00-\ud7af]')

def contains_cjk(text):
    """Checks if the text contains any CJK characters."""
    return bool(cjk_pattern.search(text))
# --- END NEW CODE ---
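A quick usage sketch of the detector:

contains_cjk("Cloud Strife")   # False: Latin characters only
contains_cjk("ティファ")         # True: Katakana falls inside U+30A0..U+30FF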
# --- Text Matching and Manipulation Utilities ---

def is_title_match_for_character(post_title, character_name_filter):
@@ -42,7 +52,7 @@
    """
    if not post_title or not character_name_filter:
        return False

    # Use word boundaries (\b) to match whole words only
    pattern = r"(?i)\b" + re.escape(str(character_name_filter).strip()) + r"\b"
    return bool(re.search(pattern, post_title))
@@ -62,7 +72,7 @@ def is_filename_match_for_character(filename, character_name_filter):
    """
    if not filename or not character_name_filter:
        return False

    return str(character_name_filter).strip().lower() in filename.lower()
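The practical difference between the two matchers, as a usage sketch:

is_title_match_for_character("Cloud and Tifa beach set", "Cloud")   # True: whole-word match
is_title_match_for_character("Cloudy day sketches", "Cloud")        # False: word boundary blocks it
is_filename_match_for_character("cloudy_day_01.png", "Cloud")       # True: plain substring check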
@@ -101,16 +111,16 @@ def extract_folder_name_from_title(title, unwanted_keywords):
    """
    if not title:
        return 'Uncategorized'

    title_lower = title.lower()
    # Find all whole words in the title
    tokens = re.findall(r'\b[\w\-]+\b', title_lower)

    for token in tokens:
        clean_token = clean_folder_name(token)
        if clean_token and clean_token.lower() not in unwanted_keywords:
            return clean_token

    # Fallback to cleaning the full title if no single significant word is found
    cleaned_full_title = clean_folder_name(title)
    return cleaned_full_title if cleaned_full_title else 'Uncategorized'
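Illustrative behavior (a sketch that assumes the module's clean_folder_name sanitizer passes plain words through unchanged):

extract_folder_name_from_title("the tifa collection", {"the"})
# -> 'tifa': 'the' is skipped as unwanted, and the next significant token wins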
@@ -120,6 +130,7 @@ def match_folders_from_title(title, names_to_match, unwanted_keywords):
    """
    Matches folder names from a title based on a list of known name objects.
    Each name object is a dict: {'name': 'PrimaryName', 'aliases': ['alias1', ...]}
    MODIFIED: Uses substring matching for CJK aliases, word boundary for others.

    Args:
        title (str): The post title to check.
@@ -137,10 +148,11 @@ def match_folders_from_title(title, names_to_match, unwanted_keywords):
    for pat_str in KNOWN_TXT_MATCH_CLEANUP_PATTERNS:
        cleaned_title = re.sub(pat_str, ' ', cleaned_title, flags=re.IGNORECASE)
    cleaned_title = re.sub(r'\s+', ' ', cleaned_title).strip()
    # Store both original case cleaned title and lower case for different matching
    title_lower = cleaned_title.lower()

    matched_cleaned_names = set()

    # Sort by name length descending to match longer names first (e.g., "Cloud Strife" before "Cloud")
    sorted_name_objects = sorted(names_to_match, key=lambda x: len(x.get("name", "")), reverse=True)

@@ -149,25 +161,52 @@ def match_folders_from_title(title, names_to_match, unwanted_keywords):
        aliases = name_obj.get("aliases", [])
        if not primary_folder_name or not aliases:
            continue

        # <<< START MODIFICATION >>>
        cleaned_primary_name = clean_folder_name(primary_folder_name)
        if not cleaned_primary_name or cleaned_primary_name.lower() in unwanted_keywords:
            continue # Skip this entry entirely if its primary name is unwanted or empty

        match_found_for_this_object = False
        for alias in aliases:
            if not alias: continue
            alias_lower = alias.lower()
            if not alias_lower: continue

            # Use word boundaries for accurate matching
            pattern = r'\b' + re.escape(alias_lower) + r'\b'
            if re.search(pattern, title_lower):
                cleaned_primary_name = clean_folder_name(primary_folder_name)
                if cleaned_primary_name.lower() not in unwanted_keywords:

            # Check if the alias contains CJK characters
            if contains_cjk(alias):
                # Use simple substring matching for CJK
                if alias_lower in title_lower:
                    matched_cleaned_names.add(cleaned_primary_name)
                    break # Move to the next name object once a match is found for this one

                    match_found_for_this_object = True
                    break # Move to the next name object
            else:
                # Use original word boundary matching for non-CJK
                try:
                    # Compile pattern for efficiency if used repeatedly, though here it changes each loop
                    pattern = r'\b' + re.escape(alias_lower) + r'\b'
                    if re.search(pattern, title_lower):
                        matched_cleaned_names.add(cleaned_primary_name)
                        match_found_for_this_object = True
                        break # Move to the next name object
                except re.error as e:
                    # Log error if the alias creates an invalid regex (unlikely with escape)
                    print(f"Regex error for alias '{alias}': {e}") # Or use proper logging
                    continue

        # This outer break logic remains the same (though slightly redundant with inner breaks)
        if match_found_for_this_object:
            pass # Already added and broke inner loop
        # <<< END MODIFICATION >>>

    return sorted(list(matched_cleaned_names))
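A usage sketch of the mixed strategy with a hypothetical known-names entry: CJK aliases match as substrings, while Latin aliases still require word boundaries (assumes clean_folder_name passes 'Tifa' through unchanged):

names = [{'name': 'Tifa', 'aliases': ['Tifa', 'ティファ']}]
match_folders_from_title("新作 ティファまとめ", names, set())
# -> ['Tifa']: substring hit on the CJK alias
match_folders_from_title("Tifastic compilation", names, set())
# -> []: no word-boundary hit for the Latin alias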
def match_folders_from_filename_enhanced(filename, names_to_match, unwanted_keywords):
    """
    Matches folder names from a filename, prioritizing longer and more specific aliases.
    It returns immediately after finding the first (longest) match.
    MODIFIED: Prioritizes boundary-aware matches for Latin characters,
    falls back to substring search for CJK compatibility.

    Args:
        filename (str): The filename to check.
@@ -175,33 +214,61 @@ def match_folders_from_filename_enhanced(filename, names_to_match, unwanted_keyw
        unwanted_keywords (set): A set of folder names to ignore.

    Returns:
        list: A sorted list of matched primary folder names.
        list: A list containing the single best folder name match, or an empty list.
    """
    if not filename or not names_to_match:
        return []

    filename_lower = filename.lower()
    matched_primary_names = set()

    # Create a flat list of (alias, primary_name) tuples to sort by alias length
    # Create a flat list of (alias, primary_name) tuples
    alias_map_to_primary = []
    for name_obj in names_to_match:
        primary_name = name_obj.get("name")
        if not primary_name: continue

        cleaned_primary_name = clean_folder_name(primary_name)
        if not cleaned_primary_name or cleaned_primary_name.lower() in unwanted_keywords:
            continue

        for alias in name_obj.get("aliases", []):
            if alias.lower():
                alias_map_to_primary.append((alias.lower(), cleaned_primary_name))

            if alias: # Check if alias is not None and not an empty string
                alias_lower_val = alias.lower()
                if alias_lower_val: # Check again after lowercasing
                    alias_map_to_primary.append((alias_lower_val, cleaned_primary_name))

    # Sort by alias length, descending, to match longer aliases first
    alias_map_to_primary.sort(key=lambda x: len(x[0]), reverse=True)

    # Return the FIRST match found, which will be the longest
    for alias_lower, primary_name_for_alias in alias_map_to_primary:
        if filename_lower.startswith(alias_lower):
            matched_primary_names.add(primary_name_for_alias)
        try:
            # 1. Attempt boundary-aware match first (good for English/Latin)
            # Matches alias if it's at the start/end or surrounded by common separators
            # We use word boundaries (\b) and also check for common non-word separators like +_-
            pattern = r'(?:^|[\s_+-])' + re.escape(alias_lower) + r'(?:[\s_+-]|$)'

            if re.search(pattern, filename_lower):
                # Found a precise, boundary-aware match. This is the best case.
                return [primary_name_for_alias]

    return sorted(list(matched_primary_names))
            # 2. Fallback: Simple substring check (for CJK or other cases)
            # This executes ONLY if the boundary match above failed.
            # We check if the alias contains CJK OR if the filename does.
            # This avoids applying the simple 'in' check for Latin-only aliases in Latin-only filenames.
            elif (contains_cjk(alias_lower) or contains_cjk(filename_lower)) and alias_lower in filename_lower:
                # This is the fallback for CJK compatibility.
                return [primary_name_for_alias]

            # If alias is "ul" and filename is "sin+título":
            # 1. re.search(r'(?:^|[\s_+-])ul(?:[\s_+-]|$)', "sin+título") -> Fails (good)
            # 2. contains_cjk("ul") -> False
            # 3. contains_cjk("sin+título") -> False
            # 4. No match is found for "ul". (correct)

        except re.error as e:
            print(f"Regex error matching alias '{alias_lower}' in filename '{filename_lower}': {e}")
            continue # Skip this alias if regex fails

    # If the loop finishes without any matches, return an empty list.
    return []
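How the enhanced filename matcher behaves, per the boundary pattern and the inline 'ul' example above (hypothetical data; assumes clean_folder_name passes 'Tifa' through unchanged):

names = [{'name': 'Tifa', 'aliases': ['tifa']}]
match_folders_from_filename_enhanced("tifa_beach_01.png", names, set())
# -> ['Tifa']: 'tifa' sits at the start and is followed by '_', so the boundary pattern matches
match_folders_from_filename_enhanced("latifah.png", names, set())
# -> []: no separator around 'tifa', and neither string contains CJK, so the fallback is skipped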
structure.txt (new file, 110 lines)
@@ -0,0 +1,110 @@
├── assets/
│   ├── Kemono.ico
│   ├── Kemono.png
│   ├── Ko-fi.png
│   ├── buymeacoffee.png
│   ├── discord.png
│   ├── github.png
│   ├── instagram.png
│   └── patreon.png
├── data/
│   ├── creators.json
│   └── dejavu-sans/
│       ├── DejaVu Fonts License.txt
│       ├── DejaVuSans-Bold.ttf
│       ├── DejaVuSans-BoldOblique.ttf
│       ├── DejaVuSans-ExtraLight.ttf
│       ├── DejaVuSans-Oblique.ttf
│       ├── DejaVuSans.ttf
│       ├── DejaVuSansCondensed-Bold.ttf
│       ├── DejaVuSansCondensed-BoldOblique.ttf
│       ├── DejaVuSansCondensed-Oblique.ttf
│       └── DejaVuSansCondensed.ttf
├── main.py
├── src/
│   ├── __init__.py
│   ├── config/
│   │   ├── __init__.py
│   │   └── constants.py
│   ├── core/
│   │   ├── Hentai2read_client.py
│   │   ├── __init__.py
│   │   ├── allcomic_client.py
│   │   ├── api_client.py
│   │   ├── booru_client.py
│   │   ├── bunkr_client.py
│   │   ├── discord_client.py
│   │   ├── erome_client.py
│   │   ├── fap_nation_client.py
│   │   ├── manager.py
│   │   ├── mangadex_client.py
│   │   ├── nhentai_client.py
│   │   ├── pixeldrain_client.py
│   │   ├── rule34video_client.py
│   │   ├── saint2_client.py
│   │   ├── simpcity_client.py
│   │   ├── toonily_client.py
│   │   └── workers.py
│   ├── i18n/
│   │   ├── __init__.py
│   │   └── translator.py
│   ├── services/
│   │   ├── __init__.py
│   │   ├── drive_downloader.py
│   │   ├── multipart_downloader.py
│   │   └── updater.py
│   ├── ui/
│   │   ├── __init__.py
│   │   ├── assets.py
│   │   ├── classes/
│   │   │   ├── allcomic_downloader_thread.py
│   │   │   ├── booru_downloader_thread.py
│   │   │   ├── bunkr_downloader_thread.py
│   │   │   ├── discord_downloader_thread.py
│   │   │   ├── downloader_factory.py
│   │   │   ├── drive_downloader_thread.py
│   │   │   ├── erome_downloader_thread.py
│   │   │   ├── external_link_downloader_thread.py
│   │   │   ├── fap_nation_downloader_thread.py
│   │   │   ├── hentai2read_downloader_thread.py
│   │   │   ├── kemono_discord_downloader_thread.py
│   │   │   ├── mangadex_downloader_thread.py
│   │   │   ├── nhentai_downloader_thread.py
│   │   │   ├── pixeldrain_downloader_thread.py
│   │   │   ├── rule34video_downloader_thread.py
│   │   │   ├── saint2_downloader_thread.py
│   │   │   ├── simp_city_downloader_thread.py
│   │   │   └── toonily_downloader_thread.py
│   │   ├── dialogs/
│   │   │   ├── ConfirmAddAllDialog.py
│   │   │   ├── CookieHelpDialog.py
│   │   │   ├── CustomFilenameDialog.py
│   │   │   ├── DownloadExtractedLinksDialog.py
│   │   │   ├── DownloadHistoryDialog.py
│   │   │   ├── EmptyPopupDialog.py
│   │   │   ├── ErrorFilesDialog.py
│   │   │   ├── ExportLinksDialog.py
│   │   │   ├── ExportOptionsDialog.py
│   │   │   ├── FavoriteArtistsDialog.py
│   │   │   ├── FavoritePostsDialog.py
│   │   │   ├── FutureSettingsDialog.py
│   │   │   ├── HelpGuideDialog.py
│   │   │   ├── KeepDuplicatesDialog.py
│   │   │   ├── KnownNamesFilterDialog.py
│   │   │   ├── MoreOptionsDialog.py
│   │   │   ├── MultipartScopeDialog.py
│   │   │   ├── SinglePDF.py
│   │   │   ├── SupportDialog.py
│   │   │   ├── TourDialog.py
│   │   │   ├── __init__.py
│   │   │   └── discord_pdf_generator.py
│   │   └── main_window.py
│   └── utils/
│       ├── __init__.py
│       ├── command.py
│       ├── file_utils.py
│       ├── network_utils.py
│       ├── resolution.py
│       └── text_utils.py
├── structure.txt
└── yt-dlp.exe
yt-dlp.exe (new binary file)
Binary file not shown.