63 Commits

Author    SHA1        Message                                                                Date
Yuvi9587  46658a7bab  Update readme.md                                                       2025-05-30 21:09:15 +05:30
Yuvi9587  927c11f2bb  Update readme.md                                                       2025-05-30 21:08:23 +05:30
Yuvi9587  a54f2b3567  Commit                                                                 2025-05-30 21:04:02 +05:30
Yuvi9587  7f2312b64f  Commit                                                                 2025-05-30 08:28:11 +05:30
Yuvi9587  7106694bcb  Update main.py                                                         2025-05-29 20:29:19 +05:30
Yuvi9587  6b37d73e5a  Commit                                                                 2025-05-29 19:47:04 +05:30
Yuvi9587  d1c5b205ef  Commit                                                                 2025-05-29 19:09:08 +05:30
Yuvi9587  10b567a5fd  Commit                                                                 2025-05-29 19:07:28 +05:30
Yuvi9587  eed0a919aa  Commit                                                                 2025-05-29 17:56:16 +05:30
Yuvi9587  78357df07f  Commit                                                                 2025-05-29 08:50:01 +01:00
Yuvi9587  8137c76eb4  Commit                                                                 2025-05-28 20:33:06 +05:30
Yuvi9587  be3a522305  Commit                                                                 2025-05-28 18:20:54 +05:30
Yuvi9587  13d05765b2  Commit                                                                 2025-05-28 13:41:39 +05:30
Yuvi9587  f52d16d1e4  Commit                                                                 2025-05-28 04:41:09 +05:30
Yuvi9587  acb91c7e8a  Commit                                                                 2025-05-28 09:46:03 +05:30
Yuvi9587  c765a7a281  Commit                                                                 2025-05-28 09:21:34 +05:30
Yuvi9587  5abfcc8550  Merge branch 'main' of https://github.com/Yuvi9587/Kemono-Downloader  2025-05-28 08:26:03 +05:30
Yuvi9587  7957468077  Commit                                                                 2025-05-27 23:00:16 +05:30
Yuvi9587  f774773b63  Commit                                                                 2025-05-27 20:34:38 +05:30
Yuvi9587  8036cb9835  Commit                                                                 2025-05-26 20:37:37 +05:30
Yuvi9587  13fc33d2c0  Commit                                                                 2025-05-26 09:33:45 +05:30
Yuvi9587  8663ef54a3  Commit                                                                 2025-05-26 08:43:13 +05:30
Yuvi9587  0316813792  Delete dist directory                                                  2025-05-26 13:55:54 +05:30
Yuvi9587  d201a5396c  Delete build/Kemono Downloader directory                               2025-05-26 13:55:25 +05:30
Yuvi9587  86f9396b6c  Commit                                                                 2025-05-26 13:52:34 +05:30
Yuvi9587  0fb4bb3cb0  Commit                                                                 2025-05-26 13:52:07 +05:30
Yuvi9587  1528d7ce25  Update Read.png                                                        2025-05-26 09:54:26 +05:30
Yuvi9587  4e7eeb7989  Commit                                                                 2025-05-26 09:52:06 +05:30
Yuvi9587  7f2976a4f4  Commit                                                                 2025-05-26 09:48:00 +05:30
Yuvi9587  8928cb92da  readme.md                                                              2025-05-26 01:39:39 +05:30
Yuvi9587  a181b76124  Update main.py                                                         2025-05-25 17:18:11 +05:30
Yuvi9587  8f085a8f63  Commit                                                                 2025-05-25 21:52:04 +05:30
Yuvi9587  93a997351b  Update readme.md                                                       2025-05-25 21:22:47 +05:30
Yuvi9587  b3af6c1c15  Commit                                                                 2025-05-25 21:21:00 +05:30
Yuvi9587  4a65263f7d  Commit                                                                 2025-05-25 19:49:17 +05:30
Yuvi9587  1091b5b9b4  Commit                                                                 2025-05-25 19:48:08 +05:30
Yuvi9587  f6b3ff2f5c  Update main.py                                                         2025-05-25 11:36:35 +05:30
Yuvi9587  b399bdf5cf  readme.md                                                              2025-05-25 16:54:35 +05:30
Yuvi9587  9ace161bc8  Update downloader_utils.py                                             2025-05-25 11:22:04 +05:30
Yuvi9587  66e52cfd78  Commit                                                                 2025-05-25 12:27:15 +05:30
Yuvi9587  e665fd3cde  Commit                                                                 2025-05-25 11:38:38 +05:30
Yuvi9587  fc94f4c691  Commit                                                                 2025-05-24 22:55:23 +05:30
Yuvi9587  78e2012f04  Commit                                                                 2025-05-24 13:30:06 +05:30
Yuvi9587  3fe9dbacc6  Commit                                                                 2025-05-24 13:15:08 +05:30
Yuvi9587  004dea06e0  Commit                                                                 2025-05-24 16:22:47 +05:30
Yuvi9587  8994a69c34  Add files via upload                                                   2025-05-24 10:36:15 +05:30
Yuvi9587  f4a692673e  main.py                                                                2025-05-24 10:35:46 +05:30
Yuvi9587  4cb5f14ef6  Delete Known.txt                                                       2025-05-23 21:01:05 +05:30
Yuvi9587  a596c4f350  Update main.py                                                         2025-05-23 20:59:35 +05:30
Yuvi9587  e091c60d29  Commit                                                                 2025-05-23 20:23:36 +05:30
Yuvi9587  d2ea026a41  Commit                                                                 2025-05-23 19:11:52 +05:30
Yuvi9587  bb3d5c20f5  Commit                                                                 2025-05-23 18:24:42 +05:30
Yuvi9587  a13eae8f16  Commit                                                                 2025-05-23 18:19:30 +05:30
Yuvi9587  7e5dc71720  Commit                                                                 2025-05-23 18:06:47 +05:30
Yuvi9587  d7960bbb85  Commit                                                                 2025-05-23 17:22:54 +05:30
Yuvi9587  c4d5ba3040  Commit                                                                 2025-05-22 07:40:10 +05:30
Yuvi9587  fd84de7bce  Commit                                                                 2025-05-22 07:03:05 +05:30
Yuvi9587  a6383b20a4  Commit                                                                 2025-05-21 17:20:16 +05:30
Yuvi9587  651f9d9f8d  Update main.py                                                         2025-05-18 16:17:40 +05:30
Yuvi9587  decef6730f  Commit                                                                 2025-05-18 16:12:19 +05:30
Yuvi9587  32a12e8a09  Commit                                                                 2025-05-17 11:41:43 +05:30
Yuvi9587  62007d2d45  Update readme.md                                                       2025-05-16 16:08:48 +05:30
Yuvi9587  f1e592cf99  Update readme.md                                                       2025-05-16 12:50:32 +05:30
14 changed files with 755725 additions and 1568 deletions

New binary files (contents not shown):

    Kemono.png            12 KiB
    Read/Read.png         168 KiB
    Read/Read1.png        126 KiB
    Read/Read2.png        139 KiB
    Read/Read3.png        130 KiB
    assets/discord.png    17 KiB
    assets/github.png     13 KiB
    assets/instagram.png  59 KiB

creators.json  (new file, 750524 lines added; diff suppressed because it is too large)
main.py        (4924 lines changed; diff suppressed because it is too large)

@@ -5,6 +5,7 @@ import hashlib
import http.client
import traceback
import threading
import queue # Import the missing 'queue' module
from concurrent.futures import ThreadPoolExecutor, as_completed
CHUNK_DOWNLOAD_RETRY_DELAY = 2 # Slightly reduced for faster retries if needed
@@ -13,61 +14,64 @@ DOWNLOAD_CHUNK_SIZE_ITER = 1024 * 256 # 256KB for iter_content within a chunk d
def _download_individual_chunk(chunk_url, temp_file_path, start_byte, end_byte, headers,
part_num, total_parts, progress_data, cancellation_event, skip_event, logger,
signals=None, api_original_filename=None): # Added signals and api_original_filename
part_num, total_parts, progress_data, cancellation_event, skip_event, pause_event, global_emit_time_ref, cookies_for_chunk, # Added cookies_for_chunk
logger_func, emitter=None, api_original_filename=None): # Renamed logger, signals to emitter
"""Downloads a single chunk of a file and writes it to the temp file."""
if cancellation_event and cancellation_event.is_set():
logger(f" [Chunk {part_num + 1}/{total_parts}] Download cancelled before start.")
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Download cancelled before start.")
return 0, False # bytes_downloaded, success
if skip_event and skip_event.is_set():
logger(f" [Chunk {part_num + 1}/{total_parts}] Skip event triggered before start.")
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Skip event triggered before start.")
return 0, False
if pause_event and pause_event.is_set():
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Download paused before start...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set():
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Download cancelled while paused.")
return 0, False
time.sleep(0.2) # Shorter sleep for responsive resume
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Download resumed.")
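The pause handling added in this hunk repeats the same poll-and-sleep loop in three places (before start, in the retry loop, and during data iteration). As a self-contained sketch of the pattern — function and parameter names here are illustrative, not from the source:

```python
import threading
import time

def wait_if_paused(pause_event, cancellation_event, poll_interval=0.2):
    """Block while pause_event is set; return False if cancelled meanwhile.

    Mirrors the polling pattern in the diff: a paused worker spins on a
    short sleep so a resume (or a cancel) is noticed within ~poll_interval.
    """
    while pause_event is not None and pause_event.is_set():
        if cancellation_event is not None and cancellation_event.is_set():
            return False  # cancelled while paused
        time.sleep(poll_interval)
    return True  # not paused, or resumed without cancellation
```

The short 0.2 s sleep is what makes resume feel responsive without burning CPU in a tight loop.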
chunk_headers = headers.copy()
# end_byte can be -1 for 0-byte files, meaning download from start_byte to end of file (which is start_byte itself)
if end_byte != -1 : # For 0-byte files, end_byte might be -1, Range header should not be set or be 0-0
chunk_headers['Range'] = f"bytes={start_byte}-{end_byte}"
elif start_byte == 0 and end_byte == -1: # Specifically for 0-byte files
# Some servers might not like Range: bytes=0--1.
# For a 0-byte file, we might not even need a range header, or Range: bytes=0-0
# Let's try without for 0-byte, or rely on server to handle 0-0 if Content-Length was 0.
# If Content-Length was 0, the main function might handle it directly.
# This chunking logic is primarily for files > 0 bytes.
# For now, if end_byte is -1, it implies a 0-byte file, so we expect 0 bytes.
pass
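The Range logic above boils down to: send an inclusive `bytes=start-end` header for normal chunks, and send no Range header at all for the 0-byte sentinel (`end_byte == -1`). A minimal sketch, with a hypothetical helper name:

```python
def range_header(start_byte, end_byte):
    """Build the HTTP Range header value for one chunk.

    HTTP byte ranges are inclusive on both ends, so bytes=0-2 requests
    exactly 3 bytes. Returns None for the 0-byte sentinel (end_byte == -1),
    matching the diff's choice to omit the header for empty files.
    """
    if end_byte == -1:
        return None
    return f"bytes={start_byte}-{end_byte}"
```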
bytes_this_chunk = 0
last_progress_emit_time_for_chunk = time.time()
last_speed_calc_time = time.time()
bytes_at_last_speed_calc = 0
for attempt in range(MAX_CHUNK_DOWNLOAD_RETRIES + 1):
if cancellation_event and cancellation_event.is_set():
logger(f" [Chunk {part_num + 1}/{total_parts}] Cancelled during retry loop.")
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Cancelled during retry loop.")
return bytes_this_chunk, False
if skip_event and skip_event.is_set():
logger(f" [Chunk {part_num + 1}/{total_parts}] Skip event during retry loop.")
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Skip event during retry loop.")
return bytes_this_chunk, False
if pause_event and pause_event.is_set():
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Paused during retry loop...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set():
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Cancelled while paused in retry loop.")
return bytes_this_chunk, False
time.sleep(0.2)
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Resumed from retry loop pause.")
try:
if attempt > 0:
logger(f" [Chunk {part_num + 1}/{total_parts}] Retrying download (Attempt {attempt}/{MAX_CHUNK_DOWNLOAD_RETRIES})...")
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Retrying download (Attempt {attempt}/{MAX_CHUNK_DOWNLOAD_RETRIES})...")
time.sleep(CHUNK_DOWNLOAD_RETRY_DELAY * (2 ** (attempt - 1)))
# Reset speed calculation on retry
last_speed_calc_time = time.time()
bytes_at_last_speed_calc = bytes_this_chunk # Current progress of this chunk
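The `time.sleep(CHUNK_DOWNLOAD_RETRY_DELAY * (2 ** (attempt - 1)))` call above is exponential backoff: with a base delay of 2 seconds, successive retries wait 2 s, 4 s, 8 s, and so on. Expressed as a standalone helper (illustrative only):

```python
CHUNK_DOWNLOAD_RETRY_DELAY = 2  # base delay in seconds, as in the diff

def retry_delay(attempt, base=CHUNK_DOWNLOAD_RETRY_DELAY):
    """Seconds to wait before retry `attempt` (1-based), doubling each time."""
    return base * (2 ** (attempt - 1))
```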
# Enhanced log message for chunk start
log_msg = f" 🚀 [Chunk {part_num + 1}/{total_parts}] Starting download: bytes {start_byte}-{end_byte if end_byte != -1 else 'EOF'}"
logger(log_msg)
print(f"DEBUG_MULTIPART: {log_msg}") # Direct console print for debugging
response = requests.get(chunk_url, headers=chunk_headers, timeout=(10, 120), stream=True)
logger_func(log_msg)
response = requests.get(chunk_url, headers=chunk_headers, timeout=(10, 120), stream=True, cookies=cookies_for_chunk)
response.raise_for_status()
# For 0-byte files, if end_byte was -1, we expect 0 content.
if start_byte == 0 and end_byte == -1 and int(response.headers.get('Content-Length', 0)) == 0:
logger(f" [Chunk {part_num + 1}/{total_parts}] Confirmed 0-byte file.")
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Confirmed 0-byte file.")
with progress_data['lock']:
progress_data['chunks_status'][part_num]['active'] = False
progress_data['chunks_status'][part_num]['speed_bps'] = 0
@@ -77,17 +81,24 @@ def _download_individual_chunk(chunk_url, temp_file_path, start_byte, end_byte,
f.seek(start_byte)
for data_segment in response.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE_ITER):
if cancellation_event and cancellation_event.is_set():
logger(f" [Chunk {part_num + 1}/{total_parts}] Cancelled during data iteration.")
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Cancelled during data iteration.")
return bytes_this_chunk, False
if skip_event and skip_event.is_set():
logger(f" [Chunk {part_num + 1}/{total_parts}] Skip event during data iteration.")
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Skip event during data iteration.")
return bytes_this_chunk, False
if pause_event and pause_event.is_set():
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Paused during data iteration...")
while pause_event.is_set():
if cancellation_event and cancellation_event.is_set():
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Cancelled while paused in data iteration.")
return bytes_this_chunk, False
time.sleep(0.2)
logger_func(f" [Chunk {part_num + 1}/{total_parts}] Resumed from data iteration pause.")
if data_segment:
f.write(data_segment)
bytes_this_chunk += len(data_segment)
with progress_data['lock']:
# Increment both the chunk's downloaded and the overall downloaded
progress_data['total_downloaded_so_far'] += len(data_segment)
progress_data['chunks_status'][part_num]['downloaded'] = bytes_this_chunk
progress_data['chunks_status'][part_num]['active'] = True
@@ -99,46 +110,49 @@ def _download_individual_chunk(chunk_url, temp_file_path, start_byte, end_byte,
current_speed_bps = (bytes_delta * 8) / time_delta_speed if time_delta_speed > 0 else 0
progress_data['chunks_status'][part_num]['speed_bps'] = current_speed_bps
last_speed_calc_time = current_time
bytes_at_last_speed_calc = bytes_this_chunk
# Emit progress more frequently from within the chunk download
if current_time - last_progress_emit_time_for_chunk > 0.1: # Emit up to 10 times/sec per chunk
if signals and hasattr(signals, 'file_progress_signal'):
# Ensure we read the latest total downloaded from progress_data
# Send a copy of the chunks_status list
status_list_copy = [dict(s) for s in progress_data['chunks_status']] # Make a deep enough copy
signals.file_progress_signal.emit(api_original_filename, status_list_copy)
last_progress_emit_time_for_chunk = current_time
bytes_at_last_speed_calc = bytes_this_chunk
if emitter and (current_time - global_emit_time_ref[0] > 0.25): # Max ~4Hz for the whole file
global_emit_time_ref[0] = current_time # Update shared last emit time
status_list_copy = [dict(s) for s in progress_data['chunks_status']] # Make a deep enough copy
if isinstance(emitter, queue.Queue):
emitter.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
elif hasattr(emitter, 'file_progress_signal'): # PostProcessorSignals-like
emitter.file_progress_signal.emit(api_original_filename, status_list_copy)
return bytes_this_chunk, True
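The new `emitter` parameter replaces the Qt-only `signals` object with duck typing: a `queue.Queue` receives a dict message, while anything exposing `file_progress_signal` is treated like the old signals object. A standalone sketch of that dispatch (payload shape taken from the diff):

```python
import queue

def emit_file_progress(emitter, filename, chunks_status):
    """Send a per-chunk progress update through either a queue.Queue or a
    Qt-signal-like object exposing `file_progress_signal.emit`."""
    if emitter is None:
        return
    # Copy each chunk's status dict so the receiver never sees a dict
    # that a download thread is still mutating.
    status_copy = [dict(s) for s in chunks_status]
    if isinstance(emitter, queue.Queue):
        emitter.put({'type': 'file_progress',
                     'payload': (filename, status_copy)})
    elif hasattr(emitter, 'file_progress_signal'):
        emitter.file_progress_signal.emit(filename, status_copy)
```

The shared `global_emit_time_ref` throttle in the diff caps this to roughly 4 emissions per second for the whole file, regardless of how many chunk threads are running.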
except (requests.exceptions.ConnectionError, requests.exceptions.Timeout, http.client.IncompleteRead) as e:
logger(f" ❌ [Chunk {part_num + 1}/{total_parts}] Retryable error: {e}")
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Retryable error: {e}")
if isinstance(e, requests.exceptions.ConnectionError) and \
("Failed to resolve" in str(e) or "NameResolutionError" in str(e)):
logger_func(" 💡 This looks like a DNS resolution problem. Please check your internet connection, DNS settings, or VPN.")
if attempt == MAX_CHUNK_DOWNLOAD_RETRIES:
logger(f" ❌ [Chunk {part_num + 1}/{total_parts}] Failed after {MAX_CHUNK_DOWNLOAD_RETRIES} retries.")
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Failed after {MAX_CHUNK_DOWNLOAD_RETRIES} retries.")
return bytes_this_chunk, False
except requests.exceptions.RequestException as e: # Includes 4xx/5xx errors after raise_for_status
logger(f" ❌ [Chunk {part_num + 1}/{total_parts}] Non-retryable error: {e}")
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Non-retryable error: {e}")
if ("Failed to resolve" in str(e) or "NameResolutionError" in str(e)): # More general check
logger_func(" 💡 This looks like a DNS resolution problem. Please check your internet connection, DNS settings, or VPN.")
return bytes_this_chunk, False
except Exception as e:
logger(f" ❌ [Chunk {part_num + 1}/{total_parts}] Unexpected error: {e}\n{traceback.format_exc(limit=1)}")
logger_func(f" ❌ [Chunk {part_num + 1}/{total_parts}] Unexpected error: {e}\n{traceback.format_exc(limit=1)}")
return bytes_this_chunk, False
# Ensure final status is marked as inactive if loop finishes due to retries
with progress_data['lock']:
progress_data['chunks_status'][part_num]['active'] = False
progress_data['chunks_status'][part_num]['speed_bps'] = 0
return bytes_this_chunk, False # Should be unreachable
def download_file_in_parts(file_url, save_path, total_size, num_parts, headers,
api_original_filename, signals, cancellation_event, skip_event, logger):
def download_file_in_parts(file_url, save_path, total_size, num_parts, headers, api_original_filename,
emitter_for_multipart, cookies_for_chunk_session, # Added cookies_for_chunk_session
cancellation_event, skip_event, logger_func, pause_event):
"""
Downloads a file in multiple parts concurrently.
Returns: (download_successful_flag, downloaded_bytes, calculated_file_hash, temp_file_handle_or_None)
The temp_file_handle will be an open read-binary file handle to the .part file if successful, otherwise None.
It is the responsibility of the caller to close this handle and rename/delete the .part file.
"""
logger(f"⬇️ Initializing Multi-part Download ({num_parts} parts) for: '{api_original_filename}' (Size: {total_size / (1024*1024):.2f} MB)")
logger_func(f"⬇️ Initializing Multi-part Download ({num_parts} parts) for: '{api_original_filename}' (Size: {total_size / (1024*1024):.2f} MB)")
temp_file_path = save_path + ".part"
try:
@@ -146,7 +160,7 @@ def download_file_in_parts(file_url, save_path, total_size, num_parts, headers,
if total_size > 0:
f_temp.truncate(total_size) # Pre-allocate space
except IOError as e:
logger(f" ❌ Error creating/truncating temp file '{temp_file_path}': {e}")
logger_func(f" ❌ Error creating/truncating temp file '{temp_file_path}': {e}")
return False, 0, None, None
chunk_size_calc = total_size // num_parts
@@ -167,7 +181,7 @@ def download_file_in_parts(file_url, save_path, total_size, num_parts, headers,
chunk_actual_sizes.append(end - start + 1)
if not chunks_ranges and total_size > 0:
logger(f" ⚠️ No valid chunk ranges for multipart download of '{api_original_filename}'. Aborting multipart.")
logger_func(f" ⚠️ No valid chunk ranges for multipart download of '{api_original_filename}'. Aborting multipart.")
if os.path.exists(temp_file_path): os.remove(temp_file_path)
return False, 0, None, None
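The truncated hunk around `chunk_size_calc` divides the file into inclusive byte ranges, with the last part absorbing the remainder. A plausible reconstruction under that assumption — the exact loop is hidden by the suppressed diff lines:

```python
def compute_chunk_ranges(total_size, num_parts):
    """Split [0, total_size) into num_parts inclusive (start, end) byte ranges.

    Sketch of the range math implied by the diff: each part gets
    total_size // num_parts bytes, and the final part extends to the
    last byte so no remainder is lost.
    """
    if total_size <= 0 or num_parts <= 0:
        return []
    chunk = total_size // num_parts
    ranges = []
    for i in range(num_parts):
        start = i * chunk
        end = total_size - 1 if i == num_parts - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges
```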
@@ -178,7 +192,8 @@ def download_file_in_parts(file_url, save_path, total_size, num_parts, headers,
{'id': i, 'downloaded': 0, 'total': chunk_actual_sizes[i] if i < len(chunk_actual_sizes) else 0, 'active': False, 'speed_bps': 0.0}
for i in range(num_parts)
],
'lock': threading.Lock()
'lock': threading.Lock(),
'last_global_emit_time': [time.time()] # Shared mutable for global throttling timestamp
}
chunk_futures = []
@@ -191,8 +206,9 @@ def download_file_in_parts(file_url, save_path, total_size, num_parts, headers,
chunk_futures.append(chunk_pool.submit(
_download_individual_chunk, chunk_url=file_url, temp_file_path=temp_file_path,
start_byte=start, end_byte=end, headers=headers, part_num=i, total_parts=num_parts,
progress_data=progress_data, cancellation_event=cancellation_event, skip_event=skip_event, logger=logger,
signals=signals, api_original_filename=api_original_filename # Pass them here
progress_data=progress_data, cancellation_event=cancellation_event, skip_event=skip_event, global_emit_time_ref=progress_data['last_global_emit_time'],
pause_event=pause_event, cookies_for_chunk=cookies_for_chunk_session, logger_func=logger_func, emitter=emitter_for_multipart,
api_original_filename=api_original_filename
))
for future in as_completed(chunk_futures):
@@ -201,32 +217,29 @@ def download_file_in_parts(file_url, save_path, total_size, num_parts, headers,
total_bytes_from_chunks += bytes_downloaded_this_chunk
if not success_this_chunk:
all_chunks_successful = False
# Progress is emitted from within _download_individual_chunk
if cancellation_event and cancellation_event.is_set():
logger(f" Multi-part download for '{api_original_filename}' cancelled by main event.")
logger_func(f" Multi-part download for '{api_original_filename}' cancelled by main event.")
all_chunks_successful = False
# Ensure a final progress update is sent with all chunks marked inactive (unless still active due to error)
if signals and hasattr(signals, 'file_progress_signal'):
if emitter_for_multipart:
with progress_data['lock']:
# Ensure all chunks are marked inactive for the final signal if download didn't fully succeed or was cancelled
status_list_copy = [dict(s) for s in progress_data['chunks_status']]
signals.file_progress_signal.emit(api_original_filename, status_list_copy)
status_list_copy = [dict(s) for s in progress_data['chunks_status']]
if isinstance(emitter_for_multipart, queue.Queue):
emitter_for_multipart.put({'type': 'file_progress', 'payload': (api_original_filename, status_list_copy)})
elif hasattr(emitter_for_multipart, 'file_progress_signal'): # PostProcessorSignals-like
emitter_for_multipart.file_progress_signal.emit(api_original_filename, status_list_copy)
if all_chunks_successful and (total_bytes_from_chunks == total_size or total_size == 0):
logger(f" ✅ Multi-part download successful for '{api_original_filename}'. Total bytes: {total_bytes_from_chunks}")
logger_func(f" ✅ Multi-part download successful for '{api_original_filename}'. Total bytes: {total_bytes_from_chunks}")
md5_hasher = hashlib.md5()
with open(temp_file_path, 'rb') as f_hash:
for buf in iter(lambda: f_hash.read(4096*10), b''): # Read in larger buffers for hashing
md5_hasher.update(buf)
calculated_hash = md5_hasher.hexdigest()
# Return an open file handle for the caller to manage (e.g., for compression)
# The caller is responsible for closing this handle and renaming/deleting the .part file.
return True, total_bytes_from_chunks, calculated_hash, open(temp_file_path, 'rb')
else:
logger(f" ❌ Multi-part download failed for '{api_original_filename}'. Success: {all_chunks_successful}, Bytes: {total_bytes_from_chunks}/{total_size}. Cleaning up.")
logger_func(f" ❌ Multi-part download failed for '{api_original_filename}'. Success: {all_chunks_successful}, Bytes: {total_bytes_from_chunks}/{total_size}. Cleaning up.")
if os.path.exists(temp_file_path):
try: os.remove(temp_file_path)
except OSError as e: logger(f" Failed to remove temp part file '{temp_file_path}': {e}")
return False, total_bytes_from_chunks, None, None
except OSError as e: logger_func(f" Failed to remove temp part file '{temp_file_path}': {e}")
return False, total_bytes_from_chunks, None, None
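The buffered MD5 hashing at the end of `download_file_in_parts` is worth isolating: reading in fixed-size buffers keeps memory use flat even for multi-gigabyte `.part` files. A self-contained version of the same idiom:

```python
import hashlib

def md5_of_file(path, buffer_size=4096 * 10):
    """Hash a file in fixed-size buffers, as the diff does for the
    completed .part file, so large files are never loaded fully into RAM."""
    hasher = hashlib.md5()
    with open(path, 'rb') as f:
        # iter() with a b'' sentinel stops cleanly at end of file.
        for buf in iter(lambda: f.read(buffer_size), b''):
            hasher.update(buf)
    return hasher.hexdigest()
```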

readme.md  (460 lines changed)

@@ -1,204 +1,306 @@
# Kemono Downloader v3.2.0
<h1 align="center">Kemono Downloader v4.2.0</h1>
A feature-rich GUI application built with PyQt5 to download content from **Kemono.su** or **Coomer.party**.
Offers robust filtering, smart organization, manga-specific handling, and performance tuning.
<table align="center">
<tr>
<td align="center">
<img src="Read/Read.png" alt="Post Downloader Tab" width="400"/>
<br>
<strong>Default</strong>
</td>
<td align="center">
<img src="Read/Read1.png" alt="Creator Downloader Tab" width="400"/>
<br>
<strong>Favorite mode</strong>
</td>
</tr>
<tr>
<td align="center">
<img src="Read/Read2.png" alt="Settings Tab" width="400"/>
<br>
<strong>Single Post</strong>
</td>
<td align="center">
<img src="Read/Read3.png" alt="Settings Tab" width="400"/>
<br>
<strong>Manga/Comic Mode</strong>
</td>
<td align="center">
</td>
</tr>
</table>
---
This version introduces:
- Multi-part downloads
- Character filtering by comments
- Filename word removal
- Various UI/workflow enhancements
A powerful, feature-rich GUI application for downloading content from **[Kemono.su](https://kemono.su)** (and its mirrors like kemono.party) and **[Coomer.party](https://coomer.party)** (and its mirrors like coomer.su).
Built with PyQt5, this tool is designed for users who want deep filtering capabilities, customizable folder structures, efficient downloads, and intelligent automation, all within a modern and user-friendly graphical interface.
*This v5.0.0 release marks a significant feature milestone. Future updates are expected to be less frequent, focusing on maintenance and minor refinements.*
---
## 🚀 What's New in v3.2.0
## What's New in v5.0.0?
### 🔹 Character Filter by Post Comments (Beta)
Version 5.0.0 is a major update, introducing comprehensive new features and refining existing ones for a more powerful and streamlined experience:
- New "Comments" scope for the 'Filter by Character(s)' feature.
### ⭐ Favorite Mode (Artists & Posts)
- **Direct Downloads from Your Kemono.su Favorites:**
- Enable via the "**⭐ Favorite Mode**" checkbox.
- The UI adapts: URL input is replaced, and action buttons change to "**🖼️ Favorite Artists**" and "**📄 Favorite Posts**".
- "**🍪 Use Cookie**" is automatically enabled and required.
- **Favorite Artists Dialog:** Fetches and lists your favorited artists. Select one or more to queue for download.
- **Favorite Posts Dialog:** Fetches and lists your favorited posts, grouped by artist. Includes search, selection, and known name highlighting in post titles.
- **Flexible Download Scopes for Favorites:**
- `Scope: Selected Location`: Downloads all selected favorites into the main "Download Location".
- `Scope: Artist Folders`: Creates a subfolder for each artist within the main "Download Location".
- Standard filters (character, skip words, file type) apply to content downloaded via Favorite Mode.
**How it works:**
1. Checks if any **filenames** match your character filter. If yes → downloads the post (skips comment check).
2. If no filename matches → scans the **post's comments**. If matched → downloads the post.
### 🎨 Creator Selection Popup
- Click the "**🎨**" button next to the URL input to open the "Creator Selection" dialog.
- Loads creators from your `creators.json` file (expected in the app's directory).
- Search, select multiple creators, and their names are added to the URL input, comma-separated.
- Choose download scope (`Characters` or `Creators`) for items added via this popup, influencing folder structure.
- Prioritizes filename-matched character name for folder naming, otherwise uses comment match.
- Cycle through filter scopes with the `Filter: [Scope]` button next to the character input.
### 🎯 Advanced Character Filtering & `Known.txt` Integration
- **Enhanced Filter Syntax:**
- `Nami`: Simple character filter.
- `(Vivi, Ulti, Uta)`: Groups distinct characters into a shared folder for the session (e.g., "Vivi Ulti Uta"). Adds "Vivi", "Ulti", "Uta" as *separate* entries to `Known.txt` if new.
- `(Boa, Hancock)~`: Defines "Boa" and "Hancock" as aliases for the *same character/entity*. Creates a shared folder (e.g., "Boa Hancock"). Adds "Boa Hancock" as a *single group entry* to `Known.txt` if new, with "Boa" and "Hancock" as its aliases.
- **"Add to Filter" Button (⤵️):** Opens a dialog to select names from your `Known.txt` (with search) and add them to the "Filter by Character(s)" field. Grouped names from `Known.txt` are added with the `~` syntax.
- **New Name Confirmation:** When new, unrecognized names/groups are used in the filter, a dialog prompts to add them to `Known.txt` with appropriate formatting.
### 📖 Manga/Comic Mode Enhancements
- **"Title+G.Num" Filename Style:** (Post Title + Global Numbering) All files across posts get the post title prefix + a global sequential number (e.g., `Chapter 1_001.jpg`, `Chapter 2_003.jpg`).
- **Optional Filename Prefix:** For "Original File" and "Date Based" manga styles, an input field appears to add a custom prefix to filenames.
### 🖼️ Enhanced Image & Content Handling
- **"Scan Content for Images":** A checkbox to scan post HTML for `<img>` tags and direct image links, resolving relative paths. Crucial for images embedded in descriptions but not in API attachments.
- When "Download Thumbnails Only" is active, "Scan Content for Images" is auto-enabled, and *only* content-scanned images are downloaded.
- **"🎧 Only Audio" Filter Mode:** Dedicated mode to download only common audio formats (MP3, WAV, FLAC, etc.).
- **"📦 Only Archives" Filter Mode:** Exclusively downloads `.zip` and `.rar` files.
### ⚙️ UI & Workflow Improvements
- **Cookie Management:**
- Directly paste cookie strings.
- Browse and load `cookies.txt` files.
- Automatic fallback to `cookies.txt` in the app directory.
- **Multi-part Download Toggle:** Button in the log area to easily switch multi-segment downloads ON/OFF for large files.
- **Log View Toggle (👁️ / 🙈):** Switch between the detailed "Progress Log" and the "Missed Character Log" (which now shows intelligently extracted key terms from skipped titles).
- **Retry Failed Downloads:** Prompts at the end of a session to retry files that failed with recoverable errors (e.g., IncompleteRead).
- **Persistent UI Defaults:** Key filter scopes ("Skip with Words" -> Posts, "Filter by Character(s)" -> Title) now reset to defaults on launch for consistency.
- **Refined Onboarding Tour & Help Guide:** Updated guides accessible via the "❓" button.
---
### ✂️ Remove Specific Words from Filenames
## Core Features
- Input field: `"✂️ Remove Words from name"`
- Enter comma-separated words (e.g., `patreon, kemono, [HD], _final`)
- These are removed from filenames (case-insensitive) to improve organization.
This section details the primary functionalities of the Kemono Downloader.
### User Interface & Workflow
- **Main Inputs:**
- **🔗 Kemono Creator/Post URL:** Paste the full URL of a Kemono/Coomer creator's page or a specific post.
- *Example (Creator):* `https://kemono.su/patreon/user/12345`
- *Example (Post):* `https://kemono.su/patreon/user/12345/post/98765`
- **🎨 Creator Selection Button:** (Next to URL input) Opens a dialog to select creators from `creators.json` to populate the URL field.
- **Page Range (Start to End):** For creator URLs, specify a range of pages to fetch. Disabled for single posts or Manga Mode.
- **📁 Download Location:** Browse to select the main folder for all downloads. Required unless in "🔗 Only Links" mode.
- **Action Buttons:**
- **⬇️ Start Download / 🔗 Extract Links:** Initiates the primary operation based on current settings.
- **⏸️ Pause / ▶️ Resume Download:** Temporarily halt and continue the process. Some UI settings can be changed while paused.
- **❌ Cancel & Reset UI:** Stops the current operation and performs a "soft" UI reset (preserves URL and Directory inputs).
- **🔄 Reset:** (In log area) Clears all inputs, logs, and resets settings to default when idle.
### Filtering & Content Selection
- **🎯 Filter by Character(s):**
- Enter character names, comma-separated.
- **Syntax Examples:**
- `Tifa, Aerith`: Matches posts/files with "Tifa" OR "Aerith". If "Separate Folders" is on, creates folders "Tifa" and "Aerith". Adds "Tifa", "Aerith" to `Known.txt` separately if new.
- `(Vivi, Ulti, Uta)`: Matches "Vivi" OR "Ulti" OR "Uta". Session folder: "Vivi Ulti Uta". Adds "Vivi", "Ulti", "Uta" to `Known.txt` as separate entries if new.
- `(Boa, Hancock)~`: Matches "Boa" OR "Hancock". Session folder: "Boa Hancock". Adds "Boa Hancock" as a single group entry to `Known.txt` if new (aliases: Boa, Hancock).
- **Filter: [Type] Button (Scope):** Cycles how this filter applies:
- `Filter: Files`: Checks individual filenames. Only matching files from a post are downloaded.
- `Filter: Title`: Checks post titles. All files from a matching post are downloaded.
- `Filter: Both`: Checks post title first. If no match, then checks filenames.
- `Filter: Comments (Beta)`: Checks filenames first. If no file match, then checks post comments. (Uses more API requests).
- **🚫 Skip with Words:**
- Enter words (comma-separated) to skip content (e.g., `WIP, sketch`).
- **Scope: [Type] Button:** Cycles how skipping applies:
- `Scope: Files`: Skips individual files by name.
- `Scope: Posts`: Skips entire posts by title.
- `Scope: Both`: Post title first, then filenames.
- **✂️ Remove Words from name:**
- Enter words (comma-separated) to remove from downloaded filenames (e.g., `patreon, [HD]`).
- **Filter Files (Radio Buttons):**
- `All`: All file types.
- `Images/GIFs`: Common image formats.
- `Videos`: Common video formats.
- `📦 Only Archives`: Exclusively `.zip` and `.rar` files. Disables archive skipping and external link log.
- `🎧 Only Audio`: Common audio formats (MP3, WAV, FLAC, etc.).
- `🔗 Only Links`: Extracts and displays external links from post descriptions. Disables download options.
- **Skip .zip / Skip .rar Checkboxes:** Avoid downloading these archive types (disabled if "📦 Only Archives" is active).
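A rough, purely illustrative parser for the character-filter syntax described above — this is not taken from the application's code, only from the documented rules (plain names, `(A, B, C)` groups sharing a folder, `(A, B)~` alias groups):

```python
import re

def parse_character_filter(filter_text):
    """Parse the documented filter syntax into
    (names_tuple, shared_folder_name, is_alias_group) entries."""
    entries = []
    # Alternation: a parenthesised group (optionally suffixed with ~),
    # or a bare name delimited by commas/parentheses.
    for m in re.finditer(r'\(([^)]*)\)(~?)|([^,()]+)', filter_text):
        if m.group(1) is not None:  # parenthesised group
            names = tuple(n.strip() for n in m.group(1).split(',') if n.strip())
            entries.append((names, ' '.join(names), m.group(2) == '~'))
        else:
            name = m.group(3).strip()
            if name:  # skip whitespace-only fragments between entries
                entries.append(((name,), name, False))
    return entries
```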
### Download Customization
- **Download Thumbnails Only:** Downloads small API preview images.
- If "Scan Content for Images" is also active, *only* images found by content scan are downloaded (API thumbnails ignored).
- **Scan Content for Images:** Scans post HTML for `<img>` tags and direct image links, resolving relative paths.
- **Compress to WebP:** If Pillow is installed, converts images > 1.5MB to WebP if significantly smaller.
- **🗄️ Custom Folder Name (Single Post Only):**
- Visible if downloading a single post URL AND "Separate Folders by Name/Title" is enabled.
- Set a custom folder name for that specific post's downloads.
### 📖 Manga/Comic Mode (Creator Feeds Only)
- **Chronological Processing:** Downloads posts from oldest to newest.
- **Page Range Disabled:** All posts are fetched for sorting.
- **Filename Style Toggle Button (in log area):**
- `Name: Post Title (Default)`: First file named after post title; subsequent files in the same post keep original names.
- `Name: Original File`: All files attempt to keep original names. Optional prefix input appears.
- `Name: Title+G.Num`: All files across posts get post title prefix + global sequential number (e.g., `Chapter 1_001.jpg`). Disables post-level multithreading.
- `Name: Date Based`: Files named sequentially (e.g., `001.jpg`) by post date. Optional prefix input appears. Disables post-level multithreading.
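The filename styles above can be sketched as a small helper; the function name and style keys are hypothetical, but the output matches the examples in the guide (e.g., `Chapter 1_001.jpg`):

```python
def manga_filename(style, post_title, original_name, global_index, prefix=""):
    """Build a filename under the manga-mode naming styles described above."""
    ext = original_name.rsplit(".", 1)[-1]
    if style == "title_global_num":
        # Post title plus a zero-padded global sequence number
        return f"{post_title}_{global_index:03d}.{ext}"
    if style == "date_based":
        # Sequential number by post date, with an optional user prefix
        return f"{prefix}{global_index:03d}.{ext}"
    return original_name  # "original" style keeps the source name
```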
### Folder Organization
- **Separate Folders by Name/Title:** Creates subfolders based on "Filter by Character(s)" or post titles. Uses `Known.txt` as a fallback.
- **Subfolder per Post:** If "Separate Folders" is on, creates an additional subfolder for each post.
- **`Known.txt` Management (Bottom Left UI):**
- **List:** Displays primary names from `Known.txt`.
- **Add New:** Input field to add new names/groups.
- Simple: `My Series`
- Group (Separate Known.txt): `(Vivi, Ulti, Uta)`
- Group (Single Known.txt with `~`): `(Character A, Char A)~`
  - **Add Button:** Adds the name/group to `Known.txt`.
- **⤵️ Add to Filter Button:** Opens a dialog to select names from `Known.txt` to add to the "Filter by Character(s)" field.
- **🗑️ Delete Selected Button:** Removes selected names from `Known.txt`.
- **Open Known.txt Button:** Opens `Known.txt` in your default text editor for advanced editing.
- **❓ Button:** Opens this feature guide.
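Parsing a `Known.txt` line into a folder name plus aliases could look like the sketch below. The grouping semantics are simplified assumptions based on the examples above (a grouped entry's names are joined into one folder name, and the trailing `~` variant is treated the same way here):

```python
def parse_known_entry(line):
    """Parse one Known.txt line into (folder_name, aliases)."""
    line = line.strip().rstrip("~")
    if line.startswith("(") and line.endswith(")"):
        # Grouped entry: "(Boa, Hancock)" -> folder "Boa Hancock",
        # with each name usable as a matching alias
        aliases = [a.strip() for a in line[1:-1].split(",") if a.strip()]
        return " ".join(aliases), aliases
    return line, [line]  # simple entry: its own folder and alias
```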
### Advanced & Performance
- **🍪 Cookie Management:**
- **Use Cookie Checkbox:** Enables cookie usage.
- **Text Field:** Paste cookie string (e.g., `name1=value1; name2=value2`).
- **Browse... Button:** Select a `cookies.txt` file (Netscape format).
- *Behavior:* Text field takes precedence. If "Use Cookie" is checked and both are empty, tries to load `cookies.txt` from the app directory.
- **Use Multithreading Checkbox & Threads Input:**
- *Creator Feeds:* Number of posts to process simultaneously.
- *Single Post URLs:* Number of files to download concurrently.
- **Multi-part Download Toggle Button (in log area):**
- `Multi-part: ON`: Enables multi-segment downloads for large files. Can speed up large file downloads but may increase UI choppiness or log spam with many small files.
- `Multi-part: OFF (Default)`: Files downloaded in a single stream.
- Disabled if "🔗 Only Links" or "📦 Only Archives" mode is active.
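Multi-part mode fetches a large file as several byte ranges in parallel. Computing the `(start, end)` pairs for HTTP `Range: bytes=start-end` headers (end inclusive) might look like this sketch; `split_ranges` is illustrative, not the app's code:

```python
def split_ranges(total_size, parts):
    """Split a file of total_size bytes into `parts` contiguous
    (start, end) byte ranges; the last range absorbs any remainder."""
    chunk = total_size // parts
    ranges = []
    for i in range(parts):
        start = i * chunk
        end = total_size - 1 if i == parts - 1 else start + chunk - 1
        ranges.append((start, end))
    return ranges
```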
### Logging & Monitoring
- **📜 Progress Log / Extracted Links Log:** Main text area for detailed messages or extracted links.
- **👁️ / 🙈 Log View Toggle Button:** Switches main log between:
- `👁️ Progress Log`: All download activity, errors, summaries.
- `🙈 Missed Character Log`: Key terms from post titles/content skipped due to character filters.
- **Show External Links in Log Checkbox & Panel:** If checked, a secondary log panel displays external links from post descriptions (disabled in "Only Links" / "Only Archives" modes).
- **Export Links Button:** (In "Only Links" mode) Saves extracted links to a `.txt` file.
- **Progress Labels:** Display overall post progress and individual file download status/speed.
### ⭐ Favorite Mode (Downloading from Your Kemono.su Favorites)
- **Enable:** Check the "**⭐ Favorite Mode**" checkbox (next to "🔗 Only Links").
- **UI Changes:**
- URL input is replaced with a "Favorite Mode active" message.
- Action buttons change to "**🖼️ Favorite Artists**" and "**📄 Favorite Posts**".
- "**🍪 Use Cookie**" is auto-enabled and locked (required for favorites).
- **🖼️ Favorite Artists Dialog:**
- Fetches and lists artists you've favorited on Kemono.su.
- Includes search, select all/deselect all, and a "Download Selected" button.
- Selected artists are added to a download queue.
- **📄 Favorite Posts Dialog:**
- Fetches and lists posts you've favorited, grouped by artist and sorted by date.
- Includes search (title, creator, ID, service), select all/deselect all.
- Highlights known names from your `Known.txt` in post titles for easier identification.
- Selected posts are added to a download queue.
- **Favorite Download Scope Button:** (Next to "Favorite Posts" button)
- `Scope: Selected Location`: All selected favorites download into the main "Download Location". Filters apply globally.
- `Scope: Artist Folders`: A subfolder (named after the artist) is created in the main "Download Location" for each artist. Content goes into their specific subfolder. Filters apply within each artist's folder.
- **Filters:** Standard "Filter by Character(s)", "Skip with Words", and "Filter Files" options apply to content downloaded from favorites.
---
### 🧩 Multi-part Downloads for Large Files
- Toggles multi-segment downloads (OFF by default) via `Multi-part: ON/OFF` in the log header.
- Improves speed on large files (e.g., >10MB videos, zips).
- Falls back to a single-stream download on failure.
## Key Files
- **`Known.txt`:** (Located in the application's directory)
- Stores your list of known shows, characters, or series titles for automatic folder organization.
- **Format:** Each line is an entry.
- Simple: `My Awesome Series`
- Grouped (single `Known.txt` entry, shared folder): `(Boa, Hancock)` - creates folder "Boa Hancock", aliases "Boa", "Hancock".
- Used as a fallback for folder naming if "Separate Folders" is on and no active filter matches.
- **`creators.json`:** (Expected in the application's directory)
- Used by the "🎨 Creator Selection Popup".
- A JSON file containing a list of creator objects. Expected structure: `[ [ {creator1_data}, {creator2_data}, ... ] ]` or a flat list `[ {creator1_data}, ... ]`.
- Each creator object should ideally have `name`, `service`, `id`, and optionally `favorited` (integer count for sorting in popup).
- *Example entry in the inner list:* `{"id": "12345", "name": "ArtistName", "service": "patreon", "favorited": 10}`
- **`cookies.txt` (Optional):**
- If "Use Cookie" is enabled and no direct string/file is provided, the app looks for this in its directory.
- Must be in Netscape cookie file format.
- **Application Settings:** UI preferences (like manga style, multipart preference) are saved by Qt's `QSettings` (location varies by OS). Cookie details and some filter scopes are session-based.
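Loading `creators.json` while accepting both of the shapes described above (the nested `[[{...}, ...]]` form and the flat list) could be sketched as:

```python
import json


def load_creators(path):
    """Load creators.json, accepting either the nested [[{...}, ...]]
    form or a flat [{...}, ...] list."""
    with open(path, "r", encoding="utf-8") as f:
        data = json.load(f)
    if data and isinstance(data[0], list):
        data = data[0]  # unwrap the nested form
    return data
```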
---
### 🧠 UI and Workflow Enhancements
- **Updated Welcome Tour**
Shows on first launch, covers all new and core features.
- **Smarter Cancel/Reset**
  Cancels active tasks and resets the UI, but retains the URL and Download Directory fields.
- **Simplified Interface**
- Removed "Skip Current File" and local API server for a cleaner experience.
---
### 📁 Refined File & Duplicate Handling
- **Duplicate Filenames**
Adds numeric suffix (`file.jpg`, `file_1.jpg`, etc.).
Removed the "Duplicate" subfolder system.
- **Efficient Hash Check**
Detects and skips duplicate files within the same session (before writing to disk).
- **Better Temp File Cleanup**
  Cleans up leftover `.part` files, especially after a duplicate is skipped or an image is compressed post-download.
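The numeric-suffix and session-level hash checks described above can be combined in one sketch; `unique_name` is a hypothetical helper, not the app's code:

```python
import hashlib
import os


def unique_name(directory, filename, seen_hashes, data):
    """Skip files whose MD5 was already seen this session; otherwise
    append _1, _2, ... until the filename is free in `directory`."""
    digest = hashlib.md5(data).hexdigest()
    if digest in seen_hashes:
        return None  # identical content already saved this session
    seen_hashes.add(digest)
    stem, ext = os.path.splitext(filename)
    candidate, n = filename, 1
    while os.path.exists(os.path.join(directory, candidate)):
        candidate = f"{stem}_{n}{ext}"
        n += 1
    return candidate
```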
---
## 🧩 Core Features
### 🎛 Simple GUI
- Built with **PyQt5**
- Dark theme, responsive layout
### 📥 Supports Post and Creator URLs
- Download a single post or an entire creator's feed.
### 🔢 Page Range Support
- Choose page range when downloading creator feeds (except in Manga Mode).
---
### 🗂 Smart Folder System
- Organize by character names, post titles, or custom labels.
- Option to create a separate folder for each post.
- Uses `Known.txt` for fallback names.
---
### 📚 Known Names Manager
- Add/edit/delete known characters/shows
- Saves entries in `Known.txt` for automatic folder naming.
---
### 🔍 Advanced Filtering
- **Filter by Character(s)**
Scope: `Files`, `Post Titles`, `Both`, or `Post Comments (Beta)`
- **Skip with Words**
Skip posts or files based on keywords. Toggle scope.
- **Media Type Filters**
Choose: `All`, `Images/GIFs`, `Videos`, `📦 Only Archives (.zip/.rar)`
- **🔗 Only Links Mode**
Extracts links from post descriptions.
- **Skip Archives**
Ignore `.zip`/`.rar` unless in "Only Archives" mode.
---
### 📖 Manga/Comic Mode (Creator URLs Only)
- Downloads posts oldest-to-newest.
- **Filename Style Toggle:**
  - `Post Title` (default): Names the first file in each post after the post title.
  - `Original File`: Keeps the original file names.
- Uses the manga/series title for filtering and folder naming.
---
### 🖼️ Image Compression
- Converts large images to **WebP** if it significantly reduces size.
- Requires `Pillow` library.
---
### 🖼 Download Thumbnails Only
- Option to fetch only small preview images.
---
### ⚙️ Multithreaded Downloads
- Adjustable threads for:
- Multiple post processing (creator feeds)
- File-level concurrency (within a post)
---
### ⏯ Download Controls
- Start and cancel active operations.
---
### 🌙 Dark Mode Interface
- Modern, dark-themed GUI for comfort and clarity.
---
## 🔧 Backend Enhancements
### ♻️ Retry Logic
- Retries failed file and chunk downloads before skipping.
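A minimal retry wrapper in the spirit of this logic might look like the following; the fixed delay and function name are assumptions for illustration:

```python
import time


def with_retries(fn, attempts=3, delay=1.0):
    """Call fn(), retrying on exception with a fixed delay, and
    re-raise the last error once all attempts are exhausted."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```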
---
### 🧬 Session-wide Deduplication
- Uses **MD5 hashes** to avoid saving identical files during a session.
---
### 🧹 Smart Naming & Cleanup
- Cleans special characters in names.
- Applies numeric suffixes on collision.
- Removes specified unwanted words.
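The cleanup steps above might be sketched as follows; the exact character set stripped and the helper name are assumptions:

```python
import re


def clean_filename(name, remove_words=()):
    """Drop the user's unwanted words, strip characters that are invalid
    on most filesystems, and collapse leftover whitespace."""
    for word in remove_words:
        name = re.sub(re.escape(word), "", name, flags=re.IGNORECASE)
    name = re.sub(r'[\\/:*?"<>|]', "", name)
    return re.sub(r"\s+", " ", name).strip()
```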
---
### 📋 Efficient Logging
- Toggle verbosity: `Basic` (important) or `Full` (everything).
- Separate panel for extracted external links.
- Real-time feedback with clear statuses.
---
## 📦 Installation
### Requirements
- Python 3.6 or higher
- pip (Python package installer)
### Install Dependencies
Open your terminal or command prompt and run:
```bash
pip install PyQt5 requests Pillow
```
### Run the Application
```bash
python main.py
```
### Optional Setup
- Place your `cookies.txt` in the root directory (if using cookies).
- Prepare your `Known.txt` and `creators.json` in the same directory for advanced filtering and selection features.
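If cookies are used, the resolution order described in the feature guide (pasted string first, then a browsed file, then `cookies.txt` in the app directory) could be sketched like this; `load_cookies` is a hypothetical helper, built on the standard library's Netscape-format `MozillaCookieJar`:

```python
import os
from http.cookiejar import MozillaCookieJar


def load_cookies(cookie_string="", cookie_file="", app_dir="."):
    """Return a dict of cookie name -> value, or {} if none are found."""
    if cookie_string.strip():
        # Pasted "name1=value1; name2=value2" string takes precedence
        pairs = (p.split("=", 1) for p in cookie_string.split(";") if "=" in p)
        return {k.strip(): v.strip() for k, v in pairs}
    path = cookie_file or os.path.join(app_dir, "cookies.txt")
    if os.path.isfile(path):
        jar = MozillaCookieJar(path)  # Netscape cookie file format
        jar.load(ignore_discard=True, ignore_expires=True)
        return {c.name: c.value for c in jar}
    return {}
```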
---
## Tips & Best Practices
- For best results, use **Favorite Mode** if you're a logged-in user with bookmarked artists/posts.
- Use **Filter by Character(s)** and keep your `Known.txt` updated to reduce clutter and organize downloads.
- Use the **multi-part toggle** for large video/audio files but disable it when downloading large batches of small images to reduce overhead.
- Adjust **thread count** based on your internet speed and CPU; too many threads can result in API throttling.
---
## Troubleshooting
- **Downloads not starting?**
- Ensure the download location is set.
- Check your filters aren't too strict.
- If in Favorite Mode, make sure cookie is set and valid.
- **Missing characters/folders?**
- Review the Missed Character Log.
- Use the "Scan Content for Images" option if image links are embedded in descriptions.
- **App crashes or logs errors?**
- Check the console/log area for stack traces.
- Run from terminal to capture more error output.
- Ensure `Known.txt` and `creators.json` are valid.
---
## Contribution
Feel free to fork this repo and submit pull requests for bug fixes, new features, or UI improvements!
---
## License
This project is released under the MIT License.