fix: AI worker crash-proof + GDPR/hosting/accessibility analysis

AI worker fixes (root cause of "nothing reaches Replicate"):
- Worker task died silently — no exception handler around while loop
- Added try/except around entire loop body with exc_info logging
- Added watchdog task that restarts dead workers every 10 seconds
- ensure_workers_alive() called on every /api/ai/assess/batch POST
- _assess_one() is now a top-level function (not closure) — avoids
  subtle scoping bugs with async inner functions in while loops
- /api/ai/debug endpoint: shows worker alive status, task exception,
  last 10 queue entries — browse to /api/ai/debug to diagnose
- /api/ai/worker/restart endpoint + UI button
- "Restart AI worker" button + "Debug AI queue" link in enrichment tab

site_analyzer.py — new signals:
- IP resolution + ip-api.com for ASN, org, ISP, host country
- EU hosting detection (27 EU + EEA + adequacy countries)
- GDPR: detects Cookiebot, OneTrust, CookiePro, Osano, Iubenda,
  Borlabs, CookieYes, Complianz, Usercentrics + text signals
- Privacy policy and GDPR text presence
- Accessibility: html lang missing, images without alt count,
  skip nav link, empty links, inputs without labels

Gemini prompt additions:
- Hosting section: IP, ASN, org/ISP, EU vs non-EU flag
- GDPR section: cookie tool, notice, privacy policy
- Accessibility section: all quick-scan results
- New output fields: hosting_notes, gdpr_compliance,
  accessibility_issues[]
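A rough sketch of how the new prompt sections and output fields might be assembled — the section wording, key names inside `signals`, and the exact JSON instruction are illustrative, not the actual Gemini prompt:

```python
def build_prompt_sections(signals: dict) -> str:
    """Render the hosting/GDPR/accessibility scan results as prompt sections."""
    hosting = signals.get("hosting", {})
    gdpr = signals.get("gdpr", {})
    a11y = signals.get("accessibility", {})
    return "\n".join([
        "## Hosting",
        f"IP: {hosting.get('ip')}  ASN: {hosting.get('asn')}",
        f"Org/ISP: {hosting.get('org')} / {hosting.get('isp')}",
        f"EU-hosted: {hosting.get('eu_hosted')}",
        "## GDPR",
        f"Cookie tool: {gdpr.get('cookie_tool')}",
        f"Cookie notice: {gdpr.get('has_notice')}  "
        f"Privacy policy: {gdpr.get('has_privacy_policy')}",
        "## Accessibility",
        f"html lang missing: {a11y.get('lang_missing')}  "
        f"images without alt: {a11y.get('imgs_without_alt')}",
        "",
        "Return JSON with keys: hosting_notes (string), "
        "gdpr_compliance (string), accessibility_issues (list of strings).",
    ])
```

Listing the expected output keys explicitly in the prompt is what lets the response be parsed straight into the new `hosting_notes`, `gdpr_compliance`, and `accessibility_issues[]` fields.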

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-13 18:01:34 +02:00
parent 5ad8259c75
commit 60c9b495ae
10 changed files with 409 additions and 205 deletions


@@ -20,7 +20,7 @@ from app.db import (
     queue_domains, get_queue_status, build_duckdb_index, index_status,
     queue_ai, get_ai_queue_status, save_ai_assessment,
 )
-from app.enricher import start_worker, pause_worker, resume_worker, is_running
+from app.enricher import start_worker, pause_worker, resume_worker, is_running, ensure_workers_alive
 from app.scorer import run_scoring
 logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
@@ -61,13 +61,20 @@ async def download_parquet():
     logger.info("Parquet download complete")
+
+async def _watchdog():
+    """Every 10 seconds, restart the workers if they have died."""
+    while True:
+        await asyncio.sleep(10)
+        ensure_workers_alive()
+
 @asynccontextmanager
 async def lifespan(app: FastAPI):
     await download_parquet()
     await init_db()
     # Build DuckDB index in background — queries still work (slower) while building
     asyncio.create_task(build_duckdb_index())
     start_worker()
+    asyncio.create_task(_watchdog())
     logger.info("DomGod ready on port 6677")
     yield
@@ -167,9 +174,43 @@ async def ai_assess_batch(body: dict):
     if not domains_list:
         return JSONResponse({"error": "no domains provided"}, status_code=400)
     await queue_ai(domains_list)
+    ensure_workers_alive()  # ensure the AI worker is alive whenever jobs are queued
     return {"queued": len(domains_list)}
+
+@app.post("/api/ai/worker/restart")
+async def ai_worker_restart():
+    ensure_workers_alive()
+    return {"status": "restarted"}
+
+@app.get("/api/ai/debug")
+async def ai_debug():
+    """Return worker state plus the last 10 queue entries for troubleshooting."""
+    from app.enricher import _ai_worker_task
+    task_alive = _ai_worker_task is not None and not _ai_worker_task.done()
+    task_exc = None
+    if _ai_worker_task and _ai_worker_task.done() and not _ai_worker_task.cancelled():
+        try:
+            task_exc = str(_ai_worker_task.exception())
+        except Exception:
+            pass
+    async with aiosqlite.connect(SQLITE_PATH) as db:
+        db.row_factory = aiosqlite.Row
+        async with db.execute(
+            "SELECT domain, status, created_at, completed_at, error FROM ai_queue ORDER BY created_at DESC LIMIT 10"
+        ) as cur:
+            recent = [dict(r) async for r in cur]
+    return {
+        "ai_worker_alive": task_alive,
+        "ai_worker_exception": task_exc,
+        "recent_queue": recent,
+        "queue_status": await get_ai_queue_status(),
+    }
+
 @app.get("/api/ai/status")
 async def ai_status():
     return await get_ai_queue_status()