Assessed/Not assessed filter:
- 'yes' → beauty_lead_quality IS NOT NULL (has been B2B assessed)
- 'no' → beauty_lead_quality IS NULL (never assessed)
- wired through /api/enriched → get_enriched(beauty_assessed=)
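A minimal sketch of how the flag could map to SQL inside get_enriched() (the helper name and exact query shape are assumptions):

    # Hypothetical helper showing how the flag translates to a WHERE clause.
    def assessed_clause(beauty_assessed):
        if beauty_assessed == "yes":
            return "beauty_lead_quality IS NOT NULL"  # has been B2B assessed
        if beauty_assessed == "no":
            return "beauty_lead_quality IS NULL"      # never assessed
        return "1=1"                                  # filter not set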
Per-page limit:
- options: 100 / 500 / 1000 / 2000 / 5000
- backend cap raised from le=1000 to le=5000
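On the FastAPI side the cap is a one-line change to the limit parameter's validator; a sketch (handler name and defaults assumed):

    from fastapi import FastAPI, Query

    app = FastAPI()

    @app.get("/api/domains")
    async def list_domains(limit: int = Query(100, ge=1, le=5000)):  # was le=1000
        return {"limit": limit}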
Auto-advance on empty Not-checked page:
- after bulk validate/prescreen, loadDomains reloads the same DuckDB page
- if every domain on that page is now processed (client-side filter → 0 rows)
but the page still returned results, automatically increment page and retry
- prevents "No domains found" after successfully processing a batch
- capped at page 500 to avoid infinite loop
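The real logic lives in the Alpine.js loadDomains(); this Python sketch only mirrors the control flow, with fetch_page and client_filter as hypothetical stand-ins:

    MAX_PAGE = 500  # hard cap so the advance can never loop forever

    def fetch_page(page):
        # Stand-in for the /api/domains call; returns the raw DuckDB page.
        return []

    def client_filter(rows):
        # Stand-in for the "Not checked" filter: drop already-processed rows.
        return [r for r in rows if not r.get("prescreen_status")]

    def load_first_unprocessed(page):
        while page <= MAX_PAGE:
            rows = fetch_page(page)
            visible = client_filter(rows)
            if visible or not rows:     # found work, or genuinely out of data
                return page, visible
            page += 1                   # page full of processed rows: advance
        return page, []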
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
prescreen_status='none' was truthy, so it triggered /api/enriched, which only
finds rows already present in enriched_domains with a NULL status, missing all
the unprocessed domains that exist only in the 72M-row DuckDB index.
- exclude 'none' from the hasEnrichFilter check
- 'Not checked' now uses /api/domains (DuckDB) and filters client-side:
rows where prescreen_status is absent = never touched
- all other prescreen status values (live/dead/parked/error) still use
/api/enriched server-side
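The underlying bug is JS truthiness: the string 'none' is truthy. A Python sketch of the corrected check (helper name and sentinel set assumed):

    # 'none' is the "Not checked" sentinel, not a real enrichment filter.
    ENRICH_SENTINELS = {None, "", "none"}   # values that mean "no filter set"

    def has_enrich_filter(prescreen_status, niche, site_type, country):
        return any(v not in ENRICH_SENTINELS
                   for v in (prescreen_status, niche, site_type, country))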
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Root cause: loadDomains() always hit /api/domains (DuckDB 72M rows) and filtered
niche/site_type/prescreen_status client-side on a random page of 100 domains —
virtually none had been classified, so Live+Beauty+Ecommerce always returned 0.
- loadDomains() now routes to /api/enriched when any enrichment filter is active
(prescreen_status, niche, site_type, country) — all filters are server-side SQLite
- Falls back to /api/domains only when no enrichment filters are set (discovery mode)
- alpha_only and no_sld supported in both modes:
- DuckDB: existing regex support
- SQLite: LIKE patterns (no hyphens/digits) + dot-count (no SLD); see the sketch after this list
- Add alpha_only/no_sld params to /api/enriched endpoint and get_enriched()
- Fix stale d.classified reference in prescreenOne toast
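A sketch of the SQLite-side flag filters mentioned above (clause shapes assumed; NOT GLOB stands in for the per-digit LIKE patterns):

    def sqlite_flag_clauses(alpha_only, no_sld):
        clauses = []
        if alpha_only:
            clauses.append("domain NOT LIKE '%-%'")      # no hyphens
            clauses.append("domain NOT GLOB '*[0-9]*'")  # no digits
        if no_sld:
            # exactly one dot => no com.es / co.uk style second level
            clauses.append(
                "LENGTH(domain) - LENGTH(REPLACE(domain, '.', '')) = 1")
        return clauses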
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Previously /api/prescreen/batch blocked for 4-10 minutes waiting for Replicate/
DeepSeek, causing browser connection timeouts, so zero results were ever saved.
- Phase 1 (HTTP check) runs synchronously and saves results immediately
- Phase 2 (DeepSeek classify) fires as asyncio.create_task and runs in background
- Response is returned to client as soon as phase 1 completes (~30-90s)
- Frontend toast shows "classifying N in background" so user knows niche/type
will appear shortly without waiting
- Each DeepSeek sub-batch saves independently so partial results are preserved
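The two-phase shape, roughly (the helper names here are stand-ins; only asyncio.create_task is the mechanism named above):

    import asyncio

    async def http_check_all(domains):          # stand-in for the httpx phase
        return {d: "live" for d in domains}

    async def save_prescreen_results(results):  # stand-in for the DB upsert
        pass

    async def classify_in_background(live):     # stand-in for the DeepSeek phase
        pass

    async def prescreen_batch(domains):
        results = await http_check_all(domains)   # phase 1, ~30-90s
        await save_prescreen_results(results)     # persist before responding
        live = [d for d, s in results.items() if s == "live"]
        if live:
            # phase 2 continues after the HTTP response has been sent
            asyncio.create_task(classify_in_background(live))
        return {"checked": len(domains), "classifying": len(live)}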
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- /api/validate/batch endpoint: HTTP-check only (no DeepSeek), accepts up to 500 domains
- Validate Selected bulk button: runs validate in 500-domain chunks, shows live/dead summary
- Alpha only checkbox: passes alpha_only=true to /api/domains to exclude hyphens/numbers
- No SLD checkbox: passes no_sld=true to /api/domains to skip com.es / co.uk style domains
- Both flags wired into loadDomains() and resetFilters()
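Chunking is the usual slice loop; a sketch (the 500 mirrors the endpoint cap):

    CHUNK = 500  # matches the /api/validate/batch limit

    def chunks(items, size=CHUNK):
        for i in range(0, len(items), size):
            yield items[i:i + size]

    # usage: for batch in chunks(selected): POST batch to /api/validate/batch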
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- loadDomains() now calls /api/domains (72M domain index) instead of /api/enriched
- keyword and TLD filters are server-side (DuckDB); prescreen_status, niche,
site_type, country are client-side — same pattern as main DomGod _fetch()
- "Not checked" now correctly finds domains that exist in DuckDB but have never
been pre-screened (no row in enriched_domains, so no prescreen_status)
- results info shows "X shown · Y matching · page N" to reflect DuckDB total vs
client-side-filtered visible count
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- drop standalone Pre-screen tab (textarea upload) — confusing duplicate
- bulk bar Pre-screen Selected button is the only entry point now
- add prescreening flag with loading state on button + double-click guard
- remove dead prescreenInput/prescreenRunning/prescreenResult state vars and runPrescreen()
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- add keyword and tld params to get_enriched() in db.py (LIKE on domain + page_title; sketch after this list)
- forward keyword/tld through /api/enriched in beauty_main.py
- rewrite beauty/index.html loadDomains() to pass all filters server-side via URLSearchParams
- track domainsTotal from API response for correct pagination display
- add Pre-screen Selected and B2B Assess Selected bulk action buttons
- add per-row Screen and Assess buttons
- goSearch() resets to page 1 before fetching
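Assumed shape of the new keyword/tld clauses (parameterized LIKE on domain and page_title, per the bullet above):

    def keyword_tld_clauses(keyword, tld):
        clauses, params = [], []
        if keyword:
            clauses.append("(domain LIKE ? OR page_title LIKE ?)")
            params += [f"%{keyword}%", f"%{keyword}%"]
        if tld:
            clauses.append("domain LIKE ?")
            params.append(f"%.{tld}")
        return clauses, params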
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
SQLite locking:
- Enable WAL journal mode in init_db (readers don't block writers)
- Set busy_timeout=30000ms in init_db
- Add timeout=30 to every aiosqlite.connect() across db.py, validator.py,
enricher.py, main.py so connections wait up to 30s instead of crashing
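The connection setup, as an aiosqlite sketch (DB path hypothetical; PRAGMAs as listed above):

    import aiosqlite

    DB_PATH = "/data/enrich.db"  # hypothetical path

    async def init_db():
        async with aiosqlite.connect(DB_PATH, timeout=30) as db:
            await db.execute("PRAGMA journal_mode=WAL")    # readers don't block writers
            await db.execute("PRAGMA busy_timeout=30000")  # wait up to 30s on locks
            await db.commit()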
Error status:
- 4xx/5xx HTTP responses are now prescreen_status='error' (server alive
but broken/blocking) instead of 'live'
- Added 'error' counter to validator stats and orange Error stat box in UI
- Added ps-error CSS class (orange) and filter option in Browse tab
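The mapping, in brief (a sketch; parked detection is a separate check):

    def prescreen_status_for(http_status):
        if http_status is None:
            return "dead"    # no response at all
        if http_status >= 400:
            return "error"   # server alive but broken/blocking
        return "live"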
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds rescan_dead flag that causes _filter_unvalidated to treat
previously-dead domains as needing a fresh check. Useful after
fixing the http/https detection bug.
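A sketch of the flag's effect (row shape and function internals assumed):

    def _filter_unvalidated(rows, rescan_dead=False):
        """Keep rows that still need an HTTP check."""
        def needs_check(row):
            status = row.get("prescreen_status")
            if status is None:
                return True                           # never validated
            return rescan_dead and status == "dead"   # re-check old 'dead' verdicts
        return [r for r in rows if needs_check(r)]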
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
1. prescreener.py: classify_with_deepseek now retries on 429 with
   exponential back-off (5s → 10s → 20s → 40s, up to 4 attempts); the
   same back-off also covers other transient errors (sketch after this list).
2. main.py: prescreen batches run sequentially with a 3s gap instead
of asyncio.gather (parallel). Parallel batches caused the second
batch to always hit the 429 rate limit, leaving most domains
unclassified (only the smaller last batch succeeded).
3. index.html: prescreenSelected() now clears this.domains before
calling _fetch() so Alpine re-renders the full table with the
updated niche/type values; also updates the notify hint to mention
the expected 1-2 min wait.
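Sketches of items 1 and 2 combined (call_deepseek is a stand-in for the real Replicate call; the delays and 3s gap are from the list above):

    import asyncio

    class TransientError(Exception):
        """429s and other retryable failures."""

    async def call_deepseek(batch):      # stand-in for the Replicate call
        return batch

    async def classify_with_retries(batch, attempts=4):
        delay = 5.0                      # 5s -> 10s -> 20s -> 40s
        for attempt in range(attempts):
            try:
                return await call_deepseek(batch)
            except TransientError:
                if attempt == attempts - 1:
                    raise
                await asyncio.sleep(delay)
                delay *= 2

    async def run_batches_sequentially(batches, gap=3.0):
        results = []
        for i, batch in enumerate(batches):
            if i:
                await asyncio.sleep(gap)  # 3s gap instead of asyncio.gather
            results.append(await classify_with_retries(batch))
        return results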
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Phase 1 (no AI credits): httpx checks every selected domain concurrently
(30 parallel) with real browser UA — detects live/dead/parked/redirect.
Parked: keyword scan in body/title + known parking host redirect check.
Results saved to DB immediately; dead/parked never reach DeepSeek.
Phase 2 (single DeepSeek call): all live-site titles + snippets bundled
into ONE Replicate/DeepSeek-R1 request → returns niche + type for every
domain in batch (up to 80 per call, parallelised if more).
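Phase 1's concurrency pattern, sketched with httpx and a semaphore of 30 (UA abbreviated; parked/redirect detection omitted for brevity):

    import asyncio
    import httpx

    UA = {"User-Agent": "Mozilla/5.0"}  # real browser UA string, abbreviated
    SEM = asyncio.Semaphore(30)         # at most 30 checks in flight

    async def _check_one(client, domain):
        async with SEM:
            try:
                r = await client.get(f"http://{domain}", headers=UA,
                                     follow_redirects=True, timeout=10)
                return domain, ("live" if r.status_code < 400 else "error")
            except httpx.HTTPError:
                return domain, "dead"

    async def prescreen_domains(domains):
        async with httpx.AsyncClient() as client:
            pairs = await asyncio.gather(*(_check_one(client, d) for d in domains))
        return dict(pairs)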
- app/prescreener.py (new): _check_one(), prescreen_domains(),
classify_with_deepseek(), parking signal lists, same-domain redirect logic
- app/db.py: prescreen_status/niche/site_type/prescreen_at columns +
migrations; save_prescreen_results() upsert helper
- app/main.py: POST /api/prescreen/batch endpoint
- app/static/index.html:
- 🔍 Pre-screen button (disabled while running, shows spinner)
- Niche + Type columns in Browse and Leads tables (.pni/.pty pills)
- Prescreen status colour dot (●) when niche not yet set
- prescreening state flag; result toast shows per-status counts
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- db.py: add `language` column to ai_queue; migration; queue_ai() accepts a
  language param and re-queues with ON CONFLICT UPDATE so changing the
  language works (upsert sketch after this list)
- main.py: batch and single assess endpoints accept `language` from request body
- enricher.py: ai_worker_loop reads language column, passes to _assess_one()
- replicate_ai.py: assess_domain() and _build_prompt() accept language param;
OUTPUT LANGUAGE section injected into prompt so Gemini writes pitch/email in
the requested language (EN/ES/RO)
- index.html: flag dropdown (🇪🇸/🇬🇧/🇷🇴) next to AI Assess button; aiLang
state default ES; language sent in all batch assessment requests
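The re-queue upsert from the db.py bullet, roughly (assumes a UNIQUE constraint on ai_queue.domain; exact DDL not shown in this log):

    import aiosqlite

    async def queue_ai(db, domain, language="es"):
        # Re-queuing an already-queued domain just updates its language.
        await db.execute(
            """INSERT INTO ai_queue (domain, language)
               VALUES (?, ?)
               ON CONFLICT(domain) DO UPDATE SET language = excluded.language""",
            (domain, language),
        )
        await db.commit()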
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Prompt now describes complete agency capabilities (everything web-related)
- Concrete pitch examples with business name + specific problem references
- New mandatory output fields: outreach_email (a ready-to-send 3-4 sentence
  email in Spanish) and email_subject (a specific subject line)
- HOT/WARM/COLD scoring guide based on site deficiency count
- Modal: pitch box replaced with full outreach email + subject + Copy button
- max_output_tokens raised to 6000
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
1. scorer: dead sites are now capped at 5 points (they previously scored
   HOT from SSL/CMS signals)
2. Kit Digital: require explicit kit-digital/agente-digitalizador signals;
   generic EU logo patterns (fondos-europeos, logo-ue, cofinanciado) removed.
   Gemini's kit_digital_confirmed verdict now overwrites the heuristic in the DB.
3. Browse table: social links replaced with compact coloured icon badges
(fb/ig/in/x/tt/yt) linked to the profile URLs
4. site_analyzer: added has_gmb / gmb_url detection (Maps embed, Place links,
LocalBusiness schema); fed to Gemini prompt
5. scorer: +5 no-social, +3 reachable contact; Gemini prompt includes GMB and
social media management as sellable services; modal shows GMB/social status
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Track aiSt.done across poll cycles; re-fetch current page whenever
the done count increases while on the browse tab.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
AI worker fixes (root cause of "nothing reaches Replicate"):
- Worker task died silently — no exception handler around while loop
- Added try/except around entire loop body with exc_info logging
- Added watchdog task that restarts dead workers every 10 seconds
- ensure_workers_alive() called on every /api/ai/assess/batch POST
- _assess_one() is now a top-level function (not a closure); avoids subtle
  scoping bugs with async inner functions defined inside while loops
- /api/ai/debug endpoint: shows worker alive status, task exception,
last 10 queue entries — browse to /api/ai/debug to diagnose
- /api/ai/worker/restart endpoint + UI button
- "Restart AI worker" button + "Debug AI queue" link in enrichment tab
site_analyzer.py — new signals:
- IP resolution + ip-api.com for ASN, org, ISP, host country
- EU hosting detection (27 EU + EEA + adequacy countries)
- GDPR: detects Cookiebot, OneTrust, CookiePro, Osano, Iubenda,
Borlabs, CookieYes, Complianz, Usercentrics + text signals
- Privacy policy and GDPR text presence
- Accessibility: missing html lang attribute, count of images without alt
  text, skip-nav link presence, empty links, inputs without labels
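The hosting lookup, sketched against ip-api.com's free JSON endpoint (country set abbreviated; the real list covers the 27 EU + EEA + adequacy countries):

    import socket
    import httpx

    EU_EEA = {"ES", "FR", "DE", "IT", "PT", "RO", "NL", "IE", "NO", "IS"}  # abbreviated

    def hosting_info(domain):
        ip = socket.gethostbyname(domain)
        r = httpx.get(f"http://ip-api.com/json/{ip}",
                      params={"fields": "countryCode,as,org,isp"}, timeout=10)
        data = r.json()
        return {"ip": ip, **data, "eu_hosted": data.get("countryCode") in EU_EEA}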
Gemini prompt additions:
- Hosting section: IP, ASN, org/ISP, EU vs non-EU flag
- GDPR section: cookie tool, notice, privacy policy
- Accessibility section: all quick-scan results
- New output fields: hosting_notes, gdpr_compliance,
accessibility_issues[]
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Build /data/domains.duckdb on first run (tld+parts columns + ART index);
  see the build sketch after this list
→ TLD filter goes from ~60s full scan to <100ms index lookup
→ System still works (slower) while index builds in background
- New /api/domains params: alpha_only, no_sld, keyword
→ alpha_only: domains with only letters (no hyphens/numbers)
→ no_sld: parts=2, excludes com.es / net.es patterns
→ keyword: LIKE '%term%' niche search
- /api/domains and /api/enriched now return total count for pagination
- Pagination: shows total matches, page X of Y, Next disabled at last page
- Enrich button: toast notifications instead of alert(), error handling
- Select all on page button, clear selection button
- Stats/TLD breakdown cached after first load (no repeat full scan)
- Header shows index build status (building → ready)
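The index build, sketched in DuckDB (parquet path and schema assumed; tld/parts derivations per the bullets above):

    import duckdb

    con = duckdb.connect("/data/domains.duckdb")
    con.execute("""
        CREATE TABLE IF NOT EXISTS domains AS
        SELECT domain,
               regexp_extract(domain, '([^.]+)$', 1)                 AS tld,
               length(domain) - length(replace(domain, '.', '')) + 1 AS parts
        FROM read_parquet('/data/domains.parquet')
    """)
    con.execute("CREATE INDEX IF NOT EXISTS idx_tld ON domains (tld)")  # ART index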
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- FastAPI backend with DuckDB pushdown queries on 72M parquet
- Async enrichment worker: HTTP, SSL, DNS MX, CMS fingerprint, ip-api.com
- Resumable parquet download with HTTP Range support
- Lead scoring engine (max 100 pts, target countries ES,GB,DE,FR,RO,PT,AD,IT)
- Single-file Alpine.js + Chart.js dashboard on port 6677
- SQLite enrichment DB with job queue and scores tables
- Dockerized with persistent /data volume
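Pushdown here means DuckDB evaluates filters inside the parquet scan itself; for example (path assumed):

    import duckdb

    rows = duckdb.sql("""
        SELECT domain
        FROM read_parquet('/data/domains.parquet')
        WHERE domain LIKE '%.es'   -- predicate pushed into the parquet scan
        LIMIT 100
    """).fetchall()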
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>