Commit Graph

17 Commits

d9ece58e12 fix: search race condition + brand detection + contacts + reassess
- loadDomains(): add generation counter so stale auto-advance fetches
  cannot overwrite a newer user-triggered search result; snapshot filter
  state before the first await so URL reflects what was requested; add
  HTTP status check so backend errors surface as toasts rather than
  silent empty results; auto-advance now calls loadDomains() without
  await so the counter increments correctly per page advance
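  The guard can be sketched like this (a simplified asyncio stand-in for the
  frontend code; class and method names are hypothetical):

```python
import asyncio

class DomainLoader:
    """Generation-counter guard: a stale fetch must never overwrite a newer one."""
    def __init__(self):
        self.generation = 0
        self.results = None

    async def load_domains(self, query, delay):
        self.generation += 1
        gen = self.generation          # snapshot before the first await
        await asyncio.sleep(delay)     # stands in for the HTTP fetch
        if gen != self.generation:     # a newer search started meanwhile
            return                     # drop the stale response
        self.results = query

async def demo():
    loader = DomainLoader()
    # slow auto-advance fetch starts first, fast user search second
    await asyncio.gather(loader.load_domains("auto-advance", 0.2),
                         loader.load_domains("user-search", 0.05))
    return loader.results

print(asyncio.run(demo()))  # user-search — the slow stale fetch is discarded
```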

- beauty_ai: word-boundary regex for short brands (≤5 chars) to stop
  'ref' matching 'reference'/'refresh'/'prefer' etc.; merge phones,
  whatsapp and social_links from site_analyzer directly into result
  (more reliable than AI extraction); add contact_whatsapp and
  contact_social fields to AI JSON schema
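  The word-boundary rule, roughly (a sketch; the real beauty_ai matcher and
  its brand list differ):

```python
import re

def brand_matches(brand: str, text: str) -> bool:
    """Word-boundary match for short brands; plain substring for longer ones."""
    if len(brand) <= 5:
        return re.search(r"\b" + re.escape(brand) + r"\b", text, re.I) is not None
    return brand.lower() in text.lower()

print(brand_matches("ref", "see our reference and refresh ranges"))  # False
print(brand_matches("ref", "distribuidor oficial de REF"))           # True
```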

- db: add requeue_beauty() for re-assessing already-assessed domains

- beauty_main: /api/beauty/reassess/batch endpoint using requeue_beauty

- index.html: Re-assess Selected bulk button, per-row ↺ button in
  Browse and Pipeline, WhatsApp + social links in Pipeline contact panel

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-07 11:06:58 +02:00
788252e14f feat: assessed filter, 5000 per-page limit, auto-advance on empty Not-checked page
Assessed/Not assessed filter:
- 'yes' → beauty_lead_quality IS NOT NULL (has been B2B assessed)
- 'no'  → beauty_lead_quality IS NULL (never assessed)
- wired through /api/enriched → get_enriched(beauty_assessed=)
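  The filter boils down to an IS NULL / IS NOT NULL predicate; a minimal
  in-memory sketch (real get_enriched() takes many more params):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE enriched_domains (domain TEXT, beauty_lead_quality TEXT)")
con.executemany("INSERT INTO enriched_domains VALUES (?, ?)",
                [("a.es", "HOT"), ("b.es", None), ("c.es", "COLD")])

def get_enriched(beauty_assessed=None):
    sql = "SELECT domain FROM enriched_domains"
    if beauty_assessed == "yes":
        sql += " WHERE beauty_lead_quality IS NOT NULL"   # has been B2B assessed
    elif beauty_assessed == "no":
        sql += " WHERE beauty_lead_quality IS NULL"       # never assessed
    return [r[0] for r in con.execute(sql)]

print(get_enriched("yes"))  # ['a.es', 'c.es']
print(get_enriched("no"))   # ['b.es']
```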

Per-page limit:
- options: 100 / 500 / 1000 / 2000 / 5000
- backend cap raised from le=1000 to le=5000

Auto-advance on empty Not-checked page:
- after bulk validate/prescreen, loadDomains reloads the same DuckDB page
- if every domain on that page is now processed (client-side filter → 0 rows)
  but the page still returned results, automatically increment page and retry
- prevents "No domains found" after successfully processing a batch
- capped at page 500 to avoid infinite loop
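  The loop, schematically (function names hypothetical; the real code lives
  in loadDomains()):

```python
def auto_advance(fetch_page, client_filter, start_page=1, max_page=500):
    """Skip pages whose every row is already processed, up to a hard cap."""
    page = start_page
    while page <= max_page:              # mirrors the page-500 guard
        rows = fetch_page(page)
        if not rows:                     # backend returned nothing: past the end
            return page, []
        visible = [r for r in rows if client_filter(r)]
        if visible:                      # something left to show on this page
            return page, visible
        page += 1                        # whole page already processed: advance
    return page, []

# pages 1-2 fully processed, page 3 still has a fresh domain
pages = {1: ["done"] * 3, 2: ["done"] * 3, 3: ["done", "fresh"]}
page, rows = auto_advance(lambda p: pages.get(p, []),
                          lambda r: r == "fresh")
print(page, rows)  # 3 ['fresh']
```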

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 09:19:51 +02:00
2f0959b8e8 fix: smart routing in Browse — enrichment filters use /api/enriched, discovery uses /api/domains
Root cause: loadDomains() always hit /api/domains (DuckDB 72M rows) and filtered
niche/site_type/prescreen_status client-side on a random page of 100 domains —
virtually none had been classified, so Live+Beauty+Ecommerce always returned 0.

- loadDomains() now routes to /api/enriched when any enrichment filter is active
  (prescreen_status, niche, site_type, country) — all filters are server-side SQLite
- Falls back to /api/domains only when no enrichment filters are set (discovery mode)
- alpha_only and no_sld supported in both modes:
  - DuckDB: existing regex support
  - SQLite: LIKE patterns (no hyphens/digits) + dot-count (no SLD)
- Add alpha_only/no_sld params to /api/enriched endpoint and get_enriched()
- Fix stale d.classified reference in prescreenOne toast
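  One way the SQLite-side alpha_only/no_sld predicates can be expressed (a
  sketch; the exact patterns in get_enriched() may differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE enriched_domains (domain TEXT)")
con.executemany("INSERT INTO enriched_domains VALUES (?)",
                [("flores.es",), ("my-shop.es",), ("web24.es",), ("tienda.com.es",)])

# alpha_only: no hyphens or digits; no_sld: exactly one dot (drops com.es etc.)
rows = con.execute("""
    SELECT domain FROM enriched_domains
    WHERE domain NOT LIKE '%-%'
      AND domain NOT GLOB '*[0-9]*'
      AND length(domain) - length(replace(domain, '.', '')) = 1
""").fetchall()
print(rows)  # [('flores.es',)]
```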

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-06 08:53:54 +02:00
90f128e04e fix: extend keyword search to page_snippet and beauty_assessment
- add page_snippet TEXT column migration
- save prescreener body snippet (600 chars) to page_snippet on upsert
- keyword filter now searches: domain, page_title, page_snippet, beauty_assessment JSON
  so "belleza" matches sites whose content/assessment mentions the word even if
  the domain name or title doesn't

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-05 07:29:20 +02:00
ad03107f0d fix: beauty frontend server-side filtering and bulk actions
- add keyword and tld params to get_enriched() in db.py (LIKE on domain + page_title)
- forward keyword/tld through /api/enriched in beauty_main.py
- rewrite beauty/index.html loadDomains() to pass all filters server-side via URLSearchParams
- track domainsTotal from API response for correct pagination display
- add Pre-screen Selected and B2B Assess Selected bulk action buttons
- add per-row Screen and Assess buttons
- goSearch() resets to page 1 before fetching

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 19:44:34 +02:00
a7dd7927b9 feat: BeautyLeads B2B cosmetics frontend on port 7788
New service (app/beauty_main.py) sharing the same /data volume:
- Separate FastAPI app running on port 7788
- beauty_ai.py: brand universe scan (~650 brands), portfolio match
  detection against OUR_BRANDS, Gemini B2B assessment prompt in Spanish
  returning quality/categories/dist_matches/outreach_email
- beauty_queue table + beauty_lead_quality/beauty_assessment columns
  in enriched_domains (with migrations)
- Endpoints: /api/beauty/assess/batch, /api/beauty/leads,
  /api/beauty/status, /api/beauty/export, /api/beauty/reset
- Static frontend: Browse (beauty/ecommerce pre-filtered, no CMS/SSL/KD
  columns), Validator, B2B Pipeline (brand chips, expandable outreach),
  Pre-screen, Export CSV
- docker-compose: second 'beauty' service with shared data volume
- Dockerfile: expose 7788 alongside 6677

Also: add 'error' prescreen_status handling + UI (orange stat box,
filter option) for 4xx/5xx HTTP responses

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-04 19:31:10 +02:00
db95876db2 fix: SQLite database locked errors + add error status for 4xx/5xx
SQLite locking:
- Enable WAL journal mode in init_db (readers don't block writers)
- Set busy_timeout=30000ms in init_db
- Add timeout=30 to every aiosqlite.connect() across db.py, validator.py,
  enricher.py, main.py so connections wait up to 30s instead of crashing
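  The same settings in a synchronous sqlite3 sketch (aiosqlite takes the
  identical pragmas and timeout argument):

```python
import sqlite3

def init_db(path=":memory:"):
    """WAL + busy_timeout, as described above."""
    con = sqlite3.connect(path, timeout=30)    # wait up to 30 s on a locked DB
    con.execute("PRAGMA journal_mode=WAL")     # readers don't block writers
    con.execute("PRAGMA busy_timeout=30000")   # 30 s at the SQLite level too
    return con

con = init_db()
print(con.execute("PRAGMA busy_timeout").fetchone())  # (30000,)
```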

Error status:
- 4xx/5xx HTTP responses are now prescreen_status='error' (server alive
  but broken/blocking) instead of 'live'
- Added 'error' counter to validator stats and orange Error stat box in UI
- Added ps-error CSS class (orange) and filter option in Browse tab

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 07:10:45 +02:00
8f387cada2 feat: bulk validator tab + status/niche/type browse filters
- New app/validator.py: background HTTP checker for entire dataset
  - 50 concurrent checks, skips already-validated domains
  - Extracts prescreen_status, server, IP, load_time_ms
  - start/stop/status API at /api/validator/start|stop|status

- New dedicated "Validator 🔬" tab with stats grid, TLD filter,
  Start/Stop controls, live progress indicator

- Browse tab: "Live" column replaced with "Status" dot (color-coded
  ● from prescreen_status, falls back to is_live)
- Browse tab: new Status / Niche / Type filter dropdowns

- db.py: added ip TEXT + load_time_ms INTEGER columns + migrations;
  get_enriched() supports prescreen_status/niche/site_type filters

- main.py: /api/enriched extended with prescreen_status/niche/site_type

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-18 08:27:24 +02:00
a0c9db1ef2 fix: DeepSeek niche/type not saving to DB
Two bugs:
1. _parse_classify_output stripped <think> block before searching for JSON.
   DeepSeek-R1 often puts the JSON array inside the think block (especially
   when it "decides" mid-reasoning), so stripping it first destroyed the data.
   Fix: search full output first, then inside <think>, then stripped — three
   fallback strategies with info logging at each step.
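  The fallback chain, roughly (a simplified sketch of _parse_classify_output;
  logging omitted):

```python
import json
import re

def parse_classify_output(raw: str):
    """Try full output first, then inside <think>, then with <think> stripped."""
    candidates = [raw]
    think = re.search(r"<think>(.*?)</think>", raw, re.S)
    if think:
        candidates.append(think.group(1))              # JSON hidden in reasoning
        candidates.append(raw.replace(think.group(0), ""))  # stripped last
    for text in candidates:
        m = re.search(r"\[.*\]", text, re.S)
        if not m:
            continue
        try:
            return json.loads(m.group(0))
        except json.JSONDecodeError:
            continue
    return None

out = '<think>Deciding... [{"domain": "a.es", "niche": "beauty"}]</think> done'
print(parse_classify_output(out))
```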

2. Phase 2 save used bare UPDATE WHERE domain=? which silently does nothing
   if the domain row doesn't exist yet in enriched_domains.
   Fix: replace with INSERT ... ON CONFLICT DO UPDATE (true upsert).
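  The upsert shape (a minimal sketch; the real statement writes more columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE enriched_domains (domain TEXT PRIMARY KEY, niche TEXT)")

def save_classification(domain, niche):
    # true upsert: inserts the row if missing, updates it otherwise —
    # a bare UPDATE would silently do nothing for a missing row
    con.execute("""
        INSERT INTO enriched_domains (domain, niche) VALUES (?, ?)
        ON CONFLICT(domain) DO UPDATE SET niche = excluded.niche
    """, (domain, niche))

save_classification("a.es", "beauty")   # row did not exist yet
save_classification("a.es", "fashion")  # row exists: value is updated
print(con.execute("SELECT niche FROM enriched_domains WHERE domain='a.es'").fetchone())
```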

Also adds logger.info lines so container logs show raw DeepSeek output
and parse result count for easy debugging.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 21:35:49 +02:00
7fc510f903 feat: two-phase pre-screening with HTTP check + DeepSeek batch classification
Phase 1 (no AI credits): httpx checks every selected domain concurrently
(30 parallel) with real browser UA — detects live/dead/parked/redirect.
Parked: keyword scan in body/title + known parking host redirect check.
Results saved to DB immediately; dead/parked never reach DeepSeek.
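  The parked-keyword scan, schematically (signal strings here are
  illustrative, not the actual lists in prescreener.py):

```python
PARKING_SIGNALS = ["domain is for sale", "parked free", "sedoparking",
                   "buy this domain"]

def looks_parked(title: str, body: str) -> bool:
    """Keyword scan over title + body, as in Phase 1."""
    text = f"{title} {body}".lower()
    return any(sig in text for sig in PARKING_SIGNALS)

print(looks_parked("example.es", "This domain is for sale. Interested?"))  # True
print(looks_parked("Peluqueria Ana", "Reserva tu cita online"))            # False
```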

Phase 2 (single DeepSeek call): all live-site titles + snippets bundled
into ONE Replicate/DeepSeek-R1 request → returns niche + type for every
domain in batch (up to 80 per call, parallelised if more).

- app/prescreener.py (new): _check_one(), prescreen_domains(),
  classify_with_deepseek(), parking signal lists, same-domain redirect logic
- app/db.py: prescreen_status/niche/site_type/prescreen_at columns +
  migrations; save_prescreen_results() upsert helper
- app/main.py: POST /api/prescreen/batch endpoint
- app/static/index.html:
  - 🔍 Pre-screen button (disabled while running, shows spinner)
  - Niche + Type columns in Browse and Leads tables (.pni/.pty pills)
  - Prescreen status colour dot (●) when niche not yet set
  - prescreening state flag; result toast shows per-status counts

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 21:22:45 +02:00
63f961dc80 feat: add Leads tab and Hide Assessed filter in Browse
- db.py: get_enriched() accepts ai_only + lead_quality params
- main.py: /api/enriched exposes ai_only + lead_quality query params;
  new /api/export/leads endpoint produces CSV with contacts + pitch
- index.html:
  - New "Leads 🤖" tab shows all AI-assessed domains with contacts
    (quality/country/limit filters, per-row 📋 copy email, 🔍 modal,
    CSV export, pagination, auto-refreshes every 3s)
  - Browse: "Hide assessed" checkbox filters out already-processed
    domains so you can focus on fresh targets
  - Poll cycle refreshes Leads tab when active

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 18:57:15 +02:00
22eae3f9b7 feat: add EN/ES/RO language selector for AI pitch generation
- db.py: add `language` column to ai_queue; migration; queue_ai() accepts
  language param and re-queues with ON CONFLICT UPDATE so changing language works
- main.py: batch and single assess endpoints accept `language` from request body
- enricher.py: ai_worker_loop reads language column, passes to _assess_one()
- replicate_ai.py: assess_domain() and _build_prompt() accept language param;
  OUTPUT LANGUAGE section injected into prompt so Gemini writes pitch/email in
  the requested language (EN/ES/RO)
- index.html: flag dropdown (🇪🇸/🇬🇧/🇷🇴) next to AI Assess button; aiLang
  state default ES; language sent in all batch assessment requests

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 08:39:27 +02:00
dad910b6b0 feat: 5 fixes — dead site scoring, Kit Digital precision, social icons, GMB detection, social/GMB weighting
1. scorer: dead sites capped at 5 (was scoring HOT from SSL/CMS signals)
2. Kit Digital: require explicit kit-digital/agente-digitalizador signals;
   generic EU logo patterns (fondos-europeos, logo-ue, cofinanciado) removed.
   Gemini kit_digital_confirmed now overwrites heuristic in DB.
3. Browse table: social links replaced with compact coloured icon badges
   (fb/ig/in/x/tt/yt) linked to the profile URLs
4. site_analyzer: added has_gmb / gmb_url detection (Maps embed, Place links,
   LocalBusiness schema); fed to Gemini prompt
5. scorer: +5 no-social, +3 reachable contact; Gemini prompt includes GMB and
   social media management as sellable services; modal shows GMB/social status

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 07:21:02 +02:00
5ad8259c75 feat: deep site analysis engine + fix AI assess for any domain
site_analyzer.py (new):
- Fresh scrape with timing, page size, server, CMS detection
- Lorem ipsum detection (16 phrases incl. user's example)
- Placeholder content detection (hello world, sample page, etc.)
- Analytics: GA4, GTM, Facebook Pixel, Hotjar, Clarity
- Webmaster: Google Search Console, Bing, Yandex verification tags
- sitemap.xml and robots.txt check + Googlebot block detection
- Mobile viewport check, word count, image/script count
- Full contact extraction: emails, phones, WhatsApp, social links
- Kit Digital signal detection

AI worker fix:
- No longer requires pre-enrichment — works on ANY selected domain
- Does fresh site_analyzer scrape then calls Gemini with full context
- Stores site_analysis JSON alongside AI assessment
- Upserts into enriched_domains even if domain was never enriched

Gemini prompt now includes:
- Complete technical snapshot (load time, size, server, SSL)
- Full SEO signals (sitemap, robots, analytics, webmaster verified)
- Content quality (lorem ipsum matches, placeholder matches)
- Kit Digital signals
- All extracted contacts
- 500-word page text sample
- Outputs: summary, site_quality_score/10, content_issues[],
  urgency_signals[], performance_notes, seo_status,
  best_contact_channel+value, all_contacts, ES pitch,
  services_needed, outreach_notes

UI: rich AI modal with summary banner, quality grid, content issues,
    urgency signals, full contact list, technical snapshot

Fixes: correct Replicate token, ai_queue status='running' bug

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-13 17:46:01 +02:00
faca4b6e1a feat: Gemini AI assessment, Kit Digital detection, contact extraction
Kit Digital detection (enricher.py):
- Scans img src/alt/srcset for digitalizadores, kit-digital, fondos-europeos, etc.
- Scans page text for Kit Digital, Agente Digitalizador, Next Generation EU, PRTR
- Scans links for acelerapyme.es, red.es, kit-digital refs
- +20 score bonus for Kit Digital confirmed sites (proven IT buyers)

Contact extraction (enricher.py):
- Pulls mailto/tel/wa.me links from HTML
- Extracts email addresses via regex, phone numbers (ES format)
- Detects social media links (FB, IG, LinkedIn, Twitter, TikTok)
- Stored as JSON in contact_info column
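  The link-based part of the extraction, roughly (regexes here are
  simplified sketches, not the ones in enricher.py):

```python
import re

HTML = ('<a href="mailto:info@clinica.es">Email</a> '
        '<a href="tel:+34911222333">Call</a> '
        '<a href="https://wa.me/34600111222">WhatsApp</a>')

emails = re.findall(r"mailto:([\w.+-]+@[\w.-]+)", HTML)   # mailto: links
phones = re.findall(r"tel:(\+?\d+)", HTML)                # tel: links
whatsapp = re.findall(r"wa\.me/(\d+)", HTML)              # wa.me links

print(emails, phones, whatsapp)
```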

Gemini via Replicate (replicate_ai.py):
- Assesses lead quality (HOT/WARM/COLD), Kit Digital confirmation
- Identifies best contact channel + actual value (email/phone/WA)
- Writes Spanish cold-call/email pitch angle
- Lists services likely needed + outreach notes
- 3 concurrent requests, 90s timeout, JSON output parsing

DB: migration adds kit_digital, kit_digital_signals, contact_info,
    ai_assessment, ai_lead_quality, ai_pitch, ai_contact_channel/value,
    ai_queue table

UI: Kit Digital 🏅 badge, AI quality pill (clickable modal with full
    assessment), contact chips (email/phone/WA/social), AI Assess button,
    Kit Digital only filter, AI queue status in enrichment tab

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-13 17:25:06 +02:00
7acff12242 feat: persistent DuckDB index, new filters, pagination fix, enrich UX
- Build /data/domains.duckdb on first run (tld+parts columns + ART index)
  → TLD filter goes from ~60s full scan to <100ms index lookup
  → System still works (slower) while index builds in background
- New /api/domains params: alpha_only, no_sld, keyword
  → alpha_only: domains with only letters (no hyphens/numbers)
  → no_sld: parts=2, excludes com.es / net.es patterns
  → keyword: LIKE '%term%' niche search
- /api/domains and /api/enriched now return total count for pagination
- Pagination: shows total matches, page X of Y, Next disabled at last page
- Enrich button: toast notifications instead of alert(), error handling
- Select all on page button, clear selection button
- Stats/TLD breakdown cached after first load (no repeat full scan)
- Header shows index build status (building → ready)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-13 17:00:08 +02:00
b2e7a2f2db feat: initial Dockerized domain intelligence dashboard
- FastAPI backend with DuckDB pushdown queries on 72M parquet
- Async enrichment worker: HTTP, SSL, DNS MX, CMS fingerprint, ip-api.com
- Resumable parquet download with HTTP Range support
- Lead scoring engine (max 100 pts, target countries ES,GB,DE,FR,RO,PT,AD,IT)
- Single-file Alpine.js + Chart.js dashboard on port 6677
- SQLite enrichment DB with job queue and scores tables
- Dockerized with persistent /data volume

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-13 16:22:30 +02:00