* feat(telemetry/meter): added base setup for telemetry meter signal
* feat(telemetry/meter): added metadata setup for meter
* feat(telemetry/meter): fix stmnt builder tests
* feat(telemetry/meter): test query range API fixes
* feat(telemetry/meter): improve error messages
* feat(telemetrymeter): step interval improvements
* feat(telemetrymeter): metadata changes and aggregate attribute changes
* feat(telemetrymeter): deprecate the signal and use aggregation instead
* feat(telemetrymeter): cleanup the types
* feat(telemetrymeter): introduce source for query
* feat(telemetrymeter): better naming for source in metadata
* feat(telemetrymeter): added quick filters for meter explorer
* feat(telemetrymeter): incorporate the new changes to stmnt builder
* feat(telemetrymeter): add the statement builder for the ranged cache queries
* feat(telemetrymeter): use meter aggregate keys
* feat(telemetrymeter): remove meter from complete bools
* feat(telemetrymeter): update the quick filters to use meter
## 📄 Summary
To reliably migrate alerts and dashboards, the migration needs access to the telemetrystore to fetch metadata, and it needs to log details during the run so that any issues can be fixed afterwards.
Key changes:
- Modified the migration to include telemetrystore and a logging provider (open to using a standard logger instead)
- To avoid the previous issues with imported dashboards failing during migration, I've ensured that imported JSON files are automatically transformed when migration is active
- Implemented detailed logic to handle dashboard migration cleanly and prevent unnecessary errors
- Separated the core migration logic from the SQL migration code, since users of the dot-metrics migration asked for shareable snippets they could run as local migrations; this modular approach lets others reuse the migration functionality easily (see the sketch below)
Known: the migration is not yet registered in this PR and it will not be merged yet, so please review with that in mind.
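A minimal sketch of what the decoupled migration logic could look like, assuming hypothetical names (`dashboardmigrate`, `MetadataFetcher`, `MigrateDashboard`) rather than the actual SigNoz APIs:

```go
// Hypothetical sketch of keeping the core dashboard migration logic outside
// the SQL migration so it can be reused in local/one-off scripts.
package dashboardmigrate

import (
	"context"
	"log/slog"
)

// MetadataFetcher abstracts the telemetrystore lookups the migration needs.
type MetadataFetcher interface {
	FetchMetricMetadata(ctx context.Context, metricName string) (map[string]string, error)
}

// MigrateDashboard transforms a dashboard's JSON representation. Because it
// depends only on a fetcher and a logger, it can be called from the registered
// SQL migration, from the dashboard import path, or from a standalone script.
func MigrateDashboard(ctx context.Context, dashboard map[string]any, fetcher MetadataFetcher, logger *slog.Logger) (map[string]any, error) {
	// Transform panels/queries here; log anything that needs a manual follow-up.
	logger.InfoContext(ctx, "migrating dashboard", "title", dashboard["title"])
	return dashboard, nil
}
```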
For requestType Trace, the timestamp in the rawRow is not used.
- Handle zero timestamp values in the rawData response
- Simplify RawRow from `map[string]*any` to `map[string]any` and eliminate the unnecessary pointer indirection (a rough sketch follows)
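A rough sketch of the simplified shape, with hypothetical type and field names rather than the exact SigNoz definitions:

```go
// Illustrative sketch: values are stored as map[string]any instead of
// map[string]*any, and a zero timestamp is treated as "not set".
package rawrows

import "time"

type RawRow struct {
	Timestamp time.Time
	Data      map[string]any // previously map[string]*any; the extra indirection added nothing
}

// timestampOrNil lets the response encoder omit zero timestamps (e.g. for
// trace request types) instead of emitting "0001-01-01T00:00:00Z".
func timestampOrNil(r RawRow) any {
	if r.Timestamp.IsZero() {
		return nil
	}
	return r.Timestamp
}
```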
* fix: prevent creation of funnels with duplicate names
- Fixed Update method to validate duplicate names before updating
- Added duplicate name validation that excludes the funnel currently being updated (sketched after this commit)
- Fixed incorrect error wrapping in Update method that was marking all errors as "already exists"
- Fixed typo in error message ("funnelr" -> "funnel")
- Added comprehensive tests for duplicate name validation in both Create and Update operations
- Used internal errors package for consistent error handling
The funnel API now properly prevents creating or updating funnels with duplicate names
within the same organization, resolving issues where duplicate funnels could be created
but would fail during retrieval.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
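A minimal sketch of the duplicate-name check, assuming a Bun-backed store; the package, model, and helper names are illustrative, not the exact SigNoz code:

```go
package tracefunnel

import (
	"context"

	"github.com/uptrace/bun"
)

// Funnel is a trimmed-down illustrative model.
type Funnel struct {
	bun.BaseModel `bun:"table:trace_funnel"`

	ID    string `bun:"id,pk"`
	OrgID string `bun:"org_id"`
	Name  string `bun:"name"`
}

// nameExists reports whether another funnel in the same org already uses the
// given name. excludeID lets Update skip the funnel being updated, so saving
// a funnel under its own name does not trip the check.
func nameExists(ctx context.Context, db bun.IDB, orgID, name, excludeID string) (bool, error) {
	q := db.NewSelect().
		Model((*Funnel)(nil)).
		Where("org_id = ?", orgID).
		Where("name = ?", name)
	if excludeID != "" {
		q = q.Where("id != ?", excludeID)
	}
	return q.Exists(ctx)
}
```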
* fix: return the error instance
* fix: implement database transactions for funnel creation and updates
- Wrap check-and-create operations in Bun transactions to prevent race conditions
- Apply transaction pattern to both Create() and Update() methods
- Ensures atomic operations when checking for duplicate funnel names
- Prevents concurrent requests from creating duplicate funnels
- Follows existing transaction patterns from user store implementation
Addresses PR feedback on race-condition prevention (see the sketch below)
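A sketch of the transactional check-and-create, building on the hypothetical `nameExists` helper and `Funnel` model above; the store type and error message are illustrative:

```go
// Wrapping the duplicate check and the insert in one Bun transaction so two
// concurrent requests cannot both pass the check.
type store struct {
	db *bun.DB
}

func (s *store) Create(ctx context.Context, funnel *Funnel) error {
	return s.db.RunInTx(ctx, nil, func(ctx context.Context, tx bun.Tx) error {
		exists, err := nameExists(ctx, tx, funnel.OrgID, funnel.Name, "")
		if err != nil {
			return err
		}
		if exists {
			return errors.New("a funnel with this name already exists in the organization")
		}
		_, err = tx.NewInsert().Model(funnel).Exec(ctx)
		return err
	})
}
```

The same pattern applies to Update, with the funnel's own ID passed as excludeID.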
---------
Co-authored-by: Ankit Nayan <ankitnayan@Ankits-MacBook-Pro.local>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Shaheer Kochai <ashaheerki@gmail.com>