This introduces a new Raw Data Export module to the codebase, enabling users to export raw log data via a dedicated API endpoint. The changes include the implementation of the module and handler, integration with existing infrastructure, configuration updates, and adjustments to tests and module wiring.
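As a rough illustration of the shape of such an endpoint, here is a minimal sketch of a raw log export handler. The `rawdataexport` package name, `LogStore` interface, query parameters, and NDJSON response are all assumptions for the example, not the module's actual API:

```go
package rawdataexport

import (
	"encoding/json"
	"net/http"
	"strconv"
)

// LogStore is a hypothetical stand-in for the store the module queries;
// it is not the actual SigNoz interface.
type LogStore interface {
	// FetchRawLogs returns raw log rows for the half-open time range [start, end).
	FetchRawLogs(start, end int64) ([]map[string]any, error)
}

// ExportHandler streams raw logs as newline-delimited JSON. The query
// parameters and response format are illustrative only.
func ExportHandler(store LogStore) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		start, err1 := strconv.ParseInt(r.URL.Query().Get("start"), 10, 64)
		end, err2 := strconv.ParseInt(r.URL.Query().Get("end"), 10, 64)
		if err1 != nil || err2 != nil {
			http.Error(w, "start and end must be unix timestamps", http.StatusBadRequest)
			return
		}
		rows, err := store.FetchRawLogs(start, end)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/x-ndjson")
		enc := json.NewEncoder(w)
		for _, row := range rows {
			if err := enc.Encode(row); err != nil {
				return // client went away; stop streaming
			}
		}
	}
}
```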
* feat(access-control): embed openfga in signoz
* feat(authz): rename access control to authz
* feat(authz): fix codeowners and go mod tidy
* feat(authz): fix lint
* feat(authz): update go version and move convertor to instrumentation
* feat(authz): some more lint issues
* feat(authz): fix more lint issues
* feat(authz): make logger converter interface
* fix: don't skip resource filter in main table for OR queries
* fix: don't skip resource table
* fix: make check case insensitive
* fix: iterate over token stream
* fix: use lower and convert re2 to string in fulltext
* fix: minor error change
* fix: address comments
---------
Co-authored-by: Srikanth Chekuri <srikanth.chekuri92@gmail.com>
## 📄 Summary
To reliably migrate the alerts and dashboards, we need access to the telemetrystore to fetch some metadata, and while doing the migration we need to log details so issues can be fixed later.
Key changes:
- Modified the migration to include telemetrystore and a logging provider (open to using a standard logger instead)
- To avoid the previous issues with imported dashboards failing during migration, I've ensured that imported JSON files are automatically transformed when migration is active
- Implemented detailed logic to handle dashboard migration cleanly and prevent unnecessary errors
- Separated the core migration logic from the SQL migration code, as users of the dot metrics migration requested shareable code snippets for local migrations. This modular approach lets others easily reuse the migration functionality.
Known: the migration is not registered yet in this PR, and this will not be merged yet, so please review with that in mind.
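A minimal sketch of the dependency flow described above, assuming the core migration logic is a standalone type that receives the telemetrystore and a logger (using the standard library's `slog` here, per the note about a standard logger). `TelemetryStore`, `DashboardMigrator`, and `MetricMetadata` are hypothetical names, not the actual SigNoz types:

```go
package dashboardmigration

import (
	"context"
	"log/slog"
)

// TelemetryStore is a hypothetical stand-in for the real telemetrystore
// dependency, exposing only what this sketch needs.
type TelemetryStore interface {
	// MetricMetadata is a hypothetical metadata lookup used while
	// rewriting dashboards and alerts.
	MetricMetadata(ctx context.Context, metricName string) (map[string]string, error)
}

// DashboardMigrator holds the core migration logic, separate from the
// SQL migration wiring, so it can be reused for local one-off migrations.
type DashboardMigrator struct {
	store  TelemetryStore
	logger *slog.Logger
}

func NewDashboardMigrator(store TelemetryStore, logger *slog.Logger) *DashboardMigrator {
	return &DashboardMigrator{store: store, logger: logger}
}

// MigrateDashboard transforms a dashboard's JSON definition in place.
// The transformation rules are omitted; this only shows the dependency flow.
func (m *DashboardMigrator) MigrateDashboard(ctx context.Context, dashboard map[string]any) error {
	meta, err := m.store.MetricMetadata(ctx, "some_metric")
	if err != nil {
		// Log and continue so one bad lookup doesn't fail the whole migration.
		m.logger.WarnContext(ctx, "metadata lookup failed, skipping rewrite", "error", err)
		return nil
	}
	m.logger.InfoContext(ctx, "migrating dashboard", "metadata_keys", len(meta))
	// ... rewrite panels/queries in dashboard using meta ...
	return nil
}
```

Keeping this type free of the SQL migration wiring is what makes it shareable as a local snippet, as mentioned above.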
## 📄 Summary
- Fix the order by for the time series result
- Add the statement builder for the trace query (it was supposed to be replaced by new development, but that never happened, so we continue using the old table)
- Removed `pkg/types/telemetrytypes/virtualfield.go`; it is not currently used anywhere but was causing a circular import. It will be re-introduced later.
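For the order-by fix in the first item above, the intended outcome is presumably that each series' points come back in chronological order; a tiny illustrative sketch (not the querier's actual types):

```go
package example

import "sort"

// Point is an illustrative time series point.
type Point struct {
	Timestamp int64 // unix milliseconds
	Value     float64
}

// sortSeries orders a series' points by ascending timestamp so the
// time series result is returned in chronological order.
func sortSeries(points []Point) {
	sort.Slice(points, func(i, j int) bool {
		return points[i].Timestamp < points[j].Timestamp
	})
}
```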
Deprecate all flags:
- Use querier.config.fluxInterval in lieu of passing `--flux-interval` and `--flux-interval-for-trace-detail`
- Remove `--gateway-url`
- Use telemetrystore.clickhouse.cluster in lieu of passing `--cluster` or `--cluster-name`
- Add an `unparam` check to the linter and update some functions across the querier codebase to be compatible with it.
- Remove prometheus config from docker builds.
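A sketch of where the values mentioned above now live, assuming a YAML config whose layout mirrors the dotted paths; key spelling and example values are assumptions, not the shipped defaults:

```yaml
# replaces --flux-interval and --flux-interval-for-trace-detail
querier:
  config:
    fluxInterval: 5m          # example value

# replaces --cluster / --cluster-name
telemetrystore:
  clickhouse:
    cluster: my-cluster       # example cluster name
```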
* chore: update types
1. add partial bool to indicate if the value covers the partial interval
2. add optional unit if present (ex: duration_nano, metrics with units)
3. use pointers wherever necessary
4. add format options for request and remove redundant name in query envelope
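To make items 1–3 above concrete, a hypothetical shape for the updated value type (field names and JSON tags are illustrative, not the actual `types` package):

```go
package example

// Value is an illustrative result value carrying the new metadata.
// Pointer fields mark values that may legitimately be absent.
type Value struct {
	Timestamp int64   `json:"timestamp"`
	Value     float64 `json:"value"`

	// Partial indicates the value only covers part of the interval,
	// e.g. the first or last bucket of the requested range.
	Partial bool `json:"partial,omitempty"`

	// Unit is set only when the data carries one
	// (e.g. duration_nano, or metrics that declare a unit).
	Unit *string `json:"unit,omitempty"`
}
```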
* chore: fix some gaps
1. treat the range as [start, end) (see the sketch after this list)
2. provide the logs statement builder with the body column
3. skip the body filter on resource filter statement builder
4. remove unnecessary agg expr rewriter in metrics
5. add ability to skip full text in where clause visitor
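The half-open range in point 1 means the start is included and the end excluded; a tiny illustrative check (not the actual querier code):

```go
package example

// inRange reports whether ts falls in the half-open interval [start, end):
// start is included and end is excluded, so a point with ts == end belongs
// to the next bucket and adjacent buckets never overlap.
func inRange(ts, start, end int64) bool {
	return ts >= start && ts < end
}
```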
* chore: add API endpoint for new query range
* chore: add bucket cache implementation
* chore: add fingerprinting impl and add bucket cache to querier
* chore: add provider factory
* chore(linter): add more linters and deprecate zap