feat(devenv): add otel-collector support for local development

- Add .devenv/docker/otel-collector/ with compose.yaml and config
- Add devenv-otel-collector and devenv-up targets to Makefile
- Update development.md with otel-collector setup instructions
- Add README.md with usage documentation for otel-collector setup

This enables developers to run the complete SigNoz stack locally,
including the OpenTelemetry Collector for receiving telemetry data
on ports 4317 (gRPC) and 4318 (HTTP).
Author: nikhilmantri0902
Date: 2025-08-04 18:50:01 +05:30
Parent: 090538f11f
Commit: 28fb149203
5 changed files with 231 additions and 5 deletions

.devenv/docker/otel-collector/README.md

@@ -0,0 +1,44 @@
# SigNoz OTel Collector Development Environment
This directory contains the Docker Compose setup for running the SigNoz OpenTelemetry Collector locally during development.
## What it does
- Starts the SigNoz OTel Collector container
- Exposes OTLP endpoints for receiving telemetry data (see the example below this list):
  - **gRPC**: `localhost:4317`
  - **HTTP**: `localhost:4318`
- Connects to ClickHouse running on the host machine
- Processes and forwards telemetry data to ClickHouse
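Most OpenTelemetry SDKs can be pointed at these endpoints through the standard OTLP exporter environment variables; a minimal sketch (the exact variables your SDK honors may vary):
```bash
# OTLP over HTTP (port 4318)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"

# Or OTLP over gRPC (port 4317)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export OTEL_EXPORTER_OTLP_PROTOCOL="grpc"
```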
## Usage
```bash
# Start the OTel Collector
make devenv-otel-collector
# Or start both ClickHouse and OTel Collector together
make devenv-up
```
## Prerequisites
- ClickHouse must be running (use `make devenv-clickhouse`; see the check below this list)
- Docker must be installed and running
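A quick way to confirm the ClickHouse prerequisite is satisfied (assuming the devenv ClickHouse exposes its default HTTP port, 8123):
```bash
curl -s http://localhost:8123/ping   # should print "Ok."
```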
## Testing
Send a test trace to verify everything is working:
```bash
curl -X POST http://localhost:4318/v1/traces \
-H "Content-Type: application/json" \
-d '{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"test-service"}}]},"scopeSpans":[{"spans":[{"traceId":"12345678901234567890123456789012","spanId":"1234567890123456","name":"test-span","startTimeUnixNano":"1609459200000000000","endTimeUnixNano":"1609459201000000000"}]}]}]}'
```
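To confirm the trace actually landed in ClickHouse, you can query it over the HTTP interface. This is a sketch that assumes the devenv ClickHouse listens on port 8123; the `signoz_traces` database name comes from the exporter DSN, while exact table names depend on the schema version:
```bash
# List the trace tables
curl -s 'http://localhost:8123/' --data-binary 'SHOW TABLES FROM signoz_traces'

# Approximate row counts per table (adjust the table name based on the output above)
curl -s 'http://localhost:8123/' \
  --data-binary "SELECT table, sum(rows) FROM system.parts WHERE database = 'signoz_traces' AND active GROUP BY table"
```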
## Configuration
- `compose.yaml`: Docker Compose configuration for the OTel Collector
- `otel-collector-config.yaml`: OpenTelemetry Collector configuration file
The configuration is set up to connect to ClickHouse via `host.docker.internal:9000`, which allows the containerized collector to reach ClickHouse running on the host machine.
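If the collector logs connection errors, you can verify that `host.docker.internal` is reachable from inside the container by probing ClickHouse's HTTP ping endpoint. A sketch that assumes ClickHouse exposes HTTP on 8123 and that `wget` is available in the image (the compose healthcheck already relies on it):
```bash
docker exec signoz-otel-collector-dev wget -qO- http://host.docker.internal:8123/ping
# expected output: Ok.
```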

.devenv/docker/otel-collector/compose.yaml

@@ -0,0 +1,29 @@
services:
  otel-collector:
    image: signoz/signoz-otel-collector:v0.128.2
    container_name: signoz-otel-collector-dev
    command:
      - --config=/etc/otel-collector-config.yaml
      - --feature-gates=-pkg.translator.prometheus.NormalizeName
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    environment:
      - OTEL_RESOURCE_ATTRIBUTES=host.name=signoz-host,os.type=linux
      - LOW_CARDINAL_EXCEPTION_GROUPING=false
    ports:
      - "4317:4317" # OTLP gRPC receiver
      - "4318:4318" # OTLP HTTP receiver
      - "13133:13133" # health check extension
    healthcheck:
      test:
        - CMD
        - wget
        - --spider
        - -q
        - localhost:13133
      interval: 30s
      timeout: 5s
      retries: 3
    restart: unless-stopped
    extra_hosts:
      - "host.docker.internal:host-gateway"

.devenv/docker/otel-collector/otel-collector-config.yaml

@@ -0,0 +1,96 @@
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: otel-collector
          static_configs:
            - targets:
                - localhost:8888
              labels:
                job_name: otel-collector
processors:
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system]
    timeout: 2s
  signozspanmetrics/delta:
    metrics_exporter: signozclickhousemetrics
    metrics_flush_interval: 60s
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s]
    dimensions_cache_size: 100000
    aggregation_temporality: AGGREGATION_TEMPORALITY_DELTA
    enable_exp_histogram: true
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: signoz.collector.id
      - name: service.version
      - name: browser.platform
      - name: browser.mobile
      - name: k8s.cluster.name
      - name: k8s.node.name
      - name: k8s.namespace.name
      - name: host.name
      - name: host.type
      - name: container.name
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  pprof:
    endpoint: 0.0.0.0:1777
exporters:
  clickhousetraces:
    datasource: tcp://host.docker.internal:9000/signoz_traces
    low_cardinal_exception_grouping: ${env:LOW_CARDINAL_EXCEPTION_GROUPING}
    use_new_schema: true
  signozclickhousemetrics:
    dsn: tcp://host.docker.internal:9000/signoz_metrics
  clickhouselogsexporter:
    dsn: tcp://host.docker.internal:9000/signoz_logs
    timeout: 10s
    use_new_schema: true
service:
  telemetry:
    logs:
      encoding: json
  extensions:
    - health_check
    - pprof
  pipelines:
    traces:
      receivers: [otlp]
      processors: [signozspanmetrics/delta, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [signozclickhousemetrics]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [signozclickhousemetrics]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhouselogsexporter]
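The `prometheus` receiver above scrapes the collector's own telemetry from `localhost:8888`, a port the compose file does not publish; to eyeball those internal metrics you can read them from inside the container. A sketch that assumes `wget` is available in the image, as the compose healthcheck suggests:
```bash
docker exec signoz-otel-collector-dev wget -qO- http://localhost:8888/metrics | head -40
```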

Makefile

@@ -61,6 +61,25 @@ devenv-postgres: ## Run postgres in devenv
	@cd .devenv/docker/postgres; \
	docker compose -f compose.yaml up -d

.PHONY: devenv-otel-collector
devenv-otel-collector: ## Run otel-collector in devenv (requires clickhouse to be running)
	@cd .devenv/docker/otel-collector; \
	docker compose -f compose.yaml up -d

.PHONY: devenv-up
devenv-up: ## Start both clickhouse and otel-collector for local development
	@echo "Starting ClickHouse..."
	@cd .devenv/docker/clickhouse; \
	docker compose -f compose.yaml up -d
	@echo "Waiting for ClickHouse to be ready..."
	@sleep 10
	@echo "Starting OTel Collector..."
	@cd .devenv/docker/otel-collector; \
	docker compose -f compose.yaml up -d
	@echo "✅ Development environment is ready!"
	@echo "  - ClickHouse: http://localhost:8123"
	@echo "  - OTel Collector: grpc://localhost:4317, http://localhost:4318"

##############################################################
# go commands
##############################################################
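This diff does not add a matching teardown target; to stop the devenv containers you can run `docker compose down` against the same directories the new targets use (a sketch):
```bash
(cd .devenv/docker/otel-collector && docker compose -f compose.yaml down)
(cd .devenv/docker/clickhouse && docker compose -f compose.yaml down)
```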

development.md

@@ -44,20 +44,35 @@ Before diving in, make sure you have these tools installed:
SigNoz has three main components: Clickhouse, Backend, and Frontend. Let's set them up one by one.
### 1. Setting up ClickHouse
First, we need to get ClickHouse running:
```bash
make devenv-clickhouse
```
This command:
- Starts ClickHouse in a single-shard, single-replica cluster
- Sets up Zookeeper
- Runs the latest schema migrations
### 2. Setting up OpenTelemetry Collector
Next, start the OpenTelemetry Collector to receive telemetry data:
```bash
make devenv-otel-collector
```
This command:
- Starts the SigNoz OpenTelemetry Collector
- Listens on port 4317 (gRPC) and 4318 (HTTP) for incoming telemetry data
- Forwards data to ClickHouse for storage
> 💡 **Quick Setup**: Use `make devenv-up` to start both ClickHouse and OTel Collector together
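If you want to watch the collector while you send data in the later steps, tail its logs; a sketch using the container name from the devenv compose file:
```bash
docker logs -f signoz-otel-collector-dev
```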
### 3. Starting the Backend
1. Run the backend server:
```bash
@@ -73,7 +88,7 @@ This command:
> 💡 **Tip**: The API server runs at `http://localhost:8080/` by default
### 4. Setting up the Frontend
1. Navigate to the frontend directory:
```bash
@@ -98,3 +113,26 @@ This command:
> 💡 **Tip**: `yarn dev` will automatically rebuild when you make changes to the code
Now you're all set to start developing! Happy coding! 🎉
## Testing Your Setup
To verify everything is working correctly:
1. **Check ClickHouse**: `curl http://localhost:8123/ping` (should return "Ok.")
2. **Check OTel Collector**: `curl http://localhost:13133` (should return health status)
3. **Check Backend**: `curl http://localhost:8080/api/v1/health` (should return `{"status":"ok"}`)
4. **Check Frontend**: Open `http://localhost:3301` in your browser
## Sending Test Data
You can now send telemetry data to your local SigNoz instance:
- **OTLP gRPC**: `localhost:4317`
- **OTLP HTTP**: `localhost:4318`
For example, using `curl` to send a test trace:
```bash
curl -X POST http://localhost:4318/v1/traces \
-H "Content-Type: application/json" \
-d '{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"test-service"}}]},"scopeSpans":[{"spans":[{"traceId":"12345678901234567890123456789012","spanId":"1234567890123456","name":"test-span","startTimeUnixNano":"1609459200000000000","endTimeUnixNano":"1609459201000000000"}]}]}]}'
```
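Logs travel through the collector's `logs` pipeline in the same way; here is a hedged sketch of a minimal OTLP/HTTP log record (field names follow the OTLP JSON encoding, timestamps are arbitrary):
```bash
curl -X POST http://localhost:4318/v1/logs \
  -H "Content-Type: application/json" \
  -d '{"resourceLogs":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"test-service"}}]},"scopeLogs":[{"logRecords":[{"timeUnixNano":"1609459200000000000","severityText":"INFO","body":{"stringValue":"hello from the devenv"}}]}]}]}'
```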