I am trying to set up Vector to scrape the logs from my k8s services.
I am deploying Vector through the Helm charts. The only difference is that I am using the 0.28.1-alpine image tag.
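For context, the image tag is the only thing overridden in the Helm values, roughly like this (the key names image.repository / image.tag are the usual Vector chart keys and are an assumption on my part; the exact path depends on the chart in use):

# values.yaml override (key names assumed; adjust to the chart actually in use)
image:
  repository: timberio/vector
  tag: 0.28.1-alpine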
The pod is running fine, and nothing in its logs indicates a problem (even with the debug flag on).
I have tried tapping with vector tap --outputs-of "db_logs" --url http://0.0.0.0:9001/graphql, and I also extended the configuration so that the dropped output can be tapped (see the debug-sink sketch right after the config below). However, nothing appears to be logged, and I am pretty much stuck at this point.
Here is the deployment resource:
Vector Config
secret:
  credentials:
    type: exec
    command:
      - /etc/vector/secret.sh
api:
  enabled: true
  address: 0.0.0.0:9001
sources:
  kubernetes_host:
    type: kubernetes_logs
    extra_label_selector: app.kubernetes.io/instance=platform,app.kubernetes.io/name!=supabase-vector
transforms:
  project_logs:
    type: remap
    inputs:
      - kubernetes_host
    source: |-
      .project = "default"
      .event_message = del(.message)
      .appname = del(.kubernetes.container_name)
      del(.file)
      del(.kubernetes)
      del(.source_type)
      del(.stream)
  router:
    type: route
    inputs:
      - project_logs
    route:
      kong: '.appname == "supabase-kong"'
      auth: '.appname == "supabase-auth"'
      rest: '.appname == "supabase-rest"'
      realtime: '.appname == "supabase-realtime"'
      storage: '.appname == "supabase-storage"'
      functions: '.appname == "supabase-functions"'
      db: '.appname == "supabase-db"'
  # Ignores non nginx errors since they are related with kong booting up
  kong_logs:
    type: remap
    inputs:
      - router.kong
    source: |-
      req, err = parse_nginx_log(.event_message, "combined")
      if err == null {
        .timestamp = req.timestamp
        .metadata.request.headers.referer = req.referer
        .metadata.request.headers.user_agent = req.agent
        .metadata.request.headers.cf_connecting_ip = req.client
        .metadata.request.method = req.method
        .metadata.request.path = req.path
        .metadata.request.protocol = req.protocol
        .metadata.response.status_code = req.status
      }
      if err != null {
        abort
      }
  # Ignores non nginx errors since they are related with kong booting up
  kong_err:
    type: remap
    inputs:
      - router.kong
    source: |-
      .metadata.request.method = "GET"
      .metadata.response.status_code = 200
      parsed, err = parse_nginx_log(.event_message, "error")
      if err == null {
        .timestamp = parsed.timestamp
        .severity = parsed.severity
        .metadata.request.host = parsed.host
        .metadata.request.headers.cf_connecting_ip = parsed.client
        url, err = split(parsed.request, " ")
        if err == null {
          .metadata.request.method = url[0]
          .metadata.request.path = url[1]
          .metadata.request.protocol = url[2]
        }
      }
      if err != null {
        abort
      }
  # Gotrue logs are structured json strings which frontend parses directly. But we keep metadata for consistency.
  auth_logs:
    type: remap
    inputs:
      - router.auth
    source: |-
      parsed, err = parse_json(.event_message)
      if err == null {
        .metadata.timestamp = parsed.time
        .metadata = merge!(.metadata, parsed)
      }
  # PostgREST logs are structured so we separate timestamp from message using regex
  rest_logs:
    type: remap
    inputs:
      - router.rest
    source: |-
      parsed, err = parse_regex(.event_message, r'^(?P<time>.*): (?P<msg>.*)$')
      if err == null {
        .event_message = parsed.msg
        .timestamp = parse_timestamp!(parsed.time, format: "%e/%b/%Y %R %:z")
        .metadata.host = .project
      }
  # Realtime logs are structured so we parse the severity level using regex (ignore time because it has no date)
  realtime_logs:
    type: remap
    inputs:
      - router.realtime
    source: |-
      .metadata.project = del(.project)
      .metadata.external_id = .metadata.project
      parsed, err = parse_regex(.event_message, r'^(?P<time>\d+:\d+:\d+\.\d+) \[(?P<level>\w+)\] (?P<msg>.*)$')
      if err == null {
        .event_message = parsed.msg
        .metadata.level = parsed.level
      }
  # Storage logs may contain json objects so we parse them for completeness
  storage_logs:
    type: remap
    inputs:
      - router.storage
    source: |-
      .metadata.project = del(.project)
      .metadata.tenantId = .metadata.project
      parsed, err = parse_json(.event_message)
      if err == null {
        .event_message = parsed.msg
        .metadata.level = parsed.level
        .metadata.timestamp = parsed.time
        .metadata.context[0].host = parsed.hostname
        .metadata.context[0].pid = parsed.pid
      }
  # Postgres logs some messages to stderr which we map to warning severity level
  db_logs:
    type: remap
    drop_on_abort: true
    reroute_dropped: true
    inputs:
      - router.db
    source: |-
      .metadata.host = "db-default"
      .metadata.parsed.timestamp = .timestamp
      parsed, err = parse_regex(.event_message, r'.*(?P<level>INFO|NOTICE|WARNING|ERROR|LOG|FATAL|PANIC?):.*', numeric_groups: true)
      if err != null || parsed == null {
        .metadata.parsed.error_severity = "info"
      }
      if parsed != null {
        .metadata.parsed.error_severity = parsed.level
      }
      if .metadata.parsed.error_severity == "info" {
        .metadata.parsed.error_severity = "log"
      }
      .metadata.parsed.error_severity = upcase!(.metadata.parsed.error_severity)
sinks:
  logflare_auth:
    type: 'http'
    inputs:
      - auth_logs
    encoding:
      codec: 'json'
    method: 'post'
    request:
      retry_max_duration_secs: 10
    headers:
      x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN}
    uri: 'http://platform-supabase-analytics:4000/api/logs?source_name=gotrue.logs.prod'
  logflare_realtime:
    type: 'http'
    inputs:
      - realtime_logs
    encoding:
      codec: 'json'
    method: 'post'
    request:
      retry_max_duration_secs: 10
    headers:
      x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN}
    uri: 'http://platform-supabase-analytics:4000/api/logs?source_name=realtime.logs.prod'
  logflare_rest:
    type: 'http'
    inputs:
      - rest_logs
    encoding:
      codec: 'json'
    method: 'post'
    request:
      retry_max_duration_secs: 10
    headers:
      x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN}
    uri: 'http://platform-supabase-analytics:4000/api/logs?source_name=postgREST.logs.prod'
  logflare_db:
    type: 'http'
    inputs:
      - db_logs
    encoding:
      codec: 'json'
    method: 'post'
    request:
      retry_max_duration_secs: 10
    headers:
      x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN}
    # We must route the sink through kong because ingesting logs before logflare is fully initialised will
    # lead to broken queries from studio. This works by the assumption that containers are started in the
    # following order: vector > db > logflare > kong
    uri: 'http://platform-supabase-kong:8000/analytics/v1/api/logs?source_name=postgres.logs'
  logflare_functions:
    type: 'http'
    inputs:
      - router.functions
    encoding:
      codec: 'json'
    method: 'post'
    request:
      retry_max_duration_secs: 10
    headers:
      x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN}
    uri: 'http://platform-supabase-analytics:4000/api/logs?source_name=deno-relay-logs'
  logflare_storage:
    type: 'http'
    inputs:
      - storage_logs
    encoding:
      codec: 'json'
    method: 'post'
    request:
      retry_max_duration_secs: 10
    headers:
      x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN}
    uri: 'http://platform-supabase-analytics:4000/api/logs?source_name=storage.logs.prod.2'
  logflare_kong:
    type: 'http'
    inputs:
      - kong_logs
      - kong_err
    encoding:
      codec: 'json'
    method: 'post'
    request:
      retry_max_duration_secs: 10
    headers:
      x-api-key: ${LOGFLARE_PUBLIC_ACCESS_TOKEN}
    uri: 'http://platform-supabase-analytics:4000/api/logs?source_name=cloudflare.logs.prod'
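To be able to tap the dropped output, I extended this config with an extra debug sink along these lines (a minimal sketch; the console sink is just one option, the point is to give db_logs.dropped and router._unmatched a consumer so they can be inspected):

  # hypothetical debug sink under sinks:, added only while troubleshooting
  debug_dropped:
    type: console
    inputs:
      - db_logs.dropped
      - router._unmatched
    encoding:
      codec: json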
Vector Logs
2025-08-19T16:55:11.803028Z INFO vector::app: Internal log rate limit configured. internal_log_rate_secs=10
2025-08-19T16:55:11.803329Z INFO vector::app: Log level is enabled. level="vector=info,codec=info,vrl=info,file_source=info,tower_limit=trace,rdkafka=info,buffers=info,lapin=info,kube=info"
2025-08-19T16:55:11.803387Z INFO vector::app: Loading configs. paths=["/etc/vector/vector.yml"]
2025-08-19T16:55:11.821338Z WARN vector::config::loading: Transform "db_logs.dropped" has no consumers
2025-08-19T16:55:11.821354Z WARN vector::config::loading: Transform "router._unmatched" has no consumers
2025-08-19T16:55:11.821667Z INFO source{component_kind="source" component_id=kubernetes_host component_type=kubernetes_logs component_name=kubernetes_host}: vector::sources::kubernetes_logs: Obtained Kubernetes Node name to collect logs for (self). self_node_name="lima-rancher-desktop"
2025-08-19T16:55:11.827729Z INFO source{component_kind="source" component_id=kubernetes_host component_type=kubernetes_logs component_name=kubernetes_host}: vector::sources::kubernetes_logs: Excluding matching files. exclude_paths=["**/*.gz", "**/*.tmp"]
2025-08-19T16:55:11.943483Z INFO vector::topology::running: Running healthchecks.
2025-08-19T16:55:11.944040Z INFO vector::topology::builder: Healthcheck passed.
2025-08-19T16:55:11.944165Z INFO vector::topology::builder: Healthcheck passed.
2025-08-19T16:55:11.944253Z INFO vector::topology::builder: Healthcheck passed.
2025-08-19T16:55:11.944335Z INFO vector::topology::builder: Healthcheck passed.
2025-08-19T16:55:11.944396Z INFO vector::topology::builder: Healthcheck passed.
2025-08-19T16:55:11.944474Z INFO vector::topology::builder: Healthcheck passed.
2025-08-19T16:55:11.944563Z INFO vector::topology::builder: Healthcheck passed.
2025-08-19T16:55:11.945683Z INFO vector: Vector has started. debug="false" version="0.28.1" arch="aarch64" revision="ff15924 2023-03-06"
2025-08-19T16:55:11.949784Z INFO source{component_kind="source" component_id=kubernetes_host component_type=kubernetes_logs component_name=kubernetes_host}:file_server: file_source::checkpointer: Attempting to read legacy checkpoint files.
2025-08-19T16:55:11.953489Z INFO vector::internal_events::api: API server running. address=0.0.0.0:9001 playground=http://0.0.0.0:9001/playground