
Integrating monoscope with Kubernetes

This guide shows how to send Kubernetes logs, metrics, events, and traces to monoscope using the OpenTelemetry Collector — no application code changes required.


Prerequisites

  • A Kubernetes cluster
  • kubectl installed
  • Helm 3
  • A monoscope account with an API key

Architecture: Why Two Collectors

Kubernetes telemetry comes from two fundamentally different sources, and each requires a different deployment shape:

Data                                          Source                                       Deployment
Pod logs (filelog)                            Files on each node’s disk (/var/log/pods)    DaemonSet — one per node
Kubelet metrics (kubeletstats)                Local kubelet on each node                   DaemonSet — one per node
Cluster events (k8s_events, emitted as logs)  Kubernetes API server (cluster-global)       Deployment, 1 replica
Cluster metrics (k8s_cluster)                 Kubernetes API server (cluster-global)       Deployment, 1 replica

Running everything in a single DaemonSet duplicates events and cluster metrics N times (once per node). Running everything in a single Deployment silently drops logs from every node except one. The safe pattern is to install two collector releases — an agent (DaemonSet) and a cluster collector (Deployment).

This split is the OpenTelemetry community's recommended pattern. The Helm chart's presets auto-wire the receivers, RBAC, and host mounts — you just pick which presets belong on which release.
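For reference, the two releases can be sketched with the upstream opentelemetry-collector chart. The release names and preset flags below are illustrative assumptions — check your chart version for the exact preset names, and note that recent chart versions require `image.repository` to be set explicitly. The monoscope exporter and API key would go in a values file, omitted here:

```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Agent: node-local presets (pod log files + kubelet)
helm install monoscope-agent open-telemetry/opentelemetry-collector \
  --set mode=daemonset \
  --set image.repository=otel/opentelemetry-collector-k8s \
  --set presets.logsCollection.enabled=true \
  --set presets.kubeletMetrics.enabled=true \
  --set presets.kubernetesAttributes.enabled=true

# Cluster collector: API-server presets, single replica
helm install monoscope-cluster open-telemetry/opentelemetry-collector \
  --set mode=deployment \
  --set replicaCount=1 \
  --set image.repository=otel/opentelemetry-collector-k8s \
  --set presets.kubernetesEvents.enabled=true \
  --set presets.clusterMetrics.enabled=true \
  --set presets.kubernetesAttributes.enabled=true
```

Each preset auto-wires the matching receiver plus the RBAC and host mounts it needs, which is why no collector config appears on the command line.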

Sending Application Telemetry

Both Helm releases expose OTLP receivers on ports 4317 (gRPC) and 4318 (HTTP). Point your instrumented apps at the agent Service:

http://monoscope-agent-opentelemetry-collector.default.svc.cluster.local:4318

By default this ClusterIP Service load-balances across every agent pod cluster-wide. If you want each app pod to send to the agent on its own node (lower latency, no cross-node hops), patch the Service with spec.internalTrafficPolicy: Local.
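Assuming the Service name shown above (it depends on your release name and namespace), the patch can be applied in place; this is a sketch, not a required step:

```shell
# Route app -> agent traffic to the agent pod on the same node only
kubectl patch service monoscope-agent-opentelemetry-collector \
  --type merge \
  -p '{"spec":{"internalTrafficPolicy":"Local"}}'
```

With `Local` traffic policy, a pod on a node whose agent is down gets connection errors instead of failing over to another node, so keep the default if you prefer availability over locality.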

Advanced: OpenTelemetry Operator

If you already run the OpenTelemetry Operator — for example because you use its auto-instrumentation feature, manage everything via Argo CD / Flux, or need the Target Allocator for Prometheus sharding — you can replace the two Helm releases with two OpenTelemetryCollector custom resources.

The same split applies: one CR with mode: daemonset for filelog + kubeletstats, and one CR with mode: deployment and replicas: 1 for k8s_events + k8s_cluster.

Install the Operator

The OTel Operator’s admission webhook requires cert-manager (or another compatible certificate provider). Skip the first command if you already have one installed. For production, replace latest with a pinned release tag from each project.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml
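Before creating any collector CRs, it is worth confirming both installs came up. The namespaces below are the defaults created by those manifests; adjust if yours differ:

```shell
# cert-manager pods must be Running before the operator webhook can get a certificate
kubectl get pods -n cert-manager

# the operator manifest creates its own namespace
kubectl get pods -n opentelemetry-operator-system
```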

Create the same API key secret used by the Helm path (skip if already created):

kubectl create secret generic monoscope-secrets \
  --from-literal=api-key=YOUR_API_KEY

Agent CR (DaemonSet)

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: monoscope-agent
spec:
  mode: daemonset
  volumeMounts:
    - name: varlogpods
      mountPath: /var/log/pods
      readOnly: true
  volumes:
    - name: varlogpods
      hostPath:
        path: /var/log/pods
  env:
    - name: K8S_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MONOSCOPE_API_KEY
      valueFrom:
        secretKeyRef:
          name: monoscope-secrets
          key: api-key
  config:
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
    receivers:
      filelog:
        # Path layout: /var/log/pods/<ns>_<pod>_<uid>/<container>/<restart>.log
        include: [/var/log/pods/*/*/*.log]
        start_at: end          # use 'beginning' only for first-install backfill
      kubeletstats:
        collection_interval: 10s
        auth_type: serviceAccount
        endpoint: ${env:K8S_NODE_NAME}:10250
        insecure_skip_verify: true   # acceptable for dev; in production mount the kubelet CA
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      # memory_limiter should run first in every pipeline; batch should run last
      memory_limiter:
        check_interval: 1s
        limit_mib: 4000
        spike_limit_mib: 800
      k8sattributes:           # the component type has no underscore
        auth_type: serviceAccount
        passthrough: false
        filter:
          node_from_env_var: K8S_NODE_NAME
      resource:
        attributes:
          - key: x-api-key
            value: ${env:MONOSCOPE_API_KEY}
            action: upsert
      batch: {}
    exporters:
      # 'otlp' is the gRPC OTLP exporter type; '/monoscope' just names this instance
      otlp/monoscope:
        endpoint: "otelcol.monoscope.tech:4317"
        tls:
          insecure: true
    service:
      extensions: [health_check]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [memory_limiter, k8sattributes, resource, batch]
          exporters: [otlp/monoscope]
        metrics:
          receivers: [otlp, kubeletstats]
          processors: [memory_limiter, k8sattributes, resource, batch]
          exporters: [otlp/monoscope]
        logs:
          receivers: [filelog, otlp]
          processors: [memory_limiter, k8sattributes, resource, batch]
          exporters: [otlp/monoscope]
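Once the CR is saved (the filename below is illustrative), it can be applied and the generated workload inspected. The Operator names the workload after the CR with a -collector suffix:

```shell
kubectl apply -f monoscope-agent.yaml

# the Operator generates a DaemonSet named <cr-name>-collector
kubectl rollout status daemonset/monoscope-agent-collector
kubectl logs daemonset/monoscope-agent-collector --tail=20
```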

Cluster CR (Deployment, 1 replica)

apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: monoscope-cluster
spec:
  mode: deployment
  replicas: 1
  env:
    - name: MONOSCOPE_API_KEY
      valueFrom:
        secretKeyRef:
          name: monoscope-secrets
          key: api-key
  config:
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
    receivers:
      k8s_cluster:
        collection_interval: 10s
      k8s_events:
        namespaces: []
    processors:
      # memory_limiter should run first in every pipeline; batch should run last
      memory_limiter:
        check_interval: 1s
        limit_mib: 1000
        spike_limit_mib: 200
      k8sattributes:           # the component type has no underscore
        auth_type: serviceAccount
        passthrough: false
      resource:
        attributes:
          - key: x-api-key
            value: ${env:MONOSCOPE_API_KEY}
            action: upsert
      batch: {}
    exporters:
      # 'otlp' is the gRPC OTLP exporter type; '/monoscope' just names this instance
      otlp/monoscope:
        endpoint: "otelcol.monoscope.tech:4317"
        tls:
          insecure: true
    service:
      extensions: [health_check]
      pipelines:
        metrics:
          receivers: [k8s_cluster]
          processors: [memory_limiter, k8sattributes, resource, batch]
          exporters: [otlp/monoscope]
        logs:
          receivers: [k8s_events]
          processors: [memory_limiter, k8sattributes, resource, batch]
          exporters: [otlp/monoscope]
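As with the agent, apply the CR (filename illustrative) and wait for the single-replica Deployment the Operator generates:

```shell
kubectl apply -f monoscope-cluster.yaml
kubectl rollout status deployment/monoscope-cluster-collector
```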

RBAC

Both CRs need read access to pods, namespaces, nodes, events, and the workload APIs. The OTel Operator creates a ServiceAccount named <cr-name>-collector in the CR’s namespace, so the binding below targets monoscope-agent-collector and monoscope-cluster-collector. Apply this once:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monoscope-collector
rules:
- apiGroups: [""]
  resources:
    - pods
    - namespaces
    - nodes
    - nodes/stats
    - nodes/proxy
    - services
    - events
    - replicationcontrollers
    - resourcequotas
    - persistentvolumes
    - persistentvolumeclaims
  verbs: [get, list, watch]
- apiGroups: ["apps"]
  resources: [deployments, replicasets, daemonsets, statefulsets]
  verbs: [get, list, watch]
- apiGroups: ["batch"]
  resources: [jobs, cronjobs]
  verbs: [get, list, watch]
- apiGroups: ["autoscaling"]
  resources: [horizontalpodautoscalers]
  verbs: [get, list, watch]
- apiGroups: ["events.k8s.io"]
  resources: [events]
  verbs: [get, list, watch]
- apiGroups: ["discovery.k8s.io"]
  resources: [endpointslices]
  verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monoscope-collector
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: monoscope-collector
subjects:
- kind: ServiceAccount
  name: monoscope-agent-collector
  namespace: default
- kind: ServiceAccount
  name: monoscope-cluster-collector
  namespace: default
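A quick way to confirm the binding took effect is to impersonate the two generated ServiceAccounts with kubectl auth can-i; both commands should print "yes":

```shell
kubectl auth can-i list pods \
  --as=system:serviceaccount:default:monoscope-agent-collector

kubectl auth can-i watch events \
  --as=system:serviceaccount:default:monoscope-cluster-collector
```

If either prints "no", check that the CRs live in the default namespace (or update the subjects to match), since the generated ServiceAccounts are created in the CR’s namespace.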

Monitoring API Gateways and Ingresses

For Kubernetes API gateways (Kong, Istio, Ambassador) or Ingress controllers, point them at the agent collector’s OTLP endpoint. For example, with the NGINX Ingress Controller (ingress-nginx):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-opentelemetry: "true"
  otlp-collector-host: "monoscope-agent-opentelemetry-collector.default.svc.cluster.local"
  otlp-collector-port: "4317"

Then enable tracing per Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/enable-opentelemetry: "true"

Next Steps

  • Configure alerts in monoscope based on Kubernetes metrics and events
  • Build dashboards for cluster health and workload performance
  • Correlate API latency with container resource usage
  • Use monoscope insights to right-size deployments and spot regressions