Loki

Tutorial to scrape a pod’s logs with a Promtail sidecar agent and send them to Grafana Loki.

The following documentation shows how to scrape logs from a Spring Boot microservice. You may need to adapt it to your needs.

Requirements

  • A Grafana instance (see Grafana Helm chart)
  • A deployed service (Kubernetes Deployment)
  • kubectl and helm installed and configured for your Kubernetes cluster (see the quick check after this list)
  • (optional) a namespace dedicated to your cross-namespace tools (example: mycorp-monitoring). It can be the same namespace as your Grafana instance.
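
A quick way to confirm the tooling is in place (a minimal sketch; any recent kubectl and Helm 3.x versions should do):

kubectl config current-context   # make sure you are pointing at the intended cluster
kubectl get nodes                # the cluster answers and the nodes are Ready
helm version --short             # the Grafana charts below expect Helm 3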

Loki

Install Loki by using its Helm chart.

Configure

Configure your Loki instance values file (loki.yaml in this example):

rbac:
  create: false
  pspEnabled: false
resources:
  limits:
    cpu: 500m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 128Mi
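
If you prefer to start from the chart’s full default values instead of this minimal file, you can dump them with Helm (after adding the grafana repository as shown in the Installation step below) and trim what you do not need:

helm show values grafana/loki > loki.yaml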

Installation

Apply your configuration:

helm repo add grafana https://grafana.github.io/helm-charts
helm search repo grafana/loki
helm install loki grafana/loki -f loki.yaml --namespace [NAMESPACE]
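
To check that Loki started correctly, list the pods and hit its readiness endpoint through a port-forward (a quick sketch; pod and service names may differ slightly depending on the chart version):

kubectl get pods -n [NAMESPACE]
kubectl port-forward -n [NAMESPACE] svc/loki 3100:3100 &
curl http://localhost:3100/ready
# answers "ready" once the instance is up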

(optional) Expose

Deploy the routes to expose your Loki instance publicly (the apply command follows the manifest below):

  • routes.yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: loki
spec:
  entryPoints:
    - http
  routes:
    - kind: Rule
      match: Host(`loki.mydomain.com`)
      middlewares:
        - name: https-redirect
          namespace: traefik
      services:
        - name: loki
          port: 3100
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: loki-tls
spec:
  entryPoints:
    - https
  routes:
    - kind: Rule
      match: Host(`loki.mydomain.com`)
      services:
        - name: loki
          port: 3100
  tls:
    certResolver: default
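
Apply the routes and check that Loki answers on the public host (assuming the routes.yaml above and that DNS for loki.mydomain.com points to your Traefik entry point):

kubectl apply -f routes.yaml --namespace [NAMESPACE]
curl https://loki.mydomain.com/ready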

Grafana

DataSource

Configure the Loki data source at http://loki:3100 (if your Grafana instance is in the same namespace). You may also use your public routes to configure the data source of an external Grafana instance.
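
If you prefer to provision the data source from a file rather than through the UI, a minimal sketch of a Grafana provisioning file (for example a loki-datasource.yaml mounted under /etc/grafana/provisioning/datasources; the names are assumptions) could look like:

apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: false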

Dashboard

Here is a link to a Loki Dashboard you can import into your Grafana instance. You can also import a dashboard from the Grafana Marketplace.

Promtail

We use the Kubernetes multi-container pod capability (the sidecar pattern) to share your application’s log files with the Promtail agent.

Configuration

The agent configuration below is shared with the sidecar agent containers through a ConfigMap.

In this example we use the /app/log/*.log path and file pattern for the service’s log files.

The following configuration defines how to process, parse and extract labels from the default Spring Boot Logback output.
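
For reference, a typical line of the default Spring Boot Logback output that this pipeline parses looks like the following (hypothetical example; the regex below extracts the timestamp, level, pid, thread, logger and message groups from it):

2023-06-01 10:15:30.123  INFO 1 --- [           main] com.example.demo.DemoApplication         : Started DemoApplication in 3.214 seconds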

It is possible to parameterize the configuration with environment variables, for example ${SERVICE_NAME:service} in the configuration below.

  • promtail-boot.config.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: promtail-boot
data:
  promtail.yaml: |
    server:
      disable: true
    positions:
      filename: /app/log/positions.yaml
    clients:
      - url: http://loki.[namespace].svc:3100/loki/api/v1/push
    scrape_configs:
    - job_name: ${SERVICE_NAME:service}
      pipeline_stages:
        - multiline:
            firstline: '^\d{4}-\d{2}-\d{2}\s\d{1,2}\:\d{2}\:\d{2}\.\d{3}'
            max_wait_time: 3s
        - match:
            selector: '{type="boot"}'
            stages:
              - regex:
                  expression: '^(?P<timestamp>\d{4}-\d{2}-\d{2}\s\d{1,2}\:\d{2}\:\d{2}\.\d{3})\s+(?P<level>[A-Z]{4,5})\s(?P<pid>\d+)\s---\s\[\s*(?P<thread>.*)]\s(?P<logger>.*)\s+\:\s(?P<message>(?s:.*))$'
              - labels:
                  timestamp:
                  level:
                  pid:
                  thread:
                  logger:
                  message:
              - timestamp:
                  format: '2006-01-02 15:04:05.000'
                  source: timestamp
      static_configs:
      - labels:
          app: h8lio
          namespace: byzaneo-one
          service: ${SERVICE_NAME:service}
          type: boot
          __path__: /app/log/*.log    

⚠️ Pay attention to the __path__ value, where the agent will look for log files to scrape.

  • deploy the configuration in the namespace where the sidecar agents will be located:
kubectl apply -f promtail-boot.config.yaml
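
You can verify that the ConfigMap is in place before wiring the sidecar:

kubectl get configmap promtail-boot --namespace [NAMESPACE] -o yaml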

Agent Sidecar

The most important part is to share the log files location between the Deployment’s containers. To do so, we are using an emptyDir volume.

  • configure your application to produce log files. For example, in your Spring Boot microservice’s application.yaml:
...
logging:
  level:
    root: DEBUG
    ...
  file.name: /app/log/myservice.log
...

If you need more configuration options, you can use a logback.xml file, as sketched below.
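
A minimal logback-spring.xml sketch that reuses the Spring Boot default appenders (the file name and log level are assumptions, adjust them to your service):

<configuration>
  <!-- reuse Spring Boot's built-in defaults and appenders -->
  <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
  <property name="LOG_FILE" value="/app/log/myservice.log"/>
  <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
  <include resource="org/springframework/boot/logging/logback/file-appender.xml"/>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
    <appender-ref ref="FILE"/>
  </root>
</configuration>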

  • add the sidecar Promtail agent to your Deployment:
apiVersion: apps/v1
kind: Deployment
...
spec:
  ...
  template:
    ...
    spec:
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      volumes:
      - name: promtail
        configMap:
          defaultMode: 420
          name: promtail-boot
      - name: log
        emptyDir: {}
      ...
      containers:
        - args:
          - '-config.expand-env=true'
          - '-config.file=/etc/promtail/promtail.yaml'
          env:
          - name: SERVICE_NAME
            value: [serviceName]
          image: grafana/promtail
          imagePullPolicy: Always
          name: promtail
          resources:
            limits:
              cpu: 250m
              memory: 128Mi
            requests:
              cpu: 125m
              memory: 64Mi
          volumeMounts:
          - mountPath: /app/log
            name: log
          - mountPath: /etc/promtail
            name: promtail
        - ...
          volumeMounts:
          - mountPath: /app/log
            name: log
          ...

Adapt the security context, the log path, the service name, and any other Promtail environment variables to your setup.

Repeat this configuration for every service you want to observe.
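
After redeploying, a quick check that the sidecar is running and tailing the files ([myservice] and [namespace] are placeholders):

kubectl get pods -n [namespace]                        # observed pods should report 2/2 containers ready
kubectl logs deploy/[myservice] -c promtail -n [namespace]
# Promtail logs the files it tails and the Loki client it pushes to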

Explore the logs

Once your observed services are up and running (a single-replica service should show one pod with 2/2 containers ready) and after a few seconds (depending on the amount of logs produced and the scraping delay), go to your Grafana instance and “Explore” the Loki “Log Browser” or your dedicated dashboards.
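
For example, with the labels defined in the ConfigMap above, a LogQL query like the following (the service name is a placeholder) shows only the error lines of one service:

{app="h8lio", type="boot", service="myservice"} |= "ERROR"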

You should also be able to set up alerts to be notified when critical log levels occur across your services.