
K8s Event Alerting with Kubernetes Event Exporter and Prometheus

Author: alden_ygq

This article describes a solution that uses Kubernetes Event Exporter to convert Kubernetes events into Prometheus metrics and then combines Prometheus alerting rules with Alertmanager for alert notification. By watching the Kubernetes API, scraping metrics, evaluating alerting rules, and sending notifications, it builds an efficient and reliable event-driven monitoring system.

This solution uses Kubernetes Event Exporter to automatically capture cluster events and convert them into Prometheus metrics, then uses Prometheus alerting rules and Alertmanager for notification, establishing an efficient, reliable, event-driven monitoring pipeline.
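For reference, the raw material of this pipeline is the ordinary Kubernetes event stream, which you can inspect directly with kubectl (the filter here simply narrows the output to Warning events):

kubectl get events -A --field-selector type=Warning --sort-by=.lastTimestamp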

Architecture

The core components and workflow of this solution are as follows:

Main Components and Their Roles

- Kubernetes Event Exporter: watches events through the Kubernetes API and converts them into Prometheus metrics.
- Prometheus: scrapes the exporter's metrics and evaluates alerting rules against them.
- Alertmanager: groups, routes, and delivers the resulting alerts to notification channels.

Deploying Kubernetes Event Exporter

1. Create the RBAC Resources

First, make sure kubernetes-event-exporter has permission to read cluster events and related resources.

# event-exporter-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: event-exporter
  namespace: monitoring  # recommended: deploy in the monitoring namespace

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: event-exporter
rules:
- apiGroups: [""]
  resources: ["events", "pods", "nodes"]  # 需要events的读权限,获取pods/nodes信息可用于丰富标签
  verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: event-exporter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: event-exporter
subjects:
- kind: ServiceAccount
  name: event-exporter
  namespace: monitoring

Apply the RBAC configuration:

kubectl apply -f event-exporter-rbac.yaml
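To sanity-check that the binding took effect, you can impersonate the ServiceAccount with kubectl auth can-i (it should print "yes"):

kubectl auth can-i list events --as=system:serviceaccount:monitoring:event-exporter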

2. Create the Kubernetes Event Exporter Configuration

The focus here is converting events into Prometheus metrics.

# event-exporter-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-config
  namespace: monitoring
data:
  config.yaml: |
    logLevel: info
    logFormat: json
    # Metrics receiver: exposes metrics in Prometheus format
    metricsReceiver:
      port: 9102  # port the metrics are exposed on
    route:
      routes:
        - match:
            - receiver: "metrics-receiver"  # 匹配所有事件,并转换为指标
        # 可以添加更多路由规则,例如只处理Warning事件 
        # - match:
        #   - type: "Warning"
        #   receiver: "metrics-receiver"
    receivers:
      - name: "metrics-receiver"
        metrics: {}  # use the built-in metrics receiver

Apply the ConfigMap:

kubectl apply -f event-exporter-config.yaml

3. Deploy Kubernetes Event Exporter

Create the Deployment and Service.

# event-exporter-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-exporter
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-exporter
  template:
    metadata:
      labels:
        app: event-exporter
      annotations:
        prometheus.io/scrape: "true"          # allow Prometheus to auto-discover and scrape this Pod
        prometheus.io/port: "9102"            # port the metrics are exposed on
        prometheus.io/path: "/metrics"        # metrics path
    spec:
      serviceAccountName: event-exporter
      containers:
      - name: event-exporter
        image: ghcr.io/opsgenie/kubernetes-event-exporter:v0.11
        args:
          - -conf=/data/config.yaml
        ports:
        - containerPort: 9102  # must match metricsReceiver.port in the ConfigMap
        volumeMounts:
        - name: config-volume
          mountPath: /data
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
      volumes:
      - name: config-volume
        configMap:
          name: event-exporter-config

---
apiVersion: v1
kind: Service
metadata:
  name: event-exporter
  namespace: monitoring
  labels:
    app: event-exporter
spec:
  ports:
  - port: 9102
    targetPort: 9102
    protocol: TCP
    name: http-metrics
  selector:
    app: event-exporter
  type: ClusterIP

Apply the Deployment and Service:

kubectl apply -f event-exporter-deployment.yaml

4. Verify the Deployment

Check the Pod status and logs:

kubectl get pods -n monitoring -l app=event-exporter
kubectl logs -f -n monitoring deployment/event-exporter
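To confirm the metrics endpoint itself is serving, port-forward the container port and fetch it. The metric name grepped for here (event_exporter_events_total, which the alert rules below rely on) is an assumption about your exporter version's output, so adjust it to whatever /metrics actually shows:

kubectl port-forward -n monitoring deployment/event-exporter 9102:9102 &
curl -s http://localhost:9102/metrics | grep event_exporter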

Configuring Prometheus to Scrape the Metrics

Make sure your Prometheus configuration can discover and scrape the metrics exposed by kubernetes-event-exporter. If you use the Prometheus Operator with ServiceMonitor, you can create the following resource:

# event-exporter-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: event-exporter
  namespace: monitoring
  labels:
    app: event-exporter
spec:
  endpoints:
  - port: http-metrics  # matches the port name in the Service
    interval: 30s       # scrape interval
  namespaceSelector:
    matchNames:
    - monitoring
  selector:
    matchLabels:
      app: event-exporter

Apply the ServiceMonitor:

kubectl apply -f event-exporter-servicemonitor.yaml
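If you are not running the Prometheus Operator, an annotation-based scrape job works as well, since the Deployment above already carries the prometheus.io/* annotations. A minimal sketch (the job name is illustrative):

# prometheus.yml (excerpt)
scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # keep only pods annotated with prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # scrape the port named in prometheus.io/port
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      # use the path named in prometheus.io/path
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)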

Configuring Prometheus Alerting Rules

Create an alerting rules file for Prometheus (for example k8s-events-rules.yaml):

# k8s-events-rules.yaml
groups:
- name: KubernetesEventsAlert
  rules:
  # Rule 1: monitor Warning-type events
  - alert: K8sWarningEvent
    expr: increase(event_exporter_events_total{event_type="Warning"}[5m]) > 0
    for: 0m  # alert immediately once triggered
    labels:
      severity: warning
      source: k8s-event
    annotations:
      description: |-
        A Kubernetes Warning event occurred!
        Namespace: {{ $labels.namespace }}
        Object: {{ $labels.involved_object_kind }}/{{ $labels.involved_object_name }}
        Reason: {{ $labels.reason }}
      summary: "Kubernetes Warning Event ({{ $labels.involved_object_kind }})"

  # Rule 2: monitor specific high-frequency event reasons (e.g. BackOff)
  - alert: K8sPodBackOff
    expr: increase(event_exporter_events_total{reason="BackOff", involved_object_kind="Pod"}[10m]) > 3
    for: 0m
    labels:
      severity: error
      source: k8s-event
    annotations:
      description: |-
        Pod is restarting frequently (BackOff)!
        Pod: {{ $labels.namespace }}/{{ $labels.involved_object_name }}
        Occurrences in the last 10 minutes: {{ $value }}
      summary: "Pod {{ $labels.involved_object_name }} is restarting frequently (BackOff)"

  # Rule 3: monitor node problems (e.g. NotReady)
  - alert: K8sNodeNotReady
    expr: increase(event_exporter_events_total{reason="NotReady", involved_object_kind="Node"}[5m]) > 0
    for: 1m  # alert only after 1 minute, to avoid transient blips
    labels:
      severity: critical
      source: k8s-event
    annotations:
      description: |-
        Node is in an abnormal state (NotReady)!
        Node: {{ $labels.involved_object_name }}
      summary: "Node {{ $labels.involved_object_name }} is NotReady"

Notes on the Alert Rules

- Each expression uses increase() over a time window, so the rule fires on newly occurring events rather than on the counter's cumulative value.
- for: 0m fires as soon as the expression is true; the NotReady rule uses for: 1m to filter out transient blips.
- The severity and source labels attached to each alert drive the Alertmanager routing configured below.

Load the Alerting Rules

Note that the file above is a plain Prometheus rule file, not a Kubernetes manifest, so kubectl apply cannot consume it directly. Either reference it from your Prometheus configuration via rule_files, or, with the Prometheus Operator, wrap the groups in a PrometheusRule resource as sketched below and apply that:

kubectl apply -f k8s-events-prometheusrule.yaml
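A minimal PrometheusRule wrapper might look like this. The release: prometheus label is illustrative; it must match whatever ruleSelector your Prometheus custom resource uses:

# k8s-events-prometheusrule.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: k8s-events-rules
  namespace: monitoring
  labels:
    release: prometheus  # illustrative; must match your Prometheus ruleSelector
spec:
  groups:
  - name: KubernetesEventsAlert
    rules:
    - alert: K8sWarningEvent
      expr: increase(event_exporter_events_total{event_type="Warning"}[5m]) > 0
      labels:
        severity: warning
        source: k8s-event
      annotations:
        summary: "Kubernetes Warning Event ({{ $labels.involved_object_kind }})"
    # ... the remaining rules from k8s-events-rules.yaml go here unchanged

If you instead load the raw rule file directly via rule_files, promtool (bundled with Prometheus) can validate it first:

promtool check rules k8s-events-rules.yaml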

Configuring Alertmanager to Send Alerts

Configure Alertmanager (alertmanager.yml) to process and deliver alerts:

# alertmanager.yml example (excerpt)
route:
  group_by: [namespace, alertname]  # group by namespace and alert name
  group_wait: 10s
  group_interval: 5m
  repeat_interval: 2h
  receiver: 'default-receiver'
  routes:
    - match:                     # match event-sourced alerts
        source: k8s-event
      receiver: 'k8s-event-receiver'
      # route further by severity
      routes:
        - match:
            severity: critical
          receiver: 'critical-team-receiver'
        - match:
            severity: warning
          receiver: 'warning-team-receiver'

receivers:
- name: 'default-receiver'
  webhook_configs:
  - url: 'http://some-webhook-url'

- name: 'k8s-event-receiver'
  email_configs:
  - to: 'devops-team@example.com'
    from: 'alertmanager@example.com'
    smarthost: 'smtp.example.com:587'
    auth_username: 'alertmanager'
    auth_password: 'password'
  # Slack, webhooks, etc. can also be configured
  webhook_configs:
  - url: 'http://your-webhook-url/alert'  # e.g. forward to DingTalk, Slack, or a custom system

- name: 'critical-team-receiver'
  # ... receiver config for critical alerts, e.g. phone escalation or PagerDuty

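Before reloading Alertmanager, it is worth validating the file with amtool, which ships with Alertmanager:

amtool check-config alertmanager.yml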
Key Alertmanager Features

- Grouping: group_by batches related alerts (here by namespace and alertname) into a single notification.
- Routing: the routes tree dispatches alerts by label, first on source: k8s-event and then on severity.
- Throttling: group_wait, group_interval, and repeat_interval control how quickly and how often notifications go out.

Optimization and Tips

- To cut metric cardinality and noise, consider routing only Warning events to the metrics receiver (see the commented route example in the ConfigMap above).
- The exporter generally does not hot-reload its configuration, so restart the Deployment after changing the ConfigMap (kubectl rollout restart deployment/event-exporter -n monitoring).
- Tune the rule windows and thresholds (e.g. the [10m] window and > 3 threshold in the BackOff rule) to your cluster's baseline noise before paging anyone.

Summary

By converting K8s events into Prometheus metrics with kubernetes-event-exporter and combining Prometheus alerting rules with Alertmanager's notification capabilities, you can build a powerful and flexible event monitoring and alerting system. It lets you detect abnormal states in the cluster in real time, respond quickly, and improve the cluster's stability and observability.

The above reflects my personal experience; I hope it serves as a useful reference, and I hope you will continue to support 脚本之家 (jb51).
