First, use the prometheus/client_golang library in your Golang application to expose metrics such as HTTP request counts; next, configure Prometheus via scrape_configs to periodically scrape the /metrics endpoint; then define expression-based alerting rules in alerts.yml, such as high latency or high error rate; finally, have Alertmanager receive Firing alerts and send notifications by email or other channels.

Prometheus is a monitoring system widely used in the cloud-native ecosystem, and a Golang service combined with Prometheus can easily expose runtime metrics and drive alerting. To set up metric collection and alerting for a Golang application, the core workflow is: expose metrics in the Go program, configure Prometheus to scrape them, write alerting rules, and send notifications through Alertmanager. Each step is described in detail below.
1. Exposing Monitoring Metrics in Golang
The prometheus/client_golang library lets you register and expose metrics in a Go service. Common metric types include Counter (monotonic counter), Gauge (current value), Histogram (distribution statistics), and Summary (quantiles).
Example code:
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	// Counter of HTTP requests, labeled by method, endpoint, and status code.
	httpRequestsTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "Total number of HTTP requests.",
		},
		[]string{"method", "endpoint", "status"},
	)
)

func init() {
	// Register the metric with the default registry.
	prometheus.MustRegister(httpRequestsTotal)
}

func handler(w http.ResponseWriter, r *http.Request) {
	httpRequestsTotal.WithLabelValues(r.Method, r.URL.Path, "200").Inc()
	w.Write([]byte("Hello World"))
}

func main() {
	// Expose the default registry on /metrics.
	http.Handle("/metrics", promhttp.Handler())
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
After starting the service, visit :8080/metrics to see the exposed metrics. Make sure Prometheus can reach this endpoint.
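Note that the handler above hardcodes the status label as "200", and the latency alert in step 3 evaluates a histogram series (http_request_duration_seconds_bucket) that this example does not yet produce. Below is a minimal, self-contained sketch of one way to wire both up with a small middleware; the names instrument, statusRecorder, and http_in_flight_requests are illustrative choices, not part of any library API, while http_request_duration_seconds matches the metric name assumed by the alert rule.

package main

import (
	"net/http"
	"strconv"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	// Same counter as in the example above.
	httpRequestsTotal = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "http_requests_total",
			Help: "Total number of HTTP requests.",
		},
		[]string{"method", "endpoint", "status"},
	)

	// Histogram that produces the http_request_duration_seconds_bucket
	// series used by the HighRequestLatency rule in step 3.
	httpRequestDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "http_request_duration_seconds",
			Help:    "HTTP request latency in seconds.",
			Buckets: prometheus.DefBuckets,
		},
		[]string{"method", "endpoint"},
	)

	// Gauge example (illustrative name): requests currently being served.
	inFlightRequests = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "http_in_flight_requests",
		Help: "Number of HTTP requests currently being served.",
	})
)

// statusRecorder captures the status code written by the wrapped handler,
// so the "status" label reflects the real response instead of a hardcoded "200".
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// instrument wraps a handler and records all three metrics for it.
func instrument(endpoint string, next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		inFlightRequests.Inc()
		defer inFlightRequests.Dec()

		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		start := time.Now()
		next(rec, r)

		httpRequestsTotal.WithLabelValues(r.Method, endpoint, strconv.Itoa(rec.status)).Inc()
		httpRequestDuration.WithLabelValues(r.Method, endpoint).Observe(time.Since(start).Seconds())
	}
}

func main() {
	prometheus.MustRegister(httpRequestsTotal, httpRequestDuration, inFlightRequests)

	http.Handle("/metrics", promhttp.Handler())
	http.HandleFunc("/", instrument("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("Hello World"))
	}))
	http.ListenAndServe(":8080", nil)
}

With a variant like this, both the HighRequestLatency and HighErrorRate rules in step 3 have real series to evaluate. Which labels you attach (exact paths, path templates, etc.) is a design choice that directly affects series cardinality.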
2. Configuring Prometheus to Scrape Metrics
Edit the prometheus.yml file and add the target instance:
scrape_configs:
  - job_name: 'go-service'
    static_configs:
      - targets: ['your-go-service-ip:8080']
Once Prometheus is running, it will periodically pull /metrics data from this address. You can query metrics in the Prometheus Web UI, for example:
http_requests_total{job="go-service"}
3. Writing Alerting Rules
Alerting rules live in files referenced by Prometheus's rule_files setting. Create a rule file such as alerts.yml:
groups:
  - name: go_service_alerts
    rules:
      - alert: HighRequestLatency
        expr: histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le)) > 0.5
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "High latency on {{ $labels.instance }}"
          description: "95th percentile latency is above 500ms"
      - alert: HighErrorRate
        expr: sum(rate(http_requests_total{status=~"5.."}[5m])) / sum(rate(http_requests_total[5m])) > 0.05
        for: 3m
        labels:
          severity: critical
        annotations:
          summary: "High error rate on {{ $labels.instance }}"
          description: "Error rate is above 5%"
Reference this file in the Prometheus configuration:
rule_files:
  - "alerts.yml"
expr defines the trigger condition and for the duration it must hold; when the expression is first satisfied the alert enters the Pending state, and after the for period it becomes Firing and is forwarded to Alertmanager.
4. Integrating Alertmanager to Send Alerts
Alertmanager is responsible for deduplicating, grouping, and delivering notifications. Example alertmanager.yml configuration (email notifications):
route:
  receiver: email-notifications
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h

receivers:
  - name: email-notifications
    email_configs:
      - to: admin@example.com
        from: alertmanager@example.com
        smarthost: smtp.example.com:587
        auth_username: "alertmanager"
        auth_identity: "alertmanager@example.com"
        auth_password: "password"
Start Alertmanager and make sure its address is specified in the Prometheus configuration:
alerting:
  alertmanagers:
    - static_configs:
        - targets: ["localhost:9093"]
When an alert fires, Alertmanager sends email or other notifications according to its configuration (DingTalk, WeCom, Slack, and more are supported).
That's basically it. From exposing metrics in Go to Prometheus scraping, rule definitions, and Alertmanager notifications, the whole pipeline is clear and controllable. The keys are sensible metric design and alert thresholds that fit the business, to avoid both false alarms and missed alerts.


