Nginx Dynamic upstream Configuration: A Practical Summary
Author: 油墨香^_^
1. Introduction
As a high-performance web server and reverse proxy, Nginx plays a crucial role in modern internet architectures, and its upstream module is the core component behind Nginx load balancing. Traditional upstream configuration requires editing the configuration file and reloading Nginx, which is not flexible enough for dynamic, cloud-native environments. This article takes an in-depth look at the different ways to configure Nginx upstreams dynamically, from basic concepts to advanced practice.
2. upstream Basics
2.1 What Is an upstream
In Nginx, the upstream module defines a group of backend servers to which Nginx can proxy requests while load-balancing across them.
http {
upstream backend {
server backend1.example.com weight=5;
server backend2.example.com;
server backup1.example.com backup;
}
server {
location / {
proxy_pass http://backend;
}
}
}
2.2 upstream Load-Balancing Algorithms
Nginx upstream supports several load-balancing algorithms (a configuration sketch follows this list):
- Round robin - the default algorithm
- Weighted round robin
- IP hash
- Least connections
- Weighted least connections
- Random
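As a rough illustration of how an algorithm is selected, the sketch below (server addresses are placeholders, and the random directive requires a reasonably recent Nginx) enables least connections, IP hash, and random selection in three separate upstream blocks; adding weight= to a server line turns round robin or least connections into their weighted variants:
upstream backend_least_conn {
    least_conn;                    # pick the server with the fewest active connections
    server 10.0.0.1:80 weight=3;   # weight= makes this the weighted variant
    server 10.0.0.2:80;
}
upstream backend_ip_hash {
    ip_hash;                       # requests from the same client IP go to the same server
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}
upstream backend_random {
    random two least_conn;         # pick two servers at random, then the one with fewer connections
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}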
2.3 upstream Server Parameters
Each upstream server entry accepts a number of parameters (a combined example follows this list):
server address [parameters];
Commonly used parameters include:
- weight=number - server weight
- max_conns=number - maximum number of concurrent connections
- max_fails=number - maximum number of failed attempts
- fail_timeout=time - failure timeout window
- backup - backup server
- down - mark the server as unavailable
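A minimal sketch combining these parameters (addresses and values are illustrative):
upstream backend {
    server 10.0.0.1:80 weight=5 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:80 max_conns=200;
    server 10.0.0.3:80 backup;   # only receives traffic when the primary servers are unavailable
    server 10.0.0.4:80 down;     # administratively marked as unavailable
}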
3. Limitations of Traditional upstream Configuration
3.1 Problems with Static Configuration
The main problems with traditional upstream configuration are:
- Reload required: every change needs an nginx -s reload
- Delayed effect: the reload window can briefly affect service
- Poor fit for dynamic environments: in containerized, microservice architectures, service instances change frequently
- High operational overhead: manual intervention or complex automation scripts are needed
3.2 Why Dynamic Service Discovery Is Needed
In modern architectures, service discovery becomes a necessity:
- Service instances change dynamically in microservice architectures
- Pod IP addresses are not fixed on container orchestration platforms (Kubernetes)
- Auto-scaling scenarios require backend servers to be updated on the fly
4. Approaches to Dynamic upstream Configuration in Nginx
4.1 NGINX Plus (Commercial)
NGINX Plus ships with an official API for dynamic configuration. The example below uses the legacy upstream_conf interface; a note on the current api directive follows the curl examples:
http {
upstream backend {
zone backend 64k;
server 10.0.0.1:80;
}
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://backend;
}
# NGINX Plus API endpoint
location /upstream_conf {
upstream_conf;
allow 127.0.0.1;
deny all;
}
}
}
Managing the upstream dynamically through the API:
# Add a server
curl -X POST -d 'server=10.0.0.2:80' http://localhost/upstream_conf?upstream=backend
# Remove a server
curl -X DELETE "http://localhost/upstream_conf?upstream=backend&id=0"
# Inspect server status
curl http://localhost/upstream_conf?upstream=backend
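Note that upstream_conf is the legacy interface; in current NGINX Plus releases it has been replaced by the api directive, and upstreams are managed through REST endpoints under /api/<version>/http/upstreams/. A minimal sketch (the exact version prefix depends on the Plus release you run):
# NGINX Plus: current-style API endpoint (replaces upstream_conf)
location /api {
    api write=on;        # write=on allows POST/PATCH/DELETE, not just reads
    allow 127.0.0.1;
    deny all;
}
# Example calls (the /api/9/ prefix is illustrative):
#   GET    /api/9/http/upstreams/backend/servers
#   POST   /api/9/http/upstreams/backend/servers      {"server": "10.0.0.2:80"}
#   DELETE /api/9/http/upstreams/backend/servers/0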
4.2 The OpenResty Approach
OpenResty, built on Nginx and LuaJIT, offers powerful extension capabilities:
http {
lua_package_path "/path/to/lua/scripts/?.lua;;";
upstream backend {
server 0.0.0.1; # placeholder, never used directly
balancer_by_lua_block {
local balancer = require "ngx.balancer"
local upstream = require "upstream"
local peer = upstream.get_peer()
if peer then
balancer.set_current_peer(peer.ip, peer.port)
end
}
}
init_worker_by_lua_block {
local upstream = require "upstream"
upstream.init()
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
location /upstream {
content_by_lua_block {
local upstream = require "upstream"
if ngx.var.request_method == "GET" then
upstream.list_peers()
elseif ngx.var.request_method == "POST" then
upstream.add_peer(ngx.var.arg_ip, ngx.var.arg_port)
elseif ngx.var.request_method == "DELETE" then
upstream.remove_peer(ngx.var.arg_ip, ngx.var.arg_port)
end
}
}
}
}
The corresponding Lua module:
-- upstream.lua
local _M = {}
local peers = {}
local current_index = 1
function _M.init()
-- initialize from a configuration center or service discovery system
peers = {
{ip = "10.0.0.1", port = 80},
{ip = "10.0.0.2", port = 80}
}
end
function _M.get_peer()
if #peers == 0 then
return nil
end
local peer = peers[current_index]
current_index = current_index % #peers + 1
return peer
end
function _M.add_peer(ip, port)
table.insert(peers, {ip = ip, port = port})
ngx.say("Peer added: " .. ip .. ":" .. port)
end
function _M.remove_peer(ip, port)
for i, peer in ipairs(peers) do
if peer.ip == ip and peer.port == port then
table.remove(peers, i)
ngx.say("Peer removed: " .. ip .. ":" .. port)
return
end
end
ngx.say("Peer not found: " .. ip .. ":" .. port)
end
function _M.list_peers()
ngx.say("Current peers:")
for _, peer in ipairs(peers) do
ngx.say(peer.ip .. ":" .. peer.port)
end
end
return _M
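One caveat about this module: peers is a plain Lua table, which is private to each worker process, so a change made through the /upstream endpoint is only visible to the worker that happened to serve that request. A common remedy is to keep the list in a lua_shared_dict. The sketch below assumes a dict named upstream_peers declared in the http block and uses the cjson library bundled with OpenResty:
-- requires in the http {} block:  lua_shared_dict upstream_peers 1m;
local cjson = require "cjson.safe"
local dict  = ngx.shared.upstream_peers

-- store the whole peer list as JSON so every worker sees the same data
local function save_peers(peers)
    dict:set("peers", cjson.encode(peers))
end

local function load_peers()
    local raw = dict:get("peers")
    return raw and cjson.decode(raw) or {}
end
add_peer and remove_peer would then call save_peers after modifying the list returned by load_peers, instead of mutating a worker-local table.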
4.3 Third-Party Module: nginx-upsync-module
nginx-upsync-module is a popular third-party module that syncs upstream configuration from service discovery components such as Consul and etcd.
Compiling and installing:
# Download the Nginx source
wget http://nginx.org/download/nginx-1.20.1.tar.gz
tar -zxvf nginx-1.20.1.tar.gz
# Download nginx-upsync-module
git clone https://github.com/weibocom/nginx-upsync-module.git
# Build and install
cd nginx-1.20.1
./configure --add-module=../nginx-upsync-module
make && make install
Configuration example:
http {
upstream backend {
upsync 127.0.0.1:8500/v1/kv/upstreams/backend upsync_timeout=6m upsync_interval=500ms
upsync_type=consul strong_dependency=off;
upsync_dump_path /usr/local/nginx/conf/servers/servers_backend.conf;
include /usr/local/nginx/conf/servers/servers_backend.conf;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
# upsync status page
location /upstream_list {
upstream_show;
}
}
}
4.4 DNS-Based Dynamic Resolution
Nginx's DNS resolution can also be used for dynamic service discovery (see the caveat after the example):
http {
resolver 10.0.0.2 valid=10s;
upstream backend {
zone backend 64k;
server backend-service.namespace.svc.cluster.local service=http resolve;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
}
}
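A caveat on the example above: the resolve and service= parameters of the server directive have traditionally been NGINX Plus features, and only quite recent open-source releases accept resolve inside an upstream. A widely used workaround on open-source Nginx is to put the hostname in a variable, which makes Nginx re-resolve it at request time through the configured resolver; a minimal sketch with illustrative names:
http {
    resolver 10.0.0.2 valid=10s;
    server {
        listen 80;
        location / {
            # a variable in proxy_pass forces per-request DNS resolution,
            # honoring the valid= TTL above instead of resolving once at startup
            set $backend_host backend-service.namespace.svc.cluster.local;
            proxy_pass http://$backend_host:80;
        }
    }
}
The trade-off is that this bypasses the upstream block entirely, so weights, keepalive pools, and other upstream features no longer apply.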
5. Consul-Based Service Discovery Integration
5.1 Registering Services with Consul
First, register the services with Consul:
# Register a service
curl -X PUT -d '{
"ID": "backend1",
"Name": "backend",
"Address": "10.0.0.1",
"Port": 80,
"Tags": ["v1", "primary"]
}' http://127.0.0.1:8500/v1/agent/service/register
# Register another instance
curl -X PUT -d '{
"ID": "backend2",
"Name": "backend",
"Address": "10.0.0.2",
"Port": 80,
"Tags": ["v1", "secondary"]
}' http://127.0.0.1:8500/v1/agent/service/register
5.2 Nginx Configuration Integration
The same idea can be sketched with ngx_http_js_module (njs). Note, however, that njs cannot load npm packages such as consul and offers no upstream balancer hook, so the configuration and script below are best read as conceptual pseudocode; in practice this pattern is usually implemented with a sidecar tool such as consul-template or with OpenResty:
load_module modules/ngx_http_js_module.so;
http {
js_path "/etc/nginx/js/";
js_import main from consul_upstream.js;
upstream backend {
server 127.0.0.1:11111; # placeholder
js_filter main.resolve_backend;
}
server {
listen 80;
location / {
proxy_pass http://backend;
}
# dynamic update endpoint
location /upstream/update {
js_content main.update_upstream;
}
}
}
The JavaScript module (Node.js-style, conceptual):
// consul_upstream.js
const Consul = require('consul');
let consul;
let currentServers = [];
function initConsul() {
consul = new Consul({
host: '127.0.0.1',
port: 8500
});
// initial fetch of the service list
updateServiceList();
// poll for changes periodically
setInterval(updateServiceList, 5000);
}
function updateServiceList() {
consul.agent.service.list((err, services) => {
if (err) {
console.error('Consul error:', err);
return;
}
const backendServices = [];
for (const id in services) {
if (services[id].Service === 'backend') {
backendServices.push({
address: services[id].Address,
port: services[id].Port
});
}
}
currentServers = backendServices;
});
}
function resolve_backend(r) {
if (currentServers.length === 0) {
r.error('No backend servers available');
return;
}
// simple round robin
const server = currentServers[r.variables.requests % currentServers.length];
r.variables.backend_address = server.address;
r.variables.backend_port = server.port;
}
function update_upstream(r) {
updateServiceList();
r.headersOut['Content-Type'] = 'application/json';
r.return(200, JSON.stringify({
status: 'updated',
servers: currentServers
}));
}
export default { resolve_backend, update_upstream };
// initialize
initConsul();
6. Dynamic upstreams in Kubernetes
6.1 Using the NGINX Ingress Controller
In Kubernetes, the NGINX Ingress Controller manages upstreams automatically from Service endpoints:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: example.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: api-service
port:
number: 80
- path: /web
pathType: Prefix
backend:
service:
name: web-service
port:
number: 80
6.2 Implementing a Custom Controller
A custom upstream controller can be built as follows. Note that this particular approach still renders a configuration file and runs nginx -s reload, so it automates the traditional workflow rather than avoiding reloads altogether:
package main
import (
"context"
"encoding/json"
"fmt"
"net/http"
"os"
"os/exec"
"time"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
)
type UpstreamManager struct {
clientset *kubernetes.Clientset
nginxConfigPath string
}
func NewUpstreamManager() (*UpstreamManager, error) {
config, err := rest.InClusterConfig()
if err != nil {
return nil, err
}
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
return nil, err
}
return &UpstreamManager{
clientset: clientset,
nginxConfigPath: "/etc/nginx/conf.d/upstreams", // directory holding the generated *.conf files (must be included by nginx.conf)
}, nil
}
func (um *UpstreamManager) UpdateUpstream(serviceName, namespace string) error {
endpoints, err := um.clientset.CoreV1().Endpoints(namespace).Get(
context.TODO(), serviceName, metav1.GetOptions{})
if err != nil {
return err
}
var servers []string
for _, subset := range endpoints.Subsets {
for _, address := range subset.Addresses {
for _, port := range subset.Ports {
servers = append(servers,
fmt.Sprintf("server %s:%d;", address.IP, port.Port))
}
}
}
configContent := fmt.Sprintf(`
upstream %s {
%s
}`, serviceName, joinServers(servers))
err = os.WriteFile(fmt.Sprintf("%s/%s.conf", um.nginxConfigPath, serviceName),
[]byte(configContent), 0644)
if err != nil {
return err
}
// reload Nginx to pick up the new file
cmd := exec.Command("nginx", "-s", "reload")
return cmd.Run()
}
func (um *UpstreamManager) WatchServices() {
for {
services, err := um.clientset.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{})
if err != nil {
fmt.Printf("Error listing services: %v\n", err)
time.Sleep(5 * time.Second)
continue
}
for _, service := range services.Items {
if service.Spec.Type == corev1.ServiceTypeClusterIP {
err := um.UpdateUpstream(service.Name, service.Namespace)
if err != nil {
fmt.Printf("Error updating upstream for %s: %v\n", service.Name, err)
}
}
}
time.Sleep(30 * time.Second)
}
}
func joinServers(servers []string) string {
result := ""
for _, server := range servers {
result += server + "\n "
}
return result
}
func main() {
manager, err := NewUpstreamManager()
if err != nil {
panic(err)
}
go manager.WatchServices()
http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(map[string]string{"status": "healthy"})
})
http.ListenAndServe(":8080", nil)
}
7. Advanced Dynamic Configuration Strategies
7.1 Weight Adjustment Based on Runtime Metrics
Weights can be adjusted dynamically from backend performance metrics. In the sketch below the metrics are simulated with random numbers for illustration:
-- dynamic_weight.lua
local _M = {}
local metrics = {}
local weight_cache = {}
function _M.collect_metrics(ip, port)
-- simulate metric collection (replace with real monitoring data in practice)
local cpu_usage = math.random(10, 90)
local memory_usage = math.random(20, 80)
local active_connections = math.random(0, 1000)
metrics[ip .. ":" .. port] = {
cpu = cpu_usage,
memory = memory_usage,
connections = active_connections,
timestamp = ngx.now()
}
return metrics[ip .. ":" .. port]
end
function _M.calculate_weight(ip, port)
local metric = _M.collect_metrics(ip, port)
-- derive a weight from the metrics
local base_weight = 100
-- higher CPU usage -> lower weight
local cpu_factor = (100 - metric.cpu) / 100
-- higher memory usage -> lower weight
local memory_factor = (100 - metric.memory) / 100
-- more active connections -> lower weight
local conn_factor = math.max(0, 1 - metric.connections / 1000)
local calculated_weight = math.floor(base_weight * cpu_factor * memory_factor * conn_factor)
calculated_weight = math.max(1, math.min(calculated_weight, 100))
weight_cache[ip .. ":" .. port] = calculated_weight
return calculated_weight
end
function _M.get_weight(ip, port)
if not weight_cache[ip .. ":" .. port] then
return _M.calculate_weight(ip, port)
end
-- recompute the weight every 30 seconds
if ngx.now() - metrics[ip .. ":" .. port].timestamp > 30 then
return _M.calculate_weight(ip, port)
end
return weight_cache[ip .. ":" .. port]
end
return _M
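To actually apply these weights, the module can be combined with balancer_by_lua. The sketch below performs a weighted random pick over a hard-coded peer list; the list, the module path (dynamic_weight.lua on lua_package_path), and the selection strategy are illustrative, and a production setup would share the peer list across workers rather than hard-code it:
upstream backend_weighted {
    server 0.0.0.1;   # placeholder, never used directly
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        local dw = require "dynamic_weight"   -- the module sketched above
        -- illustrative static peer list
        local peers = {
            { ip = "10.0.0.1", port = 80 },
            { ip = "10.0.0.2", port = 80 },
        }
        -- weighted random selection over the computed weights
        local total, weights = 0, {}
        for i, p in ipairs(peers) do
            weights[i] = dw.get_weight(p.ip, p.port)
            total = total + weights[i]
        end
        local r = math.random(total)
        for i, p in ipairs(peers) do
            r = r - weights[i]
            if r <= 0 then
                local ok, err = balancer.set_current_peer(p.ip, p.port)
                if not ok then
                    ngx.log(ngx.ERR, "failed to set peer: ", err)
                end
                return
            end
        end
    }
}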
7.2 Health Checks and Circuit Breaking
Intelligent health checking and circuit breaking can be layered on top. The check* directives below come from the third-party nginx_upstream_check_module (also bundled with Tengine), not from stock Nginx:
http {
upstream backend {
server 10.0.0.1:80;
server 10.0.0.2:80;
# health check configuration (nginx_upstream_check_module)
check interval=3000 rise=2 fall=3 timeout=1000 type=http;
check_http_send "HEAD /health HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}
server {
listen 80;
location / {
proxy_pass http://backend;
# circuit breaking via proxy_next_upstream
proxy_next_upstream error timeout http_500 http_502 http_503;
proxy_next_upstream_tries 3;
proxy_next_upstream_timeout 10s;
}
# health check status page
location /status {
check_status;
access_log off;
}
}
}
Custom health check logic in Lua:
-- health_check.lua
local _M = {}
local health_status = {}
local check_interval = 5 -- check interval (seconds)
local failure_threshold = 3 -- failure threshold
function _M.check_health(ip, port)
local http = require "resty.http"
local httpc = http.new()
httpc:set_timeout(1000) -- 1 second timeout
local res, err = httpc:request_uri("http://" .. ip .. ":" .. port .. "/health", {
method = "GET",
keepalive_timeout = 60000, -- keepalive idle timeout in milliseconds
keepalive_pool = 10
})
local key = ip .. ":" .. port
if not health_status[key] then
health_status[key] = {
consecutive_failures = 0,
last_check = ngx.now(),
healthy = true
}
end
if not res or res.status ~= 200 then
health_status[key].consecutive_failures = health_status[key].consecutive_failures + 1
if health_status[key].consecutive_failures >= failure_threshold then
health_status[key].healthy = false
end
else
health_status[key].consecutive_failures = 0
health_status[key].healthy = true
end
health_status[key].last_check = ngx.now()
return health_status[key].healthy
end
function _M.is_healthy(ip, port)
local key = ip .. ":" .. port
if not health_status[key] then
return _M.check_health(ip, port)
end
-- re-check if the interval has elapsed
if ngx.now() - health_status[key].last_check > check_interval then
return _M.check_health(ip, port)
end
return health_status[key].healthy
end
function _M.get_health_status()
return health_status
end
return _M
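Nothing in this module triggers checks on its own; is_healthy only re-checks lazily when it happens to be called. One way to drive active checks is a timer started from init_worker_by_lua_block, sketched below with an illustrative static peer list (ngx.timer.every callbacks may use cosocket HTTP calls such as resty.http):
init_worker_by_lua_block {
    local hc = require "health_check"   -- the module above
    -- illustrative static peer list; in practice this would come from service discovery
    local peers = {
        { ip = "10.0.0.1", port = 80 },
        { ip = "10.0.0.2", port = 80 },
    }
    local ok, err = ngx.timer.every(5, function()
        for _, p in ipairs(peers) do
            hc.check_health(p.ip, p.port)
        end
    end)
    if not ok then
        ngx.log(ngx.ERR, "failed to create health check timer: ", err)
    end
}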
8. Performance Optimization and Best Practices
8.1 Connection Pool Tuning
http {
upstream backend {
server 10.0.0.1:80;
server 10.0.0.2:80;
# upstream keepalive connection pool
keepalive 32;
keepalive_requests 100;
keepalive_timeout 60s;
}
server {
location / {
proxy_pass http://backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
# buffer tuning
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
proxy_busy_buffers_size 8k;
# timeouts
proxy_connect_timeout 3s;
proxy_send_timeout 10s;
proxy_read_timeout 10s;
}
}
}
8.2 Caching and Rate Limiting
http {
# rate-limiting zone
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
# proxy cache settings
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
max_size=10g inactive=60m use_temp_path=off;
upstream backend {
server 10.0.0.1:80;
server 10.0.0.2:80;
}
server {
location /api/ {
# rate limiting
limit_req zone=api burst=20 nodelay;
# caching
proxy_cache my_cache;
proxy_cache_valid 200 302 5m;
proxy_cache_valid 404 1m;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
proxy_pass http://backend;
}
}
}
9. Monitoring and Logging
9.1 Detailed Access Logs
http {
log_format upstream_log '[$time_local] $remote_addr - $remote_user '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'upstream: $upstream_addr '
'upstream_status: $upstream_status '
'request_time: $request_time '
'upstream_response_time: $upstream_response_time '
'upstream_connect_time: $upstream_connect_time';
upstream backend {
server 10.0.0.1:80;
server 10.0.0.2:80;
}
server {
access_log /var/log/nginx/access.log upstream_log;
location / {
proxy_pass http://backend;
}
}
}
9.2 Status Monitoring
server {
listen 8080;
# basic status (stub_status)
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
# upstream status (simply proxied through to the backends' own status endpoint)
location /upstream_status {
proxy_pass http://backend;
access_log off;
}
# health check endpoint
location /health {
access_log off;
return 200 "healthy\n";
add_header Content-Type text/plain;
}
}
10. Security Considerations
10.1 Securing the Configuration API
# protecting the dynamic configuration API
location /upstream_api {
# IP allowlist
allow 10.0.0.0/8;
allow 172.16.0.0/12;
allow 192.168.0.0/16;
deny all;
# basic authentication
auth_basic "Upstream API";
auth_basic_user_file /etc/nginx/.htpasswd;
# rate limiting (the api_admin zone is defined in the http block; see the note below)
limit_req zone=api_admin burst=5 nodelay;
# restrict HTTP methods
if ($request_method !~ ^(GET|POST|DELETE)$) {
return 405;
}
proxy_pass http://upstream_manager;
}
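The location above references a limit_req zone named api_admin and an upstream called upstream_manager without defining them; both must exist in the http block or Nginx will refuse to load the configuration. Illustrative definitions (the rate and address are placeholders):
http {
    # rate-limit zone used by the API location above
    limit_req_zone $binary_remote_addr zone=api_admin:1m rate=2r/s;

    # backend that actually implements the dynamic-configuration API
    upstream upstream_manager {
        server 127.0.0.1:8081;
    }
}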
10.2 Input Validation
-- input_validation.lua
local _M = {}
function _M.validate_ip(ip)
if not ip or type(ip) ~= "string" then
return false
end
local chunks = {ip:match("^(%d+)%.(%d+)%.(%d+)%.(%d+)$")}
if #chunks ~= 4 then
return false
end
for _, v in pairs(chunks) do
if tonumber(v) > 255 then
return false
end
end
return true
end
function _M.validate_port(port)
if not port then
return false
end
local port_num = tonumber(port)
if not port_num or port_num < 1 or port_num > 65535 then
return false
end
return true
end
function _M.sanitize_input(input)
if not input then
return nil
end
-- strip potentially dangerous characters
local sanitized = input:gsub("[<>%$%[%]%{%}]", "")
return sanitized
end
return _M
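Tying this back to the management endpoint from section 4.2, a sketch of validating input before touching the peer list could look like the following; the URL, and the wiring to the upstream module from that earlier example, are illustrative:
location /upstream/add {
    content_by_lua_block {
        local validate = require "input_validation"   -- the module above
        local upstream = require "upstream"            -- module from section 4.2 (assumed on lua_package_path)
        local ip   = ngx.var.arg_ip
        local port = ngx.var.arg_port
        if not validate.validate_ip(ip) or not validate.validate_port(port) then
            ngx.status = ngx.HTTP_BAD_REQUEST
            ngx.say("invalid ip or port")
            return
        end
        upstream.add_peer(ip, port)
    }
}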
11. Troubleshooting and Debugging
11.1 Debug Configuration
server {
# debug-level error log
error_log /var/log/nginx/debug.log debug;
location / {
# debugging response headers
add_header X-Upstream-Addr $upstream_addr;
add_header X-Upstream-Status $upstream_status;
add_header X-Request-ID $request_id;
proxy_pass http://backend;
# also log subrequests
log_subrequest on;
}
}
11.2 Common Problems and Solutions
- Connection timeouts: tune proxy_connect_timeout
- Upstream servers reported unavailable: review the health check configuration
- Memory leaks: monitor the memory usage of the ngx_http_lua module
- Performance problems: tune the connection pool and buffer settings
12. Summary
Dynamic upstream configuration is a key building block of modern microservice architectures. Among the approaches covered in this article, you can pick the one that fits your requirements:
- NGINX Plus: suited to enterprise environments; full-featured but commercial
- OpenResty: highly flexible, good for customized requirements
- Third-party modules: a balance between features and cost
- DNS resolution: simple to use, suitable for basic scenarios
- Custom controllers: tight integration in Kubernetes environments
Whichever approach you choose, performance, security, monitoring, and maintainability all need to be considered. Dynamic upstream configuration greatly improves system elasticity and maintainability and has become an indispensable part of modern cloud-native architectures.
For real production environments, it is recommended to:
- Roll changes out progressively
- Build a solid monitoring and alerting system
- Run failure drills regularly
- Keep configuration under version control
- Have a rollback mechanism in place
