Practical use of Nginx keep-alive persistent connections
Author: 老猫喜欢今日爬山
How to enable and support persistent connections
When nginx is used as a reverse proxy, two things are needed to support persistent (keep-alive) connections:
- the connection from the client to nginx must be a persistent connection
- the connection from nginx to the backend server must be a persistent connection
From the point of view of the HTTP protocol, nginx plays the role of an HTTP server towards the client, while towards the real backend server (called the upstream in nginx terminology) it plays the role of an HTTP client.
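As a minimal sketch of this dual role (a fragment of nginx.conf; the backend address is borrowed from the Tomcat test later in this article and is only a placeholder):

    http {
        upstream backend {
            server 192.168.44.105:8080;      # the "real" HTTP server, i.e. the upstream
        }
        server {
            listen 80;                       # nginx acts as the HTTP server towards the client...
            location / {
                proxy_pass http://backend;   # ...and as the HTTP client towards the upstream
            }
        }
    }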
Two requirements must be met for the client and Nginx to keep a persistent connection:
- the HTTP request sent by the client must ask for keep-alive
- nginx must be configured to support keep-alive
1. Keep-alive between the client and Nginx
By default, nginx already enables keep-alive support for client connections. In ordinary scenarios this can be used as-is, but for some special scenarios it is still worth tuning a few individual parameters.
The keepalive_timeout directive
The syntax of the keepalive_timeout directive:
Syntax:  keepalive_timeout timeout [header_timeout];
Default: keepalive_timeout 75s;
Context: http, server, location
The first parameter sets the timeout during which a keep-alive client connection will stay open on the server side. A value of 0 disables keep-alive client connections. The optional second parameter sets the value of the "Keep-Alive: timeout=time" field in the response header. The two values may differ.
Note: the default of 75s is usually sufficient. For scenarios such as internal server-to-server communication with relatively large requests, it is reasonable to raise it to 120s or 300s. The second parameter can usually be left unset.
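As a sketch of raising the timeout for such an internal service (120s is just the illustrative figure from the note above):

    http {
        # keep idle client connections open for 120s on the server side, and
        # also send "Keep-Alive: timeout=120" back to the client
        keepalive_timeout 120s 120s;
    }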
The keepalive_requests directive
The keepalive_requests directive sets the maximum number of requests that can be served over one keep-alive connection. After that many requests have been made, the connection is closed. The default is 100.
What this parameter really means is that once a keep-alive connection has been established, nginx attaches a counter to it, recording how many client requests have already been received and processed on that connection. When the counter reaches the configured maximum, nginx forcibly closes the connection, forcing the client to establish a new one.
This parameter is often overlooked, because in most cases, when the QPS (requests per second) is not very high, the default of 100 is good enough. However, for high-QPS scenarios (say above 10,000 QPS, or even 30,000, 50,000 and beyond), the default of 100 is far too low.
A quick calculation: at QPS = 10,000, the client sends 10,000 requests per second (usually spread over a number of persistent connections), and each connection can serve at most 100 requests. That means on average 100 persistent connections are closed by nginx every second, and to sustain the QPS the client has to open roughly 100 new connections every second. Run netstat on the client machine and you will see a large number of sockets in TIME_WAIT, even though keep-alive is already in effect between the client and nginx.
For high-QPS scenarios it is therefore well worth increasing this parameter, to avoid large numbers of connections being created and immediately thrown away, and to cut down on TIME_WAIT sockets.
Note that in newer nginx versions (1.19.10 and later) the default for keepalive_requests has been raised from 100 to 1000; once a single connection has served that many requests, it is torn down.
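A sketch of raising the limit for a high-QPS scenario (10,000 is just an illustrative value, not a recommendation from the nginx documentation):

    http {
        # allow up to 10000 requests on one client keep-alive connection before
        # nginx closes it (default: 100, or 1000 on nginx 1.19.10 and later)
        keepalive_requests 10000;
    }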
The keepalive_disable directive
Disables keep-alive connections for certain browsers. Default: msie6.
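For example (msie6 and safari are the two browser values this directive recognizes; adding safari here is only a sketch, not a recommendation):

    # do not keep connections alive for old MSIE 6 (the default) or old Safari clients
    keepalive_disable msie6 safari;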
The send_timeout directive
The timeout between two successive write operations to the client; if it is exceeded, the connection is closed. Default: 60s.
There is a pitfall here: a long-running synchronous operation may cause the client connection to be dropped.
In other words, once Nginx has accepted the client connection, if during a response the server cannot send the client anything for longer than send_timeout, it closes the connection automatically.
http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;

    keepalive_timeout  65 65;   # if there is no activity for this long, keep-alive lapses and the connection is closed
    keepalive_time     1h;      # total lifetime of one TCP connection; after this it is forcibly closed
    send_timeout       60;      # default 60s; pitfall: a slow operation that exceeds send_timeout gets the connection
                                # forcibly closed (it applies while preparing the next write, not to the whole transfer)
    keepalive_requests 1000;    # maximum number of requests served over one reused TCP connection
}
2. Keep-alive between Nginx and the backend server
2.1 Configuration in the upstream block
keepalive 100;
The number of idle keep-alive connections preserved to the upstream servers (think of it as a connection pool).
keepalive_timeout 65;
How long an idle upstream connection is kept open (in seconds).
keepalive_requests 10000;
The maximum number of requests that can be served over one reused upstream TCP connection before it is closed.
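Putting the three directives together, a sketch of an upstream block (the backend address is a placeholder; keepalive_timeout and keepalive_requests are only valid inside upstream since nginx 1.15.3):

    upstream tomcat_backend {
        server 192.168.44.105:8080;
        keepalive 100;              # idle connections kept open to the upstream, per worker process
        keepalive_timeout 65s;      # how long an idle upstream connection stays open
        keepalive_requests 10000;   # requests served over one upstream connection before it is closed
    }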
2.2 Configuration in the server block
proxy_http_version 1.1;
Sets the HTTP version used for requests to the upstream. The default is HTTP/1.0, which only supports keep-alive if a "Connection: keep-alive" header is added to the request, whereas HTTP/1.1 supports it by default.
proxy_set_header Connection "";
Clears the Connection header so that a "close" value is not forwarded to the upstream.
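And the matching proxy settings on the server side, assuming the tomcat_backend upstream from the sketch above:

    server {
        listen 80;
        location / {
            proxy_pass http://tomcat_backend;
            proxy_http_version 1.1;          # keep-alive towards the upstream requires HTTP/1.1
            proxy_set_header Connection "";  # clear the Connection header so "close" is not forwarded
        }
    }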
3. Load testing
3.1 Client connecting directly to Nginx
Server Software:        nginx/1.21.6
Server Hostname:        192.168.44.102
Server Port:            80
Document Path:          /
Document Length:        16 bytes
Concurrency Level:      30
Time taken for tests:   13.035 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      25700000 bytes
HTML transferred:       1600000 bytes
Requests per second:    7671.48 [#/sec] (mean)
Time per request:       3.911 [ms] (mean)
Time per request:       0.130 [ms] (mean, across all concurrent requests)
Transfer rate:          1925.36 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.4      0      12
Processing:     1    3   1.0      3      14
Waiting:        0    3   0.9      3      14
Total:          2    4   0.9      4      14

Percentage of the requests served within a certain time (ms)
  50% 4   66% 4   75% 4   80% 4   90% 5
  95% 5   98% 6   99% 7   100% 14 (longest request)
3.2 Client connecting to Nginx, reverse-proxying to Nginx
Server Software:        nginx/1.21.6
Server Hostname:        192.168.44.101
Server Port:            80
Document Path:          /
Document Length:        16 bytes
Concurrency Level:      30
Time taken for tests:   25.968 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      25700000 bytes
HTML transferred:       1600000 bytes
Requests per second:    3850.85 [#/sec] (mean)
Time per request:       7.790 [ms] (mean)
Time per request:       0.260 [ms] (mean, across all concurrent requests)
Transfer rate:          966.47 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0      13
Processing:     3    8   1.4      7      22
Waiting:        1    7   1.4      7      22
Total:          3    8   1.4      7      22

Percentage of the requests served within a certain time (ms)
  50% 7   66% 8    75% 8    80% 8    90% 9
  95% 10  98% 12   99% 13   100% 22 (longest request)
3.3 Client connecting directly to Tomcat
Server Software:
Server Hostname:        192.168.44.105
Server Port:            8080
Document Path:          /
Document Length:        7834 bytes
Concurrency Level:      30
Time taken for tests:   31.033 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      804300000 bytes
HTML transferred:       783400000 bytes
Requests per second:    3222.38 [#/sec] (mean)
Time per request:       9.310 [ms] (mean)
Time per request:       0.310 [ms] (mean, across all concurrent requests)
Transfer rate:          25310.16 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.3      0      15
Processing:     0    9   7.8      7     209
Waiting:        0    9   7.2      7     209
Total:          0    9   7.8      7     209

Percentage of the requests served within a certain time (ms)
  50% 7   66% 9    75% 11   80% 13   90% 18
  95% 22  98% 27   99% 36   100% 209 (longest request)
3.4 Client connecting to Nginx, reverse-proxying to Tomcat, with keepalive enabled
Server Software:        nginx/1.21.6
Server Hostname:        192.168.44.101
Server Port:            80
Document Path:          /
Document Length:        7834 bytes
Concurrency Level:      30
Time taken for tests:   23.379 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      806500000 bytes
HTML transferred:       783400000 bytes
Requests per second:    4277.41 [#/sec] (mean)
Time per request:       7.014 [ms] (mean)
Time per request:       0.234 [ms] (mean, across all concurrent requests)
Transfer rate:          33688.77 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       9
Processing:     1    7   4.2      6     143
Waiting:        1    7   4.2      6     143
Total:          1    7   4.2      6     143

Percentage of the requests served within a certain time (ms)
  50% 6   66% 7    75% 7    80% 7    90% 8
  95% 10  98% 13   99% 16   100% 143 (longest request)
3.5 Client connecting to Nginx, reverse-proxying to Tomcat, without keepalive
Server Software:        nginx/1.21.6
Server Hostname:        192.168.44.101
Server Port:            80
Document Path:          /
Document Length:        7834 bytes
Concurrency Level:      30
Time taken for tests:   33.814 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      806500000 bytes
HTML transferred:       783400000 bytes
Requests per second:    2957.32 [#/sec] (mean)
Time per request:       10.144 [ms] (mean)
Time per request:       0.338 [ms] (mean, across all concurrent requests)
Transfer rate:          23291.74 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       9
Processing:     1   10   5.5      9     229
Waiting:        1   10   5.5      9     229
Total:          1   10   5.5      9     229

Percentage of the requests served within a certain time (ms)
  50% 9   66% 10   75% 11   80% 11   90% 13
  95% 14  98% 17   99% 19   100% 229 (longest request)
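Comparing 3.4 with 3.5: with upstream keepalive enabled the proxied Tomcat test reaches 4277.41 requests/sec at a mean of 7.014 ms per request, versus 2957.32 requests/sec at 10.144 ms without it, roughly a 45% difference, and the keepalive variant even beats hitting Tomcat directly (3222.38 requests/sec in 3.3). The intended difference between the two runs is simply whether the upstream keep-alive configuration from section 2 is present; as a sketch of the "without keepalive" variant, for contrast:

    # "without keepalive" variant: no keepalive pool, default proxy_http_version 1.0,
    # Connection: close forwarded to the upstream, so every proxied request opens a new TCP connection
    upstream tomcat_backend {
        server 192.168.44.105:8080;
    }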
This concludes the article on the practical use of Nginx keep-alive persistent connections. For more on Nginx keep_alive, search the earlier articles on 脚本之家 or browse the related articles below; we hope you will continue to support 脚本之家.