Installing RocketMQ with docker (with pit-filling notes on "connect to ... failed")
Author: 郭咖啡
Version installed here: the latest at the time of writing (rocketmq-4.4.0); everything below corresponds to 4.4.0.
I. Deploying RocketMQ with docker
1. Brief overview
RocketMQ has three key components: Namesrv (Name Server), Broker, and Console-ng (the management console).
Namesrv (Name Server): Namesrv is RocketMQ's naming service, responsible for managing routing information for the whole RocketMQ cluster. Every RocketMQ cluster needs at least one Namesrv instance. It maintains Broker network information and Topic routing rules and serves this metadata to Producers and Consumers.
Broker: the Broker is RocketMQ's message storage and processing node, responsible for storing messages, serving message read/write requests, and forwarding messages. A RocketMQ cluster can contain multiple Broker instances; each Broker registers its metadata and routing information with the Namesrv, enabling highly available, load-balanced message delivery. The Broker also tracks consumer offsets (consumption progress).
Console-ng (management console): Console-ng is a management console for RocketMQ, used to manage and monitor a RocketMQ cluster. It provides a graphical interface for configuring Topics and Consumers, querying and tracing messages, and viewing monitoring metrics. It is very useful for cluster monitoring and operations.
Together these three components form RocketMQ's core architecture and cooperate to deliver highly available, high-performance messaging. You can build a RocketMQ cluster by starting a Namesrv and a Broker, then use Console-ng to manage and monitor it.
2. Pulling the RocketMQ image / RocketMQ console image with docker
Pull the latest RocketMQ image; to download a specific version, check the available tags on Docker Hub.
Note: Namesrv and Broker both use the same image, rocketmqinc/rocketmq.
# latest (rocketmq)
docker pull rocketmqinc/rocketmq

# specific version (rocketmq)
docker pull rocketmqinc/rocketmq:<version>

# latest (RocketMQ console; the startup command in section 5 uses this image)
docker pull styletang/rocketmq-console-ng
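After pulling, you can confirm both images are present locally:

docker images | grep rocketmq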
3. Obtaining the RocketMQ configuration file
Start RocketMQ once and copy the configuration file out of the running docker container; once the file has been copied out, this temporary container can be removed.
docker run -d --name rmqnamesrv -p 9876:9876 rocketmqinc/rocketmq:latest sh mqnamesrv

# enter the container (to locate broker.conf)
docker exec -it <container ID> /bin/bash

# copy the file from the container to the VM
docker cp <container ID>:/opt/rocketmq-4.4.0/conf/broker.conf <VM path>
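As a concrete instance, using the container name rmqnamesrv from the command above and the /mydata/rocketmq/conf directory that the startup commands in section 5 mount:

# list the conf directory inside the container to locate broker.conf
docker exec rmqnamesrv ls /opt/rocketmq-4.4.0/conf

# copy broker.conf to the host directory mounted later
mkdir -p /mydata/rocketmq/conf
docker cp rmqnamesrv:/opt/rocketmq-4.4.0/conf/broker.conf /mydata/rocketmq/conf/broker.conf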
4. RocketMQ configuration file (broker.conf)
Pay particular attention to brokerIP1 and namesrvAddr below: both must point to the VM (host) IP, not a container-internal address. Getting this wrong is the root cause of error one in part II.
longPollingEnable=true
offsetCheckInSlave=false
# NameServer address list, semicolon-separated
namesrvAddr=172.16.234.150:9876
fetchNamesrvAddrByAddressServer=false
# whether the Broker may auto-create subscription groups; recommended on for dev/test, off in production
autoCreateSubscriptionGroup=true
# whether the Broker may auto-create Topics; recommended on for dev/test, off in production
autoCreateTopicEnable=true
sendThreadPoolQueueCapacity=100000
clusterTopicEnable=true
filterServerNums=1
pullMessageThreadPoolNums=20
# broker name; names may repeat. For manageability, give each master its own name and let its slaves share it, e.g. master broker-a's slave is also named broker-a
brokerName=knBroker
#rocketmqHome=/usr/local/alibaba-rocketmq/
sendMessageThreadPoolNums=24
# 0 means Master, >0 means Slave
brokerId=0
brokerIP1=172.16.234.150
brokerTopicEnable=true
brokerPermission=6
shortPollingTimeMills=1000
clientManageThreadPoolNums=16
adminBrokerThreadPoolNums=16
flushConsumerOffsetInterval=5000
flushConsumerOffsetHistoryInterval=60000
# number of queues created by default when a topic that does not exist on the server is auto-created on send
defaultTopicQueueNums=8
rejectTransactionMessage=false
notifyConsumerIdsChangedEnable=true
pullThreadPoolQueueCapacity=100000
# name of the cluster this broker belongs to
brokerClusterName=DefaultCluster
putMsgIndexHightWater=600000
maxTransferBytesOnMessageInDisk=65536
# physical-file disk space usage threshold (%)
diskMaxUsedSpaceRatio=75
checkCRCOnRecover=true
haSlaveFallbehindMax=268435
deleteConsumeQueueFilesInterval=100
cleanResourceInterval=10000
maxMsgsNumBatch=64
flushConsumeQueueLeastPages=2
syncFlushTimeout=5000
# time of day to delete files, default 4 a.m.
deleteWhen=04
# role of this Broker
brokerRole=ASYNC_MASTER
destroyMapedFileIntervalForcibly=120000
# size of each commitLog file, default 1G
mapedFileSizeCommitLog=1073741824
haSendHeartbeatInterval=5000
# flush-to-disk mode
flushDiskType=ASYNC_FLUSH
cleanFileForciblyEnable=true
haHousekeepingInterval=20000
redeleteHangedFileInterval=120000
# maximum message size
maxMessageSize=524288
flushCommitLogTimed=false
haMasterAddress=
maxTransferCountOnMessageInDisk=4
flushIntervalCommitLog=1000
# file retention time; the default is 48 hours (72 here)
fileReservedTime=72
flushCommitLogThoroughInterval=10000
maxHashSlotNum=5000
maxIndexNum=20000
messageIndexEnable=true
# storage root path
storePathRootDir=/root/store
# commitLog storage path
storePathCommitLog=/root/store/commitlog
# consume queue storage path
storePathConsumeQueue=/root/store/consumequeue
# message index storage path
storePathIndex=/root/store/index
haListenPort=10912
flushDelayOffsetInterval=10000
haTransferBatchSize=32768
deleteCommitLogFilesInterval=100
maxTransferBytesOnMessageInMemory=262144
accessMessageInMemoryMaxRatio=40
flushConsumeQueueThoroughInterval=60000
flushIntervalConsumeQueue=1000
maxTransferCountOnMessageInMemory=32
messageIndexSafe=false
# each ConsumeQueue file holds 300k entries by default; adjust to your workload
mapedFileSizeConsumeQueue=6000000
messageDelayLevel=1s 5s 10s 30s 1m 2m 3m 4m 5m 6m 7m 8m 9m 10m 20m 30m 1h 2h
flushCommitLogLeastPages=4
serverChannelMaxIdleTimeSeconds=120
# port the Broker listens on for client traffic
listenPort=10911
serverCallbackExecutorThreads=0
serverAsyncSemaphoreValue=64
serverSocketSndBufSize=131072
serverSelectorThreads=3
serverPooledByteBufAllocatorEnable=false
serverWorkerThreads=8
serverSocketRcvBufSize=131072
serverOnewaySemaphoreValue=256
clientWorkerThreads=4
connectTimeoutMillis=3000
clientSocketRcvBufSize=131072
clientOnewaySemaphoreValue=2048
clientChannelMaxIdleTimeSeconds=120
clientPooledByteBufAllocatorEnable=false
clientAsyncSemaphoreValue=2048
channelNotActiveInterval=60000
clientCallbackExecutorThreads=2
clientSocketSndBufSize=131072
5. Starting RocketMQ with docker
The commands below mount the configuration file, logs, and storage on the local host.
# 1. namesrv
docker run -d -p 9876:9876 \
-v /mydata/rocketmq/namesrv/logs:/root/logs \
-v /mydata/rocketmq/namesrv/store:/root/store \
-v /mydata/rocketmq/conf/broker.conf:/opt/rocketmq-4.4.0/conf/broker.conf \
--name rmqnamesrv \
rocketmqinc/rocketmq:latest sh mqnamesrv

# 2. broker
docker run -d -p 10911:10911 -p 10909:10909 \
-v /mydata/rocketmq/broker/logs:/root/logs \
-v /mydata/rocketmq/broker/store:/root/store \
-v /mydata/rocketmq/conf/broker.conf:/opt/rocketmq-4.4.0/conf/broker.conf \
--name rmqbroker \
--add-host namesrv:172.16.234.150 \
-e "NAMESRV_ADDR=namesrv:9876" \
rocketmqinc/rocketmq:latest \
sh mqbroker -n namesrv:9876 \
-c /opt/rocketmq-4.4.0/conf/broker.conf autoCreateTopicEnable=true

# 3. Console-ng
docker run --name rocketmq-console \
-e "JAVA_OPTS=-Drocketmq.namesrv.addr=172.16.234.150:9876 \
-Dcom.rocketmq.sendMessageWithVIPChannel=false" \
-p 8080:8080 -t styletang/rocketmq-console-ng
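Once all three containers are up, it is worth checking that the broker has registered with the name server and is advertising the host IP rather than a container-internal one. A quick check, assuming the container names used above:

# clusterList prints each broker's advertised address; it should show 172.16.234.150:10911
docker exec -it rmqbroker sh mqadmin clusterList -n 172.16.234.150:9876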
6. Opening the RocketMQ console
Address (IP + rocketmq-console port): http://172.16.234.150:8080/#/
II. Pit-filling notes
Error one: connect to <172.17.0.3:10909> failed
1. After starting the Java project, sending an MQ message fails with the error below (full stack trace at the end of this section).
2. In the RocketMQ console, the cluster address shows the IP assigned by docker.
Opening the RocketMQ console, the cluster address displayed is 172.17.0.3:10909 rather than 172.16.234.150:10909. 172.17.0.3 is in fact an IP assigned internally by docker, and it needs to be changed to the VM's IP.
To solve this I first created a docker network [docker network create rocketmq-net plus docker run -d --network rocketmq-net …], but that did not fix the problem. The actual cause was the RocketMQ configuration file: check your broker.conf (brokerIP1 in particular), or use the configuration file provided above.
com.himyidea.framework.mq.MQRuntimeException: EC = 900101: MSG = 900101 | msg=MQ Client Failure
    at com.himyidea.framework.mq.producer.impl.GeneralMQProducer.doSend(GeneralMQProducer.java:130)
    at com.himyidea.framework.mq.producer.impl.GeneralMQProducer.sendMessage(GeneralMQProducer.java:105)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.alibaba.rocketmq.client.exception.MQClientException: Send [3] times, still failed, cost [9073]ms, Topic: report_data_topic, BrokersSent: [broker-a, broker-a, broker-a]
See http://docs.aliyun.com/cn#/pub/ons/faq/exceptions&send_msg_failed for further details.
    at com.alibaba.rocketmq.client.impl.producer.DefaultMQProducerImpl.sendDefaultImpl(DefaultMQProducerImpl.java:522)
    at com.alibaba.rocketmq.client.impl.producer.DefaultMQProducerImpl.send(DefaultMQProducerImpl.java:1030)
    at com.alibaba.rocketmq.client.impl.producer.DefaultMQProducerImpl.send(DefaultMQProducerImpl.java:989)
    at com.alibaba.rocketmq.client.producer.DefaultMQProducer.send(DefaultMQProducer.java:90)
    at com.himyidea.framework.mq.producer.impl.GeneralMQProducer.doSend(GeneralMQProducer.java:126)
    ... 85 more
Caused by: com.alibaba.rocketmq.remoting.exception.RemotingConnectException: connect to <172.17.0.3:10909> failed
    at com.alibaba.rocketmq.remoting.netty.NettyRemotingClient.invokeSync(NettyRemotingClient.java:360)
    at com.alibaba.rocketmq.client.impl.MQClientAPIImpl.sendMessageSync(MQClientAPIImpl.java:267)
    at com.alibaba.rocketmq.client.impl.MQClientAPIImpl.sendMessage(MQClientAPIImpl.java:251)
    at com.alibaba.rocketmq.client.impl.MQClientAPIImpl.sendMessage(MQClientAPIImpl.java:214)
    at com.alibaba.rocketmq.client.impl.producer.DefaultMQProducerImpl.sendKernelImpl(DefaultMQProducerImpl.java:671)
    at com.alibaba.rocketmq.client.impl.producer.DefaultMQProducerImpl.sendDefaultImpl(DefaultMQProducerImpl.java:440)
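With the config mounted at /mydata/rocketmq/conf/broker.conf as in the startup commands above, a minimal way to verify and apply the fix is:

# brokerIP1 must be the VM/host IP and namesrvAddr must point at the host's 9876
grep -E 'brokerIP1|namesrvAddr' /mydata/rocketmq/conf/broker.conf

# restart the broker so it re-registers with the corrected address
docker restart rmqbroker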
Error two: maybe your broker machine memory too small
Out of memory. I assumed the docker start command was missing a memory setting, but the real cause was insufficient available memory on the VM.
Caused by: com.alibaba.rocketmq.client.exception.MQBrokerException: CODE: 14 DESC: service not available now, maybe disk full, CL: 0.99 CQ: -1.00 INDEX: -1.00, maybe your broker machine memory too small.
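The error message actually covers two conditions: a nearly full disk (CL here is the commitlog disk usage ratio, so 0.99 means 99%, well above the diskMaxUsedSpaceRatio=75 threshold in the config above) and host memory pressure. A quick sketch of checks on the VM, using the mount paths from section 5:

# available memory on the VM
free -h

# disk usage where the broker stores data (mounted from /mydata/rocketmq/broker/store)
df -h /mydata/rocketmq/broker/store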
This concludes the article on installing RocketMQ with docker (with pit-filling notes on "connect to ... failed"). For more on installing RocketMQ with docker, search 脚本之家's earlier articles or browse the related articles below, and please keep supporting 脚本之家!