Setting up an ELK logging stack with Docker and viewing the logs in Kibana
Author: 思远
This article walks through setting up an ELK (Elasticsearch, Logstash, Kibana) logging stack with Docker and viewing application logs through Kibana. It is offered as a practical reference; if anything is wrong or incomplete, corrections are welcome.
docker-compose.yml
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.7.0            # image
    container_name: elasticsearch         # container name
    restart: always                       # start on boot and keep restarting on failure
    environment:
      - "cluster.name=elasticsearch"      # set the cluster name to elasticsearch
      - "discovery.type=single-node"      # start in single-node mode
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m" # JVM heap size
    volumes:
      - D:\docker\elasticsearch\plugins:/usr/share/elasticsearch/plugins # plugin directory mount [adjust the host path as needed]
      - D:\docker\elasticsearch\data:/usr/share/elasticsearch/data       # data directory mount [adjust the host path as needed]
    ports:
      - 9200:9200
  kibana:
    image: kibana:7.7.0
    container_name: kibana
    restart: always
    depends_on:
      - elasticsearch                     # start Kibana only after Elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200 # address used to reach Elasticsearch
    ports:
      - 5601:5601
  logstash:
    image: logstash:7.7.0
    container_name: logstash
    restart: always
    volumes:
      - D:\docker\logstash\logstash-springboot.conf:/usr/share/logstash/pipeline/logstash.conf # mount the Logstash pipeline config [adjust the host path as needed]
    depends_on:
      - elasticsearch                     # start Logstash only after Elasticsearch
    links:
      - elasticsearch:es                  # Elasticsearch is reachable from Logstash under the hostname "es"
    ports:
      - 4560:4560
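With the compose file in place, the stack can be brought up from the directory that contains it. A minimal sketch (the -d flag runs the containers in the background):

# Start Elasticsearch, Kibana and Logstash in the background
docker-compose up -d
# Confirm that all three containers are running
docker-compose ps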
D:\docker\logstash\logstash-springboot.conf
File contents:
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => "es:9200"
    index => "springboot-logstash-%{+YYYY.MM.dd}"
  }
}
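This pipeline listens on TCP port 4560 for JSON lines and writes each event into a daily Elasticsearch index. Once Logstash is up, the input can be smoke-tested from the host; a rough check, assuming netcat is available:

# Send one JSON line to the Logstash TCP input; it should land in today's springboot-logstash-* index
echo '{"message":"hello from nc","level":"INFO"}' | nc localhost 4560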
Visit: http://localhost:9200
Visit: http://localhost:5601
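Elasticsearch should answer on port 9200 with a JSON banner, and Kibana should serve its UI on port 5601. Both can also be verified from the command line as a quick sanity check:

# Elasticsearch: returns cluster name and version information
curl http://localhost:9200
# Kibana: the status API reports the overall state of the instance
curl http://localhost:5601/api/status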
Configuring the Chinese locale:
# Enter the container
docker exec -it kibana /bin/bash
# Configure the Chinese locale [append the line below at the end of the file]
vi ./config/kibana.yml
i18n.locale: zh-CN
# Restart Kibana
docker restart kibana
Integrating Spring Boot with Logstash
pom.xml
<properties>
    <hutool.version>5.6.6</hutool.version>
    <lombok.version>1.18.20</lombok.version>
    <spring-boot.web.version>2.2.10.RELEASE</spring-boot.web.version>
    <logstash.version>5.1</logstash.version>
</properties>

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>${lombok.version}</version>
</dependency>
<dependency>
    <groupId>cn.hutool</groupId>
    <artifactId>hutool-all</artifactId>
    <version>${hutool.version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>${spring-boot.web.version}</version>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>${logstash.version}</version>
</dependency>
application.yml
server:
  port: 8081
spring:
  application:
    name: ELK-server-base
logging:
  level:
    org:
      springframework:
        boot:
          autoconfigure: info
logstash.xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <conversionRule conversionWord="clr" converterClass="org.springframework.boot.logging.logback.ColorConverter" />
    <conversionRule conversionWord="wex" converterClass="org.springframework.boot.logging.logback.WhitespaceThrowableProxyConverter" />
    <conversionRule conversionWord="wEx" converterClass="org.springframework.boot.logging.logback.ExtendedWhitespaceThrowableProxyConverter" />

    <property name="CONSOLE_LOG_PATTERN"
              value="${CONSOLE_LOG_PATTERN:-%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}}" />

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
        <!-- Threshold filter: drop log events below the given level -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>debug</level>
        </filter>
    </appender>

    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/Logback.log</file>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
        <!-- Threshold filter: drop log events below the given level -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>debug</level>
        </filter>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/%d{yyyy-MM-dd}/Logback.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
    </appender>

    <!-- Ship log events to Logstash over TCP -->
    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>127.0.0.1:4560</destination>
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>

    <!-- Optional logger node: sets the log level for a specific package, overriding root -->
    <logger name="com.yj" level="debug" />

    <root level="debug">
        <appender-ref ref="stash" />
        <appender-ref ref="console" />
        <!-- <appender-ref ref="file" /> -->
    </root>
</configuration>
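Note that Spring Boot only picks up Logback configuration automatically when the file is named logback.xml or logback-spring.xml; with a file named logstash.xml it most likely has to be referenced explicitly through the logging.config property (the jar name below is just a placeholder):

# Point Spring Boot at the custom-named Logback file when starting the application
java -jar elk-server-base.jar --logging.config=classpath:logstash.xml

Alternatively, the file can simply be renamed to logback-spring.xml.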
ElkBaseApplication.java
import cn.hutool.core.date.DateUtil;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.core.env.Environment;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;

@Slf4j
@SpringBootApplication
@EnableScheduling
public class ElkBaseApplication {

    @Autowired
    private Environment environment;

    public static void main(String[] args) {
        SpringApplication.run(ElkBaseApplication.class, args);
    }

    // Log a message every 5 seconds so there is a steady stream of events to ship to Logstash
    @Scheduled(cron = "0/5 * * * * ?")
    public void schedulingMessage() {
        log.info("Service: {}, time: {}, scheduled log message, port: {}",
                environment.getProperty("spring.application.name", String.class),
                DateUtil.formatDateTime(DateUtil.date()),
                environment.getProperty("server.port", Integer.class));
    }
}
Start the project and view the logs in Kibana
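One possible way to start the application from the project root, assuming the Spring Boot Maven plugin is configured; the scheduled task then logs a message every five seconds and ships it to Logstash:

# Start the Spring Boot application; log events are sent to Logstash at 127.0.0.1:4560
mvn spring-boot:run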
Create an index pattern
After the index pattern is created, click "Discover"
Select the newly created index pattern "springboot-logstash-*"
Since there are many fields, only "message" is selected here
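If nothing shows up in Discover, it is worth checking directly in Elasticsearch whether the daily index has been created; a quick check with curl:

# List the indices created by the Logstash pipeline
curl 'http://localhost:9200/_cat/indices/springboot-logstash-*?v'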
Summary
The above is based on personal experience. I hope it serves as a useful reference, and I hope you will continue to support 脚本之家.