A Detailed Example of Connecting a Java k8s Cluster to Istio (Complete Steps)

Author: 完颜振江

This article presents a worked example of connecting a Java Kubernetes (k8s) cluster to Istio. The configuration and pipeline code are given in full, so it should serve as a practical reference for similar setups.

Scenario 1: supporting v1/v2 canary (gray) releases

✅ Step 1: Complete operating procedure

🧩 Goal

Without modifying any application code, integrate the existing Jenkins pipeline with the Istio service mesh, expose services through the Ingress Gateway, and implement v1/v2-based canary releases.

🔧 Procedure

1. Ensure Istio is installed with sidecar injection enabled (already done)

# cat istio-custom.yaml 
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    pilot:
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 200m
            memory: 256Mi
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 512Mi
  values:
    global:
      hub: swr.cn-east-3.myhuaweicloud.com/bocheng-test/istio
      tag: 1.27.3
      proxy:
        image: proxyv2
      imagePullPolicy: IfNotPresent
    gateways:
      istio-ingressgateway:
        type: NodePort
        ports:
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https
# Install Istio (using your private registry images)
istioctl install -f istio-custom.yaml -y
# Enable automatic sidecar injection for the namespace
kubectl label namespace bc-feature-202509-testcce istio-injection=enabled --overwrite

Verify:

kubectl get pod -n bc-feature-202509-testcce -l app=bc-gateway
kubectl describe pod <pod-name> -n bc-feature-202509-testcce | grep istio-proxy

You should see an istio-proxy container and two initContainers.
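If you would rather script this verification than eyeball `kubectl describe`, the check can be sketched as follows. This is a hypothetical helper, not part of the article's pipeline; it parses the JSON you would get from `kubectl get pod <pod-name> -o json`:

```python
import json

def check_injection(pod_json: str) -> bool:
    """Return True when the pod carries the Istio sidecar.

    Expects the JSON produced by `kubectl get pod <name> -o json`.
    """
    pod = json.loads(pod_json)
    spec = pod.get("spec", {})
    containers = [c["name"] for c in spec.get("containers", [])]
    init_containers = [c["name"] for c in spec.get("initContainers", [])]
    # Injection adds the istio-proxy sidecar plus init containers
    # (e.g. istio-init and istio-validation, depending on CNI setup).
    return "istio-proxy" in containers and len(init_containers) >= 1

# Minimal sample resembling an injected pod:
sample = json.dumps({
    "spec": {
        "containers": [{"name": "container-1"}, {"name": "istio-proxy"}],
        "initContainers": [{"name": "istio-init"}, {"name": "istio-validation"}],
    }
})
print(check_injection(sample))  # True for an injected pod
```

In practice you would feed it `kubectl get pod ... -o json` output via a subprocess call.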

2. Create the Istio ingress Gateway (needed only once)

Create the file gateway.yaml:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: global-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

Apply it once:

kubectl apply -f gateway.yaml -n istio-system

3. Create the Istio configuration template directory

Under the project root, create an istio/ directory containing two template files:

File: istio/virtualservice.yaml.tpl

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ${MODULE_LOWERCASE}-vs
  namespace: ${CCE_NAMESPACE}
spec:
  hosts:
  - "${MODULE_LOWERCASE}.example.com"
  gateways:
  - istio-system/global-ingress-gateway
  http:
  - route:
    - destination:
        host: ${MODULE_LOWERCASE}.${CCE_NAMESPACE}.svc.cluster.local
        subset: ${DEPLOY_VERSION}
      weight: 100

File: istio/destinationrule.yaml.tpl

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ${MODULE_LOWERCASE}-dr
  namespace: ${CCE_NAMESPACE}
spec:
  host: ${MODULE_LOWERCASE}.${CCE_NAMESPACE}.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
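The `${VAR}` placeholders in both templates are later filled in by `envsubst` (step 4). For reference, Python's `string.Template` uses the same `${NAME}` syntax, so the substitution can be sketched like this (the rendered fragment below is abridged, not the full VirtualService):

```python
from string import Template

# Abridged version of istio/virtualservice.yaml.tpl
vs_tpl = Template("""\
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ${MODULE_LOWERCASE}-vs
  namespace: ${CCE_NAMESPACE}
spec:
  hosts:
  - "${MODULE_LOWERCASE}.example.com"
""")

# Same variables the pipeline exports before calling envsubst
rendered = vs_tpl.substitute(
    MODULE_LOWERCASE="bc-gateway",
    CCE_NAMESPACE="bc-feature-202509-testcce",
)
print(rendered)
```

`substitute` raises `KeyError` on a missing variable, whereas `envsubst` silently substitutes an empty string, so this sketch is actually a slightly safer way to catch template typos.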

4. Modify the Jenkinsfile to deploy the Istio rules automatically

In stage('🎯 部署到 CCE'), add a step that generates and applies the Istio configuration right after the image is updated.

5. External access

Get the NodePort of the Ingress Gateway:

kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.spec.ports[0].nodePort}'

Example request:

curl -H "Host: bc-gateway.example.com" http://192.168.122.190:31902/actuator/health
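The `-H "Host: ..."` trick can also be scripted. Below is a hedged standard-library sketch of the same probe; the IP and NodePort in the usage comment are the placeholder values from the curl example above:

```python
import http.client

def gateway_health(node_ip: str, node_port: int, host_header: str,
                   path: str = "/actuator/health", timeout: float = 5.0) -> int:
    """Send a GET through the ingress gateway NodePort, overriding the
    Host header so Istio can route by virtual host. Returns HTTP status."""
    conn = http.client.HTTPConnection(node_ip, node_port, timeout=timeout)
    try:
        conn.request("GET", path, headers={"Host": host_header})
        return conn.getresponse().status
    finally:
        conn.close()

# Usage (placeholder address from the example above):
# gateway_health("192.168.122.190", 31902, "bc-gateway.example.com")
```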

✅ Step 2: The complete adapted Jenkinsfile

pipeline {
    agent any
    parameters {
        choice(
            name: 'MODULE',
            choices: [
                'bc-gateway',
                'bc-admin',
                'bc-invoice',
                'bc-job',
                'bc-oldBusiness',
                'bc-open',
                'bc-resource',
                'bc-system',
                'bc-third',
                'bc-wallet'
            ],
            description: '选择要构建的模块'
        )
        string(
            name: 'VERSION',
            defaultValue: '2.2.2',
            description: '构建版本号(必须与 pom.xml 中 <revision> 一致)'
        )
        booleanParam(
            name: 'SKIP_TESTS',
            defaultValue: true,
            description: '是否跳过单元测试'
        )
        booleanParam(
            name: 'CLEAN_M2_CACHE',
            defaultValue: false,
            description: '是否清理 .m2 中 org.dromara 缓存(首次构建建议勾选)'
        )
        choice(
            name: 'DEPLOY_VERSION',
            choices: ['v1', 'v2'],
            description: '选择部署版本(v1 或 v2),用于灰度发布'
        )
        booleanParam(
            name: 'DEPLOY_TO_CCE',
            defaultValue: false,
            description: '是否部署到华为云 CCE 集群'
        )
        booleanParam(
            name: 'ROLLBACK_LAST_VERSION',
            defaultValue: false,
            description: '是否回滚上一版本(跳过构建)'
        )
        booleanParam(
            name: 'WAIT_FOR_ALL_REPLICAS',
            defaultValue: false,
            description: '是否等待所有副本完成滚动更新(否则仅等待首个新 Pod Ready)'
        )
    }
    environment {
        MVN = '/usr/maven/apache-maven-3.9.9/bin/mvn'
        JAVA_HOME = '/usr/java/jdk-17.0.12'
        PATH = "${env.JAVA_HOME}/bin:${env.MVN}:${env.PATH}"
        SWR_REGISTRY = 'swr.cn-east-3.myhuaweicloud.com'
        IMAGE_REPO   = 'dbb-java-micro-test'
        CCE_NAMESPACE   = 'bc-feature-202509-testcce'
        CONTAINER_NAME  = 'container-1'
        KUBECTL_PATH    = '/usr/local/bin/kubectl'
        KUBECONFIG_FILE = '/root/.kube/config'
        PORT_MAPPING_STR = '''
            bc-gateway=8901
            bc-admin=8909
            bc-invoice=8902
            bc-job=8905
            bc-oldBusiness=8903
            bc-open=8910
            bc-resource=8904
            bc-system=8903
            bc-third=8906
            bc-wallet=8908
        '''
        HEALTH_PATH = '/actuator/health'
    }
    stages {
        stage('🔄 是否执行回滚') {
            when { expression { params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    def ns = env.CCE_NAMESPACE
                    def deployName = params.MODULE.toLowerCase()
                    sh """
                        set -e
                        export KUBECONFIG=${env.KUBECONFIG_FILE}
                        ${env.KUBECTL_PATH} rollout undo deployment/${deployName} -n ${ns}
                        echo "⏳ 等待回滚完成..."
                        ${env.KUBECTL_PATH} rollout status deployment/${deployName} -n ${ns} --timeout=120s
                        echo "✅ 回滚成功"
                    """
                }
            }
        }
        stage('📦 拉取代码') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                checkout scm
            }
        }
        stage('🚀 准备工作') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    def portMap = [:]
                    env.PORT_MAPPING_STR.stripIndent().split('\n').each {
                        def pair = it.trim().split('=')
                        if (pair.size() == 2) {
                            portMap[pair[0].trim()] = pair[1].trim()
                        }
                    }
                    def module = params.MODULE
                    def modulePath = module == 'bc-gateway' ? 'bc-gateway' : "bc-modules/${module}"
                    def port = portMap[module]
                    if (!port) error("❌ 未配置模块 ${module} 的服务端口")
                    def timestamp = new Date().format('yyyyMMddHHmmss')
                    def imageTag = "${params.DEPLOY_VERSION}-${timestamp}"
                    def imageName = "${env.SWR_REGISTRY}/${env.IMAGE_REPO}/${module.toLowerCase()}"
                    def fullImageName = "${imageName}:${imageTag}"
                    env.MODULE = module
                    env.MODULE_PATH = modulePath
                    env.SERVICE_JAR = "${modulePath}/target/${module}.jar"
                    env.IMAGE_NAME = imageName
                    env.FULL_IMAGE_NAME = fullImageName
                    env.SERVICE_PORT = port
                }
            }
        }
        stage('🧹 清理本地缓存(可选)') {
            when {
                allOf {
                    expression { !params.ROLLBACK_LAST_VERSION }
                    expression { params.CLEAN_M2_CACHE }
                }
            }
            steps {
                script {
                    def dromaraDir = "$HOME/.m2/repository/org/dromara"
                    sh "rm -rf ${dromaraDir} && echo '🗑️ 已清理 .m2 缓存'"
                }
            }
        }
        stage('🔨 构建模块') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    def skipTestsFlag = params.SKIP_TESTS ? '-Dmaven.test.skip=true' : ''
                    sh """
                        set -e
                        cd \$WORKSPACE
                        \$MVN clean install \\
                            -pl \${MODULE_PATH} \\
                            -am \\
                            -P test \\
                            -T 4C \\
                            ${skipTestsFlag} \\
                            -Drevision=${params.VERSION} \\
                            -U --no-transfer-progress -B
                        echo "✅ 构建成功"
                    """
                }
            }
        }
        stage('✅ 测试报告') {
            when {
                allOf {
                    expression { !params.ROLLBACK_LAST_VERSION }
                    expression { !params.SKIP_TESTS }
                }
            }
            steps {
                junit testResults: "${env.MODULE_PATH}/target/surefire-reports/*.xml"
            }
        }
        stage('🐳 构建镜像') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    sh '''
                        set -e
                        CONTEXT_DIR="$WORKSPACE/$MODULE_PATH"
                        JAR_PATH="$CONTEXT_DIR/target/$MODULE.jar"
                        DOCKERFILE="$CONTEXT_DIR/Dockerfile"
                        [ -f "$JAR_PATH" ] || { echo "❌ JAR 文件不存在"; exit 1; }
                        [ -f "$DOCKERFILE" ] || { echo "❌ Dockerfile 不存在"; exit 1; }
                        cd "$CONTEXT_DIR" && docker build -t "$FULL_IMAGE_NAME" .
                        echo "✅ 镜像构建成功: $FULL_IMAGE_NAME"
                    '''
                }
            }
        }
        stage('🔐 登录 SWR') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    withCredentials([usernamePassword(
                        credentialsId: 'cba87dfc-e05d-4d08-b059-5dc83c79e7ec',
                        usernameVariable: 'SWR_USER',
                        passwordVariable: 'SWR_PASS'
                    )]) {
                        sh '''
                            set -e
                            echo "$SWR_PASS" | docker login $SWR_REGISTRY -u "$SWR_USER" --password-stdin
                            echo "✅ 登录 SWR 成功"
                        '''
                    }
                }
            }
        }
        stage('📤 推送镜像') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                sh '''
                    set -e
                    docker push $FULL_IMAGE_NAME
                    echo "✅ 镜像推送成功: $FULL_IMAGE_NAME"
                '''
            }
        }
        stage('🎯 部署到 CCE') {
            when {
                allOf {
                    expression { !params.ROLLBACK_LAST_VERSION }
                    expression { params.DEPLOY_TO_CCE }
                }
            }
            steps {
                script {
                    def ns = env.CCE_NAMESPACE
                    def container = env.CONTAINER_NAME
                    def deployName = env.MODULE.toLowerCase()
                    def timeoutSeconds = 300
                    def intervalSeconds = 45
                    def maxAttempts = timeoutSeconds.intdiv(intervalSeconds)
                    // === 1. 更新镜像 ===
                    sh """
                        set -e
                        export KUBECONFIG=${env.KUBECONFIG_FILE}
                        ${env.KUBECTL_PATH} set image deployment/${deployName} ${container}=$FULL_IMAGE_NAME -n ${ns}
                        echo "✅ 镜像已更新为: $FULL_IMAGE_NAME"
                    """
                    // === 2. 注入 version 标签到 Deployment(关键!)===
                    sh """
                        set -e
                        export KUBECONFIG=${env.KUBECONFIG_FILE}
                        ${env.KUBECTL_PATH} patch deployment ${deployName} -n ${ns} -p='{"spec":{"template":{"metadata":{"labels":{"version":"${params.DEPLOY_VERSION}"}}}}}'
                        echo "✅ Deployment 已打 version=${params.DEPLOY_VERSION} 标签"
                    """
                    // === 3. 生成并部署 Istio VirtualService 和 DestinationRule ===
                    echo "🔄 生成 Istio 配置文件..."
                    sh '''
                        set -e
                        export MODULE_LOWERCASE=''' + deployName + '''
                        export CCE_NAMESPACE=''' + ns + '''
                        export DEPLOY_VERSION=''' + params.DEPLOY_VERSION + '''
                        # 替换模板
                        cat istio/virtualservice.yaml.tpl | envsubst > /tmp/virtualservice.yaml
                        cat istio/destinationrule.yaml.tpl | envsubst > /tmp/destinationrule.yaml
                        # 应用配置
                        export KUBECONFIG=/root/.kube/config
                        kubectl apply -f /tmp/virtualservice.yaml -n $CCE_NAMESPACE
                        kubectl apply -f /tmp/destinationrule.yaml -n $CCE_NAMESPACE
                        echo "✅ Istio 路由规则已更新"
                    '''
                    // === 4. 等待验证策略 ===
                    if (params.WAIT_FOR_ALL_REPLICAS) {
                        echo "⏳ 等待所有副本完成滚动更新..."
                        sh """
                            set -e
                            export KUBECONFIG=${env.KUBECONFIG_FILE}
                            ${env.KUBECTL_PATH} rollout status deployment/${deployName} -n ${ns} --timeout=${timeoutSeconds}s
                            echo "✅ 所有副本已成功更新"
                        """
                    } else {
                        echo "⏳ 等待首个新 Pod 进入 READY 状态..."
                        def success = false
                        for (int i = 0; i < maxAttempts; i++) {
                            try {
                                def output = sh(
                                    script: """
                                        set +e
                                        export KUBECONFIG=${env.KUBECONFIG_FILE}
                                        kubectl get pods \\
                                          -l app=${deployName},version=${params.DEPLOY_VERSION} \\
                                          -n ${ns} \\
                                          --field-selector=status.phase=Running \\
                                          -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[0].ready,IMAGE:.spec.containers[0].image \\
                                          --no-headers
                                    """,
                                    returnStdout: true
                                ).trim()
                                if (output && output.contains(env.FULL_IMAGE_NAME) && output.contains("true")) {
                                    echo "🟢 新版本 Pod 已就绪"
                                    success = true
                                    break
                                }
                            } catch (Exception e) {
                                echo "🟡 检查异常: ${e}"
                            }
                            sleep(intervalSeconds)
                        }
                        if (!success) {
                            error("❌ 超时:未能看到新版本 Pod 就绪")
                        }
                        echo "✅ 部署验证通过"
                    }
                }
            }
        }
        stage('🎉 完成通知') {
            steps {
                script {
                    if (params.ROLLBACK_LAST_VERSION) {
                        echo "⏪ 已完成回滚操作"
                    } else {
                        echo "✅ 构建 & 部署成功"
                        if (params.DEPLOY_TO_CCE) {
                            echo "🚀 已部署至 CCE: deployment/${env.MODULE.toLowerCase()} in ${env.CCE_NAMESPACE}"
                            echo "🔖 镜像: ${env.FULL_IMAGE_NAME}"
                            echo "💡 提示: 支持基于 v1/v2 的灰度发布"
                        }
                    }
                }
            }
        }
    }
    post {
        success {
            echo "✅【流水线成功】构建与部署已完成"
        }
        failure {
            echo "❌【流水线失败】请检查上述日志"
        }
        always {
            cleanWs()
        }
    }
}
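The readiness loop in the Jenkinsfile greps the whole `kubectl get pods` output for the image name and the string `true`, which could match across two different pods. A stricter per-line version of the same check, as a standalone sketch (not part of the pipeline):

```python
def new_pod_ready(output: str, full_image: str) -> bool:
    """Parse `kubectl get pods -o custom-columns=NAME,READY,IMAGE --no-headers`
    output and report whether any single pod is both ready and running
    the new image (instead of matching the two conditions on separate pods)."""
    for line in output.splitlines():
        fields = line.split()
        if len(fields) != 3:
            continue  # skip blank or malformed lines
        _name, ready, image = fields
        if image == full_image and ready == "true":
            return True
    return False

sample = """\
bc-gateway-abc   true    swr.example.com/repo/bc-gateway:v2-20250101
bc-gateway-old   false   swr.example.com/repo/bc-gateway:v1-20240101
"""
print(new_pod_ready(sample, "swr.example.com/repo/bc-gateway:v2-20250101"))  # True
```

Porting this per-line logic back into the Groovy loop would avoid false positives when an old pod is still Ready while the new one is not.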

✅ Summary

You now have a Jenkins pipeline fully adapted to Istio, with v1/v2 canary releases built in.

External access is as simple as:

curl -H "Host: bc-gateway.example.com" http://<INGRESS_IP>:<NODEPORT>/actuator/health

Scenario 2: versions that do not need canary releases

Some modules (such as bc-job or bc-third) may not need canary releases, but you may still want them in the Istio mesh for mTLS, observability, and similar capabilities. In that case, the DestinationRule and VirtualService configuration can be simplified.

✅ Step 1: Complete operating procedure (non-canary scenario)

🧩 Goal

Adapt the existing Jenkins pipeline to Istio for modules that do not need canary releases, while keeping mesh capabilities such as mTLS and observability.

🔧 Procedure

1. Ensure Istio is installed with sidecar injection enabled (already done)

root@k8s-master:/data/service/ISTIO# cat istio-custom.yaml 
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    pilot:
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 200m
            memory: 256Mi
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 512Mi
  values:
    global:
      hub: swr.cn-east-3.myhuaweicloud.com/bocheng-test/istio
      tag: 1.27.3
      proxy:
        image: proxyv2
      imagePullPolicy: IfNotPresent
    gateways:
      istio-ingressgateway:
        type: NodePort
        ports:
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https
# Install Istio
istioctl install -f istio-custom.yaml -y
# Enable automatic sidecar injection for the namespace
kubectl label namespace bc-feature-202509-testcce istio-injection=enabled --overwrite

Verify:

kubectl get pod -n bc-feature-202509-testcce -l app=bc-gateway
kubectl describe pod <pod-name> -n bc-feature-202509-testcce | grep istio-proxy

You should see an istio-proxy container.

2. Create the Istio ingress Gateway (needed only once)

Create the file gateway.yaml:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: global-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

Apply it once:

kubectl apply -f gateway.yaml -n istio-system

3. Create the Istio configuration template directory (non-canary version)

Under the project root, create an istio/ directory containing two template files:

File: istio/virtualservice-nogray.yaml.tpl

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ${MODULE_LOWERCASE}-vs
  namespace: ${CCE_NAMESPACE}
spec:
  hosts:
  - "${MODULE_LOWERCASE}.example.com"
  gateways:
  - istio-system/global-ingress-gateway
  http:
  - route:
    - destination:
        host: ${MODULE_LOWERCASE}.${CCE_NAMESPACE}.svc.cluster.local
        port:
          number: ${SERVICE_PORT}
      weight: 100

File: istio/destinationrule-nogray.yaml.tpl

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: ${MODULE_LOWERCASE}-dr
  namespace: ${CCE_NAMESPACE}
spec:
  host: ${MODULE_LOWERCASE}.${CCE_NAMESPACE}.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
    loadBalancer:
      simple: ROUND_ROBIN

⚠️ Note: there are no subsets; traffic is routed straight to the service backends.

4. Modify the Jenkinsfile to deploy the Istio rules automatically

In stage('🎯 部署到 CCE'), add a step that generates and applies the Istio configuration right after the image is updated.

5. External access

Get the NodePort of the Ingress Gateway:

kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.spec.ports[0].nodePort}'

Example request:

curl -H "Host: bc-gateway.example.com" http://192.168.122.190:31902/actuator/health

✅ Step 2: The complete adapted Jenkinsfile (non-canary version)

pipeline {
    agent any
    parameters {
        choice(
            name: 'MODULE',
            choices: [
                'bc-gateway',
                'bc-admin',
                'bc-invoice',
                'bc-job',
                'bc-oldBusiness',
                'bc-open',
                'bc-resource',
                'bc-system',
                'bc-third',
                'bc-wallet'
            ],
            description: '选择要构建的模块'
        )
        string(
            name: 'VERSION',
            defaultValue: '2.2.2',
            description: '构建版本号(必须与 pom.xml 中 <revision> 一致)'
        )
        booleanParam(
            name: 'SKIP_TESTS',
            defaultValue: true,
            description: '是否跳过单元测试'
        )
        booleanParam(
            name: 'CLEAN_M2_CACHE',
            defaultValue: false,
            description: '是否清理 .m2 中 org.dromara 缓存(首次构建建议勾选)'
        )
        booleanParam(
            name: 'DEPLOY_TO_CCE',
            defaultValue: false,
            description: '是否部署到华为云 CCE 集群'
        )
        booleanParam(
            name: 'ROLLBACK_LAST_VERSION',
            defaultValue: false,
            description: '是否回滚上一版本(跳过构建)'
        )
        booleanParam(
            name: 'WAIT_FOR_ALL_REPLICAS',
            defaultValue: false,
            description: '是否等待所有副本完成滚动更新(否则仅等待首个新 Pod Ready)'
        )
    }
    environment {
        MVN = '/usr/maven/apache-maven-3.9.9/bin/mvn'
        JAVA_HOME = '/usr/java/jdk-17.0.12'
        PATH = "${env.JAVA_HOME}/bin:${env.MVN}:${env.PATH}"
        SWR_REGISTRY = 'swr.cn-east-3.myhuaweicloud.com'
        IMAGE_REPO   = 'dbb-java-micro-test'
        CCE_NAMESPACE   = 'bc-feature-202509-testcce'
        CONTAINER_NAME  = 'container-1'
        KUBECTL_PATH    = '/usr/local/bin/kubectl'
        KUBECONFIG_FILE = '/root/.kube/config'
        PORT_MAPPING_STR = '''
            bc-gateway=8901
            bc-admin=8909
            bc-invoice=8902
            bc-job=8905
            bc-oldBusiness=8903
            bc-open=8910
            bc-resource=8904
            bc-system=8903
            bc-third=8906
            bc-wallet=8908
        '''
        HEALTH_PATH = '/actuator/health'
    }
    stages {
        stage('🔄 是否执行回滚') {
            when { expression { params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    def ns = env.CCE_NAMESPACE
                    def deployName = params.MODULE.toLowerCase()
                    sh """
                        set -e
                        export KUBECONFIG=${env.KUBECONFIG_FILE}
                        ${env.KUBECTL_PATH} rollout undo deployment/${deployName} -n ${ns}
                        echo "⏳ 等待回滚完成..."
                        ${env.KUBECTL_PATH} rollout status deployment/${deployName} -n ${ns} --timeout=120s
                        echo "✅ 回滚成功"
                    """
                }
            }
        }
        stage('📦 拉取代码') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                checkout scm
            }
        }
        stage('🚀 准备工作') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    def portMap = [:]
                    env.PORT_MAPPING_STR.stripIndent().split('\n').each {
                        def pair = it.trim().split('=')
                        if (pair.size() == 2) {
                            portMap[pair[0].trim()] = pair[1].trim()
                        }
                    }
                    def module = params.MODULE
                    def modulePath = module == 'bc-gateway' ? 'bc-gateway' : "bc-modules/${module}"
                    def port = portMap[module]
                    if (!port) error("❌ 未配置模块 ${module} 的服务端口")
                    def timestamp = new Date().format('yyyyMMddHHmmss')
                    def imageTag = "latest-${timestamp}"
                    def imageName = "${env.SWR_REGISTRY}/${env.IMAGE_REPO}/${module.toLowerCase()}"
                    def fullImageName = "${imageName}:${imageTag}"
                    env.MODULE = module
                    env.MODULE_PATH = modulePath
                    env.SERVICE_JAR = "${modulePath}/target/${module}.jar"
                    env.IMAGE_NAME = imageName
                    env.FULL_IMAGE_NAME = fullImageName
                    env.SERVICE_PORT = port
                }
            }
        }
        stage('🧹 清理本地缓存(可选)') {
            when {
                allOf {
                    expression { !params.ROLLBACK_LAST_VERSION }
                    expression { params.CLEAN_M2_CACHE }
                }
            }
            steps {
                script {
                    def dromaraDir = "$HOME/.m2/repository/org/dromara"
                    sh "rm -rf ${dromaraDir} && echo '🗑️ 已清理 .m2 缓存'"
                }
            }
        }
        stage('🔨 构建模块') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    def skipTestsFlag = params.SKIP_TESTS ? '-Dmaven.test.skip=true' : ''
                    sh """
                        set -e
                        cd \$WORKSPACE
                        \$MVN clean install \\
                            -pl \${MODULE_PATH} \\
                            -am \\
                            -P test \\
                            -T 4C \\
                            ${skipTestsFlag} \\
                            -Drevision=${params.VERSION} \\
                            -U --no-transfer-progress -B
                        echo "✅ 构建成功"
                    """
                }
            }
        }
        stage('✅ 测试报告') {
            when {
                allOf {
                    expression { !params.ROLLBACK_LAST_VERSION }
                    expression { !params.SKIP_TESTS }
                }
            }
            steps {
                junit testResults: "${env.MODULE_PATH}/target/surefire-reports/*.xml"
            }
        }
        stage('🐳 构建镜像') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    sh '''
                        set -e
                        CONTEXT_DIR="$WORKSPACE/$MODULE_PATH"
                        JAR_PATH="$CONTEXT_DIR/target/$MODULE.jar"
                        DOCKERFILE="$CONTEXT_DIR/Dockerfile"
                        [ -f "$JAR_PATH" ] || { echo "❌ JAR 文件不存在"; exit 1; }
                        [ -f "$DOCKERFILE" ] || { echo "❌ Dockerfile 不存在"; exit 1; }
                        cd "$CONTEXT_DIR" && docker build -t "$FULL_IMAGE_NAME" .
                        echo "✅ 镜像构建成功: $FULL_IMAGE_NAME"
                    '''
                }
            }
        }
        stage('🔐 登录 SWR') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                script {
                    withCredentials([usernamePassword(
                        credentialsId: 'cba87dfc-e05d-4d08-b059-5dc83c79e7ec',
                        usernameVariable: 'SWR_USER',
                        passwordVariable: 'SWR_PASS'
                    )]) {
                        sh '''
                            set -e
                            echo "$SWR_PASS" | docker login $SWR_REGISTRY -u "$SWR_USER" --password-stdin
                            echo "✅ 登录 SWR 成功"
                        '''
                    }
                }
            }
        }
        stage('📤 推送镜像') {
            when { expression { !params.ROLLBACK_LAST_VERSION } }
            steps {
                sh '''
                    set -e
                    docker push $FULL_IMAGE_NAME
                    echo "✅ 镜像推送成功: $FULL_IMAGE_NAME"
                '''
            }
        }
        stage('🎯 部署到 CCE') {
            when {
                allOf {
                    expression { !params.ROLLBACK_LAST_VERSION }
                    expression { params.DEPLOY_TO_CCE }
                }
            }
            steps {
                script {
                    def ns = env.CCE_NAMESPACE
                    def container = env.CONTAINER_NAME
                    def deployName = env.MODULE.toLowerCase()
                    def timeoutSeconds = 300
                    def intervalSeconds = 45
                    def maxAttempts = timeoutSeconds.intdiv(intervalSeconds)
                    // === 1. 更新镜像 ===
                    sh """
                        set -e
                        export KUBECONFIG=${env.KUBECONFIG_FILE}
                        ${env.KUBECTL_PATH} set image deployment/${deployName} ${container}=$FULL_IMAGE_NAME -n ${ns}
                        echo "✅ 镜像已更新为: $FULL_IMAGE_NAME"
                    """
                    // === 2. 生成并部署 Istio VirtualService 和 DestinationRule(非灰度版)===
                    echo "🔄 生成 Istio 配置文件..."
                    sh '''
                        set -e
                        export MODULE_LOWERCASE=''' + deployName + '''
                        export CCE_NAMESPACE=''' + ns + '''
                        export SERVICE_PORT=''' + env.SERVICE_PORT + '''
                        # 替换模板
                        cat istio/virtualservice-nogray.yaml.tpl | envsubst > /tmp/virtualservice.yaml
                        cat istio/destinationrule-nogray.yaml.tpl | envsubst > /tmp/destinationrule.yaml
                        # 应用配置
                        export KUBECONFIG=/root/.kube/config
                        kubectl apply -f /tmp/virtualservice.yaml -n $CCE_NAMESPACE
                        kubectl apply -f /tmp/destinationrule.yaml -n $CCE_NAMESPACE
                        echo "✅ Istio 路由规则已更新(非灰度模式)"
                    '''
                    // === 3. 等待验证策略 ===
                    if (params.WAIT_FOR_ALL_REPLICAS) {
                        echo "⏳ 等待所有副本完成滚动更新..."
                        sh """
                            set -e
                            export KUBECONFIG=${env.KUBECONFIG_FILE}
                            ${env.KUBECTL_PATH} rollout status deployment/${deployName} -n ${ns} --timeout=${timeoutSeconds}s
                            echo "✅ 所有副本已成功更新"
                        """
                    } else {
                        echo "⏳ 等待首个新 Pod 进入 READY 状态..."
                        def success = false
                        for (int i = 0; i < maxAttempts; i++) {
                            try {
                                def output = sh(
                                    script: """
                                        set +e
                                        export KUBECONFIG=${env.KUBECONFIG_FILE}
                                        kubectl get pods \\
                                          -l app=${deployName} \\
                                          -n ${ns} \\
                                          --field-selector=status.phase=Running \\
                                          -o custom-columns=NAME:.metadata.name,READY:.status.containerStatuses[0].ready,IMAGE:.spec.containers[0].image \\
                                          --no-headers
                                    """,
                                    returnStdout: true
                                ).trim()
                                if (output && output.contains(env.FULL_IMAGE_NAME) && output.contains("true")) {
                                    echo "🟢 新版本 Pod 已就绪"
                                    success = true
                                    break
                                }
                            } catch (Exception e) {
                                echo "🟡 检查异常: ${e}"
                            }
                            sleep(intervalSeconds)
                        }
                        if (!success) {
                            error("❌ 超时:未能看到新版本 Pod 就绪")
                        }
                        echo "✅ 部署验证通过"
                    }
                }
            }
        }
        stage('🎉 完成通知') {
            steps {
                script {
                    if (params.ROLLBACK_LAST_VERSION) {
                        echo "⏪ 已完成回滚操作"
                    } else {
                        echo "✅ 构建 & 部署成功"
                        if (params.DEPLOY_TO_CCE) {
                            echo "🚀 已部署至 CCE: deployment/${env.MODULE.toLowerCase()} in ${env.CCE_NAMESPACE}"
                            echo "🔖 镜像: ${env.FULL_IMAGE_NAME}"
                            echo "💡 提示: 非灰度模式,流量直接路由到服务后端"
                        }
                    }
                }
            }
        }
    }
    post {
        success {
            echo "✅【流水线成功】构建与部署已完成"
        }
        failure {
            echo "❌【流水线失败】请检查上述日志"
        }
        always {
            cleanWs()
        }
    }
}
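顺带说明:上面循环里 `output.contains("${FULL_IMAGE_NAME}") && output.contains("true")` 的判断有一个隐患——READY 为 true 的可能是旧 Pod,而新镜像的 Pod 尚未就绪,两个条件可能分别命中不同的行。下面用一个 shell 草例演示按行匹配的写法(`new_pod_ready` 为假设的函数名,列顺序假定为上面 custom-columns 输出的 NAME、READY、IMAGE 三列):

```shell
# 草例:逐行判断"READY=true 且镜像为目标版本"是否出现在同一个 Pod 行上,
# 避免把不同 Pod 行的 READY 与 IMAGE 误配。纯文本处理,可离线验证。
new_pod_ready() {
  local output="$1" image="$2"
  printf '%s\n' "$output" | awk -v img="$image" \
    '$2 == "true" && $3 == img { found = 1 } END { exit !found }'
}

# 用法示意(output 来自上面的 kubectl get pods 命令):
# if new_pod_ready "$output" "$FULL_IMAGE_NAME"; then echo "🟢 新版本 Pod 已就绪"; fi
```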

✅ 总结

你现在拥有了一个完全适配 Istio 的非灰度发布流水线,适用于 bc-job、bc-third 等无需灰度的模块。

下面介绍将 Istio Ingress Gateway 配置为 NodePort 模式的完整步骤。

✅ 第一步:istio-custom.yaml(NodePort 模式)

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    pilot:
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 200m
            memory: 256Mi
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 512Mi
  values:
    global:
      hub: swr.cn-east-3.myhuaweicloud.com/bocheng-test/istio
      tag: 1.27.3
      proxy:
        image: proxyv2
      imagePullPolicy: IfNotPresent
    gateways:
      istio-ingressgateway:
        # 使用 NodePort 模式
        type: NodePort
        ports:
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https

⚠️ 提示:NodePort 端口会自动分配在 30000-32767 范围内。

✅ 第二步:安装 Istio

# 卸载旧版本(如有)
istioctl uninstall -y --purge
# 安装 NodePort 配置
istioctl install -f istio-custom.yaml -y

✅ 第三步:验证 NodePort 是否创建成功

kubectl get svc -n istio-system istio-ingressgateway

输出示例:

NAME                   TYPE       CLUSTER-IP     PORT(S)                      AGE
istio-ingressgateway   NodePort   10.96.123.45   80:31234/TCP,443:30987/TCP   2m
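PORT(S) 列里的 80:31234/TCP 表示 Service 端口 80 映射到了 NodePort 31234。下面用一个 shell 草例演示如何从这一列解析指定端口的 NodePort(`parse_nodeport` 为假设的函数名;实际脚本里更推荐后文的 jsonpath 方式):

```shell
# 草例:从 PORT(S) 列(如 80:31234/TCP,443:30987/TCP)解析指定端口对应的 NodePort。
parse_nodeport() {
  echo "$1" | tr ',' '\n' | grep "^$2:" | cut -d: -f2 | cut -d/ -f1
}

# 用法示意:
# parse_nodeport "80:31234/TCP,443:30987/TCP" 80   # → 31234
```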

✅ 第四步:创建 Gateway(入口网关规则)

# gateway.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: global-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

应用配置:

kubectl apply -f gateway.yaml

✅ 第五步:创建 VirtualService(路由规则)

# virtualservice-nogray.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bc-gateway-vs
  namespace: bc-feature-202509-testcce
spec:
  hosts:
  - "bc-gateway.example.com"
  gateways:
  - istio-system/global-ingress-gateway
  http:
  - route:
    - destination:
        host: bc-gateway.bc-feature-202509-testcce.svc.cluster.local
        port:
          number: 8901
      weight: 100

应用配置:

kubectl apply -f virtualservice-nogray.yaml

✅ 第六步:外部访问方式(使用任意节点 IP + NodePort)

获取 NodePort:

kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.spec.ports[0].nodePort}'
# 输出:31234

使用任意 worker 节点的 IP 访问:

export NODE_IP=192.168.122.190
export NODE_PORT=31234
curl -H "Host: bc-gateway.example.com" http://$NODE_IP:$NODE_PORT/actuator/health

✅ 成功!
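curl 能打通只说明链路可达;Spring Boot Actuator 的 /actuator/health 正常时返回 {"status":"UP"}。下面的草例对响应体做一个粗略校验(`check_health` 为假设的函数名,只检查最外层 status,嵌套的组件状态不做解析):

```shell
# 草例:校验 /actuator/health 的响应体(Spring Boot 标准格式为 {"status":"UP"})。
check_health() {
  echo "$1" | grep -q '"status" *: *"UP"'
}

# 用法示意:
# resp=$(curl -s -H "Host: bc-gateway.example.com" http://$NODE_IP:$NODE_PORT/actuator/health)
# check_health "$resp" && echo "🟢 healthy"
```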

✅ 总结:NodePort 模式特点

| 特性 | 说明 |
| --- | --- |
| 类型 | NodePort |
| 外部访问 | `<worker-node-ip>:<nodeport>` |
| 端口范围 | 30000-32767 |
| 是否需要云厂商支持 | ❌ 不需要 |
| 适合场景 | 测试、开发、无 ELB 的环境 |

适配目前的 k8s 环境需要变更吗?

# cat istio-custom.yaml 
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    pilot:
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 200m
            memory: 256Mi
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 512Mi
  values:
    global:
      hub: swr.cn-east-3.myhuaweicloud.com/bocheng-test/istio
      tag: 1.27.3
      proxy:
        image: proxyv2
      imagePullPolicy: IfNotPresent
    gateways:
      istio-ingressgateway:
        type: NodePort
        ports:
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https

✅ 为什么说它“不需要变更”?

| 配置项 | 状态 | 说明 |
| --- | --- | --- |
| profile: default | ✅ 合理 | 使用默认 profile,功能完整 |
| pilot 资源限制 | ✅ 合理 | 在小集群中降低了 CPU/Mem 请求(50m/64Mi),避免调度失败 |
| ingressGateways 资源限制 | ✅ 合理 | 设置了合理的 limits 和 requests |
| hub + tag 指定镜像 | ✅ 最佳实践 | 使用 swr.cn-east-3.myhuaweicloud.com/bocheng-test/istio 统一管理镜像,比手动写 PILOT_IMAGE 更可靠 |
| image: proxyv2 | ✅ 正确 | 明确指定 sidecar 镜像名称 |
| type: NodePort | ✅ 适合你的环境 | 在非云 LB 环境下使用 NodePort 是标准做法 |
| targetPort: 8080/8443 | ✅ 正确 | 匹配 Istio ingressgateway 容器内 Envoy 监听端口 |

🛠️ 这个配置的作用总结

你通过这个 YAML 实现了:

- 使用私有镜像仓库 swr.cn-east-3.myhuaweicloud.com/bocheng-test/istio 拉取 1.27.3 版本的 Istio 组件镜像;
- 为 pilot 与 ingressgateway 设置了适合小集群的资源 requests/limits;
- 以 NodePort 方式暴露 istio-ingressgateway 的 80/443 端口。

🔁 什么时候才需要修改它?

除非遇到以下情况,否则无需改动:

| 场景 | 是否需要修改 |
| --- | --- |
| 新增 Egress Gateway | ✅ 需要在 components 中添加 |
| 改用 LoadBalancer 类型 | ✅ 修改 gateways.istio-ingressgateway.type: LoadBalancer |
| 启用 mTLS STRICT 全局默认 | ✅ 在 values.global 中添加策略 |
| 升级 Istio 版本 | ✅ 修改 tag(如 tag: 1.28.0) |
| 添加自定义证书 | ✅ 添加 secretVolumes 或更新 tls 配置 |

✅ 建议操作

你现在应该:

# 确保只安装一次(避免重复)
istioctl uninstall -y --purge
# 使用当前 yaml 安装
istioctl install -f istio-custom.yaml -y

然后按之前的流程创建 Gateway、VirtualService 和 DestinationRule。
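前文多次提到 DestinationRule,但尚未给出示例。下面是一个草例,假设 v1/v2 两个版本的 Pod 分别带有 version: v1 / version: v2 标签,host 沿用前文的 bc-gateway 服务(资源名 bc-gateway-dr 为假设):

```yaml
# 草例:为灰度发布定义 v1/v2 两个子集
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: bc-gateway-dr
  namespace: bc-feature-202509-testcce
spec:
  host: bc-gateway.bc-feature-202509-testcce.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```

之后 VirtualService 的 destination 就可以通过 subset: v1 / subset: v2 按权重分流。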

下面是将 Istio Ingress Gateway 从 NodePort 模式适配为 LoadBalancer 模式的完整配置和操作流程。

✅ 第一步:修改istio-custom.yaml启用 LoadBalancer

保持其他配置不变,只需将 values.gateways.istio-ingressgateway.type 修改为 LoadBalancer。

更新后的 istio-custom.yaml:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  components:
    pilot:
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 200m
            memory: 256Mi
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
        k8s:
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 512Mi
  values:
    global:
      hub: swr.cn-east-3.myhuaweicloud.com/bocheng-test/istio
      tag: 1.27.3
      proxy:
        image: proxyv2
      imagePullPolicy: IfNotPresent
    gateways:
      istio-ingressgateway:
        # 改为 LoadBalancer
        type: LoadBalancer
        ports:
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https
        # 可选:指定服务注解,用于云厂商 ELB 配置
        serviceAnnotations:
          # 华为云 ELB 示例注解(按需启用)
          # service.beta.kubernetes.io/huawei-elb-vpcid: your-vpc-id
          # service.beta.kubernetes.io/huawei-elb-classic-listener-port-protocol: tcp

💡 提示:serviceAnnotations 可用于指定 ELB 类型、带宽、公网 IP 等,具体参考华为云 CCE 文档。

✅ 第二步:重新安装 Istio

# 卸载旧版本
istioctl uninstall -y --purge
# 安装新配置
istioctl install -f istio-custom.yaml -y

✅ 第三步:验证 LoadBalancer 是否创建成功

kubectl get svc -n istio-system istio-ingressgateway

你应该看到:

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                      AGE
istio-ingressgateway   LoadBalancer   10.96.123.45   121.36.88.123   80:31234/TCP,443:30987/TCP   2m

✅ 第四步:创建 Gateway + VirtualService(非灰度版示例)

创建 gateway.yaml:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: global-ingress-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

应用配置:

kubectl apply -f gateway.yaml

创建 virtualservice-nogray.yaml:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bc-gateway-vs
  namespace: bc-feature-202509-testcce
spec:
  hosts:
  - "bc-gateway.example.com"
  gateways:
  - istio-system/global-ingress-gateway
  http:
  - route:
    - destination:
        host: bc-gateway.bc-feature-202509-testcce.svc.cluster.local
        port:
          number: 8901
      weight: 100

应用配置:

kubectl apply -f virtualservice-nogray.yaml

✅ 第五步:外部访问方式(使用 LB 公网 IP)

获取公网地址:

export LB_IP=$(kubectl get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $LB_IP
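补充一个草例:部分云厂商的 LB 只回填 hostname、不回填 ip,此时上面的 jsonpath 会取到空字符串。下面的 `extract_lb_addr`(假设的函数名)从 `kubectl get svc ... -o json` 的输出里先取 ip、再退回 hostname,仅演示文本解析逻辑:

```shell
# 草例:从 Service 的 JSON 输出中提取 LB 地址,ip 优先、hostname 兜底。
# 纯文本处理,可离线验证;生产中也可以直接用两条 jsonpath 分别取再判空。
extract_lb_addr() {
  local json="$1" addr
  addr=$(printf '%s' "$json" | grep -o '"ip": *"[^"]*"' | head -n1 | cut -d'"' -f4)
  if [ -z "$addr" ]; then
    addr=$(printf '%s' "$json" | grep -o '"hostname": *"[^"]*"' | head -n1 | cut -d'"' -f4)
  fi
  printf '%s\n' "$addr"
}

# 用法示意:
# LB_ADDR=$(extract_lb_addr "$(kubectl get svc -n istio-system istio-ingressgateway -o json)")
```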

访问服务:

curl -H "Host: bc-gateway.example.com" http://$LB_IP/actuator/health

✅ 成功!

✅ Jenkinsfile 中如何适配?

如果你希望在 Jenkins 中动态判断是否使用 LoadBalancer,可以加一个参数:

choice(
    name: 'INGRESS_TYPE',
    choices: ['NodePort', 'LoadBalancer'],
    description: '选择入口类型'
)

然后在部署 Istio 时传入不同配置,或直接固定使用 LoadBalancer。
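拿到地址后,连通性检查可以按入口类型拼出基础 URL。下面是一个 shell 草例(`resolve_url` 为假设的辅助函数,NODE_IP/NODE_PORT/LB_IP 仍按前文方式获取):

```shell
# 草例:根据 INGRESS_TYPE 拼出外部访问的基础 URL,仅演示拼接逻辑。
resolve_url() {
  case "$1" in
    NodePort)     echo "http://$2:$3" ;;
    LoadBalancer) echo "http://$2" ;;
    *)            echo "unsupported ingress type: $1" >&2; return 1 ;;
  esac
}

# 用法示意:
# curl -H "Host: bc-gateway.example.com" "$(resolve_url NodePort $NODE_IP $NODE_PORT)/actuator/health"
```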

✅ 总结:NodePort vs LoadBalancer

| 特性 | NodePort | LoadBalancer |
| --- | --- | --- |
| 是否需要云厂商支持 | ❌ 不需要 | ✅ 需要(如华为云 ELB) |
| 外部访问方式 | `<node-ip>:<port>` | `<lb-ip>:80/443` |
| 端口范围 | 30000-32767 | 标准端口(80/443) |
| 适合场景 | 测试环境 | 生产环境 |
| 自动创建公网 IP | ❌ 否 | ✅ 是 |

你现在可以将 istio-custom.yaml 中的 type: NodePort 改为 LoadBalancer,并立即获得一个公网可访问的 Istio 入口网关。


到此这篇关于k8s java集群接入istio案例的文章就介绍到这了,更多相关k8s集群接入istio内容请搜索脚本之家以前的文章或继续浏览下面的相关文章希望大家以后多多支持脚本之家!
