Deploying NATS with Helm

The NATS Helm charts can be used to deploy a StatefulSet of NATS servers using Helm templates that are easy to extend. Using Helm 3, you can add the NATS Helm repo as follows:

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats

The ArtifactHub NATS Helm package contains a complete list of configuration options. Some common scenarios are outlined below.
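
Each of the scenarios below is expressed as Helm values. As a minimal sketch (the values.yaml file name and the my-nats release name are placeholders), a snippet can be saved to a file and applied with:

# Apply any of the values snippets below from a local file.
helm upgrade --install my-nats nats/nats -f values.yaml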

Configuration

Server Image

nats:
  image: nats:2.7.4-alpine
  pullPolicy: IfNotPresent

Limits

nats:
  # The number of connect attempts against discovered routes.
  connectRetries: 30

  # How many seconds should pass before sending a PING
  # to a client that has no activity.
  pingInterval:

  # Server settings.
  limits:
    maxConnections:
    maxSubscriptions:
    maxControlLine:
    maxPayload:

    writeDeadline:
    maxPending:
    maxPings:
    lameDuckDuration:

  # Number of seconds to wait for client connections to end after the pod termination is requested
  terminationGracePeriodSeconds: 60

Logging

Note: It is not recommended to enable trace or debug in production since enabling it will significantly degrade performance.

nats:
  logging:
    debug:
    trace:
    logtime:
    connectErrorReports:
    reconnectErrorReports:
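
As a sketch, debug logging could also be toggled on an existing release from the command line (assuming a release named my-nats); remember to turn it off again for production workloads:

# Enable debug logging temporarily, keeping all other values.
helm upgrade my-nats nats/nats --reuse-values --set nats.logging.debug=true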

TLS setup for client connections

nats:
  tls:
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"

Example of creating the nats-client-tls k8s secret with three named values matching the above setup:

kubectl create secret generic nats-client-tls --from-file=tls.crt=./broker.crt --from-file=tls.key=./broker.key --from-file=ca.crt=./ca.pem

Clustering

cluster:
  enabled: true
  replicas: 3

  tls:
    secret:
      name: nats-server-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"

Example:

helm install nats nats/nats --set cluster.enabled=true
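
The nats-server-tls secret referenced above can be created the same way as the client TLS secret; a sketch with placeholder certificate file names:

kubectl create secret generic nats-server-tls --from-file=ca.crt=./ca.pem --from-file=tls.crt=./route.crt --from-file=tls.key=./route.key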

Leafnodes

leafnodes:
  enabled: true
  remotes:
    - url: "tls://connect.ngs.global:7422"
      # credentials:
      #   secret:
      #     name: leafnode-creds
      #     key: TA.creds
      # tls:
      #   secret:
      #     name: nats-leafnode-tls
      #   ca: "ca.crt"
      #   cert: "tls.crt"
      #   key: "tls.key"

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  # 
  #  # You can find more on how to set up and troubleshoot TLS connections at:
  # 
  #  # https://docs.nats.io/running-a-nats-server/configuration/securing_nats/tls
  # 
  tls:
    secret:
      name: nats-client-tls
    ca: "ca.crt"
    cert: "tls.crt"
    key: "tls.key"

Websocket Configuration

websocket:
  enabled: true
  port: 443

  tls:
    secret:
      name: nats-tls
    cert: "fullchain.pem"
    key: "privkey.pem"

Setting up External Access

Using HostPorts

In case both external access and advertisements are enabled, an initializer container will be used to gather the public IPs. This container requires an RBAC policy that allows it to look up the public IP of the node where it is running.

For example, to set up external access for a cluster and advertise the public IP to clients:

nats:
  # Toggle whether to enable external access.
  # This binds a host port for clients, gateways and leafnodes.
  externalAccess: true

  # Toggle to disable client advertisements (connect_urls).
  # When running behind a load balancer (which is not recommended),
  # it might be required to disable advertisements.
  advertise: true

  # In case both external access and advertise are enabled
  # then a service account would be required to be able to
  # gather the public IP from a node.
  serviceAccount: "nats-server"

The service account named nats-server would have, for example, the following RBAC policy:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nats-server
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nats-server
rules:
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nats-server-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nats-server
subjects:
- kind: ServiceAccount
  name: nats-server
  namespace: default

The container image of the initializer can be customized via:

bootconfig:
  image: natsio/nats-boot-config:latest
  pullPolicy: IfNotPresent
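
Putting the pieces together, external access could be enabled at install time once the RBAC objects above have been applied; a sketch (the rbac.yaml file name and my-nats release name are placeholders):

kubectl apply -f rbac.yaml
helm install my-nats nats/nats --set nats.externalAccess=true --set nats.advertise=true --set nats.serviceAccount=nats-server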

Using LoadBalancers

When using a load balancer for external access, it is recommended to disable advertisement so that internal IPs from the NATS Servers are not advertised to the clients connecting through the load balancer.

nats:
  image: nats:alpine

cluster:
  enabled: true
  noAdvertise: true

leafnodes:
  enabled: true
  noAdvertise: true

natsbox:
  enabled: true

You could then use an L4 enabled load balancer to connect to NATS, for example:

apiVersion: v1
kind: Service
metadata:
  name: nats-lb
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: nats
  ports:
    - protocol: TCP
      port: 4222
      targetPort: 4222
      name: nats
    - protocol: TCP
      port: 7422
      targetPort: 7422
      name: leafnodes
    - protocol: TCP
      port: 7522
      targetPort: 7522
      name: gateways
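
Once the Service is applied, the external address assigned by the cloud provider can be looked up and used by clients; a sketch (the nats-lb.yaml file name is a placeholder):

kubectl apply -f nats-lb.yaml
kubectl get svc nats-lb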

Gateways

gateway:
  enabled: false
  name: 'default'

  #############################
  #                           #
  #  List of remote gateways  #
  #                           #
  #############################
  # gateways:
  #   - name: other
  #     url: nats://my-gateway-url:7522

  #######################
  #                     #
  #  TLS Configuration  #
  #                     #
  #######################
  # 
  #  # You can find more on how to set up and troubleshoot TLS connections at:
  # 
  #  # https://docs.nats.io/running-a-nats-server/configuration/securing_nats/tls
  #
  # tls:
  #   secret:
  #     name: nats-client-tls
  #   ca: "ca.crt"
  #   cert: "tls.crt"
  #   key: "tls.key"

Auth setup

Auth with a Memory Resolver

auth:
  enabled: true

  # Reference to the Operator JWT.
  operatorjwt:
    configMap:
      name: operator-jwt
      key: KO.jwt

  # Public key of the System Account
  systemAccount:

  resolver:
    ############################
    #                          #
    # Memory resolver settings #
    #                          #
    ############################
    type: memory

    # 
    # Use a configmap reference which will be mounted
    # into the container.
    # 
    configMap:
      name: nats-accounts
      key: resolver.conf
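
The operator-jwt and nats-accounts ConfigMaps referenced above could be created from nsc output; a sketch assuming the operator and accounts were already created with nsc (KO.jwt and resolver.conf are placeholder file names, and nsc flags may vary slightly between versions):

# Generate a memory resolver configuration containing the account JWTs.
nsc generate config --mem-resolver > resolver.conf
kubectl create configmap nats-accounts --from-file=resolver.conf

# Store the operator JWT in a ConfigMap.
kubectl create configmap operator-jwt --from-file=KO.jwt=./KO.jwt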

Auth using an Account Server Resolver

auth:
  enabled: true

  # Reference to the Operator JWT.
  operatorjwt:
    configMap:
      name: operator-jwt
      key: KO.jwt

  # Public key of the System Account
  systemAccount:

  resolver:
    ##########################
    #                        #
    #  URL resolver settings #
    #                        #
    ##########################
    type: URL
    url: "http://nats-account-server:9090/jwt/v1/accounts/"

JetStream

Setting up Memory and File Storage

File Storage is always recommended, since JetStream's RAFT Meta Group will be persisted to file storage. The Storage Class used should be block storage. NFS is not recommended.

nats:
  image: nats:alpine

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: 2Gi

    fileStorage:
      enabled: true
      size: 10Gi
      # storageClassName: gp2 # NOTE: AWS setup but customize as needed for your infra.
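
The same values could also be set directly on the command line; a sketch assuming a release named my-nats:

helm upgrade --install my-nats nats/nats \
  --set nats.jetstream.enabled=true \
  --set nats.jetstream.memStorage.size=2Gi \
  --set nats.jetstream.fileStorage.size=10Gi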

Using with an existing PersistentVolumeClaim

For example, given the following PersistentVolumeClaim:

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nats-js-disk
  annotations:
    volume.beta.kubernetes.io/storage-class: "default"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

You can start JetStream so that one pod is bound to it:

nats:
  image: nats:alpine

  jetstream:
    enabled: true

    fileStorage:
      enabled: true
      storageDirectory: /data/
      existingClaim: nats-js-disk
      claimStorageSize: 3Gi
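
A sketch of the workflow, assuming the claim above is saved as nats-js-disk.yaml and the values as values.yaml (both file names are placeholders):

kubectl apply -f nats-js-disk.yaml
helm install my-nats nats/nats -f values.yaml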

Clustering example

nats:
  image: nats:alpine

  jetstream:
    enabled: true

    memStorage:
      enabled: true
      size: "2Gi"

    fileStorage:
      enabled: true
      size: "1Gi"
      storageDirectory: /data/
      storageClassName: default

cluster:
  enabled: true
  # Cluster name is required; by default it will be the release name.
  # name: "nats"
  replicas: 3

Misc

NATS Box

natsbox:
  enabled: true
  image: nats:alpine
  pullPolicy: IfNotPresent

  # credentials:
  #   secret:
  #     name: nats-sys-creds
  #     key: sys.creds
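
NATS Box can then be used to verify connectivity from inside the cluster; a sketch assuming a release named my-nats (the nats-box deployment name may differ depending on the chart version):

# Publish a test message using the nats CLI bundled in nats-box.
kubectl exec -it deployment/my-nats-box -- nats pub test 'hello from nats-box'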

Configuration Reload sidecar

The NATS config reloader image to use:

reloader:
  enabled: true
  image: natsio/nats-server-config-reloader:latest
  pullPolicy: IfNotPresent

Prometheus Exporter sidecar

You can toggle whether to start the sidecar used to feed metrics to Prometheus:

exporter:
  enabled: true
  image: natsio/prometheus-nats-exporter:latest
  pullPolicy: IfNotPresent
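
To spot-check that metrics are being exposed, the exporter port can be forwarded locally; a sketch assuming a release named my-nats and the exporter's default port 7777:

kubectl port-forward pod/my-nats-0 7777:7777
# In another terminal:
curl http://localhost:7777/metrics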

Prometheus operator ServiceMonitor support

You can enable Prometheus operator ServiceMonitor:

exporter:
  # You have to enable exporter first
  enabled: true
  serviceMonitor:
    enabled: true
    ## Specify the namespace where Prometheus Operator is running
    # namespace: monitoring
    # ...

Pod Customizations

Security Context

# Toggle whether to set up a Pod Security Context
# ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
securityContext:
  fsGroup: 1000
  runAsUser: 1000
  runAsNonRoot: true

Affinity

matchExpressions must be configured according to your setup

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node.kubernetes.io/purpose
              operator: In
              values:
                - nats
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: app
              operator: In
              values:
                - nats
                - stan
        topologyKey: "kubernetes.io/hostname"

Service topology

topologyKeys:
  - "kubernetes.io/hostname"
  - "topology.kubernetes.io/zone"
  - "topology.kubernetes.io/region"

CPU/Memory Resource Requests/Limits

Sets the pods' CPU/memory requests/limits:

nats:
  resources:
    requests:
      cpu: 2
      memory: 4Gi
    limits:
      cpu: 4
      memory: 6Gi

No resources are set by default.

Annotations

podAnnotations:
  key1: "value1"
  key2: "value2"

Name Overrides

You can change the name of the resources as needed with:

nameOverride: "my-nats"

Image Pull Secrets

imagePullSecrets:
- name: myRegistry

Adds this to the StatefulSet:

spec:
  imagePullSecrets:
    - name: myRegistry

You can find more on how to set up and troubleshoot TLS connections at running-a-nats-service/configuration/securing_nats/tls.

If clustering is enabled, then a 3-node cluster will be set up. More info at running-a-nats-server/configuration/clustering#nats-server-clustering.

Leafnode connections can be used to extend a cluster. More info at running-a-nats-server/configuration/leafnodes.

A supercluster can be formed by pointing to remote gateways. You can find more about gateways in the NATS documentation at running-a-nats-server/configuration/gateways.

NATS Box is a lightweight container with NATS and NATS Streaming utilities deployed alongside the cluster to confirm the setup. You can find the image at https://github.com/nats-io/nats-box.

Service topology is disabled by default but can be enabled by setting topologyKeys, as shown above.

For more on affinity and anti-affinity, see https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity. For more on annotations, see https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations.