
Environment preparation

Download the offline installation files

This example uses v1.30.4+k3s1.

  • Installation script install.sh
wget https://github.com/k3s-io/k3s/raw/refs/tags/v1.30.4+k3s1/install.sh
  • Main binary k3s
wget https://github.com/k3s-io/k3s/releases/download/v1.30.4%2Bk3s1/k3s
  • Image tarball
wget https://github.com/k3s-io/k3s/releases/download/v1.30.4%2Bk3s1/k3s-airgap-images-amd64.tar
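Optionally verify the downloads before distributing them to the offline hosts. A minimal sketch, assuming the release also publishes a sha256sum-amd64.txt asset:
wget https://github.com/k3s-io/k3s/releases/download/v1.30.4%2Bk3s1/sha256sum-amd64.txt
sha256sum -c sha256sum-amd64.txt --ignore-missing   # skips entries for files not downloaded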

Prepare the PostgreSQL database

  • Database host
name       ip               notes
database   192.168.100.100  PostgreSQL
  • Create compose.yml
services:
  postgres:
    image: postgres:alpine
    container_name: postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: k3s
      POSTGRES_PASSWORD: talent
      POSTGRES_DB: k3s
    ports:
      - "5432:5432" 
    volumes:
      - ./data:/var/lib/postgresql/data
      - ./init-db.sh:/docker-entrypoint-initdb.d/init-db.sh 
  • Create the init script init-db.sh
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname=postgres <<-EOSQL
    CREATE DATABASE harbor;
EOSQL
  • Start the PostgreSQL container
docker compose up -d
  • Verify the databases
docker exec -it postgres psql -U k3s -c '\l'
                                                 List of databases
   Name    | Owner | Encoding | Locale Provider |  Collate   |   Ctype    | Locale | ICU Rules | Access privileges 
-----------+-------+----------+-----------------+------------+------------+--------+-----------+-------------------
 harbor    | k3s   | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | 
 k3s       | k3s   | UTF8     | libc            | en_US.utf8 | en_US.utf8 |        |           | 
...
(5 rows)
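Optionally confirm that the database is reachable over the network from one of the future master nodes. A quick sketch, assuming the postgresql-client package provides psql:
apt install postgresql-client -y
psql "postgres://k3s:talent@192.168.100.100:5432/k3s?sslmode=disable" -c 'SELECT 1;'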

Install and deploy k3s

Prepare the servers

Prepare three machines, all running as masters. The steps in this section must be executed on all three.

name       ip               notes
master01   192.168.100.101  required
master02   192.168.100.102  optional
master03   192.168.100.103  optional
cat <<EOF >> /etc/hosts
192.168.100.100   database
192.168.100.101   master01
192.168.100.102   master02
192.168.100.103   master03
EOF

Import the image tarball

  • A key step for offline installation
mkdir -p /var/lib/rancher/k3s/agent/images/
cp k3s-airgap-images-amd64.tar /var/lib/rancher/k3s/agent/images/

Copy the k3s binary

  • Copy it into the executable path; the installer then does not need to download the binary
chmod a+x k3s && cp k3s /usr/local/bin/

Install

  • Run the install script, pointing it at the external database
INSTALL_K3S_SKIP_DOWNLOAD=true \
sh install.sh -s - server \
--datastore-endpoint="postgres://k3s:talent@database:5432/k3s?sslmode=disable" \
--token K10c71232f051944de58ab058871ac7a85b42cbdee5c6df94deb6bb493c79b15a92
  • Check the imported images
ctr i ls -q
docker.io/rancher/klipper-helm:v0.8.4-build20240523
docker.io/rancher/klipper-lb:v0.4.9
docker.io/rancher/local-path-provisioner:v0.0.28
docker.io/rancher/mirrored-coredns-coredns:1.10.1
docker.io/rancher/mirrored-library-busybox:1.36.1
docker.io/rancher/mirrored-library-traefik:2.10.7
docker.io/rancher/mirrored-metrics-server:v0.7.0
docker.io/rancher/mirrored-pause:3.6
  • Once all three machines are done, check the cluster node status
kubectl get nodes 
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   4m49s   v1.30.4+k3s1
master02   Ready    control-plane,master   51s     v1.30.4+k3s1
master03   Ready    control-plane,master   45s     v1.30.4+k3s1
  • Check pod status
kubectl get pods -A
NAMESPACE   NAME                                READY  STATUS     RESTARTS  AGE
kube-system coredns-576bfc4dc7-f6bsb            1/1    Running    0         27m
kube-system helm-install-traefik-crd-5b2x4      0/1    Completed  0         27m
kube-system helm-install-traefik-pl6k7          0/1    Completed  1         27m
kube-system local-path-provisioner-5f9d8-lhntz  1/1    Running    0         27m
kube-system metrics-server-557ff575fb-77xzw     1/1    Running    0         27m
kube-system svclb-traefik-151f91a9-8tkdq        2/2    Running    0         23m
kube-system svclb-traefik-151f91a9-b7pwz        2/2    Running    0         23m
kube-system svclb-traefik-151f91a9-kwwp8        2/2    Running    0         25m
kube-system traefik-5fb479b77-r84mv             1/1    Running    0         25m
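If a node does not show up or a pod stays pending, a few quick checks on the affected machine usually narrow it down (assuming the default k3s install paths):
systemctl status k3s --no-pager
journalctl -u k3s --no-pager -n 50
cat /var/lib/rancher/k3s/server/token    # the shared token every server must use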

Install the Helm deployment tool

  • Download and install the Helm binary
wget https://get.helm.sh/helm-v3.16.2-linux-amd64.tar.gz
tar -zxvf helm-v3.16.2-linux-amd64.tar.gz
chmod a+x linux-amd64/helm && cp linux-amd64/helm /usr/local/bin/
  • Set the environment variable
    Add it to ~/.bashrc so it is applied automatically on login
echo "export KUBECONFIG=/etc/rancher/k3s/k3s.yaml" >> ~/.bashrc && source ~/.bashrc

Deploy Cert Manager

  • On an internet-connected machine, download the Cert Manager images, export them, and copy the archive to all three servers
docker images --format "{{.Repository}}:{{.Tag}}"  |grep cert-manager > cert-manager.txt
docker save $(awk '{printf "%s ", $0}' cert-manager.txt) | gzip > cert-manager-images.tar.gz
  • Import the Cert Manager images
gzip -d -c cert-manager-images.tar.gz | ctr i import -
  • Download the required files and copy them to one of the servers
wget https://github.com/cert-manager/cert-manager/releases/download/v1.15.3/cert-manager.crds.yaml
wget https://charts.jetstack.io/charts/cert-manager-v1.15.3.tgz
  • Apply the cert-manager CRD configuration
kubectl apply -f cert-manager.crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
  • Deploy the cert-manager offline package with Helm
helm install cert-manager ./cert-manager-v1.15.3.tgz --namespace cert-manager --create-namespace
or
helm install cert-manager ./cert-manager-v1.15.3.tgz --namespace cert-manager --create-namespace --set crds.enabled=true
kubectl get pods -n cert-manager 
NAME                                       READY   STATUS      RESTARTS   AGE
cert-manager-9647b459d-dqd8b               1/1     Running     0          23m
cert-manager-cainjector-5d8798687c-l2kkf   1/1     Running     0          23m
cert-manager-startupapicheck-6gglh         0/1     Completed   1          23m
cert-manager-webhook-c77744d75-wd2kx       1/1     Running     0          23m
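As an optional smoke test, create a self-signed ClusterIssuer and confirm it reports Ready; a minimal sketch using the standard cert-manager API (the issuer name is arbitrary):
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
EOF
kubectl get clusterissuer selfsigned-issuer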

Deploy the Rancher cluster management tool

Import the images

  • On an internet-connected machine, download the Rancher images, export them, and copy the archive to all three servers
docker images --format "{{.Repository}}:{{.Tag}}"  |grep rancher > rancher.txt
docker save $(awk '{printf "%s ", $0}' rancher.txt) | gzip > rancher-images.tar.gz
  • Import the Rancher images
gzip -d -c rancher-images.tar.gz | ctr i import -

Deploy Rancher

  • Download the Rancher Helm chart
wget https://releases.rancher.com/server-charts/latest/rancher-2.9.2.tgz
  • Deploy the Rancher offline package with Helm
helm install rancher ./rancher-2.9.2.tgz \
--namespace cattle-system \
--set hostname=manager.yiqisoft.cn \
--create-namespace
  • Check the Rancher installation result
kubectl get pods -n cattle-system 
NAME                               READY   STATUS      RESTARTS      AGE
rancher-57c9747d96-p498t           1/1     Running     0             52m
rancher-57c9747d96-pdrtn           1/1     Running     1 (52m ago)   52m
rancher-57c9747d96-xm9sl           1/1     Running     0             52m
rancher-webhook-77b49b9ff9-fqdxf   1/1     Running     0             51m
  • Configure DNS and open the Rancher page
    Point DNS at all three servers; if possible, set up HAProxy and Keepalived for load balancing
address=/manager.yiqisoft.cn/192.168.100.101
address=/manager.yiqisoft.cn/192.168.100.102
address=/manager.yiqisoft.cn/192.168.100.103
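On first login Rancher asks for the bootstrap admin password, which can be read back from the cluster (the command documented by Rancher; adjust if your version stores it differently):
kubectl get secret --namespace cattle-system bootstrap-secret \
-o go-template='{{.data.bootstrapPassword|base64decode}}{{"\n"}}'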

Deploy network storage

Use Ceph (recommended)

Prepare a Ceph cluster for storage (omitted here; left as an exercise for the reader).

Use an NFS network file system

Simple and practical, but not very efficient and only moderately secure.

  • Deploy NFS on a file server; here the database host is used
apt update
apt install nfs-kernel-server -y
mkdir -p /opt/nfs_share
chown nobody:nogroup /opt/nfs_share
chmod 777 /opt/nfs_share
echo "/opt/nfs_share *(rw,sync,no_subtree_check)" >> /etc/exports
exportfs -ra
systemctl restart nfs-kernel-server
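Confirm the export is actually published on the server:
exportfs -v
showmount -e localhost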

Deploy the Harbor private registry

Deploy the PV and PVC

  • Edit pv-pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /opt/nfs_share
    server: database
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: nfs-pv
  storageClassName: ""
  • Apply it in the harbor namespace
kubectl create namespace harbor
kubectl apply -f pv-pvc.yaml -n harbor

Result

persistentvolume/nfs-pv created
persistentvolumeclaim/nfs-pvc created
  • Check the result; nfs-pvc must be Bound
kubectl get pvc -n harbor

Result

NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
nfs-pvc   Bound    nfs-pv   30Gi       RWX                           <unset>                 92s
  • Check that nfs-pv is also Bound
kubectl get pv -n harbor

Result

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
nfs-pv   30Gi       RWX            Retain           Bound    harbor/nfs-pvc                  <unset>                          97s

Install the NFS dependency

Install nfs-common on all masters, and optionally verify that the share can be mounted and accessed at /tmp/nfs:

apt install nfs-common -y
mkdir /tmp/nfs
mount -t nfs database:/opt/nfs_share /tmp/nfs
ll /tmp/nfs

Deploy Harbor

  • Download the Harbor Helm chart online and copy it to one of the servers
wget https://helm.goharbor.io/harbor-1.15.1.tgz
  • If online, export the default values and modify them
helm show values harbor/harbor > harbor-values.yml
# Modify the content as follows
expose:
  ingress:
    hosts:
      core: hub.yiqisoft.cn
externalURL: https://hub.yiqisoft.cn
persistence:
  persistentVolumeClaim:
    registry:
      existingClaim: "nfs-pvc"
      storageClass: "-"
database:
  type: external
  external:
    host: "192.168.100.100"
    port: "5432"
    username: "k3s"
    password: "talent"
    coreDatabase: "harbor"
harborAdminPassword: "Harbor12345"
  • Download the Harbor images online and export them
docker images --format "{{.Repository}}:{{.Tag}}"  |grep goharbor > harbor.txt
docker save $(awk '{printf "%s ", $0}' harbor.txt) | gzip > harbor-images.tar.gz
  • Import the Harbor images
gzip -d -c harbor-images.tar.gz | ctr i import -
  • Install Harbor. If the images were not imported in advance, they will all be pulled from the internet and the time depends on network bandwidth
helm install harbor ./harbor-1.15.1.tgz -f harbor-values.yml -n harbor
  • Confirm that Harbor installed successfully
kubectl get pods -n harbor

All pods should show READY:

NAME                               READY   STATUS    RESTARTS      AGE
harbor-core-666cf84bc6-vq8mp       1/1     Running   0             85s
harbor-jobservice-c68bc669-6twbp   1/1     Running   0             85s
harbor-portal-d6c99f896-cq9fx      1/1     Running   0             85s
harbor-redis-0                     1/1     Running   0             84s
harbor-registry-5984c768c8-4xnrg   2/2     Running   0             84s
harbor-trivy-0                     1/1     Running   0             84s
  • Configure DNS and open the Harbor page
    Point DNS at all three servers; if possible, set up HAProxy and Keepalived for load balancing
address=/hub.yiqisoft.cn/192.168.100.101
address=/hub.yiqisoft.cn/192.168.100.102
address=/hub.yiqisoft.cn/192.168.100.103
  • Verify that Harbor is working
curl -k https://hub.yiqisoft.cn/v2/
{"errors":[{"code":"UNAUTHORIZED","message":"unauthorized: unauthorized"}]}

Deploy the ChartMuseum management tool

Download the ChartMuseum image

  • Download the ChartMuseum image online, export it, and copy it to all three servers
docker save hub.yiqisoft.cn/yiqisoft/chartmuseum:v0.16.2 | gzip > chartmuseum-images.tar.gz
  • Push it to the private registry
docker push hub.yiqisoft.cn/yiqisoft/chartmuseum:v0.16.2
  • Or import the ChartMuseum image directly
gzip -d -c chartmuseum-images.tar.gz | ctr i import -

Deploy the ChartMuseum Helm chart

  • Download the ChartMuseum chart
wget https://github.com/chartmuseum/charts/releases/download/chartmuseum-3.10.3/chartmuseum-3.10.3.tgz
  • Deploy the chart
helm install chartmuseum ./chartmuseum-3.10.3.tgz \
--namespace chartmuseum \
--create-namespace \
--set env.open.DISABLE_API=false \
--set service.type=LoadBalancer
  • Deploy an Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: chartmuseum
  namespace: chartmuseum
spec:
  ingressClassName: traefik
  rules:
    - host: chart.yiqisoft.cn
      http:
        paths:
          - backend:
              service:
                name: chartmuseum
                port:
                  number: 8080
            path: /
            pathType: Prefix

Alternatively, add an equivalent configuration in the Rancher UI.
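As with Rancher and Harbor, chart.yiqisoft.cn must resolve to the cluster nodes, for example with the same dnsmasq setup:
address=/chart.yiqisoft.cn/192.168.100.101
address=/chart.yiqisoft.cn/192.168.100.102
address=/chart.yiqisoft.cn/192.168.100.103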

Upload charts to ChartMuseum

  • A command like the following uploads a chart; ChartMuseum automatically indexes it into index.yaml
curl --data-binary "@yiedge-base-3.1.0.tgz" http://chart.yiqisoft.cn/api/charts
  • Check the upload result
curl chart.yiqisoft.cn/index.yaml
apiVersion: v1
entries:
  yiedge-base:
  - apiVersion: v1
    appVersion: Pnicom
    created: "2024-11-11T10:11:55.473460704Z"
    description: A Helm Chart for YiEDGE base
    digest: 14b9a971a53a2482a6d4a930daefe2b0fee1cb1bcef909b023cc34959b4c142a
    home: https://www.yiqisoft.cn
    icon: https://www.yiqisoft.cn/images/yiqilogo.png
    keywords:
    - IoT
    - Middleware
    - YiEDGE
    name: yiedge-base
    sources:
    - https://github.com/yiqisoft
    urls:
    - charts/yiedge-base-3.1.0.tgz
    version: 3.1.0
generated: "2024-11-11T10:11:56Z"
serverInfo: {}
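On a client machine the repository can then be consumed directly with Helm (the repo name chartmuseum below is arbitrary):
helm repo add chartmuseum http://chart.yiqisoft.cn
helm repo update
helm search repo yiedge-base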

Deploy the frps reverse proxy

Build frps

  • Clone the repository and build the Docker image
git clone https://github.com/fatedier/frp.git
cd frp
git checkout v0.61.0
docker build -f dockerfiles/Dockerfile-for-frps . -t hub.yiqisoft.cn/yiqisoft/frps:v0.61.0
  • Push it to the private registry
docker push hub.yiqisoft.cn/yiqisoft/frps:v0.61.0

Develop the frps Helm chart

A sample is given below; adapt it to your own needs.

  • Helm chart directory structure
frp-server/
├── charts/                # optional, dependent sub-charts
├── templates/             # template files
│   ├── deployment.yaml    # Deployment template
│   ├── service.yaml       # Service template
│   ├── configmap.yaml     # ConfigMap template
│   ├── ingress.yaml       # optional Ingress configuration
├── values.yaml            # default values
├── Chart.yaml             # chart metadata
  • Chart.yaml
apiVersion: v1
name: frps
version: "0.0.1"
appVersion: frp Server
description: A Helm Chart for frp server
keywords:
- frp
- reverse proxy
home: https://www.yiqisoft.cn
icon: https://www.yiqisoft.cn/images/yiqilogo.png
sources:
- https://github.com/yiqisoft
  • values.yaml
    Adjust domain and subDomainHost as needed, along with the externally exposed ports service.nodePorts.http/frp/dashboard
replicaCount: 1
fullnameOverride: "frp-server" 
domain: "*.h.yiqisoft.cn"  # dynamic domain
image:
  repository: hub.yiqisoft.cn/yiqisoft/frps
  tag: v0.61.0
  pullPolicy: IfNotPresent
service:
  type: NodePort
  nodePorts:
    http: 30080             # vhost http nodePort
    frp: 30000              # frp service nodePort
    dashboard: 30500        # dashboard nodePort
frpConfig:
  bindPort: 7000
  vhostHTTPPort: 8888
  subDomainHost: "h.yiqisoft.cn"
  webServer:
    addr: "0.0.0.0"
    port: 7500
    user: "yiqisoft"
    password: "talent123!"
    pprofEnable: true
  log:
    to: "/tmp/frps.log"
    level: "info"
    maxDays: 7
  auth:
    token: "yiqisoft"
  • deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.fullnameOverride }}
  labels:
    app: {{ .Values.fullnameOverride }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.fullnameOverride }}
  template:
    metadata:
      labels:
        app: {{ .Values.fullnameOverride }}
    spec:
      containers:
        - name: frp-server
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          args:
          - "-c"
          - "/frps.toml"
          ports:
            - name: frp-vhost
              containerPort: {{ .Values.frpConfig.vhostHTTPPort }}
            - name: frp-port
              containerPort: {{ .Values.frpConfig.bindPort }}
            - name: frp-dashboard
              containerPort: {{ .Values.frpConfig.webServer.port }}
          volumeMounts:
            - name: frp-config
              mountPath: /frps.toml
              subPath: frps.toml
      volumes:
        - name: frp-config
          configMap:
            name: {{ .Values.fullnameOverride }}-config
  • service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.fullnameOverride }}-service
  labels:
    app: {{ .Values.fullnameOverride }}
spec:
  type: NodePort
  ports:
    - name: frp-vhost
      port: {{ .Values.frpConfig.vhostHTTPPort }}    # configured vhost HTTP port
      targetPort: frp-vhost
      protocol: TCP
      nodePort: {{ .Values.service.nodePorts.http }}  # externally exposed port from values.yaml
    - name: frp-port
      port: {{ .Values.frpConfig.bindPort }}         # configured bindPort
      targetPort: frp-port
      protocol: TCP
      nodePort: {{ .Values.service.nodePorts.frp }}  # externally exposed port from values.yaml
    - name: frp-dashboard
      port: {{ .Values.frpConfig.webServer.port }}  # configured dashboard port
      targetPort: frp-dashboard
      protocol: TCP
      nodePort: {{ .Values.service.nodePorts.dashboard }}  # externally exposed port from values.yaml
  selector:
    app: {{ .Values.fullnameOverride }}
  • ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.fullnameOverride }}-ingress
  labels:
    app: {{ .Values.fullnameOverride }}
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: "{{ .Values.domain }}"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.fullnameOverride }}-service
                port:
                  number: {{ .Values.frpConfig.vhostHTTPPort }}
  • configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.fullnameOverride }}-config
data:
  frps.toml: |
    bindPort = {{ .Values.frpConfig.bindPort }}
    vhostHTTPPort = {{ .Values.frpConfig.vhostHTTPPort }}
    subDomainHost = "{{ .Values.frpConfig.subDomainHost }}"

    webServer.addr = "{{ .Values.frpConfig.webServer.addr }}"
    webServer.port = {{ .Values.frpConfig.webServer.port }}
    webServer.user = "{{ .Values.frpConfig.webServer.user }}"
    webServer.password = "{{ .Values.frpConfig.webServer.password }}"
    webServer.pprofEnable = {{ .Values.frpConfig.webServer.pprofEnable }}

    log.to = "{{ .Values.frpConfig.log.to }}"
    log.level = "{{ .Values.frpConfig.log.level }}"
    log.maxDays = {{ .Values.frpConfig.log.maxDays }}

    auth.token = "{{ .Values.frpConfig.auth.token }}"
  • Package the chart
helm package frp-server
Successfully packaged chart and saved it to: /opt/charts/frps-0.0.1.tgz
  • Upload it to the private ChartMuseum
curl --data-binary "@frps-0.0.1.tgz" http://chart.yiqisoft.cn/api/charts

Deploy to the k3s management cluster

  • Deploy frps-0.0.1.tgz to the cluster manually
helm install frps-server ./frps-0.0.1.tgz -n frps --create-namespace
  • Deploying through the Rancher UI is simpler.

Verify the result

  • Resolve the hostnames in a private DNS, for example dnsmasq
# address=/h.yiqisoft.cn/ matches the domain and all of its subdomains
address=/h.yiqisoft.cn/192.168.100.101
  • Open http://h.yiqisoft.cn:30500 to manage frps
  • An frpc client connects to the server at tcp://h.yiqisoft.cn:30000; its service is then reachable at client-id.h.yiqisoft.cn
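A minimal frpc.toml sketch for such a client, assuming a local web service on port 80 and the token configured above:
# frpc.toml on the client
serverAddr = "h.yiqisoft.cn"
serverPort = 30000            # frp service nodePort from the chart
auth.token = "yiqisoft"       # must match the frps auth.token

[[proxies]]
name = "web"
type = "http"
localPort = 80                # local service to expose
subdomain = "client-id"       # published as client-id.h.yiqisoft.cn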

Manage new clusters (workload clusters)

  • In some cases you may need to clear the CoreDNS cache
kubectl -n kube-system rollout restart deployment coredns