A Comprehensive Guide to Building Kubernetes Operators with Kubebuilder
#go #mysql #kubernetes #kubebuilder

Kubernetes operators are a powerful way to automate complex applications on Kubernetes. In this blog post, we provide a hands-on guide for Kubernetes developers who want to learn how to create and use operators. We cover the basics: defining custom resources, creating controllers, and managing reconciliation loops. Along the way, we will build an operator for MySQL.

Prerequisites
Go version v1.20.0+
Docker version 17.03+
kubectl version v1.11.3+
Access to a Kubernetes v1.11.3+ cluster

Install kubebuilder

$ curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
$ chmod +x kubebuilder
$ mv kubebuilder /usr/local/bin/

$ kubebuilder version 
Version: main.version{KubeBuilderVersion:"3.11.1", KubernetesVendor:"1.27.1", GitCommit:"1dc8ed95f7cc55fef3151f749d3d541bec3423c9", BuildDate:"2023-07-03T13:10:56Z", GoOs:"linux", GoArch:"amd64"}

Init/bootstrap the project

$ mkdir -p ~/ops/mysql-operator && cd ~/ops/mysql-operator
$ kubebuilder init --domain dpuigerarde.com --repo github.com/dpuig/mysql-operator

The kubebuilder init command initializes a new Kubernetes operator project. The --domain flag specifies the Kubernetes API group suffix for the project's custom resources; its default value is my.domain.

Create an API

$ kubebuilder create api --group apps --version v1alpha1 --kind MySQLCluster

The kubebuilder create api command scaffolds a new API (custom resource definition) in the operator project. The --group flag specifies the Kubernetes group for the API; combined with the domain above, the full API group becomes apps.dpuigerarde.com.

If you answer y to both Create Resource [y/n] and Create Controller [y/n], this creates the files

api
└── v1alpha1
    ├── groupversion_info.go
    ├── mysqlcluster_types.go
    └── zz_generated.deepcopy.go

which is where the API types are defined,

as well as the files

internal
└── controller
    ├── mysqlcluster_controller.go
    └── suite_test.go

This is where the reconciliation business logic for our kind (CRD) lives.
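
For orientation, the generated api/v1alpha1/groupversion_info.go wires the --domain and --group flags above into a runtime scheme. Abridged, it looks roughly like this:

package v1alpha1

import (
    "k8s.io/apimachinery/pkg/runtime/schema"
    "sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
    // GroupVersion identifies this API group and version.
    GroupVersion = schema.GroupVersion{Group: "apps.dpuigerarde.com", Version: "v1alpha1"}

    // SchemeBuilder is used to add Go types to the GroupVersionKind scheme.
    SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

    // AddToScheme adds the types in this group-version to the given scheme.
    AddToScheme = SchemeBuilder.AddToScheme
)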

Custom Resource Definition (CRD)

The MySQLClusterSpec defines the schema of the MySQLCluster resource. It should include the following fields:

deploymentName: the name of the MySQL deployment.
replicas: the number of MySQL pods.
version: the MySQL version to use.
password: the default admin (root) password.

In the generated project, open api/v1alpha1/mysqlcluster_types.go and edit the MySQLClusterSpec and MySQLClusterStatus structs:

type MySQLClusterSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // DeploymentName is the name of the MySQL Deployment.
    // Note: kubebuilder markers must sit in the comment block directly above
    // the field, with no blank line in between, or controller-gen will not
    // pick them up.
    // +kubebuilder:validation:Required
    // +kubebuilder:validation:Format:=string
    DeploymentName string `json:"deploymentName"`

    // Replicas is the number of MySQL pods.
    // +kubebuilder:validation:Required
    // +kubebuilder:validation:Minimum=0
    Replicas *int32 `json:"replicas"`

    // Version is the MySQL version to use.
    Version string `json:"version"`

    // Password is the default root password.
    // +kubebuilder:validation:Required
    // +kubebuilder:validation:Format:=string
    Password string `json:"password"`
}

type MySQLClusterStatus struct {
    // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // AvailableReplicas mirrors deployment.status.availableReplicas.
    // +optional
    AvailableReplicas int32 `json:"availableReplicas"`
}
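
The same file also scaffolds the MySQLCluster object itself, which ties the spec and status together. An abridged sketch (the generated file additionally defines MySQLClusterList and registers both types with the SchemeBuilder); the status subresource marker is what later allows the controller to call r.Status().Update:

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status

// MySQLCluster is the Schema for the mysqlclusters API.
type MySQLCluster struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MySQLClusterSpec   `json:"spec,omitempty"`
    Status MySQLClusterStatus `json:"status,omitempty"`
}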

Then run the following to regenerate the CRD manifests under config/crd/bases:

$ make manifests

Implement the Controller Logic

Edit the generated controller file located at internal/controller/mysqlcluster_controller.go.

To keep this post focused, we will skip over the fine details of the API types and concentrate on the logic in the controller. At a high level, this controller is responsible for creating a Deployment that runs the MySQL database.

package controller

import (
    "context"

    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/client-go/tools/record"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/log"

    "github.com/go-logr/logr"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    samplecontrollerv1alpha1 "github.com/dpuig/mysql-operator/api/v1alpha1"
)

var (
    deploymentOwnerKey = ".metadata.controller"
    apiGVStr           = samplecontrollerv1alpha1.GroupVersion.String()
)

// MySQLClusterReconciler reconciles a MySQLCluster object
type MySQLClusterReconciler struct {
    client.Client
    Log      logr.Logger
    Scheme   *runtime.Scheme
    Recorder record.EventRecorder
}

//+kubebuilder:rbac:groups=apps.dpuigerarde.com,resources=mysqlclusters,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps.dpuigerarde.com,resources=mysqlclusters/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=apps.dpuigerarde.com,resources=mysqlclusters/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// The Reconcile function compares the state specified by the MySQLCluster
// object against the actual cluster state, and then performs operations to
// make the cluster state reflect the state specified by the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.15.0/pkg/reconcile
func (r *MySQLClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("mysqlCluster", req.NamespacedName)

    var mysqlCluster samplecontrollerv1alpha1.MySQLCluster
    log.Info("fetching MySQLCluster Resource")
    if err := r.Get(ctx, req.NamespacedName, &mysqlCluster); err != nil {
        log.Error(err, "unable to fetch MySQLCluster")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    if err := r.cleanupOwnedResources(ctx, log, &mysqlCluster); err != nil {
        log.Error(err, "failed to clean up old Deployment resources for this Foo")
        return ctrl.Result{}, err
    }

    // get deploymentName from mysqlCluster.Spec
    deploymentName := mysqlCluster.Spec.DeploymentName

    // define deployment template using deploymentName
    deploy := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:      deploymentName,
            Namespace: req.Namespace,
        },
    }

    // Create or Update deployment object
    if _, err := ctrl.CreateOrUpdate(ctx, r.Client, deploy, func() error {
        replicas := int32(1)
        if mysqlCluster.Spec.Replicas != nil {
            replicas = *mysqlCluster.Spec.Replicas
        }
        deploy.Spec.Replicas = &replicas

        labels := map[string]string{
            "app":        "mysql",
            "controller": req.Name,
        }

        // set labels to spec.selector for our deployment
        if deploy.Spec.Selector == nil {
            deploy.Spec.Selector = &metav1.LabelSelector{MatchLabels: labels}
        }

        // set labels to template.objectMeta for our deployment
        if deploy.Spec.Template.ObjectMeta.Labels == nil {
            deploy.Spec.Template.ObjectMeta.Labels = labels
        }

        // set a container for our deployment
        containers := []corev1.Container{
            {
                Name:  "db",
                Image: "mysql:" + mysqlCluster.Spec.Version,
                Env: []corev1.EnvVar{
                    {
                        Name:  "MYSQL_ROOT_PASSWORD",
                        Value: mysqlCluster.Spec.Password,
                    },
                },
                Command: []string{"mysqld", "--user=root"},
                Args:    []string{"--default-authentication-plugin=mysql_native_password"},
                Ports: []corev1.ContainerPort{
                    {
                        Name:          "mysql",
                        ContainerPort: 3306,
                    },
                },
                VolumeMounts: []corev1.VolumeMount{
                    {
                        Name:      "mysql-persistent-storage",
                        MountPath: "/var/lib/mysql",
                    },
                },
                SecurityContext: &corev1.SecurityContext{
                    RunAsUser:  func() *int64 { i := int64(1001); return &i }(),
                    RunAsGroup: func() *int64 { i := int64(1001); return &i }(),
                },
            },
        }

        // set containers to template.spec.containers for our deployment
        if deploy.Spec.Template.Spec.Containers == nil {
            deploy.Spec.Template.Spec.Containers = containers
        }

        deploy.Spec.Strategy.Type = "Recreate"
        deploy.Spec.Template.Spec.Volumes = []corev1.Volume{
            {
                Name: "mysql-persistent-storage",
                VolumeSource: corev1.VolumeSource{
                    PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                        ClaimName: "mysql-pv-claim",
                    },
                },
            },
        }

        deploy.Spec.Template.Spec.SecurityContext = &corev1.PodSecurityContext{
            FSGroup: func() *int64 { i := int64(1001); return &i }(),
        }

        // set the owner so that garbage collection can kick in
        if err := ctrl.SetControllerReference(&mysqlCluster, deploy, r.Scheme); err != nil {
            log.Error(err, "unable to set ownerReference from mysqlCluster to Deployment")
            return err
        }

        return nil
    }); err != nil {

        // error handling of ctrl.CreateOrUpdate
        log.Error(err, "unable to ensure deployment is correct")
        return ctrl.Result{}, err

    }

    // get deployment object from in-memory-cache
    var deployment appsv1.Deployment
    var deploymentNamespacedName = client.ObjectKey{Namespace: req.Namespace, Name: mysqlCluster.Spec.DeploymentName}
    if err := r.Get(ctx, deploymentNamespacedName, &deployment); err != nil {
        log.Error(err, "unable to fetch Deployment")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // set mysqlCluster.status.AvailableReplicas from deployment
    availableReplicas := deployment.Status.AvailableReplicas
    if availableReplicas == mysqlCluster.Status.AvailableReplicas {
        return ctrl.Result{}, nil
    }
    mysqlCluster.Status.AvailableReplicas = availableReplicas

    // update mysqlCluster.status
    if err := r.Status().Update(ctx, &mysqlCluster); err != nil {
        log.Error(err, "unable to update mysqlCluster status")
        return ctrl.Result{}, err
    }

    // create event for updated mysqlCluster.status
    r.Recorder.Eventf(&mysqlCluster, corev1.EventTypeNormal, "Updated", "Update mysqlCluster.status.AvailableReplicas: %d", mysqlCluster.Status.AvailableReplicas)

    return ctrl.Result{}, nil
}

// SetupWithManager sets up the controller with the Manager.
func (r *MySQLClusterReconciler) SetupWithManager(mgr ctrl.Manager) error {
    ctx := context.Background()
    // add deploymentOwnerKey index to deployment object which MySQLCluster resource owns
    if err := mgr.GetFieldIndexer().IndexField(ctx, &appsv1.Deployment{}, deploymentOwnerKey, func(rawObj client.Object) []string {
        // grab the deployment object, extract the owner...
        deployment := rawObj.(*appsv1.Deployment)
        owner := metav1.GetControllerOf(deployment)
        if owner == nil {
            return nil
        }
        // ...make sure it's a MySQLCluster...
        if owner.APIVersion != apiGVStr || owner.Kind != "MySQLCluster" {
            return nil
        }

        // ...and if so, return it
        return []string{owner.Name}
    }); err != nil {
        return err
    }

    // define the watch targets: the MySQLCluster resource and the Deployments it owns
    return ctrl.NewControllerManagedBy(mgr).
        For(&samplecontrollerv1alpha1.MySQLCluster{}).
        Owns(&appsv1.Deployment{}).
        Complete(r)
}

// cleanupOwnedResources will delete any existing Deployment resources that
// were created for the given mysqlCluster that no longer match the
// mysqlCluster.spec.deploymentName field.
func (r *MySQLClusterReconciler) cleanupOwnedResources(ctx context.Context, log logr.Logger, mysqlCluster *samplecontrollerv1alpha1.MySQLCluster) error {
    log.Info("finding existing Deployments for Foo resource")

    // List all deployment resources owned by this mysqlCluster
    var deployments appsv1.DeploymentList
    if err := r.List(ctx, &deployments, client.InNamespace(mysqlCluster.Namespace), client.MatchingFields(map[string]string{deploymentOwnerKey: mysqlCluster.Name})); err != nil {
        return err
    }

    // Delete a deployment if its name doesn't match mysqlCluster.spec.deploymentName
    for _, deployment := range deployments.Items {
        if deployment.Name == mysqlCluster.Spec.DeploymentName {
            // If this deployment's name matches the one on the MySQLCluster resource
            // then do not delete it.
            continue
        }

        // Delete old deployment object which doesn't match mysqlCluster.spec.deploymentName
        if err := r.Delete(ctx, &deployment); err != nil {
            log.Error(err, "failed to delete Deployment resource")
            return err
        }

        log.Info("delete deployment resource: " + deployment.Name)
        r.Recorder.Eventf(mysqlCluster, corev1.EventTypeNormal, "Deleted", "Deleted deployment %q", deployment.Name)
    }

    return nil
}
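
One wiring detail to watch: the reconciler above declares Log and Recorder fields that the default scaffold does not populate, and Recorder.Eventf will panic on a nil recorder. When registering the controller in cmd/main.go, make sure both are set. A minimal sketch of the registration block, assuming the scaffolded setupLog variable:

if err = (&controller.MySQLClusterReconciler{
    Client:   mgr.GetClient(),
    Log:      ctrl.Log.WithName("controllers").WithName("MySQLCluster"),
    Scheme:   mgr.GetScheme(),
    Recorder: mgr.GetEventRecorderFor("mysqlcluster-controller"),
}).SetupWithManager(mgr); err != nil {
    setupLog.Error(err, "unable to create controller", "controller", "MySQLCluster")
    os.Exit(1)
}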

Project Structure

.
├── api
│   └── v1alpha1
│       ├── groupversion_info.go
│       ├── mysqlcluster_types.go
│       └── zz_generated.deepcopy.go
├── bin
│   ├── controller-gen
│   └── kustomize
├── cmd
│   └── main.go
├── config
│   ├── crd
│   │   ├── bases
│   │   │   └── apps.dpuigerarde.com_mysqlclusters.yaml
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_mysqlclusters.yaml
│   │       └── webhook_in_mysqlclusters.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── mysqlcluster_editor_role.yaml
│   │   ├── mysqlcluster_viewer_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── role.yaml
│   │   └── service_account.yaml
│   └── samples
│       ├── apps_v1alpha1_mysqlcluster.yaml
│       └── kustomization.yaml
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── internal
│   └── controller
│       ├── mysqlcluster_controller.go
│       └── suite_test.go
├── Makefile
├── mysql-pv.yaml
├── PROJECT
└── README.md

Running the Operator Locally (for Development)

For development purposes, you may want to run the operator locally against a remote cluster. This lets you iterate faster while developing.

  • Set your kubeconfig context:
$ export KUBECONFIG=<path-to-your-kubeconfig-file>
  • Install the CRDs into the cluster:
$ make install
$ kubectl get crds

NAME                                 CREATED AT
mysqlclusters.apps.dpuigerarde.com   2023-08-28T02:22:43Z

For the purposes of this example, we will create two supporting resources, a PersistentVolume and a PersistentVolumeClaim, defined in the file mysql-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

Apply it:

$ kubectl apply -f mysql-pv.yaml

$ kubectl get pv,pvc
NAME                               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
persistentvolume/mysql-pv-volume   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   manual                  103m

NAME                                   STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mysql-pv-claim   Bound    mysql-pv-volume   20Gi       RWO            manual         103m
  • Run your controller (this runs in the foreground, so switch to a new terminal if you want to leave it running):
$ make run

Deploy a Custom Resource

Make sure to update config/samples/apps_v1alpha1_mysqlcluster.yaml with the actual spec you want for your MySQLCluster resource:

apiVersion: apps.dpuigerarde.com/v1alpha1
kind: MySQLCluster
metadata:
  labels:
    app.kubernetes.io/name: mysqlcluster
    app.kubernetes.io/instance: mysqlcluster-sample
    app.kubernetes.io/part-of: mysql-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: mysql-operator
  name: mysqlcluster-sample
spec:
  deploymentName: mysqlcluster-sample-deploy
  replicas: 1
  version: "5.6"
  password: example

$ kubectl apply -f config/samples/apps_v1alpha1_mysqlcluster.yaml

mysqlcluster.apps.dpuigerarde.com/mysqlcluster-sample created
$ kubectl get mysqlclusters
NAME                  AGE
mysqlcluster-sample   10m

At this point, your operator should detect the custom resource and run its reconciliation loop, creating the specified MySQL database.

However, this blog post comes with a problem: the example as written fails, and I hope I can count on your help to solve it. I promise to follow up soon with a post that walks through the solution.

$ kubectl get deploy

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
mysqlcluster-sample-deploy   0/1     1            0           11m
$ kubectl get pods   

NAME                                         READY   STATUS             RESTARTS      AGE
mysqlcluster-sample-deploy-79c78b6c5-62jh5   0/1     CrashLoopBackOff   7 (42s ago)   11m
$ kubectl logs mysqlcluster-sample-deploy-79c78b6c5-62jh5 

2023-08-28 16:26:24 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2023-08-28 16:26:24 0 [Warning] Can't create test file /var/lib/mysql/mysqlcluster-sample-deploy-79c78b6c5-62jh5.lower-test
2023-08-28 16:26:24 0 [Note] mysqld (mysqld 5.6.51) starting as process 1 ...
2023-08-28 16:26:24 1 [Warning] Can't create test file /var/lib/mysql/mysqlcluster-sample-deploy-79c78b6c5-62jh5.lower-test
2023-08-28 16:26:24 1 [Warning] Can't create test file /var/lib/mysql/mysqlcluster-sample-deploy-79c78b6c5-62jh5.lower-test
2023-08-28 16:26:24 1 [Warning] One can only use the --user switch if running as root

2023-08-28 16:26:24 1 [Note] Plugin 'FEDERATED' is disabled.
mysqld: Table 'mysql.plugin' doesn't exist
2023-08-28 16:26:24 1 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
2023-08-28 16:26:24 1 [Note] InnoDB: Using atomics to ref count buffer pool pages
2023-08-28 16:26:24 1 [Note] InnoDB: The InnoDB memory heap is disabled
2023-08-28 16:26:24 1 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2023-08-28 16:26:24 1 [Note] InnoDB: Memory barrier is not used
2023-08-28 16:26:24 1 [Note] InnoDB: Compressed tables use zlib 1.2.11
2023-08-28 16:26:24 1 [Note] InnoDB: Using Linux native AIO
2023-08-28 16:26:24 1 [Note] InnoDB: Not using CPU crc32 instructions
2023-08-28 16:26:24 1 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2023-08-28 16:26:24 1 [Note] InnoDB: Completed initialization of buffer pool
2023-08-28 16:26:24 1 [ERROR] InnoDB: ./ibdata1 can't be opened in read-write mode
2023-08-28 16:26:24 1 [ERROR] InnoDB: The system tablespace must be writable!
2023-08-28 16:26:24 1 [ERROR] Plugin 'InnoDB' init function returned error.
2023-08-28 16:26:24 1 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2023-08-28 16:26:24 1 [ERROR] Unknown/unsupported storage engine: InnoDB
2023-08-28 16:26:24 1 [ERROR] Aborting

2023-08-28 16:26:24 1 [Note] Binlog end
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'partition'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_METRICS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMPMEM'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMP'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_LOCKS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_TRX'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'BLACKHOLE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'ARCHIVE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'MRG_MYISAM'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'MyISAM'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'MEMORY'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'CSV'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'sha256_password'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'mysql_old_password'