How to Run a Python Flask Application in Docker with Gunicorn, Nginx, Redis, Celery, and Crontab
#python #docker #flask #cron

If you have built a Python Flask application that needs Redis and cron jobs, and you want to host it with Docker, this post will show you how to set up the application with Nginx as a reverse-proxy web server and Gunicorn as the application server.

This article assumes you already know how to build an application with Python Flask. It also assumes the application uses a remote database server (MySQL).

You can check out my previous post on Build a User Authentication API using Python Flask and MySQL.

The Docker challenge

A Dockerfile can only have one CMD instruction. Our application uses Celery with Redis to process queues and also needs to run cron jobs, so keeping these background processes alive inside a single Docker container can be tricky.

One option is to use an entrypoint shell script:

FROM python:3.11.4-slim-bullseye
# start.sh below relies on bash and cron, so use a Debian-based image and install cron
RUN apt-get update && apt-get install -y --no-install-recommends cron && rm -rf /var/lib/apt/lists/*
COPY app_process app_process
COPY bin/crontab /etc/cron.d/crontab
RUN chmod +x /etc/cron.d/crontab
RUN crontab /etc/cron.d/crontab
COPY start.sh start.sh
RUN chmod +x /start.sh
CMD /start.sh

The start.sh script could look like this:

#!/bin/bash

# turn on bash's job control
set -m

# Start the primary process and put it in the background
gunicorn --bind 0.0.0.0:5000 wsgi:app --log-level=debug --workers=2 &

# Start cron in the background
cron -f &

# Start the Celery worker in the background
celery -A myapp.celery worker --loglevel=INFO &

# now we bring the primary process back into the foreground
# and leave it there
fg %1

Alternatively, you can chain multiple commands in a single CMD instruction:


CMD gunicorn --bind 0.0.0.0:5000 wsgi:app --log-level=debug --workers=2 & cron -f & celery -A myapp.celery worker --loglevel=INFO

You can also use supervisord to manage the processes.

# syntax=docker/dockerfile:1
FROM python:3.11.4-slim-bullseye
# apt-get is not available on Alpine images; install supervisor and cron on a Debian-based image
RUN apt-get update && apt-get install -y --no-install-recommends supervisor cron && rm -rf /var/lib/apt/lists/*
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY app_process app_process
COPY bin/crontab /etc/cron.d/crontab 
RUN chmod +x /etc/cron.d/crontab
RUN crontab /etc/cron.d/crontab
CMD ["/usr/bin/supervisord"]

Your supervisord configuration could look like this:

[supervisord]
nodaemon=true
user=root

[program:celeryworker]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=celery -A myapp.celery worker --loglevel=INFO
autostart=true
autorestart=true

[program:myapp_gunicorn]
command=gunicorn --bind 0.0.0.0:5000 wsgi:app --log-level=debug --workers=2
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

[program:cron]
command = cron -f -L 15
autostart=true
autorestart=true
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

The problem with any of the approaches above is that you are responsible for monitoring the services and recovering them when they fail. For example, crontab may stop running while the main application keeps working, and you would have to figure out how to recover crontab without restarting the whole container.

It is better to separate concerns by running one service per container.

Using multiple containers

You can use multiple containers to run the different services. In this solution, I used:

  • one container for the Flask application,
  • one container for the Redis service,
  • one container for the cron jobs and Celery (the queue service), with supervisord managing Celery.

Note: you could also move Celery (the queue service) into its own separate container if needed.
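
The Celery worker is started later with celery -A myapp.celery worker, so the Flask package is expected to expose a Celery instance named celery whose broker points at the Redis container. That setup is not shown in this post; a minimal sketch (the module layout and Redis URLs are assumptions, based on the redis service name in the docker-compose.yml further down) could be:

# myapp/__init__.py (sketch)
from celery import Celery
from flask import Flask

app = Flask(__name__)

# Broker and result backend point at the "redis" service defined in docker-compose.yml
celery = Celery(
    app.import_name,
    broker="redis://redis:6379/0",
    backend="redis://redis:6379/0",
)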

Dockerfile for the Flask application

FROM python:3.11.4-slim-bullseye

# set work directory
WORKDIR /app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

ARG UID=1000
ARG GID=1000

RUN apt-get update \
  && apt-get install -y --no-install-recommends build-essential default-libmysqlclient-dev default-mysql-client curl libpq-dev pkg-config \
  && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man \
  && apt-get clean

# RUN useradd -m python
# RUN chown -R python:python /app

# USER python


# If you have a requirements.txt file
COPY requirements/main.txt requirements/main.txt

# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements/main.txt

COPY . /app/

RUN pip install -e .


CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--worker-tmp-dir", "/dev/shm", "--workers", "2", "--worker-class", "gevent", "--worker-connections", "1000", "wsgi:app", "--log-level", "debug"]

Dockerfile for crontab and Celery

FROM python:3.11.4-slim-bullseye

# set work directory
WORKDIR /cronapp/


# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

ARG UID=1000
ARG GID=1000

RUN apt-get update \
  && apt-get install -y --no-install-recommends supervisor build-essential default-libmysqlclient-dev default-mysql-client curl cron libpq-dev pkg-config \
  && rm -rf /var/lib/apt/lists/* /usr/share/doc /usr/share/man \
  && apt-get clean

# RUN useradd -m python
# RUN chown -R python:python /app
# USER python


COPY requirements/main.txt requirements/main.txt

# install dependencies
RUN pip install --upgrade pip
RUN pip install -r requirements/main.txt

COPY . /cronapp/

RUN pip install -e .

# Setup cronjob
RUN touch /var/log/cron.log 

# Copying the crontab file 
COPY cron/bin/crontab /etc/cron.d/crontab 
RUN chmod +x /etc/cron.d/crontab


# run the crontab file
RUN crontab /etc/cron.d/crontab

RUN mkdir -p /var/log/supervisor

COPY services/cron/bin/supervisord.conf /etc/supervisor/conf.d/supervisord.conf

# CMD ["/usr/bin/supervisord", "-n"]

CMD cron -f & /usr/bin/supervisord -n

Supervisord configuration (for the cron and Celery container)

[supervisord]
nodaemon=true
user=root

[program:celeryworker]
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
command=celery -A myapp.celery worker --loglevel=INFO
autostart=true
autorestart=true

Sample crontab

SHELL=/bin/bash
PATH=/usr/local/bin:/usr/local/sbin:/usr/sbin:/usr/bin:/sbin:/bin

# run notify users every day at 1:05AM
5 1 * * *  flask --app myapp notify-users >> /var/log/cron.log 2>&1

For this approach to work, your application must be structured using the package pattern (the same structure as in the previous post).

That way, you can run a function of your application from the command line:

flask --app myapp notify-users

Remember to use @app.cli.command to create a custom command that exposes a function on the command line.

Example:

from myapp import app
from myapp.models.user import User
from myapp.queue.sendmail import send_email_to_user

@app.cli.command('notify-users')
def notify_users():
    # Fetch the first batch of verified users, newest first
    offset = 0
    limit = 100
    users = (
        User.query.filter(User.is_verified == 1)
        .order_by(User.created_at.desc())
        .limit(limit)
        .offset(offset)
    )

    for user in users:
        send_email_to_user(user)
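
The send_email_to_user helper imported above is where the work gets handed off to Celery, so the actual email is sent by the worker rather than inside the cron run. Its implementation is not shown in this post; a sketch, assuming the Celery instance outlined earlier and a hypothetical task name, might be:

# myapp/queue/sendmail.py (sketch; module path taken from the import above)
from myapp import celery


@celery.task(name='send_email_to_user')
def _send_email_task(user_id, email):
    # Send the email here (SMTP call or mail API - implementation omitted)
    pass


def send_email_to_user(user):
    # Enqueue the job on the Redis broker; the Celery worker picks it up.
    # Assumes the User model has id and email fields.
    _send_email_task.delay(user.id, user.email)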

Nginx Dockerfile

FROM nginx:1.23-alpine

RUN rm /etc/nginx/nginx.conf
COPY nginx.conf /etc/nginx/

RUN rm /etc/nginx/conf.d/default.conf
COPY myapp.conf /etc/nginx/conf.d/


CMD ["nginx", "-g", "daemon off;"]

You can now use Docker Compose to manage all the containers.

A sample docker-compose.yml:

version: "3.8"

services:
  backend:
    container_name: "app"
    build:
      context: .
      args:
        - "UID=-1000"
        - "GID=-1000"
        - "FLASK_DEBUG=false"
    volumes:
      - .:/app
    ports:
      - "5000:5000"
    env_file:
      - ".env"
    restart: "-unless-stopped"
    stop_grace_period: "2s"
    tty: true
    deploy:
      resources:
        limits:
          cpus: "-0"
          memory: "-0"
    depends_on:
      - "redis"
    profiles: ["myapp"]

  cron:
    container_name: "cron"
    build:
      context: .
      dockerfile: ./services/cron/Dockerfile
      args:
        - "UID=-1000"
        - "GID=-1000"
    env_file:
      - ".env"
    restart: "-unless-stopped"
    stop_grace_period: "2s"
    tty: true
    deploy:
      resources:
        limits:
          cpus: "-0"
          memory: "-0"
    depends_on:
      - "redis"
    volumes:
      - .:/cronapp/
    profiles: ["myapp"]

  redis:
    deploy:
      resources:
        limits:
          cpus: "-0"
          memory: "-0"
    image: "redis:7.0.5-bullseye"
    restart: "-unless-stopped"
    stop_grace_period: "3s"
    command: "redis-server --bind redis --maxmemory 256mb --maxmemory-policy allkeys-lru --appendonly yes"
    volumes:
      - "./redis:/data"
    profiles: ["redis"]

  nginx:
    container_name: "nginx"
    build:
      context: ./services/nginx
    restart: "-unless-stopped"
    stop_grace_period: "2s"
    tty: true
    deploy:
      resources:
        limits:
          cpus: "-0"
          memory: "-0"
    ports:
      - "80:80"
    depends_on:
      - "backend"
    volumes:
      - .:/nginx/
    profiles: ["nginx"]

You can now start your application and all of its services by running the command below (naming the services explicitly also activates their Compose profiles):

docker compose up --detach --build backend redis cron nginx
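
Once the stack is up, you can confirm that every service started and follow the cron/Celery container's output, for example with:

docker compose ps
docker compose logs -f cron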