Coverage with Flask
Problem introduction
Recently I started working on a Flask project. One of the challenges I faced was executing test cases in CI/CD. I used the Flask-SQLAlchemy ORM to connect to MySQL databases; however, the CI/CD environment has no access to MySQL, which caused the pipeline to fail. We evaluated several ways to work around this. Possible strategies:
- Set up MySQL on the CI/CD container. This makes the container somewhat heavy and costs time and resources.
- Since we deploy on Kubernetes, run a MySQL container inside the same execution pod. This is probably the best approach in a Kubernetes setting, and preloading the data would reduce execution time.
- Set up an in-memory SQLite database, analogous to Java's H2. This reduces both the resource and time requirements.
- A further issue is running the test cases on the pod while still being able to push the Docker image to the Docker registry. To solve both, we can run two containers: one builds the Docker image and pushes it to the registry, while the other executes Python-specific tasks such as running the test cases. This is a lightweight approach, since we already use a Docker (with the docker binary) container for a few services.
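The third strategy works because SQLite supports fully in-memory databases out of the box: nothing to install, and every run starts from a clean slate. A minimal sketch with the standard library (the `apps` table is illustrative, not the project's real schema):

```python
import sqlite3

# ":memory:" creates a database that lives entirely in RAM and is
# discarded when the connection closes -- ideal for isolated test runs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE apps (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO apps (name) VALUES (?)", ("demo",))
rows = conn.execute("SELECT name FROM apps").fetchall()
print(rows)  # [('demo',)]
```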
This document walks through the following steps:
- Explain the deployment topology
- Problems importing MySQL data into SQLite
- The solution for importing data into SQLite3
- Set up test profiles and how to use them
- Prepare the pipeline execution environment
- Set up and execute the pipeline
Build topology
Basic import problem
From my reading, I realized that the best way to execute unit test cases is against an in-memory database, the way H2 is used with Java/Hibernate. Initially, we tried to create a SQL dump and load it into SQLite3.
$ mysqldump -u<user> -p<password> -h <host> db > testdump.sql
$ cat testdump.sql | sqlite3 mysqlite3.db
Error: near line 25: near "AUTO_INCREMENT": syntax error
Error: near line 38: near "LOCK": syntax error
Error: near line 41: near "UNLOCK": syntax error
Error: near line 50: near "ENGINE": syntax error
Error: near line 60: near "LOCK": syntax error
Error: near line 62: no such table: alembic_version
Error: near line 64: near "UNLOCK": syntax error
Error: near line 73: near "AUTO_INCREMENT": syntax error
Error: near line 92: near "LOCK": syntax error
Error: near line 94: no such table: apps
Error: near line 96: near "UNLOCK": syntax error
Error: near line 105: near "AUTO_INCREMENT": syntax error
Setup SQLite3 with mysql-to-sqlite3
As you can see, loading the dump into an SQLite database directly is not straightforward. We found a tool that can convert MySQL data into SQLite data, which gives us a simple way to create a fixture.
Steps to solve this:
- Install the mysql-to-sqlite3 utility.
- Use the utility to create the SQLite file.
$ pip install mysql-to-sqlite3
$ mysql2sqlite -f testdb.sqlite -d db_name -u<user> -p<password> -h <host>
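Before wiring the converted file into the test suite, it is worth sanity-checking that the conversion produced the expected tables. A small helper using only the standard library (the function name and the path are my own, not part of mysql-to-sqlite3):

```python
import sqlite3

def list_tables(sqlite_path):
    """Return the table names in a SQLite file, sorted alphabetically.

    Useful for verifying the output of mysql2sqlite: every table from
    the MySQL dump should appear in sqlite_master after conversion.
    """
    with sqlite3.connect(sqlite_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return [r[0] for r in rows]
```

For example, `list_tables("testdb.sqlite")` should include tables such as `alembic_version` and `apps` that the raw dump failed to create earlier.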
Setup test profile
To set up a Flask test, we can create a config file that works like a Maven profile.
# Config file aka profile configurations
import os

DEBUG = True  # Turns on debugging features in Flask
BCRYPT_LOG_ROUNDS = 12  # Configuration for the Flask-Bcrypt extension

basedir = os.path.abspath(os.path.dirname(__file__))


class Config(object):
    DB_URL = os.environ.get("DB_URL", '0.0.0.0')
    DB_USER = os.environ.get("DB_USER", 'root')
    DB_PASS = os.environ.get("DB_PASS", 'root')
    DB_NAME = os.environ.get("DB_NAME", 'app')
    SQLALCHEMY_DATABASE_URI = "mysql+pymysql://{DB_USER}:{DB_PASS}@{DB_URL}/{DB_NAME}?charset=utf8mb4".format(
        DB_URL=DB_URL, DB_USER=DB_USER, DB_PASS=DB_PASS, DB_NAME=DB_NAME)
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    SQLALCHEMY_ECHO = False
    BUNDLE_ERRORS = False
    CELERY_BROKER_URL = 'sqs://'
    sns_queue_region = os.environ.get('SNS_QUEUE_REGION', 'us-west-2')
    SQLALCHEMY_POOL_RECYCLE = 3600
    SQLALCHEMY_ENGINE_OPTIONS = {
        'pool_size': 50,
        'pool_recycle': 120,
        'pool_pre_ping': True
    }


class ProductionConfig(Config):
    DEBUG = False


class StagingConfig(Config):
    DEVELOPMENT = True
    DEBUG = True


class DevelopmentConfig(Config):
    DEVELOPMENT = True
    DEBUG = True


class TestingConfig(Config):
    TESTING = True
    DB_NAME = os.environ.get("DB_NAME", 'testdb')
    SQLALCHEMY_DATABASE_URI = "sqlite:///{DB_NAME}.sqlite".format(DB_NAME=DB_NAME)
    SQLALCHEMY_POOL_RECYCLE = 0
    SQLALCHEMY_ENGINE_OPTIONS = {}
Once the above is set up, we need to add a line in app.py for the Flask app, something like the APP_SETTINGS lookup below, with the default set to production. The configuration classes use inheritance to provide defaults for the different profiles, so switching profiles is just a matter of overriding one environment variable.
# Load the default configuration
app.config.from_object(os.environ.get("APP_SETTINGS", "config.ProductionConfig") )
Once these are configured, we can override them with environment variables as below and then run whatever command we like.
export APP_SETTINGS="config.ProductionConfig"
python app.py
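The profile switch itself is just an environment lookup with a production fallback, the same pattern `app.config.from_object` relies on above. A minimal sketch of that fallback behaviour:

```python
import os

# With APP_SETTINGS unset, the lookup falls back to the production profile.
os.environ.pop("APP_SETTINGS", None)
default = os.environ.get("APP_SETTINGS", "config.ProductionConfig")

# Exporting the variable (as the shell snippet above does) overrides it.
os.environ["APP_SETTINGS"] = "config.TestingConfig"
overridden = os.environ.get("APP_SETTINGS", "config.ProductionConfig")

print(default)     # config.ProductionConfig
print(overridden)  # config.TestingConfig
```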
Setup run.py
Additionally, we need a way to execute coverage. I prefer to run the tests through coverage and pass an entry-point file that references the other test files. I believe this is a decent way to run coverage and store the results.
# manage.py
import pytest
from flask_script import Manager

from app import app

manager = Manager(app)


@manager.command
def test():
    """Runs the tests."""
    pytest.main(["-s", "tests/__init__.py"])


if __name__ == "__main__":
    manager.run()
Once the above is complete, coverage can be executed with the commands below. `coverage html` creates an HTML report for humans, while SonarQube requires the XML format produced by `coverage xml`.
$ PYTHONPATH=. coverage run --source=. run.py test
$ coverage report
$ coverage html
$ coverage xml
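`coverage xml` writes a Cobertura-style report whose root `<coverage>` element carries the overall line rate as a fraction. A small helper (my own, for illustration) to read the percentage back out, e.g. for logging it in the pipeline:

```python
import xml.etree.ElementTree as ET

def coverage_percent(xml_path):
    """Return the overall line coverage (0-100) from a coverage.xml report.

    The Cobertura schema emitted by `coverage xml` stores the fraction in
    the line-rate attribute of the root <coverage> element.
    """
    root = ET.parse(xml_path).getroot()
    return round(float(root.get("line-rate", "0")) * 100, 1)
```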
Pipeline execution environment
We can set up a Docker container as the execution environment, configure a profile, and execute against it. The container configuration looks like this:
FROM python:3.6-stretch
RUN mkdir -p /app && apt-get update && apt-get install -y libcurl4-openssl-dev libssl-dev vim
WORKDIR /app
COPY requirements.txt /app
RUN pip install --no-cache-dir -r requirements.txt
ENV cmd=""
ENV APP_SETTINGS="config.TestingConfig"
COPY . /app
EXPOSE 8080
Additionally, we need to set up a Python container image for test-case execution. This is required to run the test cases, and it is built and pushed as follows:
# Dockerfile
FROM python:3.6-stretch
RUN mkdir -p /app && apt-get update && apt-get install -y libcurl4-openssl-dev libssl-dev vim
WORKDIR /app
COPY requirements.txt /app
RUN pip install --no-cache-dir -r requirements.txt

# Build the test image
$ docker build . -t hub.docker.com/shubhamkr619/python-testing-image:v1

# Push the image
$ docker push hub.docker.com/shubhamkr619/python-testing-image:v1
Setup and execute the Jenkins pipeline
To execute this pipeline, we need to set up a Jenkins job.
First, we set up a Jenkins.yaml file to be used by the Jenkins Kubernetes plugin.
apiVersion: v1
kind: Pod
metadata:
  labels:
    Application: app
spec:
  containers:
  - name: docker
    image: docker:1.11
    command: ['cat']
    tty: true
    env:
    - name: NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  - name: python
    image: hub.docker.com/shubhamkr619/python-testing-image:v1
    tty: true
    command: ['cat']
    env:
    - name: NODE_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: APP_SETTINGS
      value: "config.ProductionConfig"
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
The pod definition above runs two containers side by side: a docker container with the docker binary, and a Python container with python and coverage available for the test stages.
//Added shared Library
library identifier: 'cicdjenkins@master', retriever: modernSCM(
    [$class: 'GitSCMSource',
     remote: 'github.com/jenkinpiple/',
     credentialsId: 'svc.devops-ut']
)

//Defines the build CI pipeline for app
pipeline {
    agent {
        kubernetes {
            label 'app'
            defaultContainer 'jnlp'
            yamlFile 'Jenkins.yaml'
        }
    }
    environment {
        DOCKER_REGISTRY = '<registry_url>'
        APP_NAME = '<app_name>'
        DOCKER_REGISTRY_CRED_ID = "<registry_cred>"
        DOCKER_REPOSITORY = 'docker-local'
    }
    stages {
        //Purpose: notify Slack that the job has started
        stage('General') {
            steps {
                notify('STARTED')
                githubstatus('STARTED')
                echo sh(script: 'env|sort', returnStdout: true)
                script {
                    USER = wrap([$class: 'BuildUser']) {
                        return env.BUILD_USER
                    }
                    GIT_REVISION = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
                }
            }
        }
        //Docker Build
        stage('Build') {
            steps {
                container('docker') {
                    script {
                        def git_branch = "${GIT_BRANCH}"
                        git_branch = git_branch.replace("/", "_")
                        DOCKER_BUILD_NUMBER = BUILD_NUMBER + "-${git_branch}"
                        docker.build("${DOCKER_REGISTRY}/${DOCKER_REPOSITORY}/${APP_NAME}" + ":" + DOCKER_BUILD_NUMBER)
                    }
                }
            }
        }
        stage('Test') {
            steps {
                container('python') {
                    script {
                        output = sh(returnStdout: true, script: 'pip install --no-cache-dir -r requirements.txt && export APP_SETTINGS="config.TestingConfig" && export PYTHONPATH=. && coverage run --source=. run.py test && coverage xml').trim()
                        echo output
                    }
                }
            }
        }
        stage("SonarQube Analysis") {
            steps {
                //Executing SonarQube Analysis inside build container
                container("python") {
                    script {
                        sh "sed -i 's/sonar.projectVersion=build-number/sonar.projectVersion=${BUILD_NUMBER}/g' sonar-project.properties"
                        sh "sed -i 's@sonar.branch.name=branch_name@sonar.branch.name=$BRANCH_NAME@g' sonar-project.properties"
                        withSonarQubeEnv('SonarQube') {
                            echo "===========Performing Sonar Scan============"
                            def sonarqubeScannerHome = tool 'SonarQube Scanner 3.3.0.1492'
                            sh "${sonarqubeScannerHome}/bin/sonar-scanner"
                        }
                    }
                }
            }
        }
        //Quality Gate
        stage("Quality Gate") {
            steps {
                script {
                    timeout(time: 1, unit: 'HOURS') {
                        waitForQualityGate abortPipeline: true
                    }
                }
            }
        }
        //Docker Push
        stage('Artifactory Push') {
            steps {
                container('docker') {
                    script {
                        dockerpush("${DOCKER_REPOSITORY}/${APP_NAME}", "${DOCKER_BUILD_NUMBER}", "${DOCKER_REPOSITORY}")
                    }
                }
            }
        }
    }
}
Source: https://medium.com/srendevops/flask-test-cicd-pipeline-afbe9bec07a3