
cicd-templates


Databricks Labs CI/CD Templates

This repository provides a template for automated Databricks CI/CD pipeline creation and deployment.

Sample project structure (with GitHub Actions)

.
├── .dbx
│   └── project.json
├── .github
│   └── workflows
│       ├── onpush.yml
│       └── onrelease.yml
├── .gitignore
├── README.md
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt

Some explanations regarding structure:

  • .dbx - auxiliary folder where metadata about environments and the execution context is located
  • sample_project - Python package with your code (the directory name follows your project name)
  • tests - directory with your package tests
  • conf/deployment.json - deployment configuration file; see the deployment file structure section below for a full reference
  • .github/workflows/ - workflow definitions for GitHub Actions (a minimal sketch is shown after this list)
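
A hedged, minimal sketch of what such an onpush.yml workflow can look like. The job name, Python version, and the assumption that dbx is available after installing unit-requirements.txt are illustrative; the generated workflow is more complete:

name: CI pipeline

on: push

jobs:
  ci-pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.7'
      # install test dependencies and the project package itself
      - name: Install dependencies
        run: |
          pip install -r unit-requirements.txt
          pip install -e .
      - name: Run unit tests
        run: pytest tests/unit
      # deploy as files only and launch via the Run Submit API
      - name: Integration test on Databricks
        env:
          DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
          DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
        run: |
          dbx deploy --files-only
          dbx launch --job=sample_project-sample --as-run-submit --trace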

Sample project structure (with Azure DevOps)

.
├── .dbx
│   └── project.json
├── .gitignore
├── README.md
├── azure-pipelines.yml
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project_azure_dev_ops
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt

Some explanations regarding structure:

  • .dbx - auxiliary folder where metadata about environments and the execution context is located
  • sample_project_azure_dev_ops - Python package with your code (the directory name follows your project name)
  • tests - directory with your package tests
  • conf/deployment.json - deployment configuration file; see the deployment file structure section below for a full reference
  • azure-pipelines.yml - Azure DevOps Pipelines workflow definition

Sample project structure (with GitLab)

.
├── .dbx
│   └── project.json
├── .gitignore
├── README.md
├── .gitlab-ci.yml
├── conf
│   ├── deployment.json
│   └── test
│       └── sample.json
├── pytest.ini
├── sample_project_gitlab
│   ├── __init__.py
│   ├── common.py
│   └── jobs
│       ├── __init__.py
│       └── sample
│           ├── __init__.py
│           └── entrypoint.py
├── setup.py
├── tests
│   ├── integration
│   │   └── sample_test.py
│   └── unit
│       └── sample_test.py
└── unit-requirements.txt

Some explanations regarding structure:

  • .dbx - auxiliary folder where metadata about environments and the execution context is located
  • sample_project_gitlab - Python package with your code (the directory name follows your project name)
  • tests - directory with your package tests
  • conf/deployment.json - deployment configuration file; see the deployment file structure section below for a full reference
  • .gitlab-ci.yml - GitLab CI/CD workflow definition (a minimal sketch is shown after this list)
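
A hedged, minimal sketch of what such a .gitlab-ci.yml can look like. The image, job name, and the assumption that DATABRICKS_HOST/DATABRICKS_TOKEN are configured as CI/CD variables are illustrative; the generated file is more complete:

image: python:3.7

ci-pipeline:
  script:
    - pip install -r unit-requirements.txt
    - pip install -e .
    - pytest tests/unit
    # DATABRICKS_HOST and DATABRICKS_TOKEN are expected as GitLab CI/CD variables
    - dbx deploy --files-only
    - dbx launch --job=sample_project_gitlab-sample --as-run-submit --trace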

Note on dbx

NOTE:
dbx is a CLI tool for advanced Databricks jobs management. It can be used separately from cicd-templates; if you would like to preserve an existing project structure, please refer to the dbx documentation on how to use it with a customized project structure.

Quickstart

NOTE:
As a prerequisite, you need to install databricks-cli with a configured profile. These instructions are based on Databricks Runtime 7.3 LTS ML. Even if you don't need to use ML libraries, we still recommend the ML-based version due to its %pip magic support.

Local steps

Perform the following actions in your development environment:

  • Create a new conda environment and activate it:
conda create -n <your-environment-name> python=3.7.5
conda activate <your-environment-name>
  • If you would like to be able to run local unit tests, you'll need a JDK. If you don't have one, it can be installed via:
conda install -c anaconda "openjdk=8.0.152"
  • Install cookiecutter and path:
pip install cookiecutter path
  • Create a new project using the cookiecutter template:
cookiecutter https://github.com/databrickslabs/cicd-templates
  • Install development dependencies:
pip install -r unit-requirements.txt
  • Install generated package in development mode:
pip install -e .
  • In the generated directory you'll have a sample job with testing and launch configurations around it.
  • Launch and debug your code on an interactive cluster via the following command. The job name can be found in conf/deployment.json:
dbx execute --cluster-name=<my-cluster> --job=<job-name>
  • Make your first deployment from the local machine:
dbx deploy
  • Launch your first pipeline as a new separate job, and trace the job status. The job name can be found in conf/deployment.json:
dbx launch --job <your-job-name> --trace
  • For in-depth local development and unit testing guidance, please refer to the generated README.md in the root of the project. A quick way to run the unit tests is shown below.
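
For example, once the development dependencies are installed, the unit tests can typically be run from the project root (pytest.ini configures the test discovery; the command assumes the conda environment is active):

pytest tests/unit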

Setting up CI/CD pipeline on GitHub Actions

  • Create a new repository on GitHub
  • Configure DATABRICKS_HOST and DATABRICKS_TOKEN secrets for your project in GitHub UI
  • Add a remote origin to the local repo
  • Push the code (a sample command sequence is shown after this list)
  • Open the GitHub Actions for your project to verify the state of the deployment pipeline
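
A typical command sequence for the two steps above; the repository URL and branch name are placeholders:

git init
git add .
git commit -m "Initial commit"
git remote add origin git@github.com:<your-org>/<your-repo>.git
git push -u origin main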

Setting up CI/CD pipeline on Azure DevOps

  • Create a new repository on GitHub
  • Connect the repository to Azure DevOps
  • Configure DATABRICKS_HOST and DATABRICKS_TOKEN secrets for your project in Azure DevOps. Note that secret variables must be mapped into the environment of the steps that use them via the env: syntax, for example:
variables:
- group: Databricks-environment
stages:
...
...
    - script: |
        dbx deploy
      env:
        DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)
  • Add a remote origin to the local repo
  • Push the code
  • Open the Azure DevOps UI to check the deployment status

Setting up CI/CD pipeline on GitLab

  • Create a new repository on GitLab
  • Configure DATABRICKS_HOST and DATABRICKS_TOKEN secrets for your project in GitLab UI
  • Add a remote origin to the local repo
  • Push the code
  • Open the GitLab CI/CD UI to check the deployment status

Deployment file structure

A sample deployment file can be found in the generated project.

The general file structure looks like this:

{
    "<environment-name>": {
        "jobs": [
            {
                "name": "sample_project-sample",
                "existing_cluster_id": "some-cluster-id", 
                "libraries": [],
                "max_retries": 0,
                "spark_python_task": {
                    "python_file": "sample_project/jobs/sample/entrypoint.py",
                    "parameters": [
                        "--conf-file",
                        "conf/test/sample.json"
                    ]
                }
            }
        ]
    }
}

For each environment you can describe any number of jobs (see the sketch below). Each job description should follow the Databricks Jobs API.
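
For instance, here is a hedged sketch of a file with two environments, each with its own job list. The environment names must match those defined in .dbx/project.json, and the cluster values are illustrative:

{
    "staging": {
        "jobs": [
            {
                "name": "sample_project-sample-staging",
                "existing_cluster_id": "staging-cluster-id",
                "spark_python_task": {
                    "python_file": "sample_project/jobs/sample/entrypoint.py",
                    "parameters": ["--conf-file", "conf/test/sample.json"]
                }
            }
        ]
    },
    "production": {
        "jobs": [
            {
                "name": "sample_project-sample",
                "new_cluster": {
                    "spark_version": "7.3.x-cpu-ml-scala2.12",
                    "node_type_id": "Standard_F4s",
                    "num_workers": 2
                },
                "spark_python_task": {
                    "python_file": "sample_project/jobs/sample/entrypoint.py",
                    "parameters": ["--conf-file", "conf/test/sample.json"]
                }
            }
        ]
    }
}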

However, dbx deploy adds some behaviour on top of the plain API.

When you run dbx deploy with a given deployment file (by default it takes the deployment file from conf/deployment.json), the following actions will be performed:

  • Find the deployment configuration in the file passed via --deployment-file (default: conf/deployment.json)
  • Build a .whl package in the given project directory (this can be disabled via the --no-rebuild option)
  • Add this .whl package to the job definition
  • Add all requirements from --requirements-file (default: requirements.txt); this step is skipped if the requirements file does not exist
  • Create a new job, or adjust the existing job if the given job name already exists (the job is looked up by its name); see the example below
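
For example, an invocation that makes the defaults explicit and skips the wheel rebuild (the flag values shown are illustrative):

dbx deploy --deployment-file=conf/deployment.json --requirements-file=requirements.txt --no-rebuild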

An important point about referencing is that you can also reference arbitrary local files. This is very handy for the python_file section. In the example above, the entrypoint file and the job configuration will be added to the job definition and uploaded to DBFS automatically; no explicit file upload is needed.

Different deployment types

The Databricks Jobs API provides two methods for launching a particular workload: the Run Submit API and the Run Now API.

The main logical difference between these methods is that the Run Submit API allows you to submit a workload directly, without creating a job. Therefore, there are two deployment types: one for the Run Submit API, and one for the Run Now API.

Deployment for Run Submit API

To deploy only the files, without overriding the job definitions, do the following:

dbx deploy --files-only

To launch the file-based deployment:

dbx launch --as-run-submit --trace

This type of deployment is handy for working in different branches, and it won't affect the job definition.

Deployment for Run Now API

To deploy files and update the job definitions:

dbx deploy

To launch the deployed job:

dbx launch --job=<job-name>

This type of deployment is mainly meant for automated use during a new release. dbx deploy will change the job definition (unless the --files-only option is provided).

Troubleshooting

Q: When running dbx deploy I'm getting the exception json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) with a stack trace like:

...
  File ".../lib/python3.7/site-packages/dbx/utils/common.py", line 215, in prepare_environment
    experiment = mlflow.get_experiment_by_name(environment_data["workspace_dir"])
...

json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

What could be causing it and what is the potential fix?

A:
We've seen this exception when the profile uses the host=https://{domain}/?o={orgid} format for Azure. That format is valid for the databricks CLI, but not for the API. If that's the cause, the problem should be gone once the "?o={orgid}" suffix is removed, as illustrated below.
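
Illustratively, in ~/.databrickscfg (the workspace URL and token are placeholders):

# problematic profile: the ?o={orgid} suffix breaks the API calls
[DEFAULT]
host = https://<your-workspace>.azuredatabricks.net/?o=1234567890123456
token = <personal-access-token>

# working profile: plain workspace URL
[DEFAULT]
host = https://<your-workspace>.azuredatabricks.net
token = <personal-access-token>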

FAQ

Q: I'm using poetry for package management. Is it possible to use poetry together with this template?

A:
Yes, it's also possible, but library management during cluster execution should then be performed via the libraries section of the job description. You also might need to disable the automatic rebuild for dbx deploy and dbx execute via the --no-rebuild option. Finally, the built package should be in wheel format and located in the /dist/ directory.

Q: How can I add my Databricks Notebook to the deployment.json, so I can create a job out of it?

A:
Please follow the Databricks Jobs API documentation on notebook tasks and add a notebook_task definition into the deployment file.

Q: Is it possible to use dbx for non-Python based projects, for example Scala-based projects?

A:
Yes, it's possible, but the interactive mode of dbx execute is not yet supported. However, you can still install the dbx wheel in your Scala-based project and reference your jar files in the deployment file, so that the dbx deploy and dbx launch commands are available to you.

Q: I have a lot of interdependent jobs, and using solely JSON seems like a giant code duplication. What could solve this problem?

A:
You can implement any configuration logic, simply write the output into a custom deployment file, and then pass it via the --deployment-file option. As an example, you can generate your configuration using a Python script (see the sketch below) or Jsonnet.
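
A hedged sketch of such a generator; the job names, the "default" environment name, and the output file name are all illustrative:

import json

# settings shared by every generated job
common = {
    "existing_cluster_id": "some-cluster-id",
    "max_retries": 0,
}

jobs = []
for name in ["ingest", "transform", "aggregate"]:  # hypothetical job names
    jobs.append({
        "name": f"sample_project-{name}",
        **common,
        "spark_python_task": {
            "python_file": f"sample_project/jobs/{name}/entrypoint.py",
            "parameters": ["--conf-file", f"conf/test/{name}.json"],
        },
    })

with open("conf/generated.json", "w") as f:
    json.dump({"default": {"jobs": jobs}}, f, indent=2)

The generated file can then be used via dbx deploy --deployment-file=conf/generated.json.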

Q: How can I secure the project environment?

A:
From the state serialization perspective, your code and deployments are stored in two separate storages:

  • workspace directory - this directory is stored in your workspace, described per environment, and defined in .dbx/project.json in the workspace_dir field. To control access to this directory, please use Workspace ACLs.
  • artifact location - this location is stored in DBFS, described per environment, and defined in .dbx/project.json in the artifact_location field. To control access to this location, please use credential passthrough (docs for ADLS and for S3).

Q: I would like to use self-hosted (private) pypi repository. How can I configure my deployment and CI/CD pipeline?

A:
To set up this scenario, a few settings need to be applied:

  • The Databricks driver should have network access to your PyPI repository
  • An additional step that publishes your package to the PyPI repository should be configured in the CI/CD pipeline
  • Package rebuild and generation should be disabled via the --no-rebuild --no-package arguments
  • The package reference should be configured in the job description

Here is a sample dbx deploy command:

dbx deploy --no-rebuild --no-package

A sample entry for the libraries configuration:

{
    "pypi": {"package": "my-package-name==1.0.0", "repo": "my-repo.com"}
}
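
In context, this entry goes into the libraries array of the job definition; a hedged sketch combining it with the earlier deployment example:

{
    "name": "sample_project-sample",
    "existing_cluster_id": "some-cluster-id",
    "libraries": [
        {"pypi": {"package": "my-package-name==1.0.0", "repo": "my-repo.com"}}
    ],
    "spark_python_task": {
        "python_file": "sample_project/jobs/sample/entrypoint.py"
    }
}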

Q: What is the purpose of init_adapter method in SampleJob?

A:
This method should primarily be used for adapting the configuration for dbx execute-based runs. By using this method, you can provide an initial configuration in case the --conf-file option is not provided. A sketch is shown below.
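
A hedged sketch of such an override; the Job base class, self.conf, and self.logger follow the generated common.py, and the default values are purely illustrative:

from sample_project.common import Job

class SampleJob(Job):
    def init_adapter(self):
        # fall back to an inline configuration when --conf-file is absent,
        # e.g. for quick dbx execute runs on an interactive cluster
        if not self.conf:
            self.conf = {
                "output_format": "parquet",    # illustrative defaults
                "output_path": "dbfs:/tmp/sample",
            }

    def launch(self):
        self.logger.info("Launching sample job with conf: %s" % self.conf)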

Q: I don't like the idea of storing the host and token variables in a ~/.databrickscfg file inside the CI pipeline. How can I make this setup more secure?

A:
dbx supports authentication via the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables. If these variables are defined in the environment, no ~/.databrickscfg file is needed. For example:
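
A minimal example for a CI step; how the secret is injected depends on your CI system:

export DATABRICKS_HOST="https://<your-workspace-url>"
export DATABRICKS_TOKEN="$PIPELINE_SECRET_TOKEN"   # injected by the CI system
dbx deploy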

Legal Information

This software is provided as-is and is not officially supported by Databricks through customer technical support channels. Support, questions, and feature requests can be communicated through the Issues page of this repo. Please see the legal agreement and understand that issues with the use of this code will not be answered or investigated by Databricks Support.

Feedback

Issues with the template? Found a bug? Have a great idea for an addition? Feel free to file an issue.

Contributing

Have a great idea that you want to add? Fork the repo and submit a PR!
