pai

Resource scheduling and cluster management for AI
License: MIT License
Language: Python
Category: Neural Networks / Artificial Intelligence, Machine Learning / Deep Learning
Type: Open source software
Operating System: Cross-platform
Software Overview

Open Platform for AI (OpenPAI)

OpenPAI v1.8.0 has been released!

With the release of v1.0, OpenPAI has switched to a more robust, more powerful, and lightweight architecture. OpenPAI is also becoming increasingly modular, so that the platform can be easily customized and extended to suit new needs. OpenPAI also provides many user-friendly AI features, making it easier for end users and administrators to complete daily AI tasks.

When to consider OpenPAI

  1. When your organization needs to share powerful AI computing resources (GPU/FPGA farm, etc.) among teams.
  2. When your organization needs to share and reuse common AI assets like Model, Data, Environment, etc.
  3. When your organization needs an easy IT ops platform for AI.
  4. When you want to run a complete training pipeline in one place.

Why choose OpenPAI

The platform incorporates a mature design that has a proven track record in Microsoft's large-scale production environment.

Support on-premises and easy to deploy

OpenPAI is a full-stack solution. It not only supports on-premises, hybrid, or public cloud deployment, but also supports single-box deployment for trial users.

Support popular AI frameworks and heterogeneous hardware

Pre-built Docker images for popular AI frameworks. Easy to include heterogeneous hardware. Supports distributed training, such as distributed TensorFlow.
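
For example, a distributed TensorFlow job usually relies on a TF_CONFIG environment variable describing the cluster and each worker's task index. Below is a minimal sketch; the model and data are placeholders, and whether TF_CONFIG is populated automatically depends on how the job is configured on your cluster.

    # Minimal multi-worker TensorFlow sketch. It assumes each container has
    # TF_CONFIG set (cluster spec + task index); how that variable is provided
    # depends on the OpenPAI job configuration.
    import tensorflow as tf

    def build_dataset():
        # Placeholder data; replace with a real training dataset.
        xs = tf.random.normal((1024, 32))
        ys = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
        return tf.data.Dataset.from_tensor_slices((xs, ys)).batch(64)

    strategy = tf.distribute.MultiWorkerMirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(10),
        ])
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        )

    model.fit(build_dataset(), epochs=2)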

Most complete solution and easy to extend

OpenPAI is a complete solution for deep learning: it supports virtual clusters, is compatible with the Kubernetes ecosystem, provides a complete training pipeline in one cluster, and more. OpenPAI is architected in a modular way: different modules can be plugged in as appropriate. The architecture of OpenPAI highlights the technical innovations of the platform.

Get started

OpenPAI manages computing resources and is optimized for deep learning. Through Docker technology, the computing hardware is decoupled from the software, so it is easy to run distributed jobs, switch between different deep learning frameworks, or run other kinds of jobs in consistent environments.
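
As an illustration, an OpenPAI job is described declaratively (see the openpai-protocol repo): it names a Docker image, the resources each task instance needs, and the commands to run. Below is a minimal sketch written as a Python dictionary for readability; the real protocol is YAML, and the image, resource numbers, and training command here are placeholders, so consult the protocol specification for the authoritative schema.

    # A minimal OpenPAI job specification sketched as a Python dict and dumped
    # to YAML. Field names follow the OpenPAI job protocol; the image URI,
    # resource numbers, and command are illustrative placeholders.
    import yaml  # pip install pyyaml

    job_spec = {
        "protocolVersion": 2,
        "name": "hello_openpai",
        "type": "job",
        "prerequisites": [
            {
                "name": "image",
                "type": "dockerimage",
                # Any image with your framework pre-installed would do.
                "uri": "openpai/standard:python_3.6-pytorch_1.2.0-gpu",
            }
        ],
        "taskRoles": {
            "taskrole": {
                "instances": 1,
                "dockerImage": "image",
                "resourcePerInstance": {"cpu": 4, "memoryMB": 8192, "gpu": 1},
                "commands": ["python train.py --epochs 2"],
            }
        },
    }

    with open("job.yaml", "w") as f:
        yaml.safe_dump(job_spec, f, sort_keys=False)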

As OpenPAI is a platform, there are typically two different roles:

  • Cluster users are the consumers of the cluster's computing resources. Depending on the deployment scenario, cluster users could be machine learning and deep learning researchers, data scientists, lab teachers, students, and so on.
  • Cluster administrators are the owners and maintainers of computing resources. The administrators are responsible for the deployment and availability of the cluster.

OpenPAI provides end-to-end manuals for both cluster users and administrators.

For cluster administrators

The admin manual is a comprehensive guide for cluster administrators. It covers (but is not limited to) the following topics:

  • Installation and upgrade. The installation is based on Kubespray; please check the system requirements first. OpenPAI provides an installation guide to facilitate the installation.

    If you are considering upgrading from an older version to the latest v1.0.0, please refer to the table below for a brief comparison between v0.14.0 and v1.0.0. More details about the upgrade considerations can be found in the upgrade guide.

                         v0.14.0                     v1.0.0
    Architecture         Kubernetes + Hadoop YARN    Kubernetes
    Scheduler            YARN Scheduler              HiveD / K8S default
    Job Orchestrating    YARN Framework Launcher     Framework Controller
    RESTful API          v1 + v2                     pure v2
    Storage              Team-wise storage plugin    PV/PVC storage sharing
    Marketplace          Marketplace v2              openpaimarketplace
    SDK                  Python                      JavaScript / TypeScript

    If any question arises during deployment, please check the installation FAQs and troubleshooting first. If it is not covered there, ask a question or submit an issue on GitHub.

  • Basic cluster management. Through the web portal and the command-line tool paictl, administrators can carry out cluster management tasks, such as adding (or removing) nodes, monitoring nodes and services, and setting up storage and permission control (a short paictl sketch follows this list).

  • Users and groups management. Administrators can manage users and groups easily.

  • Alerts management. Administrators can customize alert rules and actions.

  • Customization. Administrators can customize the cluster with plugins. They can also upgrade (or downgrade) a single component (e.g. the rest-server) to address customized application demands.
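
As an illustration of the paictl tool mentioned above, here is a hedged sketch of driving a few typical invocations from Python. The sub-commands shown (config pull, service stop/start) follow the general pattern of the admin manual, but the exact flags can differ across releases, so treat them as assumptions and check paictl.py --help on your own cluster.

    # Hedged sketch: wrapping common paictl invocations with subprocess.
    # The sub-commands and flags are assumptions based on the admin manual's
    # general pattern; verify them with `./paictl.py --help` before use.
    import subprocess

    def paictl(*args):
        """Run a paictl command from the dev-box working directory."""
        cmd = ["python", "paictl.py", *args]
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Pull the current cluster configuration into a local folder for editing.
    paictl("config", "pull", "-o", "/cluster-configuration")

    # Restart a single service (e.g. rest-server) after changing its config.
    paictl("service", "stop", "-n", "rest-server")
    paictl("service", "start", "-n", "rest-server")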

For cluster users

The user manual is a guide for cluster users, who can train and serve deep learning (and other) workloads on OpenPAI.

  • Job submission and monitoring. The quick start tutorial is a good starting point for learning how to train models on OpenPAI. More examples and support for multiple mainstream frameworks (out-of-the-box Docker images) are also provided. OpenPAI also offers good debuggability and advanced job functionalities (a submission sketch follows this list).

  • Data management. Users can use cluster-provisioned storage and custom storage in their jobs. The cluster-provisioned storage is well integrated and easy to configure in a job.

  • Collaboration and sharing. OpenPAI provides facilities for collaboration within teams and organizations. The cluster-provisioned storage is organized by teams (groups), and users can easily share their work (e.g. jobs) in the marketplace, where others can discover and reproduce (clone) it with one click.
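
To make the job submission flow concrete, here is a hedged Python sketch of submitting a job protocol file through the REST server. The endpoint path, token handling, and content type are assumptions modeled on the v2 API; check your cluster's REST API (Swagger) documentation for the exact contract, and note that job.yaml and the cluster address are placeholders.

    # Hedged sketch: submitting a job protocol YAML to the OpenPAI REST server.
    # Endpoint path and headers are assumptions based on the v2 API; confirm
    # them against your own cluster's REST API documentation.
    import requests

    PAI_URI = "https://pai.example.com"   # hypothetical cluster address
    TOKEN = "<application-token>"         # e.g. a token created in the webportal profile page

    with open("job.yaml") as f:           # a job written in the OpenPAI job protocol
        job_yaml = f.read()

    resp = requests.post(
        f"{PAI_URI}/rest-server/api/v2/jobs",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "text/yaml",
        },
        data=job_yaml,
        timeout=30,
    )
    resp.raise_for_status()
    print("submitted with status", resp.status_code)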

Besides the web portal, OpenPAI provides a VS Code extension and a command-line tool (preview). The VS Code extension is a friendly, GUI-based client tool for OpenPAI and is highly recommended. It is an extension of Visual Studio Code that can submit jobs, simulate jobs locally, manage multiple OpenPAI environments, and so on.

Standalone Components

With the v1.0.0 release, OpenPAI adopted a more modular component design and re-organized the code structure into 1 main repo together with 7 standalone key component repos. pai is the main repo, and the 7 component repos are:

  • hivedscheduler is a Kubernetes Scheduler Extender for multi-tenant GPU clusters, which provides various advantages over the standard k8s scheduler.
  • frameworkcontroller is built to orchestrate all kinds of applications on Kubernetes by a single controller.
  • openpai-protocol is the specification of OpenPAI job protocol.
  • openpai-runtime provides runtime support which is necessary for the OpenPAI protocol.
  • openpaisdk is a JavaScript SDK designed to help developers of OpenPAI offer a more user-friendly experience.
  • openpaimarketplace is a service which stores examples and job templates. Users can use it from the webportal plugin to share their jobs or run and learn from others' shared jobs.
  • openpaivscode is a VSCode extension, which lets users connect to OpenPAI clusters, submit AI jobs, simulate jobs locally, and manage files in VSCode easily.

Related Projects

Targeting openness and advancing state-of-the-art technology, Microsoft Research (MSR) and Microsoft Software Technology Center Asia (STCA) have also released a few other open source projects. We encourage researchers and students to leverage these projects to accelerate AI development and research.

  • NNI : An open source AutoML toolkit for neural architecture search and hyper-parameter tuning.
  • MMdnn : A comprehensive, cross-framework solution to convert, visualize and diagnose deep neural network models. The "MM" in MMdnn stands for model management and "dnn" is an acronym for deep neural network.
  • NeuronBlocks : An NLP deep learning modeling toolkit that helps engineers to build DNN models like playing Lego. The main goal of this toolkit is to minimize developing cost for NLP deep neural network model building, including both training and inference stages.
  • SPTAG : Space Partition Tree And Graph (SPTAG) is an open source library for large-scale vector approximate nearest neighbor search scenarios.

Get involved

How to contribute

Contributor License Agreement

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Call for contribution

We are working on a set of major feature improvements and refactorings. Anyone who is familiar with these features is encouraged to join the design review and discussion in the corresponding issue tickets.

Who should consider contributing to OpenPAI

  • Folks who want to add support for other ML and DL frameworks
  • Folks who want to make OpenPAI a richer AI platform (e.g. support for more ML pipelines, hyperparameter tuning)
  • Folks who want to write tutorials/blog posts showing how to use OpenPAI to solve AI problems

Contributors

One key purpose of OpenPAI is to support the highly diversified requirements from academia and industry. OpenPAI is completely open: it is released under the MIT license. This makes OpenPAI particularly attractive for evaluating various research ideas, which include but are not limited to the platform's components.

OpenPAI operates in an open model. It was initially designed and developed by the Microsoft Research (MSR) and Microsoft Software Technology Center Asia (STCA) platform team. We are glad to have Peking University, Xi'an Jiaotong University, Zhejiang University, University of Science and Technology of China, and SHANGHAI INESA AI INNOVATION CENTER (SHAIIC) join us to develop the platform jointly. Contributions from academia and industry are all highly welcome.
