datasets

License: Apache-2.0
Development language: Python
Category: Neural Networks / Artificial Intelligence, Machine Learning / Deep Learning
Software type: Open source
Region: Unknown
Submitted by: 锺离霖
Operating system: Cross-platform
Open source organization:
Target audience: Unknown
Software overview



🤗 Datasets is a lightweight library providing two main features:

  • one-line dataloaders for many public datasets: one-liners to download and pre-process any of the major public datasets (in 467 languages and dialects!) provided on the HuggingFace Datasets Hub. With a simple command like squad_dataset = load_dataset("squad"), get any of these datasets ready to use in a dataloader for training/evaluating an ML model (NumPy/pandas/PyTorch/TensorFlow/JAX),
  • efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. With simple commands like tokenized_dataset = dataset.map(tokenize_example), efficiently prepare the dataset for inspection and ML model evaluation and training. A short sketch combining both features follows this list.
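
The following is a minimal sketch (not part of the original feature list) combining the two features above: it assumes the SQuAD dataset and a 🤗 Transformers tokenizer, and feeds the tokenized result into a PyTorch DataLoader. Column names such as input_ids come from the tokenizer and are only illustrative.

from datasets import load_dataset
from transformers import AutoTokenizer
import torch

squad = load_dataset("squad", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Tokenize the contexts in batches; the new columns are added next to the original ones
squad = squad.map(lambda batch: tokenizer(batch["context"], truncation=True, padding="max_length"), batched=True)

# Expose the tokenized columns as PyTorch tensors and wrap them in a standard DataLoader
squad.set_format(type="torch", columns=["input_ids", "attention_mask"])
dataloader = torch.utils.data.DataLoader(squad, batch_size=32)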

Documentation · Colab tutorial

Find a dataset in the Hub · Add a new dataset to the Hub

🤗 Datasets also provides access to more than 15 evaluation metrics and is designed to let the community easily add and share new datasets and evaluation metrics.

🤗 Datasets has many additional interesting features:

  • Thrive on large datasets: 🤗 Datasets naturally frees the user from RAM limitations; all datasets are memory-mapped using an efficient zero-serialization-cost backend (Apache Arrow).
  • Smart caching: never wait for your data to be processed several times.
  • Lightweight and fast with a transparent and pythonic API (multi-processing/caching/memory-mapping).
  • Built-in interoperability with NumPy, pandas, PyTorch, TensorFlow 2 and JAX (see the short sketch after this list).
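
As a hedged illustration of the memory-mapping and interoperability points above (a sketch, not taken from the original text), a loaded dataset can be inspected and viewed through other frameworks on demand:

from datasets import load_dataset

dataset = load_dataset("squad", split="validation")

# The examples live in memory-mapped Arrow files on disk, not in RAM
print(dataset.cache_files)

# Materialize views in other frameworks only when needed
df = dataset.to_pandas()            # a pandas DataFrame copy of the table
dataset.set_format(type="numpy")    # __getitem__ now returns NumPy-formatted rows
print(dataset[0]["context"])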

🤗 Datasets originated as a fork of the awesome TensorFlow Datasets, and the HuggingFace team wants to deeply thank the TensorFlow Datasets team for building this amazing library. More details on the differences between 🤗 Datasets and tfds can be found in the section Main differences between 🤗 Datasets and tfds.

Installation

With pip

🤗 Datasets can be installed from PyPI and has to be installed in a virtual environment (venv or conda, for instance):

pip install datasets
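
For instance, assuming a Unix-like shell and Python's built-in venv module, a typical installation inside a fresh virtual environment looks like this (the environment name .env is just an example):

python -m venv .env
source .env/bin/activate
pip install datasets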

With conda

🤗 Datasets can be installed using conda as follows:

conda install -c huggingface -c conda-forge datasets

Follow the installation pages of TensorFlow and PyTorch to see how to install them with conda.

For more details on installation, check the installation page in the documentation: https://huggingface.co/docs/datasets/installation.html

Installation to use with PyTorch/TensorFlow/pandas

If you plan to use 🤗 Datasets with PyTorch (1.0+), TensorFlow (2.2+) or pandas, you should also install PyTorch, TensorFlow or pandas.

For more details on using the library with NumPy, pandas, PyTorch or TensorFlow, check the quick tour page in the documentation: https://huggingface.co/docs/datasets/quicktour.html

Usage

🤗 Datasets is made to be very simple to use. The main methods are:

  • datasets.list_datasets() to list the available datasets
  • datasets.load_dataset(dataset_name, **kwargs) to instantiate a dataset
  • datasets.list_metrics() to list the available metrics
  • datasets.load_metric(metric_name, **kwargs) to instantiate a metric

Here is a quick example:

from datasets import list_datasets, load_dataset, list_metrics, load_metric

# Print all the available datasets
print(list_datasets())

# Load a dataset and print the first example in the training set
squad_dataset = load_dataset('squad')
print(squad_dataset['train'][0])

# List all the available metrics
print(list_metrics())

# Load a metric
squad_metric = load_metric('squad')

# Process the dataset - add a column with the length of the context texts
dataset_with_length = squad_dataset.map(lambda x: {"length": len(x["context"])})

# Process the dataset - tokenize the context texts (using a tokenizer from the 🤗 Transformers library)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

tokenized_dataset = squad_dataset.map(lambda x: tokenizer(x['context']), batched=True)
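
The metric loaded above can also be used for evaluation. The snippet below is a hedged sketch, assuming the prediction/reference format expected by the SQuAD metric (an id plus a prediction_text on one side, the reference answers on the other); here the gold answer is reused as the prediction, so the scores are trivially 100.

# Score a (dummy) prediction against the reference answers of one validation example
example = squad_dataset['validation'][0]
predictions = [{'id': example['id'], 'prediction_text': example['answers']['text'][0]}]
references = [{'id': example['id'], 'answers': example['answers']}]
print(squad_metric.compute(predictions=predictions, references=references))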

For more details on using the library, check the quick tour page in the documentation: https://huggingface.co/docs/datasets/quicktour.html

Another introduction to 🤗 Datasets is the tutorial on Google Colab.

Add a new dataset to the Hub

We have a very detailed step-by-step guide to add a new dataset to the datasets already provided on the HuggingFace Datasets Hub.

The step-by-step guide to add a dataset to this repository can be found in the documentation.

You can also have your own repository for your dataset on the Hub under your or your organization's namespace and share it with the community. More information in the documentation section about dataset sharing.
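
As a hedged illustration (push_to_hub appeared in releases newer than the version cited in the BibTeX entry below, and the repository name is a placeholder), sharing a small local dataset under your namespace can look like this:

from datasets import Dataset

# Requires being logged in to the Hub first, e.g. via `huggingface-cli login`
my_dataset = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
my_dataset.push_to_hub("my-username/my-dataset")  # placeholder namespace/name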

Main differences between 🤗 Datasets and tfds

If you are familiar with the great TensorFlow Datasets, here are the main differences between 🤗 Datasets and tfds:

  • the scripts in 🤗 Datasets are not provided within the library but are queried, downloaded/cached and dynamically loaded upon request
  • 🤗 Datasets also provides evaluation metrics in a similar fashion to the datasets, i.e. as dynamically installed scripts with a unified API. This gives access to the pair of a benchmark dataset and a benchmark metric, for instance for benchmarks like SQuAD or GLUE.
  • the backend serialization of 🤗 Datasets is based on Apache Arrow instead of TF Records and leverages python dataclasses for info and features, with some diverging features (we mostly don't do encoding and store the raw data as much as possible in the backend serialization cache).
  • the user-facing dataset object of 🤗 Datasets is not a tf.data.Dataset but a built-in framework-agnostic dataset class with methods inspired by what we like in tf.data (like a map() method). It basically wraps a memory-mapped Arrow table cache (see the short inspection sketch after this list).
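
As a small inspection sketch for the last point (an illustration, not from the original text), the dataset object exposes its typed schema and its Arrow backing directly:

from datasets import load_dataset

dataset = load_dataset("squad", split="train")
print(type(dataset))       # a framework-agnostic datasets.Dataset, not a tf.data.Dataset
print(dataset.features)    # typed schema stored alongside the data
print(dataset.data)        # the underlying (memory-mapped) Arrow table
print(dataset.num_rows)    # length is known without loading examples into RAM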

Disclaimers

Similar to TensorFlow Datasets, 🤗 Datasets is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use them. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

BibTeX

If you want to cite this framework you can use this:

@software{quentin_lhoest_2021_5570305,
  author       = {Quentin Lhoest and
                  Albert Villanova del Moral and
                  Patrick von Platen and
                  Thomas Wolf and
                  Yacine Jernite and
                  Abhishek Thakur and
                  Lewis Tunstall and
                  Suraj Patil and
                  Mariama Drame and
                  Julien Chaumond and
                  Julien Plu and
                  Joe Davison and
                  Simon Brandeis and
                  Victor Sanh and
                  Teven Le Scao and
                  Kevin Canwen Xu and
                  Nicolas Patry and
                  Steven Liu and
                  Angelina McMillan-Major and
                  Philipp Schmid and
                  Sylvain Gugger and
                  Nathan Raw and
                  Sylvain Lesage and
                  Anton Lozhkov and
                  Matthew Carrigan and
                  Théo Matussière and
                  Leandro von Werra and
                  Lysandre Debut and
                  Stas Bekman and
                  Clément Delangue},
  title        = {huggingface/datasets: 1.13.2},
  month        = oct,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {1.13.2},
  doi          = {10.5281/zenodo.5570305},
  url          = {https://doi.org/10.5281/zenodo.5570305}
}