With a simple command like squad_dataset = load_dataset("squad"), get any of these datasets ready to use in a dataloader for training/evaluating a ML model (Numpy/Pandas/PyTorch/TensorFlow/JAX). With simple commands like tokenized_dataset = dataset.map(tokenize_example), efficiently prepare the dataset for inspection and ML model evaluation and training.
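A minimal sketch of these two steps together (tokenize_example here is a hypothetical helper written for this example, not an API of the library):

from datasets import load_dataset

# Download and cache SQuAD with a single call
squad_dataset = load_dataset("squad")

# Hypothetical per-example preprocessing function; a real pipeline would
# typically call a tokenizer from 🤗 Transformers here instead
def tokenize_example(example):
    return {"tokens": example["context"].split()}

# map() applies the function to every example of every split and caches the result
tokenized_dataset = squad_dataset.map(tokenize_example)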
More details on the differences between 🤗 Datasets and tfds can be found in the section Main differences between 🤗 Datasets and tfds.
pip install datasets
conda install -c huggingface -c conda-forge datasets
Follow the installation pages of TensorFlow and PyTorch to see how to install them with conda.
For more details on installation, check the installation page in the documentation: https://huggingface.co/docs/datasets/installation.html
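To sanity-check the installation, one option is to print the installed version and load a small slice of a public dataset (the dataset chosen below is only an example):

import datasets
print(datasets.__version__)

# Downloads a few SQuAD examples to confirm that fetching and caching work
from datasets import load_dataset
print(load_dataset("squad", split="train[:5]"))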
If you plan to use 🤗 Datasets with PyTorch, TensorFlow or pandas, you should also install PyTorch, TensorFlow or pandas.
For more details on using the library with NumPy, pandas, PyTorch or TensorFlow, check the quick tour page in the documentation: https://huggingface.co/docs/datasets/quicktour.html
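As a rough illustration (using a tiny in-memory dataset rather than one from the Hub), the same Arrow-backed object can be viewed as NumPy arrays, a pandas DataFrame, or PyTorch tensors:

import torch
from datasets import Dataset

# Small illustrative dataset built from a Python dict
ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], "y": [0, 1, 0]})

print(ds.with_format("numpy")[0])   # columns as NumPy arrays
print(ds.to_pandas().head())        # the whole dataset as a pandas DataFrame

# With the "torch" format, the dataset can be fed directly to a PyTorch DataLoader
torch_ds = ds.with_format("torch")
loader = torch.utils.data.DataLoader(torch_ds, batch_size=2)
for batch in loader:
    print(batch["x"].shape, batch["y"])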
The main methods are:
- datasets.list_datasets() to list the available datasets
- datasets.load_dataset(dataset_name, **kwargs) to instantiate a dataset
- datasets.list_metrics() to list the available metrics
- datasets.load_metric(metric_name, **kwargs) to instantiate a metric

Here is a quick example:
from datasets import list_datasets, load_dataset, list_metrics, load_metric
# Print all the available datasets
print(list_datasets())
# Load a dataset and print the first example in the training set
squad_dataset = load_dataset('squad')
print(squad_dataset['train'][0])
# List all the available metrics
print(list_metrics())
# Load a metric
squad_metric = load_metric('squad')
# Process the dataset - add a column with the length of the context texts
dataset_with_length = squad_dataset.map(lambda x: {"length": len(x["context"])})
# Process the dataset - tokenize the context texts (using a tokenizer from the 🤗 Transformers library)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
tokenized_dataset = squad_dataset.map(lambda x: tokenizer(x['context']), batched=True)
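Continuing the example above, the loaded metric can then score predictions against references. A sketch of what that call looks like for SQuAD (the prediction below is simply the gold answer, so the scores come out perfect):

# Evaluate one (trivial) prediction with the SQuAD metric
example = squad_dataset['train'][0]
predictions = [{'id': example['id'], 'prediction_text': example['answers']['text'][0]}]
references = [{'id': example['id'], 'answers': example['answers']}]
print(squad_metric.compute(predictions=predictions, references=references))
# prints a dict with 'exact_match' and 'f1' scores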
For more details on using the library, check the quick tour page in the documentation: https://huggingface.co/docs/datasets/quicktour.html, as well as the more specific documentation pages.

Another introduction to 🤗 Datasets is the tutorial on Google Colab.
We have a very detailed step-by-step guide to add a new dataset to the datasets already provided on the HuggingFace Datasets Hub.
You will find the step-by-step guide to add a dataset to this repository here.
You can also have your own repository for your dataset on the Hub under your or your organization's namespace and share it with the community. More information in the documentation section about dataset sharing.
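As a sketch of that workflow (assuming a recent release of 🤗 Datasets that provides push_to_hub and that you are logged in with huggingface-cli login; the repository id below is a placeholder):

from datasets import Dataset

# Small illustrative dataset to upload
my_dataset = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})

# Uploads the dataset to the Hub under your or your organization's namespace
# ("my-username/my-demo-dataset" is a placeholder repository id)
my_dataset.push_to_hub("my-username/my-demo-dataset")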
Main differences between 🤗 Datasets and tfds

If you are familiar with the great TensorFlow Datasets, here are the main differences between 🤗 Datasets and tfds:
- the user-facing dataset object of 🤗 Datasets is not a tf.data.Dataset but a built-in framework-agnostic dataset class with methods inspired by what we like in tf.data (like a map() method); it basically wraps a memory-mapped Arrow table cache (see the sketch below).
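A short sketch of what this framework-agnostic behaviour looks like in practice (the column added by map() is purely illustrative):

from datasets import load_dataset

ds = load_dataset("squad", split="train")

# Not a tf.data.Dataset: a plain Python object backed by a memory-mapped Arrow file on disk
print(type(ds))
print(ds.cache_files)

# tf.data-inspired methods such as map() work the same whatever framework you train with
ds = ds.map(lambda x: {"context_length": len(x["context"])})
print(ds.with_format("numpy")[0]["context_length"])
print(ds.with_format("torch")[0]["context_length"])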
Similar to TensorFlow Datasets, 🤗 Datasets is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use them. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!
If you want to cite this framework you can use this:
@software{quentin_lhoest_2021_5570305,
author = {Quentin Lhoest and
Albert Villanova del Moral and
Patrick von Platen and
Thomas Wolf and
Yacine Jernite and
Abhishek Thakur and
Lewis Tunstall and
Suraj Patil and
Mariama Drame and
Julien Chaumond and
Julien Plu and
Joe Davison and
Simon Brandeis and
Victor Sanh and
Teven Le Scao and
Kevin Canwen Xu and
Nicolas Patry and
Steven Liu and
Angelina McMillan-Major and
Philipp Schmid and
Sylvain Gugger and
Nathan Raw and
Sylvain Lesage and
Anton Lozhkov and
Matthew Carrigan and
Théo Matussière and
Leandro von Werra and
Lysandre Debut and
Stas Bekman and
Clément Delangue},
title = {huggingface/datasets: 1.13.2},
month = oct,
year = 2021,
publisher = {Zenodo},
version = {1.13.2},
doi = {10.5281/zenodo.5570305},
url = {https://doi.org/10.5281/zenodo.5570305}
}