This repository provides examples of how to program Big Data and Deep Learning applications that run on Hopsworks, using Apache Spark, Apache Flink, Apache Kafka, Apache Hive and TensorFlow. Users can upload and run these programs and notebooks from within their Hopsworks projects.
You can find the latest Hopsworks documentation on the project's webpage, including the Hopsworks user and developer guides as well as a list of versions for all supported services. This README provides basic instructions on how to build and run the examples.
Install dependencies first:
pip3 install jupyter
pip3 install nbconvert
Generate the webpages and run the webserver:
export LC_CTYPE=en_US.UTF-8
python3 make.py
ln -s notebooks content
./binaries/hugo server
When you add a new notebook, add it under the "notebooks" directory. If you want to add a new category of notebooks, put your notebook in a new directory and then edit this file to add your category:
themes/berbera/layouts/index.html
mvn package
This generates a jar for each module, which can then be used either to create Hopsworks jobs (Spark/Flink) or to execute Hive queries remotely.
Hops Examples makes use of Hops, a set of Java and Python libraries which provide developers with tools that make programming on Hops easy. Hops is automatically made available to all Jobs and Notebooks, without the user having to explicitly import it. Detailed documentation on Hops is available here.
To help you get started, StructuredStreamingKafka shows how to build a Spark application that produces and consumes messages from Kafka and also persists them both in Parquet format and in plain text to HopsFS. The example makes use of the latest Spark-Kafka API. To run the example, you need to provide the following parameters when creating a Spark job in Hopsworks:
Usage: <type>(producer|consumer)
MainClass is io.hops.examples.spark.kafka.StructuredStreamingKafka
Topics are provided via the Hopsworks Job UI: the user checks the Kafka box and selects the topics from the drop-down menu. When consuming from multiple topics using a single Spark directStream, all topics must use the same Avro schema. Create a new directStream for topic(s) that use different Avro schemas.
Data consumed is by default persisted to the Resources dataset of the project where the job is running.
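For orientation, the sketch below shows the general shape of such a Spark Structured Streaming consumer using the plain Spark-Kafka API; the actual example relies on the Hops library for Kafka and TLS configuration and for Avro deserialization, and the broker address, topic name and output paths here are placeholders.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaToParquetSketch {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder().appName("KafkaToParquetSketch").getOrCreate();

    // Subscribe to a Kafka topic; broker address and topic name are placeholders.
    Dataset<Row> messages = spark.readStream()
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9091")
        .option("subscribe", "mytopic")
        .load()
        // The real example deserializes `value` (Avro bytes) with the Hops library;
        // here the raw bytes are kept as-is.
        .selectExpr("CAST(key AS STRING) AS key", "value");

    // Persist the consumed records to HopsFS in Parquet format.
    StreamingQuery query = messages.writeStream()
        .format("parquet")
        .option("path", "hdfs:///Projects/MyProject/Resources/data-parquet")
        .option("checkpointLocation", "hdfs:///Projects/MyProject/Resources/checkpoint")
        .start();

    query.awaitTermination();
  }
}
```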
StructuredStreamingKafka.java generates String <key, value> pairs which are converted by Hops into Avro records and serialized into bytes. Similarly, when consuming from a Kafka source, messages are deserialized into Avro records. The default Avro schema used is the following (a sketch of building and serializing such a record follows the schema):
{
  "fields": [
    { "name": "timestamp", "type": "string" },
    { "name": "priority", "type": "string" },
    { "name": "logger", "type": "string" },
    { "name": "message", "type": "string" }
  ],
  "name": "myrecord",
  "type": "record"
}
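As an illustration, the snippet below builds one record conforming to the schema above and serializes it to Avro binary with the standard Avro Java API; the field values are made up, and the actual example delegates this conversion to the Hops library.

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.EncoderFactory;

public class AvroRecordSketch {
  public static void main(String[] args) throws Exception {
    String schemaJson = "{\"type\":\"record\",\"name\":\"myrecord\",\"fields\":["
        + "{\"name\":\"timestamp\",\"type\":\"string\"},"
        + "{\"name\":\"priority\",\"type\":\"string\"},"
        + "{\"name\":\"logger\",\"type\":\"string\"},"
        + "{\"name\":\"message\",\"type\":\"string\"}]}";
    Schema schema = new Schema.Parser().parse(schemaJson);

    // Build a record that conforms to the schema above (values are made up).
    GenericRecord record = new GenericData.Record(schema);
    record.put("timestamp", "2019-01-01 12:00:00");
    record.put("priority", "INFO");
    record.put("logger", "MyLogger");
    record.put("message", "hello hops");

    // Serialize the record to Avro binary, as done before producing to Kafka.
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
    encoder.flush();
    byte[] serialized = out.toByteArray();
    System.out.println("Serialized " + serialized.length + " bytes");
  }
}
```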
Hops Examples provides Jupyter notebooks for running TensorFlow applications on Hops. All notebooks are automatically made available to Hopsworks projects upon project creation. Detailed documentation on how to program TensorFlow on Hopsworks is available here.
A sample feature engineering job that takes in raw data, transforms it into features suitable for machine learning and saves the features into the featurestore is available in featurestore/. This job will automatically be available in your project if you take the featurestore tour on Hopsworks. Example notebooks for interacting with the featurestore are available in notebooks/featurestore/. More documentation about the featurestore is available here: Featurestore Documentation.
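To give a feel for what such a job does, here is a minimal, hypothetical feature-engineering step written against Spark's Java API; the input path and column names are invented, and the final step of the real job, writing the resulting features to the featurestore via the Hops featurestore API, is only indicated in a comment.

```java
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.count;
import static org.apache.spark.sql.functions.log1p;
import static org.apache.spark.sql.functions.sum;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class FeatureEngineeringSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("FeatureEngineeringSketch").getOrCreate();

    // Hypothetical raw data stored in the project's Resources dataset.
    Dataset<Row> raw = spark.read()
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("hdfs:///Projects/MyProject/Resources/raw_sales.csv");

    // Turn raw columns into model-ready features (made-up transformations).
    Dataset<Row> features = raw
        .withColumn("log_amount", log1p(col("amount")))
        .groupBy("customer_id")
        .agg(
            sum("log_amount").alias("sum_log_amount"),
            count(col("customer_id")).alias("num_purchases"));

    // The actual job saves `features` to the featurestore with the Hops featurestore API (not shown here).
    features.show(5);
  }
}
```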
This repo comes with notebooks demonstrating how to implement horizontally scalable TFX pipelines. The chicago_taxi_tfx_hopsworks notebook contains all the steps of the pipeline along with visualizations. It is based on the TFX Chicago taxi rides example but uses a smaller slice of the original dataset. The notebook downloads the dataset into Hopsworks and then calls the TFX components to go all the way from data preparation to model analysis.
That notebook is then split into smaller ones that correspond to the different steps in the pipeline. These notebooks can be found under the notebooks/tfx/chicago_taxi/pipeline directory in this repo. To execute them, you need to create one Hopsworks Spark job per notebook and then use the Apache Airflow DAG chicago_tfx_airflow_pipeline.py provided in this repo to orchestrate them. Please refer to the Apache Airflow section of the user guide on how to upload and manage your DAGs. There is also a Visualizations notebook that runs the visualization steps of the pipeline and can be executed at any time, as the output of the pipeline (statistics, schema, etc.) is persisted to the Resources dataset in your project. You can catch a demo of the pipeline here.
Under notebooks/beam you can find the portability_wordcount_python notebook, which guides you through running a WordCount program in a Python Portable Beam Pipeline. You can download the notebook from this repo, upload it to your Hopsworks project and just run it! Hopsworks transparently manages the entire lifecycle of the notebook and the Beam-related services and components.
HiveJDBCClient.java, available in hops-examples-hive, shows how users can remotely execute Hive queries against their Hopsworks projects' Hive databases. First, it instantiates a Java JDBC client and then connects to the example database described in the Hopsworks documentation. Users need to have created the database in their project as described in the documentation. This example uses log4j2, with logs being written to a ./hive/logs directory. For changes made to ./hive/src/main/resources/log4j2.properties to take effect, users must first run
mvn clean package
For HiveJDBCClient.java to be able to connect to the Hopsworks Hive server, users need to create a hive_credentials.properties file based on hive_credentials.properties.example and set proper values for the following parameters:
hive_url=jdbc:hive2://[domain]:[port] #default port:9085
dbname=[database_name] #the name of the Dataset in Hopsworks, omitting the ".db" suffix.
truststore_path=[absolute_path_to_truststore]
keystore_path=[absolute_path_to_keystore]
truststore_pw=[truststore_password]
keystore_pw=[keystore_password]
Users can export their project's certificates by navigating to the Settings page in Hopsworks. An email is then sent with the password for the truststore and keystore.
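Below is a minimal sketch of such a JDBC client, assuming the values from hive_credentials.properties above; the TLS-related JDBC URL options and the queried table are assumptions and may differ from what HiveJDBCClient.java actually does.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
  public static void main(String[] args) throws Exception {
    // Values that would normally be read from hive_credentials.properties.
    String hiveUrl = "jdbc:hive2://hopsworks.example.com:9085";
    String dbName = "myproject";

    // Register the Hive JDBC driver and connect with TLS; the ssl/trustStore
    // URL options below are an assumption about the cluster configuration.
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    String url = hiveUrl + "/" + dbName
        + ";ssl=true"
        + ";sslTrustStore=/path/to/trustStore.jks"
        + ";trustStorePassword=changeit";

    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement()) {
      // Hypothetical table; replace with a table from the example database.
      try (ResultSet rs = stmt.executeQuery("SELECT * FROM sales LIMIT 10")) {
        while (rs.next()) {
          System.out.println(rs.getString(1));
        }
      }
    }
  }
}
```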