LambdaNet


LambdaNet is an artificial neural network library written in Haskell that abstracts network creation, training, and use as higher-order functions. The benefit of this approach is that it provides a framework in which users can:

  • quickly iterate through network designs by using different functional components
  • experiment by writing small functional components to extend the library

The library comes with a pre-defined set of functions that can be composed in many ways to operate on real-world data. These will be enumerated later in the documentation.

Current Release

The code from this repo doesn't reflect the current release of LambdaNet. The README for the current release on Hackage can be found here.

Installation

The first step is to follow the HMatrix installation instructions. After that, LambdaNet can be installed through Cabal:

cabal update
cabal install LambdaNet

Installing the Most Recent Build

Alternatively, you can use the nightly build. The API may differ from what is covered in this README, but the examples/ folder will always contain a working file that uses all the features of the current commit.

To install the nightly build, simply run:

git clone https://github.com/jbarrow/LambdaNet.git && cd LambdaNet
cabal install

Using LambdaNet

Using LambdaNet to rapidly prototype networks using built-in functions requires only a minimal level of Haskell knowledge (although getting the data into the right form may be more difficult). However, extending the library may require a more in-depth knowledge of Haskell and functional programming techniques.

You can find a quick example of using the network in XOR.hs. Once LambdaNet is installed, download XOR.hs and run it to see the results:

runhaskell examples/XOR.hs

The rest of this section dissects the XOR network in order to talk about the design of LambdaNet.

Training Data

Before you can train or use a network, you must have training data. Each training example is a tuple of vectors: the first value is the input to the network, and the second is the expected output.

For the XOR network, the data is easily hardcoded:

let trainData = [
  (fromList [0.0, 0.0], fromList [0.0]),
  (fromList [0.0, 1.0], fromList [1.0]),
  (fromList [1.0, 0.0], fromList [1.0]),
  (fromList [1.0, 1.0], fromList [0.0])
]

However, for any non-trivial application the most difficult work will be getting the data into this form. Unfortunately, LambdaNet does not currently have tools to support data handling.
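Since there are no built-in loaders, here is a minimal sketch of what one might look like. Note that this is not part of LambdaNet: the loadData and splitCommas names, the CSV layout, and the assumption that the last column is the target are all illustrative.

import Numeric.LinearAlgebra (Vector, fromList)

-- Split a line on commas (avoiding a dependency on Data.List.Split).
splitCommas :: String -> [String]
splitCommas s = case break (== ',') s of
  (field, ',' : rest) -> field : splitCommas rest
  (field, _)          -> [field]

-- Treat every column but the last as the input, and the last as the target.
parseRow :: String -> (Vector Float, Vector Float)
parseRow line = (fromList (init xs), fromList [last xs])
  where xs = map read (splitCommas line) :: [Float]

loadData :: FilePath -> IO [(Vector Float, Vector Float)]
loadData path = map parseRow . lines <$> readFile path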

Layer Definitions

The first step in creating a network is to define a list of layer definitions. A layer definition takes a neuron type, a count of neurons in the layer, and a connectivity function.

Creating the layer definitions for a three-layer XOR network, with 2 neurons in the input layer, 2 hidden neurons, and 1 output neuron, can be done as:

let l = LayerDefinition sigmoidNeuron 2 connectFully
let l' = LayerDefinition sigmoidNeuron 2 connectFully
let l'' = LayerDefinition sigmoidNeuron 1 connectFully

Neuron Types

A neuron is simply defined as an activation function and its derivative, and the LambdaNet library provides three built-in neuron types:

  • sigmoidNeuron - A neuron with a sigmoid activation function
  • tanhNeuron - A neuron with a hyperbolic tangent activation function
  • recluNeuron - A neuron with a rectified linear activation function

By passing one of these functions into a LayerDefinition, you can create a layer with neurons of that type.
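For example, the sigmoid pair that underlies sigmoidNeuron amounts to the following two functions (a sketch of the math only; how LambdaNet wraps them into a neuron type may differ):

-- The sigmoid activation and its derivative, expressed in terms of itself.
sigmoid :: Float -> Float
sigmoid x = 1 / (1 + exp (-x))

sigmoid' :: Float -> Float
sigmoid' x = s * (1 - s)
  where s = sigmoid x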

Connectivity

A connectivity function is a bit more opaque. Currently, the library only provides connectFully, a function which creates a fully connected feed-forward network.

Simply put, the connectivity function takes in the number of neurons in layer l and the number of neurons in layer l + 1, and returns a matrix of 0s and 1s that represents the connectivity graph of the layers -- a 0 means two neurons are not connected and a 1 means they are. The starting weights are defined later.
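As a sketch, a fully connected connectivity function could be written as follows, assuming HMatrix's Matrix type (the name connectAll is hypothetical; LambdaNet's actual connectFully may be defined differently):

import Numeric.LinearAlgebra (Matrix, (><))

-- Every neuron in layer l connects to every neuron in layer l + 1,
-- so the connectivity matrix is all 1s.
connectAll :: Int -> Int -> Matrix Float
connectAll l l' = (l >< l') (repeat 1)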

Creating the Network

The createNetwork function takes in a random transform, an entropy generator, and a list of layer definitions, and returns a network.

For the XOR network, the createNetwork function is:

let n = createNetwork normals (mkStdGen 4) [l, l', l'']

Our source of entropy is the very random mkStdGen 4, which will always result in the same generator.

Random Transforms

The random transform function is a transform that operates on a stream of uniformly distributed random numbers and returns a stream of floating point numbers.

Currently, the two defined distributions are:

  • uniforms - A trivial function that returns a stream of uniformly distributed random numbers
  • normals - A slightly less-trivial function that uses the Box-Muller transform to create a stream of numbers ~ N(0, 1)
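As a rough sketch of what normals does internally, the Box-Muller transform consumes pairs of uniform samples and produces pairs of standard normal samples (assuming the input stream is uniform on (0, 1]):

-- Box-Muller: two uniform samples in, two N(0, 1) samples out.
boxMuller :: [Float] -> [Float]
boxMuller (u1 : u2 : rest) = z0 : z1 : boxMuller rest
  where
    r  = sqrt (-2 * log u1)
    z0 = r * cos (2 * pi * u2)
    z1 = r * sin (2 * pi * u2)
boxMuller _ = []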

Work is being done to offer a Student's t-distribution, which would require support for a chi-squared distribution transformation.

Training the Network

In order to train a network, you must create a new trainer:

let t = BackpropTrainer (3 :: Float) quadraticCost quadraticCost'

The BackpropTrainer type takes in a learning rate, a cost function, and its derivative.

The actual training of the network is done by a fit function that uses the trainer, a network, and the training data, and returns a new, trained network. For the XOR network, this is:

let n' = trainUntilErrorLessThan n t online trainData 0.01

LambdaNet provides three training methods:

  • trainUntil
  • trainUntilErrorLessThan
  • trainNTimes

The trainUntil function takes a StopCondition (check Network/Trainer.hs for more information), and the last two are simply wrappers around the first that provide specific predicates.
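For example, to train for a fixed number of passes rather than to a target error, you would use trainNTimes. The argument order shown below simply mirrors the trainUntilErrorLessThan call above and is an assumption; check Network/Trainer.hs for the actual signature:

let n'' = trainNTimes n t online trainData 1000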

The calculated error is what is returned by the cost function.

Cost Functions

Currently, the only provided cost function is the quadratic error cost function, quadraticCost, and its derivative, quadraticCost'. I am about to add the cross-entropy cost function.
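The quadratic cost itself is just half the squared distance between the actual and expected outputs. Here is a sketch of the math over HMatrix vectors (LambdaNet's own definitions may differ in name and type):

import Numeric.LinearAlgebra (Vector, sumElements)

-- C(y, a) = 1/2 * ||a - y||^2, and its derivative with respect to a.
quadCost :: Vector Float -> Vector Float -> Float
quadCost y a = 0.5 * sumElements (d * d)
  where d = a - y

quadCost' :: Vector Float -> Vector Float -> Vector Float
quadCost' y a = a - y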

Selection Functions

Selection functions break up a dataset for each round of training. The currently provided selection functions are:

  • minibatch n - You must provide an n and partially apply it to minibatch to get a valid selection function. This function updates the network after every n passes.
  • online - Using this function means that the network updates after every training example.

For small data sets, it's better to use online, while for larger data sets, the training can occur much faster if you use a reasonably sized minibatch.
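For example, swapping online for a minibatch of 2 in the XOR training call from earlier looks like:

let n' = trainUntilErrorLessThan n t (minibatch 2) trainData 0.01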

Using the Network

Once the network is trained, you can use it with your test data or production data:

predict (fromList [1, 0]) n'

LambdaNet at least attempts to follow a Scikit-Learn style naming scheme, with fit and predict functions.

Storing and Loading

Once a network has been trained, the weights and biases can be stored in a file:

saveNetwork "xor.ann" n'

By calling saveNetwork with a file path, you can save the state of the network.

Loading a network requires passing in a list of layer definitions for the original network, but will load all the weights and biases of the saved network:

n'' <- loadNetwork "xor.ann" [l, l', l'']

Note that the loadNetwork function returns an IO (Network), so you can't simply call predict or train on the object it returns. Using the approach in XOR.hs should allow you to work with the returned object.
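A minimal sketch of unwrapping the IO (Network) in a do block, following the same pattern as XOR.hs (and assuming the layer definitions l, l', and l'' from earlier are in scope):

main :: IO ()
main = do
  n'' <- loadNetwork "xor.ann" [l, l', l'']
  -- n'' is now a plain Network, so predict can be applied directly.
  print (predict (fromList [1, 0]) n'')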

Currently Under Development

What has been outlined above is only the first stage of LambdaNet. I intend to support some additional features, such as:

  • Unit testing
  • Self-organizing maps
  • Regularization functions
  • Additional trainer types (RProp, RMSProp)
  • Additional cost functions

Unit Testing

In order to develop more complex network architectures, it is important to ensure that all of the basics are working -- especially as the API undergoes changes. To run the unit tests:

git clone https://github.com/jbarrow/LambdaNet.git && cd LambdaNet
cabal install
cd test
runhaskell Main.hs

This will download the most recent version of LambdaNet and run all theunit tests.

Self-Organizing Maps (SOMs, or Kohonen Maps)

SOMs were chosen as the next architecture to develop because they make different assumptions from feed-forward networks. This allows us to see how the current library handles building out new architectures. Already this has forced a change in the Neuron model and spurred the development of a visualizations package (in order to usefully understand the outputs of the SOMs).

Regularization Functions and Momentum

Standard backprop training is subject to overfitting and falling into local minima. By providing support for regularization and momentum, LambdaNet will be able to provide more extensible and robust training.

Future Goals

The future goals are:

  • Convolutional Networks
  • Data handling for Neural Networks

Generating the Documentation Images

All the documentation images for the network were generated in the following manner. In the docs folder, run:

runhaskell docs.hs
python analysis.py

Note that I am currently working on removing the Python image analysis from the library and replacing it with Haskell and gnuplot. I'm also working on using the generated images in the network documentation.

