Here two sequences of 12 time steps are used to define the
operation of a filter. T1 is known to depend on P1: each
target value is the sum of the current and previous inputs.
p1 = {-1 0 1 0 1 1 -1 0 -1 1 0 1};
t1 = {-1 -1 1 1 1 2 0 -1 -1 0 1 1};
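As a quick check (an illustrative aside), the targets can be
reproduced with FILTER, confirming that each target is the
sum of the current and previous inputs.
check = filter([1 1],1,cell2mat(p1));  % y(k) = p(k) + p(k-1), zero initial state
isequal(check,cell2mat(t1))            % returns 1 (true)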
Here NEWLIN is used to create a layer with an input range
of [-1 1], one neuron, input delays of 0 and 1, and a
learning rate of 0.5. The linear layer is then simulated.
net = newlin([-1 1],1,[0 1],0.5);
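The layer computes a(k) = w(1)*p(k) + w(2)*p(k-1) + b. As an
illustrative aside, the untrained layer can be simulated with
SIM; assuming NEWLIN's default zero initial weights and bias,
the response is all zeros.
a0 = sim(net,p1);   % response of the untrained filter
cell2mat(a0)        % all zeros before any adaptation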
Here the network adapts for one pass through the sequence,
and its mean squared error is displayed. (Since this is the
first call to ADAPT, the default Pi is used.)
[net,y,e,pf] = adapt(net,p1,t1);
mse(e)
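As an aside, the weights and bias after this single pass can
be inspected through the standard net.IW and net.b
properties; at this point they only roughly approximate the
underlying filter.
net.IW{1,1}   % weights applied to p(k) and p(k-1)
net.b{1}      % bias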
Note that the errors are quite large. Here the network
adapts to another 12 time steps (using the previous Pf as
the new initial delay conditions).
p2 = {1 -1 -1 1 1 -1 0 0 0 1 -1 -1};
t2 = {2 0 -2 0 2 0 -1 0 0 1 0 -1};
[net,y,e,pf] = adapt(net,p2,t2,pf);
mse(e)
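As an optional check, the errors from this second pass can
be plotted to watch the filter settle.
plot(cell2mat(e))   % error at each of the 12 time steps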
Here the network adapts for 100 passes through the entire
sequence.
p3 = [p1 p2];
t3 = [t1 t2];
net.adaptParam.passes = 100;
[net,y,e] = adapt(net,p3,t3);
mse(e)
The error after 100 passes through the sequence is very
small. The network has adapted to the relationship between
the input and target signals.
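If the adaptation has succeeded, the weights should be close
to [1 1] and the bias close to 0, and simulating the network
should nearly reproduce the targets (a final check based on
the relationship noted above).
net.IW{1,1}                      % should be near [1 1]
net.b{1}                         % should be near 0
a = sim(net,p3);
mse(cell2mat(t3)-cell2mat(a))    % small, consistent with the error above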