Perceptron Learning
A perceptron is the simplest form of 'neural network' learning.
The 'neuron' is just a weight vector: dot it with a data point and threshold the result to say 1 or 0.
Learning consists of exposing the perceptron to data and, whenever it's wrong, nudging the vector in the direction that would have made it right.
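The update rule above can be sketched in a few lines. This is a generic perceptron, not the project's actual code; the class and method names are illustrative.

```python
import numpy as np

class Perceptron:
    """Minimal perceptron sketch: a weight vector plus the classic update rule."""

    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)  # the 'neuron': a weight vector
        self.lr = lr

    def predict(self, x):
        # dot the vector against the data point, threshold to 1 or 0
        return 1 if self.w @ x > 0 else 0

    def update(self, x, label):
        # if the prediction is wrong, adjust the vector toward (or away
        # from) this example; if it's right, error is 0 and nothing changes
        error = label - self.predict(x)
        self.w += self.lr * error * x
```

Each mistake shifts the decision boundary a little; on linearly separable data this provably converges.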
A simple case I wanted to try out was learning which letters are vowels. As I thought about how to represent letters, I realized I'd want a dense, somewhat high-dimensional vector. Initially I accomplished this with hashing:
In [7]: hash('h')
Out[7]: 13312040041
In [8]: vp.to_vec('h')
Out[8]: array([1, 3, 3, 1, 2, 0, 4, 0, 0, 4, 1])
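A sketch of what that digit-based embedding might look like: hash the character and split the hash's decimal digits into a vector. The `to_vec` name comes from the session above, but this implementation is my reconstruction, and note that Python 3 randomizes string hashes between runs unless `PYTHONHASHSEED` is fixed.

```python
import numpy as np

def to_vec(ch):
    # take the digits of the character's hash as a small integer vector
    # (hedged reconstruction; actual project code may differ)
    digits = str(abs(hash(ch)))
    return np.array([int(d) for d in digits])
```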
This was a fine start, but one of my goals was showing the power of dimensionality for this sort of arbitrary class separation. Viewing the hash as simply a random draw in some space, I refactored the 'embedding' of letters to vectors using np.random.random to get an array of the specified size.
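The refactored embedding might be sketched like this: each letter gets a fixed random vector, cached so repeated lookups return the same draw. The function name, the dimension, and the caching dict are my assumptions, not the project's code.

```python
import numpy as np

DIM = 11           # assumed vector size; the project's may differ
_embeddings = {}   # cache so each letter always maps to the same vector

def embed(ch):
    # uniform random draws in [0, 1) via np.random.random
    if ch not in _embeddings:
        _embeddings[ch] = np.random.random(DIM)
    return _embeddings[ch]
```

With enough dimensions, random vectors for different letters are almost surely linearly separable into arbitrary classes, which is the point the post is driving at.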
Training Visualization App
The real thing I wanted to get to was seeing the online learning aspect of the perceptron in operation. Putting the user in control of the sequence of learning data would illustrate that you can control the 'picture of the world' the learner has. Feed it consonants, it will make pro-consonant predictions; feed it vowels, it will be overly vowel-ey.
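That skewed-world effect is easy to demonstrate in a few lines. This is a simplified stand-in for the app's online loop, with illustrative names and a seeded random embedding; it trains only on vowels (label 1) and then checks what it says about consonants.

```python
import numpy as np

# Seeded random embeddings so the demo is reproducible (my setup, not the app's).
rng = np.random.default_rng(0)
dim = 11
embed = {ch: rng.random(dim) for ch in 'abcdefghijklmnopqrstuvwxyz'}

# Online perceptron loop fed ONLY vowels, all labeled 1.
w = np.zeros(dim)
for ch in 'aeiou' * 20:
    pred = 1 if w @ embed[ch] > 0 else 0
    w += 0.1 * (1 - pred) * embed[ch]

# Having never seen a counterexample, it calls consonants vowels too.
n_called_vowel = sum(1 if w @ embed[ch] > 0 else 0 for ch in 'bcdfg')
print(n_called_vowel)  # → 5 (all five consonants misclassified as vowels)
```

Feeding it only consonants would produce the mirror-image bias, which is exactly what the app lets you play with interactively.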
Here's the UI I ended up with (or really a slice of it):
To run
The project is on GitHub here.
More packaging (on both Python and JS sides) will be coming soon!