Neural networks are modeled after biological neural networks and attempt to let computers learn from data in a manner loosely similar to how humans learn.
Neural networks attempt to solve problems that would normally be easy for humans but hard for computers.
Convolutional neural networks are inspired by the multilayer perceptron (MLP) and the animal visual cortex. "Designed to use minimal amounts of preprocessing. They have wide applications in image and video recognition, recommender systems and natural language processing"
Multilayer perceptron (MLP): "a type of feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. MLP utilizes a supervised learning technique called backpropagation for training the network. MLP is a modification of the standard linear perceptron and can distinguish data that are not linearly separable." from Wikipedia
"A feedforward neural network is an artificial neural network wherein connections between the units do not form a cycle. As such, it is different from recurrent neural networks." wiki
"Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of a loss function with respect to all the weights in the network, so that the gradient is fed to the optimization method which in turn uses it to update the weights, in an attempt to minimize the loss function.
Backpropagation requires a known, desired output for each input value in order to calculate the loss function gradient. It is therefore usually considered to be a supervised learning method, although it is also used in some unsupervised networks such as autoencoders. It is a generalization of the delta rule to multi-layered feedforward networks, made possible by using the chain rule to iteratively compute gradients for each layer. Backpropagation requires that the activation function used by the artificial neurons (or "nodes") be differentiable." wiki
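To make the chain-rule bookkeeping concrete, here is a minimal NumPy sketch of backpropagation: a tiny sigmoid MLP trained by full-batch gradient descent on the XOR toy problem. The layer sizes, learning rate, and squared-error loss are illustrative choices, not anything prescribed by the definitions above.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: chain rule applied layer by layer
    d_out = (out - y) * out * (1 - out)      # delta at the output layer
    d_hid = (d_out @ W2.T) * h * (1 - h)     # delta propagated to hidden layer

    # Gradient descent step on each weight matrix and bias
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid; b1 -= 0.5 * d_hid.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```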
"In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control." wiki
In Python you could use neupy or scipy.interpolate.Rbf, as in the sketch below.
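A small sketch of the scipy.interpolate.Rbf route: fit noisy samples of sin(x), then evaluate the fitted model on a finer grid. The data and the Gaussian kernel choice are illustrative.

```python
import numpy as np
from scipy.interpolate import Rbf

# Noisy 1-D samples to approximate
x = np.linspace(0, 10, 20)
y = np.sin(x) + 0.1 * np.random.randn(20)

# The output is a linear combination of Gaussian radial basis functions
rbf = Rbf(x, y, function='gaussian')

x_fine = np.linspace(0, 10, 200)
y_fine = rbf(x_fine)  # smooth approximation of the sampled function
```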
One good real-world application is a spam filter. Naive Bayes can be used to build a model that discriminates normal (ham) emails from garbage (spam). There are lots of ways to improve it, but it works fairly well even in a basic form; see the sketch below.
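A minimal scikit-learn sketch of the idea, using made-up toy emails: bag-of-words counts fed into a multinomial Naive Bayes classifier.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data (real filters would use thousands of labeled emails)
emails = ["win money now", "meeting at noon", "cheap pills online", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham

vectorizer = CountVectorizer()          # bag-of-words features
X = vectorizer.fit_transform(emails)

model = MultinomialNB().fit(X, labels)
print(model.predict(vectorizer.transform(["free money"])))  # likely [1] (spam)
```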
Supervised learning: the spam filter is trained on emails that are already labeled ham or spam.
Visualizing the data can reveal much more than summary statistics like the mean, median, or mode; a well-known article highlights this with Anscombe's Quartet, four datasets with nearly identical summary statistics that look completely different when plotted.
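A quick way to see this for yourself, assuming seaborn's bundled copy of the quartet (downloaded on first use):

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Anscombe's Quartet: four datasets with (nearly) identical summary statistics
df = sns.load_dataset("anscombe")
print(df.groupby("dataset")[["x", "y"]].mean())  # means are almost the same

# ...but plotting makes the differences obvious
sns.lmplot(data=df, x="x", y="y", col="dataset", col_wrap=2, ci=None)
plt.show()
```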
Deep convolutional networks are state of the art (SOTA) for images. There are many well-known architectures, including AlexNet and VGGNet.
Convolutional networks usually involve a combination of convolutional layers along with subsampling (pooling) and fully connected feedforward layers, as in the sketch below.
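A minimal Keras sketch of that pattern, sized here (as an assumption) for 28x28 grayscale images and 10 classes:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),            # subsampling (pooling) layer
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                        # into fully connected feedforward layers
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),  # one output per class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```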
Recurrent neural networks handle time series data especially well, and they can be combined with convolutional networks to generate captions for images.
They also handle natural language especially well; a sketch follows.
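A minimal Keras sketch of a recurrent network for sequences: an LSTM reading a time series of 50 steps with 1 feature and predicting the next value. The sequence length and layer sizes are assumptions for illustration.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.LSTM(32, input_shape=(50, 1)),  # processes the sequence step by step
    layers.Dense(1),                        # next-value prediction
])
model.compile(optimizer="adam", loss="mse")
```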
Training optimizes (minimizes) a cost function so that the model achieves the highest accuracy.
The outputs of perceptrons/neurons/nodes are generated by passing weighted inputs through an 'activation function'.
Training estimates the best weights on the inputs to each node; see the one-neuron sketch below.
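Both ideas in miniature, with made-up numbers: one neuron passes its weighted inputs through a sigmoid activation, then a single gradient descent step nudges the weights to reduce a squared-error loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.1, 0.4, -0.2])   # weights to be learned
target = 1.0                     # desired output

out = sigmoid(w @ x)                          # output = activation(weighted sum)
grad = (out - target) * out * (1 - out) * x   # dLoss/dw for squared error
w -= 0.1 * grad                               # gradient descent weight update
```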