Way back in the day, when I was a grad student at Colorado State University, I took a class called Pattern Analysis from Michael Kirby (http://www.math.colostate.edu/~kirby/). A part of this class was dedicated to the study of neural networks and specifically looking at some of the work of Lapedes and Farber of Los Alamos Labs. (https://papers.nips.cc/paper/59-how-neural-nets-work.pdf)

Whenever I think about the topic of neural networks, it is hard not to have the image of Star Trek’s Commander Data come to mind.

Now that we have that out of the way…

The concept of neural networks dates back to the 1940s, and the basic idea is not overly complex. A digital neural network mimics the structure of a biological neural network with a collection of nodes (neurons) and edges (synapses). As such, a digital neural network can be considered mathematically to be a directed graph. Unlike a biological neural network, however, a digital neural network is frequently constructed in a very ordered, layered architecture.

Data flows through a neural network from its “input” layer through various “hidden layers” and eventually appears at its “output layer”. As the data flows through the network, “activation functions” at each node calculate the output of the node based on the sum of that node’s inputs. Data that flows along the edges (synapses) of the network is multiplied by a weight prior to being added into the downstream node.
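As a rough sketch of what happens at a single node (the sigmoid activation function and the example weights here are my own illustrative choices, not part of any particular network):

```python
import math

def sigmoid(x):
    # A common activation function: squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def node_output(inputs, weights, bias):
    # Each edge multiplies the data flowing along it by its weight;
    # the node sums the weighted inputs (plus a bias) and then applies
    # its activation function to produce the node's output.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# A hypothetical two-input node with made-up weights and bias.
out = node_output([0.5, -1.0], [0.8, 0.2], bias=0.1)
print(out)  # some value strictly between 0 and 1
```

Stacking layers of nodes like this one, with the outputs of one layer feeding the inputs of the next, gives the layered architecture described above.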

Nodes and edges are combined into a larger network similar to what we can see below.

Neural networks can be “trained” based on a known set of inputs and expected outputs. A sample from a training set gets passed into the network, and the error is calculated from the difference between the network’s output and the desired output. The error is then a mathematical function of the biases and weights associated with the nodes and edges of the network, and that function can be minimized using numerical analysis techniques like gradient descent.

Back in the late 80s and early 90s, the limits of computational power and the smaller datasets of the time limited the size and trainability of neural networks. Neural networks declined in popularity through the 1990s and 2000s. Recently, however, advances in computational power, cloud computing, and the large amounts of data generated by social media and services like YouTube have sparked a huge resurgence in the popularity of digital neural networks as a computational and analytical tool for dealing with data. Today neural networks play a large part in many speech recognition and image recognition applications. Over the summer, I will be continuing to explore this area and generate more blog posts.

I plan to follow the basic outline of this online course:

https://www.udacity.com/course/deep-learning--ud730

And also use portions of this book:

https://github.com/HFTrader/DeepLearningBook

My goal is to use this blog to track my research in this area. While Deep Learning does not directly deal with IoT, I believe that the two can be used together. I am still thinking about how this may come about… Maybe Commander Data will make a re-appearance in this blog…