Notation for neural networks isn’t completely standardised, so I’m just going to include some notes from a free e-book from Adventures in Machine Learning by Dr Andy Thomas. It’s worth pointing out that some sources use square brackets [ ] instead of parentheses ( ), so it’s necessary to check the notation description for anything being studied online… as if learning neural network programming wasn’t tough enough 🙂
The diagram above shows a 3-layer network, with some of the weight notation marked on it.
Notation of node weights
w11(1) is the weight between the first node of the first layer and the first node of the next layer.
To make it clearer what this really means, take a look at the weight between the 3rd node of the first layer and the 2nd node of the second layer.
Here we see w23(1)
The ‘w’ denotes a weight. First look at the number in the brackets: this is the layer number of the node the connection starts from. The second subscript, 3, is the node number in that layer, and the first subscript, 2, is the node number in the next layer, i.e. the layer whose number is the value in the brackets + 1.
So if we saw
w46(5)
this denotes the weight between node 6 in layer 5 and node 4 in layer 6 (5 + 1).
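To make the indexing concrete, here’s a minimal sketch of how these weights might be stored in code. This is my own illustration, not the book’s implementation: I’m assuming a 3-layer network with sizes [3, 3, 1] and one NumPy matrix per layer, so that W[l][i-1, j-1] corresponds to wij(l) (the -1 converts the 1-based notation to 0-based array indexing).

```python
import numpy as np

# Assumed layer sizes for a 3-layer network: 3 input nodes,
# 3 hidden nodes, 1 output node.
layer_sizes = [3, 3, 1]

# One weight matrix per layer transition. W[l] has shape
# (nodes in layer l+1, nodes in layer l), so W[l][i-1, j-1]
# is w_ij^(l): the weight from node j in layer l to node i
# in layer l+1.
rng = np.random.default_rng(0)
W = {l: rng.standard_normal((layer_sizes[l], layer_sizes[l - 1]))
     for l in range(1, len(layer_sizes))}

# w23(1): the weight from node 3 in layer 1 to node 2 in layer 2.
print(W[1][2 - 1, 3 - 1])
```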
Notation of bias
The notation for the bias can be simplified as
bi(l) where i is the node number in layer l+1
In the example diagram above, we have a bias in layer 1 that feeds into all the nodes of layer 2.
b3(1) would therefore signify the weight of the bias connection between layer 1 and the third node in layer 2.
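Continuing the sketch from above (same assumed layer_sizes and 0-based indexing), the biases can be kept as one vector per layer, so that b[l][i-1] corresponds to bi(l):

```python
# One bias vector per layer transition: b[l] has one entry per
# node in layer l+1, so b[l][i-1] is b_i^(l).
b = {l: np.zeros(layer_sizes[l]) for l in range(1, len(layer_sizes))}

# b3(1): the bias feeding into node 3 in layer 2.
print(b[1][3 - 1])
```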
Output notation
The notation for a node’s output is simplified as
hj(l) where j denotes the node number in layer l of the network.
So in the above, the output of node 3 in the second layer is denoted as
h3(2)
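To tie the three notations together, here’s a hedged forward-pass sketch using the W, b, and layer_sizes containers from the snippets above. I’m assuming a sigmoid activation purely for illustration; the point is that h[l][j-1] corresponds to hj(l).

```python
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W, b):
    """Return a dict h where h[l][j-1] is h_j^(l), the output of
    node j in layer l. Layer 1's 'output' is just the input x."""
    h = {1: np.asarray(x, dtype=float)}
    for l in sorted(W):
        # h^(l+1) = f(W^(l) h^(l) + b^(l))
        h[l + 1] = sigmoid(W[l] @ h[l] + b[l])
    return h

h = forward([1.0, 0.5, -0.5], W, b)
print(h[2][3 - 1])  # h3(2): the output of node 3 in layer 2
```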