Deep Learning — MP Neuron
Welcome back to the deep learning series. In this article, we will discuss MP neurons in detail. I have also written about deep learning applications which will give you an introduction and some real-world applications. So in case you have not read that article, you can follow the below link.
Human Brain: On average, the human brain contains about 10¹¹ neurons (100 billion)!
Neurons interconnect with each other at junctions called synapses. Each neuron has about 7,000 synaptic connections to other neurons. At a synapse, one neuron sends a message to a target neuron. Most synapses are chemical. These synapses communicate using chemical messengers. Other synapses are electrical. In these synapses, ions flow directly between cells. At a chemical synapse, an action potential in the presynaptic neuron triggers the release of neurotransmitters. These molecules bind to receptors on the postsynaptic cell and make it more likely to fire an action potential.
Biological Neuron: A neuron is a cell just like any other cell of the body, which has a DNA code and is produced identically to other cells. The majority of neurons encode their outputs or activations as a series of brief electrical pulses (action potentials). Dendrites are the receptive zones that receive activations from other neurons. The cell body of the neuron processes the incoming activations from the dendrites and converts them into output activations. Axons transmit the output activations to other neurons. Synapses allow the weighted transmission of signals (using neurotransmitters) between axons and dendrites, building up large neural networks. Consider the below diagram for a better understanding.
In case you are wondering where we use the Neural Networks concept in real life, you can read my other article on neural network applications.
MP Neuron: Theory
In 1943, the neurophysiologist Warren McCulloch and the logician Walter Pitts described what is now called a ‘neural network’. The McCulloch-Pitts neuron is also known as the MP neuron; it is a binary threshold device.
It sums up the inputs (each multiplied by its synaptic strength, or weight) and either fires (produces output 1) or does not fire (produces output 0), depending on whether the sum is above some threshold. Let’s summarize this in points, using the below image for reference.
- Output → Boolean (0,1)
- Input(s) → Boolean (0,1)
- Weights are unity
- The threshold is the only parameter we can change.
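To make these points concrete, here is a minimal sketch of an MP neuron in Python. The function name, example inputs, and example threshold are my own choices for illustration, not part of the original formulation:

```python
# A minimal MP (McCulloch-Pitts) neuron sketch: Boolean inputs, unit
# weights, a single threshold, and a Boolean output.

def mp_neuron(inputs, threshold):
    """Sum the Boolean inputs (weights are all 1) and fire (output 1)
    if the sum meets or exceeds the threshold; otherwise output 0."""
    g = sum(inputs)  # aggregation: g(x) = x1 + x2 + ... + xn
    return 1 if g >= threshold else 0

# Boolean inputs produce a Boolean output:
print(mp_neuron([1, 0, 1], threshold=2))  # sum 2 >= 2, fires -> 1
print(mp_neuron([1, 0, 0], threshold=2))  # sum 1 < 2, does not fire -> 0
```

Note that the only knob we can turn here is `threshold`; the weights are fixed at 1, which is exactly the limitation listed above.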
Let us try to understand this by an example.
Output → “How successful is a person?”
Input(s) → Factors can be: the IQ of the person, and character traits such as whether the person is hardworking or not.
Weights → We assign a weight to each factor in the above example.
Another example could be:
Output → “Will you go to college or not?”
Input(s) → Factors can be: COVID-19 can be one reason. But let’s say, there is no pandemic.
Weights → Each factor has a different weightage. Weights are nothing but parameters. If you have an exam at college on that particular day, you will give that factor more weight; whereas if you are just feeling lazy, you will give less weight to your lethargic state.
Implementation of AND Gate with MP Neuron
Let us now try to implement the AND gate using an MP neuron. The diagram below shows the truth table of the AND gate. Here, X1 and X2 are the inputs, or what you call features.
All the factors and features are independent of each other. For example, having an exam today and feeling lazy are two independent features. The above model can also be represented by a feature space (shown in the below diagram).
If you look at this diagram, in the middle we have the truth table for the AND gate. We have X1 and X2 as features, one column for the sum (h), one column for the threshold check (value = 1.5), and the output class as the last column. Try to compare it with the diagram on the left.
Let us now try to draw a line in the feature space. Refer to the above diagram for a better understanding. We see that with the threshold set to 1.5, the line separates the two classes: only the point (1, 1) lies on the firing side.
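The AND-gate behaviour described above can be checked with a short sketch (the function name is illustrative, assuming the sum-and-threshold model with threshold 1.5):

```python
# AND gate with an MP neuron: unit weights, threshold 1.5.

def mp_neuron(inputs, threshold):
    """Fire (1) if the sum of Boolean inputs meets the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# Truth table: only (1, 1) sums to 2 >= 1.5, so only that row fires.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"x1={x1} x2={x2} -> {mp_neuron([x1, x2], threshold=1.5)}")
```

Geometrically, the threshold defines the line x1 + x2 = 1.5 in the feature space, with (1, 1) on one side and the other three points on the other.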
Problems with MP-neurons
Let us now discuss some of the problems we face with the MP neuron.
- Same (unit) weights
- Fixed thresholds → We saw that the two classes are separated only with a suitable threshold (1.5 in our AND example). Also, what will the output be when we have a hard threshold like 0 or 1?
- What about non-discrete and/or non-binary tasks?
- No real advantage over any conventional digital logic circuit.
- Cannot learn → these networks display no learning; the threshold must be set by hand.
After all this discussion of implementing the AND gate with the MP neuron, one thing we can conclude is that the MP neuron works only for binary classification.
Implementation of OR Gate with MP Neuron
You also must be curious to know whether we can even implement OR gate with MP neurons. If yes, what should be the threshold? Can you also draw the decision line? What is the feature space like if we have 3 inputs? How do you imagine the decision surface to be?
A lot of questions need to be answered. Let us now see the implementation of the OR gate below in this article.
See the truth table of the OR gate mentioned below and try to think what should be the value of the activation function so that both classes are separated.
Seeing the above table will clear all your doubts. So, to answer the question of what the feature space with 3 inputs looks like: it is a 3-dimensional feature space, and the decision surface is a 2-dimensional plane.
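Here is a sketch of the OR gate under the same sum-and-threshold model. The threshold value 0.5 is one valid choice of my own; any threshold in (0, 1] separates the two classes, since the neuron should fire whenever at least one input is 1:

```python
# OR gate with an MP neuron: unit weights, threshold 0.5.

def mp_neuron(inputs, threshold):
    return 1 if sum(inputs) >= threshold else 0

# Truth table: every row except (0, 0) has a sum >= 0.5 and fires.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"x1={x1} x2={x2} -> {mp_neuron([x1, x2], threshold=0.5)}")
```

The corresponding decision line is x1 + x2 = 0.5, with only (0, 0) on the non-firing side.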
In the above figure, look at the 3D plot on the right side. We have 3 coordinates X1, X2, and X3, which are the values of the features. Since our Boolean inputs are X1, X2, and X3, we will have a total of 2³ = 8 combinations.
Question: input(s) → If we have n-dimensional inputs, what will be the dimension of the feature space?
Answer: “n-dimensional”. The decision hyperplane will be (n − 1)-dimensional, and there will be 2^n data points in total.
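The 3-input case can be enumerated directly. This sketch (using a 3-input OR as a hypothetical example) lists all 2³ = 8 data points in the feature space:

```python
from itertools import product

def mp_neuron(inputs, threshold):
    return 1 if sum(inputs) >= threshold else 0

n = 3
points = list(product((0, 1), repeat=n))  # all Boolean input combinations
print(len(points))  # 2**n = 8 data points for n = 3

# A 3-input OR: fires whenever at least one input is 1. The decision
# surface is the 2-dimensional plane x1 + x2 + x3 = 0.5 inside the
# 3-dimensional feature space.
for p in points:
    print(p, mp_neuron(p, threshold=0.5))
```

For general n, `product((0, 1), repeat=n)` yields the 2^n corners of the n-dimensional Boolean hypercube, matching the answer above.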
Conclusion
The MP neuron is a type of artificial neuron that was proposed in 1943 by Warren McCulloch and Walter Pitts. It functions as a binary threshold device, where input signals are summed and then compared to a threshold value. If the sum is above the threshold, the MP neuron produces an output of 1, otherwise, it produces an output of 0. MP neurons have been used in the development of neural networks, but they have some limitations including the use of fixed weights and thresholds, and their inability to perform certain logical operations. These limitations have led to the development of more advanced artificial neuron models.
In case you have not followed me yet, please do. Follow the below link for Part 3 of this deep learning series. I hope to see you again for another blog on the deep learning series. Stay tuned!