Weighted sum is a fundamental concept in Machine Learning: the total of multiple inputs, with each input multiplied by a corresponding weight. In a neural network, each neuron computes a weighted sum of its inputs and passes the result through an activation function; stacking many such neurons in layers produces the network's output.

In a neural network, every input has a corresponding weight that determines its importance in the final output. The weights are assigned during the training phase of the network, where the network learns to adjust its weights for optimal performance. The goal of the training process is to find the weights that produce the best output for a given input.

To calculate the weighted sum, each input is multiplied by its corresponding weight, and the results are added together. For example, if a neural network has three inputs with weights of 0.5, 0.3, and 0.2, and the input values are 1, 2, and 3 respectively, then the weighted sum would be:

0.5 * 1 + 0.3 * 2 + 0.2 * 3 = 0.5 + 0.6 + 0.6 = 1.7
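The calculation above can be sketched in a few lines of Python (the weights and inputs are the illustrative values from the example, not from any real trained network):

```python
# Example weights and inputs from the text above.
weights = [0.5, 0.3, 0.2]
inputs = [1, 2, 3]

# Multiply each input by its weight, then add the products.
weighted_sum = sum(w * x for w, x in zip(weights, inputs))
print(weighted_sum)  # close to 1.7 (floating-point rounding may add a tiny error)
```

In practice this same operation is usually expressed as a dot product between a weight vector and an input vector, which libraries like NumPy compute efficiently.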

The output of the network is then determined by passing the weighted sum through an activation function, which introduces non-linearity into the output. This allows the network to model complex relationships between inputs and outputs.

One common activation function used in neural networks is the sigmoid function, which maps the weighted sum to a value between 0 and 1. This can be useful for classification tasks, where the output can be interpreted as the probability of a certain class.
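A minimal sketch of the sigmoid function, applied to the weighted sum computed earlier:

```python
import math

def sigmoid(z):
    # Map any real-valued weighted sum to the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(1.7))  # about 0.85, interpretable as a class probability
```

Note that sigmoid(0) is exactly 0.5, and the output approaches 0 or 1 as the weighted sum becomes very negative or very positive.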

Another common activation function is the ReLU (Rectified Linear Unit) function, which outputs the weighted sum unchanged when it is positive and outputs 0 otherwise, so its range is 0 to infinity. Its simplicity and well-behaved gradient for positive inputs make it a popular default in deep networks.
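ReLU can be sketched just as briefly:

```python
def relu(z):
    # Pass positive weighted sums through unchanged; clamp negatives to 0.
    return max(0.0, z)

print(relu(-2.0))  # 0.0
print(relu(1.7))   # 1.7
```

Unlike sigmoid, ReLU does not squash large values, which helps gradients flow during training of deep networks.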

In conclusion, the weighted sum, in which each input is multiplied by its corresponding weight and the products are added together, is a fundamental building block of Machine Learning. Neural networks use it to compute each neuron's pre-activation value, which is then passed through an activation function to introduce non-linearity. The weights themselves are learned during training, with the goal of finding the values that produce the best output for a given input.