Introduction to Machine Learning

Machine learning can be thought of as an extension of human intelligence to computers. We are trying to make computers think the way we do. Though this statement is extremely ambitious and loosely put, it holds true for most of the basic tasks a computer is expected to do.

Let me make my point through a simple example.

Imagine your favorite team A has made it to the final of a major sporting event against team B. If you were to bet on a team’s win, which team would you choose?

If I just tell you that the final is a match between A and B, it is hard for you to come up with a reasonable guess. Even if you support A, you know that your support is not going to affect the outcome of the match. In fact, your support for the team is a bias you hold towards it. That is to say, if the two teams were evenly matched in every aspect of the game, you would choose A simply because you support A.

As of now, with no information other than the fact that the final is A vs. B, you would naturally go for team A’s win. Let us bring in more information.

I now tell you that the overall win percentage of team A is 65% and that of B is 70%. It is still hard to predict a winner, as these numbers are close, and deep down you are rooting for A’s win. Now I tell you that out of the 10 finals A has played, A has won 5. B, meanwhile, has won 7 of the 11 finals it has played.

Let me put them in a table:

Team | Overall win % | Finals (won/played)
-----|---------------|--------------------
A    | 65%           | 5/10
B    | 70%           | 7/11
With this information, can you now predict a winner? If yes, then how did you come up with the prediction? The answer is not simple. Even though your brain came up with an answer in an instant, you know that it is just a guess. An educated guess, maybe!

If I were to pick a team with this information, I would pick team B, because they have a higher success rate both overall and in finals.

So let’s feed you more information. I now tell you that the venue for the final is A’s home ground, that A’s success rate at home is impeccable, and that they have not lost a single game in their last 10 matches! Team B, on the other hand, has only a moderate away success rate and has won just 7 of its last 10 games.

Now with this information, the table looks like:

Team | Overall win % | Finals (won/played) | Venue       | Last 10 games
-----|---------------|---------------------|-------------|--------------
A    | 65%           | 5/10                | Home ground | 10 wins
B    | 70%           | 7/11                | Away        | 7 wins
So, with this information, who would you root for in the final? It is reasonable to pick A because of the home advantage and their recent form. If you observe the whole process, your prediction changes with each addition of new information. What happened here? Why does the new information alter your prediction?

Each column in the table is called a feature associated with the team. With these features you were able to make a prediction. One way to think of this is that your brain is formulating a function that takes these features into account and calculates a score (or probability of success) for each team. In other words, it assigns a weight (an importance) to each of these features and takes the weighted sum of the features to come up with a score for each team. (It is far more complex than a linear weighted sum, but assume this for simplicity.)

The function looks like this:

f(X) = W · X = w₁x₁ + w₂x₂ + … + wₙxₙ

X represents the vector (list) of our features.
W is the vector of weights, where each weight wᵢ is associated with the corresponding feature xᵢ ∈ X.
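As a quick sketch, this weighted sum can be written in a few lines of Python. The feature values and weights below are placeholders chosen purely for illustration, not the ones used later in the article:

```python
def score(weights, features):
    """Weighted sum of features: f(X) = w1*x1 + w2*x2 + ... + wn*xn."""
    assert len(weights) == len(features)
    return sum(w * x for w, x in zip(weights, features))

# Hypothetical weights and feature values, purely for illustration.
weights = [0.5, 0.3, 0.2]
features = [0.65, 0.5, 1.0]
print(round(score(weights, features), 3))  # 0.325 + 0.15 + 0.2 = 0.675
```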

For team A, the function f becomes:

f(X_A) = w₁x₁(A) + w₂x₂(A) + … + wₙxₙ(A)

For team B, the function f becomes:

f(X_B) = w₁x₁(B) + w₂x₂(B) + … + wₙxₙ(B)

where xᵢ(A) and xᵢ(B) are team A’s and team B’s values for feature i.
The function f calculates a score which indicates a team’s chances of success: the higher the score, the better the chances. In order to calculate the score, we need to know the weights used in the equation. How do we assign values to these weights? This is where learning from data comes in. Subconsciously, your brain used all the data known to you from past experience to come up with an importance factor for each of these features. For example, based on your experience you know that home advantage plays a big role in any game, so it is logical to assign it a high positive weight. Assume you came up with this set of weights: W = [0.1, 0.2, 0.2, 0.3, 0.2]. (We will never know what weights your brain actually came up with, but let’s assume these for the sake of argument.)

According to these weights, the score for team A is 0.83 and the score for team B is 0.408. For a better comparison, we can normalize the scores by dividing each by the sum of the scores. Hence we get probabilities:

P(A wins) = 0.83 / (0.83 + 0.408) ≈ 0.67
P(B wins) = 0.408 / (0.83 + 0.408) ≈ 0.33

One more way of comparing them is by taking the ratio of their scores:

0.83 / 0.408 ≈ 2
So, A is about twice as likely to win the final as B. This ratio is called the “odds”. We say team A has better odds of winning than team B.
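Using the scores from the article (0.83 for A, 0.408 for B), the normalization and the odds can be checked with a short Python snippet:

```python
score_a, score_b = 0.83, 0.408  # scores from the weighted-sum model above

# Normalize to probabilities by dividing each score by the total.
total = score_a + score_b
p_a = score_a / total
p_b = score_b / total

# Odds: the ratio of the two scores.
odds = score_a / score_b

print(f"P(A wins) = {p_a:.2f}")   # P(A wins) = 0.67
print(f"P(B wins) = {p_b:.2f}")   # P(B wins) = 0.33
print(f"odds      = {odds:.2f}")  # odds      = 2.03
```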

When you were asked to pick a team, what you did was come up with a mathematical model (function + data) that provides a score proportional to the likelihood of a team’s win. To formulate this model, you picked out the factors associated with the game which you felt were influential to a team’s win, based on the data known to you and your previous experiences. Once you knew which factors were influential, you ranked them based on the data and their importance, and to quantify this ranking you came up with weights. You then used these weights in your equation to predict the winner of the final. Note that the calculation of the weights is not based on the data in the second table; it is obtained by analyzing the data already available to you (your prior experience).

This is exactly how you would train a computer program to learn from data and provide answers to questions. So when I said machine learning is an extension of human intelligence to computers, this is what I meant. In a nutshell, machine learning is learning the mathematical equation associated with data, using various algorithms, to answer questions whose answers can be quantified.
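To make the idea concrete, here is a minimal sketch, with entirely made-up data, of how a program could learn the weights of such a weighted-sum model from past games instead of having a human assign them. It uses plain gradient descent on a squared error; real systems use more sophisticated parameter-estimation algorithms:

```python
# Made-up past games: a feature vector per game plus the outcome
# (1 = win, 0 = loss). Gradient descent nudges the weights until the
# model's scores match the outcomes.
games = [
    # [overall_win_rate, played_at_home], outcome
    ([0.9, 1.0], 1),
    ([0.8, 1.0], 1),
    ([0.4, 0.0], 0),
    ([0.3, 0.0], 0),
]

weights = [0.0, 0.0]
lr = 0.1  # learning rate: how far each correction step moves the weights

for _ in range(2000):
    for features, outcome in games:
        score = sum(w * x for w, x in zip(weights, features))
        error = score - outcome
        # Squared-error gradient step: shift each weight against its
        # contribution to the error.
        weights = [w - lr * error * x for w, x in zip(weights, features)]

# The learned weights now reproduce the past outcomes almost exactly.
for features, outcome in games:
    score = sum(w * x for w, x in zip(weights, features))
    print(features, outcome, round(score, 2))
```

On this toy data the model learns, by itself, that playing at home is the decisive feature — the same kind of importance assignment your brain did subconsciously.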

This is the beginning of a series of blog posts in which I will try to simplify the concepts of machine learning and data science. I have been loose with the terminology used in this article, just to keep things simple. Going forward, we will tighten it up as and when required. Please feel free to let me know your views.


Vikram Kalabi
✉: [email protected]
