Note From Author: This tutorial covers the foundations of computer vision and is delivered as "Lesson 8" of the series. More lessons are upcoming, extending all the way to building your own deep-learning-based computer vision projects. You can find the complete syllabus and table of contents here.

Target Audience: Final-year college students, newcomers to a data science career, and IT employees who want to switch to a data science career.

**Takeaway**: Main takeaways from this article:

- Logistic Regression
- Approaching Logistic Regression with Neural Network mindset

Logistic Regression is an algorithm for binary classification. In a binary classification problem, the input (**X**) is a one-dimensional feature vector, and the output label (**Y**) is either **1** or **0**.

The logistic regression output lies in the range 0 to 1.

0 ≤ Y ≤ 1, where Y is the probability of the output label being 1 given the input X:

Y = P(y=1 | x). For a learning algorithm to find **Y**, it takes two parameters, W and B, where **W** is the weight vector associated with the input feature vector X and **B** is the bias.

To find Y, one thing you could try (that doesn't work) is to make Y a linear function of the input X: Y = WᵀX + B. In fact, this is exactly what you would use if you were doing linear regression. The problem is that a linear function can output any real number, whereas Y must be a probability between 0 and 1.
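A minimal NumPy sketch can make this concrete. The feature values, weights, and bias below are hypothetical, chosen only for illustration: the linear part WᵀX + B can land outside [0, 1], while passing it through the sigmoid function (the standard fix in logistic regression) always yields a value in (0, 1).

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 3-feature input vector X, weight vector W, and bias B.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, 0.4])
b = -0.5

# Plain linear output: an arbitrary real number, so not a valid probability.
z = np.dot(w, x) + b

# Logistic regression output: sigmoid of the linear part, always in (0, 1).
y_hat = sigmoid(z)

print(z)      # linear score, unbounded
print(y_hat)  # probability estimate P(y=1 | x)
```

Here `y_hat` is interpretable as P(y=1 | x), which is exactly the guarantee the bare linear function cannot give.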