Deep Learning: DL vs ML
Welcome to this series of blogs, in which I'll try to explain the difference between Deep Learning and Machine Learning. For this, I'll be referring to the following books:
- Machine Learning: A Probabilistic Perspective by Kevin P. Murphy.
- Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville (Link).
Today, artificial intelligence (AI) is a thriving field with many practical applications and active research topics. The true challenge for artificial intelligence is to solve problems that humans solve intuitively, tasks that feel automatic to us, such as recognizing a spoken accent or faces in an image.
The solution to this problem is to allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined through its relation to simpler concepts. By gathering knowledge from experience, this approach avoids the need for human operators to formally specify all of the knowledge the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones. If we draw a graph showing how these concepts are built on top of each other, the graph is deep, with many layers. For this reason, we call this approach to AI deep learning.
Several artificial intelligence projects sought to hard-code rules to deal with various problems. This is known as the knowledge-base approach to artificial intelligence. None of these projects led to major success. The machine learning / deep learning approach cuts down on this hard-coding by letting the system extract patterns from data on its own.
Difference Between Machine Learning and Deep Learning
This is a very common question that nearly every beginner in this field comes across. Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed.
Let's go deeper into this.
First: In classical machine learning, applying a model is a minor task once you know which model to use. The bigger burden turns out to be feature extraction and feature transformation, which typically demand domain expertise and manual effort.
On the other hand, deep learning models have tremendously simplified predictive pipelines. Traditional pipelines consisted of feature extraction and model training as separate steps. In contrast, end-to-end deep learning models are able to learn functions that connect the input data directly to the required output, thus making model development much simpler and faster.
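To make the contrast concrete, here is a minimal, self-contained sketch on a toy task. The task, the hand-crafted feature, and the one-layer "end-to-end" model are all illustrative assumptions of mine, not examples from the books above; a real deep model would stack many learned layers, but even this single learned layer shows the difference in structure: one path relies on a human-designed feature, the other learns its own weights directly from raw input.

```python
import math
import random

random.seed(0)

# Toy task (illustrative): classify a length-8 signal as coming from a
# positive-mean (label 1) or negative-mean (label 0) source.
def make_example():
    label = random.randint(0, 1)
    shift = 0.5 if label == 1 else -0.5
    return [random.gauss(shift, 1.0) for _ in range(8)], label

train = [make_example() for _ in range(500)]
test = [make_example() for _ in range(200)]

# --- Traditional pipeline: hand-crafted feature, then a separate classifier ---
def extract_feature(x):
    return sum(x) / len(x)          # a human decided the mean is informative

def pipeline_predict(x):
    return 1 if extract_feature(x) > 0 else 0

# --- "End-to-end" model: logistic regression on the raw signal ---
# (a one-layer stand-in for a deep net: it learns its own feature weights
# directly from the raw input instead of using a hand-crafted feature)
w, b, lr = [0.0] * 8, 0.0, 0.1
for _ in range(20):                  # a few epochs of SGD on log-loss
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))
        g = p - y                    # gradient of the log-loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def end_to_end_predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

acc_pipeline = sum(pipeline_predict(x) == y for x, y in test) / len(test)
acc_end_to_end = sum(end_to_end_predict(x) == y for x, y in test) / len(test)
print(f"pipeline: {acc_pipeline:.2f}, end-to-end: {acc_end_to_end:.2f}")
```

Both paths reach similar accuracy on this easy task; the point is structural. In the pipeline, the feature was designed by hand before any learning happened, while the end-to-end model discovered equivalent weights from the raw data alone.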
Second: The performance of a sufficiently complex deep learning model keeps increasing as we throw more data at it. This differs from the behavior of most machine learning techniques: as the amount of training data increases, the performance of shallow machine learning models improves up to a certain point and then reaches a plateau.
We’ll look more into the basic concepts of Deep Learning in the upcoming blogs.
Feedback and contributions are highly appreciated.