This post is a summary of an article on neural networks by Janelle Shane, published in a 2018 issue of "Physics World."
- Neural networks are excellent at recognizing patterns in multivariate data.
- They are suitable for problems that are not well understood. Traditional systems were either rule-based or feature-based, but manually devising rules or features is intellectually challenging and infeasible for many tasks, such as face recognition. Neural networks, by contrast, learn useful features automatically.
- Interpretability is an issue with neural networks. A neural network acts like a black box because humans cannot easily interpret the features it has learnt.
- Human experts need to review the results, because a neural network may learn features that are not at all relevant to the task at hand.
- Neural networks can suffer from class imbalance in the training examples. This is a major issue for rare events, for which it is hard to gather a sufficient number of training examples.
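One common mitigation for class imbalance is to weight each class inversely to its frequency, so rare events contribute as much to the training loss as common ones. A minimal sketch (the function name and labels here are illustrative, not from the article):

```python
from collections import Counter

def class_weights(labels):
    """Weight each class inversely to its frequency.

    Rare classes get proportionally larger weights, a common
    mitigation for imbalanced training sets (illustrative sketch).
    """
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# A rare "event" class receives a proportionally larger weight.
labels = ["normal"] * 90 + ["rare_event"] * 10
weights = class_weights(labels)
print(weights)  # the rare class is weighted 9x more than the common one
```

These weights would then multiply each example's loss term during training, so misclassifying a rare event costs the network as much as misclassifying many common examples.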
- Neural networks can overfit the training examples. Overfitting can be detected by evaluating the network on unseen examples.
"Neural networks can be a very useful tool, but users must be careful not to trust them blindly. Their impressive abilities are a complement to, rather than a substitute for, critical thinking and human expertise."