Friday, 3 August 2018

Neural networks, explained - Janelle Shane, Physics World

This post is a summary of the article "Neural networks, explained" by Janelle Shane, published in the 2018 issue of Physics World.

Neural networks have two main strengths:

  1. They are excellent at recognizing patterns in multivariate data.
  2. They are well suited to problems that are not well understood. Traditional systems were either rule-based or feature-based, but manually devising rules or features is intellectually challenging and infeasible for many tasks, such as face recognition. Neural networks learn useful features automatically.
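Point 2 can be illustrated with a minimal sketch: a tiny network with one hidden layer learns XOR, a pattern that no single linear rule or hand-picked feature captures, purely by gradient descent. Everything here (layer sizes, learning rate, the plain-numpy implementation) is an illustrative choice, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic pattern that no single linear rule can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 sigmoid units, then a sigmoid output.
W1 = rng.normal(0.0, 1.0, (2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse():
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return float(np.mean((out - y) ** 2))

loss_before = mse()
lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * np.sum(d_out, axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * np.sum(d_h, axis=0, keepdims=True)

loss_after = mse()
print(f"squared error before training: {loss_before:.3f}, after: {loss_after:.3f}")
```

The hidden units end up encoding intermediate features of the inputs that nobody specified by hand, which is the sense in which the network does its own feature engineering.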
They also come with caveats:

  1. Interpretability is an issue: a neural network acts like a black box, because humans cannot easily interpret the features it has learnt.
  2. Results need to be reviewed by human experts, because neural networks can latch onto features that are not at all relevant to the task at hand.
  3. Neural networks can suffer from class imbalance in the training examples. This is a major issue for rare events, for which it is hard to gather a sufficient number of training examples.
  4. Neural networks can overfit to the training examples. Overfitting is detected by testing the network on unseen examples that were held out of training.
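The held-out-test idea in point 4 can be sketched without a neural network at all: any model with enough free parameters can memorise its training data, and only evaluation on unseen examples reveals the problem. The underlying function, noise level, and polynomial degrees below are illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Noisy samples of a simple underlying function.
def f(x):
    return np.sin(x)

x_train = rng.uniform(0, 2 * np.pi, 10)
y_train = f(x_train) + rng.normal(0, 0.2, 10)
x_test = rng.uniform(0, 2 * np.pi, 100)
y_test = f(x_test) + rng.normal(0, 0.2, 100)

def rmse(coeffs, x, y):
    # Root-mean-square error of a fitted polynomial on data (x, y).
    return float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))

# Degree-9 polynomial: enough parameters to memorise all 10 training points.
flexible = np.polyfit(x_train, y_train, 9)
# Degree-3 polynomial: a more constrained model.
simple = np.polyfit(x_train, y_train, 3)

print("flexible: train", rmse(flexible, x_train, y_train),
      " test", rmse(flexible, x_test, y_test))
print("simple:   train", rmse(simple, x_train, y_train),
      " test", rmse(simple, x_test, y_test))
```

The flexible model looks far better on the data it was trained on; its much larger error on the held-out points is the signature of overfitting.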
"Neural networks can be a very useful tool, but users must be careful not to trust them blindly. Their impressive abilities are a complement to, rather than a substitute for, critical thinking and human expertise."
