Deep learning has been around since the 1950s. What has not been around that long, however, is the data that a deep learning system needs in order to improve.

Teams have been intentionally leveraging the power of the internet to build their own databases for use in deep learning experiments. The ImageNet project, launched in 2007, is a collaboration of over 50,000 people in 180 countries; in 2009 it produced the largest international image database ever, with about 15 million titled and classified images spread across 22,000 categories.

From a recent Fortune article on the topic: “That dramatic progress has sparked a burst of activity. Equity funding of AI-focused startups reached an all-time high last quarter of more than $1 billion, according to the CB Insights research firm. There were 121 funding rounds for such startups in the second quarter of 2016, compared with 21 in the equivalent quarter of 2011, that group says. More than $7.5 billion in total investments have been made during that stretch, with more than $6 billion of that coming since 2014.”

The Rise of Deep Learning

The amounts of data available continue to grow at an increasing rate. According to the 2014 Internet Trends report, an average of 1.8 billion digital images are uploaded every day, a number that has only risen since. Every CCTV camera, every smartphone, every Facebook user, every Instagram account continues to pump out additional data about everyday events that might one day be fed into a constantly improving deep learning algorithm. Get used to it.

Today, news broke that Google Translate will, moving forward, be using deep learning. “We believe we are the first using [neural machine translation] in a large-scale production environment,” says Mike Schuster, research scientist at Google.
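
For readers wondering what “neural machine translation” actually refers to, here is a minimal sketch of the encoder-decoder idea behind it, written in PyTorch. Everything in it (the TinyNMT class, the vocabulary and hidden sizes) is invented for illustration; it shows the shape of the technique, not Google's production system.

```python
# Toy illustration of the encoder-decoder idea behind neural machine
# translation (NMT). All names and sizes are invented for this sketch;
# this is NOT Google's system. A GRU encoder compresses the source
# sentence into a hidden state, and a GRU decoder unrolls from that
# state to score each position over the target vocabulary.

import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, HIDDEN = 100, 100, 64  # arbitrary toy sizes

class TinyNMT(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_embed = nn.Embedding(SRC_VOCAB, HIDDEN)
        self.tgt_embed = nn.Embedding(TGT_VOCAB, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, TGT_VOCAB)

    def forward(self, src, tgt):
        # Encode the source token sequence into a final hidden state...
        _, state = self.encoder(self.src_embed(src))
        # ...then run the decoder over the target tokens from that state.
        dec_out, _ = self.decoder(self.tgt_embed(tgt), state)
        return self.out(dec_out)  # per-position scores over target vocab

model = TinyNMT()
src = torch.randint(0, SRC_VOCAB, (1, 7))  # fake 7-token source sentence
tgt = torch.randint(0, TGT_VOCAB, (1, 9))  # fake 9-token target sentence
print(model(src, tgt).shape)  # torch.Size([1, 9, 100])
```

In a real system the decoder's scores would be trained with cross-entropy against the reference translation over millions of sentence pairs, which is exactly why the data growth described above matters so much.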