If you download and run Google’s latest translation app, you will be using a deep neural network, and not just through the cloud: it runs right on your own phone. The update instantly adds twenty languages to the seven the app could decode before.
What puts the ‘deep’ in deep neural networks comes down to having hidden layers of neurons between the input layer and the output layer; that is where all the so-called deep learning happens. The original neural network from many decades ago, the perceptron, was at heart just an algorithm: a single layer of neurons connected in a particular way, first implemented in software running on a standard processor. But deep neural networks implemented directly in hardware are coming, too. Networks built from memristor arrays or FPGAs are already possible, just not entirely portable.
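To make the distinction concrete, here is a minimal sketch in Python with NumPy: a classic single-layer perceptron next to a small network with one hidden layer. The layer sizes and the ReLU activation are illustrative assumptions, not details of Google’s network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-layer perceptron: inputs map straight to outputs.
def perceptron(x, W, b):
    return np.where(W @ x + b > 0, 1, 0)   # hard threshold activation

# "Deep" network: a hidden layer sits between input and output.
def deep_net(x, W1, b1, W2, b2):
    h = np.maximum(0, W1 @ x + b1)          # hidden layer with ReLU
    return W2 @ h + b2                      # output layer (raw scores)

x = rng.standard_normal(8)                  # 8 input features
W,  b  = rng.standard_normal((3, 8)),  np.zeros(3)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
W2, b2 = rng.standard_normal((3, 16)), np.zeros(3)

print(perceptron(x, W, b))                  # e.g. [1 0 1]
print(deep_net(x, W1, b1, W2, b2))
```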
Google was able to extract the essence of a large, general translation architecture running in the cloud and pare it down to something you could use to translate a menu in a restaurant. The network is still powerful enough to do things like recognize letters even when they are rotated by a small angle. To run in real time, Google had to optimize several core math operations. Technically speaking, that entailed tuning things like matrix multiplies so the working data fits into all levels of cache memory, and making use of the smartphone processor’s SIMD instructions.
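The cache tuning typically comes down to loop blocking (tiling): computing the product in small tiles so each tile is reused while it is still hot in cache. Here is a minimal sketch of the idea in Python; the block size of 64 is an illustrative assumption, and a real implementation would do this in optimized native code with SIMD intrinsics rather than in Python.

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Tiled matrix multiply: C = A @ B, computed tile by tile.

    Each small tile of A and B is reused many times while it is
    still resident in cache, which is the idea behind fitting the
    multiply into all levels of the cache hierarchy.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m), dtype=A.dtype)
    for i in range(0, n, block):
        for j in range(0, m, block):
            for p in range(0, k, block):
                # Accumulate one tile's contribution to the output tile.
                C[i:i+block, j:j+block] += (
                    A[i:i+block, p:p+block] @ B[p:p+block, j:j+block]
                )
    return C

A = np.random.rand(256, 300)
B = np.random.rand(300, 200)
assert np.allclose(blocked_matmul(A, B), A @ B)
```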
One can imagine chips in the near future with larger, dedicated neural networks implemented in hardware, which could be accessed for all kinds of mission-critical functions. For example, the so-called convolutional neural networks used here to recognize letters are also used in a variety of other image processing operations…
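As a rough sketch of what ‘convolutional’ means here, assuming nothing about Google’s actual filters: the network slides small learned kernels over the image, and each kernel responds to a local pattern such as an edge or the stroke of a letter. A convolutional layer stacks many such kernels and learns their weights during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over a 2-D image (valid convolution)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output pixel measures how well the local patch
            # matches the kernel's pattern.
            out[y, x] = np.sum(image[y:y+kh, x:x+kw] * kernel)
    return out

# A hand-written vertical-edge detector; real convolutional
# networks learn kernels like this from data.
edge_kernel = np.array([[ 1, 0, -1],
                        [ 1, 0, -1],
                        [ 1, 0, -1]], dtype=float)

image = np.zeros((8, 8))
image[:, :4] = 1.0                  # left half bright, right half dark
print(conv2d(image, edge_kernel))   # strong response along the edge
```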
See full story on extremetech.com
Image courtesy of extremetech.com