Project Description

Mobile GPU Architectures for Deep Learning Applications

Deep Learning is a subfield of Machine Learning that has shown promising results in practical vision applications. Since Deep Learning is based on neural network models composed of many independent neurons, it often utilizes the many cores of GPUs or Neural Processing Units (NPUs) to increase performance. However, on mobile platforms it is difficult to achieve high performance within limited chip area and power budgets. First, mobile GPUs do not provide optimal performance because they are designed for general-purpose workloads. Second, NPUs achieve even higher performance than GPUs, but they consume considerable extra chip area and power.
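To illustrate the parallelism mentioned above, the following is a minimal CUDA sketch of a naive fully connected layer in which each output neuron is computed by an independent GPU thread. The kernel name, launch configuration, and use of ReLU are hypothetical examples for illustration only, not part of the architecture work described in this project.

__global__ void denseLayerKernel(const float* __restrict__ weights, // [numOutputs x numInputs]
                                 const float* __restrict__ input,   // [numInputs]
                                 const float* __restrict__ bias,    // [numOutputs]
                                 float* __restrict__ output,        // [numOutputs]
                                 int numInputs, int numOutputs)
{
    // One thread computes one output neuron: a dot product plus bias.
    int neuron = blockIdx.x * blockDim.x + threadIdx.x;
    if (neuron >= numOutputs) return;

    float sum = bias[neuron];
    for (int i = 0; i < numInputs; ++i)
        sum += weights[neuron * numInputs + i] * input[i];

    output[neuron] = fmaxf(sum, 0.0f); // ReLU activation (illustrative choice)
}

// Example launch: enough 256-thread blocks to cover all output neurons.
// denseLayerKernel<<<(numOutputs + 255) / 256, 256>>>(dW, dIn, dBias, dOut,
//                                                     numInputs, numOutputs);

Because every neuron's computation is independent, performance scales with the number of available cores, which is why many-core GPUs and NPUs are attractive for such workloads.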

The purpose of our research is to improve mobile GPU microarchitecture for higher performance on Deep Learning applications, in cooperation with compilers and run-time software. This research can help many practical, real-time Deep Learning applications run on endpoint mobile devices such as smartphones, Google Glass, and iWatch.