Resistive CAM Acceleration for Tunable Approximate Computing

Department: Computer Science & Engineering
Faculty Advisor(s): Tajana S. Rosing

Primary Student
Name: Daniel Nikolai Peroni
Phone: 530-391-5887
Grad Year: 2021

Many applications, such as machine learning and data sensing, are statistical in nature and can tolerate some inaccuracy in their computation. This makes approximate computing a viable way to save energy and increase performance by trading accuracy for efficiency. In this paper, we propose a novel tiered approximate floating point multiplier, called CFPU, which significantly reduces energy consumption and improves multiplication performance at a slight cost in accuracy. Floating point multiplication is approximated by replacing the costly mantissa multiplication step with lower-energy alternatives. Depending on accuracy requirements, each operation is processed in a basic approximate mode, an intermediate approximate mode, or on the exact hardware. The proposed CFPU, combining the two levels of approximation with exact hardware, achieves 2.9x energy savings and 2.1x speedup while keeping the average relative error below 5% across eight general OpenCL applications and three machine learning applications. In addition, our results show that, at the same level of accuracy, the proposed CFPU achieves a 2.4x energy-delay product (EDP) improvement compared to state-of-the-art approximate multipliers.
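
The mode selection described above can be illustrated in software. The Python sketch below shows one way a tiered approximate multiplier along these lines could behave: a basic mode that forwards one operand's mantissa when the other operand is close to a power of two, an assumed intermediate mode that drops only the small mantissa cross term, and a fallback to the exact multiplier otherwise. The tune_bits parameter, the thresholds, and the exact form of the intermediate approximation are illustrative assumptions, not the CFPU hardware design.

# A minimal sketch of a tiered approximate FP32 multiply in software, assuming the
# basic mode forwards one operand's mantissa and the intermediate mode drops only
# the mantissa cross term; thresholds and 'tune_bits' are illustrative, not the
# CFPU hardware design.
import math
import struct

def _decompose(x):
    """Split an FP32 value into (sign, biased exponent, 23-bit mantissa field)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

def _compose(sign, exp, mant):
    """Reassemble an FP32 value from its fields."""
    bits = (sign << 31) | ((exp & 0xFF) << 23) | (mant & 0x7FFFFF)
    return struct.unpack('<f', struct.pack('<I', bits))[0]

def approx_mul(a, b, tune_bits=3):
    """Tiered approximate multiply: the basic mode skips the mantissa product, the
    intermediate mode replaces it with a mantissa addition, and operands that fail
    both accuracy checks fall back to the exact multiplier."""
    if a == 0.0 or b == 0.0 or not (math.isfinite(a) and math.isfinite(b)):
        return a * b                          # zeros and specials: exact path
    sa, ea, ma = _decompose(a)
    sb, eb, mb = _decompose(b)
    sign = sa ^ sb
    exp = ea + eb - 127                       # add exponents, drop one bias
    if ea == 0 or eb == 0 or not 0 < exp < 255:
        return a * b                          # subnormal or out of range: exact path

    # Basic mode: if one operand's fraction is nearly zero (the operand is close
    # to a power of two), forward the other operand's mantissa unchanged.
    if mb >> (23 - tune_bits) == 0:
        return _compose(sign, exp, ma)
    if ma >> (23 - tune_bits) == 0:
        return _compose(sign, exp, mb)

    # Intermediate mode (assumed form): (1+fa)*(1+fb) ~= 1 + fa + fb, dropping
    # the fa*fb cross term; accept only if a coarse estimate of that term is
    # below about 2^-tune_bits.
    top_a = ma >> (23 - tune_bits)
    top_b = mb >> (23 - tune_bits)
    if top_a * top_b < (1 << tune_bits):
        msum = ma + mb
        if msum >> 23:                        # 1 + fa + fb reached 2.0
            exp += 1
            msum = (msum - (1 << 23)) >> 1    # renormalize
        return _compose(sign, exp, msum)

    # Exact mode: accuracy requirement not met, use the full multiplier.
    return a * b

For example, approx_mul(1.05, 6.4) takes the basic path because 1.05 is within 2^-3 of a power of two, returning 6.4 instead of 6.72 (about 4.8% relative error), while two operands with large fractions fall through to the exact multiply.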

Industry Application Area(s)
Electronics/Photonics | Internet, Networking, Systems
