We present a new hybrid full adder (FA) design that mixes CMOS and pass-transistor logic styles, aiming at higher speed while keeping power dissipation low, and hence targeting a low power-delay product (PDP). Our proposed FA and seven existing FA designs are simulated in SPICE using a 45 nm low-power model, a standard test bed, and a standard test pattern (56 input transitions), and the simulation results of these eight designs are compared in terms of power dissipation, propagation delay, and PDP. The simulation results show that our proposed FA design has the lowest propagation delay and the lowest PDP across the simulated supply-voltage and frequency ranges.
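As an illustration of the comparison metric, PDP is simply the product of average power dissipation and propagation delay; a minimal sketch follows, using hypothetical numbers that are not taken from the reported simulations.

```python
import math

# Power-delay product (PDP) of a logic cell, illustrated with
# hypothetical numbers (not the reported simulation results).
def pdp(avg_power_w, prop_delay_s):
    """PDP = average power dissipation x propagation delay (joules)."""
    return avg_power_w * prop_delay_s

# Example: 2 uW average power, 50 ps propagation delay.
energy = pdp(2e-6, 50e-12)
print(energy)  # ~1e-16 J, i.e. 0.1 fJ
```

A design can trade power for speed (or vice versa) and still improve PDP, which is why PDP is used as the combined figure of merit here.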
Dr. Mazad Zaveri, Manan Mewada (PhD student)
This project will attempt to create an indigenous, expandable, multi-board (FPGA or Arduino) acceleration/simulation platform for implementing/emulating neural networks/algorithms. The project involves developing an algorithm-specific computational architecture (coded in Verilog HDL) within each board (also referred to as a Processing Node in the figure below), and an algorithm-specific inter-board communication scheme (coded in Verilog HDL). The computational architecture (and the communication scheme) would be programmable in terms of the number of neurons and synapses, the neuron function, and possibly the neural connectivity/topology. Improvements in performance, due to the possible distribution of the algorithm's sub-operations over the FPGA or Arduino boards, and parallelization within each FPGA board (not possible on an Arduino board), could be explored.
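The per-node computation that the architecture would make programmable (synapse count and neuron function) can be sketched behaviourally; this is a Python model for illustration only, since the actual implementation would be in Verilog HDL, and all names and activation choices below are assumptions.

```python
import math

# Behavioural sketch of one Processing Node: a neuron parameterized by
# its synapse weights and a programmable activation function, mirroring
# the configurability the project proposes. Names are illustrative.
def neuron(inputs, weights, bias, activation):
    """Weighted sum of inputs followed by a programmable activation."""
    s = bias + sum(x * w for x, w in zip(inputs, weights))
    return activation(s)

# Two candidate neuron functions (e.g. MLP vs. Hopfield-style nodes).
sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))
sign = lambda s: 1 if s >= 0 else -1

mlp_out = neuron([1.0, -0.5], [0.4, 0.8], 0.1, sigmoid)
hopfield_out = neuron([1.0, -0.5], [0.4, 0.8], 0.1, sign)
```

In the proposed platform, each board would hold several such neurons in hardware, and the inter-board communication scheme would carry the neuron outputs between Processing Nodes.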
Related outcomes/deliverables would be: a code repository of implementations of various neural networks, such as MLP NN, RBF NN, Hopfield, BAM, etc., on multiple boards (FPGA or Arduino), that can be used by SEAS students in other courses, such as Machine Learning.
Dr. Mazad Zaveri, Pal Nikola (BTech student), Dev Mehta (BTech student)