SIRENA aims to build a new class of AI hardware accelerator chips that perform intelligence tasks far more efficiently than state-of-the-art technologies. These chips are designed to power a wide range of applications — from always-on wearables and automotive control systems to cloud-scale AI inference — while drastically reducing the energy consumed in doing so.

Reconfigurable Nonlinear Processing Units (RNPUs) are at the heart of SIRENA. RNPUs are novel computing primitives that have been, and continue to be, studied at the NanoElectronics group. They offer device-level programmable nonlinearity at a fraction of the energy and power costs of an equivalent digital hardware implementation.
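To make the idea of a "programmable nonlinearity" concrete, the toy sketch below models a device as a fixed nonlinear input-output map with tunable control parameters, and "reconfigures" it by searching the control space until the device realizes XOR, a function no single linear unit can compute. Everything here is an assumption for illustration: the `tanh`-based map, the three control parameters, and the random search stand in for the measured physics and tuning procedures of a real RNPU, which this document does not specify.

```python
import numpy as np

# Hypothetical device model: a fixed nonlinear mixing of two data
# inputs (x1, x2) and three control parameters (c1, c2, c3). A real
# RNPU's response is set by device physics, not by this formula.
def device(x1, x2, c1, c2, c3):
    return np.tanh(c1 * x1 + c2 * x2 + c3 * x1 * x2)

# Target functionality: XOR on binary inputs.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 0]

# "Reconfiguration" = tuning the controls of a fixed device until it
# implements the target function, here by plain random search.
rng = np.random.default_rng(0)
best, best_err = None, float("inf")
for _ in range(20_000):
    c = rng.uniform(-5, 5, size=3)
    out = [device(x1, x2, *c) for x1, x2 in inputs]
    err = sum((o - t) ** 2 for o, t in zip(out, targets))
    if err < best_err:
        best, best_err = c, err

# Threshold the tuned device's analog output to read out the logic.
preds = [int(device(x1, x2, *best) > 0.5) for x1, x2 in inputs]
print(preds)
```

The design point of the sketch is that the device itself never changes; only its control parameters do. In practice, tuning such devices is typically framed as an optimization over control voltages (e.g. with evolutionary search or gradient methods on a surrogate model), but the specific procedure used for RNPUs is not described in this text.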

