Within the context of the embedded AI lab, we are planning several projects, including internships and bachelor's and master's assignments. If you are a bachelor's or master's student and interested in participating in such a project, please don't hesitate to contact us to discuss the options.
Available assignments in collaboration with industry
Uploaded August 2025
Description: Energy and neural compute resources in an edge-AI accelerator are inherently scarce. This, in turn, limits the size and capabilities of the models we can execute on them. To make the most of these resources, it is common practice today to train small-footprint quantized models and run them as a single process on bare metal. This, however, restricts the scope, and hence the usefulness, of these accelerators to a single dedicated task at a time.
A desirable approach that has been around since the advent of modern operating systems is to time-multiplex a system's resources across different processes/threads/models. Following this line of thinking, if we can decompose a larger model into smaller (potentially reusable) "neural modules" and equip an accelerator with adaptive scheduling, it should be possible to run more models concurrently and modularly than previously possible, serving several tasks on a virtualized accelerator.
Scope: The goal of this project is to establish feasibility (implement prototype functions) and assess under what specifications, conditions, and capabilities it is possible to enable this modus operandi with acceptable model performance and execution latency.
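To make the idea concrete, the toy Python sketch below time-multiplexes two tasks that share a reusable backbone of neural modules under a simple round-robin scheduler. The module names, latencies, and time quantum are illustrative assumptions, not a description of Innatera's stack.

```python
# Hedged sketch: time-multiplexing several "neural modules" on one accelerator
# via a cooperative round-robin scheduler. Module names and latencies are
# invented for illustration only.
from collections import deque
from dataclasses import dataclass

@dataclass
class NeuralModule:
    name: str
    latency_us: int            # assumed per-invocation cost on the accelerator

@dataclass
class ModelTask:
    name: str
    modules: list              # pipeline of NeuralModule stages (may be shared/reused)
    stage: int = 0             # next stage to run
    elapsed_us: int = 0

def run_time_multiplexed(tasks, quantum_us=200):
    """Interleave module execution across tasks, one time quantum at a time."""
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        budget = quantum_us
        while budget > 0 and task.stage < len(task.modules):
            m = task.modules[task.stage]
            budget -= m.latency_us
            task.elapsed_us += m.latency_us
            task.stage += 1
            print(f"[{task.name}] ran {m.name} (t={task.elapsed_us} us)")
        if task.stage < len(task.modules):
            ready.append(task)         # preempted: requeue for the next slice
        else:
            print(f"[{task.name}] finished after {task.elapsed_us} us")

# A shared, reusable feature extractor plus two task-specific heads.
backbone = [NeuralModule("conv_block_1", 120), NeuralModule("conv_block_2", 150)]
run_time_multiplexed([
    ModelTask("keyword-spotting", backbone + [NeuralModule("keyword_head", 80)]),
    ModelTask("voice-activity", backbone + [NeuralModule("vad_head", 60)]),
])
```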
More about Innatera: https://innatera.com/
A. Yousefzadeh PhD (Amir) Assistant Professor
Uploaded August 2025
Location: High Tech Campus, Eindhoven (on-site)
Compensation: €1,000/month stipend
About Euclyd.ai
Euclyd.ai is a fast-growing startup building next-generation hardware accelerators for Large Language Models (LLMs). We work at the intersection of algorithms, compilers, and digital hardware to make state-of-the-art language models efficient and deployable.
The Opportunity
Join us for a thesis-compatible internship where you'll help push the limits of LLM efficiency on custom hardware. You'll work closely with our engineering team on projects that can be tailored to your background and thesis goals.
Example Projects
- Model Efficiency: Quantization (FP8/INT8/INT4) and structured/unstructured sparsity for LLMs; accuracy–latency–power trade-off studies (a generic quantization sketch follows this list).
- Mapping to Hardware: Tiling, scheduling, memory layouts, and kernel fusion to deploy LLMs on our accelerator.
- Architecture Optimization: Micro-architecture exploration (compute/memory/bandwidth), performance modeling, and bottleneck analysis.
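As a concrete flavor of the first project area, the sketch below performs standard symmetric per-tensor INT8 weight quantization and measures the resulting reconstruction error. It is a generic textbook routine, not Euclyd.ai's quantization flow, and the tensor shape is invented.

```python
# Generic symmetric per-tensor INT8 weight quantization sketch (not Euclyd.ai's flow).
import numpy as np

def quantize_int8(w: np.ndarray):
    scale = np.max(np.abs(w)) / 127.0                       # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)            # stand-in for an LLM weight tile
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"scale={scale:.5f}, mean abs error={err:.5f}")
```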
What We’re Looking For
- Enrolled MSc student (thesis or internship track) in EE/CE/CS or related field
- Solid Python skills; C/C++ or HDL experience is a plus
- Curiosity, a hands-on mindset, and a willingness to relocate to Eindhoven for the internship
Why Euclyd.ai
- Work on real hardware and production-oriented research
- Rapidly growing team with future opportunities
- Scope can be tailored to your CV and thesis requirements
- International, collaborative environment
Practical Info
- Start date: Flexible
- Duration: Aligns with your university’s thesis/internship requirements
- Visa/agreements: We can work with your university on the required internship/thesis agreement
How to Apply
Send your recent CV and transcripts (and, if available, a short note on your thesis interests) to Amir Yousefzadeh, a.yousefzadeh@utwente.nl. If you'd like more details about the role or company, feel free to ask; we're happy to share more.
A. Yousefzadeh PhD (Amir) Assistant Professor
Available assignments at our research groups
Uploaded March 2024
This master thesis project explores the potential of dynamic neural networks (DNNs) in enhancing the performance and efficiency of artificial intelligence (AI) applications within embedded systems. Embedded devices, characterized by their limited computational resources, memory, and energy constraints, necessitate innovative approaches to deploy AI solutions effectively. The project will investigate how DNNs can overcome these limitations with their ability to adapt architecture and computational processes based on input data. By dynamically adjusting their complexity, these networks offer a promising solution to maintaining or enhancing AI application performance under stringent resource constraints. The research will focus on developing methodologies for implementing DNNs in embedded systems, evaluating their performance against traditional static neural networks, and understanding the trade-offs in computational efficiency, energy consumption, and model accuracy.
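One common realization of this adaptivity is an early-exit network that skips later layers when an intermediate prediction is already confident. The minimal sketch below is one possible illustration, with invented layer sizes, random weights, and an assumed confidence threshold; it is not the method prescribed by the project.

```python
# Minimal early-exit sketch of a dynamic network: the later (more expensive)
# block is skipped when an intermediate classifier is already confident.
# Layer sizes, weights, and the threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):                        # toy fully connected layer with ReLU
    return np.maximum(x @ w, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W1, W2 = rng.standard_normal((16, 32)), rng.standard_normal((32, 32))
exit1_head, exit2_head = rng.standard_normal((32, 4)), rng.standard_normal((32, 4))

def dynamic_forward(x, threshold=0.9):
    h = layer(x, W1)
    p = softmax(h @ exit1_head)
    if p.max() >= threshold:            # confident enough: stop early, save compute
        return p.argmax(), "early exit (1 block)"
    h = layer(h, W2)                    # otherwise spend the extra compute
    p = softmax(h @ exit2_head)
    return p.argmax(), "full network (2 blocks)"

x = rng.standard_normal(16)
print(dynamic_forward(x))
```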
Required skills
- Python and C programming
- Knowledge about neural networks
A. Yousefzadeh PhD (Amir) Assistant Professor
Uploaded March 2024
A master thesis project on a RISC-V-based neuromorphic processor aims to explore the design, implementation, and evaluation of a novel computing architecture that merges the efficient, open-source RISC-V instruction set architecture with the principles of neuromorphic computing. This project will focus on leveraging the modularity and extensibility of RISC-V to integrate specialized neuromorphic computing modules, which mimic the neural structures and processing mechanisms of the human brain, to achieve high efficiency and low power consumption in tasks related to artificial intelligence and machine learning. The research will encompass the development of a prototype processor, including the design of custom neuromorphic computing extensions for the RISC-V architecture, the simulation of neural network models on this platform, and a comprehensive analysis of its performance, power efficiency, and potential applications in edge computing and IoT devices. This endeavour will contribute to the advancement of neuromorphic computing technologies and demonstrate the versatility and potential of the RISC-V architecture in addressing the growing demands for energy-efficient AI computation.
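As a point of reference for what such neuromorphic extensions typically accelerate, the sketch below models the per-timestep arithmetic of a leaky integrate-and-fire (LIF) neuron in plain Python. The decay, threshold, and inputs are illustrative assumptions, and the code is not tied to any particular RISC-V instruction.

```python
# Reference model of a leaky integrate-and-fire (LIF) neuron update, the kind of
# per-timestep arithmetic a custom neuromorphic RISC-V extension might accelerate.
# Decay, threshold, and the input currents are illustrative assumptions.
import numpy as np

def lif_step(v, input_current, decay=0.9, threshold=1.0):
    """One timestep for a vector of neurons: leak, integrate, fire, reset."""
    v = decay * v + input_current       # leak + integrate
    spikes = v >= threshold             # fire
    v = np.where(spikes, 0.0, v)        # reset the neurons that fired
    return v, spikes

rng = np.random.default_rng(0)
v = np.zeros(8)                         # membrane potentials of 8 neurons
for t in range(20):
    v, spikes = lif_step(v, rng.uniform(0.0, 0.3, size=8))
    if spikes.any():
        print(f"t={t}: neurons {np.flatnonzero(spikes)} spiked")
```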
Required skills
- Hardware design in FPGA
- Knowledge about neural networks
A. Yousefzadeh PhD (Amir) Assistant Professor
Uploaded October 2024
Humans are visually guided animals. As a consequence, tracking the visual attention of a human is as close to mind-reading as it gets. Eye-tracking devices are commonly used for research purposes, e.g. in studies of cognition, marketing, or primate behavior. However, the available solutions are proprietary and less useful than their enormous price tags suggest. For behavioral studies in the wild, only some models offer (limited) wearability.
We can now imagine a compact, stand-alone, self-calibrating eye tracker with robust accuracy in a variety of conditions using inexpensive hardware:
- A mid-res world view camera (WV)
- A low-res eye tracking camera (ET)
- 6DoF acceleration sensor (AC)
- MCU
On the system, the world-view camera feeds a pre-trained generic model for predicting human visual attention (MVA). For example, faces, puppies, and all rapid changes in the peripheral field of vision are very likely to attract visual attention. If WV detects a new face in the field of vision, it can be used as a (probabilistic) calibration point. After collecting enough calibration data, a user-specific model can be trained, using both camera feeds at these natural calibration points as input and producing estimated glance directions. The calibration process can further be accelerated by adding a very inexpensive generic algorithm for eye tracking, for example the Quadbright method used by this project: http://github.com/schmettow/yet.
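A minimal sketch of this self-calibration loop is given below; the function names, the linear gaze model, and the confidence threshold are assumptions made for illustration rather than the project's prescribed design.

```python
# Hedged sketch of the self-calibration loop: when the world-view attention
# model flags a likely target (e.g. a new face), the current eye-camera
# features and the target direction are stored as a probabilistic calibration
# pair; a user-specific gaze model is then fit by weighted least squares.
# All names, shapes, and thresholds are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
calib_X, calib_y, calib_w = [], [], []      # eye features, gaze targets, confidences

def on_frame(eye_features, attention_target, confidence, min_conf=0.8):
    """Collect a calibration pair when the attention model is confident enough."""
    if confidence >= min_conf:
        calib_X.append(eye_features)
        calib_y.append(attention_target)    # target direction in world-view coordinates
        calib_w.append(confidence)

def fit_gaze_model():
    """Weighted least-squares fit: eye-camera features -> 2D glance direction."""
    X, y = np.asarray(calib_X), np.asarray(calib_y)
    w = np.sqrt(np.asarray(calib_w))[:, None]
    coeffs, *_ = np.linalg.lstsq(X * w, y * w, rcond=None)
    return coeffs                           # the user-specific mapping

# Simulated calibration session: 50 frames with a hidden "true" linear mapping.
true_map = rng.standard_normal((6, 2))
for _ in range(50):
    feats = rng.standard_normal(6)          # stand-in for low-res eye-camera features
    on_frame(feats, feats @ true_map, confidence=rng.uniform(0.5, 1.0))

model = fit_gaze_model()
print("fit error:", np.abs(model - true_map).max())
```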
Interested in the project? Ask m.schmettow@utwente.nl
General contact person: Sebastian Bunda