Resource-efficient neural networks

Publication year: 2026   |   Status: open

Today, solutions based on neural networks and machine learning outperform classical algorithmic approaches in accuracy and robustness on many problems. In practical applications, however, their high computational resource requirements are often a significant limitation, both during training (e.g. transformer-based architectures) and during inference (e.g. real-time object detection in ADAS systems).

The general trend of development has largely bypassed the reduction of computational resources; an optimized, resource-efficient neural-network-based solution, however, can significantly reduce cost, enable deployment on weaker hardware, or even make the application more robust on the same hardware (e.g. through ensemble methods). It should also be taken into account that most neural-network-based solutions perform highly redundant computations, so their memory and computational requirements can be reduced effectively by quantization, pruning, and sparsity-inducing regularization.
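A minimal PyTorch sketch of one such combination, an L1 (sparsity-inducing) penalty during training followed by magnitude-based pruning, is given below; the layer sizes, the penalty weight, and the target sparsity are illustrative assumptions, not values taken from the topic description.

```python
import torch
import torch.nn as nn

# Toy fully connected model; the layer sizes are illustrative only.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
criterion = nn.CrossEntropyLoss()
l1_lambda = 1e-4  # assumed strength of the sparsity-inducing penalty


def train_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    # The L1 penalty drives many weights towards exactly zero,
    # which makes the later magnitude-based pruning far less harmful.
    l1_penalty = sum(p.abs().sum() for p in model.parameters())
    (loss + l1_lambda * l1_penalty).backward()
    optimizer.step()
    return loss.item()


def magnitude_prune(model, sparsity=0.8):
    """Zero out the smallest-magnitude weights so that roughly `sparsity`
    fraction of every Linear weight matrix becomes zero."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear):
                w = module.weight
                k = max(1, int(sparsity * w.numel()))
                threshold = w.abs().flatten().kthvalue(k).values
                module.weight.mul_((w.abs() > threshold).float())


# Illustrative usage with random data standing in for a real dataset.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
train_step(x, y)
magnitude_prune(model)
```

Structured variants (pruning whole neurons, channels, or attention heads) follow the same pattern, but zero out entire rows or filters instead of individual weights.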

Tasks that can be implemented:

The student can also come up with their own ideas, but, without aiming for completeness, here are a few specific tasks (short illustrative code sketches for some of them follow the list):

  • Pruning the network during training using specific regularization terms
  • Constructing a quantized architecture during training for a predetermined computational precision (quantization-aware training)
  • Knowledge-distillation-based architecture regularization
  • Attention layers and accelerating the training of sequential networks (with adaptive selection of which attention computations are performed).
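For the quantization task, a minimal sketch of quantization-aware training follows: the weights are passed through a simulated ("fake") uniform quantizer in the forward pass, while a straight-through estimator lets gradients flow in the backward pass. The bit widths, layer sizes, and the per-tensor min-max scaling are assumptions made for the example.

```python
import torch
import torch.nn as nn


class FakeQuantize(torch.autograd.Function):
    """Simulated uniform quantization to `num_bits` bits; the backward
    pass is a straight-through estimator (gradients pass unchanged)."""

    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmax = 2 ** num_bits - 1
        scale = (x.max() - x.min()).clamp(min=1e-8) / qmax
        zero_point = x.min()
        q = torch.round((x - zero_point) / scale)
        return q * scale + zero_point

    @staticmethod
    def backward(ctx, grad_output):
        # Ignore the non-differentiable rounding: pass the gradient through.
        return grad_output, None


class QuantLinear(nn.Linear):
    """Linear layer whose weights are fake-quantized in the forward pass,
    so training produces weights that survive low-precision inference."""

    def __init__(self, in_features, out_features, num_bits=8):
        super().__init__(in_features, out_features)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = FakeQuantize.apply(self.weight, self.num_bits)
        return nn.functional.linear(x, w_q, self.bias)


# Illustrative usage with 4-bit weights.
model = nn.Sequential(QuantLinear(784, 256, num_bits=4), nn.ReLU(),
                      QuantLinear(256, 10, num_bits=4))
out = model(torch.randn(32, 784))
```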
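For the knowledge distillation item, a sketch of the standard soft-target distillation loss: the student is trained both on the hard labels and on the teacher's softened output distribution. The temperature and the mixing weight are illustrative values.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Mix the usual cross-entropy with hard labels and a KL term that
    pulls the student's softened predictions towards the teacher's."""
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps the gradient magnitude comparable to the CE term.
    kd = F.kl_div(soft_student, soft_teacher, log_target=True,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce


# Illustrative call with random logits standing in for real model outputs.
s = torch.randn(32, 10, requires_grad=True)
t = torch.randn(32, 10)
y = torch.randint(0, 10, (32,))
distillation_loss(s, t, y).backward()
```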
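For the attention-related item, one simple form of adaptive selection is per-query top-k masking of the attention scores. In this sketch the full score matrix is still computed and only then sparsified, so it illustrates the selection idea rather than a memory-efficient implementation; the kept fraction and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F


def topk_attention(q, k, v, keep=0.25):
    """Scaled dot-product attention that, for every query, keeps only the
    highest-scoring `keep` fraction of keys and masks out the rest."""
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5   # (batch, Lq, Lk)
    n_keep = max(1, int(keep * scores.size(-1)))
    top = scores.topk(n_keep, dim=-1)
    # Start from an all -inf mask and copy back only the selected scores,
    # so softmax assigns exactly zero weight to the discarded keys.
    masked = torch.full_like(scores, float("-inf"))
    masked.scatter_(-1, top.indices, top.values)
    return F.softmax(masked, dim=-1) @ v


# Illustrative shapes: batch of 2, 128 tokens, 64-dimensional heads.
q, k, v = (torch.randn(2, 128, 64) for _ in range(3))
out = topk_attention(q, k, v)
```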

Required competencies:

Dedication and motivation to understand the behavior of neural networks in detail. Practical experience with Python machine learning libraries. Interest in the mathematical apparatus (numerical optimization methods, analysis, statistics) and a good tolerance of failure (the task is not merely parameterizing an existing solution).

What does the topic offer to the student:

In addition to serving as the basis of a thesis or diploma work, the topic is also suitable for presenting its results at a TDK conference. Since the tasks addressed are also relevant to many practical problems, participation in academic-industrial collaborations and presentations at international scientific forums are also possible. The student can benefit greatly even without achieving a satisfactory result on the specific chosen task, since they will thoroughly learn tools that are becoming increasingly important in the field of computing.

Dániel Hadházi

research assistant