Expressivity

Deep learning models have transformed numerous application domains, yet their remarkable adaptability and success raise fundamental theoretical questions. A key problem is understanding the expressivity of neural networks: which functions a given architecture can represent or approximate, and at what cost. To address this, we study the expressive power of various architectures and neuronal models, including feedforward, recurrent, self-attention, and spiking neural networks. Our focus is on analyzing their approximation properties and on investigating how factors such as depth, width, and activation functions influence their representational power, with the goal of identifying principles for designing efficient and versatile models.
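
As a small, self-contained illustration of how depth shapes representational power, the following Python sketch (a standard construction in the spirit of Telgarsky, 2016, not code from our group; it assumes only NumPy) composes a one-layer ReLU "tent" map with itself to obtain exponentially many linear pieces from linearly many layers:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # One ReLU layer realizing the tent map on [0, 1]:
    # h(x) = 2x for x <= 1/2 and 2(1 - x) for x > 1/2,
    # written as 2*relu(x) - 4*relu(x - 0.5).
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def sawtooth(x, depth):
    # Composing the tent map `depth` times yields a sawtooth with
    # 2**depth linear pieces: an exponential gain in expressivity
    # from only linearly many layers.
    for _ in range(depth):
        x = hat(x)
    return x

xs = np.linspace(0.0, 1.0, 9)
print(sawtooth(xs, 3))  # already oscillates 4 times on [0, 1] at depth 3

By contrast, a single hidden layer with m ReLU units produces at most m + 1 linear pieces, so matching the depth-d sawtooth exactly requires on the order of 2**d units; this kind of depth separation is among the phenomena studied in this line of research.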

Contact

Do you have questions about our research in this area?

Please do not hesitate to contact us directly: write an e-mail to Manjot Singh, one of our PhD students working on expressivity.

Inquiries from students are very welcome!