Scalable Deep Learning
Type: Bachelor CS
If you are interested, please contact:
Context of the work:
Deep Learning (DL) is a central machine learning area nowadays, and it has proven to be a successful tool for all machine learning paradigms, i.e. supervised learning, unsupervised learning, and reinforcement learning. Still, the scalability of DL models is limited by their many redundant connections. In our previous work [1], we showed that the fully-connected layers of artificial neural networks can be replaced with sparse ones whose connectivity evolves during training. Such sparse and adaptive models are very scalable at no cost in performance. Yet, our work is just a drop in the ocean, and many new research directions can start from it.
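The core idea of the evolutionary connectivity mentioned above can be illustrated with a small sketch (a toy illustration in NumPy of a prune-and-regrow step, not the project's actual implementation; all sizes and the `evolve` helper are hypothetical choices for the example): the weakest connections in a sparse layer are removed and the same number are regrown at random positions, so the sparsity level stays constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse layer: a boolean mask selects which connections exist.
n_in, n_out = 8, 4        # layer dimensions (toy sizes)
density = 0.25            # fraction of connections kept

mask = rng.random((n_in, n_out)) < density
weights = rng.normal(0.0, 0.1, (n_in, n_out)) * mask

def evolve(weights, mask, zeta=0.3, rng=rng):
    """Remove the zeta fraction of weakest active connections
    and regrow the same number at random inactive positions."""
    active = np.flatnonzero(mask)
    n_prune = int(zeta * active.size)
    # indices of the smallest-magnitude active weights
    weakest = active[np.argsort(np.abs(weights.ravel()[active]))[:n_prune]]
    mask.ravel()[weakest] = False
    weights.ravel()[weakest] = 0.0
    # regrow at randomly chosen currently-inactive positions
    inactive = np.flatnonzero(~mask.ravel())
    regrow = rng.choice(inactive, size=n_prune, replace=False)
    mask.ravel()[regrow] = True
    weights.ravel()[regrow] = rng.normal(0.0, 0.1, n_prune)
    return weights, mask

weights, mask = evolve(weights, mask)
print(mask.sum())  # the number of active connections is preserved
```

In a full training loop such an evolution step would run between training epochs, interleaved with ordinary gradient updates on the active weights.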
Short description of the assignment:
Use your imagination, creativity, curiosity, research interests, and technical skills to start one of these directions. Non-exhaustively, you may start from some of our most interesting papers on these topics [1-3], or propose your own topic. Possible topics of research are:
- Advance state-of-the-art in sparse deep learning algorithms (e.g. SET-DBN)
- Fundamental properties of scalable deep neural networks
- Scalability & reinforcement learning
- Scalable deep neural networks applied to your preferred topic.
Possible expected outcomes:
- Finding new insights on the relations between learning in the biological brain and learning in artificial neural networks
- Advancing state-of-the-art in learning algorithms for artificial neural networks
- Obtaining publishable results
Prerequisites:
- Basic Calculus, Probabilities, and Optimization
- Good programming skills (preferably Python)
- Basic understanding (or the willingness to learn) of artificial neural networks
Upon successful completion of this project, the student will have learnt to:
- Work with deep learning and basic neuroscience concepts
- Use widely adopted deep learning libraries, e.g. Keras, TensorFlow
- Interpret the behavior of deep learning models
- Implement artificial neural network models from scratch
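As a flavour of the last outcome, a small multilayer perceptron can be implemented from scratch in a few lines of NumPy (a toy sketch for illustration only; the architecture, learning rate, and XOR task are arbitrary choices, not part of the project description):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy from-scratch MLP trained on XOR with plain gradient descent;
# sigmoid activations and a mean-squared-error loss throughout.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

lr = 1.0
for _ in range(2000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the MSE loss w.r.t. each parameter
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.ravel())  # predictions should approach [0, 1, 1, 0]
```

Replacing the dense weight matrices here with masked sparse ones is exactly the kind of exercise the project builds on.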
[1] D.C. Mocanu, E. Mocanu, P. Stone, P.H. Nguyen, M. Gibescu, A. Liotta: “Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science”, Nature Communications, 2018, https://www.nature.com/articles/s41467-018-04316-3, https://github.com/dcmocanu/sparse-evolutionary-artificial-neural-networks
[2] E. Mocanu, D.C. Mocanu, P.H. Nguyen, A. Liotta, M. Webber, M. Gibescu, J.G. Slootweg: “On-line Building Energy Optimization using Deep Reinforcement Learning”, IEEE Transactions on Smart Grid, 2018
[3] D.C. Mocanu, E. Mocanu: “One-shot learning using mixture of variational autoencoders: a generalization learning approach”, Proc. of the 17th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2018, Stockholm, Sweden