Vacancies in my team

Currently, there are two open positions in my team: one doctoral-student vacancy and one post-doctoral vacancy. A short description of the two positions can be found below:

Post-doctoral vacancy on "resilient and secure delay-critical edge computing"

Apply online here

Deadline: Feb. 29, 2020

The Internet of Things (IoT) envisions an interconnected world where virtually all devices, from cars to wearables, from home appliances to factory machines, from cameras to health-monitoring devices, will be interconnected and continuously share data. The IoT promises to transform the way we live and is a key enabler for applications such as intelligent transportation systems, automated supply-chain management, smart cities, smart grids, and smart farming. The success of these technologies and the full realization of the IoT depend on the ability to offload latency-critical, computationally demanding tasks to third-party infrastructure. Edge computing is a new paradigm that promises to achieve this by moving computational power from the cloud closer to where data is generated, pooling the resources available at the network edge.

Within this research project, you will work on novel methods to design resilient, low-latency, secure, and privacy-preserving edge computing schemes.

PhD vacancy on "generalization bounds for deep neural networks: design and insights"

Apply online here

Deadline: March 6, 2020

Deep-learning algorithms have dramatically improved the state of the art in many machine-learning problems, including computer vision, natural language processing, and audio recognition. However, there is no satisfactory mathematical theory that adequately explains their success. Clearly, it is unacceptable to use such "black box" methods in any application for which performance guarantees are critical (e.g., traffic-safety applications).

DNNs consist of several hidden layers comprising many nodes. The nodes are where computations happen: the inputs to a node are weighted by coefficients that amplify or dampen them, and the result is passed through a nonlinear activation function. The coefficients (weights) are optimized, e.g., through stochastic gradient descent (SGD), during a training phase in which labeled inputs are provided to the network, and the labels produced by the network are compared with the ground truth using a suitably chosen loss function (see the sketch below). What features of deep neural networks then allow them to learn "general rules" from training sets? What class of functions can they learn? How many resources (e.g., layers, coefficients) do they need?
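As a concrete illustration of this training loop, here is a minimal sketch in Python/NumPy of a one-hidden-layer network trained with SGD on a toy regression task. The task, network size, learning rate, and squared-error loss are illustrative choices, not part of the project description.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task (illustrative): learn y = sin(x) from noisy samples.
x = rng.uniform(-3.0, 3.0, size=(256, 1))
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)

# One hidden layer with 32 nodes; the weights are the coefficients
# that amplify or dampen each input before the nonlinearity.
W1 = rng.normal(scale=0.5, size=(1, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1))
b2 = np.zeros(1)

def forward(xb):
    h = np.tanh(xb @ W1 + b1)   # weighted inputs -> nonlinear activation
    return h, h @ W2 + b2       # hidden activations and network output

lr, batch = 0.05, 32
for step in range(2000):
    idx = rng.integers(0, len(x), size=batch)
    xb, yb = x[idx], y[idx]
    h, out = forward(xb)

    # Squared-error loss compares the network output with the ground truth;
    # err is proportional to d(loss)/d(out).
    err = (out - yb) / batch
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)  # backpropagate through tanh
    gW1 = xb.T @ dh
    gb1 = dh.sum(axis=0)

    # SGD update of the coefficients (weights).
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
print("final MSE:", float(np.mean((pred - y) ** 2)))
```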

This project aims to increase our theoretical understanding of DNNs through the development of novel information-theoretic bounds on the attainable generalization error. We will also explore how such bounds can guide the practical design of DNNs.
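The project description does not single out a specific bound, but one representative example of an information-theoretic generalization bound is the mutual-information bound of Xu and Raginsky (2017):

```latex
% If the loss \ell(w, Z) is \sigma-sub-Gaussian under Z \sim \mu for every w,
% the expected generalization error of an algorithm mapping the training set
% S = (Z_1, \dots, Z_n) of n i.i.d. samples to weights W satisfies
\left| \mathbb{E}\!\left[ L_\mu(W) - L_S(W) \right] \right|
  \le \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
% where L_\mu(W) is the population risk, L_S(W) the empirical risk on S,
% and I(S; W) the mutual information between the data and the learned weights.
```

Intuitively, the less the learned weights reveal about the particular training set, the smaller the gap between training and test performance.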

This position is part of the Wallenberg AI, Autonomous Systems and Software Program (WASP), a large fundamental and applied research initiative across multiple Swedish universities.

Note: this position is also part of a larger recruitment initiative at Chalmers within the field of mathematical methods for artificial intelligence. A total of six PhD students will be recruited within this initiative.