Open Postdoc Positions in Bandits and Reinforcement Learning at INRIA Lille

Posted by Rebecca Martin on Wed, 09/06/2010 - 00:00

The SEQUEL (Sequential Learning) project team of INRIA Lille, France (http://sequel.lille.inria.fr/) is seeking to appoint several Postdoctoral Fellows. We welcome applicants with a strong mathematical background who are interested in the theory and applications of reinforcement learning and bandit algorithms.
The research will be conducted under the supervision of Remi Munos, Mohammad Ghavamzadeh, and/or Daniil Ryabko, depending on the chosen topics.

The positions are research only and are for one year, with the possibility of extension.
The starting date is flexible, from Fall 2010 to Spring 2011.

INRIA is France's leading institution in Computer Science, employing over 2800 scientists, around 250 of whom are in Lille. Lille is the capital of the north of France, a metropolis of 1 million inhabitants with excellent train connections to Brussels (30 min), Paris (1h), and London (1h30).
SEQUEL is a dynamic team of over 25 researchers (including PhD students) that covers several aspects of machine learning, from theory to applications, including statistical learning, reinforcement learning, and sequential learning.

The positions will be funded by the EXPLO-RA project (Exploration-Exploitation for efficient Resource Allocation), a project in collaboration with ENS Ulm (Gilles Stoltz), Ecole des Ponts (Jean Yves Audibert), INRIA team TAO (Olivier Teytaud), Univ. Paris Descartes (Bruno Bouzy), and Univ. Paris Dauphine (Tristan Cazenave).
See http://sites.google.com/site/anrexplora/ for some of our activities.

Possible topics include:
- In reinforcement learning: RL in high dimensions; sparse representations; use of random projections in RL.
- In bandits: bandit algorithms in complex environments; contextual bandits; bandits with dependent arms; bandits with infinitely many arms; links between the bandit problem and other learning problems.
- In hierarchical bandits / Monte-Carlo Tree Search: analysis and development of MCTS / hierarchical bandit algorithms; planning with MCTS for solving MDPs.
- In statistical learning: compressed learning; use of random projections; links with compressed sensing.
- In sequential learning: sequential prediction of time series.

Candidates must hold a Ph.D. degree (by the starting date of the position) in machine learning, statistics, or a related field, possibly with a background in reinforcement learning, bandits, or optimization.

To apply, please send a CV and a proposed research topic to remi.munos(at)inria.fr, mohammad.ghavamzadeh(at)inria.fr, or daniil.ryabko(at)inria.fr.

If you are planning to attend ICML / COLT this year, we could set up an appointment there.