
Gabriele Tiboni

Ellis PhD student at Politecnico di Torino and TU Darmstadt.
I'm interested in reinforcement learning and computer vision applied to the field of robotics.


Jan 2024 Our paper "Domain Randomization via Entropy Maximization" (DORAEMON) has been accepted at ICLR 2024! We recommend you give it a read if you're interested in sim-to-real transfer of RL policies.
Oct 2023 I presented two first-author papers at IROS 2023! Check out (1) PaintNet, on Robotic Spray Painting, and (2) our work on Domain Randomization for Soft Robots.
Jun 2023 I attended the Reinforcement Learning Summer School (RLSS 2023) in Barcelona, and won the social beach volleyball tournament!
May 2023 Our paper "DROPO: Sim-to-Real Transfer with Offline Domain Randomization" has been accepted to the journal Robotics and Autonomous Systems.
Feb 2023 Started a 9-month Ellis PhD visit at TU Darmstadt, under the co-supervision of Prof. Jan Peters, Prof. Georgia Chalvatzaki and Prof. Carlo D'Eramo.
Nov 2022 I attended the National PhD-AI Fall School, covering topics such as Federated Learning, Domain Adaptation, and Kernel Methods (the latter taught by Prof. Lorenzo Rosasco).
Sep 2022 Our paper "Online vs. Offline Adaptive Domain Randomization Benchmark" will be published in the Springer Proceedings in Advanced Robotics 2023.
Jul 2022 I attended the International Computer Vision Summer School (ICVSS 2022) and won the Reading Group Competition led by Prof. Stefano Soatto.


Hi there! My name is Gabriele. I'm enrolled in the National PhD AI programme at Politecnico di Torino, complemented by the Ellis PhD & Post-Doc programme and the ELIZA School of Excellence programme. I'm supervised by Prof. Tatiana Tommasi and co-supervised by Prof. Jan Peters.

My areas of interest include reinforcement learning (RL), computer vision and robotics. Recently, I've been working on transferring RL robot policies from simulation to the real world.
The main goal of my research is to allow next-generation robots to be trained safely and efficiently through learning-based algorithms.


Domain Randomization via Entropy Maximization
Gabriele Tiboni, Pascal Klink, Jan Peters, Tatiana Tommasi, Carlo D'Eramo, Georgia Chalvatzaki
International Conference on Learning Representations (ICLR)
Paper / Code / Website

DROPO: Sim-to-Real Transfer with Offline Domain Randomization
Gabriele Tiboni, Karol Arndt, Ville Kyrki
Robotics and Autonomous Systems, 104432
Paper / Code / Website

Domain Randomization for Robust, Affordable and Effective Closed-loop Control of Soft Robots
Gabriele Tiboni, Andrea Protopapa, Tatiana Tommasi, Giuseppe Averta
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Paper / Code / Website

PaintNet: Unstructured Multi-Path Learning from 3D Point Clouds for Robotic Spray Painting
Gabriele Tiboni, Raffaello Camoriano, Tatiana Tommasi
2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Paper / Code / Website

Online vs. Offline Adaptive Domain Randomization Benchmark
Gabriele Tiboni, Karol Arndt, Giuseppe Averta, Ville Kyrki, Tatiana Tommasi
Springer Proceedings in Advanced Robotics (SPAR), 2023
Paper / Code / Website

Towards Safe and Efficient Transfer of Robot Policies from Simulation to Real World
Gabriele Tiboni, Karol Arndt, Ville Kyrki, Barbara Caputo
M.Sc. Thesis, 2021
Thesis


RL course project presentation

March 11th, 2024 ・ 11 mins

Presentation of the RL course project to the students of the "Machine Learning and Deep Learning" course at Politecnico di Torino.

Intro to RL for Supervised Learners (Seminar)

July 27th, 2023 ・ 38 mins

A high-level introduction to Reinforcement Learning concepts, tailored to an audience familiar with supervised learning. Recording of a weekly seminar at Politecnico di Torino.

Practical session on Policy Gradient (Reinforcement Learning course @ TU Darmstadt)

June 22nd, 2023 ・ 1 hour 13 mins

Recording of a practical session on Policy Gradient (PG) for students of the Reinforcement Learning course @ TU Darmstadt. Derivations of the policy gradient and the optimal baseline are shown, together with practical examples of running PG algorithms with the MushroomRL library.
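To give a flavor of what the session covers, here is a minimal REINFORCE sketch with a baseline. This is not taken from the session materials: the two-armed bandit setup, learning rate, and running-mean baseline are illustrative assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-armed bandit: action 0 yields reward 0.2, action 1 yields 1.0.
REWARDS = np.array([0.2, 1.0])

def softmax(logits):
    z = logits - logits.max()   # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

# REINFORCE update: theta += lr * grad log pi(a) * (R - b),
# where b is a baseline that reduces variance without adding bias.
theta = np.zeros(2)   # policy logits, one per action
baseline = 0.0        # running mean of rewards (a simple baseline choice)
lr = 0.1
for step in range(500):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = REWARDS[a]
    # Gradient of log softmax w.r.t. the logits: one_hot(a) - probs
    grad_log_pi = -probs.copy()
    grad_log_pi[a] += 1.0
    theta += lr * grad_log_pi * (r - baseline)
    baseline += 0.05 * (r - baseline)   # exponential moving average of reward

# After training, the policy should concentrate on the higher-reward action.
print(softmax(theta))
```

The baseline subtraction is exactly the mechanism whose optimal form is derived in the session: any action-independent baseline leaves the gradient estimate unbiased, but a well-chosen one can substantially reduce its variance.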