About Me

Safe robot learning demos

I'm Shangding Gu

I am currently a postdoc at UC Berkeley, USA, and a guest researcher at the Technical University of Munich. I am fortunate to work with Prof. Costas Spanos, Prof. Edward Lee, and Prof. Ming Jin. Before coming to Berkeley, I pursued my PhD under the supervision of Prof. Alois Knoll at the Technical University of Munich. I had a great time visiting Prof. Jan Peters' lab from September 2022 to December 2022, and afterwards did a research internship at Microsoft from April 2023 to August 2023.

My research focuses on developing artificial intelligence methods and models, with a special interest in safe reinforcement learning, planning, foundation models, and robotics. My goal is to enable robots to learn, reason, and plan, and to work in support of people. See the Safe RL YouTube Channel.

I support slow science. I am a student of mind, nature, and cosmos. If you are interested in my research topics, please feel free to contact me, indicating your background and skills. Outside of research, I enjoy playing the guitar, reading, running, swimming, and playing badminton with friends.
Email: shangding.gu[at]tum.de
Location: Berkeley, USA

Research Interests

Safe/Robust Reinforcement Learning; Reinforcement Learning Theory; AI Safety.
Foundation Models; Motion Planning; Autonomous Driving; Robotics (e.g., arm robotics and marine robotics).


News

04.2024: Our paper on safe learning for real-world robot control was accepted by IEEE Transactions on Industrial Informatics (IF: 12.3)
12.2023: Our paper on balancing safety and reward in safe RL was accepted by AAAI 2024 (oral)
10.2023: Our paper on a safe human-robot learning framework was accepted by Frontiers in Neurorobotics
08.2023: Our paper on RL for autonomous driving in parking lots was accepted by IEEE Transactions on Cybernetics (IF: 11.8)
06.2023: Our paper on offline RL with uncertain action constraints was accepted by IEEE Transactions on Cognitive and Developmental Systems
03.2023: Our paper on safe multi-robot learning was accepted by the journal Artificial Intelligence (IF: 14.4)
12.2022: We launched a long-term online seminar on safe reinforcement learning. Every month, we invite at least one speaker to share cutting-edge research with RL researchers and students (each speaker has about one hour to present their research). We believe this seminar helps promote research on safe reinforcement learning. For details, please see the Seminar Homepage
11.2022: Gave an invited talk on safe RL at the RL China community
10.2022: Gave an invited talk on safe RL at Prof. Jan Peters' lab
09.2022: We organized the 1st Safe RL Workshop @ IEEE MFI 2022

Recent Works

Gu, S., Shi, L., Ding, Y., Knoll, A., Spanos, C., Wierman, A., & Jin, M. (2024). Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation. arXiv preprint arXiv:2405.20860.


Zheng, Z., & Gu, S. (2024). Safe Multi-Agent Reinforcement Learning with Bilevel Optimization in Autonomous Driving. arXiv preprint arXiv:2405.18209.

[arXiv], [Code]

Gu, S., Sel, B., Ding, Y., Wang, L., Lin, Q., Knoll, A., & Jin, M. (2024). Safe and Balanced: A Framework for Constrained Multi-Objective Reinforcement Learning. arXiv preprint arXiv:2405.16390.


Supervised Students

Kathleen Baur (Now at Cornell University)
Mhamed Jaafar (Now at Brainlab)
Zheng Zhi (Now at Agile Robots AG)
Jiarui Zou (Now at TUM)
Manxi Sun (Now at TUM)
Donghao Song (Collaborated with Derui Zhu)

My Services


I serve as a reviewer for several journals and conferences, e.g., JMLR, IEEE TASE, IEEE TVT, IEEE TNNLS, IEEE TITS, IEEE TAI, ICML, NeurIPS, ICLR, AAAI, and ICRA.


I previously served as head of the education support department of the student union and participated in teaching activities for nearly two years.