- 🔥🔥🔥 [June 30, 2025] We show that selecting more uniformly distributed data increases the minimum pairwise distance, which provably reduces neural network approximation error and leads to better training efficiency beyond the NTK regime (see the illustrative sketch at the end of this section). Code and paper are available at: https://github.com/SafeRL-Lab/data-uniformity and https://arxiv.org/pdf/2506.24120.
- 🔥🔥🔥 [May 19, 2025] We released M4R, a benchmark for evaluating massive multimodal understanding and reasoning in open space. Paper, code, dataset, and leaderboard are available at: https://accident-bench.github.io/ and https://arxiv.org/pdf/2509.26636.
- 🔥🔥🔥 [May 07, 2025] We released RLBenchNet, a systematic benchmarking suite for evaluating neural network architectures in reinforcement learning. Code and paper are available at: https://github.com/SafeRL-Lab/BenchNetRL and https://arxiv.org/pdf/2505.15040.
- 🔍 We focus on the theory and practice of machine learning, with applications to robotics and foundation models.
- 🌱 We host workshops and seminars on safe AI and robot learning; researchers and students interested in these topics are welcome to join! Recordings are available on the AI Agent Research YouTube Channel. For more information, visit the Agentic AI Frontier Seminar, the Safe RL Seminar Homepage, and the Safe RL Workshop Homepage.
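The sketch below illustrates the data-uniformity idea from the June 30 news item: greedily choosing a subset by farthest-point sampling tends to yield a larger minimum pairwise distance than random selection of the same size. This is only a toy illustration under assumed names and data (`farthest_point_subset`, `min_pairwise_distance`, synthetic Gaussian features); it is not the exact selection method or API from the paper or the data-uniformity repository.

```python
import numpy as np

def min_pairwise_distance(X):
    """Smallest Euclidean distance between any two distinct rows of X."""
    diffs = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)  # ignore self-distances
    return d.min()

def farthest_point_subset(X, k, seed=0):
    """Greedy farthest-point sampling: each new point maximizes its
    distance to the points already selected, spreading the subset out."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]
    dist_to_chosen = np.linalg.norm(X - X[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist_to_chosen.argmax())
        chosen.append(nxt)
        dist_to_chosen = np.minimum(dist_to_chosen,
                                    np.linalg.norm(X - X[nxt], axis=1))
    return X[chosen]

# Toy comparison: uniform-ish subset vs. random subset of the same size.
X = np.random.default_rng(1).normal(size=(2000, 16))
uniform_subset = farthest_point_subset(X, 200)
random_subset = X[np.random.default_rng(2).choice(len(X), 200, replace=False)]
print("min pairwise distance (farthest-point):", min_pairwise_distance(uniform_subset))
print("min pairwise distance (random):        ", min_pairwise_distance(random_subset))
```

On this toy data the farthest-point subset typically reports a noticeably larger minimum pairwise distance than the random subset, which is the quantity the announced result ties to the approximation-error bound.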