Mehran Ghafarian Tamizi

  • MSc (University of Tehran, 2020)

Notice of the Final Oral Examination for the Degree of Doctor of Philosophy

Topic

Towards Generalizable Motion Planning: Learning-Based Frameworks for Efficient and Safe Trajectory Generation

Department of Electrical and Computer Engineering

Date & location

  • Friday, November 7, 2025

  • 12:30 P.M.

  • Virtual Defence

Reviewers

Supervisory Committee

  • Dr. Homayoun Najjaran, Department of Electrical and Computer Engineering, UVic (Supervisor)

  • Dr. Elizabeth Croft, Department of Electrical and Computer Engineering, UVic (Member)

  • Dr. Hong-Chuan Yang, Department of Electrical and Computer Engineering, UVic (Member)

  • Dr. Teseo Schneider, Department of Computer Science, UVic (Outside Member) 

External Examiner

  • Dr. Kamal Gupta, Department of Engineering Science, Simon Fraser University 

Chair of Oral Examination

  • Dr. Timothy Iles, School of Pacific and Asian Studies, UVic


Abstract

Robotic motion planning remains a fundamental challenge in industrial automation, with manipulators offering a clear example of the need for real-time, collision-free, and safe trajectory generation. Traditional planners often face trade-offs among optimality, adaptability, and computational efficiency, limiting their applicability in cluttered and high-dimensional industrial environments. Furthermore, most learning-based planners suffer from poor generalization, requiring retraining when deployed in new scenes or on different robot platforms. This thesis presents two learning-based frameworks designed to address these challenges. First, we introduce the Path Planning and Collision Checking Network (PPCNet), an end-to-end neural architecture that combines a waypoint generator with a learned collision checker to enable fast, safe, and reliable planning in structured environments. PPCNet is validated in both simulated and real-world bin-picking tasks, demonstrating substantial speed-ups over classical planners while maintaining path quality. To overcome the generalization limitations of PPCNet, we propose Generalizable and Adaptive Diffusion-Guided Environment-aware Trajectory generation (GADGET), a conditional diffusion-based motion planner guided by control barrier functions. GADGET leverages voxel-based scene encoding and goal conditioning to generate safe trajectories across previously unseen environments and robotic arms without retraining. The integration of barrier function-based guidance ensures robust collision avoidance during trajectory generation. Extensive experiments demonstrate that both frameworks achieve real-time planning performance and high success rates, with GADGET offering strong generalization to novel settings. This work highlights the potential of combining deep generative models with adaptable design to create scalable and broadly generalizable motion planners, capable of transferring across diverse environments and robot platforms with minimal modification.
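
For readers unfamiliar with barrier-guided diffusion sampling, the sketch below illustrates the general idea in a deliberately simplified form: at each reverse-diffusion step, a learned denoiser refines the noisy trajectory, and a barrier term nudges any unsafe waypoint away from an obstacle. Every name here (guided_denoise_step, the spherical barrier, the placeholder denoiser) is an illustrative assumption and not the implementation described in the thesis.

```python
import numpy as np

# Hypothetical signed-distance-style barrier: h(q) >= 0 means waypoint q is safe.
def barrier(q, obstacle_center, radius):
    return np.linalg.norm(q - obstacle_center) - radius

def barrier_gradient(q, obstacle_center):
    # Gradient of h with respect to q points directly away from the obstacle.
    diff = q - obstacle_center
    return diff / (np.linalg.norm(diff) + 1e-8)

def guided_denoise_step(traj, denoiser, t, obstacle_center, radius, guide_scale=0.1):
    """One reverse-diffusion step with a simple barrier-based correction (illustrative only)."""
    # The learned denoiser refines the noisy trajectory estimate at step t.
    traj = denoiser(traj, t)
    # Nudge any waypoint that violates the barrier back toward the safe set.
    for i, q in enumerate(traj):
        h = barrier(q, obstacle_center, radius)
        if h < 0.0:
            traj[i] = q + guide_scale * (-h) * barrier_gradient(q, obstacle_center)
    return traj

# Toy usage with a stand-in denoiser that simply shrinks the noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trajectory = rng.normal(size=(32, 3))   # 32 waypoints in 3-D task space
    denoiser = lambda x, t: 0.9 * x         # placeholder for a trained model
    for t in reversed(range(10)):
        trajectory = guided_denoise_step(trajectory, denoiser, t,
                                         obstacle_center=np.zeros(3), radius=0.5)
```

In practice, a planner such as GADGET would condition the denoiser on a scene encoding and goal, and apply safety guidance derived from control barrier functions over the full arm geometry; the loop above only conveys how guidance can be interleaved with the sampling steps.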