

PhD defense 2/2: Planning, Execution, Representation, and Their Integration for Multiple Moving Agents


Transcript

  1. 26th Jan. 2023 PhD Defense (Final). Planning, Execution, Representation, and Their Integration for Multiple Moving Agents. Keisuke Okumura, Tokyo Institute of Technology (東京工業大学), Japan. http://bit.ly/3J96UjM
  2. /56 2 Swarm { is, will be } necessary everywhere: logistics (YouTube/Mind Blowing Videos), manufacturing (YouTube/WIRED), entertainment (YouTube/Tokyo 2020).
  3. /56 3 Navigation for a Team of Agents: objective-1 Representation (common knowledge?), objective-2 Planning (huge search space; who plans? cooperation?), and Execution ((increased) uncertainty).
  4. /56 4 Research Summary — integration of: Representation [AAMAS-22+], building effective roadmaps; Execution [AAAI-21, ICRA-21, ICAPS-22*, IJCAI-22, AAAI-23+], overcoming uncertainties (*Best Student Paper Award); Planning [IJCAI-19, IROS-21, ICAPS-22*, AIJ-22, AAAI-23+], ≥1000 agents within 1 sec.
  5. /56 6 Dissertation Outline. 1-3. Introduction, Preliminaries, and Background. Part I. Planning: 4. Short-Horizon Planning for MAPF; 5. Short-Horizon Planning for Unlabeled-MAPF; 6. Short-Horizon Planning Guides Long-Horizon Planning; 7. Improving Solution Quality by Iterative Refinement. Part II. Execution: 8. Online Planning to Overcome Timing Uncertainties; 9. Offline Planning to Overcome Timing Uncertainties; 10. Offline Planning to Overcome Crash Faults. Part III. Representation: 11. Building Representation from Learning; 12. Building Representation while Planning. 13. Conclusion and Discussion.
  6. /56 7 Recap: Quick Multi-Agent Path Planning — discrete (Part I. Planning: 4. Short-Horizon Planning for MAPF; 6. Short-Horizon Planning Guides Long-Horizon Planning) and continuous (Part III. Representation: 11. Building Representation from Learning; 12. Building Representation while Planning).
  7. /56 8 Dissertation Outline — Part I. Planning (discrete): 4. Short-Horizon Planning for MAPF; 5. Short-Horizon Planning for Unlabeled-MAPF; 6. Short-Horizon Planning Guides Long-Horizon Planning; 7. Improving Solution Quality by Iterative Refinement.
  8. /56 9 MAPF: Multi-Agent Path Finding. Given: agents (starts), a graph, and goals. Solution: paths without collisions. Optimization is intractable for various criteria [Yu+ AAAI-13, Ma+ AAAI-16, Banfi+ RA-L-17, Geft+ AAMAS-22].
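As an illustration of the solution condition above (paths on a graph without vertex or edge collisions), a minimal validity check can be sketched as follows; `is_valid_mapf_solution` and `neighbors` are hypothetical names for this sketch, not code from the dissertation:

```python
# A minimal sketch (not the author's code) of validating a MAPF solution
# on a discrete graph: each path is a list of vertices, one per timestep.

def is_valid_mapf_solution(starts, goals, paths, neighbors):
    """Check starts/goals, move legality, and vertex/edge (swap) conflicts.

    `neighbors(v)` returns the vertices adjacent to v; waiting is allowed.
    """
    T = max(len(p) for p in paths)
    # pad: agents wait at their goals after finishing
    padded = [p + [p[-1]] * (T - len(p)) for p in paths]
    for p, s, g in zip(padded, starts, goals):
        if p[0] != s or p[-1] != g:
            return False
        for u, v in zip(p, p[1:]):  # every move follows an edge or waits
            if u != v and v not in neighbors(u):
                return False
    for t in range(T):
        occupied = [p[t] for p in padded]
        if len(set(occupied)) < len(occupied):  # vertex conflict
            return False
        if t > 0:  # edge (swap) conflict: two agents exchanging vertices
            for i in range(len(padded)):
                for j in range(i + 1, len(padded)):
                    if (padded[i][t] == padded[j][t - 1]
                            and padded[j][t] == padded[i][t - 1]):
                        return False
    return True
```

For example, on a 4-cycle graph, two agents moving to adjacent free vertices form a valid solution, while two agents swapping along one edge do not.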
  9. /56 10 Challenge in Planning: a trade-off between quality & solvability (complete/optimal vs. incomplete/suboptimal) and planning effort (speed & scalability). Holy grail: keeping solvability & solution quality with small planning effort.
  10. /56 11 Approaches in Planning: approach-1 [Chap. 4-6], short-horizon planning guides long-horizon planning; approach-2 [Chap. 7], iterative refinement (plot: cost / lower bound vs. planning time (sec), 300 agents).
  11. /56 12 Results on MAPF Benchmark [Stern+ SOCS-19]: 13900 instances (33 grid maps, random scenario, every 50 agents up to max. 1000), tested on a desktop PC; solved instances (%) vs. runtime (sec). Comp & optimal: A* [Hart+ 68] 0.0%, ODrM* [Wagner+ AIJ-15] 0.4%. Comp & bounded sub-opt: ODrM*-5 [Wagner+ AIJ-15] 30.9%. SC* & optimal: CBS [Sharon+ AIJ-15; Li+ AIJ-21] 8.3%, BCP [Lam+ COR-22] 10.7%. SC & bounded sub-opt: EECBS-5 [Li+ AAAI-21] 50.5%. Incomp & sub-opt: PP [Silver AIIDE-05] 61.4%, LNS2 [Li+ AAAI-22] 80.9%, PIBT [Okumura+ AIJ-22] 67.4%. LaCAM*: 99.0% — blazing fast, complete & eventually optimal! Successfully breaking the trade-off. (*SC: solution complete — unable to distinguish unsolvable instances.)
  12. /56 13 Dissertation Outline — Part III. Representation (continuous): 11. Building Representation from Learning; 12. Building Representation while Planning.
  13. /56 14 Background: Planning in Continuous Spaces. Given: agents (starts), goals, and a workspace. Solution: paths without collisions; necessitates constructing a roadmap.
  14. /56 15 Challenge in Representation: dense vs. sparse representations (produced by PRM [Kavraki+ 96]) trade off quality & solvability (complete/optimal vs. incomplete/suboptimal) against effort (speed & scalability). Goal: building a small but promising representation for planning.
  15. /56 16 Approaches in Representation: approach-1 [Chap. 11], learning from planning demonstrations; approach-2 [Chap. 12], building representation while planning.
  16. /56 17 Future Direction: establish practical methodologies for multi-robot motion planning by integrating LaCAM(*)/PIBT (planning in discretized spaces) and SSSP/CTRMs (planning in continuous spaces) with iterative refinement and robust execution — scalable & quick, probabilistically complete, asymptotically optimal, anytime, with kinodynamic constraints.
  17. /56 18 Rest of Talk: Offline Planning with Reactive Execution — Part II. Execution: 9. Offline Planning to Overcome Timing Uncertainties; 10. Offline Planning to Overcome Crash Faults.
  18. /56 20 Lacking robust execution may cause catastrophe. “more than 1,000 robots buzz around a grid, stopping to grab crates of food”; “appears to have been caused by the collision of three bots on the grid” — https://www.ft.com/content/aaddf4b1-a78b-4289-b42f-fd3f5cd7f176 (accessed 6th Jun. 2022).
  19. /56 21 Execution on Real Robots — planning-reality gaps / uncertainties: friction, sensor errors, communication delays, inconsistent speeds due to battery consumption, individual differences between robots, crashes, etc. Overcoming uncertainties at runtime.
  20. /56 24 Imperfect Execution: delays arise between planning and execution (figure: planned vs. executed timelines of two agents over timesteps 1-4).
  21. /56 25 Conservative: Synchronize — forcibly make agents wait for delayed ones (figure: planned vs. executed timelines of two agents); arrival time: 3.
  22. /56 26 Progressive: Preserve Temporal Dependencies [Cap+ IROS-16, Ma+ AAAI-17] — check temporal dependencies; if preserved, go (figure: planned vs. executed timelines of two agents); arrival time: 2. Actual progress may deviate from the ideal, but execution stays in the same homotopy class.
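The dependency-checking idea from [Ma+ AAAI-17] can be sketched as a small round-based simulation that preserves each vertex's planned visiting order; `simulate_mcp` is a hypothetical helper for illustration, not the authors' implementation:

```python
from collections import defaultdict, deque

def simulate_mcp(paths):
    """A minimal sketch (after [Ma+ AAAI-17], not the authors' code):
    execute a timed MAPF plan so that the planned visiting order of every
    vertex is preserved whatever the actual timing.  An agent enters its
    next vertex only when it is that vertex's next scheduled visitor and
    the vertex is free.  Returns each agent's arrival round.  Assumes a
    valid plan without rotation cycles (a synchronous cycle move would
    stall this one-agent-at-a-time simulation)."""
    n = len(paths)
    order = defaultdict(deque)            # planned entering order per vertex
    for t in range(max(len(p) for p in paths)):
        for i, p in enumerate(paths):
            if t < len(p) and (t == 0 or p[t] != p[t - 1]):
                order[p[t]].append(i)
    # drop waits: each route is the agent's sequence of distinct vertices
    routes = [[v for t, v in enumerate(p) if t == 0 or v != p[t - 1]]
              for p in paths]
    for i, r in enumerate(routes):
        order[r[0]].popleft()             # agents already occupy their starts
    occupied = {r[0] for r in routes}
    pos, arrival, rnd = [0] * n, [0] * n, 0
    while any(pos[i] < len(routes[i]) - 1 for i in range(n)):
        rnd += 1
        progressed = False
        for i in range(n):
            if pos[i] < len(routes[i]) - 1:
                v = routes[i][pos[i] + 1]
                # move only if i is v's next scheduled visitor and v is free
                if order[v] and order[v][0] == i and v not in occupied:
                    occupied.discard(routes[i][pos[i]])
                    occupied.add(v)
                    order[v].popleft()
                    pos[i] += 1
                    arrival[i] = rnd
                    progressed = True
        assert progressed, "stalled: rotation cycle or invalid plan"
    return arrival
```

Because only the visiting orders are enforced, a delayed agent never forces a global stop; followers simply proceed as soon as their dependencies are satisfied.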
  23. /56 27 Really Smart? A single delay has negative effects: typical MAPF solutions (e.g., 60 agents, solved by PIBT [Okumura+ AIJ-22]) involve complicated dependencies among agents' actions [Okumura+ 22].
  24. /56 28 Expensive Communication Cost: 32 robots controlled by 8 Raspberry Pis over Bluetooth [Okumura+ 22]. A stable network / monitoring systems for ≥1000 agents…? NON-TRIVIAL!!
  25. /56 30 Offline Time-Independent Multi-Agent Path Planning. KO, François Bonnet, Yasumasa Tamura & Xavier Défago. IJCAI-22 (extended version is under review at T-RO). https://kei18.github.io/otimapp — formalize, analyze, and solve OTIMAPP: a multi-agent pathfinding (MAPF) solution is weak to timing uncertainties at execution; an OTIMAPP solution is not.
  26. /56 31 Problem Def. – OTIMAPP: given starts, goals, and a graph, find solution paths s.t. all agents eventually reach their goals regardless of action orders.
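To make the "regardless of action orders" requirement concrete, the following sketch classifies one execution state of fixed paths as deadlocked or not; `deadlocked_agents` is a hypothetical helper, and note that deciding whether such a state is *reachable* — not this per-state check — is what makes verification hard:

```python
def deadlocked_agents(paths, config):
    """A minimal sketch (not the dissertation's algorithm): `config[i]` is
    agent i's current index on its fixed path `paths[i]`.  An unfinished
    agent is deadlocked when its next vertex is held, directly or
    transitively, only by agents that can never move: a cycle of mutually
    blocking agents (cyclic deadlock) or an agent parked forever on its
    goal (terminal deadlock).  Returns the deadlocked agents' ids."""
    occupant = {paths[i][config[i]]: i for i in range(len(paths))}
    unfinished = [i for i in range(len(paths))
                  if config[i] < len(paths[i]) - 1]
    can_move = set()          # agents that can eventually take their next step
    changed = True
    while changed:            # fixpoint: release agents whose blocker moves
        changed = False
        for i in unfinished:
            if i in can_move:
                continue
            holder = occupant.get(paths[i][config[i] + 1])
            # next vertex is free, or held by an agent that will vacate it
            if holder is None or holder in can_move:
                can_move.add(i)
                changed = True
    return sorted(set(unfinished) - can_move)
```

For instance, two agents whose next vertices are each other's current vertices form a cyclic deadlock, and an agent whose next vertex is another agent's reached goal is terminally deadlocked.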
  27. /56 32 Solution Analysis: two conditions are necessary & sufficient — no reachable* cyclic deadlock and no reachable* terminal deadlock (*non-reachable deadlocks exist).
  28. /56 33 Computational Complexity — main observations: 1. finding solutions is NP-hard; 2. verification is co-NP-complete. The proofs are reductions from 3-SAT; OTIMAPP is computationally intractable.
  29. /56 34 Solvers: MAPF avoids collisions; OTIMAPP avoids deadlocks. We propose two algorithms based on Multi-Agent Path Finding (MAPF) algorithms — prioritized planning (extending conventional PP [Erdmann+ Algorithmica-87]) and deadlock-based search (extending conflict-based search [Sharon+ AIJ-15]). Both solvers can solve large OTIMAPP instances to some extent (success rate (%) within 5 min vs. #agents on random-32-32-10 (32x32), random-64-64-10 (64x64), and den520d (257x256)).
  30. /56 35 Execution Demo: no synchronization, only local interactions — centralized style with toio robots, decentralized style with AFADA [Kameyama+ ICRA-21]; all robots are guaranteed to reach their goals.
  31. /56 39 Fault-Tolerant Offline Multi-Agent Path Planning. KO*, Sébastien Tixeuil. AAAI-23. https://kei18.github.io/mappcf — formalize, analyze, and solve multi-agent path planning with crash faults (MAPPCF): a solution comprises multiple paths assuming crashes; agents change their execution paths on demand when a fault occurs. (*work done during a stay at LIP6, Sorbonne University, France)
  32. /56 41 With an Unforeseen Crash (a crashed agent stops forever): online replanning once the crash is detected — then? Offline approach: preparing backup paths from the beginning.
  33. /56 42 Solution Concept of MAPPCF (with crash faults): a primary path, plus a backup path taken when a crash is detected, following a transition rule.
  34. /56 45 Solution Concept of MAPPCF (with crash faults): more than two agents may crash => backup paths of backup paths. Done!
  35. /56 46 Problem Formulation of MAPPCF: given the maximum number of crashes f, with a failure detector & an execution model defined, find a solution (paths & transition rules) s.t. all non-crashed agents eventually reach their destinations, regardless of crashes (up to f). Centralized planning followed by decentralized execution.
  36. /56 47 Failure Detectors (c.f., [Chandra+ JACM-96]): an oracle that tells the status of neighboring vertices. Query a vertex; response: 1. no agent, 2. non-crashed agent, 3. crashed agent. Named FD vs. anonymous FD — the latter is unable to identify who crashes.
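The query/response protocol above can be sketched as follows; `query_fd`, `Status`, and the convention that only a named FD reveals a crashed occupant's identity are illustrative assumptions for this sketch, not the paper's exact definitions:

```python
from enum import Enum

class Status(Enum):
    NO_AGENT = 1   # queried vertex is empty
    CORRECT = 2    # occupied by a non-crashed agent
    CRASHED = 3    # occupied by a crashed agent

def query_fd(vertex, occupancy, crashed, named):
    """Hypothetical failure-detector oracle: return (status, agent id).
    `occupancy` maps vertex -> agent id, `crashed` is the set of crashed
    agent ids.  Assumption for this sketch: a *named* FD reveals the
    identity of a crashed occupant; an *anonymous* FD never does."""
    agent = occupancy.get(vertex)
    if agent is None:
        return (Status.NO_AGENT, None)
    if agent in crashed:
        return (Status.CRASHED, agent if named else None)
    return (Status.CORRECT, agent if named else None)
```

Under this interface, a transition rule would branch on the status of the next vertex, e.g. switching to a backup path when `Status.CRASHED` is reported.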
  37. /56 48 Execution Models: how agents are scheduled at runtime. Synchronous model: all agents act simultaneously; solutions avoid collisions (MAPF: multi-agent pathfinding [Stern+ SOCS-19]). Sequential model (async): each agent acts spontaneously while locally avoiding collisions; solutions avoid deadlocks (offline time-independent MAPP [Okumura+ IJCAI-22]).
  38. /56 49 Model Power Analyses over the four combinations SYN+AFD, SYN+NFD, SEQ+NFD, SEQ+AFD (SYN: synchronous model, SEQ: sequential model, NFD: named failure detector, AFD: anonymous FD), compared by their sets of solvable instances: strict dominance holds between SEQ+AFD and SYN+AFD, weak dominance between SYN+AFD and SYN+NFD, and instances exist that are solvable in SYN but unsolvable in SEQ.
  39. /56 50 Computational Complexity: 1. finding solutions is NP-hard; 2. verification is co-NP-complete — regardless of FD types or execution models. The proofs are reductions from 3-SAT; MAPPCF is computationally intractable.
  40. /56 51 Solver / Empirical Results: we propose DCRF (decoupled crash faults resolution framework), adapted from CBS [Sharon+ AIJ-15], to solve MAPPCF, vs. finding vertex-disjoint paths. Setup: random-32-32-10 (32x32, |V|=922) from [Stern+ SOCS-19], named FD, fixed #crashes f=1, 30-sec timeout. Comparing DCRF/SYN and disjoint paths in success rate and in costs / lower bound (traveling time when no crashes) across #agents: MAPPCF provides a better solution concept than finding disjoint paths.
  41. /56 52 Summary in Execution: reality gaps between planning and execution; new solution concept — offline planning with reactive execution. In practice? => a hybrid of (relaxed) offline & online planning.
  42. /56 54 TODAY — Research Summary: integration of Planning (≥1000 agents within 1 sec [IJCAI-19, IROS-21, ICAPS-22, AIJ-22, AAAI-23+]), Execution (overcoming uncertainties [AAAI-21, ICRA-21, ICAPS-22, IJCAI-22, AAAI-23+]), and Representation (building effective roadmaps [AAMAS-22+]).
  43. /56 55 Impact of Research Results — contribution: develop new horizons of multi-agent navigation. Applications: warehouse (YouTube/Mind Blowing Videos), video game (YouTube/StarCraft), railway/airport operations [Flatland Challenge, AIcrowd; Morris+ AAAIW-16], robotic interface [Le Goc+ UIST-16], drug discovery [Song+ ICCBB-01], manufacturing [Zhang+ 18], crowd simulation [van den Berg+ ISRR-11], puzzle solving [Zhang+ SIGGRAPH-20], pipe routing [Belov+ SOCS-20].
  44. /56 57 Publications (under review / peer-reviewed)
    1. “Improving LaCAM for Scalable Eventually Optimal Multi-Agent Pathfinding.” KO. (under review at IJCAI-23)
    2. “Quick Multi-Robot Motion Planning by Combining Sampling and Search.” KO & Xavier Défago. (under review at IJCAI-23)
    3. “LaCAM: Search-Based Algorithm for Quick Multi-Agent Pathfinding.” KO. AAAI. 2023.
    4. “Fault-Tolerant Offline Multi-Agent Path Planning.” KO & Sébastien Tixeuil. AAAI. 2023.
    5. “Priority Inheritance with Backtracking for Iterative Multi-agent Path Finding.” KO, Manao Machida, Xavier Défago & Yasumasa Tamura. Artificial Intelligence (AIJ). 2022. (previously presented at IJCAI-19)
    6. “Offline Time-Independent Multi-Agent Path Planning.” KO, François Bonnet, Yasumasa Tamura & Xavier Défago. IJCAI. 2022. (extended version is under review at T-RO)
    7. “Solving Simultaneous Target Assignment and Path Planning Efficiently with Time-Independent Execution.” KO & Xavier Défago. ICAPS. 2022. (best student paper award; extended version is under review at AIJ)
    8. “CTRMs: Learning to Construct Cooperative Timed Roadmaps for Multi-agent Path Planning in Continuous Spaces.” KO, Ryo Yonetani, Mai Nishimura & Asako Kanezaki. AAMAS. 2022.
    9. “Iterative Refinement for Real-Time Multi-Robot Path Planning.” KO, Yasumasa Tamura & Xavier Défago. IROS. 2021.
    Others:
    10. “Active Modular Environment for Robot Navigation.” Shota Kameyama, KO, Yasumasa Tamura & Xavier Défago. ICRA. 2021.
    11. “Time-Independent Planning for Multiple Moving Agents.” KO, Yasumasa Tamura & Xavier Défago. AAAI. 2021.
    12. “Roadside-assisted Cooperative Planning using Future Path Sharing for Autonomous Driving.” Mai Hirata, Manabu Tsukada, KO, Yasumasa Tamura, Hideya Ochiai & Xavier Défago. VTC. 2021.
    13. “winPIBT: Extended Prioritized Algorithm for Iterative Multi-agent Path Finding.” KO, Yasumasa Tamura & Xavier Défago. WoMAPF. 2021.
    14. “Amoeba Exploration: Coordinated Exploration with Distributed Robots.” KO, Yasumasa Tamura & Xavier Défago. iCAST. 2018.