One application domain for artificial intelligence (AI) systems is humanitarian aid planning, where dynamically changing societal conditions must be monitored and analyzed so that humanitarian organizations can coordinate efforts and appropriately support forcibly displaced peoples. The explainability of AI system outputs (XAI) is essential to facilitating effective human-AI collaboration. This late-breaking work presents an ongoing industrial research project aimed at designing, building, and implementing an XAI system for humanitarian aid planning. Adopting a scenario-based XAI design approach, we draw on empirical data from our project to define current and future scenarios of use. These scenarios surface three central themes that shape human-AI collaboration in humanitarian aid planning: (1) Surfacing Causality, (2) Multifaceted Trust & Lack of Data Quality, and (3) Balancing Risky Situations. We explore each theme and, in doing so, further our understanding of how humanitarian aid planners can partner with AI systems to better support forcibly displaced peoples.