The First Workshop on the Application of LLM Explainability to Reasoning and Planning

@ COLM 2025


About

We are thrilled to announce the First Workshop on the Application of LLM Explainability to Reasoning and Planning at COLM 2025 to be held on October 10, 2025.

Enabling large language models (LLMs) to reason (e.g., arithmetic reasoning, symbolic reasoning, commonsense reasoning) and plan (e.g., path-finding, tool use, web navigation, computer use) has been a popular topic in the past few years. Despite these exciting achievements, there have been growing concerns about the safety and trustworthiness of such LLM applications, owing to the large unknowns about how LLMs achieve these capabilities and where they could fail. On the other hand, LLM explainability (broadly including any research explaining or interpreting LLMs) has also attracted increasing attention, but existing research has mostly focused on simplified tasks and rarely yields insights that can be directly applied to realistic reasoning and planning tasks. This discrepancy has consequently raised doubts about the practical value of LLM explainability research.

In this workshop, we aim to bring together researchers from various perspectives to discuss the potential and practical applications of model explainability to advance LLM reasoning and planning. Specifically, the workshop welcomes submissions on the following topics (non-exclusively):

  • local explanations (e.g., feature attribution, textual explanations, including CoT type) of LLMs in reasoning and/or planning tasks;
  • global explanations (e.g., mechanistic interpretability) of LLMs in reasoning and/or planning tasks;
  • applications of explainability to enhance LLMs’ effectiveness in reasoning and/or planning tasks;
  • applications of explainability to enhance LLMs’ safety and trustworthiness in reasoning and/or planning tasks;
  • user interface development driven by LLM explanations;
  • human-LLM collaboration and teaming driven by explanations; and
  • explainability-driven, automatic or human-in-the-loop LLM evaluation.

Join our Google Group (https://groups.google.com/g/xllm-reasoning-planning-workshop) for workshop updates and Q&A, and contact us at xllmreasoningplanningworkshop AT gmail DOT com for other inquiries!

Invited speakers (tentative)

Schedule

TBD

Call for papers

Important dates

  • Submission deadline: June 23, 2025, 23:59 AoE
  • Acceptance notification: July 24, 2025

Submission instructions

We welcome both long (up to 9 pages of main content, plus unlimited references) and short (up to 5 pages of main content, plus unlimited references) paper submissions, following the official COLM template. Long papers are expected to present completed, full-scope work, while short papers may present preliminary or ongoing work. All submissions will be non-archival. We also allow dual submissions that are under review at, or have recently been accepted to, other venues—for the former, authors should make sure to follow the other venue’s dual submission policies; for the latter, we ask authors to indicate the accepting venue.

Workshop awards

The workshop will announce one Best Paper Award, open to all submissions, and one Special Recognition Award for papers whose first author(s) are junior researchers and/or members of underrepresented groups. Authors submitting to our workshop will be asked to indicate the status of the first author(s) so that eligibility can be confirmed.

Program committee

Workshop organizers