AgentEscapeBench: Evaluating Out-of-Domain Tool-Grounded Reasoning in LLM Agents
Abstract
arXiv:2605.07926v1

As LLM-based agents increasingly rely on external tools, it is important to evaluate their ability to sustain tool-grounded reasoning beyond familiar workflows and short-range interactions. We introduce AgentEscapeBench, an escape-room-style benchmark…