LLMs Now Interview Humans to Gather Complex Context: A Breakthrough in AI Knowledge Extraction
Breaking News: Interrogatory LLM Technique Emerges
A novel approach is reshaping how large language models (LLMs) prepare for complex tasks. Instead of requiring humans to write lengthy context documents, the LLM now interviews a person directly — asking targeted questions to extract the necessary information.

This method, called the 'interrogatory LLM', was first detailed on Harper Reed's blog and has been further explored by Martin Fowler on his 'Bliki' platform. The core idea: a human provides initial guidance, but the LLM takes the lead by asking one question at a time.
"The LLM asks all the questions it needs to create the appropriate context," explained Reed. "It can be told about other sources to consult if it lacks certain knowledge. Once done, it produces a context report for another session to execute the next step."
How It Works
Traditionally, feeding an LLM context for a complex task — like designing a new feature — required a human to write several pages of markdown. That included user-facing descriptions, implementation guidelines, and references to external systems.
With the interrogatory approach, the human simply responds to the LLM's questions. The system dynamically builds the context document. A key rule, according to Reed, is to let the LLM ask only one question at a time.
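The loop described above can be sketched in a few lines. This is a minimal illustration, not code from either author's writeup: `ask_llm` stands in for a real chat-completion API call and is stubbed here with canned questions so the control flow can run on its own; the system prompt wording and the `DONE` convention are assumptions for the sketch.

```python
# Sketch of an interrogatory-LLM loop: the model asks one question per turn,
# the human answers, and the model eventually emits a context report.

SYSTEM_PROMPT = (
    "You are gathering context for a software feature. "
    "Ask exactly ONE question per turn. "
    "When you have enough information, reply with DONE followed by "
    "a context report in markdown."
)

def ask_llm(messages):
    """Stub: a real implementation would send `messages` to a chat API."""
    questions = [
        "Who are the primary users of this feature?",
        "Which external systems does it need to integrate with?",
    ]
    asked = sum(1 for m in messages if m["role"] == "assistant")
    if asked < len(questions):
        return questions[asked]
    return "DONE\n# Context Report\n(Summary of the answers above.)"

def interrogate(answer_fn, max_turns=10):
    """Run the interview: the LLM asks, the human answers, until DONE."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for _ in range(max_turns):
        reply = ask_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("DONE"):
            return reply.partition("\n")[2]  # everything after "DONE"
        messages.append({"role": "user", "content": answer_fn(reply)})
    raise RuntimeError("interview did not converge")

# In practice answer_fn would prompt a human; here it echoes the question.
report = interrogate(lambda q: f"(human answer to: {q})")
```

Reed's "one question at a time" rule lives in the system prompt; as Fowler notes, a real session may need that instruction repeated during the conversation, which a wrapper like `interrogate` could inject on every turn.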
"I found it needed to be frequently reminded of this," Fowler noted in his writeup. This keeps the conversation focused and prevents information overload.
Beyond Creation: Document Review
The same technique can be used to verify existing documents. For example, an LLM can read a software specification and then interview a human expert to check its accuracy. This is an alternative to asking the expert to read and review the document — a task many find difficult.
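The review variant mainly changes the opening instruction: instead of building a document from scratch, the model reads an existing one and interviews the expert about it. The prompt below is an illustrative sketch of that setup, with wording invented for this example rather than taken from Reed's or Fowler's posts.

```python
# Sketch of a review-mode prompt: the LLM reads a draft spec, then
# interviews its subject-matter expert to check accuracy.

def review_prompt(spec_text):
    return (
        "Below is a draft specification. Read it, then interview me, "
        "its subject-matter expert, to verify its accuracy. "
        "Ask ONE question at a time, focusing on claims you cannot "
        "confirm from the text alone. When finished, list confirmed "
        "points, needed corrections, and gaps.\n\n"
        "--- SPEC ---\n" + spec_text
    )
```

The same one-question-at-a-time loop as before would then drive the conversation, with the expert answering rather than dictating.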
"People often find reviewing hard, so a conversation with an LLM might be more fruitful, particularly if the document isn't well-written," Fowler said.
Background
The interrogatory LLM addresses a fundamental bottleneck in deploying AI assistants: the need for rich, structured context. As machines take on roles that demand deep domain knowledge, the quality of that knowledge input becomes critical. Manual drafting is time-consuming and error-prone.
The technique also offers a lifeline for individuals who struggle with writing but possess valuable knowledge. "Many folks find writing hard, often very hard," Fowler observed. "This can be a real problem when we need to get information out of someone's head into a form that other humans can consume."
What This Means
For organizations, the interrogatory LLM could drastically reduce the time and skill needed to prepare AI agents for complex projects. Instead of training people to write detailed specs, domain experts can simply answer the model's questions in conversation.
The approach is not limited to a single use case. Fowler envisions a pipeline: one interrogatory LLM builds a document, and other similar LLMs interview different experts to review it. This could scale knowledge validation across teams.
Critics may point to the 'tang of AI-writing' in the final output — a style some find unappealing. But Fowler argues that "that's better than not having the information itself, either due to rushed writing or no writing at all."
As LLMs become more conversational, the interrogatory model may become a standard tool for knowledge capture, making it easier for experts to share what they know without the burden of documentation.
This is a developing story. Check back for updates on how this technique is being adopted in enterprise and research settings.