whatpeoplewant 6 hours ago

Zero—“knowledge” contains no w. But knowledge-building relies on the five Ws (who, what, when, where, why) plus how. In a multi-agent, agentic AI/LLM workflow you can mirror that by assigning distributed agents to each W and running them in parallel, then reconciling their outputs, which often improves coverage and reduces errors.
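The fan-out/reconcile pattern described above can be sketched roughly as follows. This is a minimal, hypothetical illustration: `agent` and `reconcile` are placeholder functions (a real system would issue LLM calls for each angle), and the thread pool just stands in for parallel agent execution.

```python
# Hypothetical sketch: one "agent" per question word, run in parallel,
# then a reconcile step merges the per-angle outputs.
from concurrent.futures import ThreadPoolExecutor

QUESTIONS = ["who", "what", "when", "where", "why", "how"]

def agent(question: str, topic: str) -> str:
    # Placeholder: a real agent would prompt an LLM with this angle.
    return f"{question}: findings about {topic}"

def reconcile(answers: dict) -> str:
    # Placeholder merge step: combine per-angle outputs into one report.
    return "\n".join(answers[q] for q in QUESTIONS)

def run(topic: str) -> str:
    # Fan out one task per question word, gather, then reconcile.
    with ThreadPoolExecutor(max_workers=len(QUESTIONS)) as pool:
        futures = {q: pool.submit(agent, q, topic) for q in QUESTIONS}
        answers = {q: f.result() for q, f in futures.items()}
    return reconcile(answers)

print(run("knowledge"))
```

In practice the reconcile step is where the error reduction happens: conflicting per-angle answers can be cross-checked against each other before the final output is produced.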

trilogic 12 hours ago

Ask ChatGPT that; it will sit thinking for a long time without being able to answer.

Try it yourself... If this is an LLM issue in general, why does this small model get it right on the first try? (check the link)