OpenAI’s “Agentic” Whitepaper Missed the Point

Two days after OpenAI released their whitepaper, Harrison Chase (LangChain’s Co-founder) wrote a sharp critique on how to think about agentic apps.

If you care even a little about where AI workflows and LLM orchestration are headed, you’ll want to read this.

Image: Autonomous Agents from “Building effective agents” by Anthropic


Agents are systems that independently accomplish tasks on your behalf.
— OpenAI

TL;DR

  1. The definition of “Agents” by OpenAI (seen above) is too vague.

    • It doesn’t go deep enough to distinguish workflows from agents.

  2. Most agentic systems in production aren’t composed purely of agents.

    • They’re hybrid solutions consisting of both workflows and agents.

  3. LangGraph—LangChain’s agentic orchestration framework—offers declarative and imperative APIs, with a series of agent abstractions built on top.


Workflows vs. Agents

Image: High floor, low ceiling from “How to think about agent frameworks” by Harrison Chase

Chase leans into Anthropic’s more nuanced take:

  • Workflows: predictable, code-driven pipelines with LLMs + tools.

  • Agents: dynamic, feedback-driven systems where the LLM guides the process with autonomy to take actions and make decisions.
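The distinction above can be made concrete in a few lines. This is a minimal plain-Python sketch, where `call_llm` is a hypothetical stand-in for any chat-completion call (stubbed here so the example runs): in a workflow, the code fixes the steps and the LLM only fills them in; in an agent, the LLM decides what happens next.

```python
def call_llm(prompt: str) -> str:
    # Stub for illustration: a real implementation would call a model API.
    return "finish" if "decide" in prompt else f"llm({prompt})"

# Workflow: control flow lives in code; the LLM fills in each fixed step.
def workflow(document: str) -> str:
    summary = call_llm(f"summarize: {document}")          # step 1, always runs
    return call_llm(f"translate to French: {summary}")    # step 2, always runs

# Agent: the LLM chooses the next action (tool) inside a feedback loop.
def agent(task: str, tools: dict, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        action = call_llm(f"decide next action for: {context}")
        if action == "finish":
            return context
        context = tools[action](context)  # run the tool the LLM chose
    return context
```

The workflow is predictable by construction; the agent's behavior depends on what the model decides at each step, which is exactly why it needs the guardrails discussed below.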


Building Reliable Agentic Apps

The hard part about building reliable agent systems is making sure the LLM has the appropriate context at each step:

  • Controlling exactly what context goes into the LLM.

  • Managing the orchestration of each step with intent.

Understanding how context flows through the system is what makes agentic apps debuggable and improvable.
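One way to make that context flow inspectable is to assemble the prompt for each step explicitly from named pieces of state, rather than dumping everything into the model. A sketch, assuming nothing beyond the standard library (the names `State` and `build_context` are illustrative, not from any framework):

```python
from dataclasses import dataclass, field

@dataclass
class State:
    user_query: str
    retrieved_docs: list = field(default_factory=list)

def build_context(state: State, step: str) -> str:
    # Each step receives exactly the context it needs -- nothing implicit,
    # so a bad answer can be traced back to the exact prompt it saw.
    if step == "retrieve":
        return state.user_query
    if step == "answer":
        docs = "\n".join(state.retrieved_docs[:3])  # cap what the LLM sees
        return f"Question: {state.user_query}\nSources:\n{docs}"
    raise ValueError(f"unknown step: {step}")
```

Because `build_context` is an ordinary function, it can be logged and unit-tested independently of any model call.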


Enter LangGraph

So what’s Chase proposing instead?

LangGraph, LangChain’s agentic orchestration framework, pairs declarative and imperative APIs with a series of agent abstractions built on top:

↳ Offers a declarative, graph-style syntax for defining workflows and logic flows.

↳ Includes agent abstractions layered on top of a flexible, lower-level architecture.

↳ Supports multiple APIs — from functional and event-driven styles to implementations in both Python and TypeScript.

↳ Lets you model agentic behavior as graphs, where nodes represent steps and edges define transitions.

↳ Edges can be static or conditional, allowing the graph to be structured declaratively while still enabling fully dynamic execution paths.

↳ A built-in persistence layer enables fault tolerance plus short-term and long-term memory.

↳ That same layer powers human-in-the-loop interactions — think pausing, approving, resuming, even rewinding execution (“time travel”).

↳ Native support for streaming, including token-level updates, node changes, and custom events.

↳ Tightly integrated with LangSmith for powerful debugging, evaluation, and system monitoring.
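The nodes-and-edges model is simple enough to sketch in plain Python. This mimics the idea, not LangGraph's actual API: nodes are step functions over shared state, static edges name the next node, and a conditional edge is a function that picks the successor from the current state.

```python
def run_graph(state, nodes, edges, start, end="END", max_steps=20):
    current = start
    for _ in range(max_steps):
        if current == end:
            return state
        state = nodes[current](state)                   # run the node
        nxt = edges[current]
        current = nxt(state) if callable(nxt) else nxt  # conditional edge
    raise RuntimeError("graph did not reach END")

# Illustrative two-node graph: draft, then review; the review node's
# conditional edge either finishes or loops back to draft.
nodes = {
    "draft": lambda s: {**s, "text": s["topic"] + " draft"},
    "review": lambda s: {**s, "approved": len(s["text"]) > 5},
}
edges = {
    "draft": "review",
    "review": lambda s: "END" if s["approved"] else "draft",
}
result = run_graph({"topic": "agents"}, nodes, edges, "draft")
```

The graph's structure is declared up front, yet the path actually taken at runtime depends on state, which is the combination the ↳ points above describe.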


Why This Matters

Everyone’s talking about AI agents, autonomous workflows, task automation, and LLMs running your day.

But if we build on shaky abstractions, we’ll inevitably end up:

  • Burning time debugging invisible logic.

  • Deploying agents that “seem” smart but fail silently.

  • Losing trust in systems we barely understand.
