Runnable Interface
To make it as easy as possible to create custom chains, we've implemented a "Runnable" protocol. Many LangChain components implement the Runnable
protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. The standard interface includes:
- `stream`: stream back chunks of the response
- `invoke`: call the chain on an input
- `batch`: call the chain on a list of inputs
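To make the three sync methods concrete, here is a minimal stand-in class (not LangChain code; the class name and its trivial behavior are invented for illustration) that exposes the same `invoke` / `batch` / `stream` shape:

```python
from typing import Iterator, List

# Hypothetical stand-in for a runnable; real LangChain components
# implement this same method shape on top of much richer behavior.
class UpperCaseRunnable:
    def invoke(self, text: str) -> str:
        # call the chain on a single input
        return text.upper()

    def batch(self, texts: List[str]) -> List[str]:
        # call the chain on a list of inputs
        return [self.invoke(t) for t in texts]

    def stream(self, text: str) -> Iterator[str]:
        # stream back chunks of the response, one word at a time
        for word in text.upper().split():
            yield word + " "

chain = UpperCaseRunnable()
print(chain.invoke("hello world"))           # HELLO WORLD
print(chain.batch(["a", "b"]))               # ['A', 'B']
print("".join(chain.stream("hello world")))  # HELLO WORLD (streamed word by word)
```

Because every component shares this interface, callers can swap one runnable for another without changing how they invoke it.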
These also have corresponding async methods that should be used with asyncio `await` syntax for concurrency:

- `astream`: stream back chunks of the response async
- `ainvoke`: call the chain on an input async
- `abatch`: call the chain on a list of inputs async
- `astream_log`: stream back intermediate steps as they happen, in addition to the final response
- `astream_events`: **beta** stream events as they happen in the chain (introduced in `langchain-core` 0.1.14)
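The async counterparts follow the same naming pattern with an `a` prefix. A sketch under the same assumptions as above (an invented toy class, not LangChain's implementation), showing `ainvoke`, `abatch`, and `astream` with `await` / `async for`:

```python
import asyncio
from typing import AsyncIterator, List

# Hypothetical async stand-in; note abatch can run inputs concurrently.
class AsyncUpperCaseRunnable:
    async def ainvoke(self, text: str) -> str:
        # call the chain on an input async
        return text.upper()

    async def abatch(self, texts: List[str]) -> List[str]:
        # call the chain on a list of inputs concurrently
        return list(await asyncio.gather(*(self.ainvoke(t) for t in texts)))

    async def astream(self, text: str) -> AsyncIterator[str]:
        # stream back chunks of the response async
        for word in text.upper().split():
            yield word

async def main() -> None:
    chain = AsyncUpperCaseRunnable()
    print(await chain.ainvoke("hello"))    # HELLO
    print(await chain.abatch(["a", "b"]))  # ['A', 'B']
    chunks = [c async for c in chain.astream("hello world")]
    print(chunks)                          # ['HELLO', 'WORLD']

asyncio.run(main())
```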
The input type and output type vary by component:
| Component | Input Type | Output Type |
|---|---|---|
| Prompt | Dictionary | PromptValue |
| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |
| LLM | Single string, list of chat messages or a PromptValue | String |
| OutputParser | The output of an LLM or ChatModel | Depends on the parser |
| Retriever | Single string | List of Documents |
| Tool | Single string or dictionary, depending on the tool | Depends on the tool |
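For example, the table's first row says a prompt maps a dictionary of variables to a PromptValue. A toy illustration (an invented class that returns a plain string as a stand-in for LangChain's PromptValue):

```python
from typing import Dict

# Hypothetical toy prompt component: dictionary in, formatted prompt out.
# It only illustrates the input/output types; it is not LangChain's
# PromptTemplate.
class ToyPrompt:
    def __init__(self, template: str) -> None:
        self.template = template

    def invoke(self, variables: Dict[str, str]) -> str:
        # fill the template with the supplied variables
        return self.template.format(**variables)

prompt = ToyPrompt("Tell me a joke about {topic}.")
print(prompt.invoke({"topic": "bears"}))  # Tell me a joke about bears.
```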
All runnables expose input and output schemas to inspect the inputs and outputs:

- `input_schema`: an input Pydantic model auto-generated from the structure of the Runnable
- `output_schema`: an output Pydantic model auto-generated from the structure of the Runnable
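The underlying idea is that a runnable's structure already implies what it accepts and returns. A stdlib-only sketch of that idea, deriving a simple schema from type hints (LangChain generates full Pydantic models instead; this toy class and the dict-shaped "schema" are assumptions for illustration):

```python
from typing import Dict, get_type_hints

# Hypothetical runnable-like class with an annotated invoke method.
class ToyPrompt:
    def invoke(self, variables: Dict[str, str]) -> str:
        return f"Tell me about {variables['topic']}."

# Derive input/output "schemas" from the method's type hints.
hints = get_type_hints(ToyPrompt.invoke)
input_schema = {"input": hints["variables"]}
output_schema = {"output": hints["return"]}
print(input_schema)   # the input is a Dict[str, str]
print(output_schema)  # the output is a str
```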