Conceptual guide
In this section, you'll find explanations of the key concepts, providing a deeper understanding of core principles.
The conceptual guide will not cover step-by-step instructions or specific implementation details; those are found in the How-To Guides and Tutorials sections. For detailed reference material, please visit the API Reference.
Architecture
- Conceptual Guide: LangChain Architecture
Runnable interface
- Conceptual Guide: About the Runnable interface
- How-to Guides: How to use the Runnable interface
The Runnable interface is a standard interface for defining and invoking LangChain components.
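As a rough illustration of that contract (a toy sketch, not LangChain's actual implementation — the class name `MyRunnable` is hypothetical), a Runnable exposes `invoke` for a single input, `batch` for many, and `stream` for incremental output:

```python
from typing import Any, Iterator, List

class MyRunnable:
    """Toy stand-in for the Runnable contract (illustrative only)."""

    def invoke(self, input: Any) -> Any:
        # Transform a single input into a single output.
        return input.upper()

    def batch(self, inputs: List[Any]) -> List[Any]:
        # Default batching: invoke each input in turn.
        return [self.invoke(i) for i in inputs]

    def stream(self, input: Any) -> Iterator[Any]:
        # Yield the output in chunks (here, one character at a time).
        for chunk in self.invoke(input):
            yield chunk

runnable = MyRunnable()
print(runnable.invoke("hello"))     # HELLO
print(runnable.batch(["a", "b"]))   # ['A', 'B']
print(list(runnable.stream("hi")))  # ['H', 'I']
```

Because every component honors the same three methods, components can be swapped for one another and composed without the caller changing how it calls them.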
LangChain Expression Language (LCEL)
- Conceptual Guide: About the Runnable interface
- How-to Guides: How to use the Runnable interface
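The core idea behind LCEL-style composition — chaining Runnables with the `|` operator — can be sketched in a few lines of plain Python. This is a simplified stand-in, not LangChain's real classes; `Step` and `fake_model` are invented for illustration:

```python
class Step:
    """Toy illustration of pipe composition; not LangChain's real classes."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # `a | b` returns a new Step that runs a, then feeds its output to b.
        return Step(lambda x: other.invoke(self.invoke(x)))

prompt = Step(lambda topic: f"Tell me a joke about {topic}")
fake_model = Step(lambda text: text.upper())

chain = prompt | fake_model
print(chain.invoke("bears"))  # TELL ME A JOKE ABOUT BEARS
```

The composed `chain` is itself a `Step`, so pipelines of any length still present the same single-`invoke` interface.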
Components
LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs. Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix.
Chat models
- Conceptual Guide: About Chat Models
- Integrations: LangChain Chat Model Integrations
- How-to Guides: How to use Chat Models
Multimodality
- Conceptual Guide: About Multimodal Chat Models
LLMs
Pure text-in/text-out LLMs tend to be older or lower-level. Many new popular models are best used as chat completion models, even for non-chat use cases.
You are probably looking for the section above instead.
- Conceptual Guide: About Language Models
- Integrations: LangChain LLM Integrations
- How-to Guides: How to use LLMs
Messages
- Conceptual Guide: About Messages
- How-to Guides: How to use Messages
Prompt templates
- Conceptual Guide: About Prompt Templates
- How-to Guides: How to use Prompt Templates
String PromptTemplates
ChatPromptTemplates
MessagesPlaceholder
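The difference between the two template styles, and the role a MessagesPlaceholder plays, can be shown with a toy sketch (the helper functions here are invented for illustration and are not LangChain's API): a string template formats one string, while a chat template formats a list of role/content messages, with a placeholder entry splicing in prior conversation history.

```python
# Toy sketch of the two template styles (illustrative, not LangChain's API).

def format_string_prompt(template: str, **values) -> str:
    # String PromptTemplates: format a single string with named variables.
    return template.format(**values)

def format_chat_prompt(messages, history, **values):
    # ChatPromptTemplates: format a list of (role, content) messages;
    # a MessagesPlaceholder-style entry splices in prior messages.
    out = []
    for role, content in messages:
        if role == "placeholder":
            out.extend(history)  # insert the conversation history here
        else:
            out.append((role, content.format(**values)))
    return out

print(format_string_prompt("Tell me a joke about {topic}", topic="cats"))

msgs = format_chat_prompt(
    [("system", "You are a helpful assistant"),
     ("placeholder", ""),
     ("user", "{question}")],
    history=[("user", "hi"), ("assistant", "hello!")],
    question="What is LCEL?",
)
print(msgs)
```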
Example selectors
One common prompting technique for achieving better performance is to include examples as part of the prompt. This is known as few-shot prompting. This gives the language model concrete examples of how it should behave. Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them. Example Selectors are classes responsible for selecting and then formatting examples into prompts.
For specifics on how to use example selectors, see the relevant how-to guides here.
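As a simplified analogue of what an example selector does (a toy sketch — the names `select_examples` and `build_prompt` are invented, and real selectors typically use length budgets or semantic similarity rather than this crude length heuristic), the idea is: given a new input, pick the most relevant stored examples and format them into a few-shot prompt.

```python
# Toy example selector: pick the examples whose input length is closest
# to the new input, then format them into a few-shot prompt.
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
]

def select_examples(query: str, k: int = 2):
    # Selection step: choose which examples to include for this query.
    return sorted(examples, key=lambda e: abs(len(e["input"]) - len(query)))[:k]

def build_prompt(query: str) -> str:
    # Formatting step: render the selected examples into the prompt.
    shots = "\n".join(f"Input: {e['input']}\nOutput: {e['output']}"
                      for e in select_examples(query))
    return f"Give the antonym of every input.\n{shots}\nInput: {query}\nOutput:"

print(build_prompt("big"))
```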
Output parsers
Output parsers take the text output of a model and try to parse it into a more structured representation. They predate chat models that are capable of calling tools; these days, it is recommended to use function/tool calling instead, as it is simpler and yields better-quality results.
See documentation for that here.
- Conceptual Guide: About Output Parsers
- How-to Guides: How to use Output Parsers
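A minimal sketch of the idea (the class `CommaSeparatedListParser` is invented for illustration; LangChain ships its own parser classes): the parser sits after the model and turns raw text into a structured Python value.

```python
from typing import List

class CommaSeparatedListParser:
    """Toy output parser: turns raw model text into a Python list.
    (A simplified sketch of what a list output parser does.)"""

    def parse(self, text: str) -> List[str]:
        # Split on commas and drop surrounding whitespace / empty items.
        return [item.strip() for item in text.split(",") if item.strip()]

raw_model_output = "red, green, blue"
parser = CommaSeparatedListParser()
print(parser.parse(raw_model_output))  # ['red', 'green', 'blue']
```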
Chat history
Most LLM applications have a conversational interface. An essential component of a conversation is being able to refer to information introduced earlier in the conversation. At bare minimum, a conversational system should be able to access some window of past messages directly.
The concept of ChatHistory refers to a class in LangChain which can be used to wrap an arbitrary chain. This ChatHistory will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database. Future interactions will then load those messages and pass them into the chain as part of the input.
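The wrap-record-replay loop described above can be sketched in plain Python (a toy illustration — `ChatHistoryWrapper` and `fake_chain` are invented names, and LangChain's actual history classes differ):

```python
# Toy sketch of the ChatHistory idea: wrap a chain, record each turn,
# and replay past messages as part of the next input.
class ChatHistoryWrapper:
    def __init__(self, chain):
        self.chain = chain  # underlying chain: list of messages -> reply
        self.messages = []  # the "message database"

    def invoke(self, user_input: str) -> str:
        # Pass prior messages plus the new input into the chain.
        convo = self.messages + [("user", user_input)]
        reply = self.chain(convo)
        # Record both sides of the exchange for future turns.
        self.messages = convo + [("assistant", reply)]
        return reply

# A fake chain that just reports how many messages it saw.
fake_chain = lambda msgs: f"seen {len(msgs)} messages"

chat = ChatHistoryWrapper(fake_chain)
print(chat.invoke("hi"))     # seen 1 messages
print(chat.invoke("again"))  # seen 3 messages
```

The second call sees three messages because the first turn's input and output were appended to the history before being replayed.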
Documents
A Document object in LangChain contains information about some data. It has two attributes:
- page_content: str: The content of this document. Currently only a string.
- metadata: dict: Arbitrary metadata associated with this document. Can track the document id, file name, etc.
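To make the shape concrete, here is a minimal stand-in with the same two attributes (illustrative only; the real class lives in `langchain_core.documents` and has additional behavior):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Minimal stand-in showing the shape of a Document."""
    page_content: str                            # the text content
    metadata: dict = field(default_factory=dict)  # id, file name, etc.

doc = Document(
    page_content="LangChain is a framework for building LLM applications.",
    metadata={"source": "intro.txt", "id": 1},
)
print(doc.page_content)
print(doc.metadata["source"])  # intro.txt
```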
Document loaders
These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the .load method.
An example use case is as follows:
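The original code sample did not survive here; as a hedged sketch of the pattern (the class `TextFileLoader` is hypothetical — real loaders live in `langchain_community.document_loaders`), a loader is configured with source-specific parameters and then returns documents from `.load`:

```python
import os
import tempfile

class TextFileLoader:
    """Hypothetical minimal loader following the DocumentLoader pattern."""

    def __init__(self, file_path: str):
        # Source-specific parameters go to the constructor.
        self.file_path = file_path

    def load(self):
        # All loaders are invoked the same way: .load() returns documents,
        # each carrying page_content plus metadata about its source.
        with open(self.file_path) as f:
            text = f.read()
        return [{"page_content": text, "metadata": {"source": self.file_path}}]

# Demo against a temporary file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello world")
    path = f.name

docs = TextFileLoader(path).load()
print(docs[0]["page_content"])  # hello world
os.remove(path)
```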
Text splitters
- Conceptual Guide: About Text Splitters
Embedding models
- Conceptual Guide: About Embedding Models
- How-to Guides: How to use Embedding Models
Retrievers
- Conceptual Guide: About Retrievers
- How-to Guides: How to use Retrievers