langchain_postgres 0.0.6¶
langchain_postgres.chat_message_histories¶
Client for persisting chat message history in a Postgres database.
This client provides support for both sync and async via psycopg 3.
Classes¶
chat_message_histories.PostgresChatMessageHistory(...)
Client for persisting chat message history in a Postgres database.
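A minimal usage sketch of the sync path (assumptions: a reachable Postgres instance; the connection string, table name, and messages are placeholders):

```python
import uuid

import psycopg
from langchain_core.messages import AIMessage, HumanMessage
from langchain_postgres import PostgresChatMessageHistory

# Assumption: replace the DSN with your own Postgres instance.
conn = psycopg.connect("postgresql://user:password@localhost:5432/chat_db")

table_name = "chat_history"
PostgresChatMessageHistory.create_tables(conn, table_name)  # create the schema once

history = PostgresChatMessageHistory(
    table_name,
    str(uuid.uuid4()),       # session id
    sync_connection=conn,    # psycopg 3 sync connection; an async connection is also supported
)
history.add_messages([HumanMessage(content="Hi!"), AIMessage(content="Hello!")])
print(history.messages)
```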
langchain_postgres.vectorstores¶
Classes¶
vectorstores.DistanceStrategy(value)
Enumerator of the Distance strategies.
vectorstores.PGVector(embeddings, *[, ...])
Vectorstore implementation using Postgres as the backend.
langchain_ai21 0.1.6¶
langchain_ai21.ai21_base¶
Classes¶
ai21_base.AI21Base
Create a new model by parsing and validating input data from keyword arguments.
langchain_ai21.chat¶
Classes¶
chat.chat_adapter.ChatAdapter()
Provides a common interface for the different Chat models available in AI21.
chat.chat_adapter.J2ChatAdapter()
chat.chat_adapter.JambaChatCompletionsAdapter()
Functions¶
chat.chat_factory.create_chat_adapter(model)
langchain_ai21.chat_models¶
Classes¶
chat_models.ChatAI21
ChatAI21 chat model.
langchain_ai21.contextual_answers¶
Classes¶
contextual_answers.AI21ContextualAnswers
Create a new model by parsing and validating input data from keyword arguments.
contextual_answers.ContextualAnswerInput
langchain_ai21.embeddings¶
Classes¶
embeddings.AI21Embeddings
AI21 Embeddings embedding model.
langchain_ai21.llms¶
Classes¶
llms.AI21LLM
AI21LLM large language models.
langchain_ai21.semantic_text_splitter¶
Classes¶
semantic_text_splitter.AI21SemanticTextSplitter([...])
Splitting text into coherent and readable units, based on distinct topics and lines.
langchain_exa 0.1.0¶
langchain_exa.retrievers¶
Classes¶
retrievers.ExaSearchRetriever
Exa Search retriever.
langchain_exa.tools¶
Tool for the Exa Search API.
Classes¶
tools.ExaFindSimilarResults
Tool that queries the Exa Search API and gets back JSON.
tools.ExaSearchResults
Tool that queries the Exa Search API and gets back JSON.
langchain_text_splitters 0.2.1¶
langchain_text_splitters.base¶
Classes¶
base.Language(value)
Enum of the programming languages.
base.TextSplitter(chunk_size, chunk_overlap, ...)
Interface for splitting text into chunks.
base.TokenTextSplitter([encoding_name, ...])
Splitting text to tokens using model tokenizer.
base.Tokenizer(chunk_overlap, ...)
Tokenizer data class.
Functions¶
base.split_text_on_tokens(*, text, tokenizer)
Split incoming text and return chunks using tokenizer.
langchain_text_splitters.character¶
Classes¶
character.CharacterTextSplitter([separator, ...])
Splitting text that looks at characters.
character.RecursiveCharacterTextSplitter([...])
Splitting text by recursively looking at characters.
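A short sketch of the character-based splitters (chunk sizes are illustrative):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=200,    # max characters per chunk
    chunk_overlap=20,  # characters shared between neighbouring chunks
)
text = "A long document about several topics. " * 50
chunks = splitter.split_text(text)          # list[str]
docs = splitter.create_documents([text])    # list[Document]
print(len(chunks), len(docs))
```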
langchain_text_splitters.html¶
Classes¶
html.ElementType
Element type as typed dict.
html.HTMLHeaderTextSplitter(headers_to_split_on)
Splitting HTML files based on specified headers.
html.HTMLSectionSplitter(headers_to_split_on)
Splitting HTML files based on specified tag and font sizes.
langchain_text_splitters.json¶
Classes¶
json.RecursiveJsonSplitter([max_chunk_size, ...])
langchain_text_splitters.konlpy¶
Classes¶
konlpy.KonlpyTextSplitter([separator])
Splitting text using Konlpy package.
langchain_text_splitters.latex¶
Classes¶
latex.LatexTextSplitter(**kwargs)
Attempts to split the text along Latex-formatted layout elements.
langchain_text_splitters.markdown¶
Classes¶
markdown.HeaderType
Header type as typed dict.
markdown.LineType
Line type as typed dict.
markdown.MarkdownHeaderTextSplitter(...[, ...])
Splitting markdown files based on specified headers.
markdown.MarkdownTextSplitter(**kwargs)
Attempts to split the text along Markdown-formatted headings.
langchain_text_splitters.nltk¶
Classes¶
nltk.NLTKTextSplitter([separator, language])
Splitting text using NLTK package.
langchain_text_splitters.python¶
Classes¶
python.PythonCodeTextSplitter(**kwargs)
Attempts to split the text along Python syntax.
langchain_text_splitters.sentence_transformers¶
Classes¶
sentence_transformers.SentenceTransformersTokenTextSplitter([...])
Splitting text to tokens using sentence model tokenizer.
langchain_text_splitters.spacy¶
Classes¶
spacy.SpacyTextSplitter([separator, ...])
Splitting text using Spacy package.
langchain_fireworks 0.1.3¶
langchain_fireworks.chat_models¶
Fireworks chat wrapper.
Classes¶
chat_models.ChatFireworks
Fireworks Chat large language models API.
langchain_fireworks.embeddings¶
Classes¶
embeddings.FireworksEmbeddings
FireworksEmbeddings embedding model.
langchain_fireworks.llms¶
Wrapper around Fireworks AI’s Completion API.
Classes¶
llms.Fireworks
LLM models from Fireworks.
langchain_cohere 0.1.6¶
langchain_cohere.chat_models¶
Classes¶
chat_models.ChatCohere
Implements the BaseChatModel (and BaseLanguageModel) interface with Cohere's large language models.
Functions¶
chat_models.get_cohere_chat_request(messages, *)
Get the request for the Cohere chat API.
chat_models.get_role(message)
Get the role of the message.
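A minimal sketch of the chat model (assumptions: COHERE_API_KEY is set in the environment; the model name is illustrative):

```python
from langchain_cohere import ChatCohere

llm = ChatCohere(model="command-r")  # assumption: model name
response = llm.invoke("Write a one-line haiku about retrieval.")
print(response.content)
```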
langchain_cohere.cohere_agent¶
Functions¶
cohere_agent.create_cohere_tools_agent(llm, ...)
langchain_cohere.common¶
Classes¶
common.CohereCitation(start, end, text, ...)
Cohere has fine-grained citations that specify the exact part of text.
langchain_cohere.embeddings¶
Classes¶
embeddings.CohereEmbeddings
Implements the Embeddings interface with Cohere's text representation language models.
langchain_cohere.llms¶
Classes¶
llms.BaseCohere
Base class for Cohere models.
llms.Cohere
Cohere large language models.
Functions¶
llms.acompletion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.completion_with_retry(llm, **kwargs)
Use tenacity to retry the completion call.
llms.enforce_stop_tokens(text, stop)
Cut off the text as soon as any stop words occur.
langchain_cohere.rag_retrievers¶
Classes¶
rag_retrievers.CohereRagRetriever
Cohere Chat API with RAG.
langchain_cohere.react_multi_hop¶
Classes¶
react_multi_hop.parsing.CohereToolsReactAgentOutputParser
Parses a message into agent actions/finish.
Functions¶
react_multi_hop.agent.create_cohere_react_agent(...)
Create an agent that enables multiple tools to be used in sequence to complete a task.
react_multi_hop.parsing.parse_actions(generation)
Parse action selections from model output.
react_multi_hop.parsing.parse_answer_with_prefixes(...)
Parses a string into key-value pairs.
react_multi_hop.parsing.parse_citations(...)
Parses a grounded_generation (from parse_actions) and documents (from convert_to_documents) into a (generation, CohereCitation list) tuple.
react_multi_hop.parsing.parse_jsonified_tool_use_generation(...)
Parses model-generated jsonified actions.
react_multi_hop.prompt.convert_to_documents(...)
Converts observations into a 'document' dict
react_multi_hop.prompt.create_directly_answer_tool()
directly_answer is a special tool that's always presented to the model as an available tool.
react_multi_hop.prompt.multi_hop_prompt(...)
The returned function produces a BasePromptTemplate suitable for multi-hop.
react_multi_hop.prompt.render_intermediate_steps(...)
Renders an agent's intermediate steps into prompt content.
react_multi_hop.prompt.render_messages(messages)
Renders one or more BaseMessage implementations into prompt content.
react_multi_hop.prompt.render_observations(...)
Renders the 'output' part of an Agent's intermediate step into prompt content.
react_multi_hop.prompt.render_role(message)
Renders the role of a message into prompt content.
react_multi_hop.prompt.render_structured_preamble([...])
Renders the structured preamble part of the prompt content.
react_multi_hop.prompt.render_tool(tool)
Renders a tool into prompt content
react_multi_hop.prompt.render_tool_args(tool)
Renders the 'Args' section of a tool's prompt content.
react_multi_hop.prompt.render_tool_signature(tool)
Renders the signature of a tool into prompt content.
react_multi_hop.prompt.render_type(type_, ...)
Renders a tool's type into prompt content.
langchain_cohere.rerank¶
Classes¶
rerank.CohereRerank
Document compressor that uses Cohere Rerank API.
langchain_aws 0.1.6¶
langchain_aws.chat_models¶
Classes¶
chat_models.bedrock.BedrockChat
[Deprecated]
chat_models.bedrock.ChatBedrock
A chat model that uses the Bedrock API.
chat_models.bedrock.ChatPromptAdapter()
Adapter class to prepare the inputs from LangChain into the prompt format that the Chat model expects.
Functions¶
chat_models.bedrock.convert_messages_to_prompt_anthropic(...)
Format a list of messages into a full prompt for the Anthropic model
chat_models.bedrock.convert_messages_to_prompt_llama(...)
Convert a list of messages to a prompt for llama.
chat_models.bedrock.convert_messages_to_prompt_llama3(...)
Convert a list of messages to a prompt for Llama 3.
chat_models.bedrock.convert_messages_to_prompt_mistral(...)
Convert a list of messages to a prompt for mistral.
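A minimal sketch of ChatBedrock (assumptions: AWS credentials are configured; the model id and region are illustrative):

```python
from langchain_aws import ChatBedrock

llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",  # assumption: any Bedrock chat model id
    region_name="us-east-1",
)
print(llm.invoke("Hello, Bedrock!").content)
```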
langchain_aws.embeddings¶
Classes¶
embeddings.bedrock.BedrockEmbeddings
Bedrock embedding models.
langchain_aws.function_calling¶
Methods for creating function specs in the style of Bedrock Functions for supported model providers.
Classes¶
function_calling.AnthropicTool
function_calling.FunctionDescription
Representation of a callable function to send to an LLM.
function_calling.ToolDescription
Representation of a callable function to the OpenAI API.
Functions¶
function_calling.convert_to_anthropic_tool(tool)
function_calling.get_system_message(tools)
langchain_aws.graphs¶
Classes¶
graphs.neptune_graph.BaseNeptuneGraph()
graphs.neptune_graph.NeptuneAnalyticsGraph(...)
Neptune Analytics wrapper for graph operations.
graphs.neptune_graph.NeptuneGraph(host[, ...])
Neptune wrapper for graph operations.
graphs.neptune_graph.NeptuneQueryException(...)
Exception for the Neptune queries.
graphs.neptune_rdf_graph.NeptuneRdfGraph(host)
Neptune wrapper for RDF graph operations.
langchain_aws.llms¶
Classes¶
llms.bedrock.Bedrock
[Deprecated]
llms.bedrock.BedrockBase
Base class for Bedrock models.
llms.bedrock.BedrockLLM
Bedrock models.
llms.bedrock.LLMInputOutputAdapter()
Adapter class to prepare the inputs from LangChain into a format that the LLM model expects.
llms.sagemaker_endpoint.ContentHandlerBase()
A handler class to transform input from LLM to a format that SageMaker endpoint expects.
llms.sagemaker_endpoint.LLMContentHandler()
Content handler for LLM class.
llms.sagemaker_endpoint.LineIterator(stream)
A helper class for parsing the byte stream input.
llms.sagemaker_endpoint.SagemakerEndpoint
Sagemaker Inference Endpoint models.
Functions¶
llms.sagemaker_endpoint.enforce_stop_tokens(...)
Cut off the text as soon as any stop words occur.
langchain_aws.retrievers¶
Classes¶
retrievers.bedrock.AmazonKnowledgeBasesRetriever
Amazon Bedrock Knowledge Bases retrieval.
retrievers.bedrock.RetrievalConfig
Configuration for retrieval.
retrievers.bedrock.VectorSearchConfig
Configuration for vector search.
retrievers.kendra.AdditionalResultAttribute
Additional result attribute.
retrievers.kendra.AdditionalResultAttributeValue
Value of an additional result attribute.
retrievers.kendra.AmazonKendraRetriever
Amazon Kendra Index retriever.
retrievers.kendra.DocumentAttribute
Document attribute.
retrievers.kendra.DocumentAttributeValue
Value of a document attribute.
retrievers.kendra.Highlight
Information that highlights the keywords in the excerpt.
retrievers.kendra.QueryResult
Amazon Kendra Query API search result.
retrievers.kendra.QueryResultItem
Query API result item.
retrievers.kendra.ResultItem
Base class of a result item.
retrievers.kendra.RetrieveResult
Amazon Kendra Retrieve API search result.
retrievers.kendra.RetrieveResultItem
Retrieve API result item.
retrievers.kendra.TextWithHighLights
Text with highlights.
Functions¶
retrievers.kendra.clean_excerpt(excerpt)
Clean an excerpt from Kendra.
retrievers.kendra.combined_text(item)
Combine a ResultItem title and excerpt into a single string.
langchain_aws.utils¶
Functions¶
utils.enforce_stop_tokens(text, stop)
Cut off the text as soon as any stop words occur.
utils.get_num_tokens_anthropic(text)
Get the number of tokens in a string of text.
utils.get_token_ids_anthropic(text)
Get the token ids for a string of text.
langchain_voyageai 0.1.1¶
langchain_voyageai.embeddings¶
Classes¶
embeddings.VoyageAIEmbeddings
VoyageAIEmbeddings embedding model.
langchain_voyageai.rerank¶
Classes¶
rerank.VoyageAIRerank
Document compressor that uses VoyageAI Rerank API.
langchain_couchbase 0.0.1¶
langchain_couchbase.vectorstores¶
Couchbase vector stores.
Classes¶
vectorstores.CouchbaseVectorStore(cluster, ...)
Couchbase vector store.
langchain_experimental 0.0.60¶
langchain_experimental.agents¶
Agent is a class that uses an LLM to choose
a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents,
a language model is used as a reasoning engine to determine which actions
to take and in which order.
Agents select and use Tools and Toolkits for actions.
Functions¶
agents.agent_toolkits.csv.base.create_csv_agent(...)
Create a pandas dataframe agent by loading a CSV into a dataframe.
agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent(llm, df)
Construct a Pandas agent from an LLM and dataframe(s).
agents.agent_toolkits.python.base.create_python_agent(...)
Construct a python agent from an LLM and tool.
agents.agent_toolkits.spark.base.create_spark_dataframe_agent(llm, df)
Construct a Spark agent from an LLM and dataframe.
agents.agent_toolkits.xorbits.base.create_xorbits_agent(...)
Construct a xorbits agent from an LLM and dataframe.
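A minimal sketch of the pandas dataframe agent (assumptions: the LLM and dataframe are placeholders; allow_dangerous_code reflects that the agent executes generated Python):

```python
import pandas as pd
from langchain_experimental.agents import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI  # assumption: any capable chat model works

df = pd.DataFrame({"country": ["DE", "FR"], "population_millions": [83, 68]})
agent = create_pandas_dataframe_agent(
    ChatOpenAI(model="gpt-4o-mini", temperature=0),
    df,
    verbose=True,
    allow_dangerous_code=True,  # the agent runs generated Python against the dataframe
)
agent.invoke("Which country has the larger population?")
```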
langchain_experimental.autonomous_agents¶
Autonomous agents in the Langchain experimental package include
[AutoGPT](https://github.com/Significant-Gravitas/AutoGPT),
[BabyAGI](https://github.com/yoheinakajima/babyagi),
and [HuggingGPT](https://arxiv.org/abs/2303.17580) agents that
interact with language models autonomously.
These agents have specific functionalities like memory management,
task creation, execution chains, and response generation.
They differ from ordinary agents by their autonomous decision-making capabilities,
memory handling, and specialized functionalities for tasks and response.
Classes¶
autonomous_agents.autogpt.agent.AutoGPT(...)
Agent for interacting with AutoGPT.
autonomous_agents.autogpt.memory.AutoGPTMemory
Memory for AutoGPT.
autonomous_agents.autogpt.output_parser.AutoGPTAction(...)
Action returned by AutoGPTOutputParser.
autonomous_agents.autogpt.output_parser.AutoGPTOutputParser
Output parser for AutoGPT.
autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser
Base Output parser for AutoGPT.
autonomous_agents.autogpt.prompt.AutoGPTPrompt
Prompt for AutoGPT.
autonomous_agents.autogpt.prompt_generator.PromptGenerator()
Generator of custom prompt strings.
autonomous_agents.baby_agi.baby_agi.BabyAGI
Controller model for the BabyAGI agent.
autonomous_agents.baby_agi.task_creation.TaskCreationChain
Chain generating tasks.
autonomous_agents.baby_agi.task_execution.TaskExecutionChain
Chain to execute tasks.
autonomous_agents.baby_agi.task_prioritization.TaskPrioritizationChain
Chain to prioritize tasks.
autonomous_agents.hugginggpt.hugginggpt.HuggingGPT(...)
Agent for interacting with HuggingGPT.
autonomous_agents.hugginggpt.repsonse_generator.ResponseGenerationChain
Chain to execute tasks.
autonomous_agents.hugginggpt.repsonse_generator.ResponseGenerator(...)
Generates a response based on the input.
autonomous_agents.hugginggpt.task_executor.Task(...)
Task to be executed.
autonomous_agents.hugginggpt.task_executor.TaskExecutor(plan)
Load tools and execute tasks.
autonomous_agents.hugginggpt.task_planner.BasePlanner
Base class for a planner.
autonomous_agents.hugginggpt.task_planner.Plan(steps)
A plan to execute.
autonomous_agents.hugginggpt.task_planner.PlanningOutputParser
Parses the output of the planning stage.
autonomous_agents.hugginggpt.task_planner.Step(...)
A step in the plan.
autonomous_agents.hugginggpt.task_planner.TaskPlaningChain
Chain to execute tasks.
autonomous_agents.hugginggpt.task_planner.TaskPlanner
Planner for tasks.
Functions¶
autonomous_agents.autogpt.output_parser.preprocess_json_input(...)
Preprocesses a string to be parsed as json.
autonomous_agents.autogpt.prompt_generator.get_prompt(tools)
Generates a prompt string.
autonomous_agents.hugginggpt.repsonse_generator.load_response_generator(llm)
Load the ResponseGenerator.
autonomous_agents.hugginggpt.task_planner.load_chat_planner(llm)
Load the chat planner.
langchain_experimental.chat_models¶
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose
is a bit different. Rather than expose a “text in, text out” API, they expose
an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
Classes¶
chat_models.llm_wrapper.ChatWrapper
Wrapper for chat LLMs.
chat_models.llm_wrapper.Llama2Chat
Wrapper for Llama-2-chat model.
chat_models.llm_wrapper.Mixtral
See https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format
chat_models.llm_wrapper.Orca
Wrapper for Orca-style models.
chat_models.llm_wrapper.Vicuna
Wrapper for Vicuna-style models.
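A minimal sketch of the wrapper idea (assumptions: the wrapped LLM is a placeholder; any text-completion LLM serving a Llama-2-chat model would do):

```python
from langchain_community.llms import HuggingFaceTextGenInference  # assumption: one possible backend
from langchain_core.messages import HumanMessage
from langchain_experimental.chat_models import Llama2Chat

raw_llm = HuggingFaceTextGenInference(inference_server_url="http://localhost:8080")
chat_model = Llama2Chat(llm=raw_llm)  # adds the Llama-2 chat prompt format around the raw LLM
print(chat_model.invoke([HumanMessage(content="Hi, who are you?")]).content)
```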
langchain_experimental.comprehend_moderation¶
Comprehend Moderation is used to detect and handle Personally Identifiable Information (PII),
toxicity, and prompt safety in text.
The Langchain experimental package includes the AmazonComprehendModerationChain class
for the comprehend moderation tasks. It is based on Amazon Comprehend service.
This class can be configured with specific moderation settings like PII labels, redaction,
toxicity thresholds, and prompt safety thresholds.
See more at https://aws.amazon.com/comprehend/
Amazon Comprehend service is used by several other classes:
- ComprehendToxicity class is used to check the toxicity of text prompts using the
AWS Comprehend service and take actions based on the configuration.
- ComprehendPromptSafety class is used to validate the safety of given prompt
text, raising an error if unsafe content is detected based on the specified threshold.
- ComprehendPII class is designed to handle Personally Identifiable Information (PII)
moderation tasks, detecting and managing PII entities in text inputs.
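A minimal configuration sketch (assumptions: AWS credentials and boto3 are available; the region and input text are placeholders):

```python
import boto3
from langchain_experimental.comprehend_moderation import AmazonComprehendModerationChain

comprehend_client = boto3.client("comprehend", region_name="us-east-1")
moderation = AmazonComprehendModerationChain(client=comprehend_client, verbose=True)

# With the default configuration, detected PII, toxicity, or unsafe prompts
# raise the corresponding Moderation*Error; clean text passes through.
result = moderation.invoke("A harmless question about the weather.")
```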
Classes¶
comprehend_moderation.amazon_comprehend_moderation.AmazonComprehendModerationChain
Moderation Chain, based on Amazon Comprehend service.
comprehend_moderation.base_moderation.BaseModeration(client)
Base class for moderation.
comprehend_moderation.base_moderation_callbacks.BaseModerationCallbackHandler()
Base class for moderation callback handlers.
comprehend_moderation.base_moderation_config.BaseModerationConfig
Base configuration settings for moderation.
comprehend_moderation.base_moderation_config.ModerationPiiConfig
Configuration for PII moderation filter.
comprehend_moderation.base_moderation_config.ModerationPromptSafetyConfig
Configuration for Prompt Safety moderation filter.
comprehend_moderation.base_moderation_config.ModerationToxicityConfig
Configuration for Toxicity moderation filter.
comprehend_moderation.base_moderation_exceptions.ModerationPiiError([...])
Exception raised if PII entities are detected.
comprehend_moderation.base_moderation_exceptions.ModerationPromptSafetyError([...])
Exception raised if Unsafe prompts are detected.
comprehend_moderation.base_moderation_exceptions.ModerationToxicityError([...])
Exception raised if Toxic entities are detected.
comprehend_moderation.pii.ComprehendPII(client)
Class to handle Personally Identifiable Information (PII) moderation.
comprehend_moderation.prompt_safety.ComprehendPromptSafety(client)
Class to handle prompt safety moderation.
comprehend_moderation.toxicity.ComprehendToxicity(client)
Class to handle toxicity moderation.
langchain_experimental.cpal¶
Causal program-aided language (CPAL) is a concept implemented in LangChain as
a chain for causal modeling and narrative decomposition.
CPAL improves upon the program-aided language (PAL) by incorporating
causal structure to prevent hallucination in language models,
particularly when dealing with complex narratives and math
problems with nested dependencies.
CPAL involves translating causal narratives into a stack of operations,
setting hypothetical conditions for causal models, and decomposing
narratives into story elements.
It allows for the creation of causal chains that define the relationships
between different elements in a narrative, enabling the modeling and analysis
of causal relationships within a given context.
Classes¶
cpal.base.CPALChain
Causal program-aided language (CPAL) chain implementation.
cpal.base.CausalChain
Translate the causal narrative into a stack of operations.
cpal.base.InterventionChain
Set the hypothetical conditions for the causal model.
cpal.base.NarrativeChain
Decompose the narrative into its story elements.
cpal.base.QueryChain
Query the outcome table using SQL.
cpal.constants.Constant(value)
Enum for constants used in the CPAL.
cpal.models.CausalModel
Causal data.
cpal.models.EntityModel
Entity in the story.
cpal.models.EntitySettingModel
Entity initial conditions.
cpal.models.InterventionModel
Intervention data of the story aka initial conditions.
cpal.models.NarrativeModel
Narrative input as three story elements.
cpal.models.QueryModel
Query data of the story.
cpal.models.ResultModel
Result of the story query.
cpal.models.StoryModel
Story data.
cpal.models.SystemSettingModel
System initial conditions.
langchain_experimental.data_anonymizer¶
Data anonymizer contains both Anonymizers and Deanonymizers.
It uses the [Microsoft Presidio](https://microsoft.github.io/presidio/) library.
Anonymizers are used to replace a Personally Identifiable Information (PII)
entity text with some other
value by applying a certain operator (e.g. replace, mask, redact, encrypt).
Deanonymizers are used to revert the anonymization operation
(e.g. to decrypt an encrypted text).
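A minimal sketch (assumptions: the presidio-analyzer, presidio-anonymizer, and Faker extras are installed; the sample text is illustrative):

```python
from langchain_experimental.data_anonymizer import PresidioReversibleAnonymizer

anonymizer = PresidioReversibleAnonymizer()
masked = anonymizer.anonymize("My name is John Doe, call me at 313-555-0123.")
print(masked)                          # PII replaced with generated fake values
print(anonymizer.deanonymize(masked))  # reversible: original values restored from the mapping
```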
Classes¶
data_anonymizer.base.AnonymizerBase()
Base abstract class for anonymizers.
data_anonymizer.base.ReversibleAnonymizerBase()
Base abstract class for reversible anonymizers.
data_anonymizer.deanonymizer_mapping.DeanonymizerMapping(...)
Deanonymizer mapping.
data_anonymizer.presidio.PresidioAnonymizer([...])
Anonymizer using Microsoft Presidio.
data_anonymizer.presidio.PresidioAnonymizerBase([...])
Base Anonymizer using Microsoft Presidio.
data_anonymizer.presidio.PresidioReversibleAnonymizer([...])
Reversible Anonymizer using Microsoft Presidio.
Functions¶
data_anonymizer.deanonymizer_mapping.create_anonymizer_mapping(...)
Create or update the mapping used to anonymize and/or deanonymize text.
data_anonymizer.deanonymizer_mapping.format_duplicated_operator(...)
Format the operator name with the count.
data_anonymizer.deanonymizer_matching_strategies.case_insensitive_matching_strategy(...)
Case insensitive matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.combined_exact_fuzzy_matching_strategy(...)
Combined exact and fuzzy matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.exact_matching_strategy(...)
Exact matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.fuzzy_matching_strategy(...)
Fuzzy matching strategy for deanonymization.
data_anonymizer.deanonymizer_matching_strategies.ngram_fuzzy_matching_strategy(...)
N-gram fuzzy matching strategy for deanonymization.
data_anonymizer.faker_presidio_mapping.get_pseudoanonymizer_mapping([seed])
Get a mapping of entities to pseudo anonymize them.
langchain_experimental.fallacy_removal¶
Fallacy Removal Chain runs a self-review of logical fallacies,
as determined by the paper
[Robust and Explainable Identification of Logical Fallacies in Natural
Language Arguments](https://arxiv.org/pdf/2212.07425.pdf).
It is modeled after Constitutional AI and uses the same format, but applies logical
fallacies as generalized rules to remove them from the output.
Classes¶
fallacy_removal.base.FallacyChain
Chain for applying logical fallacy evaluations.
fallacy_removal.models.LogicalFallacy
Logical fallacy.
langchain_experimental.generative_agents¶
Generative Agent primitives.
Classes¶
generative_agents.generative_agent.GenerativeAgent
Agent as a character with memory and innate characteristics.
generative_agents.memory.GenerativeAgentMemory
Memory for the generative agent.
langchain_experimental.graph_transformers¶
Graph Transformers transform Documents into Graph Documents.
Classes¶
graph_transformers.diffbot.DiffbotGraphTransformer(...)
Transform documents into graph documents using Diffbot NLP API.
graph_transformers.diffbot.NodesList()
List of nodes with associated properties.
graph_transformers.diffbot.SimplifiedSchema()
Simplified schema mapping.
graph_transformers.diffbot.TypeOption(value)
An enumeration.
graph_transformers.llm.LLMGraphTransformer(llm)
Transform documents into graph-based documents using a LLM.
graph_transformers.llm.UnstructuredRelation
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
graph_transformers.diffbot.format_property_key(s)
Formats a string to be used as a property key.
graph_transformers.llm.create_simple_model([...])
Create a simple model that allows limiting node and/or relationship types.
graph_transformers.llm.create_unstructured_prompt([...])
graph_transformers.llm.format_property_key(s)
graph_transformers.llm.map_to_base_node(node)
Map the SimpleNode to the base Node.
graph_transformers.llm.map_to_base_relationship(rel)
Map the SimpleRelationship to the base Relationship.
graph_transformers.llm.optional_enum_field([...])
Utility function to conditionally create a field with an enum constraint.
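A minimal sketch of the LLM-based graph transformer (assumptions: the chat model is a placeholder; any function-calling chat model should work):

```python
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI  # assumption: any function-calling chat model

transformer = LLMGraphTransformer(llm=ChatOpenAI(model="gpt-4o-mini", temperature=0))
docs = [Document(page_content="Marie Curie worked with Pierre Curie in Paris.")]
graph_docs = transformer.convert_to_graph_documents(docs)
print(graph_docs[0].nodes, graph_docs[0].relationships)
```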
langchain_experimental.llm_bash¶
LLM bash is a chain that uses an LLM to interpret a prompt and execute bash code.
Classes¶
llm_bash.base.LLMBashChain
Chain that interprets a prompt and executes bash operations.
llm_bash.bash.BashProcess([strip_newlines, ...])
Wrapper for starting subprocesses.
llm_bash.prompt.BashOutputParser
Parser for bash output.
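A minimal sketch (assumptions: the LLM is a placeholder; note that the chain executes generated shell commands, so treat it as unsafe outside a sandbox):

```python
from langchain_experimental.llm_bash.base import LLMBashChain
from langchain_openai import OpenAI  # assumption: any completion-style LLM

chain = LLMBashChain.from_llm(OpenAI(temperature=0), verbose=True)
chain.invoke("List all .txt files in the current directory.")  # LLM writes the bash, BashProcess runs it
```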
langchain_experimental.llm_symbolic_math¶
Chain that interprets a prompt and executes python code to do math.
Heavily borrowed from llm_math, uses the [SymPy](https://www.sympy.org/) package.
Classes¶
llm_symbolic_math.base.LLMSymbolicMathChain
Chain that interprets a prompt and executes python code to do symbolic math.
langchain_experimental.llms¶
Experimental LLM classes provide
access to the large language model (LLM) APIs and services.
Classes¶
llms.anthropic_functions.AnthropicFunctions
[Deprecated] Chat model for interacting with Anthropic functions.
llms.anthropic_functions.TagParser()
Parser for the tool tags.
llms.jsonformer_decoder.JsonFormer
Jsonformer wrapped LLM using HuggingFace Pipeline API.
llms.llamaapi.ChatLlamaAPI
Chat model using the Llama API.
llms.lmformatenforcer_decoder.LMFormatEnforcer
LMFormatEnforcer wrapped LLM using HuggingFace Pipeline API.
llms.ollama_functions.OllamaFunctions
Function chat model that uses Ollama API.
llms.rellm_decoder.RELLM
RELLM wrapped LLM using HuggingFace Pipeline API.
Functions¶
llms.jsonformer_decoder.import_jsonformer()
Lazily import the jsonformer package.
llms.lmformatenforcer_decoder.import_lmformatenforcer()
Lazily import the lmformatenforcer package.
llms.ollama_functions.convert_to_ollama_tool(tool)
Convert a tool to an Ollama tool.
llms.ollama_functions.parse_response(message)
Extract function_call from AIMessage.
llms.rellm_decoder.import_rellm()
Lazily import the rellm package.
langchain_experimental.open_clip¶
OpenCLIP Embeddings model.
OpenCLIP is a multimodal model that can encode text and images into a shared space.
See this paper for more details: https://arxiv.org/abs/2103.00020
and [this repository](https://github.com/mlfoundations/open_clip) for details.
Classes¶
open_clip.open_clip.OpenCLIPEmbeddings
OpenCLIP Embeddings model.
langchain_experimental.pal_chain¶
PAL Chain implements Program-Aided Language Models.
See the paper: https://arxiv.org/pdf/2211.10435.pdf.
This chain is vulnerable to [arbitrary code execution](https://github.com/langchain-ai/langchain/issues/5872).
Classes¶
pal_chain.base.PALChain
Chain that implements Program-Aided Language Models (PAL).
pal_chain.base.PALValidation([...])
Validation for PAL generated code.
langchain_experimental.plan_and_execute¶
Plan-and-execute agents plan tasks with a language model (LLM) and
execute them with a separate agent.
Classes¶
plan_and_execute.agent_executor.PlanAndExecute
Plan and execute a chain of steps.
plan_and_execute.executors.base.BaseExecutor
Base executor.
plan_and_execute.executors.base.ChainExecutor
Chain executor.
plan_and_execute.planners.base.BasePlanner
Base planner.
plan_and_execute.planners.base.LLMPlanner
LLM planner.
plan_and_execute.planners.chat_planner.PlanningOutputParser
Planning output parser.
plan_and_execute.schema.BaseStepContainer
Base step container.
plan_and_execute.schema.ListStepContainer
Container for List of steps.
plan_and_execute.schema.Plan
Plan.
plan_and_execute.schema.PlanOutputParser
Plan output parser.
plan_and_execute.schema.Step
Step.
plan_and_execute.schema.StepResponse
Step response.
Functions¶
plan_and_execute.executors.agent_executor.load_agent_executor(...)
Load an agent executor.
plan_and_execute.planners.chat_planner.load_chat_planner(llm)
Load a chat planner.
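A minimal wiring sketch (assumptions: the LLM and tool list are placeholders; duckduckgo-search must be installed for the example tool):

```python
from langchain_community.tools import DuckDuckGoSearchRun  # assumption: any tool list works
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)
from langchain_openai import ChatOpenAI  # assumption: any chat model

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
tools = [DuckDuckGoSearchRun()]

planner = load_chat_planner(llm)                           # plans the list of steps
executor = load_agent_executor(llm, tools, verbose=True)   # executes each step with the tools
agent = PlanAndExecute(planner=planner, executor=executor)
agent.invoke("Find the tallest mountain in Europe and convert its height to feet.")
```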
langchain_experimental.prompt_injection_identifier¶
HuggingFace Injection Identifier is a tool that uses
[HuggingFace Prompt Injection model](https://huggingface.co/deepset/deberta-v3-base-injection)
to detect prompt injection attacks.
Classes¶
prompt_injection_identifier.hugging_face_identifier.HuggingFaceInjectionIdentifier
Tool that uses HuggingFace Prompt Injection model to detect prompt injection attacks.
prompt_injection_identifier.hugging_face_identifier.PromptInjectionException([...])
Exception raised when prompt injection attack is detected.
langchain_experimental.recommenders¶
Amazon Personalize primitives.
[Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html)
is a fully managed machine learning service that uses your data to generate
item recommendations for your users.
Classes¶
recommenders.amazon_personalize.AmazonPersonalize([...])
Amazon Personalize Runtime wrapper for executing real-time operations.
recommenders.amazon_personalize_chain.AmazonPersonalizeChain
Chain for retrieving recommendations from Amazon Personalize.
langchain_experimental.retrievers¶
Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to
store documents, only to return (or retrieve) them.
Classes¶
retrievers.vector_sql_database.VectorSQLDatabaseChainRetriever
Retriever that uses Vector SQL Database.
langchain_experimental.rl_chain¶
RL (Reinforcement Learning) Chain leverages the Vowpal Wabbit (VW) models
for reinforcement learning with a context, with the goal of modifying
the prompt before the LLM call.
[Vowpal Wabbit](https://vowpalwabbit.org/) provides fast, efficient,
and flexible online machine learning techniques for reinforcement learning,
supervised learning, and more.
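A rough sketch of PickBest based on the documented pattern (assumptions: vowpal_wabbit_next and sentence-transformers are installed; the prompt, LLM, candidates, and context are placeholders):

```python
import langchain_experimental.rl_chain as rl_chain
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI  # assumption: any LLM

prompt = PromptTemplate.from_template(
    "Given that {user} likes short answers, recommend one of these meals: {meal}."
)
chain = rl_chain.PickBest.from_llm(llm=OpenAI(temperature=0), prompt=prompt)
result = chain.invoke(
    {
        "meal": rl_chain.ToSelectFrom(["pizza", "sushi", "salad"]),  # candidates to pick from
        "user": rl_chain.BasedOn("Tom"),                             # context the policy learns on
    }
)
print(result)
```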
Classes¶
rl_chain.base.AutoSelectionScorer
Auto selection scorer.
rl_chain.base.Embedder(*args, **kwargs)
Abstract class to represent an embedder.
rl_chain.base.Event(inputs[, selected])
Abstract class to represent an event.
rl_chain.base.Policy(**kwargs)
Abstract class to represent a policy.
rl_chain.base.RLChain
Chain that leverages the Vowpal Wabbit (VW) model as a learned policy for reinforcement learning.
rl_chain.base.Selected()
Abstract class to represent the selected item.
rl_chain.base.SelectionScorer
Abstract class to grade the chosen selection or the response of the LLM.
rl_chain.base.VwPolicy(model_repo, vw_cmd, ...)
Vowpal Wabbit policy.
rl_chain.metrics.MetricsTrackerAverage(step)
Metrics Tracker Average.
rl_chain.metrics.MetricsTrackerRollingWindow(...)
Metrics Tracker Rolling Window.
rl_chain.model_repository.ModelRepository(folder)
Model Repository.
rl_chain.pick_best_chain.PickBest
Chain that leverages the Vowpal Wabbit (VW) model for reinforcement learning with a context, with the goal of modifying the prompt before the LLM call.
rl_chain.pick_best_chain.PickBestEvent(...)
Event class for PickBest chain.
rl_chain.pick_best_chain.PickBestFeatureEmbedder(...)
Embed the BasedOn and ToSelectFrom inputs into a format that can be used by the learning policy.
rl_chain.pick_best_chain.PickBestRandomPolicy(...)
Random policy for PickBest chain.
rl_chain.pick_best_chain.PickBestSelected([...])
Selected class for PickBest chain.
rl_chain.vw_logger.VwLogger(path)
Vowpal Wabbit custom logger.
Functions¶
rl_chain.base.BasedOn(anything)
Wrap a value to indicate that it should be based on.
rl_chain.base.Embed(anything[, keep])
Wrap a value to indicate that it should be embedded.
rl_chain.base.EmbedAndKeep(anything)
Wrap a value to indicate that it should be embedded and kept.
rl_chain.base.ToSelectFrom(anything)
Wrap a value to indicate that it should be selected from.
rl_chain.base.embed(to_embed, model[, namespace])
Embed the actions or context using the SentenceTransformer model (or a model that has an encode function).
rl_chain.base.embed_dict_type(item, model)
Embed a dictionary item.
rl_chain.base.embed_list_type(item, model[, ...])
Embed a list item.
rl_chain.base.embed_string_type(item, model)
Embed a string or an _Embed object.
rl_chain.base.get_based_on_and_to_select_from(inputs)
Get the BasedOn and ToSelectFrom from the inputs.
rl_chain.base.is_stringtype_instance(item)
Check if an item is a string.
rl_chain.base.parse_lines(parser, input_str)
Parse the input string into a list of examples.
rl_chain.base.prepare_inputs_for_autoembed(inputs)
Prepare the inputs for auto embedding.
rl_chain.base.stringify_embedding(embedding)
Convert an embedding to a string.
langchain_experimental.smart_llm¶
The SmartGPT chain applies self-critique using the SmartGPT workflow.
See details at https://youtu.be/wVzuvf9D9BU
The workflow performs these 3 steps:
1. Ideate: Pass the user prompt to an Ideation LLM n_ideas times; each result is an "idea".
2. Critique: Pass the ideas to a Critique LLM, which looks for flaws in the ideas and picks the best one.
3. Resolve: Pass the critique to a Resolver LLM, which improves upon the best idea and outputs only the (improved version of) the best output.
In total, the SmartGPT workflow uses n_ideas + 2 LLM calls.
Note that SmartLLMChain will only improve results (compared to a basic LLMChain) when the underlying models have the capability for reflection, which smaller models often don't.
Finally, a SmartLLMChain assumes that each underlying LLM outputs exactly one result.
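A minimal sketch (assumptions: the LLM, prompt, and question are placeholders):

```python
from langchain_core.prompts import PromptTemplate
from langchain_experimental.smart_llm import SmartLLMChain
from langchain_openai import ChatOpenAI  # assumption: any chat model capable of reflection

prompt = PromptTemplate.from_template("Solve step by step: {question}")
chain = SmartLLMChain(
    llm=ChatOpenAI(model="gpt-4o-mini", temperature=0.7),
    prompt=prompt,
    n_ideas=3,      # number of ideation calls before critique and resolution
    verbose=True,
)
chain.invoke({"question": "I have 12 apples, give away 5, then buy 7 more. How many now?"})
```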
Classes¶
smart_llm.base.SmartLLMChain
Chain for applying self-critique using the SmartGPT workflow.
langchain_experimental.sql¶
SQL Chain interacts with SQL Database.
Classes¶
sql.base.SQLDatabaseChain
Chain for interacting with SQL Database.
sql.base.SQLDatabaseSequentialChain
Chain for querying SQL database that is a sequential chain.
sql.vector_sql.VectorSQLDatabaseChain
Chain for interacting with Vector SQL Database.
sql.vector_sql.VectorSQLOutputParser
Output Parser for Vector SQL.
sql.vector_sql.VectorSQLRetrieveAllOutputParser
Parser based on VectorSQLOutputParser.
Functions¶
sql.vector_sql.get_result_from_sqldb(db, cmd)
Get result from SQL Database.
langchain_experimental.tabular_synthetic_data¶
Generate tabular synthetic data using LLM and few-shot template.
Classes¶
tabular_synthetic_data.base.SyntheticDataGenerator
Generate synthetic data using the given LLM and few-shot template.
Functions¶
tabular_synthetic_data.openai.create_openai_data_generator(...)
Create an instance of SyntheticDataGenerator tailored for OpenAI models.
langchain_experimental.text_splitter¶
Experimental text splitter based on semantic similarity.
Classes¶
text_splitter.SemanticChunker(embeddings[, ...])
Split the text based on semantic similarity.
Functions¶
text_splitter.calculate_cosine_distances(...)
Calculate cosine distances between sentences.
text_splitter.combine_sentences(sentences[, ...])
Combine sentences based on buffer size.
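A minimal sketch of the semantic splitter (assumptions: the embedding model is a placeholder; any Embeddings implementation should work):

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings  # assumption: any Embeddings implementation

splitter = SemanticChunker(OpenAIEmbeddings())
docs = splitter.create_documents(
    ["First topic sentence. More on the first topic. A completely different topic starts here."]
)
print(len(docs))  # chunks break where the embedding distance between sentences jumps
```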
langchain_experimental.tools¶
Experimental Python REPL tools.
Classes¶
tools.python.tool.PythonAstREPLTool
Tool for running python code in a REPL.
tools.python.tool.PythonInputs
Python inputs.
tools.python.tool.PythonREPLTool
Tool for running python code in a REPL.
Functions¶
tools.python.tool.sanitize_input(query)
Sanitize input to the python REPL.
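A minimal sketch of the REPL tool (it executes arbitrary Python, so sandbox accordingly):

```python
from langchain_experimental.tools.python.tool import PythonREPLTool

tool = PythonREPLTool()
print(tool.invoke("print(2 + 2)"))  # returns the captured stdout, e.g. "4\n"
```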
langchain_experimental.tot¶
Implementation of a Tree of Thought (ToT) chain based on the paper
[Large Language Model Guided Tree-of-Thought](https://arxiv.org/pdf/2305.08291.pdf).
The Tree of Thought (ToT) chain uses a tree structure to explore the space of
possible solutions to a problem.
Classes¶
tot.base.ToTChain
Chain implementing the Tree of Thought (ToT).
tot.checker.ToTChecker
Tree of Thought (ToT) checker.
tot.controller.ToTController([c])
Tree of Thought (ToT) controller.
tot.memory.ToTDFSMemory([stack])
Memory for the Tree of Thought (ToT) chain.
tot.prompts.CheckerOutputParser
Parse and check the output of the language model.
tot.prompts.JSONListOutputParser
Parse the output of a PROPOSE_PROMPT response.
tot.thought.Thought
A thought in the ToT.
tot.thought.ThoughtValidity(value)
Enum for the validity of a thought.
tot.thought_generation.BaseThoughtGenerationStrategy
Base class for a thought generation strategy.
tot.thought_generation.ProposePromptStrategy
Strategy that sequentially uses a "propose prompt".
tot.thought_generation.SampleCoTStrategy
Sample strategy from a Chain-of-Thought (CoT) prompt.
Functions¶
tot.prompts.get_cot_prompt()
Get the prompt for the Chain of Thought (CoT) chain.
tot.prompts.get_propose_prompt()
Get the prompt for the PROPOSE_PROMPT chain.
langchain_experimental.utilities¶
Utility that simulates a standalone Python REPL.
Classes¶
utilities.python.PythonREPL
Simulates a standalone Python REPL.
langchain_experimental.video_captioning¶
Classes¶
video_captioning.base.VideoCaptioningChain
Video Captioning Chain.
video_captioning.models.AudioModel(...)
video_captioning.models.BaseModel(...)
video_captioning.models.CaptionModel(...)
video_captioning.models.VideoModel(...)
video_captioning.services.audio_service.AudioProcessor(api_key)
video_captioning.services.caption_service.CaptionProcessor(llm)
video_captioning.services.combine_service.CombineProcessor(llm)
video_captioning.services.image_service.ImageProcessor([...])
video_captioning.services.srt_service.SRTProcessor()
langchain_google_genai 1.0.6¶
langchain_google_genai.chat_models¶
Classes¶
chat_models.ChatGoogleGenerativeAI
Google Generative AI Chat models API.
chat_models.ChatGoogleGenerativeAIError
Custom exception class for errors associated with the Google GenAI API.
langchain_google_genai.embeddings¶
Classes¶
embeddings.GoogleGenerativeAIEmbeddings
Google Generative AI Embeddings.
langchain_google_genai.genai_aqa¶
Google GenerativeAI Attributed Question and Answering (AQA) service.
The GenAI Semantic AQA API is a managed end-to-end service that allows
developers to create responses grounded on specified passages based on
a user query. For more information visit:
https://developers.generativeai.google/guide
Classes¶
genai_aqa.AqaInput
Input to GenAIAqa.invoke.
genai_aqa.AqaOutput
Output from GenAIAqa.invoke.
genai_aqa.GenAIAqa
Google's Attributed Question and Answering service.
langchain_google_genai.google_vector_store¶
Google Generative AI Vector Store.
The GenAI Semantic Retriever API is a managed end-to-end service that allows
developers to create a corpus of documents to perform semantic search on
related passages given a user query. For more information visit:
https://developers.generativeai.google/guide
Classes¶
google_vector_store.DoesNotExistsException(*, ...)
google_vector_store.GoogleVectorStore(*, ...)
Google GenerativeAI Vector Store.
google_vector_store.ServerSideEmbedding()
Do nothing embedding model where the embedding is done by the server.
langchain_google_genai.llms¶
Classes¶
llms.GoogleGenerativeAI
Google GenerativeAI models.
llms.GoogleModelFamily(value)
An enumeration.
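A minimal sketch (assumptions: GOOGLE_API_KEY is set in the environment; the model names are illustrative):

```python
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")  # assumption: model name
print(llm.invoke("Say hello in French.").content)

embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")  # assumption: model name
vector = embeddings.embed_query("hello world")
print(len(vector))
```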
langchain_google_community 1.0.5¶
langchain_google_community.bigquery¶
Classes¶
bigquery.BigQueryLoader(query[, project, ...])
Load from the Google Cloud Platform BigQuery.
langchain_google_community.bigquery_vector_search¶
Vector Store in Google Cloud BigQuery.
Classes¶
bigquery_vector_search.BigQueryVectorSearch(...)
Google Cloud BigQuery vector store.
langchain_google_community.docai¶
Module contains a PDF parser based on Document AI from Google Cloud.
You need to install two libraries to use this parser:
pip install google-cloud-documentai
pip install google-cloud-documentai-toolbox
Classes¶
docai.DocAIParser(*[, client, location, ...])
Google Cloud Document AI parser.
docai.DocAIParsingResults(source_path, ...)
A dataclass to store Document AI parsing results.
langchain_google_community.documentai_warehouse¶
Retriever wrapper for Google Cloud Document AI Warehouse.
Classes¶
documentai_warehouse.DocumentAIWarehouseRetriever
A retriever based on Document AI Warehouse.
langchain_google_community.drive¶
Classes¶
drive.GoogleDriveLoader
Load Google Docs from Google Drive.
langchain_google_community.gcs_directory¶
Classes¶
gcs_directory.GCSDirectoryLoader(...[, ...])
Load from GCS directory.
langchain_google_community.gcs_file¶
Classes¶
gcs_file.GCSFileLoader(project_name, bucket, ...)
Load from GCS file.
langchain_google_community.gmail¶
Classes¶
gmail.base.GmailBaseTool
Base class for Gmail tools.
gmail.create_draft.CreateDraftSchema
Input for CreateDraftTool.
gmail.create_draft.GmailCreateDraft
Tool that creates a draft email for Gmail.
gmail.get_message.GmailGetMessage
Tool that gets a message by ID from Gmail.
gmail.get_message.SearchArgsSchema
Input for GetMessageTool.
gmail.get_thread.GetThreadSchema
Input for GetMessageTool.
gmail.get_thread.GmailGetThread
Tool that gets a thread by ID from Gmail.
gmail.loader.GMailLoader(creds[, n, raise_error])
Load data from GMail.
gmail.search.GmailSearch
Tool that searches for messages or threads in Gmail.
gmail.search.Resource(value)
Enumerator of Resources to search.
gmail.search.SearchArgsSchema
Input for SearchGmailTool.
gmail.send_message.GmailSendMessage
Tool that sends a message to Gmail.
gmail.send_message.SendMessageSchema
Input for SendMessageTool.
gmail.toolkit.GmailToolkit
Toolkit for interacting with Gmail.
Functions¶
gmail.utils.build_resource_service([...])
Build a Gmail service.
gmail.utils.clean_email_body(body)
Clean email body.
gmail.utils.get_gmail_credentials([...])
Get credentials.
gmail.utils.import_google()
Import google libraries.
gmail.utils.import_googleapiclient_resource_builder()
Import googleapiclient.discovery.build function.
gmail.utils.import_installed_app_flow()
Import InstalledAppFlow class.
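A minimal sketch of the Gmail toolkit (assumptions: OAuth credentials exist locally, e.g. the credentials.json/token.json files from Google's Gmail API quickstart; paths and scopes are placeholders):

```python
from langchain_google_community import GmailToolkit
from langchain_google_community.gmail.utils import (
    build_resource_service,
    get_gmail_credentials,
)

credentials = get_gmail_credentials(
    token_file="token.json",                 # assumption: quickstart-style OAuth token
    client_secrets_file="credentials.json",  # assumption: downloaded client secrets
    scopes=["https://mail.google.com/"],
)
api_resource = build_resource_service(credentials=credentials)
toolkit = GmailToolkit(api_resource=api_resource)
print([tool.name for tool in toolkit.get_tools()])
```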
langchain_google_community.google_speech_to_text¶
Classes¶
google_speech_to_text.SpeechToTextLoader(...)
Loader for Google Cloud Speech-to-Text audio transcripts.
langchain_google_community.places_api¶
Chain that calls Google Places API.
Classes¶
places_api.GooglePlacesAPIWrapper
Wrapper around Google Places API.
places_api.GooglePlacesSchema
Input for GooglePlacesTool.
places_api.GooglePlacesTool
Tool that queries the Google places API.
langchain_google_community.search¶
Util that calls Google Search.
Classes¶
search.GoogleSearchAPIWrapper
Wrapper for Google Search API.
search.GoogleSearchResults
Tool that queries the Google Search API and gets back json.
search.GoogleSearchRun
Tool that queries the Google search API.
langchain_google_community.texttospeech¶
Classes¶
texttospeech.TextToSpeechTool
Tool that queries the Google Cloud Text to Speech API.
langchain_google_community.translate¶
Classes¶
translate.GoogleTranslateTransformer(...[, ...])
Translate text documents using Google Cloud Translation.
langchain_google_community.vertex_ai_search¶
Retriever wrapper for Google Vertex AI Search.
Set the following environment variables before the tests:
export PROJECT_ID=… - set to your Google Cloud project ID
export DATA_STORE_ID=… - the ID of the search engine to use for the test
Classes¶
vertex_ai_search.VertexAIMultiTurnSearchRetriever
Google Vertex AI Search retriever for multi-turn conversations.
vertex_ai_search.VertexAISearchRetriever
Google Vertex AI Search retriever.
vertex_ai_search.VertexAISearchSummaryTool
Class that exposes a tool to interface with an App in Vertex Search and Conversation and get the summary of the documents retrieved.
langchain_google_community.vertex_check_grounding¶
Classes¶
vertex_check_grounding.VertexAICheckGroundingWrapper
Initializes the Vertex AI CheckGroundingOutputParser with configurable parameters.
langchain_google_community.vertex_rank¶
Classes¶
vertex_rank.VertexAIRank
Initializes the Vertex AI Ranker with configurable parameters.
langchain_google_community.vision¶
Classes¶
vision.CloudVisionLoader(file_path[, project])
vision.CloudVisionParser([project])
langchain 0.2.3¶
langchain.agents¶
Agent is a class that uses an LLM to choose a sequence of actions to take.
In Chains, a sequence of actions is hardcoded. In Agents,
a language model is used as a reasoning engine to determine which actions
to take and in which order.
Agents select and use Tools and Toolkits for actions.
Class hierarchy:
BaseSingleActionAgent --> LLMSingleActionAgent
                          OpenAIFunctionsAgent
                          XMLAgent
                          Agent --> <name>Agent  # Examples: ZeroShotAgent, ChatAgent
BaseMultiActionAgent  --> OpenAIMultiFunctionsAgent
Main helpers:
AgentType, AgentExecutor, AgentOutputParser, AgentExecutorIterator,
AgentAction, AgentFinish
Classes¶
agents.agent.Agent
[Deprecated] Agent that calls the language model and decides on the action to take.
agents.agent.AgentExecutor
Agent that uses tools.
agents.agent.AgentOutputParser
Base class for parsing agent output into agent action/finish.
agents.agent.BaseMultiActionAgent
Base Multi Action Agent class.
agents.agent.BaseSingleActionAgent
Base Single Action Agent class.
agents.agent.ExceptionTool
Tool that just returns the query.
agents.agent.LLMSingleActionAgent
[Deprecated] Base class for single action agents.
agents.agent.MultiActionAgentOutputParser
Base class for parsing agent output into agent actions/finish.
agents.agent.RunnableAgent
Agent powered by runnables.
agents.agent.RunnableMultiActionAgent
Agent powered by runnables.
agents.agent_iterator.AgentExecutorIterator(...)
Iterator for AgentExecutor.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo
Information about a VectorStore.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit
Toolkit for routing between Vector Stores.
agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit
Toolkit for interacting with a Vector Store.
agents.agent_types.AgentType(value)
[Deprecated] An enum for agent types.
agents.chat.base.ChatAgent
[Deprecated] Chat Agent.
agents.chat.output_parser.ChatOutputParser
Output parser for the chat agent.
agents.conversational.base.ConversationalAgent
[Deprecated] An agent that holds a conversation in addition to using tools.
agents.conversational.output_parser.ConvoOutputParser
Output parser for the conversational agent.
agents.conversational_chat.base.ConversationalChatAgent
[Deprecated] An agent designed to hold a conversation in addition to using tools.
agents.conversational_chat.output_parser.ConvoOutputParser
Output parser for the conversational agent.
agents.mrkl.base.ChainConfig(action_name, ...)
Configuration for chain to use in MRKL system.
agents.mrkl.base.MRKLChain
[Deprecated] Chain that implements the MRKL system.
agents.mrkl.base.ZeroShotAgent
[Deprecated] Agent for the MRKL chain.
agents.mrkl.output_parser.MRKLOutputParser
MRKL Output parser for the chat agent.
agents.openai_assistant.base.OpenAIAssistantAction
AgentAction with info needed to submit custom tool output to existing run.
agents.openai_assistant.base.OpenAIAssistantFinish
AgentFinish with run and thread metadata.
agents.openai_assistant.base.OpenAIAssistantRunnable
Run an OpenAI Assistant.
agents.openai_functions_agent.agent_token_buffer_memory.AgentTokenBufferMemory
Memory used to save agent output AND intermediate steps.
agents.openai_functions_agent.base.OpenAIFunctionsAgent
[Deprecated] An Agent driven by OpenAI's function-powered API.
agents.openai_functions_multi_agent.base.OpenAIMultiFunctionsAgent
[Deprecated] An Agent driven by OpenAI's function-powered API.
agents.output_parsers.json.JSONAgentOutputParser
Parses tool invocations and final answers in JSON format.
agents.output_parsers.openai_functions.OpenAIFunctionsAgentOutputParser
Parses a message into agent action/finish.
agents.output_parsers.openai_tools.OpenAIToolsAgentOutputParser
Parses a message into agent actions/finish.
agents.output_parsers.react_json_single_input.ReActJsonSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input in json format.
agents.output_parsers.react_single_input.ReActSingleInputOutputParser
Parses ReAct-style LLM calls that have a single tool input.
agents.output_parsers.self_ask.SelfAskOutputParser
Parses self-ask style LLM calls.
agents.output_parsers.tools.ToolAgentAction
Override init to support instantiation by position for backward compat.
agents.output_parsers.tools.ToolsAgentOutputParser
Parses a message into agent actions/finish.
agents.output_parsers.xml.XMLAgentOutputParser
Parses tool invocations and final answers in XML format.
agents.react.base.DocstoreExplorer(docstore)
[Deprecated] Class to assist with exploration of a document store.
agents.react.base.ReActChain
[Deprecated] Chain that implements the ReAct paper.
agents.react.base.ReActDocstoreAgent
[Deprecated] Agent for the ReAct chain.
agents.react.base.ReActTextWorldAgent
[Deprecated] Agent for the ReAct TextWorld chain.
agents.react.output_parser.ReActOutputParser
Output parser for the ReAct agent.
agents.schema.AgentScratchPadChatPromptTemplate
Chat prompt template for the agent scratchpad.
agents.self_ask_with_search.base.SelfAskWithSearchAgent
[Deprecated] Agent for the self-ask-with-search paper.
agents.self_ask_with_search.base.SelfAskWithSearchChain
[Deprecated] Chain that does self-ask with search.
agents.structured_chat.base.StructuredChatAgent
[Deprecated] Structured Chat Agent.
agents.structured_chat.output_parser.StructuredChatOutputParser
Output parser for the structured chat agent.
agents.structured_chat.output_parser.StructuredChatOutputParserWithRetries
Output parser with retries for the structured chat agent.
agents.tools.InvalidTool
Tool that is run when invalid tool name is encountered by agent.
agents.xml.base.XMLAgent
[Deprecated] Agent that uses XML tags.
Functions¶
agents.agent_toolkits.conversational_retrieval.openai_functions.create_conversational_retrieval_agent(...)
A convenience method for creating a conversational retrieval agent.
agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(...)
Construct a VectorStore agent from an LLM and tools.
agents.agent_toolkits.vectorstore.base.create_vectorstore_router_agent(...)
Construct a VectorStore router agent from an LLM and tools.
agents.format_scratchpad.log.format_log_to_str(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.log_to_messages.format_log_to_messages(...)
Construct the scratchpad that lets the agent continue its thought process.
agents.format_scratchpad.openai_functions.format_to_openai_function_messages(...)
Convert (AgentAction, tool output) tuples into FunctionMessages.
agents.format_scratchpad.openai_functions.format_to_openai_functions(...)
Convert (AgentAction, tool output) tuples into FunctionMessages.
agents.format_scratchpad.tools.format_to_tool_messages(...)
Convert (AgentAction, tool output) tuples into FunctionMessages.
agents.format_scratchpad.xml.format_xml(...)
Format the intermediate steps as XML.
agents.initialize.initialize_agent(tools, llm)
[Deprecated] Load an agent executor given tools and LLM.
agents.json_chat.base.create_json_chat_agent(...)
Create an agent that uses JSON to format its logic, build for Chat Models.
agents.loading.load_agent(path, **kwargs)
[Deprecated] Unified method for loading an agent from LangChainHub or local fs.
agents.loading.load_agent_from_config(config)
[Deprecated] Load agent from Config Dict.
agents.openai_functions_agent.base.create_openai_functions_agent(...)
Create an agent that uses OpenAI function calling.
agents.openai_tools.base.create_openai_tools_agent(...)
Create an agent that uses OpenAI tools.
agents.output_parsers.openai_tools.parse_ai_message_to_openai_tool_action(message)
Parse an AI message potentially containing tool_calls.
agents.output_parsers.tools.parse_ai_message_to_tool_action(message)
Parse an AI message potentially containing tool_calls.
agents.react.agent.create_react_agent(llm, ...)
Create an agent that uses ReAct prompting.
agents.self_ask_with_search.base.create_self_ask_with_search_agent(...)
Create an agent that uses self-ask with search prompting.
agents.structured_chat.base.create_structured_chat_agent(...)
Create an agent aimed at supporting tools with multiple inputs.
agents.tool_calling_agent.base.create_tool_calling_agent(...)
Create an agent that uses tools.
agents.utils.validate_tools_single_input(...)
Validate tools for single input.
agents.xml.base.create_xml_agent(llm, tools, ...)
Create an agent that uses XML to format its logic.
langchain.callbacks¶
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
Classes¶
callbacks.streaming_aiter.AsyncIteratorCallbackHandler()
Callback handler that returns an async iterator.
callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*)
Callback handler that returns an async iterator.
callbacks.streaming_stdout_final_only.FinalStreamingStdOutCallbackHandler(*)
Callback handler for streaming in agents.
callbacks.tracers.logging.LoggingCallbackHandler(logger)
Tracer that logs via the input Logger.
langchain.chains¶
Chains are easily reusable components linked together.
Chains encode a sequence of calls to components like models, document retrievers,
other Chains, etc., and provide a simple interface to this sequence.
The Chain interface makes it easy to create apps that are:
Stateful: add Memory to any Chain to give it state,
Observable: pass Callbacks to a Chain to execute additional functionality,
like logging, outside the main sequence of component calls,
Composable: combine Chains with other components, including other Chains.
Class hierarchy:
Chain --> <name>Chain # Examples: LLMChain, MapReduceChain, RouterChain
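For example, a minimal sketch composing two of the functions listed under Functions below (create_stuff_documents_chain and create_retrieval_chain) into a retrieval chain; here `llm` and `retriever` are assumed placeholders for any chat model and any BaseRetriever and are not constructed:
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

# Assumes `llm` is any chat model and `retriever` is any BaseRetriever
# (e.g. vectorstore.as_retriever()); neither is defined here.
prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the question using only this context:\n\n{context}"),
    ("human", "{input}"),
])
combine_docs_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, combine_docs_chain)
result = rag_chain.invoke({"input": "What did the author say about chains?"})
print(result["answer"])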
Classes¶
chains.api.base.APIChain
Chain that makes API calls and summarizes the responses to answer a question.
chains.base.Chain
Abstract base class for creating structured sequences of calls to components.
chains.combine_documents.base.AnalyzeDocumentChain
Chain that splits documents, then analyzes it in pieces.
chains.combine_documents.base.BaseCombineDocumentsChain
Base interface for chains combining documents.
chains.combine_documents.map_reduce.MapReduceDocumentsChain
Combining documents by mapping a chain over them, then combining results.
chains.combine_documents.map_rerank.MapRerankDocumentsChain
Combining documents by mapping a chain over them, then reranking results.
chains.combine_documents.reduce.AsyncCombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.CombineDocsProtocol(...)
Interface for the combine_docs method.
chains.combine_documents.reduce.ReduceDocumentsChain
Combine documents by recursively reducing them.
chains.combine_documents.refine.RefineDocumentsChain
Combine documents by doing a first pass and then refining on more documents.
chains.combine_documents.stuff.StuffDocumentsChain
Chain that combines documents by stuffing into context.
chains.constitutional_ai.base.ConstitutionalChain
Chain for applying constitutional principles.
chains.constitutional_ai.models.ConstitutionalPrinciple
Class for a constitutional principle.
chains.conversation.base.ConversationChain
Chain to have a conversation and load context from memory.
chains.conversational_retrieval.base.BaseConversationalRetrievalChain
Chain for chatting with an index.
chains.conversational_retrieval.base.ChatVectorDBChain
Chain for chatting with a vector database.
chains.conversational_retrieval.base.ConversationalRetrievalChain
[Deprecated] Chain for having a conversation based on retrieved documents.
chains.conversational_retrieval.base.InputType
Input type for ConversationalRetrievalChain.
chains.elasticsearch_database.base.ElasticsearchDatabaseChain
Chain for interacting with Elasticsearch Database.
chains.flare.base.FlareChain
Chain that combines a retriever, a question generator, and a response generator.
chains.flare.base.QuestionGeneratorChain
Chain that generates questions from uncertain spans.
chains.flare.prompts.FinishedOutputParser
Output parser that checks if the output is finished.
chains.hyde.base.HypotheticalDocumentEmbedder
Generate hypothetical document for query, and then embed that.
chains.llm.LLMChain
[Deprecated] Chain to run queries against LLMs.
chains.llm_checker.base.LLMCheckerChain
Chain for question-answering with self-verification.
chains.llm_math.base.LLMMathChain
Chain that interprets a prompt and executes python code to do math.
chains.llm_summarization_checker.base.LLMSummarizationCheckerChain
Chain for question-answering with self-verification.
chains.mapreduce.MapReduceChain
Map-reduce chain.
chains.moderation.OpenAIModerationChain
Pass input through a moderation endpoint.
chains.natbot.base.NatBotChain
Implement an LLM driven browser.
chains.natbot.crawler.Crawler()
A crawler for web pages.
chains.natbot.crawler.ElementInViewPort
A typed dictionary containing information about elements in the viewport.
chains.openai_functions.citation_fuzzy_match.FactWithEvidence
Class representing a single statement.
chains.openai_functions.citation_fuzzy_match.QuestionAnswer
A question and its answer as a list of facts each one should have a source.
chains.openai_functions.openapi.SimpleRequestChain
Chain for making a simple request to an API endpoint.
chains.openai_functions.qa_with_structure.AnswerWithSources
An answer to the question, with sources.
chains.prompt_selector.BasePromptSelector
Base class for prompt selectors.
chains.prompt_selector.ConditionalPromptSelector
Prompt collection that goes through conditionals.
chains.qa_generation.base.QAGenerationChain
Base class for question-answer generation chains.
chains.qa_with_sources.base.BaseQAWithSourcesChain
Question answering chain with sources over documents.
chains.qa_with_sources.base.QAWithSourcesChain
Question answering with sources over documents.
chains.qa_with_sources.loading.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.qa_with_sources.retrieval.RetrievalQAWithSourcesChain
Question-answering with sources over an index.
chains.qa_with_sources.vector_db.VectorDBQAWithSourcesChain
Question-answering with sources over a vector database.
chains.query_constructor.base.StructuredQueryOutputParser
Output parser that parses a structured query.
chains.query_constructor.parser.ISO8601Date
A date in ISO 8601 format (YYYY-MM-DD).
chains.query_constructor.schema.AttributeInfo
Information about a data source attribute.
chains.question_answering.chain.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.retrieval_qa.base.BaseRetrievalQA
Base class for question-answering chains.
chains.retrieval_qa.base.RetrievalQA
[Deprecated] Chain for question-answering against an index.
chains.retrieval_qa.base.VectorDBQA
Chain for question-answering against a vector database.
chains.router.base.MultiRouteChain
Use a single chain to route an input to one of multiple candidate chains.
chains.router.base.Route(destination, ...)
Create new instance of Route(destination, next_inputs)
chains.router.base.RouterChain
Chain that outputs the name of a destination chain and the inputs to it.
chains.router.embedding_router.EmbeddingRouterChain
Chain that uses embeddings to route between options.
chains.router.llm_router.LLMRouterChain
A router chain that uses an LLM chain to perform routing.
chains.router.llm_router.RouterOutputParser
Parser for output of router chain in the multi-prompt chain.
chains.router.multi_prompt.MultiPromptChain
A multi-route chain that uses an LLM router chain to choose amongst prompts.
chains.router.multi_retrieval_qa.MultiRetrievalQAChain
A multi-route chain that uses an LLM router chain to choose amongst retrieval qa chains.
chains.sequential.SequentialChain
Chain where the outputs of one chain feed directly into next.
chains.sequential.SimpleSequentialChain
Simple chain where the outputs of one step feed directly into next.
chains.sql_database.query.SQLInput
Input for a SQL Chain.
chains.sql_database.query.SQLInputWithTables
Input for a SQL Chain.
chains.summarize.chain.LoadingCallable(...)
Interface for loading the combine documents chain.
chains.transform.TransformChain
Chain that transforms the chain output.
Functions¶
chains.combine_documents.reduce.acollapse_docs(...)
Execute a collapse function on a set of documents and merge their metadatas.
chains.combine_documents.reduce.collapse_docs(...)
Execute a collapse function on a set of documents and merge their metadatas.
chains.combine_documents.reduce.split_list_of_docs(...)
Split Documents into subsets that each meet a cumulative length constraint.
chains.combine_documents.stuff.create_stuff_documents_chain(...)
Create a chain for passing a list of Documents to a model.
chains.example_generator.generate_example(...)
Return another example given a list of examples for a prompt.
chains.history_aware_retriever.create_history_aware_retriever(...)
Create a chain that takes conversation history and returns documents.
chains.loading.load_chain(path, **kwargs)
Unified method for loading a chain from LangChainHub or local fs.
chains.loading.load_chain_from_config(...)
Load chain from Config Dict.
chains.openai_functions.base.create_openai_fn_chain(...)
[Deprecated] [Legacy] Create an LLM chain that uses OpenAI functions.
chains.openai_functions.base.create_structured_output_chain(...)
[Deprecated] [Legacy] Create an LLMChain that uses an OpenAI function to get a structured output.
chains.openai_functions.citation_fuzzy_match.create_citation_fuzzy_match_chain(llm)
Create a citation fuzzy match chain.
chains.openai_functions.extraction.create_extraction_chain(...)
[Deprecated] Creates a chain that extracts information from a passage.
chains.openai_functions.extraction.create_extraction_chain_pydantic(...)
[Deprecated] Creates a chain that extracts information from a passage using pydantic schema.
chains.openai_functions.openapi.get_openapi_chain(spec)
Create a chain for querying an API from a OpenAPI spec.
chains.openai_functions.openapi.openapi_spec_to_openai_fn(spec)
Convert a valid OpenAPI spec to the JSON Schema format expected for OpenAI functions.
chains.openai_functions.qa_with_structure.create_qa_with_sources_chain(llm)
Create a question answering chain that returns an answer with sources.
chains.openai_functions.qa_with_structure.create_qa_with_structure_chain(...)
Create a question answering chain that returns an answer with sources based on schema.
chains.openai_functions.tagging.create_tagging_chain(...)
Create a chain that extracts information from a passage based on a schema.
chains.openai_functions.tagging.create_tagging_chain_pydantic(...)
Create a chain that extracts information from a passage based on a pydantic schema.
chains.openai_functions.utils.get_llm_kwargs(...)
Return the kwargs for the LLMChain constructor.
chains.openai_tools.extraction.create_extraction_chain_pydantic(...)
[Deprecated] Creates a chain that extracts information from a passage.
chains.prompt_selector.is_chat_model(llm)
Check if the language model is a chat model.
chains.prompt_selector.is_llm(llm)
Check if the language model is a LLM.
chains.qa_with_sources.loading.load_qa_with_sources_chain(llm)
Load a question answering with sources chain.
chains.query_constructor.base.construct_examples(...)
Construct examples from input-output pairs.
chains.query_constructor.base.fix_filter_directive(...)
Fix invalid filter directive.
chains.query_constructor.base.get_query_constructor_prompt(...)
Create query construction prompt.
chains.query_constructor.base.load_query_constructor_chain(...)
Load a query constructor chain.
chains.query_constructor.base.load_query_constructor_runnable(...)
Load a query constructor runnable chain.
chains.query_constructor.parser.get_parser([...])
Return a parser for the query language.
chains.query_constructor.parser.v_args(...)
Dummy decorator for when lark is not installed.
chains.question_answering.chain.load_qa_chain(llm)
Load question answering chain.
chains.retrieval.create_retrieval_chain(...)
Create retrieval chain that retrieves documents and then passes them on.
chains.sql_database.query.create_sql_query_chain(llm, db)
Create a chain that generates SQL queries.
chains.structured_output.base.create_openai_fn_runnable(...)
[Deprecated] Create a runnable sequence that uses OpenAI functions.
chains.structured_output.base.create_structured_output_runnable(...)
[Deprecated] Create a runnable for extracting structured outputs.
chains.structured_output.base.get_openai_output_parser(...)
Get the appropriate function output parser given the user functions.
chains.summarize.chain.load_summarize_chain(llm)
Load summarizing chain.
langchain.chat_models¶
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose
is a bit different. Rather than expose a “text in, text out” API, they expose
an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
Functions¶
chat_models.base.init_chat_model(model, *[, ...])
[Beta] Initialize a ChatModel from the model name and provider.
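A minimal sketch of init_chat_model, assuming the langchain-openai package is installed and OPENAI_API_KEY is set; the model name is illustrative only:
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage

# Assumes langchain-openai is installed and OPENAI_API_KEY is set;
# the model name is only an example.
model = init_chat_model("gpt-4o-mini", model_provider="openai", temperature=0)
reply = model.invoke([HumanMessage(content="What is LangChain?")])
print(reply.content)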
langchain.embeddings¶
Embedding models are wrappers around embedding models
from different APIs and services.
Embedding models can be LLMs or not.
Class hierarchy:
Embeddings --> <name>Embeddings # Examples: OpenAIEmbeddings, HuggingFaceEmbeddings
Classes¶
embeddings.cache.CacheBackedEmbeddings(...)
Interface for caching results from embedding models.
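A minimal sketch of wrapping an embedding model with CacheBackedEmbeddings, assuming OPENAI_API_KEY is set; any Embeddings implementation and any byte store could be substituted:
from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_community.embeddings import OpenAIEmbeddings

# Assumes OPENAI_API_KEY is set; any Embeddings implementation works here.
underlying = OpenAIEmbeddings()
store = LocalFileStore("./embedding_cache")
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace="openai-embeddings"
)
vectors = cached_embedder.embed_documents(["hello world"])  # repeated calls hit the cache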
langchain.evaluation¶
Evaluation chains for grading LLM and Chain outputs.
This module contains off-the-shelf evaluation chains for grading the output of
LangChain primitives such as language models and chains.
Loading an evaluator
To load an evaluator, you can use the load_evaluators or
load_evaluator functions with the
names of the evaluators to load.
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("qa")
evaluator.evaluate_strings(
    prediction="We sold more than 40,000 units last week",
    input="How many units did we sell last week?",
    reference="We sold 32,378 units",
)
The evaluator must be one of EvaluatorType.
Datasets
To load one of the LangChain HuggingFace datasets, you can use the load_dataset function with the
name of the dataset to load.
from langchain.evaluation import load_dataset
ds = load_dataset("llm-math")
Some common use cases for evaluation include:
Grading the accuracy of a response against ground truth answers: QAEvalChain
Comparing the output of two models: PairwiseStringEvalChain or LabeledPairwiseStringEvalChain when there is additionally a reference label.
Judging the efficacy of an agent’s tool usage: TrajectoryEvalChain
Checking whether an output complies with a set of criteria: CriteriaEvalChain or LabeledCriteriaEvalChain when there is additionally a reference label.
Computing semantic difference between a prediction and reference: EmbeddingDistanceEvalChain or between two predictions: PairwiseEmbeddingDistanceEvalChain
Measuring the string distance between a prediction and reference StringDistanceEvalChain or between two predictions PairwiseStringDistanceEvalChain
Low-level API
These evaluators implement one of the following interfaces:
StringEvaluator: Evaluate a prediction string against a reference label and/or input context.
PairwiseStringEvaluator: Evaluate two prediction strings against each other. Useful for scoring preferences, measuring similarity between two chain or llm agents, or comparing outputs on similar inputs.
AgentTrajectoryEvaluator Evaluate the full sequence of actions taken by an agent.
These interfaces enable easier composability and usage within a higher level evaluation framework.
Classes¶
evaluation.agents.trajectory_eval_chain.TrajectoryEval
A named tuple containing the score and reasoning for a trajectory.
evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain
A chain for evaluating ReAct style agents.
evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser
Trajectory output parser.
evaluation.comparison.eval_chain.LabeledPairwiseStringEvalChain
A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs, with labeled preferences.
evaluation.comparison.eval_chain.PairwiseStringEvalChain
A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs.
evaluation.comparison.eval_chain.PairwiseStringResultOutputParser
A parser for the output of the PairwiseStringEvalChain.
evaluation.criteria.eval_chain.Criteria(value)
A Criteria to evaluate.
evaluation.criteria.eval_chain.CriteriaEvalChain
LLM Chain for evaluating runs against criteria.
evaluation.criteria.eval_chain.CriteriaResultOutputParser
A parser for the output of the CriteriaEvalChain.
evaluation.criteria.eval_chain.LabeledCriteriaEvalChain
Criteria evaluation chain that requires references.
evaluation.embedding_distance.base.EmbeddingDistance(value)
Embedding Distance Metric.
evaluation.embedding_distance.base.EmbeddingDistanceEvalChain
Use embedding distances to score semantic difference between a prediction and reference.
evaluation.embedding_distance.base.PairwiseEmbeddingDistanceEvalChain
Use embedding distances to score semantic difference between two predictions.
evaluation.exact_match.base.ExactMatchStringEvaluator(*)
Compute an exact match between the prediction and the reference.
evaluation.parsing.base.JsonEqualityEvaluator([...])
Evaluate whether the prediction is equal to the reference after parsing both as JSON.
evaluation.parsing.base.JsonValidityEvaluator(...)
Evaluate whether the prediction is valid JSON.
evaluation.parsing.json_distance.JsonEditDistanceEvaluator([...])
An evaluator that calculates the edit distance between JSON strings.
evaluation.parsing.json_schema.JsonSchemaEvaluator(...)
An evaluator that validates a JSON prediction against a JSON schema reference.
evaluation.qa.eval_chain.ContextQAEvalChain
LLM Chain for evaluating QA without ground truth, based on context.
evaluation.qa.eval_chain.CotQAEvalChain
LLM Chain for evaluating QA using chain of thought reasoning.
evaluation.qa.eval_chain.QAEvalChain
LLM Chain for evaluating question answering.
evaluation.qa.generate_chain.QAGenerateChain
LLM Chain for generating examples for question answering.
evaluation.regex_match.base.RegexMatchStringEvaluator(*)
Compute a regex match between the prediction and the reference.
evaluation.schema.AgentTrajectoryEvaluator()
Interface for evaluating agent trajectories.
evaluation.schema.EvaluatorType(value)
The types of the evaluators.
evaluation.schema.LLMEvalChain
A base class for evaluators that use an LLM.
evaluation.schema.PairwiseStringEvaluator()
Compare the output of two models (or two outputs of the same model).
evaluation.schema.StringEvaluator()
Grade, tag, or otherwise evaluate predictions relative to their inputs and/or reference labels.
evaluation.scoring.eval_chain.LabeledScoreStringEvalChain
A chain for scoring the output of a model on a scale of 1-10.
evaluation.scoring.eval_chain.ScoreStringEvalChain
A chain for scoring on a scale of 1-10 the output of a model.
evaluation.scoring.eval_chain.ScoreStringResultOutputParser
A parser for the output of the ScoreStringEvalChain.
evaluation.string_distance.base.PairwiseStringDistanceEvalChain
Compute string edit distances between two predictions.
evaluation.string_distance.base.StringDistance(value)
Distance metric to use.
evaluation.string_distance.base.StringDistanceEvalChain
Compute string distances between the prediction and the reference.
Functions¶
evaluation.comparison.eval_chain.resolve_pairwise_criteria(...)
Resolve the criteria for the pairwise evaluator.
evaluation.criteria.eval_chain.resolve_criteria(...)
Resolve the criteria to evaluate.
evaluation.loading.load_dataset(uri)
Load a dataset from the LangChainDatasets on HuggingFace.
evaluation.loading.load_evaluator(evaluator, *)
Load the requested evaluation chain specified by a string.
evaluation.loading.load_evaluators(evaluators, *)
Load evaluators specified by a list of evaluator types.
evaluation.scoring.eval_chain.resolve_criteria(...)
Resolve the criteria for the pairwise evaluator.
langchain.hub¶
Interface with the LangChain Hub.
Functions¶
hub.pull(owner_repo_commit, *[, api_url, ...])
Pull an object from the hub and returns it as a LangChain object.
hub.push(repo_full_name, object, *[, ...])
Push an object to the hub and returns the URL it can be viewed at in a browser.
langchain.indexes¶
Index is used to avoid writing duplicated content
into the vectorstore and to avoid over-writing content if it’s unchanged.
Indexes also:
Create knowledge graphs from data.
Support indexing workflows from LangChain data loaders to vectorstores.
Importantly, Index keeps on working even if the content being written is derived
via a set of transformations from some source content (e.g., indexing children
documents that were derived from parent documents by chunking.)
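A minimal sketch of the deduplicating indexing workflow described above, using the index function and SQLRecordManager exposed by langchain.indexes; `vectorstore` is an assumed placeholder for an existing vector store and is not constructed here:
from langchain.indexes import SQLRecordManager, index
from langchain_core.documents import Document

# Assumes `vectorstore` is an existing vector store instance (e.g. Chroma or FAISS).
record_manager = SQLRecordManager("demo_namespace", db_url="sqlite:///record_manager.sql")
record_manager.create_schema()

docs = [Document(page_content="hello", metadata={"source": "a.txt"})]
# Re-running this call with unchanged docs writes nothing new to the vector store.
index(docs, record_manager, vectorstore, cleanup="incremental", source_id_key="source")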
Classes¶
indexes.vectorstore.VectorStoreIndexWrapper
Wrapper around a vectorstore for easy access.
indexes.vectorstore.VectorstoreIndexCreator
Logic for creating indexes.
langchain.memory¶
Memory maintains Chain state, incorporating context from past runs.
Class hierarchy for Memory:
BaseMemory --> BaseChatMemory --> <name>Memory # Examples: ZepMemory, MotorheadMemory
Main helpers:
BaseChatMessageHistory
Chat Message History stores the chat message history in different stores.
Class hierarchy for ChatMessageHistory:
BaseChatMessageHistory --> <name>ChatMessageHistory # Example: ZepChatMessageHistory
Main helpers:
AIMessage, BaseMessage, HumanMessage
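A minimal sketch of the chat memory interface, using ConversationBufferMemory from the classes listed below:
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)
memory.save_context({"input": "Hi, I'm Alice"}, {"output": "Hello Alice!"})
memory.save_context({"input": "What's my name?"}, {"output": "You told me it's Alice."})

# The buffered history can be injected into any Chain via its memory.
print(memory.load_memory_variables({}))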
Classes¶
memory.buffer.ConversationBufferMemory
Buffer for storing conversation memory.
memory.buffer.ConversationStringBufferMemory
Buffer for storing conversation memory.
memory.buffer_window.ConversationBufferWindowMemory
Buffer for storing conversation memory inside a limited size window.
memory.chat_memory.BaseChatMemory
Abstract base class for chat memory.
memory.combined.CombinedMemory
Combining multiple memories' data together.
memory.entity.BaseEntityStore
Abstract base class for Entity store.
memory.entity.ConversationEntityMemory
Entity extractor & summarizer memory.
memory.entity.InMemoryEntityStore
In-memory Entity store.
memory.entity.RedisEntityStore
Redis-backed Entity store.
memory.entity.SQLiteEntityStore
SQLite-backed Entity store
memory.entity.UpstashRedisEntityStore
Upstash Redis backed Entity store.
memory.readonly.ReadOnlySharedMemory
Memory wrapper that is read-only and cannot be changed.
memory.simple.SimpleMemory
Simple memory for storing context or other information that shouldn't ever change between prompts.
memory.summary.ConversationSummaryMemory
Conversation summarizer to chat memory.
memory.summary.SummarizerMixin
Mixin for summarizer.
memory.summary_buffer.ConversationSummaryBufferMemory
Buffer with summarizer for storing conversation memory.
memory.token_buffer.ConversationTokenBufferMemory
Conversation chat memory with token limit.
memory.vectorstore.VectorStoreRetrieverMemory
VectorStoreRetriever-backed memory.
Functions¶
memory.utils.get_prompt_input_key(inputs, ...)
Get the prompt input key.
langchain.model_laboratory¶
Experiment with different models.
Classes¶
model_laboratory.ModelLaboratory(chains[, names])
Experiment with different models.
langchain.output_parsers¶
OutputParser classes parse the output of an LLM call.
Class hierarchy:
BaseLLMOutputParser --> BaseOutputParser --> <name>OutputParser # ListOutputParser, PydanticOutputParser
Main helpers:
Serializable, Generation, PromptValue
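A minimal sketch using StructuredOutputParser and ResponseSchema from the classes listed below; the JSON string stands in for an LLM response:
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="answer", description="The answer to the user's question."),
    ResponseSchema(name="source", description="A source backing the answer."),
]
parser = StructuredOutputParser.from_response_schemas(schemas)

# get_format_instructions() is typically embedded in the prompt sent to the LLM.
print(parser.get_format_instructions())
print(parser.parse('{"answer": "Paris", "source": "geography"}'))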
Classes¶
output_parsers.boolean.BooleanOutputParser
Parse the output of an LLM call to a boolean.
output_parsers.combining.CombiningOutputParser
Combine multiple output parsers into one.
output_parsers.datetime.DatetimeOutputParser
Parse the output of an LLM call to a datetime.
output_parsers.enum.EnumOutputParser
Parse an output that is one of a set of values.
output_parsers.fix.OutputFixingParser
Wrap a parser and try to fix parsing errors.
output_parsers.pandas_dataframe.PandasDataFrameOutputParser
Parse an output using Pandas DataFrame format.
output_parsers.regex.RegexParser
Parse the output of an LLM call using a regex.
output_parsers.regex_dict.RegexDictParser
Parse the output of an LLM call into a Dictionary using a regex.
output_parsers.retry.RetryOutputParser
Wrap a parser and try to fix parsing errors.
output_parsers.retry.RetryWithErrorOutputParser
Wrap a parser and try to fix parsing errors.
output_parsers.structured.ResponseSchema
Schema for a response from a structured output parser.
output_parsers.structured.StructuredOutputParser
Parse the output of an LLM call to a structured output.
output_parsers.yaml.YamlOutputParser
Parse YAML output using a pydantic model.
Functions¶
output_parsers.loading.load_output_parser(config)
Load an output parser.
langchain.retrievers¶
Retriever class returns Documents given a text query.
It is more general than a vector store. A retriever does not need to be able to
store documents, only to return (or retrieve) them. Vector stores can be used as
the backbone of a retriever, but there are other types of retrievers as well.
Class hierarchy:
BaseRetriever --> <name>Retriever # Examples: ArxivRetriever, MergerRetriever
Main helpers:
Document, Serializable, Callbacks,
CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun
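A minimal sketch combining retrievers with EnsembleRetriever; `bm25_retriever` and `vector_retriever` are assumed placeholders for existing BaseRetriever instances and are not constructed here:
from langchain.retrievers import EnsembleRetriever

# Assumes `bm25_retriever` and `vector_retriever` already exist
# (e.g. BM25Retriever.from_documents(...) and vectorstore.as_retriever()).
ensemble = EnsembleRetriever(
    retrievers=[bm25_retriever, vector_retriever],
    weights=[0.5, 0.5],
)
docs = ensemble.invoke("What is a retriever?")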
Classes¶
retrievers.contextual_compression.ContextualCompressionRetriever
Retriever that wraps a base retriever and compresses the results.
retrievers.document_compressors.base.DocumentCompressorPipeline
Document compressor that uses a pipeline of Transformers.
retrievers.document_compressors.chain_extract.LLMChainExtractor
Document compressor that uses an LLM chain to extract the relevant parts of documents.
retrievers.document_compressors.chain_extract.NoOutputParser
Parse outputs that could return a null string of some sort.
retrievers.document_compressors.chain_filter.LLMChainFilter
Filter that drops documents that aren't relevant to the query.
retrievers.document_compressors.cohere_rerank.CohereRerank
[Deprecated] Document compressor that uses Cohere Rerank API.
retrievers.document_compressors.cross_encoder.BaseCrossEncoder()
Interface for cross encoder models.
retrievers.document_compressors.cross_encoder_rerank.CrossEncoderReranker
Document compressor that uses CrossEncoder for reranking.
retrievers.document_compressors.embeddings_filter.EmbeddingsFilter
Document compressor that uses embeddings to drop documents unrelated to the query.
retrievers.ensemble.EnsembleRetriever
Retriever that ensembles the multiple retrievers.
retrievers.merger_retriever.MergerRetriever
Retriever that merges the results of multiple retrievers.
retrievers.multi_query.LineListOutputParser
Output parser for a list of lines.
retrievers.multi_query.MultiQueryRetriever
Given a query, use an LLM to write a set of queries.
retrievers.multi_vector.MultiVectorRetriever
Retrieve from a set of multiple embeddings for the same document.
retrievers.multi_vector.SearchType(value)
Enumerator of the types of search to perform.
retrievers.parent_document_retriever.ParentDocumentRetriever
Retrieve small chunks then retrieve their parent documents.
retrievers.re_phraser.RePhraseQueryRetriever
Given a query, use an LLM to re-phrase it.
retrievers.self_query.base.SelfQueryRetriever
Retriever that uses a vector store and an LLM to generate the vector store queries.
retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever
Retriever that combines embedding similarity with recency in retrieving values.
Functions¶
retrievers.document_compressors.chain_extract.default_get_input(...)
Return the compression chain input.
retrievers.document_compressors.chain_filter.default_get_input(...)
Return the compression chain input.
retrievers.ensemble.unique_by_key(iterable, key)
Yield unique elements of an iterable based on a key function.
langchain.runnables¶
LangChain Runnable and the LangChain Expression Language (LCEL).
The LangChain Expression Language (LCEL) offers a declarative method to build
production-grade programs that harness the power of LLMs.
Programs created using LCEL and LangChain Runnables inherently support
synchronous, asynchronous, batch, and streaming operations.
Support for async allows servers hosting the LCEL based programs
to scale better for higher concurrent loads.
Batch operations allow for processing multiple inputs in parallel.
Streaming of intermediate outputs, as they’re being generated, allows for
creating more responsive UX.
This module contains non-core Runnable classes.
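A minimal LCEL sketch showing the synchronous, batch, and streaming calls mentioned above, assuming OPENAI_API_KEY is set; any chat model could be substituted:
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.chat_models import ChatOpenAI

# Assumes OPENAI_API_KEY is set.
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
chain = prompt | ChatOpenAI(temperature=0) | StrOutputParser()

chain.invoke({"topic": "bears"})                      # synchronous
chain.batch([{"topic": "bears"}, {"topic": "cats"}])  # processes inputs in parallel
for chunk in chain.stream({"topic": "bears"}):        # streams partial output
    print(chunk, end="", flush=True)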
Classes¶
runnables.hub.HubRunnable
An instance of a runnable stored in the LangChain Hub.
runnables.openai_functions.OpenAIFunction
A function description for ChatOpenAI
runnables.openai_functions.OpenAIFunctionsRouter
A runnable that routes to the selected function.
langchain.smith¶
LangSmith utilities.
This module provides utilities for connecting to LangSmith. For more information on LangSmith, see the LangSmith documentation.
Evaluation
LangSmith helps you evaluate Chains and other language model application components using a number of LangChain evaluators.
An example of this is shown below, assuming you’ve created a LangSmith dataset called <my_dataset_name>:
from langsmith import Client
from langchain_community.chat_models import ChatOpenAI
from langchain.chains import LLMChain
from langchain.smith import RunEvalConfig, run_on_dataset

# Chains may have memory. Passing in a constructor function lets the
# evaluation framework avoid cross-contamination between runs.
def construct_chain():
    llm = ChatOpenAI(temperature=0)
    chain = LLMChain.from_string(
        llm,
        "What's the answer to {your_input_key}"
    )
    return chain

# Load off-the-shelf evaluators via config or the EvaluatorType (string or enum)
evaluation_config = RunEvalConfig(
    evaluators=[
        "qa",  # "Correctness" against a reference answer
        "embedding_distance",
        RunEvalConfig.Criteria("helpfulness"),
        RunEvalConfig.Criteria({
            "fifth-grader-score": "Do you have to be smarter than a fifth grader to answer this question?"
        }),
    ]
)

client = Client()
run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
You can also create custom evaluators by subclassing the StringEvaluator
or LangSmith’s RunEvaluator classes.
from typing import Optional
from langchain.evaluation import StringEvaluator

class MyStringEvaluator(StringEvaluator):
    @property
    def requires_input(self) -> bool:
        return False

    @property
    def requires_reference(self) -> bool:
        return True

    @property
    def evaluation_name(self) -> str:
        return "exact_match"

    def _evaluate_strings(self, prediction, reference=None, input=None, **kwargs) -> dict:
        return {"score": prediction == reference}

evaluation_config = RunEvalConfig(
    custom_evaluators=[MyStringEvaluator()],
)

run_on_dataset(
    client,
    "<my_dataset_name>",
    construct_chain,
    evaluation=evaluation_config,
)
Primary Functions
arun_on_dataset: Asynchronous function to evaluate a chain, agent, or other LangChain component over a dataset.
run_on_dataset: Function to evaluate a chain, agent, or other LangChain component over a dataset.
RunEvalConfig: Class representing the configuration for running evaluation. You can select evaluators by EvaluatorType or config, or you can pass in custom_evaluators
Classes¶
smith.evaluation.config.EvalConfig
Configuration for a given run evaluator.
smith.evaluation.config.RunEvalConfig
Configuration for a run evaluation.
smith.evaluation.config.SingleKeyEvalConfig
Configuration for a run evaluator that only requires a single key.
smith.evaluation.progress.ProgressBarCallback(total)
A simple progress bar for the console.
smith.evaluation.runner_utils.ChatModelInput
Input for a chat model.
smith.evaluation.runner_utils.EvalError(...)
Your architecture raised an error.
smith.evaluation.runner_utils.InputFormatError
Raised when the input format is invalid.
smith.evaluation.runner_utils.TestResult
A dictionary of the results of a single test run.
smith.evaluation.string_run_evaluator.ChainStringRunMapper
Extract items to evaluate from the run object from a chain.
smith.evaluation.string_run_evaluator.LLMStringRunMapper
Extract items to evaluate from the run object.
smith.evaluation.string_run_evaluator.StringExampleMapper
Map an example, or row in the dataset, to the inputs of an evaluation.
smith.evaluation.string_run_evaluator.StringRunEvaluatorChain
Evaluate Run and optional examples.
smith.evaluation.string_run_evaluator.StringRunMapper
Extract items to evaluate from the run object.
smith.evaluation.string_run_evaluator.ToolStringRunMapper
Map an input to the tool.
Functions¶
smith.evaluation.name_generation.random_name()
Generate a random name.
smith.evaluation.runner_utils.arun_on_dataset(...)
Run the Chain or language model on a dataset and store traces to the specified project name.
smith.evaluation.runner_utils.run_on_dataset(...)
Run the Chain or language model on a dataset and store traces to the specified project name.
langchain.storage¶
Implementations of key-value stores and storage helpers.
Module provides implementations of various key-value stores that conform
to a simple key-value interface.
The primary goal of these storages is to support implementation of caching.
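A minimal sketch of the key-value interface, using LocalFileStore from the classes listed below; values are raw bytes:
from langchain.storage import LocalFileStore

# Keys map to files under the given root path; values are bytes.
store = LocalFileStore("./kv_cache")
store.mset([("alpha", b"1"), ("beta", b"2")])
print(store.mget(["alpha", "beta", "missing"]))  # [b"1", b"2", None]
print(list(store.yield_keys()))
store.mdelete(["alpha"])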
Classes¶
storage.encoder_backed.EncoderBackedStore(...)
Wraps a store with key and value encoders/decoders.
storage.file_system.LocalFileStore(root_path, *)
BaseStore interface that works on the local file system.
langchain_google_vertexai 1.0.5¶
langchain_google_vertexai.callbacks¶
Classes¶
callbacks.VertexAICallbackHandler()
Callback Handler that tracks VertexAI info.
langchain_google_vertexai.chains¶
Functions¶
chains.create_structured_runnable(function, ...)
Create a runnable sequence that uses OpenAI functions.
chains.get_output_parser(functions)
Get the appropriate function output parser given the user functions.
langchain_google_vertexai.chat_models¶
Wrapper around Google VertexAI chat-based models.
Classes¶
chat_models.ChatVertexAI
Vertex AI Chat large language models API.
langchain_google_vertexai.embeddings¶
Classes¶
embeddings.GoogleEmbeddingModelType(value)
An enumeration.
embeddings.GoogleEmbeddingModelVersion(value)
An enumeration.
embeddings.VertexAIEmbeddings
Google Cloud VertexAI embedding models.
langchain_google_vertexai.evaluators¶
Classes¶
evaluators.evaluation.VertexPairWiseStringEvaluator(...)
Evaluate the perplexity of a predicted string.
evaluators.evaluation.VertexStringEvaluator(...)
Evaluate the perplexity of a predicted string.
langchain_google_vertexai.functions_utils¶
Classes¶
functions_utils.PydanticFunctionsOutputParser
Parse an output as a pydantic object.
langchain_google_vertexai.gemma¶
Classes¶
gemma.GemmaChatLocalHF
Create a new model by parsing and validating input data from keyword arguments.
gemma.GemmaChatLocalKaggle
Needed for mypy typing to recognize model_name as a valid arg.
gemma.GemmaChatVertexAIModelGarden
Needed for mypy typing to recognize model_name as a valid arg.
gemma.GemmaLocalHF
Local gemma model loaded from HuggingFace.
gemma.GemmaLocalKaggle
Local gemma chat model loaded from Kaggle.
gemma.GemmaVertexAIModelGarden
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
gemma.gemma_messages_to_prompt(history)
Converts a list of messages to a chat prompt for Gemma.
langchain_google_vertexai.llms¶
Classes¶
llms.VertexAI
Google Vertex AI large language models.
langchain_google_vertexai.model_garden¶
Classes¶
model_garden.ChatAnthropicVertex
Create a new model by parsing and validating input data from keyword arguments.
model_garden.VertexAIModelGarden
Large language models served from Vertex AI Model Garden.
langchain_google_vertexai.vectorstores¶
Classes¶
vectorstores.document_storage.DataStoreDocumentStorage(...)
Stores documents in Google Cloud DataStore.
vectorstores.document_storage.DocumentStorage()
Abstract interface of a key, text storage for retrieving documents.
vectorstores.document_storage.GCSDocumentStorage(bucket)
Stores documents in Google Cloud Storage.
vectorstores.vectorstores.VectorSearchVectorStore(...)
VertexAI VectorStore that handles the search and indexing using Vector Search and stores the documents in Google Cloud Storage.
vectorstores.vectorstores.VectorSearchVectorStoreDatastore(...)
VectorSearch with DataStore document storage.
vectorstores.vectorstores.VectorSearchVectorStoreGCS(...)
Alias of VectorSearchVectorStore for consistency with the rest of vector stores with different document storage backends.
langchain_google_vertexai.vision_models¶
Classes¶
vision_models.VertexAIImageCaptioning
Implementation of the Image Captioning model as an LLM.
vision_models.VertexAIImageCaptioningChat
Implementation of the Image Captioning model as a chat.
vision_models.VertexAIImageEditorChat
Given an image and a prompt, edits the image.
vision_models.VertexAIImageGeneratorChat
Generates an image from a prompt.
vision_models.VertexAIVisualQnAChat
Chat implementation of a visual QnA model
langchain_qdrant 0.1.0¶
langchain_qdrant.vectorstores¶
Classes¶
vectorstores.Qdrant(client, collection_name)
Qdrant vector store.
vectorstores.QdrantException
Qdrant related exceptions.
Functions¶
vectorstores.sync_call_fallback(method)
Decorator to call the synchronous method of the class if the async method is not implemented.
langchain_mistralai 0.1.8¶
langchain_mistralai.chat_models¶
Classes¶
chat_models.ChatMistralAI
A chat model that uses the MistralAI API.
Functions¶
chat_models.acompletion_with_retry(llm[, ...])
Use tenacity to retry the async completion call.
langchain_mistralai.embeddings¶
Classes¶
embeddings.DummyTokenizer()
Dummy tokenizer for when tokenizer cannot be accessed (e.g., via Huggingface)
embeddings.MistralAIEmbeddings
MistralAI embedding models.
langchain_upstage 0.1.6¶
langchain_upstage.chat_models¶
Classes¶
chat_models.ChatUpstage
ChatUpstage chat model.
langchain_upstage.embeddings¶
Classes¶
embeddings.UpstageEmbeddings
UpstageEmbeddings embedding model.
langchain_upstage.layout_analysis¶
Classes¶
layout_analysis.UpstageLayoutAnalysisLoader(...)
Upstage Layout Analysis.
Functions¶
layout_analysis.get_from_param_or_env(key[, ...])
Get a value from a param or an environment variable.
layout_analysis.validate_api_key(api_key)
Validates the provided API key.
layout_analysis.validate_file_path(file_path)
Validates if a file exists at the given file path.
langchain_upstage.layout_analysis_parsers¶
Classes¶
layout_analysis_parsers.UpstageLayoutAnalysisParser([...])
Upstage Layout Analysis Parser.
Functions¶
layout_analysis_parsers.get_from_param_or_env(key)
Get a value from a param or an environment variable.
layout_analysis_parsers.parse_output(data, ...)
Parse the output data based on the specified output type.
layout_analysis_parsers.validate_api_key(api_key)
Validates the provided API key.
layout_analysis_parsers.validate_file_path(...)
Validates if a file exists at the given file path.
langchain_upstage.tools¶
Classes¶
tools.groundedness_check.GroundednessCheck
[Deprecated]
tools.groundedness_check.UpstageGroundednessCheck
Tool that checks the groundedness of a context and an assistant message.
tools.groundedness_check.UpstageGroundednessCheckInput
Input for the Groundedness Check tool.
langchain_nvidia_ai_endpoints 0.1.1¶
langchain_nvidia_ai_endpoints.callbacks¶
Callback Handler that prints to std out.
Classes¶
callbacks.UsageCallbackHandler()
Callback Handler that tracks OpenAI info.
Functions¶
callbacks.get_token_cost_for_model(...[, ...])
Get the cost in USD for a given model and number of tokens.
callbacks.get_usage_callback([price_map, ...])
Get the OpenAI callback handler in a context manager.
callbacks.standardize_model_name(model_name)
Standardize the model name to a format that can be used in the OpenAI API.
langchain_nvidia_ai_endpoints.chat_models¶
Chat Model Components Derived from ChatModel/NVIDIA
Classes¶
chat_models.ChatNVIDIA
NVIDIA chat model.
langchain_nvidia_ai_endpoints.embeddings¶
Embeddings Components Derived from NVEModel/Embeddings
Classes¶
embeddings.NVIDIAEmbeddings
Client to NVIDIA embeddings models.
langchain_nvidia_ai_endpoints.reranking¶
Classes¶
reranking.NVIDIARerank
LangChain Document Compressor that uses the NVIDIA NeMo Retriever Reranking API.
reranking.Ranking
Create a new model by parsing and validating input data from keyword arguments.
langchain_nvidia_ai_endpoints.tools¶
OpenAI chat wrapper.
Classes¶
tools.ServerToolsMixin()
langchain_weaviate 0.0.2¶
langchain_community 0.2.4¶
langchain_community.adapters¶
Adapters are used to adapt LangChain models to other APIs.
LangChain integrates with many model providers.
While LangChain has its own message and model APIs,
it also makes it as easy as possible to explore other models
by exposing adapters that adapt LangChain models to other APIs,
such as the OpenAI API.
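A minimal sketch of converting OpenAI-style message dictionaries into LangChain messages with convert_openai_messages (listed under Functions below):
from langchain_community.adapters.openai import convert_openai_messages

# OpenAI-style dicts become LangChain message objects (SystemMessage, HumanMessage, ...).
messages = convert_openai_messages([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(messages)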
Classes¶
adapters.openai.Chat()
Chat.
adapters.openai.ChatCompletion()
Chat completion.
adapters.openai.ChatCompletionChunk
Chat completion chunk.
adapters.openai.ChatCompletions
Chat completions.
adapters.openai.Choice
Choice.
adapters.openai.ChoiceChunk
Choice chunk.
adapters.openai.Completions()
Completions.
adapters.openai.IndexableBaseModel
Allows a BaseModel to return its fields by string variable indexing.
Functions¶
adapters.openai.aenumerate(iterable[, start])
Async version of enumerate function.
adapters.openai.convert_dict_to_message(_dict)
Convert a dictionary to a LangChain message.
adapters.openai.convert_message_to_dict(message)
Convert a LangChain message to a dictionary.
adapters.openai.convert_messages_for_finetuning(...)
Convert messages to a list of lists of dictionaries for fine-tuning.
adapters.openai.convert_openai_messages(messages)
Convert dictionaries representing OpenAI messages to LangChain format.
langchain_community.agent_toolkits¶
Toolkits are sets of tools that can be used to interact with
various services and APIs.
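A minimal sketch of instantiating a toolkit and collecting its tools, using FileManagementToolkit; the root directory and tool selection are illustrative only:
from langchain_community.agent_toolkits import FileManagementToolkit

# Restricting the toolkit to a scratch directory and a subset of tools is optional.
toolkit = FileManagementToolkit(
    root_dir="/tmp/agent_scratch",
    selected_tools=["read_file", "write_file", "list_directory"],
)
tools = toolkit.get_tools()  # pass these to an agent constructor
print([t.name for t in tools])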
Classes¶
agent_toolkits.ainetwork.toolkit.AINetworkToolkit
Toolkit for interacting with AINetwork Blockchain.
agent_toolkits.amadeus.toolkit.AmadeusToolkit
Toolkit for interacting with Amadeus which offers APIs for travel.
agent_toolkits.azure_ai_services.AzureAiServicesToolkit
Toolkit for Azure AI Services.
agent_toolkits.azure_cognitive_services.AzureCognitiveServicesToolkit
Toolkit for Azure Cognitive Services.
agent_toolkits.cassandra_database.toolkit.CassandraDatabaseToolkit
Toolkit for interacting with an Apache Cassandra database.
agent_toolkits.clickup.toolkit.ClickupToolkit
Clickup Toolkit.
agent_toolkits.cogniswitch.toolkit.CogniswitchToolkit
Toolkit for CogniSwitch.
agent_toolkits.connery.toolkit.ConneryToolkit
Toolkit with a list of Connery Actions as tools.
agent_toolkits.file_management.toolkit.FileManagementToolkit
Toolkit for interacting with local files.
agent_toolkits.github.toolkit.BranchName
Schema for operations that require a branch name as input.
agent_toolkits.github.toolkit.CommentOnIssue
Schema for operations that require a comment as input.
agent_toolkits.github.toolkit.CreateFile
Schema for operations that require a file path and content as input.
agent_toolkits.github.toolkit.CreatePR
Schema for operations that require a PR title and body as input.
agent_toolkits.github.toolkit.CreateReviewRequest
Schema for operations that require a username as input.
agent_toolkits.github.toolkit.DeleteFile
Schema for operations that require a file path as input.
agent_toolkits.github.toolkit.DirectoryPath
Schema for operations that require a directory path as input.
agent_toolkits.github.toolkit.GetIssue
Schema for operations that require an issue number as input.
agent_toolkits.github.toolkit.GetPR
Schema for operations that require a PR number as input.
agent_toolkits.github.toolkit.GitHubToolkit
GitHub Toolkit.
agent_toolkits.github.toolkit.NoInput
Schema for operations that do not require any input.
agent_toolkits.github.toolkit.ReadFile
Schema for operations that require a file path as input.
agent_toolkits.github.toolkit.SearchCode
Schema for operations that require a search query as input.
agent_toolkits.github.toolkit.SearchIssuesAndPRs
Schema for operations that require a search query as input.
agent_toolkits.github.toolkit.UpdateFile
Schema for operations that require a file path and content as input.
agent_toolkits.gitlab.toolkit.GitLabToolkit
GitLab Toolkit.
agent_toolkits.gmail.toolkit.GmailToolkit
Toolkit for interacting with Gmail.
agent_toolkits.jira.toolkit.JiraToolkit
Jira Toolkit.
agent_toolkits.json.toolkit.JsonToolkit
Toolkit for interacting with a JSON spec.
agent_toolkits.multion.toolkit.MultionToolkit
Toolkit for interacting with the Browser Agent.
agent_toolkits.nasa.toolkit.NasaToolkit
Nasa Toolkit.
agent_toolkits.nla.tool.NLATool
Natural Language API Tool.
agent_toolkits.nla.toolkit.NLAToolkit
Natural Language API Toolkit.
agent_toolkits.office365.toolkit.O365Toolkit
Toolkit for interacting with Office 365.
agent_toolkits.openapi.planner.RequestsDeleteToolWithParsing
Tool that sends a DELETE request and parses the response.
agent_toolkits.openapi.planner.RequestsGetToolWithParsing
Requests GET tool with LLM-instructed extraction of truncated responses.
agent_toolkits.openapi.planner.RequestsPatchToolWithParsing
Requests PATCH tool with LLM-instructed extraction of truncated responses.
agent_toolkits.openapi.planner.RequestsPostToolWithParsing
Requests POST tool with LLM-instructed extraction of truncated responses.
agent_toolkits.openapi.planner.RequestsPutToolWithParsing
Requests PUT tool with LLM-instructed extraction of truncated responses.
agent_toolkits.openapi.spec.ReducedOpenAPISpec(...)
A reduced OpenAPI spec.
agent_toolkits.openapi.toolkit.OpenAPIToolkit
Toolkit for interacting with an OpenAPI API.
agent_toolkits.openapi.toolkit.RequestsToolkit
Toolkit for making REST requests.
agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit
Toolkit for PlayWright browser tools.
agent_toolkits.polygon.toolkit.PolygonToolkit
Polygon Toolkit.
agent_toolkits.powerbi.toolkit.PowerBIToolkit
Toolkit for interacting with Power BI dataset.
agent_toolkits.slack.toolkit.SlackToolkit
Toolkit for interacting with Slack.
agent_toolkits.spark_sql.toolkit.SparkSQLToolkit
Toolkit for interacting with Spark SQL.
agent_toolkits.sql.toolkit.SQLDatabaseToolkit
Toolkit for interacting with SQL databases.
agent_toolkits.steam.toolkit.SteamToolkit
Steam Toolkit.
agent_toolkits.zapier.toolkit.ZapierToolkit
Zapier Toolkit.
Functions¶
agent_toolkits.json.base.create_json_agent(...)
Construct a json agent from an LLM and tools.
agent_toolkits.load_tools.get_all_tool_names()
Get a list of all possible tool names.
agent_toolkits.load_tools.load_huggingface_tool(...)
Loads a tool from the HuggingFace Hub.
agent_toolkits.load_tools.load_tools(tool_names)
Load tools based on their name.
agent_toolkits.openapi.base.create_openapi_agent(...)
Construct an OpenAPI agent from an LLM and tools.
agent_toolkits.openapi.planner.create_openapi_agent(...)
Construct an OpenAI API planner and controller for a given spec.
agent_toolkits.openapi.spec.reduce_openapi_spec(spec)
Simplify/distill/minify a spec somehow.
agent_toolkits.powerbi.base.create_pbi_agent(llm)
Construct a Power BI agent from an LLM and tools.
agent_toolkits.powerbi.chat_base.create_pbi_chat_agent(llm)
Construct a Power BI agent from a Chat LLM and tools.
agent_toolkits.spark_sql.base.create_spark_sql_agent(...)
Construct a Spark SQL agent from an LLM and tools.
agent_toolkits.sql.base.create_sql_agent(llm)
Construct a SQL agent from an LLM and toolkit or database.
langchain_community.agents¶
Classes¶
agents.openai_assistant.base.OpenAIAssistantV2Runnable
[Beta] Run an OpenAI Assistant.
langchain_community.cache¶
Warning
Beta Feature!
Cache provides an optional caching layer for LLMs.
Cache is useful for two reasons:
It can save you money by reducing the number of API calls you make to the LLM
provider if you’re often requesting the same completion multiple times.
It can speed up your application by reducing the number of API calls you make
to the LLM provider.
Cache directly competes with Memory. See documentation for Pros and Cons.
Class hierarchy:
BaseCache --> <name>Cache # Examples: InMemoryCache, RedisCache, GPTCache
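A minimal sketch of installing a cache globally, using set_llm_cache from langchain_core.globals; SQLiteCache is shown as a persistent alternative:
from langchain_community.cache import InMemoryCache, SQLiteCache
from langchain_core.globals import set_llm_cache

# Any cache from the hierarchy above can be installed globally;
# InMemoryCache lasts for the process, SQLiteCache persists across processes.
set_llm_cache(InMemoryCache())
# set_llm_cache(SQLiteCache(database_path=".langchain.db"))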
Classes¶
cache.AstraDBCache(*[, collection_name, ...])
[Deprecated]
cache.AstraDBSemanticCache(*[, ...])
[Deprecated]
cache.AsyncRedisCache(redis_, *[, ttl])
Cache that uses Redis as a backend.
cache.AzureCosmosDBSemanticCache(...[, ...])
Cache that uses Cosmos DB Mongo vCore vector-store backend
cache.CassandraCache([session, keyspace, ...])
Cache that uses Cassandra / Astra DB as a backend.
cache.CassandraSemanticCache([session, ...])
Cache that uses Cassandra as a vector-store backend for semantic (i.e. similarity-based) lookup.
cache.FullLLMCache(**kwargs)
SQLite table for full LLM Cache (all generations).
cache.FullMd5LLMCache(**kwargs)
SQLite table for full LLM Cache (all generations).
cache.GPTCache([init_func])
Cache that uses GPTCache as a backend.
cache.InMemoryCache()
Cache that stores things in memory.
cache.MomentoCache(cache_client, cache_name, *)
Cache that uses Momento as a backend.
cache.OpenSearchSemanticCache(...[, ...])
Cache that uses OpenSearch vector store backend
cache.RedisCache(redis_, *[, ttl])
Cache that uses Redis as a backend.
cache.RedisSemanticCache(redis_url, embedding)
Cache that uses Redis as a vector-store backend.
cache.SQLAlchemyCache(engine, cache_schema)
Cache that uses SQLAlchemy as a backend.
cache.SQLAlchemyMd5Cache(engine, cache_schema)
Cache that uses SQLAlchemy as a backend.
cache.SQLiteCache([database_path])
Cache that uses SQLite as a backend.
cache.UpstashRedisCache(redis_, *[, ttl])
Cache that uses Upstash Redis as a backend.
langchain_community.callbacks¶
Callback handlers allow listening to events in LangChain.
Class hierarchy:
BaseCallbackHandler --> <name>CallbackHandler # Example: AimCallbackHandler
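A minimal sketch attaching the OpenAICallbackHandler listed below via the get_openai_callback context manager, assuming OPENAI_API_KEY is set:
from langchain_community.callbacks.manager import get_openai_callback
from langchain_community.chat_models import ChatOpenAI

# The context manager attaches an OpenAICallbackHandler that tallies
# tokens and cost for the calls made inside the block.
llm = ChatOpenAI(temperature=0)
with get_openai_callback() as cb:
    llm.invoke("Tell me a joke")
    print(cb.total_tokens, cb.total_cost)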
Classes¶
callbacks.aim_callback.AimCallbackHandler([...])
Callback Handler that logs to Aim.
callbacks.aim_callback.BaseMetadataCallbackHandler()
Callback handler for the metadata and associated function states for callbacks.
callbacks.argilla_callback.ArgillaCallbackHandler(...)
Callback Handler that logs into Argilla.
callbacks.arize_callback.ArizeCallbackHandler([...])
Callback Handler that logs to Arize.
callbacks.arthur_callback.ArthurCallbackHandler(...)
Callback Handler that logs to Arthur platform.
callbacks.bedrock_anthropic_callback.BedrockAnthropicTokenUsageCallbackHandler()
Callback Handler that tracks bedrock anthropic info.
callbacks.clearml_callback.ClearMLCallbackHandler([...])
Callback Handler that logs to ClearML.
callbacks.comet_ml_callback.CometCallbackHandler([...])
Callback Handler that logs to Comet.
callbacks.confident_callback.DeepEvalCallbackHandler(metrics)
Callback Handler that logs into deepeval.
callbacks.context_callback.ContextCallbackHandler([...])
Callback Handler that records transcripts to the Context service.
callbacks.fiddler_callback.FiddlerCallbackHandler(...)
Initialize Fiddler callback handler.
callbacks.flyte_callback.FlyteCallbackHandler()
Callback handler that is used within a Flyte task.
callbacks.human.AsyncHumanApprovalCallbackHandler(...)
Asynchronous callback for manually validating values.
callbacks.human.HumanApprovalCallbackHandler(...)
Callback for manually validating values.
callbacks.human.HumanRejectedException
Exception to raise when a person manually reviews and rejects a value.
callbacks.infino_callback.InfinoCallbackHandler([...])
Callback Handler that logs to Infino.
callbacks.labelstudio_callback.LabelStudioCallbackHandler([...])
Label Studio callback handler.
callbacks.labelstudio_callback.LabelStudioMode(value)
Label Studio mode enumerator.
callbacks.llmonitor_callback.LLMonitorCallbackHandler([...])
Callback Handler for LLMonitor.
callbacks.llmonitor_callback.UserContextManager(user_id)
Context manager for LLMonitor user context.
callbacks.mlflow_callback.MlflowCallbackHandler([...])
Callback Handler that logs metrics and artifacts to mlflow server.
callbacks.mlflow_callback.MlflowLogger(**kwargs)
Callback Handler that logs metrics and artifacts to mlflow server.
callbacks.openai_info.OpenAICallbackHandler()
Callback Handler that tracks OpenAI info.
callbacks.promptlayer_callback.PromptLayerCallbackHandler([...])
Callback handler for promptlayer.
callbacks.sagemaker_callback.SageMakerCallbackHandler(run)
Callback Handler that logs prompt artifacts and metrics to SageMaker Experiments.
callbacks.streamlit.mutable_expander.ChildRecord(...)
Child record as a NamedTuple.
callbacks.streamlit.mutable_expander.ChildType(value)
Enumerator of the child type.
callbacks.streamlit.mutable_expander.MutableExpander(...)
Streamlit expander that can be renamed and dynamically expanded/collapsed.
callbacks.streamlit.streamlit_callback_handler.LLMThought(...)
A thought in the LLM's thought stream.
callbacks.streamlit.streamlit_callback_handler.LLMThoughtLabeler()
Generates markdown labels for LLMThought containers.
callbacks.streamlit.streamlit_callback_handler.LLMThoughtState(value)
Enumerator of the LLMThought state.
callbacks.streamlit.streamlit_callback_handler.StreamlitCallbackHandler(...)
Callback handler that writes to a Streamlit app.
callbacks.streamlit.streamlit_callback_handler.ToolRecord(...)
Tool record as a NamedTuple.
callbacks.tracers.comet.CometTracer(**kwargs)
Comet Tracer.
callbacks.tracers.wandb.RunProcessor(...)
Handles the conversion of a LangChain Runs into a WBTraceTree.
callbacks.tracers.wandb.WandbRunArgs
Arguments for the WandbTracer.
callbacks.tracers.wandb.WandbTracer([run_args])
Callback Handler that logs to Weights and Biases.
callbacks.trubrics_callback.TrubricsCallbackHandler([...])
Callback handler for Trubrics.
callbacks.upstash_ratelimit_callback.UpstashRatelimitError(...)
Upstash Ratelimit Error
callbacks.upstash_ratelimit_callback.UpstashRatelimitHandler(...)
Callback to handle rate limiting based on the number of requests or the number of tokens in the input.
callbacks.uptrain_callback.UpTrainCallbackHandler(*)
Callback Handler that logs evaluation results to uptrain and the console.
callbacks.uptrain_callback.UpTrainDataSchema(...)
The UpTrain data schema for tracking evaluation results.
callbacks.utils.BaseMetadataCallbackHandler()
Handle the metadata and associated function states for callbacks.
callbacks.wandb_callback.WandbCallbackHandler([...])
Callback Handler that logs to Weights and Biases.
callbacks.whylabs_callback.WhyLabsCallbackHandler(...)
Callback Handler for logging to WhyLabs.
Functions¶
callbacks.aim_callback.import_aim()
Import the aim python package and raise an error if it is not installed.
callbacks.clearml_callback.import_clearml()
Import the clearml python package and raise an error if it is not installed.
callbacks.comet_ml_callback.import_comet_ml()
Import comet_ml and raise an error if it is not installed.
callbacks.context_callback.import_context()
Import the getcontext package.
callbacks.fiddler_callback.import_fiddler()
Import the fiddler python package and raise an error if it is not installed.
callbacks.flyte_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.flyte_callback.import_flytekit()
Import flytekit and flytekitplugins-deck-standard.
callbacks.infino_callback.get_num_tokens(...)
Calculate num tokens for OpenAI with tiktoken package.
callbacks.infino_callback.import_infino()
Import the infino client.
callbacks.infino_callback.import_tiktoken()
Import tiktoken for counting tokens for OpenAI models.
callbacks.labelstudio_callback.get_default_label_configs(mode)
Get default Label Studio configs for the given mode.
callbacks.llmonitor_callback.identify(user_id)
Builds an LLMonitor UserContextManager
callbacks.manager.get_bedrock_anthropic_callback()
Get the Bedrock anthropic callback handler in a context manager.
callbacks.manager.get_openai_callback()
Get the OpenAI callback handler in a context manager.
callbacks.manager.wandb_tracing_enabled([...])
Get the WandbTracer in a context manager.
callbacks.mlflow_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.mlflow_callback.construct_html_from_prompt_and_generation(...)
Construct an html element from a prompt and a generation.
callbacks.mlflow_callback.get_text_complexity_metrics()
Get the text complexity metrics from textstat.
callbacks.mlflow_callback.import_mlflow()
Import the mlflow python package and raise an error if it is not installed.
callbacks.mlflow_callback.mlflow_callback_metrics()
Get the metrics to log to MLFlow.
callbacks.openai_info.get_openai_token_cost_for_model(...)
Get the cost in USD for a given model and number of tokens.
callbacks.openai_info.standardize_model_name(...)
Standardize the model name to a format that can be used in the OpenAI API.
callbacks.sagemaker_callback.save_json(data, ...)
Save dict to local file path.
callbacks.tracers.comet.import_comet_llm_api()
Import comet_llm api and raise an error if it is not installed.
callbacks.uptrain_callback.import_uptrain()
Import the uptrain package.
callbacks.utils.flatten_dict(nested_dict[, ...])
Flatten a nested dictionary into a flat dictionary.
callbacks.utils.hash_string(s)
Hash a string using sha1.
callbacks.utils.import_pandas()
Import the pandas python package and raise an error if it is not installed.
callbacks.utils.import_spacy()
Import the spacy python package and raise an error if it is not installed.
callbacks.utils.import_textstat()
Import the textstat python package and raise an error if it is not installed.
callbacks.utils.load_json(json_path)
Load json file to a string.
callbacks.wandb_callback.analyze_text(text)
Analyze text using textstat and spacy.
callbacks.wandb_callback.construct_html_from_prompt_and_generation(...)
Construct an html element from a prompt and a generation.
callbacks.wandb_callback.import_wandb()
Import the wandb python package and raise an error if it is not installed.
callbacks.wandb_callback.load_json_to_dict(...)
Load json file to a dictionary.
callbacks.whylabs_callback.import_langkit([...])
Import the langkit python package and raise an error if it is not installed.
langchain_community.chains¶
Chains module for langchain_community
This module contains the community chains.
Classes¶
chains.graph_qa.arangodb.ArangoGraphQAChain
Chain for question-answering against a graph by generating AQL statements.
chains.graph_qa.base.GraphQAChain
Chain for question-answering against a graph.
chains.graph_qa.cypher.GraphCypherQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.cypher_utils.CypherQueryCorrector(schemas)
Used to correct relationship direction in generated Cypher statements.
chains.graph_qa.cypher_utils.Schema(...)
Create new instance of Schema(left_node, relation, right_node)
chains.graph_qa.falkordb.FalkorDBQAChain
Chain for question-answering against a graph by generating Cypher statements.
chains.graph_qa.gremlin.GremlinQAChain
Chain for question-answering against a graph by generating gremlin statements.
chains.graph_qa.hugegraph.HugeGraphQAChain
Chain for question-answering against a graph by generating gremlin statements.
chains.graph_qa.kuzu.KuzuQAChain
Question-answering against a graph by generating Cypher statements for Kùzu.
chains.graph_qa.nebulagraph.NebulaGraphQAChain
Chain for question-answering against a graph by generating nGQL statements.
chains.graph_qa.neptune_cypher.NeptuneOpenCypherQAChain
Chain for question-answering against a Neptune graph by generating openCypher statements.
chains.graph_qa.neptune_sparql.NeptuneSparqlQAChain
Chain for question-answering against a Neptune graph by generating SPARQL statements.
chains.graph_qa.ontotext_graphdb.OntotextGraphDBQAChain
Question-answering against Ontotext GraphDB
chains.graph_qa.sparql.GraphSparqlQAChain
Question-answering against an RDF or OWL graph by generating SPARQL statements.
chains.llm_requests.LLMRequestsChain
Chain that requests a URL and then uses an LLM to parse results.
chains.openapi.chain.OpenAPIEndpointChain
Chain interacts with an OpenAPI endpoint using natural language.
chains.openapi.requests_chain.APIRequesterChain
Get the request parser.
chains.openapi.requests_chain.APIRequesterOutputParser
Parse the request and error tags.
chains.openapi.response_chain.APIResponderChain
Get the response parser.
chains.openapi.response_chain.APIResponderOutputParser
Parse the response and error tags.
chains.pebblo_retrieval.base.PebbloRetrievalQA
Retrieval Chain with Identity & Semantic Enforcement for question-answering against a vector database.
chains.pebblo_retrieval.models.App
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.AuthContext
Class for an authorization context.
chains.pebblo_retrieval.models.ChainInput
Input for PebbloRetrievalQA chain.
chains.pebblo_retrieval.models.Chains
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.Context
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.Framework
Langchain framework details
chains.pebblo_retrieval.models.Model
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.PkgInfo
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.Prompt
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.Qa
Create a new model by parsing and validating input data from keyword arguments.
chains.pebblo_retrieval.models.Runtime
OS, language details
chains.pebblo_retrieval.models.SemanticContext
Class for a semantic context.
chains.pebblo_retrieval.models.SemanticEntities
Class for a semantic entity filter.
chains.pebblo_retrieval.models.SemanticTopics
Class for a semantic topic filter.
chains.pebblo_retrieval.models.VectorDB
Create a new model by parsing and validating input data from keyword arguments.
Functions¶
chains.ernie_functions.base.convert_python_function_to_ernie_function(...)
Convert a Python function to an Ernie function-calling API compatible dict.
chains.ernie_functions.base.convert_to_ernie_function(...)
Convert a raw function/class to an Ernie function.
chains.ernie_functions.base.create_ernie_fn_chain(...)
[Legacy] Create an LLM chain that uses Ernie functions.
chains.ernie_functions.base.create_ernie_fn_runnable(...)
Create a runnable sequence that uses Ernie functions.
chains.ernie_functions.base.create_structured_output_chain(...)
[Legacy] Create an LLMChain that uses an Ernie function to get a structured output.
chains.ernie_functions.base.create_structured_output_runnable(...)
Create a runnable that uses an Ernie function to get a structured output.
chains.ernie_functions.base.get_ernie_output_parser(...)
Get the appropriate function output parser given the user functions.
chains.graph_qa.cypher.construct_schema(...)
Filter the schema based on included or excluded types
chains.graph_qa.cypher.extract_cypher(text)
Extract Cypher code from a text.
chains.graph_qa.falkordb.extract_cypher(text)
Extract Cypher code from a text.
chains.graph_qa.gremlin.extract_gremlin(text)
Extract Gremlin code from a text.
chains.graph_qa.kuzu.extract_cypher(text)
Extract Cypher code from a text.
chains.graph_qa.kuzu.remove_prefix(text, prefix)
Remove a prefix from a text.
chains.graph_qa.neptune_cypher.extract_cypher(text)
Extract Cypher code from text using Regex.
chains.graph_qa.neptune_cypher.trim_query(query)
Trim the query to only include Cypher keywords.
chains.graph_qa.neptune_cypher.use_simple_prompt(llm)
Decides whether to use the simple prompt
chains.graph_qa.neptune_sparql.extract_sparql(query)
Extract SPARQL code from a text.
chains.pebblo_retrieval.enforcement_filters.set_enforcement_filters(...)
Set identity and semantic enforcement filters in the retriever.
chains.pebblo_retrieval.utilities.get_ip()
Fetch local runtime ip address.
chains.pebblo_retrieval.utilities.get_runtime()
Fetch the current Framework and Runtime details.
langchain_community.chat_loaders¶
Chat Loaders load chat messages from common communications platforms.
Load chat messages from various communications platforms such as Facebook Messenger, Telegram, and WhatsApp. The loaded chat messages can be used for fine-tuning models.
Class hierarchy:
BaseChatLoader --> <name>ChatLoader # Examples: WhatsAppChatLoader, IMessageChatLoader
Main helpers:
ChatSession
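Example (a minimal sketch combining the WhatsApp loader and the utility functions listed below; the export path and sender name are hypothetical):
from langchain_community.chat_loaders.whatsapp import WhatsAppChatLoader
from langchain_community.chat_loaders.utils import map_ai_messages, merge_chat_runs

loader = WhatsAppChatLoader(path="exports/whatsapp_chat.txt")
sessions = merge_chat_runs(loader.lazy_load())            # merge consecutive messages from the same sender
sessions = map_ai_messages(sessions, sender="Jane Doe")   # treat messages from "Jane Doe" as AI messages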
Classes¶
chat_loaders.facebook_messenger.FolderFacebookMessengerChatLoader(path)
Load Facebook Messenger chat data from a folder.
chat_loaders.facebook_messenger.SingleFileFacebookMessengerChatLoader(path)
Load Facebook Messenger chat data from a single file.
chat_loaders.gmail.GMailLoader(creds[, n, ...])
[Deprecated] Load data from GMail.
chat_loaders.imessage.IMessageChatLoader([path])
Load chat sessions from the iMessage chat.db SQLite file.
chat_loaders.langsmith.LangSmithDatasetChatLoader(*, ...)
Load chat sessions from a LangSmith dataset with the "chat" data type.
chat_loaders.langsmith.LangSmithRunChatLoader(runs)
Load chat sessions from a list of LangSmith "llm" runs.
chat_loaders.slack.SlackChatLoader(path)
Load Slack conversations from a dump zip file.
chat_loaders.telegram.TelegramChatLoader(path)
Load telegram conversations to LangChain chat messages.
chat_loaders.whatsapp.WhatsAppChatLoader(path)
Load WhatsApp conversations from a dump zip file or directory.
Functions¶
chat_loaders.imessage.nanoseconds_from_2001_to_datetime(...)
chat_loaders.utils.map_ai_messages(...)
Convert messages from the specified 'sender' to AI messages.
chat_loaders.utils.map_ai_messages_in_session(...)
Convert messages from the specified 'sender' to AI messages.
chat_loaders.utils.merge_chat_runs(chat_sessions)
Merge chat runs together.
chat_loaders.utils.merge_chat_runs_in_session(...)
Merge chat runs together in a chat session.
langchain_community.chat_message_histories¶
Chat message history stores a history of the message interactions in a chat.
Class hierarchy:
BaseChatMessageHistory --> <name>ChatMessageHistory # Examples: FileChatMessageHistory, PostgresChatMessageHistory
Main helpers:
AIMessage, HumanMessage, BaseMessage
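Example (a minimal sketch of the file-backed history listed below; the file name is hypothetical):
from langchain_community.chat_message_histories import FileChatMessageHistory

history = FileChatMessageHistory("chat_history.json")
history.add_user_message("Hi, can you summarize yesterday's meeting?")
history.add_ai_message("Sure - here is a short summary ...")
print(history.messages)   # [HumanMessage(...), AIMessage(...)]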
Classes¶
chat_message_histories.astradb.AstraDBChatMessageHistory(*, ...)
[Deprecated]
chat_message_histories.cassandra.CassandraChatMessageHistory(...)
Chat message history that stores history in Cassandra.
chat_message_histories.cosmos_db.CosmosDBChatMessageHistory(...)
Chat message history backed by Azure CosmosDB.
chat_message_histories.dynamodb.DynamoDBChatMessageHistory(...)
Chat message history that stores history in AWS DynamoDB.
chat_message_histories.elasticsearch.ElasticsearchChatMessageHistory(...)
[Deprecated] Chat message history that stores history in Elasticsearch.
chat_message_histories.file.FileChatMessageHistory(...)
Chat message history that stores history in a local file.
chat_message_histories.firestore.FirestoreChatMessageHistory(...)
Chat message history backed by Google Firestore.
chat_message_histories.momento.MomentoChatMessageHistory(...)
Chat message history cache that uses Momento as a backend.
chat_message_histories.mongodb.MongoDBChatMessageHistory(...)
[Deprecated] Chat message history that stores history in MongoDB.
chat_message_histories.neo4j.Neo4jChatMessageHistory(...)
Chat message history stored in a Neo4j database.
chat_message_histories.postgres.PostgresChatMessageHistory(...)
[Deprecated] Chat message history stored in a Postgres database.
chat_message_histories.redis.RedisChatMessageHistory(...)
Chat message history stored in a Redis database.
chat_message_histories.rocksetdb.RocksetChatMessageHistory(...)
Uses Rockset to store chat messages.
chat_message_histories.singlestoredb.SingleStoreDBChatMessageHistory(...)
Chat message history stored in a SingleStoreDB database.
chat_message_histories.sql.BaseMessageConverter()
Convert BaseMessage to the SQLAlchemy model.
chat_message_histories.sql.DefaultMessageConverter(...)
The default message converter for SQLChatMessageHistory.
chat_message_histories.sql.SQLChatMessageHistory(...)
Chat message history stored in an SQL database.
chat_message_histories.streamlit.StreamlitChatMessageHistory([key])
Chat message history that stores messages in Streamlit session state.
chat_message_histories.tidb.TiDBChatMessageHistory(...)
Represents a chat message history stored in a TiDB database.
chat_message_histories.upstash_redis.UpstashRedisChatMessageHistory(...)
Chat message history stored in an Upstash Redis database.
chat_message_histories.xata.XataChatMessageHistory(...)
Chat message history stored in a Xata database.
chat_message_histories.zep.SearchScope(value)
Scope for the document search.
chat_message_histories.zep.SearchType(value)
Enumerator of the types of search to perform.
chat_message_histories.zep.ZepChatMessageHistory(...)
Chat message history that uses Zep as a backend.
chat_message_histories.zep_cloud.ZepCloudChatMessageHistory(...)
Chat message history that uses Zep Cloud as a backend.
Functions¶
chat_message_histories.sql.create_message_model(...)
Create a message model for a given table name.
chat_message_histories.zep_cloud.condense_zep_memory_into_human_message(...)
chat_message_histories.zep_cloud.get_zep_message_role_type(role)
langchain_community.chat_models¶
Chat Models are a variation on language models.
While Chat Models use language models under the hood, the interface they expose is a bit different. Rather than expose a “text in, text out” API, they expose an interface where “chat messages” are the inputs and outputs.
Class hierarchy:
BaseLanguageModel --> BaseChatModel --> <name> # Examples: ChatOpenAI, ChatGooglePalm
Main helpers:
AIMessage, BaseMessage, HumanMessage
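Example (a minimal sketch using the fake chat model listed below, so no API key is needed):
from langchain_core.messages import HumanMessage
from langchain_community.chat_models.fake import FakeListChatModel

chat = FakeListChatModel(responses=["Hello! How can I help?"])
reply = chat.invoke([HumanMessage(content="Hi there")])
print(reply.content)   # "Hello! How can I help?"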
Classes¶
chat_models.anthropic.ChatAnthropic
[Deprecated] Anthropic chat large language models.
chat_models.anyscale.ChatAnyscale
Anyscale Chat large language models.
chat_models.azure_openai.AzureChatOpenAI
[Deprecated] Azure OpenAI Chat Completion API.
chat_models.azureml_endpoint.AzureMLChatOnlineEndpoint
Azure ML Online Endpoint chat models.
chat_models.azureml_endpoint.CustomOpenAIChatContentFormatter()
Chat Content formatter for models with OpenAI like API scheme.
chat_models.azureml_endpoint.LlamaChatContentFormatter()
Deprecated: Kept for backwards compatibility
chat_models.azureml_endpoint.LlamaContentFormatter()
Content formatter for LLaMA.
chat_models.azureml_endpoint.MistralChatContentFormatter()
Content formatter for Mistral.
chat_models.baichuan.ChatBaichuan
Baichuan chat models API by Baichuan Intelligent Technology.
chat_models.baidu_qianfan_endpoint.QianfanChatEndpoint
Baidu Qianfan chat models.
chat_models.bedrock.BedrockChat
[Deprecated] Chat model that uses the Bedrock API.
chat_models.bedrock.ChatPromptAdapter()
Adapter class to prepare inputs from LangChain into the prompt format that the Chat model expects.
chat_models.cohere.ChatCohere
[Deprecated] Cohere chat large language models.
chat_models.coze.ChatCoze
ChatCoze chat models API by coze.com
chat_models.dappier.ChatDappierAI
Dappier chat large language models.
chat_models.databricks.ChatDatabricks
Databricks chat models API.
chat_models.deepinfra.ChatDeepInfra
A chat model that uses the DeepInfra API.
chat_models.deepinfra.ChatDeepInfraException
Exception raised when the DeepInfra API returns an error.
chat_models.edenai.ChatEdenAI
EdenAI chat large language models.
chat_models.ernie.ErnieBotChat
[Deprecated] ERNIE-Bot large language model.
chat_models.everlyai.ChatEverlyAI
EverlyAI Chat large language models.
chat_models.fake.FakeListChatModel
Fake ChatModel for testing purposes.
chat_models.fake.FakeMessagesListChatModel
Fake ChatModel for testing purposes.
chat_models.fireworks.ChatFireworks
[Deprecated] Fireworks Chat models.
chat_models.friendli.ChatFriendli
Friendli LLM for chat.
chat_models.gigachat.GigaChat
GigaChat large language models API.
chat_models.google_palm.ChatGooglePalm
Google PaLM Chat models API.
chat_models.google_palm.ChatGooglePalmError
Error with the Google PaLM API.
chat_models.gpt_router.GPTRouter
GPTRouter by Writesonic Inc.
chat_models.gpt_router.GPTRouterException
Error with the GPTRouter APIs
chat_models.gpt_router.GPTRouterModel
GPTRouter model.
chat_models.huggingface.ChatHuggingFace
[Deprecated] Wrapper for using Hugging Face LLM's as ChatModels.
chat_models.human.HumanInputChatModel
ChatModel which returns user input as the response.
chat_models.hunyuan.ChatHunyuan
Tencent Hunyuan chat models API by Tencent.
chat_models.javelin_ai_gateway.ChatJavelinAIGateway
Javelin AI Gateway chat models API.
chat_models.javelin_ai_gateway.ChatParams
Parameters for the Javelin AI Gateway LLM.
chat_models.jinachat.JinaChat
Jina AI Chat models API.
chat_models.kinetica.ChatKinetica
Kinetica LLM Chat Model API.
chat_models.kinetica.KineticaSqlOutputParser
Fetch and return data from the Kinetica LLM.
chat_models.kinetica.KineticaSqlResponse
Response containing SQL and the fetched data.
chat_models.kinetica.KineticaUtil()
Kinetica utility functions.
chat_models.konko.ChatKonko
ChatKonko Chat large language models API.
chat_models.litellm.ChatLiteLLM
Chat model that uses the LiteLLM API.
chat_models.litellm.ChatLiteLLMException
Error with the LiteLLM I/O library
chat_models.litellm_router.ChatLiteLLMRouter
LiteLLM Router as LangChain Model.
chat_models.llama_edge.LlamaEdgeChatService
Chat with LLMs via llama-api-server
chat_models.maritalk.ChatMaritalk
MariTalk Chat models API.
chat_models.maritalk.MaritalkHTTPError(...)
Initialize RequestException with request and response objects.
chat_models.minimax.MiniMaxChat
MiniMax large language models.
chat_models.mlflow.ChatMlflow
MLflow chat models API.
chat_models.mlflow_ai_gateway.ChatMLflowAIGateway
MLflow AI Gateway chat models API.
chat_models.mlflow_ai_gateway.ChatParams
Parameters for the MLflow AI Gateway LLM.
chat_models.mlx.ChatMLX
MLX chat models.
chat_models.moonshot.MoonshotChat
Moonshot large language models.
chat_models.octoai.ChatOctoAI
OctoAI Chat large language models.
chat_models.ollama.ChatOllama
Ollama locally runs large language models.
chat_models.openai.ChatOpenAI
[Deprecated] OpenAI Chat large language models API.
chat_models.pai_eas_endpoint.PaiEasChatEndpoint
Alibaba Cloud PAI-EAS LLM Service chat model API.
chat_models.perplexity.ChatPerplexity
Perplexity AI Chat models API.
chat_models.premai.ChatPremAI
PremAI Chat models.
chat_models.premai.ChatPremAPIError
Error with the PremAI API.
chat_models.promptlayer_openai.PromptLayerChatOpenAI
PromptLayer and OpenAI Chat large language models API.
chat_models.solar.SolarChat
[Deprecated] Wrapper around Solar large language models.
chat_models.sparkllm.ChatSparkLLM
iFlyTek Spark large language model.
chat_models.tongyi.ChatTongyi
Alibaba Tongyi Qwen chat models API.
chat_models.vertexai.ChatVertexAI
[Deprecated] Vertex AI Chat large language models API.
chat_models.volcengine_maas.VolcEngineMaasChat
Volc Engine Maas hosts a plethora of models.
chat_models.yandex.ChatYandexGPT
YandexGPT large language models.
chat_models.yuan2.ChatYuan2
Yuan2.0 Chat models API.
chat_models.zhipuai.ChatZhipuAI
ZhipuAI large language chat models API.
Functions¶
chat_models.anthropic.convert_messages_to_prompt_anthropic(...)
Format a list of messages into a full prompt for the Anthropic model
chat_models.baidu_qianfan_endpoint.convert_message_to_dict(message)
Convert a message to a dictionary that can be passed to the API.
chat_models.bedrock.convert_messages_to_prompt_mistral(...)
Convert a list of messages to a prompt for mistral.
chat_models.cohere.get_cohere_chat_request(...)
Get the request for the Cohere chat API.
chat_models.cohere.get_role(message)
Get the role of the message.
chat_models.fireworks.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.fireworks.acompletion_with_retry_streaming(...)
Use tenacity to retry the completion call for streaming.
chat_models.fireworks.completion_with_retry(...)
Use tenacity to retry the completion call.
chat_models.fireworks.conditional_decorator(...)
Define conditional decorator.
chat_models.fireworks.convert_dict_to_message(_dict)
Convert a dict response to a message.
chat_models.friendli.get_chat_request(messages)
Get a request of the Friendli chat API.
chat_models.friendli.get_role(message)
Get role of the message.
chat_models.google_palm.achat_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.google_palm.chat_with_retry(llm, ...)
Use tenacity to retry the completion call.
chat_models.gpt_router.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.gpt_router.completion_with_retry(...)
Use tenacity to retry the completion call.
chat_models.gpt_router.get_ordered_generation_requests(...)
Return the body for the model router input.
chat_models.jinachat.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.litellm.acompletion_with_retry(llm)
Use tenacity to retry the async completion call.
chat_models.litellm_router.get_llm_output(...)
Get llm output from usage and params.
chat_models.meta.convert_messages_to_prompt_llama(...)
Convert a list of messages to a prompt for llama.
chat_models.minimax.aconnect_httpx_sse(...)
chat_models.minimax.connect_httpx_sse(...)
chat_models.openai.acompletion_with_retry(llm)
Use tenacity to retry the async completion call.
chat_models.premai.chat_with_retry(llm, ...)
Use tenacity to retry the completion call.
chat_models.premai.create_prem_retry_decorator(llm, *)
Create a retry decorator for PremAI API errors.
chat_models.tongyi.convert_dict_to_message(_dict)
Convert a dict to a message.
chat_models.tongyi.convert_message_chunk_to_message(...)
Convert a message chunk to a message.
chat_models.tongyi.convert_message_to_dict(message)
Convert a message to a dict.
chat_models.volcengine_maas.convert_dict_to_message(_dict)
Convert a dict to a message.
chat_models.yandex.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.yandex.completion_with_retry(...)
Use tenacity to retry the completion call.
chat_models.yuan2.acompletion_with_retry(...)
Use tenacity to retry the async completion call.
chat_models.zhipuai.aconnect_sse(client, ...)
chat_models.zhipuai.connect_sse(client, ...)
langchain_community.cross_encoders¶
Cross encoders are wrappers around cross encoder models from different APIs and services.
Cross encoder models can be LLMs or not.
Class hierarchy:
BaseCrossEncoder --> <name>CrossEncoder # Examples: SagemakerEndpointCrossEncoder
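Example (a minimal sketch using the fake cross encoder listed below, so no model download is needed):
from langchain_community.cross_encoders.fake import FakeCrossEncoder

encoder = FakeCrossEncoder()
scores = encoder.score([("what is a cache?", "A cache stores recently computed results.")])
print(scores)   # one relevance score per (query, text) pair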
Classes¶
cross_encoders.fake.FakeCrossEncoder
Fake cross encoder model.
cross_encoders.huggingface.HuggingFaceCrossEncoder
HuggingFace cross encoder models.
cross_encoders.sagemaker_endpoint.CrossEncoderContentHandler()
Content handler for CrossEncoder class.
cross_encoders.sagemaker_endpoint.SagemakerEndpointCrossEncoder
SageMaker Inference CrossEncoder endpoint.
langchain_community.docstore¶
Docstores are classes to store and load Documents.
The Docstore is a simplified version of the Document Loader.
Class hierarchy:
Docstore --> <name> # Examples: InMemoryDocstore, Wikipedia
Main helpers:
Document, AddableMixin
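Example (a minimal sketch of the in-memory docstore listed below):
from langchain_core.documents import Document
from langchain_community.docstore.in_memory import InMemoryDocstore

store = InMemoryDocstore({"doc-1": Document(page_content="hello world")})
store.add({"doc-2": Document(page_content="a second document")})
print(store.search("doc-2"))   # returns the stored Document, or an "ID not found" message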
Classes¶
docstore.arbitrary_fn.DocstoreFn(lookup_fn)
Docstore via arbitrary lookup function.
docstore.base.AddableMixin()
Mixin class that supports adding texts.
docstore.base.Docstore()
Interface to access to place that stores documents.
docstore.in_memory.InMemoryDocstore([_dict])
Simple in memory docstore in the form of a dict.
docstore.wikipedia.Wikipedia()
Wikipedia API.
langchain_community.document_compressors¶
Classes¶
document_compressors.dashscope_rerank.DashScopeRerank
Document compressor that uses DashScope Rerank API.
document_compressors.flashrank_rerank.FlashrankRerank
Document compressor using Flashrank interface.
document_compressors.jina_rerank.JinaRerank
Document compressor that uses Jina Rerank API.
document_compressors.llmlingua_filter.LLMLinguaCompressor
Compress using LLMLingua Project.
document_compressors.openvino_rerank.OpenVINOReranker
OpenVINO rerank models.
document_compressors.openvino_rerank.RerankRequest([...])
Request for reranking.
document_compressors.rankllm_rerank.ModelType(value)
An enumeration.
document_compressors.rankllm_rerank.RankLLMRerank
Document compressor using the RankLLM interface.
langchain_community.document_loaders¶
Document Loaders are classes to load Documents.
Document Loaders are usually used to load a lot of Documents in a single run.
Class hierarchy:
BaseLoader --> <name>Loader # Examples: TextLoader, UnstructuredFileLoader
Main helpers:
Document, <name>TextSplitter
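Example (a minimal sketch of the TextLoader named in the class hierarchy above; the file path is hypothetical):
from langchain_community.document_loaders import TextLoader

loader = TextLoader("notes.txt")
docs = loader.load()                  # a list of Document objects with metadata
print(docs[0].page_content[:80])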
Classes¶
document_loaders.acreom.AcreomLoader(path[, ...])
Load acreom vault from a directory.
document_loaders.airbyte.AirbyteCDKLoader(...)
Load with an Airbyte source connector implemented using the CDK.
document_loaders.airbyte.AirbyteGongLoader(...)
Load from Gong using an Airbyte source connector.
document_loaders.airbyte.AirbyteHubspotLoader(...)
Load from Hubspot using an Airbyte source connector.
document_loaders.airbyte.AirbyteSalesforceLoader(...)
Load from Salesforce using an Airbyte source connector.
document_loaders.airbyte.AirbyteShopifyLoader(...)
Load from Shopify using an Airbyte source connector.
document_loaders.airbyte.AirbyteStripeLoader(...)
Load from Stripe using an Airbyte source connector.
document_loaders.airbyte.AirbyteTypeformLoader(...)
Load from Typeform using an Airbyte source connector.
document_loaders.airbyte.AirbyteZendeskSupportLoader(...)
Load from Zendesk Support using an Airbyte source connector.
document_loaders.airbyte_json.AirbyteJSONLoader(...)
Load local Airbyte json files.
document_loaders.airtable.AirtableLoader(...)
Load the Airtable tables.
document_loaders.apify_dataset.ApifyDatasetLoader
Load datasets from Apify web scraping, crawling, and data extraction platform.
document_loaders.arcgis_loader.ArcGISLoader(layer)
Load records from an ArcGIS FeatureLayer.
document_loaders.arxiv.ArxivLoader(query[, ...])
Load a query result from Arxiv.
document_loaders.assemblyai.AssemblyAIAudioLoaderById(...)
Load AssemblyAI audio transcripts.
document_loaders.assemblyai.AssemblyAIAudioTranscriptLoader(...)
Load AssemblyAI audio transcripts.
document_loaders.assemblyai.TranscriptFormat(value)
Transcript format to use for the document loader.
document_loaders.astradb.AstraDBLoader(...)
[Deprecated]
document_loaders.async_html.AsyncHtmlLoader(...)
Load HTML asynchronously.
document_loaders.athena.AthenaLoader(query, ...)
Load documents from AWS Athena.
document_loaders.azlyrics.AZLyricsLoader([...])
Load AZLyrics webpages.
document_loaders.azure_ai_data.AzureAIDataLoader(url)
Load from Azure AI Data.
document_loaders.azure_blob_storage_container.AzureBlobStorageContainerLoader(...)
Load from Azure Blob Storage container.
document_loaders.azure_blob_storage_file.AzureBlobStorageFileLoader(...)
Load from Azure Blob Storage files.
document_loaders.baiducloud_bos_directory.BaiduBOSDirectoryLoader(...)
Load from Baidu BOS directory.
document_loaders.baiducloud_bos_file.BaiduBOSFileLoader(...)
Load from Baidu Cloud BOS file.
document_loaders.base_o365.O365BaseLoader
Base class for all loaders that use the O365 package.
document_loaders.bibtex.BibtexLoader(...[, ...])
Load a bibtex file.
document_loaders.bigquery.BigQueryLoader(query)
[Deprecated] Load from the Google Cloud Platform BigQuery.
document_loaders.bilibili.BiliBiliLoader(...)
Load transcripts from BiliBili videos.
document_loaders.blackboard.BlackboardLoader(...)
Load a Blackboard course.
document_loaders.blob_loaders.cloud_blob_loader.CloudBlobLoader(url, *)
Load blobs from cloud URL or file:.
document_loaders.blob_loaders.file_system.FileSystemBlobLoader(path, *)
Load blobs in the local file system.
document_loaders.blob_loaders.youtube_audio.YoutubeAudioLoader(...)
Load YouTube urls as audio file(s).
document_loaders.blockchain.BlockchainDocumentLoader(...)
Load elements from a blockchain smart contract.
document_loaders.blockchain.BlockchainType(value)
Enumerator of the supported blockchains.
document_loaders.brave_search.BraveSearchLoader(...)
Load with Brave Search engine.
document_loaders.browserbase.BrowserbaseLoader(urls)
Load pre-rendered web pages using a headless browser hosted on Browserbase.
document_loaders.browserless.BrowserlessLoader(...)
Load webpages with Browserless /content endpoint.
document_loaders.cassandra.CassandraLoader(...)
Document Loader for Apache Cassandra.
document_loaders.chatgpt.ChatGPTLoader(log_file)
Load conversations from exported ChatGPT data.
document_loaders.chm.CHMParser(path)
Microsoft Compiled HTML Help (CHM) Parser.
document_loaders.chm.UnstructuredCHMLoader(...)
Load CHM files using Unstructured.
document_loaders.chromium.AsyncChromiumLoader(urls, *)
Scrape HTML pages from URLs using a headless instance of Chromium.
document_loaders.college_confidential.CollegeConfidentialLoader([...])
Load College Confidential webpages.
document_loaders.concurrent.ConcurrentLoader(...)
Load and parse Documents concurrently.
document_loaders.confluence.ConfluenceLoader(url)
Load Confluence pages.
document_loaders.confluence.ContentFormat(value)
Enumerator of the content formats of Confluence page.
document_loaders.conllu.CoNLLULoader(file_path)
Load CoNLL-U files.
document_loaders.couchbase.CouchbaseLoader(...)
Load documents from Couchbase.
document_loaders.csv_loader.CSVLoader(file_path)
Load a CSV file into a list of Documents.
document_loaders.csv_loader.UnstructuredCSVLoader(...)
Load CSV files using Unstructured.
document_loaders.cube_semantic.CubeSemanticLoader(...)
Load Cube semantic layer metadata.
document_loaders.datadog_logs.DatadogLogsLoader(...)
Load Datadog logs.
document_loaders.dataframe.BaseDataFrameLoader(...)
Initialize with dataframe object.
document_loaders.dataframe.DataFrameLoader(...)
Load Pandas DataFrame.
document_loaders.diffbot.DiffbotLoader(...)
Load Diffbot json file.
document_loaders.directory.DirectoryLoader(...)
Load from a directory.
document_loaders.discord.DiscordChatLoader(...)
Load Discord chat logs.
document_loaders.doc_intelligence.AzureAIDocumentIntelligenceLoader(...)
Load a PDF with Azure Document Intelligence.
document_loaders.docugami.DocugamiLoader
[Deprecated] Load from Docugami.
document_loaders.docusaurus.DocusaurusLoader(url)
Load from Docusaurus Documentation.
document_loaders.dropbox.DropboxLoader
Load files from Dropbox.
document_loaders.duckdb_loader.DuckDBLoader(query)
Load from DuckDB.
document_loaders.email.OutlookMessageLoader(...)
Loads Outlook Message files using extract_msg.
document_loaders.email.UnstructuredEmailLoader(...)
Load email files using Unstructured.
document_loaders.epub.UnstructuredEPubLoader(...)
Load EPub files using Unstructured.
document_loaders.etherscan.EtherscanLoader(...)
Load transactions from Ethereum mainnet.
document_loaders.evernote.EverNoteLoader(...)
Load from EverNote.
document_loaders.excel.UnstructuredExcelLoader(...)
Load Microsoft Excel files using Unstructured.
document_loaders.facebook_chat.FacebookChatLoader(path)
Load Facebook Chat messages directory dump.
document_loaders.fauna.FaunaLoader(query, ...)
Load from FaunaDB.
document_loaders.figma.FigmaFileLoader(...)
Load Figma file.
document_loaders.firecrawl.FireCrawlLoader(url, *)
Load web pages as Documents using FireCrawl.
document_loaders.gcs_directory.GCSDirectoryLoader(...)
[Deprecated] Load from GCS directory.
document_loaders.gcs_file.GCSFileLoader(...)
[Deprecated] Load from GCS file.
document_loaders.generic.GenericLoader(...)
Generic Document Loader.
document_loaders.geodataframe.GeoDataFrameLoader(...)
Load geopandas Dataframe.
document_loaders.git.GitLoader(repo_path[, ...])
Load Git repository files.
document_loaders.gitbook.GitbookLoader(web_page)
Load GitBook data.
document_loaders.github.BaseGitHubLoader
Load GitHub repository Issues.
document_loaders.github.GitHubIssuesLoader
Load issues of a GitHub repository.
document_loaders.github.GithubFileLoader
Load GitHub File
document_loaders.glue_catalog.GlueCatalogLoader(...)
Load table schemas from AWS Glue.
document_loaders.google_speech_to_text.GoogleSpeechToTextLoader(...)
[Deprecated] Loader for Google Cloud Speech-to-Text audio transcripts.
document_loaders.googledrive.GoogleDriveLoader
[Deprecated] Load Google Docs from Google Drive.
document_loaders.gutenberg.GutenbergLoader(...)
Load from Gutenberg.org.
document_loaders.helpers.FileEncoding(...)
File encoding as the NamedTuple.
document_loaders.hn.HNLoader([web_path, ...])
Load Hacker News data.
document_loaders.html.UnstructuredHTMLLoader(...)
Load HTML files using Unstructured.
document_loaders.html_bs.BSHTMLLoader(file_path)
Load HTML files and parse them with beautiful soup.
document_loaders.hugging_face_dataset.HuggingFaceDatasetLoader(path)
Load from Hugging Face Hub datasets.
document_loaders.hugging_face_model.HuggingFaceModelLoader(*)
Load model information from Hugging Face Hub, including README content.
document_loaders.ifixit.IFixitLoader(web_path)
Load iFixit repair guides, device wikis and answers.
document_loaders.image.UnstructuredImageLoader(...)
Load PNG and JPG files using Unstructured.
document_loaders.image_captions.ImageCaptionLoader(images)
Load image captions.
document_loaders.imsdb.IMSDbLoader([...])
Load IMSDb webpages.
document_loaders.iugu.IuguLoader(resource[, ...])
Load from IUGU.
document_loaders.joplin.JoplinLoader([...])
Load notes from Joplin.
document_loaders.json_loader.JSONLoader(...)
Load a JSON file using a jq schema.
document_loaders.kinetica_loader.KineticaLoader(...)
Load from Kinetica API.
document_loaders.lakefs.LakeFSClient(...)
Client for lakeFS.
document_loaders.lakefs.LakeFSLoader(...[, ...])
Load from lakeFS.
document_loaders.lakefs.UnstructuredLakeFSLoader(...)
Load from lakeFS as unstructured data.
document_loaders.larksuite.LarkSuiteDocLoader(...)
Load from LarkSuite (FeiShu).
document_loaders.larksuite.LarkSuiteWikiLoader(...)
Load from LarkSuite (FeiShu) wiki.
document_loaders.llmsherpa.LLMSherpaFileLoader(...)
Load Documents using LLMSherpa.
document_loaders.markdown.UnstructuredMarkdownLoader(...)
Load Markdown files using Unstructured.
document_loaders.mastodon.MastodonTootsLoader(...)
Load the Mastodon 'toots'.
document_loaders.max_compute.MaxComputeLoader(...)
Load from Alibaba Cloud MaxCompute table.
document_loaders.mediawikidump.MWDumpLoader(...)
Load MediaWiki dump from an XML file.
document_loaders.merge.MergedDataLoader(loaders)
Merge documents from a list of loaders
document_loaders.mhtml.MHTMLLoader(file_path)
Parse MHTML files with BeautifulSoup.
document_loaders.mintbase.MintbaseDocumentLoader(...)
Load elements from a blockchain smart contract.
document_loaders.modern_treasury.ModernTreasuryLoader(...)
Load from Modern Treasury.
document_loaders.mongodb.MongodbLoader(...)
Load MongoDB documents.
document_loaders.news.NewsURLLoader(urls[, ...])
Load news articles from URLs using Unstructured.
document_loaders.notebook.NotebookLoader(path)
Load Jupyter notebook (.ipynb) files.
document_loaders.notion.NotionDirectoryLoader(path, *)
Load Notion directory dump.
document_loaders.notiondb.NotionDBLoader(...)
Load from Notion DB.
document_loaders.nuclia.NucliaLoader(path, ...)
Load from any file type using Nuclia Understanding API.
document_loaders.obs_directory.OBSDirectoryLoader(...)
Load from Huawei OBS directory.
document_loaders.obs_file.OBSFileLoader(...)
Load from the Huawei OBS file.
document_loaders.obsidian.ObsidianLoader(path)
Load Obsidian files from directory.
document_loaders.odt.UnstructuredODTLoader(...)
Load OpenOffice ODT files using Unstructured.
document_loaders.onedrive.OneDriveLoader
Load from Microsoft OneDrive.
document_loaders.onedrive_file.OneDriveFileLoader
Load a file from Microsoft OneDrive.
document_loaders.onenote.OneNoteLoader
Load pages from OneNote notebooks.
document_loaders.open_city_data.OpenCityDataLoader(...)
Load from Open City.
document_loaders.oracleadb_loader.OracleAutonomousDatabaseLoader(...)
Load from Oracle ADB.
document_loaders.oracleai.OracleDocLoader(...)
Read documents using OracleDocLoader, given an Oracle connection (conn) and loader parameters (params).
document_loaders.oracleai.OracleDocReader()
Read a file
document_loaders.oracleai.OracleTextSplitter(...)
Splitting text using Oracle chunker.
document_loaders.oracleai.ParseOracleDocMetadata()
Parse Oracle doc metadata...
document_loaders.org_mode.UnstructuredOrgModeLoader(...)
Load Org-Mode files using Unstructured.
document_loaders.parsers.audio.FasterWhisperParser(*)
Transcribe and parse audio files with faster-whisper.
document_loaders.parsers.audio.OpenAIWhisperParser([...])
Transcribe and parse audio files.
document_loaders.parsers.audio.OpenAIWhisperParserLocal([...])
Transcribe and parse audio files with OpenAI Whisper model.
document_loaders.parsers.audio.YandexSTTParser(*)
Transcribe and parse audio files.
document_loaders.parsers.doc_intelligence.AzureAIDocumentIntelligenceParser(...)
Loads a PDF with Azure Document Intelligence (formerly Forms Recognizer).
document_loaders.parsers.docai.DocAIParser(*)
[Deprecated] Google Cloud Document AI parser.
document_loaders.parsers.docai.DocAIParsingResults(...)
Dataclass to store Document AI parsing results.
document_loaders.parsers.generic.MimeTypeBasedParser(...)
Parser that uses mime-types to parse a blob.
document_loaders.parsers.grobid.GrobidParser(...)
Load article PDF files using Grobid.
document_loaders.parsers.grobid.ServerUnavailableException | https://api.python.langchain.com/en/latest/community_api_reference.html |