id: stringlengths 14-16
text: stringlengths 36-2.73k
source: stringlengths 59-127
1d1034b022d3-99
(langchain.llms.OpenAI method) (langchain.llms.OpenLM method) (langchain.llms.PromptLayerOpenAI method) streaming (langchain.chat_models.ChatOpenAI attribute) (langchain.llms.Anthropic attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.OpenAI attribute) (langchain.llms.OpenAIChat attribute) (langchain.llms.OpenLM attribute) (langchain.llms.PromptLayerOpenAIChat attribute) strip_outputs (langchain.chains.SimpleSequentialChain attribute) StripeLoader (class in langchain.document_loaders) STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute) structured_query_translator (langchain.retrievers.SelfQueryRetriever attribute) suffix (langchain.llms.LlamaCpp attribute) (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) summarize_related_memories() (langchain.experimental.GenerativeAgent method) summary (langchain.experimental.GenerativeAgent attribute) summary_message_cls (langchain.memory.ConversationKGMemory attribute) summary_refresh_seconds (langchain.experimental.GenerativeAgent attribute) SupabaseVectorStore (class in langchain.vectorstores) SWIFT (langchain.text_splitter.Language attribute) sync_browser (langchain.agents.agent_toolkits.PlayWrightBrowserToolkit attribute) T table (langchain.vectorstores.ClickhouseSettings attribute) (langchain.vectorstores.MyScaleSettings attribute) table_info (langchain.utilities.PowerBIDataset property) table_name (langchain.memory.SQLiteEntityStore attribute) (langchain.vectorstores.SupabaseVectorStore attribute) table_names (langchain.utilities.PowerBIDataset attribute)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-100
table_names (langchain.utilities.PowerBIDataset attribute) tags (langchain.llms.AI21 attribute) (langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute) (langchain.llms.Anyscale attribute) (langchain.llms.Aviary attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.Banana attribute) (langchain.llms.Baseten attribute) (langchain.llms.Beam attribute) (langchain.llms.Bedrock attribute) (langchain.llms.CerebriumAI attribute) (langchain.llms.Cohere attribute) (langchain.llms.CTransformers attribute) (langchain.llms.Databricks attribute) (langchain.llms.DeepInfra attribute) (langchain.llms.FakeListLLM attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.GooglePalm attribute) (langchain.llms.GooseAI attribute) (langchain.llms.GPT4All attribute) (langchain.llms.HuggingFaceEndpoint attribute) (langchain.llms.HuggingFaceHub attribute) (langchain.llms.HuggingFacePipeline attribute) (langchain.llms.HuggingFaceTextGenInference attribute) (langchain.llms.HumanInputLLM attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.Modal attribute) (langchain.llms.MosaicML attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.OpenAI attribute) (langchain.llms.OpenAIChat attribute) (langchain.llms.OpenLM attribute) (langchain.llms.Petals attribute) (langchain.llms.PipelineAI attribute) (langchain.llms.PredictionGuard attribute) (langchain.llms.Replicate attribute) (langchain.llms.RWKV attribute)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-101
(langchain.llms.Replicate attribute) (langchain.llms.RWKV attribute) (langchain.llms.SagemakerEndpoint attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) (langchain.llms.SelfHostedPipeline attribute) (langchain.llms.StochasticAI attribute) (langchain.llms.VertexAI attribute) (langchain.llms.Writer attribute) Tair (class in langchain.vectorstores) task (langchain.embeddings.HuggingFaceHubEmbeddings attribute) (langchain.llms.HuggingFaceEndpoint attribute) (langchain.llms.HuggingFaceHub attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) tbs (langchain.utilities.GoogleSerperAPIWrapper attribute) TelegramChatApiLoader (class in langchain.document_loaders) TelegramChatFileLoader (class in langchain.document_loaders) TelegramChatLoader (in module langchain.document_loaders) temp (langchain.llms.GPT4All attribute) temperature (langchain.chat_models.ChatGooglePalm attribute) (langchain.chat_models.ChatOpenAI attribute) (langchain.llms.AI21 attribute) (langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.Cohere attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.GooglePalm attribute) (langchain.llms.GooseAI attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.OpenAI attribute) (langchain.llms.OpenLM attribute) (langchain.llms.Petals attribute) (langchain.llms.PredictionGuard attribute) (langchain.llms.RWKV attribute) (langchain.llms.VertexAI attribute)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-102
(langchain.llms.RWKV attribute) (langchain.llms.VertexAI attribute) (langchain.llms.Writer attribute) template (langchain.prompts.PromptTemplate attribute) (langchain.tools.QueryPowerBITool attribute) template_format (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) (langchain.prompts.PromptTemplate attribute) template_tool_response (langchain.agents.ConversationalChatAgent attribute) text_length (langchain.chains.LLMRequestsChain attribute) text_splitter (langchain.chains.AnalyzeDocumentChain attribute) (langchain.chains.MapReduceChain attribute) (langchain.chains.QAGenerationChain attribute) TextLoader (class in langchain.document_loaders) texts (langchain.retrievers.KNNRetriever attribute) (langchain.retrievers.SVMRetriever attribute) TextSplitter (class in langchain.text_splitter) tfidf_array (langchain.retrievers.TFIDFRetriever attribute) Tigris (class in langchain.vectorstores) time (langchain.utilities.DuckDuckGoSearchAPIWrapper attribute) to_typescript() (langchain.tools.APIOperation method) token (langchain.llms.PredictionGuard attribute) (langchain.utilities.PowerBIDataset attribute) token_path (langchain.document_loaders.GoogleApiClient attribute) (langchain.document_loaders.GoogleDriveLoader attribute) Tokenizer (class in langchain.text_splitter) tokenizer (langchain.llms.Petals attribute) tokens (langchain.llms.AlephAlpha attribute) tokens_path (langchain.llms.RWKV attribute) tokens_per_chunk (langchain.text_splitter.Tokenizer attribute) TokenTextSplitter (class in langchain.text_splitter)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-103
TokenTextSplitter (class in langchain.text_splitter) ToMarkdownLoader (class in langchain.document_loaders) TomlLoader (class in langchain.document_loaders) tool() (in module langchain.agents) (in module langchain.tools) tool_run_logging_kwargs() (langchain.agents.Agent method) (langchain.agents.BaseMultiActionAgent method) (langchain.agents.BaseSingleActionAgent method) (langchain.agents.LLMSingleActionAgent method) tools (langchain.agents.agent_toolkits.JiraToolkit attribute) (langchain.agents.agent_toolkits.ZapierToolkit attribute) (langchain.agents.AgentExecutor attribute) top_k (langchain.chains.GraphCypherQAChain attribute) (langchain.chains.SQLDatabaseChain attribute) (langchain.chat_models.ChatGooglePalm attribute) (langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.GooglePalm attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.Petals attribute) (langchain.llms.VertexAI attribute) (langchain.retrievers.ChatGPTPluginRetriever attribute) (langchain.retrievers.DataberryRetriever attribute) (langchain.retrievers.PineconeHybridSearchRetriever attribute) top_k_docs_for_context (langchain.chains.ChatVectorDBChain attribute) top_k_results (langchain.utilities.ArxivAPIWrapper attribute) (langchain.utilities.GooglePlacesAPIWrapper attribute) (langchain.utilities.PubMedAPIWrapper attribute) (langchain.utilities.WikipediaAPIWrapper attribute)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-104
(langchain.utilities.WikipediaAPIWrapper attribute) top_n (langchain.retrievers.document_compressors.CohereRerank attribute) top_p (langchain.chat_models.ChatGooglePalm attribute) (langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.GooglePalm attribute) (langchain.llms.GooseAI attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.OpenAI attribute) (langchain.llms.OpenLM attribute) (langchain.llms.Petals attribute) (langchain.llms.RWKV attribute) (langchain.llms.VertexAI attribute) (langchain.llms.Writer attribute) topP (langchain.llms.AI21 attribute) traits (langchain.experimental.GenerativeAgent attribute) transform (langchain.chains.TransformChain attribute) transform_documents() (langchain.document_transformers.EmbeddingsRedundantFilter method) (langchain.text_splitter.TextSplitter method) transform_input_fn (langchain.llms.Databricks attribute) transform_output_fn (langchain.llms.Databricks attribute) transformers (langchain.retrievers.document_compressors.DocumentCompressorPipeline attribute) TrelloLoader (class in langchain.document_loaders) truncate (langchain.embeddings.CohereEmbeddings attribute) (langchain.llms.Cohere attribute) ts_type_from_python() (langchain.tools.APIOperation static method) ttl (langchain.memory.RedisEntityStore attribute) tuned_model_name (langchain.llms.VertexAI attribute) TwitterTweetLoader (class in langchain.document_loaders)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-105
TwitterTweetLoader (class in langchain.document_loaders) type (langchain.output_parsers.ResponseSchema attribute) (langchain.utilities.GoogleSerperAPIWrapper attribute) Typesense (class in langchain.vectorstores) U unsecure (langchain.utilities.searx_search.SearxSearchWrapper attribute) (langchain.utilities.SearxSearchWrapper attribute) UnstructuredAPIFileIOLoader (class in langchain.document_loaders) UnstructuredAPIFileLoader (class in langchain.document_loaders) UnstructuredCSVLoader (class in langchain.document_loaders) UnstructuredEmailLoader (class in langchain.document_loaders) UnstructuredEPubLoader (class in langchain.document_loaders) UnstructuredExcelLoader (class in langchain.document_loaders) UnstructuredFileIOLoader (class in langchain.document_loaders) UnstructuredFileLoader (class in langchain.document_loaders) UnstructuredHTMLLoader (class in langchain.document_loaders) UnstructuredImageLoader (class in langchain.document_loaders) UnstructuredMarkdownLoader (class in langchain.document_loaders) UnstructuredODTLoader (class in langchain.document_loaders) UnstructuredPDFLoader (class in langchain.document_loaders) UnstructuredPowerPointLoader (class in langchain.document_loaders) UnstructuredRTFLoader (class in langchain.document_loaders) UnstructuredURLLoader (class in langchain.document_loaders) UnstructuredWordDocumentLoader (class in langchain.document_loaders) UnstructuredXMLLoader (class in langchain.document_loaders) update_document() (langchain.vectorstores.Chroma method) update_forward_refs() (langchain.llms.AI21 class method) (langchain.llms.AlephAlpha class method) (langchain.llms.Anthropic class method)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-106
(langchain.llms.Anthropic class method) (langchain.llms.Anyscale class method) (langchain.llms.Aviary class method) (langchain.llms.AzureOpenAI class method) (langchain.llms.Banana class method) (langchain.llms.Baseten class method) (langchain.llms.Beam class method) (langchain.llms.Bedrock class method) (langchain.llms.CerebriumAI class method) (langchain.llms.Cohere class method) (langchain.llms.CTransformers class method) (langchain.llms.Databricks class method) (langchain.llms.DeepInfra class method) (langchain.llms.FakeListLLM class method) (langchain.llms.ForefrontAI class method) (langchain.llms.GooglePalm class method) (langchain.llms.GooseAI class method) (langchain.llms.GPT4All class method) (langchain.llms.HuggingFaceEndpoint class method) (langchain.llms.HuggingFaceHub class method) (langchain.llms.HuggingFacePipeline class method) (langchain.llms.HuggingFaceTextGenInference class method) (langchain.llms.HumanInputLLM class method) (langchain.llms.LlamaCpp class method) (langchain.llms.Modal class method) (langchain.llms.MosaicML class method) (langchain.llms.NLPCloud class method) (langchain.llms.OpenAI class method) (langchain.llms.OpenAIChat class method) (langchain.llms.OpenLM class method) (langchain.llms.Petals class method) (langchain.llms.PipelineAI class method) (langchain.llms.PredictionGuard class method) (langchain.llms.PromptLayerOpenAI class method)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-107
(langchain.llms.PromptLayerOpenAI class method) (langchain.llms.PromptLayerOpenAIChat class method) (langchain.llms.Replicate class method) (langchain.llms.RWKV class method) (langchain.llms.SagemakerEndpoint class method) (langchain.llms.SelfHostedHuggingFaceLLM class method) (langchain.llms.SelfHostedPipeline class method) (langchain.llms.StochasticAI class method) (langchain.llms.VertexAI class method) (langchain.llms.Writer class method) upsert_messages() (langchain.memory.CosmosDBChatMessageHistory method) url (langchain.document_loaders.GitHubIssuesLoader property) (langchain.document_loaders.MathpixPDFLoader property) (langchain.llms.Beam attribute) (langchain.retrievers.ChatGPTPluginRetriever attribute) (langchain.retrievers.RemoteLangChainRetriever attribute) (langchain.tools.IFTTTWebhook attribute) urls (langchain.document_loaders.PlaywrightURLLoader attribute) (langchain.document_loaders.SeleniumURLLoader attribute) use_mlock (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) use_mmap (langchain.llms.LlamaCpp attribute) use_multiplicative_presence_penalty (langchain.llms.AlephAlpha attribute) use_query_checker (langchain.chains.SQLDatabaseChain attribute) username (langchain.vectorstores.ClickhouseSettings attribute) (langchain.vectorstores.MyScaleSettings attribute) V validate_channel_or_videoIds_is_set() (langchain.document_loaders.GoogleApiClient class method) (langchain.document_loaders.GoogleApiYoutubeLoader class method) validate_init_args() (langchain.document_loaders.ConfluenceLoader static method)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-108
validate_init_args() (langchain.document_loaders.ConfluenceLoader static method) validate_template (langchain.prompts.FewShotPromptTemplate attribute) (langchain.prompts.FewShotPromptWithTemplates attribute) (langchain.prompts.PromptTemplate attribute) Vectara (class in langchain.vectorstores) vector_field (langchain.vectorstores.SingleStoreDB attribute) vector_search() (langchain.vectorstores.AzureSearch method) vector_search_with_score() (langchain.vectorstores.AzureSearch method) vectorizer (langchain.retrievers.TFIDFRetriever attribute) VectorStore (class in langchain.vectorstores) vectorstore (langchain.agents.agent_toolkits.VectorStoreInfo attribute) (langchain.chains.ChatVectorDBChain attribute) (langchain.chains.VectorDBQA attribute) (langchain.chains.VectorDBQAWithSourcesChain attribute) (langchain.prompts.example_selector.SemanticSimilarityExampleSelector attribute) (langchain.retrievers.SelfQueryRetriever attribute) (langchain.retrievers.TimeWeightedVectorStoreRetriever attribute) vectorstore_info (langchain.agents.agent_toolkits.VectorStoreToolkit attribute) vectorstores (langchain.agents.agent_toolkits.VectorStoreRouterToolkit attribute) verbose (langchain.llms.AI21 attribute) (langchain.llms.AlephAlpha attribute) (langchain.llms.Anthropic attribute) (langchain.llms.Anyscale attribute) (langchain.llms.Aviary attribute) (langchain.llms.AzureOpenAI attribute) (langchain.llms.Banana attribute) (langchain.llms.Baseten attribute) (langchain.llms.Beam attribute) (langchain.llms.Bedrock attribute) (langchain.llms.CerebriumAI attribute) (langchain.llms.Cohere attribute) (langchain.llms.CTransformers attribute)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-109
(langchain.llms.Cohere attribute) (langchain.llms.CTransformers attribute) (langchain.llms.Databricks attribute) (langchain.llms.DeepInfra attribute) (langchain.llms.FakeListLLM attribute) (langchain.llms.ForefrontAI attribute) (langchain.llms.GooglePalm attribute) (langchain.llms.GooseAI attribute) (langchain.llms.GPT4All attribute) (langchain.llms.HuggingFaceEndpoint attribute) (langchain.llms.HuggingFaceHub attribute) (langchain.llms.HuggingFacePipeline attribute) (langchain.llms.HuggingFaceTextGenInference attribute) (langchain.llms.HumanInputLLM attribute) (langchain.llms.LlamaCpp attribute) (langchain.llms.Modal attribute) (langchain.llms.MosaicML attribute) (langchain.llms.NLPCloud attribute) (langchain.llms.OpenAI attribute) (langchain.llms.OpenAIChat attribute) (langchain.llms.OpenLM attribute) (langchain.llms.Petals attribute) (langchain.llms.PipelineAI attribute) (langchain.llms.PredictionGuard attribute) (langchain.llms.Replicate attribute) (langchain.llms.RWKV attribute) (langchain.llms.SagemakerEndpoint attribute) (langchain.llms.SelfHostedHuggingFaceLLM attribute) (langchain.llms.SelfHostedPipeline attribute) (langchain.llms.StochasticAI attribute) (langchain.llms.VertexAI attribute) (langchain.llms.Writer attribute) (langchain.retrievers.SelfQueryRetriever attribute) (langchain.tools.BaseTool attribute) (langchain.tools.Tool attribute) VespaRetriever (class in langchain.retrievers) video_ids (langchain.document_loaders.GoogleApiYoutubeLoader attribute)
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
1d1034b022d3-110
video_ids (langchain.document_loaders.GoogleApiYoutubeLoader attribute) visible_only (langchain.tools.ClickTool attribute) vocab_only (langchain.embeddings.LlamaCppEmbeddings attribute) (langchain.llms.GPT4All attribute) (langchain.llms.LlamaCpp attribute) W wait_for_processing() (langchain.document_loaders.MathpixPDFLoader method) WeatherDataLoader (class in langchain.document_loaders) Weaviate (class in langchain.vectorstores) WeaviateHybridSearchRetriever (class in langchain.retrievers) WeaviateHybridSearchRetriever.Config (class in langchain.retrievers) web_path (langchain.document_loaders.WebBaseLoader property) web_paths (langchain.document_loaders.WebBaseLoader attribute) WebBaseLoader (class in langchain.document_loaders) WhatsAppChatLoader (class in langchain.document_loaders) Wikipedia (class in langchain.docstore) WikipediaLoader (class in langchain.document_loaders) wolfram_alpha_appid (langchain.utilities.WolframAlphaAPIWrapper attribute) writer_api_key (langchain.llms.Writer attribute) writer_org_id (langchain.llms.Writer attribute) Y YoutubeLoader (class in langchain.document_loaders) Z zapier_description (langchain.tools.ZapierNLARunAction attribute) ZepRetriever (class in langchain.retrievers) ZERO_SHOT_REACT_DESCRIPTION (langchain.agents.AgentType attribute) Zilliz (class in langchain.vectorstores) By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 16, 2023.
rtdocs_stable/api.python.langchain.com/en/stable/genindex.html
089816ffd331-0
.md .pdf Dependents Dependents# Dependents stats for hwchase17/langchain [update: 2023-06-05; only dependent repositories with Stars > 100] Repository Stars openai/openai-cookbook 38024 LAION-AI/Open-Assistant 33609 microsoft/TaskMatrix 33136 hpcaitech/ColossalAI 30032 imartinez/privateGPT 28094 reworkd/AgentGPT 23430 openai/chatgpt-retrieval-plugin 17942 jerryjliu/llama_index 16697 mindsdb/mindsdb 16410 mlflow/mlflow 14517 GaiZhenbiao/ChuanhuChatGPT 10793 databrickslabs/dolly 10155 openai/evals 10076 AIGC-Audio/AudioGPT 8619 logspace-ai/langflow 8211 imClumsyPanda/langchain-ChatGLM 8154 PromtEngineer/localGPT 6853 StanGirard/quivr 6830 PipedreamHQ/pipedream 6520 go-skynet/LocalAI 6018 arc53/DocsGPT 5643 e2b-dev/e2b 5075 langgenius/dify 4281 nsarrazin/serge 4228 zauberzeug/nicegui 4084 madawei2699/myGPTReader 4039 wenda-LLM/wenda 3871 GreyDGL/PentestGPT 3837 zilliztech/GPTCache 3625 csunny/DB-GPT 3545 gkamradt/langchain-tutorials 3404
rtdocs_stable/api.python.langchain.com/en/stable/dependents.html
089816ffd331-1
3545 gkamradt/langchain-tutorials 3404 mmabrouk/chatgpt-wrapper 3303 postgresml/postgresml 3052 marqo-ai/marqo 3014 MineDojo/Voyager 2945 PrefectHQ/marvin 2761 project-baize/baize-chatbot 2673 hwchase17/chat-langchain 2589 whitead/paper-qa 2572 Azure-Samples/azure-search-openai-demo 2366 GerevAI/gerev 2330 OpenGVLab/InternGPT 2289 ParisNeo/gpt4all-ui 2159 OpenBMB/BMTools 2158 guangzhengli/ChatFiles 2005 h2oai/h2ogpt 1939 Farama-Foundation/PettingZoo 1845 OpenGVLab/Ask-Anything 1749 IntelligenzaArtificiale/Free-Auto-GPT 1740 Unstructured-IO/unstructured 1628 hwchase17/notion-qa 1607 NVIDIA/NeMo-Guardrails 1544 SamurAIGPT/privateGPT 1543 paulpierre/RasaGPT 1526 yanqiangmiffy/Chinese-LangChain 1485 Kav-K/GPTDiscord 1402 vocodedev/vocode-python 1387 Chainlit/chainlit 1336 lunasec-io/lunasec 1323 psychic-api/psychic 1248 agiresearch/OpenAGI 1208 jina-ai/thinkgpt 1193 thomas-yanxin/LangChain-ChatGLM-Webui 1182
rtdocs_stable/api.python.langchain.com/en/stable/dependents.html
089816ffd331-2
thomas-yanxin/LangChain-ChatGLM-Webui 1182 ttengwang/Caption-Anything 1137 jina-ai/dev-gpt 1135 greshake/llm-security 1086 keephq/keep 1063 juncongmoo/chatllama 1037 richardyc/Chrome-GPT 1035 visual-openllm/visual-openllm 997 mmz-001/knowledge_gpt 995 jina-ai/langchain-serve 949 irgolic/AutoPR 936 microsoft/X-Decoder 908 poe-platform/api-bot-tutorial 902 peterw/Chat-with-Github-Repo 875 cirediatpl/FigmaChain 822 homanp/superagent 806 seanpixel/Teenage-AGI 800 chatarena/chatarena 796 hashintel/hash 795 SamurAIGPT/Camel-AutoGPT 786 rlancemartin/auto-evaluator 770 corca-ai/EVAL 769 101dotxyz/GPTeam 755 noahshinn024/reflexion 706 eyurtsev/kor 695 cheshire-cat-ai/core 681 e-johnstonn/BriefGPT 656 run-llama/llama-lab 635 griptape-ai/griptape 583 namuan/dr-doc-search 555 getmetal/motorhead 550 kreneskyp/ix 543 hwchase17/chat-your-data 510 Anil-matcha/ChatPDF 501 whyiyhw/chatgpt-wechat 497 SamurAIGPT/ChatGPT-Developer-Plugins 496 microsoft/PodcastCopilot 492 debanjum/khoj
rtdocs_stable/api.python.langchain.com/en/stable/dependents.html
089816ffd331-3
496 microsoft/PodcastCopilot 492 debanjum/khoj 485 akshata29/chatpdf 485 langchain-ai/langchain-aiplugin 462 jina-ai/agentchain 460 alexanderatallah/window.ai 457 yeagerai/yeagerai-agent 451 mckaywrigley/repo-chat 446 michaelthwan/searchGPT 446 mpaepper/content-chatbot 441 freddyaboulton/gradio-tools 439 ruoccofabrizio/azure-open-ai-embeddings-qna 429 StevenGrove/GPT4Tools 422 jonra1993/fastapi-alembic-sqlmodel-async 407 msoedov/langcorn 405 amosjyng/langchain-visualizer 395 ajndkr/lanarky 384 mtenenholtz/chat-twitter 376 steamship-core/steamship-langchain 371 langchain-ai/auto-evaluator 365 xuwenhao/geektime-ai-course 358 continuum-llms/chatgpt-memory 357 opentensor/bittensor 347 showlab/VLog 345 daodao97/chatdoc 345 logan-markewich/llama_index_starter_pack 332 poe-platform/poe-protocol 320 explosion/spacy-llm 312 andylokandy/gpt-4-search 311 alejandro-ao/langchain-ask-pdf 310 jupyterlab/jupyter-ai 294 BlackHC/llm-strategy 283 itamargol/openai 281 momegas/megabots 279 personoids/personoids-lite 277 yvann-hub/Robby-chatbot 267 Anil-matcha/Website-to-Chatbot
rtdocs_stable/api.python.langchain.com/en/stable/dependents.html
089816ffd331-4
267 Anil-matcha/Website-to-Chatbot 266 Cheems-Seminar/grounded-segment-any-parts 260 sullivan-sean/chat-langchainjs 248 bborn/howdoi.ai 245 daveebbelaar/langchain-experiments 240 MagnivOrg/prompt-layer-library 237 ur-whitelab/exmol 234 conceptofmind/toolformer 234 recalign/RecAlign 226 OpenBMB/AgentVerse 220 alvarosevilla95/autolang 219 JohnSnowLabs/nlptest 216 kaleido-lab/dolphin 215 truera/trulens 208 NimbleBoxAI/ChainFury 208 airobotlab/KoChatGPT 207 monarch-initiative/ontogpt 200 paolorechia/learn-langchain 195 shaman-ai/agent-actors 185 Haste171/langchain-chatbot 184 plchld/InsightFlow 182 su77ungr/CASALIOY 180 jbrukh/gpt-jargon 177 benthecoder/ClassGPT 174 billxbf/ReWOO 170 filip-michalsky/SalesGPT 168 hwchase17/langchain-streamlit-template 168 radi-cho/datasetGPT 164 hardbyte/qabot 164 gia-guar/JARVIS-ChatGPT 158 plastic-labs/tutor-gpt 154 yasyf/compress-gpt 154 fengyuli-dev/multimedia-gpt 154 ethanyanjiali/minChatGPT 153 hwchase17/chroma-langchain 153 edreisMD/plugnplai 148 chakkaradeep/pyCodeAGI 145
rtdocs_stable/api.python.langchain.com/en/stable/dependents.html
089816ffd331-5
148 chakkaradeep/pyCodeAGI 145 ccurme/yolopandas 145 shamspias/customizable-gpt-chatbot 144 realminchoi/babyagi-ui 143 PradipNichite/Youtube-Tutorials 140 gustavz/DataChad 140 Klingefjord/chatgpt-telegram 140 Jaseci-Labs/jaseci 139 handrew/browserpilot 137 jmpaz/promptlib 137 SamPink/dev-gpt 135 menloparklab/langchain-cohere-qdrant-doc-retrieval 135 hirokidaichi/wanna 135 steamship-core/vercel-examples 134 pablomarin/GPT-Azure-Search-Engine 133 ibiscp/LLM-IMDB 133 shauryr/S2QA 133 jerlendds/osintbuddy 132 yuanjie-ai/ChatLLM 132 yasyf/summ 132 WongSaang/chatgpt-ui-server 130 peterw/StoryStorm 127 Teahouse-Studios/akari-bot 126 vaibkumr/prompt-optimizer 125 preset-io/promptimize 124 homanp/vercel-langchain 124 petehunt/langchain-github-bot 123 eunomia-bpf/GPTtrace 118 nicknochnack/LangchainDocuments 116 jiran214/GPT-vup 112 rsaryev/talk-codebase 112 zenml-io/zenml-projects 112 microsoft/azure-openai-in-a-day-workshop 112 davila7/file-gpt 112 prof-frink-lab/slangchain 111 aurelio-labs/arxiv-bot 110
rtdocs_stable/api.python.langchain.com/en/stable/dependents.html
089816ffd331-6
111 aurelio-labs/arxiv-bot 110 fixie-ai/fixie-examples 108 miaoshouai/miaoshouai-assistant 105 flurb18/AgentOoba 103 solana-labs/chatgpt-plugin 102 Significant-Gravitas/Auto-GPT-Benchmarks 102 kaarthik108/snowChat 100 Generated by github-dependents-info github-dependents-info --repo hwchase17/langchain --markdownfile dependents.md --minstars 100 --sort stars previous Zilliz next Deployments By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 16, 2023.
rtdocs_stable/api.python.langchain.com/en/stable/dependents.html
d62479a50a3b-0
Search Error Please activate JavaScript to enable the search functionality. Ctrl+K By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 16, 2023.
rtdocs_stable/api.python.langchain.com/en/stable/search.html
b1fe5618669c-0
.rst .pdf Integrations Contents Integrations by Module Dependencies All Integrations Integrations# LangChain integrates with many LLMs, systems, and products. Integrations by Module# Integrations grouped by the core LangChain module they map to: LLM Providers Chat Model Providers Text Embedding Model Providers Document Loader Integrations Text Splitter Integrations Vectorstore Providers Retriever Providers Tool Providers Toolkit Integrations Dependencies# LangChain depends on several hundred Python packages. All Integrations# A comprehensive list of LLMs, systems, and products integrated with LangChain: Tracing Walkthrough AI21 Labs Aim Airbyte Aleph Alpha Amazon Bedrock AnalyticDB Annoy Anthropic Anyscale Apify Argilla Arxiv AtlasDB AwaDB AWS S3 Directory AZLyrics Azure Blob Storage Azure Cognitive Search Azure OpenAI Banana Beam BiliBili Blackboard Cassandra CerebriumAI Chroma ClearML ClickHouse Cohere College Confidential Comet Confluence C Transformers Databerry Databricks DeepInfra Deep Lake Diffbot Discord Docugami DuckDB Elasticsearch EverNote Facebook Chat Figma ForefrontAI Git GitBook Google BigQuery Google Cloud Storage Google Drive Google Search Google Serper Google Vertex AI GooseAI GPT4All Graphsignal Gutenberg Hacker News Hazy Research Helicone Hugging Face iFixit IMSDb Jina LanceDB LangChain Decorators ✨ Quick start Defining other parameters
rtdocs_stable/api.python.langchain.com/en/stable/integrations.html
b1fe5618669c-1
LanceDB LangChain Decorators ✨ Quick start Defining other parameters Simplified streaming Prompt declarations Optional sections Output parsers Binding the prompt to an object More examples: Llama.cpp MediaWikiDump Metal Microsoft OneDrive Microsoft PowerPoint Microsoft Word Milvus MLflow Modal Modern Treasury Momento MyScale NLPCloud Notion DB Obsidian OpenAI OpenSearch OpenWeatherMap Petals PGVector Pinecone PipelineAI Prediction Guard PromptLayer Psychic Qdrant Ray Serve Rebuff Reddit Redis Replicate Roam Runhouse RWKV-4 SageMaker Endpoint SearxNG Search API SerpAPI Shale Protocol scikit-learn Slack spaCy Spreedly StochasticAI Stripe Tair Telegram Tensorflow Hub 2Markdown Trello Twitter Unstructured Vectara Vespa Weights & Biases Weather Weaviate WhatsApp WhyLabs Wikipedia Wolfram Alpha Writer Yeager.ai YouTube Zep Zilliz previous Experimental Modules next Tracing Walkthrough Contents Integrations by Module Dependencies All Integrations By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 16, 2023.
rtdocs_stable/api.python.langchain.com/en/stable/integrations.html
1444749d7cc1-0
.rst .pdf Welcome to LangChain Contents Getting Started Modules Use Cases Reference Docs Ecosystem Additional Resources Welcome to LangChain# LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only call out to a language model, but will also be: Data-aware: connect a language model to other sources of data Agentic: allow a language model to interact with its environment The LangChain framework is designed around these principles. This is the Python-specific portion of the documentation. For a purely conceptual guide to LangChain, see here. For the JavaScript documentation, see here. Getting Started# How to get started using LangChain to create a Language Model application. Quickstart Guide Concepts and terminology. Concepts and terminology Tutorials created by community experts and presented on YouTube. Tutorials Modules# These modules are the core abstractions which we view as the building blocks of any LLM-powered application. For each module LangChain provides standard, extendable interfaces. LangChain also provides external integrations and even end-to-end implementations for off-the-shelf use. The docs for each module contain quickstart examples, how-to guides, reference docs, and conceptual guides. The modules are (from least to most complex): Models: Supported model types and integrations. Prompts: Prompt management, optimization, and serialization. Memory: Memory refers to state that is persisted between calls of a chain/agent. Indexes: Language models become much more powerful when combined with application-specific data - this module contains interfaces and integrations for loading, querying and updating external data. Chains: Chains are structured sequences of calls (to an LLM or to a different utility).
rtdocs_stable/api.python.langchain.com/en/stable/index.html
1444749d7cc1-1
Agents: An agent is a Chain in which an LLM, given a high-level directive and a set of tools, repeatedly decides an action, executes the action and observes the outcome until the high-level directive is complete. Callbacks: Callbacks let you log and stream the intermediate steps of any chain, making it easy to observe, debug, and evaluate the internals of an application. Use Cases# Best practices and built-in implementations for common LangChain use cases: Autonomous Agents: Autonomous agents are long-running agents that take many steps in an attempt to accomplish an objective. Examples include AutoGPT and BabyAGI. Agent Simulations: Putting agents in a sandbox and observing how they interact with each other and react to events can be an effective way to evaluate their long-range reasoning and planning abilities. Personal Assistants: One of the primary LangChain use cases. Personal assistants need to take actions, remember interactions, and have knowledge about your data. Question Answering: Another common LangChain use case. Answering questions over specific documents, only utilizing the information in those documents to construct an answer. Chatbots: Language models love to chat, making this a very natural use of them. Querying Tabular Data: Recommended reading if you want to use language models to query structured data (CSVs, SQL, dataframes, etc). Code Understanding: Recommended reading if you want to use language models to analyze code. Interacting with APIs: Enabling language models to interact with APIs is extremely powerful. It gives them access to up-to-date information and allows them to take actions. Extraction: Extract structured information from text. Summarization: Compressing longer documents. A type of Data-Augmented Generation. Evaluation: Generative models are hard to evaluate with traditional metrics. One promising approach is to use language models themselves to do the evaluation. Reference Docs#
rtdocs_stable/api.python.langchain.com/en/stable/index.html
1444749d7cc1-2
Reference Docs# Full documentation on all methods, classes, installation methods, and integration setups for LangChain. LangChain Installation Reference Documentation Ecosystem# LangChain integrates with many different LLMs, systems, and products. Conversely, many systems and products depend on LangChain, creating a vibrant and thriving ecosystem. Integrations: Guides for how other products can be used with LangChain. Dependents: List of repositories that use LangChain. Deployments: A collection of instructions, code snippets, and template repositories for deploying LangChain apps. Additional Resources# Additional resources we think may be useful as you develop your application! LangChainHub: The LangChainHub is a place to share and explore other prompts, chains, and agents. Gallery: A collection of great projects that use LangChain, compiled by the folks at Kyrolabs. Useful for finding inspiration and example implementations. Deploying LLMs in Production: A collection of best practices and tutorials for deploying LLMs in production. Tracing: A guide on using tracing in LangChain to visualize the execution of chains and agents. Model Laboratory: Experimenting with different prompts, models, and chains is a big part of developing the best possible application. The ModelLaboratory makes it easy to do so. Discord: Join us on our Discord to discuss all things LangChain! YouTube: A collection of the LangChain tutorials and videos. Production Support: As you move your LangChains into production, we’d love to offer more comprehensive support. Please fill out this form and we’ll set up a dedicated support Slack channel. next Quickstart Guide Contents Getting Started Modules Use Cases Reference Docs Ecosystem Additional Resources By Harrison Chase © Copyright 2023, Harrison Chase.
rtdocs_stable/api.python.langchain.com/en/stable/index.html
d5dfbdff3fec-0
.rst .pdf Agents Agents# Reference guide for Agents and associated abstractions. Agents Tools Agent Toolkits previous Memory next Agents By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 16, 2023.
rtdocs_stable/api.python.langchain.com/en/stable/reference/agents.html
ed01a6709dc8-0
.rst .pdf Models Models# LangChain provides interfaces and integrations for a number of different types of models. LLMs Chat Models Embeddings previous API References next Chat Models By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 16, 2023.
rtdocs_stable/api.python.langchain.com/en/stable/reference/models.html
bf226b0336b2-0
.md .pdf Installation Contents Official Releases Installing from source Installation# Official Releases# LangChain is available on PyPI, so it is easily installable with: pip install langchain That will install the bare minimum requirements of LangChain. A lot of the value of LangChain comes when integrating it with various model providers, datastores, etc. By default, the dependencies needed to do that are NOT installed. However, there are two other ways to install LangChain that do bring in those dependencies. To install modules needed for the common LLM providers, run: pip install langchain[llms] To install all modules needed for all integrations, run: pip install langchain[all] Note that if you are using zsh, you’ll need to quote square brackets when passing them as an argument to a command, for example: pip install 'langchain[all]' Installing from source# If you want to install from source, you can do so by cloning the repo and running: pip install -e . previous SQL Question Answering Benchmarking: Chinook next API References Contents Official Releases Installing from source By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 16, 2023.
rtdocs_stable/api.python.langchain.com/en/stable/reference/installation.html
a6048f1f1ac5-0
.rst .pdf Prompts Prompts# The reference guides here all relate to objects for working with Prompts. PromptTemplates Example Selector Output Parsers previous How to serialize prompts next PromptTemplates By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 16, 2023.
rtdocs_stable/api.python.langchain.com/en/stable/reference/prompts.html
c936ed39e930-0
.rst .pdf Indexes Indexes# Indexes refer to ways to structure documents so that LLMs can best interact with them. LangChain has a number of modules that help you load, structure, store, and retrieve documents. Docstore Text Splitter Document Loaders Vector Stores Retrievers Document Compressors Document Transformers previous Embeddings next Docstore By Harrison Chase © Copyright 2023, Harrison Chase. Last updated on Jun 16, 2023.
rtdocs_stable/api.python.langchain.com/en/stable/reference/indexes.html
5df52031811b-0
.rst .pdf Agents Agents# Interface for agents. pydantic model langchain.agents.Agent[source]# Class responsible for calling the language model and deciding the action. This is driven by an LLMChain. The prompt in the LLMChain MUST include a variable called “agent_scratchpad” where the agent can put its intermediary work. field allowed_tools: Optional[List[str]] = None# field llm_chain: langchain.chains.llm.LLMChain [Required]# field output_parser: langchain.agents.agent.AgentOutputParser [Required]# async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decided what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. abstract classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) → langchain.prompts.base.BasePromptTemplate[source]# Create a prompt for this class. dict(**kwargs: Any) → Dict[source]# Return dictionary representation of agent. classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]# Construct an agent from an LLM and tools.
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
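Taken together, the pieces above suggest the usual workflow for a concrete Agent subclass: build a prompt that contains the required "agent_scratchpad" variable (create_prompt does this for you), then let from_llm_and_tools wire the LLM, prompt, and output parser together. A minimal sketch, assuming the ZeroShotAgent subclass, the Tool wrapper, and a configured OpenAI API key (none of which are documented in this excerpt):

from langchain.agents import ZeroShotAgent  # assumed concrete Agent subclass
from langchain.llms import OpenAI
from langchain.tools import Tool

def word_count(text: str) -> str:
    """Toy tool: return the number of words in the input string."""
    return str(len(text.split()))

tools = [Tool(name="WordCount", func=word_count, description="counts the words in a string")]

# create_prompt() returns a PromptTemplate that already contains the
# "agent_scratchpad" variable required by the Agent base class above.
prompt = ZeroShotAgent.create_prompt(tools)
print(prompt.input_variables)  # e.g. ['input', 'agent_scratchpad']

# from_llm_and_tools() builds the LLMChain and output parser for you.
agent = ZeroShotAgent.from_llm_and_tools(llm=OpenAI(temperature=0), tools=tools)

The resulting agent only plans actions; it is normally wrapped in an AgentExecutor (documented below) before being run.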
5df52031811b-1
Construct an agent from an LLM and tools. get_allowed_tools() → Optional[List[str]][source]# get_full_inputs(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → Dict[str, Any][source]# Create the full inputs for the LLMChain from intermediate steps. plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decided what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → langchain.schema.AgentFinish[source]# Return response when agent has been stopped due to max iterations. tool_run_logging_kwargs() → Dict[source]# abstract property llm_prefix: str# Prefix to append the LLM call with. abstract property observation_prefix: str# Prefix to append the observation with. property return_values: List[str]# Return values of the agent. pydantic model langchain.agents.AgentExecutor[source]# Consists of an agent using tools. Validators raise_deprecation » all fields set_verbose » verbose validate_return_direct_tool » all fields validate_tools » all fields field agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]# field early_stopping_method: str = 'force'#
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
5df52031811b-2
field early_stopping_method: str = 'force'# field handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False# field max_execution_time: Optional[float] = None# field max_iterations: Optional[int] = 15# field return_intermediate_steps: bool = False# field tools: Sequence[BaseTool] [Required]# classmethod from_agent_and_tools(agent: Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent], tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Create from agent and tools. lookup_tool(name: str) → langchain.tools.base.BaseTool[source]# Lookup tool by name. save(file_path: Union[pathlib.Path, str]) → None[source]# Raise error - saving not supported for Agent Executors. save_agent(file_path: Union[pathlib.Path, str]) → None[source]# Save the underlying agent. pydantic model langchain.agents.AgentOutputParser[source]# abstract parse(text: str) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Parse text into agent action/finish. class langchain.agents.AgentType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]# CHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'# CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'# CONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'# OPENAI_FUNCTIONS = 'openai-functions'# REACT_DOCSTORE = 'react-docstore'#
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
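Because from_agent_and_tools forwards **kwargs onto the executor fields listed above, limits and error handling can be configured at construction time. A hedged sketch, reusing the agent and tools objects from the earlier snippet:

from langchain.agents import AgentExecutor

executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    max_iterations=5,                # stop after 5 think/act cycles instead of the default 15
    early_stopping_method="force",   # return a canned "stopped" answer when the limit is hit
    handle_parsing_errors=True,      # feed unparseable LLM output back to the model instead of raising
    return_intermediate_steps=True,  # include the (AgentAction, observation) pairs in the result
)

result = executor("How many words are in 'hello brave new world'?")
print(result["output"])
print(result["intermediate_steps"])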
5df52031811b-3
REACT_DOCSTORE = 'react-docstore'# SELF_ASK_WITH_SEARCH = 'self-ask-with-search'# STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'# ZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'# pydantic model langchain.agents.BaseMultiActionAgent[source]# Base Agent class. abstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish][source]# Given input, decided what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Actions specifying what tool to use. dict(**kwargs: Any) → Dict[source]# Return dictionary representation of agent. get_allowed_tools() → Optional[List[str]][source]# abstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish][source]# Given input, decided what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Actions specifying what tool to use.
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
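The AgentType values above are most commonly passed to the initialize_agent helper (also exported from langchain.agents, though not shown in this excerpt), which builds the matching agent and its executor in one call. A minimal sketch under that assumption:

from langchain.agents import AgentType, initialize_agent  # initialize_agent assumed available
from langchain.llms import OpenAI

agent_executor = initialize_agent(
    tools=tools,                                  # e.g. the Tool list defined earlier
    llm=OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # one of the enum values listed above
    verbose=True,
)
agent_executor.run("How many words are in 'hello world'?")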
5df52031811b-4
**kwargs – User inputs. Returns Actions specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → langchain.schema.AgentFinish[source]# Return response when agent has been stopped due to max iterations. save(file_path: Union[pathlib.Path, str]) → None[source]# Save the agent. Parameters file_path – Path to file to save the agent to. Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path=”path/agent.yaml”) tool_run_logging_kwargs() → Dict[source]# property return_values: List[str]# Return values of the agent. pydantic model langchain.agents.BaseSingleActionAgent[source]# Base Agent class. abstract async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decided what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. dict(**kwargs: Any) → Dict[source]# Return dictionary representation of agent. classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, **kwargs: Any) → langchain.agents.agent.BaseSingleActionAgent[source]#
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
5df52031811b-5
get_allowed_tools() → Optional[List[str]][source]# abstract plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decided what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], **kwargs: Any) → langchain.schema.AgentFinish[source]# Return response when agent has been stopped due to max iterations. save(file_path: Union[pathlib.Path, str]) → None[source]# Save the agent. Parameters file_path – Path to file to save the agent to. Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path=”path/agent.yaml”) tool_run_logging_kwargs() → Dict[source]# property return_values: List[str]# Return values of the agent. pydantic model langchain.agents.ConversationalAgent[source]# An agent designed to hold a conversation in addition to using tools. field ai_prefix: str = 'AI'# field output_parser: langchain.agents.agent.AgentOutputParser [Optional]#
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
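Since plan and aplan are the only abstract methods (plus an input_keys property that is not shown in this excerpt), a custom single-action agent can stay very small. The following is an illustrative sketch, not part of the library: an agent that calls one fixed tool and then finishes with the observation.

from typing import Any, List, Tuple, Union

from langchain.agents import BaseSingleActionAgent
from langchain.schema import AgentAction, AgentFinish


class FixedToolAgent(BaseSingleActionAgent):
    """Illustrative agent: call the 'WordCount' tool once, then finish."""

    @property
    def input_keys(self) -> List[str]:
        # abstract property on the base class (not shown in the excerpt above)
        return ["input"]

    def plan(
        self,
        intermediate_steps: List[Tuple[AgentAction, str]],
        callbacks: Any = None,
        **kwargs: Any,
    ) -> Union[AgentAction, AgentFinish]:
        if intermediate_steps:  # the tool has already run once, so stop
            _, observation = intermediate_steps[-1]
            return AgentFinish(return_values={"output": observation}, log="done")
        return AgentAction(tool="WordCount", tool_input=kwargs["input"], log="calling WordCount")

    async def aplan(self, intermediate_steps, callbacks=None, **kwargs):
        return self.plan(intermediate_steps, callbacks=callbacks, **kwargs)

An agent like this is wired up exactly like the built-in ones, e.g. AgentExecutor.from_agent_and_tools(agent=FixedToolAgent(), tools=tools).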
5df52031811b-6
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
5df52031811b-7
say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None) → langchain.prompts.prompt.PromptTemplate[source]#
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
5df52031811b-8
Create prompt in the style of the zero shot agent. Parameters tools – List of tools the agent will have access to, used to format the prompt. prefix – String to put before the list of tools. suffix – String to put after the list of tools. ai_prefix – String to use before AI output. human_prefix – String to use before human output. input_variables – List of input variables the final prompt will expect. Returns A PromptTemplate with the template assembled from the pieces here.
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
5df52031811b-9
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix: str = 'Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions: str = 'To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
5df52031811b-10
the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix: str = 'AI', human_prefix: str = 'Human', input_variables: Optional[List[str]] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]#
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
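The long default prefix and suffix above rarely need to be overridden; the practical requirement is a memory object that supplies the {chat_history} variable used by the default suffix. A hedged sketch, assuming ConversationBufferMemory from langchain.memory (not documented in this excerpt):

from langchain.agents import AgentExecutor, ConversationalAgent
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

# The default suffix expects {chat_history}, so the memory key must match.
memory = ConversationBufferMemory(memory_key="chat_history")

agent = ConversationalAgent.from_llm_and_tools(
    llm=OpenAI(temperature=0),
    tools=tools,
    ai_prefix="AI",  # string placed before the model's replies, per the signature above
)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, memory=memory, verbose=True)

executor.run(input="Hi, my name is Ada.")
executor.run(input="What is my name?")  # answered from the conversation history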
5df52031811b-11
Construct an agent from an LLM and tools. property llm_prefix: str# Prefix to append the llm call with. property observation_prefix: str# Prefix to append the observation with. pydantic model langchain.agents.ConversationalChatAgent[source]# An agent designed to hold a conversation in addition to using tools. field output_parser: langchain.agents.agent.AgentOutputParser [Optional]# field template_tool_response: str = "TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else."#
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
5df52031811b-12
classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables: Optional[List[str]] = None, output_parser: Optional[langchain.schema.BaseOutputParser] = None) → langchain.prompts.base.BasePromptTemplate[source]# Create a prompt for this class.
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
5df52031811b-13
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, system_message: str = 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables:
Optional[List[str]] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]#
Construct an agent from an LLM and tools. property llm_prefix: str# Prefix to append the llm call with. property observation_prefix: str# Prefix to append the observation with. pydantic model langchain.agents.LLMSingleActionAgent[source]# field llm_chain: langchain.chains.llm.LLMChain [Required]# field output_parser: langchain.agents.agent.AgentOutputParser [Required]# field stop: List[str] [Required]# async aplan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. dict(**kwargs: Any) → Dict[source]# Return dictionary representation of agent. plan(intermediate_steps: List[Tuple[langchain.schema.AgentAction, str]], callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Union[langchain.schema.AgentAction, langchain.schema.AgentFinish][source]# Given input, decide what to do. Parameters intermediate_steps – Steps the LLM has taken to date, along with observations callbacks – Callbacks to run. **kwargs – User inputs. Returns Action specifying what tool to use. tool_run_logging_kwargs() → Dict[source]#
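Example (an illustrative sketch, not taken from the library: it wires the three required fields of LLMSingleActionAgent - llm_chain, output_parser and stop - into an AgentExecutor; the prompt wording, the SimpleOutputParser class and the dummy Search tool are assumptions made for the illustration):

from typing import Union
from langchain import LLMChain, OpenAI, PromptTemplate
from langchain.agents import AgentExecutor, AgentOutputParser, LLMSingleActionAgent, Tool
from langchain.schema import AgentAction, AgentFinish

class SimpleOutputParser(AgentOutputParser):
    # Hypothetical parser: expects "Action: <tool>: <input>" or anything else as a final answer.
    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        parts = text.split(":", 2)
        if parts[0].strip() == "Action" and len(parts) == 3:
            _, tool, tool_input = parts
            return AgentAction(tool.strip(), tool_input.strip(), text)
        return AgentFinish({"output": text.strip()}, text)

search = Tool(name="Search", func=lambda q: "no results", description="Look things up.")
# A real prompt would format intermediate_steps into a scratchpad; the raw list is used here for brevity.
prompt = PromptTemplate.from_template(
    "Answer the question, using the Search tool if needed.\n"
    "Question: {input}\nWork so far: {intermediate_steps}\n"
)
agent = LLMSingleActionAgent(
    llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=prompt),
    output_parser=SimpleOutputParser(),
    stop=["\nObservation:"],
)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=[search], verbose=True)
# executor.run("What is LangChain?")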
pydantic model langchain.agents.MRKLChain[source]# Chain that implements the MRKL system. Example from langchain import OpenAI, MRKLChain from langchain.chains.mrkl.base import ChainConfig llm = OpenAI(temperature=0) chains = [...] mrkl = MRKLChain.from_chains(llm=llm, chains=chains) Validators raise_deprecation » all fields set_verbose » verbose validate_return_direct_tool » all fields validate_tools » all fields classmethod from_chains(llm: langchain.base_language.BaseLanguageModel, chains: List[langchain.agents.mrkl.base.ChainConfig], **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# User friendly way to initialize the MRKL chain. This is intended to be an easy way to get up and running with the MRKL chain. Parameters llm – The LLM to use as the agent LLM. chains – The chains the MRKL system has access to. **kwargs – parameters to be passed to initialization. Returns An initialized MRKL chain. Example from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain from langchain.chains.mrkl.base import ChainConfig llm = OpenAI(temperature=0) search = SerpAPIWrapper() llm_math_chain = LLMMathChain(llm=llm) chains = [ ChainConfig( action_name = "Search", action=search.search, action_description="useful for searching" ), ChainConfig( action_name="Calculator", action=llm_math_chain.run, action_description="useful for doing math" ) ]
mrkl = MRKLChain.from_chains(llm, chains) pydantic model langchain.agents.ReActChain[source]# Chain that implements the ReAct paper. Example from langchain import ReActChain, OpenAI react = ReActChain(llm=OpenAI()) Validators raise_deprecation » all fields set_verbose » verbose validate_return_direct_tool » all fields validate_tools » all fields pydantic model langchain.agents.ReActTextWorldAgent[source]# Agent for the ReAct TextWorld chain. classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool]) → langchain.prompts.base.BasePromptTemplate[source]# Return default prompt. pydantic model langchain.agents.SelfAskWithSearchChain[source]# Chain that does self ask with search. Example from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper search_chain = GoogleSerperAPIWrapper() self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain) Validators raise_deprecation » all fields set_verbose » verbose validate_return_direct_tool » all fields validate_tools » all fields pydantic model langchain.agents.StructuredChatAgent[source]#
field output_parser: langchain.agents.agent.AgentOutputParser [Optional]# classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n  "action": $TOOL_NAME,\n  "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n  "action": "Final Answer",\n  "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[langchain.prompts.base.BasePromptTemplate]] = None) → langchain.prompts.base.BasePromptTemplate[source]# Create a prompt for this class.
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix: str = 'Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n  "action": $TOOL_NAME,\n  "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n  "action": "Final Answer",\n  "action_input": "Final response to human"\n}}}}\n```', input_variables: Optional[List[str]] = None, memory_prompts: Optional[List[langchain.prompts.base.BasePromptTemplate]] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]#
Construct an agent from an LLM and tools. property llm_prefix: str# Prefix to append the llm call with. property observation_prefix: str# Prefix to append the observation with. pydantic model langchain.agents.Tool[source]# Tool that takes in function or coroutine directly. field coroutine: Optional[Callable[[...], Awaitable[str]]] = None# The asynchronous version of the function. field description: str = ''# Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. field func: Callable[[...], str] [Required]# The function to run when the tool is called. classmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, **kwargs: Any) → langchain.tools.base.Tool[source]# Initialize tool from a function. property args: dict# The tool’s input arguments. pydantic model langchain.agents.ZeroShotAgent[source]# Agent for the MRKL chain.
field output_parser: langchain.agents.agent.AgentOutputParser [Optional]# classmethod create_prompt(tools: Sequence[langchain.tools.base.BaseTool], prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) → langchain.prompts.prompt.PromptTemplate[source]# Create prompt in the style of the zero shot agent. Parameters tools – List of tools the agent will have access to, used to format the prompt. prefix – String to put before the list of tools. suffix – String to put after the list of tools. input_variables – List of input variables the final prompt will expect. Returns A PromptTemplate with the template assembled from the pieces here.
classmethod from_llm_and_tools(llm: langchain.base_language.BaseLanguageModel, tools: Sequence[langchain.tools.base.BaseTool], callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', suffix: str = 'Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) → langchain.agents.agent.Agent[source]# Construct an agent from an LLM and tools. property llm_prefix: str# Prefix to append the llm call with. property observation_prefix: str# Prefix to append the observation with. langchain.agents.create_csv_agent(llm: langchain.base_language.BaseLanguageModel, path: Union[str, List[str]], pandas_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Create a CSV agent by loading the file into a dataframe and using the pandas agent.
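Example (a minimal sketch; the file name and the commented question are placeholders):

from langchain.agents import create_csv_agent
from langchain.llms import OpenAI

agent = create_csv_agent(OpenAI(temperature=0), "titanic.csv", verbose=True)
# agent.run("How many rows are in the file?")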
langchain.agents.create_json_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that
path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a json agent from an LLM and tools.
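Example (an illustrative sketch; the spec file name, the max_value_length setting and the commented question are assumptions):

import yaml
from langchain.agents import create_json_agent
from langchain.agents.agent_toolkits import JsonToolkit
from langchain.llms import OpenAI
from langchain.tools.json.tool import JsonSpec

with open("openai_openapi.yml") as f:
    data = yaml.safe_load(f)
toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
agent = create_json_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
# agent.run("What are the required parameters in the request body to the /completions endpoint?")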
langchain.agents.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction
Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct an OpenAPI agent from an LLM and tools. langchain.agents.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, agent_type: langchain.agents.agent_types.AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]# Construct a pandas agent from an LLM and dataframe.
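Example (a small sketch; the CSV path and the commented question are placeholders, and any DataFrame works):

import pandas as pd
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.read_csv("titanic.csv")
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
# agent.run("How many people have more than 3 siblings?")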
langchain.agents.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction
Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a pbi agent from an LLM and tools.
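Example (a hedged sketch: the dataset id, table name, Azure credential and commented question are placeholders for your own workspace, and azure-identity must be installed):

from azure.identity import DefaultAzureCredential
from langchain.agents import create_pbi_agent
from langchain.agents.agent_toolkits import PowerBIToolkit
from langchain.llms import OpenAI
from langchain.utilities.powerbi import PowerBIDataset

llm = OpenAI(temperature=0)
powerbi = PowerBIDataset(
    dataset_id="<dataset-id>",
    table_names=["Sales"],
    credential=DefaultAzureCredential(),
)
agent = create_pbi_agent(llm=llm, toolkit=PowerBIToolkit(powerbi=powerbi, llm=llm), verbose=True)
# agent.run("How many rows are in the Sales table?")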
langchain.agents.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING
else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a pbi agent from a chat LLM and tools. If you supply only a toolkit and no PowerBI dataset, the same LLM is used for both. langchain.agents.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix: str = '\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]# Construct a spark agent from an LLM and dataframe.
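Example (sketch only; assumes a local Spark session, pyspark installed and a placeholder CSV path):

from pyspark.sql import SparkSession
from langchain.agents import create_spark_dataframe_agent
from langchain.llms import OpenAI

spark = SparkSession.builder.getOrCreate()
df = spark.read.csv("titanic.csv", header=True, inferSchema=True)
agent = create_spark_dataframe_agent(llm=OpenAI(temperature=0), df=df, verbose=True)
# agent.run("How many rows are there?")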
langchain.agents.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this
Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a Spark SQL agent from an LLM and tools.
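Example (a sketch; the schema name and the commented question are placeholders, and it assumes tables already exist in the Spark catalog):

from langchain.agents import create_spark_sql_agent
from langchain.agents.agent_toolkits import SparkSQLToolkit
from langchain.chat_models import ChatOpenAI
from langchain.utilities.spark_sql import SparkSQL

spark_sql = SparkSQL(schema="langchain_example")
llm = ChatOpenAI(temperature=0)
agent = create_spark_sql_agent(llm=llm, toolkit=SparkSQLToolkit(db=spark_sql, llm=llm), verbose=True)
# agent.run("Describe the titanic table")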
langchain.agents.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, agent_type: langchain.agents.agent_types.AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: Optional[str] = None, format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N
times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a sql agent from an LLM and tools. langchain.agents.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]# Construct a vectorstore agent from an LLM and tools. langchain.agents.create_vectorstore_router_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]# Construct a vectorstore router agent from an LLM and tools.
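Example (an illustrative sketch for create_vectorstore_agent; the Chroma store, its single document, the toolkit name and the commented question are assumptions, and chromadb must be installed):

from langchain.agents import create_vectorstore_agent
from langchain.agents.agent_toolkits import VectorStoreInfo, VectorStoreToolkit
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Chroma

store = Chroma.from_texts(["LangChain provides agents and chains."], OpenAIEmbeddings())
info = VectorStoreInfo(name="docs", description="project documentation", vectorstore=store)
agent = create_vectorstore_agent(
    llm=OpenAI(temperature=0),
    toolkit=VectorStoreToolkit(vectorstore_info=info),
    verbose=True,
)
# agent.run("What does LangChain provide?")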
langchain.agents.get_all_tool_names() → List[str][source]# Get a list of all possible tool names. langchain.agents.initialize_agent(tools: Sequence[langchain.tools.base.BaseTool], llm: langchain.base_language.BaseLanguageModel, agent: Optional[langchain.agents.agent_types.AgentType] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Load an agent executor given tools and LLM. Parameters tools – List of tools this agent has access to. llm – Language model to use as the agent. agent – Agent type to use. If None and agent_path is also None, will default to AgentType.ZERO_SHOT_REACT_DESCRIPTION. callback_manager – CallbackManager to use. Global callback manager is used if not provided. Defaults to None. agent_path – Path to serialized agent to use. agent_kwargs – Additional keyword arguments to pass to the underlying agent **kwargs – Additional keyword arguments passed to the agent executor Returns An agent executor langchain.agents.load_agent(path: Union[str, pathlib.Path], **kwargs: Any) → langchain.agents.agent.BaseSingleActionAgent[source]# Unified method for loading an agent from LangChainHub or local fs. langchain.agents.load_huggingface_tool(task_or_repo_id: str, model_repo_id: Optional[str] = None, token: Optional[str] = None, remote: bool = False, **kwargs: Any) → langchain.tools.base.BaseTool[source]#
langchain.agents.load_tools(tool_names: List[str], llm: Optional[langchain.base_language.BaseLanguageModel] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → List[langchain.tools.base.BaseTool][source]# Load tools based on their name. Parameters tool_names – name of tools to load. llm – Optional language model, may be needed to initialize certain tools. callbacks – Optional callback manager or list of callback handlers. If not provided, default global callback manager will be used. Returns List of tools. langchain.agents.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True) → Callable[source]# Make tools out of functions, can be used with or without arguments. Parameters *args – The arguments to the tool. return_direct – Whether to return directly from the tool rather than continuing the agent loop. args_schema – optional argument schema for user to specify infer_schema – Whether to infer the schema of the arguments from the function’s signature. This also makes the resultant tool accept a dictionary input to its run() function. Requires: Function must be of type (str) -> str Function must have a docstring Examples @tool def search_api(query: str) -> str: """Searches the API for the query.""" return @tool("search", return_direct=True) def search_api(query: str) -> str: """Searches the API for the query.""" return
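Example (a quick sketch of load_tools and initialize_agent together; the tool names and the commented question are illustrative, and the serpapi tool needs an API key):

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# agent.run("What is 15% of 300?")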
rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/agents.html
5df52031811b-43
Experimental Modules# This module contains experimental modules and reproductions of existing work using LangChain primitives. Autonomous Agents# Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module. class langchain.experimental.BabyAGI(*, lc_kwargs: Dict[str, Any] = None, memory: Optional[langchain.schema.BaseMemory] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, task_list: collections.deque = None, task_creation_chain: langchain.chains.base.Chain, task_prioritization_chain: langchain.chains.base.Chain, execution_chain: langchain.chains.base.Chain, task_id_counter: int = 1, vectorstore: langchain.vectorstores.base.VectorStore, max_iterations: Optional[int] = None)[source]# Controller model for the BabyAGI agent. model Config[source]# Configuration for this pydantic object. arbitrary_types_allowed = True# execute_task(objective: str, task: str, k: int = 5) → str[source]# Execute a task. classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, vectorstore: langchain.vectorstores.base.VectorStore, verbose: bool = False, task_execution_chain: Optional[langchain.chains.base.Chain] = None, **kwargs: Dict[str, Any]) → langchain.experimental.autonomous_agents.baby_agi.baby_agi.BabyAGI[source]# Initialize the BabyAGI Controller.
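Example (a hedged sketch of BabyAGI.from_llm; the FAISS index setup, the iteration count and the commented objective are placeholders, and faiss-cpu must be installed):

import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import BabyAGI
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
index = faiss.IndexFlatL2(1536)  # dimensionality of the OpenAI embeddings
vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
baby_agi = BabyAGI.from_llm(
    llm=OpenAI(temperature=0), vectorstore=vectorstore, verbose=False, max_iterations=3
)
# baby_agi({"objective": "Write a weather report for SF today"})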
get_next_task(result: str, task_description: str, objective: str) → List[Dict][source]# Get the next task. property input_keys: List[str]# Input keys this chain expects. property output_keys: List[str]# Output keys this chain expects. prioritize_tasks(this_task_id: int, objective: str) → List[Dict][source]# Prioritize tasks. class langchain.experimental.AutoGPT(ai_name: str, memory: langchain.vectorstores.base.VectorStoreRetriever, chain: langchain.chains.llm.LLMChain, output_parser: langchain.experimental.autonomous_agents.autogpt.output_parser.BaseAutoGPTOutputParser, tools: List[langchain.tools.base.BaseTool], feedback_tool: Optional[langchain.tools.human.tool.HumanInputRun] = None, chat_history_memory: Optional[langchain.schema.BaseChatMessageHistory] = None)[source]# Agent class for interacting with Auto-GPT. Generative Agents# Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module. class langchain.experimental.GenerativeAgent(*, name: str, age: Optional[int] = None, traits: str = 'N/A', status: str, memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory, llm: langchain.base_language.BaseLanguageModel, verbose: bool = False, summary: str = '', summary_refresh_seconds: int = 3600, last_refreshed: datetime.datetime = None, daily_summaries: List[str] = None)[source]# A character with memory and innate characteristics. model Config[source]# Configuration for this pydantic object. arbitrary_types_allowed = True#
field age: Optional[int] = None# The optional age of the character. field daily_summaries: List[str] [Optional]# Summary of the events in the plan that the agent took. generate_dialogue_response(observation: str, now: Optional[datetime.datetime] = None) → Tuple[bool, str][source]# React to a given observation. generate_reaction(observation: str, now: Optional[datetime.datetime] = None) → Tuple[bool, str][source]# React to a given observation. get_full_header(force_refresh: bool = False, now: Optional[datetime.datetime] = None) → str[source]# Return a full header of the agent’s status, summary, and current time. get_summary(force_refresh: bool = False, now: Optional[datetime.datetime] = None) → str[source]# Return a descriptive summary of the agent. field last_refreshed: datetime.datetime [Optional]# The last time the character’s summary was regenerated. field llm: langchain.base_language.BaseLanguageModel [Required]# The underlying language model. field memory: langchain.experimental.generative_agents.memory.GenerativeAgentMemory [Required]# The memory object that combines relevance, recency, and ‘importance’. field name: str [Required]# The character’s name. field status: str [Required]# The traits of the character you wish not to change. summarize_related_memories(observation: str) → str[source]# Summarize memories that are most relevant to an observation. field summary: str = ''# Stateful self-summary generated via reflection on the character’s memory. field summary_refresh_seconds: int = 3600# How frequently to re-generate the summary. field traits: str = 'N/A'#
Permanent traits to ascribe to the character. class langchain.experimental.GenerativeAgentMemory(*, lc_kwargs: Dict[str, Any] = None, llm: langchain.base_language.BaseLanguageModel, memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever, verbose: bool = False, reflection_threshold: Optional[float] = None, current_plan: List[str] = [], importance_weight: float = 0.15, aggregate_importance: float = 0.0, max_tokens_limit: int = 1200, queries_key: str = 'queries', most_recent_memories_token_key: str = 'recent_memories_token', add_memory_key: str = 'add_memory', relevant_memories_key: str = 'relevant_memories', relevant_memories_simple_key: str = 'relevant_memories_simple', most_recent_memories_key: str = 'most_recent_memories', now_key: str = 'now', reflecting: bool = False)[source]# add_memories(memory_content: str, now: Optional[datetime.datetime] = None) → List[str][source]# Add observations or memories to the agent’s memory. add_memory(memory_content: str, now: Optional[datetime.datetime] = None) → List[str][source]# Add an observation or memory to the agent’s memory. field aggregate_importance: float = 0.0# Track the sum of the ‘importance’ of recent memories. Triggers reflection when it reaches reflection_threshold. clear() → None[source]# Clear memory contents.
field current_plan: List[str] = []# The current plan of the agent. fetch_memories(observation: str, now: Optional[datetime.datetime] = None) → List[langchain.schema.Document][source]# Fetch related memories. field importance_weight: float = 0.15# How much weight to assign the memory importance. field llm: langchain.base_language.BaseLanguageModel [Required]# The core language model. load_memory_variables(inputs: Dict[str, Any]) → Dict[str, str][source]# Return key-value pairs given the text input to the chain. field memory_retriever: langchain.retrievers.time_weighted_retriever.TimeWeightedVectorStoreRetriever [Required]# The retriever to fetch related memories. property memory_variables: List[str]# Input keys this memory class will load dynamically. pause_to_reflect(now: Optional[datetime.datetime] = None) → List[str][source]# Reflect on recent observations and generate ‘insights’. field reflection_threshold: Optional[float] = None# When aggregate_importance exceeds reflection_threshold, stop to reflect. save_context(inputs: Dict[str, Any], outputs: Dict[str, Any]) → None[source]# Save the context of this model run to memory.
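Example (an illustrative sketch of wiring GenerativeAgentMemory into a GenerativeAgent; the FAISS relevance function, the retriever settings and the character details are assumptions made for the illustration, and faiss-cpu must be installed):

import math

import faiss
from langchain.docstore import InMemoryDocstore
from langchain.embeddings import OpenAIEmbeddings
from langchain.experimental import GenerativeAgent, GenerativeAgentMemory
from langchain.llms import OpenAI
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
vectorstore = FAISS(
    embeddings.embed_query,
    faiss.IndexFlatL2(1536),
    InMemoryDocstore({}),
    {},
    relevance_score_fn=lambda score: 1.0 - score / math.sqrt(2),
)
retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore, other_score_keys=["importance"], k=15
)
llm = OpenAI(temperature=0.7)
memory = GenerativeAgentMemory(llm=llm, memory_retriever=retriever, reflection_threshold=8)
tommie = GenerativeAgent(
    name="Tommie", age=25, traits="curious, talkative", status="looking for a job",
    llm=llm, memory=memory,
)
memory.add_memory("Tommie remembers his dog from childhood.")
# print(tommie.get_summary(force_refresh=True))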
Agent Toolkits# Agent toolkits. pydantic model langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit[source]# Toolkit for Azure Cognitive Services. get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.FileManagementToolkit[source]# Toolkit for interacting with local files. field root_dir: Optional[str] = None# If specified, all file operations are made relative to root_dir. field selected_tools: Optional[List[str]] = None# If provided, only provide the selected tools. Defaults to all. get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.GmailToolkit[source]# Toolkit for interacting with Gmail. field api_resource: Resource [Optional]# get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.JiraToolkit[source]# Jira Toolkit. field tools: List[langchain.tools.base.BaseTool] = []# classmethod from_jira_api_wrapper(jira_api_wrapper: langchain.utilities.jira.JiraAPIWrapper) → langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit[source]# get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.JsonToolkit[source]# Toolkit for interacting with a JSON spec. field spec: langchain.tools.json.tool.JsonSpec [Required]#
get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.NLAToolkit[source]# Natural Language API Toolkit Definition. field nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]# List of API Endpoint Tools. classmethod from_llm_and_ai_plugin(llm: langchain.base_language.BaseLanguageModel, ai_plugin: langchain.tools.plugin.AIPlugin, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]# Instantiate the toolkit from an OpenAPI Spec URL classmethod from_llm_and_ai_plugin_url(llm: langchain.base_language.BaseLanguageModel, ai_plugin_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]# Instantiate the toolkit from an OpenAPI Spec URL classmethod from_llm_and_spec(llm: langchain.base_language.BaseLanguageModel, spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]# Instantiate the toolkit by creating tools for each operation. classmethod from_llm_and_url(llm: langchain.base_language.BaseLanguageModel, open_api_url: str, requests: Optional[langchain.requests.Requests] = None, verbose: bool = False, **kwargs: Any) → langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit[source]#
Instantiate the toolkit from an OpenAPI Spec URL. get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools for all the API operations. pydantic model langchain.agents.agent_toolkits.OpenAPIToolkit[source]# Toolkit for interacting with an OpenAPI API. field json_agent: langchain.agents.agent.AgentExecutor [Required]# field requests_wrapper: langchain.requests.TextRequestsWrapper [Required]# classmethod from_llm(llm: langchain.base_language.BaseLanguageModel, json_spec: langchain.tools.json.tool.JsonSpec, requests_wrapper: langchain.requests.TextRequestsWrapper, **kwargs: Any) → langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit[source]# Create json agent from llm, then initialize. get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.PlayWrightBrowserToolkit[source]# Toolkit for web browser tools. field async_browser: Optional['AsyncBrowser'] = None# field sync_browser: Optional['SyncBrowser'] = None# classmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) → PlayWrightBrowserToolkit[source]# Instantiate the toolkit. get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.PowerBIToolkit[source]# Toolkit for interacting with a PowerBI dataset. field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None# field examples: Optional[str] = None# field llm: langchain.base_language.BaseLanguageModel [Required]#
field max_iterations: int = 5# field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]# get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.SQLDatabaseToolkit[source]# Toolkit for interacting with SQL databases. field db: langchain.sql_database.SQLDatabase [Required]# field llm: langchain.base_language.BaseLanguageModel [Required]# get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. property dialect: str# Return string representation of dialect to use. pydantic model langchain.agents.agent_toolkits.SparkSQLToolkit[source]# Toolkit for interacting with Spark SQL. field db: langchain.utilities.spark_sql.SparkSQL [Required]# field llm: langchain.base_language.BaseLanguageModel [Required]# get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.VectorStoreInfo[source]# Information about a vectorstore. field description: str [Required]# field name: str [Required]# field vectorstore: langchain.vectorstores.base.VectorStore [Required]# pydantic model langchain.agents.agent_toolkits.VectorStoreRouterToolkit[source]# Toolkit for routing between vectorstores. field llm: langchain.base_language.BaseLanguageModel [Optional]# field vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]# get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit.
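Example (a sketch combining SQLDatabaseToolkit, documented above, with create_sql_agent from the agents reference; the SQLite URI and the commented question are placeholders):

from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = OpenAI(temperature=0)
agent = create_sql_agent(llm=llm, toolkit=SQLDatabaseToolkit(db=db, llm=llm), verbose=True)
# agent.run("How many tables are in the database?")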
pydantic model langchain.agents.agent_toolkits.VectorStoreToolkit[source]# Toolkit for interacting with a vector store. field llm: langchain.base_language.BaseLanguageModel [Optional]# field vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required]# get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. pydantic model langchain.agents.agent_toolkits.ZapierToolkit[source]# Zapier Toolkit. field tools: List[langchain.tools.base.BaseTool] = []# classmethod from_zapier_nla_wrapper(zapier_nla_wrapper: langchain.utilities.zapier.ZapierNLAWrapper) → langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit[source]# Create a toolkit from a ZapierNLAWrapper. get_tools() → List[langchain.tools.base.BaseTool][source]# Get the tools in the toolkit. langchain.agents.agent_toolkits.create_csv_agent(llm: langchain.base_language.BaseLanguageModel, path: Union[str, List[str]], pandas_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.agents.agent.AgentExecutor[source]# Create a CSV agent by loading the file into a dataframe and using the pandas agent.
langchain.agents.agent_toolkits.create_json_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.json.toolkit.JsonToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys
exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a json agent from an LLM and tools.
langchain.agents.agent_toolkits.create_openapi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = "You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of
[{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, return_intermediate_steps: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
langchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm: langchain.base_language.BaseLanguageModel, df: Any, agent_type: langchain.agents.agent_types.AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: Optional[str] = None, suffix: Optional[str] = None, input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, include_df_in_prompt: Optional[bool] = True, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a pandas agent from an LLM and dataframe.
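A minimal sketch, assuming pandas is installed and an OpenAI key is set in the environment; the toy DataFrame below is invented for illustration.

import pandas as pd

from langchain.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain.llms import OpenAI

# Invented sample data; any DataFrame can be passed as `df`.
df = pd.DataFrame({"country": ["FR", "DE", "US"], "population_m": [68, 84, 333]})

agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("Which country has the largest population?")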
langchain.agents.agent_toolkits.create_pbi_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples: Optional[str] = None, input_variables: Optional[List[str]] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a pbi agent from an LLM and tools.
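The sketch below shows one plausible wiring: when toolkit is passed as None, a PowerBIToolkit is expected to be built internally from the dataset and the LLM. The deployment name, dataset GUID, table name, and access token are placeholders, not working values.

from langchain.agents.agent_toolkits import create_pbi_agent
from langchain.llms import AzureOpenAI
from langchain.utilities.powerbi import PowerBIDataset

llm = AzureOpenAI(deployment_name="my-deployment")       # hypothetical deployment name
dataset = PowerBIDataset(
    dataset_id="00000000-0000-0000-0000-000000000000",   # placeholder GUID
    table_names=["Sales"],                                # placeholder table
    token="<aad-access-token>",                           # placeholder token
)

agent = create_pbi_agent(llm=llm, toolkit=None, powerbi=dataset, verbose=True)
agent.run("How many rows does the Sales table contain?")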
langchain.agents.agent_toolkits.create_pbi_chat_agent(llm: langchain.chat_models.base.BaseChatModel, toolkit: Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit], powerbi: Optional[langchain.utilities.powerbi.PowerBIDataset] = None, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, output_parser: Optional[langchain.agents.agent.AgentOutputParser] = None, prefix: str = 'Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readible format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix: str = "TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples: Optional[str] = None, input_variables: Optional[List[str]] = None, memory: Optional[langchain.memory.chat_memory.BaseChatMemory] = None, top_k: int = 10, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a pbi agent from a Chat LLM and tools. If you supply only a toolkit and no powerbi dataset, the same LLM is used for both.
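A comparable sketch for the chat variant, which takes a chat model rather than a plain LLM; the dataset GUID, table names, and token below are placeholders.

from langchain.agents.agent_toolkits import create_pbi_chat_agent
from langchain.chat_models import ChatOpenAI
from langchain.utilities.powerbi import PowerBIDataset

dataset = PowerBIDataset(
    dataset_id="00000000-0000-0000-0000-000000000000",   # placeholder GUID
    table_names=["Sales", "Customers"],                   # placeholder tables
    token="<aad-access-token>",                           # placeholder token
)

# With toolkit=None, a toolkit is presumably built from the dataset and the same chat model.
agent = create_pbi_chat_agent(
    llm=ChatOpenAI(temperature=0),
    toolkit=None,
    powerbi=dataset,
    verbose=True,
)
agent.run("Show total sales per customer segment, top 5 only.")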
langchain.agents.agent_toolkits.create_python_agent(llm: langchain.base_language.BaseLanguageModel, tool: langchain.tools.python.tool.PythonREPLTool, agent_type: langchain.agents.agent_types.AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, verbose: bool = False, prefix: str = 'You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return "I don\'t know" as the answer.\n', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a python agent from an LLM and tool.
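A minimal sketch, assuming an OpenAI key is configured in the environment; the question is arbitrary.

from langchain.agents.agent_toolkits import create_python_agent
from langchain.llms import OpenAI
from langchain.tools.python.tool import PythonREPLTool

agent = create_python_agent(
    llm=OpenAI(temperature=0),
    tool=PythonREPLTool(),   # the REPL the agent uses to execute generated code
    verbose=True,
)
agent.run("What is the 10th Fibonacci number?")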
langchain.agents.agent_toolkits.create_spark_dataframe_agent(llm: langchain.llms.base.BaseLLM, df: Any, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = '\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix: str = '\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables: Optional[List[str]] = None, verbose: bool = False, return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a spark agent from an LLM and dataframe.
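A hedged sketch, assuming pyspark is available locally and an OpenAI key is configured; the two-row DataFrame is invented for illustration.

from pyspark.sql import SparkSession

from langchain.agents.agent_toolkits import create_spark_dataframe_agent
from langchain.llms import OpenAI

spark = SparkSession.builder.getOrCreate()
# Invented sample rows; any Spark DataFrame can be passed.
df = spark.createDataFrame([("Alice", 34), ("Bob", 29)], schema=["name", "age"])

agent = create_spark_dataframe_agent(llm=OpenAI(temperature=0), df=df, verbose=True)
agent.run("What is the average age?")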
langchain.agents.agent_toolkits.create_spark_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: str = 'Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
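A sketch under assumptions: a local Spark session whose default schema already holds at least one table. SparkSQL wraps the active (or newly created) session, and SparkSQLToolkit bundles it with the LLM before the agent is constructed.

from langchain.agents.agent_toolkits import SparkSQLToolkit, create_spark_sql_agent
from langchain.llms import OpenAI
from langchain.utilities.spark_sql import SparkSQL

# Wraps (or creates) the active Spark session; "default" is the assumed schema name.
spark_sql = SparkSQL(schema="default")

llm = OpenAI(temperature=0)
toolkit = SparkSQLToolkit(db=spark_sql, llm=llm)

agent = create_spark_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("How many tables are in the default schema, and what are their names?")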