langchain.agents.agent_toolkits.create_sql_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit, agent_type: langchain.agents.agent_types.AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: Optional[str] = None, format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a sql agent from an LLM and tools.
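Example (a minimal sketch, not from the original docs: the SQLite path and the question are illustrative, and an OpenAI key is assumed in the environment):
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///chinook.db")  # hypothetical database file
llm = OpenAI(temperature=0)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent.run("How many employees are there?")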
langchain.agents.agent_toolkits.create_vectorstore_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore agent from an LLM and tools.
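Example (a sketch under assumptions: FAISS, OpenAIEmbeddings, and the sample text are illustrative choices, not mandated by this API):
from langchain.agents.agent_toolkits import (
    VectorStoreInfo,
    VectorStoreToolkit,
    create_vectorstore_agent,
)
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

# Build a tiny illustrative store; any supported vector store works here.
store = FAISS.from_texts(["LangChain provides agent toolkits."], OpenAIEmbeddings())
info = VectorStoreInfo(name="docs", description="project documentation", vectorstore=store)
agent = create_vectorstore_agent(llm=OpenAI(temperature=0), toolkit=VectorStoreToolkit(vectorstore_info=info))
agent.run("What does LangChain provide?")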
langchain.agents.agent_toolkits.create_vectorstore_router_agent(llm: langchain.base_language.BaseLanguageModel, toolkit: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit, callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → langchain.agents.agent.AgentExecutor[source]#
Construct a vectorstore router agent from an LLM and tools.
Utilities#
General utilities.
pydantic model langchain.utilities.ApifyWrapper[source]#
Wrapper around Apify.
To use, you should have the apify-client python package installed,
and the environment variable APIFY_API_TOKEN set with your API key, or pass
apify_api_token as a named parameter to the constructor.
field apify_client: Any = None#
field apify_client_async: Any = None#
async acall_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], langchain.schema.Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) → langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]#
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) – The ID or name of the Actor on the Apify platform.
run_input (Dict) – The input object of the Actor that you’re trying to run.
dataset_mapping_function (Callable) – A function that takes a single
dictionary (an Apify dataset item) and converts it to
an instance of the Document class.
build (str, optional) – Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) – Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) – Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
call_actor(actor_id: str, run_input: Dict, dataset_mapping_function: Callable[[Dict], langchain.schema.Document], *, build: Optional[str] = None, memory_mbytes: Optional[int] = None, timeout_secs: Optional[int] = None) → langchain.document_loaders.apify_dataset.ApifyDatasetLoader[source]#
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) – The ID or name of the Actor on the Apify platform.
run_input (Dict) – The input object of the Actor that you’re trying to run.
dataset_mapping_function (Callable) – A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) – Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) – Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) – Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
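Example (a sketch, assuming APIFY_API_TOKEN is set in the environment; the Actor ID and run input follow Apify's public Website Content Crawler and are illustrative):
from langchain.schema import Document
from langchain.utilities import ApifyWrapper

apify = ApifyWrapper()
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",  # public Actor; illustrative choice
    run_input={"startUrls": [{"url": "https://python.langchain.com"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""), metadata={"source": item.get("url")}
    ),
)
docs = loader.load()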
pydantic model langchain.utilities.ArxivAPIWrapper[source]#
Wrapper around ArxivAPI.
To use, you should have the arxiv python package installed.
https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and
fetch document summaries. By default, it will return the document summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Set doc_content_chars_max=None if you don’t want to limit the content size.
Parameters
top_k_results – the number of top-scored documents used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH – the cut limit on the query used for the arxiv tool.
load_max_docs – a limit to the number of loaded documents
load_all_available_meta –
if True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result),
if False: the metadata gets only the most informative fields.
field arxiv_exceptions: Any = None#
field doc_content_chars_max: Optional[int] = 4000#
field load_all_available_meta: bool = False#
field load_max_docs: int = 100#
field top_k_results: int = 3#
load(query: str) → List[langchain.schema.Document][source]#
Run Arxiv search and get the article texts plus the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
Returns: a list of documents with the document.page_content in text format
run(query: str) → str[source]#
Run Arxiv search and get the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
See https://lukasschwab.me/arxiv.py/index.html#Result
It uses only the most informative fields of article meta information.
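Example (a minimal sketch; the query and field values are illustrative):
from langchain.utilities import ArxivAPIWrapper

arxiv = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=1000)
summary = arxiv.run("quantum computing")   # formatted meta information as a string
docs = arxiv.load("quantum computing")     # list of Documents with article texts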
class langchain.utilities.BashProcess(strip_newlines: bool = False, return_err_output: bool = False, persistent: bool = False)[source]#
Executes bash commands and returns the output.
process_output(output: str, command: str) → str[source]#
run(commands: Union[str, List[str]]) → str[source]#
Run commands and return final output.
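Example (a short sketch; commands run with the permissions of the current process, so treat inputs as trusted):
from langchain.utilities import BashProcess

bash = BashProcess(strip_newlines=True)
print(bash.run(["echo hello", "pwd"]))  # runs both commands, returns the final output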
pydantic model langchain.utilities.BingSearchAPIWrapper[source]#
Wrapper for Bing Search API.
In order to set this up, follow instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
field bing_search_url: str [Required]#
field bing_subscription_key: str [Required]#
field k: int = 10#
results(query: str, num_results: int) → List[Dict][source]#
Run query through BingSearch and return metadata.
Parameters
query – The query to search for.
num_results – The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query: str) → str[source]#
Run query through BingSearch and parse result.
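Example (a sketch, assuming the BING_SUBSCRIPTION_KEY and BING_SEARCH_URL environment variables are set per the instructions above):
from langchain.utilities import BingSearchAPIWrapper

bing = BingSearchAPIWrapper()
print(bing.run("langchain"))
for hit in bing.results("langchain", num_results=3):
    print(hit["title"], hit["link"])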
pydantic model langchain.utilities.DuckDuckGoSearchAPIWrapper[source]#
Wrapper for DuckDuckGo Search API.
Free and does not require any setup
field k: int = 10#
field max_results: int = 5#
field region: Optional[str] = 'wt-wt'#
field safesearch: str = 'moderate'#
field time: Optional[str] = 'y'#
get_snippets(query: str) → List[str][source]#
Run query through DuckDuckGo and return concatenated results.
results(query: str, num_results: int) → List[Dict[str, str]][source]#
Run query through DuckDuckGo and return metadata.
Parameters
query – The query to search for.
num_results – The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query: str) → str[source]#
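Example (no credentials required; a minimal sketch):
from langchain.utilities import DuckDuckGoSearchAPIWrapper

ddg = DuckDuckGoSearchAPIWrapper(max_results=3)
print(ddg.run("langchain"))                     # concatenated snippets
print(ddg.results("langchain", num_results=3))  # dicts with snippet/title/link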
pydantic model langchain.utilities.GooglePlacesAPIWrapper[source]#
Wrapper around Google Places API.
To use, you should have the googlemaps python package installed, an API key for the Google Maps platform,
and the environment variable GPLACES_API_KEY
set with your API key, or pass gplaces_api_key
as a named parameter to the constructor.
By default, this will return all the results of the input query. You can use the top_k_results argument to limit the number of results.
Example
from langchain import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
field gplaces_api_key: Optional[str] = None#
field top_k_results: Optional[int] = None#
fetch_place_details(place_id: str) → Optional[str][source]#
format_place_details(place_details: Dict[str, Any]) → Optional[str][source]#
run(query: str) → str[source]#
Run Places search and get k number of places that match.
pydantic model langchain.utilities.GoogleSearchAPIWrapper[source]#
Wrapper for Google Search API.
Adapted from https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
TODO: DOCS for using it
1. Install google-api-python-client
- If you don’t already have a Google account, sign up.
- If you have never created a Google APIs Console project,
read the Managing Projects page and create a project in the Google API Console.
- Install the library using pip install google-api-python-client
The current version of the library is 2.70.0 at this time
2. To create an API key:
- Navigate to the APIs & Services→Credentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog box displays your newly created key.
- You now have an API_KEY
3. Setup Custom Search Engine so you can search the entire web
- Create a custom search engine in this link.
- In Sites to search, add any valid URL (e.g. www.stackoverflow.com).
- That’s all you have to fill in; the rest doesn’t matter.
In the left-side menu, click Edit search engine → {your search engine name}
→ Setup. Set Search the entire web to ON. Remove the URL you added from
the list of Sites to search.
- Under Search engine ID you’ll find the search-engine-ID.
4. Enable the Custom Search API
- Navigate to the APIs & Services→Dashboard panel in Cloud Console.
- Click Enable APIs and Services.
- Search for Custom Search API and click on it.
- Click Enable.
URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com
field google_api_key: Optional[str] = None#
field google_cse_id: Optional[str] = None#
field k: int = 10#
field siterestrict: bool = False#
results(query: str, num_results: int) → List[Dict][source]#
Run query through GoogleSearch and return metadata.
Parameters
query – The query to search for.
num_results – The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query: str) → str[source]#
Run query through GoogleSearch and parse result.
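Example (a sketch, assuming the GOOGLE_API_KEY and GOOGLE_CSE_ID environment variables are set as described in the setup steps above):
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper(k=5)
print(search.run("langchain documentation"))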
pydantic model langchain.utilities.GoogleSerperAPIWrapper[source]#
Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable SERPER_API_KEY
set with your API key, or pass serper_api_key as a named parameter
to the constructor.
Example
from langchain import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field gl: str = 'us'#
field hl: str = 'en'#
field k: int = 10#
field serper_api_key: Optional[str] = None#
field tbs: Optional[str] = None#
field type: Literal['news', 'search', 'places', 'images'] = 'search'#
async aresults(query: str, **kwargs: Any) → Dict[source]#
Run query through GoogleSearch.
async arun(query: str, **kwargs: Any) → str[source]#
Run query through GoogleSearch and parse result async.
results(query: str, **kwargs: Any) → Dict[source]#
Run query through GoogleSearch.
run(query: str, **kwargs: Any) → str[source]#
Run query through GoogleSearch and parse result.
pydantic model langchain.utilities.GraphQLAPIWrapper[source]#
Wrapper around GraphQL API.
To use, you should have the gql python package installed.
This wrapper will use the GraphQL API to conduct queries.
field custom_headers: Optional[Dict[str, str]] = None#
field graphql_endpoint: str [Required]#
run(query: str) → str[source]#
Run a GraphQL query and get the results.
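Example (a sketch; the endpoint is a public demo GraphQL API and the query is illustrative):
from langchain.utilities import GraphQLAPIWrapper

graphql = GraphQLAPIWrapper(graphql_endpoint="https://countries.trevorblades.com/")
print(graphql.run('{ country(code: "FR") { name capital } }'))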
pydantic model langchain.utilities.LambdaWrapper[source]#
Wrapper for AWS Lambda SDK.
Docs for using:
pip install boto3
Create a lambda function using the AWS Console or CLI
Run aws configure and enter your AWS credentials
field awslambda_tool_description: Optional[str] = None#
field awslambda_tool_name: Optional[str] = None#
field function_name: Optional[str] = None#
run(query: str) → str[source]#
Invoke Lambda function and parse result.
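Example (a sketch; "echo-text" is a hypothetical Lambda function, and boto3 resolves credentials from aws configure or the environment):
from langchain.utilities import LambdaWrapper

aws_lambda = LambdaWrapper(
    function_name="echo-text",  # hypothetical function name
    awslambda_tool_name="echo",
    awslambda_tool_description="Echoes the input text back to the caller",
)
print(aws_lambda.run("hello"))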
pydantic model langchain.utilities.MetaphorSearchAPIWrapper[source]#
Wrapper for Metaphor Search API.
field k: int = 10#
field metaphor_api_key: str [Required]#
results(query: str, num_results: int) → List[Dict][source]#
Run query through Metaphor Search and return metadata.
Parameters
query – The query to search for.
num_results – The number of results to return.
Returns
title - The title of the result.
url - The url of the result.
author - Author of the content, if applicable. Otherwise, None.
date_created - Estimated date created,
in YYYY-MM-DD format. Otherwise, None.
Return type
A list of dictionaries with the following keys
async results_async(query: str, num_results: int) → List[Dict][source]#
Get results from the Metaphor Search API asynchronously.
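Example (a sketch; the key and query are illustrative):
from langchain.utilities import MetaphorSearchAPIWrapper

metaphor = MetaphorSearchAPIWrapper(metaphor_api_key="my-api-key")
for hit in metaphor.results("best ramen in tokyo", num_results=5):
    print(hit["title"], hit["url"])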
pydantic model langchain.utilities.OpenWeatherMapAPIWrapper[source]#
Wrapper for OpenWeatherMap API using PyOWM.
Docs for using:
Go to OpenWeatherMap and sign up for an API key
Save your API KEY into OPENWEATHERMAP_API_KEY env variable
pip install pyowm
field openweathermap_api_key: Optional[str] = None#
field owm: Any = None#
run(location: str) → str[source]#
Get the current weather information for a specified location.
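Example (a sketch, assuming the OPENWEATHERMAP_API_KEY environment variable is set; the location string is illustrative):
from langchain.utilities import OpenWeatherMapAPIWrapper

weather = OpenWeatherMapAPIWrapper()
print(weather.run("London,GB"))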
pydantic model langchain.utilities.PowerBIDataset[source]#
Create PowerBI engine from dataset ID and credential or token.
Use either the credential or a supplied token to authenticate.
If both are supplied the credential is used to generate a token.
The impersonated_user_name is the UPN of a user to be impersonated.
If the model is not RLS enabled, this will be ignored.
Validators
fix_table_names » table_names
token_or_credential_present » all fields
field aiosession: Optional[aiohttp.ClientSession] = None#
field credential: Optional[TokenCredential] = None#
field dataset_id: str [Required]#
field group_id: Optional[str] = None#
field impersonated_user_name: Optional[str] = None#
field sample_rows_in_table_info: int = 1#
Constraints
exclusiveMinimum = 0
maximum = 10
field schemas: Dict[str, str] [Optional]#
field table_names: List[str] [Required]#
field token: Optional[str] = None#
async aget_table_info(table_names: Optional[Union[List[str], str]] = None) → str[source]#
Get information about specified tables.
async arun(command: str) → Any[source]#
Execute a DAX command and return the result asynchronously.
get_schemas() → str[source]#
Get the available schemas.
get_table_info(table_names: Optional[Union[List[str], str]] = None) → str[source]#
Get information about specified tables.
get_table_names() → Iterable[str][source]#
Get names of tables available.
run(command: str) → Any[source]#
Execute a DAX command and return a json representing the results.
property headers: Dict[str, str]#
Get the token.
property request_url: str#
Get the request url.
property table_info: str#
Information about all tables in the database.
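Example (a hedged sketch; the dataset ID and table name are illustrative placeholders, and DefaultAzureCredential is one possible TokenCredential):
from azure.identity import DefaultAzureCredential
from langchain.utilities import PowerBIDataset

powerbi = PowerBIDataset(
    dataset_id="<dataset-guid>",          # illustrative placeholder
    table_names=["Sales"],                # illustrative table
    credential=DefaultAzureCredential(),  # exchanged for a token on request
)
print(powerbi.get_table_info())
print(powerbi.run('EVALUATE ROW("rows", COUNTROWS(Sales))'))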
pydantic model langchain.utilities.PubMedAPIWrapper[source]#
Wrapper around PubMed API.
This wrapper will use the PubMed API to conduct searches and fetch
document summaries. By default, it will return the document summaries
of the top-k results of an input search.
Parameters
top_k_results – the number of top-scored documents used for the PubMed tool
load_max_docs – a limit to the number of loaded documents
load_all_available_meta –
if True: the metadata of the loaded Documents gets all available meta info (see https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch)
if False: the metadata gets only the most informative fields.
field doc_content_chars_max: int = 2000#
field email: str = '[email protected]'#
field load_all_available_meta: bool = False#
field load_max_docs: int = 25#
field top_k_results: int = 3#
load(query: str) → List[dict][source]#
Search PubMed for documents matching the query.
Return a list of dictionaries containing the document metadata.
load_docs(query: str) → List[langchain.schema.Document][source]#
retrieve_article(uid: str, webenv: str) → dict[source]#
run(query: str) → str[source]#
Run PubMed search and get the article meta information.
See https://www.ncbi.nlm.nih.gov/books/NBK25499/#chapter4.ESearch
It uses only the most informative fields of article meta information.
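Example (a minimal sketch; the query is illustrative):
from langchain.utilities import PubMedAPIWrapper

pubmed = PubMedAPIWrapper(top_k_results=3)
print(pubmed.run("messenger RNA vaccines"))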
pydantic model langchain.utilities.PythonREPL[source]#
Simulates a standalone Python REPL.
field globals: Optional[Dict] [Optional] (alias '_globals')#
field locals: Optional[Dict] [Optional] (alias '_locals')#
run(command: str) → str[source]#
Run command with its own globals/locals and return anything printed.
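Example (a minimal sketch; note that only printed output is captured):
from langchain.utilities import PythonREPL

repl = PythonREPL()
print(repl.run("x = 21 * 2\nprint(x)"))  # the value must be printed to be returned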
pydantic model langchain.utilities.SearxSearchWrapper[source]#
Wrapper for Searx API.
To use you need to provide the searx host by passing the named parameter
searx_host or exporting the environment variable SEARX_HOST.
In some situations you might want to disable SSL verification, for example
if you are running searx locally. You can do this by passing the named parameter
unsecure. You can also pass the host url scheme as http to disable SSL.
Example
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://localhost:8888")
Example with SSL disabled:
from langchain.utilities import SearxSearchWrapper
# note the unsecure parameter is not needed if you pass the url scheme as
# http
searx = SearxSearchWrapper(searx_host="http://localhost:8888",
unsecure=True)
Validators
disable_ssl_warnings » unsecure
validate_params » all fields
field aiosession: Optional[Any] = None#
field categories: Optional[List[str]] = []#
field engines: Optional[List[str]] = []#
field headers: Optional[dict] = None#
field k: int = 10#
field params: dict [Optional]#
field query_suffix: Optional[str] = ''#
field searx_host: str = ''#
field unsecure: bool = False#
async aresults(query: str, num_results: int, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Asynchronously query with json results.
Uses aiohttp. See results for more info.
async arun(query: str, engines: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Asynchronous version of run.
results(query: str, num_results: int, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → List[Dict][source]#
Run query through Searx API and return the results with metadata.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
num_results – Limit the number of results to return.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
{snippet: The description of the result.
title: The title of the result.
link: The link to the result.
engines: The engines used for the result.
category: Searx category of the result.
}
Return type
Dict with the following keys
run(query: str, engines: Optional[List[str]] = None, categories: Optional[List[str]] = None, query_suffix: Optional[str] = '', **kwargs: Any) → str[source]#
Run query through Searx API and parse results.
You can pass any other params to the searx query API.
Parameters
query – The query to search for.
query_suffix – Extra suffix appended to the query.
engines – List of engines to use for the query.
categories – List of categories to use for the query.
**kwargs – extra parameters to pass to the searx API.
Returns
The result of the query.
Return type
str
Raises
ValueError – If an error occurred with the query.
Example
This will make a query to the qwant engine:
from langchain.utilities import SearxSearchWrapper
searx = SearxSearchWrapper(searx_host="http://my.searx.host")
searx.run("what is the weather in France ?", engine="qwant")
# the same result can be achieved using the `!` syntax of searx
# to select the engine using `query_suffix`
searx.run("what is the weather in France ?", query_suffix="!qwant")
pydantic model langchain.utilities.SerpAPIWrapper[source]#
Wrapper around SerpAPI.
To use, you should have the google-search-results python package installed,
and the environment variable SERPAPI_API_KEY set with your API key, or pass
serpapi_api_key as a named parameter to the constructor.
Example
from langchain import SerpAPIWrapper
serpapi = SerpAPIWrapper()
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field params: dict = {'engine': 'google', 'gl': 'us', 'google_domain': 'google.com', 'hl': 'en'}#
field serpapi_api_key: Optional[str] = None#
async aresults(query: str) → dict[source]#
Use aiohttp to run query through SerpAPI and return the results async.
async arun(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result async.
get_params(query: str) → Dict[str, str][source]#
Get parameters for SerpAPI.
results(query: str) → dict[source]#
Run query through SerpAPI and return the raw result.
run(query: str, **kwargs: Any) → str[source]#
Run query through SerpAPI and parse result.
class langchain.utilities.SparkSQL(spark_session: Optional[SparkSession] = None, catalog: Optional[str] = None, schema: Optional[str] = None, ignore_tables: Optional[List[str]] = None, include_tables: Optional[List[str]] = None, sample_rows_in_table_info: int = 3)[source]#
classmethod from_uri(database_uri: str, engine_args: Optional[dict] = None, **kwargs: Any) → langchain.utilities.spark_sql.SparkSQL[source]#
Create a remote Spark session via Spark Connect.
For example: SparkSQL.from_uri(“sc://localhost:15002”)
get_table_info(table_names: Optional[List[str]] = None) → str[source]#
get_table_info_no_throw(table_names: Optional[List[str]] = None) → str[source]#
Get information about specified tables.
Follows best practices as specified in: Rajkumar et al, 2022
(https://arxiv.org/abs/2204.00498)
If sample_rows_in_table_info, the specified number of sample rows will be
appended to each table description. This can increase performance as
demonstrated in the paper.
get_usable_table_names() → Iterable[str][source]#
Get names of tables available.
run(command: str, fetch: str = 'all') → str[source]#
run_no_throw(command: str, fetch: str = 'all') → str[source]#
Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
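Example (a local-session sketch; the demo table is illustrative):
from pyspark.sql import SparkSession
from langchain.utilities import SparkSQL

spark = SparkSession.builder.getOrCreate()
spark.createDataFrame([(1, "a"), (2, "b")], ["id", "label"]).createOrReplaceTempView("demo")
spark_sql = SparkSQL(spark_session=spark)
print(spark_sql.run("SELECT COUNT(*) FROM demo"))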
pydantic model langchain.utilities.TextRequestsWrapper[source]#
Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
field aiosession: Optional[aiohttp.client.ClientSession] = None#
field headers: Optional[Dict[str, str]] = None#
async adelete(url: str, **kwargs: Any) → str[source]#
DELETE the URL and return the text asynchronously.
async aget(url: str, **kwargs: Any) → str[source]#
GET the URL and return the text asynchronously.
async apatch(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PATCH the URL and return the text asynchronously.
async apost(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
POST to the URL and return the text asynchronously.
async aput(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PUT the URL and return the text asynchronously.
delete(url: str, **kwargs: Any) → str[source]#
DELETE the URL and return the text.
get(url: str, **kwargs: Any) → str[source]#
GET the URL and return the text.
patch(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PATCH the URL and return the text.
post(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
POST to the URL and return the text.
put(url: str, data: Dict[str, Any], **kwargs: Any) → str[source]#
PUT the URL and return the text.
property requests: langchain.requests.Requests#
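Example (a minimal sketch; httpbin.org is an illustrative echo service):
from langchain.utilities import TextRequestsWrapper

requests_wrapper = TextRequestsWrapper(headers={"Accept": "application/json"})
print(requests_wrapper.get("https://httpbin.org/get"))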
pydantic model langchain.utilities.TwilioAPIWrapper[source]#
SMS client using Twilio.
To use, you should have the twilio python package installed,
and the environment variables TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN, and
TWILIO_FROM_NUMBER, or pass account_sid, auth_token, and from_number as
named parameters to the constructor.
Example
from langchain.utilities.twilio import TwilioAPIWrapper
twilio = TwilioAPIWrapper(
account_sid="ACxxx",
auth_token="xxx",
from_number="+10123456789"
)
twilio.run('test', '+12484345508')
field account_sid: Optional[str] = None#
Twilio account string identifier.
field auth_token: Optional[str] = None#
Twilio auth token.
field from_number: Optional[str] = None#
A Twilio phone number in [E.164](https://www.twilio.com/docs/glossary/what-e164)
format, an
[alphanumeric sender ID](https://www.twilio.com/docs/sms/send-messages#use-an-alphanumeric-sender-id),
or a [Channel Endpoint address](https://www.twilio.com/docs/sms/channels#channel-addresses)
that is enabled for the type of message you want to send. Phone numbers or
[short codes](https://www.twilio.com/docs/sms/api/short-code) purchased from
Twilio also work here. You cannot, for example, spoof messages from a private
cell phone number. If you are using messaging_service_sid, this parameter
must be empty.
run(body: str, to: str) → str[source]#
Run body through Twilio and respond with message sid.
Parameters
body – The text of the message you want to send. Can be up to 1,600
characters in length.
to – The destination phone number in
[E.164](https://www.twilio.com/docs/glossary/what-e164) format for
SMS/MMS or
[Channel user address](https://www.twilio.com/docs/sms/channels#channel-addresses)
for other 3rd-party channels.
pydantic model langchain.utilities.WikipediaAPIWrapper[source]#
Wrapper around WikipediaAPI.
To use, you should have the wikipedia python package installed.
This wrapper will use the Wikipedia API to conduct searches and
fetch page summaries. By default, it will return the page summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
field doc_content_chars_max: int = 4000#
field lang: str = 'en'#
field load_all_available_meta: bool = False#
field top_k_results: int = 3#
load(query: str) → List[langchain.schema.Document][source]#
Run Wikipedia search and get the article text plus the meta information.
See
Returns: a list of documents.
run(query: str) → str[source]#
Run Wikipedia search and get page summaries.
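Example (a minimal sketch; the query and field values are illustrative):
from langchain.utilities import WikipediaAPIWrapper

wiki = WikipediaAPIWrapper(top_k_results=1, doc_content_chars_max=500)
print(wiki.run("Alan Turing"))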
pydantic model langchain.utilities.WolframAlphaAPIWrapper[source]#
Wrapper for Wolfram Alpha.
Docs for using:
Go to wolfram alpha and sign up for a developer account
Create an app and get your APP ID
Save your APP ID into WOLFRAM_ALPHA_APPID env variable
pip install wolframalpha
field wolfram_alpha_appid: Optional[str] = None#
run(query: str) → str[source]#
Run query through WolframAlpha and parse result.
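Example (a sketch, assuming the WOLFRAM_ALPHA_APPID environment variable is set as described above):
from langchain.utilities import WolframAlphaAPIWrapper

wolfram = WolframAlphaAPIWrapper()
print(wolfram.run("What is 2x+5 = -3x + 7?"))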
Embeddings#
Wrappers around embedding modules.
pydantic model langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding[source]#
Wrapper for Aleph Alpha’s Asymmetric Embeddings
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is the content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
field aleph_alpha_api_key: Optional[str] = None#
API key for Aleph Alpha API.
field compress_to_size: Optional[int] = 128#
Should the returned embeddings come back as an original 5120-dim vector,
or should it be compressed to 128-dim.
field contextual_control_threshold: Optional[int] = None#
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
field control_log_additive: Optional[bool] = True#
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
field hosting: Optional[str] = 'https://api.aleph-alpha.com'#
Optional parameter that specifies which datacenters may process the request.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field normalize: Optional[bool] = True#
Should returned embeddings be normalized
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Aleph Alpha’s asymmetric Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Aleph Alpha’s asymmetric, query embedding endpoint
:param text: The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding[source]#
The symmetric version of Aleph Alpha’s semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with a SemanticRepresentation.Symmetric
Example
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Aleph Alpha’s Document endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Aleph Alpha’s symmetric, query embedding endpoint
:param text: The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.BedrockEmbeddings[source]#
Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field model_id: str = 'amazon.titan-e1t-medium'#
Id of the model to call, e.g., amazon.titan-e1t-medium; this is
equivalent to the modelId property in the list-foundation-models api
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: Optional[str] = None#
The aws region e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION env variable
or region specified in ~/.aws/config in case it is not provided here.
embed_documents(texts: List[str], chunk_size: int = 1) → List[List[float]][source]#
Compute doc embeddings using a Bedrock model.
Parameters
texts – The list of texts to embed.
chunk_size – Bedrock currently only allows single string
inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a Bedrock model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
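Example (a sketch; the region is illustrative, and credentials are resolved by boto3 as described above):
from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(region_name="us-west-2")
doc_vectors = embeddings.embed_documents(["hello world"])
query_vector = embeddings.embed_query("hello world")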
pydantic model langchain.embeddings.CohereEmbeddings[source]#
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(
model="embed-english-light-v2.0", cohere_api_key="my-api-key"
)
field model: str = 'embed-english-v2.0'#
Model name to use.
field truncate: Optional[str] = None#
Truncate embeddings that are too long from start or end (“NONE”|”START”|”END”)
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to Cohere’s embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to Cohere’s embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.DashScopeEmbeddings[source]#
Wrapper around DashScope embedding models.
To use, you should have the dashscope python package installed, and the
environment variable DASHSCOPE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY" | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/embeddings.html |
c39d6ab62898-4 | os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
from langchain.embeddings.dashscope import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(
model="text-embedding-v1",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
field dashscope_api_key: Optional[str] = None#
API key for the DashScope API.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to DashScope’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to DashScope’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
pydantic model langchain.embeddings.DeepInfraEmbeddings[source]#
Wrapper around Deep Infra’s embedding inference service.
To use, you should have the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
There are multiple embeddings models available,
see https://deepinfra.com/models?type=embeddings.
Example
from langchain.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
model_id="sentence-transformers/clip-ViT-B-32",
deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
[
"Alpha is the first letter of Greek alphabet",
"Beta is the second letter of Greek alphabet",
]
)
r2 = deepinfra_emb.embed_query(
"What is the second letter of Greek alphabet"
)
field embed_instruction: str = 'passage: '#
Instruction used to embed documents.
field model_id: str = 'sentence-transformers/clip-ViT-B-32'#
Embeddings model to use.
field model_kwargs: Optional[dict] = None#
Other model keyword args
field normalize: bool = False#
whether to normalize the computed embeddings
field query_instruction: str = 'query: '#
Instruction used to embed the query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using a Deep Infra deployed embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
class langchain.embeddings.ElasticsearchEmbeddings(client: MlClient, model_id: str, *, input_field: str = 'text_field')[source]#
Wrapper around Elasticsearch embedding models.
This class provides an interface to generate embeddings using a model deployed
in an Elasticsearch cluster. It requires an Elasticsearch connection object
and the model_id of the model deployed in the cluster.
In Elasticsearch you need to have an embedding model loaded and deployed.
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
embed_documents(texts: List[str]) → List[List[float]][source]#
Generate embeddings for a list of documents.
Parameters
texts (List[str]) – A list of document text strings to generate embeddings
for.
Returns
A list of embeddings, one for each document in the input list.
Return type
List[List[float]]
embed_query(text: str) → List[float][source]#
Generate an embedding for a single query text.
Parameters
text (str) – The query text to generate an embedding for.
Returns
The embedding for the input query text.
Return type
List[float]
classmethod from_credentials(model_id: str, *, es_cloud_id: Optional[str] = None, es_user: Optional[str] = None, es_password: Optional[str] = None, input_field: str = 'text_field') → langchain.embeddings.elasticsearch.ElasticsearchEmbeddings[source]#
Instantiate embeddings from Elasticsearch credentials.
Parameters
model_id (str) – The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) – The name of the key for the input text field in the
document. Defaults to ‘text_field’.
es_cloud_id – (str, optional): The Elasticsearch cloud ID to connect to.
es_user – (str, optional): Elasticsearch username.
es_password – (str, optional): Elasticsearch password.
Example
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically
# pulled in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
input_field=input_field,
# es_cloud_id="foo",
# es_user="bar",
# es_password="baz",
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
classmethod from_es_connection(model_id: str, es_connection: Elasticsearch, input_field: str = 'text_field') → ElasticsearchEmbeddings[source]#
Instantiate embeddings from an existing Elasticsearch connection.
This method provides a way to create an instance of the ElasticsearchEmbeddings
class using an existing Elasticsearch connection. The connection object is used
to create an MlClient, which is then used to initialize the
ElasticsearchEmbeddings instance.
Args:
model_id (str): The model_id of the model deployed in the Elasticsearch cluster.
es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch
connection object. input_field (str, optional): The name of the key for the
input text field in the document. Defaults to ‘text_field’.
Returns:
ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.
Example
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Create Elasticsearch connection
es_connection = Elasticsearch(
hosts=["localhost:9200"], http_auth=("user", "password")
)
# Instantiate ElasticsearchEmbeddings using the existing connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,
es_connection,
input_field=input_field,
)
documents = [
"This is an example document.", | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/embeddings.html |
c39d6ab62898-8 | )
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
pydantic model langchain.embeddings.EmbaasEmbeddings[source]#
Wrapper around embaas’s embedding service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Initialise with default model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb = EmbaasEmbeddings()
# Initialise with custom model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb_model = "instructor-large"
emb_inst = "Represent the Wikipedia document for retrieval"
emb = EmbaasEmbeddings(
model=emb_model,
instruction=emb_inst
)
field api_url: str = 'https://api.embaas.io/v1/embeddings/'#
The URL for the embaas embeddings API.
field instruction: Optional[str] = None#
Instruction used for domain-specific embeddings.
field model: str = 'e5-large-v2'#
The model used for embeddings.
embed_documents(texts: List[str]) → List[List[float]][source]#
Get embeddings for a list of texts.
Parameters
texts – The list of texts to get embeddings for.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Get embeddings for a single text.
Parameters
text – The text to get embeddings for.
Returns
List of embeddings.
pydantic model langchain.embeddings.FakeEmbeddings[source]#
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed search docs.
embed_query(text: str) → List[float][source]#
Embed query text.
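Example (fixed-size random vectors, handy in tests; the size value is illustrative):
from langchain.embeddings import FakeEmbeddings

fake = FakeEmbeddings(size=8)  # dimensionality of the fake vectors
vectors = fake.embed_documents(["a", "b"])
assert len(vectors) == 2 and len(vectors[0]) == 8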
pydantic model langchain.embeddings.HuggingFaceEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
field cache_folder: Optional[str] = None#
Path to store models.
Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field encode_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass when calling the encode method of the model.
field model_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass to the model.
field model_name: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceHubEmbeddings[source]#
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
field model_kwargs: Optional[dict] = None#
Key word arguments to pass to the model.
field repo_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field task: Optional[str] = 'feature-extraction'#
Task to call the model with.
embed_documents(texts: List[str]) → List[List[float]][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to HuggingFaceHub’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.HuggingFaceInstructEmbeddings[source]#
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
field cache_folder: Optional[str] = None#
Path to store models.
Can also be set by the SENTENCE_TRANSFORMERS_HOME environment variable.
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field encode_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass when calling the encode method of the model.
field model_kwargs: Dict[str, Any] [Optional]#
Key word arguments to pass to the model.
field model_name: str = 'hkunlp/instructor-large'#
Model name to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.LlamaCppEmbeddings[source]#
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
field f16_kv: bool = False#
Use half-precision for key/value cache.
field logits_all: bool = False#
Return logits for all tokens, not just the last token.
field n_batch: Optional[int] = 8#
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
field n_ctx: int = 512#
Token context window.
field n_gpu_layers: Optional[int] = None#
Number of layers to be loaded into gpu memory. Default None.
field n_parts: int = -1#
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
field n_threads: Optional[int] = None#
Number of threads to use. If None, the number
of threads is automatically determined.
field seed: int = -1#
Seed. If -1, a random seed is used.
field use_mlock: bool = False#
Force system to keep model in RAM.
field vocab_only: bool = False#
Only load the vocabulary, no weights.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed a list of documents using the Llama model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using the Llama model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.MiniMaxEmbeddings[source]#
Wrapper around MiniMax’s embedding inference service.
To use, you should have the environment variables MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your API token, or pass them as named parameters to
the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
field embed_type_db: str = 'db'#
For embed_documents
field embed_type_query: str = 'query'#
For embed_query
field endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'#
Endpoint URL to use.
field minimax_api_key: Optional[str] = None#
API Key for MiniMax API.
field minimax_group_id: Optional[str] = None#
Group ID for MiniMax API.
field model: str = 'embo-01'#
Embeddings model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed documents using a MiniMax embedding endpoint.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using a MiniMax embedding endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.ModelScopeEmbeddings[source]#
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
field model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a modelscope embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a modelscope embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.MosaicMLInstructorEmbeddings[source]#
Wrapper around MosaicML’s embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction used to embed documents.
field endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict'#
Endpoint URL to use.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction used to embed the query.
field retry_sleep: float = 1.0#
How long to sleep if a rate limit is encountered
embed_documents(texts: List[str]) → List[List[float]][source]#
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
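A usage sketch for the embedding methods, assuming MOSAICML_API_TOKEN is set in the environment; the texts are illustrative:
from langchain.embeddings import MosaicMLInstructorEmbeddings

embeddings = MosaicMLInstructorEmbeddings()
doc_vectors = embeddings.embed_documents(["first document", "second document"])
query_vector = embeddings.embed_query("What do the documents say?")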
pydantic model langchain.embeddings.OpenAIEmbeddings[source]#
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to ‘azure’ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
field chunk_size: int = 1000#
Maximum number of texts to embed in each batch
field max_retries: int = 6#
Maximum number of retries to make when generating.
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout in seconds for the OpenAI request.
embed_documents(texts: List[str], chunk_size: Optional[int] = 0) → List[List[float]][source]#
Call out to OpenAI’s embedding endpoint for embedding search docs.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Call out to OpenAI’s embedding endpoint for embedding query text.
Parameters
text – The text to embed.
Returns
Embedding for the text.
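A short sketch of batching with an explicit chunk_size override; the key and texts are placeholders:
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(openai_api_key="my-api-key")
# Texts are sent to the endpoint in batches of at most `chunk_size`.
doc_vectors = embeddings.embed_documents(["doc one", "doc two"], chunk_size=500)
query_vector = embeddings.embed_query("a question about the docs")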
pydantic model langchain.embeddings.SagemakerEndpointEmbeddings[source]#
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
field content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]#
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
field credentials_profile_name: Optional[str] = None#
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
field endpoint_kwargs: Optional[Dict] = None#
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
field endpoint_name: str = ''#
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
field model_kwargs: Optional[Dict] = None#
Key word arguments to pass to the model.
field region_name: str = ''#
The aws region where the Sagemaker model is deployed, eg. us-west-2.
embed_documents(texts: List[str], chunk_size: int = 64) → List[List[float]][source]#
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts – The list of texts to embed.
chunk_size – The chunk size defines how many input texts will
be grouped together as request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
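A minimal sketch of a content handler plus constructor call. The endpoint name, region, and the JSON keys used in transform_input/transform_output are deployment-specific assumptions, not fixed by this API:
import json
from typing import Dict, List

from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler

class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: Dict) -> bytes:
        # Serialize the batch of texts in whatever format your endpoint expects.
        return json.dumps({"inputs": inputs, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        # "embeddings" is an assumed response key; adjust it to your model's output.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["embeddings"]

embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",  # placeholder endpoint name
    region_name="us-west-2",
    credentials_profile_name="default",
    content_handler=ContentHandler(),
)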
pydantic model langchain.embeddings.SelfHostedEmbeddings[source]#
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import runhouse as rh
import pickle
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Validators
raise_deprecation » all fields
set_verbose » verbose
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings on the remote hardware.
field inference_kwargs: Any = None#
Any kwargs to pass to the model’s inference function.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace transformer model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
pydantic model langchain.embeddings.SelfHostedHuggingFaceEmbeddings[source]#
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field hardware: Any = None#
Remote hardware to send the inference function to.
field inference_fn: Callable = <function _embed_documents>#
Inference function to extract the embeddings.
field load_fn_kwargs: Optional[dict] = None#
Key word arguments to pass to the model load function.
field model_id: str = 'sentence-transformers/all-mpnet-base-v2'#
Model name to use.
field model_load_fn: Callable = <function load_embedding_model>#
Function to load the model remotely on the server.
field model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']#
Requirements to install on hardware to inference the model.
pydantic model langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings[source]#
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large" | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/embeddings.html |
c39d6ab62898-21 | import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_name=model_name, hardware=gpu)
Validators
raise_deprecation » all fields
set_verbose » verbose
field embed_instruction: str = 'Represent the document for retrieval: '#
Instruction to use for embedding documents.
field model_id: str = 'hkunlp/instructor-large'#
Model name to use.
field model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']#
Requirements to install on hardware to inference the model.
field query_instruction: str = 'Represent the question for retrieving supporting documents: '#
Instruction to use for embedding query.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a HuggingFace instruct model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
langchain.embeddings.SentenceTransformerEmbeddings#
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
pydantic model langchain.embeddings.TensorflowHubEmbeddings[source]#
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
field model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'#
Model name to use.
embed_documents(texts: List[str]) → List[List[float]][source]#
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts – The list of texts to embed.
Returns
List of embeddings, one for each text.
embed_query(text: str) → List[float][source]#
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text – The text to embed.
Returns
Embeddings for the text.
Example Selector#
Logic for selecting examples to include in prompts.
pydantic model langchain.prompts.example_selector.LengthBasedExampleSelector[source]#
Select examples based on length.
Validators
calculate_example_text_lengths » example_text_lengths
field example_prompt: langchain.prompts.prompt.PromptTemplate [Required]#
Prompt template used to format the examples.
field examples: List[dict] [Required]#
A list of the examples that the prompt template expects.
field get_text_length: Callable[[str], int] = <function _get_length_based>#
Function to measure prompt length. Defaults to word count.
field max_length: int = 2048#
Max length for the prompt, beyond which examples are cut.
add_example(example: Dict[str, str]) → None[source]#
Add new example to list.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on the input lengths.
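A short sketch: with a small max_length, longer inputs leave room for fewer examples (the data is illustrative):
from langchain.prompts import PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=25,
)
selected = selector.select_examples({"input": "energetic"})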
pydantic model langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector[source]#
ExampleSelector that selects examples based on Max Marginal Relevance.
This was shown to improve performance in this paper:
https://arxiv.org/pdf/2211.13892.pdf
field fetch_k: int = 20#
Number of examples to fetch to rerank.
classmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, fetch_k: int = 20, **vectorstore_cls_kwargs: Any) → langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector[source]#
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g. FAISS.
k – Number of examples to select
input_keys – If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs – optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on semantic similarity.
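A sketch using FAISS as the vector store backend; it assumes the faiss package is installed and an OpenAI key is configured:
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
from langchain.vectorstores import FAISS

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "sunny", "output": "gloomy"},
]
selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),
    FAISS,
    k=2,
    fetch_k=10,
)
selected = selector.select_examples({"input": "joyful"})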
pydantic model langchain.prompts.example_selector.SemanticSimilarityExampleSelector[source]#
Example selector that selects examples based on SemanticSimilarity.
field example_keys: Optional[List[str]] = None#
Optional keys to filter examples to.
field input_keys: Optional[List[str]] = None#
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
field k: int = 4#
Number of examples to select.
field vectorstore: langchain.vectorstores.base.VectorStore [Required]#
VectorStore that contains information about examples.
add_example(example: Dict[str, str]) → str[source]#
Add new example to vectorstore.
classmethod from_examples(examples: List[dict], embeddings: langchain.embeddings.base.Embeddings, vectorstore_cls: Type[langchain.vectorstores.base.VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, **vectorstore_cls_kwargs: Any) → langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector[source]#
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples – List of examples to use in the prompt.
embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls – A vector store DB interface class, e.g. FAISS.
k – Number of examples to select
input_keys – If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs – optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
select_examples(input_variables: Dict[str, str]) → List[dict][source]#
Select which examples to use based on semantic similarity.
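Construction mirrors the MMR selector above; a sketch with FAISS under the same assumptions:
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import FAISS

selector = SemanticSimilarityExampleSelector.from_examples(
    [{"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}],
    OpenAIEmbeddings(),
    FAISS,
    k=1,
)
selected = selector.select_examples({"input": "joyful"})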
Tools#
Core toolkit implementations.
pydantic model langchain.tools.AIPluginTool[source]#
field api_spec: str [Required]#
field args_schema: Type[AIPluginToolSchema] = <class 'langchain.tools.plugin.AIPluginToolSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field plugin: AIPlugin [Required]#
classmethod from_plugin_url(url: str) → langchain.tools.plugin.AIPluginTool[source]#
pydantic model langchain.tools.APIOperation[source]#
A model for a single API operation.
field base_url: str [Required]#
The base URL of the operation.
field description: Optional[str] = None#
The description of the operation.
field method: langchain.tools.openapi.utils.openapi_utils.HTTPVerb [Required]#
The HTTP method of the operation.
field operation_id: str [Required]#
The unique identifier of the operation.
field path: str [Required]#
The path of the operation.
field properties: Sequence[langchain.tools.openapi.utils.api_models.APIProperty] [Required]#
field request_body: Optional[langchain.tools.openapi.utils.api_models.APIRequestBody] = None#
The request body of the operation.
classmethod from_openapi_spec(spec: langchain.tools.openapi.utils.openapi_utils.OpenAPISpec, path: str, method: str) → langchain.tools.openapi.utils.api_models.APIOperation[source]#
Create an APIOperation from an OpenAPI spec.
classmethod from_openapi_url(spec_url: str, path: str, method: str) → langchain.tools.openapi.utils.api_models.APIOperation[source]#
Create an APIOperation from an OpenAPI URL.
to_typescript() → str[source]#
Get the TypeScript string representation of the operation.
static ts_type_from_python(type_: Union[str, Type, tuple, None, enum.Enum]) → str[source]#
property body_params: List[str]#
property path_params: List[str]#
property query_params: List[str]#
pydantic model langchain.tools.AzureCogsFormRecognizerTool[source]#
Tool that queries the Azure Cognitive Services Form Recognizer API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/applied-ai-services/form-recognizer/quickstarts/get-started-sdks-rest-api?view=form-recog-3.0.0&pivots=programming-language-python
pydantic model langchain.tools.AzureCogsImageAnalysisTool[source]#
Tool that queries the Azure Cognitive Services Image Analysis API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40
pydantic model langchain.tools.AzureCogsSpeech2TextTool[source]#
Tool that queries the Azure Cognitive Services Speech2Text API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-speech-to-text?pivots=programming-language-python
pydantic model langchain.tools.AzureCogsText2SpeechTool[source]#
Tool that queries the Azure Cognitive Services Text2Speech API.
In order to set this up, follow instructions at:
https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?pivots=programming-language-python
pydantic model langchain.tools.BaseTool[source]#
Interface LangChain tools must implement.
field args_schema: Optional[Type[pydantic.main.BaseModel]] = None#
Pydantic model class to validate and parse the tool’s input arguments.
field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#
Deprecated. Please use callbacks instead.
field callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None#
Callbacks to be called during tool execution.
field description: str [Required]#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False#
Handle the content of the ToolException thrown.
field name: str [Required]#
The unique name of the tool that clearly communicates its purpose.
field return_direct: bool = False#
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
field verbose: bool = False#
Whether to log the tool’s progress.
async arun(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Any[source]#
Run the tool asynchronously.
run(tool_input: Union[str, Dict], verbose: Optional[bool] = None, start_color: Optional[str] = 'green', color: Optional[str] = 'green', callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → Any[source]#
Run the tool.
property args: dict#
property is_single_input: bool#
Whether the tool only accepts a single input.
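A minimal sketch of a custom tool built on this interface; EchoTool and its behavior are illustrative, not part of the library:
from langchain.tools import BaseTool

class EchoTool(BaseTool):
    name: str = "echo"
    description: str = "Returns its input unchanged. Useful for testing agent loops."

    def _run(self, query: str) -> str:
        return query

    async def _arun(self, query: str) -> str:
        return query

tool = EchoTool()
result = tool.run("hello")  # 'hello'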
pydantic model langchain.tools.BingSearchResults[source]#
Tool that queries the Bing Search API and gets back JSON results.
field api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]#
field num_results: int = 4#
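A usage sketch; BingSearchAPIWrapper reads BING_SUBSCRIPTION_KEY and BING_SEARCH_URL from the environment:
from langchain.tools import BingSearchResults
from langchain.utilities import BingSearchAPIWrapper

tool = BingSearchResults(api_wrapper=BingSearchAPIWrapper(), num_results=4)
results_json = tool.run("langchain")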
pydantic model langchain.tools.BingSearchRun[source]#
Tool that adds the capability to query the Bing search API.
field api_wrapper: langchain.utilities.bing_search.BingSearchAPIWrapper [Required]#
pydantic model langchain.tools.BraveSearch[source]#
field search_wrapper: BraveSearchWrapper [Required]#
classmethod from_api_key(api_key: str, search_kwargs: Optional[dict] = None, **kwargs: Any) → langchain.tools.brave_search.tool.BraveSearch[source]#
pydantic model langchain.tools.ClickTool[source]#
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.click.ClickToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Click on an element with the given CSS selector'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'click_element'#
The unique name of the tool that clearly communicates its purpose.
field playwright_strict: bool = False#
Whether to employ Playwright’s strict mode when clicking on elements.
field playwright_timeout: float = 1000#
Timeout (in ms) for Playwright to wait for element to be ready.
field visible_only: bool = True#
Whether to consider only visible elements.
pydantic model langchain.tools.CopyFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.copy.FileCopyInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Create a copy of a file in a specified location'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'copy_file'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.CurrentWebPageTool[source]#
field args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Returns the URL of the current page'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'current_webpage'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.DeleteFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.delete.FileDeleteInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Delete a file'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'file_delete'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.DuckDuckGoSearchResults[source]#
Tool that queries the DuckDuckGo search API and gets back JSON results.
field api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]#
field num_results: int = 4#
pydantic model langchain.tools.DuckDuckGoSearchRun[source]#
Tool that adds the capability to query the DuckDuckGo search API.
field api_wrapper: langchain.utilities.duckduckgo_search.DuckDuckGoSearchAPIWrapper [Optional]#
pydantic model langchain.tools.ExtractHyperlinksTool[source]#
Extract all hyperlinks on the page.
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.extract_hyperlinks.ExtractHyperlinksToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Extract all hyperlinks on the current webpage'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'extract_hyperlinks'#
The unique name of the tool that clearly communicates its purpose.
static scrape_page(page: Any, html_content: str, absolute_urls: bool) → str[source]#
pydantic model langchain.tools.ExtractTextTool[source]#
field args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Extract all the text on the current webpage'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'extract_text'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.FileSearchTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.file_search.FileSearchInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Recursively search for files in a subdirectory that match the regex pattern'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'file_search'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GetElementsTool[source]#
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.get_elements.GetElementsToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Retrieve elements in the current web page matching the given CSS selector'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'get_elements'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GmailCreateDraft[source]#
field args_schema: Type[langchain.tools.gmail.create_draft.CreateDraftSchema] = <class 'langchain.tools.gmail.create_draft.CreateDraftSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Use this tool to create a draft email with the provided message fields.'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'create_gmail_draft'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GmailGetMessage[source]#
field args_schema: Type[langchain.tools.gmail.get_message.SearchArgsSchema] = <class 'langchain.tools.gmail.get_message.SearchArgsSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Use this tool to fetch an email by message ID. Returns the thread ID, snippet, body, subject, and sender.'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'get_gmail_message'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GmailGetThread[source]#
field args_schema: Type[langchain.tools.gmail.get_thread.GetThreadSchema] = <class 'langchain.tools.gmail.get_thread.GetThreadSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Use this tool to search for email messages. The input must be a valid Gmail query. The output is a JSON list of messages.'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'get_gmail_thread'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GmailSearch[source]#
field args_schema: Type[langchain.tools.gmail.search.SearchArgsSchema] = <class 'langchain.tools.gmail.search.SearchArgsSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Use this tool to search for email messages or threads. The input must be a valid Gmail query. The output is a JSON list of the requested resource.'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'search_gmail'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GmailSendMessage[source]#
field description: str = 'Use this tool to send email messages. The input is the message and recipients.'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'send_gmail_message'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.GooglePlacesTool[source]#
Tool that adds the capability to query the Google places API.
field api_wrapper: langchain.utilities.google_places_api.GooglePlacesAPIWrapper [Optional]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.google_places.tool.GooglePlacesSchema'>#
Pydantic model class to validate and parse the tool’s input arguments.
pydantic model langchain.tools.GoogleSearchResults[source]#
Tool that queries the Google Search API and gets back JSON results.
field api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]#
field num_results: int = 4#
pydantic model langchain.tools.GoogleSearchRun[source]#
Tool that adds the capability to query the Google search API.
field api_wrapper: langchain.utilities.google_search.GoogleSearchAPIWrapper [Required]#
pydantic model langchain.tools.GoogleSerperResults[source]#
Tool that queries the Serper.dev Google Search API and gets back JSON results.
field api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Optional]#
pydantic model langchain.tools.GoogleSerperRun[source]#
Tool that adds the capability to query the Serper.dev Google search API.
field api_wrapper: langchain.utilities.google_serper.GoogleSerperAPIWrapper [Required]#
pydantic model langchain.tools.HumanInputRun[source]#
Tool that adds the capability to ask user for input.
field input_func: Callable [Optional]#
field prompt_func: Callable[[str], None] [Optional]#
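A sketch with custom prompt and input callbacks; the callback bodies are illustrative:
from langchain.tools import HumanInputRun

def show_prompt(text: str) -> None:
    print("\n" + text + "\n")

tool = HumanInputRun(prompt_func=show_prompt, input_func=input)
# answer = tool.run("Which city should the agent focus on?")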
pydantic model langchain.tools.IFTTTWebhook[source]#
IFTTT Webhook.
Parameters
name – name of the tool
description – description of the tool
url – url to hit with the json event.
field url: str [Required]#
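A construction sketch; the event name and key in the URL are placeholders for your own IFTTT webhook:
from langchain.tools import IFTTTWebhook

tool = IFTTTWebhook(
    name="trigger_event",
    description="Triggers my IFTTT applet with a JSON payload.",
    url="https://maker.ifttt.com/trigger/my_event/json/with/key/MY_KEY",
)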
pydantic model langchain.tools.InfoPowerBITool[source]#
Tool for getting metadata about a PowerBI Dataset.
field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#
pydantic model langchain.tools.ListDirectoryTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.list_dir.DirectoryListingInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'List files and directories in a specified folder'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'list_directory'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.ListPowerBITool[source]#
Tool for getting tables names.
field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#
pydantic model langchain.tools.MetaphorSearchResults[source]#
Tool that queries the Metaphor Search API and gets back JSON results.
field api_wrapper: langchain.utilities.metaphor_search.MetaphorSearchAPIWrapper [Required]#
pydantic model langchain.tools.MoveFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.move.FileMoveInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Move or rename a file from one location to another'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'move_file'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.NavigateBackTool[source]#
Navigate back to the previous page in the browser history.
field args_schema: Type[BaseModel] = <class 'pydantic.main.BaseModel'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Navigate back to the previous page in the browser history'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'previous_webpage'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.NavigateTool[source]#
field args_schema: Type[BaseModel] = <class 'langchain.tools.playwright.navigate.NavigateToolInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Navigate a browser to the specified URL'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'navigate_browser'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.OpenAPISpec[source]#
OpenAPI Model that removes misformatted parts of the spec.
classmethod from_file(path: Union[str, pathlib.Path]) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a file path.
classmethod from_spec_dict(spec_dict: dict) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a dict.
classmethod from_text(text: str) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a text.
classmethod from_url(url: str) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
Get an OpenAPI spec from a URL.
static get_cleaned_operation_id(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation, path: str, method: str) → str[source]#
Get a cleaned operation id from an operation id.
get_methods_for_path(path: str) → List[str][source]#
Return a list of valid methods for the specified path.
get_operation(path: str, method: str) → openapi_schema_pydantic.v3.v3_1_0.operation.Operation[source]#
Get the operation object for a given path and HTTP method.
get_parameters_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) → List[openapi_schema_pydantic.v3.v3_1_0.parameter.Parameter][source]#
Get the components for a given operation.
get_referenced_schema(ref: openapi_schema_pydantic.v3.v3_1_0.reference.Reference) → openapi_schema_pydantic.v3.v3_1_0.schema.Schema[source]#
Get a schema (or nested reference) or err.
get_request_body_for_operation(operation: openapi_schema_pydantic.v3.v3_1_0.operation.Operation) → Optional[openapi_schema_pydantic.v3.v3_1_0.request_body.RequestBody][source]#
Get the request body for a given operation.
classmethod parse_obj(obj: dict) → langchain.tools.openapi.utils.openapi_utils.OpenAPISpec[source]#
property base_url: str#
Get the base url.
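A sketch tying OpenAPISpec and APIOperation together; the URL, path, and method are placeholders for a real spec:
from langchain.tools import APIOperation, OpenAPISpec

spec = OpenAPISpec.from_url("https://example.com/openapi.json")
operation = APIOperation.from_openapi_spec(spec, "/pets", "get")
print(operation.to_typescript())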
pydantic model langchain.tools.OpenWeatherMapQueryRun[source]#
Tool that adds the capability to query using the OpenWeatherMap API.
field api_wrapper: langchain.utilities.openweathermap.OpenWeatherMapAPIWrapper [Optional]#
pydantic model langchain.tools.PubmedQueryRun[source]#
Tool that adds the capability to search using the PubMed API.
field api_wrapper: langchain.utilities.pupmed.PubMedAPIWrapper [Optional]#
pydantic model langchain.tools.QueryPowerBITool[source]#
Tool for querying a Power BI Dataset.
Validators
raise_deprecation » all fields
validate_llm_chain_input_variables » llm_chain
field examples: Optional[str] = '\nQuestion: How many rows are in the table <table>?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(<table>))\n----\nQuestion: How many rows are in the table <table> where <column> is not empty?\nDAX: EVALUATE ROW("Number of rows", COUNTROWS(FILTER(<table>, <table>[<column>] <> "")))\n----\nQuestion: What was the average of <column> in <table>?\nDAX: EVALUATE ROW("Average", AVERAGE(<table>[<column>]))\n----\n'#
field llm_chain: langchain.chains.llm.LLMChain [Required]#
field max_iterations: int = 5#
field powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]#
field session_cache: Dict[str, Any] [Optional]#
field template: Optional[str] = '\nAnswer the question below with a DAX query that can be sent to Power BI. DAX queries have a simple syntax comprised of just one required keyword, EVALUATE, and several optional keywords: ORDER BY, START AT, DEFINE, MEASURE, VAR, TABLE, and COLUMN. Each keyword defines a statement used for the duration of the query. Any time < or > are used in the text below it means that those values need to be replaced by table, columns or other things. If the question is not something you can answer with a DAX query, reply with "I cannot answer this" and the question will be escalated to a human.\n\nSome DAX functions return a table instead of a scalar, and must be wrapped in a function that evaluates the table and returns a scalar; unless the table is a single column, single row table, then it is treated as a scalar value. Most DAX functions require one or more arguments, which can include tables, columns, expressions, and values. However, some functions, such as PI, do not require any arguments, but always require parentheses to indicate the null argument. For example, you must always type PI(), not PI. You can also nest functions within other functions. \n\nSome commonly used functions are:\nEVALUATE <table> - At the most basic level, a DAX query is an EVALUATE statement containing a table expression. At least one EVALUATE statement is required, however, a query can contain any number of EVALUATE statements.\nEVALUATE <table> ORDER BY <expression> ASC or DESC - The optional ORDER BY keyword defines one or more expressions used to sort query results. Any expression that can be evaluated for each row of the result is valid.\nEVALUATE <table> ORDER BY <expression> ASC or DESC START AT <value> or <parameter> - The optional START AT keyword is used inside an ORDER BY clause. It defines the value at which the query results begin.\nDEFINE MEASURE | VAR; EVALUATE <table> - The optional DEFINE keyword introduces one or more calculated entity definitions that exist only for the duration of the query. Definitions precede the EVALUATE statement and are valid for all EVALUATE statements in the query. Definitions can be variables, measures, tables1, and columns1. Definitions can reference other definitions that appear before or after the current definition. At least one definition is required if the DEFINE keyword is included in a query.\nMEASURE <table name>[<measure name>] = <scalar expression> - Introduces a measure definition in a DEFINE statement of a DAX query.\nVAR <name> = <expression> - Stores the result of an expression as a named variable, which can then be passed as an argument to other measure expressions. Once resultant values have been calculated for a variable expression, those values do not change, even if the variable is referenced in another expression.\n\nFILTER(<table>,<filter>) - Returns a table that represents a subset of another table or expression, where <filter> is a Boolean expression that is to be evaluated for each row of the table. For example, [Amount] > 0 or [Region] = "France"\nROW(<name>, <expression>) - Returns a table with a single row containing values that result from the expressions given to each column.\nDISTINCT(<column>) - Returns a one-column table that contains the distinct values from the specified column. In other words, duplicate values are removed and only unique values are returned. This function cannot be used to Return values into a cell or column on a worksheet; rather, you nest the DISTINCT function within a formula, to get a list of distinct values that can be passed to another function and then counted, summed, or used for other operations.\nDISTINCT(<table>) - Returns a table by removing duplicate rows from another table or expression.\n\nAggregation functions, names with a A in it, handle booleans and empty strings in appropriate ways, while the same function without A only uses the numeric values in a column. Functions names with an X in it can include a expression as an argument, this will be evaluated for each row in the table and the result will be used in the regular function calculation, these are the functions:\nCOUNT(<column>), COUNTA(<column>), COUNTX(<table>,<expression>), COUNTAX(<table>,<expression>), COUNTROWS([<table>]), COUNTBLANK(<column>), DISTINCTCOUNT(<column>), DISTINCTCOUNTNOBLANK (<column>) - these are all variantions of count functions.\nAVERAGE(<column>), AVERAGEA(<column>), AVERAGEX(<table>,<expression>) - these are all variantions of average functions.\nMAX(<column>), MAXA(<column>), MAXX(<table>,<expression>) - these are all variantions of max functions.\nMIN(<column>), MINA(<column>), MINX(<table>,<expression>) - these are all variantions of min functions.\nPRODUCT(<column>), PRODUCTX(<table>,<expression>) - these are all variantions of product functions.\nSUM(<column>), SUMX(<table>,<expression>) - these are all variantions of sum functions.\n\nDate and time functions:\nDATE(year, month, day) - Returns a date value that represents the specified year, month, and day.\nDATEDIFF(date1, date2, <interval>) - Returns the difference between two date values, in the specified interval, that can be SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR.\nDATEVALUE(<date_text>) - Returns a date value that represents the specified date.\nYEAR(<date>), QUARTER(<date>), MONTH(<date>), DAY(<date>), HOUR(<date>), MINUTE(<date>), SECOND(<date>) - Returns the part of the date for the specified date.\n\nFinally, make sure to escape double quotes with a single backslash, and make sure that only table names have single quotes around them, while names of measures or the values of columns that you want to compare against are in escaped double quotes. Newlines are not necessary and can be skipped. The queries are serialized as json and so will have to fit be compliant with json syntax. Sometimes you will get a question, a DAX query and a error, in that case you need to rewrite the DAX query to get the correct answer.\n\nThe following tables exist: {tables}\n\nand the schema\'s for some are given here:\n{schemas}\n\nExamples:\n{examples}\n\nQuestion: {tool_input}\nDAX: \n'#
pydantic model langchain.tools.ReadFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.read.ReadFileInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Read file from disk'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'read_file'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.SceneXplainTool[source]#
Tool that adds the capability to explain images.
field api_wrapper: langchain.utilities.scenexplain.SceneXplainAPIWrapper [Optional]#
pydantic model langchain.tools.ShellTool[source]#
Tool to run shell commands.
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.shell.tool.ShellInput'>#
Schema for input arguments.
field description: str = 'Run shell commands on this Linux machine.'#
Description of tool.
field name: str = 'terminal'#
Name of tool.
field process: langchain.utilities.bash.BashProcess [Optional]#
Bash process to run commands.
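A usage sketch; the input schema takes a list of shell commands under the commands key:
from langchain.tools import ShellTool

shell_tool = ShellTool()
output = shell_tool.run({"commands": ["echo 'Hello World!'", "date"]})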
pydantic model langchain.tools.SteamshipImageGenerationTool[source]#
field model_name: ModelName [Required]#
field return_urls: Optional[bool] = False#
field size: Optional[str] = '512x512'#
field steamship: Steamship [Required]#
pydantic model langchain.tools.StructuredTool[source]#
Tool that can operate on any number of inputs.
field args_schema: Type[pydantic.main.BaseModel] [Required]#
The input arguments’ schema.
The tool schema.
field coroutine: Optional[Callable[[...], Awaitable[Any]]] = None#
The asynchronous version of the function.
field description: str = ''#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field func: Callable[[...], Any] [Required]#
The function to run when the tool is called.
classmethod from_function(func: Callable, name: Optional[str] = None, description: Optional[str] = None, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True, **kwargs: Any) → langchain.tools.base.StructuredTool[source]#
Create tool from a given function.
A classmethod that helps to create a tool from a function.
Parameters
func – The function from which to create a tool
name – The name of the tool. Defaults to the function name
description – The description of the tool. Defaults to the function docstring
return_direct – Whether to return the result directly or as a callback
args_schema – The schema of the tool’s input arguments
infer_schema – Whether to infer the schema from the function’s signature
**kwargs – Additional arguments to pass to the tool
Returns
The tool
Examples
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

tool = StructuredTool.from_function(add)
tool.run(1, 2)  # 3
property args: dict#
The tool’s input arguments.
pydantic model langchain.tools.Tool[source]#
Tool that takes in function or coroutine directly.
field args_schema: Optional[Type[pydantic.main.BaseModel]] = None#
Pydantic model class to validate and parse the tool’s input arguments.
field callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None#
Deprecated. Please use callbacks instead.
field callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None#
Callbacks to be called during tool execution.
field coroutine: Optional[Callable[[...], Awaitable[str]]] = None#
The asynchronous version of the function.
field description: str = ''#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field func: Callable[[...], str] [Required]#
The function to run when the tool is called.
field handle_tool_error: Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]] = False#
Handle the content of the ToolException thrown.
field name: str [Required]#
The unique name of the tool that clearly communicates its purpose.
field return_direct: bool = False#
Whether to return the tool’s output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
field verbose: bool = False#
Whether to log the tool’s progress.
classmethod from_function(func: Callable, name: str, description: str, return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, **kwargs: Any) → langchain.tools.base.Tool[source]#
Initialize tool from a function.
property args: dict#
The tool’s input arguments.
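A sketch of wrapping a plain function; search_api and its return value are illustrative:
from langchain.tools import Tool

def search_api(query: str) -> str:
    return f"results for {query}"

tool = Tool.from_function(
    func=search_api,
    name="search",
    description="Search for information. Input should be a search query.",
)
result = tool.run("langchain")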
pydantic model langchain.tools.VectorStoreQATool[source]#
Tool for the VectorDBQA chain. To be initialized with name and chain.
static get_description(name: str, description: str) → str[source]#
pydantic model langchain.tools.VectorStoreQAWithSourcesTool[source]#
Tool for the VectorDBQAWithSources chain.
static get_description(name: str, description: str) → str[source]#
pydantic model langchain.tools.WikipediaQueryRun[source]#
Tool that adds the capability to search using the Wikipedia API.
field api_wrapper: langchain.utilities.wikipedia.WikipediaAPIWrapper [Required]#
pydantic model langchain.tools.WolframAlphaQueryRun[source]#
Tool that adds the capability to query using the Wolfram Alpha SDK.
field api_wrapper: langchain.utilities.wolfram_alpha.WolframAlphaAPIWrapper [Required]#
pydantic model langchain.tools.WriteFileTool[source]#
field args_schema: Type[pydantic.main.BaseModel] = <class 'langchain.tools.file_management.write.WriteFileInput'>#
Pydantic model class to validate and parse the tool’s input arguments.
field description: str = 'Write file to disk'#
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
field name: str = 'write_file'#
The unique name of the tool that clearly communicates its purpose.
pydantic model langchain.tools.YouTubeSearchTool[source]#
pydantic model langchain.tools.ZapierNLAListActions[source]#
Returns a list of all exposed (enabled) actions associated with the current user (associated with the set api_key). Change your exposed
actions here: https://nla.zapier.com/demo/start/
The returned list can be empty if no actions are exposed. Otherwise it will contain
a list of action objects:
[{“id”: str,
“description”: str,
“params”: Dict[str, str]
}]
params will always contain an instructions key, the only required
param. All others are optional and, if provided, will override any AI guesses
(see “understanding the AI guessing flow” here:
https://nla.zapier.com/api/v1/docs)
Parameters
None –
field api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]#
pydantic model langchain.tools.ZapierNLARunAction[source]#
Executes an action that is identified by action_id, which must be exposed (enabled) by the current user (associated with the set api_key). Change
your exposed actions here: https://nla.zapier.com/demo/start/
The return JSON is guaranteed to be less than ~500 words (350
tokens), making it safe to inject into the prompt of another LLM
call.
Parameters
action_id – a specific action ID (from list actions) of the action to execute
(the set api_key must be associated with the action owner)
instructions – a natural language instruction string for using the action
(eg. “get the latest email from Mike Knoop” for “Gmail: find email” action)
params – a dict, optional. Any params provided will override AI guesses
from instructions (see “understanding the AI guessing flow” here:
https://nla.zapier.com/api/v1/docs)
field action_id: str [Required]#
field api_wrapper: langchain.utilities.zapier.ZapierNLAWrapper [Optional]# | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/tools.html |
field base_prompt: str = 'A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are [\'Message_Text\', \'Channel\'], your instruction should be something like \'send a slack message to the #general channel with the text hello world\'. Another example: if the params are [\'Calendar\', \'Search_Term\'], your instruction should be something like \'find the meeting in my personal calendar at 3pm\'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say \'not enough information provided in the instruction, missing <param>\'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: {zapier_description}, and has params: {params}'#
field params: Optional[dict] = None#
field params_schema: Dict[str, str] [Optional]#
field zapier_description: str [Required]#
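Example (a sketch; assumes the ZAPIER_NLA_API_KEY environment variable is
set and at least one action is exposed at nla.zapier.com):
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper

zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
# Each exposed Zapier action becomes one ZapierNLARunAction tool.
for action_tool in toolkit.get_tools():
    print(action_tool.name, "->", action_tool.description)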
langchain.tools.format_tool_to_openai_function(tool: langchain.tools.base.BaseTool) → langchain.tools.convert_to_openai.FunctionDescription[source]#
Format tool into the OpenAI function API.
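Example (a sketch reusing the Wikipedia tool shown above):
from langchain.tools import WikipediaQueryRun, format_tool_to_openai_function
from langchain.utilities.wikipedia import WikipediaAPIWrapper

wiki_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
function = format_tool_to_openai_function(wiki_tool)
# FunctionDescription carries the tool's name, description, and a
# JSON-schema "parameters" block derived from its arguments.
print(function["name"], function["parameters"])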
langchain.tools.tool(*args: Union[str, Callable], return_direct: bool = False, args_schema: Optional[Type[pydantic.main.BaseModel]] = None, infer_schema: bool = True) → Callable[source]#
Make tools out of functions; can be used with or without arguments.
Parameters
*args – The arguments to the tool. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/tools.html |
return_direct – Whether to return directly from the tool rather
than continuing the agent loop.
args_schema – optional argument schema for user to specify
infer_schema – Whether to infer the schema of the arguments from
the function’s signature. This also makes the resultant tool
accept a dictionary input to its run() function.
Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples
@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "results"

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "results"
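The decorator returns a usable Tool, so (as a sketch) the result can be
inspected and invoked directly:
from langchain.tools import tool

@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "results"

print(search_api.name)  # "search_api"
# With infer_schema=True (the default), run() also accepts a dict input.
print(search_api.run("langchain"))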
Chat Models
Chat Models#
pydantic model langchain.chat_models.AzureChatOpenAI[source]#
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the “Model deployment name” in the Azure portal.
In addition, you should have the openai python package installed, and the
following environment variables set or passed in constructor in lower case:
- OPENAI_API_TYPE (default: azure)
- OPENAI_API_KEY
- OPENAI_API_BASE
- OPENAI_API_VERSION
- OPENAI_PROXY
For example, if you have gpt-35-turbo deployed, with the deployment name
35-turbo-dev, the constructor should look like:
AzureChatOpenAI(
deployment_name="35-turbo-dev",
openai_api_version="2023-03-15-preview",
)
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
field deployment_name: str = ''#
field openai_api_base: str = ''#
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
field openai_api_key: str = ''#
field openai_api_type: str = 'azure'#
field openai_api_version: str = ''#
field openai_organization: str = ''#
field openai_proxy: str = ''#
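Example (a sketch; the deployment name and API version are placeholders,
and the environment variables listed above are assumed to be set):
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

chat = AzureChatOpenAI(
    deployment_name="35-turbo-dev",
    openai_api_version="2023-03-15-preview",
)
# Calling the chat model with a list of messages returns an AIMessage.
response = chat([HumanMessage(content="Translate 'hello' to French.")])
print(response.content)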
pydantic model langchain.chat_models.ChatAnthropic[source]#
Wrapper around Anthropic’s large language model.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/chat_models.html |
it as a named parameter to the constructor.
Example
from langchain.chat_models import ChatAnthropic
model = ChatAnthropic(model="<model_name>", anthropic_api_key="my-api-key")
get_num_tokens(text: str) → int[source]#
Calculate number of tokens.
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.chat_models.ChatGooglePalm[source]#
Wrapper around Google’s PaLM Chat API.
To use you must have the google.generativeai Python package installed and
either:
The GOOGLE_API_KEY environment variable set with your API key, or
Pass your API key using the google_api_key kwarg to the ChatGooglePalm
constructor.
Example
from langchain.chat_models import ChatGooglePalm
chat = ChatGooglePalm()
field google_api_key: Optional[str] = None#
field model_name: str = 'models/chat-bison-001'#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
field temperature: Optional[float] = None#
Run inference with this temperature. Must be in the closed
interval [0.0, 1.0].
field top_k: Optional[int] = None#
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
field top_p: Optional[float] = None#
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0]. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/chat_models.html |
pydantic model langchain.chat_models.ChatOpenAI[source]#
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
field max_retries: int = 6#
Maximum number of retries to make when generating.
field max_tokens: Optional[int] = None#
Maximum number of tokens to generate.
field model_kwargs: Dict[str, Any] [Optional]#
Holds any model parameters valid for create call not explicitly specified.
field model_name: str = 'gpt-3.5-turbo' (alias 'model')#
Model name to use.
field n: int = 1#
Number of chat completions to generate for each prompt.
field openai_api_base: Optional[str] = None#
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
field openai_api_key: Optional[str] = None#
field openai_organization: Optional[str] = None#
field openai_proxy: Optional[str] = None#
field request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to OpenAI completion API. Default is 600 seconds.
field streaming: bool = False#
Whether to stream the results or not.
field temperature: float = 0.7#
What sampling temperature to use.
completion_with_retry(**kwargs: Any) → Any[source]#
Use tenacity to retry the completion call. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/chat_models.html |
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int[source]#
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: openai/openai-cookbook/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) → List[int][source]#
Get the tokens present in the text with tiktoken package.
property lc_serializable: bool#
Return whether or not the class is serializable.
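Example (a sketch; assumes OPENAI_API_KEY is set):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
messages = [
    SystemMessage(content="You are a concise assistant."),
    HumanMessage(content="Name three uses of Python."),
]
# Calling the chat model with a list of messages returns an AIMessage.
print(chat(messages).content)
# Token accounting via the helper documented above:
print(chat.get_num_tokens_from_messages(messages))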
pydantic model langchain.chat_models.ChatVertexAI[source]#
Wrapper around Vertex AI large language models.
field model_name: str = 'chat-bison'#
Model name to use.
pydantic model langchain.chat_models.PromptLayerChatOpenAI[source]#
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI adds two optional
Parameters
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
field pl_tags: Optional[List[str]] = None#
field return_pl_id: Optional[bool] = False#
LLMs
LLMs#
Wrappers on top of large language model APIs.
pydantic model langchain.llms.AI21[source]#
Wrapper around AI21 large language models.
To use, you should have the environment variable AI21_API_KEY
set with your API key.
Example
from langchain.llms import AI21
ai21 = AI21(model="j2-jumbo-instruct")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field base_url: Optional[str] = None#
Base url to use, if None decides based on model name.
field countPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to count.
field frequencyPenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens according to frequency.
field logitBias: Optional[Dict[str, float]] = None#
Adjust the probability of specific tokens being generated.
field maxTokens: int = 256#
The maximum number of tokens to generate in the completion.
field minTokens: int = 0#
The minimum number of tokens to generate in the completion.
field model: str = 'j2-jumbo-instruct'#
Model name to use.
field numResults: int = 1#
How many completions to generate for each prompt. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
field presencePenalty: langchain.llms.ai21.AI21PenaltyData = AI21PenaltyData(scale=0, applyToWhitespaces=True, applyToPunctuations=True, applyToNumbers=True, applyToStopwords=True, applyToEmojis=True)#
Penalizes repeated tokens.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: float = 0.7#
What sampling temperature to use.
field topP: float = 1.0#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
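Example (a sketch; assumes AI21_API_KEY is set):
from langchain.llms import AI21

llm = AI21(model="j2-jumbo-instruct")
result = llm.generate(["Tell me a joke.", "Tell me a fact."])
# The LLMResult holds one list of Generation objects per input prompt.
for prompt_generations in result.generations:
    print(prompt_generations[0].text)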
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.AlephAlpha[source]#
Wrapper around Aleph Alpha large language models.
To use, you should have the aleph_alpha_client python package installed, and the
environment variable ALEPH_ALPHA_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Parameters are explained more in depth here:
Aleph-Alpha/aleph-alpha-client
Example
from langchain.llms import AlephAlpha | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
aleph_alpha = AlephAlpha(aleph_alpha_api_key="my-api-key")
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field aleph_alpha_api_key: Optional[str] = None#
API key for Aleph Alpha API.
field best_of: Optional[int] = None#
Out of best_of completions, returns the one with the
highest log probability per token.
field completion_bias_exclusion_first_token_only: bool = False#
Only consider the first token for the completion_bias_exclusion.
field contextual_control_threshold: Optional[float] = None#
If set to None, attention control parameters only apply to those tokens that have
explicitly been set in the request.
If set to a non-None value, control parameters are also applied to similar tokens.
field control_log_additive: Optional[bool] = True#
True: apply control by adding the log(control_factor) to attention scores.
False: (attention_scores - - attention_scores.min(-1)) * control_factor
field echo: bool = False#
Echo the prompt in the completion.
field frequency_penalty: float = 0.0#
Penalizes repeated tokens according to frequency.
field log_probs: Optional[int] = None#
Number of top log probabilities to be returned for each generated token.
field logit_bias: Optional[Dict[int, float]] = None#
The logit bias allows to influence the likelihood of generating tokens.
field maximum_tokens: int = 64#
The maximum number of tokens to be generated.
field minimum_tokens: Optional[int] = 0#
Generate at least this number of tokens.
field model: Optional[str] = 'luminous-base'#
Model name to use.
field n: int = 1# | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
How many completions to generate for each prompt.
field penalty_bias: Optional[str] = None#
Penalty bias for the completion.
field penalty_exceptions: Optional[List[str]] = None#
List of strings that may be generated without penalty,
regardless of other penalty settings
field penalty_exceptions_include_stop_sequences: Optional[bool] = None#
Should stop_sequences be included in penalty_exceptions.
field presence_penalty: float = 0.0#
Penalizes repeated tokens.
field raw_completion: bool = False#
Force the raw completion of the model to be returned.
field repetition_penalties_include_completion: bool = True#
Flag deciding whether presence penalty or frequency penalty
are updated from the completion.
field repetition_penalties_include_prompt: Optional[bool] = False#
Flag deciding whether presence penalty or frequency penalty are
updated from the prompt.
field stop_sequences: Optional[List[str]] = None#
Stop sequences to use.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: float = 0.0#
A non-negative float that tunes the degree of randomness in generation.
field tokens: Optional[bool] = False#
Whether to return the tokens of the completion.
field top_k: int = 0#
Number of most likely tokens to consider at each step.
field top_p: float = 0.0#
Total probability mass of tokens to consider at each step.
field use_multiplicative_presence_penalty: Optional[bool] = False#
Flag deciding whether presence penalty is applied
multiplicatively (True) or additively (False).
field verbose: bool [Optional]#
Whether to print out response text. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int# | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Anthropic[source]#
Wrapper around Anthropic’s large language models.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
import anthropic
from langchain.llms import Anthropic
model = Anthropic(model="<model_name>", anthropic_api_key="my-api-key")
# Simplest invocation, automatically wrapped with HUMAN_PROMPT
# and AI_PROMPT.
response = model("What are the biggest risks facing humanity?")
# Or if you want to use the chat mode, build a few-shot-prompt, or
# put words in the Assistant's mouth, use HUMAN_PROMPT and AI_PROMPT:
raw_prompt = "What are the biggest risks facing humanity?"
prompt = f"{anthropic.HUMAN_PROMPT} {prompt}{anthropic.AI_PROMPT}"
response = model(prompt)
Validators
raise_deprecation » all fields
raise_warning » all fields
set_verbose » verbose
validate_environment » all fields
field default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None#
Timeout for requests to Anthropic Completion API. Default is 600 seconds.
field max_tokens_to_sample: int = 256# | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
Denotes the number of tokens to predict per generation.
field model: str = 'claude-v1'#
Model name to use.
field streaming: bool = False#
Whether to stream the results.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field temperature: Optional[float] = None#
A non-negative float that tunes the degree of randomness in generation.
field top_k: Optional[int] = None#
Number of most likely tokens to consider at each step.
field top_p: Optional[float] = None#
Total probability mass of tokens to consider at each step.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model#
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int[source]#
Calculate number of tokens.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt: str, stop: Optional[List[str]] = None) → Generator[source]#
Call Anthropic completion_stream and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompt to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from Anthropic.
Example
prompt = "Write a poem about a stream."
prompt = f"\n\nHuman: {prompt}\n\nAssistant:"
generator = anthropic.stream(prompt)
for token in generator:
    yield token
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]# | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Anyscale[source]#
Wrapper around Anyscale Services.
To use, you should have the environment variables ANYSCALE_SERVICE_URL,
ANYSCALE_SERVICE_ROUTE and ANYSCALE_SERVICE_TOKEN set with your Anyscale
Service, or pass them as named parameters to the constructor.
Example
from langchain.llms import Anyscale
anyscale = Anyscale(anyscale_service_url="SERVICE_URL",
                    anyscale_service_route="SERVICE_ROUTE",
                    anyscale_service_token="SERVICE_TOKEN")
# Use Ray for distributed processing
import ray
prompt_list=[]
@ray.remote
def send_query(llm, prompt):
    resp = llm(prompt)
    return resp
futures = [send_query.remote(anyscale, prompt) for prompt in prompt_list]
results = ray.get(futures)
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field model_kwargs: Optional[dict] = None#
Keyword arguments to pass to the model. Reserved for future use.
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include – fields to include in new model
exclude – fields to exclude from new model, as with values this takes precedence over include
update – values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep – set to True to make a deep copy of the model
Returns
new model instance
dict(**kwargs: Any) → Dict#
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int#
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[langchain.schema.BaseMessage]) → int#
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]#
Get the tokens present in the text.
json(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, by_alias: bool = False, skip_defaults: Optional[bool] = None, exclude_unset: bool = False, exclude_defaults: bool = False, exclude_none: bool = False, encoder: Optional[Callable[[Any], Any]] = None, models_as_dict: bool = True, **dumps_kwargs: Any) → unicode#
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
predict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
save(file_path: Union[pathlib.Path, str]) → None#
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns: Any) → None#
Try to update ForwardRefs on fields based on this Model, globalns and localns.
property lc_attributes: Dict#
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]#
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”] | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
property lc_secrets: Dict[str, str]#
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool#
Return whether or not the class is serializable.
pydantic model langchain.llms.Aviary[source]#
Allows you to use an Aviary backend.
Aviary is a backend for hosted models. You can
find out more about Aviary at
ray-project/aviary
It has no dependencies, since it connects to the backend
directly.
To get a list of the models supported on an
aviary, follow the instructions on the web site to
install the aviary CLI and then use:
aviary models
You must at least specify the environment
variable or parameter AVIARY_URL.
You may optionally specify the environment variable
or parameter AVIARY_TOKEN.
Example
from langchain.llms import Aviary
light = Aviary(aviary_url='AVIARY_URL',
               model='amazon/LightGPT')
result = light.predict('How do you make fried rice?')
Validators
raise_deprecation » all fields
set_verbose » verbose
validate_environment » all fields
field tags: Optional[List[str]] = None#
Tags to add to the run trace.
field verbose: bool [Optional]#
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → str#
Check Cache and run the LLM on the given prompt and input. | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[langchain.schema.PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None, **kwargs: Any) → langchain.schema.LLMResult#
Take in a list of prompt values and return an LLMResult.
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str#
Predict text from text.
async apredict_messages(messages: List[langchain.schema.BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → langchain.schema.BaseMessage#
Predict message from messages.
classmethod construct(_fields_set: Optional[SetStr] = None, **values: Any) → Model#
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = ‘allow’ was set since it adds all passed values
copy(*, include: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, exclude: Optional[Union[AbstractSetIntStr, MappingIntStrAny]] = None, update: Optional[DictStrAny] = None, deep: bool = False) → Model# | rtdocs_stable/api.python.langchain.com/en/stable/reference/modules/llms.html |