Dataset metadata

Modalities: Text
Formats: JSON
Languages: English
Tags: code
Libraries: Datasets, pandas
License:

Columns:
id - string (14 to 15 characters)
text - string (23 to 2.21k characters)
source - string (52 to 97 characters)
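Given the libraries listed above, the rows can be loaded with 🤗 Datasets or pandas. A minimal sketch; the repository id below is a placeholder, since the card does not name the dataset here:

```python
from datasets import load_dataset

# Placeholder repo id - substitute the actual dataset name from the card.
ds = load_dataset("your-username/langchain-docs-chunks", split="train")

print(ds.column_names)   # expected, per the schema above: ['id', 'text', 'source']
print(ds[0]["source"])   # each row carries the URL its text chunk was scraped from
```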
eab120bec91e-0
Callbacks | 🦜🔗 LangChain

📄 Argilla: Argilla - Open-source data platform for LLMs
📄 Context: Context - Product Analytics for AI Chatbots
📄 Infino - LangChain LLM Monitoring Example: This example shows how one can track prompt inputs, responses, latency, errors, and token usage while calling OpenAI models via LangChain and Infino.
📄 PromptLayer: PromptLayer
📄 Streamlit: Streamlit is a faster way to build and share data apps.
https://python.langchain.com/docs/integrations/callbacks/
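All of the integrations listed on this page are wired up the same way: construct a callback handler and pass it via callbacks=[...]. A generic sketch using LangChain's built-in StdOutCallbackHandler (the prompt is illustrative):

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Any handler from this page (Argilla, Context, Infino, PromptLayer, Streamlit)
# can be swapped in for StdOutCallbackHandler below.
handler = StdOutCallbackHandler()
chain = LLMChain(
    llm=OpenAI(temperature=0),
    prompt=PromptTemplate.from_template("Say hello to {name}."),
    callbacks=[handler],
)
chain.run(name="LangChain")  # the handler prints the chain's start and end events
```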
d989b366e49a-0
PromptLayer | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/callbacks/promptlayer
d989b366e49a-1
PromptLayer

PromptLayer is an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to set up the PromptLayerCallbackHandler. While PromptLayer does have LLMs that integrate directly with LangChain (e.g. PromptLayerOpenAI), this callback is the recommended way to integrate PromptLayer with LangChain. See our docs for more information.

Installation and Setup

pip install promptlayer --upgrade

Getting API Credentials

If you do not have a PromptLayer account, create one on promptlayer.com. Then get an API key by clicking on the settings cog in the navbar and
https://python.langchain.com/docs/integrations/callbacks/promptlayer
d989b366e49a-2
set it as an environment variable called PROMPTLAYER_API_KEY.

Usage

Getting started with PromptLayerCallbackHandler is fairly simple; it takes two optional arguments:
pl_tags - an optional list of strings that will be tracked as tags on PromptLayer.
pl_id_callback - an optional function that will take promptlayer_request_id as an argument. This ID can be used with all of PromptLayer's tracking features to track metadata, scores, and prompt usage.

Simple OpenAI Example

In this simple example we use PromptLayerCallbackHandler with ChatOpenAI. We add a PromptLayer tag named chatopenai.

import promptlayer  # Don't forget this import
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

chat_llm = ChatOpenAI(
    temperature=0,
    callbacks=[PromptLayerCallbackHandler(pl_tags=["chatopenai"])],
)
llm_results = chat_llm(
    [
        HumanMessage(content="What comes after 1,2,3 ?"),
        HumanMessage(content="Tell me another joke?"),
    ]
)
print(llm_results)

GPT4All Example

import promptlayer  # Don't forget this import
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.llms import GPT4All

model = GPT4All(model="./models/gpt4all-model.bin", n_ctx=512, n_threads=8)
response = model(
    "Once upon a time, ",
    callbacks=[PromptLayerCallbackHandler(pl_tags=["langchain", "gpt4all"])],
)

Full Featured Example

In this example we unlock more of the power of PromptLayer. PromptLayer allows you to visually create,
https://python.langchain.com/docs/integrations/callbacks/promptlayer
d989b366e49a-3
this example we unlock more of the power of PromptLayer. PromptLayer allows you to visually create, version, and track prompt templates. Using the Prompt Registry, we can programmatically fetch the prompt template called example. We also define a pl_id_callback function which takes in the promptlayer_request_id, logs a score and metadata, and links the prompt template used. Read more about tracking on our docs.

import promptlayer  # Don't forget this import
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.llms import OpenAI

def pl_id_callback(promptlayer_request_id):
    print("prompt layer id ", promptlayer_request_id)
    promptlayer.track.score(
        request_id=promptlayer_request_id, score=100
    )  # score is an integer 0-100
    promptlayer.track.metadata(
        request_id=promptlayer_request_id, metadata={"foo": "bar"}
    )  # metadata is a dictionary of key-value pairs that is tracked on PromptLayer
    promptlayer.track.prompt(
        request_id=promptlayer_request_id,
        prompt_name="example",
        prompt_input_variables={"product": "toasters"},
        version=1,
    )  # link the request to a prompt template

openai_llm = OpenAI(
    model_name="text-davinci-002",
    callbacks=[PromptLayerCallbackHandler(pl_id_callback=pl_id_callback)],
)

example_prompt = promptlayer.prompts.get("example", version=1, langchain=True)
openai_llm(example_prompt.format(product="toasters"))

That is all it takes! After setup, all of your requests will show up on the PromptLayer dashboard.
https://python.langchain.com/docs/integrations/callbacks/promptlayer
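Because promptlayer.prompts.get(..., langchain=True) returns a LangChain-compatible prompt template, it should also be able to drive a chain directly; a sketch under that assumption (the tag is illustrative):

```python
import promptlayer
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI

# Fetch the registry prompt as a LangChain template, as in the example above.
example_prompt = promptlayer.prompts.get("example", version=1, langchain=True)

llm = OpenAI(
    model_name="text-davinci-002",
    callbacks=[PromptLayerCallbackHandler(pl_tags=["prompt-registry"])],
)
chain = LLMChain(llm=llm, prompt=example_prompt)
print(chain.run(product="toasters"))
```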
d989b366e49a-4
This callback also works with any LLM implemented on LangChain.
https://python.langchain.com/docs/integrations/callbacks/promptlayer
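Since the callback works with any LangChain LLM, it also composes with chains. A sketch, not taken from the guide, with an illustrative tag and prompt; it assumes PROMPTLAYER_API_KEY is already exported as described in Getting API Credentials:

```python
from langchain.callbacks import PromptLayerCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# PROMPTLAYER_API_KEY must already be set in the environment (see above).
prompt = PromptTemplate.from_template("Suggest one name for a company that makes {product}.")
chain = LLMChain(
    llm=OpenAI(temperature=0.7),
    prompt=prompt,
    callbacks=[PromptLayerCallbackHandler(pl_tags=["naming-chain"])],
)
print(chain.run(product="toasters"))
```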
d5dcbdbd487c-0
Streamlit | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/callbacks/streamlit
d5dcbdbd487c-1
Streamlit

Streamlit is a faster way to build and share data apps. Streamlit turns data scripts into shareable web apps in minutes, all in pure Python, with no front-end experience required. See more examples at streamlit.io/generative-ai.

In this guide we will demonstrate how to use StreamlitCallbackHandler to display the thoughts and actions of an agent in an interactive Streamlit app, using the MRKL agent.

Installation and Setup

pip install langchain streamlit

You can run streamlit hello to load a sample app and validate that your install succeeded. See full instructions in Streamlit's Getting started documentation.

Display thoughts and actions

To create a StreamlitCallbackHandler, you just need to provide a parent container to render the output.

from langchain.callbacks import StreamlitCallbackHandler
import streamlit as st

st_callback = StreamlitCallbackHandler(st.container())

Additional keyword arguments to customize the display behavior are described in the API reference.

Scenario 1: Using an Agent with Tools

The primary supported use case today is visualizing the actions of an Agent with Tools (or Agent Executor). You can create an agent in your Streamlit app and simply pass the StreamlitCallbackHandler to agent.run() in order to visualize the
https://python.langchain.com/docs/integrations/callbacks/streamlit
d5dcbdbd487c-2
thoughts and actions live in your app.

from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import StreamlitCallbackHandler
import streamlit as st

llm = OpenAI(temperature=0, streaming=True)
tools = load_tools(["ddg-search"])
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)

if prompt := st.chat_input():
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent.run(prompt, callbacks=[st_callback])
        st.write(response)

Note: You will need to set OPENAI_API_KEY for the above app code to run successfully. The easiest way to do this is via Streamlit secrets.toml, or any other local ENV management tool.

Additional scenarios

Currently StreamlitCallbackHandler is geared towards use with a LangChain Agent Executor. Support for additional agent types, use directly with Chains, etc. will be added in the future.
https://python.langchain.com/docs/integrations/callbacks/streamlit
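One way to satisfy the OPENAI_API_KEY note above is to read the key from Streamlit's secrets store at app startup; a sketch that assumes the key is stored under that same name in .streamlit/secrets.toml:

```python
import os
import streamlit as st

# .streamlit/secrets.toml (assumed layout):
# OPENAI_API_KEY = "sk-..."
if "OPENAI_API_KEY" in st.secrets:
    os.environ["OPENAI_API_KEY"] = st.secrets["OPENAI_API_KEY"]
else:
    st.warning("Set OPENAI_API_KEY in .streamlit/secrets.toml before running the agent.")
    st.stop()
```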
251fde37c715-0
Infino - LangChain LLM Monitoring Example | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-1
Infino - LangChain LLM Monitoring Example

This example shows how one can track the following while calling OpenAI models via LangChain and Infino:
prompt input,
response from ChatGPT or any other LangChain model,
latency,
errors,
number of tokens consumed.

# Install necessary dependencies.
pip install infinopy
pip install matplotlib

# Remove the (1) import sys and sys.path.append(..) and (2) uncomment `!pip install langchain` after merging the PR for Infino/LangChain integration.
import sys
sys.path.append("../../../../../langchain")
# !pip install langchain

import datetime as dt
import json
import os
import time

from infinopy import InfinoClient
from langchain.llms import OpenAI
from langchain.callbacks import InfinoCallbackHandler
import matplotlib.pyplot as plt
import matplotlib.dates as md
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-4
Start Infino server, initialize the Infino client

# Start server using the Infino docker image.
docker run --rm --detach --name infino-example -p 3000:3000 infinohq/infino:latest

# Create Infino client.
client = InfinoClient()

    497a621125800abdd19f57ce7e033349b3cf83ca8cea6a74e8e28433a42ecadd

Read the questions dataset

# These are a subset of questions from Stanford's QA dataset -
# https://rajpurkar.github.io/SQuAD-explorer/
data = """In what country is Normandy located?
When were the Normans in Normandy?
From which countries did the Norse originate?
Who was the Norse leader?
What century did the Normans first gain their separate identity?
Who gave their name to Normandy in the 1000's and 1100's
What is France a region of?
Who did King Charles III swear fealty to?
When did the Frankish identity emerge?
Who was the duke in
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-5
Charles III swear fealty to?
When did the Frankish identity emerge?
Who was the duke in the battle of Hastings?
Who ruled the duchy of Normandy
What religion were the Normans
What type of major impact did the Norman dynasty have on modern Europe?
Who was famed for their Christian spirit?
Who assimilted the Roman language?
Who ruled the country of Normandy?
What principality did William the conquerer found?
What is the original meaning of the word Norman?
When was the Latin version of the word Norman first recorded?
What name comes from the English words Normans/Normanz?"""

questions = data.split("\n")

LangChain OpenAI Q&A; Publish metrics and logs to Infino

# Set your key here.
# os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

# Create callback handler. This logs latency, errors, token usage, prompts as well as prompt responses to Infino.
handler = InfinoCallbackHandler(
    model_id="test_openai", model_version="0.1", verbose=False
)

# Create LLM.
llm = OpenAI(temperature=0.1)

# Number of questions to ask the OpenAI model. We limit to a short number here to save $$ while running this demo.
num_questions = 10
questions = questions[0:num_questions]

for question in questions:
    print(question)
    # We send the question to OpenAI API, with Infino callback.
    llm_result = llm.generate([question], callbacks=[handler])
    print(llm_result)

    In what country is Normandy located?
    generations=[[Generation(text='\n\nNormandy is located in France.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 16, 'completion_tokens': 9, 'prompt_tokens': 7},
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-6
16, 'completion_tokens': 9, 'prompt_tokens': 7}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('8de21639-acec-4bd1-a12d-8124de1e20da')) When were the Normans in Normandy? generations=[[Generation(text='\n\nThe Normans first settled in Normandy in the late 9th century.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 24, 'completion_tokens': 16, 'prompt_tokens': 8}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('cf81fc86-250b-4e6e-9d92-2df3bebb019a')) From which countries did the Norse originate? generations=[[Generation(text='\n\nThe Norse originated from Scandinavia, which includes modern-day Norway, Sweden, and Denmark.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 29, 'completion_tokens': 21, 'prompt_tokens': 8}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('50f42f5e-b4a4-411a-a049-f92cb573a74f')) Who was the Norse leader? generations=[[Generation(text='\n\nThe most famous Norse leader was the legendary Viking king Ragnar Lodbrok. He is believed to have lived in the 9th century and is renowned for his exploits in England and France.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage':
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-7
'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 45, 'completion_tokens': 39, 'prompt_tokens': 6}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('e32f31cb-ddc9-4863-8e6e-cb7a281a0ada')) What century did the Normans first gain their separate identity? generations=[[Generation(text='\n\nThe Normans first gained their separate identity in the 11th century.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 28, 'completion_tokens': 16, 'prompt_tokens': 12}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('da9d8f73-b3b3-4bc5-8495-da8b11462a51')) Who gave their name to Normandy in the 1000's and 1100's generations=[[Generation(text='\n\nThe Normans, a people from northern France, gave their name to Normandy in the 1000s and 1100s. The Normans were descended from Viking settlers who had come to the region in the late 800s.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 58, 'completion_tokens': 45, 'prompt_tokens': 13}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('bb5829bf-b6a6-4429-adfa-414ac5be46e5')) What is France a region of?
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-8
What is France a region of? generations=[[Generation(text='\n\nFrance is a region of Europe.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 16, 'completion_tokens': 9, 'prompt_tokens': 7}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('6943880b-b4e4-4c74-9ca1-8c03c10f7e9c')) Who did King Charles III swear fealty to? generations=[[Generation(text='\n\nKing Charles III swore fealty to Pope Innocent III.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 23, 'completion_tokens': 13, 'prompt_tokens': 10}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('c91fd663-09e6-4d00-b746-4c7fd96f9ceb')) When did the Frankish identity emerge? generations=[[Generation(text='\n\nThe Frankish identity began to emerge in the late 5th century, when the Franks began to expand their power and influence in the region. The Franks were a Germanic tribe that had migrated to the area from the east and had established a kingdom in what is now modern-day France. The Franks were eventually able to establish a powerful kingdom that lasted until the 10th century.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 86, 'completion_tokens': 78, 'prompt_tokens': 8}, 'model_name':
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-9
    'completion_tokens': 78, 'prompt_tokens': 8}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('23f86775-e592-4cb8-baa3-46ebe74305b2'))

    Who was the duke in the battle of Hastings?
    generations=[[Generation(text='\n\nThe Duke of Normandy, William the Conqueror, was the leader of the Norman forces at the Battle of Hastings in 1066.', generation_info={'finish_reason': 'stop', 'logprobs': None})]] llm_output={'token_usage': {'total_tokens': 39, 'completion_tokens': 28, 'prompt_tokens': 11}, 'model_name': 'text-davinci-003'} run=RunInfo(run_id=UUID('ad5b7984-8758-4d95-a5eb-ee56e0218f6b'))

Create Metric Charts

We now use matplotlib to create graphs of latency, errors and tokens consumed.

# Helper function to create a graph using matplotlib.
def plot(data, title):
    data = json.loads(data)

    # Extract x and y values from the data
    timestamps = [item["time"] for item in data]
    dates = [dt.datetime.fromtimestamp(ts) for ts in timestamps]
    y = [item["value"] for item in data]

    plt.rcParams["figure.figsize"] = [6, 4]
    plt.subplots_adjust(bottom=0.2)
    plt.xticks(rotation=25)
    ax = plt.gca()
    xfmt = md.DateFormatter("%Y-%m-%d %H:%M:%S")
    ax.xaxis.set_major_formatter(xfmt)

    # Create the plot
    plt.plot(dates, y)
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-10
    # Create the plot
    plt.plot(dates, y)

    # Set labels and title
    plt.xlabel("Time")
    plt.ylabel("Value")
    plt.title(title)
    plt.show()

response = client.search_ts("__name__", "latency", 0, int(time.time()))
plot(response.text, "Latency")

response = client.search_ts("__name__", "error", 0, int(time.time()))
plot(response.text, "Errors")

response = client.search_ts("__name__", "prompt_tokens", 0, int(time.time()))
plot(response.text, "Prompt Tokens")

response = client.search_ts("__name__", "completion_tokens", 0, int(time.time()))
plot(response.text, "Completion Tokens")

response = client.search_ts("__name__", "total_tokens", 0, int(time.time()))
plot(response.text, "Total Tokens")

[Output: five charts titled Latency, Errors, Prompt Tokens, Completion Tokens, and Total Tokens.]

Full text query on prompt or prompt outputs

# Search for a particular prompt text.
query = "normandy"
response = client.search_log(query, 0, int(time.time()))
print("Results for", query, ":", response.text)
print("===")

query = "king charles III"
response = client.search_log("king charles III", 0, int(time.time()))
print("Results for", query, ":", response.text)

    Results for normandy :
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-11
for", query, ":", response.text) Results for normandy : [{"time":1686821979,"fields":{"prompt":"In what country is Normandy located?"},"text":"In what country is Normandy located?"},{"time":1686821982,"fields":{"prompt_response":"\n\nNormandy is located in France."},"text":"\n\nNormandy is located in France."},{"time":1686821984,"fields":{"prompt_response":"\n\nThe Normans first settled in Normandy in the late 9th century."},"text":"\n\nThe Normans first settled in Normandy in the late 9th century."},{"time":1686821993,"fields":{"prompt":"Who gave their name to Normandy in the 1000's and 1100's"},"text":"Who gave their name to Normandy in the 1000's and 1100's"},{"time":1686821997,"fields":{"prompt_response":"\n\nThe Normans, a people from northern France, gave their name to Normandy in the 1000s and 1100s. The Normans were descended from Viking settlers who had come to the region in the late 800s."},"text":"\n\nThe Normans, a people from northern France, gave their name to Normandy in the 1000s and 1100s. The Normans were descended from Viking settlers who had come to the region in the late 800s."}] === Results for king charles III : [{"time":1686821998,"fields":{"prompt":"Who did King Charles III swear fealty to?"},"text":"Who did King Charles III swear fealty to?"},{"time":1686822000,"fields":{"prompt_response":"\n\nKing Charles III swore fealty to Pope Innocent III."},"text":"\n\nKing Charles III swore fealty to
https://python.langchain.com/docs/integrations/callbacks/infino
251fde37c715-12
to Pope Innocent III."},"text":"\n\nKing Charles III swore fealty to Pope Innocent III."}]

Step 5: Stop Infino server

docker rm -f infino-example

    infino-example
https://python.langchain.com/docs/integrations/callbacks/infino
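The chart and search cells above repeat the same call pattern, so they can be folded into two small helpers. A sketch that assumes the client object and the plot function from this example are still in scope and that the Infino container has not yet been removed:

```python
import json
import time

def plot_metric(client, metric_name, title):
    # Same search-and-plot pair used for each chart above.
    response = client.search_ts("__name__", metric_name, 0, int(time.time()))
    plot(response.text, title)

def search_prompts(client, query):
    # Full-text search over the logged prompts and prompt responses.
    response = client.search_log(query, 0, int(time.time()))
    for hit in json.loads(response.text):
        print(hit["time"], "-", hit["text"])

for name, title in [
    ("latency", "Latency"),
    ("error", "Errors"),
    ("prompt_tokens", "Prompt Tokens"),
    ("completion_tokens", "Completion Tokens"),
    ("total_tokens", "Total Tokens"),
]:
    plot_metric(client, name, title)

search_prompts(client, "normandy")
```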
756514e58cbe-0
Argilla | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/callbacks/argilla
756514e58cbe-1
Argilla

Argilla is an open-source data curation platform for LLMs. Using Argilla, everyone can build robust language models through faster data curation using both human and machine feedback. We provide support for each step in the MLOps cycle,
https://python.langchain.com/docs/integrations/callbacks/argilla
756514e58cbe-2
from data labeling to model monitoring.

In this guide we will demonstrate how to track the inputs and responses of your LLM to generate a dataset in Argilla, using the ArgillaCallbackHandler.

It's useful to keep track of the inputs and outputs of your LLMs to generate datasets for future fine-tuning. This is especially useful when you're using an LLM to generate data for a specific task, such as question answering, summarization, or translation.

Installation and Setup

pip install argilla --upgrade
pip install openai

Getting API Credentials

To get the Argilla API credentials, follow these steps:
Go to your Argilla UI.
Click on your profile picture and go to "My settings".
Then copy the API Key.
In Argilla the API URL will be the same as the URL of your Argilla UI.
To get the OpenAI API credentials, please visit https://platform.openai.com/account/api-keys

import os

os.environ["ARGILLA_API_URL"] = "..."
os.environ["ARGILLA_API_KEY"] = "..."
os.environ["OPENAI_API_KEY"] = "..."

Setup Argilla

To use the ArgillaCallbackHandler we will need to create a new FeedbackDataset in Argilla to keep track of your LLM experiments. To do so, please use the following code:

import argilla as rg
from packaging.version import parse as parse_version

if parse_version(rg.__version__) < parse_version("1.8.0"):
    raise RuntimeError(
        "`FeedbackDataset` is only available in Argilla v1.8.0 or higher, please "
        "upgrade `argilla` as `pip install argilla --upgrade`."
    )

dataset = rg.FeedbackDataset(
    fields=[
        rg.TextField(name="prompt"),
https://python.langchain.com/docs/integrations/callbacks/argilla
756514e58cbe-3
rg.TextField(name="prompt"), rg.TextField(name="response"), ], questions=[ rg.RatingQuestion( name="response-rating", description="How would you rate the quality of the response?", values=[1, 2, 3, 4, 5], required=True, ), rg.TextQuestion( name="response-feedback", description="What feedback do you have for the response?", required=False, ), ], guidelines="You're asked to rate the quality of the response and provide feedback.",)rg.init( api_url=os.environ["ARGILLA_API_URL"], api_key=os.environ["ARGILLA_API_KEY"],)dataset.push_to_argilla("langchain-dataset")📌 NOTE: at the moment, just the prompt-response pairs are supported as FeedbackDataset.fields, so the ArgillaCallbackHandler will just track the prompt i.e. the LLM input, and the response i.e. the LLM output.Tracking​To use the ArgillaCallbackHandler you can either use the following code, or just reproduce one of the examples presented in the following sections.from langchain.callbacks import ArgillaCallbackHandlerargilla_callback = ArgillaCallbackHandler( dataset_name="langchain-dataset", api_url=os.environ["ARGILLA_API_URL"],
https://python.langchain.com/docs/integrations/callbacks/argilla
756514e58cbe-4
api_url=os.environ["ARGILLA_API_URL"], api_key=os.environ["ARGILLA_API_KEY"],)Scenario 1: Tracking an LLM​First, let's just run a single LLM a few times and capture the resulting prompt-response pairs in Argilla.from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandlerfrom langchain.llms import OpenAIargilla_callback = ArgillaCallbackHandler( dataset_name="langchain-dataset", api_url=os.environ["ARGILLA_API_URL"], api_key=os.environ["ARGILLA_API_KEY"],)callbacks = [StdOutCallbackHandler(), argilla_callback]llm = OpenAI(temperature=0.9, callbacks=callbacks)llm.generate(["Tell me a joke", "Tell me a poem"] * 3) LLMResult(generations=[[Generation(text='\n\nQ: What did the fish say when he hit the wall? \nA: Dam.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nThe Moon \n\nThe moon is high in the midnight sky,\nSparkling like a star above.\nThe night so peaceful, so serene,\nFilling up the air with love.\n\nEver changing and renewing,\nA never-ending light of grace.\nThe moon remains a constant view,\nA reminder of life’s gentle pace.\n\nThrough time and space it guides us on,\nA never-fading beacon of hope.\nThe moon shines down on us all,\nAs it continues to rise and elope.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nQ. What did one magnet say to the other magnet?\nA. "I find you very attractive!"', generation_info={'finish_reason':
https://python.langchain.com/docs/integrations/callbacks/argilla
756514e58cbe-5
other magnet?\nA. "I find you very attractive!"', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text="\n\nThe world is charged with the grandeur of God.\nIt will flame out, like shining from shook foil;\nIt gathers to a greatness, like the ooze of oil\nCrushed. Why do men then now not reck his rod?\n\nGenerations have trod, have trod, have trod;\nAnd all is seared with trade; bleared, smeared with toil;\nAnd wears man's smudge and shares man's smell: the soil\nIs bare now, nor can foot feel, being shod.\n\nAnd for all this, nature is never spent;\nThere lives the dearest freshness deep down things;\nAnd though the last lights off the black West went\nOh, morning, at the brown brink eastward, springs —\n\nBecause the Holy Ghost over the bent\nWorld broods with warm breast and with ah! bright wings.\n\n~Gerard Manley Hopkins", generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nQ: What did one ocean say to the other ocean?\nA: Nothing, they just waved.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text="\n\nA poem for you\n\nOn a field of green\n\nThe sky so blue\n\nA gentle breeze, the sun above\n\nA beautiful world, for us to love\n\nLife is a journey, full of surprise\n\nFull of joy and full of surprise\n\nBe brave and take small steps\n\nThe future will be revealed with depth\n\nIn the morning, when dawn arrives\n\nA fresh start, no reason to hide\n\nSomewhere down the road,
https://python.langchain.com/docs/integrations/callbacks/argilla
756514e58cbe-6
    dawn arrives\n\nA fresh start, no reason to hide\n\nSomewhere down the road, there's a heart that beats\n\nBelieve in yourself, you'll always succeed.", generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'completion_tokens': 504, 'total_tokens': 528, 'prompt_tokens': 24}, 'model_name': 'text-davinci-003'})

Scenario 2: Tracking an LLM in a chain

Then we can create a chain using a prompt template, and then track the initial prompt and the final response in Argilla.

from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url=os.environ["ARGILLA_API_URL"],
    api_key=os.environ["ARGILLA_API_KEY"],
)
callbacks = [StdOutCallbackHandler(), argilla_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)

template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, callbacks=callbacks)

test_prompts = [{"title": "Documentary about Bigfoot in Paris"}]
synopsis_chain.apply(test_prompts)

    > Entering new LLMChain chain...
    Prompt after formatting:
    You are a playwright. Given the title of play, it is your job to write a synopsis
https://python.langchain.com/docs/integrations/callbacks/argilla
756514e58cbe-7
    You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
    Title: Documentary about Bigfoot in Paris
    Playwright: This is a synopsis for the above play:

    > Finished chain.

    [{'text': "\n\nDocumentary about Bigfoot in Paris focuses on the story of a documentary filmmaker and their search for evidence of the legendary Bigfoot creature in the city of Paris. The play follows the filmmaker as they explore the city, meeting people from all walks of life who have had encounters with the mysterious creature. Through their conversations, the filmmaker unravels the story of Bigfoot and finds out the truth about the creature's presence in Paris. As the story progresses, the filmmaker learns more and more about the mysterious creature, as well as the different perspectives of the people living in the city, and what they think of the creature. In the end, the filmmaker's findings lead them to some surprising and heartwarming conclusions about the creature's existence and the importance it holds in the lives of the people in Paris."}]

Scenario 3: Using an Agent with Tools

Finally, as a more advanced workflow, you can create an agent that uses some tools. ArgillaCallbackHandler will keep track of the input and the final output, but not of the intermediate steps/thoughts, so for a given prompt we log the original prompt and the final response.

Note that for this scenario we'll be using the Google Search API (Serp API), so you will need to install google-search-results with pip install google-search-results and set the Serp API key as os.environ["SERPAPI_API_KEY"] = "..." (you can find it at https://serpapi.com/dashboard); otherwise the example below won't work.

from langchain.agents import AgentType, initialize_agent, load_tools
https://python.langchain.com/docs/integrations/callbacks/argilla
756514e58cbe-8
example below won't work.

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler
from langchain.llms import OpenAI

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url=os.environ["ARGILLA_API_URL"],
    api_key=os.environ["ARGILLA_API_KEY"],
)
callbacks = [StdOutCallbackHandler(), argilla_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)
tools = load_tools(["serpapi"], llm=llm, callbacks=callbacks)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    callbacks=callbacks,
)
agent.run("Who was the first president of the United States of America?")

    > Entering new AgentExecutor chain...
    I need to answer a historical question
    Action: Search
    Action Input: "who was the first president of the United States of America"
    Observation: George Washington
    Thought: George Washington was the first president
    Final Answer: George Washington was the first president of the United States of America.

    > Finished chain.

    'George Washington was the first president of the United States of America.'
https://python.langchain.com/docs/integrations/callbacks/argilla
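A condensed sketch of Scenarios 1 and 2 above, reusing a single ArgillaCallbackHandler for both a bare LLM call and a chain; the dataset name matches the guide, while the prompts are illustrative:

```python
import os

from langchain.callbacks import ArgillaCallbackHandler, StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

argilla_callback = ArgillaCallbackHandler(
    dataset_name="langchain-dataset",
    api_url=os.environ["ARGILLA_API_URL"],
    api_key=os.environ["ARGILLA_API_KEY"],
)
callbacks = [StdOutCallbackHandler(), argilla_callback]
llm = OpenAI(temperature=0.9, callbacks=callbacks)

# Scenario 1: bare LLM calls are logged as prompt/response records.
llm.generate(["Tell me a joke"])

# Scenario 2: with a chain, the formatted prompt and the final output are logged.
prompt = PromptTemplate.from_template(
    "You are a playwright. Write a one-sentence synopsis for a play titled {title}."
)
chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
chain.run(title="Documentary about Bigfoot in Paris")
```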
81a1c2696644-0
Context | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/callbacks/context
81a1c2696644-1
Context

Context provides product analytics for AI chatbots. Context helps you understand how users are interacting with your AI chat products.
https://python.langchain.com/docs/integrations/callbacks/context
81a1c2696644-2
Gain critical insights, optimise poor experiences, and minimise brand risks.

In this guide we will show you how to integrate with Context.

Installation and Setup

$ pip install context-python --upgrade

Getting API Credentials

To get your Context API token:
Go to the settings page within your Context account (https://go.getcontext.ai/settings).
Generate a new API Token.
Store this token somewhere secure.

Setup Context

To use the ContextCallbackHandler, import the handler from LangChain and instantiate it with your Context API token. Ensure you have installed the context-python package before using the handler.

import os
from langchain.callbacks import ContextCallbackHandler

token = os.environ["CONTEXT_API_TOKEN"]
context_callback = ContextCallbackHandler(token)

Usage

Using the Context callback within a Chat Model

The Context callback handler can be used to directly record transcripts between users and AI assistants.

Example

import os
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage
from langchain.callbacks import ContextCallbackHandler

token = os.environ["CONTEXT_API_TOKEN"]
chat = ChatOpenAI(
    headers={"user_id": "123"},
    temperature=0,
    callbacks=[ContextCallbackHandler(token)],
)
messages = [
    SystemMessage(
        content="You are a helpful assistant that translates English to French."
    ),
    HumanMessage(content="I love programming."),
]
print(chat(messages))

Using the Context callback within Chains

The Context callback handler can also be used to record the inputs and outputs of chains. Note that intermediate steps of the chain are not recorded - only the starting inputs and final outputs.

Note: Ensure that you pass the same context object to the chat model and the chain.

Wrong:
chat = ChatOpenAI(temperature=0.9,
https://python.langchain.com/docs/integrations/callbacks/context
81a1c2696644-3
chat model and the chain.

Wrong:
chat = ChatOpenAI(temperature=0.9, callbacks=[ContextCallbackHandler(token)])
chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[ContextCallbackHandler(token)])

Correct:
handler = ContextCallbackHandler(token)
chat = ChatOpenAI(temperature=0.9, callbacks=[handler])
chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[handler])

Example

import os
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.callbacks import ContextCallbackHandler

token = os.environ["CONTEXT_API_TOKEN"]

human_message_prompt = HumanMessagePromptTemplate(
    prompt=PromptTemplate(
        template="What is a good name for a company that makes {product}?",
        input_variables=["product"],
    )
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])

callback = ContextCallbackHandler(token)
chat = ChatOpenAI(temperature=0.9, callbacks=[callback])
chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[callback])
print(chain.run("colorful socks"))
https://python.langchain.com/docs/integrations/callbacks/context
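Pulling the pieces of this guide together, a sketch that shares one handler between the chat model and the chain (the "Correct" pattern above) while tagging the transcript with a user id; the id value is illustrative:

```python
import os

from langchain.callbacks import ContextCallbackHandler
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate

token = os.environ["CONTEXT_API_TOKEN"]
handler = ContextCallbackHandler(token)  # one handler, shared by the model and the chain

chat_prompt = ChatPromptTemplate.from_messages(
    [HumanMessagePromptTemplate.from_template(
        "What is a good name for a company that makes {product}?"
    )]
)

# headers={"user_id": ...} tags the recorded transcript, as in the chat-model example.
chat = ChatOpenAI(temperature=0.9, headers={"user_id": "123"}, callbacks=[handler])
chain = LLMChain(llm=chat, prompt=chat_prompt, callbacks=[handler])
print(chain.run("colorful socks"))
```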
f73c7d173411-0
Document loaders | 🦜🔗 LangChain
https://python.langchain.com/docs/integrations/document_loaders/
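The loaders indexed below share a common interface: construct the loader with a source and call load() to get a list of Document objects. A minimal sketch with TextLoader; the file path is illustrative:

```python
from langchain.document_loaders import TextLoader

# Illustrative path; every loader listed below follows the same
# load() -> list of Document objects convention.
loader = TextLoader("./example.txt")
docs = loader.load()

print(len(docs))
print(docs[0].page_content[:100])
print(docs[0].metadata)  # e.g. {'source': './example.txt'}
```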
f73c7d173411-1
Document loaders

📄 Etherscan Loader: Overview
📄 acreom: acreom is a dev-first knowledge base
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-2
📄 acreom: acreom is a dev-first knowledge base with tasks running on local markdown files.
📄 Airbyte JSON: Airbyte is a data integration platform for ELT pipelines from APIs, databases & files to warehouses & lakes. It has the largest catalog of ELT connectors to data warehouses and databases.
📄 Airtable: Get your API key here.
📄 Alibaba Cloud MaxCompute: Alibaba Cloud MaxCompute (previously known as ODPS) is a general-purpose, fully managed, multi-tenancy data processing platform for large-scale data warehousing. MaxCompute supports various data importing solutions and distributed computing models, enabling users to effectively query massive datasets, reduce production costs, and ensure data security.
📄 Apify Dataset: Apify Dataset is a scalable, append-only storage with sequential access built for storing structured web scraping results, such as a list of products or Google SERPs, and then exporting them to various formats like JSON, CSV, or Excel. Datasets are mainly used to save results of Apify Actors - serverless cloud programs for various web scraping, crawling, and data extraction use cases.
📄 Arxiv: arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.
📄 AsyncHtmlLoader: AsyncHtmlLoader loads raw HTML from a list of URLs concurrently.
📄 AWS S3 Directory: Amazon Simple Storage Service (Amazon S3) is an object storage service.
📄 AWS S3 File: Amazon Simple Storage Service (Amazon S3) is an object storage service.
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-3
(Amazon S3) is an object storage service.
📄 AZLyrics: AZLyrics is a large, legal, ever-growing collection of lyrics.
📄 Azure Blob Storage Container: Azure Blob Storage is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that doesn't adhere to a particular data model or definition, such as text or binary data.
📄 Azure Blob Storage File: Azure Files offers fully managed file shares in the cloud that are accessible via the industry-standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API.
📄 BibTeX: BibTeX is a file format and reference management system commonly used in conjunction with LaTeX typesetting. It serves as a way to organize and store bibliographic information for academic and research documents.
📄 BiliBili: Bilibili is one of the most beloved long-form video sites in China.
📄 Blackboard: Blackboard Learn (previously the Blackboard Learning Management System) is a web-based virtual learning environment and learning management system developed by Blackboard Inc. The software features course management, customizable open architecture, and scalable design that allows integration with student information systems and authentication protocols. It may be installed on local servers, hosted by Blackboard ASP Solutions, or provided as Software as a Service hosted on Amazon Web Services. Its main purposes are stated to include the addition of online elements to courses traditionally delivered face-to-face and the development of completely online courses with few or no face-to-face meetings.
📄 Blockchain: Overview
📄 Brave Search: Brave Search is a search engine developed by Brave
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-4
📄 Brave Search: Brave Search is a search engine developed by Brave Software.
📄 Browserless: Browserless is a service that allows you to run headless Chrome instances in the cloud. It's a great way to run browser-based automation at scale without having to worry about managing your own infrastructure.
📄 chatgpt_loader: ChatGPT Data
📄 College Confidential: College Confidential gives information on 3,800+ colleges and universities.
📄 Confluence: Confluence is a wiki collaboration platform that saves and organizes all of the project-related material. Confluence is a knowledge base that primarily handles content management activities.
📄 CoNLL-U: CoNLL-U is a revised version of the CoNLL-X format. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines:
📄 Copy Paste: This notebook covers how to load a document object from something you just want to copy and paste. In this case, you don't even need to use a DocumentLoader, but rather can just construct the Document directly.
📄 CSV: A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. Each line of the file is a data record. Each record consists of one or more fields, separated by commas.
📄 Cube Semantic Layer: This notebook demonstrates the process of retrieving Cube's data model metadata in a format suitable for passing to LLMs as embeddings, thereby enhancing contextual information.
📄 Datadog Logs: Datadog is a monitoring and analytics platform for cloud-scale
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-5
📄 Datadog Logs: Datadog is a monitoring and analytics platform for cloud-scale applications.
📄 Diffbot: Unlike traditional web scraping tools, Diffbot doesn't require any rules to read the content on a page.
📄 Discord: Discord is a VoIP and instant messaging social platform. Users have the ability to communicate with voice calls, video calls, text messaging, media and files in private chats or as part of communities called "servers". A server is a collection of persistent chat rooms and voice channels which can be accessed via invite links.
📄 Docugami: This notebook covers how to load documents from Docugami. It explains the advantages of using this system over alternative data loaders.
📄 DuckDB: DuckDB is an in-process SQL OLAP database management system.
📄 Email: This notebook shows how to load email (.eml) or Microsoft Outlook (.msg) files.
📄 Embaas: embaas is a fully managed NLP API service that offers features like embedding generation, document text extraction, document-to-embeddings and more. You can choose from a variety of pre-trained models.
📄 EPub: EPUB is an e-book file format that uses the ".epub" file extension. The term is short for electronic publication and is sometimes styled ePub. EPUB is supported by many e-readers, and compatible software is available for most smartphones, tablets, and computers.
📄 EverNote: EverNote is intended for archiving and creating notes in which photos, audio and saved web content can be embedded. Notes are stored in virtual "notebooks" and can be tagged, annotated, edited, searched, and exported.
🗃 example_data: 1
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-6
edited, searched, and exported.
🗃 example_data: 1 item
📄 Microsoft Excel: The UnstructuredExcelLoader is used to load Microsoft Excel files. The loader works with both .xlsx and .xls files. The page content will be the raw text of the Excel file. If you use the loader in "elements" mode, an HTML representation of the Excel file will be available in the document metadata under the text_as_html key.
📄 Facebook Chat: Facebook Messenger is an American proprietary instant messaging app and platform developed by Meta Platforms. Originally developed as Facebook Chat in 2008, the company revamped its messaging service in 2010.
📄 Fauna: Fauna is a Document Database.
📄 Figma: Figma is a collaborative web application for interface design.
📄 Geopandas: Geopandas is an open source project to make working with geospatial data in Python easier.
📄 Git: Git is a distributed version control system that tracks changes in any set of computer files, usually used for coordinating work among programmers collaboratively developing source code during software development.
📄 GitBook: GitBook is a modern documentation platform where teams can document everything from products to internal knowledge bases and APIs.
📄 GitHub: This notebook shows how you can load issues and pull requests (PRs) for a given repository on GitHub. We will use the LangChain Python repository as an example.
📄 Google BigQuery: Google BigQuery is a serverless and cost-effective enterprise data warehouse that works across clouds and scales with your data.
📄 Google Cloud Storage Directory: Google Cloud Storage is a managed service for storing unstructured
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-7
📄 Google Cloud Storage Directory: Google Cloud Storage is a managed service for storing unstructured data.
📄 Google Cloud Storage File: Google Cloud Storage is a managed service for storing unstructured data.
📄 Google Drive: Google Drive is a file storage and synchronization service developed by Google.
📄 Grobid: GROBID is a machine learning library for extracting, parsing, and re-structuring raw documents.
📄 Gutenberg: Project Gutenberg is an online library of free eBooks.
📄 Hacker News: Hacker News (sometimes abbreviated as HN) is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as "anything that gratifies one's intellectual curiosity."
📄 HuggingFace dataset: The Hugging Face Hub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. They are used for a diverse range of tasks such as translation,
📄 iFixit: iFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY-NC-SA 3.0.
📄 Images: This covers how to load images such as JPG or PNG into a document format that we can use downstream.
📄 Image captions: By default, the loader utilizes the pre-trained Salesforce BLIP image captioning model.
📄 IMSDb: IMSDb is the Internet Movie Script
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-8
📄 IMSDb: IMSDb is the Internet Movie Script Database.
📄 Iugu: Iugu is a Brazilian services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.
📄 Joplin: Joplin is an open source note-taking app. Capture your thoughts and securely access them from any device.
📄 Jupyter Notebook: Jupyter Notebook (formerly IPython Notebook) is a web-based interactive computational environment for creating notebook documents.
📄 LarkSuite (FeiShu): LarkSuite is an enterprise collaboration platform developed by ByteDance.
📄 Mastodon: Mastodon is a federated social media and social networking service.
📄 MediaWikiDump: MediaWiki XML dumps contain the content of a wiki (wiki pages with all their revisions), without the site-related data. An XML dump does not create a full backup of the wiki database; the dump does not contain user accounts, images, edit logs, etc.
📄 MergeDocLoader: Merge the documents returned from a set of specified data loaders.
📄 mhtml: MHTML is used both for emails and for archived webpages. MHTML, sometimes referred to as MHT, stands for MIME HTML and is a single file in which an entire webpage is archived. When one saves a webpage in MHTML format, this file extension will contain HTML code, images, audio files, flash animation, etc.
📄 Microsoft OneDrive: Microsoft OneDrive (formerly SkyDrive) is a file hosting service operated by Microsoft.
📄 Microsoft PowerPoint: Microsoft PowerPoint is a presentation program by
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-9
Microsoft.
📄 Microsoft PowerPoint: Microsoft PowerPoint is a presentation program by Microsoft.
📄 Microsoft Word: Microsoft Word is a word processor developed by Microsoft.
📄 Modern Treasury: Modern Treasury simplifies complex payment operations. It is a unified platform to power products and processes that move money.
📄 Notion DB 1/2: Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.
📄 Notion DB 2/2: Notion is a collaboration platform with modified Markdown support that integrates kanban boards, tasks, wikis and databases. It is an all-in-one workspace for notetaking, knowledge and data management, and project and task management.
📄 Obsidian: Obsidian is a powerful and extensible knowledge base.
📄 Open Document Format (ODT): The Open Document Format for Office Applications (ODF), also known as OpenDocument, is an open file format for word processing documents, spreadsheets, presentations and graphics, using ZIP-compressed XML files. It was developed with the aim of providing an open, XML-based file format specification for office applications.
📄 Open City Data: Socrata provides an API for city open data.
📄 Org-mode: An Org Mode document is a document editing, formatting, and organizing mode, designed for notes, planning, and authoring within the free software text editor Emacs.
📄 Pandas DataFrame: This notebook goes over how to load data from a pandas DataFrame.
📄 Psychic: This
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-10
how to load data from a pandas DataFrame.
📄 Psychic: This notebook covers how to load documents from Psychic. See here for more details.
📄 PySpark DataFrame Loader: This notebook goes over how to load data from a PySpark DataFrame.
📄 ReadTheDocs Documentation: Read the Docs is an open-sourced free software documentation hosting platform. It generates documentation written with the Sphinx documentation generator.
📄 Recursive URL Loader: We may want to load all URLs under a root directory.
📄 Reddit: Reddit is an American social news aggregation, content rating, and discussion website.
📄 Roam: ROAM is a note-taking tool for networked thought, designed to create a personal knowledge base.
📄 Rockset: Rockset is a real-time analytics database which enables queries on massive, semi-structured data without operational burden. With Rockset, ingested data is queryable within one second and analytical queries against that data typically execute in milliseconds. Rockset is compute-optimized, making it suitable for serving high-concurrency applications in the sub-100TB range (or larger than 100s of TBs with rollups).
📄 RST: A reStructuredText (RST) file is a file format for textual data used primarily in the Python programming language community for technical documentation.
📄 Sitemap: Extending the WebBaseLoader, SitemapLoader loads a sitemap from a given URL, and then scrapes and loads all pages in the sitemap, returning each page as a Document.
📄 Slack: Slack is an instant messaging program.
📄 Snowflake: This notebook goes over how to load documents from
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-11
SnowflakeThis notebook goes over how to load documents from Snowflake.📄 Source CodeThis notebook covers how to load source code files using a special approach with language parsing: each top-level function and class in the code is loaded into separate documents. Any remaining top-level code outside the already loaded functions and classes will be loaded into a separate document.📄 SpreedlySpreedly is a service that allows you to securely store credit cards and use them to transact against any number of payment gateways and third party APIs. It does this by simultaneously providing a card tokenization/vault service as well as a gateway and receiver integration service. Payment methods tokenized by Spreedly are stored at Spreedly, allowing you to independently store a card and then pass that card to different end points based on your business requirements.📄 StripeStripe is an Irish-American financial services and software as a service (SaaS) company. It offers payment-processing software and application programming interfaces for e-commerce websites and mobile applications.📄 SubtitleThe SubRip file format is described on the Matroska multimedia container format website as "perhaps the most basic of all subtitle formats." SubRip (SubRip Text) files are named with the extension .srt, and contain formatted lines of plain text in groups separated by a blank line. Subtitles are numbered sequentially, starting at 1. The timecode format used is hours:minutes:seconds,milliseconds with time units fixed to two zero-padded digits and fractions fixed to three zero-padded digits (00:00:00,000). The fractional separator used is the comma, since the program was written in France.📄 TelegramTelegram Messenger is a globally accessible freemium, cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-12
cross-platform, encrypted, cloud-based and centralized instant messaging service. The application also provides optional end-to-end encrypted chats and video calling, VoIP, file sharing and several other features.📄 Tencent COS DirectoryThis covers how to load document objects from a Tencent COS Directory.📄 Tencent COS FileThis covers how to load document objects from a Tencent COS File.📄 2Markdown2markdown service transforms website content into structured markdown files.📄 TOMLTOML is a file format for configuration files. It is intended to be easy to read and write, and is designed to map unambiguously to a dictionary. Its specification is open-source. TOML is implemented in many programming languages. The name TOML is an acronym for "Tom's Obvious, Minimal Language" referring to its creator, Tom Preston-Werner.📄 TrelloTrello is a web-based project management and collaboration tool that allows individuals and teams to organize and track their tasks and projects. It provides a visual interface known as a "board" where users can create lists and cards to represent their tasks and activities.📄 TSVA tab-separated values (TSV) file is a simple, text-based file format for storing tabular data. Records are separated by newlines, and values within a record are separated by tab characters.📄 TwitterTwitter is an online social media and social networking service.📄 Unstructured FileThis notebook covers how to use the Unstructured package to load files of many types. Unstructured currently supports loading of text files, powerpoints, html, pdfs, images, and more.📄 URLThis covers how to load HTML documents from
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-13
more.📄 URLThis covers how to load HTML documents from a list of URLs into a document format that we can use downstream.📄 WeatherOpenWeatherMap is an open source weather service provider.📄 WebBaseLoaderThis covers how to use WebBaseLoader to load all text from HTML webpages into a document format that we can use downstream. For more custom logic for loading webpages, look at some child class examples such as IMSDbLoader, AZLyricsLoader, and CollegeConfidentialLoader.📄 WhatsApp ChatWhatsApp (also called WhatsApp Messenger) is a freeware, cross-platform, centralized instant messaging (IM) and voice-over-IP (VoIP) service. It allows users to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content.📄 WikipediaWikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.📄 XMLThe UnstructuredXMLLoader is used to load XML files. The loader works with .xml files. The page content will be the text extracted from the XML tags.📄 Xorbits Pandas DataFrameThis notebook goes over how to load data from a xorbits.pandas DataFrame.📄 Loading documents from a YouTube urlBuilding chat or QA applications on YouTube videos is a topic of high interest.📄 YouTube transcriptsYouTube is an online video sharing and social media platform created by Google.
https://python.langchain.com/docs/integrations/document_loaders/
f73c7d173411-14
video sharing and social media platform created by Google.
https://python.langchain.com/docs/integrations/document_loaders/
e747e2fd44f2-0
Arxiv | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/arxiv
e747e2fd44f2-1
arXiv is an open-access archive for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology,
https://python.langchain.com/docs/integrations/document_loaders/arxiv
e747e2fd44f2-2
for 2 million scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.This notebook shows how to load scientific articles from Arxiv.org into a document format that we can use downstream.Installation: First, you need to install the arxiv Python package.#!pip install arxivSecond, you need to install the PyMuPDF Python package, which transforms PDF files downloaded from the arxiv.org site into text format.#!pip install pymupdfExamples: ArxivLoader has these arguments:query: free text which is used to find documents on Arxivoptional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments.optional load_all_available_meta: default=False. By default only the most important fields are downloaded: Published (date when the document was published/last updated), Title, Authors, Summary. If True, other fields are also downloaded.from langchain.document_loaders import ArxivLoaderdocs = ArxivLoader(query="1605.08386", load_max_docs=2).load()len(docs)docs[0].metadata # meta-information of the Document {'Published': '2016-05-26', 'Title': 'Heat-bath random walks with Markov bases', 'Authors': 'Caprice Stanley, Tobias Windisch', 'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk,
https://python.langchain.com/docs/integrations/document_loaders/arxiv
e747e2fd44f2-3
state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}docs[0].page_content[:400] # all pages of the Document content 'arXiv:1605.08386v1 [math.CO] 26 May 2016\nHEAT-BATH RANDOM WALKS WITH MARKOV BASES\nCAPRICE STANLEY AND TOBIAS WINDISCH\nAbstract. Graphs on lattice points are studied whose edges come from a �nite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on �bers of a\n�xed integer matrix can be bounded from above by a constant. We then study the mixing\nbehaviour of heat-b'
https://python.langchain.com/docs/integrations/document_loaders/arxiv
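A minimal follow-on sketch (not part of the scraped Arxiv page): it enables load_all_available_meta to pull the full metadata set described above. The query string here is an illustrative placeholder; it assumes the arxiv and pymupdf packages are installed as noted on the page.

# Illustrative sketch: inspect the extra metadata fields returned when
# load_all_available_meta=True. Assumes `pip install arxiv pymupdf`.
from langchain.document_loaders import ArxivLoader

loader = ArxivLoader(
    query="heat-bath random walks",  # hypothetical free-text query
    load_max_docs=2,                 # keep small for experiments
    load_all_available_meta=True,    # include fields beyond Published/Title/Authors/Summary
)
docs = loader.load()

for doc in docs:
    # Show which metadata keys came back and a short preview of the text.
    print(sorted(doc.metadata.keys()))
    print(doc.page_content[:200], "\n---")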
6faecf3e7686-0
mhtml | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/mhtml
6faecf3e7686-1
MHTML is a format used both for emails and for archived webpages. MHTML, sometimes referred to as MHT, stands for MIME HTML and is a
https://python.langchain.com/docs/integrations/document_loaders/mhtml
6faecf3e7686-2
for archived webpages. MHTML, sometimes referred to as MHT, stands for MIME HTML and is a single file in which an entire webpage is archived. When one saves a webpage in MHTML format, the file will contain HTML code, images, audio files, flash animation, etc.from langchain.document_loaders import MHTMLLoader# Create a new loader object for the MHTML fileloader = MHTMLLoader( file_path="../../../../../../tests/integration_tests/examples/example.mht")# Load the document from the filedocuments = loader.load()# Print the documents to see the resultsfor doc in documents: print(doc) page_content='LangChain\nLANG CHAIN 🦜�🔗Official Home Page\xa0\n\n\n\n\n\n\n\nIntegrations\n\n\n\nFeatures\n\n\n\n\nBlog\n\n\n\nConceptual Guide\n\n\n\n\nPython Repo\n\n\nJavaScript Repo\n\n\n\nPython Documentation \n\n\nJavaScript Documentation\n\n\n\n\nPython ChatLangChain \n\n\nJavaScript ChatLangChain\n\n\n\n\nDiscord \n\n\nTwitter\n\n\n\n\nIf you have any comments about our WEB page, you can \nwrite us at the address shown above. However, due to \nthe limited number of personnel in our corporate office, we are unable to \nprovide a direct response.\n\nCopyright © 2023-2023 LangChain Inc.\n\n\n' metadata={'source': '../../../../../../tests/integration_tests/examples/example.mht', 'title': 'LangChain'}
https://python.langchain.com/docs/integrations/document_loaders/mhtml
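A hedged sketch building on the MHTML page above (not part of the original notebook): it splits the loaded page into smaller chunks for downstream use. The file path is a placeholder, and the splitter settings are illustrative rather than prescribed by the loader.

# Illustrative sketch: load an MHTML file and split it into chunks.
from langchain.document_loaders import MHTMLLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

loader = MHTMLLoader(file_path="example.mht")  # hypothetical local file
documents = loader.load()

# Chunk sizes chosen only for demonstration.
splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents(documents)
print(f"{len(documents)} document(s) split into {len(chunks)} chunks")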
37c7eed7ef02-0
Open City Data | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/open_city_data
37c7eed7ef02-1
Socrata provides an API for city open data. For a dataset such as SF crime, go to the API tab on the top right. That
https://python.langchain.com/docs/integrations/document_loaders/open_city_data
37c7eed7ef02-2
data. For a dataset such as SF crime, go to the API tab on the top right. That provides you with the dataset identifier.Use the dataset identifier to grab specific tables for a given city_id (data.sfgov.org) - E.g., vw6y-z8j6 for SF 311 data.E.g., tmnf-yvry for SF Police data.pip install sodapyfrom langchain.document_loaders import OpenCityDataLoaderdataset = "vw6y-z8j6" # 311 datadataset = "tmnf-yvry" # crime dataloader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id=dataset, limit=2000)docs = loader.load() WARNING:root:Requests made without an app_token will be subject to strict throttling limits.eval(docs[0].page_content) {'pdid': '4133422003074', 'incidntnum': '041334220', 'incident_code': '03074', 'category': 'ROBBERY', 'descript': 'ROBBERY, BODILY FORCE', 'dayofweek': 'Monday', 'date': '2004-11-22T00:00:00.000', 'time': '17:50', 'pddistrict': 'INGLESIDE', 'resolution': 'NONE', 'address': 'GENEVA AV / SANTOS ST', 'x': '-122.420084075249', 'y': '37.7083109744362', 'location': {'type': 'Point', 'coordinates':
https://python.langchain.com/docs/integrations/document_loaders/open_city_data
37c7eed7ef02-3
'location': {'type': 'Point', 'coordinates': [-122.420084075249, 37.7083109744362]}, ':@computed_region_26cr_cadq': '9', ':@computed_region_rxqg_mtj9': '8', ':@computed_region_bh8s_q3mv': '309'}
https://python.langchain.com/docs/integrations/document_loaders/open_city_data
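A small sketch building on the Open City Data example above (not from the original notebook): each record comes back as a stringified dict in page_content, so it can be parsed safely with ast.literal_eval instead of eval. The dataset identifier is the SF Police one used above; the field names follow the sample record and may differ for other datasets.

# Illustrative sketch: parse Socrata records safely and pull out a few fields.
import ast
from langchain.document_loaders import OpenCityDataLoader

loader = OpenCityDataLoader(city_id="data.sfgov.org", dataset_id="tmnf-yvry", limit=10)
docs = loader.load()

for doc in docs:
    record = ast.literal_eval(doc.page_content)  # page_content is a stringified dict
    print(record.get("category"), "|", record.get("date"))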
324a60890259-0
HuggingFace dataset | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-1
The Hugging Face Hub is home to over 5,000 datasets in more than 100 languages that can be used
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-2
Hub is home to over 5,000 datasets in more than 100 languages that can be used for a broad range of tasks across NLP, Computer Vision, and Audio. They are used for a diverse range of tasks such as translation,
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-3
automatic speech recognition, and image classification.This notebook shows how to load Hugging Face Hub datasets to LangChain.from langchain.document_loaders import HuggingFaceDatasetLoaderdataset_name = "imdb"page_content_column = "text"loader = HuggingFaceDatasetLoader(dataset_name, page_content_column)data = loader.load()data[:15] [Document(page_content='I rented I AM CURIOUS-YELLOW from my video store because of all the controversy that surrounded it when it was first released in 1967. I also heard that at first it was seized by U.S. customs if it ever tried to enter this country, therefore being a fan of films considered "controversial" I really had to see this for myself.<br /><br />The plot is centered around a young Swedish drama student named Lena who wants to learn everything she can about life. In particular she wants to focus her attentions to making some sort of documentary on what the average Swede thought about certain political issues such as the Vietnam War and race issues in the United States. In between asking politicians and ordinary denizens of Stockholm about their opinions on politics, she has sex with her drama teacher, classmates, and married men.<br /><br />What kills me about I AM CURIOUS-YELLOW is that 40 years ago, this was considered pornographic. Really, the sex and nudity scenes are few and far between, even then it\'s not shot like some cheaply made porno. While my countrymen mind find it shocking, in reality sex and nudity are a major staple in Swedish cinema. Even Ingmar Bergman, arguably their answer to good old boy John Ford, had sex scenes in his films.<br /><br />I do commend the filmmakers for the fact that any sex shown in the film is shown for artistic purposes rather than just to shock people and make money to be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-4
be shown in pornographic theaters in America. I AM CURIOUS-YELLOW is a good film for anyone wanting to study the meat and potatoes (no pun intended) of Swedish cinema. But really, this film doesn\'t have much of a plot.', metadata={'label': 0}), Document(page_content='"I Am Curious: Yellow" is a risible and pretentious steaming pile. It doesn\'t matter what one\'s political views are because this film can hardly be taken seriously on any level. As for the claim that frontal male nudity is an automatic NC-17, that isn\'t true. I\'ve seen R-rated films with male nudity. Granted, they only offer some fleeting views, but where are the R-rated films with gaping vulvas and flapping labia? Nowhere, because they don\'t exist. The same goes for those crappy cable shows: schlongs swinging in the breeze but not a clitoris in sight. And those pretentious indie movies like The Brown Bunny, in which we\'re treated to the site of Vincent Gallo\'s throbbing johnson, but not a trace of pink visible on Chloe Sevigny. Before crying (or implying) "double-standard" in matters of nudity, the mentally obtuse should take into account one unavoidably obvious anatomical difference between men and women: there are no genitals on display when actresses appears nude, and the same cannot be said for a man. In fact, you generally won\'t see female genitals in an American film in anything short of porn or explicit erotica. This alleged double-standard is less a double standard than an admittedly depressing ability to come to terms culturally with the insides of women\'s bodies.', metadata={'label': 0}), Document(page_content="If only to avoid making this type of film in the future. This film is interesting as an experiment but tells no cogent story.<br /><br
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-5
the future. This film is interesting as an experiment but tells no cogent story.<br /><br />One might feel virtuous for sitting thru it because it touches on so many IMPORTANT issues but it does so without any discernable motive. The viewer comes away with no new perspectives (unless one comes up with one while one's mind wanders, as it will invariably do during this pointless film).<br /><br />One might better spend one's time staring out a window at a tree growing.<br /><br />", metadata={'label': 0}), Document(page_content="This film was probably inspired by Godard's Masculin, féminin and I urge you to see that film instead.<br /><br />The film has two strong elements and those are, (1) the realistic acting (2) the impressive, undeservedly good, photo. Apart from that, what strikes me most is the endless stream of silliness. Lena Nyman has to be most annoying actress in the world. She acts so stupid and with all the nudity in this film,...it's unattractive. Comparing to Godard's film, intellectuality has been replaced with stupidity. Without going too far on this subject, I would say that follows from the difference in ideals between the French and the Swedish society.<br /><br />A movie of its time, and place. 2/10.", metadata={'label': 0}), Document(page_content='Oh, brother...after hearing about this ridiculous film for umpteen years all I can think of is that old Peggy Lee song..<br /><br />"Is that all there is??" ...I was just an early teen when this smoked fish hit the U.S. I was too young to get in the theater (although I did manage to sneak into "Goodbye Columbus"). Then a screening at a local film museum beckoned - Finally I could see this film,
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-6
Columbus"). Then a screening at a local film museum beckoned - Finally I could see this film, except now I was as old as my parents were when they schlepped to see it!!<br /><br />The ONLY reason this film was not condemned to the anonymous sands of time was because of the obscenity case sparked by its U.S. release. MILLIONS of people flocked to this stinker, thinking they were going to see a sex film...Instead, they got lots of closeups of gnarly, repulsive Swedes, on-street interviews in bland shopping malls, asinie political pretension...and feeble who-cares simulated sex scenes with saggy, pale actors.<br /><br />Cultural icon, holy grail, historic artifact..whatever this thing was, shred it, burn it, then stuff the ashes in a lead box!<br /><br />Elite esthetes still scrape to find value in its boring pseudo revolutionary political spewings..But if it weren\'t for the censorship scandal, it would have been ignored, then forgotten.<br /><br />Instead, the "I Am Blank, Blank" rhythymed title was repeated endlessly for years as a titilation for porno films (I am Curious, Lavender - for gay films, I Am Curious, Black - for blaxploitation films, etc..) and every ten years or so the thing rises from the dead, to be viewed by a new generation of suckers who want to see that "naughty sex film" that "revolutionized the film industry"...<br /><br />Yeesh, avoid like the plague..Or if you MUST see it - rent the video and fast forward to the "dirty" parts, just to get it over with.<br /><br />', metadata={'label': 0}), Document(page_content="I would put this at the top of my list of films in the category of
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-7
Document(page_content="I would put this at the top of my list of films in the category of unwatchable trash! There are films that are bad, but the worst kind are the ones that are unwatchable but you are suppose to like them because they are supposed to be good for you! The sex sequences, so shocking in its day, couldn't even arouse a rabbit. The so called controversial politics is strictly high school sophomore amateur night Marxism. The film is self-consciously arty in the worst sense of the term. The photography is in a harsh grainy black and white. Some scenes are out of focus or taken from the wrong angle. Even the sound is bad! And some people call this art?<br /><br />", metadata={'label': 0}), Document(page_content="Whoever wrote the screenplay for this movie obviously never consulted any books about Lucille Ball, especially her autobiography. I've never seen so many mistakes in a biopic, ranging from her early years in Celoron and Jamestown to her later years with Desi. I could write a whole list of factual errors, but it would go on for pages. In all, I believe that Lucille Ball is one of those inimitable people who simply cannot be portrayed by anyone other than themselves. If I were Lucie Arnaz and Desi, Jr., I would be irate at how many mistakes were made in this film. The filmmakers tried hard, but the movie seems awfully sloppy to me.", metadata={'label': 0}), Document(page_content='When I first saw a glimpse of this movie, I quickly noticed the actress who was playing the role of Lucille Ball. Rachel York\'s portrayal of Lucy is absolutely awful. Lucille Ball was an astounding comedian with incredible talent. To think about a legend like Lucille Ball being portrayed the way she was in the movie is horrendous. I cannot believe out of all the
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-8
being portrayed the way she was in the movie is horrendous. I cannot believe out of all the actresses in the world who could play a much better Lucy, the producers decided to get Rachel York. She might be a good actress in other roles but to play the role of Lucille Ball is tough. It is pretty hard to find someone who could resemble Lucille Ball, but they could at least find someone a bit similar in looks and talent. If you noticed York\'s portrayal of Lucy in episodes of I Love Lucy like the chocolate factory or vitavetavegamin, nothing is similar in any way-her expression, voice, or movement.<br /><br />To top it all off, Danny Pino playing Desi Arnaz is horrible. Pino does not qualify to play as Ricky. He\'s small and skinny, his accent is unreal, and once again, his acting is unbelievable. Although Fred and Ethel were not similar either, they were not as bad as the characters of Lucy and Ricky.<br /><br />Overall, extremely horrible casting and the story is badly told. If people want to understand the real life situation of Lucille Ball, I suggest watching A&E Biography of Lucy and Desi, read the book from Lucille Ball herself, or PBS\' American Masters: Finding Lucy. If you want to see a docudrama, "Before the Laughter" would be a better choice. The casting of Lucille Ball and Desi Arnaz in "Before the Laughter" is much better compared to this. At least, a similar aspect is shown rather than nothing.', metadata={'label': 0}), Document(page_content='Who are these "They"- the actors? the filmmakers? Certainly couldn\'t be the audience- this is among the most air-puffed productions in existence. It\'s the kind of movie that looks like it was a lot of fun to shoot\x97 TOO much fun, nobody is getting
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-9
that looks like it was a lot of fun to shoot\x97 TOO much fun, nobody is getting any actual work done, and that almost always makes for a movie that\'s no fun to watch.<br /><br />Ritter dons glasses so as to hammer home his character\'s status as a sort of doppleganger of the bespectacled Bogdanovich; the scenes with the breezy Ms. Stratten are sweet, but have an embarrassing, look-guys-I\'m-dating-the-prom-queen feel to them. Ben Gazzara sports his usual cat\'s-got-canary grin in a futile attempt to elevate the meager plot, which requires him to pursue Audrey Hepburn with all the interest of a narcoleptic at an insomnia clinic. In the meantime, the budding couple\'s respective children (nepotism alert: Bogdanovich\'s daughters) spew cute and pick up some fairly disturbing pointers on \'love\' while observing their parents. (Ms. Hepburn, drawing on her dignity, manages to rise above the proceedings- but she has the monumental challenge of playing herself, ostensibly.) Everybody looks great, but so what? It\'s a movie and we can expect that much, if that\'s what you\'re looking for you\'d be better off picking up a copy of Vogue.<br /><br />Oh- and it has to be mentioned that Colleen Camp thoroughly annoys, even apart from her singing, which, while competent, is wholly unconvincing... the country and western numbers are woefully mismatched with the standards on the soundtrack. Surely this is NOT what Gershwin (who wrote the song from which the movie\'s title is derived) had in mind; his stage musicals of the 20\'s may have been slight, but at least they were long on charm. "They All Laughed" tries to coast on its good intentions, but
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-10
were long on charm. "They All Laughed" tries to coast on its good intentions, but nobody- least of all Peter Bogdanovich - has the good sense to put on the brakes.<br /><br />Due in no small part to the tragic death of Dorothy Stratten, this movie has a special place in the heart of Mr. Bogdanovich- he even bought it back from its producers, then distributed it on his own and went bankrupt when it didn\'t prove popular. His rise and fall is among the more sympathetic and tragic of Hollywood stories, so there\'s no joy in criticizing the film... there _is_ real emotional investment in Ms. Stratten\'s scenes. But "Laughed" is a faint echo of "The Last Picture Show", "Paper Moon" or "What\'s Up, Doc"- following "Daisy Miller" and "At Long Last Love", it was a thundering confirmation of the phase from which P.B. has never emerged.<br /><br />All in all, though, the movie is harmless, only a waste of rental. I want to watch people having a good time, I\'ll go to the park on a sunny day. For filmic expressions of joy and love, I\'ll stick to Ernest Lubitsch and Jaques Demy...', metadata={'label': 0}), Document(page_content="This is said to be a personal film for Peter Bogdonavitch. He based it on his life but changed things around to fit the characters, who are detectives. These detectives date beautiful models and have no problem getting them. Sounds more like a millionaire playboy filmmaker than a detective, doesn't it? This entire movie was written by Peter, and it shows how out of touch with real people he was. You're supposed to write what you know, and he did that, indeed. And leaves the audience bored and confused, and jealous, for that matter. This is
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-11
indeed. And leaves the audience bored and confused, and jealous, for that matter. This is a curio for people who want to see Dorothy Stratten, who was murdered right after filming. But Patti Hanson, who would, in real life, marry Keith Richards, was also a model, like Stratten, but is a lot better and has a more ample part. In fact, Stratten's part seemed forced; added. She doesn't have a lot to do with the story, which is pretty convoluted to begin with. All in all, every character in this film is somebody that very few people can relate with, unless you're millionaire from Manhattan with beautiful supermodels at your beckon call. For the rest of us, it's an irritating snore fest. That's what happens when you're out of touch. You entertain your few friends with inside jokes, and bore all the rest.", metadata={'label': 0}), Document(page_content='It was great to see some of my favorite stars of 30 years ago including John Ritter, Ben Gazarra and Audrey Hepburn. They looked quite wonderful. But that was it. They were not given any characters or good lines to work with. I neither understood or cared what the characters were doing.<br /><br />Some of the smaller female roles were fine, Patty Henson and Colleen Camp were quite competent and confident in their small sidekick parts. They showed some talent and it is sad they didn\'t go on to star in more and better films. Sadly, I didn\'t think Dorothy Stratten got a chance to act in this her only important film role.<br /><br />The film appears to have some fans, and I was very open-minded when I started watching it. I am a big Peter Bogdanovich fan and I enjoyed his last movie, "Cat\'s Meow" and all his early ones from "Targets" to
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-12
last movie, "Cat\'s Meow" and all his early ones from "Targets" to "Nickleodeon". So, it really surprised me that I was barely able to keep awake watching this one.<br /><br />It is ironic that this movie is about a detective agency where the detectives and clients get romantically involved with each other. Five years later, Bogdanovich\'s ex-girlfriend, Cybil Shepherd had a hit television series called "Moonlighting" stealing the story idea from Bogdanovich. Of course, there was a great difference in that the series relied on tons of witty dialogue, while this tries to make do with slapstick and a few screwball lines.<br /><br />Bottom line: It ain\'t no "Paper Moon" and only a very pale version of "What\'s Up, Doc".', metadata={'label': 0}), Document(page_content="I can't believe that those praising this movie herein aren't thinking of some other film. I was prepared for the possibility that this would be awful, but the script (or lack thereof) makes for a film that's also pointless. On the plus side, the general level of craft on the part of the actors and technical crew is quite competent, but when you've got a sow's ear to work with you can't make a silk purse. Ben G fans should stick with just about any other movie he's been in. Dorothy S fans should stick to Galaxina. Peter B fans should stick to Last Picture Show and Target. Fans of cheap laughs at the expense of those who seem to be asking for it should stick to Peter B's amazingly awful book, Killing of the Unicorn.", metadata={'label': 0}), Document(page_content='Never cast models and Playboy bunnies in your films! Bob Fosse\'s "Star 80" about Dorothy Stratten, of whom Bogdanovich was obsessed enough
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-13
"Star 80" about Dorothy Stratten, of whom Bogdanovich was obsessed enough to have married her SISTER after her murder at the hands of her low-life husband, is a zillion times more interesting than Dorothy herself on the silver screen. Patty Hansen is no actress either..I expected to see some sort of lost masterpiece a la Orson Welles but instead got Audrey Hepburn cavorting in jeans and a god-awful "poodlesque" hair-do....Very disappointing...."Paper Moon" and "The Last Picture Show" I could watch again and again. This clunker I could barely sit through once. This movie was reputedly not released because of the brouhaha surrounding Ms. Stratten\'s tawdry death; I think the real reason was because it was so bad!', metadata={'label': 0}), Document(page_content="Its not the cast. A finer group of actors, you could not find. Its not the setting. The director is in love with New York City, and by the end of the film, so are we all! Woody Allen could not improve upon what Bogdonovich has done here. If you are going to fall in love, or find love, Manhattan is the place to go. No, the problem with the movie is the script. There is none. The actors fall in love at first sight, words are unnecessary. In the director's own experience in Hollywood that is what happens when they go to work on the set. It is reality to him, and his peers, but it is a fantasy to most of us in the real world. So, in the end, the movie is hollow, and shallow, and message-less.", metadata={'label': 0}), Document(page_content='Today I found "They All Laughed" on VHS on sale in a rental. It was a really old and very used VHS, I had no information about this
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-14
a rental. It was a really old and very used VHS, I had no information about this movie, but I liked the references listed on its cover: the names of Peter Bogdanovich, Audrey Hepburn, John Ritter and specially Dorothy Stratten attracted me, the price was very low and I decided to risk and buy it. I searched IMDb, and the User Rating of 6.0 was an excellent reference. I looked in "Mick Martin & Marsha Porter Video & DVD Guide 2003" and \x96 wow \x96 four stars! So, I decided that I could not waste more time and immediately see it. Indeed, I have just finished watching "They All Laughed" and I found it a very boring overrated movie. The characters are badly developed, and I spent lots of minutes to understand their roles in the story. The plot is supposed to be funny (private eyes who fall in love for the women they are chasing), but I have not laughed along the whole story. The coincidences, in a huge city like New York, are ridiculous. Ben Gazarra as an attractive and very seductive man, with the women falling for him as if her were a Brad Pitt, Antonio Banderas or George Clooney, is quite ridiculous. In the end, the greater attractions certainly are the presence of the Playboy centerfold and playmate of the year Dorothy Stratten, murdered by her husband pretty after the release of this movie, and whose life was showed in "Star 80" and "Death of a Centerfold: The Dorothy Stratten Story"; the amazing beauty of the sexy Patti Hansen, the future Mrs. Keith Richards; the always wonderful, even being fifty-two years old, Audrey Hepburn; and the song "Amigo", from Roberto Carlos. Although I do not like him, Roberto Carlos has been the most popular Brazilian singer since the end of the 60\'s and is called by his fans
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
324a60890259-15
the most popular Brazilian singer since the end of the 60\'s and is called by his fans as "The King". I will keep this movie in my collection only because of these attractions (manly Dorothy Stratten). My vote is four.<br /><br />Title (Brazil): "Muito Riso e Muita Alegria" ("Many Laughs and Lots of Happiness")', metadata={'label': 0})]Example: In this example, we use data from a dataset to answer a questionfrom langchain.indexes import VectorstoreIndexCreatorfrom langchain.document_loaders.hugging_face_dataset import HuggingFaceDatasetLoaderdataset_name = "tweet_eval"page_content_column = "text"name = "stance_climate"loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)index = VectorstoreIndexCreator().from_loaders([loader]) Found cached dataset tweet_eval 0%| | 0/3 [00:00<?, ?it/s] Using embedded DuckDB without persistence: data will be transientquery = "What are the most used hashtag?"result = index.query(query)result ' The most used hashtags in this context are #UKClimate2015, #Sustainability, #TakeDownTheFlag, #LoveWins, #CSOTA, #ClimateSummitoftheAmericas, #SM, and #SocialMedia.'
https://python.langchain.com/docs/integrations/document_loaders/hugging_face_dataset
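An illustrative sketch tied to the HuggingFace dataset page above (not part of the original notebook): it loads the imdb split as shown and filters the resulting documents by their label metadata before any downstream indexing. The label values follow the sample output above.

# Illustrative sketch: filter loaded documents by the `label` metadata field.
from langchain.document_loaders import HuggingFaceDatasetLoader

loader = HuggingFaceDatasetLoader("imdb", "text")
data = loader.load()

negative = [d for d in data if d.metadata.get("label") == 0]
positive = [d for d in data if d.metadata.get("label") == 1]
print(f"{len(negative)} negative and {len(positive)} positive reviews loaded")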
be0439f3403e-0
Joplin | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/joplin
be0439f3403e-1
Joplin is an open source note-taking app. Capture your thoughts and securely access them from any device.This notebook covers how to load documents from
https://python.langchain.com/docs/integrations/document_loaders/joplin
be0439f3403e-2
app. Capture your thoughts and securely access them from any device.This notebook covers how to load documents from a Joplin database.Joplin has a REST API for accessing its local database. This loader uses the API to retrieve all notes in the database and their metadata. This requires an access token that can be obtained from the app by following these steps:Open the Joplin app. The app must stay open while the documents are being loaded.Go to settings / options and select "Web Clipper".Make sure that the Web Clipper service is enabled.Under "Advanced Options", copy the authorization token.You may either initialize the loader directly with the access token, or store it in the environment variable JOPLIN_ACCESS_TOKEN.An alternative to this approach is to export Joplin's note database to Markdown files (optionally, with Front Matter metadata) and use a Markdown loader, such as ObsidianLoader, to load them.from langchain.document_loaders import JoplinLoaderloader = JoplinLoader(access_token="<access-token>")docs = loader.load()
https://python.langchain.com/docs/integrations/document_loaders/joplin
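A minimal sketch of the environment-variable route the Joplin page mentions (not from the original notebook): the token value is a placeholder obtained from the Web Clipper settings, and the exact metadata keys on each note are an assumption, so they are read defensively.

# Illustrative sketch: supply the token via JOPLIN_ACCESS_TOKEN instead of a constructor argument.
import os
from langchain.document_loaders import JoplinLoader

os.environ.setdefault("JOPLIN_ACCESS_TOKEN", "<access-token>")  # placeholder token

loader = JoplinLoader()  # token picked up from the environment variable
docs = loader.load()
for doc in docs[:5]:
    # Metadata keys may vary by version; read them defensively.
    print(doc.metadata.get("title"), "-", len(doc.page_content), "chars")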
a6f85cef4c61-0
URL | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/url
a6f85cef4c61-1
This covers how to load HTML documents from a list of URLs into a document format that we can use downstream.from langchain.document_loaders import
https://python.langchain.com/docs/integrations/document_loaders/url
a6f85cef4c61-2
a list of URLs into a document format that we can use downstream.from langchain.document_loaders import UnstructuredURLLoaderurls = [ "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023", "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-9-2023",]Pass in ssl_verify=False along with headers=headers to get past an SSL verification error.loader = UnstructuredURLLoader(urls=urls)data = loader.load()Selenium URL LoaderThis covers how to load HTML documents from a list of URLs using the SeleniumURLLoader.Using Selenium allows us to load pages that require JavaScript to render.Setup: To use the SeleniumURLLoader, you will need to install selenium and unstructured.from langchain.document_loaders import SeleniumURLLoaderurls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8",]loader = SeleniumURLLoader(urls=urls)data = loader.load()Playwright URL LoaderThis covers how to load HTML documents from a list of URLs using the PlaywrightURLLoader.As in the Selenium case, Playwright allows us to load pages that need JavaScript to render.Setup: To use the PlaywrightURLLoader, you will need to install playwright and unstructured. Additionally, you will need to install the Playwright Chromium browser:# Install playwrightpip install "playwright"pip install "unstructured"playwright installfrom langchain.document_loaders import PlaywrightURLLoaderurls = [ "https://www.youtube.com/watch?v=dQw4w9WgXcQ", "https://goo.gl/maps/NDSHwePEyaHMFGwh8",]loader =
https://python.langchain.com/docs/integrations/document_loaders/url
a6f85cef4c61-3
"https://goo.gl/maps/NDSHwePEyaHMFGwh8",]loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])data = loader.load()PreviousUnstructured FileNextWeatherSetupSetupCommunityDiscordTwitterGitHubPythonJS/TSMoreHomepageBlogCopyright © 2023 LangChain, Inc.
https://python.langchain.com/docs/integrations/document_loaders/url
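A hedged sketch of the ssl_verify/headers combination the URL page mentions (not from the original notebook): the header value is a placeholder, support for these keyword arguments may depend on the installed unstructured version, and disabling SSL verification is only appropriate when the trade-off is understood.

# Illustrative sketch: pass headers and ssl_verify=False to UnstructuredURLLoader.
from langchain.document_loaders import UnstructuredURLLoader

urls = [
    "https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-8-2023",
]
headers = {"User-Agent": "Mozilla/5.0"}  # placeholder user agent

loader = UnstructuredURLLoader(urls=urls, headers=headers, ssl_verify=False)
data = loader.load()
print(len(data), "document(s) loaded")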
23d46f0b4199-0
XML | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/xml
23d46f0b4199-1
The UnstructuredXMLLoader is used to load XML files. The loader works with .xml files. The page content will be the text extracted from the XML tags.from
https://python.langchain.com/docs/integrations/document_loaders/xml
23d46f0b4199-2
loader works with .xml files. The page content will be the text extracted from the XML tags.from langchain.document_loaders import UnstructuredXMLLoaderloader = UnstructuredXMLLoader( "example_data/factbook.xml",)docs = loader.load()docs[0] Document(page_content='United States\nWashington, DC\nJoe Biden\nBaseball\nCanada\nOttawa\nJustin Trudeau\nHockey\nFrance\nParis\nEmmanuel Macron\nSoccer\nTrinidad & Tobado\nPort of Spain\nKeith Rowley\nTrack & Field', metadata={'source': 'example_data/factbook.xml'})
https://python.langchain.com/docs/integrations/document_loaders/xml
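An illustrative variation on the XML example above (not from the original notebook): Unstructured-based loaders such as UnstructuredXMLLoader generally accept mode="elements" to return one Document per extracted element instead of a single combined Document; treat that option as an assumption to verify against your installed version. The file path is the same example file used above.

# Illustrative sketch: element-level loading with UnstructuredXMLLoader.
from langchain.document_loaders import UnstructuredXMLLoader

loader = UnstructuredXMLLoader("example_data/factbook.xml", mode="elements")
docs = loader.load()
print(len(docs), "element-level documents")
# Element metadata typically carries a category; read it defensively.
print(docs[0].page_content, docs[0].metadata.get("category"))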
8862143246f3-0
Twitter | 🦜🔗 Langchain
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-1
Twitter is an online social media and social networking service.This loader fetches the text from the Tweets of a list of Twitter users, using the tweepy Python package.
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-2
You must initialize the loader with your Twitter API token, and you need to pass in the Twitter username you want to extract.from langchain.document_loaders import TwitterTweetLoader#!pip install tweepyloader = TwitterTweetLoader.from_bearer_token( oauth2_bearer_token="YOUR BEARER TOKEN", twitter_users=["elonmusk"], number_tweets=50, # Default value is 100)# Or load from access token and consumer keys# loader = TwitterTweetLoader.from_secrets(# access_token='YOUR ACCESS TOKEN',# access_token_secret='YOUR ACCESS TOKEN SECRET',# consumer_key='YOUR CONSUMER KEY',# consumer_secret='YOUR CONSUMER SECRET',# twitter_users=['elonmusk'],# number_tweets=50,# )documents = loader.load()documents[:5] [Document(page_content='@MrAndyNgo @REI One store after another shutting down', metadata={'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False,
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-3
None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False,
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-4
False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat @joshrogin @glennbeck Large ships are fundamentally vulnerable to ballistic (hypersonic) missiles', metadata={'created_at': 'Tue Apr 18 03:43:25 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None,
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-5
'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328',
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-6
'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}),
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-7
'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat The Golden Rule', metadata={'created_at': 'Tue Apr 18 03:37:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name':
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-8
'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color':
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-9
'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@KanekoaTheGreat �', metadata={'created_at': 'Tue Apr 18 03:35:48 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False,
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-10
@REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False, 'retweeted': False, 'lang': 'en'}, 'contributors_enabled': False, 'is_translator': False, 'is_translation_enabled': False, 'profile_background_color': 'C0DEED', 'profile_background_image_url': 'http://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_image_url_https': 'https://abs.twimg.com/images/themes/theme1/bg.png', 'profile_background_tile': False, 'profile_image_url':
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-11
'profile_background_tile': False, 'profile_image_url': 'http://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_image_url_https': 'https://pbs.twimg.com/profile_images/1590968738358079488/IY9Gx6Ok_normal.jpg', 'profile_banner_url': 'https://pbs.twimg.com/profile_banners/44196397/1576183471', 'profile_link_color': '0084B4', 'profile_sidebar_border_color': 'C0DEED', 'profile_sidebar_fill_color': 'DDEEF6', 'profile_text_color': '333333', 'profile_use_background_image': True, 'has_extended_profile': True, 'default_profile': False, 'default_profile_image': False, 'following': None, 'follow_request_sent': None, 'notifications': None, 'translator_type': 'none', 'withheld_in_countries': []}}), Document(page_content='@TRHLofficial What’s he talking about and why is it sponsored by Erik’s son?', metadata={'created_at': 'Tue Apr 18 03:32:17 +0000 2023', 'user_info': {'id': 44196397, 'id_str': '44196397', 'name': 'Elon Musk', 'screen_name': 'elonmusk', 'location': 'A Shortfall of Gravitas', 'profile_location': None, 'description': 'nothing', 'url': None, 'entities': {'description': {'urls': []}}, 'protected': False, 'followers_count': 135528327, 'friends_count': 220, 'listed_count': 120478, 'created_at': 'Tue Jun 02 20:12:29 +0000 2009', 'favourites_count': 21285, 'utc_offset': None,
https://python.langchain.com/docs/integrations/document_loaders/twitter
8862143246f3-12
2009', 'favourites_count': 21285, 'utc_offset': None, 'time_zone': None, 'geo_enabled': False, 'verified': False, 'statuses_count': 24795, 'lang': None, 'status': {'created_at': 'Tue Apr 18 03:45:50 +0000 2023', 'id': 1648170947541704705, 'id_str': '1648170947541704705', 'text': '@MrAndyNgo @REI One store after another shutting down', 'truncated': False, 'entities': {'hashtags': [], 'symbols': [], 'user_mentions': [{'screen_name': 'MrAndyNgo', 'name': 'Andy Ngô ��\u200d🌈', 'id': 2835451658, 'id_str': '2835451658', 'indices': [0, 10]}, {'screen_name': 'REI', 'name': 'REI', 'id': 16583846, 'id_str': '16583846', 'indices': [11, 15]}], 'urls': []}, 'source': '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>', 'in_reply_to_status_id': 1648134341678051328, 'in_reply_to_status_id_str': '1648134341678051328', 'in_reply_to_user_id': 2835451658, 'in_reply_to_user_id_str': '2835451658', 'in_reply_to_screen_name': 'MrAndyNgo', 'geo': None, 'coordinates': None, 'place': None, 'contributors': None, 'is_quote_status': False, 'retweet_count': 118, 'favorite_count': 1286, 'favorited': False,
https://python.langchain.com/docs/integrations/document_loaders/twitter
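For reference, here is a minimal, runnable sketch of the bearer-token flow documented in the Twitter loader rows above. It assumes tweepy is installed and that the bearer token lives in an environment variable named TWITTER_BEARER_TOKEN (that variable name is an illustrative assumption, not part of the scraped docs); everything else uses only the from_bearer_token signature and the Document fields visible in the sample output.

import os

from langchain.document_loaders import TwitterTweetLoader

# Assumption: the bearer token is stored in TWITTER_BEARER_TOKEN; substitute
# whatever secret management you already use.
loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token=os.environ["TWITTER_BEARER_TOKEN"],
    twitter_users=["elonmusk"],  # any list of handles works
    number_tweets=25,            # defaults to 100 when omitted
)

documents = loader.load()

# Each Document carries the tweet text in page_content and the raw user/status
# payload in metadata, as the sample rows above show.
for doc in documents[:3]:
    print(doc.metadata["created_at"], "-", doc.page_content)

The same loop works unchanged with a loader built via from_secrets; only the credential arguments differ.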