From 488f71ab6065da0cf6f22a061f8ff4141993d1ef Mon Sep 17 00:00:00 2001 From: ImgBotApp Date: Mon, 28 Aug 2023 16:37:22 +0000 Subject: [PATCH] [ImgBot] Optimize images *Total -- 608.90kb -> 585.60kb (3.83%) /svgs/langchain-parent-document-retriever.svg -- 86.66kb -> 82.40kb (4.91%) /svgs/langchain-modules.svg -- 147.84kb -> 140.70kb (4.83%) /svgs/langchain-js-auto-docstrings.svg -- 99.27kb -> 95.23kb (4.07%) /svgs/langchain-use-cases.svg -- 132.22kb -> 128.27kb (2.99%) /svgs/langchain-js-runnable-chain.svg -- 142.91kb -> 139.00kb (2.74%) Signed-off-by: ImgBotApp --- svgs/langchain-js-auto-docstrings.svg | 2 +- svgs/langchain-js-runnable-chain.svg | 2 +- svgs/langchain-modules.svg | 2 +- svgs/langchain-parent-document-retriever.svg | 2 +- svgs/langchain-use-cases.svg | 2 +- 5 files changed, 5 insertions(+), 5 deletions(-) diff --git a/svgs/langchain-js-auto-docstrings.svg b/svgs/langchain-js-auto-docstrings.svg index 5cb575b..90dd3f8 100644 --- a/svgs/langchain-js-auto-docstrings.svg +++ b/svgs/langchain-js-auto-docstrings.svg @@ -1 +1 @@ -createGenerateCommentsChain(src/generate_comments_chain.ts) Memory  OpenAIAgentTokenBufferMemoryWhy Weaviate Vector Store? Weaviate is used as an existing index we previously created over our Python documentation, which should mostly be a superset of the JS integrations and abstractions.Retrieval Agent vs. Retrieval Chain Because many classes in LangChain.js extend base classes and import other concepts, I chose to try to prompt the agent to look up information on the main class from the retriever, then have the flexibility to decide if it needed more information and make further queries.All steps use OpenAI's GPT-4 model.Why NOT Unstructured Output? [Fail] Have the model take input code and rewrite it with TSDoc comments. => The model would overwrite handwritten existing TSDoc comments. 
[Fail] Structure output and splice the comments in as a final step, initially tried to ask the model to output a line number where the comment should be spliced. => Hallucination.Agent ToolWeaviateStoresearch_langchain_knowledgeOpenAIAgent Executor ChatOpenAIsearch_langchain_knowledgeOpenAIAgentTokenBufferMemoryRunnable Chain (RunnableSequence.from([ ...runnables ]).invoke({ input: raw_codes_of_file }) => Structured TSDoc CommentsretrievalResult: researchAgentExecutororiginal_input: new RunnablePassthrough()(a.k.a. Runnable Map)input: out => out.original_input.inputctx: out => out.retrievalResult.outputPrompt: { input, ctx }ChatOpenAI().bind(functions)JsonOutputFunctionsParserLangChain.js Auto TSDoc Comment Creator https://github.com/jacoblee93/auto-docstrings/ Authored by Jacob Lee (@Hacubu)Diagramed by Haili Zhang (@zhanghaili0610)Create retrieval tool to search documents about LangChain's modulesInitialize agent executor to look up terms (necessary for understanding the input code)Prepare memory for agent executor \ No newline at end of file +createGenerateCommentsChain(src/generate_comments_chain.ts)MemoryOpenAIAgentTokenBufferMemoryWhy Weaviate Vector Store? Weaviate is used as an existing index we previously created over our Python documentation, which should mostly be a superset of the JS integrations and abstractions.Retrieval Agent vs. Retrieval Chain Because many classes in LangChain.js extend base classes and import other concepts, I chose to try to prompt the agent to look up information on the main class from the retriever, then have the flexibility to decide if it needed more information and make further queries.All steps use OpenAI's GPT-4 model.Why NOT Unstructured Output? [Fail] Have the model take input code and rewrite it with TSDoc comments. => The model would overwrite handwritten existing TSDoc comments. 
[Fail] Structure output and splice the comments in as a final step, initially tried to ask the model to output a line number where the comment should be spliced. => Hallucination.Agent ToolWeaviateStoresearch_langchain_knowledgeOpenAIAgent ExecutorChatOpenAIsearch_langchain_knowledgeOpenAIAgentTokenBufferMemoryRunnable Chain (RunnableSequence.from([ ...runnables ]).invoke({ input: raw_codes_of_file }) => Structured TSDoc CommentsretrievalResult: researchAgentExecutororiginal_input: new RunnablePassthrough()(a.k.a. Runnable Map)input: out => out.original_input.inputctx: out => out.retrievalResult.outputPrompt: { input, ctx }ChatOpenAI().bind(functions)JsonOutputFunctionsParserLangChain.js Auto TSDoc Comment Creator https://github.com/jacoblee93/auto-docstrings/ Authored by Jacob Lee (@Hacubu)Diagramed by Haili Zhang (@zhanghaili0610)Create retrieval tool to search documents about LangChain's modulesInitialize agent executor to look up terms (necessary for understanding the input code)Prepare memory for agent executor \ No newline at end of file diff --git a/svgs/langchain-js-runnable-chain.svg b/svgs/langchain-js-runnable-chain.svg index f91e663..25859a7 100644 --- a/svgs/langchain-js-runnable-chain.svg +++ b/svgs/langchain-js-runnable-chain.svg @@ -1 +1 @@ -LangChain JS Runnable Chain Cheatsheet v0.1.1  Authored by Haili Zhang @zhanghaili0610Reviewed by Jacob Lee @Hacubu(Last Updated: Aug 23, 2023)  Concept: RunnableObjects or functions that expose standard interfaces:- stream: stream back chunks of the response e.g. model = new OpenAI({ streaming: true }) e.g. 
parser = new BytesOutputParser()- invoke: call the chain on an input- batch: call the chain on a list of inputs How-to: Chain Runnables- Instance Method: runnable.pipe(runnable)- Static Method: RunnableSequence.from([ ...runnables ]), which runs runnable objects in sequence when invoked How-to: Passthrough Chain Inputs- If input is string, use new RunnablePassthrough() - If input is object, use arrow (=>) function which takes the object as input and extracts the desired key How-to: Bind kwargs (keyword args) - Instance Method: runnable.bind({ ...kwargs })- e.g. Bind Functions to OpenAI Model: model.bind({ functions: [ ...schemas ], function_call: { ... } }) How-to: Fallback to another Runnable- Instance: runnable.withFallbacks({ fallbacks: [ ...runnables ] })Passthrough and MapMain LoopRunnable MapRunnable Bq: passthroughIndividual RunnablesMain LoopPrompt TemplateMain LoopLLM ModelMain LoopChat ModelMain LoopRetrieverExamples of Runnable ChainsMain Loopa.k.a. LLM ChainPrompt TemplateLLM ModelOutput Parsera.k.a. Conversational Retrieval ChainQuestion LLM ChainRunnable MapPassthroughsRetrieversAnswer LLM Chaina.k.a. LLM Tool ChainPrompt TemplateLLM ModelOutput ParserTool (e.g. Search)Runnable A (e.g. Retriever){ q: val } => { ctx: res }Runnable InterfacesMain LoopRunnableABCMain LoopRunnableAMain LoopRunnableA.1A.2A.n...LangChain JS Expression Language Cookbook: https://js.langchain.com/docs/guides/expression_language/cookbooklist of chat messagesPromptValueobjecta list of documentsprompt stringtext or a list of docsChatMessage{ q: val }batchstream{ q: val, ctx: res }invokeresponse string \ No newline at end of file +LangChain JS Runnable Chain Cheatsheet v0.1.1 Authored by Haili Zhang @zhanghaili0610Reviewed by Jacob Lee @Hacubu(Last Updated: Aug 23, 2023) Concept: RunnableObjects or functions that expose standard interfaces:- stream: stream back chunks of the response e.g. model = new OpenAI({ streaming: true }) e.g. 
parser = new BytesOutputParser()- invoke: call the chain on an input- batch: call the chain on a list of inputs How-to: Chain Runnables- Instance Method: runnable.pipe(runnable)- Static Method: RunnableSequence.from([ ...runnables ]), which runs runnable objects in sequence when invoked How-to: Passthrough Chain Inputs- If input is string, use new RunnablePassthrough() - If input is object, use arrow (=>) function which takes the object as input and extracts the desired key How-to: Bind kwargs (keyword args) - Instance Method: runnable.bind({ ...kwargs })- e.g. Bind Functions to OpenAI Model: model.bind({ functions: [ ...schemas ], function_call: { ... } }) How-to: Fallback to another Runnable- Instance: runnable.withFallbacks({ fallbacks: [ ...runnables ] })Passthrough and MapMain LoopRunnable MapRunnable Bq: passthroughIndividual RunnablesMain LoopPrompt TemplateMain LoopLLM ModelMain LoopChat ModelMain LoopRetrieverExamples of Runnable ChainsMain Loopa.k.a. LLM ChainPrompt TemplateLLM ModelOutput Parsera.k.a. Conversational Retrieval ChainQuestion LLM ChainRunnable MapPassthroughsRetrieversAnswer LLM Chaina.k.a. LLM Tool ChainPrompt TemplateLLM ModelOutput ParserTool (e.g. Search)Runnable A (e.g. 
Retriever){ q: val } => { ctx: res }Runnable InterfacesMain LoopRunnableABCMain LoopRunnableAMain LoopRunnableA.1A.2A.n...LangChain JS Expression Language Cookbook: https://js.langchain.com/docs/guides/expression_language/cookbooklist of chat messagesPromptValueobjecta list of documentsprompt stringtext or a list of docsChatMessage{ q: val }batchstream{ q: val, ctx: res }invokeresponse string \ No newline at end of file diff --git a/svgs/langchain-modules.svg b/svgs/langchain-modules.svg index 57c2471..c34693c 100644 --- a/svgs/langchain-modules.svg +++ b/svgs/langchain-modules.svg @@ -1 +1 @@ -Model I/OPromptsTemplateSelectorLanguage ModelsChatLLMOutput ParsersStructuredJSONMemoryBufferKV DBVector DBSQL DBData ConnectionVector StoresMemoryEmbeddingSelf-HostedBaaSDocument LoadersFileFolderWebDocument Transformers / SplittersDocument RetrieversTextCodeTokenVector DBBaaSWeb APIChainsSequentialConversational QARetrieval QAFoundational LLMDocumentAgentsExecutors (Chains)Plan-ExecuteReAct(Reason-Act)OpenAIToolsStandaloneCollectionCallbacksLangSmithConsoleCustomLangChain (JS) Modules Overview v0.2.0 for npm:langchain@0.0.114 (Last Updated: July 22, 2023) by @zhanghaili0610in/outin/out \ No newline at end of file +Model I/OPromptsTemplateSelectorLanguage ModelsChatLLMOutput ParsersStructuredJSONMemoryBufferKV DBVector DBSQL DBData ConnectionVector StoresMemoryEmbeddingSelf-HostedBaaSDocument LoadersFileFolderWebDocument Transformers / SplittersDocument RetrieversTextCodeTokenVector DBBaaSWeb APIChainsSequentialConversational QARetrieval QAFoundational LLMDocumentAgentsExecutors (Chains)Plan-ExecuteReAct(Reason-Act)OpenAIToolsStandaloneCollectionCallbacksLangSmithConsoleCustomLangChain (JS) Modules Overview v0.2.0 for npm:langchain@0.0.114 (Last Updated: July 22, 2023) by @zhanghaili0610in/outin/out \ No newline at end of file diff --git a/svgs/langchain-parent-document-retriever.svg b/svgs/langchain-parent-document-retriever.svg index d1d6ac9..6e8dfe9 100644 --- 
a/svgs/langchain-parent-document-retriever.svg +++ b/svgs/langchain-parent-document-retriever.svg @@ -1 +1 @@ -Text Splitter Vector Store Storage (Memory) LLM Model Main Loop L1L2Ln L1L2LnL1.C1L1.C2L2.C1Ln.CmL1L2LnL2.C2 Lx.CiLy.CjL1.C1L1.C2L2.C1Ln.CmLz.Ck LxLyLzParent Document Retriever Diagram Authored by @clusteredbytesRedrafted by @zhanghaili0610parent large chunksSplit documents to large chunksLLM response to given questionSplit each large chunk to small chunksStore large chunks to (memory) store as key-value pairs(key: UUID, value: chunk content)similar small chunksSimilarity search over the given question [embeddings]Plug in large chunks into user's query prompt as context data, call LLMStore small chunks [embeddings] to vector store(metadata: uuid of the parent large chunk)Get large chunks per given id list (from small chunks: x, y, z) \ No newline at end of file +Text SplitterVector StoreStorage (Memory)LLM ModelMain LoopL1L2LnL1L2LnL1.C1L1.C2L2.C1Ln.CmL1L2LnL2.C2Lx.CiLy.CjL1.C1L1.C2L2.C1Ln.CmLz.CkLxLyLzParent Document Retriever Diagram Authored by @clusteredbytesRedrafted by @zhanghaili0610parent large chunksSplit documents to large chunksLLM response to given questionSplit each large chunk to small chunksStore large chunks to (memory) store as key-value pairs(key: UUID, value: chunk content)similar small chunksSimilarity search over the given question [embeddings]Plug in large chunks into user's query prompt as context data, call LLMStore small chunks [embeddings] to vector store(metadata: uuid of the parent large chunk)Get large chunks per given id list (from small chunks: x, y, z) \ No newline at end of file diff --git a/svgs/langchain-use-cases.svg b/svgs/langchain-use-cases.svg index 4ae6ac1..af93618 100644 --- a/svgs/langchain-use-cases.svg +++ b/svgs/langchain-use-cases.svg @@ -1 +1 @@ -User🦜️🔗 LangChain-backed ServiceMain LoopConversation ChainChat ModelMemoryBaaS / SaaSUpstash RedisChatBotUserStuff / MapReduce / Refine Document ChainText 
SplitterLLM ModelSummarizationChatBot withWeb SearchUserConversational ReAct AgentChat ModelSearch ToolSerpApiQA & Chat over DocumentsUserConversational Retrieval QA ChainChat ModelMemoryVector StoreText SplitterUpstash RedisPineconeEmbedding My Favorite LangChain Use Cases v0.1.0 (Updated: Aug 6, 2023) by @zhanghaili0610UserRole Play Writing"Runnable" ChainLLM ModelChat ModelPrompt Templates3. message3. call8. load & save chat history5. message(3. google it)1. files7. query by retriever2. set pipe2. text to docs1. message9. message3. docs to vectors1. prepare prompts2. call4. invoke2. files to docs4. message4. summary3. load & save chat history1. message1. text & message2. call6. call4. upsert vectors4. message5. message \ No newline at end of file +User🦜️🔗 LangChain-backed ServiceMain LoopConversation ChainChat ModelMemoryBaaS / SaaSUpstash RedisChatBotUserStuff / MapReduce / Refine Document ChainText SplitterLLM ModelSummarizationChatBot withWeb SearchUserConversational ReAct AgentChat ModelSearch ToolSerpApiQA & Chat over DocumentsUserConversational Retrieval QA ChainChat ModelMemoryVector StoreText SplitterUpstash RedisPineconeEmbeddingMy Favorite LangChain Use Cases v0.1.0 (Updated: Aug 6, 2023) by @zhanghaili0610UserRole Play Writing"Runnable" ChainLLM ModelChat ModelPrompt Templates3. message3. call8. load & save chat history5. message(3. google it)1. files7. query by retriever2. set pipe2. text to docs1. message9. message3. docs to vectors1. prepare prompts2. call4. invoke2. files to docs4. message4. summary3. load & save chat history1. message1. text & message2. call6. call4. upsert vectors4. message5. message \ No newline at end of file
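Editor's note: the Runnable pattern described in langchain-js-runnable-chain.svg (pipe, RunnableSequence.from, RunnablePassthrough, runnable maps) can be sketched in plain TypeScript. This is a hypothetical mini-implementation, synchronous for brevity; the names mirror LangChain.js but this is not the library's actual (async, far richer) API.

```typescript
// Hypothetical mini-implementation of the Runnable pattern from the
// cheatsheet. Synchronous and simplified; NOT the real LangChain.js API.

class Runnable<I, O> {
  constructor(private fn: (input: I) => O) {}

  // invoke: call the chain on a single input
  invoke(input: I): O {
    return this.fn(input);
  }

  // batch: call the chain on a list of inputs
  batch(inputs: I[]): O[] {
    return inputs.map((i) => this.invoke(i));
  }

  // Instance-method chaining, like runnable.pipe(runnable)
  pipe<N>(next: Runnable<O, N>): Runnable<I, N> {
    return new Runnable((input: I) => next.invoke(this.invoke(input)));
  }

  // Static chaining, like RunnableSequence.from([ ...runnables ])
  static from(runnables: Runnable<any, any>[]): Runnable<any, any> {
    return runnables.reduce((acc, r) => acc.pipe(r));
  }
}

// A "runnable map": run several runnables over the same input and collect
// their outputs under named keys (the retrievalResult / original_input
// shape in the auto-docstrings diagram).
function runnableMap<I>(
  steps: Record<string, Runnable<I, any>>
): Runnable<I, Record<string, any>> {
  return new Runnable((input: I) => {
    const out: Record<string, any> = {};
    for (const [key, r] of Object.entries(steps)) out[key] = r.invoke(input);
    return out;
  });
}

// RunnablePassthrough analogue: forwards its input unchanged.
const passthrough = new Runnable((x: string) => x);
// Stand-in for a retriever (a real one would query a vector store).
const fakeRetriever = new Runnable((q: string) => `docs about ${q}`);

// Map the input into { ctx, input }, then format it into a prompt string.
const chain = runnableMap({ ctx: fakeRetriever, input: passthrough }).pipe(
  new Runnable((m: Record<string, any>) => `Q: ${m.input}\nCTX: ${m.ctx}`)
);

console.log(chain.invoke("agents")); // "Q: agents\nCTX: docs about agents"
```

Chaining a runnable map into a prompt-formatting step is exactly the `retrievalResult` / `original_input` shape the auto-docstrings chain uses: the map fans one input out to several runnables, and the next step sees all their outputs at once.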
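Editor's note: the flow in langchain-parent-document-retriever.svg (split into large chunks, store them by UUID, index small chunks with parent metadata, search the small chunks, return the large ones) can be sketched as below. This is a hypothetical, self-contained sketch: substring matching stands in for embedding similarity, and sequential ids stand in for UUIDs; it is not LangChain's implementation.

```typescript
// Hypothetical sketch of the Parent Document Retriever flow. Substring
// matching stands in for embedding search; sequential ids stand in for
// UUIDs. NOT the real LangChain implementation.

interface SmallChunk {
  parentId: string; // metadata: id of the parent large chunk
  text: string;
}

// Naive fixed-size text splitter.
function splitEvery(text: string, size: number): string[] {
  const parts: string[] = [];
  for (let i = 0; i < text.length; i += size) parts.push(text.slice(i, i + size));
  return parts;
}

class ParentDocumentRetriever {
  private docstore = new Map<string, string>(); // key: id, value: large chunk
  private vectorStore: SmallChunk[] = [];       // small chunks + parent metadata

  addDocument(doc: string): void {
    // 1. Split the document into large chunks; store as key-value pairs.
    for (const large of splitEvery(doc, 40)) {
      const id = `chunk-${this.docstore.size}`;
      this.docstore.set(id, large);
      // 2. Split each large chunk into small chunks, tagged with parent id.
      for (const small of splitEvery(large, 10)) {
        this.vectorStore.push({ parentId: id, text: small });
      }
    }
  }

  retrieve(query: string): string[] {
    // 3. "Similarity search" over the small chunks.
    const hits = this.vectorStore.filter((c) => c.text.includes(query));
    // 4. Deduplicate parent ids and return the large chunks for them.
    const ids = Array.from(new Set(hits.map((h) => h.parentId)));
    return ids.map((id) => this.docstore.get(id)!);
  }
}

const retriever = new ParentDocumentRetriever();
retriever.addDocument("x".repeat(50) + "cat" + "y".repeat(27)); // 80 chars
console.log(retriever.retrieve("cat")); // one 40-char parent chunk containing "cat"
```

The point of the split sizes is the trade-off the diagram illustrates: small chunks embed and match more precisely, while the returned parent chunks give the LLM enough surrounding context.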