
Currently working on... #13

@sabszh

Description


Currently working on...

UI implementation for viewing sources in chat

LLM with memory (Bot de Continuonus)

Embeddings

  • Research how the embeddings work in detail
  • Decide whether to use one index or two
  • Figure out the code to upsert new vectors (see the sketch after this list)
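
A minimal sketch of the upsert step, assuming a Pinecone index and OpenAI embeddings (the actual stack may differ). The index name `continuonus`, the `source` metadata field, and the model name are illustrative placeholders, not the project's real configuration:

```python
# Sketch: embed a piece of text and upsert it with a metadata tag
# marking where it came from. Index name, field names, and the
# "source" values are hypothetical.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                     # reads OPENAI_API_KEY from env
pc = Pinecone(api_key="YOUR_PINECONE_KEY")   # placeholder key
index = pc.Index("continuonus")              # hypothetical index name

def upsert_text(doc_id: str, text: str, source: str) -> None:
    """Embed `text` and upsert it with a tag distinguishing its source."""
    embedding = openai_client.embeddings.create(
        model="text-embedding-3-small",
        input=text,
    ).data[0].embedding
    index.upsert(vectors=[{
        "id": doc_id,
        "values": embedding,
        "metadata": {"source": source, "text": text},
    }])

# Original data and summarized past chats could share one index,
# separated only by the metadata tag:
upsert_text("doc-001", "Original entry text ...", source="original")
upsert_text("chat-2024-06-01", "Summary of a past chat ...", source="chat_summary")
```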

Summarize and upsert

  • Look into summarization techniques (abstractive vs. extractive summarization)
  • Design and implement the chat summarization pipeline (see the sketch after this list)
  • Extract and process chat data for summarization
  • Populate and update the vector store with summarized chat data
  • Deploy the summarization pipeline as part of the RAG chain
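
One possible shape for an abstractive summarization step, assuming an OpenAI chat model; the prompt wording and model name are illustrative, and the final call reuses the hypothetical `upsert_text` helper from the embeddings sketch above:

```python
# Sketch: condense a chat transcript with an LLM (abstractive summarization)
# and push the summary into the vector store for later retrieval.
from openai import OpenAI

client = OpenAI()

def summarize_chat(messages: list[dict]) -> str:
    """Return a short abstractive summary of a chat transcript."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": "Summarize this conversation in 3-5 sentences, "
                        "keeping the user's questions and reactions."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# After a session: summarize, then upsert with a chat_summary tag so the
# RAG chain can retrieve it alongside (or separately from) the original data.
chat = [
    {"role": "user", "content": "What did participants say about hope?"},
    {"role": "assistant", "content": "Several entries mention hope for ..."},
]
summary = summarize_chat(chat)
upsert_text("chat-2024-06-02", summary, source="chat_summary")  # helper from the sketch above
```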

Questions to ask

  • How do the embeddings / vector DB work?
  • Do the past queries and the original data live in two different DBs?
  • Or do they live in the same one with different metadata tags? If the latter, how
    do the DB / embeddings get updated after a new query comes in? (see the sketch below)
  • Another question: does the app start to ask the user to elaborate on their thoughts / reactions
    to the answer?
  • Asking for "clarification" is a bit misleading, because LLMs aren't capable of real understanding.
  • But we could ask the human user to elaborate on some aspects of their interpretation.
  • Maybe we should focus on whatever is interesting, e.g. an emotional reaction or something else.
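
To make the one-index-vs-two question concrete: a single index can keep original data and past-query summaries apart at query time with a metadata filter. The sketch below reuses the hypothetical `openai_client`, `index`, and `source` field from the embeddings sketch above and uses Pinecone's filter syntax; it illustrates the idea, not the project's actual setup:

```python
# Sketch: one index, two "collections" separated by a metadata filter.
query_embedding = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input="How do people describe their hopes for the future?",
).data[0].embedding

# Retrieve only original data ...
original_hits = index.query(
    vector=query_embedding,
    top_k=5,
    filter={"source": {"$eq": "original"}},
    include_metadata=True,
)

# ... or only summarized past chats, from the same index.
chat_hits = index.query(
    vector=query_embedding,
    top_k=5,
    filter={"source": {"$eq": "chat_summary"}},
    include_metadata=True,
)
```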
