RAG
How This Works (MVP v1 - RAG Agent)
The user builds the Agent from the Agent Builder UI (MVP version shown in the image).
The user provides a name and description of what the agent does, along with the specific parameters it needs to operate: “Instructions” and “Knowledge”.
Once the user saves the Agent build, they can invoke it (via semantic routes) and interact with it through the Chat UI.
On the back end, the Agent sends an event template to the Nostr implementation for execution.
Communication between the OpenAgents platform's Laravel codebase and Nostr is performed through a gRPC client intermediary (OpenAgents gRPC Client documentation here).
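As a rough illustration of the transport, the snippet below sketches how an event template could be forwarded to the pool over gRPC. Every name here (the proto file, the `PoolConnector` service, the `RequestJob` method, the address, and the request fields) is a hypothetical placeholder, not the documented OpenAgents gRPC Client API.

```typescript
// Hypothetical sketch: forwarding an event template to Nostr through the
// gRPC client intermediary. All service/method/field names are placeholders.
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

const packageDef = protoLoader.loadSync("openagents.proto", { keepCase: true });
const proto = grpc.loadPackageDefinition(packageDef) as any;

// Address of the intermediary is an assumption for this example.
const client = new proto.openagents.PoolConnector(
  "127.0.0.1:5000",
  grpc.credentials.createInsecure()
);

// The real event template is compiled from the params listed below;
// here only a minimal placeholder payload is sent.
client.RequestJob(
  { eventTemplate: JSON.stringify({ kind: "rag-job" }) },
  (err: Error | null, reply: unknown) => {
    if (err) throw err;
    console.log("job dispatched:", reply);
  }
);
```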
The event template is compiled with the following params (a sketch of the compiled template follows the list):
- `poolAddress` = the host
- `query` = the LLM-generated RAG query built from the user input (“Instructions”) + chat history (Thread)
- `documents` = knowledge files as an array of URLs (File)
- `k` = how many chunks to return
- `max_tokens` = number of tokens per text chunk
- `overlap` = overlap between chunks
- `encryptFor` = encrypt for a specific provider, so only that provider can see the content
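To make the parameter list concrete, here is a sketch of how the compiled event template might look in code. The field names follow the list above, but the surrounding type and the example values are assumptions, not the exact wire format.

```typescript
// Sketch of the compiled event template params (types and values are illustrative).
interface RagEventTemplate {
  poolAddress: string;   // the host
  query: string;         // LLM-generated RAG query (Instructions + Thread history)
  documents: string[];   // knowledge files as an array of URLs (File)
  k: number;             // how many chunks to return
  max_tokens: number;    // number of tokens per text chunk
  overlap: number;       // overlap between chunks
  encryptFor?: string;   // encrypt for a specific provider
}

const eventTemplate: RagEventTemplate = {
  poolAddress: "wss://pool.example.org",                 // assumed host value
  query: "Summarize the refund policy for EU customers", // example query
  documents: ["https://example.org/files/policy.pdf"],
  k: 4,
  max_tokens: 512,
  overlap: 64,
  encryptFor: "<provider-pubkey>",                       // placeholder provider key
};
```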
RAG Agent Pipeline
The above is a representation of a RAG Agent pipeline.
OpenAgents’ RAG Agent handles these phases with the following plugins/standalone nodes:
- Retrieve Document: Openagents Document Retrieval Node
- Embedding model: Openagents Embeddings Node
- Vector DB: Openagents Search Node
These three nodes are coordinated by the RAG Coordinator Extism plugin.
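The coordination itself happens inside the Extism plugin; the sketch below is only a conceptual illustration of the three-node flow (retrieve documents, embed chunks, query the vector index). All function names, signatures, and the toy similarity metric are invented for illustration and do not reflect the actual node interfaces.

```typescript
// Conceptual sketch of the RAG pipeline the coordinator drives.
// All functions are hypothetical stand-ins for the three OpenAgents nodes.

async function retrieveDocuments(urls: string[]): Promise<string[]> {
  // Document Retrieval Node: fetch each knowledge file and split it into chunks.
  const texts = await Promise.all(urls.map((u) => fetch(u).then((r) => r.text())));
  return texts.flatMap((t) => t.match(/[\s\S]{1,2048}/g) ?? []);
}

async function embed(chunks: string[]): Promise<number[][]> {
  // Embeddings Node: turn each chunk into a vector (placeholder implementation).
  return chunks.map((c) => [c.length]); // stand-in for a real embedding model
}

function search(queryVec: number[], vectors: number[][], chunks: string[], k: number): string[] {
  // Search Node: rank chunks by similarity to the query vector (toy metric).
  return vectors
    .map((v, i) => ({ i, score: -Math.abs(v[0] - queryVec[0]) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ i }) => chunks[i]);
}

// Coordinator flow (simplified): documents -> chunks -> embeddings -> top-k context.
async function runRagPipeline(query: string, documents: string[], k: number): Promise<string[]> {
  const chunks = await retrieveDocuments(documents);
  const vectors = await embed(chunks);
  const [queryVec] = await embed([query]);
  return search(queryVec, vectors, chunks, k);
}
```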