No complex pipelines. No vector database management. Just one-line SDK integration and you're ready to retrieve and generate.
import { RavaClient } from '@rava-ai/sdk';

// Initialize once during app startup
RavaClient.initialize({
  apiKey: process.env.RAVA_API_KEY!,
  baseUrl: process.env.RAVA_BASE_URL ?? 'https://rava-ydvd.onrender.com'
});

const rava = RavaClient.getInstance();

await rava.ingest({
  name: 'product-docs',
  content: 'RAG lets you answer questions from your own data.',
  metadata: { type: 'text' }
});

const result = await rava.query({
  question: 'How do I initialize the SDK?',
  top_k: 5
});

console.log(result.answer);

Three simple steps to add RAG to your application
Upload your data: text files, PDFs, GitHub repos, URLs. Rava handles chunking and embedding automatically.
Query using natural language. Our vector search instantly finds the most relevant context from your data.
Get back high-quality answers powered by Groq's fast LLM inference and your custom data.
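Step one often covers more than a single document. A minimal sketch of batching several local files through the `ingest` call shown above; the `toIngestPayloads` helper and the source list are illustrative, not part of the SDK:

```typescript
// Shape of the ingest payload used in the snippet above.
interface IngestPayload {
  name: string;
  content?: string;
  filePath?: string;
  metadata: { type: string };
}

// Hypothetical helper: map a list of local sources to ingest payloads.
function toIngestPayloads(sources: { name: string; path: string }[]): IngestPayload[] {
  return sources.map((s) => ({
    name: s.name,
    filePath: s.path,
    metadata: { type: 'text' },
  }));
}

// Usage, assuming `rava` is the initialized client from above:
// for (const payload of toIngestPayloads(sources)) {
//   await rava.ingest(payload);
// }
```

Keeping the payload construction in a small pure helper makes it easy to add per-source metadata later without touching the ingestion loop.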
Everything you need to build production-ready RAG applications
Drop our npm package into your project and start building instantly.
Ingest from text, files, GitHub repositories, and URLs seamlessly.
Low-latency vector search powered by PostgreSQL and pgvector.
Lightning-fast LLM generation using Groq for sub-second responses.
Each project is isolated with its own vector space and API keys.
Scalable infrastructure built in Go for maximum reliability and performance.
Simple, intuitive APIs designed for developers. No boilerplate, no headaches.
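Since each project is isolated behind its own API key, pointing an app at a different project is just a matter of which key you initialize with. A sketch of resolving the key at startup; the env-var names and helper are illustrative, only `RavaClient.initialize` comes from the SDK:

```typescript
// Hypothetical per-project keys; name the env vars however your deployment prefers.
const projectKeys: Record<string, string | undefined> = {
  docs: process.env.RAVA_DOCS_API_KEY,
  support: process.env.RAVA_SUPPORT_API_KEY,
};

// Fail fast at startup if the target project has no key configured.
function resolveApiKey(project: string): string {
  const key = projectKeys[project];
  if (!key) throw new Error(`No API key configured for project "${project}"`);
  return key;
}

// RavaClient.initialize({ apiKey: resolveApiKey('docs') });
```

Failing fast here surfaces a missing key at boot rather than as an authentication error on the first query.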
// Reuse the singleton instance anywhere
const rava = RavaClient.getInstance();

await rava.ingest({
  name: 'knowledge-base',
  filePath: './data.txt',
  metadata: { type: 'text' }
});

const response = await rava.query({
  question: 'Summarize the ingested file',
  history: [
    { role: 'user', content: 'Keep it short.' }
  ],
  top_k: 3
});

return response.answer;

From chatbots to internal tools, Rava powers them all
Build context-aware chatbots that understand your data
Semantic search over your entire documentation
Create internal assistants for your team
Embed AI assistance directly into your products
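For the chatbot case, the `history` parameter shown in the snippet above carries prior turns into each query. A sketch of keeping a bounded conversation history between calls; `MAX_TURNS` and the `appendTurn` helper are illustrative, not part of the SDK:

```typescript
type Role = 'user' | 'assistant';
interface ChatMessage { role: Role; content: string; }

const MAX_TURNS = 10; // keep only the most recent messages to bound prompt size

// Append a turn and drop the oldest messages beyond the cap.
function appendTurn(history: ChatMessage[], turn: ChatMessage): ChatMessage[] {
  return [...history, turn].slice(-MAX_TURNS);
}

// Usage, assuming `rava` is the initialized client from above:
// let history: ChatMessage[] = [];
// const res = await rava.query({ question, history, top_k: 5 });
// history = appendTurn(history, { role: 'user', content: question });
// history = appendTurn(history, { role: 'assistant', content: res.answer });
```

Trimming on every append keeps long-running conversations from growing the context indefinitely.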
Enterprise-grade infrastructure powering production applications
High-performance backend built in Go for reliability and scale
Distributed vector search with PostgreSQL and pgvector
Sub-second LLM responses via Groq's inference network
Get your API key and start building in minutes. No credit card required.