LetsAsk AI

Pricing Type: Paid
Platform: API
Platform: Web

AI Chatbots
Other AI Tools
LetsAsk.AI is not simply a wrapper around ChatGPT; it layers retrieval-augmented generation (RAG) on top of OpenAI’s GPT-3.5-turbo and GPT-4 endpoints to ensure answers are grounded solely in the user-supplied knowledge base. The workflow is split into four phases:

Data Ingestion

Users can upload PDFs, DOCX, TXT, CSV, or point the crawler to an entire domain or sub-directory. The crawler respects robots.txt and extracts clean text while discarding scripts, navigation, and duplicate content. During ingestion, files are chunked into overlapping passages of approximately 500–1,000 characters, tokenized, and embedded using OpenAI’s text-embedding-ada-002 model.
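The overlapping-chunk step described above can be sketched as follows. This is a minimal illustration, not the platform's actual code; the function name, default sizes, and the use of plain character offsets are assumptions (the description gives only the 500–1,000 character range), and the embedding call to text-embedding-ada-002 would happen afterwards, one request per chunk.

```python
def chunk_text(text: str, size: int = 800, overlap: int = 200) -> list[str]:
    """Split text into overlapping passages of roughly `size` characters.

    Consecutive chunks share `overlap` characters so that sentences cut
    at a boundary still appear whole in at least one passage.
    """
    if size <= overlap:
        raise ValueError("size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping the overlap region
    return chunks
```

Each returned chunk would then be tokenized and sent to the embedding model; the overlap guards against losing context that straddles a chunk boundary.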

Vector Storage & Retrieval

Embedded vectors are stored in an isolated container encrypted with AES-256. When a visitor asks a question, the query is embedded in real time, and a cosine-similarity search retrieves the top-k (typically k=5–7) most relevant passages. To reduce latency, the vector index is held in memory with Redis caching.
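The cosine-similarity top-k retrieval works roughly like the sketch below. In production this would run against an in-memory vector index (with Redis caching, per the description) rather than a Python list; the function names and the tuple representation of stored passages are illustrative assumptions.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec: list[float],
          passages: list[tuple[str, list[float]]],
          k: int = 5) -> list[str]:
    """Return the texts of the k passages most similar to the query vector."""
    ranked = sorted(passages, key=lambda p: cosine(query_vec, p[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]
```

Here the query embedding is compared against every stored vector; a real index would use an approximate-nearest-neighbor structure to keep lookup sub-linear.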

Context Assembly & Prompt Engineering

The platform constructs a prompt that combines the retrieved passages, a static system instruction (“You are a helpful assistant answering only from the provided context”), and conversation history. The prompt is sent to GPT-3.5-turbo by default; Pro and Business tiers can toggle GPT-4 for higher accuracy on complex domains.
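Assembling that prompt into the chat-completion message format might look like this. The system instruction is quoted from the description; the function name and the exact layout of context versus history are assumptions about how the pieces are combined.

```python
SYSTEM_INSTRUCTION = ("You are a helpful assistant answering only "
                      "from the provided context")

def build_messages(passages: list[str],
                   history: list[dict],
                   question: str) -> list[dict]:
    """Combine retrieved passages, the static system rule, prior turns,
    and the new question into one chat-completion payload."""
    context = "\n\n".join(passages)
    messages = [{"role": "system",
                 "content": f"{SYSTEM_INSTRUCTION}\n\nContext:\n{context}"}]
    messages.extend(history)  # earlier user/assistant turns, in order
    messages.append({"role": "user", "content": question})
    return messages
```

The resulting list is what gets sent to GPT-3.5-turbo (or GPT-4 on the higher tiers); keeping the context in the system message makes the "answer only from context" constraint apply to every turn.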

Response Streaming & Moderation

Answers stream back to the user’s browser via Server-Sent Events. An optional moderation layer (OpenAI’s moderation endpoint + custom keyword filters) blocks disallowed topics and PII leakage. All messages are logged for analytics, but raw files are deleted after processing unless the user explicitly enables “retain source files” for later retraining.
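The custom keyword-filter half of the moderation layer could be as simple as the sketch below (the OpenAI moderation endpoint call is a separate API request and is omitted here). The blocked terms and the single PII regex are placeholder assumptions for illustration.

```python
import re

# Hypothetical deny-list and PII patterns; a real deployment would
# maintain these per-chatbot and combine them with the OpenAI
# moderation endpoint's verdict.
BLOCKED_TERMS = {"ssn", "credit card"}
PII_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # US-SSN-shaped numbers

def passes_moderation(text: str) -> bool:
    """Return False if the text hits a blocked keyword or a PII pattern."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False
    return not any(pattern.search(text) for pattern in PII_PATTERNS)
```

A check like this can run on each streamed fragment before it is forwarded over Server-Sent Events, so a disallowed answer is cut off mid-stream rather than after delivery.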
Copyright © 2025 CogAINav.com. All rights reserved.