LetsAsk AI

Pricing Type: Paid
Platform: API
Platform: Web

AI Chatbot
Other AI Tools
LetsAsk.AI is not simply a wrapper around ChatGPT; it layers retrieval-augmented generation (RAG) on top of OpenAI’s GPT-3.5-turbo and GPT-4 endpoints to ensure answers are grounded solely in the user-supplied knowledge base. The workflow is split into four phases:

Data Ingestion

Users can upload PDFs, DOCX, TXT, CSV, or point the crawler to an entire domain or sub-directory. The crawler respects robots.txt and extracts clean text while discarding scripts, navigation, and duplicate content. During ingestion, files are chunked into overlapping passages of approximately 500–1,000 characters, tokenized, and embedded using OpenAI’s text-embedding-ada-002 model.
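The chunking step above can be sketched in a few lines. This is a minimal illustration, not LetsAsk.AI's actual implementation; the function name `chunk_text` and the default sizes (an 800-character window with 200 characters of overlap, within the 500–1,000 range the text describes) are assumptions for the example:

```python
def chunk_text(text: str, size: int = 800, overlap: int = 200) -> list[str]:
    """Split text into overlapping passages of roughly `size` characters.

    Each passage shares `overlap` characters with its predecessor so that
    sentences cut at a chunk boundary still appear intact in one passage.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks
```

Each resulting passage would then be sent to the embedding model (text-embedding-ada-002) to produce one vector per chunk.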

Vector Storage & Retrieval

Embedded vectors are stored in an isolated container encrypted with AES-256. When a visitor asks a question, the query is embedded in real time, and a cosine-similarity search retrieves the top-k (typically k=5–7) most relevant passages. To reduce latency, the vector index is held in memory with Redis caching.
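The retrieval step is a straightforward nearest-neighbour search. A minimal sketch of cosine-similarity top-k retrieval follows; the names `cosine` and `top_k`, and the in-memory list index, are illustrative assumptions (a production system would use a dedicated vector index rather than a linear scan):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec: list[float], index: list[tuple[str, list[float]]], k: int = 5) -> list[str]:
    """Return the ids of the k passages most similar to the query vector.

    `index` is a list of (passage_id, embedding) pairs held in memory.
    """
    scored = [(cosine(query_vec, vec), pid) for pid, vec in index]
    scored.sort(reverse=True)  # highest similarity first
    return [pid for _, pid in scored[:k]]
```

With k=5–7 as the text states, the returned passage ids are then used to look up the original text chunks for prompt assembly.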

Context Assembly & Prompt Engineering

The platform constructs a prompt that combines the retrieved passages, a static system instruction (“You are a helpful assistant answering only from the provided context”), and conversation history. The prompt is sent to GPT-3.5-turbo by default; Pro and Business tiers can toggle GPT-4 for higher accuracy on complex domains.
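Prompt assembly in the chat-completion message format could look like the sketch below. The helper name `build_messages` is an assumption; only the system instruction string is quoted from the description above:

```python
SYSTEM_INSTRUCTION = (
    "You are a helpful assistant answering only from the provided context"
)

def build_messages(passages: list[str], history: list[dict], question: str) -> list[dict]:
    """Combine retrieved passages, the static system rule, and prior turns
    into a chat-completion `messages` payload."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    messages = [{
        "role": "system",
        "content": f"{SYSTEM_INSTRUCTION}\n\nContext:\n{context}",
    }]
    messages.extend(history)  # prior {"role": ..., "content": ...} turns
    messages.append({"role": "user", "content": question})
    return messages
```

The assembled list would be posted to the GPT-3.5-turbo endpoint by default, or to GPT-4 when a Pro/Business user has toggled it on.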

Response Streaming & Moderation

Answers stream back to the user’s browser via Server-Sent Events. An optional moderation layer (OpenAI’s moderation endpoint + custom keyword filters) blocks disallowed topics and PII leakage. All messages are logged for analytics, but raw files are deleted after processing unless the user explicitly enables “retain source files” for later retraining.
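The custom keyword-filter half of the moderation layer, combined with SSE framing, can be sketched as follows. The blocklist contents and the names `moderate` and `stream_response` are hypothetical; a real deployment would also call OpenAI's moderation endpoint, which is omitted here:

```python
BLOCKLIST = {"password", "ssn"}  # illustrative custom keyword filter

def moderate(chunk: str) -> bool:
    """Return True if a streamed chunk is safe to forward to the browser."""
    lowered = chunk.lower()
    return not any(term in lowered for term in BLOCKLIST)

def stream_response(token_iter):
    """Frame model output as Server-Sent Events, dropping flagged chunks.

    Each safe chunk is emitted as an SSE `data:` line terminated by a
    blank line, which the browser's EventSource API consumes.
    """
    for token in token_iter:
        if moderate(token):
            yield f"data: {token}\n\n"
```

In practice the filter would operate on buffered spans rather than raw tokens, since a disallowed term can straddle a token boundary.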
Copyright © 2025 CogAINav.com. All rights reserved.