LLMs are constrained by context window length limitations. Taking long article translation as an example: since we cannot process the entire content at once, we need to segment the article for processing.
💬 Processing Methods:
Method 1: Switch to a model that supports a longer context window.
Method 2: Chunking, combined with strategies to maintain context coherence across chunks.
When processing long texts, we need to adopt a chunking strategy<ref>[https://ihower.tw/blog/archives/12373 使用繁體中文評測 RAG 的 Chunking 切塊策略 – ihower { blogging }]</ref>. To help the model understand the context of previous chapters when processing subsequent paragraphs, an effective approach is '''Chunking with Previous-Article Summarization''':
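The idea can be sketched as follows. This is a minimal illustration, not a production implementation: the text is split into chunks, each chunk is processed together with a rolling summary of everything before it, and the summary is updated after each chunk. The `call_llm` function is a hypothetical placeholder standing in for a real model API call.

```python
# Sketch of "chunking with previous-article summarization".
# `call_llm` is a hypothetical stand-in for an actual LLM API call.

def call_llm(prompt: str) -> str:
    """Placeholder: a real implementation would call a model API here."""
    return f"[model output for prompt of {len(prompt)} chars]"

def split_into_chunks(text: str, chunk_size: int) -> list[str]:
    """Naive fixed-size chunking; real systems usually split on
    paragraph or sentence boundaries instead."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def translate_long_text(text: str, chunk_size: int = 2000) -> list[str]:
    summary = ""        # rolling summary of all previously processed chunks
    outputs = []
    for chunk in split_into_chunks(text, chunk_size):
        # Prepend the summary so the model keeps cross-chunk coherence
        # (consistent terminology, pronoun references, narrative context).
        prompt = (
            f"Summary of the preceding text:\n{summary}\n\n"
            f"Translate the following passage, staying consistent "
            f"with the context above:\n{chunk}"
        )
        outputs.append(call_llm(prompt))
        # Fold the current chunk into the rolling summary for the next round.
        summary = call_llm(f"Briefly summarize:\n{summary}\n{chunk}")
    return outputs
```

Because only the compact summary (not all previous chunks) is carried forward, each prompt stays well within the context window regardless of total document length.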