LLMs Usage FAQ: Difference between revisions

129 bytes added, 8 June 2025
💬 Processing Methods: Chunking and Maintaining Context Coherence Strategies


When processing long texts, we need to adopt a chunking strategy<ref>[https://ihower.tw/blog/archives/12373 Evaluating RAG Chunking Strategies on Traditional Chinese – ihower { blogging }]</ref>. To help the model retain the context of earlier chapters while it processes later passages, an effective approach is a '''chunking strategy with previous-chapter summarization''':
# First summarize the previous chapters
# Input the summary together with the full text of the next chapter to be processed to the AI
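The two steps above can be sketched as follows. This is a minimal illustration, not a specific library's API: the <code>naive_summarize</code> stub is a hypothetical placeholder for the LLM call that would produce the real summary, and the prompt wording is an assumption.

```python
def naive_summarize(text, max_chars=120):
    """Stub summarizer: keeps the leading text up to max_chars.
    In a real pipeline this would be an LLM summarization call (assumption)."""
    if len(text) <= max_chars:
        return text
    return text[:max_chars].rsplit(" ", 1)[0]

def build_prompts(chapters):
    """Chunking with previous-chapter summarization: each chapter's prompt
    carries a running summary of all earlier chapters."""
    prompts = []
    summary = ""
    for chapter in chapters:
        if summary:
            # Step 2: send the summary of earlier chapters together
            # with the full text of the current chapter.
            prompt = (f"Summary of previous chapters:\n{summary}\n\n"
                      f"Current chapter:\n{chapter}")
        else:
            prompt = f"Current chapter:\n{chapter}"
        prompts.append(prompt)
        # Step 1 (for the next iteration): refresh the running summary.
        summary = naive_summarize((summary + " " + chapter).strip())
    return prompts

prompts = build_prompts(["Chapter one text.", "Chapter two text.", "Chapter three text."])
```

Each prompt stays bounded in size because only a summary of the earlier chapters is carried forward, not their full text.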
