LLMs Usage FAQ

906 bytes added, 11 August 2025
💬 Processing Methods:


Method 1: Switch to a model that supports a longer context window and a larger output limit, such as Google Gemini:


# GPT-4o: "16,384 max output tokens"<ref>[https://platform.openai.com/docs/models/gpt-4o Model - OpenAI API]</ref> equivalent to approximately 5,461 Chinese characters (16,384/3)
# gemini-2.5-pro: "65,536 max output tokens"<ref>[https://ai.google.dev/gemini-api/docs/models#gemini-2.5-pro Gemini 2.5 Pro]</ref> equivalent to approximately 21,845 Chinese characters (65,536/3)
# GPT-5: "128,000 max output tokens"<ref>[https://platform.openai.com/docs/models/gpt-5 Model - OpenAI API]</ref> equivalent to approximately 42,666 Chinese characters (128,000/3)
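The arithmetic behind the list above can be sketched as follows. This is a minimal illustration: the 3-tokens-per-character ratio is the rule of thumb used in the list, and actual ratios vary by model tokenizer.

```python
# Rough output-capacity estimates using the ~3 tokens per Chinese
# character rule of thumb from the list above. Actual ratios depend
# on the model's tokenizer.
MAX_OUTPUT_TOKENS = {
    "gpt-4o": 16_384,
    "gemini-2.5-pro": 65_536,
    "gpt-5": 128_000,
}

def approx_chinese_chars(max_tokens: int, tokens_per_char: int = 3) -> int:
    """Estimate how many Chinese characters fit in a token budget."""
    return max_tokens // tokens_per_char

for model, limit in MAX_OUTPUT_TOKENS.items():
    print(f"{model}: ~{approx_chinese_chars(limit):,} characters")
```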
 
Method 2: Start a new conversation and carry the previous conversation's content over to it. For an existing conversation, you can try this prompt:
 
<pre>
As the first prompt for a new conversation, please organize our previous dialogue into:
1. Clear operational steps
2. Instructions to verify the success of each prerequisite step
</pre>
 
Method 3: Chunking strategy with context continuity maintenance


When processing long texts, we need to adopt a chunking strategy<ref>[https://ihower.tw/blog/archives/12373 使用繁體中文評測 RAG 的 Chunking 切塊策略 – ihower { blogging }]</ref>. To help the model understand the context of earlier chapters when processing later passages, an effective approach is '''chunking with previous-article summarization''':
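The strategy can be sketched as follows. This is a minimal illustration, not a canonical implementation: <code>process</code> and <code>summarize</code> are hypothetical stand-ins for LLM calls, and a real splitter would respect paragraph boundaries rather than cutting at fixed offsets.

```python
# Minimal sketch of "chunking with previous-article summarization".
# process() and summarize() stand in for LLM calls; both are hypothetical.

def chunk_text(text: str, chunk_size: int = 2000) -> list[str]:
    """Split text into fixed-size chunks (a real splitter would
    respect paragraph or sentence boundaries)."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def process_with_summaries(text, process, summarize, chunk_size=2000):
    results = []
    running_summary = ""
    for chunk in chunk_text(text, chunk_size):
        # Prepend a summary of everything processed so far, so the
        # model keeps context across chunk boundaries.
        prompt = (
            f"Summary of previous chapters:\n{running_summary}\n\n"
            f"Current passage:\n{chunk}"
        )
        results.append(process(prompt))
        # Fold the new chunk into the running summary for the next step.
        running_summary = summarize(running_summary + "\n" + chunk)
    return results
```

Carrying a running summary, rather than the full preceding text, keeps each prompt within the model's context window while preserving continuity between chunks.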
