LLMs Usage FAQ
Common Questions and Answers about Using LLMs

{{LanguageSwitcher | content = [[LLMs Usage FAQ | EN]], [[LLMs Usage FAQ in Mandarin | 漢字]] }}
=== Force Traditional Chinese Output ===
* Add #zh-TW before your question <ref>[https://learntech.tw/chatgpt-traditional-chinese/ ChatGPT: How to Force Traditional Chinese Output | Learn Technology, Save Time - Learn Technology]</ref>
* Or say "Use Traditional Chinese commonly used in Taiwan"
<pre>
Use Traditional Chinese commonly used in Taiwan:
Rules
- Use full-width punctuation marks and add spaces between Chinese and English text.
- Below is a common AI terminology correspondence table (English -> Traditional Chinese):
* Transformer -> Transformer
* Token -> Token
* LLM / Large Language Model -> 大語言模型
* Zero-shot -> 零樣本
* Few-shot -> 少樣本
* AI Agent -> AI 代理
* AGI -> 通用人工智慧
- The following is a table of common Taiwanese terms (English -> Traditional Chinese):
* create / created -> 建立
* quality -> 品質
* information -> 資訊
* message -> 訊息
* store -> 儲存
* search -> 搜尋
* view -> 檢視 / 檢視表 (never 視圖)
* data -> 資料
* object -> 物件
* queue -> 佇列
* stack -> 堆疊
* invocation -> 呼叫
* code -> 程式碼
* running -> 執行
* library -> 函式庫
* building -> 建構
* package -> 套件
* video -> 影片
* class -> 類別
* component -> 元件
* transaction -> 交易
* code generation -> 程式碼產生
* scalability -> 延展性
* metadata -> Metadata
* clone -> 複製
* memory -> 記憶體
* built-in -> 內建
* global -> 全域
* compatibility -> 相容性
* function -> 函式
* document -> 文件
* example -> 範例
* blog -> 部落格
* realtime -> 即時
* integration -> 整合
</pre>
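The rules above can also be assembled programmatically when you seed many conversations with the same system prompt. A minimal sketch, assuming nothing beyond the standard library — the `build_zh_tw_prompt` helper and the glossary subset below are illustrative, not part of any platform's API:

```python
# Build a zh-TW enforcement system prompt from a terminology table.
# GLOSSARY is a small subset of the table above; extend it as needed.

GLOSSARY = {
    "information": "資訊",
    "message": "訊息",
    "store": "儲存",
    "search": "搜尋",
    "data": "資料",
    "object": "物件",
    "quality": "品質",
}

def build_zh_tw_prompt(glossary: dict) -> str:
    """Assemble a system prompt forcing Traditional Chinese (Taiwan) output."""
    lines = [
        "Use Traditional Chinese commonly used in Taiwan:",
        "Rules",
        "- Use full-width punctuation marks and add spaces between Chinese and English text.",
        "- Terminology table (English -> Traditional Chinese):",
    ]
    # One bullet per glossary entry, matching the table format above.
    lines += [f"  * {en} -> {zh}" for en, zh in glossary.items()]
    return "\n".join(lines)

prompt = build_zh_tw_prompt(GLOSSARY)
print(prompt)
```

The resulting string can be passed as the system (or first user) message of whatever chat API you use; the terminology table travels with every new conversation instead of being retyped.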
=== How to Solve AI Forgetting Training Content ===
📝 Inquiry:
<pre>
I'd like to ask a follow-up question: if we adopt a "layer-by-layer prompt optimization" approach to improve AI performance, might we encounter the following situation: after multiple rounds of prompt optimization, the AI does learn the relevant skills and performs well, but after some time it forgets these trained capabilities? I want to understand whether current mainstream AI model platforms have stable memory retention - that is, can they continuously remember the training prompts and guidance we've previously provided? Sometimes the AI's memory seems unstable: during the same project, content and requirements that I've already explained to the AI in detail need to be re-explained from scratch after a while, which makes me question the continuity of AI learning.
</pre>
💬 Response:
Indeed, early AI models, due to shorter context window limitations, were prone to drifting from their original settings. When I encountered such situations, I usually chose to start a completely new conversation and restart the entire interaction.

Current AI models have improved significantly in this regard. If this conversation's results are satisfactory, instruct the AI to summarize and consolidate the entire conversation, integrating the accumulated interaction principles and experiences into the initial prompt:
<pre>
Assuming I want to start a new conversation to discuss the same topic, please suggest what complete prompt I should use. This prompt needs to include important content from our entire discussion process:
(1) The core problems and objectives that the original prompt aimed to solve
(2) Important aspects and details related to the original problem that the initial solution method didn't fully consider
</pre>
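This "consolidate, then reseed" workflow can be sketched in code. The message-dict shape (`{"role": ..., "content": ...}`) mirrors common chat APIs, and both helper names are hypothetical, not a specific platform's API:

```python
# Sketch of the consolidation workflow described above: ask the model to
# compress the conversation into a seed prompt, then start fresh with it.

CONSOLIDATION_INSTRUCTION = (
    "Assuming I want to start a new conversation to discuss the same topic, "
    "please suggest what complete prompt I should use. This prompt needs to "
    "include important content from our entire discussion process: "
    "(1) the core problems and objectives that the original prompt aimed to "
    "solve, and (2) important aspects and details that the initial solution "
    "method didn't fully consider."
)

def request_consolidation(history: list) -> list:
    """Append the consolidation request as a final user turn."""
    return history + [{"role": "user", "content": CONSOLIDATION_INSTRUCTION}]

def reseed_conversation(consolidated_prompt: str) -> list:
    """Start a brand-new conversation seeded with the consolidated prompt."""
    return [{"role": "user", "content": consolidated_prompt}]

# Usage: send request_consolidation(history) to the model, take its reply
# as consolidated_prompt, then open a new chat with reseed_conversation(...).
```

Because the consolidated prompt carries everything forward, no reliance is placed on the platform "remembering" the old conversation at all.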
=== References ===
<references />
[[Category: Tools]]
[[Category: Generative AI]]
Revision as of 10:14, 8 June 2025