=== How to Handle AI Forgetting Trained Content ===
📝 Inquiry:
<pre>
I'd like to ask a follow-up question: if we adopt a "layer-by-layer prompt optimization" approach to improve AI performance, might we run into the following situation: after multiple rounds of prompt optimization the AI does learn the relevant skills and performs well, but after some time it forgets these trained capabilities?

I want to understand whether current mainstream AI platforms all have stable memory retention; that is, can they continuously remember the training prompts and guidance we have previously provided?

Sometimes the AI's memory seems unstable. Within the same project, content and requirements I have already explained in detail need to be re-explained from scratch after a while, which makes me question the continuity of AI learning.
</pre>
💬 Response:

Indeed, early AI models, constrained by shorter context windows, were prone to drifting from their original instructions. When I ran into this, I usually chose to start a completely new conversation and restart the whole interaction.

Current AI models have improved considerably in this regard. If you are satisfied with the results of a conversation, I suggest instructing the AI to summarize and consolidate the whole exchange, folding the interaction principles and lessons accumulated during the dialogue back into the initial prompt:
<pre>
Assuming I want to start a new conversation to discuss the same topic, please suggest the complete prompt I should use.

This prompt needs to include the important content from our entire discussion:

(1) The core problems and objectives that the original prompt aimed to solve

(2) Important aspects and details related to the original problem that the initial approach did not fully consider
</pre>
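If you work with a model through an API rather than a chat interface, the consolidation step above can be scripted. The sketch below is a minimal, hypothetical helper: the function name `build_consolidation_request` and the role/content message format are assumptions for illustration, not part of any specific platform's SDK.

```python
# Minimal sketch: package an accumulated conversation into one
# "consolidation" request that asks the model to produce a reusable prompt.
# The role/content dict format follows the common chat-message convention;
# adapt it to whatever SDK you actually use.

CONSOLIDATION_INSTRUCTION = (
    "Assuming I want to start a new conversation to discuss the same topic, "
    "please suggest the complete prompt I should use. It must include: "
    "(1) the core problems and objectives the original prompt aimed to solve, and "
    "(2) important aspects and details the initial approach did not fully consider."
)

def build_consolidation_request(history):
    """Return a new message list: the existing history plus a final user
    turn carrying the consolidation instruction. The original list is
    not modified, so the conversation can continue separately."""
    return history + [{"role": "user", "content": CONSOLIDATION_INSTRUCTION}]

# Example: a conversation that has accumulated context over several turns.
history = [
    {"role": "user", "content": "Help me design a layer-by-layer prompt optimization plan."},
    {"role": "assistant", "content": "Sure - let's start by defining the target skill."},
]
request = build_consolidation_request(history)
print(len(request))         # 3
print(request[-1]["role"])  # user
```

The returned list can be sent as-is to a chat-completion style endpoint; the model's reply is then the candidate initial prompt for your next conversation.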
== Further reading ==