💬 Reason:
LLMs have a context window limitation: a fixed cap on the total number of input and output tokens combined in each request. As a result, a single generation typically tops out at roughly 1,000-1,500 words. The recommended workaround is to break the intended article into a planned structure and generate the content chapter by chapter.
Solution:
If a 5,000-6,000 word article cannot be generated in one pass, pre-plan a five-chapter structure in your input instructions, then generate the content sequentially in chapter order, ultimately assembling the full 5,000-6,000 word article.
{{Tip | tip= Using the [https://platform.openai.com/docs/models/o3 OpenAI o3] model as an example: (1) Context Window (200,000): the total quota for input + output; (2) Max Output Tokens (100,000): the limit for a single response. Actual input space = 200,000 - expected output length.}}
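The chapter-by-chapter approach above can be sketched in a few lines of Python. This is a minimal illustration, not a definitive implementation: the <code>generate</code> function is a hypothetical stand-in for a real LLM API call (in practice you would call your provider's chat-completion endpoint there), and the outline and prompt wording are assumptions for the example.

```python
# Sketch: work around the per-request output limit by generating one
# chapter at a time from a pre-planned outline, then joining the parts.

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.

    Stubbed here so the loop is runnable; replace with an actual
    chat-completion request in real use.
    """
    return f"[~1000-word draft for: {prompt.splitlines()[-1]}]"

def write_long_article(topic: str, outline: list[str]) -> str:
    chapters = []
    for i, heading in enumerate(outline, start=1):
        # Carry only the earlier chapter headings (not their full text)
        # as lightweight context, keeping each request well inside the
        # model's input budget.
        context = "\n".join(
            f"Chapter {j}: {h}" for j, h in enumerate(outline[: i - 1], 1)
        )
        prompt = (
            f"Article topic: {topic}\n"
            f"Chapters already written:\n{context or '(none)'}\n"
            f"Now write only this chapter, about 1000 words:\n"
            f"Chapter {i}: {heading}"
        )
        chapters.append(generate(prompt))
    return "\n\n".join(chapters)

outline = ["Background", "Core Concepts", "Methods", "Case Study", "Conclusion"]
article = write_long_article("Prompt engineering", outline)
```

With five chapters of roughly 1,000 words each, the joined result reaches the 5,000-6,000 word target while every individual request stays far below the output-token limit.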
== How to Solve AI Forgetting Training Content ==