Revision as of 12:35, 15 January 2024
How to optimize your OpenAI API token usage
This page collects methods for improving the efficiency of your OpenAI API token usage.
Prevent Sending Duplicate Content to the OpenAI API
- MySQL 如何尋找重複的文章 (文字類型資料) (How to find duplicate articles of text-type data in MySQL), in Traditional Chinese.
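The same idea can be applied in application code before any API call is made. A minimal sketch (assuming articles arrive as plain strings) that hashes each article and skips exact duplicates so identical content is never sent twice:

```python
import hashlib

def deduplicate(articles):
    """Return articles with exact duplicates removed, preserving order."""
    seen = set()
    unique = []
    for text in articles:
        # Hash the normalized text so identical content is sent only once.
        digest = hashlib.sha256(text.strip().encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return unique
```

Storing the digests in a database (as in the MySQL article above) lets the check survive across runs.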
Streamlining Questions and Responses
While it's acceptable to pose open-ended questions to explore the capabilities of ChatGPT, keep in mind that such questions can lead to longer, more creative responses that might increase costs. To achieve concise and cost-effective answers, consider refining your question by providing specific and limited options for the AI to select from.
For example:
- Initial question for exploration:
Please offer five keywords for the following article: ``` Long text ```
- Refined question:
Please select one of the following keywords: keyword1, keyword2, keyword3, keyword4, keyword5, for the subsequent article: ``` Long text ```
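When the refined question is issued from code, it can be generated from a fixed keyword list so every request stays uniformly constrained. A small sketch (the keyword names are placeholders):

```python
def build_keyword_prompt(keywords, article):
    """Build a constrained prompt asking the model to pick one keyword."""
    options = ", ".join(keywords)
    return (
        f"Please select one of the following keywords: {options}, "
        f"for the subsequent article: ``` {article} ```"
    )
```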
Handling Multiple Article Packages
Original prompt:
Please select one of the keywords for the subsequent article: keyword1, keyword2, keyword3, keyword4, keyword5. ``` short text of article No.1 ```
Another prompt:
Please select one of the keywords for the subsequent article: keyword1, keyword2, keyword3, keyword4, keyword5. ``` short text of article No.2 ```
Refined prompt:
Each row is the article number and content. For each article, select the keywords: keyword1, keyword2, keyword3, keyword4, keyword5. Provide your answer in the CSV format: "article number", "comma_separated_keywords" ``` No1. short text of article No.1 (without return symbol) No2. short text of article No.2 (without return symbol) ... No5. short text of article No.5 (without return symbol) ```
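The packing and unpacking around such a batched prompt can be sketched as follows (the CSV answer format mirrors the refined prompt above; the exact reply shape still depends on the model following instructions):

```python
import csv
import io

def build_batch_prompt(articles, keywords):
    """Pack several short articles into one prompt, one per line."""
    header = (
        "Each row is the article number and content. For each article, "
        "select the keywords: " + ", ".join(keywords) + ". "
        'Provide your answer in the CSV format: '
        '"article number", "comma_separated_keywords"\n```\n'
    )
    body = "\n".join(
        # Flatten each article onto one line (remove the return symbol).
        f"No{i}. {text.replace(chr(10), ' ')}"
        for i, text in enumerate(articles, start=1)
    )
    return header + body + "\n```"

def parse_batch_answer(answer):
    """Parse the model's CSV reply into {article number: [keywords]}."""
    result = {}
    for row in csv.reader(io.StringIO(answer), skipinitialspace=True):
        if len(row) >= 2:
            result[row[0].strip()] = [k.strip() for k in row[1].split(",")]
    return result
```

Batching this way amortizes the fixed instruction text over several articles instead of repeating it per request.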
No Additional Explanation Needed
While GPT-4 often attempts to provide explanations for its answers, if you have already explored the topic, you can frame your questions in a way that skips the elaboration. For example:
For the subsequent article, please select from the keywords: keyword1, keyword2, keyword3, keyword4, keyword5. No further explanation required. ``` Long text ```
Select the appropriate model
For complex tasks, GPT-4 is recommended, while simpler tasks like translation can utilize GPT-3.5. For more information, please refer to the following article: Models - OpenAI API.
Enable the JSON mode
- "Compatible with gpt-4-1106-preview and gpt-3.5-turbo-1106.[1]"
Choose Long Article Splitting Strategy (chunk)
Different language models have token limits that affect how much text they can process. A model's maximum context length covers both input and output tokens. For example, gpt-3.5-turbo and gpt-4 have context limits of 4,097 and 8,192 tokens, respectively. Exceeding these limits requires you to split articles into smaller pieces for processing.
You can choose not to split articles, but this restricts you to processing only portions of them. Choices include focusing on the beginning and end or just the final paragraphs.
For article splitting, tools like LangChain's text-split-explorer can help, offering options for delimiters, chunk size, and chunk overlap.
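If pulling in LangChain is not an option, a character-based splitter with overlap can be sketched in a few lines (the chunk sizes here are illustrative; real limits should be measured in tokens, e.g. with tiktoken, not characters):

```python
def split_text(text, chunk_size=1000, chunk_overlap=100):
    """Split text into chunks of at most chunk_size characters.

    Each chunk repeats the last chunk_overlap characters of the
    previous one, so context is not lost at chunk boundaries.
    """
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - chunk_overlap
    return chunks
```

LangChain's splitters improve on this by preferring to break at delimiters such as paragraph or sentence boundaries rather than at an arbitrary character.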
Preparation Before API Result Verification
Prepare several sample texts and verify the API results to ensure they are as expected before processing a large number of articles.
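One way to sketch such a dry run (the sample texts and expected keywords are placeholders, and `call_api` stands for whatever function wraps your actual OpenAI API request):

```python
def verify_samples(call_api, samples):
    """Run sample texts through call_api and report mismatches.

    samples is a list of (text, expected_keyword) pairs; returns a
    list of (text, expected, actual) tuples for every failed sample.
    """
    failures = []
    for text, expected in samples:
        answer = call_api(text)
        if expected not in answer:
            failures.append((text, expected, answer))
    return failures
```

Running five or ten known samples first costs a handful of tokens; discovering a broken prompt after processing thousands of articles costs the whole batch.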
Another version of this article
- Blog article written in Traditional Chinese: 把關你的荷包,怎樣節省使用 OpenAI API token (Guard your wallet: how to save on OpenAI API token usage)
- Blog article written in English: How I save the OpenAI API token usage
Further reading
- Best practices for prompt engineering with OpenAI API | OpenAI Help Center
- How to Reduce Your OpenAI Costs by up to 30% - 3 Simple Steps 💰 : OpenAI
- langchain.text_splitter — 🦜🔗 LangChain 0.0.280