{{LanguageSwitcher | content = [[Troubleshooting of OpenAI API | EN]], [[Troubleshooting of OpenAI API in Mandarin | 漢字]] }}
{{Raise hand | text = If you have any questions about the OpenAI API, you can post them on the OpenAI Developer Forum under the latest [https://community.openai.com/c/api/7 API topics].}}
=== How to fix "An error occurred" ===
}
</pre>
=== How to fix "'2000' is not of type 'integer' - 'max_tokens'" ===
* Error message:
<pre>
{
  "error": {
    "message": "'2000' is not of type 'integer' - 'max_tokens'",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}
</pre>
Solution: Pass the "max_tokens" parameter as an integer rather than a string.
<pre>
{
  "model": "gpt-4",
  "max_tokens": 1000,
  "temperature": 0.9,
  "messages": [
    {"role": "user", "content": "YOUR PROMPT"}
  ]
}
</pre>
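When the value arrives from a config file or command line it is often a string; coercing it before sending avoids this error. A minimal sketch (the helper name is ours, not part of any OpenAI SDK):

```python
import json

def build_chat_payload(model, prompt, max_tokens, temperature=0.9):
    """Build a Chat Completions payload, coercing "max_tokens" to the
    integer type the API requires (a string like "2000" is rejected)."""
    return {
        "model": model,
        "max_tokens": int(max_tokens),      # "2000" -> 2000
        "temperature": float(temperature),
        "messages": [{"role": "user", "content": prompt}],
    }

# Even if max_tokens arrives as a string, the payload serializes
# with a bare integer:
payload = build_chat_payload("gpt-4", "Hello", max_tokens="2000")
print(json.dumps(payload, indent=2))
```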
=== How to fix "API Error (HTTP 400): Encrypted content is not supported with this model" ===
Error message:
<pre>
Error!: API Error (HTTP 400): {
  "error": {
    "message": "Encrypted content is not supported with this model.",
    "type": "invalid_request_error",
    "param": "include",
    "code": null
  }
}
</pre>
Solution:
# This error occurs when using the `include` parameter (e.g., for prompt caching or encrypted content) with a model that does not support it. GPT-4.1 does not support encrypted/included content in this way.
# Remove or omit the `include` parameter from your API request payload.
# Alternatively, switch to a model that supports this feature, such as reasoning models like GPT-5.2 or later.
# See more details in [https://platform.openai.com/docs/models OpenAI's model documentation].
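One defensive approach is to strip unsupported parameters before dispatching a request to a model that rejects them. A sketch (the parameter name comes from the error's "param" field above; the helper is ours):

```python
def strip_unsupported_params(payload, unsupported=("include",)):
    """Return a copy of a request payload without parameters that the
    target model rejects (here "include", per the error's "param" field)."""
    return {k: v for k, v in payload.items() if k not in unsupported}

request = {"model": "gpt-4.1", "input": "Hello", "include": ["encrypted_content"]}
safe = strip_unsupported_params(request)
```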
=== How to fix "Invalid parameter: 'response_format' of type 'json_object' is not supported with this model" ===
Error message:
<pre>
Invalid parameter: 'response_format' of type 'json_object' is not supported with this model.
</pre>
Example input (note the request asks for JSON mode with a model that does not support it):
<pre>
{
  "model": "gpt-4",
  "max_tokens": 1000,
  "temperature": 0.9,
  "response_format": { "type": "json_object" },
  "messages": [
    {"role": "system", "content": "#zh-TW You are a helpful assistant. Help me to summarize the article"},
    {"role": "user", "content": "YOUR ARTICLE ... ..."}
  ]
}
</pre>
Solution:
* The model "gpt-4" does not support JSON mode. Use {{kbd | key=gpt-3.5-turbo-1106}}, {{kbd | key=gpt-4-1106-preview}}, or {{kbd | key=gpt-4o}} instead<ref>[https://github.com/openai/openai-python/issues/887 response_format error · Issue #887 · openai/openai-python]</ref><ref>[https://platform.openai.com/docs/api-reference/chat/create API Reference - OpenAI API]</ref><ref>[https://community.openai.com/t/openai-api-guide-using-json-mode/557265/1 🛡️ OpenAI API Guide: Using JSON Mode - API - OpenAI Developer Forum]</ref>.
<pre>
{
  "model": "gpt-4-1106-preview",
  "max_tokens": 1000,
  "temperature": 0.9,
  "response_format": { "type": "json_object" },
  "messages": [
    {"role": "system", "content": "#zh-TW You are a helpful assistant. Help me to summarize the article"},
    {"role": "user", "content": "YOUR ARTICLE ... ..."}
  ]
}
</pre>
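The model check can also be done in code before sending. A sketch (the allow-list below is illustrative, not exhaustive — check the model documentation for the current list):

```python
# Illustrative subset of models known to accept response_format=json_object.
JSON_MODE_MODELS = {"gpt-3.5-turbo-1106", "gpt-4-1106-preview", "gpt-4o"}

def json_mode_payload(model, system_prompt, user_content):
    """Build a JSON-mode request, rejecting models without json_object support."""
    if model not in JSON_MODE_MODELS:
        raise ValueError(f"model {model!r} does not support response_format=json_object")
    return {
        "model": model,
        "response_format": {"type": "json_object"},
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
    }
```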
=== How to fix "'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'" ===
Error message:
<pre>
"error": {
  "message": "'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'.",
  "type": "invalid_request_error",
  "param": "messages",
  "code": null
}
</pre>
Solution:
* Add an instruction such as "output in JSON format" to the prompt (e.g. in the system message):
<pre>
Output Format: Provide your analysis in the following JSON format:
...
</pre>
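This requirement can be checked client-side before sending. A minimal sketch (function name is ours):

```python
def messages_mention_json(messages):
    """JSON mode requires the word "json" (any casing) to appear somewhere
    in the message contents; return True when it does."""
    return any("json" in (m.get("content") or "").lower() for m in messages)

msgs = [
    {"role": "system", "content": "Provide your analysis in JSON format."},
    {"role": "user", "content": "Analyze this article ..."},
]
```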
=== How to fix "Missing bearer or basic authentication in header" ===
Command that triggered the error:
<pre>
curl https://api.openai.com/v1/threads/$THREAD_IDmessages \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2"
</pre>
Error message:
<pre>
{
  "error": {
    "message": "Missing bearer or basic authentication in header",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}
</pre>
Corrected command:
<pre>
curl https://api.openai.com/v1/threads/$THREAD_ID/messages \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2"
</pre>
Explanation:
Header order does not normally affect execution, though placing the Authorization header first makes it easy to verify at a glance. If the error persists with the header present, confirm that the {{kbd | key=$OPENAI_API_KEY}} environment variable is set and non-empty in the current shell; an empty value sends an Authorization header without a token, which produces exactly this error. Also note that the path must separate the thread ID from the resource with a slash: {{kbd | key=/v1/threads/$THREAD_ID/messages}}.
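Failing fast on a missing key catches the problem before any request is sent. A Python sketch (helper name is ours):

```python
import os

def auth_headers(api_key=None):
    """Build the request headers, raising early when the API key is missing
    or empty (an empty token is what yields "Missing bearer or basic
    authentication in header")."""
    key = (api_key or os.environ.get("OPENAI_API_KEY", "")).strip()
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set or empty")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
        "OpenAI-Beta": "assistants=v2",
    }
```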
=== How to fix "None is not of type string - messages.1.content" ===
}
</pre>
=== How to fix "Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out" ===
# Correct the typo in the model name.
# Visit the [https://platform.openai.com/docs/models Models - OpenAI API] to view the list of models. You can also go to the [https://openai.com/product/gpt-4 GPT-4 product page] to join the waiting list.
=== How to fix "That model is currently overloaded with other requests" ===
Error message:
<pre>
{
  "error": {
    "message": "That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID XXX in your message.)",
    "type": "server_error",
    "param": null,
    "code": null
  }
}
</pre>
Solution:
* Check [https://status.openai.com/ OpenAI Status] to see whether there is an ongoing outage
* Retry your request, as the message suggests
* If the error persists, report it through the [https://help.openai.com/en/ OpenAI Help Center]
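Retries for this kind of transient server_error are usually done with exponential backoff. A sketch (the exception type below is a stand-in for whatever your HTTP client raises on a 5xx response):

```python
import time

def retry_with_backoff(call, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Invoke call(), retrying on RuntimeError (a stand-in for a transient
    server_error) and doubling the delay between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise                       # out of attempts: surface the error
            sleep(base_delay * (2 ** attempt))
```

The injectable `sleep` parameter keeps the helper testable without real waiting.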
=== How to fix "The project you are requesting has been archived and is no longer accessable (accessible)" ===
Error message:
<pre>
{
  "error": {
    "message": "The project you are requesting has been archived and is no longer accessable",
    "type": "invalid_request_error",
    "param": null,
    "code": "not_authorized_invalid_project"
  }
}
</pre>
Solution:
* If you are the administrator, first visit https://platform.openai.com/organization/api-keys to verify which API key belongs to which project.
* Next, go to https://platform.openai.com/organization/projects to check whether the project is active or archived.
* If the project is archived, create a new API key under an active (or new) project<ref>[https://help.openai.com/en/articles/9186755-managing-your-work-in-the-api-platform-with-projects Managing your work in the API platform with Projects | OpenAI Help Center]</ref>.
=== How to fix "This model's maximum context length is 4097 tokens" ===
Solution:
* Reduce the length of an input message<ref>[https://community.openai.com/t/splitting-chunking-large-input-text-for-summarisation-greater-than-4096-tokens/18494/3 ⬛ Splitting / Chunking Large input text for Summarisation (greater than 4096 tokens....) - General API discussion - OpenAI API Community Forum]</ref>. ([[Count number of characters]])
* Switch to another [https://platform.openai.com/docs/models/overview model] that supports a longer context
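Splitting the input into chunks is the usual workaround for summarizing long text. A character-based sketch (≈4 characters per English token is only a rough heuristic; use a tokenizer such as tiktoken for exact counts):

```python
def chunk_text(text, max_chars=4000):
    """Split text into consecutive pieces of at most max_chars characters,
    so each piece stays safely under the model's context window."""
    if not text:
        return [""]
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
```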
=== How to fix "The server had an error while processing your request" ===
* Ensure that the payload you sent to the API is properly formatted as JSON.
=== How to fix "RateLimitError: Rate limit reached for default-gpt-3.5-turbo" ===
Error message:
<pre>
RateLimitError: Rate limit reached for default-gpt-3.5-turbo in organization org-XXX on requests per min. Limit: 3 / min. Please try again in 20s. Contact [email protected] if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.
</pre>
Other similar error message:
<pre>
Rate limit reached for default-gpt-4 in organization org-xxx on tokens per min. Limit: 40000 / min. Please try again in 1ms. Contact us through our help center at help.openai.com if you continue to have issues.
</pre>
# Revise the frequency of API requests. To achieve this, incorporate a [[Sleep]] function between each API request within the script. For more information on rate limits, consult OpenAI's documentation at the following link: https://platform.openai.com/docs/guides/rate-limits.
# See [[Sleep]] for details on pausing a random number of seconds between requests.
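Such a pause can be sketched as follows, with random jitter so that parallel workers do not retry in lockstep (the parameter values are illustrative):

```python
import random
import time

def polite_pause(base=20.0, jitter=5.0, rng=random.random, sleep=time.sleep):
    """Sleep for `base` seconds plus up to `jitter` extra seconds, returning
    the delay actually used; call this between consecutive API requests."""
    delay = base + rng() * jitter
    sleep(delay)
    return delay
```

The injectable `rng` and `sleep` parameters make the helper deterministic in tests.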
=== How to fix "You exceeded your current quota, please check your plan and billing details." ===
# Once your quota has been increased, navigate to https://platform.openai.com/account/billing/limits to adjust your hard limit accordingly.
=== How to fix "zsh: command not found: -d" ===
Command that triggered the error when [https://platform.openai.com/docs/api-reference/vector-stores/create creating a vector store]:
<pre>
curl https://api.openai.com/v1/vector_stores \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2"
  -d '{
    "name": "Enter your preferred name here"
  }'
</pre>
Corrected command:
<pre>
curl https://api.openai.com/v1/vector_stores \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
    "name": "Enter your preferred name here"
  }'
</pre>
Explanation: The backslash {{kbd | key=<nowiki>\</nowiki>}} at the end of each line tells the shell that the command continues on the next line. In the failing command, the backslash is missing after the "OpenAI-Beta: assistants=v2" header line, so zsh treats {{kbd | key=-d}} as the start of a new command and reports "command not found: -d". Ensure every continued line ends with a backslash and that no stray characters follow it.
=== Force Traditional Chinese Output ===
* Add #zh-TW before your question<ref>[https://learntech.tw/chatgpt-traditional-chinese/ ChatGPT: How to Force Traditional Chinese Output | Learn Technology, Save Time - Learn Technology]</ref>
* Or say "Use Traditional Chinese commonly used in Taiwan" and supply a glossary, e.g.:
<pre>
Use Traditional Chinese commonly used in Taiwan:
Rules
- Use full-width punctuation marks and add spaces between Chinese and English text.
- Below is a common AI terminology correspondence table (English -> Traditional Chinese):
  * Transformer -> Transformer
  * Token -> Token
  * LLM/Large Language Model -> 大語言模型
  * Zero-shot -> 零樣本
  * Few-shot -> 少樣本
  * AI Agent -> AI 代理
  * AGI -> 通用人工智慧
- The following is a table of common Taiwanese terms (English -> Traditional Chinese):
  * create / created -> 建立
  * quality -> 品質
  * information -> 資訊
  * message -> 訊息
  * store -> 儲存
  * search -> 搜尋
  * view -> 檢視 / 檢視表 (never 視圖)
  * data -> 資料
  * object -> 物件
  * queue -> 佇列
  * stack -> 堆疊
  * invocation -> 呼叫
  * code -> 程式碼
  * running -> 執行
  * library -> 函式庫
  * building -> 建構
  * package -> 套件
  * video -> 影片
  * class -> 類別
  * component -> 元件
  * transaction -> 交易
  * code generation -> 程式碼產生器
  * scalability -> 延展性
  * metadata -> Metadata
  * clone -> 複製
  * memory -> 記憶體
  * built-in -> 內建
  * global -> 全域
  * compatibility -> 相容性
  * function -> 函式
  * document -> 文件
  * example -> 範例
  * blog -> 部落格
  * realtime -> 即時
  * integration -> 整合
</pre>
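The "#zh-TW" tag can also be prepended programmatically. A trivial sketch (function name is ours):

```python
def force_zh_tw(prompt, tag="#zh-TW"):
    """Prepend the zh-TW tag unless the prompt already starts with it."""
    return prompt if prompt.startswith(tag) else f"{tag} {prompt}"
```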
== Further reading ==
* [[AI Prompt Engineering]] ([[ChatGPT prompts#How to improve the prompt design | How to improve the prompt design]])
* [https://help.openai.com/en/collections/3780021-general-top-faq General Top FAQ | OpenAI Help Center]
* [https://help.openai.com/en/collections/3675931-openai-api#api-error-codes-explained OpenAI API | OpenAI Help Center]
* [https://stackoverflow.com/questions/tagged/openai-api Highest scored 'openai-api' questions - Stack Overflow]
* [https://platform.openai.com/docs/guides/error-codes Error codes - OpenAI API]
* [https://errerrors.blogspot.com/2024/07/openai-batch-api-troubleshooting.html OpenAI Batch API Common Troubleshooting (in Chinese)]
* [https://platform.openai.com/docs/guides/production-best-practices Production best practices - OpenAI API]
== References ==
<references />
[[Category:Tools]] [[Category:OpenAI]]
[[Category:Generative AI]]