How to optimize your OpenAI API token usage
=== Choose Long Article Splitting Strategy (chunk) ===

Language models have context-window token limits that cap how much text they can process; the limit covers the input prompt and the generated output combined (the <code>max_tokens</code> API parameter reserves part of the window for the output). Models such as gpt-3.5-turbo and gpt-4 have specific limits of 4,097 and 8,192 tokens, respectively. Articles that exceed these limits must be split into smaller chunks for processing. Alternatively, you can skip splitting and process only a portion of each article, such as the beginning and end, or just the final paragraphs. For splitting, tools like LangChain's [https://github.com/langchain-ai/text-split-explorer text-split-explorer] can help, offering options for delimiters, chunk size, and chunk overlap.
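As a minimal sketch of the chunk-size and chunk-overlap idea above: the function below splits a long text into overlapping windows. It measures size in characters for simplicity, whereas a real pipeline would count tokens (e.g. with a tokenizer matched to the model) and prefer splitting on delimiters such as paragraph breaks, as text-split-explorer does; the function name and parameters here are illustrative, not from any library.

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks.

    chunk_size and overlap are measured in characters here as a rough
    proxy for tokens; a production splitter would count actual tokens.
    The overlap lets context carry across chunk boundaries.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Advance by the stride, so consecutive chunks share `overlap` characters.
        start += chunk_size - overlap
    return chunks
```

Each chunk can then be sent to the API separately, with the overlap helping the model keep continuity between pieces.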