Run LLMs locally

* Support API endpoint: Available (see the request sketch after this list)<ref>[https://github.com/ollama/ollama?tab=readme-ov-file#rest-api ollama/ollama: Get up and running with Llama 3, Mistral, Gemma, and other large language models.]</ref>
* Supported embedding models:
* License: MIT (open source)<ref>[https://github.com/ollama/ollama/blob/main/LICENSE ollama/LICENSE at main · ollama/ollama]</ref> {{Gd}}
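
As a quick illustration, here is a minimal Python sketch of one call against Ollama's REST API, assuming a local Ollama server on its default port 11434 and at least one pulled model; the model name <code>llama3</code> below is a placeholder to substitute with whatever model you have pulled.

<syntaxhighlight lang="python">
import json
import urllib.request

# Assumption: an Ollama server is running locally on the default port 11434
# and the model "llama3" has already been pulled (swap in your own model).
payload = {
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "stream": False,  # ask for one JSON object instead of a streamed response
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The non-streaming reply is a single JSON object; the generated text
# is in its "response" field.
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])
</syntaxhighlight>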


[https://lmstudio.ai/ LM Studio - Discover, download, and run local LLMs]
