Run LLMs locally

* Docker support: Available<ref>[https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image Ollama is now available as an official Docker image · Ollama Blog]</ref> {{Gd}}
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}
* User Interface: CLI (Command-line interface) & GUI (Graphical User Interface)
* Support API endpoint: Available<ref>[https://github.com/ollama/ollama?tab=readme-ov-file#rest-api ollama/ollama: Get up and running with Llama 3, Mistral, Gemma, and other large language models.]</ref> (see the sketch after this list)
* Supported embedding models:
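
A minimal sketch of calling the REST API noted above, assuming a locally running Ollama server on its default port 11434 and a model named <code>llama3</code> that has already been pulled; the linked README is the authoritative reference for the endpoint and its fields.

<syntaxhighlight lang="python">
import json
import urllib.request

# Ollama's REST API listens on localhost:11434 by default.
# /api/generate returns a completion; "stream": False asks for a
# single JSON object instead of a stream of chunks.
url = "http://localhost:11434/api/generate"
payload = {
    "model": "llama3",  # assumes this model was pulled beforehand
    "prompt": "Why is the sky blue?",
    "stream": False,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# The generated text is returned in the "response" field.
print(body["response"])
</syntaxhighlight>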
