Run LLMs locally

From LemonWiki共筆
Latest revision as of 14:26, 7 August 2025

Run LLMs (large language models) locally

List of LLM software[edit]

Ollama[1]

  • Docker support: Available[2]
  • Supported OS: Windows, macOS & Linux
  • User Interface: CLI (Command-line interface) & GUI (Graphical User Interface)
  • API endpoint support: Available[3]
  • Supported embedding models:
  • License: Open-source license, MIT[4]
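Ollama's REST API (see [3]) can be called from any HTTP client. The sketch below is a minimal example assuming a default install listening on localhost:11434 and a model already downloaded with `ollama pull`; the model name is illustrative.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False requests one complete JSON reply instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` running and the model pulled first):
#   print(generate("llama3", "Why run LLMs locally?"))
```

Because the server speaks plain HTTP, the same call works from curl, a cron job, or another container on the same host.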

LM Studio - Discover, download, and run local LLMs

  • Docker support: N/A
  • Supported OS: Windows, macOS & Linux
  • User Interface: GUI
  • API endpoint support: Available[5]
  • Supported embedding models:
  • License: Mixed; lmstudio.js is under Apache License 2.0[6], but the Terms of Use[7] limit the Services to "your personal, non-commercial purposes."
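LM Studio's local server (see [5]) exposes an OpenAI-compatible chat-completions API. A minimal sketch, assuming the app's default port 1234 (configurable in the GUI) and a model already loaded; the model identifier is illustrative.

```python
import json
import urllib.request

# LM Studio's local server speaks the OpenAI chat-completions dialect;
# port 1234 is the app's default and may differ on your machine.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_body(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat request body with a single user turn."""
    return {"model": model,
            "messages": [{"role": "user", "content": user_message}]}

def chat(model: str, user_message: str) -> str:
    """POST a chat request to the local server and return the assistant text."""
    body = json.dumps(build_chat_body(model, user_message)).encode("utf-8")
    req = urllib.request.Request(BASE_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Example (requires LM Studio's server running with a model loaded):
#   print(chat("local-model", "Summarize why local inference matters."))
```

Because the wire format matches OpenAI's, existing OpenAI client libraries can usually be pointed at this base URL instead.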

janhq/jan: Jan is an open-source alternative to ChatGPT that runs 100% offline on your computer. Multiple engine support (llama.cpp, TensorRT-LLM)

  • Docker support: Available
  • Supported OS: Windows, macOS & Linux
  • User Interface: CLI (Command-line interface)
  • API endpoint support: Available
  • Supported embedding models:
  • License: AGPL[8]

GPT4All

  • Docker support: N/A
  • Supported OS: Windows, macOS & Linux
  • User Interface: GUI
  • API endpoint support: Available[9]
  • Supported embedding models: "SBert and Nomic Embed Text v1 & v1.5"[10]
  • License: OSS, MIT[11]

oobabooga/text-generation-webui: A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

  • Docker support: Available
  • Supported OS: Windows, macOS & Linux
  • User Interface: GUI
  • API endpoint support: Available[12]
  • Supported embedding models:
  • License: OSS, GNU GPLv3[13]

PrivateGPT

  • Docker support: Available
  • Supported OS: Windows, macOS & Linux[14]
  • User Interface: GUI
  • API endpoint support: Available[15]
  • Supported embedding models:
  • License: OSS, Apache License 2.0[16]

AnythingLLM | The ultimate AI business intelligence tool

  • Docker support: Available[17]
  • Supported OS: Windows, macOS & Linux
  • User Interface: GUI
  • API endpoint support:
  • License: Mixed; OSS, MIT[18]

LLM: A CLI utility and Python library for interacting with Large Language Models (llm · PyPI)

  • Docker support: Possibly works when the `llm` package is installed in a Python Docker container, but this has not been tested.
  • Supported OS:
  • User Interface: CLI
  • API endpoint support: N/A
  • Supported embedding models:
  • License:
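The `llm` tool is driven from the command line, so it is easy to wrap from Python. A small sketch that builds the argv list for a prompt; the `-m` flag is llm's documented model selector, while the model alias shown is only illustrative and depends on which plugins you have installed.

```python
import shlex
from typing import Optional

def llm_command(prompt: str, model: Optional[str] = None) -> list:
    """Build an argv list for the `llm` CLI (pip install llm)."""
    cmd = ["llm"]
    if model:
        cmd += ["-m", model]  # -m / --model selects which model answers
    cmd.append(prompt)
    return cmd

# Example (requires `llm` installed and a model configured, e.g. a local
# model provided by a plugin such as llm-gpt4all):
#   import subprocess
#   subprocess.run(llm_command("Name three local LLM runtimes"), check=True)
print(shlex.join(llm_command("hello", model="orca-mini-3b-gguf2-q4_0")))
```

Building the argv as a list (rather than a shell string) avoids quoting bugs when prompts contain spaces or special characters.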

MLC LLM

  • Docker support: Not available[19]
  • Supported OS: Windows, macOS & Linux[20]
  • User Interface: CLI
  • API endpoint support: Available[21]
  • Supported embedding models:
  • License: OSS, Apache License 2.0[22]

(left blank intentionally)

  • Docker support: Available or not
  • Supported OS: Windows, macOS & Linux
  • User Interface: GUI or CLI
  • API endpoint support: Available or not
  • Supported embedding models:
  • License: Mixed, OSS or copyright

Further reading[edit]

  • 5 easy ways to run an LLM locally | InfoWorld (https://www.infoworld.com/article/3705035/5-easy-ways-to-run-an-llm-locally.html)
  • Troyanovsky/Local-LLM-Comparison-Colab-UI: Compare the performance of different LLMs that can be deployed locally on consumer hardware. Run yourself with Colab WebUI. (https://github.com/Troyanovsky/Local-LLM-Comparison-Colab-UI)
  • How I run LLMs locally - Abishek Muthian (https://abishekmuthian.com/how-i-run-llms-locally/)
  • HOWTO: Setup Local LLM (https://iamhlb.notion.site/HOWTO-Setup-Local-LLM-16757ba4def5805c9baad0c78d54d62f)

References[edit]