Run LLMs (large language models) locally.

== List of LLM software ==

[https://ollama.com/ Ollama]<ref>[https://blog.miniasp.com/post/2024/03/04/Useful-tool-Ollama Introducing a handy tool: Ollama, quickly launch and run large language models locally | The Will Will Web]</ref>
* Docker support: Available<ref>[https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image Ollama is now available as an official Docker image · Ollama Blog]</ref> {{Gd}}
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}
* User interface: CLI (command-line interface) & GUI (graphical user interface)
* API endpoint: Available<ref>[https://github.com/ollama/ollama?tab=readme-ov-file#rest-api ollama/ollama: Get up and running with Llama 3, Mistral, Gemma, and other large language models.]</ref>
* Supported embedding models:
* License: Open source, MIT<ref>[https://github.com/ollama/ollama/blob/main/LICENSE ollama/LICENSE at main · ollama/ollama]</ref> {{Gd}}

[https://lmstudio.ai/ LM Studio - Discover, download, and run local LLMs]
* Docker support: N/A {{exclaim}}
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}
* User interface: GUI
* API endpoint: Available<ref>[https://lmstudio.ai/docs/local-server Local LLM Server | LM Studio]</ref>
* Supported embedding models:
* License: Mixed; lmstudio.js is under the Apache License 2.0<ref>[https://github.com/lmstudio-ai/lmstudio.js/blob/main/LICENSE lmstudio.js/LICENSE at main · lmstudio-ai/lmstudio.js]</ref>, but the Terms of Use<ref>https://lmstudio.ai/terms</ref> limit use to "your personal, non-commercial purposes."

[https://github.com/janhq/jan janhq/jan: Jan is an open source alternative to ChatGPT that runs 100% offline on your computer.
Multiple engine support (llama.cpp, TensorRT-LLM)]
* Docker support: Available
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}
* User interface: CLI (command-line interface)
* API endpoint: Available
* Supported embedding models:
* License: AGPL<ref>[https://github.com/janhq/jan/blob/dev/LICENSE jan/LICENSE at dev · janhq/jan]</ref>

[https://gpt4all.io/index.html GPT4All]
* Docker support: N/A
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}
* User interface: GUI
* API endpoint: Available<ref>[https://docs.gpt4all.io/gpt4all_python.html Generation - GPT4All Documentation]</ref>
* Supported embedding models: "SBert and Nomic Embed Text v1 & v1.5"<ref>[https://docs.gpt4all.io/gpt4all_help/faq.html#what-are-the-system-requirements FAQ - GPT4All]</ref>
* License: Open source, MIT<ref>[https://github.com/nomic-ai/gpt4all/blob/main/LICENSE.txt gpt4all/LICENSE.txt at main · nomic-ai/gpt4all]</ref>

[https://github.com/oobabooga/text-generation-webui oobabooga/text-generation-webui: A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.]
* Docker support: Available
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}
* User interface: GUI
* API endpoint: Available<ref>[https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API#examples 12 ‐ OpenAI API · oobabooga/text-generation-webui Wiki]</ref>
* Supported embedding models:
* License: Open source, GNU AGPL v3<ref>[https://github.com/oobabooga/text-generation-webui/blob/main/LICENSE text-generation-webui/LICENSE at main · oobabooga/text-generation-webui]</ref>

[https://github.com/zylon-ai/private-gpt PrivateGPT]
* Docker support: Available
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}<ref>[https://docs.privategpt.dev/installation/getting-started/installation Installation – PrivateGPT | Docs]</ref>
* User interface: GUI
* API endpoint: Available<ref>[https://docs.privategpt.dev/api-reference/overview/api-reference-overview API Reference overview – PrivateGPT | Docs]</ref>
* Supported embedding models:
* License: Open source, Apache v2<ref>[https://github.com/zylon-ai/private-gpt/blob/main/LICENSE private-gpt/LICENSE at main · zylon-ai/private-gpt]</ref>

[https://useanything.com/ AnythingLLM | The ultimate AI business intelligence tool]
* Docker support: Available<ref>[https://docs.useanything.com/desktop-vs-docker 🆚 Desktop Vs. Docker | AnythingLLM]</ref>
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}
* User interface: GUI
* API endpoint:
* License: Mixed; open source, MIT<ref>[https://github.com/Mintplex-Labs/anything-llm/blob/master/LICENSE anything-llm/LICENSE at master · Mintplex-Labs/anything-llm]</ref>

[https://llm.datasette.io/en/stable/ LLM: A CLI utility and Python library for interacting with Large Language Models] ([https://pypi.org/project/llm/ llm · PyPI])
* Docker support: The <code>llm</code> package should work when installed in a Python Docker container, but this has not been tested. {{exclaim}}
* Supported OS:
* User interface: CLI
* API endpoint: N/A
* Supported embedding models:
* License:

[https://llm.mlc.ai/ MLC LLM]
* Docker support: Not available<ref>[https://github.com/mlc-ai/mlc-llm/issues/33 Docker support · Issue #33 · mlc-ai/mlc-llm]</ref>
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}<ref>[https://llm.mlc.ai/docs/install/mlc_llm Install MLC LLM Python Package — mlc-llm 0.1.0 documentation]</ref>
* User interface: CLI
* API endpoint: Available<ref>[https://llm.mlc.ai/docs/deploy/rest.html REST API — mlc-llm 0.1.0 documentation]</ref>
* Supported embedding models:
* License: Open source, Apache License 2.0<ref>[https://github.com/mlc-ai/mlc-llm/blob/main/LICENSE mlc-llm/LICENSE at main · mlc-ai/mlc-llm]</ref>

Template for new entries:
<pre>
* Docker support: Available or not
* Supported OS: {{Win}}, {{Mac}} & {{Linux}}
* User interface: GUI or CLI
* API endpoint: Available or not
* Supported embedding models:
* License: Mixed, open source, or proprietary
</pre>

== Further reading ==
* [https://www.infoworld.com/article/3705035/5-easy-ways-to-run-an-llm-locally.html 5 easy ways to run an LLM locally | InfoWorld]
* [https://github.com/Troyanovsky/Local-LLM-Comparison-Colab-UI Troyanovsky/Local-LLM-Comparison-Colab-UI: Compare the performance of different LLMs that can be deployed locally on consumer hardware. Run yourself with Colab WebUI.]
* [https://abishekmuthian.com/how-i-run-llms-locally/ How I run LLMs locally - Abishek Muthian]
* [https://iamhlb.notion.site/HOWTO-Setup-Local-LLM-16757ba4def5805c9baad0c78d54d62f HOWTO: Setup Local LLM]

== References ==
<references />

[[Category: Tool]]
[[Category: Artificial intelligence]]
[[Category: Generative AI]]
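== Example: calling a local API endpoint ==

Most of the tools listed above expose an HTTP API once they are running. A minimal sketch against Ollama's REST API, assuming a default install listening on <code>localhost:11434</code> with a model already pulled (e.g. via <code>ollama pull llama3</code>); the endpoint and field names follow the Ollama REST API documentation cited above, while the helper names here are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(model: str, prompt: str) -> bytes:
    """Build the JSON request body for Ollama's /api/generate endpoint."""
    body = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # request a single JSON reply instead of a token stream
    }
    return json.dumps(body).encode("utf-8")

def generate(model: str, prompt: str) -> str:
    """POST a prompt to a locally running Ollama server and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires a running `ollama serve` and a pulled model):
#   generate("llama3", "Why run an LLM locally?")
```

Several of the other tools (for example LM Studio and text-generation-webui, per their cited docs) instead serve an OpenAI-compatible endpoint, so only the URL and request shape change.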