LLMs to install locally



This is just a list of names and links for finding information on how to install the LLMs.

ESSENTIAL: a graphics card with 16 GB of VRAM.
8 GB is the bare minimum.
24 GB or 32 GB would be even better.

You do NOT need a top-of-the-range card from the very latest generation.
Even a 3090 works perfectly well.

The CPU is not a problem, since NO CPU is good enough for this anyway!
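The VRAM guidance above can be summed up in a small helper. The tier labels and thresholds below simply restate the advice given here; they are an illustration, not an official requirements table.

```python
def vram_tier(vram_gb: int) -> str:
    """Classify a GPU's VRAM against the guidance above.

    Thresholds mirror the text: 8 GB is the bare minimum,
    16 GB is the recommended baseline, 24-32 GB is comfortable.
    """
    if vram_gb < 8:
        return "insufficient"
    if vram_gb < 16:
        return "bare minimum"
    if vram_gb < 24:
        return "recommended"
    return "comfortable"

print(vram_tier(8))   # prints "bare minimum"
print(vram_tier(24))  # a 3090 has 24 GB: prints "comfortable"
```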

==============================================================================

Ollama
   https://ollama.com/library
   https://github.com/ollama/ollama
   
   You should have at least 
        8 GB RAM to run the  7B models, 
       16 GB RAM to run the 13B models, 
       32 GB RAM to run the 33B models.


Note: to use ollama, you FIRST need to start the server:

   ollama serve
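A typical first session boils down to something like the sketch below. The model name "llama3" is only an example; any entry from the library page linked above works the same way.

```shell
# Start the Ollama server (leave it running, e.g. in its own terminal).
ollama serve &

# In another terminal: download a model from the library.
# "llama3" is just an example; pick any model from ollama.com/library.
ollama pull llama3

# Chat with the model interactively.
ollama run llama3
```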


Alternatives
------------

   https://medium.com/ml-and-dl/exploring-powerful-ai-tools-alternatives-to-ollama-and-lm-studio-f50021741cdc
   https://slashdot.org/software/p/LM-Studio/alternatives


    Hugging Face Transformers
       Best for: Comprehensive model library and research-oriented workflows
       https://huggingface.co/docs/transformers/en/index

    LM Studio
       Best for: Local AI model inference with user-friendly interface
       https://lmstudio.ai/

    GPT4All
       Best for: Open-source enthusiasts and privacy-focused developers
       https://www.nomic.ai/gpt4all

    LlamaIndex
       Best for: Advanced RAG (Retrieval-Augmented Generation) and enterprise applications
       https://www.llamaindex.ai/


   Msty 
       https://msty.app/

   llama.cpp
       Description: A C++ library for running LLMs (especially LLaMA-based models) efficiently on local hardware. 
       It’s the backbone of many tools like LM Studio and Ollama.
       https://github.com/ggerganov/llama.cpp
   
   KoboldCpp
       Description: A user-friendly wrapper around llama.cpp with a GUI and API support.

   Text Generation WebUI (oobabooga)
       Description: A comprehensive web-based interface for running LLMs locally, built on top of frameworks like 
       PyTorch and Hugging Face Transformers.

   LocalAI
       Description: An open-source, drop-in replacement for OpenAI’s API that runs LLMs locally.

   AnythingLLM
       Description: A GUI-based tool for running LLMs locally with a focus on document integration and RAG (Retrieval-Augmented Generation).

   GPT4All
       Description: A desktop app for running optimized LLMs locally, with a focus on ease of use.

   Jan.ai
   Open WebUI
   LibreChat
   Mistral AI
   vLLM
   LangDB
   NVIDIA Triton Inference Server
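For the Transformers route listed above, local inference can be as short as the sketch below. It assumes `pip install transformers torch`; "gpt2" is only a small example checkpoint, and a path to any locally downloaded model works in its place.

```python
from transformers import pipeline  # requires: pip install transformers torch

# "gpt2" is just a small example model; substitute any model name
# or a local directory containing a downloaded checkpoint.
generator = pipeline("text-generation", model="gpt2")

out = generator("Running LLMs locally is", max_new_tokens=20)
print(out[0]["generated_text"])
```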
