Run an LLM locally
Make sure your computer meets the minimum system requirements.
You might sometimes see terms such as open-source models or open-weights models. Models are released under different licenses and with varying degrees of 'openness'. To run a model locally, you need access to its "weights", typically distributed as one or more files ending in .gguf, .safetensors, etc.
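If you're curious which weight files are already on your machine, a small script can list them by extension. This is just an illustrative sketch: the directory path below is an assumption, so point it at wherever your model files actually live.

```python
from pathlib import Path

# Assumed location -- adjust to wherever your model files are stored.
MODELS_DIR = Path.home() / ".lmstudio" / "models"

WEIGHT_EXTENSIONS = {".gguf", ".safetensors"}

# Walk the directory tree and print each weight file with its size.
for path in sorted(MODELS_DIR.rglob("*")):
    if path.suffix in WEIGHT_EXTENSIONS:
        size_gb = path.stat().st_size / 1e9
        print(f"{path.name}  ({size_gb:.2f} GB)")
```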
First, install the latest version of LM Studio. You can download it from the LM Studio website (https://lmstudio.ai).
Once you're all set up, you need to download your first LLM.
Head over to the Discover tab to download models. Pick one of the curated options, or search by keyword (e.g. "Llama"). See the documentation on downloading models for more in-depth information.
The Discover tab in LM Studio
Head over to the Chat tab, and quickly open the model loader with Cmd + L on macOS or Ctrl + L on Windows/Linux.
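If you'd rather script this step, LM Studio also provides a Python SDK (the lmstudio package, installable with pip install lmstudio). The sketch below is a minimal example assuming the SDK is installed and the model has already been downloaded; the model identifier is a placeholder.

```python
import lmstudio as lms

# Load a previously downloaded model into memory.
# "llama-3.2-1b-instruct" is a placeholder -- substitute the
# identifier of a model you downloaded in the Discover tab.
model = lms.llm("llama-3.2-1b-instruct")

# Ask for a single response to confirm the model loaded and is working.
print(model.respond("Say hello in one sentence."))
```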
Loading a model typically means allocating memory in your computer's RAM to accommodate the model's weights and other parameters.
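As a rough rule of thumb, the weights alone take about (number of parameters) × (bits per parameter) ÷ 8 bytes, plus extra for the KV cache and runtime buffers. Here is a back-of-the-envelope sketch of that arithmetic; the ~20% overhead factor is an assumption, and real usage varies with context length and backend.

```python
def estimate_model_memory_gb(num_params_billions: float,
                             bits_per_param: float,
                             overhead_factor: float = 1.2) -> float:
    """Rough estimate of the RAM needed to hold a model's weights.

    overhead_factor (assumed ~1.2 here) pads for the KV cache and
    runtime buffers; actual usage depends on context length and backend.
    """
    weight_bytes = num_params_billions * 1e9 * bits_per_param / 8
    return weight_bytes * overhead_factor / 1e9

# e.g. a 7B-parameter model quantized to 4 bits per weight:
# 7e9 * 4 / 8 = 3.5 GB of weights, ~4.2 GB with overhead.
print(f"{estimate_model_memory_gb(7, 4):.1f} GB")
```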
Once the model is loaded, you can start a back-and-forth conversation with the model in the Chat tab.
LM Studio on macOS
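You're not limited to the built-in chat UI: LM Studio can also serve loaded models through an OpenAI-compatible API on localhost. The sketch below assumes the local server is running on the default port 1234 and uses the openai Python client; the model identifier is a placeholder, and the API key can be any non-empty string since it isn't checked locally.

```python
from openai import OpenAI

# Point the OpenAI client at LM Studio's local server.
# Port 1234 is the default; the key is ignored locally but
# the client requires a non-empty string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="llama-3.2-1b-instruct",  # placeholder: use your loaded model's identifier
    messages=[{"role": "user", "content": "What can you do when run locally?"}],
)
print(response.choices[0].message.content)
```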
Chat with other LM Studio users, discuss LLMs, hardware, and more on the LM Studio Discord server.