# `lms` — LM Studio's CLI

LM Studio ships with `lms`, a command line tool for scripting and automating your local LLM workflows.
`lms` is MIT licensed and developed in this repository on GitHub: https://github.com/lmstudio-ai/lms

👉 You need to run LM Studio at least once before you can use `lms`.
## Install `lms`

`lms` ships with LM Studio and can be found under `/bin` in LM Studio's working directory.

Use the following commands to add `lms` to your system path.
### `lms` on macOS or Linux

Run the following command in your terminal:

```shell
~/.lmstudio/bin/lms bootstrap
```
### `lms` on Windows

Run the following command in PowerShell:

```shell
cmd /c %USERPROFILE%/.lmstudio/bin/lms.exe bootstrap
```
Open a new terminal window and run `lms`.

This is the current output you will get:

```shell
$ lms
lms - LM Studio CLI - v0.2.22
GitHub: https://github.com/lmstudio-ai/lmstudio-cli

Usage
lms <subcommand>

where <subcommand> can be one of:

- status - Prints the status of LM Studio
- server - Commands for managing the local server
- ls - List all downloaded models
- ps - List all loaded models
- load - Load a model
- unload - Unload a model
- create - Create a new project with scaffolding
- log - Log operations. Currently only supports streaming logs from LM Studio via `lms log stream`
- version - Prints the version of the CLI
- bootstrap - Bootstrap the CLI

For more help, try running `lms <subcommand> --help`
```
## Use `lms` to automate and debug your workflows

### Start and stop the local server

```shell
lms server start
lms server stop
```
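A minimal sketch of scripting the server around a batch job. The `run` wrapper is a dry-run device for this sketch, not part of `lms`:

```shell
#!/bin/sh
# Dry-run wrapper: records and prints each lms command so the script
# can be sanity-checked without LM Studio installed. Replace the body
# with `"$@"` to execute the commands for real.
LOG=""
run() { LOG="$LOG$* ; "; echo "+ $*"; }

run lms server start    # bring up the local inference server
# ... point your workload at the local server here ...
run lms server stop     # shut it down when the job finishes
```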
### List the local models on the machine

```shell
lms ls
```

This will reflect the current LM Studio models directory, which you set in the 📂 My Models tab in the app.
### List the currently loaded models

```shell
lms ps
```
### Load a model with options

```shell
lms load [--gpu=max|auto|0.0-1.0] [--context-length=1-N]
```

`--gpu=1.0` means "attempt to offload 100% of the computation to the GPU".
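As a concrete sketch (the model name, GPU fraction, and context length are all illustrative), the invocation can be built up in a variable and echoed, so you can inspect it before running it for real:

```shell
# Build an lms load invocation from its pieces. The echo makes this a
# dry run; replace it with `$CMD` (or paste the line) to actually load.
MODEL="TheBloke/phi-2-GGUF"   # illustrative model
GPU=0.5                       # offload ~50% of the computation to the GPU
CONTEXT=4096                  # context window, in tokens
CMD="lms load $MODEL --gpu=$GPU --context-length=$CONTEXT"
echo "$CMD"
```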
### Set a custom identifier for a loaded model

```shell
lms load TheBloke/phi-2-GGUF --identifier="gpt-4-turbo"
```

This is useful if you want to keep the model identifier consistent, for example when client code expects a particular model name.
### Unload models

```shell
lms unload [--all]
```
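Putting the pieces together, a full session can be scripted end to end. This dry-run sketch only prints the plan, using just the subcommands documented above (the model name and identifier are illustrative); pipe the plan to `sh` once it looks right:

```shell
#!/bin/sh
# Dry-run plan for a complete workflow: start the server, load a model
# under a custom identifier, then clean up. Nothing is executed here.
PLAN='lms server start
lms load TheBloke/phi-2-GGUF --gpu=auto --identifier=my-model
lms unload --all
lms server stop'
printf '%s\n' "$PLAN"   # to run for real: printf '%s\n' "$PLAN" | sh
```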