
Ollama: using Large Language Models on your local machine


Ollama is a new tool that lets you run large language models (LLMs) on your own computer instead of relying on cloud services. It works much like Docker, but for LLMs: you pull a model and run it locally. Ollama offers several ways to use it, including an interactive shell, a REST API, and a Python library.

What is Ollama?

Ollama is a tool designed for running and managing Large Language Models on your local machine. It’s user-friendly and provides multiple ways to interact with LLMs:

  • Interactive shell: Ollama can be run as a shell, allowing you to chat, ask questions, and have conversations with the LLM.
  • REST API: Run Ollama as a service and interact with it by sending requests.
  • Python library: You can integrate Ollama into your Python projects.

How does it work?

Once Ollama is installed, you can start interacting with it right away. For instance, to chat with the “llama2” model, simply run:

ollama run llama2

Ollama will automatically download the llama2 model and launch the interactive shell, allowing you to start chatting.
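Beyond `run`, the Ollama CLI offers a few everyday subcommands for managing the models on your machine (the model tag below is just an example):

```
# Download a model without starting a chat
ollama pull llama2:13b-chat

# List the models already on disk
ollama list

# Remove a model you no longer need
ollama rm llama2:13b-chat
```

You can also pass a prompt directly as an argument to `ollama run` to get a one-shot answer instead of an interactive session.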


REST API Integration

Ollama includes a built-in REST API for sending requests. To ask the llama2:13b-chat model a question, use a command like this:

curl http://localhost:11434/api/generate -d '{
  "model": "llama2:13b-chat",
  "prompt": "What can you tell me about Pink Floyd?",
  "stream": false
}'

You’ll receive a JSON response containing the generated text.
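That JSON is easy to work with from a script. A minimal sketch of extracting the generated text, assuming the non-streaming response shape (`model`, `response`, `done` fields); the response body below is an abridged, illustrative stand-in for real output:

```python
import json

# Abridged example of a non-streaming /api/generate response;
# the generated text itself is illustrative.
raw = '''{
  "model": "llama2:13b-chat",
  "response": "Pink Floyd were an English rock band formed in London in 1965.",
  "done": true
}'''

reply = json.loads(raw)
print(reply["response"])
```

Note that when `"stream"` is left at its default of `true`, the endpoint instead returns one JSON object per line, each carrying a fragment of the generated text in its `response` field.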

Python Library

Ollama provides libraries for various programming languages, including Python. Here’s how to use it within your Python code:

import ollama

response = ollama.chat(
    model='llama2:13b-chat',
    messages=[
        {
            'role': 'user',
            'content': 'What can you tell me about Pink Floyd?',
        },
    ],
    stream=False,
)

print(response['message']['content'])

Additional Features

Ollama also enables the creation of new models based on existing ones. More information can be found in the official documentation.
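Model creation is driven by a `Modelfile`. A minimal sketch, where the base model, parameter value, and system prompt are all illustrative choices:

```
FROM llama2
PARAMETER temperature 0.7
SYSTEM "You are a concise assistant that answers questions about music history."
```

Saved as `Modelfile`, it can be built with `ollama create music-helper -f Modelfile` and then used like any other model, e.g. `ollama run music-helper`.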

While the examples used the llama2:13b-chat model, Ollama supports many other models; a complete list is available here.
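Switching models only changes the `model` field of the request body. A small hypothetical helper illustrates this; the function name and the `mistral` model tag below are assumptions for the example, not part of Ollama itself:

```python
import json

def generate_body(model: str, prompt: str, stream: bool = False) -> str:
    """Build the JSON body for a POST to /api/generate."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

# The same question as before, pointed at a different model
print(generate_body("mistral", "What can you tell me about Pink Floyd?"))
```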

Conclusion

Ollama is a valuable tool for experimenting with Large Language Models locally, eliminating the reliance on cloud services. It is easy to use, and its REST API and client libraries make it straightforward to integrate into applications. Its future development looks promising.

Licensed under CC BY-NC-SA 4.0
Last updated on Nov 14, 2023 01:32 +0100