---
language:
- pt
- en
metrics:
- accuracy
pipeline_tag: question-answering
tags:
- personal
license: mit
---

# Luna Model

This document describes the usage and features of Luna, a personal model fine-tuned from the Phi-3 model for the specific tasks detailed below.

## Table of Contents
- [Introduction](#introduction)
- [Requirements](#requirements)
- [Installation](#installation)
- [Usage](#usage)

## Introduction
The Luna Model is a customized version of the Phi-3 model tailored for specific tasks such as text generation. This model leverages the capabilities of the Phi-3 architecture to provide efficient and accurate results for various natural language processing tasks.

## Requirements
- Ollama

## Installation

### Install Ollama

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

## Usage

### Create a Modelfile

```shell
touch Modelfile
```

### Modelfile content

```
FROM ./models/luna-4b-v0.5.gguf

PARAMETER temperature 1
```
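Beyond `FROM` and `PARAMETER`, the Modelfile format also supports a `SYSTEM` directive and additional sampling parameters. A sketch of a fuller Modelfile, assuming the same GGUF path; the parameter values and system prompt below are illustrative, not tuned for Luna:

```
FROM ./models/luna-4b-v0.5.gguf

# Lower temperature for more deterministic answers (illustrative value)
PARAMETER temperature 0.7
# Context window size in tokens (illustrative value)
PARAMETER num_ctx 4096

# System prompt baked into the model (illustrative wording)
SYSTEM "You are Luna, a helpful personal assistant."
```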

### Load the model

```shell
ollama create luna -f ./Modelfile
```

### Run the model

```shell
ollama run luna
```

## Python Usage

```python
import ollama

# Stream a chat completion from the locally loaded "luna" model
stream = ollama.chat(
    model='luna',
    messages=[{'role': 'user', 'content': 'Who are you?'}],
    stream=True,
)

# Print the response tokens as they arrive
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)
```
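When `stream=True`, each yielded chunk carries a fragment of the reply under `message.content`, and the full reply is the concatenation of those fragments. A minimal sketch of that accumulation, with hard-coded sample chunks standing in for the stream so it runs without a server (the chunk contents here are invented for illustration):

```python
# Sample chunks mimicking the shape ollama.chat yields with stream=True
# (hard-coded so this sketch runs without an Ollama server)
sample_stream = [
    {'message': {'role': 'assistant', 'content': 'I am '}},
    {'message': {'role': 'assistant', 'content': 'Luna.'}},
]

# Concatenate the content fragments into the full reply
reply = ''.join(chunk['message']['content'] for chunk in sample_stream)
print(reply)  # I am Luna.
```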