Zara 14b v1.1 πŸ§™β€β™‚οΈ

Zara 14b strives to be the perfect companion for any chat that involves multiple roles. It understands context well and excels in creativity and storytelling. It is built on Lamarck 14B v0.7 and trained on several datasets, with some layer merges to enhance its capabilities.

Model Details πŸ“Š

Quantization

At least one community quantization of Zara 14b has been published; check the model tree on the repository page for current options.

Model Architecture πŸ—οΈ

  • Base model: sometimesanotion/Lamarck-14B-v0.7
  • Parameter count: ~14.8 billion
  • Tensor type: BF16 (safetensors)
  • Architecture specifics: Transformer-based language model
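
These details can be sanity-checked straight from the repository's config. A minimal sketch, assuming the standard transformers AutoConfig API and the usual causal-LM config field names:

```python
# Inspect the model config to confirm the architecture details above.
# Field names assume a standard transformers causal-LM config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("aixonlab/Zara-14b-v1.1")
print(config.architectures)                           # transformer class in use
print(config.num_hidden_layers, config.hidden_size)   # depth and width
```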

Intended Use 🎯

Zara 14b is intended for use as an advanced language model across a variety of natural language processing tasks, including but not limited to text generation (where it particularly excels in chat), question answering, and analysis.
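
For chat use, here is a minimal loading and generation sketch, assuming the standard transformers API and that the repository ships a chat template (this card does not document one, so verify against the repo):

```python
# Minimal chat sketch for Zara 14b. The system prompt and sampling
# settings below are illustrative assumptions, not recommendations
# from the model authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aixonlab/Zara-14b-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are shipped in BF16
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Zara, a creative roleplay companion."},
    {"role": "user", "content": "Open a scene where two strangers meet on a night train."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```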

Ethical Considerations πŸ€”

As a model based on multiple sources, Zara 14b may inherit biases and limitations from its constituent models. Users should be aware of potential biases in generated content and use the model responsibly.

Performance and Evaluation

Performance metrics and evaluation results for Zara 14b are yet to be determined. Users are encouraged to contribute their findings and benchmarks.
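
One practical way to produce such benchmarks is EleutherAI's lm-evaluation-harness. A minimal sketch, assuming lm-eval v0.4+ and an illustrative (not prescribed) task selection:

```python
# Benchmark sketch using lm-evaluation-harness (pip install lm-eval).
# Task choice and batch size here are assumptions for illustration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=aixonlab/Zara-14b-v1.1,dtype=bfloat16",
    tasks=["hellaswag", "arc_challenge"],
    batch_size=8,
)
print(results["results"])
```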

Limitations and Biases

The model may exhibit biases present in its training data and constituent models. It's crucial to critically evaluate the model's outputs and use them in conjunction with human judgment.

Additional Information

For more details on the base model and constituent models, please refer to their respective model cards and documentation.
