---
license: cc-by-nc-2.0
language:
- cs
base_model:
- fav-kky/FERNET-C5
---
This is fav-kky/FERNET-C5, fine-tuned with the **Cross-Encoder** architecture on the Czech News Dataset for Semantic Textual Similarity and on DaReCzech. The Cross-Encoder architecture processes both input texts simultaneously, which typically yields higher accuracy than encoding them separately.
The model can be used both for Semantic Textual Similarity and for re-ranking.
**Semantic Textual Similarity**: The model takes two input sentences and evaluates how similar their meanings are.
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('ctu-aic/CE-fernet-c5-sfle512', max_length=512)
scores = model.predict([["sentence_1", "sentence_2"]])
print(scores)
```
**Re-ranking task**: Given a query, the model scores each candidate passage and ranks them in descending order of relevance.
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('ctu-aic/CE-fernet-c5-sfle512', max_length=512)

query = "Example query."
documents = [
    "Example document one.",
    "Example document two.",
    "Example document three.",
]

# Rank all documents by relevance to the query
results = model.rank(
    query=query,
    documents=documents,
    top_k=3,
    return_documents=True,
)

for i, res in enumerate(results):
    print(f"{i+1}. {res['text']}")
```