Post-editing using TowerInstruct

#8
by HURIMOZ

Hi, where can we obtain information on using TowerInstruct for post-editing tasks?

Unbabel org

Hi!

There are some prompts in the test data we used when writing the paper (link to HF repo).

Here's a specific example:

The primary goal of automatic post-editing is to boost the translation's quality by making small adjustments to address any existing errors in the translation. If the translation is already accurate, there is no need to change it and instead it should be copied.
Source (English): Good, but would like to find something better
Translation (German): Gut, aber ich würde gerne etwas Besseres finden.
Post-edited: 

Don't forget to apply the chat template.
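
For reference, here's a minimal sketch of running that prompt through the chat template with the 🤗 Transformers pipeline. The checkpoint ID and generation settings are illustrative choices, not requirements:

```python
# Minimal sketch: feed the APE prompt above through the chat template.
# Assumes the Unbabel/TowerInstruct-7B-v0.2 checkpoint; swap in whichever
# TowerInstruct variant you actually use.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Unbabel/TowerInstruct-7B-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = (
    "The primary goal of automatic post-editing is to boost the translation's "
    "quality by making small adjustments to address any existing errors in the "
    "translation. If the translation is already accurate, there is no need to "
    "change it and instead it should be copied.\n"
    "Source (English): Good, but would like to find something better\n"
    "Translation (German): Gut, aber ich würde gerne etwas Besseres finden.\n"
    "Post-edited: "
)

# apply_chat_template wraps the user turn in the model's expected chat markup.
messages = [{"role": "user", "content": prompt}]
formatted = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

outputs = pipe(formatted, max_new_tokens=128, do_sample=False,
               return_full_text=False)
print(outputs[0]["generated_text"])
```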

Hope this helps.

Thank you, I'll try those prompts.
Now, I've just fine-tuned the model on my bilingual English-Tahitian data. Does fine-tuning affect its performance on languages like French, Spanish, Portuguese, etc.?
Also, I'd like to do transfer learning from my English-Tahitian training to a French-Tahitian training. Coming from a classic bilingual NMT background, I would leverage extracted embeddings as a way to transfer learning. How is that done with LLMs?

Unbabel org

Yes, performance on other languages can be affected, so it's a good idea to keep an eye on those pairs as you fine-tune.

We don't really have much experience with that kind of strategy for LLMs. However, it could be the case that the model already works well on French-Tahitian after fine-tuning it only on English-Tahitian. I would start by measuring that; it could be an interesting finding.
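
If you want a quick way to measure it, something like the sketch below would do: it translates a held-out French-Tahitian test set and scores it with sacreBLEU. The file names, prompt wording, and checkpoint path are all placeholders, not anything from this thread:

```python
# Minimal sketch: zero-shot French-Tahitian check after an English-Tahitian
# fine-tune. Paths and the checkpoint name are hypothetical placeholders.
import torch
import sacrebleu
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="path/to/your-finetuned-tower",  # your fine-tuned checkpoint
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

with open("test.fr") as f_src, open("test.ty") as f_ref:
    sources = [line.strip() for line in f_src]
    references = [line.strip() for line in f_ref]

hypotheses = []
for src in sources:
    messages = [{
        "role": "user",
        "content": ("Translate the following text from French into Tahitian.\n"
                    f"French: {src}\nTahitian:"),
    }]
    formatted = pipe.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    out = pipe(formatted, max_new_tokens=256, do_sample=False,
               return_full_text=False)
    hypotheses.append(out[0]["generated_text"].strip())

# Corpus-level BLEU; rerun the same loop on your other pairs (en-fr, en-pt,
# ...) before and after fine-tuning to spot regressions.
print(sacrebleu.corpus_bleu(hypotheses, [references]))
```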

Can you please provide a config template for resuming fine-tuning from a given step?
I'm resuming from step_1300, but the perplexity shoots up right from the first steps.
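
One common cause of that symptom is that only the model weights get reloaded while the optimizer and LR-scheduler state are reset, so the run effectively restarts warmup on already-tuned weights. A minimal sketch of a resume that keeps that state, assuming the 🤗 Trainer (this thread doesn't say which training stack is in use, and checkpoint directory names differ by framework):

```python
# Minimal sketch: resume fine-tuning with optimizer/scheduler state intact.
# All paths and hyperparameters are placeholders; reuse your original values.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "Unbabel/TowerInstruct-7B-v0.2"   # or your base checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id)
tok = AutoTokenizer.from_pretrained(model_id)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token

# Placeholder load of your English-Tahitian data (one "text" field per row).
raw = load_dataset("json", data_files="train_en_ty.json")["train"]
train_dataset = raw.map(
    lambda batch: tok(batch["text"], truncation=True, max_length=1024),
    batched=True,
    remove_columns=raw.column_names,
)

args = TrainingArguments(
    output_dir="outputs",           # must match the original run's output_dir
    per_device_train_batch_size=4,  # keep identical to the original run
    learning_rate=1e-5,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)

# resume_from_checkpoint restores the optimizer, LR scheduler, and RNG state
# along with the weights. Reloading the weights alone (e.g. from_pretrained
# on a saved step) restarts the LR warmup, which can make perplexity spike.
trainer.train(resume_from_checkpoint="outputs/checkpoint-1300")
```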
