---
language: 
  - ja

license: mit
library_name: transformers
tags:
- fastText
- embedding
pipeline_tag: feature-extraction
widget:
- text: "海賊王におれはなる"
  example_title: "ワンピース"

---


# fasttext-jp-embedding

Pretrained fastText word vectors for Japanese.

## Reference
- fastText <br>
  https://github.com/facebookresearch/fastText

- word vector data <br>
  https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.ja.300.vec.gz
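
If you only need the raw word vectors rather than this model, you can load the linked `cc.ja.300.vec.gz` file directly. The sketch below uses gensim, which is not part of this repository; it is an assumption about one convenient way to read the word2vec-format file, not the method used by this model.

```
from gensim.models import KeyedVectors

# Load the pretrained Japanese fastText vectors (word2vec text format, gzipped).
# This file is large (~1.2 GB compressed), so loading takes a while.
kv = KeyedVectors.load_word2vec_format("cc.ja.300.vec.gz", binary=False)

# Look up a 300-dimensional vector, if the word is in the vocabulary.
vec = kv["海賊王"]
print(vec.shape)  # (300,)
```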

## Usage

Google Colaboratory Example
```
! apt install aptitude swig > /dev/null 
! aptitude install mecab libmecab-dev mecab-ipadic-utf8 git make curl xz-utils file -y > /dev/null 
! pip install transformers torch mecab-python3 torchtyping > /dev/null
! ln -s /etc/mecabrc /usr/local/etc/mecabrc
```

```
from transformers import pipeline

# The custom pipeline code is loaded from the model repository,
# so trust_remote_code=True is required.
fe = pipeline(
    "feature-extraction",
    model="paulhindemith/fasttext-jp-embedding",
    revision="2022.11.6",
    trust_remote_code=True,
)
fe("海賊王におれはなる")
```
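
The call above returns the extracted features as nested Python lists. The exact shape depends on the custom pipeline code in this repository; the sketch below, which averages the per-token vectors into a single sentence embedding, is an assumption based on the usual feature-extraction output format (a list of token vectors per input), not something documented by this model.

```
import numpy as np

features = fe("海賊王におれはなる")

# Assumption: the output is a nested list of per-token 300-dim vectors,
# possibly wrapped in an extra batch dimension; squeeze it to (num_tokens, 300).
token_vectors = np.squeeze(np.array(features))

# Simple average pooling over tokens as one way to get a sentence-level vector.
sentence_embedding = token_vectors.mean(axis=0)
print(token_vectors.shape, sentence_embedding.shape)
```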