
tokenizers

Tokenizers are used to prepare textual inputs for a model.

Example: Create an AutoTokenizer and use it to tokenize a sentence. This will automatically detect the tokenizer type based on the tokenizer class defined in tokenizer.json.

import { AutoTokenizer } from '@huggingface/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');
const { input_ids } = await tokenizer('I love transformers!');
// Tensor {
//   data: BigInt64Array(6) [101n, 1045n, 2293n, 19081n, 999n, 102n],
//   dims: [1, 6],
//   type: 'int64',
//   size: 6,
// }
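
The returned IDs can be mapped back to text with batch_decode (a minimal continuation of the example above; the decoded string is what bert-base-uncased is expected to produce, since it lowercases its input):

```js
const decoded = tokenizer.batch_decode(input_ids, { skip_special_tokens: true });
// [ 'i love transformers!' ]
```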

tokenizers.TokenizerModel ⇐ <code> Callable </code>

Abstract base class for tokenizer models.

Kind: static class of tokenizers
Extends: Callable


new TokenizerModel(config)

Creates a new instance of TokenizerModel.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the TokenizerModel. |


tokenizerModel.vocab : <code> Array. < string > </code>

Kind: instance property of TokenizerModel


tokenizerModel.tokens_to_ids : <code> Map. < string, number > </code>

A mapping of tokens to ids.

Kind: instance property of TokenizerModel


tokenizerModel.fuse_unk : <code> boolean </code>

Whether to fuse unknown tokens when encoding. Defaults to false.

Kind: instance property of TokenizerModel


tokenizerModel._call(tokens) ⇒ <code> Array. < string > </code>

Internal function to call the TokenizerModel instance.

Kind: instance method of TokenizerModel
Overrides: _call
Returns: Array.<string> - The encoded tokens.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | The tokens to encode. |


tokenizerModel.encode(tokens) ⇒ <code> Array. < string > </code>

Encodes a list of input tokens into a list of encoded (sub)tokens.

Kind: instance method of TokenizerModel
Returns: Array.<string> - The encoded tokens.
Throws:

  • Will throw an error if not implemented in a subclass.
| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | The tokens to encode. |


tokenizerModel.convert_tokens_to_ids(tokens) ⇒ <code> Array. < number > </code>

Converts a list of tokens into a list of token IDs.

Kind: instance method of TokenizerModel
Returns: Array.<number> - The converted token IDs.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | The tokens to convert. |


tokenizerModel.convert_ids_to_tokens(ids) ⇒ <code> Array. < string > </code>

Converts a list of token IDs into a list of tokens.

Kind: instance method of TokenizerModel
Returns: Array.<string> - The converted tokens.

| Param | Type | Description |
| --- | --- | --- |
| ids | `Array<number> \| Array<bigint>` | The token IDs to convert. |


TokenizerModel.fromConfig(config, ...args) ⇒ <code> TokenizerModel </code>

Instantiates a new TokenizerModel instance based on the configuration object provided.

Kind: static method of TokenizerModel
Returns: TokenizerModel - A new instance of a TokenizerModel.
Throws:

  • Will throw an error if the TokenizerModel type in the config is not recognized.
| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the TokenizerModel. |
| ...args | `*` | Optional arguments to pass to the specific TokenizerModel constructor. |


tokenizers.PreTrainedTokenizer

Kind: static class of tokenizers


new PreTrainedTokenizer(tokenizerJSON, tokenizerConfig)

Create a new PreTrainedTokenizer instance.

| Param | Type | Description |
| --- | --- | --- |
| tokenizerJSON | `Object` | The JSON of the tokenizer. |
| tokenizerConfig | `Object` | The config of the tokenizer. |


preTrainedTokenizer.added_tokens : <code> Array. < AddedToken > </code>

Kind: instance property of PreTrainedTokenizer


preTrainedTokenizer.remove_space : <code> boolean </code>

Whether or not to strip the text when tokenizing (removing excess spaces before and after the string).

Kind: instance property of PreTrainedTokenizer


preTrainedTokenizer._call(text, options) ⇒ <code> BatchEncoding </code>

Encode/tokenize the given text(s).

Kind: instance method of PreTrainedTokenizer
Returns: BatchEncoding - Object to be passed to the model.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| text | `string \| Array<string>` |  | The text to tokenize. |
| options | `Object` |  | An optional object containing the following properties: |
| [options.text_pair] | `string \| Array<string>` | `null` | Optional second sequence to be encoded. If set, must be the same type as text. |
| [options.padding] | `boolean \| 'max_length'` | `false` | Whether to pad the input sequences. |
| [options.add_special_tokens] | `boolean` | `true` | Whether or not to add the special tokens associated with the corresponding model. |
| [options.truncation] | `boolean` |  | Whether to truncate the input sequences. |
| [options.max_length] | `number` |  | Maximum length of the returned list and optionally padding length. |
| [options.return_tensor] | `boolean` | `true` | Whether to return the results as Tensors or arrays. |
| [options.return_token_type_ids] | `boolean` |  | Whether to return the token type ids. |
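
For example, a minimal sketch of batch-encoding two sentences with padding and truncation (the model name reuses the example at the top of this page; the shapes in the comment are illustrative):

```js
import { AutoTokenizer } from '@huggingface/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');
const { input_ids, attention_mask } = await tokenizer(
  ['Hello world', 'A much longer sentence that will be truncated'],
  { padding: true, truncation: true, max_length: 8 },
);
// input_ids and attention_mask are Tensors with dims [2, 8]
```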


preTrainedTokenizer._encode_text(text) ⇒ <code> Array < string > </code> | <code> null </code>

Encodes a single text using the preprocessor pipeline of the tokenizer.

Kind: instance method of PreTrainedTokenizer
Returns: Array<string> | null - The encoded tokens.

| Param | Type | Description |
| --- | --- | --- |
| text | `string \| null` | The text to encode. |


preTrainedTokenizer._tokenize_helper(text, options) ⇒ <code> * </code>

Internal helper function to tokenize a text, and optionally a pair of texts.

Kind: instance method of PreTrainedTokenizer
Returns: * - An object containing the tokens and optionally the token type IDs.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| text | `string` |  | The text to tokenize. |
| options | `Object` |  | An optional object containing the following properties: |
| [options.pair] | `string` | `null` | The optional second text to tokenize. |
| [options.add_special_tokens] | `boolean` | `false` | Whether or not to add the special tokens associated with the corresponding model. |


preTrainedTokenizer.tokenize(text, options) ⇒ <code> Array. < string > </code>

Converts a string into a sequence of tokens.

Kind: instance method of PreTrainedTokenizer
Returns: Array.<string> - The list of tokens.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| text | `string` |  | The sequence to be encoded. |
| options | `Object` |  | An optional object containing the following properties: |
| [options.pair] | `string` |  | A second sequence to be encoded with the first. |
| [options.add_special_tokens] | `boolean` | `false` | Whether or not to add the special tokens associated with the corresponding model. |
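
For instance, continuing the example from the top of this page (the exact splits depend on the model's vocabulary):

```js
const tokens = tokenizer.tokenize('I love transformers!');
// [ 'i', 'love', 'transformers', '!' ]
```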


preTrainedTokenizer.encode(text, options) ⇒ <code> Array. < number > </code>

Encodes a single text or a pair of texts using the model’s tokenizer.

Kind: instance method of PreTrainedTokenizer
Returns: Array.<number> - An array of token IDs representing the encoded text(s).

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| text | `string` |  | The text to encode. |
| options | `Object` |  | An optional object containing the following properties: |
| [options.text_pair] | `string` | `null` | The optional second text to encode. |
| [options.add_special_tokens] | `boolean` | `true` | Whether or not to add the special tokens associated with the corresponding model. |
| [options.return_token_type_ids] | `boolean` |  | Whether to return token_type_ids. |
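
A short sketch pairing encode with the IDs shown in the example at the top of this page:

```js
const ids = tokenizer.encode('I love transformers!');
// [ 101, 1045, 2293, 19081, 999, 102 ]
```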


preTrainedTokenizer.batch_decode(batch, decode_args) ⇒ <code> Array. < string > </code>

Decode a batch of tokenized sequences.

Kind: instance method of PreTrainedTokenizer
Returns: Array.<string> - List of decoded sequences.

| Param | Type | Description |
| --- | --- | --- |
| batch | `Array<Array<number>> \| Tensor` | List/Tensor of tokenized input sequences. |
| decode_args | `Object` | (Optional) Object with decoding arguments. |


preTrainedTokenizer.decode(token_ids, [decode_args]) ⇒ <code> string </code>

Decodes a sequence of token IDs back to a string.

Kind: instance method of PreTrainedTokenizer
Returns: string - The decoded string.
Throws:

  • Error If `token_ids` is not a non-empty array of integers.
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| token_ids | `Array<number> \| Array<bigint> \| Tensor` |  | List/Tensor of token IDs to decode. |
| [decode_args] | `Object` | `{}` |  |
| [decode_args.skip_special_tokens] | `boolean` | `false` | If true, special tokens are removed from the output string. |
| [decode_args.clean_up_tokenization_spaces] | `boolean` | `true` | If true, spaces before punctuation and abbreviated forms are removed. |
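
For example, decoding the IDs from the example at the top of this page, with and without special tokens (outputs illustrative for bert-base-uncased):

```js
tokenizer.decode([101, 1045, 2293, 19081, 999, 102]);
// '[CLS] i love transformers! [SEP]'

tokenizer.decode([101, 1045, 2293, 19081, 999, 102], { skip_special_tokens: true });
// 'i love transformers!'
```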


preTrainedTokenizer.decode_single(token_ids, decode_args) ⇒ <code> string </code>

Decode a single list of token ids to a string.

Kind: instance method of PreTrainedTokenizer
Returns: string - The decoded string

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| token_ids | `Array<number> \| Array<bigint>` |  | List of token ids to decode |
| decode_args | `Object` |  | Optional arguments for decoding |
| [decode_args.skip_special_tokens] | `boolean` | `false` | Whether to skip special tokens during decoding |
| [decode_args.clean_up_tokenization_spaces] | `boolean` |  | Whether to clean up tokenization spaces during decoding. If null, the value is set to this.decoder.cleanup if it exists, falling back to this.clean_up_tokenization_spaces if it exists, falling back to true. |


preTrainedTokenizer.get_chat_template(options) ⇒ <code> string </code>

Retrieve the chat template string used for tokenizing chat messages. This template is used internally by the apply_chat_template method and can also be used externally to retrieve the model’s chat template for better generation tracking.

Kind: instance method of PreTrainedTokenizer
Returns: string - The chat template string.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| options | `Object` |  | An optional object containing the following properties: |
| [options.chat_template] | `string` | `null` | A Jinja template or the name of a template to use for this conversion. It is usually not necessary to pass anything to this argument, as the model's template will be used by default. |
| [options.tools] | `Array.<Object>` |  | A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our chat templating guide for more information. |


preTrainedTokenizer.apply_chat_template(conversation, options) ⇒ <code> string </code> | <code> Tensor </code> | <code> Array < number > </code> | <code> Array < Array < number > > </code> | <code> BatchEncoding </code>

Converts a list of message objects with "role" and "content" keys to a list of token ids. This method is intended for use with chat models, and will read the tokenizer’s chat_template attribute to determine the format and control tokens to use when converting.

See here for more information.

Example: Applying a chat template to a conversation.

import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/mistral-tokenizer-v1");

const chat = [
  { "role": "user", "content": "Hello, how are you?" },
  { "role": "assistant", "content": "I'm doing great. How can I help you today?" },
  { "role": "user", "content": "I'd like to show off how chat templating works!" },
]

const text = tokenizer.apply_chat_template(chat, { tokenize: false });
// "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"

const input_ids = tokenizer.apply_chat_template(chat, { tokenize: true, return_tensor: false });
// [1, 733, 16289, 28793, 22557, 28725, 910, 460, 368, 28804, 733, 28748, 16289, 28793, 28737, 28742, 28719, 2548, 1598, 28723, 1602, 541, 315, 1316, 368, 3154, 28804, 2, 28705, 733, 16289, 28793, 315, 28742, 28715, 737, 298, 1347, 805, 910, 10706, 5752, 1077, 3791, 28808, 733, 28748, 16289, 28793]

Kind: instance method of PreTrainedTokenizer
Returns: string | Tensor | Array<number> | Array<Array<number>> | BatchEncoding - The tokenized output.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| conversation | `Array.<Message>` |  | A list of message objects with "role" and "content" keys, representing the chat history so far. |
| options | `Object` |  | An optional object containing the following properties: |
| [options.chat_template] | `string` | `null` | A Jinja template to use for this conversion. If this is not passed, the model's chat template will be used instead. |
| [options.tools] | `Array.<Object>` |  | A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our chat templating guide for more information. |
| [options.documents] | `*` |  | A list of dicts representing documents that will be accessible to the model if it is performing RAG (retrieval-augmented generation). If the template does not support RAG, this argument will have no effect. We recommend that each document should be a dict containing "title" and "text" keys. Please see the RAG section of the chat templating guide for examples of passing documents with chat templates. |
| [options.add_generation_prompt] | `boolean` | `false` | Whether to end the prompt with the token(s) that indicate the start of an assistant message. This is useful when you want to generate a response from the model. Note that this argument will be passed to the chat template, and so it must be supported in the template for this argument to have any effect. |
| [options.tokenize] | `boolean` | `true` | Whether to tokenize the output. If false, the output will be a string. |
| [options.padding] | `boolean` | `false` | Whether to pad sequences to the maximum length. Has no effect if tokenize is false. |
| [options.truncation] | `boolean` | `false` | Whether to truncate sequences to the maximum length. Has no effect if tokenize is false. |
| [options.max_length] | `number` |  | Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is false. If not specified, the tokenizer's max_length attribute will be used as a default. |
| [options.return_tensor] | `boolean` | `true` | Whether to return the output as a Tensor or an Array. Has no effect if tokenize is false. |
| [options.return_dict] | `boolean` | `true` | Whether to return a dictionary with named outputs. Has no effect if tokenize is false. |
| [options.tokenizer_kwargs] | `Object` | `{}` | Additional options to pass to the tokenizer. |
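
For instance, a sketch of preparing a prompt for generation with add_generation_prompt (the exact output depends on the model's chat template):

```js
const prompt = tokenizer.apply_chat_template(
  [{ role: 'user', content: 'Hello, how are you?' }],
  { tokenize: false, add_generation_prompt: true },
);
// A string ending with the control tokens that start an assistant turn,
// ready to be passed to a text-generation model.
```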


PreTrainedTokenizer.from_pretrained(pretrained_model_name_or_path, options) ⇒ <code> Promise. < PreTrainedTokenizer > </code>

Loads a pre-trained tokenizer from the given pretrained_model_name_or_path.

Kind: static method of PreTrainedTokenizer
Returns: Promise.<PreTrainedTokenizer> - A new instance of the PreTrainedTokenizer class.
Throws:

  • Error Throws an error if the tokenizer.json or tokenizer_config.json files are not found in the `pretrained_model_name_or_path`.
| Param | Type | Description |
| --- | --- | --- |
| pretrained_model_name_or_path | `string` | The path to the pre-trained tokenizer. |
| options | `PretrainedTokenizerOptions` | Additional options for loading the tokenizer. |


tokenizers.BertTokenizer ⇐ <code> PreTrainedTokenizer </code>

BertTokenizer is a class used to tokenize text for BERT models.

Kind: static class of tokenizers
Extends: PreTrainedTokenizer


tokenizers.AlbertTokenizer ⇐ <code> PreTrainedTokenizer </code>

Albert tokenizer

Kind: static class of tokenizers
Extends: PreTrainedTokenizer


tokenizers.NllbTokenizer

The NllbTokenizer class is used to tokenize text for NLLB (“No Language Left Behind”) models.

No Language Left Behind (NLLB) is a first-of-its-kind, AI breakthrough project that open-sources models capable of delivering high-quality translations directly between any pair of 200+ languages, including low-resource languages like Asturian, Luganda, Urdu and more. It aims to help people communicate with anyone, anywhere, regardless of their language preferences. For more information, check out their paper.

For a list of supported languages (along with their language codes), see the link below.

Kind: static class of tokenizers
See: https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200
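
NLLB tokenizers are typically used indirectly through a translation pipeline. A hedged sketch (the model name and language codes below are illustrative examples):

```js
import { pipeline } from '@huggingface/transformers';

const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');
const output = await translator('Hello, how are you?', {
  src_lang: 'eng_Latn', // source language: English (Latin script)
  tgt_lang: 'fra_Latn', // target language: French (Latin script)
});
// [ { translation_text: '...' } ]
```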


nllbTokenizer._build_translation_inputs(raw_inputs, tokenizer_options, generate_kwargs) ⇒ <code> Object </code>

Helper function to build translation inputs for an NllbTokenizer.

Kind: instance method of NllbTokenizer
Returns: Object - Object to be passed to the model.

| Param | Type | Description |
| --- | --- | --- |
| raw_inputs | `string \| Array<string>` | The text to tokenize. |
| tokenizer_options | `Object` | Options to be sent to the tokenizer |
| generate_kwargs | `Object` | Generation options. |


tokenizers.M2M100Tokenizer

The M2M100Tokenizer class is used to tokenize text for M2M100 (“Many-to-Many”) models.

M2M100 is a multilingual encoder-decoder (seq-to-seq) model trained for Many-to-Many multilingual translation. It was introduced in this paper and first released in this repository.

For a list of supported languages (along with their language codes), see the link below.

Kind: static class of tokenizers
See: https://huggingface.co/facebook/m2m100_418M#languages-covered


m2M100Tokenizer._build_translation_inputs(raw_inputs, tokenizer_options, generate_kwargs) ⇒ <code> Object </code>

Helper function to build translation inputs for an M2M100Tokenizer.

Kind: instance method of M2M100Tokenizer
Returns: Object - Object to be passed to the model.

| Param | Type | Description |
| --- | --- | --- |
| raw_inputs | `string \| Array<string>` | The text to tokenize. |
| tokenizer_options | `Object` | Options to be sent to the tokenizer |
| generate_kwargs | `Object` | Generation options. |


tokenizers.WhisperTokenizer ⇐ <code> PreTrainedTokenizer </code>

WhisperTokenizer tokenizer

Kind: static class of tokenizers
Extends: PreTrainedTokenizer


whisperTokenizer._decode_asr(sequences, options) ⇒ <code> * </code>

Decodes automatic speech recognition (ASR) sequences.

Kind: instance method of WhisperTokenizer
Returns: * - The decoded sequences.

| Param | Type | Description |
| --- | --- | --- |
| sequences | `*` | The sequences to decode. |
| options | `Object` | The options to use for decoding. |


whisperTokenizer.decode() : <code> * </code>

Kind: instance method of WhisperTokenizer


tokenizers.MarianTokenizer

Kind: static class of tokenizers
Todo

  • This model is not yet supported by Hugging Face’s “fast” tokenizers library (https://github.com/huggingface/tokenizers). Therefore, this implementation (which is based on fast tokenizers) may produce slightly inaccurate results.

new MarianTokenizer(tokenizerJSON, tokenizerConfig)

Create a new MarianTokenizer instance.

| Param | Type | Description |
| --- | --- | --- |
| tokenizerJSON | `Object` | The JSON of the tokenizer. |
| tokenizerConfig | `Object` | The config of the tokenizer. |


marianTokenizer._encode_text(text) ⇒ <code> Array </code>

Encodes a single text. Overriding this method is necessary since the language codes must be removed before encoding with the SentencePiece model.

Kind: instance method of MarianTokenizer
Returns: Array - The encoded tokens.
See: https://github.com/huggingface/transformers/blob/12d51db243a00726a548a43cc333390ebae731e3/src/transformers/models/marian/tokenization_marian.py#L204-L213

| Param | Type | Description |
| --- | --- | --- |
| text | `string \| null` | The text to encode. |


tokenizers.AutoTokenizer

Helper class which is used to instantiate pretrained tokenizers with the from_pretrained function. The chosen tokenizer class is determined by the type specified in the tokenizer config.

Kind: static class of tokenizers


new AutoTokenizer()

Example

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');

AutoTokenizer.from_pretrained(pretrained_model_name_or_path, options) ⇒ <code> Promise. < PreTrainedTokenizer > </code>

Instantiate one of the tokenizer classes of the library from a pretrained model.

The tokenizer class to instantiate is selected based on the tokenizer_class property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible)

Kind: static method of AutoTokenizer
Returns: Promise.<PreTrainedTokenizer> - A new instance of the PreTrainedTokenizer class.

| Param | Type | Description |
| --- | --- | --- |
| pretrained_model_name_or_path | `string` | The name or path of the pretrained model. Can be either: a string, the model id of a pretrained tokenizer hosted inside a model repo on huggingface.co (valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`); or a path to a directory containing tokenizer files, e.g., `./my_model_directory/`. |
| options | `PretrainedTokenizerOptions` | Additional options for loading the tokenizer. |


tokenizers.is_chinese_char(cp) ⇒ <code> boolean </code>

Checks whether the given Unicode codepoint represents a CJK (Chinese, Japanese, or Korean) character.

A “chinese character” is defined as anything in the CJK Unicode block: https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block)

Note that the CJK Unicode block does NOT include all Japanese and Korean characters, despite its name. The modern Korean Hangul alphabet is a different block, as are Japanese Hiragana and Katakana. Those alphabets are used to write space-separated words, so they are not treated specially and are handled like all other languages.

Kind: static method of tokenizers
Returns: boolean - True if the codepoint represents a CJK character, false otherwise.

| Param | Type | Description |
| --- | --- | --- |
| cp | `number \| bigint` | The Unicode codepoint to check. |
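
A small usage sketch (this assumes is_chinese_char is exported from the tokenizers module in your build):

```js
is_chinese_char('中'.codePointAt(0)); // true: a CJK Unified Ideograph
is_chinese_char('한'.codePointAt(0)); // false: Hangul lives in a separate block
is_chinese_char('a'.codePointAt(0));  // false
```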


tokenizers~AddedToken

Represents a token added by the user on top of the existing Model vocabulary. AddedToken can be configured to specify the behavior it should have in various situations, such as:

  • Whether they should only match single words
  • Whether to include any whitespace on its left or right

Kind: inner class of tokenizers


new AddedToken(config)

Creates a new instance of AddedToken.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| config | `Object` |  | Added token configuration object. |
| config.content | `string` |  | The content of the added token. |
| config.id | `number` |  | The id of the added token. |
| [config.single_word] | `boolean` | `false` | Whether this token must be a single word or can break words. |
| [config.lstrip] | `boolean` | `false` | Whether this token should strip whitespaces on its left. |
| [config.rstrip] | `boolean` | `false` | Whether this token should strip whitespaces on its right. |
| [config.normalized] | `boolean` | `false` | Whether this token should be normalized. |
| [config.special] | `boolean` | `false` | Whether this token is special. |


tokenizers~WordPieceTokenizer ⇐ <code> TokenizerModel </code>

A subclass of TokenizerModel that uses WordPiece encoding to encode tokens.

Kind: inner class of tokenizers
Extends: TokenizerModel


new WordPieceTokenizer(config)

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| config | `Object` |  | The configuration object. |
| config.vocab | `Object` |  | A mapping of tokens to ids. |
| config.unk_token | `string` |  | The unknown token string. |
| config.continuing_subword_prefix | `string` |  | The prefix to use for continuing subwords. |
| [config.max_input_chars_per_word] | `number` | `100` | The maximum number of characters per word. |


wordPieceTokenizer.tokens_to_ids : <code> Map. < string, number > </code>

A mapping of tokens to ids.

Kind: instance property of WordPieceTokenizer


wordPieceTokenizer.unk_token_id : <code> number </code>

The id of the unknown token.

Kind: instance property of WordPieceTokenizer


wordPieceTokenizer.unk_token : <code> string </code>

The unknown token string.

Kind: instance property of WordPieceTokenizer


wordPieceTokenizer.max_input_chars_per_word : <code> number </code>

The maximum number of characters allowed per word.

Kind: instance property of WordPieceTokenizer


wordPieceTokenizer.vocab : <code> Array. < string > </code>

An array of tokens.

Kind: instance property of WordPieceTokenizer


wordPieceTokenizer.encode(tokens) ⇒ <code> Array. < string > </code>

Encodes an array of tokens using WordPiece encoding.

Kind: instance method of WordPieceTokenizer
Returns: Array.<string> - An array of encoded tokens.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | The tokens to encode. |
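
WordPiece encoding is most easily observed through a WordPiece-based tokenizer such as BERT's, where out-of-vocabulary words are split into subwords marked with the continuing-subword prefix '##' (the exact split depends on the vocabulary):

```js
import { AutoTokenizer } from '@huggingface/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/bert-base-uncased');
tokenizer.tokenize('tokenization');
// [ 'token', '##ization' ]
```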


tokenizers~Unigram ⇐ <code> TokenizerModel </code>

Class representing a Unigram tokenizer model.

Kind: inner class of tokenizers
Extends: TokenizerModel


new Unigram(config, moreConfig)

Create a new Unigram tokenizer model.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the Unigram model. |
| config.unk_id | `number` | The ID of the unknown token |
| config.vocab | `*` | A 2D array representing a mapping of tokens to scores. |
| moreConfig | `Object` | Additional configuration object for the Unigram model. |


unigram.scores : <code> Array. < number > </code>

Kind: instance property of Unigram


unigram.populateNodes(lattice)

Populates lattice nodes.

Kind: instance method of Unigram

| Param | Type | Description |
| --- | --- | --- |
| lattice | `TokenLattice` | The token lattice to populate with nodes. |


unigram.tokenize(normalized) ⇒ <code> Array. < string > </code>

Tokenizes a normalized string into an array of subtokens using the unigram model.

Kind: instance method of Unigram
Returns: Array.<string> - An array of subtokens obtained by encoding the input tokens using the unigram model.

| Param | Type | Description |
| --- | --- | --- |
| normalized | `string` | The normalized string. |


unigram.encode(tokens) ⇒ <code> Array. < string > </code>

Encodes an array of tokens using Unigram encoding.

Kind: instance method of Unigram
Returns: Array.<string> - An array of encoded tokens.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | The tokens to encode. |


tokenizers~BPE ⇐ <code> TokenizerModel </code>

BPE class for encoding text into Byte-Pair-Encoding (BPE) tokens.

Kind: inner class of tokenizers
Extends: TokenizerModel


new BPE(config)

Create a BPE instance.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| config | `Object` |  | The configuration object for BPE. |
| config.vocab | `Object` |  | A mapping of tokens to ids. |
| config.merges | `*` |  | An array of BPE merges as strings. |
| config.unk_token | `string` |  | The unknown token used for out of vocabulary words. |
| config.end_of_word_suffix | `string` |  | The suffix to place at the end of each word. |
| [config.continuing_subword_suffix] | `string` |  | The suffix to insert between words. |
| [config.byte_fallback] | `boolean` | `false` | Whether to use the spm byte-fallback trick (defaults to false). |
| [config.ignore_merges] | `boolean` | `false` | Whether or not to match tokens with the vocab before using merges. |


bpE.tokens_to_ids : <code> Map. < string, number > </code>

Kind: instance property of BPE


bpE.merges : <code> * </code>

Kind: instance property of BPE




bpE.cache : <code> Map. < string, Array < string > > </code>

Kind: instance property of BPE


bpE.bpe(token) ⇒ <code> Array. < string > </code>

Apply Byte-Pair-Encoding (BPE) to a given token. Efficient heap-based priority queue implementation adapted from https://github.com/belladoreai/llama-tokenizer-js.

Kind: instance method of BPE
Returns: Array.<string> - The BPE encoded tokens.

| Param | Type | Description |
| --- | --- | --- |
| token | `string` | The token to encode. |
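
A minimal sketch of the classic rank-based BPE merge loop this method implements (an illustration under simplified assumptions, not the heap-based implementation referenced above; `ranks` is a hypothetical merge-priority map):

```js
function bpeSketch(token, ranks) {
  let parts = [...token]; // start from individual characters
  for (;;) {
    // Find the adjacent pair with the best (lowest) merge rank.
    let best = -1, bestRank = Infinity;
    for (let i = 0; i < parts.length - 1; ++i) {
      const rank = ranks.get(parts[i] + ' ' + parts[i + 1]) ?? Infinity;
      if (rank < bestRank) { bestRank = rank; best = i; }
    }
    if (best === -1) break; // no applicable merges left
    parts.splice(best, 2, parts[best] + parts[best + 1]);
  }
  return parts;
}

const ranks = new Map([['h e', 0], ['he l', 1], ['l o', 2]]);
bpeSketch('hello', ranks); // [ 'hel', 'lo' ]
```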


bpE.encode(tokens) ⇒ <code> Array. < string > </code>

Encodes the input sequence of tokens using the BPE algorithm and returns the resulting subword tokens.

Kind: instance method of BPE
Returns: Array.<string> - The resulting subword tokens after applying the BPE algorithm to the input sequence of tokens.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | The input sequence of tokens to encode. |


tokenizers~LegacyTokenizerModel

Legacy tokenizer class for tokenizers with only a vocabulary.

Kind: inner class of tokenizers


new LegacyTokenizerModel(config, moreConfig)

Create a LegacyTokenizerModel instance.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for LegacyTokenizerModel. |
| config.vocab | `Object` | A (possibly nested) mapping of tokens to ids. |
| moreConfig | `Object` | Additional configuration object for the LegacyTokenizerModel model. |


legacyTokenizerModel.tokens_to_ids : <code> Map. < string, number > </code>

Kind: instance property of LegacyTokenizerModel


tokenizers~Normalizer

A base class for text normalization.

Kind: inner abstract class of tokenizers


new Normalizer(config)

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the normalizer. |


normalizer.normalize(text) ⇒ <code> string </code>

Normalize the input text.

Kind: instance abstract method of Normalizer
Returns: string - The normalized text.
Throws:

  • Error If this method is not implemented in a subclass.
| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to normalize. |


normalizer._call(text) ⇒ <code> string </code>

Alias for Normalizer#normalize.

Kind: instance method of Normalizer
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to normalize. |


Normalizer.fromConfig(config) ⇒ <code> Normalizer </code>

Factory method for creating normalizers from config objects.

Kind: static method of Normalizer
Returns: Normalizer - A Normalizer object.
Throws:

  • Error If an unknown Normalizer type is specified in the config.
| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the normalizer. |


tokenizers~Replace ⇐ <code> Normalizer </code>

Replace normalizer that replaces occurrences of a pattern with a given string or regular expression.

Kind: inner class of tokenizers
Extends: Normalizer


replace.normalize(text) ⇒ <code> string </code>

Normalize the input text by replacing the pattern with the content.

Kind: instance method of Replace
Returns: string - The normalized text after replacing the pattern with the content.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The input text to be normalized. |


tokenizers~NFC ⇐ <code> Normalizer </code>

A normalizer that applies Unicode normalization form C (NFC) to the input text.

Kind: inner class of tokenizers
Extends: Normalizer


nfC.normalize(text) ⇒ <code> string </code>

Normalize the input text by applying Unicode normalization form C (NFC).

Kind: instance method of NFC
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The input text to be normalized. |


tokenizers~NFKC ⇐ <code> Normalizer </code>

NFKC Normalizer.

Kind: inner class of tokenizers
Extends: Normalizer


nfkC.normalize(text) ⇒ <code> string </code>

Normalize text using NFKC normalization.

Kind: instance method of NFKC
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to be normalized. |


tokenizers~NFKD ⇐ <code> Normalizer </code>

NFKD Normalizer.

Kind: inner class of tokenizers
Extends: Normalizer


nfkD.normalize(text) ⇒ <code> string </code>

Normalize text using NFKD normalization.

Kind: instance method of NFKD
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to be normalized. |


tokenizers~StripNormalizer

A normalizer that strips leading and/or trailing whitespace from the input text.

Kind: inner class of tokenizers


stripNormalizer.normalize(text) ⇒ <code> string </code>

Strip leading and/or trailing whitespace from the input text.

Kind: instance method of StripNormalizer
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The input text. |


tokenizers~StripAccents ⇐ <code> Normalizer </code>

StripAccents normalizer removes all accents from the text.

Kind: inner class of tokenizers
Extends: Normalizer


stripAccents.normalize(text) ⇒ <code> string </code>

Remove all accents from the text.

Kind: instance method of StripAccents
Returns: string - The normalized text without accents.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The input text. |


tokenizers~Lowercase ⇐ <code> Normalizer </code>

A Normalizer that lowercases the input string.

Kind: inner class of tokenizers
Extends: Normalizer


lowercase.normalize(text) ⇒ <code> string </code>

Lowercases the input string.

Kind: instance method of Lowercase
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to normalize. |


tokenizers~Prepend ⇐ <code> Normalizer </code>

A Normalizer that prepends a string to the input string.

Kind: inner class of tokenizers
Extends: Normalizer


prepend.normalize(text) ⇒ <code> string </code>

Prepends the configured string to the input string.

Kind: instance method of Prepend
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to normalize. |


tokenizers~NormalizerSequence ⇐ <code> Normalizer </code>

A Normalizer that applies a sequence of Normalizers.

Kind: inner class of tokenizers
Extends: Normalizer


new NormalizerSequence(config)

Create a new instance of NormalizerSequence.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object. |
| config.normalizers | `Array.<Object>` | An array of Normalizer configuration objects. |


normalizerSequence.normalize(text) ⇒ <code> string </code>

Apply a sequence of Normalizers to the input text.

Kind: instance method of NormalizerSequence
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to normalize. |


tokenizers~BertNormalizer ⇐ <code> Normalizer </code>

A class representing a normalizer used in BERT tokenization.

Kind: inner class of tokenizers
Extends: Normalizer


bertNormalizer._tokenize_chinese_chars(text) ⇒ <code> string </code>

Adds whitespace around any CJK (Chinese, Japanese, or Korean) character in the input text.

Kind: instance method of BertNormalizer
Returns: string - The tokenized text with whitespace added around CJK characters.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The input text to tokenize. |


bertNormalizer.stripAccents(text) ⇒ <code> string </code>

Strips accents from the given text.

Kind: instance method of BertNormalizer
Returns: string - The text with accents removed.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to strip accents from. |
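
The standard JavaScript technique for accent stripping (decompose with NFD, then drop the combining marks) looks like this sketch, which may differ from the library's exact implementation:

```js
function stripAccentsSketch(text) {
  // Decompose accented characters, then remove combining marks (U+0300-U+036F).
  return text.normalize('NFD').replace(/[\u0300-\u036f]/g, '');
}

stripAccentsSketch('déjà vu'); // 'deja vu'
```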


bertNormalizer.normalize(text) ⇒ <code> string </code>

Normalizes the given text based on the configuration.

Kind: instance method of BertNormalizer
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to normalize. |


tokenizers~PreTokenizer ⇐ <code> Callable </code>

A callable class representing a pre-tokenizer used in tokenization. Subclasses should implement the pre_tokenize_text method to define the specific pre-tokenization logic.

Kind: inner class of tokenizers
Extends: Callable


preTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Method that should be implemented by subclasses to define the specific pre-tokenization logic.

Kind: instance abstract method of PreTokenizer
Returns: Array.<string> - The pre-tokenized text.
Throws:

  • Error If the method is not implemented in the subclass.
| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to pre-tokenize. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


preTokenizer.pre_tokenize(text, [options]) ⇒ <code> Array. < string > </code>

Tokenizes the given text into pre-tokens.

Kind: instance method of PreTokenizer
Returns: Array.<string> - An array of pre-tokens.

| Param | Type | Description |
| --- | --- | --- |
| text | `string \| Array<string>` | The text or array of texts to pre-tokenize. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


preTokenizer._call(text, [options]) ⇒ <code> Array. < string > </code>

Alias for PreTokenizer#pre_tokenize.

Kind: instance method of PreTokenizer
Overrides: _call
Returns: Array.<string> - An array of pre-tokens.

| Param | Type | Description |
| --- | --- | --- |
| text | `string \| Array<string>` | The text or array of texts to pre-tokenize. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


PreTokenizer.fromConfig(config) ⇒ <code> PreTokenizer </code>

Factory method that returns an instance of a subclass of PreTokenizer based on the provided configuration.

Kind: static method of PreTokenizer
Returns: PreTokenizer - An instance of a subclass of PreTokenizer.
Throws:

  • Error If the provided configuration object does not correspond to any known pre-tokenizer.
| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | A configuration object for the pre-tokenizer. |


tokenizers~BertPreTokenizer ⇐ <code> PreTokenizer </code>

Kind: inner class of tokenizers
Extends: PreTokenizer


new BertPreTokenizer(config)

A PreTokenizer that splits text into wordpieces using a basic tokenization scheme similar to that used in the original implementation of BERT.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object. |


bertPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Tokenizes a single text using the BERT pre-tokenization scheme.

Kind: instance method of BertPreTokenizer
Returns: Array.<string> - An array of tokens.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to tokenize. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


tokenizers~ByteLevelPreTokenizer ⇐ <code> PreTokenizer </code>

A pre-tokenizer that splits text into Byte-Pair-Encoding (BPE) subwords.

Kind: inner class of tokenizers
Extends: PreTokenizer


new ByteLevelPreTokenizer(config)

Creates a new instance of the ByteLevelPreTokenizer class.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object. |


byteLevelPreTokenizer.add_prefix_space : <code> boolean </code>

Whether to add a leading space to the first word. This allows the leading word to be treated just like any other word.

Kind: instance property of ByteLevelPreTokenizer


byteLevelPreTokenizer.trim_offsets : <code> boolean </code>

Whether the post-processing step should trim offsets to avoid including whitespace.

Kind: instance property of ByteLevelPreTokenizer
Todo

  • Use this in the pretokenization step.

byteLevelPreTokenizer.use_regex : <code> boolean </code>

Whether to use the standard GPT-2 regex for whitespace splitting. Set it to false if you want to use your own splitting. Defaults to true.

Kind: instance property of ByteLevelPreTokenizer


byteLevelPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Tokenizes a single piece of text using byte-level tokenization.

Kind: instance method of ByteLevelPreTokenizer
Returns: Array.<string> - An array of tokens.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to tokenize. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


tokenizers~SplitPreTokenizer ⇐ <code> PreTokenizer </code>

Splits text using a given pattern.

Kind: inner class of tokenizers
Extends: PreTokenizer


new SplitPreTokenizer(config)

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration options for the pre-tokenizer. |
| config.pattern | `Object` | The pattern used to split the text. Can be a string or a regex object. |
| config.pattern.String | `string \| undefined` | The string to use for splitting. Only defined if the pattern is a string. |
| config.pattern.Regex | `string \| undefined` | The regex to use for splitting. Only defined if the pattern is a regex. |
| config.behavior | `SplitDelimiterBehavior` | The behavior to use when splitting. |
| config.invert | `boolean` | Whether to split (invert=false) or match (invert=true) the pattern. |


splitPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Tokenizes text by splitting it using the given pattern.

Kind: instance method of SplitPreTokenizer
Returns: Array.<string> - An array of tokens.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to tokenize. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


tokenizers~PunctuationPreTokenizer ⇐ <code> PreTokenizer </code>

Splits text based on punctuation.

Kind: inner class of tokenizers
Extends: PreTokenizer


new PunctuationPreTokenizer(config)

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration options for the pre-tokenizer. |
| config.behavior | `SplitDelimiterBehavior` | The behavior to use when splitting. |


punctuationPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Tokenizes text by splitting it using the given pattern.

Kind: instance method of PunctuationPreTokenizer
Returns: Array.<string> - An array of tokens.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to tokenize. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


tokenizers~DigitsPreTokenizer ⇐ <code> PreTokenizer </code>

Splits text based on digits.

Kind: inner class of tokenizers
Extends: PreTokenizer


new DigitsPreTokenizer(config)

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration options for the pre-tokenizer. |
| config.individual_digits | `boolean` | Whether to split on individual digits. |


digitsPreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Tokenizes text by splitting it using the given pattern.

Kind: instance method of DigitsPreTokenizer
Returns: Array.<string> - An array of tokens.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to tokenize. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


tokenizers~PostProcessor ⇐ <code> Callable </code>

Kind: inner class of tokenizers
Extends: Callable


new PostProcessor(config)

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration for the post-processor. |


postProcessor.post_process(tokens, ...args) ⇒ <code> PostProcessedOutput </code>

Method to be implemented in subclass to apply post-processing on the given tokens.

Kind: instance method of PostProcessor
Returns: PostProcessedOutput - The post-processed tokens.
Throws:

  • Error If the method is not implemented in subclass.
| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array` | The input tokens to be post-processed. |
| ...args | `*` | Additional arguments required by the post-processing logic. |


postProcessor._call(tokens, ...args) ⇒ <code> PostProcessedOutput </code>

Alias for PostProcessor#post_process.

Kind: instance method of PostProcessor
Overrides: _call
Returns: PostProcessedOutput - The post-processed tokens.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array` | The text or array of texts to post-process. |
| ...args | `*` | Additional arguments required by the post-processing logic. |


PostProcessor.fromConfig(config) ⇒ <code> PostProcessor </code>

Factory method to create a PostProcessor object from a configuration object.

Kind: static method of PostProcessor
Returns: PostProcessor - A PostProcessor object created from the given configuration.
Throws:

  • Error If an unknown PostProcessor type is encountered.
| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | Configuration object representing a PostProcessor. |


tokenizers~BertProcessing

A post-processor that adds special tokens to the beginning and end of the input.

Kind: inner class of tokenizers


new BertProcessing(config)

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration for the post-processor. |
| config.cls | `Array.<string>` | The special tokens to add to the beginning of the input. |
| config.sep | `Array.<string>` | The special tokens to add to the end of the input. |


bertProcessing.post_process(tokens, [tokens_pair]) ⇒ <code> PostProcessedOutput </code>

Adds the special tokens to the beginning and end of the input.

Kind: instance method of BertProcessing
Returns: PostProcessedOutput - The post-processed tokens with the special tokens added to the beginning and end.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| tokens | `Array.<string>` |  | The input tokens. |
| [tokens_pair] | `Array.<string>` |  | An optional second set of input tokens. |


tokenizers~TemplateProcessing ⇐ <code> PostProcessor </code>

Post processor that replaces special tokens in a template with actual tokens.

Kind: inner class of tokenizers
Extends: PostProcessor


new TemplateProcessing(config)

Creates a new instance of TemplateProcessing.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration options for the post processor. |
| config.single | `Array` | The template for a single sequence of tokens. |
| config.pair | `Array` | The template for a pair of sequences of tokens. |


templateProcessing.post_process(tokens, [tokens_pair]) ⇒ <code> PostProcessedOutput </code>

Replaces special tokens in the template with actual tokens.

Kind: instance method of TemplateProcessing
Returns: PostProcessedOutput - An object containing the list of tokens with the special tokens replaced with actual tokens.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| tokens | `Array.<string>` |  | The list of tokens for the first sequence. |
| [tokens_pair] | `Array.<string>` |  | The list of tokens for the second sequence (optional). |


tokenizers~ByteLevelPostProcessor ⇐ <code> PostProcessor </code>

A PostProcessor that returns the given tokens as is.

Kind: inner class of tokenizers
Extends: PostProcessor


byteLevelPostProcessor.post_process(tokens, [tokens_pair]) ⇒ <code> PostProcessedOutput </code>

Post process the given tokens.

Kind: instance method of ByteLevelPostProcessor
Returns: PostProcessedOutput - An object containing the post-processed tokens.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| tokens | `Array.<string>` |  | The list of tokens for the first sequence. |
| [tokens_pair] | `Array.<string>` |  | The list of tokens for the second sequence (optional). |


tokenizers~PostProcessorSequence

A post-processor that applies multiple post-processors in sequence.

Kind: inner class of tokenizers


new PostProcessorSequence(config)

Creates a new instance of PostProcessorSequence.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object. |
| config.processors | `Array.<Object>` | The list of post-processors to apply. |


postProcessorSequence.post_process(tokens, [tokens_pair]) ⇒ <code> PostProcessedOutput </code>

Post process the given tokens.

Kind: instance method of PostProcessorSequence
Returns: PostProcessedOutput - An object containing the post-processed tokens.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| tokens | `Array.<string>` |  | The list of tokens for the first sequence. |
| [tokens_pair] | `Array.<string>` |  | The list of tokens for the second sequence (optional). |


tokenizers~Decoder ⇐ <code> Callable </code>

The base class for token decoders.

Kind: inner class of tokenizers
Extends: Callable


new Decoder(config)

Creates an instance of Decoder.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object. |


decoder.added_tokens : <code> Array. < AddedToken > </code>

Kind: instance property of Decoder


decoder._call(tokens) ⇒ <code> string </code>

Calls the decode method.

Kind: instance method of Decoder
Overrides: _call
Returns: string - The decoded string.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | The list of tokens. |


decoder.decode(tokens) ⇒ <code> string </code>

Decodes a list of tokens.

Kind: instance method of Decoder
Returns: string - The decoded string.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | The list of tokens. |


decoder.decode_chain(tokens) ⇒ <code> Array. < string > </code>

Apply the decoder to a list of tokens.

Kind: instance method of Decoder
Returns: Array.<string> - The decoded list of tokens.
Throws:

  • Error If the `decode_chain` method is not implemented in the subclass.
| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | The list of tokens. |


Decoder.fromConfig(config) ⇒ <code> Decoder </code>

Creates a decoder instance based on the provided configuration.

Kind: static method of Decoder
Returns: Decoder - A decoder instance.
Throws:

  • Error If an unknown decoder type is provided.
| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object. |


tokenizers~FuseDecoder

Fuse simply fuses all tokens into one big string. It’s usually the last decoding step anyway, but this decoder exists in case some decoders need to happen after that step.

Kind: inner class of tokenizers


fuseDecoder.decode_chain() : <code> * </code>

Kind: instance method of FuseDecoder


tokenizers~WordPieceDecoder ⇐ <code> Decoder </code>

A decoder that decodes a list of WordPiece tokens into a single string.

Kind: inner class of tokenizers
Extends: Decoder


new WordPieceDecoder(config)

Creates a new instance of WordPieceDecoder.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object. |
| config.prefix | `string` | The prefix used for WordPiece encoding. |
| config.cleanup | `boolean` | Whether to cleanup the decoded string. |


wordPieceDecoder.decode_chain() : <code> * </code>

Kind: instance method of WordPieceDecoder


tokenizers~ByteLevelDecoder ⇐ <code> Decoder </code>

Byte-level decoder for tokenization output. Inherits from the Decoder class.

Kind: inner class of tokenizers
Extends: Decoder


new ByteLevelDecoder(config)

Create a ByteLevelDecoder object.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | Configuration object. |


byteLevelDecoder.convert_tokens_to_string(tokens) ⇒ <code> string </code>

Convert an array of tokens to string by decoding each byte.

Kind: instance method of ByteLevelDecoder
Returns: string - The decoded string.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | Array of tokens to be decoded. |


byteLevelDecoder.decode_chain() : <code> * </code>

Kind: instance method of ByteLevelDecoder


tokenizers~CTCDecoder

The CTC (Connectionist Temporal Classification) decoder. See https://github.com/huggingface/tokenizers/blob/bb38f390a61883fc2f29d659af696f428d1cda6b/tokenizers/src/decoders/ctc.rs

Kind: inner class of tokenizers


ctcDecoder.convert_tokens_to_string(tokens) ⇒ <code> string </code>

Converts connectionist temporal classification (CTC) output tokens into a single string.

Kind: instance method of CTCDecoder
Returns: string - The decoded string.

| Param | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | Array of tokens to be decoded. |


ctcDecoder.decode_chain() : <code> * </code>

Kind: instance method of CTCDecoder


tokenizers~DecoderSequence ⇐ <code> Decoder </code>

Apply a sequence of decoders.

Kind: inner class of tokenizers
Extends: Decoder


new DecoderSequence(config)

Creates a new instance of DecoderSequence.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object. |
| config.decoders | `Array.<Object>` | The list of decoders to apply. |


decoderSequence.decode_chain() : <code> * </code>

Kind: instance method of DecoderSequence


tokenizers~MetaspacePreTokenizer ⇐ <code> PreTokenizer </code>

This PreTokenizer replaces spaces with the given replacement character, adds a prefix space if requested, and returns a list of tokens.

Kind: inner class of tokenizers
Extends: PreTokenizer


new MetaspacePreTokenizer(config)

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| config | `Object` |  | The configuration object for the MetaspacePreTokenizer. |
| config.add_prefix_space | `boolean` |  | Whether to add a prefix space to the first token. |
| config.replacement | `string` |  | The character to replace spaces with. |
| [config.str_rep] | `string` | `config.replacement` | An optional string representation of the replacement character. |
| [config.prepend_scheme] | `'first' \| 'never' \| 'always'` | `'always'` | The metaspace prepending scheme. |


metaspacePreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

This method takes a string, replaces spaces with the replacement character, adds a prefix space if requested, and returns a new list of tokens.

Kind: instance method of MetaspacePreTokenizer
Returns: Array.<string> - A new list of pre-tokenized tokens.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to pre-tokenize. |
| [options] | `Object` | The options for the pre-tokenization. |
| [options.section_index] | `number` | The index of the section to pre-tokenize. |
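
An illustration of the metaspace scheme as a standalone sketch (not the library's internal API; '▁' is U+2581, the conventional replacement character):

```js
function metaspaceSketch(text, replacement = '\u2581', addPrefixSpace = true) {
  let normalized = text.replaceAll(' ', replacement);
  if (addPrefixSpace && !normalized.startsWith(replacement)) {
    normalized = replacement + normalized;
  }
  // Split so that each token keeps its leading replacement character.
  return normalized.split(new RegExp(`(?=${replacement})`, 'u'));
}

metaspaceSketch('Hello world'); // [ '▁Hello', '▁world' ]
```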


tokenizers~MetaspaceDecoder ⇐ <code> Decoder </code>

MetaspaceDecoder class extends the Decoder class and decodes Metaspace tokenization.

Kind: inner class of tokenizers
Extends: Decoder


new MetaspaceDecoder(config)

Constructs a new MetaspaceDecoder object.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the MetaspaceDecoder. |
| config.add_prefix_space | `boolean` | Whether to add a prefix space to the decoded string. |
| config.replacement | `string` | The string to replace spaces with. |


metaspaceDecoder.decode_chain() : <code> * </code>

Kind: instance method of MetaspaceDecoder


tokenizers~Precompiled ⇐ <code> Normalizer </code>

A normalizer that applies a precompiled charsmap. This is useful for applying complex normalizations in C++ and exposing them to JavaScript.

Kind: inner class of tokenizers
Extends: Normalizer


new Precompiled(config)

Create a new instance of Precompiled normalizer.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the Precompiled normalizer. |
| config.precompiled_charsmap | `Object` | The precompiled charsmap object. |


precompiled.normalize(text) ⇒ <code> string </code>

Normalizes the given text by applying the precompiled charsmap.

Kind: instance method of Precompiled
Returns: string - The normalized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to normalize. |


tokenizers~PreTokenizerSequence ⇐ <code> PreTokenizer </code>

A pre-tokenizer that applies a sequence of pre-tokenizers to the input text.

Kind: inner class of tokenizers
Extends: PreTokenizer


new PreTokenizerSequence(config)

Creates an instance of PreTokenizerSequence.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the pre-tokenizer sequence. |
| config.pretokenizers | `Array.<Object>` | An array of pre-tokenizer configurations. |


preTokenizerSequence.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Applies each pre-tokenizer in the sequence to the input text in turn.

Kind: instance method of PreTokenizerSequence
Returns: Array.<string> - The pre-tokenized text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to pre-tokenize. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


tokenizers~WhitespacePreTokenizer

Splits on word boundaries (using the following regular expression: \w+|[^\w\s]+).

Kind: inner class of tokenizers


new WhitespacePreTokenizer(config)

Creates an instance of WhitespacePreTokenizer.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the pre-tokenizer. |


whitespacePreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Pre-tokenizes the input text by splitting it on word boundaries.

Kind: instance method of WhitespacePreTokenizer
Returns: Array.<string> - An array of tokens produced by splitting the input text on whitespace.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to be pre-tokenized. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |
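
The word-boundary regular expression given in the class description can be tried directly (a quick illustration):

```js
'Hello, world! 123'.match(/\w+|[^\w\s]+/g);
// [ 'Hello', ',', 'world', '!', '123' ]
```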


tokenizers~WhitespaceSplit ⇐ <code> PreTokenizer </code>

Splits a string of text by whitespace characters into individual tokens.

Kind: inner class of tokenizers
Extends: PreTokenizer


new WhitespaceSplit(config)

Creates an instance of WhitespaceSplit.

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration object for the pre-tokenizer. |


whitespaceSplit.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Pre-tokenizes the input text by splitting it on whitespace characters.

Kind: instance method of WhitespaceSplit
Returns: Array.<string> - An array of tokens produced by splitting the input text on whitespace.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to be pre-tokenized. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


tokenizers~ReplacePreTokenizer

Kind: inner class of tokenizers


new ReplacePreTokenizer(config)

| Param | Type | Description |
| --- | --- | --- |
| config | `Object` | The configuration options for the pre-tokenizer. |
| config.pattern | `Object` | The pattern used to split the text. Can be a string or a regex object. |
| config.content | `string` | What to replace the pattern with. |


replacePreTokenizer.pre_tokenize_text(text, [options]) ⇒ <code> Array. < string > </code>

Pre-tokenizes the input text by replacing certain characters.

Kind: instance method of ReplacePreTokenizer
Returns: Array.<string> - An array of tokens produced by replacing certain characters.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to be pre-tokenized. |
| [options] | `Object` | Additional options for the pre-tokenization logic. |


tokenizers~BYTES_TO_UNICODE ⇒ <code> Object </code>

Returns a mapping between UTF-8 bytes and unicode strings, specifically avoiding mapping to whitespace/control characters that the BPE code barfs on.

Kind: inner constant of tokenizers
Returns: Object - Object with utf-8 byte keys and unicode string values.
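
A sketch of how such a table is conventionally built (mirroring the well-known GPT-2 bytes_to_unicode construction; an illustration rather than this module's exact code):

```js
function bytesToUnicodeSketch() {
  // Printable/latin bytes map to themselves; the remaining bytes are shifted
  // to unused codepoints starting at 256, so none map to whitespace/control chars.
  const bs = [];
  for (let b = 0x21; b <= 0x7E; ++b) bs.push(b); // '!' .. '~'
  for (let b = 0xA1; b <= 0xAC; ++b) bs.push(b);
  for (let b = 0xAE; b <= 0xFF; ++b) bs.push(b);
  const cs = [...bs];
  let n = 0;
  for (let b = 0; b < 256; ++b) {
    if (!bs.includes(b)) {
      bs.push(b);
      cs.push(256 + n);
      ++n;
    }
  }
  return Object.fromEntries(bs.map((b, i) => [b, String.fromCharCode(cs[i])]));
}
```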


tokenizers~loadTokenizer(pretrained_model_name_or_path, options) ⇒ <code> Promise. < Array < any > > </code>

Loads a tokenizer from the specified path.

Kind: inner method of tokenizers
Returns: Promise.<Array<any>> - A promise that resolves with information about the loaded tokenizer.

| Param | Type | Description |
| --- | --- | --- |
| pretrained_model_name_or_path | `string` | The path to the tokenizer directory. |
| options | `PretrainedTokenizerOptions` | Additional options for loading the tokenizer. |


tokenizers~regexSplit(text, regex) ⇒ <code> Array. < string > </code>

Helper function to split a string on a regex, but keep the delimiters. This is required, because the JavaScript .split() method does not keep the delimiters, and wrapping in a capturing group causes issues with existing capturing groups (due to nesting).

Kind: inner method of tokenizers
Returns: Array.<string> - The split string.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to split. |
| regex | `RegExp` | The regex to split on. |
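
A sketch of the split-keeping-delimiters behavior described above (an illustration, not the module's exact implementation; the regex must carry the global flag for matchAll):

```js
function regexSplitSketch(text, regex) {
  const result = [];
  let prev = 0;
  for (const match of text.matchAll(regex)) {
    if (match.index > prev) result.push(text.slice(prev, match.index));
    if (match[0].length > 0) result.push(match[0]); // keep the delimiter
    prev = match.index + match[0].length;
  }
  if (prev < text.length) result.push(text.slice(prev));
  return result;
}

regexSplitSketch('a1b22c', /\d+/g); // [ 'a', '1', 'b', '22', 'c' ]
```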


tokenizers~createPattern(pattern, invert) ⇒ <code> RegExp </code> | <code> null </code>

Helper method to construct a pattern from a config object.

Kind: inner method of tokenizers
Returns: RegExp | null - The compiled pattern.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| pattern | `Object` |  | The pattern object. |
| invert | `boolean` | `true` | Whether to invert the pattern. |


tokenizers~objectToMap(obj) ⇒ <code> Map. < string, any > </code>

Helper function to convert an Object to a Map

Kind: inner method of tokenizers
Returns: Map.<string, any> - The map.

| Param | Type | Description |
| --- | --- | --- |
| obj | `Object` | The object to convert. |
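
In plain JavaScript this amounts to a one-liner (a sketch):

```js
const objectToMapSketch = (obj) => new Map(Object.entries(obj));
```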


tokenizers~prepareTensorForDecode(tensor) ⇒ <code> Array. < number > </code>

Helper function to convert a tensor to a list before decoding.

Kind: inner method of tokenizers
Returns: Array.<number> - The tensor as a list.

| Param | Type | Description |
| --- | --- | --- |
| tensor | `Tensor` | The tensor to convert. |


tokenizers~clean_up_tokenization(text) ⇒ <code> string </code>

Clean up a list of simple English tokenization artifacts like spaces before punctuation and abbreviated forms.

Kind: inner method of tokenizers
Returns: string - The cleaned up text.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to clean up. |


tokenizers~remove_accents(text) ⇒ <code> string </code>

Helper function to remove accents from a string.

Kind: inner method of tokenizers
Returns: string - The text with accents removed.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to remove accents from. |


tokenizers~lowercase_and_remove_accent(text) ⇒ <code> string </code>

Helper function to lowercase a string and remove accents.

Kind: inner method of tokenizers
Returns: string - The lowercased text with accents removed.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to lowercase and remove accents from. |


tokenizers~whitespace_split(text) ⇒ <code> Array. < string > </code>

Split a string on whitespace.

Kind: inner method of tokenizers
Returns: Array.<string> - The split string.

| Param | Type | Description |
| --- | --- | --- |
| text | `string` | The text to split. |


tokenizers~PretrainedTokenizerOptions : <code> Object </code>

Additional tokenizer-specific properties.

Kind: inner typedef of tokenizers
Properties

| Name | Type | Default | Description |
| --- | --- | --- | --- |
| [legacy] | `boolean` | `false` | Whether or not the legacy behavior of the tokenizer should be used. |


tokenizers~BPENode : <code> Object </code>

Kind: inner typedef of tokenizers
Properties

| Name | Type | Description |
| --- | --- | --- |
| token | `string` | The token associated with the node |
| bias | `number` | A positional bias for the node. |
| [score] | `number` | The score of the node. |
| [prev] | `BPENode` | The previous node in the linked list. |
| [next] | `BPENode` | The next node in the linked list. |


tokenizers~SplitDelimiterBehavior : <code> 'removed' </code> | <code> 'isolated' </code> | <code> 'mergedWithPrevious' </code> | <code> 'mergedWithNext' </code> | <code> 'contiguous' </code>

Kind: inner typedef of tokenizers


tokenizers~PostProcessedOutput : <code> Object </code>

Kind: inner typedef of tokenizers
Properties

| Name | Type | Description |
| --- | --- | --- |
| tokens | `Array.<string>` | List of tokens produced by the post-processor. |
| [token_type_ids] | `Array.<number>` | List of token type ids produced by the post-processor. |


tokenizers~EncodingSingle : <code> Object </code>

Kind: inner typedef of tokenizers
Properties

| Name | Type | Description |
| --- | --- | --- |
| input_ids | `Array.<number>` | List of token ids to be fed to a model. |
| attention_mask | `Array.<number>` | List of indices specifying which tokens should be attended to by the model. |
| [token_type_ids] | `Array.<number>` | List of token type ids to be fed to a model. |


tokenizers~Message : <code> Object </code>

Kind: inner typedef of tokenizers
Properties

| Name | Type | Description |
| --- | --- | --- |
| role | `string` | The role of the message (e.g., "user" or "assistant" or "system"). |
| content | `string` | The content of the message. |


tokenizers~BatchEncoding : <code> Array < number > </code> | <code> Array < Array < number > > </code> | <code> Tensor </code>

Holds the output of the tokenizer’s call function.

Kind: inner typedef of tokenizers
Properties

| Name | Type | Description |
| --- | --- | --- |
| input_ids | `BatchEncodingItem` | List of token ids to be fed to a model. |
| attention_mask | `BatchEncodingItem` | List of indices specifying which tokens should be attended to by the model. |
| [token_type_ids] | `BatchEncodingItem` | List of token type ids to be fed to a model. |

