tokenizer_utils

class PretrainedTokenizer(*args, **kwargs)[source]

Bases: object

The base class for all pretrained tokenizers. It provides common attributes and methods for all pretrained tokenizers, including attributes for special tokens (arguments of __init__ whose names end with _token) and methods for saving and loading. It also includes some class attributes that should be set by derived classes:

  • tokenizer_config_file (str): the file name used for saving and loading the tokenizer configuration; its value is tokenizer_config.json.

  • resource_files_names (dict): maps resource-related arguments of __init__ to specific file names for saving and loading.

  • pretrained_resource_files_map (dict): has the same keys as resource_files_names; each value is a dict mapping a specific pretrained model name to the URL of the corresponding vocabulary or other resource.

  • pretrained_init_configuration (dict): maps pretrained model names to dicts holding the corresponding configuration for tokenizer initialization.
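
To illustrate how these class attributes fit together, here is a hypothetical minimal subclass sketch; the class name, model name, and URL below are invented for illustration and are not a real model:

from paddlenlp.transformers import PretrainedTokenizer

class ToyTokenizer(PretrainedTokenizer):
    # file name used when saving/loading the tokenizer configuration
    tokenizer_config_file = "tokenizer_config.json"
    # maps the `vocab_file` argument of __init__ to its on-disk file name
    resource_files_names = {"vocab_file": "vocab.txt"}
    # per-model URLs for each resource file (same keys as resource_files_names)
    pretrained_resource_files_map = {
        "vocab_file": {"toy-base": "https://example.com/toy-base/vocab.txt"}
    }
    # per-model default keyword arguments for __init__
    pretrained_init_configuration = {"toy-base": {"do_lower_case": True}}

    def __init__(self, vocab_file, do_lower_case=True, unk_token="[UNK]"):
        # the special-token argument (name ends with _token) is handled by
        # the base class; the vocab attribute backs the convert_* methods
        self.vocab = self.load_vocabulary(vocab_file, unk_token=unk_token)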

property all_special_tokens

List all the special tokens (‘<unk>’, ‘<cls>’…) mapped to class attributes (cls_token, unk_token…).

property all_special_ids

List the vocabulary indices of the special tokens (‘<unk>’, ‘<cls>’…) mapped to class attributes (cls_token, unk_token…).

convert_tokens_to_ids(tokens)[source]

Converts a sequence of tokens into ids using the vocabulary. The tokenizer must have a vocab attribute.

Parameters
  • tokens (list(str)) – List of tokens.

Returns

Converted id list.

Return type

list
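
A short usage sketch, assuming PaddleNLP is installed and the bert-base-uncased resources of the concrete BertTokenizer subclass can be downloaded:

from paddlenlp.transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
tokens = tokenizer.tokenize('He was a puppeteer')
ids = tokenizer.convert_tokens_to_ids(tokens)
# ids is a list of vocabulary indices, one per token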

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (a list of strings) to a single string by using ' '.join(tokens).

Parameters
  • tokens (list(str)) – List of tokens.

Returns

Converted string.

Return type

str
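
Continuing the sketch above; note that subclasses may override this method (BertTokenizer, for example, also merges '##' subword pieces):

text = tokenizer.convert_tokens_to_string(tokens)
# the base implementation is simply ' '.join(tokens)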

convert_ids_to_tokens(ids, skip_special_tokens=False)[source]

Converts a single index or a sequence of indices (integers) into a token or a sequence of tokens (str) by using the vocabulary.

Parameters

  • ids (int|List[int]) – A single index or a list of indices to be converted.

  • skip_special_tokens (bool, optional) – Whether to skip special tokens (self.all_special_tokens) when converting. Default: False.
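
Continuing the sketch above:

tokens = tokenizer.convert_ids_to_tokens(ids)
# inverse of convert_tokens_to_ids; pass skip_special_tokens=True
# to drop tokens such as '[CLS]' and '[SEP]' from the output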

classmethod from_pretrained(pretrained_model_name_or_path, *args, **kwargs)[source]

Instantiates an instance of PretrainedTokenizer from a predefined tokenizer specified by name or path, which always corresponds to a pretrained model.

Parameters
  • pretrained_model_name_or_path (str) – A name of, or a file path to, a pretrained model.

  • *args (tuple) – Positional arguments for __init__. If provided, these are used as the positional argument values for tokenizer initialization.

  • **kwargs (dict) – Keyword arguments for __init__. If provided, these update the predefined keyword argument values for tokenizer initialization.

Returns

An instance of PretrainedTokenizer.

Return type

PretrainedTokenizer
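
A sketch of both call styles (the local path is hypothetical):

from paddlenlp.transformers import BertTokenizer

# by built-in pretrained model name (downloads resources on first use)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
# or from a local directory containing files saved by save_pretrained
tokenizer = BertTokenizer.from_pretrained('./my_tokenizer/')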

save_pretrained(save_directory)[source]

Saves tokenizer configuration and related resources to files under save_directory.

Parameters
  • save_directory (str) – Directory to save files into.
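
A save/reload round trip, continuing the BertTokenizer sketch above (the directory name is hypothetical):

tokenizer.save_pretrained('./my_tokenizer')
reloaded = BertTokenizer.from_pretrained('./my_tokenizer')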

save_resources(save_directory)[source]

Saves tokenizer-related resources to files under save_directory.

Parameters
  • save_directory (str) – Directory to save files into.

static load_vocabulary(filepath, unk_token=None, pad_token=None, bos_token=None, eos_token=None, **kwargs)[source]

Instantiates an instance of Vocab from a file, keeping all tokens, by using Vocab.from_dict. The file contains one token per line, and the line number is the index of the corresponding token.

Parameters
  • filepath (str) – Path of the file used to construct the vocabulary.

  • unk_token (str) – Special token for unknown tokens. Can be None if not needed. Default: None.

  • pad_token (str) – Special token for padding. Can be None if not needed. Default: None.

  • bos_token (str) – Special token for the beginning of a sequence. Can be None if not needed. Default: None.

  • eos_token (str) – Special token for the end of a sequence. Can be None if not needed. Default: None.

  • **kwargs (dict) – Keyword arguments for Vocab.from_dict.

Returns

An instance of Vocab.

Return type

Vocab
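
A sketch, assuming a plain one-token-per-line vocabulary file exists at the (hypothetical) path below:

from paddlenlp.transformers import PretrainedTokenizer

vocab = PretrainedTokenizer.load_vocabulary(
    './vocab.txt', unk_token='[UNK]', pad_token='[PAD]')
# the line number in the file is the index of the corresponding token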

static save_vocabulary(filepath, vocab)[source]

Saves all tokens to a vocabulary file. The file contains one token per line, and the line number is the index of the corresponding token.

Parameters
  • filepath (str) – File path to save to.

  • vocab (Vocab|dict) – The Vocab or dict instance to be saved.
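
Continuing the vocabulary sketch above (the path is hypothetical):

PretrainedTokenizer.save_vocabulary('./vocab_copy.txt', vocab)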

truncate_sequences(ids, pair_ids=None, num_tokens_to_remove=0, truncation_strategy='longest_first', stride=0)[source]

Truncates a sequence pair in place to the maximum length.

Parameters
  • ids – list of tokenized input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • pair_ids – Optional second list of input ids. Can be obtained from a string by chaining the tokenize and convert_tokens_to_ids methods.

  • num_tokens_to_remove (int, optional, defaults to 0) – Number of tokens to remove using the truncation strategy.

  • truncation_strategy (str, optional, defaults to 'longest_first') –

    String selected from the following options:

    • ’longest_first’ (default): Iteratively reduces the inputs until they fit under max_seq_len, removing one token at a time from the longest sequence (when there is a pair of input sequences). Overflowing tokens only contain overflow from the first sequence.

    • ’only_first’: Only truncates the first sequence. Raises an error if the first sequence is shorter than or equal to num_tokens_to_remove.

    • ’only_second’: Only truncates the second sequence.

    • ’do_not_truncate’: Does not truncate (raises an error if the input sequence is longer than max_seq_len).

  • stride (int, optional, defaults to 0) – If set to a number along with max_seq_len, the overflowing tokens returned will contain some tokens from the main sequence returned. The value of this argument defines the number of additional tokens.
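
A sketch, continuing the BertTokenizer above and assuming the method follows the usual convention of returning the tuple (ids, pair_ids, overflowing_tokens):

ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('a fairly long first sentence'))
pair_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('a second sentence'))
ids, pair_ids, overflow = tokenizer.truncate_sequences(
    ids, pair_ids=pair_ids, num_tokens_to_remove=2,
    truncation_strategy='longest_first')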

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Builds model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens.

Should be overridden in a subclass if the model has a special way of building those.

Parameters
  • token_ids_0 (List[int]) – List of IDs to which the special tokens will be added.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of input_ids with the appropriate special tokens.

Return type

List[int]
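
Continuing the sketch above; for a BERT-style subclass the result is typically [CLS] A [SEP] for a single sequence and [CLS] A [SEP] B [SEP] for a pair:

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('hello world'))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize('how are you'))
single = tokenizer.build_inputs_with_special_tokens(ids_a)
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)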

build_offset_mapping_with_special_tokens(offset_mapping_0, offset_mapping_1=None)[source]

Builds an offset map from a pair of offset maps by concatenating them and adding the offsets of special tokens.

Should be overridden in a subclass if the model has a special way of building those.

Parameters
  • offset_mapping_0 (List[tuple]) – List of char offsets to which the special tokens will be added.

  • offset_mapping_1 (List[tuple], optional) – Optional second list of char offsets for offset mapping pairs.

Returns

List of char offsets with the appropriate offsets of special tokens.

Return type

List[tuple]
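
Continuing the sketch above (the char spans are made up for illustration); a BERT-style subclass typically inserts (0, 0) entries for [CLS] and [SEP]:

offsets_a = [(0, 5), (6, 11)]
offsets_b = [(0, 3), (4, 7)]
offsets = tokenizer.build_offset_mapping_with_special_tokens(offsets_a, offsets_b)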

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer encode methods.

Parameters
  • token_ids_0 (List[int]) – List of ids of the first sequence.

  • token_ids_1 (List[int], optional) – List of ids of the second sequence.

  • already_has_special_tokens (bool, optional) – Whether or not the token list is already formatted with special tokens for the model. Defaults to False.

Returns

The list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.

Return type

List[int]
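
Continuing the sketch above:

mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)
# e.g. for a BERT-style [CLS] A [SEP] B [SEP] layout:
# [1, 0, ..., 0, 1, 0, ..., 0, 1]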

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Create a mask from the two sequences passed to be used in a sequence-pair classification task.

Should be overridden in a subclass if the model has a special way of building those.

If token_ids_1 is None, this method only returns the first portion of the mask (0s).

Parameters
  • token_ids_0 (List[int]) – List of IDs.

  • token_ids_1 (List[int], optional) – Optional second list of IDs for sequence pairs.

Returns

List of token_type_ids according to the given sequence(s).

Return type

List[int]
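
Continuing the sketch above; a BERT-style subclass marks the first segment (including [CLS] and the first [SEP]) with 0 and the second segment with 1:

token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)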

encode(text, text_pair=None, max_seq_len=512, pad_to_max_seq_len=False, truncation_strategy='longest_first', return_position_ids=False, return_token_type_ids=True, return_attention_mask=False, return_length=False, return_overflowing_tokens=False, return_special_tokens_mask=False)[source]

Returns a dictionary containing the encoded sequence or sequence pair and additional information: the mask for sequence classification and the overflowing elements if a max_seq_len is specified.

Parameters
  • text (str, List[str] or List[int]) – The first sequence to be encoded. This can be a string, a list of strings (a tokenized string, via the tokenize method) or a list of integers (tokenized string ids, via the convert_tokens_to_ids method).

  • text_pair (str, List[str] or List[int], optional, defaults to None) – Optional second sequence to be encoded, with the same accepted types as text.

  • max_seq_len (int, optional, defaults to 512) – If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those will be added to the returned dictionary.

  • pad_to_max_seq_len (bool, optional, defaults to False) – If set to True, the returned sequences will be padded according to the model’s padding side and padding index, up to their max length. If no max length is specified, the padding is done up to the model’s max length.

  • truncation_strategy (str, optional, defaults to 'longest_first') –

    String selected from the following options:

    • ’longest_first’ (default): Iteratively reduces the inputs until they fit under max_seq_len, removing one token at a time from the longest sequence (when there is a pair of input sequences).

    • ’only_first’: Only truncates the first sequence.

    • ’only_second’: Only truncates the second sequence.

    • ’do_not_truncate’: Does not truncate (raises an error if the input sequence is longer than max_seq_len).

  • return_position_ids (bool, optional, defaults to False) – Whether to return token position ids.

  • return_token_type_ids (bool, optional, defaults to True) – Whether to return token type IDs.

  • return_attention_mask (bool, optional, defaults to False) – Whether to return the attention mask.

  • return_length (bool, optional, defaults to False) – Whether to include the length of each encoded input in the resulting dictionary.

  • return_overflowing_tokens (bool, optional, defaults to False) – Whether to return overflowing token information.

  • return_special_tokens_mask (bool, optional, defaults to False) – Whether to return special tokens mask information.

Returns

A Dictionary of shape:

{
    input_ids: list[int],
    position_ids: list[int] if return_position_ids is True
    token_type_ids: list[int] if return_token_type_ids is True (default)
    attention_mask: list[int] if return_attention_mask is True
    seq_len: int if return_length is True
    overflowing_tokens: list[int] if a ``max_seq_len`` is specified and return_overflowing_tokens is True
    num_truncated_tokens: int if a ``max_seq_len`` is specified and return_overflowing_tokens is True
    special_tokens_mask: list[int] if return_special_tokens_mask is True
}

With the fields:

  • input_ids: list of token ids to be fed to a model

  • position_ids: list of token position ids to be fed to a model

  • token_type_ids: list of token type ids to be fed to a model

  • attention_mask: list of indices specifying which tokens should be attended to by the model

  • length: the input_ids length

  • overflowing_tokens: list of overflowing tokens if a max length is specified.

  • num_truncated_tokens: number of overflowing tokens if a max_seq_len is specified

  • special_tokens_mask: list of [0, 1], with 1 specifying special added tokens and 0 specifying sequence tokens (consistent with get_special_tokens_mask).
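
A sketch of a single call, continuing the BertTokenizer above:

encoded = tokenizer.encode(
    'Welcome to use PaddlePaddle and PaddleNLP!',
    max_seq_len=16,
    return_attention_mask=True)
print(encoded['input_ids'])       # token ids including special tokens
print(encoded['token_type_ids'])  # returned by default
print(encoded['attention_mask'])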

batch_encode(batch_text_or_text_pairs, max_seq_len=512, pad_to_max_seq_len=False, stride=0, is_split_into_words=False, truncation_strategy='longest_first', return_position_ids=False, return_token_type_ids=True, return_attention_mask=False, return_length=False, return_overflowing_tokens=False, return_special_tokens_mask=False)[source]

Returns a list of dictionaries, each containing the encoded sequence or sequence pair and additional information: the mask for sequence classification and the overflowing elements if a max_seq_len is specified.

Parameters
  • batch_text_or_text_pairs (List[str], List[Tuple[str, str]], List[List[str]], List[Tuple[List[str], List[str]]], List[List[int]], List[Tuple[List[int], List[int]]]) – Batch of sequences or pairs of sequences to be encoded. This can be a list of strings/string sequences/int sequences or a list of pairs of strings/string sequences/int sequences.

  • max_seq_len (int, optional, defaults to 512) – If set to a number, will limit the total sequence returned so that it has a maximum length. If there are overflowing tokens, those will be added to the returned dictionary.

  • pad_to_max_seq_len (bool, optional, defaults to False) – If set to True, the returned sequences will be padded according to the model’s padding side and padding index, up to their max length. If no max length is specified, the padding is done up to the model’s max length.

  • stride (int, optional, defaults to 0) – If set to a positive number and batch_text_or_text_pairs is a list of pair sequences, the overflowing tokens (which contain some tokens from the end of the truncated second sequence) are concatenated with the first sequence to generate new features, and the overflowing tokens are not returned in the dictionary. The value of this argument defines the number of overlapping tokens.

  • is_split_into_words (bool, optional, defaults to False) – Whether or not the text has been pretokenized.

  • truncation_strategy (str, optional, defaults to 'longest_first') –

    String selected from the following options:

    • ’longest_first’ (default): Iteratively reduces the inputs until they fit under max_seq_len, removing one token at a time from the longest sequence (when there is a pair of input sequences).

    • ’only_first’: Only truncates the first sequence.

    • ’only_second’: Only truncates the second sequence.

    • ’do_not_truncate’: Does not truncate (raises an error if the input sequence is longer than max_seq_len).

  • return_position_ids (bool, optional, defaults to False) – Whether to return token position ids.

  • return_token_type_ids (bool, optional, defaults to True) – Whether to return token type IDs.

  • return_attention_mask (bool, optional, defaults to False) – Whether to return the attention mask.

  • return_length (bool, optional, defaults to False) – Whether to include the length of each encoded input in the resulting dictionary.

  • return_overflowing_tokens (bool, optional, defaults to False) – Whether to return overflowing token information.

  • return_special_tokens_mask (bool, optional, defaults to False) – Whether to return special tokens mask information.

Returns

A list of dictionaries of the shape:

{
    input_ids: list[int],
    position_ids: list[int] if return_position_ids is True
    token_type_ids: list[int] if return_token_type_ids is True (default)
    attention_mask: list[int] if return_attention_mask is True
    seq_len: int if return_length is True
    overflowing_tokens: list[int] if a ``max_seq_len`` is specified and return_overflowing_tokens is True and stride is 0
    num_truncated_tokens: int if a ``max_seq_len`` is specified and return_overflowing_tokens is True and stride is 0
    special_tokens_mask: list[int] if return_special_tokens_mask is True
    offset_mapping: list[Tuple] if stride is a positive number and batch_text_or_text_pairs is a list of pair sequences
    overflow_to_sample: int if stride is a positive number and batch_text_or_text_pairs is a list of pair sequences
}

With the fields:

  • input_ids: list of token ids to be fed to a model

  • position_ids: list of token position ids to be fed to a model

  • token_type_ids: list of token type ids to be fed to a model

  • attention_mask: list of indices specifying which tokens should be attended to by the model

  • length: the input_ids length

  • overflowing_tokens: list of overflowing tokens if a max length is specified.

  • num_truncated_tokens: number of overflowing tokens if a max_seq_len is specified

  • special_tokens_mask: if adding special tokens, this is a list of [0, 1], with 1 specifying special added tokens and 0 specifying sequence tokens (consistent with get_special_tokens_mask).

  • offset_mapping: list of (start char index in text, end char index in text) for each token; (0, 0) if the token is a special token.

  • overflow_to_sample: index of the example from which this feature was generated.
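
A sketch of a batch call on pair inputs, continuing the BertTokenizer above; with a positive stride, each returned dictionary also carries offset_mapping and overflow_to_sample as described (the question/context strings are made up):

batch = tokenizer.batch_encode(
    [('What is PaddleNLP?', 'PaddleNLP is a text-domain library.'),
     ('Where does it run?', 'It runs on PaddlePaddle.')],
    max_seq_len=32,
    stride=8,
    return_attention_mask=True)
for example in batch:
    print(example['input_ids'], example['overflow_to_sample'])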