LLM fine-tuning step: Tokenizing

TeeTracker
2 min read · Aug 24, 2023


Tokenization is a crucial step in fine-tuning an LLM, and it requires us to:

  1. Encode the input, which is typically some form of guiding text, such as a question.
  2. Decode the output, which is usually the model’s (hopefully courteous) response after the “generate()” method has been invoked.

What is Tokenizing

In simple terms, it means using a dictionary (an instance of a tokenizer class) to transform words into something that computers can work with more easily, that is, numbers. This is of course a simplified explanation.
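
As a rough sketch of that mapping (using "EleutherAI/pythia-70m" purely as an example tokenizer from the Hugging Face Hub), each word or word piece is looked up and replaced by an integer id:

from transformers import AutoTokenizer

# Illustrative choice of model id; any Hub tokenizer behaves the same way here.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

text = "Tokenizing turns words into numbers"
ids = tokenizer(text)["input_ids"]             # list of integer ids
tokens = tokenizer.convert_ids_to_tokens(ids)  # the sub-word pieces behind those ids

for token, token_id in zip(tokens, ids):
    print(f"{token!r} -> {token_id}")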

Boilerplate code

In fact, the vast majority of this work is boilerplate code that gets repeated and reused in day-to-day work.

Example (just encode and decode)

from transformers import AutoTokenizer

# Any causal-LM tokenizer from the Hugging Face Hub works here;
# "EleutherAI/pythia-70m" is just one valid example id.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
text = "Hi, how are you?"
encoded_text = tokenizer(text)["input_ids"]
decoded_text = tokenizer.decode(encoded_text)
print("Decoded tokens back into text: ", decoded_text)

Example (preprocessing after loading a dataset with the HF datasets API)

import datasets

def tokenizing(example):
    # Called for each batch of examples (here batch_size=1, so each column,
    # e.g. example["question"], is a list holding one string).
    # tokenizer is the AutoTokenizer instance created in the first example.
    if "question" in example and "answer" in example:
        text = example["question"][0] + example["answer"][0]
    elif "input" in example and "output" in example:
        text = example["input"][0] + example["output"][0]
    else:
        text = example["text"][0]

    # Reuse the end-of-sequence token as the padding token.
    tokenizer.pad_token = tokenizer.eos_token
    tokenized_inputs = tokenizer(
        text,
        return_tensors="np",
        padding=True,
    )

    # Cap the sequence length at 2048 tokens and truncate from the left,
    # so the end of the text (usually the answer) is kept.
    max_length = min(
        tokenized_inputs["input_ids"].shape[1],
        2048
    )
    tokenizer.truncation_side = "left"
    tokenized_inputs = tokenizer(
        text,
        return_tensors="np",
        truncation=True,
        max_length=max_length
    )

    return tokenized_inputs

# filename is assumed to point at a local JSON file of question/answer pairs.
finetuning_dataset_loaded = datasets.load_dataset("json", data_files=filename, split="train")

tokenized_dataset = finetuning_dataset_loaded.map(
    tokenizing,
    batched=True,
    batch_size=1,
    drop_last_batch=True
)

print(tokenized_dataset)

'''
output:
Dataset({
    features: ['question', 'answer', 'input_ids', 'attention_mask'],
    num_rows: 1400
})
'''
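
As a quick sanity check (a small sketch against the dataset produced above), you can decode one row back into text and confirm it reads as the original question followed by the answer:

sample = tokenized_dataset[0]
print(sample["question"])                      # the original text columns are still there
print(tokenizer.decode(sample["input_ids"]))   # should read as question + answer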

Example (prompting)

from transformers import AutoModelForCausalLM, AutoTokenizer

def infer(text, model, tokenizer, max_input_tokens=1000, max_output_tokens=100):
    # Tokenize
    input_ids = tokenizer.encode(
        text,
        return_tensors="pt",
        truncation=True,
        max_length=max_input_tokens
    )

    # Generate
    device = model.device
    generated_tokens_with_prompt = model.generate(
        input_ids=input_ids.to(device),
        max_length=max_output_tokens
    )

    # Decode
    generated_text_with_prompt = tokenizer.batch_decode(generated_tokens_with_prompt, skip_special_tokens=True)

    # Strip the prompt
    generated_text_answer = generated_text_with_prompt[0][len(text):]

    return generated_text_answer

# "EleutherAI/pythia-70m" stands in for whatever model you are fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-70m")

print(infer("hello, how to write loop in Python?", model, tokenizer))
