meerqat.train.optim module#

Loss functions, optimizers, and schedulers.

class meerqat.train.optim.LinearLRWithWarmup(*args, warmup_steps, total_steps, **kwargs)[source]#

Bases: LambdaLR

Linear learning rate scheduler with linear warmup: the learning rate increases linearly from 0 over warmup_steps, then decreases linearly to 0 at total_steps. Adapted from huggingface/transformers.

Parameters:
  • *args – additional positional arguments, passed to LambdaLR

  • **kwargs – additional keyword arguments, passed to LambdaLR

  • warmup_steps (int) – number of steps over which the learning rate increases linearly from 0 to its initial value

  • total_steps (int) – total number of training steps; after warmup, the learning rate decreases linearly to 0 at total_steps

lr_lambda(current_step: int)[source]#
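The scheduler follows the usual linear warmup/decay pattern from huggingface/transformers. A minimal self-contained sketch of the lr_lambda factor (the multiplier applied to the optimizer's base learning rate), assuming exactly that standard schedule, could look like:

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

def linear_schedule_with_warmup(optimizer, warmup_steps, total_steps):
    # Sketch of the assumed schedule: the LR ramps linearly from 0 to the
    # base LR over `warmup_steps`, then decays linearly to 0 at `total_steps`.
    def lr_lambda(current_step):
        if current_step < warmup_steps:
            return current_step / max(1, warmup_steps)
        return max(0.0, (total_steps - current_step) / max(1, total_steps - warmup_steps))
    return LambdaLR(optimizer, lr_lambda)

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = linear_schedule_with_warmup(optimizer, warmup_steps=10, total_steps=100)
```

The multiplier peaks at 1.0 exactly when current_step == warmup_steps, so warmup and decay meet continuously at the base learning rate.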
meerqat.train.optim.multi_passage_rc_loss(input_ids, start_positions, end_positions, start_logits, end_logits, answer_mask, max_pooling=False)[source]#
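The exact formulation of this loss isn't documented here. A hypothetical sketch of one common multi-passage span-extraction loss (start/end cross-entropy per answer candidate, padded candidates masked out by answer_mask, and max_pooling keeping only the best-scoring candidate per passage) might look like the following; all shapes and the pooling behavior are assumptions, not the library's confirmed implementation:

```python
import torch

def multi_passage_rc_loss_sketch(input_ids, start_positions, end_positions,
                                 start_logits, end_logits, answer_mask,
                                 max_pooling=False):
    # Assumed shapes: logits (N, L); positions and answer_mask (N, A),
    # where N = batch * passages, L = sequence length, A = answer candidates.
    # input_ids is unused in this sketch; the real function may use it
    # e.g. to mask question or padding tokens.
    ce = torch.nn.CrossEntropyLoss(reduction="none")
    answer_mask = answer_mask.float()
    # Per-candidate span loss: sum of start and end cross-entropies.
    span_losses = torch.stack(
        [ce(start_logits, start_positions[:, a]) + ce(end_logits, end_positions[:, a])
         for a in range(start_positions.size(1))],
        dim=1,
    )  # (N, A)
    # Zero out padded answer candidates.
    span_losses = span_losses * answer_mask
    if max_pooling:
        # Keep only the best (lowest-loss) valid candidate per sequence.
        masked = span_losses + (1.0 - answer_mask) * 1e4
        return masked.min(dim=1).values.mean()
    # Otherwise average over all valid candidates.
    return span_losses.sum() / answer_mask.sum().clamp(min=1)
```

With max_pooling=False every valid annotated span contributes to the loss; with max_pooling=True the model is only trained on the easiest span per sequence, a common trick when answer annotations are noisy.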