meerqat.models.outputs module#
Dataclasses for model outputs.
- class meerqat.models.outputs.ReaderOutput(loss: Optional[FloatTensor] = None, start_logits: Optional[FloatTensor] = None, end_logits: Optional[FloatTensor] = None, hidden_states: Optional[Tuple[FloatTensor]] = None, attentions: Optional[Tuple[FloatTensor]] = None, start_log_probs: Optional[FloatTensor] = None, end_log_probs: Optional[FloatTensor] = None)[source]#
Bases: QuestionAnsweringModelOutput
Same as QuestionAnsweringModelOutput, but with start and end log-probabilities
(equivalent to log_softmax(start_logits) when there is only one passage per question)
- start_log_probs: FloatTensor = None#
- end_log_probs: FloatTensor = None#
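The relation between start_logits and start_log_probs described above can be sketched as a log-softmax over the logits of a single passage. A minimal pure-Python sketch (plain lists stand in for torch.FloatTensor; the example values are illustrative, not taken from the source):

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over a flat list of logits."""
    m = max(logits)
    log_norm = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_norm for x in logits]

# One passage per question: start_log_probs is just log_softmax(start_logits).
start_logits = [2.0, 0.5, -1.0]
start_log_probs = log_softmax(start_logits)

# Exponentiating the log-probabilities recovers a probability distribution.
probs = [math.exp(x) for x in start_log_probs]
assert abs(sum(probs) - 1.0) < 1e-9
```

With several passages per question, the real model would instead normalize over all passage positions jointly; the one-passage case shown here is the equivalence the docstring mentions.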
- class meerqat.models.outputs.EncoderOutput(pooler_output: Optional[FloatTensor] = None)[source]#
Bases: ModelOutput
Generic class for any encoder output of the BiEncoder framework.
- pooler_output: Optional[FloatTensor] = None#
- class meerqat.models.outputs.ECAEncoderOutput(pooler_output: Optional[FloatTensor] = None, last_hidden_state: Optional[FloatTensor] = None, hidden_states: Optional[Tuple[FloatTensor]] = None, attentions: Optional[Tuple[FloatTensor]] = None)[source]#
Bases: EncoderOutput
Returns the full sequence of hidden states (optionally across layers) and attention scores, in addition to the pooled sequence embedding
- pooler_output: Optional[FloatTensor] = None#
- attentions: Optional[Tuple[FloatTensor]] = None#
- class meerqat.models.outputs.BiEncoderOutput(question_pooler_output: Optional[FloatTensor] = None, context_pooler_output: Optional[FloatTensor] = None)[source]#
Bases: ModelOutput
Wraps the outputs of both encoders in a single object.
- question_pooler_output: Optional[FloatTensor] = None#
- context_pooler_output: Optional[FloatTensor] = None#
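In the BiEncoder framework, the question and context encoders each produce a pooled embedding, and BiEncoderOutput carries both. A minimal sketch of how such an output might be assembled and scored; the dataclass below is a stand-in for the real ModelOutput subclass, plain lists replace torch.FloatTensor, and the dot-product scoring is an assumption for illustration, not taken from the source:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BiEncoderOutputSketch:
    # Stand-ins for the pooled torch.FloatTensor embeddings.
    question_pooler_output: Optional[List[float]] = None
    context_pooler_output: Optional[List[float]] = None

def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

output = BiEncoderOutputSketch(
    question_pooler_output=[0.5, 1.0, -0.5],
    context_pooler_output=[1.0, 0.0, 2.0],
)

# Hypothetical relevance score of the context for the question.
score = dot(output.question_pooler_output, output.context_pooler_output)
```

In a dense-retrieval setup, scores like this one would typically be computed batch-wise between every question and every context embedding.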
- class meerqat.models.outputs.JointMonoAndCrossModalOutput(question_images: Optional[torch.FloatTensor] = None, context_images: Optional[torch.FloatTensor] = None, context_titles: Optional[torch.FloatTensor] = None)[source]#
Bases: ModelOutput
- question_images: Optional[FloatTensor] = None#
- context_images: Optional[FloatTensor] = None#
- context_titles: Optional[FloatTensor] = None#
- class meerqat.models.outputs.JointBiEncoderAndClipOutput(question_images: Optional[torch.FloatTensor] = None, context_images: Optional[torch.FloatTensor] = None, context_titles: Optional[torch.FloatTensor] = None, question_pooler_output: Optional[torch.FloatTensor] = None, context_pooler_output: Optional[torch.FloatTensor] = None)[source]#
- class meerqat.models.outputs.ReRankerOutput(logits: Optional[FloatTensor] = None, hidden_states: Optional[Tuple[FloatTensor]] = None, attentions: Optional[Tuple[FloatTensor]] = None)[source]#
Bases: ModelOutput
- Parameters:
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) – Classification (or regression if config.num_labels==1) scores (before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) –
Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) –
Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- logits: FloatTensor = None#
- attentions: Optional[Tuple[FloatTensor]] = None#
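ReRankerOutput.logits holds one classification (or regression, when config.num_labels == 1) score per input. A hedged sketch of how such logits might be used to re-rank retrieved passages; the re-ranking step itself is an assumption for illustration, only the logits shape (batch_size, num_labels) comes from the documentation above, and nested lists stand in for torch.FloatTensor:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReRankerOutputSketch:
    # Stand-in for a torch.FloatTensor of shape (batch_size, num_labels).
    logits: Optional[List[List[float]]] = None

# num_labels == 1: one regression-style relevance score per passage.
output = ReRankerOutputSketch(logits=[[0.2], [1.7], [-0.3]])

passages = ["passage A", "passage B", "passage C"]
scores = [row[0] for row in output.logits]

# Sort passages from most to least relevant according to the re-ranker.
reranked = [p for _, p in sorted(zip(scores, passages), reverse=True)]
```

Here the batch dimension indexes the candidate passages for a single question; a real pipeline would run this per question over the top-k retrieved passages.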