EmbeddingDecode

class penzai.deprecated.v1.nn.embeddings.EmbeddingDecode

Bases: Layer

Uses an embedding table to map embeddings back to token scores.

This layer can be used to map a model’s output embedding to the logits for a distribution over output tokens. It is usually the last layer in a language model.

The primary purpose of this layer is to allow sharing weights between the embedding lookup and decode layers. It functions similarly to a Linear layer, but retrieves its parameter from an EmbeddingTable.
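The weight-tying idea above can be sketched with plain NumPy. This is an illustrative example, not the penzai API: it shows how a single embedding table serves both the lookup step (token ids to embeddings) and the decode step (embeddings to logits, a linear map whose kernel is the shared table).

```python
import numpy as np

# Illustrative sketch (not the penzai API): one embedding table shared
# between the lookup and decode steps ("weight tying").
vocab_size, embedding_dim = 8, 4
rng = np.random.default_rng(0)
table = rng.normal(size=(vocab_size, embedding_dim))

# Lookup: token ids -> embeddings (the role of an embedding lookup layer).
token_ids = np.array([1, 5, 2])
embeddings = table[token_ids]   # shape (3, embedding_dim)

# Decode: output embeddings -> token logits (the role of EmbeddingDecode).
# Equivalent to a Linear layer whose kernel is the transposed shared table.
logits = embeddings @ table.T   # shape (3, vocab_size)
```

Because both steps read from the same `table`, updating the table during training changes the lookup and decode behavior together, which is the point of sharing the parameter.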

Variables:

table (EmbeddingTable) – The embedding table to look up embeddings in.

Methods

__init__(table)

input_structure()

output_structure()

__call__(out_embeddings)

Maps output embeddings to token logits using the embedding table.

Attributes

table

Inherited Methods


attributes_dict()

Constructs a dictionary with all of the fields in the class.

from_attributes(**field_values)

Directly instantiates a struct given all of its fields.

key_for_field(field_name)

Generates a JAX PyTree key for a given field name.

select()

Wraps this struct in a selection, enabling functional-style mutations.

tree_flatten()

Flattens this tree node.

tree_flatten_with_keys()

Flattens this tree node with keys.

tree_unflatten(aux_data, children)

Unflattens this tree node.

treescope_color()

Computes a CSS color to display for this object in treescope.

__call__(out_embeddings: named_axes.NamedArray) → named_axes.NamedArray

Maps output embeddings to token logits using the embedding table.

Parameters:

out_embeddings – The output embeddings that should be mapped to token logits. Should be a named array that includes the same axes as the embedding table, except for the vocabulary axis, and may also include additional batch axes.

Returns:

A named array of logits, which includes all batch axes of the input along with the vocabulary axis of the embedding table.
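The axis bookkeeping described above can be sketched with positional arrays and `einsum`. The shapes here are hypothetical: the table carries (vocabulary, embedding) axes, the input carries batch and sequence axes plus the shared embedding axis, and the contraction eliminates the embedding axis while introducing the vocabulary axis.

```python
import numpy as np

# Hypothetical shapes, standing in for named axes:
# table axes: (vocabulary=8, embedding=4)
# input axes: (batch=2, seq=3, embedding=4)
rng = np.random.default_rng(1)
table = rng.normal(size=(8, 4))
out_embeddings = rng.normal(size=(2, 3, 4))

# Contract over the shared "embedding" axis; the batch axes of the
# input pass through, and the table's vocabulary axis appears in the
# result, matching the contract of __call__.
logits = np.einsum("bse,ve->bsv", out_embeddings, table)
print(logits.shape)  # (2, 3, 8)
```

In penzai itself the same contraction is expressed over named axes, so axis order does not matter; only the axis names have to line up.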