apache_beam.ml.rag.ingestion.alloydb module
- class apache_beam.ml.rag.ingestion.alloydb.AlloyDBConnectionConfig(jdbc_url: str, username: str, password: str, connection_properties: Dict[str, str] | None = None, connection_init_sqls: List[str] | None = None, autosharding: bool | None = None, max_connections: int | None = None, write_batch_size: int | None = None)[source]
Bases:
object
Configuration for AlloyDB database connection.
Provides connection details and options for connecting to an AlloyDB instance.
- jdbc_url
JDBC URL for the AlloyDB instance. Example: 'jdbc:postgresql://host:port/database'
- Type:
str
- connection_properties
Optional JDBC connection properties dict. Example: {'ssl': 'true'}
- Type:
Dict[str, str] | None
- connection_init_sqls
Optional list of SQL statements to execute when connection is established.
- Type:
List[str] | None
- autosharding
Enable automatic re-sharding of bundles to scale the number of shards with workers.
- Type:
bool | None
- max_connections
Optional number of connections in the pool. Use negative for no limit.
- Type:
int | None
Example
>>> config = AlloyDBConnectionConfig(
...     jdbc_url='jdbc:postgresql://localhost:5432/mydb',
...     username='user',
...     password='pass',
...     connection_properties={'ssl': 'true'},
...     max_connections=10
... )
- class apache_beam.ml.rag.ingestion.alloydb.ConflictResolution(on_conflict_fields: str | List[str], action: Literal['UPDATE', 'IGNORE'] = 'UPDATE', update_fields: List[str] | None = None)[source]
Bases:
object
Specification for how to handle conflicts during insert.
Configures conflict handling behavior when inserting records that may violate unique constraints.
- on_conflict_fields
Field(s) that determine uniqueness. Can be a single field name or list of field names for composite constraints.
- action
How to handle conflicts - either “UPDATE” or “IGNORE”. UPDATE: Updates existing record with new values. IGNORE: Skips conflicting records.
- Type:
Literal['UPDATE', 'IGNORE']
- update_fields
Optional list of fields to update on conflict. If None, all non-conflict fields are updated.
- Type:
List[str] | None
Examples
Simple primary key:
>>> ConflictResolution("id")
Composite key with specific update fields:
>>> ConflictResolution(
...     on_conflict_fields=["source", "timestamp"],
...     action="UPDATE",
...     update_fields=["embedding", "content"]
... )
Ignore conflicts:
>>> ConflictResolution(
...     on_conflict_fields="id",
...     action="IGNORE"
... )
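In practice, a ConflictResolution is passed to AlloyDBVectorWriterConfig (documented below). A minimal sketch, assuming a table whose id column carries a unique constraint:
>>> config = AlloyDBVectorWriterConfig(
...     connection_config=AlloyDBConnectionConfig(
...         jdbc_url='jdbc:postgresql://localhost:5432/mydb',
...         username='user',
...         password='pass'
...     ),
...     table_name='embeddings',
...     conflict_resolution=ConflictResolution(
...         on_conflict_fields="id",
...         action="UPDATE",
...         update_fields=["embedding", "content"]
...     )
... )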
- apache_beam.ml.rag.ingestion.alloydb.chunk_embedding_fn(chunk: Chunk) str [source]
Convert embedding to PostgreSQL array string.
Formats dense embedding as a PostgreSQL-compatible array string. Example: [1.0, 2.0] -> '{1.0,2.0}'
- Parameters:
chunk – Input Chunk object.
- Returns:
PostgreSQL array string representation of the embedding.
- Return type:
str
- Raises:
ValueError – If chunk has no dense embedding.
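A minimal sketch of the conversion, assuming the Chunk, Content, and Embedding dataclasses from apache_beam.ml.rag.types accept the keyword arguments shown (and that a missing dense embedding raises ValueError, as noted above):
>>> from apache_beam.ml.rag.types import Chunk, Content, Embedding
>>> chunk = Chunk(
...     content=Content(text="hello world"),
...     embedding=Embedding(dense_embedding=[1.0, 2.0])
... )
>>> chunk_embedding_fn(chunk)
'{1.0,2.0}'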
- apache_beam.ml.rag.ingestion.alloydb.chunk_content_fn(chunk: Chunk) str [source]
Extract content text from chunk.
- Parameters:
chunk – Input Chunk object.
- Returns:
The chunk’s content text.
- Return type:
str
- apache_beam.ml.rag.ingestion.alloydb.chunk_metadata_fn(chunk: Chunk) str [source]
Extract metadata from chunk as JSON string.
- Parameters:
chunk – Input Chunk object.
- Returns:
JSON string representation of the chunk’s metadata.
- Return type:
str
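For illustration, the content and metadata helpers side by side; a sketch assuming the same Chunk and Content constructors as above (exact JSON key ordering may differ):
>>> chunk = Chunk(
...     content=Content(text="hello world"),
...     metadata={"source": "a.txt"}
... )
>>> chunk_content_fn(chunk)
'hello world'
>>> chunk_metadata_fn(chunk)
'{"source": "a.txt"}'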
- class apache_beam.ml.rag.ingestion.alloydb.ColumnSpec(column_name: str, python_type: Type, value_fn: Callable[[Chunk], Any], sql_typecast: str | None = None)[source]
Bases:
object
Specification for mapping Chunk fields to SQL columns for insertion.
Defines how to extract and format values from Chunks into database columns, handling the full pipeline from Python value to SQL insertion.
The insertion process works as follows:
- value_fn extracts a value from the Chunk and formats it as needed
- The value is stored in a NamedTuple field with the specified python_type
- During SQL insertion, the value is bound to a ? placeholder
- python_type
Python type for the NamedTuple field that will hold the value. Must be compatible with RowCoder.
- Type:
Type
- value_fn
Function to extract and format the value from a Chunk. Takes a Chunk and returns a value of python_type.
- Type:
Callable[[apache_beam.ml.rag.types.Chunk], Any]
- sql_typecast
Optional SQL type cast to append to the ? placeholder. Common examples:
- "::float[]" for vector arrays
- "::jsonb" for JSON data
- Type:
str | None
Examples
Basic text column (uses standard JDBC type mapping):
>>> ColumnSpec.text(
...     column_name="content",
...     value_fn=lambda chunk: chunk.content.text
... )
# Results in: INSERT INTO table (content) VALUES (?)
Vector column with explicit array casting:
>>> ColumnSpec.vector(
...     column_name="embedding",
...     value_fn=lambda chunk: '{' +
...         ','.join(map(str, chunk.embedding.dense_embedding)) + '}'
... )
# Results in: INSERT INTO table (embedding) VALUES (?::float[])
# The value_fn formats [1.0, 2.0] as '{1.0,2.0}' for PostgreSQL array
Timestamp from metadata with explicit casting:
>>> ColumnSpec(
...     column_name="created_at",
...     python_type=str,
...     value_fn=lambda chunk: chunk.metadata.get("timestamp"),
...     sql_typecast="::timestamp"
... )
# Results in: INSERT INTO table (created_at) VALUES (?::timestamp)
# Allows inserting string timestamps with proper PostgreSQL casting
- Factory Methods:
text: Creates a text column specification (no type cast).
integer: Creates an integer column specification (no type cast).
float: Creates a float column specification (no type cast).
vector: Creates a vector column specification with float[] casting.
jsonb: Creates a JSONB column specification with jsonb casting.
- classmethod text(column_name: str, value_fn: Callable[[Chunk], Any]) ColumnSpec [source]
Create a text column specification.
- classmethod integer(column_name: str, value_fn: Callable[[Chunk], Any]) ColumnSpec [source]
Create an integer column specification.
- classmethod float(column_name: str, value_fn: Callable[[Chunk], Any]) ColumnSpec [source]
Create a float column specification.
- classmethod vector(column_name: str, value_fn: Callable[[Chunk], Any] = chunk_embedding_fn) ColumnSpec [source]
Create a vector column specification.
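For illustration, a hypothetical integer column derived from chunk metadata (the "page" key is an assumed example, not part of the Chunk schema):
>>> ColumnSpec.integer(
...     column_name="page_number",
...     value_fn=lambda chunk: int(chunk.metadata.get("page", 0))
... )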
- apache_beam.ml.rag.ingestion.alloydb.chunk_id_fn(chunk: Chunk) str [source]
Extract ID from chunk.
- Parameters:
chunk – Input Chunk object.
- Returns:
The chunk’s ID.
- Return type:
- class apache_beam.ml.rag.ingestion.alloydb.ColumnSpecsBuilder[source]
Bases:
object
Builder for ColumnSpecs with chainable methods.
- static with_defaults() ColumnSpecsBuilder [source]
Add all default column specifications.
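A minimal sketch, assuming with_defaults() returns a builder pre-populated with the default id, embedding, content, and metadata specs (the order shown matches the defaults of AlloyDBVectorWriterConfig below):
>>> specs = ColumnSpecsBuilder.with_defaults().build()
>>> [spec.column_name for spec in specs]
['id', 'embedding', 'content', 'metadata']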
- with_id_spec(column_name: str = 'id', python_type: Type = str, convert_fn: Callable[[str], Any] | None = None, sql_typecast: str | None = None) ColumnSpecsBuilder [source]
Add ID ColumnSpec with optional type and conversion.
- Parameters:
column_name – Name for the ID column (defaults to “id”)
python_type – Python type for the column (defaults to str)
convert_fn – Optional function to convert the chunk ID. If None, uses ID as-is
sql_typecast – Optional SQL type cast
- Returns:
Self for method chaining
Example
>>> builder.with_id_spec( ... column_name="doc_id", ... python_type=int, ... convert_fn=lambda id: int(id.split('_')[1]) ... )
- with_content_spec(column_name: str = 'content', python_type: Type = str, convert_fn: Callable[[str], Any] | None = None, sql_typecast: str | None = None) ColumnSpecsBuilder [source]
Add content ColumnSpec with optional type and conversion.
- Parameters:
column_name – Name for the content column (defaults to “content”)
python_type – Python type for the column (defaults to str)
convert_fn – Optional function to convert the content text. If None, uses content text as-is
sql_typecast – Optional SQL type cast
- Returns:
Self for method chaining
Example
>>> builder.with_content_spec( ... column_name="content_length", ... python_type=int, ... convert_fn=len # Store content length instead of content ... )
- with_metadata_spec(column_name: str = 'metadata', python_type: Type = str, convert_fn: Callable[[Dict[str, Any]], Any] | None = None, sql_typecast: str | None = '::jsonb') ColumnSpecsBuilder [source]
Add metadata ColumnSpec with optional type and conversion.
- Parameters:
column_name – Name for the metadata column (defaults to “metadata”)
python_type – Python type for the column (defaults to str)
convert_fn – Optional function to convert the metadata dictionary. If None and python_type is str, converts to JSON string
sql_typecast – Optional SQL type cast (defaults to “::jsonb”)
- Returns:
Self for method chaining
Example
>>> builder.with_metadata_spec( ... column_name="meta_tags", ... python_type=list, ... convert_fn=lambda meta: list(meta.keys()), ... sql_typecast="::text[]" ... )
- with_embedding_spec(column_name: str = 'embedding', convert_fn: Callable[[List[float]], Any] | None = None) ColumnSpecsBuilder [source]
Add embedding ColumnSpec with optional conversion.
- Parameters:
column_name – Name for the embedding column (defaults to “embedding”)
convert_fn – Optional function to convert the dense embedding values. If None, uses default PostgreSQL array format
- Returns:
Self for method chaining
Example
>>> builder.with_embedding_spec( ... column_name="embedding_vector", ... convert_fn=lambda values: '{' + ','.join(f"{x:.4f}" ... for x in values) + '}' ... )
- add_metadata_field(field: str, python_type: Type, column_name: str | None = None, convert_fn: Callable[[Any], Any] | None = None, default: Any | None = None, sql_typecast: str | None = None) ColumnSpecsBuilder [source]
Add a ColumnSpec that extracts and converts a field from chunk metadata.
- Parameters:
field – Key to extract from chunk metadata
python_type – Python type for the column (e.g. str, int, float)
column_name – Name for the column (defaults to metadata field name)
convert_fn – Optional function to convert the extracted value to desired type. If None, value is used as-is
default – Default value if field is missing from metadata
sql_typecast – Optional SQL type cast (e.g. “::timestamp”)
- Returns:
Self for chaining
Examples
Simple string field:
>>> builder.add_metadata_field("source", str)
Integer with default:
>>> builder.add_metadata_field(
...     field="count",
...     python_type=int,
...     column_name="item_count",
...     default=0
... )
Float with conversion and default:
>>> builder.add_metadata_field(
...     field="confidence",
...     python_type=float,
...     convert_fn=lambda x: round(float(x), 2),
...     default=0.0
... )
Timestamp with conversion and type cast:
>>> builder.add_metadata_field(
...     field="created_at",
...     python_type=str,
...     convert_fn=lambda ts: ts.replace('T', ' '),
...     sql_typecast="::timestamp"
... )
- add_custom_column_spec(spec: ColumnSpec) ColumnSpecsBuilder [source]
Add a custom ColumnSpec to the builder.
Use this method when you need complete control over the ColumnSpec, including custom value extraction and type handling.
- Parameters:
spec – A ColumnSpec instance defining the column name, type, value extraction, and optional SQL type casting.
- Returns:
Self for method chaining
Examples
Custom text column from chunk metadata:
>>> builder.add_custom_column_spec(
...     ColumnSpec.text(
...         column_name="source_and_id",
...         value_fn=lambda chunk:
...             f"{chunk.metadata.get('source')}_{chunk.id}"
...     )
... )
- build() List[ColumnSpec] [source]
Build the final list of column specifications.
- class apache_beam.ml.rag.ingestion.alloydb.AlloyDBVectorWriterConfig(connection_config: ~apache_beam.ml.rag.ingestion.alloydb.AlloyDBConnectionConfig, table_name: str, *, column_specs: ~typing.List[~apache_beam.ml.rag.ingestion.alloydb.ColumnSpec] = [ColumnSpec(column_name='id', python_type=<class 'str'>, value_fn=<function ColumnSpecsBuilder.with_id_spec.<locals>.value_fn>, sql_typecast=None), ColumnSpec(column_name='embedding', python_type=<class 'str'>, value_fn=<function ColumnSpecsBuilder.with_embedding_spec.<locals>.value_fn>, sql_typecast='::float[]'), ColumnSpec(column_name='content', python_type=<class 'str'>, value_fn=<function ColumnSpecsBuilder.with_content_spec.<locals>.value_fn>, sql_typecast=None), ColumnSpec(column_name='metadata', python_type=<class 'str'>, value_fn=<function ColumnSpecsBuilder.with_metadata_spec.<locals>.value_fn>, sql_typecast='::jsonb')], conflict_resolution: ~apache_beam.ml.rag.ingestion.alloydb.ConflictResolution | None = ConflictResolution(on_conflict_fields=[], action='IGNORE', update_fields=None))[source]
Bases:
VectorDatabaseWriteConfig
Configuration for writing vectors to AlloyDB using managed transforms.
Supports flexible schema configuration through column specifications and conflict resolution strategies.
- Parameters:
connection_config – AlloyDB connection configuration.
table_name – Target table name.
column_specs – Column specifications. If None, uses default Chunk schema. Use ColumnSpecsBuilder to construct the specifications.
conflict_resolution – Optional strategy for handling insert conflicts. Defaults to ON CONFLICT DO NOTHING.
Examples
Basic usage with default schema:
>>> config = AlloyDBVectorWriterConfig(
...     connection_config=AlloyDBConnectionConfig(...),
...     table_name='embeddings'
... )
Custom schema with metadata fields:
>>> specs = (ColumnSpecsBuilder()
...     .with_id_spec()
...     .with_embedding_spec(column_name="embedding_vec")
...     .add_metadata_field("source", str)
...     .add_metadata_field(
...         "timestamp",
...         str,
...         column_name="created_at",
...         sql_typecast="::timestamp"
...     )
...     .build())
>>> config = AlloyDBVectorWriterConfig(
...     connection_config=AlloyDBConnectionConfig(...),
...     table_name='embeddings',
...     column_specs=specs
... )
- create_write_transform() PTransform [source]
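End-to-end usage goes through the managed write transform; a hedged sketch, assuming the companion VectorDatabaseWriteTransform from apache_beam.ml.rag.ingestion.base wraps this config. Here config is an AlloyDBVectorWriterConfig as above and embedded_chunks is a placeholder for an iterable of Chunk objects that already carry embeddings:
>>> import apache_beam as beam
>>> from apache_beam.ml.rag.ingestion.base import VectorDatabaseWriteTransform
>>> with beam.Pipeline() as p:
...     _ = (
...         p
...         | beam.Create(embedded_chunks)  # PCollection of embedded Chunks
...         | VectorDatabaseWriteTransform(config)
...     )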