Large Language Model Inference in Beam
Apache Beam 2.40.0 introduced the RunInference API, which lets you deploy a machine learning model in a Beam pipeline. A RunInference
transform performs inference on a PCollection
of examples using a machine learning (ML) model. The transform outputs a PCollection that contains the input examples and output predictions. For more information, see RunInference here. You can also find inference examples on GitHub.
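For example, a minimal RunInference pipeline using the built-in scikit-learn model handler might look like the following sketch. The model path is a placeholder, and the printed output is just for illustration.

import apache_beam as beam
import numpy as np
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Placeholder path to a pickled scikit-learn model.
model_handler = SklearnModelHandlerNumpy(model_uri="gs://my-bucket/model.pkl")

with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | beam.Create([np.array([1.0, 2.0]), np.array([3.0, 4.0])])
      | RunInference(model_handler=model_handler)
      # Each output element is a PredictionResult containing the input
      # example (.example) and the model's prediction (.inference).
      | beam.Map(lambda result: print(result.example, result.inference))
  )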
Using RunInference with very large models
RunInference works well on arbitrarily large models as long as they can fit on your hardware.
Memory Management
RunInference has several mechanisms for reducing memory utilization. For example, by default RunInference loads at most a single copy of each model per process (rather than one per thread).
Many Beam runners, however, run multiple Beam processes per machine at once. This can cause problems because loading multiple copies of a large model such as an LLM can exceed the memory available on a single machine.
For memory-intensive models, RunInference provides a mechanism for more intelligently sharing memory across multiple processes to reduce the overall memory footprint. To enable this mode, users just have
to set the parameter large_model
to True in their model configuration (see below for an example), and Beam will take care of the memory management. When using a custom model handler, you can override the share_model_across_processes
function or the model_copies
function for a similar effect.
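For example, a custom model handler can opt into cross-process sharing by overriding these methods, roughly as in the following sketch. The model class and loading logic are placeholders, not part of the example above.

from typing import Iterable, Sequence

from apache_beam.ml.inference.base import ModelHandler, PredictionResult

class _DummyModel:
  # Placeholder model: stands in for whatever large model you load.
  def predict(self, batch):
    return [len(x) for x in batch]

class MyLargeModelHandler(ModelHandler[str, PredictionResult, _DummyModel]):
  def load_model(self):
    return _DummyModel()  # placeholder loading logic

  def run_inference(self, batch: Sequence[str], model, inference_args=None) -> Iterable[PredictionResult]:
    predictions = model.predict(batch)
    return [PredictionResult(x, y) for x, y in zip(batch, predictions)]

  def share_model_across_processes(self) -> bool:
    # Load one copy of the model per worker machine instead of one per process.
    return True

  def model_copies(self) -> int:
    # Optionally keep more than one shared copy of the model for throughput.
    return 1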
Running an Example Pipeline with T5
This example demonstrates running inference with a T5
language model using RunInference
in a pipeline. T5
is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks. Each task is converted into a text-to-text format. The example uses T5-11B
, which contains 11 billion parameters and is 45 GB in size. In order to work well on a variety of tasks, T5
prepends a different prefix to the input corresponding to each task. For example, for translation, the input would be: translate English to German: …
and for summarization, it would be: summarize: …
. For more information about T5, see the T5 overview in the Hugging Face documentation.
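For example, the inputs for the pipeline below could be plain strings with the task prefix already prepended, as in this illustrative snippet (the sentences are made up):

task_sentences = [
    "translate English to German: The house is wonderful.",
    "summarize: Apache Beam is a unified model for defining both batch and streaming data processing pipelines.",
]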
To run inference with this model, first, install apache-beam
2.40 or greater:
pip install apache-beam -U
Next, install the required packages listed in requirements.txt and pass the required arguments. You can download the T5-11b
model from Hugging Face Hub with the following steps:
- Install Git LFS following the instructions here
- Run
git lfs install
- Run
git clone https://huggingface.co/t5-11b
(this may take a long time). This downloads the checkpoint; you then need to convert it to a model state dict as described here:
import torch
from transformers import T5ForConditionalGeneration

# Load the cloned checkpoint, then save only its state dict, which is
# what the pipeline's model handler loads later.
model = T5ForConditionalGeneration.from_pretrained("path/to/cloned/t5-11b")
torch.save(model.state_dict(), "path/to/save/state_dict.pth")
You can view the full example code on GitHub. You can run the pipeline either locally or on Dataflow:
- Locally on your machine:
python main.py --runner DirectRunner \
--model_state_dict_path <local or remote path to state_dict> \
--model_name t5-11b
You need to have 45 GB of disk space available to run this example.
- On Google Cloud using Dataflow:
python main.py --runner DataflowRunner \
--model_state_dict_path <gs://path/to/saved/state_dict.pth> \
--model_name t5-11b \
--project <PROJECT_ID> \
--region <REGION> \
--requirements_file requirements.txt \
--staging_location <gs://path/to/staging/location> \
--temp_location <gs://path/to/temp/location> \
--experiments "use_runner_v2,no_use_multiple_sdk_containers" \
--machine_type=n1-highmem-16 \
--disk_size_gb=200
You can also pass other configuration parameters as described here.
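A main.py like the one in this example typically separates its own flags from Beam's. The following sketch shows one way to do that with argparse and PipelineOptions; the structure is illustrative, not the exact example code.

import argparse

from apache_beam.options.pipeline_options import PipelineOptions

def parse_known_args(argv=None):
  parser = argparse.ArgumentParser()
  parser.add_argument(
      "--model_state_dict_path", required=True,
      help="Local or remote path to the saved state_dict.")
  parser.add_argument(
      "--model_name", default="t5-11b",
      help="Hugging Face model name used to build the model configuration.")
  known_args, pipeline_args = parser.parse_known_args(argv)
  # Flags such as --runner, --project, and --region become pipeline options.
  return known_args, PipelineOptions(pipeline_args, save_main_session=True)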
Pipeline Steps
The pipeline contains the following steps:
- Read the inputs.
- Encode the text into transformer-readable token ID integers using a tokenizer.
- Use RunInference to get the output.
- Decode the RunInference output and print it.
The following code snippet contains the four steps:
with beam.Pipeline(options=pipeline_options) as pipeline:
  _ = (
      pipeline
      | "CreateInputs" >> beam.Create(task_sentences)
      | "Preprocess" >> beam.ParDo(Preprocess(tokenizer=tokenizer))
      | "RunInference" >> RunInference(model_handler=model_handler)
      | "PostProcess" >> beam.ParDo(Postprocess(tokenizer=tokenizer))
  )
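Preprocess and Postprocess are DoFns that wrap the tokenizer. A sketch of what they might look like, assuming a Hugging Face tokenizer, is shown below; the padding, maximum length, and output formatting are illustrative.

import apache_beam as beam

class Preprocess(beam.DoFn):
  def __init__(self, tokenizer):
    self._tokenizer = tokenizer

  def process(self, element):
    # Convert the raw text into the token ID tensor the model expects.
    input_ids = self._tokenizer(
        element, return_tensors="pt", padding="max_length",
        max_length=512).input_ids
    return input_ids

class Postprocess(beam.DoFn):
  def __init__(self, tokenizer):
    self._tokenizer = tokenizer

  def process(self, element):
    # element is a PredictionResult: .example holds the input tokens and
    # .inference holds the generated tokens.
    decoded_input = self._tokenizer.decode(
        element.example, skip_special_tokens=True)
    decoded_output = self._tokenizer.decode(
        element.inference, skip_special_tokens=True)
    print(f"{decoded_input} \t Output: {decoded_output}")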
In the third step of the pipeline, we use RunInference. In order to use it, you must first define a ModelHandler. RunInference provides model handlers for PyTorch, TensorFlow, and scikit-learn. Because the example uses a PyTorch model, it uses the PytorchModelHandlerTensor model handler.
A ModelHandler requires parameters like the following (a configuration example follows this list):
- state_dict_path – The path to the saved dictionary of the model state.
- model_class – The class of the PyTorch model that defines the model structure.
- model_params – A dictionary of arguments required to instantiate the model class.
- device – The device on which you wish to run the model. If device = GPU, then a GPU device will be used if it is available. Otherwise, it will be CPU.
- inference_fn – The inference function to use during RunInference.
- large_model – Whether to use memory minimization techniques to lower the memory footprint of your model (see Memory Management above).
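Putting these together, the model handler for this example might be configured roughly as follows. The paths, the AutoConfig call, and the use of make_tensor_model_fn("generate") are assumptions made for illustration, not the exact example code.

from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor
from apache_beam.ml.inference.pytorch_inference import make_tensor_model_fn
from transformers import AutoConfig, T5ForConditionalGeneration

model_handler = PytorchModelHandlerTensor(
    state_dict_path="path/to/save/state_dict.pth",  # or a gs:// path
    model_class=T5ForConditionalGeneration,
    model_params={"config": AutoConfig.from_pretrained("t5-11b")},
    device="CPU",
    # T5 generates text with model.generate rather than model.forward.
    inference_fn=make_tensor_model_fn("generate"),
    large_model=True)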
Troubleshooting Large Models
Pickling errors
When sharing a model across processes with large_model=True
or using a custom model handler, Beam sends the input and output data across a process boundary.
To do this, it uses a serialization method known as pickling.
For example, if you call output = model.my_inference_fn(input_1, input_2), then input_1, input_2, and output all need to be pickled.
The model itself does not need to be pickled since it is not passed across process boundaries.
While most objects can be pickled without issue, if one of these objects is unpickleable you may run into errors like: error: can't pickle fasttext_pybind.fasttext objects.
To work around this, there are a few options:
First, if possible, you can choose not to share your model across processes. This incurs additional memory pressure, but it may be tolerable in some cases.
Second, using a custom model handler you can wrap your model to take in and return serializable types. For example, if your model handler looks like:
class MyModelHandler():
  def load_model(self):
    return model_loading_logic()

  def run_inference(self, batch: Sequence[str], model, inference_args):
    unpickleable_object = Unpickleable(batch)
    unpickleable_returned = model.predict(unpickleable_object)
    my_output = int(unpickleable_returned[0])
    return my_output
you could instead wrap the unpickleable pieces in a model wrapper. Since the model wrapper will sit in the inference process, this will work as long as it only takes in/returns pickleable objects.
class MyWrapper():
  def __init__(self, model):
    self._model = model

  def predict(self, batch: Sequence[str]):
    unpickleable_object = Unpickleable(batch)
    unpickleable_returned = self._model.predict(unpickleable_object)
    return int(unpickleable_returned[0])

class MyModelHandler():
  def load_model(self):
    return MyWrapper(model_loading_logic())

  def run_inference(self, batch: Sequence[str], model: MyWrapper, inference_args):
    return model.predict(batch)
RAG and Prompt Engineering in Beam
Beam is also an excellent tool for improving the quality of your LLM prompts using Retrieval Augmented Generation (RAG). Retrieval augmented generation is a technique that enhances large language models (LLMs) by connecting them to external knowledge sources. This allows the LLM to access and process real-time information, improving the accuracy, relevance, and factuality of its responses.
Beam has several mechanisms to make this process simpler:
- Beam’s MLTransform provides an embeddings package to generate the embeddings used for RAG. You can also use RunInference to generate embeddings if you have a model without an embeddings handler.
- Beam’s Enrichment transform makes it easy to look up embeddings or other information in an external storage system like a vector database.
Collectively, you can use these to perform RAG using the following steps:
Pipeline 1 - generate knowledge base:
- Ingest data from external source using one of Beam’s IO connectors
- Generate embeddings on that data using MLTransform
- Write those embeddings to a vector DB using a ParDo (a sketch of this pipeline is shown below)
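A sketch of this first pipeline, assuming sentence-transformers embeddings via MLTransform and a hypothetical WriteToVectorDB DoFn as the sink, might look like:

import tempfile

import apache_beam as beam
from apache_beam.ml.transforms.base import MLTransform
from apache_beam.ml.transforms.embeddings.huggingface import SentenceTransformerEmbeddings

class WriteToVectorDB(beam.DoFn):
  # Hypothetical sink: replace with a write to your vector database.
  def process(self, element):
    print(element)

documents = [{"content": "Apache Beam is a unified model for batch and streaming processing."}]

embedding_transform = SentenceTransformerEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2", columns=["content"])

with beam.Pipeline() as pipeline:
  _ = (
      pipeline
      | "Ingest" >> beam.Create(documents)  # in practice, use a Beam IO connector
      | "GenerateEmbeddings" >> MLTransform(
          write_artifact_location=tempfile.mkdtemp()).with_transform(embedding_transform)
      | "WriteToVectorDB" >> beam.ParDo(WriteToVectorDB())
  )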
Pipeline 2 - use knowledge base to perform RAG:
- Ingest data from external source using one of Beam’s IO connectors
- Generate embeddings on that data using MLTransform
- Enrich that data with additional embeddings from your vector DB using Enrichment
- Use that enriched data to prompt your LLM with RunInference
- Write that data to your desired sink using one of Beam’s IO connectors
To view an example pipeline performing RAG, see https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/rag_usecase/beam_rag_notebook.ipynb