Wietse Venema's blog


DevFest Berlin: Running open models on Cloud Run

This is a list of links from my recent talk on open models at DevFest Berlin.

Abstract: Running open large language models in production with serverless GPUs

Many developers are interested in running open large language models, such as Google's Gemma and Meta's Llama. Open models give you full control over deployment options, the timing of model upgrades, and the private data that goes into the model, as well as the ability to fine-tune them for specific tasks such as data extraction. Hugging Face TGI (Text Generation Inference) is a popular open-source LLM inference server. You'll learn how to build and deploy an application that uses an open model on Google Cloud Run, with cost-effective GPUs that scale down to zero instances.
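
To make the setup a bit more concrete: once a TGI server is deployed as a Cloud Run service, your application talks to it over plain HTTP using TGI's /generate endpoint. The sketch below is a minimal example of that call in Python; the service URL is a placeholder, and it assumes the service accepts unauthenticated requests (otherwise you would attach an identity token to the request).

    # Minimal sketch: calling a TGI server running on Cloud Run.
    # SERVICE_URL is a placeholder for your own Cloud Run service URL.
    import requests

    SERVICE_URL = "https://tgi-gemma-example-uc.a.run.app"  # placeholder, not a real service

    def generate(prompt: str, max_new_tokens: int = 128) -> str:
        # TGI's /generate endpoint takes a JSON payload with the prompt
        # ("inputs") and generation parameters, and returns generated_text.
        response = requests.post(
            f"{SERVICE_URL}/generate",
            json={"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}},
            timeout=120,
        )
        response.raise_for_status()
        return response.json()["generated_text"]

    if __name__ == "__main__":
        print(generate("Explain what a serverless GPU is in one sentence."))

Because Cloud Run scales the service down to zero instances when there is no traffic, the first request after an idle period will include a cold start while the model loads onto the GPU.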