Support dying for prebuilt containers on Vertex AI training?

What should customers running workloads in containers on Vertex AI Training do in 2026 if they want to use reasonably recent versions of ML frameworks and CUDA drivers?

I used to recommend that our customers use Google Cloud’s pre-built containers for serverless training on Vertex AI. Sure, they may have been a version or two behind the latest TensorFlow or PyTorch release, but their convenience usually outweighed the need for the cutting edge.

The documentation now lists only seriously outdated ML framework versions. The end-of-patch-and-support dates have already passed (July 2025), and the end-of-availability dates are rapidly approaching for the images that remain alive.
https://docs.cloud.google.com/vertex-ai/docs/training/pre-built-containers
https://docs.cloud.google.com/deep-learning-containers/docs/release-notes

The Artifact Registry repositories for the two image families similarly haven’t been updated in a while.

What’s the current best practice? Just build your own Docker image? Autopackaging was never appealing to me, and I believe it uses these same registries as base images anyway.
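For concreteness, here is roughly what I imagine a custom training image would look like if we go that route. This is only a sketch: the base image tag, the `trainer` package layout, and the `trainer.task` entrypoint module are all hypothetical, and you would pick whatever framework/CUDA combination you actually need.

```
# Hypothetical custom training image for Vertex AI custom jobs.
# The base tag is an assumption; substitute the framework/CUDA
# version your workload requires.
FROM pytorch/pytorch:2.5.1-cuda12.4-cudnn9-runtime

WORKDIR /app

# Install project dependencies first so Docker layer caching works.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the (hypothetical) training package.
COPY trainer/ trainer/

# Vertex AI runs the container's entrypoint when the custom job starts.
ENTRYPOINT ["python", "-m", "trainer.task"]
```

You would then push the image to your own Artifact Registry repository and reference it as the container image URI in the custom job spec, which at least decouples you from the pre-built images' lifecycle.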