Hello,
I have been searching for how to deploy models on Vertex AI in the same manner as AI Platform. Most tutorials show using a pre-built container, which seems to just load the model and return inference results.
My current requirement needs post-processing of predictions. This was easy to do with AI Platform's Predictor class. Is something like that doable with pre-built containers on Vertex AI (where we upload a package for inference by inheriting from the Predictor class and specifying the class name)? Using a custom container makes it complex to handle and respond to requests.
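For context, this is roughly the pattern I mean, a minimal sketch of the legacy AI Platform custom prediction routine interface (the model loading and the post-processing step here are hypothetical placeholders, not my actual code):

```python
import os
import pickle


class MyPredictor:
    """Sketch of an AI Platform custom prediction routine."""

    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        # Run inference, then apply custom post-processing to each result.
        raw = [self._model(x) for x in instances]
        return [self._post_process(y) for y in raw]

    @staticmethod
    def _post_process(score):
        # Hypothetical post-processing: turn a raw score into a labeled dict.
        return {"label": "positive" if score > 0 else "negative",
                "score": score}

    @classmethod
    def from_path(cls, model_dir):
        # AI Platform calls this with the directory the model was copied to.
        with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
            model = pickle.load(f)
        return cls(model)
```

It is this post-processing hook (the ability to reshape the response before it is returned) that I'd like to reproduce on Vertex AI without maintaining a full custom serving container.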