Hi, I fine-tuned some models on the Vertex AI managed tuning platform using Gemini 2.0 Flash as the base model.
Once fine-tuning completes, I get "pre-deployed" endpoints (and their respective endpoint IDs), which I have been using to make inference calls to these fine-tuned models.
However, I need to be able to view a record of past request/response payloads (or at least the text inputs and outputs) for these endpoints, and I couldn't find a way to make this work.
I followed the available documentation to enable logging and direct those logs either to Cloud Logging (Logs Explorer) or to a BigQuery table. Every attempt has resulted in the same problem: the actual "request" and "response" (i.e., input and output) contents are never logged.
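For reference, this is roughly the request-response logging configuration I tried to apply, following the `endpoints.patch` REST method with `updateMask=predictRequestResponseLoggingConfig` (the project, dataset, and table names below are placeholders, not my real values):

```json
{
  "predictRequestResponseLoggingConfig": {
    "enabled": true,
    "samplingRate": 1.0,
    "bigqueryDestination": {
      "outputUri": "bq://my-project.my_dataset.request_response_log"
    }
  }
}
```

After patching, the endpoint shows the config as enabled, and the BigQuery table gets created, but the payload columns stay empty for calls to the tuned-model endpoint.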
Similar products (such as the OpenAI platform's endpoints for fine-tuned models) have pretty straightforward server-side logging for this. Is this not available for Vertex AI, or am I missing something?