I was able to successfully finetune a gemini 2.5 flash model for VLM tasks. I can access it using vertexai libraries but when I try to go and test the model through Vertex AI Studio chat, it continues to say I’m blocked by org level policies. I’ve reviewed our org level policies (AllowedModels and ResourceLocations) but neither appear to be blocking this. Wanted to see if others have faced this issue and understand which policies may be blocking this. Any help is greatly appreciated!
This usually happens because Vertex AI Studio applies stricter UI-level checks than the SDK, so a fine-tuned Gemini 2.5 Flash model that works through the Vertex AI libraries can still be blocked in Studio by a region mismatch, an org policy allowlist, or VPC Service Controls. Things to check:

- Make sure the region selected in Studio exactly matches the tuned model's deployment region (this is the most common cause).
- Confirm that aiplatform.allowedModels includes the base model (publishers/google/models/gemini-2.5-flash) and that gcp.resourceLocations allows that region.
- Verify your account has the Vertex AI Studio / AI Platform User permissions.
- Check whether a VPC Service Controls perimeter is restricting Studio UI access.

If all of these are correct and it still fails, it is likely a known limitation: some tuned VLM models are currently accessible only through the API/SDK and not yet fully supported in Studio chat.
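If it helps to reason about the allowlist checks concretely, here is a toy Python sketch of the matching logic. The policy values are illustrative assumptions, not read from a real organization, and this is not how Studio actually implements the check; gcp.resourceLocations does accept exact regions and value groups such as "in:us-locations", and the sketch handles only those two shapes:

```python
# Toy model of the two org-policy checks discussed above.
# Assumptions: policy values below are illustrative; real enforcement
# happens server-side in Google Cloud, not in client code like this.

def region_allowed(region: str, allowed_values: list[str]) -> bool:
    """Return True if `region` would pass a gcp.resourceLocations list."""
    for value in allowed_values:
        if value == region:
            return True  # exact region match, e.g. "us-central1"
        if value.startswith("in:") and value.endswith("-locations"):
            # value group, e.g. "in:us-locations" -> regions prefixed "us"
            prefix = value[len("in:"):-len("-locations")]
            if region.startswith(prefix):
                return True
    return False

def model_allowed(base_model: str, allowed_models: list[str]) -> bool:
    """Return True if the tuned model's base model is on the allowlist."""
    return base_model in allowed_models

# Illustrative policy values for the scenario in this thread:
locations = ["in:us-locations"]
models = ["publishers/google/models/gemini-2.5-flash"]

print(region_allowed("us-central1", locations))   # True
print(region_allowed("europe-west4", locations))  # False
print(model_allowed("publishers/google/models/gemini-2.5-flash", models))  # True
```

The point of the sketch is just that both gates have to pass for the region the tuned model is deployed in, which is why switching the Studio region to match the deployment region is the first thing to try.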
Appreciate the response! Unfortunately I don't have a great answer as to what fixed it, but after retrying an hour or so later, it started working. I'm convinced it's related to the Studio region, though I'm not sure how I had changed it. Not very helpful for anyone hitting the same issue, but it does seem to come down to ensuring the Studio region matches the tuned model's deployment region.