Hi,
I am noticing an extreme performance drop with the same code and prompt templates when calling the Gemini-2.5-flash API (with vertexai=True) from a Vertex AI Pipelines component versus running it from a Jupyter notebook in Workbench. From Workbench it takes around 10 minutes to process 5K records, while the same job takes more than an hour from within the Vertex AI Pipelines component.
Has anyone faced a similar issue?