Hey,
I'm using the OpenAI library to make function calling requests against the Gemini models. According to the documentation (https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/call-gemini-using-openai-library#client-setup), the Gemini 1.5 Pro model should support forced function calling, but whenever I pass 'required' or 'any' as tool_choice I get this error:
Error code: 400 - [{'error': {'code': 400, 'message': "Expected a string 'tool_choice' to be one of 'none' or 'auto'; found 'required'.", 'status': 'INVALID_ARGUMENT'}}]
I've tried both 'google/gemini-1.5-pro-preview-0409' and 'google/gemini-1.5-pro-001' as models. It does work with 'auto', so the rest of the call appears to be fine.
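For reference, this is a sketch of the request payload that triggers the 400. The model names and the tool_choice values come from the discussion above; the function schema is a made-up example, and the client setup (endpoint, credentials) from the linked docs is omitted:

```python
# Sketch of the failing chat.completions request, built as a plain dict.
# Passing these kwargs to client.chat.completions.create(**request) against
# the Vertex AI OpenAI-compatibility endpoint reproduces the 400.
# "get_weather" is a hypothetical example function, not from the thread.
request = {
    "model": "google/gemini-1.5-pro-001",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # Accepted: "auto", "none". Rejected with INVALID_ARGUMENT: "required", "any".
    "tool_choice": "required",
}
print(request["tool_choice"])
```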
My guess is that much of this API's feature support is still under development (which is why it's labeled experimental). I'm also facing another issue (https://www.googlecloudcommunity.com/gc/AI-ML/Error-using-openai-api-for-Gemini-AI/m-p/771955#M8021) and haven't seen any response so far. Good luck.
Thanks, yes, that’s probably the case.
I've tried the vertexai version of the API too, and it does accept the .ANY mode for function calls with Gemini 1.5 Pro, but strangely it fails when the function call schema is large and .ANY is selected.
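For anyone comparing the two APIs: the native Gemini API doesn't use tool_choice at all; forced calling is expressed through a function-calling config with mode ANY (in the vertexai SDK, ToolConfig.FunctionCallingConfig.Mode.ANY). A sketch of the equivalent JSON fragment, with a hypothetical function name:

```python
# The native Gemini API controls function calling via
# tool_config.function_calling_config.mode, with values "AUTO", "ANY", "NONE".
# "ANY" forces the model to emit a function call; this is what the vertexai
# SDK's ToolConfig.FunctionCallingConfig.Mode.ANY maps to on the wire.
tool_config = {
    "function_calling_config": {
        "mode": "ANY",
        # Optionally restrict which declared functions may be called:
        "allowed_function_names": ["get_weather"],  # hypothetical function
    }
}
print(tool_config["function_calling_config"]["mode"])
```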
Can you share a code sample showing how you made it work? I've been breaking my head over forced function calling for more than a week now. It calls the function, but fails when I send the response back to the LLM. I see the same issue with the @Google/generative-ai library as well.
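One common cause of failures at exactly this step, in case it helps: in the OpenAI chat format the function result must go back as a "tool"-role message whose tool_call_id echoes the id from the assistant's tool_calls entry. A sketch of the expected message shapes (all ids, names, and values here are made up):

```python
import json

# The assistant turn as the model returns it: content is None and the
# function call lives in tool_calls, with a model-assigned id.
assistant_turn = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_123",              # id assigned by the model
        "type": "function",
        "function": {
            "name": "get_weather",     # hypothetical example function
            "arguments": json.dumps({"city": "Oslo"}),
        },
    }],
}

# The function result goes back as a "tool" message; tool_call_id must
# match the id above, or the follow-up request is rejected.
tool_result_turn = {
    "role": "tool",
    "tool_call_id": "call_123",
    "content": json.dumps({"temp_c": 12}),  # your function's return value
}

# Then resend the whole history, e.g.:
# messages = [user_turn, assistant_turn, tool_result_turn]
# response = client.chat.completions.create(model=..., messages=messages, tools=tools)
print(tool_result_turn["tool_call_id"])
```

A missing or mismatched tool_call_id, or an arguments/content field that isn't a JSON string, is worth ruling out before blaming the endpoint.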
Has anyone managed to make this work?