I’m working on an app in which I’ve grounded Gemini 1.5 Pro in a Vertex AI Search app and data store. Users chat with the model; the intended use case is that they ask questions about the subject matter covered in the grounding documents.
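For reference, here’s a simplified sketch of how I’m wiring up the grounding (Vertex AI Python SDK; the project, location, and data store values are placeholders, and exact import paths may vary with SDK version):

```python
import vertexai
from vertexai.generative_models import GenerativeModel, Tool
from vertexai.preview.generative_models import grounding

# Placeholder project/location values.
vertexai.init(project="my-project", location="us-central1")

# Ground the model in the Vertex AI Search data store.
retrieval_tool = Tool.from_retrieval(
    grounding.Retrieval(
        grounding.VertexAISearch(
            datastore=(
                "projects/my-project/locations/global/"
                "collections/default_collection/dataStores/my-data-store"
            )
        )
    )
)

model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "A user question about the grounding documents",
    tools=[retrieval_tool],
)
print(response.text)
```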
I need the model to consistently and reliably decline to answer these kinds of prompts:
- Questions for which the answer cannot be found in the grounding documents
- Questions that are entirely unrelated to the grounding documents
I’ve also observed that the model sometimes provides a citation and sometimes doesn’t, even for the exact same prompt. I’d like the model to provide a citation whenever one is available.
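For what it’s worth, this is roughly how I’m checking for citations: by inspecting the grounding metadata on the response candidate, which is sometimes populated and sometimes not.

```python
# Inspect the grounding metadata on the first candidate; when the model cites
# the data store, this carries the retrieved sources, and when it doesn't,
# it comes back empty.
candidate = response.candidates[0]
print(candidate.grounding_metadata)
```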
My question is: are there particular settings (system instructions, temperature, top-p, something else?) that will make the model behave consistently in the ways described above? Or, given these requirements, would a different tool or approach be more appropriate?
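For concreteness, these are the knobs I’m referring to; the instruction text and values below are only illustrative, not something I’m confident actually enforces the behavior I want.

```python
from vertexai.generative_models import GenerationConfig, GenerativeModel

# Illustrative system instruction and sampling settings only.
model = GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=[
        "Answer only from the provided data store. If the answer is not in "
        "the grounding documents, or the question is unrelated to them, say "
        "you cannot answer. Cite your sources whenever possible.",
    ],
)
response = model.generate_content(
    "A user question",
    tools=[retrieval_tool],  # the grounding tool from the snippet above
    generation_config=GenerationConfig(temperature=0.0, top_p=0.95),
)
```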
Thanks for your help.