Requesting BLOCK_NONE access for Gemini safety filters — content moderation use case (Vertex AI)

Dear Vertex AI Experts,

My team is building a content moderation pipeline using Gemini on Vertex AI. Google’s official instructions for this use case say to “turn off Gemini’s safety filters” as a best practice when using Gemini as a content moderator, so that the filters do not interfere with the moderation task itself.

We are also aware that we are not supposed to use this to detect CSAM, and we have contingencies in our application code to properly route moderation for inputs that are flagged by the CSAM safety filter output (and likewise for sensitive PII).
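
For context, this is roughly the shape of that contingency (a minimal sketch using the google-genai Python SDK; the project ID, model version, prompt, and `route_to_manual_review` helper are placeholders for our actual code):

```python
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="our-project-id", location="us-central1")

def route_to_manual_review(text: str) -> str:
    # Placeholder for our human-review escalation path.
    return "ESCALATED"

def moderate(text: str) -> str:
    response = client.models.generate_content(
        model="gemini-2.5-flash",  # placeholder model version
        contents=f"Classify this content for policy violations:\n{text}",
    )
    # If the prompt is blocked outright (e.g. by the non-configurable
    # filters), no candidates come back, so we escalate.
    if not response.candidates:
        return route_to_manual_review(text)
    candidate = response.candidates[0]
    # Non-configurable filter outputs: PROHIBITED_CONTENT covers CSAM,
    # SPII covers sensitive personally identifiable information.
    if candidate.finish_reason in (
        types.FinishReason.PROHIBITED_CONTENT,
        types.FinishReason.SPII,
    ):
        return route_to_manual_review(text)
    return response.text
```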

We have set the safety filter to BLOCK_NONE for all configurable categories, as recommended by the documentation; however, it is unclear whether we are allowed to use this setting, given the following statement on the content-safety-filters page (see References below):

> This is a restricted field that isn’t available to all users in GA model versions.
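
For reference, here is how we are passing the setting today (a minimal sketch with the google-genai SDK; the project ID and model name are placeholders):

```python
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="our-project-id", location="us-central1")

# BLOCK_NONE on all four configurable harm categories; per the docs,
# the response still includes per-category safety_ratings, the model
# just never blocks on them.
safety_settings = [
    types.SafetySetting(category=cat, threshold=types.HarmBlockThreshold.BLOCK_NONE)
    for cat in (
        types.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        types.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        types.HarmCategory.HARM_CATEGORY_HARASSMENT,
        types.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
    )
]

response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder model version
    contents="...",
    config=types.GenerateContentConfig(safety_settings=safety_settings),
)
```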

Could someone from the Vertex AI team advise on how to get this access enabled for our project, or point us to the right request process? We are happy to share our project ID directly with a Google engineer here. Specifically, we would like:

  1. Access/approval for “BLOCK_NONE” on all 4 harm categories
  2. Confirmation of whether any additional terms of service / policy approval is required
  3. Guidance on whether any additional approval is needed for the “OFF” setting (the default for Gemini 2.5 Flash; see the sketch after this list)
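
To be clear on point 3, this is the variant we mean (a sketch; as we understand the docs, OFF disables the classifier for a category entirely, whereas BLOCK_NONE still scores content but never blocks on it):

```python
from google.genai import types

# OFF: no safety score is computed or returned for the category.
# BLOCK_NONE: the category is still scored, but nothing is blocked.
off_setting = types.SafetySetting(
    category=types.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold=types.HarmBlockThreshold.OFF,
)
```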

Thanks for your time; I know this is more a request than a discussion topic, but I wasn’t able to find a reasonable avenue for this in the GCP console.

References: