Hello Google Cloud Community,
I am working with Google Cloud Batch and would like to increase the memory of G2 instance types by using custom machine types. According to the documentation on accelerator-optimized machines (https://cloud.google.com/compute/docs/accelerator-optimized-machines?hl=ja#g2_limitations), the memory size of G2 instances can be customized, and the docs mention that VM instances with increased memory can be created with gcloud commands.
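For example, my understanding of the Compute Engine side is that something like the following would create a G2 VM with custom memory. Note that the machine type string g2-custom-4-32768 is my guess at the custom machine type pattern (vCPU count, then memory in MiB), and the VM name and zone are just placeholders; I haven't confirmed the exact format or the allowed memory range:

gcloud compute instances create my-g2-test-vm \
    --zone=us-central1-a \
    --machine-type=g2-custom-4-32768 \
    --maintenance-policy=TERMINATE \
    --image-family=debian-12 \
    --image-project=debian-cloud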
However, I'm unclear on how to apply this to Google Cloud Batch jobs. Even though I set compute_resource in my job configuration hoping to get a custom machine type, my jobs still launch with the default g2-standard-8 settings. Here is the relevant part of the configuration I am using:
job {
  task_groups {
    task_spec {
      compute_resource {
        cpu_milli: 4000     # 4 vCPUs
        memory_mib: 30517   # ~30 GiB
      }
      max_run_duration {
        seconds: 1209600
      }
      max_retry_count: 1
      runnables {
        container {
          image_uri: "ubuntu"
          commands: "/bin/bash"
          commands: "-c"
          commands: "sleep 3650d"
        }
      }
    }
  }
  allocation_policy {
    instances {
      # Note: no explicit machine_type is set in this policy.
      policy {
        provisioning_model: SPOT
        accelerators {
          type_: "nvidia-l4"
          count: 1
        }
        boot_disk {
          size_gb: 30
        }
      }
      install_gpu_drivers: true
      install_ops_agent: true
    }
  }
  logs_policy {
    destination: CLOUD_LOGGING
  }
}
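From the Batch API reference, I can see that AllocationPolicy.InstancePolicy also has a machine_type field, so perhaps the machine type has to be pinned there explicitly rather than derived from compute_resource. Below is a sketch of what I would try; the custom machine type string g2-custom-4-32768 is again my assumption, and I don't know whether Batch accepts custom G2 machine types at all:

allocation_policy {
  instances {
    policy {
      # Assumption: set the machine type explicitly instead of letting Batch
      # choose one from compute_resource. The g2-custom-4-32768 string is my
      # guess at the custom machine type naming pattern (4 vCPUs, 32768 MiB);
      # I have not verified that Batch accepts it.
      machine_type: "g2-custom-4-32768"
      provisioning_model: SPOT
      accelerators {
        type_: "nvidia-l4"
        count: 1
      }
      boot_disk {
        size_gb: 30
      }
    }
    install_gpu_drivers: true
    install_ops_agent: true
  }
}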
Could someone advise on how to properly configure a Google Cloud Batch job to use a custom G2 machine type with increased memory?
Thank you!
