Cloud Run services vs Cloud Run jobs for ML inference jobs

I wish the official GCP documentation had a comparison table for Cloud Run services vs Cloud Run jobs. I found one via a Google search, but it isn't part of the official GCP documentation. Anyway, coming back to my question: does a Cloud Run service NOT offer parallelism at all? I see a `concurrency` parameter in the service configuration (see my sketch after the questions below). I have two questions:

  1. Do Cloud Run jobs support attaching GPUs?
  2. Is it a good idea to run ML inference jobs using Cloud Run jobs?
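
To make my confusion concrete, here is how I understand the two parallelism knobs would be set. The service/job names and image path are placeholders, and the flags are from my reading of the `gcloud` reference, so I may have details wrong:

```sh
# Cloud Run service: --concurrency controls how many requests a single
# instance handles at the same time (request-level parallelism), on top
# of the autoscaler adding more instances under load.
gcloud run deploy my-inference-service \
  --image=us-docker.pkg.dev/my-project/my-repo/inference:latest \
  --concurrency=80

# Cloud Run job: --tasks sets how many task instances the job runs in
# total, and --parallelism caps how many run simultaneously
# (task-level parallelism).
gcloud run jobs create my-inference-job \
  --image=us-docker.pkg.dev/my-project/my-repo/inference:latest \
  --tasks=50 \
  --parallelism=10
```

So my reading is that a service does offer parallelism, just at the request level rather than the task level, which is why I'm unsure which one fits batch-style ML inference.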