Troubleshooting Cloud Logging costs is so hard

Dear dheerajpanyam,

I am 100% sure the cause is the noise from too many logs being written to the default bucket. A retention period of ≤ 30 days is included in the ingestion price per this documentation: https://cloud.google.com/stackdriver/pricing. Even if you set the retention period to 20 days, the cost would still be the same. You only need to control how many GB of logs are ingested into a particular bucket.

As I previously stated, even though GKE (whether Autopilot or Standard) emits all stdout logs, if your ingestion settings only include specific/granular filters, you will not get charged for the non-ingested logs. You will only get charged for the logs that match those filters (except the Networking telemetry I mentioned before, of course).
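As a sketch of how to keep noisy logs out of ingestion entirely, you can attach an exclusion filter to the `_Default` sink. The exclusion name and filter below are just illustrative examples, not from the original thread:

```shell
# Add an exclusion to the _Default sink so matching entries are never ingested
# (and therefore never billed). Name and filter are placeholders.
gcloud logging sinks update _Default \
  --add-exclusion=name=exclude-gke-noise,filter='resource.type="k8s_container" AND severity<WARNING'
```

Excluded entries are dropped before they reach the bucket, which is why they do not count toward ingestion cost.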

For the query, I'm not quite sure whether it is MQL. I think Google Cloud has its own language for this. Please refer to https://cloud.google.com/logging/docs/view/logging-query-language
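For example, a Logging query language filter that narrows results to errors from a single GKE namespace might look like this (the namespace name is just a placeholder):

```
resource.type="k8s_container"
resource.labels.namespace_name="my-namespace"
severity>=ERROR
```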

For the monitoring, is something like this sufficient for you?

To accurately track ingestion, I suggest adding more filters to this monitoring, such as filtering by label. Unfortunately, I don't see any way to label a Logging bucket, so I recommend creating a new Cloud Storage bucket and labeling it with an appropriate key:value pair. Once the bucket is created, create a new sink with that bucket as its destination.
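The steps above can be sketched with gcloud; the bucket name, location, label, and log filter here are all placeholders you would replace with your own:

```shell
# 1. Create a Cloud Storage bucket and label it so its cost is easy to attribute.
gcloud storage buckets create gs://my-log-archive --location=us-central1
gcloud storage buckets update gs://my-log-archive --update-labels=purpose=log-cost-tracking

# 2. Route the logs you care about into it with a new sink (filter is illustrative).
gcloud logging sinks create my-archive-sink \
  storage.googleapis.com/my-log-archive \
  --log-filter='resource.type="k8s_container"'
```

Note that after creating the sink, gcloud prints a writer identity (a service account); you need to grant it write access on the bucket before logs will flow.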

FYI, this is what I usually do: I disabled the default sink and created many custom sinks so I can control my logging cost more flexibly. When I need something, I just enable certain sinks, then disable them again once I've got what I need.
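Toggling sinks on and off is a one-liner each way; `my-archive-sink` below is a hypothetical custom sink name:

```shell
# Disable the _Default sink so nothing flows into the default bucket.
gcloud logging sinks update _Default --disabled

# Enable a custom sink while you need its logs, then turn it back off.
gcloud logging sinks update my-archive-sink --no-disabled
gcloud logging sinks update my-archive-sink --disabled
```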

Hope this gives you some inspiration for your own GCP projects.

Regards,
Iza
