@Maruti Chand Really appreciate your help, thank you.
Still looking for an answer.
From your example set of dimensions I understand that both metrics and filters can be applied to a single dimension, and that it's not possible to get other dimensions in the same call. I think that is by design.
Let me explain my use case in detail.
The client submits a request with a VIN number. For reporting purposes I collect stats on the custom dimensions VIN and username, along with common dimensions like proxy_pathsuffix, request time, and response code.
Order of dimensions as below:
"dimensions": ["proxy_pathsuffix", "username", "vin", "Response status code", "date time"],
As per your explanation, in this use case proxy_pathsuffix is the high-level dimension: it will have many usernames, each username will have many VINs, each VIN will have a response code, and so on.
Basically I view proxy_pathsuffix as a collection, because all the other dimensions can only be drilled down from it. So I want to get all the dimensions within that collection.
I have already tried making a call for stats by proxy_pathsuffix, filtered by API proxy name, username, and response status code 200.
Below is the response. This is fine as far as it goes.
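For reference, the call I'm making looks roughly like this (org name, proxy name, username, and time range are placeholders, not my real values):

```python
from urllib.parse import quote

# Placeholder org/proxy/user values; the path and query parameters follow
# the Apigee management stats API format.
org = "myorg"
env = "exp-dev"
base = f"https://api.enterprise.apigee.com/v1/organizations/{org}/environments/{env}"

dimension = "proxy_pathsuffix"
select = "avg(total_response_time)"
flt = "(apiproxy eq 'myproxy' and username eq 'user1' and response_status_code eq 200)"
time_range = "06/02/2017 00:00~06/03/2017 00:00"

url = (f"{base}/stats/{dimension}"
       f"?select={quote(select)}"
       f"&timeRange={quote(time_range)}"
       f"&timeUnit=minute"
       f"&filter={quote(flt)}")
print(url)
```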
{
"environments": [{
"dimensions": [{
"metrics": [{
"name": "avg(total_response_time)",
"values": [{
"timestamp": 1496435940000,
"value": "463.0"
},
{
"timestamp": 1496433840000,
"value": "593.0"
}]
}],
"name": "/v1/specs"
},
{
"metrics": [{
"name": "avg(total_response_time)",
"values": [{
"timestamp": 1496435940000,
"value": "0.0"
},
{
"timestamp": 1496419260000,
"value": "0.0"
}]
}],
"name": "/v1/fin1"
}],
"name": "exp-dev"
}],
"metaData": {
"errors": [],
"notices": ["Spark engine used",
"query served by:cf960e72-8ee8-45c0-a094-e5f9904b9ee5"]
}
}
But I am looking for more data in the JSON: I would like the sub-dimension vin to also be listed within each dimension, something like this:
{
"environments": [{
"dimensions": [{
"metrics": [{
"name": "avg(total_response_time)",
"values": [{
"timestamp": 1496435940000,
"value": "463.0"
},
{
"timestamp": 1496433840000,
"value": "593.0"
}]
}],
"subdimensions": {
"vin": {
"value": [
"vin1",
"vin2",
"vin3"
]
}
},
"name": "/v1/specs"
},
{
"metrics": [{
"name": "avg(total_response_time)",
"values": [{
"timestamp": 1496435940000,
"value": "0.0"
},
{
"timestamp": 1496419260000,
"value": "0.0"
}]
}],
"subdimensions": {
"vin": {
"value": [
"vin1",
"vin4",
"vin3",
"vin1"
]
}
},
"name": "/v1/fin1"
}],
"name": "dev"
}],
"metaData": {
"errors": [],
"notices": ["Spark engine used",
"query served by:cf960e72-8ee8-45c0-a094-e5f9904b9ee5"]
}
}
I don't know whether this is possible or not, but if there is any other approach, please suggest it.
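In the meantime, one workaround I can think of is to make a second stats call with vin as the dimension (filtered per pathsuffix) and merge the two responses client-side into the shape above. A rough sketch, with purely illustrative data:

```python
# Sketch of a client-side merge: given the per-pathsuffix stats response and a
# mapping of pathsuffix -> vin list (collected from a second stats call), attach
# the vins as a "subdimensions" entry. All names and data here are illustrative.

def attach_vins(stats_response, vins_by_pathsuffix):
    """Add a subdimensions.vin list to each dimension entry, in place."""
    for env in stats_response.get("environments", []):
        for dim in env.get("dimensions", []):
            vins = vins_by_pathsuffix.get(dim["name"], [])
            dim["subdimensions"] = {"vin": {"value": vins}}
    return stats_response

# Illustrative inputs shaped like the responses above
resp = {"environments": [{"name": "dev",
                          "dimensions": [{"name": "/v1/specs", "metrics": []}]}]}
vins = {"/v1/specs": ["vin1", "vin2", "vin3"]}
merged = attach_vins(resp, vins)
```

This still costs one extra API call per pathsuffix, which is why a single-call answer would be preferable.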
Thanks.