Make JQ output a table

My question is: how can I make jq output a table, replacing absent values with 0?

So the input for jq is the following Elasticsearch JSON response:

{"aggregations": {
    "overall": {
        "buckets": [
            {
                "key": "2018-01-18T00:00:00.000Z-2018-01-25T19:33:16.010Z",
                "from_as_string": "2018-01-18T00:00:00.000Z",
                "to": 1516908796010,
                "to_as_string": "2018-01-25T19:33:16.010Z",
                "doc_count": 155569,
                "agg_per_name": {
                    "doc_count_error_upper_bound": 0,
                    "sum_other_doc_count": 0,
                    "buckets": [
                        {
                            "key": "ASSET-DD583",
                            "doc_count": 3016,
                            "totalMaxUptime_perDays": {
                                "buckets": [
                                    {
                                        "key_as_string": "2018-01-22T00:00:00.000Z",
                                        "key": 1516579200000,
                                        "doc_count": 161,
                                        "totalMaxUptime": {
                                            "value": 77598
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-23T00:00:00.000Z",
                                        "key": 1516665600000,
                                        "doc_count": 251,
                                        "totalMaxUptime": {
                                            "value": 80789
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-24T00:00:00.000Z",
                                        "key": 1516752000000,
                                        "doc_count": 192,
                                        "totalMaxUptime": {
                                            "value": 56885
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-25T00:00:00.000Z",
                                        "key": 1516838400000,
                                        "doc_count": 2088,
                                        "totalMaxUptime": {
                                            "value": 7392705
                                        }
                                    }
                                ]
                            }
                        },
                        {
                            "key": "ASSET-DD568",
                            "doc_count": 2990,
                            "totalMaxUptime_perDays": {
                                "buckets": [
                                    {
                                        "key_as_string": "2018-01-18T00:00:00.000Z",
                                        "key": 1516233600000,
                                        "doc_count": 106,
                                        "totalMaxUptime": {
                                            "value": 31241
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-19T00:00:00.000Z",
                                        "key": 1516320000000,
                                        "doc_count": 241,
                                        "totalMaxUptime": {
                                            "value": 2952565
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-20T00:00:00.000Z",
                                        "key": 1516406400000,
                                        "doc_count": 326,
                                        "totalMaxUptime": {
                                            "value": 2698235
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-21T00:00:00.000Z",
                                        "key": 1516492800000,
                                        "doc_count": 214,
                                        "totalMaxUptime": {
                                            "value": 85436
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-22T00:00:00.000Z",
                                        "key": 1516579200000,
                                        "doc_count": 279,
                                        "totalMaxUptime": {
                                            "value": 83201
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-23T00:00:00.000Z",
                                        "key": 1516665600000,
                                        "doc_count": 50,
                                        "totalMaxUptime": {
                                            "value": 96467
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-24T00:00:00.000Z",
                                        "key": 1516752000000,
                                        "doc_count": 5,
                                        "totalMaxUptime": {
                                            "value": 903
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-25T00:00:00.000Z",
                                        "key": 1516838400000,
                                        "doc_count": 1769,
                                        "totalMaxUptime": {
                                            "value": 12337946
                                        }
                                    }
                                ]
                            }
                        },
                        {
                            "key": "ASSET-42631",
                            "doc_count": 2899,
                            "totalMaxUptime_perDays": {
                                "buckets": [
                                    {
                                        "key_as_string": "2018-01-18T00:00:00.000Z",
                                        "key": 1516233600000,
                                        "doc_count": 132,
                                        "totalMaxUptime": {
                                            "value": 39054
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-19T00:00:00.000Z",
                                        "key": 1516320000000,
                                        "doc_count": 172,
                                        "totalMaxUptime": {
                                            "value": 47634
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-20T00:00:00.000Z",
                                        "key": 1516406400000,
                                        "doc_count": 214,
                                        "totalMaxUptime": {
                                            "value": 68264
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-21T00:00:00.000Z",
                                        "key": 1516492800000,
                                        "doc_count": 220,
                                        "totalMaxUptime": {
                                            "value": 66243
                                        }
                                    },
                                    {
                                        "key_as_string": "2018-01-25T00:00:00.000Z",
                                        "key": 1516838400000,
                                        "doc_count": 128,
                                        "totalMaxUptime": {
                                            "value": 47660
                                        }
                                    }
                                ]
                            }
                        }
                    ]
                }
            }
        ]
    }
}
}

This JSON has some inherent properties:

  1. There will be a variable number of buckets in agg_per_name.buckets.
  2. totalMaxUptime_perDays.buckets is a group-by-day over the last 7 days from the current date. For every asset it will contain between 1 and 8 buckets, each corresponding to a certain date.

The desired jq output for the given sample is a table with the dates from key_as_string across the horizontal axis (in this case 2018-01-18 to 2018-01-25) and all the asset keys (i.e. ASSET-DD583, ASSET-DD568, etc.) down the vertical axis. The table is populated with totalMaxUptime.value for every corresponding date; if a date is not present in the result, a "0" value should be put instead:

XXXXXXXXXXX, 2018-01-18, 2018-01-19, 2018-01-20, 2018-01-21, 2018-01-22, 2018-01-23, 2018-01-24, 2018-01-25
ASSET-DD583, 0,           0,           0,         0,          77598,      80789,      56885,      7392705
ASSET-DD568, 31241,       2952565,     2698235,   85436,      83201,      96467,      903,        12337946
ASSET-42631, 39054,       47634,       68264,     66243,      0,          0,          0,          47660

EDIT 1:

This is how far I got:

jq '.aggregations.overall.buckets[0].agg_per_name.buckets[] | .key + ", " + (.totalMaxUptime_perDays.buckets[] | .key_as_string + ", " + (.totalMaxUptime.value | tostring))' input.json | sed 's/"//g' | sed 's/T00:00:00.000Z//g' > uptime.csv

Which produces this kind of output:

ASSET-DD583, 2018-01-22, 77598
ASSET-DD583, 2018-01-23, 80789
ASSET-DD583, 2018-01-24, 56885
ASSET-DD583, 2018-01-25, 7392705
...............

In the following, I've used @tsv so the output can more easily be seen as a table, but you might want to use @csv.

The tricky part here is to put the 0s in the right place. Creating a JSON "dictionary" (i.e. JSON object) makes it easy. Here, normalize takes advantage of the fact that jq will honor the order in which keys are added to an object.

def dates:
  ["2018-01-18", "2018-01-19", "2018-01-20", "2018-01-21", "2018-01-22", "2018-01-23", "2018-01-24", "2018-01-25"];

# Turn a {date: value} object into one with an entry for every date,
# filling in 0 where a date is missing, in the order given by dates.
def normalize:
  . as $in
  | reduce dates[] as $k ({}; .[$k] = ($in[$k] // 0));

# Emit the header row, then one row per asset.
(["Asset"] + dates),
(.aggregations.overall.buckets[].agg_per_name.buckets[]
 | .key as $asset
 | .totalMaxUptime_perDays.buckets
 | map( { (.key_as_string | sub("T.*";"") ): .totalMaxUptime.value } )
 | add
 | normalize
 | [$asset] + [.[]]
 )
| @tsv
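
Assuming the program above is saved to a file, say table.jq (a file name used here only for illustration), it can be run with jq's raw-output flag so the @tsv rows are printed without quoting:

jq -r -f table.jq input.json > uptime.tsv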

You might want to modify the above so that dates is computed from the data.
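
For instance, here is one way that could look. This is only a sketch: the parameterized normalize($dates) and the $dates binding are my variation, not part of the answer above.

def normalize($dates):
  . as $in
  | reduce $dates[] as $k ({}; .[$k] = ($in[$k] // 0));

# Collect every distinct day that occurs anywhere in the input; unique also sorts them.
([ .aggregations.overall.buckets[].agg_per_name.buckets[]
   | .totalMaxUptime_perDays.buckets[]
   | .key_as_string
   | sub("T.*";"") ]
 | unique) as $dates
| (["Asset"] + $dates),
  (.aggregations.overall.buckets[].agg_per_name.buckets[]
   | .key as $asset
   | .totalMaxUptime_perDays.buckets
   | map( { (.key_as_string | sub("T.*";"")): .totalMaxUptime.value } )
   | add
   | normalize($dates)
   | [$asset] + [.[]])
| @tsv

For the sample input this yields the same table, since every date from 2018-01-18 to 2018-01-25 occurs somewhere in the data; a date missing from every asset, however, would not get a column at all.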

Output:
Asset   2018-01-18  2018-01-19  2018-01-20  2018-01-21  2018-01-22  2018-01-23  2018-01-24  2018-01-25
ASSET-DD583 0   0   0   0   77598   80789   56885   7392705
ASSET-DD568 31241   2952565 2698235 85436   83201   96467   903 12337946
ASSET-42631 39054   47634   68264   66243   0   0   0   47660

EDIT: Parentheses around $in[$k] // 0 have been added; they ensure the // 0 fallback applies to the looked-up value rather than to the assignment as a whole, so missing dates become 0 rather than null.

A partial solution for your problem.

You can put the values of an array on the same line if you use @csv.

For example, let's say you have

{
  "a": [1,2,3],
  "b": [
    {
      "x": 10
    },
    {
      "x": 20
    },
    {
      "x": 30
    }
  ]
}

To obtain 1,2,3 you should use jq '.a | @csv'

To obtain 10,20,30 you should use jq '[.b[].x] | @csv'
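
Note that @csv produces a JSON string, so by default jq prints it wrapped in quotes; the -r (raw output) flag gives the bare CSV line. Assuming the JSON above is saved as sample.json (a name used here only for illustration):

jq -r '.a | @csv' sample.json
1,2,3

jq -r '[.b[].x] | @csv' sample.json
10,20,30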

Hope this helps!

Try the following:

cat input.json |
    jq '.aggregations.overall.buckets[0].agg_per_name.buckets[] |
    .key + ", " + (.totalMaxUptime_perDays.buckets[] |
    .key_as_string + ", " + (.totalMaxUptime.value | tostring))' | column -t -s,
