Thursday 15 January 2015

json - How to iterate over a directory of files and save out to new files with jq and shell scripting?


I am using jq to pull some specific data out of some JSON files and save it into another JSON file, e.g. transforming:

  cat data1.json | ./jq '[.["Message"][] | {to: .to, from: .from, body: .body, direction: .direction, date_sent: .date_sent}]' > results1.json

I have 50 JSON files in a directory that I need to do this to. How can I write a bit of shell script that will iterate over all 50 files, apply that function, and save out 50 scrubbed JSON files?

I'm thinking of something along these lines, but need some guidance:

  for file in *.json; do ./jq '[.["Message"][] | {to: .to, from: .from, body: .body, direction: .direction, date_sent: .date_sent}]' "$file" > "$newfile.json"; done

Thank you!

I'm not familiar with jq, so there may be a way to process several files in a single invocation; but failing that, something like this will run it once per file:

  #!/bin/bash
  for file in *.json; do ./jq '[.["Message"...' < "$file" > "$file.scrubbed"; done

You don't need cat to redirect input from a file; just use < instead.
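Filled in, the loop would use the full filter from the question. Since ./jq may not be on hand, the sketch below substitutes cat for the jq call so the iterate/redirect/save pattern itself can be run; the sample filenames are assumptions:

```shell
#!/bin/bash
# Sketch of the per-file loop. "cat" stands in for the question's filter;
# in the real script, replace it with:
#   ./jq '[.["Message"][] | {to: .to, from: .from, body: .body,
#          direction: .direction, date_sent: .date_sent}]'
tmp=$(mktemp -d) && cd "$tmp"

# Hypothetical sample inputs standing in for the 50 real files.
echo '{"Message": []}' > data1.json
echo '{"Message": []}' > data2.json

for file in *.json; do
  # Stand-in for: ./jq '...' < "$file" > "$file.scrubbed"
  cat < "$file" > "$file.scrubbed"
done

ls
```

The .scrubbed suffix keeps the outputs from matching the *.json glob, so re-running the loop won't reprocess its own output.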

If your input files have a consistent naming scheme like data<i>n</i>.json and you want the output files to be called, for example, results<i>n</i>.json, you could instead use "${file/data/results}" (although this may not be portable to some non-Bash shells). Watch out that you don't accidentally overwrite files whose names don't contain "data", though. Look up ${parameter/pattern/string} for details.
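A minimal sketch of ${parameter/pattern/string}, using a hypothetical input name:

```shell
#!/bin/bash
# ${parameter/pattern/string} replaces the first match of "pattern" in the
# value of $parameter. This is a Bash extension, not plain POSIX sh.
file="data7.json"                 # hypothetical input filename
result="${file/data/results}"     # first "data" becomes "results"
echo "$result"                    # prints: results7.json
```

Note that only the first match is replaced; ${file//data/results} (double slash) would replace every occurrence.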

