2016-01-27 26 views
-2

I have the following content in a file, and I want to filter out every "Executor Deserialize Time" and add up all of the values to get a final total. How can I do that with a Bash script that reads the file and sums the contents?

{"Event":"SparkListenerTaskEnd","Stage ID":0,"Stage Attempt ID":0,"Task Type":"ShuffleMapTask","Task End Reason":{"Reason":"Success"},"Task Info":{"Task ID":29,"Index":29,"Attempt":0,"Launch Time":1453927221831,"Executor ID":"1","Host":"172.17.0.226","Locality":"ANY","Speculative":false,"Getting Result Time":0,"Finish Time":1453927230401,"Failed":false,"Accumulables":[]},"Task Metrics":{"Host Name":"172.17.0.226","Executor Deserialize Time":9,"Executor Run Time":8550,"Result Size":2258,"JVM GC Time":18,"Result Serialization Time":0,"Memory Bytes Spilled":0,"Disk Bytes Spilled":0,"Shuffle Write Metrics":{"Shuffle Bytes Written":0,"Shuffle Write Time":4425,"Shuffle Records Written":0},"Input Metrics":{"Data Read Method":"Hadoop","Bytes Read":134283264,"Records Read":100890}}} 
{"Event":"SparkListenerTaskEnd","Stage ID":0,"Stage Attempt ID":0,"Task Type":"ShuffleMapTask","Task End Reason":{"Reason":"Success"},"Task Info":{"Task ID":30,"Index":30,"Attempt":0,"Launch Time":1453927222232,"Executor ID":"1","Host":"172.17.0.226","Locality":"ANY","Speculative":false,"Getting Result Time":0,"Finish Time":1453927230493,"Failed":false,"Accumulables":[]},"Task Metrics":{"Host Name":"172.17.0.226","Executor Deserialize Time":7,"Executor Run Time":8244,"Result Size":2258,"JVM GC Time":16,"Result Serialization Time":0,"Memory Bytes Spilled":0,"Disk Bytes Spilled":0,"Shuffle Write Metrics":{"Shuffle Bytes Written":0,"Shuffle Write Time":4190,"Shuffle Records Written":0},"Input Metrics":{"Data Read Method":"Hadoop","Bytes Read":134283264,"Records Read":100886}}} 
{"Event":"SparkListenerTaskEnd","Stage ID":0,"Stage Attempt ID":0,"Task Type":"ShuffleMapTask","Task End Reason":{"Reason":"Success"},"Task Info":{"Task ID":31,"Index":31,"Attempt":0,"Launch Time":1453927222796,"Executor ID":"1","Host":"172.17.0.226","Locality":"ANY","Speculative":false,"Getting Result Time":0,"Finish Time":1453927230638,"Failed":false,"Accumulables":[]},"Task Metrics":{"Host Name":"172.17.0.226","Executor Deserialize Time":5,"Executor Run Time":7826,"Result Size":2258,"JVM GC Time":18,"Result Serialization Time":0,"Memory Bytes Spilled":0,"Disk Bytes Spilled":0,"Shuffle Write Metrics":{"Shuffle Bytes Written":0,"Shuffle Write Time":3958,"Shuffle Records Written":0},"Input Metrics":{"Data Read Method":"Hadoop","Bytes Read":134283264,"Records Read":101004}}} 
+1

**Flexible, lightweight JSON processor:** [jq](https://stedolan.github.io/jq/) is a lightweight and flexible command-line JSON processor. **Querying JSON data:** [Jshon](http://kmkeen.com/jshon/) parses, reads and creates JSON. It is designed to be as usable as possible from within the shell, and replaces fragile ad-hoc parsers made from grep/sed/awk, as well as heavyweight one-line parsers made from perl/python. –

Answers

0
grep -P -o "Executor Deserialize Time.:[0-9]+" file.txt | 
    cut -d: -f2 | awk '{ sum+=$1} END {print sum}' 

The grep bit matches the required field (the `.` in the pattern matches the closing quote after the field name) on each line.
The cut splits on : and grabs just the number.
awk then sums all of the values.

+0

You basically *never* need to pipe 'grep' or 'cut' into 'awk'; awk can do both of those jobs itself. –
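To illustrate that comment, here is a minimal awk-only sketch of the same pipeline (match() with RSTART/RLENGTH is standard POSIX awk; the field name and file name are taken from the question):

```shell
# Single awk program replacing the grep | cut | awk pipeline above:
# match() locates the field, substr() extracts it, split() isolates the number.
awk 'match($0, /"Executor Deserialize Time":[0-9]+/) {
    split(substr($0, RSTART, RLENGTH), a, ":")
    sum += a[2]
} END { print sum }' file.txt
```

For the three sample lines in the question this prints 21 (9 + 7 + 5).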

+0

What would the script be if, in the example above, I wanted to add up "Executor Deserialize Time" only for "Stage ID" 0? There are other Stage IDs as well, e.g. 1, 2, 5, 8. – user3180835

+0

If you're going to do that, you may want to consider actually parsing the JSON. I don't have experience handling JSON from bash; I would use python if possible. – dan08
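Following that suggestion, a minimal Python sketch that parses each line as JSON (the function name and the optional stage filter are illustrative additions, not part of the original answers; the key names come from the sample data):

```python
import json

def sum_deserialize_time(path, stage=None):
    """Sum "Executor Deserialize Time" over all events in a JSON-lines file.

    If stage is given, only events whose top-level "Stage ID" matches it
    are counted.
    """
    total = 0
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            event = json.loads(line)
            if stage is not None and event.get("Stage ID") != stage:
                continue
            total += event["Task Metrics"]["Executor Deserialize Time"]
    return total
```

This also answers the follow-up question about filtering on Stage ID, without any fragile text matching.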

0
awk -v RS=, '/^"Executor Deserialize Time":/ {split($0,a,":"); tot+=a[2]} END{print tot}' file 
  • Set RS (the record separator) to ,.
  • Match records that begin with the desired field name.
  • Split the current record on :.
  • Add the second split field to our running total.
  • Print the total in the END block.

Or the same idea, but setting FS (the field separator) instead:

awk -F , '{for (i=1;i<=NF;i++) {if ($i ~ /^"Executor Deserialize Time":/) {split($i,a,":"); tot+=a[2]}}} END{print tot}' file 
  • Set FS to ,.
  • Loop over each field from 1 to NF.
  • Match the desired field.
  • Split the current field on :.
  • Add the second split field to our running total.
  • Print the total in the END block.

If you only want to sum it for a given value of Stage ID, then you can use this:

awk -v stage=0 -F , '{ 
    ds=0; val=0 
    for (i=1;i<=NF;i++) { 
     split($i,a,":") 

     if (a[1] == "\"Executor Deserialize Time\"") { 
      val=a[2] 
     } 

     if ((a[1] == "\"Stage ID\"") && (a[2] == stage)) { 
      ds++ 
     } 

     if (ds && val) { 
      tot+=val 
      next 
     } 
    } 
} 
END{print tot}' file 

This keeps track, for each line, of whether we have seen both of the required values, and only adds to the total once we have. It uses the stage variable for the Stage ID so that you can control this parameter from outside the awk script (the -v stage=0 argument).
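As a usage sketch (assuming the program above is saved to a file, here hypothetically named sum_stage.awk, and the sample events are in file):

```shell
# Sum "Executor Deserialize Time" for Stage ID 0 only. Every task in the
# sample data is in stage 0, so this prints the full total, 21 (9 + 7 + 5).
awk -v stage=0 -F , -f sum_stage.awk file

# Note: for a stage with no matching tasks, tot is never set and the END
# block prints an empty line; add 'BEGIN { tot = 0 }' for an explicit 0.
```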

+0

What would the script be if, in the example above, I wanted to add up "Executor Deserialize Time" for "Stage ID":0? – user3180835