My file looks like this:
[0.00137532,[0,13,19,16,18,15,19,16,11,15,12,12,13,14,0,11,17,18,14,17],[0,0,0,0,0,0,0,0,0,0,0,0.0189924,0.0871235,0.179813,0.307779,0$
SITE: 0 0.000853196055 0.0694597696 0000000001
[0.00111747753,[0,13,18,16,19,15,18,19,11,15,12,12,13,14,0,11,17,14,16,17],[0,0,0,0,0,0,0,0,0,0,0,0.018992411,0.0871235198,0.179812517$
[0.000200093646,[0,13,19,17,18,16,19,15,11,16,12,12,13,14,15,0,11,18,14,17],[0,0,0,0,0,0,0,0,0,0,0,0.018992411,0.0871235198,0.17981251$
[1.9658373e-05,[0,18,14,11,12,19,14,15,16,19,17,12,13,0,11,13,17,18,15,16],[0,0,0,0,0,0,0,0,0,0,0,0.106437198,0.163778333,0.758483056,$
[0.000282736441,[0,18,15,11,13,19,15,12,16,19,17,12,13,14,0,11,17,18,14,16],[0,0,0,0,0,0,0,0,0,0,0,0.106437198,0.129806881,0.163778333$
[0.00111187732,[0,13,19,16,18,15,19,17,11,15,12,12,13,14,0,11,17,18,14,16],[0,0,0,0,0,0,0,0,0,0,0,0.018992411,0.0871235198,0.179812517$
SITE: 1 0.00363901565 0.820587534 1000100111
[0.000647295926,[0,13,19,16,18,15,19,17,11,15,12,12,13,14,0,11,17,18,14,16],[0,0,0,0,0,0,0,0,0,0,0,0.018992411,0.0871235198,0.17981251$
[0.000272141,[0,11,19,16,18,15,19,17,13,15,14,12,0,14,11,13,17,18,12,16],[0,0,0,0,0,0,0,0,0,0,0,0.687401201,0.989300937,0.018992411,0.$
[1.82208814e-05,[0,11,16,13,15,19,16,14,17,19,18,12,0,14,15,11,13,18,12,17],[0,0,0,0,0,0,0,0,0,0,0,0.569817481,0.687401201,0.106437198$
[0.000160613913,[0,11,19,16,18,15,19,17,13,15,14,12,0,14,11,13,17,18,12,16],[0,0,0,0,0,0,0,0,0,0,0,0.687401201,1.05012976,0.018992411,$
SITE: 2 0.00509457547 0.0291019941 1000000000
How can I get a new file in which the lines starting with SITE are excluded? (The whitespace does not have to be there.)
grep seems like the way to go in this case. – Byte Commander Jun 21 '16 at 13:27
sed is efficient too in this case. – heemayl Jun 21 '16 at 13:28
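Following both suggestions, a minimal sketch of each one-liner; the file names data.txt and filtered.txt are placeholders for your actual input and output files, and the optional [[:space:]]* prefix just tolerates leading whitespace before SITE in case it appears:

# grep: -v inverts the match, so only lines NOT starting with SITE are printed.
grep -v '^[[:space:]]*SITE' data.txt > filtered.txt

# sed: delete (d) every line that begins with optional whitespace followed by SITE.
sed '/^[[:space:]]*SITE/d' data.txt > filtered.txt

If the SITE lines always begin at column one, as in the sample above, the simpler pattern '^SITE:' works with either command.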