Is there a method to print multi-line output (single output) on the same line?
For example, if the output is:
abc
def
qwerty
Is it possible to print:
abcdefqwerty
You can remove all occurrences of characters from a given set with tr -d. To remove newline characters use:
tr -d '\n'
As always you can use input and output redirection and pipes to read from or write to files and other processes.
If you want to keep the final newline you can simply add it back with echo or printf '\n', e.g.:
cat file1 file2... | { tr -d '\n'; echo; } > output.txt
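A quick sanity check of that pipeline (the file names a.txt and b.txt are made up for this demo):

```shell
# Create two sample files (hypothetical names, just for the demo).
printf 'abc\ndef\n' > a.txt
printf 'qwerty\n' > b.txt

# Join all lines, then add back a single trailing newline.
cat a.txt b.txt | { tr -d '\n'; echo; }
# prints: abcdefqwerty
```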
Many ways. To illustrate, I have saved your example in a file called file:
$ cat file | tr -d '\n'
abcdefqwerty$
$ cat file | perl -pe 's/\n//'
abcdefqwerty$
That removes all newline characters, including the last one. You might instead want to do:
$ printf "%s\n" "$(cat file | perl -pe 's/\n//')"
abcdefqwerty
$ printf "%s\n" "$(cat file | tr -d '\n')"
abcdefqwerty
The printf + cat + perl combo is much better handled via perl -pe 's/\n//; END{printf "\n"}' input.txt. Single process, no command substitution and messing with quotes, and no UUOC. Perhaps.
– Sergiy Kolodyazhnyy Jun 01 '18 at 00:10
You can pipe your multiline output through awk:
awk '{printf "%s",$0} END {print ""}'
or use sed:
sed ':a;N;$!ba;s/\n//g'
$ echo -e "1\n2\n3\n4" | awk '{printf "%s",$0} END {print ""}'
1234
$ echo -e "1\n2\n3\n4" | sed ':a;N;$!ba;s/\n//g'
1234
Further reading: remove line break using AWK · serverfault.SE
This answer explains the behavior you are trying to achieve: Why does bash remove \n in $(cat file)?
If you type cat myfile.txt you will see:
abc
def
ghi
But if you type echo $(cat myfile.txt) you will see:
abc def ghi
Note that this method inserts a space where each newline used to be. This makes the output easier to read, but it doesn't strictly match what you asked for.
If you want to use Perl for this, you can use its chomp function:
perl -pe chomp
You can pass one or more filenames as subsequent arguments.
perl -pe chomp file1 file2 file3
In some cases you might not want to do that, but you can pipe or redirect to that command instead. (These abilities--and that caveat--all apply to any other perl -p or perl -n based solution, too.)
So, if you want to process output from running some-command:
some-command | perl -pe chomp
This is potentially faster than the Perl command in terdon's answer. Newline characters (represented by \n) only appear at the end of lines in this situation--because they are what Perl is using to decide where each line ends. s/\n// searches through the entire line for newlines, while chomp just removes the one at the end (if any, as the very last line might or might not have one).
Performance might not really be the main reason to prefer chomp. The perl command in terdon's answer would only be slightly slower unless you're processing a huge file with very long lines. And if you need speed, David Foerster's way is still probably faster. chomp is, however, exactly the right tool for this if you are using Perl, and I think the intent of chomp is clearer than that of s/\n//. Ultimately there's probably no objective answer as to which is better.
However, I do recommend against the use of command substitution ($( )) just to ensure a trailing newline gets printed. Unless the text you're processing is short, that's likely to be quite slow--it has to collect all the text at once and then print it--and it makes your command harder to understand. Instead you can do as David Foerster suggested and run echo or printf '\n' afterwards.
perl -pe chomp; echo
If you're piping from or (as David Foerster shows) redirecting from that command, then you can enclose it in { ;} so that the output of both commands is taken together. If you're just printing it to the terminal, then you can run it exactly as shown above. This works even if you're piping or redirecting to the command, because echo/printf '\n' does not actually read any input. That is, this works:
some-command | perl -pe chomp; echo
While here you cannot simply remove the { ;}:
{ perl -pe chomp; echo; } | some-other-command
Or, if you prefer, you can make perl print the final newline itself. Like enclosing the perl and echo commands in { ;}, this approach works even if you're piping or redirecting from it (and, like all approaches, this works when you're piping to it, too):
perl -wpe 'chomp; END { print "\n" }'
Note that the command in terdon's answer works fine with these methods of printing the final newline, too. For example:
perl -pe 's/\n//'; echo
perl -pe 's/\n//; END { print "\n" }'
perl -wpe 'chomp; END { print "\n" }' is the same process, a minor advantage over adding a trailing echo
– Sergiy Kolodyazhnyy Mar 17 '18 at 18:18
Using a shell-only approach:
while IFS= read -r line || [ -n "$line" ]; do printf "%s" "$line"; done < inputfile.txt
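For example (inputfile.txt is a hypothetical name; the sample deliberately omits the final newline to exercise the || [ -n "$line" ] branch, which keeps the last line even when it isn't newline-terminated):

```shell
# Sample input with no trailing newline on the last line.
printf 'abc\ndef\nqwerty' > inputfile.txt

# Print each line with no separator between them.
while IFS= read -r line || [ -n "$line" ]; do printf "%s" "$line"; done < inputfile.txt
# prints: abcdefqwerty (no trailing newline)
```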
If you are using Bash, then you can do it without any external tool.
x=($(echo line1; echo line2))
echo "${x[@]}"
Result:
line1 line2
The first command turns the multiline output into an array; the second command expands the array into multiple arguments on one line.
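If you want the pieces joined with no separator at all, one Bash-only variation (using the same made-up line1/line2 input) is to let printf reuse its format string once per array element. Note that the unquoted $( ) is subject to globbing as well as word splitting, so this is only safe for text without wildcard characters:

```shell
# Word splitting turns the two output lines into two array elements...
x=($(echo line1; echo line2))
# ...and printf '%s' applies the format once per element, concatenating them.
printf '%s' "${x[@]}"; echo
# prints: line1line2
```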
tr '\n' ' '
– element11 Oct 12 '20 at 16:44