backtick usage

Diputs

So you can use backticks to put the result of a command into a variable. Example:

my_machine=`uname -n`

Many years before now (2023), I learned that you can do the same with another syntax, this one being:

my_machine=$(uname -n)

I would prefer the second one because it doesn't use any unusual characters ... like, the backtick.
But I remember being told that backtick usage should be abandoned.
But, still in 2023, I see them being used.
Question is: should they be abandoned? Let's say I run Redhat 8 with the latest Kernel version.
Also, if yes, why?

I don't like backticks, because basically they have no use ... the ONLY reason to use them, well, it would be in this exact context. In my mind, they don't belong on a keyboard.
 


I still use them from time to time. They come in handy if I need to quote a quote.
I've never heard they should be abandoned. A quick google search doesn't come up with anything like that.

I seriously doubt the kernel version matters when using the back-tick. But if I wanted to run the latest kernel,
redhat 8 wouldn't be my first choice.

On the other hand, I have heard most programmers recommend using the $( ) method when putting the
results of a shell command into a variable. That keeps you from having to use back-tick quotes, but I think
it's more a syntax thing than a "no back-ticks" thing.
 
In shell-scripting, backticks are deprecated. They have been for a while. They're not deprecated in the sense that they're going to be removed any time soon. They're deprecated in the sense that the newer syntax has advantages over the old back-tick syntax and that the newer syntax should be preferred. Back-ticks are only really kept in bash for backwards compatibility with older, pre-existing scripts. But the current POSIX standard strongly recommends using the newer $() substitution syntax.

The main advantage of using the newer syntax is that it's much easier to create nested substitutions.
e.g.
Bash:
someVar=$(/path/to/script -infile "$(ls -1tr 202312[0-9][0-9]*.txt | tail -n 1)" -print0)
Above is a bit of a contrived example using two nested command substitutions.
I couldn't think of an actual practical/useful example offhand.

The nested substitution above uses ls to list all text files that start with a timestamp in December 2023 (e.g. 20231201-somefile.txt, 20231203-anotherfile.txt) and pipes the result to tail, to display the most recent file.
The output of that command is substituted in as the value for the -infile parameter for a script we're running. And the output of the script is substituted and assigned to a variable called someVar.

To do that purely with back-ticks you have to escape the inner back-ticks, to avoid closing the main, outer back-ticks.
Bash:
someVar=`/path/to/script -infile \`ls -1tr 202312[0-9][0-9]*.txt | tail -n 1\` -print`
In the above, we've escaped the back-ticks for the nested substitution.
And already, we can see that it's not entirely clear where each substitution begins and ends.
Now let's say we want to add another parameter to the script using another substitution. We could escape the back-ticks again to create a substitution for that parameter; that wouldn't be a problem.
For example, let's add a --compare option to the main script and we'll use the earliest time-stamped file from December 2023 as a parameter:
Bash:
someVar=`/path/to/script --infile \`ls -1tr 202312[0-9][0-9]*.txt | tail -n 1\` --compare \`ls -1tr 202312[0-9][0-9].txt | head -n 1\` -print`
OK - now we've got two substitutions nested inside the main substitution. But from looking at that line, it's really unclear what it's doing.
Whereas if we use the $() syntax:
Bash:
someVar=$(/path/to/script --infile "$(ls -1tr 202312[0-9][0-9]*.txt | tail -n 1)" --compare "$(ls -1tr 202312[0-9][0-9].txt | head -n 1)" -print)
Now we can see exactly where each substitution begins and ends.

What happens if we now want to nest a substitution inside an already nested substitution, using back-ticks?
For example, IF the ls command took another parameter whose value came from another script. (NOTE: Obviously ls DOESN'T have anything like that, but for this example, let's just imagine that it does.)
Let's try to nest the result of another command as a fictional parameter to the ls command:
e.g. --not-real $(/path/to/anotherscript -a)
Using the $() syntax:
Bash:
someVar=$(/path/to/script --infile "$(ls -1tr 202312[0-9][0-9]*.txt --not-real $(/path/to/anotherscript -a) | tail -n 1)" --compare "$(ls -1tr 202312[0-9][0-9].txt | head -n 1)" -print)
With the nested $() substitutions, that's really easy to do.
Obviously, the ls command would fail IRL because it doesn't have a --not-real parameter, but for this example, we'll just imagine that it is a valid parameter and it does work.

But what about with back-ticks?
With back-ticks it looks like this:
Bash:
someVar=`/path/to/script --infile \`ls -1tr 202312[0-9][0-9]*.txt --not-real \`/path/to/anotherscript -a\` | tail -n 1\` --compare \`ls -1tr 202312[0-9][0-9].txt | head -n 1\` -print`
Now we have a real mess AND our substitutions will fail because the escaped back-tick after the fictional --not-real parameter (which we intended to be the opening back-tick for our new nested substitution) is actually going to be interpreted as the closing back-tick for the ls command that we're attempting to do a nested substitution in. So now we've broken our command.

With back-ticks, deeper nesting quickly becomes impractical: every additional level needs yet another round of escaping, so the more substitutions you have, the more ugly and unreadable the line becomes and the more likely it is that you'll make a mistake.

And that's a big part of the reason the $() syntax was eventually introduced: if you ever find yourself needing multi-layered nested substitutions, back-ticks simply aren't up to the job.


When it comes to scripting - if you only have a single command substitution using a simple command, then you could get away with using back-ticks. Nobody's going to stop you. They are still supported, albeit in more of a legacy way.

And you could potentially mix and match back-ticks and $() substitutions to a certain extent, I suppose?!
e.g.
Bash:
someVar=`/path/to/script --param "$(/path/to/anotherscript -a)"`
But again, the newer $() syntax is more readable, it allows you to create much more complex and expressive nested substitutions (if you need to) and it's a lot less error prone than using back-ticks.
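One more point in favour of the newer syntax: a new quoting context begins inside $( ), so double quotes can appear inside an outer double-quoted string without any escaping. A minimal sketch, reusing the (hypothetical) timestamped filenames from the examples above:
Bash:
# The double quotes around "$dir" do not terminate the outer double-quoted
# string, because quoting starts fresh inside $( ):
dir=/path/to/reports
echo "Newest December report: $(ls -1tr "$dir"/202312[0-9][0-9]*.txt | tail -n 1)"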

I hope this helps!
 
Thanks for that explanation, JasKinasis. I do a fair amount of scripting but have never really got down and dirty with the newer syntax. While I don`t consider the backtick to be any more of an unusual character than the parentheses, I also have been known to jump through some hoops to avoid nested substitutions. I guess it's time for a bit of learning and an update to the ole scripting skills!

* I did that on purpose - I usually don't type "don`t". ;)
 
I still use backticks occasionally, mainly out of sheer habit from all the years of using them, but I agree. The new syntax is superior and should be used due to its more robust nature when building these types of operations.

Though a quick backtick to variablize something is quick and simple. If you're going to write something for long-term use, do it right. :)
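To put the quick-and-dirty versus long-term distinction in code (just a sketch, with uname -n standing in for any command):
Bash:
# Fine for a quick one-off at an interactive prompt:
host=`uname -n`

# The more robust form for a script that will live a long time:
host="$(uname -n)"
printf 'Running on %s\n' "$host"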
 
Thanks Jas

I think that sums it all up quite nicely: it's a more complex and less readable method (using back-ticks), which in itself is reason enough to abandon it, in my book. The fact it IS kept mainly for backwards compatibility says a lot as well.

What I never understood is that they just use a DIFFERENT kind of quote, which I suspect was meant to avoid confusion with NORMAL single quotes (other than that they look alike, which makes them harder for humans to read) ...

... but then you STILL need to escape any quotes used by the command inside. It feels like they gave the backtick all that complexity ... but without any real advantage.

Granted, I would often just

var=`uname -n`

But still, not all commands will be that easy.


I also hate any command with escape characters, no matter what language. They usually destroy readability. I hate it when you need to debug the command inside: first you have to strip all the escapes, get the command running on its own, and then spend a lot more time reconstructing it in the exact syntax the surrounding context requires.
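One way to sidestep that strip-and-reconstruct cycle (a sketch, reusing the hypothetical /path/to/script from the earlier examples): put the inner command in a small shell function, debug it on its own, and then substitute the function with no escaping at all.
Bash:
# Hypothetical helper: the inner command lives in a function so it can be
# run and tested by itself.
newest_report() {
    ls -1tr 202312[0-9][0-9]*.txt | tail -n 1
}

# Debug the inner command directly:
newest_report

# Then use it in the substitution - no escape characters anywhere:
someVar=$(/path/to/script --infile "$(newest_report)" -print)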
 
That begs the question, what is ...

Almost any other distro. You can enable the elrepo ml-kernel repos and that will let you run a fairly
late kernel. (I think the current version is 6.6.7.) Redhat 8 is still on 4.18.0. Redhat 9 gets you up
to 5.14.0 (which is still a little aged). However, even if you enable the elrepo to get a newer kernel, you
still won't get all the latest apps, libraries, and configs to go with it. If you want to run the latest and greatest
you need to run a rolling-release distro. Arch and Fedora are two that stay near the latest most of the time.
If you're already used to using an RPM-based distro like Redhat, which uses yum/dnf, then Fedora is
very similar. The commands are virtually the same. I believe Ubuntu is still on 6.2, but I understand
there are some 3rd-party repos that have newer kernels.
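To make the elrepo route above a bit more concrete (a sketch; repo and package names as published by ELRepo, assuming an EL 8/9 host that already has the elrepo-release package installed):
Code:
# The mainline kernel lives in the elrepo-kernel repository:
dnf --enablerepo=elrepo-kernel install kernel-ml
# Reboot and pick the new kernel from the boot menu (or set it as the
# default with grubby) to start using it.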

If none of those are "new enough" for you, you can always roll your own from https://kernel.org/
 
Nah, we run stable Kernels instead; Redhat is just fine. Besides, those distros are not supported by the vendor, no matter which distro.

To give you an idea, this software supports Unix as an OS as well.
 
Know what, tell me the interesting difference between Kernel 5 and the latest stable release.
Try to sell it.
 
Rather than a "try to sell it" situation, the later kernel releases usually incorporate new features which may or may not be relevant to users. For example, having a look at a site like:
provides some details about what's being included in the latest release.

Some examples: if one is particularly concerned about "extra control protection" in the CPU, then this latest kernel would be an appropriate choice (see 1.2 at the above link); if one wished to embrace the latest ext4 improvements, then again this latest kernel has that (see 3.4 at the above link).

The way I see it, it's a "horses for courses" situation.
 

I don't know that expression - I'm not a native English speaker, I'll need to look it up

You see, I'm a practical guy, so when I think about Kernel upgrades, I think: OK, what's wrong with the one I'm using?
Can't think of many things really ... but, maybe that one super annoying thing ...

Argument List Too Long

Yes !
That's a problem I'd like to see solved.
Basically, the filesystem doesn't have enough entries to store info on all the files that actually need to be used on a disk, and various commands (LS, CP, ...) fail because of that. Oh yes, and they also become extremely slow ... before failing.

So, is that solved in the latest Kernel ?
 
Certainly I take your point about not needing to upgrade a kernel if there are no problems with the current one in use. If the current kernel satisfies the needs of the system, then the additional features of a newer kernel may make no difference in practical terms.

One can argue that there's an overemphasis on "keeping up with the latest", because many users' needs are relatively modest and conservative and simply don't benefit in practical terms from constant ongoing developments.

Occasionally in Linux a more serious problem arises, such as the Meltdown and Spectre exploits, which took advantage of vulnerabilities in the CPU that enabled some programs to "steal" data from other programs, opening a route for a malicious program to take control of an operating system. The vulnerability was patched, and nearly all, if not all, distros issued new kernels with mitigations, even though there may not have been any practical consequences.

The user can check if their kernel is patched, that is, has the mitigations with the following command:
Code:
[tom@min ~]$ lscpu
<snip>
Vulnerabilities:      
  Gather data sampling:  Not affected
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec rstack overflow:  Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
  Srbds:                 Not affected
  Tsx async abort:       Not affected
This machine has a kernel mitigated against those vulnerabilities.

On the matter you raise about "the filesystem has not enough entries to store info on the files actually wanting to be used on a disk", I'm not entirely certain of your experience. In the first instance though, the question of the number of files an operating system can have is dependent on the number of inodes that have been set by default at installation. This is not so much a kernel matter, but one of configuration set in the installation software including the kernel. To see how many inodes are available on a system one can run the following:
Code:
[tom@min ~]$ df -i
Filesystem       Inodes  IUsed    IFree IUse% Mounted on
udev            1993971    461  1993510    1% /dev
tmpfs           2000091    879  1999212    1% /run
/dev/nvme0n1p3 29523968 330850 29193118    2% /
tmpfs           2000091      1  2000090    1% /dev/shm
tmpfs           2000091      5  2000086    1% /run/lock
efivarfs              0      0        0     - /sys/firmware/efi/efivars
/dev/nvme0n1p1        0      0        0     - /boot/efi
tmpfs            400018     82   399936    1% /run/user/1000

It is clear from the output that very few have been used, that the number available is huge, and that the likelihood of running out of them is remote at this point. You can check to see whether this is implicated in the problem you refer to with the ls and cp commands, or whether it is something else as yet unknown. That's all that comes to mind about what you have written though.
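If inode exhaustion ever is the suspect, one quick way to see which directory tree is consuming them is simply to count entries per directory (a rough sketch; /data is just a hypothetical mount point, substitute your own):
Code:
# Count entries (files, directories, links) under each top-level directory
# of a hypothetical /data mount, largest first:
for d in /data/*/; do
    printf '%10d %s\n' "$(find "$d" -xdev | wc -l)" "$d"
done | sort -rn | head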
 

I've seen the slow performance and the erroring out (Argument List Too Long) on many different systems. Not really a remote issue. A remote issue is something you'd see never or only once, but you know it exists. For me, that is not the case here.

Exhausting the inodes is rarer, but I've seen it. Live production system ...
 
On the matter you raise about "the filesystem has not enough entries to store info on the files actually wanting to be used on a disk", I'm not entirely certain of your experience. In the first instance though, the question of the number of files an operating system can have is dependent on the number of inodes that have been set by default at installation. This is not so much a kernel matter, but one of configuration set in the installation software including the kernel.

Oh yes, it MAY be a parameter set at installation ...
And like every parameter, there are defaults, and some defaults are too low.
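For what it's worth, on ext4 the number of inodes is fixed when the filesystem is created, so a filesystem meant for huge numbers of small files can be given more of them up front (a sketch; /dev/sdb1 is a hypothetical device and the values are only examples):
Bash:
# ext4 typically defaults to one inode per 16 KiB of space; raise the density
# (or request an absolute count) at mkfs time:
mkfs.ext4 -i 4096 /dev/sdb1         # one inode per 4 KiB of space
mkfs.ext4 -N 50000000 /dev/sdb1     # or ask for ~50 million inodes outright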
 
I've seen the slow performance and the erroring out on many different systems. Not really a remote issue. A remote issue is something you'd see never or only once, but you know it exists. For me, that is not the case here.

Exhausting the inodes is rarer, but I've seen it. Live production system ...
What comes to mind is first checking the time of the ls and cp commands. For example, run the ls command in a directory with few files, and then in one with many files, timing each. For a comparison, you can compare the results of the following with results on your own machine:

For a directory with a few files:
Code:
[tom@min ~]$ time ls
fff        file4  filea  filee         gfile1     terminology
fieldfile  file5  fileb  filef         hello      todayfile
file2      file6  filec  filename.txt  newfile    vermagic.h
file3      file7  filed  filenonos     newfile23

real    0m0.003s
user    0m0.003s
sys     0m0.001s

For a directory with a lot of files:
Code:
[tom@min ~]$ time ls /usr/bin
<snip>
real    0m0.032s
user    0m0.006s
sys     0m0.006s

These results are just thousandths of seconds, and are quite normal in my experience. You can do the same sort of tests for the cp command.
 
That's exactly how I measured the gravity of the issue.
In the examples you're showing there's just nothing wrong,
but I have some examples that are literally thousands of times slower than the ones you show.

Note that the whole thing consists of 3 issues:
1. speed (which is the one we examine by using the TIME command)
2. the dreaded "Argument List Too Long", which is a complete failure of the command in question, be it LS or CP or MV or whatever
3. Hitting the maximum number of inodes

Issues #1 and #2 occur before hitting #3

I'll look up some examples of extremely slow occurrences;
I also used to do this:

time ls -l | wc -l
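As a side note on the speed aspect: in a huge directory, most of the time taken by ls -l goes into stat-ing every entry and sorting the output, so a count that skips both is usually far quicker (a sketch; run it in whichever directory is misbehaving):
Bash:
time ls -l | wc -l     # stats and sorts every entry - slow in a huge directory
time ls -f | wc -l     # -f disables sorting and, without -l, nothing is stat-ed
# note: -f implies -a, so . .. and hidden files make this count slightly higher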
 
Another way of checking what's happening is to use the strace command to inspect the system calls that the command in question is making. strace can record the time each system call takes, so the user can look through its output and see exactly which call, if any, is taking longer than the others.

In your case, according to your experience, one would expect that such a time delay would be identifiable in one or more system calls.

For example, to run the ls command and have the system calls logged in a file called "logfile", run the following command:
Code:
strace -T -o logfile ls
The -T option is the one instructing strace to record the times. Included below is an image of a section of the output of the command on this machine, because the output is in color. The times for each of the system calls are shown in green within the pointy brackets at the end of each line, and in this case they are ten-thousandths of a second or less.

[Image: logfile.jpg - a section of the strace log, with each system call's elapsed time shown in green angle brackets at the end of the line]


Since all the times are relatively close to each other in this case, there is no problem apparent.
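A related shortcut, if reading the full log gets tedious: strace's -c option prints a summary table with the call count and cumulative time per system call (a sketch; /path/to/big/dir is a placeholder for the directory being investigated):
Code:
# -c: print a per-syscall summary (calls, errors, time) instead of each call
strace -c ls /path/to/big/dir > /dev/null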
 
Yeah, but you see, I'm a Linux user, I manage the software that runs in the Linux box
I have Root access,
but I'm not the Linux admin

If an LS command takes 1 hour instead of 7 milliseconds, there is no discussion anymore about whether or not there is an issue.
The next thing is that I wait on the OS admin to fix this.
But the OS admin has to wait for the Kernel and OS features that would allow the problem to be fixed. Example: https://superuser.com/questions/1345268/ls-command-very-slow - the first "solution" even says "the solution is to have fewer files". It's mind-boggling that people would actually come to this "conclusion". Maybe shutting down your server can also solve the issue?

And that is where the issue is ... it's still not fixed, is it?

In the meantime, I'm waiting. Google is full of reports, so if the Linux guys need some examples, Google to the rescue.
And, I'm waiting.

And of course, the usual: "We're not aware of this issue, never heard of it, is it even possible?"
The issue has existed for at least a decade, but very likely even longer. Here's one of many, from 2012: https://stackoverflow.com/questions/11289551/argument-list-too-long-error-for-rm-cp-mv-commands - it says it's a Kernel "limitation", but I don't get that; to me it's a bug.
It may actually exist on Unix as well.
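For anyone who lands here with the same error: as far as I understand it, the message comes from the kernel's execve() refusing a command line larger than ARG_MAX, which is what happens when a glob such as *.txt expands to an enormous argument list - a per-command limit rather than something tied to inode counts. A sketch of checking the limit and of one common workaround (the 202312*.txt pattern is just an example):
Bash:
# Show the kernel's limit on the combined size of argv + environment, in bytes:
getconf ARG_MAX

# A glob that expands past that limit fails with "Argument list too long":
#   rm 202312*.txt
# Letting find hand the names to rm in batches keeps each invocation under it:
find . -maxdepth 1 -name '202312*.txt' -exec rm -- {} +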
 