Does C++ compiler remove/optimize useless parentheses?

Will the code

int a = ((1 + 2) + 3); // Easy to read

run slower than

int a = 1 + 2 + 3; // (Barely) Not quite so easy to read

or are modern compilers clever enough to remove/optimize “useless” parentheses?

It might seem like a very tiny optimization concern, but choosing C++ over C#/Java/… is all about optimizations (IMHO).

22

The compiler does not actually ever insert or remove parentheses; it just creates a parse tree (in which no parentheses are present) corresponding to your expression, and in doing so it must respect the parentheses you wrote. If you fully parenthesise your expression then it will also be immediately clear to the human reader what that parse tree is; if you go to the extreme of putting in blatantly redundant parentheses as in int a = (((0))); then you will be causing some useless stress on the neurons of the reader while also wasting some cycles in the parser, without however changing the resulting parse tree (and therefore the generated code) the slightest bit.

If you don’t write any parentheses, then the parser must still do its job of creating a parse tree, and the rules for operator precedence and associativity tell it exactly which parse tree it must construct. You might consider those rules as telling the compiler which (implicit) parentheses it should insert into your code, although the parser doesn’t actually ever deal with parentheses in this case: it just has been constructed to produce the same parse tree as if parentheses were present in certain places. If you place parentheses in exactly those places, as in int a = (1+2)+3; (associativity of + is to the left) then the parser will arrive at the same result by a slightly different route.

If you put in different parentheses as in int a = 1+(2+3); then you are forcing a different parse tree, which will possibly cause different code to be generated (though maybe not, as the compiler may apply transformations after building the parse tree, as long as the effect of executing the resulting code would never be different for it). Supposing there is a difference in the resulting code, nothing in general can be said as to which is more efficient; the most important point is of course that most of the time the parse trees do not give mathematically equivalent expressions, so comparing their execution speed is beside the point: one should just write the expression that gives the proper result.
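For instance, since + associates to the left, the first three declarations below all produce the identical parse tree, while the last forces a different one:

int a = 1 + 2 + 3;      // parsed as ((1 + 2) + 3)
int b = (1 + 2) + 3;    // the same parse tree; the parentheses merely confirm it
int c = ((1 + 2) + 3);  // still the same tree; the outer pair is redundant
int d = 1 + (2 + 3);    // a different tree: the right-hand + is grouped first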

So the upshot is: use parentheses as needed for correctness, and as desired for readability; if redundant they have no effect at all on execution speed (and a negligible effect on compile time).

And none of this has anything to do with optimisation, which comes along well after the parse tree has been built, so it cannot know how the parse tree was constructed. This applies without change from the oldest and stupidest of compilers to the smartest and most modern ones. Only in an interpreted language (where “compile time” and “execution time” are coincident) could there possibly be a penalty for redundant parentheses, but even then I think most such languages are organised so that at least the parsing phase is done only once for each statement (storing some pre-parsed form of it for execution).

10

The parentheses are there solely for your benefit – not the compiler’s. The compiler will create the correct machine code to represent your statement.

FYI, the compiler is clever enough to optimise it away entirely if it can. In your examples, this would get turned into int a = 6; at compile time.
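You can see this for yourself (a sketch; the exact assembly varies by compiler, target and flags) by compiling the two spellings with e.g. g++ -S and comparing the output:

int with_parens()    { int a = ((1 + 2) + 3); return a; }
int without_parens() { int a = 1 + 2 + 3;     return a; }

Both functions typically compile to identical code that simply returns the pre-computed constant 6.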

8

The answer to the question you actually asked is no, but the answer to the question you meant to ask is yes. Adding parentheses does not slow down the code.

You asked a question about optimisation, but parentheses have nothing to do with optimisation. The compiler applies a variety of optimisation techniques with the intention of improving either the size or speed of the generated code (sometimes both). For example, it might take the expression A^2 (A squared) and replace it by A x A (A multiplied by itself) if that is faster. The answer here is no, the compiler does nothing different in its optimisation phase depending on whether parentheses are present or not.

I think you meant to ask whether the compiler still generates the same code if you add unnecessary parentheses to an expression, in places that you think might improve readability. In other words, if you add parentheses, is the compiler smart enough to take them out again rather than somehow generating poorer code? The answer is yes, always.

Let me say that carefully. If you add parentheses to an expression that are strictly unnecessary (have no effect whatever on the meaning or order of evaluation of an expression) the compiler will silently discard them and generate the same code.

However, there exist certain expressions where apparently unnecessary parentheses will actually change the order of evaluation of an expression and in that case the compiler will generate code to put into effect what you actually wrote, which might be different from what you intended. Here is an example. Don’t do this!

int a = 2000000001, b = 2000000002, c = 2000000003;
int d = -a + b + c;    // ok
int d = (-a + b) + c;  // ok, same code
int d = (-a + b + c);  // ok, same code
int d = ((((-a + b)) + c));  // ok, same code
int d = -a + (b + c);  // undefined behaviour (b + c overflows a 32-bit int), different code

So add parentheses if you want to, but make sure they really are unnecessary!

I never do. There is a risk of error for no real benefit.


Footnote: undefined behaviour happens when a signed integer expression evaluates to a value that is outside the range that its type can represent, in this case -2,147,483,648 to +2,147,483,647 for a typical 32-bit int. This is a complex topic, out of scope for this answer.

9

Brackets are only there for you, to override operator precedence.
Once compiled, brackets no longer exist because the run-time doesn’t need them.
The compilation process removes all of the brackets, spaces and other syntactic sugar that you and I need, and changes all of the operators into something [far] simpler for the computer to execute.

So, where you and I might see …

  • “int a = ((1 + 2) + 3);”

… a compiler might emit something more like this:

  • Char[1]::”a”
  • Int32::DeclareStackVariable()
  • Int32::0x00000001
  • Int32::0x00000002
  • Int32::Add()
  • Int32::0x00000003
  • Int32::Add()
  • Int32::AssignToVariable()
  • void::DiscardResult()

The program is executed by starting at the beginning and executing each instruction in turn.
Operator precedence is now “first-come, first-served”.
Everything is strongly typed, because the compiler worked all that out while it was tearing the original syntax apart.

OK, it’s nothing like the stuff that you and I deal with, but then we’re not running it!

4

It depends on whether it is floating point or not:

  • In floating-point arithmetic, addition is not associative, so the optimizer can’t reorder the operations (unless you enable a fast-math compiler switch such as -ffast-math); see the sketch after this list.

  • Integer operations can be reordered.
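Here is the sketch promised above: assuming IEEE-754 doubles (as on essentially all current hardware), the two groupings below produce different results, which is exactly why the optimizer must preserve the order you wrote:

#include <cstdio>

int main() {
    double x = (0.1 + 0.2) + 0.3;         // 0.60000000000000009
    double y = 0.1 + (0.2 + 0.3);         // 0.59999999999999998
    std::printf("%.17g\n%.17g\n", x, y);  // the two groupings print different values
    return 0;
}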

In your example, both will run in exactly the same time because they will compile to the exact same code (addition is evaluated left to right).

However, even Java and C# will be able to optimize it; they will just do it at runtime.

4

The typical C++ compiler translates to machine code, not C++ itself. It removes useless parens, yes, because by the time it’s done, there are no parens at all. Machine code doesn’t work that way.

Both versions end up hard-coded as 6:

movl    $6, -4(%rbp)

You can check for yourself with an online compiler explorer.

2

No, but yes, but maybe, but maybe the other way, but no.

As people have already pointed out, (assuming a language where addition is left-associative, such as C, C++, C# or Java) the expression ((1 + 2) + 3) is exactly equivalent to 1 + 2 + 3. They’re different ways of writing something in the source code, that would have zero effect on the resulting machine code or byte code.

Either way the result is going to be an instruction to e.g. add two registers and then add a third, or take two values from a stack, add them, push the result back, then take that and another value and add them, or add three registers in a single operation, or some other way to sum three numbers depending on what is most sensible at the next level (the machine code or byte code). In the case of byte code, that in turn will likely undergo a similar re-structuring, in that e.g. the IL equivalent of this (which would be a series of loads to a stack, and popping pairs to add and then push back the result) would not result in a direct copy of that logic at the machine code level, but something more sensible for the machine in question.

But there is something more to your question.

In the case of any sane C, C++, Java, or C# compiler, I would expect both of the statements you give to produce exactly the same result as:

int a = 6;

Why should the resultant code waste time doing math on literals? No changes to the state of the program will stop the result of 1 + 2 + 3 being 6, so that’s what should go in the code being executed. Indeed, maybe not even that (depending on what you do with that 6, maybe we can throw the whole thing away; and even C# with its philosophy of “don’t optimise heavily, since the jitter will optimise this anyway” will either produce the equivalent of int a = 6 or just throw the whole thing away as unnecessary).
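A sketch of both outcomes (hedged: whether the dead store is eliminated depends on the compiler and optimisation level; the function names are just for illustration):

int used() {
    int a = 1 + 2 + 3;  // folded to the constant 6 at compile time
    return a;           // typically compiles to just "return 6"
}

void unused() {
    int a = 1 + 2 + 3;  // the result is never observed, so an optimising
    (void)a;            // compiler will usually discard the computation entirely
}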

This, though, leads us to a possible extension of your question. Consider the following:

int a = (b - 2) / 2;
/* or */
int a = (b / 2) - 1;

and

int c;
if(d < 100)
  c = 0;
else
  c = d * 31;
/* or */
int c = d < 100 ? 0 : d * 32 - d;
/* or */
int c = 0; d >= 100 && (c = d * 32 - d);
/* or */
int c = (d >= 100) * (d * 32 - d);

(Note: the last two examples are not valid C# or Java, while everything else here is; they are valid in C and C++.)

Here again we’ve equivalent code in terms of output. As they aren’t constant expressions, they won’t be calculated at compile time. It’s possible that one form is faster than another. Which is faster? That would depend on the processor and perhaps on some rather arbitrary differences in state (particularly since if one is faster, it’s not likely to be a lot faster).

And they aren’t entirely unrelated to your question, as they are mostly about differences in the order in which something is conceptually done.

In each of them, there’s a reason to suspect that one may be faster than the other. Subtracting one may map to a specialised decrement instruction, so (b / 2) - 1 could indeed be faster than (b - 2) / 2. d * 32 could perhaps be produced faster by turning it into d << 5, making d * 32 - d faster than d * 31. The differences between the last two are particularly interesting; one allows some processing to be skipped in some cases, but the other avoids the possibility of branch mis-prediction.
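Here is a sketch of that strength reduction (hedged: whether a given compiler performs it depends on the target and the optimisation level, and modern optimisers generally do it without help):

int times31(int d)         { return d * 31; }        // a compiler may rewrite this...
int times31_shifted(int d) { return (d << 5) - d; }  // ...as a shift and a subtract, if that is faster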

So, this leaves us with two questions: 1. Is one actually faster than the other? 2. Will a compiler convert the slower into the faster?

And the answer is 1. It depends. 2. Maybe.

Or to expand: it depends on the processor in question. Certainly there have existed processors where the naïve machine-code equivalent of one would be faster than the naïve machine-code equivalent of the other. Over the course of the history of electronic computing, there hasn’t been one form that was always the faster, either (the branch mis-prediction element in particular wasn’t relevant to many when non-pipelined CPUs were more common).

And maybe, because there are a bunch of different optimisations that compilers (and jitters, and script-engines) will do, and while some may be mandated in certain cases, we’ll generally be able to find some pieces of logically equivalent code for which even the most naïve compiler produces exactly the same results, and some pieces of logically equivalent code for which even the most sophisticated produces faster code for one than for the other (even if we have to write something totally pathological just to prove our point).

It might seem like a very tiny optimization concern,

No. Even with more complicated differences than those I give here, it seems like an absolutely minute concern that has nothing to do with optimisation. If anything, it’s a matter of pessimisation since you suspect the harder to read ((1 + 2) + 3) could be slower than the easier to read 1 + 2 + 3.

but choosing C++ over C#/Java/… is all about optimizations (IMHO).

If that’s really what choosing C++ over C# or Java was “all about” I’d say people should burn their copy of Stroustrup and ISO/IEC 14882 and free up the space of their C++ compiler to leave room for some more MP3s or something.

These languages have different advantages over each other.

One of them is that C++ is still generally faster and lighter on memory use. Yeah, there are examples where C# and/or Java are faster and/or have better application-lifetime use of memory, and these are becoming more common as the technologies involved improve, but we can still expect the average program written in C++ to be a smaller executable that does its job faster and using less memory than the equivalent in either of those two languages.

This isn’t optimisation.

Optimisation is sometimes used to mean “making things go faster”. It’s understandable, because often when we really are talking about “optimisation” we are indeed talking about making things go faster, and so one has become a shorthand for the other and I’ll admit I misuse the word that way myself.

The correct word for “making things go faster” is not optimisation. The correct word here is improvement. If you make a change to a program and the sole meaningful difference is that it is now faster, it isn’t optimised in any way, it’s just better.

Optimisation is when we make an improvement in regards to a particular aspect and/or particular case. Common examples are:

  1. It’s now faster for one use case, but slower for another.
  2. It’s now faster, but uses more memory.
  3. It’s now lighter on memory, but slower.
  4. It’s now faster, but harder to maintain.
  5. It’s now easier to maintain, but slower.

Such cases would be justified if, e.g.:

  1. The faster use case is more common or more severely hampered to begin with.
  2. The program was unacceptably slow, and we’ve lots of RAM free.
  3. The program was grinding to a halt because it used so much RAM it spent more time swapping than executing its super-fast processing.
  4. The program was unacceptably slow, and the harder to understand code is well-documented and relatively stable.
  5. The program is still acceptably fast, and the more understandable code-base is cheaper to maintain and allows for other improvements to be more readily made.

But such cases would also not be justified in other scenarios: the code hasn’t been made better by an absolute, infallible measure of quality; it’s been made better in a particular regard that makes it more suitable for a particular use; optimised.

And choice of language does have an effect here, because speed, memory use, and readability can all be affected by it, but so can compatibility with other systems, availability of libraries, availability of runtimes, and maturity of those runtimes on a given operating system (for my sins I somehow ended up with Linux and Android as my favourite OSs and C# as my favourite language, and while Mono is great, I still come up against this one quite a bit).

Saying “choosing C++ over C#/Java/… is all about optimizations” only makes sense if you think C++ really sucks, because optimisation is about “better despite…” not “better”. If you think C++ is better despite itself, then the last thing you need is to worry about such minute possible micro-opts. Indeed, you’re probably better off abandoning it altogether; happy hackers are a quality to optimise for too!

If however you’re inclined to say “I love C++, and one of the things I love about it is squeezing out extra cycles”, then that’s a different matter. It’s still the case that micro-opts are only worth it if they can be a reflexive habit (that is, if the way you naturally tend to code is faster more often than it is slower). Otherwise they’re not even premature optimisation, they’re premature pessimisation that just makes things worse.

Parentheses are there to tell the compiler in which order expressions should be evaluated. Sometimes they are useless (except that they improve or worsen readability), because they specify the order that would be used anyway. Sometimes they change the order. In

int a = 1 + 2 + 3;

practically every language in existence has a rule that the sum is evaluated by adding 1 + 2, then adding 3 to the result. If you wrote

int a = 1 + (2 + 3);

then the parentheses would force a different order: first adding 2 + 3, then adding 1 to the result. Your example’s parentheses produce the same order that would have been used anyway. In this variant the order of operations is slightly different, but because of the way integer addition works, the outcome is the same. In

int a = 10 - (5 - 4);

the parentheses are critical; leaving them out would change the result from 9 to 1.

After the compiler has determined what operations are performed in which order, the parentheses are completely forgotten. All that the compiler remembers at this point is which operations to perform in which order. So there is actually nothing that the compiler could optimise here; the parentheses are gone.

1

I agree with much of what has been said; however, the overarching point here is that the parentheses are there to coerce the order of operations, which the compiler absolutely honours. Yes, it produces machine code, but that is not the point and is not what is being asked.

The parentheses are indeed gone: as has been said, they are not part of machine code, which is numbers and nothing else. Assembly code is not machine code; it is semi-human-readable and contains the instructions by name, not by opcode. The machine runs what are called opcodes: numeric representations of assembly instructions.

Languages like Java fall into an in-between area, as they compile only partially on the machine that produces them. They are compiled to machine-specific code on the machine that runs them, but that makes no difference to this question: the parentheses are still gone after the first compile.

4

Compilers, regardless of the language, effectively translate all infix math to postfix. In other words, when the compiler sees something like:

((a+b)+c)

it translates it into this:

 a b + c +

This is done because, while infix notation is easier for people to read, postfix notation is much closer to the actual steps the computer has to take to get the job done (and because there is already a well-developed algorithm for converting one to the other). By definition, postfix eliminates all issues with order-of-operations and parentheses, which naturally makes things much easier when actually writing the machine code.
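To make the stack-machine flavour of postfix concrete, here is a minimal sketch of evaluating "1 2 + 3 +" with a stack (illustrative only: single-digit operands and '+' alone, no error handling):

#include <cctype>
#include <stack>
#include <string>

int evalPostfix(const std::string& expr) {
    std::stack<int> s;
    for (char ch : expr) {
        if (std::isdigit(static_cast<unsigned char>(ch))) {
            s.push(ch - '0');            // operand: push its value
        } else if (ch == '+') {
            int rhs = s.top(); s.pop();  // operator: pop two operands...
            int lhs = s.top(); s.pop();
            s.push(lhs + rhs);           // ...apply it and push the result
        }                                // anything else (the spaces) is skipped
    }
    return s.top();                      // evalPostfix("1 2 + 3 +") == 6
}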

I recommend the Wikipedia article on Reverse Polish Notation for more information on the subject.

1
