Good day.
Probably everyone remembers a case from their own practice: you thought the solution through correctly, wrote it correctly, and the task still doesn't pass. In such a situation, the strategy of "resubmitting under a different compiler" sometimes works.
A bug in g++, recently found by Gerald and Nerevar, is one more argument for that strategy.
Watch my hands closely, a magic trick is coming:
#include <cstdlib>
#include <iostream>
#include <cstdio>
using namespace std;
int main() {
int n;
cin >> n;
for (int x = 0; x <= n; x++) {
if (n == x + (x + 1) + (x + 2)) {
cout << x + 2 << " " << x + 1 << " " << x << endl;
exit(0);
}
}
cout << -1 << endl;
return 0;
}
Compile with -O2: g++ -O2 -o a a.cpp
Run it and enter 9. We see -1.
Try compiling without -O2, or with a different compiler, and we get the correct answer: 4 3 2.
Naturally, I filed a bug in the GNU GCC bugtracker. The bug reproduces on g++ versions at least from 4.7.2 to 4.8.2.
Three interesting thoughts on this:
One could take a large number of ACCEPTED Codeforces solutions (those submitted under g++), retest them under valgrind and without optimizations (to make sure they are definitely OK), and use them for regression testing of new compilers. A new g++ version comes out: rerun a couple of tens of thousands of solutions under it. Something diverges: investigate and file a bug.
The Codeforces community has found bugs in g++ more than once. Everyone loves to bash Microsoft and its C++ compiler, but have we ever seen any bugs like this in it?
This bug reproduces on Codeforces. Be attentive and careful!
There are bugs everywhere: http://blogs.msdn.com/b/vcblog/archive/2012/08/10/10338661.aspx
See the second table in particular.
It works correctly for me, both with and without -O2. My g++ is old, though.
That's not really surprising if it's a regression in 4.8 or 4.9.
Both work for me too.
And yeah, I'm getting this same message when I check the version.
I remember there is a bug with std::reverse() with -O2. I rarely use it.
Nice!
Why is the result correct when x starts at -1? ...
With -Wstrict-overflow, gcc warns that it has simplified the expression to a constant. At least it's honest :)
They say -fdump-tree-optimized helps. Here is what I got. That is, the output is a program that works correctly for n = 3 and incorrectly for everything else.
It's also correct with negative numbers. Interesting how that came about. Oh well.
There is a suggestion in the bugtracker that it detects a nonexistent overflow (apparently for n > 0).
And what is the minimal change to the sample that makes it work correctly after optimization on the affected g++ versions?
For me, replacing x + (x + 1) + (x + 2) with 3 * x + 1 + 2 helped. 2 * x + x + 3 also works, but 2 * x + 3 + x and x + x + x + 1 + 2 do not.
Thanks for the information.
One might conjecture that if the identifier is mentioned only once in the expression, the bug does not appear, but it has still become scary to submit on g++.
It is enough to change the loop condition from x <= n to x < n.
Why not add clang to the list of compilers? Such bugs are much rarer there.
Maybe they're rarer precisely because nobody uses it anywhere and there's no large sample? :)
No, it's only on contests that nobody uses it. In production development it's mainstream now, at least in many large corporations.
I'm all for clang, but under Windows it currently works very poorly, starting with the fact that you have to use static linking (here) and ending with a pile of other bugs.
It's not clang that works poorly, it's mingw. If you want to use clang under Windows, better follow the official instructions: http://clang.llvm.org/get_started.html. And by hearsay it works decently with cygwin.
Why isn't the compile command "g++ -o a a.cpp -O2"?
That's a typo.
«One could take a large number of ACCEPTED Codeforces solutions (those submitted under g++), retest them under valgrind and without optimizations (to make sure they are definitely OK), and use them for regression testing of new compilers. A new g++ version comes out: rerun a couple of tens of thousands of solutions under it. Something diverges: investigate and file a bug.»
I suspect that 99% of the solutions that stop passing on a new version will be cases of out-of-bounds memory access, invalid pointers, uninitialized variables, uncleared memory, undefined behaviour, or compiler-specific behaviour, though valgrind may filter out a lot of them.
I can't be sure without an experiment, but this seems like a very likely situation to me.
That's why I mentioned valgrind. Besides, the same solutions can also be tested on other compilers, keeping only those that are ACCEPTED everywhere.
I am an active stackoverflow user and every now and then I see some code that demonstrates strange behavior in g++. I remember very clearly a case when compiling with optimizations caused abs(-1) to eval to -1. You can imagine what this could cause to some programs!
I really regret that I could not find that thread to link here.
I compiled the code with -O2 -S and viewed the assembly. I found that the code simply checks whether n is equal to 3: if yes, it outputs 2 1 0, else it always outputs -1. The assembly code is here: https://gist.github.com/klogk/9994674. If the conditional expression is changed to 3*x+3, the code works fine. I guess the reason is a wrong optimization when evaluating such boolean expressions. I wonder why it wasn't found before.
x+x+x doesn't work either.
PS: the idea of testing g++ with Codeforces is awesome. CF is a big storage of complicated combinations of simple syntax-valid instructions that run in ~1 sec and come with 40-80 test cases. I've read somewhere that compiler developers usually check the results and consider the compiler working after it passes 95% of them. I think it would be brilliant to run separate testing. If I were a student of SGU, it would be interesting to study such a testing system (or try to contribute) and write several bug reports by simplifying participants' solutions.
You may also use -fdump-tree-optimized, as I mentioned in one of the Russian comments. It prints the code after optimization, but before translation to assembly.
Great! Yours is much clearer at a glance. New skill acquired!!
Very useful for me! It's very clear!
Nice.
This situation often happens on POJ: G++ gets WA but C++ gets AC. Now I finally know the reason.
Well, at least I knew that when outputting a double variable people always use "%lf", and it gets AC under C++ but WA under G++. However, if you use "%f" to output a double or float variable, both compilers return AC. But scanf is different: you should use "%lf" to read a double and "%f" to read a float, under both G++ and C++.
This has nothing to do with GCC. The C/C++ standards say that %f, and not %lf, shall be used to print a double with printf().
Yes, you're right. That's my point actually, but I didn't explain it clearly enough :)
In C89, %lf with printf() is undefined. But since C99, it is defined as equivalent to %f (more exactly, an l on a following f is defined as having no effect). So I wonder why you got both AC and WA.
I used gdb to debug the program, and it always breaks out of the for loop once the first n == x + (x + 1) + (x + 2) check is false. It seems that if I change the expression to n <= x + (x + 1) + (x + 2), it works fine. The optimizer must have concluded that once n != x + (x + 1) + (x + 2) holds, it will hold forever. That is not because n is always greater than the right-hand side; instead it might come from some number-theoretic derivation that makes a mistake somewhere.
The optimized code is here:
Bug also found in shortened version, gcc 4.7.1
If changed into x * 3 + 3, it would be correct.
My gcc version is 4.8.2.
I can get the correct answer when I use g++ -O2 -fno-tree-ch, so I think the -ftree-ch option enabled by -O may lead to some wrong optimization.
Good job... compiling with -O2 -fno-tree-ch seems to be safe for now.
(Maybe) another bug, found at last year's ICPC Asia Nanjing Regional Contest. I coded a snippet like this, in order to test whether "-O2" in our compiler command line worked correctly.
To my surprise, this code ran for more than 10 seconds without outputting anything. Objdump showed the whole snippet was optimized into a "while(true);". But without -O2 this code works in about 5 seconds.
We tried all g++ versions we had at that time: 3.4.2 (shipped with Dev-Cpp 4.9.9.2), 4.4.7, 4.6.3, 4.7.2, 4.8.1 and even 4.9pre. Among them only 3.4.2 works. But obviously we can't use such an old one on the judge workstations.
This fact forced us to double-check many suspicious TLE solutions during the contest.
And one bug only affects Codeforces. (Among all compilers I tested, only GCC 4.7.2 MinGW was affected.)
4054614 vs 4054535
Minimized test: http://paste.ubuntu.com/7215859/
The C standard says two things:
The variable sum is signed and overflows, and this can be predicted at compile time. So the compiler can do absolutely anything with it.
Solutions are:
-fwrapv: This option instructs the compiler to assume that signed arithmetic overflow of addition, subtraction and multiplication wraps around using two's-complement representation. This flag enables some optimizations and disables others.
Thanks!
But as sum and x are not related to the for loop, I assumed "do anything" at least does not include "convert the whole loop into a while(true);". Now I know I'm wrong :)
I still wonder about the logic behind this: how can the compiler determine that this for loop is infinite?
I agree that this is an awful way to "do anything".
Compilers use UB for optimizations under the assumption that the programmer cannot cause UB.
I see such a chain to get the 'while(true)':
I reported a couple of optimizer bugs in VS 2005, but never followed up on how they ended; they may not have been bugs at all. There are lots of subtleties there. For example, you must not cause an overflow in a for loop, i.e. for (unsigned i = 1; i; ++i) should in principle be undefined behavior.
That one is actually OK, because unsigned simply wraps around. With int it would indeed be undefined behaviour.
было бы действительно неопределённое поведение.The idea to use Codeforces submissions as regression tests is great and would help the open source community a lot! However it's always possible that programs exhibit undefined behaviour without valgrind noticing, so it's probably still a lot of work to manually extract the failing part of the program and verify whether there is indeed a regression or the code is just wrong
Hey, I am running into a similar kind of problem. My code compiled with g++ on my computer gives the right answer; however, the binary compiled on Codeforces' server gives a different, hence wrong, answer.
http://mirror.codeforces.com/contest/496/submission/9178052
Care to have a look? Is there any solution to it?
Thanks.
Take a look at the count[len - 1] value. It contains unexpected garbage because you declare the array count inside main. If count were global, it would be 0, for example. Anyway, as far as I can understand your code, this is the bug.
As zurg pointed out, you are accessing an uninitialized piece of memory.
Here's some advice: install valgrind and use it to check your memory usage. As an example, here is what valgrind said when running your code on test #4:
You can see that valgrind has detected every access to uninitialized memory, and it gives you the exact line where it happened.
This code causes the bug even without -O2 (tested on GCC 4.8.1):
Isn't signed integer overflow an undefined behavior?
http://stackoverflow.com/questions/16188263/is-signed-integer-overflow-still-undefined-behavior-in-c
Shortly: if something like this happens, the compiler may do anything: generate correct code (that is, doing exactly what you want), generate incorrect code, or... format all your drives (because why not).
Signed integer overflow is undefined behavior. I guess the optimizer removes the (a*b < 0) condition because in a program with well-defined behavior it's always false (since at that point in program a>=0 and b>=0).
You are right! I was going to say gcc can't assume a >= 0 and b >= 0, but it actually can, because of the two previous if. I spent so much time trying to understand why gcc was treating this as UB and I didn't even notice that. How stupid >.<...
It makes sense now.
Thanks!
Sorry to post here after 3 months, but I found a very annoying bug somewhere in g++.
My AC code for problem 474E didn't work at home. You can find it easily, it's my last submission.
In fact:
It does random crap at times; some indexes go to values like 2123421 instead of 2 without any reason...
I think there is a problem with an optimization that guesses the value of a variable wrongly. It seemed to be deterministic (the garbage variables were always the same) but totally illogical.
I know that some of you here are g++ experts. Personally, I simply use these options because they're nice for debugging, but I don't know if they're unstable. In fact, I never really try to find out :D
I think the same problem can happen to some of you, because my code is a pretty classical segment tree code.
If you find out what the fuck happened, please let me know. Thanks! :-)
Looks like you already did a lot of investigation. The obvious next steps are: 1. Get rid of all warnings. 2. If the problem remains, use binary search to find out the offending option.
I found it!
The problem is due to the combination of -O2 and -fwhole-program.
When I use them separately it works, but with both of them I get the error.
So, to reproduce the problem, you simply need to use "-O2 -fwhole-program".
That sounds logical, according to https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html :
-fwhole-program : Assume that the current compilation unit represents the whole program being compiled. All public functions and variables with the exception of main and those merged by attribute externally_visible become static functions and in effect are optimized more aggressively by interprocedural optimizers.
The mystery seems to be solved :D