Why do I get such strange behaviour when outputting the difference between the sizes of two queues? When I store the difference in a variable it prints the right answer, but printing the expression directly gives a wrong answer.
#include <bits/stdc++.h>
using namespace std;

int main()
{
    priority_queue<int, vector<int>, greater<int>> pmin;
    priority_queue<int, vector<int>> pmax;
    pmin.push(5);
    pmin.push(9);
    pmin.push(10);
    pmax.push(1);
    cout << pmax.size() - pmin.size() << endl;  // prints a huge value
    int diff = pmax.size() - pmin.size();
    cout << diff << endl;                       // prints -2
    return 0;
}
**** To the downvoters: the downvotes don't bother me, but I think this is a good question, since most people are unaware of this pitfall. It would be helpful if someone posted a good answer that others can learn from.
Because size() returns an unsigned type (std::size_t), the subtraction wraps around when the result would be negative. This is true for all STL containers.
Unsigned, understood, but why does it not happen when I store the result in the diff variable?
This works: cout << (int)(pmax.size() - pmin.size()) << endl; It has something to do with the conversion to a signed type.
See here: the wrapped difference, as an unsigned 64-bit value, has the same bit pattern as the exact signed difference (you can verify this with two's complement, where the leftmost bit is the sign bit). So if the value is converted to a signed type, and the true difference fits in that type's range, you see it correctly; that is what happens in your code when you assign to int, since the upper 32 of the 64 bits are discarded. If the value is never converted to a signed type, however, you see a huge garbage value.
OK, I understand the wraparound part, but why does it not show up when we cast the result or store it in a variable? Why is there a problem without the cast, and how does the cast fix it?
Overflow bits are always thrown away: a 64-bit integer cannot hold more than 64 bits. Interpreted as unsigned, the wrapped result looks like garbage, but converted to a signed type the very same bit pattern reads as the correct negative value. Read up on signed and unsigned representations and overflow.
Learn about unsigned numbers.