Hey, Codeforces! I struggled for a while with writing bug-free binary search code, but recently I found an efficient way using bit masking. Say you want to maximize a result using binary search, and you have a good() function that returns true if a value is okay and false otherwise. Our binary search would then look like this:
long long r = 0;
for (int i = 62; i >= 0; --i) if (good(r + (1LL << i))) r += (1LL << i);
Now you can binary search without variables such as low, high, and middle, which often lead to infinite loops.
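To make this concrete, here is a self-contained sketch (the integer square root problem, the value of n, and the guard inside good() are my own illustrations, not part of the original post):

#include <cstdio>

long long n = 1000000000000LL;              // example input, assumed for illustration

bool good(long long x) {                    // predicate: true while x is still small enough
    return x <= 2000000000LL && x * x <= n; // the extra bound keeps x * x from overflowing
}

int main() {
    long long r = 0;
    for (int i = 62; i >= 0; --i)           // decide each bit of the answer, highest first
        if (good(r + (1LL << i)))
            r += (1LL << i);
    printf("%lld\n", r);                    // prints 1000000, the largest r with r*r <= 10^12
}

Since good() is true for every value up to the answer and false afterwards, each bit is kept exactly when adding it does not overshoot.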
As far as I know, it's called binary lifting. By the way, lower_bound and upper_bound in the C++ STL use a similar approach.
Hmmmm... Nice... I hadn't seen that before...
Can we always use this instead of binary search???
I think we can't use it on doubles.
I think it'll be fine if you do it like this:
for (long double gap = (1LL << 62); gap > eps; gap /= 2)  // eps = desired precision
    if (good(r + gap))
        r += gap;
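A self-contained sketch of that idea (computing sqrt(2); the choice of n and eps here is just an illustration, not from the comment):

#include <cstdio>

int main() {
    long double n = 2.0L, eps = 1e-12L;      // assumed example values
    long double r = 0;
    for (long double gap = (1LL << 62); gap > eps; gap /= 2)
        if ((r + gap) * (r + gap) <= n)      // good() inlined for this example
            r += gap;
    printf("%.10Lf\n", r);                   // ~1.4142135624
}

The loop runs roughly 62 + log2(1/eps) times, so the starting gap only needs to be at least as large as the answer.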
And how do you write a leftmost binary search with this approach?
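One possible way, as a sketch: if good() is non-decreasing (0, ..., 0, 1, ..., 1) and good(0) is false, the same loop can build the largest value where good() is still false, and the answer is one past it (these assumptions are mine, not from the thread):

long long r = 0;
for (int i = 62; i >= 0; --i)
    if (!good(r + (1LL << i)))   // keep the bit only while good() stays false
        r += (1LL << i);
long long answer = r + 1;        // smallest value with good() == true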
We can modify the classic binary search to avoid these bugs.
You can read more about it here.
... but introduces subtraction overflow.
Assuming n is non-negative, hi is always greater than lo. So, how would that line cause an overflow?
If I am correct: since lo = -1 at the beginning, if hi were the maximum positive value for int, i.e. hi = (1L << 31) - 1, then (hi - lo) would evaluate to hi + 1 = 1 << 31 = -2147483648, which turns negative for a 32-bit signed integer. But who would use such a hi value?
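For illustration, a hypothetical snippet with exactly those extreme values (not taken from the linked code):

#include <climits>
#include <cstdio>

int main() {
    int lo = -1, hi = INT_MAX;
    // int mid = lo + (hi - lo) / 2;               // hi - lo == INT_MAX + 1: signed overflow (UB)
    long long mid = lo + ((long long)hi - lo) / 2; // widening before subtracting stays well-defined
    printf("%lld\n", mid);                         // 1073741823
}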
No infinite loops, no bugs, no thinking, only AC at the first attempt.
Can you elaborate a bit? And what is good(mid), a predicate function?
Yes, it's a predicate function; what else could it be?
The main advantage of this version of binary search is that it always keeps its structure. No unclear "invariants" with all these half-open segments, which are usually the cause of bugs. The fact that the result variable is updated as the search goes on makes it easy to manage. The code above assumes that the function good() is non-increasing (1, ..., 1, 0, ..., 0). Now, say, it's non-decreasing (0, ..., 0, 1, ..., 1). Then just swap the lines left = mid + 1 and right = mid - 1, and it's done!
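A minimal sketch of the version being described (n and good() are assumed to exist; good() is non-increasing and we want the largest mid with good(mid) true):

long long left = 0, right = n, result = -1;
while (left <= right) {
    long long mid = left + (right - left) / 2;
    if (good(mid)) { result = mid; left = mid + 1; }  // the answer may still grow to the right
    else right = mid - 1;
}
// result is the largest mid with good(mid) == true, or -1 if there is none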
No idea why you've got minuses, but thank you for the most understandable and predictable binary search I've ever seen.
Thanks, I've always been using recursive binary search and this has opened my eyes.
I didn't get it. What should I know to understand this code?
Bitmasking. Then, if you understand the topic well, you can trace the code to see what it is doing.
(1LL << i) = 2^i
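A hypothetical trace of the original loop may help (the answer 41 is just an example of mine):

// Suppose the largest value with good() == true is 41 = 101001 in binary.
// i = 62 .. 6 : good(0 + 2^i)  is false, r stays 0
// i = 5       : good(0 + 32)   is true,  r = 32
// i = 4       : good(32 + 16)  is false, r stays 32
// i = 3       : good(32 + 8)   is true,  r = 40
// i = 2, 1    : false,                   r stays 40
// i = 0       : good(40 + 1)   is true,  r = 41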
I have no idea about bit masking, but I found this article very helpful for clearly understanding binary search.
Here's a clean Scala version that also works for negative numbers:
Source: https://github.com/pathikrit/scalgos/blob/c7f79824a4110397cbf340c08cae487b00ba7677/src/main/scala/com/github/pathikrit/scalgos/BitHacks.scala#L90
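The linked Scala code isn't reproduced here; as a rough C++ sketch of the same idea (not the linked implementation), one can binary-lift an offset from lo instead of the value itself, so negative bounds are fine. The name search_max, the parameters, and the monotonicity assumptions below are mine:

// Assumes good() is monotone (true, ..., true, false, ..., false), good(lo) is true,
// and hi - lo fits in a signed 64-bit integer.
long long search_max(long long lo, long long hi, bool (*good)(long long)) {
    long long r = 0, width = hi - lo;        // r is an offset from lo
    for (long long step = 1LL << 62; step > 0; step >>= 1)
        if (step <= width - r && good(lo + r + step))
            r += step;
    return lo + r;                           // largest x in [lo, hi] with good(x) == true
}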