### OjjOj_'s blog

By OjjOj_, history, 3 weeks ago,

# Problem Description

Given an unsigned integer x (x > 0), return True if it is a power of 2, else return False. Examples:

16 -> True (16 = 2 * 2 * 2 * 2)

12 -> False (12 = 2 * 2 * 3)

## Traditional Algorithm: O(log n)

```python
def is_power_of_two(x):
    while x > 1:
        if x % 2 != 0:
            return False
        x = x // 2

    return True
```


Explanation: This code keeps dividing the number by 2 until it reaches 1 (for powers of 2) or finds an odd number (for non-powers of 2).
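As a quick sanity check of the loop above (the sample values here are my own, matching the problem statement's examples):

```python
def is_power_of_two(x):
    # Repeatedly halve x; a power of two never meets an odd factor before reaching 1.
    while x > 1:
        if x % 2 != 0:
            return False
        x = x // 2
    return True

print(is_power_of_two(16))  # True
print(is_power_of_two(12))  # False
```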

## Prerequisites

Before getting to the O(1) algorithm, let's spend some time playing with bitwise operations.

### Bitwise AND

Bitwise AND (&) is a binary operation that compares each bit of two numbers:

- It returns 1 for each bit position where both input bits are 1.
- It returns 0 in all other cases.

Example (&):

```
  1010   (10 in decimal)
& 1100   (12 in decimal)
------
  1000   ( 8 in decimal)
```

### Bitwise OR

Bitwise OR (|) is also a binary operation that compares each bit of two numbers:

- It returns 1 for each bit position where at least one of the input bits is 1.
- It returns 0 only when both input bits are 0.

Example (|):

```
  1010   (10 in decimal)
| 1100   (12 in decimal)
------
  1110   (14 in decimal)
```
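Both examples can be verified directly in Python (the binary formatting below is just for display):

```python
a = 0b1010  # 10 in decimal
b = 0b1100  # 12 in decimal

print(f"{a & b:04b}")  # AND keeps bits set in both operands  -> 1000
print(f"{a | b:04b}")  # OR keeps bits set in either operand  -> 1110
```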

### Manipulating Rightmost Bits

Use the following formula to flip the rightmost 1-bit of a binary word to 0; if the word contains no 1-bits, nothing changes:

```
x & (x - 1)
```

Example:

00001111 -> 00001110

00000000 -> 00000000

00100000 -> 00000000

(If you still don't understand how the formula works, feel free to try one of the examples on your own)
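The three examples above can be replayed in Python (the helper name `drop_rightmost_one` is my own, not from the post):

```python
def drop_rightmost_one(x):
    # Subtracting 1 flips the rightmost 1-bit to 0 and sets all bits below it to 1;
    # ANDing with the original clears exactly that rightmost 1-bit. 0 stays 0.
    return x & (x - 1)

for x in (0b00001111, 0b00000000, 0b00100000):
    print(f"{x:08b} -> {drop_rightmost_one(x):08b}")
```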

## Constant Algorithm: O(1)

Now after discovering the above wonderful formula, we can use it in our algorithm. Let's see how each number that is a power of two looks in binary :

1 -> 00000001

2 -> 00000010

4 -> 00000100

8 -> 00001000

16 -> 00010000

...

As you can notice, powers of two have exactly one 1-bit in their binary representation (of course that is no coincidence; it follows directly from how the binary system works).

Therefore, removing this single 1-bit from those numbers will result in a 0.

Ultimately, when we remove the rightmost 1-bit from a number: if the result is 0, then voila, it is a power of two; if not, then unfortunately... it is not a power of two :(

```python
def is_power_of_two(x):
    # Assumes x > 0, as the problem states; x = 0 would also return True here.
    return (x & (x - 1)) == 0
```
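A quick check of the one-liner against the problem's examples (the function is restated here so the snippet is self-contained):

```python
def is_power_of_two(x):
    # x has exactly one 1-bit  <=>  clearing its rightmost 1-bit yields 0.
    return (x & (x - 1)) == 0

for x in (1, 2, 12, 16):
    print(x, is_power_of_two(x))
```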


### Final words

Thanks for reading this article.

This is one of many programming tricks from the book in the References section below.

### References:

Hacker's Delight, 2nd ed., H. S. Warren, Jr., Addison-Wesley, 2013, ch. 2, sec. 2-1, p. 11.

# ❤️ Leave a like if you enjoyed it! ❤️


» 3 weeks ago, +5: Is this faster than `__builtin_popcount()` in C++, or is it actually the same?
» » 3 weeks ago, 0: Yes, it is faster. Over 10000000 tests (language: C++, compiler: C++98):
Time taken by ispowerof2_1 (builtin): 0.056 seconds
Time taken by ispowerof2_2 (bitwise): 0.03 seconds
» 3 weeks ago, +3: You can directly use the builtin function `__has_single_bit()`.
» 3 weeks ago (Rev. 2), +3: Calling this an $O(1)$ algorithm requires the assumption that the & and - operations always work in $O(1)$, regardless of the size of the integer.

In Python, this is not true: larger integers need more bits to store, and both operations take time proportional to the number of bits. In other words, it takes $O(\log{n})$ time.

In C/C++, there is an upper limit on the size of integer you can apply this trick to, and within that limit it takes constant time; but by the same argument, the traditional method also takes a bounded number of operations (hence, constant time) to determine whether a number is a power of two.

Therefore in C/C++ both methods can be said to take constant time, but in practice your method is much better than the traditional one because it uses light operations instead of heavier ones.
» » 3 weeks ago (Rev. 2), 0: In the arbitrary-precision setting, the traditional version takes $O(\log^2{n})$ time, because each operation takes $O(\log{n})$ time and is repeated $O(\log{n})$ times. However, this can easily be reduced to $O(\log{n})$ without the trick, e.g. by naively counting the number of 1-bits. In other words, (x & (x - 1)) == 0 isn't something that magically reduces the time complexity; it just takes advantage of the fact that these operations are super light and can be highly optimized by the compiler.