Hello Codeforcers,
Recently I stumbled upon this AtCoder problem. The naive solution in O(n^2 m) gets TLE (n ≤ 2000, m ≤ 2000), and the intended solution is to use arrays of bitsets.
I don't understand why it's so much more efficient. The editorial uses bitsets of size 1000, so I feel like the overall complexity would be O(1000nm), which is pretty close to the naive complexity. Should I conclude that XORing a bitset of size n isn't O(n)? (e.g. by using precomputed tables of 64-bit masks?)
I had a hard time understanding the solution, so any similar problems or resources to learn from would be appreciated.
When you think of bitsets, think of them as a list of 64-bit integers without having to worry about the indexing and iteration yourself. XORing an N-bit bitset is simply XORing N/64 integers, hence it is O(N/64), sometimes denoted O(N/w), where w = 32 or 64 depending on the architecture.
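To make that concrete, here's a minimal sketch (not tied to the AtCoder problem) of what an N-bit XOR looks like at the word level, next to the equivalent `std::bitset` one-liner. The hand-rolled `xor_words` helper is purely illustrative of what the library does internally:

```cpp
#include <bitset>
#include <cstddef>
#include <cstdint>
#include <vector>

// Roughly what the library does under the hood: an N-bit bitset is stored as
// ceil(N/64) machine words, and operator^ XORs those words one by one.
std::vector<std::uint64_t> xor_words(const std::vector<std::uint64_t>& a,
                                     const std::vector<std::uint64_t>& b) {
    std::vector<std::uint64_t> c(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        c[i] = a[i] ^ b[i];   // one instruction covers 64 bits at once
    return c;
}

int main() {
    const int N = 2000;
    std::bitset<N> x, y;
    x[3] = 1; y[3] = 1; y[7] = 1;
    std::bitset<N> z = x ^ y;  // ~N/64 word XORs internally, not N bit operations
    return z[7] ? 0 : 1;       // only bit 7 survives the XOR
}
```

The compiler emits one 64-bit XOR per word (often wider with SIMD), which is exactly where the w-fold saving over a per-bit loop comes from.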
Ok, thank you! So it's just a constant-factor speedup, but still enough to avoid TLE in some problems. Good to know.
Yes. In particular, watch out for O(N^3 / w) when N <= 2000 and O(N^2 / w) when N <= 2e5.
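As a concrete instance of the O(N^3 / w) pattern (a generic example, not the linked problem): transitive closure of a directed graph with n <= 2000 vertices, where each row of the reachability matrix is a bitset, so the innermost loop collapses into a single row-wide OR:

```cpp
#include <bitset>
#include <utility>
#include <vector>

// Hypothetical example: reach[u] = set of vertices reachable from u.
// The plain triple loop is O(n^3); with bitset rows it becomes O(n^3 / w).
const int MAXN = 2000;
std::bitset<MAXN> reach[MAXN];

void closure(int n, const std::vector<std::pair<int, int>>& edges) {
    for (int u = 0; u < n; ++u) { reach[u].reset(); reach[u][u] = 1; }
    for (auto [u, v] : edges) reach[u][v] = 1;
    // Floyd–Warshall order: if u reaches k, u also reaches everything k reaches.
    for (int k = 0; k < n; ++k)
        for (int u = 0; u < n; ++u)
            if (reach[u][k])
                reach[u] |= reach[k];   // one 2000-bit OR ≈ 32 word ORs
}
```

The same trick applies to boolean matrix multiplication, bitset knapsack, and similar problems; the only requirement is that the inner loop be a plain AND/OR/XOR/shift over a whole row.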