
Orn0's blog

By Orn0, 2 months ago, in English

Hello Codeforcers,

Recently I've stumbled upon this AtCoder problem. The naive solution in O(n^2 m) gets TLE (n ≤ 2000, m ≤ 2000), and the solution is to use arrays of bitsets.

I don't understand why it's so much more efficient. The editorial uses bitsets of size 1000, so I feel like the overall complexity would be O(1000 n m), which is pretty similar to the naive complexity. Should I understand that performing XOR on bitsets of size n isn't of complexity O(n)? (Is it using precomputed tables with masks of size 64?)
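
To make the question concrete, here is the kind of pattern the editorial seems to rely on, as far as I can tell: replace the innermost boolean loop with whole-bitset operations. This is only a rough sketch with made-up names and sizes, not the actual solution.

    #include <bitset>

    const int MAXM = 2000;            // assumed bound on m, for illustration only

    std::bitset<MAXM> row[2000];      // one bitset per row; row[i][k] is a boolean

    // Naive: combining two rows loops over m booleans, O(m) per call.
    // Bitset: the same combination is one XOR over roughly m/64 machine words.
    void combine(int i, int j) {
        row[i] ^= row[j];
    }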

I had a hard time understanding the solution, so any similar problems or resources to learn from would be appreciated.


»
2 months ago

When you think of bitsets, think of them as a list of 64-bit integers, without you having to worry about indexing and iteration. XORing an N-bit bitset is simply XORs on N/64 integers, hence it is O(N/64), sometimes denoted O(N/w) where w = 32 or 64 depending on the architecture.
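
A minimal sketch of what that looks like in C++ (the manual loop is only to illustrate roughly what std::bitset does internally; w = 64 is assumed):

    #include <bitset>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    int main() {
        const int N = 2000;

        // XOR of two N-bit bitsets: one word-sized XOR per 64 bits,
        // so about N/64 operations instead of N.
        std::bitset<N> a, b;
        a ^= b;

        // Roughly what happens under the hood, on raw 64-bit words.
        std::vector<std::uint64_t> wa((N + 63) / 64), wb((N + 63) / 64);
        for (std::size_t i = 0; i < wa.size(); ++i)
            wa[i] ^= wb[i];            // 64 bit positions handled per iteration
    }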

  • »
    »
    2 months ago

    OK, thank you! So it's "only" a constant-factor gain (roughly 64×), but still enough to avoid a TLE in some problems. Good to know.

    • »
      »
      »
      2 months ago

      Yes. In particular, watch out for O(N^3 / w) when N <= 2000 and O(N^2 / w) when N <= 2e5.
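
      For example, a classic O(N^3 / w) pattern is transitive closure (reachability), where the innermost loop over N bits collapses into a single row-wide OR. A generic sketch under assumed names and bounds, not tied to any particular problem:

          #include <bitset>

          const int MAXN = 2000;
          // reach[i][j] = 1 iff i can reach j; assumed to be pre-filled with the
          // adjacency matrix, plus reach[i][i] = 1 for every i.
          std::bitset<MAXN> reach[MAXN];

          // Floyd-Warshall-style transitive closure.
          // Naive triple loop: O(N^3). OR-ing whole rows: O(N^3 / 64).
          void transitive_closure(int n) {
              for (int k = 0; k < n; ++k)
                  for (int i = 0; i < n; ++i)
                      if (reach[i][k])
                          reach[i] |= reach[k];   // merges all N bits at once
          }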