omar_amer47's blog

By omar_amer47, history, 4 weeks ago, In English

I'm trying to solve https://www.codechef.com/problems/COOLSBST on CodeChef. I tried to think of a solution that involves precomputing and square root decomposition, or sorting the test cases based on L and R, but couldn't figure it out. Can somebody explain how to solve it using square root decomposition? Thanks in advance.


»
4 weeks ago, # |

Let's start with a well-known fact from number theory, also mentioned in the linked Wikipedia article: the "cool" numbers are 1, 2, 4 and $$$p^k$$$, $$$2p^k$$$ for all odd primes $$$p$$$ and positive integers $$$k$$$ (that is, the cool numbers are 1, 2, 4, then 3, 9, 27, 81, ... and their doubles 6, 18, 54, 162, ..., then 5, 25, 125, 625, ... and their doubles, and so on).

What is, then, the number of cool divisors of a number $$$x$$$? I'll refer to this as the coolness of $$$x$$$, similarly to how the problem talks about the coolness of a set. If I write $$$x$$$ as the product of its prime factors:

$$$ x = 2^{k_0} p_1^{k_1} p_2^{k_2} \cdots p_s^{k_s}, $$$

then:

  • if $$$k_0 = 0$$$, then the coolness is $$$1 + k_1 + k_2 + \cdots + k_s$$$.
  • if $$$k_0 = 1$$$, then the coolness is $$$1 + 1 + 2(k_1 + k_2 + \cdots + k_s)$$$.
  • if $$$k_0 \ge 2$$$, then the coolness is $$$1 + 2 + 2(k_1 + k_2 + \cdots + k_s)$$$.
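
To make the three cases concrete, here is a minimal sketch (my own code, not taken from the problem or any submission) that computes the coolness of a single number by trial division:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Count the cool divisors of x using the three cases above.
long long coolness(long long x) {
    int k0 = 0;                      // exponent of 2 in x
    while (x % 2 == 0) x /= 2, ++k0;
    long long oddSum = 0;            // k1 + k2 + ... + ks
    for (long long p = 3; p * p <= x; p += 2)
        while (x % p == 0) x /= p, ++oddSum;
    if (x > 1) ++oddSum;             // one leftover odd prime factor
    if (k0 == 0) return 1 + oddSum;
    if (k0 == 1) return 1 + 1 + 2 * oddSum;
    return 1 + 2 + 2 * oddSum;       // k0 >= 2
}

int main() {
    // 90 = 2 * 3^2 * 5, so the answer is 1 + 1 + 2*3 = 8
    // (the cool divisors are 1, 2, 3, 5, 6, 9, 10, 18).
    cout << coolness(90) << '\n';
}
```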

The fact that 1 is considered cool is annoying, and so is the fact that 2 behaves differently from the other primes. We can get rid of the $$$+1$$$ at the beginning of each of these expressions by separately counting, for each query, the total number of sets we are summing over, and adding that count to the answer. As for the rest, the plan is as follows (the net effect is spelled out as a single identity right after the list):

  • Initially pretend that the coolness of every number and set is $$$2(k_1 + k_2 + \cdots + k_s)$$$, calculate the answer using that.
  • Now pretend that the coolness of every number and set is $$$k_1 + k_2 + \cdots + k_s$$$. Solve the problem considering only odd numbers, since sets of odd numbers are the ones for which we initially used the wrong coolness. Subtract that from the answer.
  • Count the number of sets that we are interested in that contain at least one even number. Add twice that count to the answer.
  • Count the number of sets that we are interested in that contain exactly one even number, such that this even number is not divisible by 4. Subtract that from the answer.
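
To spell out the net effect of these four steps (my own summary, writing $$$\sigma(T)$$$ for the sum $$$k_1 + k_2 + \cdots + k_s$$$ of the odd-prime exponents of the product of $$$T$$$, and ranging only over the sets $$$T$$$ we are interested in):

$$$ \text{answer} = \#\{T\} + \sum_{T} 2\sigma(T) - \sum_{T \text{ all odd}} \sigma(T) + 2 \cdot \#\{T \text{ with an even element}\} - \#\{T \text{ with exactly one even element, not divisible by } 4\}. $$$

Checking the three cases: an all-odd set contributes $$$1 + \sigma$$$, a set whose product is divisible by 2 but not by 4 contributes $$$1 + 1 + 2\sigma$$$, and a set whose product is divisible by 4 contributes $$$1 + 2 + 2\sigma$$$, matching the coolness formulas above.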

That's a lot of subproblems. Let's look only at the first one, since the others can be solved by similar techniques.

Now we say that the coolness of a number $$$x$$$ is $$$2(k_1 + k_2 + \cdots + k_s)$$$, where $$$x = 2^{k_0} p_1^{k_1} \cdots p_s^{k_s}$$$. Similarly, the coolness of a set is the coolness of the product of its elements. Notice that, because of how this coolness is defined, the coolness of a set is also the sum of the coolnesses of its elements (**key observation**!). For example, $$$45 = 3^2 \cdot 5$$$ has coolness $$$6$$$ and $$$6 = 2 \cdot 3$$$ has coolness $$$2$$$, while their product $$$270 = 2 \cdot 3^3 \cdot 5$$$ has coolness $$$8 = 6 + 2$$$.

Now we have a new problem yet again. We are given an array $$$c_1, c_2, \ldots$$$, where $$$c_i$$$ is the coolness of the number $$$i$$$. This array is known to us and not part of the input, and we can precalculate it once, at the beginning of the solution, not for each query. We want to find the sum of $$$\sum_{i \in T} c_i$$$ over all sets $$$T$$$ with size between $$$A$$$ and $$$B$$$ such that $$$T \subseteq \{L, L + 1, \ldots, R\}$$$.

For each $$$L \le i \le R$$$, $$$c_i$$$ contributes to this double sum the same number of times. How many times? It's the number of ways to pick between $$$A - 1$$$ and $$$B - 1$$$ other elements for the set from the remaining $$$R - L$$$ numbers. In other words, the answer to the query is

$$$ \left[ \binom{R - L}{A - 1} + \binom{R - L}{A} + \cdots + \binom{R - L}{B - 1} \right] (c_L + c_{L + 1} + \cdots + c_R). $$$
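
As a quick sanity check of this formula (a tiny example of my own), take $$$L = 1$$$, $$$R = 3$$$, $$$A = B = 2$$$: the sets are $$$\{1, 2\}$$$, $$$\{1, 3\}$$$, $$$\{2, 3\}$$$, and indeed

$$$ (c_1 + c_2) + (c_1 + c_3) + (c_2 + c_3) = \binom{2}{1}(c_1 + c_2 + c_3). $$$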

For each query, we need to calculate these two sums fast. The second one is easy: at the beginning of our solution, we precompute the array $$$c_i$$$ and then its prefix sums.
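
For instance, here is a short sketch of that precomputation (my own code; it assumes the coolness $$$c_i = 2(k_1 + \cdots + k_s)$$$ of this first subproblem, $$$N = 10^5$$$, and a placeholder modulus — use whatever the statement specifies):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int N = 100000;            // upper bound on R
const long long MOD = 1e9 + 7;   // placeholder; take the modulus from the statement

int spf[N + 1];                  // smallest prime factor of each number
long long c[N + 1], pref[N + 1]; // coolness and its prefix sums

// Precompute c_i = 2 * (sum of exponents of odd primes in i), once at the start.
void precompute() {
    for (int i = 2; i <= N; ++i)
        if (!spf[i])
            for (int j = i; j <= N; j += i)
                if (!spf[j]) spf[j] = i;
    for (int i = 2; i <= N; ++i) {
        int x = i;
        while (x % 2 == 0) x /= 2;          // the factor 2 is ignored here
        long long s = 0;
        while (x > 1) { ++s; x /= spf[x]; } // count odd prime factors with multiplicity
        c[i] = 2 * s;
    }
    for (int i = 1; i <= N; ++i) pref[i] = (pref[i - 1] + c[i]) % MOD;
}
```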

How to do the first one? Let's denote

$$$ d_{i, j} = \binom{i}{0} + \binom{i}{1} + \cdots + \binom{i}{j}. $$$

To answer a single query, we need to know $$$d_{R - L, B - 1}$$$ and $$$d_{R - L, A - 2}$$$. Calculating all $$$d_{i, j}$$$ values takes too much time; however, we only need $$$2T$$$ of them over the entire solution, where $$$T$$$ is the number of queries.

How can we calculate these effectively? Applying Pascal's rule to every term gives $$$d_{i, j} = d_{i - 1, j} + d_{i - 1, j - 1}$$$, and since $$$d_{i - 1, j} = d_{i - 1, j - 1} + \binom{i - 1}{j}$$$, we get

$$$ d_{i, j} = 2d_{i - 1, j - 1} + \binom{i - 1}{j}. $$$

This means that if we already know $$$d_{i, j}$$$, we can efficiently compute $$$d_{i - 1, j - 1}$$$, $$$d_{i + 1, j + 1}$$$, $$$d_{i, j - 1}$$$ and $$$d_{i, j + 1}$$$. If we cleverly order the $$$2T$$$ values we need, we can use these steps to move from one needed value to the next in only $$$O(N \sqrt N)$$$ steps in total, where $$$N$$$ is the upper bound on $$$R$$$ and $$$B$$$, i.e. $$$10^5$$$. This is Mo's algorithm (which is sorting the queries as well as square root decomposition). And that's the solution.
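
Here is a rough, self-contained sketch of that sweep (my own code, not the setter's). It assumes a prime placeholder modulus so that dividing by 2 is a multiplication by the modular inverse, and it moves the two indices independently (both moves also follow from Pascal's rule) instead of along the diagonal; the query points are made up for illustration:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Mo-style sweep over the points (n, k) at which we need
// d(n, k) = C(n, 0) + C(n, 1) + ... + C(n, k)  (mod MOD).
const long long MOD = 1e9 + 7;   // placeholder; take the modulus from the statement
const int N = 100005;

long long fact[N], inv_fact[N];

long long power(long long b, long long e) {
    long long r = 1; b %= MOD;
    for (; e; e >>= 1, b = b * b % MOD) if (e & 1) r = r * b % MOD;
    return r;
}
long long C(int n, int k) {
    if (k < 0 || k > n) return 0;
    return fact[n] * inv_fact[k] % MOD * inv_fact[n - k] % MOD;
}

struct Query { int n, k, id; };

int main() {
    fact[0] = 1;
    for (int i = 1; i < N; ++i) fact[i] = fact[i - 1] * i % MOD;
    inv_fact[N - 1] = power(fact[N - 1], MOD - 2);
    for (int i = N - 1; i > 0; --i) inv_fact[i - 1] = inv_fact[i] * i % MOD;
    const long long inv2 = power(2, MOD - 2);

    vector<Query> qs = {{10, 3, 0}, {12, 5, 1}};   // made-up query points (n, k, id)
    int block = max(1, (int)sqrt((double)N));
    sort(qs.begin(), qs.end(), [&](const Query& a, const Query& b) {
        if (a.n / block != b.n / block) return a.n < b.n;
        return a.k < b.k;
    });

    vector<long long> ans(qs.size());
    long long cur = 1;               // d(0, 0) = 1
    int n = 0, k = 0;
    for (const Query& q : qs) {
        while (k < q.k) { ++k; cur = (cur + C(n, k)) % MOD; }                 // d(n, k+1) = d(n, k) + C(n, k+1)
        while (n < q.n) { cur = (2 * cur % MOD - C(n, k) + MOD) % MOD; ++n; } // d(n+1, k) = 2 d(n, k) - C(n, k)
        while (n > q.n) { --n; cur = (cur + C(n, k)) % MOD * inv2 % MOD; }    // d(n-1, k) = (d(n, k) + C(n-1, k)) / 2
        while (k > q.k) { cur = (cur - C(n, k) + MOD) % MOD; --k; }           // d(n, k-1) = d(n, k) - C(n, k)
        ans[q.id] = cur;
    }
    for (long long a : ans) cout << a << '\n';   // prints 176 and 1586 for the points above
}
```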


Sorry if this explanation turned out to be complex. The problem itself is complex (a sum of products of weird things in the statement), has many annoying details (such as 2 being different from the other primes), and is, at the end of the day, rated 2740. I also left out some details (such as avoiding extra log factors in the steps of Mo's algorithm).


May I ask why you thought the solution should involve precomputing, square root decomposition or sorting the test cases? Were they guesses based on constraints or something you saw when reading others' solutions?

In general, I'd advise not starting from thinking "I need to do square root decomposition" or "I need to reorder the test cases". See this. After a certain level, this information is not helpful at the beginning of the problem-solving process.

Look how different my process was: start by analyzing what this "coolness" value really is. Make the key observation. I ended up at all the things you mentioned in your question, but all of those things only came up at the end.

Or more specifically, I paid no attention to any of that before it was clear the problem could be solved in polynomial time (looking at only the CodeChef statement, the obvious solution is exponential time). There is no point in thinking of reordering the test cases if you don't have a polynomial-time solution because reordering the test cases can't change an exponential solution to a polynomial one.

  • »
    »
    4 weeks ago, # ^ |

    OMG, is this CP? I'm down and sad now

    • »
      »
      »
      4 weeks ago, # ^ |

      Nah bro, at least not for the vast majority of people. All problems rated <=1500 are just tricky things. At least in my experience, most problems rated <=2000 are also just tricky things. Nothing like this.

      • »
        »
        »
        »
        4 weeks ago, # ^ |

        What's the difference between this and these "tricky things"?

        I think the only difference is that this problem has a longer chain of observations than the 1500s and 2000s.

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |

          That is a good point. Maybe I see it as complicated because I don't know what sqrt decomposition or any of these things are. If I knew what they were, the problem would probably become a tricky thing for me.

  • »
    »
    4 weeks ago, # ^ |

    Thank you for the explanation. I didn't think of square root decomposition initially. I first made all the observations you talked about up to the four-subproblems part, and I tried to calculate the contribution of every prime number and its powers independently. I had ideas for dp and dp+fft solutions, but they didn't work. Then I looked at the submissions and found the test-case reordering, and it actually made sense: the constraints say that t <= 100000 and R - L can be nearly 100000, and CodeChef doesn't guarantee that the sum of R - L over all test cases won't exceed 100000, so a linear solution per test case would get TLE. That suggests precomputing and answering each test case in at most O(sqrt(N)). And no, the explanation wasn't complex; I understood it completely. I should have thought of dividing into subproblems and then summing and subtracting, as those are common in counting problems.