ko_osaga's blog

By ko_osaga, history, 5 months ago, In English

Several years ago, a wise person told me that convolution over a bitwise operator is possible: given $$$A, B$$$ of size $$$2^N$$$, you can compute

$$$C[i] = \sum_{j \oplus k = i} A[j] B[k]$$$

$$$C[i] = \sum_{j \land k = i} A[j] B[k]$$$

$$$C[i] = \sum_{j \lor k = i} A[j] B[k]$$$

in $$$O(2^N N)$$$ time. Cool!

I asked the wise person how such things are possible. The wise person replied, "Of course you know how FFT works, so let's begin with the Fast Walsh-Hadamard Transform..." I said, no, I don't know how FFT works, thank you. Then I just threw it into my ICPC teamnote.

Years have passed, I still don't know how FFT works, and while writing some stupid essay, a random idea came to my mind. I wondered, "Does nobody really know this? Why hasn't anyone explained OR convolution this way?" I searched on Google, and nobody was telling it this way, so this is certainly not a common explanation. But why? It should be. Let me use my time to change things for good.

Sum of Subsets

For convenience, I'll use a set notation. We want to compute:

$$$C[i] = \sum_{j \cup k = i} A[j] B[k]$$$

If we can do this, we can also do $$$j \cap k$$$ easily. My approach can't do XOR convolution anyway, so let's skip it.

Let's relax the condition as follows: $$$C^\prime[i] = \sum_{(j \cup k) \subseteq i} A[j] B[k]$$$

Which is $$$C^\prime[i] = \sum_{(j \subseteq i) \land (k \subseteq i)} A[j] B[k] = (\sum_{j \subseteq i} A[j]) (\sum_{k \subseteq i} B[k])$$$

Given an array $$$A$$$, how to compute $$$(\sum_{j \subseteq i} A[j])$$$? This is just a sum-of-subsets DP. Let's do it for both arrays $$$A$$$, $$$B$$$. Code:

// compute sum-of-subset
for (int i = 0; i < n; i++) {
  for (int j = 0; j < (1 << n); j++) {
    if ((j >> i) & 1) {
      A[j] += A[j - (1 << i)];
      B[j] += B[j - (1 << i)];
    }
  }
}

Then we have $$$C^\prime[i] = A[i] \times B[i]$$$.

A naughty cat

You have $$$C^\prime[i] = \sum_{j \subseteq i} C[j]$$$. How to get $$$C$$$ from $$$C^\prime$$$?

Think about this. You had an array $$$A$$$, but a naughty cat took a sum-of-subset of it and replaced it. You want to take $$$A$$$ back. What should you do? Just undo it!

for (int i = n - 1; i >= 0; i--) {
  for (int j = (1 << n) - 1; j >= 0; j--) {
    if ((j >> i) & 1) {
      A[j] -= A[j - (1 << i)];
    }
  }
}

You know what's going on: you are doing everything in reverse.

But $$$C^\prime$$$ is a sum-of-subset of $$$C$$$. What?

// compute C^\prime
for (int i = 0; i < (1 << n); i++) {
  C[i] = A[i] * B[i];
}

// reverse sum-of-subset
for (int i = n - 1; i >= 0; i--) {
  for (int j = (1 << n) - 1; j >= 0; j--) {
    if ((j >> i) & 1) {
      C[j] -= C[j - (1 << i)];
    }
  }
}

That's it, enjoy your convolution without some crazy ad-hoc maths!

Remark 1. The same approach works for GCD and LCM convolution, since it's essentially the (number of primes $$$\leq n$$$)-dimensional equivalent of the above approach, and the "sum over divisors" transform can be done in $$$O(n \log n)$$$ time.

Remark 2. This article used 50 minutes of time that should be used to complete the stupid essay.


»
5 months ago, # |
  Vote: I like it +64 Vote: I do not like it

Wow, and this approach almost trivializes subset convolution. Now I can do subset convolution without any complicated tutorial :)

»
5 months ago, # |
Rev. 15   Vote: I like it +58 Vote: I do not like it

I think it works in any lattice, that is, a set with a $$$\leq$$$ relation (partially ordered set) with a "least common ancestor"

$$$i \vee j$$$

and a "highest common descendent"

$$$i \wedge j$$$

. We can kind of interpret them as follows:
  • $$$i \wedge j$$$ as the maximum element that is less or equal than both $$$i$$$ and $$$j$$$, i.e. similar to $$$\min(i, j)$$$, $$$\gcd(i, j)$$$ or $$$i \cap j$$$,
  • $$$i \vee j$$$ as the minimum element that is greater or equal than both $$$i$$$ and $$$j$$$, i.e. similar to $$$\max(i, j)$$$, $$$\operatorname{lcm}(i, j)$$$ or $$$i \cup j$$$.

In this notion, for $$$C[k]$$$ defined as

$$$C[k] = \sum_{i \vee j = k} A[i] B[j]$$$

it always holds that

$$$\sum_{i \leq k} C[i] = \left(\sum_{i \leq k} A[i]\right) \left(\sum_{j \leq k} B[j]\right).$$$

And you can always inverse the sum over

$$$C^\prime[k] = \sum_{i \leq k} C[i]$$$

via the Möbius inversion formula as

$$$C[k] = \sum_{i \leq k} \mu(i, k) C^\prime[i],$$$

where

$$$\mu(i, k)$$$

is the Möbius function of the poset. See also this article by nor which extensively goes into the topic from the poset point of view.

I think this approach is generally known to the CP community, as there are problems about GCD/LCM convolutions on Library Checker, but maybe it's not standard to use it in tutorials because tutorial authors perceive polynomial evaluation/interpolation, or sometimes the matrix Kronecker product, as more natural, insightful and relatable to CPers than the poset point of view?

Besides, these tutorials are usually written in a way that is supposed to be uniform for OR convolution, AND convolution and XOR convolution, and as you pointed out yourself, it doesn't work with XOR convolution.

  • »
    »
    5 months ago, # ^ |
      Vote: I like it +53 Vote: I do not like it

    Lattice, such a cool name, thanks!

  • »
    »
    5 months ago, # ^ |
      Vote: I like it +19 Vote: I do not like it

Here's a fun lattice: subspaces of

$$$\mathbb{F}_2^n$$$

under inclusion. (This recently appeared as part of problem A on UCup 1 Stage 13: Iberia).

    You can experimentally find the closed form of the Mobius function or derive it using some math.

    Spoiler

    Does anyone have a nice combinatorial interpretation of this? (Combinatorial means something like this comment for the lattice of partitions.)

    • »
      »
      »
      5 months ago, # ^ |
        Vote: I like it +42 Vote: I do not like it

      Let me try.

      Let

      Unable to parse markup [type=CF_MATHJAX]

      . For a subspace $$$W \subset V$$$, the equation we ultimately want to have is

      Unable to parse markup [type=CF_MATHJAX]

      if

      Unable to parse markup [type=CF_MATHJAX]

      and $$$0$$$ otherwise. If we let

      Unable to parse markup [type=CF_MATHJAX]

      and

      Unable to parse markup [type=CF_MATHJAX]

      , the equation becomes

      Unable to parse markup [type=CF_MATHJAX]

      subspaces of

      Unable to parse markup [type=CF_MATHJAX]

      with dimension

      Unable to parse markup [type=CF_MATHJAX]

      . (BTW How did you use \mathbb{F} in math mode? It gave me an error...)

      The case of $$$n=0$$$ is easy, so let's focus on

      Unable to parse markup [type=CF_MATHJAX]

      .

      Take an arbitrary vector $$$v$$$, say,

      Unable to parse markup [type=CF_MATHJAX]

      . For a subspace of

      Unable to parse markup [type=CF_MATHJAX]

      , let's call it type A if it contains $$$v$$$ and type B otherwise. I will show that

      Unable to parse markup [type=CF_MATHJAX]

      type A subspaces of dimension

      Unable to parse markup [type=CF_MATHJAX]

      type B subspaces of dimension

      Unable to parse markup [type=CF_MATHJAX]

, then everything cancels out in the sought sum.

      Consider a type A subspace of dimension $$$k+1$$$ and let $$$b_i$$$ (

      Unable to parse markup [type=CF_MATHJAX]

      ) be its basis. We can safely assume

      Unable to parse markup [type=CF_MATHJAX]

      . Then, for any

      Unable to parse markup [type=CF_MATHJAX]

      ,

      Unable to parse markup [type=CF_MATHJAX]

      (

      Unable to parse markup [type=CF_MATHJAX]

      ) form a basis and they correspond to a type B subspace of dimension $$$k$$$. Thus we have a

$$$2^k$$$

      -to-one correspondence and this completes the proof.
    • »
      »
      »
      5 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Wow, did you upsolve it?

      • »
        »
        »
        »
        5 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        Yeah, it was a pretty cute problem, the math wasn't too bad. I wonder if it's possible to solve in subquadratic time, and I also wonder if it's possible without PIE (e.g. over floating point numbers).

        • »
          »
          »
          »
          »
          5 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

From your comment it seems that you found it not that hard? Mind telling the gist of your solution? We have some digit DP with insane tricks.

          • »
            »
            »
            »
            »
            »
            5 months ago, # ^ |
            Rev. 3   Vote: I like it +23 Vote: I do not like it

First, we'll do PIE on the basis condition: for each subspace $$$B'$$$ of the given basis $$$B$$$, we want to count the number of subsets of

            Unable to parse markup [type=CF_MATHJAX]

            with all elements in $$$B'$$$, weighted by

            Unable to parse markup [type=CF_MATHJAX]

            .

            The key observation is that, the problem on a subspace $$$S$$$ of dimension $$$M$$$ and an upper bound $$$X$$$ can be reduced to the problem on

            Unable to parse markup [type=CF_MATHJAX]

            with some upper bound of $$$X'$$$ (which clearly has

            Unable to parse markup [type=CF_MATHJAX]

            valid subsets).

            The proof is not too difficult: consider the reduced-row-echelon-form basis $$$B$$$ of $$$S$$$, and express each element of $$$S$$$ in terms of this basis (i.e. as

            Unable to parse markup [type=CF_MATHJAX]

            for some

            Unable to parse markup [type=CF_MATHJAX]

            ). One nice property of an RREF basis is that lexicographic ordering is preserved, i.e.

            Unable to parse markup [type=CF_MATHJAX]

            . Thus, the elements of $$$S$$$ less than $$$X$$$ must be the elements of

            Unable to parse markup [type=CF_MATHJAX]

            at most some upper bound $$$X'$$$.

            We can give an explicit construction of $$$X'$$$: consider the element

            Unable to parse markup [type=CF_MATHJAX]

            of $$$S$$$ which matches $$$X$$$ for as many digits as possible. Then, if

            Unable to parse markup [type=CF_MATHJAX]

            , our upper bound is

            Unable to parse markup [type=CF_MATHJAX]

            ; otherwise, our upper bound is

            Unable to parse markup [type=CF_MATHJAX]

            , where $$$d$$$ is the number of digits of $$$v$$$ after the mismatch.

            To finish the problem, we do digit-DP on the RREF basis of the subspace $$$S$$$. We go from the least significant bit upwards, and our state is the current dimension, as well as whether we've hit the first mismatch of

            Unable to parse markup [type=CF_MATHJAX]

with $$$X$$$ or not. For each digit, we either add a vector starting at this digit to our basis, or we don't. It's helpful to note that, in the second case, exactly half of the bases will match $$$X$$$ at that digit.

It's kind of interesting that the problem statement gives you an explicit subspace; it's almost a hint that you should look at subspaces and try to reduce the problem. I think the solution would've been harder to come up with if the problem just asked for full-rank subsets of

            Unable to parse markup [type=CF_MATHJAX]

            .
  • »
    »
    5 months ago, # ^ |
      Vote: I like it +11 Vote: I do not like it

    A little nitpick: you only need semilattices for such an argument to work.

    One obvious case where this becomes significant is the case of trees — a tree forms a semilattice (but not a lattice) under the LCA operation, and you can find things like

    Unable to parse markup [type=CF_MATHJAX]

    where $$$x_i$$$ are some numbers assigned to vertices.

    Looking at it from a poset perspective, the DP to recover $$$f$$$ from the zeta transform — which is the square of the subtree sum of that vertex — is just a more manageable way to compute the Möbius function via the theorem for lattices mentioned in my blog, and since the answer is linear in the Mobius function, it manages to also solve the problem.

    • »
      »
      »
      5 months ago, # ^ |
      Rev. 2   Vote: I like it 0 Vote: I do not like it

      Yea, sorry, I'm not a lattice expert and did not know the name semilattice >.<

»
5 months ago, # |
  Vote: I like it +13 Vote: I do not like it

super straightforward approach, I love it.

»
5 months ago, # |
  Vote: I like it +16 Vote: I do not like it

"Does nobody really know this?" — it's certainly the way I always thought about it and I didn't even know that it has any FWHT/FFT connections xd

»
5 months ago, # |
  Vote: I like it +69 Vote: I do not like it

I still don't know what FWHT is...

  • »
    »
    5 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    This is literally the transform you do to the array for XOR convolution...

»
5 months ago, # |
  Vote: I like it -10 Vote: I do not like it
  1. This looks amazing
  2. Can you do non-binary base xor convolution, please mister legendary grandmaster?
»
5 months ago, # |
Rev. 10   Vote: I like it +11 Vote: I do not like it

There is a super nice and general way to view all kinds of transforms that are used for different kinds of convolution. The trick is to think about properties (functions) $$$f$$$ such that

$$$f(j) \cdot f(k) = f(j \odot k)$$$

where $$$\odot$$$ is the operation being convolved over. In your example of OR-convolution, the properties you are using are

$$$f_i(j) = [j \subseteq i]$$$

where

$$$[j \subseteq i] \cdot [k \subseteq i] = [(j \cup k) \subseteq i]$$$

.

As another example, let's take circular convolution with periodicity $$$n$$$. Meaning you are given two polynomials $$$A(x)$$$ and $$$B(x)$$$, and your goal is to find the polynomial

$$$C(x) = A(x) B(x) \bmod (x^n - 1)$$$

, which is an $$$(n-1)$$$-degree polynomial.

Note that for any complex number $$$\omega$$$ such that

$$$\omega^n = 1$$$

, we have that

$$$C(\omega) = A(\omega) B(\omega)$$$

. There are $$$n$$$ such complex numbers, and $$$C$$$ is an $$$(n - 1)$$$-degree polynomial. This means that by evaluating $$$A$$$ and $$$B$$$ in all of the $$$\omega$$$:s we have enough information to calculate $$$C$$$ using interpolation. The properties "$$$f$$$" that we are using here is the evaluation of a polynomial at the $$$\omega$$$:s.

So what is FFT? If you give FFT an $$$(n-1)$$$-degree polynomial, it evaluates the polynomial in all $$$n$$$-th roots of unity (the $$$\omega$$$:s). Inverse FFT is the reverse of this: it finds an $$$(n-1)$$$-degree polynomial using interpolation. So if you have FFT and IFFT implemented, then it is trivial to implement circular convolution.

Here is an example of a Python implemention of circular convolution using FFT.

# Input:
# Two (n-1)-degree polynomials A and B.
# Integer n.
#
# Output:
# C := A * B (mod x^n - 1) 

def circular_convolution(A, B, n):
    # FFT / IFFT implementations are assumed, e.g. numpy.fft.fft / numpy.fft.ifft
    assert n == len(A) == len(B)
    A_eval = FFT(A)
    B_eval = FFT(B)
    C_eval = [Aw*Bw for Aw,Bw in zip(A_eval, B_eval)]
    C = IFFT(C_eval)
    return C
  • »
    »
    5 months ago, # ^ |
      Vote: I like it +18 Vote: I do not like it

    Sir, I think the point of this blog is specifically to avoid this general way to view them...

    • »
      »
      »
      5 months ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      True (and this blog does a good job of explaining it in that way), but the DPs used for such transforms usually end up being representable as such maps. There's stuff like the infimal convolution that feels hard to explain in such a simplified manner, but is useful in a lot of places nevertheless (for example, in Alien's trick).

    • »
      »
      »
      5 months ago, # ^ |
        Vote: I like it +8 Vote: I do not like it

      No, just using sensible terminology and approach.

      • »
        »
        »
        »
        5 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        But pajenegod uses the same approach as the blogs on fwht you complain about...

        • »
          »
          »
          »
          »
          5 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Yeah, glad that you know where it got wrong

          • »
            »
            »
            »
            »
            »
            5 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            If something has an established name, it's sensible to call it by the name, rather than explain it without mentioning the name...

            • »
              »
              »
              »
              »
              »
              »
              5 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              Opposite of how I learned to write: Use the easiest way to describe things.

              I mean, I think your posts have a specific audience, they probably know what's going on, so it's ok. I just found it sad that nobody discussed this for general audience (such as me).

              • »
                »
                »
                »
                »
                »
                »
                »
                5 months ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

                I mean, I get your point. It's much worse in math textbooks and I hate it. But complaining that blog about something that is exactly FWHT (XOR convolution) mentions FWHT and FFT and skipping it because of this is still weird.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  5 months ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

Considering that Um_nik doesn't know FWHT, nor do I, most schoolchildren won't, and my 2400+ friends say they are enlightened, I don't think this is a niche position.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  5 months ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

                  Bruh, it's more like you and Um_nik know it, but refuse to acknowledge it by this name.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  5 months ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

No, I really don't know it. Really. I kinda know FFT, but I have zero idea about FWHT.

            • »
              »
              »
              »
              »
              »
              »
              5 months ago, # ^ |
                Vote: I like it 0 Vote: I do not like it

              I don't think that's always true. Do you know what "Kadane's algorithm" is?

              • »
                »
                »
                »
                »
                »
                »
                »
                5 months ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

It's the algorithm to find the max-sum segment. I constantly forget its details because prefix sums are more natural for this. I think it still makes sense to mention its name in a tutorial about it, and it shouldn't divert people from reading the tutorial. Besides, I don't know a better name for it anyway.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  5 months ago, # ^ |
                    Vote: I like it 0 Vote: I do not like it

                  That depends on people I think. The algorithm in this situation is so simple, that giving it a name (from my viewpoint) creates an unearned aura of scientific significance and/or complexity. I don't think that algorithm deserves a name, and I would prefer to refer to it by "max-sum subsegment".

The situation is a bit different with FWHT and XOR convolution. Here I just think that "XOR convolution in

$$$O(2^n n)$$$

" is the algorithm, and FWHT might be the name of the transform used in the algorithm, but mentioning it instead of XOR convolution for me seems like an obfuscation, like you are trying to hide XOR convolution and/or give it a scientific-sounding name.

And I have to acknowledge that my position is weird because I don't have anything against saying FFT instead of "polynomial multiplication in $$$O(n \log n)$$$".

              • »
                »
                »
                »
                »
                »
                »
                »
                5 months ago, # ^ |
                  Vote: I like it +21 Vote: I do not like it

One day someone said he was using the "imos method" to do some stuff. I asked, what's that trick? And it was a difference array. I almost died. I don't know who "imos" is, but shame on him. I can say that. That's communication crashing because of some crazy ego.

              • »
                »
                »
                »
                »
                »
                »
                »
                5 months ago, # ^ |
                  Vote: I like it 0 Vote: I do not like it

Here's an even more obtuse example: you can use something called the Bird-Meertens formalism to derive Kadane's algorithm from the $$$O(n^3)$$$ definition. Though it is definitely a cool thing when it comes to deriving programs formally from their specification (and I really like it for that reason), it's not really useful to learn in the context of CP.

                I would say it is more useful to name ideas rather than to name specific algorithms (so stuff like posets etc. make more sense in this context, but FWHT in isolation does not).

    • »
      »
      »
      5 months ago, # ^ |
        Vote: I like it +18 Vote: I do not like it

      Well yes, this blog is about doing OR-convolution using sum-of-subsets.

      However I think that it is still good to point out that there are very similar explanations of how to do other convolutions. For example the blog just leaves out XOR-convolution claiming that "My approach can't do XOR convolution". Which is not entirely true.

The transform used for XOR-convolution is

$$$\hat{A}[i] = \sum_{j} (-1)^{\text{popcount}(i \wedge j)} A[j]$$$

.

The reason for how I know this is explained in my original comment. It has to do with the fact that

$$$f_i(j) \cdot f_i(k) = f_i(j \oplus k)$$$

for

$$$f_i(j) = (-1)^{\text{popcount}(i \wedge j)}$$$

,

$$$i, j, k \in \{0, 1, \dots, 2^n - 1\}$$$

.
      • »
        »
        »
        »
        5 months ago, # ^ |
        Rev. 5   Vote: I like it 0 Vote: I do not like it

Does it have any natural explanation?

The one I know is to treat $$$A$$$ and $$$B$$$ as multivariate polynomials, where the entry with the index

$$$i = \overline{i_n \dots i_1 i_0}$$$

corresponds to the coefficient near

$$$x_0^{i_0} x_1^{i_1} \cdots x_n^{i_n}$$$

for $$$(n+1)$$$ variables

$$$x_0, x_1, \dots, x_n$$$

, one per bit. We want them to behave like

$$$x^i \cdot x^j = x^{i \oplus j}$$$

where $$$i \oplus j$$$ is the XOR of $$$i$$$ and $$$j$$$. So, we evaluate them in numbers

$$$x_0, \dots, x_n$$$

such that

$$$x_k^2 = 1$$$

for each bit $$$k$$$. Conveniently it's equivalent to

$$$(x_k - 1)(x_k + 1) = 0$$$

, hence

$$$x_k = \pm 1$$$

, so the transform for the XOR convolution is equivalent to evaluating these multivariate polynomials in the vertices of the cube

$$$\{-1, +1\}^{n+1}$$$

.
        • »
          »
          »
          »
          »
          5 months ago, # ^ |
          Rev. 3   Vote: I like it 0 Vote: I do not like it

ko_osaga, this (poly eval/interpolation interpretation) and the comment below (set-theoretic interpretation) are essentially FWHT. Do you have any idea of it now? @_@

          • »
            »
            »
            »
            »
            »
            5 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            Ok, adamant eventually made me understand this weird math stuff. You win ;_;

            I think I get that set theory 100%, but for the poly one I don't know how it corresponds to an algorithm. But I just want to chill, so plz don't explain it

            • »
              »
              »
              »
              »
              »
              »
              5 months ago, # ^ |
              Rev. 3   Vote: I like it +8 Vote: I do not like it

              I think, you just change

              A[j] += A[j - (1 << i)];
              

              to

              tie(A[j - (1 << i)], A[j]) = make_pair(
                A[j - (1 << i)] + A[j],
                A[j - (1 << i)] - A[j]
              );
              

In your impl of the OR transform, as the OR transform is eval in

$$$\{0, 1\}^{n+1}$$$

.
        • »
          »
          »
          »
          »
          5 months ago, # ^ |
          Rev. 3   Vote: I like it +24 Vote: I do not like it

Here is how I would intuitively explain the transforms of many different types of convolutions.

OR-convolution

$$$C[k] = \sum_{i \cup j = k} A[i] B[j]$$$

uses the transform

$$$\hat{A}[i] = \sum_{j \subseteq i} A[j]$$$

(called sum-of-subsets transform)

since

$$$[j \subseteq i] \cdot [k \subseteq i] = [(j \cup k) \subseteq i]$$$

.

AND-convolution

$$$C[k] = \sum_{i \cap j = k} A[i] B[j]$$$

uses the transform

$$$\hat{A}[i] = \sum_{j \supseteq i} A[j]$$$

since

$$$[j \supseteq i] \cdot [k \supseteq i] = [(j \cap k) \supseteq i]$$$

.

XOR-convolution

$$$C[k] = \sum_{i \oplus j = k} A[i] B[j]$$$

uses the transform

$$$\hat{A}[i] = \sum_{j} (-1)^{\text{popcount}(i \wedge j)} A[j]$$$

(called FWHT)

since

$$$(-1)^{\text{popcount}(i \wedge j)} \cdot (-1)^{\text{popcount}(i \wedge k)} = (-1)^{\text{popcount}(i \wedge (j \oplus k))}$$$

.

Circular convolution mod $$$n$$$

$$$C[k] = \sum_{(i + j) \bmod n = k} A[i] B[j]$$$

uses the transform

$$$\hat{A}[i] = \sum_{j} \omega^{ij} A[j], \quad \omega = e^{2 \pi \sqrt{-1} / n}$$$

(called FFT)

since

$$$\omega^{ij} \cdot \omega^{ik} = \omega^{i(j + k)} = \omega^{i((j + k) \bmod n)}$$$

.

GCD-convolution

$$$C[k] = \sum_{\gcd(i, j) = k} A[i] B[j]$$$

uses the transform

$$$\hat{A}[i] = \sum_{i \mid j} A[j]$$$

since

$$$[i \mid j] \cdot [i \mid k] = [i \mid \gcd(j, k)]$$$

.

LCM-convolution

$$$C[k] = \sum_{\operatorname{lcm}(i, j) = k} A[i] B[j]$$$

uses the transform

$$$\hat{A}[i] = \sum_{j \mid i} A[j]$$$

since

$$$[j \mid i] \cdot [k \mid i] = [\operatorname{lcm}(j, k) \mid i]$$$

.
          • »
            »
            »
            »
            »
            »
            5 months ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

            Oh, I see. Yeah, looking on it this way makes a lot of sense. The family of transforms for XOR convolution still looks arbitrary and somewhat unmotivated, but it's still a very nice way to present it.

      • »
        »
        »
        »
        5 months ago, # ^ |
        Rev. 3   Vote: I like it +10 Vote: I do not like it

I suppose, from the set-theoretic point of view you define it as

$$$f_i(j) = (-1)^{|i \cap j|}$$$

and then

$$$f_i(j) \cdot f_i(k) = (-1)^{|i \cap j| + |i \cap k|}$$$

which then rewrites as

$$$(-1)^{|i \cap (j \Delta k)|}$$$

which explains the formula. But there's no way I would ever consider such a transform naturally... Like, I don't see a meaningful relaxation of $$$i \Delta j = k$$$ similar to how the blog goes from $$$i \cup j = k$$$ to

$$$(i \cup j) \subseteq k$$$

.

P.S.

$$$i \Delta j$$$

is the symmetric difference of $$$i$$$ and $$$j$$$.
  • »
    »
    5 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    This is clean, good!

    We need $$$n = 2^k$$$ to have an $$$O(n \log n)$$$ algorithm, right?

    • »
      »
      »
      5 months ago, # ^ |
      Rev. 2   Vote: I like it +10 Vote: I do not like it

Standard FFT needs it, but you can reduce arbitrary $$$n$$$ to a convolution (which is doable with powers of $$$2$$$) with the chirp Z-transform. On the other hand, if you only want to compute

$$$A(x) B(x) \bmod (x^n - 1)$$$

, rather than

$$$A(\omega) B(\omega)$$$

for each root of

$$$x^n - 1$$$

, you can as well compute

$$$A(x) B(x)$$$

in full and reduce it modulo $$$x^n-1$$$ in $$$O(n)$$$.
    • »
      »
      »
      5 months ago, # ^ |
      Rev. 3   Vote: I like it 0 Vote: I do not like it

      Essentially yes.

      However, there is a common trick to get around this limitation. By paying a factor of 2 extra, you can use circular convolution to calculate usual convolution (no modding by $$$(x^n - 1)$$$). This is because modding with $$$(x^n - 1)$$$ doesn't matter if $$$n$$$ becomes large enough.

      So to do circular convolution for an $$$n$$$ that is not a power of 2, you start by doing a circular convolution with a large power of 2 as its periodicity. Then you simply mod the result of that by $$$x^n - 1$$$.

      • »
        »
        »
        »
        5 months ago, # ^ |
          Vote: I like it +10 Vote: I do not like it

Modding by $$$x^n$$$ is removing all coefficients with index $$$\geq n$$$. You want to mod by $$$x^n-1$$$, that is, substitute

$$$x^n = 1$$$

and turn

$$$x^{an + b}$$$

into $$$x^b$$$. Note that you can't do this if you want to invert the circular convolution (e.g. in 901E - Cyclic Cipher).
        • »
          »
          »
          »
          »
          5 months ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

Ah oops, I meant

$$$x^n - 1$$$

and not

$$$x^n$$$

. Thanks for pointing that out. I'll fix it.
»
5 months ago, # |
  Vote: I like it +14 Vote: I do not like it

What for uncommon people like us?

»
5 months ago, # |
  Vote: I like it -11 Vote: I do not like it

Thanks for the easy to understand blog — it is clearly another step in the right direction since it provides more evidence for the claim that all problems can be solved using DP.

»
5 months ago, # |
  Vote: I like it +10 Vote: I do not like it

You may also consider what you did (specifically sum-of-subsets) as calculating the following DP: dp[b][i] is the sum of $$$f(j)$$$ over all $$$j$$$ such that $$$j$$$ is a subset of $$$i$$$ and may differ from it only in the last (or first, doesn't matter) $$$b$$$ bits. How do you calculate dp[b][i] from dp[b - 1][*]? If the $$$b$$$-th bit of $$$i$$$ is zero, then these values coincide; otherwise dp[b][i] = dp[b - 1][i] + dp[b - 1][i - (1 << b)]. Your two nested loops compute this DP one layer at a time.

Keeping this in mind, you may generalize this to xor-convolution. Indeed, let dp[b][i] be the sum of $$$f(j)$$$ over $$$j$$$ that only differ from $$$i$$$ in the last $$$b$$$ bits, but also each $$$f(j)$$$ is multiplied by $$$-1$$$ if the number of bits they differ in is odd. Now I think you also know how to compute dp[b][i] from dp[b - 1][*].

I don't know if this makes any visual sense to you, but it is probably easier to understand the xor-convolution in terms of dp.

»
5 months ago, # |
  Vote: I like it -32 Vote: I do not like it

Number of legends' comments in this blog >>>>>>>>>>>

»
5 months ago, # |
  Vote: I like it -76 Vote: I do not like it

Do not read this article if you are not red. Go solve binary search instead.

  • »
    »
    5 months ago, # ^ |
      Vote: I like it +42 Vote: I do not like it

    Was funny before 1000000007th time.

  • »
    »
    5 months ago, # ^ |
      Vote: I like it +48 Vote: I do not like it

    It's not funny and the post is not exclusively for 2400+. If you understand SOS DP you can read it.

  • »
    »
    5 months ago, # ^ |
      Vote: I like it -25 Vote: I do not like it

    keep it up. do nto lsten to ahders. yuo aer fnuonny.

    -Yuor Snciernly Btsowana

»
5 months ago, # |
  Vote: I like it -8 Vote: I do not like it

Wow FMT&FWT!

»
5 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Cool! I used this technique to solve these problems:

You'll have to join this group for the second one.

Thank you for writing such a clear and concise blog.

»
5 months ago, # |
Rev. 2   Vote: I like it +5 Vote: I do not like it

What is this set notation in your summation ($$$\sum$$$) notation? Apologies if the question is a duplicate or too noob.

  • »
    »
    5 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    The numbers $$$j, k$$$ are not sets per se, but I'm treating each one as a set of bit positions, which makes sense if you view the number as a bitmask.

  • »
    »
    5 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Can somebody explain how to read/understand this formula: $$$C[i] = \sum_{j \cup k = i} A[j] B[k]$$$

    If A={1,2,3} and B={4,5,6}, what would C[] be? Do A[] and B[] have the same size?

    • »
      »
      »
      5 months ago, # ^ |
        Vote: I like it +14 Vote: I do not like it

      Using bitwise operations, the formula is $$$C[i] = \displaystyle\sum_{j | k = i}A[j]B[k]$$$.

      The formula tells us that for all $$$0 \le i < 2^N$$$, $$$C[i]$$$ is the sum of $$$A[j]B[k]$$$ over all pairs $$$(j, k)$$$ such that $$$0 \le j, k < 2^N$$$ and $$$j | k = i$$$.

      The first line of the blog says ... Given $$$A$$$, $$$B$$$ of size $$$2^N$$$.... Your arrays $$$A$$$ and $$$B$$$ are not of size $$$2^N$$$. But let's suppose that $$$N = 2$$$ and $$$A = [1, 2, 3, 4]$$$ and $$$B = [5, 6, 7, 8]$$$.

      Now, we go through all pairs $$$(j, k)$$$ such that $$$0 \le j, k < 2^N$$$ or in this case, $$$0 \le j, k < 4$$$. We will calculate the bitwise OR of each of these pairs.

      Bitwise OR of all pairs

      We can group them like this:

      • Bitwise OR is 0: $$$(0, 0)$$$
      • Bitwise OR is 1: $$$(0, 1)$$$, $$$(1, 0)$$$, $$$(1, 1)$$$
      • Bitwise OR is 2: $$$(0, 2)$$$, $$$(2, 0)$$$, $$$(2, 2)$$$
      • Bitwise OR is 3: $$$(0, 3)$$$, $$$(1, 2)$$$, $$$(1, 3)$$$, $$$(2, 1)$$$, $$$(2, 3)$$$, $$$(3, 0)$$$, $$$(3, 1)$$$, $$$(3, 2)$$$, $$$(3, 3)$$$

      Thus,

      $$$C[0] = A[0]B[0] = 1 \cdot 5 = 5$$$

      $$$C[1] = A[0]B[1] + A[1]B[0] + A[1]B[1]$$$

      $$$C[1] = 1 \cdot 6 + 2 \cdot 5 + 2 \cdot 6$$$

      $$$C[1] = 6 + 10 + 12 = 28$$$

      $$$C[2] = A[0]B[2] + A[2]B[0] + A[2]B[2]$$$

      $$$C[2] = 1 \cdot 7 + 3 \cdot 5 + 3 \cdot 7$$$

      $$$C[2] = 7 + 15 + 21 = 43$$$

      $$$C[3] = A[0]B[3] + A[1]B[2] + A[1]B[3] + A[2]B[1] + A[2]B[3] + A[3]B[0] + A[3]B[1] + A[3]B[2] + A[3]B[3]$$$

      $$$C[3] = 1 \cdot 8 + 2 \cdot 7 + 2 \cdot 8 + 3 \cdot 6 + 3 \cdot 8 + 4 \cdot 5 + 4 \cdot 6 + 4 \cdot 7 + 4 \cdot 8$$$

      $$$C[3] = 8 + 14 + 16 + 18 + 24 + 20 + 24 + 28 + 32 = 184$$$

      This means that $$$C = [5, 28, 43, 184]$$$.

      You can check that this is correct using the code in the blog.

      Code
      Results