ftiasch's blog

By ftiasch, history, 3 years ago, In English

I once saw a generalized version of 1658D2 - 388535 (hard version) in an old Petrozavodsk contest, which unfortunately I couldn't solve. That is to say, given two arrays $$$a_1, \dots, a_n$$$ and $$$b_1, \dots, b_n$$$, find the least $$$x$$$ such that the multisets $$$\{a_1 \oplus x, \dots, a_n \oplus x\}$$$ and $$$\{b_1, \dots, b_n\}$$$ are identical.

The constraints are roughly $$$n \leq 10^5$$$ and $$$a_i, b_i < 2^{30}$$$.

As the task has somehow reappeared, I think this is the best time to ask for help.

I would also appreciate anyone who can provide the exact source of the mentioned task. Thanks in advance.



»
3 years ago, # |

Thanks for posting this problem. I think I have a randomized solution that seems quite strong (I am not familiar with the analysis of randomized algorithms).

If $$$\max(a_i,b_i) < 2^z$$$, then we can solve this in $$$O(n + z \cdot 2^z)$$$. We make two arrays $$$cnt_a$$$ and $$$cnt_b$$$ of size $$$2^z$$$, where we store the number of occurrences of each element of $$$a$$$ and $$$b$$$ respectively.

Let $$$cnt$$$ be the xor-convolution of $$$cnt_a$$$ and $$$cnt_b$$$, defined by $$$cnt_k=\sum\limits_{i \oplus j = k} cnt_{a,i} \cdot cnt_{b,j}$$$. We can calculate it in $$$O(z \cdot 2^z)$$$ using FWHT. Now, $$$x$$$ is a possible answer iff $$$cnt_x= \sum\limits_i cnt_{a,i}^2$$$ (one can also show that this value of $$$cnt_x$$$ is the maximum possible, using $$$2ab \leq a^2+b^2$$$).
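
As a concrete illustration, here is a minimal Python sketch of this small-$$$z$$$ check (the function names are mine, not from the linked code):

```python
# Sketch of the FWHT check for max(a_i, b_i) < 2^z, assuming equal-length
# arrays; candidate_xors returns every x whose cnt_x hits the target.
def fwht(v):
    # in-place Walsh-Hadamard transform; applying it twice multiplies by len(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

def candidate_xors(a, b, z):
    size = 1 << z
    cnt_a, cnt_b = [0] * size, [0] * size
    for x in a:
        cnt_a[x] += 1
    for x in b:
        cnt_b[x] += 1
    target = sum(c * c for c in cnt_a)  # sum of cnt_{a,i}^2
    fa, fb = fwht(cnt_a[:]), fwht(cnt_b[:])
    conv = fwht([fa[i] * fb[i] for i in range(size)])  # size * xor-convolution
    return [k for k in range(size) if conv[k] == size * target]
```

For example, `candidate_xors([1, 2, 3], [4, 7, 6], 3)` returns `[5]`, since $$$\{1 \oplus 5, 2 \oplus 5, 3 \oplus 5\} = \{4, 7, 6\}$$$.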

Returning to the original limits with $$$a_i,b_i < 2^{30}$$$, we treat each value as a $$$30 \times 1$$$ vector over $$$F_2$$$. Then we make a $$$17 \times 30$$$ matrix, putting $$$0$$$ or $$$1$$$ uniformly at random in each entry. This matrix lets us pretend that $$$a_i,b_i < 2^{17}$$$ if we pass the inputs through it (with arithmetic modulo $$$2$$$).

Since the only possible values of $$$x$$$ are of the form $$$a_1 \oplus b_i$$$, there are only $$$O(n)$$$ candidates for $$$x$$$. Every time we do the check, we just scan all $$$O(n)$$$ remaining candidates and throw away the bad ones.

I think if we repeat this process of generating the matrix maybe $$$5$$$ times, we have a good chance of being correct (and we also recover all possible $$$x$$$).
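
The projection step relies on the map being linear over $$$F_2$$$, so it commutes with XOR and the projected instance is again an instance of the same problem. A quick Python sketch of such a random map (names are mine):

```python
import random

def random_projection(z_small, z_big=30):
    # each output bit is the parity of a random subset of the input bits,
    # i.e. one row of a random z_small x z_big matrix over F_2
    rows = [random.getrandbits(z_big) for _ in range(z_small)]

    def project(v):
        out = 0
        for i, row in enumerate(rows):
            out |= (bin(v & row).count("1") & 1) << i
        return out

    return project
```

Since `project(u ^ v) == project(u) ^ project(v)`, applying the map to all $$$a_i$$$, $$$b_i$$$ and candidate $$$x$$$ preserves the multiset condition in the smaller space; a bad candidate can only survive through an unlucky collision.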

Edit:
here is my code, since Qingyu has linked the problem below (if you are going to code this, please note that you must read input from files and print $$$-1$$$ if it is impossible QAQ). I used $$$z=17$$$ and $$$10$$$ shots here. For some reason, when you make $$$z$$$ smaller, say $$$10$$$, you can still get WA even with $$$100$$$ shots. Why do we need so many shots when $$$z$$$ is small?

  • »
    »
    3 years ago, # ^ |

Your solution is very similar to the one YuukaKazami told me about. He said he had a randomized algorithm involving random projection or something similar, though I didn't understand the full details at the time.

Now I am trying to analyze the success probability of the algorithm, but no progress has been made so far.

  • »
    »
    3 years ago, # ^ |

    Let me post some of my premature thoughts here.

First of all, suppose that $$$a_1, \dots, a_n, b_1, \dots, b_n$$$ lie in the linear space $$$F_2^k$$$. The random binary matrix you choose is a random linear map $$$\mathcal{L}: F_2^k \to F_2^l$$$ (where $$$l < k$$$).

The candidates for the answer are $$$c_1 = a_1 \oplus b_1, c_2 = a_1 \oplus b_2, \dots, c_n = a_1 \oplus b_n$$$. Some of them are valid, which is good, while the others are not.

When we are lucky, the random linear map $$$\mathcal{L}$$$ sends the good candidates and the bad candidates to distinct elements of $$$F_2^l$$$.

Here I think I can partially answer your question. If $$$2^l = O(\sqrt{n})$$$, then by the birthday paradox, with high probability two of our candidates will collide. If there are $$$n/2$$$ good and $$$n/2$$$ bad candidates (is that possible?), we will very likely mix things up.

Following the idea of random linear maps, I have found Alon's paper. However, it does not contain an analysis of our specific task.

    • »
      »
      »
      3 years ago, # ^ |

Suppose that $$$c^*$$$ is bad. Then there exists some $$$a^*$$$ satisfying $$$|\lbrace i \mid a_i = a^*\rbrace| \gt |\lbrace j \mid a^* + c^* = b_j \rbrace|$$$. For $$$c^*$$$ to become good when $$$a$$$, $$$b$$$, and $$$c^*$$$ are mapped by $$$\mathcal{L}$$$ to a lower-dimensional space, there must exist $$$j$$$ such that $$$a^* + c^* \neq b_j$$$ and $$$\mathcal{L}(a^*) + \mathcal{L}(c^*) = \mathcal{L}(b_j)$$$. From the linearity of $$$\mathcal{L}$$$ we get $$$\mathcal{L}(a^* + c^* - b_j) = 0$$$. Since $$$a^* + c^* - b_j$$$ is nonzero, the probability of obtaining such an $$$\mathcal{L}$$$ is $$$2^{-l}$$$, so by a union bound the probability that this happens for any $$$j$$$ is less than $$$n2^{-l}$$$. If we take the smallest $$$l$$$ satisfying $$$2n \leq 2^l$$$, any bad $$$c_i$$$ is removed with probability at least $$$1/2$$$. By repeating this procedure $$$m + \log_2(n)$$$ times, the probability of getting a wrong answer is at most $$$2^{-m}$$$. The time complexity is $$$O((m + \log_2(n))\, n \log(A))$$$.
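
The crux of this argument is that a fixed nonzero vector is sent to zero by a uniformly random binary matrix with probability exactly $$$2^{-l}$$$. A tiny exact computation (mine, not from the comment) confirms this:

```python
# For fixed nonzero v in F_2^k, each row r of a uniform random l x k binary
# matrix satisfies parity(r & v) = 0 independently with probability 1/2,
# so P[L(v) = 0] = 2^{-l}. We verify the per-row probability by enumeration.
def prob_maps_to_zero(v, k, l):
    good_rows = sum(1 for r in range(1 << k) if bin(r & v).count("1") % 2 == 0)
    return (good_rows / (1 << k)) ** l
```

For instance, `prob_maps_to_zero(0b101, 3, 2)` gives $$$0.25 = 2^{-2}$$$, and the value is the same for every nonzero $$$v$$$.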

»
3 years ago, # |

Maybe this problem is the one you are referring to. The source of the problem is Petrozavodsk Winter 2010 Moscow SU Contest.

I have seen this problem and discussed it with suncongbo and qazswedx2 last year, but we failed to solve it faster than $$$O(n^2 \log n)$$$. We also found jqdai0815's post about this problem, which also describes a brute-force solution.

»
3 years ago, # |

I once discussed this problem with jqdai0815, and we got a randomized solution (quite strange and perhaps not so practical, but with $$$\tilde O(n)$$$ complexity).

The problem can be converted into: given an array $$$a$$$, find all $$$x$$$ such that $$$\{a_i \oplus x\} = \{a_i\}$$$ as multisets. It can be shown that such $$$x$$$ form a linear space, and you only need to let $$$a = \{2a_i\} \cup \{2b_i + 1\}$$$.

Iterate through $$$i$$$, compute $$$x = b_i \oplus a_1$$$, and check whether each $$$a_j \oplus x$$$ is in $$$b$$$, scanning the $$$a_j$$$ in a random order. If we cannot show that $$$x$$$ is illegal within $$$c = O(\log n)$$$ rounds, it's reasonable to believe (with $$$\exp(-O(c))$$$ probability of failure) that there are $$$\Omega(n)$$$ pairs with XOR equal to $$$x$$$. Divide $$$a$$$ into two parts: the paired elements $$$P$$$ and the unpaired elements $$$Q$$$.
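
A sketch of this rejection test in Python (simplified to independent membership probes against a counter of $$$b$$$; the actual algorithm scans a random permutation without replacement):

```python
import random
from collections import Counter

def probably_valid(a, b_counter, x, rounds):
    # probe random positions; a wrong x is usually caught within a few rounds
    for _ in range(rounds):
        j = random.randrange(len(a))
        if b_counter[a[j] ^ x] == 0:
            return False  # a_j ^ x does not occur in b at all: x is illegal
    return True  # no witness found: plausibly many pairs with XOR = x
```

For example, with `a = [1, 2, 3]` and `b = [4, 7, 6]`, the candidate `x = 5` always survives, while `x = 0` is rejected on the first probe.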

For any legal $$$y$$$, $$$a_i \oplus y$$$ is paired iff $$$a_i$$$ is paired, so we can calculate the automorphisms of $$$P$$$ and $$$Q$$$ individually and intersect them. Also, for $$$P$$$, we can keep only the elements with some bit equal to $$$0$$$, reducing its size to $$$\frac{|P|}{2}$$$. Thus $$$T(n) = T(\frac{an}{2}) + T((1 - a)n) + O(n \log n) = O(n \log^2 n)$$$ (where $$$a < 1$$$ is the paired fraction).

The solution above seems quite strange, though, and I believe more elegant ones exist (perhaps errorgorn's can be proved correct?).

»
3 years ago, # |

Edit: This is a simplified/optimized version of my previous edit.

I believe this can be solved deterministically in $$$O(n\log A)$$$.

First, we generalize the problem to finding the minimum $$$x$$$ that works for multiple instances of the original problem.

Look at any instance. If the number of $$$a_i$$$ whose $$$k$$$th bit is $$$1$$$ is not exactly half, we can determine the $$$k$$$th bit of $$$X$$$ by counting how many of the $$$b_i$$$ have the $$$k$$$th bit $$$1$$$. Suppose, for example, that the counts differ. Then the $$$k$$$th bit of $$$X$$$ must be $$$1$$$, so every $$$a_i$$$ with $$$k$$$th bit $$$0$$$ must get mapped to a $$$b_i$$$ with $$$k$$$th bit $$$1$$$, and vice versa. Thus, for all instances, we partition the $$$a_i$$$ and $$$b_i$$$ by the $$$k$$$th bit and align them appropriately to form (up to) two instances. Once we do this, the $$$k$$$th bit becomes redundant and we can delete it from all numbers (this is undone when recovering the answer).

We can generalize this idea. Call a set $$$s$$$ of bits (I will identify sets with their bitmasks) balanced if each of the $$$2^{|s|}$$$ combinations of them appears an equal number of times among the $$$a_i$$$, i.e. $$$\#\{i\in[n]:a_i\cap s=t\}$$$ is the same for all $$$t\subseteq s$$$. Suppose a set of bits $$$s$$$ is not balanced but every proper subset is balanced. Then, by linear algebra, $$$\#\{i\in[n]:a_i\cap s=t\}$$$ is the same for all $$$t\subseteq s$$$ with even popcount, the same for all $$$t\subseteq s$$$ with odd popcount, and different between the two cases. (For example, if $$$|s|=2$$$, we know that $$$00$$$ and $$$11$$$ occur equally often, $$$01$$$ and $$$10$$$ occur equally often, and these numbers of occurrences differ.) We can then determine the parity of the popcount of $$$s\cap X$$$ by counting the same frequencies among the $$$b_i$$$.

Then, as in the single-bit case, for all instances, we partition the $$$a_i$$$ and $$$b_i$$$ by the parity of the popcount of $$$a_i\cap s$$$ and align them appropriately to form two instances. We can now delete the LSB of $$$s$$$ from all numbers, since it is now redundant (deleting the LSB makes recovering the minimum $$$X$$$ easier). If an instance is balanced for all remaining bits, we can disregard it.
Note that the reduction process can repeat at most $$$O(\log A)$$$ times, since a bit is deleted each time.

Note that any set of more than $$$\log_2 n$$$ bits cannot be balanced. Thus, checking whether a set of bits is balanced can be done in $$$O(n)$$$ by keeping frequency counts in an array of size at most $$$n$$$. We can find a minimal unbalanced set of bits in $$$O(n\log n)$$$ by starting with any unbalanced set (the first $$$\lfloor \log_2 n\rfloor+1$$$ bits, or all bits if there are fewer) and greedily removing bits as long as some proper subset is unbalanced.
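
For concreteness, the balance test can be sketched in a few lines of Python (my naming, assuming $$$s$$$ is given as a bitmask):

```python
from collections import Counter

def is_balanced(a, s):
    # s is balanced iff every pattern a_i & s occurs, and all occur equally often
    freq = Counter(x & s for x in a)
    return len(freq) == 1 << bin(s).count("1") and len(set(freq.values())) == 1
```

For example, `is_balanced([0, 1, 2, 3], 0b11)` is `True` while `is_balanced([0, 1, 2], 0b11)` is `False`; note the first condition fails automatically once more than $$$\log_2 n$$$ bits are involved, matching the observation above.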

We can optimize this to $$$O(n)$$$ by observing that a minimal balanced prefix can be computed in $$$O(n)$$$ by reusing the frequency counts of larger sets for smaller sets, using the fact that the geometric series converges. Then, by linear algebra, if $$$S$$$ is balanced and $$$\{p\}$$$ is balanced but $$$S+\{p\}$$$ is unbalanced, then for all $$$T\subseteq S$$$, $$$T+\{p\}$$$ is balanced iff $$$T$$$ is balanced when restricting the $$$a_i$$$ to those with the $$$p$$$th bit set (or unset; there are equally many of each). Then we can make a recursive call with the number of $$$a_i$$$ halved.

O(n log A) code

»
3 years ago, # |

I have an algebraic approach. It seems quite heavy and impractical to implement, but the idea is quite straightforward.

Consider the finite field $$$\Bbbk = \mathrm{GF}(2^{m})$$$ where $$$m\geq 32$$$. The multiset $$$\{a_i\}$$$ equals $$$\{b_i\}$$$ iff $$$\prod_i(x - a_i) = \prod_i(x-b_i)$$$. So let $$$A(x) = \prod_i (x - a_i)$$$ and $$$B(x) = \prod_i (x - b_i)$$$; we only need to find $$$k$$$ such that $$$A(x+k) = B(x)$$$.

We sample $$$r$$$ points $$$x_1,\dots,x_r \in \Bbbk$$$. For each fixed $$$k$$$ that does not satisfy $$$A(x+k) = B(x)$$$, $$$k$$$ is a root of the polynomial $$$A(x_i + k) - B(x_i)$$$ (as a polynomial in $$$k$$$) with probability $$$\leq \frac{n}{|\Bbbk|}$$$. With probability $$$\geq 1 - |\Bbbk| \cdot \left(\frac{n}{|\Bbbk|}\right)^r$$$, the common roots of these polynomials are exactly the solutions.
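
To make the underlying identity concrete, here is a toy Python check in the small field $$$\mathrm{GF}(2^8)$$$ (the AES field, modulus $$$x^8+x^4+x^3+x+1$$$; the real solution would use $$$m \geq 32$$$):

```python
def gf_mul(u, v):
    # carry-less ("Russian peasant") multiplication in GF(2^8), reducing by 0x11B
    r = 0
    while v:
        if v & 1:
            r ^= u
        v >>= 1
        u <<= 1
        if u & 0x100:
            u ^= 0x11B
    return r

def poly_at(roots, t):
    # evaluate prod_i (t - r_i) at point t; in characteristic 2, minus is XOR
    p = 1
    for root in roots:
        p = gf_mul(p, t ^ root)
    return p
```

If $$$b_i = a_i \oplus k$$$ for all $$$i$$$, then $$$B(t) = \prod_i (t \oplus a_i \oplus k) = A(t \oplus k)$$$ holds at every point, e.g. `poly_at([4, 7, 6], t) == poly_at([1, 2, 3], t ^ 5)` for any byte `t`.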

We can take the $$$\gcd$$$ of these polynomials and compute its roots. Let $$$\mathsf{M}(n)$$$ denote the time of polynomial multiplication; we can compute the coefficients of $$$A(x)$$$ in $$$O(\mathsf{M}(n)\log n)$$$ time. Since the field has characteristic $$$2$$$, the coefficients of $$$A(x_i + k)$$$ can be computed by an FWT-like procedure using $$$O(n\log n)$$$ field operations. Half-GCD computes the $$$\gcd$$$ in $$$O(\mathsf{M}(n)\log n)$$$ time, and root-finding can be done in $$$O(\mathsf M(n)\log n \log (n |\Bbbk|))$$$.

(This can be improved to an expected $$$O(\mathsf M(n)\log (n|\Bbbk|))$$$ solution.)

  • »
    »
    3 years ago, # ^ |

A similar algorithm was proposed by yosupo and is noted here: https://twitter.com/yosupot/status/1368589333419073547

There are only $$$n$$$ possible values of $$$k$$$, namely $$$b_1 - a_1, b_2 - a_1, \dots, b_n - a_1$$$, that could satisfy $$$A(x + k) = B(x)$$$. We only need to determine whether $$$A(x + k) - B(x) = 0$$$ for each of them. Assign a random value to $$$x$$$ and evaluate the resulting polynomial in $$$k$$$ at the $$$n$$$ points $$$k = b_1 - a_1, b_2 - a_1, \dots, b_n - a_1$$$ via multipoint evaluation. The time complexity is $$$O(\mathsf{M}(n) \log(n))$$$.