PoPularPlusPlus's blog

By PoPularPlusPlus, 3 years ago, In English

1847A — The Man who became a God

tutorial
solution

1847B — Hamon Odyssey

tutorial
solution

1847C — Vampiric Powers, anyone?

tutorial
solution

1847D — Professor Higashikata

tutorial
solution

1847E — Triangle Platinum?

tutorial
deterministic solution
randomisation solution

1847F — The Boss's Identity

tutorial
solution
Vote: +124

» 3 years ago | +6

How is this Believer.Code solution accepted? In that solution the vector is initialised with size 1e5, and there are up to 1e4 test cases, so how can ~1e9 operations run within the 1-second time limit?

» 3 years ago, Rev. 2 | -23

hmm

  • » 3 years ago | 0

    What algorithm are you talking about? I'll go through the editorial later, but if there is an approach other than the one mentioned in the editorial, please do link it.

    • » 3 years ago | 0

      Problem C can be reduced to finding the maximum XOR of a subarray, which is a standard problem.

      • » 3 years ago | +33

        Reducing problem C to finding the maximum XOR of a subarray is not a trivial task (at least it was not for me; when I finally submitted a solution, I was still not sure whether it was correct).

        • » 3 years ago | +6

          I agree, it's not trivial; in fact I didn't even remotely think of that. I just guessed a workaround (and got negative delta ultimately).
          The contest was fair except for the initial queue problem, which has happened before, so it's not that big a deal.

        • » 3 years ago, Rev. 2 | 0

          Yes, it was not so easy to see that we have to find the maximum XOR; I figured that out after spending nearly 40 minutes on the problem. But sadly I didn't know the implementation. I just wish I had known it was a standard problem, because it would have made my rating go up.

        • » 3 years ago, Rev. 3 | 0
          I think I have a simpler approach. Let x[i] be the suffix XOR over [i..n]:

                  x1        x2        x3       x4   ...  xk
                           (append x3)
                (x1^x3)  (x2^x3)      0     (x4^x3) ... (xk^x3)   x3
                           (append x3^x4)
                (x1^x4)  (x2^x4)  (x3^x4)     0     ... (xk^x4)   x4  (x4^x3)

          Observation: every new element can be created with at most 2 XORs of suffix values, so just try all (2^8)*(2^8) candidate pairs and take the max.
          Observation: the XOR of two suffix values X[i]^X[j] is exactly the XOR of the subarray between i and j.
          
      • » 3 years ago | +9

        The community is pretty weird: the guy asked "what algorithm?", and I just answered that; in no way did I blame the authors for carelessness.
        I think people downvoted because they thought I was being critical of the authors.

    • » 3 years ago | +1

      https://www.geeksforgeeks.org/find-the-maximum-subarray-xor-in-a-given-array/ I saw the same code in a lot of submissions today along with the same comments lol.

      I myself copied the code from my old submission in this problem https://cses.fi/problemset/task/1655

  • » 3 years ago, Rev. 2 | +3

    But the point of this problem is to transform its several conditions into a standard problem. Can you find that train of thought on Google? I believe the main idea of the transformation is what the author wanted to talk about.

    • » 3 years ago | 0

      Can you please explain: in C, if we take i=1 and get a new power, then take i=1 again, we can get the max power — why will that not work? Please help.

      • » 3 years ago | +3

        Shouldn't it be 0 if you do so? It's XOR: any number XORed with itself is 0, and by doing so you XOR the 1st through nth numbers twice each, getting 0 in the end.

      • » 3 years ago | 0

        Sorry, bruh, I can't clearly understand what you mean, but I can share my idea for this problem. (Btw, this problem probably cannot be solved greedily.)

        We can find that the new "strength" equals a suffix XOR of the array. Keep in mind that the XOR of two identical numbers equals 0. Now there are two cases.

        First, if you choose a suffix starting at k1, then choose a suffix starting at k2 (with k2 < k1), you get the XOR of the subarray from k2 to (k1-1), because the elements from k1 to n appear twice and their XOR is 0.

        From this conclusion, we can clearly see that we need to find a contiguous subarray whose XOR is as big as possible.

        Notice that you can compute prefix XORs. By the properties of XOR, you can get any subarray's XOR from two prefix XORs.

        Choosing 2 numbers among n prefix XORs to maximise their XOR is a problem you can solve using a trie.

        Hope this comment helps you.

» 3 years ago | 0

I somehow cheesed F. My submission 212418362 computes every element of the (n-1)-sized sequences for each distinct queried length, so the complexity is O(n*k), where k is the number of distinct lengths appearing in the input. I don't know whether there is a test that exploits this; random cases can't.

» 3 years ago | +35

SegmentForces

» 3 years ago | +3

The contest was pretty good. I somehow couldn't find how to prove the assumption in C during the contest, and got lucky with my intuition there (which was that 2 operations are enough to reach the maximum)

» 3 years ago | 0

In problem D, can someone please explain "Now, this t can be found using various pbds like set, dsu, segment tree, etc."? Because it is the only thing I couldn't figure out.

  • » 3 years ago | 0

    You need the relative order of the first appearances for each index in string T, and when relative order comes into play, a pbds or a segment tree can be made use of. I'm not sure how to use a DSU.

    For segment tree I mapped first occurrences for each index to a counter variable starting from 1 and made a segment tree of N nodes to maintain range sum.

    • » 3 years ago | 0

      I was thinking, for every number from 1 to n, of finding the first interval [L,R] it lies in. How to do that in O(n log n)?

      • » 3 years ago | +12

        I used a set to find it. The set first takes all numbers from 1 to N and then I run lower bound on each L,R segment in order and remove the indexes which lie in [L, R]. I think it's pretty obvious that in this manner we can find the first occurrence for all indexes. An index once marked is removed from the set.

        My submission — https://mirror.codeforces.com/contest/1847/submission/212454814

        • » 3 years ago | +1

          I assigned priorities using a little trick. Make a jump array and initialise jump[i] = i+1. Now iterate over the m segments; for the i-th segment (li, ri), start from li: if li is not visited (keep a boolean bitset), assign current_priority to it and increase current_priority by 1; then set li = jump[li] and update jump[li] = max(jump[li], ri+1).

          Do this for all segments.

        • » 3 years ago | 0

          In your while loop the worst case is O(m), so the total time complexity of your set-based search is at least O(m*m); so how is it not TLE'ing?

  • » 3 years ago | 0

    I did it using a lazy segment tree (I saw many others doing it with sets, but I don't know how yet). Let T be an array of length n where T[i] = x means that x is the first index such that L[x] <= i <= R[x].

    Initialise T to inf (or any value greater than m).

    for i in range(1,m):
        do range update: T[j] = min(T[j],i) where L[i]<= j <= R[i]
    

    The range update can be done in O(logn) using a lazy segment tree (google it).

    I know this is overkill, and I am looking for someone to explain the method using set. But if you have code for lazy seg tree beforehand, then it really is easy to implement during the contest without any fuss.

    my submission

    • » 3 years ago | 0

      Thank you guys

    • » 3 years ago | +7

      We need to know the lowest-indexed substring present at a given index i of the string s, among the collection of all given substrings. Say we have this set for index i. When we move to i+1, we simply delete all substrings which end at i and insert all substrings which begin at i+1. So we iterate over all positions from 0 to n, performing m insert and m delete operations overall on the set, analogous to a line sweep.

      https://mirror.codeforces.com/contest/1847/submission/212435999

      • » 3 years ago | 0

        Thanks for this well-phrased explanation (I could understand the idea from reading your comment just once!).

        P.S. How did I not see this first?! (Maybe because I read about lazy segment trees just 2 weeks ago, and when I saw the exact problem, I got sucked in.)

  • » 3 years ago | +3

    The way I did it was to insert every number up to n into a set. Then for each l and r, I find the lower bound of l, and if it exists and is not greater than r, I remove it from the set. Then I find the lower bound of this number and keep doing this until either the set is empty or the lower bound exceeds r.

    Submission: 212923478

  • » 3 years ago | 0

    Can anyone tell me how to do this with DSU, please?

    I still can't figure it out.

    • » 3 years ago | +5

      The solution using DSU is actually quite elegant; it should look something like this:

      • At first, all indices are in their own components.
      • Every component will always represent some contiguous segment of indices.
      • For each component, we should store an integer $$$r$$$ — the rightmost value of the corresponding segment. This can easily be updated when merging components.
      • For each component, $$$r$$$ has never been picked before, but all other indices in the component have been picked already.
      • When you want to find all remaining values from a range $$$[L, R]$$$ in increasing order, you should start by finding the component containing $$$L$$$. Now, the value of $$$r$$$ in this component is the first remaining value. After handling this, merge the current component with the component of $$$r+1$$$. Now, the new component containing $$$L$$$ has a new value of $$$r$$$; this is the next remaining value. After handling this, merge the component with $$$r+1$$$ and keep repeating this while $$$r \le R$$$. After that, you've found all remaining values in $$$[L, R]$$$. Those values have been merged into new components from the right and thus, those values are not the right points of any current segments and they won't be picked again.
      • For easier implementation, you should add a dummy component at $$$n+1$$$. This allows you to avoid dealing with the element $$$n$$$ separately.
» 3 years ago | 0

Why am I getting a wrong answer on D? Please help. My submission: 212445636

  • » 3 years ago | +6

    You should debug how you calculate the priorities of the indexes. Maybe there are some other issues as well, but I didn't check all of it.

    Input:

    6 2 0
    00000000
    3 4
    2 5
    

    It must be inf 3 1 2 4 inf, but you find inf 3 1 2 inf inf inf inf.

» 3 years ago | 0

I'm pretty sure the time complexity of D is different from the one in the tutorial, as it should include m.

» 3 years ago | 0

I have the same solution for C (the first submission, before A), but it TLEd on pretest 3. Why?

  • » 3 years ago | 0

    From your code:

        for (int i = 0; i < maxn * 1e5; i++) { trie[i][0] = trie[i][1] = 0; }

    Suppose t == 1e5: that needs 1e5 * maxn * t operations in total -> TLE.

    • » 3 years ago | 0

      No, not the trie one; I meant the first submission (before A, as I mentioned).

      • » 3 years ago | 0

        212482567: I changed the map to a vector and it passed. Don't use maps when the values are not big. Your complexity is like O(n*(1<<8)*log(1<<8)); honestly it should pass with that complexity, but the constant factor of a map is high!

        • » 3 years ago | 0

          Yeah, I tried changing it to an array and it passed too. Damn, man, I wouldn't have gotten all this penalty...

        • » 3 years ago | 0

          Why would they not make the TL 2 seconds?

» 3 years ago | 0

D was a really good problem.

» 3 years ago | +1

In Problem B: if the AND of all the elements is not zero, why can't we divide them into more than one group? Example: 3, 3, 3. According to the editorial the answer should be 1, but we can clearly see the answer could be 3, with the minimum possible AND for each group being 3.

» 3 years ago | 0

In Problem B's editorial, it is written that if F(1,n) > 0 then the answer will always be 1, but consider a test case like

n = 4

2, 3, 2, 3

Then F(1,n) = 2 > 0, yet we can divide the array into groups, so shouldn't the answer be > 1?

  • » 3 years ago, Rev. 2 | 0

    The sum over all the groups should be minimal.

    If we divide the array into two groups, the sum would be F(1, 2) + F(3, 4) = 2 + 2 = 4.

    If we don't divide, the sum is F(1, n) = 2, which is the minimum possible sum.

» 3 years ago | 0

Can anyone please tell me where this code is wrong? https://mirror.codeforces.com/contest/1847/submission/212493731

» 3 years ago | 0

painful E

» 3 years ago | +3

hartiksalaria, please explain the solution of problem D.

  • » 3 years ago, Rev. 2 | +8

    First, explain to me why for TC 2, the answer for 2nd query is 3.

    state of string t after update 2

    UPD: I finally understand what's going on there, thanks to letsintegreat.

    So first observe that you only care about the first time each index of s appears in the string t(s), which gives us a compressed version t'(s) where every index is present at most once. Now, to make t(s) lexicographically largest, we try to put as many ones in the prefix as possible.

    We can place all the ones we have in our string s at the front of t'(s), so t'(s) can have a prefix of length min(len(t'(s)), ones) filled with ones; we just need to replace every zero in that prefix with some one. So the answer for each query is simply the number of zeros in that prefix.

    • » 3 years ago | 0

      Wish I had read the 2nd TC. I was in another dimension; I misunderstood the problem. I thought we had to swap in string t, but it was s. It's easier than problem C :(. Thank gawd; otherwise I would never have cared to read the TC.

      • » 3 years ago | 0

        I agree; the implementation and idea are not that hard to come up with. It's just that there were so many moving parts in the problem that it was easy to miss a crucial detail. During the contest I had the correct idea, but somehow I assumed I could just ignore every index not present in the given m segments, due to which I wasn't able to make sense of the 2nd query of TC 2.

      • » 3 years ago | +3

        We can't help misunderstanding the problem in the first go. But not going through sample test cases is a huge mistake, especially in problems with so many intricacies such as D.

» 3 years ago, Rev. 3 | +25

F can also be done in O(N log(max(A[i])) log Q):

We can notice that, from each starting point, only at most ~30 positions are important, i.e., the positions where a new bit is added to the range bitwise OR.

So we only need to look at these ~30N potential answers instead of the n^2 potential answers.

Now for each potential answer, we get a prefix of queries that can be satisfied, so we can use a difference array type technique, for minimums instead of for sums.

submission: 212515812

» 3 years ago | +11

Is it just me or do the proofs for C make no sense?

If anyone can dumb it down for me, I would greatly appreciate it.

  • » 3 years ago, Rev. 2 | 0

    My consideration during the contest:

    Let

    $$$a_{m+1}=a_i\oplus \dots \oplus a_m$$$

    $$$a_{m+2}=a_j\oplus \dots \oplus a_{m+1}$$$

    Use the first formula to replace $$$a_{m+1}$$$ in the second one (assuming $$$i \lt j$$$):

    $$$a_{m+2}=a_i\oplus \dots \oplus a_{j-1}$$$

    Now we discover that $$$a_{m+2}$$$ can actually be the XOR of any contiguous subarray of the original $$$a$$$, and the same holds for the upcoming elements.

» 3 years ago, Rev. 2 | 0

F can be solved in O(n log max(a[i]) + q (log n + log q)) 212559533

» 3 years ago | 0

Why do I keep getting TLE on test 13? 212785251

» 3 years ago | +8

Proof of C is poorly written.

» 3 years ago, Rev. 2 | 0

$$$O(N \log MAX + Q \log Q)$$$ solution for F. Uses some dumb counting sort + the observation about the $$$\log MAX$$$ required bits.

https://mirror.codeforces.com/contest/1847/submission/212474580

» 3 years ago | 0

Hi, I have submitted code for problem D, but it is showing WA. Can someone help me understand why? Here is my submission link: 212934178

» 3 years ago, Rev. 2 | 0

Thank you for problem E!

In the end, what worked for me was an adaptive priorities approach. For each side, maintain a mask of four bits: whether this side can still be 1, 2, 3, and 4. Now, as long as we have questions to ask, pick a triple of masks $$$(m_1, m_2, m_3)$$$, pick random sides with such masks, and ask about them. As feedback, see how many sides $$$u$$$ became uniquely determined after this question. After that, change priority of $$$(m_1, m_2, m_3)$$$ by $$$u - 1$$$: it does not change if we found one new side, decreases by $$$1$$$ if we found none, and so on. Now, if we ever pick a triple of masks with the best possible priority, the silly questions are quickly filtered out, and only the promising questions remain.

Had to also remember the asked questions, and consider pairs of questions: considering triples alone, my approach could not uniquely assign values for tests like [3, 3, 4, 4]. That's a technical detail though, which can be approached differently.


» 2 years ago | 0

F can be solved in an easier way if one answers queries offline, sorted in decreasing order of $$$v$$$: 244806887

» 14 months ago | 0

The time complexity of Problem C can be further reduced from $$$O(N \times 2^B)$$$ to $$$O(N \times B)$$$, where $$$B$$$ is the number of bits in each number, by using tries and hence will also get accepted if $$$B$$$ is larger, say 32 or 64. 301467165

» 4 months ago | +1

Hey guys, maybe someone will find this useful someday.

I've compiled my own analysis of problem C and attached my proofs, because the official analysis was written very poorly: it is vague and unclear.

Solution: We want to find out what the maximum element in an array can be after performing a certain number of operations. First, let's understand what these operations are and what properties they have.

Let $$$n$$$ be the index of the last element of the current array (the array keeps growing). Our operation appends to the array the XOR of an arbitrary suffix:

$$$a_{n + 1} = a_{i} \oplus a_{i + 1} \oplus ... \oplus a_{n}$$$

When talking about XOR on any segments, it's always useful to remember prefix XOR. Let pfixXor be an array of XOR prefixes. $$$pfixXor_{0} = 0$$$

$$$pfixXor_{i} = pfixXor_{i - 1} \oplus a_{i}$$$

XOR on the interval $$$[a_{i}, a_{i + 1}, ..., a_{j}]$$$ is $$$pfixXor_{j} \oplus pfixXor_{i - 1}$$$.

Proof

Then the XOR on our suffix $$$[a_{i}, ..., a_{n}]$$$ will be $$$a_{n + 1} = pfixXor_{n} \oplus pfixXor_{i - 1}$$$.

We add this element to the array, and now it looks like this:

$$$[a_{1}, a_{2}, ..., a_{n}, pfixXor_{n} \oplus pfixXor_{i - 1}]$$$

And the new

$$$pfixXor_{n + 1} = pfixXor_{n} \oplus pfixXor_{n} \oplus pfixXor_{i - 1} = pfixXor_{i - 1}$$$

We see that the prefix XOR of the whole array now equals the pfixXor at the index just before the one we selected for the operation.

Let's try adding another element. To do this, let's choose a suffix with a different index j: $$$[a_{j}, a_{j + 1}, ..., a_{n}, a_{n + 1}]$$$

$$$a_{n + 2} = pfixXor_{n + 1} \oplus pfixXor_{j - 1} = pfixXor_{i - 1} \oplus pfixXor_{j - 1}$$$

And the new prefix XOR is

$$$pfixXor_{n + 2} = pfixXor_{i - 1} \oplus pfixXor_{i - 1} \oplus pfixXor_{j - 1} = pfixXor_{j - 1}$$$

We see that the prefix XOR of the entire array takes the value of some prefix from the original array and never becomes larger, no matter which pair of elements we take.

We also see that the element $$$a_{n + 2}$$$ is the XOR of two arbitrary pfixXor values from the original array.

P.S.

  • This includes $$$pfixXor_{0}$$$, since if we choose the suffix starting at $$$a_{1}$$$ in the first step, we get a new $$$pfixXor_{n + 1} = 0$$$. You can verify this using the steps described above.
  • This includes any $$$a_{i}$$$, since $$$a_{i} = pfixXor_{i} \oplus pfixXor_{i - 1}$$$.

Now, we can say that the element $$$a_{n + 2}$$$ is ONLY the XOR of two arbitrary pfixXor values from the original array. Moreover, the maximum element can be obtained in two operations.

  • ONLY the XOR of two arbitrary pfixXor values from the original array — because we've proven that in the $$$pfixXor$$$ array, all subsequent values, after adding a new element, will simply be some existing $$$pfixXor_{k}$$$ from the original pfixXor array itself.
  • In just two operations, since already in the second operation we have access to any $$$pfixXor_{i - 1}$$$, as well as any $$$pfixXor_{j - 1}$$$.

Now the final step:

How can we count all pairs efficiently? The array size can reach 10^5, and iterating over all possible pfixXor pairs would be O(n^2). Note that we don't actually need to iterate over all pairs, since the values in them repeat: there are only 256 distinct possibilities.

The elements of our array satisfy $$$a_{i} \lt 2^{8}$$$.

This means the numbers in the array have at most 8 bits. We only use XOR operations, and XOR can never produce a number whose highest set bit is above the higher of the two operands' highest set bits.

Proof

Therefore: $$$0 \le pfixXor_{k} \lt 2^{8}$$$ for $$$0 \le k \le n$$$.

Let's create a counter array $$$exist$$$, where $$$exist_{i} = 1$$$ means that some $$$pfixXor_{k} = i$$$ exists.

Now, instead of counting repetitions across the entire array, we'll only count unique pairs across the counter array and update the maximum when we find a higher value.

The complexity of this calculation will be $$$O(256*256) = O(65536)$$$

The total complexity of the solution is $$$O(n + 65536) = O(n)$$$