wuhudsm's blog

By wuhudsm, history, 21 month(s) ago, In English

Thank you for participating, and we hope you enjoyed this round :)

Additionally, we sincerely invite you to join the unofficial round of the TheForces Community tomorrow. Compete alongside many high-rated participants and earn your own rating in our rating system (the TheForces Rating System)!

For more details, please read https://mirror.codeforces.com/blog/entry/131733.

A — Submission Bait

hint1
hint2
solution 1
solution 2
code(solution 1)

B — Array Craft

hint1
hint2
solution
Bonus
code

C — Mad MAD Sum

hint1
hint2
hint3
solution
code
fun fact

D — Grid Puzzle

hint1
hint2
hint3
solution
code(greedy)
code(DP)

E — Catch the Mole

hint1
hint2
hint3
hint4
solution1
solution2
bonus
code(hard version)

F — Polygonal Segments

solution1
solution2
code(solution 1)
code(solution 2)
  • +92

»
21 month(s) ago, hide # |
Rev. 3  
Vote: I like it 0 Vote: I do not like it

first

Can someone please tell me what the bait was for A? Was it that people thought the $$$\ge$$$ was $$$ \gt $$$ so the answer is always yes?

Nvm, I think I got it. Did people only count the frequency of the max number?

  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 2  
    Vote: I like it +10 Vote: I do not like it

    I bit the bait. Got a WA in the first few minutes... (Yes, only counting the largest number.)

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it +7 Vote: I do not like it

    same here, at first only considered frequency of max number (;

    • »
      »
      »
      21 month(s) ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      Oh wow! This was my first contest ever, and I felt stupid after just checking that the max appears an odd number of times. The same thing happened with B: I got the initial idea but couldn't combine the prefix and suffix sum construction. Is it down to just lack of practice? Any resources for me to start on :) ?

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Has anyone done C?

Can you tell me where 271597372 and 271626656 go wrong?

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    N <= 2e5; your solution repeats the code in the while-loop up to N times, and you have one for-loop inside the while-loop and another for-loop inside the first for-loop, so the time complexity is O(N^3), which is obviously too slow.

»
21 month(s) ago, hide # |
Rev. 5  
Vote: I like it -14 Vote: I do not like it

okay, i accept defeat

  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 2  
    Vote: I like it 0 Vote: I do not like it

    If the sum becomes <= -1 before point x or y, then the first time the sum equals the maximum will be at 1 or n instead of x or y. That's why you need to alternate between -1 and 1.

»
21 month(s) ago, hide # |
 
Vote: I like it -27 Vote: I do not like it

Nice brain teasers. Now where are the problems?

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Thanks for editorial! I really liked problem C :)

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Same here, C was a good one. But can you please explain simply why we cannot get the answer if we perform the operation on the array only once and then calculate the answer with one for-loop and some math?

    • »
      »
      »
      21 month(s) ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      The main reason is that we can only directly calculate the answer when the MAD operation is performed on a sorted array, since performing it on unsorted arrays leads to things like this:

      Image (as is evident, it doesn't work out)

      The main reason for this is because when we perform the MAD operation on an unsorted array, the resulting array may not follow the "rules" of the calculations (mainly that each value appears at least twice), whereas it will definitely follow the rules with a sorted array:

      Image

      It intuitively makes sense why this is: for a new value to appear in any MAD array, it must appear at least twice in the original array. When the input array is sorted, those occurrences sit together, so every value in the resulting MAD array again appears at least twice. Only once we know that each value appears twice can we use the trick to calculate the answer.

      Hope this helps! :)

      • »
        »
        »
        »
        4 months ago, hide # ^ |
         
        Vote: I like it 0 Vote: I do not like it

        Like, we would always get a non-decreasing array even after the first MAD operation, so we can devise a formula from there and get the answer, as I have done in the submission (355125607). But I still don't understand the need for the second MAD operation. Moreover, even after following the rules, in the second MAD array it's quite possible that some number appears only a single time, for example the last element.

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

For E solution 2, I think it should say

spoiler
»
21 month(s) ago, hide # |
Rev. 4  
Vote: I like it +45 Vote: I do not like it

Here is another solution for E (actually, I think there are a lot of ways to solve it):

Solution
»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

For D, I have a binary search solution: 271626468

»
21 month(s) ago, hide # |
 
Vote: I like it +12 Vote: I do not like it

I had a different approach to E which I don't have proof for, but it managed to pass tests with $$$\le 160$$$ queries.

Spoiler
  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Isn't it right since $$$ d \times w \le n $$$ where d = depth, w = width, and your algorithm would be $$$O(min(d, w))$$$ which is less than $$$\sqrt n$$$?

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it +8 Vote: I do not like it

    when you say using dfs to find the subtree of the current node with the fewest alive children you mean the subtree of a direct child right?

»
21 month(s) ago, hide # |
Rev. 2  
Vote: I like it 0 Vote: I do not like it

For problem B, can anyone please help me understand why my construction is wrong for the case 5 2 1?

my construction : 1 1 -1 -1 -1. Prefix max is at 2 and suffix max is at 1.

Thank you.
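If the intended definitions are "smallest index attaining the maximum prefix sum" and "largest index attaining the maximum suffix sum", a small checker (my own sketch, not the author's code) shows the issue: in [1, 1, -1, -1, -1] the suffix maximum of -1 is also attained at index 5, so the maximum suffix position becomes 5 instead of 1.

```python
def positions(a):
    # Returns (maximum prefix position, maximum suffix position), 1-indexed:
    # smallest i maximizing a[1]+...+a[i], largest i maximizing a[i]+...+a[n].
    n = len(a)
    pre, s = [], 0
    for v in a:
        s += v
        pre.append(s)
    x = pre.index(max(pre)) + 1          # smallest index with max prefix sum
    suf, s = [0] * n, 0
    for i in range(n - 1, -1, -1):
        s += a[i]
        suf[i] = s
    best = max(suf)
    y = max(i for i in range(n) if suf[i] == best) + 1  # largest such index
    return x, y

print(positions([1, 1, -1, -1, -1]))  # (2, 5): suffix max also at index 5
print(positions([1, 1, -1, 1, -1]))   # (2, 1): alternating tail fixes it
```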

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Can someone tell me why is this wrong for B? 271591444

»
21 month(s) ago, hide # |
 
Vote: I like it +4 Vote: I do not like it

I find it interesting that a theforces organizer was 2nd place in div2. What are the odds some of the problems were proposed or seen for some theforces rounds? Not accusing anyone though. Just an observation.

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

In question A, how is it possible that the maximum number does not get considered?

For example, if the numbers were 1 2 2 2 3 3, then Alice won because there are three 2s. But what if Alice chose 2 and then Bob chose 3? Per the operations mentioned, we can choose a[i] >= what the opponent chose.

Why is it that Bob only chooses equal to Alice?

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Alice chose 2, then Bob chose 3, then Alice again chose 3. See?

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    If every number occurs an even number of times, then whatever number Alice chooses, Bob can mimic her and choose it again.

    If there is a number that appears an odd number of times, then Alice can choose it, forcing Bob to choose a number that occurs an even number of times; now Alice can mimic Bob's choices and win.
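The parity argument above boils down to a one-line check (a sketch; `alice_wins` is my own helper name, not from the editorial):

```python
from collections import Counter

def alice_wins(a):
    # Alice wins iff some value occurs an odd number of times:
    # she takes one copy of the largest such value first, after which
    # every remaining value has even count and she can mirror Bob.
    return any(cnt % 2 == 1 for cnt in Counter(a).values())

print(alice_wins([1, 2, 2, 2, 3, 3]))  # the example from this thread
```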

    • »
      »
      »
      21 month(s) ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      Yeah, but why should Bob even mimic? It's not even mentioned anywhere that Bob plays optimally. There DOES exist a winning strategy for Alice.

      • »
        »
        »
        »
        21 month(s) ago, hide # ^ |
         
        Vote: I like it 0 Vote: I do not like it

        Well, there does not exist a winning strategy for Alice if she cannot win using this strategy somehow.

        Let's say Bob won't play optimally; he still can mimic Alice's choices, right?

        As long as Bob has a chance to win against Alice's strategy, it isn't considered a winning strategy.

        I don't know why Bob would not play optimally, btw :)

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    just another case 1 2 2 2 3.

    bob could have won in this but alice will win lmao!!

    I'm not trying to oppose anyone, just a curious doubt!!

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

What about this approach for E:

Spoiler
»
21 month(s) ago, hide # |
Rev. 2  
Vote: I like it 0 Vote: I do not like it

I solved E by deleting half of the tree with each query, so there are $$$O(\log(n))$$$ queries.

In each query, find a subtree whose size is closest to half of the current tree: from the current root, descend into the child subtree with maximum size while the current subtree's size is greater than half. Query this subtree.

Response 1. Set this subtree as a current tree.

Response 0. Delete this subtree and all leaves from the current tree. Then add the parent of the root to the tree. This can be done in $$$O(n)$$$, so the total complexity is $$$O(n\log(n))$$$.

CODE

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Yes I realized this ended up being my solution, but in my head I thought of the subtree you find as the centroid. I wonder what kind of bound they potentially could have set for the number of queries.

  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 2  
    Vote: I like it +84 Vote: I do not like it

    I think the problem is unsolvable in less than $$$O(\sqrt{n})$$$ queries. Consider the tree with the root having $$$\sqrt{n}$$$ chains of length $$$\sqrt{n} + 1$$$ attached to it. You have to at least know in which chain the mole is in, so you have to query each chain at least once, since the mole won't get to the root in $$$\sqrt{n}$$$ queries.

  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 2  
    Vote: I like it +11 Vote: I do not like it

    This code works in sqrt(n) queries in the worst case.

    This case has 4901 nodes:

    • adj[1]={2,3,...,71}

    • adj[i]={i-70,i+70} for all i in [72,4831]

    • adj[i]={i-70} for all i in [4832,4901]

    At each query you ask about a child between 2 and 71 and the response is 0; only at the 70th query do you get the response.

    In some testcases of the 25th test, you only find the correct answer after 70 queries.

»
21 month(s) ago, hide # |
 
Vote: I like it +14 Vote: I do not like it

My solution 271646773 for E passes tests in $$$\le 100$$$ queries but I don't know why. I would appreciate it if anyone could prove some upper bound on the number of queries or provide a counter-example.

The idea is to greedily choose a node that results in the minimum number of candidates in the worst case. Specifically, querying $$$x$$$ results in size[x] candidates if the answer is $$$1$$$ and (size[0] - leaf[0]) - (size[x] - leaf[x]) candidates if the answer is $$$0$$$. Here size[x] is the number of candidates in the subtree of $$$x$$$ and leaf[x] is the number of leaf candidates in the subtree of $$$x$$$.

It's easy to compute these numbers for all $$$x$$$ with a simple dfs, after which I pick $$$x$$$ with the minimum value of max(size[x], (size[0] - leaf[0]) - (size[x] - leaf[x])). Finally, I repeat this process until only one candidate remains.

  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 2  
    Vote: I like it +3 Vote: I do not like it

    Lemma 1: if B <= size[x] <= n - B for some $$$B$$$ then querying $$$x$$$ removes at least $$$B$$$ candidates, where $$$n$$$ is the current number of candidates. To prove it formally, note that

    (size[0] - leaf[0]) - (size[x] - leaf[x]) =
      = (size[0] - size[x]) - (leaf[0] - leaf[x])
      <= size[0] - size[x]
      <= n - B
    

    Lemma 2: if size[x] < B or size[x] > n - B for all $$$x$$$ then there exists a node $$$p$$$ with size[p] > n - B and size[c] < B for every child $$$c$$$ of $$$p$$$. To prove it, note that the subtree size of the root is $$$n \gt n - B$$$ and the subtree size of any leaf is $$$1 \lt B$$$, so there must be a transition point in between.

    Lemma 3: if size[x] < B or size[x] > n - B for all $$$x$$$ then the tree has at least $$$\lceil\frac{n}{B}-1\rceil$$$ leaves. To prove it, consider node $$$p$$$ from lemma 2. It has at least $$$\frac{n-B}{B-1}$$$ children because its subtree size is large but all children have small subtree sizes so there must be quite a few of them. $$$\frac{n-B}{B-1} \gt \frac{n-B}{B} = \frac{n}{B} - 1$$$. All numbers are integers so we can take $$$\lceil \cdot \rceil$$$ of the last expression.

    Lemma 4: if there are at least $$$C$$$ leaves then querying one of them removes at least $$$C$$$ candidates. If the response is $$$1$$$ then we found the mole, and if the response is $$$0$$$ then all $$$C$$$ leaves are no longer valid candidates.

    Choosing $$$B = \sqrt{n}$$$ means that we remove roughly $$$\sqrt{n}$$$ nodes per query (where $$$n$$$ is the current number of candidates), and hence the greedy process results in $$$O(\sqrt{n})$$$ queries overall.

»
21 month(s) ago, hide # |
Rev. 2  
Vote: I like it 0 Vote: I do not like it

Can anyone explain to me how to solve B please :( I might have lost my mind after today's contest.

»
21 month(s) ago, hide # |
Rev. 2  
Vote: I like it +11 Vote: I do not like it

Personal Editorial that focuses mostly on intuition

A: Basically, Alice can always pick the largest element a_i such that count[a_i] % 2 == 1. No matter what Bob picks, Alice still wins. If no such element exists, Alice loses. This was a pretty hard Div 2 A in my opinion.

B: I started by thinking that we want to limit the maximum sum, or in other words we don't want the prefix sum to get any larger after X or the suffix sum to get any larger before Y. The obvious construction is [-1,-1,...,Y,1,1,...,X,-1,-1,...]. The problem is that the prefix sum ending at the first -1 might be larger than the prefix sums within the first X elements (the same might be true for the suffix sum). To avoid that, we notice that we can alternate between -1 and 1 at the ends. The construction is as follows: [...,-1,1,-1,Y,1,1,...,1,1,X,-1,1,-1,...]. This solves both problems.
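That construction can be sketched as follows (my own sketch, assuming x > y as in the problem statement; `craft` is a hypothetical name):

```python
def craft(n, x, y):
    # Ones on positions y..x (1-indexed); alternate -1, 1, -1, ...
    # moving outward from the block, so no prefix before y or suffix
    # after x can ever catch up with the maximum.
    a = [0] * (n + 1)
    for i in range(y, x + 1):
        a[i] = 1
    sign = -1
    for i in range(x + 1, n + 1):   # tail after x
        a[i], sign = sign, -sign
    sign = -1
    for i in range(y - 1, 0, -1):   # head before y
        a[i], sign = sign, -sign
    return a[1:]

print(craft(5, 2, 1))  # [1, 1, -1, 1, -1]
```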

C: We try testing the operation on the array to see what happens. The two key observations are that the first operation makes the array non-decreasing, and that after the second operation every further operation just shifts the array to the right by 1. The solution then becomes pretty much just math: we apply the operation twice, and then use math to calculate the answer.
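That plan can be sketched in a few lines (my own sketch, not the editorial's code: apply the MAD pass twice while accumulating sums, then count how many right shifts each remaining element survives):

```python
def mad_sum(a):
    def mad(arr):
        # b[i] = largest value appearing at least twice in arr[0..i], else 0
        seen, best, out = set(), 0, []
        for v in arr:
            if v in seen:
                best = max(best, v)
            seen.add(v)
            out.append(best)
        return out

    n = len(a)
    ans = sum(a)          # sum of the original array
    a = mad(a)
    ans += sum(a)         # sum after the first operation
    a = mad(a)
    # From here on, each operation is a right shift by one:
    # element a[i] (0-indexed) contributes to n - i more sums.
    ans += sum(v * (n - i) for i, v in enumerate(a))
    return ans

print(mad_sum([2, 2, 3, 3]))  # 23
```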

D: I didn't have time to write the AC code during the contest, but the approach is actually very simple, too simple in my opinion for a Div 2 D. The key observation is that if a[i] is large, the only operation that makes sense is to fill the whole row: if it would take more than two 2x2 operations to whiten a row, we might as well just fill both the i-th and the (i+1)-th rows with white. We also notice that there are two cases where using 2x2 squares is beneficial.

Case 1: a[i] <= 2 and a[i+1] <= 2.

Case 2: a[i] <= 2, then an even number of rows with 3 <= a_x[i] <= 4, followed by a row with a_y[i] <= 2.

In both cases our answer goes up by one.

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it +12 Vote: I do not like it

    Why the negative contribution? I'm genuinely curious. I'm gonna write them either way because it helps me get better at explaining things, an important life and interview skill, and also because it might help some people.

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it +1 Vote: I do not like it

    I genuinely thank you for sharing the way you think. I am bad at constructive problems, and your comment really helps me build a way to think about them. Keep contributing! Don't let these downvotes upset you.

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it +2 Vote: I do not like it

    In problem B, I actually only came up with the [-1,-1,...,Y,1,1,...,X,-1,-1,...] construction and couldn't think any further. Btw, now I understand it completely, thanks :)

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

For problem B, if the test is 5 2 1, is [1, 1, -1, -1, -1] considered a wrong answer?

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

For F solution2, I think the pseudocode should have 2a[mxp] <= instead of 2a[mxp] >=.

»
21 month(s) ago, hide # |
Rev. 2  
Vote: I like it +6 Vote: I do not like it

I am here to provide yet another solution to E whose query amount I cannot prove.

Solution
  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Yay, I had the same solution =)

    But I added an additional condition: if you can't find the centroid (the node with the smallest subtree size that contains at least half of the remaining nodes of the current node's subtree and is not equal to the current node), I choose the biggest child of the current node.

    Current node: the last node that contained the answer in its subtree.

»
21 month(s) ago, hide # |
 
Vote: I like it +33 Vote: I do not like it

E2 with $$$100$$$ queries

hint 1
hint 2(almost the solution)
answer
bonus 2
  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 3  
    Vote: I like it +20 Vote: I do not like it

    I solved the problem in at most 70 queries (sqrt(n)).

    Denote by size[node] the number of possible locations in the subtree of "node", and by size_valid the total number of possible locations. Ask about the node that minimizes abs(size[node] - size_valid/2):

    • if the response is 1: size_valid becomes size[node], because the location is in the subtree of node;
    • if the response is 0: size_valid becomes size_valid minus the number of possible locations that have no possible child, plus 1 if the LCA of all possible nodes isn't the root.

    In the worst case the children of the LCA of the possible nodes all have the same size: size[node] will be sqrt(size_valid) for sqrt(size_valid) nodes, and size_valid decreases by at least 2*sqrt(size_valid)-1 after each query (sqrt(size_valid) for the subtree of the chosen node and sqrt(size_valid)-1 for the nodes that have no possible child; otherwise it decreases by x + size_valid/x - 1 by the same logic). So if n can be solved in sqrt(n) queries, then n-2*sqrt(n)+1 can be solved in sqrt(n-2*sqrt(n)+1) = sqrt(n)-1 queries, so for n = 5000 only 70 queries are enough.

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    If you found, ask the same root until not found.

    How do you do this? If the answer to a query is $$$1$$$ then the mole doesn't move

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Could you explain in detail how we can 'ask the same root until not found'? Thanks! My question is: the mole doesn't just stop at the root you pick, so we have to use binary search for the real position.

    • »
      »
      »
      21 month(s) ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      While asking the same node, if the result changes from 1 to 0, the mole will be in the parent of that node, so answer there.

      • »
        »
        »
        »
        21 month(s) ago, hide # ^ |
         
        Vote: I like it 0 Vote: I do not like it

        But the mole doesn't move if the result is 1. So how can the result become 0 if we keep asking the same node?

        • »
          »
          »
          »
          »
          21 month(s) ago, hide # ^ |
          Rev. 2  
          Vote: I like it 0 Vote: I do not like it

          Sorry, I answered as in the inverted version (originally the outcome was inverted, but during testing it was changed to the current style).
          If the answer is 0, cutting down the subtree is the same. (In this step, we can also cut all the leaves.) If the answer is 1, we now have a tree of depth at most $$$100-i$$$, rooted at $$$v$$$.
          Now, ask the sons in order of decreasing depth. If we receive 1, do the same thing. Note that if we receive 0, all leaves disappear, so we shouldn't care about small subtrees, and we must look at most $$$O(\sqrt{n})$$$ sons.
          Finally, if we get lost (no such subtree found), we can identify the path the mole is on (from $$$v$$$ up to the parents), so we do binary search.
          I think this fits in $$$100$$$ queries, but sorry if I'm wrong.

»
21 month(s) ago, hide # |
 
Vote: I like it +6 Vote: I do not like it

I'm still traumatized from A

»
21 month(s) ago, hide # |
 
Vote: I like it +1 Vote: I do not like it

Could anyone explain why 271582281 fails on the 3rd test case?

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

I am really confused. I used the even/odd approach. If n is odd, then it is clear that Alice will win.

Example: 1 2 3 4 5 5 6 (A B A B A B A)

If n is even, then there are 2 cases:

Case 1: The frequency of the maximum number is odd. Example: 1 2 3 5 5 5 (A B A). Hence, Alice will win.

Case 2: The frequency of the maximum number is even. Example: 1 2 3 4 5 5 (A B). Hence, Alice will lose.

This is the approach I applied, but I got the wrong answer. Could you guys please tell me what is wrong with my approach?

  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 3  
    Vote: I like it 0 Vote: I do not like it

    Your reasoning for case 1 is correct.

    Your reasoning is faulty in case 2.

    Take the test case:

    1 2 3 3 3 3

    You would say that Alice would lose because N is even and the frequency of the maximum element is even. However, if Alice chooses 2 in her first move she can win because then Bob goes first and they alternate picking 3s.

    Alice loses iff the frequency of all elements is even because as explained in the editorial Bob can mirror all her moves.

    In any other case, there is at least one element with odd frequency, and Alice can force Bob into this situation by selecting the largest such element as her first move.

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

orz round

»
21 month(s) ago, hide # |
 
Vote: I like it +1 Vote: I do not like it

C, D is brilliant!

»
21 month(s) ago, hide # |
Rev. 2  
Vote: I like it 0 Vote: I do not like it

Can anyone please explain why this code 271651820 for problem C gives the wrong answer? I tried to do it with one operation; after that, if the MAD value appears more than once it will shift right in further operations, otherwise, if the MAD value appears only once, it adds to the answer once.

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

I have another solution for E1. my solution in chinese :)

»
21 month(s) ago, hide # |
 
Vote: I like it +19 Vote: I do not like it

In problem F, solution 1:

We notice that a local maximal polygon segment $$$[l,r]$$$ must satisfy $$$\min(a_l−1,a_r+1)\ge \mathbf{2}(a_l+...+a_r)$$$. (or $$$l=1,r=n$$$).

shouldn't it be:

...must satisfy $$$\min(a_l−1,a_r+1)\ge (a_l+...+a_r)$$$.

Is the extra "2" a typo?

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it +5 Vote: I do not like it

    Can you please explain this formula? How can we transform a(l) + a(l + 1) + ... + a(r) > 2 * max(a[l..r]) (the original condition to form a polygon, I guess?) into that? Thanks

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    I am so confused about this: is it $$$\min(a_l-1,a_r+1)$$$ and not $$$\min(a_{l-1},a_{r+1})$$$? The author writes the latter. However, I still think both of them are confusing. I think $$$(a_l+\cdots +a_r) \gt 2\max(a_l,\cdots,a_r)$$$ works.

    • »
      »
      »
      21 month(s) ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      The condition in the editorial is necessary but not sufficient. However, since there are only $$$\log V$$$ "key points", it is quite enough for us to use.

  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 2  
    Vote: I like it +5 Vote: I do not like it

    I think so. The necessary and sufficient condition for judging whether $$${a_1,\ldots,a_n}$$$ is a "polygonal segment" is $$$\forall 1\le i\le n, a_i \lt \left( \sum_{j=1}^{n}a_j \right) -a_i$$$. We can obtain the necessary and sufficient condition for the "polygonal segment" $$$[l, r]$$$ to not be further expanded as $$$\left[\left(l=1\right) \vee \left(a_{l-1}\ge \sum_{i=l}^{r} a_i\right)\right] \wedge \left[\left(r=n\right) \vee \left(a_{r+1}\ge \sum_{i=l}^{r} a_i\right)\right]$$$.

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Can someone explain the fun fact on C?

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it +3 Vote: I do not like it

    Maybe it's referring to doing the naive 'mad' simulation twice, and then just summing up once.

»
21 month(s) ago, hide # |
Rev. 2  
Vote: I like it 0 Vote: I do not like it

Can someone please tell me why this code gives TLE for problem C? link

»
21 month(s) ago, hide # |
Rev. 2  
Vote: I like it 0 Vote: I do not like it

How to solve B if

  • The maximum prefix position of b is the largest index i that satisfies ...
  • The maximum suffix position of b is the smallest index i that satisfies ...
»
21 month(s) ago, hide # |
 
Vote: I like it +1 Vote: I do not like it

I think I've solved E in 100 queries; here's my submission: submission.

»
21 month(s) ago, hide # |
Rev. 3  
Vote: I like it 0 Vote: I do not like it

Can anyone explain solution 1 of F, which stores some information in each Segment Tree node? I don't really understand what we have to store, or how we use it to get the answer.

  • »
    »
    20 months ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Hello! I just finished upsolving it, here's my code: 274583614. Search for prefix_t to skip the template.

    Let's call an element prefix peak if it is greater than a corresponding prefix sum. Define suffix peaks similarly. Pad the subarray with positive infinities on both ends, they are also peaks. Store a tuple (prefix length, sum, maximum, peak) for each peak. Let's call a vector of such tuples prefix info. Define suffix info similarly.

    You need a few operations:

    1. extend prefix info of the left segment with prefix info of the right segment to form prefix info of the combined segment;

    2. extend suffix info of the right segment with suffix info of the left segment to form suffix info of the combined segment (same as 1);

    3. combine suffix info of the left segment with prefix info of the right segment to update the answer.

    Operation 3 can be tricky to implement in linear time ($$$log^2$$$ overall) with two pointers, so I implemented it naively in quadratic time ($$$\log^3$$$ overall) to stress test but it passed comfortably in 3 seconds :| Disappointing preparation from the setter considering that most solutions are within a factor of 4 of my execution time whereas $$$\log = 40$$$.

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Can someone please explain the alternate solution for A using dp? Also, what does the author mean when he writes =∼? Thanks!

  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 2  
    Vote: I like it 0 Vote: I do not like it

    For dp[i], Alice is faced with the array a[1:i], where a is non-increasing.

    Transition

    Hope I didn't complicate it too much. And =~ just means we are taking the negation of the RHS and assigning it to the LHS.

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

I was a bit confused by C, where I did one operation and used the same contribution idea (n-i)*a[i]. I am not sure why it fails there but gets accepted if we do 2 operations, while one operation is sufficient to make the array increasing with the required elements.

Can anyone help me understand? Thank you.

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    While the array may be increasing, some of the elements might only have a frequency of 1 which means they will be erased in subsequent operations. 2 operations ensures that the count of every element is >=2

    • »
      »
      »
      21 month(s) ago, hide # ^ |
       
      Vote: I like it 0 Vote: I do not like it

      Can you give an example? I tried hard to find a case where this assumption fails.

      • »
        »
        »
        »
        21 month(s) ago, hide # ^ |
        Rev. 2  
        Vote: I like it 0 Vote: I do not like it

        Sure.

        [2,3,1,1,2,3]

        With 1 operation it is [0,0,0,1,2,3]

        with 2 operations it is [0,0,0,0,0,0].
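This example can be reproduced with a short one-pass MAD simulation (a sketch; `mad_transform` is my own name for the operation):

```python
def mad_transform(a):
    # b[i] = largest value that appears at least twice in a[0..i], else 0
    seen, best, out = set(), 0, []
    for v in a:
        if v in seen:
            best = max(best, v)
        seen.add(v)
        out.append(best)
    return out

print(mad_transform([2, 3, 1, 1, 2, 3]))  # [0, 0, 0, 1, 2, 3]
print(mad_transform([0, 0, 0, 1, 2, 3]))  # [0, 0, 0, 0, 0, 0]
```

In the second pass, 1, 2 and 3 each appear only once, so they all vanish, which is exactly why one operation is not enough.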

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

What does "leftmost row" mean in the editorial of problem D? Rows run horizontally, so shouldn't it be top or bottom? Can anyone help? I am confused.

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    "left most row" --> means absolutely left of any row (or from beginning of any row) example: if [a1,a2,.....an] representing a row , then a1 is leftmost, a2 is next leftmost,....

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

brainlet solution on problem D. 271773223 no dp, no extra variable. proof: AC

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

C > B > D gg contest the most stupid D problem of all contest

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

I have a few doubts in the solution 1 of the editorial for E. Firstly, why do we choose 50? It seems that we should be taking $$$\sqrt{n}$$$, so 70 should be ideal right? Is there something special about 50, because I saw another comment below which says that using a number slightly smaller than $$$\sqrt{n}$$$ is better and I didn't understand the reason behind that either.

Additionally, in the 3rd case, when there's no vertex with $$$maxdep_v - dep_v = 50$$$, then we only need at most 50 moves right, why does it say 100 moves then?

  • »
    »
    21 month(s) ago, hide # ^ |
    Rev. 2  
    Vote: I like it +5 Vote: I do not like it

    I figured taking sqrt(n) is ideal for solving E2, because you'll need N/70 + 70 + log N operations (though due to weird constants you still need to optimize the binary search by cutting down to r = mid - 2). I believe K = 50 is chosen as a constant only in E1, because choosing 70 exceeds the allowed operations: N/70 + 2*70 > 200.

    As to the 3rd case, yeah, you only need up to 50 moves after optimization.

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it 0 Vote: I do not like it

    Each time we drive the mole away, we need to immediately check whether it is still in the subtree of $$$v$$$. So it is $$$50 \times 2$$$ moves.

»
21 month(s) ago, hide # |
 
Vote: I like it +5 Vote: I do not like it

Can someone explain this part of sol2 for E?

Then the tree can be split into ≤70 chains.

  • »
    »
    21 month(s) ago, hide # ^ |
     
    Vote: I like it +5 Vote: I do not like it

    We query any leaf $$$70$$$ times, then delete all nodes $$$x$$$ such that $$$maxdep_x-dep_x+1 \le 70$$$. So if a chain is still alive after the deletion, there must be at least 70 deleted nodes underneath it.

»
21 month(s) ago, hide # |
 
Vote: I like it +6 Vote: I do not like it

An approach for bonus of E

Consider all the nodes that the mole can be on. These lie on a single path from a leaf to the initial node of the mole. If we can find this path, then we can find the mole in $$$O(\log n)$$$ operations. To find this path, consider the following: 1. We know that the root node (node 1) lies on this path. 2. For each node that lies on this path, exactly one of its children lies on the path.

Let numJumps be the number of operations where the jury has returned 0, that is, the number of times the mole has jumped up.

Now start tracing the path from the root. Call the set of all children with maxDepth - depth >= numJumps the candidates for extending our path. If candidates is empty, we are at the end of our path. Otherwise, while candidates has more than one node, remove the node with the minimum maxDepth - depth value and query it. If the judge returns 1, we move to this node and continue tracing our path recursively. If the judge returns 0, we discard this node, increment numJumps, and remove all nodes that have maxDepth - depth < numJumps.

At the end, if we are left with exactly one node in candidates, we move to this node.

To calculate the number of operations required, notice that the first time we query, at least 1 node is discarded, the second time at least 2, and so on.

If the judge returns 0, then the chain is discarded and numJumps increases, meaning the next chain to be discarded must have a maxDepth - depth value at least 1 more than the current node's.

If the judge returns 1, notice that there must be at least one more node with maxDepth - depth at least equal to this node's, meaning at least half of the nodes under consideration were discarded (counting single chains with no branching).

Thus the nodes under consideration either shrink as 1, 2, 3, ... or are halved. Either way, after k moves at least k * (k + 1) / 2 nodes will have been removed from consideration, meaning after at most 71 moves we will have completed our path.

The binary search on the path takes at most $$$\log n$$$ moves, that is, around 13.

Thus we take at most 71 + 13 = 84 moves.

Submission: Submission for the above approach

#include <bits/stdc++.h>
 
using namespace std;
 
//#pragma GCC optimize("Ofast,unroll-loops") 
//#pragma GCC target("avx,avx2,avx512,fma") 
 
template<typename A, typename B> ostream& operator<<(ostream &os, const pair<A, B> &p) { return os << '(' << p.first << ", " << p.second << ')'; }
template<typename T_container, typename T = typename enable_if<!is_same<T_container, string>::value, typename T_container::value_type>::type> ostream& operator<<(ostream &os, const T_container &v) { os << '{'; string sep; for (const T &x : v) os << sep << x, sep = ", "; return os << '}'; }
void dbg_out() { cerr << endl; }
template<typename Head, typename... Tail> void dbg_out(Head H, Tail... T) { cerr << ' ' << H; dbg_out(T...); }
#ifdef LOCAL
#define dbg(...) cerr << "(" << #__VA_ARGS__ << "):", dbg_out(__VA_ARGS__)
#else
#define dbg(...)
#endif
 
#define ll long long
#define ld long double
 
#define PI 3.1415926535897932384626433832795l 

// -------------------------<rng>------------------------- 
// RANDOM NUMBER GENERATOR
mt19937 RNG(chrono::steady_clock::now().time_since_epoch().count());  
#define SHUF(v) shuffle(all(v), RNG); 
// Use mt19937_64 for 64 bit random numbers.
 
vector<vector<int>> adj;
vector<int> depth;
vector<int> maxDepth;
vector<int> par;
vector<int> path;
int numOperations = 0;
int numJumps = 0;

// ask the judge about node x (converted to 1-indexed); an answer of 0
// means the mole jumped up, which we track via numJumps
int query(int x){
    ++x;
    cout << "? " << x << endl;
    int res;
    cin >> res;
    if(res == 0){
        ++numJumps;
    }
    ++numOperations;
    return res;
}

void answer(int x){
    ++x;
    cout << "! " << x << endl;
}


// root the tree, computing depth, parent, and the deepest depth in each subtree
void dfs(int u, int p){
    for(int v: adj[u]){
        if(v != p){
            depth[v] = maxDepth[v] = depth[u] + 1;
            par[v] = u;
            dfs(v, u);
            maxDepth[u] = max(maxDepth[u], maxDepth[v]);
        }
    }
}

// extend the path of candidate mole positions one node at a time, querying
// the child with the smallest maxDepth - depth first
void tracePath(int u){
    path.push_back(u);
    stack<int> candidates;
    for(int v: adj[u]){
        if(v == par[u]){
            continue;
        }
        if(maxDepth[v] - depth[v] < numJumps){
            continue;
        }
        candidates.push(v);
    }

    while(!candidates.empty()){

        int v = candidates.top();
        candidates.pop();

        if(maxDepth[v] - depth[v] < numJumps){
            continue;
        }

        if(candidates.empty() || query(v)){
            tracePath(v);
            return;
        }
    }
}

bool cmp(int a, int b){
    return (maxDepth[a] - depth[a]) < (maxDepth[b] - depth[b]);
}

void solve() {
    int n;
    cin >> n;

    adj = vector<vector<int>>(n);
    depth = vector<int>(n, 1);
    maxDepth = vector<int>(n, 1);
    par = vector<int>(n);
    path.clear();
    numJumps = 0;
    numOperations = 0;


    for(int i = 0; i < n - 1; ++i){
        int u, v;
        cin >> u >> v;
        --u, --v;
        adj[u].push_back(v);
        adj[v].push_back(u);
    }


    dfs(0, -1);

    // sort children in decreasing maxDepth - depth order, so that pushing
    // them onto a stack leaves the smallest value on top
    for(int u = 0; u < n; ++u){
        sort(adj[u].rbegin(), adj[u].rend(), cmp);
    }

    tracePath(0);

    // binary search for the mole on the traced path; an answer of 0 means
    // the mole jumped one step toward the root, so shift the window up too
    int l = 0;
    int r = path.size() - 1;
    while(l < r){
        int mid = (l + r + 1) / 2;
        if(query(path[mid])){
            l = mid;
        }
        else{
            r = mid - 1;
            l = max(0, l - 1);
            r = max(0, r - 1);
        }
    }

    answer(path[l]);
    assert(numOperations <= 84);
}
 
int main() {
    int tc;
    cin >> tc;
    for(int t = 1; t <= tc; ++t){
        solve();
    }
    
    return 0;
}
»
21 month(s) ago, hide # |
 
Vote: I like it +3 Vote: I do not like it

For solution 2 on F, I'm pretty sure you can store a max segtree of the last time each position was updated, and in the map also store when each range was added. Then when you visit a range, you just check in $$$O(\log N)$$$ whether you need to recompute it, and recompute it if you have to. I think this doesn't increase the complexity, because an interval tree is already $$$n \log^2 n$$$ to update...?

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Uneven problems

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

I have a general question about interactive problems: a lot of problems mention whether the interactor is adaptive or not, and I wanted to understand whether this makes any difference to the solution. In this contest, for example, E1 and E2 said that the interactor is not adaptive, but from what I understand, even if it were adaptive, that wouldn't have changed the solution in any way, right? What exactly is the purpose of mentioning it then?

»
21 month(s) ago, hide # |
Rev. 2  
Vote: I like it 0 Vote: I do not like it

Can anyone please explain why we can't solve problem C by simply implementing the algorithm?

My observation was that the number of 0's after each operation grows exponentially, so there will be no more than $$$\log(n)$$$ iterations. Recalculating the next iteration of the array is $$$O(n)$$$, times the std::map operation to find the most frequent element, $$$O(\log(n))$$$, times $$$O(\log(n))$$$ iterations, resulting in $$$O(n \log^2(n))$$$ operations, which should pass the TL.

Also for some reason my implementation gives WA 272048767. Why so?

»
21 month(s) ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

For problem E, every time I just query the subtree with the most possible nodes, which is less than half of the total. https://mirror.codeforces.com/contest/1990/submission/273813620

»
20 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

Can anyone explain why the prefix sum up to $$$x$$$ should be $$$\gt 0$$$ in B?

»
17 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

I solved E2 with $$$q = \log(n)$$$ queries (optimally) and $$$O(n)$$$ time complexity. The idea is to find a node that divides the nodes of the tree in two. I cannot prove that the worst number of queries is less than 160, though. Can someone hack it or explain its mechanics in a mathematical/logical manner? Thank you very much!

Solution
»
12 months ago, hide # |
 
Vote: I like it 0 Vote: I do not like it

I found a cool greedy for D. It can be proven that usually you pay 1 coin for each $$$a_i \gt 0$$$, but you can take a segment $$$[l, r]$$$ matching the pattern $$$2(44)^*2$$$ and cover it for $$$r-l$$$ coins. We greedily try to extend this pattern, and that's it. Code: 316951310.

»
8 months ago, hide # |
Rev. 2  
Vote: I like it 0 Vote: I do not like it

I think the solution to C is missing clearer reasoning about what the proof by contradiction is trying to achieve (that after 2 operations, there won't be contiguous segments of length $$$\le 1$$$). Also, I'm wondering if it's a misleading assumption to state that in the case $$$b_i \lt b_{i+1} \lt b_{i+2}$$$ it's necessary to have $$$b_i=a_i$$$. Inequality is also possible: consider the subsequences 6 | 5 5 ($$$b_i=a_i$$$) 6 ($$$b_{i+1}=a_{i+1}$$$) and 6 | 4 4 ($$$b_{i-1}=4$$$) 5 (4, $$$b_i \ne a_i$$$) 6 (6, $$$b_{i+1}=a_{i+1}$$$). That doesn't change the final conclusion, though: $$$a$$$ (the array after one operation) is not non-decreasing if $$$b_i \lt b_{i+1} \lt b_{i+2}$$$ holds (denying the existence of a single occurrence of an element after 2 operations, a contradiction).

The $$$a_j=a_{i+1} (j \le i)$$$ part is incorrect for $$$j = i$$$ as well, since it was previously stated that $$$b_i=a_i$$$.