MikeMirzayanov's blog

By MikeMirzayanov, 23 months ago, translated from Russian

Hello!

Tomorrow (December 7) the ICPC Northern Eurasia Championship (formerly known as NEERC) will take place. The contest will be held at several sites: St. Petersburg, Barnaul, Kutaisi, and Almaty. Almost 300 teams will take part.

Current standings of the official contest →
We suggest not looking at the current standings if you plan to participate in the online mirror.

We invite you to join the online mirror of the contest: 2022-2023 ICPC, NERC, Northern Eurasia Onsite (Unrated, Online Mirror, ICPC Rules, Teams Preferred). It will start on 07.12.2022 at 11:05 (Moscow time). We recommend participating in teams, but experienced individual participants will have fun too!

The contest will last 5 hours. Naturally, the round is unrated.

Good luck!

  • Vote: like
  • +158
  • Vote: dislike

»
23 months ago, # |
  Vote: like +1 Vote: dislike

Typo: contest ID 1773, not 1733.

»
23 months ago, # |
  Vote: like +65 Vote: dislike

Hello! When will the contest be open for upsolving?

»
23 months ago, # |
  Vote: like 0 Vote: dislike

I thought about problem H's solution for the whole 5-hour period but still couldn't solve it. I'd appreciate some help finalizing my current approach, given below.

My current approach
  • »
    »
    23 months ago, # ^ |
      Vote: like +10 Vote: dislike

    On the contrary, I was able to deterministically find the language mapping in at most $$$11$$$ queries, but could not figure out how to perform the search in fewer than $$$4\log{10^6}$$$ queries.

    My approach to find the language mapping
    • »
      »
      »
      23 months ago, # ^ |
        Vote: like +10 Vote: dislike

      May I ask what your approach for $$$4 \log 10^6$$$ queries was? I have a guess for it, but I am not very sure.

      My guess
      • »
        »
        »
        »
        23 months ago, # ^ |
        Rev. 2   Vote: like 0 Vote: dislike

        Do two binary searches, one for the x-coordinate and one for the y-coordinate. To determine whether the x-coordinate lies to the left or right of $$$m$$$, query $$$(m,0)$$$ and $$$(m+1,0)$$$.

        Edit: Just got AC using $$$3\log 10^6$$$ queries for searching. You can run both binary searches simultaneously by querying $$$(mx,my)$$$, $$$(mx+1,my)$$$ and $$$(mx+1,my+1)$$$, halving the search space for both x and y with $$$3$$$ queries.
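        The paired-query trick can be sketched in 1D. A minimal sketch, assuming a judge that compares each query point with the previous one; the `Oracle` class below is my own simulation, not the real interactor:

```python
class Oracle:
    """Simulated judge: each query is compared with the previous one,
    answering whether the new point is closer to or further from the
    hidden value."""

    def __init__(self, hidden):
        self.hidden = hidden
        self.prev = None
        self.count = 0

    def ask(self, x):
        self.count += 1
        answer = None
        if self.prev is not None:
            d_new, d_old = abs(x - self.hidden), abs(self.prev - self.hidden)
            answer = 'closer' if d_new < d_old else 'further' if d_new > d_old else 'same'
        self.prev = x
        return answer


def search(lo, hi, ask):
    """Binary search for the hidden integer in [lo, hi] using 2 queries
    per halving: the reply to ask(m + 1), asked right after ask(m),
    tells us on which side of m the hidden value lies."""
    while lo < hi:
        m = (lo + hi) // 2
        ask(m)
        if ask(m + 1) == 'closer':   # hidden >= m + 1
            lo = m + 1
        else:                        # 'further': hidden <= m ('same' is impossible for integers)
            hi = m
    return lo
```

        For a range of size $$$10^6$$$ this uses at most $$$2\lceil\log_2 10^6\rceil = 40$$$ queries per coordinate, matching the $$$4\log 10^6$$$ total mentioned above.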

    • »
      »
      »
      23 months ago, # ^ |
      Rev. 2   Vote: like +36 Vote: dislike

      Ask (0, 0), (0, 0) and (1, 1). Unless the answer is (0, 1) or (1, 0), you get first equal, then closer.

  • »
    »
    23 months ago, # ^ |
      Vote: like +19 Vote: dislike

    Your claim of 40 queries with two separate binary searches is quite optimistic, because it's not easy to do a binary search here (though I believe it's theoretically possible to approach this limit).

    As a side note, there was a similar problem at IOI 2010 that focuses on the 1D version, where the words "hotter" and "colder" are known. It's quite tricky; you can try it in the IOI archive on Codeforces: https://ioi.contest.codeforces.com/group/32KGsXgiKA/contest/103756/problem/B.

    • »
      »
      »
      23 months ago, # ^ |
        Vote: like +10 Vote: dislike

      You are correct, 40 queries would be a theoretical limit. I think a ternary search would be possible within 52 queries, and it should be sufficient to pass the task.

  • »
    »
    23 months ago, # ^ |
    Rev. 2   Vote: like +2 Vote: dislike

    Determining the language is easy, so focus on the search part.

    I solve for both coordinates independently (I always set the other one to zero), so let's think of it in 1D. The problem is that we are not able to do a binary search easily, because for that to work we would need to be able to ask questions about out-of-bounds points.

    In my solution I keep an interval $$$[L, R]$$$ such that the answer is within it and my last question was either at $$$L$$$ or at $$$R$$$. I want to shrink this interval by a factor of 3 using 2 questions while maintaining the invariant that the last question was at one of the ends of the interval. For simplicity assume the last question was at $$$L$$$. Then I ask about $$$c = \frac{L + 2R}{3}$$$. If I get "further", then I know the answer is within $$$[L, \frac{2L+R}{3}]$$$; I have already shrunk my interval by a factor of 3, and I can ask my next question at $$$L$$$ to maintain the invariant. If I get "closer", then I ask about $$$c+1$$$. Depending on the answer, I restrict to either $$$[\frac{2L+R}{3}, c+1]$$$ or $$$[c+1, R]$$$.

    That uses $$$4 \cdot \log_3(10^6)$$$ questions for the search part, so together with a constant number of queries for determining the language, it barely fits into the limit.
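    A 1D sketch of the scheme above: keep $$$[lo, hi]$$$ with the last question at an endpoint and shrink by roughly a factor of 3 per 2 questions. The `Oracle` is my own simulation of the judge, and the end-game is simplified:

```python
class Oracle:
    """Simulated judge: compares each query with the previous one."""
    def __init__(self, hidden):
        self.hidden, self.prev, self.count = hidden, None, 0

    def ask(self, x):
        self.count += 1
        ans = None
        if self.prev is not None:
            d_new, d_old = abs(x - self.hidden), abs(self.prev - self.hidden)
            ans = 'closer' if d_new < d_old else 'further' if d_new > d_old else 'same'
        self.prev = x
        return ans


def search(lo, hi, ask):
    ask(lo)                      # establish the invariant: last query at an endpoint
    at_lo = True                 # is the last query at lo (else at hi)?
    while hi - lo > 2:
        if at_lo:
            c = lo + 2 * (hi - lo) // 3
            r = ask(c)                       # compared against lo
            if r == 'same':                  # equidistant: hidden == (lo + c) / 2
                return (lo + c) // 2
            if r == 'further':               # hidden is nearer lo
                hi = (lo + c - 1) // 2
                ask(lo)                      # restore the invariant
            else:                            # 'closer': hidden > (lo + c) / 2
                new_lo = (lo + c) // 2 + 1
                if ask(c + 1) == 'further':  # hidden <= c; keep c + 1 as an endpoint
                    lo, hi, at_lo = new_lo, c + 1, False
                else:                        # hidden >= c + 1
                    lo = c + 1
        else:                                # mirror image of the branch above
            c = hi - 2 * (hi - lo) // 3
            r = ask(c)                       # compared against hi
            if r == 'same':
                return (c + hi) // 2
            if r == 'further':               # hidden is nearer hi
                lo = (c + hi) // 2 + 1
                ask(hi)
            else:                            # 'closer': hidden < (c + hi) / 2
                new_hi = (c + hi - 1) // 2
                if ask(c - 1) == 'further':  # hidden >= c; keep c - 1 as an endpoint
                    lo, hi, at_lo = c - 1, new_hi, True
                else:                        # hidden <= c - 1
                    hi = c - 1
    while lo < hi:                           # at most 3 candidates left
        ask(lo)
        if ask(lo + 1) == 'closer':          # hidden >= lo + 1
            lo += 1
        else:
            return lo
    return lo
```

    For a range of size $$$10^6$$$ this stays around $$$2\log_3 10^6 \approx 26$$$ queries plus a small constant for the end-game.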

»
23 months ago, # |
  Vote: like +23 Vote: dislike

When can we submit the problems in practice mode?

»
23 months ago, # |
  Vote: like +17 Vote: dislike

Will we have the tutorial?

»
23 months ago, # |
  Vote: like 0 Vote: dislike

Any hints for I?

  • »
    »
    23 months ago, # ^ |
    Rev. 4   Vote: like +6 Vote: dislike
    1. You can brute force it locally.
    2. Consider finding a place with the highest entropy (or similar).
    • »
      »
      »
      23 months ago, # ^ |
        Vote: like +3 Vote: dislike

      The first thing that came to my mind was also entropy, but it needed 11 queries. Is it different for you? (I changed the entropy criterion just to minimise the max size of the classes of equivalent factorials.)

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: like +3 Vote: dislike

        Entropy worked for me using your definition, except I did these steps first:

        Spoiler
  • »
    »
    23 months ago, # ^ |
      Vote: like +8 Vote: dislike

    Just always ask the question that minimizes the size of the biggest class that you restrict to after getting an answer. It will be a bit slow, but if you hardcode the first move (which does not depend on anything) as 1475, precomputed with the same slightly-too-slow code, then it is fast enough.
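    The greedy can be sketched abstractly. A minimal sketch with a made-up toy query set (bit tests), not the actual problem-I queries:

```python
from collections import defaultdict


def greedy_minimax_depth(candidates, queries):
    """Worst-case number of queries of the greedy strategy: always ask
    the query whose worst answer leaves the largest remaining class as
    small as possible, then recurse into every answer class."""
    if len(candidates) <= 1:
        return 0

    def classes(q):
        groups = defaultdict(list)
        for cand in candidates:
            groups[q(cand)].append(cand)
        return list(groups.values())

    split = min((classes(q) for q in queries), key=lambda g: max(map(len, g)))
    if len(split) == 1:          # no query distinguishes the candidates
        raise ValueError("no informative query left")
    return 1 + max(greedy_minimax_depth(g, queries) for g in split)


# toy example: guess a number 0..7 where a query reveals one bit
bit_queries = [lambda x, i=i: (x >> i) & 1 for i in range(3)]
```

    Verifying the 10-query bound for the real problem would mean running exactly this kind of recursion over all candidate numbers, as discussed below.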

    • »
      »
      »
      23 months ago, # ^ |
        Vote: like 0 Vote: dislike

      Is there any way to prove that minimizing the size of the biggest class after each answer guesses the answer in fewer than $$$10$$$ queries, other than actually running the solution code on all $$$5982$$$ numbers locally?

      Because if that is the only way, the problem is not very interesting.

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: like 0 Vote: dislike

        I doubt it. But that should be the first thing to try if we are minimizing the worst case, and since it works, I don't need a concise mathematical proof. Talking about entropy surprises me a bit, as we are not talking about the average case. It's not a mind-blowing problem, but I guess it's a fair one.

»
23 months ago, # |
  Vote: like 0 Vote: dislike

How to solve problem D?

  • »
    »
    23 months ago, # ^ |
      Vote: like 0 Vote: dislike

    I couldn't write an AC solution yet, but these should be the cases where the two chosen cells make the tiling impossible:

    • The two cells are in different connected components.
    • The two cells are in the same connected component and have the same color (the color being $$$(x+y) \bmod 2$$$).
    • The two cells are in the same connected component and have different colors, but removing them splits the component into parts such that at least one part is invalid (i.e., has odd area or an imbalance of colors).
  • »
    »
    23 months ago, # ^ |
      Vote: like +11 Vote: dislike

    Covering with dominoes is of course a matching between white and black cells. Let $$$w, b$$$ denote the numbers of white and black cells that remain. We must have $$$w=b$$$, and if we remove two cells of the same color we win, so the true answer is at least $$$2 \cdot {w \choose 2}$$$. Hence if $$$w > 1005$$$ we are done: we simply output $$$10^6$$$, and we are allowed to solve the rest of the problem in time quadratic in the number of free cells.

    We compute the matching. The answer will be $$${2w \choose 2}$$$ minus the number of pairs whose removal still leaves a full matching, so we focus on counting those. For each white cell, we unmatch and remove it, and want to count how many black cells we can remove so that there is still a full matching. These are exactly the vertices reachable from the black cell that was matched to the removed white cell, and we count them with one DFS.
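    A toy sketch of this counting step, assuming Kuhn's algorithm for the matching and a made-up grid encoding (any matching algorithm would do):

```python
def count_removable_pairs(free):
    """free: set of (row, col) cells. Returns the number of (white, black)
    pairs whose removal still leaves a perfectly tileable region, or None
    if the region has no perfect matching to begin with."""
    whites = [cell for cell in free if (cell[0] + cell[1]) % 2 == 0]
    blacks = [cell for cell in free if (cell[0] + cell[1]) % 2 == 1]
    black_id = {cell: i for i, cell in enumerate(blacks)}
    adj = [[black_id[(r + dr, c + dc)]
            for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0))
            if (r + dr, c + dc) in black_id]
           for r, c in whites]

    # Kuhn's algorithm: match_b[b] = white matched to black b, or -1
    match_b = [-1] * len(blacks)

    def augment(w, used):
        for b in adj[w]:
            if b not in used:
                used.add(b)
                if match_b[b] == -1 or augment(match_b[b], used):
                    match_b[b] = w
                    return True
        return False

    if len(whites) != len(blacks) or not all(augment(w, set()) for w in range(len(whites))):
        return None

    match_w = [-1] * len(whites)
    for b, w in enumerate(match_b):
        match_w[w] = b
    white_neighbors = [[] for _ in blacks]       # black -> adjacent whites
    for w, bs in enumerate(adj):
        for b in bs:
            white_neighbors[b].append(w)

    total = 0
    for w in range(len(whites)):
        # remove white w; its partner becomes free; every black reachable
        # via (free black -> adjacent white -> that white's partner) can be
        # removed together with w while keeping a full matching
        seen, stack = {match_w[w]}, [match_w[w]]
        while stack:
            b = stack.pop()
            for w2 in white_neighbors[b]:
                if w2 != w and match_w[w2] not in seen:
                    seen.add(match_w[w2])
                    stack.append(match_w[w2])
        total += len(seen)
    return total
```

    On a full 2x2 board any white/black pair can be removed, so the count is 4 there.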

»
23 months ago, # |
  Vote: like 0 Vote: dislike
»
23 months ago, # |
Rev. 3   Vote: like 0 Vote: dislike

I have $$$O(3^m \cdot m)$$$ time and $$$O(3^m)$$$ memory in G (and it passed). That sounds like a lot. Was it intended that this passes, or was there a better solution? Fear of that was the main reason I decided to focus on J instead of G, which was a bad decision.

  • »
    »
    23 months ago, # ^ |
      Vote: like 0 Vote: dislike

    I don't know how to optimise the time complexity, but the space complexity isn't hard to cut to $$$O(2^m)$$$.

  • »
    »
    23 months ago, # ^ |
    Rev. 3   Vote: like +48 Vote: dislike

    Let's compute DP[i] = the probability of reaching a state with mask i, where set bits represent people that haven't gotten any answer wrong. From state i, whenever a question with "i AND question_mask != i" is taken (no such question has been taken yet, because otherwise the AND of all taken questions wouldn't be i), the process moves to a state with fewer bits.

    The problem is optimizing the transition, and I guess you did that by iterating over submasks in some way. We can solve the problem from here in O(2^M * M^2).

    Let's have an array A[i] = the number of questions whose mask of answers is i.

    At first we know the answer for DP[(1 << M) - 1]. If we take the AND convolution of that DP array with the array A, we almost have the contribution of transitions from state (1 << M) - 1 to its submasks. What's missing is dividing by the number of masks that change the result when we take the AND. A question mask j won't change the AND with i iff j is a supermask of i. So if we take B[i] = the sum of A[j] over all supermasks j of i, then N - B[i] is the number of masks that will change the state.

    Now, to apply that transition in general, we can simply apply it to states with M active bits, then M-1 active bits, and so on.

    The solution is: for each BITS from M down to 1, take D[i] = 0 if pop_count(i) != BITS, otherwise DP[i] / (N - B[i]). Then take the AND convolution of D and A; that gives the contribution of states with BITS active bits to states with fewer than BITS active bits. Add the result of that convolution to the corresponding positions for every position with fewer than BITS set bits.
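    The AND convolution used above can be sketched with the superset-sum (zeta/Möbius) transforms; a minimal sketch:

```python
def superset_zeta(a, m):
    """f[i] = sum of a[j] over all supermasks j of i."""
    f = a[:]
    for bit in range(m):
        for mask in range(1 << m):
            if not mask >> bit & 1:
                f[mask] += f[mask | 1 << bit]
    return f


def superset_mobius(f, m):
    """Inverse of superset_zeta."""
    a = f[:]
    for bit in range(m):
        for mask in range(1 << m):
            if not mask >> bit & 1:
                a[mask] -= a[mask | 1 << bit]
    return a


def and_convolution(a, b, m):
    """c[k] = sum over i, j with (i AND j) == k of a[i] * b[j], in O(2^m * m)."""
    fa, fb = superset_zeta(a, m), superset_zeta(b, m)
    return superset_mobius([x * y for x, y in zip(fa, fb)], m)
```

    Note that the array B[i] from the comment is exactly superset_zeta(A, M)[i].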

  • »
    »
    23 months ago, # ^ |
      Vote: like 0 Vote: dislike

    Can you explain your solution as well?

    • »
      »
      »
      23 months ago, # ^ |
      Rev. 2   Vote: like +16 Vote: dislike

      I want to compute, for each subset of people, the probability of the genie winning if this is the subset of people that are alive. The key observation is that this value does not depend on the history of questions.

      In order to compute that DP, from each state I iterate over all possible submasks that I can get to with the soonest question that tells apart some of the people in it. For that I need to know the values $$$f(A,B,C)$$$ denoting the number of questions that have zeros on set A, ones on set B, and anything on set C (where A, B, C is a partition of all people). I compute these in a SOS-like manner in $$$O(3^m \cdot m)$$$ time, and the array storing them takes around 520MB.

»
23 months ago, # |
  Vote: like +3 Vote: dislike

Can someone please give a hint for problem A?

  • »
    »
    23 months ago, # ^ |
    Rev. 2   Vote: like +15 Vote: dislike
    Hint
    • »
      »
      »
      23 months ago, # ^ |
        Vote: like +6 Vote: dislike

      Could you please express it more clearly? I cannot understand it.

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: like +1 Vote: dislike

        Well, that was supposed to be a hint, not a full solution :P. The more direct hint is that

        Hint
        • »
          »
          »
          »
          »
          23 months ago, # ^ |
            Vote: like +7 Vote: dislike

          Thank you very much. Maybe I now know how to solve it: since the permutation "q" has restrictions, we can use bipartite matching to get q. Is that it? I had this idea at first, but I worried it would take too much time, because the Hungarian algorithm takes O(n*m) time, and in this problem each point has n-2 edges; that is what worried me.

          • »
            »
            »
            »
            »
            »
            23 months ago, # ^ |
              Vote: like +6 Vote: dislike

            It's nothing like what you describe. Draw a random q. It uniquely determines p. Check that neither q nor p has a fixed point. If one does, repeat from scratch. If not successful within some number of tries (I used 700), declare it impossible.
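            A sketch of this randomized approach, assuming the task asks for fixed-point-free permutations with p[q[i]] = a[i] (0-indexed); adjust the composition to the actual statement:

```python
import random


def find_pq(a, tries=700):
    """Try random fixed-point-free candidates for q; p is then forced by
    p[q[i]] = a[i]. Return (p, q), or None if every try failed."""
    n = len(a)
    q = list(range(n))
    for _ in range(tries):
        random.shuffle(q)
        if any(q[i] == i for i in range(n)):
            continue                 # q has a fixed point, redraw
        p = [0] * n
        for i in range(n):
            p[q[i]] = a[i]           # p is uniquely determined by q
        if all(p[i] != i for i in range(n)):
            return p, q
    return None
```

            Each try succeeds with probability roughly 1/e^2, so 700 tries fail only with negligible probability, as discussed below.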

            • »
              »
              »
              »
              »
              »
              »
              23 months ago, # ^ |
                Vote: like 0 Vote: dislike

              Why is 700 enough?

              • »
                »
                »
                »
                »
                »
                »
                »
                23 months ago, # ^ |
                  Vote: like +3 Vote: dislike

                Intuitively, the chance that a random permutation has no fixed point is around $$$1/e$$$, so for two permutations it would be $$$1/e^2$$$, which is more than $$$1/9$$$. 100 tries may be too few for $$$10^5$$$ test cases (and indeed I got WA), but 700 sounds more than safe.
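                The $$$1/e$$$ intuition can be checked exactly with the derangement recurrence $$$D(n) = (n-1)(D(n-1) + D(n-2))$$$; a quick sketch:

```python
from math import exp, factorial


def derangements(n):
    """Number of permutations of n elements with no fixed point."""
    d = [1, 0]                       # D(0) = 1, D(1) = 0
    for i in range(2, n + 1):
        d.append((i - 1) * (d[-1] + d[-2]))
    return d[n]

p = derangements(10) / factorial(10)     # already ~1/e for n = 10
fail_700 = (1 - exp(-2)) ** 700          # chance that all 700 tries fail
```

                Here p is about 0.36788, and fail_700 is below 1e-44, so even $$$10^5$$$ test cases are comfortably safe with 700 tries.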

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  23 months ago, # ^ |
                    Vote: like +3 Vote: dislike

                  Even better would be to check whether an answer exists using Hall's theorem. A valid answer exists iff there isn't a value blocked by N positions and there aren't 2 values blocked by N-1 positions. Two values being blocked by N-1 positions doesn't happen for N > 3, so checking these conditions is easy.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  23 months ago, # ^ |
                    Vote: like 0 Vote: dislike

                  That's probably clever and all, but one would first need to gain some understanding of the structure of this problem xD. My approach has the advantage of being basically brainless and super easy to code, which probably made it the optimal approach during the competition, though solutions like yours are of course much more legitimate from a theoretical point of view.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  23 months ago, # ^ |
                    Vote: like +31 Vote: dislike

                  Actually, the only inputs for which no $$$p$$$, $$$q$$$ exist are:

                  • [1]
                  • [2, 1]
                  • [1, 3, 2]
                  • [2, 1, 3]
                  • [3, 2, 1]

                  So if you know the answer exists, you can just use "infinitely" many turns. Proof by AC: 184521024

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  23 months ago, # ^ |
                    Vote: like 0 Vote: dislike

                  Thanks for your approach. In these contests we are not allowed to see others' code even after getting AC. Could you please post your code here?

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  22 months ago, # ^ |
                  Rev. 5   Vote: like 0 Vote: dislike

                  I guess the reason c=100 tries gets WA is not the $$$10^5$$$ test cases, but that the probability of a try failing can be made greater than $$$1-1/e^2$$$ by the input, due to dependency.

                  Correct me if I'm wrong: if we use std::mt19937_64 rng(std::chrono::steady_clock::now().time_since_epoch().count()); within each test case, then each test case is effectively independent, as the seed will be different.

                  Also, can anyone give an upper bound on the probability that a random permutation has no common point with either of two given permutations? I'm curious to know it.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  22 months ago, # ^ |
                    Vote: like +1 Vote: dislike

                  I added srand(clock()); at the beginning of each test case and my code passed with c=100, but I think that is pure luck, as it gets WA with c=90; I think 100 is just on the verge of getting accepted.

                  However, I have to agree that 100 should sound fairly safe if each try were indeed an independent $$$1-1/e^2$$$ failure (80 wouldn't, though). I don't feel like delving into the details of what causes that difference, though.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  21 months ago, # ^ |
                    Vote: like 0 Vote: dislike

                  Hey, I know this thread is like two months old, but I'm wondering why this idea is "intuitive".

                  Obviously, it can be proven mathematically but I don't see how someone could figure out this probability on the fly.

                  Thank you.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  21 months ago, # ^ |
                  Rev. 2   Vote: like 0 Vote: dislike

                  I guess it helps if one has seen similar facts/exercises about permutations and understands the intuition behind them. You may want to read up on derangements if you haven't heard of them: https://en.wikipedia.org/wiki/Derangement

                  On a higher level, you may notice that this problem is very loosely constrained: you have a ton of freedom. And when you have a lot of freedom, there is a chance that some kind of random/heuristic solution may pass.

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  21 months ago, # ^ |
                    Vote: like 0 Vote: dislike

                  Thanks for the quick response! I'll read that page now...

                • »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  »
                  6 months ago, # ^ |
                    Vote: like 0 Vote: dislike

                  Correct me if i m wrong, if we use std::mt19937_64 rng(std::chrono::steady_clock::now().time_since_epoch().count()); within each test case, then it's like each test case being independent, as the seed will be different.

                  Sorry, I was wrong there. In a multi-testcase problem, it's about the probability of passing all test cases simultaneously, so the probability of passing definitely decreases.

            • »
              »
              »
              »
              »
              »
              »
              22 months ago, # ^ |
                Vote: like 0 Vote: dislike

              How can we generate random permutations repeatedly? Can you explain?

              • »
                »
                »
                »
                »
                »
                »
                »
                22 months ago, # ^ |
                  Vote: like 0 Vote: dislike

                I don't quite get your question. Do you know what random_shuffle is? If yes, just apply it many times and you get various random permutations.

  • »
    »
    23 months ago, # ^ |
      Vote: like +4 Vote: dislike

    You can treat it as a bipartite matching problem. A greedy approach will find a matching of size at least N-2. We want a perfect matching, so starting from a matching of size N-2 we can find O(1) augmenting paths in O(N).

    • »
      »
      »
      23 months ago, # ^ |
        Vote: like +1 Vote: dislike

      Could you tell me why finding an augmenting path starting from a matching of size n-2 needs only O(n)?

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: like +4 Vote: dislike

        Find the path with BFS. From the unmatched vertex, you can reach N-2 vertices in the other partition. From that point onwards, keep the unmatched vertices of the other partition in a vector; there will be <= 2 such vertices. You can check whether an edge to a vertex exists in O(1).

    • »
      »
      »
      22 months ago, # ^ |
        Vote: like 0 Vote: dislike

      Hey, can you please specify exactly what you mean, or give code?

  • »
    »
    23 months ago, # ^ |
    Rev. 4   Vote: like +4 Vote: dislike

    The ideas other people proposed are a bit simpler than my way, but I started off by thinking about cyclic shifts: for a cycle of length $$$>2$$$, mapping each element $$$a_i := a_{a_i}$$$ results in a valid $$$q$$$, which allows you to derive $$$p$$$ for the elements in that cycle. So then you just have to figure out how to cover the cycles of length $$$\leq 2$$$.

    • »
      »
      »
      23 months ago, # ^ |
      Rev. 2   Vote: like 0 Vote: dislike

      I originally tried that too, but I couldn't figure out how to cover cycles of length at most 2. For example, what if we have a large cycle (say, of length 100) and a smaller cycle of length 1? Then, if you map $$$a_i := a_{a_i}$$$, the smaller cycle doesn't get covered.

      How did you deal with that edge case?

      • »
        »
        »
        »
        23 months ago, # ^ |
          Vote: like +6 Vote: dislike

        I inserted it into the large cycle at some arbitrary spot and then just reversed the edges of the cycle.

  • »
    »
    23 months ago, # ^ |
      Vote: like 0 Vote: dislike

    OK, so let's suppose there is a set of integers that doesn't yet occur in $$$q$$$ at a given moment. If this set has at least 3 elements, then for every $$$i$$$ we can set $$$q[i]$$$ to one of these elements (as every $$$q[i]$$$ has at most two integers that cannot be placed there). The problem starts when only 2 elements are left in this set. So we only need to make sure that the remaining 2 elements will fit in the remaining positions.

    Rest of the solution
  • »
    »
    23 months ago, # ^ |
      Vote: like 0 Vote: dislike
    my solution
»
23 months ago, # |
Rev. 2   Vote: like +23 Vote: dislike

Is there any editorial?

»
23 months ago, # |
  Vote: like 0 Vote: dislike

Hey MikeMirzayanov. Thank you for making the contest available for upsolving. Unfortunately, task K cannot be submitted now (only the PDF statement can be downloaded). Can you please fix this?

»
23 months ago, # |
  Vote: like +5 Vote: dislike

Any hints regarding problem K, please?

  • »
    »
    23 months ago, # ^ |
      Vote: like 0 Vote: dislike

    Try reducing the problem for (n, k) to the problem for (n-2, k-2). For that to work, you will also need solutions for the base cases k=1 and k=2.

»
23 months ago, # |
  Vote: like +8 Vote: dislike

Am I the only one who doesn't see others' submissions, even for solved problems?

»
23 months ago, # |
  Vote: like 0 Vote: dislike

Can anyone help me with how to solve A?

»
23 months ago, # |
  Vote: like 0 Vote: dislike

From the editorial of C: what is the topological structure of a graph? What operation is considered for the grouping? Is there an alternative way to think about it? Any helpful reading material?

»
23 months ago, # |
  Vote: like 0 Vote: dislike

I tried E with the following approach: if any array has a[i-1] > a[i], I do split++ and tower++; I do this for all n arrays; then the number of splits is split and the number of combines is tower-1. But it gives WA on test case 11. Can anyone tell me why my solution is wrong?

»
23 months ago, # |
  Vote: like 0 Vote: dislike

MikeMirzayanov, when will the submissions of other participants of this round be opened?

»
20 months ago, # |
Rev. 2   Vote: like 0 Vote: dislike

Solved A: an amazing trick without using randomisation.

The idea is to first build a parent graph (a directed graph) where i is linked to a[i] (i-->a[i]). This graph consists of multiple disjoint cycles. Append all disjoint cycles into a single array (flatten it), so that [a-->b-->c-->d-->a], [e-->f-->e] becomes a, b, c, d, e, f. Then link by skipping 2 indices: a-->c-->b, b-->d-->c, ....., e-->a-->f, f-->b-->e, where the first transition is given by q and the second one by p. This way no fixed point occurs in p or q. There is an edge case for this approach: two disjoint cycles of sizes [n-1] and [1] (e.g. a-->b-->c-->a, d-->d, where the transition c-->a-->a gives a fixed point for p). To handle such test cases, skip by 3 instead of 2. The impossible cases are [ n=1, [1] ], [ n=2, [2] ], [ n=3, [1][2] or [2][1] ].

Here is the code
»
15 months ago, # |
  Vote: like 0 Vote: dislike

Can anyone help with problem E?