vovuh's blog

By vovuh, history, 5 years ago, translation

All ideas except problem C belong to MikeMirzayanov.

1213A - Chips Moving

Tutorial
Solution

1213B - Bad Prices

Tutorial
Solution

1213C - Book Reading

Tutorial
Solution

1213D1 - Equalizing by Division (easy version)

Tutorial
Solution

1213D2 - Equalizing by Division (hard version)

Tutorial
Solution

1213E - Two Small Strings

Tutorial
Solution

1213F - Unstable String Sort

Tutorial
Solution

1213G - Path Queries

Tutorial
Solution
Codeforces Round 582 (Div. 3) Editorial

»
5 years ago, # |

Thanks for the quick editorial.

»
5 years ago, # |
Rev. 2

I think precalculating the answer for each possible query value is also an option under the given constraints in G.

»
5 years ago, # |
Rev. 2

A different approach for F with O(n) time complexity. Consider each permutation value as a node and create a directed graph by adding an edge between adjacent nodes of the permutations (it doesn't matter whether from left to right or right to left). Some cycles may appear, and clearly all nodes in a cycle must get the same letter in the string. Then just find the strongly connected components; each can get a different letter. If there are fewer components than the required k, print NO; otherwise do a topological sort on those components and assign letters accordingly. Here's my code: https://mirror.codeforces.com/contest/1213/submission/59754658

  • »
    »
    5 years ago, # ^ |
    Rev. 2

    You don't even need to do a topological sort of the vertices.

    Start with some character, say 'a', and then iterate through one of the given permutations. Every time you visit a new component (one not seen before), increase the current character value.

    My code: 59772918

  • »
    »
    5 years ago, # ^ |

    Actually the topological sort is unnecessary because in Kosaraju's, the components come in topological order :D

    • »
      »
      »
      5 years ago, # ^ |

      To be a lucky contestant-

      1. Copy le template de Kosaraju
      2. Plan to run a topsort on the sccs
      3. Make the meta graph (you can find the code at the end of the comment, if you ever need to copy /home/your-self/competitive-programming/templates/building-meta-graph.cpp)
      4. Forget to run a topsort on the meta graph
      5. Submit
      6. Pass pretests
      7. Finish contest
      8. After a while realize that you have forgotten to implement a part of your solution
      9. Almost get a heart attack
      10. But then wonder why ~69 pre testcases passed
      11. "Ah maaaan, I've run Kosaraju's... lucky me~"
      That piece of code
  • »
    »
    5 years ago, # ^ |

    SCC is easier to compute in this problem. If you order nodes by $$$p$$$, the only back edges are created by $$$q$$$. You do not need DFS.

    My code is dirty but here's my submission: https://mirror.codeforces.com/contest/1213/submission/59731555

  • »
    »
    5 years ago, # ^ |

    The idea behind the official solution is the same as the one you suggested, but from the implementation point of view the official solution is much easier. Your approach keeps a lot of redundant information, such as the actual edges between nodes of the same component.

    • »
      »
      »
      5 years ago, # ^ |

      I didn't say it's a different idea; it's just a different approach to it with longer code but a better time complexity, which is why I wanted to share it.

      • »
        »
        »
        »
        5 years ago, # ^ |

        I see, sorry for misunderstanding your initial intention, my apologies :)

        • »
          »
          »
          »
          »
          5 years ago, # ^ |
          Rev. 4

          Not sure if this is helpful now, but the solutions above involve a lot of code. A simpler approach uses two pointers with O(n) complexity (code). Sorry if someone has already shared a similar approach.

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ |

            Could you elaborate on the idea behind your solution? I couldn't reach the full solution with two pointers. Thanks in advance :)

»
5 years ago, # |

For problem G, what if the queries are online?

  • »
    »
    5 years ago, # ^ |

    You can precalculate the answer for each $$$w_i$$$ (the weight of the $$$i$$$-th edge) as if it were a query. For each real query you can then use binary search to find the largest $$$w_i$$$ that is not greater than the weight from the query.
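    In code, the lookup step might look like this (a sketch; `weights` is the sorted list of distinct edge weights and `answers[i]` is the precomputed answer for `weights[i]` — both names are mine, not from the thread):

```python
import bisect

def answer_online(weights, answers, q):
    # index of the first precomputed weight strictly greater than q
    i = bisect.bisect_right(weights, q)
    # all edges with weight <= q are exactly those up to index i-1
    return answers[i - 1] if i > 0 else 0
```

    For example, with precomputed pairs weights = [1, 2, 3, 4] and answers = [3, 7, 15, 21], a query with q = 0 returns 0, since no edge can be used.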

    • »
      »
      »
      5 years ago, # ^ |

      Actually the online and offline solutions share the same idea, right? Calculate the answer by merging sets in increasing order of weight.

      • »
        »
        »
        »
        5 years ago, # ^ |

        Yes. You run the offline solution for the interesting weights, and then you can easily answer the given queries online.

      • »
        »
        »
        »
        5 years ago, # ^ |

        Could you please explain how to start with the problem? I have a vague idea but can't understand it completely. What are the components around the edges if we start from the beginning, and how do we proceed? A small example would be very helpful. This problem seems very interesting to me.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |
          Rev. 3

          Look at the first example input.

          7 5
          1 2 1
          3 2 3
          2 4 1
          4 5 2
          5 7 4
          3 6 2
          5 2 3 4 1
          

          At first every node forms its own set. Now we calculate the answer for $$$weight \le 1$$$: merge two nodes if there is an edge connecting them whose weight is less than or equal to $$$1$$$. In this example we have two such edges, $$$<1,2>$$$ and $$$<2,4>$$$. After merging, the node sets become $$$((1, 2, 4), (3), (5))$$$. The larger weights are handled similarly.

          Now let's look at how to update the answer when merging two sets. Let the two sets' sizes be $$$a$$$ and $$$b$$$. Merging them adds $$$a \cdot b$$$ to the answer, because before merging the contribution is $$$\binom{a}{2} + \binom{b}{2}$$$, and after merging it is $$$\binom{a+b}{2}$$$; the difference is $$$a \cdot b$$$ if you expand the binomials.

          Sorry for my poor English. You are welcome to ask again if you are still confused.
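          The merging process above can be sketched as follows (a minimal Python sketch based on this explanation, not the official solution; the function name is mine). When components of sizes $$$a$$$ and $$$b$$$ merge, the answer grows by $$$\binom{a+b}{2}-\binom{a}{2}-\binom{b}{2} = a \cdot b$$$:

```python
def path_queries(n, edges, queries):
    # edges: list of (u, v, w); queries: list of weight thresholds q
    parent = list(range(n + 1))
    size = [1] * (n + 1)

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    edges = sorted(edges, key=lambda e: e[2])  # process edges by weight
    order = sorted(range(len(queries)), key=lambda i: queries[i])
    ans = [0] * len(queries)
    cur, j = 0, 0
    for i in order:
        while j < len(edges) and edges[j][2] <= queries[i]:
            u, v, _ = edges[j]
            ru, rv = find(u), find(v)
            if ru != rv:
                cur += size[ru] * size[rv]  # new vertex pairs created by this merge
                if size[ru] < size[rv]:
                    ru, rv = rv, ru
                parent[rv] = ru
                size[ru] += size[rv]
            j += 1
        ans[i] = cur
    return ans
```

          On the first example input above (7 nodes, the 6 listed edges, queries 5 2 3 4 1) this returns [21, 7, 15, 21, 3].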

»
5 years ago, # |

Wow, what a quick editorial. Thank you!

»
5 years ago, # |

Hi. Could someone please tell me why, in problem D1, the number of possible candidates is $$$O(n \log n)$$$ and not $$$O(n \log(\max a_i))$$$?

»
5 years ago, # |
Rev. 2

Why doesn't D2 get TLE? We have 1e5 values and we are summing k values, so k * 1e5 ==> 1e5 * 1e5?

  • »
    »
    5 years ago, # ^ |

    No, it's not like that. Suppose you have k == n; then only 0 or 1 can satisfy the "at least k candidates" condition, so the total work in that case is at most about 8*10^5. You can check the other cases similarly.

  • »
    »
    5 years ago, # ^ |

    We have at most $$$n=2 \cdot 10^5$$$ numbers. Each number contributes to $$$O(\log_{2}(2 \cdot 10^5))$$$ arrays when we divide it by $$$2$$$ down to $$$0$$$. So in the worst case we have $$$O(n \cdot \log_{2}(2 \cdot 10^5))$$$ numbers in the arrays in total, and we never have to sort or sum more numbers than that, no matter what $$$k$$$ is.
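    As a quick numeric sanity check of that bound (my own arithmetic, not from the thread):

```python
import math

n = 2 * 10**5
# halving a value up to n repeatedly reaches 0 after floor(log2(n)) + 1 = 18 divisions
halvings = math.floor(math.log2(n)) + 1
assert halvings == 18
# so all the per-candidate arrays together hold at most 3.6 million numbers
assert n * halvings == 3_600_000
```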

    • »
      »
      »
      4 years ago, # ^ |
      Rev. 3

      Please correct me if I'm wrong:

      let numbers = 2e5

      let all be 2e5

      they contribute to log(2e5) arrays

      so log(2e5) numbers have 2e5 numbers in their array

      total = log(2e5) * 2e5 = z

      worst case O(zlog(z)) i.e. sorting all of them

      zlog(z) + klog(2e5)

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For the problem D1, I don't understand the statement: We can see that there is always O(nlogn) possible candidates because all values x are among all possible values of ⌊ai2m⌋ for some m from 0 to 18. So we need to check each candidate separately and try to update the answer with it.

Also, how are we selecting the value for x? is it in the sequence 2,4,8... or do we calculate x in some way and then use it to check the elements in array which can be reduced to it.

Thanks. :)

  • »
    »
    5 years ago, # ^ |
    Rev. 3

    I can tell you the idea. Basically it is a brute force: we try every possible x, looping from zero to 2e5; for each x we check how many moves are needed to obtain it, and then take the minimum.

    • »
      »
      »
      5 years ago, # ^ |

      I saw the solution code after reading your reply. It makes sense to me now. Thank you!

  • »
    »
    5 years ago, # ^ |

    Our goal here is to divide some numbers from this array by 2 in order to obtain $$$k$$$ equal elements. Let's say that the required number is $$$x$$$; this number equals one of the numbers in the final array. Since any number in the final array is the result of dividing some $$$a_i$$$ by $$$2^{p_i}$$$ $$$(0 \le p_i)$$$, where $$$p_i$$$ is the number of times we divided the $$$i$$$-th number by 2, $$$x$$$ is of the form $$$\lfloor a_i/2^{p_i} \rfloor$$$. Now, we know that there are $$$n$$$ numbers, and in the worst case for input size and values the maximum number in the array equals $$$n$$$, so each number can be divided at most $$$\lfloor \log(n) \rfloor + 1$$$ times; with $$$n$$$ numbers, the number of possible choices for $$$x$$$ is $$$O(n \log n)$$$.
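    A direct sketch of this candidate enumeration (the helper `min_ops` is mine, written for this comment; it takes the k cheapest ways to reach each candidate, so it covers both D1 and D2):

```python
from collections import defaultdict

def min_ops(a, k):
    # candidate value x -> list of division counts that turn some a_i into x
    cost = defaultdict(list)
    for v in a:
        steps = 0
        while True:
            cost[v].append(steps)
            if v == 0:
                break
            v //= 2
            steps += 1
    best = None
    for steps_list in cost.values():
        if len(steps_list) >= k:           # candidate reachable from >= k elements
            steps_list.sort()
            total = sum(steps_list[:k])    # k cheapest conversions
            if best is None or total < best:
                best = total
    return best
```

    For example, min_ops([1, 2, 2, 4, 5], 3) returns 1: two elements already equal 2, and dividing 4 once gives a third.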

    • »
      »
      »
      5 years ago, # ^ |

      It is known that there are n elements in the array, but why is the value of the largest element n in the worst case?

      What if, for example, we had 3 elements? Then n=3, but the largest element could be 445 (which is much greater than 3).

      • »
        »
        »
        »
        5 years ago, # ^ |

        I said the worst case for input size and values.

        • »
          »
          »
          »
          »
          5 years ago, # ^ |

          If we are already denoting by n the number of elements in the array, how can we also use n to denote the largest value in the array?

          Here is what I am understanding:

          For input size n. Let's have the largest element in array denoted by m.

          In the worst case, we can divide each number at most ⌊log(m)⌋+1 times, and this is done for n elements. Thus, the worst-case time complexity is O(n*log(m)).

          • »
            »
            »
            »
            »
            »
            5 years ago, # ^ |

            Yeah, that's right, but for the sake of simplicity we are just using $$$n$$$.

»
5 years ago, # |

Can anyone tell me what's wrong with my code for G? It fails on the 4th test case. I have done exactly what the editorial suggested.

link

»
5 years ago, # |

Can someone please explain the approach for problem C? Thanks!

  • »
    »
    5 years ago, # ^ |
    Rev. 3

    For the sum of the units digits, the whole divisor isn't important; only its units digit matters. The units digits of the multiples of any digit follow a pattern, a cycle that keeps repeating. For example:

    1) all multiples of zero end with 0;

    2) multiples of an odd number other than 5 can end with any digit 0 to 9;

    3) multiples of an even number end with 2, 4, 6, 8 or 0;

    4) multiples of 5 end with either 0 or 5.

    Now try comprehending this... https://mirror.codeforces.com/contest/1213/submission/59745010
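    The cycle observation leads to a short solution sketch (my own helper, not the linked submission): the multiples of m that fit into n number cnt = n // m, and their last digits repeat with period 10.

```python
def last_digit_sum(n, m):
    cnt = n // m                                   # multiples of m not exceeding n
    cycle = [(m * i) % 10 for i in range(1, 11)]   # last digits repeat with period 10
    # full cycles contribute sum(cycle) each; then add the leftover partial cycle
    return (cnt // 10) * sum(cycle) + sum(cycle[:cnt % 10])
```

    For n = 100, m = 3 there are 33 multiples; their last digits cycle through 3, 6, 9, 2, 5, 8, 1, 4, 7, 0 (sum 45), giving 3*45 + (3+6+9) = 153.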

»
5 years ago, # |

In G you can also just do $$$res := res + s_{1} s_{2}$$$, since exactly $$$s_{1} s_{2}$$$ new paths are formed by adding the new edge. Indeed,

$$$\binom{s_{1} + s_{2}}{2} - \binom{s_{1}}{2} - \binom{s_{2}}{2} = \frac{(s_{1} + s_{2})(s_{1} + s_{2} - 1) - s_{1}(s_{1} - 1) - s_{2}(s_{2} - 1)}{2} = \frac{s_{1}^{2} - s_{1} + s_{2}^{2} - s_{2} + 2s_{1}s_{2} - s_{1}^{2} + s_{1} - s_{2}^{2} + s_{2}}{2} = s_{1} s_{2}$$$.
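A quick numeric check of this identity (a sketch using the standard library's math.comb):

```python
from math import comb

# C(s1+s2, 2) - C(s1, 2) - C(s2, 2) == s1 * s2 for all positive component sizes
for s1 in range(1, 100):
    for s2 in range(1, 100):
        assert comb(s1 + s2, 2) - comb(s1, 2) - comb(s2, 2) == s1 * s2
```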

  • »
    »
    5 years ago, # ^ |

    Hmm, this seems nice, and there is a very intuitive way to understand this formula. You have some paths in the first component and some paths in the second component, and all the paths you need to add are those between vertices of the first component and vertices of the second component. Obviously, there are exactly $$$s_1 s_2$$$ such paths. Thanks for providing this formula, I didn't think about that :)

»
5 years ago, # |

vovuh, can you explain why my approach is wrong for D2?

I run a loop from j = 1 to j = 200005 considering each j as an optimal value which will be our k equal elements. Now for each j I calculate the minimum number of steps required to get k occurences of j.

For that I have sorted the vector already and follow this strategy:

Suppose I want to make x my optimal value. Then in 1 step I can get x from elements in the range 2*x to 2*(x+1)-1; in 2 steps from elements in the range 4*x to 4*(x+1)-1; in 3 steps from elements in the range 8*x to 8*(x+1)-1; and so on.

To get the number of elements in a range I use upper_bound and lower_bound on my sorted array.

Why is this approach failing for test case 4?

code -> code

»
5 years ago, # |

Can G be solved without DSU?

»
5 years ago, # |

To all the people who managed to solve problem E: I just want to know what struck you, what idea came to your mind. Do you think you solved it because you had solved similar problems in the past? If so, do share them.

  • »
    »
    5 years ago, # ^ |

    I am not sure that I have seen any similar problems before, but almost as soon as I looked at the problem it seemed to me likely that the result would simply be a pattern repeated n times, so one only needed to look at all the different cases for s and t. I then realised one only had to look at two values for s ('aa' and 'ab') since every other value can be transformed into one of these by swapping letter names.

    This then gave me a number of different cases for the relationships between the characters of s and t, each of which had a reasonably obvious solution.

    In the end my first solution failed the tests, but it was then quite easy to write a small test for my code going through all possible values of s and t to work out what case I had missed (maybe I should have done this before submitting the first time!).

»
5 years ago, # |

I think there is an O(N) solution to F (sadly not completed during the contest). See https://mirror.codeforces.com/contest/1213/submission/59767381.

My code first inverts the q permutation so that it maps s indexes to q values rather than the reverse; then by working backwards through the p values I can find, for each index of s, the minimum q value corresponding to any later p value.

Finally, it fills in s in p-value order, keeping track of the maximum q value associated with the letters of s filled in so far. It starts by filling in 'a's. Whenever the maximum value seen so far is less than the minimum q value associated with later p values, it can move on to the next letter. This continues until it has k different letters, at which point it simply fills in the remaining characters of the string with the final letter.

  • »
    »
    5 years ago, # ^ |

    I think there's an O(n) solution too. I was thinking of building a directed graph of n nodes: for each permutation, take any two neighbouring values and add an edge from the first to the second. For example, if 123 and 132 are the permutations, you'll have edges 1->2, 2->3, 1->3 and 3->2. Then notice that a strongly connected component must get the same letter throughout, and the problem reduces to assigning letters to the components.

  • »
    »
    5 years ago, # ^ |

    Here is, in my opinion, the simplest O(n) solution, which uses a bitset.

    59775080

    • »
      »
      »
      5 years ago, # ^ |

      Please explain the idea.

      • »
        »
        »
        »
        5 years ago, # ^ |

        It's probably the same as in the editorial, but checks whether the two sets are equal by maintaining the size of their symmetric difference. Here's my code based on the same idea: 59794369.

        The symmetric difference is just the number of positions currently contained in exactly one of the two prefix sets, so it is zero exactly when the sets are equal. Maintaining which positions have appeared in exactly one permutation, together with the size of the symmetric difference, makes updates easy. Whenever the symmetric difference is zero, you can move to the next character, as in the editorial.
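        A compact sketch of this idea (my own reimplementation for this comment, not submission 59794369; it returns None when the answer is NO):

```python
def unstable_sort(n, k, p, q):
    seen = [0] * (n + 1)   # how many of the two prefixes contain each value
    diff = 0               # size of the symmetric difference of the prefix sets
    group = [0] * (n + 1)  # group index of each position
    g = 0                  # number of completed groups
    for i in range(n):
        for v in (p[i], q[i]):
            seen[v] += 1
            diff += 1 if seen[v] == 1 else -1
        group[p[i]] = g
        if diff == 0:      # prefix sets coincide: a group boundary
            g += 1
    if g < k:
        return None        # fewer groups than required distinct letters
    # earlier groups get smaller letters; extra groups beyond 26 all get 'z'
    return ''.join(chr(ord('a') + min(group[v], 25)) for v in range(1, n + 1))
```

        For p = [1, 2, 3], q = [1, 3, 2], k = 2 the groups are {1} and {2, 3}, giving "abb".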

»
5 years ago, # |

I have been trying to solve at least problem A for all of last August, but I always failed to solve even A. Maybe I am not even considered a 'newbie' in problem solving.

  • »
    »
    5 years ago, # ^ |

    Part of getting out of the newbie range is simply to stop underestimating yourself and overestimating the problems. :)

»
5 years ago, # |

Another way to look at D2: we consider every value from 1 to MaxVal as a candidate for the optimal value and choose the best answer.
1. Mark the frequencies of all elements of the array.
2. Consider all values from 1 to MaxVal.
3. For value i, do a BFS on a tree to get the cost.
The tree here is a binary tree with root node 1, where the children of node i are 2i and 2i+1.
A node at the same level as the BFS source node i costs 0 to convert to i; nodes one level below cost 1 to turn into i.
The BFS runs until either we have found k elements or it becomes clear that we cannot make k elements equal to i. Time complexity: O(MaxVal log MaxVal). https://mirror.codeforces.com/contest/1213/submission/59750061

  • »
    »
    5 years ago, # ^ |
    Rev. 9

    From what I can tell that BFS is actually $$$O(m \log^2 m)$$$ (where $$$m = max(a)$$$), since the number of nodes to search at each "level" grows by one each time. That's also roughly the solution I implemented during contest time, except that I noticed that the numbers of each level of the tree are consecutive, so I used a BIT for the queries: #59749502

    While looking through other solutions I found one that was quite interesting, but I don't remember who wrote it:

    1. Ensure array $$$a$$$ is sorted.
    2. Create two arrays, both indexed from $$$1$$$ to $$$max(a)$$$: $$$q$$$ and $$$c$$$
    3. Iterate through $$$a$$$. For every $$$s_j$$$ where $$$s_0 = a_i, s_x = \displaystyle\lfloor\dfrac{s_{x-1}}2\displaystyle\rfloor, s_{last} = 1$$$ (i.e. the sequence of numbers "reachable" from $$$a_i$$$), check $$$q[{s_j}]$$$; if it is less than $$$k$$$, increment it and add $$$j$$$ to $$$c[s_j]$$$
    4. Once you're done, the final answer is the minimum possible $$$c[x]$$$ where $$$q[x] = k$$$.

    Asymptotics $$$O(n (\log n + \log m) + m)$$$

    Sample code I wrote after contest time based on this idea: #59761739 (instead of two arrays, I just use one array here with the two variables as a tuple)

    • »
      »
      »
      5 years ago, # ^ |

      Hi, I don't understand why the complexity will be $$$m \log^2 m$$$.
      Here is my analysis:
      a BFS from node 1 is O(m);
      a BFS from node 2 is O(m/2), and from node 3 also O(m/2); a BFS from nodes 4, 5, 6, 7 is O(m/4);
      from this I conclude that the BFS cost summed over all nodes of a given level is O(m);
      also, the number of levels in the tree is log m.
      So I feel the overall time complexity should be m*log m; correct me if I am wrong.

      • »
        »
        »
        »
        5 years ago, # ^ |
        Rev. 3

        You may be right; I didn't consider that searching the higher numbers might be shorter in a way that is material to the asymptotic analysis. It makes me curious as to what the best asymptotic estimate for my BIT variation would be...

  • »
    »
    4 months ago, # ^ |

    nice approach

»
5 years ago, # |

In problem E, for the permutations of c_1, c_2, c_3 it's enough to consider only abc and acb, because all other permutations contain the same sequence of repeating letters after rotating these two. So we can narrow the search space down from 12 candidates to 8.

  • »
    »
    5 years ago, # ^ |

    Test: n = 2, s = ac, t = ab. Answer: bbccaa.

  • »
    »
    5 years ago, # ^ |

    In the problem E tutorial: "Then let's add the following two candidates to the answer: "c_1 c_2 c_3 c_1 c_2 c_3 ... c_1 c_2 c_3" (the string consisting of n copies of "c_1 c_2 c_3") and "c_1 ... c_1 c_2 ... c_2 c_3 ... c_3" (exactly n copies of c_1, then exactly n copies of c_2, then exactly n copies of c_3). Then the answer will be among these 12 strings and we can check each of them naively."

    How can you say only these permutations will be enough? And why are we just concatenating each permutation of 'abc' n times?

    • »
      »
      »
      5 years ago, # ^ |

      If you write out all possible s and t for C_1, C_2, C_3, you will find that abc and acb fail only if s or t is one of (ab, bc, ca) or (ac, cb, ba) respectively. Other permutations behave the same way: for example, bac fails if s or t is ba, ac or cb, exactly like acb, because it is only a rotation of acb. So among the permutations we really need only 2 options (the other 4 are redundant). Then you consider the strings C_1...C_1 C_2...C_2 C_3...C_3, each C_i repeated n times, over all their permutations, which gives 6 more options.

    • »
      »
      »
      5 years ago, # ^ |

      I think I got it:


to_num = { 'a': 0, 'b': 1, 'c': 2 }
permutations = ['abc', 'acb', 'bca', 'bac', 'cba', 'cab']
to_char = 'abc'
all_combos = ['{}{} {}{}'.format(_1, _2, _3, _4)
              for _1 in 'abc' for _2 in 'abc' for _3 in 'abc' for _4 in 'abc']

def all_perms(combination):
    permuted_combos = []
    for perm in permutations:  # ab cb -> ac bc, ba ca, bc ac, cb ab, ca ba
        current = ''
        for char in combination:
            if char == ' ':
                current += ' '
                continue
            current += to_char[perm.index(char)]
        permuted_combos.append(current)
    total_combos = []
    for perm in permuted_combos:  # ab cb -> cb ab
        total_combos.append(perm)
        total_combos.append('{1} {0}'.format(*perm.split(' ')))
    return total_combos

distinct_combos = []
for combo in all_combos:
    skip = False
    for perm in all_perms(combo):
        if perm in distinct_combos:
            skip = True
            break
    if skip:
        continue
    distinct_combos.append(combo)

# print('\n'.join(distinct_combos))
'''
all were solved by 'abc'*3 perms or 'a'*3+'b'*3+'c'*3 perms
{
  'aa aa': 'abcabcabc',
  'aa ab': 'acbacbacb',
  'aa ba': 'abcabcabc',
  'aa bb': 'abcabcabc',
  'aa bc': 'acbacbacb',
  'ab ab': 'acbacbacb',
  'ab ac': 'cccaaabbb',
  'ab ba': 'aaacccbbb',
  'ab bc': 'acbacbacb',
  'ab cb': 'bbbaaaccc',
}
'''

      Here I print out all possible s and t values, and then, using rotations (e.g. ab bb is the same as ac cc, ba aa, and so on), I minimize the number of cases (10 in total).

      Then I manually went through all the combinations and found that the candidates c1c2c3 repeated 3 times and c1*3 + c2*3 + c3*3 are enough to build a valid string.

      It seems that 'NO' can never happen (it would have been obvious if I had noticed the assert(false) in the example solution earlier).

»
5 years ago, # |
Rev. 3

Can someone please help me with the time complexity of 1213D2 — Equalizing by Division (hard version)? I understand that preparing the cnt array takes O(n log n). But after that, shouldn't finding the sum of the k smallest values for all values of x take another O(n * n log n)? For each x in [0, 2*10^5] we need to sort the array of values (whose count can again be up to n, i.e. up to 2*10^5, taking O(n log n)) to obtain the smallest k values. Please let me know where I am going wrong. Thank you.

  • »
    »
    5 years ago, # ^ |
    Rev. 6

    Let's save our numbers as pairs (val, iteration at which we obtained it). Then we sort the pairs, which takes n log n. Then, for the first k equal vals, the sum of their iterations is the minimum cost to reach that val.

  • »
    »
    5 years ago, # ^ |

    It can't be true that each of the MAX = 2*10^5 values gets a vector of size 2*10^5, because the total length of all vectors is <= MAX * 18 (each x can be used at most 18 times). So if we have 18 vectors of length MAX, the total time to sort them is 18*MAX*log(MAX) = A; if we have 36 vectors of length MAX/2, the total time is 36*(MAX/2)*log(MAX/2) = 18*MAX*log(MAX/2) <= A. So the worst case is 18*MAX*log(MAX).

»
5 years ago, # |

Is this another correct way to solve problem A? 59723491

»
5 years ago, # |

F can also be solved differently. Consider a directed graph (the vertices are positions of characters in the string), where an edge from u to v means s[u] <= s[v]. Let's solve greedily with DFS: when we move to another vertex, we increase the letter value. The problem is that there may be cycles, but this is handled by graph condensation. So it's topological sort + graph condensation.

  • »
    »
    5 years ago, # ^ |

    https://ideone.com/oCpDsg My solution. It got AC. There I simply added an edge from u to v if s[u] >= s[v]. Accordingly, I started labelling vertices from 26, and every time I traversed an edge I decreased the label by 1.

»
5 years ago, # |

Well... now I know how to solve problem F, but... WHY does it work?? (#`-_ゝ-)

I have tried to understand the tutorial's solution for hours but failed. Could someone give me an explanation? Thanks ( •̀ ω •́ )✧

»
5 years ago, # |

What is wrong with my solution for D2? My solution is also O(n log n) but got TLE on submission... Can someone point out the mistake? My submission: https://mirror.codeforces.com/contest/1213/submission/59748079

»
5 years ago, # |

Missed expert by 8 points :(

»
5 years ago, # |

Is it possible to solve G with online queries? Did anyone solve it that way?

»
5 years ago, # |

I think there is a better solution for F. Time complexity: $$$O(n)$$$.

We can build a graph as follows: for each $$$s[p_{i}]\le s[p_{i+1}]$$$,$$$s[q_{i}]\le s[q_{i+1}]$$$, we can connect $$$p_i$$$ and $$$p_{i+1}$$$ . Similarly we connect $$$q_i$$$ and $$$q_{i+1}$$$ .

It is easy to prove that positions lying on one cycle of this graph must get the same letter. Then we can use Tarjan's algorithm to shrink each strongly connected component into a point, obtaining a DAG.

We can think of the DAG as a layered graph. Because we have to use at least $$$k$$$ letters, if the graph doesn't have at least $$$k$$$ layers, it's impossible to construct such an answer.

Then we give each layer one letter in turn. If there are more than 26 layers, just assign the letter 'z' to the extra layers; it's not difficult to prove this works.

code 59775323

// I used map to remove duplicate edges, this is not necessary.

»
5 years ago, # |
Rev. 3

Can someone give a small test case like test 20 for problem F? Never mind, it was a silly mistake.

»
5 years ago, # |
Rev. 2

Duh, in problem C I failed on test 6 when using Python, and the approach seems to be fine, but I didn't have time to reimplement my solution in C++. Today, when I reimplemented it in C++, it passed all tests.

By the way, the commented-out Python total calculation is my original formula, and it works in C++; even with the algorithm from the author's solution, the Python version didn't work.

# python Fails at test 6: wrong answer 212th numbers differ -
# expected: '4999999999999954', found: '4999999999999958'

import math

'''
# 7
# 1 1
# 10 1
# 100 3
# 1024 14
# 998244353 1337
# 123 144
# 1234312817382646 13
'''


def calculate_sequence(m):
    _iter = 0
    _len = 0
    _sum = 0
    while True:
        _iter = int(str(_iter + m)[-1])
        _sum += _iter
        _len += 1
        if _iter == 0:
            break

    return _len, _sum

def seq_gen(m, cycles):
    _iter = 0
    while cycles > 0:
        _iter = int(str(_iter + m)[-1])
        cycles -= 1
        yield _iter

def process(n, m):
    seq_len, seq_sum = calculate_sequence(m)
    good_pages = math.floor(n/m)
    multiplier = seq_sum
    complete_sequences = math.floor(good_pages/seq_len)
    left_elements = good_pages%seq_len

    # total = complete_sequences * seq_sum + sum(x for x in seq_gen(m, left_elements))
    total = math.floor(good_pages/10) * sum(m*(i+1) % 10 for i in range(10)) + sum(m*(i+1) % 10 for i in range(good_pages%10))
    return total


datasets = int(input())
for _ in range(datasets):
    n, m = [int(x) for x in input().split(' ')]
    print(process(n, m))
// C++ complete solution

#include <iostream>
#include <string>
#include <cmath>

typedef long long longint;

void calculate_sequence(longint m, longint& len, longint& sum) {
	longint _iter = 0;
	longint _len = 0;
	longint _sum = 0;
	while (true) {
		std::string newIter = std::to_string(_iter + m);
		_iter = std::stoi(newIter.substr(newIter.size() - 1, 1));
		_sum += _iter;
		_len++;
		if (_iter == 0) {
			break;
		}
	}

	len = _len;
	sum = _sum;
}

longint seq_sum(longint m, longint cycles) {
	longint _iter = 0;
	longint result = 0;
	while (cycles > 0) {
		std::string newIter = std::to_string(_iter + m);
		_iter = std::stoi(newIter.substr(newIter.size() - 1, 1));
		result += _iter;
		cycles--;
	}
	return result;
}


longint process(longint n, longint m) {
	longint len;
	longint sum;
	calculate_sequence(m, len, sum);
	longint	good_pages = floor(n / m);
	longint complete_sequences = floor(good_pages / len);
	longint left_elements = good_pages % len;
	longint total = complete_sequences * sum + seq_sum(m, left_elements);
	return total;
}

int main() {

/*
1
1234312817382646 13
*/
	int q;
	longint n[1000];
	longint m[1000];
	std::cin >> q;

	
	for (int i = 0; i < q; i++) {
		std::cin >> n[i] >> m[i];
	}

	for (int i = 0; i < q; i++) {
		std::cout << process(n[i], m[i]) << std::endl;
	}
	return 0;
}

This is a total disaster...

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится +1 Проголосовать: не нравится

    I suspect that math.floor(n/m) is the culprit, and changing it to n//m may solve the issue.

    Internally, n/m performs floating-point division with double precision, so for large integers the quotient is already inexact before math.floor is even applied.

    >>> n = 9999999999999999 # 10**16 - 1
    >>> m = 2
    >>> int(math.floor(n/m))
    5000000000000000L
    >>> n//m
    4999999999999999L
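    With exact integer division throughout, the whole computation for problem C can be sketched as follows. The last digits of m, 2m, 3m, ... repeat with period 10, so we sum one full period and handle the remainder; `last_digit_sum` is an illustrative name, not the author's.

```python
def last_digit_sum(n: int, m: int) -> int:
    g = n // m                                     # count of pages divisible by m, exactly
    cycle = [(m * i) % 10 for i in range(1, 11)]   # one full period of last digits
    return (g // 10) * sum(cycle) + sum(cycle[:g % 10])

print(last_digit_sum(100, 3))                # 153
print(last_digit_sum(1234312817382646, 13))  # 427262129093995
```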
    
»
5 лет назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

Can anyone please tell me why I am getting TLE on test case 6 for problem G? I am using DSU; here is my code link
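Not a diagnosis of the linked code, but for reference the standard offline DSU approach for G can be sketched as follows, assuming each query q asks for the number of vertex pairs whose path's maximum edge weight is at most q: sort edges and queries by weight, union components, and count the new pairs each union creates.

```python
def path_queries(n, edges, queries):
    # edges: list of (w, u, v) with 1-based vertices; queries: list of weights.
    parent = list(range(n + 1))
    size = [1] * (n + 1)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    edges = sorted(edges)
    order = sorted(range(len(queries)), key=lambda i: queries[i])
    ans = [0] * len(queries)
    pairs = 0
    ei = 0
    for qi in order:
        while ei < len(edges) and edges[ei][0] <= queries[qi]:
            w, u, v = edges[ei]
            ru, rv = find(u), find(v)
            if ru != rv:
                pairs += size[ru] * size[rv]   # every cross pair has max weight <= w
                if size[ru] < size[rv]:
                    ru, rv = rv, ru
                parent[rv] = ru
                size[ru] += size[rv]
            ei += 1
        ans[qi] = pairs
    return ans
```

On a path 1-2 (weight 1), 2-3 (weight 2), the queries [1, 2, 0] give [1, 3, 0].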

»
5 лет назад, # |
  Проголосовать: нравится +1 Проголосовать: не нравится

For problem E, how would someone come up with this kind of solution and be so sure that all the answers lie in these 12 patterns? It's very difficult to think this way. Please share your approach.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

My code for problem D2 runs in 77 ms. 59748709

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Anyone please tell me... What is the intuition behind the editorial of Problem F?

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

In E, how can one prove that by following the algorithm we won't miss any possible arrangement? Can somebody prove that this construction cannot miss any valid string of 3n characters (also considering the constraints in the problem)?

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For C I used similar approach as in editorial but getting memory limit exceeded. Can anyone of you help me https://mirror.codeforces.com/contest/1213/submission/59756440

»
5 лет назад, # |
Rev. 4   Проголосовать: нравится +18 Проголосовать: не нравится

Proof of Correctness of E

If you want the graphs embedded within the text, you can read this link

Let us draw a graph on 3 vertices, namely a , b, and c. There is an edge from i to j if it is permissible to go from i to j. Notice that there can be self loops. Initially, the graph looks like this.
Initial Graph

Now, as soon as we receive a query, we need to remove the corresponding edge in the graph. For example if the string s is ab, it means we won't be able to go from a to b as it is now forbidden. Similarly, if the string t is cc, it means that we need to remove the self loop of c. It is clear that we need to remove exactly 2 edges from the graph, and after that if we can traverse the graph such that all the vertices are visited exactly n times, and at the same time, ensuring that we travel only on permissible edges, then we can get our answer.

Case 1 --> We remove 2 self loops.
Graph of Case 1
Observe that we can travel the graph in the path abc abc ....

Case 2 --> We remove 1 self loop and 1 normal edge.
Graph of Case 2
WLOG, assume that the directed edge from a to b has been removed. This means that if we start at a, we can't go directly to b. Fine, let us start from the vertex which isn't reachable in one move, i.e. b. We traverse the graph in the manner bac bac ....

Case 3 --> We don't remove any self loops. For each removed edge i --> j, we will say that i is marked.
Sub Case 1 ⇒ Only 1 vertex is marked
Graph of subcase 1
WLOG, suppose only the vertex a is marked; this means that both outgoing edges from a are removed. We can traverse the graph in the order ccc... bbb... aaa...

Sub Case 2
Graph of sub case 2
2 vertices are marked. Suppose they are a and b. If both the links between a and b are cut off, we can traverse the graph in the manner acb bca acb ..... So the only case remaining is the one where the edge a --> b is cut off and the edge b --> c is cut off. Notice that we can still traverse the graph in the manner acb acb ...
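The case analysis above can also be checked exhaustively for small n: for every forbidden pair (s, t) of two-letter strings, at least one of the 12 candidates (6 repeated permutations of "abc" and 6 block strings) avoids both. This is a sketch under the assumption that those 12 strings are exactly the editorial's candidates.

```python
from itertools import permutations

def candidates(n):
    cands = []
    for p in permutations("abc"):
        cands.append("".join(p) * n)               # e.g. "abcabc...abc"
        cands.append("".join(c * n for c in p))    # e.g. "aa..abb..bcc..c"
    return cands

def solve(n, s, t):
    # Return any candidate containing neither forbidden 2-letter string.
    for cand in candidates(n):
        if s not in cand and t not in cand:
            return cand
    return None

# Exhaustive check of the case analysis for small n: an answer always exists.
pairs = [a + b for a in "abc" for b in "abc"]
assert all(solve(3, s, t) is not None for s in pairs for t in pairs)
```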

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

In problem D1, my approach was similar to the editorial's. I sorted the array, then traversed it, and for each element checked whether the following (larger, since the array is sorted) elements can be divided down to a[i]. If yes, I add the number of divisions needed (the "cost" variable), repeating until we get k equal elements (if possible). This process is repeated for every element of the array.

So the "ans" variable holds the minimum number of divisions needed to produce, k times, a number that already existed in the original array. I also checked the minimum number of divisions needed to make k zeros, and printed the minimum of the two.

It fails on the 5th case. Please help! 59830286

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Why does the solution to problem E work like that? Why the permutations of the string "abc"? Help me, thanks.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

For problem F, what is the approach or intuition behind vovuh's solution in the editorial? I can follow what is done while inserting into and removing from the set until the sets become equal, but I don't understand why this approach works, or what the mindset would be for similar problems in the future.

  • »
    »
    5 лет назад, # ^ |
      Проголосовать: нравится 0 Проголосовать: не нравится

    Since both permutations must lead to the same string when sorted, any segment [l, r] over which the two permutations contain the same set of indices must consist of a single character; therefore we find such segments and assign each one the same character.

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится 0 Проголосовать: не нравится

excellent editorial

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

This O(n*n*log n) solution (https://mirror.codeforces.com/contest/1213/submission/60268220) takes ~217 ms. However, an identical but supposedly faster O(n*n) solution (https://mirror.codeforces.com/contest/1213/submission/60267572) takes ~1800 ms and also uses more memory. Can someone explain what's wrong? Is the complexity analysis incorrect?

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Thanks for this.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

If I am not mistaken, I found a solution for D2 with O(N * log2(max(a[i]))) complexity, i.e. roughly O(N * 20) ~ O(N). Under these limits that is the same order as O(N * log2(N)), but maybe it will be interesting for someone. I've used a binary prefix tree — http://mirror.codeforces.com/contest/1213/submission/60412397

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

In Question E (Two small strings), I am getting wrong answer on fourth test case

1 ab cb

judge's answer is NO but I think we can construct res as "bac".

Please help.

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

D1/D2:

There exists an O(max_value) solution for this problem, which is O(n) under the problem's constraints.

We create LOGN arrays of sizes MAXN, MAXN/2, MAXN/4, ..., 1. Array number j stores (under index i) how many numbers require exactly j divisions to reach value i.

We see that Array[j][i] = Array[j - 1][2 * i] + Array[j - 1][2 * i + 1], and Array[0][x] = the number of occurrences of x in the input.

Now, for each candidate value x in the range [1 .. MAXN] we only need to greedily take the k cheapest elements that lead to x. We do this by taking elements group by group until we have a total of k elements.

This way, we compute each array element only once and retrieve each value only once. Note that the total size of the arrays is MAXN + MAXN/2 + MAXN/4 + ... + 1 <= MAXN * (1 + 1/2 + 1/4 + ...) = 2 * MAXN = O(MAXN).

My C++ code for this solution: submission/60682312. It takes 62 ms to execute.
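For comparison, here is a hedged sketch of the straightforward greedy that this layered-array solution optimizes: for every value reachable from some element by repeated halving, collect the division counts, sort them, and sum the cheapest k. The function name `min_ops_to_equalize` is illustrative.

```python
from collections import defaultdict

def min_ops_to_equalize(k, a):
    # costs[v] holds, for each element that can reach v by halving,
    # the number of division steps it needs.
    costs = defaultdict(list)
    for x in a:
        steps = 0
        while True:
            costs[x].append(steps)
            if x == 0:
                break
            x //= 2
            steps += 1
    # For each candidate value, greedily take the k cheapest contributors.
    best = None
    for lst in costs.values():
        if len(lst) >= k:
            lst.sort()
            cand = sum(lst[:k])
            if best is None or cand < best:
                best = cand
    return best
```

For example, on k=3 and a=[1, 2, 2, 4, 5], equalizing three elements to the value 2 costs one division (4 -> 2), which is optimal.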

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

I couldn't solve problem D2 (or D1) during the contest because of my poor complexity analysis. At the time, I thought the complexity was $$$O(n^2 \log n)$$$ and I refused to solve it. Now, I think the complexity is exactly
$$$T = x_1\log x_1 + x_2\log x_2 + \dots + x_n\log x_n$$$ with $$$x_1 + x_2 + \dots + x_n = n$$$.

I want to ask you how to prove: $$$T \le n \log n \log(n \log n)$$$
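One hedged way to bound it, assuming each $$$x_i \le n$$$, which actually gives an even stronger bound than asked:

```latex
T = \sum_{i=1}^{n} x_i \log x_i
  \le \sum_{i=1}^{n} x_i \log n
  = n \log n
  \le n \log n \cdot \log(n \log n) \qquad (n \ge 2).
```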

»
5 лет назад, # |
Rev. 2   Проголосовать: нравится +1 Проголосовать: не нравится

I solved D2 using dynamic programming. I created a 2D array a[2*10^5][20], where a[i][j] represents the number of elements that become i after being divided by 2 exactly j times.

link to my solution: https://mirror.codeforces.com/contest/1213/submission/62255328

»
5 лет назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Another way to think of problem D (or D2 specifically) is to put the numbers [1,2,...,aMax] in a complete binary tree where node i has children 2*i and 2*i+1.

Then the process of equalizing k numbers is equivalent to: choose a subtree of size at least k with root, say, x; Level 0 of the subtree counts the occurrences of x; Level 1 counts the occurrences of the children of x; Level 2 counts the occurrences of the grandchildren of x; and so on.

To get the cost of equalizing this subtree, you (greedily) sum 0*(#x's)+1*(#children of x)+2*(#grandchildren of x)+... until you have added up k distances.

It is possible to calculate subtree sizes in linear time, and to build an N-by-log(aMax) matrix descendants[][] where descendants[x][n] = # of generation-n children of x, in O(N log aMax) time.

»
4 года назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

// My solution for problem A

#include <iostream>
#include <algorithm>

using namespace std;
typedef long long ll;

ll solve() {
    ll ev = 0;   // count of even elements
    ll od = 0;   // count of odd elements
    ll n;
    cin >> n;
    for (ll i = 0; i < n; i++) {
        ll x;
        cin >> x;
        x % 2 == 0 ? ev++ : od++;
    }
    return min(ev, od);
}

int main() {
    ll t = 1;
    // cin >> t;
    while (t--) cout << solve() << endl;
    return 0;
}

»
3 года назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

Problem F can be solved differently, too. I think the approach is easier to come up with, but it's harder to implement.

The main idea is that we can construct a graph with the nodes as indices and a directed edge between nodes $$$u$$$ and $$$v$$$ (i.e. indices $$$u, v$$$) if $$$u$$$ appears directly before $$$v$$$ in either permutation. If we ever have a cycle in the graph, then we know that all the elements in the cycle must be the same. (We can detect cycles using Kosaraju's Algorithm to find strongly connected components.) Once we detected SCCs or cycle parts, then we know exactly which elements are less than each other and which elements are equal to each other. Easy from here.

»
15 месяцев назад, # |
  Проголосовать: нравится 0 Проголосовать: не нравится

problem F can be done differently, using hashing

here's my submission : 221040748