satyam343's blog

By satyam343, 10 months ago, In English

Thank you for your participation! I hope you liked at least one problem from the set (:

1930A - Maximise The Score

Idea: satyam343
Editorial: Non-origination

Hint 1
Solution
Code

video editorial by aryanc403

1930B - Permutation Printing

Idea: satyam343
Editorial: satyam343

Hint 1
Solution
Code

video editorial by aryanc403

1930C - Lexicographically Largest

Idea: satyam343
Editorial: Non-origination and satyam343

Hint 1
Solution
Code

video editorial by aryanc403

1930D1 - Sum over all Substrings (Easy Version)

Idea: satyam343
Editorial: satyam343

Hint 1
Hint 2
Solution
Code

1930D2 - Sum over all Substrings (Hard Version)

Idea: satyam343
Editorial: satyam343

Hint 1
Solution
Code

1930E - 2..3...4.... Wonderful! Wonderful!

Idea: satyam343
Editorial: satyam343

Hint 1
Hint 2
Solution
Code

1930F - Maximize the Difference

Idea: satyam343
Editorial: satyam343

Hint 1
Hint 2
Solution
Code

1930G - Prefix Max Set Counting

Idea: satyam343
Editorial: satyam343

Hint 1
Hint 2
Hint 3
Solution
Code

1930H - Interactive Mex Tree

Idea: satyam343
Editorial: satyam343

Hint 1
Hint 2
Hint 3
Solution
Code

1930I - Counting Is Fun

Idea: satyam343
Full Solution: Elegia
Editorial: errorgorn

Hint 1
Solution
Code (errorgorn)
Tutorial of think-cell Round 1
  • Vote: I like it
  • +151
  • Vote: I do not like it

»
10 months ago, # |
  Vote: I like it +7 Vote: I do not like it

My solution to problem C is very brief, maybe it could help: https://mirror.codeforces.com/contest/1930/submission/246846484

  • »
    »
    10 months ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    Can you please explain how it works?

    • »
      »
      »
      10 months ago, # ^ |
        Vote: I like it +1 Vote: I do not like it

      Think about the simplest situation: select numbers from the end of the array to the front. This ensures that the resulting S is lexicographically largest, but not necessarily the longest, because some elements might be repeated. It can be proven that you can always reduce those repeated elements to values that haven't appeared in the set yet. You can do the reduction with a single pass through the sorted array b: since each element must be smaller than the one before it, set b[i] = min(b[i], b[i-1] - 1), which ensures all elements appear at most once and the result is lexicographically largest.
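
      For reference, here is a minimal C++ sketch of the approach just described (compute b_i = a_i + i, sort in decreasing order, then step duplicates down). It assumes the usual 1930C input format (t test cases, each giving n and then the array a) and is my own illustration, not the linked submission.

      #include <bits/stdc++.h>
      using namespace std;

      int main() {
          int t;
          scanf("%d", &t);
          while (t--) {
              int n;
              scanf("%d", &n);
              vector<long long> b(n);
              for (int i = 0; i < n; i++) {
                  long long a;
                  scanf("%lld", &a);
                  b[i] = a + i + 1;               // b_i = a_i + i with 1-based index
              }
              sort(b.rbegin(), b.rend());         // lexicographically largest candidate
              for (int i = 1; i < n; i++)
                  b[i] = min(b[i], b[i - 1] - 1); // resolve repeats by stepping down
              for (int i = 0; i < n; i++)
                  printf("%lld ", b[i]);
              printf("\n");
          }
          return 0;
      }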

      • »
        »
        »
        »
        10 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        this is so much simpler and more intuitive lol

  • »
    »
    10 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I did the same thing. I don't know why they are using upper_bound to find the largest element every time when the sequence can be found beforehand.

    • »
      »
      »
      10 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      When I did this problem, I also first thought of using upper_bound, but it felt too complicated for me, so I came up with this way instead.

»
10 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Nice contest!

»
10 months ago, # |
Rev. 4   Vote: I like it +25 Vote: I do not like it

I believe my solution for C is slightly different from the editorial, and imo cleaner.

Let $$$b_i = a_i + i$$$

Now the claim is that if all the elements of $$$b$$$ are distinct, then the descending order of $$$b$$$ is the answer. Try to prove why this might be the case yourself, it's not hard. I'd recommend taking some examples and trying to figure it out. ;)

The other case to handle is when there are duplicates.

Let the value $$$x$$$ repeat at least twice in $$$b$$$. Then we can always make the values $$$x$$$ and $$$x - 1$$$ instead of $$$x$$$ and $$$x$$$.

How?

We may now repeat the above operation multiple times until all the elements of $$$b$$$ are distinct.

These observations are enough to solve the problem.

Here is my code: 246841577

  • »
    »
    10 months ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

    Well, I thought of a similar logic & proof during the contest, but I wasn't fully convinced. Looking at your proof, it feels to me like your logic is a bit incomplete. It's obvious that if x appears n times in {y | y = ai + i}, then x, x - 1, ..., x - (n - 1) can be made. But is it really possible to make x, x - 1, ..., x - (n - 1) while not influencing the other numbers? (E.g. let's say array a is {5, 4, 2}; if you take a[0] first, the result would be {6, 5, 3}, but the answer would be {6, 5, 4}.) So basically, my question is: "is it guaranteed that such a sequence of operations exists? how can you prove it?"

  • »
    »
    10 months ago, # ^ |
      Vote: I like it +10 Vote: I do not like it

    The point is, I'm worried that your step "Then to construct the values x and x−1 we can first delete the index i, resulting in x, and then we can delete index j resulting in the value x−1." might influence other indices so that ai + i == x no longer holds for them. Maybe it's just obvious and I'm just being dumb, but I thought about this for 2 hours and couldn't prove it. So if someone can prove this, I would be really grateful. Thanks in advance.

    • »
      »
      »
      10 months ago, # ^ |
      Rev. 5   Vote: I like it +14 Vote: I do not like it

      Hmmm... You're right about my proof being incomplete, and actually maybe incorrect as well.

      For instance, my proof idea would be incorrect for the following example:

      1
      4
      100 2 3 97
      

      And while the answer is 101 100 6 4, it's not because of the reason I mentioned. It's because you first delete index 3, then 4, then 2 and finally 1.

      So indeed my proof is incorrect, thanks for pointing it out!

      But now that I think about it, our goal is to only reduce the value of index $$$j$$$ by one, and it doesn't matter how exactly we do it.

      So in order to reduce the value of index $$$j$$$, let us look at index $$$j - 1$$$. If we do not wish to reduce index $$$j - 1$$$ by any value, then we can simply delete it, and we have managed to reduce the value of index $$$j$$$ by 1. However, if we wanted to decrease $$$j - 1$$$ as well, then we look at $$$j - 2$$$ and so on. If this keeps on happening, then eventually we will hit index $$$i$$$, which by definition we do not wish to decrease. Therefore we can delete index $$$i$$$ and the process stops. This way we have decreased the value of index $$$j$$$ without impacting any of the other numbers in between (since the in-between numbers had to be decreased as well).

      I hope I didn't mess up anything this time. In case I did, kindly correct me once again :P

      EDIT: Another thing that I forgot to mention is that we will start decreasing the value starting from the right-most element. If it is already at the required value, then we can simply delete it. Otherwise we do the process mentioned above. This ensures that the deletion process doesn't impact any numbers after index $$$j$$$.

      • »
        »
        »
        »
        10 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        I'm still not convinced. Let's call the values we want as the result want[0], ..., want[n - 1]. In order for your logic to work, I think you should also prove that for every i, 0 <= a[i] + i - want[i] <= i, i.e. is it possible to prove that the difference between a[i] + i and want[i] is not bigger than i? If this is proved, I think the claim is quite obvious.

        • »
          »
          »
          »
          »
          10 months ago, # ^ |
            Vote: I like it +7 Vote: I do not like it

          I'll use the terminology I used above, i.e. $$$b_i = a_i + i$$$.

          The claim is that $$$want[i] >= b_i - (i - 1)$$$, i.e. I'll decrease $$$b_i$$$ at most $$$i - 1$$$ times before deleting it. (Also note that we can't decrease $$$b_i$$$ by more than $$$i - 1$$$ since there are only $$$i - 1$$$ elements before it.)

          And since there are at most $$$i - 1$$$ distinct values before $$$b_i$$$, whereas the number of options for the $$$i^{th}$$$ value is $$$i$$$ (since we can decrease by $$$0, 1, 2, ..., i - 1$$$), we can always find a suitable $$$want[i]$$$ such that $$$want[i] >= b_i - (i - 1)$$$.

          This should prove what you're asking for.

          I've rarely written proofs, so I really apologise for not being thorough in my proof.

    • »
      »
      »
      10 months ago, # ^ |
      Rev. 3   Vote: I like it 0 Vote: I do not like it

      I was also worried about the same thing. It turns out that there is always a way to preserve the other elements while decreasing the duplicate elements, but I don't know how to prove this claim.

  • »
    »
    10 months ago, # ^ |
    Rev. 2   Vote: I like it +2 Vote: I do not like it

    In the contest I ended up solving C with a dynamic Fenwick tree(?) which can answer the k-th non-existing integer in O(log^2(n)). O(log n) is also possible with DST, but I really didn't want to implement DST, so I went for the dynamic Fenwick tree. My solution is O(n log^2(n)) and it almost got TLE. For those who are interested (although I'm certain no one will be; just look at the struct fwtree part) -> https://mirror.codeforces.com/contest/1930/submission/246865035

    • »
      »
      »
      10 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Can you please explain your code? Even I am unable to convince myself that the elements between the two x's will remain unchanged.

      • »
        »
        »
        »
        10 months ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

        You can approach this problem greedily. If you think about it, a[0] can become a value in a[0] + 1 ~ a[0] + 1, a[1] in a[1] + 1 ~ a[1] + 2, ..., a[n - 1] in a[n - 1] + 1 ~ a[n - 1] + n. Now we pick one number from each range: if there is an integer still available in the range a[i] + 1 ~ a[i] + i + 1, pick the biggest one. The greedy that chooses from the small ranges (i = 0) to the big ranges (i = n - 1) is correct, but I don't know why it is correct.

    • »
      »
      »
      10 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Can you explain how your Fenwick tree finds the k-th non-existing number? It would be great if you could provide a reference for this. Thanks.

      P.S. I am familiar with Fenwick trees.

      • »
        »
        »
        »
        10 months ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it

        It's similar to binary search. What we want is the maximum index i such that the count of numbers up to i (inclusive) is smaller than k; then i + 1 is the k-th number. If N = 1000: initialize res = -1. First try 512: if the count of numbers within [0, 511] is less than k, add 512 to res and subtract that count from k. Then try 256, 128, 64, ..., 1.
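
        For what it's worth, here is a self-contained C++ sketch of this binary-descent trick on a Fenwick tree, written for the "k-th missing value" variant used in this problem. It is my own illustration, not the linked submission, and it assumes values live in [1, N].

        #include <bits/stdc++.h>
        using namespace std;

        struct Fenwick {
            int n, LOG;
            vector<int> t;                              // t[i] = count of present values in i's Fenwick range
            Fenwick(int n) : n(n), t(n + 1, 0) { LOG = 1; while ((2 << LOG) <= n) LOG++; }
            void add(int pos, int delta) {              // mark value pos present (+1) or absent (-1)
                for (; pos <= n; pos += pos & -pos) t[pos] += delta;
            }
            // k-th (1-based) value in [1, n] that is NOT present; returns n + 1 if fewer than k are missing.
            int kthMissing(int k) const {
                int pos = 0, present = 0;
                for (int step = 1 << LOG; step >= 1; step >>= 1) {
                    int nxt = pos + step;
                    if (nxt > n) continue;
                    int missing = nxt - (present + t[nxt]);   // missing values in [1, nxt]
                    if (missing < k) { pos = nxt; present += t[nxt]; }
                }
                return pos + 1;
            }
        };

        int main() {
            Fenwick fw(10);
            fw.add(2, 1); fw.add(3, 1); fw.add(7, 1);   // present: {2, 3, 7}
            printf("%d\n", fw.kthMissing(3));           // missing: 1, 4, 5, 6, 8, ... -> prints 5
            return 0;
        }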

  • »
    »
    10 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    I was also thinking along the same lines: we should look at the first occurrence of the maximum value in the array ai + i over all i. Then all the values after that will be decreased by one. Then again we look for the first occurrence of the maximum value and continue in this fashion. I thought of a variation of a segment tree which gives the maximum and also updates a range. Can we do this question using this variation of segment tree, as I haven't used this type yet? Ignore this; I found the flaw in my logic.

  • »
    »
    9 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    I was aware of this solution. In fact, the model solution on polygon is exactly the same.

    I just included that particular solution in the editorial, as I found it easiest to explain, and it also uses a nice technique, which I liked.

»
10 months ago, # |
  Vote: I like it +15 Vote: I do not like it

A is similar to AGC001A

  • »
    »
    9 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    Damn, I don't know why my code is failing on the second test case. All I am doing is picking the minimum and adding it to the answer. How can I spot that sorting is required here?

»
10 months ago, # |
  Vote: I like it +10 Vote: I do not like it

"Instead of counting strings s which are good, let us count the number of strings which are not bad."

Shouldn't this be the number of strings which are bad?

»
10 months ago, # |
  Vote: I like it +4 Vote: I do not like it
»
10 months ago, # |
  Vote: I like it +7 Vote: I do not like it

How to prove that it is optimal to partition s into substrings of length 3 in problem D?

»
10 months ago, # |
  Vote: I like it +24 Vote: I do not like it

D is crazy

»
10 months ago, # |
  Vote: I like it +4 Vote: I do not like it

I could not even understand the problem statement for problem D, like how does it calculate the minimum number of 1s required in a p-good string? Can someone explain?

»
10 months ago, # |
Rev. 2   Vote: I like it +11 Vote: I do not like it

Nice choice of title for problem E. The football match played 13 years ago was the last thing I was thinking about before this round :)

»
10 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Why does the iterator in problem C need --?

»
10 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Need help on 1930D1 - Sum over all Substrings (Easy Version); my submission: 246919948

I'm really unsure where I went wrong, but clearly something is wrong with calc.

»
10 months ago, # |
  Vote: I like it +3 Vote: I do not like it

Can someone explain the editorial of C? I don't understand the "The elements at indices 1,2,…,x in the same order. The element at index k. The elements at indices x+1,…,k−1 in the same order." part.

  • »
    »
    10 months ago, # ^ |
      Vote: I like it +5 Vote: I do not like it

    Hmm, yeah, this part looks strange... I suppose they mean the following. We have some order of deletion for an array of length k. For the array of length k + 1, let's delete the first k elements in the same order, but also delete the (k+1)-st element after x deletions. Then we obtain the same good array, but with x appended to the end.

    • »
      »
      »
      10 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Well, I suppose your explanation makes sense, so thank you. The editorial is so bad that it is almost impossible to understand.

»
10 months ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Btw, in problem F there is a good way to think about efficient insertions of bitmasks. Consider the graph whose vertices are all possible bitmasks, where each bitmask is connected by a directed edge to every submask that can be obtained from it by switching one bit from 1 to 0. Then the submasks of a mask are exactly the vertices reachable from it. So we can traverse this graph in order to find all submasks, and the pseudocode in the editorial is just a DFS of this graph.
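
Here is a small self-contained C++ illustration of that idea (my own sketch, not the editorial's code): vis[m] becomes true once m is a submask of some inserted value, and because every mask is marked at most once across all insertions, the total work over all insertions is about O(2^B · B) for B-bit masks (B = 20 below is just an assumption for the sketch).

#include <bits/stdc++.h>
using namespace std;

const int B = 20;                      // number of bits; an assumption for this sketch
vector<bool> vis(1 << B, false);       // vis[m]: is m a submask of some inserted value?

void insertMask(int m) {               // DFS over the submask graph described above
    if (vis[m]) return;                // everything reachable from m is already marked
    vis[m] = true;
    for (int b = 0; b < B; b++)
        if (m >> b & 1)
            insertMask(m ^ (1 << b));  // drop one set bit and recurse
}

int main() {
    insertMask(0b10110);
    printf("%d\n", (int)vis[0b10010]); // 1: 10010 is a submask of 10110
    printf("%d\n", (int)vis[0b01000]); // 0: bit 3 was never part of an inserted mask
    return 0;
}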

»
10 months ago, # |
  Vote: I like it +75 Vote: I do not like it

I will add the editorial of D and G after waking up.

Oh no, seems like he's fallen into a coma

»
10 months ago, # |
  Vote: I like it +16 Vote: I do not like it

E is really similar to https://mirror.codeforces.com/problemset/problem/1468/H. The hints mentioned in the editorial for E are exactly the solution to this problem.

  • »
    »
    9 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    The setup of problem E is indeed the same as in the problem you mentioned. It is a coincidence.

    We still need to do the counting part, which a good amount of participants found non-trivial.

    Interestingly, our team participated in that round. My teammate solved that problem. But when I asked him to solve the problem E of my round, he didn't remember that gym problem.

»
10 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Hello,

I received a notification regarding the similarity between my solution 246886924 to Problem 1930C and Yarab_Master's 246865178 to the same problem. While I acknowledge that our solutions may share similarities, it's important to clarify that we didn't exchange or share code intentionally. We're not acquainted, and we haven't communicated or shared solutions with each other.

I believe it's unfair to assume cheating solely based on similar approaches to problem-solving. Our solutions might align closely due to our thought processes or coding styles, but that doesn't imply rule violation.

I appreciate the efforts of the Codeforces team in implementing a system to detect potential plagiarism, but I would like to emphasize that in this case, it's a coincidence rather than deliberate copying.

I'm open to providing any additional information or clarification needed to address this matter. Thank you for organizing the round and presenting challenging problems.

Best regards, Vishal Sharma(vsvishalsharma777)

»
10 months ago, # |
  Vote: I like it +11 Vote: I do not like it

I'm still waiting for the D solution -_-

»
10 months ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

For F I can't get the hints. Can someone please explain why the maximum difference can be found this way:

  • For an array $$$b$$$ consisting of $$$m$$$ non-negative integers, $$$f(b)=\max(\max(b_i \mid b_p)-\min(b_i \mid b_p))$$$

  • $$$f(b)$$$ is the maximum value of $$$b_i \& \sim b_j$$$ over all pairs

  • »
    »
    10 months ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    Just fix i and j; all we want is to pick an x which maximizes (b[i] | x) - (b[j] | x). Now look at a particular bit of b[i] and b[j]. There are 4 cases: (1,1), (0,0), (1,0) and (0,1). The (1,1) and (0,0) cases contribute nothing. In the (1,0) case take the corresponding bit of x as 0, and in the (0,1) case take it as 1. From this you should be able to deduce that the best for a fixed pair is b[i] & ~b[j], and f(b) is the maximum of this over all pairs. Hope this helps. (Note that we don't need to prove "f(b)=max(max(bi|bp)−min(bi|bp))".)
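
    If it helps, here is a tiny brute-force check of that closed form (my own sketch, limited to 5-bit values): for fixed y and z, the maximum over x of (y | x) - (z | x) equals y & ~z, attained by putting 1s in x exactly where y has 0 and z has 1.

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        const int B = 5, FULL = (1 << B) - 1;
        for (int y = 0; y <= FULL; y++)
            for (int z = 0; z <= FULL; z++) {
                int best = INT_MIN;
                for (int x = 0; x <= FULL; x++)          // try every possible x
                    best = max(best, (y | x) - (z | x));
                assert(best == (y & ~z & FULL));         // the claimed closed form
            }
        puts("claim verified for all 5-bit pairs");
        return 0;
    }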

    • »
      »
      »
      10 months ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Thanks, I wrote out the truth table, and it's immediately clear.

»
10 months ago, # |
  Vote: I like it +58 Vote: I do not like it

satyam343 please update the solutions.

»
9 months ago, # |
  Vote: I like it 0 Vote: I do not like it

satyam343 Hello? When will the solution be added?

»
9 months ago, # |
Rev. 2   Vote: I like it -7 Vote: I do not like it

satyam343 WHERE ARE SOLUTIONS???

»
9 months ago, # |
  Vote: I like it -11 Vote: I do not like it

Thank you for the fast editorial

»
9 months ago, # |
  Vote: I like it -8 Vote: I do not like it

shit

»
8 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Am I the only one who finds the editorial for E hard to understand?

"

Let us compress all the 0's between the k-th 0 from the left and the k-th 0 from the right into a single 0 and call the new string t. Note that t will have exactly 2k−1 0's. We can also observe that for each t, a unique s exists. This is only because we have already fixed the parameters n and k. Thus the number of bad s having exactly x 1's is $$$\binom{x+2k-1}{2k-1}$$$, as there are exactly $$$\binom{x+2k-1}{2k-1}$$$ binary strings t having 2k−1 0's and x 1's."

It's not necessary that the string t will have exactly 2k−1 0s.

Let's take this example: 010100 with k = 1. If we compress the 0s in between the 1st zero from the left and the 1st zero from the right, it'll become 01010, which has 3 zeros?

Also, where did the x 1s come from?? I really can't understand this tutorial.

  • »
    »
    8 months ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    OK, after a few tries I realized that, in order for the idea in the tutorial to work, you also have to include the k-th zero from the left and the k-th zero from the right in the compressed segment, forming a string of (2k - 1) zeros + (n - k*2 * L) ones. Another thing you'll always need to keep in mind is that we don't have a solution if and only if there is not at least one 1 in the segment between the k-th zero from the left and the k-th zero from the right.

»
7 months ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

...

»
7 months ago, # |
  Vote: I like it 0 Vote: I do not like it

Did anyone else use Small to Large Merge in Problem C? The solution runs for 700+ms, but it gets Accepted!

Spoiler
»
3 months ago, # |
  Vote: I like it 0 Vote: I do not like it

For anyone finding the editorial to Problem $$$C$$$ a little hard to understand, you could try reading the following, slightly more detailed explanation of mine:

For each $$$1 <= i <= n$$$, define $$$c_i$$$ to be the number of elements before the $$$i$$$th element in the array that are removed before the $$$i$$$th element in the process. Clearly, $$$0 <= c_i < i$$$ for all $$$1 <= i <= n$$$. Let's call the array $$$(c_i)_{i = 1} ^ n$$$ good if for each $$$1 <= i <= n$$$, the condition $$$0 <= c_i <= i - 1$$$ holds. Notice that the problem can now be rephrased as inserting the values $$$a_i + i - c_i$$$ in some order over $$$i$$$, where $$$1 <= i <= n$$$. We now introduce and prove an interesting claim.

$$$\text{Lemma:}$$$ Given a length $$$n$$$, all good arrays $$$(c_i)_{i = 1} ^ n$$$ are achievable in the process described in the problem.

$$$\text{Proof:}$$$ Note that $$$c_1 \equiv 0$$$. Consider $$$c_2$$$: if we want $$$c_2 = 0$$$, we could simply delete the second element before deleting the first element. On the other hand, if we want $$$c_2 = 1$$$, we can just delete the first element before the second. Thus, it is clear that all good arrays of length $$$n <= 2$$$ are achievable.

Now, assume that all good arrays of length $$$n$$$ are achievable, where $$$n >= 2$$$. We now show that this implies all good arrays of length $$$n + 1$$$ are achievable. Notice that all good arrays of length $$$n + 1$$$ are formed by appending a value $$$x$$$ with $$$0 <= x <= n$$$ at the end of a good array of length $$$n$$$. For a specific good array of length $$$n$$$, and a particular value of $$$x$$$, we can make this possible by doing the following:

$$$\quad$$$ 1. delete the first $$$x$$$ values in the deletion sequence corresponding to the good array of length $$$n$$$;

$$$\quad$$$ 2. delete the $$$(n + 1)$$$-th element;

$$$\quad$$$ 3. delete the rest of the elements in the deletion sequence corresponding to the good array of length $$$n$$$.

Notice how the second step doesn't affect the previous good array, and how $$$c_{n + 1}$$$ is now equal to $$$x$$$. Thus the good array of length $$$n + 1$$$ formed by appending $$$x$$$ to the given good array of length $$$n$$$ is achievable.

Thus, by induction, for all positive integer $$$n$$$, all good arrays of length $$$n$$$ can be obtained in the process described in the problem. $$$\square$$$

We claim that the algorithm:

iterating $$$i$$$ from $$$1$$$ to $$$n$$$, insert the largest integer $$$a_i + i - c_i$$$, where $$$0 <= c_i <= i - 1$$$, that is not yet contained in $$$S$$$.

produces the lexicographically largest answer.

$$$\text{Proof:}$$$ If we are now inserting the $$$i$$$th element of the original array, it is obviously the best to choose the largest integer $$$a_i + i - c_i$$$, where $$$0 <= c_i <= i - 1$$$, that is not yet contained in $$$S$$$.

So what now remains to be proved is why the order of iteration from $$$1$$$ to $$$n$$$ gives the lexicographically largest answer.

Note that if we assume that during the process we never face the scenario in which, when we want to delete element $$$i$$$, there is no available value in the range $$$[a_i + 1, a_i + i]$$$, then the resulting answer array is the same for all orders of deletion.

This is proved by considering all values of $$$a_i + i$$$, each of which is the best choice for its $$$i$$$ at the beginning. Of course, there might be duplicates, so we have to keep decreasing the values of those duplicated $$$a_i + i$$$ until no more duplicates remain. This actually fixes the final answer. For instance, if the values of $$$a_i + i$$$ give one $$$2$$$ and three $$$5$$$'s, that can only correspond to the answer $$$2, 3, 4, 5$$$, where one of the repeated $$$5$$$'s is decreased by $$$1$$$ and another is decreased by $$$2$$$.

Moreover, when such a collision scenario occurs, the answer array, which is decreased in length, clearly doesn't grow lexicographically larger, because collisions are equivalent to removing some values from the optimal answer array.

Thus, we now only have to show that our algorithm actually gives the longest possible answer sequence.

But this is obvious. Following the order from $$$1$$$ to $$$n$$$, when we arrive at $$$i$$$, there are $$$i$$$ possible choices in the range $$$[a_i + 1, a_i + i]$$$, but only $$$i - 1$$$ values have been inserted into $$$S$$$ in total by this point, so at most $$$i - 1$$$ of these $$$i$$$ candidates are taken and there is always a possible choice.

Thus, the correctness of the algorithm is proved. $$$\square$$$
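
To complement the proof, here is a minimal C++ sketch of the greedy it justifies (iterate $$$i$$$ from $$$1$$$ to $$$n$$$ and always take the largest value in $$$[a_i + 1, a_i + i]$$$ not yet in $$$S$$$). It is only an illustration, assuming the 1930C input format (t test cases, each giving n and then a), and it uses a path-compressed "largest free value not exceeding v" map rather than the upper_bound approach discussed above.

#include <bits/stdc++.h>
using namespace std;

map<long long, long long> nxt;             // nxt[v] = some free value <= v (path-compressed)

long long findFree(long long v) {          // largest unused value not exceeding v
    auto it = nxt.find(v);
    if (it == nxt.end()) return v;         // v itself was never taken
    return it->second = findFree(it->second);
}

int main() {
    int t;
    scanf("%d", &t);
    while (t--) {
        int n;
        scanf("%d", &n);
        nxt.clear();
        vector<long long> ans(n);
        for (int i = 1; i <= n; i++) {
            long long a;
            scanf("%lld", &a);
            long long v = findFree(a + i);  // largest value in [a+1, a+i] still free (the lemma guarantees v >= a+1)
            ans[i - 1] = v;
            nxt[v] = v - 1;                 // v is now used; point it at v - 1
        }
        sort(ans.rbegin(), ans.rend());     // the set S, printed in decreasing order
        for (long long x : ans) printf("%lld ", x);
        printf("\n");
    }
    return 0;
}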