Arpa's blog

By Arpa, history, 5 weeks ago, In English

Hey there!

Thank you for being part of the ICPC India Prelims 2024! 🚀

I've put together hints and solutions for the problems to help you reflect on the challenges. As the contest admin, I even recorded myself solving the problems live — it's a mix of strategy, insights, and a behind-the-scenes look at my thought process.

🎥 Watch the video here: https://youtu.be/NsIj7CgDPY8

Let me know what you think, and feel free to share your thoughts or questions! 😊

The problems are ordered from easy to hard.

Unsatisfying Array

Hint
Solution

AND Quest

Hint
Solution

Small Indices

Hint
Solution

Yet Another GCD Problem

Hint
Solution

Equations

Hint
Solution

Update:

I'm sorry about what happened in the ICPC prelims. The problems were passed to me just 10 hours before the contest. As you can see in the video, I spent almost all of that time just solving them. The time was so tight that I was occupied with finishing the problems before the contest started, and I didn't have any time to test.

I know that the organizers are working on a decision that will make everyone happy.

  • Vote: I like it
  • -190
  • Vote: I do not like it

»
5 weeks ago, # |
  Vote: I like it +35 Vote: I do not like it

Thanks for the fast editorial.

Spoiler
  • »
    »
    5 weeks ago, # ^ |
    Rev. 3   Vote: I like it +26 Vote: I do not like it

Regarding the Small Indices problem, there are other explanations for the sudden rise as well.

Towards the end of the contest, you have a fight-or-flight response. According to the constraints, n**3 should not be accepted. However, if the only solution I had was n**3 and I was not getting anywhere, I would try it regardless. As a matter of fact, I was about to attempt an n**3 solution myself. From what I have seen, n**3 solutions have been accepted.

    Apart from that, even if someone just tried brute force as a last resort, it would get accepted (which is literally the solution provided in the editorial). Even if your team had no idea why brute force would work, it wouldn't change the outcome of the submission.

I am not saying that there has not been cheating, but I am saying that it is likely that a large number of those solutions are not a result of cheating. Regardless, I hope the contest organizers do their due diligence in checking for plagiarism, solution sharing, and any other form of malpractice and cheating.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +6 Vote: I do not like it

      Yes, that may be the case too.

      Spoiler
»
5 weeks ago, # |
  Vote: I like it +44 Vote: I do not like it

Why was there a test case with K=0 in the problem Yet Another GCD Problem? The constraints in the problem state that 1 <= K <= 1e9.

  • »
    »
    5 weeks ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

    At what testcase was your answer failing?

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it -127 Vote: I do not like it

Apologies for this error. We will rejudge the solutions which failed due to this, and the penalties will be adjusted accordingly.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +54 Vote: I do not like it

      What about the time wasted because of that?

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +49 Vote: I do not like it

We were stuck on this problem for $$$58$$$ minutes; is that a small mistake? You can't make such mistakes in ICPC....

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it +8 Vote: I do not like it

It was a very bad contest experience! I hope they consider a re-contest, but they probably won't -_-

Good luck next year (to those who are not in their final year).

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +22 Vote: I do not like it

When will the problems be made available for practice? [I want to submit some solutions]

    • »
      »
      »
      5 weeks ago, # ^ |
      Rev. 3   Vote: I like it +30 Vote: I do not like it

      [deleted]

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it +10 Vote: I do not like it

      By when can we expect an update to the rankings?

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

The contest page is restricted now. Maybe they are updating it.

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

The ranklist is visible now, but it's not the final ranking (cheaters will be removed soon, so your rank may go up).

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it -17 Vote: I do not like it

      I hope you are doing well. I wanted to share some feedback about the recent contest, specifically regarding the problem 'Yet Another GCD Problem.'

While solving the problem, we encountered a test case where K = 0. This was surprising, since the problem clearly stated 1 <= K <= 1e9 in the constraints. This inconsistency was confusing and directly impacted how we approached the problem. It feels unfair to include cases outside the constraints, especially in a competition where precision matters so much. Just removing the penalty won't be of much help now, as teams from our college lost a lot of time trying to debug, which also drained their energy.

Additionally, in the Small Indices problem it seems like many n^3 or incorrect greedy solutions were able to pass, likely because the test cases were not strong enough. As participants, we invest a lot of time preparing for these contests and even pay to participate. It's really disheartening when the quality of the contest doesn't match the effort we put in.

      Because of issues like these, our team couldn’t qualify for the next round, even though we had a rank close to 100 in the first ICPC prelims, which unfortunately got canceled. It feels frustrating to miss out due to errors that could have been avoided with better problem testing.

I hope that the organizers keep a best-of-two-ranks method, which would ultimately be the best solution, as neither of the two contests was perfect, and would thereby prevent unnecessary penalization of any team.

      I hope you take this feedback into account and make sure future contests have stronger quality control. We just want a fair environment where our hard work pays off.

      Thank you for listening.

      Best regards, Monis

      • »
        »
        »
        »
        5 weeks ago, # ^ |
        Rev. 2   Vote: I like it +35 Vote: I do not like it

Let's take an example:

Suppose a team got rank x in both rounds (the cancelled one and this one), which was good enough to qualify for the regionals. But now you are saying let's take the best of two. Because of that, the number of teams ranked higher than this team increases, and it could result in this team not qualifying.

I am not saying what happened was right, I am just giving my opinion on why I feel this best-of-two is not right.

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ |
            Vote: I like it -9 Vote: I do not like it

Yes, I get that this might not seem fair to that team in this situation, though it is also not fair that we are having so many difficulties in such an important contest. I mean, such things have rarely, if ever, happened in Codeforces contests. It is quite shameful that we can't run even one contest without such issues. (Also, I wouldn't consider getting a good rank in the first contest a fluke, because everyone experienced the same problem of delayed submissions.)

          • »
            »
            »
            »
            »
            »
            5 weeks ago, # ^ |
            Rev. 2   Vote: I like it +9 Vote: I do not like it

Yes, I agree with what you are saying, but best-of-two doesn't seem to be a good idea.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it +23 Vote: I do not like it

Yes, I agree that accepting n**3 solutions should not be possible and that k=0 is a mistake that should not have happened. However, best of two is straight up stupid and much worse. Cases with k=0 should be removed, yes, but the same is not as easy for the Small Indices n**3 issue.

Another re-contest would be terrible as well. I wish I could suggest a good way to deal with this, but best of two definitely isn't it.

"I hope that the organizers keep a best-of-two-ranks method, which would ultimately be the best solution, as neither of the two contests was perfect": the first contest was cancelled midway, so its standings have no value. The solution to having one flat tire is not to replace every tire with a flat one.

Honestly, the "best of two" idea can't be described as anything but selfish interest on your part.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Umm, we came 2nd in our college and in the top 100 overall. The first-place team from our college had chosen Amritapuri and one more region, and we had only chosen Amritapuri. So is there much chance of us getting selected or not? ;-;

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    +1

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Amritapuri has more seats, and 2 teams from the same institute can get selected.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

You have really good chances as per the Amritapuri selection process.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

How many did you solve?

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      4

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

You should be good to go. You have excellent chances of getting selected for Amritapuri.

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ |
            Vote: I like it +1 Vote: I do not like it

Up to what rank will the 2nd team be taken for Amritapuri? What was the case last year?

»
5 weeks ago, # |
  Vote: I like it +9 Vote: I do not like it

How can we run backtracking for n = 3000?

»
5 weeks ago, # |
Rev. 2   Vote: I like it +24 Vote: I do not like it

What is the proof, for the Small Indices problem, that the proposed backtracking solution works in O(n^2)? Also, could you share the dynamic programming solution for the same?

Update:

The proof is as follows:

b[i] > level (i.e. b[i] > sum of the two previous maximum numbers) can only occur about O(log n) times.

So only at O(log n) positions do we have two choices to explore, and hence the solution works within the given time complexity.
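
Roughly, the pruned search would look like the sketch below. Take it with a grain of salt: the statement and I/O format are reconstructed from this thread, and the rule "branch only when the two choices disagree on fitting under the level" is one concrete reading of the idea, not necessarily the official code.

#include <bits/stdc++.h>
using namespace std;

// Sketch: pick C[i] from {A[i], B[i]}; index i (0-based, i >= 2) is "small" if C[i] <= sum
// of the two largest values chosen before it ("level"). When both choices fit under the
// level, or neither does, taking the larger value is never worse, so we only branch when
// exactly one of the two choices fits.
int best(int i, int m1, int m2, const vector<int>& a, const vector<int>& b) {
    int n = (int)a.size();
    if (i == n) return 0;
    int level = m1 + m2;
    auto go = [&](int v) {
        int nm1 = m1, nm2 = m2;
        if (v >= nm1) { nm2 = nm1; nm1 = v; }
        else if (v > nm2) nm2 = v;
        int add = (i >= 2 && v <= level) ? 1 : 0;   // the first two indices never count
        return add + best(i + 1, nm1, nm2, a, b);
    };
    bool fa = (i >= 2 && a[i] <= level), fb = (i >= 2 && b[i] <= level);
    if (fa == fb) return go(max(a[i], b[i]));       // obvious choice: the larger value dominates
    return max(go(a[i]), go(b[i]));                 // only here do we genuinely branch
}

int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vector<int> a(n), b(n);
        for (auto& x : a) cin >> x;
        for (auto& x : b) cin >> x;
        cout << best(0, 0, 0, a, b) << "\n";
    }
}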

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +3 Vote: I do not like it

Damn, wild to think brute force works. If someone had just tried brute force, it would have worked. I did think it was possible to do so, but I couldn't prove that the time complexity would be n**2.

I think some teams just threw it at GPT and it got accepted. That's why we might have as many solves as we did.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

If you check out the video, you can see that I say there are not that many elements for which we have two choices. After that, I generate the worst case and count the number of such elements.

  • »
    »
    4 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it
    Iterative 2D DP
    • »
      »
      »
      4 weeks ago, # ^ |
        Vote: I like it +6 Vote: I do not like it

It gives 6 as the output for this test case (link), but the actual answer is 7.

»
5 weeks ago, # |
Rev. 2   Vote: I like it +15 Vote: I do not like it

In Small Indices, if the array size is n, then the elements of the array are <= 2*n and not <= 6000, right? Arpa Enigma27

»
5 weeks ago, # |
  Vote: I like it +77 Vote: I do not like it

So, solutions were tested against inputs which clearly defy the given constraints, in the absence of which many would've AC'd Yet Another GCD (not to mention many teams saving enough time to attempt at least one more problem), and GPT-written brute forces would pass Small Indices? And somehow both things flew under the radar during testing? That too in the ICPC prelims, the one chance a year that students get and have paid good money for? Must say, stellar work.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +79 Vote: I do not like it

I agree; I don't know how people are fine with this. There are plenty of n^3 or incorrect greedy solutions which AC'd for Small Indices. A contest that people prepare months for and have paid for is conducted with little to no standards.

I know nothing can be done now, they won't redo the contest again, but this is just sad.

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it -11 Vote: I do not like it

      We can only hope that problem setters for future contests learn from the mistakes made.

      • »
        »
        »
        »
        5 weeks ago, # ^ |
          Vote: I like it +52 Vote: I do not like it

        Wow, ICPC prelims are now serving as training grounds for organizers to learn from.

Just to put into perspective how easily the so-called "mistakes" could have been avoided: it's the most basic of standard practices to write validators for input files (Yet Another GCD) and to run AI checks (Small Indices) on contest problems. Even if they somehow forgot to validate inputs (beats me how that happens, but let's just say), some organizers should have been following submissions while the contest was live, and given the large number of solutions failing on the same test case, they would at least be expected to have investigated at the time. An announcement, either updating the constraints in the problem statement or notifying teams that separate tests would be run after the round so they could move on, was the bare minimum expected from any responsible organizer.
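
For reference, the check that would have caught K = 0 is a handful of lines in a testlib-style validator. A sketch (the surrounding format, T test cases each with N and K followed by N values, is only my guess here, not the real statement):

#include "testlib.h"

// Sketch of an input validator; the point is the explicit bound check on K.
// The exact format and the other bounds are assumptions, not the real statement.
int main(int argc, char* argv[]) {
    registerValidation(argc, argv);
    int t = inf.readInt(1, 100000, "T");
    inf.readEoln();
    while (t--) {
        int n = inf.readInt(1, 200000, "N");
        inf.readSpace();
        inf.readInt(1, 1000000000, "K");   // would reject the K = 0 files immediately
        inf.readEoln();
        for (int i = 0; i < n; ++i) {
            if (i) inf.readSpace();
            inf.readInt(1, 1000000000, "A_i");
        }
        inf.readEoln();
    }
    inf.readEof();
}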

The above is stuff I know to be mandatory aspects of problem setting from having set local contests with participant pools of fewer than 50. Judging by the credentials of the organizers, it would be ridiculous for them not to know the same. The only reason for screw-ups this big is negligence. And of course, the cost was paid by students who lost an attempt and a year.

The latest concern seems to be an incorrect editorial solution for Equations (I just saw an upvoted comment about this; I can't confirm it at the moment, but if it's true, it could mean that the outputs generated from the setters' code were faulty and the judging was potentially wrong, not to mention an astounding 3 out of 5 problems being blatantly faulty).

This at least calls for them to acknowledge the screw-ups in a comment or edit somewhere, and the only way to give contestants a fair chance would be a re-contest, given both the magnitude of the contest and that of their sheer blunders, but I guess we should know better than to expect such accountability.

        • »
          »
          »
          »
          »
          5 weeks ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

Absolutely, a lot of these mistakes are easily avoidable and really make me doubt the contest makers. However, little can be done about the mistakes already made.

It is common sense to attempt the problemset with GPT, as GPT will be a tool used by lots of people. As a problem setter you must take heed of that fact and adjust accordingly.

Furthermore, having good test cases is a must; without good test cases, you might as well give a free point to anyone who attempts the question.

These mistakes may be acceptable in practice rounds, but for a contest of such importance, rigorous testing of the problem set is necessary.

However, the damage is done, and another re-contest is out of the question.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can anyone identify where my code for Unsatisfying Array goes wrong?

ll n, k;
    cin >> n >> k;
    vector<ll> a(n + 1, 0);
    vector<set<ll>> bound(n + 1);
    for (ll i = 0; i < k; i++) {
        ll x, y, m;
        cin >> x >> y >> m;
        if (x < 1 || y > n || x > y || m < 1 || m > n) {
            cout << -1 << '\n';
            return;
        }
        for (ll j = x; j <= y; j++) {
            bound[j].insert(m);
        }
    }
    for (ll i = 1; i <= n; i++) {
        if ((ll)bound[i].size() == n) {
            cout << -1 << '\n';
            return;
        }
    }

    ll i = 1;
    while (i <= n) {
        ll t = 1;
        while (bound[i].find(t) != bound[i].end() && t <= n) {
            t++;
        }
        a[i] = t;
        i++;
    }
    for (ll i = 1; i <= n; i++) {
        cout << a[i] << " ";
    }
    cout << '\n';
  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

Take this test case and rethink:

3 3
1 2 1
1 3 3
2 2 2

    • »
      »
      »
      5 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Hey, can you tell me where this is going wrong?

      Code
»
5 weeks ago, # |
Rev. 2   Vote: I like it 0 Vote: I do not like it

Where can I find the questions?

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

How do I solve the problem Small Indices? I can't understand Arpa's solution.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it 0 Vote: I do not like it

    https://mirror.codeforces.com/blog/entry/136522?#comment-1220912

Try proving why b[i] > level (b[i] > sum of the two previous max numbers) can only occur about O(log n) times.

Hint: try to make a case like 1 1 3 5 9 15; you will observe a pattern.
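
As a quick sanity check, you can generate that extremal pattern greedily and count how many "level-breaking" elements even fit under the value bound (2n <= 6000 here, per the constraints discussed above):

#include <bits/stdc++.h>

// Greedily emit the smallest next value that still exceeds the current level
// (sum of the two largest values so far); with values capped at 6000 this can
// only happen a handful of times, which is the O(log n) claim above.
int main() {
    long long m1 = 1, m2 = 1;        // start from the pattern 1 1 ...
    int steps = 0;
    while (m1 + m2 + 1 <= 6000) {
        long long v = m1 + m2 + 1;   // smallest value strictly above the level
        std::cout << v << " ";
        m2 = m1;
        m1 = v;
        ++steps;
    }
    std::cout << "\n" << steps << " level-breaking elements fit below 6000\n";
}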

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it -57 Vote: I do not like it

There are always two choices for each element, right? But think about it: almost always the optimal choice is obvious. For example, if the sum of the two maximums from the left (named level in my code) is very big, then whatever we do, we can't increase it.

The interesting fact here is that almost always the optimal choice is obvious, and there are at most about 18 elements for which you actually need to branch. You can use backtracking here to end up with an $$$\mathcal{O}(2^{18} \times n)$$$ solution.

    • »
      »
      »
      4 weeks ago, # ^ |
        Vote: I like it +78 Vote: I do not like it

The complexity of your solution is at least $$$\mathcal{O}((n/\log n)^{\log n})$$$, which would TLE (about $$$200^{10}$$$ operations). I have tested your code with the following test case; let me know if there are any mistakes in it.

      $$$n = 3000$$$, let $$$A_i = 1$$$ for all i. Set Bi as follows: $$$B_0 = B_1 = 1$$$. For the rest, consider this sequence: $$$L_i = 2^{i+1} - 1$$$. Now construct $$$B$$$ by setting it from $$$B_2$$$ onwards as $$$L_i$$$ in blocks of 200. So $$$B_2$$$ to $$$B_{202}$$$ is $$$3$$$, next is $$$7$$$ and so on. Of course we never go above $$$4095$$$ to ensure $$$B_i < 2n$$$.

      The only valid solution I've found till now is an $$$\mathcal{O}(n^2\log n)$$$ DP which not a lot of people submitted. Most teams submitted completely incorrect solutions which would either TLE or WA. I would like you and the organizing team to please look into this and find a solution.
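
For reference, a small generator for this case could look like the following (the input format, one test followed by n, then A, then B, is assumed from the samples quoted elsewhere in this thread, and the block boundaries are approximate; what matters is the overall shape):

#include <bits/stdc++.h>
using namespace std;

// Rough generator for the counter-test described above: n = 3000, all A_i = 1,
// B_0 = B_1 = 1, and from B_2 onwards blocks of 200 equal values 3, 7, 15, ...,
// capped at 4095 so that every B_i stays below 2n.
int main() {
    const int n = 3000;
    vector<int> A(n, 1), B(n, 1);
    long long val = 3;                               // L_1 = 2^2 - 1
    for (int i = 2; i < n; ++i) {
        B[i] = (int)val;
        if ((i - 1) % 200 == 0 && val < 4095)
            val = val * 2 + 1;                       // next L_i = 2^{i+1} - 1
    }
    cout << 1 << "\n" << n << "\n";
    for (int i = 0; i < n; ++i) cout << A[i] << " \n"[i + 1 == n];
    for (int i = 0; i < n; ++i) cout << B[i] << " \n"[i + 1 == n];
}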

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it -99 Vote: I do not like it

Thanks! I updated my code; now I'm using caching. Check it out. Can you find a counter case?

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |
            Vote: I like it +90 Vote: I do not like it

          Thanks for the update, but you are missing the point here. The point is that the official solution, as well as many other solutions would TLE, and many greedy ones would WA. The test cases were weak enough for naive GPT solutions to pass through.

          As I requested earlier, please look into this along with the organizing team. This is no way to organize such an important contest. I hope this is fixed by the organizers.

          Apart from this, there is still no proof that the proposed solution is sub-cubic, as the $$$2^{18}N$$$ claim is not true.

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it +5 Vote: I do not like it

        Hey

Sorry to piggyback on your comment, but I don't understand how the Unsatisfying Array model solution is optimal.

for i in range(1, n + 1):
    for rng in triplets[i]:
        if min(a[rng[0]:rng[1]]) == i:
            a[rng[0]:rng[1]] = [i + 1] * (rng[1] - rng[0])

This is O(N*K*N) according to me, and with sum(N) = 2000 and sum(K) = 2000 that should make it TLE. I used a segment tree for the min part; was that unnecessary?

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |
            Vote: I like it +17 Vote: I do not like it

I think that, unintentionally, AND Quest is the same as https://www.codechef.com/problems/CS2023_PON

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |
          Rev. 3   Vote: I like it +3 Vote: I do not like it

I think the total complexity of the nested loop, for i in range(1, n + 1): for rng in triplets[i]:, is O(k), because every range falls into the group of its unique minimum. And iterating over a single range takes at most n operations, so the overall complexity is O(n * k).

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Can you please share that O(n^2 log n) DP solution?

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |
          Rev. 4   Vote: I like it +6 Vote: I do not like it

A trivial solution would be an n^3 DP where dp[i][j][k] represents the maximum number of small indices you can have if the max value and the second max value up to the ith index are j and k respectively.

But think about the number of NON-small indices: there can only be around log n of them.

You can swap the states of the DP to something like: dp[i][j][k] represents the second maximum value up to the ith index if j is the maximum value and you have k NON-small indices so far.

Note: the max value and second max value here are C_j and C_k respectively.

Since 1 <= i <= n, 1 <= j <= 2*n and 1 <= k <= 25, and each state has only a constant number of transitions, this fits within the limits.

The memory limit was kind of strict for us, so we memory-optimized it.

          • »
            »
            »
            »
            »
            »
            4 weeks ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

Can you please explain the states in a bit more detail?

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

          I am not sure if this is correct now (ACed in Contest BTW).

          Edit: It's wrong

          Code
»
5 weeks ago, # |
  Vote: I like it +1 Vote: I do not like it

This is my code for Unsatisfying Array; can someone please point out why it is failing?

void solve() {
    int n, k;
    cin >> n >> k;
    vector<vector<int>> a(k);
    for (int i = 0; i < k; i++) {
        vector<int> temp(3);
        cin >> temp[0] >> temp[1] >> temp[2];
        a[i] = temp;
    }
    vector<vector<int>> data(n + 2, vector<int>(n, 0));
    for (int i = 0; i < k; i++) {
        int st = a[i][0];
        int en = a[i][1];
        int val = a[i][2];
        data[st][val - 1]++;
        data[en + 1][val - 1]--;
    }
    for (int i = 0; i < n; i++) {
        for (int j = 1; j < n + 2; j++) data[j][i] += data[j - 1][i];
    }
    vector<int> ans(n + 2);
    for (int i = 1; i < n + 1; i++) {
        bool ch = 0;
        for (int j = 0; j < n; j++) {
            if (data[i][j] == 0) {
                ans[i] = (j + 1);
                ch = 1;
                break;
            }
        }
        if (ch == 0) {
            cout << -1 << endl;
            return;
        }
    }
    for (int i = 1; i <= n; i++) cout << ans[i] << ' ';
    cout << endl;
}
»
5 weeks ago, # |
  Vote: I like it +75 Vote: I do not like it

I suspect the editorial solution for Equations is incorrect. The solution fails for the input

4 4 1 1 2 3 1 2 3 4 1 3 4 1 2 1 3

The editorial code's output is -1. The expected output for this test case is 1, since we can determine the values of a1 and a4: a1 + a4 = 0, and since the numbers in the array are non-negative, a1 = 0 and a4 = 0.

Please add this test case and rejudge the solutions if possible.

  • »
    »
    5 weeks ago, # ^ |
      Vote: I like it +15 Vote: I do not like it

I'd argue against that: we had finished the contest pretty early, and if that test case had been there from the start, we could possibly have debugged our code after getting the wrong verdict.

    • »
      »
      »
      4 weeks ago, # ^ |
        Vote: I like it +4 Vote: I do not like it

Yeah, it makes sense not to add any extra test case. But they need to fix the author's code and regenerate the output files for the existing test cases. There could be a test file containing a case where ai + aj = 0 that generated a wrong output, which could have led to many teams getting a wrong verdict. In that case, it's unfair to the ones who wrote the right code and still got WA.

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it +38 Vote: I do not like it

In that sense, getting AC and then having your solution declared wrong is also not fair, as you cannot guarantee that the team could not have solved it had the right verdict been given. Either way it's a shitty situation to be in, which could have been easily avoided if the setters had simply removed the word "non-negative" lol.

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it +9 Vote: I do not like it

I think it's better to include both the already-accepted ones and those who were affected by this case: just check whether someone got a wrong verdict because of such a case and rejudge only those particular solutions.

»
5 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Can someone share the contest link? Interested to check out the problems.

»
5 weeks ago, # |
  Vote: I like it +37 Vote: I do not like it

When will the final standings be declared?

»
4 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Why is this solution failing for Unsatisfying Array?

My code
»
4 weeks ago, # |
Rev. 2   Vote: I like it +85 Vote: I do not like it

To summarise:

  1. AND Quest — same as this problem.
  2. Small Indices — weak testcases, GPTable (Edit: GPT gets AC due to weak test cases).
  3. Yet Another GCD Problem — unvalidated/wrong testcases.
  4. Equations — wrong model solution (might have led to false judging).
»
4 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

Where can we submit these problems for practice? Kindly make this contest public.

Great contest overall. We could have solved 4 problems, but 3 is a very good result.

»
4 weeks ago, # |
Rev. 4   Vote: I like it +5 Vote: I do not like it
  • For the problem Small Indices, would this work? I came up with an $$$O(n^2)$$$ solution. I haven't submitted it, but I just wanted to discuss the idea; do correct me if I have made any logical error.
  • dp[index][count of good indices in the prefix up to i]

DP[i][j]

  • consider the first $$$i$$$ elements $$$(0..i)$$$; there are $$$j$$$ good indices among them
  • return the pair {mxsum, mxelement}
  • {max sum formed by taking 2 elements, maximum element possible}
  • if mxsum or mxelement is -inf, then the state (i,j) is not possible

Note:

  • mxelement and mxsum need not come from the same configuration; they can come from different configurations for the same $$$(i,j)$$$
  • mxelement is the maximum element achievable over all configurations with j good indices in the prefix (0..i)
  • mxsum is the maximum sum of 2 elements achievable over all valid configurations with j good indices in the prefix (0..i)

UPD: This approach is wrong.

  • »
    »
    4 weeks ago, # ^ |
    Rev. 5   Vote: I like it 0 Vote: I do not like it

I think I have an edge case. Consider the following case of (ai, bi) pairs: (1,1), (1,1), (5,2), (5,7), (5,5), (11,11), (17,17), ... (1,1) 1000 times. Now, according to the proposed method, we will end up picking dp[4][1] as (10,5), even though the pair (9,7) leads to a better answer.

    Basically, the following case

    1

    10

    1 1 5 5 5 11 17 1 1 1

    1 1 2 7 5 11 17 1 1 1

    • »
      »
      »
      4 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

Can you explain how dp[4][3] is (10, 5)?

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

My bad, it should be dp[4][1]; I was counting the initial two elements as well. Basically, choose 5 for both the 3rd and 4th indices and we get one small index, with max sum 10 and max element 5. But if we were to choose 2 and 7 for the 3rd and 4th elements, we would get one small index with max sum 9 and max element 7. Since we prefer the configuration with the higher sum, we get dp[4][1] as (10,5).

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

We can still do better: by choosing 5 and 7 for the 3rd and 4th indices we get the higher sum (12, 7).

Edit: j is different, got it.

          • »
            »
            »
            »
            »
            »
            4 weeks ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

Yeah, that's true. I specifically mention (9,7) because it leads to the more optimal answer later on for max small indices; also, (12,7) has 0 small indices, so it would be saved in dp[4][0].

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |
            Vote: I like it +3 Vote: I do not like it

No, I think dp[4][1] would be (10,7). Note: the max sum and the max element need not come from the same configuration; they can come from different configurations.

          • »
            »
            »
            »
            »
            »
            4 weeks ago, # ^ |
              Vote: I like it +3 Vote: I do not like it

            How are you saving the max element for the corresponding max sum to update future transitions then?

            • »
              »
              »
              »
              »
              »
              »
              4 weeks ago, # ^ |
              Rev. 2   Vote: I like it 0 Vote: I do not like it

If maxsum gets updated, then it means that the current element $$$a[i]$$$ or $$$b[i]$$$ is involved in the update, so new_maxsum would be the current element + the max_element over all previous configurations (i-1, j-1).

              • »
                »
                »
                »
                »
                »
                »
                »
                4 weeks ago, # ^ |
                Rev. 2   Vote: I like it 0 Vote: I do not like it

Ok, that's correct.

UPD: Found an edge case; it's wrong.

          • »
            »
            »
            »
            »
            »
            4 weeks ago, # ^ |
            Rev. 3   Vote: I like it 0 Vote: I do not like it

I think for this test case the answer should be 2, but your approach suggests it is 3.

            1

            6

            1 1 5 5 10 16

            1 1 2 7 10 16

            • »
              »
              »
              »
              »
              »
              »
              4 weeks ago, # ^ |
              Rev. 2   Vote: I like it 0 Vote: I do not like it

Just choosing the A array would give the answer as 3. The satisfying indices are the 4th, 5th and 6th. The inequality is $$$C_i \le C_j + C_k$$$.

UPD: It doesn't satisfy the 6th index. My bad, calculation mistake.

              • »
                »
                »
                »
                »
                »
                »
                »
                4 weeks ago, # ^ |
                Rev. 2   Vote: I like it 0 Vote: I do not like it

I still don't understand how it satisfies the 6th index.

    • »
      »
      »
      4 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      Can you check this one

      link

      • »
        »
        »
        »
        4 weeks ago, # ^ |
        Rev. 2   Vote: I like it +3 Vote: I do not like it

I think the test case mentioned here (link) fails: the output of your code is 6, but the optimal answer is 7 (by choosing dp[4][1] as (9, 7)).

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |
          Rev. 2   Vote: I like it 0 Vote: I do not like it

Sorry, I still don't get it; can you tell me the elements that were chosen to get the answer 7?

Edit: I get it now, they are [1, 1, 2, 7, 5, 11, 17, 1, 1, 1]

          • »
            »
            »
            »
            »
            »
            4 weeks ago, # ^ |
              Vote: I like it +8 Vote: I do not like it

From the 3rd index onwards, pick the elements in the order 2, 7, 5, 11, 17, 1, 1, 1. The pairs of max_sum and max_value are {9, 7}, {12, 7}, {18, 11}, {28, 17}, {28, 17}, {28, 17}, {28, 17}. So every index from 5 to 10 contributes to the answer, and index 3 contributes as well, since we picked 2 there (it still counts because the initial elements are 1, 1). So in total we get 7 good indices.

  • »
    »
    4 weeks ago, # ^ |
    Rev. 2   Vote: I like it 0 Vote: I do not like it

Got it, my approach is wrong. Thanks!

    • »
      »
      »
      4 weeks ago, # ^ |
        Vote: I like it +3 Vote: I do not like it

This passes; we did it similarly.

      Iterative Code
      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it -8 Vote: I do not like it

        Thanks for the response

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

The test case I mentioned below seems to pass with your code; could you describe your approach briefly?

    • »
      »
      »
      4 weeks ago, # ^ |
        Vote: I like it 0 Vote: I do not like it

      1

      10

      1 1 5 5 10 16 1 1 1 1

      1 1 2 7 10 16 1 1 1 1

The answer should be 6, but your code gives 7; just using dp[i-1][j-1][1] for the transition is not correct.

      • »
        »
        »
        »
        4 weeks ago, # ^ |
          Vote: I like it 0 Vote: I do not like it

Yeah, someone just pointed it out.

      • »
        »
        »
        »
        4 weeks ago, # ^ |
        Rev. 2   Vote: I like it 0 Vote: I do not like it
        Code

This code is passing all the edge cases I have seen so far. Is it possible to make a case where it fails?

        • »
          »
          »
          »
          »
          4 weeks ago, # ^ |
            Vote: I like it 0 Vote: I do not like it

          Could you mention your approach in brief?

          • »
            »
            »
            »
            »
            »
            4 weeks ago, # ^ |
              Vote: I like it 0 Vote: I do not like it

First, think of the naive n**3 DP solution, with a 2D state holding the sum and the max element of the two values whose sum I take.

After that, I claim that for every value of the sum, I only need to consider the configuration with the highest score. In case there are multiple ways to achieve the same highest score with the same sum, I consider the one with the greatest max element. (This can be proven quite easily by simply considering cases.)

By doing this, I have reduced the transition from an n**2 matrix to a 2n array, making the solution n**2.

I can treat sum values greater than 2n as equal, as my max element never exceeds 2n.

            • »
              »
              »
              »
              »
              »
              »
              4 weeks ago, # ^ |
              Rev. 4   Vote: I like it 0 Vote: I do not like it

Proof: let's consider the 2D state {sum of the 2 elements, max of the 2 elements} with score x.

Consider a case where you have to decide between {sum, sum/2} with score x and {sum, sum-1} with score x-1.

The claim is that {sum, sum/2} will always be better than or equal to the other case.

Consider a new element y which is >= sum; let's consider y = sum+1 and y = sum separately.

Case 1: y = sum+1 (generalised to sum + any positive value). The new states are {sum+sum/2+1, sum+1} with score x and {sum+sum, sum+1} with score x-1.

The case coming from {sum, sum/2} has the greater score.

Now, for a new element y such that sum+sum/2+1 < y <= sum+sum, the relation remains the same: the states become {sum+1+y, y} with score x and {sum+1+y, y} with score x, which are equal.

Case 2: y = sum. The new states are {sum+sum/2, sum} with score x+1 and {sum+sum-1, sum} with score x.

The case coming from {sum, sum/2} has the greater score.

Now, for a new element y such that sum+sum/2 < y <= sum+sum-1, the relation remains the same: the states become {sum+y, y} with score x+1 and {sum+y, y} with score x+1, which are equal.

Now consider y < sum:

1. y <= sum/2: the new states are {sum, sum/2} with score x+1 and {sum-1+sum/2, sum-1} with score x; it can be proven recursively that {sum, sum/2} with score x+1 will always have a higher score.

2. sum/2 < y < sum: the new states are {sum+y-sum/2, y} with score x+1 and {sum+y-1, sum-1} with score x; the same can be proven recursively again.

Hence, there exists no case for which {sum, sum/2} with score x is worse than {sum, sum-1} with score x-1.

This proof can be generalized for every value of mx.

»
4 weeks ago, # |
  Vote: I like it +60 Vote: I do not like it

There's a problem with AND Quest as well. You are initializing with $$$cur = (1 \ll 30) - 1$$$, which would be incorrect if $$$k = cur$$$ and none of the $$$A_i$$$ satisfy $$$A_i \& k = k$$$. For example, consider the following case:

$$$k = (1 \ll 30) - 1$$$

$$$A = [1, 2, 3]$$$

Your output will be $$$YES$$$.

Here there is no subset of $$$A$$$ satisfying the constraint (your program assumes that the null set gives cur, leading to a wrong answer).

  • »
    »
    4 weeks ago, # ^ |
      Vote: I like it +46 Vote: I do not like it

And here I thought 3 out of 5 problems with issues were enough...

»
4 weeks ago, # |
  Vote: I like it +6 Vote: I do not like it

When will the list of selected teams be declared for different sites?

»
4 weeks ago, # |
Rev. 4   Vote: I like it 0 Vote: I do not like it

Can someone give a few test cases for Unsatisfying Array, since mine fails on TC 5?

My approach is to put the minimum value possible at any given position. If 1, 2, 3 and 6 cannot be used at an index, use 4, remember 1, 2, 3 so as not to use them until their respective ending indices, and ignore the condition for 6 since 4 is already smaller than 6.

Here is the code:


int main() {
    DRAMA;
#ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
    freopen("output.txt", "w", stdout);
#endif
    int t;
    cin >> t;
    for (int o = 1; o <= t; o++) {
        ll n, k;
        cin >> n >> k;
        vector<pair<pair<ll,ll>,ll>> v;
        for (int i = 0; i < k; i++) {
            ll a, b, c;
            cin >> a >> b >> c;
            v.push_back({{a, b}, c});
        }
        sort(all(v));
        ll j = 0;
        vector<ll> ans(n);
        int r = 0;
        vector<ll> temp(n + 2, 0);
        priority_queue<pair<ll,ll>, vector<pair<ll,ll>>, greater<pair<ll,ll>>> pq;
        for (int i = 0; i < n; i++) {
            vector<pair<ll,pair<ll,ll>>> temp1;
            while (j < v.size() && v[j].first.first - 1 == i) {
                temp[v[j].second]++;
                temp1.push_back({v[j].second, v[j].first});
                j++;
            }
            while (!pq.empty() && pq.top().first - 1 < i) {
                temp[pq.top().second]--;
                pq.pop();
            }
            ll res = mex(temp);
            if (res > n) {
                r = 1;
                break;
            }
            ans[i] = res;
            for (int i = 0; i < temp1.size(); i++) {
                if (temp1[i].first < res) {
                    pq.push({temp1[i].second.second, temp1[i].first});
                } else temp[temp1[i].first]--;
            }
        }
        if (r == 1) cout << -1 << endl;
        else {
            for (auto el : ans) cout << el << " ";
            cout << endl;
        }
    }
}
»
4 weeks ago, # |
  Vote: I like it 0 Vote: I do not like it

How long until the results are finalized?

»
4 weeks ago, # |
Rev. 3   Vote: I like it 0 Vote: I do not like it

Hey guys, I just wanted to verify my solution to Small Indices, as my approach was different. Arpa and Enigma27, could you please check this, since you both seem active here:

I did an O(N^2) approach using the index and the sum of the two previous maximum elements found as the state. The second dimension has 12000 values, since 0 <= A[i], B[i] <= 2*N and N is at most 3000, so 4*N = 12000.

This is what I submitted for Small Indices. (It was accepted in the contest, but since there is a lot of speculation about the problem, I'm not sure that being accepted means it is correct.)

#include <bits/stdc++.h>
using namespace std;

int solve(int i, int &n, int m1, int m2, vector<int> &a, vector<int> &b, vector<vector<int>> &dp) {
    if(i == n) return 0;
    if(i == 0) {
        m1 = max(a[i], b[i]);
        return solve(i+1,n,m1,m2,a,b,dp);
    }
    else if(i == 1) {
        m2 = max(a[i], b[i]);
        return solve(i+1,n,m1,m2,a,b,dp);
    }
    
    if(dp[i][m1 + m2] != -1) return dp[i][m1 + m2];
    
    int ans1, ans2, tm1 = m1, tm2 = m2, add = 0;
    if(a[i] <= (m1 + m2)) add = 1;
    if(a[i] > m1 && a[i] > m2) {
        if(m1 > m2) {
            m2 = a[i];
        }
        else {
            m1 = a[i];
        }
    }
    else if(a[i] > m1) {
        m1 = a[i];
    }
    else if(a[i] > m2) {
        m2 = a[i];
    }
    ans1 = add + solve(i+1,n,m1,m2,a,b,dp);
    m1 = tm1;
    m2 = tm2;
    add = 0; 
    if(b[i] <= (m1 + m2)) add = 1;
    if(b[i] > m1 && b[i] > m2) {
        if(m1 > m2) {
            m2 = b[i];
        }
        else {
            m1 = b[i];
        }
    }
    else if(b[i] > m1) {
        m1 = b[i];
    }
    else if(b[i] > m2) {
        m2 = b[i];
    }
    ans2 = add + solve(i+1,n,m1,m2,a,b,dp);
    m1 = tm1;
    m2 = tm2;
    
    dp[i][m1 + m2] = max(ans1, ans2);
    return dp[i][m1 + m2];
}
void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    vector<int> b(n);
    
    for(int i = 0; i < n; i++) cin >> a[i];
    for(int i = 0; i < n; i++) cin >> b[i];
    
    vector<vector<int>> dp(n, vector<int>(12001,-1));
    
    cout << solve(0,n,-1,-1,a,b,dp) << "\n";
}

int main() {
 int t;
 cin >> t;
 while(t--) {
     solve();
 }
 return 0;
}