


Okay, so D is done by ChatGPT but also oursaco. oursaco, if you don't mind, can you please tell us a little bit about how you contributed to the creation of that problem? Did you just write the prompt, or did the ChatGPT output have to be revised?
Asked ChatGPT to make a problem; what it initially produced was solving the problem for a single array with no $$$i < j$$$ constraint (this was an existing Div. 2 A). I thought the $$$i < j$$$ constraint would make it more interesting and solved for it. Then the solve-for-every-prefix part was added. Also made sure to plug it back into o1-preview to make sure it couldn't solve it XD
Code is not accessible.
Fixed, thanks for letting me know
Can't see the tutorial of D, but is it possible to solve D using knapsack?
I didn't solve it, but I think D is solved using this observation: if $$$a_i=2^{p_i} \cdot {q_i}$$$ with $$$q_i$$$ odd, there should be $$$p_i$$$ operations that greedily halve $$$a_i$$$ and greedily double the member of the prefix with greater or equal index and maximum $$$q$$$.
With this, we can compute the optimal sum for each prefix separately in quadratic time. Or, we can compute each prefix from the previous prefix in linear time (somewhat like DP).
Essentially, we can process the $$$i^{\text{th}}$$$ prefix as adding $$$A_i$$$ to an array $$$p$$$. By adding $$$A_i$$$, we offer a new $$$q$$$ to all of the elements that precede it. We obviously want every one of $$$A_1, \cdots, A_{i-1}$$$ that is using its operations to increase an element with a lesser $$$q$$$ to instead increase $$$A_i$$$. All of the elements that precede the nearest element with a greater $$$q$$$ will not change, but those that follow it will now use their operations to increase $$$A_i$$$. We can find the nearest element with a greater $$$q$$$ using a monotonic stack, which is well explained by the USACO Guide and CPH. With prefix sums, we can find the factor by which $$$A_i$$$ increases, and a monotonic stack has linear time complexity, so the solution runs in $$$O(n)$$$.
I don't see the intuition for Knapsack DP as the problem isn't looking for a subset, but maybe you're on to something.
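For what it's worth, the monotonic-stack idea above can be sketched roughly like this (a minimal Python sketch of my own, not the editorial's code; the function name and the 60-bit comparison cap are my choices). Each element is split into its odd part $$$q$$$ and a count of factors of two; an earlier element should hand all of its twos to the current one exactly when the current value $$$q \cdot 2^e$$$ is at least that earlier element's odd part:

```python
MOD = 10**9 + 7

def prefix_maxsums(a):
    """Max achievable sum (mod 1e9+7) for every prefix of a."""
    res = []
    stack = []  # (q, e): odd part q currently holding e factors of two
    s = 0       # answer for the current prefix, mod MOD
    for x in a:
        e = 0
        while x % 2 == 0:
            x //= 2
            e += 1
        q = x
        # Moving all e_t twos from an earlier (q_t, e_t) onto the current
        # element is a gain iff q * 2^e >= q_t. Since q_t <= 1e9 < 2^60,
        # we cap the shift to keep the comparison cheap.
        while stack and (q << min(e, 60)) >= stack[-1][0]:
            qt, et = stack.pop()
            e += et
            # that element now keeps only its odd part
            s = (s - qt * pow(2, et, MOD) + qt) % MOD
        s = (s + q * pow(2, e, MOD)) % MOD
        stack.append((q, e))
        res.append(s)
    return res
```

On the small examples discussed elsewhere in these comments, this gives `prefix_maxsums([4, 7, 2, 2, 2]) == [4, 29, 31, 34, 42]` and `prefix_maxsums([20, 9]) == [20, 41]`.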
> All of the elements that precede the nearest element with a greater $$$q$$$ *will not change*
I don't think comparing only $$$q$$$ values works: a later element with a lower $$$q$$$ can make better use of previously allocated operations if enough of the operations performed on it come after the previous greater $$$q$$$. Case in point: sample test case 2, at prefix 5, where the operation previously performed on 9 is better performed on 7, since 7 can then receive 3 operations (including the one taken from the 9), making its value $$$9 + 7 \cdot 2^3$$$, which is higher than $$$9 \cdot 2 + 7 \cdot 2^2$$$.
EDIT: Got an AC by applying a logarithm-based check for shifting operations.
C is the hardest div2C in history. Even harder than D2 from the last div2 round (rated 2187 on CList).
what is CList?
https://clist.by/problems/
I took a look, and it doesn't seem that difficult; it only has a score of 1322.
You, sir, have shared a resource of major importance! I will definitely use it in the future!
i found it easy
What? It shows only 1322 for me. Maybe it wasn't properly updated when you saw it?
He means the last div2 D2 was 2187
Oh, I get it now. I thought they meant it is rated higher than that 2187. Turns out they 'felt' it's harder than that.
Yo, I've just completed a tutorial for problem D on HackMD: link
Sorry for not publishing the tutorial directly on Codeforces, I prefer using HackMD :)
First time getting upvoted, thanks a lot!
C, odd case: since $$$n \bmod 2 = 1$$$, $$$l$$$ will always be 1, so it is easier to just set the last 4 elements to $$$1, 3, n-1, n$$$.
Can someone share their thought process for D, like how and why it works?
You have to collect all the 2's, in terms of powers, and give them to the best successor possible, then add the sum of the remaining odd parts that are not considered.
Ex1: 1 6 5
For this, you have to give all the powers of 2 to 5, which is better than 3.
Ex2: 4 1 3
Same as above but the layout is different: first you accumulate at 1, then accumulate at 3.
Ex3: 4 7 2 2 2
Here, for the first 2 elements, 4's powers will be accumulated into 7.
For 3 elements, since there is no benefit in sending 4's powers to 2 over 7, we leave it unchanged.
For 4 elements, we first accumulate the previous 2 and then compare with 7; since 7 is bigger, the powers remain unchanged.
For 5 elements, we accumulate the past two 2's to get 8, which is in turn greater than 7, so we send the 4's powers to 8, making it 32.
Thus we store all the accumulated indices and powers in a stack, and keep the odd parts left over after removing the powers of 2 in a separate variable (unaccumulated_odds). Whenever we get a new element, we try to pop elements from the stack and accumulate their powers at the new element, just as shown in example 3.
Think of it like this: among the 31 even numbers preceding a given $$$i$$$, there will be a number that, with or without shifting in 2's, exceeds $$$10^9$$$, since every $$$a_i$$$ is bounded by $$$10^9$$$. All other even numbers before that one will lose their 2's to the largest element.
Can E be solved by simulated annealing? I submitted several times, but no luck: 288417811. This is the first time I've written simulated annealing, so it may contain some mistakes. So I wonder whether it can be solved by such an approach or not.
I also tried but failed ;( Multi-test problems aren't so easy for SA to pass.
No way in hell I can ever "notice" what the editorial tells us to notice in problem D.
Solved it by thinking about where to utilize the 2's between the last number we gave 2's to and the current number $$$i$$$. If we can use these 2's to make $$$a_i$$$ greater than that last number, then we should also move the 2's we gave the previous number over to this $$$a_i$$$. Otherwise, we re-use the answer from the last number we gave the 2's to and give all the 2's in between to $$$a_i$$$. To maintain the last number we gave all the 2's to, we can use a monotonic stack. Submission: https://mirror.codeforces.com/contest/2035/submission/288418798
Actually, I did notice it after thinking about it for a while. Not bad. Wouldn't have ever come up with it myself though. Thanks for the editorial.
you can solve E using ternary search too.
talk is cheap, share code.
288768348
Runs in $$$O(C \log z)$$$.
C = 100 works.
Idea: if you replace the ceiling division in the formula with normal division, the function simplifies down to $$$pA + t/A$$$, where $$$p$$$ and $$$t$$$ are constants. This is a valley array, and the valley can be found with ternary search. Then just search a bit to either side of that valley to account for any inaccuracies caused by the division change.
I can't really prove the horizontal search; if anyone could share a counter case, that would be cool.
Also, side note: I actually never had all the details ironed out before I started coding it, so I guess drexdelta was right. Upvoted :)
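A minimal sketch of that idea (my own toy version, not the linked submission): ternary-search the near-unimodal cost $$$f(A) = pA + \lceil t/A \rceil$$$, then brute-force a small window around the point found to absorb the ceiling's inaccuracy. The function name and the window size of 100 are arbitrary choices here.

```python
def valley_min(p, t, lo=1, hi=10**9, window=100):
    """Minimize f(A) = p*A + ceil(t/A) over integers A in [lo, hi]."""
    def f(a):
        return p * a + (t + a - 1) // a  # integer ceil(t / a)
    # Ternary search narrows toward the valley of the smooth surrogate p*A + t/A.
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    # Scan a window around the landing point to handle plateaus from the ceiling.
    return min(f(a) for a in range(max(1, lo - window), min(10**9, hi + window) + 1))
```

For example, `valley_min(2, 100)` finds the minimum of $$$2A + \lceil 100/A \rceil$$$, attained around $$$A \approx \sqrt{t/p}$$$.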
Nice one. Although I tried a lot during the contest to pass with ternary search, it didn't work.
Lol no. You can maybe pass the current test set if you are lucky, but that's not "solving" it.
for C,
we can also approach it greedily: observe that if $$$n$$$ is odd, then the last operation is AND, so the result is at most the last element; hence we choose the last element to be $$$n$$$ and the second-to-last to be $$$n-1$$$, as $$$(n \& (n-1)) = (n-1)$$$. But what if we try to turn the lowest bit of $$$(n-1)$$$ on? Then we can make it $$$n$$$. How? Observe that if we place $$$3, 1$$$ before it, we get the lowest bit ON, which is carried forward to give the maximum value $$$n$$$. Also, in the case where $$$n$$$ is not a power of two, $$$n$$$ has a bit which isn't present in the rest of the sequence, so we can greedily place it at the end and have a permutation such as $$$1, 2, 3, \ldots, n$$$. The rest is nicely explained in the editorial, thanks for it! Submission link: https://pastebin.com/x0gwHdeL
Am I the only one who found B to be harder than C and D? (and D easier than C)
One more way to solve F (same observation and logic, but a different implementation):
Do a normal binary search on the answer, but instead of checking only mid, check every value from mid - 2n + 1 to mid; if even one of them turns out to be true (say for num), set hi = num - 1, else lo = mid + 1.
I came up with the same thing but got TLE.
https://mirror.codeforces.com/contest/2035/submission/288440877
Maybe it's a random pass, they should have given n<=1000
Can someone help me prove my solution for C? I brute-forced some cases of $$$2, 1, 3, 4, \ldots, n$$$ and noted that we always reach the required $$$k$$$ somewhere. Based on this, my solution is just $$$x + 1, x + 2, x + 3, \ldots, n, 2, 1, 3, \ldots, x$$$ for some $$$x$$$ such that the resultant $$$k$$$ for $$$2, 1, 3, \ldots, x$$$ is maximum and $$$n - x$$$ is even. I cannot formally prove why the sequence $$$2, 1, 3, 4, \ldots, x$$$ always gives the best $$$k$$$.
Submission
I just don't know why $$$O(n^2\log{a_i})$$$ does not fit in 4 seconds in problem F.
UPD: It seems to be my problem. I was submitting in G++ 17, which is based on a 32-bit architecture, instead of G++ 20.
We knew the TL was tight for some people but were also aware of a very fast N^3 solution and wanted to cut it.
In such cases, it might be better to modify the constraints a bit.
For example, you could use $$$0 \le a_i \le 10^5$$$ instead of $$$0 \le a_i \le 10^9$$$.
I guess that would have made sense. However, there were a lot of optimizations where any one of them could make it pass (although even unoptimized solutions passed comfortably for many people).
What is your $$$ \mathcal{O}(n^3)$$$ solution?
I am confused: in problem D, we can have $$$a_i > a_j$$$, yet it is more optimal to divide $$$a_i$$$ by 2 and multiply $$$a_j$$$ by 2.
example: 20 and 9
20 + 9 = 29
but we can do: 20 / 2 / 2 = 5
and 9 × 2 × 2 = 36
and the sum becomes 41 > 29
what am I missing here ?
Consider this case in a "distribute the $$$2$$$ factors" manner, now we have $$$20=5\times 2^2$$$, and it's pretty clear that giving the $$$2$$$ factors to $$$9$$$ is better than to $$$5$$$.
However, E can be solved in $$$O(z^{3/4})$$$, only by using some optimizations in the brute force. My Code
Please explain the logic you used for optimization
Every time you increase $$$d$$$ by $$$k$$$ and then use the second operation, this happens at most $$$O(\sqrt{\dfrac zk})$$$ times.
If you choose to increase $$$d$$$ by $$$i$$$ and always choose the second operation from then on, the additional cost is $$$ix+\lceil\dfrac z{d+i}\rceil y$$$. You can see that there are at most $$$O(\sqrt z)$$$ distinct values of $$$\lceil\dfrac z{d+i}\rceil$$$, and always using the smallest $$$i$$$ for each value of $$$\lceil\dfrac z{d+i}\rceil$$$ is best. So the total is now $$$O(\sqrt{\dfrac zk}\min(\sqrt z,k))$$$.
If $$$\sqrt z\le k$$$ , then $$$O(\sqrt{\dfrac zk}\cdot\sqrt z)=O(\dfrac z{\sqrt k})\le O(z^{\frac 34})$$$ ;
If $$$\sqrt z \gt k$$$ ,then $$$O(\sqrt{\dfrac zk}\cdot k)=O(\sqrt{zk}) \lt O(z^{\frac 34})$$$ .
Hori
In the editorial of problem E, you have mentioned this statement:
$$$a \cdot b \ge 2z$$$ implies that $$$\min(a,b) \le \sqrt{2z}$$$? Why is that? How did we reach there? Just to plug in a few values, let's take $$$a = 5$$$, $$$b = 5$$$, and $$$z = 5$$$; then
$$$a \cdot b \ge 2z$$$ holds true, but $$$\min(a,b) \le \sqrt{2z}$$$ doesn't hold true. Please can you elaborate a little here?
When $$$z=5$$$ and $$$a=5$$$, $$$b=2$$$ satisfies the inequality, so $$$\min(a, b) \le \sqrt{2z}$$$ is still satisfied. While there are definitely other solutions, and $$$b=5$$$ is one, as you correctly point out, they are simply worse than the solution we just found, as they cost more.

Is system testing done for this round?!
long back bruh.
For solution E, I don't understand how we got $$$\dfrac{a\cdot b}{2} \ge z$$$
You can think about it like this: since we know that $$$damage(a, b) \gt \frac{a \cdot b}{2}$$$, if we pick $$$a, b$$$ where $$$a \gt \sqrt{2z}$$$ and $$$b \gt \sqrt{2z}$$$, then $$$\frac{ab}{2} \gt z$$$, meaning that $$$damage(a, b)$$$ is surely greater than $$$z$$$. So our $$$a$$$ and $$$b$$$ were not optimal.
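That step is pure arithmetic, and it can be sanity-checked numerically: whenever both $$$a$$$ and $$$b$$$ exceed $$$\sqrt{2z}$$$, the bound $$$ab > 2z$$$ indeed holds, so such a pair already overshoots $$$z$$$. A throwaway exhaustive check of my own (the function name is made up, and `math.isqrt` is used to avoid floating point):

```python
import math

def implication_holds(z_max=200, ab_max=40):
    """Confirm: a > sqrt(2z) and b > sqrt(2z)  =>  a*b > 2z, for small ranges."""
    for z in range(1, z_max + 1):
        s = math.isqrt(2 * z)               # floor(sqrt(2z))
        for a in range(s + 1, ab_max + 1):  # every a with a > sqrt(2z)
            for b in range(s + 1, ab_max + 1):
                if a * b <= 2 * z:          # would contradict the implication
                    return False
    return True
```

The contrapositive is exactly the editorial's claim: an optimal pair must have $$$\min(a,b) \le \sqrt{2z}$$$.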
The value for $$$\text{has_}i$$$ in Problem F's editorial is wrong. Should be $$$\lfloor \frac{x}{n}\rfloor + [(x\mod n)\geq i] =:\text{has_}i$$$ instead.
Hey, I came up with this solution, but I'm unsure of its correctness and am having difficulty implementing it because of comparing large numbers while handling the mod.
dp[i] = max result possible for prefix i.
The transition would be either: place every power of 2 available at index i and add the sum of the odd base parts of indexes 0 → i-1,
or:
take the last index before i where we placed all the 2's, i.e. dp[last[i-1]], plus the result of placing all the 2's in the range last[i-1]+1 to i onto the i-th element. Then take whichever is larger and set last[i] = i or last[i-1] accordingly. Can anyone help me with its correctness and how to implement it?
In F, let's say the root is 3 and we are now doing an operation on node 1; the parities of the nodes are 0 0 1 0 0.
If I do the operations as the editorial says: "Have all nodes toggle their own parity, but have the root toggle the parity of the node that comes directly after it. Now, that node can just fix itself, and all nodes have the same parity."
After those operations, doesn't it become 1 1 1 0 1? How does that make all parities the same?
How to think of problems such as C?
Problem E: we can solve it in $$$O(k \cdot \sqrt{z / k})$$$ if $$$k \le 10^4$$$, else in $$$O(10^4 + \sqrt{z / k})$$$.
If $$$k \le 10^4$$$, the brute force takes $$$O(k \cdot \sqrt{z / k})$$$.
Otherwise, we try 1 to $$$10^4$$$ upgrades without attacks between them. Then we need to find the points where $$$\lceil z / w \rceil \cdot y$$$ is optimized, and there are (in the worst case) about $$$10^4$$$ such points. These points can be calculated while simulating jumps of length $$$k$$$, so the complexity of this part is $$$O(10^4 + \sqrt{z / k})$$$.
It might have a better complexity if we tune the threshold on $$$k$$$, but I'm too lazy to calculate it xd
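The points where $$$\lceil z/w \rceil$$$ changes can be enumerated with the standard division-blocks trick in $$$O(\sqrt z)$$$. A hedged sketch of my own (the function name is made up); it lists the smallest $$$w$$$ for each distinct ceiling value, which is the only $$$w$$$ worth stopping at within a block:

```python
def ceil_breakpoints(z):
    """(w, ceil(z/w)) for the smallest w attaining each distinct ceiling value."""
    pts = []
    w = 1
    while w <= z:
        v = (z + w - 1) // w      # ceil(z / w)
        pts.append((w, v))
        if v == 1:
            break
        # ceil(z/w') >= v  iff  w' <= (z - 1) // (v - 1); jump past that block
        w = (z - 1) // (v - 1) + 1
    return pts
```

There are $$$O(\sqrt z)$$$ such blocks, which is where the $$$\sqrt{z/k}$$$-style bounds in these comments come from.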
It seems like E can be solved in $$$O(\sqrt z)$$$ time; here is the code for it: 366869087. As of writing this, it also has the best execution time.