For the second paragraph of 1E (Vabank), can someone explain what exactly the queries are, and what is the analysis for why this results in $$$2 \log (10^{14})$$$ queries? I don't quite understand from the description in the editorial.
EDIT: I think this makes sense now. Start with $$$L$$$ money, and try $$$L$$$ then $$$L + \frac{R-L}{2}$$$ each time. If $$$L + \frac{R - L}{2}$$$ is too large, then this process will decrease the amount of money by $$$\frac{R-L}{2}$$$. $$$\frac{R-L}{2}$$$ vanishes quickly (i.e. halves at each step), so the balance will never become negative.
In problem D, can anyone explain how to decide the number of iterations to perform?
while(!set.empty()){}
Help needed in task B. Can someone please tell why $$$m$$$ would be arbitrarily large if we have no positive difference?
Consider this array: 2 2 2 2 2. Since $$$c = 0$$$ and $$$a[i] \bmod m = a[i]$$$ for every $$$m > a[i]$$$, $$$m$$$ can be any value greater than $$$a[i]$$$. Hope it helps.
But according to the editorial, if $$$a = \{9, 7, 5, 3\}$$$, then $$$m$$$ should be arbitrarily large, since all the negative differences are the same and no positive difference exists. But why? I haven't understood this part. Can you please help me?
If $$$a=[9,7,5,3]$$$, then the answer could be $$$c=8$$$, $$$m=10$$$, or $$$c=9$$$, $$$m=11$$$.
Formally, any $$$m > \max(a)$$$ together with $$$c = m + d$$$, where $$$d$$$ is the (negative) common difference, works if and only if the sequence is an arithmetic sequence with negative common difference. Thus $$$m$$$ can be arbitrarily large.
Not the author's solutions, but fun solutions to D and E:
Solution D: Let's run Dijkstra on the moments of deletion. Clearly, a song is deleted later if it is removed on a later lap around the playlist, or, if the lap numbers are equal, by its position within the lap. So we run Dijkstra on these parameters. The first lap is a plain simulation; after that we only need to maintain the next still-active song, which is easily done with a doubly linked list. Each time we extract a candidate, we check that the song has not been deleted yet; if it has not, we add it to Dijkstra, delete the next song, check whether the gcd with the new neighbour equals one, and if so, push it again with the lap number increased by 1. The resulting complexity is $$$O(n \cdot (\log(n) + \log(C)))$$$.
Code: https://pastebin.com/n2Sx49fW
Solution E: Let's look at the element with the minimum height in the array and at the two segments into which it splits the array. Note that we must take the beauty of this element into the answer, and that the two segments into which the array is divided are independent, so we can solve them independently. The only difference from the original problem is that we can remove a prefix/suffix of the segment for free.
So we implement a recursive function $$$solve(l, r, t1, t2)$$$, where $$$t1/t2$$$ indicate whether we may remove a prefix/suffix for free. The base case is immediate: if $$$r < l$$$, the answer is $$$0$$$, because there are simply no elements.
Let $$$pos$$$ be the position of the minimum on the segment $$$(l, r)$$$, $$$res1 = solve(l, pos-1, t1, 1)$$$, $$$res2 = solve(pos+1, r, 1, t2)$$$. Also, if $$$t1 = 1$$$ or $$$t2 = 1$$$, the answer is at least $$$0$$$ (because we can simply remove the entire segment for free). If $$$t1 = 1$$$, we may skip $$$res1 + b[pos]$$$ and take only $$$res2$$$ (the whole prefix through $$$pos$$$ is removed for free). If $$$t2 = 1$$$, we may similarly skip $$$res2 + b[pos]$$$ and take only $$$res1$$$. The answer can also be $$$res1 + res2 + b[pos]$$$. These transitions are enough: if we took $$$b[pos]$$$ into the answer, the problem splits into two smaller ones which we solve correctly, and if we did not take this element, then we did not take $$$res1$$$ or $$$res2$$$, which is a case already considered. Each time we find the minimum element and split the problem into two smaller ones, so it is not hard to show that the complexity is $$$O(n \cdot \log(n))$$$: after each step the problem loses one element, and each minimum query takes $$$O(\log)$$$ (it can be done in linear time overall, but that requires a bit more thought).
Code: https://pastebin.com/5H8TnGq5
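For concreteness, here is a minimal untested C++ sketch of the recursion described above (the sparse table, the demo values in main, and all names are mine; it assumes distinct heights and uses an $$$O(1)$$$ range-minimum query):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Sketch of the divide-and-conquer for E described above (untested).
int n;
vector<ll> h, b;              // heights and beauties
vector<vector<int>> sp;       // sparse table of argmin positions
vector<int> lg;

int better(int i, int j) { return h[i] < h[j] ? i : j; }

void buildSparse() {
    lg.assign(n + 1, 0);
    for (int i = 2; i <= n; i++) lg[i] = lg[i / 2] + 1;
    sp.assign(lg[n] + 1, vector<int>(n));
    for (int i = 0; i < n; i++) sp[0][i] = i;
    for (int k = 1; (1 << k) <= n; k++)
        for (int i = 0; i + (1 << k) <= n; i++)
            sp[k][i] = better(sp[k - 1][i], sp[k - 1][i + (1 << (k - 1))]);
}

int argminPos(int l, int r) {                 // position of the minimum height on [l, r]
    int k = lg[r - l + 1];
    return better(sp[k][l], sp[k][r - (1 << k) + 1]);
}

// t1 / t2: may we drop a prefix / suffix of [l, r] for free?
ll solve(int l, int r, bool t1, bool t2) {
    if (l > r) return 0;
    int pos = argminPos(l, r);
    ll res1 = solve(l, pos - 1, t1, true);    // the photo of `pos` may swallow a suffix of the left part
    ll res2 = solve(pos + 1, r, true, t2);    // ... and a prefix of the right part
    ll best = res1 + b[pos] + res2;
    if (t1) best = max(best, res2);           // drop the whole prefix up to and including `pos`
    if (t2) best = max(best, res1);           // drop the whole suffix starting at `pos`
    return best;
}

int main() {
    // Tiny demo: heights 1 2 3, beauties 10 -1 5.
    n = 3;
    h = {1, 2, 3};
    b = {10, -1, 5};
    buildSparse();
    cout << solve(0, n - 1, false, false) << "\n";   // prints 15: photos [1,2] and [3], i.e. 10 + 5
}
```

Note that the recursion depth can reach $$$n$$$ when the heights are monotone, so for large inputs an explicit stack may be preferable.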
Can you please explain what you mean by doing Dijkstra?
Thanks, nice explanation.
Is there any DP approach for C?
In problem D's editorial: "It is easy to find it using the bad pairs set." How exactly?
Maintain a queue of songs that are known to make a bad pair with the next undeleted song. In the beginning, fill the queue with the indices i for which gcd(A[i], A[(i+1)%n]) = 1. Then, every time you pop an index i from the queue: if the i-th element has already been deleted, skip it; otherwise delete the element that follows i, and if the gcd of A[i] with its new successor is 1, push i back to the queue (111234401).
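For concreteness, here is a minimal untested C++ sketch of this queue idea (the helper name deletionOrder and the demo genres are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns the 1-based indices of the deleted songs, in the order they are deleted.
vector<int> deletionOrder(const vector<int>& a) {
    int n = a.size();
    vector<int> nxt(n);
    for (int i = 0; i < n; i++) nxt[i] = (i + 1) % n;   // circular linked list
    vector<bool> dead(n, false);
    queue<int> q;                                       // i such that (i, nxt[i]) is a bad pair
    for (int i = 0; i < n; i++)
        if (__gcd(a[i], a[nxt[i]]) == 1) q.push(i);
    vector<int> order;
    while (!q.empty()) {
        int i = q.front(); q.pop();
        if (dead[i]) continue;                          // stale pair: its first song is already gone
        int j = nxt[i];                                 // this is the song that gets deleted
        dead[j] = true;
        order.push_back(j + 1);
        if (j == i) break;                              // the last remaining song deleted itself
        nxt[i] = nxt[j];                                // unlink j from the circular list
        if (__gcd(a[i], a[nxt[i]]) == 1) q.push(i);     // the new adjacent pair may be bad too
    }
    return order;
}

int main() {
    // Tiny demo: genres 1 4 6 3 -> songs 2 and 1 get deleted, in that order.
    for (int x : deletionOrder({1, 4, 6, 3})) cout << x << ' ';
    cout << "\n";
}
```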
Simple circular linked list solution for Div2 D can be optimised to O(N).
110871644
At the moment of deletion, just handle it in a circular linked list; in the plain algorithm, the one use of a set can also be replaced by a queue with just a few changes.
For problem B, for the input 3 7 3 4, why is (5 1) a wrong answer?
All the elements should be less than $$$m$$$, because $$$a_i = (a_{i-1} + c) \bmod m$$$ and $$$a_1 = s \bmod m$$$. You can check the solution $$$(m, c)$$$ before printing: $$$(3+1) \bmod 5 = 4 \ne 7$$$, hence WA.
Only the first element should be less than $$$m$$$, I guess.
Just try to generate the array. If you get the same array as the input, then your suggestion of $$$(m, c)$$$ is correct. Otherwise, the answer is -1.
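For concreteness, a tiny untested sketch of this check (the helper name check is mine; the sample pairs below just reuse the examples from this thread):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns true if the candidate pair (m, c) regenerates the given array,
// i.e. every a[i] < m and a[i] == (a[i-1] + c) % m for i >= 1.
bool check(const vector<long long>& a, long long m, long long c) {
    for (long long x : a)
        if (x >= m) return false;               // nothing >= m can appear modulo m
    for (size_t i = 1; i < a.size(); i++)
        if ((a[i - 1] + c) % m != a[i]) return false;
    return true;
}

int main() {
    cout << boolalpha;
    cout << check({3, 7, 3, 4}, 5, 1) << "\n";  // false: 7 >= 5, and (3 + 1) % 5 = 4, not 7
    cout << check({9, 7, 5, 3}, 10, 8) << "\n"; // true: the example from earlier in this thread
}
```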
got A/C thanks!
Problem G is very nice and interesting! Below is my explanation of the editorial.
The first part of the solution is trivial; let's discuss the second part. The first solution that came to my mind was binary search, but it costs too many queries.
The editorial uses a different idea, which is similar to a famous problem: "egg dropping problem". The idea is:
Although it's a rough estimate, it seems that you can perform a few more "safe" queries to cover the extra cost. (A very informal proof: after each query, the $$$\Delta$$$ of the "extra cost" drops very fast, so its sum will not be too large.)
So let's define the $$$size$$$ of a problem as $$$R - L + 1$$$, and define $$$dp[x][y]$$$ as the maximum $$$size$$$ of a problem we can solve using $$$x$$$ queries when we have $$$y$$$ "chances" initially. Using the idea above, it's not hard to find that $$$dp[x][y] = dp[x-1][y-1] + dp[x-1][y+1] + 1$$$, which is similar to the "egg drop" problem. The initial state of the dp is $$$dp[0][\ldots] = 0$$$.
We find that $$$dp[49][0] \approx 6 \cdot 10^{13} > 10^{14} - 2^{46}$$$, so it's enough. So we can use up to $$$47 + 49 + x = 96 + x$$$ queries to guess the right $$$M$$$, where $$$x$$$ is the number of extra queries needed to cover the "extra cost". It seems that $$$x \le 2$$$ for the test data of problem G; my solution (110886240) uses at most $$$98$$$ queries.
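To make the numbers concrete, here is a tiny untested C++ sketch that just fills the table from the recurrence above; the boundary $$$dp[x][-1] = 0$$$ (with 0 chances a failed risky query is fatal, so the lower half must be empty) is my assumption:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    const int X = 49;                          // number of queries
    const int Y = X + 2;                       // room on the "chances" axis
    // dp[x][y] = largest interval size solvable with x queries and y chances,
    // per the recurrence above, with dp[x][-1] treated as 0.
    vector<vector<long long>> dp(X + 1, vector<long long>(Y + 2, 0));
    for (int x = 1; x <= X; x++)
        for (int y = 0; y <= Y; y++) {
            long long down = (y == 0 ? 0 : dp[x - 1][y - 1]);
            dp[x][y] = dp[x - 1][y + 1] + down + 1;
        }
    // Entries with x + y <= 49 are exact (the truncation at the top of the y axis
    // never reaches them), which is all dp[49][0] depends on.
    cout << dp[49][0] << "\n";                 // should come out around 6e13, matching the estimate above
}
```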
Thanks a lot for the great explanation!
Here is my code for problem C. I don't know what's wrong: it is not passing pretest 2 (it gives the correct answer for the first few test cases, a wrong answer for some, and then correct answers again for the rest). But when I tried running those test cases individually, all of them gave correct answers. I don't know how that is even possible, since I haven't used any global variables. Can someone help me with this, please?
It seems that your array-reading code has a bug. If ans == false but x > 0, then you break the loop and skip some test data, so when you read the next test case, you may read data that belongs to the previous test case.
How will that give an incorrect answer? While taking the input of all $$$m$$$ arrays, the previous one gets cleared.
I updated the comment: your code skips some test data. If you skip some test data, then when you are solving the next test case, you may read wrong data (it belongs to the previous test case).
Upd: sorry, reading the array is not the buggy part. The real bug is in
if(ans && x > 0) ... else break;
Thanks for the clear editorial (even if it came a little late); the contest is really good in general, I really enjoyed it! :-)
Hope there will be more nice contests like this one on Codeforces.
My solution to E (almost the same as the author's):
For each height $$$h[i]$$$ we find the leftmost border $$$left[i]$$$ and the rightmost border $$$right[i]$$$ such that $$$\min[left[i], i] = h[i] = \min[i, right[i]]$$$. How to do this is well understood; one of the ways is binary search + sparse table (range minimum in $$$O(1)$$$).
Next, we use dynamic programming on a segment tree. In the beginning all $$$dp$$$ values equal $$$-10^{18}$$$, and $$$dp[1] = b[1]$$$; then we can update all $$$dp$$$ values on the segment from $$$1$$$ to $$$right[1]$$$, because the minimum on this segment is $$$h[1]$$$. Going further, to calculate $$$dp[i]$$$ we know that on the segment from $$$left[i]$$$ to $$$i$$$ the minimum is $$$h[i]$$$, i.e. no matter how we choose the last subsegment there, its minimum will be $$$h[i]$$$. So $$$dp[i] = \max(dp[i], get\_max(left[i] - 1, i - 1) + b[i])$$$, and do not forget to update all $$$dp$$$ from $$$i$$$ to $$$right[i]$$$: for $$$j = i, i+1, \ldots, right[i]$$$ set $$$dp[j] = \max(dp[j], dp[i])$$$; this can be done with a lazy push.
Final complexity: $$$O(n \log n)$$$. My code, for more details: 110848133
Heh, I feel like problem E from ARC 115 (contest that took place ~2 hours before this one) is very similar to problem E here.
I used an even simpler approach to C. Do two passes through the data. On the first pass only look at those days where there is only one friend available, and choose that friend, counting the number of times each friend has been chosen. If any friend is chosen more than $$$\left\lceil \frac{m}{2}\right\rceil $$$ times then there is no solution. Otherwise, do a second pass through the remaining days, still keeping count, and choosing the first available friend who hasn't been chosen $$$\left\lceil \frac{m}{2}\right\rceil $$$ times.
This will always give a solution since at most one friend can have been chosen $$$\left\lceil \frac{m}{2}\right\rceil $$$ times.
My code is here.
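For concreteness, here is a minimal untested C++ sketch of this two-pass greedy (the helper name pickFriends and the demo in main are mine; judge I/O is omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;

// days[d] lists the friends available on day d (1-based friend indices).
// Returns one friend per day, or an empty vector if it is impossible.
vector<int> pickFriends(int n, const vector<vector<int>>& days) {
    int m = days.size();
    int cap = (m + 1) / 2;                           // ceil(m / 2)
    vector<int> cnt(n + 1, 0), choice(m, 0);
    // Pass 1: days with a single available friend are forced.
    for (int d = 0; d < m; d++)
        if (days[d].size() == 1) {
            choice[d] = days[d][0];
            if (++cnt[choice[d]] > cap) return {};   // a forced friend exceeds the cap
        }
    // Pass 2: for the remaining days, take any available friend below the cap.
    // At most one friend can ever sit at the cap, and these days offer >= 2 friends,
    // so a friend below the cap always exists.
    for (int d = 0; d < m; d++) {
        if (choice[d]) continue;
        for (int f : days[d])
            if (cnt[f] < cap) { choice[d] = f; cnt[f]++; break; }
    }
    return choice;
}

int main() {
    // Tiny demo: 2 friends, 3 days.
    vector<vector<int>> days = {{1}, {1, 2}, {1, 2}};
    vector<int> res = pickFriends(2, days);
    if (res.empty()) cout << -1 << "\n";
    else { for (int f : res) cout << f << ' '; cout << "\n"; }   // prints 1 1 2
}
```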
Another way to look at the solution of F: It's somewhat obvious that you're going to do Floyd-Warshall. However, after looking at the problem a bit, one can "spot some similarities" between the first and the second part of the input, with the difference being, that in the second part of the input, the values you are given are values over the paths, whereas in the first part, these are values over edges. With Floyd-Warshall, we can calculate the "first values" over paths aka distances, so why shouldn't we try to apply some sort of a "reversed Floyd-Warshall" on the "second values"? The loop you get is over $$$i, j, k$$$ to set $$$snd[i][j] = max(snd[i][j], snd[i][k] - dist[j][k])$$$. The only problem left is, at least I wasn't sure in which order to loop over $$$i, j, k$$$. I experimented a bit with the order and repeated the procedure three times (because why should this not work?). Then, since we have the "second values" over the edges, we can just compare them with the edge lengths.
I just found the aspect of "reversing Floyd-Warshall" too intriguing not to share it. Also, the code is quite short.
That's a funny paper XD. I like this idea a lot, and so I tried to understand why it worked. Turns out my proof is a bit unlike Floyd Warshall.
As you did in your code, we first use Floyd-Warshall to get the all-pairs distance matrix $$$d[i][j]$$$, the minimum distance from $$$i$$$ to $$$j$$$, and then aim to create an array $$$M_3[i][j]$$$, where $$$M_3[i][j]$$$ represents the maximum weight that the edge $$$i\to j$$$ can have while still being useful. First, initialize $$$M[i][j]$$$ according to the input.
Then consider paths $$$i\to j\to k$$$. We can compute $$$M_2[i][k] = \max_{j} M[i][j] - d[k][j].$$$
After this, $$$M_2[i][k]$$$ is the maximum weight that the edge $$$i\to k$$$ can be to be on a useful path which uses a useful edge $$$i\to j$$$ first, then goes on the shortest path to $$$k$$$.
Note: Test cases let something like this pass (https://mirror.codeforces.com/contest/1483/submission/111041214), but I think this is just a fluke (for example, swapping the $$$i$$$ and $$$k$$$ indices doesn't pass, but there's not really a difference conceptually as far as I can tell). No proof though.
In any case, we can complete the proof with another loop, now considering paths $$$s \to i \to k$$$: $$$M_3[s][k] = \max_{i} M_2[i][k] - d[s][i].$$$
This is the maximum weight that the edge $$$s\to k$$$ can be to first go on the shortest path $$$s\to i$$$, then (using what we know about $$$M_2$$$) go on a path $$$i\to j \to k$$$ with $$$i\to j$$$ useful.
https://mirror.codeforces.com/contest/1483/submission/111061937
This runs really fast and is quite simple. It may even be possible to improve the time complexity of the max loop computing $$$M_2$$$ and $$$M_3$$$ with some convex hull stuff. However, because the computation of $$$d$$$ is already cubic, this won't improve the overall time complexity.
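To make the two max-loops concrete, here is an untested $$$O(n^3)$$$ sketch of the $$$d$$$ / $$$M_2$$$ / $$$M_3$$$ computation described above (the struct and function names and the tiny demo are mine, not taken from the linked submissions):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = (ll)2e17;

struct Edge  { int a, b; ll w; };
struct Query { int u, v; ll l; };

// Counts useful edges: edge (a, b, w) is useful iff w <= M3[a][b].
int countUsefulEdges(int n, const vector<Edge>& edges, const vector<Query>& queries) {
    vector<vector<ll>> d(n + 1, vector<ll>(n + 1, INF));
    for (int i = 1; i <= n; i++) d[i][i] = 0;
    for (const Edge& e : edges) {
        d[e.a][e.b] = min(d[e.a][e.b], e.w);
        d[e.b][e.a] = min(d[e.b][e.a], e.w);
    }
    for (int k = 1; k <= n; k++)                       // all-pairs shortest paths
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                d[i][j] = min(d[i][j], d[i][k] + d[k][j]);

    vector<vector<ll>> M(n + 1, vector<ll>(n + 1, -INF));
    for (const Query& q : queries) {                   // budgets between query endpoints
        M[q.u][q.v] = max(M[q.u][q.v], q.l);
        M[q.v][q.u] = max(M[q.v][q.u], q.l);
    }
    // M2[i][k]: best leftover budget for an edge leaving query endpoint i towards k,
    // if afterwards we walk the shortest path from k to the other endpoint j.
    vector<vector<ll>> M2(n + 1, vector<ll>(n + 1, -INF));
    for (int i = 1; i <= n; i++)
        for (int k = 1; k <= n; k++)
            for (int j = 1; j <= n; j++)
                M2[i][k] = max(M2[i][k], M[i][j] - d[k][j]);
    // M3[s][k]: best budget for the edge s -> k on a path i ~> s -> k ~> j.
    vector<vector<ll>> M3(n + 1, vector<ll>(n + 1, -INF));
    for (int s = 1; s <= n; s++)
        for (int k = 1; k <= n; k++)
            for (int i = 1; i <= n; i++)
                M3[s][k] = max(M3[s][k], M2[i][k] - d[s][i]);

    int cnt = 0;
    for (const Edge& e : edges)
        if (e.w <= M3[e.a][e.b]) cnt++;                // M3 is symmetric, so one check suffices
    return cnt;
}

int main() {
    // Tiny demo: triangle 1-2-3; only the two short edges are useful for the query (1, 3, 2).
    vector<Edge>  edges   = {{1, 2, 1}, {2, 3, 1}, {1, 3, 5}};
    vector<Query> queries = {{1, 3, 2}};
    cout << countUsefulEdges(3, edges, queries) << "\n";   // prints 2
}
```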
"Otherwise the modulo has to equal their sum $$$c + (m - c) = m$$$."
What does it mean? Can anyone help me? ($$$c + m - c = m$$$, but how can we find the modulo from this?)
When you have the positive difference $$$c$$$ and the negative difference $$$m - c$$$, the modulo $$$m$$$ is simply the sum of the two, as noted: $$$c + (m - c) = m$$$.
Did you mean to ask about $$$c$$$?
Isn't m the modulo here?
Yes, as I also mentioned in my comment.
I was trying to figure out what exactly they tried to ask about since they copied the exact part that showed how to find $$$m$$$. ("Did you mean to ask about $$$c$$$ instead?")
Wow, I just realized that Gustaw can choose $$$X$$$ greater than the number of euros he currently has in D1E ...
And in particular he can steal money even if he has 0 euros
It's amazing that 1C and ARC115 E (just about 2 hours before this contest) need similar tricks to improve the DP. I suggest that those who are trying to solve 1C solve both of them together; I tried ARC115 E and felt it was great for my DP skills. :)
For Problem C, I try to maintain a struct (value, las).
The 'las' field records where the friend was last chosen, and I think it provides the ability to undo the previous operation (when someone exceeds $$$\lceil \frac{m}{2} \rceil$$$). Through repeated undo operations, I think that if a valid assignment exists, it can be found.
But I get wrong answer on test 4 and I can't find the hack data. Here is my code: Code.
My English is not good... very sorry...
In div2F/div1D, can anyone explain these lines (mentioned in the editorial) in detail?
"It can be done using Dijkstra in O(n^2) if we initialize the distance to all ui with -li. After we are done for all vertices v, there only remains to check for each edge whether it has been marked useful for any vertex v."
How to solve problem C with flows (it is in the tags)? Has anyone solved C with flow? Can you please explain how you solved it?
Here is a blog that uses flows: CSDN.
But it is written in Chinese. The key is building a bipartite graph model: the left side is the days, the right side is the friends. I think you can read the code and see why it can be solved with flows.
But the complexity does not seem to be correct.
Please update problem ratings.
Can someone please explain this in more detail? Thank you!
KAN
Indeed, some part of the solution was missing. Now it is updated (check that paragraph and the one before that). Also you may find this explanation helpful.
In problem C's editorial, it is written that:
How can we prove this?
Let $$$f$$$ be the only friend who can play on more than $$$\lceil \frac{m}{2} \rceil$$$ days; then we choose him more than $$$\lceil \frac{m}{2} \rceil$$$ times. Otherwise, we can choose $$$f$$$ on at most $$$\lceil \frac{m}{2} \rceil$$$ days and other people on the other days.
For problem C, you could also keep, for each day, the set of friends available on that day, sort the days by fewest available people, then greedily pick any friend who isn't at the cap yet, increasing someone's count when picked.
I had a little trouble convincing myself that it should work though. Did someone else do it this way?
I am also doing it a similar way, except that instead of choosing arbitrarily I choose the friend who has been selected the fewest times. I am getting a runtime error for some reason, so I am not able to verify :(
accepted now
Can you tell me what you did to resolve the runtime error? I am also getting a runtime error: my code sometimes passes the same tests and sometimes does not (segmentation fault) when I repeatedly run the same input.
147246831: that is how I think about it, but I choose the person who has the maximum frequency.
For the last problem, I submitted something that's quite obviously at least O(number of all pairs satisfying just the first two conditions). It passed easily. Is there a clear general small upper bound on this number?
In problem E (Skyline Photo), can someone explain why I'm getting WA on test case 7?
KAN, how to do problem D using DSU?
I solved problem D with DSU. Here is the code: https://mirror.codeforces.com/contest/1484/submission/140649770
Problem H can also be solved with a suffix array.
like this
Can someone tell me what is wrong in my submission 283135877?