- E: Author: gangsterveggies, Prepared by: gangsterveggies
I have a probabilistic solution for Problem A (optimised): https://mirror.codeforces.com/contest/2068/submission/308681793
E is, supposedly, well known in Romania.
Right, we understood it very early in the mirror and it became a bit of a meme among the judges :D But, we could not really find the source so thanks for sharing. Could you also tell us where the problem appeared? I could not understand it immediately from the link.
We were lucky, because among the Romanian teams participating in the onsite, none sent a submission.
It's from the first contest of the 2022 TST ("Lot")
Hehehe, I saw this exact task at the Russian IOI training camp in 2023, and an even bigger hehehe is that we only qualified for WF thanks to it.
H, when the distances are all the same: https://atcoder.jp/contests/abc135/tasks/abc135_e
Btw A can be solved with $$$4n - 7$$$ votes: 308693499.
This can be proven inductively. WLOG, assume $$$m = \binom{n}{2}$$$. The $$$n=2$$$ case is trivial. For $$$n \ge 3$$$, from the inductive hypothesis, say we have a set $$$V$$$ of $$$4n - 11$$$ votes establishing the desired facts concerning only the candidates $$$1, 2, \ldots, n-1$$$. Now, let $$$S$$$ be the set of candidates who we want to beat $$$n$$$ and $$$T$$$ be the set of candidates who we want to be beaten by $$$n$$$. Let $$$\sigma_S$$$ be any ordering of $$$S$$$ and $$$\sigma_T$$$ be any ordering of $$$T$$$. For $$$2n-6$$$ of the votes from $$$V$$$, make $$$n$$$ the most preferred candidate, and for the remaining $$$2n-5$$$ votes, make $$$n$$$ the least preferred candidate (keeping their ranking among $$$\{1, 2, \ldots, n-1\}$$$ the same). Let $$$\hat{\sigma_S}, \hat{\sigma_T}$$$ denote the reverse of $$$\sigma_S, \sigma_T$$$ respectively.
Now, we add four more votes, with the following rankings:
Interestingly, $$$O(n/ \log{n})$$$ votes suffice and are asymptotically optimal.
Ohhhhh, super cool that it can be done in $$$O(n/\log n)$$$. This Erdos guy must be strong!
We knew about the fact it could be done with $$$O(n)$$$ voters, but we thought it would make the problem harder without really making it more interesting.
I think an easier way to get to $$$O(n)$$$ votes (specifically, $$$2n$$$) is to just color the edges of the graph with $$$n$$$ colors (according to $$$(a+b)\pmod n$$$), and then for each color process all the edges of that color simultaneously.
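A quick sanity check of this trick (a sketch, not the contest solution): on vertices $$$0, 1, \ldots, n-1$$$, two edges of the same color $$$(a+b) \bmod n$$$ cannot share an endpoint, so each color class is a matching and can be processed by one pair of votes.

```python
# Sanity check of the coloring (not the full solution): on vertices 0..n-1,
# edges {a, b} with equal (a + b) % n never share an endpoint, so every
# color class is a matching that one pair of votes can handle at once.
def color_classes(n):
    classes = {c: [] for c in range(n)}
    for a in range(n):
        for b in range(a + 1, n):
            classes[(a + b) % n].append((a, b))
    return classes

def is_matching(edges):
    seen = set()
    for a, b in edges:
        if a in seen or b in seen:
            return False
        seen.update((a, b))
    return True

for n in range(2, 20):
    assert all(is_matching(es) for es in color_classes(n).values())
```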
308721319
It is remarkable that the $$$O(n / \log n)$$$ bound is possible! I would've been shy to propose the problem if I had realized it was solved in a 1964 paper.
Interestingly, I came up with pretty much the same problem as D. Morse Code for another contest some years ago; it got rejected (reason: it appeared in some shortlist a very long time ago, with an $$$O(N^4)$$$ intended solution IIRC).
Later I discovered $$$O(N^2)$$$ solution in a paper.
I know about a paper on the problem (linked from the wiki page about Huffman) but I don't know an $$$O(n^2)$$$ solution. Do you have a reference?
I read in this article
I have two solutions for K: the one we didn't believe in during the onsite (passing in 1930ms on CF) and the one that I think more or less matches the intended:
AC 1900ms: Kuhn on a graph where the left part contains the numbers we are matching to ($$$a[i] \cdot j$$$) and the right part is $$$i$$$. We build the $$$O(n^2)$$$ edges in $$$O(n^2 \log n)$$$ (or with hashtable lookups, but it seems sorting would be faster). Then Kuhn with 2 default optimizations: a timer-based `used[v]` for $$$O(1)$$$ clearing on each successful path found, and `shuffle(all(g[v]), rng)` so we don't go into the same vertex too often and hopefully find a free index to pair with among the first elements of the adjacency list. Overall it should be $$$O(n^3)$$$ as we have $$$n$$$ vertices and $$$n^2$$$ edges, but in reality the $$$O(n^2 \log n)$$$ construction part took most of the time in my TL upsolving attempts. 308915147

AC 1300ms: on a failed path push, remove all future and existing edges to the right-part vertices from which we failed to propagate the path. Each time we fail, we looked at X edges and remove more than X edges, and we succeed $$$n$$$ times, so it's more or less clear why it's $$$O(nE)$$$; it's also fast because most successful steps require much less than $$$E$$$ for the DFS. Here $$$E$$$ should be something close to $$$O(n \log n)$$$, as this is pretty close to the jury's solution (though done a bit backwards). 308914819
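A minimal sketch of Kuhn's algorithm with the two optimizations mentioned (a timer-based `used` array cleared in $$$O(1)$$$ per augmenting search, and shuffled adjacency lists); the graph below is a toy, not the actual construction from the submission:

```python
import random

def kuhn_matching(n_left, adj, rng=random.Random(0)):
    """Kuhn's bipartite matching. Instead of clearing `used` before every
    augmenting-path search, we bump a timestamp (O(1) clear)."""
    match_right = {}            # right vertex -> matched left vertex
    used = [0] * n_left         # timestamp of the last visit to each left vertex
    timer = 0

    def try_augment(v):
        if used[v] == timer:
            return False
        used[v] = timer
        rng.shuffle(adj[v])     # randomize to avoid revisiting the same vertex
        for u in adj[v]:
            if u not in match_right or try_augment(match_right[u]):
                match_right[u] = v
                return True
        return False

    size = 0
    for v in range(n_left):
        timer += 1
        if try_augment(v):
            size += 1
    return size, match_right

# toy graph: left vertices 0..2, right vertices 'a', 'b', 'c'
size, m = kuhn_matching(3, [['a', 'b'], ['a'], ['b', 'c']])
```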
Is it supposed to be this way? The gap between two solutions in terms of how hard it is to come up with is pretty big. The first one really just squeezes in TL tightly. The second one is not so fast itself. Was it the same experience for the onsite participants with 5+ attempts?
Would appreciate some comments in general as well. I am thinking a lot now on this "remove edges for not-pushed vertices" optimization and how useful it could be in other problems
We wanted the second kind of solution to pass; unfortunately, that meant that some very optimized approaches like the first one would pass as well.
Is it true, btw, that the second solution is exactly the same as maxflow from the editorial to the extent where one node is split into several? Instead of left to right it makes the graph right to left, but seems like no extra steps are involved (maybe additional $$$+E$$$)
TBH I still don't get the $$$O(n\log n)$$$ bound on edges so for me both solutions seem like a hyper-optimized cubic one :)
I don't know if Egor can prove the $$$O(n\log n)$$$ bound. As far as I know, the best we can show is $$$O(n\log(n)^2)$$$. The proof is a bit long and the crucial step is contained in this short note
Citing oneself like a true researcher.
I did Kuhn using bitsets instead of adjacency lists just simulating the problem in the stupidest way possible and it passed in 671ms https://mirror.codeforces.com/contest/2068/submission/309010517
We implemented the binary search plus Kuhn's algorithm with standard optimizations in the onsite contest, with one additional observation: you can always match the first occurrence of some $$$a_i$$$ with time $$$a_i$$$, which basically halves $$$n$$$. To our surprise, this got accepted first try although its complexity should be $$$O(n^3 \log n)$$$. I experimented with it a little on Codeforces https://mirror.codeforces.com/contest/2068/submission/309081742 and it's not even close to the time limit (0.5 seconds). Even when disabling one of the optimizations, it still comfortably fits the time limit; multiple optimizations have to be removed for it to fail. I wonder if that was intended :)
OMG it feels like bravery was rewarded in this problem. We got too scared by $$$n = 2000$$$ and completely ignored our binsearch + perfect matching idea. And it even runs faster than my implementation of what I thought to be the same as model solution...
I can see why it runs fast. If everything is the same, you basically get $$$n$$$ edges. If everything is different, you get a perfect matching on the first try. And in between, well, yeah, it halves, and you should also have enough vacant numbers when you take $$$\frac{n}{2}$$$ different $$$a_i$$$'s and generate $$$a_i \times j$$$, as any two generators can't overlap on more than half their occurrences.
What can I say, well-deserved qualification to WF, congrats!
There is an $$$O((n+m) \log (n+m))$$$ solution for E if we treat the edges not in the BFS traversal as taking the minimum along vertical paths, and update all of them offline using binary lifting.
And with the linear memory lifting Blog, you can even make it $$$O(n)$$$ memory, $$$O(n \log n)$$$ time, with a low constant.
I wonder if the algorithm (theoretically) can be optimized to $$$O(n+m)$$$ overall (the dijkstra part probably can be, as all the distances will stay $$$O(n)$$$). But this tree DS part is maybe hard. LCA's can be calculated with some complicated stuff in linear time. There are some $$$k$$$th ancestor algorithms that work in linear build time, and $$$O(1)$$$ queries, but I don't know if they can be augmented to also do the offline range min= path queries.
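The offline "minimum along vertical paths" step can be sketched with binary lifting: record each update lazily on $$$O(\log)$$$ blocks of the lifting table, then push everything down once at the end. This is a toy sketch under the assumption that each update is given as a vertex plus a path length toward the root; the name `offline_path_min` is made up.

```python
def offline_path_min(parent, updates, INF=float('inf')):
    """Offline chmin on vertical (vertex-to-ancestor) paths via binary lifting.

    parent[v] is the parent of v (parent[root] = root). Each update
    (v, length, val) applies val as a candidate minimum to the `length`
    vertices starting at v and walking up. Returns the per-vertex minimum."""
    n = len(parent)
    LOG = max(1, n.bit_length())
    up = [parent[:]]                        # up[k][v] = 2^k-th ancestor of v
    for k in range(1, LOG):
        up.append([up[k - 1][up[k - 1][v]] for v in range(n)])
    # lazy[k][v] = pending minimum over the 2^k vertices going up from v
    lazy = [[INF] * n for _ in range(LOG)]

    for v, length, val in updates:
        for k in range(LOG):                # decompose the path length by bits
            if length >> k & 1:
                lazy[k][v] = min(lazy[k][v], val)
                v = up[k][v]

    # push pending minimums down one level at a time
    for k in range(LOG - 1, 0, -1):
        for v in range(n):
            if lazy[k][v] < INF:
                lazy[k - 1][v] = min(lazy[k - 1][v], lazy[k][v])
                m = up[k - 1][v]
                lazy[k - 1][m] = min(lazy[k - 1][m], lazy[k][v])
    return lazy[0]

# toy path 0 <- 1 <- 2 <- 3 <- 4: update 3 vertices up from 4 with 5,
# then 2 vertices up from 2 with 1
result = offline_path_min([0, 0, 1, 2, 3], [(4, 3, 5), (2, 2, 1)])
```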
Why is $$$E = O(n \log n)$$$ in problem K?
In the editorial for problem E, in the part for "Computing f assuming we know g" it says:
I don't think this works on the first sample.
The procedure would do the following:
Initialize $$$f$$$ as $$$f[1] = 3, f[2] = 4, f[3] = 3, f[4] = 4, f[5] = 0$$$
Pick $$$v = 5$$$ and set $$$f[5] = max(g[5], min(1 + f[2], 1 + f[4])) = 5$$$
Pick $$$v = 1$$$ and set $$$f[1] = max(g[1], min(1 + f[2], 1 + f[3])) = 4$$$
...
But the correct value of $$$f[1]$$$ should be 5. Am I misunderstanding something?
I got AC by modifying the procedure to:

Set $$$f[n]=0$$$ and $$$f[v]=g(v)$$$ for all other $$$v$$$, and set $$$n$$$ active. Now process each vertex in the following way. Pick the unprocessed active vertex $$$v$$$ with the lowest current value of $$$f[v]$$$. Process $$$v$$$ by iterating through all inactive vertices $$$u$$$ adjacent to $$$v$$$, setting $$$\mathbf{f[u]=\max(g[u],1+f[v])}$$$ and setting $$$u$$$ active.

You're right, there is a typo, thanks for pointing it out! The vertex being updated should be $$$u$$$ and not $$$v$$$, like you pointed out in the modified procedure (otherwise this isn't compatible with the proof of the lemma). I believe the rest can stay the same. Your modified version is correct, but unless I am missing something, the initial version can be fixed if we just fix the update step.
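The modified procedure is essentially Dijkstra with the relaxation $$$f[u] = \max(g[u], 1 + f[v])$$$, where each vertex is activated exactly once. A toy sketch (the graph, the $$$g$$$ values, and the name `compute_f` are invented for illustration):

```python
import heapq

def compute_f(n, adj, g):
    """Dijkstra-like sketch of the corrected procedure: f[n] = 0, every
    other vertex starts inactive; repeatedly take the active unprocessed
    vertex v with smallest f[v] and, for each inactive neighbour u, set
    f[u] = max(g[u], 1 + f[v]) and activate u. Vertices are 1..n."""
    f = {n: 0}                  # a vertex is "active" once it appears in f
    heap = [(0, n)]
    while heap:
        fv, v = heapq.heappop(heap)
        for u in adj[v]:
            if u not in f:      # u is still inactive: activate with its value
                f[u] = max(g[u], 1 + fv)
                heapq.heappush(heap, (f[u], u))
    return f

# toy graph on vertices 1..4 with target vertex n = 4
adj = {4: [2, 3], 2: [4, 1], 3: [4, 1], 1: [2, 3]}
g = {1: 5, 2: 1, 3: 3, 4: 0}
f = compute_f(4, adj, g)
```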
E is a really beautiful problem, thank you! I found an easier way to calculate $$$g(v)$$$ that doesn't need DSU or the lazy propagator (though I don't understand what exactly it does in the editorial).
If we want to use a non-tree edge $$$(u,v)$$$ at some node $$$w$$$, then exactly one of $$$u$$$ and $$$v$$$ must be in the subtree of $$$w$$$, which is equivalent to saying that $$$w$$$ is in the subtree of $$$lca(u,v)$$$, but is not the LCA itself, and at least one of $$$u$$$ and $$$v$$$ is in $$$w$$$'s subtree. If we denote by $$$level[]$$$ the distance to $$$n$$$, then the path using the non-tree edge $$$(u,v)$$$ from $$$w$$$ will have length $$$level[u] + level[v] + 1 - level[w]$$$. Now we can store the pair $$$(level[u] + level[v] + 1, \text{edge index})$$$ in the sets for both $$$u$$$ and $$$v$$$. Just like in the editorial, we calculate $$$g(v)$$$ using DFS, doing smaller-to-larger merging. When merging, if $$$(value, index)$$$ is already contained in the other set, it means we are at the LCA, so we delete it instead; otherwise we insert it. After processing all children, $$$g(v)$$$ will be the minimum in $$$v$$$'s set minus $$$level[v]$$$.
Implementation
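A toy sketch of the merging scheme described above (the tree, the levels, and the helper name `compute_g` are all invented for illustration; in the actual problem, $$$level$$$ is the distance to $$$n$$$, here it is just the depth):

```python
def compute_g(n, children, level, extra_edges):
    """Small-to-large merging with LCA cancellation. Each non-tree edge
    (u, v) with index idx contributes (level[u] + level[v] + 1, idx) to the
    sets of u and v. DFS merges child sets smaller-to-larger; a pair seen a
    second time cancels (we reached the edge's LCA). For each vertex w,
    g[w] = min value in w's set minus level[w], or None if the set is empty."""
    sets = [set() for _ in range(n + 1)]
    for idx, (u, v) in enumerate(extra_edges):
        pair = (level[u] + level[v] + 1, idx)
        sets[u].add(pair)
        sets[v].add(pair)
    g = [None] * (n + 1)

    def dfs(w):
        for c in children[w]:
            dfs(c)
            if len(sets[c]) > len(sets[w]):     # smaller-to-larger: swap first
                sets[w], sets[c] = sets[c], sets[w]
            for pair in sets[c]:
                if pair in sets[w]:
                    sets[w].remove(pair)        # both endpoints merged: LCA
                else:
                    sets[w].add(pair)
        if sets[w]:
            g[w] = min(sets[w])[0] - level[w]

    dfs(1)
    return g

# toy tree rooted at 1: edges 1-2, 2-3, 2-4; one non-tree edge (3, 4)
children = {1: [2], 2: [3, 4], 3: [], 4: []}
level = [0, 0, 1, 2, 2]          # depth of each vertex (index 0 unused)
g = compute_g(4, children, level, [(3, 4)])
```

Note that the pair for edge $$$(3,4)$$$ cancels exactly at vertex 2, the LCA, so $$$g$$$ stays undefined there and above.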
Could someone explain Problem H to me more clearly? I cannot understand the editorial very well. Please!
Can someone explain the DP solution for problem J used in this submission?
What's up with the checker of problem I?
I have some submission 317737217 (which still has some bugs), but to me the output of the program seems clearly correct on the third testcase (remove the wall above the start immediately and roll the bar towards it).
Why is it graded as "Wrong answer Ball did not escape"?
pinging bicsi