My solutions:
How do you use so many functions and manage to get under the TLE?
The time complexity of your code has little to do with the number of functions you use. You should look up basic algorithm complexity analysis (say, an intro to Big O notation) to learn how to analyze the runtime complexity of a program.
Sorry to disturb you, but could you explain how the LCT in problem C works? I do not understand your solution.
First I compute the longest suffix of edges that forms a bipartite graph, and then I try to add edges from the prefix and remove edges from the suffix to compute the array $$$Last$$$ (the same as $$$last[]$$$ in the tutorial). But it doesn't work, because some cycles are ignored (maybe among them there is a cycle of odd length).
Thank you very much.
First I concatenate the edges array with itself, doubling its length. For each $$$l\in [0,M-1]$$$ I compute the max $$$r$$$ such that edges $$$l\ldots r$$$ are bipartite. Maintain the maximum spanning tree of edges $$$l\ldots r$$$ (where the weight of an edge is just its index).
You should check the editorial for the other problem I linked if what I said about maintaining the MST doesn't make sense. :)
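In case the window being computed is unclear, here is a deliberately slow reference sketch (my own code, not the LCT solution described above): it doubles the edge list and, for each $$$l$$$, finds the largest $$$r < l + M$$$ such that edges $$$l\ldots r$$$ form a bipartite graph, re-checking every window from scratch with a BFS 2-coloring. Vertex indices are assumed to be 0-based.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Slow reference: for each l, the largest r (r < l + M) such that the
// graph formed by edges l..r of the doubled edge list is bipartite.
vector<int> maxBipartiteR(int n, const vector<pair<int,int>>& edgesIn) {
    int m = edgesIn.size();
    vector<pair<int,int>> e = edgesIn;
    e.insert(e.end(), edgesIn.begin(), edgesIn.end());   // double the array

    auto bipartite = [&](int l, int r) {                 // 2-color edges l..r
        vector<vector<int>> adj(n);
        for (int i = l; i <= r; i++) {
            adj[e[i].first].push_back(e[i].second);
            adj[e[i].second].push_back(e[i].first);
        }
        vector<int> col(n, -1);
        for (int s = 0; s < n; s++) {
            if (col[s] != -1) continue;
            col[s] = 0;
            queue<int> q; q.push(s);
            while (!q.empty()) {
                int u = q.front(); q.pop();
                for (int v : adj[u]) {
                    if (col[v] == -1) { col[v] = col[u] ^ 1; q.push(v); }
                    else if (col[v] == col[u]) return false;
                }
            }
        }
        return true;
    };

    vector<int> maxR(m);
    for (int l = 0; l < m; l++) {
        int r = l - 1;                                    // empty window is bipartite
        while (r + 1 < l + m && bipartite(l, r + 1)) r++;
        maxR[l] = r;
    }
    return maxR;
}
```

The actual solution maintains this window with a two-pointer scan and the maximum spanning tree (edge weight = index) kept in an LCT, instead of re-checking each window from scratch.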
I know that now. Thank you~
How do we maintain the DSU for C to check whether the graph is bipartite?
Initially set all to the same color.
Let's say we want to add an edge from $$$u$$$ to $$$v$$$. Check whether $$$u$$$ and $$$v$$$ have the same parent (i.e., are already connected). If they do, make sure their colors are different.
If they have different parents, say $$$p$$$ and $$$q$$$ respectively, add an edge from $$$q$$$ to $$$p$$$ in the DSU (parent[q] = p). If their colors are the same, additionally mark that DSU edge as "reverse the color".
When you want to get a node's color, walk up the edges leading to the root, flipping the color each time you cross an edge marked as reversing.
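A minimal sketch of that bookkeeping, assuming 0-based vertices (the naive walk to the root is kept for clarity; in practice you would fold the flip bits into path compression or add union by size):

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU that tracks each node's color relative to the root of its set.
// flip[v] == 1 means "reverse the color when walking from v to parent[v]".
struct BipartiteDSU {
    vector<int> parent, flip;
    BipartiteDSU(int n) : parent(n), flip(n, 0) {
        iota(parent.begin(), parent.end(), 0);
    }
    int find(int v) {                           // root of v's set
        while (parent[v] != v) v = parent[v];
        return v;
    }
    int color(int v) {                          // color of v relative to its root
        int c = 0;
        while (parent[v] != v) { c ^= flip[v]; v = parent[v]; }
        return c;
    }
    // Add edge (u, v). Returns false if the edge closes an odd cycle.
    bool addEdge(int u, int v) {
        int pu = find(u), pv = find(v);
        if (pu == pv) return color(u) != color(v);    // same set: colors must differ
        // Different sets: attach pv under pu, choosing flip[pv] so that
        // u and v end up with different colors.
        flip[pv] = color(u) ^ color(v) ^ 1;
        parent[pv] = pu;
        return true;
    }
};
```

`addEdge` returning false means the new edge would close an odd cycle, i.e. the graph stops being bipartite.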
Look here
In problem C, subtask $$$4$$$, the editorial says $$$l_i \leq 500$$$ but the problem statement says $$$l_i \leq 200$$$. Which variant is correct?
Sorry, it should be 200, I will fix it later.
In problem A, subtask 3: $$$2\sqrt{1000} = 2 \cdot 33 = 66$$$, and $$$66 > 64$$$, the query limit in that problem, so why is $$$2\sqrt{n}$$$ OK? I know that you can apply that solution 3 times and get roughly $$$3\sqrt[3]{n}$$$ queries, which is good enough (it uses around 30 queries), but it is harder to code and involves a lot of case handling. UPD: Sorry, $$$\sqrt{1000} \leq 32$$$.
For Joker subtask 5, I think the post should say $$$B = M \div \sqrt{Q}$$$ (rather than $$$B = M \sqrt{Q}$$$),
otherwise the numbers don’t add up. $$$B$$$ is the block size, and there are $$$M \div B$$$ blocks. The total number of operations is $$$M$$$ per block for the right pointer + $$$QB$$$ in total for the left pointer = $$$M^2 \div B + QB$$$ in total. Equating the two terms so that neither dominates the other gives $$$B = M \div \sqrt{Q}$$$.
And to expand slightly on the algorithm: at the start of each block, initialize DSU with all edges to the left of the block’s leftmost edge. Sort queries by $$$r$$$ in descending order. For each query, add the new edges on the right, then add the appropriate edges on the left, answer the query, and immediately roll back all of those left edges, so that once again the left pointer is at the very start of the block. Don’t bother trying to remove individual left edges (and struggling to do it without also rolling back right edges): just remove them all after each query.
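For concreteness, here is a sketch of the kind of rollback DSU this scheme needs (union by size, no path compression, plus a parity bit per node so odd cycles can be detected; the names are mine). Per query you would add the right-pointer edges permanently, call `snapshot()`, add the left edges, read off the answer (e.g. via `hasOddCycle()`), and then `rollback()` to the snapshot:

```cpp
#include <bits/stdc++.h>
using namespace std;

// DSU with a parity bit per node (for odd-cycle detection) and rollback.
// Union by size, no path compression, so every change can be undone exactly.
struct RollbackDSU {
    vector<int> parent, par, sz;   // par[v]: parity of v relative to parent[v]
    vector<int> ops;               // per added edge: attached root, -1 (no-op) or -2 (odd cycle)
    int odd = 0;                   // how many added edges closed an odd cycle

    RollbackDSU(int n) : parent(n), par(n, 0), sz(n, 1) {
        iota(parent.begin(), parent.end(), 0);
    }
    pair<int,int> find(int v) const {          // (root, parity of v relative to root)
        int p = 0;
        while (parent[v] != v) { p ^= par[v]; v = parent[v]; }
        return {v, p};
    }
    void addEdge(int u, int v) {
        auto [ru, pu] = find(u);
        auto [rv, pv] = find(v);
        if (ru == rv) {
            if (pu == pv) { odd++; ops.push_back(-2); }  // closes an odd cycle
            else ops.push_back(-1);                      // already consistent, nothing to do
            return;
        }
        if (sz[ru] < sz[rv]) { swap(ru, rv); swap(pu, pv); }
        parent[rv] = ru;
        par[rv] = pu ^ pv ^ 1;     // keep u and v in different color classes
        sz[ru] += sz[rv];
        ops.push_back(rv);
    }
    int snapshot() const { return ops.size(); }
    void rollback(int snap) {      // undo all edges added after snapshot()
        while ((int)ops.size() > snap) {
            int x = ops.back(); ops.pop_back();
            if (x == -2) odd--;
            else if (x >= 0) { sz[parent[x]] -= sz[x]; parent[x] = x; par[x] = 0; }
        }
    }
    bool hasOddCycle() const { return odd > 0; }
};
```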
Further, I think it should be possible to use path compression to improve the complexity somewhat, although I’m not sure it will help in practice.
The original reason for the $$$\log N$$$ factor is that path compression is not performed in the DSU because it “is impossible” with rollbacks. Of course, there’s nothing preventing one from implementing rollbacks of path compression, but the question is whether such revertible path compression actually improves efficiency.
I can’t answer that. But for this particular problem, we can still improve the complexity by using path compression in a part of the solution that doesn’t require rollbacks:
Notice that the DSU operations corresponding to the right-hand-side edges are not disturbed by any rollbacks. (When a new block starts, we can rebuild the whole DSU from scratch. It is not necessary to roll back the previous block’s right-hand-side operations.) There are $$$M$$$ unions per block from the right-hand side. We can do those unions with path compression, for a total run-time of $$$M \alpha(N)$$$. Assuming revertible path compression does not help, we can do the left-hand-side operations without further path compression, at $$$\log N$$$ time per operation. The total time is then $$$(M^2 \div B)\, \alpha(N) + Q B \log N$$$ (down from $$$(M^2 \div B + Q B) \log N$$$), and equating the two terms to ensure neither dominates the other gives optimal $$$B = M \div \sqrt{Q \frac{\log N}{\alpha(N)}} \approx M \div \sqrt{Q \log N}$$$ with the grand total run-time $$$\mathrm{O}\left(\mathrm{sort}\ Q + M \sqrt{Q\, \alpha(N) \log N}\right)$$$.
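Spelling out the balancing step in the same notation: setting $$$(M^2 \div B)\,\alpha(N) = Q B \log N$$$ gives $$$B^2 = M^2\,\alpha(N) \div (Q \log N)$$$, i.e. $$$B = M \div \sqrt{Q \log N / \alpha(N)}$$$, and with this $$$B$$$ each of the two terms equals $$$M \sqrt{Q\,\alpha(N)\log N}$$$, which is the grand total quoted above.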
You can do the LHS operations with path compression too, making the complexity $$$M\sqrt Q\alpha(N)$$$.
Ah, the trick is to have the LHS-within-block build a DSU over entire sets from the RHS+previous-LHS-blocks rather than over individual nodes. The RHS sets don’t change while moving the LHS, so the LHS can use a separate DSU tree and can be wiped clean after each query without any granular rollbacks. Nice!
What does LHS mean?
Left-hand side. Or simply left, as opposed to right. In this case, I’m talking about the disjoint-set union formed by the streets whose indices are smaller than those of the patrolled streets.
Thanks, fixed!
Can someone explain the strategy when k is odd, in Problem A? thx
When $$$k$$$ is odd, do a query of length $$$k/2$$$ (integer division), and the problem size becomes at most $$$k/2+1$$$, which is acceptable. BTW the solution has missed a key point: let the function $$$begin$$$ give the start-point index; then $$$begin(n) = n/2 + 1 - begin(n/2)$$$ when $$$n$$$ is even, and $$$begin(n) = n/2 + 2 - begin(n/2+1)$$$ when $$$n$$$ is odd.
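A tiny memoized implementation of that recurrence, in case it helps; the base case $$$begin(1) = 1$$$ is my own inference, but it is consistent with the examples below, giving $$$begin(4) = 2$$$ and $$$begin(7) = 3$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// begin(n): start-point index of the strategy for n colors, per the recurrence above.
// Base case begin(1) = 1 is an assumption consistent with begin(4) = 2 and begin(7) = 3.
map<int,int> memo;
int beginIndex(int n) {
    if (n == 1) return 1;
    auto it = memo.find(n);
    if (it != memo.end()) return it->second;
    int res = (n % 2 == 0) ? n / 2 + 1 - beginIndex(n / 2)
                           : n / 2 + 2 - beginIndex(n / 2 + 1);
    return memo[n] = res;
}

int main() {
    for (int n : {2, 3, 4, 7}) cout << n << " -> " << beginIndex(n) << "\n";
    // 2 -> 1, 3 -> 2, 4 -> 2, 7 -> 3
}
```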
Could you explain more carefully how to use the strategy for $$$k$$$ using colors $$$[1, k]$$$ to construct a new strategy for $$$2k+1$$$ using colors $$$[1, 2k+1]$$$?
It's not using strategy $$$k$$$ alone to construct $$$2k+1$$$; you use both strategy $$$k$$$ and strategy $$$k+1$$$ to construct strategy $$$2k+1$$$.
For example, the strategy for 3 is 2 -> 3 -> 1, and the strategy for 4 is 2 -> 4 -> (1 or 3). Then the strategy for 7 is: we first calculate the start point, $$$begin(7) = 5 - 2 = 3$$$, so we do the query 3 -> 6 first ($$$6 = 3 + 7/2$$$).
If res == 0, then the result is in [4, 7] and the problem reduces to strategy 4, which is 6 -> 1 -> (5 or 7).
If res == 1, then the range is [1, 3], so the problem reduces to strategy 3, which is 6 -> 5 -> 7.
You may refer to my solution: 89211392
Got it. But I still have a question: if res == 0, why won't we use a color twice or more?
"the graph exluding..."——you missed a 'c'.(excluding)