GP of Zhejiang is over... Let's discuss the solutions. How to solve C?
Is there an editorial? jqdai0815
Is there any way to reduce computing the partition function to counting the number of perfect matchings?
Or is this not the correct way at all? Anyway, how to solve F?
F:
For a matching, we can add auxiliary edges between vertices $$$2i-1$$$ and $$$2i$$$. Then the graph becomes disjoint cycles and paths. Since $$$2i-1$$$ and $$$2i$$$ must be in the same set, there are at most $$$2^{n/2}$$$ sets. For each set, we can compute the number of paths and cycles in $$$O(2^{n/2} n^2)$$$, and then merge them. The latter part can also be done in $$$O(2^{n/2} n^2)$$$, but $$$O(3^{n/2})$$$ is good enough.
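For readers wondering where the $$$O(3^{n/2})$$$ comes from, here is a minimal sketch (not the author's code) of the standard submask-enumeration loop that performs the merge; the combine step is only a placeholder, since it depends on the per-set path/cycle counts described above.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int m = 10;                      // m = n/2 groups; the value is just for illustration
    long long pairs = 0;
    for (int mask = 1; mask < (1 << m); ++mask)
        for (int sub = mask; sub > 0; sub = (sub - 1) & mask)
            ++pairs;                 // here one would combine dp[mask ^ sub] with the
                                     // path/cycle counts of `sub` (placeholder step)
    cout << pairs << '\n';           // equals 3^m - 2^m, hence the O(3^{n/2}) bound
}
```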
C:
Consider a subtree DP: dp[v][x] = (the maximum sum of distances when selecting x vertices in the subtree under v).
When updating dp[v] with dp[children of v], these facts are important:
So you can calculate this DP with a BST (or a priority_queue) in $$$O(N \log^2 N)$$$ time.
Could you elaborate on this?
Actually, we should maintain dp[v][x+1]-dp[v][x] instead of dp[v][x].
If we do so, the update becomes as follows:
After that, we just merge them and we are done. (We must use the small-to-large technique.)
This is my code (not well written, though): https://ideone.com/AmLhRp
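To illustrate just the small-to-large merging (a minimal sketch with made-up values, not the code linked above; the problem-specific update of the difference values is omitted since it is not spelled out here):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Merge the difference multiset of a child into its parent, always inserting
// the smaller multiset into the larger one; each element is moved O(log n)
// times, which matches the O(N log^2 N) bound mentioned above.
void merge_into(multiset<long long>& parent, multiset<long long>& child) {
    if (parent.size() < child.size()) parent.swap(child);
    for (long long d : child) parent.insert(d);
    child.clear();
}

int main() {
    multiset<long long> a = {5, 3, 1}, b = {4, 2};   // hypothetical difference multisets
    merge_into(a, b);
    for (long long d : a) cout << d << ' ';          // prints: 1 2 3 4 5
}
```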
The intended solution utilizes the combinatorial structure of the answer much more. Maybe I should have worked harder before the contest to come up with a version that asks for the answer for all $$$k=1\dots n$$$.
Here is a sketch of the intended solution.
The intended solution is centroid decomposition. Find the $$$k$$$ farthest vertices from the centroid. If some subtree contains more than $$$\frac{k}{2}$$$ of them, move to this subtree and do the same thing there. However, this only works when $$$k$$$ is even. For the odd case, we can prove that the optimal solution is the optimal solution for $$$k-1$$$ with one more vertex.
Time complexity is $$$O(n\log^2 n)$$$. We can improve it to $$$O(n\log n)$$$ using nth_element.
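As a tiny illustration of the selection step (hypothetical values, not the intended solution's code), std::nth_element picks the $$$k$$$ largest distances in linear time without sorting:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<long long> dist = {7, 1, 9, 4, 6, 2};   // hypothetical distances from the centroid
    int k = 3;
    nth_element(dist.begin(), dist.begin() + k, dist.end(), greater<long long>());
    // dist[0..k-1] now hold the k largest distances, in arbitrary order
    for (int i = 0; i < k; ++i) cout << dist[i] << ' ';   // 9 7 6 in some order
}
```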
yep, the contest was too easy this way, you should have asked for all 1..n
Sorry, I do not quite understand.
You said that if there is a subtree containing more than $$$k/2$$$ of the $$$k$$$ farthest vertices, we then move to the subtree and do the same thing. Do you mean we still find the $$$k$$$ farthest vertices, or do we use other parameters instead of $$$k$$$? And I also wonder how we use the answer for the subtree to help with the original tree. Could you please elaborate on this? Thanks.
If we fix the centroid $$$u$$$ of the chosen vertices, we can find the optimal answer by a simple greedy; denote it by $$$f(u)$$$.
We can prove that when $$$k$$$ is even, $$$f(u)$$$ is something like a convex function. So we can move to a subtree where the derivative is less than $$$0$$$.
You can read the solution of this problem as a good start.
Code
I managed to upsolve this problem! Thank you for the solution and also for the reference to the example problem (it really helped me understand the solution to this problem)!
How to solve G, E, H, B, J?
Our solution for E required too much casework; is it possible to avoid it?
In G, using hill climbing we managed to get a solution that satisfied the distances among the points, but it was not coplanar enough (the best we achieved was 5e-18).
In G, I first set the base triples as (999990,999990,0), (-999990,0,999990) and (0,-999990,-999990). Then I try to adjust those 9 coordinates in the range [x-10, x+10]. Several solutions can be reached when my program runs for about 5 minutes.
Your local checker should be implemented carefully, or precision errors will cause misjudgments.
H: Assume at some point we add $$$l$$$ elements to the left and after that $$$r$$$ elements to the right. When is it better to change the order and add the $$$r$$$ right elements first and the $$$l$$$ left elements after that? When the average of the right elements is less than the average of the left elements. So let's think geometrically: draw two polylines. The first one contains the vectors $$$(1, a[i])$$$ for all elements to the left of the start, in the order in which they are added, and similarly for the elements to the right. Compute the lower convex hull of both polylines. Each edge of a hull corresponds to a group of elements that will always be added together. Now all we need to do is to calculate, for each edge of the first convex hull, the number of edges in the second convex hull which are greater than it, and vice versa. When we change the starting point, both convex hulls and the answer can be easily maintained with stacks. Code.
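For concreteness, here is a minimal sketch (with hypothetical values, not the linked code) of one building block: maintaining the lower convex hull of the prefix polyline with a stack.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef pair<long long, long long> Pt;

long long cross(const Pt& o, const Pt& p, const Pt& q) {
    return (p.first - o.first) * (q.second - o.second) -
           (p.second - o.second) * (q.first - o.first);
}

int main() {
    vector<long long> a = {3, -1, 4, 1, -5};   // hypothetical elements on one side
    vector<Pt> hull = {{0, 0}};                // stack of hull vertices, starts at (0, 0)
    long long x = 0, y = 0;
    for (long long v : a) {
        ++x; y += v;                           // next prefix point (k, a[1] + ... + a[k])
        // pop while the last vertex lies on or above the new edge
        while (hull.size() >= 2 && cross(hull[hull.size() - 2], hull.back(), {x, y}) <= 0)
            hull.pop_back();
        hull.push_back({x, y});
    }
    // each remaining edge corresponds to a maximal group of elements added in one go
    for (const Pt& p : hull) cout << "(" << p.first << ", " << p.second << ") ";
}
```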
B:
For each query of the second type, add an edge between $$$u_p$$$ and $$$v_p$$$ with weight equal to the query index. Now the answer to a query of the first type with query index $$$T$$$ is equal to "how many vertices are reachable from vertex $$$u$$$ if we can move only along edges with increasing weights and with weights $$$\le T$$$" (it is good to think about weights as time).
This can be solved using centroid decomposition. Assume we found the centroid $$$r$$$. Let's root the tree at $$$r$$$. For each query at $$$u$$$, we will count only the vertices $$$v$$$ such that the simple path from $$$u$$$ to $$$v$$$ passes through the centroid.
For each vertex $$$v$$$, compute $$$travel_v$$$, the minimum time required to reach the centroid from vertex $$$v$$$, and for each edge from $$$p_v$$$ to $$$v$$$, find the maximum time at which we can leave the centroid and still reach vertex $$$v$$$ through this edge.
Now for a query at $$$u$$$ with time $$$T$$$, the answer is the number of vertices in our tree (excluding the subtree containing $$$u$$$) that have a path which starts at the centroid after time $$$travel_u$$$ and reaches that vertex before time $$$T$$$. It can be done with a sweeping technique. Complexity $$$O(N \log ^2 N)$$$.
J: take the smallest integer $$$q$$$ so that $$$2^q \ge |S|$$$ and consider the finite field $$$\mathbb{F}_{2^q}$$$. Now, for each integer $$$i$$$ from $$$0$$$ to $$$|S| - 1$$$, compute $$$i^3$$$ in this field (call the result $$$f(i)$$$) and output $$$i \cdot 2^q + f(i)$$$ (computed normally in $$$\mathbb{Z}$$$).
Since $$$f(i) \in \mathbb{F}_{2^q}$$$, we have $$$f(i) < 2^q$$$, and therefore $$$i \cdot 2^q + f(i) < |S| \cdot 2^q < |S| \cdot 2|S| = 2|S|^2 \le n$$$.
Now, assume $$$a_{i_1} \oplus a_{j_1} = a_{i_2} \oplus a_{j_2}$$$. It means that $$$i_1 \oplus j_1 = i_2 \oplus j_2$$$ (or, in $$$\mathbb{F}_{2^q}$$$, simply $$$i_1 + j_1 = i_2 + j_2$$$) and $$$i_1^3 - j_1^3 = i_2^3 - j_2^3$$$. As $$$x^3 - y^3 = (x-y)(x^2 + xy + y^2)$$$ (and $$$y = -y$$$ within this field), and $$$i_1 - j_1 = i_2 - j_2 \neq 0$$$, we have that $$$i_1^2 + i_1j_1 + j_1^2 = i_2^2 + i_2j_2 + j_2^2$$$. Then, $$$(x + y)^2 = x^2 + y^2$$$ within the field, so from $$$i_1 + j_1 = i_2 + j_2$$$ we also get $$$i_1^2 + j_1^2 = i_2^2 + j_2^2$$$; subtracting, we obtain $$$i_1j_1 = i_2j_2$$$.
Let $$$s := i_1 + j_1$$$, $$$p := i_1j_1$$$. Now, each of the values $$$i_1$$$, $$$j_1$$$, $$$i_2$$$, $$$j_2$$$ satisfies the polynomial equation $$$x(s - x) = p$$$. This polynomial has degree 2, so it has at most 2 roots in the field. As $$$i_1 \neq j_1$$$, $$$i_2 \neq j_2$$$ and $$$\{i_1, j_1\} \neq \{i_2, j_2\}$$$, we arrive at a contradiction.
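For concreteness, a minimal sketch of the construction (not the author's code). It assumes an irreducible polynomial of degree $$$q$$$ is already known; here $$$q = 8$$$ and the well-known AES polynomial $$$x^8 + x^4 + x^3 + x + 1$$$ are used purely for illustration, so this instance covers $$$|S| \le 256$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int Q = 8;
const unsigned IRRED = (1u << 8) | (1u << 4) | (1u << 3) | (1u << 1) | 1u; // x^8+x^4+x^3+x+1

// Multiply two elements of F_{2^Q}: carry-less product, then reduce modulo IRRED.
unsigned gf_mul(unsigned a, unsigned b) {
    unsigned long long r = 0;
    for (int i = 0; i < Q; ++i)
        if (b >> i & 1) r ^= (unsigned long long)a << i;
    for (int i = 2 * Q - 2; i >= Q; --i)
        if (r >> i & 1) r ^= (unsigned long long)IRRED << (i - Q);
    return (unsigned)r;
}

int main() {
    int s = 200;                                  // |S|; any value with 2^{Q-1} < s <= 2^Q
    for (unsigned i = 0; i < (unsigned)s; ++i) {
        unsigned f = gf_mul(gf_mul(i, i), i);     // f(i) = i^3 in F_{2^Q}
        cout << ((unsigned long long)i << Q | f) << '\n'; // a_i = i * 2^Q + f(i) < 2|S|^2
    }
}
```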
The explanation is excellent, thank you ...
but it gives me no clue about how on earth do you get to this result. What was your thought process in this problem?
Well, it's quite sad for me to say, but the same idea was used in some local competition in Poland (Wrocław) a few years ago. :( I can't offer much insight on it, apart from it sharing some high-level ideas with the no-three-in-line problem.
This is not the right way of saying this — you should respond "This problem is well known in Poland"
I can try to give some insight on how you can get this by yourself. It is of course pretty hard to think like this, but should make it seem slightly less out of nowhere.
Instead of thinking "what kind of weird construction can give me $$$a_i \oplus a_j \neq a_k \oplus a_l$$$?", think like this: "How can I uniquely restore the unordered pair $$$(i,j)$$$ from $$$f(i,j)$$$?". If you forget for a second about xors and the binary world, in real numbers you can uniquely restore $$$(i,j)$$$ from $$$(i+j, i^2+j^2)$$$. But how can we adapt this to the binary world?
The first thing is that the reals are a nice field (multiplication is important here), so maybe we should think of these binary vectors as a field as well. The correct way of thinking could be to add and multiply them not as numbers corresponding to their binary representations, but as elements of the finite field on $$$2^k$$$ elements for some $$$k$$$ (you know, binary polynomials modulo some irreducible polynomial).
The second thing is that we would like to encode the pair $$$(i+j, i^2+j^2)$$$ into one vector: just concatenate the binary representations of the two parts.
The third thing is that in the characteristic-2 world squares will not work (squaring is linear there, since $$$(x+y)^2 = x^2 + y^2$$$), but cubes will.
For multiplication in the finite field $$$\mathbb{F}_{2^q}$$$, we use an irreducible polynomial of degree $$$q$$$.
I can just brute force through all polynomials, and substitute all values to check for roots (and effectively reducibility), and this takes $$$O(2^{2q})$$$ to find one. In this case it is just $$$O(n)$$$, which fits in the TL.
Is there a faster method for doing this? Or is there a different way to define multiplication efficiently?
In this case it's even easier, you simply take a polynomial at random (afair there's something like 1/q chance you pick a correct one), compute a solution modulo this polynomial, check if it's correct, and repeat until you're lucky.
As for finding the finite field, I don't have a (really) good answer. You can try random candidates, which requires checking $$$O(q)$$$ polynomials on average instead of $$$O(2^q)$$$; you can also test irreducibility by trial division by the polynomials of degree at most $$$q/2$$$ (so that a single test takes $$$O(2^{q/2})$$$ time instead of $$$O(2^q)$$$).
There are some efficient algorithms factorizing the polynomials over $$$\mathbb{Z}_2$$$ (and various computer algebra systems can handle it really fast), but unfortunately I don't know much about them. If someone does and has a fairly simple explanation of any of them, please let us know!
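Here is a sketch of the trial-division test mentioned above (an assumed implementation, not anyone's contest code): a degree-$$$q$$$ polynomial over $$$GF(2)$$$ is reducible iff it has a nontrivial factor of degree at most $$$q/2$$$, so trial-dividing by every polynomial of degree $$$1 \dots q/2$$$ suffices. Polynomials are encoded as bitmasks (bit $$$i$$$ = coefficient of $$$x^i$$$).

```cpp
#include <bits/stdc++.h>
using namespace std;

// Remainder of polynomial a modulo polynomial m over GF(2).
unsigned long long poly_mod(unsigned long long a, unsigned long long m) {
    int dm = 63 - __builtin_clzll(m);             // degree of m
    while (a >= (1ull << dm)) {
        int da = 63 - __builtin_clzll(a);         // degree of a
        a ^= m << (da - dm);                      // cancel the leading term
    }
    return a;
}

bool is_irreducible(unsigned long long p, int q) {
    for (int d = 1; 2 * d <= q; ++d)
        for (unsigned long long f = 1ull << d; f < (2ull << d); ++f)
            if (poly_mod(p, f) == 0) return false;  // f divides p
    return true;
}

int main() {
    int q = 8;
    for (unsigned long long p = 1ull << q; p < (2ull << q); ++p)
        if (is_irreducible(p, q)) {               // first hit for q = 8 is 0x11b,
            cout << hex << p << '\n';             // i.e. x^8 + x^4 + x^3 + x + 1
            break;
        }
}
```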
How to solve div2 problems I and C?
How to solve D? Or in general, how to do lazy updates in a 2D structure (at least partially)? Is it related to this?
It looks like there should be a conditional lower bound (like $$$\Omega(\sqrt{n})$$$) for fully general 2D lazy updates.
The intended solution is D&C and sweep line.
Process $$$N$$$ queries on a $$$\sqrt{N} \times \sqrt{N}$$$ matrix $$$a_{ij}$$$:
A lower bound for this problem is $$$\Omega^*(N\sqrt{N})$$$ (assuming we cannot solve the all-pairs shortest path problem in $$$O(N^{2.999})$$$ time; this conjecture is called the APSP conjecture and is well known).
Under the APSP conjecture, we cannot do matrix multiplication over the (min, +)-semiring in $$$O(N^{2.999})$$$ time. (Moreover, we cannot even check $$$AB=C$$$ for given matrices $$$A, B, C$$$ [Virginia Vassilevska Williams and Ryan Williams, Subcubic Equivalences Between Path, Matrix, and Triangle Problems].)
However, we can calculate the (min, +) product of $$$\sqrt{N} \times \sqrt{N}$$$ matrices using the above problem.
Thanks, you made my day
Wow, I was always interested in whether it's possible to perform such queries in $$$O(n^{0.5 - \epsilon})$$$ time, and the reduction to the famous APSP problem is so simple and beautiful.
You made my day too, that's really cool.
F: the best problem in 2020! (until now).
Btw, the problemset is just gorgeous. I definitely want to see more contests like this. Thanks to the author!
Thanks ko_osaga teacher. Let's wait for Yuhao Du Contest 11.
I also enjoy the problems a lot! Can't wait to upsolve them all.
I hope this is sarcasm.
Div1 guys, could you check out the div2 problems, please? The problems were so good, but I have no idea how to solve them.
http://opentrains.snarknews.info/~ejudge/flg.cgi?sid=fe24381eae9c1260 statements
How to solve L?
How to solve div2 A, C, D, I?
Tutorial slides: http://acm.math.spbu.ru/trains/191215.slides.en.pdf.
Best viewed one per page, otherwise scrolling will be nonsensical.
Feel free to ask more specific questions if the solutions outlined in the slides are not clear.
Thanks for the editorial! Is there a way to upsolve the div2 problems?
I have no idea why for problem C we can't just check a square of size at most 2-3 cells around the point M; won't there certainly be an answer in it?
Consider a circle with center $$$(0, 0)$$$ and radius $$$10^8$$$.
Measure the distance from point $$$(2 \cdot 10^8, 0)$$$ to the points on the circle. If we go to $$$(10^8, 0)$$$, the distance is just $$$10^8$$$.
If we go to $$$(10^8 - \varepsilon, 100)$$$, with $$$\varepsilon$$$ such that the point is on the circle, the distance is very much like $$$10^8$$$ still.
Now consider rotating the picture slightly; then the image of $$$(10^8 - \varepsilon, 100)$$$ may well be closer to an integer point than the image of $$$(10^8, 0)$$$.
What do you mean by rotating? Rotating the point (2 * 1e8, 0) around the center?
Yeah.
But the point M (the intersection of the circle with the line between the given point and the center) will also change, so the square will also change, and then probably the point (1e8 - e, 100) will be the answer?
Alright, my example was a bit off. Here is a more practical one. The idea is the same.
Input:
Output:
See, with a shift up to $$$10^4$$$ (asymptotically), it is still best to arrive at $$$(10^8, 0)$$$, which is then $$$5000$$$ away from the intersection point.
Still can't get it :( It would be great if you could show me on the cell field where I am wrong (if it is possible).
Alright.
Go to https://csacademy.com/app/geometry_widget/, and enter the following:
Click "View all", then use dragging and mouse wheel to zoom in.
You can see that the optimal path, $$$(200, 10) \to (100, 0)$$$, is 5 cells away from the intersection point already.
Oh, Thank you! Got it.
How to solve I (Interesting Game) and K (Knowledge-Oriented Problem)?
jqdai0815?
jqdai0815 pls help D:
Tutorial of I in Chinese
K: Basically, the graph is the Cartesian product of $$$G$$$ and a path graph $$$P_n$$$. The number of spanning trees of $$$G$$$ is $$$\frac{1}{n} \prod_{i=1}^{n-1} \lambda_i$$$, where $$$n$$$ is the number of vertices and $$$\lambda_1, \dots, \lambda_{n-1}$$$ are the eigenvalues of the Laplacian matrix of $$$G$$$ other than the trivial zero eigenvalue.
There is a lemma that the Laplacian eigenvalues of the Cartesian product of $$$G_1$$$ and $$$G_2$$$ are the sums $$$\lambda_i + \mu_j$$$, where $$$\lambda_i$$$ and $$$\mu_j$$$ are the Laplacian eigenvalues of $$$G_1$$$ and $$$G_2$$$ respectively. Let $$$L(G,x)$$$ be the characteristic polynomial of the Laplacian matrix of $$$G$$$. The product of the eigenvalues $$$\lambda_i + \mu_j$$$ can then be viewed as the resultant of $$$L(G_1, x)$$$ and $$$L(G_2, -x)$$$.
Let $$$P_n(x) = L(P_n, x)$$$, $$$Q(x)=L(G,x)$$$. We know $$$\mathrm{res}(P_n, Q) = \mathrm{res} (P_n\bmod Q, Q)$$$. So we only need to compute $$$P_n \bmod Q$$$. There is a linear recurrence for $$$P_n(x)$$$ where $$$P_n=(x+2) P_{n-1}- P_{n-2}$$$, thus it can be computed in $$$O(\log n |Q|^2)$$$.
The characteristic polynomial of a matrix can be computed in $$$O(n^3)$$$, and the resultant can be computed in $$$O(n^2)$$$.
Will an official editorial for all problems be published?