Some stats about the round
Person | A | B | C | D1 | D2 | E |
---|---|---|---|---|---|---|
BucketPotato | 900 | 1200 | 1400 | 1600 | 2000 | 2200 |
lunchbox | 800 | 1300 | 1200 | 2000 | 2400 | 2300 |
lce4113 | 1000 | 1400 | 1400 | |||
lethan3 | 800 | 1000 | 1200 | 1500 | 2000 | |
Apple_Method | 800 | 1200 | 1200 | 2100 | 2500 | 2400 |
izhang | 800 | 1200 | 1400 | 1600 | 2200 | |
EnDeRBeaT | 800 | 1300 | 1500 | 1700 | 2000 | 2200 |
Runtime-Terr0r | 800 | 1200 | 1400 | 1800 | ||
AlperenT | 800 | 1100 | 1400 | 1900 | 2400 | 2200 |
| A | B | C | D1 | D2 | E |
---|---|---|---|---|---|---|
Average | 833 | 1211 | 1344 | 1775 | 2214 | 2260 |
Actual | 800 | 1100 | 1400 | 1700 | 2400 | 2300 |
General Credits
First, a huge thanks to IgorI! Working with inexperienced problemsetters over a 10 hour timezone difference is already difficult enough. But due to a wide range of problems with the round, the preparation process took several months. I personally have nothing but admiration for his patience and commitment throughout the entire time. He worked closely with each one of us to prepare every problem, helping us rewrite statements and redo tests over and over again to make sure they were as good as possible. He hasn't been included in this table, otherwise he would appear under every box.
Problem | Author(s) | Preparation | Editorial |
---|---|---|---|
A | BucketPotato | BucketPotato | BucketPotato |
B | lce4113, BucketPotato | BucketPotato | BucketPotato, izhang |
C | lce4113 | lce4113 | lce4113 |
D | BucketPotato | BucketPotato | BucketPotato |
E | lunchbox | lunchbox | lunchbox, BucketPotato |
Polygon Commits
The stats here may be a bit misleading, as many of the commits made were very minor changes (fixing typos in statements, etc.)
| A | B | C | D1 | D2 | E | Total |
---|---|---|---|---|---|---|---|
IgorI | 23 | 23 | 22 | 35 | 36 | 30 | 169 |
BucketPotato | 46 | 53 | 4 | 85 | 91 | 20 | 299 |
lunchbox | 36 | 36 | |||||
lce4113 | 38 | 38 | |||||
lethan3 | 6 | 1 | 7 | ||||
izhang | 1 | 1 | |||||
codeforces | 2 | 3 | 3 | 4 | 5 | 17 | |
Total | 75 | 80 | 67 | 123 | 131 | 91 | 567 |
Solutions
Problem A
Solution 1 (BucketPotato): 164799099
Solution 2 (BucketPotato): 164799196
Problem B
Solution 1 (BucketPotato): 164798955
Solution 2 (BucketPotato): 164799024
Problem C
Solution (BucketPotato): 164798817
Problem D1
Solution (BucketPotato): 164798699
Problem D2
Solution 1 (BucketPotato): 164798145
super speedy editorial, BucketPotato orzzzzzz
Nice ProblemSet :)
Fast editorial! Why does D2 have an unusual memory limit?
To make the problem harder
(copy-pasted from here)
There is a $$$n \sqrt n$$$ two-pointers memory solution in D2 that we wanted to cut off. We decided it would be fine since all of the correct $$$O(n)$$$ memory solutions (from testers, authors, and the coordinator, including in Python and Java) use less than 1/6 of that memory.
And this two-pointer method can be optimized to linear memory by generating the $$$\sqrt{a_i}$$$ distinct values online (one by one).
That is, instead of preprocessing all $$$\sqrt{a_i}$$$ values, we keep only the current value for each $$$a_i$$$ and compute the next value on demand during the two-pointer sweep. This costs linear memory, since each $$$a_i$$$ has only one current value at a time.
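A minimal Python sketch of this online idea (the function name is mine; this is an illustration of the technique, not anyone's actual submission):

```python
def floors_online(a):
    """Yield the distinct values of a // b for b = 1, 2, ... in
    decreasing order, one value at a time, with O(1) extra memory.
    Instead of precomputing all ~sqrt(a) distinct values up front,
    we advance b lazily -- the 'get the next value online' idea."""
    b = 1
    while b <= a:
        q = a // b
        yield q
        b = a // q + 1  # smallest b' with a // b' < q
    yield 0             # for b > a the floor is 0
```

During the two-pointer sweep each element then keeps just one such generator (one current value), so the total extra memory stays linear.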
You're right. Unfortunately none of the setters or testers came up with this idea :(
Neither did I during the contest :(
Same... I came up with the two-pointer method during the contest and got an MLE on test 7. Then I tried to optimize it by getting the values online but I didn't have enough time :(
Can you explain your 2 pointer method?
We know that floor values can range from 0 to 3000 (from the given constraints).
So I kept a bitset array of size 3001 and mapped every index that can possibly produce a particular floor value.
That is to say, if we can get a floor value of x by dividing the i-th element by some value, then I did bSet[x].set(i, 1).
Then the problem boils down to: find the smallest window of floor values that contains all n indices.
Did that using two pointers.
My submission is a bit messy : 164790795.
Hope this helps :)
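A hedged Python sketch of the approach described above (my own naming; plain sets stand in for the bitsets, and I assume the divisor ranges over 1..k):

```python
def min_diff_window(a, k, max_val=None):
    """Smallest (hi - lo) over windows [lo, hi] of floor values such
    that every index i can reach some value in the window by choosing
    a divisor p in [1, k]."""
    if max_val is None:
        max_val = max(a)
    n = len(a)
    # can[x] = indices i whose a[i] can reach floor value x
    can = [set() for _ in range(max_val + 1)]
    for i, ai in enumerate(a):
        p = 1
        while p <= k:
            q = ai // p
            can[q].add(i)
            if q == 0:
                break
            p = ai // q + 1  # jump to the next distinct floor value
    # two pointers over the value axis: smallest window covering all indices
    cnt = [0] * n
    covered = 0
    best = float('inf')
    lo = 0
    for hi in range(max_val + 1):
        for i in can[hi]:
            if cnt[i] == 0:
                covered += 1
            cnt[i] += 1
        while covered == n:
            best = min(best, hi - lo)
            for i in can[lo]:
                cnt[i] -= 1
                if cnt[i] == 0:
                    covered -= 1
            lo += 1
    return best
```

On A = [4, 4, 8, 16] with k = 2 this returns 4 (the window [4, 8] covers every index).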
thanks
I came up with it outside of the contest and took a few iterations with MLE. In particular, clear() doesn't actually free the memory taken up by a vector which took me some time to realize.
AC Submission: 165728693
This is interesting. I was quite surprised reading the editorial, because my approach was quite different; it's exactly the two-pointers solution you tried to cut off, but I could optimize it to linear memory by getting the values online (as mentioned by fried-chicken). In fact, this was quite easy to write.
However, I still got MLE twice because I didn't know that .clear() doesn't actually deallocate any memory. Fixing that took up a bit of time.
I enjoyed the round! In particular, I want to share a solution for E that involves a Kruskal tree (reachability tree / line tree). Let's say for each $$$i$$$, you give the $$$i$$$'th edge weight $$$i$$$. Then you can find the MST (minimum spanning tree) of the graph. Define $$$f(u, v)$$$ as the maximum weight on the path from $$$u$$$ to $$$v$$$ on the MST. The answer will end up being the max value of $$$f(u, v)$$$ over all $$$u, v \in [l, r]$$$.
When it comes to reachability / max path weight on an MST, we can use a Kruskal tree. In particular, the answer will be the weight of $$$lca(l, l+1, \ldots, r)$$$ in the Kruskal tree. A cool property of lca is that $$$lca(lca(u, v), w) = lca(u, lca(v, w))$$$. So this means we can build a segment tree over all the nodes: each segtree node stores the lca (in the Kruskal tree) of all the leaves in its range $$$[l \ldots r]$$$, and the merge function of this segtree is taking the lca.
The final complexity is $$$O(n \log n + q \log^2 n)$$$ since you have to call lca $$$\log n$$$ times per query.
If you use $$$O(1)$$$ query lca methods (such as euler tour + sparse table) and use another sparse table to query lca of a range, you can achieve $$$O(1)$$$ time per query.
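For readers unfamiliar with the structure, here is a small self-contained Python sketch of building a Kruskal reconstruction tree and answering a range query naively (all names are mine, and the ancestor-set query is for illustration only; the real solution replaces it with a segment tree of LCAs):

```python
def build_kruskal_tree(n, edges):
    """Kruskal reconstruction tree for edges given in order of
    increasing weight (weight = 1-based edge index).  Leaves are
    1..n; each union creates an internal node whose weight is the
    index of the edge that merged the two parts."""
    tree_par = list(range(2 * n))   # tree parent links (never compressed)
    dsu = list(range(2 * n))        # separate DSU parents (compressed)
    weight = [0] * (2 * n)

    def find(x):
        while dsu[x] != x:
            dsu[x] = dsu[dsu[x]]
            x = dsu[x]
        return x

    nxt = n                         # internal nodes get ids n+1, n+2, ...
    for k, (u, v) in enumerate(edges, start=1):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        nxt += 1
        tree_par[ru] = tree_par[rv] = nxt
        dsu[ru] = dsu[rv] = nxt
        weight[nxt] = k
    return tree_par, weight

def min_edges_needed(tree_par, weight, l, r):
    """Index of the first edge after which all of l..r are connected
    (0 if l == r); assumes the edges eventually connect l..r."""
    def chain(x):
        s = {x}
        while tree_par[x] != x:
            x = tree_par[x]
            s.add(x)
        return s
    common = chain(l)
    for v in range(l + 1, r + 1):
        common &= chain(v)
    # weights increase toward the root, so the deepest common
    # ancestor (the LCA of the range) carries the minimum weight
    return min(weight[v] for v in common)
```

Because internal-node weights grow toward the root, the LCA of the whole range is exactly the common ancestor with minimum weight, which is what the query returns.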
Nice comments in your code XD
Number (and weirdness) of comments is proportional to amount of skill issues I have on this problem :( I think I also didn't care about typos at all lol
But that is really slow =_=
I wrote this solution at first but got TLE
164773045
After that I tried to use Heavy-Light Decomposition to get LCA of two nodes and it was 4 times faster =_=
164778738
the answer will be the max weight edge in path from x = lca(l,l+1,l+2...r) to all given nodes ? Am I correct?
In fact, each newly created node represents an edge, and the $$$LCA(l \ldots r)$$$ found when $$$l \ne r$$$ must be a newly created node. In the Kruskal tree, the smaller a node's depth, the later its edge was added to the MST, so the weight of the LCA is the answer.
Here is my code for problem E using LCA to find the maximum weight edge of the path between two consecutive nodes(x,x+1) and then applying a simple seg tree with max query.
I would like to interject to add that there is yet another way to achieve $$$O(\log n)$$$ per query: instead of using RMQ to do LCA, find the "leftmost" and "rightmost" elements (e.g. by DFS entry time) in the range you are LCA-ing. This pair, unique to every set, provably lies on one of the (possibly many) paths that go through the LCA of the given set. So after finding this pair for a segment (using a segment tree), one can simply take their LCA, and that is the answer for the query.
It's also funny because, reading back this comment, I realised one could get $$$O(1)$$$ per query: the "leftmost/rightmost" pair is given by a minimum/maximum value on a segment, so using one RMQ to get the pair and another RMQ for the LCA gives $$$O(1)$$$ per query. The preprocessing remains log-linear, but you could further "optimise" it to linear with a linear-build RMQ (probably worse in practice, considering the constant; the DSU-based approach isn't strictly linear either, but it goes along that line).
You can construct the line tree in $$$O(n \cdot \alpha(n))$$$ time if you have 2 separate roots for each component in the DSU: one that is the root for merging purposes, and one that is the extra node created in the kruskal tree.
Then replace the segment trees with $$$O(n)$$$ build / $$$O(1)$$$ query RMQs to get a $$$O(n \cdot \alpha(n) + q)$$$ solution:
https://mirror.codeforces.com/contest/1706/submission/165315620
Good contest. The difficulty of Problem A is very appropriate, and Problem B and C are very interesting. D1 and D2 are also good. Thank you.
Why would I use memset in Problem B? This resulted in "Time limit exceeded on pretest 2" and a long debugging session. :(
My submission for B that doesn't use DP: 164767537.
For D2, https://mirror.codeforces.com/contest/1706/submission/164790931 logically maintains only n elements in the 'hold' vector, yet MLE's. Any clue if this is because CF counts total memory allocated throughout the execution of the program, or am I actually using more space than I think?
You don't release the memory from hold[start-1]. The memory is still reserved; it doesn't get released just because the vector became empty. You can call hold[start-1].shrink_to_fit() at the end, or swap it with an empty vector, to release it.
For C, in the even case I just checked left- and right-leaning cool buildings; I should have checked all possibilities. This is the closest I've ever been to solving a Div 2 C. Hope I'll solve C in the next Div 2 rounds.
Posting a comment to edit it when problem ratings arrive, so I can say how right/wrong I was
Edit: So yeah, you know how they say: "Humans are very bad when it comes to predicting something". Well, I can confirm that I am, indeed, a human!
Great contest though!
.
Why reposting?
So that people can dislike it ..
I didn't see the $$$t \le 100$$$ constraint in D2, so I only counted the $$$\lfloor a_i/x \rfloor$$$ and $$$\lfloor a_i/x \rfloor + 1$$$ positions, making the complexity strictly $$$O(\sum \sqrt{a_i})$$$.
Please someone explain C
If n is odd → only one scenario: start from index 2 and take every second building up to n-1. Example: 1 2 3 4 5 (we can only raise buildings 2 and 4, as we need to maximize cool buildings).
If n is even → Example: 1 2 3 4 5 6 7 8 9 10. Here the number of cool buildings will be (n-2)/2, and there are many scenarios, like (2,4,6,9) or (3,5,7,9) or (2,5,7,9) and so on.
Common to all these scenarios: exactly one pair of chosen buildings has index difference 3, and all the others have difference 2.
Thanks bro for the explanation I will work it out the rest on my own!
a very beginner question: what does parity mean??? from problem B
odd or even
If two numbers have different parity, it means one is divisible by 2 and the other one isn't. Hope it helps.
Is there a proof that the maximum value X such that A//X >= B is A//B?
For the minimum X such that A//X <= B, the same formula does not work.
floor(A/X) <= B implies A < X*(B+1). Hence X > A/(B+1).
You are absolutely right. Thanks. Don't you know any good article to read about such things ? I have a rating about 1900 but still not good in these.
If my logic is correct $$$\lfloor \frac{A}{X}\rfloor\geq B \iff \frac{A}{X}\geq B \iff \frac{A}{B}\geq X \iff \lfloor \frac{A}{B}\rfloor\geq X$$$.
Doing the same for the second case is not possible because the first "iff" would not be correct if instead of $$$\geq$$$ we had $$$\leq$$$.
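Both bounds discussed above can be checked by brute force; a small illustrative Python snippet:

```python
# Brute-force check of the two bounds discussed above (A, B >= 1):
#   the largest  X with A // X >= B  is  A // B        (needs B <= A),
#   the smallest X with A // X <= B  is  A // (B + 1) + 1.
for A in range(1, 60):
    for B in range(1, 60):
        if B <= A:  # X = 1 works, so a valid X exists
            xs = [x for x in range(1, A + 1) if A // x >= B]
            assert max(xs) == A // B
        xs = [x for x in range(1, A + 2) if A // x <= B]
        assert min(xs) == A // (B + 1) + 1
```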
Alternative solution to E without much thinking: do parallel binary search. To check whether everything between l and r is connected, get the rightmost node R such that everything between l and R is connected, and check whether R >= r.
How does "getting the rightmost node connected to l and checking if it's >= r" work? For example, if node 1 is connected to 5, and we're checking that [1, 3] are connected, this will give a wrong answer.
Sorry, I meant max node R[i] where everything [i, R[i]] is connected. You can calculate it like in DSU with path compression, also you need DSU to check if two nodes are connected.
Why does this pass? I mean complexity-wise.
UPD: nevermind, I understood how, it works amortized.
You can do something similar to calculate $$$f(i)$$$ (as defined in the editorial) so you can skip the MST trickery.
However I still don't get how you can solve the problem using just $$$R[i]$$$. When you mention parallel binary search I guess you mean that for every query (in a parallel fashion) you binary search for $$$k$$$ (the answer to each query). If so how would you handle the DSU updates? I can't think of a way to handle all the dsu simultaneously.
You can sort the queries by the midpoint of the binary search interval, and add the edges that are needed between queries.
I don't really get how this can work.
The way I see it you need to do something along the lines of
Now, we can't explicitly create a new $$$h$$$ in every query, we'd end up with a $$$O(MN)$$$ solution that way.
We could instead add the edges $$$[l, m)$$$ to $$$g$$$, do the check then roll back these changes when traversing further down. This could work out fine however, we need to support the $$$R(i)$$$ which uses path compression so we can't do this either.
Other ideas like traversing down the left subtree first and then adding edges to $$$g$$$ so that we don't need to rollback, don't really work either since we need to partition the queries before traversing any further, which requires $$$h$$$.
Ultimately, I don't really see anyway to make this work without supporting $$$R(i)$$$ query in true $$$O(\log n)$$$ (or better) time.
Got a message explaining the idea, instead of doing a DFS of the search tree instead traverse level by level, traversing each level from left to right. This way, we only ever add edges and don't need to roll back.
Here's the implementation, 164978306.
This does actually give you a way to compute what the editorial describes as $$$f(i)$$$ in $$$O(M\log M\log N)$$$ time though. We no longer need to compute $$$R(i)$$$ for checking if $$$(i, i + 1)$$$ are connected so we can implement rollback easily.
Once we have $$$f(i)$$$ we just need RMQ so that works out nicely.
UPD: Implemented it, 164944610.
Don't understand solution of D1,can someone explain it to me in detail
I copy-pasted from my own personal notes so sorry if it's hard to read. Maybe it helps though.
Solution 2 in D2 by AlperenT doesn't open.
Sorry about that. Try again?
Thanks!
If there exists some Ai≥(k+1)⋅v, then it is impossible to have the max element as v, and we should skip
In my opinion that's not true, in the last testcase from example we have array {1, 3} and k = 2
maximum can be 1, because 3 / 2 = 1
You're right, that's my mistake. It will be fixed soon.
B is really harder than C and D1
Actually, in the original version of the round, B and C were swapped. But feedback from testers suggested that their current placement would be better.
How to do C using DP?
Geothermal solved it with DP, but I don't understand his DP states; check his solution from the leaderboard. Does anyone know how to do it with DP?
Did it after the contest link
can you explain the code?
My DP returns a pair of integers: the first one is the maximum number of buildings that can be cooled, and the second is the minimum cost needed to cool those buildings.
Now for a particular index we have two options: either we cool it or we don't. If we don't cool it, we simply move to the next building. If we cool it, we calculate the cost to cool it and move to (ind + 2), because we don't want to cool adjacent buildings. Then we choose the best answer among the two options; the priority is the maximum number of buildings cooled.
165126416
This is my dp solution for C
E is cool. I could not figure out that connecting the $$$i$$$-th vertex to the $$$(i+1)$$$-th vertex is the key; the editorial is really smart.
Video Solution for Problem C. Will upload Problem D in the morning :)
I think there is a slight mistake in the alternative solution for D: "Continuing this logic, for all integers u=1,2,…,k, we should check the elements a_i satisfying (u−1)⋅v+1≤a_i≤u⋅v, and set all these p_i=u."
It seems the condition should be (u-1)(v+1)<=a_i<u(v+1). For example, if we take a_i=2v+1 then p_i=2. Or if we take a_i=3v+2 then p_i=3
Yes you're correct. Thanks for noticing it. It will be fixed soon.
Here is my solution to D2: First initialize $$$p_i = 1$$$ for all $$$i$$$. The greedy action is always to reduce the current maximum $$$c_i = \lfloor {\frac{a_i}{p_i}} \rfloor$$$, because we can only increment $$$p_i$$$ ($$$p_i > 0$$$), and reducing the current minimum would only increase the difference. So the idea is: in a priority_queue, add all the pairs $$$(c_i, i)$$$. In each iteration, take the maximum value and reduce it by incrementing $$$p_i$$$ by some $$$x$$$ such that $$$\lfloor {\frac{a_i}{p_i + x}} \rfloor < \lfloor {\frac{a_i}{p_i}} \rfloor$$$ (i.e. p_i += x). We can get $$$x$$$ as $$$x = (a_i - c_i \cdot p_i) / c_i + 1$$$ (for each $$$a_i$$$ there are at most $$$\sqrt {a_i}$$$ updates of $$$p_i$$$). Then update the answer if the difference between the maximum and minimum values is smaller than before, and repeat until some $$$p_i > k$$$ or the maximum value of $$$\lfloor {\frac{a_i}{p_i}} \rfloor$$$ is $$$0$$$. To optimize the solution I used a vector instead of the priority_queue and saved the indices: since all values are in the range $$$[0, 10^5]$$$, no sorting is needed. To update a maximum with value $$$c_i$$$ (position $$$c_i$$$ in the vector), update $$$p_i$$$, compute the new $$$c_i'$$$, and move the index to position $$$c_i'$$$. 164794465
Why are there at most $$$\sqrt{a_i}$$$ updates of $$$p_i$$$ for each $$$a_i$$$?
The key fact is that $$$\lfloor n/p \rfloor$$$ takes at most about $$$2\sqrt{n}$$$ distinct values: for $$$p \le \sqrt{n}$$$ there are at most $$$\sqrt{n}$$$ choices of $$$p$$$, and for $$$p > \sqrt{n}$$$ the quotient itself is at most $$$\sqrt{n}$$$. Since the value strictly decreases with every update, there are $$$O(\sqrt{n})$$$ updates.
Suppose we have a number $$$N$$$ such that $$$\big\lfloor{\frac{N}{x}}\big\rfloor = y$$$.
Then notice that for all $$$x \le \sqrt{N}$$$ we must have $$$y \ge \sqrt{N}$$$. Therefore, there exists a "connection" between any number $$$x \le \sqrt{N}$$$ with some $$$ y \ge \sqrt{N}$$$.
So by the Pigeonhole Principle, for the entire domain $$$x \le \sqrt{N}$$$, there exists at most $$$\sqrt{N}$$$ distinct "landing spots" comprised of a coordinate in $$$y$$$. By the same token, for all $$$y \ge \sqrt{N}$$$, there exists at most $$$\sqrt{N}$$$ landing spots from $$$x$$$, implying there are $$$O(\sqrt{N})$$$ updates.
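This pigeonhole bound is easy to sanity-check; a small Python illustration (my own naming):

```python
import math

def distinct_quotients(n):
    """All distinct values of n // x over x = 1..n."""
    return {n // x for x in range(1, n + 1)}

# Pigeonhole bound from the comment above: for x <= sqrt(n) there are
# at most sqrt(n) choices of x, and for x > sqrt(n) the quotient is
# itself at most sqrt(n), so there are at most 2*sqrt(n) values total.
for n in range(1, 2000):
    assert len(distinct_quotients(n)) <= 2 * math.isqrt(n)
```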
164793050 — D1 solution with bitsets used to store indices. Works due to the low count of distinct quotients.
Thank you for the contest! The problems were clear and fun, and this editorial is very easy to read. Thanks a lot.
Hey everyone,
I'm wondering about the solution to D2. I read AlperenT's solution because his strategy of fixing the maximum value is similar to what I did for D1. There are a few things I don't understand. Also just to preface this, I'm probably going to misquote the author at almost every point in this post, that is not intentional it is just a consequence of my foolishness.
First, in the fourth paragraph, he says that we need to find the best Pi for some maximum v by iterating through all possible values of k. Doesn't this immediately trigger a TLE, because in the worst case we need to iterate through all k for all n --> n^2?
Second, AlperenT says that the minimum value for our fixed maximum will come from the smallest Ai. But this can be disproved with a simple example: 6, 10 with a maximum of 6. To get 10 under the maximum, we must divide it by 2, leaving us with 5, 6 --> the minimum (5) does not come from the smallest Ai. The author says that this is only true for samples with the same "u", which does make sense, but at that point, aren't you just iterating through all values in the array regardless, since you need to check for the minimum of every value at every "u"?
At this point, I think I'm lost to the extent that I can't ask any more intelligent questions, but I guess the final things I'm wondering about is how we generate the "next" array in less than n^2 time, and how that will help us for a fixed maximum rather than a fixed minimum.
Thank you so much for reading, this is a long comment.
Hi, thanks for the questions.
In the fourth paragraph, we're fixing the maximum as $$$v$$$ and creating segments for every $$$u$$$. A segment consists of the numbers $$$[(u - 1)(v + 1), u(v + 1) - 1]$$$. When it becomes impossible for a segment to contain any elements of the array, we can stop early. We only create segments for $$$u$$$ where $$$(u - 1)(v + 1) \leq a_n$$$, which means for every $$$v$$$ we only create segments for $$$u \leq \frac{a_n}{v + 1} + 1$$$. We're essentially doing only $$$\frac{a_n}{v}$$$ operations for every $$$v$$$ (this is also written in the editorial). In total it makes $$$O(a_n \log a_n)$$$, because this is the harmonic series times $$$a_n$$$.
Like in the previous answer, the number of $$$u$$$ we have to check is $$$O(a_n \log a_n)$$$ in total. We find the minimum element in each segment with the help of the $$$next$$$ array in $$$O(1)$$$, so this is also $$$O(a_n \log a_n)$$$.
Like the editorial says $$$next_j$$$ means minimum $$$a_i$$$ where $$$a_i \geq j$$$. There are many ways to do this. You can either use lower_bound or if you want to do it in $$$O(n)$$$ you can notice that for every $$$a_{i - 1} < j \leq a_i$$$, $$$next_j$$$ should be $$$a_i$$$.
Please correct me if I understood this question wrong. We're fixing the maximum as $$$v$$$ so in order to minimise $$$v - min(\left\lfloor \frac{a_i}{p_i} \right\rfloor)$$$ we have to maximise $$$min(\left\lfloor \frac{a_i}{p_i} \right\rfloor)$$$.
For a better understanding, I suggest you to check my code in the editorial. I hope it's an easy to read code.
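The $$$next$$$-array construction mentioned above can be sketched as follows (the function name and INF sentinel are my own choices):

```python
def build_next(a, max_val):
    """next_j = smallest a_i with a_i >= j (INF if none), built in
    O(n + max_val) with a downward sweep instead of binary search."""
    INF = float('inf')
    present = [False] * (max_val + 2)
    for x in a:
        present[x] = True
    nxt = [INF] * (max_val + 2)
    for j in range(max_val, -1, -1):
        # either j itself occurs in a, or inherit the answer for j+1
        nxt[j] = j if present[j] else nxt[j + 1]
    return nxt
```

The minimum array element inside a segment $$$[(u-1)(v+1),\; u(v+1)-1]$$$ is then nxt[(u-1)*(v+1)], valid whenever that value does not exceed $$$u(v+1)-1$$$.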
Thank you, this cleared it up!
An alternative solution to E without using LCA and minimum spanning tree.
The idea is to keep a persistent DSU data structure to be able to query whether 2 nodes are connected at any point (after adding some edges from the beginning).
Then for every $$$i$$$ and $$$i+1$$$ nodes. We do a binary search to find the first time the 2 nodes become connected. Then we can continue in the same way using a range max query structure.
To construct a persistent DSU data structure we need to keep snapshots for updates for every change for node’s parent. Also, we need to do small into large merges instead of path compression to get the required complexity.
The overall complexity for the first part is $$$O(n \log m \log \log n)$$$.
submission
I solved E by keeping track of all the intervals formed after connecting $$$k$$$ edges, using DSU.
In DSU, similar to parent/size, we store the ranges covered by each set in the form of a set of $$$[l, r]$$$ pairs. Anytime two intervals are merged, our goal is to recalculate the $$$[l, r]$$$ ranges in the larger set, after having added the elements of the smaller set to it. Whenever a new range is formed, we note the time when it was formed. This will give us all the possible ranges of the form $$$[l, r, k]$$$, which means that it is possible to connect all vertices within $$$[l, r]$$$ using the first $$$k$$$ edges.
Suppose that initially the larger set contains ranges: $$$[1, 3], [6, 7], [10, 11]$$$ and the smaller set contains ranges: $$$[4, 4], [8, 9]$$$. First, we merge $$$[4, 4]$$$. Query using lower_bound on the larger set and get the previous and next ranges. In this case, it is possible to merge $$$[4, 4]$$$ with $$$[1, 3]$$$, giving a new range $$$[1, 4]$$$.
Next up, we see that it is possible to merge $$$[8, 9]$$$ with both $$$[6, 7]$$$ and $$$[10, 11]$$$. We do that and get a new range $$$[6, 11]$$$.
The key is to do union by size. We will only merge smaller sets into larger sets. Think about how many times a particular range will have to change sets, i.e., be merged. Since we only merge into larger sets, the size of the set that a range belongs to will at least double after every merge. So, every element/range will go through $$$O(\log N)$$$ merges.
See my DSU class for more clarity : 164821167
Now the problem reduces to this: We have a bunch of ranges of the form $$$[l, r, k]$$$, and queries of the form $$$[x, y]$$$. For each query, we have to find the range with minimal $$$k$$$ such that $$$l <= x$$$ and $$$r >= y$$$. This can be solved by using a segment tree for the minimum. Sort the ranges and queries by their left ends. Try to minimize $$$k$$$ for each right end of a range. For a query, find the minimum $$$k$$$ for all $$$r$$$ such that $$$y <= r <= n$$$.
Time complexity: $$$O(N \log^{2} N)$$$ for merging sets using DSU.
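The doubling argument behind the $$$O(\log N)$$$ merge count can be demonstrated with a generic small-to-large merge; a Python sketch (not the author's actual DSU, just an illustration of the move-count bound):

```python
import math
import random

def small_to_large_union(n, unions):
    """Merge sets by always moving elements of the smaller set into
    the larger one; return the total number of element moves.  Each
    element changes set at most log2(n) times, since its set at
    least doubles in size whenever it moves."""
    comp = list(range(n))            # comp[x] = id of x's current set
    members = [{i} for i in range(n)]
    moves = 0
    for u, v in unions:
        cu, cv = comp[u], comp[v]
        if cu == cv:
            continue
        if len(members[cu]) < len(members[cv]):
            cu, cv = cv, cu          # cu is now the larger set
        for x in members[cv]:        # move the smaller set's elements
            comp[x] = cu
            moves += 1
        members[cu] |= members[cv]
        members[cv] = set()
    return moves

# demo: random unions never exceed the n*log2(n) move bound
random.seed(1)
n = 512
unions = [(random.randrange(n), random.randrange(n)) for _ in range(4 * n)]
assert small_to_large_union(n, unions) <= n * int(math.log2(n))
```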
I have used the same DSU structure as you. You can simplify the query part a bit, though, by only storing for each vertex $$$V$$$ an integer $$$D[V]$$$ = the maximum edge number up to which there still exists a range whose first number is $$$V$$$.
So for a query $$$[x,y]$$$: all of these nodes will be connected to each other by the edges up to edge number $$$j$$$ if and only if there exists no range that begins with a number in $$$[x+1,y]$$$ after merging all edges up to $$$j$$$ in the DSU. This leads to the answer being a range-max query over $$$D[x+1] \ldots D[y]$$$.
Submission for context
I liked the problems a lot, though B seems a bit hard for a Div2 B problem..
I disagree. It wasn't that hard. You just needed to find a way to put blocks to make a tower and see if it's possible.
Thank you, now I finally know how to trim std::vector's memory: shrink_to_fit.
I love problem E. Thank you for the contest
Nice contest, finally became expert :)
For me, B was really tough at first glance, but the sample figures are really inspiring.
In the D1 tutorial, you directly enumerate the min value from 0 to a[0]. Does this mean we can always obtain every number smaller than a[0] using that operation on the array (at least [1, a[0]])? Looking forward to a formal proof; I'm not good at number theory. Thanks.
Also, E looks really great; maybe it will become another standard graph problem.
great contest!
In D1's solution,
Firstly we know it is always possible to get a[0] as v by simply taking pi = 1.
Now there may be only a few v between 1 and a[0] that are actually attainable. But the good part of the solution is: suppose you are in an iteration for a v that is not attainable. You are assured that some higher attainable value (call it v1) exists, and we also know that all the a[i]/p[i] values will be greater than or equal to v1 in the iteration you are checking.
Hence, while checking a v that is not attainable, you are assured you will get at least as good an answer when you check v1 (v1 > v): Max(a[i]/p[i]) − v1.
Is it necessary that the minimum a[i]/p[i] occurs for i=1? For example, in the first sample test case the resulting array {4,5,6,4,5} has its maximum at i=3. So why do we assume the minimum comes only from the first value? EDIT: No, it is not necessary that the minimum always occurs on the first element; that's why we test all values from 1 to a[1], and not just the values that a[1] can become.
This set of problems is very good; I learned a lot.
I don't really understand the Solution D1.
It's said that we iterate v as the minimum value, but what if v can't be derived from the arrays p and a?
Same question...
In D1's solution,
Firstly we know it is always possible to get a[0] as v by simply taking pi = 1.
Now there may be only a few v between 1 and a[0] that are actually attainable. But the good part of the solution is: suppose you are in an iteration for a v that is not attainable. You are assured that some higher attainable value (call it v1) exists, and we also know that all the a[i]/p[i] values will be greater than or equal to v1 in the iteration you are checking.
Hence, while checking a v that is not attainable, you are assured you will get at least as good an answer when you check v1 (v1 > v): Max(a[i]/p[i]) − v1.
Thanks for your reply! But p[i] is derived from min(k, a[i]//v), so if v changes to v1 (v1 > v), p[i] may decrease, and Max(a[i]/p[i]) may increase. So I think it's hard to say that (Max(a[i]/p[i]) − v1) is no greater than (Max(a[i]/p[i]) − v).
Actually, for a v that is not attainable,
min(K, Floor(ai/v)) = min(K, Floor(ai/v1)).
I think the proof is simple to work out; you can take some examples and see.
That's why my above comment holds.
Just updated my comment. Do have a look.
Thanks very much! It's clearly right when you try examples, but hard to prove formally~
I tested some examples, and it seems Max(a[i]/p[i]) does not change when going from v to v1, but it's hard for me to prove, orz.
Why did my approach for D1 did not work? Submission: https://mirror.codeforces.com/contest/1706/submission/164765734
The min value in the editorial's solution runs continuously from 1 to a[0], but yours only takes the values a[0]//k, a[0]//(k-1), ..., a[0].
yeah because we also divide the minimum element of array A, right? How can we have continuous minimums?
For example, if k=3 and a[0]=5
a[0]/1 = 5
a[0]/2 = 2
a[0]/3 = 1
Here we don't have 3 and 4 in the range [1, 5]
Which means I screwed up my algo by iterating on k; instead I need to iterate on the minimum value. Any proof or reason why only that works?
The min value can be derived not only from a[0] but also from a[1], and so on. So if you iterate over k, you also need to iterate over the a[i]; that was my first solution and it TLE'd, orzorz.
All problems in this contest are very nice!
Why has no one pointed out the nearly identical problem to Problem C on AtCoder? It's ABC162F, Select Half; to make an equivalent transformation, you just define the "a[i]" of the i-th building as the minimal cost of making it higher than both of its neighbors.
Sorry for my poor English.
Can anyone explain problem D1? I'm not getting it from the tutorial.
Damn, is everyone here a foreigner??
What do you mean? On a site like this, do you really expect an all-Chinese comment section?
Bad E; it is too standard, so people who know the Kruskal reconstruction tree can pass it very fast, but it is very difficult for people who don't know this algorithm, like me :(
164856513 164742302
A sparse table can also be used to pass E. My solution is $$$O(n\log^2n)$$$.
Can someone explain solution 1 of problem D2 in details please?
The way using dsu to find the maximum weight in problem E was new to me.
Here is how i understand it (I'll be so glad if anybody fix it if I get anything wrong).
1.) The function weight doesn't find the lca in the actual graph; it finds the maximum weight.
2.) The function doesn't iterate over all the edges between u and v.
3.) The reason we can be sure the weight we find is the maximum is the following: if weight(u, v) = k, then before the k-th edge is added to the DSU, u and v are in two different subtrees, so for them to be connected, the path must pass through the k-th edge.
Applying all of this, we can find the maximum weight on the path between u and v without actually using the lca of the two vertices.
Also, it only works because this union-find does not do path compression.
In my solution for E, I used DSU with small-to-large merging to calculate $$$f(i)$$$. Every time I inserted the elements of the smaller set (say $$$x$$$), I checked whether the larger set contains $$$x+1$$$ or $$$x-1$$$, and if it does, I updated the value of $$$f(x)$$$ or $$$f(x-1)$$$.
It was a pretty small implementation
My submission
Very neat solutions BucketPotato
I saw the first sample code for D2 and I wanted to show how to iterate the distinct values of $$$\lfloor{\frac{a}{b}}\rfloor$$$. I'll leave it here if anyone is interested.
Let $$$\lfloor{\frac{a}{b}}\rfloor = q$$$. We want to find $$$b'$$$ such that $$$\lfloor{\frac{a}{b'}}\rfloor < q$$$. Then $$$\frac{a}{b'} < q \Rightarrow b' > \frac{a}{q} \Rightarrow b' > \lfloor{\frac{a}{q}}\rfloor \Rightarrow b' \ge \lfloor{\frac{a}{\lfloor{\frac{a}{b}}\rfloor}}\rfloor + 1$$$, which is the formula you can see in the loop in the sample code.
I think I have seen this in some Codeforces blog but I couldn't find it. I'm not good at this floor/ceil stuff so if anyone has related blogs/links, they are welcome.
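A small Python illustration of the jump formula, checked against brute force (my own naming):

```python
def distinct_floor_blocks(a):
    """Enumerate (q, b_lo, b_hi) such that a // b == q for every b in
    [b_lo, b_hi], jumping between blocks with b' = a // (a // b) + 1."""
    b = 1
    out = []
    while b <= a:
        q = a // b
        b2 = a // q          # largest b with a // b == q
        out.append((q, b, b2))
        b = b2 + 1           # the jump formula from the comment above
    return out

# sanity check against brute force: blocks are correct and cover 1..a
for a in range(1, 300):
    blocks = distinct_floor_blocks(a)
    for q, lo, hi in blocks:
        for b in range(lo, hi + 1):
            assert a // b == q
    assert sum(hi - lo + 1 for _, lo, hi in blocks) == a
```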
floor(a/b') < q implies a/b' < q? Can you explain a bit? We have a/b' >= floor(a/b'), right? Then how is it guaranteed that a/b' < q?
Intuitively, the floor is an integer, so $$$\lfloor \frac{a}{b'} \rfloor < q$$$ is actually $$$\lfloor \frac{a}{b'} \rfloor \le q-1$$$, but you can now see that this can only happen if $$$\frac{a}{b'} < q$$$ because otherwise the floor would be $$$q$$$.
More formally, we have $$$\frac{a}{b'} = \lfloor \frac{a}{b'} \rfloor + r$$$, where $$$0 \le r < 1$$$. We had $$$\lfloor \frac{a}{b'} \rfloor < q$$$, which is equivalent to $$$\lfloor \frac{a}{b'} \rfloor \le q-1$$$ so that gives $$$\frac{a}{b'} = \lfloor \frac{a}{b'} \rfloor + r \le q-1 + r < q-1+1 = q$$$ since $$$r < 1$$$.
I hope I didn't make it more confusing. Note that I didn't use the inequality $$$\frac{a}{b'} \ge \lfloor \frac{a}{b'} \rfloor$$$ you suggested.
Hey, can anyone please help me clear up a doubt in the editorial of problem D1? It says that the minimum can be anything in the range $$$[0, a_1]$$$. But what if, say, $$$a_1 = 4$$$, yet constructing $$$3$$$ is not possible by dividing any of the $$$a_i$$$ by any value of $$$p$$$ (and we might get the minimum difference when assuming the minimum value to be $$$3$$$)?
For example, Let A = [4, 4, 8, 16] and K = 2
Here we will also consider $$$3$$$ as the minimum value when iterating from $$$0$$$ to $$$4$$$, but constructing $$$3$$$ as the minimum value is not possible. We can also say that there might be an array for which having $$$3$$$ as the minimum value is not possible, yet assuming $$$3$$$ gives us the minimum difference.
Thanks!
You can assume the minimum value $$$v$$$ is just a lower bound. If there is no division that gives $$$v$$$, there will be another lower bound $$$v' > v$$$ that will give you a better answer, and previously trying to update the answer from $$$v$$$ will not harm your result. Finally, this argument won't go further than $$$v = a_1$$$ because $$$a_1 = \lfloor \frac{a_1}{1} \rfloor$$$.
Ohhh! Makes sense. But let's just say $$$v$$$ is the minimum value that gives us the best possible result. Does this mean there would be a $$$v' > v$$$ that can be created by some $$$a_i = \lfloor a_i / p_i \rfloor$$$ and that gives us the same result? Did I get that right?
I'm not sure I understand your argument. I was trying to say you would get the same array $$$p$$$ in both cases ($$$v$$$ and $$$v'$$$), but when computing the difference, if you use $$$v'$$$ you are not considering the useless range $$$[v, v')$$$ in which there is no division, so the answer will obviously be better, since the maximum didn't change.
Have a look at this comment and the reply, that was what I was trying to say. You can try to update the answer at $$$v=5$$$ but you will get a better result at $$$v=7$$$ because there was nothing in between anyway.
In the tutorial for D2, Solution 2 (AlperenT): "Continuing this logic, for all integers $$$u = 1, 2, \ldots, k$$$, we should check the elements $$$a_i$$$ satisfying $$$(u-1) \cdot v + 1 \le a_i \le u \cdot v$$$, and set all these $$$p_i = u$$$."
I think the range should be $$$(u-1) \cdot v + u - 1 \le a_i \le u \cdot v + u - 1$$$.
With reference to the tutorial of D1, is it possible that a particular value of the minimum $$$v$$$ cannot be achieved at all?
Let's take this example,
Am I missing something?
It doesn't matter, as all values of $$$v$$$ from $$$0$$$ to $$$a_1$$$ are checked, and you will see the $$$0$$$ when $$$v = 7$$$. This makes the implementation slightly easier, as you don't have to track whether $$$v$$$ was achieved.
Can you explain to me why my code got WA? https://mirror.codeforces.com/contest/1706/submission/164908500
Take a look at Ticket 15895 from CF Stress for a counter example.
Is this app free? Thank you so much!
I used a linear DP in problem C for $$$n = 2$$$. I don't know why it's giving a wrong answer. Any assistance will be most welcome.
Can someone explain D2 in the most childish way possible?
We start the same as in D1, by fixing the minimum and then bringing each element as close to the minimum as possible. But instead of checking each element individually, let's look at the various values of $$$p_i$$$ that we choose (we can easily handle the case where the ideal $$$p_i \ge k$$$ separately).
You want $$$a_i/p_i$$$ to be as close to the minimum as possible, so you should choose $$$p_i = x$$$ for all elements of the array such that $$$x \cdot min \le a_i < (x+1) \cdot min$$$.
Even in floor division, $$$a/b \ge c/b$$$ if $$$a > c$$$, so among all the values you divide by $$$x$$$, the maximum result is given by the one just smaller than $$$(x+1) \cdot min$$$.
You can iterate over all possible $$$x$$$ from $$$1$$$ to $$$k$$$ and get each maximum in $$$O(\log{n})$$$ this way, and if you break the loop when $$$x \cdot min > 10^5$$$, it gives an amortized complexity of, I think, $$$O(a_{max} \log{a_{max}})$$$.
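A sketch of this per-minimum computation under my own assumptions (the helper name `bestMaxForMin` is hypothetical and not from any linked submission; `a` must be sorted ascending with $$$a_1 \ge v$$$, and products are assumed to fit in a `long long`):

```cpp
#include <algorithm>
#include <vector>

// For a fixed candidate minimum v >= 1, return the smallest achievable maximum
// of floor(a_i / p_i) subject to floor(a_i / p_i) >= v and 1 <= p_i <= k.
// Assumes a is sorted ascending and a[0] >= v (so the minimum v is feasible).
long long bestMaxForMin(const std::vector<long long>& a, long long v, long long k) {
    long long mx = 0;
    // Elements in [x*v, (x+1)*v) are best divided by p = x: that keeps the
    // quotient >= v, while p = x+1 would drop it below v. Within one interval
    // only the largest element matters for the maximum (floor is monotone).
    for (long long x = 1; x <= k; x++) {
        auto it = std::lower_bound(a.begin(), a.end(), (x + 1) * v);
        if (it != a.begin()) {
            long long ai = *std::prev(it);           // largest a_i < (x+1)*v
            if (ai >= x * v) mx = std::max(mx, ai / x);
        }
        if (x * v > a.back()) break;                 // no elements this large
    }
    // Elements of size >= (k+1)*v simply use the largest divisor k.
    for (auto it = std::lower_bound(a.begin(), a.end(), (k + 1) * v);
         it != a.end(); ++it)
        mx = std::max(mx, *it / k);
    return mx;
}
```

For example, with $$$a = [4, 5, 6]$$$, $$$v = 2$$$, $$$k = 3$$$, the choices $$$p = [2, 2, 3]$$$ give quotients $$$[2, 2, 2]$$$, so the best maximum is $$$2$$$.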
great contest
Solution of problem D1 using sets and lower_bound: 165982319
Could anyone help me understand why the number of distinct values of $$$\lfloor\frac{a}{q}\rfloor$$$ is at most $$$O(\sqrt{a})$$$?
Because you can estimate it this way: for $$$q = 1 \ldots \sqrt{a}$$$ there are at most $$$\sqrt{a}$$$ choices of $$$q$$$, hence at most $$$\sqrt{a}$$$ distinct quotients, and for $$$q > \sqrt{a}$$$ the quotient $$$\lfloor a/q \rfloor$$$ itself lies in the range $$$1 \ldots \sqrt{a}$$$, giving at most $$$\sqrt{a}$$$ more.
UPD: this idea gets AC
I wonder if my idea for problem E is wrong. I'm getting the wrong answer, but I might simply have a bug somewhere. So, on to the idea: let's have a DSU and unite the sets of $$$u_i$$$ and $$$v_i$$$ when processing edge $$$i$$$. We process the edges $$$1$$$ to $$$m$$$. The DSU-node struct stores the contiguous ranges currently present in the set. When uniting two sets, we rebuild the ranges that get merged. E.g., merging a set with ranges {(1...2), (4...4)} and a set with {(3...3), (6...6)} results in a set with ranges {(1...4), (6...6)}. Then we know which ranges are subject to change at each merge. For the example above, the merge yields the new range (1...4), which is fully "walkable" (i.e., the answer for the query l=1, r=4 is the edge $$$i$$$ we used to unite the above sets).
Now on to the queries. While performing the DSU-unite steps, let's store the new "walkable" ranges, e.g. the $$$i$$$-th edge yields the range (l...r), so we store it as {(l...r), i}. Initially we have {(v...v), 0} for each $$$v$$$ in the graph. Next, let's sort the ranges we have after all edges are processed through the DSU. It may look like {(1...1), 0}, {(1...3), 3}, ..., {(2...3), 2}, ... . Suppose we now have a query (l, r). The answer to it will be the element whose left range bound is less than or equal to l and whose right range bound is greater than or equal to r. We can find the smallest range that contains (l...r); the second value we store would be the answer. Note the way we construct the ranges: there are no intersecting ranges, since the only possible case is that one range contains the other. More formally, there are no ranges {(<>...i), <>} and {(j...<>), <>} such that i > j.
To answer all queries offline, we can sort them so that the ones with smaller l come first, and then iterate over the queries while maintaining a set of not-yet-closed ranges with their second values. The answer to the current query (l_i, r_i) would be lower_bound({r_i, 0}) on that set.
Can anyone tell if the logic is broken at some point?
As usual, I would like to provide a shitty and overcomplicated solution for E.
Let's use a persistent segment tree to keep track of updates. At the start we say that at position $$$i$$$ ($$$0 \le i < N$$$) stands the number $$$i + 1$$$ (so we store the information about all components directly in the segment tree). Now let's handle the queries one by one: we use small-to-large merging to update the new parent for each vertex from the component of smaller size. As such, we will have to update $$$O(N\log{N})$$$ positions in the worst case, making the total $$$O(N\log^2{N} + M)$$$. To answer a query, run a binary search. A segment is filled if and only if its maximum is equal to its minimum. A query is answered in $$$O(\log{N})$$$, plus $$$\log{N}$$$ from the binary search, so the query answer time is $$$O(Q\log^2{N})$$$.
So the total complexity is $$$O(N\log^2{N})$$$ space and $$$O((N + Q)\log^2{N} + M)$$$ time, and I believe that with enough pain that is squeezable into the TL.
Can you give me a link to the submission? Even though I know it's an overcomplicated implementation, I would like to see how it was implemented.
Why is problem D named "Chopping Carrots"?
$$$O(N\log{N})$$$ solution for problem E using two segment trees and DSU: Link
Note to self: Write explanation later.