Hi everyone!
The sixth round of COCI will be held today, Saturday, March 13th, at 14:00 UTC. You can access the judging system here. If you are not familiar with the COCI contest format, we encourage you to read the announcement.
The round was prepared by paula, dpaleka, pavkal5, bukefala, stjepanp and me.
Feel free to discuss the problems in the comment section after the contest ends.
Hope to see you later today!
Is there a solution for E faster than Mo's algorithm with a Fenwick tree?
Parallel Binary Search
I used persistent segment tree to solve it in $$$O((n + q)\log{MAXP})$$$.
Can you please explain your solution?
Sure. First, build the persistent segment tree over the frequencies of the values and store the root of each version (one version per prefix of the array).
After that, we want to know the maximum $$$k$$$ such that there are at least $$$k$$$ values greater than or equal to $$$k$$$ in the range of positions $$$[l, r]$$$.
Then, we traverse the persistent segment tree with two pointers: the node in the version built from the first $$$l - 1$$$ elements (call it $$$last$$$) and the corresponding node in the version built from the first $$$r$$$ elements (call it $$$pos$$$).
Now, the number of elements with value in range $$$[L, R]$$$ and position in range $$$[l, r]$$$ is $$$st[pos] - st[last]$$$ for the associated nodes $$$pos$$$ and $$$last$$$. Thanks to this, we can descend the tree directly, without an extra binary search on top.
While descending a node that covers the value range $$$[L, R]$$$, we also maintain an extra value $$$add$$$ that stores the number of elements greater than $$$R$$$ in position range $$$[l, r]$$$. Denote $$$mi = \lfloor\frac{L + R}{2}\rfloor$$$; there are two possibilities:
1) $$$st[right(pos)] - st[right(last)] + add \leq mi$$$: the number of elements greater than or equal to $$$mi + 1$$$ is at most $$$mi$$$, so none of the values in range $$$[mi + 1, R]$$$ can be the answer. We go to the left child and update $$$add = st[right(pos)] - st[right(last)] + add$$$.
2) Otherwise, $$$mi + 1$$$ is a valid $$$k$$$, so the maximum one lies in the range $$$[mi + 1, R]$$$ and we go to the right child, keeping $$$add$$$ unchanged.
It will be something like this:
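Here is a minimal sketch of that traversal (the array names, the bound $$$MAXP$$$, and the assumption that all values lie in $$$[1, MAXP]$$$ are mine, not necessarily the poster's exact code):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXP = 200000;       // assumed upper bound on the values
const int MAXNODES = 4000005;  // about n * log(MAXP) nodes, assuming n <= 2 * 10^5

int lc[MAXNODES], rc[MAXNODES], st[MAXNODES], nodes = 0;

// Persistent insert of value v; returns the root of the new version.
int insert(int prev, int L, int R, int v) {
    int cur = ++nodes;
    lc[cur] = lc[prev]; rc[cur] = rc[prev]; st[cur] = st[prev] + 1;
    if (L == R) return cur;
    int mid = (L + R) / 2;
    if (v <= mid) lc[cur] = insert(lc[prev], L, mid, v);
    else          rc[cur] = insert(rc[prev], mid + 1, R, v);
    return cur;
}

// Maximum k such that at least k values in the queried position range are >= k.
// 'last' starts at the root of version l-1, 'pos' at the root of version r,
// and 'add' counts the elements of [l, r] whose value is greater than R.
int query(int last, int pos, int L, int R, int add) {
    if (L == R) return (st[pos] - st[last] + add >= L) ? L : L - 1;
    int mid = (L + R) / 2;
    int onRight = st[rc[pos]] - st[rc[last]];          // elements with value in [mid+1, R]
    if (onRight + add <= mid)                          // case 1: go left
        return query(lc[last], lc[pos], L, mid, add + onRight);
    return query(rc[last], rc[pos], mid + 1, R, add);  // case 2: go right
}

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<int> root(n + 1, 0);
    for (int i = 1; i <= n; ++i) {
        int a;
        scanf("%d", &a);
        root[i] = insert(root[i - 1], 1, MAXP, a);     // one version per prefix
    }
    while (q--) {
        int l, r;
        scanf("%d %d", &l, &r);
        printf("%d\n", query(root[l - 1], root[r], 1, MAXP, 0));
    }
    return 0;
}
```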
Thus, it's just $$$O(\log{MAXP})$$$ per query.
Short summaries of my solutions to the harder problems:
Turn each word into a canonical form (e.g. sort the letters alphabetically) to identify sets of similar words. The words themselves can then be discarded, as only the sizes of the sets matter. Now this is an integer knapsack problem: add one set at a time, and for each set, iterate over how many elements to take from it.
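A sketch of that grouped knapsack, under the assumption (mine, the comment does not restate the task) that the goal is to count subsets containing exactly $$$k$$$ similar pairs modulo $$$10^9 + 7$$$: taking $$$t$$$ words from a set of size $$$s$$$ can be done in $$$\binom{s}{t}$$$ ways and contributes $$$\binom{t}{2}$$$ similar pairs.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 1e9 + 7;   // assumed modulus

int main() {
    int n, k;
    cin >> n >> k;

    // Canonical form: sort the letters; similar (anagram) words collapse to the same key.
    map<string, int> groupSize;
    for (int i = 0; i < n; ++i) {
        string w;
        cin >> w;
        sort(w.begin(), w.end());
        ++groupSize[w];
    }

    // Pascal's triangle for binomial coefficients (assumes n is small enough for an O(n^2) table).
    vector<vector<long long>> C(n + 1, vector<long long>(n + 1, 0));
    for (int i = 0; i <= n; ++i) {
        C[i][0] = 1;
        for (int j = 1; j <= i; ++j)
            C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % MOD;
    }

    // dp[j] = number of ways to get exactly j similar pairs using the sets processed so far.
    vector<long long> dp(k + 1, 0);
    dp[0] = 1;
    for (auto& g : groupSize) {
        int s = g.second;                        // only the size of the set matters
        vector<long long> ndp(k + 1, 0);
        for (int j = 0; j <= k; ++j) {
            if (!dp[j]) continue;
            // Iterate over how many elements to take from this set.
            for (int t = 0; t <= s && j + t * (t - 1) / 2 <= k; ++t)
                ndp[j + t * (t - 1) / 2] = (ndp[j + t * (t - 1) / 2] + dp[j] * C[s][t]) % MOD;
        }
        dp = ndp;
    }
    cout << dp[k] << '\n';
    return 0;
}
```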
I found this by far the hardest problem. The basic idea is to add points one at a time, working left to right, and maintaining the set of "good" segments. To add a new point P, there are two steps: cull segments that are no longer good (because they intersect a segment from P to one of the existing points) and add new segments that start at P.
For the first part, we identify a set of "safe" segments, i.e. segments between the prior vertices that are not cut by a segment from P. Sort the prior points by the angle they make with P, and add them one at a time to a convex hull (Graham scan). When adding a new point Q to the hull, a segment from Q to a previously considered point R will be safe if and only if R is on the hull and "visible" from Q; these are the points removed from the hull to make room for Q, plus the last point that isn't removed. It is easy to see that this builds a "safe" set of size O(n), which can then be intersected with the previous set of "good" segments, also of size O(n).
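A rough sketch of that hull-building step (the point type, the orientation convention, and the assumption that the prior points arrive sorted counter-clockwise by angle around P are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Pt { long long x, y; };

// Twice the signed area of triangle (o, a, b): positive for a counter-clockwise turn.
long long cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Prior points are assumed sorted counter-clockwise by angle around P.
// For each new point Q, the hull points it "sees" are the ones popped while
// inserting Q plus the last point that survives, giving O(n) pairs overall.
vector<pair<Pt, Pt>> safePairs(const vector<Pt>& sortedAroundP) {
    vector<Pt> hull;
    vector<pair<Pt, Pt>> seen;
    for (const Pt& q : sortedAroundP) {
        while (hull.size() >= 2 && cross(hull[hull.size() - 2], hull.back(), q) <= 0) {
            seen.push_back({q, hull.back()});                  // Q sees the popped vertex
            hull.pop_back();
        }
        if (!hull.empty()) seen.push_back({q, hull.back()});   // ...and the last survivor
        hull.push_back(q);
    }
    return seen;
}
```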
For the second part, it is easy to see that a segment from P to a previous point is good if and only if it connects P to a vertex of the convex hull "visible" from P. Again, these can be found in linear time once the hull is constructed.
Runtime is $$$O(n^2\log n)$$$. I actually got an implementation that scored 100% before I had done all the optimisations above, so it was technically $$$O(n^3)$$$.
I used a persistent segment tree, but it was $$$O(n\log n + MAXP + q\log^2 n)$$$ (I think). The papers are added to the tree in decreasing citation count. To answer a query, use a binary search for the answer: to check if an answer is small enough, consult the appropriate version of the segment tree to count the number of papers in the range with at least that many citations.
That answers queries online; I had some ideas for a solution that answers queries in order of right endpoint, but nothing that was significantly simpler or faster.
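For what it's worth, a compact sketch of that variant (the identifiers and the bound $$$MAXP$$$ are mine): the tree is indexed by paper position, and $$$rootGE[c]$$$ is the version containing exactly the papers with at least $$$c$$$ citations, so each binary-search step only has to count positions in $$$[l, r]$$$ in the right version.

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXP = 200000;       // assumed bound on citation counts
const int MAXNODES = 4000005;  // about n * log(n) nodes, assuming n <= 2 * 10^5

int lc[MAXNODES], rc[MAXNODES], cnt[MAXNODES], nodes = 0;

int insertPos(int prev, int L, int R, int p) {             // add one paper at position p
    int cur = ++nodes;
    lc[cur] = lc[prev]; rc[cur] = rc[prev]; cnt[cur] = cnt[prev] + 1;
    if (L == R) return cur;
    int mid = (L + R) / 2;
    if (p <= mid) lc[cur] = insertPos(lc[prev], L, mid, p);
    else          rc[cur] = insertPos(rc[prev], mid + 1, R, p);
    return cur;
}

int countRange(int node, int L, int R, int l, int r) {     // papers at positions [l, r]
    if (!node || r < L || R < l) return 0;
    if (l <= L && R <= r) return cnt[node];
    int mid = (L + R) / 2;
    return countRange(lc[node], L, mid, l, r) + countRange(rc[node], mid + 1, R, l, r);
}

int main() {
    int n, q;
    scanf("%d %d", &n, &q);
    vector<vector<int>> withValue(MAXP + 1);
    for (int i = 1; i <= n; ++i) {
        int a;
        scanf("%d", &a);
        withValue[a].push_back(i);
    }

    // Add papers in decreasing citation count: rootGE[c] holds all papers with >= c citations.
    vector<int> rootGE(MAXP + 2, 0);
    for (int c = MAXP; c >= 1; --c) {
        rootGE[c] = rootGE[c + 1];
        for (int p : withValue[c]) rootGE[c] = insertPos(rootGE[c], 1, n, p);
    }

    while (q--) {
        int l, r;
        scanf("%d %d", &l, &r);
        int lo = 0, hi = min(r - l + 1, MAXP);              // binary search on the answer
        while (lo < hi) {
            int k = (lo + hi + 1) / 2;
            if (countRange(rootGE[k], 1, n, l, r) >= k) lo = k;  // k papers with >= k citations
            else hi = k - 1;
        }
        printf("%d\n", lo);
    }
    return 0;
}
```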
Solutions are published here.