Topcoder SRM 790 is scheduled to start at 11:00 UTC -4, Sept 10, 2020.
Registration is now open for the SRM in the Arena or Applet and will close at 10:55 UTC-4. The coding phase will start at 11:05 UTC-4, so make sure that you are all ready to go.
Thanks to nikhil_chandak for writing the problem set. Also thanks to misof for testing and coordinating the round.
Choose Your Competition Arena
There are two ways to compete in Algorithm Competitions or Single Round Matches (SRMs). Read here to find which works best for you.
Some Important Links
Match Results | (To access match results, rating changes, challenges, failing test cases) |
Problem Archive | (To access previous problems with their categories and success rates) |
Problem Writing | (To access everything related to problem writing at Topcoder) |
Algorithm Rankings | (To access Algorithm Rankings) |
Editorials | (To access recent Editorials) |
Good luck to everyone!
Gentle Reminder: The Round begins in 3 hours!
Is this a div.3? I'm sorry, but this guy's maximum rating is lower than my minimum rating. Are you sure this should be rated for div.1 and affect TCO points?
In the announcement for the previous SRM, this guy was also initially listed as the author, but then he disappeared and other authors appeared...
We were planning to use some of Nikhil's problems, but they were not fully ready, so we had to update the announcement. :)
https://www.topcoder.com/tc?module=Static&d1=help&d2=problemWriter&node=algo_write https://www.topcoder.com/thrive/articles/Write%20Problems%20for%20Topcoder
Which of the requirements to become a writer are still in effect?
Yeah, the criteria mentioned here are still applicable.
Recently we added that if your Topcoder Algorithm/Marathon rating is not above 1500, your performance on other competitive programming platforms can also be considered.
Also, the rating at the time the application is submitted is what counts for the rating criterion.
I hope you enjoy the problems today :)
I guess I have to take that back
I'm ashamed that I couldn't be the fastest to submit today's 250-point problem, because it's the same as a past 500-point SRM problem that I authored.
I think it's quite easy to find three old problems and earn money in Topcoder recently...
Thanks everyone for participating in the round! I hope you all found something interesting and challenging. If you have any feedback, you can share it here anonymously : https://www.admonymous.co/nikhil_chandak
Unfortunately, no one was able to solve Div1-Hard (I guess ksun48's solution was very close but failed in system tests due to some bug). The rough stats on the % of users who successfully solved each problem (i.e. passed system tests) are quite disheartening:
I didn't expect such a drastic drop. Anyway, I hope future TC rounds turn out to be more balanced. I'm adding the editorials for Div1-Med and Div1-Hard below for convenience. The editorial for the whole problem set will be published on Topcoder later.
Ratio problems of this kind can be solved using a niche trick. I think it's hard to come up with this line of thinking unless you have encountered it before.
Let's say the optimal ratio is $$$Q$$$. Thus, $$$\frac{\sum R_i}{\sum T_i} = Q$$$. We can binary search on $$$Q$$$ and check whether a given ratio is feasible. So, for a given $$$Q$$$, we have to check whether $$$\frac{\sum R_i}{\sum T_i} \ge Q$$$ is possible, which is the same as $$$\sum R_i - Q\cdot\sum T_i \ge 0$$$ (let's call the L.H.S. $$$P$$$). The constraints on $$$R_i, T_i$$$ are too high for any DP over their values, while the constraints on $$$R, C$$$ are quite low, although obviously not low enough to permit brute force. However, meet-in-the-middle looks promising, since we can accumulate $$$P$$$ while brute-forcing from one corner of the grid and check it while brute-forcing from the other (meeting in cells lying on some sort of diagonal of the grid).
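To illustrate just the binary-search-on-the-ratio trick in isolation (this is a minimal sketch, not the contest solution: it assumes a flat list of items, choosing exactly $$$k$$$ of them, no grid and no costs; `max_ratio_exactly_k` is my own name):

```python
def max_ratio_exactly_k(R, T, k, iters=60):
    """Maximize sum(R_i)/sum(T_i) over subsets of exactly k items
    by binary searching on the answer Q (all T_i assumed positive)."""
    lo, hi = 0.0, max(r / t for r, t in zip(R, T))
    for _ in range(iters):
        Q = (lo + hi) / 2
        # Feasible iff some k items give sum(R_i - Q*T_i) >= 0;
        # greedily take the k largest values of R_i - Q*T_i.
        vals = sorted((r - Q * t for r, t in zip(R, T)), reverse=True)
        if sum(vals[:k]) >= 0:
            lo = Q  # ratio Q is achievable, search higher
        else:
            hi = Q
    return lo
```

In the actual problem, the greedy "take the $$$k$$$ largest" check is replaced by the meet-in-the-middle feasibility check described below.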
This is in fact the solution, except that we also have to track the costs along with $$$P$$$ to know whether an option is actually feasible. So, when brute-forcing from one side, we store $$$(P_i, Cost_i)$$$ pairs on the diagonal cells and later sort them by $$$Cost_i$$$ to ease the second pass. While brute-forcing from the other end, we have $$$(P_j, Cost_j)$$$ and we want the highest $$$P_i$$$ whose cost satisfies $$$Cost_i \le K - Cost_j$$$. By storing the $$$P_i$$$ as prefix maxima, we can binary search for the highest-index option with cost $$$\le K - Cost_j$$$ and take its prefix-maximum $$$P_i$$$ (since we already sorted) to get our answer.
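The sort / prefix-maximum / binary-search lookup can be sketched as follows (Python; `build` and `best_P_within` are my own names, and the $$$(Cost_i, P_i)$$$ pairs are assumed to have already been collected by the first brute force):

```python
import bisect

def build(options):
    """options: list of (cost, P) pairs from the first half.
    Returns (sorted costs, prefix maxima of P) for budget lookups."""
    options = sorted(options)            # sort by cost
    costs = [c for c, _ in options]
    prefix_max, best = [], float('-inf')
    for _, p in options:
        best = max(best, p)              # best P among all cheaper options
        prefix_max.append(best)
    return costs, prefix_max

def best_P_within(costs, prefix_max, budget):
    """Largest P among options with cost <= budget (-inf if none)."""
    i = bisect.bisect_right(costs, budget)
    return prefix_max[i - 1] if i else float('-inf')
```

For each $$$(P_j, Cost_j)$$$ from the other side, the candidate answer is `P_j + best_P_within(costs, prefix_max, K - Cost_j)`.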
A number can be expressed as a sum of two squares only if all its prime factors of the form $$$4k+3$$$ occur an even number of times. So when taking the product of a few numbers, we are only interested in its prime decomposition; hence, we can store a bitvector for each $$$A_i$$$ denoting the odd/even occurrence of such prime factors.
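As a small illustration of the parity idea (not the contest code; here the "bitvector" is a Python set of the primes $$$\equiv 3 \pmod 4$$$ that occur an odd number of times, and both function names are mine):

```python
def odd_primes_3mod4(n):
    """Set of primes p ≡ 3 (mod 4) dividing n to an odd power."""
    parity = set()
    d = 2
    while d * d <= n:
        e = 0
        while n % d == 0:   # strip out all factors d, counting them
            n //= d
            e += 1
        if d % 4 == 3 and e % 2 == 1:
            parity.add(d)
        d += 1
    if n > 1 and n % 4 == 3:  # one leftover prime factor, exponent 1
        parity.add(n)
    return parity

def is_sum_of_two_squares(n):
    """n >= 1 is a sum of two squares iff the parity set is empty."""
    return not odd_primes_3mod4(n)
```

A product of several $$$A_i$$$ is a sum of two squares exactly when the xor (symmetric difference) of their parity sets is empty, which is what reduces the problem to linear algebra over GF(2).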
Now, $$$A_i \le 10^7$$$, so the number of primes is around $$$3 \cdot 10^5$$$. Normal Gaussian elimination would easily TLE, even with bitsets. Still, let's imagine doing Gaussian elimination traversing from the highest bit to the lowest and xoring when necessary. Notice that, while traversing, if any prime factor $$$> \sqrt{A_i}$$$ exists (*at most one can*), then the moment you xor it out, no bits $$$> \sqrt{A_i}$$$ will be on anymore (since only one such bit was on earlier and now you have xored it away). So you can simply simulate Gaussian elimination on the lower-order primes (i.e. primes $$$\le \sqrt{A_i}$$$) and handle that one highest bit (if it exists) separately. This, combined with bitsets for the lower-order primes, solves the problem without queries.
For queries, let's process them offline. Let $$$queries[l]$$$ store all the queries whose left endpoint is $$$l$$$. We will insert $$$A_i$$$ into our basis from the right side, with $$$i$$$ going from $$$N-1$$$ to $$$0$$$. After we have inserted the element at index $$$i$$$, we will answer all the queries which begin there, i.e. the ones in $$$queries[i]$$$.
We will use a simple fact about linear combinations of vectors, but in a clever manner, to answer the queries. If $$$v = b_1 + b_2 + \cdots + b_k$$$ (where $$$b_i \in Basis$$$), then we can remove one of the $$$b_i$$$ and insert $$$v$$$ in its place, keeping the span intact. So, while inserting, we will try to keep in the basis the elements with the leftmost index, so as to answer as many queries as possible. We can do this greedily: while inserting $$$v$$$, whenever we are about to xor it with some $$$b_i$$$, we first check which one has the smaller index and $$$swap(b_i, v)$$$ if $$$v$$$'s index is smaller. From then onwards, $$$v$$$ is part of the basis (in place of $$$b_i$$$) and we continue the Gaussian elimination with $$$b_i$$$ instead.
Now, when we answer the queries whose left index is $$$i$$$, the basis vectors of $$$A[i:N]$$$ will have the least possible indices. So for a query $$$i\text{ }R$$$, we simply need to count how many vectors in our basis have index $$$\le R$$$. This can easily be done with a BIT or a set.
I still don't understand how to implement div1 Hard. I'm referring to the part where we change basis.
What do you mean by swapping two vectors when we are about to xor them? To do Gaussian elimination we have to maintain some kind of row echelon form, so vectors in this form are already linear combinations of something from the basis. It's not clear what the index of $$$b_{i}$$$ is, since it's a linear combination. Do you do Gaussian elimination somehow differently?
Also some trivia:
1. In ptz 2016 there was a simpler problem with the same ideas, which went unsolved during the contest. Have we evolved so much that problems unsolved by the best teams in 5 hours 4 years ago are now OK to give in a 75-minute individual contest?
2. I was the author of that problem and I still don't understand how to write this one.
Hey, I think a simple code illustrating that would do more justice than words, so I have attached it below:
Given an array $$$a$$$ and $$$q$$$ queries of the form $$$l\text{ }r$$$, this solution finds the number of distinct integers that can be obtained by xoring some subsequence of $$$a[l:r]$$$.
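In case it helps, a minimal Python sketch of the same idea (not the attached code: Python ints stand in for bitsets, queries are 0-based inclusive, the empty subsequence contributes 0, and `solve` and the variable names are mine):

```python
def solve(a, queries, BITS=30):
    """ans[q] = number of distinct subsequence xors of a[l..r] = 2^rank.
    Insert a[i] for i = n-1 .. 0, greedily keeping for every basis slot
    the smallest index that can represent it, then answer the queries
    whose left endpoint is i."""
    n = len(a)
    by_left = [[] for _ in range(n)]
    for qi, (l, r) in enumerate(queries):
        by_left[l].append((r, qi))
    vec = [0] * BITS    # basis vector whose top set bit is b
    idx = [-1] * BITS   # smallest index achieving that vector
    ans = [0] * len(queries)
    for i in range(n - 1, -1, -1):
        x, xi = a[i], i
        for b in range(BITS - 1, -1, -1):
            if not (x >> b) & 1:
                continue
            if not vec[b]:              # empty slot: new basis vector
                vec[b], idx[b] = x, xi
                break
            if xi < idx[b]:             # keep the leftmost index in slot
                vec[b], x = x, vec[b]
                idx[b], xi = xi, idx[b]
            x ^= vec[b]                 # continue eliminating
        for r, qi in by_left[i]:
            rank = sum(1 for b in range(BITS) if vec[b] and idx[b] <= r)
            ans[qi] = 1 << rank
    return ans
```

The per-query rank count here is a linear scan; in the real solution it would be the BIT/set mentioned above.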
The above code's idea can be easily extended to the original solution using bitsets and some Fenwick tree or set.
Thanks for sharing the trivia. I believe roughly two major ideas were required to solve the problem:
Initially, we thought of keeping the problem without queries, but I felt that it required just one observation, which is not so hard once you imagine simulating Gaussian elimination naively and noticing the redundancy. The version with static queries can generally be solved with the divide-and-conquer trick (or maybe even overkilled with a segtree?). If this were the intended method, it wouldn't really add much to the existing problem and would feel like just another mechanical step, as top coders are already familiar with it. However, the method of greedily maintaining the basis for queries felt like an interesting challenge, and it certainly feels non-trivial to come up with.
Overall, the problem did turn out to be a little too difficult for 75 minutes. I think it would have been fine had the contest been around 90 minutes long. After all, the implementation looks fairly doable to me once you have figured out the solution.
On your point of whether we have actually evolved so much, I don't have any such comments. Maybe other experienced coders will be better able to address that. On a side note, recently, we did witness the very first problem of AGC 045 requiring non-trivial application of Gaussian Elimination.
When it is hard to talk about a solution, it is generally good and helpful to mention its complexity (and the (coming?) editorial should contain it).
I got accepted with time $$$\Theta(N (\#\{ p: \text{prime} \mid p \equiv 3 \pmod{4},\, p \le \sqrt{\max A_i}\})^2 / W + Q \log N)$$$ where $$$W$$$ is the word size (i.e. /64 from bitset). Not sure about worst case. Is the intended solution better than this?
I think the difficulty is OK (hard enough, of course), but I see no reason why there were only a few small examples if you knew it was hard.
Our solution has the same complexity as yours, so that's the intended time complexity. Here, I just wanted to convey the idea of how to solve it (I shouldn't have called these solution sketches editorials, I guess). The proper editorial will contain the complexity analysis and a reference solution.
I do regret not adding enough examples to Div1-Hard. Given that a generator was being used, we could have easily added larger samples. I am sorry for that.
You can check ksun's code (apparently there's a one line bug) or my code for Easy Win.
In that problem, we want to maintain the maximum weight basis in $$$O(SZ^2)$$$ time after each insertion, where $$$SZ=124$$$ is the size of each vector.
Suppose that all weights are distinct; namely, when the indices $$$1\ldots i$$$ are sorted in descending order of edge weight we get $$$(j_1,j_2,\ldots,j_i)$$$. Let $$$v_k$$$ denote the original vector corresponding to index $$$k$$$. Also define $$$\text{min_bit}(v)$$$ for a bit vector $$$v$$$ to be the position of the minimum bit set in $$$v$$$ (or $$$\infty$$$ if $$$v=0$$$).
Then for each $$$1\le k\le i$$$, define $$$basis_{j_k}=x\oplus v_{j_k}$$$ where $$$x\in \text{span}(v_{j_1},v_{j_2},\ldots,v_{j_{k-1}})$$$ and $$$x$$$ is chosen such that $$$\text{min_bit}(basis_{j_k})$$$ is maximized. In particular, if $$$v_{j_k}\in \text{span}(v_{j_1},v_{j_2},\ldots,v_{j_{k-1}})$$$, then $$$basis_{j_k}=0.$$$ This is essentially just Gaussian Elimination (without row swaps).
Obviously, all values of $$$\text{min_bit}(basis_{j_k})$$$ must be distinct (excluding $$$\infty$$$). Furthermore, the max weight basis consists of all $$$j$$$ such that $$$basis_j\neq 0$$$. When $$$\text{min_bit}(basis_{j_k})\neq \infty$$$, store $$$j_k$$$ at position $$$\text{min_bit}(basis_{j_k})$$$ of the integer vector $$$at$$$.
Now suppose that we add a new vector with weight $$$w_{ind}$$$. Repeat the following while $$$basis_{ind}\neq 0$$$:
Define $$$a=at_{\text{min_bit}(basis_{ind})}$$$. If $$$a=-1$$$ we just set $$$at_{\text{min_bit}(basis_{ind})}=ind$$$ and return. Otherwise, if $$$w_{ind}>w_a$$$, set $$$at_{\text{min_bit}(basis_{ind})}=ind$$$ and swap $$$ind$$$ with $$$a$$$ (the lighter index continues the elimination). Then set $$$basis_{ind}=basis_{ind}\oplus basis_a$$$.
$$$\text{min_bit}(basis_{ind})$$$ is strictly increasing during this process, so this terminates after $$$SZ$$$ steps. If the loop terminates without returning, then we can remove $$$ind$$$ from the basis.
The code should still work when not all edge weights are distinct, although the definition of $$$basis_j$$$ will be different.
The concept in Div1 Hard (maintaining lexicographically maximum or minimum basis) is almost the same; set the weight of index $$$i$$$ to be equal to $$$i$$$. Using vectors of indices instead of bitsets seems to be fast enough.
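A sketch of that insertion in Python (my own code, not ksun's or the reference solution; it uses the lowest-set-bit slot formulation and assumes distinct weights, with ints in place of `SZ`-bit vectors):

```python
def min_bit(x):
    """Position of the lowest set bit of x (x > 0)."""
    return (x & -x).bit_length() - 1

def insert(vec, wt, v, w):
    """Insert vector v with weight w into a maximum-weight basis.
    vec[b] holds the basis vector whose lowest set bit is b, and wt[b]
    its weight (vec[b] == 0 marks an empty slot)."""
    while v:
        b = min_bit(v)
        if vec[b] == 0:
            vec[b], wt[b] = v, w
            return True                 # v joined the basis
        if w > wt[b]:                   # heavier vector takes the slot
            vec[b], v = v, vec[b]
            wt[b], w = w, wt[b]
        v ^= vec[b]                     # min_bit(v) strictly increases
    return False                        # dependent vector is dropped
```

Each insertion does at most `SZ` xors, matching the $$$O(SZ^2)$$$ bound (with $$$O(SZ)$$$-bit vectors). For Div1-Hard, the weight is simply the (negated or mirrored) index.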
I wonder how many people failed Med because of precision errors. I'm not quite used to seeing binary-search problems with a $$$10^{-9}$$$ precision requirement, so I contemplated coding this solution for a bit.
Other than that, I don't even see how a solution with the correct idea that passes the examples can fail systests :/ And there were a lot of such solutions.
Any updates on the complete editorial? Thanks in advance!