Since it will be a little while longer before the editorials for the recent Division 3 Round come out, I thought I'd post some solutions I wrote up. Please feel free to post below if you have any questions or notice any errors. If any of my solutions end up getting hacked, I'll replace the submission and solution text as soon as possible. Links to my submissions are at the bottom of this post.
A: We can easily prove that the answer will not be much bigger than A, so simply iterating through candidate values starting at A until we find one that works is fast enough. (As a trivial bound, note that 1003 is interesting, so we will test at most 1003 numbers.) The only remaining step is computing the digit sum of a number, which is easy with modular arithmetic.
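For reference, here's a minimal sketch of this in C++ (it assumes the input is just the single integer A, and that "interesting" means the digit sum is divisible by 4, as in the statement):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Digit sum via repeated mod/div.
int digitSum(int x) {
    int s = 0;
    while (x > 0) { s += x % 10; x /= 10; }
    return s;
}

int main() {
    int a;
    cin >> a;
    int n = a;
    while (digitSum(n) % 4 != 0) n++;  // the answer is only a few steps away
    cout << n << endl;
}
```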
B: Start by sorting the prices. It turns out that only the greatest and least prices matter. We have two key facts to prove:
- The answer is -1 if the difference between the greatest and least prices is greater than 2*K.
- Otherwise, the answer is the least price plus K.
The first fact has an easy proof: if the difference between these two prices is greater than 2*K, no single new price can be within K of both of them. Otherwise, we can easily check that the least price plus K has absolute difference at most K from all of the other prices, so it is a valid answer. Since any valid answer must be within K of the least price, the least price plus K is clearly optimal, so it's our answer. This can be implemented in O(N log N) by sorting or in O(N) by simply tracking the least and greatest prices while reading in the data.
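Here's a sketch of the O(N) version (assuming the usual format of Q queries, each giving N, K, and then the prices):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q;
    cin >> q;
    while (q--) {
        long long n, k;
        cin >> n >> k;
        long long lo = LLONG_MAX, hi = LLONG_MIN;
        for (int i = 0; i < n; i++) {
            long long a;
            cin >> a;
            lo = min(lo, a);
            hi = max(hi, a);
        }
        // If the extremes are more than 2*K apart, no single price works.
        if (hi - lo > 2 * k) cout << -1 << "\n";
        else cout << lo + k << "\n";   // least price plus K is optimal
    }
}
```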
C: For convenience, we subtract one from K. Then, the answer is -1 if and only if K / B (using integer division) is less than N. Otherwise, subtract N * B from K and subtract B from A. Then, the answer is the minimum of N and K / A (again using integer division). Essentially, we're figuring out how many times we can add the difference between A and B (i.e. replacing B's with A's) until we have too little charge.
This is clearly an O(1) solution for each query, so our overall complexity is O(Q).
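Here's a sketch of the per-query computation (it relies on B < A, which the constraints guarantee):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q;
    cin >> q;
    while (q--) {
        long long k, n, a, b;
        cin >> k >> n >> a >> b;
        k--;                      // charge only has to stay positive, so work with K-1
        if (k / b < n) {          // even playing-and-charging every turn runs out of charge
            cout << -1 << "\n";
            continue;
        }
        k -= n * b;               // charge left after N "play and charge" turns
        cout << min(n, k / (a - b)) << "\n";   // each upgrade to "just play" costs a-b extra
    }
}
```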
D/E: See G/H.
F: The key observation is that any number up to 200,000 will have a relatively small number of factors (in fact, it turns out there are at most 160 of them). Ignore duplicates within the data, since obviously they're never useful, and sort the data such that A[0] is the greatest value. Then, we can see based on this observation that if A[0] is in a set of two problems, the other problem can be at least roughly A[170] (adding a bit of wiggle room on top of the 160 factors to prevent edge cases from becoming an issue), so the higher number in a set of two problems should obviously never be less than A[170]. Likewise, if A[0] is in a set of three problems, the second problem can be at least A[170], and the third number can be at least A[340] (since there will be at most about 320 factors between A[0] and the second problem combined), so in this case, the first problem should be no less than A[340].
To actually implement the solution, we start by taking A[0] as the best set of one problem. Then, to deal with sets of two problems, we iterate through the largest 170 values in the array. For each value, we find the greatest smaller value in the array that does not divide it, and we check whether this gives us a prettier pair of problems. (Obviously, we should take the greatest possible second problem in this case.)
Finally, to deal with sets of three problems, we start by considering the first 340 problems as potential first problems. Then, we build an array of the 170 largest candidate values for our second and third problems. (Since we can take the largest candidate and guarantee that we'll find another viable candidate in this array, we know that we don't need any numbers outside of the first 170 candidates.)
Checking all 170 choose 2 pairs of potential second and third problems would be fine if we only had a single query of size up to 200,000, but with lots of queries (consider about 400 queries of size roughly 500), it may be possible to break this solution (because the optimization of checking only the first 340 elements in the array as first problems doesn't help much if there aren't many problems in the array in the first place). It may be possible to pass with a few constant-factor optimizations, since not many cases can break this in the first place, but to be completely safe, we can implement a solution that finds the best pair of candidates in O(N) time in the number of candidates.
To do so, maintain a stack containing the candidate values we haven't paired up yet. Initialize this stack so that it contains only the first (largest) candidate. Then, process the remaining candidates in decreasing order. While the current candidate does not divide the value on top of the stack, pop that value, pair it with the current candidate, and check whether this is the best pair of candidates we've seen so far. Then, if the stack is empty, we've paired up our largest candidate, so we don't need to check any lower values and we can stop. If the stack is non-empty, push the current candidate onto the stack. The key observation (and the reason this works) is that any number in this stack divides all the numbers below it, so if a candidate divides the value on top of the stack, it also divides all the values below it. (So, if we can't pair a candidate with the value on top of the stack, we won't be able to pair it with any of the other unpaired values in the stack either.)
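Here's a sketch of just this pairing step (bestPair is a name I made up, and all the surrounding work of choosing first problems and building the candidate lists is omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Given distinct candidate values sorted in decreasing order, return the largest
// sum of two of them such that neither divides the other, or -1 if no such pair
// exists. Each value is pushed and popped at most once, so this runs in O(n).
ll bestPair(const vector<ll>& cand) {
    ll best = -1;
    if (cand.empty()) return best;
    stack<ll> st;                // unpaired values; the top divides everything below it
    st.push(cand[0]);
    for (size_t i = 1; i < cand.size(); i++) {
        ll c = cand[i];
        // Pair c with every stack value it does not divide; later candidates are
        // smaller, so c is the best remaining partner for those values.
        while (!st.empty() && st.top() % c != 0) {
            best = max(best, st.top() + c);
            st.pop();
        }
        if (st.empty()) break;   // the largest value has been paired; nothing better remains
        st.push(c);              // c divides the top (hence the whole stack), so it waits here
    }
    return best;
}

int main() {
    // Small made-up example: the best pair is 6 + 4 = 10 (everything else divides 12).
    cout << bestPair({12, 6, 4, 3}) << endl;
}
```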
To put it all together, add the best pair of candidates to the first value we're considering and check if their sum is greater than the best problem set we've found so far. Once we've checked all possible first values, print the best answer we've found.
Here's an alternate approach to the three-problem case: Observe that if A[0] is not included in the set, then the only possible set that gives a superior result to an alternative including A[0] is A[0] / 2, A[0] / 3, and A[0] / 5. So, we simply have to test this case and then all possible cases that do contain A[0], which proceeds similarly to our two-problem case. Credit to nishant403 for presenting this solution!
G: We first decide which numbers of candies we should use. Note that this alone is enough to answer the easy version. To do so, build an array containing the number of candies of each type and a set containing the numbers from 1 through N (i.e. the possible numbers of candies we might want to use). Then, for each type of candy, add the greatest value remaining in the set that does not exceed the number of candies of this type to our answer, and remove this value from the set (if no such value remains, this type contributes nothing). Essentially, we're greedily taking the most candies we can from each type, because obviously we won't get a higher answer by taking fewer candies than possible from a type.
One key observation is that the set of counts we've added to get our answer is unique; in other words, there is only one way to choose which counts of candies we'll use. This allows us to solve the hard version of this problem. Maintain a priority queue containing, for each eligible type of candy, the number of its candies with f = 1. Then, iterate through the counts we need to use in decreasing order (for example, in sample test three, these counts are 4, 3, and 2). Add to the priority queue the f = 1 counts of all remaining types with at least as many candies as the current count. Then, pick the greatest element in the priority queue and add the minimum of that element and the current count to the answer. Essentially, our strategy is to greedily use the greatest possible number of f = 1 candies at each step in this process, since we'll never want to save a larger number of f = 1 candies for a smaller count.
The complexity for each case is O(N log N), where the limiting steps are sorting the data and our usage of sets and priority_queues.
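Here's a sketch of the whole solution; it replaces the explicit set with a single pass over the sorted counts (which produces the same, unique multiset of counts) and assumes the hard version's output of two numbers per query, the gift size and the number of f = 1 candies in it:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int q;
    cin >> q;
    while (q--) {
        int n;
        cin >> n;
        vector<int> cnt(n + 1, 0), ones(n + 1, 0);
        for (int i = 0; i < n; i++) {
            int a, f;
            cin >> a >> f;
            cnt[a]++;          // candies of this type
            ones[a] += f;      // of which f = 1
        }
        // One (count, f=1 count) entry per type, largest counts first.
        vector<pair<int, int>> types;
        for (int t = 1; t <= n; t++)
            if (cnt[t] > 0) types.push_back({cnt[t], ones[t]});
        sort(types.rbegin(), types.rend());

        // Pass 1: the multiset of counts we use is forced -- take as many candies
        // as possible from each type while keeping the chosen counts distinct.
        vector<int> used;
        int prev = n + 1;
        for (auto& p : types) {
            int take = min(p.first, prev - 1);
            if (take <= 0) break;
            used.push_back(take);
            prev = take;
        }

        // Pass 2: for each chosen count (largest first), use the eligible type
        // (count >= current number) holding the most f = 1 candies.
        long long total = 0, fav = 0;
        priority_queue<int> pq;
        size_t j = 0;
        for (int u : used) {
            while (j < types.size() && types[j].first >= u) pq.push(types[j++].second);
            total += u;
            fav += min(pq.top(), u);
            pq.pop();
        }
        cout << total << " " << fav << "\n";
    }
}
```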
H: Note that the solution presented here solves both E/H at once. In-contest, I didn't see what made E substantially easier than H, so I used the same solution for both of them.
We first consider a subproblem: how many distinct subsequences of length L exist in our string? To solve this subproblem, start by counting the number of distinct letters in each suffix of our string. (This gives us the answer for L=1.) Then, we update this array L-1 times to compute the number of distinct subsequences of each length up to and including L. To do so, at index i, the number of subsequences of length L in the suffix of S starting at position i is the number of such subsequences in the suffix starting at position i+1, plus the number of subsequences of length L-1 in the suffix starting at position i+1, minus the number of subsequences of length L-1 in the suffix starting just after the next occurrence of S[i] (we overcounted these because choosing the next occurrence of S[i] rather than choosing S[i] gives us an identical subsequence). Once we've computed these arrays, we can then read off the number of subsequences of each length in the entire string S.
Once we've found these values, we can easily solve the problem by prioritizing the largest subsequences. Iterate through the possible subsequence lengths in decreasing order and add the appropriate number of subsequences with the current length (all of them, unless this gives us more than K total subsequences), and break the loop immediately once we get to K total subsequences.
One note is that there may be lots of subsequences: a trivial (though by no means strict) upper bound is 2^100, which is too many to fit into a long long. However, this is not an issue because to receive an incorrect answer, we would need there to be more than 2^63-1 subsequences of a certain length while there are less than K subsequences of all greater lengths. With K up to 10^12, this means that there would need to be more than 1,000,000 times as many subsequences of some length as there are of all greater lengths, which is clearly not going to happen. Thus, while some values in the array may cause an overflow, we won't look at them when computing our answer, so we don't need to do any separate work to account for them.
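Here's a compact sketch of the whole solution. It stores the counts in __int128, so the overflow question above doesn't arise (a plain long long version also works, for the reason just given), and it assumes, as in the statement, that the empty subsequence is allowed and that adding a subsequence of length L costs N - L:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int main() {
    ll n, k;
    string s;
    cin >> n >> k >> s;

    // nxt[i] = smallest j > i with s[j] == s[i], or n if there is none.
    vector<int> nxt(n, n);
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (s[j] == s[i]) { nxt[i] = j; break; }

    // dp[i][L] = number of distinct subsequences of length L of the suffix s[i..].
    // __int128 keeps the counts exact (they can reach 2^100, beyond long long).
    vector<vector<__int128>> dp(n + 1, vector<__int128>(n + 1, 0));
    for (int i = 0; i <= n; i++) dp[i][0] = 1;               // the empty subsequence
    for (int L = 1; L <= n; L++)
        for (int i = n - 1; i >= 0; i--) {
            dp[i][L] = dp[i + 1][L] + dp[i + 1][L - 1];
            if (nxt[i] < n) dp[i][L] -= dp[nxt[i] + 1][L - 1];   // remove duplicates
        }

    // Greedily take the longest subsequences first; one of length L costs n - L.
    ll need = k, cost = 0;
    for (int L = n; L >= 0 && need > 0; L--) {
        ll take = (ll)min<__int128>(need, dp[0][L]);
        cost += take * (n - L);
        need -= take;
    }
    cout << (need > 0 ? -1 : cost) << endl;
}
```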
The total runtime for the most efficient implementation of this idea is O(N^2). Less efficient implementations that run in O(N^3) will also pass.
My submissions are below:
Problem E (Note that this is the same code as my H.)
For B, the owner can do a price decrease too, right?
Yes.
Didn't get the O(1) C solution. How did you come up with that?
Our first step is checking whether the answer is -1 or not. Clearly, the cheapest way to make N moves is to play and charge for all of them, which has a total cost of B * N. Hence, if B * N < K, the answer is not -1, and if B * N >= K, then the answer is -1.
Afterwards, we subtract B * N from K to figure out how much charge we have left over. Then, we subtract B from A so that every time we add A, it's like subtracting the original B and adding the original A (in other words, it's equivalent to switching from playing/charging to just playing). Then, we simply need to figure out how many times we can add this new A without going over the remaining charge, which is where the final K / A division in my solution comes from.
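To make this concrete with some made-up numbers: say K = 15, N = 5, A = 3, B = 2. After subtracting one, K = 14; since 14 / 2 = 7 >= 5, the answer is not -1. Then K becomes 14 - 5*2 = 4 and A - B = 1, so we can convert min(5, 4 / 1) = 4 of the N turns from "play and charge" to "just play".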
Why did you subtract 1 from k?
Because you can use operation A only if K is strictly greater than A, and the same goes for B. Subtracting 1 allows you to ignore this restriction.
In other words, it makes the restriction softer: you can use operation A if K is at least A.
Still didn't get how (k-1-(n*b))/(a-b) gives you the maximum number of 'a' turns.
- `k-1` means that now you can use operation A if K is at least A instead of strictly greater than A (same with B).
- `k-1 - n*b` means that you try to achieve the goal without using any A's. You should do this to check whether it's even possible to make n turns at all.
- `(k-1-n*b) / (a-b)` means that you can replace one B with one A by paying cost `a-b`, so this is how many replacements you can afford.
You can throw away sets and heaps in problem G. I could explain why it works fast, but ~~I'm lazy~~ it's easy to prove: 56128923

I solved F in a similar way as mentioned in the alternative, but I don't know why it was hacked. Can someone mention the test where it fails? csegura
Edit: Is it because of the duplicate values present?
Probably so. The case used to hack you is here: https://mirror.codeforces.com/contest/1183/hacks/567571/test
Yes, that was the reason. I got it accepted after removing duplicate items. It's very unfortunate that no such test was present in the initial pretests.
I know what is going wrong in my code but am unable to figure out how to correct the mistake. Could someone let me know what correction I should make? Thanks in advance. Here is my submission: 56160090
Problem H:
You wrote "plus the number of subsequences of length L-1 in the suffix starting at position i-1" instead of i+1. Change it please, it confused me a lot.
Maybe this helps if someone still doesn't understand:
Let dp[i][L] be the number of distinct subsequences of length L starting at index at least i, and next[ s[i] ][i] the index of the next occurrence of char s[i] after position i.
Then: dp[i][L] = dp[i+1][L] + dp[i+1][L-1] - dp[ next[ s[i] ][i] + 1 ][L-1], where the last term is taken as 0 if s[i] does not occur again.