I just did my first ABC in a while, so I decided to write up and share my solutions below. Feel free to leave questions in the comments!
There were several questions this round that had the potential to create precision issues; however, the solutions below give approaches that sidestep those errors altogether. Sadly, I think compiling and testing my A prevented me from winning the round :(
A — Multiplication 1
Just multiply the two numbers and print the product. It's not that hard.
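For completeness, a minimal C++ sketch (illustrative only, not my exact submission; it assumes the two inputs are small enough that their product fits in a long long):

```cpp
#include <iostream>
using namespace std;

int main() {
    long long a, b;
    cin >> a >> b;
    cout << a * b << "\n"; // inputs are small, so no overflow concerns here
    return 0;
}
```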
Time Complexity: $$$O(1)$$$. Click here for my submission.
B — Multiplication 2
This turns out not to be quite as easy as the last problem. Multiplying numbers this large is actually a challenge in C++ because the result is likely to be greater than $$$2^{63}$$$, and thus will cause long long overflow. Thus, in C++, we can't just multiply the numbers and check if the result is greater than $$$10^{18}$$$.
One possible approach is to switch to a different language in order to solve this problem. For example, Python integers seem like a natural choice given their ability to handle integers of arbitrary size. Java BigIntegers also seem workable for similar reasons. However, we have to be a little careful here: if you multiply all the integers together and only then check whether the result is greater than $$$10^{18}$$$, you might TLE: you're performing $$$O(N)$$$ multiplications on a number with $$$O(N)$$$ digits, so the time complexity could potentially be at least $$$O(N^2)$$$.
To get around this, we need a slightly smarter approach: multiply the numbers together, and print $$$-1$$$ as soon as our product exceeds $$$10^{18}$$$. Now, the number of digits we're working with is bounded, so this is safe time-wise. Note that you need to separately check if the input contains a $$$0$$$, since that's the only way the product can decrease: for example, on an input of $$$10^{10}$$$, $$$10^{10}$$$, $$$0$$$, you need to check for the $$$0$$$ separately because otherwise you'll stop as soon as you scan the first two numbers and reach a product of $$$10^{20}$$$.
But, what if you don't want to switch languages? It turns out that there's a pretty simple C++ solution. The key idea is that we can easily compute the base-10 logarithm of the answer, using the identity $$$\log A_1 A_2 A_3 \cdots A_N = \log A_1 + \log A_2 + \log A_3 + \cdots + \log A_N$$$. Then, the product is greater than $$$10^{18}$$$ if and only if its logarithm is greater than $$$18$$$.
To avoid precision errors, though, my solution only uses the logarithm to check that we can perform the computation without long long overflow. If the logarithm is smaller than some value slightly greater than $$$18$$$ (say, $$$18.1$$$), we perform the multiplication and print the result if it does not exceed $$$10^{18}$$$ (and $$$-1$$$ otherwise). Note that we also need to handle the $$$0$$$ case separately here because $$$\log 0$$$ is undefined.
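Here's a rough C++ sketch of this idea (illustrative only, not my exact submission; the $$$18.1$$$ cutoff and variable names are arbitrary):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> a(n);
    for (auto &x : a) cin >> x;

    // If any factor is 0, the product is 0 (and log 0 is undefined), so handle it first.
    if (find(a.begin(), a.end(), 0LL) != a.end()) {
        cout << 0 << "\n";
        return 0;
    }

    // Sum of base-10 logs = log of the product.
    double logSum = 0;
    for (long long x : a) logSum += log10((double)x);

    const long long LIMIT = 1000000000000000000LL; // 10^18
    if (logSum > 18.1) {
        // The product is certainly larger than 10^18.
        cout << -1 << "\n";
    } else {
        // Small enough to multiply without long long overflow, since every factor is >= 1.
        long long prod = 1;
        for (long long x : a) prod *= x;
        cout << (prod > LIMIT ? -1 : prod) << "\n";
    }
    return 0;
}
```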
Time Complexity: $$$O(N)$$$. Click here for my submission.
C — Multiplication 3
One could perform the multiplication using doubles, but precision errors are always scary, so let's try a different approach. Let's read $$$A$$$ as a long long and $$$B$$$ as two integers: one representing its integer part and one representing its decimal part. Then, we can write the integer $$$100B$$$ as the sum of $$$100$$$ times $$$B$$$'s integer part plus the two digits representing $$$B$$$'s fractional part. For example, in the first sample, we read $$$1$$$ as $$$B$$$'s integer part and $$$10$$$ as its decimal part, so $$$100B = 110$$$.
Now that we have $$$A$$$ and $$$100B$$$ as integers, we can compute $$$100AB$$$ by taking their product. Then, we can compute the integer part of $$$AB$$$ by dividing this by $$$100$$$, noting that division in C++ drops the fractional part.
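A rough C++ sketch of this approach (illustrative, not my exact submission; it assumes $$$B$$$ is given in the fixed "x.yz" format and that $$$100AB$$$ fits in a long long):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long a;
    string b;
    cin >> a >> b; // B is assumed to look like "x.yz": an integer part, a dot, two digits

    // 100*B = 100 * (integer part) + (the two fractional digits).
    long long intPart = stoll(b.substr(0, b.find('.')));
    long long fracPart = stoll(b.substr(b.find('.') + 1));
    long long b100 = intPart * 100 + fracPart;

    // 100*A*B divided by 100 with integer division drops the fractional part of A*B.
    cout << (a * b100) / 100 << "\n";
    return 0;
}
```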
Time Complexity: $$$O(1).$$$ Click here for my submission.
D — Div Game
First, let's factor $$$N$$$ in $$$O(\sqrt{N})$$$ time. This is doable by attempting trial divisions by all numbers up to $$$\sqrt{N}$$$. Then, if there's anything left over when all these factors are removed from $$$N$$$, what's left must be $$$N$$$'s last prime factor.
Notice that since each $$$z$$$ we select is a power of a single prime, we can process each of $$$N$$$'s prime factors independently: compute the number of times we can apply the given operation to that prime power, and sum up the results.
So, we now only need to solve the problem for numbers equal to $$$p^k$$$ for some prime $$$p$$$. We first attempt a simple greedy algorithm: divide out $$$p$$$, then $$$p^2$$$, then $$$p^3$$$, and so on, until our input is no longer divisible by the next power of $$$p$$$; let the last power of $$$p$$$ we divided out be $$$p^x$$$. Then, we claim that $$$x$$$ is the maximum number of times we can apply this operation to this number.
First, we see that $$$x+1$$$ is impossible: since $$$p \cdot p^2 \cdot p^3 \cdots p^{x+1}$$$ does not divide $$$p^k$$$, even the smallest possible $$$x+1$$$ distinct choices of $$$z$$$ are too large in total, so we cannot perform $$$x+1$$$ or more operations. Then, we see that $$$x$$$ is doable: apply the operations with $$$p$$$, $$$p^2$$$, and so on, up to $$$p^{x-1}$$$, then apply one last operation to whatever is left over. The leftover factor must be at least $$$p^x$$$ because we know $$$p \cdot p^2 \cdot p^3 \cdots p^x$$$ divides $$$p^k$$$, so we never repeat a value of $$$z$$$, and this is a valid series of operations. Thus, we can't do any better than $$$x$$$, but we can definitely achieve $$$x$$$ operations, so $$$x$$$ is our answer for $$$p^k$$$.
We thus compute the value of $$$x$$$ for each $$$p^k$$$ and sum up the results to get our answer. Though there are more efficient ways to compute $$$x$$$, you can just do it naively (by adding $$$1$$$, then $$$2$$$, then $$$3$$$, and so on, until the result exceeds $$$k$$$): $$$N$$$ can't have very many prime factors and none of them can be raised to especially large powers, so the $$$O(\sqrt{N})$$$ factor from factoring $$$N$$$ will dominate.
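Putting the factorization and the per-prime greedy together, here's a rough C++ sketch (illustrative, not my exact submission):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long n;
    cin >> n;

    // Factor n by trial division up to sqrt(n); whatever is left over is a single prime.
    map<long long, long long> factors; // prime -> exponent
    long long m = n;
    for (long long d = 2; d * d <= m; ++d) {
        while (m % d == 0) {
            ++factors[d];
            m /= d;
        }
    }
    if (m > 1) ++factors[m];

    // For each prime power p^k, greedily spend exponents 1, 2, 3, ... while they still fit in k.
    long long answer = 0;
    for (const auto &pk : factors) {
        long long k = pk.second;
        long long used = 0, step = 1;
        while (used + step <= k) {
            used += step;
            ++step;
            ++answer;
        }
    }
    cout << answer << "\n";
    return 0;
}
```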
Time Complexity: $$$O(\sqrt{N}).$$$ Click here for my submission.
E — Count Median
So that we can work with integers rather than decimals, we redefine the median for even $$$N$$$ to be $$$x_{N/2} + x_{N/2 + 1}$$$, which is exactly twice the usual median. This does not change the number of unique medians: we're essentially doubling every number in the set of possible medians, so two equal medians stay equal and two different medians stay different.
We can easily compute the smallest and largest possible medians: take the medians of the arrays $$$A$$$ and $$$B$$$, respectively. Let these medians be $$$Y$$$ and $$$Z$$$. Then, we make a critical claim: the medians we can achieve are exactly the set of integers between $$$Y$$$ and $$$Z$$$. Thus, our answer is the number of integers from $$$Y$$$ to $$$Z$$$, which is $$$Z-Y+1$$$.
But, of course, we need to prove this claim. Obviously, we can't achieve a non-integer median, nor can we have a median lower than $$$Y$$$ or greater than $$$Z$$$, so there's no way to have any medians that aren't integers from $$$Y$$$ to $$$Z$$$.
Now, we just need to show that every median from $$$Y$$$ to $$$Z$$$ is achievable. Start with an array equal to $$$A$$$, which has median $$$Y$$$. Here's the key observation: whenever we increase an integer in the array by $$$1$$$, the median will either not change or will increase by $$$1$$$. This can be proven by fairly simple casework, but it's also pretty intuitive. Thus, as we increase the integers by $$$1$$$ in some arbitrary order until the array becomes $$$B$$$, our median never skips any integers, so it goes from $$$Y$$$ to $$$Z$$$ and, at some point, touches all the integers in between. This shows that any integer between $$$Y$$$ and $$$Z$$$ is a possible median, as desired.
Now, we can sort the arrays $$$A$$$ and $$$B$$$, compute their medians, and print $$$Z-Y+1$$$ as our answer.
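A rough C++ sketch of the full solution (illustrative, not my exact submission; it assumes the pairs $$$A_i, B_i$$$ are read together and uses the doubled-median convention from above):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<long long> a(n), b(n);
    for (int i = 0; i < n; ++i) cin >> a[i] >> b[i];

    sort(a.begin(), a.end());
    sort(b.begin(), b.end());

    // Doubled median: 2 * middle element for odd N, sum of the two middle elements for even N.
    auto doubledMedian = [&](const vector<long long> &v) {
        if (n % 2 == 1) return 2 * v[n / 2];
        return v[n / 2 - 1] + v[n / 2];
    };

    long long y = doubledMedian(a); // smallest achievable (doubled) median
    long long z = doubledMedian(b); // largest achievable (doubled) median
    cout << z - y + 1 << "\n";
    return 0;
}
```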
Time Complexity: $$$O(N \log N).$$$ Click here for my submission.
F — Knapsack for All Subsets
Consider a subset $$$x_1, x_2, x_3, \cdots, x_k$$$ summing to $$$S$$$. Then, notice that there are $$$2^{N-k}$$$ subsets of $$$A$$$ containing this subset, because each of the $$$N-k$$$ remaining elements can either be included or excluded. So, for each subset of $$$k$$$ integers summing to $$$S$$$, we want to add $$$2^{N-k}$$$ to our answer.
We compute this summation using DP. Let $$$dp[i][j]$$$ be the sum of $$$2^{N-k}$$$ over all subsets of the first $$$i$$$ elements of the array $$$A$$$ that sum to $$$j$$$. Initially, $$$dp[0][0] = 2^N$$$ and $$$dp[0][j] = 0$$$ for all other $$$j$$$, since there is one subset of the first $$$0$$$ elements, which has size $$$0$$$ and sums to $$$0$$$.
Then, to transition, we can either add the next element of the array to our subset or not. If we don't add it, the result is simple: we add $$$dp[i][j]$$$ to $$$dp[i+1][j]$$$, effectively skipping this element. Alternatively, we can add the element to the subset. However, this increases $$$k$$$ by $$$1$$$, effectively multiplying the contribution $$$2^{N-k}$$$ by one-half, so we add $$$\frac{dp[i][j]}{2}$$$ to $$$dp[i+1][j + A[i+1]]$$$. Since we're working with modular arithmetic, we can divide by $$$2$$$ by multiplying by the modular inverse of $$$2$$$.
Then, our answer is $$$dp[N][S]$$$. Since we have $$$O(NS)$$$ states and each state has $$$O(1)$$$ transitions, this easily passes in time.
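Here's a rough C++ sketch of this DP, rolled into one dimension over the sum (illustrative, not my exact submission; it assumes the modulus is $$$998244353$$$, so swap in the one from the statement if it differs):

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353; // assumed modulus; use the one from the problem statement

// Fast modular exponentiation, used for 2^N and the modular inverse of 2.
long long power(long long base, long long exp, long long mod) {
    long long result = 1;
    base %= mod;
    while (exp > 0) {
        if (exp & 1) result = result * base % mod;
        base = base * base % mod;
        exp >>= 1;
    }
    return result;
}

int main() {
    int n, s;
    cin >> n >> s;
    vector<int> a(n);
    for (auto &x : a) cin >> x;

    long long inv2 = power(2, MOD - 2, MOD); // modular inverse of 2

    // dp[j] = sum of 2^(N-k) over subsets of the elements processed so far that sum to j.
    vector<long long> dp(s + 1, 0);
    dp[0] = power(2, n, MOD); // the empty subset (k = 0) contributes 2^N

    for (int i = 0; i < n; ++i) {
        // Iterate sums in decreasing order so each element is used at most once.
        for (int j = s; j >= a[i]; --j) {
            dp[j] = (dp[j] + dp[j - a[i]] * inv2) % MOD;
        }
    }
    cout << dp[s] << "\n";
    return 0;
}
```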
Time Complexity: $$$O(NS).$$$ Click here for my submission.