How to solve I and M?
I: There will be an optimal solution in which all the added numbers are in non-increasing order. Use this to formulate an $$$O(n^2 k)$$$ dynamic program and then optimize it with the divide and conquer trick or the convex hull trick.
M: I'm unsure about the solution, but apparently it can be proven that you spend all your money on at most two people. Then some sort of ternary search should work.
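Regarding the divide and conquer trick mentioned for I: here is a generic skeleton of that optimization, not tied to this problem (the `cost` function below is only a stand-in, since the real cost for I depends on the statement). It computes one DP layer cur[i] = min over k < i of prev[k] + cost(k, i) in $$$O(n \log n)$$$ evaluations of cost, assuming the optimal split point is monotone in i, which turns the $$$O(n^2 k)$$$ DP into $$$O(n k \log n)$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = LLONG_MAX / 4;

// Stand-in cost of putting positions (k, i] into one group; the real cost for
// problem I would come from the statement. Any cost with a monotone optimal split works.
ll cost(int k, int i) { return (ll)(i - k) * (i - k); }

vector<ll> prev_dp, cur_dp;   // previous layer and the layer being built

// Fills cur_dp[mid] = min over k < mid of prev_dp[k] + cost(k, mid)
// for every mid in [lo, hi], knowing the optimal k lies in [opt_lo, opt_hi].
void rec(int lo, int hi, int opt_lo, int opt_hi) {
    if (lo > hi) return;
    int mid = (lo + hi) / 2;
    pair<ll, int> best = {INF, opt_lo};
    for (int k = opt_lo; k <= min(mid - 1, opt_hi); k++)
        if (prev_dp[k] < INF)
            best = min(best, {prev_dp[k] + cost(k, mid), k});
    cur_dp[mid] = best.first;
    rec(lo, mid - 1, opt_lo, best.second);   // optimal k can only decrease to the left
    rec(mid + 1, hi, best.second, opt_hi);   // and only increase to the right
}

int main() {
    int n = 8, layers = 3;                   // toy sizes just to show the driver loop
    prev_dp.assign(n + 1, INF);
    prev_dp[0] = 0;
    for (int j = 0; j < layers; j++) {       // k layers: O(n^2 k) becomes O(n k log n)
        cur_dp.assign(n + 1, INF);
        rec(1, n, 0, n - 1);
        prev_dp = cur_dp;
    }
    cout << prev_dp[n] << '\n';              // best value splitting 1..n into `layers` groups
}
```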
For M, my team's solution was to map the soldiers to (x, y) points (budget*potency/cost, budget*health/cost) and then the goal becomes to maximize x * y. So then you build the convex hull of those points.
That way, it's only ever optimal to combine soldiers that are adjacent along the convex hull. The reasoning is that if you tried to merge more than two soldiers, you'd effectively be picking a point on a segment drawn between two non-adjacent hull points (and that segment lies strictly inside the hull), and the product x * y of any point on that segment is never greater than the best product attained somewhere along the outer edges of the hull.
So it's not really a proof but there's some intuition behind why you only combine 2 soldiers (also other solutions don't seem to use convex hull at all).
I'm having trouble convincing myself of this. Do you have any proof or good intuition on why it's true?
Some intuition: let's say there are two undecided positions i and j. They divide the array into three parts A, B and C, something like this:
-----A------|i|------B-----|j|-----C-----
Let's say there is some value Oi, which is the greatest optimal value to be chosen for position i. The smaller Oi is, the greater the number of elements it can pair with in part A; the bigger Oi is, the greater the number of elements it can pair with in parts B and C.

Let's say Oj is the same thing for position j. The only difference is that now the smaller Oj is, the greater the number of elements it can pair with in part B.

So, let's say Oj = Oi. If we increase Oj, it will obviously be worse than not increasing it, because the greater Oi was, the more pairs would be made with part B, and Oi is optimal by definition. Now, even worse, the number of pairs made within B is greater when Oj is smaller. Thus, it's always optimal to make Oj ≤ Oi, and you can extend this to any number of undecided elements.
M: Transform each soldier $$$(c, h, p)$$$ into the point $$$(h \cdot C / c, \; p \cdot C / c)$$$. The answer is a combination $$$x_1 v_1 + x_2 v_2 + \dots$$$ (where $$$v_i$$$ is the transformed point of soldier $$$i$$$); you take that resulting vector and maximize x * y. But that combination has another constraint: $$$x_1 + x_2 + \dots + x_n = 1$$$ with each $$$x_i \in [0, 1]$$$, so it's a convex combination (https://en.wikipedia.org/wiki/Convex_combination). This means you can take the convex hull, solve the problem on its vertices and edges, and that gives the greatest answer.
I understood the convex combination and that the solution lies inside the convex hull of the points, but why is it like that? Why must the $$$x_i$$$ sum to 1?

And what is the intuition behind this transformation: $$$(h \cdot C / c, \; p \cdot C / c)$$$?
Thank you!
That transformation is needed to make taking an object proportional: if it has weight 0 <= x <= 1, then you spend a fraction x of your total "funds" (I don't remember the exact story in the problem). Since not spending all the money is never optimal, you'll always spend everything, so the sum is 1.
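To make the hull idea above concrete, here is a rough C++ sketch of it. The input format (n and the budget C, then cost/health/potency per soldier) is a guess on my part, and a real submission would need more care with precision (see the discussion below); the point is just the transform, the hull, and maximizing $$$x \cdot y$$$ on each hull edge in closed form instead of by ternary search.

```cpp
#include <bits/stdc++.h>
using namespace std;

typedef double ld;  // long double might be safer; precision is discussed below
struct P { ld x, y; };

ld cross(const P& o, const P& a, const P& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone chain convex hull.
vector<P> hull(vector<P> pts) {
    sort(pts.begin(), pts.end(), [](const P& a, const P& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    int n = pts.size(), k = 0;
    if (n <= 2) return pts;
    vector<P> h(2 * n);
    for (int i = 0; i < n; i++) {                 // lower hull
        while (k >= 2 && cross(h[k-2], h[k-1], pts[i]) <= 0) k--;
        h[k++] = pts[i];
    }
    for (int i = n - 2, t = k + 1; i >= 0; i--) { // upper hull
        while (k >= t && cross(h[k-2], h[k-1], pts[i]) <= 0) k--;
        h[k++] = pts[i];
    }
    h.resize(k - 1);
    return h;
}

// Maximum of x*y on the segment a -> b: the product is a quadratic in the
// interpolation parameter t, so check the endpoints and the interior critical point.
ld best_on_edge(const P& a, const P& b) {
    ld dx = b.x - a.x, dy = b.y - a.y;
    ld A = dx * dy, B = a.x * dy + a.y * dx;
    ld res = max(a.x * a.y, b.x * b.y);
    if (fabs(A) > 1e-18) {
        ld t = -B / (2 * A);
        if (t > 0 && t < 1) res = max(res, (a.x + t * dx) * (a.y + t * dy));
    }
    return res;
}

int main() {
    int n; ld C;
    cin >> n >> C;                     // guessed input format: n C, then c h p per soldier
    vector<P> pts(n);
    for (auto& p : pts) {
        ld c, h, pw; cin >> c >> h >> pw;
        p = {h * C / c, pw * C / c};   // result of spending the whole budget on this soldier type
    }
    vector<P> hl = hull(pts);
    ld ans = 0;
    for (int i = 0; i < (int)hl.size(); i++)
        ans = max(ans, best_on_edge(hl[i], hl[(i + 1) % hl.size()]));
    cout << fixed << setprecision(9) << ans << "\n";
}
```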
It's much clearer now, thank you for the quick response!
Is it possible to solve M in C++? It seems like there is a precision problem. My Python code, using exactly the same logic, got AC.
We have reference solutions in C++ (and there are also AC submissions in C++ from other contestants, written independently). I'm not sure why yours is failing.
Thanks. BTW, do you by any chance know why my solution to E got Idleness limit exceeded on test 11? Thanks for your help.
http://mirror.codeforces.com/gym/101982/submission/45508248 here is the code.
Never mind, solved it. My n and m were the other way round...
How to solve D?
dp[number of digits filled in][current value modulo m]: dp[n][k] can transition to dp[n+1][2*k mod m] and dp[n+1][(2*k+1) mod m]. dp[n][k] = (number of ways to reach this state from [0][0], sum of 1-bits over those ways). The answer is dp[bits][0].second.
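A compact iterative sketch of that DP, assuming the input is the modulus m followed by the number of bits (mirroring the recursive code further down) and that the answer is reported modulo $$$10^9 + 9$$$ as mentioned later in the thread:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll MOD = 1e9 + 9;   // the modulus mentioned further down

int main() {
    ll m, bits;
    cin >> m >> bits;     // guessed input order, mirroring the recursive code below
    // dp[r] = {number of prefixes with value r (mod m), total 1-bits over those prefixes}
    vector<pair<ll, ll>> dp(m, {0, 0}), nxt(m);
    dp[0] = {1, 0};       // the empty prefix
    for (ll step = 0; step < bits; step++) {
        fill(nxt.begin(), nxt.end(), make_pair(0LL, 0LL));
        for (ll r = 0; r < m; r++) {
            if (dp[r].first == 0 && dp[r].second == 0) continue;
            for (ll b = 0; b <= 1; b++) {            // append bit b
                ll nr = (2 * r + b) % m;
                nxt[nr].first  = (nxt[nr].first + dp[r].first) % MOD;
                nxt[nr].second = (nxt[nr].second + dp[r].second + b * dp[r].first) % MOD;
            }
        }
        dp = nxt;
    }
    cout << dp[0].second << "\n";   // total 1-bits over all `bits`-bit strings divisible by m
}
```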
ty
Similar solution to tfg's: dp[position of the bit (from higher to lower)][number modulo n][how many ones I've put]. The base case is when pos == -1: if the number modulo n is 0, return the ones count (the third dimension), otherwise return 0. The transition is:
f(p, k, o) = f(p-1, (2^p + k)%n, o+1) + f(p-1, k, o)
I am not sure whether I got the question right. I tried two different approaches, one similar to osdajigu_'s. Can't seem to figure out what's wrong. Thanks. Post edit: found the issue, I was taking the wrong modulo number :/
```cpp
#include <bits/stdc++.h>
#define MAX 1015
#define MAXN 130
#define ll long long
#define mp make_pair
const ll mod = 1e9 + 9;
using namespace std;

pair<ll, ll> dp[MAXN][MAX];
ll vis[MAXN][MAX];
ll n, k;

// 2^b modulo k by fast exponentiation
ll fexpo(ll a, ll b) { ll ans = 1; a %= k; while (b) { if (b & 1) ans = ans * a % k; a = a * a % k; b >>= 1; } return ans; }

// returns {number of ways to fill bits idx..0, total count of 1-bits over those ways}
pair<ll, ll> solve(ll idx, ll rem) {
    if (idx == -1) return rem == 0 ? mp(1LL, 0LL) : mp(0LL, 0LL);
    if (vis[idx][rem]) return dp[idx][rem];
    pair<ll, ll> one = solve(idx - 1, (fexpo(2, idx) + rem) % k);  // put a 1 at bit idx
    pair<ll, ll> zero = solve(idx - 1, rem);                       // put a 0 at bit idx
    ll cnt = (one.first + zero.first) % mod;
    ll sum = (one.second + zero.second + one.first) % mod;
    vis[idx][rem] = 1;
    return dp[idx][rem] = mp(cnt, sum);
}

int main() { cin >> k >> n; cout << solve(n - 1, 0).second; return 0; }
```
Haha, I had the same mistake: I used modulo 1e9+7 instead of 1e9+9.
Did exactly the same :p
How to solve K?
It can be solved by DP. Let's focus on the minimum expected value; the maximum is similar. Let f(mask) = minimum expected value given the current mask (1 for available numbers, 0 otherwise). The transition would be something like the following.

You have to precompute which sets of numbers can be removed so that they sum to d, and add each such move to moves[d]. There are a few cases to handle; when you can't make any move, that is the base case. You also have to calculate the probability of getting d with 2 dice: just do a double nested loop, count each sum i+j, and then divide the counts (for d from 2 to 12) by 36.
Here is the code
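For reference, a minimal sketch of that expected-value DP. Several details are guesses on my part: I'm assuming the numbers on the board are 1 through 9, that the final value when no move is possible is the sum of what remains, and that we want the minimum expected final value (the maximum is symmetric, as noted above):

```cpp
#include <bits/stdc++.h>
using namespace std;

const int N = 9;                 // guessed: numbers 1..9 on the board
double memo[1 << N];
bool seen[1 << N];
double prob[13];                 // prob[d] = chance of rolling a total of d with two dice
vector<int> moves[13];           // moves[d] = submasks of {1..9} whose numbers sum to d

double f(int mask) {             // mask: bit i set  <=>  number i+1 is still available
    if (seen[mask]) return memo[mask];
    seen[mask] = true;
    double base = 0;             // guessed base value: sum of the remaining numbers
    for (int i = 0; i < N; i++) if (mask >> i & 1) base += i + 1;
    double exp_val = 0;
    for (int d = 2; d <= 12; d++) {
        double best = base;      // if no removal sums to d, the game ends here
        bool any = false;
        for (int s : moves[d])
            if ((mask & s) == s) {                 // every number of the move is available
                double v = f(mask ^ s);            // remove them and continue
                if (!any || v < best) best = v;    // the player minimizes (use > for maximum)
                any = true;
            }
        exp_val += prob[d] * best;
    }
    return memo[mask] = exp_val;
}

int main() {
    for (int i = 1; i <= 6; i++)                   // distribution of the sum of two dice
        for (int j = 1; j <= 6; j++) prob[i + j] += 1.0 / 36.0;
    for (int s = 1; s < (1 << N); s++) {           // group removable subsets by their sum
        int sum = 0;
        for (int i = 0; i < N; i++) if (s >> i & 1) sum += i + 1;
        if (sum <= 12) moves[sum].push_back(s);
    }
    cout << fixed << setprecision(6) << f((1 << N) - 1) << "\n";
}
```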
How to solve E, and B?
E: You have to find the shortest cycle around the cell containing 'B'. For each cell, consider 2 nodes in the graph, P and Q. Also, consider a line from 'B' to an edge of the grid. For a particular cell, you are on node Q if you have crossed that line an odd number of times in your path; otherwise, you are on node P. Now, calculate the shortest distance from a cell's node P to the same cell's node Q.
Problem E can be solved using max-flow (min-cut, actually). Let's build the following flow network: for every position of the matrix create two nodes (call them the entry node and the exit node) and add an edge between them with capacity equal to the cost of blocking that position if it has a letter ('a', 'b', ...), or INF in case it is the position of 'B' or a dot ('.'). The source will be the entry node of 'B', and the sink will be an extra node that we connect to the exit nodes of every position on the border with capacity INF. Then, for every position, add an edge from its exit node to the entry nodes of every adjacent position with capacity INF. Then run a max-flow algorithm on this graph, and that is the answer. What we are actually computing is the minimum cost to separate the sink from the source (a min-cut), which equals the max-flow. The problem is similar to this one http://coj.uci.cu/24h/problem.xhtml?pid=2505 in case you want to practice this approach. Good luck.
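A sketch of that construction with Dinic's algorithm. The I/O details are assumptions on my part (here I read the grid dimensions, the grid, and then 26 blocking costs, one per letter), so treat them as placeholders; the graph-building part is what matters. If no finite cut exists (e.g. 'B' reaches the border through '.' cells only), the flow comes out as INF and would need special handling.

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll INF = 1e18;

struct Dinic {
    struct E { int to; ll cap; };
    vector<E> e;
    vector<vector<int>> g;
    vector<int> lvl, it;
    Dinic(int n) : g(n), lvl(n), it(n) {}
    void add(int u, int v, ll c) {
        g[u].push_back(e.size()); e.push_back({v, c});
        g[v].push_back(e.size()); e.push_back({u, 0});
    }
    bool bfs(int s, int t) {
        fill(lvl.begin(), lvl.end(), -1);
        queue<int> q; q.push(s); lvl[s] = 0;
        while (!q.empty()) {
            int u = q.front(); q.pop();
            for (int id : g[u])
                if (e[id].cap > 0 && lvl[e[id].to] == -1) { lvl[e[id].to] = lvl[u] + 1; q.push(e[id].to); }
        }
        return lvl[t] != -1;
    }
    ll dfs(int u, int t, ll f) {
        if (u == t) return f;
        for (int &i = it[u]; i < (int)g[u].size(); i++) {
            int id = g[u][i], v = e[id].to;
            if (e[id].cap > 0 && lvl[v] == lvl[u] + 1) {
                ll d = dfs(v, t, min(f, e[id].cap));
                if (d > 0) { e[id].cap -= d; e[id ^ 1].cap += d; return d; }
            }
        }
        return 0;
    }
    ll maxflow(int s, int t) {
        ll fl = 0;
        while (bfs(s, t)) {
            fill(it.begin(), it.end(), 0);
            while (ll f = dfs(s, t, INF)) fl += f;
        }
        return fl;
    }
};

int main() {
    int n, m;
    cin >> n >> m;
    vector<string> grid(n);
    for (auto& row : grid) cin >> row;
    vector<ll> cost(26);                       // assumed: one blocking cost per letter
    for (auto& c : cost) cin >> c;

    auto in  = [&](int i, int j) { return 2 * (i * m + j); };
    auto out = [&](int i, int j) { return 2 * (i * m + j) + 1; };
    int sink = 2 * n * m, source = -1;
    Dinic din(2 * n * m + 1);

    int di[] = {1, -1, 0, 0}, dj[] = {0, 0, 1, -1};
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            char c = grid[i][j];
            ll cap = (c == 'B' || c == '.') ? INF : cost[c - 'a'];
            din.add(in(i, j), out(i, j), cap);          // cutting this edge = blocking the cell
            if (c == 'B') source = in(i, j);
            if (i == 0 || j == 0 || i == n - 1 || j == m - 1)
                din.add(out(i, j), sink, INF);          // border cells reach the outside freely
            for (int d = 0; d < 4; d++) {
                int ni = i + di[d], nj = j + dj[d];
                if (ni >= 0 && ni < n && nj >= 0 && nj < m)
                    din.add(out(i, j), in(ni, nj), INF);
            }
        }
    cout << din.maxflow(source, sink) << "\n";
}
```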
Nice, thank you
For problem E, why do standard max-flow algorithms (Dinitz and Push-Relabel) run very fast even though the graphs involved have about $$$2nm = 1800$$$ nodes and a similar order of magnitude of edges, while these algorithms take cubic time in the worst case? Is it simply that the test data does not have the worst case, or is there a tighter analysis you can do on grid graphs like these to prove better upper bounds?
What is the time complexity for B? My solution is about $$$O(10^9)$$$, using prime factorization and inclusion-exclusion, which I think gives TLE. So how do you solve this problem? Thanks in advance.
I assume that you mean your solution is $$$O(\max(a,b,c,d))$$$ and just iterates up to $$$10^7$$$, $$$10^9$$$ would TLE.
I never actually figured out why this worked, but during the contest I just observed that the inclusion-exclusion only took on values of 0 or 1, and it ran fast enough when I put the answers in a char array (but it was 2-4x slower with an int array).
You can also solve the problem in $$$O(n)$$$ with Möbius inversion and a linear-time sieve.
Can you explain how to solve it in $$$O(n)$$$? Thanks in advance.
Nisiyama_Suzune's excellent blog on Möbius inversion describes this exact problem.
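For reference, a linear sieve that produces the Möbius function, which is the main building block of that $$$O(n)$$$ approach. The `main` below shows the usual summation pattern on a toy example of counting coprime pairs; the exact formula for B depends on its statement, which I won't guess here.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Linear sieve: computes the Möbius function mu[1..n] in O(n).
vector<int> mobius_sieve(int n) {
    vector<int> mu(n + 1), primes;
    vector<bool> comp(n + 1, false);
    mu[1] = 1;
    for (int i = 2; i <= n; i++) {
        if (!comp[i]) { primes.push_back(i); mu[i] = -1; }
        for (int p : primes) {
            if ((long long)p * i > n) break;
            comp[p * i] = true;
            if (i % p == 0) { mu[p * i] = 0; break; }  // p^2 divides p*i
            mu[p * i] = -mu[i];
        }
    }
    return mu;
}

int main() {
    // Example of the usual pattern (not necessarily problem B itself):
    // the number of pairs (x, y) with 1 <= x <= A, 1 <= y <= B and gcd(x, y) = 1
    // equals the sum over d of mu(d) * floor(A/d) * floor(B/d).
    int A = 10, B = 14;
    vector<int> mu = mobius_sieve(min(A, B));
    long long pairs = 0;
    for (int d = 1; d <= min(A, B); d++)
        pairs += (long long)mu[d] * (A / d) * (B / d);
    cout << pairs << "\n";   // prints 90 for this toy example
}
```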
I'm still stuck on F (Rectangles), even after reading the editorial. Here it is.

I'm struggling with the lazy propagation part and with how to divide the regions, etc., since I only know how to do the standard version without the "odd" property.
Here is the problem statement:
You are given several ($$$N \le 10^5$$$) axis-aligned rectangles. Compute the total area of the regions that are covered by an odd number of rectangles.
Any ideas anyone?
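One idea, in case it helps: sweep over x. Each rectangle contributes two vertical edges, and both of them simply toggle the parity of coverage on the y-range $$$[y_1, y_2)$$$, so the segment tree only needs a lazy "flip" operation that swaps how much of a node's range currently has odd parity. Between consecutive x events, add (length with odd parity) * (x distance) to the answer. A sketch with compressed y coordinates (the input format is assumed):

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

int M;                        // number of elementary y-intervals
vector<ll> ylen;              // ylen[i] = length of the i-th elementary y-interval
vector<ll> oddlen;            // oddlen[node] = length with odd coverage inside the node
vector<ll> total;             // total[node] = total length inside the node
vector<bool> flip;            // lazy: pending parity flip for the whole node

void build(int node, int l, int r) {
    if (l == r) { total[node] = ylen[l]; return; }
    int mid = (l + r) / 2;
    build(2 * node, l, mid);
    build(2 * node + 1, mid + 1, r);
    total[node] = total[2 * node] + total[2 * node + 1];
}
void apply_flip(int node) { oddlen[node] = total[node] - oddlen[node]; flip[node] = !flip[node]; }
void push(int node) {
    if (flip[node]) { apply_flip(2 * node); apply_flip(2 * node + 1); flip[node] = false; }
}
void update(int node, int l, int r, int ql, int qr) {   // flip parity on [ql, qr]
    if (qr < l || r < ql) return;
    if (ql <= l && r <= qr) { apply_flip(node); return; }
    push(node);
    int mid = (l + r) / 2;
    update(2 * node, l, mid, ql, qr);
    update(2 * node + 1, mid + 1, r, ql, qr);
    oddlen[node] = oddlen[2 * node] + oddlen[2 * node + 1];
}

int main() {
    int n; cin >> n;                                    // assumed: n, then x1 y1 x2 y2 per rectangle
    vector<ll> x1(n), y1(n), x2(n), y2(n), ys;
    for (int i = 0; i < n; i++) {
        cin >> x1[i] >> y1[i] >> x2[i] >> y2[i];
        ys.push_back(y1[i]); ys.push_back(y2[i]);
    }
    sort(ys.begin(), ys.end()); ys.erase(unique(ys.begin(), ys.end()), ys.end());
    M = max<int>(1, ys.size() - 1);
    ylen.assign(M, 0);
    for (int i = 0; i + 1 < (int)ys.size(); i++) ylen[i] = ys[i + 1] - ys[i];
    oddlen.assign(4 * M, 0); total.assign(4 * M, 0); flip.assign(4 * M, false);
    build(1, 0, M - 1);

    vector<array<ll, 3>> ev;                            // (x, y_low, y_high): a vertical edge
    for (int i = 0; i < n; i++) {
        ev.push_back({x1[i], y1[i], y2[i]});
        ev.push_back({x2[i], y1[i], y2[i]});
    }
    sort(ev.begin(), ev.end());

    ll ans = 0, prevx = ev[0][0];
    for (auto& e : ev) {
        ans += oddlen[1] * (e[0] - prevx);              // area with odd coverage in this x-slab
        prevx = e[0];
        int lo = lower_bound(ys.begin(), ys.end(), e[1]) - ys.begin();
        int hi = lower_bound(ys.begin(), ys.end(), e[2]) - ys.begin();
        if (lo < hi) update(1, 0, M - 1, lo, hi - 1);   // both edges of a rectangle flip parity
    }
    cout << ans << "\n";
}
```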