We hope that you enjoyed the contest. Let's get right into the editorial:
Which numbers smaller than the median should be taken?
Which numbers bigger than the median should be taken?
Greedy algorithm.
Let's consider the array of $$$n$$$ elements in non-decreasing order. We can make all numbers before the median equal to zero; after that we have $$$m = \lfloor {\frac{n}{2}} \rfloor + 1$$$ numbers whose sum should be $$$s$$$ and whose minimum (i.e. the median value) should be maximized.
To do so, it is enough to make all these numbers equal to $$$\lfloor {\frac{s}{m}} \rfloor$$$ and then add what's left ($$$s \bmod m$$$) to the last number. It's easy to see that such an array satisfies all the conditions and that it is impossible to make the median greater.
#include <bits/stdc++.h>
using namespace std;
int main() {
int T;
cin >> T;
while (T --> 0) {
int n, s;
cin >> n >> s;
int m = n / 2 + 1;
cout << s / m << '\n';
}
return 0;
}
t = int(input())
for test in range(t):
n, s = map(int, input().split())
m = n // 2 + 1
print(s // m)
If it's possible to make the median element equal to $$$m > 1$$$, then you can also make it $$$m - 1$$$.
It is possible to make the median element any number from $$$0$$$ to some $$$M$$$.
We can use binary search for the answer.
Let's run a binary search for the answer. This works because if the answer is $$$M$$$, then we can decrease the median element by $$$d$$$ and add $$$d$$$ to the maximal element, so we can get any median value from $$$0$$$ to $$$M$$$, but we can't get more than $$$M$$$. Inside the binary search we use the same greedy technique: all numbers less than the median can be $$$0$$$, and all $$$m = \lfloor {\frac{n}{2}} \rfloor + 1$$$ numbers from the median onward should be at least $$$M$$$, where $$$M$$$ is the value checked by the binary search. The value $$$M$$$ is reachable if $$$M \cdot m \leq s$$$, because we can add the remaining $$$s - M \cdot m$$$ to the maximal number and the median will be $$$M$$$. Otherwise, the median element cannot be $$$M$$$.
#include <bits/stdc++.h>
using namespace std;
int main() {
int T;
cin >> T;
while (T --> 0) {
long long n, s;
cin >> n >> s;
long long L = 0, R = s + 1;
while (R - L > 1) {
long long M = (L + R) / 2;
long long m = n / 2 + 1;
if (m * M <= s) {
L = M;
} else {
R = M;
}
}
cout << L << '\n';
}
return 0;
}
t = int(input())
for test in range(t):
n, s = map(int, input().split())
L = 0
R = 10**10
while R - L > 1:
M = (L + R) // 2
m = n // 2 + 1
if m * M <= s:
L = M
else:
R = M
print(L)
The answer is never greater than $$$2$$$.
If a string consists only of $$$1$$$, then the answer is $$$0$$$.
If all zeroes are consecutive, then the answer is $$$1$$$.
The answer is never greater than $$$2$$$, because $$$\text{MEX}$$$ of the whole string is not greater than $$$2$$$.
The answer is $$$0$$$ only if there are no zeroes in the string.
Now we need to understand when the answer is $$$1$$$ and when it is $$$2$$$. The sum of $$$\text{MEX}$$$ is $$$1$$$ only if all zeroes form a single segment without ones. Then we can cut this segment out: its $$$\text{MEX}$$$ is $$$1$$$, everything else consists of ones, and their total $$$\text{MEX}$$$ is $$$0$$$.
If the zeroes do not form a single segment without ones, then there are two zeroes with a $$$1$$$ between them. Then either these zeroes are in a single segment together with that $$$1$$$, so the total $$$\text{MEX}$$$ is not less than $$$2$$$, or these zeroes are in different segments and the answer is still not less than $$$2$$$.
#include <bits/stdc++.h>
using namespace std;
int main() {
int T;
cin >> T;
while (T --> 0) {
string s;
cin >> s;
int zeroes = count(s.begin(), s.end(), '0');
if (zeroes == 0) {
cout << 0 << '\n';
continue;
}
int first = s.find('0');
int last = s.rfind('0');
if (last - first + 1 == zeroes) {
cout << 1 << '\n';
} else {
cout << 2 << '\n';
}
}
return 0;
}
t = int(input())
for test in range(t):
s = input()
zeroes = s.count('0')
if zeroes == 0:
print(0)
continue
first = s.find('0')
last = s.rfind('0')
if last - first + 1 == zeroes:
print(1)
else:
print(2)
You can cut out the columns with both $$$0$$$ and $$$1$$$.
Now each column contains only $$$0$$$s or only $$$1$$$s. We only need to solve the problem for a string, because each column can be replaced by a single digit (its elements are equal).
Let's be greedy: to each zero we will "join" not more than one $$$1$$$.
Let's solve the same problem but for a string:
We need to cut a binary string into segments so that each element is in exactly one segment and the sum of $$$\text{MEX}$$$ over all segments is maximal.
Initially we say that the string is cut into segments of length $$$1$$$. Then the answer is the number of zeroes in the string. After that, the answer increases every time we merge a segment of $$$0$$$ with a segment of $$$1$$$: each such merge increases the answer by $$$1$$$. Let's make the merges greedily, maximizing their number. Consider the first zero. If the previous element is a $$$1$$$, merge them and move on to the next zero. Else, if the next element is a $$$1$$$, merge them and move on to the next zero. Else, the next element is a zero, and we should consider it instead of the current zero in the same way. By doing so we get the answer as the number of zeroes plus the number of merges.
Now let's solve the initial problem. We can cut out the columns that contain both $$$0$$$ and $$$1$$$, because their $$$\text{MEX}$$$ is already maximized and the answer will not become worse.
Now we solve the problem for all remaining bi-tables independently. Each of their columns consists either only of $$$0$$$s or only of $$$1$$$s, so both rows are equal. We solve the problem for one row of each remaining bi-table as described above and then sum up the values to get the answer.
#include <bits/stdc++.h>
using namespace std;
int solve(string s) {
int ans = count(s.begin(), s.end(), '0');
int n = s.size();
bool a = false, b = false;
for (int i = 0; i < n; ++i) {
if (s[i] == '0') a = true;
if (s[i] == '1') b = true;
if (a && b) {
++ans;
a = b = false;
}
}
return ans;
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int T;
cin >> T;
while (T --> 0) {
int n, ans = 0;
string a, b;
cin >> n >> a >> b;
string s = "";
for (int i = 0; i < n; ++i) {
if (a[i] != b[i]) {
ans += 2 + solve(s);
s = "";
} else {
s += a[i];
}
}
cout << ans + solve(s) << '\n';
}
return 0;
}
This problem can be solved in many ways using dynamic programming. We will briefly describe these solutions.
Let $$$dp_i$$$ be the answer for the prefix of length $$$i$$$. Then there are different approaches:
We can calculate the dp values by iterating over all possible $$$\operatorname{MEX}$$$ values of the last segment. For example, if we want the last segment to have $$$\operatorname{MEX}$$$ equal to $$$2$$$, then we need to find the closest $$$0$$$ and the closest $$$1$$$ to position $$$i$$$. Let them be $$$last_0$$$ and $$$last_1$$$. Then we update the dp as $$$dp_i = \max(dp_i, dp_{j-1} + 2)$$$, where $$$j = \min(last_0, last_1)$$$: we take the shortest segment ending in $$$i$$$ which contains both a $$$0$$$ and a $$$1$$$, and add the answer for this segment to the answer for the prefix ending at $$$j-1$$$.
#include <bits/stdc++.h>
using namespace std;
#define FOR(i, l, r) for(int i = (l); i <= (r); ++i)
#define rep(i, l, r) for(int i = (l); i < (r); ++i)
typedef vector<int> vi;
//Did you REAALLY consider sample tests?
void solve(int tc) {
string s[2];
int n;
cin >> n;
cin >> s[0] >> s[1];
vi dp(n + 1, 0);
vi last(2, -1);
auto take = [&](int i, bool take0, bool take1) {
int MEX = 0;
if (take0) {
if (take1) MEX = 2;
else MEX = 1;
}
int index = i;
if (take0) index = min(index, last[0]);
if (take1) index = min(index, last[1]);
if (index != -1) return MEX + dp[index - 1];
return -100000;
};
FOR(i, 1, n) {
vi val(2);
rep(j, 0, 2) last[s[j][i-1] - '0'] = i;
dp[i] = dp[i-1];
dp[i] = max(dp[i], take(i, 1, 0));
dp[i] = max(dp[i], take(i, 0, 1));
dp[i] = max(dp[i], take(i, 1, 1));
}
cout << dp[n] << "\n";
}
int main() {
int tests;
cin >> tests;
FOR(test, 1, tests) {
solve(test);
}
}
Another possible dp solution is based on the fact that we should never take a segment longer than $$$x$$$, where $$$x$$$ is some small number. We can just take some big enough $$$x$$$ and not prove anything. In particular, there is a solution that never considers segments of length greater than $$$5$$$.
#include<bits/stdc++.h>
using namespace std;
int mex(string s){
int fl=0;
for(auto &nx : s){
if(nx=='0'){fl|=1;}
else if(nx=='1'){fl|=2;}
}
if(fl==3){return 2;}
if(fl==1){return 1;}
return 0;
}
int main(){
ios::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
while(t>0){
t--;
int n;
string s1,s2;
cin >> n >> s1 >> s2;
vector<int> dp(n+1,0);
for(int i=0;i<n;i++){
string s;
for(int j=0;j<5;j++){
if(i+j>=n){break;}
s.push_back(s1[i+j]);
s.push_back(s2[i+j]);
dp[i+j+1]=max(dp[i+j+1],dp[i]+mex(s));
}
}
cout << dp[n] << '\n';
}
return 0;
}
1566D2 - Seating Arrangements (hard version)
Each person can be seated on some subsegment of places with people with the same level of sight.
If $$$n=1$$$, we should seat each person on the maximal possible seat.
The places available for some person may be a subsegment of a row or a suffix of a row + some full rows + a prefix of a row.
Let's consider all seats in increasing order of their indices. For each seat, consider the level of sight of the person who takes it. This sequence is non-decreasing. It means that people with the same level of sight take consecutive seats, and for each level of sight $$$x$$$ we can determine exactly which seats will be taken by people with that level of sight.
Now we should find out how people with level of sight $$$x$$$ are seated. For such $$$x$$$ we know that these people take the seats with indices from $$$l$$$ to $$$r$$$.
Let's seat people with the same level of sight greedily. If all seats from $$$l$$$ to $$$r$$$ are in the same row, then we should fill them in decreasing order of seat indices. Otherwise, these seats form a suffix of some row (maybe empty), some full rows (maybe zero of them), and a prefix of some row (maybe empty). First we seat people on the suffix in decreasing order of seat indices. This is not worse than other seatings, because the seats before that suffix may be taken in the future, which would increase the total inconvenience. Seats on the prefix should be taken last of all, also in decreasing order of seat indices. This is not worse than other seatings, because if some people sat on that prefix earlier, they would bother the people to their right, increasing the total inconvenience. Full rows should be filled in decreasing order of seat indices; this does not increase the total inconvenience at all. So we should start by seating people on the suffix, then cover the full rows, and finally cover the prefix. In each case seats should be taken in decreasing order of seat indices.
The constraints on $$$m$$$ are low, so each time we seat someone we can scan all seats in the row and count how many already-seated people increase the inconvenience.
#include <bits/stdc++.h>
#define long long long int
using namespace std;
// @author: pashka
void solve_test() {
int n, m;
cin >> n >> m;
vector<pair<int, int>> a(n * m);
for (int i = 0; i < n * m; i++) {
cin >> a[i].first;
a[i].second = i;
}
sort(a.begin(), a.end());
for (int i = 0; i < n * m; i++) {
a[i].second = -a[i].second;
}
int res = 0;
for (int i = 0; i < n; i++) {
sort(a.begin() + (m * i), a.begin() + (m * i + m));
for (int j = 0; j < m; j++) {
for (int k = 0; k < j; k++) {
if (a[i * m + k].second > a[i * m + j].second) res++;
}
}
}
cout << res << "\n";
}
int main() {
ios::sync_with_stdio(false);
int n;
cin >> n;
for (int i = 0; i < n; i++) {
solve_test();
}
return 0;
}
Re-hanging a bud onto a leaf decreases the total number of leaves by $$$1$$$.
Buds can be re-hung in such a way that in the end there is only the root, buds connected to the root, and leaves connected to the root or to the buds.
Now the answer depends only on the total number of vertices, the number of buds, and whether there is a leaf connected to the root.
If we re-hang a bud from its parent to the root, the number of leaves either doesn't change (if the parent has other children) or increases by $$$1$$$. By doing that we don't make the answer worse: if there are leaves other than the bud's own leaves, we can later re-hang the bud onto some leaf, decreasing the number of leaves by $$$1$$$. So let's re-hang all buds to the root until there are no free buds left.
Now we assume that all buds are hung to the root and their number is $$$k$$$. The answer is $$$n - 2 \cdot k$$$ if there is no leaf hung to the root, or $$$n - 2 \cdot k - 1$$$ if there is one. Why is it so? Initially there are $$$n - k - 1$$$ leaves (the total number of vertices minus the number of buds minus the root). If there is a leaf hung to the root, we can hang a bud onto it, then hang a bud onto the previous bud's leaf, and keep doing so until we use all $$$k$$$ buds. That gives $$$k$$$ re-hangs, each of which decreases the answer by $$$1$$$, so the final answer is $$$n - k - 1 - k = n - 2 \cdot k - 1$$$. If there is no leaf hung to the root, we can hang a bud onto another bud's leaf, then a bud onto the previous bud's leaf, and so on until we use $$$k - 1$$$ buds. The final answer in this case is $$$n - k - 1 - (k - 1) = n - 2 \cdot k$$$. It is not possible to make the answer smaller, because each re-hang decreases the answer by at most $$$1$$$, each re-hang we make decreases it by exactly $$$1$$$, and we use the maximal possible number of re-hangs.
#include <bits/stdc++.h>
using namespace std;
vector<vector<int>> g;
vector<int> type; // -1 -- default, 0 -- root, 1 -- leaf, 2 -- bud
void dfs(int v, int p) {
bool leaves = false;
for (auto to : g[v]) {
if (to == p) continue;
dfs(to, v);
if (type[to] == 1) leaves = true;
}
if (v != p) {
if (!leaves) type[v] = 1;
else type[v] = 2;
}
}
int main() {
ios_base::sync_with_stdio(false);
cout.tie(0);
cin.tie(0);
int T;
cin >> T;
while (T --> 0) {
int n;
cin >> n;
g.assign(n, vector<int>());
type.assign(n, -1);
for (int i = 0; i < n - 1; ++i) {
int x, y;
cin >> x >> y;
--x; --y;
g[x].emplace_back(y);
g[y].emplace_back(x);
}
type[0] = 0;
dfs(0, 0);
bool root_leaf = false;
for (auto v : g[0]) {
if (type[v] == 1) {
root_leaf = true;
}
}
int k = 0;
for (int i = 0; i < n; ++i) {
k += (type[i] == 2);
}
cout << n - 2 * k - root_leaf << '\n';
}
return 0;
}
If a segment already contains a point, it can be thrown out of consideration.
If one segment contains another, we can keep only the smaller one.
If you already know which segments will be visited by some point, how should you calculate the answer for this point?
For some segment, which points should visit it?
Firstly, if a point initially lies inside some segment, then that segment can be thrown out of consideration. It means that all segments that cover at least one point can be thrown out. This can be done with a binary search for each segment separately.
Secondly, if a segment $$$A$$$ is contained in a segment $$$B$$$ (i.e. $$$l_B \le l_A \le r_A \le r_B$$$), then a point that visits $$$A$$$ will visit $$$B$$$ too. It means that we should keep only those segments that do not contain any other segment.
This can be done using a Fenwick tree. Initially, all values in the tree are zeroes. We will consider the segments in decreasing order of $$$l$$$. Assume we are considering segment $$$i$$$. If there is an already considered segment $$$j$$$ with $$$r_j \le r_i$$$, then segment $$$j$$$ lies inside segment $$$i$$$: indeed, $$$l_i \le l_j$$$ because of the considering order, so $$$l_i \le l_j \le r_j \le r_i$$$. To find the number of such $$$j$$$ it is enough to count the ones on the prefix up to $$$r_i$$$ in the Fenwick tree. After considering segment $$$i$$$, we set a $$$1$$$ in the Fenwick tree at position $$$r_i$$$.
After that, only segments that do not contain any other segment and are not initially visited by any point remain. We will consider only these segments.
Let's say that a segment is assigned to a point if that point visits it. Let's find out how to calculate the answer for one point if we already know the set of segments assigned to it. Consider the segments to the left of the point and let $$$a$$$ be the maximal distance from the point to these segments. In the same way, let $$$b$$$ be the maximal distance from the point to the segments on the right. Then the answer for the point is $$$2 \cdot \min(a, b) + \max(a, b)$$$: it is optimal to go to the nearer side first, return, and then go to the farther side, paying the smaller distance twice and the larger one once.
Now, if a segment lies between two neighbouring points, it should be assigned to one of these points. (If a segment is to the left of the leftmost point or to the right of the rightmost point, it should be assigned to the leftmost or the rightmost point respectively.) Consider the segments between two neighbouring points. These segments are ordered by their left ends and by their right ends at the same time. Some prefix of these segments should be assigned to the left point, and the remaining suffix should be assigned to the right point.
Now let's solve the problem using dynamic programming. Let $$$dp[i][j]$$$ be the answer if we have assigned the segments for the first $$$i$$$ points and there are $$$j$$$ segments assigned to the $$$i$$$-th point that lie to the right of it. There is a linear number of dp states. Let's see how to compute them. We consider the states in increasing order of $$$i$$$ and, for equal $$$i$$$, of $$$j$$$. Let $$$b$$$ be the distance from the $$$i$$$-th point to the $$$j$$$-th segment after it. Then $$$dp[i][j]=\displaystyle\min_{0 \le k \le x} \Big(dp[i-1][k]+2\cdot \min(b, a_{i, k})+\max(b, a_{i, k})\Big)$$$, where $$$a_{i, k}$$$ is the distance from the $$$i$$$-th point to the $$$(k+1)$$$-th segment after the $$$(i-1)$$$-th point and $$$x$$$ is the number of segments between points $$$i$$$ and $$$(i+1)$$$. But this dp is quadratic. Note that for some prefix of the segments after the $$$(i-1)$$$-th point $$$a_{i, k}$$$ is greater than $$$b$$$, and for the remaining suffix it is less than $$$b$$$. The length of that prefix can be found with binary search or two pointers. For $$$k$$$ in that prefix the transition is $$$dp[i-1][k] + x_i - r_{k+1}+2\cdot b$$$, and for $$$k$$$ in the suffix it is $$$dp[i-1][k] + 2\cdot(x_i - r_{k+1})+b$$$. In these formulas everything except $$$b$$$ depends only on $$$i$$$ and $$$k$$$, so we can compute the dp quickly using prefix and suffix minimums.
The answer is $$$dp[n][x]$$$, where $$$x$$$ is the number of segments to the right of the rightmost point.
#include <bits/stdc++.h>
using namespace std;
#define pb emplace_back
#define all(x) (x).begin(), (x).end()
#define fi first
#define se second
#define pii pair<int, int>
#define ll long long
const long long INFLL = 1e18;
const int INF = 1e9 + 1;
struct segment_tree {
vector<int> t;
segment_tree(int n) {
t.assign(4 * n, INF);
}
void mod(int v, int vl, int vr, int id, int val) {
if (vr - vl == 1) {
t[v] = min(t[v], val);
return;
}
int vm = (vl + vr) / 2;
if (id < vm) mod(2 * v + 1, vl, vm, id, val);
else mod(2 * v + 2, vm, vr, id, val);
t[v] = min(t[v], val);
}
int get(int v, int vl, int vr, int l, int r) {
if (vl >= l && vr <= r) return t[v];
if (r <= vl || l >= vr) return INF;
int vm = (vl + vr) / 2;
return min(get(2 * v + 1, vl, vm, l, r), get(2 * v + 2, vm, vr, l, r));
}
};
bool cmp(const pii &a, const pii &b) {
return a.se - a.fi < b.se - b.fi;
}
int main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int t;
cin >> t;
while (t--) {
int n, m;
cin >> n >> m;
vector<ll> a(n);
for (auto &c : a) cin >> c;
sort(all(a));
vector<pii> seg1(m);
vector<int> dl;
for (auto &c : seg1) {
cin >> c.fi >> c.se;
dl.pb(c.fi);
}
sort(all(dl));
dl.resize(unique(all(dl)) - dl.begin());
sort(all(seg1), cmp);
map<int, int> ma;
for (int i = 0; i < (int)dl.size(); i++) ma[dl[i]] = i;
segment_tree tr((int)dl.size());
vector<pii> seg;
for (auto &c : seg1) {
int id = lower_bound(all(a), c.fi) - a.begin();
if (id < (int)a.size() && a[id] <= c.se) continue;
if (tr.get(0, 0, dl.size(), ma[c.fi], dl.size()) <= c.se) continue;
tr.mod(0, 0, dl.size(), ma[c.fi], c.se);
seg.pb(c.fi, c.se);
}
m = seg.size();
sort(all(seg));
vector<vector<pii>> g(n + 1);
int L = -1, R = m;
while (R - L > 1) {
int M = (L + R) / 2;
if (seg[M].se < a[0]) L = M;
else R = M;
}
for (int i = 0; i <= L; i++) g[0].pb(seg[i]);
for (int i = 0; i < n; i++) {
int RIGHT = INF;
if (i + 1 < n) RIGHT = a[i + 1];
int id = upper_bound(all(seg), make_pair((int)a[i], (int)a[i])) - seg.begin();
if (id == m) continue;
int L = id - 1, R = m;
while (R - L > 1) {
int M = (L + R) / 2;
if (seg[M].se < RIGHT) L = M;
else R = M;
}
for (int j = id; j <= L; j++) g[i + 1].pb(seg[j]);
}
vector<vector<ll>> dp(n), pr(n), suff(n);
for (int i = 0; i < n; i++) {
dp[i].resize(g[i + 1].size() + 1, INFLL);
pr[i].resize(g[i + 1].size() + 1, INFLL);
suff[i].resize(g[i + 1].size() + 1, INFLL);
}
for (int j = 0; j <= (int)g[1].size(); j++) {
ll x = 0;
if (g[0].size()) x = a[0] - g[0][0].se;
ll y = 0;
if (j) y = g[1][j - 1].fi - a[0];
dp[0][j] = 2 * min(x, y) + max(x, y);
}
for (int i = 1; i < n; i++) {
for (int j = 0; j <= (int)g[i].size(); j++) {
if (j > 0) pr[i - 1][j] = pr[i - 1][j - 1];
pr[i - 1][j] = min(pr[i - 1][j], dp[i - 1][j] - (j == (int)g[i].size() ? a[i] : g[i][j].se));
}
for (int j = (int)g[i].size(); j >= 0; j--) {
if (j + 1 <= (int)g[i].size()) suff[i - 1][j] = suff[i - 1][j + 1];
suff[i - 1][j] = min(suff[i - 1][j], dp[i - 1][j] - 2 * (j == (int)g[i].size() ? a[i] : g[i][j].se));
}
int L = (int)g[i].size();
for (int j = 0; j <= (int)g[i + 1].size(); j++) {
ll y = 0;
if (j) y = g[i + 1][j - 1].fi - a[i];
while (L > 0 && a[i] - g[i][L - 1].se <= y) L--;
if (L > 0) dp[i][j] = min(dp[i][j], 2 * y + a[i] + pr[i - 1][L - 1]);
dp[i][j] = min(dp[i][j], y + 2 * a[i] + suff[i - 1][L]);
}
}
cout << dp[n - 1].back() << "\n";
}
return 0;
}
The answer always consists either of three edges with a common endpoint or of two edges that don't share any endpoints.
To find the answer of the second type you can keep in the graph only those edges that are among the three smallest edges for both of their endpoints.
Let's consider the cases when the minimal edge is in the answer and when it is not.
Observation: the answer always consists either of three edges that have one common endpoint or of two edges that do not have any common endpoints.
To find the answer of the first type it is enough to maintain the three minimal edges for each vertex.
To find the answer of the second type we will use this observation: we can keep in the graph only those edges that are among the three minimal edges for each of their endpoints. Let's prove it. Assume the answer consists of two edges $$$(a, b)$$$ and $$$(c, d)$$$ and there are at least three edges $$$(a, a_1)$$$, $$$(a, a_2)$$$, $$$(a, a_3)$$$ smaller than $$$(a, b)$$$. Then the edge $$$(a, b)$$$ can be replaced by one of these edges, because at least one of the vertices $$$a_1$$$, $$$a_2$$$, $$$a_3$$$ differs from both $$$c$$$ and $$$d$$$. So we will maintain the set of all such edges.
Now let's consider two cases. Let $$$(a, b)$$$ be the minimal edge. If it is in the answer, then we need to find the minimal edge that has no common vertices with $$$(a, b)$$$. In this case there are at most $$$6$$$ "bad" edges, because the degree of each vertex in the graph consisting only of the remaining edges does not exceed $$$3$$$. It means that we have to consider $$$O(1)$$$ candidates. If $$$(a, b)$$$ is not in the answer, then both edges of the answer have $$$a$$$ or $$$b$$$ as one of their endpoints. There are at most $$$9$$$ such pairs of edges, so again we have to consider only $$$O(1)$$$ variants.
So the final answer is the minimum of the answers of both types.
#include<bits/stdc++.h>
using namespace std;
#define sz(a) (int) (a).size()
const int N = 1e5;
vector<pair<int, int>> g[N];
map<pair<int, int>, int> cost;
set<pair<int, int>> a[N], b[N];
multiset<long long> dp;
set<pair<int, pair<int, int>>> dp2;
void solve() {
auto minr = *dp2.begin();
long long ans = 1e18;
if (sz(dp)) {
ans = min(ans, *dp.begin());
}
int v = minr.second.first, u = minr.second.second;
auto it = dp2.begin();
for (int i = 0; i < 6 && it != dp2.end(); i++, it = next(it)) {
auto res = *it;
int v2 = res.second.first, u2 = res.second.second;
if (v2 != v && v2 != u && u2 != v && u2 != u) {
ans = min(ans, (long long) res.first + minr.first);
}
}
for (auto [w, v2] : a[v]) {
for (auto [w2, u2] : a[u]) {
if (v2 != u2 && v2 != u && u2 != v) {
ans = min(ans, (long long) w + w2);
}
}
}
cout << ans << '\n';
}
void del(int v) {
if (sz(a[v]) >= 3) {
long long res = 0;
for (auto [w, to] : a[v]) {
res += w;
}
dp.erase(dp.find(res));
}
for (auto [w, to] : a[v]) {
if (dp2.find({w, {min(v, to), max(v, to)}}) != dp2.end()) {
dp2.erase({w, {min(v, to), max(v, to)}});
}
}
}
void upd(int v) {
if (sz(a[v]) >= 3) {
long long res = 0;
for (auto [w, to] : a[v]) {
res += w;
}
dp.insert(res);
}
for (auto [w, to] : a[v]) {
if (a[to].find({w, v}) != a[to].end()) {
dp2.insert({w, {min(v, to), max(v, to)}});
}
}
}
void relax(int v) {
while (sz(a[v]) && sz(b[v])) {
auto A = *a[v].rbegin(), B = *b[v].begin();
if (A.first <= B.first) break;
a[v].erase(A);
b[v].erase(B);
a[v].insert(B);
b[v].insert(A);
}
while (sz(a[v]) < 3 && sz(b[v])) {
auto B = *b[v].begin();
b[v].erase(B);
a[v].insert(B);
}
}
void erase(int v, int u, int w) {
del(v);
del(u);
if (a[v].find({w, u}) != a[v].end()) {
a[v].erase({w, u});
}
else {
b[v].erase({w, u});
}
if (a[u].find({w, v}) != a[u].end()) {
a[u].erase({w, v});
}
else {
b[u].erase({w, v});
}
relax(v);
relax(u);
upd(v);
upd(u);
}
void add(int v, int u, int w) {
del(v);
del(u);
b[v].insert({w, u});
b[u].insert({w, v});
relax(v);
relax(u);
upd(v);
upd(u);
}
signed main() {
ios_base::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
int n, m;
cin >> n >> m;
for (int i = 0; i < m; i++) {
int x, y, w;
cin >> x >> y >> w;
x--, y--;
if (x > y) swap(x, y);
cost[{x, y}] = w;
g[x].push_back({w, y});
g[y].push_back({w, x});
}
for (int i = 0; i < n; i++) {
sort(g[i].begin(), g[i].end());
for (int j = 0; j < sz(g[i]); j++) {
if (j <= 2) {
a[i].insert(g[i][j]);
}
else {
b[i].insert(g[i][j]);
}
}
if (sz(a[i]) >= 3) {
long long res = 0;
for (auto [w, to] : a[i]) {
res += w;
}
dp.insert(res);
}
}
for (int i = 0; i < n; i++) {
for (int j = 0; j < sz(g[i]); j++) {
int v = g[i][j].second;
if (j <= 2) {
if (a[v].find({g[i][j].first, i}) != a[v].end()) {
dp2.insert({g[i][j].first, {min(i, v), max(i, v)}});
}
}
}
}
int q;
cin >> q;
solve();
while (q--) {
int t, v, u;
cin >> t >> v >> u;
v--, u--;
if (v > u) swap(v, u);
if (t == 0) {
int w = cost[{v, u}];
erase(v, u, w);
}
else {
int w;
cin >> w;
cost[{v, u}] = w;
add(v, u, w);
}
solve();
}
}
Numbers with the same set of prime divisors may be treated as the same number.
Under these constraints, the number of distinct sets of prime divisors whose product does not exceed $$$C$$$ is at most $$$\lceil 0.65 \cdot C \rceil$$$.
How should you find the $$$xor$$$ of all numbers that have the same set of prime divisors?
If you know the $$$xor$$$ of all numbers with the same set of prime divisors, you can recover these numbers randomly.
Let $$$f(x)$$$ be the product of all distinct prime divisors of $$$x$$$. Then let's make queries for all $$$y$$$ such that there is at least one $$$x$$$, $$$1 \le x \le C$$$, with $$$f(x) = y$$$. Under the given constraints there will be at most $$$\lceil 0.65 \cdot C \rceil$$$ queries.
Let's group all numbers by $$$f(x)$$$. In any query, all numbers of one group are counted together. Let $$$ans(x)$$$ be the answer for the query with number $$$x$$$ and let $$$g(x)$$$ be the $$$xor$$$ of all numbers that are not coprime with $$$x$$$. Then $$$g(x)=ans(x) \oplus ans(1)$$$. Now let's find out how to calculate the $$$xor$$$ of all $$$x$$$ such that $$$f(x)$$$ is divisible by an arbitrary $$$y$$$: take the $$$xor$$$ of all $$$g(k)$$$ where $$$k$$$ is a divisor of $$$y$$$ greater than $$$1$$$. Let's prove that this gives the needed value. If $$$f(x)$$$ is divisible by $$$y$$$, then such $$$x$$$ is counted $$$2^l - 1$$$ times, where $$$l$$$ is the number of prime divisors of $$$y$$$; this is odd, so $$$x$$$ contributes to the final value. Now let's prove that no unintended $$$x$$$ remains. Let $$$f(x)$$$ be not divisible by $$$y$$$. Then there is such a prime $$$p$$$ that $$$y$$$ is divisible by $$$p$$$ and $$$x$$$ is not divisible by $$$p$$$. For each divisor $$$a$$$ of $$$y$$$ that is divisible by $$$p$$$, pair it with $$$b=a/p$$$. Such $$$x$$$ is counted either for both $$$a$$$ and $$$b$$$ or for neither of them, i.e. an even number of times, so it cancels out. Finally, to find the $$$xor$$$ of all numbers within a single group (a fixed value of $$$f(x)$$$), we process all $$$x$$$ from $$$C$$$ down to $$$1$$$ and replace the value stored for $$$x$$$ by the $$$xor$$$ of the values stored for $$$x, 2x, 3x, \ldots$$$
Now we only need to find $$$n$$$ distinct numbers from $$$1$$$ to $$$C$$$ such that the $$$xor$$$ of the numbers in each group equals the required value. For each group of numbers with a given $$$f(x)$$$ we run Gaussian elimination. Let $$$k$$$ be the size of a group and let $$$b$$$ be the number of non-zero vectors after the elimination. If $$$b=k$$$, there is a single way to obtain the needed $$$xor$$$; otherwise there are $$$2^{k-b}$$$ ways. Now let's take some random sets of numbers with the needed $$$xor$$$ in each group and run a knapsack-like dp on their sizes to take exactly $$$n$$$ numbers in total. If there are several ways, we can choose any of them.
#include <bits/stdc++.h>
using namespace std;
#define pb emplace_back
#define all(x) (x).begin(), (x).end()
#define fi first
#define se second
#define pii pair<int, int>
#define ll long long
#define ld long double
const long long INFLL = 1e18;
const int INF = 1e9 + 1;
const int MAXC = 1e6;
mt19937 gen(time(0));
vector<int> e(MAXC + 5);
void precalc() {
vector<int> p;
for (int i = 2; i <= MAXC; i++) {
if (e[i] == 0) {
e[i] = i;
p.pb(i);
}
for (int j = 0; j < (int)p.size() && p[j] * i <= MAXC && p[j] <= e[i]; j++) {
e[p[j] * i] = p[j];
}
}
}
int f(int x) {
int ans = 1;
vector<int> p;
while (x > 1) {
p.pb(e[x]);
x /= e[x];
}
for (int i = 0; i < (int)p.size(); i++) {
if (i + 1 == (int)p.size() || p[i + 1] != p[i]) ans *= p[i];
}
return ans;
}
void gauss(int need, vector<int> &lst, vector<int> &ans, vector<vector<int>> &sz, vector<vector<vector<int>>> &v) {
int n = lst.size();
vector<bitset<20>> a(n);
for (int i = 0; i < n; i++) a[i] = lst[i];
bitset<20> b = need;
vector<bitset<260>> l(n);
for (int i = 0; i < n; i++) l[i][i] = 1;
int i = 0;
vector<int> col(20, -1);
int bas_sz = 0;
for (int j = 0; j < 20 && i < n; j++) {
int i1 = i;
while (i1 < n && a[i1][j] == 0) i1++;
if (i1 == n) continue;
swap(a[i], a[i1]);
swap(l[i], l[i1]);
bas_sz++;
col[j] = i;
for (int i2 = i + 1; i2 < n; i2++) {
if (a[i2][j]) {
a[i2] ^= a[i];
l[i2] ^= l[i];
}
}
i++;
}
bitset<20> res;
bitset<260> path;
for (int j = 0; j < 20; j++) {
if (res[j] != b[j] && col[j] == -1) {
exit(0);
}
if (res[j] == b[j]) continue;
res ^= a[col[j]];
path ^= l[col[j]];
}
if (a.back().count() != 0) {
for (int i = 0; i < n; i++) {
if (path[i]) ans.pb(lst[i]);
}
return;
}
vector<int> diff_sz(300);
sz.pb();
v.pb();
for (int it = 0; it < 100; it++) {
bitset<260> now = path;
for (int i = 0; i < n - bas_sz; i++) {
if (gen() % 2) now ^= l[bas_sz + i];
}
int now_sz = now.count();
if (diff_sz[now_sz]) continue;
v.back().pb();
for (int i = 0; i < n; i++) {
if (now[i]) v.back().back().pb(lst[i]);
}
diff_sz[now_sz] = 1;
sz.back().pb(now_sz);
}
}
vector<int> mem(2 * MAXC + 5, -1);
int query(int x) {
return mem[x];
}
void solve(int n, int c) {
vector<int> calc(c + 1);
vector<vector<int>> total(c + 1);
for (int x = 1; x <= c; x++) total[f(x)].pb(x);
vector<int> need;
for (int x = 1; x <= c; x++) {
if (total[x].size()) need.pb(x);
}
cout << need.size() << " ";
for (auto &c : need) cout << c << " ";
cout << endl;
for (auto &c : need) {
int x;
cin >> x;
mem[c] = x;
}
int total_xor = query(1);
for (int x = 1; x <= c; x++) {
if (total[x].empty()) continue;
for (int y = x; y <= c; y += x) calc[y] ^= (query(x) ^ total_xor);
}
calc[1] = total_xor;
for (int x = c; x >= 1; x--) {
if (total[x].empty()) continue;
for (int y = 2 * x; y <= c; y += x) {
if (total[y].size()) calc[x] ^= calc[y];
}
}
vector<int> ans;
vector<vector<int>> sz;
vector<vector<vector<int>>> v;
for (int x = 1; x <= c; x++) {
if (total[x].empty()) continue;
gauss(calc[x], total[x], ans, sz, v);
}
vector<bitset<40000>> bag(sz.size() + 1);
bag[0][0] = 1;
for (int i = 1; i <= (int)sz.size(); i++) {
for (auto &x : sz[i - 1]) {
bag[i] |= (bag[i - 1] << x);
}
}
int now = n - ans.size();
for (int i = (int)sz.size(); i >= 1; i--) {
for (int j = 0; j < (int)sz[i - 1].size(); j++) {
int x = sz[i - 1][j];
if (now - x >= 0 && bag[i - 1][now - x]) {
now -= x;
for (auto &y : v[i - 1][j]) ans.pb(y);
break;
}
}
}
sort(all(ans));
for (auto &c : ans) cout << c << " ";
cout << endl;
}
int main() {
precalc();
int c, n;
cin >> c >> n;
solve(n, c);
return 0;
}
- Problem A
- Idea: shishyando
- Polygon: shishyando
- Editorial: shishyando
- Problem B
- Idea: shishyando
- Polygon: shishyando
- Editorial: shishyando
- Problem C
- Idea: shishyando
- Polygon: shishyando
- Editorial: shishyando
- Problem D
- Problem E
- Idea: shishyando
- Polygon: shishyando
- Editorial: shishyando
- Problem F
- Idea: Artyom123
- Polygon: shishyando
- Editorial: shishyando + Artyom123
- Problem G
- Idea: Artyom123 + isaf27
- Polygon: tests: Artyom123 + other: shishyando
- Editorial: Artyom123
- Problem H
- Idea: Artyom123 + isaf27
- Polygon: shishyando + Artyom123
- Editorial: Artyom123
- English translation: shishyando
- Edits on Polygon, common enhancements: KAN, isaf27, MikeMirzayanov
Thx for fast editorial
Nice profile picture:)
Thanks for the round and fast editorial.
Fastest Editorial I've ever seen
I can't read the statement!!! For task D2, I thought that there were columns and rows that do not affect anything. Because of this, I did not solve the problem. OMG
That is why reading the question carefully is important before solving it. :)
Got the logic for D2 but wasn't able to convert it to code :(
Solving a lot of implementation questions helps! And generally it comes with time and consistency.
Did I say something wrong? Downvoters should comment the correct answer before downvoting mine.
Feels bad to solve B and C but not A ):
Thanks for the editorial. I noticed that the second solution for Problem A has the editorial for Problem B. It would be great if it could be rectified.
Yes, thank you. It will be fixed soon.
I think you overkilled F with segment trees. It could be done with just one sort, and the rest is O(n) dp.
First remove all covered segments that have either a point or another segment inside them. Then sort all segments and points (compare segments by their right ends with points). Now let dp_i be the answer for the first i things (segments and points together), and let pd_i be the answer if we force that, after our operations, there is a point at position X_i (if the i-th thing is a point, X_i is its position; otherwise it's the segment's left end). You can see the details of updating the table in my submission 128662794 (it's very messy because I finished debugging in the last minute of the contest).
I have something similar with just a sort and O(n) dp.
First keep only the useful segments and assign each of them to the point on its right. Then for each point we calculate dp[i][c], the minimal cost to cover all segments to the left of point i if point i travels to the left c times (c = 1 or 2). Either point i goes to the left, then comes back and goes to the right, or vice versa.
Then to go from point i to i + 1, there are (k + 1) ways to split the k intervals between them, and for each of them there are 4 transitions. Finally, the answer is just dp[n] (we add a point with position +inf at the start).
You can see my submission here 128653517
The code is good, but a description of the logic would be better.
I also have an O(n) dp without trees. I checked point coverage by binary search, then removed overlapping segments using a simple sort and a stack. An additional observation is that you never need to move a point past any other point's initial position. So dp[i+1][j] tells how much we need to move points [0, i] so that everything up to point i is visited and exactly j segments between points i and i+1 are visited by point i. For example, dp[2][1] tells how much we should move points 0 and 1 so that every segment up to point 1 is visited and exactly one segment between points 1 and 2 is visited by point 1. For each j I brute-forced how far we should go left, then rearranged it a bit and noticed that we actually pick one of two choices for each j. 128674186 (the commented part is the brute-force version).
The author's approach to finding segments that don't contain other segments might also be overkill. If we consider all segments in decreasing order of l, then we can just maintain the minimum r[j] we've met so far. When considering (l[i], r[i]), this segment contains another segment iff min r[j] <= r[i].
I'm not really sure why the author decided to use Fenwick trees here
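A minimal standalone sketch of this filtering step (illustrative names, assuming the segments covered by points were already removed):
#include <bits/stdc++.h>
using namespace std;

// Keep only segments that contain no other segment.
// Sort by decreasing l (ties: increasing r) and sweep, remembering the
// minimum right end seen so far: the current segment [l, r] contains a
// previously seen segment iff minR <= r.
vector<pair<int, int>> dropContaining(vector<pair<int, int>> seg) {
    sort(seg.begin(), seg.end(), [](const pair<int, int>& a, const pair<int, int>& b) {
        if (a.first != b.first) return a.first > b.first; // decreasing l
        return a.second < b.second;                       // increasing r
    });
    vector<pair<int, int>> kept;
    int minR = INT_MAX;
    for (auto [l, r] : seg) {
        if (r < minR) { // nothing seen so far fits inside [l, r]
            kept.push_back({l, r});
            minR = r;
        }
    }
    return kept;
}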
Nice Approach
This was a nice round. For the first time I solved more than 2 problems of a global round. Could have solved D2 as well, but the implementation was beyond me...
For E, can anyone explain why Sum of Leaves + Sum of Buds = n - 1? Precisely, why does this hold true?
Because if a node is initially neither a bud nor a leaf, then after you remove all the buds under it, it becomes either a bud or a leaf (this doesn't apply to the root).
Constraints in D should've been stricter IMHO
I instinctively went straight to an ordered multiset (happy that I would finally use this thing in a contest) to achieve $$$O(nm\log m)$$$, but yeah, I didn't notice that $$$O(nm^2)$$$ fits.
I completely agree. I myself used a Fenwick tree, not noticing that the constraints allow a quadratic solution.
I strongly disagree: the problem is nice in its current state, and increasing the constraints would just unnecessarily add some data structure work on top of it.
In my opinion adding data structures is usually justifiable if they're easier than the actual ad-hoc part of the problem. Even a Fenwick tree (or any other easy method of counting inversions) is way tougher than the observations needed in D1, maybe even D2.
In my opinion, counting inversions is actually easier (or more straightforward) than the construction part. Still, I agree with you that it adds no value to the problem.
I see your point, but I'd argue it's more standard, not easier. It still requires someone to know Fenwick / pbds / the merge sort trick.
This is most likely trivial for anyone rated 1700-1800 or higher, as they've likely encountered the idea before.
However, I suspect there are a lot of participants in the 1400-1600 range who could make the observations for D1/D2 but don't know any of the standard methods for counting inversions. I certainly didn't until I was 1600-1700.
Some know, some don't. With this logic of "there are some people who don't know this algorithm/data structure" you could cut them out of any problem.
Your argument makes sense, but I think you underestimate what people know. I'm pretty sure half the cyans know about Fenwick/segment trees nowadays and would be happy to practice them.
Ya, your point is fine. Increasing constraints forces participants to use data structures like a Fenwick tree or a segment tree, but in my opinion this has a positive outcome: if a participant couldn't solve the problem during the contest due to a lack of data structure knowledge, it pushes them to learn these data structures after the contest and then try the problem again, and that's what upsolving is all about. Having knowledge of more data structures always helps :))
Ya, I did it in $$$O(nm\log m)$$$ by counting inversions with divide and conquer, whereas my friend also got it passed with the naive $$$O(nm^2)$$$ algo. T_T
I recently encountered a problem during the contest. I don't know if it's a bug or a Codeforces thing: if I make two correct submissions for the same problem, my last correct submission is taken into account for the points distribution. I think that, logically, the first correct submission should count. I solved D1 first and then D2, after which I submitted the solution for D2 in D1 as well, due to which I lost half of the points for that problem. So it didn't matter that I got AC for D1 long ago; I got considerably fewer points.
Not a bug, part of the rules. You might want to resubmit a problem if you think it won't pass sys tests or it will get hacked.
Thanks for the great round!
Really liked the questions. This contest was really competitive, with people solving questions at high speed. And I liked the editorial (the idea of giving hints first). Thank you!
I like this kind of editorial with spoiler hints
thx for the good and fast editorial!
unfortunately you are still slow..
Didn't notice that O(nm^2) fits. Was thinking of something else. Thanks for the round and a pretty fast editorial!
Thanks for the great contest!
Unfortunately, I think I saw problem G somewhere.
Is there meant to be special-casing for $$$C \in \{3,6,7,15,23,43\}$$$ in problem H? For these values, the number of square-free integers in $$$[1..C]$$$ is strictly greater than $$$\lceil 0.65 \cdot C \rceil$$$, which Hint 2 suggests is impossible.
EDIT: Whoops! I just noticed the unusual constraint $$$C \geq 100$$$, which is important for this reason.
EDIT2: But in light of this, I don't understand the problem-setting reasoning behind making the problem interactive at all. Was it originally intended to be an easier problem?
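For reference, a tiny brute-force check of the counts mentioned above (a standalone snippet; the listed values of $$$C$$$ are just those from this comment plus the lower constraint bound):
#include <bits/stdc++.h>
using namespace std;

// Count squarefree numbers in [1..C] and compare with ceil(0.65 * C):
// the count exceeds the bound exactly for the small C listed above,
// which is why the constraint C >= 100 matters.
int main() {
    for (int C : {3, 6, 7, 15, 23, 43, 100}) {
        int cnt = 0;
        for (int x = 1; x <= C; x++) {
            bool squarefree = true;
            for (int d = 2; d * d <= x; d++)
                if (x % (d * d) == 0) squarefree = false;
            cnt += squarefree;
        }
        int bound = (65 * C + 99) / 100; // ceil(0.65 * C)
        cout << C << ": " << cnt << " squarefree vs bound " << bound << "\n";
    }
    return 0;
}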
Why are there 700+ test cases for G... XD
Because the author wanted to break Radewoosh's solution (randomized).
Weak pretests on problem D2
your solution is weaker
My G (the case with two edges without common endpoints):
Paint each vertex in one of $$$c$$$ colors (I've chosen $$$c=5$$$). Now for each edge we know a pair $$$(a, b)$$$ ($$$1 \leq a \leq b \leq c$$$) which denotes the colors of its endpoints. If for two edges with pairs $$$(a, b)$$$ and $$$(x, y)$$$ all of $$$a \neq x$$$, $$$a \neq y$$$, $$$b \neq x$$$ and $$$b \neq y$$$ hold, then we know for sure that these edges have no common endpoints.
For each pair maintain a multiset of edge values (and the largest of them) and iterate over pairs of pairs. Then with probability around $$$0.416$$$ (for $$$c=5$$$) one iteration finds a correct answer. Do as many iterations as you can fit in the time limit.
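A rough standalone sketch of one such iteration (a static version, without the multiset bookkeeping needed to support updates; names are illustrative):
#include <bits/stdc++.h>
using namespace std;

// One iteration: color the vertices randomly, keep the minimum edge for
// every color pair, then combine two color pairs with four pairwise
// distinct colors -- such edges cannot share an endpoint.
long long oneIteration(int n, const vector<array<int, 3>>& edges, mt19937& rng) {
    const int C = 5;
    vector<int> color(n);
    for (int& c : color) c = rng() % C;
    // best[a][b] = min weight of an edge whose endpoint colors are {a, b}
    vector<vector<long long>> best(C, vector<long long>(C, LLONG_MAX));
    for (auto [u, v, w] : edges) {
        int a = min(color[u], color[v]), b = max(color[u], color[v]);
        best[a][b] = min(best[a][b], (long long)w);
    }
    long long ans = LLONG_MAX;
    for (int a = 0; a < C; a++)
        for (int b = a; b < C; b++)
            for (int x = 0; x < C; x++)
                for (int y = x; y < C; y++) {
                    if (x == a || x == b || y == a || y == b) continue;
                    if (best[a][b] == LLONG_MAX || best[x][y] == LLONG_MAX) continue;
                    ans = min(ans, best[a][b] + best[x][y]);
                }
    return ans; // LLONG_MAX if this coloring separated no valid pair
}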
My H:
Also Gaussian elimination for each group to find any solution; to find a solution with the correct $$$n$$$, use hill climbing.
Chinese university students with a morning lesson cryyyyy.
Thanks for such a good round and fast editorial
So in reality the $$$\lceil 0.65 n \rceil$$$ can be replaced with the number of squarefree numbers smaller than $$$n$$$? (in H)
After looking at the constraints again, an Order Statistic Tree seems like overkill for D1 and D2.
My Submission
Problem D Video Editorial
I think that the input in D2 is not easy to understand &_&
Why aren't all contest tutorials like this... a very well written tutorial and solution code, thanks!
Help() Please
https://mirror.codeforces.com/contest/1566/submission/128700326
Here is my submission. I'm trying to solve D1 with the help of PBDS; where the heck am I wrong? :( Couldn't find any counter test case to explain why I'm getting WA on test case 2.
Merge Sort inversion cnt on D2: https://mirror.codeforces.com/contest/1566/submission/128722549
128636145 is what I did in-contest (merge sort to count inversions).
The first hint in problem E is incorrect, and here's a counter-example:
Re-hanging 4 (bud) to 2 (leaf) won't decrease the number of leaves.
Can you explain how the answer is 2 in the third test case? I think it should be 3.
Shouldn't it be min instead of max, as we need the ending index (j) of the second-to-last segment here? Or am I getting it wrong? Here is my accepted solution using the given idea but with min: 128769652
128624993: why am I getting TLE on problem C although my solution is O(1e5), while the editorial solution for D2, which is O(1e8), passes?
I am unable to see why having a leaf connected to the root always reduces the answer. Can anyone explain in detail using the third test case given in the problem?
Problem D can actually be solved in $$$O(nm\log m)$$$ by using an ordered set to calculate the inconvenience of a person in $$$O(\log m)$$$. Here is my solution using policy-based data structures in C++.
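For reference, a minimal sketch of that trick with the GNU policy tree (order_of_key returns the number of stored elements strictly smaller than the key; seat indices within a row are distinct, so a plain set suffices; names are illustrative):
#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
using namespace std;
using namespace __gnu_pbds;

typedef tree<int, null_type, less<int>, rb_tree_tag,
             tree_order_statistics_node_update> ordered_set;

// Inconvenience inside one row: each person walks past the already
// occupied seats with smaller indices, counted in O(log m) per person.
long long rowInconvenience(const vector<int>& seatsInOrder) {
    ordered_set taken;
    long long res = 0;
    for (int s : seatsInOrder) {
        res += taken.order_of_key(s); // occupied seats to the left of s
        taken.insert(s);
    }
    return res;
}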
So Fast!
I think the difference in difficulty between E and F is too large.
For problem E, why are we subtracting $$$2 \cdot k$$$? It is not clear to me. Can someone explain?
I have a question about problem B: for 0002000, I think the output should be 1, but the standard output is 2. Maybe I misunderstand the problem?
Sorry for the stupid question, I have solved the problem :)
thanks for having a pictorial explanation of test cases...