| Predictor | A | B | C | D | E | F | G |
|---|---|---|---|---|---|---|---|
| Proof_by_QED | 800 | 1000 | 1100 | 1300 | 1600 | 1900 | 2400 |
| cry | 800 | 1000 | 1000 | 1200 | 1700 | 2000 | 2400 |
| chromate00 | 800 | 1000 | 1000 | 1400 | 1700 | 2000 | 2400 |
| DivinePunishment | 800 | 1200 | 1100 | 1200 | 1700 | 2000 | 2300 |
| -firefly- | 800 | 1000 | 1000 | 1400 | 1800 | 2000 | 2400 |
| Error_Yuan | 800 | 1100 | 1200 | 1600 | 1800 | 2100 | 2400 |
| macaquedev | 800 | 1000 | 1500 | 1400 | 1300 | 2000 | 1900 |
| Intellegent | 800 | 900 | 1200 | 1400 | 1700 | 2100 | 2400 |
| yoru_sacri | 800 | 900 | 1300 | 1100 | 1700 | 2000 | |
| efishel | 800 | 800 | 800 | 1300 | 1600 | 1900 | 2100 |
| Friedrich | 800 | 800 | 1100 | 1300 | 1500 | 2100 | 2300 |
| larush | 800 | 1000 | 1100 | 1200 | 1600 | 2000 | 2400 |
| Dominater069 | 800 | 800 | 900 | 1200 | 1500 | 1900 | 2100 |
| | A | B | C | D | E | F | G |
|---|---|---|---|---|---|---|---|
| Average | 800 | 961.5 | 1100 | 1307.7 | 1630.8 | 2000 | 2291.7 |
| Actual | 800 | 1000 | 900 | 1100 | 1500 | 2200 | 2400 |
2060A - Fibonacciness
Problem Credits: Proof_by_QED
Analysis: larush
We can notice that the possible values of $$$a_3$$$ range from $$$-99$$$ to $$$200$$$. Thus, we can iterate over all values of $$$a_3$$$ in this range and, for each value, compute the Fibonacciness of the resulting sequence. The maximum over all these values gives the solution to the problem.
Time Complexity: $$$O(3 \cdot a_{\max})$$$
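A minimal sketch of this brute force (the function name is mine), kept separate from the official $$$O(1)$$$ code below:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Brute force for one test case: the input gives a1, a2, a4, a5 and we pick
// a3 in [-99, 200], scoring how many indices i satisfy a[i] + a[i+1] == a[i+2].
int bestFibonacciness(int a1, int a2, int a4, int a5) {
    int best = 0;
    for (int a3 = -99; a3 <= 200; ++a3) {
        int a[5] = {a1, a2, a3, a4, a5};
        int score = 0;
        for (int i = 0; i + 2 < 5; ++i)
            if (a[i] + a[i + 1] == a[i + 2]) ++score;
        best = max(best, score);
    }
    return best;
}
```

For instance, with $$$a_1=a_2=1$$$, $$$a_4=3$$$, $$$a_5=5$$$, choosing $$$a_3=2$$$ satisfies all three conditions, so the result is $$$3$$$.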
$$$a_3$$$ can be represented as either of the following:
$$$a_3 = a_1 + a_2$$$, or
$$$a_3 = a_4 - a_2$$$, or
$$$a_3 = a_5 - a_4$$$.
These are at most $$$3$$$ distinct candidate values. Notice that any value of $$$a_3$$$ which increases the Fibonacciness must be one of them (think why!). So, to maximize the Fibonacciness of the sequence, we should choose the most frequent of these candidates. Then, in the end, we calculate the Fibonacciness of the sequence after choosing the best value for $$$a_3$$$.
Time Complexity: $$$O(1)$$$
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using vll = vector <ll>;
using ii = pair <ll, ll>;
using vii = vector <ii>;
void tc () {
vll ve(4);
for (ll &i : ve) cin >> i; // reads a1, a2, a4, a5
set <ll> st; // distinct candidate values for a3
st.insert(ve[3]-ve[2]); // a5 - a4
st.insert(ve[2]-ve[1]); // a4 - a2
st.insert(ve[0]+ve[1]); // a1 + a2
cout << 4-st.size() << '\n'; // fewer distinct candidates -> more conditions satisfied
}
int main () {
cin.tie(nullptr) -> sync_with_stdio(false);
ll T; cin >> T; while (T--) { tc(); }
return 0;
}
2060B - Farmer John's Card Game
Problem Credits: Lilypad
Analysis: larush
Assume the cows are initially ordered correctly. A solution exists if the first cow can place $$$0$$$, the second cow can place $$$1$$$, and so on, continuing with the first cow placing $$$n$$$, the second cow placing $$$n+1$$$, and so forth. Observing the pattern, the first cow must be able to place $$${0, n, 2n, \dots}$$$, the second $$${1, n+1, 2n+1, \dots}$$$, and in general, the $$$i$$$-th cow (where $$$i \in [0, n-1]$$$) must be able to place $$${i, n+i, 2n+i, \dots}$$$.
For each cow, sorting their cards reveals whether they satisfy this condition: the difference between adjacent cards in the sorted sequence must be $$$n$$$. If any cow's cards do not meet this criterion, the solution does not exist, and we output $$$-1$$$.
If the cows are not ordered correctly, we need to rearrange them. To construct the solution, find the index of the cow holding card $$$i$$$ for each $$$i \in [0, n-1]$$$. Since the cards of each cow are sorted, denote $$$\texttt{min_card}$$$ as the smallest card a cow holds. Using an array $$$p$$$, iterate over each cow $$$c$$$ from $$$0$$$ to $$$n-1$$$ and set $$$p_\texttt{min_card} = c$$$. Finally, iterate $$$i$$$ from $$$0$$$ to $$$n-1$$$ and output $$$p_i$$$.
Time Complexity: $$$O(nm\log m)$$$. Note that sorting is not necessary, and a solution with complexity up to $$$O((nm)^2)$$$ will still pass.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using vll = vector <ll>;
using ii = pair <ll, ll>;
using vii = vector <ii>;
void tc () {
ll n, m;
cin >> n >> m;
vector <vll> ve(n, vll(m));
vll p(n, -16);
bool val = true;
ll c = 0;
for (vll &we : ve) {
for (ll &i : we) cin >> i;
ll minN = *min_element(we.begin(), we.end());
if (minN < n) p[minN] = c++;
val &= minN < n;
sort(we.begin(), we.end());
ll last = we[0]-n;
for (ll i : we) {
val &= last+n == i;
last = i;
}
}
if (!val) {
cout << "-1\n";
return;
}
for (ll i : p) cout << i+1 << ' ';
cout << '\n';
}
int main () {
cin.tie(nullptr) -> sync_with_stdio(false);
ll T; cin >> T; while (T--) { tc(); }
return 0;
}
2060C - Game of Mathletes
Problem Credits: Friedrich
Analysis: macaquedev
Note that Bob has all the power in this game, because the order in which Alice picks numbers is irrelevant, since Bob can always pick the optimal number to give himself a point. Therefore, we can just ignore Alice and play the game from Bob's perspective.
From this point on, a "paired" number is any number $$$a$$$ on the blackboard such that there exists a $$$b$$$ on the blackboard such that $$$a+b=k$$$.
Bob's strategy is as follows:
if Alice picks a paired number, Bob should pick the other number in the pair.
if Alice picks an unpaired number, Bob can pick any other unpaired number, it doesn't matter which.
This always works because the number of "unpaired numbers" is always even, since $$$n$$$ is even and the number of "paired numbers" will always be even. Therefore, for every unpaired number Alice picks, Bob will always have an unpaired number to respond with.
Therefore, the final score is just the number of pairs in the input. To count them, use a map of counts $$$c$$$ such that $$$c_x$$$ is the number of occurrences of $$$x$$$ on the blackboard. Then, for each number $$$x$$$ from $$$1$$$ to $$$\lfloor \frac{k}{2} \rfloor$$$, take the minimum of $$$c_x$$$ and $$$c_{k-x}$$$, and add that to the total. Remember the edge case where $$$k$$$ is even and $$$x = \frac{k}{2}$$$: the number of pairs here is just $$$\lfloor \frac{c_x}{2} \rfloor$$$.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using vll = vector <ll>;
using ii = pair <ll, ll>;
using vii = vector <ii>;
void tc () {
ll n, k;
cin >> n >> k;
vll ve(n);
for (ll &i : ve) cin >> i;
vll th(k+1, 0);
for (ll i : ve) {
if (i >= k) continue;
th[i]++;
}
ll ans = 0;
for (ll i = 1; i < k; i++) {
if (i == k-i) {
ans += th[i]/2;
continue;
}
ll minN = min(th[i], th[k-i]);
th[i] -= minN;
th[k-i] -= minN;
ans += minN;
}
cout << ans << '\n';
}
int main () {
cin.tie(nullptr) -> sync_with_stdio(false);
ll T; cin >> T; while (T--) { tc(); }
return 0;
}
2060D - Subtract Min Sort
Problem Credits: Proof_by_QED
Analysis: Proof_by_QED
For clarity, let's denote $$$op_i$$$ as an operation performed on $$$a_i$$$ and $$$a_{i+1}$$$.
Claim: if it is possible, then the sequence of operations $$$op_1,op_2,\ldots,op_{n-1}$$$ will sort the array.
Proof: Let $$$b$$$ be any sequence of operations that will sort the array. Let's transform the sequence $$$b$$$ such that it becomes $$$[op_1,op_2,\ldots,op_{n-1}]$$$.
First, let's note that $$$op_i$$$ does nothing if $$$a_i=0$$$ or $$$a_{i+1}=0$$$. Additionally, after $$$op_i$$$, at least one of $$$\{a_i,a_{i+1}\}$$$ will become zero. Thus, we can remove all duplicates in $$$b$$$ without altering the result.
Now, let $$$x$$$ be the largest number such that $$$op_x$$$ is in $$$b$$$. Since at least one of $$$a_x,a_{x+1}$$$ will be zero, and the array must end up sorted, the operations must make each of $$$a_1,a_2,\ldots,a_{x-1}$$$ zero. Let $$$S$$$ be the set of indices $$$i \lt x$$$ such that $$$op_i$$$ is not in $$$b$$$. Note that we can simply append each operation in $$$S$$$ at the end of $$$b$$$ without altering the result, since all elements before $$$a_x$$$ are already zero.
Our sequence of operations now contains a permutation of $$$op_1,op_2,\ldots,op_x$$$, and we have ensured that $$$a_1=a_2=\cdots=a_x=0$$$. Since the sequence is now sorted, we can in fact continue performing $$$op_{x+1},op_{x+2},\ldots,op_{n-1}$$$ in this order. Notice that this sequence of operations will keep our array sorted, as $$$op_y$$$ will make $$$a_y=0$$$ (since $$$a_y \le a_{y+1}$$$).
Let's show that we can rearrange our operations such that $$$op_1$$$ comes first. There are two cases: either $$$op_1$$$ comes before $$$op_2$$$, or $$$op_2$$$ comes before $$$op_1$$$. In the first case, notice that no operation done before $$$op_1$$$ will impact the value of either $$$a_1$$$ or $$$a_2$$$, so rearranging such that $$$op_1$$$ comes first does not impact the final result. In the second case, notice that after $$$op_2$$$ is done, we must have $$$a_1=a_2$$$, as otherwise $$$op_1$$$ will not simultaneously make $$$a_1=a_2=0$$$. This implies that right before $$$op_2$$$ is performed, we must have $$$a_1+a_3=a_2$$$. Then, rearranging the operations such that $$$op_1$$$ comes first will not impact the final result.
Using the same line of reasoning, we can make $$$op_2$$$ the second operation, then $$$op_3$$$, and so on. To solve the problem, we can simply simulate the operations in this order, and then check if the array is sorted at the end.
Time Complexity: $$$O(n)$$$
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
using vll = vector <ll>;
using ii = pair <ll, ll>;
using vii = vector <ii>;
void tc () {
ll n;
cin >> n;
vll ve(n);
for (ll &i : ve) cin >> i;
ll j = 0;
for (ll i = 1; i < n; i++) {
if (ve[i-1] > ve[i]) j = i;
}
if (j == 0) { // all done
cout << "YES\n";
return;
}
auto fop = [&](ll i) {
ll minN = min(ve[i], ve[i+1]);
ve[i] -= minN;
ve[i+1] -= minN;
};
for (ll i = 0; i < j; i++) {
fop(i);
}
cout << (is_sorted(ve.begin(), ve.end()) ? "YES" : "NO") << '\n';
}
int main () {
cin.tie(nullptr) -> sync_with_stdio(false);
ll T; cin >> T; while (T--) { tc(); }
return 0;
}
2060E - Graph Composition
Problem Credits: Friedrich
Analysis: DivinePunishment
Let's consider the two types of operations separately.
First, consider the operation that removes an edge from $$$ F $$$. Divide $$$ G $$$ into its connected components and assign each vertex a component index. Then, for each edge in $$$ F $$$, if it connects vertices with different component indices, remove it and increment the operation count.
This guarantees no path between $$$ x $$$ and $$$ y $$$ in $$$ F $$$ if there is none in $$$ G $$$. Next, to ensure a path exists between $$$ x $$$ and $$$ y $$$ in $$$ F $$$ if there is one in $$$ G $$$, we divide $$$ F $$$ into connected components. After removing excess edges, each component of $$$ F $$$ only contains vertices of the same component index. The number of operations needed now is the difference between the number of connected components in $$$ F $$$ and $$$ G $$$.
All operations can be efficiently performed using DFS or DSU.
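As a sketch of the DSU variant (the names are mine; the official code below uses DFS), with 0-indexed vertices and the edge lists assumed already read:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimal disjoint set union with path compression.
struct DSU {
    vector<int> p;
    DSU(int n) : p(n) { iota(p.begin(), p.end(), 0); }
    int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
    void unite(int a, int b) { p[find(a)] = find(b); }
};

long long minOperations(int n, const vector<pair<int, int>>& fEdges,
                        const vector<pair<int, int>>& gEdges) {
    DSU g(n);
    for (auto [u, v] : gEdges) g.unite(u, v);
    DSU f(n);
    long long removals = 0;
    for (auto [u, v] : fEdges) {
        if (g.find(u) != g.find(v)) removals++;  // edge crosses G-components: remove it
        else f.unite(u, v);                      // safe to keep
    }
    // After removals, every F-component lies inside one G-component, so the
    // remaining cost is one merge per extra component of F compared to G.
    int compF = 0, compG = 0;
    for (int i = 0; i < n; ++i) {
        if (f.find(i) == i) compF++;
        if (g.find(i) == i) compG++;
    }
    return removals + (compF - compG);
}
```

For example, with $$$n=3$$$, a single $$$F$$$-edge, and an empty $$$G$$$, the answer is one removal; with an empty $$$F$$$ and a connected $$$G$$$ on three vertices, the answer is two additions.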
#include <bits/stdc++.h>
#define int long long
#define pb emplace_back
#define mp make_pair
#define x first
#define y second
#define all(a) a.begin(), a.end()
#define rall(a) a.rbegin(), a.rend()
typedef long double ld;
typedef long long ll;
using namespace std;
mt19937 rnd(time(nullptr));
const int inf = 1e9;
const int M = 1e9 + 7;
const ld pi = atan2(0, -1);
const ld eps = 1e-6;
void Gdfs(int v, vector<vector<int>> &sl, vector<int> &col, int c){
col[v] = c;
for(int u: sl[v]){
if(col[u] == 0){
Gdfs(u, sl, col, c);
}
}
}
int Fdfs(int v, vector<vector<int>> &sl, vector<int> &col, vector<int> &old_col, int c){
col[v] = c;
int res = 0;
for(int u: sl[v]){
if(col[u] == 0){
if(old_col[u] != c) res++;
else res += Fdfs(u, sl, col, old_col, c);
}
}
return res;
}
void read_con_list(vector<vector<int>> &sl, int m){
for(int i = 0; i < m; ++i){
int u, v;
cin >> u >> v;
sl[--u].emplace_back(--v);
sl[v].emplace_back(u);
}
}
void solve(int tc){
int n, mf, mg;
cin >> n >> mf >> mg;
vector<vector<int>> fsl(n), gsl(n);
read_con_list(fsl, mf);
read_con_list(gsl, mg);
vector<int> fcol(n), gcol(n);
int ans = 0;
for(int i = 0; i < n; ++i){
if(gcol[i] == 0){
Gdfs(i, gsl, gcol, i + 1);
}
if(fcol[i] == 0){
ans += Fdfs(i, fsl, fcol, gcol, gcol[i]);
if(gcol[i] < i + 1) ans++;
}
}
cout << ans;
}
bool multi = true;
signed main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int t = 1;
if (multi)cin >> t;
for (int i = 1; i <= t; ++i) {
solve(i);
cout << "\n";
}
return 0;
}
2060F - Multiplicative Arrays
Problem Credits: Proof_by_QED, cry
Analysis: -firefly-
When $$$n$$$ is large, many of the array's elements will be $$$1$$$.
The number of non-$$$1$$$ elements can't be so large.
Try to precompute all arrays without $$$1$$$ and then use combinatorics.
The naive solution is to define $$$f_k(n)$$$ as the number of arrays containing $$$n$$$ elements with a product of $$$k$$$. We could then try to compute this by considering the last element of the array among the divisors of $$$k$$$, giving us $$$f_k(n)=\sum_{d \mid k}f_{d}(n-1)$$$,
with $$$f_k(1)=1$$$.
The answers would then be $$$\sum_{i=1}^{n}f_1(i),\sum_{i=1}^{n}f_2(i),\dots,\sum_{i=1}^{n}f_k(i)$$$. However, this leads to an $$$O(nk\log k)$$$ solution, which exceeds our time constraints. We need to optimize this approach.
We can prove that there are at most $$$16$$$ non-$$$1$$$ elements in our arrays. This is because:
- The prime factorization $$$k=p_1p_2\cdots p_t$$$ splits $$$k$$$ into the largest possible number of non-$$$1$$$ factors.
- With $$$17$$$ non-$$$1$$$ elements, the smallest possible value would be $$$k=2^{17}=131072 \gt 10^5$$$.
Based on this observation, we can define our dynamic programming state as:
- Let $$$dp[i][j]$$$ represent the number of arrays with product $$$i$$$ consisting of exactly $$$j$$$ non-$$$1$$$ elements and nothing else.
- The recurrence relation becomes: $$$dp[i][j]=\sum_{p|i,p \gt 1}dp[\frac{i}{p}][j-1]$$$.
- Base case: $$$dp[i][1]=1$$$ for $$$i \gt 1$$$.
This computation has a time complexity of $$$O(k\log^2k)$$$: for each of the at most $$$16 = O(\log k)$$$ values of $$$j$$$, we perform $$$\sum_{i=1}^{k}d(i)=O(k\log{k})$$$ additions.
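The recurrence above can be sketched in C++ (names mine; the official C# solution below enumerates precomputed divisor lists instead). The sieve-style loop over the last element $$$p$$$ is the harmonic-series enumeration behind the bound:

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 998244353;

// dp[j][i] counts arrays with product i made of exactly j elements, all > 1.
vector<vector<ll>> buildDp(int k, int maxLen = 16) {
    vector<vector<ll>> dp(maxLen + 1, vector<ll>(k + 1, 0));
    for (int i = 2; i <= k; ++i) dp[1][i] = 1;  // a single element i > 1
    for (int j = 2; j <= maxLen; ++j)
        for (int p = 2; p <= k; ++p)            // last element of the array
            for (int i = p; i <= k; i += p)     // product i, previous product i/p
                dp[j][i] = (dp[j][i] + dp[j - 1][i / p]) % MOD;
    return dp;
}
```

As a sanity check, $$$dp[2][6]=2$$$ (the arrays $$$[2,3]$$$ and $$$[3,2]$$$), while $$$dp[3][8]=1$$$ (only $$$[2,2,2]$$$).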
To calculate $$$f_k(n)$$$, we:
- Enumerate the number of non-$$$1$$$ elements from $$$1$$$ to $$$16$$$.
- For $$$j$$$ non-$$$1$$$ elements in the array:
- We have $$$n-j$$$ elements that are $$$1$$$.
- We need to choose $$$j$$$ positions for non-$$$1$$$ elements.
- Fill these positions with $$$dp[k][j]$$$ possible sequences.
This gives us $$$f_k(n)=\sum_{j=1}^{\min(n,16)}\binom{n}{j}\,dp[k][j]$$$ for $$$k \gt 1$$$.
Therefore, $$$\sum_{i=1}^{n}f_k(i)=\sum_{j=1}^{16}dp[k][j]\sum_{i=1}^{n}\binom{i}{j}=\sum_{j=1}^{16}dp[k][j]\binom{n+1}{j+1}$$$.
Note that $$$\sum_{i=1}^{n}\binom{i}{j} = \binom{n+1}{j+1}$$$ is given by the Hockey Stick Identity.
Each answer can be calculated in $$$O(\log^2k)$$$ time, giving an overall time complexity of $$$O(k\log^2k)$$$.
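With $$$n$$$ up to $$$10^9$$$ but $$$j+1\le17$$$, the binomial from the Hockey Stick step can be computed term by term modulo $$$998244353$$$; a small sketch (function names are mine):

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 998244353;

// Fast modular exponentiation, used to invert r! via Fermat's little theorem.
ll power(ll b, ll e) {
    ll r = 1;
    b %= MOD;
    for (; e > 0; e >>= 1, b = b * b % MOD)
        if (e & 1) r = r * b % MOD;
    return r;
}

// C(n, r) mod MOD for huge n but small r: multiply the r numerator terms
// n, n-1, ..., n-r+1, then divide by r!.
ll binom(ll n, ll r) {
    if (r < 0 || n < r) return 0;
    ll num = 1, den = 1;
    for (ll i = 0; i < r; ++i) {
        num = num * ((n - i) % MOD) % MOD;
        den = den * ((i + 1) % MOD) % MOD;
    }
    return num * power(den, MOD - 2) % MOD;
}
```

Each call costs $$$O(r + \log MOD)$$$, which is negligible for $$$r \le 17$$$.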
Let's revisit the naive approach and define $$$S_k(n)=\sum_{i=1}^{n}f_{k}(i)$$$ as our answers. We can observe $$$f_k(n)=\sum_{d \mid k}f_{d}(n-1)$$$ $$$(1)$$$.
Accumulating Formula $$$(1)$$$ yields $$$S_k(n)=1+\sum_{d \mid k}S_{d}(n-1)$$$ $$$(2)$$$.
By induction, we can prove that both $$$f_{k}(n)$$$ and $$$S_k(n)$$$ are polynomials in $$$n$$$. Let's denote:
- Degree of $$$f_k(n)$$$ as $$$p(k)$$$.
- Degree of $$$S_k(n)$$$ as $$$q(k)$$$.
From the definition of $$$S_k(n)$$$, we have $$$q(k)=p(k)+1$$$. Formula $$$(2)$$$ gives us $$$p(k)=\max_{d \mid k,\, d \lt k}\left(p(d)+1\right)$$$,
with $$$p(1)=0$$$. By induction, we can show that $$$p(k)$$$ equals the number of primes in $$$k$$$'s prime factorization, counted with multiplicity. Therefore, $$$p(k)\le16$$$ and $$$q(k)\le17$$$.
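The bound $$$p(k)\le16$$$ can be sanity-checked directly, since $$$p(k)$$$ is just $$$\Omega(k)$$$, the number of prime factors counted with multiplicity; a small sketch (function name is mine):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Omega(k): number of prime factors of k counted with multiplicity; by the
// degree argument above this equals p(k).
int omega(int k) {
    int cnt = 0;
    for (int p = 2; 1LL * p * p <= k; ++p)
        while (k % p == 0) { k /= p; ++cnt; }
    if (k > 1) ++cnt;  // one leftover prime factor
    return cnt;
}
```

Maximizing $$$\Omega(k)$$$ over $$$k \le 10^5$$$ gives $$$16$$$, attained at $$$2^{16}=65536$$$, while $$$2^{17}=131072$$$ already exceeds $$$10^5$$$.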
Since $$$q(k)$$$ doesn't exceed $$$17$$$, we can:
- Precompute $$$S_k(1),S_k(2),S_k(3),\dots,S_k(18)$$$.
- Use Lagrange Interpolation to compute any $$$S_k(n)$$$.
This also yields an $$$O(k\log^2k)$$$ time complexity.
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Runtime.Serialization.Json;
using System.Text;
using System.Threading.Tasks;
using TemplateF;
#if !PROBLEM
SolutionF a = new();
a.Solve();
#endif
namespace TemplateF
{
internal class SolutionF
{
private readonly StreamReader sr = new(Console.OpenStandardInput());
private T Read<T>()
where T : struct, IConvertible
{
char c;
dynamic res = default(T);
dynamic sign = 1;
while (!sr.EndOfStream && char.IsWhiteSpace((char)sr.Peek())) sr.Read();
if (!sr.EndOfStream && (char)sr.Peek() == '-')
{
sr.Read();
sign = -1;
}
while (!sr.EndOfStream && char.IsDigit((char)sr.Peek()))
{
c = (char)sr.Read();
res = res * 10 + c - '0';
}
return res * sign;
}
private T[] ReadArray<T>(int n)
where T : struct, IConvertible
{
T[] arr = new T[n];
for (int i = 0; i < n; ++i) arr[i] = Read<T>();
return arr;
}
public void Solve()
{
StringBuilder output = new();
const int maxk = 100000, mod = 998244353, bw = 20;
List<int>[] d = new List<int>[maxk + 1];
for (int i = 1; i <= maxk; ++i) d[i] = new() { 1 };
for (int i = 2; i <= maxk; ++i) {
for (int j = i; j <= maxk; j += i) d[j].Add(i);
}
int[,] dp = new int[bw + 1, maxk + 1], S = new int[bw + 1, maxk + 1];
for (int k = 1; k <= maxk; ++k) dp[1, k] = S[1, k] = 1;
for (int i = 2; i <= bw; ++i) {
for (int j = 1; j <= maxk; ++j) {
foreach (var k in d[j]) {
dp[i, j] += dp[i - 1, k];
dp[i, j] %= mod;
}
S[i, j] = (S[i - 1, j] + dp[i, j]) % mod;
}
}
int T = Read<int>();
while (T-- > 0)
{
int k = Read<int>(), n = Read<int>();
if (n <= bw) {
for (int j = 1; j <= k; ++j) output.Append(S[n, j]).Append(' ');
} else {
int _k = k;
for (k = 1; k <= _k; ++k) {
long ans = 0;
for (int i = 1; i <= bw; ++i) {
long t = S[i, k];
for (int j = 1; j <= bw; ++j) {
if (j == i) continue;
t *= n - j;
t %= mod;
t *= Inv((mod + i - j) % mod, mod);
t %= mod;
}
ans = (ans + t) % mod;
}
output.Append(ans).Append(' ');
}
}
output.AppendLine();
}
Console.Write(output.ToString());
}
public static long Inv(long a, long Mod) {
long u = 0, v = 1, m = Mod;
while (a > 0) {
long t = m / a;
m -= t * a; (a, m) = (m, a);
u -= t * v; (u, v) = (v, u);
}
return (u % Mod + Mod) % Mod;
}
}
}
2060G - Bugged Sort
Problem Credits: chromate00
Analysis: macaquedev
Consider $$$(a_i, b_i)$$$ as the $$$i$$$-th pair. Can two elements ever become unpaired?
No. Every operation keeps the pairs intact. The best way to think about the operation is like this:
Swap the pairs at index $$$i$$$ and index $$$j$$$.
Flip each pair (swap $$$a_i$$$ with $$$b_i$$$, and $$$a_j$$$ with $$$b_j$$$).
The operation allows us to swap two pairs, but it also results in both of them getting flipped. Is there any way to swap two pairs without flipping anything?
It turns out there is, and there always is, because $$$n \geq 3$$$. To swap pairs $$$i$$$ and $$$j$$$, follow this procedure:
Pick any index $$$k$$$, where $$$k \ne i$$$ and $$$k \ne j$$$.
Do the operation on $$$i$$$ and $$$k$$$.
Do the operation on $$$j$$$ and $$$k$$$.
Finally, do the operation on $$$i$$$ and $$$k$$$ again. This results in the pair at index $$$k$$$ being unchanged, and the pairs at $$$i$$$ and $$$j$$$ simply being swapped.
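The three-operation procedure can be verified directly; a small sketch (helper names are mine), where `op` implements the original swap-then-flip-both operation:

```cpp
#include <bits/stdc++.h>
using namespace std;
using P = pair<int, int>;

// The original operation: swap pairs i and j, then flip both of them.
void op(vector<P>& v, int i, int j) {
    swap(v[i], v[j]);
    swap(v[i].first, v[i].second);
    swap(v[j].first, v[j].second);
}

// Swap pairs i and j without flipping, using any helper index k != i, j
// (this is why n >= 3 matters).
void swapNoFlip(vector<P>& v, int i, int j, int k) {
    op(v, i, k);
    op(v, j, k);
    op(v, i, k);
}
```

Tracing the three calls shows that pair $$$k$$$ returns to its original state, while pairs $$$i$$$ and $$$j$$$ end up exchanged and unflipped.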

Can you flip two pairs without swapping them?
Yes. Just perform the operation on pairs $$$i$$$ and $$$j$$$, and then follow the procedure from Hint 2, and you are effectively swapping, then "swapping and flipping", therefore the final result is simply a flip of two pairs.

Can you flip one pair on its own?
No, this is not possible due to the parity invariant. The operation involves performing two flips at the same time, which means that it is impossible to end up in a situation where the number of flips is odd.
Read the hints first.
Key observation:
In the final solution, the pairs must be sorted in increasing order of $$$\min(a_i, b_i)$$$. We prove this by contradiction. Suppose there exists a pair $$$(a_i, b_i)$$$, where $$$a_i \lt b_i$$$ and $$$a_i$$$ is smaller than the minimum element in the previous pair. In this case, it becomes impossible to place this pair after the previous one, as $$$a_i$$$ would always be smaller than the preceding element, regardless of orientation.
Since all $$$a_i$$$ and $$$b_i$$$ are distinct, there is exactly one valid final ordering of the pairs. Moreover, this ordering is always reachable because we previously proved that any two pairs can be swapped without flipping their elements. Since swapping any two items in an array an arbitrary number of times allows for sorting by definition, we can sort the pairs in this order. Thus, the solution is to initially sort the pairs based on $$$\min(a_i, b_i)$$$.
Solution:
The problem simplifies to determining whether flipping pairs allows sorting. We can solve this using dynamic programming (DP), defined as follows:
$$$dp[1][i] = 1$$$ if it's possible to sort everything up to and including pair $$$i$$$ using an even number of flips, where pair $$$i$$$ is not flipped.
$$$dp[2][i] = 1$$$ if it's possible to sort everything up to and including pair $$$i$$$ using an even number of flips, where pair $$$i$$$ is flipped.
$$$dp[3][i] = 1$$$ if it's possible to sort everything up to and including pair $$$i$$$ using an odd number of flips, where pair $$$i$$$ is not flipped.
$$$dp[4][i] = 1$$$ if it's possible to sort everything up to and including pair $$$i$$$ using an odd number of flips, where pair $$$i$$$ is flipped.
The base cases are as follows:
$$$dp[1][1] = 1$$$, because this represents the state where the first pair is not flipped, resulting in zero flips (which is even).
$$$dp[4][1] = 1$$$, because this represents the state where the first pair is flipped, resulting in one flip (which is odd).
There are eight possible transitions, as you can reach each of the four states from two previous states: you either choose to flip the current element or you don't.
The final answer is $$$dp[1][n]$$$ $$$\texttt{OR}$$$ $$$dp[2][n]$$$, because by Hints 3 and 4, we can only reach states where an even number of pairs are flipped from the starting position.
#include <bits/stdc++.h>
using namespace std;
#define F0R(i, n) for (int i = 0; i < n; i++)
#define FOR(i, m, n) for (int i = m; i < n; i++)
#define print pair<int, int>
#define vp vector<print>
#define ALL(x) (x).begin(), (x).end()
void solve()
{
int n;
cin >> n;
vp pairs(n);
F0R(i, n)
{
cin >> pairs[i].first;
}
F0R(i, n)
{
cin >> pairs[i].second;
}
sort(ALL(pairs), [](print a, print b)
{ return min(a.first, a.second) < min(b.first, b.second); });
vector<vector<int>> dp(4, vector<int>(n, 0)); // dp[1..4] from the editorial, 0-indexed
dp[0][0] = 1; // first pair in original orientation (zero flips, even)
dp[3][0] = 1; // first pair flipped (one flip, odd)
FOR(i, 1, n)
{
if (pairs[i - 1].first < pairs[i].first and pairs[i - 1].second < pairs[i].second)
{
dp[0][i] |= dp[0][i - 1];
dp[1][i] |= dp[1][i - 1];
dp[2][i] |= dp[3][i - 1];
dp[3][i] |= dp[2][i - 1];
}
if (pairs[i - 1].first < pairs[i].second and pairs[i - 1].second < pairs[i].first)
{
dp[0][i] |= dp[2][i - 1];
dp[1][i] |= dp[3][i - 1];
dp[2][i] |= dp[1][i - 1];
dp[3][i] |= dp[0][i - 1];
}
}
cout << ((dp[0][n - 1] | dp[2][n - 1]) ? "YES" : "NO") << endl;
}
signed main()
{
int tests;
cin >> tests;
F0R(test, tests)
{
solve();
}
return 0;
}
C took me incredibly long. I was completely overcomplicating it thinking that we needed to subtract from the answer when there was an odd number of non-pair numbers.
hahaha, I did the same. I didn't realize that the number of unpaired numbers can never be odd until the contest ended, so silly.
hahahaha me too bro
Same, I didn't read that n is even only :(
How will that drastically change anything? All I can think of is making some modifications, like storing the count of the remaining unpaired elements and then checking its parity; would that work?
It has 2 additional cases:
If # unpaired is odd and n is odd, Alice gets the last unpaired, so we are good. If # unpaired is odd and n is even, Bob will be forced to disturb 1 pair.
Certainly not drastic but even is just easier, that's all. I was complicating the problem too much in my mind.
Everyone in my friend group seems to have had a problem with this.
But I don't understand how I did it within 10 minutes of reading the problem.
On the other hand, I never have problems implementing solutions. (I did this on my own.)
But I got destroyed trying to implement B.
And I did not understand D to begin with.
waooh! joose bro
Thanks!
It was a nice contest! I got stuck on problem E, but I'll learn from it.
:)
At first reading your comment, I thought you actually had to take a dump during the contest. I almost felt sorry for you.
Why is F so damn hard???
Is it just me, or was problem B kind of hard to implement? It took me a while to solve. Even the total number of users who solved it is lower than for problem C.
As an AKer in 2h29min, I think this is closer to DIV2 ;)
Problem G is very good.
I solved A, C and D but can't solve B. I don't know what is the logical mistake in my solution.
my approach which might help:
store the input as int[n][m]; sort all rows; find the order by taking the first element of each; check if the order is valid, else output -1
I understood the problem statement wrong, and maybe you did the same. I thought that the card stack resets to -1 every time a new round starts. This gives a different problem, with a different solution, but you can't notice it in the sample I/Os. This variation gets WA on test 2.
Did anyone use Faulhaber formula to solve F?
I tried that at first, but then I realised my way of counting was wrong lol
good contest, bad contest
The testcases for E could've been better. I just thought we had to make both graphs the same, but I guess that's what makes me a newbie.
thanks for the contest
man i dont know why my C is wrong.
It fails on this testcase: 1 2 4 2 3
why 301896773 gives TLE when submitted in C++20 but accepted when submitted in C++17
I'm quite new to graph theory problems, so please pardon any ignorance. However, for the life of me, I cannot understand why in problem E you cannot just count the number of pairs of (u, v) that are in G and not F, and the pairs in F but not G?
If (u, v) does exist in G, but not in F, it doesn't mean that you can not reach v from u (u -> w -> v)
Consider F = 1-2-3-4 and G = 1-4-2-3
Can someone explain how for E on the second test of test 2 the # of operations needed is only 2? I'm getting 33.
oh as long as there is a path between them it works didn't realize that
F was awesome
Any good resources to learn combinatorics?
Self review no one asked for:
so mad that i sold around 50 penalty points because of the misread on a...
i lost a little time on b because of interesting construction
rushed on c, my mind was too disorganized
c-d turnaround was lowkey clean, i hit an instant mindsolve
e i sent two wa because i thought we needed to simply match the edge list, rather than the path. test case 1 seemed to support this. i sent another wa submission because i thought it was overflow
f and g were beyond me
My man, at least you realized it in time.
When I realized I had read problem E wrong, the contest was already over.
It only took me ~30 min of implementation, so if I had realized I read the problem wrong, I could have come back easily (theoretically)...
Ready to go back to pupil and wait for the next Div. 4.
can I know the extension that you are using to predict the rating changes
carrot on firefox
Why was D at D?
Because we hoped that some people would prove the result instead of resorting to guessing that the obvious strategy works, and the proof is actually not so simple.
Hey, can you tell me whether my proof was correct?
I started from the first index of the array, taking the min of the first two elements. Then I subtracted it from both. This reduces one element to 0 and the next to whatever is left after subtracting. By the way, this was done only when the first element was smaller than the second, as otherwise, if this operation is performed, there is no way the array can become non-decreasing: zero is the minimum obtainable element, and every time we pair an element with 0, the min of them both will be zero. So if we get something like 1 0, it's likely impossible, but if we have smooth sailing till the last index, you are good to go.
Does it seem satisfactory or you have a better proof? I'd love to hear your take.
This doesn't tell me why it is optimal to decrease at the first element first. Perhaps it is better to do them in a different order. My proof is in the editorial.
There is an easier solution for G without dp and more straightforward to prove(?). Imagine $$$a$$$ and $$$b$$$ stacked on top of each other, draw an arrow pointing downwards from $$$a_i$$$ to $$$b_i$$$ if $$$a_i \gt b_i$$$ otherwise draw an upwards arrow from $$$b_i$$$ to $$$a_i$$$. Now doing an operation on $$$(i,j)$$$ is equivalent to swapping the pairs $$$(a_i,b_i),(a_j,b_j)$$$ then flipping their arrows.
Following from the editorial, the order of the pairs in the final sorted configuration is fixed and we can freely reorder the pairs. The question left is whether we can orient the arrows correctly?
Assume we already fixed the final direction of each arrow in the final state, the number of up arrows in the final and initial state has to have the same parity (parity of up arrows is invariant under swapping). So we now ask what possible parities of up arrows can the final state have?
First sort the pairs by $$$\min(a_i,b_i)$$$. For each $$$i$$$, if $$$\max(a_i,b_i) \gt \min(a_{i+1},b_{i+1})$$$ then their arrows must share the same direction; otherwise they can be different or the same. Now we split the pairs into consecutive segments where all arrows in the same segment must have the same direction. The parity of the number of up arrows can be either $$$0$$$ or $$$1$$$ iff there exists a segment of odd length, because changing the direction of all arrows in an even segment does not change the parity of up arrows. The answer reduces to these two conditions:
1) No index $$$i$$$ with $$$\max(a_i,b_i) \gt \max(a_{i+1},b_{i+1})$$$ exists.
2) There exists an odd segment, or the parity of up arrows in the initial state is even.
My solution is similar to yours, but I think you're missing one case.
Even if all the fixed segments are of even length, if there exist two consecutive $$$i$$$'s where $$$\max(a_i,b_i) \gt \min(a_{i+1},b_{i+1})$$$ is false, then you can flip the first of those two $$$i$$$'s without changing anything else. This means you can always make the total number of flips even. (Also, you still have to check that after ordering the pairs, one of the flips (or arrow directions) is valid, such as in the first example test case, where the middle pair can never be valid.)
Yes, that is in the second condition (initial up arrows even)
In the tutorial of B, it says in general,
the $$$i$$$-th cow (where $$$i\in[0,n-1]$$$) must be able to place $$$i, n+i, 2n+i, \dots$$$
but what if the input is
1
3 3
3 4 5
0 1 2
6 7 8
here there is still a possible permutation but not according to the tutorial ??
No player can play consecutive turns, so suppose player 2 plays 0, player 1 plays 3, and player 3 plays 6, then player 2 can't play any of his cards since they are smaller than 6. (If you think that player 2 can play all his cards, then player 1, then player 3, then you might have misread the question) A player will only get his turn after all other players have played, everyone maintains the same turn order throughout the game
In problem F solution 1, how are we calculating $$$\binom{n+1}{j+1}$$$ since $$$n$$$ is very large?
$$$n$$$ is large, but $$$j$$$ is small. You can compute it in $$$O(j)$$$ time.
Oh, got it. Thanks.
$$$\binom{n+1}{j+1} = (n+1)(n)(n-1)\cdots(n-j+1)/(j+1)!$$$; there are only $$$j+1$$$ factors in the numerator.
my downfall was A. I don't know why I was overthinking it so much when it was so obvious. Took me nearly one hour to solve. I really have a lot to learn...
For D I used a completely different approach, where the logic (at least if you have to prove it!) is simpler: starting from the end, find the allowed interval for the previous element. Then check that the first element fits in the required interval.
Damn... I used the DSU implementation from Algorithmica, and it was wrong from the start. I spent 30 minutes finding a bug in my solution, and the only bug was my trust in this site.
Algorithmica's DSU implementation is fine. The wrong answer on your first several submissions was caused by the check if (p[u] != p[*it]): to check whether two vertices are in the same component, you should compare their leaders, not their parents.

Yes, you are right here. But I couldn't pass the tests anyway, because the unite function doesn't check that leader(a) == leader(b), which caused an overflow. I know that you should understand, in detail, any implementation you find on the Internet, and I hadn't used DSU for quite a long time. I was just convinced that this basic check was in the unite function.
So, yeah, my statement about Algorithmica was not completely honest. Sorry about that.
Anyway, glad that I got the idea of the problem, and thanks for your reply.
I solved G without using dynamic programming and I think it's a bit more intuitive (at least for me).
Using the same insights as mentioned in the hints, my solution builds out a possible answer and checks if it has enough degrees of freedom to end up with an even number of flips, or necessarily has an even number of flips.
Say you've placed the first n pairs (as the editorial mentions, this order is fixed by $$$min(a_i, b_i)$$$), then you take the next smallest pair and decide how it should be flipped. There are 4 cases:
"NO". This would be like in the first example case where you place $$${1, 6}$$$ first and then have to place $$${2, 4}$$$, but no matter how you flip $$${2, 4}$$$ the number after the array with 6 will no longer be increasing.You don't need to track the whole array, just the chosen state of the previous pair, which you can initialize as $$${-1, -1}$$$ (since the first actual pair can have either orientation).
If at any point you have two in a row which can take either orientation, then you can flip or not flip the first one to make the total number of swaps even, regardless of how the rest turns out. From this point on, you will return
"YES"as long as you don't find a situation where a pair can't be either orientation.You will also want to track the number of fixed orientations there are in a row. If you end up with a block of fixed orientations which is of an even count, then you can flip the pair just before this block (which is necessarily one that can be either orientation, otherwise it would be part of this block) which would flip the orientation of all the pairs in this block as well, which will have flipped the orientation of an odd number of pairs, meaning the number of total flips will have gone from even to odd or vice versa. This freedom will mean no matter what the parity of the other fixed blocks are, you can make the total number of flips even or odd.
After going through all pairs, if you found either degree of freedom (either two free pairs in a row, or a block of fixed pairs of even count) or the number of flips applied for the array you built is already even, then it's possible.
Why is dp[3][1] = 1 in solution of question G? Shouldn't the dp[4][1] be equal to 1 since we flip the first pair and the total flips is equal to 1?
Editorial author here. Sorry, just a really annoying typo, because I got confused between zero and one-based indexing! It's definitely meant to be dp[4][1].
In problem F, why are we only doing $$$(n, j)$$$? There can be sequences with repeated factors as well, which will reduce the count.
For example, consider the sequence [2, 3, 3, 4, 4, 4] followed by $$$(n-6)$$$ ones, for any $$$n$$$. The count should be $$$n!/((n-6)! \cdot 2! \cdot 3!)$$$.
why am i getting TLE in C? I need help pls
You're declaring a vector of length 200002 for each test case, which is slow and can cause a TLE with many test cases.
You can solve problem B in O(nm) time. You don't have to sort the cards array; you only have to check whether the difference between consecutive terms in the array is a factor of n or not.
You mean multiple of n. By the way sorting makes it easier to construct the permutation.
Yeah, but the time complexity increases from O(nm) to O(nm log m).
There is an easier way to prove D, notice that if any set of operations sorts $$$a$$$, then that set + $$$op_1,op_2...op_{n-1}$$$ also sorts.
In the end, we will perform operations on every index at least once, and one can notice that performing $$$op_1$$$ as a first operation is optimal. After operating on index $$$1$$$, induction works for other indices.
A kind of brute force based matrix multiplication solution for F: https://mirror.codeforces.com/contest/2060/submission/301857001. This completes the max test
1
100000 536870911
in ~3.6s. The complexity is $$$O(k\log^2(k)\log(n))$$$.
Learnt something from the editorial.
A new word for me — Horrendous
Hii!
Can you guys tell me why this solution is getting wrong answer for problem E?
I know DSU will be the correct way of solving this, but this is what clicked in my mind.
Although your method itself (calculating the difference between edges) is wrong, your comparison method (concatenating the pair into a string) is wrong as well. Consider $$$(1,112)$$$ and $$$(11,12)$$$: both concatenate to the same string.
For why your method is wrong, consider:
$$$F$$$: $$$(1,2)$$$ $$$(1,3)$$$.
$$$G$$$: $$$(2,3)$$$ $$$(1,3)$$$.
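To make the string-concatenation pitfall concrete, here is a tiny sketch (badKey and okKey are hypothetical helper names for illustration):

```cpp
#include <cassert>
#include <string>
using namespace std;

// Concatenating the two endpoints without a separator makes
// distinct pairs collide: (1,112) -> "1112" and (11,12) -> "1112".
string badKey(int u, int v) { return to_string(u) + to_string(v); }

// A separator keeps the encoding unambiguous: "1,112" vs "11,12".
string okKey(int u, int v)  { return to_string(u) + "," + to_string(v); }
```
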
I would also like to mention, to save you from possible hacks in the future: don't use unordered_set and unordered_map, since those have worst-case O(n) time complexity for lookups. Instead, use the normal set and map.
okay!
Thanks for your suggestion. I am not so active on codeforces. What are hacks?
Essentially, people can make custom test cases to test against another solution, and if you are using unordered_map, they can make a test case that triggers the O(n) time complexity for lookups and give it to your program to make it time out.
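A common defensive pattern on Codeforces, if you still want unordered_map, is to plug in a randomized splitmix64-based hash so a prepared collision test no longer lines up with your buckets (a sketch of the pattern, not a guarantee):

```cpp
#include <cassert>
#include <chrono>
#include <cstdint>
#include <unordered_map>

// Randomized hash: splitmix64 mixing plus a per-run seed taken from the
// clock, so an attacker cannot precompute colliding keys.
struct custom_hash {
    static uint64_t splitmix64(uint64_t x) {
        x += 0x9e3779b97f4a7c15;
        x = (x ^ (x >> 30)) * 0xbf58476d1ce4e5b9;
        x = (x ^ (x >> 27)) * 0x94d049bb133111eb;
        return x ^ (x >> 31);
    }
    size_t operator()(uint64_t x) const {
        static const uint64_t SEED =
            std::chrono::steady_clock::now().time_since_epoch().count();
        return splitmix64(x + SEED);
    }
};
```

Usage is simply `std::unordered_map<long long, int, custom_hash>`.
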
The Solution2 of F has a typing mistake.
The formula $$$\sum\limits_{i=2}^n f_k(n)-f_k(n-1)$$$ should be corrected to $$$\sum\limits_{i=2}^n f_k(i)-f_k(i-1)$$$.
F: How does the dp from the editorial work? Example:
But that's wrong; the correct value is 4. Possible arrays: [9 2], [2 9], [6 3], [3 6].
You update the dp from all non-1 factors, not just primes. In this case it should be dp[18][2] = dp[18/2][1] + dp[18/3][1] + dp[18/6][1] + dp[18/9][1] + dp[18/18][1] = 1 + 1 + 1 + 1 + 0 = 4.
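A brute-force sketch of that recurrence (cnt is a hypothetical helper; a real solution would memoize and work modulo a prime), which reproduces dp[18][2] = 4:

```cpp
#include <cassert>

// cnt(x, len) = number of arrays of length `len`, every entry > 1,
// whose product is exactly x. We branch on the last element d, which
// can be any divisor of x greater than 1:
//   cnt(x, len) = sum over d | x, d > 1 of cnt(x / d, len - 1).
long long cnt(int x, int len) {
    if (len == 0) return x == 1;  // empty product equals 1
    long long total = 0;
    for (int d = 2; d <= x; d++)
        if (x % d == 0) total += cnt(x / d, len - 1);
    return total;
}
```

For x = 18 and length 2 this counts exactly the four arrays listed above.
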
I wish Problem E had been based on a different topic instead of graphs.
For problem G, if you have played a Rubik's cube, you can see that we use the 2x2 Y-perm to construct the E-perm. It's very interesting. G is a very good problem.
Regarding the solution for G, the key observation part can be proven without contradiction.
Since the final solution keeps the sequences $$$a$$$ and $$$b$$$ in ascending order, we have the following.
$$$a_i \leq a_{i+1}$$$ and $$$b_i \leq b_{i+1}$$$.
Real numbers have a convenient property: $$$x \leq y$$$ and $$$z \leq w$$$ imply $$$min(x,z) \leq min(y,w)$$$, so we can immediately derive the following result.
$$$min(a_i,b_i) \leq min(a_{i+1}, b_{i+1})$$$.
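Spelled out, the chain of inequalities behind this is:

```latex
\min(a_i, b_i) \le a_i \le a_{i+1}
\quad\text{and}\quad
\min(a_i, b_i) \le b_i \le b_{i+1},
```

so $$$min(a_i,b_i)$$$ is a lower bound for both $$$a_{i+1}$$$ and $$$b_{i+1}$$$, and in particular for their minimum.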
In problem F, solution 1 in the editorial computes the number of non-unity sequences of length up to 16 using dp. You can actually do it with inclusion-exclusion too.
Problem F is an advanced version of LeetCode 1735. Count Ways to Make Array With Product. You can try solving the simpler one first.
Can someone tell me why Problem C gives TLE on test case 17 for this solution in Java 8? It uses sorting and two pointers.
Thanks for the contest, this was such a great problem set; it's been a long time since I've enjoyed one like this. Problem F is definitely on my top 3 list.
Can someone comment on the time complexity if E is solved using DFS rather than DSU? I think DFS can get TLE under the same constraints if the test cases are not so gentle.
My solution with BFS was O(N log N), though I can't say if that's the best time complexity possible.
Can anyone in the community please help with the following issue of mine:
I had solved the first 4 and hadn't got any error in C. But during system testing I got a runtime error on test 14 (my submission: 301790765). With the same logic, another solution by a friend of mine (301906786) was accepted. You can see that the only difference is that he accesses the first element of the multiset while I was accessing the last element.
With my other ID I had submitted the exact same code that got a runtime error on test 14, but to my surprise, on that other ID with the same code and same compiler I got a runtime error on test 17 (301902846) instead of 14.
After that I tried the exact solution of my friend on the same compiler, C++20, but this time I got a runtime error on test 4 (302179372). So please help me, because due to this runtime error I got -99 instead of -35 and my official standing slipped from 4000 to 11500.
Pls help cry
OK, here's why it's happening:
You're setting req equal to the value at *it,
then you're deleting the element that the iterator points to,
and then you're trying to access the data behind the iterator you've just deleted, which results in undefined behaviour.
You were on the right track when you wrote ll req = *it; if you had just written s.erase(s.find(k-req)) instead of s.erase(s.find(k-*it)), you'd have been fine.

Yes, I too realized it after some time. Anyway, thanks a lot.
no problem, was a fun exercise in debugging for me
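For readers who want to see the bug pattern and its fix side by side, here is a minimal sketch (the values 2, 3, 7 and k = 9 are made up for illustration):

```cpp
#include <cassert>
#include <set>
using namespace std;

// Erasing the element an iterator points to invalidates that iterator,
// so any later *it is undefined behaviour. The fix: copy the value out
// first, then erase through the copy.
int demo() {
    multiset<long long> s = {2, 3, 7};
    long long k = 9;
    auto it = s.begin();        // points at 2
    long long req = *it;        // copy the value BEFORE erasing
    s.erase(it);                // `it` is now invalid; never deref it again
    s.erase(s.find(k - req));   // safe: uses the saved copy, erases 7
    return (int)s.size();       // only 3 remains
}
```
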
Am I the only one who misunderstood the B problem statement? I thought the card stack resets to -1 every time a new round starts. I realised I was wrong when I saw the solution. Luckily for me I didn't participate in the contest :P
I have a few questions regarding Solution 2 for Problem F.
1) "By induction we can prove that both fk(n) and Sk(n) are polynomials." How can we prove this?
2) "From the definition of Sk(n), we have q(k)=p(k)+1."
The definition of Sk(n) is the sum of fk(i) for i=1 to n, isn't it? Because of that, Sk(n) is a sum of polynomials of degree p(k). A sum of polynomials of degree p(k) should have the same degree p(k). Given all of that, q(k) should be p(k). How is q(k)=p(k)+1?
1) First, $$$f_1(n)=1$$$ is a polynomial.
Second, assume that for all $$$\Omega(k)\le m$$$, $$$f_k(n)$$$ are polynomials (where $$$\Omega(k)$$$ is the prime omega function). Then, for $$$\Omega(k)=m+1$$$, $$$f_k(n)-f_k(n-1)=\sum_{j|k,j \lt k}f_j(n-1)$$$. Since $$$\Omega(j)\le m$$$, the RHS is a polynomial. Then, you can accumulate the formula to get $$$f_k(n)-f_k(1)= \mathrm{sum\ of\ some\ polynomials}$$$, so $$$f_k(n)$$$ is also a polynomial.
2) Note that $$$\sum_{i=1}^{n}i^m=\Theta(n^{m+1})$$$.
Thank you.
I understand the answer to the first question (thanks again), but I have no idea what happened on the right side of the equation in the second answer. I have never seen anything other than an angle denoted as $$$\Theta$$$. Is $$$\Theta$$$ some function? How do I calculate $$$\Theta(n^{m+1})$$$?
I made a video solution on this problem, it might help, https://youtu.be/ULskMLZo7yQ?si=QQgImyKcXCHqq2c7. Sorry for the slow explanation
Sorry, that doesn't explain Solution 2. Please ignore.
$$$f(x)=\Theta(g(x))$$$ is the big Theta notation.
Thanks. Now I understand.
I am unable to find the failing case for my submission on G; it's WA on test 4. Can someone give a test case for it?
Here is my submission: 303466389
Another solution for F: For a fixed $$$x$$$, let the $$$(\alpha_1, \alpha_2, \dots, \alpha_s)$$$ be the powers of primes in its factorization. Then, for a fixed $$$|a| = m$$$, the number of arrays is $$$\prod\limits_{i=1}^{s}\binom{m+\alpha_i-1}{\alpha_i}$$$ since we choose positions of primes independently for different primes and for a fixed prime we have a kind of "stars and bars" situation. To get the answer for a certain $$$x$$$, we sum the products for $$$m = \overline{1,n}$$$.
Since $$$n$$$ is large, it's impossible to do explicitly. Instead, notice that $$$\prod\limits_{i=1}^{s}\binom{m+\alpha_i-1}{\alpha_i}$$$ is a polynomial of $$$m$$$ with a degree no more than $$$16$$$ (since there are no more than $$$\log_2{(x)} \leq \log_2{(k)} \lt 17$$$ primes in factorization of $$$x$$$). We can write the polynomial as $$$\sum\limits_{j=0}^{t}c_{xj}\cdot m^j$$$, then the answer for $$$x$$$ is \begin{equation} \sum\limits_{m=1}^{n}\sum\limits_{j=0}^{t}c_{xj}\cdot m^j = \sum\limits_{j=0}^{t}c_{xj}\sum\limits_{m=1}^{n}m^j = \sum\limits_{j=0}^{t}c_{xj}\cdot f_j(n), \end{equation} where $$$f_j(n)$$$ is a sum of first $$$n$$$ $$$j$$$-th powers which is actually a polynomial of degree $$$j+1$$$ (Faulhaber's formula: https://en.wikipedia.org/wiki/Faulhaber%27s_formula ).
We can precompute the coefficients of $$$f_j(x)$$$ for $$$j = \overline{1...\lfloor\log_2(k)\rfloor}$$$ by constructing a system of linear equations and Gaussian elimination in $$$O(\log^4(k))$$$. Then we get an answer for a single $$$x$$$ in $$$O(\log^2(k))$$$ time required for multiplication and evaluation of polynomials.
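A small sketch of the stars-and-bars count described above, in plain integer arithmetic (a real solution would work modulo a prime and reuse precomputed factorials; countArrays is a hypothetical helper name):

```cpp
#include <cassert>

// Exact small binomial via the standard running product:
// after step i, res == C(n - r + i, i), which is always an integer.
long long binom(long long n, long long r) {
    long long res = 1;
    for (long long i = 1; i <= r; i++) res = res * (n - r + i) / i;
    return res;
}

// Number of length-m arrays of positive integers with product x,
// where alpha[0..s-1] are the prime exponents of x. Each prime's
// exponent is distributed over the m positions independently,
// giving C(m + alpha_i - 1, alpha_i) choices per prime.
long long countArrays(const long long* alpha, int s, long long m) {
    long long res = 1;
    for (int i = 0; i < s; i++) res *= binom(m + alpha[i] - 1, alpha[i]);
    return res;
}
```

For example, x = 12 = 2^2 * 3 with m = 2 gives C(3,2) * C(2,1) = 6, matching the six ordered pairs (1,12), (2,6), (3,4), (4,3), (6,2), (12,1).
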
Why does my code for problem B fail on test case 158?
Probably you aren't checking the current top card and comparing it with the minimum of the cards currently left. My idea of a solution is the following.

On every round, each cow must place its minimum card: otherwise, if on round j it has minimum card k but instead places some m > k, then on any round i > j it can't play that k anymore, since the top card will be larger. So on each round, every cow removes the minimum from its deck.

Thus: sort each cow's deck, take the minimum card of each deck, and sort these minimums; this gives the turn order, which stays fixed for all rounds. For the order to be correct, it must be playable on every round, so two properties should hold on each round j: the minimums of the remaining decks must be in the same relative order as on round 0 (if the minimum of cow i is smaller than the minimum of cow j on round 0, the same must hold on every round), and the smallest of those minimums must be larger than the current top card (which is the maximum of the minimums played in the previous round). If either property fails, the answer is -1; otherwise the answer is the order determined on round 0.

For implementation: sort each cow's deck, then iterate rounds j from 0 to m-1; on each iteration sort column j of the row-wise sorted grid and compare the resulting row order to the order of round 0 (comparing order means maintaining the row indices of the sorted round-0 column and checking that every sorted column matches them).
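The steps above can be sketched as follows (findOrder is a hypothetical helper; it assumes decks[i] holds cow i's cards and that a card can be played only if it beats the current top card):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// Sort each cow's deck, order cows by their smallest card, then check
// that playing column by column in that fixed order always beats the
// current top card. Returns the 0-based cow order, or an empty vector
// if no valid order exists.
vector<int> findOrder(vector<vector<int>> decks) {
    int n = decks.size();
    for (auto& d : decks) sort(d.begin(), d.end());
    vector<int> order(n);
    for (int i = 0; i < n; i++) order[i] = i;
    sort(order.begin(), order.end(),
         [&](int a, int b) { return decks[a][0] < decks[b][0]; });
    int top = -1;
    int m = decks[0].size();
    for (int j = 0; j < m; j++)            // rounds
        for (int i : order) {              // fixed turn order
            if (decks[i][j] <= top) return {};
            top = decks[i][j];
        }
    return order;
}
```
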
Rather than using DP, another approach for G:
We already established that the final order of pairs must be by ascending $$$\min(a_i, b_i)$$$.
Observe that a valid reordering of pairs, i.e. ensuring $$$a$$$ and $$$b$$$ are sorted but not necessarily with an even number of flips, exists if and only if flipping all pairs to ensure $$$a_i \lt b_i$$$ produces a valid reordering.
If a valid reordering exists then casework can be applied to see if the parity condition can be satisfied. Firstly:
Then, generalizing the second case: it's possible that an odd-length subsegment can be flipped while keeping the reordering valid. It turns out that when the first case is false, the parity condition can be met if and only if this is true.
In summary, the algorithm is:
Let the pairs be $$$p_1, p_2, \ldots$$$. If we try to flip $$$p_i,\ldots, p_j$$$, it's necessary and sufficient to ensure the ordering is still preserved between $$$p_{i-1}, p_i$$$ and between $$$p_j, p_{j+1}$$$. We need to find any $$$i, j$$$ that meet this condition where $$$j-i+1$$$ is odd. Find the earliest $$$i\equiv 0\pmod{2}$$$ and the earliest $$$i\equiv 1\pmod{2}$$$ where the order between $$$p_{i-1}, p_i$$$ is preserved (actually the first value of $$$i$$$ is always $$$0$$$). Then enumerate $$$j$$$. If we find a $$$j$$$ where flipping preserves the order between $$$p_j, p_{j+1}$$$, and a corresponding $$$i$$$ exists, then statement 5 is true.
Can someone help me with how I should improve? I was able to come up with the approach for E, but implementing it was difficult. Any suggestions?
good contest for practice
The solution for problem B can be done in O(n*m) too, I guess. Below is my solution.
Isn't E brute force?
Why can't you just use sets, and if s1 contains a pair that s2 doesn't then ans++, and if s2 contains a pair that s1 doesn't, again ans++?
hush little boater don't you yap. I'll write the code for you.