Thanks for participating! We hope you enjoyed the contest.
2124A - Deranged Deletions
Problem Credits: Lilypad
When is there definitely not a solution?
First, note that since relative order is preserved no matter which elements are deleted, if $$$a$$$ is originally sorted in nondecreasing order, the array cannot be a derangement no matter which elements are deleted.
If $$$a$$$ is not sorted, any two elements that form an inversion pair satisfy the requirements. In particular, if $$$a$$$ is not sorted, some adjacent pair is inverted, so it suffices to scan adjacent pairs. The total runtime is $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
void solve(){
    int n;
    cin >> n;
    vector<int> arr(n);
    for(auto &x : arr) cin >> x;
    // an unsorted array always contains an adjacent inversion
    for(int i = 0; i + 1 < n; i++){
        if(arr[i] > arr[i + 1]){
            cout << "YES\n2\n";
            cout << arr[i] << " " << arr[i + 1] << "\n";
            return;
        }
    }
    cout << "NO\n";
}
int main(){
    ios_base::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while(t--) solve();
}
2124B - Minimise Sum
Problem Credits: cry
Special thanks to prvocislo for misreading problem G as the statement of this problem, nifeshe for solving it, and Dominater069 for suggesting we use this problem!
Write some examples of arrays down.
The answer is $$$\min(a_1,a_2)+a_1$$$.
If $$$a_1 \gt a_2$$$, we can perform the operation with $$$i=1$$$ and $$$j=2$$$, so that the prefix minimum from index $$$2$$$ onward is $$$0$$$, giving $$$a_1+a_2$$$ as our answer.
If $$$a_1 \lt a_2$$$, we can perform the operation with $$$i=2$$$ and $$$j=3$$$. Since $$$a_1 \lt a_2$$$, $$$\min(a_1,a_2)=a_1$$$, and the prefix minimum is $$$0$$$ from index $$$3$$$ onward, giving $$$2a_1$$$. We should be careful about the edge case $$$n=2$$$, as there is no index $$$3$$$ to use; however, if $$$n=2$$$, we can simply skip the operation. (If $$$a_1 = a_2$$$, the two expressions coincide, so either choice works.)
We can easily show that the above cases are optimal for this problem. The code runs in $$$O(n)$$$, bottlenecked by reading the array.
import sys
input = sys.stdin.readline
for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    print(min(2*a[0], a[0] + a[1]))
2124C - Subset Multiplication
Problem Credits: Proof_by_QED
Come up with some numbers that $$$x$$$ must be a multiple of.
Take the LCM. Prove it's sufficient (optional).
First, note that in order for $$$b$$$ to be obtainable from an array $$$a$$$, each $$$a_i$$$ must be a factor of $$$b_i$$$. We can find that $$$x$$$ must be a multiple of $$$b_i/\gcd(b_i, b_{i+1})$$$ for each $$$1 \leq i \lt n$$$, which gives a lower bound on the answer.
Let's prove that these conditions are also sufficient. To do this, let's analyze each pair $$$(i,i+1)$$$ separately. We analyze the following cases:
- if $$$b_i=a_i\cdot x$$$ and $$$b_{i+1}=a_{i+1}\cdot x$$$, or $$$b_i=a_i$$$ and $$$b_{i+1}=a_{i+1}$$$: in both cases, the choice of $$$x$$$ does not matter. The array is good if $$$b_i$$$ divides $$$b_{i+1}$$$ initially.
- If $$$b_i=a_i\cdot x$$$ and $$$b_{i+1}=a_{i+1}$$$, then taking the LCM over all $$$1 \leq i \lt n$$$ should ensure this case is always correct.
- If $$$b_i=a_i$$$ and $$$b_{i+1}=a_{i+1}\cdot x$$$, then a larger value of $$$x$$$ cannot possibly be harmful, as the ratio $$$\frac{a_{i+1}}{a_i}$$$ only decreases.
Since the problem guarantees $$$x$$$ exists, taking the LCM and outputting it suffices. The solution runs in $$$O(n \log{a_i})$$$.
import sys
import math
input = sys.stdin.readline
for _ in range(int(input())):
    n = int(input())
    a = list(map(int, input().split()))
    gcd = 0
    lcm = 1
    for i in range(n-1, -1, -1):
        gcd = math.gcd(gcd, a[i])
        lcm = math.lcm(lcm, a[i] // gcd)
    print(lcm)
Solve the old version of this problem: given the array $$$b$$$ (where a solution is no longer guaranteed), output any array $$$a$$$ or claim it is not possible to obtain $$$b$$$ through a valid array $$$a$$$.
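One way to attack this extended version: compute the same LCM candidate for $$$x$$$, then check with a small DP whether a valid $$$a$$$ exists for it (each $$$a_i$$$ must be $$$b_i$$$ or $$$b_i/x$$$, with $$$a_i \mid a_{i+1}$$$). The sketch below only tests the LCM candidate; proving that this candidate suffices whenever any $$$x$$$ works is part of the exercise, and `recover` is a hypothetical helper name, not part of any official solution.

```python
import math

def recover(b):
    """Try to recover a good array a (a_i divides a_{i+1}) such that
    multiplying some subset of a by x yields b; return (x, a) or None."""
    n = len(b)
    x = 1
    for i in range(n - 1):
        x = math.lcm(x, b[i] // math.gcd(b[i], b[i + 1]))
    # candidate values for each a_i under this x
    choices = [{v} | ({v // x} if v % x == 0 else set()) for v in b]
    # DP over the (at most 2) reachable values of a_i, keeping one witness prefix
    reach = {v: [v] for v in choices[0]}
    for i in range(1, n):
        nxt = {}
        for v, pre in reach.items():
            for w in choices[i]:
                if w % v == 0 and w not in nxt:  # need a_{i-1} | a_i
                    nxt[w] = pre + [w]
        reach = nxt
    if not reach:
        return None
    return x, next(iter(reach.values()))
```

For example, `recover([2, 3])` finds $$$x=2$$$ with $$$a=[1,3]$$$, while `recover([2, 3, 2])` reports failure for the LCM candidate $$$x=6$$$.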
2124D - Make a Palindrome
Problem Credits: satyam343
Determine which elements you can delete with the operation.
Come up with a greedy solution that erases as few elements as possible.
First, let's come up with some conditions that determine whether erasing an element is possible.
Claim: as long as the $$$k-1$$$ smallest elements remain in the array, we can erase other elements in any way we want.
Proof: Clearly we cannot erase any of the $$$k-1$$$ smallest elements. To erase any other element, we can run the following process to find a pair $$$(l,r)$$$ that erases it: initially, set $$$l=1$$$ and $$$r=n$$$. The element we want to erase is at least the $$$k$$$-th smallest in this window. Then, we repeatedly either increase $$$l$$$ or decrease $$$r$$$ until the element is exactly the $$$k$$$-th smallest. This process always terminates: once $$$r-l+1 \lt k$$$, the element would rank below $$$k$$$-th smallest, and since each shrink changes its rank by at most one, the rank must pass through exactly $$$k$$$.
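The shrinking argument can be simulated directly. Below is a toy sketch (assuming distinct elements so that ranks are unambiguous; `find_window` is a hypothetical helper, 0-indexed):

```python
def find_window(a, t, k):
    """Shrink [l, r] until a[t] is exactly the k-th smallest inside it.
    Assumes distinct elements and that a[t] is at least the k-th smallest
    of the whole array."""
    l, r = 0, len(a) - 1
    while True:
        rank = sum(v <= a[t] for v in a[l:r + 1])  # rank of a[t] in the window
        if rank == k:
            return l, r
        # drop an endpoint that is not the target; the rank drops by at most 1
        if r != t:
            r -= 1
        else:
            l += 1
```

For instance, to erase the value $$$5$$$ (the $$$3$$$-rd smallest overall) from $$$[5,1,4,2,3]$$$ with $$$k=3$$$, the process shrinks to the window $$$[5,1,4]$$$, where $$$5$$$ is exactly the $$$3$$$-rd smallest.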
Let the $$$(k-1)$$$-th smallest element be $$$x$$$. Now, we can categorize each element of the array into three groups: elements less than $$$x$$$, elements equal to $$$x$$$, and elements greater than $$$x$$$. There is no point in keeping any elements of the third group, so we can erase them first (forming a second array $$$b$$$ containing only elements of the first two groups). Now, we must strategically delete elements of the second group to make a palindrome. We should try to erase as few elements as possible, because the number of elements we keep must be at least $$$k-1$$$. We can maintain this process with a two-pointer approach. Initialize $$$l=1$$$ and $$$r=|b|$$$. If $$$b_l=b_r$$$, we keep both. If $$$b_l\neq b_r$$$ and neither element is $$$x$$$, we can conclude it is impossible. If exactly one of $$$b_l,b_r$$$ equals $$$x$$$, we delete that one and move the corresponding pointer. In the end, we do a final check that we did not erase too many elements.
The final solution runs in $$$O(n \log(n))$$$ due to sorting to find the $$$k$$$-th smallest element.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
#define all(x) x.begin(), x.end()
#define vecin(name, len) vector<int> name(len); for (auto &_ : name) cin >> _;
#define vecout(v) for (auto _ : v) cout << _ << " "; cout << endl;
void solve() {
    int n; cin >> n;
    int k; cin >> k;
    vecin(aa, n);
    if (k == 1) {
        cout << "YES" << endl;
        return;
    }
    vector<int> bb = aa;
    sort(all(bb));
    int cutoff = bb[k - 2];
    vector<int> cc;
    for (auto a : aa)
        if (a <= cutoff)
            cc.push_back(a);
    int spare = cc.size() - k + 1;
    int L = 0, R = cc.size() - 1;
    bool bad = false;
    while (L < R) {
        if (cc[L] != cc[R]) {
            if (spare == 0) {
                bad = true; break;
            }
            if (cc[L] == cutoff) {
                L++;
                spare--;
            } else if (cc[R] == cutoff) {
                R--;
                spare--;
            } else {
                bad = true; break;
            }
            continue;
        }
        L++;
        R--;
    }
    if (bad) {
        cout << "NO" << endl;
    } else {
        cout << "YES" << endl;
    }
}
int main() {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    int tt = 1;
    cin >> tt;
    while (tt--) solve();
    return 0;
}
2124E - Make it Zero
Problem Credits: satyam343
Solve the $$$n=3$$$ case first.
The answer is either $$$-1$$$, $$$1$$$, or $$$2$$$.
First, find a construction in $$$3$$$ moves. Then, optimize it to be done in $$$2$$$ moves.
First, let's determine when the answer is $$$-1$$$. Let $$$s$$$ be the sum of all elements in the array. The answer is $$$-1$$$ if:
- $$$s$$$ is odd, or
- There exists an element that is greater than half of $$$s$$$.
If neither of the above conditions holds, we can definitely reach all zeros in finitely many moves by repeatedly subtracting $$$1$$$ from the largest element and $$$1$$$ from any other element in each move.
Let's now solve the $$$n=3$$$ case to build some intuition. There are, of course, trivial cases where the answer is $$$1$$$ (where the first or last element is the sum of the other two). One array we may want to examine is one where the second element is the sum of the first and last, like $$$[1,3,2]$$$. We can solve this in two moves, by setting $$$a_1$$$ to $$$0$$$ and $$$a_2$$$ to $$$a_3$$$ on the first move. What about an array where the middle element is less than the sum of the other two, for example $$$[3,4,3]$$$? One thing we can try is subtracting from $$$a_1$$$ and $$$a_3$$$ to reduce to the $$$a_1+a_3=a_2$$$ case. Is this always possible? It turns out it is, as long as $$$a_2 \lt a_1+a_3$$$ and $$$a_1+a_2+a_3$$$ is even (here, the first move would turn the array into $$$[2,4,2]$$$). We can even combine two moves into one to finish in two moves total: in the case of $$$[3,4,3]$$$, we can transform the array to $$$[2,2,0]$$$ after the first move.
It turns out we can generalize the $$$n=3$$$ case to any $$$n$$$, by finding one "pivot" element to act as $$$a_2$$$, with the prefix sum before it acting as $$$a_1$$$ and the suffix sum after it acting as $$$a_3$$$. Let $$$pos$$$ be the unique position in the array where $$$a_1+a_2+\ldots+a_{pos-1} \lt \frac{s}{2}$$$ and $$$a_1+a_2+\ldots+a_{pos}\geq\frac{s}{2}$$$. Let $$$x,y,z$$$ denote $$$a_1+a_2+\ldots+a_{pos-1}$$$, $$$a_{pos}$$$, and $$$a_{pos+1}+a_{pos+2}+\ldots+a_n$$$ respectively. Precisely, our goal is to make $$$x+z=y$$$ in the first move by subtracting a non-negative integer $$$w$$$ from both $$$x$$$ and $$$z$$$, make $$$x=0$$$ and $$$z=y$$$ after the second move, and $$$x=y=z=0$$$ after the third. The first move is always possible: $$$x+z-y$$$ is even (since the sum of all elements is even) and non-negative (since $$$y\leq \frac{s}{2}$$$ while $$$x+z=s-y\geq\frac{s}{2}$$$), so $$$w=\frac{x+z-y}{2}$$$ works.
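The pivot choice and the feasibility of the first move can be sanity-checked numerically. A minimal sketch (`pivot_parts` is a hypothetical helper, 0-indexed; the asserts encode the claims from the text):

```python
def pivot_parts(a):
    """Locate the pivot a_pos, split the array into x (prefix sum),
    y (pivot), z (suffix sum), and return the first-move amount w."""
    s = sum(a)
    assert s % 2 == 0 and 2 * max(a) <= s  # otherwise the answer is -1
    pref = 0
    for pos, v in enumerate(a):
        if pref + v >= s // 2:
            break
        pref += v
    x, y, z = pref, a[pos], s - pref - a[pos]
    w = (x + z - y) // 2                   # subtract w from both x and z
    assert 0 <= w <= min(x, z) and (x - w) + (z - w) == y
    return x, y, z, w
```

On $$$[3,4,3]$$$ this yields $$$x=3,y=4,z=3,w=1$$$, matching the $$$[2,4,2]$$$ example above.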
Finally, to optimize this solution to $$$2$$$ moves, instead of subtracting $$$w$$$ from both $$$x$$$ and $$$z$$$ in the first move, let's subtract $$$x$$$ from the first block, $$$x-w$$$ from the second block, and $$$w$$$ from the third block. Essentially, we are combining the first two operations into one.
The solution runs in $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 1E9 + 7;
const int INF = 1E9; const ll INFLL = 1E18;
int n;
int main() {
ios_base::sync_with_stdio(false);
cin.tie(0);
int T; cin >> T;
for(int test = 1; test <= T; test++) {
cin >> n;
vector<ll> a(n);
for(int i = 0; i < n; i++) {
cin >> a[i];
}
ll sum = 0;
for(ll i : a) {
sum += i;
}
if(sum % 2) {
cout << "-1\n";
continue;
}
ll mx = 0;
for(ll i : a) {
mx = max(mx, i);
}
if(mx * 2 > sum) {
cout << "-1\n";
continue;
}
ll cur = 0; int ind = 0;
for(; ind < n; ind++) {
if(cur + a[ind] > sum / 2) {
ind--;
break;
}
cur += a[ind];
}
if(cur == sum / 2) {
cout << "1\n";
for(ll i : a) {
cout << i << " ";
}
cout << "\n";
} else {
cout << "2\n";
vector<ll> aa(n);
ll extra = sum - 2 * cur;
aa[ind + 1] = extra / 2;
a[ind + 1] -= aa[ind + 1];
extra /= 2;
for(int i = n - 1; i > ind + 1 && extra; i--) {
if(a[i] >= extra) {
aa[i] = extra;
a[i] -= extra;
extra = 0;
} else {
aa[i] = a[i];
extra -= a[i];
a[i] = 0;
}
}
for(int i = 0; i < n; i++) {
cout << a[i] << " ";
}
cout << "\n";
for(int i = 0; i < n; i++) {
cout << aa[i] << " ";
}
cout << "\n";
}
}
}
2124F1 - Appending Permutations (Easy Version) and 2124F2 - Appending Permutations (Hard Version)
Problem Credits: Proof_by_QED
Huge thanks to Benq for transforming my original problem (which is solvable using a sequence found in OEIS) to the current problem!
Given an integer $$$n$$$ and no restrictions, count the number of possible arrays you can form $$$(n \leq 2\cdot 10^5)$$$.
Characterize the arrays that we might overcount.
Optimize the DP using some cumulative sums.
Coming up with a polynomial solution
It's clear we need some sort of dynamic programming for this task. Initially, one may let $$$dp_i$$$ be the number of ways to completely fill a prefix of total length $$$i$$$ with appended permutations.
Unfortunately, one downfall of the above solution is that some arrays are obtainable through many ways: for example, the array $$$[1,2,1]$$$ can be constructed with $$$[1]+[2,1]$$$ or $$$[1,2]+[1]$$$.
Upon further investigation, we can see that this occurs when we add two identity permutations of size $$$x$$$ and $$$y$$$ with $$$x \gt y$$$, as this can also be counted with a permutation of size $$$y$$$ and a shifted permutation of size $$$x$$$. We will choose to ban the second option (appending a shifted permutation that starts with $$$y+1$$$ when we just added a permutation of size $$$y$$$). We can come up with a counterexample to banning two identity permutations (a bigger one following a smaller one) with the array $$$[1,2,3,1,2,1]$$$, which can be counted as both $$$[1,2]+[3,1,2]+[1]$$$ and $$$[1]+[2,3,1]+[2,1]$$$.
This gives us a $$$O(n^4)$$$ solution. Let $$$dp(i,j)$$$ be the number of ways to make the sequence from indices $$$1$$$ through $$$i$$$, such that the last permutation added was an identity permutation of length $$$j$$$ (or not an identity permutation if $$$j=0$$$). We can build a solution by looping through (index, length of permutation, which cyclic shift to use), and checking if we can add this permutation naively.
Optimizing the solution to $$$O(n^2)$$$
There are several ways to do this part. Below shows one approach.
First, let's precalculate $$$chain(i,j)$$$, the largest integer $$$k$$$ such that it is possible to fill positions $$$i, i+1, \ldots, i+k-1$$$ with the numbers $$$j,j+1,\ldots,j+k-1$$$. We can calculate this as $$$chain(i,j)=0$$$ if the restriction $$$a_i\neq j$$$ is present, and $$$1+chain(i+1,j+1)$$$ otherwise. Let
- $$$dp(i,j)$$$ be the number of ways to fill indices $$$i,i+1,\ldots,n$$$ such that $$$a_i=j$$$
- $$$dp_2(i)$$$ be the sum of $$$dp(i,j)$$$ over all $$$1 \leq j \leq n$$$.
- $$$suff(i,j)$$$ be the number of ways such that we are ready to insert $$$j$$$ to complete the suffix from $$$i$$$ to $$$n$$$, but without taking into account restrictions.
We can calculate $$$suff(i,j)$$$ as $$$\sum_{x=i}^n(chain(x,1)\geq j-1)\cdot dp_2(x+j-1)$$$. Why is this helpful? Let's work through an example. Suppose we fix $$$j=3$$$, and $$$chain(x,1)\geq 2$$$ holds for $$$x=3,4,6$$$. That is, it's possible to place $$$1$$$ and $$$2$$$ in positions $$$(3,4)$$$, $$$(4,5)$$$, and $$$(6,7)$$$. Then, we should have $$$suff(2,3)=dp_2(5)+dp_2(6)+dp_2(8)$$$: if we set $$$a_2=3$$$, then $$$suff(2,3)$$$ accounts for the fact that we can theoretically put $$$(3,1,2)$$$, $$$(3,4,1,2)$$$, or $$$(3,4,5,6,1,2)$$$ starting at index $$$2$$$, assuming there are no further restrictions.
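To make the $$$suff$$$ formula concrete, here is the worked example in code. Everything here is illustrative, not the real solution: the banned set is hypothetical, chosen so that $$$chain(x,1)\geq 2$$$ holds exactly for $$$x=3,4,6$$$, and the $$$dp_2$$$ values are placeholders.

```python
# 1-indexed toy setup
n = 8
banned = {(2, 1), (5, 1), (8, 2)}           # forbidden (position, value) pairs

# chain(i, j) = 0 if (i, j) is banned, else 1 + chain(i + 1, j + 1)
chain = [[0] * (n + 2) for _ in range(n + 2)]
for i in range(n, 0, -1):
    for j in range(n, 0, -1):
        chain[i][j] = 0 if (i, j) in banned else 1 + chain[i + 1][j + 1]

dp2 = {i: 10 * i for i in range(1, n + 1)}  # placeholder dp_2 values

def suff(i, j):
    # sum over x of [chain(x,1) >= j-1] * dp2(x + j - 1)
    return sum(dp2[x + j - 1] for x in range(i, n + 1)
               if x + j - 1 <= n and chain[x][1] >= j - 1)
```

With this setup, `suff(2, 3)` indeed equals `dp2[5] + dp2[6] + dp2[8]`.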
Now our actual DP loop:
- Calculate $$$suff(i,j)$$$ as above
- Calculate $$$dp(i,1)$$$ by looping through all identity permutations we can append at index $$$i$$$. Say we are trying to add identity permutation of length $$$x$$$. We should add $$$dp_2(i+x)$$$ and subtract $$$dp(i+x,x+1)$$$ (this is to prevent overcounting as outlined before).
- Calculate $$$dp(i,j)$$$ for $$$j \gt 1$$$ as $$$suff(i+1,j)-suff(i+1+chain(i,j),j)$$$. The subtraction is to remove contribution from $$$suff$$$ that is not accessible due to constraints on $$$a_i \neq j$$$ using our chain array.
This solution works in $$$O(n^2+m)$$$. $$$m$$$ was kept low to allow $$$O(nm+n^2)$$$ to pass as well.
#include <bits/stdc++.h>
using namespace std;
const int MOD = 998244353;
template<typename T> std::ostream& operator<<(std::ostream& os, const std::vector<T>& vec) {
os << "[ ";
for(const auto& elem : vec) {
os << elem << " ";
}
os << "]";
return os;
}
struct Mint {
int v;
Mint(long long val = 0) {
v = int(val % MOD);
if (v < 0) v += MOD;
}
Mint operator+(const Mint &o) const { return Mint(v + o.v); }
Mint operator-(const Mint &o) const { return Mint(v - o.v); }
Mint operator*(const Mint &o) const { return Mint(1LL * v * o.v); }
Mint operator/(const Mint &o) const { return *this * o.inv(); }
Mint& operator+=(const Mint &o) { v += o.v; if (v >= MOD) v -= MOD; return *this; }
Mint& operator-=(const Mint &o) { v -= o.v; if (v < 0) v += MOD; return *this; }
Mint& operator*=(const Mint &o) { v = int(1LL * v * o.v % MOD); return *this; }
Mint pow(long long p) const {
Mint a = *this, res = 1;
while (p > 0) {
if (p & 1) res *= a;
a *= a;
p >>= 1;
}
return res;
}
Mint inv() const { return pow(MOD - 2); }
friend ostream& operator<<(ostream& os, const Mint& m) {
os << m.v;
return os;
}
};
void solve() {
int N, K;
cin >> N >> K;
vector<vector<bool>> banned(N, vector<bool>(N));
for (int i = 0; i < K; ++i) {
int x, y;
cin >> x >> y;
banned[x - 1][y - 1] = true;
}
vector<Mint> tot(N + 1);
tot[N] = 1;
vector<vector<Mint>> dp(N + 1, vector<Mint>(N + 1));
vector<vector<int>> chain(N + 1, vector<int>(N + 1));
for (int i = N - 1; i >= 0; --i) {
for (int j = N - 1; j >= 0; --j) {
chain[i][j] = 1 + chain[i + 1][j + 1];
if (banned[i][j]) chain[i][j] = 0;
}
}
vector<vector<Mint>> ssum(N + 2, vector<Mint>(N + 1));
for (int i = N - 1; i >= 0; --i) {
for (int j = 0; j <= N - i; ++j) {
if (j == 0) {
for (int len = 1; len <= N - i; ++len) {
bool ok = chain[i][0] >= len;
if (!ok) break;
dp[i][j] += tot[i + len];
dp[i][j] -= dp[i + len][j + len];
}
} else {
if (chain[i][j]) {
dp[i][j] += ssum[i + 1][j];
if (i + 1 + chain[i][j] <= N) {
dp[i][j] -= ssum[i + 1 + chain[i][j]][j];
}
}
}
tot[i] += dp[i][j];
}
for (int j = 1; j < N; ++j) {
ssum[i][j] = ssum[i + 1][j];
if (chain[i][0] >= j) {
ssum[i][j] += tot[i + j];
}
}
}
// cout << dp << endl;
// cout << ssum << endl;
cout << tot[0].v << '\n';
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
while(t--) {
solve();
}
return 0;
}
2124G - Maximise Sum
Problem Credits: cry
Special thanks to Error_Yuan and Friedrich for solving this problem!
Disregard cost first. Find the maximum sum of prefix minimums.
Come up with a way to have $$$O(n)$$$ $$$(i,j)$$$ pairs to test.
Let $$$f(a)=\min(a_1)+\min(a_1,a_2)+\ldots+\min(a_1,a_2,\ldots,a_n)$$$. Can you come up with a way to solve $$$q$$$ queries of what $$$f(a)$$$ will be after performing the operation on $$$(i,j)$$$? (queries are not persistent here)
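For reference, the function $$$f$$$ from the hint can be computed in a single pass; a minimal Python sketch:

```python
from itertools import accumulate

def f(a):
    # sum of prefix minimums: min(a_1) + min(a_1,a_2) + ... + min(a_1,...,a_n)
    return sum(accumulate(a, min))
```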
Let's disregard the cost part of the problem completely and try to solve the problem just for the maximum sum.
To solve this problem, we make the following observations:
- If $$$i$$$ is not a prefix minimum, there is no point in using the operation to increase $$$a_i$$$.
- If $$$j$$$ is not a suffix maximum, there is no point in using the operation to set $$$a_j:=0$$$. This is because we can instead use the closest suffix maximum to its right, and the final answer can only be larger.
Knowing this, let's enumerate $$$i$$$. If $$$i$$$ is not a prefix minimum, we can skip it. Otherwise, there is no point in increasing $$$a_i$$$ past the previous prefix minimum. Let's enumerate the number we would like to increase $$$a_i$$$ to; then there is a clear candidate for which suffix maximum to use as $$$j$$$. Since each $$$a_i$$$ is bounded by $$$n$$$, only $$$O(n)$$$ pairs $$$(i,j)$$$ are produced.
We now have to solve for what the score is after each operation. Suppose we increased $$$a_i$$$ to $$$x$$$. Let $$$k$$$ be the smallest index $$$k \gt i$$$ with $$$a_k\leq x$$$ ($$$k$$$ can be found by using a sparse table and binary search). We can now decompose $$$f(a)$$$ into the sum of the following four parts:
- The prefix sum up to index $$$i$$$ — this can be precomputed easily
- The score from $$$i$$$ to $$$k-1$$$ — this can be calculated easily as $$$x\cdot(k-i)$$$.
- The score from $$$k$$$ to $$$n$$$ assuming $$$a_k$$$ is the prefix minimum. Call this $$$suffix(k)$$$. This can be precalculated as well using a monotonic stack. We can find the smallest number $$$l \gt k$$$ with $$$a_l \lt a_k$$$, then compute the answer as $$$suffix(l)+a_k\cdot(l-k)$$$.
- Finally, we have to subtract the contribution from $$$j$$$ to $$$n$$$ since we set $$$a_j:=0$$$. Note that up to index $$$j$$$, the minimum $$$M$$$ is now $$$\min(a_1,a_2,\ldots,a_{i-1}, x, a_{i+1},a_{i+2},\ldots,a_{j-1})$$$. Now, we can use the same trick: binary search and a sparse table to find the first $$$m\geq j$$$ with $$$a_m\leq M$$$, and subtract $$$suffix(m)+M(m-j)$$$ from our count here.
Now that we have a way to solve the full array, let's solve for all possible costs.
It can be shown that the same $$$O(n)$$$ queries from before will necessarily obtain the correct answers (proof is left as an exercise). Thus, to solve the problem, let's create an array $$$ans$$$, and while processing each $$$(i,j)$$$ pair, update $$$ans_{j-i}$$$. After that, taking the suffix maximum will suffice.
#ifdef DEBUG
#define _GLIBCXX_DEBUG
#endif
//#pragma GCC optimize("O3")
#include<bits/stdc++.h>
using namespace std;
#ifdef DEBUG
#include "lib/debug.h"
#else
#define debug(...) 228
#endif
#define pb push_back
typedef long long ll;
typedef long double ld;
int n;
const int maxN = 1e6 + 10;
int a[maxN];
ll best[maxN];
ll pref[maxN];
int sec_mn[maxN];
ll pref_sec[maxN];
int MN[maxN];
int f[maxN];
void solve() {
cin >> n;
for (int i = 1; i <= n; i++) cin >> a[i];
for (int i = 0; i <= n - 1; i++) {
best[i] = 0;
}
vector<int> suf_max;
suf_max.reserve(n);
for (int i = n; i >= 1; i--) {
if (suf_max.empty() || a[i] > a[suf_max.back()]) {
suf_max.emplace_back(i);
}
}
int mn = 1e9;
sec_mn[0] = 1e9;
MN[0] = 1e9;
int ind = -1;
for (int i = 1; i <= n; i++) {
sec_mn[i] = sec_mn[i - 1];
if (a[i] < mn) {
if (ind != -1) {
f[ind] = i;
}
sec_mn[i] = mn;
mn = a[i];
ind = i;
}
else if (a[i] < sec_mn[i]) {
sec_mn[i] = a[i];
}
MN[i] = mn;
pref[i] = pref[i - 1] + mn;
pref_sec[i] = pref_sec[i - 1] + sec_mn[i];
}
assert(ind != -1);
f[ind] = n + 1;
best[0] = pref[n];
vector<int> st{n + 1};
a[n + 1] = -1e9;
for (int i = n; i >= 1; i--) {
if (MN[i - 1] > a[i]) {
for (int id: suf_max) {
if (id <= i) break;
int cur_mn = min(a[i] + a[id], MN[i - 1]);
int l = 0;
int r = st.size();
while (r - l > 1) {
int mid = (l + r) / 2;
if (a[st[mid]] <= cur_mn) l = mid;
else r = mid;
}
int upto = min(id - 1, st[l] - 1);
ll val = pref[i - 1] + cur_mn * 1LL * (upto - i + 1);
// if (i == 3 && id == 5) {
// cout << " HI " << endl;
// cout << val << " " << cur_mn << " " << upto << endl;
// }
if (upto + 1 < id) {
int upup = min(f[i], id);
// if (i == 1 && id == 4) {
// cout << val << " ???? " << upup << endl;
// }
assert(upup - 1 >= upto);
val += pref_sec[upup - 1] - pref_sec[upto];
val += pref[id - 1] - pref[upup - 1];
}
best[id - i] = max(best[id - i], val);
if (a[i] + a[id] >= MN[i - 1]) break;
}
}
while (!st.empty() && a[i] <= a[st.back()]) st.pop_back();
st.emplace_back(i);
}
for (int i = n - 2; i >= 0; i--) best[i] = max(best[i], best[i + 1]);
for (int i = 0; i < n; i++) cout << best[i] << " ";
cout << '\n';
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
auto start = std::chrono::high_resolution_clock::now();
#ifdef DEBUG
freopen("input.txt", "r", stdin);
#endif
int tst;
cin >> tst;
while (tst--) {
solve();
}
return 0;
}
2124H - Longest Good Subsequence
Problem Credits: Proof_by_QED
Special thanks to satyam343 for helping with improving the problem, and Dominater069 for the full solution!
Characterize what a good subsequence is.
Use a range DP approach and some greedy techniques to find the answer
Let's first determine whether a given sequence can be good. We claim that a sequence $$$b$$$ is good if for each index $$$1 \leq i \leq |b|$$$, if $$$b_i=x$$$, then one of the following must be true:
- $$$x=i$$$
- $$$b_x=x$$$ and $$$\min(b_x,b_{x+1},\ldots,b_i)\geq x$$$.
To prove this, we can note that this is necessary by transitivity of $$$ \gt $$$, and sufficient as we can construct a permutation (the method of doing so is left as an exercise to the reader).
Another way of thinking about this is: let $$$(i,j)$$$ be an interval where $$$b_i=i$$$, $$$b_j=i$$$, and there exists no index $$$k \gt j$$$ such that $$$b_k=i$$$. The condition $$$b_p \geq i$$$ should be satisfied for all indices $$$i \leq p \leq j$$$.
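The first characterization can be transcribed into a direct checker. A sketch (we treat $$$b_i \gt i$$$ as failing, which the condition leaves implicit since $$$\min(b_x,\ldots,b_i)$$$ needs $$$x \leq i$$$; `is_good` is a hypothetical helper, not part of the official solution):

```python
def is_good(b):
    n = len(b)
    for i in range(1, n + 1):      # 1-indexed position
        x = b[i - 1]
        if x == i:
            continue
        # otherwise require x <= i, b_x = x, and min(b_x, ..., b_i) >= x
        if x > i or b[x - 1] != x or min(b[x - 1:i]) < x:
            return False
    return True
```

For instance, $$$[1,2,1]$$$ passes (position $$$3$$$ falls into the second case with $$$x=1$$$), while $$$[2,1]$$$ fails at position $$$1$$$.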
Using the above, we can construct a range DP solution. Let $$$dp(i,j)$$$ be the maximum length of a sequence if $$$b_{a_i}=a_i$$$, and we only consider numbers greater than or equal to $$$a_i$$$ from $$$i$$$ to $$$j$$$ while building the subsequence (essentially, this acts as one complete segment). We can initialize $$$dp(i,i)=a_i$$$ for each $$$i$$$. Then, we can enumerate $$$i$$$ from the back, and calculate the answer for each possible $$$j$$$. As we travel from $$$dp(i,j-1)$$$ to $$$dp(i,j)$$$, there are a few cases to consider:
- $$$a_j=a_i$$$: we can simply add $$$1$$$ to $$$dp(i,j-1)$$$ to get $$$dp(i,j)$$$ as we can append it to our sequence for no cost.
- $$$a_j \lt a_i$$$: we cannot append this due to the definition of good, so $$$dp(i,j)=dp(i,j-1)$$$.
- $$$a_j \gt a_i$$$: Let's use a greedy approach to find where the segment beginning with $$$a_j$$$ should start. Clearly, it should start as soon as possible (since the starting point must have length $$$j$$$ already, and we cannot append anything less than $$$j$$$ before we begin). To do this, maintain a new array $$$first$$$ such that $$$first_k$$$ holds the smallest index $$$y$$$ where $$$dp(i,y)\geq a_y$$$ (so it is possible to start another segment with $$$a_y$$$ here, since we can delete any elements from a good sequence, and the sequence will still be good). Then, we can make the transition $$$dp(i,j)=\max(dp(i,j-1), dp(y,j))$$$.
The problem is solved in $$$O(n^2)$$$ time.
#include <bits/stdc++.h>
using namespace std;
#define int short
#define INF (int)1e18
mt19937_64 RNG(chrono::steady_clock::now().time_since_epoch().count());
void Solve()
{
int n; cin >> n;
vector <int> a(n + 1);
for (int i = 1; i <= n; i++){
cin >> a[i];
}
vector<vector<int>> dp(n + 1, vector<int>(n + 1, 0));
for (int i = n; i >= 1; i--){
vector <int> good(n + 1, -1);
int curr = a[i];
dp[i][i] = curr;
for (int j = i + 1; j <= n; j++){
if (a[j] < a[i]){
continue;
}
if (a[j] == a[i]){
curr += 1;
dp[i][j] = curr;
continue;
}
if (good[a[j]] == -1 && a[j] <= curr + 1){
good[a[j]] = j;
}
if (good[a[j]] != -1){
int idx = good[a[j]];
curr = max(curr, dp[idx][j]);
}
}
}
// for (int l = 1; l <= n; l++){
// for (int r = l; r <= n; r++){
// if (dp[l][r])
// cout << l << " " << r << " " << dp[l][r] << "\n";
// }
// }
int curr = 0;
vector <int> good(n + 1, -1);
for (int j = 1; j <= n; j++){
if (good[a[j]] == -1 && a[j] <= curr + 1){
good[a[j]] = j;
}
if (good[a[j]] != -1){
int idx = good[a[j]];
curr = max(curr, dp[idx][j]);
}
}
cout << curr << "\n";
}
int32_t main()
{
auto begin = std::chrono::high_resolution_clock::now();
ios_base::sync_with_stdio(0);
cin.tie(0);
int t = 1;
// freopen("in", "r", stdin);
// freopen("out", "w", stdout);
cin >> t;
for(int i = 1; i <= t; i++)
{
//cout << "Case #" << i << ": ";
Solve();
}
auto end = std::chrono::high_resolution_clock::now();
auto elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(end - begin);
cerr << "Time measured: " << elapsed.count() * 1e-9 << " seconds.\n";
return 0;
}
2124I - Lexicographic Partition
Problem Credits: Proof_by_QED
Solve the inverse of this problem first. That is, given a permutation $$$p$$$, output $$$f(p)$$$.
Analyze the case of $$$x=[1,2,2,3,3]$$$. There is no solution to this $$$x$$$. Why?
Let's first come up with an algorithm to find $$$b$$$ given $$$p$$$.
Let $$$i$$$ be the first index with $$$p_i \lt p_1$$$. We can easily prove that the first subarray cannot go up to index $$$i$$$. This is because otherwise, the first element in $$$b$$$ will decrease, and this will be lexicographically smaller than a partition that does not include $$$i$$$.
Now, if $$$i=2$$$, then of course we should partition $$$p_1$$$ by itself. Otherwise, we should find the index $$$j$$$ with $$$1 \leq j \leq i$$$ and $$$p_j$$$ is the maximum element in the segment $$$[1,i]$$$, and partition up to $$$j-1$$$ (since this maximises the second element, which is what we want to do after maximising the first). We can now continue the process recursively, but assuming that $$$p_j$$$ is the front of the array.
Let's solve the actual problem now. Suppose we have $$$x_i=k$$$. Let $$$j$$$ be the largest index with $$$j \lt i$$$ and $$$x_j=k-1$$$. We can show that the sequence we obtain up to $$$i$$$ is the sequence up to $$$j$$$, plus $$$p_i$$$. This means that we can view the input array $$$x$$$ as depths in a tree, where $$$j$$$ is the parent of $$$i$$$. We make the following observations:
- If $$$x_i-x_{i-1} \gt 1$$$, then this is a trivial impossible case (the sequence up to $$$i$$$ or $$$i-1$$$ is invalid).
- What does it mean if $$$i-j \gt 1$$$? This means that the sequence up to $$$i$$$ is an extension of the one up to $$$j$$$ (which is more than $$$1$$$ index ago), plus one number. From our analysis above, we know this means that all of $$$p_{j+1},p_{j+2},\ldots,p_i$$$ are greater than $$$p_j$$$, and $$$p_i$$$ is the largest of them all.
- What does it mean if $$$i-j=1$$$? In this case, we cannot draw any conclusion about how $$$p_i$$$ compares to $$$p_{i-1}$$$, because index $$$i$$$ will be included no matter the comparison.
- What does point $$$2$$$ mean? Observe the $$$[1,2,2,3,3]$$$ case again. Here, we should have $$$p_1 \lt p_2 \lt p_3$$$ and $$$p_3 \lt p_4 \lt p_5$$$. This implies $$$p_1 \lt p_2 \lt p_3 \lt p_4 \lt p_5$$$, but if this were the case then the array $$$x$$$ would be $$$[1,2,2,2,2]$$$ instead. More formally, looking at the tree, if a node has more than one child, all of its children must have at most one child each, or this contradiction occurs.
Using this, let's design a recursive function that traverses this tree and builds the permutation. Let $$$f(i,j,a,b)$$$ fill $$$p_i,p_{i+1},\ldots,p_j$$$ with the numbers $$$a,a+1,\ldots,b$$$, where $$$i$$$ to $$$j$$$ is one full subtree and $$$p_i$$$ is already filled. First, find all direct children of node $$$i$$$, that is, all indices $$$k$$$ in the subtree with $$$x_k=x_i+1$$$. If $$$i$$$ has more than one child and any of its children also has more than one child, we are in the impossible case above. Otherwise, if $$$i$$$ has more than one child (so each child has at most one child), assume that $$$p_i$$$ is the smallest number in the entire subtree. Let the sizes of the children's subtrees be $$$s_1,s_2,\ldots,s_y$$$. We can reserve $$$s_t$$$ numbers for each subtree, set each direct child to the largest number in its interval, and recurse. Finally, if there is only one direct child, we can assume that $$$p_i$$$ is the largest number in the subtree, and assign the direct child either the largest or the smallest remaining number, depending on its number of children. We first assign $$$p_1$$$ to be either $$$1$$$ or $$$n$$$ depending on how many indices have $$$x_i=2$$$, then call the recursive function. This produces a valid permutation.
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
#define vt vector

vt<int> solve(const vt<int>& b, vt<vt<int>>& inv) {
    int n = b.size();
    if (inv[1].size() > 1) return {};
    vt<int> a(n);
    // Each stack entry is {l, r, leftNum, rightNum, ok}: the subtree occupies
    // indices l..r, may use values leftNum..rightNum, and ok marks whether
    // branching into several children is still allowed here.
    stack<array<int,5>> s;
    if (inv[2].size() > 1) {
        a[0] = 1;
        s.push({0, n - 1, 2, n, 1});
    } else {
        a[0] = n;
        s.push({0, n - 1, 1, n - 1, 1});
    }
    while (!s.empty()) {
        auto tp = s.top();
        s.pop();
        int l = tp[0], r = tp[1], leftNum = tp[2], rightNum = tp[3], ok = tp[4];
        if (l == r) continue;
        if (b[l + 1] != b[l] + 1) return {};
        int num = b[l] + 1;
        if (num >= (int)inv.size()) return {};
        // Collect the direct children of node l: all later indices with x = num.
        vt<int> nxt;
        while (inv[num].size() && inv[num].back() > l) {
            nxt.push_back(inv[num].back());
            inv[num].pop_back();
        }
        if (nxt.empty()) return {};
        if (!ok && nxt.size() != 1) return {};
        if (nxt.size() == 1) {
            // A single child: it takes the smallest or the largest remaining
            // value, depending on whether it branches further itself.
            if (num + 1 < (int)inv.size() && inv[num + 1].size() > 1 && inv[num + 1][inv[num + 1].size() - 2] > l) {
                a[nxt[0]] = leftNum++;
            } else {
                a[nxt[0]] = rightNum--;
            }
            s.push({l + 1, r, leftNum, rightNum, 1});
        } else {
            // Several children: give each child the largest value of its
            // reserved block, and forbid further branching below them.
            rightNum -= nxt.size();
            int nx = rightNum + 1;
            reverse(nxt.begin(), nxt.end());
            int curLeft = leftNum;
            for (int i = 0; i < (int)nxt.size(); ++i) {
                int pos = nxt[i];
                a[pos] = nx++;
                int nextInd = (i + 1 < (int)nxt.size() ? nxt[i + 1] - 1 : r);
                int sz = nextInd - pos - 1;
                s.push({pos, nextInd, curLeft, curLeft + sz, 0});
                curLeft += sz + 1;
            }
        }
    }
    return a;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int t;
    cin >> t;
    while (t--) {
        int n;
        cin >> n;
        vt<int> x(n);
        for (int i = 0; i < n; ++i) cin >> x[i];
        vt<vt<int>> inv(n + 3);
        for (int i = 0; i < n; ++i) inv[x[i]].push_back(i);
        auto a = solve(x, inv);
        if (a.empty()) {
            cout << "NO\n";
        } else {
            cout << "YES\n";
            for (int v : a) cout << v << ' ';
            cout << "\n";
        }
    }
    return 0;
}
OMG!! Lightning fast editorial :orz:
at most 17 operations...... was that part just to confuse participants ? I doubted myself for 10 minutes if 2 operations are actually enough.
Happy with + delta in the end
Bro I am so annoyed rn. They did us dirty.
Hey bro, can you tell me why my rating decreased even when i solved A,B and C without any errors and resubmissions?
6 points short of becoming an Expert. But if I get the D question wrong one less time... To maintain the level of the Expert, it is necessary to make A, B, C, D, E or A, B, C, D quickly and without mistakes
I didn’t make mistakes, but surely took a lot of time ig…
did you take time, or your GPT? i freaking hate cheaters
your monther don
According to me, difficulty was A, B, C wasn't that much and since you took a lot of time, you standing increased. Dont worry bro. youll get it next time
ok bro, i’ll try next time now.
please don't cheat in contests (just look at your submissions)
Well the point of the problem is to prove that it is always possible in 2 operations or less, so telling participants that would break the problem. They set it at 17 because maybe the fastest way is to use logn operations.
Yes I was thinking something like that. We find an index i in the array where the difference between the right and left parts is minimum. Then we try to reduce the greater one. But to reduce it we can find another i in that subarray and we keep doing it. But I couldn't find anything along this line.
Thats actually the right track, you just need to prove the second reduction will always make everything 0
Yeah that was so deceptive that I immediately got a way to solve it by divide and conquer, and finally got REs and WAs TaT
If he hadn't told me this condition, I would have definitely been able to solve this problem. This condition just confuses me.
initially noticed that it seems to require at most two operations, but I couldn't prove it was correct. In the end, I wasted an hour without making the submit.Their deception worked perfectly.
I doubted myself till the very end. I could not think of any example with s > 2, I kept rereading the problem wondering what on earth I was doing wrong.
Instant editorial
GreatProblemsForces orz !!
D was the Best problem imo :)
My English is poor. At first I even thought imo meant international math. Then I realized it means "in my opinion" :D
why it's showing 4 days ago?
fastest editorial i have ever seen
The draft was completed four days ago, but I only made it public now.
It's cool that the authors made it in advance
Lightning Fast Editorial!
thank you for rating
Wow, didn't expect me misreading a statement to turn out to be a good thing :D
Lovely Contest and Great problems
Thanks for the round! Hope to see more problems from you guys where the input is just an array of n numbers
thanks for fast editorial
Great problemset, but had my worst performance :(
Can anyone clarify why in 327782150, the fourth given testcase doesn't accept 3 as a solution?
$$$\frac{73080}{3} \nmid 255780 $$$
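A quick sanity check of this arithmetic (the values 73080 and 255780 are taken from the test case discussed here; the variable names are mine):

```python
# If x = 3, the element 73080 would come from a_i = 73080 // 3 = 24360.
# For a beautiful array, 24360 would then have to divide the next element
# 255780, but it does not, so x = 3 is rejected.
x = 3
b_i, b_next = 73080, 255780
a_i = b_i // x
print(a_i, b_next % a_i)  # 24360 12180  (non-zero remainder)
```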
Got it, tnx !
I also made the same mistake.. luckily it fails pretest 1
Check the last and 2nd last elements: the 2nd last doesn't divide the last element
oh wow.. wish I could upvote more than once for superfast editorial.
why was the constraint a[i]<=n mentioned in problem B?
yeah, I was wondering about the same; maybe the greedy claim fails in those cases.
Maybe they just want to make the statement look like problem G.
I am so angry right now with myself for not solving D.
same lol I instantly thought of that k-1 thing but could not take my mind out of seg tree
I did not even realise that we can't delete any element smaller than the k-1th element. If I had noticed that, I would have solved the problem in no time.
satyam343 is your birthday coming up on 17th July? Man, there are better ways to tell us that.
$$$\lceil \log(n) \rceil + 1 = 17$$$
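With the constraint $$$n \le 5 \cdot 10^4$$$ mentioned elsewhere in this thread, the formula can be verified in one line:

```python
import math

# ceil(log2(n)) + 1 for n up to 5 * 10^4 evaluates to exactly 17.
n = 50_000
print(math.ceil(math.log2(n)) + 1)  # 17
```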
I have to admit that at the very beginning, I was indeed pondering the significance behind $$$\log n$$$
which confused me a lot :(
so bad, 17 has no effect. It would be better to remove 17
and n could be changed to 2e5
interesting
I solved D with a different approach. First, imagine you are creating a new array, say ANS, with at least k elements, and set the limit to the k-th smallest value in A. Then find the values under the limit and save their indexes in a new array, say B. Then build a prefix sum array: pref[i] = pref[i-1] + (a[i] == limit). Then count how many values equal to the limit lie between every two adjacent indexes in B (using the prefix array mentioned above), and try to add as many integers to ANS as possible. We need to maximize the number of values we take, but our ANS array should be a palindrome, so we take the same amount from the opposite side: ANS += min(diff[i], diff[n-i-1])
Implementation:https://mirror.codeforces.com/contest/2124/submission/327813510
Note:Sorry for my poor english and I cant explain myself very well :(
Wow, a same solution as mine. But it's too hard to debug :(
I took the limit as a[k-2] and it was giving WA, then I just did a[k-1] and it worked xD. Btw it took me 40 minutes to think of this :(
I appeared for the very first time in div 1+2, great contest to be honest. Overestimated B and wrote a very complicated code just because of div 1 in name of the round :( . And D , why was 17 operations mentioned in the question, it just threw me off the scent. Anyways, enjoyed the round and nice questions.
Maybe 17 * 50000 ≈ 1e6 and it is reasonable output length.
The authors don't want to suggest that 2 operations are enough.
Minor corrections in the editorial for E: the example array [3, 4, 3] (I see what you did there!) should be reduced by [3, 2, 1] to become [0, 2, 2]. Transforming it to [1, 2, 0] violates the problem constraints
Thank you, fixed
I take back the second half of my claim. The original triple x, x - w and w is correct.
Great contest, many thanks!
In the problem D you claim "Claim as long as the k-1 smallest elements remain in the array, we can erase other elements in any way we want."
Is "any way" part of the claim correct? For example, if k=3 and array is
1, 2, 3, 8, 8, 8, 8, 8, 7, 8, 8, 8, 8
then there is no way you can erase the 7 only. In the next paragraph you write you remove everything larger than x, so this is not important.
Select l=2, r=9; 7 is the 3rd smallest number
Oh cool, thanks! I was not right then
In problem D, can someone explain more clearly the two pointer part when b[l] != b[r]?
You make an array of only elements <= value of kth element, in the order they appear in the array. Now, you basically do a palindrome check on this new array using l and r (two pointer) from ends of array. When both elements are equal, it's cool.
If they aren't equal, there are two possibilities: at least one of those elements is the kth smallest element, in which case you increase a variable cnt and increment/decrement that index (basically you imagine you removed it). Or both elements are not equal to the kth smallest element, in which case the answer is straight NO. Do this till l < r holds.
Now, this cnt variable signifies the number of removals of the kth smallest value you had to do to make this array a palindrome. If the number of elements remaining after such removals (size of this array - cnt) >= k-1, then it is valid. Else not
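A minimal Python sketch of the check described in this comment (the function name `can_make_palindrome` and the use of `sorted` to find the k-th smallest value are my choices, not from the official solution):

```python
def can_make_palindrome(a, k):
    # Value of the k-th smallest element (k is 1-indexed).
    limit = sorted(a)[k - 1]
    # Keep only elements <= limit, in their original order.
    b = [v for v in a if v <= limit]
    l, r = 0, len(b) - 1
    cnt = 0  # removals of elements equal to `limit`
    while l < r:
        if b[l] == b[r]:
            l += 1
            r -= 1
        elif b[l] == limit:   # imagine deleting b[l]
            cnt += 1
            l += 1
        elif b[r] == limit:   # imagine deleting b[r]
            cnt += 1
            r -= 1
        else:
            return False
    # At least k-1 elements must survive the removals.
    return len(b) - cnt >= k - 1
```

On the test case discussed further down in the comments, `can_make_palindrome([4, 7, 1, 2, 3, 1, 3, 4], 5)` returns `False`, matching the expected "NO". Sorting makes this $$$O(n \log n)$$$; counting sort or a selection algorithm would bring it down to $$$O(n)$$$.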
Could someone please explain D? Cannot understand the logic even after reading the tutorial..
Okay so I didn't solve the question but I understood what he wants to say, the only elements we can't erase by making it the kth smallest are the k-1 smallest elements, every other element can be erased by some manipulation of l and r. Let's take an example:
1 2 2 3 3 5 5 6 7
let k be 3
I want to erase 7. Notice that I cannot erase 1 or one occurrence of 2 whatever I do. To erase 7 I just take the last 3 elements; to erase 6 I can take the 3 elements ending one position earlier, and so on. Try this for any array and convince yourself that this works.
So effectively, all you need for checking the palindrome is the numbers upto the face value of the kth smallest number. Also notice that if I am able to make a palindrome using 3 different numbers, even after removing some numbers completely I will be able to make a palindrome.
Eg: Suppose 1 5 3 4 4 6 3 3 1 3 and I want to see with k = 5
I am saying that if you can make a palindrome 1 3 4 4 3 3 1, you could just remove 4, 3 and still have 1 1 as a palindrome. For k = 5, keep only the numbers less than or equal to the (k-1)-th smallest number; in this case I keep numbers <= 3, so 1 3 3 3 1 3
Now use a normal 2-pointer approach to check the palindrome, but make sure you do not remove so many elements that you have less than k - 1 elements left. 1 != 3, remove that 3
size = 5, which is greater than 4. Proceed; if size < 4 then stop (you can remove only 3s, not 1s).
I tried to explain it with examples, hopefully it is helpful.
In problem D, Claim: as long as the k−1 smallest elements remain in the array, we can erase other elements in any way we want.
Let say, k=3 and a= [1,2,3,4,5,4,3,2]
How to erase 5? Can we erase 5 in any order that we want?
take l = 4 and r = 6, 5 is indeed the third smallest
Feeling good that I did till D. But also little bit sad for not getting E.
D can be solved in O(n) as a[i]<=n
counting sort, indeed!
the constraint is not even needed, you can get k smallest elements in $$$O(n)$$$, see nth_element in C++
Since the numbers are only up to N, you can also counting sort
$$$O(n)$$$ is the average time complexity of `std::nth_element`. I am not sure whether the worst-case time complexity is $$$O(n\ \log n)$$$ or $$$O(n^2)$$$.
Worst case for `std::nth_element` is $$$\mathcal{O}(n \log n)$$$ for at least the well-known implementations of it. However, you can just shuffle the array to make the worst case almost improbable.
It actually can be done with a messy "median of $$$\frac{n}{5}$$$ groups of 5 elements" approach in deterministic $$$O(n)$$$ time, see this.
In problem $$$C$$$, when $$$b_i = a_i \cdot x$$$ and $$$b_{i + 1} = a_{i + 1}$$$, how are we sure that the final $$$x$$$ we get will divide $$$b_i$$$? Any multiple should be okay right? Why only take the lowest multiple?
Let $$$x'$$$ be the actual answer used by Bob.
For this case, we know these statements are true:
Using the property, if $$$p$$$ divides $$$q$$$ and $$$q$$$ divides $$$r$$$, then $$$p$$$ divides $$$r$$$: From 1 and 4, we can conclude $$$x$$$ divides $$$b_i$$$.
The main statement was this: "It is guaranteed the array $$$b$$$ can be obtained from some beautiful array $$$a$$$ and some positive integer $$$x$$$ as described in the statements."
Ah so the solution to the bonus problem would be to just check the $$$x$$$ we get should divide all such $$$b_i$$$ and if it doesn’t then a solution doesn’t exist right?
What do you mean by "all such $$$b_i$$$"?
Umm, i want to ask, this contest rated? Or?
When will ratings come
This is the first contest of div2(+div1) where I managed to solve 4 problem :)
Codeforces participants when a question involves basic maths :( [Round 1035 B]
Codeforces participants when the entire contest is arrays :)
any idea about rating of problem C and D
Great
you need to `break` instead of `continue`, i think
yea I just checked it :sob: Actually I was changing the code again and again so left it `continue` only, didn't notice
Thanks
in A i used the approach, i.e. sort the input array and then compare it to the original ones and print the non similar elements, much easier and intuitive than the approach given in editorial.
Problem D
8 5
4 7 1 2 3 1 3 4
why is the output of this test case "NO"?
After sorting this we get 1 1 2 3 3 4 4 7 and we can always delete all elements >= 5th largest element in the array. so we can delete 7,3,3 and get 4 1 2 1 4 which is a valid Palindrome.
My solution-
you can't delete both 3's
thank you
In C, the editorial says to take the LCM of $$$b_i/\gcd(b_i, b_{i+1})$$$, but in the code they're taking the LCM of $$$b_i/\gcd(b_i, b_{i+1}, \ldots, b_n)$$$. Can anyone please explain how they do the same thing?
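For reference, here is a direct sketch of the adjacent-pair formula as quoted from the editorial (the function name is mine, and it assumes the statement's guarantee that $$$b$$$ was produced from some beautiful array):

```python
from math import gcd, lcm  # math.lcm requires Python 3.9+

def recover_x(b):
    # x is the LCM over adjacent pairs of b[i] // gcd(b[i], b[i+1]):
    # whenever b[i] does not divide b[i+1], the "extra" factor must
    # have come from multiplying a[i] by x.
    x = 1
    for i in range(len(b) - 1):
        x = lcm(x, b[i] // gcd(b[i], b[i + 1]))
    return x
```

For example, if a beautiful array is [2, 4, 8] and the first and last elements are multiplied by x = 3, then b = [6, 4, 24] and `recover_x(b)` returns 3.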
could someone help me out with what test case this fails for E?:
about problem E
so can it be solved with binary search?
my idea is :
let sum(l, mid) = sl, sum(r, mid) = sr
if sl == sr return
else if sl > sr [l, r] -> [l, mid]
else [l, r] -> [mid + 1, r]
About Problem F1
In the case 1234123, I chose the split 123/4123, not 1234/123, but it is wrong.
My submission https://mirror.codeforces.com/contest/2124/submission/327919549
I have solved it. But how to solve F2? https://mirror.codeforces.com/contest/2124/submission/327918691
My solution is also wrong in this case. Can you help me?
observationforces
Can someone suggest how I should start solving a problem after reading it? I'm unable to understand how to begin approaching it.
For D, there is an $$$O(n)$$$ approach to find the kth element in a vector, see LeetCode link
It's a small and good approach, but $$$O(n)$$$ is average; worst case it takes $$$O(n^2)$$$. Although extremely rare, it's still possible. It's like people just ignore the worst case and only consider the average, like many people say unordered_map is $$$O(1)$$$ but it has $$$O(n)$$$ worst case as well.
And I think $$$O(n \log n)$$$ and $$$O(n)$$$ are almost the same here; sometimes, if too many collisions happen or the constant factors are very far apart, $$$O(n \log n)$$$ is better, and the main thing is we know it will definitely not fail in any case, while $$$O(n)$$$ can.
“Although very rare”
First, the worst complexity of `std::nth_element` is $$$O(n\log(n))$$$, and it still has a constant that is far less than `std::sort`.
Next, if you are REALLY worried about the potential that codeforcers would hack `std::nth_element`, we can implement it ourselves. Based on a random pivot, it is impossible to make the process degrade to $$$O(n^2)$$$ or $$$O(n\log(n))$$$.
Then, I don't know if you have heard the term "卡常" (squeezing the constant factor), which means that the time limit is very tight, and we need to optimize EVERYTHING that we can to avoid getting TLE. For example, instead of `scanf` or `std::cin`, we might use a custom routine to read integer data, and use a C-style array to simulate a queue instead of `std::queue`. Tens of milliseconds as those optimizations save, they might rescue a correct code from TLE. So it is important to at least know these optimizations.
Finally, "not fail" is only for this problem; LEARNING is the goal when we solve problems and read comments on Codeforces. The idea behind `std::nth_element` is quickselect, which is a grand performance of divide-and-conquer, and is definitely worthy of a deeper insight.
Thanks, that's quite useful info. Many people, including me, only know some of it. If you don't mind, where do you learn these things? Nowadays everyone only wants to know the basics and not dive deep like this. Because, like, I couldn't find `std::nth_element` being used, but as you described it's useful. I used ordered_set but I could have used this instead.
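A sketch of the quickselect idea mentioned in this thread, in Python with a random pivot (unlike `std::nth_element` this allocates new lists rather than partitioning in place, trading memory for clarity):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element of a (k is 1-indexed).

    Expected O(n): the random pivot makes adversarial inputs that
    would degrade it to O(n^2) practically impossible to construct.
    """
    pivot = random.choice(a)
    lt = [x for x in a if x < pivot]   # strictly smaller than pivot
    eq = [x for x in a if x == pivot]  # equal to pivot
    gt = [x for x in a if x > pivot]   # strictly larger than pivot
    if k <= len(lt):
        return quickselect(lt, k)
    if k <= len(lt) + len(eq):
        return pivot
    return quickselect(gt, k - len(lt) - len(eq))
```

For problem D this would replace the full sort used to find the k-th smallest value.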
Hey guys for question C :
I iterated from right to left comparing adjacent elements; if the right element is not a multiple of the left element, I add the value left_element/hcf(left_element, right_element) to a set. The final output is the LCM of the collected set of hcfs. It is failing at around testcase 5000; can you guys tell me what is wrong
My Code
This is not LCM
use this
Thanks
I had a hard time figuring out the editorial solution for Problem F2 — Appending Permutations (Hard Version). So I'm sharing my solution which slightly deviates from that.
We will still need the $$$chain(i, j)$$$ using the same definition as given in the editorial. We will also require:
Now, we calculate the dp values as follows:
The tricky case which was explained in the editorial:
Finally, for $$$j \gt 1$$$,
We can calculate $$$dp(i,1)$$$ in $$$O(n)$$$ time, and by maintaining suffix sums over the $$$close(i, j)$$$ array, (i.e. $$$sufsum(i,x) = close(i,x) + close(i+1,x) + \ldots + close(n,x)$$$) we can calculate each $$$dp(i, j)$$$ in $$$O(1)$$$ for $$$j \gt 1$$$.
Final complexity $$$O(n^2)$$$. Here is my AC submission: 328032797
Did anyone notice that problem H has an insane and severe time limit???????
B was much easier than I thought. I had the correct idea at first but I doubted myself and overcomplicated it, but still AC
Alternative way to think about H:
First, subtract $$$1$$$ from all elements of $$$a$$$. Now, $$$b_i$$$ would be the largest integer such that $$$p_{b_i} \lt p_i$$$ (or $$$b_i = 0$$$ if $$$i$$$ is a prefix min). Let's look at some sequence $$$b$$$. Imagine the tree where the parent of vertex $$$i$$$ is $$$b_i$$$ (with an auxiliary node $$$0$$$ as the root of the tree).
Claim: $$$b$$$ is good iff $$$0, 1, 2, \ldots, |b|$$$ is a valid dfs pre-order traversal of the tree. This condition is equivalent to every subtree having vertices whose indices form a contiguous interval of integers.
Now, a more intuitive way to understand the DP is that $$$dp_{i,j}$$$ is the biggest vertex you can put down when assembling the subtree of $$$a_i$$$ (imagine your dfs pre-order traversal has already gone through $$$1, 2, ... a_i$$$, and now you are underneath vertex $$$a_i$$$), and only using elements in the subarray $$$a_i, a_{i+1}, \ldots, a_j$$$.
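The claim in this comment can be checked mechanically. Here is a hypothetical `is_good` sketch testing whether every subtree's vertex set is a contiguous interval, under the convention above (vertex $$$i$$$'s parent is $$$b_i$$$, with the auxiliary vertex $$$0$$$ as root):

```python
def is_good(b):
    # b[i-1] is the parent of vertex i (1-indexed); root is vertex 0.
    n = len(b)
    # In a valid pre-order, every parent must come before its child.
    if any(not (0 <= b[i] <= i) for i in range(n)):
        return False
    lo = list(range(n + 1))   # smallest vertex index in each subtree
    hi = list(range(n + 1))   # largest vertex index in each subtree
    size = [1] * (n + 1)
    # Sweep children before parents (children have larger indices),
    # merging each subtree's interval and size into its parent.
    for v in range(n, 0, -1):
        p = b[v - 1]
        lo[p] = min(lo[p], lo[v])
        hi[p] = max(hi[p], hi[v])
        size[p] += size[v]
    # Contiguous interval <=> the index range exactly covers the subtree.
    return all(hi[v] - lo[v] + 1 == size[v] for v in range(n + 1))
```

For instance, the permutation $$$p = [2, 1, 3]$$$ yields $$$b = [0, 0, 2]$$$, which passes, while $$$b = [0, 0, 1]$$$ fails because the subtree of vertex 1 would be $$$\{1, 3\}$$$, not an interval.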
Btw, I really enjoyed this problem. Kudos to the author!
The problem 2124A — Deranged Deletions has all the below tags. Isn't that incorrect and if it's incorrect, how can it be fixed?
2-sat; chinese remainder theorem; divide and conquer; expression parsing; fft; flows; geometry; implementation; interactive; matrices; probabilities; schedules; sortings; string suffix structures; ternary search
Solution:
Problem A- https://youtu.be/6-1zSkEgZm8?si=JxeQxGoAQz9Uc0Rh
problem B- https://youtu.be/M6Zy3y3lT74?si=tM8GnvAyxjN5iYPt
problem C- https://youtu.be/UB6Qe1EWKtg?si=EOlLwamZ4hCQURe6
problem D- https://youtu.be/vlLkAv2Sh6c?si=kqYq3y3XrnsOrPGQ
problem E- https://youtu.be/qI-aVWBwf7w?si=r-SDxHL6gwWDN2MW
Actually the optimal time complexity for D is O(n), because you only sort to find the kth element
Great tutorial! But why does problem H exist "Uisng the above"?
In problem F1, i dont actually understand the way to counter the overlap when dp, can someone explain it for me.(●'◡'●)
in problem C, can we say that $$$\frac{b_i}{\gcd(b_i, b_{i + 1})}$$$ always equals either $$$1$$$ or $$$x$$$ for each $$${1}\le{i}\lt{n}$$$?
For the bonus part of the 3rd problem: we don't find a single x for 50, 70, 100; we get multiple values. Is this happening maybe because $$$b_1/\gcd(b_1,b_2)$$$ and $$$b_2/\gcd(b_1,b_2)$$$ are coprime?
I was missing an explanation in D of how to construct, for any element >= the k-1 smallest, a deletion that removes it. There are several ways; as mentioned, the two pointer technique is one of them (l=0, r=n-1). You track the rank of the element you'd like to delete. If it's larger than k, remove any bordering element on side l or r (if it's not the element itself) and adjust the rank accordingly: if the selected element (say on side l) is >= the element, reduce the rank by 1, otherwise keep it unchanged (a larger element is removed). Another way is to create a sorted array of indices of elements <= the selected element (including itself), and pick the range start as [max(0, element index in that array - k + 1), start + k - 1].
Also this remark wasn't obvious for me "Note there is no point in keeping any elements in the third group"