2148A - Sublime Sequence
Notice that if we pair each odd-indexed element with an adjacent even-indexed element, each pair sums to $$$0$$$.
Therefore, our strategy is to pair the $$$2i$$$'th element with the $$$(2i+1)$$$'th element until there are no more elements to pair. Note that if $$$n$$$ is odd, then there is one unpaired element at the end. In this case, the answer is just that last element, which is $$$x$$$. Otherwise, when $$$n$$$ is even, every element is paired perfectly and the sum is $$$0$$$.
Time complexity: $$$\mathcal{O}(1)$$$ per test case.
#include <bits/stdc++.h>
using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); i++)

int main() {
    int t;
    cin >> t;
    forn(tt, t) {
        int x, n;
        cin >> x >> n;
        if (n % 2 == 0)
            cout << 0;
        else
            cout << x;
        cout << endl;
    }
}
2148B - Lasers
Since our path is continuous, whatever route we take will always pass through all vertical and horizontal lasers in the region from $$$(0,0)$$$ to $$$(x,y)$$$ since our movement is bounded inside the region. Therefore, the answer is simply $$$n + m$$$.
Time complexity: $$$\mathcal{O}(1)$$$ per test case.
#include <bits/stdc++.h>
using namespace std;

#define forn(i, n) for (int i = 0; i < int(n); i++)

int main() {
    int t;
    cin >> t;
    forn(tt, t) {
        int n, m, x, y, s = 0;
        cin >> n >> m >> x >> y;
        forn(i, n) cin >> s;  // the laser coordinates do not affect the answer
        forn(i, m) cin >> s;
        cout << n + m << endl;
    }
}
2148C - Pacer
In this problem, we want to maximize the amount of points between consecutive requirements. We can ask ourselves the following questions:
What is the maximum number of points we can obtain between requirements $$$i$$$ and $$$i+1$$$? The answer is $$$a_{i+1} - a_i$$$, by running every minute. When can we run every minute? Only if $$$(a_{i+1} - a_i)$$$ and $$$|b_{i+1} - b_i|$$$ have the same parity.
What if they don't have the same parity? Well, we can stay in place for one minute; then we can gain $$$a_{i+1} - a_i - 1$$$ points.
The start can be represented by a requirement where $$$a_0 = 0$$$ and $$$b_0 = 0$$$. Summing up the number of points between all consecutive requirements yields the answer. Remember to add $$$m - a_n$$$ to the answer, since we are running until the $$$m$$$'th minute.
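The per-segment computation above can be sketched as a small helper. This is a hypothetical `gain` function for illustration (not part of the official solution); it assumes the requirement is reachable, i.e. $$$a_{i+1} - a_i \ge |b_{i+1} - b_i|$$$:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Hypothetical helper: points gained between two consecutive requirements.
// da = a_{i+1} - a_i (minutes available), db = b_{i+1} - b_i (position change).
// If da and |db| share parity we can run every minute; otherwise we must
// stand still for exactly one minute, losing one point.
long long gain(long long da, long long db) {
    long long need = llabs(db);
    return da - ((da - need) % 2 != 0);  // subtract 1 on a parity mismatch
}
```

For example, between $$$(a_i, b_i) = (0, 0)$$$ and $$$(a_{i+1}, b_{i+1}) = (5, 2)$$$ the parities differ, so the gain is $$$4$$$.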
Time complexity: $$$\mathcal{O}(n)$$$ per test case.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int T;
    cin >> T;
    while (T--) {
        int n, m, x, y;
        cin >> n >> m;
        int px = 0, py = 0;
        int points = 0;
        while (n--) {
            cin >> x >> y;
            points += x - px;
            // lose one minute when (x - px) and (y - py) differ in parity;
            // comparing the parity of their difference is safe for negative values
            if ((x - px - (y - py)) % 2 != 0) points--;
            px = x;
            py = y;
        }
        if (px != m) points += m - px;
        cout << points << endl;
    }
}
2148D - Destruction of the Dandelion Fields
Firstly, if there is no odd farm, then the answer is $$$0$$$ by default since we can never turn the lawnmower on.
Otherwise, let's first visit an odd farm to turn the lawnmower on; to maximize the total, choose the odd farm with the largest amount of grass. Now that the lawnmower is on, we can visit any farm. Since the lawnmower never turns off when we visit an even farm, let's take advantage of this and visit all the even farms.
Finally, we are forced to visit odd farms. Note that every other odd farm we visit will turn the lawnmower off. From this, we can see that it is optimal to visit the $$$\lceil m/2 \rceil$$$ odd farms with the largest amounts, where $$$m$$$ is the number of odd farms; we visit them while the lawnmower is on.
Time complexity: $$$\mathcal{O}(n \log n)$$$ per test case. The $$$\log n$$$ complexity comes from sorting.
#include <bits/stdc++.h>
using namespace std;

void solve() {
    int n;
    cin >> n;
    vector<int> a(n);
    for (auto &i : a) cin >> i;
    vector<int> p[2];
    for (int i : a) p[i % 2].push_back(i);
    long long ans = 0;
    if (p[1].size()) ans += accumulate(p[0].begin(), p[0].end(), 0LL);
    sort(p[1].rbegin(), p[1].rend());
    int m = p[1].size();
    for (int i = 0; i < (m + 1) / 2; i++) ans += p[1][i];
    cout << ans << '\n';
}

signed main() {
    cin.tie(0)->sync_with_stdio(0);
    int t = 1;
    cin >> t;
    while (t--) solve();
}
2148E - Split
First, let's highlight the important parts of the statement. In my opinion, the most important part is the actual condition for determining whether subarray $$$a[l, r]$$$ is awesome. That is: there is some way to place the items outside the subarray such that all multisets contain the exact same elements (ignoring ordering).
There is only one way for all multisets to contain the exact same elements. Let $$$\texttt{cnt}_i$$$ denote the number of times element $$$i$$$ occurs in $$$a$$$. Then all multisets must contain exactly $$$\frac{\texttt{cnt}_i}{k}$$$ copies of element $$$i$$$.
Now, let's find a closed form for how we determine $$$a[l, r]$$$ is awesome. Firstly, we can note that the ordering of elements in the subarray does not matter; only the counts matter. Therefore, let's denote $$$c_i$$$ as the number of elements with value $$$i$$$ in the subarray. The claim is that the subarray is awesome if and only if $$$c_i \leq \frac{\texttt{cnt}_i}{k}$$$ over all $$$i$$$. We can show this through some casework:
If $$$c_i \leq \frac{\texttt{cnt}_i}{k}$$$, then we can place $$$\frac{\texttt{cnt}_i}{k} - c_i$$$ occurrences inside multiset $$$1$$$. We can place the other elements such that there are exactly $$$\frac{\texttt{cnt}_i}{k}$$$ occurrences in all other multisets as well.
Otherwise, if $$$c_i \gt \frac{\texttt{cnt}_i}{k}$$$, we can never make the other multisets have $$$\frac{\texttt{cnt}_i}{k}$$$ occurrences, because there are not enough occurrences left to do so.
Therefore, the problem simplifies to: Count the number of subarrays such that $$$c_i \leq \frac{\texttt{cnt}_i}{k}$$$ over all $$$i$$$. We can do so using two-pointers.
Say our left pointer $$$l$$$ is fixed. While the inequality is satisfied, we keep increasing the right pointer $$$r$$$. Once it is violated at $$$r$$$, we add $$$r - l$$$ to the answer, since the subarrays $$$a[l, l], a[l, l+1], \ldots, a[l, r - 1]$$$ all satisfy the condition. Then, increment the left pointer by $$$1$$$ and repeat.
Time complexity: $$$\mathcal{O}(n)$$$ per test case.
#include <bits/stdc++.h>
using namespace std;

#define int long long

void solve() {
    int n, k;
    cin >> n >> k;
    vector<int> a(n), cnt(n + 1), ct(n + 1);
    for (int i = 0; i < n; i++) {
        cin >> a[i];
        cnt[a[i]]++;
    }
    for (int i = 0; i <= n; i++) {
        if (cnt[i] % k) {
            return void(cout << 0 << endl);
        } else {
            cnt[i] /= k;
        }
    }
    int res = 0;
    for (int l = 0, r = 0; r >= l and r < n; r++) {
        ct[a[r]]++;
        while (ct[a[r]] > cnt[a[r]]) {
            ct[a[l]]--;
            l++;
        }
        res += (r - l + 1);
    }
    cout << res << endl;
}

int32_t main() {
    cin.tie(0)->sync_with_stdio(0);
    int tc = 1;
    cin >> tc;
    while (tc--) solve();
}
2148F - Gravity Falls
Firstly, note that the bottom row must have a length of $$$\texttt{max_k}$$$, the maximum $$$k$$$ among all arrays. Next, we can think of stacking as appending a suffix of the top array to the bottom array. For example, if we stack an array $$$p$$$ of length $$$y$$$ on top of an array $$$q$$$ of length $$$x$$$ where $$$y \gt x$$$, then after gravity takes effect we have appended the suffix of $$$p$$$ that starts at index $$$x + 1$$$ to $$$q$$$.
Let $$$b_1, b_2, \ldots, b_i$$$ denote the lexicographically minimum possible bottom row if we have a prefix of length $$$i$$$ already fixed. From what array (out of the $$$n$$$ given arrays) should we append next to $$$b$$$?
The answer is: among all arrays with length greater than $$$i$$$, we want to append the array with the lexicographically minimum suffix starting at index $$$i+1$$$. We must append the entire suffix of that array to $$$b$$$.
We will repeat this process until $$$b$$$ has $$$\texttt{max_k}$$$ elements. Now, we just need to find the array to append at each step efficiently.
Approach 1: $$$\mathcal{O}(\sum k \log n)$$$
Iterate $$$i$$$, denoting the current length of prefix, backwards from $$$\texttt{max_k}$$$. We will keep track of the indices of all arrays that have length at least $$$i$$$. At each $$$i$$$, sort the array with the indices using a comparator with the following priority, from high to low:
the element at index $$$i$$$.
The "rank" of the suffix starting at index $$$i+1$$$ (i.e. its relative order in the previous sorting). If the rank doesn't exist (i.e. the array has length exactly $$$i$$$), then we can denote the rank as $$$-1$$$, as shorter arrays are more optimal when suffixes are tied.
After the sorting, update the "rank" of each array with the current relative ordering. The first element in the sorted array will be the array you choose to append, if you ever end up with a fixed prefix of $$$i$$$ when you build $$$b$$$.
Now that we have the "next array to append" precalculated for every length, we build $$$b$$$ from left to right. Since we are sorting at most $$$\sum k$$$ elements in total and at most $$$n$$$ array indices are tracked at a time, the complexity is $$$\mathcal{O}(\sum k \log n)$$$.
Approach 2: $$$\mathcal{O}(\sum k \sqrt{\sum k})$$$
Every time we want to append the suffix starting at index $$$i$$$ of some array, let's say array $$$c$$$, then $$$c$$$ must have at least $$$i$$$ elements. Additionally, the sum of lengths of all arrays (i.e. $$$\sum k$$$) is bounded by $$$2 \cdot 10^5$$$.
The essential observation is: There exist at most $$$\sqrt{\sum k}$$$ distinct values of $$$k$$$ among the $$$n$$$ arrays. This also means that we are appending a non-empty suffix of some array at most $$$\sqrt{\sum k}$$$ times.
You can think of this as: if we have multiple arrays with the same length, then we only care to append the lexicographically minimum out of those arrays. Besides the array we will append, we don't care about any other arrays with that length.
Now, at each iteration of $$$i$$$, it suffices to manually iterate through all the suffixes of the arrays with length at least $$$i+1$$$ to find the lexicographically minimum one. Each of the at most $$$\sqrt{\sum k}$$$ appending steps iterates through at most $$$\sum k$$$ elements, so the complexity is $$$\mathcal{O}(\sum k \sqrt{\sum k})$$$.
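As a sanity check of the selection rule, the greedy itself can be sketched directly. This is a deliberately naive version (quadratic in the worst case; `bottomRow` is an illustrative name), just to pin down what "append the lexicographically minimum suffix" means; the two approaches above implement the same rule within the stated bounds:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive sketch of the greedy: while the bottom row b is not full, among all
// arrays longer than the current prefix, append the lexicographically
// smallest suffix starting at position |b|.
vector<int> bottomRow(const vector<vector<int>>& arrays) {
    size_t max_k = 0;
    for (auto& a : arrays) max_k = max(max_k, a.size());
    vector<int> b;
    while (b.size() < max_k) {
        size_t i = b.size();
        const vector<int>* best = nullptr;
        for (auto& a : arrays) {
            if (a.size() <= i) continue;  // too short to extend the prefix
            // lexicographical_compare treats a prefix as smaller, which
            // matches the tie-breaking rule (shorter tied suffixes win)
            if (!best || lexicographical_compare(a.begin() + i, a.end(),
                                                 best->begin() + i, best->end()))
                best = &a;
        }
        b.insert(b.end(), best->begin() + i, best->end());
    }
    return b;
}
```

On the arrays $$$[2,3,5], [3,4,4], [2,3], [2,4]$$$ (a test case discussed in the comments below), this produces $$$[2,3,4]$$$.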
#include <bits/stdc++.h>
using namespace std;

#define all(x) x.begin(), x.end()
#define pb push_back
#define FOR(i, a, b) for (int i = (a); i < (b); ++i)
#define ROF(i, a, b) for (int i = (a); i >= (b); --i)
#define trav(a, x) for (auto &a : x)
#define sz(x) (int)x.size()

template <class T> bool ckmax(T &a, const T &b) { return a < b ? a = b, 1 : 0; }
template <typename T> ostream &operator<<(ostream &out, vector<T> &a) {
    for (auto &x : a) out << x << ' ';
    return out;
}

void solve() {
    int n;
    cin >> n;
    vector<vector<int>> g(n), relevant;
    int max_k = 0;
    FOR(i, 0, n) {
        int k;
        cin >> k;
        g[i].resize(k);
        ckmax(max_k, k);
        FOR(j, 0, k) {
            cin >> g[i][j];
            if (sz(relevant) == j) relevant.pb({});
            relevant[j].pb(i);
        }
    }
    vector<int> lex_min(max_k), rank(n, -1);
    ROF(i, max_k - 1, 0) {
        vector<array<int, 3>> cur;
        trav(j, relevant[i]) cur.pb({g[j][i], rank[j], j});
        sort(all(cur));
        lex_min[i] = cur[0][2];
        int rk = 0;
        trav(j, cur) rank[j[2]] = rk++;
    }
    vector<int> ans;
    while (sz(ans) < max_k) {
        int tmp = sz(ans);
        auto &v = g[lex_min[tmp]];
        FOR(i, tmp, sz(v)) ans.pb(v[i]);
    }
    assert(sz(ans) == max_k);
    cout << ans << "\n";
}

signed main() {
    cin.tie(0)->sync_with_stdio(0);
    int t = 1;
    cin >> t;
    for (int test = 1; test <= t; test++) solve();
}
/* /\_/\
* (= ._.)
* / > \>
*/
from collections import deque

def solve():
    n = int(input())
    a = [deque(list(map(int, input().split()))[1:]) for i in range(n)]
    a.sort(key=lambda x: -len(x))
    ans = []
    while a:
        cur = min(a)
        ans += cur
        while a and len(a[-1]) <= len(cur):
            a.pop()
        for i in range(len(a)):
            for j in range(len(cur)):
                a[i].popleft()
    print(*ans)

for _ in range(int(input())):
    solve()
2148G - Farmer John's Last Wish
Let's start by understanding how $$$f(a)$$$ works. The definition essentially asks for the largest index $$$k$$$ such that the $$$\gcd$$$ of the first $$$k$$$ elements is strictly greater than the $$$\gcd$$$ of the first $$$k+1$$$ elements. In other words, we're looking for the last point at which the running gcd 'drops'.
A useful observation is that scaling the entire array (multiplying or dividing every element by the same integer) does not affect where this drop occurs; each $$$\gcd$$$ value is just scaled accordingly. This allows us to 'normalize' the problem: divide each $$$a_i$$$ by the $$$\gcd$$$ of the whole array. After this, the $$$\gcd$$$ of the full array becomes $$$1$$$, and the question reduces to:
What is the last index where the prefix GCD is still greater than $$$1$$$?
Now, note that $$$\gcd \gt 1$$$ for a prefix means there exists some integer $$$d \gt 1$$$ that divides every element in that prefix. So, the problem boils down to constructing the largest prefix that shares a common divisor greater than $$$1$$$.
To handle this, let’s think in terms of divisors. For each number $$$a_i$$$, compute all of its divisors. Maintain a frequency array $$$\text{div}$$$, where $$$\text{div}[d]$$$ stores how many elements seen so far are divisible by $$$d$$$. The maximum value of $$$\text{div}[d]$$$ tells us the largest prefix size where all elements share divisor $$$d$$$. That gives us $$$f(a)$$$.
Extending to Prefixes
The problem, however, doesn’t just ask for $$$f(a)$$$ of the entire array. We need $$$g(p)$$$ for every prefix $$$p = [a_1, a_2, \dots, a_i]$$$, considering all permutations. Fortunately, the divisor-counting approach extends naturally: as we iterate through the array from left to right, we update the divisor frequencies for each new element and recompute the maximum.
There is a subtle edge case here. Suppose the array is $$$[8,4,2,1]$$$ and we are at the prefix $$$[8, 4, 2]$$$. All three numbers are divisible by 2, so $$$\max(\text{div}) = 3$$$. But the true answer is not 3, it's 2: the drop has to happen strictly inside the prefix, so a divisor shared by every element of the prefix cannot witness a drop. To handle this, we should ignore the trivial case where every element of the prefix shares a divisor. In other words, for a prefix of length $$$i$$$ the correct answer is $$$\max\{\text{div}[d] : \text{div}[d] \lt i\}$$$, ensuring we don't mistakenly include the full prefix.
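This counting argument can be sketched as follows. The names `divisors` and `prefixAnswers` are illustrative, not from the solution code, and the per-prefix rescan of all counts makes this far slower than the actual solution given below; it only demonstrates the exclusion rule:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Enumerate all divisors of x in O(sqrt(x)).
vector<int> divisors(int x) {
    vector<int> d;
    for (int i = 1; 1LL * i * i <= x; i++)
        if (x % i == 0) {
            d.push_back(i);
            if (i != x / i) d.push_back(x / i);
        }
    sort(d.begin(), d.end());
    return d;
}

// For each prefix of length i, take the largest div[d] over d > 1 with
// div[d] < i, i.e. skip divisors shared by the whole prefix.
vector<int> prefixAnswers(const vector<int>& a) {
    map<int, int> div;  // div[d] = how many prefix elements d divides
    vector<int> res;
    for (int i = 0; i < (int)a.size(); i++) {
        for (int d : divisors(a[i]))
            if (d > 1) div[d]++;
        int best = 0;
        for (auto& kv : div)
            if (kv.second < i + 1) best = max(best, kv.second);
        res.push_back(best);
    }
    return res;
}
```

On $$$[8,4,2,1]$$$ this yields $$$0, 1, 2, 3$$$, matching the edge case discussed above.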
Complexity
For each element, computing divisors takes $$$O(\sqrt{a_i})$$$. Updating frequencies and maintaining the maximum is $$$O(1)$$$ per divisor. Across all elements, the time complexity is:
$$$\mathcal{O}\!\left( \sum_{i=1}^n \sqrt{a_i} \right) \leq \mathcal{O}(n \sqrt{A}),$$$
where $$$A$$$ is the maximum element. This is efficient enough given the constraints.
For every prefix of length $$$i$$$ ($$$1 \leq i \leq n$$$), the optimal reordering places the elements so that the first $$$k$$$ elements all share a common divisor while the $$$(k+1)$$$'th element does not. This ensures that $$$\gcd(a_1,\ldots,a_k) \gt \gcd(a_1,\ldots,a_{k+1})$$$ holds.
Thus, we have reduced the problem to: find a divisor that divides the most elements of every prefix of length $$$i$$$. To solve this, we can simply precompute the divisors of all numbers, increase their counts by $$$1$$$ as we iterate through the array, and take the maximum $$$count[x]$$$ as the answer.
The special case where the maximum $$$count[x] = i$$$ is invalid, as $$$\gcd(a_1, \ldots, a_k) \gt \gcd(a_1, \ldots, a_{k+1})$$$ no longer holds. We can save all such $$$x$$$ into an array and check for every $$$i$$$ whether $$$count[x] \lt i$$$ holds by brute force, since the number of divisors of a number $$$x$$$ is roughly at most $$$\sqrt[3]{x}$$$.
The final complexity will be $$$O(N \log N + N \sqrt[3]{N})$$$.
#include <bits/stdc++.h>
#define pb push_back
using namespace std;

const int mxN = 2e5 + 2;
vector<int> divs[mxN];

void pre() {
    for (int i = 2; i < mxN; i++)
        for (int j = i; j < mxN; j += i) divs[j].pb(i);
}

void solve() {
    int n;
    cin >> n;
    vector<int> a(n + 1), max_nums;
    for (int i = 1; i <= n; i++) cin >> a[i];
    // size the count arrays by the maximum value rather than by n:
    // a divisor of a[i] can exceed n, which would index out of bounds
    int mx = *max_element(a.begin() + 1, a.end());
    vector<int> cnt(mx + 1, 0);
    vector<bool> vis(mx + 1, 0);
    int ans = 0;
    for (int i = 1; i <= n; i++) {
        int x = a[i];
        vector<int> next;
        for (auto &num : divs[x]) {
            cnt[num]++;
            if (cnt[num] != i) ans = max(ans, cnt[num]);
            else if (!vis[num]) {
                next.pb(num);
                vis[num] = 1;
            }
        }
        for (auto &num : max_nums) {
            if (cnt[num] != i) {
                ans = max(ans, cnt[num]);
                vis[num] = 0;
            } else next.pb(num);
        }
        max_nums = next;
        cout << ans << " \n"[i == n];
    }
}

int main() {
    ios_base::sync_with_stdio(0);
    cin.tie(nullptr);
#ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
    freopen("output.txt", "w", stdout);
#endif
    int t = 1;
    cin >> t;
    pre();
    while (t--) solve();
    return 0;
}

Fast as light
I wonder how this submission got accepted. It does not even run the samples on my local machine. Really, I see an out-of-bounds bug in this code, but it somehow works on Codeforces and online compilers.
bro chk ur compiler.. btw why u are replying this thing to me?
I'm replying to you, because your comment is on top. So my comment is also on top if I reply you.
Congrats on pupil, btw. ;)
Yeah bro! Thank u
how can i write contest bro?
First, we have to become masters, and after that, you can set one. But if you want to write an unofficial one, you can. First, you have to create some problems in Polygon, then make some test cases, and finally test them with your friends. That’s all.
Thank u! UwU
hey robi i see an unrated create a div 2 contest?
Big Brain.
Fast editorial! (please like im farming)
I am a noob, I can't understand the rules does it mean that if I haven't participated in 5 previous rounds and solved a Question I won't get rated?
No, if your rating is <1400 then the contest will still be rated
Thankyou for the reply, Really Appreciated
And also if you are a sigma(contest announcement as a short)
Problem E was a beautiful problem for a div 4. Enjoyed the round.
actually easier for E, usually it would be some simple DP or shitty greedy.
nah sliding window is enough for E.we dont need dp for this problem
Problem-F: How it works step by step:
1. Read all the lists and keep track of their lengths.
2. Start building the final list: look at all lists that still have numbers left. From the remaining numbers in each list, pick the list whose numbers are smallest when compared from the current position.
3. Add numbers from that list: take all remaining numbers from that list and add them to the final list. Mark that list as used so we don't pick it again.
4. Repeat: keep repeating until all lists are used or we have added all numbers.
5. Print the final list: this is your combined list that is as small as possible in order.
Key idea: always pick the next list whose remaining numbers make the smallest sequence possible. Once picked, take all remaining numbers from that list. This is called a greedy approach.
You are so talented sir!! I am sure, you are casually practicing in 3 different languages during contest..
He is a cheater..
He knows that, he's just mocking the guy
MikeMirzayanov, please ban this user. He is a cheater
When selecting the list with the smallest number compared to the current position, if the same number is encountered, the size of the next column needs to be considered
Chinese
why is my code tle? https://mirror.codeforces.com/contest/2148/submission/338508342
B was easy but the problem statement was tricky to understand.
i took 45 mins for B and another 45 mins for A,C,D,E.
bro just look at the image in the statement that clears everything tho it took me 12 minutes to guess
It's really based on luck, bro. No offense I'm just saying.
B messed me up too(I was watching the contest) but through A — E I solved at first glance.
Same with F but I am not sure if it works(I mean i'm sure in my head but edge cases can always appear outta nowhere ya know)
yeah they should gve easy implementation problems in B
No I think it's best the way it is. B was about who thought the clearest, you know?
for the 3rd time div 4's B problem cooked me fr
Solved B as fast as fcuk boii
super fast editorial !
can any one explained me E i did not get it?
Use sliding window to count subarrays where each number appears exactly freq[i]/k times. after apply Sliding Window Logic. My solution: https://mirror.codeforces.com/contest/2148/submission/338449194
Got problem F accepted 10 minutes after the round got completed :( Still ENJOYED the contest :D
F is really a good question, a very interesting idea. Unfortunately, I couldn't come up with it during the competition (unhappy)
Fast editorial thanks! E and F was awwsome
G was betterrrrr
and i was applying line equations and verifying the point of intersection, to get the total intersections for the problem B. :/
:(
F possible to solve with $$$O(\sum k)$$$ time
Oops, my solution wrong. But anyway this time complexity exist — read: https://mirror.codeforces.com/blog/entry/146112?#comment-1308911
was it correct initially and later system testing gave WA?
Fast editorial fr, fr.
orz to The coder, The myth, The legend, cry
problem C was lovely.
loads of love to the writer.
was able to solve 4 questions.
Is anyone else having the same messed up rankings as me? AK, penalty time 217, 10th in common standings (but not shown), 69th in friend standings
It's because you haven't participated in at least 5 contests. There's this rule; you can find it in the announcement of the round.
oh i see thx!
Wrong interpretation for a solution for F, nvm. :)
Interesting idea! So clear, and also have "one pass through the data" vibe!OwO
what if we don't check the unique minimum case? will it time fail directly?
and even if we do check it suppose there are 2 identical arrays(minimums) so leading to no unique element case. In this scenario isn't this method leading to a (mx*n) time algorithm? is that fine? (could you please explain how it runs fine?)
my similar solution also passed that didn't check for unique case but kept track of when a minimum array ends(since it gives more options) however i am not able to analyse the worst case complexity. During the contest i assumed it will exceed TL. 338529461
This is true. Nice catch.
The current implementation runs in $$$\mathcal{O}(n \cdot mx)$$$, but it can be improved to $$$\mathcal{O}\left(\sum k\right)$$$ by storing, for each column, only the indices of arrays that actually have an element there. This way every element is processed at most once.
In the given code, however, $$$mx$$$ and $$$n$$$ are dependent: the worst case occurs when all arrays are identical (same size and same values). Even then, the bound is still manageable, since $$$\sum k = n^2$$$, which means that $$$n = \sqrt{\sum k}$$$ and yields $$$\mathcal{O}(\sum k \cdot \sqrt{\sum k})$$$.
Hacked your submission i can go to bed peacefully now.
i did the same, but unfortunately got hacked.
We got tricked.
By the way there's a typo in the editorial for E:
"There is can only be one way that all multisets can contain must the exact same elements"
how does the order not matter in E? like the order must matter right?
Multiset ensures elements are in ascending order regardless of order of insertion
ohhh...yeah. my bad. didn't read the statement properly. Thank You!
will i get rating as it was my third contest ?
Yesss, you just want see urself in the standings tab.
Hi guys!
This was my first contest using c++, can somebody please explain to me why did my code output 33? https://mirror.codeforces.com/contest/2148/submission/338397788
you call scanf with no address to store it in the loops, read into a dummy variable so that the integers are consumed
Very cool bug, lol. You are getting 33 because `scanf` is expecting the memory address where to store the int read from stdin. Since you have not provided any, it just ends up using the register value that was in %rdi earlier. This is undefined behavior, but here it happens to be the address of the variable `n`. So, all new values are actually read into `n`. And, since the last value in test case 2 was 32, you have n + m = 32 + 1 = 33. For test case 1, the last value and the original `n` value both happen to be 1, so it somehow works.
For a quick fix you can replace your `scanf` lines with `scanf("%d", &x);`. We are reading new values into `x` because we have no use for `x` and these new values. Ideally, you would declare a new int and then read into that.
Read https://usaco.guide/general/input-output for a C++-specific I/O guide.
Thank you both for replying and explaining!
Can anyone hack my solution, don't know why it passed even though it is O(k^2logn) for F:
https://mirror.codeforces.com/contest/2148/submission/338467466
Your solution eliminates rows with increasing lengths, so there can be at most $$$O(\sqrt{\sum{k_i}})$$$ elimination rounds. The worst case I can think of is:
I tried hacking it with a test case like this: https://mirror.codeforces.com/contest/2148/hacks/1146556/test . Passes easily with 500ms.
Please hack mine f
Can anyone tell me why my code is failing on F. https://mirror.codeforces.com/contest/2148/submission/338500676
For problem G, it suffices to only check prime divisors, which lowers the time complexity a lot if you precompute those.
can you explain why prime divisors only is enough
If a non-prime number divides some amount of values, then all of its prime divisors will divide at least as many values as it, so prime numbers are always optimal.
But how does this explain drops? $$$[4,2,1]$$$ for example: yeah, it is covered by just $$$2$$$ but the drop is not represented because $$$4$$$ never exists?
Whenever the gcd drops, lets say it is divided by some number x, all numbers before the drop are now going to be divisible by x when divided by the new gcd, hence it is sufficient to update the counts of all prime divisors of x to be the size of the current considered prefix.
See submission for details: 338721463
powers of primes
Correct me if i'm wrong, but i don't think it works, here's a counter tc for prime divisors
it works. the output will be `0 1`. We're not just checking the prime factors but rather the powers of each prime factor. Check my submission.
Thanks, i understood how it works from your code, here's a cleaner code that can run faster without maps (small change on editorial's code)
You also would need to check between two powers of a prime divisor, in case:
`125 25 5 5 5`
You are supposed to divide by the gcd first, which results in `25 5 1 1 1`, so 5 is optimal.
oops... I failed the E implementation so hard
Super fast editorial... Thanks for the contest. Loved problem F and happy to have solved it in the contest
Okay wrong answer on testcase 19..lol
Nyaha-ha-ha, I join to club with 23 WA
yaa this was the testcase..how did u figured it out?
Under the code in the submission there is a "Judgement Protocol" (or "Click to see test details").
And from the Checker Log you can write a special if to detect the needed test
mine too. does you figured out the tc -19?
culprit tc ^^ 4 3 1 2 3 4 1 5 5 2 4 1 2 3 4 1 1
I know this is not the place to ask such question, but still: is there anyway I can bypass "crawling" block, so that I will be able to continue hacking submissions?
Apparently there's none; the only way to get unblocked is to just wait. From my experience, you sometimes get unblocked after a few hours, and sometimes blocking persists for more than 10 hours. So yeah, I don't know...
I agree that it's silly to block someone who eagerly tries to hack solutions in the open hacking phase. Also the warning text is a bit too scary and unhelpful.
Yep, thanks
nvm
G math approach
Is it possible to implement such an approach? For now I'm getting TLE, I'm not strong in implementation.
For some random array, is the number of times the prefix gcd changes almost $$$O(1)$$$? i.e. is the number of times the condition `g != __gcd(g, a[i])` occurs found to be $$$O(1)$$$ for a large random array?
Please someone hack my F
too short F with python slices. i think it can be hacked
338490282
Cute sol. Ported it to C++: https://mirror.codeforces.com/contest/2148/submission/338536337 . I thought of doing this during the contest but I figured out an even more cursed method.
I don't think it can be hacked because the time complexity is like $$$O(n \sqrt{\sum{k_i}})$$$.
Thank you!
Holy what is this magnificent piece of code
Problem G can be done very easily by just going through each number. Say, gcd till now is g and after adding current number, gcd becomes g', then there are only two cases —
g' == g: In this case, if current element is not part of longest subarray, then answer does not change. If current element is part of longest subarray, i.e. it affects our answer, then g' has to be one of the divisors of current element. Hence ans = max(ans, F[d]), where 'd' is any divisor of current number > g as g'> g and F[] is cumulative frequency of divisors till now.
g'< g: this is best case, we already found best subarray till the last element & current element decreases the gcd which satisfies the problem condition, that's what the problem is asking for!
https://mirror.codeforces.com/contest/2148/submission/338531912
Problem G can be solved in $$$O(n\log n+\sum_{i=1}^{n}\tau(a_i)). $$$ My submission: 338537949
For each element, we must increment the count of each of its divisors by 1. Then, the problem boils down to finding the maximum count of a divisor such that it is not equal to the number of elements processed so far. Naively, this can be done with a multiset, but the heavy log factor TLEs. So, we can instead build a "buffer" which holds all elements with count equal to the number of elements processed so far (essentially, all elements in the buffer are divisors of the gcd so far). If the current element we process is divisible by some number x in the buffer, x remains. If not, we remove it (it does not divide the gcd anymore). The answer for the prefix is then the maximum of the count of every divisor not in the buffer.
Wish there was code for Solution 1 of G because my train of thought was like it. How can the maximum be maintained at O(1) per divisor? It feels like it's mandatory to check all currently counted divisors' count.
I don't know why, but I had a stroke reading problem E statement and understanding its meaning
I believe my solution for Problem F is $$$\mathcal{O}(\sum k_i)$$$. It's just about maintaining all the possible candidates for the current element. Here is The link.
You mean F?
Oh sorry, changed now.
Umm... isn't there some logarithmic cost for maintaining the valid array pool (you named them candidate)? I didn't understand fully since it's in python, but I believe i implemented a similar solve where for each column, I go through all the "valid" rows that have the minimum element for current index. Makes it $$$O(\sum k \cdot \log k)$$$ for me. Again, not 100% sure for your code, that's why asking.
There are nothing in my code that would cause a $$$\log$$$. You just need to iterate over the whole structure and it doesn't need to be sorted.
I believe I understand now what you did. I'm essentially doing the same thing, but didn't realize that I could get rid of the $$$\log$$$ factor. Thanks.
Can someone tell me why is my code failing for F..
https://mirror.codeforces.com/contest/2148/submission/338427667
any failing testcases will be appreciated
test cases:
1
4
3 2 3 5
3 3 4 4
2 2 3
2 2 4
my code was failing on this but still got accepted...
Correct output: 2 3 4
Your output: 2 3 5
can someone explain E problem , I didnt understand it
Can my G be hacked? ( https://mirror.codeforces.com/contest/2148/submission/338510007 )?
Uses segtrees to find most occurring prime factor, and gets TLE with segtrees of size 2e5. Realised 1e5 works a bit later and got AC with that, but still seems slightly iffy imo.
E is much easier than F,i think F is a good problem! btw I solve A-E so slowly because of the bad network I took half of time to load the problem and used m1.cf.com to submit lol
Also it's the first time I successfully hacked a code
These questions are very challenging(For Div4)! My friend's F code has been hacked, haha.
mine too..
Look at Ahmed_Nassar submissions surely they are ai solutions right? They are in the top of the leaderboard
Man, what the heck! My Python code for E got a TLE on test case 10 during post-contest system testing. It has the same exact logic and time complexity as the editorial's C++ solution, which feels so unfair! :((
Does anyone can tell me why my code is wrong on problem F? I can't figure it out.
Also, why is the indentation of the code so strange when editing, and how can I adjust the tab space?
here is the counter test case: 1 3 1 1 2 1 2 2 2 1 the output should be 1 1 instead of 1 2 like what it did in your code
thanks a lot
the test case #20 of problem F n=2 but it has only 1 following line? Am I missing something?
Why is my solution getting TLE? Is it solely because the language is Python?
338599674
I assume it is because of Python plus using a dict instead of a vector for counting the elements that are in your range.
I assume that if you replace that dict with a list for the counting, it will pass without a problem.
How did I get FST? My code is literally logically equivalent to the editorial's. Fine, I have an FST, but I genuinely do not see a single thing wrong with my code, and I am not debugging 300 lines to figure it out. I literally asked around four masters to check my code and still got FST. Okay, go downvote me now.
Sorry for the crashout, I guess.
Anyway, can someone tell me what's wrong? I seriously have no idea.
Now I really do understand the meaning of cry (for the second time).
sybau gpt monkey ur not slick ur not gonna get contrib
Upd: nvm, I was dumb and careless in the implementation.
There is a typo in problem F: "Then, $$$k_i$$$ space-separated integers $$$a_{i1}, a_{i2}, \ldots, a_{ik_i}$$$ follow ($$$1 \le a_{ij} \le 2 \cdot 10^5$$$)." It should be $$$a_{ik}$$$ instead of $$$a_{ij}$$$ in the brackets.
It is correct. The restriction applies to all of the elements of $$$a_i$$$, not only the last one; $$$j$$$ ranges over all indices.
thx for round
Could someone help me to understand question F?
F: does anyone know test case 19? My code is failing on it.
Hello programmers!!!!!!!!!!!!!!!!!!!!!!!
Can anyone tell me why my code 338645166 is giving a runtime error? I am checking for out-of-bounds access in the loop, but it still errors. On my compiler it gives the correct answer, but the CF judge reports a runtime error.
Can somebody explain the editorial of F (Gravity Falls) in simple language? I am having difficulty understanding it.
Basically, you can stack the arrays in any order, and the arrays must be left-aligned (a longer array on top of a shorter one is also fine). Then the elements in each array fall down until they touch another element (in other words, no elements are "floating"). Finally, read the elements at the bottom of the stack from left to right to form an array. Let that array be $$$a$$$; your task is to find the lexicographically smallest $$$a$$$.
I solved F in $$$O(\sum k + n)$$$.
My submission [338493989] was sent earlier (2025-09-13 18:35:43) and judged earlier (2025-09-14 10:27:56) than the similar submission [338456423], which was sent later (2025-09-13 19:25:16) and judged later (2025-09-14 11:23:41). I did not share my code with anyone or make it public. The similarity is coincidental, and I kindly request that this be taken into account.
Hey, can someone explain problem F in simple terms? I get the idea, but the implementation is difficult for me to wrap my head around. I also watched a YouTube video on problem F with a different solution, but again I couldn't understand part of the implementation. I would really appreciate it if one of you could help me out, thank you.
Bro, check this article; it will explain it to you.
In problem E:
I'm having trouble figuring out how binary search could be applied here. Could someone please explain or share an outline of how this problem can be solved using binary search?
Regarding solution similarity warning
Hello, I would like to clarify the situation regarding the similarity warning for my solution to problem 2148E. During the contest, I developed the solution based on my understanding of standard algorithms and competitive programming techniques. For faster code writing and autocompletion, I used GitHub Copilot integrated in VSCode as an assistant tool to help speed up writing boilerplate code and ensure syntax correctness. At no point did I copy or refer to any solutions from other contestants or any publicly available sources. If there is any similarity with other solutions, I believe it is purely coincidental, as the problem allows for multiple correct approaches derived from well-known methods. I fully support fair play and remain committed to solving problems independently and responsibly. I kindly request the contest admins to consider my good faith efforts and not to ban my account, as I have never intentionally violated the rules. Thank you very much for your understanding and consideration.
Wow, F had a beautiful solution.
Hello, my solution 338490913 was flagged for coincidence with another user’s code. I do not know the other participant at all and did not share or receive any solutions anywhere (I compile my solutions offline). I solved the problem independently. The similar part of the code is generic, which I use snippets for, and the core logic seems to be analogous. Please review this case manually. Thanks!
I am quite confused by solution 2 of problem G: why is the time complexity $$$O(N\log N + N\sqrt[3]{N})$$$? Is there a formal proof?
omaralimm Sorry for tagging you; I saw one of your comments saying that you liked problem G, which is why I'm tagging you for a quick reply. Here is my confusion: do you have any idea?
I am quite confused by solution 2 of problem G: why is the time complexity $$$O(N\log N + N\sqrt[3]{N})$$$? Is there a formal proof? It is quite confusing to see the $$$N\log N$$$ term and even the cube-root term!
I can prove it.
Firstly, let's get all the divisors of numbers up to $$$N$$$ using a sieve-like method before we start solving the test cases.
That will be $$$O(N\log N)$$$, which comes from the harmonic sum $$$N/1 + N/2 + \cdots + N/N \approx N\log N$$$.
And the number of divisors of a number $$$n$$$ is roughly $$$\sqrt[3]{n}$$$ (a practical bound for numbers in this range, not an exact formula).
So, we do the precomputation with a complexity of $$$O(N\log N)$$$ and solve the test cases with a complexity of $$$O(N\sqrt[3]{N})$$$.
Problem G can be solved with complexity $$$O(n\log A)$$$, where $$$A$$$ is the maximum element.
Let's improve the Solution 2 in the tutorial:
Suppose $$$g_k=gcd(a_1,\cdots,a_k)$$$ and $$$g_{k+1}=gcd(a_1,\cdots,a_{k+1})$$$. If $$$g_k \gt g_{k+1}$$$, then there will be a prime $$$p$$$ satisfying $$$g_{k+1}p\mid g_k$$$.
Let $$$m$$$ be the maximum integer that satisfies $$$p^m\mid g_{k+1}$$$ but $$$p^{m+1}\not\mid g_{k+1}$$$, then we have $$$p^{m+1}\mid g_k$$$. Thus, we only need to try divisors in the form of powers of prime numbers.
Since the number of divisors in this form for a number $$$x$$$ is at most $$$\log x$$$, we get the complexity at the beginning.
My submission: 339058812
I feel a bit confused about the analysis for problem E: "Say our left pointer l is fixed. Until the inequality is satisfied, we will keep increasing the right pointer r. Then, we will add r−l to the answer since all subarrays a[l,l],a[l,l+1],…,a[l,r−1] are satisfied. Then, increment the left pointer by 1 and repeat."
Isn't this actually a different method from the code in the solution? In the method described in the analysis, for each l we find the largest r such that the subarray [l, r] satisfies the condition, and then count the subarrays from l to r. But if we implemented this method directly, it would require resetting the count each time l changes, leading to $$$O(n^2)$$$ time. That method isn't the sliding-window technique, right? Am I misunderstanding something, or is the analysis wrong?
This is two pointers and the complexity is $$$O(N)$$$. The key is a critical observation, which is also a well-known idea from DP optimization: the optimal transition point is monotone.
First, any valid subarray $$$[l, r]$$$ has its right border $$$r$$$ somewhere in $$$[1, n]$$$. This leads us to ask: for each $$$r = i$$$, how many values of $$$l$$$ work? Summing over all $$$i$$$ gives the answer.
Second, the "optimization point" in DP means the best transition point; its exact meaning varies by problem. Here, let $$$opt(i-1)$$$ be the minimum $$$l$$$ that forms a valid subarray with right endpoint $$$i-1$$$: every $$$l$$$ in $$$[opt(i-1), i-1]$$$ forms a valid subarray with $$$i-1$$$, and no $$$l$$$ outside that range does. When we move on to $$$i$$$, the range $$$[opt(i-1), i]$$$ contains $$$[opt(i-1), i-1]$$$, so $$$opt(i)$$$ can only be greater than or equal to $$$opt(i-1)$$$; if $$$opt(i) < opt(i-1)$$$, that would mean an invalid subarray became valid, contradicting the observation above.
The editorial's second solution for problem G by Jteh can be optimized further. The $$$N\sqrt[3]{N}$$$ part can be handled internally by counting the number of factors in the current iteration whose count equals $$$i$$$. If this number is less than it was in the previous iteration, we simply take the answer to be $$$i-1$$$ for the prefix $$$a_1 \ldots a_i$$$. The proof: the set of such numbers never gains elements as we move forward; an element of this set can only be removed at some round, so simply keeping the count and continuing the iteration works.
Submission: 339061514
TC: $$$O(N\log N + N\log A)$$$, where $$$A$$$ is the maximum value of $$$a_i$$$.
2148D:
Why am I getting wrong answer on test 2?
Hello, I want to know why this code for problem G (using a segment tree) is NOT giving TLE: https://mirror.codeforces.com/contest/2148/submission/339214294
-
Clicking on the link redirects to the solution.
Okay... I can see it now...
I understand it!! Great solution.
What I was doing: for a particular $$$i$$$, find the largest value in the cnt array, and if that value equals $$$i$$$, find the maximum element smaller than $$$i$$$. I achieved this by storing two values in each node (the largest, and the largest value smaller than it) and merging the nodes accordingly... but it gives TLE on test 6.
Can someone explain why my code with the string implementation is wrong? I know the solution using ints; I used strings to concatenate the elements of each array. I think I am comparing them incorrectly, but I don't know why... I want to find my mistake but I can't... https://mirror.codeforces.com/contest/2148/submission/339787704
Problem G is very interesting
Is there a code implementation for solution 1 to problem G?
You'd better not use Chinese.
I don't get it. For 2148E — Split, I thought I needed to check every possible subarray of $$$a$$$ for whether it is awesome. But the solution code directly returns 0 if the whole array is not awesome. My question is: why not consider the other subarrays? Also, I am not getting this line: "there is some way to place items outside the subarray such that all multisets must contain the exact same elements (ignoring ordering)." What do you mean by "placing items outside"? Help please!!! [user:cry] [user:kwant]
For problem 2148E — Split, the reason behind this line in the editorial: `return void(cout << 0 << endl);`. For a subarray $$$[l, r]$$$ to be awesome, we need $$$\forall v:\ in[v] \cdot k \le cnt[v]$$$. But if some $$$cnt[v]$$$ is not divisible by $$$k$$$, then we can never achieve $$$cnt[v] = x \cdot k$$$.
What is the normal solution for the Lasers question?
Isn't the expected answer for the second test case of 2148D - Destruction of the Dandelion Fields wrong? Shouldn't it be 12 instead?
cry can you respond to it please.
How is E $$$O(n\log n)$$$? The editorial solution seems $$$O(n^2)$$$ to me.