Hi everyone,
The Indian ICPC Amritapuri multisite regional round is ongoing from 9 AM IST to 2 PM IST.
We will be starting a stream covering the round in 30 minutes.
We are also going to have a mirror, starting from 10 AM IST today.
Hi everyone,
The ICPC Chennai onsite regional round is ongoing from 9:15 AM IST to 2:15 PM IST. You can follow the public ranklist here
UPD1:
We invite you to participate in ICPC Chennai 2025 Onsite — Replay Round (Unrated), on Wednesday, 31 December.
Time: 10:00 AM — 3:00 PM IST
Hi everyone,
The ICPC Kanpur onsite regional round is ongoing from 10 AM IST to 3 PM IST. You can follow the public ranklist here
UPD1:
A mirror will be conducted on CodeChef this weekend. The exact time and the contest link will be updated later. The editorial will be posted after the mirror.
The problems were authored by: IceKnight1093, wuhudsm, jtnydv25, jeroenodb, nifeshe, and satyam343.
Special thanks to our testers: (thawin.ice, ttamx, temporary1), nifeshe, (Karuk, BLOBVISGOD, jeroenodb), (blitztage, Kira_1234), catgirl, _runtimeTerror_, SmolBrain, aryanc403, JagguBandar, (Proof_by_QED, cry, and Lilypad).
Congratulations to the top 3 teams!
Rank 1: Div4Maxxers (IIT Kharagpur) (9 / 11): Dominater069, Aryan-Raina, _ar07
Rank 2: Malai Chaap Extra Masala (IIT Roorkee) (8 / 11): Divine_Spark, Prady, Rodger2041
Rank 3: just_better (IIT Bombay) (8 / 11): Sam_2027, DustOfSnow, Kritin
UPD2:
We invite you to participate in ICPC Kanpur 2025 Onsite — Replay Round (Unrated), this Saturday, 27th December.
Time: 10:00 AM — 3:00 PM IST
We invite you to participate in CodeChef’s Starters 144, this Wednesday, 24th July.
It will be rated for all and is my last contest as contest admin.
Time: 8:00 PM — 10:00 PM IST
Joining us on the problem setting panel are:
Setter: Satyam satyam343 Kumar Roy.
Text Editorialist: Nishank IceKnight1093 Suresh.
Statement Verifier: Kanhaiya notsoloud1 Mohan.
Contest Admin: Satyam satyam343 Kumar Roy.
On the difficulty
The Division 1 problemset features problems with difficulties from div2A to div1E. We expect the contest to be interesting enough for LGMs as well.
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest. The video editorials of the problems will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here. Hope to see you participating.
Good Luck!
Hi everyone! I have wanted to write a blog like this for a long time. I finally decided to do it after Codeforces Round $$$934$$$.
I would like to thank Non-origination, GoatTamer and KingRayuga for discussing the problems with me.
For each problem, I tried to include some interesting background where I could remember it.
I tried to avoid using spoilers for the problems. So you can look at the comments even if you have not tried the problem.
| # | Date | Problem | Difficulty | Contest | Comment |
|---|---|---|---|---|---|
| 1 | Aug 2021 | Tree Distance Sum | Div 2 E | Codechef Starters 10 | My first problem which appeared in some contest. I noticed some nice properties about the dfs traversal and decided to have a problem exploiting those properties. There are alternative solutions too. |
| 2 | Oct 2021 | Equal Beauty | Div 2 C | SnackDown 2021 — Online Round 1A | |
| 3 | Feb 2022 | Or Sum | Div 2 E | CodeChef Starters 27 | Standard problem revolving around counting contribution ideas |
| 4 | Feb 2022 | Non Coprime Neighbours | Div 2 C | CodeChef Starters 27 | |
| 5 | Feb 2022 | Distinct Binary Strings | Div 2 A | CodeChef Starters 27 | Cute easy problem |
| 6 | Mar 2022 | Interactive Tree 2 | Div 2 E | CodeChef Starters 28 | |
| 7 | Jun 2022 | Divisible by 3 | Div 2 A | CodeChef Starters 42 | |
| 8 | Jun 2022 | Minimum or Maximum | Div 2 A | CodeChef Starters 42 | |
| 9 | Jun 2022 | Maximise Set Bits | Div 2 C | CodeChef Starters 42 | My favorite problem from the round |
| 10 | Jun 2022 | Construct An Array | Div 2 D | CodeChef Starters 42 | How to write the checker? |
| 11 | Jun 2022 | Maximise Score | Div 2 E | CodeChef Starters 42 | I misread one of the problems in the Google Code Jam 2022 Qualification Round and interpreted it as this problem. |
| 12 | Jun 2022 | Divisible by i | Div 2 A | CodeChef June Long One | |
| 13 | Jun 2022 | Gcd and Lcm | Div 2 D | CodeChef June Long Two | |
| 14 | Jun 2022 | Counting Is Fun | Div 2 F | CodeChef June Long Two | Cool educational problem in my opinion |
| 15 | Oct 2022 | Playing with GCD | Div 2 B | Codeforces Round 825 | It was my first Div 2 round on Codeforces. |
| 16 | Oct 2022 | Good Subarrays | Div 2 E | Codeforces Round 825 | Do look at the intended solution. I think it is pretty nice. |
| 17 | Oct 2022 | Equal Binary Subsequences | Div 2 D | Codeforces Round 825 | One of my best problems. Here's the background story about this problem. I was solving the following problem: You are given a binary string $$$s$$$ of length $$$2n$$$. Partition it into two disjoint, equal subsequences. I could not solve that problem (I later learned it is NP-hard). I realized that we could solve it if we could modify $$$s$$$ a bit, and I came up with the setup used in the contest. |
| 18 | Nov 2022 | Avoid Squares Please | Div 3 A | CodeChef Starters 63 | Funny problem |
| 19 | Nov 2022 | Make Length 1 | Div 2 A | CodeChef Starters 63 | |
| 20 | Nov 2022 | Count Partitions | Div 2 D | CodeChef Starters 63 | Good problem. I misread some OpenCup problem and came up with this problem. |
| 21 | Nov 2022 | GCD Sort | Div 2 E | CodeChef Starters 63 | I proposed this problem as Div 2 B. While testing, we realized that the intended solution was wrong. Utkarsh.25dec came up with the correct solution. |
| 22 | Nov 2022 | Distinct Neighbours | Div 2 E | CodeChef Starters 63 | Nice problem. |
| 22 | Nov 2022 | Distinct Neighbours | Div 2 B | CodeChef Starters 64 | Pretty cool problem |
| 23 | Nov 2022 | Count Number Of Pairs | Div 2 D | CodeChef Starters 67 | Cool task |
| 24 | Nov 2022 | Interactive Tree | Div 2 D | CodeChef Starters 67 | |
| 25 | Dec 2022 | Gcd of Subarrays | Div 2 A | CodeChef December Long | |
| 26 | Dec 2022 | Divisible by K | Div 2 A | CodeChef December Long | |
| 27 | Dec 2022 | Divisible by A_i | Div 2 A | CodeChef December Long | |
| 28 | Dec 2022 | Sum of Cube | Div 2 D | CodeChef December Long | |
| 29 | Dec 2022 | Distinct Sequence | Div 2 A | Codechef Starters 69 | |
| 30 | Dec 2022 | Longest Subarray | Div 2 B | Codechef Starters 69 | |
| 31 | Dec 2022 | Sort Permutation | Div 2 E | Codechef Starters 69 | I really like this task. Fun fact — This problem was not planned to be included in the contest. Around 3 hours before the contest, we found some bug in one of the problems, and this problem was prepared to be used as the replacement. |
| 32 | Dec 2022 | Divide and Conquer | Div 2 A | Codeforces Round 838 | |
| 33 | Dec 2022 | Make Array Good | Div 2 B | Codeforces Round 838 | |
| 34 | Dec 2022 | Binary Strings are Fun | Div 2 C | Codeforces Round 838 | One of my best problems. I came up with the problem idea while cycling and solved it while attending a college lecture. |
| 35 | Dec 2022 | Tree Sum | Div 2 E | Codeforces Round 838 | One of the best problems in the round. In the original problem, the weight of each edge was defined in some way. But TheScrasse suggested using the definition of good trees, which resulted in the same assignment of edge weight. |
| 36 | Dec 2022 | Good Pairs | Div 2 F | Codeforces Round 838 | Pretty cool data structure problem in my opinion |
| 37 | Dec 2022 | Unequal Adjacent Elements | Div 1 D | Codeforces Round 838 | I was trying the problem: You are given a tree consisting of $$$n$$$ nodes. Find a permutation $$$p$$$ of $$$[1,2,\ldots n]$$$ such that $$$dist(p_{i-1},p_i) \le dist(p_i,p_{i+1})$$$ for all $$$2 \le i \lt n$$$, where $$$dist(u,v)$$$ denotes the number of edges in the shortest path from $$$u$$$ to $$$v$$$. I came up with a constructive solution that did not use advanced tree techniques. Later, I learned that it could be bashed with centroid decomposition, which some people found standard. So, I dropped the idea of using the original problem. I thought, why not have a constructive problem revolving around my solution? Bonus: How can we use the solution to this Codeforces problem to solve the tree task? |
| 38 | Feb 2023 | Divisible Subarray Counting | Div 2 D | Codechef Starters 78 | |
| 39 | Feb 2023 | Not Divisible | Div 2 A | Codechef Starters 76 | |
| 40 | Feb 2023 | Expected Value | Div 2 D | Codechef Starters 76 | |
| 41 | Feb 2023 | Counting Is Fun | Div 2 E | Codechef Starters 76 | Nice problem. The original version of this problem was proposed as Div 2 C for the Codeforces round 838. It was removed when I found a better candidate for problem C. The original problem was: You are given two binary arrays $$$A$$$ and $$$B$$$, each of length $$$N$$$. Find $$$F(A,B)$$$. |
| 42 | Apr 2023 | Maximise Score | Div 2 A | Codechef Starters 86 | |
| 43 | Apr 2023 | String Game | Div 2 A | Codechef Starters 86 | |
| 44 | Apr 2023 | Largest Y | Div 2 B | Codechef Starters 86 | |
| 45 | Apr 2023 | Minimum Operation | Div 2 C | Codechef Starters 86 | |
| 46 | Apr 2023 | Least Size | Div 2 D | Codechef Starters 86 | |
| 47 | Apr 2023 | Sum Over All Arrays | Div 2 E | Codechef Starters 86 | Nice counting problem |
| 48 | Apr 2023 | Prefix Max | Div 2 F | Codechef Starters 86 | My opinion on the problem |
| 49 | Apr 2023 | Trees Are Fun | Div 1 D | Codechef Starters 86 | I think it is a really nice problem. While trying a problem in some Universal Cup contest, I came up with an interesting idea, which seemed overkill for that problem. I decided to have a problem revolving around that idea, as I liked it a lot. It is a pity that some inefficient solutions were able to pass the tests. |
| 50 | May 2023 | Printing Binary Array | Div 2 A | Codechef Starters 90 | |
| 51 | May 2023 | Anti Palindrome | Div 2 A | Codechef Starters 90 | |
| 52 | May 2023 | Finding Sum | Div 2 E | Codechef Starters 90 | Nice problem |
| 53 | May 2023 | Good Subarrays | Div 2 E | ICPC Algo Queen 2023 Finals | It was proposed as Div 2 E for Codeforces round 825, but ended up being unused. I came up with this problem by misreading one of the problems in Google Code Jam Qualification Round 2022. |
| 54 | Sept 2023 | Combinatorics Is Fun | Div 2 F | Codechef Starters 90 | I was looking for a second-to-last problem for a rated-for-all Starters. yan.silva sent me a problem. His version was to find $$$F(S, X)$$$ given $$$S$$$ and $$$X$$$. It seemed a bit too easy for the slot I was looking to fill. I thought that if we could introduce counting into it, it would be hard as well as interesting enough to be the second-to-last problem of Div 1. |
| 55 | Oct 2023 | Maximise Sum | Div 2 A | Codechef Starters 104 | |
| 56 | Oct 2023 | Lexicographically Largest | Div 2 B | Codechef Starters 104 | |
| 57 | Oct 2023 | Construct An Array | Div 2 C | Codechef Starters 104 | Cute problem |
| 58 | Oct 2023 | Find Diameter | Div 2 D | Codechef Starters 104 | |
| 59 | Oct 2023 | Longest Non Decreasing Subsequence | Div 2 E | Codechef Starters 104 | Cool problem |
| 60 | Oct 2023 | Combinatorics Is Fun | Div 1 D | Codechef Starters 104 | Educational problem |
| 61 | Dec 2023 | Maximum Score | Div 2 A | Codechef Starters 113 | |
| 62 | Dec 2023 | Minimise Maximum Subarray Sum | Div 2 B | Codechef Starters 113 | Pretty nice problem. This was proposed as Div 2 A for think-cell round 1. As it was quite hard for position A, it was not approved. In retrospect, I should have replaced think-cell B with this problem. |
| 63 | Dec 2023 | Count Distinct Arrays | Div 2 D | Codechef Starters 113 | |
| 64 | Dec 2023 | Score Sum | Div 2 C | Codechef Starters 113 | Cute problem. |
| 65 | Dec 2023 | Make All Equal | Div 2 C | Codechef Starters 113 | |
| 66 | Dec 2023 | Maximise The Minimum | Div 1 C | Codechef Starters 113 | Nice problem. |
| 67 | Dec 2023 | Counting Is Fun | Div 1 D | Codechef Starters 113 | One of my best problems. An easy version of this problem was proposed for think-cell round 1, but it did not end up being used. |
| 68 | Dec 2023 | Trees Are Fun | Div 1 E | Codechef Starters 113 | I think it is a good problem. I thought it was not that hard, but the standings say otherwise. |
| 69 | Jan 2024 | Counting Is Fun | Div 1 D | Codechef Starters 115 | It was also proposed for think-cell Round 1 and kept as a reserve. I thought we had enough problems of similar difficulty for the think-cell round, so I decided to use it on Codechef. |
| 70 | Feb 2024 | Count Subarrays | Div 2 B | Codechef Starters 120 | |
| 71 | Feb 2024 | Count Arrays | Div 2 B | Codechef Starters 120 | |
| 72 | Feb 2024 | Find Permutation | Div 2 B | Codechef Starters 120 | Cute problem |
| 73 | Feb 2024 | Construct Permutation | Div 2 B | Codechef Starters 120 | |
| 74 | Feb 2024 | Minimum Operations | Div 2 E | Codechef Starters 120 | I think this problem is pretty nice. Bonus: Come up with proof for your solution. |
| 75 | Feb 2024 | Trees Are Fun | Div 1 D | Codechef Starters 120 | I like this problem too. |
| 76 | Feb 2024 | Maximise The Score | Div 2 A | think-cell Round 1 | I wanted to change this task. But I could not get a better candidate. |
| 77 | Feb 2024 | Permutation Printing | Div 2 B | think-cell Round 1 | It is also possible to solve this for an arbitrary array $$$A$$$ containing $$$n$$$ distinct positive integers. |
| 78 | Feb 2024 | Lexicographically Largest | Div 2 C | think-cell Round 1 | I came up with the original problem on a night walk. The original problem seemed a bit standard. So, I modified it a bit and came up with this version. I really like this problem. But somehow, not many people like this :( |
| 79 | Feb 2024 | Sum over all Substrings | Div 2 E | think-cell Round 1 | Cool problem. Bonus: Prove your solution for this problem. |
| 80 | Feb 2024 | 2..3...4.... Wonderful! Wonderful! | Div 1 C | think-cell Round 1 | One of my best problems. Initially, I came up with $$$k=1$$$ version and discussed it with Non-origination. We both liked it. After some time, I realized we could solve the construction problem for arbitrary $$$k$$$. At that time, I thought counting portion might not be possible for large constraints and abandoned the problem. The next day, I suddenly came up with a cool solution. I coded it and verified it with a brute-force solution. I liked this problem and found it non-trivial as well. Later, I came to know that much simpler solutions exist as well. |
| 81 | Feb 2024 | Maximize the Difference | Div 1 D | think-cell Round 1 | I think it is that kind of problem which you either solve in less than 15 minutes or struggle for a long time. |
| 82 | Feb 2024 | Prefix Max Set Counting | Div 1 D | think-cell Round 1 | One of my best problems. We received an overwhelming response on this problem. Among all the problems that I authored, I think this is the problem that the participants liked the most. |
| 82 | Feb 2024 | Interactive Mex Tree | Div 1 E | think-cell Round 1 | Here is the background story behind this problem. amurto and I were discussing how to find the mex of some path in a tree efficiently, back in April 2022. At that time, I did not know HLD. I came up with an $$$O(n \log(n))$$$ solution. It was proposed for Codeforces Round 838. I later came to know that the problem can be bashed with HLD in $$$O(n \log^2(n))$$$ time. So, I made the problem interactive to cut off the HLD solutions. |
| 83 | Feb 2024 | Counting In Fun | Div 1 F | think-cell Round 1 | I like this problem a lot. This problem motivated me for the setup used in Counting In Fun. Huge thanks to Elegia for helping me to solve this problem. |
| 84 | March 2024 | Equal XOR | Div 2 B | Codeforces Round 934 (Div. 2) | This problem was originally proposed for think-cell round 1. |
| 85 | March 2024 | Counting Is Fun | Div 1 D | Codeforces Round 934 (Div. 1) | This problem was originally proposed for think-cell round 1 too. My solution exploited the necessary and sufficient condition to solve it. But some testers solved it using standard techniques and didn't like it that much. So it was removed from the problemset. I proposed the counting task when the coordinator asked for some Div 1 C level problems to bridge the difficulty gap. |
| 86 | March 2024 | Minimum Hamming Distance | Div 1 F | Codeforces Round 934 (Div. 1) | One of my best problems. Here is the background story behind this problem. Have a look at this problem first. Initially, we were asking to find the value of $$$F(s[1,n])$$$, and it was intended to be used as Div 2 C. At that time, we were looking for some Div 1 C-level problems, so I proposed this problem (the exact statement of Minimum Hamming Distance, but with $$$1 \le n \le 10^6$$$). At that time, I thought the greedy solution would work. When I stress-tested my greedy idea against brute force, I found out that it was wrong. I solved the problem in $$$O(n^2)$$$ later on. errorgorn and I tried to solve this problem in subquadratic time, but we didn't succeed. I underestimated this problem a bit: I thought it was Div 1 E difficulty, and I wanted to use it as problem H in think-cell Round 1. But it was not used, as problems D and I of that round had the same setups. |
Thank you for your participation! I hope you liked at least one problem from the set (:
1930A - Maximise The Score
Idea: satyam343
Editorial: Non-origination
Selecting the smallest two elements on the whiteboard is a good choice in the first move.
Let $$$b$$$ denote the sorted array $$$a$$$. Assume that $$$b$$$ contains only distinct elements for convenience. We prove by induction on $$$n$$$ that the maximum final score is $$$b_1 + b_3 + \ldots + b_{2n-1}$$$.
For the base case $$$n = 1$$$, the final and only possible score that can be achieved is $$$b_1$$$.
Now let $$$n \gt 1$$$.
Claim: It is optimal to choose $$$b_{1}$$$ with $$$b_{2}$$$ for some move.
Suppose that in some move, $$$b_{1}$$$ is chosen with $$$b_i$$$ and $$$b_{2}$$$ is chosen with $$$b_j$$$, for some $$$2 \lt i, j \leq 2n$$$, $$$i \not = j$$$.
The contribution to the score according to these choices is $$$\min(b_{1}, b_{i}) + \min(b_{2}, b_{j}) = b_{1} + b_{2}$$$.
However, if we had chosen $$$b_{1}$$$ and $$$b_{2}$$$ in one move, and $$$b_i$$$ and $$$b_j$$$ in the other move, the score according to these choices is $$$\min(b_{1}, b_{2}) + \min(b_{i}, b_{j}) = b_{1} + \min(b_{i}, b_{j})$$$.
As $$$i, j \gt 2$$$, $$$b_i \gt b_{2}$$$ and $$$b_j \gt b_{2} \implies \min(b_{i}, b_{j}) \gt b_2$$$. Thus, we can achieve a strictly larger score by choosing $$$b_1$$$ with $$$b_2$$$ in some move.
The choice of selecting $$$b_1$$$ and $$$b_2$$$ contributes a value of $$$b_1$$$ to the score. The maximum score that can be achieved for the remaining numbers $$$[b_3, b_4, \ldots, b_{2n}]$$$ on the whiteboard in the remaining moves is $$$b_3 + b_5 + b_7 + \ldots + b_{2n-1}$$$ by the induction hypothesis.
Note that we can extend the arguments for the case where $$$a$$$ has duplicate elements.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
void solve(){
ll n; cin>>n;
vector<ll> a(2*n);
ll ans=0;
for(auto &it:a){
cin>>it;
}
sort(a.begin(),a.end());
for(ll i=0;i<2*n;i+=2){
ans+=a[i];
}
cout<<ans<<"\n";
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
1930B - Permutation Printing
Idea: satyam343
Editorial: satyam343
For integers $$$x$$$ ($$$\lfloor \frac{n}{2} \rfloor \lt x \leq n$$$), there does not exist integer $$$y$$$ ($$$y \gt x$$$) such that $$$y$$$ is divisible by $$$x$$$.
Consider the permutation $$$p$$$ such that $$$p=[1, n, 2, n - 1, \ldots \lceil \frac{n+1}{2} \rceil]$$$. It is valid. Why?
We have $$$\max(p_a, p_{a+1}) \gt \lfloor \frac{n}{2} \rfloor$$$ for all $$$1 \leq a \lt n - 1$$$. So we can never have a pair of indices ($$$a, b$$$) with $$$a \lt n - 1$$$ such that $$$p_a$$$ divides $$$p_b$$$ and $$$p_{a+1}$$$ divides $$$p_{b+1}$$$: whichever of $$$p_a$$$, $$$p_{a+1}$$$ exceeds $$$\lfloor \frac{n}{2} \rfloor$$$ cannot divide any other element, by the observation above.
Now, we just need to check for $$$a = n - 1$$$. First of all, notice that $$$p_a$$$ does not divide $$$p_1$$$.
There does not exist an integer $$$b$$$ ($$$2 \leq b \lt n - 1$$$) such that $$$p_{a+1}$$$ divides $$$p_{b+1}$$$ as $$$2 \cdot p_{a+1} \ge n$$$ and $$$p_{c+1} \lt n$$$ for all $$$c$$$ ($$$2 \leq c \lt n - 1$$$).
Note that we covered all possible pairs of indices and did not find two distinct indices $$$i$$$ and $$$j$$$ ($$$1 \leq i, j \lt n$$$; $$$i \neq j$$$) such that $$$p_i$$$ divides $$$p_j$$$ and $$$p_{i+1}$$$ divides $$$p_{j+1}$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
void solve(){
ll n; cin>>n;
ll l=1,r=n;
for(ll i=1;i<=n;i++){
if(i&1){
cout<<l<<" ";
l++;
}
else{
cout<<r<<" ";
r--;
}
}
cout<<"\n";
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
1930C - Lexicographically Largest
Idea: satyam343
Editorial: Non-origination and satyam343
Consider an array $$$c$$$ of length $$$n$$$ such that $$$c_i :=$$$ number of indices smaller than $$$i$$$ which were chosen before index $$$i$$$.
So set $$$S$$$ will be a collection of $$$a_i + i - c_i$$$ over all $$$1 \leq i \leq n$$$.
Now one might wonder which arrays $$$c$$$ it is possible to obtain.
First, it is easy to see that we should have $$$0 \leq c_i \lt i$$$ for all $$$i$$$. Call an array $$$c$$$ of length $$$n$$$ good, if $$$0 \leq c_i \lt i$$$ for all $$$1 \leq i \leq n$$$. The claim is that all good arrays of length $$$n$$$ can be obtained.
We can prove it by induction on $$$n$$$.
$$$c_1 = 0$$$ always holds. Now $$$c_2$$$ can either be $$$0$$$ or $$$1$$$. We can obtain $$$c_2 = 0$$$ by deleting the element at index $$$2$$$ before the element at index $$$1$$$. We can also obtain $$$c_2 = 1$$$ by deleting it after deleting the element at index $$$1$$$. Thus, all good arrays of length 2 can be obtained.
Now assume that it is possible to obtain all good arrays of length at most $$$k$$$. Choose an integer $$$x$$$ ($$$0 \leq x \leq k$$$) arbitrarily. Consider the following sequence for the order of deletion:
It is easy to see that the array obtained on performing the above sequence of operations is a good array of length $$$k + 1$$$ with $$$c_{k+1} = x$$$. Hence every good array of length $$$k + 1$$$ can be obtained from some order of deletion.
So we have the following subproblem. We have a set $$$S$$$. We will iterate $$$i$$$ from $$$1$$$ to $$$n$$$, select an integer $$$c_i$$$ ($$$0 \leq c_i \leq i - 1$$$) and insert $$$a_i + i - c_i$$$ into set $$$S$$$ and move to $$$i + 1$$$. Now using exchange arguments, we can prove that it is never bad if we select the smallest integer $$$v$$$ ($$$0 \leq v \leq i - 1$$$) such that $$$a_i + i - v$$$ is not present in the set $$$S$$$, and assign it to $$$c_i$$$. Note that as we have $$$i$$$ options for $$$v$$$, and we would have inserted exactly $$$i-1$$$ elements before index $$$i$$$, there always exists an integer $$$v$$$ ($$$0 \leq v \leq i - 1$$$) such that $$$a_i + i - v$$$ is not present in the set $$$S$$$. You can refer to the attached submission to see how to find $$$v$$$ efficiently for each $$$i$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
void solve(){
ll n; cin>>n;
set<ll> used,not_used;
vector<ll> ans;
for(ll i=1;i<=n;i++){
ll x; cin>>x; x+=i;
if(!used.count(x)){
not_used.insert(x);
}
ll cur=*(--not_used.upper_bound(x)); //find the largest element(<= x) which is not in set "used"
not_used.erase(cur);
ans.push_back(cur);
used.insert(cur);
if(!used.count(cur-1)){
not_used.insert(cur-1);
}
}
sort(ans.begin(), ans.end());
reverse(ans.begin(), ans.end());
for(auto i:ans){
cout<<i<<" ";
}
cout<<"\n";
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
1930D1 - Sum over all Substrings (Easy Version)
Idea: satyam343
Editorial: satyam343
To find $$$f(s)$$$, we can partition $$$s$$$ into multiple independent substrings of length at most $$$3$$$ and find the best answer for each of them separately.
There always exists a string $$$g$$$ such that:
First of all, append $$$n$$$ $$$\mathtt{0} $$$ s to the back of $$$s$$$ for our convenience. Note that this does not change the answer.
Now let us call a binary string $$$p$$$ of size $$$d$$$ nice if:
- there exists a positive integer $$$k$$$ such that $$$d = 3k$$$;
- $$$p$$$ is of the form $$$f(\mathtt{0}, k) + f(\mathtt{1}, k) + f(\mathtt{0}, k)$$$, where $$$f(c, z)$$$ gives a string containing exactly $$$z$$$ characters equal to $$$c$$$.
Suppose binary string $$$t$$$ is one of the $$$s$$$-good strings such that there are exactly $$$f(s)$$$ $$$\mathtt{1}$$$s in $$$t$$$.
We claim that for any valid $$$t$$$, there always exists a binary string $$$t'$$$ such that:
Initially, all the $$$\mathtt{1}$$$s in $$$s$$$ are unmarked. We will mark all of them and modify the string $$$t$$$ in the process.
We will repeat the following recursive process until all the $$$\mathtt{1}$$$s in $$$s$$$ are marked.
Find the index of the leftmost unmarked $$$\mathtt{1}$$$ in $$$s$$$. Suppose its index is $$$x$$$. Now suppose $$$y$$$ is the largest index such that there are an equal number of $$$\mathtt{0}$$$s and $$$\mathtt{1}$$$s in the substring $$$t[x, y]$$$. Note that $$$y$$$ always exists because we appended extra $$$\mathtt{0}$$$s to $$$s$$$ earlier. Now we can rearrange the characters in the substring $$$t[x, y]$$$: it will still contain an equal number of $$$\mathtt{0}$$$s and $$$\mathtt{1}$$$s, and $$$\mathtt{1}$$$ will still be the mode of $$$t[x, y]$$$. Obviously, rearranging the characters in $$$t[x, y]$$$ to $$$\mathtt{0} \ldots \mathtt{0} \mathtt{1} \ldots \mathtt{1}$$$ is the best we can do. We will mark all the $$$\mathtt{1}$$$s in the substring $$$s[x, y]$$$. Suppose $$$y - x + 1 = 2v$$$. Now $$$t[y+1, y+v]$$$ might contain some $$$\mathtt{1}$$$s; say there are $$$z$$$ $$$\mathtt{1}$$$s in $$$t[y+1, y+v]$$$ initially. We will do the following operation exactly $$$z$$$ times.
Now note that $$$t[x, x+3v-1]$$$ will be of the form $$$f(\mathtt{0}, v) + f(\mathtt{1}, v) + f(\mathtt{0}, v)$$$. It is easy to verify that in the updated $$$t$$$, for every index $$$i$$$ there still exist two indices $$$1 \le l \le i \le r \le 2n$$$ such that $$$s_i$$$ is the mode of $$$t[l, r]$$$.
Now we can mark all the $$$\mathtt{1}$$$s in the substring $$$s[x+2v, x+3v-1]$$$ too, as $$$t[x+v, x+3v-1]$$$ contains an equal number of $$$\mathtt{0}$$$s and $$$\mathtt{1}$$$s.
It is not hard to conclude that the updated $$$t$$$ will be of form $$$f(\mathtt{0}, d_1) + z_1 + f(\mathtt{0}, d_2) + z_2 + f(\mathtt{0}, d_3) + \ldots + z_g + f(\mathtt{0}, d_{g+1}) $$$, where $$$z_1, z_2, \ldots z_g$$$ are nice binary strings and $$$d_1, d_2, \ldots d_{g+1}$$$ are non-negative integers.
Note that the $$$\mathtt{1}$$$ s in $$$t[x, x + 3v - 1]$$$ won't help the $$$\mathtt{1}$$$ s in $$$s[x + 3v, 2n]$$$. So, we can solve for $$$s[x + 3v, 2n]$$$ independently.
Let $$$t'$$$ be the updated $$$t$$$.
Now, carefully observe the structure of $$$t'$$$. We can replace all the substrings of the form $$$ f(\mathtt{0} , k) + f(\mathtt{1} , k) + f(\mathtt{0} , k) $$$ in $$$t'$$$ with $$$ \mathtt{0} \mathtt{1} \mathtt{0} \mathtt{0} \mathtt{1} \mathtt{0} \ldots \mathtt{0} \mathtt{1} \mathtt{0} \mathtt{0} \mathtt{1} \mathtt{0}$$$.
So the updated $$$t'$$$ (say $$$t''$$$) will be of the form $$$b_1 + b_2 + \ldots + b_q$$$, where each $$$b_i$$$ is either $$$\mathtt{0}$$$ or $$$\mathtt{010}$$$.
So whenever we need to find $$$f(e)$$$ for some binary string $$$e$$$, we can always try to find a string of the form $$$t''$$$ using as few $$$\mathtt{1}$$$s as possible. Notice that we can construct $$$t''$$$ greedily. You can look at the attached code for the implementation details. Also, we don't actually need to append the $$$\mathtt{0}$$$s to the back of $$$s$$$; that was just for the proof.
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;
#define ll long long
#define pb push_back
#define mp make_pair
#define nline "\n"
#define f first
#define s second
#define pll pair<ll,ll>
#define all(x) x.begin(),x.end()
const ll MOD=1e9+7;
const ll MAX=500500;
ll f(string s){
ll len=s.size(),ans=0,pos=0;
while(pos<len){
if(s[pos]=='1'){
ans++;
pos+=2;
}
pos++;
}
return ans;
}
void solve(){
ll n,ans=0; cin>>n;
string s; cin>>s;
for(ll i=0;i<n;i++){
string t;
for(ll j=i;j<n;j++){
t.push_back(s[j]);
ans+=f(t);
}
}
cout<<ans<<nline;
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
1930D2 - Sum over all Substrings (Hard Version)
Idea: satyam343
Editorial: satyam343
We can use the idea of D1 and dynamic programming to solve the problem in $$$O(n)$$$.
Suppose $$$dp[i][j]$$$ denotes $$$f(s[i,j])$$$ for all $$$1 \le i \le j \le n$$$.
Performing the transition is quite easy.
If $$$s_i = \mathtt{1}$$$, then $$$dp[i][j] = 1 + dp[i+3][j]$$$; otherwise, $$$dp[i][j] = dp[i+1][j]$$$. Note that $$$dp[i][j] = 0$$$ if $$$i \gt j$$$.
So if we fix $$$j$$$, we can find $$$dp[i][j]$$$ for all $$$1 \le i \le j$$$ in $$$O(n)$$$, and the original problem in $$$O(n^2)$$$.
Now, we need to optimise it.
Suppose $$$track[i] = \sum_{j=i}^{n} dp[i][j]$$$ for all $$$1 \le i \le n$$$, with base condition that $$$track[i] = 0$$$ if $$$i \gt n$$$.
There are two cases. If $$$s_i = \mathtt{1}$$$, then $$$track[i] = track[i+3] + (n - i + 1)$$$, since $$$dp[i][j] = 1 + dp[i+3][j]$$$ for each of the $$$n - i + 1$$$ choices of $$$j$$$. Otherwise, $$$track[i] = track[i+1]$$$.
So, the answer to the original problem is $$$\sum_{i=1}^{n} track[i]$$$, which we can do in $$$O(n)$$$.
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;
#define ll long long
#define pb push_back
#define mp make_pair
#define nline "\n"
#define f first
#define s second
#define pll pair<ll,ll>
#define all(x) x.begin(),x.end()
const ll MOD=1e9+7;
const ll MAX=500500;
void solve(){
ll n,ans=0; cin>>n;
string s; cin>>s; s=" "+s;
vector<ll> dp(n+5,0);
for(ll i=n;i>=1;i--){
if(s[i]=='1'){
dp[i]=dp[i+3]+n-i+1;
}
else{
dp[i]=dp[i+1];
}
ans+=dp[i];
}
cout<<ans<<nline;
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
1930E - 2..3...4.... Wonderful! Wonderful!
Idea: satyam343
Editorial: satyam343
Suppose you are given some array $$$b$$$ of length $$$m$$$ and a positive integer $$$k$$$. How to check whether we can get the array $$$b$$$ if we start with an array $$$a$$$ of length $$$n$$$ such that $$$a_i = i$$$ for all $$$i$$$ ($$$1 \leq i \leq n$$$)?
First of all, array $$$b$$$ should be a subsequence of $$$a$$$. Now consider an increasing array $$$c$$$ (possibly empty) such that it contains all the elements of $$$a$$$ which are not present in $$$b$$$. Now look at some trivial necessary conditions.
The length of array $$$c$$$ should be divisible by $$$2k$$$, as exactly $$$2k$$$ elements were deleted in one operation.
There should be at least one element $$$v$$$ in $$$b$$$ such that there are at least $$$k$$$ elements smaller than $$$v$$$ in the array $$$c$$$, and at least $$$k$$$ elements greater than $$$v$$$ in the array $$$c$$$. Why? Think about the last operation. We can consider the case of empty $$$c$$$ separately.
In fact, it turns out that these necessary conditions are sufficient (Why?).
Now, we need to find the number of possible $$$b$$$. We can instead find the number of binary strings $$$s$$$ of length $$$n$$$ such that $$$s_i = 1$$$ if $$$i$$$ is present in $$$b$$$, and $$$s_i=0$$$ otherwise.
For given $$$n$$$ and $$$k$$$, let us call $$$s$$$ good if the corresponding $$$b$$$ can be achieved from $$$a$$$.
Instead of counting strings $$$s$$$ which are good, let us count the number of strings which are not good.
For convenience, we will only consider strings $$$s$$$ having the number of $$$\mathtt{0}$$$'s divisible by $$$2k$$$.
Now, based on the conditions in hint $$$2$$$, we can conclude that $$$s$$$ is bad if and only if there does not exist any $$$1$$$ between the $$$k$$$-th $$$0$$$ from the left and the $$$k$$$-th $$$0$$$ from the right in $$$s$$$.
Let us compress all the $$$\mathtt{0}$$$'s between the $$$k$$$-th $$$0$$$ from the left and the $$$k$$$-th $$$0$$$ from the right into a single $$$0$$$ and call the new string $$$t$$$. Note that $$$t$$$ will have exactly $$$2k - 1$$$ $$$\mathtt{0}$$$'s. We can also observe that for each $$$t$$$, a unique $$$s$$$ exists. This is only because we have already fixed the parameters $$$n$$$ and $$$k$$$. Thus the number of bad $$$s$$$ having exactly $$$x$$$ $$$\mathtt{1}$$$'s is $$${{x + 2k - 1} \choose {2k - 1}}$$$ as there are exactly $$${{x + 2k - 1} \choose {2k - 1}}$$$ binary strings $$$t$$$ having $$$2k - 1$$$ $$$\mathtt{0}$$$'s and $$$x$$$ $$$\mathtt{1}$$$'s.
Finally, there are exactly $$$\binom{n}{x} - \binom{x + 2k - 1}{2k - 1}$$$ good binary strings $$$s$$$ having $$$x$$$ $$$\mathtt{1}$$$'s and $$$n-x$$$ $$$\mathtt{0}$$$'s. Now, do we need to find this value for each $$$x$$$ from $$$1$$$ to $$$n$$$? No, as the number of $$$\mathtt{0}$$$'s in $$$s$$$ (that is, $$$n-x$$$) should be a multiple of $$$2k$$$. There are only $$$O(\frac{n}{2k})$$$ useful candidates for $$$x$$$.
Thus, our overall complexity is $$$O(n \log(n))$$$ (as $$$\sum_{i=1}^{n} O(\frac{n}{i}) = O(n \log(n))$$$).
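The counting argument above can be sketched without modular arithmetic for small $$$n$$$ (a standalone illustration using Pascal's triangle; the function name is ours, not from the reference solution):

```cpp
#include <cassert>
#include <vector>

// For each k from 1 to (n-1)/2, count the reachable arrays b as described:
// start with 1 (empty c, i.e. b = a), then for every multiple "deleted" of 2k
// add C(n, x) - C(x + 2k - 1, 2k - 1), where x = n - deleted is the number
// of 1's in s. Plain long long arithmetic, so only suitable for small n.
std::vector<long long> count_reachable(int n) {
    // Pascal's triangle for binomial coefficients (0 outside the range).
    std::vector<std::vector<long long>> C(n + 1, std::vector<long long>(n + 1, 0));
    for (int i = 0; i <= n; i++) {
        C[i][0] = 1;
        for (int j = 1; j <= i; j++)
            C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
    }
    std::vector<long long> res;
    for (int k = 1; k <= (n - 1) / 2; k++) {
        long long ans = 1;
        for (int deleted = 2 * k; deleted <= n - 1; deleted += 2 * k) {
            int x = n - deleted;
            ans += C[n][x] - C[x + 2 * k - 1][2 * k - 1];
        }
        res.push_back(ans);
    }
    return res;
}
```

For $$$n=3$$$, $$$k=1$$$ this gives $$$2$$$: the untouched array $$$[1,2,3]$$$ and $$$[2]$$$ (delete $$$1$$$ and $$$3$$$ around the survivor $$$2$$$).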
#pragma GCC optimize("O3,unroll-loops")
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;
#define ll long long
const ll INF_ADD=1e18;
#define pb push_back
#define mp make_pair
#define nline "\n"
#define f first
#define s second
#define pll pair<ll,ll>
#define all(x) x.begin(),x.end()
const ll MOD=998244353;
const ll MAX=5000500;
vector<ll> fact(MAX+2,1),inv_fact(MAX+2,1);
ll binpow(ll a,ll b,ll MOD){
ll ans=1;
a%=MOD;
while(b){
if(b&1)
ans=(ans*a)%MOD;
b/=2;
a=(a*a)%MOD;
}
return ans;
}
ll inverse(ll a,ll MOD){
return binpow(a,MOD-2,MOD);
}
void precompute(ll MOD){
for(ll i=2;i<MAX;i++){
fact[i]=(fact[i-1]*i)%MOD;
}
inv_fact[MAX-1]=inverse(fact[MAX-1],MOD);
for(ll i=MAX-2;i>=0;i--){
inv_fact[i]=(inv_fact[i+1]*(i+1))%MOD;
}
}
ll nCr(ll a,ll b,ll MOD){
if(a==b){
return 1;
}
if((a<0)||(a<b)||(b<0))
return 0;
ll denom=(inv_fact[b]*inv_fact[a-b])%MOD;
return (denom*fact[a])%MOD;
}
ll n,k;
ll ways(ll gaps,ll options){
gaps--;
ll now=nCr(gaps+options-1,options-1,MOD);
return now;
}
void solve(){
cin>>n;
for(ll k=1;k<=(n-1)/2;k++){
ll ans=1;
for(ll deleted=2*k;deleted<=n-1;deleted+=2*k){
ll options=2*k,left_elements=n-deleted;
ans=(ans+ways(left_elements+1,deleted+1)-ways(left_elements+1,options)+MOD)%MOD;
}
cout<<ans<<" ";
}
cout<<nline;
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
precompute(MOD);
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
1930F - Maximize the Difference
Idea: satyam343
Editorial: satyam343
For an array $$$b$$$ consisting of $$$m$$$ non-negative integers, $$$f(b) = \max\limits_{p=1}^{m} ( \max\limits_{i = 1}^{m} (b_i | b_p) - \min\limits_{i = 1}^{m} (b_i | b_p))$$$. In other words, we can get the maximum possible value by choosing $$$x=b_p$$$ for some $$$p$$$ ($$$1 \leq p \leq m$$$).
$$$f(b)$$$ is the maximum value of $$$b_i \land$$$ ~$$$b_j$$$ over all pairs ($$$i,j$$$) ($$$1 \leq i,j \leq m$$$), where $$$\land$$$ is the bitwise AND operator, and ~ is the bitwise one's complement (NOT) operator.
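The identity above can be checked by brute force on tiny arrays (a standalone sketch; both function names are ours). Enumerating $$$x$$$ only up to the bit width suffices, since a bit of $$$x$$$ above all $$$b_i$$$ raises the maximum and minimum equally:

```cpp
#include <algorithm>
#include <cassert>
#include <climits>
#include <vector>

// f(b) by definition: maximize max(b_i | x) - min(b_i | x) over x.
int f_brute(const std::vector<int>& b, int bits) {
    int best = 0;
    for (int x = 0; x < (1 << bits); x++) {
        int mx = 0, mn = INT_MAX;
        for (int v : b) {
            mx = std::max(mx, v | x);
            mn = std::min(mn, v | x);
        }
        best = std::max(best, mx - mn);
    }
    return best;
}

// f(b) via the claimed identity: maximum of b_i AND NOT b_j over all pairs.
int f_andnot(const std::vector<int>& b) {
    int best = 0;
    for (int u : b)
        for (int v : b)
            best = std::max(best, u & ~v);
    return best;
}
```

For instance, for $$$b = [1, 2]$$$ both functions return $$$2$$$ (take $$$x = b_1 = 1$$$, giving $$$\max = 3$$$, $$$\min = 1$$$).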
Let us see how to find $$$f(b)$$$ in $$$O(n \log(n))$$$ first. We will focus on updates later. Let us have two sets $$$S_1$$$ and $$$S_2$$$: $$$S_1$$$ will contain all submasks of the elements $$$b_i$$$, and $$$S_2$$$ all submasks of their complements ~$$$b_i$$$.
We can observe that $$$f(b)$$$ is the largest element present in both sets $$$S_1$$$ and $$$S_2$$$.
Now, we can insert all submasks naively, but it would be pretty inefficient. Note that we need to insert any submask at most once in either of the sets.
Can you think of some approach in which you efficiently insert all the non-visited submasks of some mask?
insert_submask(cur, S){
return if mask cur is present in S
add mask cur to the set S
vec = list of all the set bits of the mask cur
for i in vec:
insert_submask(cur - 2^i, S)
}
Note that the above pseudocode inserts all submasks efficiently. As each mask is visited at most once, the amortised complexity is $$$O(n \log(n)^2)$$$. Note that instead of using a set, we can use a boolean array of size $$$n$$$ to reduce the complexity to $$$O(n \log(n))$$$.
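The pseudocode can be rendered in C++ as a minimal sketch (here `seen` plays the role of the set $$$S$$$, realized as a boolean array; the naming is ours):

```cpp
#include <cassert>
#include <vector>

// Insert mask `cur` and all of its submasks, skipping anything already seen.
// Every mask is expanded at most once, which gives the amortised bound.
void insert_submasks(int cur, std::vector<char>& seen) {
    if (seen[cur]) return;
    seen[cur] = 1;
    for (int j = 0; (1 << j) <= cur; j++)
        if (cur & (1 << j))
            insert_submasks(cur ^ (1 << j), seen);
}
```

For example, inserting mask $$$5 = \mathtt{101_2}$$$ marks exactly the four submasks $$$\{0, 1, 4, 5\}$$$.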
Thus, we can use the above idea to find $$$f(b)$$$ in $$$O(n \log(n))$$$. For each $$$i$$$ from $$$1$$$ to $$$m$$$, we can insert all submasks of $$$b_i$$$ into set $$$S_1$$$ and insert all the submasks of ~$$$b_i$$$ into set $$$S_2$$$.
The above idea hints at how to deal with updates. If we need to append an element $$$z$$$ to $$$b$$$, we can just insert all submasks of $$$z$$$ into set $$$S_1$$$ and insert all the submasks of ~$$$z$$$ into set $$$S_2$$$.
Hence, the overall complexity is $$$O(n \log(n))$$$.
#pragma GCC optimize("O3,unroll-loops")
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;
#define ll long long
#define pb push_back
#define mp make_pair
#define nline "\n"
#define f first
#define s second
#define pll pair<ll,ll>
#define all(x) x.begin(),x.end()
const ll MAX=100100;
void solve(){
ll n,q; cin>>n>>q;
ll till=1,len=1;
while(till<n){
till*=2;
len++;
}
ll ans=0;
vector<vector<ll>> track(2,vector<ll>(till+5,0));
auto add=[&](ll x,ll p){
queue<ll> trav;
if(track[p][x]){
return;
}
trav.push(x); track[p][x]=1;
while(!trav.empty()){
auto it=trav.front();
trav.pop();
if(track[0][it]&track[1][it]){
ans=max(ans,it);
}
for(ll j=0;j<len;j++){
if(it&(1<<j)){
ll cur=(it^(1<<j));
if(!track[p][cur]){
track[p][cur]=1;
trav.push(cur);
}
}
}
}
};
ll supermask=till-1;
vector<ll> a(q+5);
for(ll i=1;i<=q;i++){
ll h; cin>>h;
a[i]=(h+ans)%n;
add(a[i],0);
add(supermask^a[i],1);
cout<<ans<<" \n"[i==q];
}
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
1930G - Prefix Max Set Counting
Idea: satyam343
Editorial: satyam343
Consider some subsequence $$$S$$$ of $$$[1,2, \ldots n]$$$ such that there exists at least one pre-order $$$a$$$ such that $$$F(a)=S$$$. Look for some non-trivial properties of $$$S$$$. Node $$$S_i$$$ will be visited before node $$$S_{i+1}$$$.
Assume $$$|S|=k$$$. First of all, we should have $$$S_1=1$$$ and $$$S_k = n$$$. There cannot exist three distinct indices $$$a, b$$$ and $$$c$$$ ($$$1 \leq a \lt b \lt c \leq k$$$) such that $$$S_c$$$ lies on the path from $$$S_a$$$ to $$$S_b$$$. Assume $$$LCA_i$$$ is the lowest common ancestor of $$$S_i$$$ and $$$S_{i+1}$$$. For all $$$i$$$ ($$$1 \leq i \lt k$$$), the largest value over all the nodes on the unique path from $$$S_i$$$ to $$$S_{i+1}$$$ should be $$$S_{i+1}$$$.
Suppose $$$nax_p$$$ is the maximum value in the subtree of $$$p$$$. There is one more important restriction if $$$S_i$$$ does not lie on the path from $$$S_{i+1}$$$ to $$$1$$$. Suppose $$$m$$$ is the child of $$$LCA_i$$$, which lies on the path from $$$S_i$$$ to $$$LCA_i$$$. We should have $$$S_{i+1} \gt nax_m$$$. In fact, if you observe carefully you will realise that we should have $$$S_i = nax_m$$$.
Let us use dynamic programming. Suppose $$$dp[i]$$$ gives the number of valid subsequences(say $$$S$$$) such that the last element of $$$S$$$ is $$$i$$$. Note that the answer to our original problem will be $$$dp[n]$$$.
Suppose $$$nax(u,v)$$$ denotes the maximum value on the path from $$$u$$$ to $$$v$$$(including the endpoints).
Let us derive an $$$O(n^2)$$$ solution first.
We have $$$dp[1]=1$$$.
Suppose we are at some node $$$i$$$($$$i \gt 1$$$), and we need to find $$$dp[i]$$$. Let us consider some node $$$j$$$($$$1 \le j \lt i$$$) and see if we can append $$$i$$$ to all the subsequences which end with node $$$j$$$. If we can append, we just need to add $$$dp[j]$$$ to $$$dp[i]$$$.
But how do we check if we can append $$$i$$$ to all the subsequences that end with node $$$j$$$?
Check hints 2 and 3.
So, we have an $$$O(n^2)$$$ solution now.
We need to optimise it now.
We will process the nodes in increasing order of value (from $$$2$$$ to $$$n$$$) and calculate the $$$dp$$$ values.
Suppose $$$nax(1, par_i) = v$$$, where $$$par_i$$$ denotes the parent of $$$i$$$.
Assume we go from node $$$j$$$($$$j \lt i$$$) to node $$$i$$$.
There are two cases:
$$$j$$$ lies on the path from $$$1$$$ to $$$i$$$: This case is easy, as we just need to ensure that $$$nax(j,par_i) = j$$$. We can add $$$dp[j]$$$ to $$$dp[i]$$$ if we have $$$nax(j,par_i) = j$$$. Note that there exists only one node (node $$$v$$$) for which we might add $$$dp[v]$$$ to $$$dp[i]$$$.
$$$j$$$ does not lie on the path from $$$1$$$ to $$$i$$$: We will only consider the case in which $$$dp[j]$$$ will be added to $$$dp[i]$$$. Suppose $$$lca$$$ is the lowest common ancestor of $$$j$$$ and $$$i$$$, and $$$m$$$ is the child of $$$lca$$$ which lies on the path from $$$j$$$ to $$$lca$$$. Notice that the largest value in the subtree of $$$m$$$ should be $$$j$$$. This observation is quite helpful. We can traverse over all the ancestors of $$$i$$$. Suppose that we are at ancestor $$$u$$$. We will iterate over all children $$$c$$$ of $$$u$$$ such that $$$nax_c \lt i$$$, and add $$$dp[nax_c]$$$ to $$$dp[i]$$$ if $$$nax(u,par_i) \lt nax_c$$$. Suppose $$$track[u][i]$$$ stores the sum of $$$dp[nax_c]$$$ over all $$$c$$$ such that $$$nax_c \lt i$$$. So we should add $$$track[u][i]$$$ to $$$dp[i]$$$. But there is a catch: this way, $$$dp[nax_c]$$$ will also get added to $$$dp[i]$$$ even when $$$nax_c \lt nax(u, par_i)$$$. So, we need to subtract some values, which is left as an exercise for the reader. Now, $$$track[u][d] = track[u][d-1] + dp[nax_c]$$$ if $$$nax_c = d$$$. So, instead of keeping a two-dimensional array $$$track$$$, we can just maintain a one-dimensional array $$$track$$$. Note that we will only need the sum of $$$track[u]$$$ over all the ancestors of $$$i$$$, which we can easily calculate using the Euler tour.
You can look at the attached code for the implementation details.
The intended time complexity is $$$O(n \cdot \log(n))$$$.
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;
#define ll long long
#define pb push_back
#define mp make_pair
#define nline "\n"
#define f first
#define s second
#define pll pair<ll,ll>
#define all(x) x.begin(),x.end()
const ll MOD=998244353;
const ll MAX=1000100;
struct FenwickTree{
vector<ll> bit;
ll n;
FenwickTree(ll n){
this->n = n;
bit.assign(n, 0);
}
FenwickTree(vector<ll> a):FenwickTree(a.size()){
ll x=a.size();
for(size_t i=0;i<x;i++)
add(i,a[i]);
}
ll sum(ll r) {
ll ret=0;
for(;r>=0;r=(r&(r+1))-1){
ret+=bit[r];
ret%=MOD;
}
return ret;
}
ll sum(ll l,ll r) {
if(l>r)
return 0;
ll val=sum(r)-sum(l-1)+MOD;
val%=MOD;
return val;
}
void add(ll idx,ll delta) {
for(;idx<n;idx=idx|(idx+1)){
bit[idx]+=delta;
bit[idx]%=MOD;
}
}
};
vector<vector<ll>> adj;
vector<ll> tin(MAX,0),tout(MAX,0);
vector<ll> parent(MAX,0);
vector<ll> overall_max(MAX,0);
ll now=1;
vector<ll> jump_to(MAX,0),sub(MAX,0);
void dfs(ll cur,ll par){
parent[cur]=par;
tin[cur]=now++;
overall_max[cur]=cur;
for(auto chld:adj[cur]){
if(chld!=par){
jump_to[chld]=max(jump_to[cur],cur);
dfs(chld,cur);
overall_max[cur]=max(overall_max[cur],overall_max[chld]);
}
}
tout[cur]=now++;
}
vector<ll> dp(MAX,0);
void solve(){
ll n; cin>>n;
adj.clear();
adj.resize(n+5);
for(ll i=1;i<=n-1;i++){
ll u,v; cin>>u>>v;
adj[u].push_back(v);
adj[v].push_back(u);
}
now=1;
dfs(1,0);
ll ans=0;
FenwickTree enter_time(now+5),exit_time(now+5);
overall_max[0]=MOD;
dp[0]=1;
for(ll i=1;i<=n;i++){
ll p=jump_to[i];
dp[i]=(enter_time.sum(0,tin[i])-exit_time.sum(0,tin[i])-sub[p]+dp[p]+MOD+MOD)%MOD;
if(p>i){
dp[i]=0;
}
ll node=i;
while(overall_max[node]==i){
node=parent[node];
}
enter_time.add(tin[node],dp[i]);
exit_time.add(tout[node],dp[i]);
sub[i]=(enter_time.sum(0,tin[i])-exit_time.sum(0,tin[i])+MOD)%MOD;
}
cout<<dp[n]<<nline;
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
Idea: satyam343
Editorial: satyam343
$$$\operatorname{MEX}$$$ of the path from $$$u$$$ to $$$v$$$ will be the minimum value over all the nodes of $$$T$$$ which do not lie on the path from $$$u$$$ to $$$v$$$.
$$$p_1$$$ and $$$p_2$$$ are associated with the Euler tour
$$$p_1$$$ is the permutation of $$$[1,2, \ldots n]$$$ sorted in increasing order on the basis of $$$tin$$$ time observed during the Euler tour. Similarly, $$$p_2$$$ is the permutation of $$$[1,2, \ldots n]$$$ sorted in increasing order based on $$$tout$$$ time. Note that any Euler tour is fine.
Now we have $$$p_1$$$ and $$$p_2$$$ with us. Suppose we need to find $$$\operatorname{MEX}$$$ of the path from $$$u$$$ to $$$v$$$. Assume that $$$tin_u \lt tin_v$$$ for convenience. Assume $$$T'$$$ is the forest we get if we remove all the nodes on the path from $$$u$$$ to $$$v$$$ from $$$T$$$. Our goal is to find the minimum value over all the nodes in $$$T'$$$. Assume that $$$T'$$$ is non-empty, as $$$\operatorname{MEX}$$$ will be $$$n$$$ if $$$T'$$$ is empty. Assume $$$LCA$$$ is the lowest common ancestor of $$$u$$$ and $$$v$$$. Suppose $$$m$$$ is the child of $$$LCA(u,v)$$$, which lies on the path from $$$v$$$ to $$$LCA(u,v)$$$.
Let us consider some groups of nodes $$$p$$$ such that:
Note that we have precisely $$$5$$$ groups, with the $$$i$$$-th group consisting of only those nodes which satisfy the $$$i$$$-th condition.
Here comes the interesting claim. All nodes in $$$T'$$$ are present in at least one of the above $$$5$$$ groups. There does not exist a node $$$p$$$ such that $$$p$$$ is on the path from $$$u$$$ to $$$v$$$ and $$$p$$$ is present in any of the groups.
Now, it is not hard to observe that if we consider the nodes of any group, they will form a contiguous segment in either $$$p_1$$$ or $$$p_2$$$. So we can cover each group in a single query. Hence, we can find the $$$\operatorname{MEX}$$$ of the path in any round using at most $$$5$$$ queries.
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;
#define ll long long
const ll INF_ADD=1e18;
#define pb push_back
#define mp make_pair
#define nline "\n"
#define f first
#define s second
#define pll pair<ll,ll>
#define all(x) x.begin(),x.end()
const ll MOD=998244353;
const ll MAX=200200;
vector<ll> adj[MAX];
ll now=0,till=20;
vector<ll> tin(MAX,0),tout(MAX,0),depth(MAX,0);
vector<vector<ll>> jump(MAX,vector<ll>(till+1,0));
void dfs(ll cur,ll par){
jump[cur][0]=par;
for(ll i=1;i<=till;i++)
jump[cur][i]=jump[jump[cur][i-1]][i-1];
tin[cur]=++now;
for(ll chld:adj[cur]){
if(chld!=par){
depth[chld]=depth[cur]+1;
dfs(chld,cur);
}
}
tout[cur]=++now;
}
bool is_ancestor(ll a,ll b){
if((tin[a]<=tin[b])&&(tout[a]>=tout[b]))
return 1;
return 0;
}
ll lca(ll a,ll b){
if(is_ancestor(a,b))
return a;
for(ll i=till;i>=0;i--){
if(!is_ancestor(jump[a][i],b))
a=jump[a][i];
}
return jump[a][0];
}
void solve(){
ll n; cin>>n;
ll m; cin>>m;
vector<ll> a(n+5);
for(ll i=1;i<=n-1;i++){
ll u,v; cin>>u>>v;
adj[u].push_back(v);
adj[v].push_back(u);
}
now=1;
dfs(1,1);
vector<ll> p(n),q(n);
for(ll i=0;i<n;i++){
p[i]=q[i]=i+1;
}
sort(all(p),[&](ll l,ll r){
return tin[l]<tin[r];
});
sort(all(q),[&](ll l,ll r){
return tout[l]<tout[r];
});
for(auto it:p){
cout<<it<<" ";
}
cout<<endl;
for(auto it:q){
cout<<it<<" ";
}
cout<<endl;
auto query_p=[&](ll l,ll r){
ll left_pos=n+1,right_pos=-1;
for(ll i=0;i<n;i++){
ll node=p[i];
if((tin[node]>=l) and (tin[node]<=r)){
left_pos=min(left_pos,i);
right_pos=i;
}
}
ll now=n+5;
if(right_pos!=-1){
left_pos++,right_pos++;
cout<<"? 1 "<<left_pos<<" "<<right_pos<<endl;
cin>>now;
}
return now;
};
auto query_q=[&](ll l,ll r){
ll left_pos=n+1,right_pos=-1;
for(ll i=0;i<n;i++){
ll node=q[i];
if((tout[node]>=l) and (tout[node]<=r)){
left_pos=min(left_pos,i);
right_pos=i;
}
}
ll now=n+5;
if(right_pos!=-1){
left_pos++,right_pos++;
cout<<"? 2 "<<left_pos<<" "<<right_pos<<endl;
cin>>now;
}
return now;
};
while(m--){
ll u,v; cin>>u>>v;
if(tout[u]>tout[v]){
swap(u,v);
}
ll lca_node=lca(u,v);
ll ans=n;
if(lca_node==v){
ans=min(ans,query_q(1,tin[u]));
ans=min(ans,query_p(tin[u]+1,tout[v]));
ans=min(ans,query_q(tout[v]+1,now));
cout<<"! "<<ans<<endl;
ll x; cin>>x;
continue;
}
ans=min(ans,query_q(1,tin[u]));
ll consider=v;
for(auto it:adj[lca_node]){
if(is_ancestor(lca_node,it) and is_ancestor(it,v)){
consider=it;
}
}
ans=min(ans,query_p(tin[u]+1,tin[consider]-1));
ans=min(ans,query_q(tin[consider],tin[v]));
ans=min(ans,query_p(tin[v]+1,tout[lca_node]));
ans=min(ans,query_q(tout[lca_node]+1,now));
cout<<"! "<<ans<<endl;
ll x; cin>>x;
}
for(ll i=1;i<=n;i++){
adj[i].clear();
}
return;
}
int main()
{
ios_base::sync_with_stdio(false);
cin.tie(NULL);
#ifndef ONLINE_JUDGE
freopen("input.txt", "r", stdin);
freopen("output.txt", "w", stdout);
freopen("error.txt", "w", stderr);
#endif
ll test_cases=1;
cin>>test_cases;
while(test_cases--){
solve();
}
cout<<fixed<<setprecision(10);
cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
Idea: satyam343
Full Solution: Elegia
Editorial: errorgorn
It is convenient here to assign weights $$$\mathtt{0} \to -1$$$ and $$$\mathtt{1} \to 1$$$.
Given a string $$$t$$$, we can define the prefix sum $$$p$$$ of its weights. For example, if $$$t=\mathtt{0010111}$$$, then $$$p=[0,-1,-2,-1,-2,-1,0,1]$$$. So if $$$t$$$ is bad and $$$i$$$ is an index that violates the definition, then $$$\max(p_0,p_1,\ldots,p_{i-1}) \lt \min(p_i,p_{i+1},\ldots,p_n)$$$ if $$$s_i = \mathtt{0}$$$ or $$$\min(p_0,p_1,\ldots,p_{i-1}) \gt \max(p_i,p_{i+1},\ldots,p_n)$$$ if $$$s_i = \mathtt{1}$$$.
Naturally, it is convenient to assume that $$$p_0 \lt p_n$$$. All strings with $$$p_0 = p_n$$$ are clearly good, and $$$p_0 \gt p_n$$$ is handled similarly. For strings with $$$p_0 \lt p_n$$$, the condition can only be violated at an index with $$$s_i = \mathtt{0}$$$.
The solution works using PIE. Let us fix a set of positions $$$I$$$ that are bad, so that if $$$i \in I$$$, then $$$s_i =\mathtt{0}$$$ and $$$\max(p_0,p_1,\ldots,p_{i-1}) \lt \min(p_i,p_{i+1},\ldots,p_n)$$$. Then we need to count the number of ways to construct $$$p$$$ satisfying these conditions and then add it to the answer multiplied by $$$(-1)^{|I|}$$$.
Suppose that $$$I = i_1,i_2,\ldots,i_k$$$. $$$t[1,i_1]$$$ and $$$t[i_k,n]$$$ need to be ballot sequences of length $$$i_1-1$$$ and $$$n-i_k$$$ respectively (A001405, denote it as $$$f(n)$$$), while $$$t[i_j,i_{j+1}]$$$ needs to be a bidirectional ballot sequence of length $$$i_{j+1}-i_j-1$$$ (A167510, denote it as $$$g(n)$$$). Note that in our definition of a ballot sequence, we do not require that prefixes and suffixes have strictly more $$$\mathtt{1}$$$s than $$$\mathtt{0}$$$s. It is the same sequence, but note that we need to shift it by a few places when referring to OEIS.
The first $$$n$$$ terms of $$$f$$$ are easily computed in linear time. We will focus on how to compute the first $$$n$$$ terms of $$$g$$$ in $$$O(n \log^2 n)$$$.
Computing $$$g(n)$$$
Firstly, let us consider the number of bidirectional sequences with $$$\frac{n+k}{2}$$$ $$$\mathtt{1}$$$s and $$$\frac{n-k}{2}$$$ $$$\mathtt{0}$$$s. We will imagine this as lattice walks from $$$(0,0)$$$ to $$$(n,k)$$$ where $$$\mathtt{1} \to (1,1)$$$ and $$$\mathtt{0} \to (1,-1)$$$. If we touch the lines $$$y=-1$$$ or $$$y=k+1$$$, the walk is invalid.
We can use the reflection method here, similar to a proof of the Catalan numbers. The number of valid walks is $$$\#(*) - \#(T) + \#(TB) - \#(TBT) + \ldots - \#(B) + \#(BT) - \#(BTB) + \ldots$$$, where $$$\#(BTB)$$$ denotes the number of walks that touch the bottom line, then the top line, then the bottom line, and then may continue to touch the top and bottom lines after that.
We have $$$\#(*) = \binom{n}{\frac{n+k}{2}}$$$, $$$\#(T) = \binom{n}{\frac{n+k+2}{2}}$$$, $$$\#(TB) = \binom{n}{\frac{n+3k+4}{2}}$$$, $$$\#(TBT) = \binom{n}{\frac{n+3k+6}{2}}$$$, $$$\ldots$$$, $$$\#(B) = \binom{n}{\frac{n+k+2}{2}}$$$, $$$\#(BT) = \binom{n}{\frac{n+k+4}{2}}$$$, $$$\#(BTB) = \binom{n}{\frac{n+3k+6}{2}}$$$, $$$\ldots$$$
This already gives us a method to compute $$$g(n)$$$ in $$$O(n \log n)$$$ since for a fixed $$$k$$$, we can compute the above sum in $$$O(\frac{n}{k})$$$, since only the first $$$O(\frac{n}{k})$$$ terms are not $$$0$$$.
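The two alternating tails above can also be folded into a single sum over translates by the period $$$2(k+2)$$$; this rearrangement is our own, so the sketch below (with illustrative naming) checks it against a direct strip DP for small parameters:

```cpp
#include <cassert>
#include <vector>

// C(n, r) via Pascal's triangle; 0 outside the valid range. Small n only.
long long binom(int n, int r) {
    if (r < 0 || r > n) return 0;
    std::vector<std::vector<long long>> C(n + 1, std::vector<long long>(n + 1, 0));
    for (int i = 0; i <= n; i++) {
        C[i][0] = 1;
        for (int j = 1; j <= i; j++) C[i][j] = C[i - 1][j - 1] + C[i - 1][j];
    }
    return C[n][r];
}

// Reflection count of walks (0,0) -> (n,k) avoiding y = -1 and y = k+1:
// the alternating tails merge into
//   sum_j [ C(n, (n+k)/2 + j(k+2)) - C(n, (n+k+2)/2 + j(k+2)) ].
long long walks_reflection(int n, int k) {
    if ((n + k) % 2 != 0) return 0;  // parity: unreachable endpoint
    long long res = 0;
    for (int j = -(n + 2); j <= n + 2; j++) {
        res += binom(n, (n + k) / 2 + j * (k + 2));
        res -= binom(n, (n + k + 2) / 2 + j * (k + 2));
    }
    return res;
}

// Direct DP over the strip 0..k as a sanity check.
long long walks_dp(int n, int k) {
    std::vector<long long> cur(k + 1, 0);
    cur[0] = 1;
    for (int step = 0; step < n; step++) {
        std::vector<long long> nxt(k + 1, 0);
        for (int y = 0; y <= k; y++) {
            if (!cur[y]) continue;
            if (y + 1 <= k) nxt[y + 1] += cur[y];
            if (y - 1 >= 0) nxt[y - 1] += cur[y];
        }
        cur.swap(nxt);
    }
    return cur[k];
}
```

For example, for $$$n=4$$$, $$$k=2$$$ both functions give $$$2$$$ (the walks $$$UDUU$$$ and $$$UUDU$$$).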
First, notice that we can aggregate these sums without iterating over $$$k$$$: for some fixed $$$j$$$, we can find the coefficient $$$c_j$$$ of $$$\binom{n}{\frac{n+j}{2}}$$$ across all $$$k$$$. Notice that this coefficient is independent of $$$n$$$, so we only need to compute $$$c$$$ once.
Now, note that $$$\binom{n}{\frac{n+z}{2}} = [x^z] (x^{-1} + x) ^ n$$$, so that $$$g(n) = [x^0] C \cdot (x^{-1} + x)^n$$$, where $$$C$$$ is the OGF of $$$c$$$.
From this formulation, we can describe how to compute the first $$$n$$$ terms of $$$g$$$ in $$$O(n \log^2 n)$$$ using Divide and Conquer.
$$$DnC(l,r,V)$$$ computes $$$g(l), \ldots, g(r)$$$, where $$$V$$$ holds the coefficients between $$$[l-r,r-l]$$$ of $$$C \cdot (x^{-1} + x)^l$$$. $$$DnC(l,r,V)$$$ will call $$$DnC(l,m,V)$$$ and $$$DnC(m+1,r,V \cdot (x^{-1}+x)^{m-l+1})$$$. We have the recurrence $$$T(n) = 2 T(\frac{n}{2}) + O(n \log n)$$$, so $$$T(n) = O(n \log^2 n)$$$.
Of course, for a constant-factor speedup, you can choose to split the odd and even coefficients, but that is not needed.
It is possible to compute the first $$$n$$$ terms of $$$g$$$ in $$$O(n \log n)$$$, but it does not improve the overall complexity of the solution.
Final Steps
Now that we have obtained the first $$$n$$$ terms of $$$f$$$ and $$$g$$$, let us return to the original problem.
If $$$s_i = \mathtt{0}$$$, define $$$dp_i = f(i-1) - \sum\limits_{j \lt i, s_j = \mathtt{0}} dp_j \cdot g(i-j-1)$$$. Then this contributes $$$f(n-i) \cdot dp_i$$$ to the number of bad strings.
Again, we will use Divide and Conquer to perform this quickly.
Briefly, $$$DnC(l,r)$$$ will compute the values of $$$dp[l,r]$$$ given that the contributions from $$$dp[1,l-1]$$$ have been transferred to $$$dp[l,r]$$$ already.
We will call $$$DnC(l,m)$$$, compute the contribution from $$$dp[l,m]$$$ to $$$dp[m+1,r]$$$ using FFT and then call $$$DnC(m+1,r)$$$. The complexity of this is $$$O(n \log^2 n)$$$.
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define ll long long
#define ii pair<int,int>
#define iii tuple<int,int,int>
#define fi first
#define se second
#define endl '\n'
#define debug(x) cout << #x << ": " << x << endl
#define pub push_back
#define pob pop_back
#define puf push_front
#define pof pop_front
#define lb lower_bound
#define ub upper_bound
#define rep(x,start,end) for(int x=(start)-((start)>(end));x!=(end)-((start)>(end));((start)<(end)?x++:x--))
#define all(x) (x).begin(),(x).end()
#define sz(x) (int)(x).size()
mt19937 rng(chrono::system_clock::now().time_since_epoch().count());
const int MOD=998244353;
ll qexp(ll b,ll p,int m){
ll res=1;
while (p){
if (p&1) res=(res*b)%m;
b=(b*b)%m;
p>>=1;
}
return res;
}
ll inv(ll i){
return qexp(i,MOD-2,MOD);
}
ll fix(ll i){
i%=MOD;
if (i<0) i+=MOD;
return i;
}
ll fac[1000005];
ll ifac[1000005];
ll nCk(int i,int j){
if (i<j) return 0;
return fac[i]*ifac[j]%MOD*ifac[i-j]%MOD;
}
const ll mod = (119 << 23) + 1, root = 62; // = 998244353
// For p < 2^30 there is also e.g. 5 << 25, 7 << 26, 479 << 21
// and 483 << 21 (same root). The last two are > 10^9.
typedef vector<ll> vl;
void ntt(vl &a) {
int n = sz(a), L = 31 - __builtin_clz(n);
static vl rt(2, 1);
for (static int k = 2, s = 2; k < n; k *= 2, s++) {
rt.resize(n);
ll z[] = {1, qexp(root, mod >> s, mod)};
rep(i,k,2*k) rt[i] = rt[i / 2] * z[i & 1] % mod;
}
vector<int> rev(n);
rep(i,0,n) rev[i] = (rev[i / 2] | (i & 1) << L) / 2;
rep(i,0,n) if (i < rev[i]) swap(a[i], a[rev[i]]);
for (int k = 1; k < n; k *= 2)
for (int i = 0; i < n; i += 2 * k) rep(j,0,k) {
ll z = rt[j + k] * a[i + j + k] % mod, &ai = a[i + j];
a[i + j + k] = ai - z + (z > ai ? mod : 0);
ai += (ai + z >= mod ? z - mod : z);
}
}
vl conv(const vl &a, const vl &b) {
if (a.empty() || b.empty()) return {};
int s = sz(a) + sz(b) - 1, B = 32 - __builtin_clz(s), n = 1 << B;
int inv = qexp(n, mod - 2, mod);
vl L(a), R(b), out(n);
L.resize(n), R.resize(n);
ntt(L), ntt(R);
rep(i,0,n) out[-i & (n - 1)] = (ll)L[i] * R[i] % mod * inv % mod;
ntt(out);
return {out.begin(), out.begin() + s};
}
int n;
string s;
int c[100005];
int f[100005];
void calc(int l,int r,vector<int> v){
while (sz(v)>(r-l)*2+1) v.pob();
if (l==r){
f[l]=v[0];
return;
}
int m=l+r>>1;
calc(l,m,vector<int>(v.begin()+(r-m),v.end()));
vector<int> a;
int t=m-l+1;
rep(x,0,t+1) a.pub(nCk(t,x)),a.pub(0);
v=conv(v,a);
calc(m+1,r,vector<int>(v.begin()+(2*t),v.end()));
}
int ans[100005];
void solve(int l,int r){
if (l==r) return;
int m=l+r>>1;
solve(l,m);
auto a=conv(vector<int>(ans+l,ans+m+1),vector<int>(f,f+(r-l)));
rep(x,m+1,r+1) if (s[x]=='0') ans[x]=fix(ans[x]-a[x-1-l]);
solve(m+1,r);
}
signed main(){
ios::sync_with_stdio(0);
cin.tie(0);
cout.tie(0);
cin.exceptions(ios::badbit | ios::failbit);
fac[0]=1;
rep(x,1,1000005) fac[x]=fac[x-1]*x%MOD;
ifac[1000004]=inv(fac[1000004]);
rep(x,1000005,1) ifac[x-1]=ifac[x]*x%MOD;
rep(x,1,100005){
c[x]++;
int curr=x;
while (curr<100005){
if (curr+2<100005) c[curr+2]-=2;
if (curr+4<100005) c[curr+4]++;
if (curr+2*x+4<100005) c[curr+2*x+4]++;
curr+=2*x+4;
}
}
cin>>n;
cin>>s;
vector<int> v;
rep(x,n+1,1) v.pub(fix(c[x]+(x>=2?c[x-2]:0LL)));
calc(1,n,v);
f[0]=1;
int fin=qexp(2,n,MOD);
rep(_,0,2){
rep(x,0,n) ans[x]=(s[x]=='0')?nCk(x,x/2):0LL;
solve(0,n-1);
rep(x,0,n) if (s[x]=='0') fin=fix(fin-ans[x]*nCk((n-x-1),(n-x-1)/2));
for (auto &it:s) it^=1;
}
cout<<fin<<endl;
}
Hello, Codeforces!
Welcome to the think-cell Round 1 supported by think-cell, which will start on Feb/17/2024 17:35 (Moscow time). It will be a combined rated round for both divisions. All problems were authored and prepared by Elegia and satyam343.
We would like to thank:
You will be given $$$9$$$ problems and $$$3$$$ hours to solve them. One of the problems will be divided into two subtasks. One of the problems will be interactive. Make sure to read this blog and familiarize yourself with these types of problems before the round!
We hope you'll like the problemset!
UPD 1 : The score distribution is $$$500 - 1000 - 1500 - (1250 + 1000) - 2500 - 2750 - 3500 - 3500 - 5000$$$.
Congratulations to the winners!
Congratulations to the first solves as well!
A : SSerxhs
B : SSerxhs
C : M_A
D1 : hitonanode
D2 : hitonanode
E : ksun48
F : MAOooOAM
G : gyh20
H : maroonrk
I : Kapt [upsolved]
UPD 2: Editorial is out.
And now a word from our round partner – think-cell:
think-cell is the leading data visualization software for business presentations. Our mission is to offer the most intuitive user interface for generating complex data-driven charts and slides, while at the same time ensuring seamless integration with Microsoft Office. We work on challenging visualization problems, reverse-engineer Microsoft’s code, and reinvent how slides are created. think-cell is the only German company funding the C++ ISO committee delegation, so there is a good chance that components we invent will find their way into the standard.
Right now, we are looking for smart, creative C++ engineers with a solid theoretical background to join our engineering team.
Highlights of the role:
Here is what we offer:

Already convinced?
P.S. don't forget to check out our developer blog to learn more about our commitment to the tech world!
Join think-cell Round 1 that will start on Feb/17/2024 17:35 (Moscow time) for a chance to challenge your skills among other developers and win the following prizes.
- First place: Apple iPad Air (10.9-inch iPad Air Wi-Fi 256GB),
- Second and Third place: Apple Watch (Apple Watch Series 9 GPS + Cellular, 41mm Aluminum Case with Solo Loop),
- 4-50 places: think-cell branded socks
- 200 additional socks will be distributed randomly among the rest of the participants who solved at least one problem and for whom this is not the first rated round!
We invite you to participate in CodeChef’s Starters 120, this Wednesday, 7th February, rated till 6-Stars (i.e., for users with rating < 2500).
Time: 8:00 PM — 10:00 PM IST
Joining us on the problem setting panel are:
Tester: Takuki tabr Kurokawa
Text Editorialists: Nishank IceKnight1093 Suresh
Statement Verifier: Kanhaiya notsoloud1 Mohan
Contest Admin: Satyam satyam343
Additional Note about the contest:
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest. The video editorials of the problems will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.
Hope to see you participating.
Good Luck!
We invite you to participate in CodeChef’s Starters 113, this Wednesday, 20th December, rated for All.
Time: 8:00 PM — 10:30 PM IST
Joining us on the problem setting panel are:
Setters: Satyam satyam343 Kumar Roy.
Tester: Takuki tabr Kurokawa.
Statement Verifier: Kanhaiya notsoloud1 Mohan
Text Editorialists: Nishank IceKnight1093 Suresh.
Contest Admin: Satyam satyam343 Kumar Roy.
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest. The video editorials of the problems will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.
Hope to see you participating.
Good Luck!
We invite you to participate in CodeChef’s Starters 104, this Wednesday, 18th October, rated till 6-stars (i.e. for users with rating < 2500).
Time: 8:00 PM — 10:00 PM IST
Read about the recent judge migration here.
Joining us on the problem setting panel are:
Setters: Kanhaiya notsoloud1 Mohan, Satyam satyam343 Kumar Roy
Tester: Apoorv TyroWhizz Kumar
Statement Verifier: Kanhaiya notsoloud1 Mohan
Text Editorialists: Nishank IceKnight1093 Suresh
Contest Admin: Satyam satyam343 Kumar Roy
Additional Note about the contest:
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest. The video editorials of the problems will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.
Hope to see you participating.
Good Luck!
We invite you to participate in CodeChef’s Starters 100, this Wednesday, 13th September, rated for all users.
Time: 8:00 PM — 10:00 PM IST
Note that the duration is 2 hours. Read about the recent judge migration here.
Joining us on the problem setting panel are:
Setters: Kanhaiya notsoloud1 Mohan, Satyam satyam343, Takuki tabr Kurokawa, wuhudsm, NaOH_Frog, Yan yan.silva Silva
Testers: Takuki tabr Kurokawa
Statement Verifier: Kanhaiya notsoloud1 Mohan
Video Editorialists: Madhav jhamadhav Jha, Suraj jhasuraj01 Jha, Jwala jwalapc, Adhish competitive__programmer Kancharla, Pravin pravin_as Shankhapal
Text Editorialists: Nishank IceKnight1093 Suresh
Contest Admin: Satyam satyam343
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest. The video editorials of the problems will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.
Hope to see you participating.
Good Luck!
We invite you to participate in CodeChef’s Starters 90, this Wednesday, 17th May, rated till 6-star Coders (i.e. for users with rating < 2500).
Time: 8:00 PM — 10:00 PM IST
Note that the duration is 2 hours. Read about the recent CodeChef changes here.
Joining us on the problem setting panel are:
Setters: Aryan aryan12 Maskara, Patel _Prince Prince, Indresh indreshsingh Rajesh Singh, emptypromises empty-promises, Satyam satyam343
Testers: Udhav udhavvarma03 Varma
Video Editorialists: Sundar K Letscode_sundar, Madhav jhamadhav Jha, Suraj jhasuraj01 Jha, Jwala jwalapc, Adhish competitive__programmer Kancharla, Pravin pravin_as Shankhapal
Text Editorialists: Nishank IceKnight1093 Suresh
Contest Admin: Satyam satyam343
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest.
The video editorials of the problems will be available for all users for 1 day as soon as the contest ends, after which they will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.
Hope to see you participating. Good Luck!
We invite you to participate in CodeChef’s Starters 86, this Wednesday, 19th April, rated for all.
Time: 8:00 PM — 10:00 PM IST
Note that the duration is 2 hours. Read about the recent CodeChef changes here.
Joining us on the problem setting panel are:
Setters: Satyam satyam343
Statement Verifier: Kanhaiya notsoloud1 Mohan
Video Editorialists: Sundar K Letscode_sundar, Madhav jhamadhav Jha, Suraj jhasuraj01 Jha, Jwala jwalapc, Rohit ezo_tihor Oze
Text Editorialists: Nishank IceKnight1093 Suresh
Contest Admins: Satyam satyam343
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest.
The video editorials of the problems will be available for all users for 1 day as soon as the contest ends, after which they will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.
Hope to see you participating. Good Luck!
UPD:
I hope you liked the problems!!
Congratulations to the winners of Div 1!
We invite you to participate in CodeChef’s Starters 72, this Wednesday, 4th January.
Time: 8 PM — 11:00 PM IST
Joining us on the problem setting panel are:
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest.
The video editorials of the problems will be available for all users for 1 day as soon as the contest ends, after which they will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.
Hope to see you participating. Good Luck!
Thank you for participating! I hope you liked at least one problem from the set (:
We tried hard to put together an interesting problemset.
It is sad to see people disliking the round only because some problems were hard. Please read the intended solutions to see why we decided to put the problems (especially D) at their current positions.
Idea:satyam343
If the sum is even, the answer is $$$0$$$. Otherwise, we need to change the parity of at least one element of $$$a$$$.
It is optimal to change the parity of at most one element.
The answer is at most $$$20$$$, as we need to divide any integer $$$x$$$ ($$$1 \leq x \leq 10^6$$$) at most $$$20$$$ times to change its parity.
We are assuming the initial sum is odd. Suppose $$$f(x)$$$ ($$$1 \leq x \leq 10^6$$$) gives the minimum number of operations needed to change the parity of $$$x$$$.
Iterate from $$$i=1$$$ to $$$n$$$ and calculate $$$f(a_i)$$$ for each $$$i$$$.
The answer is the minimum among all the calculated values.
Time complexity is $$$O(n \cdot log(A_{max}))$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
void solve(){
    ll n; cin>>n;
    ll sum=0,ans=21;
    vector<ll> a(n);
    for(auto &it:a){
        cin>>it;
        sum+=it;
    }
    if(sum&1){
        for(auto &it:a){
            ll cur=it,now=0;
            while(!((cur+it)&1)){
                now++;
                cur/=2;
            }
            ans=min(ans,now);
        }
    }
    else{
        ans=0;
    }
    cout<<ans<<"\n";
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll t; cin>>t;
    while(t--){
        solve();
    }
}
Idea:satyam343
Suppose we have a prime number $$$p$$$, and two perfect powers of $$$p$$$, say $$$l$$$ and $$$r$$$. It is easy to see that $$$\max(l,r)$$$ is divisible by $$$\min(l,r)$$$.
So we need to choose some prime number $$$p$$$. Let us start with the smallest prime, $$$p=2$$$.
Here is one interesting fact: there always exists a power of $$$2$$$ in the range $$$[x,2x]$$$ for any positive integer $$$x$$$.
Suppose $$$f(x)$$$ gives the smallest power of $$$2$$$ which is greater than $$$x$$$.
Iterate from $$$i=1$$$ to $$$n$$$ and change $$$a_i$$$ to $$$f(a_i)$$$ by adding $$$f(a_i)-a_i$$$ to $$$i$$$-th element.
Time complexity is $$$O(n \cdot log(A_{max}))$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
ll f(ll x){
    ll cur=1;
    while(cur<=x){
        cur*=2;
    }
    return cur;
}
void solve(){
    ll n; cin>>n;
    cout<<n<<"\n";
    for(ll i=1;i<=n;i++){
        ll x; cin>>x;
        cout<<i<<" "<<f(x)-x<<"\n";
    }
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll t; cin>>t;
    while(t--){
        solve();
    }
}
1762C - Binary Strings are Fun
Idea:satyam343
Let us first find $$$f(s[1,n])$$$.
$$$f(s[1,n])=2^{len-1}$$$ where $$$len$$$ is the length of longest suffix of $$$s$$$ in which all characters are same.
How to prove the result in hint $$$2$$$?
First of all, it is easy to see that if all characters of $$$s$$$ are the same, then $$$f(s[1,n])=2^{n-1}$$$, as the median is always that same character.
Now assume that $$$s$$$ contains at least two distinct characters.
Suppose $$$t$$$ is one good extension of $$$s$$$, and assume we are at index $$$i$$$. If there exists an index $$$j$$$ ($$$j \gt i$$$) such that $$$s_i \neq s_j$$$, we should have $$$t_{2i} \neq s_i$$$.
Why? Assume $$$k$$$ is the smallest index greater than $$$i$$$ such that $$$s_i \neq s_k$$$. If we had $$$t_{2i} = s_i$$$, then $$$s_k$$$ could never be the median of the subarray $$$t[1,2k-1]$$$. So if the longest suffix of $$$s$$$ consisting of equal characters starts at index $$$i$$$, then $$$t_{2j} \neq s_j$$$ for all $$$j$$$ ($$$1 \leq j \lt i$$$), and $$$t_{2j}$$$ can be anything (either $$$0$$$ or $$$1$$$) for all $$$j$$$ ($$$i \leq j \lt n$$$).
Now we know how to solve for whole string $$$s$$$.
We can similarly solve for all prefixes.
To find $$$f(s[1,i])$$$, we need to find the longest suffix of $$$s[1,i]$$$ consisting of the same character.
We can easily calculate this for all prefixes while moving from $$$i=1$$$ to $$$n$$$.
Time complexity is $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
const ll MOD=998244353;
void solve(){
    ll n; cin>>n;
    string s; cin>>s; s=" "+s;
    ll ans=0,cur=1;
    for(ll i=1;i<=n;i++){
        if(s[i]==s[i-1]){
            cur=(2*cur)%MOD;
        }
        else{
            cur=1;
        }
        ans=(ans+cur)%MOD;
    }
    cout<<ans<<"\n";
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll t; cin>>t;
    while(t--){
        solve();
    }
}
Idea:amurto Prepared by:errorgorn
The intended solution uses $$$2 \cdot (n-2)$$$ queries. You are allowed to guess two indices. Doesn't this hint towards something?
If we can eliminate $$$n-2$$$ elements that cannot be $$$0$$$ for sure, we are done.
Suppose we have three distinct indices $$$i$$$, $$$j$$$ and $$$k$$$. Is it possible to remove one index (say $$$x$$$) out of these three such that $$$p_x \neq 0$$$ for sure, using only two queries?
So suppose we have three distinct indices $$$i$$$, $$$j$$$ and $$$k$$$.
Let us assume $$$l=query(i,k)$$$ and $$$r=query(j,k)$$$
Now we have only three possibilities.
Thus we can eliminate one index using $$$2$$$ queries. We will perform this operation $$$n-2$$$ times. Refer to the attached code for details.
Time complexity is $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
void solve(){
    ll n; cin>>n;
    ll l=1,r=2;
    for(ll i=3;i<=n;i++){
        ll ql,qr;
        cout<<"? "<<l<<" "<<i<<endl;
        cin>>ql;
        cout<<"? "<<r<<" "<<i<<endl;
        cin>>qr;
        if(ql>qr){
            r=i;
        }
        else if(ql<qr){
            l=i;
        }
    }
    cout<<"! "<<l<<" "<<r<<endl;
    ll check; cin>>check;
    assert(check==1);
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll t; cin>>t;
    while(t--){
        solve();
    }
}
Idea:satyam343 Improved by:TheScrasse
There does not exist any good tree of size $$$n$$$ if $$$n$$$ is odd.
How to prove it? Suppose $$$f(v)$$$ gives the product of the weights of the edges incident to node $$$v$$$ in a good tree. We know that $$$f(i)=-1$$$ for every node $$$i$$$ if the tree is good, so $$$\prod_{i=1}^{n} f(i) = -1$$$ if $$$n$$$ is odd.
There is another way to find $$$\prod_{i=1}^{n} f(i)$$$: look at the contribution of each edge. Each edge contributes $$$1$$$ to $$$\prod_{i=1}^{n} f(i)$$$, no matter what its weight is, as it gets multiplied twice. Thus we get $$$\prod_{i=1}^{n} f(i) = 1$$$, a contradiction. Hence no good tree of odd size exists.
Now assume $$$n$$$ is even.
Here is an interesting claim.
For any unweighted tree, there exists exactly one assignment of edge weights which makes it good.
Thus there are $$$n^{n-2}$$$ distinct edge-weighted trees.
How to prove the claim in hint $$$2$$$?
Arbitrarily root the tree at node $$$1$$$.
Now start from leaves and move towards root and assign the weight of edges in the path.
First of all, the edge incident to any leaf node will have weight $$$-1$$$. While moving towards the root, it can be observed that the weight of the edge between $$$u$$$ and the parent of $$$u$$$ depends on the product of the weights of the edges between $$$u$$$ and its children.
As we are moving from the leaves towards the root, the weights of the edges between $$$u$$$ and its children are already fixed. The weight of the edge between $$$u$$$ and the parent of $$$u$$$ is $$$-1 \cdot \prod_{x \in C(u)}{pw(x)}$$$, where $$$pw(x)$$$ gives the weight of the edge between $$$x$$$ and its parent, and $$$C(u)$$$ denotes the set of children of $$$u$$$.
Time for one more interesting claim: the weight of an edge $$$e$$$ is $$$(-1)^{l}$$$ if there are $$$l$$$ nodes on one side and $$$n-l$$$ nodes on the other side of $$$e$$$, irrespective of the structure of the tree.
We can prove this claim by induction, similar to what we did in hint $$$3$$$.
To find answer we will look at contribution of each edge.
Here's a detailed explanation of how to do it.
In total, we have $$$n^{n-2} \cdot (n-1)$$$ edges.
Suppose for some edge (say $$$e$$$), we have $$$l$$$ nodes (including node $$$1$$$) on the left side and $$$r$$$ nodes (including node $$$n$$$) on the right side.
Among $$$n^{n-2} \cdot (n-1)$$$ edges, how many possibilities do we have for $$$e$$$?
It is $$${{n-2} \choose {l-1}} \cdot l \cdot r \cdot l^{l-2} \cdot r^{r-2}$$$. Why? First we select $$$l-1$$$ nodes(as node $$$1$$$ is fixed to be on left side) to be on left side, we get $$${{n-2} \choose {l-1}}$$$ for this.
Now we have $$$l$$$ nodes on left side and $$$r$$$ nodes on right side. Edge $$$e$$$ will connect one among $$$l$$$ nodes on left and one among $$$r$$$ nodes on right. So edge $$$e$$$ will exist between $$$l \cdot r$$$ pairs. We know that number of distinct trees having $$$x$$$ nodes is $$$x^{x-2}$$$.
Now on selecting one node from left and one from right, we have fixed the root of subtree on left side, and have also fixed the root of subtree on right side. So, number of distinct subtrees on left side is $$$l^{l-2}$$$, and number of distinct subtrees on right side is $$$r^{r-2}$$$.
Thus, on multiplying everything (since the choices are independent), we get $$${{n-2} \choose {l-1}} \cdot l \cdot r \cdot l^{l-2} \cdot r^{r-2}$$$ possibilities for $$$e$$$.
Now this edge lies on the path from $$$1$$$ to $$$n$$$, as both nodes lie on opposite sides of this edge.
So this edge contributes $$$(-1)^l \cdot {{n-2} \choose {l-1}} \cdot l \cdot r \cdot l^{l-2} \cdot r^{r-2}$$$ to answer.
Hence $$$d(1,n)=\sum_{l=1}^{n-1} (-1)^l \cdot {{n-2} \choose {l-1}} \cdot l \cdot r \cdot l^{l-2} \cdot r^{r-2}$$$ where $$$l+r=n$$$. Note that we assumed that we are always going from the left subtree to the right subtree while calculating the contribution. As we have tried all possibilities for $$$l$$$, all cases get covered. We used left and right subtrees just for our own convention.
Time complexity is $$$O(n \cdot \log(n))$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
const ll MOD=998244353;
const ll MAX=500500;
vector<ll> fact(MAX+2,1),inv_fact(MAX+2,1);
ll binpow(ll a,ll b,ll MOD){
    ll ans=1;
    a%=MOD;
    while(b){
        if(b&1)
            ans=(ans*a)%MOD;
        b/=2;
        a=(a*a)%MOD;
    }
    return ans;
}
ll inverse(ll a,ll MOD){
    return binpow(a,MOD-2,MOD);
}
void precompute(ll MOD){
    for(ll i=2;i<MAX;i++){
        fact[i]=(fact[i-1]*i)%MOD;
    }
    inv_fact[MAX-1]=inverse(fact[MAX-1],MOD);
    for(ll i=MAX-2;i>=0;i--){
        inv_fact[i]=(inv_fact[i+1]*(i+1))%MOD;
    }
}
ll nCr(ll a,ll b,ll MOD){
    if((a<0)||(a<b)||(b<0))
        return 0;
    ll denom=(inv_fact[b]*inv_fact[a-b])%MOD;
    return (denom*fact[a])%MOD;
}
void solve(){
    ll n,ans=0; cin>>n;
    if(n&1){
        cout<<0;
        return;
    }
    ll sgn=1;
    for(ll i=1;i<n;i++){
        sgn*=-1;
        ll r=n-i,l=i;
        ll fix_l=nCr(n-2,l-1,MOD); //fixing l nodes on left side
        ll fix_root=(l*r)%MOD; //fixing roots of subtrees on both sides
        ll trees=(binpow(l,l-2,MOD)*binpow(r,r-2,MOD))%MOD; //counting no of subtrees
        ll no_of_e=(((fix_l*fix_root)%MOD)*trees)%MOD; //no of possibilities for e
        ans=(ans+sgn*no_of_e)%MOD;
    }
    ans=(ans+MOD)%MOD;
    cout<<ans;
    return;
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    precompute(MOD);
    ll t=1;
    //cin>>t;
    while(t--){
        solve();
    }
}
Idea:satyam343
We should have $$$|a_{i_j}-a_{i_{j+1}}| \leq k$$$. This seems a bit hard, as we can have $$$a_{i_{j+1}}$$$ greater than, smaller than or equal to $$$a_{i_j}$$$.
Why not solve the easier version first?
A pair $$$(l,r)$$$ is good if there exists a sequence of indices $$$i_1, i_2, \dots, i_m$$$ such that
Suppose $$$F(a,k)$$$ is the number of pairs $$$(l,r)$$$ ($$$1 \leq l \lt r \leq n$$$) that are good.
Find $$$F(a,k)$$$.
To solve the problem in hint $$$1$$$, let us define $$$dp_i$$$ as the number of indices $$$j$$$ ($$$i \lt j$$$) such that $$$(i,j)$$$ is good.
Let us move from $$$i=n$$$ to $$$1$$$. To find $$$dp_i$$$, let us first find the smallest index $$$j$$$ such that $$$a_j$$$ lies in range $$$[a_i+1,a_i+k]$$$.
We can observe that $$$dp_i=dp_j+f(i,a_i+1,a_j)$$$, where $$$f(i,l,r)$$$ gives us the number of indices $$$x$$$ among the last $$$i$$$ elements of $$$a$$$ such that $$$a_x$$$ lies in the range $$$[l,r]$$$. We can use a Fenwick tree or an ordered set to find $$$f(i,l,r)$$$.
Now let us get back to original problem.
First, let us count the number of pairs $$$(i,j)$$$ ($$$1 \leq i \leq j$$$) such that $$$a_i=a_j$$$. Assume $$$cnt$$$ is the number of such pairs.
Time for another cool claim!
For our original problem, answer is $$$cnt+F(a,k)+F(rev(a),k)$$$, where $$$rev(a)$$$ denotes the array $$$a$$$ when it is reversed.
How to prove the claim in hint $$$3$$$?
Suppose we have a good pair $$$(l,r)$$$ such that $$$a_l \neq a_r$$$. Using exchange arguments, we can claim that there always exists a sequence (say $$$s$$$) starting at index $$$l$$$ and ending at index $$$r$$$.
Thus $$$(l,r)$$$ will be counted in $$$F(a,k)$$$ if $$$a_l \lt a_r$$$ and $$$(l,r)$$$ will be counted in $$$F(rev(a),k)$$$ if $$$a_l \gt a_r$$$.
Time complexity is $$$O(n \cdot \log(n))$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
const ll MAX=1000100;
class ST{
public:
    vector<ll> segs;
    ll size=0;
    ll ID=MAX;
    ST(ll sz) {
        segs.assign(2*sz,ID);
        size=sz;
    }
    ll comb(ll a,ll b) {
        return min(a,b);
    }
    void upd(ll idx, ll val) {
        segs[idx+=size]=val;
        for(idx/=2;idx;idx/=2){
            segs[idx]=comb(segs[2*idx],segs[2*idx+1]);
        }
    }
    ll query(ll l,ll r) {
        ll lans=ID,rans=ID;
        for(l+=size,r+=size+1;l<r;l/=2,r/=2) {
            if(l&1) {
                lans=comb(lans,segs[l++]);
            }
            if(r&1){
                rans=comb(segs[--r],rans);
            }
        }
        return comb(lans,rans);
    }
};
struct FenwickTree{
    vector<ll> bit;
    ll n;
    FenwickTree(ll n){
        this->n = n;
        bit.assign(n, 0);
    }
    FenwickTree(vector<ll> a):FenwickTree(a.size()){
        ll x=a.size();
        for(ll i=0;i<x;i++)
            add(i,a[i]);
    }
    ll sum(ll r) {
        ll ret=0;
        for(;r>=0;r=(r&(r+1))-1)
            ret+=bit[r];
        return ret;
    }
    ll sum(ll l,ll r) {
        if(l>r)
            return 0;
        return sum(r)-sum(l-1);
    }
    void add(ll idx,ll delta) {
        for(;idx<n;idx=idx|(idx+1))
            bit[idx]+=delta;
    }
};
FenwickTree freq(MAX);
ST segtree(MAX);
vector<ll> dp(MAX,0);
ll solve(vector<ll> a,ll n,ll k){
    ll now=0;
    for(ll i=n-1;i>=0;i--){
        ll j=segtree.query(a[i]+1,a[i]+k);
        if(j<n){
            dp[i]=dp[j]+freq.sum(a[i]+1,a[j]);
        }
        else{
            dp[i]=0;
        }
        now+=dp[i];
        segtree.upd(a[i],i);
        freq.add(a[i],1);
    }
    for(auto it:a){
        segtree.upd(it,MAX);
        freq.add(it,-1);
    }
    return now;
}
void solve(){
    ll n,k; cin>>n>>k;
    vector<ll> a(n);
    ll ans=0;
    map<ll,ll> cnt;
    for(auto &it:a){
        cin>>it;
        cnt[it]++;
        ans+=cnt[it];
    }
    ans+=solve(a,n,k);
    reverse(a.begin(),a.end());
    ans+=solve(a,n,k);
    cout<<ans<<"\n";
    return;
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll t=1;
    cin>>t;
    while(t--){
        solve();
    }
}
1762G - Unequal Adjacent Elements
Idea:satyam343
The answer is NO only when there exists an element of $$$a$$$ which occurs more than $$$\lceil \frac{n}{2} \rceil$$$ times.
Let us say an array $$$b$$$ is beautiful if the length of $$$b$$$ is odd and the mode (say $$$x$$$) of $$$b$$$ occurs exactly $$$\lceil \frac{|b|}{2} \rceil$$$ times, where $$$|b|$$$ is the length of $$$b$$$.
If $$$a$$$ is beautiful, there exists only one valid arrangement.
We have to rearrange such that $$$x$$$ occupies all the odd indices, and place the elements at the even indices such that condition $$$2$$$ is satisfied.
To solve the original problem, we will divide the array $$$a$$$ into multiple beautiful subarrays and arrange the elements in those subarrays.
Let us continue from where we left off.
So our motivation is to break the original array into multiple beautiful subarrays and rearrange the elements within those subarrays, as mentioned before. Now, for condition $$$1$$$ to be satisfied, we should not have two adjacent subarrays such that the elements at the adjoining end positions of both subarrays (after rearranging the elements) are the same.
Here is one construction using which we can achieve our goal.
Suppose $$$l$$$ denotes the leftmost point of our concerned subarray.
If $$$a_l \neq a_{l+1}$$$, we move forward, as subarray $$$a[l,l]$$$ is good.
Otherwise, we keep moving towards the right till index $$$r$$$ (here, $$$r$$$ should be the smallest possible) such that the subarray $$$a[l,r]$$$ is beautiful and $$$a_l \neq a_{r+1}$$$. It is easy to notice the following observations about the subarray $$$a[l,r]$$$:
the length of this subarray is odd
$$$a_l$$$ occurs exactly $$$\lceil \frac{r-l+1}{2} \rceil$$$ times in this subarray
Now we can rearrange the elements of this subarray $$$a[l,r]$$$.
Do note that the subarray $$$a[1,r]$$$ satisfies both the conditions stated in the statement.
So our task is to make the subarray $$$a[r+1,n]$$$ good now.
We can now update $$$l=r+1$$$ and continue searching for the corresponding $$$r$$$ and so on.
Now it might be the case that we did not get a valid $$$r$$$ for the last search.
From here on, I assume we did not get a valid $$$r$$$ for the last search. (If we did get one, we could simply print the obtained permutation, as $$$a$$$ would satisfy both conditions.)
Assume that we had started at $$$pos=l$$$ and couldn't find $$$r$$$.
Subarray $$$a[1,pos-1]$$$ is already good.
To fix this issue, we will do a similar search that we did before.
We start from the back(from index $$$n$$$) and move towards left till index $$$m$$$ such that
Now we arrange elements of this subarray in the same fashion that we did before.
Are we done?
No. First, we must prove that we will always get some $$$m$$$.
Let us define a function $$$f(a,l,r,x)$$$ which denotes the score of the subarray $$$a[l,r]$$$ for the element $$$x$$$: $$$f(a,l,r,x)=freq_x-(r-l+1-freq_x)$$$, where $$$freq_x$$$ denotes the frequency of the element $$$x$$$ in the subarray $$$a[l,r]$$$.
It is easy to note that $$$f(a,pos,n,a_{pos}) \gt 1$$$.
(Hint: prove that $$$f(a,pos,r,a_{pos}) \neq 0$$$ for $$$pos \leq r \leq n$$$. Why? If it were $$$0$$$ for some $$$r$$$, then $$$a[pos,r-1]$$$ would be beautiful.)
Now we start from the back and move towards the left to find $$$m$$$, with $$$n$$$ as the right endpoint of the concerned subarray.
Note that $$$f(a,1,n,a_{pos}) \leq 1$$$ (why? $$$a_{pos}$$$ occurs at most $$$\lceil \frac{n}{2} \rceil$$$ times in $$$a$$$).
So while moving from $$$pos$$$ to $$$1$$$, we will indeed find an $$$m$$$ such that $$$f(a,m,n,a_{pos})=1$$$ and $$$a_{m-1} \neq a_{pos}$$$ (assuming $$$a_0=-1$$$).
Are we done?
Not yet :p. We can observe that condition $$$1$$$ is satisfied, but sometimes condition $$$2$$$ is not. For example, simulate the above approach on the array $$$a=[1,1,2,3,3]$$$.
How to fix this issue? It's pretty easy to fix this issue.
Instead of rearranging the subarray $$$a[m,n]$$$, we will rearrange the subarray $$$a[m-1,n]$$$.
How to rearrange?
Okay, time for one more hint.
What will be the answer for $$$a=[1,1,2,3,3]$$$?
$$$p=[1,4,2,5,3]$$$
You can refer to the attached code for implementation details.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define all(x) x.begin(),x.end()
void solve(){
    ll n; cin>>n;
    vector<ll> a(n+5),freq(n+5,0);
    for(ll i=1;i<=n;i++){
        cin>>a[i]; freq[a[i]]++;
    }
    for(ll i=1;i<=n;i++){
        ll till=(n+1)/2;
        if(freq[i]>till){
            cout<<"NO\n";
            return;
        }
    }
    cout<<"YES\n";
    vector<ll> ans;
    ll cur=1;
    while(cur<=n){
        ll val=a[cur];
        vector<ll> v1,v2;
        while(cur<=n){
            if(a[cur]==val){
                v1.push_back(cur);
            }
            else{
                v2.push_back(cur);
            }
            if(v1.size()==v2.size()){
                for(ll i=0;i<v1.size();i++){
                    ans.push_back(v1[i]); ans.push_back(v2[i]);
                }
                ans.pop_back();
                break;
            }
            if(cur==n){
                while(1){
                    if(ans.empty()||v1.size()==v2.size()){
                        sort(all(v1)); sort(all(v2));
                        if(v1.size()!=v2.size()){
                            ans.push_back(v1[0]);
                            v1.erase(v1.begin());
                        }
                        if(!v2.empty()&&!ans.empty()){
                            if(a[ans.back()]==a[v2[0]]){
                                swap(v1,v2);
                            }
                        }
                        for(ll i=0;i<v1.size();i++){
                            ans.push_back(v2[i]); ans.push_back(v1[i]);
                        }
                        break;
                    }
                    if(a[ans.back()]==val){
                        v1.push_back(ans.back());
                    }
                    else{
                        v2.push_back(ans.back());
                    }
                    ans.pop_back();
                }
                cur=n+1;
            }
            cur++;
        }
    }
    for(auto it:ans){
        cout<<it<<" ";
    }
    cout<<"\n";
    return;
}
int main()
{
    ll test_cases=1;
    cin>>test_cases;
    while(test_cases--){
        solve();
    }
}
Hello, Codeforces!
amurto and I are glad to invite everyone to participate in Codeforces Round 838 (Div. 2), which will be held on Dec/15/2022 17:35 (Moscow time).
This round will be rated for participants with rating strictly lower than 2100. Participants from the first division are also welcome to take part in the competition, but it will be unrated for them.
You will be given 7 problems and 2 hours and 30 minutes to solve them. One of the problems will be interactive. Make sure to read this blog and familiarize yourself with these types of problems before the round!
We would like to thank:
Score distribution is 500 — 1000 — 1750 — 2000 — 2500 — 2750 — 3500.
UPD1: Congratulations to the winners!
Overall:
Rated:
UPD2: Editorial is out.
We are looking forward to your participation!
We invite you to participate in CodeChef’s December Long Challenge, starting this Saturday, 10th December, 8 PM IST onwards.
Please note, the contest is open for 2 days, i.e., from 10th to 12th December. The Long Challenge will be rated for Div 3 & 4 coders.
Joining us on the problem setting panel are:
Setters: Utkarsh Utkarsh.25dec Gupta, Satyam satyam343, Pratik i.am.pratik Kumar, GudeGude
Testers: Nishank IceKnight1093 Suresh, Tejas tejas10p Pandey
Statement Verifier: Kanhaiya notsoloud1 Mohan
Video Editorialists: Sundar K Letscode_sundar, Garvit garvitvirmani0 Virmani, Vibhu Vibhubhatia Bhatia, Madhav jhamadhav Jha, Suraj jhasuraj01 Jha, Jwala jwalapc, Rohit ezo_tihor Oze, Adhish competitive__programmer Kancharla
Text Editorialists: Nishank IceKnight1093 Suresh
Contest Admins: Utkarsh Utkarsh.25dec Gupta, Satyam satyam343
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest.
The video editorials of the problems will be available for all users for 1 day as soon as the contest ends, after which they will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.
Hope to see you participating. Good Luck!
We invite you to participate in CodeChef’s Starters 63, this Wednesday, 2nd November, rated till 6-star Coders (i.e. for users with rating < 2500).
Time: 8 PM — 11:00 PM IST
Joining us on the problem setting panel are:
Written editorials will be available for all on discuss.codechef.com. Pro users can find the editorials directly on the problem pages after the contest.
The video editorials of the problems will be available for all users for 1 day as soon as the contest ends, after which they will be available only to Pro users.
Also, if you have some original and engaging problem ideas, and you’re interested in them being used in CodeChef's contests, you can share them here.
Hope to see you participating. Good Luck!
Thank you for participating! We apologize for problem D, which turned out to be harder than expected. Still, we hope that you liked most of the problems.
In case you found C2 tedious to implement or had many cases to deal with, I would recommend having a look at the intended solution. I think it is interesting and easy to implement.
It is easy to observe that the second operation needs to be performed at most once. Now, we just need to check $$$2$$$ cases, one in which the re-arrangement operation is used, and one in which it is not.
If the re-arrangement operation is to be used, then we just need to make the counts of $$$0$$$s and $$$1$$$s in $$$a$$$ equal to those of $$$b$$$. Without loss of generality, assume $$$a$$$ contains $$$x$$$ more $$$0$$$s than $$$b$$$; then the cost in this case will just be $$$x + 1$$$ (the extra one is the re-arrangement cost).
If the re-arrangement operation is not to be used, then we just need to make each element of $$$a$$$ equal to the corresponding element of $$$b$$$.
Finally, our answer is the smaller cost of these $$$2$$$ cases.
Time complexity is $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
void solve(){
    ll n; cin>>n;
    ll sum=0,ans=0;
    vector<ll> a(n),b(n);
    for(auto &it:a){
        cin>>it;
        sum+=it;
    }
    for(auto &it:b){
        cin>>it;
        sum-=it;
    }
    for(ll i=0;i<n;i++){
        ans+=(a[i]^b[i]);
    }
    ans=min(ans,1+abs(sum));
    cout<<ans<<"\n";
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll t; cin>>t;
    while(t--){
        solve();
    }
}
Take $$$a_0 = a_{n+1} = 1$$$.
Now take $$$b_i=lcm(a_{i-1},a_i)$$$ for $$$1 \leq i \leq n+1$$$. If $$$b$$$ gives us $$$a$$$ back after performing the $$$\gcd$$$ operations, then the answer is YES; otherwise the answer is NO. (When the answer is NO, we would get a case like $$$\gcd(b_i, b_{i + 1}) = k \cdot a_i$$$, where $$$k \gt 1$$$, for some $$$i$$$.)
Suppose $$$c$$$ is some valid array which gives us $$$a$$$. Then $$$c_i$$$ must be divisible by $$$b_i$$$. This means $$$\gcd(c_i, c_{i+1}) \geq \gcd(b_i, b_{i + 1})$$$.
So, if $$$\gcd(b_i, b_{i + 1}) \gt a_i$$$ for any $$$i$$$, we would also have $$$\gcd(c_i, c_{i+1}) \gt a_i$$$. This implies that $$$c$$$ is not valid if $$$b$$$ is not valid.
Time complexity is $$$O(n \cdot \log(b_{max}))$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
ll lcm(ll a,ll b){
    ll g=__gcd(a,b);
    return (a*b/g);
}
void solve(){
    ll n; cin>>n;
    vector<ll> a(n+2,1);
    for(ll i=1;i<=n;i++){
        cin>>a[i];
    }
    vector<ll> b(n+2,1);
    for(ll i=1;i<=n+1;i++){
        b[i]=lcm(a[i],a[i-1]);
    }
    for(ll i=1;i<=n;i++){
        if(__gcd(b[i],b[i+1])!=a[i]){
            cout<<"NO\n";
            return;
        }
    }
    cout<<"YES\n";
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll t; cin>>t;
    while(t--){
        solve();
    }
}
1736C1 - Good Subarrays (Easy Version)
Suppose $$$l[i]$$$ represents the leftmost point such that subarray $$$a[l[i],i]$$$ is good.
Notice that the array $$$l$$$ is non-decreasing.
So suppose $$$dp[i]$$$ denotes the length of longest good subarray which ends at index $$$i$$$.
Take $$$dp[0]=0$$$.
Now $$$dp[i]=min(dp[i-1]+1,a[i])$$$.
Suppose $$$a[i] \geq dp[i-1]+1$$$. We claim that $$$dp[i]=dp[i-1]+1$$$. We know $$$a[i-dp[i-1],i-1]$$$ is good. Now if we look at the array $$$b=a[i-dp[i-1],i]$$$, we have $$$b_j \geq j$$$ for $$$1 \leq j \leq dp[i-1]$$$. For $$$b$$$ to be good, the last element of $$$b$$$ (which is $$$a[i]$$$) should be greater than or equal to $$$dp[i-1]+1$$$, which is consistent with our supposition. So $$$b$$$ is good.
We can cover the case $$$a[i] \lt dp[i-1]+1$$$ similarly.
So our answer is $$$\sum_{i=1}^{n} dp[i]$$$. Time complexity is $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
void solve(){
    ll n; cin>>n;
    vector<ll> dp(n+5,0);
    ll ans=0;
    for(ll i=1;i<=n;i++){
        ll x; cin>>x;
        dp[i]=min(dp[i-1]+1,x);
        ans+=dp[i];
    }
    cout<<ans<<"\n";
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll t; cin>>t;
    while(t--){
        solve();
    }
}
1736C2 - Good Subarrays (Hard Version)
Let us continue the idea of C1.
Suppose $$$track[i]$$$ denotes $$$\sum_{j=i}^{n} dp[j]$$$ if $$$dp[i]=a[i]$$$.
We can precalculate array $$$track$$$.
Now suppose $$$a_p$$$ is changed to $$$x$$$ and $$$adp[i]$$$ denotes the length of longest good subarray which ends at index $$$i$$$ in the updated array.
It is easy to see that $$$adp[i]=dp[i]$$$ for $$$1 \leq i \lt p$$$. Now let $$$q$$$ be the smallest index greater than $$$p$$$ such that $$$adp[q]=a[q]$$$ (it might be the case that no such $$$q$$$ exists, which can be handled similarly). So we have $$$3$$$ ranges to deal with: $$$(1,p-1)$$$, $$$(p,q-1)$$$ and $$$(q,n)$$$.
Now $$$\sum_{i=1}^{p-1} adp[i]$$$ = $$$\sum_{i=1}^{p-1} dp[i]$$$(which can be stored as prefix sum).
Also $$$\sum_{i=q}^{n} adp[i]$$$ = $$$track[q]$$$.
Now we are only left with the range $$$(p,q-1)$$$. An interesting observation is that $$$adp[i]=adp[i-1]+1$$$ for $$$p \lt i \lt q$$$.
This approach can be implemented neatly in many ways (one way is to answer each query offline).
Time complexity is $$$O(n \cdot \log(n))$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
#define f first
#define s second
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll n; cin>>n;
    vector<ll> track(n+5,0),sum(n+5,0);
    vector<pair<ll,pair<ll,ll>>> trav;
    vector<ll> pref(n+5,0),dp(n+5,0);
    for(ll i=1;i<=n;i++){
        ll x; cin>>x;
        sum[i]=sum[i-1]+i; trav.push_back({x-i,{i,0}});
        dp[i]=min(dp[i-1]+1,x); pref[i]=pref[i-1]+dp[i];
    }
    ll q; cin>>q;
    vector<ll> ans(q+5,0);
    for(ll i=1;i<=q;i++){
        ll p,x; cin>>p>>x; x=min(dp[p-1]+1,x);
        trav.push_back({x-p,{p,i}});
    }
    set<ll> found; found.insert(n+1);
    sort(trav.begin(),trav.end());
    for(auto it:trav){
        if(it.s.s){
            ll r=*found.upper_bound(it.s.f);
            ans[it.s.s]=pref[it.s.f-1]+track[r]+sum[r-it.s.f]+(it.f+it.s.f-1)*(r-it.s.f);
        }
        else{
            ll r=*found.lower_bound(it.s.f); found.insert(it.s.f);
            track[it.s.f]=track[r]+sum[r-it.s.f]+(it.f+it.s.f-1)*(r-it.s.f);
        }
    }
    for(ll i=1;i<=q;i++){
        cout<<ans[i]<<"\n";
    }
}
#include <bits/stdc++.h>
#include <ext/pb_ds/tree_policy.hpp>
#include <ext/pb_ds/assoc_container.hpp>
using namespace __gnu_pbds;
using namespace std;
#define ll long long
const ll INF_MUL=1e13;
const ll INF_ADD=1e18;
#define pb push_back
#define mp make_pair
#define nline "\n"
#define f first
#define s second
#define pll pair<ll,ll>
#define all(x) x.begin(),x.end()
#define vl vector<ll>
#define vvl vector<vector<ll>>
#define vvvl vector<vector<vector<ll>>>
#ifndef ONLINE_JUDGE
#define debug(x) cerr<<#x<<" "; _print(x); cerr<<nline;
#else
#define debug(x);
#endif
const ll MOD=1e9+7;
const ll MAX=100010;
ll cal(ll n){
    ll now=(n*(n+1))/2;
    return now;
}
class ST {
public:
    vector<ll> segs;
    ll size = 0;
    ll ID = INF_ADD;
    ST(ll sz) {
        segs.assign(2 * sz, ID);
        size = sz;
    }
    ll comb(ll a, ll b) {
        return min(a, b);
    }
    void upd(ll idx, ll val) {
        segs[idx += size] = val;
        for(idx /= 2; idx; idx /= 2) segs[idx] = comb(segs[2 * idx], segs[2 * idx + 1]);
    }
    ll query(ll l, ll r) {
        ll lans = ID, rans = ID;
        for(l += size, r += size + 1; l < r; l /= 2, r /= 2) {
            if(l & 1) lans = comb(lans, segs[l++]);
            if(r & 1) rans = comb(segs[--r], rans);
        }
        return comb(lans, rans);
    }
};
void solve(){
    ll n; cin>>n;
    vector<ll> a(n+5,0),use(n+5,0),pref(n+5,0);
    for(ll i=1;i<=n;i++){
        cin>>a[i];
        use[i]=min(use[i-1]+1,a[i]); pref[i]=pref[i-1]+use[i];
    }
    ST segtree(n+5);
    auto get=[&](ll l,ll r,ll till,ll pos,ll tar){
        while(l<=r){
            ll mid=(l+r)/2;
            if(segtree.query(pos,mid)>=tar){
                till=mid,l=mid+1;
            }
            else{
                r=mid-1;
            }
        }
        return till;
    };
    vector<ll> track(n+5,0);
    for(ll i=n;i>=1;i--){
        segtree.upd(i,a[i]-i); ll till=get(i,n,i,i,a[i]-i);
        track[i]=track[till+1]+cal(a[i]+till-i)-cal(a[i]-1);
    }
    ll q; cin>>q;
    while(q--){
        ll p,x; cin>>p>>x; ll target=min(x,use[p-1]+1);
        ll till=get(p+1,n,p,p+1,target-p);
        ll ans=pref[p-1]+track[till+1]+cal(target+till-p)-cal(target-1);
        cout<<ans<<nline;
    }
    return;
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    #ifndef ONLINE_JUDGE
    freopen("input.txt", "r", stdin);
    freopen("output.txt", "w", stdout);
    freopen("error.txt", "w", stderr);
    #endif
    ll test_cases=1;
    //cin>>test_cases;
    while(test_cases--){
        solve();
    }
    cout<<fixed<<setprecision(10);
    cerr<<"Time:"<<1000*((double)clock())/(double)CLOCKS_PER_SEC<<"ms\n";
}
1736D - Equal Binary Subsequences
It is easy to see that a necessary condition for a solution to exist is that the number of $$$1$$$s in $$$s$$$ must be even. It turns out that this condition is sufficient too.
Here is one valid construction:
We make $$$n$$$ pairs of the form $$$(s[2i-1],s[2i])$$$ for $$$(1 \leq i \leq n)$$$.
Assume we have $$$x$$$ pairs in which the two elements differ and $$$n-x$$$ pairs in which they are equal.
**Claim:** $$$x$$$ must be even.
**Proof:** Assume that among the $$$n-x$$$ pairs with equal elements, there are $$$y$$$ pairs in which both elements are $$$1$$$. Then the number of $$$1$$$s in $$$s$$$ is $$$x+2 \cdot y$$$. Since the number of $$$1$$$s in $$$s$$$ is even, $$$x$$$ must also be even.
Now we select $$$x$$$ indices: exactly one index from each of the $$$x$$$ pairs with distinct elements. From the $$$i$$$-th such pair, take the index of the $$$0$$$ if $$$i$$$ is odd, and the index of the $$$1$$$ otherwise. Thus the selected characters are $$$0,1,0,1, \dots ,0,1$$$.
Now, on cyclically shifting the selected characters clockwise once, every selected character gets flipped.
Since the elements in those $$$x$$$ pairs were distinct initially, and we flipped exactly one character from each of them, both elements of each of those $$$x$$$ pairs are now equal.
Hence, in the updated $$$s$$$, $$$s[2i-1]=s[2i]$$$ for all $$$i$$$.
So, for $$$s_1$$$, we can select the characters at all odd indices.
Finally we'll have $$$s_1 = s_2$$$. Time complexity is $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;
#define ll long long
void solve(){
    ll n; cin>>n;
    string s; cin>>s;
    ll freq=0;
    vector<ll> ans;
    ll need=0;
    for(ll i=0;i<2*n;i+=2){
        if(s[i]!=s[i+1]){
            freq++;
            ans.push_back(i+1);
            if(s[i]-'0'!=need){
                ans.back()++;
            }
            need^=1;
        }
    }
    if(freq&1){
        cout<<"-1\n";
        return;
    }
    cout<<ans.size()<<" ";
    for(auto it:ans){
        cout<<it<<" ";
    }
    cout<<"\n";
    for(ll i=1;i<=2*n;i+=2){
        cout<<i<<" \n"[i+1==2*n];
    }
}
int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    ll t; cin>>t;
    while(t--){
        solve();
    }
}
As the constraints suggest, we should use dp to solve this problem.
Let's write down the original indices of the array elements that are added during this process: $$$p_1, p_2, \ldots, p_n$$$. None of the added numbers is zeroed in an optimal answer. This gives $$$p_1 \le p_2 \le \ldots \le p_n$$$, and the answer equals the sum of $$$a[p_k]$$$ ($$$1 \leq k \leq n$$$).
To get the optimal answer we'll use $$$dp[t][last][m]$$$ = the maximum score on the $$$t$$$-th turn if $$$p_t = last$$$ and we have performed $$$m$$$ swapping moves so far (the first dimension can be omitted). Note that $$$m \leq t$$$. It can be updated by considering the next index, but that takes $$$O(n^4)$$$. The most straightforward way to improve it to $$$O(n^3)$$$ is to use prefix maximums.
Here are some details.
We have only two cases:
$$$p_t=p_{t-1}$$$ — In this case, our transition is just $$$dp[t][last][m]=dp[t-1][last][m-1]+a[last]$$$
$$$p_t \gt p_{t-1}$$$ — Let us make some observations. First of all, $$$p_t \ge t$$$, so the number of swaps needed to bring $$$p_t$$$ to index $$$t$$$ is fixed: it is $$$p_t-t$$$. So $$$dp[t][last][m]=\max_{j=1}^{last-1} (dp[t-1][j][m-(last-t)])+a[last]$$$. Note that we can find $$$\max_{j=1}^{last-1} (dp[t-1][j][m-(last-t)])$$$ in $$$O(1)$$$ using prefix maximums.
Time complexity is $$$O(n^3)$$$.
#include <bits/stdc++.h>
using namespace std;
const int MAX=505;
vector<vector<vector<int>>> dp(MAX,vector<vector<int>>(MAX,vector<int>(MAX,-(int)(1e9))));
int main()
{
    int n; cin>>n;
    vector<int> a(n+5);
    for(int i=1;i<=n;i++){
        cin>>a[i];
    }
    vector<vector<int>> prefix(n+5,vector<int>(n+5,0));
    int ans=0;
    for(int i=1;i<=n;i++){
        for(int j=1;j<=n;j++){
            for(int k=0;k<=i;k++){
                if(k){
                    dp[i][j][k]=dp[i-1][j][k-1]+a[j];
                }
                if(j>=i){
                    int need=j-i;
                    if(need>k){
                        continue;
                    }
                    dp[i][j][k]=max(dp[i][j][k],prefix[k-need][j-1]+a[j]);
                }
            }
        }
        for(int j=1;j<=n;j++){
            for(int k=0;k<=i;k++){
                prefix[k][j]=max(prefix[k][j],dp[i][j][k]);
            }
        }
        for(int j=0;j<=i;j++){
            for(int k=1;k<=n;k++){
                prefix[j][k]=max(prefix[j][k],prefix[j][k-1]);
                ans=max(ans,prefix[j][k]);
            }
        }
    }
    cout<<ans;
}
Hello, Codeforces!
bkbtpout, naman1601 and I are glad to invite everyone to participate in Codeforces Round 825 (Div. 2), which will be held on Oct/10/2022 17:35 (Moscow time).
This round will be rated for participants with rating lower than 2100.
You will be given 5 problems, one of which has a subtask, and 2 hours to solve them.
We would like to thank:
The score distribution will be announced closer to the start of the round.
UPD1: Score distribution is 500 — 1000 — ( 1250 + 2000 ) — 2000 — 2500.
UPD2: Congratulations to the winners!
Overall:
Rated:
UPD3: Editorial is out.
Good luck and have fun!