BitShift 2026 Editorial
Hello everyone, here is the editorial for the BitShift 2026 contest conducted by the Programming Club, IITM. Thank you for participating in the contest. We hope you had fun solving the problems.
All of the problems were original and we would be happy to hear your feedback in the comments and in the mini-survey below.
A. Complete Subarrays
Author: aryak05
For a given $$$l$$$ (left index), let $$$r_{min}$$$ be the smallest value of $$$r$$$ (right index) such that $$$a[l…r]$$$ is a complete subarray. If $$$a[l…r]$$$ is a complete subarray for some $$$r \gt r_{min}$$$, then $$$a[{r_{min}+1}…r]$$$ must also be a complete subarray.
Since $$$a[l…r]$$$ is complete, any element $$$x$$$ in $$$a[r_{min}+1…r]$$$ has all of its occurrences in the whole array contained in $$$a[l…r]$$$, but none of them in $$$a[l…r_{min}]$$$: if there were an occurrence of $$$x$$$ in $$$a[l…r_{min}]$$$, this would violate the fact that $$$a[l…r_{min}]$$$ is complete. This implies that $$$a[r_{min}+1…r]$$$ is complete.
Let $$$dp[l]$$$ be the number of complete subarrays starting at $$$l$$$. Then the previous hint gives, $$$dp[l]=dp[r_{min}+1]+1$$$.
The only candidate indices for the start of a complete subarray are the first occurrences of elements in the array $$$a$$$. For the end of a complete subarray, the candidate indices are the last occurrences.
If the subarray started at an occurrence of an element that is not the first occurrence of that element, then it cannot be complete as it cannot contain the first occurrence of that element. Similar reasoning may be provided for the ending.
Consider two elements $$$x$$$ and $$$y$$$ with first occurrences $$$l_x$$$ and $$$l_y$$$ and last occurrences $$$r_x$$$ and $$$r_y$$$ satisfying $$$l_x \lt l_y \lt r_x \lt r_y$$$. Then $$$l_y$$$ can never be the start of a complete subarray, and if a complete subarray starts at $$$l_x$$$, then it must end at some $$$r \ge r_y$$$.
Any complete subarray starting at $$$l_y$$$ must end at some $$$r \ge r_y$$$; it would then contain the last occurrence of $$$x$$$ but not the first occurrence (which lies before $$$l_y$$$), making it not complete. So there cannot exist any complete subarray starting at $$$l_y$$$. Also, any complete subarray starting at $$$l_x$$$ must end at some $$$r \ge r_x$$$ and thus must contain $$$l_y$$$. For it to be complete, it should contain all occurrences of $$$y$$$, and thus it must end at some $$$r \ge r_y$$$.
Let us keep things simple for now and ignore all occurrences of each element except its first and last. Note that by doing so we may also pick up subarrays which are not complete. We shall invalidate such subarrays in another pass through the array.
Let us find candidate subarrays by first ignoring occurrences of elements other than the first and last.
We shall use a stack. While iterating over the elements of the array from left to right, suppose the element we are considering in this iteration is $$$x$$$.
If this is the first occurrence of $$$x$$$, we push $$$x$$$ onto the stack.
If this is the last occurrence of $$$x$$$ and the top of the stack is $$$x$$$, then we obtain a candidate complete subarray.
If this is the last occurrence of $$$x$$$ and the top of the stack is not $$$x$$$, then all the elements present on the stack above $$$x$$$ are of the type $$$y$$$ described in the observation above. These elements can never be the start of a complete subarray, so we remove them from the stack (and note that we ignore their last occurrences as well from now on).
After removing them, we set the last occurrence of $$$x$$$ to be the maximum among its current last occurrence and the last occurrences of the removed elements. This becomes the new last occurrence of $$$x$$$, and we continue iterating over the array.
We shall now eliminate all candidate subarrays found above that contain an element whose first occurrence lies before the start of the subarray.
Note that there will not be any elements whose last occurrence lies after the end of the subarray because of the way we have constructed the candidates above.
To perform this filtering, we may either use a range minimum query data structure, or observe that the subarrays obtained above are either nested or disjoint, but never partially intersecting.
Using this observation, we iterate over the array and use a stack to maintain the starting indices of all candidate subarrays that have started but not yet ended. Every time we encounter an occurrence of an element that is not its first occurrence, we pop all starting indices in the stack that are after the first occurrence of this element and invalidate the corresponding subarrays.
This leads to an $$$O(n \log n)$$$ solution.
#include <bits/stdc++.h>
using namespace std;
#define endl "\n"
#define F first
#define S second
#define ll long long
void solve() {
int n;
cin>>n;
vector<int>a(n+1);
map<int,int>first_pos;
map<int,int>last_pos;
for(int i=1;i<=n;i++){
cin>>a[i];
if(first_pos[a[i]]){
last_pos[a[i]]=i;
}
else{
first_pos[a[i]]=i;
last_pos[a[i]]=i;
}
}
stack<int>stk;
vector<int>ridx(n+1);
vector<bool>removed(n+1);
//ridx[i] contains the right index for a candidate complete subarray starting at i, if it exists
for(int i=1;i<=n;i++){
if(removed[i])continue;
if(first_pos[a[i]]==i){
if(first_pos[a[i]]==last_pos[a[i]]){
ridx[i]=i;
}
else{
stk.push(i);
}
}
else if(last_pos[a[i]]==i){
if(stk.top()==first_pos[a[i]]){
ridx[stk.top()]=i;
stk.pop();
}
else{
int max_r=i;
while(!stk.empty() && stk.top()!=first_pos[a[i]]){
max_r=max(max_r,last_pos[a[stk.top()]]);
removed[last_pos[a[stk.top()]]]=true;
stk.pop();
}
last_pos[a[i]]=max_r;
a[max_r]=a[i];
removed[max_r]=false;
}
}
}
//checking if the intervals are valid
for(int i=1;i<=n;i++){
if(ridx[i]){
if(ridx[i]!=i)stk.push(i);
continue;
}
if(!stk.empty() && ridx[stk.top()]==i){
stk.pop();
continue;
}
while(!stk.empty() && stk.top()>first_pos[a[i]]){
ridx[stk.top()]=0;
stk.pop();
}
}
vector<ll>dp(n+2);
for(int i=n;i>=1;i--){
if(ridx[i]){
dp[i]=dp[ridx[i]+1]+1;
}
}
cout<<accumulate(dp.begin()+1,dp.end(),0LL)<<endl;
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);cout.tie(NULL);
int t; cin >> t;
while (t--) {
solve();
}
return 0;
}
B. Minesweeper
Author: RighterTheOriginal (thanks to Madhav_AK for contributing to the solution)
Try solving the problem without the bomb first.
If we somehow knew the coordinates of $$$B$$$, could we make all queries safely? How?
What's the least possible information we need on the row number and column number to use the technique in Hint 2?
By the setup, it is clear that Player B has to binary search the grid for $$$x_M$$$ and $$$y_M$$$. We can find $$$x_M$$$ by searching over any column, and find $$$y_M$$$ by searching over any row. However, we need enough information to avoid ever searching a cell that has the same row or column as $$$B$$$.
To do this, Player A can encode $$$k = 1 + 2 \cdot (x_B \bmod 2) + (y_B \bmod 2)$$$ — it can be seen that this satisfies $$$1 \le k \le 4$$$ and uniquely determines the values $$$(x_B \bmod 2)$$$ and $$$(y_B \bmod 2)$$$. Since the parities of $$$x_B$$$ and $$$y_B$$$ are now known to Player B, we can use row $$$y=1$$$ (or row $$$y=2$$$) and column $$$x=1$$$ (or column $$$x=2$$$) to safely search without being in the same row and column (respectively) as $$$B$$$.
However, we need to also avoid being in the same column as $$$B$$$ while searching over a row (or being in the same row while searching over a column). Say we need to query $$$(x, y, R)$$$ and $$$y$$$ has the same parity as $$$y_B$$$. The information for $$$(y_M \gt y)$$$ is exactly the opposite of the information $$$(y_M \le y)$$$, or equivalently $$$(y_M \lt y + 1)$$$. So we can get this information by querying $$$(x, y + 1, L)$$$ instead. The case of $$$y+1 \gt n$$$ is a technical detail and can be avoided while implementing the binary search. In this way, the binary search can be carried out without ever using a cell that is in the same-parity row or column as $$$B$$$.
This will take at most $$$2 \lceil \log_2 n \rceil$$$ queries, which is at most $$$60$$$ for $$$2 \le n \le 10^9$$$, so the $$$100$$$ query limit will suffice.
It can be proved that $$$x=4$$$ is the minimum possible — see the proof below.
Let us show that $$$x_{min}=4$$$, i.e., there is no solution with $$$x \lt 4$$$ (or equivalently, $$$x \le 3$$$) which can correctly and safely determine the point $$$M$$$ in all cases. Let us assume to the contrary that there exists such a solution, and analyse its behaviour for $$$n=2$$$. There are two possibilities, for any particular configuration of $$$M$$$ and $$$B$$$:
- Situation I: For the produced value of $$$k$$$, Player B does not make any queries to the interactor. Then the only information that Player B has is $$$n$$$ and $$$k$$$, and from this they must determine the exact coordinates of $$$M$$$.
- Situation II: For the produced value of $$$k$$$, Player B makes at least one query to the interactor. Since there are $$$3$$$ cells that share a row and/or column with point $$$B$$$, they must determine which is the remaining safe cell to query (or equivalently they must determine the exact coordinates of $$$B$$$).
Let $$$S_1$$$ be the set of all values of $$$k$$$ that are in Situation I, and $$$S_2$$$ be the set of all values of $$$k$$$ that are in Situation II. Note that $$$S_1 \cap S_2 = \emptyset$$$ and $$$S_1 \cup S_2 \subseteq \{1, 2, \ldots, x\}$$$, so $$$|S_1| + |S_2| \le x \le 3$$$. Then, at least one of the following cases will necessarily be true:
- Case 1: $$$|S_1| \le 3$$$ and $$$|S_2| \le 0$$$. Then for all possible configurations of $$$M$$$ and $$$B$$$, Player B must determine the exact coordinates of $$$M$$$ from $$$k$$$. Since $$$M$$$ can have $$$4$$$ possible coordinates, we need at least $$$4$$$ distinct values of $$$k$$$, which leads to $$$|S_1| \ge 4$$$ — a contradiction.
- Case 2: $$$|S_1| \le 2$$$ and $$$|S_2| \le 1$$$. If $$$|S_2| = 0$$$, we follow the logic of Case 1. Otherwise, there is only one value of $$$k$$$ for which Player B makes any queries to the interactor, so this value of $$$k$$$ can correspond to only one particular position of $$$B$$$. The other $$$3$$$ (or all $$$4$$$, if that position is not fully covered by this value of $$$k$$$) positions of $$$B$$$ correspond to Situation I. Among the configurations with these $$$3$$$ positions of $$$B$$$, $$$M$$$ can take all $$$4$$$ different positions. So Player B must determine the coordinates of $$$M$$$, and we need at least $$$4$$$ distinct values of $$$k$$$ for this, leading to $$$|S_1| \ge 4$$$, a contradiction.
- Case 3: $$$|S_1| \le 1$$$ and $$$|S_2| \le 2$$$. If $$$|S_1| = 0$$$, we follow the logic of Case 4. Otherwise, there is only one value of $$$k$$$ for which Player B makes no queries to the interactor, so this value of $$$k$$$ can correspond to only one particular position of $$$M$$$. The other $$$3$$$ (or all $$$4$$$, if that position is not fully covered by this value of $$$k$$$) positions of $$$M$$$ correspond to Situation II. Among the configurations with these $$$3$$$ positions of $$$M$$$, $$$B$$$ can take all $$$4$$$ different positions. So Player B must determine the coordinates of $$$B$$$, and we need at least $$$4$$$ distinct values of $$$k$$$ for this, leading to $$$|S_2| \ge 4$$$, a contradiction.
- Case 4: $$$|S_1| \le 0$$$ and $$$|S_2| \le 3$$$. Then for all possible configurations of $$$M$$$ and $$$B$$$, Player B must determine the exact coordinates of $$$B$$$ from $$$k$$$. Since $$$B$$$ can have $$$4$$$ possible coordinates, we need at least $$$4$$$ distinct values of $$$k$$$, which leads to $$$|S_2| \ge 4$$$ — a contradiction.
Therefore, all cases lead to a contradiction, and so our assumption was incorrect.
Hence it is proved that $$$x_{min}=4$$$.
#include <bits/stdc++.h>
#define int long long int
#define INF (LLONG_MAX / 4)
using namespace std;
enum { BOOM, TRUE, FALSE, ERROR };
void first() {
int t;
cin >> t;
for (int tc = 0; tc < t; tc++) {
int n;
cin >> n;
int xM, yM, xB, yB;
cin >> xM >> yM >> xB >> yB;
int p1 = xB % 2, p2 = yB % 2;
int k = 1 + (2 * p1 + p2);
assert(1 <= k and k <= 4);
cout << k << endl;
}
}
void answer(int x, int y) {
cout << '!' << ' ' << x << ' ' << y << endl;
}
int ask(int x, int y, char direction) {
cout << '?' << ' ' << x << ' ' << y << ' ' << direction << endl;
string reply;
cin >> reply;
if (reply == "Boom") {
abort();
return BOOM;
}
if (reply == "Yes") {
return TRUE;
}
if (reply == "No") {
return FALSE;
}
abort();
return ERROR;
}
void binary_search_careful(int n, int curr, int &ans, int danger_parity,
bool x_coordinate) {
int start = 1, end = n, mid = (start + end) / 2;
int query = 1;
// binary search
while (start < end) {
int result;
if (mid % 2 == danger_parity) {
if (mid == end) {
// start == end, not possible
abort();
}
result = x_coordinate ? ask(mid + 1, curr, 'U')
: ask(curr, mid + 1, 'L');
query++;
if (result == TRUE) {
result = FALSE;
} else if (result == FALSE) {
result = TRUE;
}
} else {
result = x_coordinate ? ask(mid, curr, 'D') : ask(curr, mid, 'R');
query++;
}
if (result == TRUE) {
// mid < M
start = mid + 1;
mid = (start + end) / 2;
} else if (result == FALSE) {
// mid >= M
end = mid;
mid = (start + end) / 2;
} else {
abort();
}
}
ans = mid;
}
void second() {
int t;
cin >> t;
for (int tc = 0; tc < t; tc++) {
int n, k;
cin >> n >> k;
k--;
int x_danger_parity = k / 2, y_danger_parity = k % 2;
// binary search for y coordinate
int x_curr = 1;
if (x_danger_parity == 1) {
x_curr = 2;
}
int y;
binary_search_careful(n, x_curr, y, y_danger_parity, false);
// binary search for x coordinate
int y_curr = 1;
if (y_danger_parity == 1) {
y_curr = 2;
}
int x;
binary_search_careful(n, y_curr, x, x_danger_parity, true);
answer(x, y);
}
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
string run;
cin >> run;
if (run == "first") {
first();
} else if (run == "second") {
second();
} else {
abort();
}
}
C. Div. 2,3,4,5 Problem
Author: Yukash (thanks to RighterTheOriginal for contributing to the solution)
Solve the problem for $$$n = 1$$$ and for even $$$n$$$. Is it ever optimal to choose a number that immediately increases the score of the player?
For $$$n = 1$$$, let $$$x = a_0$$$. Write $$$x$$$ as $$$2^p \cdot 3^q \cdot 5^r \cdot S$$$, where $$$S$$$ is the remaining part of the prime factorization of $$$x$$$ (coprime to $$$2$$$, $$$3$$$ and $$$5$$$). Does choosing $$$k = 4$$$ change the outcome of the game depending on $$$p$$$? If so, when does it?
Let $$$c$$$ be the number of elements in the array such that, if such an element were the only one present, Alice could always win, i.e., depending on the factorization, Alice can choose $$$k = 2$$$ or $$$k = 4$$$ to win the game regardless of Bob's play. Solve for $$$c = 0$$$ first.
Players can mimic each other's strategy to keep the state of the game in their favour, but only through elements of the type mentioned in the previous hint.
Let's call a number invalid if there does not exist a $$$k~(2 \leq k \leq 5)$$$ such that the number is divisible by $$$k$$$. It is clear that for even $$$n$$$, the game always ends in a tie if played optimally.
Suppose, to the contrary, that some optimal play leads to a winner. Consider the last move made by the loser of the game. Since it is the last move, the score difference between the players must be more than one (the scores cannot be equal as $$$n$$$ is even). This implies that at some point in the game, both players had an equal score, and the loser chose an invalid number. At that point, the loser could instead have chosen a valid number, keeping the score level. Since the number of elements remaining in the array is even (both scores are equal, which means the number of elements removed is even), the same argument can be applied again. This results in a play that leads to a tie. Therefore, optimal play always results in a tie.
For odd $$$n$$$, let us solve the problem for $$$n = 1$$$ first. Write $$$a_0 = x$$$ as $$$2^p \cdot 3^q \cdot 5^r \cdot S$$$. Let us classify $$$x$$$ into different types for ease of understanding.
Type 1: For $$$p = 0$$$, the winner is fixed regardless of the players' choices. If $$$w = (q \gt 0) \oplus (r \gt 0)$$$ is $$$1$$$ then Alice wins, else Bob wins.
Type 2: For $$$p = 1$$$, or $$$p \gt 0$$$ and $$$p \bmod 2 = 0$$$, the choices of $$$k$$$ again don't change the outcome. In this case, the result is the reverse of the $$$p = 0$$$ case, i.e., if $$$w$$$ is $$$1$$$ then Bob wins, else Alice wins.
Type 3: For $$$p \bmod 2 = 1$$$ and $$$p \neq 1$$$, Alice can always force a win by choosing $$$k = 2$$$ or $$$k = 4$$$ appropriately based on $$$x$$$. This is because after choosing $$$k = 2$$$ or $$$k = 4$$$, the problem reduces to one of the two cases above. Since those cases are complementary, one of them is losing for the player about to move, and Alice can choose her move so that Bob is left in the losing case.
Now, for any odd $$$n$$$, let's consider the simpler case where there are no Type 3 elements. Note that choosing an invalid number over a valid number is never better, as discussed in the even $$$n$$$ case. Therefore, optimal play always reaches the situation where all the elements in the array are invalid, and the answer boils down to finding which player makes the last valid number invalid.
Consider any valid number $$$x$$$. If Alice can win for this $$$x$$$ in the $$$n = 1$$$ case, this number is made invalid in an odd number of turns; otherwise, in an even number of turns. Therefore, the answer in this case depends only on the parity of the number of such elements. If $$$A_w$$$ is the number of $$$x$$$ in the array such that Alice wins for $$$x$$$ in the $$$n = 1$$$ case, and $$$B_w$$$ is the counterpart where Bob wins, then the relevant parity is $$$m = (A_w + 2 \cdot B_w) \bmod 2$$$ (note that $$$B_w$$$ does not change the parity, so we can simply exclude it). If $$$m = 1$$$, then Alice wins, since Bob is then forced to start taking the invalid elements; else Bob wins.
Now we consider the case where the array has Type 3 elements as well. Note that after one turn, the Type 3 element reduces to one of Type 1 or Type 2 on choosing either $$$k = 2$$$ or $$$k = 4$$$. In other words, it either increases $$$A_w$$$ or $$$B_w$$$ based on the player's choice and it swaps the turn as well.
If there is an even number of Type 3 elements, then the answer is the same as in the case where all Type 3 elements are reduced to Type 1 (or Type 2). This is because the winner can avoid playing $$$k = 2$$$ or $$$k = 4$$$ on Type 3 elements until the other player plays such a move on one of them, at which point the winner can mimic the move on another Type 3 element and nullify the opponent's attempt to change the parity of $$$m$$$ (this is possible precisely because there is an even number of Type 3 elements). Note that we also need to account for the effective parity change of the Type 3 elements after treating them as Type 1 (or Type 2) elements, i.e., $$$A_w$$$ increases by one if $$$w = 1$$$ (or $$$w = 0$$$ for Type 2).
If there is an odd number of Type 3 elements, then Alice can always win. She can start the game by choosing one of the Type 3 elements and reducing it to some $$$x$$$ that adds to $$$A_w$$$ or $$$B_w$$$ depending on the current parity of $$$m$$$. This reduces the game to the previous case with Bob to move, and she can then follow the strategy discussed above to guarantee her win.
The final answer, therefore, can be computed from the number of Type 3 elements present in the array and the number $$$A_w$$$, as explained above.
Time complexity: $$$O(n \cdot \log(\mathrm{max}~ a_i))$$$ per testcase
#include <bits/stdc++.h>
// using namespace std;
#define int long long
#define float double
void solve(){
int n ; std::cin >> n ;
std::vector<int> arr(n) ;
for(int i = 0 ; i < n ; ++i){
std::cin >> arr[i] ;
}
int Aw = 0 ;
int Bw = 0 ;
int cAw = 0 ;
if(n%2 == 0){
std::cout << "NO" << std::endl ;
}
else{
for(int i = 0 ; i < n ; ++i){
int x = arr[i] ;
int px = 0 ; int qx = 0 ; int rx = 0 ;
while(x%2 == 0){
x = x/2 ;
px += 1 ;
}
while(x%3 == 0){
x = x/3 ;
qx += 1 ;
}
while(x%5 == 0){
x = x/5 ;
rx += 1 ;
}
int wx = (qx > 0) xor (rx > 0) ;
if(px == 0){
if(wx == 1){
Aw += 1 ;
}
else{
Bw += 1 ;
}
}
else{
if(px == 1 || px%2 == 0){
if(wx == 0){
Aw += 1 ;
}
else{
Bw += 1 ;
}
}
else{
cAw += 1 ;
if(wx == 1){
Aw += 1 ;
}
else{
Bw += 1 ;
}
}
}
}
if(cAw%2 != 0){
std::cout << "YES" << std::endl ;
}
else{
int m = (Aw + 2*Bw) % 2 ;
if(m == 1){
std::cout << "YES" << std::endl ;
}
else{
std::cout << "NO" << std::endl ;
}
}
}
}
int32_t main(){
std::ios_base::sync_with_stdio(false);
std::cin.tie(NULL); std::cout.tie(NULL);
int T = 1 ;
std::cin >> T ;
while(T--){
solve();
}
}
D. Supercharging Nodes
Author: Madhav_AK
We can afford an $$$O(nx^4)$$$ solution.
If we cut a subtree off from its parent, the rest of the tree only interacts with it via a single edge. What information about the subtree would be sufficient to describe the subtree’s contribution to the final answer?
We really only care about:
- The total marked points in the subtree (This is needed as we need the final total to be bounded by $$$x$$$)
- The number of marked points on the edge connecting the node to its parent (the only edge going out of the subtree) (This is needed because it has to comply with the parent's permitted range)
- The actual distribution of marked points among all the other edges is irrelevant to the rest of the tree
- A binary indicator for whether the node is supercharged or not (This is needed as we will proceed to solve this question by treating the regular node and the supercharged node as two different states, and we will perform all our transitions separately for both)
Consider using this DP formulation:
dp[node][supercharge][edge][subtree] = considering only the subtree of node, the number of ways such that:
- exactly $$$edge$$$ points are marked on the edge (node, parent)
- exactly $$$subtree$$$ points are marked inside node's subtree
- $$$supercharge = 1$$$ if node is being supercharged, otherwise $$$0$$$
We will perform an $$$O(nx^4)$$$ tree DP with iterative Knapsack-style merges for the transition. Maintain:
dp[node][supercharge][edge][subtree] = considering only the subtree of node, the number of ways such that:
- exactly $$$edge$$$ points are marked on the edge (node, parent)
- exactly $$$subtree$$$ points are marked inside node's subtree
- $$$supercharge = 1$$$ if node is being supercharged, otherwise $$$0$$$
For insights on why we chose this state, refer to hint $$$3$$$.
For the transition (say we are processing children one by one), the node's possible combinations with the next child depend only on the total marked points in the subtrees of all children processed so far, and on the total marked points on the edges connecting the node to those children.
For each transition, define $$$F[edge][subtree]$$$ over the children processed so far:
- $$$edge$$$ = total marked points on the edges from the node to its already processed children
- $$$subtree$$$ = total marked points in the already processed children's subtrees, including the node-to-child edges
Realize that both $$$edge$$$ and $$$subtree$$$ must be bounded by $$$x$$$; hence, for every state of the new full subtree of the current node, we have a 0/1 knapsack-style problem of counting how many states of the children's subtrees could have led to it. We therefore perform iterative knapsack-style merges for the transition.
#include <bits/stdc++.h>
#define int long long
using namespace std;
using p = pair<int,int>;
const int MOD = 1e9 + 7;
int power(int a, int b) {
a = a % MOD;
int r = 1;
while (b > 0) {
if (b % 2 == 1) {
r = (r*a)%MOD;
}
b = b/2;
a = (a*a) % MOD;
}
return r;
}
int divide(int a, int b) {
return (a * power(b, MOD-2)) % MOD;
}
int comb(int n, int r, vector<int> &factorial, vector<int> &invfactorial) {
if (r > n || r < 0) return 0;
int ans = 1;
ans = (ans * factorial[n]) % MOD;
ans = (ans * invfactorial[n-r]) % MOD;
ans = (ans * invfactorial[r]) % MOD;
return ans;
}
void dfs(int node, int x, vector<vector<p>> &adj, vector<p> &permitted_range, vector<vector<vector<vector<int>>>> &dp, vector<int> &parent, vector<int> &factorial, vector<int> &invfactorial) {
for (auto [child, canal_length]: adj[node]) {
if (parent[node] == child) continue;
parent[child] = node;
dfs(child, x, adj, permitted_range, dp, parent, factorial, invfactorial);
}
for (int supercharge = 0; supercharge <= 1; supercharge++) {
// Interative Knapsack for merging children F[edge][subtree]:
// edge = total marked points on edge from (v to child) [on already processed children]
// subtree = total marked points in subtree including node to processed children edges [on already processed children]
vector<vector<int>> F(x+1, vector<int>(x+1, 0));
F[0][0] = 1;
for (auto [child, weight] : adj[node]) {
if (child == parent[node]) continue;
vector<vector<int>> F_temp(x+1, vector<int>(x+1, 0));
for (int childsupercharge = 0; childsupercharge <= 1; childsupercharge++) {
int slots = (weight - supercharge - childsupercharge);
if (slots < 0) continue;
for (int edge = 0; edge <= x; edge++) {
for (int subtree = 0; subtree <= x; subtree++) {
int base = F[edge][subtree];
if (base == 0) continue;
// new_edge -> Proposed number of marked points on the edge from node to the current child we are iterating on
// new_subtree -> Proposed number of marked points on subtree of the current child we are iterating on
// edge -> The sum of number of marked points on node-child edges till this child
// subtree -> The total number of marked points on node's subtree till this child (so edges and subtrees of all children till this child are counted)
for (int new_edge = 0; new_edge <= min(x, slots); new_edge++) {
int possible_edge_ways = comb(slots, new_edge, factorial, invfactorial);
if (possible_edge_ways <= 0) continue;
for (int new_subtree = 0; subtree + new_subtree + new_edge <= x; new_subtree++) {
int possible_subtree_ways = dp[child][childsupercharge][new_edge][new_subtree];
if (possible_subtree_ways <= 0) continue;
int edge_now = edge + new_edge;
int subtree_now = subtree + new_subtree + new_edge;
if (edge_now > x || subtree_now > x) continue;
int extra = (((base * possible_edge_ways) % MOD) * possible_subtree_ways) % MOD;
F_temp[edge_now][subtree_now] = (F_temp[edge_now][subtree_now] + extra) % MOD;
}
}
}
}
}
F.swap(F_temp);
}
for (int parentedge = 0; parentedge <= x; parentedge++) {
for (int subtree = 0; subtree <= x; subtree++) {
int sum = 0;
for (int edge = 0; edge <= x; edge++) {
int ways = F[edge][subtree];
// Check if we satisfy bounds of nodes
auto [L, R] = permitted_range[node];
if (supercharge == 1) {L *= 2; R *= 2;}
if (parentedge + edge < L || parentedge + edge > R) continue;
sum = (sum + ways) % MOD;
}
dp[node][supercharge][parentedge][subtree] = sum;
}
}
}
}
void solve(vector<int> &factorial, vector<int> &invfactorial) {
int n, x;
cin >> n >> x;
vector<vector<p>> adj(n+1, vector<p>());
for (int i = 0; i < n-1; i++) {
int a, b, w;
cin >> a >> b >> w;
adj[a].push_back({b, w});
adj[b].push_back({a, w});
}
vector<p> permitted_range(n+1);
for (int i = 1; i <= n; i++) {
int L, R;
cin >> L >> R;
permitted_range[i] = {L, R};
}
// dp[node][supercharge][edge][subtree] = Only considering the subtree of node, Number of ways such that:
// exactly edge points are marked on the edge (node, parent)
// exactly subtree points are marked inside node's subtree
// supercharge = 1 if node is being supercharged, otherwise 0
vector<vector<vector<vector<int>>>> dp(n+1, vector<vector<vector<int>>>(2, vector<vector<int>>(x+1, vector<int>(x+1, 0))));
vector<int> parent(n+1, -1);
dfs(1, x, adj, permitted_range, dp, parent, factorial, invfactorial);
int ans = 0;
for (int supercharge = 0; supercharge <= 1; supercharge++) {
for (int subtree = 0; subtree <= x; subtree++) {
ans = (ans + dp[1][supercharge][0][subtree]) % MOD;
}
}
cout << ans % MOD << '\n';
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
vector<int> factorial(2e5+1), invfactorial(2e5+1);
factorial[0] = 1;
for (int i = 1; i <= 2e5; i++) factorial[i] = (factorial[i-1] * i) % MOD;
invfactorial[2e5] = divide(1, factorial[2e5]);
for (int i = 2e5-1; i >= 0; i--) invfactorial[i] = (invfactorial[i+1] * (i+1)) % MOD;
int t;
cin >> t;
while (t--) {
solve(factorial, invfactorial);
}
return 0;
}
E. Coin Combinations III
Author: seeeeoh (thanks to aryak05 for the solution)
Can the problem be reduced to a simple binary representation problem?
Every string can be represented as a binary mask by assigning one bit to each letter of the alphabet, and the symmetric difference then becomes a XOR operation.
For a fixed index $$$i$$$, what set of values can the element at position $$$i$$$ possibly take after arbitrary operations?
For any index $$$i$$$, every subset XOR of $$$\{a_i, a_{i+1}, \dots, a_n\}$$$ is attainable at position $$$i$$$.
Fix $$$i$$$ and any $$$j \gt i$$$. Apply the operation successively at indices
and then again at
Now apply the operation successively at indices
After doing these steps, the value $$$a_j$$$ is XORed into $$$a_i$$$ exactly once.
Finally, to revert everything else back, apply the operation successively on indices
Hence, we can independently XOR any chosen suffix element into $$$a_i$$$.
Repeating this process for all desired indices yields any subset XOR of the suffix at position $$$i$$$.
We maintain a small array (of size $$$26$$$) that stores carefully chosen values from the suffix. Each stored value has a unique highest set bit, and together they represent all XOR combinations obtainable from the suffix.
To check whether a value $$$x$$$ can be formed using the current suffix:
- Start from the highest bit and iterate downwards.
- If the current bit is set in $$$x$$$, try to eliminate it by XORing with a stored value that has the same highest set bit.
- If no such stored value exists, then $$$x$$$ cannot be formed.
- If all bits are eliminated and $$$x$$$ becomes zero, then $$$x$$$ is achievable.
To add a number $$$v$$$ to the structure:
- Process bits of $$$v$$$ from highest to lowest.
- If the highest set bit of $$$v$$$ is not present among the stored values, store $$$v$$$.
- Otherwise, eliminate that bit by XORing $$$v$$$ with the stored value and continue.
- If $$$v$$$ becomes zero, it was already representable and does not add any new information.
Time complexity: each check and insertion works in $$$O(26)$$$ time (the number of bits), so the overall complexity is $$$O(26 \cdot n)$$$.
#include <bits/stdc++.h>
using namespace std;
#define int long long
#define nl "\n"
#define fastio() ios_base::sync_with_stdio(false); cin.tie(nullptr); cout.tie(nullptr)
using vi = vector<int>;
const int MOD = 1e9 + 7;
void solve() {
int n; cin >> n;
vi a(n), b(n);
for (int i = 0; i < n; ++i) {
string s; cin >> s;
int mask = 0;
for (char c : s) {
mask |= 1LL << (c - 'a');
}
a[i] = mask;
}
for (int i = 0; i < n; ++i) {
string s; cin >> s;
int mask = 0;
for (char c : s) {
mask |= 1LL << (c - 'a');
}
b[i] = mask;
}
if(a[n-1] != b[n-1]) {
cout << "NO" << nl;
return;
}
vi basis(30, 0);
for(int i = n-1; i >= 0; --i) {
// check if (a[i] XOR b[i]) can be formed using basis (suffix a[i+1..])
int y = a[i] ^ b[i];
for (int bit = 29; bit >= 0; --bit) {
if (!(y & (1LL << bit))) continue;
if (!basis[bit]) {
cout << "NO" << nl;
return;
}
y ^= basis[bit];
}
if(y != 0) {
cout << "NO" << nl;
return;
}
// now add a[i] to basis
int x = a[i];
for (int bit = 29; bit >= 0; --bit) {
if (!(x & (1LL << bit))) continue;
if (!basis[bit]) {
basis[bit] = x;
break;
}
x ^= basis[bit];
}
}
cout << "YES" << nl;
}
int32_t main() {
fastio();
int t; cin >> t;
while (t--) {
solve();
}
return 0;
}
F. XOR XRO OXR ORX RXO ROX
Author: RighterTheOriginal
Try solving for the cases where $$$n$$$ is a power of $$$2$$$. Can we find $$$a_i - a_1$$$ for some index $$$i$$$?
When $$$n$$$ is not a power of $$$2$$$, can we solve the problem in $$$4n$$$ queries instead?
Try deducing a pair $$$a_i - a_1$$$ and $$$a_j - a_1$$$ together. Can we solve it in $$$3n$$$ queries now?
First, let's attempt to solve this problem for when $$$n$$$ is a power of $$$2$$$.
Consider a permutation $$$q$$$ with $$$q_a = 0$$$ and $$$q_b = n-1$$$ (the motivation behind these two numbers is that they are $$$000...0$$$ and $$$111...1$$$ respectively in binary, which makes their behaviour over XOR simple). Let $$$r$$$ be the same permutation with indices $$$a$$$ and $$$b$$$ swapped. If we query $$$q$$$ and $$$r$$$, we get the sums:
Subtracting, since the last term is equal for both, we get $$$s_q - s_r = (p_a \oplus 0) - (p_a \oplus (n-1)) + (p_b \oplus (n-1)) - (p_b \oplus 0) = (p_a - \sim p_a) + (\sim p_b - p_b)$$$.
Here, $$$\sim x$$$ represents the bitwise NOT of $$$x$$$, up to $$$\log_2 n$$$ bits. Note that for any integer $$$0 \le x \le n-1$$$, $$$x + \sim x = n - 1$$$ and so $$$\sim x - x = n - 1 - 2x$$$. Therefore, $$$s_q - s_r = (2p_a - (n-1)) + ((n-1) - 2p_b) = 2(p_a - p_b)$$$.
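This identity can be sanity-checked by brute force for a small power of two. Below is a standalone sketch (helper names `query_sum`, `make_perm` and `two_query_diff` are ours, not part of the intended interactor): `query_sum` stands in for one interactive query, and the returned difference should always equal $$$2(p_a - p_b)$$$.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Stand-in for one interactive query: the sum of p[i] XOR q[i] over all i.
long long query_sum(const vector<int>& p, const vector<int>& q) {
    long long s = 0;
    for (size_t i = 0; i < p.size(); i++) s += p[i] ^ q[i];
    return s;
}

// Permutation of {0, ..., n-1} with value va at index a and vb at index b;
// the remaining values are placed in increasing order.
vector<int> make_perm(int n, int a, int b, int va, int vb) {
    vector<int> q(n, -1);
    q[a] = va, q[b] = vb;
    int c = 0;
    for (int i = 0; i < n; i++) {
        if (i == a || i == b) continue;
        while (c == va || c == vb) c++;
        q[i] = c++;
    }
    return q;
}

// s_q - s_r for the two permutations described above; for n a power of 2
// this equals 2 * (p[a] - p[b]).
long long two_query_diff(const vector<int>& p, int a, int b) {
    int n = p.size();
    long long sq = query_sum(p, make_perm(n, a, b, 0, n - 1));
    long long sr = query_sum(p, make_perm(n, a, b, n - 1, 0));
    return sq - sr;
}
```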
Using this method, we can use $$$2$$$ queries to find the difference between any two elements of $$$p$$$. In $$$2(n-1)$$$ queries we can find all the differences $$$(p_2-p_1), (p_3-p_1), ..., (p_n-p_1)$$$. And since $$$p$$$ is a permutation, this is sufficient to deduce all elements of $$$p$$$.
Now, let us attempt to solve this problem for the case when $$$n$$$ is not a power of $$$2$$$. The difficulty is that we no longer have access to the number $$$111...1$$$ as an element of a permutation, as it may be greater than $$$n-1$$$. However, there are three such numbers of interest that we can still use: $$$0$$$ ($$$000...0$$$), $$$2^{\lfloor \log_2 n \rfloor}-1$$$ ($$$011...1$$$) and $$$2^{\lfloor \log_2 n \rfloor}$$$ ($$$100...0$$$).
- (Case 1) Consider permutations $$$q$$$ and $$$r$$$ with the values $$$0$$$ and $$$2^{\lfloor \log_2 n \rfloor}-1$$$ at indices $$$a$$$ and $$$b$$$, swapped in $$$r$$$ (as done above). The most significant bit of both these values is $$$0$$$, so while subtracting $$$s_q - s_r$$$, those bits get cancelled. Therefore, $$$s_q - s_r = 2 ((p_a)_{lower} - (p_b)_{lower})$$$ where the subscript $$$x_{lower} = x$$$ & $$$(011...1)$$$ represents the lower bits of $$$x$$$.
- (Case 2) Consider permutations $$$q$$$ and $$$r$$$ with the values $$$0$$$ and $$$2^{\lfloor \log_2 n \rfloor}$$$ at indices $$$a$$$ and $$$b$$$, swapped in $$$r$$$. The lower bits of both these values are $$$0$$$, so while subtracting $$$s_q - s_r$$$, those bits get cancelled. Therefore, $$$s_q - s_r = 2 ((p_a)_{upper} - (p_b)_{upper})$$$ where the subscript $$$x_{upper} = x$$$ & $$$(100...0)$$$ represents the most significant bit of $$$x$$$. This information can directly tell us whether the most significant bits of $$$p_a$$$ and $$$p_b$$$ are the same, and if not, it can also tell us which is $$$1$$$ and which is $$$0$$$.
- (Case 3) Consider permutations $$$q$$$ and $$$r$$$ with the values $$$2^{\lfloor \log_2 n \rfloor}$$$ and $$$2^{\lfloor \log_2 n \rfloor} - 1$$$ at indices $$$a$$$ and $$$b$$$, swapped in $$$r$$$. The XOR of any of these values ($$$100...0$$$ or $$$011...1$$$) with a number $$$x$$$, is equivalent to the XOR of $$$000...0$$$ or $$$111...1$$$ (respectively) with $$$x$$$, followed by XOR with $$$100...0$$$. So if it is known whether the most significant bits of $$$p_a$$$ and $$$p_b$$$ are same (and if not, which is greater), $$$s_q - s_r$$$ can tell us the value of $$$p_a - p_b$$$.
Observe that if we know the differences in (Case 1 and Case 2) or (Case 2 and Case 3), we can find the difference $$$p_a - p_b$$$ exactly. Therefore, we can use the following scheme to find the differences $$$p_i - p_1$$$ and $$$p_{i+1} - p_1$$$ simultaneously using $$$5$$$ queries (if $$$n$$$ is even, the last difference $$$p_n-p_1$$$ can be found with $$$2$$$ more queries — this is an implementation detail).
- Permutation $$$q$$$ with $$$q_1 = 000...0$$$, $$$q_i = 011...1$$$, $$$q_{i+1} = 100...0$$$.
- Permutation $$$r$$$ with $$$r_1 = 100...0$$$, $$$r_i = 011...1$$$, $$$r_{i+1} = 000...0$$$.
- Permutation $$$u$$$ with $$$u_1 = 011...1$$$, $$$u_i = 000...0$$$, $$$u_{i+1} = 100...0$$$.
- Permutation $$$v$$$ with $$$v_1 = 100...0$$$, $$$v_i = 000...0$$$, $$$v_{i+1} = 011...1$$$.
- Permutation $$$w$$$ with $$$w_1 = 000...0$$$, $$$w_i = 100...0$$$, $$$w_{i+1} = 011...1$$$.
$$$s_q-s_u$$$ (Case 1) and $$$s_v-s_w$$$ (Case 2) tell us the value of $$$p_{i}-p_{1}$$$, and $$$s_q-s_r$$$ (Case 2) and $$$s_u-s_v$$$ (Case 3) tell us the value of $$$p_{i+1}-p_{1}$$$. So using this method, we again get all the differences $$$(p_2-p_1), (p_3-p_1), ..., (p_n-p_1)$$$ and deduce all elements of $$$p$$$.
#include <bits/stdc++.h>
#define int long long int
#define f first
#define s second
#define INF (LLONG_MAX / 4)
using namespace std;
int query(vector<int> p) {
cout << '?';
for (int x : p) {
cout << ' ' << x;
}
cout << endl;
int reply;
cin >> reply;
return reply;
}
void answer(vector<int> p) {
cout << '!';
for (int x : p) {
cout << ' ' << x;
}
cout << endl;
}
int largest_2pow(int n) {
int m = 1;
while (n > 1) {
n >>= 1;
m <<= 1;
}
return m;
}
vector<int> generate_permutation(int n, set<int> skip_ind, set<int> skip_val) {
vector<int> v(n);
int c = 0;
for (int i = 0; i < n; i++) {
// skip indices
if (skip_ind.find(i) != skip_ind.end()) {
continue;
}
// skip values
while (skip_val.find(c) != skip_val.end()) {
c++;
}
v[i] = c;
c++;
}
return v;
}
void solve() {
int n;
cin >> n;
int m = largest_2pow(n);
vector<int> diff(n);
if (n == m) {
// n is a power of 2
int x = 0;
int y = m - 1;
diff[0] = 0;
for (int i = 1; i < n; i++) {
vector<int> test = generate_permutation(n, {0, i}, {x, y});
test[0] = x;
test[i] = y;
int val1 = query(test);
test[0] = y;
test[i] = x;
int val2 = query(test);
// extract value difference
diff[i] = (val2 - val1) / 2;
}
} else {
// n is not a power of 2
int x = 0;
int y = m - 1;
int z = m;
diff[0] = 0;
for (int i = 1; i + 1 < n; i += 2) {
vector<int> test = generate_permutation(n, {0, i, i + 1}, {x, y, z});
test[0] = x;
test[i] = y;
test[i + 1] = z;
int val1 = query(test);
test[0] = z;
test[i] = y;
test[i + 1] = x;
int val2 = query(test);
test[0] = y;
test[i] = x;
test[i + 1] = z;
int val3 = query(test);
test[0] = z;
test[i] = x;
test[i + 1] = y;
int val4 = query(test);
test[0] = x;
test[i] = z;
test[i + 1] = y;
int val5 = query(test);
// check msb difference of first element
int greater1 = (val4 - val5) / (2 * m);
// extract first element difference
diff[i] = (val3 - val1) / 2;
diff[i] += m * greater1;
// check msb difference of second element
int greater2 = (val2 - val1) / (2 * m);
// extract second element difference
diff[i + 1] = (val3 - val4) / 2;
diff[i + 1] += 2 * m * greater2;
}
if (n % 2 == 0) {
// handle last element
vector<int> test = generate_permutation(n, {0, n - 1}, {x, y});
test[0] = x;
test[n - 1] = y;
int val1 = query(test);
test[0] = y;
test[n - 1] = x;
int val2 = query(test);
test = generate_permutation(n, {0, n - 1}, {x, z});
test[0] = x;
test[n - 1] = z;
int val3 = query(test);
test[0] = z;
test[n - 1] = x;
int val4 = query(test);
int greater = (val4 - val3) / (2 * m);
diff[n - 1] = (val2 - val1) / 2;
diff[n - 1] += m * greater;
}
}
int offset = INF;
for (int i = 0; i < n; i++) {
offset = min(offset, diff[i]);
}
for (int i = 0; i < n; i++) {
diff[i] -= offset;
}
answer(diff);
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int t = 1;
cin >> t;
for (int tc = 0; tc < t; tc++) {
solve();
}
}
G. Christmas Packing Service
Author: ScepticallySam
Reduce the problem to segmenting the array into pieces such that sorting each segment internally sorts the entire array; we want the maximum number of such segments.
If there exist indices $$$i \lt j$$$ such that $$$a[i] \gt a[j]$$$, can $$$i$$$ and $$$j$$$ belong to different segments?
For a cut between positions $$$k$$$ and $$$k+1$$$, what condition involving $$$\max(a[0...k])$$$ and $$$\min(a[k+1...n-1])$$$ must hold?
We can simplify the problem to finding segments in the array such that sorting these segments will sort the entire array, but we need to use the maximum number of robots to do so.
Observation: Consider two elements at index $$$a$$$ and index $$$b$$$, where $$$a \lt b$$$.
If the element at $$$a$$$ is greater than the element at $$$b$$$, the only way of getting the correct relative ordering is to put both of them in the same segment.
From the observation, we can come to the following conclusion: The maximum element in segment $$$k$$$ should be less than or equal to the minimum element in segment $$$k+1$$$.
If the above were not true, then the maximum element in segment $$$k$$$ (say at index $$$i$$$) and the minimum element in segment $$$k+1$$$ (say at index $$$j$$$) would need to have a swap in relative ordering, which is not possible since they both are in different segments, hence a contradiction!
Now, with the conclusion above, we can look into the algorithm.
Idea: Find all pairs of consecutive indices between which the conclusion above allows a cut.
To do that, we first calculate the prefix maximum and the suffix minimum of the array.
Now, for every index, we check if the prefix maximum is less than or equal to the suffix minimum of the next element. If it is, then we can put a partition between those two elements.
Let us take an example and compare it with its sorted variant.
1 4 5 2 8 6 7 9
1 2 4 5 6 7 8 9
We can segment it as follows
1 | 4 5 2 | 8 6 7 | 9
1 | 2 4 5 | 6 7 8 | 9
min:1 | 2 2 2 | 6 6 7 | 9
max:1 | 4 5 5 | 8 8 8 | 9
Time Complexity: $$$O(n)$$$ per test case.
#include <iostream>
#include <vector>
using namespace std;
int segments(vector<int>& arr) {
int n = arr.size();
vector<int> prefix_max(n,0);
vector<int> suffix_min(n,0);
prefix_max[0] = arr[0];
for(int x = 1 ; x < n ; x++) prefix_max[x] = max(prefix_max[x-1],arr[x]);
suffix_min[n-1] = arr[n-1];
for(int x = n-2 ; x >= 0 ; x--) suffix_min[x] = min(suffix_min[x+1],arr[x]);
int segment_count = 1;
for(int x = 1 ; x < n ; x++) {
if(suffix_min[x] >= prefix_max[x-1]) {
segment_count++;
}
}
return segment_count;
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin >> t;
while(t--) {
int n;
cin >> n;
vector<int> v(n);
for(int x = 0 ; x < n ; x++) {
cin >> v[x];
}
cout << segments(v) << endl;
}
}
H. To be Updated
Let’s consider a special case where all the numbers in the array are square-free (i.e., no prime in the prime factorization of any number appears more than once). Can you solve it now?
For a query of type 1, if $$$g$$$ doesn’t divide $$$a[i]$$$, the answer is trivially $$$0$$$. Otherwise, we have to find the number of multiples $$$x$$$ of $$$g$$$ in the array such that $$$\gcd(a[i],x)=g$$$.
We maintain an array cnt where cnt[i] is the number of elements in the array which are multiples of $$$i$$$ and try inclusion-exclusion.
Let’s temporarily subtract $$$1$$$ from cnt of all the factors of $$$a[i]$$$.
We want to count elements $$$a_j$$$ such that $$$\gcd(a_i,a_j)=g$$$.
After fixing $$$g$$$, this means $$$a_j$$$ must be divisible by $$$g$$$, but must share no prime factor with $$$a_i/g$$$. Let the distinct prime factors of $$$a_i/g$$$ be $$$p_1, p_2, \ldots, p_k$$$.
We start with all multiples of $$$g$$$, which is cnt[g]. From these, we subtract all elements divisible by $$$g \cdot p_1,g \cdot p_2,…,$$$ since such elements share at least one prime factor with $$$a_i/g$$$ and hence cannot have gcd equal to $$$g$$$. However, an element divisible by $$$g \cdot p_1 \cdot p_2$$$ gets subtracted twice (once for $$$p_1$$$ and once for $$$p_2$$$), so we must add it back once. Now elements divisible by $$$g \cdot p_1 \cdot p_2 \cdot p_3$$$ are added one extra time and must be subtracted again. This alternating subtraction and addition continues for all combinations of prime factors.
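The alternating sum above is just an inclusion-exclusion over the subsets of $$$\{p_1, \ldots, p_k\}$$$. A minimal self-contained sketch follows (helper names are ours; `cnt` is recomputed naively here, whereas the solution maintains it incrementally and temporarily excludes $$$a_i$$$ itself):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Distinct prime factors of x by trial division (fine for illustration).
vector<long long> distinct_primes(long long x) {
    vector<long long> ps;
    for (long long p = 2; p * p <= x; p++) {
        if (x % p == 0) {
            ps.push_back(p);
            while (x % p == 0) x /= p;
        }
    }
    if (x > 1) ps.push_back(x);
    return ps;
}

// Number of elements of `a` whose gcd with `ai` is exactly g (g must divide ai).
long long count_exact_gcd(const vector<long long>& a, long long ai, long long g) {
    auto cnt = [&](long long d) {  // naive count of multiples of d in a
        long long c = 0;
        for (long long v : a) c += (v % d == 0);
        return c;
    };
    vector<long long> ps = distinct_primes(ai / g);
    int k = ps.size();
    long long ans = 0;
    // Inclusion-exclusion over all subsets of the distinct primes of ai/g.
    for (int mask = 0; mask < (1 << k); mask++) {
        long long d = g;
        for (int j = 0; j < k; j++)
            if (mask >> j & 1) d *= ps[j];
        ans += (__builtin_popcount(mask) % 2 ? -1 : 1) * cnt(d);
    }
    return ans;
}
```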
Coming back to the original problem, can we apply the same inclusion-exclusion idea from before?
This time, instead of considering all the factors of $$$a_i/g$$$, we only need to consider the factors of its square-free part, i.e. of $$$a_i/g$$$ reduced to the product of its distinct prime factors.
Please refer to the hints above.
For a query of type 2, maintaining the cnt array is straightforward. When $$$a[i]$$$ is updated, we iterate over all the factors of the old value and decrement their counts by $$$1$$$, and then iterate over all the factors of the new value and increment their counts by $$$1$$$. This ensures that cnt[d] always stores the number of elements divisible by $$$d$$$.
For a query of type 1, we proceed exactly as described in the hints. If $$$g$$$ does not divide $$$a[i]$$$, the answer is $$$0$$$.
Otherwise, we iterate over all factors $$$f$$$ of the square-free part of $$$a_i/g$$$ and compute the answer using inclusion–exclusion as
$$$\text{ans} = \sum_{f} (-1)^{\omega(f)} \cdot \text{cnt}[g \cdot f],$$$
where $$$\omega (f)$$$ denotes the number of distinct prime factors of $$$f$$$.
All required information can be efficiently precomputed. We use a sieve to compute the smallest prime factor of every number, which allows us to build the reduced (square-free) form of each number and enumerate its factors. Since the number of divisors of any integer $$$\leq 10^6$$$ is at most about $$$250$$$, the total number of operations is well within limits (at most about $$$2 \cdot 10^8$$$ in a very loose upper bound), and the solution comfortably fits within the time limit.
Queries of type 1 can be optimized further, but the approach described above is simpler and sufficient for the editorial.
Time Complexity: $$$O(q \cdot 750)$$$ per test case
Note: Some ideas from number theory involving the Möbius function may also be used (in effect, they do what we have done above from first principles).
#include <bits/stdc++.h>
using namespace std;
#define endl "\n"
#define F first
#define S second
#define ll long long
int N=1'000'000;
vector<vector<int>>factors(N+5);
//contains all factors of i excluding 1
vector<int>sieve(N+5);
//contains smallest "prime" factor
vector<pair<int,int>>primeproduct(N+5);
//contains the number reduced to its product of distinct primes and the count of distinct primes
vector<int>cnt(N+5,0);
//cnt[i] is the number of elements in the array which are divisible by i
void solve(){
int n,q;
cin>>n>>q;
vector<int>a(n);
for(int i=0;i<n;i++){
cin>>a[i];
cnt[1]++;
for(int f:factors[a[i]]){
cnt[f]++;
}
}
while(q--){
int t;
cin>>t;
if(t==2){
int i,x;
cin>>i>>x;
--i;
for(int f:factors[a[i]]){
cnt[f]--;
}
a[i]=x;
for(int f:factors[x]){
cnt[f]++;
}
}
else{
int i,g;
cin>>i>>g;
--i;
if(a[i]%g){
cout<<0<<endl;
}
else{
for(int f:factors[a[i]]){
cnt[f]--;
}
cnt[1]--;
int ans=cnt[g];
int leftover=primeproduct[a[i]/g].F;
for(int f:factors[leftover]){
if(primeproduct[f].S%2){
ans-=cnt[f*g];
}
else{
ans+=cnt[f*g];
}
}
cout<<ans<<endl;
for(int f:factors[a[i]]){
cnt[f]++;
}
cnt[1]++;
}
}
}
for(int i=0;i<n;i++){
for(int f:factors[a[i]]){
cnt[f]=0;
}
}
cnt[1]=0;
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);cout.tie(NULL);
sieve[1]=1;
for(int i=2;i<=N;i++){
bool is_prime=(sieve[i]==0);
for(int j=i;j<=N;j+=i){
if(!sieve[j] && is_prime)sieve[j]=i;
factors[j].push_back(i);
}
}
primeproduct[1]={1,0};
for(int i=2;i<=N;i++){
if((primeproduct[i/sieve[i]].F)%sieve[i]){
primeproduct[i].F=primeproduct[i/sieve[i]].F*sieve[i];
primeproduct[i].S=primeproduct[i/sieve[i]].S+1;
}
else{
primeproduct[i].F=primeproduct[i/sieve[i]].F;
primeproduct[i].S=primeproduct[i/sieve[i]].S;
}
}
int t; cin >> t;
while (t--) {
solve();
}
return 0;
}
I. Terrorium
Author: Madhav_AK
Our problem involves maximizing beauty while some characteristic features of the stickers are constrained. However, the UV cream poses an annoying quirk: it is not directly a property of the stickers. Can we decouple the UV problem from the IR/Visible problem?
For a fixed area, we can calculate the maximum UV protection that we can get by applying UV cream on that area using the standard 0/1 Knapsack.
This means we can calculate the minimum area needed. Now our problem has reduced to maximizing beauty such that $$$\text{IR} \leq I$$$, $$$\text{Vis} \leq V$$$ and $$$\text{area } \geq \text{minArea}$$$ (all characteristic features of the stickers)
First we determine the minimum sticker area required to satisfy UV constraints. In order to block off at least $$$U$$$ units of UV radiation, there exists some minimum area of stickers below which it would be impossible to distribute our UV cream sufficiently. This minimum area can be computed using a standard 0/1 Knapsack formulation.
Once we have calculated the minimum area needed, our problem reduces to maximizing beauty such that $$$\text{IR} \leq I$$$, $$$\text{Vis} \leq V$$$ and $$$\text{area } \geq \text{minArea}$$$ (all characteristic features of the stickers).
Define a 3D DP using the three parameters at hand:
$$$dp[i][j][k] =$$$ maximum beauty you can attain with $$$i$$$ IR blockage, $$$j$$$ Visible blockage and total sticker area $$$k$$$.
Consider including the sticker $$$s$$$. For any given state $$$dp[i][j][k]$$$, adding this sticker takes us to the state $$$dp[i+\text{ir}[s]][j+\text{vis}[s]][k+\text{area}[s]]$$$ and increments the beauty by $$$b[s]$$$. Thus, we can loop through the stickers one by one and consider each one's effect on every legal state. We must be careful to process the states in reverse order so that a sticker's effect is not applied twice.
For further optimization, the area dimension can be saturated at $$$\text{minArea}$$$.
Time Complexity: $$$O(m \cdot A_1 + n \cdot I \cdot V \cdot A_1)$$$ or $$$O(m^2 \cdot \mathrm{max}(\text{u}[i]) + n \cdot I \cdot V \cdot A_1)$$$ or a similar complexity, depending on the implementation.
#include <bits/stdc++.h>
using namespace std;
void solve() {
int n, m, I, V, U;
cin >> n >> m >> I >> V >> U;
int A = 0;
vector<int> b(n+1), ir(n+1), vis(n+1), area(n+1), u(m+1), au(m+1);
for (int i = 1; i <= n; i++) {
cin >> b[i] >> ir[i] >> vis[i] >> area[i];
A += area[i];
}
for (int i = 1; i <= m; i++) {
cin >> u[i] >> au[i];
}
// Regular Knapsack
// dp1[i][j] = Max UV protection you can get using the first i bottles, and using exactly j units of area. (-1 if impossible) (1 indexed)
vector<vector<int>> dp1(m+1, vector<int>(A+1, -1));
dp1[0][0] = 0;
for (int i = 1; i < m+1; i++) {
for (int j = 0; j < A+1; j++) {
dp1[i][j] = dp1[i-1][j]; // Not using the ith bottle
if (j-au[i] >= 0 && dp1[i-1][j-au[i]] != -1) dp1[i][j] = max(dp1[i][j], dp1[i-1][j-au[i]] + u[i]); // Using the ith bottle
}
}
// Min decorative sticker area required to fulfil our UV requirement
int min_area = -1;
for (int a = 0; a < A+1; a++) {
if (dp1[m][a] >= U) {
min_area = a;
break;
}
}
if (min_area == -1) {
cout << -1 << '\n';
return;
}
// 3D Knapsack (sort of)
// dp[i][j][k] = Max beauty you can attain with i IR blockage, j Visible blockage, and having total area k.
int best_beauty = (U == 0 ? 0 : -1);
vector<vector<vector<int>>> dp(I+1, vector<vector<int>>(V+1, vector<int>(min_area+1, -1)));
dp[0][0][0] = 0;
for (int sticker = 1; sticker <= n; sticker++) {
int s = sticker;
// Note here that the three inner for loops (i,j,k) have to reverse to ensure that a sticker doesn't get counted twice
for (int i = I; i >= 0; i--) {
for (int j = V; j >= 0; j--) {
for (int k = min_area; k >= 0; k--) {
if (dp[i][j][k] != -1 && i+ir[s] <= I && j+vis[s] <= V) {
int i_ = i+ir[s];
int j_ = j+vis[s];
int k_ = min(k+area[s], min_area);
dp[i_][j_][k_] = max(dp[i_][j_][k_], dp[i][j][k] + b[s]);
if (k_ == min_area) best_beauty = max(best_beauty, dp[i_][j_][k_]);
}
}
}
}
}
cout << best_beauty << '\n';
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(NULL);
int t;
cin >> t;
while (t--) {
solve();
}
return 0;
}
J. One Percent
Author: RighterTheOriginal
Will greedy work?
How do we efficiently find the number of coins of a certain type needed?
To construct an optimal subset of coins, we can greedily choose the highest-value coins one by one. However, going one by one will be inefficient (as it will take up to $$$t \cdot n \cdot c_i \le 100 \cdot 1000 \cdot 1000 = 10^8$$$ such iterations).
Instead, we can greedily process each type of coin, highest value first. While processing type $$$i$$$, let the current number of coins in the subset be $$$x$$$, and the current sum of their values be $$$y$$$. Let $$$dx$$$ be the minimum number of coins of type $$$i$$$ that need to be added to the subset to meet the condition, and let $$$dy$$$ be the value of those $$$dx$$$ coins. Clearly $$$dy = v_i \cdot dx$$$. Then, the condition is $$$C \cdot (y + v_i \cdot dx) \ge (C - x - dx) \cdot V$$$, where $$$C$$$ and $$$V$$$ are the total count and total value of all coins.
Since $$$dx$$$ must be an integer, $$$dx = \left\lceil \dfrac{CV - Vx - Cy}{v_i C + V} \right\rceil$$$.
So we add $$$\min(dx, c_i)$$$ coins of type $$$i$$$ to the subset (and continue if the condition is not met).
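The closed form follows by rearranging the condition to $$$dx \cdot (v_i C + V) \ge CV - Vx - Cy$$$ and taking the ceiling. A small sketch cross-checking it against a direct loop (function names are ours; the arguments mirror the solution code's variables):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Smallest dx >= 0 with C*(y + vi*dx) >= (C - x - dx)*V, via the closed form.
long long min_coins_formula(long long C, long long V, long long x, long long y, long long vi) {
    long long numerator = C * V - V * x - C * y;
    long long denominator = vi * C + V;
    if (numerator <= 0) return 0;                       // condition already met
    return (numerator + denominator - 1) / denominator; // integer ceiling
}

// The same quantity by direct search, for cross-checking on small inputs.
long long min_coins_brute(long long C, long long V, long long x, long long y, long long vi) {
    long long dx = 0;
    while (C * (y + vi * dx) < (C - x - dx) * V) dx++;
    return dx;
}
```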
Note that the constraints are $$$n \le 10^3, c_i \le 10^3, v_i \le 10^3$$$. So $$$\displaystyle{C = \sum_{i=1}^{n}c_i \le 10^6}$$$ and $$$V = \sum_{i=1}^{n}c_i \cdot v_i \le 10^9$$$. Therefore, all terms in the formula for $$$dx$$$ above are bounded by $$$10^{18}$$$ and will fit in 64-bit integer types. Thus, we can safely compute $$$dx$$$ in integer arithmetic without the risk of floating-point errors (although it's possible that a floating-point solution will also pass — we have not added special test cases to combat that as the main idea of the problem is fulfilled either way).
The time complexity is $$$O(n \log n)$$$ per test case (dominated by the sort), which is sufficient for $$$\sum_{t} n \le 10^5$$$.
#include <bits/stdc++.h>
#define int long long int
#define INF (LLONG_MAX / 4)
using namespace std;
#define value first
#define count second
void solve() {
int n;
cin >> n;
vector<pair<int, int>> arr(n);
for (int i = 0; i < n; i++) {
cin >> arr[i].count;
}
for (int i = 0; i < n; i++) {
cin >> arr[i].value;
}
int c = 0, v = 0;
for (int i = 0; i < n; i++) {
c += arr[i].count;
v += arr[i].count * arr[i].value;
}
sort(arr.begin(), arr.end(), greater<>());
// top x coins have y >= (1-x/c)*v = (c-x)*v/c value in total
// then c*y >= (c-x)*v
int x = 0, y = 0;
for (int i = 0; i < n; i++) {
int numerator = c * v - v * x - c * y;
int denominator = arr[i].value * c + v;
// integer ceil
int dx = (numerator + denominator - 1) / (denominator);
dx = min(dx, arr[i].count);
assert(0 <= dx and dx <= arr[i].count);
x += dx;
y += dx * arr[i].value;
if (c * y >= (c - x) * v) {
break;
}
}
assert(0 <= x and x <= c);
cout << x << endl;
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int t = 1;
cin >> t;
for (int tc = 0; tc < t; tc++) {
solve();
}
}
K. Evo XOR
Author: pvpwarrior
Since XOR is addition without carry, the update rule is linear over $$$\{0, 1\}$$$. Can we apply the principle of superposition in this case? If you know the pattern generated by a single active cell at index $$$0$$$, how can you find the pattern generated by multiple initially active cells at different indices?
Do we really need to simulate all $$$10^{18}$$$ cells? At time $$$T$$$, will an active cell affect all cells or only a subset?
Observe that $$$ C_i^{(t+1)} = C_{i-1}^{(t)} \oplus C_{i+1}^{(t)} $$$ acts like addition modulo 2. Hence it is a linear relation. This means that the state of a cell $$$X$$$ is simply the total XOR of the states generated by each initially active cell $$$P_{i}$$$ individually.
If a function $$$S(x_{r}, t)$$$ gives us the state of the cell at relative distance $$$x_{r}$$$ and time $$$t$$$, then for a query $$$(X, T)$$$, the answer is $$$\bigoplus_{i=1}^{K} S(X - P_{i}, T)$$$.
A single source's effect propagates as a triangle, at a speed of $$$1$$$ cell per time step. In other words, at time $$$T$$$, a source at position $$$0$$$ can only affect indices in the range $$$[-T, T]$$$.
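Both facts, linearity and the finite light cone, can be checked by brute force on a small window. A standalone sketch (our helper names; cells outside the window are treated as $$$0$$$, so the window must be wide enough that the triangle never touches its boundary):

```cpp
#include <bits/stdc++.h>
using namespace std;

// One step of the rule C_i' = C_{i-1} XOR C_{i+1} on a finite window
// (cells outside the window are treated as 0).
vector<int> step(const vector<int>& c) {
    int n = c.size();
    vector<int> nx(n, 0);
    for (int i = 0; i < n; i++) {
        int l = (i > 0) ? c[i - 1] : 0;
        int r = (i + 1 < n) ? c[i + 1] : 0;
        nx[i] = l ^ r;
    }
    return nx;
}

// Evolve a configuration for t steps.
vector<int> evolve(vector<int> c, int t) {
    while (t--) c = step(c);
    return c;
}
```

Evolving two sources together and XOR-ing two single-source evolutions gives identical windows, which is exactly the superposition used above.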
Because $$$T$$$ is small, we can precompute the evolution of a single active cell located at index $$$0$$$. We can store this in a table $$$S[t][\text{distance}]$$$. The size of this table is about $$$1000 \times 2000$$$ and it can be computed in $$$O(T^2)$$$ time.
For each query $$$(X, T)$$$, iterate through every initial position $$$P_{i}$$$ and calculate the relative distance $$$x_{r} = X - P_{i}$$$. If $$$|x_{r}| \gt T$$$, the source is too far away and can be ignored. Otherwise, look up $$$S[T][x_{r}]$$$ from the precomputed table and XOR it into the answer. This is done in $$$O(K)$$$ time per query.
Time complexity: $$$O(T^2 + Q \cdot K)$$$
#include <bits/stdc++.h>
using namespace std;
#define fastio ios_base::sync_with_stdio(false); cin.tie(NULL)
#define ll long long
#define vi vector<ll>
ll solve() {
ll k, q;
unordered_map<ll, ll> dp[1009];
dp[0][0] = 1;
for (ll t = 0; t < 1008; ++t) {
ll start = -(t+1);
ll end = (t+1);
for (int i = start; i <= end; ++i) {
dp[t+1][i] = dp[t][i-1] ^ dp[t][i+1];
}
}
cin >> k >> q;
vi p(k,0);
for (int i = 0; i < k; ++i)
{
cin >> p[i];
}
while(q--){
ll x, t, ans =0;
cin >> x >> t;
for (int i = 0; i < k; ++i)
{
ll relx = x-p[i];
if(abs(relx)<=t)ans^=dp[t][relx];
}
cout << ans << "\n";
}
return 0;
}
int main(){
fastio;
ll t= 1;
// cin >> t;
for (int i = 0; i < t; ++i){
// cerr << "Case #" << i+1 << ": ";
solve();
// cout << "\n";
}
return 0;
}
L. Incomplete Subarrays
Author: Ovetsar_ilish
If it is possible to achieve answer $$$X$$$, is it also possible to achieve $$$X - 1$$$? Observe that the search space is monotonic, so we can binary search on the answer.
Now for a particular $$$X$$$, since checking through every subarray is not optimal, transform the problem into a prefix-sums problem.
Use a monotonic deque to keep set of valid indices satisfying certain prefix sum inequality in the window, to reduce the time complexity of the solution.
For the sake of simplicity, we will use 0-based indexing.
Say you are checking for a particular $$$X$$$. Define two auxiliary arrays pref_cost and pref_cum. They are defined as follows.
pref_cost[i + 1] = pref_cost[i] + ((a[i] < x) ? (x - a[i]) : 0)
pref_cum[i + 1] = pref_cum[i] + ((a[i] >= x) ? 1 : -1)
So now if pair of indices $$$(j, i)$$$ satisfy the constraints in the problem where $$$j \lt i$$$, in terms of the auxiliary arrays we defined, it is easy to see the constraint transforms to :
1. pref_cost[i] - pref_cost[j] <= k
2. pref_cum[i] - pref_cum[j] > 0
Rewriting $$$2$$$, we get pref_cum[j] < pref_cum[i]
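Before bringing in the deque, the transformed check can be written as an $$$O(n^2)$$$ reference, which is handy for stress-testing the deque-optimized solution (a sketch; `exists_valid` is our name):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Does some subarray a[j..i-1] (0-based, i > j) satisfy both transformed
// constraints for threshold x and budget k? Checked directly via the two
// prefix arrays, over all O(n^2) pairs (j, i).
bool exists_valid(const vector<long long>& a, long long x, long long k) {
    int n = a.size();
    vector<long long> cost(n + 1, 0), cum(n + 1, 0);
    for (int i = 0; i < n; i++) {
        cost[i + 1] = cost[i] + (a[i] < x ? x - a[i] : 0);
        cum[i + 1]  = cum[i]  + (a[i] >= x ? 1 : -1);
    }
    for (int i = 1; i <= n; i++)
        for (int j = 0; j < i; j++)
            if (cost[i] - cost[j] <= k && cum[j] < cum[i])
                return true;
    return false;
}
```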
Now use a monotonic deque to update the window of valid indices $$$j$$$ sorted by pref_cum values. We are interested in the deque having property that indices in it are sorted with respect to the pref_cum values due to the constraint pref_cum[j] < pref_cum[i].
Time Complexity: $$$O(N \cdot \log (\max(A)))$$$ per test case.
#include <bits/stdc++.h>
using namespace std;
typedef pair<int, int> pi;
typedef long long ll;
#define MOD (int)(1e9 + 7)
bool check(int x, int n, ll k, const vector<ll> &a){
vector<ll> pref_cost(n + 1, 0);
vector<ll> pref_cum(n + 1, 0);
for(int i = 0; i < n; i++){
pref_cost[i + 1] = pref_cost[i] + ((a[i] < x) ? (x - a[i]) : 0);
pref_cum[i + 1] = pref_cum[i] + ((a[i] >= x) ? 1 : -1);
}
deque<ll> dq;
int l = 0;
for(int i = 1; i <= n; i++){
while(l < i && pref_cost[i] - pref_cost[l] > k){
l++;
}
if(i >= 2){
while(!dq.empty() && pref_cum[dq.back()] >= pref_cum[i - 2]){
dq.pop_back();
}
dq.push_back(i - 2);
}
while(!dq.empty() && dq.front() < l){
dq.pop_front();
}
if(!dq.empty() && pref_cum[dq.front()] < pref_cum[i]){
return true;
}
}
return false;
}
void solve(){
int n;
ll k;
cin >> n >> k;
vector<ll> a(n);
ll mx = 0;
for(int i = 0; i < n; i++){
cin >> a[i];
mx = max(mx, a[i]);
}
int low = 1;
int high = mx;
int ans = 0;
while(low <= high){
int mid = (low + high) / 2;
if(check(mid, n, k, a)){
ans = mid;
low = mid + 1;
} else {
high = mid - 1;
}
}
cout << ans << "\n";
}
int main() {
ios::sync_with_stdio(false);
cin.tie(nullptr);
int T = 1;
cin >> T;
while(T--){
solve();
}
return 0;
}
M. EMX EXM MXE XEM XME
Author: RighterTheOriginal (thanks to aryak05 for contributing to the solution)
How can we use the queries to test whether some $$$a_i=k$$$ for some $$$i$$$ and $$$k$$$?
How can we use the queries to test whether $$$k$$$ is present in a subset of $$$a$$$?
Using Hint 2, we can binary search for the index of some $$$k$$$ in $$$a$$$. What if there are multiple such indices?
First, observe that we can test whether the set {$$$a_{p_i} \mid p_i \in P$$$} (for a chosen index set $$$P$$$) contains some value $$$0 \le k \le n$$$. This is done by querying $$$P$$$ and $$$Q = \{0, 1, \ldots, k-1\}$$$: if $$$k \in$$$ {$$$a_{p_i} \mid p_i \in P$$$}, then the MEX will be greater than $$$k$$$ (at least $$$k+1$$$). Otherwise, the MEX will be exactly $$$k$$$.
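The membership test can be simulated offline to convince yourself of it. A standalone sketch with no interaction (`mex` and `contains_via_mex` are our names):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Smallest non-negative integer not present in v.
int mex(const vector<int>& v) {
    set<int> s(v.begin(), v.end());
    int m = 0;
    while (s.count(m)) m++;
    return m;
}

// True iff value k occurs at one of the indices in P (0-based) of array a,
// decided only from the MEX of {a[p] : p in P} together with {0, ..., k-1}.
bool contains_via_mex(const vector<int>& a, const vector<int>& P, int k) {
    vector<int> v;
    for (int p : P) v.push_back(a[p]);
    for (int i = 0; i < k; i++) v.push_back(i);  // the set Q = {0, ..., k-1}
    return mex(v) > k;  // Q forces MEX >= k; it exceeds k exactly when k is present
}
```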
Now, we can use this technique to search for the indices of any $$$k$$$ in the array $$$a$$$. We will use a modified binary search that searches for the first occurrence of $$$k$$$ in $$$a$$$. Given a particular search space, we search the left half for $$$k$$$ if present, otherwise we search the right half. While querying, we exclude all elements that have already been found, so that we correctly find the first index of $$$k$$$. This will guarantee that the search eventually visits all subarrays $$$[a_i]$$$ where $$$a_i=k$$$.
The number of queries taken by this search (added over all values of $$$k$$$) is bounded by $$$n (1 + \log_2 n)$$$, since reaching each singleton interval $$$[a_i]$$$ costs $$$1 + \log_2 n$$$ queries (a path of depth $$$\log_2 n$$$ from the root of the search tree). There is an additional cost, bounded by $$$n+1$$$ queries, for determining that a particular $$$k$$$ has no more occurrences in the array. Therefore, the total number of queries is bounded by $$$n \log_2 n + 2n + 1$$$, which is loosely bounded by $$$15n$$$ for $$$n \le 400$$$.
#include <bits/stdc++.h>
#define int long long int
#define f first
#define s second
#define INF (LLONG_MAX / 4)
using namespace std;
int queries;
int query(vector<int> ind, int val) {
assert(queries > 0);
queries--;
cout << '?' << ' ' << ind.size() << ' ' << val << endl;
for (int ele : ind) {
cout << ele + 1 << ' ';
}
cout << endl;
for (int i = 0; i < val; i++) {
cout << i << ' ';
}
cout << endl;
int reply;
cin >> reply;
return reply;
}
void answer(vector<int> arr) {
cout << '!';
for (int ele : arr) {
cout << ' ' << ele;
}
cout << endl;
}
bool binary_search_first(int start, int end, vector<int> &arr, int val,
bool known) {
if (not known) {
vector<int> ind;
for (int i = start; i <= end; i++) {
if (arr[i] == -1) {
ind.push_back(i);
}
}
if (query(ind, val) == val) {
return false;
}
}
if (start == end) {
arr[start] = val;
return true;
}
int mid = (start + end) / 2;
if (binary_search_first(start, mid, arr, val, false)) {
return true;
}
if (binary_search_first(mid + 1, end, arr, val, true)) {
return true;
}
abort();
}
void solve() {
int n;
cin >> n;
queries = (int)ceil(log2(n)) * n + 2 * n + 1;
vector<int> arr(n, -1);
for (int i = 0; i <= n; i++) {
for (int j = 0; j < n; j++) {
if (not binary_search_first(0, n - 1, arr, i, false)) {
break;
}
}
}
answer(arr);
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int t;
cin >> t;
for (int tc = 0; tc < t; tc++) {
solve();
}
}
N. BitShift
Author: RighterTheOriginal
Instead of thinking of the binary string shifting left and right, think of the "decimal point" shifting right and left.
Where does the "decimal point" finally end up? What changed between the $$$m$$$ operations?
At the end of all $$$m$$$ operations, the "decimal point" will have moved right by $$$\sum_{i=1}^{m}k_{i}$$$ bits in total. However, in the process, it zeroes out any bits that it crosses while moving left, so all bits to the right of its leftmost position become $$$0$$$. That leftmost position is $$$-\min_{1 \le i \le m} \left(\sum_{j=1}^{i} k_{j}\right)$$$ bits to the left of the starting position.
Therefore, by keeping track of the sum and minimum of $$$k_{i}$$$, we can generate the final binary string (by printing all the higher bits till the leftmost position, followed by zeroes till the final position). The time complexity of this approach is $$$O(n + m + |\sum_{i=1}^{m}k_{i}|)$$$.
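The closed form can be stress-tested against a naive simulation that applies each shift directly. In the sketch below the bit string is modeled as a small unsigned integer (our simplification; the actual solution works on the string since $$$n$$$ can be large), with a positive $$$k$$$ shifting left and a negative $$$k$$$ shifting right and dropping bits:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Naive simulation: apply each shift directly to a small unsigned value.
// Positive k shifts left by k (appending zeros); negative k shifts right
// by -k, permanently dropping the low bits. Valid while shifts stay < 64.
unsigned long long simulate(unsigned long long x, const vector<int>& ks) {
    for (int k : ks) {
        if (k >= 0) x <<= k;
        else x >>= -k;
    }
    return x;
}

// Closed form from the editorial: only the deepest left excursion of the
// "decimal point" matters. Drop the lowest (-min prefix sum) bits, then
// shift left so the value ends up displaced by the total sum.
unsigned long long closed_form(unsigned long long x, const vector<int>& ks) {
    long long sum = 0, mn = 0;
    for (int k : ks) {
        sum += k;
        mn = min(mn, sum);
    }
    long long drop = -mn;                // low bits permanently zeroed
    return (x >> drop) << (sum + drop);  // sum + drop >= 0 always holds
}
```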
#include <bits/stdc++.h>
#define int long long int
#define f first
#define s second
#define INF (LLONG_MAX / 4)
using namespace std;
void solve() {
int n, m;
cin >> n >> m;
string s;
cin >> s;
int sum = 0, minimum = 0;
for (int i = 0; i < m; i++) {
int k;
cin >> k;
sum += k;
minimum = min(minimum, sum);
}
int leftmost = -minimum;
if (leftmost >= n) {
cout << '0' << endl;
return;
}
for (int i = 0; i < n - leftmost; i++) {
cout << s[i];
}
for (int i = -leftmost; i < sum; i++) {
cout << '0';
}
cout << endl;
}
int32_t main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
cout.tie(nullptr);
int t = 1;
cin >> t;
for (int tc = 0; tc < t; tc++) {
solve();
}
}
We would be happy to see alternate and nice answers/editorials for the problems in the comments section. Let us know if there are any corrections in the editorial. Happy Coding!