[contest:329695]
Editorial in PDF:
https://drive.google.com/file/d/1KL723lzYE8tVlgIUveMt2lSWDbd4nccv/view?usp=sharing
[problem:329695A]
Task author: try_kuhn
Translation author: MatesV13
Solution author: try_kuhn
Let's simply go through all the numbers on the segment $$$[l; r]$$$ and, dividing each number by 2, 3 and 5 while we can, check whether the resulting number is 1 (in other words, that it has no prime divisors larger than 5).
The asymptotics of this solution is $$$O(n \cdot \sigma(n))$$$, where $$$\sigma(n)$$$ is the number of factors 2, 3 and 5 in the factorization of a number. Since this count is fairly small (around 3), there will be no problem with the time limit.
#include <bits/stdc++.h>
#define int int64_t
using namespace std;

// true iff n has no prime divisors larger than 5
bool f(int n){
    while((n & 1) == 0)
        n >>= 1;
    while(n % 3 == 0)
        n /= 3;
    while(n % 5 == 0)
        n /= 5;
    return n == 1;
}

int32_t main() {
    int l, r;
    cin >> l >> r;
    int cnt = 0;
    for(int i = l; i <= r; i++)
        cnt += f(i);
    cout << cnt;
}
[problem:329695B]
Task author: try_kuhn
Translation author: MatesV13
Solution author: try_kuhn
The first group can be solved by simply checking every number in the interval for primality, which works in $$$O(n \sqrt{n})$$$.
The second group can be solved with a less efficient implementation of the sieve of Eratosthenes (in $$$O(n \log n)$$$), or with Fermat's probabilistic test.
The complete solution is obtained with a linear sieve of Eratosthenes in $$$O(n)$$$.
#include <bits/stdc++.h>
using namespace std;

const int N = 1e8;
int lp[N + 1];        // lp[i] — smallest prime factor of i
int pref[N + 1];      // pref[i] — number of primes not exceeding i
int pr[N / 10 + 1];   // list of primes found so far
int r = 0;

// linear sieve: every composite is crossed out exactly once,
// by its smallest prime factor
void precalc(int n) {
    for (int i = 2; i <= n; ++i) {
        pref[i] += pref[i - 1];
        if (lp[i] == 0) {
            pref[i]++;
            lp[i] = i;
            pr[r] = i;
            r++;
        }
        for (int j = 0; j < r && pr[j] <= lp[i] && i * pr[j] <= n; ++j)
            lp[i * pr[j]] = pr[j];
    }
}

int32_t main(){
    int32_t l, r;
    cin >> l >> r;
    precalc(r);
    cout << pref[r] - pref[l];
}
[problem:329695C]
Task author: try_kuhn
Translation author: MatesV13
Solution author: Gareton
Let's count, for the given permutation, how many permutations are lexicographically smaller than it.
Iterate over the permutation. Suppose we are at element $$$p_i$$$; consider all values less than $$$p_i$$$ that have not been used earlier. If we place any of them at position $$$i$$$, then, by the definition of lexicographic order, any arrangement of the remaining elements to the right gives a smaller permutation.
So each such value contributes $$$(n - i)!$$$ (in 1-indexing) smaller permutations; the contribution of position $$$i$$$ is $$$(n - i)!$$$ multiplied by the number of smaller unused values. With a simple boolean array to track which values are used, and a linear scan over it to count the smaller ones (in $$$O(n)$$$), the asymptotics of the solution is $$$O(n^2)$$$.
To do better, we use an additional structure such as a segment tree or a Fenwick tree. With it, the number of smaller unused values can be found in $$$O(\log n)$$$ per query, giving an asymptotics of $$$O(n \log n)$$$.
Knowing the number of lexicographically smaller permutations, we obtain the index of the current permutation by adding 1.
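As an illustration of the counting argument, here is a minimal sketch of the quadratic variant (a hypothetical helper, not the reference solution; 1-based values, same modulus):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// O(n^2) index (rank) of a permutation of 1..n, modulo mod.
// For each position i, count the unused values smaller than p[i];
// each such choice fixes the prefix and leaves (n-i-1)! suffixes.
int64_t permutation_rank(const std::vector<int>& p, int64_t mod = 1000000007) {
    int n = p.size();
    std::vector<int64_t> fact(n + 1, 1);
    for (int i = 1; i <= n; i++) fact[i] = fact[i - 1] * i % mod;
    std::vector<bool> used(n + 1, false);
    int64_t rank = 1;  // the answer is 1 + (number of smaller permutations)
    for (int i = 0; i < n; i++) {
        int64_t smaller = 0;
        for (int v = 1; v < p[i]; v++)   // linear scan -> the O(n^2) total
            if (!used[v]) smaller++;
        rank = (rank + smaller * fact[n - i - 1]) % mod;
        used[p[i]] = true;
    }
    return rank;
}
```

The Fenwick-tree solution below replaces the inner scan with an $$$O(\log n)$$$ prefix-sum query, which is the only difference between the two versions.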
#include <bits/stdc++.h>
using namespace std;

const long long mod = 1000000007;
long long n, fact[100005], perm;
long long fenw[1 << 20];

// Fenwick tree: point increment
void update(int x){
    for (; x < (1 << 20); x += x & -x)
        fenw[x]++;
}

// Fenwick tree: prefix sum on [1; poz]
long long query(int poz){
    long long ret = 0;
    for (; poz > 0; poz -= poz & -poz)
        ret += fenw[poz];
    return ret;
}

int main(){
    ios::sync_with_stdio(0);
    cin.tie(0); cout.tie(0);
    cin >> n;
    fact[0] = fact[1] = 1;
    for (int i = 2; i <= n; i++)
        fact[i] = (fact[i - 1] * i) % mod;
    long long ans = 1;  // index = 1 + number of smaller permutations
    for (int i = 1; i <= n; i++){
        cin >> perm;
        // unused values smaller than perm
        long long prev = perm - 1 - query(perm);
        update(perm);
        ans = (ans + (prev * fact[n - i]) % mod) % mod;
    }
    cout << ans << endl;
    return 0;
}
[problem:329695D]
Task author: try_kuhn
Translation author: MatesV13
Solution author: try_kuhn
The task is rather similar to the previous one.
We will restore the combination from left to right: iterate over the possible values of the current element and count how many combinations start with it.
Depending on this count, we either move on to the next candidate value (subtracting the counted number of combinations from the index), or fix this element and go on to find the next one. The asymptotics is $$$O(k^2)$$$.
With some clever binary search plus a segment tree or Fenwick tree one could get $$$O(n \log^2 n)$$$, which is overkill for such a small $$$n$$$.
#include <bits/stdc++.h>
#define int int64_t
using namespace std;

int c[33][33];  // binomial coefficients

void precalc(){
    for(int i = 0; i <= 32; i++)
        c[i][0] = 1;
    for(int i = 1; i <= 32; i++)
        for(int j = 1; j <= 32; j++)
            c[i][j] = c[i - 1][j - 1] + c[i - 1][j];
}

int32_t main() {
    precalc();
    int n, k, p;
    cin >> n >> k >> p;
    int prev = 0;
    for(int i = 0; i < k; i++){
        // c[n - j][k - i - 1] combinations continue with j in this position
        for(int j = prev + 1; j <= n; j++){
            if(p < c[n - j][k - i - 1]){
                prev = j;
                cout << j << ' ';
                break;
            }
            else
                p -= c[n - j][k - i - 1];
        }
    }
}
[problem:329695E]
Task author: try_kuhn
Translation author: MatesV13
Solution author: try_kuhn
For the first two test groups we can write a brute-force solution: we simply remove an edge from the graph for a query of the first type, and run a DFS for a query of the second type. The final asymptotics is $$$O(n \cdot k)$$$.
For the complete solution, we process the queries from the end. Then we start from an empty graph and have two kinds of queries: merge and ask. A data structure such as DSU answers exactly these queries. The asymptotics is $$$O((m + k) \cdot \alpha(n))$$$, where $$$\alpha(n)$$$ is the inverse Ackermann function, which for any practical purposes can be treated as a constant not exceeding 4 or 5.
#include <bits/stdc++.h>
using namespace std;

int p[100000];
int rk[100000];

void init_dsu() {
    for (int i = 0; i < 100000; i++) {
        p[i] = i;
        rk[i] = 1;
    }
}

// root with path compression
int get_root(int v) {
    if (p[v] == v)
        return v;
    return p[v] = get_root(p[v]);
}

// union by rank; returns false if a and b were already connected
bool merge(int a, int b) {
    int ra = get_root(a), rb = get_root(b);
    if (ra == rb)
        return false;
    if (rk[ra] < rk[rb]) {
        p[ra] = rb;
    } else if (rk[rb] < rk[ra]) {
        p[rb] = ra;
    } else {
        p[ra] = rb;
        rk[rb]++;
    }
    return true;
}

int32_t main() {
    ios::sync_with_stdio(false);
    int n, m, k;
    cin >> n >> m >> k;
    init_dsu();
    // the initial edges are read but not stored:
    // the queries are processed from the end
    while (m--){
        int u, v;
        cin >> u >> v;
    }
    vector<pair<string, pair<int, int>>> q;
    while(k--){
        string s;
        int u, v;
        cin >> s >> u >> v;
        q.emplace_back(s, make_pair(u, v));
    }
    // process the queries in reverse: deletions become merges
    stack<bool> st;
    while(!q.empty()){
        string s = q.back().first;
        int u = q.back().second.first, v = q.back().second.second;
        q.pop_back();
        if(s == "1")
            merge(u, v);
        else
            st.push(get_root(u) == get_root(v));
    }
    // answers were collected in reverse; the stack restores their order
    while(!st.empty()){
        cout << (st.top() ? "YES\n" : "NO\n");
        st.pop();
    }
}
[problem:329695F]
Task author: try_kuhn
Translation author: MatesV13
Solution author: Gareton
For the first group it is enough to write an $$$O(n^4)$$$ brute force that checks every quadruple.
For the second group, an $$$O(n^2 \log n)$$$ or $$$O(n^2)$$$ algorithm over pairs of numbers works: $$$a \oplus b = c \oplus d$$$ means we are looking for two pairs with the same xor. This can be done with a set, for example, or even with a frequency array over the values from 0 to 131071 (all pair xors are below $$$2^{17}$$$).
For the complete solution, the key insight is that since all $$$a_i$$$ are distinct, we do not have to handle large $$$n$$$ at all. We claim that the check is only needed for $$$n < 513$$$, because for larger $$$n$$$ there are always four numbers whose xor is 0.
Indeed, all pair xors lie between 0 and $$$2^{17} - 1$$$. If the number of pairs is at least $$$2^{17}$$$, then by the Dirichlet (pigeonhole) principle two pairs share the same xor; these two pairs cannot share an element (that would force two equal $$$a_i$$$), so together they give four numbers with xor 0.
The number of pairs is $$$\frac{n(n-1)}{2}$$$, which first reaches $$$2^{17}$$$ at $$$n = 513$$$, so the brute-force check is only needed for $$$n \le 512$$$. (Note: the largest tests with answer ``No'' we found were around $$$n = 200$$$, but if we find larger ones, we will add them.)
So we only need to write the second-group solution and, for larger $$$n$$$, simply print ``Yes''.
#pragma GCC optimize("O3")
#include <bits/stdc++.h>
using namespace std;

const int N = 1e6 + 10;

int n;
int a[N];
bool was[N];   // was[x] — has some pair with xor x been seen already

int32_t main()
{
    ios_base::sync_with_stdio(0);
    cin.tie(0); cout.tie(0);
    cin >> n;
    for(int i = 0; i < n; ++i)
        cin >> a[i];
    // n(n-1)/2 >= 2^17 pairs: two of them share a xor by pigeonhole
    if(n >= 513)
    {
        cout << "Yes" << "\n";
        return 0;
    }
    for(int i = 0; i < n; ++i)
        for(int j = i + 1; j < n; ++j)
            if(was[a[i] ^ a[j]])
            {
                cout << "Yes" << "\n";
                return 0;
            }
            else
                was[a[i] ^ a[j]] = true;
    cout << "No" << "\n";
}
// n * (n - 1) / 2 = 131072
// n * (n - 1) = 262144
// n^2 - n - 262144 = 0
// d = 1 + 4 * 262144 = 1048577, sqrt(d) ≈ 1025
// n = (1 + 1025) / 2 = 513
[problem:329695G]
Task author: try_kuhn
Translation author: MatesV13
Solution author: try_kuhn
Let's add an auxiliary vertex $$$n+1$$$, which we connect to all the other vertices. The cost of each such edge $$$(u, n+1)$$$ is $$$c_u$$$, the cost of building a power plant in city $$$u$$$.
Now we construct an MST (Minimum Spanning Tree) of this graph — it is the answer. Note that whenever the MST connects a vertex to $$$n+1$$$, it means we build the power plant in that city.
The MST can be found by sorting the edges by weight and uniting components with a DSU (Kruskal's algorithm) in $$$O(m \log m)$$$, where $$$m$$$ is the number of edges, which here is $$$\frac{n(n+1)}{2}$$$ (don't forget about vertex $$$n+1$$$); since the graph is dense, the $$$O(n^2)$$$ Prim's algorithm works just as well.
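The Kruskal variant can be sketched as follows (illustrative only; the edge representation and function names are a hypothetical choice, and the reference solution uses $$$O(n^2)$$$ Prim instead):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstdint>
#include <numeric>
#include <vector>

// DSU with path compression (union by rank omitted for brevity).
struct DSU {
    std::vector<int> p;
    explicit DSU(int n) : p(n) { std::iota(p.begin(), p.end(), 0); }
    int find(int v) { return p[v] == v ? v : p[v] = find(p[v]); }
    bool unite(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return false;
        p[a] = b;
        return true;
    }
};

// Kruskal: sort edges (w, u, v) by weight and keep every edge that
// joins two different components. Returns the total MST weight
// (the graph is assumed connected).
int64_t mst_weight(int n, std::vector<std::array<int64_t, 3>> edges) {
    std::sort(edges.begin(), edges.end());   // lexicographic: by weight first
    DSU dsu(n);
    int64_t total = 0;
    for (const auto& e : edges)
        if (dsu.unite((int)e[1], (int)e[2]))
            total += e[0];
    return total;
}
```

On a complete graph with $$$\sim n^2/2$$$ edges, sorting dominates, which is why the dense-graph Prim used below is the more natural fit here.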
#include <bits/stdc++.h>
#define int int64_t
using namespace std;

struct point{
    int x, y;
};

int32_t main() {
    ios::sync_with_stdio(false);
    int n;
    cin >> n;
    // vertex n (0-indexed) is the auxiliary vertex n+1 from the statement
    int g[n + 1][n + 1];
    point a[n];
    for(auto &i: a)
        cin >> i.x >> i.y;
    g[n][n] = 0;
    for(int i = 0; i < n; i++){
        cin >> g[n][i];   // cost of building a power plant in city i
        g[i][n] = g[n][i];
    }
    int k[n];
    for(auto &i: k)
        cin >> i;
    for(int i = 0; i < n; i++)
        for(int j = i + 1; j < n; j++)
            g[i][j] = g[j][i] = (abs(a[i].x - a[j].x) + abs(a[i].y - a[j].y)) * (k[i] + k[j]);
    // O(n^2) Prim on the dense graph
    bool used[n + 1];
    memset(used, false, sizeof used);
    int min_e[n + 1];     // cheapest edge from the tree to each vertex
    for(int i = 0; i <= n; i++)
        min_e[i] = LLONG_MAX;
    int sel_e[n + 1];     // the tree endpoint of that edge
    memset(sel_e, -1, sizeof sel_e);
    min_e[0] = 0;
    vector<pair<int, int>> ans;
    for (int i = 0; i <= n; ++i) {
        int v = -1;
        for (int j = 0; j <= n; ++j)
            if (!used[j] && (v == -1 || min_e[j] < min_e[v]))
                v = j;
        used[v] = true;
        if (sel_e[v] != -1)
            ans.emplace_back(v, sel_e[v]);
        for (int to = 0; to <= n; ++to)
            if (!used[to] && v != to && g[v][to] < min_e[to]) {
                min_e[to] = g[v][to];
                sel_e[to] = v;
            }
    }
    int w = 0;
    for(auto i: min_e)
        w += i;
    cout << w << '\n';
    // split the MST edges: edges to vertex n are power plants, the rest are wires
    vector<int> f;
    vector<pair<int, int>> s;
    for(auto i: ans) {
        if (i.first == n || i.second == n)
            f.emplace_back(i.first == n ? i.second : i.first);
        else
            s.push_back(i);
    }
    cout << f.size() << '\n';
    for(auto i: f)
        cout << i + 1 << ' ';
    cout << '\n';
    cout << s.size() << '\n';
    for(auto i: s)
        cout << i.first + 1 << ' ' << i.second + 1 << '\n';
}
[problem:329695H]
Task author: try_kuhn
Translation author: MatesV13
Solution author: try_kuhn
First two groups: we add the elements to a set, and for a query of the second type we walk through the set and compute the requested sum. The asymptotics is $$$O(n^2)$$$; written optimally, this should also be enough for the third group.
For the complete solution we need a data structure that can quickly add elements and compute the required sum. A Cartesian tree (treap) or a sparse segment tree works.
We build a Cartesian tree that supports insertion and store, in every node, the sum over its subtree. Using the split operation we can cut out a subtree of the original tree holding exactly the keys from $$$l$$$ to $$$r$$$; the sum stored in its root is the required answer. The main thing is not to forget to merge the Cartesian trees back into the original one at the end. Asymptotics — $$$O(n \log n)$$$.
#include <bits/stdc++.h>
#define int int64_t
using namespace std;

mt19937_64 gen(chrono::steady_clock::now().time_since_epoch().count());
uniform_int_distribution<int> rnd(1, 100000000);

// treap (Cartesian tree) node; sum is the sum of keys in the subtree
struct node{
    int key, pr;
    node *left, *right;
    int sz, sum;
    node(int k){
        sz = 1;
        key = sum = k;
        pr = rnd(gen);
        left = right = nullptr;
    }
};

int getsize(node* T){
    return T ? T->sz : 0;
}

int getsum(node* T){
    return T ? T->sum : 0;
}

// recompute sz and sum from the children
void calc(node* &T){
    if(!T)
        return;
    T->sz = getsize(T->left) + getsize(T->right) + 1;
    T->sum = getsum(T->left) + getsum(T->right) + T->key;
}

// split T into trees with keys <= k and keys > k
pair<node*, node*> split(node* T, int k){
    if(T == nullptr)
        return {nullptr, nullptr};
    if(T->key <= k){
        auto [T1, T2] = split(T->right, k);
        T->right = T1;
        calc(T);
        return {T, T2};
    }
    else{
        auto [T1, T2] = split(T->left, k);
        T->left = T2;
        calc(T);
        return {T1, T};
    }
}

// merge T1 and T2 (all keys of T1 are smaller than the keys of T2)
node* merge(node* T1, node* T2){
    if(!T1)
        return T2;
    if(!T2)
        return T1;
    if(T1->pr > T2->pr){
        T1->right = merge(T1->right, T2);
        calc(T1);
        return T1;
    }
    else{
        T2->left = merge(T1, T2->left);
        calc(T2);
        return T2;
    }
}

bool find(node* T, int k){
    while(T) {
        if (T->key == k)
            return true;
        if(T->key > k)
            T = T->left;
        else
            T = T->right;
    }
    return false;
}

// insert k, keeping the keys distinct
node* insert(node* T, int k){
    if(T == nullptr)
        return new node(k);
    if(find(T, k))
        return T;
    auto [T1, T2] = split(T, k);
    return merge(merge(T1, new node(k)), T2);
}

// sum of keys in [l; r]: cut out the middle tree, read its root, merge back
int sum(node* &T, int l, int r){
    auto [T1, T2] = split(T, l - 1);
    auto [Tl, Tr] = split(T2, r);
    int sum = getsum(Tl);
    T = merge(merge(T1, Tl), Tr);
    return sum;
}

node* root = nullptr;

int32_t main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    int n;
    cin >> n;
    int sm = 0;
    while(n--){
        string s;
        cin >> s;
        if(s == "+"){
            int x;
            cin >> x;
            // the inserted value is shifted by the previous answer
            root = insert(root, (x + sm) % 1000000000);
            sm = 0;
        }
        else{
            int l, r;
            cin >> l >> r;
            sm = sum(root, l, r);
            cout << sm << '\n';
        }
    }
}
[problem:329695I]
Task author: try_kuhn
Translation author: MatesV13
Solution author: try_kuhn
The first three groups can be solved by some version of DP in $$$O(n \cdot k)$$$ (where $$$k$$$ is just some constant).
The full problem can be solved using matrix multiplication and fast exponentiation. Let's keep a vector $$$S$$$ (a matrix with a single row) whose $$$i$$$-th entry is the number of valid DNA strings starting with symbol $$$i$$$ (so for $$$n = 1$$$ it is $$$S = [1, 1, 1, 1, 1]$$$), and a matrix $$$M$$$ that describes whether symbol $$$i$$$ can be followed by symbol $$$j$$$ ($$$M[i][j] = 1$$$ if it can, 0 otherwise).
Using fast exponentiation we compute $$$S \cdot M^{n-1}$$$ and then sum its entries. The overall complexity is $$$O(5^3 \log n)$$$.
#include <bits/stdc++.h>
#define int int64_t
using namespace std;

const int mod = 999999937;

vector<vector<int>> f, x, matrix;

// multiply matrices a (m x n) and b (n x k) modulo mod
vector<vector<int>> mult(vector<vector<int>> a, vector<vector<int>> b) {
    int m = a.size(), n = b.size(), k = b[0].size();
    vector<vector<int>> p(m, vector<int>(k));
    for(int i = 0; i < m; i++)
        for(int j = 0; j < k; j++)
            for(int l = 0; l < n; l++)
                p[i][j] = (p[i][j] + a[i][l] * b[l][j]) % mod;
    return p;
}

// fast matrix exponentiation
vector<vector<int>> binpow(vector<vector<int>> v, int y){
    if(y == 1)
        return v;
    if(y & 1)
        return mult(binpow(v, y - 1), v);
    vector<vector<int>> cur = binpow(v, y >> 1);
    return mult(cur, cur);
}

int32_t main() {
    ios::sync_with_stdio(false);
    int n;
    cin >> n;
    if (n == 1) {
        cout << 5 << '\n';
        return 0;
    }
    f.assign(1, vector<int>(5, 1));      // S for n = 1
    matrix.assign(5, vector<int>(5, 1));
    // forbidden transitions from the statement
    matrix[2][3] = matrix[2][4] = matrix[4][3] = matrix[4][4] = 0;
    x = binpow(matrix, n - 1);
    f = mult(f, x);
    int sum = 0;
    for (auto i: f[0]) {
        sum += i;
        if (sum >= mod)
            sum %= mod;
    }
    cout << sum << '\n';
}
[problem:329695J]
Task author: try_kuhn
Translation author: MatesV13
Solution author: try_kuhn, xyz.
We need to construct a suffix array $$$p$$$ and compute the LCP array (longest common prefixes of neighboring suffixes) for it. Then we go through the suffixes in sorted order and see which of their prefixes give new substrings.
The current suffix $$$p_i$$$ (where $$$p$$$ is the suffix array) gives a new substring for each of its prefixes, except those that coincide with prefixes of the suffix $$$p_{i-1}$$$ — that is, all of its prefixes except the first $$$lcp_{i-1}$$$ of them.
Then the answer to the problem is $$$\sum_{i = 0}^{n - 1} (n - p_i) - \sum_{i = 0}^{n - 2} lcp_i$$$. Time complexity is $$$O(n \log n)$$$.
#include <bits/stdc++.h>
#define int int64_t
using namespace std;

int sum_mod(int x, int y, int mod){
    x += y;
    if(x >= mod)
        x -= mod;
    return x;
}

// suffix array via sorting cyclic shifts: counting sort + doubling
void sort_cc_sh(string &s, vector<int> &sa) {
    int n = s.size();
    vector<int> a, na;
    vector<int> color, ncolor;
    vector<int> buckets;
    // initial counting sort by the first character
    buckets.assign(256, 0);
    for (char c: s)
        buckets[c]++;
    for (int i = 1; i < 256; i++)
        buckets[i] += buckets[i - 1];
    a.assign(n, -1);
    for (int i = n - 1; i >= 0; i--)
        a[--buckets[s[i]]] = i;
    color.assign(n, 0);
    color[a[0]] = 0;
    for (int i = 1; i < n; i++)
        if (s[a[i]] != s[a[i - 1]])
            color[a[i]] = color[a[i - 1]] + 1;
        else
            color[a[i]] = color[a[i - 1]];
    na.assign(n, -1);
    ncolor.assign(n, -1);
    // double the compared length on every iteration
    for (int l = 1; l < n; l <<= 1) {
        buckets.assign(n, 0);
        for (int i = 0; i < n; i++)
            buckets[color[i]]++;
        for (int i = 1; i < n; i++)
            buckets[i] += buckets[i - 1];
        for (int i = n - 1; i >= 0; i--) {
            int j = a[i] - l;
            if (j < 0)
                j += n;
            na[--buckets[color[j]]] = j;
        }
        a.swap(na);
        ncolor[a[0]] = 0;
        for (int i = 1; i < n; i++)
            if (color[a[i]] == color[a[i - 1]] && color[sum_mod(a[i], l, n)] == color[sum_mod(a[i - 1], l, n)])
                ncolor[a[i]] = ncolor[a[i - 1]];
            else
                ncolor[a[i]] = ncolor[a[i - 1]] + 1;
        color.swap(ncolor);
    }
    sa = a;
}

void build_suf_array(string &s, vector<int> &sa){
    s.push_back((char)0);   // sentinel smaller than any character
    sort_cc_sh(s, sa);
    s.pop_back();
    sa.erase(sa.begin());   // drop the sentinel suffix
}

// Kasai's algorithm: lcp[i] = LCP of suffixes sa[i] and sa[i + 1]
void build_lcp(string &s, vector<int> &sa, vector<int> &lcp){
    int n = s.size();
    lcp.assign(n - 1, -1);
    vector<int> pos(n);
    for(int i = 0; i < n; i++)
        pos[sa[i]] = i;
    int k = 0;
    for(int i = 0; i < n; i++){
        if(k > 0)
            k--;
        if(pos[i] == n - 1)
            k = 0;
        else{
            int j = sa[pos[i] + 1];
            while (max(i + k, j + k) < n && s[i + k] == s[j + k])
                k++;
            lcp[pos[i]] = k;
        }
    }
}

int32_t main() {
    string s;
    cin >> s;
    vector<int> sa;
    build_suf_array(s, sa);
    vector<int> lcp;
    build_lcp(s, sa, lcp);
    // distinct substrings: all suffix prefixes minus the repeated ones
    int sum = 0;
    for(auto i : sa)
        sum += s.size() - i;
    for(auto i: lcp)
        sum -= i;
    cout << sum;
}
Can someone please explain the Problem C solution? I was trying to relate it to the standard P&C problem of finding the rank of a word in a dictionary: if $$$a_i \ne i$$$, then the total number of ways of arranging the remaining $$$n-i$$$ numbers is $$$(n-i)! \cdot i$$$, and for the whole array the answer would be the summation of this.
Okay, I'll try. Imagine that we have counted the number of permutations that are lexicographically smaller than the given one. Then the answer is $$$cnt + 1$$$. We process the permutation from left to right. By the definition of lexicographic order, a permutation $$$a$$$ is smaller than a permutation $$$b$$$ if their first $$$m$$$ elements coincide and the $$$(m+1)$$$-th element of $$$a$$$ is less than the $$$(m+1)$$$-th element of $$$b$$$. Say we are now at position $$$i$$$. Consider all elements of the permutation to the right of position $$$i$$$. It follows from the definition above that if we put at position $$$i$$$ a number $$$p_j$$$ such that $$$p_j < p_i$$$ and $$$j > i$$$, the resulting permutation is lexicographically smaller. So if we know the number of such $$$p_j$$$ — call it $$$k$$$ — this adds $$$k \cdot (n - i)!$$$ to the answer, since when $$$p_j < p_i$$$ the arrangement of the remaining $$$n - i$$$ elements does not matter. To find $$$k$$$ you can use, for example, a data structure such as the Fenwick tree.
Ok, now I understand what the Fenwick tree was used for in this question. Thanks for the help.
You're welcome :)