Will we, will you...
Can we, can you, can we change?
1562A - The Miracle and the Sleeper
Try considering segments where $$$l \le \lfloor \frac{r}{2} \rfloor + 1$$$.
Now think about what to do if this condition does not hold.
#include <iostream>
using namespace std;
int l, r;
// If r < 2*l, every valid pair satisfies a < 2*b, so a mod b = a - b <= r - l.
// Otherwise take b = r / 2 + 1 and a = r, which gives the maximum remainder (r - 1) / 2.
void solve() {
if (r < l * 2) {
cout << r - l << endl;
}
else {
cout << (r - 1) / 2 << endl;
}
}
int main() {
int t;
cin >> t;
while (t--) {
cin >> l >> r;
solve();
}
}
Try looking at small numbers, for example, ones consisting of three digits.
Now think about the maximum number of digits the optimal answer can have.
#include <iostream>
using namespace std;
int n;
string s;
bool prime[100];
// Answer of length 1: the single digits 1, 4, 6, 8, 9 are not prime on their own.
void solve() {
for (int i = 0; i < n; i++) {
if (s[i] == '1' || s[i] == '4' || s[i] == '6' || s[i] == '8' || s[i] == '9') {
cout << 1 << endl;
cout << s[i] << endl;
return;
}
}
// All remaining digits are 2, 3, 5 or 7: look for a two-digit subsequence that is not prime
// (one always exists; see the challenge below for why this loop takes O(k) in total).
for (int i = 0; i < n; i++) {
for (int j = i + 1; j < n; j++) {
if (!prime[(s[i] - '0') * 10 + (s[j] - '0')]) {
cout << 2 << endl;
cout << s[i] << s[j] << endl;
return;
}
}
}
exit(42); // unreachable: a non-prime answer of at most two digits always exists at this point
}
int main() {
// Precompute primality of all two-digit candidates by trial division.
for (int i = 2; i < 100; i++) {
prime[i] = true;
for (int j = 2; j * j <= i; j++) {
if (i % j == 0) {
prime[i] = false;
}
}
}
int t;
cin >> t;
while (t--) {
cin >> n;
cin >> s;
solve();
}
}
Try to prove that the author's solution works in $$$O(k)$$$ per test case.
The search for a single-digit answer works in $$$O(k)$$$. Now let us see how long the loop takes. Indeed, it is easy to see that by the time the index pairs $$$(0,1)$$$, $$$(0,2)$$$ and $$$(1,2)$$$ have been considered, an answer will always have been found. This is because in any three-digit number one can find an answer consisting of two digits (this was proved earlier). So it is equivalent to deleting all digits except the first three and then solving the problem for them (given that the length of the optimal answer is two).
Try to come up with a way of enumerating index pairs, and a test, such that this enumeration works in $$$O(k^2)$$$ per test case.
Take the string $$$3737\ldots37$$$. We will enumerate index pairs as follows: first all pairs of indices with different parities, and then all pairs of indices with the same parity. It is easy to see that such an enumeration works in $$$O(k^2)$$$ per test case, because all pairs of indices with different parities give the numbers $$$37$$$ and $$$73$$$, which are prime, and there are $$$O(k^2)$$$ such pairs of indices.
Think: can we make the suitable $$$k$$$ (such that $$$f(t) = f(w) \cdot k$$$) small? By which small $$$k$$$ is it easy to multiply binary numbers?
Such $$$k$$$ can be the numbers $$$0$$$, $$$1$$$ and $$$2$$$: multiplying by $$$0$$$ gives $$$0$$$, multiplying by $$$1$$$ leaves the number as it is, and multiplying by $$$2$$$ appends a zero to the end. Think about how this can be used.
#include <iostream>
using namespace std;
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
string s;
cin >> s;
bool solved = false;
// Look for a zero. If it lies in the right half, the prefix ending at the zero is exactly twice
// the prefix ending just before it. If it lies in the left half, the suffix starting at the zero
// equals the suffix starting just after it (a leading zero changes nothing).
for (int i = 0; i < n; i++) {
if (s[i] == '0') {
solved = true;
if (i >= n / 2) {
cout << 1 << " " << i + 1 << " " << 1 << " " << i << endl;
break;
} else {
cout << i + 2 << " " << n << " " << i + 1 << " " << n << endl;
break;
}
}
}
// No zeros at all: the string is all ones, so s[1..n-1] and s[2..n] have equal values.
if (!solved) {
cout << 1 << " " << n - 1 << " " << 2 << " " << n << endl;
}
}
}
Think about how the problem would be solved if, with all the previous conditions kept (the string consists of the digits $$$0$$$ and $$$1$$$ and contains at least one one), an arbitrary number base were chosen.
The solution would not change at all. Indeed, in a base-$$$k$$$ number system, prepending a zero on the left does not change the number (multiplication by $$$1$$$), while appending a zero on the right multiplies the number by $$$k$$$.
Think about how the problem would be solved if the statement read: \textit{it is required that $$$f(t)$$$ be divisible by $$$f(w)$$$ without remainder. The string contains at least one one. Zero is divisible by every number, but no number is divisible by zero. Zero is not divisible by zero.}
In fact, the problem becomes somewhat harder. For example, consider the test $$$111000000$$$. In this case we could not output $$$4$$$ $$$9$$$ $$$5$$$ $$$9$$$, because zero is not divisible by zero.
We run the old solution but check that we are not trying to divide zero by zero (if we are dividing something by zero, we swap the numbers). If no answer was found, then either the whole left half of the string consists of zeros, or the whole right half does. Let the leftmost one be at position $$$p_1$$$ and the rightmost one at position $$$p_2$$$. Then in the first case we can output $$$1$$$ $$$p_2$$$ $$$2$$$ $$$p_2$$$, and in the second case $$$p_1$$$ $$$n$$$ $$$p_1$$$ $$$n-1$$$.
It is easy to see that such a solution will also always find an answer.
Actually, we can act even more simply: when we meet a zero in the left half, we additionally check whether there is at least one one among the characters to its right. If there is, we output the answer, otherwise we move on. Similarly, when we meet a zero in the right half, we additionally check whether there is at least one one among the characters to its left. It can be shown that such a solution will also always find an answer.
Interestingly, we have solved a \textbf{stronger version of the problem}, since the solution we obtained also works correctly for the old problem.
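A minimal sketch of the simpler approach described above (my own code, not part of the original editorial; variable names are made up). It prints indices in the same $$$l_1$$$ $$$r_1$$$ $$$l_2$$$ $$$r_2$$$ format as the solution of problem C:
#include <iostream>
#include <string>
#include <vector>
using namespace std;
// Stronger version of problem C: f(t) must be divisible by f(w), and we must never
// divide zero by zero. The string is assumed to contain at least one '1'.
int main() {
    int t;
    cin >> t;
    while (t--) {
        int n;
        string s;
        cin >> n >> s;
        // ones[i] = number of '1' characters in s[i..n-1]
        vector<int> ones(n + 1, 0);
        for (int i = n - 1; i >= 0; i--) {
            ones[i] = ones[i + 1] + (s[i] == '1');
        }
        bool solved = false;
        for (int i = 0; i < n && !solved; i++) {
            if (s[i] != '0') continue;
            if (i >= n / 2 && ones[0] - ones[i] > 0) {
                // zero in the right half with a one somewhere to its left:
                // the prefix ending at the zero is twice the nonzero prefix ending just before it
                cout << 1 << " " << i + 1 << " " << 1 << " " << i << "\n";
                solved = true;
            } else if (i < n / 2 && ones[i + 1] > 0) {
                // zero in the left half with a one somewhere to its right:
                // the suffix starting at the zero equals the nonzero suffix starting just after it
                cout << i + 2 << " " << n << " " << i + 1 << " " << n << "\n";
                solved = true;
            }
        }
        if (!solved) {
            // no usable zero means the string has no zeros at all, i.e. it is all ones
            cout << 1 << " " << n - 1 << " " << 2 << " " << n << "\n";
        }
    }
}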
1562D1 - Two Hundred Twenty One (easy version)
Think: how are the parity of the length of a segment and the parity of the number of elements that must be removed from it related?
Think about how much, and in what way, the alternating sum changes if only one element is removed, compared to removing only its neighbor.
Consider segments of odd length. Try taking small segments and, for each element, find out what the alternating sum becomes after its removal. After that you will notice something interesting.
#include <iostream>
using namespace std;
int a[1000000 + 5], p[1000000 + 5];
// Alternating sum of a[l..r], taken with a '+' sign on a[l], computed from the prefix array p.
int get_sum(int l, int r) {
if (l > r) {
return 0;
}
return (l % 2 == 1) ? p[r] - p[l - 1] : p[l - 1] - p[r];
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin>>t;
while (t--) {
int n, q;
cin >> n >> q;
string ss;
cin >> ss;
for (int i = 1; i <= n; i++) {
a[i] = (ss[i - 1] == '+' ? 1 : -1);
}
p[0] = 0;
for (int i = 1; i <= n; i++) {
p[i] = p[i - 1] + (i % 2 == 1 ? a[i] : -a[i]);
}
for (int o = 0; o < q; o++) {
int l, r;
cin >> l >> r;
// Sum already zero: remove nothing. Otherwise an even-length segment needs 2 removals, an odd-length one needs 1.
if (get_sum(l, r) == 0) {
cout << "0\n";
continue;
}
if ((r - l + 1) % 2 == 0) {
cout << "2\n";
}
else {
cout << "1\n";
}
}
}
}
1562D2 - Two Hundred Twenty One (hard version)
See the corresponding hints for problem D1.
Think about how to quickly check the alternating sum of a segment if it is known that exactly one element has been removed from it (and the index of that element is known).
From the solution of problem D1 it is clear that for any odd-length segment it is enough to remove just one element. Think about how to find such an element, relying on the proof of its existence given above.
#include <iostream>
using namespace std;
int a[1000000 + 5], p[1000000 + 5];
int get_sum(int l, int r) {
if (l > r) {
return 0;
}
return (l % 2 == 1) ? p[r] - p[l - 1] : p[l - 1] - p[r];
}
// Alternating sum of a[l..r] after removing element m (the signs of all elements after m flip).
int check_elimination(int l, int r, int m) {
return ((m - l + 1) % 2 == 1)
? get_sum(l, m - 1) + get_sum(m + 1, r)
: get_sum(l, m - 1) - get_sum(m + 1, r);
}
int get_sign(int m) {
return m > 0 ? 1 : -1;
}
// Binary search on an odd-length segment for a position whose removal makes the sum zero:
// the sums at the two ends have opposite signs (or one is zero), and neighbouring sums differ by at most 2.
int calculate_odd_segment(int l, int r) {
if (l == r) {
return l;
}
int pos = 0;
int lb = l;
int rb = r;
while (lb < rb) {
int mb = (lb + rb) / 2;
int lq = check_elimination(l,r,lb);
int mq = check_elimination(l,r,mb);
int rq = check_elimination(l,r,rb);
if (lq == 0) {
pos = lb;
break;
}
if (mq == 0) {
pos = mb;
break;
}
if (rq == 0) {
pos = rb;
break;
}
if (get_sign(lq) == get_sign(mq)) {
lb = mb;
} else {
rb = mb;
}
}
return pos;
}
int main() {
ios_base::sync_with_stdio(false);
cin.tie(nullptr);
int t;
cin>>t;
while (t--) {
int n, q;
cin >> n >> q;
string ss;
cin >> ss;
for (int i = 1; i <= n; i++) {
a[i] = (ss[i - 1] == '+' ? 1 : -1);
}
p[0] = 0;
for (int i = 1; i <= n; i++) {
p[i] = p[i - 1] + (i % 2 == 1 ? a[i] : -a[i]);
}
for (int o = 0; o < q; o++) {
int l, r;
cin >> l >> r;
if (get_sum(l, r) == 0) {
cout << "0\n";
continue;
}
bool even = false;
if ((r - l + 1) % 2 == 0) {
even = true;
l++;
}
int pos = calculate_odd_segment(l, r);
if (even) {
cout << "2\n" << l - 1 << " "<< pos << '\n';
} else {
cout << "1\n" << pos << '\n';
}
}
}
}
Is there a solution with $$$O(n + q)$$$ complexity per test case?
The solution will need to find the longest common prefix of two substrings quickly (in $$$O(1)$$$).
Take a few small strings and try to build the longest increasing subsequence. Perhaps it has some special structure?
Use dynamic programming.
#include <iostream>
using namespace std;
int16_t lcp[10000 + 5][10000 + 5];
int dp[10000 + 5];
// Is the suffix starting at x lexicographically greater than the suffix starting at y?
bool is_greater(const string& s, int x, int y) {
if (lcp[x][y] == static_cast<int>(s.size()) - x) {
return false;
}
return s[x + lcp[x][y]] > s[y + lcp[x][y]];
}
// If suffix x is greater than suffix y, the prefixes of suffix x longer than lcp(x, y)
// (there are n - x - lcp of them) can be appended after the prefixes of suffix y.
int get_score(const string& s, int x, int y) {
if (is_greater(s, x, y)) {
return dp[y] + static_cast<int>(s.size()) - x - lcp[x][y];
}
return 0;
}
int main() {
int t;
cin >> t;
while (t--) {
int n;
cin >> n;
string s;
cin >> s;
if (s.size() != n) return 42;
for (int i = 0; i <= n; i++) {
for (int j = 0; j <= n; j++) {
lcp[i][j] = 0;
}
}
for (int i = n - 1; i >= 0; i--) {
for (int j = n - 1; j >= 0; j--) {
if (i == j) {
lcp[i][j] = n - i;
} else
if (s[i] != s[j]) {
lcp[i][j] = 0;
} else {
lcp[i][j] = lcp[i + 1][j + 1] + 1;
}
}
}
int ans = n;
dp[0] = n;
for (int i = 1; i < n; i++) {
dp[i] = n - i;
for (int j = 0; j < i; j++) {
dp[i] = max (dp[i], get_score(s, i, j));
}
ans = max(ans, dp[i]);
}
cout << ans << '\n';
}
}
As far as I know, using a trie one can sort all substrings of a given string in $$$O(n^2)$$$. Also, as far as I know, in C++ \textit{std::lower\_bound} is so fast that on an array of size $$$1.25 \cdot 10^7$$$ the same number of \textit{std::lower\_bound} calls runs in slightly less than a second. It is also known that \textit{std::lower\_bound} can be used to implement the search for the longest increasing subsequence in an array. The question is: based on this, can one write an $$$O(n^2 \cdot \log(n))$$$ solution that would pass the tests?
Come up with a solution that uses $$$\frac{n \cdot (n-1)}{2}$$$ queries.
For $$$n < 100$$$, apply the solution that uses $$$\frac{n \cdot (n-1)}{2}$$$ queries. Now suppose $$$n \geq 100$$$. Come up with a solution that uses $$$\lceil\frac{3n}{2}\rceil$$$ queries.
For $$$n < 10000$$$, apply the solution that uses $$$2n$$$ queries. Now suppose $$$n \geq 10000$$$. Most likely, the $$$2n$$$-query solution you came up with needs to find the largest prime on the segment. Think: do you really need the largest prime? Perhaps a sufficiently large prime is enough? How can such a number be found?
Use randomized algorithms to find such a number. Also use randomized algorithms to make sure that the number you found is suitable.
Now use the prime you found to find the largest prime. Perhaps, while finding it, you will already learn most of the numbers, and with the help of the largest prime you will be able to learn the remaining ones.
#include <iostream>
#include <vector>
#include <algorithm>
#include <cmath>
#include <ctime>
#include <random>
using namespace std;
#define int long long
mt19937 rnd(time(NULL));
int swnmb[1000000+5];
int swnmbr[1000000+5];
int b[1000000+5];
bool p[1000000+5];
vector<int> primes;
int get_lcm(int x, int y)
{
cout<<"? "<<swnmb[x]<<" "<<swnmb[y]<<endl;
int u;
cin>>u;
return u;
}
int max_fact(int d)
{
for (int j=(int)primes.size()-1; j>=0; j--)
{
if (d%primes[j]==0) return primes[j];
}
}
vector<int> fact(int d)
{
vector<int> ans;
for (auto j:primes)
{
if (d==1) break;
while (d%j==0)
{
ans.push_back(j);
d/=j;
}
}
return ans;
}
bool is_prime(int d)
{
for (int j=0; j<(int)primes.size(); j++)
{
if (primes[j]>500) return true;
if (d%primes[j]==0) return false;
}
return true;
}
int paired_get_lcm[100+5][100+5];
// n < 100: ask all pairs; the maximum lcm equals u*(u-1), which identifies the largest value u
// and its position, and the remaining values are recovered in decreasing order by following maximum lcms.
void solve_quadric(int n)
{
for (int i=1; i<=n; i++) {
for (int j=i+1; j<=n; j++) {
paired_get_lcm[i][j]=get_lcm(i,j);
paired_get_lcm[j][i]=paired_get_lcm[i][j];
}
}
int m1=0;
int m2=0;
int mx=0;
for (int i=1; i<=n; i++) {
for (int j=i+1; j<=n; j++) {
if (paired_get_lcm[i][j]>mx)
{
mx=paired_get_lcm[i][j];
m1=i;
m2=j;
}
}
}
int u=0;
for (int i=1; i<=200000; i++)
{
if (mx==i*(i+1))
{
u=i+1;
break;
}
}
for (int i=1; i<=n; i++)
{
if (i==m1||i==m2) continue;
if (paired_get_lcm[m1][i]%u!=0)
{
swap(m1,m2);
break;
}
}
b[m1]=u;
int pos=m2;
for (int v=u-1; v>u-n-1; v--)
{
int m0=0;
int mx=0;
for (int j=1; j<=n; j++) {
if (b[j]!=0) continue;
if (pos==j) continue;
if (paired_get_lcm[pos][j]>mx)
{
mx=paired_get_lcm[pos][j];
m0=j;
}
}
b[pos]=v;
pos=m0;
}
b[pos]=u-n+1;
}
// 100 <= n < 10000: query adjacent pairs, find the largest prime factor among the lcms and the
// position holding it (one extra query to disambiguate), then recover b[i] = lcm(b[i], prime) / prime.
void solve_pair(int n)
{
int maxnum=0;
int maxpos=0;
int notmaxpos=0;
for (int i=1; i<=n; i+=2)
{
int u;
if (i!=n) u=get_lcm(i,i+1);
else u=get_lcm(i-1,i);
int w=max_fact(u);
if (w>maxnum)
{
maxnum=w;
maxpos=min(i+1,n);
} else {
notmaxpos=i;
}
}
int cur=get_lcm(notmaxpos,maxpos);
if (cur%maxnum!=0) {
maxpos--;
}
b[maxpos]=maxnum;
for (int i=1; i<=n; i++)
{
if (i==maxpos) continue;
int u=get_lcm(i,maxpos)/maxnum;
b[i]=u;
}
}
void print_ans(int n)
{
cout<<"! ";
for (int i=1; i<=n; i++) {
cout<<b[swnmbr[i]];
if (i!=n) cout<<" ";
}
cout<<endl;
}
int32_t main()
{
for (int i=2; i<=200000; i++)
{
if (p[i]) continue;
for (int j=i*i; j<=200000; j+=i)
{
p[j]=true;
}
primes.push_back(i);
}
int t;
cin>>t;
while (t--)
{
int n;
cin>>n;
for (int i=1; i<=n; i++) {
swnmb[i]=0;
swnmbr[i]=0;
b[i]=0;
}
for (int i=1; i<=n; i++) {
swnmb[i]=i;
}
for (int i=1; i<=n*5; i++) {
swap(swnmb[rnd()%n+1], swnmb[rnd()%n+1]);
}
for (int i=1; i<=n; i++) {
swnmbr[swnmb[i]]=i;
}
if (n<100) {
solve_quadric(n);
print_ans(n);
continue;
}
if (n<10000) {
solve_pair(n);
print_ans(n);
continue;
}
int t1=0;
int t2=0;
int pos1=0;
int pos2=0;
// n >= 10000: sample random index pairs until their lcm is a product of two primes greater than 500,
// use extra random queries to decide which prime sits at which position, then recover the whole array
// (using the largest prime found along the way for the remaining positions).
for (int i=1; i<=4000; i++)
{
int i1=rnd()%n+1;
int i2=rnd()%n+1;
while (i1 == i2) {
i2=rnd()%n+1;
}
vector<int> f=fact(get_lcm(i1,i2));
if (f.size()==2&&f[0]>500&&f[1]>500)
{
bool first1=true;
bool first2=true;
for (int j=0; j<100; j++)
{
int t=rnd()%n+1;
while (t==i1||t==i2)
{
t=rnd()%n+1;
}
if (get_lcm(i1,t)%f[0]!=0)
{
first1=false;
break;
}
}
for (int j=0; j<100; j++)
{
int t=rnd()%n+1;
while (t==i1||t==i2)
{
t=rnd()%n+1;
}
if (get_lcm(i1,t)%f[1]!=0)
{
first2=false;
break;
}
}
if (first1&&first2)
{
bool second1=true;
for (int j=0; j<100; j++)
{
int t=rnd()%n+1;
while (t==i1||t==i2)
{
t=rnd()%n+1;
}
if (get_lcm(i2,t)%f[1]!=0)
{
second1=false;
break;
}
}
if (second1)
{
t2=f[0]*f[1];
t1=f[0];
}
else
{
t2=f[0]*f[1];
t1=f[1];
}
pos1=i2;
pos2=i1;
}
else
{
if (!first1)
{
t1=f[1];
t2=f[0];
}
else
{
t1=f[0];
t2=f[1];
}
pos1=i1;
pos2=i2;
}
b[pos1]=t1;
b[pos2]=t2;
int posf=0;
int maxf=0;
for (int j=1; j<=n; j++)
{
if (j==i1||j==i2) continue;
int u=get_lcm(pos1,j);
if (u/t1>500)
{
b[j]=u/t1;
if (is_prime(b[j]))
{
if (b[j]>maxf)
{
maxf=b[j];
posf=j;
}
}
}
}
for (int j=1; j<=n; j++)
{
if (b[j]!=0) continue;
b[j]=get_lcm(posf,j)/b[posf];
}
break;
}
}
print_ans(n);
}
}
Thanks for the great contest and the fast editorial :)
Wish all Div. 2 contests were like this. Quite a standard contest.
Which problems were standard lol?
I think he/she forgot to add "high" before standard.
I mean, everyone in the contest has their own notion of difficulty, but the last 2 problems were definitely not standard.
Can you still provide code for problem F? Even though it might be incomprehensible, it might be used to stress test our solutions. Or maybe you can provide some participant's code which is more readable.
Well, if you want...
Unable to parse markup [type=CF_MATHJAX] error in D1
Will fix it soon, thank you!
Thank you for the nice round! The problems were very interesting, and C's cases using the zeroes were pretty tricky in particular. However, D1's solution was pretty easily noticed by me and some others because of the samples (the observation that all answers were either 0, 1, or 2). Maybe giving fewer samples would "hide" this fact?
Problem statements were clear and concise, and from what I did, problems are roughly in increasing order of difficulty!
Again, thanks for the round, and I hope you write another one soon! (pikachu-themed plsssss :D)
I thought about that, but left this sample because I was worried about the difficulty of D1. I decided to make it easier this way.
Damn, I found out that in problem D1 the answer is only ever 0, 1 or 2, but I forgot to take the absolute value.
Unable to parse markup [type=CF_MATHJAX]
I will fix it soon.
Thank you for the nice round and the fast fixing. Your round and editorial will benefit us a lot!
What's more, in D2 the hint refers to "C1", which should be D1.
Addition: in the solution, "We will use the facts already obtained, given in the solution of problem C1" should also read D1.
How would we solve B if the question were: what is the minimum number of digits that can be removed from the number so that the number becomes not prime?
That task is harder than checking whether the number is prime.
Can someone help me figure out why my O(n + q*logn) solution failed for D2? The time complexity should be same as the editorial solution
My approach was a little different from the editorial, however: 1) while precomputing, I store, for every prefix sum value v, the indices of the elements where the prefix sum equals v; 2) in the binary search phase, I calculate the prefix sum of the element to be removed (this will be the midpoint of the prefix sums of the left and right boundary), and then binary search for an index with that prefix sum in the data structure from 1).
https://mirror.codeforces.com/contest/1562/submission/127124443
You copied the vector, so it's O(n + qn). auto reqv = (some vector) creates a NEW vector. Use references to avoid copying the vector.
Also, endl is really slow (it flushes your output, 6*10^5 times in the worst case!). Use '\n'.
https://mirror.codeforces.com/contest/1562/submission/127153639
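A tiny illustration of both points (the names here are made up):
#include <iostream>
#include <vector>
using namespace std;
int main() {
    vector<vector<int>> buckets(3, vector<int>(1000000, 1));
    auto copied = buckets[0];  // copies the whole inner vector: O(size) extra work
    auto& ref = buckets[0];    // a reference just binds a name to the existing vector: O(1)
    cout << copied.size() << " " << ref.size() << '\n'; // '\n' does not flush the stream
    cout << "done" << endl;                             // endl flushes every time it is used
}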
oh wow, really need to be more careful with these details. thank you!
hey man, just wanted to thank you again for digging through my code and finding this- i made the same mistake today but was able to catch it because i remembered your comment! :D
TLE: https://mirror.codeforces.com/contest/1556/submission/127390031
AC: https://mirror.codeforces.com/contest/1556/submission/127390584
Why does this solution fail pretest 2? :( 127132103, Problem C.
Try this case:
1
7
1110010
Oh noooo, but thank you! It was an unnecessary for loop.
Thanks for the great problemset, really liked the fun constructives :D
I have an offline solution for D which works in O(N + Q). Precompute the prefix array as in D1. Then sort the queries by l (by bucket sort, i.e. query[l].pb({r, q}), where q is the idx of this query). If the sign-variable sum is zero, then just continue. Now there are two cases.
Case 1: if r-l+1 is odd: we want to find an index idx such that the sign-variable sums over [l, idx-1] and [idx+1, r] are equal, because when we remove idx, the signs after that element just flip. This can be written as pre_sum[idx-1] - pre_sum[l-1] = pre_sum[r] - pre_sum[idx]. Now we can rewrite this as pre_sum[l-1] + pre_sum[r] = pre_sum[idx] + pre_sum[idx-1]. So for a given l, r, we can find any idx such that l <= idx <= r and the above condition holds.
Case 2: if r-l+1 is even, we can remove r, do r-- and then it becomes case 1.
We can create an hash map of vectors to store the adjacent pre_sum's index. i.e. map[pre_sum[i-1] + pre_sum[i]].pb(i);
We also create an index hash map which stores for this key what index we are at in the vector hashmap. Then since the queries are sorted by l, we can just do like 2 pointer approach until we find index > l for the key in the vector hashmap. Here the key is sum[l-1] + sum[r].
Upd: added the sorting method to get O(N + Q) complexity.
Correct me if I'm wrong... But sorting in the very first step will take O(Nlog(N)) time.
Until and unless you use some radix sort type of algorithm to sort but I think that would be an overkill.
Just put the queries in buckets by $$$l$$$. Then loop over the indices and do the same as above.
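For reference, a minimal sketch of this bucketed version (my own code, not the commenter's submission; it handles a single test case and only prints the index to remove, or 0 when the sum is already zero):
#include <bits/stdc++.h>
using namespace std;
int main() {
    int n, q;
    cin >> n >> q;
    string s;
    cin >> s;
    // alternating prefix sums, as in the D1 editorial
    vector<int> p(n + 1, 0);
    for (int i = 1; i <= n; i++) {
        int v = (s[i - 1] == '+' ? 1 : -1);
        p[i] = p[i - 1] + (i % 2 == 1 ? v : -v);
    }
    const int OFF = 2 * n;                 // keys p[m-1] + p[m] lie in [-2n, 2n]
    vector<vector<int>> bucket(4 * n + 1); // bucket[key] = indices m with p[m-1] + p[m] == key - OFF
    for (int m = 1; m <= n; m++) {
        bucket[p[m - 1] + p[m] + OFF].push_back(m); // indices inside a bucket are increasing
    }
    vector<vector<array<int, 2>>> byL(n + 2); // queries grouped by l: {key, id}
    vector<int> ql(q), qr(q), ans(q, 0);
    for (int i = 0; i < q; i++) {
        cin >> ql[i] >> qr[i];
        if (p[qr[i]] == p[ql[i] - 1]) continue;    // sum already zero, remove nothing
        if ((qr[i] - ql[i] + 1) % 2 == 0) qr[i]--; // even segment: also drop the last element
        byL[ql[i]].push_back({p[ql[i] - 1] + p[qr[i]] + OFF, i});
    }
    vector<size_t> ptr(4 * n + 1, 0); // per-key pointer; l only grows, so pointers only move forward
    for (int l = 1; l <= n; l++) {
        for (auto [key, id] : byL[l]) {
            while (ptr[key] < bucket[key].size() && bucket[key][ptr[key]] < l) ptr[key]++;
            ans[id] = bucket[key][ptr[key]]; // the existence proof guarantees this index is <= r
        }
    }
    for (int i = 0; i < q; i++) cout << ans[i] << "\n";
}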
I think I have an easier implementation. Let $$$ps$$$ be the prefix sum array. Notice that for each query $$$(l, r)$$$, we can determine $$$k$$$ and need to find some index $$$x$$$ such that $$$l \leq x \leq r$$$ and $$$ps_x = k$$$.
Then we can have an offline solution to answer all queries $$$(l, r, k)$$$. Put the queries into buckets by $$$r$$$. Then loop over $$$1 \dots N$$$ and maintain last occurrence of each prefix sum value. Due to the proven existence of our target index, the last occurrence before $$$r$$$ must meet our requirement.
Since $$$|ps_x| \leq N$$$, this solution works in $$$O(N + Q)$$$.
I think such simplification can not be done.
Consider a sequence whose sum is 2m-1, m>0:
We not only want $$$ps_x = m-1$$$; $$$v_x=1$$$ is also required.
No, it is possible to solve problem D2 in $$$O(n + q)$$$, and I too have discovered the same solution with asymptotics $$$O(n + q)$$$ as mentioned above. Taking the queries offline and storing them in lists by their corresponding $$$r$$$ values does it.
Here's the link to my submission 127266077.
Even though it takes $$$483 ms$$$, whereas my $$$O(n + q \log n)$$$ takes $$$358 ms$$$. This might be because the setter didn't include the required test cases in the test sets in order to differentiate between the execution times of the two solutions. Or maybe there's some problem with the way I've implemented it.
A tricky question.
Assume $$$m > 0$$$. Actually it can be shown that for the last occurrence where $$$ps_x = m - 1$$$, there must be $$$v_x = 1$$$, otherwise there will be another position $$$x < y < r$$$ such that $$$ps_y = m - 1$$$. My in-contest solution made use of this conclusion, besides that I tried to find the first occurrence online.
Another way to avoid this discussion, is to find the position where $$$ps_x + ps_{x+1} = 2m - 1$$$. This idea is already discussed in this comment, and you can see the implementation in this comment below.
Yes you are right.
I only looked at your first claim, and didn't notice that the trick of finding the last occurrence will account for this.
Thanks for the clarification!
Although it's O(N+QlgN), this approach made a very simple online solution :O
https://mirror.codeforces.com/contest/1562/submission/127151337
I think you can use an array rather than a hashmap; that will be faster.
Thank you for sharing this solution!
Loved this contest.
It looks like the F editorial is slightly overcomplicated. For n > 100 it suffices to iterate with a for loop and ask for pairs (1, a[i]) for min(5000, n) pairs. Then take the maximum prime number that ever occurred as a divisor of an lcm. Using this, get the whole array in n operations. Or can you hack it?
I know that there exists a very simple solution. Just explain to me, please: if we have not the biggest prime number but some prime number p such that p < r/2, how can we get the whole array using this prime? I really don't know how to do this. But I know that it can be done somehow.
If n > 5000, it should be clear that we have a very high chance of getting at least one prime > r / 2. Otherwise we know all the primes, and it is well known that there always exists a prime between x and 2x for natural x, so it can't happen that the biggest prime is < r / 2.
A way to improve your chances further with the same approach is to first assume the permutation is randomly permuted. Then ask query(1,2), query(3,4), ..., query(9999,10000) (or fewer, if n < 10000).
The biggest prime that's a divisor of one of these queries is kept, together with the pair where it was found. Now this big prime could be at either position i or i+1, and we use one extra query to determine which one.
Only if all the "big" (> r/2) primes are not in the first 10000 elements of the permutation will this fail. The number of big primes is always more than $$$\frac{n}{200}$$$, because the largest prime gap under the constraints is less than 100. $$$P(fail) = P(\textrm{All big primes not in first 10000 elements}) \leq (1-\frac{10000}{n})^{n/200}$$$. For $$$n \in [10000,100000]$$$, this probability is always less than $$$10^{-22}$$$. (The actual probability is even lower.)
Sorry, I still don't understand why there are more than $$$\frac{n}{200}$$$ big primes.
Because $$$n=r-l+1$$$, $$$r$$$ is greater than or equal to $$$n$$$. So the range where big primes could exist has size $$$r- \frac{r}{2} = \frac{r}{2} \geq \frac{n}{2}$$$. For all numbers less than 200000, the biggest gap between two adjacent primes is 86. This means that after you move from some number 100 places to the right, you're guaranteed to hit at least one prime. So the number of primes in the good region is at least $$$\frac{n}{2} \cdot \frac{1}{100} = \frac{n}{200}$$$.
For my full solution:
For $$$n<86$$$ it's not guaranteed that the region [l,r] contains a prime. So for this case, you query all pairs. If you want to know the number on position a_i, you can take $$$gcd(q_{i,1}, q_{i,2},...,q_{i,n})$$$. Only for n=3, there's a special case, where this doesn't work.
For $$$n \geq 86$$$, you query pairs of positions like I described. Then you know the position and the value of a big prime. For all the other a_i's you can just put $$$a_i = \frac{q_{i,\textrm{position of big prime}}}{\textrm{big prime}}$$$
Submission: AC
Thank you, got it! Very short and nice solution :)
Thanks for the nice round. But in problem F, in the case $$$100 < n \le 10000$$$, why must the biggest prime divisor we found be in the sequence?
Forgive my poor English.
Excuse me, but how can you prove (in problem D2) that by such binary searching we are always making a right move?
Problem D1 was very unfair; it was not suitable for the D1 slot. So many people could not solve B and C but still solved D1, just because it was so evident from the example cases that the answer can only be 0, 1 or 2. The rest of the contest was very good, but I think some more thought could have been put into D1. D2 was very good.
Thank you so much for the fast editorial; the level of difficulty and the questions were excellent. I hope you will make more contests.
I got WA on test 3 of problem F, but the judge said
wrong answer The answer is wrong! (test case 300)
However, it is guaranteed that $$$ t \leqslant 20 $$$ and the value of $$$ t $$$ in test 3 is $$$ 20 $$$. Also, I can pass the data of test 3 on my own PC. What's the meaning of the judge's comment?
Well, this is a bug; the checker is showing an incorrect test case number.
Fine. Thanks a lot.
I have a slightly better solution for B, which is less guessy and easier to come up with (a sketch of these steps follows below).
1) First, if there is a 1, 4, 6, 8 or 9, then you already have your answer.
2) Now your string consists of 2, 3, 5 and 7 only. If it contains a repeated character, we are done, since 22, 33, 55 and 77 are all divisible by 11.
3) Since there are no repeated characters at this point, the string's length is at most 4 (it contains only 2, 3, 5 and 7). So you can brute force all possible combinations in 2^4 = 16 steps.
4) I think this solution is better if the problem could contain cases in which the answer doesn't exist.
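If it helps, here is a self-contained sketch of these steps (my own code, not the commenter's); it prints 0 and an empty line when no non-prime subsequence exists:
#include <iostream>
#include <string>
using namespace std;
// Is x prime? (x has at most 4 digits here, so trial division is plenty.)
bool isPrime(long long x) {
    if (x < 2) return false;
    for (long long d = 2; d * d <= x; d++) {
        if (x % d == 0) return false;
    }
    return true;
}
// Returns the shortest non-prime subsequence to keep, or "" if none exists.
string shortestNonPrime(const string& s) {
    int n = s.size();
    // 1) A single digit 1, 4, 6, 8 or 9 is already non-prime.
    for (char c : s) {
        if (c == '1' || c == '4' || c == '6' || c == '8' || c == '9') return string(1, c);
    }
    // 2) Only 2, 3, 5, 7 remain. A repeated digit d gives the two-digit number dd, divisible by 11.
    int cnt[10] = {};
    for (char c : s) cnt[c - '0']++;
    for (int d = 0; d <= 9; d++) {
        if (cnt[d] >= 2) return string(2, char('0' + d));
    }
    // 3) All digits are distinct and taken from {2, 3, 5, 7}, so n <= 4:
    //    brute force every non-empty subsequence and keep the shortest non-prime one.
    string best = "";
    for (int mask = 1; mask < (1 << n); mask++) {
        string t;
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i)) t += s[i];
        }
        if (!isPrime(stoll(t)) && (best.empty() || t.size() < best.size())) best = t;
    }
    return best;
}
int main() {
    int tc;
    cin >> tc;
    while (tc--) {
        int k;
        string s;
        cin >> k >> s;
        string ans = shortestNonPrime(s);
        cout << ans.size() << "\n" << ans << "\n";
    }
}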
https://mirror.codeforces.com/contest/1562/submission/127156841 Can anyone help? I can't figure out on which test case my code is failing.
Your solution fails for the case when one of {1, 4, 6, 8, 9} is present at the 0th index of the string, for example 15, 47 and many more.
Did any Okami (master) use the string comparison function to solve problem E? The string function can automatically compare strings in lexicographic order, so we just run a simple LIS and that should be OK. My code fails at the fourth test point, but I feel it is correct.
The link is as follows: test
This is my first question asked on Codeforces, so maybe something is wrong somewhere. %%%%%
I don't know where the WA is, but your code will get TL sooner or later, as string comparison works in O(n).
127118923 wrong answer Integer 61 violates the range [1, 6] (test case 54). Test case, please.
UPD. I was missing a damn whitespace. ;(
In problem E, I didn't quite understand the reasoning behind the fact that if the string $$$[l,r]$$$ is present in the LIS, the following ones ($$$[l,r+1],\, \dots,\, [l,n]$$$) must be present too so I want to share the proof that I came up with.
Firstly, we can observe that if $$$[l,r]$$$ and $$$[l,r+k]$$$ are present, all $$$[l,r+i],\ i<k$$$ must be present too. This is trivial since $$$[l,r+1]\ge [l,r]$$$. Furthermore, all these elements must be next to each other since that is the order that they follow in the sequence.
Now, we can prove the initial claim:
Let us assume that we have a LIS in which this is not true. Then there must be two adjacent elements in the LIS $$$a_1=[l_1,r_1],\ a_2=[l_2,r_2]$$$ such that $$$l_1<l_2$$$ and $$$r_1<n$$$.
By performing these changes a finite number of times we get a LIS that does not have fewer elements and fulfils the initial claim.
Edit: Changed the conditions in the proof following maomao90's suggestion to clarify the proof
Regarding your solution: in the second situation ("if $$$|s_1|<|s_2|$$$..."), maybe "the first character of $$$s_1$$$ must be smaller than the first character of $$$s_2$$$" also holds, so the rest of the prefixes of the suffix $$$[l_1,n]$$$ must be smaller than $$$s_2$$$; they can still make contributions and do not need to be deleted.
I understand the other parts. It's very good, it's easier to understand than the editorial, at least I understand it.
Please answer my doubts, thanks.
I'm not sure if I understand your doubt correctly but I think there are some things that I could clarify.
What we actually want to prove is that there is a LIS of maximum length in which the "initial claim" holds. To do so I can assume that the LIS of maximum length does not fulfil this claim. In that case, I can take that LIS and modify it so that it fulfils the claim. Therefore I have reached a contradiction and there is a LIS of maximum length that fulfils the claim.
Keep in mind that there might be other LIS of maximum length that do not follow that structure at all. All we need to know is that there is one that does.
Hi, thank you for taking the time to write such a nice detailed proof. There's a part of it that I don't understand; I'd really appreciate it if you could explain it to me. If |s1| < |s2|, my understanding is that we get rid of all the prefixes of [l1,r1] that existed in the LIS, so a maximum of r1 - l1 + 1 elements are removed from the LIS. Afterwards, we add [l2, r2 + 1], [l2, r2 + 2] ... [l2, n], so n - r2 new elements are added in. 1) How do you know that n - r2 > r1 - l1 + 1? 2) How do you know that these newly added elements won't be in conflict with some other elements that came after [l2,r2] in the original LIS, creating the need for more removals? Or maybe I'm misunderstanding your proof. What exactly are you removing from the LIS, and what are you replacing each removed element with?
Regarding your questions:
Why is n - r2 + 1 = |p| + |s2|?
Yup, that was wrong. Hopefully, I got it right this time.
Alright, now I get it! Thanks a lot for the help.
I didn't understand why you were assuming $$$a_1$$$ is strictly less than $$$a_2$$$. In an LIS, as far as I know, it's enough for the elements to be increasing, not strictly increasing. So I guess $$$a_1 \leq a_2$$$ should be correct.
Sure, but if they are equal you can simply replace $$$a_1$$$ and all of its prefixes with $$$a_2$$$ and all its prefixes since they will all be equal. I was probably thinking of a strictly increasing LIS when I wrote the proof
Well, let's say $$$a_1$$$ equals c and $$$a_2$$$ equals c as well. And notice that there is nothing forcing $$$l_2$$$ to be next to $$$l_1$$$. So in this case we would have selected $$$[l_1, l_1 + 1]$$$ and then $$$[l_2, l_2 + 1]$$$ and all of its suffixes. So what should we replace in this case?
Yeah, you’re right. There is no substitution that we can make without decreasing the size of the LIS.
But I have revised my submission and the solution in the editorial and the LIS has to be strictly increasing (which I didn’t remember last night when I wrote the previous answer) so we shouldn’t run into this problem.
Thanks for your response. I do agree that both the editorial's proof and yours are valid for a strictly increasing LIS. However, I see no clue in the problem statement that the LIS has to be strictly increasing. At this point, my guess is that both the increasing and the strictly increasing LIS give the same total answer in this problem.
I think this is confusing and it took me some time to understand it, so can I suggest that you change it to
This is because as long as $$$|s_1|>0$$$, the first character of $$$s_1$$$ has to be smaller than the first character of $$$s_2$$$, so the only case this doesn't hold is when $$$s_1$$$ is an empty string.
Thank you!
At first I thought that changing the conditions would make it harder to realize why we do not run out of prefixes during the substitution step. But after rewriting that part I think it did end up being easier to understand so... thanks for the suggestion!
Thanks for the great contest!
Thank you for participating!
I can't understand this sentence: "then we can «drop» the prefixes of the suffix $$$[l_1 \ldots n]$$$".
Is it $$$[l_1,n]$$$ alone, or $$$[l_1,l_1+1],[l_1,l_1+2],[l_1,l_1+3],\ldots,[l_1,n]$$$, that is discarded in the solution?
Yes, since $$$|s_1| \le |s_2|$$$, we can get a better answer.
So, from what I understand, we are trying to prove that if the largest increasing subsequence has a substring $$$[l_2 \ldots r_2]$$$, then it also has the substring $$$[l_2 \ldots n]$$$. And in this step we move all suffixes of $$$[l_1 \ldots n]$$$ included in a theoretical longest common prefix to $$$[l_2 \ldots n]$$$. But how do we know that we took the prefix $$$[l_2 \ldots n]$$$?
It just proves the property that if the string $$$[l,r]$$$ is present in the LIS, the following ones ($$$[l,r+1], \ldots, [l,n]$$$) must be present too.
It's a proof by contradiction. Assume the optimal answer doesn't meet this condition; then we can get a better answer by removing all prefixes of $$$[l_1, n]$$$ and adding the prefixes of $$$[l_2, n]$$$, so there is a contradiction. Taking the prefixes of $$$[l_2, n]$$$ is part of the proof; it doesn't mean we must take the prefixes of $$$[l_2, n]$$$, or that taking the prefixes of $$$[l_2, n]$$$ leads to the optimal answer.
Ok, I was still a bit confused so I just tried to write everything up; the main point I was missing was that s1 is a substring of s2, so your method will work. Maybe you were just doing it a different way though, since there are quite a few different details.
In the problem, why can't it be that $$$b_1>0$$$ and $$$b_n>0$$$, or $$$b_1<0$$$ and $$$b_n<0$$$? Please explain.
Let's define $$$f(l,r) = a_l - a_{l+1} + \ldots + (-1)^{r-l} \cdot a_r$$$. Then $$$b_1 = f(2, n)$$$ and $$$b_n = f(1, n-1)$$$. Since we know that $$$n$$$ is odd, the last element in $$$f(2,n)$$$ comes with a minus in front of it, so
$$$b_1 = f(2, n-1) - a_n$$$ and $$$b_n = a_1 - f(2, n-1)$$$.
To make typing easier, let $$$f(2, n-1) = x$$$.
Case 1) Say $$$a_1 = -a_n$$$, in which case $$$b_1 \cdot b_n = (a_1 - x) \cdot (a_1 + x) = a_1^2 - x^2 = 1 - x^2$$$, and since we know that $$$x$$$ can't be even, $$$b_1 \cdot b_n$$$ will be either 0 or negative. If it's 0, then either $$$b_1$$$ or $$$b_n$$$ is 0, and if it's negative, that means $$$b_1$$$ and $$$b_n$$$ have opposite signs. Case 2) Say $$$a_1 = a_n$$$; then $$$b_1 = x - a_1$$$ and $$$b_n = a_1 - x$$$, so $$$b_1 = -b_n$$$.
I hope this helped!
An alternative in F to find a big prime: pick a random triple x, y, z. Find gcd(Query(x,y), Query(x,z)); call this gcd g. If g is prime and g > n/2, do Query(y,z). Then $$$a_{x}=g$$$ if g does not divide Query(y,z).
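A short interactive sketch of this trick (my own code; the query helper is made up and just follows the problem's "? i j" format). It only demonstrates locating the prime; the rest of the array would then be recovered as in the editorial, one query per index:
#include <iostream>
#include <algorithm>
#include <random>
using namespace std;
// One "? i j" query: prints it and reads back lcm(a_i, a_j).
long long ask(int i, int j) {
    cout << "? " << i << " " << j << endl;
    long long v;
    cin >> v;
    return v;
}
bool isPrime(long long x) {
    if (x < 2) return false;
    for (long long d = 2; d * d <= x; d++) {
        if (x % d == 0) return false;
    }
    return true;
}
int main() {
    int n;
    cin >> n;
    mt19937 rnd(12345);
    while (true) {
        int x = rnd() % n + 1, y = rnd() % n + 1, z = rnd() % n + 1;
        if (x == y || y == z || x == z) continue;
        long long g = __gcd(ask(x, y), ask(x, z));
        // g divides a_x, or it divides both a_y and a_z; the third query rules out the latter.
        if (isPrime(g) && g > n / 2 && ask(y, z) % g != 0) {
            cout << "a_" << x << " is the prime " << g << endl; // by the argument above
            break;
        }
    }
}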
Loved Dream Theater references
Can anyone explain why I'm getting TLE in E? https://mirror.codeforces.com/contest/1562/submission/127401115
std::string.substr() works in O(n).
Thanks! Never thought about it :)
What is NOC in the solution of problem F? "To do this, we ask all the NOCs of the number p and the other numbers."
Sorry, it is a bad translation from Russian.
In Russian НОК (наименьшее общее кратное) is lcm (least common multiple).
I found another solution for D2:
Let $$$P_i$$$ be the prefix sign-variable sum up to $$$i$$$. Let the answer to an odd-sized query $$$[l, r]$$$ be $$$m$$$. The old sign-variable sum of $$$[l, r]$$$ is $$$\pm(P_r - P_{l-1})$$$, and after removing $$$m$$$ the new sign-variable sum is $$$\pm(P_{m-1} - P_{l-1} + P_m - P_r)$$$. So $$$m$$$ needs to satisfy $$$P_{m-1} + P_{m} = P_{l-1} + P_r$$$, and it can be located by maintaining a std::set of all possible $$$m$$$ for each value of $$$P_{m-1} + P_m$$$ and using lower_bound.
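A hedged sketch of that idea (my own code, single test case; it assumes each query segment has already been reduced to odd length with a nonzero sum and only prints the index to remove). Since all positions are known up front, a sorted vector per key plays the role of the std::set:
#include <bits/stdc++.h>
using namespace std;
int main() {
    int n, q;
    cin >> n >> q;
    string s;
    cin >> s;
    // alternating prefix sums P, as in the editorial
    vector<int> P(n + 1, 0);
    for (int i = 1; i <= n; i++) {
        int v = (s[i - 1] == '+' ? 1 : -1);
        P[i] = P[i - 1] + (i % 2 == 1 ? v : -v);
    }
    // pos[key] = all indices m (in increasing order) with P[m-1] + P[m] == key
    map<int, vector<int>> pos;
    for (int m = 1; m <= n; m++) pos[P[m - 1] + P[m]].push_back(m);
    while (q--) {
        int l, r;
        cin >> l >> r; // assumed: r - l + 1 is odd and the alternating sum of [l, r] is nonzero
        const vector<int>& v = pos[P[l - 1] + P[r]];
        int m = *lower_bound(v.begin(), v.end(), l); // the existence proof guarantees m <= r
        cout << m << "\n";
    }
}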
My explanation for F (which I feel is simpler): we can actually find any index in 20 queries, so we find 5000/20 = 250 numbers, and then the probability that some number is > (l+r)/2 and prime is fairly high. my submission
Nice solution :)
Mine is quite similar! Repeatedly query all pairs between $$$3$$$ random indices $$$i, j, k$$$. A pair of sufficient conditions for $$$a_i$$$ being detected as prime is that $$$a_i$$$ is prime and that $$$a_j$$$ and $$$a_k$$$ are coprime (we can guess that $$$a_i = \gcd(lcm(a_i, a_j), lcm(a_i, a_k))$$$). And we can do the same thing for detecting whether $$$a_j$$$ and $$$a_k$$$ are prime. So the hope is again to find a large enough prime number.
After some independence assumptions, prime gap blahblahblah, it seems like a pessimistic lower bound for the success of this on a single test is 99.99% but probably much better for most tests (and in retrospect can be improved with the TLE trick!).
Awesome problem nonetheless though!
Funnily enough, after getting so many TLEs I finally got accepted with an $$$O(n^{2} \log(n))$$$ algorithm in problem E.
You did use suffix array and LIS finding using lower_bound, right?
Great work, I'm really amazed. I suspected that this is actually possible, but I hadn't seen any successful implementations.
Thank you for solving my contest!
Why does 21 not work for testcase 1 of Problem B?
https://mirror.codeforces.com/contest/1562/submission/179461212
Because we have to remove the maximum number of digits so that the number becomes not prime, that is, either composite or equal to one. Thus, we can remove both 2's from the number 221 to make the resulting number equal to 1, which is a non-prime number.
D2 is an example of a bad editorial.
C was satisfying to solve.
Regarding problem B: could you please, next time, put the word "**maximum**" in bold? It was only mentioned once, in passing, in the statement.