Igor_Parfenov's blog

By Igor_Parfenov, history, 3 years ago (translated from Russian)

1823A - A-characteristic

Tutorial
Solution (C++)
Solution (Python)

1823B - Sort with Step

Tutorial
Solution (C++)
Solution (Python)

1823C - Strongly Composite

Tutorial
Solution (C++)
Solution (Python)
Remarks

1823D - Unique Palindromes

Tutorial
Solution (C++)
Solution (Python)
Remarks

1823E - Removing Graph

Tutorial
Solution (C++)
Solution (Python)

1823F - Random Walk

Tutorial
Solution (C++)
Solution (Python)
Editorial of Codeforces Round 868 (Div. 2)

»
3 years ago

Editorial comes out so fast!

»
3 years ago

The implementation of F is wrong: you forgot to reduce modulo 998244353, which is causing unexpected verdicts in hacks.

»
3 years ago

Problem B was cool.

»
3 years ago

Thank you for blue, brother!

»
3 years ago

Are you using a palindromic tree for the checker of problem D?

»
3 years ago

Okay, I don't think I could have realistically proved that $$$chain[x] = \left\lfloor \frac{x}{\ell} \right\rfloor$$$ during the contest time limit, but... I think I could definitely have observed the pattern if I just bothered to generate the first 100 Sprague-Grundy values or so and eyeballed them. Everything else was in place except this single component of the solution >_>. I hope I can learn to do this for future Sprague-Grundy problems now.
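
The "generate the first values and eyeball them" step is easy to sketch. As an illustration only (this is the classic single-pile subtraction game with moves of size $$$l$$$ to $$$r$$$, not the exact chain game from E; the function name is mine):

```python
def grundy_table(l, r, n):
    # Grundy values g(0..n) of the subtraction game where one move
    # removes between l and r stones from a single pile.
    g = []
    for x in range(n + 1):
        # Grundy values reachable in one move from a pile of x stones.
        reachable = {g[x - k] for k in range(l, r + 1) if x - k >= 0}
        m = 0  # mex: the smallest non-negative integer not reachable
        while m in reachable:
            m += 1
        g.append(m)
    return g
```

For small $$$l < r$$$, printing the table immediately suggests the well-known closed form $$$g(x) = \lfloor (x \bmod (l+r)) / l \rfloor$$$ for this game, which is exactly the kind of pattern one hopes to spot.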

Anyway, I really enjoyed these problems a lot! I even had fun thinking about F briefly, until I was assured that it's definitely beyond my capabilities. These are nice combinations of standard tools that still require additional creative effort to apply correctly.

Also, I especially appreciate that the implementations are very smooth once the correct idea is understood. For example, the problemsetter could've easily done the jerk move of requiring the program to print the swap sequence in B or the elements of the maximum-length array in C, which would not have really made the problems harder to figure out, but would have been unnecessarily more tedious.

»
3 years ago

I don't know about Sprague-Grundy theory. Is there anyone who could teach me?

»
3 years ago

Thanks so much for providing the Python solutions~

»
3 years ago

I don't understand why $$$cycle[x]$$$ equals $$$chain[x]$$$ for $$$x$$$ up to $$$r + l - 1$$$. Could anyone please explain it to me? Thanks!

»
3 years ago

I thought we needed to consider $$$l = r$$$ for E and was stuck for a long time. Now I am curious: is there a pattern in this case?

»
3 years ago

1823F - Random Walk has another solution. Root the tree at $$$t$$$ and consider the following question: suppose we are at a vertex $$$v$$$ at the given moment. What is the expected number of times we will visit it from now on, including the current time?

Well, if we go into any of its $$$\deg(v) - 1$$$ children we must visit it again. Else, if we go up to its parent, the probability we visit it again is exactly $$$1 - \frac{1}{d(v)}$$$ where $$$d(v)$$$ is the depth of $$$v$$$ in the tree. I think the adedalic solution covers an approach to seeing this, but the short answer is induction on the length of the path between $$$t$$$ and $$$v$$$.

Hence, if $$$e(v)$$$ is the expected number of visits to $$$v$$$ given that we are there at the current moment, then $$$e(v)$$$ satisfies

$$$e(v) = 1 + \left(\frac{\deg(v) - 1}{\deg(v)} + \frac{1}{\deg(v)}\cdot \left(1 - \frac {1}{d(v)}\right)\right)e(v).$$$ (for some reason the displaystyle math environment is acting weird for me)

Simplifying this gives $$$e(v) = \deg(v)\cdot d(v).$$$ However, this doesn't take into account the probability that $$$v$$$ is ever visited! If $$$v$$$ is on the path from $$$s$$$ to $$$t$$$, then it is always visited. Else, suppose that the first time the path from $$$v$$$ to $$$t$$$ intersects with the path from $$$s$$$ to $$$t$$$ is at vertex $$$u$$$. Since $$$u$$$ is always visited, the probability that $$$v$$$ is ever visited is the probability that a random walk from $$$u$$$ on the bamboo/path from $$$v$$$ to $$$t$$$ visits $$$v$$$ before $$$t$$$. But by the same induction, this is exactly $$$\frac{d(u)}{d(v)}$$$.

Hence, if $$$e'(v)$$$ is the true expected number of visits to $$$v$$$, we have $$$e'(v) = e(v) \cdot \frac{d(u)}{d(v)} = \deg(v) \cdot d(u)$$$. Finally, if $$$v = t$$$ we have $$$e'(v) = 1$$$. This is fairly easy to calculate: 203820888
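
A minimal sketch of the formula $$$e'(v) = \deg(v) \cdot d(u)$$$ above (plain integers instead of the modular arithmetic the problem requires; the function name and 1-based vertex labels are my own):

```python
def expected_visits_closed_form(n, edges, s, t):
    # e'(v) = deg(v) * d(u): the tree is rooted at t, d is the depth
    # (d(t) = 0), and u is the first vertex on the v -> t path that
    # also lies on the s -> t path; e'(t) = 1.
    adj = [[] for _ in range(n + 1)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    # BFS from t for parents and depths in the tree rooted at t.
    par = [0] * (n + 1)
    d = [0] * (n + 1)
    seen = [False] * (n + 1)
    seen[t] = True
    order = [t]
    for v in order:
        for w in adj[v]:
            if not seen[w]:
                seen[w] = True
                par[w] = v
                d[w] = d[v] + 1
                order.append(w)
    # Mark the s -> t path (s, par[s], ..., t).
    on_path = [False] * (n + 1)
    v = s
    while v != t:
        on_path[v] = True
        v = par[v]
    on_path[t] = True
    # du[v] = d(u) for the nearest s-t-path vertex u on the way to t.
    du = [0] * (n + 1)
    for v in order:  # parents are processed before children
        du[v] = d[v] if on_path[v] else du[par[v]]
    return [1 if v == t else len(adj[v]) * du[v] for v in range(1, n + 1)]
```

On the path 1-2-3 with $$$s = 1$$$, $$$t = 3$$$ this gives expected visit counts 2, 2, 1, matching a direct calculation.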

»
3 years ago

The first equation of F may seem obvious; here is my attempt at deriving it rigorously, where $$$x_{\tau}$$$ is the position of the chip at time $$$\tau$$$, starting from $$$\tau = 1$$$:

$$$E(c(i))$$$

$$$= E\left(\sum_{\tau=1}^\infty 1_{x_\tau = i}\right)$$$

$$$= \sum_{\tau=2}^\infty P(x_\tau = i) + P(x_1 = i)$$$

$$$= \sum_{\tau=2}^\infty \sum_{j=1}^n P(x_\tau = i \mid x_{\tau-1} = j)\,P(x_{\tau-1} = j) + 1_{i=s}$$$

$$$= \sum_{\tau=1}^\infty \sum_{j \in N(i),\, j \neq t} \frac{1}{|N(j)|} P(x_\tau = j) + 1_{i=s}$$$

$$$= \sum_{j \in N(i),\, j \neq t} \frac{E(c(j))}{|N(j)|} + 1_{i=s}$$$

Notice the special $$$1_{i=s}$$$ term at the end. Also, if $$$i = t$$$, we need to change the definition so that $$$|N(t)| = 1$$$: imagine that the walk then goes into some other absorbing state, so $$$t$$$ is visited only once.

»
3 years ago

For problem F, we have

$$$E(c(u)) = \sum_{(u, v)} E(c(v)) \cdot P(v, u),$$$

where $$$P(v, u)$$$ is the probability that the walk moves from $$$v$$$ to $$$u$$$.

So $$$E(c(u))$$$ and $$$E(c(v))$$$ depend on each other; but since the problem is on a tree, we can look for a representation

$$$E(c(u)) = free(u) + E(c(p)) \cdot g(u),$$$

where $$$free(u)$$$ is a constant coefficient and $$$g(u)$$$ is the coefficient on the parent $$$p$$$ of $$$u$$$. Our goal is to find $$$free(u)$$$ and $$$g(u)$$$, so how can we do that? Substituting the representation for each child $$$v$$$ of $$$u$$$, we have

$$$E(c(u)) = \sum_{(u, v)} (free(v) + E(c(u)) \cdot g(v)) \cdot P(v, u) + E(c(p)) \cdot P(p, u),$$$

which is equivalent to

$$$\left(1 - \sum_{(u, v)} g(v) \cdot P(v, u)\right) \cdot E(c(u)) = \sum_{(u, v)} free(v) \cdot P(v, u) + E(c(p)) \cdot P(p, u),$$$

where $$$v$$$ runs over the children of $$$u$$$.

We run one DFS to calculate $$$free(u)$$$ and $$$g(u)$$$, and then another DFS to calculate the final answer.

Short solution
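
As an unofficial sketch of this two-pass scheme: take the system $$$E(c(i)) = \sum_{j \in N(i),\, j \neq t} E(c(j))/|N(j)| + 1_{i=s}$$$ from the derivation a few comments above, root the tree at $$$t$$$, compute the coefficients bottom-up, then substitute top-down. Exact fractions stand in for the modular arithmetic of the actual problem, and all names ($$$A$$$ for $$$free$$$, $$$B$$$ for $$$g$$$) are mine:

```python
from fractions import Fraction

def expected_visits_tree_dp(n, edges, s, t):
    # Solves c(u) = sum_{v in N(u), v != t} c(v)/deg(v) + [u == s]
    # on a tree by writing c(u) = A(u) + B(u) * c(parent(u)).
    adj = [[] for _ in range(n + 1)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    deg = [len(nb) for nb in adj]
    # BFS order rooted at t; reversed, it is a valid bottom-up order.
    par = [0] * (n + 1)
    seen = [False] * (n + 1)
    seen[t] = True
    order = [t]
    for u in order:
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                par[v] = u
                order.append(v)
    A = [Fraction(0)] * (n + 1)
    B = [Fraction(0)] * (n + 1)
    for u in reversed(order):  # children are processed before parents
        if u == t:
            continue
        D = Fraction(1)
        num = Fraction(1 if u == s else 0)
        for v in adj[u]:
            if v != par[u]:    # v is a child of u
                D -= B[v] / deg[v]
                num += A[v] / deg[v]
        A[u] = num / D
        if par[u] != t:        # the j != t restriction drops the t term
            B[u] = Fraction(1, deg[par[u]]) / D
    c = [Fraction(0)] * (n + 1)
    c[t] = Fraction(1)         # t is absorbing, so it is visited once
    for u in order:            # top-down substitution
        if u != t:
            c[u] = A[u] + B[u] * c[par[u]]
    return c[1:]
```

On the path 1-2-3 with $$$s = 1$$$, $$$t = 3$$$ this yields expected visit counts 2, 2, 1, agreeing with the closed-form solution in the comment above.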
»
3 years ago

Very fun contest! Both problem D and E have quite interesting solutions (I didn't attempt problem F).

»
3 years ago

Thank you for task E, it is very nice!

»
21 months ago

Who the f***k made problem D?

»
15 months ago

... This gives the system of equations, which, however, cannot be directly solved.

Igor_Parfenov, why did you lie about that? It can easily be solved with sparse Gaussian elimination: https://mirror.codeforces.com/blog/entry/138384

»
11 months ago

E was very nice. It forced me to learn Grundy theory, and that was really amazing. Thank you for the great contest!