How to solve A again? :D
B and C were of the same relative difficulty.
Solution to $$$stations\ (C)$$$:
Amazingly, I claim that it's possible to find the answer using this information.
Now, we have labels $$$s$$$ and $$$t$$$ and labels $$$c_0 < c_1 < \ldots < c_{m-1}$$$ of $$$m$$$ neighbors of $$$s$$$. Let's say $$$s$$$ and $$$t$$$ correspond to $$$tin$$$'s and/or $$$tout$$$'s of vertices $$$u$$$ and $$$v$$$, respectively.
Now, we will determine if $$$s$$$ is $$$tin_u$$$ or $$$tout_u$$$.
Observation 1. If $$$s < c_0$$$, then $$$s = tin_u$$$, otherwise $$$s = tout_u$$$.
Proof. If $$$s = tin_u$$$, then all neighbors' labels are $$$tout$$$'s. Obviously, $$$c_{m-1}$$$ is the label of $$$u$$$'s parent and $$$s < c_{m-1}$$$ holds. Also, we leave the first child of $$$u$$$ after we enter it and we enter it after we enter $$$u$$$, thus $$$s < c_0$$$.
The $$$s = tout_u$$$ case is proven symmetrically.
Observation 2. $$$v$$$ is in the subtree of $$$u$$$ if and only if $$$tin_u < tin_v < tout_v < tout_u$$$.
Proof. This is a standard fact about Euler tours, familiar from tree algorithms such as finding the LCA with binary lifting.
Observation 3. If $$$s = tin_u$$$, then $$$tout_u = c_{m-2} + 1$$$ if $$$u$$$ is not the root (that is if $$$s \neq 0$$$), otherwise $$$tout_u = c_{m - 1} + 1$$$.
Proof. In the first case $$$c_{m-1}$$$ is the label of $$$u$$$'s parent and $$$c_{m-2}$$$ is the label of $$$u$$$'s last visited child; in the second case $$$c_{m-1}$$$ itself is the label of the last visited child. Since we leave $$$u$$$ right after leaving its last child, $$$tout_u$$$ equals that child's $$$tout$$$ plus one, which proves the observation.
Now that we know $$$s = tin_u$$$ and $$$tout_u = c_{m-2} + 1$$$, we can check whether $$$v$$$ is in the subtree of $$$u$$$ by checking whether $$$s < t < c_{m-2} + 1$$$. We need to consider the following two cases: if $$$v$$$ is in the subtree of $$$u$$$, we forward the packet to the child whose interval contains $$$t$$$, i.e. the smallest $$$c_i \geq t$$$; otherwise we forward it to the parent, whose label is $$$c_{m-1}$$$.
The case when $$$s = tout_u$$$ is symmetric.
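To make the routing side concrete, here is a minimal C++ sketch of find_next_station under the labelling described above: a single DFS counter, with vertices keeping $$$tin$$$ on even depths and $$$tout$$$ on odd depths, and the root keeping $$$tin = 0$$$. The neighbour list is assumed to be sorted ascending; the leaf shortcut and the exact code structure are my own assumptions, not taken from the comment.

```cpp
#include <vector>

// Routing sketch only. Labels come from one DFS counter: even-depth vertices keep
// tin, odd-depth vertices keep tout, and the root's tin is 0. c is sorted ascending.
int find_next_station(int s, int t, std::vector<int> c) {
    int m = (int)c.size();
    if (m == 1) return c[0];                    // leaf: the only neighbour is the parent
    if (s < c[0]) {
        // s = tin_u, so all neighbour labels are tout's and c[m-1] is the parent
        // (unless u is the root, i.e. s == 0, in which case every neighbour is a child).
        int tout_u = (s == 0) ? c[m - 1] + 1 : c[m - 2] + 1;
        if (s < t && t < tout_u) {
            // t lies inside u's subtree: go to the child whose interval covers t,
            // i.e. the smallest child tout that is >= t.
            for (int x : c)
                if (t <= x) return x;
        }
        return c[m - 1];                        // otherwise route towards the parent
    } else {
        // s = tout_u, so all neighbour labels are tin's and c[0] is the parent
        // (u cannot be the root here, because the root keeps its tin).
        int tin_u = c[1] - 1;                   // the first child's tin minus one
        if (tin_u < t && t < s) {
            // t lies inside u's subtree: go to the child whose interval covers t,
            // i.e. the largest child tin that is <= t.
            for (int i = m - 1; i >= 1; --i)
                if (c[i] <= t) return c[i];
        }
        return c[0];                            // otherwise route towards the parent
    }
}
```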
I will add a solution to problem $$$mushrooms\ (B)$$$ (probably around 80-92 pts) soon. The high-level explanation would be:
We will use about $$$20000 / 130 \approx 153$$$ queries (here it's actually much less than 153 queries, but still not enough for the full solution) plus 132 queries, i.e. fewer than 285 queries in total. That's around 80 pts. Also, we can randomly shuffle the indices (instead of processing $$$0, 1, 2, \ldots, n - 1$$$ in order, just shuffle them at the beginning). I think that gives the 92 pts solution.
Randomisation won't change anything because the grader is adaptive.
Oh, I see... then it's just an ~80 pts solution. :P
Oh, according to PavelKunyavskiy it seems to be a 95 pts solution because I, too, use the fact:
“If we just use each query for counting 102 mushrooms, we will finish in 286 queries (~80 points). But each query gives us a new known mushroom. If we take this into account, only 237 queries (95 points) will be enough.”
Auto comment: text has been updated by Azret (previous revision, new revision, compare).
It seems I have a solution for mushrooms, but I hope there is something much easier.
First, let's construct some basic primitives.
If we have $$$c$$$ mushrooms of the same type (say type A, with U denoting an unknown mushroom), we can ask AUAUAUAUAU...AU and receive the count of type-A mushrooms among the first $$$c-1$$$ unknowns, while learning the type of the last one exactly.
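As an illustration, here is a minimal C++ sketch of that primitive, written against the IOI-style use_machine() interface (the function name interleave_count and the output convention are mine):

```cpp
#include <vector>

// Provided by the grader: returns the number of adjacent pairs of different
// species in the given sequence of mushroom indices.
int use_machine(std::vector<int> x);

// `known` holds c indices already known to share one species (call it A) and
// `unknown` holds c indices of unidentified mushrooms. Returns how many of the
// first c-1 unknowns are A's; the exact species of the last unknown goes to lastIsA.
int interleave_count(const std::vector<int>& known,
                     const std::vector<int>& unknown, bool& lastIsA) {
    std::vector<int> q;
    for (std::size_t i = 0; i < known.size(); ++i) {
        q.push_back(known[i]);      // pattern: A U A U ... A U
        q.push_back(unknown[i]);
    }
    int r = use_machine(q);
    // Each of the first c-1 unknowns sits between two A's, so a B there adds 2;
    // the last unknown has an A only on its left, so a B there adds 1.
    lastIsA = (r % 2 == 0);
    int bCount = r / 2;                          // B's among the first c-1 unknowns
    return (int)known.size() - 1 - bCount;       // A's among the first c-1 unknowns
}
```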
Also, brute force shows (details below) that it's possible to find the exact types of 11 mushrooms using 7 queries that count types within subsets of them; the first of these queries has size 11. So, if we have at least 12 exactly-known mushrooms of the same type, we can learn 18 new mushrooms in 7 queries, because in addition to the counts we get one extra known mushroom per query.
So, we can get 23 known mushrooms with 22 one-mushroom queries, and after that use 70 queries to learn 180 more. Now we have spent 92 moves and know the types of 203 mushrooms. If we just use each query for counting 102 mushrooms, we will finish in 286 queries (~80 points). But each query gives us a new known mushroom. If we take this into account, only 237 queries (95 points) will be enough.
One can notice that 22 one-mushroom queries look totally sub-optimal. I have some tweaks to deal with that. First, we gain 2 mushrooms of the same type with 2 one-mushroom queries. Now we can gain 2 more known mushrooms using an AUAU or BUBU query. Let's do that 4 times, finishing with 11 known mushrooms after 6 spent moves. It would be good to start using our 18-mushrooms-for-7-queries primitive now, but we can't, because its first query needs to ask about 11 mushrooms and we can only ask about 5 at a time. So, let's use 2 queries to do that (one of them without an extra mushroom). We don't need to ask about that many mushrooms in the next steps, so the first time around we get 18 mushrooms in 8 queries (11, plus 1 extra for each query except the first). Now we have spent 14 moves and know 29 mushrooms, much better than 23 mushrooms in 22 moves. We can use our primitive 10 more times, finishing with 209 known mushrooms after 84 moves. The same strategy as above then leads to 226 queries in total, which gets 100 points. This part is definitely over-complicated and should be doable much more simply with some brute force.
The only detail left is how to gain 18 known mushrooms in 7 queries. This can be done with the following greedy algorithm. Let $$$[m_0, m_1, \ldots, m_n]$$$ be the list of masks consistent with the currently known information; initially it's the full list of $$$2^{11}$$$ masks. If only one of them remains, we are done. If not, let's try all masks as a query asking for the number of each type within them. For each candidate query the list will be split into sublists with sizes $$$[s_0, s_1, \ldots, s_{11}]$$$ according to the answer. Let's choose the query that minimizes $$$\sum\limits_{i=0}^{11}{s_i \log s_i}$$$ — the most informative query in the entropy sense — and ask it. When we receive the response, we move to the corresponding sublist and do the same work recursively. 7 queries are enough to determine the types of all mushrooms in the set. Funnily, minimizing the maximal sublist size instead of the entropy leads to a 19-in-8 algorithm, which is not good enough.
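A small offline brute force along these lines could look as follows. This is only a sketch to reproduce the 11-in-7 claim, not contest code; it assumes a query's simulated answer is the number of B's inside the chosen subset.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Returns how many counting queries the entropy-greedy strategy needs in the worst
// case to distinguish all masks in `cand` (bit = 1 means that mushroom is a B).
int solveDepth(const vector<int>& cand) {
    if (cand.size() <= 1) return 0;                      // everything is determined
    int bestQ = -1;
    double bestScore = 1e18;
    for (int q = 1; q < (1 << 11); ++q) {                // try every possible query subset
        array<int, 12> cnt{};                            // bucket sizes s_0..s_11 by answer
        for (int m : cand) cnt[__builtin_popcount(m & q)]++;
        double score = 0;
        for (int s : cnt) if (s > 1) score += s * log((double)s);
        if (score < bestScore) { bestScore = score; bestQ = q; }
    }
    // Split by the answer to the chosen query and recurse; the worst bucket
    // dictates how many further queries may be needed.
    array<vector<int>, 12> buckets;
    for (int m : cand) buckets[__builtin_popcount(m & bestQ)].push_back(m);
    int worst = 0;
    for (const auto& b : buckets) worst = max(worst, solveDepth(b));
    return 1 + worst;
}

int main() {
    vector<int> all(1 << 11);
    iota(all.begin(), all.end(), 0);                     // all 2^11 possible masks
    cout << solveDepth(all) << '\n';                     // should report 7 per the comment
}
```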
To be honest, I would be a bit scared if I had to actually write this solution, because it seems totally undebuggable and has tons of technical details. Is this the intended one? Can it be optimized further, or is 226 optimal for some reason? Is it correct, or do I have some off-by-one error (those often cost several extra queries)?
If you minimize the maximal sublist size, it's possible to get 31 known mushrooms in 11 queries. Though with a size like that (n = 20) you can't just try every possible query; what you can do instead is choose 20 random candidate queries, which still leads to 31 mushrooms in 11 queries. I used this idea in the contest but only got about 80 points, because I didn't realize that you can also identify one extra mushroom at the end of each query; I only used queries to get the counts.
Auto comment: topic has been translated by Azret (original revision, translated revision, compare)
My bad mushrooms solution which should be 227/228 queries but got lucky:
Part 1: Do 2 queries with A_. Then do 1 query with A_A_ (switch A with B if necessary). Now we know the types of 5 mushrooms.
We can repeat the following process to find 5 mushrooms per 2 queries until we know 2*m-1 mushrooms:
Query A_A_A_. If x/2 != 1 (where x is the result), we can determine all 3 unknown mushrooms. Otherwise, we know that the first unknown differs from the second (and the third is still determined by the parity of x). We then query B_BA_A_A_ (the 2 ambiguous mushrooms from the previous query go into the first 2 blank spots). x will be from 1 to 8, and it uniquely determines all 2^3 possibilities.
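To see why the second query pins down all $$$2^3$$$ possibilities, here is the decoding under one reading of the layout (my interpretation of the order: $$$B, u_1, B, A, u_2, A, v_1, A, v_2$$$, where $$$u_1, u_2$$$ are the two ambiguous mushrooms and $$$v_1, v_2$$$ are new unknowns). Counting differing adjacent pairs with Iverson brackets gives

$$$x = 2[u_1 = A] + 1 + 2[u_2 = B] + 2[v_1 = B] + [v_2 = B] = 1 + 4[u_1 = A] + 2[v_1 = B] + [v_2 = B],$$$

using $$$[u_2 = B] = [u_1 = A]$$$, so each of the 8 combinations yields a distinct value of $$$x$$$ between 1 and 8.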
Note that if we don't have 2 known B's, the second query is impossible. Fortunately, this can happen at most 2 times, since every time it happens the counts of both types increase by 1.
Part 2:
WLOG, we know more A mushrooms than B mushrooms. We will query A_A_...A_ (2*l mushrooms in total), which gives us the count of B's among the first l-1 unknowns, as well as the exact type of the last one. We repeat this until all mushrooms have been found.
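A sketch of this Part 2 loop against the IOI-style use_machine() interface might look like this (finish_counting and its arguments are illustrative names, knownA is assumed non-empty, and only the A-majority case is shown):

```cpp
#include <algorithm>
#include <vector>

// Provided by the grader: returns the number of adjacent pairs of different
// species in the given sequence of mushroom indices.
int use_machine(std::vector<int> x);

// knownA: indices known to share mushroom 0's species; rest: still-unknown indices;
// countA: number of A's identified so far. Returns the final count of A's.
int finish_counting(std::vector<int> knownA, std::vector<int> rest, int countA) {
    std::size_t pos = 0;
    while (pos < rest.size()) {
        // Interleave up to |knownA| unknowns with known A's: A _ A _ ... A _
        std::size_t l = std::min(knownA.size(), rest.size() - pos);
        std::vector<int> q;
        for (std::size_t i = 0; i < l; ++i) {
            q.push_back(knownA[i]);
            q.push_back(rest[pos + i]);
        }
        int r = use_machine(q);
        int lastIsB = r % 2;                 // the trailing unknown adds 1 if it is a B
        int bAmongFirst = (r - lastIsB) / 2; // sandwiched unknowns add 2 per B
        countA += (int)(l - 1) - bAmongFirst;      // A's among the first l-1 unknowns
        countA += 1 - lastIsB;                     // the last unknown is now known exactly
        if (!lastIsB) knownA.push_back(rest[pos + l - 1]);
        pos += l;
    }
    return countA;
}
```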
Choosing m carefully (m=93; I found it by testing all possibilities of m) and handling special cases in Part 1 gives 227. Surprisingly, I got 99.56 with m=93 without handling the special cases (I expected 228), so I decided to try m=83 and m=103. Even more surprisingly, m=103 got AC... It seems like making test cases for this problem is hard :D
Edit: Changed m to 2*m-1 in Part 1
The solution to A (biscuits) is dynamic programming with some math-y observation.
We let dp(i) be the number of valid y with the i^th bit being the largest bit turned on. We observe that if the i^th bit is the largest bit turned on, there is a greedy sequence of smaller bits we can turn on. We also observe that if we do not turn on a bit in the greedy sequence, we can freely turn on and off all smaller bits. We use these observations to construct an O(k^2) dynamic programming solution.
Note: apparently, there is an alternative solution. We let dp(i) be the number of valid y smaller than i, and we only compute dp(2^i). I'm not precisely sure how the details work.
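Neither description spells out the details, so here is only a hedged sketch of a related building block rather than the DP itself: a per-candidate feasibility test that checks whether a single value $$$y$$$ can be packed into $$$x$$$ bags (can_pack is my own name). For instance, with $$$x = 3$$$ and $$$a = [5, 2, 1]$$$ it accepts exactly $$$y \in \{0, 1, 2, 3, 4\}$$$; a counting solution then has to count such $$$y$$$ without enumerating them one by one.

```cpp
#include <cstdint>
#include <vector>

// Sketch: can x bags, each of total tastiness y, be packed when there are a[i]
// biscuits of tastiness 2^i? Bits are processed from low to high; leftover biscuits
// at bit i pair up into units of 2^(i+1) and are carried upward. Assumes the IOI
// constraints (k <= 60, total tastiness <= 1e18), so 62 bits are enough.
bool can_pack(int64_t x, int64_t y, std::vector<int64_t> a) {
    a.resize(62, 0);                              // pad so every bit of y has an entry
    int64_t carry = 0;                            // 2^i-units formed from lower-type leftovers
    for (int i = 0; i < 62; ++i) {
        int64_t have = a[i] + carry;              // units of 2^i available at this bit
        int64_t need = ((y >> i) & 1) ? x : 0;    // one unit per bag if bit i of y is set
        if (have < need) return false;
        carry = (have - need) / 2;                // pair the leftovers up for the next bit
    }
    return true;
}
```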
Probably mildly controversial opinion: I found biscuits easier than getting partial scores on mushrooms. Also stations is very big brain when you see it.
I found stations pretty simple, and was one of the first ACs, so I guess I'll explain some of the motivation behind the solution.
The first thing we think about is some form of tree ordering, as we are trying to determine which subtree to traverse. This scores about 66 points.
However, we realise the preorder of node s is not used, and we need 1 more "bit" of information, the postorder. Hence, replacing alternate layers comes quite naturally. It is also one of the few tricks for tree problems, such as this
https://oj.uz/problem/view/JOI20_stray
Personally, I find many tree problems uninteresting as they are solvable with technique bash. They are also very easy and often "overmugged". I guess this is why they are so rare at IOI.