The Code Jam Qualification Round has officially begun! → goo.gle/cj2022
You've got until 02:00 UTC on April 3 to register AND score enough points (at least 30) to advance to Round 1.
Registration closes at the end of the round!
lol they actually implemented Punched Card Python. I'm curious if anyone used it to solve a problem.
I sent an e-mail to them on 20th November 2021, requesting that they add the '-frelease' option to their GDC compiler command line:

Hello,

Would it be possible to add the '-frelease' command line option for your GDC compiler, to ensure that it compiles fast binaries?

The '-frelease' option disables some extra safety checks in the standard library. More importantly, it also disables array bounds checking, which may be quite expensive if a submitted solution does a lot of array indexing.
They thanked me for my feedback, but did nothing. Now I know that they were too busy implementing this Punched Card Python thingie... Because priorities do matter.
Edit: This is resolved now and the GDC command line has been updated to add the "-frelease" option. I also got a notification e-mail from the GCJ people. Looks like they are reading Codeforces blogs :)
Hey, are you guys also facing the same problem with Twisty Little Passages? My code passes with well above the required accuracy in the testing tool but gets wrong answer when submitted.
Maybe you need to check the contents of the local testing tool: it generates tests randomly with certain strategies, while the official test cases may not be random (they may be hand-crafted or generated with human guidance).
Yep, it looks like the testing tool does indeed test, but only with graphs of a certain structure, unlike the system tests. You can see how I handled this at 1:25:25 in my screencast/solution video, which will be available as soon as the contest ends.
The local tool is really weak (IMO they could have explained that more clearly). E.g. I tested a solution which did not use the W command at all (basically equivalent to the first paragraph of the official editorial), and it passed locally.

I edited the local testing tool to generate only stars (https://en.wikipedia.org/wiki/Star_(graph_theory)). Then my first submission which passed that locally also got AC. In fact, I'm wondering if the remote tests are also somewhat weak, as I did not expect my solution to pass.
I guess they did not want to add a star generator to the local tool, since realising that uniform sampling converges too slowly for such graphs (i.e. graphs containing high-degree vertices) is part of the solution.
Is there system testing? I have enough points to make round 1 if everything passes, and I'm too tired to get more than that.
If the problem has the green check mark, it passed all the tests. If it has a question mark, the results won't be shown to you. (Since most problems don't have hidden tests in this round, if you already have enough points, then you're good to go.)
Is there still an opportunity to somehow edit the list of users one is following? If I remember correctly, there was a star-like button in the standings, but now I see nothing like that.
Just below your name, where the search button is, click in the search box and wait for a pop-up menu. Then click on 'Following'. You'll get a list of all your starred participants.
My question was about editing that list, i. e. how to add a new user to it.
Search for the person's name in the same place where you search for countries, and then click the star beside their name to add them to your list.
Oh, thank you. Apparently, one of my browser extensions blocked these stars for some reason, so they didn't show up on the standings page for me. Now I've disabled the extension and the stars are back. Thank you again.
So, I solved Twisty Little Passages by an intuitive guess. It appears to be correct since the analysis lists it as a correct approach, but that doesn't help in proving the correctness, since the analysis offers no proof as to why it works:
My idea is to perform the following $$$\frac{k}{2}$$$ times: teleport to a random node and record its degree, then walk to an adjacent node and record its degree as well.
Then the guess is $$$(\frac{2n}{k} \cdot (\text{sum of degrees for teleport nodes})) + \text{sum of degree of walk nodes that haven't been reached by teleport}$$$.
My intuition was roughly that the teleports give a uniform sample of typical vertices while the walks tend to catch the few high-degree ones, but I have zero clue how to prove anything about the accuracy or precision of this heuristic.
I even ignored this idea at first and left the contest since I didn't think this would work, only coming back to code and submit it 12 hours later when I was far more bored.
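A minimal Python sketch of this heuristic, assuming the usual interactive protocol for the problem (read $$$N$$$ and $$$K$$$ plus an initial room report, issue T/W commands that are each answered with a room number and its degree, and finish with an E estimate); the final halving assumes the guess above is a degree sum rather than an edge count:

```python
import random

def read_room():
    # The judge is assumed to reply "<room> <degree>" after every command.
    r, d = map(int, input().split())
    return r, d

def solve(n, k):
    teleport_deg = {}                    # room -> degree, from uniform teleports
    walk_deg = {}                        # room -> degree, from walk steps
    rooms = list(range(1, n + 1))
    random.shuffle(rooms)                # distinct teleport targets (a simplification)
    rounds = min(k // 2, n)              # each round costs one T and one W command
    for i in range(rounds):
        print("T", rooms[i], flush=True)
        r, d = read_room()
        teleport_deg[r] = d
        print("W", flush=True)
        r, d = read_room()
        walk_deg[r] = d
    # Teleported rooms are a uniform sample, so scale their degree sum up to all n rooms;
    # rooms seen only via walks (typically high-degree) contribute their degree exactly.
    degree_sum = (n / rounds) * sum(teleport_deg.values())
    degree_sum += sum(d for r, d in walk_deg.items() if r not in teleport_deg)
    print("E", int(degree_sum // 2), flush=True)   # passages = (sum of degrees) / 2

def main():
    t = int(input())
    for _ in range(t):
        n, k = map(int, input().split())
        read_room()                      # initial room report, unused here
        solve(n, k)

main()
```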
My solution was slightly different, but used the same ideas.
On the proof, I had flashbacks to my Master's thesis on sublinear algorithms. For example, if you want to tell with a certain confidence whether a graph is bipartite, it can be proven that if you choose a specific sublinear-size sample of nodes and check whether the sampled subgraph is bipartite, that answer will hold for the whole graph with a certain probability.
So I had a hunch that you just need to sample, and the fact that they gave us walks was a big hint.
Although I imagine there's a certain element of luck to it, and some sensible ideas probably wouldn't work in reality.
Cool! I hope this kind of problem appears in the future (e.g. in GKS; they seem to like such statistics-flavoured problems). (But maybe not this specific one; it seems easy to get AC on it without a proof.)
Section 3 of this paper gives a solution to this exact problem. It is indeed very similar.
There is also this paper. It seems that one should also be able to solve the problem with random walks.
idk why but whenever I visit that site it shows me this screen (I've never downloaded, and couldn't download, anything from there)
I also did the teleport to a random node X (I didn't even check that it wasn't visited before), then walked to an adjacent node Y, and repeated this k/2 times.
For the sum I passed the tests with an even simpler idea, which is summing up degree(X) if and only if degree(X) <= degree(Y) (and then multiplying by N / number of rounds).
The idea comes from a double-counting-avoidance trick: assign each edge an orientation from its lower-degree endpoint to its higher-degree endpoint. Since this method of sampling is inherently oriented, you only count a sample if you hit the right orientation. This is a very simplified version of the methods mentioned in these papers: https://arxiv.org/pdf/2011.14291.pdf and https://arxiv.org/pdf/1604.03661.pdf
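A rough Python sketch of that oriented variant, under the same interactive-protocol assumptions as the sketch above:

```python
import random

# Oriented-sampling variant: teleport to X, walk to a neighbour Y, and count deg(X)
# only when deg(X) <= deg(Y). read_room() is the same assumed helper as above.
def estimate_oriented(n, k, read_room):
    rounds = k // 2
    acc = 0.0
    for _ in range(rounds):
        print("T", random.randint(1, n), flush=True)
        _, deg_x = read_room()
        print("W", flush=True)
        _, deg_y = read_room()
        if deg_x <= deg_y:
            acc += deg_x
    # acc / rounds approximates (number of passages) / n, since each edge is counted
    # from its lower-degree endpoint only, so scale by n / rounds.
    print("E", int(acc * n / rounds), flush=True)
```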
Here's my solution to Twisty Little Passages:
First note that if $$$N \le K$$$ then we can visit all the rooms, and calculate that the number of passages is exactly (sum of degrees) / 2.
Now if $$$K < N$$$, if we know the average degree $$$D_{avg}$$$ then the answer is $$$\frac{D_{avg} \cdot N}{2}$$$. How can we estimate $$$D_{avg}$$$? An initial approach is to teleport to $$$K$$$ random rooms, and guess that $$$D_{avg}$$$ is the average of the degrees of the visited rooms. However, this will not work well for a 'star' input, with one vertex of degree 99999 and 99999 vertices of degree one. (In general, it doesn't work well for graphs where most vertices have 'low' degree and very few vertices have 'high' degree, because the few high-degree vertices can shift the average degree significantly yet are unlikely to make it into our sample.)
So we save a few (I used 50) commands just for finding the ultra-high-degree vertices. Note that if most vertices have low degree but a few have high degree, we are likely to find one of the high-degree vertices just by walking from a random node. Here is the algorithm sketch, roughly: spend the first ~50 commands on teleport-plus-walk pairs to discover the ultra-high-degree vertices, spend the remaining commands teleporting to random rooms to estimate the average degree of the typical vertices, and account for the discovered high-degree vertices separately.
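A minimal Python sketch along those lines (the 50-command budget split and the way the two estimates are combined below are my own assumptions, not necessarily what the original solution did):

```python
import random

# Rough sketch of the two-phase idea above; the budget split and the way the two
# estimates are combined are assumptions. read_room() is the assumed judge-reply helper.
def estimate_two_phase(n, k, read_room):
    probe = min(50, k // 2)
    found = {}                       # rooms reached by walking; likely to include the heavy ones
    for _ in range(probe):
        print("T", random.randint(1, n), flush=True)
        read_room()
        print("W", flush=True)       # a random walk step tends to land on a high-degree room
        r, d = read_room()
        found[r] = d

    sample = {}                      # uniform sample of rooms via the remaining teleports
    for _ in range(k - 2 * probe):
        print("T", random.randint(1, n), flush=True)
        r, d = read_room()
        if r not in found:
            sample[r] = d

    # Estimate the average degree of "typical" rooms from the uniform sample, and
    # account for the explicitly discovered rooms exactly.
    avg = sum(sample.values()) / max(1, len(sample))
    total_degree = avg * (n - len(found)) + sum(found.values())
    print("E", int(total_degree // 2), flush=True)   # passages = (sum of degrees) / 2
```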
Hi,
Could anyone provide the intuition behind Chain Reaction?
The given graph is a directed tree (actually a forest). Imagine the abyss as the root and the vertices with in-degree 0 as leaves. Imagine a person standing at every leaf who wants to reach the abyss (the root). Consider a junction: if 3 people (3 children) are waiting at it, only one of them can pass through (i.e. only one can activate the junction). For the rest of the people, who cannot pass, the journey ends there and their values are added to the answer. So for every junction, sort the values of its children, let the minimum value pass through the junction, assign max(min_value, val[junction]) to the junction, and add the rest of the values to the answer.
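A short Python sketch of this greedy; the input handling and the small example at the end are a hypothetical illustration, not the official I/O format:

```python
# Hedged sketch of the greedy described above for Chain Reaction. For every vertex
# (processed from the leaves upward), the smallest child value continues through it;
# all other child values are banked into the answer.
def solve(n, fun, parent):
    children = [[] for _ in range(n + 1)]      # parent[i] == 0 means "points to the abyss"
    for i in range(1, n + 1):
        children[parent[i]].append(i)

    value = [0] * (n + 1)   # value[v] = fun of the chain that passes through v
    banked = 0              # fun of chains that terminate at some junction or the abyss

    # Iterative traversal of the forest rooted at the abyss (vertex 0),
    # then process vertices in reverse order so children come before parents.
    order, stack = [], [0]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    for v in reversed(order):
        child_vals = [value[c] for c in children[v]]
        if v == 0:
            banked += sum(child_vals)           # every chain reaching the abyss counts fully
            continue
        if child_vals:
            passing = min(child_vals)           # only the smallest chain passes through v
            banked += sum(child_vals) - passing
        else:
            passing = 0                         # v is an initiator itself
        value[v] = max(passing, fun[v])
    return banked

# Hypothetical toy example: machines 2 and 3 point to 1, machine 4 points to 2, 1 -> abyss.
fun = [0, 60, 20, 40, 50]        # 1-indexed fun values; fun[0] unused
parent = [0, 0, 1, 1, 2]
print(solve(4, fun, parent))     # prints 110 for this toy example
```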
The following solution for Chain Reaction was leaked yesterday during the contest:
https://www.codepile.net/pile/ae18xmX8
A lot of contestants submitted this code with or without modifications.
It’s a bit silly but as someone pointed out to me last year, collaboration is actually allowed during qualification. These people will have no chance during the more time limited rounds. I wouldn’t worry.
Did anyone manage to solve the last problem using the importance sampling approach from the editorial?
As I understand it, the following pseudocode should work:
It passes the local testing tool samples with a small error, but fails in system tests.
In the count you need to consider the weights rather than the number of nodes. Inside the else block, if you replace the

count += 1;

line with

count += prevRoomLinks / currentRoomLinks;

it should pass the system tests.

Now it was accepted, thanks a lot!
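For reference, here is a hedged Python sketch of a weighted estimator in the spirit of this exchange; it is my own reading of the fix above (walked rooms weighted by prevRoomLinks / currentRoomLinks), not the original pseudocode or the editorial's exact method:

```python
import random

# Sketch: alternate T/W pairs and build a weighted degree sample. Teleported rooms get
# weight 1; walked rooms get weight prev_links / cur_links, mirroring the fix above.
# read_room() is the same assumed judge-reply helper as in the earlier sketches.
def estimate_weighted(n, k, read_room):
    weighted_deg = 0.0   # sum of weight * degree over all observations
    weight_sum = 0.0     # sum of weights ("count" in the discussion above)
    for _ in range(k // 2):
        print("T", random.randint(1, n), flush=True)
        _, prev_links = read_room()
        weighted_deg += prev_links
        weight_sum += 1.0
        print("W", flush=True)
        _, cur_links = read_room()
        w = prev_links / cur_links          # down-weight rooms that are easy to walk into
        weighted_deg += w * cur_links
        weight_sum += w
    avg_degree = weighted_deg / weight_sum
    print("E", int(avg_degree * n / 2), flush=True)   # passages = avg degree * N / 2
```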
I think there is a family of test cases that can hack (or challenge) the "importance sampling" solution explained in the editorial for the last problem, Twisty Little Passages. I modified the local testing tool provided in the problem statement to generate these test cases, which the official solution should fail. It can be downloaded from here.
The idea is to fix $$$N = 100000$$$ vertices and have:
In the modified local testing tool I didn't make graph $$$A$$$ complete. I made every vertex have the same high degree, but not as high as $$$|A| - 1$$$, so that the python code could run faster. With degrees being high enough, the test cases should work fine.
For instance, let's take:
The average vertex degree for the graph is:
Let's see what the estimation of the importance sampling solution could be. Recall that we have $$$K = 8000$$$ operations available: $$$4000$$$ will be teleports ("T") to a random vertex and $$$4000$$$ will be walks ("W") through a random edge from those vertices. Each teleport operation determines in which subgraph ($$$A$$$ or $$$B$$$) the next edge walk will happen:
Since there are just $$$2$$$ special vertices among the $$$100\text{k}$$$ total vertices, it is unlikely that one of them is chosen in any of the $$$4000$$$ teleportations. We have 2 cases:
$$$\implies$$$ The relative error of this estimation is approximately $$$ \frac{21.45}{13.6} - 1 \sim 1.577 - 1 = +0.577 $$$, way above the maximum allowed of $$$+0.333$$$.
$$$\implies$$$ The relative error of this estimation is approximately $$$\frac{5.55}{13.6} - 1 \sim 0.408 - 1 = -0.592$$$, way below the minimum allowed of $$$-0.333$$$.
Note: for convenience, insignificant "epsilons" are left out and the formulas are approximated to make them easier to process mentally. Nevertheless, the numbers in the results match the true values up to the number of digits shown. The modified local testing tool generates $$$100$$$ test cases, randomly choosing the number of special vertices (between $$$1$$$ and $$$4$$$), the size of subgraph $$$A$$$ (between $$$10\text{k}$$$ and $$$15\text{k}$$$), and the degree of its vertices. I expect the official importance sampling solution to fail all $$$100$$$ tests with probability near $$$1$$$.