DCJ Round 2 starts soon.
"You will advance to the onsite final round for Distributed Code Jam if you are one of the top-scoring 15 contestants from Distributed Code Jam Round 2."
Let us discuss problems here afterwards.
Best of luck to all competitors!
If one of the organizers is looking at this post, it would be good to indicate the date when the DCJ testing tool was last updated, so that we know whether we need to update it.
When school reopens tomorrow, but you have to code till midnight. #dcjr2
GLHF everyone!
Judging the Larges will take at least a few hours, so no need to keep refreshing. :) We will email all participants once the scoreboard has been updated (but the results will not yet be official at that point). Thank you for participating.
A few hours... The suspense will kill me.
Furthermore, the official Code Jam Twitter is mocking us. I'd like to know who those people are! :)
If I haven't failed completely, problem E translates to "find the minimum price over every segment of length GetTankSize() (with everything padded to infinity) and output the sum over all GetNumKms() segments". How do you distribute this?
What I did was compute the minimum of a contiguous block in each node, then get the minimums as in sqrt-decomposition (the full minimum for some blocks, plus a few elements in partial blocks).
I'm pretty sure I made an error in implementing it that puts me above the message limit, though, so I'll fail this one...
Give an interval of those segments (let's say K of them) to each node. Now for a given node there are at most 2*K numbers that are not present in every segment. Handle those numbers manually. The numbers that are present in every segment form an interval, and we are only interested in the minimal number there. Those queries can be distributed easily, I think.
As usual, cut the array into blocks, and each node gets a block. First get the min of each block and send all mins to everyone. Then each node finds the mins of all GetTankSize()-segments ending in its block. To do this it only needs the mins of all blocks that are fully contained in such segments, plus at most three other blocks (including its own).
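In case a concrete skeleton helps, here's a compilable sketch of that approach. The assumptions are mine: the standard DCJ message.h calls (MyNodeId, NumberOfNodes, PutLL, Send, Receive, GetLL) and a guessed problem header; GetGasPrice(i) is an invented name (only GetTankSize()/GetNumKms() appear in the thread). Not anyone's actual contest code, just the pattern.

```cpp
#include <message.h>       // DCJ API: MyNodeId, NumberOfNodes, PutLL, Send, Receive, GetLL
#include "gas_stations.h"  // hypothetical header: GetGasPrice, GetTankSize, GetNumKms
#include <algorithm>
#include <cstdio>
#include <deque>
#include <map>
#include <utility>
#include <vector>
using namespace std;
typedef long long ll;
const ll INF = 1e18;

ll n, k;
int nodes, me;
ll BlockStart(int b) { return n * (ll)b / nodes; }

int main() {
  n = GetNumKms(); k = GetTankSize();
  nodes = NumberOfNodes(); me = MyNodeId();
  ll lo = BlockStart(me), hi = BlockStart(me + 1);  // my block [lo, hi)

  // Read my block once; myMin doubles as a running prefix minimum.
  vector<ll> myVals(hi - lo), premin(hi - lo);
  ll myMin = INF;
  for (ll i = lo; i < hi; ++i) {
    myVals[i - lo] = GetGasPrice(i);
    myMin = min(myMin, myVals[i - lo]);
    premin[i - lo] = myMin;
  }

  // Step 1: broadcast my block's minimum, collect everyone else's.
  for (int t = 0; t < nodes; ++t) { PutLL(t, myMin); Send(t); }
  vector<ll> blockMin(nodes);
  for (int s = 0; s < nodes; ++s) { Receive(s); blockMin[s] = GetLL(s); }

  // sufBlock[b] = min over blockMin[b .. me-1]: full blocks to my left.
  vector<ll> sufBlock(nodes + 1, INF);
  for (int b = me - 1; b >= 0; --b) sufBlock[b] = min(sufBlock[b + 1], blockMin[b]);

  // Suffix minima of a foreign block, fetched on demand (this is the handful
  // of "other blocks" from the comment above).
  map<int, vector<ll> > suf;
  auto LoadSuf = [&](int b) -> vector<ll>& {
    auto it = suf.find(b);
    if (it != suf.end()) return it->second;
    ll s = BlockStart(b), e = BlockStart(b + 1);
    vector<ll> v(e - s);
    ll run = INF;
    for (ll i = e - 1; i >= s; --i) { run = min(run, GetGasPrice(i)); v[i - s] = run; }
    return suf[b] = move(v);
  };

  ll total = 0;

  // Windows [j-k+1, j] that start before my block: suffix of the start block
  // + full blocks in between + prefix of my block.
  ll cut = (lo == 0) ? lo : min(hi, lo + k - 1);  // first j whose window starts in my block
  int bs = 0;
  for (ll j = lo; j < cut; ++j) {
    ll s = max(j - k + 1, 0LL);            // indices < 0 are the infinity padding
    while (BlockStart(bs + 1) <= s) ++bs;  // advance to the block containing s
    vector<ll>& v = LoadSuf(bs);
    ll best = v[s - BlockStart(bs)];       // partial start block
    best = min(best, sufBlock[bs + 1]);    // blocks fully inside the window
    best = min(best, premin[j - lo]);      // my prefix [lo, j]
    total += best;
  }

  // Remaining windows lie entirely inside my block: plain sliding-window min.
  deque<ll> dq;  // indices into myVals, values increasing
  for (ll j = lo; j < hi; ++j) {
    ll idx = j - lo;
    while (!dq.empty() && myVals[dq.back()] >= myVals[idx]) dq.pop_back();
    dq.push_back(idx);
    while (dq.front() < idx - k + 1) dq.pop_front();
    if (j >= cut) total += myVals[dq.front()];
  }

  // Node 0 adds up the partial sums.
  PutLL(0, total); Send(0);
  if (me == 0) {
    ll answer = 0;
    for (int s = 0; s < nodes; ++s) { Receive(s); answer += GetLL(s); }
    printf("%lld\n", answer);
  }
  return 0;
}
```

The only cross-node traffic here is the block minima plus one partial sum per node, which is tiny compared to the message limits.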
My idea was to maintain a multiset of all values in the interval. If a sub-interval is fully covered by one of our nodes, we insert only the minimum of that sub-interval into the multiset. As we keep removing old elements, we may have to expand some of those minimum values back into their full sub-intervals, but the total number of operations shouldn't be big.
How to solve asteroids?
Judging from your results, I assume you're asking about D.
We have a DP where each cell depends on the three cells above it, so I computed the DP diagonally. Now, if we split everything into sections of rows and do the diagonal DP, we can start producing the final row of a section well before we have processed everything else in that section. So I send each element of the final row to the next node as soon as it's computed (the messages are actually sent in batches of 50 to comply with the limits).
Works in 3 seconds on maximum test case.
UPD: Got a TLE on that. Perhaps a 30K*30K grid of 1s has some fortunate behaviour, or I have bugs.
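In case anyone wants to reuse the batching trick, here's a small sketch of it in isolation. The names are mine; each message carries a length header so the receiver doesn't need to know where the last, possibly partial, batch ends.

```cpp
#include <message.h>  // DCJ API: PutInt, PutLL, Send, Receive, GetInt, GetLL
#include <vector>
using namespace std;

// Accumulate boundary-row values and ship them in batches of 50, so the
// per-node message-count limits are respected.
struct BatchSender {
  int target;
  vector<long long> buf;
  explicit BatchSender(int t) : target(t) {}
  void Push(long long v) {
    buf.push_back(v);
    if ((int)buf.size() == 50) Flush();  // the batch size mentioned above
  }
  void Flush() {  // call once more at the end for the final partial batch
    if (buf.empty()) return;
    PutInt(target, (int)buf.size());     // length header
    for (size_t i = 0; i < buf.size(); ++i) PutLL(target, buf[i]);
    Send(target);
    buf.clear();
  }
};

// Pull one value at a time; blocks on Receive only when a new batch is needed,
// so the consumer can start working before the producer has finished its rows.
struct BatchReceiver {
  int source;
  vector<long long> buf;
  size_t pos;
  explicit BatchReceiver(int s) : source(s), pos(0) {}
  long long Next() {
    if (pos == buf.size()) {
      Receive(source);
      int n = GetInt(source);
      buf.assign(n, 0);
      for (int i = 0; i < n; ++i) buf[i] = GetLL(source);
      pos = 0;
    }
    return buf[pos++];
  }
};
```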
My idea was to run small DPs from each cell of every 100th row (100x100 operations each time). After we compute all the maximums up to the next 100th row, we redistribute the results and do it again from the new rows.
Not sure if it is good enough though
Maximum total size of messages a single node can send: 8 MB.
I was thinking about splitting rows like that, but one node cannot send 30000 / 100 rows, as that is 300 * 30000 = 9M elements. Or am I misunderstanding your solution?
UPD: OK, I got from fhlasek's comment that when redistributing results, only the neighbouring blocks will affect us, so we will fit within our limits.
I haven't calculated whether it fits in the limits, but you can try tweaking this parameter, I guess (100 -> 200 or maybe 300).
I ran the straightforward row-by-row DP, with every node responsible for a block of columns. Every 300 rows the neighboring nodes exchange their blocks. Note that only the neighboring 300 cells can influence the block in the following 300 rows.
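If it helps to see the shape of that, here's a compilable sketch of the column-block plus periodic block-exchange pattern. Heavy assumptions on my part: an invented header (GetNumRows/GetNumCols/GetCellValue are made-up names), a generic "max of the three cells above, plus the current cell" DP standing in for the real Asteroids rules, and a column count divisible by the node count (30000 columns / 100 nodes = 300, which is exactly why 300 works as the exchange period).

```cpp
#include <message.h>    // DCJ API: MyNodeId, NumberOfNodes, PutLL, Send, Receive, GetLL
#include "asteroids.h"  // hypothetical header: GetNumRows, GetNumCols, GetCellValue
#include <algorithm>
#include <cstdio>
#include <vector>
using namespace std;
typedef long long ll;
const ll NEG = -1e18;

int main() {
  ll R = GetNumRows(), C = GetNumCols();
  int nodes = NumberOfNodes(), me = MyNodeId();
  ll W = C / nodes;                    // assumes C is a multiple of nodes (30000/100 = 300)
  ll c0 = W * me, c1 = W * (me + 1);   // my columns [c0, c1)
  ll lo = max(c0 - W, 0LL), hi = min(c1 + W, C);  // my block plus one-block halos

  vector<ll> dp(hi - lo), ndp(hi - lo, NEG);
  for (ll c = lo; c < hi; ++c) dp[c - lo] = GetCellValue(0, c);
  ll validLo = lo, validHi = hi;       // columns where dp is currently exact

  for (ll r = 1; r < R; ++r) {
    // After W rows only [c0, c1) is still exact; refresh the halos by swapping
    // whole blocks with the neighbors (send first, then receive: Send is
    // non-blocking, so the lockstep exchange cannot deadlock).
    if (r > 1 && (r - 1) % W == 0) {
      if (me > 0)         { for (ll c = c0; c < c1; ++c) PutLL(me - 1, dp[c - lo]); Send(me - 1); }
      if (me < nodes - 1) { for (ll c = c0; c < c1; ++c) PutLL(me + 1, dp[c - lo]); Send(me + 1); }
      if (me > 0)         { Receive(me - 1); for (ll c = lo; c < c0; ++c) dp[c - lo] = GetLL(me - 1); }
      if (me < nodes - 1) { Receive(me + 1); for (ll c = c1; c < hi; ++c) dp[c - lo] = GetLL(me + 1); }
      validLo = lo; validHi = hi;
    }
    // One DP step; the exact range shrinks by one column per side, except at
    // the grid edges, where no dependency is missing.
    ll newLo = (validLo == 0) ? 0 : validLo + 1;
    ll newHi = (validHi == C) ? C : validHi - 1;
    for (ll c = newLo; c < newHi; ++c) {
      ll best = dp[c - lo];
      if (c > 0) best = max(best, dp[c - 1 - lo]);
      if (c + 1 < C) best = max(best, dp[c + 1 - lo]);
      ndp[c - lo] = best + GetCellValue(r, c);
    }
    dp.swap(ndp);
    validLo = newLo; validHi = newHi;
  }

  // Every node reports the best value in its own columns; node 0 prints the max.
  ll best = NEG;
  for (ll c = c0; c < c1; ++c) best = max(best, dp[c - lo]);
  PutLL(0, best); Send(0);
  if (me == 0) {
    ll answer = NEG;
    for (int s = 0; s < nodes; ++s) { Receive(s); answer = max(answer, GetLL(s)); }
    printf("%lld\n", answer);
  }
  return 0;
}
```

Each node sends about 2 * (30000/300) messages of 300 values, comfortably inside the 8 MB per-node budget quoted above.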
I found my bug in E-large 1min after the end...
Watch out for the memory limit in problem E. The size of the data divided by the number of nodes is already over 38 MB, which is about one third of the memory limit.
:(
):
:(
It's strange that the test-run memory limit is 700 MB while the problems' memory limits are all 128 MB.
I wonder if they'll let me keep my 27 bonus points...
They should — any bug in the system should not be resolved to the detriment of the affected participant. ;)
Can one of you explain how zxqfl did it? Looking at the scoreboard right now, it's not obvious where those 27 bonus points come from...
In any US based competition Canadians usually get some bonus points ;)
I reported that bug during the practice round (I had doubled points for one problem), but they seemed to ignore me, and now they've learned their lesson :P.
I guess you all want something to do while waiting for the results. Make sure you're sitting comfortably, because I'm going to tell you a story.
I was in some random town (not in Warsaw, where I live) and I had a choice:
It was a town designed to be tourist-friendly (huh). Because of some unimportant things, I was free only 20 minutes before the contest. I ran around for 20 minutes but didn't find even a single spot. All rooms taken, all restaurants and cafes crowded as fuck.
I simply sat on a bench in a park. When the contest started I was still turning my laptop on. I opened the first problem maybe a minute after the contest began.
I was sweaty and a bit breathless. My laptop's battery was at 60%, and my phone's at 40% (the phone was very important because I was using its Internet connection).
It was 4 pm and the day was quite hot. The first two problems (B and C) were easy. Everything was going well, except for the sun making it hard to read my code; I didn't want to make my screen brighter because of the low battery. After getting C done, I submitted its easy version first, read E (because maybe BCE would be enough to qualify), and ran off to find another bench. I quickly found one under a tree, and thus in shadow. I came up with the "minimum over the last K numbers" idea and wrote a brute force to check it on the samples. It was fine, but only then did I see the WA on C-small. It wasn't hard to debug, and shortly after that I had C-small and C-big.
At that moment, I saw that my phone wouldn't last 3 hours as a hotspot. Fortunately, I had a charger I could plug into my laptop. So my laptop gave some power to my phone (#InternetForPower).
The weather changed a lot. I was afraid it was going to rain... but it didn't!
I submitted E, then D-small, and then spent a lot of time on D-big (and submitted it). My D-big uses the fact that there are exactly 100 nodes, so I wasn't able to test it on the small. I was lazy while coding, and in my D-big one node could check the same letter many times (up to 5 times). I calculated that this would be too slow, so I improved it and resubmitted at 2:40, 20 minutes after the first submit. It was too slow because one call takes about 0.05 microseconds, so just asking the system for letters was already about 90% of the TL.
At the end of the contest I was freezing (the weather really had changed a lot), so I decided to find some place to eat a hot soup. I went to a pub, got a soup, and 5 minutes later saw the only goal in Poland's match, hurray.
Now I'm sitting in a train with a socket under my seat. I'm in 23rd place and I hope all my problems will pass.
Cheers.
I see you are learning from the master in that area (my humble self) the art of writing stories which are not useful input to the community in any way xD.
I don't complain like you do. I've described an adventure.
I thought I was the "master in that area" after my story about drilling :)
Lol, my initial D-large was correct (2.4 s out of 5 s) but the new one gave WA. So remember: don't test your solutions!
Without the resubmit I would be in 3rd place and Poles would take places 2, 3, and 4. But 15th place is still nice. What a feeling.
So they won't let us practice (presumably because it costs money) but are perfectly fine with evaluating solutions that don't count for anything. Nice.
And suddenly it is accepted ;-) Do you know what has changed?
Looks like a bug. On the list of my submissions my first submit is still AC (and it's shown in the leaderboard) and my second submit is still WA.
Even if the problem was rejudged and my second submit now gives AC, I should get a worse total time (2:40 instead of 2:20).
Judging from my standings and others', it looks like either the contest rules changed so that they consider all submissions and take the best (fastest correct) one into account, or the standings have bugs.
Section 4.3 of the contest rules says:
(D) Large Inputs. Within a round of Distributed Code Jam, you can submit more than one solution for a large input, but only your last submission will be judged. Google will notify you whether your submission for the large input was correct or incorrect after the round ends by posting the results on the Contest website.
But even if the rule had been changed, it's strange that there are no penalties for intermediate wrong submissions on the larges.
I am sorry for my rubbish Photoshop skills.
Something like that, yes
Judged :)
Congratulations on advancing to the final round :)
Does anyone have any suggestions as to what could cause a Runtime Error on large inputs but not small ones?
It happened to me last round too, a common suggestion then was that dynamically resizing vectors were taking up too much space, so this time I only used statically allocated arrays for every problem. From what I can see in my Asteroids code:
And there is no practice system in place to test any theories :/
I added code printing how many elements you are putting into a single message, and on the grid of 30000x30000 '1's I could see a message with 30001 ints, which is more than the limit of 10KB per message. As specified in the statement, larger messages generate a Runtime Error.
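For anyone hitting the same thing: the fix is just to split long rows into chunks and call Send once per chunk. A small sketch, with the chunk size picked to stay under a 10 KB cap (4 bytes per int); the names are mine, and the receiver has to know how many values to expect:

```cpp
#include <message.h>  // DCJ API: PutInt, Send, Receive, GetInt
#include <algorithm>
#include <vector>
using namespace std;

const size_t CHUNK = 2000;  // 2000 * 4 B = 8 KB per message, under the 10 KB cap

// Send a long row of ints to `target` as several messages instead of one
// oversized message (which would trigger the Runtime Error described above).
void SendRow(int target, const vector<int>& row) {
  for (size_t i = 0; i < row.size(); i += CHUNK) {
    size_t end = min(row.size(), i + CHUNK);
    for (size_t j = i; j < end; ++j) PutInt(target, row[j]);
    Send(target);
  }
}

// Receive `total` ints from `source`, one chunk per message, in order.
vector<int> ReceiveRow(int source, size_t total) {
  vector<int> row;
  row.reserve(total);
  while (row.size() < total) {
    Receive(source);  // one chunk; sizes mirror the sender's chunking
    size_t take = min(total - row.size(), CHUNK);
    for (size_t t = 0; t < take; ++t) row.push_back(GetInt(source));
  }
  return row;
}
```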
Ah, message sizes, of course! Thanks, I can sleep now :)
I was disappointed with the problem set. Compared to last year, it was too easy and without any new ideas.
The only thing I was more disappointed with was my performance: I had more than an hour to implement the correct idea for the last problem, but for some reason it kept getting WA on the small test.