Round 1 of the 2020 Facebook Hacker Cup is less than 48 hours away!
The round will begin on August 15th, 2020 at 10am PDT and will last for 24 hours. The contest can be found here.
You're eligible to compete in Round 1 if you solved at least one problem correctly in the Qualification Round.
Everyone who scores at least 25 points (regardless of time taken) will advance to Round 2, which will take place on Sat. Aug. 29th. Please note that all submission judgments will only be revealed after the round ends. More details about rules and other information can be found in the FAQ.
We wish you the best of luck, and hope that you enjoy the contest!
The Facebook post can be found here.
Update 1: Due to an error with problem B (Dislodging Logs), only 25 points are now required to advance to Round 2, rather than 30.
Update 2: The round has ended, thank you for participating! The scoreboard can be found here, and the solutions here.
Update 3: As an update on prizes, everybody who solves at least one problem in the upcoming Round 2 will receive a Facebook Hacker Cup t-shirt! The top 200 competitors from Round 2 will also have a "Top 200" badge on their shirt.
When will the announcement about the t-shirts be made?
The only reason anyone less than international grandmaster would participate in it.
It already exists in the 'Terms of Service'.
Wow, you are right. Top 2000 from round 1 will receive t-shirts.
Sorry, that information mentioned in the Terms had been a placeholder and has been removed for now. We hope to offer such prizes based on Round 2, but we're unfortunately unable to make any public announcements about that quite yet, due to working out the effects of COVID-19 on prize shipping. We will announce more information when possible, before Round 2!
Are we getting a new design for this year btw?
back to panel 2 again for now :smug:
Didn't all the participants already agree to the terms and conditions? Can you really change that now, after setting the expectations?
Yes, changes to the TOC are taken into account in
15.1 Changes; Termination. Facebook reserves the right to limit the participation of any person in the Competition, amend or interpret these Terms or official communications governing the Competition or cancel the Competition for any reason with prior notice. Notices for any such amendment, interpretation or cancellation will be deemed given by posting on the Competition website and by virtue of a Competitor's continued participation in the Competition. A Competitor may terminate participation in the Competition upon written notice to Facebook.
of the TOC.
Damn! I can't believe I agreed to such a condition without reading it. XD
Anyway, I think every other TOC that I have actually read has had this clause. The TOC is basically a formality nowadays. :)
100% of the time terms and conditions basically translates to "We can do whatever we want".
Are you guys done reviewing all the disabled accounts LoneFox?
Yes, I believe we've helped each participant who reached out to us at hackercup@fb.com with details regarding a disabled FB account (generally caused by new FB accounts with no profile information sending friend requests to others, which was activity flagged as being normally associated with fake/spam accounts).
Lol that reminds me of a conversation I once had: "just contact me on FB" "I don't have FB" "make an account, you don't need to use it for anything except as a messenger" "when I tried that my account got instalocked everytime" "nah there's no way" "well yes way, mail me".
Please prefer formal problem statements in future rounds.
What do you mean by "formal" in this case?
Where the problem statement is not written like an essay.
By formal, I mean very short problem statements without any story. I feel the current problem statements are too long because of the story. Or you could put the story in the first paragraph in italics, so that someone who skips the first paragraph doesn't miss anything.
For example, see https://mirror.codeforces.com/contest/1286/problem/F, where the story ends in the first line.
Many times problem setters write a long story and end the last paragraph with something like "Formally, find the number of subarrays of a given array with positive sum." Someone not interested in reading the story can just read the paragraph starting with "Formally". That paragraph often uses mathematical notation to describe the problem in a line or two.
LoneFox, where is the time limit for each question mentioned?
I am talking about the maximum allowed running time of my program, aryanc403. Can you help me?
no one limits it, no one cares about it
Does it mean that writing code that does more than 10^12 iterations is fine, WitchOfTruth?
All they need is the output file, so as long as your code finishes on the given input before the 6-minute countdown ends, it's fine.
You can also write a program which takes 1e18 operations, as long as you use a language with a free compiler/interpreter and it completes those 1e18 operations in less than 6 minutes. I'm not sure if using GPUs/supercomputers is allowed, but you can check the rules.
If you still want a strict time limit, assume it is 15s on a Codeforces server. Most people use those lenient 6 minutes to debug at the last moment in case something fails, because fixes after those 6 minutes won't give you any points.
Why, MacOS stack size limit, why...
I had the same problem but on Windows. The thing is that on Windows the stack size is 1 MB, while on Linux and macOS it is bigger. This is highly operating-system dependent, but can still be changed with a few commands. Luckily the same code that gave me a stack RTE on Windows gave me AC on Ubuntu, but the submission time had passed...
Unfortunately Mac has a hard limit of 64 MB, and I'm not sure if there's a workaround.
I even tested my solution with some test cases that resulted in deep recursions, and there was no problem (since I got similarly burned in the qualification round).
So I went over to my Linux desktop, fumbled around transferring the code and the input file, it ran no problem... and I was 2 seconds too late.
Next time I'll just SSH to the desktop...
I also ran into this problem — passed validation but sadly ...... I should have tested my code before this with max limits tbh, since it's a 24h contest
anyone know a good solution to this other than having another desktop? are there safe+fast remote sites to run code with file input? i only have a mac for now, and virtual machines probably don't work if they're on mac anyway.
otherwise i will just have to code bottom up things or implement my own stack next time i guess... which is quite sad but doable
Instead of creating arrays of size 10^6 on the stack, create dynamic arrays using pointers and the new operator. This runs fine on my Windows 10, as the memory is then allocated on the heap and not the stack.
Recursion also uses stack memory, and there is no easy way to configure it to use the heap instead.
It's OK to make huge arrays as long as they're global. Global variables aren't allocated on the stack.
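A hedged example of the points above (the sizes are arbitrary, not from the contest): only the local array lives on the small stack, while global arrays and heap allocations do not, and globals are additionally zero-initialized.

```cpp
#include <cstdio>
#include <vector>

int global_arr[1000000];                 // static storage: not on the stack, zero-initialized

int main() {
    // int local_arr[1000000];           // ~4 MB on the stack: may crash under a 1 MB stack limit
    std::vector<int> heap_vec(1000000);  // heap-allocated, unaffected by the stack limit
    int* raw = new int[1000000]();       // also on the heap; the () value-initializes to 0
    std::printf("%d %d %d\n", global_arr[0], heap_vec[0], raw[0]);
    delete[] raw;
    return 0;
}
```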
Well, I decided not to bother with coordinate compression and just used a 500M array of ints for A1, which exceeded Windows' 32-bit 2 GB RAM limit. Thankfully, after panicking for 5 minutes (it was the last hour of the contest) I just launched Ubuntu using Windows Subsystem for Linux, and everything worked like a charm there.
Actually an Ubuntu virtual machine inside the Mac works, I did it that way because I didn't have a compiler on the Mac.
This is highly operating system dependent, but can still be changed with a few commands
Maximum stack size can be changed?? If yes, how?? I'm on Windows 10, and I'm encountering the same RTE problem...
Actually, I couldn't change it, but I was able to google it and found some tutorials on how to do it; some suggest using options like /F and /STACK. I really couldn't figure out how to do it, so I just ran the program in an Ubuntu virtual machine and that fixed the problem. Here you have some links: Link 1, Link 2
I use this
'-Wl,--stack,268435456'
on Windows. Pass this as a command-line argument. Try it with and without the quotes; different shells parse this differently afaik.
Damn, this was also my issue, except I was on Windows... and I didn't even know the flag to increase the stack size. Alas, I was too inexperienced. I didn't even notice it was a stack overflow before I timed out...
Lesson learned i guess..
g++ -std=c++17 -Wl,-stack_size -Wl,0x10000000 main.cpp
works to increase the stack size on macOS.
Thanks! I tried to find these command-line options in the past, but couldn't get them to work.
Thanks for the command. I've been harassed by stack-overflow problems multiple times and haven't dealt with them well.
Wondering, what does that 0x10000000 mean? The stack size in bytes (i.e. 256 MB), or a recursion depth? I searched the g++ documentation for this option without success.
g++ -std=c++17 -Wl,-stack_size -Wl,0x10000000 main.cpp // this command handles 10 million recursive calls on Mac
g++ -std=c++11 -Wl,-stack_size -Wl,0x10000000 main.cpp // this one does not. I guess g++ fixed some bugs for c++17, or this is a recent enhancement
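Another workaround worth mentioning (a hedged sketch, not something from the commands above): run the deep recursion on a worker thread created with an explicit large stack, which sidesteps the OS default limit. solve() and dfs() are hypothetical stand-ins for an actual solution; compile with g++ -pthread.

```cpp
#include <pthread.h>
#include <cstdio>

const int MAXDEPTH = 500000;

long long dfs(int d) {                 // stand-in for a deep recursive solver
    if (d == MAXDEPTH) return d;
    volatile long long pad[16];        // keep each stack frame non-trivially sized
    pad[0] = d;
    return dfs(d + 1) + pad[0] - d;    // equals dfs(d + 1)
}

void* solve(void*) {                   // all the heavy work runs on this thread
    std::printf("dfs finished, returned %lld\n", dfs(0));
    return nullptr;
}

int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 512ull * 1024 * 1024);  // 512 MB stack for the worker
    pthread_t tid;
    pthread_create(&tid, &attr, solve, nullptr);
    pthread_join(tid, nullptr);        // wait for the solver to finish
    return 0;
}
```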
deleting post since too many down votes — perhaps it was too long or the code wasn't good :)
yeah, me too, and lost my problem C...
I feel your pain! That's tough, losing two problems, man. Always worth it for the excitement when you do solve one though. I kinda like the format... the excitement of your computer or connection crashing, making sure you upload the correct file, only having 6 minutes!
Yeah, man, lessons learned. Now I would run my code with the stack-increasing command in Windows cmd, lol.
I saw that B got updated so that S=0 for all cases now. I submitted my solution before that change was made. Will my source code be run against the new cases?
Yes, earlier submissions will be rejudged accordingly, we apologize for the inconvenience!
Is the code run by Facebook against the new S = 0 cases, or is our output file compared to the old S > 0 cases?
How does this work? I just looked at my input file and S is not 0 (at least in the first few cases I checked, I didn't go through the whole file). And my solution assumes S can't be 0 (or at least, I'm not sure if it does).
So, will my submission be judged against the original output?
In my code, there is a division by S. How are you going to judge my solution?
My submission depends on $$$s > 0$$$ in a very silly way, so rejudging won't work in my case.
Besides that, my code doesn't compile without an extra library of my own (just for basic templates). From my understanding of the following rule, it should be OK:
What file formats can I submit?
Your output and source code should both be in plain text. Their file extensions don’t matter. Your source code must be a single file. If your solution depends on additional library files, please upload only the single main source code file.
Do not submit executable files (such as .exe files).
I just submitted, and after refreshing the page I saw the announcement... Since the contest is blind, they should let anyone resubmit with the new constraints. The constraint on S before was strictly greater than 0, and someone who put in an assertion may even fail now...
Don't worry -- if need be we'll handle any such cases manually as they arise.
I just realized that S = 0 in the new updated test for Problem B. This change happened just as I was submitting. When validating, I got the S != 0 input set. I waited for around 10 minutes after submitting the validation output file to ensure that my solution was correct, and then downloaded the final input file, unaware of this new constraint.
Upon running my code, I got an error and realized that S = 0. I reread the problem and discovered this new constraint. For this reason, I completely missed the submission window for B. I believe that something should be done about this.
First, all competitors that submitted before the change should be given an option to resubmit for the new version of the problem (if they think it's easier). Otherwise, those who submit B later get an advantage. I believe that this "resubmission" window should be rather lenient especially for users like me who had a gap between validation and final submission.
Furthermore, when something else like this happens, I believe that FHC should have a way of notifying contestants (e.g. CF's way of giving a browser notification), not simply relying on contestant's refreshing the page. I had no idea this happened until I looked at the input file.
wjomlex LoneFox please look into this. I think that this is a potential reason to make the round "unrated". This new constraint will mess up the point values associated to problems and may cause an imbalance in problems.
Edit: My timer has been refunded. Thank you to FHC for the fast response! Nevertheless, I think a better notification system of these issues should be established. Do you think that the point values will be recalculated?
Yes, you're right, thank you for the suggestion. Aside from rejudging submissions that were made at all, we're also working to reset the timer for everybody who downloaded the input but made no submissions, as they could have also been affected (though there are few such cases).
Based on what we've seen, we're proceeding with the round and the existing point values. We believe that, though unfortunate, this issue has had little effect on the balance and results of the round.
I'm unable to download the validation input file for B (but did it for A1 earlier without issues). Am I missing something or is there an outage?
It should be working, can you please try again after reloading your page, and send us a clarification if the issue persists?
Thanks for your reply. I was able to get it working by quitting Chrome entirely (it seems like the download was hanging in some no-person's-land).
Now I have another issue -- I'm getting WA on the validation input. I figured out why my algorithm is wrong, but I'm not sure how to fix it. Are you the right person to ask for help with this? :D
You can download the validation input and re-validate as many times as you need to. Feel free to send a clarification through the contest page if you encounter any issues.
I don't understand why the constraint on S was changed for problem B.
I think S > 0 worked perfectly fine.
Let's not discuss the problem from a running contest and wait for the official explanation later on (I have my suspicion about what happened).
Did they forget to generate different S for each test case?
Assume $$$P$$$ and $$$Q$$$ to be sorted. Let $$$x_i$$$ be the index of the log driver that set the explosive charge on cluster $$$i$$$. For $$$S = 0$$$, there will be an optimal solution in which $$$x_i \leq x_{i + 1}$$$ for all $$$i$$$.
But for $$$S > 0$$$, this might not be true.
Example : $$$P = [0, 50]$$$, $$$Q = [1, 2, 3, 4, 75, 100]$$$, $$$S = 10^9$$$. In the optimal solution, log driver at position $$$0$$$ covers clusters at $$$1, 2, 100$$$, and the other log driver covers clusters at $$$3, 4, 75$$$.
Auto comment: topic has been updated by LoneFox (previous revision, new revision, compare).
For some reason I couldn't run the full test case file on 1B... and couldn't submit my solution within the 6 minutes. Tried to run it with both file and system io.
Sad :( Does this have to do with the error just announced?
I'm afraid that would have been unrelated — you should have still received a valid input file, meaning that it would have been an issue with your code.
Why does making a problem easier reduce the cutoff instead of increasing it?
Using dynamite on the river kills the fish. The fish had their revenge in the failure of this task.
Help, LoneFox. I just solved the chapter 1 problem in today's round, and when I ran the actual input file on my 32-bit Python installation, it gave me a memory error, and in my panic I couldn't think much at that time. Then, after my 6 minutes had elapsed, I tried the same input file and code on a 64-bit installation, and it generated the output file. I know it's my fault, but still, it feels really bad when you have the solution and can't submit at such an important contest because you chose the wrong version. Can anything be done regarding this, please?
Two things:
Just one small doubt: do I get points for B if I submit now, after the change, if my solution complies with the change (S=0) and is correct? I didn't make a submission for B before the change.
You can still solve B and score points.
So what about this problem? I assume that this error is now fixed, all samples (including the validation input) are correct and we can still solve it?
That is correct.
Weird inputs (large). Just declaring the array as global gave the correct output; otherwise I was getting a segmentation fault.
Debugging it took more than 6 minutes, and I was not able to submit my solution.
Even B is giving WA (I think my logic is correct).
Now there seems to be no way I can make it to the next round.
You must not be initializing your array. Declaring globally implicitly initializes the array for you.
Submitted C; it showed "processing your submission" for ~1 min, the timer expired, and it said a problem occurred, please try again. Not being a crybaby, but please do something for me.
The friends standings page is not available anymore, is this intended?
It looks like there's a bug affecting some people. We'll try to fix it soon.
Does anyone have a solution to the old version of B (S > 0)?
I was unable to come up with a solution that passes this test case:
The log driver at position 2 sets an explosive charge at the log at position 9.
The log driver at position 4 sets an explosive charge at the logs at positions 5 and 8.
This case also hacked all of my friends' solutions (possibly this is why they changed the problem).
Any insights?
My suspicion is that they thought the binary search would work for S > 0 too. Before seeing your example I was convinced that the optimal solution would involve the leftmost driver going to the leftmost groups of logs. Really curious about what the solution & best complexity is for the original problem
How to solve A2 and B? I was thinking of a lazy-propagation-based solution for A2 but couldn't come up with one! Could anyone please give me the correct approach for these problems? Thanks in advance!
B is binary search. Process drivers from left to right. Leftmost driver visits the leftmost cluster, then tries to go right as far as possible. Or vice versa, he goes right then visits the leftmost cluster.
Yeah, exactly, at first glance it looks like binary search to me, but how do you simulate it for a given k seconds? My mind was revolving around right-left-right-left movements only.
Well, there are 4 possible routes for each driver: L, R, LR, RL. We know the starting point and the leftmost point, and we have to maximize the rightmost point. If we can't visit the leftmost point, the answer is no.
Current B (S=0): http://mirror.codeforces.com/problemset/problem/343/C
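For reference, here is a hedged sketch (my own variable names and toy example, not the official solution) of the check inside such a binary search for the S = 0 version: process drivers in sorted order, and for each one pick the best of the routes above relative to the leftmost still-unvisited cluster.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Can all clusters Q (sorted) be visited within time T by drivers at positions P (sorted)?
bool feasible(long long T, const vector<long long>& P, const vector<long long>& Q) {
    size_t j = 0;                             // leftmost cluster not yet visited
    for (long long p : P) {
        if (j == Q.size()) break;
        long long reach;                      // rightmost point this driver can still cover
        if (Q[j] >= p) {
            reach = p + T;                    // route R: just walk right
        } else {
            long long d = p - Q[j];           // distance left to the pending cluster
            if (d > T) return false;          // can't even reach it in time
            // route LR (left first, then right) vs route RL (right first, then back left)
            reach = p + max(T - 2 * d, (T - d) / 2);
        }
        while (j < Q.size() && Q[j] <= reach) ++j;
    }
    return j == Q.size();
}

int main() {
    vector<long long> P = {2, 4}, Q = {1, 5, 8, 9};   // toy instance
    long long lo = 0, hi = 2000000000LL;
    while (lo < hi) {                                 // smallest feasible time
        long long mid = (lo + hi) / 2;
        if (feasible(mid, P, Q)) hi = mid; else lo = mid + 1;
    }
    cout << lo << "\n";                               // prints 5 for this toy instance
}
```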
That's my problem, when I read it, I just copied my old code and it worked :D
I'm so glad I had solved it before hackercup.
I had taken almost 10-12 hours to figure out the correct solution back then. (Seeing it for the first time during hackercup would have been very bad for me XD).
PS: It was a nice problem btw!
Can someone share his solution for problem C?
For each vertex, calculate with DP the following values only considering its subtree:
Now, consider removing an edge connecting a vertex u and its parent. The optimal way is to connect the maximum-size connected component of each separated tree. For the subtree of u, we can just use the DP value from before. For the other tree, the DP values coming from the parent can be calculated with the rerooting technique. You should take care of the case where there is no uninfected patient in one of the trees: any pair is optimal in that case.
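As a minimal illustration of just the edge-removal bookkeeping mentioned above (not the full DP or rerooting), a single DFS computing subtree sizes already tells you the sizes of the two trees created by cutting the edge between u and its parent; the small tree in main() is an arbitrary example.

```cpp
#include <bits/stdc++.h>
using namespace std;

int n;                               // number of vertices
vector<vector<int>> adj;             // adjacency list of the tree
vector<int> sz;                      // sz[v] = size of v's subtree when rooted at 0

void dfs(int v, int parent) {
    sz[v] = 1;
    for (int to : adj[v])
        if (to != parent) {
            dfs(to, v);
            sz[v] += sz[to];
        }
}

int main() {
    n = 5;
    adj.assign(n, {});
    sz.assign(n, 0);
    auto add_edge = [&](int a, int b) { adj[a].push_back(b); adj[b].push_back(a); };
    add_edge(0, 1); add_edge(1, 2); add_edge(1, 3); add_edge(0, 4);
    dfs(0, -1);
    // Cutting the edge (u, parent(u)) leaves components of size sz[u] and n - sz[u];
    // the full solution then needs the best uninfected component on each side (rerooting).
    for (int u = 1; u < n; u++)
        printf("cut edge above %d: sides of size %d and %d\n", u, sz[u], n - sz[u]);
}
```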
I had all the cases for C down (no components, 1 component, multiple components) and passed validation but failed the real tests due to a bug in one of my dfs I guess..kind of rip
Can someone post the correct validation output for C? I can't for the life of me figure out what's wrong with my logic/code.
Edit: Damn. If anyone is wondering what was wrong, my for loop to generate edges used 0 based indexing so, copying over the provided formula that uses one based indexing, I calculated the edges mod $$$i-2$$$ instead of $$$i-1$$$. This unluckily passes the only sample that generates additional edges and is 4 hours of my life gone.
The thing I missed for quite some time is that I allowed removing edges between two uninfected nodes.
Wouldn't removing edges between two uninfected nodes be fine as long as the result remained a tree and the size of the newly created uninfected component was the maximum possible?
Like say we had three connected areas of uninfected rooms of size $$$x$$$. Suppose removing an edge from one of them breaks the graph into two connected components such that the other two uninfected areas of size $$$x$$$ were in different components, then we would be able to create a room of size $$$2 * x$$$ by attaching any two nodes from among those uninfected areas of size $$$x$$$.
Anyway, thanks for the input, I seem to be somehow printing the wrong maximum answer for some cases while printing the correct number of ways.
In your case, yes, you can break it into parts of size a, b, 2x, where a+b=x. But that's not optimal: it's better to break it into x, 2x. In general, you never want to break up a piece into smaller pieces, because aC2 + bC2 < (a+b)C2 (indeed, (a+b)C2 = aC2 + bC2 + ab).
There's one annoying edge case though — when everyone is uninfected, you do want to separate uninfected edges.
Huh. It seems I either have a fundamental misunderstanding of the problem or I am reading input wrong. My O(N^4) brute outputs a different answer on Case #7.
Let’s say you have connected components of uninfected nodes with sizes [3,3,4]. The optimal solution would connect either the first or second component with the third, thus creating a component of size 7. However, since we’re asked about the max number of pairs of people that can visit each other, we can’t break one of the remaining components up, since that would reduce this number. So in this case, the optimal solution will end up with one component of size 3, which contributes 3 pairs to the answer, and one component of size 7, which contributes 21 pairs to the answer. So, the max number of pairs you can get is 24, breaking up any component in the middle would make this number smaller.
Seems the same as my solution. When you have time could you share your initial components for Case #7? I got [11, 4, 4, 3, 2, 1 . . .] which I think gives 115 instead of 193 (Unless I'm missing something): 15*14 + 4*3 + 3*2 + 2 = 230.
If you don't mind me intruding in your conversation, I have [18, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1], which gives $$$20\cdot19+2\cdot1+2\cdot1+2\cdot1 = 386 = 193\cdot2$$$
Yep, I think I remember having 18 and four 2s somewhere, thanks for the input :)
Yikes, I probably should not have copied over my component finding into my brute force then
But what if you have 3 components of uninfected nodes with size 4?
Suppose one of these components is an ancestor of the other two, and those two are different descendants.
You can remove one edge between uninfected nodes from the parent component and connect the other two.
Am I missing something?
Well, this is a much simpler case.
Link to image
If you’d used one of the edges incident to a red node instead, you would have made two groups of white nodes (with sizes 2 and 4). This means that the number of pairs of patients that can reach each other is 2+6=8. In your example, there are groups with sizes 1, 1 and 4, so the number of pairs of patients that can reach each other is 0+0+6=6, which is less than 8. The problem asks us to count the number of ways to achieve a situation which maximizes the mentioned number of pairs, so removing the edge you showed should not be included in the solution.
You can't do this because we want to maximize total number of pairs of people who can visit each other (not just in one component, but over ALL components), so the answer in your diagram above should be 4C2 + 2C2 = 7.
I made the same mistake and spent 2 hours trying to debug before rereading the problem statement and re-solving stuff.
Ohhh that's right. Got it. I think I spent so much time solving one component that I forgot the rest.
Thanks!!
+1, I even wrote an n ^ 4 brute force to stress test but it passed 10k cases against that and I can't figure out what is wrong with it ._.
I don't want to be so critical but this contest was IMO really bad (even ignoring the thing with B). Almost all problems contained no interesting parts but had tons of annoying implementation (esp. A and C). I get that Round 1 maybe shouldn't have very hard problems but easy problems should still be tolerable. This contest I felt like quitting but forced myself to continue because it's an "important" contest. I very much liked rng_58's opinion on easy problems:
More generally: how is it that regular small contests (Codeforces, AtCoder, OpenCup, even TopCoder) can deliver such nice problems weekly while "major" contests on average have mediocre or even bad problems? Most of all I mean FHC, but to some extent also GCJ and ICPC WF.
Also the input format: I realize this is a response to people who couldn't download the big inputs, but this is so ugly. And it is quite limited because you can still only do "big input" and "random input" (the latter might help the contestants in problems like A3). I know this requires additional development but wouldn't the following be better:
I'd like to offer another opinion on problem A. By doing this, I'm not in any way discounting your opinion, and in fact I do think your opinion as someone who is more experienced and more invested in the sport should matter more than that of the casuals like me.
That being said, as someone who is in the target audience for easy problems, I enjoyed A quite a bit. In hindsight, figuring out the details of the perimeter updates was tedious, but I guess at the time, I just assumed that was part of the process and didn't think much about it.
Interesting. IMO the details of A were tedious enough to consider quitting the contest.
Unlike some other reds I have nothing against "classical" problems in contests aimed at lower-rated people. But I'd have thought that no one likes tedious problems.
Yeah I think for me, I just considered figuring that part out as part of the problem's solution, whereas I'm guessing you probably considered it more of a time-wasting implementation detail after you had already figured out the problem's high-level solution?
In general I think having "tricky" or "tedious" implementation problems is OK in principle for programming contests. In fact, I find that it is a very "testable" and remarkable skill: some people are actually very good at writing such fairly error-prone, tricky things without mistakes and without taking too much time, and since I wish I had such high implementation skill, I find it OK to "reward" that coding ability.
However, I think that it is critical to consider "implementation", "being tedious" or "being tricky" as part of a problem's difficulty. Both for a problem setter (to achieve the intended difficulty) and for a contestant (to avoid thinking that a problem is "easy" because the idea is easy: an "easy" but "tedious" problem is not easy anymore).
In that way, I think that the contest completely failed to have easy problems (maybe it was intended): the ones with the "easy" ideas were very tricky and tedious.
Also I agree that the general contest felt fairly unbalanced: the overall contest difficulty was mainly in the tricky/tedious area, and not the "algorithmic" part itself, which is ok for a single problem but not nice for the whole contest.
Good idea!!
Well, you could skip A2, A3 and C, to be honest. I am happy that I did that. A1 and B are pretty easy to write and are not as annoying as the classical bullshit segment merging in A2 and A3.
yeah, because everybody's sure their B solution is correct
B is only easy to write after the revision setting S=0, which came (I believe) very late in the contest.
Not sure about A3, but A2 is reasonably easy to solve with std::set.
Solved A2 using set, and A3 can be solved using set too, in the same way.
I think that, since queries are all available beforehand, A2 is much easier to solve using sweep-line.
Sort all the "rectangle interval" events (start and end) and process them, adding a rectangle to the current set upon opening and removing it when closed. Each "piece" can then be annotated with the lowest-index rectangle containing it, since that rectangle will fill it once and for all. A list of pieces for each rectangle can be precomputed this way.
Then, process each rectangle in order and, for each one, explicitly iterate over all its pieces, knowing that each such piece "grows" the area from height 0 to that height (you just have to check the height values of neighboring pieces, which can be kept in a map or array after coordinate compression).
On the other hand, A3 can be solved with coordinate compression + a lazy-propagation segtree in a much less error-prone way (in my opinion) than the official solution (assuming a correct lazy-propagation segtree is available to simply copy-paste).
However, although less tricky to implement, this other way involves using totally different methods for each chapter, and would not work with online queries, as the segment merging version does.
There is one detail here: you need to provide each contestant a slightly different file, otherwise contestants may cheat by collaborating: each person solves one problem and shares the output file with other people (technically people can cheat by sharing code too, but that is easier to catch). GCJ (in the old format) did that for the large dataset, and for the small dataset they even generated different files per submission to prevent people from brute-forcing the solution.
There was one task in the GCJ 2009 Finals where Google handled the large input file as follows: they provided the contestants with the script for generating the large input, and when submitting, a contestant downloaded a seed value to run the script with. There are still variations in how fast/slow it is on individual computers, but assuming a reasonably large time limit that is still pretty reasonable.
Yeah, I think some problems like this are OK for $$$\ge 24$$$ hour contests, but having all/most of them like this is a little tiring. I managed to solve all the problems, but took quite a portion of the day to do it (due to hunting all the little bugs that caused samples or verification to fail)
Unable to check the scoreboard on the friends tab
How to solve A2 and A3?
Maintain a set of non-overlapping polygons (storing the starting position, ending position and perimeter of each polygon is enough). In the $$$i$$$th iteration, try to combine the $$$i$$$th rectangle with an already-stored polygon, or store it as a separate one. The technique for combining two polygons is slightly different in A2 and A3. In A3, you will need another set (or another DS) to efficiently find the current height of the polygon at position $$$x$$$.
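Here is a hedged skeleton of just that bookkeeping: a std::set of disjoint pieces where inserting a new rectangle merges it with everything it overlaps or touches. The A2/A3-specific perimeter arithmetic is intentionally omitted, and perim is only a placeholder field.

```cpp
#include <bits/stdc++.h>
using namespace std;

struct Piece {
    long long l, r;           // the piece occupies [l, r)
    long long perim;          // placeholder for problem-specific data
    bool operator<(const Piece& o) const { return l < o.l; }
};

set<Piece> pieces;            // disjoint pieces, ordered by left endpoint

// Insert a new rectangle [l, r) and merge it with every stored piece it
// overlaps or touches, returning the merged piece.
Piece insert_rect(long long l, long long r, long long perim) {
    auto it = pieces.lower_bound({l, 0, 0});
    if (it != pieces.begin() && prev(it)->r >= l) --it;   // predecessor may touch on the left
    while (it != pieces.end() && it->l <= r) {
        l = min(l, it->l);
        r = max(r, it->r);
        perim += it->perim;   // the real solution adjusts the perimeter for the overlap here
        it = pieces.erase(it);
    }
    Piece merged{l, r, perim};
    pieces.insert(merged);
    return merged;
}

int main() {
    insert_rect(0, 5, 0);
    insert_rect(10, 12, 0);
    Piece m = insert_rect(4, 10, 0);                        // bridges both previous pieces
    printf("merged piece covers [%lld, %lld)\n", m.l, m.r); // [0, 12)
}
```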
Solution to Problem A1 without using the constraint on W:
I don't know if this has been asked before but could there be a way to filter the scoreboard by country?
As far as I know, the only way to filter results is by adding people as friends, and people who recently made a new account will get their accounts disabled if they send friend requests. Filtering by country might help that issue.
Thanks for the suggestions! We'll definitely consider adding public countries of representation in the future.
LoneFox Why were we only given the first K elements in the input and had to generate the rest using a formula? I personally think it's unnecessary, as it requires no extra thinking, just writing an extra for loop, unless it's because of judge limitations. Is this the case?
Not sure of the official reason, but in the Qualification Round some people could not finish downloading the input within the time limit. This formula keeps the input file small while still allowing large test cases.
Yeah, that makes sense, but this also limits the kind of problems you can create. Suppose, for example, you have to print a final array of the same size as the input: the input can be shrunk, but the output can't. I think it's better to create a judge like CF's where you just submit the source code; Google also did this last year.
I couldn't download the test data for problem C during the qualification round on time. They reset my timer but they also told me that they wouldn't do it again during the next rounds.
Generating tests with a formula is a nice solution to this matter. However, one has to be really careful coding these formulas.
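For anyone who hasn't seen the format, the pattern being discussed looks roughly like the hedged sketch below; the recurrence and the constants are made up for illustration, since every actual FHC problem specifies its own formula.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    long long N, K, X, Y, M;
    cin >> N >> K >> X >> Y >> M;            // N values in total, only K given explicitly
    vector<long long> A(N);
    for (long long i = 0; i < K; i++) cin >> A[i];
    for (long long i = K; i < N; i++)        // the rest come from a small recurrence
        A[i] = (A[i - 1] * X + Y) % M + 1;
    // ... solve the actual problem on A ...
    cout << "last generated value: " << A[N - 1] << "\n";
}
```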
I personally feel FB should offer a server for executing the source code, primarily because of the insufficient resources for running the program on larger inputs on our personal computers. This is especially true in the case of FBHC, where the inputs are of the order of 1e6.
Take my case, for example: in the Qualification Round, there was a problem on tree DP (the last problem, if I recall correctly). After checking my program against the validation input, I tried running it on the test input and got a segmentation fault. I assumed there was an error in my code, spent nearly 8 hours debugging it without finding anything, and finally resorted to using Valgrind, through which I found out that my stack had overflowed due to the high recursion depth. There was a similar case in Round 1 problem C as well, which left me helpless (not that it affected my qualification, but getting higher on the scoreboard always feels nice).
In my opinion, these are not things one should have to worry about while coding the solutions; not that I had a workaround for the stack overflow, considering I needed a DFS.
LoneFox Is there any chance of FB providing us a server for running code? Will Round 2 also have the same format of downloading and executing on our personal computers? If so, people with superior PCs have the unfair advantage of getting away with a looser solution (let's not even consider my scenario).
I'm not trying to dismiss this opinion, I think these are good points, I'm just trying to understand out of my own curiosity. I don't remember that many complaints about code execution on contestants' PCs when Google Code Jam did it not too long ago. I genuinely wonder what the reason for this is?
Unfortunately it seems like the old GCJ problems are being transitioned to use the new system so we can't access them now.
If my memory serves me right, we rarely had a task with a huge input size (including pseudorandom generation) in the older Google Code Jam (Kickstart not included). I can't remember tasks that differentiate O(N) solutions from slower ones, which is what N > 500k tasks are usually designed for. The only exception I remember is a Finals problem, which is less of an issue since the contestants have the same working environment.
Most of the older tasks are differentiating between polynomial and exponential time solutions, which IMHO makes it interesting.
I am not sure about the others, but this is my first ever Facebook Hacker Cup, and so far I haven't taken part in a single GCJ round. So I just voiced my opinion. I just feel that the competing environment should be fair. Even if it's impossible to provide a common execution platform because of the huge number of people in the Qualification Round and Round 1, it should be considered for the further rounds, given that the time penalty is an important factor.
I just wanted to say something in support of this local-judge format that FBHC uses and GCJ used to use. Note that the alternative for FBHC would be remote-judging without any feedback, as you only get one shot to submit.
Also, I think the validation file is a great addition; it allows the contestants to ensure their setup is working correctly in a 0-stakes situation. (Old GCJ's small-inputs kind of shared this function.)
As you point out though, there are some drawbacks, but I don't think they outweigh the benefits:
Overall, I think this format is net positive for contestants, and I really hope FBHC keeps it (especially with validation sets).
Fun fact: Task A can be solved (albeit in a much more technical way) for inputs without any restrictions on $$$H$$$ using the idea behind Segment Tree Beats. In each segtree node you keep the minimum height, the second minimum height, and how much the $$$y$$$-perimeter of the figure would change if you were to add $$$1$$$ to all the blocks with the minimum height (computed from leaves to root). Now you can write the push and pull functions and it should be (kind of) straightforward.
For the restricted subtasks, though, you can obtain $$$O(N \log N)$$$ by recursing down whenever the minimum height in the subtree is strictly smaller than the maximum height (i.e. whenever not all heights are equal), with minimal lazy-propagation logic. This is similar to the set solutions, but (IMHO) much easier to implement and with fewer moving parts. And it has the advantage of solving A1-A3 with just one implementation (even without the imposed constraint on $$$W$$$ for A1).
I actually implemented this after I finished (for fun/for verification), you can find a screencast here: https://www.youtube.com/watch?v=CwEJmC3RWOI.
Can you paste the final code in text format somewhere? Thanks!
Regarding the stack limit:
I'm not surprised by all the people struggling with the stack limit, but I hope that it won't make FHC change the contest format in the future. A qualification round is there to test the system, and that is when you might be surprised by the stack limit and forced to learn how to deal with it. You should know your own environment and be able to run a program.
That being said, it's still bad that machine power and internet speed differ for participants but the former isn't a big deal if intended solutions work in ~10 seconds and brute force requires hours, and the latter is resolved by providing encrypted tests in advance.
It was good that they had a problem in the qual round that forced us to deal with the stack limit. But for some reason, even though I set my system limit to the maximum and passed the qual round problem, I still got a segfault in R1. Only with some extra g++ options did it pass. At least I know how to set my compiler invocation properly now!
I do think it's good that the format is different. It could lead to more variety — like using more obscure languages or third party libraries, or more room for precomputation for some kinds of problems. It's just that the execution makes it stressful.
That's the fun!
I did the segment merging approach which some people have described here for A2. It passed the validation input but failed on the main cases :(. Could someone tell me what I am doing wrong?
https://gist.github.com/cathreya/e76fb3918bbb4dbd5437d545572ed4cf
You may need to give sufficiently big values instead of {0,0}. I made the exact same mistake.
What are {0,0} here?
I stored {{left,right},perimeter} and used {{l[i],l[i]},0} in the lower_bound function. I had to write two messy functions to find those two bounds.
Damn. It worked. I can't believe I messed this up :'(
Thanks a lot!
Lol. In problem B I was like: god damn it I'm not missing another dp or binary-search-over-answer solution again! And then all I did was to keep thinking how I could binary search, try dp, or thought it could be binary search with dp, and then I freaking made it xD Every big contest. EVERY BIG CONTEST I miss a dp or a BS solution. Well, not today! (well actually not yesterday :0 )
Edit: fun fact, I watched Errichto's video and lol'd hard when I saw him saying the exact same words as me. DP? BS? BS with DP?
One funny thing is, for the first problem A1, I had initially thought of the same approach as Errichto. Then I thought it wouldn't finish executing within 6 minutes.
In his video, his computer (which is a lot faster than mine in terms of performance) finished the test in 3 minutes. I think in my case it would have been a very close call if I had implemented Errichto's solution.
Interesting... I'll check it out.
If you use unordered_map in his solution you go (on my PC) from ~4 minutes to ~26 seconds. Also, if you clear the map when l[i-1] + w + 1 > l[i] (i.e. when you'll never use the previous values again), you get to about 2 minutes.
My solution that simulates this by having a vector for the values, and clearing it from time to time is ~2 secs.
Yup. As I said during a video, I would quickly switch to an array/vector if it seemed that I'm running out of 6 minutes.
My map solution was too slow and was taking 20+ seconds for each of the $$$10^6$$$-sized cases. After switching to unordered_map it was able to finish all cases in 20+ seconds.
Fun fact: I didn't know that the 6-minute window was the only time limit, and I thought they would run the code on their servers. So I spent an hour finding a more efficient solution with an array of size 20 (thanks to the constraints).
I started late, and I couldn't finish implementing A3 before the CF Global Round started because I spent the time making A1 efficient. Luckily A1 and A2 were enough to clear this round.
One question: is it possible to allocate an array of size $$$5 \cdot 10^8$$$ on a high-config PC? I couldn't do it on my machine.
Too bad, the binary search + dp solution is flawed. :P
Well, I wrote from my phone so maybe I wasn't clear but in the end I solved it with binary search only.
Or you mean that you tried dp+bs and it didn't work? :0
If you were talking about solving it when S>0, read this comment: https://mirror.codeforces.com/blog/entry/81418?#comment-681841
My solution for $$$S = 0$$$ was DP + monotonicity of the optimum (since the transitions were to minimize the maximum of two functions, one non-decreasing and the other non-increasing), so I used binary search to get the optimum for each state. I could have used two pointers and gotten a linear solution, but this wasn't necessary.
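As a generic illustration of that trick (a hedged sketch, not the actual solution): when one cost is non-decreasing and the other is non-increasing in the split point, the minimum of their maximum is attained at or next to their crossing point, which binary search can locate. f and g below are hypothetical stand-ins for the two transition costs.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Minimize max(f(i), g(i)) over integers i in [lo, hi], assuming f is
// non-decreasing and g is non-increasing on that range.
long long minimize_max(long long lo, long long hi,
                       const function<long long(long long)>& f,
                       const function<long long(long long)>& g) {
    long long L = lo, R = hi;
    while (L < R) {                       // find the first i with f(i) >= g(i)
        long long mid = L + (R - L) / 2;
        if (f(mid) >= g(mid)) R = mid;
        else L = mid + 1;
    }
    long long best = max(f(L), g(L));     // optimum is at the crossing point...
    if (L > lo) best = min(best, max(f(L - 1), g(L - 1)));   // ...or just before it
    return best;
}

int main() {
    // Toy usage: f(i) = i, g(i) = 10 - i on [0, 10]; the minimum max is 5, at i = 5.
    cout << minimize_max(0, 10,
                         [](long long i) { return i; },
                         [](long long i) { return 10 - i; }) << "\n";
}
```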
So has anyone figured out a solution for B with S> 0?
.
It fails because you took the max of H every time. Think of a smaller height between two bigger ones: it will increase the perimeter.
oh! I see. Thanks.
Auto comment: topic has been updated by LoneFox (previous revision, new revision, compare).
I'm happy to say that we're now able to share information on this year's t-shirt prizes, and will be giving out more than ever before! Everybody who solves at least one problem in the upcoming Round 2 will receive a Facebook Hacker Cup t-shirt. The top 200 competitors from Round 2 will also have a "Top 200" badge on their shirt.