Hey All!
Topcoder SRM 771 is scheduled to start at 11:00 UTC -5, Nov 27, 2019. Registration will open 24 hours before the start time in the Web Arena or Applet and will close 5 minutes before the match begins.
Thanks to misof for writing the problems for the round.
This is the fourth SRM of Stage 1 of TCO20 Algorithm Tournament.
Stage 1 TCO20 Points Leaderboard | Topcoder Java Applet | Next SRM 772 — Dec 11, 07:00 UTC-5
Some Important Links
Match Results | (To access match results, rating changes, challenges, failure test cases) |
Problem Archive | (To access previous problems with their categories and success rates) |
Problem Writing | (To access everything related to problem writing at Topcoder) |
Algorithm Rankings | (To access Algorithm Rankings) |
Editorials | (To access recent Editorials) |
Good luck to everyone!
It will begin 25 minutes after Educational Codeforces Round 77 ends; it will be a busy day for competitive programmers :)
Anyway, the reminder — the contest will start in 6.5 hours.
Sad...
I failed the 500-point problem just because I wrote a wrong generator, like the following:
I failed because of MLE.
I love "I failed" threads!!
I failed because projects can take 0 workers, and they will get done on their own, apparently.
This problem just had so many fails that we can easily keep this thread going until the "the thread is too long" messages start.
I will also contribute something here.
I started off with Topcoder today. I participated in a few SRMs last year without any plugins or anything.
Today -
Plugins ready. Done A. Wrote a greedy. Failed on sample test 1 and passed rest.
Ok. Let's check how scoring works. Submitted same soln 3 times in the last 10 mins.
Challenge phase - dreamoon challenged my B. Unsuccessful hacking attempt. dreamoon challenged my B again. Successful hacking attempt.
fetetriste blindly challenged my A twice. Unsuccessful hacking attempts. After the challenge phase, he wrote that he had selected the wrong solution for the blind challenge. System tests passed on A.
I didn't blindly challenge you: I saw you return 1 for negative bounds and challenged against this (twice to be sure, it didn't influence my place much anyway).
The wrong solution that I chose for the blind challenge was the 1000 (we had two in the room and I chose the passing one; I tried the challenge in the practice room and it works against the wrong solution).
When I looked at your code, I found that its pattern was far from the correct solution's, so I blindly challenged you with a large random test which I had prepared before the challenge phase. But your code returned the correct answer...
Then I looked at your code carefully and found that you just used a simple greedy to solve it. I had checked before the challenge phase that this solution gets WA on sample 1, so I just chose sample 1 to challenge you again, and this time it was successful. At that moment, what I thought was: what a strange guy, he didn't pass the samples but still submitted.
You would be surprised to know how often people submit code that doesn't pass all the samples. It's just weird that TC never does anything to prevent this.
I saw that two people submitted nothing, then hacked each other immediately after the beginning of the challenge phase :)
How to solve 1000? I just started with a random point and repeatedly added the closest one not yet visited, until I got a line that's short enough.
That. It passed, so that's how you solve it. Write some simple random shit that has a good enough chance of working, it's the optimal strategy for this problem.
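That "simple random shit" strategy can be sketched roughly like this (a hypothetical Python sketch, not the actual submission: I'm assuming 2D integer points and that the objective is the total squared length of the segments, which is what the bounds quoted in this thread suggest; the real signatures and limits come from the statement):

```python
import random

def nearest_neighbour_path(points, start):
    """Greedy path: from `start`, repeatedly jump to the closest unvisited point."""
    path = [start]
    remaining = set(range(len(points))) - {start}
    while remaining:
        cur = points[path[-1]]
        nxt = min(remaining,
                  key=lambda i: (points[i][0] - cur[0]) ** 2 + (points[i][1] - cur[1]) ** 2)
        path.append(nxt)
        remaining.discard(nxt)
    return path

def path_cost(points, path):
    """Sum of squared segment lengths along the path (assumed objective)."""
    return sum((points[a][0] - points[b][0]) ** 2 + (points[a][1] - points[b][1]) ** 2
               for a, b in zip(path, path[1:]))

def random_restarts(points, limit, tries=1000, seed=0):
    """Retry from random starting points until the cost fits under `limit`."""
    rng = random.Random(seed)
    best = None
    for _ in range(tries):
        path = nearest_neighbour_path(points, rng.randrange(len(points)))
        cost = path_cost(points, path)
        if best is None or cost < best[0]:
            best = (cost, path)
        if cost <= limit:  # good enough, stop early
            break
    return best
```

Each restart is O(n^2), so with small n you can afford many retries within the time limit and bank on one of them landing under the bound.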
For a more complicated and more bug-prone deterministic solution, the editorial link is among the system messages, under the "Messages" button.
Not in the web arena I'm afraid.
Split the triangle into 4 triangles with legs $$$x/2$$$ and $$$y/2$$$ and solve them recursively. In each of the smaller triangles we will have a path of length at most $$$(x/2)^2 + (y/2)^2 = (x^2 + y^2)/4$$$. We need to add the condition that our polyline starts at the right corner of the triangle and ends at the upper corner, and that we go through the smaller triangles in the correct order, for it to work.
This is incorrect, consider the following test case:
Instead of medians, it has to be altitudes, like in the editorial.
I'm glad that we were in different rooms. :)
Not only that, but also that nobody had the same solution in my room :) (successful challenges are added to system tests) The other two solutions like yours are ecnerwal's and maroon_kuri's.
Omg, so there was no such test in the system tests? Shameful. It's kind of an enticing way to solve it, more natural than altitudes, but noticing that we can't make a "shortcut" in an obtuse triangle and dealing with that problem is a significant part of the solution.
Where can I find the editorial? Could you please provide the link?
Ctrl + F -> Editorial.
Can someone provide me with the link to the editorial doc? I didn't save the announcement link and now I don't know how to access it.
Link
As we have been complaining a lot about TC problem quality recently, I think this round is a pretty good one. Gotta give credit where it's due.
I don't like that 1000 had generated inputs; it made challenges much harder even though they were important in that problem. Otherwise, yeah, nice problems. It's not even a rare occurrence; TC is just a good punching bag.
What the f*** was that round?! Both majk's and aid's solutions MUST be implemented and defeated. I understand that sometimes it's hard to make good tests, BUT MAJK'S SHIT IS THE FIRST THING THAT SHOULD COME TO MIND. I see three options.
1: The problemsetters didn't bother to write this solution. Conclusion: bad problemsetters.
2: They wrote it, but were too lazy to make a test against it. Conclusion: bad problemsetters.
3: They wrote it and decided that it's almost impossible to make tests against it. Conclusion: they shouldn't have used this problem; bad problemsetters.
I'm sorry about my rude words, but TC again doesn't care about contests and loses quality. (Not even talking about the system now). Guys, please, wake up! Don't be surprised that people don't want to participate.
I think you're being too harsh.
The first paragraph of the editorial mentions that "some local optimization techniques" can get accepted -- but no accepted solutions use local optimization, even though there were some attempts.
Out of 8 accepted submissions, 2 are correct, 3 use medians instead of altitudes, and 3 use different kinds of greedy.
It's a bit sad that one could get away with medians and greedy, but there's no way to make these solutions fail unless you know about them. The only way to have a lot of incorrect solutions is to have a lot of testers, and even this is not always enough. I, for one, didn't come up with either: if I had to implement an incorrect solution, it would be local optimization.
Saying "you didn't think about my incorrect solution -- you're a bad problemsetter!" is not very fair.
And then, as the editorial says:
Indeed, this problem is not suitable for TCO finals, but today we had a regular SRM. I enjoyed this problem, and my enjoyment beats my frustration over accepted incorrect solutions of others. If we disallow problems without provably perfect test data, the world will be a more boring place.
It's true that sometimes, in order to fail some specific solution, tests need to be created explicitly against it, and the problem developer has to come up with that heuristic solution themselves for it to fail system tests. And it's true that sometimes people come up with random ideas that just can't be predicted. But no, that was not the case this time. Dividing by medians and the greedy "go to the closest one" are things that should definitely come to the mind of a setter who knows the solution and devotes 5 minutes to thinking about incorrect solutions. Maybe it's dangerous to use even a well-prepared version of this problem as a TCO finals problem anyway, but the setter shouldn't just shrug it off and not devote any time to failing these.
By the way, on an unrelated note, I am fairly confident local search should do wonders on this problem. This exact problem (with the slight difference of points being located in a square, not in a triangle) was used as an optimization problem at a Polish OI camp (ONTAK 2012), and 2-OPT outperformed everything else by far.
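For the curious, a bare-bones 2-OPT for an open path could look like this (a Python sketch, assuming the objective is the sum of squared segment lengths, as the bounds discussed in this thread suggest; a real contest version would add randomized restarts and a time budget):

```python
def sq(a, b):
    """Squared Euclidean distance between two points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def path_cost(pts, path):
    """Sum of squared segment lengths along the open path (assumed objective)."""
    return sum(sq(pts[u], pts[v]) for u, v in zip(path, path[1:]))

def two_opt(pts, path):
    """2-OPT on an open path: reverse path[i:j] whenever that lowers the cost."""
    path = list(path)
    improved = True
    while improved:
        improved = False
        n = len(path)
        for i in range(1, n - 1):
            for j in range(i + 2, n + 1):
                # Reversing path[i:j] replaces edge (i-1, i) with (i-1, j-1) and,
                # unless the reversed block is a suffix, edge (j-1, j) with (i, j);
                # all edges strictly inside the block keep their lengths.
                delta = sq(pts[path[i - 1]], pts[path[j - 1]]) \
                      - sq(pts[path[i - 1]], pts[path[i]])
                if j < n:
                    delta += sq(pts[path[i]], pts[path[j]]) \
                           - sq(pts[path[j - 1]], pts[path[j]])
                if delta < 0:
                    path[i:j] = reversed(path[i:j])
                    improved = True
    return path
```

Note that with squared lengths only the two boundary edges change under a reversal, just like with plain lengths, so each candidate move is still checked in O(1).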
I imagine it as dividing into two triangles and each of those into two again, with three medians. But then the intermediate triangles are not right triangles, so yeah, calling it dividing into four right triangles with two midlines and a median (or a diagonal) is more natural.
I'm not convinced these two ideas should come to mind and are obvious to everyone who knows the solution. I respect it if it was obvious to you :)
Is ONTAK problem also about optimizing the sum of squares of distances, not just the sum of distances?
Ah, you replied to a part of my comment which I deleted, because I realized a "median" is a line that connects a vertex with the midpoint of the opposite side, not a line connecting two midpoints. Sorry for that :P
Yes. Even though it was an optimization problem where contestants were expected to write heuristic solutions, Kuba Łącki (the author, AFAIR) presented a solution with a provable bound of $$$4s^2$$$, where $$$s$$$ was the side of the bounding square, which is exactly today's problem with X=Y after dividing the square with a diagonal :)
Meh, maybe it's true; I'm sorry. Anyway, I wouldn't use the word "my". I just deduce that the tests were weak and the problemsetters didn't pay enough attention to preparation. If they had, they'd probably have come up with at least one of these solutions. It seems to happen on TC too often.
I know that the conditions are hard. I've heard that "in TC's problemsetting system, the author is able to upload only one solution", and of course it should be the model solution.
I don't know whether that's true, but if it is, guess what I think about it. Maybe I'm not the best person to judge; I'm too young to have seen TC's era (I started when CF was already grown up), but from the stories, it was the best site, and having Target was the highest possible glory.
Now their system seems to have a lot of bugs, TCO is a giant mess (look at the dates in the last TCO marathon), and many tasks are just tiring or badly prepared. I just think they should try to save their good reputation, not with e-mails and advertisements, but with the quality of what they give us.
It also has to be in Java. More precisely, it had to be in Java last time I checked, but it's TC so these two statements are basically equivalent.
I'm sure TC relied on challenges back then too, to fail bad solutions the testing didn't catch (successful challenges are added to system tests). The problem is that with fewer contestants, there are fewer challenges, doubly so because it's less likely to have multiple submissions on the hard in the same room. If you use problems where it's easy to write a solution that seems right (any problems with weaker samples and no pretesting), this matters a lot.
Anyway, you'd be surprised how easy it is to find problems with really stupid "machine learning" or just plain random solutions: 64856439, 63687717 (seriously, WTF?), 54893862 are just what I remember off the top of my head. It's surprisingly easy even for a lot of testers to miss something stupid, and in particular, for a class of solutions that do something random but differ in implementation or only in the seed, it's completely possible to write and break one of these solutions while still allowing a different one to pass. Having countertests to solutions that are countersolutions to tests (a.k.a. challenges) is The way to stop such wrong solutions from passing.
Btw the 1000 seems to be Problem 20 in this handout.