The contest runs from January 11 to 14, 2013. Good luck to everyone!
We can discuss the problems here, but only after the contest is over.
UPD: Link to the contest. You can choose a convenient three-hour "window" at any point during the contest period. Results will be posted a day or two after the competition ends.
Forest blockhead
Forest troll
Frost and sunshine, a wonderful day! You're still dozing, lovely friend... and it's already 2013 outside )
Sorry for the mistakes, and Happy New Year 2013 :)
Good luck to everyone, help FJ with his cows :) :)
Please note that the December Contest promotions have NOT been done, at least for Silver contestants. I have sent an email to bcdean, but he hasn't replied yet. So if you participated in the December Contest in Bronze or Silver and reached the promotion score, please check your division before you start your contest, since one can only participate in one division.
What does "promotions have NOT been done" mean?
Well, do you know that if we score particularly well in Bronze/Silver, we can be promoted to Silver/Gold? For example, the December contest results say: "All participants with scores at least 650 will be automatically promoted to the gold division for future contests." I scored 667, and one of my friends scored 900, but we are still in the Silver division.
I participated in the December Contest in Silver, reached the promotion score, and was promoted to Gold,
so there is no problem for me!
Looks like promotion is done now!
Lool -3 :D
Already -8. Why does a blog about the USACO January contest get negative votes? Can someone explain?
No one can explain negative votes here on codeforces. So don't try to understand them. I present to you as an example this comment, which will get negative votes for no reason.
The Bronze division problems are very hard; I think the promotion score will be low for Bronze.
I would say that FJ had hard problems on his farm :) I solved problems B and C (correctly, I think) but not A (because of the time and the complexity).
I submitted a "rand()%N+1" for C, since there wasn't much time left after I had solved A and B ^^"
What is the best data structure for problem 1 and problem 3 of Gold?
My solution is like this:
Compress breed numbers to 0..a (using an STL map, for example). For each breed, put the cows' positions into a map, which enables you to efficiently find the number of cows of this breed in range [a,b) (think how).
Process the cows in the order of decreasing x-coordinate. If the first cow in the contiguous subsequence is cow i with breed B[i], then there's no point in removing any cows with smaller x-coordinates. So what you need to find is the greatest j < N such that cows i..j are of at most K+1 breeds (of which you remove K).
Suppose you know the number of breeds for each j, N[j]. You can binary-search the greatest possible j, or just observe that when i decreases, N[j] doesn't decrease, and use the idea of '2 pointers'.
How to update / recalculate N[j] efficiently? You can simply use an interval tree (with operations 'increase a given range of values in N by 1' and 'get N[j]'), or just observe that successive values of N differ by at most 1, remember the positions at which this happens (in a heap, for example) and update them as you decrease j or i.
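For what it's worth, here is a minimal compile-able sketch of the '2 pointers' variant of this idea (not anyone's actual contest code). It simplifies things by counting distinct breeds directly with an unordered_map instead of maintaining the N[j] array: for every right endpoint it keeps the largest window ending there with at most K+1 distinct breeds and takes the count of the rightmost cow's breed as a candidate answer. The input format (N and K, then one breed ID per cow in line order) is assumed.

```cpp
// Sketch of the two-pointer / sliding-window idea for "lineup" described above.
// The unordered_map also stands in for the breed-compression step mentioned earlier.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, k;
    if (scanf("%d %d", &n, &k) != 2) return 0;       // assumed input format
    vector<int> breed(n);
    for (int &b : breed) scanf("%d", &b);

    unordered_map<int, int> cnt;                     // occurrences of each breed in the window
    int distinct = 0, left = 0, answer = 0;
    for (int right = 0; right < n; right++) {
        if (cnt[breed[right]]++ == 0) distinct++;    // a new breed enters the window
        while (distinct > k + 1) {                   // too many breeds: shrink from the left
            if (--cnt[breed[left]] == 0) distinct--;
            left++;
        }
        answer = max(answer, cnt[breed[right]]);     // keep only the rightmost cow's breed
    }
    printf("%d\n", answer);
    return 0;
}
```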
Gold, task 3
http://ace.delos.com/TESTDATA/FEB08.hotel.htm
You can get the solution during the contest from the USACO site. I think it's very disappointing.
Sadly, this time the problems were not as good as usual. Problem "island" is just another BFS + a well-known DP over subsets. I think "lineup" was the nicest.
'lineup' is also a repeat, but in a harder version
http://usaco.org/index.php?page=viewproblem2&cpid=130
For this problem N is 500,000 instead of the 50,000 in problem 'Hotel'. I wonder if a segment tree will get TLE. (My solution with a segment tree nearly exceeded the memory limit.)
Yes, I was wondering the same thing. Each node in my segment tree contains three long values (free seats from the left, free seats from the right, max contiguous free seats) and three pointers to nodes (left child, right child, parent). At each step, I create at most 2logN nodes, so in the worst case I'll have 600000·logN nodes, which is around 11.5M nodes, each with six longs. If I'm not confused, that would be over 256M. I hope that test case doesn't exist...
As for runtime, the same calculations apply I believe, so in the worst case, there should be around 12M operations of calculating values, dynamically creating nodes, etc.. Time limit is 2 seconds... I'm not sure.
There are many different ways to write a segment tree. The most traditional way usually needs a process to build the whole tree, and then you need some extra memory to record the relations between parent and child nodes in the tree.
But that is really not necessary. You can treat the segment tree as a heap, i.e. a complete binary tree. By the properties of a complete binary tree, for each node x, its parent is exactly x/2, its left child is x*2, and its right child is x*2+1. This trick can save a lot of space and time when you write a segment tree. Here you can see a solution that I wrote in this way: 2218207
I used the same approach to solve the third problem. My program computes the answer for maximal random data within 0.5s, and it uses just 16MB of memory.
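To make the layout concrete, here is a small compile-able sketch (not anyone's actual contest code) of a heap-indexed segment tree for the seating problem: flat arrays, node x with children 2x and 2x+1, and each node storing the longest free prefix, longest free suffix and longest free block of its range (the three values from the comment above). The lazy "set range free/occupied" flag, the leftmost-block search, the MAXN constant and the demo in main() are my own additions for illustration; the padding of the tree to a power of two with the unused seats marked occupied is the trick also suggested a few comments further down.

```cpp
// Heap-indexed segment tree sketch for "seating": find the leftmost block of k free
// seats, occupy a range, or free a range.  Flat arrays, node x has children 2x, 2x+1.
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 1 << 20;            // smallest power of two >= 500,000 seats
int len_[2 * MAXN];                  // number of seats covered by each node
int pre[2 * MAXN], suf[2 * MAXN], best[2 * MAXN];
int lazy[2 * MAXN];                  // -1 = nothing pending, 0 = make free, 1 = make occupied

void assign(int x, int v) {          // set the whole node to free (v=0) or occupied (v=1)
    pre[x] = suf[x] = best[x] = (v == 0) ? len_[x] : 0;
    lazy[x] = v;
}
void push(int x) {                   // propagate a pending assignment to the children
    if (lazy[x] != -1 && x < MAXN) { assign(2 * x, lazy[x]); assign(2 * x + 1, lazy[x]); }
    lazy[x] = -1;
}
void pull(int x) {                   // recompute a node from its two children
    int l = 2 * x, r = 2 * x + 1;
    pre[x] = (pre[l] == len_[l]) ? len_[l] + pre[r] : pre[l];
    suf[x] = (suf[r] == len_[r]) ? len_[r] + suf[l] : suf[r];
    best[x] = max({best[l], best[r], suf[l] + pre[r]});
}
void build(int x, int l, int r) {    // all seats start out free
    len_[x] = r - l + 1; lazy[x] = -1;
    pre[x] = suf[x] = best[x] = len_[x];
    if (l == r) return;
    int m = (l + r) / 2;
    build(2 * x, l, m); build(2 * x + 1, m + 1, r);
}
void update(int x, int l, int r, int ql, int qr, int v) { // make [ql,qr] free/occupied
    if (qr < l || r < ql) return;
    if (ql <= l && r <= qr) { assign(x, v); return; }
    push(x);
    int m = (l + r) / 2;
    update(2 * x, l, m, ql, qr, v); update(2 * x + 1, m + 1, r, ql, qr, v);
    pull(x);
}
int leftmost(int x, int l, int r, int k) { // leftmost start of a free block of size k, or -1
    if (best[x] < k) return -1;
    if (l == r) return l;
    push(x);
    int m = (l + r) / 2;
    if (best[2 * x] >= k) return leftmost(2 * x, l, m, k);
    if (suf[2 * x] + pre[2 * x + 1] >= k) return m - suf[2 * x] + 1;
    return leftmost(2 * x + 1, m + 1, r, k);
}

int main() {
    int n = 10;                                  // tiny demo instead of the real input
    build(1, 1, MAXN);                           // pad the seats to a power of two...
    update(1, 1, MAXN, n + 1, MAXN, 1);          // ...and mark the padding as occupied
    int s = leftmost(1, 1, MAXN, 8);             // seat a group of 8 -> seats 1..8
    update(1, 1, MAXN, s, s + 7, 1);
    printf("group seated at %d, largest free block now %d\n", s, best[1]);  // 1, 2
    update(1, 1, MAXN, 2, 5, 0);                 // a leave query frees seats 2..5
    printf("largest free block after the leave: %d\n", best[1]);            // 4
    return 0;
}
```

The real program would of course read the A/L queries and count the unsatisfied groups; main() here is only a toy demo of the operations.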
Yeah, I thought about doing it that way (it's the way I usually code segment trees), but I ended up choosing dynamic memory because I didn't want to deal with marking whether a node is a leaf or not with a bool, and then checking in every query not to go over the available number of seats, etc.. Maybe my decision was wrong...
After building such a tree you could have run a query to occupy the seats N+1,..,SizeOfTree, before processing real queries.
Great idea!
I hadn't thought of that...
Well, a node is a leaf if the left (start) endpoint and the right (end) endpoint are the same number.
In a full tree, yes. But that would be terribly inefficient for this particular problem. If a test case gives you 150000 L queries with a range of size 500000, you would be doing 75000M operations.
You need to make a leaf those nodes where every seat in the range is either free or occupied.
Yeah but my recursion doesn't end when I hit a node with L=R; it ends when I hit a node that is completely contained within the query interval.
Then that makes it a leaf, since you won't consider any children of that node, not unless that range gets modified by another query.
In that case, it's OK.
Yeah, maybe my O(NlogN) solution will fail too. But of course, it's impossible to solve it faster than O(logN) per query, and I guess their test machine is very fast.
500000!!! and 300000!!! I don't get it. Just reading the input takes a lot.
For problem "lineup" I use a slinding window, each step the sliding window contains a most K + 1 different breeds, and the answer for that step is the breed with more occurrences (since you can remove K breeds, just leave the one with most repeats). Each step remove the first element and enlarge the window as long as it contains K + 1 different breeds . The idea is clean with overall complexity O(n lg n), and you just need some sets and a map.
You can also safely assume that the breed with the most occurrences will be the first one in the sliding window.
For problem "lineup" I fixed end of the longest continues part of string with same numbers and then run BinerySearch to find first place in the array that between it and our fixed tail, there's exactly K+1 different numbers. and I use segment tree to calculate the BinerySearch values.
And it's complexity O(n.lg(n)^2)
For lineup I did a solution similar to mukel's. I use a sliding window. If the current amount of different breeds is > K + 1, then I shrink it from the left. Otherwise, I expand it to the right.
I keep track of the breeds seen so far using a map that stores the breed ID and the amount I have in my window. If I shrink the window, I take one from the leftmost breed. If it becomes 0, I erase the entry from the map.
Each time I expand the window, I check whether the count of the breed I'm adding is greater than the best answer so far. If so, I update it.
I think that works in O(2*N*x), where x is the complexity of STL map queries.
Sadly though, I had two minor errors, and so my program won't work when either the answer is 1 (it will output 0), or when the answer involves breed ID 0 and it also involves the rightmost cow (my program will output "correct answer" + 1).
x=logN
Yeah, that's what I thought too. In that case, it should run in time.
Oh, yes. Your solution is pretty good. It reminds me of another problem, from the Open Ural FU Personal Contest 2012. The solutions of these two problems are very similar: both of them can be solved using a double-ended queue of a specific size. But unfortunately, I did not come up with the right solution during the competition. I hope you got a good result in this contest.
What answer do you have for this test case in the second problem (Gold division)?
168
Damn...!:(
Thanks anyway.
Same here, 168.
my solution answered: 168
Yes, same here too, though my program takes 7.8 seconds to run :P
After doing problems 1 and 3, I had only 22 minutes left. I didn't have much time to think of a nice solution... I didn't even have time to add any pruning to the search. A complete brute-force search over the N! orderings is my solution.
I realized later that since N <= 15, we can do a DP with a state of size 2^15 telling us what islands we've visited so far and the minimum distance required to do so.
Wish the contest lasted 4 hours... hehe.
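For the record, a rough sketch of that bitmask DP (assuming the pairwise distances dist[i][j] between islands have already been computed by the BFS part of the problem; the function name and the toy distances below are made up):

```cpp
// dp[mask][i] = shortest route that visits exactly the islands in 'mask'
// and currently stands on island i; the route may start on any island.
#include <bits/stdc++.h>
using namespace std;

int tsp_path(const vector<vector<int>>& dist) {
    int n = dist.size();
    const int INF = 1000000000;
    vector<vector<int>> dp(1 << n, vector<int>(n, INF));
    for (int i = 0; i < n; i++) dp[1 << i][i] = 0;         // start anywhere
    for (int mask = 1; mask < (1 << n); mask++)
        for (int i = 0; i < n; i++) {
            if (dp[mask][i] == INF || !(mask & (1 << i))) continue;
            for (int j = 0; j < n; j++)
                if (!(mask & (1 << j)))                    // extend the route to island j
                    dp[mask | (1 << j)][j] =
                        min(dp[mask | (1 << j)][j], dp[mask][i] + dist[i][j]);
        }
    int full = (1 << n) - 1, ans = INF;
    for (int i = 0; i < n; i++) ans = min(ans, dp[full][i]);
    return ans;                                            // shortest route visiting all islands
}

int main() {
    // Tiny hand-made example: 3 islands with symmetric distances.
    vector<vector<int>> dist = {{0, 4, 7}, {4, 0, 2}, {7, 2, 0}};
    printf("%d\n", tsp_path(dist));                        // expected 6 (0 -> 1 -> 2)
    return 0;
}
```

With N <= 15 this is at most 2^15 * 15^2 transitions, which easily fits in the time limit.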
For me, problem 2 was the first problem I solved.
Problem 3 was the first one I solved. It took me almost 2 hours to code, debug and test. Then I moved on to #1 and came up with the solution in a couple of minutes. Then I rushed a brute-force solution for #2.
I still have to see if my solutions are correct. Maybe they are not...
Does somebody know approximately when the results will be displayed, or is it always as unpredictable as it was for December?
Sometime between today and the next month... hahaha!!
Let's hope they post them ASAP. In December they took a century, while in November they took only a couple of days.
Meanwhile, posting results here is immediate, and so it is on most programming contest sites. I don't know what must take so long. *sigh* My bad experience with USA just got another upgrade.
OH, LAZINESS!!!!
Why don't you say "As soon as possible" instead of "ASAP"??
It took me 5 minutes to realize that "ASAP" means "as soon as possible".
Usually results will be posted within this week.
USACO January 2013 Silver Party Invitations
How can this be solved properly? All I did was run through the groups as long as I could add at least one cow to the list of invited ones. That would surely get TLE, but I hoped it would pass some tests. Does anybody know how to code it within the constraints?
Well, all you need is to optimize your greedy approach: for each cow, you remember which groups it is in, and for each group, the number X of cows belonging to it that are already invited. Now, you keep a queue of events to complete: "invite cow c". You also remember whether you have already invited a whole group / a cow, so as not to complete the same event twice. When you invite a cow, you update X for all the groups it's in, and if there's only 1 uninvited cow left in some of them, you invite that cow, too (you can scan the entire group to find the only cow left). When there are no more cows left to invite, you have the desired subset.
You only look at a group at most once for every cow in it, only decide to invite the whole group at most once, and only invite a single cow at most once, so the time complexity is O(N + size of input).
Also notice the analogy with BFS.
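A minimal sketch of this queue-based idea (not the original author's code). The input format is assumed: N cows and G groups, each group given as its size followed by its members, with cow 1 always invited; any edge cases such as groups of size one are ignored here.

```cpp
// Queue-based greedy for Party Invitations: inviting a cow may force the last
// uninvited member of one of its groups to be invited as well.
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, g;
    if (scanf("%d %d", &n, &g) != 2) return 0;     // assumed input format
    vector<vector<int>> groups(g);                 // members of each group
    vector<vector<int>> inGroups(n + 1);           // groups each cow belongs to
    vector<int> uninvited(g);                      // members of the group not yet processed
    for (int i = 0; i < g; i++) {
        int sz; scanf("%d", &sz);
        groups[i].resize(sz);
        for (int &c : groups[i]) { scanf("%d", &c); inGroups[c].push_back(i); }
        uninvited[i] = sz;
    }

    vector<bool> invited(n + 1, false);
    queue<int> q;                                  // cows whose invitation is pending
    q.push(1); invited[1] = true;                  // cow 1 is always invited
    int total = 0;
    while (!q.empty()) {
        int c = q.front(); q.pop();
        total++;
        for (int gi : inGroups[c])
            if (--uninvited[gi] == 1)              // exactly one member is left out
                for (int other : groups[gi])       // scan the group once to find and invite it
                    if (!invited[other]) { invited[other] = true; q.push(other); }
    }
    printf("%d\n", total);
    return 0;
}
```

Each group is scanned at most once (when its counter hits 1) and each cow enters the queue at most once, which gives the O(N + size of input) bound mentioned above.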
I think N is not that important: given that the sum of group sizes is at most 250,000, there are at most 250,000 different cows in the data set.
Again, a BFS-like algorithm will work, maintaining the set of groups and, for each cow, which groups it is in. Each cow is in the queue at most once (thus at most 250,000 times), and each cow in a particular group is erased at most once (thus at most 250,000 erasures). I am not aware of any fast method that supports erasing in constant time, so I used a set for that, which is logarithmic.
If there are close to no groups, N is important. It's like when you have graph algorithms, you don't write complexity O(M) but O(N+M).
Erasing can be done in a different way, which lets you obtain linear complexity (in some problems you'd otherwise need an array of sets, and that can overflow tighter memory limits): just remember in an array whether an element has been erased and, instead of modifying the lists of neighbours, work with the degrees of the vertices. An example of the implementation is determining the largest subgraph in which every vertex has degree at least K (a greedy O(N+M) algorithm): http://riesky.sk/zenit/solutions/12kk/12kki.cpp (ignore the Slovak comments :D)
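Here is a tiny compile-able sketch of that "erased flag" trick on the same example problem (keep discarding vertices of degree below K and count what survives); the function name and the toy graph are mine, not taken from the linked code:

```cpp
// Greedy O(N+M) peeling: instead of really deleting vertices, mark them as erased
// and only decrement the degrees of their neighbours.
#include <bits/stdc++.h>
using namespace std;

int kcore_size(int n, int K, const vector<pair<int,int>>& edges) {
    vector<vector<int>> adj(n);
    vector<int> deg(n, 0);
    for (auto [u, v] : edges) {
        adj[u].push_back(v); adj[v].push_back(u);
        deg[u]++; deg[v]++;
    }
    vector<bool> erased(n, false);          // lazily "erase" vertices with this flag
    queue<int> q;
    for (int v = 0; v < n; v++)
        if (deg[v] < K) { erased[v] = true; q.push(v); }
    int alive = n - (int)q.size();
    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int u : adj[v])                // each neighbour loses one degree
            if (!erased[u] && --deg[u] < K) {
                erased[u] = true; q.push(u); alive--;
            }
    }
    return alive;                           // vertices never marked as erased
}

int main() {
    // Triangle plus a pendant vertex: with K = 2 only the triangle survives.
    vector<pair<int,int>> edges = {{0,1},{1,2},{0,2},{2,3}};
    printf("%d\n", kcore_size(4, 2, edges)); // expected 3
    return 0;
}
```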
I can't wait for the results :)
The official results have been posted now.
Is there something wrong with the test data for Square (Silver)? The description guarantees that K is even, but in test 9, K = 25.
It seems this is an error. You should notify the contest director, Brian Dean.
In the solution to island (Gold), sol_island,
it is mentioned that the third part of the solution is the well-known "Traveling Salesman Problem".
Actually, the Traveling Salesman Problem is a bit different from island, because in TSP we should return to the first node.
Well, if you want to use the standard TSP, the trick is to add a "fake" node with distance 0 to all other nodes. Then you can run TSP as normal: a shortest tour through the fake node corresponds to a shortest path over the real islands.
I'm amazed at how hard the Bronze Division was this time.
I tried to simulate a Bronze Division Contest (I assigned myself 3 hours and tried to solve every problem). I ended up earning 533 points.
So I actually found the GOLD Contest easier than the Bronze one. It's ridiculous. I scored 582 points in Gold, and that's because I used dynamic memory in seating and I got 's' in 7 test cases. If I had chosen to build a static tree, I would have solved all test cases, for a final score of 815 points.
They should really make Bronze contests easier. They are intended for programmers with very little knowledge of algorithms, yet they include problems where you must implement range trees or compress a coordinate system...
So they should promote those who scored well in Gold to Bronze :)
just a joke.
Maybe... hahaha!