Franklyn_W's blog

By Franklyn_W, history, 3 years ago, In English

Addendum: This is mostly about the story behind our solution to problem N. To see more about the math, see here.

When I was younger, fairytales always captivated me, with their nice and clean stories. All through my life I've liked the idea of having a fairytale ending to a life arc, but I was frequently disappointed. Many times, after an unsuccessful experience, one simply gets more unsuccessful experiences, and then time runs out...

At ICPC recently, I was lucky enough to finally attain a fairytale ending to my competitive programming career. I figured I would tell the fairytale, to close out a career that started about 6 years ago.

On my World Finals team, my role is to help the team crack hard math problems, as I haven't actively practiced algorithms in years. So imagine my delight when my teammate (ekzhang) points out that problem N reduces to finding a vector $$$x$$$ whose $$$\ell_2$$$-norm is $$$r$$$ and which satisfies $$$Ax = b$$$ for a given matrix $$$A$$$ and vector $$$b$$$. This has to be doable!

The first observation is that finding the min-norm solution $$$x_{\text{min}}$$$ to $$$Ax=b$$$ is useful. Once we have it, it suffices to find a vector $$$e$$$ in the null space of $$$A$$$ and consider $$$x_c = x_{\text{min}} + c \cdot e$$$, choosing $$$c$$$ so that the norm of $$$x_c$$$ equals $$$r$$$. Note that if we replaced $$$x_{\text{min}}$$$ with an arbitrary solution $$$x$$$, it might not be possible to find such a $$$c$$$: along a single null-space direction the norm may never come down to $$$r$$$, whereas starting from the min-norm solution it can always be pushed up to $$$r$$$.

When I hear min-norm solution to $$$Ax = b$$$ my ears perk up -- a well-known result in machine learning is that (stochastic) gradient descent on the least-squares objective $$$(Ax - b)^{\top}(Ax - b)$$$, started from zero, converges to the min-norm solution. Could this be my chance to use preconditioned gradient descent?
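For reference, a minimal numpy sketch of the idea (the function name, step size, and iteration count are my own choices, and this is plain, unpreconditioned gradient descent):

import numpy as np

def min_norm_by_gd(A, b, lr=1e-3, iters=100000):
    # Gradient descent on (Ax-b)^T (Ax-b), started from zero.
    # The iterates stay in the row space of A, so when Ax = b is solvable
    # (and the step size is small enough) they converge to the minimum-norm solution.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= lr * 2.0 * (A.T @ (A @ x - b))   # gradient of the objective is 2 A^T (Ax - b)
    return x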

As it turns out, this is not to be. I try gradient descent, and the precision attainable in a reasonable number of iterations is simply not high enough, especially given the tight precision demanded by the problem.

It's here that I stop trying to overcomplicate things. I rewrite $$$Ax = b$$$ as $$$Cx = d$$$, where $$$C$$$ is a matrix with orthonormal rows, via Gram-Schmidt orthogonalization (or QR decomposition). Write $$$C$$$ in terms of its rows:

$$$ C = \begin{bmatrix} -- & e_1 & -- \\ -- & \vdots & -- \\ -- & e_n & -- \end{bmatrix}$$$

Now, to find the min-norm solution of $$$Cx = d$$$, note that we can simply take $$$x_{\text{min}} = d_1e_1 + \ldots + d_ne_n$$$. Any other solution of $$$Cx = d$$$ is just $$$x_{\text{min}}$$$ plus a vector orthogonal to all of $$$e_1, \ldots, e_n$$$, which can only add norm to $$$x$$$. And now I knew I was almost done! We just need to find a unit vector $$$e_{n+1}$$$ orthogonal to each of $$$e_1, e_2, \ldots, e_n$$$, and then a $$$c$$$ so that the norm of $$$x_{\text{min}} + ce_{n+1}$$$ equals $$$r$$$.
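For concreteness, here is a minimal numpy sketch of this construction (the names and the use of np.linalg.qr are mine, not our contest code; it assumes the rows of $$$A$$$ are linearly independent and that there are fewer rows than columns):

import numpy as np

def solve_fixed_norm(A, b, r):
    # Orthonormalize the rows of A: A^T = QR, so A = R^T Q^T and C = Q^T has orthonormal rows.
    Q, R = np.linalg.qr(A.T)
    C = Q.T
    d_vec = np.linalg.solve(R.T, b)     # Cx = d_vec is equivalent to Ax = b
    x_min = C.T @ d_vec                 # min-norm solution lies in the row space of A
    # Build e_{n+1}: a unit vector orthogonal to every row of C.
    v = np.random.randn(A.shape[1])
    v -= C.T @ (C @ v)                  # project out the row space
    e_extra = v / np.linalg.norm(v)
    # ||x_min + c*e_extra||^2 = ||x_min||^2 + c^2, so solve for c (clamping rounding error).
    c = np.sqrt(max(0.0, r * r - x_min @ x_min))
    return x_min + c * e_extra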

Inspired by this, and wanting to get the coveted first solve award, I begin coding this solution, typing the fastest that I have in a while. After about 20 minutes, at the three-hour mark, I have a mostly working solution that seems to check out on the sample data, and I excitedly submit, to get

$$${\color{red}{\textbf{Wrong Answer}}}$$$.

After this, there's a fairly obvious typo I fix, and I resubmit, getting

$$${\color{red}{\textbf{Runtime Error}}}$$$.

At the last few programming contests I've participated in, runtime errors have been a pretty common occurrence for me, even though it's normally a fairly rare verdict and one that is usually not too hard to fix. Calculating the value of $$$c$$$ above involves a square root, and due to floating-point error its argument sometimes ends up slightly below zero, so I patch that issue. Resubmit!

$$${\color{red}{\textbf{Runtime Error}}}$$$

At this point, I try running my new code on the samples again, and get something quite unusual: my code prints the answer, but throws a runtime error in the middle. Frustrated, I step away from the problem briefly and let my other teammate (halohalo) look for the runtime error. He quickly finds something amiss. Namely, to compute $$$e_{n+1}$$$ my code calls a subroutine that essentially runs Gram-Schmidt orthogonalization on the rows of $$$A$$$, but with a random vector appended to the end of $$$A$$$. The last value of $$$b$$$ doesn't matter for this component (in fact, the value of $$$b$$$ doesn't matter at all for computing $$$e_{n+1}$$$), but the way I had it implemented, $$$b$$$ was one element shorter than it was supposed to be, which led to out-of-bounds accesses. We fix this, and are greeted by:

$$${\color{red}{\textbf{Wrong Answer}}}$$$

At this point ekzhang decides to reimplement my program from scratch. I work on preparing large tests, hoping to find an error. I find one: a large test where the variable $$$n$$$ equals $$$500$$$ and $$$d$$$ is $$$490$$$. In this case the linear system is actually overdetermined: we have $$$499$$$ equations in $$$490$$$ variables. Here naive Gram-Schmidt starts trying to normalize vectors which (up to floating-point error) are zero, with potentially unusual consequences. In this case, my program explodes.
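For reference, a guarded Gram-Schmidt skips such numerically zero vectors instead of normalizing them. This is a sketch of the guard, not our contest code, and the threshold is my own choice:

import numpy as np

EPS = 1e-9

def gram_schmidt(rows):
    basis = []
    for v in rows:
        w = np.array(v, dtype=float)
        for e in basis:
            w -= (w @ e) * e            # remove the components along the existing basis vectors
        norm = np.linalg.norm(w)
        if norm > EPS:                  # keep only vectors that are not numerically zero
            basis.append(w / norm)
    return basis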

His program passes the samples, but doesn't pass the large case either. There are now about $$$10$$$ minutes left in the contest, so he tries a hail mary, adding one line of code to the beginning of the program: $$$n = \min(n, d + 1)$$$. In fact, I thought that I had expressed this implicitly later in the program.

His solution still fails the weird case, but is much closer to correct. With nothing else to try, 4 minutes left, and 15 wrong submissions behind us, I add this line of code to the beginning of my program and submit.

$$${\color{green}{\textbf{Correct}}}$$$

When I see the green $$${\color{green}{\textbf{Correct}}}$$$ on the screen, it's probably the happiest moment I've ever had in any contest ever.

You couldn't have scripted a better ending to my ICPC career (I won't be competing again), one where I solved the hardest math problem in the set.

Originally we thought that Latvia might have solved the problem before us, but when we see that their last submission came after ours (at minute 298, with 2 minutes left), we know that we've won the first-to-solve award for the problem.

There were many things that could have gone better in solving this problem. We probably didn't need all those wrong submissions, and I could've added the $$$n = \min(n, d + 1)$$$ check earlier (it never hurts to check twice!). But we had our shiny Correct, and that's all that mattered to me in the end.

I'd like to thank everyone who helped me (far too numerous to list in the post) for a great 6 years of competitive programming. Bon voyage!


By Franklyn_W, history, 7 years ago, In English

Hi,

Today I wanted to read the accepted codes for Educational Codeforces Round 2 by going here: http://mirror.codeforces.com/contest/600/status. However, I couldn't find anything; it seems I can't access any of the code.


By Franklyn_W, history, 7 years ago, In English

Hi Codeforces,

I'm currently working on a problem related to network flows. Does anyone know how to implement constraints of the following type in a flow network?

Edges 1 and 2 go into a vertex and have capacity one; edges 3 and 4 go out of it and have capacity one. How can one enforce a constraint to the effect that edges 2 and 3 can't carry a combined flow of more than one?

UPD: I should note that the context involves min-cost flow, so perhaps we can leverage the edge costs.


By Franklyn_W, history, 8 years ago, In English

Hi all,

For a research project I am conducting, I have come across an algorithms problem. I would like a solution that runs in at most an hour or so. Can someone give me an idea? Thank you!

Let $$$b$$$ be the permutation with cycle structure $$$(\{1,2,\ldots,7\},\ \{8,9,\ldots,14\},\ \{15,16,\ldots,21\},\ \{22,23,\ldots,28\})$$$. Find all permutations $$$a$$$ of $$$28$$$ elements such that:

- $$$a$$$ has 8 cycles of length 3 and 4 fixed points;
- $$$a \cdot b$$$ has 14 cycles of length 2.

(A small checker for these conditions is sketched below.)
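For concreteness, here is a small Python checker for these conditions on a candidate $$$a$$$ (only a verifier, not a search; the composition convention $$$(a \cdot b)(i) = a(b(i))$$$ and the 0-indexing are my assumptions):

from collections import Counter

def cycle_type(p):
    # p is a permutation of 0..len(p)-1 given as a list; returns Counter {cycle length: count}.
    seen = [False] * len(p)
    lengths = Counter()
    for i in range(len(p)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            lengths[length] += 1
    return lengths

# b is the product of the four 7-cycles (1..7)(8..14)(15..21)(22..28), written 0-indexed.
b = []
for start in range(0, 28, 7):
    b += [start + (k + 1) % 7 for k in range(7)]

def satisfies(a):
    ab = [a[b[i]] for i in range(28)]   # (a*b)(i) = a(b(i)) -- convention assumed
    return (cycle_type(a) == Counter({3: 8, 1: 4})
            and cycle_type(ab) == Counter({2: 14}))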


By Franklyn_W, history, 9 years ago, In English

Problem 629D - Babaei and Birthday Cake asks for the heaviest strictly increasing subsequence of a sequence of volumes. When I saw this problem (in practice) I immediately recognized that it was probably a well-known problem, so I looked up code. I found some code on StackOverflow, but it occurred to me that it only found the heaviest nondecreasing subsequence. Not willing to change the code too much, I came up with the following idea. If reading the array one element at a time looks like this:

n = int(input())        # number of elements
l = []
for r in range(n):
    k = int(input())    # read each value unchanged
    l.append(k)

then, in order to make the heaviest nondecreasing subsequence of the modified array correspond to the heaviest strictly increasing subsequence of the original, do something along these lines. This works because the epsilon is small enough that the relative order of distinct values is unchanged, while equal values become distinct: terms closer to the front become ever so slightly larger than equal terms closer to the end, so no two originally-equal terms can both appear in a nondecreasing subsequence.

l = []
for r in range(n):
    k = int(input())
    l.append(k + (n-r)*0.000001)    # earlier equal values get a slightly larger bump

I submitted this idea, 16347971, but it did not succeed. Why? Because floats do not have sufficient precision: the numbers in the array can be as large as about 1e15, and at that magnitude an offset on the order of 0.000001 is simply lost to rounding (in Python, 1e15 + 1e-6 == 1e15 evaluates to True), so equal values stay effectively equal. The naive way to fix this is just to increase the precision of the numbers, like this:

from decimal import Decimal         # exact decimal arithmetic, but much slower than floats

l = []
for r in range(n):
    k = int(input())
    l.append(Decimal(k) + Decimal((n-r)*0.000001))

I tried this too, but it turns out that we in fact get TLE, even when we water down the precision: 16348275. And who can blame it? Decimal arithmetic is slow. We clearly need a paradigm shift: since the magnitudes vary so widely, a fixed absolute perturbation for each term is not acceptable. How about using a relative perturbation instead of an absolute one?

This would look like:

l = []
for r in range(n):
    k = int(input())
    l.append(k + (n-r)*k*0.00000001)    # relative bump, scaled by k itself

The reason this works is that equal terms are still larger when they come closer to the front. However, this method has its own flaws: for instance, if $$$k$$$ is large enough, the perturbed value could become larger than $$$k+1$$$. Furthermore, the sum of the perturbed terms in the subsequence could differ noticeably from the true answer. Both issues can be fixed by using a small enough epsilon, as seen below:

l = []
for r in range(n):
    k = int(input())
    l.append(k + (n-r)*k*0.00000000001)    # same idea, with a much smaller epsilon

Sure enough, this gets AC! 16348026

Hopefully this brightened up your day!


By Franklyn_W, history, 9 years ago, In English

By Franklyn_W, history, 9 years ago, In English

Hi all,

Today I was unable to solve this problem: http://usaco.org/index.php?page=viewproblem2&cpid=213. My code exactly follows the official solution, yet it gets WA on tests 7, 8, and 9.

http://ideone.com/cV0eKj is my code. Thanks!


By Franklyn_W, 10 years ago, In English

The contribution leaderboard says that user Endagorion has contribution 135, but his profile page says he has contribution 136. Why does this happen?

In fact, it is now 137.

EDIT: It has finally been updated!
