ramprakash_k's blog

By ramprakash_k, 10 years ago, In English

There are n > 0 people standing in a queue. Their heights are a permutation of {1, 2, ..., n}.
There are m > 0 ships ready for departure.
You can perform two operations (processing the queue in order):
-> Allot a ship i (1 <= i <= m) to the person at the front of the queue.
-> Deny him service (send him away).
There is one constraint, though: a person will board the ship allotted to him only if all the people already on that ship are shorter than him (otherwise he feels inferior).
As the manager, you need to maximize the number of people that board these m ships.
(Sorry, I don't remember the exact constraints on m and n, but they are pretty big.)
Input:
n m
a1 a2 a3 ... an
Output:
k
(a single integer 0 < k <= n, the maximum number of people that can be allotted to these m ships)
Example:
5 2
4 2 5 1 3
Output:
4
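For concreteness, the sample can be checked with a tiny brute-force sketch (my own code, not part of the original statement) that tries every way of assigning each person to one of the m ships or turning them away; it is exponential, so it is only useful for verifying small cases:

```python
from itertools import product

def max_boarded_brute(heights, m):
    """Try every assignment of each person to a ship (1..m) or denial (0)."""
    best = 0
    for choice in product(range(m + 1), repeat=len(heights)):
        tops = [0] * (m + 1)  # tallest person already on each ship
        count = 0
        ok = True
        for person, ship in zip(heights, choice):
            if ship == 0:
                continue          # denied service
            if person < tops[ship]:
                ok = False        # he'd feel inferior; assignment invalid
                break
            tops[ship] = person
            count += 1
        if ok:
            best = max(best, count)
    return best

print(max_boarded_brute([4, 2, 5, 1, 3], 2))  # 4, matching the sample output
```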


»
10 years ago, # |

Isn't it longest increasing subsequence? I'm not sure if I understood the statement right: one will use a ship only if all the people who have boarded it so far are shorter than him. Did I get it right?

  • »
    »
    10 years ago, # ^ |

    It is similar to the longest-increasing-subsequence problem, but differs in that we have to partially partition the input permutation into at most m increasing subsequences, maximizing the number of elements of the initial permutation that get used. In the example I stated, one possible method would be: (4, 5), (1, 3).

»
10 years ago, # |

Hello ramprakash_k,

This problem is just a modification of the longest-increasing-subsequence problem, as mentioned by @P_Nyagolov, but in this case we need M disjoint increasing subsequences that maximize the total number of selected elements.

This problem can be solved using a greedy approach. First find a longest increasing subsequence and assign it to ship M1. Remove the elements selected in this LIS from the given permutation and repeat the process. This way you end up with the maximum number of selected elements.

If dynamic programming is used to find the LIS, this takes O(M*N^2) time. Otherwise, if a data structure is used to find the LIS, the complexity boils down to O(M*N*log(N)).

If you find anything incorrect then please reply back.
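Here is a quick sketch of the repeated-LIS greedy described above (names and the quadratic LIS are my own choices, for brevity). On the sequence 3 4 7 2 5 6 with m = 2 it covers only 5 people, which lines up with the counterexample pointed out in the replies:

```python
def one_lis(seq):
    """Indices of one longest increasing subsequence (simple O(n^2) DP)."""
    n = len(seq)
    if n == 0:
        return []
    best = [1] * n           # best[i]: LIS length ending at position i
    prev = [-1] * n
    for i in range(n):
        for j in range(i):
            if seq[j] < seq[i] and best[j] + 1 > best[i]:
                best[i] = best[j] + 1
                prev[i] = j
    i = max(range(n), key=lambda k: best[k])
    path = []
    while i != -1:
        path.append(i)
        i = prev[i]
    return path[::-1]

def greedy_m_lis(heights, m):
    """Peel off an LIS m times, counting how many elements get covered."""
    seq = list(heights)
    covered = 0
    for _ in range(m):
        idx = one_lis(seq)
        covered += len(idx)
        for i in reversed(idx):   # delete from the back to keep indices valid
            del seq[i]
    return covered

print(greedy_m_lis([4, 2, 5, 1, 3], 2))     # 4
print(greedy_m_lis([3, 4, 7, 2, 5, 6], 2))  # 5, although 6 is achievable
```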

  • »
    »
    10 years ago, # ^ |

    It would be great if you could share a link to the above problem.

  • »
    »
    10 years ago, # ^ |

    I think this approach will not work. Consider the sequence 3 4 7 2 5 6 and let m = 2. The first LIS would be 3 4 5 6; after removing these we are left with 7 2, so the second LIS would have only 1 person, leaving 1 person behind. However, we can accommodate everyone in these two ships: 3 4 7 in the first ship and 2 5 6 in the second.

  • »
    »
    10 years ago, # ^ |
    Rev. 2

    I had thought about this but rejected the idea for two reasons:

    1) It won't work correctly, as sajo said.

    2) I think that "pretty big" constraints means N, M >= 10^4 or N, M >= 10^5.

»
10 years ago, # |
Rev. 2

Ignore. The solution presented here was incorrect.

  • »
    »
    10 years ago, # ^ |

    In your case, you don't check whether the number of selected nodes is maximal.

    Flow just maximizes the total flow possible, which in this case is m.

    I had asked this question to my senior friends earlier, and they said the solution is related to flows but did not tell me the solution exactly. (That's why the tag "flow".)

    • »
      »
      »
      10 years ago, # ^ |

      If it is related to flow, what are the constraints approximately? Something like 1000, or 1000000?

    • »
      »
      »
      10 years ago, # ^ |

      When you see the word "flow" here, the obvious idea should be min-cost flow. Add an edge from person a to person b if they can be in the same subsequence and a goes before b; split every person into two vertices with an edge of cost -1 and capacity 1 between them (now every person can belong to only one subsequence, and we "pay" -1 for using him). Now you may use at most m units of flow, and you are interested in paying the minimum price for it.

      You haven't mentioned the constraints; if "pretty big" means something like 500, then it isn't so bad and this solution can pass; but if it is 100000, well, that's bad :)

      • »
        »
        »
        »
        10 years ago, # ^ |

        Thanks for helping out.
        Could you please elaborate on this statement of yours? I didn't quite get it:
        "split every person in two vertices with edge of cost -1 and capacity 1 between them"

        • »
          »
          »
          »
          »
          10 years ago, # ^ |

          This is a common trick for when you need to give capacities to vertices, not only to edges. Make two new vertices from every original vertex; call them the left half and the right half. Every edge (a, b) of the original graph now goes from the right half of a to the left half of b. If you add an edge of capacity x between the left half and the right half of some vertex, it gives you an additional limitation: no more than x units of flow can pass through that vertex.
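Tying the last two comments together, here is one possible sketch of the whole construction (all naming mine). It uses SPFA-based successive shortest paths, since the split edges have negative cost; flow is capped at m by a super source, and we stop augmenting once the shortest path no longer adds people:

```python
from collections import deque

class MinCostFlow:
    """Successive shortest paths with an SPFA (queued Bellman-Ford) search,
    which tolerates the negative costs on the split edges."""

    def __init__(self, n):
        self.n = n
        self.g = [[] for _ in range(n)]  # edge: [to, cap, cost, rev_index]

    def add_edge(self, u, v, cap, cost):
        self.g[u].append([v, cap, cost, len(self.g[v])])
        self.g[v].append([u, 0, -cost, len(self.g[u]) - 1])

    def flow(self, s, t, maxf):
        total_cost, flow = 0, 0
        while flow < maxf:
            dist = [float('inf')] * self.n
            prev = [None] * self.n      # (node, edge index) on shortest path
            in_q = [False] * self.n
            dist[s] = 0
            q = deque([s])
            while q:
                u = q.popleft()
                in_q[u] = False
                for i, (v, cap, cost, _) in enumerate(self.g[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
                        if not in_q[v]:
                            in_q[v] = True
                            q.append(v)
            if dist[t] >= 0:            # augmenting further adds no people
                break
            v = t                       # every s-t path has bottleneck 1 here
            while v != s:
                u, i = prev[v]
                self.g[u][i][1] -= 1
                self.g[v][self.g[u][i][3]][1] += 1
                v = u
            flow += 1
            total_cost += dist[t]
        return total_cost

def max_boarded_flow(heights, m):
    n = len(heights)
    S, T, SS = 2 * n, 2 * n + 1, 2 * n + 2   # SS caps total flow at m
    net = MinCostFlow(2 * n + 3)
    net.add_edge(SS, S, m, 0)
    for i in range(n):
        net.add_edge(i, n + i, 1, -1)  # split edge: use person i, "pay" -1
        net.add_edge(S, i, 1, 0)       # a subsequence may start at i
        net.add_edge(n + i, T, 1, 0)   # ... and end anywhere
        for j in range(i + 1, n):
            if heights[i] < heights[j]:
                net.add_edge(n + i, j, 1, 0)
    return -net.flow(SS, T, m)

print(max_boarded_flow([4, 2, 5, 1, 3], 2))     # 4
print(max_boarded_flow([3, 4, 7, 2, 5, 6], 2))  # 6
```

Note that on the greedy counterexample 3 4 7 2 5 6 the second augmentation is allowed to reroute part of the first one through residual edges, which is exactly why this finds 6 where repeated-LIS finds 5. With O(n^2) edges this only fits small constraints.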

»
10 years ago, # |

The length of the longest decreasing subsequence gives you the minimum number of ships required to make all people board some ship.

So the problem reduces to: remove the minimum number of elements from the permutation so that the length of the longest decreasing subsequence is no more than the number of ships you have. Try to solve this one using dynamic programming.
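The first claim follows from Dilworth's theorem: for a permutation, the minimum number of increasing subsequences needed to cover all elements equals the length of the longest decreasing subsequence. That length can be computed by patience sorting on the negated sequence; a short sketch (helper name mine):

```python
import bisect

def min_ships_for_everyone(heights):
    """Length of the longest strictly decreasing subsequence, computed by
    patience sorting on the negated sequence in O(n log n)."""
    tails = []  # tails[k]: smallest possible tail of an increasing
                # subsequence of length k+1 in the negated sequence
    for h in heights:
        pos = bisect.bisect_left(tails, -h)
        if pos == len(tails):
            tails.append(-h)
        else:
            tails[pos] = -h
    return len(tails)

print(min_ships_for_everyone([4, 2, 5, 1, 3]))     # 3: e.g. 4, 2, 1
print(min_ships_for_everyone([3, 4, 7, 2, 5, 6]))  # 2: e.g. 7, 5
```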

  • »
    »
    10 years ago, # ^ |

    Dynamic programming over the set of removed elements is the first idea that comes to mind, but that would be exponential, right?
    Could you give me a hint, please?

»
10 years ago, # |

I decided to google this problem :) Here msg555 said that it can be solved with the RSK algorithm. I hadn't heard about this algorithm before; I've found a few articles like this one, but I'm still not sure whether the answer to your question equals the number of cells in the first M rows of the generated tableau (this makes some sense and works on random small cases), or whether we have to do some more complicated stuff :) Probably we should wait for someone more competent to provide a correct solution, but at least now you have some keywords to search for :)

»
10 years ago, # |

I_love_Tanya_Romanova is correct in suggesting that the RSK algorithm can be used to solve this problem. The number of cells in the first k rows of the generated tableau equals the maximum cardinality of k disjoint increasing subsequences of the input permutation. The RSK tableau can be computed completely in O(n^2), or its first k rows in O(nk log n). Using some symmetries you can actually compute the whole tableau in O(n^(3/2) log n) time.

To give a bit of background, the RSK algorithm was thought up to facilitate a combinatorial proof relating the number of these (standard Young) tableaux to the number of permutations (it turns out there is a bijection between pairs of tableaux of the same shape and permutations). You can find the original proof of the theorem relating the maximal cardinality of k increasing subsequences to the RSK tableau in An Extension of Schensted's Algorithm.

In the average case (i.e. for a random permutation) the RSK algorithm already runs in around O(n^(3/2)) time. However, a long decreasing permutation, for example, takes O(n^2) time. To deal with this it's enough to observe the following. Let P(pi) denote the resulting tableau for the permutation pi; then the following identity holds

P(pi) = P(reverse(pi))'

where the apostrophe indicates an operation analogous to the transpose of matrices (exchanging columns and rows). Using this identity we can reconstruct P(pi) by computing the first sqrt(n) rows of P(pi) and the first sqrt(n) rows of P(reverse(pi)) and combining them. Since the number of columns in each row of a tableau is non-increasing, it follows that no cell can have both row and column index > sqrt(n). This lets us compute P(pi) in O(n^(3/2) log n).
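For moderate n, plain Schensted row insertion (worst case O(n^2)) is already enough to read off the answer as the number of cells in the first m rows; the transpose identity and the O(n^(3/2) log n) refinement only matter for large inputs. A sketch, with naming mine:

```python
import bisect

def rsk_rows(perm):
    """Insertion tableau P of the permutation, built by Schensted row
    insertion: each value either extends a row or bumps the smallest
    larger value down to the next row."""
    rows = []
    for x in perm:
        for row in rows:
            pos = bisect.bisect_left(row, x)
            if pos == len(row):
                row.append(x)
                break
            row[pos], x = x, row[pos]    # bump the displaced value downward
        else:                            # fell off the bottom: start a new row
            rows.append([x])
    return rows

def max_boarded_rsk(heights, m):
    """Cells in the first m rows of P = maximum total size of m disjoint
    increasing subsequences (Greene's theorem)."""
    return sum(len(r) for r in rsk_rows(heights)[:m])

print(rsk_rows([4, 2, 5, 1, 3]))               # [[1, 3], [2, 5], [4]]
print(max_boarded_rsk([4, 2, 5, 1, 3], 2))     # 4
print(max_boarded_rsk([3, 4, 7, 2, 5, 6], 2))  # 6
```

With m = 1 this degenerates to the length of the first row, i.e. the classic LIS length.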

  • »
    »
    10 years ago, # ^ |
    Rev. 3

    If it's useful to anyone to have a concrete implementation to look at, here's an implementation of the O(n^(3/2) log n) RSK algorithm that I wrote two years ago (it doesn't generate the recording tableau, though). Note that when k = 1, bounded_rsk is just the classic longest-increasing-subsequence algorithm.

    https://gist.github.com/msg555/4242182

    I'm unaware of any way to reconstruct the increasing subsequences using anything other than min cost flow, however.

    • »
      »
      »
      10 years ago, # ^ |

      Isn't reconstruction discussed in section 4 of your link?

      • »
        »
        »
        »
        10 years ago, # ^ |

        Oh, good call. It seems you need Ω(n^2) swaps in the worst case, so I think we can't do better than that. Fortunately it looks like this can be practically implemented with that complexity. I wrote up a sample implementation to make sure I understood; I'll share it at https://gist.github.com/msg555/868661a5dfcdb66f353a in case it's helpful for anyone else.