Problem Statement:

Given a sequence a_1, a_2, …, a_n, find the longest subsequence such that for every pair of positions i < j in it, a_i < a_j. – Algorithmist

Example – Wikipedia

In the binary Van der Corput sequence

0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15, …

a longest increasing subsequence is

0, 2, 6, 9, 13, 15.

This subsequence has length six; the input sequence has no seven-member increasing subsequences. The longest increasing subsequence in this example is not unique: for instance,

0, 4, 6, 9, 11, 15

is another increasing subsequence of equal length in the same input sequence.
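Both claims can be checked mechanically. The helper below (not from the original article) verifies that a candidate is strictly increasing and occurs, in order, within the input sequence:

```python
def is_increasing_subsequence(sub, seq):
    """Return True if `sub` is strictly increasing and occurs in `seq` in order."""
    if any(x >= y for x, y in zip(sub, sub[1:])):
        return False
    it = iter(seq)
    # membership tests against an iterator consume it, so each element
    # must be found strictly after the previous one
    return all(x in it for x in sub)

vdc = [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
print(is_increasing_subsequence([0, 2, 6, 9, 13, 15], vdc))  # True
print(is_increasing_subsequence([0, 4, 6, 9, 11, 15], vdc))  # True
```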

Approach:

Dynamic Programming: – Algorithmist

There is a straightforward Dynamic Programming solution in O(n^2) time. Though this is asymptotically equivalent to the Longest Common Subsequence version of the solution, the constant is lower, as there is less overhead.

Algorithm Pseudocode:

function lis_length( a )
    n := a.length
    q := new Array(n)
    for k from 0 to n-1:
        max := 0
        for j from 0 to k-1, if a[k] > a[j]:
            if q[j] > max, then set max := q[j]
        q[k] := max + 1
    max := 0
    for i from 0 to n-1:
        if q[i] > max, then set max := q[i]
    return max

Implementation: – Finding Length of LIS using DP

def lis_length(a):
    n = len(a)
    q = [1] * n                  # q[k] = length of the LIS ending at a[k]
    for k in range(n):
        best = 0
        for j in range(k):       # best LIS length over smaller predecessors
            if a[k] > a[j] and q[j] > best:
                best = q[j]
        q[k] = best + 1
    return max(q) if q else 0    # overall LIS ends at some position

a = [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
value = lis_length(a)
print("Length of LIS =", value)

OUTPUT:

labuser@ubuntu:~$ python LISDP.py

Length of LIS = 6

Finding LIS using Patience Sorting – Greedy Optimal Strategy

The British use the term Patience to refer to Solitaire with cards.

David Aldous and Persi Diaconis wrote the paper entitled: “Longest Increasing Subsequences: From Patience Sorting to the Baik-Deift-Johansson Theorem”.

The authors describe the Patience (Solitaire) game, played in a special manner:

“Take a deck of cards labeled 1, 2, 3, … , n. The deck is shuffled, cards are turned up one at a time and dealt into piles on the table, according to the rule

* A low card may be placed on a higher card (e.g. 2 may be placed on 7), or may be put into a new pile to the right of the existing piles.

At each stage we see the top card on each pile. If the turned up card is higher than the cards showing, then it must be put into a new pile to the right of the others. The object of the game is to finish with as few piles as possible”

From their studies they suggest a target of 9 piles: with the greedy strategy there is roughly a 5% chance of finishing with 9 piles or fewer.

import bisect
import random

def LIS(seq):
    """Return the number of piles after greedy patience sorting of seq."""
    piles = []                              # each pile keeps its top card at index 0
    for x in seq:
        i = bisect.bisect_left(piles, [x])  # leftmost pile whose top card >= x
        if i != len(piles):
            piles[i].insert(0, x)           # place x on top of that pile
        else:
            piles.append([x])               # x exceeds every top: start a new pile
    return len(piles)

a = list(range(1, 53))                      # a 52-card deck labeled 1..52
lis = []
for _ in range(10000):
    random.shuffle(a)
    lis.append(LIS(a))
print(dict((item, lis.count(item)) for item in set(lis)))

The authors explain in the paper how this mechanism of forming piles can be used to find the LIS.

If we define L(π) to be the length of the longest increasing subsequence of a permutation π of our card deck, then the paper's lemma states:

Lemma 1. With deck π, patience sorting played with the greedy strategy ends with exactly L(π) piles. Furthermore, the game played with any legal strategy ends with at least L(π) piles. So the greedy strategy is optimal and cannot be improved by any look-ahead strategy.

Proof. If cards a1 < a2 < … < al appear in increasing order, then under any legal strategy each ai must be placed in some pile to the right of the pile containing ai-1, because the card number on top of that pile can only decrease. Thus the final number of piles is at least l, and hence at least L(π). Conversely, using the greedy strategy, when a card c is placed in a pile other than the first pile, put a pointer from that card to the currently top card c′ < c in the pile to the left. At the end of the game, let al be the card on top of the rightmost pile l. The sequence a1 ← a2 ← … ← al-1 ← al obtained by following the pointers is an increasing subsequence whose length is the number of piles.
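The pointer construction in the proof translates directly into code. The sketch below (our own, not from the paper) keeps only the top card of each pile, records a back-pointer from each dealt card to the card currently on top of the pile to its left, and then walks the pointers back from the top of the rightmost pile to recover an actual longest increasing subsequence:

```python
import bisect

def lis(seq):
    """Patience sorting with back-pointers: return one LIS of seq."""
    tops = []      # tops[i] = current top card of pile i (always sorted)
    top_idx = []   # index in seq of the current top card of each pile
    prev = []      # prev[k] = index of the top of the pile left of card k (-1 if none)
    for k, x in enumerate(seq):
        i = bisect.bisect_left(tops, x)          # leftmost pile whose top >= x
        prev.append(top_idx[i - 1] if i > 0 else -1)
        if i == len(tops):
            tops.append(x)                       # x is higher than every top
            top_idx.append(k)
        else:
            tops[i] = x                          # x becomes the new top of pile i
            top_idx[i] = k
    out, j = [], top_idx[-1]                     # start at top of rightmost pile
    while j != -1:
        out.append(seq[j])
        j = prev[j]
    return out[::-1]

print(lis([0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]))
# → [0, 2, 6, 9, 11, 15]
```

Note that this recovers [0, 2, 6, 9, 11, 15], one of the several valid answers of length 6 for the Van der Corput example above.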

Furthermore, from a Monte Carlo simulation of 10000 deals of a 52-card deck, they conclude that a random deal typically ends with 10–13 piles, while a successful run with few piles (7, 8, or 9) occurs only about 5% of the time.

OUTPUT:

labuser@ubuntu:~$ python patience.py

{8: 47, 9: 501, 10: 1695, 11: 2805, 12: 2606, 13: 1473, 14: 610, 15: 218, 16: 38, 17: 5, 18: 2}

labuser@ubuntu:~$

labuser@ubuntu:~$ python patience.py

{7: 1, 8: 55, 9: 476, 10: 1736, 11: 2852, 12: 2575, 13: 1480, 14: 581, 15: 183, 16: 48, 17: 10, 18: 3}

labuser@ubuntu:~$

labuser@ubuntu:~$ python patience.py

{7: 1, 8: 64, 9: 521, 10: 1713, 11: 2758, 12: 2596, 13: 1498, 14: 595, 15: 207, 16: 38, 17: 6, 18: 3}

labuser@ubuntu:~$

labuser@ubuntu:~$ python patience.py

{7: 2, 8: 48, 9: 497, 10: 1697, 11: 2794, 12: 2588, 13: 1493, 14: 626, 15: 207, 16: 41, 17: 6, 18: 1}

labuser@ubuntu:~$

labuser@ubuntu:~$ python patience.py

{8: 48, 9: 526, 10: 1748, 11: 2735, 12: 2513, 13: 1530, 14: 642, 15: 215, 16: 38, 17: 4, 18: 1}

labuser@ubuntu:~$

import bisect

def LIS(seq):
    """Return the number of piles after greedy patience sorting of seq."""
    piles = []                              # each pile keeps its top card at index 0
    for x in seq:
        i = bisect.bisect_left(piles, [x])  # leftmost pile whose top card >= x
        if i != len(piles):
            piles[i].insert(0, x)           # place x on top of that pile
        else:
            piles.append([x])               # x exceeds every top: start a new pile
    return len(piles)

a = [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]
value = LIS(a)
print("Length of LIS =", value)

labuser@ubuntu:~$ python patience.py

Length of LIS = 6

So for the sequence 7 2 8 1 3 4 10 6 9 5 we get the pile formation below (each pile listed top card first), with number of piles = 5:

Pile 1: 1, 2, 7
Pile 2: 3, 8
Pile 3: 4
Pile 4: 5, 6, 10
Pile 5: 9

Thus the length of the longest increasing subsequence = 5.

Similar results appear in the paper, which helps confirm the 10–13 average pile count it suggests.

Space Complexity:

O(P) space suffices to find the length of the LIS, where P is the number of piles, since only the top card of each pile is needed (the implementation above stores whole piles, which is O(N)). Patience sorting takes O(N log P) time, where N is the number of elements, since each card is placed with a binary search over the pile tops.
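To make the O(P) space bound concrete, here is a minimal length-only variant (our sketch, not the article's code) that keeps just the sorted list of pile tops instead of the full piles:

```python
import bisect

def lis_length(seq):
    """Length of the LIS via patience sorting, keeping only pile tops.
    O(N log P) time, O(P) extra space for P piles."""
    tops = []  # tops[i] = top card of pile i; stays sorted throughout
    for x in seq:
        i = bisect.bisect_left(tops, x)  # leftmost pile whose top >= x
        if i == len(tops):
            tops.append(x)   # x is higher than every top: start a new pile
        else:
            tops[i] = x      # place x on that pile; it becomes the new top
    return len(tops)

print(lis_length([0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]))  # 6
```

Replacing each pile with its top card does not change the pile count, because only the tops ever matter to the placement rule.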

Misc Information: As per the paper

According to D. Aldous and P. Diaconis, patience sorting was first recognized as an algorithm to compute the longest increasing subsequence length by Hammersley, and by A.S.C. Ross and independently Robert W. Floyd as a sorting algorithm. Initial analysis was done by Mallows.