I am studying the following algorithm:
Given an integer matrix, find the length of the longest increasing path.
From each cell, you can move in one of four directions: left, right, up or down. You may NOT move diagonally or move outside of the boundary (i.e. wrap-around is not allowed).
Example 1:
nums = [
[9,9,4],
[6,6,8],
[2,1,1]
]
Return 4
The longest increasing path is [1, 2, 6, 9].
Example 2:
nums = [
[3,4,5],
[3,2,6],
[2,2,1]
]
Return 4
The longest increasing path is [3, 4, 5, 6]. Moving diagonally is not allowed.
My first instinct was to make up to 4 recursive calls per cell, one for each direction holding an increasing value. For example, if the integer 1 is at [1,1] and three of the four values surrounding that cell are larger, three calls would be made. Repeating this for every cell in the worst case is, I believe, O(4^(m*n)) [up to 4 function calls per element, with m×n elements].
However, the solution proposes the following brute force (before going on to optimize it with memoization):
Algorithm
Each cell can be seen as a vertex in a graph G. If two adjacent cells have values a < b, i.e. increasing, then we have a directed edge (a, b). The problem then becomes:
Search the longest path in the directed graph G.
Naively, we can use DFS or BFS to visit all the cells reachable from each starting cell. We update the maximum length of the path during the search and have the answer when it finishes.
Usually, in DFS or BFS, we can employ a set `visited` to prevent cells from being visited more than once. We will introduce a better algorithm based on this in the next section.
Java
// Naive DFS Solution
// Time Limit Exceeded
public class Solution {
    private static final int[][] dirs = {{0, 1}, {1, 0}, {0, -1}, {-1, 0}};
    private int m, n;

    public int longestIncreasingPath(int[][] matrix) {
        if (matrix.length == 0) return 0;
        m = matrix.length;
        n = matrix[0].length;
        int ans = 0;
        for (int i = 0; i < m; ++i)
            for (int j = 0; j < n; ++j)
                ans = Math.max(ans, dfs(matrix, i, j));
        return ans;
    }

    private int dfs(int[][] matrix, int i, int j) {
        int ans = 0;
        for (int[] d : dirs) {
            int x = i + d[0], y = j + d[1];
            if (0 <= x && x < m && 0 <= y && y < n && matrix[x][y] > matrix[i][j])
                ans = Math.max(ans, dfs(matrix, x, y));
        }
        return ++ans;
    }
}
This algorithm seems very similar in time complexity to mine (in the worst case it launches up to 4 dfs calls per cell), yet it comes with the following complexity analysis:
Complexity Analysis
Time complexity : O(2^{m+n}). The search is repeated for each valid increasing path. In the worst case we can have O(2^{m+n}) calls. For example:
1 2 3 . . . n
2 3 . . . n+1
3 . . . n+2
. .
. .
. .
m m+1 . . . n+m-1
Space complexity : O(mn). For each DFS we need O(h) space used by the system stack, where h is the maximum depth of the recursion. In the worst case, O(h) = O(mn).
I don't follow how they arrived at either the time or the space complexity. For the space cost, I imagine the worst case is a matrix sorted in ascending order by both row and column, but wouldn't a single stack be roughly proportional to the length of the diagonal of the m×n rectangle? Is that why the worst-case space is proportional to O(mn)? Also, how does the search cost O(2^(m+n)) rather than O(4^(m*n))?
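One concrete way to probe the 2^(m+n)-versus-4^(m*n) question is to count the calls the naive DFS actually makes on the staircase matrix from the analysis above, arr[i][j] = i + j + 1. A quick sketch (the helper `count_calls` is introduced here purely for illustration):

```ruby
# Count the dfs calls the naive search makes on the "staircase"
# matrix arr[i][j] = i + j + 1 from the complexity analysis.
def count_calls(m, n)
  arr = Array.new(m) { |i| Array.new(n) { |j| i + j + 1 } }
  dirs = [[0, 1], [1, 0], [0, -1], [-1, 0]]
  calls = 0
  dfs = lambda do |i, j|
    calls += 1
    dirs.each do |di, dj|
      x, y = i + di, j + dj
      next unless x.between?(0, m - 1) && y.between?(0, n - 1)
      dfs.call(x, y) if arr[x][y] > arr[i][j]  # recurse only into larger values
    end
  end
  m.times { |i| n.times { |j| dfs.call(i, j) } }  # start a DFS from every cell
  calls
end

[2, 3, 4].map { |k| count_calls(k, k) }
#=> [10, 53, 226]
```

For square sizes 2, 3 and 4 this yields 10, 53 and 226 calls, which sit close to 2^(m+n) = 16, 64, 256 for this particular matrix family, far below 4^(m*n).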
Answer (score: 1)
I think Sunny is correct that the time complexity of the brute-force search is O(4^(m*n)), where m and n are the numbers of rows and columns of the matrix associated with the given array. I would like to suggest an O(m*n) algorithm and Ruby code that implements it.
In problems such as this one there is a tendency to consider moving from each element of the matrix to all adjacent larger-valued elements (up to 4 of them), then from each of those elements to all of their adjacent larger-valued elements, and so on (hence the O(4^(m*n))). Since paths must be increasing, however, we can look at the problem in a different way, which allows an efficient algorithm to be developed.
Suppose we group all locations by value. For the example given in the question this can be expressed as:
value_to_locs
#=> { 9=>[[0, 0], [0, 1]], 4=>[[0, 2]], 6=>[[1, 0], [1, 1]], 8=>[[1, 2]],
#     2=>[[2, 0]], 1=>[[2, 1], [2, 2]]}
Next let's consider the hash's keys in decreasing order:
to_process = value_to_locs.keys.sort.reverse
#=> [9, 8, 6, 4, 2, 1]
First we process the two locations having the value 9. The lengths of the increasing paths beginning at these two locations are clearly 1 (the paths being [[0, 0]] and [[0, 1]]), as there is nowhere to go. We save this information in the previously-empty hash processed, which becomes:
processed = { [0, 0]=>{ len: 1, next: nil }, [0, 1]=>{ len: 1, next: nil } }
Next consider the single location having the value 8 (the second element of to_process), [1, 2]. If the increasing path beginning at this location has length greater than 1, it must next move to one of the locations in processed. [1, 2] is adjacent to neither [0, 0] nor [0, 1], so we add the key-value pair
[1, 2]=>{ len: 1, next: nil }
to processed, obtaining
processed
#=> { [0, 0]=>{ len: 1, next: nil }, [0, 1]=>{ len: 1, next: nil },
#     [1, 2]=>{ len: 1, next: nil } }
The next value to examine (from to_process) is 6, which appears at two locations, [1, 0] and [1, 1]. The former is adjacent to one location in processed ([0, 0]), so we add to processed the key-value pair
[1, 0]=>{ len: 1 + processed[[0, 0]][:len], next: [0, 0] }
#=> { len: 2, next: [0, 0] }
The other location with value 6, [1, 1], has two adjacent locations in processed, [0, 1] and [1, 2], so we add either
[1, 1]=>{ len: 1 + processed[[0, 1]][:len], next: [0, 1] }
#=> { len: 2, next: [0, 1] }
or
[1, 1]=>{ len: 1 + processed[[1, 2]][:len], next: [1, 2] }
#=> { len: 2, next: [1, 2] }
to processed, namely whichever has the larger value of :len. (Here it is a tie, so either may be chosen.)
We continue in this way until every element of the original array is a key of processed. We then select a location loc for which processed[loc][:len] is greatest as the starting point of a longest increasing path, and reconstruct that path.
Note that the key :next is needed only to reconstruct a longest path. If only the length of a longest path is required, that key is not needed.
Code
def longest_increasing_path(arr)
  row_indices, col_indices = 0..arr.size-1, 0..arr.first.size-1
  value_to_locs = row_indices.each_with_object(Hash.new { |h,k| h[k] = [] }) { |r,h|
    col_indices.each { |c| h[arr[r][c]] << [r,c] } }
  processed = {}
  value_to_locs.keys.sort.reverse.each do |x|
    value_to_locs[x].each do |loc|
      next_on_path = greater_neighbors(loc, arr, row_indices, col_indices).
        max_by { |nloc| processed[nloc][:len] }
      processed[loc] = next_on_path ?
        { len: 1+processed[next_on_path][:len], next: next_on_path } :
        { len: 1, next: nil }
    end
  end
  extract_path(processed)
end

A variant avoids sorting the keys by stepping a counter down from the largest value to the smallest, skipping values that are not present; this is the version traced in the step-by-step walkthrough below:

def longest_increasing_path(arr)
  row_indices, col_indices = 0..arr.size-1, 0..arr.first.size-1
  value_to_locs = row_indices.each_with_object(Hash.new { |h,k| h[k] = [] }) { |r,h|
    col_indices.each { |c| h[arr[r][c]] << [r,c] } }
  processed = {}
  low, high = value_to_locs.keys.minmax
  high.downto(low) do |x|
    next unless value_to_locs.key?(x)
    value_to_locs[x].each do |loc|
      next_on_path = greater_neighbors(loc, arr, row_indices, col_indices).
        max_by { |nloc| processed[nloc][:len] }
      processed[loc] = next_on_path ?
        { len: 1+processed[next_on_path][:len], next: next_on_path } :
        { len: 1, next: nil }
    end
  end
  extract_path(processed)
end

def greater_neighbors((r,c), arr, row_indices, col_indices)
  curr_val = arr[r][c]
  [[-1,0], [1,0], [0,-1], [0, 1]].each_with_object([]) do |(rdelta, cdelta), a|
    ra = r + rdelta
    ca = c + cdelta
    a << [ra, ca] if row_indices.cover?(ra) && col_indices.cover?(ca) &&
      curr_val < arr[ra][ca]
  end
end

def extract_path(processed)
  loc, g = processed.max_by { |loc,g| g[:len] }
  len = g[:len]
  path = [loc]
  loop do
    break if g[:next].nil?
    loc = g[:next]
    path << loc
    g = processed[loc]
  end
  [len, path]
end
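To see the path reconstruction in isolation, `extract_path` (a self-contained copy of the helper above) can be fed a small hand-built `processed` hash; the entries below are made up for illustration:

```ruby
# Find the entry with the largest :len, then follow the :next
# chain to rebuild the path in starting-point-first order.
def extract_path(processed)
  loc, g = processed.max_by { |_loc, h| h[:len] }
  len = g[:len]
  path = [loc]
  loop do
    break if g[:next].nil?
    loc = g[:next]
    path << loc
    g = processed[loc]
  end
  [len, path]
end

# A hand-built hash describing the chain [2, 0] -> [1, 0] -> [0, 0].
processed = {
  [0, 0] => { len: 1, next: nil },
  [1, 0] => { len: 2, next: [0, 0] },
  [2, 0] => { len: 3, next: [1, 0] }
}
extract_path(processed)
#=> [3, [[2, 0], [1, 0], [0, 0]]]
```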
Examples
#1
arr = [
[9,9,4],
[6,6,8],
[2,1,1]
]
longest_increasing_path(arr)
#=> [4, [[2, 1], [2, 0], [1, 0], [0, 0]]]
#2
rows = 10
cols = 10
a = (1..9).to_a
arr = Array.new(rows) { Array.new(cols) { a.sample } }
#=> [[4, 7, 5, 3, 5, 4, 2, 2, 9, 3],
# [8, 3, 3, 5, 4, 2, 8, 1, 8, 3],
# [7, 1, 9, 4, 2, 7, 1, 4, 4, 6],
# [3, 7, 5, 5, 2, 3, 9, 1, 9, 7],
# [2, 6, 7, 1, 5, 9, 3, 5, 2, 9],
# [4, 4, 6, 7, 8, 4, 9, 7, 6, 1],
# [9, 7, 5, 4, 6, 8, 8, 4, 4, 8],
# [3, 1, 9, 9, 5, 7, 9, 6, 7, 2],
# [5, 6, 4, 8, 2, 3, 4, 3, 3, 9],
# [7, 9, 6, 9, 5, 2, 9, 7, 6, 3]]
require 'time'
t = Time.now
longest_increasing_path(arr)
#=> [5, [[6, 3], [6, 2], [5, 2], [5, 3], [5, 4]]]
Time.now - t
#=> 0.003606 seconds
So the longest path has length 5: it begins with the 4 at [6, 3], moves left to the 5, up to the 6, then right to the 7 and finally the 8.
#3
rows = 100
cols = 200
a = (1..20).to_a
arr = Array.new(rows) { Array.new(cols) { a.sample } }
t = Time.now
len, path = longest_increasing_path(arr)
#=> [12, [[86, 121], [86, 120], [86, 119], [87, 119], [87, 118], [86, 118],
#    [85, 118], [85, 117], [86, 117], [87, 117], [88, 117], [89, 117]]]
Time.now - t
#=> 0.153562 seconds
path.map { |r,c| arr[r][c] }
#=> [1, 2, 3, 5, 7, 8, 9, 10, 11, 13, 19, 20]
Explanation
Let's walk through the calculations for example #1, including the helper method greater_neighbors. For arr as shown,
row_indices = 0..arr.size-1
#=> 0..2
col_indices = 0..arr.first.size-1
#=> 0..2
Next (viewing arr as a matrix) we group locations having the same value:
value_to_locs = row_indices.each_with_object(Hash.new { |h,k| h[k] = [] }) { |r,h|
  col_indices.each { |c| h[arr[r][c]] << [r,c] } }
#=> {9=>[[0, 0], [0, 1]], 4=>[[0, 2]], 6=>[[1, 0], [1, 1]],
#    8=>[[1, 2]], 2=>[[2, 0]], 1=>[[2, 1], [2, 2]]}
processed = {}
The hash processed will hold the locations that have already been examined.
low, high = value_to_locs.keys.minmax
#=> [1, 9]
This gives the order in which values are processed, from high down to low.
enum0 = high.downto(low)
#=> #<Enumerator: 9:downto(1)>
Generate the first element of enum0 and pass it to the block:
x = enum0.next
#=> 9
value_to_locs.key?(x)
#=> true
so next is not executed. We therefore first consider all locations whose value is 9, then the locations whose value is 8, and so on.
With 9 generated and passed to the block, and next not executed, the following calculations are performed:
b = value_to_locs[x]
#=> [[0, 0], [0, 1]]
enum1 = b.each
#=> #<Enumerator: [[0, 0], [0, 1]]:each>
loc = enum1.next
#=> [0, 0]
c = greater_neighbors(loc, arr, row_indices, col_indices)
#=> []
The method greater_neighbors simply builds an array of all locations adjacent to loc whose values are greater than the value at loc.
next_on_path = c.max_by { |nloc| processed[nloc][:len] }
#=> nil
processed[loc] = next_on_path ?
  { len: 1+processed[next_on_path][:len], next: next_on_path } :
  { len: 1, next: nil }
#=> {:len=>1, :next=>nil}
processed
#=> {[0, 0]=>{:len=>1, :next=>nil}}
We now generate the next and last element of enum1 and pass it to the block:
loc = enum1.next
#=> [0, 1]
The calculations for this location mirror those for [0, 0], resulting in:
processed
#=> {[0, 0]=>{:len=>1, :next=>nil}, [0, 1]=>{:len=>1, :next=>nil}}
Having reached the last element of enum1, we next execute:
x = enum0.next
#=> 8
value_to_locs.key?(x)
#=> true # so next is not executed
b = value_to_locs[x]
#=> [[1, 2]]
enum1 = b.each
#=> #<Enumerator: [[1, 2]]:each>
loc = enum1.next
#=> [1, 2]
c = greater_neighbors(loc, arr, row_indices, col_indices)
#=> []
Again there is nowhere to go from this location, so we obtain:
processed
#=> {[0, 0]=>{:len=>1, :next=>nil}, [0, 1]=>{:len=>1, :next=>nil},
#    [1, 2]=>{:len=>1, :next=>nil}}
Continuing,
x = enum0.next
#=> 7
value_to_locs.key?(x)
#=> false # so next is executed
x = enum0.next
#=> 6
value_to_locs.key?(x)
#=> true # so next is not executed
b = value_to_locs[x]
#=> [[1, 0], [1, 1]]
enum1 = b.each
#=> #<Enumerator: [[1, 0], [1, 1]]:each>
loc = enum1.next
#=> [1, 0]
c = greater_neighbors(loc, arr, row_indices, col_indices)
#=> [[0, 0]]
next_on_path = c.max_by { |nloc| processed[nloc][:len] }
#=> [0, 0]
processed[loc] = next_on_path ?
  { len: 1+processed[next_on_path][:len], next: next_on_path } :
  { len: 1, next: nil }
#=> {:len=>2, :next=>[0, 0]}
processed
#=> {[0, 0]=>{:len=>1, :next=>nil}, [0, 1]=>{:len=>1, :next=>nil},
#    [1, 2]=>{:len=>1, :next=>nil}, [1, 0]=>{:len=>2, :next=>[0, 0]}}
At last we have come to a location ([1, 0]) that is adjacent to a location with a greater value ([0, 0]).
The calculations continue in this way until we obtain:
processed
#=> {[0, 0]=>{:len=>1, :next=>nil}, [0, 1]=>{:len=>1, :next=>nil},
#    [1, 2]=>{:len=>1, :next=>nil}, [1, 0]=>{:len=>2, :next=>[0, 0]},
#    [1, 1]=>{:len=>2, :next=>[0, 1]}, [0, 2]=>{:len=>2, :next=>[1, 2]},
#    [2, 0]=>{:len=>3, :next=>[1, 0]}, [2, 1]=>{:len=>4, :next=>[2, 0]},
#    [2, 2]=>{:len=>2, :next=>[1, 2]}}
All that remains is to find the key-value pair k=>v of processed for which v[:len] is greatest, and then to reconstruct the longest path. Both steps are performed by the helper extract_path, which is straightforward.