I wrote a function that returns true if an array contains duplicates and false otherwise. My runtime is only in the 50th percentile of LeetCode submissions. Why? Isn't this O(n)? How can I make it faster?
def contains_duplicate(nums)
  hsh = Hash.new(0)
  nums.each do |num|
    hsh[num] += 1
    if hsh[num] > 1
      return true
    end
  end
  return false
end
Runtime: the hash submission is only in the 50th percentile.
*Edit: for the curious, here is the link to the coding problem on LeetCode: https://leetcode.com/problems/contains-duplicate/
Hey folks, I ran the suggested Set code and my runtime was worse:

require 'set'

def contains_duplicate(nums)
  s = Set.new
  nums.each { |num| return true unless s.add?(num) }
  false
end
Runtime: the Set submission is in the 20th percentile.
**Fastest runtime**
def contains_duplicate(nums)
  hsh = Hash.new(0)
  count = 0
  nums.each do |num|
    count += 1
    hsh[num] = 1
    if hsh.size < count
      return true
    end
  end
  return false
end
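The speed-up in this version rests on a Hash property worth spelling out: assigning to a key that already exists does not grow the hash, so `hsh.size` falling behind the running element count signals a duplicate. A minimal sketch (example values are my own):

```ruby
# Reassigning an existing key leaves Hash#size unchanged, so comparing
# hsh.size against a running element count detects the first repeat.
hsh = {}
hsh[5] = 1
hsh[5] = 1        # same key again; size stays 1
puts hsh.size     # => 1
```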
(Screenshot of the LeetCode runtimes: http://i.stack.imgur.com/Xx21p.png)
Answer 0 (score: 2)
You can use a Set.
require 'set'
s = Set.new
nums.each { |num| return true unless s.add?(num) }
false
See Set#add?.
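Set#add? returns the set itself when the element was newly added and nil when it was already present, which is what makes the one-liner above work. A quick illustration:

```ruby
require 'set'

s = Set.new
p s.add?(1)   # first time: returns the set itself (truthy)
p s.add?(1)   # already present: returns nil
```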
I wouldn't expect any significant difference from the OP's approach, since sets are implemented with hashes.
...but let's see.
require 'fruity'
The methods to be compared are as follows:
def hash_way(nums)
  hsh = Hash.new(0)
  nums.each do |num|
    return true if hsh.key?(num)
    hsh[num] = 1
  end
  false
end
The above is the OP's code with only a small change.
def set_way(nums)
  s = Set.new
  nums.each { |num| return true unless s.add?(num) }
  false
end
@gonzolo2000's method (since deleted) and @Jack's method, slightly modified:
def uniq_way(nums)
  nums.uniq.size < nums.size
end
def hash2_way(nums)
  hsh = Hash.new(0)
  count = 0
  nums.each do |num|
    count += 1
    hsh[num] = 1
    if hsh.size < count
      return true
    end
  end
  return false
end
def bench(n)
  nums = arr(n)
  compare do
    _hash  { hash_way(nums) }
    _set   { set_way(nums) }
    _uniq  { uniq_way(nums) }
    _hash2 { hash2_way(nums) }
  end
end
First consider an array with a single duplicated element:
def arr(n)
  ((1..n).to_a << 1).shuffle
end
For example,
arr(20)
#=> [17, 12, 1, 20, 3, 10, 15, 9, 5, 2, 14, 1, 18, 16, 7, 13, 19, 4, 8, 11, 6]
bench(100)
Running each test 128 times. Test will take about 1 second.
_hash2 is similar to _hash
_hash is similar to _uniq
_uniq is similar to _set
bench(1_000)
Running each test 32 times. Test will take about 1 second.
_hash2 is similar to _hash
_hash is similar to _set
_set is faster than _uniq by 2x ± 1.0
bench(10_000)
Running each test 2 times. Test will take about 1 second.
_hash2 is similar to _hash
_hash is similar to _set
_set is similar to _uniq
bench(100_000)
Running each test once. Test will take about 2 seconds.
_hash2 is similar to _hash
_hash is similar to _set
_set is faster than _uniq by 7x ± 1.0
bench(1_000_000)
Running each test once. Test will take about 51 seconds.
_hash2 is similar to _hash
_hash is faster than _uniq by 10.000000000000009% ± 10.0%
_uniq is similar to _set
Now I'll change the test data so that 10% of the unique elements in the array have one dup:
def arr(n)
  (1..n).to_a.concat((1..n/10).to_a).shuffle
end
For example,
arr(30)
#=> [14, 3, 1, 5, 20, 11, 4, 2, 25, 15, 23, 18, 30, 2, 19, 10, 13,
# 26, 24, 8, 6, 21, 16, 27, 7, 17, 12, 1, 29, 3, 28, 9, 22]
bench(100)
Running each test 512 times. Test will take about 1 second.
_hash2 is similar to _hash
_hash is similar to _set
_set is faster than _uniq by 3x ± 1.0
bench(1_000)
Running each test 128 times. Test will take about 2 seconds.
_hash2 is similar to _hash
_hash is similar to _set
_set is faster than _uniq by 9x ± 1.0
bench(10_000)
Running each test 128 times. Test will take about 8 seconds.
_hash2 is similar to _hash
_hash is similar to _set
_set is faster than _uniq by 79x ± 10.0
bench(100_000)
Running each test 16 times. Test will take about 17 seconds.
_hash2 is similar to _hash
_hash is similar to _set
_set is faster than _uniq by 180x ± 10.0
bench(1_000_000)
Running each test 4 times. Test will take about 56 seconds.
_hash2 is similar to _hash
_hash is similar to _set
_set is faster than _uniq by 810x ± 100.0
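One factor worth noting about these results is that the hash and set versions return as soon as the first duplicate is seen, while uniq always builds the full deduplicated array. A small sketch (the iteration counter is my own addition) shows how few elements an early-exit scan may touch when a duplicate sits near the front:

```ruby
nums = (1..100_000).to_a
nums[1] = nums[0]          # plant a duplicate at the second position

steps = 0
seen = {}
found = false
nums.each do |n|
  steps += 1
  if seen.key?(n)          # early exit on the first repeat
    found = true
    break
  end
  seen[n] = true
end

puts found   # => true
puts steps   # => 2 -- only two elements examined out of 100,000
```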
Answer 1 (score: 1)
I'm not familiar with Ruby, but I can see that your loop does 3 hash lookups per item, and a hash lookup is the most expensive operation here apart from allocating new entries.
Try something like this, which needs only one lookup per item:
def contains_duplicate(nums)
  hsh = Hash.new(0)
  count = 0
  nums.each do |num|
    count += 1
    hsh[num] = 1
    if hsh.size < count
      return true
    end
  end
  return false
end
Answer 2 (score: 0)
I always like the idea of writing obvious code, so my personal favourite would be something like this:
array.uniq == array
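Note the polarity: `array.uniq == array` is true when there are no duplicates, so a drop-in replacement for contains_duplicate is the negation. For example (the sample arrays are my own):

```ruby
no_dups  = [1, 2, 3]
has_dups = [1, 2, 2]

p no_dups.uniq == no_dups     # => true  (no duplicates)
p has_dups.uniq == has_dups   # => false (a duplicate was removed)
p has_dups.uniq != has_dups   # => true  -- the contains_duplicate answer
```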
Some benchmarks against the original method:
require 'benchmark'

a = (1..20).to_a
Benchmark.realtime { 100000.times { contains_duplicate(a) } }
#=> 0.937844
Benchmark.realtime { 100000.times { a.uniq == a } }
#=> 0.804872
And with an array that contains duplicates:
a = a * 3
Benchmark.realtime { 100000.times { contains_duplicate(a) } }
#=> 1.068844
Benchmark.realtime { 100000.times { a.uniq == a } }
#=> 0.919273