I'm interested in an algorithm that can generate or sort the subsets of a set in order of increasing sum. I've reviewed some similar questions, but they only discuss generating the subsets in linear order, e.g. "Algorithm to generate k element subsets in order of their sum" and "Algorithm wanted: Enumerate all subsets of a set in order of increasing sums".
Is there a smarter way to do this in faster time?
I previously tried generating an interval tree over all the subsets and then searching along its nodes, where the root node is the rightmost integer in the set, a left child shifts the leftmost integer of its parent's subset down by one position, and a right child appends the next largest integer. So for {1,3,5,8} the tree is:
8
5   5,8
3   3,5   3,8   3,5,8
1   1,3   1,5   1,3,5   1,8   1,3,8   1,5,8   1,3,5,8
At any node, the left interval's lower bound is the sum of the left child's subset with its minimum replaced by the set's minimum, and its upper bound is the sum of the left child's subset together with all elements to the left of that subset's largest element. The right interval uses the same logic, mirrored. If the target sum does not fall within one of the intervals, that subtree is not searched; if it falls within both, both are searched. This can all be done virtually (the intervals can be computed without materializing any subtree), so the tree never actually needs to be built; only the current node is constructed at each step. This approach seems to work in the average case, but it is exponential in the worst case.
Are there any approaches along these lines?
Answer 0 (score: 2)
Based on your interval tree example and the linked answers, it sounds as if you want to generate the k-th largest subset without generating the previous subsets, and to do so in time less than O(k); that is, if I understand correctly, you want something faster than the previous linear approaches, as you mentioned. A solution to this would prove P = NP, since you could then binary search over all the subsets by generating each k-th largest subset in sub-exponential time.
I took a crack at this problem a few years ago, trying to generate the k-th largest subset by sum and then binary search the subsets by their sums, so maybe I can explain a fundamental problem with this approach: certain groups of subsets are inherently incomparable, and in the worst case the number of incomparable subsets that must be compared grows exponentially with the size of the input.
The search space of the subset sum problem is the power set of the input set; more specifically, it is the sum of each subset in the power set of the input set. For example, for the input set {1, 2, 3}, the search space is {{1}, {2}, {1, 2}, {3}, {1, 3}, {2, 3}, {1, 2, 3}}, or simply {1, 2, 3, 3, 4, 5, 6}. Regardless of the input set (assuming it is sorted and positive), the smallest subset sum is always the singleton of the first element, the next smallest is always the singleton of the second element, and the next smallest is always either the singleton of the third element or the subset of the first two elements, whichever is smaller. Similarly, the largest subset sum is always the sum of the entire input set, the next largest is always the input set without its first element, the next largest is always the input set without its second element, and the next largest is always either the input set without its third element or the input set without its first two elements, whichever is larger.
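As a quick sanity check of the boundary pattern above (my own illustration, not part of the original answer; the class and method names are made up), the following sketch enumerates all nonempty subset sums of a small sorted positive set. Note the "whichever is smaller" caveat: the third-smallest sum can come from either the first two elements or the third element alone.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class BoundarySums {
    // Returns all 2^n - 1 nonempty subset sums of the input, in ascending order.
    public static List<Integer> sortedSums(int[] set) {
        List<Integer> sums = new ArrayList<>();
        for (int mask = 1; mask < (1 << set.length); ++mask) {
            int sum = 0;
            for (int i = 0; i < set.length; ++i) {
                if ((mask & (1 << i)) != 0) sum += set[i];
            }
            sums.add(sum);
        }
        Collections.sort(sums);
        return sums;
    }

    public static void main(String[] args) {
        // For {1, 2, 10}: the third-smallest sum is 1 + 2 = 3 (the first two elements).
        System.out.println(sortedSums(new int[]{1, 2, 10}));
        // For {1, 10, 10}: the third-smallest sum is 10 (the singleton of the third element).
        System.out.println(sortedSums(new int[]{1, 10, 10}));
    }
}
```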
But going back to the previous input set {1, 2, 3}: what about an input set of the same size such as {1, 2, 2}? The search space becomes {{1}, {2}, {1, 2}, {2}, {1, 2}, {2, 2}, {1, 2, 2}}, or {1, 2, 3, 2, 3, 4, 5}. If you try to sort the power set of a set {a, b, c} in order of its sums, you must compare {a, b} against {c}, because there are some input sets for which {c} is larger and others for which {a, b} is larger. These two subsets are incomparable. If you could guarantee that one is always larger than the other, you could design a search algorithm accordingly, but you can't, so you must examine at least both of these elements.
For an input set of size 4, {a, b, c, d}, there are again two incomparable subsets: {a, d} and {b, c}; compare the sums of these subsets for the inputs {1, 2, 4, 8} and {1, 3, 3, 3}. In fact, there are a few other pairwise incomparable subsets as well, such as {a, b, c} and {d}. If we draw a Hasse diagram of this, we get (apologies for the ASCII art):
 a,b,c,d
    |
  b,c,d
    |
  a,c,d--------|
    |          |
  a,b,d---|---c,d
    |     |
  a,b,c  b,d
    |     |
   b,c   a,d
    |     |
   a,c----d
  ------|
    |
   a,b   c-----|
    |    |     |
    |------b   |
        |
        a
In other words, you could design an algorithm to binary search the chain {{a}, {b}, {a, b}, {a, c}, {b, c}, {a, b, c}, {a, b, d}, {a, c, d}, {b, c, d}, {a, b, c, d}}, run another binary search on the chain {{c}, {d}, {a, d}, {b, d}, {c, d}}, or use some other configuration of these chains, but ultimately you must perform at least two binary searches. You can always guarantee a+b <= a+c (since b <= c) and b+d <= c+d <= a+c+d, but you cannot guarantee, for example, that b+c <= a+d. In the worst case, you have to make these comparisons.
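To make the two-chain search concrete, here is a small sketch (my own, with illustrative names, not code from the answer) that instantiates both chains for the input {1, 2, 4, 8} and looks a target sum up with one binary search per chain. Both chains are nondecreasing for any sorted positive input, which is exactly what makes the binary searches valid.

```java
import java.util.Arrays;

public class TwoChainSearch {
    // Builds the two chains' sums for a sorted positive 4-element input {a, b, c, d}.
    // Chain 1: {a},{b},{a,b},{a,c},{b,c},{a,b,c},{a,b,d},{a,c,d},{b,c,d},{a,b,c,d}
    // Chain 2: {c},{d},{a,d},{b,d},{c,d}
    public static int[][] chains(int[] s) {
        int a = s[0], b = s[1], c = s[2], d = s[3];
        int[] chain1 = {a, b, a + b, a + c, b + c, a + b + c, a + b + d, a + c + d, b + c + d, a + b + c + d};
        int[] chain2 = {c, d, a + d, b + d, c + d};
        return new int[][]{chain1, chain2};
    }

    // One binary search per chain: O(w * log n) overall, with width w = 2 here.
    public static boolean hasSubsetWithSum(int[] s, int target) {
        for (int[] chain : chains(s)) {
            if (Arrays.binarySearch(chain, target) >= 0) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] s = {1, 2, 4, 8};
        System.out.println(hasSubsetWithSum(s, 10)); // true: {b, d} = 2 + 8
        System.out.println(hasSubsetWithSum(s, 16)); // false: the largest sum is 15
    }
}
```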
Going further, for an input set of size 5, {a, b, c, d, e}, the subsets {a, d}, {b, c}, and {e} are mutually incomparable. For example:
{1, 2, 4, 8, 16} has {b, c} <= {a, d} <= {e}
{1, 2, 2, 2, 5} has {a, d} <= {b, c} <= {e}
{5, 5, 5, 6, 6} has {e} <= {b, c} <= {a, d}
{1, 5, 5, 6, 6} has {e} <= {a, d} <= {b, c}
{1, 4, 4, 4, 6} has {a, d} <= {e} <= {b, c}
{1, 2, 2, 6, 6} has {b, c} <= {e} <= {a, d}
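The six example sets above can be checked mechanically. This small sketch (my own illustration, not from the answer) computes the three relevant sums for each set, so each of the six possible orderings can be observed:

```java
public class MutuallyIncomparable {
    // For a sorted 5-element input {a, b, c, d, e}, returns {sum(b,c), sum(a,d), sum(e)}.
    public static int[] sums(int[] s) {
        return new int[]{s[1] + s[2], s[0] + s[3], s[4]};
    }

    public static void main(String[] args) {
        int[][] examples = {
            {1, 2, 4, 8, 16}, // {b,c} <= {a,d} <= {e}
            {1, 2, 2, 2, 5},  // {a,d} <= {b,c} <= {e}
            {5, 5, 5, 6, 6},  // {e} <= {b,c} <= {a,d}
            {1, 5, 5, 6, 6},  // {e} <= {a,d} <= {b,c}
            {1, 4, 4, 4, 6},  // {a,d} <= {e} <= {b,c}
            {1, 2, 2, 6, 6},  // {b,c} <= {e} <= {a,d}
        };
        for (int[] ex : examples) {
            int[] t = sums(ex);
            System.out.println("{b,c}=" + t[0] + " {a,d}=" + t[1] + " {e}=" + t[2]);
        }
    }
}
```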
Putting together another ASCII Hasse diagram:
a,b,c,d,e
    |
 b,c,d,e
    |
 a,c,d,e-------------------------|
    |                            |
 a,b,d,e-------------------|---c,d,e
    |                      |
 a,b,c,e------------|----b,d,e
    |               |       |
 a,b,c,d          a,d,e   b,c,e
    |               |       |
  b,c,d            d,e    a,c,e
    |               |       |
  a,c,d--------|---c,e-----|---a,b,e
    |          |            |      |
  a,b,d---|---c,d         b,e-----------|
    |     |                 |
  a,b,c  b,d             --a,e
    |     |             /   |
   b,c   -a,d----------/    e
    |    /  |               |
   a,c--/   d---------|
  ------|             |
    |
   a,b    c-----|
    |     |     |
    |------b    |
        |
        a
In the worst case, you have to perform three binary searches, because the largest grouping of incomparable subsets has size three.
There is a pattern here. The subset sums form a partial order. For a set of size 3, the width of this partial order (that is, the size of its maximum antichain) is 2. For size 4, it is also 2. For size 5, the width is 3. For size 6, the width is 5. For size 7, it is 8. For size 8, it is 14. For size 9, it is 23. For size 10, it is 40. For size 11, it is 70.
In fact, this integer sequence is known: it appears in the Online Encyclopedia of Integer Sequences as A025591, the number of solutions to +-1 +-2 +-3 +- ... +-n = 0 or 1. This integer sequence is also discussed in Robert A. Proctor's 1982 paper "Solution of Two Difficult Combinatorial Problems with Linear Algebra," in which the problem of finding n distinct positive real numbers with as large a collection of subsets sharing the same sum as possible is shown to be solved by the first n positive integers {1, 2, ..., n}. Proctor gives an elementary proof of this result, requiring only a background in linear algebra. The maximum number of subsets of {1, 2, ..., n} having the same sum is 1, 1, 2, 2, 3, 5, 8, 14, 23, ... for n = 1, 2, ..., the same integer sequence discussed above. In fact, the sequence is constructed in the paper in the same manner as discussed here.
Returning to the problem of identifying comparable subsets, this ordering can be generalized to cover all subsets of the input set via three cases. Given two subsets A and B of an arbitrary (sorted, positive) input set S:
Case 1: the cardinality of A is greater than the cardinality of B. Then it cannot be guaranteed that the sum of B is greater than the sum of A.
Case 2: the cardinality of A is equal to the cardinality of B. For each pair of elements A[i] and B[i], for i = 0 to cardinality(A), if the index in S from which A[i] is drawn is greater than the index in S from which B[i] is drawn, then it cannot be guaranteed that the sum of B is greater than the sum of A. Otherwise, it can be guaranteed that the sum of B is greater than the sum of A.
Case 3: the cardinality of A is less than the cardinality of B. Remove the least elements of set B so that the cardinalities of A and B are equal. The second case can now be applied.
To help illustrate this, I put together some code that builds a directed acyclic graph from the power set of the input set, where each edge connects a node with a smaller subset sum to every node with a guaranteed larger subset sum. This process forms a transitive closure, since every smaller node is connected to every larger node. A transitive reduction is then applied to the graph, and the size of the maximum antichain is computed by walking up the Hasse diagram and recording the width at each level; the subsets making up that antichain are returned as well, in the format [index, value]. The maximum antichain size of the final graph matches the integer sequence OEIS A025591.
(This code was thrown together quickly to demonstrate the point; I apologize in advance for any questionable coding decisions!)
import com.google.common.graph.*;
import java.util.*;
public class AntichainDecomposition {
MutableGraph<Subset> graph;
public static void main(String[] args) {
// Input set. Modify this as needed.
int[] set = new int[]{1, 2, 3, 4, 5, 6, 7, 8, 9};
ArrayList<Subset> input = buildSubsets(set);
AntichainDecomposition antichain = new AntichainDecomposition(input);
}
public AntichainDecomposition(ArrayList<Subset> input) {
graph = GraphBuilder.directed().build();
for (int i = 0; i < input.size(); ++i) {
graph.addNode(input.get(i));
}
for (int i = 0; i < input.size(); ++i) {
for (int j = 0; j < input.size(); ++j) {
if (i != j && isTargetGreater(input.get(i), input.get(j))) {
graph.putEdge(input.get(i), input.get(j));
}
}
}
graphReduction();
int width = getWidth(input.get(input.size() / 2));
System.err.println(width);
}
private int getWidth(Subset first) {
HashMap<Integer, HashSet<Subset>> levelMap = new HashMap<Integer, HashSet<Subset>>();
HashMap<Subset, Integer> subsetToLevel = new HashMap<Subset, Integer>();
int level = 1;
// Mark all the vertices as not visited
HashMap<Subset, Boolean> visited = new HashMap<Subset, Boolean>();
Iterator iter = graph.nodes().iterator();
while (iter.hasNext()) {
Subset node = (Subset)iter.next();
visited.put(node, false);
}
// Create a queue for breadth first search
LinkedList<Subset> queue = new LinkedList<Subset>();
// Mark the current node as visited and enqueue it
levelMap.put(level, new HashSet<Subset>());
levelMap.get(level).add(first);
subsetToLevel.put(first, level);
visited.put(first, true);
queue.add(first);
while (queue.size() != 0) {
// Dequeue a vertex from the queue and store it in the appropriate level
Subset s = queue.poll();
level = subsetToLevel.get(s);
// Get all adjacent vertices of the dequeued vertex s
// If a successor has not been visited, then mark it
// visited and enqueue it
iter = graph.successors(s).iterator();
while (iter.hasNext()) {
Subset n = (Subset)iter.next();
if (!visited.get(n)) {
if (!levelMap.containsKey(level + 1)) {
levelMap.put(level + 1, new HashSet<Subset>());
}
levelMap.get(level + 1).add(n);
subsetToLevel.put(n, level + 1);
visited.put(n, true);
queue.add(n);
}
}
}
int width = Integer.MIN_VALUE;
iter = levelMap.values().iterator();
Iterator subsetIter = null;
while (iter.hasNext()) {
HashSet<Subset> levelSet = (HashSet<Subset>)iter.next();
if (levelSet.size() > width) {
width = levelSet.size();
subsetIter = levelSet.iterator();
}
}
if (subsetIter != null) {
while (subsetIter.hasNext()) {
System.out.println((Subset)subsetIter.next());
}
}
return width;
}
private void graphReduction() {
// Reflexive reduction
Iterator iter1 = graph.nodes().iterator();
while (iter1.hasNext()) {
Subset i = (Subset)iter1.next();
graph.removeEdge(i, i);
}
// Transitive reduction
iter1 = graph.nodes().iterator();
while (iter1.hasNext()) {
Subset j = (Subset)iter1.next();
Iterator iter2 = graph.nodes().iterator();
while (iter2.hasNext()) {
Subset i = (Subset)iter2.next();
if (graph.removeEdge(i, j)) {
graph.putEdge(i, j);
Iterator iter3 = graph.nodes().iterator();
while (iter3.hasNext()) {
Subset k = (Subset)iter3.next();
if (graph.removeEdge(j, k)) {
graph.putEdge(j, k);
graph.removeEdge(i, k);
}
}
}
}
}
}
private Stack<Subset> topologicalSort() {
Stack<Subset> stack = new Stack<Subset>();
int vertices = graph.nodes().size();
// Mark all the vertices as not visited
HashMap<Subset, Boolean> visited = new HashMap<Subset, Boolean>();
Iterator iter = graph.nodes().iterator();
while (iter.hasNext()) {
Subset node = (Subset)iter.next();
visited.put(node, false);
}
// Call the recursive helper function to store topological sort
// starting from all vertices one by one
iter = graph.nodes().iterator();
while (iter.hasNext()) {
Subset node = (Subset)iter.next();
if (!visited.containsKey(node) || !visited.get(node)) {
topologicalSortHelper(node, visited, stack);
}
}
return stack;
}
private void topologicalSortHelper(Subset v, HashMap<Subset, Boolean> visited, Stack<Subset> stack) {
visited.put(v, true);
// Recurse for all the vertices adjacent to this vertex
Iterator iter = graph.successors(v).iterator();
while (iter.hasNext()) {
Subset node = (Subset)iter.next();
if (!visited.containsKey(node) || !visited.get(node)) {
topologicalSortHelper(node, visited, stack);
}
}
// Push current vertex to stack which stores topological sort
stack.push(v);
}
private boolean isTargetGreater(Subset source, Subset target) {
// An edge between two nodes exists if each index in the target subset is greater than or
// equal to its respective index in the source subset. If the target subset size is greater
// than the source subset size, then an edge between the two subsets exists if and only if
// the target subset has indices that are greater than or equal to corresponding indices of
// the source subset, ignoring the additional indices of the target subset.
if (source.size() > target.size()) {
return false;
}
SubsetEntry[] newSubset = new SubsetEntry[target.size()];
System.arraycopy(target.getSubset(), 0, newSubset, 0, newSubset.length);
Subset newTarget = new Subset(Arrays.asList(newSubset).subList(target.size() -
source.size(), target.size()).
toArray(new SubsetEntry[source.size()]));
for (int i = 0; i < source.size(); ++i) {
if (source.getEntry(i).getIndex() > newTarget.getEntry(i).getIndex()) {
return false;
}
}
return true;
}
private static ArrayList<Subset> buildSubsets(int[] set) {
ArrayList<Subset> power = new ArrayList<Subset>();
int elements = set.length;
int powerElements = (int) Math.pow(2, elements);
for (int i = 0; i < powerElements; ++i) {
// Convert the binary number to a string containing n digits
String binary = intToBinary(i, elements);
// Create a new set
ArrayList<SubsetEntry> innerSet = new ArrayList<SubsetEntry>();
// Convert each digit in the current binary number to the corresponding element
// in the given set
for (int j = 0; j < binary.length(); ++j) {
if (binary.charAt(j) == '1') {
innerSet.add(new SubsetEntry(j, set[j]));
}
}
// Add the new set to the power set
if (!innerSet.isEmpty()) {
power.add(new Subset(innerSet.toArray(new SubsetEntry[innerSet.size()])));
}
}
return power;
}
private static String intToBinary(int binary, int digits) {
String temp = Integer.toBinaryString(binary);
int foundDigits = temp.length();
String returner = temp;
for (int i = foundDigits; i < digits; ++i) {
returner = "0" + returner;
}
return returner;
}
}
class SubsetEntry {
private int index;
private int value;
public SubsetEntry(int i, int v) {
index = i;
value = v;
}
public int getIndex() {
return index;
}
public int getValue() {
return value;
}
public String toString() {
return "[" + index + ", " + value + "]";
}
}
class Subset {
private SubsetEntry[] entries;
public Subset(SubsetEntry[] e) {
entries = new SubsetEntry[e.length];
System.arraycopy(e, 0, entries, 0, entries.length);
}
public void setSubset(SubsetEntry[] subset) {
entries = new SubsetEntry[subset.length];
System.arraycopy(subset, 0, entries, 0, subset.length);
}
public SubsetEntry[] getSubset() {
return entries;
}
public SubsetEntry getEntry(int index) {
return entries[index];
}
public int size() {
return entries.length;
}
public String toString() {
String s = "{";
for (int i = 0; i < entries.length; ++i) {
s += entries[i].toString();
}
s += "}";
return s;
}
}
Per Dilworth's Theorem, for any partially ordered set, the cardinality of the largest antichain is equal to the minimum number of chains needed to cover the partially ordered set. For any input set, this yields, in the worst case, a partial order covered by A025591(n) chains. Moreover, the worst-case time to search the partial order is O(w*log(n))
, where w is the width of the graph (equal to the cardinality of the largest antichain). This can be shown as follows: an antichain is characterized as an unordered list in which no two elements are comparable, and the worst-case time to search an unordered list is O(n); a chain, on the other hand, is characterized as an ordered list in which every element is comparable to every other element, and the worst-case time to search an ordered list is O(log(n)). Thus, for each element of an antichain of width w, O(log(n)) comparisons must be made into the corresponding chain in the worst case, yielding a worst-case search time of O(w*log(n)) for any partial order.
This partial-order search time gives a worst-case characterization of sorting the sums of every subset of an arbitrary input set. This is because, for any input set, every sum in the antichain must be inspected in order to derive an optimal search tree. Recall, using the original set {1, 3, 5, 8} as an example, that the sum of the subset at indices 1 and 2, {3, 5}, is less than the sum of the subset at indices 0 and 3, {1, 8}. However, for the set {1, 2, 3, 3}, the sum of the subset at indices 1 and 2, {2, 3}, is greater than the sum of the subset at indices 0 and 3, {1, 3}. The index sets {1, 2} and {0, 3} are consequently incomparable. As sets grow, the number of such incomparabilities grows at the exponential rate given by the integer sequence discussed above.
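This incomparability of index sets is again easy to check directly (my own illustrative sketch, not from the answer):

```java
public class IndexSetCheck {
    // Compares the sum at indices {1, 2} against the sum at indices {0, 3}.
    public static int compare(int[] s) {
        return Integer.compare(s[1] + s[2], s[0] + s[3]);
    }

    public static void main(String[] args) {
        System.out.println(compare(new int[]{1, 3, 5, 8})); // negative: 3 + 5 < 1 + 8
        System.out.println(compare(new int[]{1, 2, 3, 3})); // positive: 2 + 3 > 1 + 3
    }
}
```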
I'll close by noting that I've assumed all numbers in the input set are positive and that the input set is sorted. Indeed, if you are working with an unsorted list, or with a mix of positive and negative numbers, then none of these comparability guarantees hold.
Sorry if this answer was long and rambling, but I hope it gives you some insight into the problem you're trying to solve.
Answer 1 (score: 1)
If your set can contain only positive integers, a straightforward approach is to use a priority queue of candidate subsets keyed by their sums.
This essentially runs Dijkstra's algorithm on a graph of the subsets, where each subset is a vertex and the edges are all the incremental ways of producing a new subset with a larger sum.
If your set can also contain negative integers, you can still use a variant of the above. Just make sure you start from the smallest subset (the one containing all of the negative numbers), and then, in the expansion step, consider all incremental ways to reach a larger sum, which means either adding a positive number or removing a negative number.
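For the positive-integer case, the priority-queue idea can be sketched as follows (my own illustration of the approach, not the answerer's code; the class and method names are made up). Each queue entry is a set of indices into the sorted input, keyed by its sum; popping the minimum and pushing its two "incremental" successors, namely appending the next index after the current maximum or replacing the current maximum with the next index, visits every nonempty subset exactly once, in nondecreasing order of sum.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class SubsetsBySum {
    // One queue entry: the indices of the subset's elements in the sorted input, plus its sum.
    private static final class Entry {
        final List<Integer> indices;
        final long sum;
        Entry(List<Integer> indices, long sum) {
            this.indices = indices;
            this.sum = sum;
        }
    }

    // Emits the sums of all nonempty subsets of a sorted array of positive ints,
    // in nondecreasing order. Dijkstra-style: pop the smallest entry, push its successors.
    public static List<Long> sumsInOrder(int[] sorted) {
        List<Long> out = new ArrayList<>();
        PriorityQueue<Entry> pq = new PriorityQueue<>((x, y) -> Long.compare(x.sum, y.sum));
        pq.add(new Entry(List.of(0), sorted[0])); // smallest subset: {sorted[0]}
        while (!pq.isEmpty()) {
            Entry e = pq.poll();
            out.add(e.sum);
            int max = e.indices.get(e.indices.size() - 1);
            if (max + 1 < sorted.length) {
                // Successor 1: append the element after the current maximum.
                List<Integer> appended = new ArrayList<>(e.indices);
                appended.add(max + 1);
                pq.add(new Entry(appended, e.sum + sorted[max + 1]));
                // Successor 2: replace the current maximum with the element after it.
                List<Integer> replaced = new ArrayList<>(e.indices.subList(0, e.indices.size() - 1));
                replaced.add(max + 1);
                pq.add(new Entry(replaced, e.sum - sorted[max] + sorted[max + 1]));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // For the question's set {1, 3, 5, 8}, prints all 15 subset sums in nondecreasing order.
        System.out.println(sumsInOrder(new int[]{1, 3, 5, 8}));
    }
}
```

Each subset has exactly one predecessor under these two operations (the "append" parent when its top two indices are adjacent, the "replace" parent otherwise), so nothing is enqueued twice, and both successors have sums at least as large as their parent, which is what makes the popped order globally nondecreasing.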