Hi, while practicing recursion I found an exercise that asks you to compute a modulus without using the % operator. So I wrote the function and everything works, except that it fails when I enter numbers with 5 or more digits. I'm not sure whether I'm doing something wrong or whether it fails because there are too many calls. Is it normal for too many calls to be a problem? Can there really be too many function calls? And if I ever have a recursive function that is actually useful, how do I prevent this from happening? It really doesn't make sense to me: I did the recursive Towers of Hanoi, and it never had this problem no matter how many disks I moved.
Here is my function, with the precondition that both numbers are always positive:
int modulo(int n, int m)
{
    if (n < m) return n;
    else return modulo(n - m, m);
}
The error is:
Unhandled exception at 0x00007FF77D5C2793 in GuessNumber.exe: 0xC00000FD: Stack overflow (parameters: 0x0000000000000001, 0x0000006F322F3F30).
Of the numbers I tried, 40001 % 10 works, but 44001 % 10 fails, and everything above 44001 fails for me as well. I haven't tried any other numbers.
Answer 0 (score: 1)
If the recursion goes too deep, the program runs out of stack memory. This is called a stack overflow.
int modulo(int n, int m)
{
    if (n < m) return n;
    else return modulo(n - m, m);
}
For example, modulo(1000000, 2) calls modulo(999998, 2), which then calls modulo(999996, 2), and so on, until modulo(0, 2). In the end there are 500001 active modulo calls on the stack. On any reasonable system, each of those calls occupies at least 16 bytes of stack for two pointer-sized values (the return address and a saved frame pointer). That adds up to roughly 8 MB of stack space, which is usually more than the maximum stack size.
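To make that concrete for the numbers in the question, here is a small sketch (not part of the original code) that instruments the same recursion with a depth counter; modulo_counted, max_depth, and the main driver are hypothetical names added only for illustration:

#include <stdio.h>

/* Instrumented copy of the recursive modulo above. The depth parameter and
   max_depth counter are illustrative additions; they simply record how many
   nested calls are active before the base case is reached. */
static int max_depth = 0;

int modulo_counted(int n, int m, int depth)
{
    if (depth > max_depth) max_depth = depth;
    if (n < m) return n;
    return modulo_counted(n - m, m, depth + 1);
}

int main(void)
{
    /* 44001 and 10 come from the question. */
    int r = modulo_counted(44001, 10, 1);
    printf("result = %d, nested calls = %d\n", r, max_depth);
    return 0;
}

For modulo(44001, 10) this reports roughly 4400 nested calls, which matches the depth at which the crash above is reported.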
Each function call has to wait for the result of the next one before it can finish its own computation and return. Until it returns, the stack has to keep all of its variables, parameters, and the return address. The return address is the place in the program where execution resumes after the return statement.
These calls fill up the stack until it hits the maximum limit and the program crashes.
In some cases the compiler can convert the recursion into a loop. Here, because the recursive call sits in the return statement, it can simply goto the start of the function instead of performing a call at all. This is called tail call optimization:
int modulo(int n, int m)
{
start:
    if (n < m) return n;
    else {
        n -= m;
        goto start;
    }
}
If optimization is enabled (-O2 in clang or g++, Release mode in Visual C++), the crash does not happen, because the compiler generates a loop instead of the recursion. So to avoid the crash, simply enable optimization.
Note that the compiler is not required to perform this optimization, and it is not always able to. That is why recursing this deeply is not recommended.
Answer 1 (score: 0)
You are recursing to a depth of 4400, which is asking for trouble. It is also unnecessary here, because you can implement the same algorithm with a loop:
while (n >= m) n -= m;
return n;
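Wrapped up into a complete function with the same name and parameters as the recursive version above (and, as the question states, assuming both numbers are positive), a minimal iterative sketch looks like this:

int modulo(int n, int m)
{
    /* Same repeated-subtraction algorithm, but as a loop: stack usage stays
       constant no matter how large n is. */
    while (n >= m) n -= m;
    return n;
}

With this version, modulo(44001, 10) and much larger inputs run without any risk of overflowing the stack.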