FileNotFoundError: No such file -> error caused by a Google Drive timeout?

Date: 2020-06-19 11:00:30

Tags: python error-handling gpu google-colaboratory file-not-found

I created a DataGenerator based on the `Sequence` class:

import tensorflow.keras as keras
from skimage.io import imread
from skimage.transform import resize
import numpy as np
import math
from tensorflow.keras.utils import Sequence

Here, `x_set` is a list of paths to the images and `y_set` is the list of associated classes.

class DataGenerator(Sequence):
    def __init__(self, x_set, y_set, batch_size):
        self.x, self.y = x_set, y_set
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        batch_x = self.x[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.y[idx * self.batch_size:(idx + 1) * self.batch_size]

        return np.array([
            resize(imread(file_name), (224, 224))
            for file_name in batch_x]), np.array(batch_y)

Then I applied it to my training and validation data. `X_train` is a list of strings containing the image paths of the training data, and `y_train` holds the one-hot-encoded labels of the training data. The same applies to the validation data.

I created the image paths with the following code:

X_train = []
for name in train_FileName:
  file_path = r"/content/gdrive/My Drive/data/2017-IWT4S-CarsReId_LP-dataset/" + name
  X_train.append(file_path)
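(Before handing these paths to the generator, a quick sanity check can show whether they actually resolve on the mounted Drive. This is a minimal sketch; the helper name `find_missing_paths` is my own, not part of the original code.)

```python
import os

def find_missing_paths(paths):
    """Return the subset of paths that do not exist on disk."""
    return [p for p in paths if not os.path.exists(p)]

# Example: fail fast before training instead of mid-epoch, e.g.
#   missing = find_missing_paths(X_train)
#   if missing:
#       raise FileNotFoundError(f"{len(missing)} images missing, e.g. {missing[0]}")
```

If this reports missing files right after mounting, the problem is the Drive mount or the paths themselves, not the training loop.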

After that, I applied the DataGenerator to the training and validation data:

training_generator = DataGenerator(X_train, y_train, batch_size=32)
validation_generator = DataGenerator(X_val, y_val, batch_size=32)

Then I used the fit_generator method to run the model:

model.fit_generator(generator=training_generator,
                    validation_data=validation_generator,
                    steps_per_epoch = num_train_samples // 32,
                    validation_steps = num_val_samples // 32,
                    epochs = 10,
                    use_multiprocessing=True,
                    workers=2)

This ran fine the first time on the CPU: the model was initialized and the first epoch started. Then I changed the runtime type in Google Colab to GPU and ran the model again.

And got the following error:

---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-79-f43ade94ee10> in <module>()
      5                     epochs = 10,
      6                     use_multiprocessing=True,
----> 7                     workers=2)

16 frames
/usr/local/lib/python3.6/dist-packages/imageio/core/request.py in _parse_uri(self, uri)
    271                 # Reading: check that the file exists (but is allowed a dir)
    272                 if not os.path.exists(fn):
--> 273                     raise FileNotFoundError("No such file: '%s'" % fn)
    274             else:
    275                 # Writing: check that the directory to write to does exist

FileNotFoundError: No such file: '/content/gdrive/My Drive/data/2017-IWT4S-CarsReId_LP-dataset/s01_l01/1_1.png'

Today this error also occurred when running the program without a GPU. While it was running, Colab told me that Google Drive had timed out. So is this error caused by a Google Drive timeout? If so, how can I fix it? Does anyone know what I should change in my program?

2 answers:

Answer 0 (score: 1)

You can run this code in the browser console to keep Google Colab from timing out:

function ConnectButton() {
    console.log("Connect pushed");
    document.querySelector("#top-toolbar > colab-connect-button")
        .shadowRoot.querySelector("#connect").click();
}
setInterval(ConnectButton, 60000);

Source: How to prevent Google Colab from disconnecting?

Answer 1 (score: 0)

The problem seems to be with the input: your model cannot find the input files. Changing the runtime type resets the session to factory settings, and all disk contents of the session are deleted.

If you change the runtime in between, run the cells again from the top.
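In particular, Drive has to be remounted after a runtime switch before the paths resolve again. A minimal sketch of a remount check follows; `ensure_drive_mounted` is a hypothetical helper of my own, and `mount_fn` is injectable purely so the idea can be demonstrated outside Colab (inside Colab it falls back to `google.colab.drive.mount`):

```python
import os

def ensure_drive_mounted(mount_point="/content/gdrive", mount_fn=None):
    """Remount Google Drive if the mount point has disappeared.

    mount_fn defaults to google.colab.drive.mount inside Colab; it is
    injectable here only so this sketch also runs outside Colab.
    """
    if os.path.exists(mount_point):
        return False  # mount point present, nothing to do
    if mount_fn is None:
        from google.colab import drive  # available only inside Colab
        mount_fn = drive.mount
    mount_fn(mount_point, force_remount=True)
    return True
```

Calling this at the top of the notebook (or before `fit_generator`) makes a stale mount fail loudly and immediately rather than mid-epoch inside a worker process.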