In my app, when data syncs, I can receive 20k entries from the server (from a given timestamp) that should be synced to the local device. For each entry I try to fetch it (in case it already exists), and if it doesn't exist I create a new one. The problem is that the whole operation is too slow: 20k entries take 10 minutes on an iPhone 5. An alternative I considered is to delete all entries from the given timestamp and create new ones for everything the server returns, so that no fetch is needed per entry. Any suggestions would be great. Here is sample code for the current state:
var logEntryToUpdate: LogEntry!
if let savedEntry = CoreDataRequestHelper.getLogEntryByID(inputID: inputID, fetchAsync: true) {
    logEntryToUpdate = savedEntry
} else {
    logEntryToUpdate = LogEntry(entity: logEntryEntity!, insertInto: CoreDataStack.sharedInstance.saveManagedObjectContext)
}
logEntryToUpdate.populateWithSyncedData(data: row, startCol: 1)
Here is the actual request method:
class func getLogEntryByID(inputID: Int64, fetchAsync: Bool) -> LogEntry? {
    let logEntryRequest = NSFetchRequest<NSFetchRequestResult>(entityName: "LogEntry")
    logEntryRequest.predicate = NSPredicate(format: "inputId == %@", NSNumber(value: inputID))
    logEntryRequest.fetchLimit = 1
    do {
        let mocToFetch = fetchAsync ? CoreDataStack.sharedInstance.saveManagedObjectContext : CoreDataStack.sharedInstance.managedObjectContext
        if let fetchResults = try mocToFetch.fetch(logEntryRequest) as? [LogEntry] {
            return fetchResults.first
        }
    } catch let error as NSError {
        NSLog("Error fetching Log Entries by inputID from Core Data: \(error.localizedDescription)")
    }
    return nil
}
Another thing I tried was checking the count for the request instead, but again it was too slow:
class func doesLogEntryExist(inputID: Int64, fetchAsync: Bool) -> Bool {
    let logEntryRequest = NSFetchRequest<NSFetchRequestResult>(entityName: "LogEntry")
    logEntryRequest.predicate = NSPredicate(format: "inputId == %@", NSNumber(value: inputID))
    logEntryRequest.fetchLimit = 1
    do {
        let mocToFetch = fetchAsync ? CoreDataStack.sharedInstance.saveManagedObjectContext : CoreDataStack.sharedInstance.managedObjectContext
        let count = try mocToFetch.count(for: logEntryRequest)
        return count > 0
    } catch let error as NSError {
        NSLog("Error fetching Log Entries by inputID from Core Data: \(error.localizedDescription)")
    }
    return false
}
Answer 0 (Score: 3)
Whether you fetch the instance or just get the count, you're still executing one fetch request per incoming record. That will be slow, and your code will spend almost all of its time performing fetches.
One improvement is to batch the records to reduce the number of fetches. Add multiple record IDs to an array, then fetch all of them at once with a predicate like

NSPredicate(format: "inputId IN %@", inputIdArray)

Then look through the fetch results to see which of those IDs were found. Accumulate 50 or 100 IDs in the array at a time and you cut the number of fetches by 50x or 100x (see the sketch below).
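To make that concrete, here is a minimal sketch of the batched fetch-or-create, not code from the answer itself. It reuses the LogEntry entity, the inputId attribute, and the populateWithSyncedData(data:startCol:) helper from the question; the function signature and the batch plumbing are assumptions:

import CoreData

// Hypothetical batch helper: one fetch per batch of IDs instead of one per record.
func syncBatch(rows: [[Any]], ids: [Int64], into moc: NSManagedObjectContext) {
    let request = NSFetchRequest<NSFetchRequestResult>(entityName: "LogEntry")
    request.predicate = NSPredicate(format: "inputId IN %@", ids.map { NSNumber(value: $0) })
    do {
        // One round trip to the store for the whole batch.
        let existing = try moc.fetch(request) as? [LogEntry] ?? []
        // Index the fetched entries by ID so the lookup below is O(1).
        var entriesByID = [Int64: LogEntry]()
        existing.forEach { entriesByID[$0.inputId] = $0 }
        for (row, id) in zip(rows, ids) {
            // Reuse the existing entry if one was found, otherwise insert a new one.
            let entry = entriesByID[id]
                ?? NSEntityDescription.insertNewObject(forEntityName: "LogEntry", into: moc) as! LogEntry
            entry.populateWithSyncedData(data: row, startCol: 1)
        }
    } catch let error as NSError {
        NSLog("Error fetching LogEntry batch: \(error.localizedDescription)")
    }
}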
Deleting all entries for the timestamp and then re-inserting them might work well, but it's hard to predict. You would have to insert all 20,000 records. Whether that is faster or slower than reducing the number of fetches is impossible to say for certain.
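If you do experiment with the wholesale delete, NSBatchDeleteRequest (iOS 9+, SQLite stores only) removes matching rows at the store level without faulting objects into memory, which avoids the per-object cost of a normal delete loop. A sketch, assuming LogEntry has a timestamp attribute (the attribute name is an assumption):

import CoreData

func deleteEntries(since timestamp: Date, in moc: NSManagedObjectContext) throws {
    let fetch = NSFetchRequest<NSFetchRequestResult>(entityName: "LogEntry")
    fetch.predicate = NSPredicate(format: "timestamp >= %@", timestamp as NSDate)
    let batchDelete = NSBatchDeleteRequest(fetchRequest: fetch)
    batchDelete.resultType = .resultTypeObjectIDs
    let result = try moc.execute(batchDelete) as? NSBatchDeleteResult
    // Merge the deletions back so objects the context already holds don't go stale.
    if let objectIDs = result?.result as? [NSManagedObjectID] {
        NSManagedObjectContext.mergeChanges(fromRemoteContextSave: [NSDeletedObjectsKey: objectIDs],
                                            into: [moc])
    }
}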
Answer 1 (Score: 0)
Based on Paulw11's comment, I came up with the following way to evaluate the data to import into Core Data.
In my example, I have a class in which I store search terms. In the search class, I create a predicate that describes the values of the ids inside my array of structs:
func importToCoreData(dataToEvaluateArray: [YourDataStruct]) {
    // This is what Paul described in his comment
    let newDataToEvaluate = Set(dataToEvaluateArray.map { $0.id })
    let recordsInCoreData = getIdSetForCurrentPredicate()
    let newRecords = newDataToEvaluate.subtracting(recordsInCoreData)
    // create an empty array
    var itemsToImportArray: [YourDataStruct] = []
    // and dump records with ids contained in newRecords into it
    dataToEvaluateArray.forEach { record in
        if newRecords.contains(record.id) {
            itemsToImportArray.append(record)
        }
    }
    // THEN, import if you need to
    // (`struct` is a reserved word in Swift, so name the closure parameter something else)
    itemsToImportArray.forEach { item in
        // set up your entity, properties, etc.
    }
    // Once it's imported, save.
    // You can save each time you import a record, but it'll go faster if you do it once at the end.
    do {
        try self.managedObjectContext.save()
    } catch let error {
        self.delegate?.errorAlert(error.localizedDescription, sender: self)
    }
    self.delegate?.updateFetchedResultsController()
}
To instantiate recordsInCoreData, I created this method, which returns a set of the unique identifiers present in the managedObjectContext:
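The body of that method did not survive the page scrape. A plausible reconstruction in the answer's own naming, assuming the entity is called "YourEntity" and exposes a String id matching YourDataStruct.id (both names are assumptions):

func getIdSetForCurrentPredicate() -> Set<String> {
    let fetchRequest = NSFetchRequest<NSFetchRequestResult>(entityName: "YourEntity")
    var existingRecords: [YourEntity] = []
    do {
        existingRecords = try managedObjectContext.fetch(fetchRequest) as! [YourEntity]
    } catch let error {
        delegate?.errorAlert(error.localizedDescription, sender: self)
    }
    // Collapse the fetched records down to the set of ids already in the store.
    return Set(existingRecords.map { $0.id })
}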