How to implement Amazon Kinesis Streaming for real-time data processing in an iOS app

Posted: 2017-03-23 10:32:36

Tags: ios swift amazon-web-services

I have gone through the Amazon Web Services (AWS) documentation for Kinesis Streaming (real-time data processing), but I am confused about how to implement it. So far I have:

  • Installed the SDK for iOS in the project.
  • Written the Cognito credentials code with the region and pool ID.
  • Set up AWSServiceConfiguration successfully.
  • Written code in the view controller to save data to the AWSKinesisRecorder stream.

But how do I know whether the data was successfully saved to the stream? How can I print a log?

I want to display the successfully saved data in the console.

AppDelegate Code:


 func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplicationLaunchOptionsKey: Any]?) -> Bool {
        AWSLogger.default().logLevel = .verbose

        let CognitoPoolID = "ap-northeast-1:430840dc-4df5-469f"
        let Region = AWSRegionType.APNortheast1
        let credentialsProvider = AWSCognitoCredentialsProvider(regionType:Region,identityPoolId:CognitoPoolID)

        let configuration = AWSServiceConfiguration(region:Region, credentialsProvider:credentialsProvider)
        AWSServiceManager.default().defaultServiceConfiguration = configuration

        return true
    }

ViewController Code:

import UIKit
import AWSKinesis

class ViewController: UIViewController {

var kinesisRecorder : AWSKinesisRecorder! = nil

override func viewDidLoad() {
    super.viewDidLoad()

    // Use AWSKinesisRecorder with Amazon Kinesis. default() returns a shared instance of the Kinesis recorder client.
    kinesisRecorder = AWSKinesisRecorder.default()


    configureAWSKinesisRecorder()
    saveDataInStream()
}

override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
}

/*
 * Method to configure properties of AWSKinesisRecorder
 */
func configureAWSKinesisRecorder()  {
    //The diskAgeLimit property sets the expiration for cached requests. When a request exceeds the limit, it's discarded. The default is no age limit.

    kinesisRecorder.diskAgeLimit = TimeInterval(30 * 24 * 60 * 60); // 30 days


    //The diskByteLimit property holds the limit of the disk cache size in bytes. If the storage limit is exceeded, older requests are discarded. The default value is 5 MB. Setting the value to 0 means that there's no practical limit.

    kinesisRecorder.diskByteLimit = UInt(10 * 1024 * 1024); // 10MB


    //The notificationByteThreshold property sets the point beyond which the recorder posts a notification that the byte threshold has been reached. The default value is 0, meaning that no notification is posted by default.

    kinesisRecorder.notificationByteThreshold = UInt(5 * 1024 * 1024); // 5MB

}

/*
 * Method to save real time data in AWS stream
 */
func saveDataInStream()  {


    //Both saveRecord and submitAllRecords are asynchronous operations, so you should ensure that saveRecord is complete before you invoke submitAllRecords.

    // Create an array to store a batch of objects.
    var tasks = Array<AWSTask<AnyObject>>()
    for i in 0...5 {
        // Create a Data payload for each record.
        // streamName should be the name of your Kinesis stream.
        // saveRecord stores the record locally in the kinesisRecorder instance.

        tasks.append(kinesisRecorder!.saveRecord(String(format: "TestString-%02d", i).data(using: .utf8), streamName: "my_stream")!)
    }

    // submitAllRecords sends all locally saved requests to the Amazon Kinesis service.
    AWSTask(forCompletionOfAllTasks: tasks).continueOnSuccessWith(block: { (task: AWSTask<AnyObject>) -> AWSTask<AnyObject>? in
        return self.kinesisRecorder?.submitAllRecords()
    }).continueWith(block: { (task: AWSTask<AnyObject>) -> Any? in
        if let error = task.error as NSError? {
            print("Error: \(error)")
        }
        return nil
    })
}
}

3 Answers:

Answer 0 (score: 0)

Kinesis itself has no built-in feature that does what you want. As an alternative, you can query the service that the stream delivers its data into (S3, DynamoDB, etc.).
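As a rough illustration of that idea, here is a minimal Swift sketch that lists the objects a downstream pipeline (for example a Firehose delivery stream) has written to S3. The AWSS3 framework, the bucket name, and the assumption that your data lands in S3 at all are not part of the original question; treat them as placeholders.

import AWSS3

func listDeliveredObjects() {
    // Uses the default service configuration set up in the AppDelegate.
    let s3 = AWSS3.default()

    guard let request = AWSS3ListObjectsRequest() else { return }
    request.bucket = "my-kinesis-output-bucket" // hypothetical bucket name

    s3.listObjects(request).continueWith { (task: AWSTask<AWSS3ListObjectsOutput>) -> Any? in
        if let error = task.error {
            print("listObjects failed: \(error)")
        } else if let contents = task.result?.contents {
            for object in contents {
                print("Delivered object: \(object.key ?? "?")")
            }
        }
        return nil
    }
}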

Answer 1 (score: 0)

You can use AWSKinesis directly instead of AWSKinesisRecorder.

The putRecords method (http://docs.aws.amazon.com/AWSiOSSDK/latest/Classes/AWSKinesis.html#//api/name/putRecords:completionHandler:) writes the records for you explicitly rather than abstracting the record saving away inside the recorder.

This method gives you a completionHandler block that you can use to inspect what Kinesis responds with, whether that is the sequence numbers of successful records or error codes. From there you can log to your console or react however you like. The caveat is that you have to batch the records yourself before calling putRecords; you could store events locally with UserDefaults or Realm and flush them to Kinesis on a schedule, which is exactly what the recorder gives you for free.
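For concreteness, here is a minimal Swift sketch of that approach. It is an illustration rather than a drop-in implementation: the model classes AWSKinesisPutRecordsInput and AWSKinesisPutRecordsRequestEntry are assumed from the AWS iOS SDK (check the names against your SDK version), the stream name reuses "my_stream" from the question, and the partition key is an arbitrary test value.

import AWSKinesis

func putRecordsDirectly() {
    // Uses the default service configuration set up in the AppDelegate.
    let kinesis = AWSKinesis.default()

    // Build one entry per payload. partitionKey is required by Kinesis;
    // any non-empty string works for a quick test.
    var entries = [AWSKinesisPutRecordsRequestEntry]()
    for i in 0...5 {
        if let entry = AWSKinesisPutRecordsRequestEntry() {
            entry.data = String(format: "TestString-%02d", i).data(using: .utf8)
            entry.partitionKey = "partition-\(i)"
            entries.append(entry)
        }
    }

    guard let input = AWSKinesisPutRecordsInput() else { return }
    input.streamName = "my_stream" // same stream name as in the question
    input.records = entries

    // The completion handler reports per-record results, so success or
    // failure can be printed to the console here.
    kinesis.putRecords(input) { (output, error) in
        if let error = error {
            print("putRecords failed: \(error)")
        } else if let output = output {
            print("putRecords finished, failed records: \(output.failedRecordCount?.intValue ?? 0)")
            output.records?.forEach { result in
                print("sequenceNumber: \(result.sequenceNumber ?? "-"), errorCode: \(result.errorCode ?? "none")")
            }
        }
    }
}

Note that, unlike the recorder, nothing here is cached or retried for you; deciding when to batch and resend records is your responsibility.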

Alternatively, if you just want to confirm that events are flowing through your Kinesis stream successfully, you can attach a Kinesis Firehose delivery stream to your Kinesis stream and configure it to deliver the event data to S3, Amazon Elasticsearch, or Amazon Redshift: https://aws.amazon.com/kinesis/data-firehose/

Answer 2 (score: 0)

Another approach is to attach a consumer to the stream, for example Kinesis Firehose, and then direct the Firehose output to an S3 bucket.