Using the ADF REST connector to read and transform FHIR data

Date: 2019-04-25 20:41:13

Tags: azure azure-data-factory hl7-fhir

I am trying to use Azure Data Factory to read data from a FHIR server and transform the results into newline-delimited JSON (ndjson) files in Azure Blob Storage. Specifically, if you query a FHIR server, you might get something like:

{
    "resourceType": "Bundle",
    "id": "som-id",
    "type": "searchset",
    "link": [
        {
            "relation": "next",
            "url": "https://fhirserver/?ct=token"
        },
        {
            "relation": "self",
            "url": "https://fhirserver/"
        }
    ],
    "entry": [
        {
            "fullUrl": "https://fhirserver/Organization/1234",
            "resource": {
                "resourceType": "Organization",
                "id": "1234",
                // More fields
        },
        {
            "fullUrl": "https://fhirserver/Organization/456",
            "resource": {
                "resourceType": "Organization",
                "id": "456",
                // More fields
        },

        // More resources
    ]
}

Basically a bundle of resources. I want to transform that into a newline-delimited (aka ndjson) file where each line is just the JSON for a resource:

{"resourceType": "Organization", "id": "1234", // More fields }
{"resourceType": "Organization", "id": "456", // More fields }
// More lines with resources

I am able to set up the REST connector and it can query the FHIR server (including pagination), but no matter what I try, I cannot seem to generate the output I want. I set up an Azure Blob Storage dataset:

{
    "name": "AzureBlob1",
    "properties": {
        "linkedServiceName": {
            "referenceName": "AzureBlobStorage1",
            "type": "LinkedServiceReference"
        },
        "type": "AzureBlob",
        "typeProperties": {
            "format": {
                "type": "JsonFormat",
                "filePattern": "setOfObjects"
            },
            "fileName": "myout.json",
            "folderPath": "outfhirfromadf"
        }
    },
    "type": "Microsoft.DataFactory/factories/datasets"
}

And configured a copy activity:

{
    "name": "pipeline1",
    "properties": {
        "activities": [
            {
                "name": "Copy Data1",
                "type": "Copy",
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "typeProperties": {
                    "source": {
                        "type": "RestSource",
                        "httpRequestTimeout": "00:01:40",
                        "requestInterval": "00.00:00:00.010"
                    },
                    "sink": {
                        "type": "BlobSink"
                    },
                    "enableStaging": false,
                    "translator": {
                        "type": "TabularTranslator",
                        "schemaMapping": {
                            "resource": "resource"
                        },
                        "collectionReference": "$.entry"
                    }
                },
                "inputs": [
                    {
                        "referenceName": "FHIRSource",
                        "type": "DatasetReference"
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "AzureBlob1",
                        "type": "DatasetReference"
                    }
                ]
            }
        ]
    },
    "type": "Microsoft.DataFactory/factories/pipelines"
}
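
For reference, paging is handled with pagination rules on the RestSource. Below is a minimal sketch of that part of the source; the `paginationRules`/`AbsoluteUrl` properties are part of the REST connector, but whether it accepts a JSONPath filter expression like this one is an assumption, so treat the expression as illustrative:

"source": {
    "type": "RestSource",
    "httpRequestTimeout": "00:01:40",
    "requestInterval": "00.00:00:00.010",
    "paginationRules": {
        // Follow the bundle's "next" link until the server stops returning one.
        // The filter expression below is an assumption; a positional path such
        // as "$.link[0].url" would also work if "next" is always the first link.
        "AbsoluteUrl": "$.link[?(@.relation == 'next')].url"
    }
}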

But in the end (in spite of the configured schema mapping), the result in the blob is always just the raw bundle returned from the server. If I configure the output blob to be comma-delimited text, I can extract fields and create a flattened tabular view, but that is not really what I want.

Any suggestions would be much appreciated.

2 Answers:

Answer 0 (score: 0)

As briefly discussed in the comments, the Copy Activity does not provide much functionality aside from mapping data. As stated in the documentation, the Copy activity performs the following steps:

  1. Reads data from a source data store.
  2. Performs serialization/deserialization, compression/decompression, column mapping, and so on, based on the configuration of the input dataset, output dataset, and Copy activity.
  3. Writes data to the sink/destination data store.

It does not look like the Copy Activity does anything other than copying stuff around efficiently.

What I found to be working is using Databricks.

Here are the steps:

  1. Add a Databricks account to your subscription;
  2. Go to the Databricks page by clicking the authoring button;
  3. Create a notebook;
  4. Write the script (Scala, Python, or .NET was recently announced).

The script would go as follows (a minimal sketch follows the list):

  1. Read the data from Blob storage;
  2. Filter out and transform the data as needed;
  3. Write the data back to Blob storage;
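
As a rough illustration, here is a minimal Scala sketch of such a notebook. The storage account, container, and folder names are placeholders, and it assumes the raw bundles have already been landed in Blob storage as JSON documents:

import org.apache.spark.sql.functions.explode
import spark.implicits._

// Read the raw FHIR bundles that the copy activity dumped into blob
// storage. Account, container, and folder names are placeholders.
val bundles = spark.read
  .option("multiLine", true)
  .json("wasbs://fhir@myaccount.blob.core.windows.net/bundles/")

// Each bundle carries an "entry" array; explode it so that every
// resource becomes its own row, then keep only the resource payload.
val resources = bundles
  .select(explode($"entry").as("entry"))
  .select("entry.resource.*")

// Spark's json writer emits one JSON object per line, which is the
// ndjson shape the question asks for.
resources.write
  .mode("overwrite")
  .json("wasbs://fhir@myaccount.blob.core.windows.net/ndjson/")

Note that mixing resource types in one folder would merge their schemas into one; in practice you would probably process each resource type separately.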

You can test the script from there, and once it is ready, you can go back to your pipeline and create a Notebook activity that points to the notebook holding the script.

I struggled coding in Scala, but it was worth it :)

Answer 1 (score: 0)

So I sort of found a solution. If I do the original conversion step of simply dumping the bundle into a JSON file, and then do another conversion from the JSON file into what I pretend is a text file in another blob, I can get the ndjson file created.

Basically, define another blob dataset:

{
    "name": "AzureBlob2",
    "properties": {
        "linkedServiceName": {
            "referenceName": "AzureBlobStorage1",
            "type": "LinkedServiceReference"
        },
        "type": "AzureBlob",
        "structure": [
            {
                "name": "Prop_0",
                "type": "String"
            }
        ],
        "typeProperties": {
            "format": {
                "type": "TextFormat",
                "columnDelimiter": ",",
                "rowDelimiter": "",
                "quoteChar": "",
                "nullValue": "\\N",
                "encodingName": null,
                "treatEmptyAsNull": true,
                "skipLineCount": 0,
                "firstRowAsHeader": false
            },
            "fileName": "myout.json",
            "folderPath": "adfjsonout2"
        }
    },
    "type": "Microsoft.DataFactory/factories/datasets"
}

Note that this one has TextFormat and that the quoteChar is blank (if a quote character were set, the JSON on each line would come out quoted and escaped). If I then add another Copy Activity:

{
    "name": "pipeline1",
    "properties": {
        "activities": [
            {
                "name": "Copy Data1",
                "type": "Copy",
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "typeProperties": {
                    "source": {
                        "type": "RestSource",
                        "httpRequestTimeout": "00:01:40",
                        "requestInterval": "00.00:00:00.010"
                    },
                    "sink": {
                        "type": "BlobSink"
                    },
                    "enableStaging": false,
                    "translator": {
                        "type": "TabularTranslator",
                        "schemaMapping": {
                            "['resource']": "resource"
                        },
                        "collectionReference": "$.entry"
                    }
                },
                "inputs": [
                    {
                        "referenceName": "FHIRSource",
                        "type": "DatasetReference"
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "AzureBlob1",
                        "type": "DatasetReference"
                    }
                ]
            },
            {
                "name": "Copy Data2",
                "type": "Copy",
                "dependsOn": [
                    {
                        "activity": "Copy Data1",
                        "dependencyConditions": [
                            "Succeeded"
                        ]
                    }
                ],
                "policy": {
                    "timeout": "7.00:00:00",
                    "retry": 0,
                    "retryIntervalInSeconds": 30,
                    "secureOutput": false,
                    "secureInput": false
                },
                "typeProperties": {
                    "source": {
                        "type": "BlobSource",
                        "recursive": true
                    },
                    "sink": {
                        "type": "BlobSink"
                    },
                    "enableStaging": false,
                    "translator": {
                        "type": "TabularTranslator",
                        "columnMappings": {
                            "resource": "Prop_0"
                        }
                    }
                },
                "inputs": [
                    {
                        "referenceName": "AzureBlob1",
                        "type": "DatasetReference"
                    }
                ],
                "outputs": [
                    {
                        "referenceName": "AzureBlob2",
                        "type": "DatasetReference"
                    }
                ]
            }
        ]
    },
    "type": "Microsoft.DataFactory/factories/pipelines"
}
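
The net effect is roughly the following (illustrative, using the organizations from the question):

// First hop (AzureBlob1, JsonFormat/setOfObjects): each resource wrapped in a "resource" property
{"resource": {"resourceType": "Organization", "id": "1234", // More fields }}
{"resource": {"resourceType": "Organization", "id": "456", // More fields }}

// Second hop (AzureBlob2, TextFormat with blank quoteChar): the bare resource JSON per line
{"resourceType": "Organization", "id": "1234", // More fields }
{"resourceType": "Organization", "id": "456", // More fields }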

And then things work out. It is not ideal in that I now have two copies of the data in blobs, but I suppose one of them can easily be deleted.

I would still love to hear about it if somebody has a one-step solution.