We are trying to send a new file to a (very strict) system, with the New Zealand date and time in the file name (ADF works entirely in UTC).
My output dataset looks like this:
"typeProperties": {
    "fileName": "MasterFile-{fileDateNameVariable}.csv",
    "folderPath": "something",
    "format": {
        "type": "TextFormat",
        "columnDelimiter": ",",
        "nullValue": "",
        "firstRowAsHeader": true
    },
    "partitionedBy": [
        {
            "name": "fileDateNameVariable",
            "value": {
                "type": "DateTime",
                "date": "$$addhours(SliceStart, 13)",
                "format": "yyyyMMdd"
            }
        }
    ]
}
As you can see, I have tried adding "$$addhours(SliceStart, 13)", but to no avail:
Input is malformed. Reason: inputTable.typeProperties : The date $$addhours(SliceStart, 13) is not a valid variable to partition by. Valid values are SliceStart and SliceEnd..
Is there any way to build a time variable into the file name without going through the partitionedBy section?
Answer 0: (score: 0)
If you are using ADF v2, you can use @pipeline().TriggerTime to get the same result without using partitionedBy.
But this looks like ADF v1 JSON, so it should look something like this:
"typeProperties": {
    "fileName": "MasterFile-{Time.AddHours(fileDateNameVariable, 13)}.csv",
    "folderPath": "something",
    "format": {
        "type": "TextFormat",
        "columnDelimiter": ",",
        "nullValue": "",
        "firstRowAsHeader": true
    },
    "partitionedBy": [
        {
            "name": "fileDateNameVariable",
            "value": {
                "type": "DateTime",
                "date": "SliceStart",
                "format": "yyyyMMdd"
            }
        }
    ]
}
This may not run exactly as-is (I cannot test it right now), but that is the correct way to call the functions.
Be sure to review the functions and system variables here: https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-functions-variables
Hope this helps!
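For the ADF v2 route mentioned earlier, a minimal sketch of what the file name expression could look like, assuming a parameterizable sink dataset (the surrounding property names here are illustrative, not taken from the question):

"fileName": {
    "value": "@concat('MasterFile-', formatDateTime(addHours(pipeline().TriggerTime, 13), 'yyyyMMdd'), '.csv')",
    "type": "Expression"
}

This uses the ADF v2 expression functions addHours and formatDateTime on @pipeline().TriggerTime, so no partitionedBy block is needed at all.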
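One caveat worth noting: adding a fixed 13 hours only matches New Zealand during daylight saving time; in winter the offset is +12. A short Python sketch (not part of ADF, just an illustration using the standard zoneinfo module) shows how the UTC offset for Pacific/Auckland shifts across the year:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

nz = ZoneInfo("Pacific/Auckland")

# A January instant: New Zealand is on daylight saving time (NZDT, UTC+13)
summer = datetime(2023, 1, 15, 0, 0, tzinfo=timezone.utc).astimezone(nz)
# A July instant: New Zealand is on standard time (NZST, UTC+12)
winter = datetime(2023, 7, 15, 0, 0, tzinfo=timezone.utc).astimezone(nz)

print(summer.utcoffset())  # 13:00:00
print(winter.utcoffset())  # 12:00:00
```

So a hard-coded addhours(SliceStart, 13) will produce file names one hour off for part of the year; converting with a real time zone (rather than a fixed offset) avoids that.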