I have multiple JSON files, and I am searching for specific elements in each file's content. If there is a match, I want to copy the metadata parent node and its child nodes to another storage.
So far, my pipeline copies the entire JSON file to the other blob storage when the criteria match.
Sample.json:
{
    "record": {
        "org": {
            "name": "School",
            "postalCode": "40121"
        },
        "metadata": {
            "an": "file6.json",
            "markings": {
                "document": {
                    "distribution": {
                        "code": "A"
                    }
                }
            },
            "publicationDate": {
                "date": "2022-11-25"
            }
        }
    }
}
If the condition matches, the result should be that only the metadata {...} node of sample.json is copied to the other storage:
"metadata": {
"an": "file6.json",
"markings": {
"document": {
"distribution": {
"code": "A"
}
}
},
"publicationDate": {
"date": "2022-11-25"
}
}
But my ADF pipeline copies the entire JSON to the other storage.
You can achieve your requirement by using a temporary CSV file. The Copy activity always needs a tabular source, so a one-cell CSV gives it exactly one row to copy, and the extracted JSON string is then injected as an additional column.
Create a temporary CSV file like below, with a single column named one and a single row:
one
1
After checking your condition, add a Set variable activity with the expression below inside the True activities of your If activity. Enable First row only in the Lookup activity so that output.firstRow is available.
@concat('{"metadata":',activity('Lookup1').output.firstRow.record.metadata,'}')
Here, I have skipped the If activity for the demonstration.
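If you do use the If activity, one possible condition expression, based on the distribution code in the sample above (adjust the path and the value 'A' to your actual criteria), would be:
@equals(activity('Lookup1').output.firstRow.record.metadata.markings.document.distribution.code, 'A')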
Then take a Copy activity with the above temporary CSV dataset as the source. Under Additional columns, create a new column named new and give the above variable as its dynamic expression.
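As a rough sketch, the source side of the Copy activity JSON would look like this (assuming the variable set in the Set variable activity is named metadata; use your own variable name):
"source": {
    "type": "DelimitedTextSource",
    "additionalColumns": [
        {
            "name": "new",
            "value": {
                "value": "@variables('metadata')",
                "type": "Expression"
            }
        }
    ]
}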
In the sink, create a CSV dataset; you can use a dataset parameter for the filename. Give the following configurations so that the CSV dataset writes the string as-is and effectively creates a JSON file: uncheck First row as header, and set the quote character and escape character to No quote character and No escape character.
Give your required target file name as <filename>.json
to the above parameter.
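A minimal sketch of such a sink dataset, assuming a blob linked service named AzureBlobStorage1 and a container named target (both placeholders for your own names):
{
    "name": "TempCsvAsJson",
    "properties": {
        "linkedServiceName": {
            "referenceName": "AzureBlobStorage1",
            "type": "LinkedServiceReference"
        },
        "parameters": {
            "filename": { "type": "string" }
        },
        "type": "DelimitedText",
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "fileName": {
                    "value": "@dataset().filename",
                    "type": "Expression"
                },
                "container": "target"
            },
            "columnDelimiter": ",",
            "quoteChar": "",
            "firstRowAsHeader": false
        }
    }
}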
Now, go to the mapping of the Copy activity and give the below expression as dynamic content.
@json('{
    "type": "TabularTranslator",
    "mappings": [
        {
            "source": {
                "name": "new",
                "type": "String"
            },
            "sink": {
                "type": "String",
                "physicalType": "String",
                "ordinal": 1
            }
        }
    ],
    "typeConversion": true,
    "typeConversionSettings": {
        "allowDataTruncation": true,
        "treatBooleanAsNumber": false
    }
}')
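This mapping writes only the single additional column new to the sink, at ordinal 1, so the JSON string lands in the file verbatim; the dummy one column from the temporary CSV is dropped.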
Now, run the pipeline and it will create the required JSON file for each iteration.