I have seen a few different questions asking how to parse large JSON files, but every one I have come across has uniform data throughout the file. Every example I have seen with non-uniform JSON objects typically involves a for loop, which in my case is extremely slow.
My problem is that I have about 5-10 GB of data, split across a bunch of JSON files each around 200-2,000 kB, with multiple different JSON object types interspersed in them. Most of the solutions I have seen assume that all of the objects will be of the same type. I do have a definition for all of the object types, but I want a fast and efficient way to read them all in and get them into a usable format.
I currently have a script that reads through each JSON file object by object and parses it that way, but it takes a few hours to get through all of the data, and this is something I have to do pretty often, so I would hope I can decrease that time somehow. The end goal is to organize all of this data into a bunch of arrays that can then be added to an existing SQL database based on timestamps.
I have about 15 object types. Each object type has some information about where the data came from and then the actual data, although some objects include data from multiple sources in a nested fashion. Below are examples of the most straightforward and most complicated objects, just to give an idea.
Most complicated: Typically this message carries info for around 50 devices, but I reduced that to two for brevity, so the deviceStatus and devices entries are around 50 items long for most of these messages. (A rough sketch of how I pull per-device values out of one of these follows the example.)
{
    "type": "location",
    "source": "gps",
    "broadcastStatus": 1,
    "timestamp": xxxxxxxxxx,
    "deviceStatus": {
        "id1": {
            "msgCount": 1,
            "latestCheckin": "ACK",
            "timestamp": xxxxxxxxxx
        },
        "id2": {
            "msgCount": 3,
            "latestCheckin": "ACK",
            "timestamp": xxxxxxxxxx
        }
    },
    "devices": [
        {
            "deviceId": "id1",
            "position": [
                xx.xxxxx,
                -xx.xxxxx
            ],
            "attitude": [
                x.xxx,
                x.xxx,
                x.xxx
            ],
            "velocity": [
                x.xxx,
                x.xxx,
                x.xxx
            ],
            "spd": xxx.xxxx,
            "percentComplete": x.xxx
        },
        {
            "deviceId": "id2",
            "position": [
                xx.xxxxx,
                -xx.xxxxx
            ],
            "attitude": [
                x.xxx,
                x.xxx,
                x.xxx
            ],
            "velocity": [
                x.xxx,
                x.xxx,
                x.xxx
            ],
            "spd": xxx.xxxx,
            "percentComplete": x.xxx
        }
    ]
}
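For context, flattening one of these into per-device lists currently looks roughly like this (a simplified sketch, not my exact code; the output layout and the process_location name are just for illustration, and the field names come from the example above):

# Rough sketch: flatten one "location" message into per-device lists.
# The location dict layout here is illustrative only.
def process_location(msg, location):
    for dev in msg["devices"]:
        devId = dev["deviceId"]
        status = msg["deviceStatus"].get(devId, {})
        entry = location.setdefault(devId, {"Time": [], "Position": [], "MsgCount": []})
        entry["Time"].append(msg["timestamp"])
        entry["Position"].append(dev["position"])
        entry["MsgCount"].append(status.get("msgCount"))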
Most straightforward: These are all of the same type, but there are around 40 channels that broadcast data like this.
{
    "type": "timebased",
    "source": "daq",
    "deviceId": "id1",
    "timestamp": xxxxxxxxxx,
    "channelName": "Channel1",
    "value": xxx,
    "addData": {
        "junk1": xxxx,
        "junk2": xxxx,
        "junk3": xxxx
    }
}
And below is some pseudocode that shows how I currently go through and parse everything.
import json
import os
from glob import glob

basePath = r'pathtofolder'
fileList = glob(os.path.join(basePath, '*.json'))

timebased = {}  # nested dictionary full of lists for the different devices and data channels
location = {}   # same as above, only for the location data

for fileName in fileList:
    with open(fileName) as f:
        try:
            objJson = json.load(f)
        except json.JSONDecodeError:
            objJson = []
            print('******* Failed to load json data from ' + fileName + ' **********')
    for msg in objJson:
        if msg["type"] == "timebased" and msg["source"] == "daq":
            ## Do some things to process this type of message
            timebased[msg["deviceId"]][msg["channelName"]]["Time"].append(msg["timestamp"])
            timebased[msg["deviceId"]][msg["channelName"]]["Data"].append(msg["value"])
        elif msg["type"] == "location" and msg["source"] == "gps":
            ## Do some things to process this type of message, similar to the above case
            ...
        ## ... and so on for around 15 different cases

## Then this all gets pushed to a csv for now but ideally will be a DB in the future
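To make the ~15 cases concrete, here is roughly how that chain could look as a lookup keyed on (type, source) instead of a long if/elif; the handler names are made up (process_location is the sketch from earlier):

# Hypothetical dispatch table: one handler per (type, source) pair.
handlers = {
    ("timebased", "daq"): process_timebased,
    ("location", "gps"): process_location,
    # ... one entry for each of the ~15 message types
}

for msg in objJson:
    handler = handlers.get((msg["type"], msg["source"]))
    if handler is not None:
        handler(msg)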
I know this isn’t all ideal, but I would really like help with the best way to go through these messages and organize all of the data more efficiently. I know appending to lists is very much not ideal, but that is also something I need help with, as I don’t know how else to do it when I won’t know the sizes ahead of time. As I said, the end goal is a bunch of lists or arrays of time and data values that I can then insert into an SQL-type database made for storing data like this. That database has its own API, which requires passing one list/array of time values and another, of equal length, of data values for each channel. A sketch of what I mean is below.
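This is the accumulate-then-convert handoff I am imagining (db_api.insert is a stand-in for the real API, and defaultdict/numpy are just one way to do it, not something I have settled on):

import numpy as np
from collections import defaultdict

# Grow plain Python lists while parsing (sizes unknown up front),
# then convert once at the end for the DB insert.
timebased = defaultdict(lambda: defaultdict(lambda: {"Time": [], "Data": []}))

# ... during parsing:
# timebased[deviceId][channelName]["Time"].append(msg["timestamp"])
# timebased[deviceId][channelName]["Data"].append(msg["value"])

for deviceId, channels in timebased.items():
    for channelName, series in channels.items():
        times = np.asarray(series["Time"])
        values = np.asarray(series["Data"])
        # db_api.insert(deviceId, channelName, times, values)  # placeholder for the real API call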
My next attempt was going to be to switch to asyncio, since I am parsing tens of thousands of files and I/O may be my choke point, but I wanted some input on the actual message parsing as well.
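This is roughly the shape I had in mind for that attempt (a minimal sketch; load_one just isolates the same loading logic from above so it can run off the main thread, and asyncio.to_thread requires Python 3.9+):

import asyncio
import json

def load_one(fileName):
    # Same loading logic as above, isolated so it can run in a worker thread
    with open(fileName) as f:
        try:
            return json.load(f)
        except json.JSONDecodeError:
            print('Failed to load json data from ' + fileName)
            return []

async def load_all(fileList):
    # Push the blocking reads onto the default thread pool
    return await asyncio.gather(*(asyncio.to_thread(load_one, fn) for fn in fileList))

# allMsgs = asyncio.run(load_all(fileList))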