I have an auto-annotation script for YOLOv7 that is part of an annotation tool. When I call an API, it takes the dataset path, the weights path, and all other required details, and publishes them via RabbitMQ to a GPU server. On the GPU server, a callback instance consumes the message and loads the model to do the auto-annotation.
I am loading the model using the attempt_load function from YOLOv7 (models -> experimental -> attempt_load), and the function itself is found and called successfully, but:
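For context, the message published to RabbitMQ is a small JSON payload carrying the dataset and weights details. The field names and paths below are only illustrative, not the actual schema of my tool:

```python
import json

# Illustrative payload only -- the real field names and paths in my tool differ.
message = {
    "dataset_path": "/data/datasets/project_42",
    "weights_path": "/models/yolov7/best.pt",
    "conf_threshold": 0.25,
    "task_id": "auto-annotate-001",
}

# Bytes published to the queue by the API side.
body = json.dumps(message).encode("utf-8")

# On the GPU server, the callback decodes the body back into a dict
# before loading the model, roughly: attempt_load(payload["weights_path"]).
payload = json.loads(body.decode("utf-8"))
```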
def attempt_load(weights, map_location=None):
    # Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
    model = Ensemble()
    for w in weights if isinstance(weights, list) else [weights]:
        attempt_download(w)
        ckpt = torch.load(w, map_location=map_location)  # load
        model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval())  # FP32 model

    # Compatibility updates
    for m in model.modules():
        if type(m) in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6, nn.SiLU]:
            m.inplace = True  # pytorch 1.7.0 compatibility
        elif type(m) is nn.Upsample:
            m.recompute_scale_factor = None  # torch 1.11.0 compatibility
        elif type(m) is Conv:
            m._non_persistent_buffers_set = set()  # pytorch 1.6.0 compatibility

    if len(model) == 1:
        return model[-1]  # return model
    else:
        print('Ensemble created with %s\n' % weights)
        for k in ['names', 'stride']:
            setattr(model, k, getattr(model[-1], k))
        return model  # return ensemble
The Python process always breaks at the line below:
model.append(ckpt['ema' if ckpt.get('ema') else 'model'].float().fuse().eval()) # FP32 model
But if I run the same code manually in a Python console, it works fine and completes the auto-annotation. I added logging, but no exceptions are recorded. What could be the reason? I don't understand this behaviour.
The same issue occurs on different GPU servers.
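Since nothing shows up in my logs, one thing I am considering is that a crash in native code (e.g. inside torch.load or fuse()) would never raise a Python exception and so would bypass my except blocks entirely. A minimal sketch of enabling faulthandler in the consumer process to capture such a crash (the log file location here is just an example):

```python
import faulthandler
import tempfile

# Write low-level crash tracebacks to a file: a segfault in native code
# (e.g. inside torch.load or model.fuse) bypasses Python except blocks
# and normal logging, so this file may be the only trace it leaves.
crash_log = tempfile.NamedTemporaryFile(
    mode="w", prefix="annotator_crash_", suffix=".log", delete=False
)
faulthandler.enable(file=crash_log, all_threads=True)

# ... the RabbitMQ callback then calls attempt_load(weights_path) as usual.
```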
//
This issue is new to me; I have never faced anything like it before. I searched the internet for similar issues but did not find a solution.
Thanks in advance for any advice.