Description
TensorRT inference engine settings:
- Inference precision - DataType.FLOAT
- Max batch size - 1
[TensorRT] ERROR: UffParser: Validator error: concat_box_loc: Unsupported operation _FlattenConcat_TRT
Building TensorRT engine. This may take few minutes.
[TensorRT] ERROR: Network must have at least one output
Engine: None
Traceback (most recent call last):
  File "detect_objects_webcam.py", line 190, in <module>
    main()
  File "detect_objects_webcam.py", line 157, in main
    batch_size=args.max_batch_size)
  File "/home/nvidia/object-detection-tensorrt-example-master/SSD_Model/utils/inference.py", line 116, in __init__
    engine_utils.save_engine(self.trt_engine, trt_engine_path)
  File "/home/nvidia/object-detection-tensorrt-example-master/SSD_Model/utils/engine.py", line 91, in save_engine
    buf = engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
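The AttributeError at the bottom is a downstream symptom, not the root cause. The UffParser rejects the `_FlattenConcat_TRT` op (its plugin is not registered at parse time), so the network ends up with no outputs, the builder returns `None` instead of an engine, and `save_engine` then calls `serialize()` on `None`. A minimal defensive sketch of `save_engine` (a hypothetical variant of the helper in `utils/engine.py`, not the repo's actual code) would surface the real failure with a clearer message:

```python
def save_engine(engine, path):
    """Serialize a TensorRT engine to disk.

    Guard against engine being None: TensorRT's builder returns None
    (rather than raising) when the network is invalid, e.g. after a
    UffParser validator error, so fail here with a clear message.
    """
    if engine is None:
        raise RuntimeError(
            "TensorRT engine build failed (engine is None); "
            "check earlier parser errors such as unsupported ops."
        )
    with open(path, "wb") as f:
        f.write(engine.serialize())
```

The underlying `_FlattenConcat_TRT` error itself usually means the FlattenConcat plugin library was not loaded before parsing; in this repo's setup that is typically done by loading the compiled plugin (e.g. `ctypes.CDLL` on `libflattenconcat.so`) before the UFF parse step, though the exact path depends on your build.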