I am trying to implement the original YOLO architecture for object detection, but using the COCO dataset instead of VOC. However, I am a bit confused about the image sizes in COCO. The original YOLO was trained on VOC and is designed to take 448x448 images. Since I am using COCO, I thought of cropping the images down to that size. But that would mean I would have to change the annotations file as well, and it might make detection harder because some objects could end up partially or completely cut off by the crop.
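Here is a rough sketch of what I had in mind (not tested end-to-end, and the function name is just mine): center-crop a COCO image to 448x448 and shift/clip its `[x, y, w, h]` boxes into the crop window. I'm assuming the standard COCO box format and using PIL.

```python
from PIL import Image

def crop_with_boxes(img, boxes, size=448):
    """Center-crop `img` to size x size and adjust COCO-style boxes [x, y, w, h]."""
    w, h = img.size
    left = max(0, (w - size) // 2)
    top = max(0, (h - size) // 2)
    # PIL pads with black if the image is smaller than `size` in either dimension.
    img = img.crop((left, top, left + size, top + size))

    new_boxes = []
    for x, y, bw, bh in boxes:
        # Shift into crop coordinates, then clip to the crop window.
        x1 = max(x - left, 0)
        y1 = max(y - top, 0)
        x2 = min(x + bw - left, size)
        y2 = min(y + bh - top, size)
        if x2 > x1 and y2 > y1:  # keep only boxes that are still (at least partly) visible
            new_boxes.append([x1, y1, x2 - x1, y2 - y1])
    return img, new_boxes
```

As the clipping step shows, boxes that fall outside the crop get dropped entirely, which is exactly the part I'm worried about.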
I am pretty new to this, so I am not sure if this is the right approach or what other options there are (e.g. resizing instead of cropping). Any help would be appreciated.