Beery, S., Wu, G., Rathod, V., Votel, R., Huang, J., 2020. Context R-CNN: Long term temporal context for per-camera object detection. In: Conference on Computer Vision and Pattern Recognition (CVPR). IEEE Computer Society, pp. 13072-13082. https://doi.org/10.1109/CVPR42600.2020.01309.
Bochkovskiy, A., Wang, C.Y., Liao, H.Y.M., 2020. YOLOv4: Optimal speed and accuracy of object detection. arXiv:2004.10934.
Driessen, M.M., Jarman, P.J., Troy, S., Callander, S., 2017. Animal detections vary among commonly used camera trap models. Wildl. Res. 44, 291. https://doi.org/10.1071/wr16228.
Droissart, V., Azandi, L., Onguene, E.R., Savignac, M., Smith, T.B., Deblauwe, V., 2021. PICT: a low-cost, modular, open-source camera trap system to study plant-insect interactions. Methods Ecol. Evol. 12, 1389-1396. https://doi.org/10.1111/2041-210X.13618.
Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A., 2010. The PASCAL visual object classes (VOC) challenge. Int. J. Comput. Vis. 88, 303-338.
Findlay, M.A., Briers, R.A., White, P.J.C., 2020. Component processes of detection probability in camera-trap studies: understanding the occurrence of false-negatives. Mammal Res. 65, 167-180. https://doi.org/10.1007/s13364-020-00478-y.
Glen, A.S., Cockburn, S., Nichols, M., Ekanayake, J., Warburton, B., 2013. Optimising camera traps for monitoring small mammals. PLoS One 8. https://doi.org/10.1371/
Hobbs, M.T., Brehme, C.S., 2017. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates. PLoS One 12, e0185026. https://doi.org/10.1371/journal.pone.0185026.
Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H., 2017. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861.
Howard, A., Sandler, M., Chen, B., Wang, W., Chen, L., Tan, M., Chu, G., Vasudevan, V., Zhu, Y., Pang, R., Adam, H., Le, Q., 2019. Searching for MobileNetV3. In: International Conference on Computer Vision, pp. 1314-1324.
Intel, 2016. Efficient Implementation of Neural Network Systems Built on FPGAs, and Programmed with OpenCL™. https://www.intel.co.uk/content/dam/www/programmable/us/en/pdfs/literature/solution-sheets/efficient_neural_networks.pdf.
Jolles, J.W., 2021. Broad-scale applications of the Raspberry Pi: a review and guide for biologists. Methods Ecol. Evol. 12, 1-18. https://doi.org/10.1111/2041-210X.13652.
Jumeau, J., Petrod, L., Handrich, Y., 2017. A comparison of camera trap and permanent recording video camera efficiency in wildlife underpasses. Ecol. Evol. 7, 7399-7407. https://doi.org/10.1002/ece3.3149.
Klemens, J.A., Tripepi, M., McFoy, S.A., 2021. A motion-detection based camera trap for small nocturnal mammals with low latency and high signal-to-noise ratio. Methods Ecol. Evol. 12, 1323-1328. https://doi.org/10.1111/2041-210X.13607.
Ko, B.H., Jeong, S.G., Ahn, Y.G., Park, K.S., Park, N.C., Park, Y.P., 2014. Analysis of the correlation between acoustic noise and vibration generated by a multi-layer ceramic capacitor. Microsyst. Technol. 20, 1671-1677. https://doi.org/10.1007/s00542-014-2209-5.
McIntyre, T., Majelantle, T.L., Slip, D.J., Harcourt, R.G., 2020. Quantifying imperfect camera-trap detection probabilities: implications for density modelling. Wildl. Res. 47, 177. https://doi.org/10.1071/wr19040.
Meek, P.D., Ballard, G., Claridge, A., Kays, R., Moseby, K., O'Brien, T., O'Connell, A., Sanderson, J., Swann, D.E., Tobler, M., Townsend, S., 2014a. Recommended guiding principles for reporting on camera trapping research. Biodivers. Conserv. 23, 2321-2343. https://doi.org/10.1007/s10531-014-0712-8.
Meek, P.D., Ballard, G.A., Fleming, P.J.S., Schaefer, M., Williams, W., Falzon, G., 2014b. Camera traps can be heard and seen by animals. PLoS One 9, e110832. https://doi.org/10.1371/journal.pone.0110832.
Meek, P., Ballard, G., Fleming, P., Falzon, G., 2016. Are we getting the full picture? Animal responses to camera traps and implications for predator studies. Ecol. Evol. 6, 3216-3225. https://doi.org/10.1002/ece3.2111.
Nazir, S., Newey, S., Irvine, R.J., Verdicchio, F., Davidson, P., Fairhurst, G., van der Wal, R., 2017. WiseEye: next generation expandable and programmable camera trap platform for wildlife research. PLoS One 12. https://doi.org/10.1371/journal.pone.0169758.
Petso, T., Jamisola Jr., R.S., Mpoeleng, D., 2022. Review on methods used for wildlife species and individual identification. Eur. J. Wildl. Res. 68. https://doi.org/10.1007/s10344-021-01549-4.
Powers, D., 2011. Evaluation: from precision, recall and F-measure to ROC, informedness, markedness & correlation. J. Mach. Learn. Technol. 2, 2229-3981. https://doi.org/10.9735/2229-3981.
Proppe, D.S., Pandit, M.M., Bridge, E.S., Jasperse, P., Holwerda, C., 2020. Semi-portable solar power to facilitate continuous operation of technology in the field. Methods Ecol. Evol. 11, 1388-1394. https://doi.org/10.1111/2041-210x.13456.
Ren, S., He, K., Girshick, R., Sun, J., 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In: Cortes, C., Lawrence, N., Lee, D., Sugiyama, M., Garnett, R. (Eds.), Advances in Neural Information Processing Systems 28.
Robley, A., Gormley, A., Woodford, L., Lindeman, M., Whitehead, B., Albert, R., Bowd, M., Smith, A., 2010. Evaluation of Camera Trap Sampling Designs Used to Determine Change in Occupancy Rate and Abundance of Feral Cats. Arthur Rylah Institute for Environmental Research.
Schindler, F., Steinhage, V., 2021. Identification of animals and recognition of their actions in wildlife videos using deep learning techniques. Ecol. Inform. 61, 101215. https://doi.org/10.1016/j.ecoinf.2021.101215.
Si, J., Harris, S.L., Yfantis, E., 2020. Neural networks on an FPGA and hardware-friendly activation functions. J. Comput. Commun. 8, 251-277. https://doi.org/10.4236/jcc.2020.812021.
Smith, S.W., 2002. Digital Signal Processing: A Practical Guide for Engineers and Scientists. Elsevier.
Swanson, A., Kosmala, M., Lintott, C., Simpson, R., Smith, A., Packer, C., 2015. Snapshot Serengeti, high-frequency annotated camera trap images of 40 mammalian species in an African savanna. Sci. Data 2. https://doi.org/10.1038/sdata.2015.26.
Tabak, M.A., et al., 2019. Machine learning to classify animal species in camera trap images: applications in ecology. Methods Ecol. Evol. 10, 585-590. https://doi.org/10.1111/2041-210X.13120.
Taggart, P.L., Peacock, D.E., Fancourt, B.A., 2020. Camera trap flash-type does not influence the behaviour of feral cats (Felis catus). Aust. Mammal. 42, 220. https://doi.org/10.1071/am18056.
Tan, M., Pang, R., Le, Q.V., 2020. EfficientDet: Scalable and efficient object detection. In: Conference on Computer Vision and Pattern Recognition. IEEE. https://doi.org/10.1109/cvpr42600.2020.01079.
Trnovszký, T., Sýkora, P., Hudec, R., 2017. Comparison of background subtraction methods on near infra-red spectrum video sequences. Proc. Eng. 192, 887-892. https://doi.org/10.1016/j.proeng.2017.06.153.
Urbanek, R.E., Ferreira, H.J., Olfenbuttel, C., Dukes, C.G., Albers, G., 2019. See what you've been missing: an assessment of Reconyx® PC900 Hyperfire cameras. Wildl. Soc. Bull. 43, 630-638. https://doi.org/10.1002/wsb.1015.
van Rijsbergen, C.J., 1979. Information Retrieval, 2nd ed. Butterworth-Heinemann, USA.
Wang, C.Y., Bochkovskiy, A., Liao, H.Y.M., 2021. Scaled-YOLOv4: Scaling cross stage partial network. In: Conference on Computer Vision and Pattern Recognition (CVPR), pp. 13029-13038.
Wei, W., Luo, G., Ran, J., Li, J., 2020. Zilong: a tool to identify empty images in camera trap data. Ecol. Inform. 55, 101021. https://doi.org/10.1016/j.ecoinf.2019.101021.
Weingarth, K., Zimmermann, F., Knauer, F., Heurich, M., 2013. Evaluation of six digital camera models for the use in capture-recapture sampling of Eurasian lynx. Waldökologie Online 13, 87-92.
Xi, T., Wang, J., Qiao, H., Lin, C., Ji, L., 2021. Image filtering and labelling assistant (IFLA): expediting the analysis of data obtained from camera traps. Ecol. Inform. 64, 101355. https://doi.org/10.1016/j.ecoinf.2021.101355.
Zhu, X., Wang, Y., Dai, J., Yuan, L., Wei, Y., 2017. Flow-guided feature aggregation for video object detection. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp. 408-417. arXiv:1703.10025.
Zivkovic, Z., 2004. Improved adaptive Gaussian mixture model for background subtraction. In: International Conference on Pattern Recognition. IEEE. https://doi.org/10.1109/icpr.2004.1333992.
Zivkovic, Z., van der Heijden, F., 2006. Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recogn. Lett. 27, 773-780. https://doi.org/10.1016/j.patrec.2005.11.005.