£1.4m EPSRC Project: Internet of Silicon Retinas (IOSIRE) – Dr Yiannis Andreopoulos and Dr Miguel Rodrigues
This Engineering and Physical Sciences Research Council (EPSRC)-funded project will see Dr Yiannis Andreopoulos (PI) and Dr Miguel Rodrigues (Co-I) from the Department collaborate with academics from King's College London and Kingston University. Together they will explore how recently developed dynamic vision sensing (DVS) hardware can be combined with cloud-based analytics to achieve state-of-the-art results in image analysis, image super-resolution, and a range of other image and vision computing systems for Internet-of-Things (IoT) deployments over the next 10 to 20 years.
Current high-speed visual sensing relies on high frame-rate video capture, which incurs high latency, CPU load and energy consumption. Recent hardware designs of neuromorphic sensors, known as DVS cameras, are beginning to change the landscape of visual sensing for IoT and machine-to-machine (M2M) applications. DVS devices use asynchronous sensing hardware that mimics the low-level operation of the human retina: local illumination changes caused by movement are captured and streamed off-chip asynchronously. In this way, a DVS camera can capture fast motion events while requiring significantly lower power (e.g., 10-20 milliwatts instead of 150-200 milliwatts) and providing substantially faster sensing and reaction times than a regular shutter-based camera. For example, when the events are rendered as video frames, a DVS representation can achieve 700-2,000 frames per second (fps), whereas conventional surveillance cameras typically operate at 10-24 fps.

DVS cameras are now becoming available for R&D and commercial use, and applications have begun to emerge; as illustrated in Fig. 1, these include, but are not limited to, advanced vision systems for drones, robots and smart devices connected to cloud servers via a novel M2M communications paradigm. The major challenge addressed by the UCL researchers in this project is how to represent and compress the captured neuromorphic vision streams to allow the most efficient transmission and processing by a cloud-based back-end system.
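To make the "events rendered as video frames" idea concrete, the sketch below accumulates asynchronous DVS events into fixed-duration frames. The event tuple layout `(timestamp_us, x, y, polarity)` is an assumption mirroring the address-event representation commonly used by such sensors, not the project's own format; the 500-microsecond window corresponds to roughly 2,000 fps, the upper end of the rate quoted above.

```python
import numpy as np

def events_to_frames(events, width, height, window_us=500):
    """Accumulate asynchronous DVS events into dense frames.

    events: time-ordered iterable of (t_us, x, y, polarity) tuples,
            where polarity is +1 (brightness increase) or -1 (decrease).
    window_us: accumulation window in microseconds (500 us ~ 2000 fps).
    """
    frames = []
    frame = np.zeros((height, width), dtype=np.int16)
    window_end = None
    for t, x, y, p in events:
        if window_end is None:
            window_end = t + window_us
        while t >= window_end:  # event falls past the current window: close it
            frames.append(frame)
            frame = np.zeros((height, width), dtype=np.int16)
            window_end += window_us
        frame[y, x] += p        # signed event count per pixel
    frames.append(frame)        # flush the final (possibly partial) window
    return frames

# Example: three synthetic events, all within one 500 us window.
evts = [(0, 10, 5, +1), (120, 11, 5, +1), (130, 10, 5, -1)]
frames = events_to_frames(evts, width=128, height=128)
print(len(frames), frames[0][5, 10], frames[0][5, 11])  # → 1 0 1
```

Because events arrive only where the scene changes, most pixels of each frame stay zero; it is this sparsity that the project aims to exploit when compressing the streams for transmission to cloud back-ends.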
Project partners include the global technology firms Samsung, Ericsson and Thales, who are interested in exploring how such sensors could be incorporated into the next generation of smart devices and used in future M2M communications, as well as the semiconductor company MediaTek and the neuromorphic sensor specialists iniLabs and iniVation of Zurich, Switzerland.
Related Press Coverage:
Dr Andreopoulos was recently interviewed in The Engineer magazine about the project: https://www.theengineer.co.uk/project-looks-at-human-eye-to-sharpen-sight-of-robots-and-drones/
Fig 1. Dynamic vision sensing, M2M communications, processing and adaptation envisaged by IOSIRE.