Prophesee’s free release


Prophesee, a developer of neuromorphic vision systems, has announced that the newest release of its Metavision Intelligence suite will be offered in its entirety for free, including all modules.

The suite is intended to deliver an accelerated path to exploring and implementing differentiated machine vision applications that leverage the performance and efficiency of event-based vision.

The Metavision suite, which Prophesee claims is the industry’s most comprehensive collection of software tools and code samples, will be available for free at every stage, from initial evaluation through commercial development to the release of market-ready products.

With this advanced toolkit, engineers will be able to develop computer vision applications on a PC for a wide range of markets, including industrial automation, Internet of Things (IoT), surveillance, mobile, medical, automotive and more.

The free modules in Metavision Intelligence 3.0 are available through C++ and Python APIs and include a comprehensive machine learning toolkit. The suite also offers a no-code option through the Studio tool, which lets users play back prerecorded datasets without owning an event camera; with a camera connected, they can stream or record events in seconds. In total, the suite consists of 95 algorithms, 67 code samples and 11 ready-to-use applications. The plug-and-play algorithms include high-speed counting, vibration monitoring, spatter monitoring, object tracking, optical flow, ultra-slow-motion, machine learning and more. Extensive documentation and a wide range of samples, organised by implementation level, incrementally introduce the concepts of event-based machine vision.
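For readers new to event-based vision: an event camera emits a sparse stream of per-pixel brightness changes rather than full frames. The sketch below, in plain NumPy, shows how such a stream is commonly represented and sliced by time window. The field names are an illustrative convention, not the Metavision SDK's actual data types.

```python
import numpy as np

# Hypothetical event record: pixel location (x, y), polarity p
# (1 = brightness increase, 0 = decrease) and timestamp t in
# microseconds. This mirrors the usual event-camera convention
# but is NOT the Metavision SDK's own type.
event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("p", np.uint8), ("t", np.uint64)])

events = np.array([(10, 20, 1, 1000),
                   (11, 20, 0, 1500),
                   (10, 21, 1, 2100)], dtype=event_dtype)

# Slice the stream into a 1 ms window, as a streaming pipeline would.
window = events[(events["t"] >= 1000) & (events["t"] < 2000)]
print(len(window))           # number of events in the window -> 2
print(window["p"].tolist())  # polarities within the window -> [1, 0]
```

Because the stream is just a structured array, standard NumPy filtering and slicing apply directly, which is one reason Python APIs are a natural fit for exploratory work with event data.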

The latest release includes enhancements to help speed up time to production, allowing developers to stream their first events within minutes, or even build their own event camera from scratch using the provided open-source camera plug-ins as a base.
Developers can now also port their work to Windows or Ubuntu operating systems. In addition, Metavision Intelligence 3.0 unlocks advanced sensor features, such as anti-flicker filtering and bias adjustment, by providing source code access to key sensor plug-ins.

The Metavision Studio tool has also enhanced the user experience, with improvements to the onboarding guidance, user interface (UI), and the region-of-interest (ROI) and bias set-up processes.

The core ML modules include an open-source event-to-video converter, as well as a video-to-event simulator. The event-to-video converter uses a pretrained neural network to build greyscale images from events, allowing users to apply their existing frame-based development resources to event-based data and build algorithms on top of it.
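Prophesee's converter relies on a pretrained network, but the underlying idea can be illustrated with the simplest possible baseline: accumulating signed events into a greyscale image. The NumPy sketch below is that naive baseline, not Prophesee's model; the function name and scaling constants are illustrative.

```python
import numpy as np

def events_to_frame(xs, ys, ps, height, width):
    """Naive event-to-image baseline: sum signed polarities per pixel,
    then rescale to an 8-bit greyscale frame around mid-grey.
    Prophesee's actual converter uses a pretrained neural network
    instead of this simple accumulation."""
    frame = np.zeros((height, width), dtype=np.float32)
    # Map polarity {0, 1} to signed contributions {-1, +1} and
    # accumulate them at each event's pixel.
    np.add.at(frame, (ys, xs), 2.0 * np.asarray(ps, dtype=np.float32) - 1.0)
    # Normalise to [0, 255] with 128 as the "no events" background.
    frame = np.clip(128.0 + 32.0 * frame, 0, 255)
    return frame.astype(np.uint8)

xs = np.array([2, 2, 3])
ys = np.array([1, 1, 1])
ps = np.array([1, 1, 0])  # two ON events at (2,1), one OFF at (3,1)
img = events_to_frame(xs, ys, ps, height=4, width=5)
print(img[1, 2], img[1, 3], img[0, 0])  # -> 192 96 128
```

A learned converter replaces this hand-tuned accumulation with a network that integrates events over time, which is why it can recover plausible greyscale detail that simple summation cannot.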

The video-to-event pipeline breaks down the barrier of data scarcity in the event-based domain by enabling the conversion of conventional frame-based datasets into event-based datasets.
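The principle behind such simulators can be sketched in a few lines: fire an event whenever a pixel's log intensity has changed by more than a contrast threshold since that pixel's last event. The toy implementation below illustrates only this core idea; Prophesee's pipeline is considerably more sophisticated (real simulators add temporal interpolation, refractory periods and sensor noise modelling), and the function name and threshold are illustrative.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Toy video-to-event conversion: emit an event when a pixel's
    log intensity changes by more than `threshold` since its last
    event. Returns a list of (t, y, x, polarity) tuples."""
    eps = 1e-6  # avoid log(0) on black pixels
    ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        logf = np.log(frame.astype(np.float64) + eps)
        diff = logf - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else 0  # 1 = brighter, 0 = darker
            events.append((t, int(y), int(x), pol))
            ref[y, x] = logf[y, x]  # reset reference after firing
    return events

f0 = np.full((2, 2), 100.0)
f1 = f0.copy()
f1[0, 0] = 200.0  # one pixel doubles in brightness between frames
evs = frames_to_events([f0, f1], timestamps=[0, 1000])
print(evs)  # -> [(1000, 0, 0, 1)]
```

Only the pixel whose log intensity crossed the threshold fires; unchanged pixels produce nothing, which is exactly the sparsity that makes converted event datasets useful for training event-based models.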