2016
DOI: 10.1109/jiot.2015.2504622

EZ-VSN: An Open-Source and Flexible Framework for Visual Sensor Networks

Abstract: We present a complete, open-source framework for rapid experimentation with Visual Sensor Network (VSN) solutions. From the software point of view, we base our architecture on open-source and widely known C++ libraries to provide the basic image processing and networking primitives. The resulting system can be leveraged to create different types of VSNs, characterized by the presence of multiple cameras, relays and cooperator nodes, and can be run on any Linux-based hardware platform, such as the Beagl…

Cited by 13 publications (9 citation statements)
References 34 publications
“…They provide a baseline for "green routing protocols". Similar to ours, Bondi et al [9] provided a framework implemented on real VSN nodes. They advocate comparing analyze-then-compress (the distributed approach) versus compress-then-analyze (the centralized approach).…”
Section: Related Work
confidence: 99%
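The analyze-then-compress versus compress-then-analyze trade-off quoted above can be sketched as a simple payload comparison. This is an illustrative sketch only, not code or figures from the paper: the bit-rate and descriptor-size parameters are assumed placeholders.

```python
# Illustrative sketch (not from the paper): the two VSN paradigms differ in
# WHAT a camera node transmits, which drives its radio energy cost.

def compress_then_analyze(image_pixels: int, bits_per_pixel: float = 0.5) -> int:
    """CTA (centralized): the node compresses the full frame (e.g. JPEG)
    and transmits it; analysis happens at the sink. Returns bits sent.
    The 0.5 bits/pixel rate is an assumed placeholder."""
    return int(image_pixels * bits_per_pixel)

def analyze_then_compress(num_features: int, bits_per_descriptor: int = 256) -> int:
    """ATC (distributed): the node extracts local features on-board and
    transmits only the (compressed) descriptors. Returns bits sent.
    256 bits/descriptor matches a typical binary descriptor, assumed here."""
    return num_features * bits_per_descriptor

# A 320x240 frame, the resolution cited in one of the excerpts below:
cta_bits = compress_then_analyze(320 * 240)          # 38400 bits
atc_bits = analyze_then_compress(num_features=100)   # 25600 bits
print(cta_bits, atc_bits)
```

Under these assumed numbers ATC transmits less than CTA, which is the energy argument the citing papers make; in practice the crossover depends on image content, feature count, and the on-board processing energy.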
“…We deployed a VSN composed of 2 cameras on the roof of our university building and used them to monitor a portion of the parking lot below. The camera nodes are based on the design introduced in [37], using BeagleBone platforms mounting RadiumBoards cameras and wireless transceivers. The resolution of the images taken by the camera nodes is 320 × 240 pixels.…”
Section: Model Fitting
confidence: 99%
“…Each node in the sensor network is modeled on a real-life implementation of a camera node: in particular, we rely on a Linux-operated BeagleBone Black platform, coupled with an IEEE 802.15.4-compliant Memsic TelosB dongle and an ad-hoc camera board. The platform runs an open-source framework for VSNs capable of performing several processing tasks, including feature extraction and multi-view feature encoding [37]. In line with the chosen platform, in our simulation the MAC and PHY layers are compliant with the IEEE 802.15.4 specifications and the network layer runs IPv6 RPL.…”
Section: Experimental Evaluation
confidence: 99%
“…This goes against the always-on requirement of many IoT applications, where low sampling rates or wakeup delays directly impact application quality metrics. In the context of visual sensor networks, an energy-efficient computational framework consists of locally analyzing the huge amount of raw visual data before transmitting the extracted information [6]. A flexible and low-power hardware platform is required to implement such a paradigm.…”
Section: Introduction
confidence: 99%
“…In recent years, several smart cameras have been proposed to develop intelligent and autonomous systems [7]. Typically, such devices feature a power consumption of hundreds of mW, because of their high computational and bandwidth requirements [6], [8], [9]. Power-optimized solutions rely on vision chips that integrate focal-plane processing circuits, which enable a first stage of visual processing in a distributed and efficient way at the sensor level [10].…”
Section: Introduction
confidence: 99%