Proceedings of the 17th ACM International Conference on Multimedia 2009
DOI: 10.1145/1631272.1631417

Compressed domain spatial adaptation for H.264 video

Abstract: In this paper, we present a metadata-based compressed-domain spatial adaptation scheme for H.264/AVC video. We have enhanced the H.264/AVC encoder with our proposed adaptation strategies in order to reduce video size by cropping individual frames in an intermediary node prior to transmitting that video to heterogeneous devices. In this regard, we exploit the sliced architecture of the video frames within the first version of the H.264/AVC specification and devise different slicing strategies. The compressed-dom…

Cited by 2 publications (3 citation statements)
References 6 publications

“…This is done by parsing the adapted metadata and then discarding those parts from the original bitstream that are missing in the adapted/transformed metadata. As discussed in [24], for spatial adaptation, it is preferable to have a smaller number of slices for each frame. The reason is that if we increase the number of slices, then it will eventually increase the overhead due to the added number of slice-headers.…”
Section: <Dia> (mentioning; confidence: 99%)
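
To make the adaptation step quoted above concrete, here is a minimal sketch in Python (not the authors' implementation; type and function names are illustrative) of copying into the output only those byte ranges that survive in the adapted/transformed metadata, so that slices removed from the description never reach the adapted bitstream.

from dataclasses import dataclass
from typing import List

@dataclass
class GBSDUnit:
    # One described bitstream unit (e.g. a slice NAL unit):
    # byte offset in the original bitstream and size in bytes.
    start: int
    length: int

def adapt_bitstream(original: bytes, retained_units: List[GBSDUnit]) -> bytes:
    # Keep only the units still listed in the adapted metadata;
    # everything dropped from the description is skipped.
    out = bytearray()
    for unit in retained_units:
        out += original[unit.start:unit.start + unit.length]
    return bytes(out)
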
“…For temporal adaptation, we drop frames from the compressed bitstream using a frame-skip pattern [1], and for spatial adaptation, we drop slices outside of the region of interest (ROI) [24]. To implement our approach, we used gBSDUnitType and introduced gBSDFrameUnitType and gBSDSliceUnitType to describe each frame and the slices within each frame, as shown previously in Box I. gBSDFrameUnitType consists of the frame number, frame start, frame type, and length of each frame for temporal adaptation.…”
Section: Processing Video Feed (mentioning; confidence: 99%)
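
The following sketch illustrates the two adaptation rules in the quotation above, under the assumption that each frame description carries the gBSDFrameUnitType fields listed there (frame number, frame start, frame type, length) together with its slice descriptions and an ROI flag. This is not the schema from Box I; all names are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SliceDesc:
    start: int          # byte offset of the slice in the bitstream
    length: int         # slice size in bytes
    inside_roi: bool    # True if the slice lies inside the region of interest

@dataclass
class FrameDesc:
    number: int         # frame number
    start: int          # frame start (byte offset)
    frame_type: str     # e.g. "I", "P", "B"
    length: int         # frame length in bytes
    slices: List[SliceDesc] = field(default_factory=list)

def temporal_adapt(frames: List[FrameDesc], keep_every: int) -> List[FrameDesc]:
    # Frame-skip pattern: keep every keep_every-th frame, drop the rest.
    return [f for f in frames if f.number % keep_every == 0]

def spatial_adapt(frames: List[FrameDesc]) -> List[FrameDesc]:
    # Spatial adaptation: within each kept frame, drop slices outside the ROI.
    for f in frames:
        f.slices = [s for s in f.slices if s.inside_roi]
    return frames
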