Collective Control over Sensitive Video Data using Secret Sharing

Full text can be found here:
P. K. Atrey, S. Alharthi, M. A. Hossain, A. AlGhamdi and A. El Saddik, "Collective control over sensitive video data using secret sharing," Springer International Journal of Multimedia Tools and Applications, July 2013.

About this research

Digital video is used extensively in many applications today. Sometimes a video constitutes a top secret for an organization, for example military footage, surveillance recordings, or corporate product designs, and may need to be shared among a group of people in a secure manner. Traditional data security methods such as encryption are prone to single-point attack: the secret can be revealed by obtaining the decryption key from any single person. In contrast, a secret sharing scheme provides collective control over the secrecy of information and is considered information-theoretically secure. In this research, we adopt a secret sharing based approach to provide collective control over a given sensitive video. We present three methods that utilize the spatial and temporal redundancy in videos in different ways, analyze their security, and compare their efficiency in terms of computation time and space through extensive experimentation.
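To illustrate the collective-control idea, the following is a minimal sketch of Shamir's (k, n) threshold secret sharing over a small prime field, the classical scheme such approaches build on. It shares a single byte; any k of the n shares reconstruct it, while fewer reveal nothing. This is an illustrative primitive only, not the paper's video-sharing method.

```python
import random

P = 257  # small prime field, large enough for one byte (0-255)

def split(secret, k, n):
    """Return n shares (x, y); any k of them recover `secret`."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            y = (y * x + c) % P
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123, k=3, n=5)
assert reconstruct(shares[:3]) == 123   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 123
```

Applying such a scheme naively to every pixel of every frame is expensive, which is exactly the cost the redundancy-aware methods below aim to reduce.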

Core idea

The core idea behind our approach is to analyze the repetition of data in video in both the spatial and temporal dimensions. Based on this, we propose three methods: (1) the Temporal Secret Video Sharing ("TemporalSVS") method, (2) the Spatio-Temporal Secret Video Sharing ("SpatioTemporalSVS") method, and (3) the Block Secret Video Sharing ("BlockSVS") method. The TemporalSVS method exploits pixel-wise temporal redundancy, while the SpatioTemporalSVS method exploits pixel-wise redundancy in both the spatial and temporal dimensions. Alternatively, the BlockSVS method uses block-wise redundancy in both dimensions. We attempt to minimize computation and produce shares smaller than the secret video, while preserving information-theoretic security. By information-theoretic security we mean that a scheme is secure even when the adversary has unlimited computing power, because the adversary cannot obtain enough information to recover the secret.

Data Set

* Users of this data set are expected to cite this website and the above-mentioned paper that resulted from this work. *


To analyze the performance of the proposed methods, we performed experiments on four videos, each with specific characteristics, as shown in Table 1. Representative frames from all four videos are shown in Figure 1. Brief descriptions of the four videos follow:
- The HSHT video exhibited high spatial and high temporal redundancy. It was recorded in the hallway of the Applied Computer Science department with the camera mounted on a tripod. Since the scene did not change, the video provided data redundancy in both the spatial and temporal dimensions. The Euclidean distances between blocks and between frames were low (4.92 and 0.66, respectively), confirming high spatial and temporal redundancy.
- The second video, HSLT, was recorded in an indoor garage with a camera installed in a moving car. It exhibited high spatial but low temporal redundancy; the increased Euclidean distance between frames (5.09) substantiated its low temporal redundancy.
- To record a video with low spatial and high temporal redundancy, i.e. the LSHT video, we installed a camera on a tripod in an office and recorded a scene with books of different colors in the background and a few people moving in front. A high Euclidean distance between blocks (12.99) and a small Euclidean distance between frames (1.43) verified the low spatial and high temporal redundancy of this video.
- The last video, LSLT, was recorded to have low spatial and low temporal redundancy, so it was shot outdoors from a car traveling on snowy roads. Note that the Euclidean distance between its blocks (6.29) was not as high as that of LSHT (12.99), but it was higher than those of HSHT (4.92) and HSLT (5.05). We attribute this mainly to the large amount of snow on the roads, which increased the number of white pixels in the frames.

Table 1
All videos were recorded at a resolution of 720 x 480 pixels and 11 frames per second. The values in brackets are the average Euclidean distances between 8 x 8 blocks within a frame (for spatial redundancy) and between consecutive frames (for temporal redundancy).
Secret video Number of frames Spatial redundancy Temporal redundancy
HSHT 1,096 High (4.92) High (0.66)
HSLT 1,157 High (5.05) Low (5.09)
LSHT 1,024 Low (12.99) High (1.43)
LSLT 1,183 Low (6.29) Low (5.54)
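The redundancy figures in Table 1 are average Euclidean distances; since the exact normalization is not spelled out on this page, the sketch below uses one plausible formulation as an assumption: mean pairwise distance between 8 x 8 blocks of a frame for spatial redundancy, and mean distance between consecutive frames for temporal redundancy. Lower values mean more redundancy.

```python
import math
from itertools import combinations

def euclid(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def blocks(frame, size=8):
    """Flatten each size x size block of a 2D frame into a vector."""
    h, w = len(frame), len(frame[0])
    return [[frame[r + i][c + j] for i in range(size) for j in range(size)]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def spatial_redundancy(frame):
    """Mean pairwise distance between blocks; lower = more redundancy."""
    pairs = list(combinations(blocks(frame), 2))
    return sum(euclid(a, b) for a, b in pairs) / len(pairs)

def temporal_redundancy(frames):
    """Mean distance between consecutive frames; lower = more redundancy."""
    dists = [euclid([p for row in f1 for p in row],
                    [p for row in f2 for p in row])
             for f1, f2 in zip(frames, frames[1:])]
    return sum(dists) / len(dists)

flat = [[0] * 16 for _ in range(16)]        # perfectly redundant frame
assert spatial_redundancy(flat) == 0.0      # identical blocks
assert temporal_redundancy([flat, flat]) == 0.0   # identical frames
```

Under this formulation, a static hallway scene such as HSHT would score low on both measures, while a cluttered bookshelf (LSHT) would score high spatially, matching the qualitative labels in Table 1.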

Figure 1: Representative frames from four videos: a HSHT b HSLT c LSHT d LSLT

The data set can be found below:

Video files:
HSHT.avi (13.1 MB)
HSLT.avi (22.4 MB)
LSHT.avi (22.0 MB)
LSLT.avi (43.7 MB)

Zipped files containing frames:
HSHT.7z (71.4 MB)
HSLT.7z (72.9 MB)
LSHT.7z (136.0 MB)
LSLT.7z (108.0 MB)


Any query can be directed to: Pradeep Atrey.

Acknowledgement: This research was supported by the National Plan for Science and Technology (NPST) program at King Saud University, Project Number 11-INF1830-02.