About

Hello. Welcome to the SYNS site.

From here, you can use the Browse menu item above to take a look at the dataset.

The Southampton-York Natural Scenes (SYNS) dataset consists of image and 3D range data measured at a diverse set of ~100 rural and urban locations. Each sample comprises co-registered panoramic HDR image and LiDAR range data, as well as stereoscopic image pairs.

The SYNS team

The project is an EPSRC-funded collaboration between Professor Wendy Adams, Dr. Erich Graf (Psychology) and Dr. Julian Leyland (Geography) of the University of Southampton (UK), and Professor James Elder of York University (Canada).

Postdoctoral researchers Dr. Arthur Lugtigheid and Dr. Alexander Muryy calibrated the equipment and were instrumental in collecting and preparing the data. Arthur Lugtigheid developed the website.

Sampling locations

We made use of the proprietary UKLand dataset, which segments the United Kingdom into category regions for areas as small as 50 m × 50 m. We used the portion of the UKLand data within a 40 km radius of the University of Southampton campus. This region contained 19 distinct scene categories (see ‘Built’ and ‘Natural’ categories on the map). With each category represented as a map overlay in ArcGIS, we selected random location points for each category, then worked our way down the list of points. If a selected location was physically inaccessible, or required permission that was not granted, we resampled the category.

Our indoor sampling locations were chosen opportunistically, and consist of rooms in buildings on the University of Southampton campus, as well as residences nearby. We have also included measurements taken in residential gardens.

Scene Capture

Each measurement took about 2.5 hours to complete. We took outdoor measurements when the weather was mainly dry.

Using the latitude-longitude coordinates, we chose the position closest to the coordinates that placed our tripod mount at least 1 m from large objects (trees, buildings, etc.). We placed three high-contrast targets in the scene to aid in post-processing and co-registration of the LiDAR and Spheron data.

Our three measurement devices (LiDAR, panoramic HDR camera, stereoscopic camera) were mounted sequentially on a tripod. Device nodal points were aligned to within ~3 mm at an average human eye height of 165 cm.

LiDAR

Each range scan captured a 360° × 135° field of view, with a maximum range of 120 m. The nominal angular accuracy (azimuth and elevation) of the Leica ScanStation P20 is 8 arcsec; the nominal depth accuracy varies from 0.4 mm to 9 mm at 100 m, depending upon distance, surface albedo and surface attitude.
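For intuition, the nominal 8 arcsec angular accuracy can be converted to a worst-case lateral position error at the scanner's maximum range. A quick back-of-the-envelope sketch (our illustration, not part of the SYNS pipeline):

```python
import math

def lateral_error(range_m: float, angular_accuracy_arcsec: float) -> float:
    """Worst-case lateral position error (metres) produced by a given
    angular error at a given range (small-angle geometry)."""
    angle_rad = math.radians(angular_accuracy_arcsec / 3600.0)
    return range_m * math.tan(angle_rad)

# At the 120 m maximum range, an 8 arcsec angular error corresponds to a
# lateral offset of under 5 mm -- comparable to the quoted depth accuracy.
print(f"{lateral_error(120.0, 8.0) * 1000:.2f} mm")  # -> 4.65 mm
```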

SpheroCam HDR

The SpheroCam rotates around a vertical axis through its nodal point, and captures a 360° × 180° image by combining thin vertical slices of image data captured through a Nikkor 16mm fish-eye lens and a narrow, vertical CCD sensor. The camera operates over a range of 26 f-stops.
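Since each f-stop doubles the captured luminance range, 26 stops correspond to a contrast ratio of 2^26. A one-line computation to make that concrete (our arithmetic, for illustration):

```python
import math

F_STOPS = 26                       # dynamic range quoted for the SpheroCam
contrast_ratio = 2 ** F_STOPS      # each stop doubles the luminance range
print(contrast_ratio)              # -> 67108864
print(round(F_STOPS * math.log10(2), 1))  # -> 7.8 (orders of magnitude)
```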

Stereoscopic camera

Stereo images were captured using two Nikon D5200 digital SLR cameras with AF Nikkor 35mm f/2D lenses, with a diagonal field of view of 44° at infinity. The cameras were mounted on a custom-built rig to achieve an interocular distance of 63 mm. Near-simultaneous shutter timing was ensured by a wired remote shutter release. Laboratory testing measured the maximum temporal offset between the two cameras to be 6 ms. Each panoramic scene consists of 18 stereo image pairs.

Data processing

LiDAR and Spheron measurements were each performed twice to allow correction of motion artifacts and estimation of reliability. The LiDAR data were automatically corrected: returns from larger moving foreground objects (e.g., pedestrians, cars) were removed from the first LiDAR scan and replaced with background returns from the second LiDAR scan. However, some LiDAR motion artifacts remain, particularly in foliage.
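The idea behind the two-scan correction can be sketched as follows (a minimal illustration under our own assumptions, not the actual SYNS correction code): where a return in the first scan is substantially nearer than the co-registered return in the second, it is likely a transient foreground object and can be replaced by the background return.

```python
def merge_scans(scan1, scan2, threshold_m=0.5):
    """Illustrative two-scan motion-artifact removal: scan1 and scan2 are
    co-registered lists of range values in metres. Where the first scan's
    return is more than threshold_m nearer than the second's, assume a
    moving foreground object and keep the background return instead."""
    return [r2 if (r2 - r1) > threshold_m else r1
            for r1, r2 in zip(scan1, scan2)]

# A pedestrian at 3 m crossed the first scan in front of a wall at 40 m:
print(merge_scans([10.0, 3.0, 50.0], [10.1, 40.0, 50.0]))
# -> [10.0, 40.0, 50.0]
```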

Since the Spheron sensor is also a scanning device, motion artifacts exist here as well. The current release does not include correction for these artifacts; we hope to post corrections soon.

Residual rotation and translation differences between range and image data were estimated and corrected for in post-processing. We achieved an alignment accuracy of ~1 arcmin (the LiDAR resolution is 2 arcmin/pixel; the Spheron resolution is 4 arcmin/pixel). Detailed notes on the co-registration of LiDAR and HDR data are provided here, with Matlab script located here.
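Those angular resolutions fix the pixel dimensions of the panoramas, which makes a quick sanity check possible (our arithmetic, assuming the fields of view and resolutions quoted above):

```python
def panorama_pixels(h_fov_deg, v_fov_deg, arcmin_per_pixel):
    """Image dimensions (width, height) implied by a field of view and a
    uniform angular resolution in arcmin per pixel."""
    return (round(h_fov_deg * 60 / arcmin_per_pixel),
            round(v_fov_deg * 60 / arcmin_per_pixel))

print(panorama_pixels(360, 135, 2))  # LiDAR panorama:   (10800, 4050)
print(panorama_pixels(360, 180, 4))  # Spheron panorama: (5400, 2700)
```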

Our HDR files are in the Radiance HDR format. One good option for manipulating HDR images is the pfstools package.
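For readers rolling their own loader, a Radiance (.hdr) file begins with a short plain-text header: a magic line, KEY=VALUE pairs, a blank line, then a resolution line. A minimal header-parsing sketch (function name is ours; the RLE-compressed RGBE pixel data that follows is best left to pfstools or a mature image library):

```python
def parse_radiance_header(data: bytes):
    """Parse the text header of a Radiance (.hdr) file and return the
    header variables plus the (height, width) of the image."""
    lines = data.split(b"\n")
    if not lines[0].startswith(b"#?"):
        raise ValueError("not a Radiance HDR file")
    info, i = {}, 1
    while lines[i].strip():                      # header ends at a blank line
        key, _, value = lines[i].partition(b"=")
        info[key.decode()] = value.decode()
        i += 1
    # Resolution line, e.g. b"-Y 2700 +X 5400" -> height 2700, width 5400
    tokens = lines[i + 1].split()
    return info, (int(tokens[1]), int(tokens[3]))

# Synthetic header for illustration:
header = b"#?RADIANCE\nFORMAT=32-bit_rle_rgbe\n\n-Y 2700 +X 5400\n"
info, (h, w) = parse_radiance_header(header)
print(info["FORMAT"], h, w)  # -> 32-bit_rle_rgbe 2700 5400
```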

We also provide co-registered Stereo and LiDAR data files. Details of our process for generating these files can be found here.

Camera colour characterization

Using the NERC/NCEO Field Spectroscopy Facility at the University of Edinburgh, we have characterized the colour spectra of the sensors of our three cameras. We gratefully acknowledge the assistance of Chris MacLellan and Alasdair MacArthur at the University of Edinburgh.

Details of the characterization process are given in the attached PDF file. Normalised camera response data are included in the linked text files for the Spheron HDR camera, and the Left and Right stereo cameras.

Citing our work

When citing the work, please use our 2016 Scientific Reports article:

Adams, W.J., Elder, J.H., Graf, E.W., Leyland, J., Lugtigheid, A.J., Muryy, A. (2016). The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude. Scientific Reports, 6, 35805.

All users should abide by our Terms of Use.