
Conversation

@IanMaquignaz
Contributor

Adds support for generating environment maps from images captured with a calibrated omnidirectional camera, per OCamCalib: Omnidirectional Camera Calibration Toolbox for Matlab.

Example Calibration: 20150909_144054_stack.meta.xml

```xml
<?xml version="1.0" encoding="utf-8"?>
<document version="1">
   <data file="/dev/shm/dataCreateHDR-1440544d7f310d09054dadae12f1fa0bca017b/stack.exr" format="Omnidirectional"/>
   <calibrationModel c="0.99981" d="0.00024371" e="3.7633e-06" height="3840" width="5760" xc="1904.2826" yc="2895.3481">
      <ss s="-1283.8735"/>
      <ss s="0"/>
      <ss s="0.00035359"/>
      <ss s="-1.2974e-07"/>
      <ss s="6.7764e-11"/>
   </calibrationModel>
   <date day="9" hour="14" minute="40" month="9" second="55.7143" utc="-4" year="2015"/>
   <exposure EV="15"/>
</document>
```

The above calibration is loaded into a dictionary via added functionality in envmap/xmlhelper.py.
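For illustration, the essentials of that parsing step can be sketched with the standard library (a minimal sketch assuming the XML layout shown above; `parse_ocam_calibration` is a hypothetical helper name, not necessarily what envmap/xmlhelper.py uses internally):

```python
import xml.etree.ElementTree as ET

def parse_ocam_calibration(path):
    """Parse an OCamCalib calibration XML into a flat dict (sketch)."""
    root = ET.parse(path).getroot()
    model = root.find('calibrationModel')
    # Scalar attributes: c, d, e, height, width, xc, yc
    calib = {k: float(v) for k, v in model.attrib.items()}
    # Polynomial coefficients ss[0..N] of the OCamCalib model
    calib['ss'] = [float(s.get('s')) for s in model.findall('ss')]
    return calib
```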

To support the new function envmap.EnvironmentMap.from_omnicam(...), two new transformations, world2ocam(...) and ocam2world(...), have been added to envmap/projections.py. To ensure conformity between the coordinate systems, a test case has been added to test/test_projections.py.
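For background, the OCamCalib back-projection that ocam2world implements follows Scaramuzza's model: undo the affine distortion around the image centre (xc, yc), then evaluate the polynomial ss at the radial distance to recover the axial component of the ray. A minimal sketch under those standard conventions (`ocam2world_sketch` is a hypothetical name; axis ordering and signs in the actual envmap/projections.py may differ):

```python
import numpy as np

def ocam2world_sketch(u_px, v_px, calib):
    """Back-project pixel coordinates to unit ray directions (sketch of the
    OCamCalib model; the library's conventions may differ)."""
    c, d, e = calib['c'], calib['d'], calib['e']
    xc, yc = calib['xc'], calib['yc']
    ss = np.asarray(calib['ss'])
    # Undo the affine distortion A = [[c, d], [e, 1]] around the centre
    A_inv = np.linalg.inv(np.array([[c, d], [e, 1.0]]))
    p = A_inv @ np.stack([u_px - xc, v_px - yc])
    rho = np.hypot(p[0], p[1])
    # f(rho) = sum_i ss[i] * rho**i gives the axial component of the ray
    z = np.polynomial.polynomial.polyval(rho, ss)
    dirs = np.stack([p[0], p[1], z])
    return dirs / np.linalg.norm(dirs, axis=0)
```

With the calibration above, the centre pixel (xc, yc) maps to the optical axis (negative z in OCamCalib's convention, since ss[0] < 0).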

Example

Input image:

Get a latlong environment map:

```python
e = envmap.EnvironmentMap.from_omnicam(im_path, targetDim=256, targetFormat='latlong')
```

World XYZ coordinates to omnidirectional camera image:

Known issues:

  • Computationally expensive. If needed for batch operations, we strongly suggest looking into OpenCV's `remap`.

@IanMaquignaz
Contributor Author

If you want to project back to the omnidirectional camera, this is the code:

```python
import numpy as np
import envmap
from envmap.xmlhelper import EnvmapXMLParser
from scipy.ndimage import map_coordinates

# im_path_meta is the path to the .meta.xml calibration file
metadata = EnvmapXMLParser(im_path_meta)
OcamCalib_ = metadata.get_calibration()

width_cols = int(OcamCalib_['width'])
height_rows = int(OcamCalib_['height'])

# Define an image of the coordinate system (one color per cube face)
sz = 4096
e = envmap.EnvironmentMap(sz, 'cube', channels=3)
e.data[:sz//4,:,:] = [0, 1, 0]                  # +Y
e.data[sz//4:int(0.5*sz),:,:] = [1, 0, 1]       # -Y
e.data[:,int(0.5*sz):,:] = [1, 0, 0]            # +X
e.data[:,:int(0.25*sz),:] = [0, 1, 1]           # -X
e.data[int(3/4*sz):,:,:] = [0, 0, 1]            # +Z
e.data[int(0.5*sz):int(3/4*sz),:,:] = [1, 1, 0] # -Z
e.convertTo('skyangular')

# For each pixel of the omnidirectional image, find its world direction...
u, v = np.meshgrid(np.linspace(0, 1, width_cols), np.linspace(0, 1, height_rows))
dx, dy, dz, valid = envmap.projections.ocam2world(v, u, OcamCalib_)

# ...and where that direction lands in the environment map
u, v = e.world2image(dx, dy, dz)

# Interpolate
# Repeat the first and last rows/columns for interpolation purposes
h, w, d = e.data.shape
source = np.empty((h + 2, w + 2, d))
source[1:-1, 1:-1] = e.data
source[0,1:-1] = e.data[0,:]; source[0,0] = e.data[0,0]; source[0,-1] = e.data[0,-1]
source[-1,1:-1] = e.data[-1,:]; source[-1,0] = e.data[-1,0]; source[-1,-1] = e.data[-1,-1]
source[1:-1,0] = e.data[:,0]
source[1:-1,-1] = e.data[:,-1]

# Offset the coordinates to compensate for the one-pixel padding
u += 0.5/e.data.shape[1]
v += 0.5/e.data.shape[0]
target = np.vstack((v.flatten()*e.data.shape[0], u.flatten()*e.data.shape[1]))

order = 1  # bilinear interpolation
data = np.zeros((height_rows, width_cols, d))
for c in range(d):
    # Assign the result rather than passing a non-contiguous slice as
    # `output=`, which would silently write into a temporary copy
    data[:,:,c] = map_coordinates(source[:,:,c], target, cval=np.nan,
                                  order=order, prefilter=False).reshape(height_rows, width_cols)

# Result is in data
```

This function was not added to the library, as it does not seem to have general applicability.

@IanMaquignaz IanMaquignaz force-pushed the omnidirectional_camera branch from 7d21b91 to 1837e8f Compare June 22, 2023 03:37