Robert Kooima

Highway Work Zone Construction Safety Research and Training

2016–2018

Supported by the Louisiana Transportation Research Center, this project integrates the LSU Driving Simulator in the Department of Civil Engineering with ongoing work in immersive visualization and head-mounted displays to help understand and improve construction zone safety. This is a collaboration with Dr. Sherif Ishak of Civil Engineering and Dr. Yimin Zhu of Construction Management.


OLED Virtual Reality Environment

2016–2017

Supported by LSU Student Technology Fees, with matching support from the Bert S. Turner Department of Construction Management, Dr. Yimin Zhu and I have created a stereoscopic, immersive, and interactive virtual reality environment using an array of 4K OLED displays. With design assistance by PhD student Sanaz Saeidi and fabrication by MMR Constructors, this system will be made available for student use in the renovated and expanded Patrick Taylor Hall.


Sensor Environment Imaging (SENSEI)

2016–2017

Supported by the NSF, the Development of the Sensor Environment Imaging (SENSEI) Instrument is a collaboration led by Maxine Brown of the Electronic Visualization Lab, with partners at the University of California San Diego, the University of Hawaiʻi at Mānoa, the Scripps Institution of Oceanography, Jackson State University, and LSU. The objective of the project is to develop an end-to-end system to capture, deliver, and display omni-stereoscopic video. My role is an extension of work done in high-resolution omni-stereoscopic still imagery to compress and display the data captured by this device.


Real-time Shadows for Gigapixel Displacement Maps

2015–2016

Kevin Cherry’s PhD dissertation work is an offshoot of my own work in scalable rendering. Much of my early work was motivated by visualization of the Moon. The shadows on the Moon, currently studied by scientists pursuing frozen water there, have been of interest since Galileo first used them to measure lunar mountains. The real-time generation of accurate shadows in lunar visualization is complicated by the vast scale of data describing the terrain. Kevin transformed the generation of physically correct soft shadows on arbitrary spherical terrains from a rendering problem to a data problem, implementing his approach as a straightforward add-on to existing visualization tools.
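
Kevin’s exact formulation is given in his dissertation; to illustrate how terrain shadowing can become a data problem, one classical approach is horizon mapping: precompute, for each terrain sample, the elevation angle of its horizon, so that the runtime shadow test reduces to a comparison with the sun’s elevation. Here is a minimal single-azimuth C++ sketch, with illustrative names and a flat grid standing in for a spherical terrain; it is not necessarily Kevin’s method.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Precompute, for each sample of a 1D height profile, the tangent of
    // the elevation angle to its horizon in the +x direction. The runtime
    // shadow test then reduces to a comparison against the sun's
    // elevation, turning a per-frame ray march into a table lookup.
    // (Illustrative sketch only: a real terrain needs many azimuths per
    // sample and a spherical parameterization.)
    std::vector<float> horizon_tangents(const std::vector<float>& height,
                                        float dx)
    {
        std::vector<float> tan_horizon(height.size(), 0.0f);

        for (std::size_t i = 0; i < height.size(); ++i)
        {
            float best = 0.0f;
            for (std::size_t j = i + 1; j < height.size(); ++j)
            {
                float rise = height[j] - height[i];
                float run  = dx * float(j - i);
                best = std::max(best, rise / run);
            }
            tan_horizon[i] = best;
        }
        return tan_horizon;
    }

    // At render time, a sample is lit if the sun stands above its horizon.
    bool lit(float tan_horizon, float sun_elevation_radians)
    {
        return std::tan(sun_elevation_radians) > tan_horizon;
    }

Storing such tables alongside the terrain is what shifts the cost from rendering time to preprocessing and data management.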


Multi-Pass Gaussian Contact-Hardening Soft Shadows

Fall 2014–Spring 2015

Kevin Cherry performed a comprehensive review of real-time shadow generation techniques in support of his dissertation work, complete with implementations of all relevant algorithms. Through this work, he discovered and implemented additional means of generating shadow penumbrae. We presented this work at GRAPP 2015 in Berlin, Germany.
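
As the title suggests, contact hardening means penumbra width grows with the separation between occluder and receiver, so shadows are sharp where surfaces touch and soft farther away. The PCSS-style estimate sketched below is the standard way to drive such a filter; a multi-pass Gaussian variant would feed this width into the radius of separable blur passes. The names are illustrative, and this is not necessarily the paper’s exact formulation.

    // PCSS-style penumbra estimate underlying contact hardening: width is
    // proportional to (receiver - blocker) / blocker, scaled by the light
    // size, so shadows sharpen where caster and receiver touch. A
    // multi-pass Gaussian variant would drive the sigma of separable blur
    // passes with this width. (Hypothetical names; not necessarily the
    // paper's formulation.)
    float penumbra_width(float avg_blocker_depth, // mean occluder depth near the sample
                         float receiver_depth,    // depth of the shaded sample
                         float light_size)        // projected size of the area light
    {
        return light_size * (receiver_depth - avg_blocker_depth)
                          / avg_blocker_depth;
    }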


Spherical Cube Map Rendering Library

Fall 2013–Spring 2014

The spherical cube map rendering library isolates the core of the Panoptic renderer in a standalone C++ class library. This library provides a heterogeneous data representation and rendering engine for the interactive display of spherical data sets at scales of hundreds of gigapixels and beyond. Full documentation is included.
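
To give a sense of how such a representation is addressed, the first step in any spherical cube map scheme is mapping a direction vector to a cube face and face coordinates. The sketch below follows the standard OpenGL cube map convention; the library’s actual page hierarchy atop each face is covered in its documentation.

    #include <cmath>

    // Map a direction (x, y, z) to a cube face index 0..5 and (s, t) face
    // coordinates in [0, 1], following the standard OpenGL cube map
    // convention. Face selection like this is the entry point of any
    // spherical cube map addressing scheme; the library's page hierarchy
    // atop each face is described in its documentation. (Sketch only.)
    void cube_face(float x, float y, float z, int& face, float& s, float& t)
    {
        float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
        float sc, tc, ma;

        if (ax >= ay && ax >= az) { ma = ax; face = (x > 0) ? 0 : 1;
                                    sc = (x > 0) ? -z :  z;  tc = -y; }
        else if (ay >= az)        { ma = ay; face = (y > 0) ? 2 : 3;
                                    sc =  x;  tc = (y > 0) ?  z : -z; }
        else                      { ma = az; face = (z > 0) ? 4 : 5;
                                    sc = (z > 0) ?  x : -x;  tc = -y; }

        s = 0.5f * (sc / ma + 1.0f);
        t = 0.5f * (tc / ma + 1.0f);
    }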


Stereoscopic Spherical Panorama Rendering

Fall 2011

In 2011 I developed an improved set of image processing tools and caching algorithms for efficiently managing and displaying very high resolution stereoscopic spherical panoramas. This work was demonstrated at CineGrid 2011 and SC 2011. In this YouTube video, Dan Sandin visits the Bluebonnet Swamp using the Calit2 StarCAVE.
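
The caching problem is the familiar one: a multi-gigapixel panorama cannot be resident in memory at once, so the viewer keeps only recently used tiles. Below is a minimal C++ sketch of a least-recently-used tile cache of the kind such a viewer needs; the names and structure are illustrative, not the actual implementation.

    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <vector>

    // A minimal least-recently-used cache of decoded image tiles: the
    // structure a gigapixel panorama viewer uses to keep only the most
    // recently viewed tiles resident. (Illustrative sketch, not the
    // actual implementation.)
    class tile_cache
    {
        struct tile { std::uint64_t key; std::vector<std::uint8_t> pixels; };

        std::size_t capacity;
        std::list<tile> lru;  // front = most recently used
        std::unordered_map<std::uint64_t, std::list<tile>::iterator> index;

    public:
        explicit tile_cache(std::size_t n) : capacity(n) { }

        // Return the tile's pixels, loading on a miss and evicting the
        // least recently used tile when the cache is full.
        const std::vector<std::uint8_t>& get(std::uint64_t key)
        {
            auto it = index.find(key);
            if (it != index.end())
            {
                lru.splice(lru.begin(), lru, it->second);  // mark recent
                return lru.front().pixels;
            }
            if (lru.size() >= capacity)
            {
                index.erase(lru.back().key);
                lru.pop_back();
            }
            lru.push_front({ key, load_tile(key) });
            index[key] = lru.begin();
            return lru.front().pixels;
        }

    private:
        // Stand-in for tile I/O: decode the tile with the given key.
        static std::vector<std::uint8_t> load_tile(std::uint64_t)
        {
            return std::vector<std::uint8_t>(256 * 256 * 3);  // blank RGB tile
        }
    };

A key would typically encode a tile’s level, row, and column within the image pyramid.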


Orbiter: Moonwall Mk II

Spring 2012

The original Moonwall installation used the Tellurion renderer. In 2012 a new Moonwall implementation, based on the Panoview renderer, was created. The new renderer allows significantly larger data sets to be displayed with increased efficiency. A YouTube video titled “LRO and the Real-time 3D Moon” demonstrates its capabilities. Another video, “A Lazy Orbit of the Moon,” uses this technology to provide a relaxing close-up look at the Moon. Here is a gallery of screenshots.


Stereoscopic Spherical Panorama Capture

Fall 2011

In support of my work in high-performance large-scale spherical data rendering, I’ve acquired a GigaPan EPIC Pro and a stereoscopic camera system developed by Dick Ainsworth. With these, I’ve captured a number of stereoscopic spherical panoramas at sites around Louisiana. Here is a catalog of these images, with full-resolution downloads.


A Multi-viewer Tiled Autostereoscopic Virtual Reality Display

2010

This paper documents the development of the Autostereo Interleaver and its application to a large-scale multi-user autostereoscopic display. The primary contributions of this work include the use of linescreen shift to “synchronize” the lenticulars of a large number of autostereo displays, causing them to behave in concert as a single large display, and the application of user tracking to mitigate some of the issues in autostereo display. This work was presented at ACM VRST 2010 in Hong Kong. The KAUST REVE is one such display, described in this article on the future of the CAVE.
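
The linescreen shift rests on the fact that a lenticular pattern is periodic: offsetting each panel’s interleaving by the fractional position of that panel within one global linescreen pitch brings every panel into a single wall-spanning pattern. Below is a sketch of that phase computation, with hypothetical names and one-dimensional geometry; an illustration of the idea rather than the paper’s formulation.

    #include <cmath>

    // Per-panel linescreen phase: the fraction of one linescreen pitch by
    // which a panel's left edge is offset within a single wall-spanning
    // pattern. Shifting each panel's interleaving by this phase aligns
    // the lenticulars so the tiled wall behaves as one display.
    // (Hypothetical names and 1D geometry; an illustration of the idea,
    // not the paper's formulation.)
    float linescreen_phase(float panel_left_mm, // panel's left edge in wall coordinates
                           float pitch_mm)      // linescreen pitch
    {
        float cycles = panel_left_mm / pitch_mm;
        return cycles - std::floor(cycles);     // fractional part in [0, 1)
    }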


TacTile Multitouch Table

2008–2009

One of the last projects that I contributed to during my time at EVL was the creation of the TacTile Multi-touch LCD table. The TacTile works on the FTIR (frustrated total internal reflection) principle, but differs from contemporary FTIR displays in its use of a high-definition LCD instead of a projector, and in its use of multiple overlapping infrared sensing cameras for increased resolution. The EVL prototype was demonstrated at SC’08 in Austin. I constructed and installed a second TacTile at CCT / LSU and used it for two semesters of CSC 4263 Video Game Design.


Tellurion

2007–2011

Tellurion implements a highly scalable real-time planetary terrain composition algorithm. A GPGPU process tessellates terrain geometry in real time, and transitions smoothly from planet scale down to sub-meter resolution. Tellurion enables the investigation of a class of generalized terrain composition operators that merge data of widely disparate resolution, projection, and coverage on the fly. These composition operations are uniformly applicable to both terrain height maps and surface maps. The cluster-parallel renderer is supported by a multi-threaded data paging mechanism that performs view-dependent loading of data sources of arbitrary size. The system has been demonstrated displaying 115 GB of data in real time, including height data covering the U.S. at a resolution of 30 meters. See the events page for a listing of several showings. Tellurion was my Ph.D. work. See my dissertation, Planetary-scale Terrain Composition, for detailed coverage, or this YouTube video for an overview. I successfully defended on October 23, 2008, and a paper summarizing the technology was subsequently published in IEEE Transactions on Visualization and Computer Graphics.
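
To give a sense of what a composition operator does, the sketch below blends overlapping sources by a per-source coverage weight, and applies identically to height and surface values. The weighted-average form and the names are illustrative assumptions, not the dissertation’s actual operator algebra.

    #include <vector>

    // One contribution to a terrain sample: a value (height, or one
    // surface color channel) and a weight encoding the source's coverage
    // at this location. A composition operator reduces all overlapping
    // contributions to a single value.
    struct sample { float value; float weight; };

    // Weighted-average composition: sources of disparate resolution and
    // coverage blend smoothly where they overlap, and a source with zero
    // weight (no coverage) drops out. The same operator applies
    // uniformly to height maps and surface maps. (Illustrative sketch;
    // the dissertation defines the actual operator algebra.)
    float compose(const std::vector<sample>& contributions)
    {
        float num = 0.0f, den = 0.0f;
        for (const sample& s : contributions)
        {
            num += s.weight * s.value;
            den += s.weight;
        }
        return (den > 0.0f) ? num / den : 0.0f;
    }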


A GPU Sub-pixel Algorithm for Autostereoscopic Virtual Reality

Spring 2007

The Varrier Combiner is a GPU-based algorithm performing real-time autostereoscopic sub-pixel spatial multiplexing. Such an algorithm is necessary for the correct function of parallax barrier displays such as the Varrier. Prior to this work, autostereo interleaving was an expensive process that placed a heavy performance burden upon all autostereo applications. By expressing the problem in terms of GLSL vertex and pixel shading, the computational expense is moved to the GPU, eliminating the performance degradation entirely. The Varrier Combiner was presented at IEEE VR 2007 and published in the conference proceedings. A C module implementing the algorithm may be used to port OpenGL applications to the Varrier.
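
At the heart of the algorithm is a per-subpixel visibility test: project the subpixel through the tracked eye position onto the barrier plane, and light it only if the projection lands within a transparent stripe. The Combiner evaluates this in GLSL; the C++ below is an illustrative CPU reference with simplified two-dimensional geometry and hypothetical parameter names.

    #include <cmath>

    // Parallax-barrier visibility test at the heart of autostereo
    // sub-pixel multiplexing: project a subpixel through the tracked eye
    // position onto the barrier plane, and light the subpixel only if
    // the projection falls within a transparent stripe. Evaluating this
    // for the left and right eye assigns each subpixel to one view.
    // (CPU reference with simplified 2D geometry and hypothetical
    // parameter names; the Combiner itself runs in GLSL.)
    bool subpixel_visible(float sub_x,     // subpixel x on the screen plane (mm)
                          float eye_x,     // tracked eye x (mm)
                          float eye_z,     // eye distance from the screen (mm)
                          float barrier_z, // barrier distance from the screen (mm)
                          float pitch,     // barrier stripe pitch (mm)
                          float duty)      // transparent fraction of each pitch
    {
        // Intersect the eye-to-subpixel ray with the barrier plane.
        float hit_x = sub_x + (eye_x - sub_x) * barrier_z / eye_z;

        // Position within the current stripe period, in [0, 1).
        float phase = hit_x / pitch - std::floor(hit_x / pitch);

        return phase < duty;
    }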


Varrier

Spring 2005–2010

The Varrier autostereoscopic virtual reality system is a high-resolution, real-time, parallax-barrier 3D display. It enables stereoscopic viewing without the need for 3D glasses. The Varrier project was started by EVL co-founder Dan Sandin in 2001 and included EVL graduate students Todd Margolis, Tom Peterka, Jinghua Ge, Javier Girado, and me. We designed and built a 35-panel display, a 65-panel display, a 6-panel display, two Personal Varrier displays, and a 2-panel Personal Varrier. This technology has produced a SIGGRAPH paper, an IEEE VR paper, and a SPIE paper summarizing the results over the history of the project. The system is described in this YouTube video.


kooima@csc.lsu.edu