First off, watch my 360-degree Video of the World Record Dodgeball Game at the U of A
I’ve seen a few 360-degree videos out there, but not as many as you’d think considering how freaking cool they are.
Since 360-degree video is pretty uncharted territory in the photojournalism world, I absolutely had to take on the challenge.
To shoot my 360-degree dodgeball video I used four GoPro Hero HD cameras in 1280×960 mode, mounted vertically. That gives enough overlap for a full 360-degree view, and the cameras are nice and small and light. Since the cameras shoot at 30 frames per second (actually 29.97), you can think of the footage as 30 still pictures per second that can be stitched together into panoramas.
The short version of this story is that I shot with four GoPros, extracted still images from the video, stitched the stills into panoramas, then recombined the panoramas back into video.
For the much more detailed and nerdy answer read on….
I got tips for arranging the cameras properly at diy-streetview.org
I simply used a plastic leg from a table that was the same width as the naked GoPro cameras.
I used Gaffers tape and a lot of elastics to hold the cameras in place.
In the future I may build a proper aluminum box for everything.
Setting up a fifth GoPro camera in the catwalk to be used for an overhead view for the livestream of the game.
I use a Telus Aircard plugged into a Cradlepoint CTR-379 wireless router for internet access during livestreams.
Here was my shooting process.
Hit record on my Olympus LS-10 PCM recorder. Say “scene one” out loud.
Hit Record on Camera 1. Say “camera one” out loud.
Hit Record on Camera 2. Say “camera two” out loud.
Hit Record on Camera 3. Say “camera three” out loud.
Hit Record on Camera 4. Say “camera four” out loud.
Now that everything is recording, I clap my hands really fast or yelp really loudly so that I have a sharp audio cue I can use to sync all the cameras.
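You could even find that clap automatically once the audio is on disk. Here’s a minimal Python sketch (not part of my actual workflow, where I lined things up by ear in Final Cut) that scans a 16-bit mono WAV for its loudest sample and converts it to a video frame number:

```python
import wave
import array

def clap_frame(wav_path, fps=29.97):
    """Return the video frame index of the loudest sample (the clap/yelp).

    Assumes a 16-bit mono WAV file, like the audio you can export
    from each camera's clip.
    """
    with wave.open(wav_path, "rb") as w:
        rate = w.getframerate()
        samples = array.array("h", w.readframes(w.getnframes()))
    # Index of the loudest sample, i.e. the sharp audio spike
    loudest = max(range(len(samples)), key=lambda i: abs(samples[i]))
    return int(loudest / rate * fps)  # sample index -> frame number
```

Run it on the audio from each camera and the difference in frame numbers tells you how far to slip each clip in the timeline.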
Some people say “You’re crazy for putting your cameras in a dodgeball game like that!”
I say: it’s not about the camera. It’s about the end result. A camera is a tool, like a hammer. If your hammer breaks, you get it fixed.
Never let your camera get in the way of a good photo.
As soon as the game ended I ingested all my footage into my MacBook Pro. It’s always important to get video up as fast as possible if you want to get a lot of views.
I just selected the first 60 seconds of the game and plunked it into Final Cut Pro. I created a large canvas and lined up the different cameras so that they overlapped a bit.
There would be very noticeable seams between the videos, but I knew people wouldn’t mind the seams if they got to see the video ASAP. It took an hour to render the 60 seconds of video in Final Cut Pro and another hour to export it as FLV. The game ended around 1:30pm and I had a quick and dirty 60-second version of the panorama up on edmontonjournal.com before the 6:00pm news on TV! By comparison, I think last year’s video went up at about the same time.
First-year NAIT photography student Nathan Smith was doing a ride-along with me that day and he was a HUGE help! He also shot all these awesome photos of me. Thanks!
Okay now for the high quality version with properly stitched images.
For post-processing I created a new timeline in Final Cut Pro 7 using the Apple Intermediate Codec at 3840×1280.
Since the cameras are mounted vertically, each one records 960×1280 video, so 4 × 960 = 3840 pixels wide.
I find my audio sync point on each camera and set it to be the in-point for the video. I drag the video from each camera into my timeline and line the clips up so that all the audio sync points match.
Once my video and audio are all synced, I select each clip and go to File → Export → Export Using QuickTime Conversion → Image Sequence.
Final Cut Pro 7 extracts a JPEG still image for every frame of video. Each frame is about 1.2MB, and with four cameras running at 29.97 fps you are capturing about 120 frames per second.
That works out to 8.6GB of stills for each minute of video you shoot. Or 520GB per hour.
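If you want to check my math, the storage numbers work out like this:

```python
fps = 29.97        # GoPro frame rate
cameras = 4
frame_mb = 1.2     # size of one extracted JPEG still

stills_per_sec = fps * cameras                      # ~120 stills per second of footage
gb_per_min = stills_per_sec * 60 * frame_mb / 1000
print(round(gb_per_min, 1))    # 8.6 (GB per minute)
print(round(gb_per_min * 60))  # 518 (GB per hour, roughly the 520 I quoted)
```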
Since there are four cameras each “frame” of video is actually four pictures which need to be stitched together into a single panorama.
I organize all the images using Photo Mechanic and batch name them 0001a, 0001b, 0001c, 0001d, 0002a, 0002b, 0002c, 0002d, etc.
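Photo Mechanic handled the renaming for me, but a few lines of Python could do the same batch naming. This is just a sketch, and the folder layout it assumes (one directory of extracted stills per camera, named cam_a through cam_d) is hypothetical:

```python
import os

def rename_stills(root, cameras=("a", "b", "c", "d")):
    """Flatten cam_a/, cam_b/, ... into 0001a.jpg, 0001b.jpg, 0002a.jpg, ..."""
    for cam in cameras:
        folder = os.path.join(root, f"cam_{cam}")
        # Sorted listing keeps the stills in frame order
        for i, name in enumerate(sorted(os.listdir(folder)), start=1):
            ext = os.path.splitext(name)[1]
            os.rename(os.path.join(folder, name),
                      os.path.join(root, f"{i:04d}{cam}{ext}"))
```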
Then I used PTgui Pro to stitch the stills into equirectangular panoramas.
PTgui Pro has a great batch process where you can set up a template from your first panorama and it will then auto-stitch the rest of the panoramas in file order. This meant that (0001a, 0001b, 0001c, 0001d) → Panorama1.jpg, (0002a, 0002b, 0002c, 0002d) → Panorama2.jpg, etc.
I stitched them at the highest resolution, so each panorama is 3561×1308 pixels, about 5MB per panorama. Between the stills and the panoramas, you are now at about 18GB per minute of video, or about a terabyte per hour.
This process took the longest. I had three MacBook Pro laptops and my home server all going at the same time. The laptops took around 12 seconds per panorama to stitch.
If you do the math, that works out to six hours to stitch one minute’s worth of panoramas!
I basically had all four machines crunching for 24 hours straight to make all the panoramas.
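Here is the stitching-time math in one place, using the numbers above:

```python
fps = 29.97
secs_per_pano = 12   # stitching time per panorama on one laptop
machines = 4         # three MacBook Pros plus my home server
minutes_shot = 17

panos_per_min = fps * 60                          # ~1798 panoramas per minute of video
hours_per_min = panos_per_min * secs_per_pano / 3600
print(round(hours_per_min, 1))  # 6.0 hours to stitch one minute, on one machine
total_hours = minutes_shot * hours_per_min / machines
print(round(total_hours))       # 25 hours for 17 minutes split across four machines
```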
Once the tens of thousands of panoramas were stitched, I used QuickTime Pro (File → Open Image Sequence, at 29.97 fps) to open all the still panorama images as a video. I then exported the video as .mov files in the Apple Intermediate Codec at 3561×1308, around 280 Mbit/sec.
I then created a new sequence in Final Cut Pro 7 with the same settings and dragged back in the .mov files and synced them with the .wav audio from my Olympus LS-10.
I chose about 17 minutes of footage in total to convert to panoramas, then cut that down to the best five minutes and exported it in full-quality Apple Intermediate Codec.
I then used Adobe Flash Video Encoder to convert and downsize my video to FLV: 2722×1000, On2 VP6, 2,000 kbps video, 96 kbps audio, which I find to be a good balance of quality and file size. It took about eight hours for my 2.6GHz MacBook Pro to compress five minutes of video into 2722×1000 On2 VP6 Flash video.
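Those bitrate settings also pin down roughly how big the finished FLV will be (the real file drifts a little with encoder overhead):

```python
video_kbps = 2000
audio_kbps = 96
minutes = 5

kbits = (video_kbps + audio_kbps) * minutes * 60  # total kilobits of A/V data
size_mb = kbits / 8 / 1000                        # kilobits -> megabytes
print(round(size_mb))  # 79 (MB, give or take encoder overhead)
```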
Here is my puppy Mr. Woofertons napping while I wait for my video to compress.
Once the video is done compressing into FLV, I use KrPano as the Flash panorama player to display the panoramic video as a 360-degree video.
It’s THAT easy!
I actually did this same process for my Murder of Crows time lapse last year but this was way more intense.
Next time I do this, though, I will wire the GoPros together so that I can trigger them all at the same time. My Olympus LS-10 has a remote trigger port too, so I should be able to trigger all four cameras and my audio recorder at once, which would save the time spent syncing the videos in Final Cut Pro.
There may also be a way to get KrPano to play .mp4 instead of .flv, so I could use an Elgato turbo.264 HD to speed up exporting the final video.
You could also write a few simple AppleScripts to speed up the file renaming and automate QuickTime Pro. That could eliminate the need for Photo Mechanic and manually moving files around.
What did all this cost?
Four GoPro HDs would be 4 × $300 = $1,200
Final Cut Pro is $1,000
QuickTime Pro is $30
Photo Mechanic is $150
PTgui Pro is $210
Adobe Flash is $700
KrPano is $150
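Adding up the shopping list above:

```python
costs = {
    "4 x GoPro HD": 4 * 300,
    "Final Cut Pro": 1000,
    "QuickTime Pro": 30,
    "Photo Mechanic": 150,
    "PTgui Pro": 210,
    "Adobe Flash": 700,
    "KrPano": 150,
}
print(sum(costs.values()))  # 3440
```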
That’s cheaper than a $6,000 Ladybug camera, with a better field of view and higher resolution than a Pano Pro mirror, though a Pano Pro would be much, much easier to use.
As crazy complicated as this may sound, I wouldn’t be surprised if whatever smartphone we all use in a couple of years will do this with a 99-cent app.
What I love about 360-degree video is that almost everyone who sees it is blown away. I love how it opens your mind to new and exciting ways to tell stories.