Stitching Multi-Camera 360 Video: An Open-Source Workflow
Definitely not usable in production environments
In brief:
1. Unpack video into images
ffmpeg -i yourvideo.mp4 filename%05d.jpg
2. Stitch each frame with Hugin
2.1 Create project file
pto_gen -o prj/project.pto *.jpg
2.2 Find control points
cpfind --multirow -o prj/project.pto prj/project.pto
2.3 Remove control points on clouds (Celeste)
celeste_standalone -i prj/project.pto -o prj/project.pto
2.4 Clean the control points
cpclean -o prj/project.pto prj/project.pto
2.5 Optimise positions and geometry
autooptimiser -a -l -s -o prj/project.pto prj/project.pto
2.6 Stitch a frame
hugin_executor --stitching --prefix=prefix prj/project.pto
3. Repack the stitched images into a video
ffmpeg -r *framerate* -f image2 -s *v_width*x*v_height* -i filename%05d.jpg -vcodec libx264 -crf 25 -pix_fmt yuv420p video.mp4
4. Edit with your choice of video editor
General Setup
We’re going to do this in the Windows Subsystem for Linux, or as I like to call it, GNU/NT. It should work on any Debian-based system, as well as macOS.
Dependencies
We’re only going to need ffmpeg and Hugin.
sudo apt-get install ffmpeg
sudo apt-get install hugin
Step 1: Extract Frames and Audio
The trick is to feed the stitching algorithm the right groups of pictures: each stitched frame needs one image from every camera, all captured at the same instant. For our example, we’re going to use 8 video streams from 8 GoPro Sessions arranged in a circle, using this mount I designed. A sketch of the extraction step follows.
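As a minimal sketch, assuming the eight source files are named cam1.mp4 through cam8.mp4 (hypothetical names), we can extract every frame of every camera, plus the audio track from one of them:

import os
import subprocess

# Hypothetical input names: cam1.mp4 ... cam8.mp4, one file per GoPro.
os.makedirs("frames", exist_ok=True)
for cam in range(1, 9):
    # Frame NNNNN of camera N becomes frames/camN_NNNNN.jpg, so each
    # stitching group is the set of eight images sharing a frame number.
    subprocess.run(["ffmpeg", "-i", "cam%d.mp4" % cam,
                    "frames/cam%d_%%05d.jpg" % cam], check=True)

# Keep one camera's audio track to mux back in at the end
# (assumption: any single camera's audio is good enough).
subprocess.run(["ffmpeg", "-i", "cam1.mp4", "-vn", "ogAudio.mp3"],
               check=True)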
Step 2: Create Project File
pto_gen -o prj/project.pto *.jpg -f 98
98 should be the correct field of view for these GoPro cameras. Hugin can normally read the field of view from the images’ EXIF data, but ffmpeg doesn’t write the appropriate EXIF tags to the extracted frames, so we have to supply it with -f.
My naive approach was to repeat this step for every single frame, searching for control points and redoing all the optimisation steps. Of course this is very inefficient, which was acceptable as an intermediate step. The real problem, however, is that if frames are stitched independently they are stitched inconsistently, and the result is an unwatchable video.
Therefore, we’re going to create the project and optimise it based on the first frame only, then edit the references to the image files in the project itself (luckily PTO is a textual format) and run just the final stitching step on all the other frames.
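That one-time alignment pass simply chains the commands from the summary above; a minimal sketch in Python:

import subprocess

# Run the alignment chain once, on the first frame's project file.
for cmd in [
    ["cpfind", "--multirow", "-o", "prj/project.pto", "prj/project.pto"],
    ["celeste_standalone", "-i", "prj/project.pto", "-o", "prj/project.pto"],
    ["cpclean", "-o", "prj/project.pto", "prj/project.pto"],
    ["autooptimiser", "-a", "-l", "-s", "-o", "prj/project.pto", "prj/project.pto"],
]:
    subprocess.run(cmd, check=True)

Rewriting the image references for each frame then looks like this: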
# One project file per frame: swap the image names in the optimised
# first-frame project (frame_count: number of frames from step 1).
with open("prj/project.pto", "r") as f:
    data = f.read()
for i in range(1, frame_count + 1):
    newdata = data.replace("00001.jpg", format(i, '05') + ".jpg")
    with open("prj/project" + format(i, '05') + ".pto", "w") as f:
        f.write(newdata)
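Finally, the stitching step from the summary runs once per generated project; again a sketch, reusing the hypothetical frame_count:

import subprocess

# Stitch every frame; with --prefix=NNNNN the result is NNNNN.tif,
# which is what the ffmpeg command in step 3 expects.
for i in range(1, frame_count + 1):
    subprocess.run(["hugin_executor", "--stitching",
                    "--prefix=" + format(i, '05'),
                    "prj/project" + format(i, '05') + ".pto"],
                   check=True)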
Step 3: Reassemble the Video
This is the easiest step: we just need to give ffmpeg the list of frames, the original audio, and, most importantly, this little video filter instruction, which ensures an even vertical resolution:
-vf scale=1920:-2
Without it, sticking to yuv420p (which is advisable for compatibility), libx264 can fail with a “height not divisible by 2” error. The -2 tells ffmpeg to pick the height that preserves the aspect ratio, rounded to an even number.
We must also specify:
-strict -2
to allow the use of ffmpeg’s native AAC encoder, which older builds flag as experimental.
The whole command is:
ffmpeg -r 30 -f image2 -s 1920x1440 -i %05d.tif -i ogAudio.mp3 -shortest -vcodec libx264 -pix_fmt yuv420p -vf scale=1920:-2 -strict -2 video_out.mp4
Get the script
I’ve put together an over-simplistic Python script that does the whole sequence of operations; you can get it on my GitHub: https://github.com/xorgol/ui_stitcher
Results
First of all, it currently has no multi-threading, so it’s really slow.
The biggest problem is that we only compute the alignment on the first frame, so the video has all sorts of stitching artifacts. Here’s a sample.