A single video in which a large picture shows the overall view of the robot performing the test, and a small, (mostly) synchronised inset video in the upper left corner shows the operator interface.
Use two cameras (or mobile phones), making sure to record the points mentioned in the rules document. It is highly recommended that the two cameras start recording at the same time (it may be helpful to do a count-down so that the record buttons are pushed at the same time). It is possible to shift the timing later if necessary.
The following is an example procedure to turn the two videos into a Picture-in-Picture video, suitable for uploading. It uses the free “ffmpeg” program, which can be batched up to process many videos at the same time. You may also use any other software you wish as long as a similar effect is achieved.
The ffmpeg program can be downloaded for free from https://ffmpeg.org/download.html and is available for Linux, Windows, and macOS. It is recommended that you add ffmpeg to your PATH so that you can call it from anywhere on your system.
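To confirm that ffmpeg is reachable before building the longer commands below, a quick check along these lines can be run in a terminal (a sketch; the message wording is just illustrative):

```shell
#!/bin/sh
# Check whether ffmpeg is reachable via the PATH.
# "command -v" prints the location of an executable, or nothing if absent.
ffmpeg_path=$(command -v ffmpeg || true)
if [ -n "$ffmpeg_path" ]; then
  echo "ffmpeg found at: $ffmpeg_path"
else
  echo "ffmpeg not in PATH; call it via its full path instead"
fi
```

If the second message appears, either add the install directory to your PATH or substitute the full path wherever “ffmpeg” appears in the commands below.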
Naming convention for these instructions:
- The main video will be MainVideo.mp4. Substitute this filename with your main video.
- The inset video will be SubVideo.mp4. Substitute this filename with your inset video.
- The output video will be PiP.mp4. Substitute this filename with the output filename.
- The ffmpeg executable is assumed to be in your path and called “ffmpeg”. Point to this directly if this isn’t the case on your system.
For a plain Picture-in-Picture video (no splash screen, inset video in the upper left corner, scaled to one third of its size in both dimensions), use the following. Note that it is all one line; it is only split up here to fit on the page.
ffmpeg -i MainVideo.mp4 -i SubVideo.mp4 -filter_complex "[1:v] scale=iw/3:-1 [pip]; [0:v][pip] overlay=0:0" -vcodec h264 -acodec aac PiP.mp4
Here’s what each bit does, in case you wish to customise it.
- ffmpeg : call to the ffmpeg program. Replace this with the full path to your ffmpeg executable if it isn’t in your path.
- -i MainVideo.mp4 : the first input file (-i means input).
- -i SubVideo.mp4 : the second input file. Note that you can have more than two input files; just add further “-i ” options.
- -filter_complex : defines a complex filter; the part of the command that follows (within quotes) defines the filter graph.
- [1:v] scale=iw/3:-1 [pip] : Take the video stream of the sub video (input 1, where we count from 0) and scale it such that its width is one third of its input width (“iw”). The “-1” means make the height whatever it needs to be to preserve the aspect ratio. The result is labelled “[pip]” so it can be referred to later in the filter graph.
- [0:v][pip] overlay=0:0 : Take the video stream of the main video (input 0) and the [pip] video that we just scaled, and apply the overlay filter, with the upper left corner of the overlaid video in the upper left corner of the main video (0:0). The output is not assigned to any label, meaning it becomes the output of the filter graph.
- -vcodec h264 : Use the h264 codec to encode the resulting video.
- -acodec aac : Use ffmpeg’s built-in AAC encoder for the resulting audio. (Older guides use “libfaac”, but that encoder has been removed from current ffmpeg releases.)
- PiP.mp4 : The last argument is the desired output filename.
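If you have many test videos, the command can be wrapped in a small script. The sketch below assumes, purely as an example, that each pair is named <base>_Main.mp4 and <base>_Sub.mp4; adapt the naming to your own files. It prints the ffmpeg command for each pair rather than running it, so you can review the batch before piping the output to sh.

```shell
#!/bin/sh
# Print the Picture-in-Picture command for one pair of videos named
# <base>_Main.mp4 / <base>_Sub.mp4 (this naming is an assumption --
# adapt it to your own files).
pip_cmd() {
  base="$1"
  echo ffmpeg -i "${base}_Main.mp4" -i "${base}_Sub.mp4" \
    -filter_complex '"[1:v] scale=iw/3:-1 [pip]; [0:v][pip] overlay=0:0"' \
    -vcodec h264 -acodec aac "${base}_PiP.mp4"
}

# Batch over every main video in the current directory.
# Pipe the output of this script to "sh" to actually run the commands.
for f in *_Main.mp4; do
  [ -e "$f" ] && pip_cmd "${f%_Main.mp4}"
done
```

The overlay position is also easy to customise here: replacing overlay=0:0 with overlay=W-w:H-h would place the inset in the bottom right corner instead (W/H are the main video’s dimensions, w/h the inset’s).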
Starting at a given time:
If the two videos were not started at exactly the same time, you can tell ffmpeg to discard a certain amount of one (or both) videos. For example, to discard the first 1 second of the Main video and the first 2 seconds of the Sub video, use the following command. The added parts are the two “-ss” options.
ffmpeg -ss 00:00:01.000 -i MainVideo.mp4 -ss 00:00:02.000 -i SubVideo.mp4 -filter_complex "[1:v] scale=iw/3:-1 [pip]; [0:v][pip] overlay=0:0" -vcodec h264 -acodec aac PiP.mp4
Note that the time format is HH:MM:SS.mmm where HH = hours, MM = minutes, SS = seconds, and mmm = thousandths of a second. The part after the decimal point is a fraction of a second, not a frame number, so a cut of one and a half seconds is written 00:00:01.500. Each -ss parameter comes *before* the “-i” parameter of the video it refers to.
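If you have measured the offset between the two recordings in frames, divide by the frame rate to get the decimal seconds that -ss expects. A minimal sketch (the 45-frame / 30 fps values are just examples; offsets of a minute or more would need the minutes and hours fields filled in as well):

```shell
#!/bin/sh
# Convert a frame offset into the HH:MM:SS.mmm form that -ss expects.
# frames and fps are example values; substitute your own measurements.
frames=45
fps=30
# POSIX shell arithmetic is integer-only, so use awk for the division.
offset=$(awk -v f="$frames" -v r="$fps" 'BEGIN { printf "00:00:%06.3f", f / r }')
echo "-ss $offset"   # for 45 frames at 30 fps this prints: -ss 00:00:01.500
```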