Steve Pomroy

Extracting Images from Video for Deep Learning

For a machine to learn to recognize objects in pictures/video, you need to have a whole bunch of labeled pictures to feed that deep learning machine. In this post, I cover how you can create those pictures from video recorded on a Raspberry Pi.

Overview

As I mentioned in my previous post, before you can train a machine to detect objects in your pictures, you need to have a ton of pictures of the object(s) you want to detect along with labels for those objects. While you can use individual still pictures taken with a smart phone or fancy camera, it's much easier to record video of your object(s) and then extract the still images from that video. After all, a video is really just a sequence of still images displayed fast enough to create the illusion of movement.


Below is the step-by-step process I follow to extract images from video recorded on a Raspberry Pi. I cover this process for Ubuntu Linux in this post but I'll be sure to post a Windows version as well. As an aside, I've also done an integration with Google's Team Drive for downloading/extraction to make it easier to collaborate with my distributed team - which I'll cover in a later post.


The high-level image extraction process looks like this:

  1. Install video processing software

  2. Record video (surprise!)

  3. Transfer video to your laptop

  4. Run image extraction process


1. Install Video Processing Software

I'm a big fan of open source software so I use FFmpeg for extracting images from my videos. It's powerful and incredibly flexible but complicated to use (especially for beginners). Don't let the funky name intimidate you either. It's supposed to stand for Fast Forward MPEG. Now that we've got that all cleared up, let's move along to the install.


(Ubuntu) Linux Installation

1. Open a terminal/command line window (CTRL + ALT + T is a quick shortcut for that) and run the following command to start the installation:

sudo apt-get install ffmpeg

2. If prompted about additional storage space, type 'Y' and press 'Enter' to continue with the installation. You'll see a bunch of info appear on the screen as the installation process takes place.


3. Verify the installation by running:

ffmpeg -version

You should see something similar to what I see on my Ubuntu Linux laptop:

Figure 2. Validating ffmpeg install

2. Record video

Obviously, before you can extract individual pictures from a video, you need to record a video. You can use your mobile phone, your fancy DSLR camera or whatever device you choose for that. I'm using a camera attached to a Raspberry Pi for my current project so I'll cover that here. (I like to think of the Raspberry Pi as a tiny computer to power big ideas.)


1. Be sure your Raspberry Pi is up and running, connected to your network and has a camera module attached to it. If you've never set up a Raspberry Pi before, you're in luck. I'll be posting a blog on that next week with all the gory details.


2. Connect to your Raspberry Pi from your Linux machine using Secure Shell (ssh) by launching a command/terminal window (CTRL + ALT + T) and running the command below, which connects your laptop to a computer named 'raspberrypi.local' with the username 'pi' via a secure shell:


ssh pi@raspberrypi.local

If all goes well, you'll be prompted for a password. The default password is 'raspberry', although the security nut inside my brain insists that I tell you to change that default password as soon as you connect to your Raspberry Pi for the first time. Or else!
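If you want to act on that advice right away, the standard 'passwd' utility will prompt you for a new password for whichever account you're logged in as (in this case 'pi'):

# change the password for the current user (pi)
passwd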


3. Record a short test video by running the command below, which uses the 'raspivid' utility to record a five-second video clip named 'testvideo.h264':

raspivid -o testvideo.h264
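By default, raspivid records for five seconds. If you want a longer clip (more frames means more training images), the -t option sets the recording length in milliseconds. For example, something like this should capture a thirty-second clip (the file name here is just a placeholder):

# record for 30,000 milliseconds (30 seconds) instead of the default 5
raspivid -t 30000 -o trainingvideo.h264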

4. To verify that your test video was recorded correctly, let's list the contents of the current folder to see that there is indeed a video file in it. On the Raspberry Pi, the default video format is H264, so issue the following command to display a list of Raspberry Pi videos in the current folder:

ls *.h264

You should see something similar to the following if you've successfully recorded a video.

Figure 3. Recording a test video on a Raspberry Pi

3. Transfer Video to Your Laptop

Although you can extract the images directly on your Raspberry Pi, you'll probably want to work in a more capable setting such as your laptop. I have image labeling software and other video/image editing tools installed on my laptop, so it's much more convenient to work with the files there. Not to mention the Raspberry Pi will usually run much slower than your laptop (unless your laptop is an ancient 286, of course).


1. To copy the video file from your Raspberry Pi to your laptop, open up a terminal window (CTRL + ALT + T) on your laptop and run the command below. This command uses the secure copy utility 'scp' to connect to the user account 'pi' on the machine 'raspberrypi.local' and download the test video file we just recorded ('/home/pi/testvideo.h264') to the current folder ('.') on your laptop:


scp pi@raspberrypi.local:/home/pi/testvideo.h264 .

Look closely at the command above. It's important to include the dot '.' at the end of the command or you'll get errors. The dot at the end tells the scp program to copy the video to the current folder (the folder you're running scp from on your laptop).
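Once you start recording lots of clips, you don't have to copy them one at a time. scp accepts a wildcard on the remote path; quoting it keeps your laptop's shell from trying to expand it locally:

# copy every .h264 file in pi's home folder to the current folder on your laptop
scp "pi@raspberrypi.local:/home/pi/*.h264" .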


2. Let's verify that the transfer succeeded by checking the directory listing on your laptop:


ls *.h264

Which should look similar to the following:

Figure 4. Downloading test video to laptop

3. For kicks, let's preview the video using the media player installed on your laptop by navigating to the download directory and double clicking on the video file. Here's what that looks like for me on Ubuntu 18.04:


Figure 5. H264 video not recognized

Right - Ubuntu doesn't know what to do with a raw .h264 file out of the box. So go ahead and click "Select Application", choose the "Videos" application and click "Select".


Figure 6. Select Videos to play test video

The Videos application then does its best to play your video. You'll notice that it seems a lot shorter than five seconds, but that's OK. The .h264 file raspivid produces is a raw video stream with no container around it, so the player has to guess at the frame rate and timing. There are other tools we can use to package it up in a more widely understood format, but we don't need to do that in order to use ffmpeg to extract the still images. As long as you get something that resembles what your Raspberry Pi's camera was looking at, you'll be fine. My test video looks something like the following:


Figure 7. Preview of H264 video file of me waving at the camera
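If you'd like a cleaner preview anyway, there are two easy options. ffplay (which comes along with the ffmpeg package) can play the raw file straight from the command line, and ffmpeg can wrap the raw stream into an MP4 container without re-encoding it so regular players report the right duration. A quick sketch, assuming the camera's default 30 frames per second:

# play the raw stream directly
ffplay testvideo.h264

# or wrap it in an MP4 container (no re-encoding) for friendlier playback
ffmpeg -framerate 30 -i testvideo.h264 -c copy testvideo.mp4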

4. Run Image Extraction Process


Now that we've confirmed testvideo.h264 has been copied to your laptop, we are ready to use ffmpeg to extract the individual images (frames) from the video file.


1. Open a command/terminal window if you haven't already done so (CTRL + ALT + T).


2. Run the following command. I know, it looks a little scary - ffmpeg is super powerful but not the most intuitive. I'll talk you through the basic options momentarily. Note that this command assumes testvideo.h264 is in the same directory you're running the command from.

ffmpeg -i testvideo.h264 -q:v 1 -f image2 testvideoimage-%03d.jpg

This command tells ffmpeg to:

  • Process the testvideo.h264 video file

  • Extract high quality images (-q:v 1 -f image2) from it

  • Write each of the individual pictures to your hard drive prefixed with testvideoimage-

  • Include the image number (%03d) as part of each file name (001, 002, 003, etc.)

The result is a whole bunch of images sitting in your current folder, something like this:

Figure 8. Images extracted from test video
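A quick sanity check is to count how many images landed in the folder; a five-second clip at the camera's default 30 frames per second should give you roughly 150 of them:

# count the extracted images
ls testvideoimage-*.jpg | wc -l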

3. Browse the images to verify that they were correctly extracted and that the quality is what you expected. As a side note, higher-quality pictures also require more storage space (surprise), so you may want to adjust the quality depending on what you plan to do with all of these shiny new still images.
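If you do want to dial the output up or down, two knobs are worth knowing: '-q:v' takes values from 1 (best) up to around 31 (smallest files) for JPEG output, and the 'fps' video filter controls how many frames per second get extracted. Here's a sketch, assuming you only want about two images per second of video and can live with slightly lower quality:

# keep ~2 frames per second and use a lower JPEG quality to save disk space
ffmpeg -i testvideo.h264 -vf fps=2 -q:v 5 -f image2 testvideoimage-%03d.jpg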


Next Steps

Now that you have a folder full of individual still images, you can dig into the wonderfully tedious task of labeling the objects you're interested in so that they can be used to train a deep learning model. Check out my previous blog post for the details on the labeling process.
