Video Codec Unit (VCU) Demo Example: CAMERA->ENCODE->STREAMOUT

Introduction

The Video Codec Unit (VCU) in the ZynqMP SoC is capable of encoding and decoding AVC/HEVC compressed video streams in real time.

This notebook example shows a live streaming use case using two ZCU106 boards connected to a common network, wherein board-1 (acting as the server) captures video and audio from a USB camera, encodes the video using the VCU and the audio using a software GStreamer element, muxes the audio and video data, and streams it over Ethernet. On the other side, board-2 (acting as the client) receives the data, demuxes and decodes the audio/video data, and renders it on a DP/HDMI monitor.

Implementation Details

This example requires two boards: board-1 is used for encoding the live A/V feed from the camera and streaming it out (as a server), and board-2 is used for streaming in and decoding, to play the received audio/video stream (as a client). (More details regarding the test setup for board-2 can be found in the "vcu-demo-streamin-decode-display.ipynb" example.)

Note: This notebook needs to be run along with "vcu-demo-streamin-decode-display.ipynb". The configuration settings below are for the server-side pipeline.
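
For orientation, the sketch below assembles an illustrative server-side gst-launch-1.0 command of the kind the demo script constructs. The element names (omxh264enc, faac, mpegtsmux, rtpmp2tpay, udpsink), the port number, and the device paths are assumptions for illustration only; the actual pipeline is generated by vcu-demo-camera-encode-streamout.sh from the options chosen further down in this notebook.

    # Illustrative only: the real pipeline is built by vcu-demo-camera-encode-streamout.sh
    client_ip = "192.168.1.101"   # hypothetical client (board-2) address
    pipeline = (
        "gst-launch-1.0 "
        "v4l2src device=/dev/video0 ! video/x-raw,width=1280,height=720 "
        "! omxh264enc ! h264parse ! mux. "                      # VCU hardware video encode
        "alsasrc device=hw:1,0 ! audioconvert ! faac ! mux. "   # software audio encode
        f"mpegtsmux name=mux ! rtpmp2tpay ! udpsink host={client_ip} port=5004"
    )
    print(pipeline)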

Board Setup

Board-1 is used for camera capture, encode and stream-out (as a server).

  1. Connect the serial cable to monitor logs on the serial console.
  2. Connect a USB camera (preferably a Logitech HD Pro Webcam C920) to the board.
  3. If the board is connected to a private network, then export proxy settings in the /home/root/.bashrc file on the board as below:
    • Create/open the bashrc file using "vi ~/.bashrc"
      • Insert the lines below into the bashrc file
        • export http_proxy="< private network proxy address >"
        • export https_proxy="< private network proxy address >"
      • Save and close the bashrc file.
  4. Connect the two boards to the same network so that they can access each other by IP address.
  5. Check the server IP.
    • root@zcu106-zynqmp:~#ifconfig
  6. Check the client IP on the client board.
  7. Check connectivity between board-1 & board-2 (see the sketch after this list).
    • root@zcu106-zynqmp:~#ping <board-2's IP>
  8. Provide the client board's IP as the Client IP parameter.
  9. Run Camera Audio/Video → Stream-out on board-1.
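
As a convenience, the short sketch below runs the IP and connectivity checks from steps 5-7 directly from the notebook; the client address is a placeholder and must be replaced with board-2's actual IP.

    import subprocess

    client_ip = "192.168.1.101"   # placeholder: replace with board-2's IP address

    # Show the server's (board-1's) network interfaces and IP addresses
    print(subprocess.run(["ifconfig"], capture_output=True, text=True).stdout)

    # Ping the client board to confirm the two boards can reach each other
    ping = subprocess.run(["ping", "-c", "3", client_ip], capture_output=True, text=True)
    print(ping.stdout)
    print("client reachable" if ping.returncode == 0 else "client NOT reachable")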

Determine Audio Device Names

The audio device names of the audio source (input device) and playback device (output device) need to be determined using the arecord and aplay utilities installed on the platform.

Audio Input

ALSA sound device names for capture devices

  • Run the below command to get ALSA sound device names for capture devices:

    root@zcu106-zynqmp:~#arecord -l

    It shows the list of audio capture hardware devices. For example:

      - card 1: C920 [HD Pro Webcam C920], device 0: USB Audio [USB Audio]
          - Subdevices: 1/1
          - Subdevice #0: subdevice #0

Here the card number of the capture device is 1 and the device id is 0. Hence "hw:1,0" should be passed as the audio input device.
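
As a convenience, the sketch below parses the arecord -l output and prints the corresponding "hw:<card>,<device>" strings (the regular expression assumes the output format shown above):

    import re, subprocess

    # Derive "hw:<card>,<device>" strings from `arecord -l` output
    listing = subprocess.run(["arecord", "-l"], capture_output=True, text=True).stdout
    for line in listing.splitlines():
        match = re.match(r"card (\d+): .*?, device (\d+):", line)
        if match:
            card, device = match.groups()
            print(f"hw:{card},{device}  <-  {line.strip()}")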

Pulse sound device names for capture devices

  • Run the below command to get Pulse sound device names for capture devices:

    root@zcu106-zynqmp:~#pactl list short sources

    It shows the list of audio capture hardware devices. For example:

    - 0 alsa_input.usb-046d_HD_Pro_Webcam_C920_758B5BFF-02.analog-stereo ...

Here "alsa_input.usb-046d_HD_Pro_Webcam_C920_758B5BFF-02.analog-stereo" is the name of audio capture device. Hence it can be passed as auido input device.

USB Camera Capabilities

Resolutions for this example need to be set based on the USB camera's capabilities.

  • Capabilities can be found by executing the below command on the board:

    root@zcu106-zynqmp:~#v4l2-ctl -d < dev-id > --list-formats-ext

    < dev-id >: It can be found using dmesg logs. Typically it will be "/dev/video0".

  • If v4l-utils is not installed in the pre-built image, install it using dnf or rebuild the PetaLinux image to include v4l-utils.
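
The sketch below queries every /dev/video* node for its supported formats and resolutions, which can help choose valid values for the Camera Dev Id and Resolution widgets later in this notebook (it assumes v4l2-ctl is installed):

    import glob, subprocess

    # Print the capture formats/resolutions supported by each V4L2 device node
    for dev in sorted(glob.glob("/dev/video*")):
        print("===", dev, "===")
        caps = subprocess.run(["v4l2-ctl", "-d", dev, "--list-formats-ext"],
                              capture_output=True, text=True)
        print(caps.stdout or caps.stderr)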

In [1]:
from IPython.display import HTML

HTML('''<script>
code_show=true; 
function code_toggle() {
 if (code_show){
 $('div.input').hide();
 } else {
 $('div.input').show();
 }
 code_show = !code_show
} 
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')


Out[1]:

Run the Demo


In [2]:
from ipywidgets import interact
import ipywidgets as widgets
from common import common_vcu_demo_camera_encode_streamout
import os
from ipywidgets import HBox, VBox, Text, Layout

Video


In [3]:
video_capture_device=widgets.Text(value='',
    placeholder='"/dev/video1"',
    description='Camera Dev Id:',
    style={'description_width': 'initial'},
    #layout=Layout(width='35%', height='30px'), 
    disabled=False)
address_path=widgets.Text(value='',
    placeholder='192.168.1.101 ',
    description='Client IP:',
    disabled=False)
HBox([video_capture_device, address_path])



In [4]:
codec_type=widgets.RadioButtons(
    options=['avc', 'hevc'],
    description='Codec Type:',
    disabled=False)
video_size=widgets.RadioButtons(
    options=['640x480', '1280x720', '1920x1080', '3840x2160'],
    description='Resolution:',
    description_tooltip='To select the values, please refer USB Camera Capabilities section',
    disabled=False)
sink_name=widgets.RadioButtons(
    options=['none', 'fakevideosink'],
    description='Video Sink:',
    disabled=False)
HBox([codec_type, video_size, sink_name])


Audio


In [5]:
device_id=Text(value='',
    placeholder='(optional) "hw:1"',
    description='Input Dev:',
    description_tooltip='To select the values, please refer Determine Audio Device Names section',
    disabled=False)
device_id



In [6]:
audio_sink={'none':['none'], 'aac':['auto','alsasink','pulsesink'],'vorbis':['auto','alsasink','pulsesink']}
audio_src={'none':['none'], 'aac':['auto','alsasrc','pulseaudiosrc'],'vorbis':['auto','alsasrc','pulseaudiosrc']}

#val=sorted(audio_sink, key = lambda k: (-len(audio_sink[k]), k))
def print_audio_sink(AudioSink):
    pass
    
def print_audio_src(AudioSrc):
    pass

def select_audio_sink(AudioCodec):
    audio_sinkW.options = audio_sink[AudioCodec]
    audio_srcW.options = audio_src[AudioCodec]

audio_codecW = widgets.RadioButtons(options=sorted(audio_sink.keys(), key=lambda k: len(audio_sink[k])), description='Audio Codec:')

init = audio_codecW.value

audio_sinkW = widgets.RadioButtons(options=audio_sink[init], description='Audio Sink:')
audio_srcW = widgets.RadioButtons(options=audio_src[init], description='Audio Src:')
#j = widgets.interactive(print_audio_sink, AudioSink=audio_sinkW)
k = widgets.interactive(print_audio_src, AudioSrc=audio_srcW)
i = widgets.interactive(select_audio_sink, AudioCodec=audio_codecW)

HBox([i, k])


Advanced options:


In [7]:
frame_rate=widgets.Text(value='',
    placeholder='(optional) 15, 30, 60',
    description='Frame Rate:',
    disabled=False)
bit_rate=widgets.Text(value='',
    placeholder='(optional) 1000, 20000',
    description='Bit Rate(Kbps):',
    style={'description_width': 'initial'},
    disabled=False)
gop_length=widgets.Text(value='',
    placeholder='(optional) 30, 60',
    description='Gop Length:',
    disabled=False)

display(HBox([bit_rate, frame_rate, gop_length]))



In [8]:
no_of_frames=Text(value='',
    placeholder='(optional) 1000, 2000',
    description=r'<p>Frame Nos:</p>',
    #layout=Layout(width='25%', height='30px'),
    disabled=False)
output_path=widgets.Text(value='',
    placeholder='(optional)',
    description='Output Path:',
    disabled=False)
periodicity_idr=widgets.Text(value='',
    placeholder='(optional) 30, 40, 50',
    description='Periodicity Idr:',
    style={'description_width': 'initial'},
    #layout=Layout(width='35%', height='30px'),
    disabled=False)
#entropy_buffers
#output_path
#gop_length
HBox([periodicity_idr, no_of_frames, output_path])



In [9]:
#entropy_buffers
show_fps=widgets.Checkbox(
    value=False,
    description='show-fps',
    #style={'description_width': 'initial'},
    disabled=False)
compressed_mode=widgets.Checkbox(
    value=False,
    description='compressed-mode',
    disabled=False)
HBox([compressed_mode, show_fps])



In [10]:
from IPython.display import clear_output
from IPython.display import Javascript

def run_all(ev):
    display(Javascript('IPython.notebook.execute_cells_below()'))

def clear_op(event):
    clear_output(wait=True)
    return

button1 = widgets.Button(
    description='Clear Output',
    style= {'button_color':'lightgreen'},
    #style= {'button_color':'lightgreen', 'description_width': 'initial'},
    layout={'width': '300px'}
)
button2 = widgets.Button(
    description='',
    style= {'button_color':'white'},
    #style= {'button_color':'lightgreen', 'description_width': 'initial'},
    layout={'width': '83px'}
)
button1.on_click(run_all)
button1.on_click(clear_op)

In [11]:
def start_demo(event):
    #clear_output(wait=True)
    arg = common_vcu_demo_camera_encode_streamout.cmd_line_args_generator(
        device_id.value, video_capture_device.value, video_size.value, codec_type.value,
        audio_codecW.value, frame_rate.value, output_path.value, no_of_frames.value,
        bit_rate.value, show_fps.value, audio_srcW.value, periodicity_idr.value,
        gop_length.value, compressed_mode.value, sink_name.value, address_path.value)
    #!sh vcu-demo-camera-encode-decode-display.sh $arg > logs.txt 2>&1
    !sh vcu-demo-camera-encode-streamout.sh $arg
    return

button = widgets.Button(
    description='click to start camera-encode-streamout demo',
    style= {'button_color':'lightgreen'},
    #style= {'button_color':'lightgreen', 'description_width': 'initial'},
    layout={'width': '300px'}
)
button.on_click(start_demo)
HBox([button, button2, button1])