LibrePlanet: Conference/2016/Streaming
Keynote capture script
#!/bin/bash

xid=$(xwininfo -root -all|grep "Jitsi Meet" | sed -e 's/^ *//' | cut -d\  -f1)
[ 1${xid}1 != 11 ] && xidparameter="xid=$xid"

export GST_DEBUG=4
DATE=$(date +%Y-%m-%d-%H_%M_%S)
PASSWORD=xxx

# pactl list|grep alsa_output
device1="alsa_output.usb-Burr-Brown_from_TI_USB_Audio_CODEC-00-CODEC.analog-stereo.monitor"
device2="alsa_input.usb-Burr-Brown_from_TI_USB_Audio_CODEC-00-CODEC.analog-stereo"

gst-launch-1.0 --gst-debug-level=$1 --eos-on-shutdown ximagesrc use-damage=false $xidparameter show-pointer=false ! \
videoconvert ! videorate ! video/x-raw,framerate=20/1 ! videoscale ! video/x-raw, width=1024, height=768 ! \
vp8enc min_quantizer=1 max_quantizer=10 cpu-used=10 deadline=41000 threads=2 ! queue ! mux. \
pulsesrc device=$device1 ! adder name=mix ! vorbisenc quality=0.4 ! queue ! mux. \
pulsesrc device=$device2 ! mix. \
webmmux streamable=true name=mux ! queue ! tee name=s ! queue ! filesink location=keynote-${DATE}.webm \
s. ! queue leaky=2 ! shout2send ip=live2.fsf.org port=80 mount=keynote.webm password=$PASSWORD sync=false 2>&1 | \
tee keynote-${DATE}.log
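For reference, a minimal way to run the script, assuming it has been saved as keynote-capture.sh (a hypothetical name, not specified in the original) and PASSWORD has been replaced with the real stream password; the first argument is passed through as the GStreamer debug level:
$ chmod +x keynote-capture.sh
$ ./keynote-capture.sh 3    # "3" becomes --gst-debug-level=3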
OpenBroadcaster 2 v4l
These instructions are for building OpenBroadcaster (OBS) on Debian testing, plus ffmpeg as a dependency.
NOTE: OBS requires OpenGL 3.2 to run, which is not available in Debian 8 (stable). Follow the install instructions after upgrading to Debian testing.
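As an optional sketch (not part of the original instructions): the reported OpenGL version can be checked with glxinfo from mesa-utils, and the move to testing is done by switching the APT suite and dist-upgrading, assuming sources.list currently points at jessie:
# apt-get install mesa-utils
$ glxinfo | grep "OpenGL version"
# sed -i 's/jessie/testing/g' /etc/apt/sources.list    # adjust if your sources use "stable" or a different layout
# apt-get update && apt-get dist-upgrade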
Building/installing:
Install dependencies:
# apt-get install build-essential pkg-config cmake git checkinstall libx11-dev libgl1-mesa-dev \
libpulse-dev libxcomposite-dev libxinerama-dev libv4l-dev libudev-dev libfreetype6-dev \
libfontconfig-dev qtbase5-dev libqt5x11extras5-dev libx264-dev libxcb-xinerama0-dev \
libxcb-shm0-dev libjack-jackd2-dev libcurl4-openssl-dev zlib1g-dev yasm gstreamer-tools pavucontrol
Download and build ffmpeg:
$ git clone --depth 1 git://source.ffmpeg.org/ffmpeg.git
$ cd ffmpeg
$ ./configure --enable-shared --prefix=/usr
$ make -j4
# checkinstall --pkgname=FFmpeg --fstrans=no --backup=no \
  --pkgversion="$(date +%Y%m%d)-git" --deldoc=yes
Download and build obs:
$ git clone https://github.com/jp9000/obs-studio.git
$ cd obs-studio
$ mkdir build && cd build
$ cmake -DUNIX_STRUCTURE=1 -DCMAKE_INSTALL_PREFIX=/usr ..
$ make -j4
# checkinstall --pkgname=obs-studio --fstrans=no --backup=no \
  --pkgversion="$(date +%Y%m%d)-git" --deldoc=yes
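Not an original step, but a quick way to confirm that both checkinstall packages are registered and that the obs binary landed on the path:
$ dpkg -l | grep -Ei 'ffmpeg|obs-studio'
$ which obs
$ ffmpeg -version | head -n 1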
Install v4l loop:
# apt-get install v4l2loopback-dkms
# echo v4l2loopback >> /etc/modules
# modprobe v4l2loopback
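To verify that the module is loaded and a loopback video device has appeared (the device number is not guaranteed to be /dev/video1 as assumed below):
$ lsmod | grep v4l2loopback
$ ls -l /dev/video*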
Install local stream manager:
# apt-get install crtmpserver
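With the stock Debian package, crtmpserver should start as a service and listen on the default RTMP port 1935 (both are assumptions worth verifying):
# service crtmpserver status
# ss -tlnp | grep 1935    # 1935 is the RTMP default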
Setup
Run obs and set the streaming settings to:
- URL: rtmp://127.0.0.1/flvplayback
- Stream Key: stream
Relay the stream to the v4l and audio loopback devices:
# modprobe snd-aloop
# echo snd-aloop >> /etc/modules
$ gst-launch-1.0 rtmpsrc location="rtmp://127.0.0.1/flvplayback/stream.flv live=1" ! decodebin name=decoded ! videorate ! \
video/x-raw,framerate=25/1 ! queue leaky=1 ! v4l2sink device=/dev/video1 sync=false \
decoded. ! audioconvert ! queue leaky=1 ! pulsesink device=alsa_output.1.analog-stereo sync=false
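The pulsesink device name above (alsa_output.1.analog-stereo, presumably the snd-aloop loopback card on the original machine) will likely differ on other systems; the available sink names can be listed with:
$ pactl list short sinks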
Start a Jitsi Meet session in Chromium (the recommended browser; others may work).
- Configure Chromium to use Dummy video device (0x0000) as the video source, and Loopback Analog Stereo as the audio source.
- Launch pavucontrol and on the Recording tab, set Chromium to record from Monitor of Loopback Analog Stereo.
Caveats
- This setup introduces ~1s delay on video and audio.
Troubleshooting
Test streaming:
$ gst-launch-1.0 rtmpsrc location="rtmp://127.0.0.1/flvplayback/stream.flv live=1" ! decodebin ! xvimagesink sync=false
Test v4l device:
$ gst-launch-1.0 v4l2src device=/dev/video1 ! xvimagesink
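An additional audio check, as a sketch only (the monitor source name below is an assumption and should match whatever pactl reports for the loopback sink): play the loopback monitor back through the default output:
$ gst-launch-1.0 pulsesrc device=alsa_output.1.analog-stereo.monitor ! audioconvert ! pulsesink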
Speaker streaming feed
The streaming feed is processed by the ABYSS front-end. ABYSS (ABYSS Broadcast Your Stream Successfully) is written in Python; in version 0.1 it consists of a main module that generates the GUI via GTK3, a module that creates and handles the pipeline using GStreamer-1.0 plugins, and a config file to set the inputs/outputs and server parameters.
Here is the main pipeline (using the Elphel camera) as it could be used on a bash command line:
gst-launch-1.0 -e rtspsrc location=rtsp://192.168.48.2:554 ! queue ! rtpjpegdepay ! tee name=rawvideo ! queue ! jpegdec max-errors=-1 ! tee name=videodecoded ! queue ! xvimagesink sync=false \
rawvideo. ! queue ! filesink location=[rawvideo_defaultname] \
videodecoded. ! videoscale ! video/x-raw, width=640, height=360 ! vp8enc min_quantizer=1 max_quantizer=13 cpu-used=5 deadline=42000 threads=2 sharpness=7 ! queue ! webmmux name=mux2 \
pulsesrc device=[input_source_name] ! level interval=200000000 ! queue ! vorbisenc quality=0.3 ! tee name=rawaudio \
rawaudio. ! queue ! oggmux ! tee name=streamaudio ! queue ! filesink location=[audio_defaultname] \
streamaudio. ! queue leaky=2 ! shout2send ip=server.domain.name port=80 mount=testaudio.ogg password=[server_password] \
rawaudio. ! queue ! mux2. \
mux2. ! tee name=streamfull ! queue ! filesink location=[stream_defaultname] \
streamfull. ! queue leaky=2 ! shout2send ip=server.domain.name port=80 mount=teststream.webm password=[server_password]
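The bracketed values are placeholders that ABYSS fills in from its config file; when running the pipeline by hand, a suitable PulseAudio name for [input_source_name] (the USB mixing desk) can be looked up with:
$ pactl list short sources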
Here is the backup pipeline in case of Elphel failure; the input is a USB webcam:
gst-launch-1.0 -e v4l2src device=/dev/video[0-9]* ! video/x-raw, width=640, height=360 ! tee name=rawvideo ! queue ! xvimagesink sync=false \
rawvideo. ! queue ! matroskamux ! filesink location=[rawvideo_defaultname] \
rawvideo. ! vp8enc min_quantizer=1 max_quantizer=13 cpu-used=5 deadline=42000 threads=2 sharpness=7 ! queue ! webmmux name=mux2 \
pulsesrc device=[input_source_name] ! level interval=200000000 ! queue ! vorbisenc quality=0.3 ! tee name=rawaudio \
rawaudio. ! queue ! oggmux ! tee name=streamaudio ! queue ! filesink location=[audio_defaultname] \
streamaudio. ! queue leaky=2 ! shout2send ip=server.domain.name port=80 mount=testaudio.ogg password=[server_password] \
rawaudio. ! queue ! mux2. \
mux2. ! tee name=streamfull ! queue ! filesink location=[stream_defaultname] \
streamfull. ! queue leaky=2 ! shout2send ip=server.domain.name port=80 mount=teststream.webm password=[server_password]
Here is the test pipeline to check the Elphel framing/focusing and audio inputs:
gst-launch-1.0 -e rtspsrc location=rtsp://192.168.48.2:554 ! queue ! rtpjpegdepay ! jpegdec max-errors=-1 ! queue ! xvimagesink sync=false \
pulsesrc device=[input_source_name] ! level interval=200000000 ! queue ! pulsesink device=[output_source_name] sync=false
Setup
- Boot the computer
- Fill out the .abyss config file
- Set up a virtual interface if it is not hard coded (an iproute2 alternative is sketched after this list)
# ifconfig eth1:0 192.168.48.50
NOTE: command line as used on an X200 laptop, with the Elphel camera at 192.168.48.2
- Plug in and power the Elphel camera
- Plug in the webcam
- Plug in the USB mixing desk
- Launch ABYSS
- Test the setup by clicking the 'Set-up test' button
- If everything is OK, quit test mode by clicking the same button
- Start the streaming by clicking the 'Stream' button
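As mentioned in the list above, a possible iproute2 equivalent of the ifconfig alias (an untested sketch; adjust the interface name and prefix length to the local network):
# ip addr add 192.168.48.50/24 dev eth1 label eth1:0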
Caveats
- There is no automatic switch from the main to the backup pipeline if the Elphel camera is physically disconnected from its network switch. This should be solved in the next version of ABYSS.
- If the internet connection is lost during streaming, the pipeline doesn't fail, and the files are still recorded on the local disk.