WebRTC-streamer


WebRTC-streamer is an experiment to stream video capture devices and RTSP sources through WebRTC using a simple mechanism.

It embeds an HTTP server that implements the API and serves a simple HTML page that uses it through AJAX.

The WebRTC signaling is implemented through HTTP requests:

  • /api/call : send offer and get answer

  • /api/hangup : close a call

  • /api/addIceCandidate : add a candidate

  • /api/getIceCandidate : get the list of candidates

The list of HTTP APIs is available at /api/help.
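
A sketch of driving these endpoints from the browser follows; the endpoint paths come from the list above, but the query parameters (`peerid`, `url`) and payload shapes are assumptions, not taken from `/api/help`:

```javascript
// Sketch of the HTTP signaling flow. The endpoint paths come from the list
// above; the query parameters (peerid, url) and payload shapes are assumptions.
const baseUrl = "http://localhost:8000";

// Pure helper: build an API URL with optional query parameters.
function apiUrl(base, method, params) {
  const query = new URLSearchParams(params).toString();
  return `${base}/api/${method}${query ? "?" + query : ""}`;
}

// Send an SDP offer, receive the SDP answer.
async function call(peerid, streamUrl, offer) {
  const resp = await fetch(apiUrl(baseUrl, "call", { peerid, url: streamUrl }), {
    method: "POST",
    body: JSON.stringify(offer),
  });
  return resp.json();
}

// Poll the server for its ICE candidates.
async function getIceCandidates(peerid) {
  const resp = await fetch(apiUrl(baseUrl, "getIceCandidate", { peerid }));
  return resp.json();
}

// Close the call.
function hangup(peerid) {
  return fetch(apiUrl(baseUrl, "hangup", { peerid }));
}
```

The same helpers work from NodeJS, since the signaling is plain HTTP.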

Nowadays there are builds on CircleCI, AppVeyor, CirrusCI and GitHub CI :

  • for x86_64 on Ubuntu Bionic
  • for armv7, cross-compiled (this build runs on Raspberry Pi 2 and NanoPi NEO)
  • for armv6+vfp, cross-compiled (this build runs on Raspberry Pi B and should run on a Raspberry Pi Zero)
  • for arm64, cross-compiled
  • Windows x64, built with clang

The webrtc stream name can be :

  • an alias defined with the -n argument; the corresponding -u argument will then be used to create the capturer
  • an "rtsp://" URL that will be opened by an RTSP capturer based on live555
  • a "file://" URL that will be opened by an MKV capturer based on live555
  • a "screen://" URL that will be opened by webrtc::DesktopCapturer::CreateScreenCapturer
  • a "window://" URL that will be opened by webrtc::DesktopCapturer::CreateWindowCapturer
  • a "v4l2://" URL that will capture H264 frames and store them using the webrtc::VideoFrameBuffer::Type::kNative type (obviously not supported on Windows)
  • a capture device name
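
For illustration, here are made-up examples of each naming scheme, plus a small helper that distinguishes URL schemes from plain aliases or device names (the URLs are placeholders, not real endpoints):

```javascript
// Made-up examples of each stream naming scheme listed above.
const streams = [
  "cam1",                          // alias declared with -n (URL supplied via -u)
  "rtsp://camera.local:554/h264",  // RTSP source, opened via live555
  "file:///tmp/record.mkv",        // MKV file, opened via live555
  "screen://",                     // full-desktop capture
  "window://Firefox",              // single-window capture
  "v4l2:///dev/video0",            // H264 frames straight from a V4L2 device
];

// Pure helper: extract the scheme, or report a plain alias/device name.
function streamScheme(name) {
  const m = name.match(/^([a-z0-9]+):\/\//);
  return m ? m[1] : "alias-or-device";
}
```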

Dependencies :

It is based on :

Install the Chromium depot tools (for WebRTC).

Download WebRTC

Build WebRTC Streamer

It is possible to specify the cmake parameters WEBRTCROOT & WEBRTCDESKTOPCAPTURE :

  • $WEBRTCROOT/src should contain the source (default is $(pwd)/../webrtc)
  • WEBRTCDESKTOPCAPTURE enables desktop capture if available (default is ON)

Arguments of '-H' are forwarded to the listening_ports option of civetweb, so it is possible to use the civetweb syntax.

This hack stores compressed frames from the backend stream inside an override of the i420 buffer, which allows forwarding H264 frames from a V4L2 device or an RTSP stream straight to the WebRTC stream. It uses less CPU, but adaptation (resize, codec, bandwidth) is disabled.

Examples

Screenshot

Live Demo

We can access the WebRTC stream using webrtcstreamer.html, for instance :

An example displaying a grid of WebRTC streams is available using the option "layout=x" Screenshot

Live Demo

It is possible to start the embedded ICE server and publish its URL using:

The command retrieves the public IP; it can also be given as a static parameter.

In order to configure the NAT rules using the UPnP feature of the router, it is possible to use upnpc like this:

adapting the HTTP port, STUN port and TURN port.

Instead of using the internal HTTP server, it is easy to display a WebRTC stream in an HTML page served by another HTTP server. The URL of the webrtc-streamer to use should be given when creating the WebRtcStreamer instance :

A short sample HTML page using webrtc-streamer running locally on port 8000 :
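The sample page itself was not captured in this copy; the following is a minimal sketch of what it might contain, assuming the streamer serves a `webrtc-streamer.js` helper exposing the `WebRtcStreamer` class (mentioned above) with `connect`/`disconnect` methods — the method names are assumptions:

```javascript
// Pure helper so the URL logic can be tested outside a browser.
function buildServerUrl(protocol, hostname, port) {
  return `${protocol}//${hostname}:${port}`;
}

// Browser-only part: assumes the page loads the streamer's helper script,
// e.g. <script src="http://localhost:8000/webrtc-streamer.js"></script>,
// and contains <video id="video"></video>.
if (typeof document !== "undefined") {
  const webRtcServer = new WebRtcStreamer(
    "video",
    buildServerUrl(location.protocol, location.hostname, 8000)
  );
  webRtcServer.connect("rtsp://camera.local:554/h264"); // placeholder URL
  window.onbeforeunload = () => webRtcServer.disconnect();
}
```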

Using the web component can be a simple way to display a WebRTC stream; a minimal page could be :

Live Demo

Using the web component with a stream selector :

Screenshot

Live Demo

Using the web component over Google Maps :

Screenshot

Live Demo

Screenshot

Live Demo

A simple way to publish a WebRTC stream to a Janus Gateway Video Room is to use the JanusVideoRoom interface.

A short sample to publish WebRTC streams to Janus Video Room could be :
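
The sample is not reproduced here; below is a hypothetical sketch, assuming a `JanusVideoRoom(janusUrl, streamerUrl)` constructor and a `join(room, url, name)` method — these signatures are guesses about the helper's interface, not confirmed by the source:

```javascript
// Pure helper describing the publish request (testable without a gateway).
function roomDescription(roomId, streamUrl, publisherName) {
  return { room: roomId, url: streamUrl, name: publisherName };
}

// Browser-only part, guarded so the sketch also parses under Node.
// Constructor/method names below are assumptions about the helper's interface.
if (typeof document !== "undefined" && typeof JanusVideoRoom !== "undefined") {
  const janus = new JanusVideoRoom("https://janus.example.com/janus", // hypothetical gateway
                                   "http://localhost:8000");          // webrtc-streamer API
  const req = roomDescription(1234, "rtsp://camera.local:554/h264", "cam1");
  janus.join(req.room, req.url, req.name);
}
```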

Screenshot

Live Demo

This way the communication between the Janus API and the WebRTC Streamer API is implemented in JavaScript running in the browser.

The same logic could be implemented in NodeJS using the same JS API :

A simple way to publish a WebRTC stream to a Jitsi Video Room is to use the XMPPVideoRoom interface.

A short sample to publish WebRTC streams to a Jitsi Video Room could be :

Live Demo

You can start the application using the docker image :

You can expose V4L2 devices from your host using :

The container entry point is the webrtc-streamer application, then you can :

  • get the help using :

  • run the container registering an RTSP URL using :

  • run the container giving config.json file using :

Source: https://github.com/mpromonet/webrtc-streamer

olijouve/webrtc-streamer-card

Home Assistant Lovelace card that streams zero-delay video from webrtc-streamer (RTSP, H264, H265...)

You need a running webrtc-streamer instance; see the official repository for more details and full options: https://github.com/mpromonet/webrtc-streamer

webrtc-streamer is based on the live555 library, so it should be able to process MPEG, H.265, H.264, H.263+, DV or JPEG video, and several audio codecs. http://www.live555.com/liveMedia/

It can be run most simply through Docker:

Clone or uncompress this repo into your Home Assistant www directory.

The custom card must be added as a module in your dashboard resources :

Then in Lovelace add your card manually

  • stream_uri : URI or label name of your original device video stream (RTSP, RTP). For labels, see the webrtc-streamer documentation.
  • webrtc_server_url : URL and port of your webrtc-streamer instance.

Please note this is only a visualisation card: no entity can be created, and it is not streamable anywhere other than in a WebRTC-compatible browser, but it's still very nice for "realtime" PTZ positioning.

webrtc-streamer crashed and over-consumed my laptop resources while I was stress testing it, but it seems to be the best solution for my personal case and plays very nicely with my cheap Chinese cam. It deserves an add-on...

Disclaimer : This simple card is given as is, more as a proof of concept, because I have no skills in Lovelace card development and no support will be provided. Feel free to fork the repo, enhance it and share it back.

Source: https://githubmemory.com/repo/olijouve/webrtc-streamer-card

gwicke/webrtc-streamer

Name: webrtc-streamer

Owner:Gabriel Wicke

Description: WebRTC streamer for V4L2 capture devices and RTSP sources

Forked from:mpromonet/webrtc-streamer

Created: Jul 13, 2017

Updated: Jul 13, 2017

Pushed: Jul 14, 2017

Homepage:https://webrtc-streamer.herokuapp.com/

Size: 911

Language: C++


README


WebRTC-streamer

This is an experiment to stream video sources through WebRTC using a simple mechanism.

It embeds an HTTP server that implements the API and serves a simple HTML page that uses it through AJAX.

The WebRTC signaling is implemented through HTTP requests:

  • /call : send offer and get answer

  • /hangup : close a call

  • /addIceCandidate : add a candidate

  • /getIceCandidate : get the list of candidates

The list of HTTP APIs is available at /help.

Nowadays there are 2 builds on Travis CI :

  • for x86_64 on Ubuntu Trusty
  • for arm, cross-compiling with gcc-linaro-arm-linux-gnueabihf-raspbian-x64 (this build runs on Raspberry Pi and NanoPi NEO)
Dependencies :

It is based on :

Build

Build WebRTC with H264 support
Build live555 to enable RTSP support (optional)
Build WebRTC Streamer

where WEBRTCROOT and WEBRTCBUILD indicate how to point to WebRTC :

  • $WEBRTCROOT/src should contain the source
  • $WEBRTCROOT/src/out/$WEBRTCBUILD should contain the libraries, and PREFIX should point to the live555 installation (default is /usr/local)

Usage

Example

Screenshot

Live Demo

You can access the WebRTC stream coming from an RTSP URL using the webrtcstream.html page with the RTSP URL as argument, something like:

https://webrtc-streamer.herokuapp.com/webrtcstream.html?rtsp://217.17.220.110/axis-media/media.amp

Embed in a HTML page:

Instead of using the internal HTTP server, it is easy to display a WebRTC stream in an HTML page served by an external HTTP server. The URL of the webrtc-streamer to use should be given when creating the WebRtcStreamer instance :

A short sample using webrtc-streamer running locally on port 8000 :

Connect to Janus Gateway Video Room

A simple way to publish WebRTC stream to a Janus Gateway Video Room is to use the JanusVideoRoom interface

A short sample to publish WebRTC streams to Janus Video Room could be :

Live Demo

This way the communication between the Janus API and the WebRTC Streamer API is implemented in JavaScript running in the browser. The same logic could be implemented using NodeJS, or any language able to make HTTP requests.

Docker image

You can start the application using the docker image :

The container accepts arguments that are forwarded to the webrtc-streamer application, so you can :

  • get the help using :

  • expose the V4L2 device /dev/video0 using :


Source: https://labs.cd2h.org/gitforager/repository/repository.jsp?id=97146590


Hosting WebRTC Server on IP Camera

Popular industrial and home monitoring systems are based on a central media server that connects cameras with the end users. This architecture is fine for most use cases, but it has some drawbacks, such as higher latency, privacy concerns (if you use a 3rd-party server), and high cost.

In this article we present a peer-to-peer alternative: let's remove the media server and directly access a camera streaming service running on the camera itself.

The project is based on WebRTC for audio and video streaming to a web browser. Access to the server over the Internet is possible thanks to Husarnet VPN Client.

Here are some of the advantages of our solution:

  • low latency over the Internet
  • simple infrastructure architecture (only your laptop and Internet camera)
  • quick setup (everything is dockerized)

Basically, all the WebRTC infrastructure is hosted on the Internet camera (single-board computer + webcam) together with a simple web server.

Janus WebRTC server hosted inside a Docker container on Raspberry Pi with remote access over the internet

TL;DR

Use the prebuilt Docker image from our Docker Hub account to run the project faster.

👉 https://hub.docker.com/r/husarnet/webrtc-streamer 👈

Supported architectures:

  • (Intel x64)
  • (e.g. Raspberry Pi 4)

Otherwise, the following steps will show you how to build and run a container by yourself.

About WebRTC#

WebRTC is a technology designed for web browsers for real-time audio and video streaming. It is commonly used in teleconferencing products like Google Meet, Jitsi or TokBox, to mention a few. External WebRTC servers help web browsers establish a real-time connection over the Internet.

In this project we run the WebRTC server not on an external server, but on the Internet camera itself. That makes the infrastructure setup and maintenance far easier. Establishing the P2P connection is done by Husarnet VPN, so we do not need to host WebRTC servers with a static IP any more.

When it comes to WebRTC streaming there are multiple options available, including but not limited to:

In this project we have chosen Janus as it's a free, open-source solution with relatively easy installation and configuration. In combination with FFmpeg, a simple websocket server written in Python, and Husarnet's P2P connection establishment, we are able to provide a video stream over the WAN with latency as low as 200 - 400 ms.

Why use WebRTC?

One potential question about the technical feasibility of this project may be "Why use WebRTC instead of a simple RTSP server?". The answer is quite straightforward:

RTSP is not directly supported by web browsers.

Requirements#

  1. A single-board computer (SBC) with a connected USB camera

  2. End user's laptop running Linux with Firefox or Chrome web browser to access a video stream over the Internet.

  3. Husarnet VPN Join Code.

    You will find your Join Code at https://app.husarnet.com
    -> Click on the desired network
    -> button
    -> tab

The whole project runs in a Docker Container.

The SBC with a connected USB camera is in our case a Raspberry Pi 4 running Ubuntu 20.04 with a Logitech C920 (the old version of the C920 had embedded H.264 support; the current model unfortunately does not).

If the connected USB camera provides an H.264 stream, that stream is used directly by the WebRTC server. If not, the FFmpeg VP8 codec is used.

To access the web server with the video stream over the Internet, you need to be in the same Husarnet VPN network. So basically, install the Husarnet VPN client on your laptop and add it to the same Husarnet network.

info

The image has been built and run on the following clean installations of host systems:

Installing Docker#

If you have Docker already installed - you can skip this step.

The official instructions are the best tutorial, but here's a quick rundown for you:

Cloning the Example and Building an Image#

All code is in the GitHub repository:

Ensure the bash script is executable.

Then build an image.

Raspberry Pi Note

On Raspberry Pi OS you may see a signature error when trying to build the image. In order to solve this you need to manually install the latest libseccomp2.

To do so go to: https://packages.debian.org/sid/libseccomp2 and download armhf version.

Then install it as such:

Starting the Project Using #

Creating File:#

After you have created it, specify the Husarnet JoinCode and hostname there. Also change to if you can't hear any sound. The file should look something like this:

You will find your JoinCode at https://app.husarnet.com
-> Click on the desired network
-> button
-> tab
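
A sketch of what such a file might contain; the variable names `JOINCODE` and `HOSTNAME` follow the description above, while `AUDIO` stands in for the (unnamed) sound switch and is purely hypothetical — check the repository for the real names:

```
# Hypothetical env-file sketch; variable names are assumptions.
JOINCODE=<your-husarnet-join-code>
HOSTNAME=webrtc-streamer-cam
AUDIO=true   # set to false if you can't hear any sound
```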

Running a Container#

description:

  • - you need to make as a volume to preserve its state, for example if you would like to update the image your container is based on. If you would like to run multiple containers on your host machine, remember to provide a unique volume name for each container (in our case ).
  • - you need to give the container access to your webcam, in this case , which will be referenced in the script as

Result#

Running the above commands should result in the following output:

SBC + webcam shell

Accessing a Web User Interface and a Stream#

To access a web UI and video stream hosted by the SBC, you need to connect your computer running Firefox or Chrome web browser to the same Husarnet network as the SBC.

Simply use the same Husarnet Join Code as used before for the SBC.

  1. Save your Husarnet VPN Join Code as an environmental variable:

  2. Install Husarnet VPN client:

    For more detail about installation of the VPN client and managing networks go to https://husarnet.com/docs/begin-linux

  3. Connect your laptop to the same Husarnet VPN network as SBC, by using the same Husarnet Join Code:

  4. Open the URL provided by the SBC in Firefox or Chrome web browser on your laptop:

    streamer ui

    With stream parameters panel after expansion looking as such

    streamer ui expanded

info

Due to the way Janus handles networking, you may need to disable mDNS in your browser settings in order to view the stream.

On Firefox open the URL: and set:

Remarks#

tip

Lowering latency#

In order to stream with the lowest latency possible, it is advisable to use a camera that offers a feed pre-encoded with the H264 codec (such as the Logitech C920 which was used during testing). The application detects camera support for the H264 codec and makes FFmpeg take advantage of it, which reduces latency significantly. However, any camera that offers a feed in YUYV format is supported.

On Linux you can check the available output stream options for your USB camera like this:

Another important thing is to ensure that a P2P connection has been established between the hosts; the connection status is detected by the server running in the container and sent to the UI, which displays it in the lower part of the screen.
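
As a browser-side illustration of that status display (the websocket path `/status`, the message field `peer_to_peer`, and the element id are all assumptions about the blog's unnamed UI code):

```javascript
// Pure helper turning a status message into the UI text (testable).
function formatStatus(msg) {
  return msg.peer_to_peer ? "P2P connection established" : "relayed connection";
}

// Browser-only part: subscribe to the (assumed) status websocket.
if (typeof document !== "undefined" && typeof WebSocket !== "undefined") {
  const ws = new WebSocket(`ws://${location.host}/status`); // hypothetical path
  ws.onmessage = (ev) => {
    const el = document.getElementById("connection-status"); // hypothetical id
    if (el) el.textContent = formatStatus(JSON.parse(ev.data));
  };
}
```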

If you face any issues in establishing a P2P connection, read our troubleshooting guide.

tip

Audio streaming#

By default audio is streamed alongside the video, and can be muted in the UI.

If you don't want to stream audio at all, in order to preserve CPU or bandwidth, it can be stopped by setting the environment variable to false when running the container; this stops audio streaming and encoding.

tip

Testing feed#

If you don't have a camera at hand, or just want to quickly test the stream without bothering with the hardware aspects of the setup, it is possible to use a feed generated by FFmpeg rather than one coming from a camera. To do so, just set the TEST environment variable to false when running the container.

tip

Adding support for different codecs#

You can quite easily add support for any of the codecs used by WebRTC, by implementing your own functions that set up FFmpeg pipelines and calling them in the appropriate places in the websocket server.

Summary#

In this blog post we have shown a simple and fast way to set up a WebRTC stream over a VPN, all inside a Docker container.

The solution is quite flexible, with all the WebRTC infrastructure contained inside the container, which can run on a range of hosts from a Raspberry Pi to a standard laptop.

Thanks to the high configurability of stream parameters, codecs and other options, the described solution can be adapted to different use cases, ranging from telepresence robot remote control to surveillance or "baby monitoring".

If you would like to send a comment to this blog post, you can do that on Husarnet Community Forum.

Read also ...#

Tags: docker, webrtc, janus, Raspberry Pi

Source: https://husarnet.com/blog/webrtc-streamer/