WebRTC – Browser Support

The Web is moving fast and is always improving. New standards are created every day, and browsers install updates without the user ever noticing, so it pays to keep up with what is going on in the world of the Web and WebRTC. Here is an overview of where things stand today.

Browser Support

Not every browser has the same WebRTC features at the same time. Different browsers may be ahead of the curve, which makes some WebRTC features work in one browser and not another. You can check the up-to-date WebRTC support status at http://caniuse.com/#feat=rtcpeerconnection.

Chrome, Firefox, and Opera

The latest versions of Chrome, Firefox, and Opera on mainstream PC operating systems such as Mac OS X, Windows, and Linux all support WebRTC out of the box. Most importantly, the Chrome and Firefox developer teams have been working together to fix issues so that these two browsers can communicate with each other easily.

Android OS

On Android, WebRTC applications in Chrome and Firefox should work out of the box from Android Ice Cream Sandwich (4.0) onwards. This is due to the code sharing between the desktop and mobile versions.

Apple

Apple has not yet made any announcement about its plans to support WebRTC in Safari on OS X. One possible workaround for hybrid native iOS applications is to embed the WebRTC code directly into the application and load the app into a WebView.

Internet Explorer

Microsoft does not support WebRTC on the desktop. However, it has officially confirmed that it is going to implement ORTC (Object Real-Time Communications) in future versions of IE (Edge). It is not planning to support WebRTC 1.0. Microsoft labels ORTC as WebRTC 1.1, although it is a community enhancement and not the official standard.
Recently, Microsoft added ORTC support to the latest Microsoft Edge version. You can learn more at https://blogs.windows.com/msedgedev/2015/09/18/ortc-api-is-now-available-in-microsoft-edge/.

Summary

Note that WebRTC is a collection of APIs and protocols, not a single API, and support for each of them is developing at a different pace across browsers and operating systems. A great way to check the latest level of support is http://caniuse.com, which tracks the adoption of modern APIs across multiple browsers. You can also find the latest information on browser support, as well as WebRTC demos, at http://www.webrtc.org, which is supported by Mozilla, Google, and Opera.
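Because support varies per browser and version, it is more robust to feature-detect WebRTC at runtime than to parse version numbers. The following is a minimal sketch, not part of the original tutorial; the function name checkWebRTCSupport is our own, and it checks the prefixed API variants that browsers shipped at the time:

```javascript
// Feature-detect WebRTC support instead of sniffing browser versions.
// Checks the prefixed variants that Chrome and Firefox used to ship.
function checkWebRTCSupport() {
   // Guard for non-browser environments where these globals are missing
   var nav = (typeof navigator !== "undefined") ? navigator : {};
   var win = (typeof window !== "undefined") ? window : {};

   var hasGetUserMedia = !!(nav.getUserMedia || nav.webkitGetUserMedia ||
      nav.mozGetUserMedia);
   var hasPeerConnection = !!(win.RTCPeerConnection ||
      win.webkitRTCPeerConnection || win.mozRTCPeerConnection);

   return {
      getUserMedia: hasGetUserMedia,
      peerConnection: hasPeerConnection
   };
}
```

An application can then degrade gracefully (for example, fall back to a text-only chat) when either capability is missing.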

WebRTC – Security

In this chapter, we are going to add security features to the signaling server we created in the "WebRTC Signaling" chapter. There will be two enhancements −

User authentication using a Redis database
Enabling a secure socket connection

Firstly, you should install Redis −

Download the latest stable release at http://redis.io/download (3.0.5 in my case).
Unpack it.
Inside the downloaded folder, run sudo make install.
After the installation is finished, run make test to check whether everything is working correctly.

Redis has two executable commands −

redis-cli − the command line interface for Redis (the client part)
redis-server − the Redis data store

To run the Redis server, type redis-server in the terminal console. Then open a new terminal window and run redis-cli to open a client application.

Basically, Redis is a key-value database. To create a key with a string value, you use the SET command, and to read the key value you use the GET command. Let's add two users and their passwords. The keys will be the usernames, and the values of these keys will be the corresponding passwords.

Now we should modify our signaling server to add user authentication. Add the following code to the top of the server.js file −

//require the redis library in Node.js
var redis = require("redis");

//creating the redis client object
var redisClient = redis.createClient();

In the above code, we require the Redis library for Node.js and create a Redis client for our server.
To add the authentication, modify the message handler on the connection object −

//when a user connects to our server
wss.on("connection", function(connection) {
   console.log("user connected");

   //when the server gets a message from a connected user
   connection.on("message", function(message) {
      var data;

      //accepting only JSON messages
      try {
         data = JSON.parse(message);
      } catch (e) {
         console.log("Invalid JSON");
         data = {};
      }

      //check whether the user is authenticated
      if (data.type != "login") {
         //if the user is not authenticated
         if (!connection.isAuth) {
            sendTo(connection, {
               type: "error",
               message: "You are not authenticated"
            });
            return;
         }
      }

      //switching on the type of the user message
      switch (data.type) {
         //when a user tries to log in
         case "login":
            console.log("User logged:", data.name);

            //get the password for this username from the redis database
            redisClient.get(data.name, function(err, reply) {
               //check if the password matches the one stored in redis
               var loginSuccess = reply === data.password;

               //if anyone is already logged in with this username,
               //or the password is incorrect, refuse the login
               if (users[data.name] || !loginSuccess) {
                  sendTo(connection, {
                     type: "login",
                     success: false
                  });
               } else {
                  //save the user connection on the server
                  users[data.name] = connection;
                  connection.name = data.name;
                  connection.isAuth = true;

                  sendTo(connection, {
                     type: "login",
                     success: true
                  });
               }
            });

            break;
      }
   });
});

//*****other handlers*******

In the above code, when a user tries to log in, we fetch his password from Redis, check whether it matches the stored one, and, on success, store his username on the server. We also set the isAuth flag on the connection to mark the user as authenticated. Notice this code −

//check whether the user is authenticated
if (data.type != "login") {
   //if the user is not authenticated
   if (!connection.isAuth) {
      sendTo(connection, {
         type: "error",
         message: "You are not authenticated"
      });
      return;
   }
}

If an unauthenticated user tries to send an offer or leave the connection, we simply send an error back.
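The authentication gate and the JSON guard above can be factored into small pure functions, which makes them easy to unit-test outside the server. This is a sketch of the same logic; the helper names isMessageAllowed and parseMessage are our own, not part of the tutorial code:

```javascript
// Sketch of the authentication gate as a pure function:
// a message is allowed through if it is a login attempt,
// or if the sending connection has already authenticated.
function isMessageAllowed(data, connection) {
   if (data.type === "login") {
      return true;
   }
   return !!connection.isAuth;
}

// Safe JSON parsing, as in the handler above:
// invalid input becomes an empty object instead of throwing.
function parseMessage(message) {
   try {
      return JSON.parse(message);
   } catch (e) {
      return {};
   }
}
```

The server's message handler would then reduce to parsing the message, checking isMessageAllowed, and dispatching on data.type.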
The next step is enabling a secure socket connection, which is highly recommended for WebRTC applications. PKI (Public Key Infrastructure) relies on digital signatures from a CA (Certificate Authority). Users then check that the private key used to sign a certificate matches the public key of the CA's certificate. For development purposes, we will use a self-signed security certificate.

We will use openssl, an open source tool that implements the SSL (Secure Sockets Layer) and TLS (Transport Layer Security) protocols. It is often installed by default on Unix systems. Run openssl version -a to check whether it is installed.

To generate the public and private security certificate keys, follow the steps given below −

Generate a temporary server password key
openssl genrsa -des3 -passout pass:12345 -out server.pass.key 2048

Generate a server private key
openssl rsa -passin pass:12345 -in server.pass.key -out server.key

Generate a signing request. You will be asked additional questions about your company. Just hit the "Enter" key every time.
openssl req -new -key server.key -out server.csr

Generate the certificate
openssl x509 -req -days 1095 -in server.csr -signkey server.key -out server.crt

Now you have two files, the certificate (server.crt) and the private key (server.key). Copy them into the signaling server root folder. To enable the secure socket connection, modify our signaling server.
//require the file system module
var fs = require("fs");
var httpServ = require("https");

//https://github.com/visionmedia/superagent/issues/205
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";

//our secure server will bind to port 9090
var cfg = {
   port: 9090,
   ssl_key: "server.key",
   ssl_cert: "server.crt"
};

//in case of an http request just send back "OK"
var processRequest = function(req, res) {
   res.writeHead(200);
   res.end("OK");
};

//create our server with SSL enabled
var app = httpServ.createServer({
   key: fs.readFileSync(cfg.ssl_key),
   cert: fs.readFileSync(cfg.ssl_cert)
}, processRequest).listen(cfg.port);

//require our websocket library
var WebSocketServer = require("ws").Server;

//creating a websocket server on top of our HTTPS server
var wss = new WebSocketServer({server: app});

//all users connected to the server
var users = {};

//require the redis library in Node.js
var redis = require("redis");

//creating the redis client object
var redisClient = redis.createClient();

//when a user connects to our server
wss.on("connection", function(connection) {
   //…other code
});

In the above code, we require the fs library to read the private key and certificate, and create the cfg object with the binding port and the paths to the private key and certificate. Then we create an HTTPS server with our keys, along with a WebSocket server on port 9090.

Now open https://localhost:9090 in your browser and click the "continue anyway" button past the self-signed certificate warning. You should see the "OK" message.

To test our secure signaling server, we will modify the chat application we created in the "WebRTC Text Demo" tutorial. We just need to add a password field. The following is the

WebRTC – MediaStream APIs

The MediaStream API was designed to make it easy to access media streams from local cameras and microphones. The getUserMedia() method is the primary way to access local input devices.

The API has a few key points −

A real-time media stream is represented by a stream object in the form of video or audio.
It provides a level of security through user permissions, asking the user before a web application can start fetching a stream.
The selection of input devices is handled by the MediaStream API (for example, when there are two cameras or microphones connected to the device).

Each MediaStream object includes several MediaStreamTrack objects. They represent video and audio from different input devices. Each MediaStreamTrack object may include several channels (right and left audio channels). These are the smallest parts defined by the MediaStream API.

There are two ways to output MediaStream objects. First, we can render the output into a video or audio element. Second, we can send the output to an RTCPeerConnection object, which then sends it to a remote peer.

Using the MediaStream API

Let's create a simple WebRTC application. It will show a video element on the screen, ask the user for permission to use the camera, and show a live video stream in the browser.
Create an index.html file −

<!DOCTYPE html>
<html lang = "en">
   <head>
      <meta charset = "utf-8" />
   </head>

   <body>
      <video autoplay></video>
      <script src = "client.js"></script>
   </body>
</html>

Then create the client.js file and add the following −

function hasUserMedia() {
   //check if the browser supports WebRTC
   return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
      navigator.mozGetUserMedia);
}

if (hasUserMedia()) {
   navigator.getUserMedia = navigator.getUserMedia ||
      navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

   //enabling video and audio channels
   navigator.getUserMedia({ video: true, audio: true }, function (stream) {
      var video = document.querySelector("video");

      //inserting our stream into the video tag
      video.src = window.URL.createObjectURL(stream);
   }, function (err) {});
} else {
   alert("WebRTC is not supported");
}

Here we create the hasUserMedia() function, which checks whether WebRTC is supported or not. Then we call the getUserMedia function, whose second parameter is a callback that accepts the stream coming from the user's device. Then we load our stream into the video element using window.URL.createObjectURL, which creates a URL representing the object given as its parameter.

Now refresh your page, click Allow, and you should see your face on the screen. Remember to run all your scripts using the web server we installed in the WebRTC Environment tutorial.

MediaStream API Properties

MediaStream.active (read only) − Returns true if the MediaStream is active, or false otherwise.
MediaStream.ended (read only, deprecated) − Returns true if the ended event has been fired on the object, meaning that the stream has been completely read, or false if the end of the stream has not been reached.
MediaStream.id (read only) − A unique identifier for the object.
MediaStream.label (read only, deprecated) − A unique identifier assigned by the user agent.
Event Handlers

MediaStream.onactive − A handler for an active event that is fired when a MediaStream object becomes active.
MediaStream.onaddtrack − A handler for an addtrack event that is fired when a new MediaStreamTrack object is added.
MediaStream.onended (deprecated) − A handler for an ended event that is fired when the streaming is terminating.
MediaStream.oninactive − A handler for an inactive event that is fired when a MediaStream object becomes inactive.
MediaStream.onremovetrack − A handler for a removetrack event that is fired when a MediaStreamTrack object is removed from it.

Methods

MediaStream.addTrack() − Adds the MediaStreamTrack object given as argument to the MediaStream. If the track has already been added, nothing happens.
MediaStream.clone() − Returns a clone of the MediaStream object with a new ID.
MediaStream.getAudioTracks() − Returns a list of the audio MediaStreamTrack objects from the MediaStream object.
MediaStream.getTrackById() − Returns the track by ID. If the argument is empty or the ID is not found, it returns null. If several tracks have the same ID, it returns the first one.
MediaStream.getTracks() − Returns a list of all MediaStreamTrack objects from the MediaStream object.
MediaStream.getVideoTracks() − Returns a list of the video MediaStreamTrack objects from the MediaStream object.
MediaStream.removeTrack() − Removes the MediaStreamTrack object given as argument from the MediaStream. If the track has already been removed, nothing happens.
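As an aside, browsers have since introduced a promise-based variant of this API, navigator.mediaDevices.getUserMedia. The following is a hedged sketch, not part of the original tutorial; the helper name requestStream is our own, and the mediaDevices object is passed in as an argument so the logic can be exercised outside a browser:

```javascript
// Sketch of a promise-based wrapper around mediaDevices.getUserMedia.
// The mediaDevices dependency is injected so it can be faked in tests.
function requestStream(mediaDevices, constraints) {
   if (!mediaDevices || typeof mediaDevices.getUserMedia !== "function") {
      return Promise.reject(new Error("getUserMedia is not supported"));
   }
   return mediaDevices.getUserMedia(constraints);
}
```

In a browser you would call requestStream(navigator.mediaDevices, { video: true, audio: true }) and attach the resolved stream to the video element.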
To test the above APIs, change the index.html in the following way −

<!DOCTYPE html>
<html lang = "en">
   <head>
      <meta charset = "utf-8" />
   </head>

   <body>
      <video autoplay></video>

      <div><button id = "btnGetAudioTracks">getAudioTracks()</button></div>
      <div><button id = "btnGetTrackById">getTrackById()</button></div>
      <div><button id = "btnGetTracks">getTracks()</button></div>
      <div><button id = "btnGetVideoTracks">getVideoTracks()</button></div>
      <div><button id = "btnRemoveAudioTrack">removeTrack() – audio</button></div>
      <div><button id = "btnRemoveVideoTrack">removeTrack() – video</button></div>

      <script src = "client.js"></script>
   </body>
</html>

We added a few buttons to try out several MediaStream APIs. Then we should add event handlers for our newly created buttons. Modify the client.js file this way −

var stream;

function hasUserMedia() {
   //check if the browser supports WebRTC
   return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
      navigator.mozGetUserMedia);
}

if (hasUserMedia()) {
   navigator.getUserMedia = navigator.getUserMedia ||
      navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

   //enabling video and audio channels
   navigator.getUserMedia({ video: true, audio: true }, function (s) {
      stream = s;
      var video = document.querySelector("video");

      //inserting our stream into the video tag
      video.src = window.URL.createObjectURL(stream);
   }, function (err) {});
} else {
   alert("WebRTC is not supported");
}

btnGetAudioTracks.addEventListener("click", function() {
   console.log("getAudioTracks()");
   console.log(stream.getAudioTracks());
});

btnGetTrackById.addEventListener("click", function() {
   console.log("getTrackById()");
   console.log(stream.getTrackById(stream.getAudioTracks()[0].id));
});

btnGetTracks.addEventListener("click", function() {
   console.log("getTracks()");
   console.log(stream.getTracks());
});

btnGetVideoTracks.addEventListener("click", function() {
   console.log("getVideoTracks()");
   console.log(stream.getVideoTracks());
});
btnRemoveAudioTrack.addEventListener("click", function() {
   console.log("removeTrack() – audio");
   stream.removeTrack(stream.getAudioTracks()[0]);
});

btnRemoveVideoTrack.addEventListener("click", function() {
   console.log("removeTrack() – video");
   stream.removeTrack(stream.getVideoTracks()[0]);
});

Now refresh your page. Click the getAudioTracks() button, then click the removeTrack() – audio button. The audio track should now be removed. Then do the same for the video track.

If you click the getTracks() button, you should see all MediaStreamTracks (all connected video and audio inputs). Then click getTrackById() to get the audio MediaStreamTrack.

Summary

In this chapter, we created a simple WebRTC application using the MediaStream API. Now you should have a clear

WebRTC – Mobile Support

In the mobile world, WebRTC support is not on the same level as it is on desktops. Mobile devices have their own way of doing things, so WebRTC is also something different on mobile platforms.

When developing a WebRTC application for desktop, we consider using Chrome, Firefox, or Opera. All of them support WebRTC out of the box. In general, you just need a browser and need not bother about the desktop's hardware.

In the mobile world there are three possible modes for WebRTC today −

The native application
The browser application
The native browser

Android

In 2013, the Firefox web browser for Android was released with WebRTC support out of the box. Now you can make video calls on Android devices using the Firefox mobile browser. It has three main WebRTC components −

PeerConnection − enables calls between browsers
getUserMedia − provides access to the camera and microphone
DataChannels − provides peer-to-peer data transfer

Google Chrome for Android provides WebRTC support as well. As you have already noticed, the most interesting features usually appear first in Chrome. In the past year, the Opera mobile browser also gained WebRTC support. So for Android you have Chrome, Firefox, and Opera. Other browsers don't support WebRTC.

iOS

Unfortunately, WebRTC is not supported on iOS at present. Although WebRTC works well on Mac when using Firefox, Opera, or Chrome, it is not supported on iOS, so your WebRTC application won't work on Apple mobile devices out of the box.

But there is a browser − Bowser. It is a web browser developed by Ericsson, and it supports WebRTC out of the box. You can check its homepage at http://www.openwebrtc.org/bowser/. Today, it is the only friendly way to run your WebRTC application on iOS. Another way is to develop a native application yourself.

Windows Phone

Microsoft doesn't support WebRTC on mobile platforms.
But Microsoft has officially confirmed that it is going to implement ORTC (Object Real-Time Communications) in future versions of IE. It is not planning to support WebRTC 1.0. Microsoft labels ORTC as WebRTC 1.1, although it is just a community enhancement and not the official standard. So today Windows Phone users can't use WebRTC applications, and there is no way around this situation.

Blackberry

WebRTC applications are not supported on Blackberry either, in any form.

Using a WebRTC Native Browser

The most convenient and comfortable case for users is utilizing WebRTC through the native browser of the device. In this case, the device is ready to work without any additional configuration. Today only Android devices running version 4 or higher provide this feature. Apple still doesn't show any activity regarding WebRTC support, so Safari users can't use WebRTC applications. Microsoft also did not introduce it in Windows Phone 8.

Using WebRTC via Browser Applications

This means using third-party applications (non-native web browsers) to provide the WebRTC features. For now, there are two such third-party applications: Bowser, which is the only way to bring WebRTC features to iOS devices, and Opera, which is a nice alternative for the Android platform. The rest of the available mobile browsers don't support WebRTC.

Native Mobile Applications

As you can see, WebRTC does not have large support in the mobile world yet. So one of the possible solutions is to develop a native application that utilizes the WebRTC API. But this is not the best choice, because the main WebRTC feature is being a cross-platform solution. Anyway, in some cases this is the only way, because a native application can utilize device-specific functions or features that are not supported by HTML5 browsers.

Constraining the Video Stream for Mobile and Desktop Devices

The first parameter of the getUserMedia API expects an object of keys and values telling the browser how to process streams.
You can check the full set of constraints at https://tools.ietf.org/html/draft-alvestrand-constraints-resolution-03. You can set up the video aspect ratio, frameRate, and other optional parameters.

Supporting mobile devices is one of the biggest pains, because mobile devices have limited screen space along with limited resources. You might want a mobile device to capture only a 480×320 resolution or smaller video stream to save power and bandwidth. Using the user-agent string in the browser is a good way to test whether the user is on a mobile device or not. Let's see an example.

Create the index.html file −

<!DOCTYPE html>
<html lang = "en">
   <head>
      <meta charset = "utf-8" />
   </head>

   <body>
      <video autoplay></video>
      <script src = "client.js"></script>
   </body>
</html>

Then create the following client.js file −

//constraints for the desktop browser
var desktopConstraints = {
   video: {
      mandatory: {
         maxWidth: 800,
         maxHeight: 600
      }
   },
   audio: true
};

//constraints for the mobile browser
var mobileConstraints = {
   video: {
      mandatory: {
         maxWidth: 480,
         maxHeight: 320
      }
   },
   audio: true
};

//if the user is using a mobile browser
if (/Android|iPhone|iPad/i.test(navigator.userAgent)) {
   var constraints = mobileConstraints;
} else {
   var constraints = desktopConstraints;
}

function hasUserMedia() {
   //check if the browser supports WebRTC
   return !!(navigator.getUserMedia || navigator.webkitGetUserMedia ||
      navigator.mozGetUserMedia);
}

if (hasUserMedia()) {
   navigator.getUserMedia = navigator.getUserMedia ||
      navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

   //enabling video and audio channels
   navigator.getUserMedia(constraints, function (stream) {
      var video = document.querySelector("video");

      //inserting our stream into the video tag
      video.src = window.URL.createObjectURL(stream);
   }, function (err) {});
} else {
   alert("WebRTC is not supported");
}

Run the web server using the static command and open the page. You should see the video is 800×600.
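The user-agent branch in client.js above can also be written as a pure function, which makes the constraint selection easy to test without a browser. This is a sketch of the same logic; the helper name pickConstraints is our own, not part of the tutorial code:

```javascript
// Sketch: choosing getUserMedia constraints from the user-agent string,
// mirroring the if/else branch in the demo's client.js.
function pickConstraints(userAgent) {
   var desktopConstraints = {
      video: { mandatory: { maxWidth: 800, maxHeight: 600 } },
      audio: true
   };

   var mobileConstraints = {
      video: { mandatory: { maxWidth: 480, maxHeight: 320 } },
      audio: true
   };

   //mobile devices get the smaller capture resolution
   return /Android|iPhone|iPad/i.test(userAgent) ?
      mobileConstraints : desktopConstraints;
}
```

In the browser you would call pickConstraints(navigator.userAgent) and pass the result straight to getUserMedia.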
Then open this page in a mobile viewport using Chrome developer tools and check the resolution. It should be 480×320.

Constraints are the easiest way to increase the performance of your WebRTC application.

Summary

In this chapter, we learned about the issues that can occur when developing WebRTC applications for mobile devices. We discovered different limitations of supporting the WebRTC API on mobile platforms. We also launched a demo application where we set different constraints for desktop and mobile browsers.

WebRTC – RTCPeerConnection APIs

The RTCPeerConnection API is the core of the peer-to-peer connection between the browsers. To create an RTCPeerConnection object, simply write

var pc = new RTCPeerConnection(config);

where the config argument contains at least one key, iceServers. It is an array of URL objects containing information about STUN and TURN servers, used while finding the ICE candidates. You can find a list of available public STUN servers at code.google.com.

Depending upon whether you are the caller or the callee, the RTCPeerConnection object is used in a slightly different way on each side of the connection. Here is an example of the user's flow −

Register the onicecandidate handler. It sends any ICE candidates to the other peer as they are received.
Register the onaddstream handler. It handles displaying the video stream once it is received from the remote peer.
Register the message handler. Your signaling server should also have a handler for messages received from the other peer. If the message contains an RTCSessionDescription object, it should be added to the RTCPeerConnection object using the setRemoteDescription() method. If the message contains an RTCIceCandidate object, it should be added to the RTCPeerConnection object using the addIceCandidate() method.
Utilize getUserMedia() to set up your local media stream and add it to the RTCPeerConnection object using the addStream() method.
Start the offer/answer negotiation process. This is the only step where the caller's flow is different from the callee's. The caller starts negotiation using the createOffer() method and registers a callback that receives the RTCSessionDescription object. This callback should then add the RTCSessionDescription object to your RTCPeerConnection object using setLocalDescription(). Finally, the caller should send this RTCSessionDescription to the remote peer using the signaling server.
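The caller's negotiation steps above can be sketched as a small helper. This is an illustration only, written in the modern promise-based form of createOffer/setLocalDescription rather than the callback form; the function name startOffer is our own, and the peer connection and signaling-send function are injected so the sequence can be tested with fakes:

```javascript
// Sketch of the caller's offer flow: create an offer, apply it locally,
// then hand it to the signaling server for delivery to the remote peer.
function startOffer(pc, sendToSignalingServer) {
   return pc.createOffer().then(function (offer) {
      return pc.setLocalDescription(offer).then(function () {
         //the signaling server relays the description to the callee
         sendToSignalingServer({ type: "offer", offer: offer });
         return offer;
      });
   });
}
```

The callee's flow is symmetric: it applies the received offer with setRemoteDescription and then calls createAnswer instead of createOffer.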
The callee, on the other hand, registers the same callback, but in the createAnswer() method. Notice that the callee flow is initiated only after the offer is received from the caller.

RTCPeerConnection API Properties

RTCPeerConnection.iceConnectionState (read only) − Returns an RTCIceConnectionState enum that describes the state of the connection. An iceconnectionstatechange event is fired when this value changes. The possible values −

new − the ICE agent is waiting for remote candidates or gathering addresses
checking − the ICE agent has remote candidates, but it has not found a connection yet
connected − the ICE agent has found a usable connection, but is still checking more remote candidates for a better connection
completed − the ICE agent has found a usable connection and stopped testing remote candidates
failed − the ICE agent has checked all the remote candidates but didn't find a match for at least one component
disconnected − at least one component is no longer alive
closed − the ICE agent is closed

RTCPeerConnection.iceGatheringState (read only) − Returns an RTCIceGatheringState enum that describes the ICE gathering state for the connection −

new − the object was just created
gathering − the ICE agent is in the process of gathering candidates
complete − the ICE agent has completed gathering

RTCPeerConnection.localDescription (read only) − Returns an RTCSessionDescription describing the local session. It can be null if it has not yet been set.

RTCPeerConnection.peerIdentity (read only) − Returns an RTCIdentityAssertion. It consists of an idp (domain name) and a name representing the identity of the remote peer.

RTCPeerConnection.remoteDescription (read only) − Returns an RTCSessionDescription describing the remote session. It can be null if it has not yet been set.

RTCPeerConnection.signalingState (read only) − Returns an RTCSignalingState enum that describes the signaling state of the local connection. This state describes the SDP offer.
A signalingstatechange event is fired when this value changes. The possible values −

stable − the initial state; there is no SDP offer/answer exchange in progress
have-local-offer − the local side of the connection has locally applied an SDP offer
have-remote-offer − the remote side of the connection has locally applied an SDP offer
have-local-pranswer − a remote SDP offer has been applied, and an SDP pranswer applied locally
have-remote-pranswer − a local SDP offer has been applied, and an SDP pranswer applied remotely
closed − the connection is closed

Event Handlers

Given below are the commonly used event handlers of RTCPeerConnection.

1. RTCPeerConnection.onaddstream − This handler is called when the addstream event is fired. This event is sent when a MediaStream is added to this connection by the remote peer.

2. RTCPeerConnection.ondatachannel − This handler is called when the datachannel event is fired. This event is sent when an RTCDataChannel is added to this connection.

3. RTCPeerConnection.onicecandidate − This handler is called when the icecandidate event is fired. This event is sent when an RTCIceCandidate object is added to the script.

4. RTCPeerConnection.oniceconnectionstatechange − This handler is called when the iceconnectionstatechange event is fired. This event is sent when the value of iceConnectionState changes.

5. RTCPeerConnection.onidentityresult − This handler is called when the identityresult event is fired. This event is sent when an identity assertion is generated during the creation of an offer or an answer, or via getIdentityAssertion().

6. RTCPeerConnection.onidpassertionerror − This handler is called when the idpassertionerror event is fired. This event is sent when the IdP (Identity Provider) finds an error while generating an identity assertion.

7. RTCPeerConnection.onidpvalidationerror − This handler is called when the idpvalidationerror event is fired. This event is sent when the IdP (Identity Provider) finds an error while validating an identity assertion.

8. RTCPeerConnection.onnegotiationneeded − This handler is called when the negotiationneeded event is fired. This event is sent by the browser to inform that negotiation will be required at some point in the future.

9. RTCPeerConnection.onpeeridentity − This handler is called when the peeridentity event is fired. This event is sent when a peer identity has been set and verified on this connection.

10. RTCPeerConnection.onremovestream − This handler is called when the removestream event is fired. This event is sent when a MediaStream is removed from this connection.

11. RTCPeerConnection.onsignalingstatechange − This handler is called when the signalingstatechange event is fired. This event is sent when the value of signalingState changes.

Methods

Given below are

WebRTC – Architecture

The overall WebRTC architecture has a great level of complexity. It contains three different layers −

API for web developers − this layer contains all the APIs web developers need, including the RTCPeerConnection, RTCDataChannel, and MediaStream objects.
API for browser makers
Overridable API, which browser makers can hook into.

Transport components allow establishing connections across various types of networks, while the voice and video engines are frameworks responsible for transferring audio and video streams from a sound card and camera to the network. For web developers, the most important part is the WebRTC API.

If we look at the WebRTC architecture from the client-server side, we can see that one of the most commonly used models is inspired by the SIP (Session Initiation Protocol) Trapezoid. In this model, both devices are running a web application from different servers. The RTCPeerConnection object configures streams so they can connect to each other, peer-to-peer. The signaling is done via HTTP or WebSockets.

But the most commonly used model is the Triangle. In this model, both devices use the same web application. It gives the web developer more flexibility when managing user connections.

The WebRTC API

It consists of a few main JavaScript objects −

RTCPeerConnection
MediaStream
RTCDataChannel

The RTCPeerConnection object

This object is the main entry point to the WebRTC API. It helps us connect to peers, initialize connections, and attach media streams. It also manages the UDP connection with another user. The main task of the RTCPeerConnection object is to set up and create a peer connection. We can easily hook into the key points of the connection because this object fires a set of events when they occur.
These events give you access to the configuration of our connection. The RTCPeerConnection is a simple JavaScript object, which you can create this way −

[code]
var conn = new RTCPeerConnection(conf);

conn.onaddstream = function(stream) {
   // use stream here
};
[/code]

The RTCPeerConnection object accepts a conf parameter, which we will cover later in these tutorials. The onaddstream event is fired when the remote user adds a video or audio stream to their peer connection.

MediaStream API

Modern browsers give a developer access to the getUserMedia API, also known as the MediaStream API. There are three key points of functionality −

It gives a developer access to a stream object that represents video and audio streams
It manages the selection of input devices in case a user has multiple cameras or microphones on his device
It provides a security level by asking the user for permission every time he wants to fetch a stream

To test this API, let's create a simple HTML page. It will show a single <video> element, ask the user's permission to use the camera, and show a live stream from the camera on the page.
Create an index.html file and add −

[code]
<html>
   <head>
      <meta charset = "utf-8">
   </head>

   <body>
      <video autoplay></video>
      <script src = "client.js"></script>
   </body>
</html>
[/code]

Then add a client.js file −

[code]
//checks if the browser supports WebRTC
function hasUserMedia() {
   navigator.getUserMedia = navigator.getUserMedia ||
      navigator.webkitGetUserMedia || navigator.mozGetUserMedia ||
      navigator.msGetUserMedia;
   return !!navigator.getUserMedia;
}

if (hasUserMedia()) {
   //get both video and audio streams from the user's camera
   navigator.getUserMedia({ video: true, audio: true }, function (stream) {
      var video = document.querySelector("video");

      //insert stream into the video tag
      video.src = window.URL.createObjectURL(stream);
   }, function (err) {});
} else {
   alert("Error. WebRTC is not supported!");
}
[/code]

Now open the index.html and you should see the video stream displaying your face. But be careful: WebRTC only works on pages served from a web server. If you simply open this page from the file system, it won't work. You need to host these files on an Apache or Node server, or whichever one you prefer.

The RTCDataChannel object

As well as sending media streams between peers, you may also send additional data using the DataChannel API. This API is as simple as the MediaStream API. The main job is to create a channel from an existing RTCPeerConnection object −

[code]
var peerConnection = new RTCPeerConnection();

//establishing peer connection
//…
//end of establishing peer connection

var dataChannel = peerConnection.createDataChannel("myChannel", dataChannelOptions);

// here we can start sending direct messages to the other peer
[/code]

This is all you need: just two lines of code. Everything else is done on the browser's internal layer. You can create a channel on any peer connection until the RTCPeerConnection object is closed.
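A common pattern on top of a data channel is to send structured data as JSON strings. The following is a sketch, not part of the original tutorial; the helper name sendJSON is our own, and the channel object is injected so the logic can be tested with a fake (in the browser it would be the RTCDataChannel created above):

```javascript
// Sketch: sending structured data over a data channel by serializing
// it to JSON. An RTCDataChannel only accepts data once its readyState
// is "open", so we guard for that.
function sendJSON(channel, payload) {
   if (channel.readyState !== "open") {
      return false;   // channel not ready yet; caller may queue and retry
   }
   channel.send(JSON.stringify(payload));
   return true;
}
```

The receiving side would parse the JSON back in the channel's onmessage handler.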
Summary

You should now have a firm grasp of the WebRTC architecture. We also covered the MediaStream, RTCPeerConnection, and RTCDataChannel APIs. The WebRTC API is a moving target, so always keep up with the latest specifications.