As you plan your WebRTC project, some decisions you make will significantly impact the capabilities you will be able to offer, the experience for users, how future-proof your deployment is, and the amount of effort you will need to invest in maintaining your service and keeping it up to date.
While many moving parts comprise a communications solution, you'll need to consider the following five primary factors as you work on bringing real-time communications into your environment.
API platforms are a set of servers and client software development kits (SDKs) that provide everything you need for developing a WebRTC service.
On the server side, all API platforms handle basic functions such as signaling between the parties, session connections, and media flows across various network topologies and through network address translation (NAT). Some API platforms enable advanced features, too, including support for multiparty communications, recording, streaming, and third-party integrations for identity management and other capabilities.
On the client SDK side, most API platforms offer support for desktop browsers as well as common mobile devices.
While API platforms can provide a great way to create a WebRTC service, they do have their drawbacks.
Who Should Use?
API platforms are great for Web developers who don't have VoIP experience and want to focus only on their Web services. Such developers typically lack the knowledge to handle the media and network complexities on the server side, and will struggle to get WebRTC working on mobile.
That said, VoIP experts might also find API platforms useful. They can be great for building proofs of concept or launching quick service introductions, leaving the option to rebuild the project as usage increases.
WebRTC is an open standard with an open-source implementation, which means you can take the code and use it on your own.
This puts great responsibility on your plate: you would need to build and maintain every part of the system yourself.
Who Should Use?
VoIP experts who really need full control over every part of the system and can afford the initial and ongoing work. Even then, however, you would want to look at reusable components instead of building everything yourself, as discussed below.
This refers to various components and SDKs that will help you through the process of building your application. These are grouped into:
Client Wrappers + Signaling Server
A client-side wrapper is a set of SDKs that wrap WebRTC on the client side and typically include a signaling server. Since WebRTC APIs change and since browser incompatibility is still an issue, having a wrapper on top of your WebRTC service that is maintained continuously can come in handy, eliminating your need to update your WebRTC client application as WebRTC deployment evolves.
Examples of such SDKs include PeerJS, EasyRTC, simpleWebRTC, and rtc.io. Naturally, these SDKs vary in functionality, maintenance, and the amount of flexibility they provide. Before making your choice, evaluate them based on your application needs, the SDK's future plans, and how easy it will be for you to fork off the SDK's main track if necessary. Give special attention to the proprietary signaling that comes along with the SDK and make sure it answers your application needs. Changing the signaling is possible, but doing so puts further responsibility on you when upgrading to new versions of the SDK.
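Much of what these wrappers do is smooth over browser differences. As a rough illustration of the idea (not code from any of the SDKs above), here is a minimal shim that normalizes the legacy prefixed getUserMedia variants behind one promise-based function; the navigator object is injected so the sketch can be exercised outside a browser:

```javascript
// Minimal sketch of what wrapper SDKs do under the hood: hide browser
// incompatibilities behind one stable, promise-based API. Illustrative only.
function getUserMediaShim(nav) {
  // Modern browsers expose a promise-based API directly.
  if (nav.mediaDevices && nav.mediaDevices.getUserMedia) {
    return (constraints) => nav.mediaDevices.getUserMedia(constraints);
  }
  // Older browsers used callback-based, vendor-prefixed variants.
  const legacy = nav.getUserMedia || nav.webkitGetUserMedia || nav.mozGetUserMedia;
  if (!legacy) {
    return () => Promise.reject(new Error("getUserMedia not supported"));
  }
  // Wrap the callback style in a promise so callers see one interface.
  return (constraints) =>
    new Promise((resolve, reject) => legacy.call(nav, constraints, resolve, reject));
}
```

A wrapper SDK you adopt instead of maintaining such shims yourself will keep this layer updated as browsers change, which is exactly the value proposition described above.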
Server-Side Functional Elements
These are specific functional elements that come hosted in the cloud or with on-premises options. Examples include Twilio's STUN/TURN service and the media server functionality provided by Jitsi and Kurento.
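One practical way to limit the switching cost between such components is to keep their configuration behind a single function of your own. The sketch below does this for STUN/TURN: the hostnames and credentials are placeholders, and the "hosted" branch stands in for any service (such as Twilio's) that hands you a ready-made server list.

```javascript
// Keep ICE server configuration behind one function so a hosted STUN/TURN
// service can later be swapped for a self-hosted deployment (e.g. coturn).
// All hosts and credentials below are illustrative placeholders.
function buildIceConfig(provider) {
  switch (provider.kind) {
    case "hosted":
      // A hosted service typically returns a ready-made iceServers array.
      return { iceServers: provider.servers };
    case "self":
      // Your own servers: one STUN entry plus one authenticated TURN entry.
      return {
        iceServers: [
          { urls: "stun:" + provider.host + ":3478" },
          {
            urls: "turn:" + provider.host + ":3478",
            username: provider.user,
            credential: provider.pass,
          },
        ],
      };
    default:
      throw new Error("unknown provider kind: " + provider.kind);
  }
}
```

The returned object is in the shape `RTCPeerConnection` expects for its configuration, so the rest of the application never learns which provider is behind it.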
You can mix and match such components, but switching from component A to component B takes some work. That is the tradeoff between building a WebRTC service entirely on your own and building only the application level plus the few elements you can't find on the open market.
To me it sounds like a fair compromise.
Who Should Use?
VoIP experts who need control over all parts of the solution but don't want to start from scratch should find value in this approach.
The following decision points are relevant mainly for companies that don't go for a closed API platform, since with an API platform the provider makes these choices for you.
Signaling is going to require your attention, even if you've opted to use one of the available wrapper SDKs or server components rather than build your own WebRTC service.
The main debate that heats up every once in a while is about the use of standard signaling (such as SIP) vs. proprietary signaling. But before getting to that, let's talk about transport. One of the common options regardless of the signaling itself is the WebSocket API, which supports the ability to send and receive messages. A WebSocket is pretty similar in concept to a TCP connection.
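Whichever signaling you choose, standard or proprietary, the messages that cross the WebSocket need some agreed-upon envelope. The sketch below shows one hypothetical JSON envelope (the message types and field names are illustrative, not any standard):

```javascript
// A minimal JSON signaling envelope for use over a WebSocket transport.
// The type names and fields here are an illustrative convention, not a spec.
function makeSignal(type, room, payload) {
  if (!["offer", "answer", "candidate", "bye"].includes(type)) {
    throw new Error("unknown signal type: " + type);
  }
  return JSON.stringify({ type, room, payload, ts: Date.now() });
}

function parseSignal(raw) {
  const msg = JSON.parse(raw);
  // Reject messages that lack the fields the application relies on.
  if (typeof msg.type !== "string" || typeof msg.room !== "string") {
    throw new Error("malformed signal");
  }
  return msg;
}
```

In the browser, the sender side would look something like `ws.send(makeSignal("offer", "room-42", pc.localDescription))`, with the receiver feeding `parseSignal` results into the peer connection.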
If you are building a new stand-alone WebRTC service, you most likely will not have a need for standard signaling. SIP in most cases will be overkill, more complex than necessary for your purposes.
Making the wrong decision on which audio and video codecs to use may mean bad quality of voice or even service failure due to codec incompatibility.
On the voice side, WebRTC mandates Opus and G.711 as its audio codecs, and both have found their way into the browsers. You'll run into problems if you want to connect a WebRTC service to an existing telephony system that doesn't support Opus (most typically don't). Since Opus transcoding is CPU-intensive (and thus increases cost), it is tempting to go with a common codec such as G.711 and avoid the transcoding. This is one thing you really don't want to do if you care about voice call quality: G.711 was not built for the open Internet and degrades badly under packet loss.
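In practice, steering the negotiation toward Opus is often done by reordering the payload types in the SDP so the browser's preferred codec comes first. A simplified sketch of that munging step (real SDP has more edge cases than this handles):

```javascript
// Reorder the m=audio line of an SDP blob so the Opus payload type is
// negotiated first. Simplified sketch -- real SDP has more edge cases.
function preferOpus(sdp) {
  const lines = sdp.split("\r\n");
  // Find the Opus payload type via its rtpmap line,
  // e.g. "a=rtpmap:111 opus/48000/2" -> payload type "111".
  const rtpmap = lines.find((l) => /^a=rtpmap:\d+ opus\//i.test(l));
  if (!rtpmap) return sdp; // no Opus offered, leave the SDP untouched
  const pt = rtpmap.match(/^a=rtpmap:(\d+)/)[1];
  return lines
    .map((l) => {
      if (!l.startsWith("m=audio")) return l;
      // m=audio <port> <proto> <pt1> <pt2> ... -> move Opus to the front.
      const parts = l.split(" ");
      const pts = parts.slice(3).filter((p) => p !== pt);
      return parts.slice(0, 3).concat([pt], pts).join(" ");
    })
    .join("\r\n");
}
```

Note that this only expresses a preference; if the far end can't decode Opus, the negotiation will still fall back to a codec both sides share, which is where the transcoding question above comes back into play.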
After long debates, the IETF decided to make both the VP8 and H.264 video codecs mandatory to implement for WebRTC. We are starting to see browsers adhere to this decision, but not fully. Mozilla has supported both VP8 and H.264 in Firefox for some time now. Google supports VP8 in Chrome and, as of the Chrome 50 beta, also supports H.264 (still behind a flag, however). Microsoft's position is more complicated: today Edge supports an H.264UC variant, but Microsoft has said it plans to support standard H.264 and is also working on adding VP9.
Trying to Clear the Complexity...
If you are mainly expecting usage of Chrome and Firefox browsers, VP8 (and VP9) would be a good choice.
If you are planning to use a plugin to add WebRTC support to Apple's Safari and Microsoft's Internet Explorer browsers, be sure to check which codec the plugin supports. The Temasys WebRTC plugin, for example, supports H.264 in its commercial option.
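The guidance above boils down to intersecting your target browsers against what each one can decode. A toy helper that encodes the compatibility picture as described here (the support table is a snapshot of the situation discussed above and will change as browsers evolve):

```javascript
// Toy decision helper: find a video codec shared by all target browsers.
// The support table below is an illustrative snapshot, not a live source
// of truth -- it reflects the state described in the surrounding text.
const VIDEO_SUPPORT = {
  chrome:  ["VP8", "VP9"],   // H.264 still behind a flag as of Chrome 50 beta
  firefox: ["VP8", "H264"],
  edge:    ["H264UC"],       // standard H.264 and VP9 announced, not shipped
};

function pickVideoCodec(targetBrowsers) {
  // Try the mandatory-to-implement codecs in order of preference.
  for (const codec of ["VP8", "H264"]) {
    if (targetBrowsers.every((b) => (VIDEO_SUPPORT[b] || []).includes(codec))) {
      return codec;
    }
  }
  return null; // no common codec: expect transcoding or a plugin
}
```

For Chrome plus Firefox the helper lands on VP8, matching the recommendation above; adding Edge to the mix yields no common codec, which is exactly the interoperability gap the text describes.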
A Future-Proof Decision
You also need to consider future plans, with H.264 evolving to H.265 and VP8 to VP9. Considering the royalty requirements associated with H.265, and since it looks like all browsers (Safari aside, as it remains a wildcard for now) already support or are going to support VP9, you would probably be better off going the VP8/VP9 route.
However, H.264/H.265 does have a couple of points in its favor worth considering as well.
Server-Side Functional Elements
As you consider the server-side functional components mentioned above, an important first step is to list and prioritize the server functionalities required. Then, based on that list, make decisions with regard to self-development or the use of cloud/on-premises components.
While you'll find some level of vendor lock-in when using third-party server-side components, I believe it is a good compromise that saves a lot of time and money.
Support for WebRTC on mobile devices is twofold -- within mobile browsers and in mobile applications. However, since most mobile phone use is in applications, browsers are important mainly for occasional usage scenarios when someone who is not a regular user of a service comes to a website that offers WebRTC communications.
On the browser side, Chrome and Firefox support WebRTC on Android devices but not on iOS. Safari, of course, doesn't yet support WebRTC on any device, mobile or otherwise.
The solution for iOS will probably come once Apple adds WebRTC to its WebView. (UIWebView allows displaying Web content in an iOS application; a similar WebView concept exists in Android and already includes WebRTC.) This will take time, and there are still open questions on things such as codec support (see a related webinar conducted by the WebRTCStandards.info guys, me included). It will probably happen only in 2017, rather than in 2016 as I previously thought.
We have already seen WebRTC services integrated into both Android and iOS applications. The question is, how hard is it to achieve that? In the past the task was very complex and required hard work compiling WebRTC for the mobile device.
WebRTC is supported in the Android WebView, which gets upgraded automatically in the same way as the Chrome browser itself; this means it is usable in hybrid applications. On iOS, you will need to compile the WebRTC code yourself if you go the do-it-yourself route and don't use an SDK.
Alternatively, you can use a device-agnostic framework. This is the option IBM took, using the iosrtc plugin for Cordova for this purpose.
WebRTC removes a lot of complexity when building a real-time communications service, but you still have many decisions to make and many moving parts to handle. Making the right choice requires study and consulting with people who have already walked this trail.