
How to create your own messaging app?

Messenger, WhatsApp, Slack – there are many messaging apps on the market used by huge numbers of people. Why not build your own? In this article, you will learn how to create a video chat application based on the WebRTC library. Easier said than done? Not necessarily!


WebRTC (Web Real-Time Communication) is a free, open-source project that provides web browsers and mobile applications with real-time communication (RTC) via simple application programming interfaces (APIs). It allows audio and video communication to work inside web pages through direct peer-to-peer communication, eliminating the need for plugins or downloading native apps.
The advancement of WebRTC technology has given impetus to the development of services that use audio / video calls and the creation of online conferences. Various APIs, services, servers, and frameworks have appeared.
In this article, I will give an Android developer the knowledge needed to understand WebRTC and show how to build a video chat, based on that library, between a web browser and a native Android application.
There are many examples of WebRTC implementations for Android on the internet, but few of them explain what the individual methods actually do. Therefore, I will describe the workings of WebRTC in detail, covering the individual methods and how to use them.

In the process of implementing video chat, it is desirable to understand basic concepts such as SDP, ICE Server, Candidates, STUN/TURN.

SDP, Candidate, ICE Server…

The Session Description Protocol (SDP) is a format for describing the parameters of media streams, used for media communication. WebRTC always starts with an exchange of SDP information. Suppose we want to establish communication between the Web and Android platforms. The first thing each side must do is send its SDP information.

    o=- 1990196634084781235 2 IN IP4
    t=0 0
    a=group:BUNDLE video
    a=msid-semantic: WMS ARDAMS local_track_stream
    m=video 9 UDP/TLS/RTP/SAVPF 96 97 98 99 100 101 127 124 125
    c=IN IP4
    a=rtcp:9 IN IP4
    a=ice-options:trickle renomination
    a=fingerprint:sha-256 89:3B:40:0D:47:B8:5F:B0:7D:64:F1:88:B1:91:AE:3B:A4:D5:65:85:34:83:F2:3B:A1:4B:53:FA:19:D1:C8:BA
    a=extmap:14 urn:ietf:params:rtp-hdrext:toffset
    a=extmap:2 http://www.webrtc.org/experiments/rtp-hdrext/abs-send-time
    a=extmap:13 urn:3gpp:video-orientation
    a=extmap:3 http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01
    a=extmap:5 http://www.webrtc.org/experiments/rtp-hdrext/playout-delay
    a=extmap:6 http://www.webrtc.org/experiments/rtp-hdrext/video-content-type
    a=extmap:7 http://www.webrtc.org/experiments/rtp-hdrext/video-timing
    a=extmap:8 http://tools.ietf.org/html/draft-ietf-avtext-framemarking-07
    a=extmap:9 http://www.webrtc.org/experiments/rtp-hdrext/color-space
    a=rtpmap:96 VP8/90000
    a=rtcp-fb:96 goog-remb
    a=rtcp-fb:96 transport-cc
    a=rtcp-fb:96 ccm fir
    a=rtcp-fb:96 nack
    a=rtcp-fb:96 nack pli
    a=rtpmap:97 rtx/90000
    a=fmtp:97 apt=96
    a=rtpmap:98 VP9/90000
    a=rtcp-fb:98 goog-remb
    a=rtcp-fb:98 transport-cc
    a=rtcp-fb:98 ccm fir
    a=rtcp-fb:98 nack
    a=rtcp-fb:98 nack pli
    a=rtpmap:99 rtx/90000
    a=fmtp:99 apt=98
    a=rtpmap:100 H264/90000
    a=rtcp-fb:100 goog-remb
    a=rtcp-fb:100 transport-cc
    a=rtcp-fb:100 ccm fir
    a=rtcp-fb:100 nack
    a=rtcp-fb:100 nack pli
    a=fmtp:100 level-asymmetry-allowed=1;packetization-mode=1;profile-level-id=42e01f
    a=rtpmap:101 rtx/90000
    a=fmtp:101 apt=100
    a=rtpmap:127 red/90000
    a=rtpmap:124 rtx/90000
    a=fmtp:124 apt=127
    a=rtpmap:125 ulpfec/90000
    a=ssrc-group:FID 310312430 271036256
    a=ssrc:310312430 cname:OnitL2N/El5rjP2K
    a=ssrc:310312430 msid:ARDAMS VideoTrack
    a=ssrc:310312430 mslabel:ARDAMS
    a=ssrc:310312430 label:VideoTrack
    a=ssrc:271036256 cname:OnitL2N/El5rjP2K
    a=ssrc:271036256 msid:ARDAMS VideoTrack
    a=ssrc:271036256 mslabel:ARDAMS
    a=ssrc:271036256 label:VideoTrack
    a=ssrc-group:FID 3522930219 3516052652
    a=ssrc:3522930219 cname:OnitL2N/El5rjP2K
    a=ssrc:3522930219 msid:local_track_stream local_track
    a=ssrc:3522930219 mslabel:local_track_stream
    a=ssrc:3522930219 label:local_track
    a=ssrc:3516052652 cname:OnitL2N/El5rjP2K
    a=ssrc:3516052652 msid:local_track_stream local_track
    a=ssrc:3516052652 mslabel:local_track_stream
    a=ssrc:3516052652 label:local_track

Above, you see the SDP generated for my Android device by the WebRTC library. From it, we can conclude that I send only a video stream, without audio, and that I am going to use the H.264, VP8, and VP9 codecs.
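As a quick sanity check, the advertised codecs can be read straight from the `a=rtpmap:` lines of any SDP blob. A minimal pure-Kotlin sketch (the function name is mine, not part of the WebRTC API):

```kotlin
// Extracts the codec names from the a=rtpmap: lines of an SDP string.
fun codecsFromSdp(sdp: String): List<String> =
    sdp.lineSequence()
        .filter { it.trim().startsWith("a=rtpmap:") }
        // "a=rtpmap:96 VP8/90000" -> take the token after the space, before the slash.
        .map { it.substringAfter(' ').substringBefore('/') }
        .distinct()
        .toList()

fun main() {
    val sdp = """
        a=rtpmap:96 VP8/90000
        a=rtpmap:98 VP9/90000
        a=rtpmap:100 H264/90000
    """.trimIndent()
    println(codecsFromSdp(sdp)) // [VP8, VP9, H264]
}
```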
After the SDP exchange, the peers must exchange information about the candidates – in simple terms, information about the network connection. Candidates are generated with the help of so-called ICE servers (STUN/TURN); lists of free STUN/TURN servers are publicly available.
How do you generate the SDP and candidates, and where do you specify the ICE servers?
Don't worry – all of this is already implemented in the WebRTC library; you just need to configure everything correctly and call the necessary methods.

Actions speak louder than words…

After setting up the Android project, you need to add the WebRTC library dependency to the Gradle file. Google provides pre-compiled versions of WebRTC for Android through Maven. You can also compile the library yourself from the source code. To use the precompiled version, just add the following dependency.
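A minimal sketch of that dependency, assuming the prebuilt package Google publishes to its Maven repository (treat the version number as an example only):

```groovy
dependencies {
    // Prebuilt WebRTC binaries from Google's Maven repository.
    implementation 'org.webrtc:google-webrtc:1.0.32006'
}
```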


You have to provide the necessary permissions for the application to work properly. Below is a list of permissions to put in the Manifest file. Also, remember the permission for network calls!


<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.CHANGE_NETWORK_STATE"/>
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>

To display a preview of your own camera and of the other person's camera, use the SurfaceViewRenderer view provided by the library. For example, inside a ConstraintLayout (the id and constraints here are illustrative):


<org.webrtc.SurfaceViewRenderer
    android:id="@+id/local_view"
    android:layout_width="0dp"
    android:layout_height="0dp"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintTop_toTopOf="parent" />

In order to send and receive streams, we will use separate PeerConnection instances. First, you need to initialize WebRTC:

private fun initPeerConnectionFactory(context: Application) {
    val options = PeerConnectionFactory.InitializationOptions.builder(context)
        .createInitializationOptions()
    PeerConnectionFactory.initialize(options)
}

Get an instance of PeerConnectionFactory:

val rootEglBase: EglBase = EglBase.create()
val peerConnectionFactory = PeerConnectionFactory
    .builder()
    .setVideoEncoderFactory(DefaultVideoEncoderFactory(rootEglBase.eglBaseContext, true, true))
    // A decoder factory is needed as well, so that remote video can be rendered.
    .setVideoDecoderFactory(DefaultVideoDecoderFactory(rootEglBase.eglBaseContext))
    .setOptions(PeerConnectionFactory.Options().apply {
        networkIgnoreMask = 0
    })
    .createPeerConnectionFactory()

Create an instance of RTCConfiguration; based on this configuration, candidates will be generated using the list of STUN/TURN servers.

private fun getPeerConnectionConfig(): PeerConnection.RTCConfiguration {
    // Example: Google's public STUN server; replace with your own STUN/TURN servers.
    val iceServers = listOf(
        PeerConnection.IceServer.builder("stun:stun.l.google.com:19302").createIceServer()
    )
    return PeerConnection.RTCConfiguration(iceServers).apply {
        tcpCandidatePolicy = PeerConnection.TcpCandidatePolicy.DISABLED
        bundlePolicy = PeerConnection.BundlePolicy.MAXBUNDLE
        rtcpMuxPolicy = PeerConnection.RtcpMuxPolicy.REQUIRE
        continualGatheringPolicy = PeerConnection.ContinualGatheringPolicy.GATHER_CONTINUALLY
        // Use ECDSA encryption.
        keyType = PeerConnection.KeyType.ECDSA
    }
}

Initialize PeerConnection.Observer. In this listener you will receive the candidates generated via the STUN/TURN servers, and you can attach the incoming MediaStream to your SurfaceViewRenderer:

private fun initPeerConnectionObserver(): PeerConnection.Observer {
    return object : PeerConnection.Observer {
        override fun onIceCandidate(p0: IceCandidate?) {
            p0?.let {
                // Here you must send the generated candidate to the other client
                // via your signaling channel.
            }
        }

        override fun onIceGatheringChange(p0: PeerConnection.IceGatheringState?) {}

        override fun onAddStream(p0: MediaStream?) {
            p0?.let {
                // Attach the remote stream's video track to the remote SurfaceViewRenderer here.
            }
        }

        // The remaining PeerConnection.Observer callbacks must also be overridden;
        // empty implementations are sufficient for this example.
    }
}
Get an instance of PeerConnection:

val peerConnectionObserver = initPeerConnectionObserver()
val config = getPeerConnectionConfig()
val peerConnection = peerConnectionFactory.createPeerConnection(config, peerConnectionObserver)

We have now configured and obtained instances of the main classes: PeerConnectionFactory and PeerConnection. In the next step, you need to open the camera stream and show a preview of the local camera.
To initialize the SurfaceViewRenderer, call this function:

fun initSurfaceView(view: SurfaceViewRenderer) {
    view.run {
        init(rootEglBase.eglBaseContext, null)
        setMirror(true)
        setEnableHardwareScaler(true)
    }
}

To create the media stream and display a preview, call the following functions. Pass an instance of the local SurfaceViewRenderer to the first one; if you did everything right, the front camera preview should appear on the screen.

fun creatingMediaStream(localViewRenderer: SurfaceViewRenderer) {
    val videoCapturer = getVideoCapturer(context)
    val localVideoSource = peerConnectionFactory.createVideoSource(videoCapturer.isScreencast)
    val videoTrack = peerConnectionFactory.createVideoTrack("VideoTrack", localVideoSource)
    peerConnection.addTrack(videoTrack, listOf("ARDAMS"))
    val surfaceTextureHelper = SurfaceTextureHelper.create(Thread.currentThread().name, rootEglBase.eglBaseContext)
    videoCapturer.initialize(surfaceTextureHelper, localViewRenderer.context, localVideoSource.capturerObserver)
    videoCapturer.startCapture(640, 480, 30)
    // Render the local track in the preview view.
    videoTrack.addSink(localViewRenderer)
}

private fun getVideoCapturer(context: Context) =
    Camera2Enumerator(context).run {
        deviceNames.find {
            // Prefer the front-facing camera.
            isFrontFacing(it)
        }?.let {
            createCapturer(it, null)
        } ?: throw IllegalStateException("No front-facing camera found")
    }

Once the previous steps are completed, we can start the video call. To do this, initialize the SDP listener:

val sdpObserver = SDPObserver()

private inner class SDPObserver : SdpObserver {

    override fun onCreateSuccess(origSdp: SessionDescription) {
        peerConnection.setLocalDescription(sdpObserver, origSdp)
    }

    override fun onSetSuccess() {
        if (isInitiator) {
            if (peerConnection.remoteDescription == null) {
                // Send the local offer to the remote peer via your signaling channel, e.g.:
                // presenter.onReceivedSDP(Jsep("offer", peerConnection.localDescription.description))
            }
        } else {
            if (peerConnection.localDescription != null) {
                // Send the local answer to the remote peer, e.g.:
                // presenter.onReceivedSDP(Jsep("answer", peerConnection.localDescription.description))
            } else {
                peerConnection.createAnswer(sdpObserver, MediaConstraints())
            }
        }
    }

    override fun onCreateFailure(error: String) {}

    override fun onSetFailure(error: String) {}
}

Now call the createOffer method from PeerConnection.

peerConnection.createOffer(sdpObserver, MediaConstraints())

As soon as the method above is called, the following things will happen:

  1. In the onIceCandidate callback, new candidates will be generated according to our configuration; they must be sent to the second peer.
  2. In the onCreateSuccess callback, our local SDP is generated; it must also be sent to the second peer.
  3. The second peer, in turn, must set the SDP received from us as its remote description, generate its own local SDP, and send it back to us; we then set that as our remote description.
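The offer/answer dance in the steps above can be sketched, independently of the WebRTC classes, as follows (all names here are illustrative, not part of the WebRTC API):

```kotlin
// Illustrative model of the SDP offer/answer exchange between two peers.
data class Sdp(val type: String, val description: String)

class Peer(val name: String) {
    var localSdp: Sdp? = null
    var remoteSdp: Sdp? = null

    fun createOffer(): Sdp = Sdp("offer", "$name-sdp").also { localSdp = it }
    fun createAnswer(): Sdp = Sdp("answer", "$name-sdp").also { localSdp = it }
    fun setRemote(sdp: Sdp) { remoteSdp = sdp }
}

fun main() {
    val caller = Peer("android")
    val callee = Peer("web")
    val offer = caller.createOffer()   // 1. caller generates its local SDP (offer)
    callee.setRemote(offer)            // 2. callee sets it as its remote description
    val answer = callee.createAnswer() // 3. callee generates its local SDP (answer)
    caller.setRemote(answer)           //    and the caller sets it as its remote description
    println(caller.remoteSdp?.type)    // answer
}
```

In the real library, each `createOffer`/`createAnswer` result additionally flows through the `SdpObserver` callbacks shown above, and a signaling channel carries the SDP between devices.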

If everything is correct, the second peer will start receiving your video stream, but we still won't see theirs. Going back to point 3: I pointed out that the SDP received from the second peer must be set as the remote description. How can this be done?
Since information can be transferred between peers in different ways, I will describe an example function:

fun addRemoteDescription(jsep: Data) {
    if (peerConnection.remoteDescription == null) {
        val description = jsep.sdp
        val type = if (!isInitiator) SessionDescription.Type.OFFER else SessionDescription.Type.ANSWER
        val sdpDescription = SessionDescription(type, description)
        peerConnection.setRemoteDescription(sdpObserver, sdpDescription)
    }
}

You need to understand that one PeerConnection can take on either the OFFER or the ANSWER role, but only one role at a time. So to create a full-duplex video connection between two peers, you need two PeerConnection instances.
Going back to the addRemoteDescription function: after we receive the second peer's SDP, we have to set it on our PeerConnection instance.
First, we check whether a remote SDP has already been set; if it has, we must create a new PeerConnection instance:

 if (peerConnection.remoteDescription == null)

Next, we want to get the type of the remote SDP; we will need it to create a SessionDescription instance and set it via setRemoteDescription.

val type = SessionDescription.Type.ANSWER
peerConnection.setRemoteDescription(sdpObserver, SessionDescription(type, description))

Next, we create another connection, this time in the ANSWER role. For a duplex call, each side needs a pair of PeerConnection instances: one acting as OFFER and the other as ANSWER. As in the previous case, we set the SDP received from the second peer as the remote description, but this time with the OFFER type instead of ANSWER:

val type = SessionDescription.Type.OFFER
peerConnectionAnswer.setRemoteDescription(sdpObserver, SessionDescription(type, description))

After we have set the remote description, we must generate our own SDP and send it to the second peer. As I said earlier, in this PeerConnection our role is ANSWER, so we must call the createAnswer() method:

peerConnectionAnswer.createAnswer(sdpObserver, MediaConstraints())

After calling the method above, as in the previous case, the API will generate candidates that need to be sent to the second peer, and your new SDP for the new PeerConnection will appear in the onCreateSuccess method of the sdpObserver listener. The second scenario differs from the first in that you act as ANSWER and the second peer as OFFER; all actions follow the same principle.
Depending on the WebRTC implementation, you may also need to manually add the candidates received from the remote peer to the second PeerConnection, by calling its addIceCandidate method with an IceCandidate built from the sdpMid, sdpMLineIndex, and sdp fields delivered over your signaling channel.
PeerConnection does not use all the candidates; it selects only the best-suited one. Therefore, if you don't add every candidate, that should not prevent the connection from being established, but it will most likely affect its quality.


During my project implementation, I came across unknown errors returned by WebRTC, so I advise you to configure the proguard-rules.pro file referenced from your Gradle config correctly!
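For example, keeping the whole org.webrtc package prevents R8/ProGuard from stripping or renaming classes the library reaches via JNI (a common rule for the prebuilt library, not something specific to this project):

```
# Keep all WebRTC classes; the native code accesses them via JNI.
-keep class org.webrtc.** { *; }
```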


As you can see, implementing WebRTC in a native Android application is not difficult, and the technology opens up real-time communication across the Web, Android, and iOS platforms. I hope you'll find this article helpful. I'm open to discussion and happy to provide more detailed information – just reach out to us at Codahead on our social media (PM) or by email.

Also, if you are interested in what we do at Codahead and how we can help your business grow by implementing innovative software solutions, feel free to contact us at sales@codahead.com.


Hackernoon | Real time communication with Webrtc on Android
Github | App RTC Demo
Vivekc | Peer to peer video calling webrtc for android



Andrei Liudkievich

AI Dev