=== I didn't hear ringing tones when I make a call, why? === #ringtone

''pjsua'' does not play any ring tone when the call gets 180/Ringing, since it is just a simple application for demonstration purposes. When you build your own application, you can add this feature yourself. You can play a ringing tone using the tone generator as described above (but connecting the tone generator to the sound device instead of to the call), or you can play your own WAV file instead.
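As a rough sketch of the tone generator approach with PJSUA-LIB (the frequencies and on/off timing below are just illustrative US-style ringback values, not anything pjsua mandates, and error checking is omitted):

```c
/* Sketch: play a ringback tone to the sound device with PJSUA-LIB.
 * Call this when the call gets 180/Ringing; disconnect and remove the
 * port when the call is answered or disconnected. */
#include <pjsua-lib/pjsua.h>

static pjsua_conf_port_id ring_slot = PJSUA_INVALID_ID;

static void start_ringback(pj_pool_t *pool)
{
    pjmedia_port *tonegen;
    pjmedia_tone_desc tone;

    /* 8 kHz, mono, 16 bits/sample, 20ms (160-sample) frames. */
    pjmedia_tonegen_create(pool, 8000, 1, 160, 16,
                           PJMEDIA_TONEGEN_LOOP, &tonegen);

    pj_bzero(&tone, sizeof(tone));
    tone.freq1    = 440;   /* Hz; illustrative values only */
    tone.freq2    = 480;
    tone.on_msec  = 2000;  /* ring */
    tone.off_msec = 4000;  /* silence */
    pjmedia_tonegen_play(tonegen, 1, &tone, 0);

    /* Register with the conference bridge and route the tone to the
     * sound device, which occupies slot 0. */
    pjsua_conf_add_port(pool, tonegen, &ring_slot);
    pjsua_conf_connect(ring_slot, 0);
}
```

To stop the ringback, call {{{pjsua_conf_disconnect(ring_slot, 0)}}} and remove the port from the bridge.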


=== Outgoing RTP transmissions are not timed equally/properly. Why? === #tx-timing

In PJMEDIA's default setup, media flow is normally driven by the ''clock'' of the audio device. We chose this design to ensure that the audio device always gets fed whenever it needs to be fed, and because it works best on DSP devices, where audio flow is triggered by some kind of (soft) IRQ and thus provides the best timing source for the audio flow (without requiring multi-threading capability).

Unfortunately, not all audio devices provide good timing. Especially in the PC world, and also on some uClinux-based development boards that only support OSS, it is very common to have sound cards that cannot provide reliable timing. On these platforms, audio frames come in bursts rather than one by one, equally spaced. With a 20ms ''ptime'', for example, rather than delivering one frame every 20ms, these devices give PJMEDIA three or four frames every 60ms or 80ms. Since RTP packets are transmitted as soon as an audio frame is available from the sound card, this causes PJMEDIA to transmit RTP packets at (what look like) irregular intervals.

In my opinion, this should be fine, as the remote endpoint should be able to accommodate it with its ''jitter buffer''.

If you don't like this and would rather have PJMEDIA transmit RTP packets at regular intervals, you can install a [/pjmedia/docs/html/group__PJMEDIA__MASTER__PORT.htm master clock port] between the sound device and the conference bridge, so that the master port drives the media clock instead. A master clock port uses an internal thread to drive the media flow, so it should provide better timing on most platforms.

The steps to install a master port between the sound device and the conference bridge are as follows:
 1. Create a [/pjmedia/docs/html/group__PJMEDIA__SPLITCOMB.htm splitter/combiner (splitcomb) port].
 1. Create a reverse channel port on the splitcomb ([/pjmedia/docs/html/group__PJMEDIA__SPLITCOMB.htm#gc59338fd9d471a14c06b0f51c2290b68 pjmedia_splitcomb_create_rev_channel()]).
 1. Create the [/pjmedia/docs/html/group__PJMED__SND__PORT.htm sound device port] as usual.
 1. Connect the sound device port to the splitcomb (use [/pjmedia/docs/html/group__PJMED__SND__PORT.htm#g046156b765a34e6c640b0534e6b21f9c pjmedia_snd_port_connect()]).
 1. Create a [/pjmedia/docs/html/group__PJMEDIA__MASTER__PORT.htm master clock port], specifying the splitcomb's reverse channel as the ''upstream'' port and the [/pjmedia/docs/html/group__PJMEDIA__CONF.htm conference bridge] as the ''downstream'' port.
 1. Start the master port.

For example, normally we ''connect'' the sound device to the conference bridge (the default setup in pjsua-lib) with code like this:

{{{
void connect_conf_bridge_to_snd_dev(pj_pool_t *pool, pjmedia_port *conf)
{
    pjmedia_snd_port *snd;

    pjmedia_snd_port_create(..., &snd);
    pjmedia_snd_port_connect(snd, conf);
}
}}}

The change required to install a master clock port between the sound device and the conference bridge would be something like this:

{{{
void connect_conf_bridge_to_snd_dev2(pj_pool_t *pool, pjmedia_port *conf)
{
    pjmedia_port *splitcomb, *rev;
    pjmedia_snd_port *snd;
    pjmedia_master_port *m;

    pjmedia_splitcomb_create(pool, CLOCK_RATE, 1, SAMPLES_PER_FRAME,
                             BITS, 0, &splitcomb);
    pjmedia_splitcomb_create_rev_channel(pool, splitcomb, 0, 0, &rev);

    pjmedia_snd_port_create(..., &snd);
    pjmedia_snd_port_connect(snd, splitcomb);

    pjmedia_master_port_create(pool, rev, conf, 0, &m);
    pjmedia_master_port_start(m);
}
}}}

(Note that with PJSUA-LIB, we can get the instance of the conference bridge by calling [/pjsip/docs/html/group__PJSUA__LIB__MEDIA.htm#g4b6ffc203b8799f08f072d0921777161 pjsua_set_no_snd_dev()].)

With the above snippet, the irregularity of the sound device's clock will be ''normalized'' by the master port, so the conference bridge will be driven by a ''good'' clock. Of course, this means some buffering is needed to compensate for the ''clock'' difference between the sound device and the master port, and this is what the ''reverse channel'' of the ''splitcomb'' is for. The ''reverse channel'' can accommodate up to [/pjmedia/docs/html/group__PJMEDIA__CONFIG.htm#ga2e53b9eb4cb76294f95a15a2a0ef8cc PJMEDIA_SOUND_BUFFER_COUNT] frames, so if the clock difference is larger than this, you will need to enlarge {{{PJMEDIA_SOUND_BUFFER_COUNT}}} to an acceptable value (32, for example).


=== Does PJSIP support G.723 or G.729 codecs? === #g729-g723

Yes and no.

No, because there is no ready-to-use G.723/G.729 codec implementation in PJMEDIA. We deliberately do not include G.723/G.729 support in our code because both are patented, royalty-based codecs, and we are quite nervous about the possibility that some lawyers may contact us should we include them in PJMEDIA. So our decision is to include only free and open source codecs in PJMEDIA (such as G.711, GSM, and Speex).

But yes, because you can always add more codecs to PJMEDIA. Please see the [#adding-codec Adding a new codec] question below for more info.

=== How can I add a new codec to PJMEDIA? === #adding-codec

First of all, read the [/pjmedia/docs/html/group__PJMEDIA__CODEC.htm Codec Framework] documentation. Then the easiest approach is to take another codec source file in the {{{pjmedia-codec}}} directory, such as [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia-codec/gsm.c gsm.c], and replace the GSM-specific function calls with the functions provided by your codec library.

There are two "classes" to implement:
 - The codec factory ([/pjmedia/docs/html/structpjmedia__codec__factory.htm pjmedia_codec_factory]) - The codec factory is queried by PJMEDIA to see whether it can instantiate a codec based on a codec descriptor ([/pjmedia/docs/html/structpjmedia__codec__info.htm pjmedia_codec_info]). The codec factory also provides information about which codecs it supports, so that PJMEDIA can list these codecs in the local SDP capability descriptor.
 - The codec itself ([/pjmedia/docs/html/structpjmedia__codec.htm pjmedia_codec]) - The codec instance provides functions to parse, encode, and decode audio frames. Optionally it may provide a function to recover lost frames (known as ''Packet Loss Concealment'', or PLC).

Once finished, you should end up with just two public APIs exported by the implementation: an initialization function and a deinitialization function. The initialization function's primary task is to register the codec factory with PJMEDIA's codec manager, so that PJMEDIA knows about the new codec. The deinitialization function unregisters the codec factory and releases resources, if any.

Then call the codec initialization function in the application. After this, the codec should be picked up automagically by the rest of the PJMEDIA framework (that is, PJSIP should advertise the codec in outgoing SDP, negotiate it with the remote's SDP, and encode/decode audio frames with the codec, if it is selected for the session).
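As a sketch of those two functions (the {{{my_*}}} names are hypothetical placeholders for your implementation; only the codec-manager calls are actual PJMEDIA APIs):

```c
/* my_codec.c -- sketch only. The factory callbacks (test_alloc,
 * default_attr, enum_info, alloc_codec, dealloc_codec) must be filled
 * into my_factory.base.op; they are omitted here for brevity. */
#include <pjmedia/codec.h>
#include <pjmedia/endpoint.h>

static struct my_codec_factory
{
    pjmedia_codec_factory base;
    pjmedia_endpt        *endpt;
} my_factory;

PJ_DEF(pj_status_t) pjmedia_codec_my_init(pjmedia_endpt *endpt)
{
    pjmedia_codec_mgr *mgr = pjmedia_endpt_get_codec_mgr(endpt);
    my_factory.endpt = endpt;
    /* Register the factory so PJMEDIA knows about the new codec. */
    return pjmedia_codec_mgr_register_factory(mgr, &my_factory.base);
}

PJ_DEF(pj_status_t) pjmedia_codec_my_deinit(void)
{
    pjmedia_codec_mgr *mgr = pjmedia_endpt_get_codec_mgr(my_factory.endpt);
    /* Unregister the factory and release resources, if any. */
    return pjmedia_codec_mgr_unregister_factory(mgr, &my_factory.base);
}
```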


=== How can I manipulate audio samples directly? === #audio-man

In PJMEDIA, audio frames are passed back and forth between what are called [/pjmedia/docs/html/group__PJMEDIA__PORT__CONCEPT.htm media ports (pjmedia_port)]. So to be able to peek at or manipulate audio frames, we need to implement our own media port.

Implementing a media port should be easy. Basically we just need to do the following:
 1. Create a media port structure, ''deriving'' from the {{{pjmedia_port}}} structure:
{{{
struct my_media_port
{
    pjmedia_port base;

    // Your data goes here:
    ...
};
}}}
 1. Fill in the media port information to describe the media port (the name, clock rate, bits per sample, etc.). Use [/pjmedia/docs/html/group__PJMEDIA__PORT__INTERFACE.htm#gb3259d2924c7a2243733391f6f8f0a9a pjmedia_port_info_init()] to initialize the port info.
 1. Implement the {{{get_frame()}}} callback (of the ''pjmedia_port'') if the media port is a source (that is, the media port feeds audio frames to other media ports).
 1. Implement the {{{put_frame()}}} callback (of the ''pjmedia_port'') if the media port is a sink (that is, other media ports may feed audio frames to our media port).
 1. Implement {{{on_destroy()}}} if you need to reclaim resources when the media port is destroyed.
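The steps above can be sketched as a minimal port that scales samples in place (a gain control). This is an illustrative sketch, not PJMEDIA code: the {{{my_*}}} names and the signature value are made up, and the {{{pjmedia_port_info_init()}}} argument order follows the port documentation linked above, so double-check it against your PJMEDIA version:

```c
#include <pjmedia.h>

#define MY_PORT_SIG 0x4d595054   /* arbitrary 'MYPT' port signature */

struct my_media_port
{
    pjmedia_port base;
    int          gain_pct;       /* example private data: gain in percent */
};

/* put_frame(): upstream ports push frames to us; this is where the
 * samples can be inspected or modified in place. */
static pj_status_t my_put_frame(pjmedia_port *this_port,
                                pjmedia_frame *frame)
{
    struct my_media_port *port = (struct my_media_port*)this_port;
    pj_int16_t *samples = (pj_int16_t*)frame->buf;
    unsigned i, count = frame->size / 2;   /* 16-bit samples */

    for (i = 0; i < count; ++i)
        samples[i] = (pj_int16_t)(samples[i] * port->gain_pct / 100);

    return PJ_SUCCESS;
}

pj_status_t my_media_port_create(pj_pool_t *pool, pjmedia_port **p_port)
{
    struct my_media_port *port;
    pj_str_t name = pj_str("my_port");

    port = PJ_POOL_ZALLOC_T(pool, struct my_media_port);

    /* 8 kHz, mono, 16 bits/sample, 160 samples (20ms) per frame. */
    pjmedia_port_info_init(&port->base.info, &name, MY_PORT_SIG,
                           8000, 1, 16, 160);
    port->base.put_frame = &my_put_frame;  /* sink-style callback */
    port->gain_pct = 100;

    *p_port = &port->base;
    return PJ_SUCCESS;
}
```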

There are many sample media port implementations in PJMEDIA.

For source-only media ports, samples include:
 - [/trac/browser/pjproject/trunk/pjsip-apps/src/samples/playsine.c playsine.c] from the {{{pjsip-apps/samples}}} directory.
 - [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia/mem_player.c mem_player.c] from pjmedia (media port to play back audio from a buffer).
 - [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia/wav_player.c wav_player.c] from pjmedia (media port to play back audio from a WAVE file).
 - [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia/tonegen.c tonegen.c] from pjmedia (media port to generate sine wave/DTMF/multi-frequency tones).

For sink-only media ports, samples include:
 - [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia/mem_capture.c mem_capture.c] from pjmedia (media port to save audio to a buffer).
 - [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia/wav_writer.c wav_writer.c] from pjmedia (media port to save audio to a WAVE file).

For media ports that manipulate audio and provide both source and sink callbacks:
 - [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia/resample_port.c resample_port.c] from pjmedia (to convert the sampling rate).
 - [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia/echo_port.c echo_port.c] from pjmedia (the AEC).

=== I always get a "Bad RTP pt" error. Why? === #bad-rtp-pt

From our experience, this can be caused by one of the following:
 1. The remote endpoint agreed to use one codec in the SDP negotiation, but sends RTP with a different codec. This has happened with some old versions of a popular but closed source softphone (we won't name it, to protect the innocent). When this happens, the "Bad RTP pt" error message is printed continuously in the log or on screen (basically for every RTP packet!). The remedy in this case is to upgrade the softphone to a newer version.
 1. The remote endpoint is sending comfort noise packets. When this happens, the error message is not printed as often as in the other case (maybe once every few seconds).

If you encounter this problem, and upgrading the softphone doesn't solve it, you can report it to the PJSIP mailing list. When reporting, please include the error log and the complete INVITE message and 200/OK response (containing the SDP), so that we can analyze which endpoint is behaving badly.


=== Does PJSIP support SRTP or ZRTP? If not, how can I implement them? === #srtp-zrtp

Currently PJMEDIA supports neither SRTP nor ZRTP, although this has been on our TODO list for some time now. Fortunately, implementing them is not too difficult, since the media transport in PJMEDIA is separated from the stream.

Have a look at:
 - [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia/transport_udp.c transport_udp.c], the normal UDP media transport, and
 - [/trac/browser/pjproject/trunk/pjmedia/src/pjmedia/transport_ice.c transport_ice.c], the ICE media transport.

There, it is quite easy to see where RTP/RTCP packets are received (in {{{on_rx_rtp()}}} and {{{on_rx_rtcp()}}} respectively) and sent (in {{{transport_send_rtp()}}} and {{{transport_send_rtcp()}}}). These are the locations where you should call the SRTP/ZRTP decryption and encryption functions.

Alternatively, there is a cleaner way to plug in SRTP/ZRTP functionality without changing existing code. As mentioned earlier, the media transport is separated from the stream; they are only ''attached'' to each other in application code (or in pjsua-lib). One can therefore build an ''adapter'' that is plugged in between the media transport and the stream to do the SRTP/ZRTP work. To the stream, this adapter looks like a media transport; to the existing media transport, it looks like a stream. The benefit of this approach is that the same adapter can be used with both kinds of media transport, i.e. the UDP and ICE media transports.
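The outgoing half of such an adapter might be shaped roughly like this. This is only a sketch of the pattern: {{{srtp_adapter}}} and {{{my_srtp_encrypt()}}} are hypothetical names, and you should consult the actual {{{pjmedia_transport}}} declaration in {{{pjmedia/transport.h}}} for the real operation table to implement:

```c
/* Sketch only: my_srtp_encrypt() stands in for whatever SRTP library
 * you use, and the adapter's full vtable is omitted. */
#include <pjmedia/transport.h>

typedef struct srtp_adapter
{
    pjmedia_transport  base;      /* to the stream, we are a transport */
    pjmedia_transport *slave_tp;  /* the real UDP/ICE transport below  */
} srtp_adapter;

/* Outgoing direction: the stream calls our send_rtp operation; we
 * encrypt the packet, then forward it to the slave transport. The
 * incoming direction mirrors this: we intercept the slave transport's
 * RX callback, decrypt, and pass the packet up to the stream. */
static pj_status_t adapter_send_rtp(pjmedia_transport *tp,
                                    const void *pkt, pj_size_t size)
{
    srtp_adapter *adapter = (srtp_adapter*)tp;
    char buf[1500];
    pj_size_t out_size;

    /* Hypothetical helper: SRTP-protect pkt into buf. */
    out_size = my_srtp_encrypt(pkt, size, buf, sizeof(buf));

    return pjmedia_transport_send_rtp(adapter->slave_tp, buf, out_size);
}
```

Because the adapter is itself a {{{pjmedia_transport}}}, it can wrap either the UDP or the ICE transport without either side knowing the difference.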

----

== DTMF/Tone Related Questions == #dtmf

=== How can I send DTMF with the INFO method? === #dtmf-info1

Please see the section on [#info-method sending an INFO request with PJSUA-LIB].