This wiki describes the issues from ticket #1820 (Helgrind thread error detector test and analysis).

Helgrind issue !#1

Description: Possible data race over concurrent read/write access to the thread's quit flag:[[br]]
one thread writes the quit flag while another reads it.[[br]]
Status: false positive; this is intentional behaviour.

{{{
==26641== ---Thread-Announcement------------------------------------------
==26641==
==26641== Thread #6 was created
==26641==    at 0x5953FCE: clone (clone.S:74)
==26641==    by 0x5352199: do_clone.constprop.3 (createthread.c:75)
==26641==    by 0x53538BA: pthread_create@@GLIBC_2.2.5 (createthread.c:245)
==26641==    by 0x4C30C90: pthread_create_WRK (hg_intercepts.c:269)
==26641==    by 0x56704B: pj_thread_create (os_core_unix.c:616)
==26641==    by 0x49CC57: pjmedia_endpt_create (endpoint.c:169)
==26641==    by 0x43102A: pjsua_media_subsys_init (pjsua_media.c:80)
==26641==    by 0x42B576: pjsua_init (pjsua_core.c:1058)
==26641==    by 0x40800A: app_init (pjsua_app.c:1346)
==26641==    by 0x408F1A: pjsua_app_init (pjsua_app.c:1881)
==26641==    by 0x405AA9: main_func (main.c:108)
==26641==    by 0x568914: pj_run_app (os_core_unix.c:1930)
==26641==
==26641== Possible data race during write of size 4 at 0x8182E8 by thread #1
==26641== Locks held: none
==26641==    at 0x42AA5E: pjsua_stop_worker_threads (pjsua_core.c:730)
==26641==    by 0x42C753: pjsua_destroy2 (pjsua_core.c:1548)
==26641==    by 0x42CDC9: pjsua_destroy (pjsua_core.c:1775)
==26641==    by 0x409291: app_destroy (pjsua_app.c:2011)
==26641==    by 0x409304: pjsua_app_destroy (pjsua_app.c:2035)
==26641==    by 0x405ADF: main_func (main.c:116)
==26641==    by 0x568914: pj_run_app (os_core_unix.c:1930)
==26641==    by 0x405B32: main (main.c:129)
==26641==
==26641== This conflicts with a previous read of size 4 by thread #7
==26641== Locks held: none
==26641==    at 0x42A978: worker_thread (pjsua_core.c:691)
==26641==    by 0x566E30: thread_main (os_core_unix.c:523)
==26641==    by 0x4C30E26: mythread_wrapper (hg_intercepts.c:233)
==26641==    by 0x5353181: start_thread (pthread_create.c:312)
==26641==    by 0x595400C: clone (clone.S:111)
}}}
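
For reference, the pattern behind this report reduced to a minimal sketch (hypothetical names, not the actual pjsua code): a worker polls a plain quit flag that the shutting-down thread sets once. Written with C11 atomics as below, the same one-way signal is race-free and Helgrind stays quiet; with a plain int it reproduces the report above.

{{{
#include <pthread.h>
#include <stdatomic.h>

/* Hypothetical stand-in for pjsua's quit flag.  A plain "int" here
 * reproduces the Helgrind report; an atomic (or PJSIP's pj_atomic_t)
 * expresses the same intent without a data race. */
static atomic_int quit_flag;

static void *worker_thread(void *arg)
{
    (void)arg;
    /* Relaxed ordering is enough: the flag only says "stop polling". */
    while (!atomic_load_explicit(&quit_flag, memory_order_relaxed)) {
        /* ... poll events, sleep, etc. ... */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker_thread, NULL);
    /* ... application runs ... */
    atomic_store_explicit(&quit_flag, 1, memory_order_relaxed); /* shutdown */
    pthread_join(t, NULL);
    return 0;
}
}}}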

Helgrind issue !#2

Description: Lock order inversion involving pj_rwmutex_lock_read().[[br]]
Both threads only acquire read locks, which should be fine. However, on some platforms a read-write mutex may be implemented as a regular mutex, in which case a deadlock can occur.

{{{
==26641== Thread #7: lock order "0x5EDC798 before 0x5E25638" violated
(lock read should be ok)
==26641==
==26641== Observed (incorrect) order is: acquisition of lock at 0x5E25638
==26641==    at 0x4C2FE45: pthread_rwlock_rdlock_WRK (hg_intercepts.c:1549)
==26641==    by 0x567DA5: pj_rwmutex_lock_read (os_core_unix.c:1448)
==26641==    by 0x46D6A5: pjsip_endpt_process_rx_data (sip_endpoint.c:853)
==26641==    by 0x46DAD0: endpt_on_rx_msg (sip_endpoint.c:1036)
==26641==    by 0x476652: pjsip_tpmgr_receive_packet (sip_transport.c:1789)
==26641==    by 0x4774DC: udp_on_read_complete (sip_transport_udp.c:173)
==26641==    by 0x563CFC: ioqueue_dispatch_read_event (ioqueue_common_abs.c:591)
==26641==    by 0x565E20: pj_ioqueue_poll (ioqueue_select.c:963)
==26641==    by 0x46D47C: pjsip_endpt_handle_events2 (sip_endpoint.c:741)
==26641==    by 0x42CE6D: pjsua_handle_events (pjsua_core.c:1833)
==26641==    by 0x42A964: worker_thread (pjsua_core.c:694)
==26641==    by 0x566E30: thread_main (os_core_unix.c:523)
==26641==
==26641==  followed by a later acquisition of lock at 0x5EDC798
==26641==    at 0x4C32145: pthread_mutex_lock (hg_intercepts.c:518)
==26641==    by 0x5679DB: pj_mutex_lock (os_core_unix.c:1243)
==26641==    by 0x56ED6F: pj_lock_acquire (lock.c:180)
==26641==    by 0x56EFED: grp_lock_acquire (lock.c:290)
==26641==    by 0x56F471: pj_grp_lock_acquire (lock.c:437)
==26641==    by 0x483370: pjsip_tsx_recv_msg (sip_transaction.c:1764)
==26641==    by 0x481802: mod_tsx_layer_on_rx_response (sip_transaction.c:872)
==26641==    by 0x46D7E6: pjsip_endpt_process_rx_data (sip_endpoint.c:894)
==26641==    by 0x46DAD0: endpt_on_rx_msg (sip_endpoint.c:1036)
==26641==    by 0x476652: pjsip_tpmgr_receive_packet (sip_transport.c:1789)
==26641==    by 0x4774DC: udp_on_read_complete (sip_transport_udp.c:173)
==26641==    by 0x563CFC: ioqueue_dispatch_read_event (ioqueue_common_abs.c:591)
==26641==
==26641== Required order was established by acquisition of lock at 0x5EDC798
==26641==    at 0x4C32145: pthread_mutex_lock (hg_intercepts.c:518)
==26641==    by 0x5679DB: pj_mutex_lock (os_core_unix.c:1243)
==26641==    by 0x56ED6F: pj_lock_acquire (lock.c:180)
==26641==    by 0x56EFED: grp_lock_acquire (lock.c:290)
==26641==    by 0x56F471: pj_grp_lock_acquire (lock.c:437)
==26641==    by 0x48326E: pjsip_tsx_send_msg (sip_transaction.c:1721)
==26641==    by 0x486252: pjsip_endpt_send_request (sip_util_statefull.c:117)
==26641==    by 0x44E1FB: pjsip_regc_send (sip_reg.c:1410)
==26641==    by 0x41DA10: pjsua_acc_set_registration (pjsua_acc.c:2527)
==26641==    by 0x418489: pjsua_acc_add (pjsua_acc.c:485)
==26641==    by 0x408CC7: app_init (pjsua_app.c:1809)
==26641==    by 0x408F1A: pjsua_app_init (pjsua_app.c:1881)
==26641==
==26641==  followed by a later acquisition of lock at 0x5E25638
==26641==    at 0x4C2FE45: pthread_rwlock_rdlock_WRK (hg_intercepts.c:1549)
==26641==    by 0x567DA5: pj_rwmutex_lock_read (os_core_unix.c:1448)
==26641==    by 0x46DB92: endpt_on_tx_msg (sip_endpoint.c:1066)
==26641==    by 0x474740: pjsip_transport_send (sip_transport.c:802)
==26641==    by 0x470CCF: stateless_send_transport_cb (sip_util.c:1243)
==26641==    by 0x471017: stateless_send_resolver_callback (sip_util.c:1344)
==26641==    by 0x4727F6: pjsip_resolve (sip_resolve.c:306)
==26641==    by 0x46DDB8: pjsip_endpt_resolve (sip_endpoint.c:1148)
==26641==    by 0x471132: pjsip_endpt_send_request_stateless (sip_util.c:1388)
==26641==    by 0x4840E7: tsx_send_msg (sip_transaction.c:2119)
==26641==    by 0x484875: tsx_on_state_null (sip_transaction.c:2349)
==26641==    by 0x4832A3: pjsip_tsx_send_msg (sip_transaction.c:1727)
}}}
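
The inversion above, reduced to a minimal sketch (hypothetical locks and thread bodies, not the PJSIP ones): one thread takes the rwlock for reading and then the mutex, the other does the opposite. With a true rwlock two readers never exclude each other, so this cannot deadlock; if the rwlock degenerates into a plain mutex it is the classic ABBA deadlock, which is why Helgrind flags the ordering regardless.

{{{
#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t  mx = PTHREAD_MUTEX_INITIALIZER;

static void *reader_then_mutex(void *arg)   /* order: rw(read), then mx */
{
    (void)arg;
    pthread_rwlock_rdlock(&rw);
    pthread_mutex_lock(&mx);
    /* ... dispatch a received message ... */
    pthread_mutex_unlock(&mx);
    pthread_rwlock_unlock(&rw);
    return NULL;
}

static void *mutex_then_reader(void *arg)   /* order: mx, then rw(read) */
{
    (void)arg;
    pthread_mutex_lock(&mx);
    /* Can deadlock against the other thread only if rdlock is
     * implemented as an exclusive lock on this platform. */
    pthread_rwlock_rdlock(&rw);
    pthread_rwlock_unlock(&rw);
    pthread_mutex_unlock(&mx);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, reader_then_mutex, NULL);
    pthread_create(&b, NULL, mutex_then_reader, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
}}}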

Helgrind issue !#3

Description: PulseAudio-related data race.[[br]]
Status: outside PJSIP's scope.[[br]]
TODO: create a suppression file for this (a sketch follows the log below).

{{{
==26641== Possible data race during read of size 4 at 0x6CEBC28 by thread #1
==26641== Locks held: none
==26641==    at 0x6AB5FE1: pa_once_begin (in /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-4.0.so)
==26641==    by 0x6AB616A: pa_run_once (in /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-4.0.so)
==26641==    by 0x6ACAE43: pa_thread_self (in /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-4.0.so)
==26641==    by 0x666146B: pa_threaded_mainloop_lock (in /usr/lib/x86_64-linux-gnu/libpulse.so.0.16.2)
==26641==    by 0x8F573AF: pulse_connect (in /usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_pcm_pulse.so)
==26641==    by 0x8F56C9B: _snd_pcm_pulse_open (in /usr/lib/x86_64-linux-gnu/alsa-lib/libasound_module_pcm_pulse.so)
==26641==    by 0x55B36FC: ??? (in /usr/lib/x86_64-linux-gnu/libasound.so.2.0.0)
==26641==    by 0x55B3CF5: ??? (in /usr/lib/x86_64-linux-gnu/libasound.so.2.0.0)
==26641==    by 0x559982: OpenPcm (pa_linux_alsa.c:544)
==26641==    by 0x559A34: FillInDevInfo (pa_linux_alsa.c:580)
==26641==    by 0x55A9B8: BuildDeviceList (pa_linux_alsa.c:853)
==26641==    by 0x558E8B: PaAlsa_Initialize (pa_linux_alsa.c:266)
==26641==
==26641== This conflicts with a previous write of size 4 by thread #2
==26641== Locks held: 1, at address 0x5E41130
==26641==    at 0x6AB6093: pa_once_end (in /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-4.0.so)
==26641==    by 0x6ACAEEC: ??? (in /usr/lib/x86_64-linux-gnu/pulseaudio/libpulsecommon-4.0.so)
==26641==    by 0x4C30E26: mythread_wrapper (hg_intercepts.c:233)
==26641==    by 0x5353181: start_thread (pthread_create.c:312)
==26641==    by 0x595400C: clone (clone.S:111)
}}}
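
A sketch of what the TODO suppression could look like (the entry name is arbitrary; the frames are taken from the stack above, and the trailing `...` lets the rest of the call stack vary):

{{{
{
   pulseaudio-pa_once-race
   Helgrind:Race
   fun:pa_once_begin
   fun:pa_run_once
   ...
}
}}}

The file would then be loaded with --suppressions=pulse.supp; running once with --gen-suppressions=all is an easy way to have Helgrind print ready-made candidate entries itself.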

Helgrind issue !#4

Description: Data race over access to the g_last_thread variable.[[br]]
Status: won't fix. The variable is used for informational purposes only, while taking a mutex on every log call would be quite expensive.

{{{
==26641== Lock at 0x5E24F98 was first observed
==26641==    at 0x4C31DDA: pthread_mutex_init (hg_intercepts.c:443)
==26641==    by 0x567730: init_mutex (os_core_unix.c:1139)
==26641==    by 0x567909: pj_mutex_create (os_core_unix.c:1192)
==26641==    by 0x56798B: pj_mutex_create_recursive (os_core_unix.c:1220)
==26641==    by 0x42AD68: pjsua_create (pjsua_core.c:823)
==26641==    by 0x407E0C: app_init (pjsua_app.c:1288)
==26641==    by 0x408F1A: pjsua_app_init (pjsua_app.c:1881)
==26641==    by 0x405AA9: main_func (main.c:108)
==26641==    by 0x568914: pj_run_app (os_core_unix.c:1930)
==26641==    by 0x405B32: main (main.c:129)
==26641==
==26641== Lock at 0x5EDC798 was first observed
==26641==    at 0x4C31DDA: pthread_mutex_init (hg_intercepts.c:443)
==26641==    by 0x567730: init_mutex (os_core_unix.c:1139)
==26641==    by 0x567909: pj_mutex_create (os_core_unix.c:1192)
==26641==    by 0x56EB58: create_mutex_lock (lock.c:75)
==26641==    by 0x56EBE5: pj_lock_create_recursive_mutex (lock.c:96)
==26641==    by 0x56F3D5: pj_grp_lock_create (lock.c:413)
==26641==    by 0x481C13: tsx_create (sip_transaction.c:1011)
==26641==    by 0x48258F: pjsip_tsx_create_uac2 (sip_transaction.c:1306)
==26641==    by 0x482427: pjsip_tsx_create_uac (sip_transaction.c:1270)
==26641==    by 0x4861C2: pjsip_endpt_send_request (sip_util_statefull.c:103)
==26641==    by 0x44E1FB: pjsip_regc_send (sip_reg.c:1410)
==26641==    by 0x41DA10: pjsua_acc_set_registration (pjsua_acc.c:2527)
==26641==
==26641== Possible data race during write of size 8 at 0x8100E0 by thread #7
==26641== Locks held: 2, at addresses 0x5E24F98 0x5EDC798
==26641==    at 0x570214: pj_log (log.c:421)
==26641==    by 0x570675: pj_log_3 (log.c:515)
==26641==    by 0x41BB3B: acc_check_nat_addr (pjsua_acc.c:1674)
==26641==    by 0x41C996: regc_tsx_cb (pjsua_acc.c:2093)
==26641==    by 0x44D7BF: regc_tsx_callback (sip_reg.c:1092)
==26641==    by 0x486100: mod_util_on_tsx_state (sip_util_statefull.c:81)
==26641==    by 0x4822A9: tsx_set_state (sip_transaction.c:1210)
==26641==    by 0x4858B5: tsx_on_state_proceeding_uac (sip_transaction.c:3008)
==26641==    by 0x484C09: tsx_on_state_calling (sip_transaction.c:2492)
==26641==    by 0x48338B: pjsip_tsx_recv_msg (sip_transaction.c:1765)
==26641==    by 0x481802: mod_tsx_layer_on_rx_response (sip_transaction.c:872)
==26641==    by 0x46D7E6: pjsip_endpt_process_rx_data (sip_endpoint.c:894)
==26641==
==26641== This conflicts with a previous read of size 8 by thread #1
==26641== Locks held: none
==26641==    at 0x5701E8: pj_log (log.c:419)
==26641==    by 0x56CF92: invoke_log (errno.c:223)
==26641==    by 0x56D062: pj_perror_imp (errno.c:244)
==26641==    by 0x56D127: pj_perror (errno.c:252)
==26641==    by 0x4059EE: on_app_started (main.c:32)
==26641==    by 0x409040: pjsua_app_run (pjsua_app.c:1915)
==26641==    by 0x405ABC: main_func (main.c:110)
==26641==    by 0x568914: pj_run_app (os_core_unix.c:1930)
}}}
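
The won't-fix trade-off, sketched with hypothetical names (not the actual log.c code): g_last_thread only caches which thread logged last so the writer can decide whether to print a thread header again, so a stale read is harmless. If the report ever needed silencing, a relaxed atomic pointer would do it at essentially no cost on the logging fast path:

{{{
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical stand-in for g_last_thread: remembers which thread
 * wrote the previous log line.  A stale value merely repeats the
 * thread header; nothing else depends on it. */
static _Atomic(void *) g_last_thread;

static void log_write(void *this_thread, const char *msg)
{
    /* Relaxed atomics: same benign behaviour, no Helgrind report,
     * and no mutex acquisition per log call. */
    if (atomic_load_explicit(&g_last_thread, memory_order_relaxed)
            != this_thread) {
        atomic_store_explicit(&g_last_thread, this_thread,
                              memory_order_relaxed);
        printf("thread %p:\n", this_thread);
    }
    printf("  %s\n", msg);
}

int main(void)
{
    int me;                 /* any per-thread address works as an id */
    log_write(&me, "hello");
    log_write(&me, "again");
    return 0;
}
}}}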

Helgrind issue !#5

Description: Data race over the cp->used_size variable in pool_caching.[[br]]
Status: won't fix; the variable is used for statistics purposes only.

{{{
==26641== Lock at 0x817F90 was first observed
==26641==    at 0x4C31DDA: pthread_mutex_init (hg_intercepts.c:443)
==26641==    by 0x567730: init_mutex (os_core_unix.c:1139)
==26641==    by 0x567909: pj_mutex_create (os_core_unix.c:1192)
==26641==    by 0x56EB58: create_mutex_lock (lock.c:75)
==26641==    by 0x56EBB3: pj_lock_create_simple_mutex (lock.c:89)
==26641==    by 0x571AAE: pj_caching_pool_init (pool_caching.c:81)
==26641==    by 0x42ACF9: pjsua_create (pjsua_core.c:815)
==26641==    by 0x407E0C: app_init (pjsua_app.c:1288)
==26641==    by 0x408F1A: pjsua_app_init (pjsua_app.c:1881)
==26641==    by 0x405AA9: main_func (main.c:108)
==26641==    by 0x568914: pj_run_app (os_core_unix.c:1930)
==26641==    by 0x405B32: main (main.c:129)
==26641==
==26641== Possible data race during write of size 8 at 0x817DA0 by thread #1
==26641== Locks held: 1, at address 0x817F90
==26641==    at 0x57224E: cpool_on_block_free (pool_caching.c:332)
==26641==    by 0x577841: default_block_free (pool_policy_malloc.c:67)
==26641==    by 0x571625: reset_pool (pool.c:254)
==26641==    by 0x5716C6: pj_pool_destroy_int (pool.c:292)
==26641==    by 0x571EDA: cpool_release_pool (pool_caching.c:238)
==26641==    by 0x571137: pj_pool_release (pool_i.h:92)
==26641==    by 0x409281: app_destroy (pjsua_app.c:2007)
==26641==    by 0x409304: pjsua_app_destroy (pjsua_app.c:2035)
==26641==    by 0x405ADF: main_func (main.c:116)
==26641==    by 0x568914: pj_run_app (os_core_unix.c:1930)
==26641==    by 0x405B32: main (main.c:129)
==26641==
==26641== This conflicts with a previous write of size 8 by thread #7
==26641== Locks held: none
==26641==    at 0x57224E: cpool_on_block_free (pool_caching.c:332)
==26641==    by 0x577841: default_block_free (pool_policy_malloc.c:67)
==26641==    by 0x571625: reset_pool (pool.c:254)
==26641==    by 0x5716A7: pj_pool_reset (pool.c:276)
==26641==    by 0x477597: udp_on_read_complete (sip_transport_udp.c:221)
}}}
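
The same statistics-only reasoning, sketched (hypothetical struct; the real field lives in PJLIB's caching pool): used_size is adjusted from several threads, sometimes without the pool lock held, but nothing branches on its value. Should the report ever need fixing rather than ignoring, a relaxed atomic counter would make it exact and race-free at negligible cost:

{{{
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical sketch of a caching pool's statistics counter. */
typedef struct caching_pool {
    atomic_size_t used_size;   /* reporting only; no decisions made on it */
} caching_pool;

static void on_block_alloc(caching_pool *cp, size_t sz)
{
    atomic_fetch_add_explicit(&cp->used_size, sz, memory_order_relaxed);
}

static void on_block_free(caching_pool *cp, size_t sz)
{
    atomic_fetch_sub_explicit(&cp->used_size, sz, memory_order_relaxed);
}

int main(void)
{
    caching_pool cp = { 0 };
    on_block_alloc(&cp, 4096);
    on_block_free(&cp, 4096);
    return 0;
}
}}}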

Helgrind issue !#6

Description: Data race over the tcp->pending_connect variable.[[br]]
Status: should be safe; a modification to suppress this report would be desirable (see the sketch after the log below).

{{{
==5950== Lock at 0x6CAE988 was first observed
==5950==    at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==5950==    by 0x50AEE1: init_mutex (os_core_unix.c:1157)
==5950==    by 0x50B789: pj_mutex_create (os_core_unix.c:1211)
==5950==    by 0x427C06: pjsua_create (pjsua_core.c:817)
==5950==    by 0x408553: app_init (pjsua_app.c:1292)
==5950==    by 0x40A780: pjsua_app_init (pjsua_app.c:1885)
==5950==    by 0x407A0B: main_func (main.c:108)
==5950==    by 0x64F476C: (below main) (libc-start.c:226)
==5950==
==5950== Lock at 0x6D42E58 was first observed
==5950==    at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==5950==    by 0x50AEE1: init_mutex (os_core_unix.c:1157)
==5950==    by 0x50B789: pj_mutex_create (os_core_unix.c:1211)
==5950==    by 0x5112BE: create_mutex_lock (lock.c:75)
==5950==    by 0x5117DB: pj_grp_lock_create (lock.c:438)
==5950==    by 0x511B40: pj_grp_lock_create_w_handler (lock.c:463)
==5950==    by 0x46D575: tsx_create (sip_transaction.c:1014)
==5950==    by 0x46E2B5: pjsip_tsx_create_uac2 (sip_transaction.c:1309)
==5950==    by 0x46F144: pjsip_endpt_send_request (sip_util_statefull.c:103)
==5950==    by 0x440F73: pjsip_regc_send (sip_reg.c:1424)
==5950==    by 0x418408: pjsua_acc_set_registration (pjsua_acc.c:2579)
==5950==    by 0x41C164: pjsua_acc_add (pjsua_acc.c:487)
==5950==
==5950== Lock at 0x6D45728 was first observed
==5950==    at 0x4C2DDF1: pthread_mutex_init (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==5950==    by 0x50AEE1: init_mutex (os_core_unix.c:1157)
==5950==    by 0x50B789: pj_mutex_create (os_core_unix.c:1211)
==5950==    by 0x5112BE: create_mutex_lock (lock.c:75)
==5950==    by 0x5117DB: pj_grp_lock_create (lock.c:438)
==5950==    by 0x463DE7: tcp_create.constprop.3 (sip_transport_tcp.c:683)
==5950==    by 0x46441E: lis_create_transport (sip_transport_tcp.c:1021)
==5950==    by 0x4606A4: pjsip_tpmgr_acquire_transport2 (sip_transport.c:2040)
==5950==    by 0x4165C1: pjsua_acc_get_uac_addr.part.6 (pjsua_acc.c:3151)
==5950==    by 0x417FB7: pjsua_acc_create_uac_contact (pjsua_acc.c:3233)
==5950==    by 0x418670: pjsua_acc_set_registration (pjsua_acc.c:2334)
==5950==    by 0x41C164: pjsua_acc_add (pjsua_acc.c:487)
==5950==
==5950== Possible data race during read of size 4 at 0x6D43E20 by thread #1
==5950== Locks held: 2, at addresses 0x6CAE988 0x6D42E58
==5950==    at 0x463652: tcp_send_msg (sip_transport_tcp.c:1246)
==5950==    by 0x45F93A: pjsip_transport_send (sip_transport.c:833)
==5950==    by 0x45B0FD: stateless_send_transport_cb (sip_util.c:1243)
==5950==    by 0x45B439: stateless_send_resolver_callback (sip_util.c:1344)
==5950==    by 0x45E09E: pjsip_resolve (sip_resolve.c:306)
==5950==    by 0x45D16F: pjsip_endpt_send_request_stateless (sip_util.c:1388)
==5950==    by 0x46C087: tsx_send_msg (sip_transaction.c:2129)
==5950==    by 0x46C223: tsx_on_state_null (sip_transaction.c:2360)
==5950==    by 0x46ED35: pjsip_tsx_send_msg (sip_transaction.c:1737)
==5950==    by 0x46F1AE: pjsip_endpt_send_request (sip_util_statefull.c:117)
==5950==    by 0x440F73: pjsip_regc_send (sip_reg.c:1424)
==5950==    by 0x418408: pjsua_acc_set_registration (pjsua_acc.c:2579)
==5950==
==5950== This conflicts with a previous write of size 4 by thread #7
==5950== Locks held: 1, at address 0x6D45728
==5950==    at 0x46380A: on_connect_complete (sip_transport_tcp.c:1423)
==5950==    by 0x509C05: ioqueue_dispatch_write_event (ioqueue_common_abs.c:280)
==5950==    by 0x50A82E: pj_ioqueue_poll (ioqueue_select.c:969)
==5950==    by 0x459D8A: pjsip_endpt_handle_events2 (sip_endpoint.c:741)
==5950==    by 0x426946: pjsua_handle_events (pjsua_core.c:1833)
==5950==    by 0x426EB9: worker_thread (pjsua_core.c:696)
==5950==    by 0x50BD89: thread_main (os_core_unix.c:541)
==5950==    by 0x4C2DC3D: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==5950==
==5950== Address 0x6D43E20 is 416 bytes inside a block of size 6144 alloc'd
==5950==    at 0x4C2B87D: malloc (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==5950==    by 0x51BEAE: default_block_alloc (pool_policy_malloc.c:46)
==5950==    by 0x5130B5: pj_pool_allocate_find (pool.c:60)
==5950==    by 0x5131EC: pj_pool_calloc (pool_i.h:69)
==5950==    by 0x463BC8: tcp_create.constprop.3 (pool.h:476)
==5950==    by 0x46441E: lis_create_transport (sip_transport_tcp.c:1021)
==5950==    by 0x4606A4: pjsip_tpmgr_acquire_transport2 (sip_transport.c:2040)
==5950==    by 0x4165C1: pjsua_acc_get_uac_addr.part.6 (pjsua_acc.c:3151)
==5950==    by 0x417FB7: pjsua_acc_create_uac_contact (pjsua_acc.c:3233)
==5950==    by 0x418670: pjsua_acc_set_registration (pjsua_acc.c:2334)
==5950==    by 0x41C164: pjsua_acc_add (pjsua_acc.c:487)
==5950==    by 0x408DCF: app_init (pjsua_app.c:1813)
}}}
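
One way the "modification to suppress this" could look, sketched with hypothetical names (the real field lives in the TCP transport struct): the connect-completion callback clears pending_connect under its own group lock while the sender reads it under different locks, which is safe in practice but invisible to Helgrind; making the flag atomic keeps the behaviour and removes the report.

{{{
#include <stdatomic.h>

/* Hypothetical sketch of the TCP transport's pending-connect flag. */
struct tcp_transport {
    atomic_int pending_connect;   /* 1 until the TCP connect completes */
};

/* Called from the ioqueue thread when the TCP connect finishes. */
static void on_connect_complete(struct tcp_transport *tcp)
{
    atomic_store_explicit(&tcp->pending_connect, 0, memory_order_release);
}

/* Called from the sending thread: keep data queued while connecting. */
static int tcp_send_msg(struct tcp_transport *tcp)
{
    if (atomic_load_explicit(&tcp->pending_connect, memory_order_acquire))
        return 0;   /* not connected yet; message stays queued */
    /* ... send on the socket ... */
    return 1;
}

int main(void)
{
    struct tcp_transport tcp = { 1 };
    (void)tcp_send_msg(&tcp);   /* still pending: nothing sent */
    on_connect_complete(&tcp);
    return tcp_send_msg(&tcp) ? 0 : 1;
}
}}}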