Original content from 扣钉日记 (WeChat public account ID: codelogs). Sharing is welcome; please keep this attribution when reposting.
After we solved the problem of the JVM pausing for more than ten seconds last time, our system finally stabilized and stopped restarting for no apparent reason!
Here is the earlier article: It took several months, but we finally found the cause of the JVM pausing for over ten seconds.
But oddly, every so often a small wave of our service requests would time out with 499s, and checking the gc logs showed the JVM had once again paused for several seconds!
With the experience from the last JVM pause investigation, I immediately checked the gc log and the safepoint log and found the following:
# Stops where the total stopped time exceeds 1 second
$ cat gc-*.log | awk '/application threads were stopped/ && $(NF-6)>1' | tail
2022-05-08T16:40:53.886+0800: 78328.993: Total time for which application threads were stopped: 9.4917471 seconds, Stopping threads took: 9.3473059 seconds
2022-05-08T17:40:32.574+0800: 81907.681: Total time for which application threads were stopped: 3.9786219 seconds, Stopping threads took: 3.9038683 seconds
2022-05-08T17:41:00.063+0800: 81935.170: Total time for which application threads were stopped: 1.2607608 seconds, Stopping threads took: 1.1258499 seconds

# Safepoints where sync + vmop exceeds 1000 ms
$ cat safepoint.log | awk '/vmop/{ title=$0;getline;if($(NF-2)+$(NF-4)>1000){ print title;print $0}}'
         vmop                 [threads: total initially_running wait_to_block] [time: spin block sync cleanup vmop] page_trap_count
78319.500: G1IncCollectionPause    [     428          0              2 ]      [     0  9347  9347     7   137 ]  0
         vmop                 [threads: total initially_running wait_to_block] [time: spin block sync cleanup vmop] page_trap_count
81903.703: G1IncCollectionPause    [     428          0              4 ]      [     0  3903  3903    14    60 ]  0
         vmop                 [threads: total initially_running wait_to_block] [time: spin block sync cleanup vmop] page_trap_count
81933.906: G1IncCollectionPause    [     442          0              1 ]      [     0  1125  1125     8   126 ]  0
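(The article does not show the JVM flags that produced these logs; on JDK 8, output in this shape typically comes from flags along these lines, which I am assuming here, not quoting from the original:)

# GC log lines "Total time for which application threads were stopped: ..."
-XX:+PrintGCApplicationStoppedTime -Xloggc:gc-%t.log

# Per-safepoint statistics (the vmop / spin / block / sync / cleanup table),
# routed into safepoint.log via the diagnostic LogVMOutput option
-XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1
-XX:+UnlockDiagnosticVMOptions -XX:+LogVMOutput -XX:LogFile=safepoint.log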
The logs show that this JVM pause was also caused by a safepoint, and that the safepoint time went almost entirely to the block phase! The two logs line up: the 9.49 s stop at 78328.993 in the gc log corresponds to the G1IncCollectionPause at 78319.500 in the safepoint log, which shows spin = 0 and block = 9347 ms.
After adding the JVM options -XX:+SafepointTimeout -XX:SafepointTimeoutDelay=1000, the JVM prints which threads took more than 1000 ms to reach the safepoint, like this:
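(The original screenshot is not reproduced here. For reference, the report printed by JDK 8's safepoint.cpp looks roughly like the following; the thread names below are made up:)

# SafepointSynchronize::begin: Timeout detected:
# SafepointSynchronize::begin: Timed out while waiting for threads to stop.
# SafepointSynchronize::begin: Threads which did not reach the safepoint:
# "http-nio-8080-exec-3" ...
# "grpc-default-executor-12" ...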
As you can see, the stragglers were all http or grpc worker threads, but the output gave no hint as to why they failed to reach the safepoint, so I went back to searching the web.
In the safepoint log above, spin and block are both time spent waiting for threads to enter the safepoint, while vmop is the time spent on the JVM operation executed once everyone is at the safepoint. Unfortunately, although there are plenty of articles about safepoints online, almost none of them clearly explain the difference between the spin phase and the block phase!
With no other option, I went to read the JVM's internal safepoint implementation. After reading safepoint.cpp, I got a rough understanding of the difference between spin and block, as follows:
The spin phase actually does two things: first, it weeds out threads in the thread_in_native state, which costs little; second, it polls each thread's state, waiting for threads in the thread_in_Java state to move to some other state (for instance, by reaching a safepoint poll). This is also why counted-loop code drives spin time up: the thread running it stays in thread_in_Java the whole time, as the sketch below illustrates.
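Here is a minimal, self-contained sketch of that failure mode (class and variable names are mine, and the loop bound is arbitrary). On JDK 8, C2-compiled code contains no safepoint poll inside a loop with an int induction variable, so a safepoint requested mid-loop has to wait for the whole loop to finish:

public class CountedLoopDemo {
    static volatile long sink;  // keeps the loop from being optimized away

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long s = 0;
            // Counted loop (int induction variable): the compiled loop body
            // has no safepoint poll, so the thread stays thread_in_Java until
            // the loop exits, inflating the safepoint's spin phase.
            for (int i = 0; i < Integer.MAX_VALUE; i++) {
                s += i;
            }
            sink = s;
        });
        worker.start();
        Thread.sleep(100);  // let the loop get JIT-compiled and running
        // Any safepoint operation now (e.g. System.gc(), or jstack from
        // outside) must wait for the counted loop to finish. Changing
        // "int i" to "long i" makes it an uncounted loop, which keeps its
        // safepoint poll, and the long pause disappears.
        System.gc();
        worker.join();
    }
}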
The block phase, by contrast, waits for threads in the thread_in_vm state to reach the safepoint. Unlike in spin, the VM thread suspends itself here and is woken up once the last thread_in_vm thread arrives at the safepoint.
If my description still doesn't make it click, I strongly recommend reading the safepoint source yourself; following its overall flow is not hard. Online articles are fine for picking up the basics, but don't sink too much effort into them.
safepoint source: http://hg.openjdk.java.net/jdk8u/jdk8u/hotspot/file/818b1963f7a2/src/share/vm/runtime/safepoint.cpp
Key methods: SafepointSynchronize::begin, SafepointSynchronize::block, SafepointSynchronize::end
Back to the problem at hand: our time went to the block phase, i.e. waiting for threads in the thread_in_vm state to reach the safepoint, and a thread in thread_in_vm is executing code inside the JVM itself.
Could some pattern in our code be keeping threads inside the JVM for too long? I even found a proposal in the OpenJDK community suggesting that safepoint checks be added to System.arraycopy, which made me suspect it too (see the sketch after the link below), but how could I prove it?
Proposal: https://bugs.openjdk.org/browse/JDK-8233300
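To make the suspicion concrete, here is a minimal sketch (my own illustration; the array size is arbitrary, so run with something like -Xmx2g). HotSpot implements System.arraycopy as an intrinsic that copies the whole range without a safepoint poll in the middle, which is exactly what JDK-8233300 proposes to change:

public class ArrayCopyDemo {
    public static void main(String[] args) {
        int[] src = new int[100_000_000];
        int[] dst = new int[100_000_000];
        long t0 = System.nanoTime();
        // One monolithic copy: while this call runs, the copying thread cannot
        // stop at a safepoint, so every other thread must wait for it.
        System.arraycopy(src, 0, dst, 0, src.length);
        System.out.printf("copied %d ints in %d ms%n",
                src.length, (System.nanoTime() - t0) / 1_000_000);
    }
}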
After digging around for a while, I found the mainstream tooling for analyzing safepoints today. async-profiler provides a --ttsp option for analyzing safepoint (time-to-safepoint) events, used as follows:
# Download async-profiler
$ wget https://github.com/jvm-profiling-tools/async-profiler/releases/download/v2.8/async-profiler-2.8-linux-x64.tar.gz && tar xvf async* && cd async*

# Start async-profiler, sampling the stacks of threads that delay safepoints
$ ./profiler.sh start -e wall -t -o collapsed -f /tmp/tts.collapsed --ttsp jps

# Once the safepoint problem has reproduced, stop sampling and dump the stacks
$ ./profiler.sh stop -e wall -t -o collapsed -f /tmp/tts.collapsed --ttsp jps

# Convert the collapsed stacks into a flame graph
$ wget https://github.com/jvm-profiling-tools/async-profiler/releases/download/v2.8/converter.jar
$ java -cp converter.jar FlameGraph /tmp/tts.collapsed tts.html
At first, the flame graph I captured looked like this: