I have a Netty server that is defined as follows:
private static final int BOSS_GROUP_THREAD_COUNT = 1;
private static final int WORKER_GROUP_THREAD_COUNT = Runtime.getRuntime().availableProcessors() * 2;

private ChannelFuture bootStrapNettyServer(Context context) {
    ServerBootstrap bootstrap = new ServerBootstrap()
            .option(ChannelOption.SO_BACKLOG, 512)
            .option(ChannelOption.SO_REUSEADDR, true)
            .childOption(ChannelOption.TCP_NODELAY, true)
            .childOption(ChannelOption.SO_KEEPALIVE, true)
            .childOption(ChannelOption.CONNECT_TIMEOUT_MILLIS, 5)
            .childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
            .localAddress(config.getListenPort())
            .childHandler(new Initializer(context, createHandlers()));
    if (Epoll.isAvailable()) {
        bossGroup = new EpollEventLoopGroup(BOSS_GROUP_THREAD_COUNT);
        workerGroup = new EpollEventLoopGroup(WORKER_GROUP_THREAD_COUNT);
        bootstrap.group(bossGroup, workerGroup)
                .channel(EpollServerSocketChannel.class);
    } else {
        bossGroup = new NioEventLoopGroup(BOSS_GROUP_THREAD_COUNT);
        workerGroup = new NioEventLoopGroup(WORKER_GROUP_THREAD_COUNT);
        bootstrap.group(bossGroup, workerGroup)
                .channel(NioServerSocketChannel.class);
        log.fatal("bootStrapNettyServer", "Epoll is not available on this host. Failed over to NioEventLoopGroup class");
    }
    log.info("bootStrapNettyServer", "Boss group and worker group executor thread counts",
            "bossGroup", BOSS_GROUP_THREAD_COUNT, "workerGroup", WORKER_GROUP_THREAD_COUNT);
    return bootstrap.bind();
}
Each handler is responsible for doing its own job when invoked through reflection. Each handler class contains very minimal code and takes less than ~10 ms to complete.
private List<MessageHandler<Context>> createHandlers() {
    List<MessageHandler<Context>> handlers = new ArrayList<>();
    handlers.add(new InfoHandler<>(RateEstimator.getRateEstimator()));
    handlers.add(new GetConfigHandler<>(config));
    handlers.add(new SetConfigHandler<>(config));
    handlers.add(new ListHandler(config, appName));
    handlers.add(new DiscoverHandler(appName));
    return handlers;
}
The Initializer class is defined as follows:
public class Initializer extends ChannelInitializer<Channel> {
    private final StumpyVerbDispatcher<Context> dispatcher;

    public Initializer(Context context, List<MessageHandler<Context>> handlers) {
        Objects.requireNonNull(context, "context cannot be null");
        Objects.requireNonNull(handlers, "handlers cannot be null");
        this.dispatcher = new StumpyVerbDispatcher<>(context, handlers);
    }

    @Override
    protected void initChannel(Channel channel) {
        ChannelPipeline pipeline = channel.pipeline();
        pipeline.addLast(new EventDataInjector());         // duplex
        pipeline.addLast(new ConcurrentRequestsTracker()); // inbound
        pipeline.addLast(new IdleStateHandler(0, 0, 5));   // duplex
        pipeline.addLast(new IdleConnectionCloser());      // duplex
        pipeline.addLast(new StumpyMessageDecoder());      // inbound
        pipeline.addLast(new StumpyMessageEncoder());      // outbound
        pipeline.addLast(dispatcher);
    }
}
When I send requests from a client residing on a different machine, this Netty server cannot handle more than 1k TPS, and request latency is 5 seconds because the IdleStateHandler timeout is set to 5 seconds.
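To check my understanding of the semantics: `new IdleStateHandler(0, 0, 5)` disables the separate reader-idle and writer-idle checks and fires a single all-idle event once 5 seconds pass with no read or write on the channel, and any activity resets the clock. Roughly the same behaviour as this plain-JDK sketch (class and method names are illustrative; timeouts shortened for the demo):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Plain-JDK analogue of IdleStateHandler(0, 0, allIdleSeconds): a watchdog that
// fires once there has been no activity for the timeout; activity resets the clock.
public class IdleWatchdogDemo {

    // Returns {firedWhileActive, firedAfterGoingQuiet}.
    static boolean[] run(long idleTimeoutMs, long trafficPeriodMs, int trafficEvents) throws Exception {
        AtomicLong lastActivity = new AtomicLong(System.nanoTime());
        CountDownLatch fired = new CountDownLatch(1);
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            long idleMs = (System.nanoTime() - lastActivity.get()) / 1_000_000;
            if (idleMs >= idleTimeoutMs) {
                fired.countDown(); // ~ userEventTriggered(ALL_IDLE) in the pipeline
            }
        }, 50, 50, TimeUnit.MILLISECONDS);

        // Simulated traffic: each read/write resets the idle clock, so no event fires.
        for (int i = 0; i < trafficEvents; i++) {
            Thread.sleep(trafficPeriodMs);
            lastActivity.set(System.nanoTime());
        }
        boolean firedWhileActive = fired.getCount() == 0;

        // Go quiet: the watchdog fires once idleTimeoutMs elapses with no activity.
        boolean firedAfterQuiet = fired.await(idleTimeoutMs * 10, TimeUnit.MILLISECONDS);
        timer.shutdownNow();
        return new boolean[] { firedWhileActive, firedAfterQuiet };
    }

    public static void main(String[] args) throws Exception {
        boolean[] r = run(300, 100, 3); // the real handler uses a 5_000 ms timeout
        System.out.println("fired during traffic: " + r[0]);
        System.out.println("fired after going idle: " + r[1]);
    }
}
```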
I see logs from the IdleConnectionCloser, which is defined as follows:
private static final class IdleConnectionCloser extends ChannelDuplexHandler {
    private static final Logger log = new Logger(IdleConnectionCloser.class);

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        EventData eventData = ctx.channel().attr(EventDataInjector.KEY).get();
        if (evt instanceof IdleStateEvent) {
            String peer = ctx.channel().remoteAddress().toString();
            if (eventData != null) {
                eventData.incCounter("idleConnection", 1);
                log.warn("IdleConnectionCloser", "Killing idle connection", "peer", peer,
                        "time", System.currentTimeMillis(), "requestId", eventData.getValue(REQUEST_ID));
                eventData.setStatus(FAILED);
            }
            ctx.close();
        } else {
            if (eventData != null) { // guard: eventData can be null here too
                log.warn("IdleConnectionCloser", "Time in IdleConnectionCloser", "time",
                        System.currentTimeMillis(), "requestId", eventData.getValue(REQUEST_ID));
            }
            super.userEventTriggered(ctx, evt);
        }
    }
}
Can anyone help me understand if there is anything wrong with my setup?
I tried increasing the socket backlog size to 4096, but that didn't help either.