70

Performing millions of HTTP requests with different Java libraries gives me threads hung on:

java.net.SocketInputStream.socketRead0()

which is a native function.

I tried to set up the Apache HttpClient and RequestConfig with timeouts on (I hope) everything that is possible, but I still get (probably infinite) hangs on socketRead0. How can I get rid of them?

The hang ratio is about 1 per 10,000 requests (to 10,000 different hosts), and it can apparently last forever (I've confirmed a thread still hung after 10 hours).

JDK 1.8 on Windows 7.

My HttpClient factory:

SocketConfig socketConfig = SocketConfig.custom()
        .setSoKeepAlive(false)
        .setSoLinger(1)
        .setSoReuseAddress(true)
        .setSoTimeout(5000)
        .setTcpNoDelay(true).build();

HttpClientBuilder builder = HttpClientBuilder.create();
builder.disableAutomaticRetries();
builder.disableContentCompression();
builder.disableCookieManagement();
builder.disableRedirectHandling();
builder.setConnectionReuseStrategy(new NoConnectionReuseStrategy());
builder.setDefaultSocketConfig(socketConfig);

return builder.build(); // return the configured builder's client, not a fresh default one

My RequestConfig factory:

HttpGet request = new HttpGet(url);

RequestConfig config = RequestConfig.custom()
        .setCircularRedirectsAllowed(false)
        .setConnectionRequestTimeout(8000)
        .setConnectTimeout(4000)
        .setMaxRedirects(1)
        .setRedirectsEnabled(true)
        .setSocketTimeout(5000)
        .setStaleConnectionCheckEnabled(true).build();
request.setConfig(config);

return request; // return the configured request, not a new unconfigured HttpGet

OpenJDK socketRead0 source

Note: I do have a "trick": I can schedule .getConnectionManager().shutdown() in another thread, cancelling that task via a Future if the request finishes properly. But it is deprecated, and it also kills the whole HttpClient, not just that single request.

Kirby
Piotr Müller
  • Well they're going to block until data arrives or they time out. Do you mean that these threads are permanently blocked and not timing out? – user207421 Feb 28 '15 at 22:40
  • Yes, I mean it hangs forever (I checked in a 6-hour scenario) – Piotr Müller Mar 02 '15 at 08:02
  • Is it correct that `HttpClientBuilder` has `builder.disableRedirectHandling()` and `RequestConfig` has `.setRedirectsEnabled(true)` ? – Anton Danilov Mar 05 '15 at 14:30
  • Yes, but I think it's unrelated. The hang is on socketRead0() and also occurs with clients other than Apache HttpClient – Piotr Müller Mar 06 '15 at 13:33
  • Please show the code you use to initiate the http request – Bohemian Mar 09 '15 at 20:23
  • Are you sure it's the same thread that is hung for six hours? The fact that you're doing blocking reads will always mean there are threads blocked in `read(),` but the timeout should ensure that they unblock eventually. I would consider five seconds rather short for a read timeout BTW. – user207421 Mar 10 '15 at 02:05
  • @Bohemian Just the simplest HttpGet and client execute. Hangs were also present with plain URLConnection usage without the Apache client – Piotr Müller Mar 10 '15 at 13:02
  • @Ejp I'm totally sure; I've tracked every thread/client/request very closely, even watching stack traces through MBeans. The hang appears to be infinite. – Piotr Müller Mar 10 '15 at 13:02
  • 4
    Don't you think it's just OpenJDK bug? E.g.: https://bugs.openjdk.java.net/browse/JDK-8049846 – qwwdfsad Mar 10 '15 at 21:47
  • @qwwdfsad I've just grabbed OpenJDK source for demo and some help. I've used Oracle JDK 8 with that problem. – Piotr Müller Mar 11 '15 at 07:06
  • 3
    As of Feb 2017, still no sign of a fix for the hang on Windows. In contrast, with JDK-8075484 (JDK 9 in Sep 2016) and JDK-8172578 (JDK 8u152 in Jan 2017), Oracle seems to have fixed the hang in linux, solaris, macosx, and aix. The closest Windows bug seems to be JDK-8000679. – buzz3791 Feb 24 '17 at 16:13
  • 1
    Stuart Marks decided to close JDK-8000679 (the Windows version of this bug) in May 2017 sadly commenting "This is either a bug in the Java networking code or in the OS network layer. Closing as Cannot Reproduce." – buzz3791 Jun 26 '17 at 13:54

9 Answers

21

Though this question mentions Windows, I have the same problem on Linux. It appears there is a flaw in the way the JVM implements blocking socket timeouts:

To summarize, timeout for blocking sockets is implemented by calling poll on Linux (and select on Windows) to determine that data is available before calling recv. However, at least on Linux, both methods can spuriously indicate that data is available when it is not, leading to recv blocking indefinitely.

From poll(2) man page BUGS section:

See the discussion of spurious readiness notifications under the BUGS section of select(2).

From select(2) man page BUGS section:

Under Linux, select() may report a socket file descriptor as "ready for reading", while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready. Thus it may be safer to use O_NONBLOCK on sockets that should not block.

The Apache HTTP Client code is a bit hard to follow, but it appears that connection expiration is only set for HTTP keep-alive connections (which you've disabled) and is indefinite unless the server specifies otherwise. Therefore, as pointed out by oleg, the Connection eviction policy approach won't work in your case and can't be relied upon in general.

Trevor Robinson
  • 1
    It looks like the bug has been fixed in September. Have you stopped experiencing the problem? – Arya Oct 22 '16 at 06:23
16

As Clint said, you should consider a non-blocking HTTP client, or (since you are using Apache HttpClient) implement multithreaded request execution to prevent possible hangs of the main application thread (this does not solve the problem, but it is better than having to restart your app because it is frozen). In any case, you have set the setStaleConnectionCheckEnabled property, but the stale connection check is not 100% reliable. From the Apache HttpClient tutorial:

One of the major shortcomings of the classic blocking I/O model is that the network socket can react to I/O events only when blocked in an I/O operation. When a connection is released back to the manager, it can be kept alive however it is unable to monitor the status of the socket and react to any I/O events. If the connection gets closed on the server side, the client side connection is unable to detect the change in the connection state (and react appropriately by closing the socket on its end).

HttpClient tries to mitigate the problem by testing whether the connection is 'stale', that is no longer valid because it was closed on the server side, prior to using the connection for executing an HTTP request. The stale connection check is not 100% reliable and adds 10 to 30 ms overhead to each request execution.

The Apache HttpComponents crew recommends implementing a Connection eviction policy:

The only feasible solution that does not involve a one thread per socket model for idle connections is a dedicated monitor thread used to evict connections that are considered expired due to a long period of inactivity. The monitor thread can periodically call ClientConnectionManager#closeExpiredConnections() method to close all expired connections and evict closed connections from the pool. It can also optionally call ClientConnectionManager#closeIdleConnections() method to close all connections that have been idle over a given period of time.

Take a look at the sample code in the Connection eviction policy section and try to implement it in your application along with multithreaded request execution; I think implementing both mechanisms will prevent your undesired hangs. A rough sketch of the eviction monitor is below.
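A minimal sketch of such a monitor thread, assuming the connection manager is an org.apache.http.conn.HttpClientConnectionManager (e.g. a PoolingHttpClientConnectionManager shared with the client) and using an illustrative 30-second idle threshold; the names and numbers are mine, not from the tutorial. Note oleg's point in the comments below: this only evicts idle pooled connections, not ones currently blocked in a read.

import java.util.concurrent.TimeUnit;
import org.apache.http.conn.HttpClientConnectionManager;

// Dedicated monitor thread that periodically evicts expired and long-idle connections.
class IdleConnectionMonitorThread extends Thread {
    private final HttpClientConnectionManager connMgr;
    private volatile boolean shutdown;

    IdleConnectionMonitorThread(HttpClientConnectionManager connMgr) {
        this.connMgr = connMgr;
        setDaemon(true);
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5000); // wake up every 5 seconds
                    // Close connections whose keep-alive period has expired
                    connMgr.closeExpiredConnections();
                    // Optionally close connections that have been idle longer than 30 seconds
                    connMgr.closeIdleConnections(30, TimeUnit.SECONDS);
                }
            }
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt(); // terminate
        }
    }

    public void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }
}

Start it once next to the client, e.g. new IdleConnectionMonitorThread(connManager).start();, and call shutdown() when the client is closed.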

vzamanillo
  • Thanks for the detailed answer. The link about the eviction policy was what I was looking for. I have done a similar thing with the whole connection manager; now I know how to do it on the actual separate connections. Thanks. But in the end I will probably switch to a non-blocking client. – Piotr Müller Mar 11 '15 at 07:09
  • 2
    Eviction policy is intended to remove stale _idle_ connections. It will have no effect whatsoever on connections leased from the pool and being used to execute requests (and blocked in a read operation). – ok2c Mar 11 '15 at 12:24
  • @oleg If so, I've unaccepted the answer. Maybe something new will come up. – Piotr Müller Mar 11 '15 at 12:47
  • If you want to figure out what is going on, _please_ get me the wire log of the hanging sessions as I requested in my answer – ok2c Mar 11 '15 at 13:00
5

You should consider a non-blocking HTTP client like Grizzly or Netty, which do not have blocking operations that can hang a thread.
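For example (not part of Clint's original answer), Apache HttpComponents also ships an async, non-blocking variant, which the asker mentions in the comments; a minimal sketch, assuming the httpasyncclient module is on the classpath:

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
import org.apache.http.impl.nio.client.HttpAsyncClients;

public class AsyncGetExample {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpAsyncClient client = HttpAsyncClients.createDefault()) {
            client.start();
            // execute() returns immediately; the I/O reactor does the reading,
            // so no caller thread is parked in socketRead0.
            Future<HttpResponse> future = client.execute(new HttpGet("http://example.com"), null);
            // get(timeout) gives a hard upper bound regardless of socket behaviour.
            HttpResponse response = future.get(10, TimeUnit.SECONDS);
            System.out.println(response.getStatusLine());
        }
    }
}

Here the hard upper bound comes from future.get(10, TimeUnit.SECONDS) rather than from the socket, so a misbehaving server cannot park a caller thread in socketRead0 indefinitely.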

Clint
  • Good idea, and I will probably end up with that, but I just wanted to clarify how to achieve this with blocking HTTP (to get socketRead0 called, but not hang), so the other response is accepted. Thanks. I would only add that Apache HttpClient also has an async non-blocking version. – Piotr Müller Mar 11 '15 at 07:10
5

I have more than 50 machines that make about 200k requests/day/machine. They are running Amazon Linux AMI 2017.03. I previously had jdk1.8.0_102, now I have jdk1.8.0_131. I am using both Apache HttpClient and OkHttp as scraping libraries.

Each machine was running 50 threads, and sometimes the threads get lost. After profiling with the YourKit Java profiler I got:

ScraperThread42 State: RUNNABLE CPU usage on sample: 0ms
java.net.SocketInputStream.socketRead0(FileDescriptor, byte[], int, int, int) SocketInputStream.java (native)
java.net.SocketInputStream.socketRead(FileDescriptor, byte[], int, int, int) SocketInputStream.java:116
java.net.SocketInputStream.read(byte[], int, int, int) SocketInputStream.java:171
java.net.SocketInputStream.read(byte[], int, int) SocketInputStream.java:141
okio.Okio$2.read(Buffer, long) Okio.java:139
okio.AsyncTimeout$2.read(Buffer, long) AsyncTimeout.java:211
okio.RealBufferedSource.indexOf(byte, long) RealBufferedSource.java:306
okio.RealBufferedSource.indexOf(byte) RealBufferedSource.java:300
okio.RealBufferedSource.readUtf8LineStrict() RealBufferedSource.java:196
okhttp3.internal.http1.Http1Codec.readResponse() Http1Codec.java:191
okhttp3.internal.connection.RealConnection.createTunnel(int, int, Request, HttpUrl) RealConnection.java:303
okhttp3.internal.connection.RealConnection.buildTunneledConnection(int, int, int, ConnectionSpecSelector) RealConnection.java:156
okhttp3.internal.connection.RealConnection.connect(int, int, int, List, boolean) RealConnection.java:112
okhttp3.internal.connection.StreamAllocation.findConnection(int, int, int, boolean) StreamAllocation.java:193
okhttp3.internal.connection.StreamAllocation.findHealthyConnection(int, int, int, boolean, boolean) StreamAllocation.java:129
okhttp3.internal.connection.StreamAllocation.newStream(OkHttpClient, boolean) StreamAllocation.java:98
okhttp3.internal.connection.ConnectInterceptor.intercept(Interceptor$Chain) ConnectInterceptor.java:42
okhttp3.internal.http.RealInterceptorChain.proceed(Request, StreamAllocation, HttpCodec, Connection) RealInterceptorChain.java:92
okhttp3.internal.http.RealInterceptorChain.proceed(Request) RealInterceptorChain.java:67
okhttp3.internal.http.BridgeInterceptor.intercept(Interceptor$Chain) BridgeInterceptor.java:93
okhttp3.internal.http.RealInterceptorChain.proceed(Request, StreamAllocation, HttpCodec, Connection) RealInterceptorChain.java:92
okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(Interceptor$Chain) RetryAndFollowUpInterceptor.java:124
okhttp3.internal.http.RealInterceptorChain.proceed(Request, StreamAllocation, HttpCodec, Connection) RealInterceptorChain.java:92
okhttp3.internal.http.RealInterceptorChain.proceed(Request) RealInterceptorChain.java:67
okhttp3.RealCall.getResponseWithInterceptorChain() RealCall.java:198
okhttp3.RealCall.execute() RealCall.java:83

I found out that they have a fix for this

https://bugs.openjdk.java.net/browse/JDK-8172578

in JDK 8u152 (early access). I have installed it on one of our machines. Now I am waiting to see some good results.

Stefan Matei
  • Thanks for the update, please notify about the results. – Piotr Müller Jun 20 '17 at 13:49
  • No luck. It got stuck over the night. I will try to contact them at oracle about the bug. It was marked as resolved. And also find a workaround (abort connection from another thread) as I got tired of restarting the machines every day. – Stefan Matei Jun 21 '17 at 09:29
  • 1
    @Stefan Thanks for the info. If you get a bug filed against the Windows JDK please post the bug number on this stackoverflow question. – buzz3791 Sep 06 '17 at 20:05
  • Still happening in Java 8 U181 on Windows. – chiperortiz May 24 '22 at 19:48
3

Given no one else has responded so far, here is my take.

Your timeout settings look perfectly OK to me. The reason why certain requests appear to be constantly blocked in a java.net.SocketInputStream#socketRead0() call is likely a combination of misbehaving servers and your local configuration. The socket timeout defines a maximum period of inactivity between two consecutive I/O read operations (in other words, two consecutive incoming packets). Your socket timeout setting is 5,000 milliseconds. As long as the opposite endpoint keeps sending a packet every 4,999 milliseconds for a chunk-encoded message, the request will never time out and will end up spending most of its time blocked in java.net.SocketInputStream#socketRead0(). You can find out whether or not this is the case by running HttpClient with wire logging turned on.
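For reference, one way to turn on wire logging with the stock commons-logging SimpleLog backend (property names taken from the HttpClient 4.x logging documentation; adapt this if you use log4j or logback) is to set these system properties before creating the client:

// Route commons-logging to SimpleLog and enable the wire log,
// which prints every byte sent and received on the connection.
System.setProperty("org.apache.commons.logging.Log",
        "org.apache.commons.logging.impl.SimpleLog");
System.setProperty("org.apache.commons.logging.simplelog.showdatetime", "true");
System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http", "DEBUG");
System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http.wire", "DEBUG");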

ok2c
  • 1
    Socket read timeout defines the maximum interval between entering the `recv()` method and the arrival of data. It has nothing to do with the interval befween read operations, or between packets. – user207421 Mar 09 '15 at 08:37
  • Correct. Doesn't change the error in your answer. The timer starts when you enter recv(), or read(), and stops when it expires or data arrives or EOS or an error occurs. Nothing to do with the interval between two reads or two packets. What you've written above doesn't begin to make sense. It implies that you can't get a timeout on the first read, for example. And the time between two reads isn't the same thing as the time between two packets in the first place. – user207421 Mar 09 '15 at 22:32
  • Marvelous. The problem with your wonderful argument is that Java attempts to abstract away low level TCP/IP machinery and provides a different contract based on I/O stream APIs. Consumer of the API has no control over timers, buffers or #recv() method. The consumer can see how long the current execution thread stays blocked in a read operation. For long streams of data such as an HTTP content body what matters is how long it takes for an operation to unblock or in other words how long one read operation stays inactive before the next one can begin and reset the timer. – ok2c Mar 10 '15 at 08:28
  • 1
    The problem with your 'marvelous' answer is that it is false, as you could easily have determined by experiment, rather than just arguing about it, and posting yet more unsubstantiated nonsense about the intervals between two reads, or two packets, or whatever else you're trying to contort this into. I suggest you try it before you debate this further. – user207421 Mar 10 '15 at 08:35
  • And that was it? Suggestion to try it out actually works both ways. – ok2c Mar 10 '15 at 09:02
  • It's your answer: it's your assertion: it's been challenged. It's up to you to prove it. Or rather, them, as you've asserted two mutually inconsistent positions. Let us know when you have some evidence, or an acceptable source to cite, for whichever of them you decide to maintain. But they can't both be right. I will just hint that I'm not guessing about this. – user207421 Mar 10 '15 at 09:13
  • Depressing it may be, but that doesn't absolve you of the responsibility of backing up your claims. In this case you can't, as you're embedded in self-contradiction. Suppose you explain this: if it's the interval between packets, or reads, whichever you like, how come you can get a timeout on the first read? Or the first packet? And cut out the personal remarks. All they accomplish is to underline your lack of proof and logical argument. – user207421 Mar 10 '15 at 10:58
  • Allow me to ignore most of your blathering. As far as the question goes: the read operation starts, no packet arrives, the read unblocks with an exception, no more reads are attempted; the socket timeout equals the maximum period of inactivity between consecutive reads. – ok2c Mar 10 '15 at 11:09
  • 2
    Of course @oleg is right: If the server you're connected to is extremely slow, sending a 1 TB file at one byte every 4.9 seconds, you will spend a very long time blocked in that socketRead0() without being kicked out by the timeout. Once you have lots of threads in this situation, you've depleted your thread pool, and the system is "down". This is one of the reasons why HTTP/REST is a shitty solution for comms between "Micro Services". – stolsvik Sep 20 '17 at 09:53
3

For the blocking Apache HttpClient, I found the best solution is to getConnectionManager() and shut it down.

So in a high-reliability solution I just schedule the shutdown in another thread, and if the request does not complete in time, that other thread shuts the client down (a rough sketch follows).
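A minimal sketch of that watchdog, assuming a client created per request and an illustrative 30-second deadline (the class name and timeout are mine, not from the original answer): the scheduled task shuts the connection manager down, which unblocks socketRead0, and it is cancelled if the request completes in time.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class HardDeadlineGet {
    // Daemon watchdog thread so it doesn't keep the JVM alive.
    private static final ScheduledExecutorService WATCHDOG =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "http-watchdog");
                t.setDaemon(true);
                return t;
            });

    public static void main(String[] args) throws Exception {
        CloseableHttpClient client = HttpClients.createDefault();
        // If the request is still running after 30 s, kill the whole client
        // (getConnectionManager() is deprecated, but its shutdown() unblocks socketRead0).
        ScheduledFuture<?> killer = WATCHDOG.schedule(
                () -> client.getConnectionManager().shutdown(), 30, TimeUnit.SECONDS);
        try (CloseableHttpResponse response = client.execute(new HttpGet("http://example.com"))) {
            System.out.println(response.getStatusLine());
        } finally {
            killer.cancel(false); // request finished in time, don't kill the client
            client.close();
        }
    }
}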

Piotr Müller
3

I bumped into the same issue using Apache HttpClient.

There's a pretty simple workaround (which doesn't require shutting the connection manager down):

To apply the workaround, execute the request from the question in a new thread, paying attention to these details:

  • run the request in a separate thread, close the request and release its connection from a different thread, then interrupt the hanging thread
  • don't run EntityUtils.consumeQuietly(response.getEntity()) in the finally block (because it hangs on a 'dead' connection)

First, add the interface

interface RequestDisposer {
    void dispose();
}

Execute an HTTP request in a new thread

final AtomicReference<RequestDisposer> requestDisposer = new AtomicReference<>(null);

final Thread thread = new Thread(() -> {
    final HttpGet request = new HttpGet("http://my.url");
    final RequestDisposer disposer = () -> {
        request.abort();
        request.releaseConnection();
    };
    requestDisposer.set(disposer);

    try (final CloseableHttpResponse response = httpClient.execute(request)) {
        ...
    } finally {
        disposer.dispose();
    }
});
thread.start();

Call dispose() in the main thread to close the hanging connection:

requestDisposer.get().dispose(); // better check if it's not null first
thread.interrupt();
thread.join();

That fixed the issue for me.

My stacktrace looked like this:

java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:253)
at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:227)
at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:186)
at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:137)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)

For anyone interested, it is easily reproducible: interrupt the thread without aborting the request and releasing the connection (the ratio is about 1/100). Windows 10, version 10.0, JDK 8u151 x64.

Sergei Voitovich
3

I feel that all these answers are way too specific.

We have to note that this is probably a real JVM bug. It should be possible to get the file descriptor and close it. All this timeout talk is too high level. You do not want a timeout to the extent that the connection fails; what you want is the ability to hard-break this stuck thread and stop or interrupt it.

The way the JVM should have implemented the SocketInputStream.socketRead function is to set some internal default timeout, even as low as 1 second, and then, when that timeout fires, immediately loop back into socketRead0. Between those iterations, Thread.interrupt and Thread.stop could take effect.

The even better way of doing this, of course, is not to do any blocking wait at all, but instead to use the select(2) system call with a list of file descriptors and, when any one has data available, perform the read operation on it. (In Java, that roughly corresponds to NIO's Selector; see the sketch below.)
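A minimal Java NIO sketch of that idea, assuming a single plain-HTTP host and an overall 10-second deadline (illustrative only; real HTTP handling needs a proper parser and error handling):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class SelectorReadSketch {
    public static void main(String[] args) throws Exception {
        try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("example.com", 80));
             Selector selector = Selector.open()) {
            channel.write(ByteBuffer.wrap(
                    "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n".getBytes(StandardCharsets.US_ASCII)));
            channel.configureBlocking(false);                    // reads never block
            channel.register(selector, SelectionKey.OP_READ);

            ByteBuffer buf = ByteBuffer.allocate(8192);
            long deadline = System.currentTimeMillis() + 10_000; // hard overall deadline
            while (System.currentTimeMillis() < deadline) {
                if (selector.select(1000) == 0) {
                    continue;                                    // nothing ready; loop stays responsive
                }
                selector.selectedKeys().clear();
                int n = channel.read(buf);                       // returns immediately, even on spurious readiness
                if (n == -1) {
                    return;                                      // server closed the connection
                }
                buf.clear();                                     // discard data in this sketch
            }
            System.out.println("Deadline reached, abandoning the read");
        }
    }
}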

Just look all over the internet at all these people having trouble with threads stuck in java.net.SocketInputStream#socketRead0; it's the most popular topic about java.net.SocketInputStream, hands down!

So, while the bug is not fixed, I wonder about the dirtiest trick I can come up with to break this situation up. Something like connecting with the debugger interface to get to the stack frame of the socketRead call, grabbing the FileDescriptor, digging into that to get the int fd number, and then making a native close(2) call on that fd.

Do we have a chance to do that? (Don't tell me "it's not good practice") -- if so, let's do it!

Gunther Schadow
2

I faced the same issue today. Based on @Sergei Voitovich's answer, I've tried to make it work while still using Apache HttpClient.

Since I am using Java 8, it's simpler to implement a timeout to abort the connection.

Here is a draft of the implementation:

private HttpResponse executeRequest(Request request){
    InterruptibleRequestExecution requestExecution = new InterruptibleRequestExecution(request, executor);
    ExecutorService executorService = Executors.newSingleThreadExecutor();
    try {
        return executorService.submit(requestExecution).get(<your timeout in milliseconds>, TimeUnit.MILLISECONDS);
    } catch (TimeoutException | ExecutionException e) {
        // Your request timed out, you can throw an exception here if you want
        throw new UsefulExceptionForYourApplication(e);
    } catch (InterruptedException e) {
        // Always remember to call interrupt after catching InterruptedException
        Thread.currentThread().interrupt();
        throw new UsefulExceptionForYourApplication(e);
    } finally {
        // This forces the single-thread pool created by Executors.newSingleThreadExecutor()
        // to stop and makes the pending request abort inside the thread, so if the request
        // is hanging in socketRead0 it will stop and the thread will be terminated.
        forceStopIdleThreadsAndRequests(requestExecution, executorService);
    }
}

private void forceStopIdleThreadsAndRequests(InterruptibleRequestExecution execution,
                                             ExecutorService executorService) {
    execution.abortRequest();
    executorService.shutdownNow();
}

The code above will create a new thread to execute the request using org.apache.http.client.fluent.Executor. The timeout can be easily configured.

The execution of the thread is defined in InterruptibleRequestExecution which you can see below.

private static class InterruptibleRequestExecution implements Callable<HttpResponse> {
    private final Request request;
    private final Executor executor;
    private final RequestDisposer disposer;

    public InterruptibleRequestExecution(Request request, Executor executor) {
        this.request = request;
        this.executor = executor;
        this.disposer = request::abort;
    }

    @Override
    public HttpResponse call() {
        try {
            return executor.execute(request).returnResponse();
        } catch (IOException e) {
            throw new UsefulExceptionForYourApplication(e);
        } finally {
            disposer.dispose();
        }
    }

    public void abortRequest() {
        disposer.dispose();
    }

    @FunctionalInterface
    interface RequestDisposer {
        void dispose();
    }
}

The results are really good. We had times when some connections were hanging in socketRead0 for 7 hours! Now it never exceeds the defined timeout, and it's working in production with millions of requests per day without any problems.

Eduardo Brito