After researching these exceptions I have already tried a lot, but none of the suggested solutions has helped so far, hence another Stack Overflow question. Maybe I am missing something.
I am using the 'request' module and I have 800 requests / second available on a REST API I want to use. I therefore wrote a wrapper class that includes rate limiting plus methods for all of the API endpoints. Whenever I try to perform more than ~100 requests / second (either as a single instance or as a cluster) I get exceptions like these:
Error: getaddrinfo ENOTFOUND api.domain.com api.domain.com:80
Error: ESOCKETTIMEDOUT
Error: ETIMEDOUT
These are the default options I am using for my requests:
apiRequest = request.defaults({
  headers: {
    Authorization: `Bearer ${this.apiJwt}`,
  },
  json: true,
  timeout: 6000,
  baseUrl: this.baseUrl,
  pool: { maxSockets: Infinity },
})
// First parameter is the priority, the second is the function to call,
// and the third is passed through as that function's first argument.
this.limiter.schedulePriority(priority, this.apiRequest, url)
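For context, the limiter behaves roughly like the following sketch: a priority queue that is drained at a fixed rate. This is illustrative only (the real class is larger); the point is that calls are queued and released below the allowed rate rather than fired immediately:

// Illustrative only, not my actual class: a minimal priority-queue limiter
// that releases one queued call per tick, i.e. at most
// `requestsPerSecond` calls per second.
class RateLimiter {
  constructor(requestsPerSecond) {
    this.queue = [];
    setInterval(() => this.drain(), 1000 / requestsPerSecond);
  }

  // Lower number = higher priority; `args` are forwarded to `fn`.
  schedulePriority(priority, fn, ...args) {
    return new Promise((resolve, reject) => {
      this.queue.push({ priority, run: () => fn(...args), resolve, reject });
      this.queue.sort((a, b) => a.priority - b.priority);
    });
  }

  drain() {
    const job = this.queue.shift();
    if (!job) return;
    Promise.resolve().then(job.run).then(job.resolve, job.reject);
  }
}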
What I've tried:
- Putting
  process.env.UV_THREADPOOL_SIZE = 128
  right after my imports in index.js (see: Node.js request module getting ETIMEDOUT and ESOCKETTIMEDOUT)
- Adding
  pool: { maxSockets: Infinity }
  to the default request options (see code snippet above)
- Checking the network throughput, which is around ~16-25 Mbps down and ~1 Mbps up; I have 10 Mbps upload and 50 Mbps download available locally.
- Starting the application in a cluster of 8 instances, where each instance performs 100 requests / second (a simplified sketch of this startup follows below)
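The cluster variant from the last point looks roughly like this simplified sketch (the file name, worker count, and the endpoint method are illustrative, not my exact code):

// Simplified sketch only: 8 workers, each scheduling ~100 requests / second
// through its own wrapper/limiter. './api-wrapper' and someEndpoint() stand
// in for my real wrapper class and its endpoint methods.
const cluster = require('cluster');
const ApiWrapper = require('./api-wrapper');

process.env.UV_THREADPOOL_SIZE = 128; // set right after the imports, as described above

const WORKERS = 8;
const REQUESTS_PER_SECOND_PER_WORKER = 100;

if (cluster.isMaster) {
  for (let i = 0; i < WORKERS; i += 1) {
    cluster.fork();
  }
} else {
  const api = new ApiWrapper();
  setInterval(() => {
    for (let i = 0; i < REQUESTS_PER_SECOND_PER_WORKER; i += 1) {
      api.someEndpoint(); // internally calls limiter.schedulePriority(...)
    }
  }, 1000);
}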
Is there something I am missing in order to perform that many concurrent requests? Maybe process.env.UV_THREADPOOL_SIZE does not get applied correctly (is there a way to check that)?
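In other words: is logging the variable at startup, as below, enough to confirm it, or does that only prove the environment variable is set in the process rather than that libuv actually sized its threadpool from it?

// Only echoes the env var back; does this actually confirm the libuv
// threadpool size, or must the variable be set before the first
// threadpool operation for it to take effect?
console.log(`pid ${process.pid} UV_THREADPOOL_SIZE =`, process.env.UV_THREADPOOL_SIZE);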