
I'm requesting data from an API. Currently, my API key is limited to:

  • 10 requests every 10 seconds
  • 500 requests every 10 minutes

Basically, I want to request a specific value from every game the user has played. That's, for example, about 300 games.

So I have to make about 300 requests from my PHP script. How can I slow them down to stay within the rate limit? (It can take time; the site does not have to be fast.)

I tried sleep(), which resulted in my script crashing. Any other ways to do this?

PattyLi
  • `sleep()` does not make a script crash when used correctly. Also have a look at `usleep()`. – mblaettermann Jan 03 '16 at 18:13
  • Does this help? http://stackoverflow.com/questions/1375501/how-do-i-throttle-my-sites-api-users – Sudhir Bastakoti Jan 03 '16 at 18:15
  • Also, you should handle the rate-limit error in some way, so you sleep only when needed. How long are you blocked if you hit the rate limit? You could also do some timing calculations and adjust your usleep accordingly after each request to take your approach one step further – mblaettermann Jan 03 '16 at 18:16
  • @mblaettermann It may not make a *script* crash, but it can cause a *webserver* to become unavailable. If you're using Apache, for example, `sleep()` calls tie up an Apache worker until they complete. With a smallish server, a couple dozen visitors hitting the script would bring it to a halt. – ceejayoz Jan 03 '16 at 18:16

3 Answers


I suggest setting up a cron job that executes every minute, or, even better, using Laravel's scheduler, rather than using sleep or usleep to imitate a cron. A sketch of this approach follows the links below.

Here is some information on both:

https://laravel.com/docs/5.1/scheduling

http://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
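
For example, a minimal sketch of the cron approach might look like this. The queue file of game IDs and the fetchGameValue() helper are assumptions you would replace with your own code; the batch sizes are chosen so that 5 batches of 10 requests per minute stay within both the 10-per-10-seconds and 500-per-10-minutes limits:

# crontab entry: run the worker once per minute
* * * * * /usr/bin/php /path/to/fetch_games.php

The worker script then processes a bounded number of requests per run and saves its remaining work for the next run:

<?php
// fetch_games.php -- process up to 50 game IDs per run:
// 5 batches of 10 requests, one batch per 10-second window.

$queueFile = __DIR__ . '/game_ids.txt'; // hypothetical queue of remaining IDs
$ids = file($queueFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) ?: [];

for ($batch = 0; $batch < 5 && $ids; $batch++) {
    $start = time();
    foreach (array_splice($ids, 0, 10) as $id) {
        fetchGameValue($id); // hypothetical helper wrapping your API call
    }
    // Wait out the rest of the 10-second window before the next batch
    $elapsed = time() - $start;
    if ($elapsed < 10 && $ids) {
        sleep(10 - $elapsed);
    }
}

// Persist whatever is left for the next cron run
file_put_contents($queueFile, implode("\n", $ids));
?>

Because each run finishes in well under a minute, consecutive cron invocations do not overlap, and a crashed run simply resumes from the queue file on the next invocation.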

Colin Schoen
  • No need for Laravel or Cronjob. Basic usleep should be enough. – mblaettermann Jan 03 '16 at 18:14
  • You can implement such a long lasting scheme in php. It works. The only thing that might be a minor disadvantage is that there is no recovery in case of a failure. This would be different when using a cron job: a new process would simply be started. – Colin Schoen Jan 03 '16 at 18:16
  • "I suggest setting up a cron job that executes every 10 seconds" That's technically impossible. – ceejayoz Jan 03 '16 at 18:16
  • I suppose you could schedule the job to run every minute, and sleep in a loop in 10s intervals. This would be predicated on your command being completed before the ten second interval expires, or you'll get overlap when the next command runs. This feels like a precarious solution, but if you can guarantee very short execution of the main command of the script, it would work. – Colin Schoen Jan 03 '16 at 18:18

This sounds like a perfect use for the set_time_limit() function. This function lets you specify how long your script may execute, in seconds. For example, if you call set_time_limit(45); at the beginning of your script, the script will be allowed to run for at most 45 seconds. One great feature of this function is that you can allow your script to execute indefinitely (no time limit) by calling set_time_limit(0);.

You may want to write your script using the following general structure:

<?php
// Ignore user aborts and allow the script
// to run forever
ignore_user_abort(true);
set_time_limit(0);

// Define constant for how much time must pass between batches of connections:
define('TIME_LIMIT', 10); // Seconds between batches of API requests

$tLast = 0;
while( /* Some condition to check if there are still API connections that need to be made */ ){

    if( time() >= ($tLast + TIME_LIMIT) ){ // Check if TIME_LIMIT seconds have passed since the last connection batch
        // TIME_LIMIT seconds have passed since the last batch of connections
        /* Use cURL multi to make 10 asynchronous connections to the API */

        // Once all of those connections are made and processed, save the current time:
        $tLast = time();
    }else{
        // TIME_LIMIT seconds have not yet passed
        // Calculate the number of seconds remaining until TIME_LIMIT seconds have passed:
        $timeDifference = $tLast + TIME_LIMIT - time();
        sleep( $timeDifference ); // Sleep for the calculated number of seconds
    }

} // END WHILE-LOOP

/* Do any additional processing, computing, and output */
?>

Note: In this code snippet, I am also using the ignore_user_abort() function. As noted in the comment in the code, this function allows the script to ignore a user abort, so if the user closes the browser (or connection) while your script is still executing, the script will continue retrieving and processing the data from the API anyway. You may want to disable that in your implementation, but I will leave that up to you.

Obviously this code is very incomplete, but it should give you a decent understanding of how you could possibly implement a solution for this problem.
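
For the batching step left as a placeholder above, a minimal curl_multi sketch could look like the following. The $urls array (holding the next batch of up to 10 API endpoint URLs) is an assumption; how you build it depends on your API:

<?php
// Sketch: fire up to 10 requests concurrently with curl_multi.
// $urls is assumed to hold the current batch of API endpoint URLs.
$mh = curl_multi_init();
$handles = [];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}

// Run all handles until every transfer has finished
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);

// Collect responses and clean up
foreach ($handles as $ch) {
    $response = curl_multi_getcontent($ch);
    /* ... process $response ... */
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
?>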

Spencer D

Don't slow the individual requests down.

Instead, you'd typically use something like Redis to keep track of requests per IP or per user. Once the limit is hit for a time period, reject (with an HTTP 429 status code, perhaps) until the count resets.

http://redis.io/commands/INCR coupled with http://redis.io/commands/expire would easily do the trick.
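
Applied to your own outgoing requests, a sketch of the INCR + EXPIRE pattern might look like this (it assumes the phpredis extension; the key name and wait interval are arbitrary choices):

<?php
// Sketch: throttle outgoing calls with a Redis counter that
// expires every 10 seconds, mirroring the API's 10s window.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

function allowRequest(Redis $redis): bool {
    $key = 'api:outgoing:10s';    // counter for the current 10-second window
    $count = $redis->incr($key);
    if ($count === 1) {
        $redis->expire($key, 10); // start the window on the first hit
    }
    return $count <= 10;          // at most 10 requests per window
}

while (!allowRequest($redis)) {
    usleep(250000); // wait a quarter second, then re-check
}
// ... make the API request here ...
?>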

ceejayoz
  • The OP isn't writing the API; they are consuming it. The answer should describe how to observe the rate limit for their requests, not how to restrict all incoming requests to that rate limit. – Colin Schoen Jan 03 '16 at 18:32
  • @ColinSchoen The same technique can be used for outgoing requests. – ceejayoz Jan 03 '16 at 18:44