Free API Rate Limit & “Sleep” Calculator
1. API Constraints
2. Throttling Requirements
3. Implementation Settings
One of the hardest lessons in B2B automation is realizing that computers are too fast. If you build a loop to sync 10,000 HubSpot contacts to your PostgreSQL database, your automation platform will attempt to fire all 10,000 HTTP requests in a fraction of a second.
The receiving API will start rejecting your calls with an HTTP 429: Too Many Requests error, and may temporarily block your IP address altogether.
To prevent this, Ops Engineers must inject a “Sleep” or “Wait” function into their loops to artificially throttle the data transfer. Use the calculator above to find the exact millisecond delay required to stay under the API’s limit, while factoring in a safety buffer for network latency.
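The math behind the calculator is simple; here is a minimal sketch in JavaScript (the function name and the flat 50 ms buffer are our own illustrative choices, not the calculator’s exact internals):

```javascript
// Minimum delay between sequential requests for a given rate limit,
// plus a flat safety buffer (ms) to absorb network-latency jitter.
function minDelayMs(requestsPerSecond, bufferMs = 50) {
  return Math.ceil(1000 / requestsPerSecond) + bufferMs;
}

console.log(minDelayMs(5)); // Airtable (5 req/s) → 250 ms between calls
console.log(minDelayMs(3)); // Notion (3 req/s) → 384 ms between calls
```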
Understanding HTTP 429 “Too Many Requests” Errors
Every major SaaS platform protects its servers using Rate Limiting. This ensures that one rogue automation doesn’t crash the server for everyone else.
- Airtable: 5 requests per second.
- Shopify (REST Admin API): 2 requests per second (a leaky bucket of 40).
- Notion: 3 requests per second.
When you exceed this limit, the server responds with a 429 status code. A well-behaved API will also include a Retry-After response header, which tells your script exactly how many seconds it has been placed in “timeout” before it is allowed to ask for data again.
If your Make.com or n8n workflow ignores this code and continues to blast the server with requests, you risk having your API token permanently revoked.
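A client that respects Retry-After instead of blasting away could look like this sketch (the function names, retry count, and 1-second fallback are our own assumptions; it presumes a runtime with a global fetch, such as Node 18+):

```javascript
// Convert a Retry-After header value (seconds) into milliseconds,
// falling back to 1 second when the header is missing or malformed.
function retryAfterMs(headerValue) {
  const seconds = Number(headerValue);
  return Number.isFinite(seconds) && seconds > 0 ? seconds * 1000 : 1000;
}

// Retry a request whenever the server answers 429, waiting as instructed.
async function fetchWithRetry(url, options = {}, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await fetch(url, options);
    if (res.status !== 429) return res;
    const waitMs = retryAfterMs(res.headers.get("Retry-After"));
    await new Promise(resolve => setTimeout(resolve, waitMs));
  }
  throw new Error(`Still rate-limited after ${maxRetries} retries: ${url}`);
}
```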
How to Configure the “Sleep” Node in Zapier, Make, and n8n
Once you have generated your required delay using the calculator above, you must implement it inside your workflow’s Iterator or Do/While loop.
1. Throttling in Make.com
Make.com has a built-in module for this. Search for the green “Tools” app and select the “Sleep” module. Place this module immediately after your API HTTP request, but inside your Iterator loop. Note: Make.com requires the input to be in Seconds, not Milliseconds. Our calculator outputs the exact decimal conversion for Make automatically.
2. Throttling in n8n (Node.js)
If you are writing a custom loop in an n8n Code Node, JavaScript has no built-in blocking Sleep(), and freezing Node.js’s single-threaded event loop would stall the whole workflow anyway. Instead, you must await a Promise. Paste this exact line generated by our tool at the bottom of your loop: await new Promise(resolve => setTimeout(resolve, 1500));
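In context, the throttled loop looks like this sketch (the per-item API call is a placeholder, and how you obtain the input items depends on your Code node’s settings):

```javascript
// Await a Promise to pause without blocking the event loop.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Process records one at a time, pausing between API calls.
async function processItems(items, delayMs) {
  const results = [];
  for (const item of items) {
    // ... call the rate-limited API for this item here ...
    results.push(item);
    await sleep(delayMs);
  }
  return results;
}
```

Inside an actual Code node you would feed this loop the node’s input items and the delay value from the calculator.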
3. Throttling in Zapier
Zapier makes rate-limiting incredibly difficult. The native “Delay by Zapier” tool only accepts whole minutes (you cannot delay a Zap by 1.5 seconds). To achieve millisecond throttling in Zapier, you must use a “Code by Zapier” (JavaScript) step and write your own Promise-based delay to await between requests.
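The same awaited-Promise pattern works anywhere JavaScript runs; here is a generic sketch of a reusable throttled loop you could adapt to a Code step (the helper name is ours, not a Zapier API):

```javascript
// Run an async worker over a list sequentially, pausing between calls
// so the downstream API never sees a burst of requests.
async function throttledMap(records, worker, delayMs) {
  const results = [];
  for (const record of records) {
    results.push(await worker(record));
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  return results;
}
```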
Why Your Automation Platform Times Out on Long Migrations
If you use the calculator above and see that your Total Est. Runtime is 2.4 Hours, you have a massive architectural problem.
iPaaS platforms are built for micro-tasks, not massive ETL (Extract, Transform, Load) data migrations. They have strict server execution timeouts:
- Zapier: Kills any workflow running longer than 15 minutes.
- Make.com: Kills any scenario running longer than 40 minutes.
- AWS Lambda: Max timeout is 15 minutes.
If your calculated runtime exceeds these limits, your workflow will crash partway through, leaving your databases only partially synced.
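The runtime risk is easy to estimate yourself; a sketch of the arithmetic behind the “Total Est. Runtime” figure (the 300 ms average request latency is an assumption, not a measured value):

```javascript
// Total runtime in minutes: every record pays the sleep delay
// plus the round-trip latency of its own HTTP request.
function estimatedRuntimeMinutes(records, delayMs, avgLatencyMs = 300) {
  return (records * (delayMs + avgLatencyMs)) / 60000;
}

console.log(estimatedRuntimeMinutes(10000, 900)); // 200 minutes — far past any iPaaS timeout
```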
Batching vs. Sequential API Calls (The Ultimate Fix)
If your workflow takes too long, you cannot just lower the sleep timer (or you will hit the 429 error). Instead, you must change how you ask the API for data.
Instead of making 1 API call per record (Sequential), you need to look at the API documentation for Bulk Endpoints (Batching). For example, instead of pushing 1 row to Airtable 5000 times, you can push an array of 10 rows to Airtable 500 times. This reduces your total API requests by 90%, instantly solving both your rate-limit issues and your execution timeout risks.
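Batching starts with chunking your records; a minimal sketch (the 10-record default mirrors Airtable’s batch-create limit, but check your API’s documentation for its actual maximum):

```javascript
// Split a flat array of records into batches of `size` for a bulk endpoint.
function chunk(records, size = 10) {
  const batches = [];
  for (let i = 0; i < records.length; i += size) {
    batches.push(records.slice(i, i + size));
  }
  return batches;
}

// 5,000 single-row pushes become 500 requests of 10 rows each.
console.log(chunk(new Array(5000).fill(0)).length); // 500
```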
Are your webhooks constantly timing out? Stop trying to force Make.com to do an ETL pipeline’s job. Download our guide on handling massive B2B data migrations using dedicated bulk APIs and Message Queues.