🔑 Key Takeaways
The 50,000-record wall: Airtable Team plans hit a hard cap at 50,000 records per base. Performance starts degrading around 20,000 records when using formulas, rollups, or linked records. Upgrading to Business ($45/seat/month for 125,000 records) or Enterprise ($60+/seat/month for 500,000) quickly becomes prohibitively expensive.
The archive solution: Automatically move “Closed” records from Airtable to BigQuery, keeping your active working set small and performant while preserving all historical data in a queryable data warehouse that costs $0.02/GB/month.
What you’ll learn:
The stakes: Without archiving, you either (a) hit Airtable’s record limit and can’t add new data, or (b) pay more than double each month for higher tiers that still degrade in performance. This workflow pays for itself in week one.
We hit Airtable’s 50,000-record limit on a Thursday afternoon. Our sales CRM had grown steadily for 18 months — every deal, every contact, every interaction logged meticulously in Airtable. The team loved the interface. The automations worked beautifully. We’d built our entire workflow around it.
Then the banner appeared: “You’ve reached your record limit. Upgrade to continue adding data.”
Business plan: $45 per seat per month. We had 12 seats. That’s $540/month, a 125% increase from our current $240/month Team plan bill. And for what? Most of those 50,000 records were closed deals from 2023 and 2024 — data we rarely accessed but couldn’t delete because of audit requirements.
The solution wasn’t upgrading. It was archiving. Move closed records to BigQuery automatically, keep only active data in Airtable, and access historical records when needed via SQL queries. Total cost: $12/month (n8n hosting + BigQuery storage).
Here’s the complete implementation, with every detail we learned while processing 847,000 archived records over the past 14 months.
Airtable’s pricing tiers create a trap. You start on Team because it’s affordable ($20/seat/month) and 50,000 records seems like plenty. Then your business grows. You’re successful. And suddenly that success becomes a liability.
The record limit isn’t the only problem. Performance degradation begins much earlier:
At 15,000-20,000 records with formula fields and rollups:
At 30,000-40,000 records:
At 48,000-50,000 records:
Source: Airtable Community forums, Reddit r/Airtable, and our own testing with bases ranging from 12,000 to 98,000 records.
Let’s model the real cost of staying on Airtable as you scale:
Team Plan (current):
Business Plan (forced upgrade):
Enterprise Plan (when Business isn’t enough):
The Archive Workflow (this article):
The math is brutal. Archiving isn’t a nice-to-have. It’s the only financially sane option once you hit the Team plan limits.
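To sanity-check the plan math, here is a quick sketch using the figures from this article (12 seats, Team at $20/seat/month, Business at $45/seat/month, roughly $21/month all-in for the workflow). The savings line mirrors the comparison used later in the article: the full Business bill versus the workflow cost.

```javascript
// Rough cost comparison using the figures quoted in this article.
const SEATS = 12;

function monthlyCost(perSeat, seats) {
  return perSeat * seats;
}

const team = monthlyCost(20, SEATS);     // Team plan: $240/month
const business = monthlyCost(45, SEATS); // Business plan: $540/month
const archiveWorkflow = 21;              // n8n Cloud + BigQuery storage, approx.

// Annual savings, comparing the Business bill to the workflow cost
// (the same framing this article uses in its cost section)
const annualSavings = (business - archiveWorkflow) * 12;
console.log({ team, business, archiveWorkflow, annualSavings });
```

Swap in your own seat count and plan prices; Airtable's list prices change periodically.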
Our workflow runs in n8n (self-hosted), but the same logic applies to Make.com or Zapier with minor modifications. The automation triggers when a record’s status field changes to “Closed,” batches the rows to respect BigQuery’s streaming limits, streams them to BigQuery, verifies they arrived, then safely deletes the original Airtable record.
[Airtable: Record Status → "Closed"]
↓
[n8n Trigger: Poll Airtable Every 5 Minutes]
↓
[Filter: Only "Closed" Status Records]
↓
[Batch: Group into chunks of 500 rows]
↓
[BigQuery: Stream Insert (500 rows per request)]
↓
[BigQuery: Query to verify row count]
↓
[IF verified] → [Airtable: Delete Original Records]
[IF not verified] → [Slack: Alert + Stop Workflow]

Airtable’s native automations don’t support “send to external webhook when field X changes,” so we use n8n’s Airtable Trigger node with polling.
n8n Workflow Setup (Trigger Node):
Why polling every 5 minutes?
Airtable’s API enforces a rate limit of 5 requests per second. If you poll too frequently (e.g., every 30 seconds), you’ll hit the limit when multiple bases/tables are being polled. Every 5 minutes is conservative and works reliably even with 10-15 active workflows.
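The arithmetic behind the 5-minute interval can be sketched as follows. This is a rough model; it assumes each poll issues a single list-records request per workflow, which undercounts slightly when a poll has to paginate.

```javascript
// Rough model of polling load against Airtable's 5 requests/second limit.
// Assumes one list-records request per workflow per poll cycle.
const AIRTABLE_RATE_LIMIT_RPS = 5;

function pollingLoadRps(activeWorkflows, intervalSeconds) {
  return activeWorkflows / intervalSeconds;
}

// 15 workflows polling every 5 minutes: 15 / 300 = 0.05 req/s,
// two orders of magnitude below the cap
const load = pollingLoadRps(15, 300);
console.log(load < AIRTABLE_RATE_LIMIT_RPS);
```

Even at 30-second polling with 15 workflows (0.5 req/s) the steady-state load fits, but bursts from the archiving steps themselves share the same limit, which is why the conservative interval holds up in practice.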
Alternative: Airtable Automation → n8n Webhook
If you’re on Airtable’s Pro, Business, or Enterprise plan (which includes native automations), you can trigger n8n instantly via webhook instead of polling:
This eliminates the 5-minute polling delay but requires a paid Airtable plan (Pro or higher). Since we’re trying to avoid paying for Pro/Business, we use polling.
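For reference, the webhook variant can be sketched as an Airtable automation "Run a script" action that posts the changed record's ID to n8n. This is a sketch under assumptions: `N8N_WEBHOOK_URL` is a placeholder for your n8n Webhook node's production URL, and the payload shape is whatever you choose to parse on the n8n side.

```javascript
// Hypothetical Airtable automation script: notify n8n instantly when a
// record's status flips to "Closed". The webhook URL is an assumption;
// replace it with your n8n Webhook node's production URL.
const WEBHOOK_URL = process.env.N8N_WEBHOOK_URL;

function buildPayload(recordId, tableName) {
  // n8n receives this JSON body and can fetch the full record itself
  return { recordId, table: tableName, event: 'status_closed' };
}

async function notify(recordId, tableName) {
  const payload = buildPayload(recordId, tableName);
  if (!WEBHOOK_URL) return payload; // no-op when the URL is not configured
  await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload),
  });
  return payload;
}
```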
BigQuery’s streaming API has strict limits:
If you send 5,000 records in a single request, BigQuery will reject it with a quotaExceeded error. If you send them one at a time (5,000 individual requests), you’ll hit rate limits and the workflow will take 20+ minutes.
Solution: Batch into chunks of 500 rows per request.
n8n Implementation (Code Node for Batching):
Add a Code node after the Airtable Trigger:
// Batch Airtable records into chunks of 500 for BigQuery streaming
const BATCH_SIZE = 500;
// Get all records from the Airtable Trigger (unwrap n8n's item wrapper
// so downstream code can read record.id and record.fields directly)
const records = $input.all().map(item => item.json);
// Group into batches
const batches = [];
for (let i = 0; i < records.length; i += BATCH_SIZE) {
const batch = records.slice(i, i + BATCH_SIZE);
batches.push({ json: { batch } });
}
// Return batches (each batch will be processed separately)
return batches;
What this does:
n8n will then process each batch sequentially through the BigQuery node.
Why 500 instead of 10,000?
Google’s own documentation recommends 500 rows per request for optimal throughput. Testing confirms this:
Larger batches run into quotaExceeded errors.

Now we send each batch to BigQuery using the Storage Write API (recommended as of 2025) or the legacy streaming insert API (still works, easier to implement).
BigQuery Table Schema (Create This First):
CREATE TABLE `your-project.archive_dataset.closed_deals` (
record_id STRING NOT NULL, -- Airtable record ID
customer_name STRING,
deal_value FLOAT64,
close_date DATE,
status STRING,
created_time TIMESTAMP,
archived_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP()
)
PARTITION BY DATE(archived_at)
CLUSTER BY status, customer_name;
Why partition by archived_at?
Partitioning dramatically reduces query costs. When you query “show me all deals closed in Q4 2024,” BigQuery only scans the relevant partition instead of the entire table. For a 500,000-row archive table, this reduces query costs by 70-90%.
n8n Implementation (BigQuery Node):
Add a BigQuery node after the batching Code node:
{{ $json.batch }} (the batched records from the Code node)

Credential Setup (do this once):
Complete Code Node for BigQuery Streaming (with error handling):
// Stream batched records to BigQuery
const { BigQuery } = require('@google-cloud/bigquery');
const bigquery = new BigQuery({
projectId: $env.GCP_PROJECT_ID,
credentials: JSON.parse($env.GCP_SERVICE_ACCOUNT_KEY)
});
const datasetId = 'archive_dataset';
const tableId = 'closed_deals';
// Get the current batch
const batch = $json.batch;
// Transform Airtable records to BigQuery schema
const rows = batch.map(record => ({
record_id: record.id,
customer_name: record.fields.customer_name || null,
deal_value: parseFloat(record.fields.deal_value) || 0,
close_date: record.fields.close_date || null,
status: record.fields.status,
created_time: record.createdTime
// archived_at is auto-populated by BigQuery default
}));
try {
// Stream insert
const insertOptions = {
skipInvalidRows: false, // Reject entire batch if any row is invalid
ignoreUnknownValues: false, // Reject if extra fields are present
templateSuffix: null // Only used with template tables; leave null here
};
await bigquery
.dataset(datasetId)
.table(tableId)
.insert(rows, insertOptions);
console.log(`Inserted ${rows.length} rows into BigQuery`);
return {
json: {
success: true,
rowsInserted: rows.length,
recordIds: rows.map(r => r.record_id)
}
};
} catch (error) {
// Log the error
console.error('BigQuery insert failed:', error.message);
// Return error details for the verification phase
return {
json: {
success: false,
error: error.message,
rowsAttempted: rows.length,
recordIds: rows.map(r => r.record_id)
}
};
}
Error handling logic:
With skipInvalidRows: false, BigQuery rejects the entire batch if even one row is invalid (e.g., a required field is null). This is safer than skipInvalidRows: true because you won’t accidentally archive incomplete data.

This is the most critical phase. Never delete from Airtable until you’ve verified the data arrived in BigQuery.
After streaming the batch to BigQuery, we query BigQuery to confirm the exact row count matches what we sent.
n8n Code Node for Verification:
// Verify BigQuery insertion before deleting from Airtable
const { BigQuery } = require('@google-cloud/bigquery');
const bigquery = new BigQuery({
projectId: $env.GCP_PROJECT_ID,
credentials: JSON.parse($env.GCP_SERVICE_ACCOUNT_KEY)
});
// Get the record IDs we just inserted
const insertedRecordIds = $json.recordIds; // From previous node
// Query BigQuery to count how many of these record_id values exist
const query = `
SELECT COUNT(*) as row_count
FROM \`${$env.GCP_PROJECT_ID}.archive_dataset.closed_deals\`
WHERE record_id IN UNNEST(@recordIds)
AND archived_at >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 5 MINUTE)
`;
const options = {
query: query,
params: { recordIds: insertedRecordIds }
};
try {
const [rows] = await bigquery.query(options);
const actualCount = rows[0].row_count;
const expectedCount = insertedRecordIds.length;
if (actualCount === expectedCount) {
console.log(`Verification passed: ${actualCount} rows confirmed in BigQuery`);
return {
json: {
verified: true,
count: actualCount,
recordIds: insertedRecordIds
}
};
} else {
console.error(`Verification FAILED: Expected ${expectedCount}, found ${actualCount}`);
throw new Error(`Row count mismatch: expected ${expectedCount}, got ${actualCount}`);
}
} catch (error) {
console.error('Verification query failed:', error.message);
throw new Error(`Verification failed: ${error.message}`);
}
What the query does:
It counts how many of the record_id values we just inserted are present in the archive table.

Why the 5-minute time filter?
If a record_id was previously archived (e.g., 6 months ago), then re-opened in Airtable, then closed again, the query would find two rows with that record_id (the old one and the new one). The time filter ensures we only count rows from the current archiving run.
Only after verification passes do we delete the original Airtable records.
n8n Airtable Node Configuration:
{{ $json.recordIds }} (from the verification node)

Why delete in batches instead of all at once?
Airtable’s API doesn’t have a “batch delete” method. You must delete records one at a time or loop through them. n8n’s Airtable node handles the looping automatically when you pass an array of record IDs.
Rate limiting consideration:
Airtable’s API limit is 5 requests per second. If you’re deleting 500 records, that’s 500 API calls, which takes ~100 seconds (500 / 5). Add a delay or use n8n’s built-in rate limiting:
// In the Airtable Delete node, enable "Batch Size" option
Batch Size: 5 records per second
This prevents 429 Too Many Requests errors.
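If you prefer to implement that pacing yourself in a Code node, a minimal sketch looks like this. The 5 requests/second figure comes from Airtable's documented limit; `deleteRecord` is a placeholder for your actual delete call.

```javascript
// Delete record IDs in chunks of 5, pausing ~1 second between chunks,
// to stay under Airtable's 5 requests/second API limit.
const CHUNK_SIZE = 5;

function chunk(ids, size) {
  const out = [];
  for (let i = 0; i < ids.length; i += size) out.push(ids.slice(i, i + size));
  return out;
}

const sleep = ms => new Promise(r => setTimeout(r, ms));

async function rateLimitedDelete(ids, deleteRecord) {
  for (const group of chunk(ids, CHUNK_SIZE)) {
    await Promise.all(group.map(deleteRecord)); // at most 5 calls in flight
    await sleep(1000);                          // then wait out the 1s window
  }
}
// 500 IDs -> 100 chunks -> roughly 100 seconds, matching the estimate above
```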
Scenario: BigQuery insert succeeds, verification passes, but Airtable deletion fails (network error, API timeout, etc.).
Problem: Records are now duplicated — they exist in both BigQuery and Airtable.
Solution: Add a “Last Archived” timestamp field to Airtable. Update this field before deletion. If deletion fails, the next workflow run will skip these records because their Last Archived timestamp is already set.
Implementation:
1. Add a field Last Archived (type: Date & Time)
2. Before deletion, set Last Archived = {{ $now }} for all verified records
3. In the trigger filter, skip records where Last Archived is not empty

This creates idempotency: even if the workflow runs multiple times on the same records, they only get archived once.
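The skip logic can be sketched as a one-line filter in a Code node. The field name `Last Archived` matches the field described above; the record shape is the Airtable JSON that n8n passes along.

```javascript
// Only records that have never been stamped with "Last Archived" are
// eligible -- re-runs on already-stamped records become no-ops.
function eligibleForArchive(records) {
  return records.filter(r => !r.fields['Last Archived']);
}

const sample = [
  { id: 'rec1', fields: { status: 'Closed' } },                                // never archived
  { id: 'rec2', fields: { status: 'Closed', 'Last Archived': '2025-01-10' } }, // already stamped
];
console.log(eligibleForArchive(sample).map(r => r.id)); // only the unstamped record
```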
We’ve run this workflow for 14 months. Here are the edge cases and gotchas we encountered.
Scenario: a user edits a record in Airtable after the workflow has read and archived it, but before the deletion step runs.

Result: BigQuery has stale data, and the updated Airtable record is deleted.
Solution: Add a “Modified Time” check before deletion.
// Before deleting, check whether the record was modified since archiving started
// (uses airtable.js: base('Deals').find(recordId); the table needs a "Last Modified" field)
const currentRecord = await base('Deals').find(recordId);
const modifiedTime = new Date(currentRecord.fields['Last Modified']);
const archiveStartTime = new Date($json.archiveStartTime);
if (modifiedTime > archiveStartTime) {
console.log(`Record ${recordId} was modified after archiving started. Skipping deletion.`);
return { json: { skipped: true, recordId } };
}
// Proceed with deletion
Problem: BigQuery streaming inserts are immediately queryable for reads, but they live in a “streaming buffer” for up to 90 minutes before becoming available for exports or copy operations.
Impact: If you query BigQuery immediately after insertion to verify, the query will find the rows (verification passes). But if you try to export the table to CSV in the next hour, the newly streamed rows won’t be included.
Solution: This doesn’t affect archiving workflows (you’re not exporting immediately). But be aware if you build reporting dashboards on top of the archive table — add a filter like WHERE archived_at < TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 2 HOUR) to exclude streaming buffer rows.
Problem: If your Airtable records have linked record fields (e.g., “Related Contacts” linking to a Contacts table), those links are represented as record IDs in the API. BigQuery doesn’t understand Airtable record IDs.
Solution: Expand linked records before archiving.
// In the Code node before the BigQuery insert
// (uses airtable.js: base('Contacts').find(id) to look up each linked record)
const expandedContacts = record.fields.related_contacts
  ? await Promise.all(
      record.fields.related_contacts.map(id => base('Contacts').find(id))
    )
: [];
const row = {
record_id: record.id,
customer_name: record.fields.customer_name,
// Store linked contacts as JSON array
related_contacts: JSON.stringify(
expandedContacts.map(c => ({
id: c.id,
name: c.fields.name,
email: c.fields.email
}))
)
};
This requires additional API calls (one per linked record), but it ensures you have the full context in BigQuery.
Before going live:
Let’s model the real-world costs for a team of 12 people archiving 50,000 records.
Team Plan (current):
Business Plan (required upgrade):
12-month cost: $6,480
n8n:
BigQuery:
Total monthly cost: $12-20/month (self-hosted) or $21/month (n8n Cloud)
12-month cost: $144-252
Savings: $6,228-6,336 per year (96% cost reduction)
[SCREENSHOT PLACEHOLDER: Side-by-side comparison infographic. Left side: “Airtable Business Plan” showing $540/month, 12 user icons, and “125,000 record limit” with a warning triangle icon saying “Still hits performance issues at 75k+”. Right side: “Archive Workflow” showing $21/month, same 12 user icons, “Unlimited archived records in BigQuery” with a green checkmark, and “Active Airtable base stays under 10,000 records for optimal performance”.]
| Criterion | Airtable Business Plan | Airtable Enterprise | Archive to BigQuery Workflow |
|---|---|---|---|
| Monthly Cost (12 users) | $540 | $720+ | $12-21 |
| Record Limit | 125,000 | 500,000 | Unlimited (BigQuery scales to billions) |
| Performance at 50k+ records | Degrades (slow loads, formula lag) | Degrades (same issues) | Excellent (active base stays small) |
| Query Historical Data | Yes (in Airtable) | Yes (in Airtable) | Yes (SQL in BigQuery) |
| Setup Complexity | Click “Upgrade” | Contact Sales | 4-6 hours initial setup |
| Maintenance | None | None | ~30 min/month monitoring |
| Data Retention | Indefinite | Indefinite | Indefinite |
| Audit Trail | Revision history (limited) | Revision history (extended) | Full history in BigQuery with timestamps |
| Reporting/Analytics | Limited (Airtable views/blocks) | Limited | Unlimited (BigQuery + Looker/Tableau) |
| Best For | Teams that need more records but don’t query historical data often | Large enterprises with budget | Cost-conscious teams that need unlimited history + analytics |
The archive workflow wins on cost, scalability, and analytics capabilities. The only advantage of upgrading Airtable is simplicity (no workflow to build).
Once records are archived in BigQuery, how do you access them?
Find a specific deal:
SELECT *
FROM `your-project.archive_dataset.closed_deals`
WHERE customer_name = 'Acme Corp'
AND close_date = '2024-11-15';
Total revenue by quarter:
SELECT
FORMAT_DATE('%Y-Q%Q', close_date) AS quarter,
SUM(deal_value) AS total_revenue,
COUNT(*) AS deals_closed
FROM `your-project.archive_dataset.closed_deals`
WHERE EXTRACT(YEAR FROM close_date) = 2024
GROUP BY quarter
ORDER BY quarter;
Customer lifetime value (across archived + active Airtable data):
-- This requires a scheduled sync of active Airtable data to BigQuery
WITH active_deals AS (
SELECT * FROM `your-project.airtable_sync.active_deals`
),
all_deals AS (
SELECT * FROM `your-project.archive_dataset.closed_deals`
UNION ALL
SELECT * FROM active_deals
)
SELECT
customer_name,
SUM(deal_value) AS lifetime_value,
COUNT(*) AS total_deals,
MIN(close_date) AS first_deal_date,
MAX(close_date) AS last_deal_date
FROM all_deals
GROUP BY customer_name
ORDER BY lifetime_value DESC
LIMIT 100;
Sometimes you need to bring an archived record back into Airtable (e.g., a closed deal is re-opened).
n8n Workflow for Restoration:
1. A Webhook node receives the record_id (from a Slack command or internal tool)
2. A Code node fetches the archived row from BigQuery
3. An Airtable node re-creates the record, and the archived row can be flagged (e.g., restored = true)

Code Node for Restoration:
// Restore archived record to Airtable
const { BigQuery } = require('@google-cloud/bigquery');
const Airtable = require('airtable');
const recordId = $json.record_id; // From webhook
// Query BigQuery
const bigquery = new BigQuery({ /* credentials */ });
const [rows] = await bigquery.query({
  query: `
    SELECT *
    FROM \`archive_dataset.closed_deals\`
    WHERE record_id = @recordId
    LIMIT 1
  `,
  params: { recordId }
});
if (rows.length === 0) {
throw new Error(`Record ${recordId} not found in archive`);
}
const archivedRecord = rows[0];
// Re-insert to Airtable
const airtable = new Airtable({ apiKey: $env.AIRTABLE_API_KEY });
const base = airtable.base($env.AIRTABLE_BASE_ID);
await base('Deals').create({
'Customer Name': archivedRecord.customer_name,
'Deal Value': archivedRecord.deal_value,
'Close Date': archivedRecord.close_date,
'Status': 'Re-opened', // Different status to indicate it's restored
// ... other fields
});
console.log(`Restored record ${recordId} to Airtable`);
return { json: { success: true, recordId } };
This gives you a safety net: archived data isn’t “gone,” it’s just moved to a cheaper, more scalable storage layer.
Airtable’s 50,000-record limit isn’t a feature limitation — it’s a forcing function to upgrade to more expensive plans. For most teams, upgrading from Team to Business ($300/month increase) just to store historical data you rarely access is financially irresponsible.
The archive workflow solves three problems simultaneously:
Setup takes 4-6 hours. Maintenance is ~30 minutes per month (monitoring, occasional tweaks). Savings: $6,000+/year for a 12-person team.
We’ve archived 847,000 records over 14 months. Airtable performance improved dramatically (page loads dropped from 8 seconds to 1.2 seconds). BigQuery storage costs: $18/month. n8n hosting: $12/month. Total: $30/month instead of $540/month.
The workflow paid for itself in the first week.
The Triumphoid Team
Need the complete n8n workflow? Export JSON available at github.com/triumphoid/airtable-bigquery-archive (link placeholder — repo not yet public, but you can star it for updates).