If you run an iGaming operation on a ten-year-old backend, you’re probably stuck in the most annoying middle ground imaginable.
Regulators expect real-time risk controls, video KYC, sanctions screening, device checks, and flawless audit trails. Players expect one-minute onboarding from a phone. Meanwhile your core platform is a mix of old PHP, stored procedures, and “don’t touch that table or payments will die.”
That’s exactly where modern KYC vendors like Veriff and Sumsub live: they give you sleek, automated KYC compliance out of the box, but your stack wasn’t built with their APIs in mind.
The good news: you don’t have to rebuild everything to plug them in. You just need to bolt a modern KYC layer onto your legacy stack in a way that your regulators, your players, and your dev team can all live with.
That’s what this piece is about.
For iGaming, KYC is no longer an “operations thing” that happens in the background.
Regulators are tightening their requirements year after year. Players are less patient than ever with slow onboarding. And from the business side, manual KYC review is expensive and doesn't scale.
Automated KYC compliance is basically the only sane answer: you let tools like Veriff and Sumsub do what they’re good at, while your platform focuses on player wallets, game logic, bonuses, and reporting.
The trick is wiring it all together without rewriting the core.
Here’s the pattern I keep seeing in iGaming:
Veriff and Sumsub give you modern, API-driven verification: document checks, liveness, sanctions screening, and automated decisions out of the box.
Your legacy stack gives you everything else: player accounts, wallets, payments, and years of business logic nobody wants to touch.
So the question becomes: where do you plug modern APIs into a system that was never designed for them?
The answer is almost always: in a thin, purpose-built integration layer that sits between your old backend and the KYC vendors.
Before a single line of integration work, you need a clear picture of where KYC actually lives in your player journey.
A simple mapping makes everything easier:
| Stage | Example trigger | What KYC must do | Where it runs |
|---|---|---|---|
| Registration | New account created | Optional or basic ID validation (by GEO) | Veriff / Sumsub |
| First deposit | Player tries to deposit over threshold | Full KYC, face + document | Veriff / Sumsub |
| High-risk behavior | Multiple cards, VPN, high bets | Enhanced checks, manual review | Sumsub / Veriff + staff |
| Withdrawal | Payout above X, or to new method | Verify identity + payment method ownership | KYC + payment checks |
| Ongoing monitoring | Sanctions updates, PEP list changes | Re-screen existing players as needed | Vendor’s AML engine |
Two things matter most: knowing exactly which events trigger KYC, and knowing which level of verification each trigger requires.
Once you have that, you can design the integration as a workflow instead of a pile of API calls.
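The mapping above can be sketched as a tiny rules function. This is a minimal illustration only; the scenario names, thresholds, and level labels are assumptions, not anything Veriff or Sumsub prescribe:

```python
# Minimal KYC rules sketch: map a player event to a required KYC level.
# Scenario names, thresholds, and levels are illustrative placeholders.

def required_kyc_level(stage: str, amount: float = 0.0, risk_score: int = 0) -> str:
    """Return the KYC level a given player event requires."""
    if stage == "registration":
        return "basic"                  # or "none", depending on GEO rules
    if stage == "deposit" and amount >= 100:
        return "full"                   # face + document verification
    if stage == "withdrawal" or risk_score >= 70:
        return "enhanced"               # extra checks / manual review
    return "none"
```

In practice these rules live in the orchestration layer, so compliance can tune thresholds per jurisdiction without touching the legacy code.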
You do not want your old backend calling Veriff or Sumsub directly from random parts of the code. That's how you end up with vendor-specific logic scattered across the codebase, inconsistent status handling, and integrations nobody dares to change.
Instead, you introduce a KYC orchestration layer – a small service (or module, if you can't do services yet) that owns every KYC decision, talks to the vendors, and reports back to the legacy backend in its own terms.
Think of it as a translator and traffic cop between “old world” and “new world.”
| Component | Responsibility |
|---|---|
| KYC rules engine | Decide when and what level of KYC is required |
| Provider router | Choose Veriff or Sumsub based on GEO/risk |
| Session manager | Create and track verification sessions |
| Webhook handler | Receive status updates from both providers |
| Status normalizer | Convert provider codes to your internal statuses |
| Audit logger | Store decisions, timestamps, and reference IDs |
Your old backend never has to know about Veriff's status codes or Sumsub's JSON structure. It just deals in a few internal KYC states like `pending`, `verified`, `failed`, and maybe `enhanced_review`.
If your platform is truly “legacy,” you probably don’t have Kafka and microservices. That’s fine. You just need stable touch points.
A practical pattern looks like this: the legacy backend keeps a minimal KYC record per player, e.g. `player_id = 123, status = verified, level = standard`, and delegates everything else to the orchestration layer.

A simple request from legacy to KYC layer might look like this:

```json
{
  "player_id": 12345,
  "scenario": "first_deposit_over_threshold",
  "country": "DE",
  "risk_score": 42
}
```
And a response:

```json
{
  "kyc_session_id": "k-session-88df2d",
  "provider": "veriff",
  "verification_url": "https://magic-url-for-player",
  "status": "pending"
}
```
The legacy backend stores `kyc_session_id` and `status` in its own schema. That's all it needs.
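On the legacy side, that exchange can be as small as one function. This sketch injects an HTTP `post` callable so the old code stays decoupled from any particular client library; the endpoint path and return shape mirror the JSON above but are otherwise assumptions:

```python
# Legacy-side sketch: request a KYC session and keep only what the old
# schema needs. `post` stands in for whatever HTTP client the stack has.

def start_kyc(post, player_id: int, scenario: str,
              country: str, risk_score: int = 0) -> dict:
    response = post("/kyc/sessions", {
        "player_id": player_id,
        "scenario": scenario,
        "country": country,
        "risk_score": risk_score,
    })
    # Store only the vendor-agnostic fields; ignore provider details.
    return {
        "kyc_session_id": response["kyc_session_id"],
        "kyc_status": response["status"],
    }
```

Because the function never sees vendor names or status codes, swapping providers later requires no change here.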
Let’s walk through a concrete example using Veriff in this orchestrated model.
Player from a regulated GEO hits the deposit button for 500 EUR. Your legacy code checks a simple rule and decides: KYC required.
Instead of trying to talk to Veriff directly, it sends:

```json
{
  "player_id": 12345,
  "scenario": "high_value_deposit",
  "country": "ES"
}
```

to your KYC orchestration API.
The orchestration layer chooses Veriff (say, for EU flows) and calls Veriff's API to create a session with the needed configuration (documents, liveness, etc.).
Veriff responds with a session ID and a URL or token.
The orchestration layer passes that back to the legacy backend, which then stores the `kyc_session_id` and sends the player to the verification URL.

No direct Veriff calls from the legacy code, no vendor-specific logic scattered around.
When the player finishes or abandons, Veriff sends a webhook to your KYC orchestration endpoint.
Webhook payload example (simplified):

```json
{
  "sessionId": "k-session-88df2d",
  "status": "approved",
  "reason": null
}
```
The orchestration layer maps `approved` to your internal `verified_standard`, logs the decision for the audit trail, and notifies the legacy backend with something like:

```json
{
  "kyc_session_id": "k-session-88df2d",
  "player_id": 12345,
  "status": "verified_standard"
}
```
Your old platform just flips `player.kyc_status` and allows the deposit to proceed. It does not need to understand the full Veriff universe.
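The webhook step can be sketched as a pure function: take the provider payload, translate it into the internal vocabulary, and emit the message the legacy backend expects. The mapping entries here are assumptions for illustration, not Veriff's full status set:

```python
# Webhook sketch: normalize a Veriff-style callback into the internal
# message the legacy backend consumes. Mapping entries are illustrative.

VERIFF_STATUS_MAP = {
    "approved": "verified_standard",
    "declined": "failed",
    "resubmission_requested": "pending",
}

def handle_veriff_webhook(payload: dict, player_id: int) -> dict:
    # Anything unmapped falls back to manual review rather than guessing.
    internal = VERIFF_STATUS_MAP.get(payload["status"], "in_review")
    return {
        "kyc_session_id": payload["sessionId"],
        "player_id": player_id,
        "status": internal,
    }
```

In production you would also verify the webhook signature before trusting the payload; that check belongs in the orchestration layer too.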
Most serious operators don’t want to be married to one KYC provider forever. GEO rules change, pricing changes, coverage changes. Sometimes one provider is simply better for a region or document set.
That’s where the orchestration layer really earns its keep.
You can define routing rules like:
| Condition | Provider |
|---|---|
| Country in EU, normal risk | Veriff |
| Country in LATAM, normal risk | Sumsub |
| VIP tier, enhanced due diligence | Sumsub |
| Fallback when primary fails | Other |
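The routing table above can be sketched as one small function. The region sets and the fallback choice are made-up examples, not a recommendation for any particular market:

```python
# Provider routing sketch mirroring the table above.
# Region membership and fallback are illustrative assumptions.

EU = {"DE", "ES", "FR", "IT", "NL"}
LATAM = {"BR", "MX", "AR", "CO", "CL"}

def choose_provider(country: str, vip: bool = False,
                    enhanced: bool = False) -> str:
    if vip and enhanced:
        return "sumsub"     # enhanced due diligence route
    if country in EU:
        return "veriff"
    if country in LATAM:
        return "sumsub"
    return "veriff"         # default; swap here if the primary is down
```

Because routing is one function in one place, changing a GEO's provider is a config edit, not a migration.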
To the legacy backend, nothing changes. It still says:
```json
{
  "player_id": 12345,
  "scenario": "registration",
  "country": "BR"
}
```
The KYC layer decides “BR goes to Sumsub,” creates the session, and sends back:
```json
{
  "kyc_session_id": "sumsub-44c2",
  "provider": "sumsub",
  "verification_url": "https://sumsub-url",
  "status": "pending"
}
```
When Sumsub calls back on completion, you normalize their statuses exactly the same way:
- `GREEN` → `verified_standard`
- `RED` → `failed`
- `YELLOW` → `enhanced_review`

So the legacy backend always sees a consistent internal vocabulary. Veriff vs Sumsub is an implementation detail hidden behind the orchestration layer.
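The status normalizer is just one dictionary per provider feeding a shared internal vocabulary. The provider codes follow the examples in the text; anything beyond them is an assumption:

```python
# Status normalizer sketch: one mapping per provider, one internal
# vocabulary. Codes outside the maps fall back to manual review.

STATUS_MAPS = {
    "veriff": {"approved": "verified_standard", "declined": "failed"},
    "sumsub": {"GREEN": "verified_standard", "RED": "failed",
               "YELLOW": "enhanced_review"},
}

def normalize_status(provider: str, code: str) -> str:
    return STATUS_MAPS[provider].get(code, "in_review")
```

Adding a third provider later means adding one more entry to `STATUS_MAPS`, nothing else.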
Real life is messy. Players abandon flows, documents are blurry, networks glitch, and some cases truly require human judgment.
Your automated KYC compliance design has to account for that, or operations will drown.
A useful state model looks like this:
| Internal status | Meaning | Typical action |
|---|---|---|
| pending | KYC session created, player not finished | Allow limited actions, nudge player to complete |
| in_review | Provider or internal staff reviewing | Freeze risky actions (large deposits/withdrawals) |
| verified_standard | Normal KYC passed | Full access allowed for that jurisdiction’s rules |
| verified_enhanced | Extra checks passed (source of funds, etc.) | Higher limits allowed |
| failed | KYC failed (fraud, mismatch, non-cooperation) | Lock wallet, notify risk/compliance |
| expired | Session expired without completion | Restrict until player re-starts KYC |
The orchestration layer manages these transitions, not the legacy backend. The old system just applies business rules based on whatever status it sees.
For example:

- `pending` at registration: allow low deposits, no withdrawals
- `pending` at high deposit: force KYC completion before accepting
- `in_review`: pause withdrawals and certain bonuses
- `failed`: hard block and flag account for compliance

This split keeps your old code relatively simple while your compliance logic shifts into a place where it's easier to evolve.
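The orchestration layer can enforce the state model with a simple transition allow-list, so a buggy webhook can never, say, flip a `failed` player back to verified. The specific transitions below are illustrative assumptions:

```python
# State-transition sketch for the status table above.
# The allowed transitions are illustrative, not a compliance standard.

ALLOWED = {
    "pending": {"in_review", "verified_standard", "failed", "expired"},
    "in_review": {"verified_standard", "verified_enhanced", "failed"},
    "verified_standard": {"in_review", "verified_enhanced"},
    "verified_enhanced": {"in_review"},
    "failed": set(),         # terminal without a compliance override
    "expired": {"pending"},  # player restarts KYC
}

def transition(current: str, new: str) -> str:
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal KYC transition {current} -> {new}")
    return new
```

Rejected transitions are exactly the events you want logged and surfaced to compliance, not silently applied.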
Regulators don’t just care that you “do KYC.” They care that you can prove it, explain it, and reproduce it on request.
Your automated KYC compliance setup should automatically generate the audit trail as a side effect of normal operation.
At a minimum, you want your KYC orchestration layer to log:
| Field | Purpose |
|---|---|
| player_id | Who this decision belongs to |
| kyc_session_id | Reference with Veriff/Sumsub |
| provider | Veriff or Sumsub |
| scenario | Why KYC was triggered |
| request_timestamp | When you asked for KYC |
| decision_timestamp | When you got a final answer |
| final_status | Internal status after mapping |
| country / GEO | Jurisdiction context |
| risk_flags | Any risk indicators in play |
| operator_overrides | If human changed anything |
When a regulator asks “why did you allow this VIP to withdraw 20k on that day?”, you want to be able to show the whole chain: which scenario triggered KYC, which provider handled it, what it returned and when, and whether any human overrode the decision.
The legacy backend doesn’t need to hold all that detail. The orchestration layer can store it in a more modern form (separate DB, even a warehouse), as long as IDs align.
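One way to make the audit trail a side effect of normal operation is to build the record in one place, with the fields from the table above. This is a sketch; the exact shape and storage are up to you:

```python
# Audit-record sketch matching the field list above. The orchestration
# layer builds one of these per decision and stores it keyed by IDs that
# join back to the legacy schema.
from datetime import datetime, timezone

def audit_record(player_id, kyc_session_id, provider, scenario,
                 final_status, country, risk_flags=None,
                 operator_overrides=None, requested_at=None):
    now = datetime.now(timezone.utc).isoformat()
    return {
        "player_id": player_id,
        "kyc_session_id": kyc_session_id,
        "provider": provider,
        "scenario": scenario,
        "request_timestamp": requested_at or now,
        "decision_timestamp": now,
        "final_status": final_status,
        "country": country,
        "risk_flags": risk_flags or [],
        "operator_overrides": operator_overrides or [],
    }
```

Because every field is filled at decision time, answering a regulator later is a query, not an archaeology project.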
This is usually the scariest part for operators: how to integrate Veriff and Sumsub without accidentally blocking deposits for two days.
A few pragmatic steps keep the rollout safe: start with a single trigger (say, deposits above a threshold) in one GEO, route only that flow through the orchestration layer, and expand trigger by trigger once it proves stable.

From the legacy backend's perspective, the first version can literally be as small as one API call to create a session and one new status column in the player table.

You grow from there instead of flipping the entire company to a brand-new process overnight.
When automated KYC compliance is integrated properly into a legacy iGaming stack, you see clear patterns: operations spends far less time on manual document review, players clear onboarding in minutes instead of days, compliance and management can pull audit trails on demand, and the tech team can swap or add providers without touching the core.
In other words, Veriff and Sumsub become plug-ins to your compliance workflow, not invasive surgery on your older stack.
Underneath all the acronyms and vendor promises, that’s the real job here: turn automated KYC compliance into a predictable, trackable, API-driven workflow that wraps around your legacy system instead of fighting it.
The only real question is whether you want your KYC layer to feel like a bolt-on band-aid… or a quiet piece of infrastructure that just keeps you compliant, day after day, while your core platform keeps doing what it’s always done best.