UUIDv4 vs v7 vs bigint: Which One Should You Pick? Benchmarked ID Strategies by Project Scale
Is UUIDv7 really a drop-in upgrade from v4? We benchmarked insert performance on PostgreSQL 18 with Python 3.14 and mapped out the best ID strategy by project scale and table purpose.
kkm
Backend Engineer / AWS / Django
Peeking Inside UUIDv7
UUIDv7 has a creation timestamp embedded in it. Let me extract it using Python.
```python
import uuid
from datetime import datetime, timezone

# uuid.uuid7() was added to the standard library in Python 3.14
uid = uuid.uuid7()
print(uid)  # 019648a0-7c5f-7def-b321-4a7e8f3c1b09

# The upper 48 bits are a Unix timestamp in milliseconds
timestamp_ms = uid.int >> 80
dt = datetime.fromtimestamp(timestamp_ms / 1000, tz=timezone.utc)
print(dt)  # 2026-04-13 05:23:17.631000+00:00
```

In just three lines, you can tell that "this ID was generated on April 13, 2026 at 5:23 AM." Millisecond precision. You can get the same result by pasting a UUIDv7 into an online tool.
UUIDv4 cannot do this. Out of its 128 bits, 122 are completely random.
"UUIDv7 is a strict upgrade over v4" is a claim you see a lot. DB performance does improve. But the security characteristics actually regress. In this article, I will lay out what each of UUIDv4, v7, and bigint is "good at" and what you "should never do" with it, backed by benchmark data.
UUIDv4 vs. v7: What Is the Difference
A UUID (Universally Unique Identifier) is a 128-bit identifier used to generate non-colliding IDs in distributed systems. v7 was standardized in RFC 9562, officially published in May 2024.
| Property | UUIDv4 | UUIDv7 | bigint auto-increment |
|---|---|---|---|
| Structure | 122-bit random | 48-bit timestamp + 74-bit random/counter | 64-bit sequential |
| Size | 16 bytes (36 chars) | 16 bytes (36 chars) | 8 bytes |
| Time-ordered sorting | Not possible | Possible | Possible |
| ID generation without DB round-trip | Possible | Possible | Not possible (DB-assigned) |
| Randomness | 122 bits | 74 bits per spec; 32 bits in Python 3.14's implementation | 0 (fully predictable) |
| B-Tree Insert perf (at large scale) | Frequent page splits | Append-only, fast | Append-only, fastest |
| Creation time leakage | None | Leaks at ms precision | Order can be inferred |
The defining feature of v7 is that it is a time-sortable UUID. This is why it plays well with B-Tree indexes. v4 is fully random, so Inserts scatter across the index, causing frequent page splits (index page divisions). v7 appends sequentially to the tail, which avoids this.
However, there are two facts that tend to be overlooked here. The random portion of v7 is significantly smaller than v4 (122 bits down to 74 bits, and only 32 bits in the Python implementation). And "appending to the tail" can actually become a bottleneck under high concurrency. I will verify this later.
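To make the layout concrete, here is a self-contained sketch that assembles v7-style 128-bit values by hand, following the RFC 9562 bit layout described above. The function name and timestamps are illustrative assumptions; real code should simply call `uuid.uuid7()` on Python 3.14+.

```python
import os

def make_v7_style(ts_ms: int) -> int:
    """Assemble a 128-bit UUIDv7-style integer per the RFC 9562 layout:
    48-bit ms timestamp | ver=7 (4 bits) | 12 random | var=10 (2 bits) | 62 random."""
    rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF           # 12 bits
    rand_b = int.from_bytes(os.urandom(8), "big") & ((1 << 62) - 1)  # 62 bits
    return (ts_ms << 80) | (0x7 << 76) | (rand_a << 64) | (0x2 << 62) | rand_b

# Because the timestamp occupies the most significant bits, values built at
# increasing timestamps sort in creation order -- the property B-Trees love
ids = [make_v7_style(1_700_000_000_000 + i) for i in range(5)]
assert ids == sorted(ids)
```

This is exactly why v7 appends to the tail of the index: the integer comparison is dominated by the timestamp, and the random bits only break ties.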
What Goes Wrong When You Expose IDs in URLs
"Primary keys should never be exposed externally" is a claim you see often. But this conflates two separate concerns. Let me break them apart.
Concern 1: The ID format problem (enumeration risk)
When you see a URL like /user/42, you can guess that /user/43 or /user/1 might also exist. This is the risk of enumeration attacks (IDOR: Insecure Direct Object Reference). Exposing auto-increment bigints in URLs creates this risk.
However, this is a problem with the ID format, not with it being a primary key. Even a non-primary-key sequential column exposed in a URL carries the same risk, and if the primary key is UUIDv4, enumeration is practically impossible.
Concern 2: Should primary keys and public-facing IDs be separated
This is not a security question -- it is about design flexibility. If you use the primary key in URLs, changing the primary key becomes a breaking change to the URL (= API contract). For example, if you later want to switch from bigint to UUID, changing the URL affects every client.
If you maintain a separate column for the public-facing ID, you can freely change the internal primary key. You could use a short nanoid in URLs while optimizing performance internally with UUIDv7 -- that kind of flexible design becomes possible.
The #1 item in the OWASP API Security Top 10 (2023 edition) is "Broken Object Level Authorization (BOLA)," and the prescribed countermeasure is "always implement authorization checks." It does not say "use UUIDs." UUIDs are a supplementary measure to reduce enumeration risk, not a replacement for authorization checks.
So is it okay to put bigint in query parameters?
The short answer: exposing auto-increment bigints directly in URLs or query parameters is not recommended. There are three reasons.
1. Easy to enumerate. If you see `?order=1482`, anyone can try `?order=1483`. Authorization checks can prevent unauthorized access, but whether a record exists still leaks (the 404 vs. 403 difference).
2. Business metrics leakage. If order IDs are sequential, a competitor can estimate "yesterday's order was 1,482 and today's is 1,519, so that's 37 orders per day."
3. Hard to change later. URL schemes are public API contracts. Changing from `/api/orders/1482` to `/api/orders/019648a0-7c5f...` forces every client through a version migration.
Then is any UUID fine? With UUIDv4, enumeration is practically impossible, so exposing it in URLs eliminates the "enumeration risk." But if you expose UUIDv7 in URLs, the timestamp leaks instead. As shown at the beginning, the creation time is visible down to the millisecond.
Incidentally, Twitter's Snowflake IDs contain timestamps but are publicly exposed as tweet IDs worldwide. How much risk "timestamp leakage" poses depends on the business context. A tweet's posting time is already public information, so it is not a problem. But for internal resource IDs on a private API, the story is different.
Summary: When exposing IDs in URLs, the question is not "is it a primary key" but rather: "can this ID format be enumerated?", "does it leak information?", and "might it need to change in the future?" Separating primary keys from public-facing IDs is more of a design insurance policy than a security measure.
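As a sketch of that "design insurance" pattern, the snippet below generates a nanoid-style public identifier that would live in its own column, independent of the internal primary key. The alphabet, prefix, and length here are illustrative assumptions, not a fixed standard.

```python
import secrets

ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"  # URL-safe, lowercase

def public_id(prefix: str, length: int = 12) -> str:
    """Generate a short, unguessable public-facing ID for URLs.
    The internal primary key (bigint or UUIDv7) never leaves the database."""
    body = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return f"{prefix}_{body}"

print(public_id("ord"))  # e.g. ord_k3f9x2m1qa7z (random each run)
```

With 36^12 ≈ 4.7 × 10^18 possibilities, enumeration is impractical, and because the value carries no timestamp or sequence, it leaks nothing about creation time or volume.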
Benchmarking Insert Performance on PostgreSQL 18
The theory makes sense. But how much difference does it actually make? I measured using PostgreSQL 18's native uuidv7() function (GA September 2025) and Python 3.14's uuid.uuid7().
Test Environment
| Item | Details |
|---|---|
| DB | PostgreSQL 18 |
| Primary key type | UUIDv4 / UUIDv7 / bigint GENERATED ALWAYS AS IDENTITY |
| Index | Primary key B-Tree index only |
| Measurement | Bulk Insert of 10K / 100K / 1M rows |
Bulk Insert Results (Single Session)
First, the results for a single session (continuous Inserts from one client). The benchmark code is available in our GitHub repository.
| Row count | bigint | UUIDv7 | UUIDv4 |
|---|---|---|---|
| 10K rows | 0.02 sec | 0.02 sec | 0.02 sec |
| 100K rows | 0.17 sec | 0.20 sec | 0.26 sec |
| 1M rows | 1.63 sec | 2.16 sec | 2.98 sec |
At 10K rows, there is zero difference. All three finish in 0.02 seconds. At this scale, your choice of ID format has no perceptible impact. A slight gap appears at 100K rows, and at 1M rows, v7 is about 28% faster than v4. bigint is faster still.
Two takeaways from this. For projects at the tens-of-thousands-of-rows scale, the choice of ID format has virtually no impact on Insert performance. The performance difference only becomes meaningful at scales of hundreds of thousands of rows or more.
A Look at the Internal Index Structure
Using PostgreSQL's pgstattuple extension, you can inspect the internal state of a B-Tree index. I compared the indexes after inserting 1M rows.
| Metric | UUIDv7 | UUIDv4 |
|---|---|---|
| Leaf pages | 3,832 | 4,839 (approx. 26% more) |
| Index size | 30.1 MB | 38.0 MB (approx. 26% larger) |
| Average leaf density | 90.0% | 71.3% |
The v4 index has about 26% more leaf pages than v7. Random IDs scatter across the entire index, producing free space through continuous page splits. Average leaf density is 90.0% for v7 versus 71.3% for v4. v7 appends to the tail in time order, so pages are packed tight with no gaps. The v4 index is also about 26% larger overall.
However, whether this difference affects read performance depends on the workload. For range scans (e.g., "records from the last hour"), v7 has a massive advantage, but for random single-row lookups, the gap is limited.
With a Realistic Schema, the Gap Shrinks
The benchmark above uses a minimal table with just id + payload text. Real-world tables have many more columns and foreign keys. I re-ran the benchmark under more realistic conditions.
This time I modeled an "e-commerce product table" with 13 columns + a foreign key constraint, populated with realistic data generated by Faker. I also measured SELECT, JOIN, and UPDATE in addition to INSERT.
```sql
-- Parent table: category master (20 rows)
CREATE TABLE categories (
    id uuid PRIMARY KEY DEFAULT uuidv7(),  -- swapped for bigint / uuidv4 / uuidv7
    name varchar(100) NOT NULL,
    description text,
    created_at timestamptz DEFAULT now()
);

-- Child table: products (1M rows)
CREATE TABLE products (
    id uuid PRIMARY KEY DEFAULT uuidv7(),                 -- primary key under test
    category_id uuid NOT NULL REFERENCES categories(id),  -- foreign key
    name varchar(200) NOT NULL,
    description text,
    price numeric(10,2) NOT NULL,
    stock integer NOT NULL DEFAULT 0,
    sku varchar(50),
    weight_kg numeric(6,2),
    is_active boolean DEFAULT true,
    rating numeric(3,2),
    tags text[],
    created_at timestamptz DEFAULT now(),
    updated_at timestamptz DEFAULT now()
);

-- FK index (standard in production)
CREATE INDEX idx_products_category ON products(category_id);
```

Every column is populated with realistic Faker-generated values. Here is what the data looks like.
| Column | Type | Sample value | Generator |
|---|---|---|---|
| name | varchar(200) | Innovative asymmetric hub | fake.catch_phrase() |
| description | text | Method pick entire check... (up to 300 chars) | fake.text(300) |
| price | numeric(10,2) | 3847.29 | random(1.0, 9999.99) |
| stock | integer | 4821 | random(0, 10000) |
| sku | varchar(50) | QWE-8432-PLM | fake.bothify('???-####-???') |
| weight_kg | numeric(6,2) | 12.45 | random(0.01, 50.0) |
| is_active | boolean | true | 75% true / 25% false |
| rating | numeric(3,2) | 4.23 | random(1.0, 5.0) |
| tags | text[] | {nature,color,east} | fake.words(1–4) |
Row sizes are intentionally realistic (several hundred bytes including text), quite different from the minimal benchmark's fixed 100-char payload.
| Operation (1M rows) | bigint | UUIDv7 | UUIDv4 | v4 vs bigint |
|---|---|---|---|---|
| INSERT (all rows) | 76.5 sec | 77.0 sec | 77.8 sec | 1.02x |
| SELECT by PK (1000) | 0.075 sec | 0.063 sec | 0.077 sec | 1.03x |
| SELECT range by FK (100) | 0.025 sec | 0.023 sec | 0.039 sec | 1.54x |
| JOIN aggregate (100) | 6.93 sec | 7.21 sec | 7.31 sec | 1.05x |
| UPDATE by PK (1000) | 0.081 sec | 0.089 sec | 0.082 sec | 1.01x |
The INSERT difference has essentially vanished. With the minimal 2-column table, 1M rows gave us bigint 1.63 sec vs v4 2.98 sec (83% slower). With the 13-column realistic table, it's 76.5 sec vs 77.8 sec (2% slower). The row data I/O completely overshadows the index maintenance cost.
On the other hand, range queries show a clear difference. Fetching products by category (foreign key range scan) was 54% slower with v4 than with bigint. v7 was on par with bigint. v4's scattered leaf pages are at a disadvantage for range scans.
For single-row SELECT by PK, UPDATE by PK, and JOIN aggregates, the differences across all three types are within the margin of error.
| Storage (1M rows) | bigint | UUIDv7 | UUIDv4 |
|---|---|---|---|
| Table | 430.9 MB | 447.0 MB | 447.2 MB |
| Indexes | 27.7 MB | 36.4 MB (31% more) | 44.0 MB (59% more) |
| Total | 458.7 MB | 483.6 MB (5% more) | 491.3 MB (7% more) |
Index size is where the difference shows up. UUID is 16 bytes vs bigint's 8 bytes, so the combined primary key index + foreign key index is 59% larger for v4. However, since the table data itself dominates, the total storage overhead is only 7%.
Bottom line: "UUID is slow for inserts" is an artifact of minimal-schema benchmarks. With realistic schemas, the difference nearly vanishes. What actually matters in production is range query performance and index size -- and v7 beats v4 on both.
Under High Concurrency, v7 Loses
The results so far are from "single-session bulk Inserts." In production, multiple requests from a web application Insert concurrently. Under these conditions, v7's characteristics can backfire.
Because v7 is time-ordered, all Inserts concentrate on the rightmost leaf page of the B-Tree index. This is called "rightmost leaf contention." When multiple sessions try to write to the same page simultaneously, page-level lock contention occurs.
In practice, a Spring Boot + PostgreSQL case study reported that switching from v4 to v7 caused throughput to plummet from 12,000 inserts/sec to below 3,000. No infrastructure changes. They only changed the primary key type.
What about v4? Random IDs are distributed across the entire index, so different Inserts have a high probability of writing to different leaf pages. Under high-concurrency OLTP, this randomness actually helps reduce lock contention.
In other words: v7 is fast for bulk loads and single-session workloads. But under OLTP with hundreds to thousands of concurrent connections, v4 can be more stable. "v7 is faster for Inserts" is not always true.
Mitigation options include PostgreSQL partitioning and fillfactor tuning to distribute contention. AWS's Marc Brooker (Distinguished Engineer) has proposed a modification that disperses the timestamp portion using XOR operations. However, both approaches come with additional design and operational costs.
Python 3.14's uuid7() Has Only 32 Bits of Randomness
This is a security fact that should not be overlooked.
Python 3.14's uuid.uuid7() uses Method 1 (monotonic counter method) from RFC 9562. The 128-bit breakdown is as follows.
| Field | Bits | Purpose |
|---|---|---|
| unix_ts_ms | 48 bits | Unix timestamp (milliseconds) |
| ver | 4 bits | Version (fixed value 0111) |
| counter_hi | 12 bits | Monotonic counter (upper) |
| var | 2 bits | Variant (fixed value 10) |
| counter_lo | 30 bits | Monotonic counter (lower) |
| random | 32 bits | Random |
The 42-bit counter is used to guarantee monotonic increase within the same millisecond. The trade-off is that the random portion shrinks to only 32 bits (approximately 4.3 billion possibilities). Compared to v4's 122 bits (approximately 5.3 × 10^36 possibilities), the guessable space is narrower by a factor of 2^90.
What is the problem? If one ID generated within the same millisecond leaks, the counter only increments by +1, so adjacent IDs can be guessed with high probability.
```python
import uuid

# Generate IDs consecutively within the same millisecond
ids = [uuid.uuid7() for _ in range(5)]
for uid in ids:
    # The 48-bit timestamp is identical and the 42-bit counter
    # increments by +1; only the 32-bit random tail differs
    print(uid)
```

What if this were a password reset token? An attacker triggers a reset on their own account and obtains their token (a UUIDv7). If the victim's reset was executed within the same millisecond, the attacker can reach the victim's token by shifting the counter by just a few. This is the same principle as the Sandwich Attack demonstrated with UUIDv1.
Iron rule: For password reset tokens, API keys, invitation links, and any ID where unpredictability is required, do not use UUIDv7. Use UUIDv4 or a dedicated function like secrets.token_urlsafe().
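For comparison, here is a reset-token sketch using the standard library's `secrets` module. The 32-byte size is a common choice, not a mandate.

```python
import secrets

# 32 bytes = 256 bits of entropy: more than twice UUIDv4's 122 bits,
# and with no timestamp or counter structure for an attacker to exploit.
token = secrets.token_urlsafe(32)
print(len(token), token)  # 43 URL-safe characters
```

Unlike a UUID, this token has no internal structure at all, so nothing about its neighbors can be inferred even if one token leaks.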
6 Anti-Patterns to Avoid
Here is a summary of the ID design "landmines" found through the benchmarks and research above.
1. Using UUIDv7 for password reset tokens
As discussed, the Python 3.14 implementation has only 32 bits of randomness, and adjacent tokens can be guessed from the counter. Use UUIDv4 or the secrets module for security tokens.
2. Starting with bigint everywhere and planning to "switch to UUID later"
Changing a primary key type requires changing the type of every foreign key that references it. In systems with deep relational structures (ORM polymorphic associations, page tree structures, etc.), the amount of work becomes practically impossible. If the key is also exposed in external APIs, breaking URL changes pile on top. "We can change it later" only gets harder the more you put it off.
3. Using UUIDv7 as the primary key in Google Spanner or DynamoDB
Spanner distributes data across splits by key range. Time-ordered UUIDv7 concentrates all writes on the tail split, creating a hotspot. The official Google documentation explicitly states "use a non-sequential value at the start of the primary key" and recommends UUIDv4. The limit for a single split is approximately 3,500 writes/sec, and exceeding that breaks scalability.
4. Using auto-increment for order IDs and exposing them in URLs
If someone sees /orders/1482, a competitor can estimate "how many orders per day." Furthermore, hitting /orders/1483 reveals whether another person's order exists. Authorization checks can protect the contents, but the mere existence of a record leaks.
5. Upgrading to PostgreSQL 18 with an existing uuidv7() polyfill in place
PG18 added uuidv7() as a native function. Applications that previously defined their own uuidv7() as a SQL function or extension will have their migrations halt due to name collision. This actually happened with the AI platform Dify, and in managed DB environments without superuser privileges, manually dropping the function was impossible, making resolution extremely difficult.
6. Unifying all tables to UUIDv7 in a high-concurrency OLTP system
Due to the rightmost leaf contention described above, concurrent Inserts at thousands of ops/sec can cause significant performance degradation. For write-heavy tables (logs, events, sessions, etc.), consider using v4 or designing with table partitioning to distribute writes.
The Optimal Choice Depends on Your Project's Nature
To decide "which one should I pick," start by looking at the nature of your project. This is the most important consideration.
| Project type | Recommended ID | Reason |
|---|---|---|
| Single service, single DB (internal tools, personal projects, etc.) | bigint | No need for distributed IDs. Only 8 bytes. Fast JOINs and WHERE clauses. There is no technical benefit to using UUID |
| Web service with external API (SaaS, marketplace, etc.) | UUID (v4 or v7) | IDs are exposed to clients. Migrating from bigint to UUID later is a breaking change across all APIs |
| Microservices architecture | UUIDv7 recommended | Each service generates IDs independently. Time-ordering makes it easier to trace causality across services |
| Offline-first (mobile app sync, etc.) | UUIDv4 recommended | IDs generated client-side, synced to server later. bigint will collide. v7 cannot trust the client clock |
| Distributed DB (Spanner, CockroachDB, etc.) | UUIDv4 | Time-ordered IDs create write hotspots. Google officially recommends v4 |
| Large-scale event streaming (log infrastructure, IoT, etc.) | UUIDv7 | Time-ordering + uniqueness are essential. Can also replace a created_at column, reducing column count |
"bigint is sufficient for a single service" is correct. But if you "plan to offer an external API eventually" or "anticipate a microservices migration," choosing UUID in the initial design is more rational. Changing the primary key type later is a major surgery that cascades through foreign keys and the entire API, so this decision should be made at initial design time.
Choose by Table Role Too
Even within the same project, the optimal ID varies by table role. There is no need to use the same ID scheme for every table.
| Table type | Recommended ID | What not to do |
|---|---|---|
| Users, orders, invoices (externally exposed via API) | Internal: UUIDv7 External: prefixed ID or nanoid | Exposing bigint publicly leads to enumeration attacks. Changing the ID scheme later is a breaking API change |
| Password resets, API keys, invitation tokens | UUIDv4 orsecrets.token_urlsafe() | UUIDv7 allows adjacent ID guessing (Sandwich Attack) |
| Settings, master data, categories, permission roles | bigint or natural key (string code) | UUID is overkill. Tables with few records and low change frequency do not need UUID |
| Audit logs, event history | UUIDv7 | bigint collides during sharding. UUIDv4 cannot be sorted by time, causing operational pain |
| Tables planned for sharding | UUID (v4 or v7) | bigint shards collide on merge. This becomes unrecoverable technical debt |
| High-write-frequency tables (sessions, caches, etc.) | UUIDv4 or bigint (depends on concurrency) | Blindly choosing UUIDv7 leads to rightmost leaf contention and performance degradation |
The key point is that "you do not have to use the same ID for every table." UUIDv7 + public nanoid for externally exposed resource tables, UUIDv4 for security tokens, bigint for internal settings tables. This kind of mix-and-match is actually practiced by major companies like Stripe and GitHub in their APIs.
What Major Tech Companies Actually Do
Let me look at "best practices from top companies." Keep in mind, though, that their design decisions cannot be directly applied to small projects. The scale and context are completely different. Take this as reference material.
Stripe: Prefixed IDs
Stripe uses prefixed IDs like cus_NffrFeUfNV2Hib (customer) and pi_3MtwBwLkdIwHu7ix28a3tqPa (payment). The prefix immediately identifies the resource type, so when reading support logs, it is instantly clear -- "this is a customer ID." These are generated separately from the internal primary key and serve as the public-facing API ID.
The purpose is not security. It is for human readability and polymorphic lookup (knowing which table to query just by looking at the ID).
Twitter/X: Snowflake ID
Twitter (now X) uses its proprietary Snowflake ID. It is a 64-bit integer composed of: timestamp (41 bits) + data center ID (5 bits) + worker ID (5 bits) + sequence number (12 bits).
It contains a timestamp, yet is publicly exposed as tweet IDs worldwide. This is a case where timestamp leakage is not a problem because a tweet's posting time is already public information. Discord also uses the same Snowflake scheme.
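The fixed layout makes Snowflake IDs trivially decodable with bit shifts. A sketch, using Twitter's documented custom epoch; the synthetic ID below is purely illustrative.

```python
TWITTER_EPOCH_MS = 1288834974657  # Twitter's custom epoch: 2010-11-04 UTC

def decode_snowflake(snowflake: int) -> dict:
    """Split a 64-bit Snowflake into its fields:
    timestamp (41 bits) | datacenter (5) | worker (5) | sequence (12)."""
    return {
        "timestamp_ms": (snowflake >> 22) + TWITTER_EPOCH_MS,
        "datacenter": (snowflake >> 17) & 0x1F,
        "worker": (snowflake >> 12) & 0x1F,
        "sequence": snowflake & 0xFFF,
    }

# Round-trip a synthetic ID to check the field boundaries
sid = (500_000_000_000 << 22) | (3 << 17) | (7 << 12) | 42
fields = decode_snowflake(sid)
assert (fields["datacenter"], fields["worker"], fields["sequence"]) == (3, 7, 42)
```

This is exactly the sense in which a tweet ID "contains" its posting time: anyone can shift right by 22 bits and add the epoch.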
Buildkite: Migration from sequential IDs to UUIDv7
The CI/CD service Buildkite migrated from sequential integers to UUIDv7. As a result, WAL (Write Ahead Log) rate was reduced by 50%, and Write I/O saw an equivalent reduction.
The motivation for the migration was a PostgreSQL sharding plan. In a distributed environment, guaranteeing uniqueness with sequential integers is impractical, so they unified on UUIDv7 and eliminated the dual management of "sequential PK + UUID external ID." Conversely, if sharding were not planned, this migration would not have been necessary.
Google Spanner: Recommends UUIDv4
Google's Cloud Spanner is a DB where UUIDv7 becomes an anti-pattern, as mentioned earlier. The official documentation explicitly states "use a non-sequential value at the start of the primary key" and recommends UUIDv4 or bit-reversed sequential values.
Common thread: Major companies do not "unify on a single ID scheme." They choose by use case. And "separating public-facing IDs from internal primary keys" is done more for human readability and maintainability than for security.
Know the Alternatives Beyond UUID
There are time-sortable ID formats besides UUID.
| ID format | Length | Time-sortable | RFC standard | Primary use case |
|---|---|---|---|---|
| UUIDv7 | 36 chars | Yes | RFC 9562 | RDB primary key (B-Tree optimized) |
| ULID | 26 chars | Yes | None | Functionally near-equivalent to UUIDv7. Shorter via Base32 |
| KSUID | 27 chars | Yes | None | Developed by Segment. Pairs well with Stripe-style prefixed IDs |
| nanoid | 21 chars | No | None | Short ID for URLs. Not suitable as a primary key |
| CUID2 | Variable | No | None | Collision-resistant. Designed for client-side generation |
| Snowflake ID | 64-bit integer | Yes | None | Used by Twitter/Discord. Requires a central ID-assignment server |
ULID and UUIDv7 are functionally near-equivalent, but UUIDv7 being an RFC standard is the decisive difference. It can be stored directly in the native UUID type of databases, and standard libraries in various languages are adding support. For new projects, UUIDv7 has the edge in terms of compatibility. If you are already using ULID, there is no need to migrate.
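Because ULID and UUIDv7 are both 128-bit values, the difference is largely textual. As a sketch, any 128-bit integer (e.g. `uuid.uuid4().int`, or `uuid.uuid7().int` on Python 3.14+) can be rendered in ULID's 26-character Crockford Base32 form:

```python
import uuid

CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"  # ULID alphabet (no I, L, O, U)

def to_ulid_string(value: int) -> str:
    """Render a 128-bit integer as a 26-char ULID string.
    26 chars x 5 bits = 130 bits, so the top 2 bits are always zero."""
    return "".join(CROCKFORD[(value >> (5 * i)) & 31] for i in range(25, -1, -1))

print(to_ulid_string(uuid.uuid4().int))  # 26 characters, shorter than the 36-char UUID form
```

Lexicographic order of the strings matches numeric order of the underlying integers, so a time-ordered input (like v7) stays time-sortable as text.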
nanoid is most rationally used not as "a primary key replacement" but as "an additional public-facing ID." Optimize DB performance with a UUIDv7 primary key while exposing a short nanoid in URLs.
Ecosystem Support as of April 2026
"I want to use it, but can I in my environment?" Here is a summary of support as of April 2026.
Languages and Runtimes
| Environment | Support status | Notes |
|---|---|---|
| Python 3.14 | ✅ Standard library | uuid.uuid7(). However, only 32 bits of randomness |
| Go | ✅ google/uuid | uuid.NewV7() |
| Rust | ✅ uuid crate | Uuid::now_v7() |
| Node.js | ❌ No standard support | Requires the npm uuid package or Bun's Bun.randomUUIDv7() |
| Java | ❌ No JDK standard support | Requires a library such as java-uuid-generator |
Databases
| DB | Support status | Notes |
|---|---|---|
| PostgreSQL 18 | ✅ Native | uuidv7() function. Beware of name collisions with existing polyfills |
| MariaDB 11.7 | ✅ Support added | v4/v7 support. Timestamp extraction not supported |
| MySQL | ❌ v1 only | v7 must be generated at the application layer |
Frameworks / ORMs
| Framework | Support status | Notes |
|---|---|---|
| Laravel 9.30+ | ✅ v7 by default | Auto-generated via the HasUuids trait |
| Django 5.2 | ✅ With Python 3.14 + PG18 | UUIDField with default=uuid.uuid7 |
| Rails | ⚠️ Manual setup | Default is v4. Requires a gem + callback |
| Spring Boot | ⚠️ Library required | Relies on external libraries since JDK has no support |
The Python + PostgreSQL combination is the most convenient. Both natively support v7, requiring no additional libraries. The Node.js + MySQL combination requires libraries on both sides, so factor in that cost if you adopt it.
Note: UUIDv7 was officially standardized as RFC 9562 in May 2024, but AWS's Marc Brooker has identified four design concerns (information leakage, entropy reduction, correlated behavior, UI display) and proposed fixes. Despite being standardized, it is still an evolving specification. Please read the information in this article as a snapshot from April 2026.
Decision Flowchart
Here is a single flowchart summarizing everything covered so far. Work through these questions in order during initial design.
Additionally, check the following on a per-table basis.
- Security tokens (password resets, API keys, etc.) → UUIDv4 or `secrets.token_urlsafe()`
- Settings and master data (low record count) → bigint is sufficient
- Audit logs and event history → UUIDv7 (time-sortable + unique)
- High-write-frequency tables → UUIDv4 (avoids rightmost leaf contention)
- Tables planned for sharding → UUID (bigint collides on merge)
Conclusion: Not a Strict Upgrade, but a Purpose-Specific Evolution
UUIDv7 is a good technology. It improves Insert performance on B-Tree-based RDBs, enables time-ordered sorting, and generates unique IDs in distributed environments. It has become an RFC standard too.
But it is not a "strict upgrade."
- For security tokens requiring randomness, v4 is the right choice (v7's random portion is 32 bits in the Python implementation)
- On distributed DBs (Spanner, etc.), v7 is an anti-pattern (hotspots)
- Under high-concurrency OLTP, v7 can be slower than v4 (rightmost leaf contention)
- At scales of tens of thousands of rows, there is no performance difference from bigint in the first place
- Security when exposing IDs in URLs is ensured by authorization checks, not the ID format
"Just use UUIDv7 for everything and you will be fine" is about as lazy as "just use React for everything and you will be fine." Technology choices require context. Project scale, table role, future scaling plans. If you copy the format without examining these, you will end up with either over-engineering or a design mistake.
Using the decision criteria laid out in this article, you should be able to explain "why I chose this ID." That, I think, is the starting point for moving beyond the knee-jerk "default to UUIDv4" or "UUIDv7 is the best" mindset.
Note that UUIDv7 itself is still an evolving specification. As an AWS Distinguished Engineer has pointed out design concerns and proposed fixes, best practices may continue to change. Please read this article as a snapshot from April 2026.
References
- RFC 9562: Universally Unique IDentifiers (UUIDs) (officially published May 2024)
- Python 3.14 uuid module (uuid7() reference)
- UUIDv7 Comes to PostgreSQL 18 (The Nile)
- PostgreSQL UUID Performance Benchmarking (DEV Community)
- Fixing UUIDv7 (for database use-cases) (Marc Brooker, AWS Distinguished Engineer)
- UUID v7 in PostgreSQL: The Tiny Choice That Quietly Destroys Insert Performance (Medium)
- OWASP API Security Top 10 - API1:2023 Broken Object Level Authorization
- Sandwich Attack UUID v1 (IBM PTC Security)
- Designing APIs for Humans: Object IDs (Stripe ID design explanation)
- Goodbye to sequential integers, hello UUIDv7! (Buildkite)
- Spanner Schema Design Best Practices (Google Cloud)
- Understanding UUIDv7 and Its Impact on Cloud Spanner (Google Cloud Blog)
- Dify: Migration fails on PostgreSQL 18 due to uuidv7 function conflict (GitHub Issue)
- How to use UUIDv7 in Python, Django and PostgreSQL
- Hacker News: UUIDv7 timing attack discussion
- Benchmark code for this article (GitHub)