The "Mixed Mode Operations" Anti-Pattern in ORMs

The "Mixed Mode Operations" Anti-Pattern in ORMs

When Your Change Tracker and Database Stop Agreeing

ORM · Entity Framework · Design Patterns · Database

If you've ever seen an error like this:

DbUpdateConcurrencyException: The database operation was expected to affect 1 row(s),
but actually affected 0 row(s); data may have been modified or deleted since entities were loaded.

...and spent hours debugging only to find that your own code deleted the row moments before trying to update it, you've encountered one of the most common ORM anti-patterns: Mixed Mode Operations.

This article explains what this anti-pattern is, why it happens, and how to fix it.


What Is An ORM And What Does It Promise?

Object-Relational Mappers (ORMs) like Entity Framework, Hibernate, and Django ORM make a simple promise:

"Give me your objects. I'll track changes and sync them with the database. You don't need to write SQL."

To deliver on this promise, ORMs use a Change Tracker (Hibernate and SQLAlchemy call it the Session; it builds on the Unit of Work and Identity Map patterns). This component:

  1. Loads entities from the database into memory
  2. Tracks which entities are new, modified, or deleted
  3. Generates SQL statements when you save
  4. Syncs your objects with the database

Your Code                    Change Tracker                  Database
─────────────────────────────────────────────────────────────────────────
Load entities        →       Tracks as "Unchanged"     ←     SELECT
Modify properties    →       Marks as "Modified"
Call SaveChanges()   →       Generates UPDATE          →     UPDATE
                             Marks as "Unchanged"      ←     Success
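
You can watch these state transitions directly in EF Core. A minimal sketch (it assumes a DbContext in scope, as in the other snippets, with a hypothetical DbSet<Item> whose Item has a Status property):

// Illustrative sketch: observing change tracker states
var item = await context.Items.FirstAsync();        // loaded -> tracked as Unchanged
Console.WriteLine(context.Entry(item).State);       // EntityState.Unchanged

item.Status = "Processed";                           // changed in memory only
Console.WriteLine(context.Entry(item).State);       // EntityState.Modified

await context.SaveChangesAsync();                    // the tracker generates the UPDATE
Console.WriteLine(context.Entry(item).State);       // EntityState.Unchanged again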

This works beautifully for single-entity operations. The problem starts when you need performance.


The Performance Problem

Imagine you need to update 10,000 rows. With standard ORM operations:

// Entity Framework - Standard way
var items = await context.Items
    .Where(x => x.Status == "Pending")
    .ToListAsync();                     // Load 10,000 entities
foreach (var item in items)
{
    item.Status = "Processed";  // Modify each one
}
await context.SaveChangesAsync();  // Generate 10,000 UPDATE statements

This is painfully slow because:

  • 10,000 entities loaded into memory
  • 10,000 entities tracked
  • 10,000 individual UPDATE statements generated and executed

The solution? Bulk operations that bypass the ORM and execute direct SQL:

// Much faster - one SQL statement (EF Core 7+ built-in)
await context.Items
    .Where(x => x.Status == "Pending")
    .ExecuteUpdateAsync(s => s.SetProperty(x => x.Status, "Processed"));

This generates a single SQL statement:

UPDATE Items SET Status = 'Processed' WHERE Status = 'Pending'

Fast. Efficient. But dangerous if misused.


The Anti-Pattern: Mixed Mode Operations

The anti-pattern occurs when you mix bulk operations (direct SQL) with change-tracked entities in the same unit of work.

A Real-World Example

Here's an inventory audit system. When auditing a warehouse location, we remove items that weren't found (lost/stolen) and mark the remaining items as verified:

public async Task ApplyInventoryAudit(int locationId, int actualCount)
{
    // Step 1: Load inventory items (TRACKED by change tracker)
    var items = await context.InventoryItems
        .Where(x => x.LocationId == locationId)
        .ToListAsync();
    // Result: 40 items loaded and tracked as "Unchanged"

    // Step 2: Auditor found only 20 items - delete the 20 "lost" ones
    var itemsToDelete = items.Take(items.Count - actualCount).ToList();
    
    // Using third-party library (e.g., EFCore.BulkExtensions)
    // This executes direct SQL, bypassing change tracker
    await context.BulkDeleteAsync(itemsToDelete);
    // Result: 20 items deleted from DATABASE
    // BUT: Change tracker still thinks all 40 exist!

    // Step 3: Mark remaining items as verified
    foreach (var item in items)  // Still iterating over 40 items!
    {
        item.LastAuditedAt = DateTime.UtcNow;
        item.IsVerified = true;
    }
    await context.SaveChangesAsync();  // Tries to UPDATE 40 items
    // EXCEPTION: 20 of those items no longer exist in the database!
}

Note: BulkDeleteAsync is from third-party libraries like EFCore.BulkExtensions, not built into EF Core. EF Core 7+ has built-in ExecuteDeleteAsync which also bypasses change tracking.

Why Does This Happen?

                    Change Tracker              Database
                    ──────────────              ────────
After Load:         40 items (Unchanged)        40 items
After BulkDelete:   40 items (Unchanged)        20 items  ← OUT OF SYNC!
                    ↑ Doesn't know about        ↑ 20 deleted
                      the deletion!
After Modify:       40 items (Modified)         20 items
SaveChanges:        Generates 40 UPDATEs   →    20 fail (rows don't exist)
                                                EXCEPTION!

The bulk operation bypassed the change tracker. The tracker and database became out of sync. When SaveChanges tried to update entities that no longer existed, it failed.
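
In EF Core this surfaces as the exception quoted at the top of the article. A sketch of what it looks like at the call site (illustrative only; catching it helps with diagnosis, but the real fixes follow below):

try
{
    await context.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException ex)
{
    // ex.Entries lists the entries whose UPDATE affected 0 rows -
    // here, the 20 items the bulk delete already removed from the database
    foreach (var entry in ex.Entries)
    {
        Console.WriteLine($"Stale entity: {entry.Entity}");
    }
    throw;  // the tracker and database are out of sync; surface the bug
}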


This Is An ORM Paradigm Problem

This isn't just an Entity Framework problem. Any ORM that provides both:

  1. Change tracking (automatic sync between objects and database)
  2. Direct SQL execution (bypassing that sync for performance)

...will have this potential conflict. The specific behavior varies:

  • Some ORMs throw exceptions (like EF Core with concurrency checks)
  • Some fail silently (UPDATE affects 0 rows, no error)
  • Some have partial solutions (session clearing, object expiration)

The core issue is the same: the change tracker and database become out of sync.


The Solutions

Solution 1: Consistent Persistence Strategy (Recommended)

Use the same approach for related operations. If you use direct SQL for one operation, use it for all related operations.

public async Task ApplyInventoryAudit(int locationId, int actualCount)
{
    // Load items only to determine which IDs to delete
    // (AsNoTracking: change tracking isn't used anywhere in this method)
    var items = await context.InventoryItems
        .AsNoTracking()
        .Where(x => x.LocationId == locationId)
        .ToListAsync();

    var idsToDelete = items.Take(items.Count - actualCount).Select(x => x.Id).ToList();

    // Direct SQL delete (EF Core 7+ built-in)
    await context.InventoryItems
        .Where(x => idsToDelete.Contains(x.Id))
        .ExecuteDeleteAsync();

    // Direct SQL update - no change tracker involved
    await context.InventoryItems
        .Where(x => x.LocationId == locationId)
        .ExecuteUpdateAsync(s => s
            .SetProperty(x => x.LastAuditedAt, DateTime.UtcNow)
            .SetProperty(x => x.IsVerified, true));

    // No change tracker involved = No conflict possible
}

Why this works: Both operations use direct SQL. Neither touches the change tracker. No sync issues.

Solution 2: Bulk Operations Last

If you must mix paradigms, do bulk operations at the end when you no longer need the tracked entities.

// Step 1: Do all tracked operations FIRST
foreach (var item in items)
{
    item.LastAuditedAt = DateTime.UtcNow;
    item.IsVerified = true;
}
await context.SaveChangesAsync();  // Updates all 40 items - they still exist

// Step 2: THEN do bulk delete (no more tracked operations after this)
await context.BulkDeleteAsync(itemsToDelete);  // Safe - we're done with tracking

Why this works: By the time we do the bulk delete, we've already finished all change-tracked operations.

Solution 3: Detach After Bulk Operations

Explicitly remove deleted entities from the change tracker after bulk operations.

// Bulk delete using third-party library
await context.BulkDeleteAsync(itemsToDelete);

// Tell EF these entities are no longer tracked
foreach (var item in itemsToDelete)
{
    context.Entry(item).State = EntityState.Detached;
}

// Remove from our in-memory list too
foreach (var item in itemsToDelete)
{
    items.Remove(item);
}

// Now safe to work with remaining items
foreach (var item in items)  // Only 20 items now
{
    item.LastAuditedAt = DateTime.UtcNow;
    item.IsVerified = true;
}
await context.SaveChangesAsync();  // Updates only 20 items

Why this works: We manually sync the change tracker with reality after the bulk operation.
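
If nothing the context is tracking still matters, EF Core 5+ also offers a blunter version of this idea: clearing the tracker wholesale. A minimal sketch (same context as above):

// After the bulk operation, drop everything the tracker knows about (EF Core 5+)
context.ChangeTracker.Clear();

// Anything you still hold in memory is now a plain, untracked object;
// re-query (or re-attach) whatever you need before further change-tracked work.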

Solution 4: Refresh From Database

After bulk operations, reload entities to sync the tracker with reality.

// Bulk delete
await context.BulkDeleteAsync(itemsToDelete);

// Reload from database - deleted items won't be included
items = await context.InventoryItems
    .Where(x => x.LocationId == locationId)
    .ToListAsync();  // Only returns 20 items now

// Safe to proceed
foreach (var item in items)
{
    item.LastAuditedAt = DateTime.UtcNow;
    item.IsVerified = true;
}
await context.SaveChangesAsync();

Why this works: We get a fresh view from the database that reflects the bulk delete.


Solution Comparison

Solution              Pros                  Cons                        Best For
──────────────────────────────────────────────────────────────────────────────────────────
Consistent Strategy   Clean, no conflicts   Can't use change tracking   New code, bulk-heavy operations
Bulk Last             Simple to implement   Limits operation order      Simple flows
Detach After Bulk     Keeps flexibility     Manual, error-prone         Complex existing code
Refresh From DB       Always accurate       Extra database round-trip   When accuracy is critical

Best Practices Summary

DO:

  1. Choose one paradigm per operation flow - Either use change tracking throughout OR use direct SQL throughout
  2. Use built-in bulk methods when available - EF Core 7+ has ExecuteUpdateAsync and ExecuteDeleteAsync. Prefer these over third-party libraries when possible.
  3. Understand what your bulk library does - If using third-party libraries like EFCore.BulkExtensions, know that methods like BulkDeleteAsync bypass the change tracker.
  4. Put bulk operations at the end - When mixing is unavoidable, do bulk operations last

DON'T:

  1. Don't iterate over tracked collections after bulk delete - The collection still contains references to deleted entities
  2. Don't assume the tracker knows about direct SQL - It doesn't. Ever. Whether it's built-in ExecuteDeleteAsync or third-party BulkDeleteAsync (see the sketch after this list)
  3. Don't mix bulk and tracked operations on the same entities - This is the core anti-pattern
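
To see points 2 and 3 concretely, here's a minimal sketch (reusing the InventoryItems set from the earlier examples) of the tracker staying unaware of a direct-SQL delete:

var item = await context.InventoryItems
    .FirstAsync(x => x.LocationId == locationId);   // tracked as Unchanged

await context.InventoryItems
    .Where(x => x.Id == item.Id)
    .ExecuteDeleteAsync();                           // row deleted in the database

Console.WriteLine(context.Entry(item).State);        // still EntityState.Unchanged!

item.IsVerified = true;                               // tracker now marks it Modified
await context.SaveChangesAsync();                     // UPDATE affects 0 rows -> exception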

Conclusion

The "Mixed Mode Operations" anti-pattern is a fundamental tension in ORM design: performance vs. consistency.

ORMs provide a beautiful abstraction with change tracking, but that abstraction breaks down when you bypass it for performance. The change tracker and database become out of sync, leading to cryptic errors or silent failures.

The solution is simple: be consistent. If you use direct SQL for one operation, use it for related operations. If you use change tracking, stick with it. Don't mix paradigms on the same entities in the same unit of work.

Understanding this anti-pattern will save you hours of debugging and make you a more effective developer with any ORM.


This article was written based on real-world debugging experience with Entity Framework Core. Similar patterns and issues exist in other ORMs like Hibernate, Django ORM, and SQLAlchemy - consult their documentation for specific behaviors and solutions.