A scheduled flow runs every night at 2 AM. It checks open leads, evaluates routing criteria, and updates assignment fields based on territory logic. Nobody monitors it — it's been working fine for eight months.

Then the lead volume crosses 47,000 records.

On a Tuesday night, the flow hits the governor limit. The platform throws a FLOW_ELEMENT_ERROR: too many DML statements in a single transaction. Half the lead-routing updates fail silently. No notification fires. No fault path was configured. For three days, new leads route to the wrong reps, no one knows why, and a $340K pipeline sits in the wrong queues.

The root cause was visible in the .flow-meta.xml from day one: a recordUpdates element wired into the iteration path of a loops element, firing one DML statement per record.

This is not a rare edge case. It's the most common flow failure pattern in production orgs, and it's one of seven structural problems that Salesforce's built-in Flow checks consistently miss. Here's what each pattern looks like in metadata, how it breaks, and how to catch it before the 2 AM call.

Pattern 1: Missing Fault Paths on DML Elements

Metadata Signature

<recordCreates>
  <name>Create_Case</name>
  <connector>
    <targetReference>Next_Step</targetReference>
  </connector>
  <!-- No faultConnector element -->
</recordCreates>

When a recordCreates, recordUpdates, or recordDeletes element has no faultConnector, the flow exits on failure with no record of what happened.

How It Breaks in Production

A flow that creates follow-up tasks after an Opportunity stage change hits a validation rule on the Task object — one added last month by a different team. The DML fails. The flow terminates. No log entry is created in a place the admin monitors. The Opportunity moves stages normally from the rep's perspective. 600 follow-up tasks are never created over four days.

Why Salesforce's Built-In Checks Miss It

Flow Builder shows no warning for missing fault paths. The Flow Analyzer (accessible via Debug > Analyze) checks for unreachable elements and unused variables, but it does not flag DML elements without fault connectors. This is considered a design choice, not an error.

The Fix

Add a faultConnector to every recordCreates, recordUpdates, and recordDeletes element. At minimum, route it to a Create Records element that logs the error to a custom Flow_Error_Log__c object:

<faultConnector>
  <targetReference>Log_Error</targetReference>
</faultConnector>

For critical flows, route the fault path to an email alert action or a platform event so ops teams get real-time notification.

Detecting This Across Your Org

Grep your .flow-meta.xml files for recordCreates, recordUpdates, and recordDeletes elements, then check whether each one contains a child faultConnector. Or audit your org's flows in 60 seconds at /flow-health: it scans all active flows and flags every DML element missing a fault path, with severity ratings and fix recommendations.
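
If you want to script the check locally, a minimal sketch along these lines works against a metadata directory. The force-app path is an assumption about a standard SFDX project layout; point the glob at wherever your .flow-meta.xml files actually live.

import glob
import xml.etree.ElementTree as ET

NS = "{http://soap.sforce.com/2006/04/metadata}"
DML_TAGS = ("recordCreates", "recordUpdates", "recordDeletes")

# Assumes an SFDX-style project; adjust the glob for your own directory layout
for path in glob.glob("force-app/**/*.flow-meta.xml", recursive=True):
    root = ET.parse(path).getroot()
    for tag in DML_TAGS:
        for element in root.findall(NS + tag):
            name = element.findtext(NS + "name")
            # A DML element with no faultConnector child exits silently on failure
            if element.find(NS + "faultConnector") is None:
                print(f"{path}: {tag} '{name}' has no fault path")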

Scan for missing fault paths across your entire org

Paste your flow metadata and get a complete audit in under 60 seconds — no Salesforce connection, no signup required.

Run Flow Health Audit →

Pattern 2: DML or SOQL Inside Loops

Metadata Signature

<loops>
  <name>Loop_Through_Contacts</name>
  <collectionReference>Contact_Collection</collectionReference>
  ...
</loops>

<!-- Danger: recordUpdates nested after the loop element,
     connected from INSIDE the loop -->
<recordUpdates>
  <name>Update_Contact</name>
  ...
</recordUpdates>

The tell is a recordUpdates or recordLookups element that is reachable from the loop's nextValueConnector (the path that runs once per item), rather than only from its noMoreValuesConnector (the "No More Values" exit).

How It Breaks in Production

This is the scenario from the intro. Salesforce enforces a limit of 150 DML statements and 100 SOQL queries per transaction. A loop over 200 contacts that fires one recordUpdates per contact consumes 200 DML statements, 50 over the limit. At 47,000 leads, the flow dies the moment the 151st DML statement fires, with no partial-success report and (without a fault path) no notification.

The Salesforce Developer documentation on Flow governor limits is explicit: each DML element inside a loop fires once per iteration.

Why Salesforce's Built-In Checks Miss It

Flow Builder does not statically analyze loop structure for DML placement. It has no concept of "this element is inside a loop." The warning surfaces only when the flow actually executes against a record set large enough to breach the limit — which in many orgs doesn't happen until months after go-live.

The Fix

Move all DML outside the loop. Build a collection inside the loop, then update the entire collection in a single operation after the loop exits:

Loop → Assign (add to collection variable)
Loop exits → Update Records (single DML on full collection)

One DML statement for 10,000 records consumes the same statement count as an update of a single record. (The separate 10,000-row DML limit still applies, but you hit it far later.)

Detecting This Across Your Org

The metadata signature is unambiguous once you know what to look for. Run a flow health check at /flow-health — it traces the connection graph of every active flow and flags DML or SOQL elements reachable from within a loop body. No Salesforce connection required; paste your .flow-meta.xml files and it runs entirely in your browser.
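
If you want to automate the graph check yourself, here is a rough sketch of the idea. It assumes the same SFDX-style force-app layout and simplifies the traversal: it follows every targetReference under an element (including fault paths) and treats anything reachable from the loop, other than its noMoreValuesConnector exit, as part of the loop body.

import glob
import xml.etree.ElementTree as ET

NS = "{http://soap.sforce.com/2006/04/metadata}"
DATA_TAGS = {"recordCreates", "recordUpdates", "recordDeletes", "recordLookups"}

def audit(path):
    root = ET.parse(path).getroot()
    outgoing, kind = {}, {}
    for element in root:
        name = element.findtext(NS + "name")
        if name is None:
            continue  # skip flow-level fields like label, status, apiVersion
        tag = element.tag.replace(NS, "")
        kind[name] = tag
        targets = {c.text for c in element.iter(NS + "targetReference") if c.text}
        if tag == "loops":
            # The "no more values" exit leaves the loop, so it is not part of the body
            targets.discard(element.findtext(NS + "noMoreValuesConnector/" + NS + "targetReference"))
        outgoing[name] = targets
    for loop_name, tag in kind.items():
        if tag != "loops":
            continue
        seen, stack = set(), list(outgoing[loop_name])
        while stack:
            current = stack.pop()
            if current in seen or current == loop_name:
                continue  # stop once the path cycles back to the loop element
            seen.add(current)
            if kind.get(current) in DATA_TAGS:
                print(f"{path}: '{current}' ({kind[current]}) executes inside loop '{loop_name}'")
            stack.extend(outgoing.get(current, ()))

for path in glob.glob("force-app/**/*.flow-meta.xml", recursive=True):
    audit(path)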

Pattern 3: Unbounded Get Records Without Filters or Limits

Metadata Signature

<recordLookups>
  <name>Get_All_Contacts</name>
  <object>Contact</object>
  <!-- No filters block. No limit element. -->
  <outputReference>Contact_Collection</outputReference>
  <queriedFields>Id</queriedFields>
  <queriedFields>Email</queriedFields>
</recordLookups>

A recordLookups element with no filters block and no limit element queries every record on the object, bounded only by the 50,000-row SOQL query limit, and consumes heap proportional to the number of records and fields returned.

How It Breaks in Production

A flow built for an org with 3,000 contacts works fine. Eighteen months of growth later, the org has 140,000 contacts. The Get Records element now tries to pull all of them, runs into the query-row and heap limits, and the transaction fails with a System.LimitException. The error is logged, but the failure mode is confusing: the flow itself looks correct, the object looks fine, and no individual field is suspicious.

Why Salesforce's Built-In Checks Miss It

Salesforce cannot know at design time how many records a query will return. The Flow Analyzer checks syntax and connectivity, not runtime query scope. It will not warn you about a filterless Get Records on a high-volume object.

The Fix

Always specify filter conditions and a row limit on Get Records:

<filters>
  <field>IsActive__c</field>
  <operator>EqualTo</operator>
  <value>
    <booleanValue>true</booleanValue>
  </value>
</filters>
<limit>200</limit>

If you genuinely need to process every record on an object, use a schedule-triggered flow with the object specified on the Start element, which processes records in batches of 200, similar to Batch Apex.

Detecting This Across Your Org

Every recordLookups element without a filters block is a ticking clock on a growing org. This is especially dangerous on standard high-volume objects: Contact, Lead, Task, Event, Case. Pair this audit with a review of your permission set architecture — unbounded queries on sensitive objects can also expose more data than intended.
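
A minimal scan for the pattern, again assuming an SFDX-style layout (the high-volume object list is illustrative; extend it for your org):

import glob
import xml.etree.ElementTree as ET

NS = "{http://soap.sforce.com/2006/04/metadata}"
HIGH_VOLUME = {"Contact", "Lead", "Task", "Event", "Case"}  # objects that tend to grow fastest

for path in glob.glob("force-app/**/*.flow-meta.xml", recursive=True):
    root = ET.parse(path).getroot()
    for lookup in root.findall(NS + "recordLookups"):
        if lookup.find(NS + "filters") is not None:
            continue  # at least one filter is present
        obj = lookup.findtext(NS + "object")
        name = lookup.findtext(NS + "name")
        severity = "CRITICAL" if obj in HIGH_VOLUME else "review"
        print(f"{severity}: {path}: Get Records '{name}' on {obj} has no filters")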

Pattern 4: Hardcoded Record and RecordType IDs

Metadata Signature

<decisions>
  <name>Check_Record_Type</name>
  <rules>
    <conditionLogic>and</conditionLogic>
    <conditions>
      <leftValueReference>$Record.RecordTypeId</leftValueReference>
      <operator>EqualTo</operator>
      <rightValue>
        <stringValue>0124W000001AbCdEAF</stringValue>
      </rightValue>
    </conditions>
  </rules>
</decisions>

An 18-character Salesforce ID hardcoded directly into a Decision element's condition. The same pattern appears in assignments, formulas, and filter values.

How It Breaks in Production

Salesforce record IDs are org-specific. The RecordType ID 0124W000001AbCdEAF in production does not exist in sandbox — or worse, maps to a completely different RecordType. A flow deployed from sandbox to production (or vice versa) that contains hardcoded IDs behaves correctly in the source org and silently misroutes, skips logic, or errors in the target org.

This pattern also creates maintainability debt: if the RecordType is deleted and recreated, or if a record is migrated, the flow's hardcoded ID is invalid with no warning.

Why Salesforce's Built-In Checks Miss It

Flow Builder does not validate whether a hardcoded ID corresponds to a valid record in the current org during design time. Change Sets and the Metadata API will deploy the flow without complaint. The failure surfaces only at runtime in the target org.

The Fix

Replace hardcoded IDs with Custom Metadata Types or Custom Labels:

<rightValue>
  <elementReference>$CustomMetadata.Flow_Config__mdt.Lead_Routing.Lead_RecordType_Id__c</elementReference>
</rightValue>

The reference follows the pattern $CustomMetadata.Type__mdt.Record_DeveloperName.Field__c, which makes the ID org-configurable without editing the flow. You can also use the $RecordType global variable in flows that support it, or retrieve the value with a Get Records element on the custom metadata type.

Detecting This Across Your Org

Search your .flow-meta.xml files for stringValue contents that look like Salesforce IDs: exactly 15 or 18 alphanumeric characters, e.g. [a-zA-Z0-9]{15}(?:[a-zA-Z0-9]{3})?. The /flow-health auditor flags hardcoded IDs automatically: it uses the same pattern-matching approach and cross-references against known Salesforce ID prefixes to distinguish actual record IDs from other strings. Also worth checking: if your org uses the Salesforce Migration Auditor at /migration, it catches hardcoded IDs in Workflow Rules and Process Builder automations that are being converted to Flow.
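
Here is a rough version of that scan in script form. The key-prefix table is a small, well-known subset; add the prefixes for your own objects (visible in Setup or via the describe API).

import glob
import re
import xml.etree.ElementTree as ET

NS = "{http://soap.sforce.com/2006/04/metadata}"
ID_PATTERN = re.compile(r"^[a-zA-Z0-9]{15}(?:[a-zA-Z0-9]{3})?$")
# Partial list of standard key prefixes; extend as needed
KNOWN_PREFIXES = {
    "001": "Account", "003": "Contact", "005": "User", "006": "Opportunity",
    "00Q": "Lead", "012": "RecordType", "00G": "Group/Queue", "500": "Case",
}

for path in glob.glob("force-app/**/*.flow-meta.xml", recursive=True):
    for value in ET.parse(path).getroot().iter(NS + "stringValue"):
        text = (value.text or "").strip()
        if ID_PATTERN.match(text) and text[:3] in KNOWN_PREFIXES:
            print(f"{path}: hardcoded {KNOWN_PREFIXES[text[:3]]} ID '{text}'")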

Find hardcoded IDs and other production risks

The Flow Health Auditor scans all 7 failure patterns — including hardcoded IDs, DML in loops, and missing fault paths — across your entire org.

Audit Your Flows →

Pattern 5: Missing Descriptions and Weak Naming

Metadata Signature

<Flow xmlns="http://soap.sforce.com/2006/04/metadata">
  <label>Lead Routing V2</label>
  <description></description>
  ...
  <decisions>
    <name>Decision_1</name>
    <label>Decision 1</label>
    ...
  </decisions>
</Flow>

An empty description field on the Flow itself. Decision elements labeled "Decision 1," "Decision 2." Assignment elements named "Assign_1." Loop elements with no label that describes what they iterate over.

How It Breaks in Production

This pattern doesn't cause a runtime error. It causes a debugging disaster three months after the admin who built it has moved on. When lead-routing breaks at 2 AM, the on-call admin opens the flow and sees eleven Decision elements labeled "Decision 1" through "Decision 11." They spend forty minutes reverse-engineering what each branch does before they can identify which path is executing incorrectly.

The operational cost is real: a well-documented flow takes fifteen minutes to debug. An undocumented flow of the same complexity takes two to four hours.

Why Salesforce's Built-In Checks Miss It

Flow Builder does not require descriptions. It does not validate that element names are meaningful. It allows a fully deployed, production-active flow with no description and auto-generated element names. The Flow Analyzer does not flag naming quality.

The Fix

Enforce documentation as a deployment standard: every flow ships with a populated description that says what it does and why, every Decision and Assignment element carries a label that states its business purpose, and no element keeps an auto-generated name like Decision_1.

One naming rule that holds: if you can't describe an element in five words, the element is doing too much.

Detecting This Across Your Org

Look for: empty description elements on Flow objects, label values matching the pattern "Decision [0-9]" or "Assignment [0-9]", and API names that are auto-incremented suffixes. Also review your admin documentation practices — undocumented flows and unused custom fields are symptoms of the same operational blind-spot problem.
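
A quick heuristic scan for both signals, again assuming an SFDX-style layout (the "generic label" regex is just an example heuristic; tune it to your own naming habits):

import glob
import re
import xml.etree.ElementTree as ET

NS = "{http://soap.sforce.com/2006/04/metadata}"
GENERIC_LABEL = re.compile(r"^(Decision|Assignment|Loop|Get) \d+$")

for path in glob.glob("force-app/**/*.flow-meta.xml", recursive=True):
    root = ET.parse(path).getroot()
    if not (root.findtext(NS + "description") or "").strip():
        print(f"{path}: flow has no description")
    for tag in ("decisions", "assignments", "loops", "recordLookups"):
        for element in root.findall(NS + tag):
            label = (element.findtext(NS + "label") or "").strip()
            if GENERIC_LABEL.match(label):
                print(f"{path}: generic label '{label}'")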

Pattern 6: Scheduled Flows Running on Unbounded Record Sets

Metadata Signature

<start>
  <triggerType>Scheduled</triggerType>
  <schedule>
    <frequency>Daily</frequency>
    <startDate>...</startDate>
    <startTime>...</startTime>
  </schedule>
  <object>Lead</object>
  <!-- No filters. No object scope limit. -->
</start>

A schedule-triggered flow with no filter conditions on the Start element — meaning it runs against every record on the specified object, every time it fires.

How It Breaks in Production

Salesforce creates one flow interview per record for schedule-triggered flows and batches them in groups of 200. At 50,000 records, that's 250 batches. If the flow contains any heavy processing — multiple Get Records, complex decision trees, nested subflows — each batch may approach the 10-second CPU limit. Total runtime extends into hours. Paused interviews pile up in the Scheduled Jobs queue. If the job schedule fires again before the previous run completes, the org starts accumulating a backlog.

Per the Salesforce Architects' guide to record-triggered automation, scheduled flows running against hundreds of thousands of records should be replaced with Scheduled Apex (Batch Apex), which supports batch sizes up to 2,000 and gives fine-grained control over retry and error handling.

Why Salesforce's Built-In Checks Miss It

The Flow Builder Start element accepts a schedule without requiring filter criteria. There is no warning when a scheduled flow's query scope is potentially unbounded. The failure surfaces only at runtime, often at a record volume threshold nobody anticipated when the flow was designed.

The Fix

Always add filter conditions to the Start element of schedule-triggered flows:

<filters>
  <field>Status__c</field>
  <operator>EqualTo</operator>
  <value>
    <stringValue>Pending</stringValue>
  </value>
</filters>
<filters>
  <field>LastModifiedDate</field>
  <operator>GreaterThan</operator>
  <value>
    <stringValue>LAST_N_DAYS:7</stringValue>
  </value>
</filters>

If you truly need to process all records: scope the filter to records modified in the last N days, run during off-peak hours (midnight or later), and monitor using the FLOW_START_SCHEDULED_RECORDS event in debug logs to track actual batch sizes.

For record sets consistently above 50,000, migrate to Scheduled Apex.

Detecting This Across Your Org

Any schedule-triggered flow without filter conditions on the Start element is a risk. The severity scales with object record volume — a scheduled flow on a 200-record custom object is low risk; one on Lead, Contact, or Case in a mature org is critical. Check this pattern alongside your sharing rules review — flows running in system context bypass your sharing model entirely, so scope matters for data governance too.
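
A small sketch for scanning this in metadata, assuming the same SFDX-style layout:

import glob
import xml.etree.ElementTree as ET

NS = "{http://soap.sforce.com/2006/04/metadata}"

for path in glob.glob("force-app/**/*.flow-meta.xml", recursive=True):
    root = ET.parse(path).getroot()
    if root.findtext(NS + "status") != "Active":
        continue  # skip drafts and obsolete versions
    start = root.find(NS + "start")
    if start is None or start.findtext(NS + "triggerType") != "Scheduled":
        continue  # only schedule-triggered flows
    if start.find(NS + "filters") is None:
        obj = start.findtext(NS + "object") or "(no object)"
        print(f"{path}: schedule-triggered flow on {obj} has no Start filters")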

Pattern 7: Version Sprawl

Metadata Signature

<!-- Flow pinned to an old API version -->
<apiVersion>50.0</apiVersion>

<!-- Or, retrieved via the Tooling API:
  Several near-duplicate flows (Lead_Routing, Lead_Routing_V2,
  Lead_Routing_V3) that are all active on the same object

  5+ Flow version records with Status = 'Obsolete'
  under a single FlowDefinition
-->

How It Breaks in Production

Version sprawl has two failure modes. The first is technical: flows pinned to old API versions (50.0 dates to Winter '21) miss the behavior changes and features Salesforce has shipped since, and a newer flow that calls one of them as a subflow can produce unexpected results when the two sit on very different API versions.

The second failure mode is operational: an org with several near-duplicate flows active on the same object (usually clones like "Lead Routing V2" left active alongside the original) creates double-execution risk. Two active record-triggered flows on Opportunity that both fire on a stage change will both run. If their logic overlaps, with both updating the same field, the last write wins, and the outcome depends on execution order, which isn't guaranteed.

Five or more inactive versions of a flow is also a diagnostic signal: the flow has been iterated without a clear rollback plan. When something breaks, no one knows which version to reactivate.

Why Salesforce's Built-In Checks Miss It

Salesforce lets cloned copies of the same logic stay active on the same object, lets flows remain active on outdated API versions indefinitely, and keeps every obsolete version around. Flow Builder does not warn when you activate a third flow on the same trigger object or when inactive versions keep accumulating.

The Fix

Consolidate to one active flow per object and trigger context, delete obsolete versions beyond the last known-good one you might roll back to, and bump the API version the next time you meaningfully edit the flow. Keep the history in release notes, not as dormant versions in the org.
Detecting This Across Your Org

Query the Tooling API for FlowDefinition records and count the active and obsolete versions behind each one. Any flow with 5+ obsolete versions, or any object carrying 2+ active flows with overlapping logic, is worth flagging. The /flow-health auditor checks this automatically; it's one of the seven weighted categories in the health score, and version sprawl is often the first thing surfaced in mature orgs that have been iterating on flows for two or more years.
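
A rough Tooling API sketch of that count, using plain REST calls. The instance URL, API version, and token are placeholders, the script ignores query pagination, and the Flow and FlowDefinition objects here are Tooling API objects, not the metadata files used elsewhere in this post.

import requests
from collections import Counter

INSTANCE = "https://yourdomain.my.salesforce.com"  # placeholder
TOKEN = "<access token>"                           # placeholder
API = f"{INSTANCE}/services/data/v59.0/tooling/query/"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def tooling_query(soql):
    response = requests.get(API, headers=HEADERS, params={"q": soql})
    response.raise_for_status()
    return response.json()["records"]  # ignores pagination for brevity

# Map each flow definition Id to its developer name
definitions = {d["Id"]: d["DeveloperName"]
               for d in tooling_query("SELECT Id, DeveloperName FROM FlowDefinition")}

# Count obsolete versions per definition
versions = tooling_query("SELECT DefinitionId, Status FROM Flow")
obsolete = Counter(v["DefinitionId"] for v in versions if v["Status"] == "Obsolete")

for definition_id, count in sorted(obsolete.items(), key=lambda kv: -kv[1]):
    if count >= 5:
        print(f"{definitions.get(definition_id, definition_id)}: {count} obsolete versions")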

Why Salesforce's Native Checks Aren't Enough

The Salesforce Flow Analyzer is a useful start. It catches syntax errors, unreachable elements, and unused variables. What it can't do: analyze runtime behavior against your actual data volumes, cross-reference your sharing model and permission architecture, or flag patterns that are structurally valid but operationally dangerous.

The seven patterns above are all valid, deployable, and often production-active in orgs that have passed every Salesforce-native check. They fail at scale, at data volume thresholds nobody anticipated, or in the gap between sandbox and production behavior.

The only way to catch them systematically is to analyze the metadata directly — not the visual canvas, but the .flow-meta.xml structure that describes what each flow actually does.

We built /flow-health to do exactly that. Paste your flow XML or upload your metadata files and you'll get a weighted health score across all seven categories, a prioritized findings list with severity badges, and specific fix recommendations — no Salesforce connection, no signup, runs entirely in your browser.

If you're also dealing with the Workflow Rule EOL or Process Builder migration, the /migration auditor covers the parallel set of patterns in legacy automations.

Conclusion: Find These Patterns Before They Find You

Governor limits don't give warnings. Flow errors in scheduled contexts don't always surface in places admins look. Hardcoded IDs don't fail until deployment. Version sprawl doesn't break anything until it does.

The metadata has all of this information. It was there from the moment the flow was saved. The question is whether you read it before production does.

Audit your org's flows in 60 seconds

Free, no signup, runs entirely in your browser. Get a weighted health score across all 7 failure patterns with prioritized fix recommendations.

Run Free Flow Health Audit →