I have sat across the table from more BSA examiners than I can count. I have watched careful institutions get cited for thin SAR narratives, and sloppy ones get a pass because their narratives were strong. The pattern that emerges, from that vantage point, is consistent and underappreciated: examiners are not primarily evaluating whether you filed a SAR. They are evaluating whether the narrative you filed demonstrates that your institution knows what it is looking at.
This is a meaningful distinction, and it has practical consequences for how a BSA officer — especially one running a lean compliance function at a community bank — should think about the output their monitoring system produces.
The three questions an examiner is silently asking
When an examiner pulls a SAR off the stack for review, they are running the narrative through three questions. These questions are not written in the FFIEC BSA/AML Examination Manual in quite this form, but anyone who has been through enough exams learns to hear them.
One: Does this narrative explain why the activity is suspicious, in terms that an outside reader could follow? Not just "the customer deposited cash in structured amounts" — but why the pattern fits a known typology, what made the amounts structured rather than coincidental, and how the activity deviated from the investigator's analysis of the customer's expected behavior in a way that made it stand out.
Two: Does the narrative tie back to evidence? Specifically, does it reference the supporting data — transaction IDs, counterparty identifiers, account history windows, negative news hits, KYC file notes — that an examiner could theoretically pull if they wanted to verify the analysis? A narrative that asserts conclusions without tying them to retrievable evidence is a narrative an examiner will probe.
Three: Does the disposition — whether the filing was made, whether the account was restricted, whether information was shared with other institutions under Section 314(b) — match the narrative's own logic? Internal inconsistency is what triggers citations. A narrative that describes high-risk activity and concludes with no account-level action reads, to an examiner, as a red flag about the institution's risk judgment, not about the customer.
Those are the three questions. Everything else in SAR quality is in service of answering them clearly.
Why most SARs from community institutions fail at the first question
The harder truth, which rarely gets said out loud: the structural reason community-bank SARs tend to be thinner than large-institution SARs is not a skill gap in the BSA officers filing them. It is the tooling.
When a monitoring system produces an alert, it delivers to the investigator a fact — a single transaction or a small cluster that crossed some threshold. The investigator then has to build, from scratch, the narrative context around that fact: pulling customer history, reconstructing counterparty relationships, checking internal notes, reviewing prior SARs, cross-referencing sanctions and PEP lists, examining the geographic risk of the counterparty's jurisdiction. At a large institution, there are specialized analysts and tooling for each of those steps. At a community bank, it is often one person, with an Excel sheet and a manual workflow.
The result is predictable. Under time pressure, the narrative ends up anchored to the fact — "structured cash deposits totaling $9,200 over three days" — without the surrounding context that would let an examiner answer their first question. Not because the BSA officer does not know that context matters. Because producing it, at scale, across an alert queue of hundreds per week, is not economically possible with the tools most community institutions have.
What good narratives actually contain
The SARs that survive examination without friction have, in my experience, six structural elements. I'll walk through each one because these are the things examiners are checking against, whether or not they're explicit about it.
Customer context — who the customer is, how long they have been with the institution, what their expected activity pattern is, and what specifically about the current activity deviates from that expected pattern. A good narrative establishes the baseline before describing the deviation. An examiner reading "expected activity based on 18 months of account history is approximately $4,000 in monthly deposits from payroll sources; the pattern flagged here represents a 7x deviation in frequency and a 3x deviation in magnitude, with a shift in funding source from ACH to cash" is being given exactly what they need.
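The baseline-then-deviation framing is, at bottom, a small calculation. A minimal Python sketch, assuming monthly deposit totals and counts have been pulled from 18 months of account history (the field names are illustrative, not from any particular core system):

```python
from statistics import mean

def deviation_summary(history, current):
    """Compare the month under review to the customer's historical baseline.

    history: list of dicts, one per month, with 'deposit_total' and
             'deposit_count' keys (illustrative schema).
    current: dict with the same keys for the flagged period.
    """
    base_total = mean(m["deposit_total"] for m in history)
    base_count = mean(m["deposit_count"] for m in history)
    return {
        "magnitude_x": round(current["deposit_total"] / base_total, 1),
        "frequency_x": round(current["deposit_count"] / base_count, 1),
    }

# 18 months averaging ~$4,000 across ~2 payroll deposits per month
history = [{"deposit_total": 4000, "deposit_count": 2}] * 18
current = {"deposit_total": 12000, "deposit_count": 14}

print(deviation_summary(history, current))
# → {'magnitude_x': 3.0, 'frequency_x': 7.0}
```

The output maps directly to narrative language: a 3x deviation in magnitude, a 7x deviation in frequency, stated against an explicit baseline.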
Typology attribution — a concrete reference to which red-flag pattern from FinCEN guidance, BSA/AML Examination Manual sections, or established typology literature this activity maps to. "Structuring" is a start. "Structuring consistent with FinCEN Advisory FIN-2014-A005 indicators, specifically the use of multiple sub-threshold cash deposits across sequential business days" is the version that tells an examiner you know what pattern you're looking at.
Counterparty detail — for wire, ACH, and cross-border activity, identification of the counterparty, the counterparty's jurisdiction, any sanctions or adverse media context, and the nature of the relationship (expected, unexpected, new, dormant-then-active). Counterparty context is where the layering story gets told. A narrative without it is a narrative describing only the institution's own side of the transaction.
Network context — if the activity is part of a larger pattern across related accounts or related customers, the narrative needs to say so and name the related parties. This is the element most frequently missing from community-bank SARs. It requires a system capable of identifying relationship graphs, which rule-based engines generally do not produce.
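At its simplest, the relationship graph is just edges between customer IDs and a traversal over them. A minimal sketch, assuming edges have already been derived from shared attributes (addresses, phone numbers, common counterparties, account signatories — the edge sources and IDs here are hypothetical):

```python
from collections import deque

def related_parties(edges, start):
    """Breadth-first walk over a related-party graph.

    edges: dict mapping a customer ID to the IDs it is linked to.
    Returns every party reachable from `start`, excluding `start` itself.
    """
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in edges.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen - {start}

edges = {
    "CUST-001": ["CUST-014"],              # shared mailing address
    "CUST-014": ["CUST-001", "CUST-032"],  # common counterparty
}
print(sorted(related_parties(edges, "CUST-001")))
# → ['CUST-014', 'CUST-032']
```

The hard part is not the traversal; it is building the edges in the first place, which is the capability rule-based engines generally lack.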
Evidence trail — explicit reference to the underlying records. Transaction identifiers. KYC refresh dates. Prior SAR filings on the same customer. Adverse media hits with dates and sources. CTR filings. The point is not to clutter the narrative; it is to demonstrate that the analysis is reconstructable. An examiner who wants to pull the tape can pull the tape.
Disposition logic — what the institution did with the account, and why that action was proportional to the assessed risk. Filing a SAR is one disposition. Enhanced due diligence is another. Account restriction is a third. Exit is a fourth. An examiner wants to see that the institution's action matches its own assessment of the risk. A narrative describing serious layering activity with a disposition of "continue standard monitoring" will be probed.
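The internal-consistency check examiners run in their heads — does the action match the assessed risk? — can be made mechanical. A sketch assuming a three-tier internal risk rating; the tiers and allowed actions below are illustrative policy choices, not a regulatory standard:

```python
# Illustrative policy: which dispositions are proportional to which risk tier.
ALLOWED = {
    "low":    {"continue_standard_monitoring"},
    "medium": {"continue_standard_monitoring", "enhanced_due_diligence"},
    "high":   {"enhanced_due_diligence", "account_restriction", "exit"},
}

def disposition_consistent(risk_tier, disposition):
    """Flag the inconsistency examiners probe: a narrative that assesses
    high risk but concludes with no account-level action."""
    return disposition in ALLOWED[risk_tier]

print(disposition_consistent("high", "continue_standard_monitoring"))  # False
print(disposition_consistent("high", "account_restriction"))           # True
```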
The structural problem: alerts versus cases
Step back from the narrative mechanics and the underlying issue becomes visible. SAR narrative quality is a downstream symptom of an upstream problem. If the monitoring system delivers alerts — atomic facts, unaccompanied by context — then the BSA officer must construct the narrative context manually for every filing. The narrative quality is bounded by how much manual reconstruction the officer has time to do.
If the monitoring system delivers cases — investigation packages that include the customer context, typology attribution, counterparty enrichment, network graph, adverse media links, prior-activity timeline, and disposition-ready evidence — then the BSA officer is working downstream of a system that has already done the context reconstruction. The narrative is being written on top of complete information, not assembled from scratch under time pressure.
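The alert-versus-case distinction can be made concrete as two data shapes. A sketch with field names invented for illustration — the point is the difference in what arrives on the investigator's desk, not any particular vendor's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    """What an alert system delivers: an atomic fact, no context."""
    transaction_id: str
    rule_triggered: str
    amount: float

@dataclass
class Case:
    """What a case system delivers: the same fact, plus the context the
    investigator would otherwise reconstruct by hand."""
    alert: Alert
    customer_baseline: dict                              # expected vs. observed activity
    typology: str                                        # e.g. a FinCEN red-flag reference
    counterparties: list = field(default_factory=list)   # enriched counterparty records
    related_accounts: list = field(default_factory=list) # network context
    evidence_refs: list = field(default_factory=list)    # txn IDs, KYC dates, prior SARs
    adverse_media: list = field(default_factory=list)    # hits with dates and sources
```

Every field on `Case` beyond `alert` is work that, in an alert-only shop, comes out of the BSA officer's week.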
This is the architectural difference that most strongly predicts SAR quality at examination. Not the skill of the BSA officer. Not the rigor of the institution's policies. The structural question of whether the monitoring output is an alert or a case.
What to ask your monitoring vendor
If you run BSA for a community bank or credit union and you want to stress-test your current infrastructure, I would suggest three questions to put to your vendor.
First: for any given flagged transaction, what context does your system automatically surface alongside the flag? If the answer is "the transaction details," you are working with an alert system and your investigators are doing manual context reconstruction. If the answer includes customer baseline, counterparty enrichment, related-party signals, and prior activity, you are working with a case system and your investigators are working downstream of completed analysis.
Second: when a flag fires, can your system explain — in terms that would hold up in an exam — why this transaction was surfaced and not another? If the explanation is "rule 47 matched," you have an interpretable but thin explanation. If the explanation is a ranked feature attribution showing which behavioral signals contributed how much to the risk score, you have explainability that translates directly into narrative material.
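The difference between "rule 47 matched" and narrative-ready explainability can be shown in miniature. A sketch assuming the system exposes per-signal contributions to a risk score — the signal names and weights here are invented:

```python
def ranked_attribution(contributions):
    """Turn per-signal score contributions into narrative-ready lines,
    ranked by how much each signal drove the flag.

    contributions: dict of behavioral signal -> contribution to the
    risk score (illustrative names and values).
    """
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return [f"{name}: {val / total:.0%} of score" for name, val in ranked]

signals = {
    "cash_deposit_frequency_shift": 0.45,
    "sub_threshold_amount_clustering": 0.35,
    "new_counterparty_jurisdiction": 0.20,
}
for line in ranked_attribution(signals):
    print(line)
# → cash_deposit_frequency_shift: 45% of score
# → sub_threshold_amount_clustering: 35% of score
# → new_counterparty_jurisdiction: 20% of score
```

Each ranked line is a sentence waiting to be written into the narrative, which is what "explainability that translates directly into narrative material" means in practice.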
Third: does your system's output make the six structural narrative elements — customer context, typology attribution, counterparty detail, network context, evidence trail, disposition logic — faster to produce, or does your team still assemble each one by hand?
The answers to those three questions will tell you more about your likely examination trajectory than any vendor's marketing materials. And in a year when the enforcement environment has shifted, those answers are worth knowing.