Most organizations believe they do security. They have antivirus software, an IT guy, and a vague sense that passwords ought to be strong. That isn't security maturity; that's security by instinct. The way to tell the difference is to ask whether your practices are documented, repeatable, and able to hold up in front of someone who wasn't around when you developed them.
Start with your data, not your tools
First, you need to determine what you’re protecting. It seems obvious, but it’s often overlooked.
Compile a list of every system, application, and business process that touches business data. Look at everything as a whole, then divide it into two categories. The first is general business data such as financial reports, HR records, or internal communications. The second is Federal Contract Information (FCI) or Controlled Unclassified Information (CUI) handled under DoD contracts. The government imposes stricter security requirements on the latter; the requirements for protecting CUI come from NIST SP 800-171.
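The inventory-and-categorize step above can be sketched as a simple data structure. This is a minimal illustration, not a real classification method: the asset names and keyword markers are hypothetical, and in practice the FCI/CUI determination comes from your contracts and data-flow analysis, not string matching.

```python
from dataclasses import dataclass

# The two categories from the text: general business data vs. FCI/CUI,
# which falls under the stricter NIST SP 800-171 requirements.
GENERAL = "general"
FCI_CUI = "fci_cui"

@dataclass
class Asset:
    name: str              # system, application, or business process
    data_types: list[str]  # kinds of data it touches
    category: str = GENERAL

# Hypothetical marker list for illustration only.
CUI_MARKERS = {"cui", "fci", "export controlled"}

def categorize(asset: Asset) -> Asset:
    """Flag an asset as FCI/CUI if any of its data types match a marker."""
    if any(m in dt.lower() for dt in asset.data_types for m in CUI_MARKERS):
        asset.category = FCI_CUI
    return asset

inventory = [
    categorize(Asset("Payroll system", ["HR data", "financial reports"])),
    categorize(Asset("Engineering file share", ["CUI drawings"])),
]
```

Even a rough list like this makes the next step possible: if you can't say which bucket an asset falls into, that asset is itself a finding.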
If you’re not sure which category your business data falls into, you’re already behind the curve.
Evaluate institutionalization, not just implementation
This is where most self-assessments go off track. Teams confirm that they're _doing_ something – running multi-factor authentication, patching systems monthly, maintaining access controls – without asking whether those activities would persist without the people currently performing them. Institutionalization measures whether your security practices exist as defined, repeatable processes, not just as activities. Are you relying on behaviors that somebody could "shut off" because they aren't anchored in policy, procedure, role, or responsibility?
Is MFA in place because one IT person turned it on and hasn't yet been frustrated enough to disable it? That's not institutional. Is it in place because you can produce a signed, dated policy that names the current owner and operator of the solution and the date of its next scheduled review? That's institutional.
For every practice, ask yourself: "If this vendor, system, or person left tomorrow, would the control stay on?" The answer has to be yes, or it's not a mature control. It's a dependency.
CMMC Level 1, the starting tier for defense contractors, comprises 17 basic cyber hygiene practices. But even with those basics, the model doesn't just expect the controls to exist. It expects that you've defined them, to whatever degree their simplicity warrants, and that you can provide evidence they're being followed.
Use a red-yellow-green scoring approach
Once you’ve gone through your control requirements, a straightforward three-tier scoring system helps you do something with the results rather than just reporting them.
If the control is fully implemented, documented, and you can readily produce evidence, mark it Green. If the control exists but is undocumented or not consistently followed, mark it Yellow. If it's broken, missing, or you can't tell, mark it Red.
Most organizations have far more yellow than they expect. Yellow is where informal coverage fails a formal assessment: if a control isn't documented, an assessor will treat it as though it doesn't exist. When you're prioritizing remediation work, treat yellow and red equally.
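The three-tier rubric is mechanical enough to write down. A minimal sketch, with the "yellow counts as red" rule from the text baked into the remediation check (the function names here are hypothetical):

```python
from enum import Enum

class Score(Enum):
    GREEN = "green"    # implemented, documented, evidence on hand
    YELLOW = "yellow"  # exists, but undocumented or inconsistently followed
    RED = "red"        # broken, missing, or status unknown

def score_control(implemented: bool, documented: bool, evidence: bool) -> Score:
    """Apply the three-tier rubric to a single control."""
    if implemented and documented and evidence:
        return Score.GREEN
    if implemented:
        return Score.YELLOW
    return Score.RED

def needs_remediation(score: Score) -> bool:
    """For prioritization purposes, yellow counts the same as red."""
    return score is not Score.GREEN
```

Running every control through the same function forces the honest answer: "we do that, but nobody wrote it down" lands in Yellow, not Green.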
From the red and yellow results, prioritize your remediation work and assign it to a Plan of Action and Milestones (POA&M in compliance-speak). For each item, set a priority, assign an owner, and set a target date. Initially, you might group items by ease of implementation versus risk exposure: some are quick wins with broad coverage, while others will need sustained resources over time. Remember, you can't close every gap at once. It's a sequence.
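The sequencing logic described above can be sketched as a sort. This is one plausible ordering (highest risk first, quickest wins breaking ties), not a prescribed CMMC method; the items, owners, and 1-5 scales are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PoamItem:
    control: str
    owner: str
    target_date: date
    effort: int  # 1 (quick win) .. 5 (major project); assumed scale
    risk: int    # 1 (low exposure) .. 5 (high exposure); assumed scale

def prioritize(items: list[PoamItem]) -> list[PoamItem]:
    """Sequence remediation: highest risk first, quickest wins break ties."""
    return sorted(items, key=lambda i: (-i.risk, i.effort))

poam = prioritize([
    PoamItem("Document MFA policy", "IT lead", date(2026, 3, 1), effort=1, risk=4),
    PoamItem("Deploy log aggregation", "SecOps", date(2026, 6, 1), effort=4, risk=4),
    PoamItem("Update login banner text", "IT lead", date(2026, 2, 1), effort=1, risk=1),
])
```

Whatever ranking scheme you choose, the point is that it's explicit: anyone reading the POA&M can see why item one comes before item two.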
Collect evidence as you go
The most common error in self-assessment is treating it as something you pass or fail. You check "Met" against a requirement and that's the end of it. Then a third-party auditor requests the supporting evidence, and you realize you don't have any.
For every control you check as met, capture the proof right away: screenshots of MFA in use, exported access logs, a dated extract from your System Security Plan, a copy of the incident response plan with its last review date visible.
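Evidence capture can be made routine with a small script. A minimal sketch of the idea: hash each artifact and record it in a dated manifest, so you can later show the artifact existed, and hasn't changed, since the self-assessment. The function name, file paths, and control IDs here are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def capture_evidence(artifact: Path, control_id: str, manifest: Path) -> dict:
    """Append a dated, hashed entry for one evidence artifact to a manifest.

    The SHA-256 hash ties the manifest entry to the exact file contents
    captured at assessment time.
    """
    entry = {
        "control": control_id,
        "file": str(artifact),
        "sha256": hashlib.sha256(artifact.read_bytes()).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    entries = json.loads(manifest.read_text()) if manifest.exists() else []
    entries.append(entry)
    manifest.write_text(json.dumps(entries, indent=2))
    return entry
```

Dropping one line into the manifest at the moment you mark a control "Met" costs seconds; reconstructing the same evidence a year later, under audit pressure, costs far more.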
This documentation habit does two things for you. It prepares you for external audits without the panic. And it forces you to confront the gap between how compliant you believe you are and how much of that you can actually prove.
Companies with higher overall security maturity not only face a lower breach risk, they also save money. According to the 2023 IBM Cost of a Data Breach Report, organizations with extensive use of security AI and automation saved more than $1.5 million per breach compared with those making little or no use of it.
The case for taking these risk-reduction and maturity steps is becoming overwhelming. A well-documented risk assessment and a current System Security Plan are the two artifacts that do much of the heavy lifting by tying everything together. The SSP, in particular, is a living document. It shouldn't be written once and shelved. A quick rule of thumb: every time a control changes, the SSP changes with it.
From doing security to proving it
The shift required here is cultural, not technological. Organizations that pass formal audits don't run better technology than organizations that fail them; they keep better records. Self-assessment is the rehearsal in which you find out which kind of organization you are, and make adjustments while there's still time.

