Opinion: The Case for Security-Embedded Architecture with cATO
cATO is transforming federal cybersecurity by embedding security into system design from the start, enabling faster, more secure innovation across agencies.
Continuous Authorization to Operate (cATO) represents a fundamental shift from treating security as an overlay to embedding it as a set of functional requirements from project inception. Despite its name focusing on authorization, cATO’s primary value lies in establishing security-embedded, shifted-left architectures where security requirements are treated as non-negotiable functional specifications that must be designed, integrated, tested and validated with the same rigor as performance or availability requirements.
Traditional development methodologies treat security as a validation step performed after core functional development is complete. This creates systems where security controls are bolt-on additions that may satisfy compliance documentation requirements but fail to provide genuine threat protection. Security requirements become optional considerations that teams can defer or compromise under schedule or budget pressure, resulting in systems that achieve compliance certification while remaining vulnerable to real-world attacks.
cATO closes this gap by requiring organizations to embed security requirements as core functional specifications from project initiation. Security becomes a design constraint that shapes architecture decisions and operational practices rather than a validation checkpoint applied to completed systems. When security requirements are handled as functional requirements, development teams verify that security features operate as specified using the same testing and validation methods applied to other system functions.
The framework enables organizations to build genuinely secure systems by embedding security as functional requirements throughout the development lifecycle, then leverage the operational visibility from those security implementations to support continuous authorization decisions. This approach produces systems that are secure by design rather than compliant by documentation, with authorization maintenance emerging naturally from active security management rather than periodic paperwork exercises.
Security Requirements as Functional Specifications
Security requirements must be defined at project initiation as functional specifications that development teams treat with the same priority and rigor as performance, scalability or user interface requirements. These requirements cannot be optional or deferrable; they must be integrated into project planning, resource allocation and acceptance criteria as core system functionality. Security requirements should specify measurable outcomes such as “prevent deployment of containers with critical vulnerabilities” or “detect and alert on unauthorized privilege escalation within 30 seconds” rather than procedural activities like “conduct security reviews.”
Development teams must design system architectures that can satisfy security requirements through technical implementation rather than administrative processes. A requirement to “maintain data confidentiality” drives technical decisions about encryption implementation, key management and access control mechanisms rather than documentation about data handling policies. Security requirements must be testable through automated verification that can validate security functionality works as specified under operational conditions.
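As a minimal sketch of what such automated verification can look like, the requirement “prevent deployment of containers with critical vulnerabilities” could be checked in a pipeline step. The scan-report file name and JSON structure below are illustrative assumptions, not the output format of any particular scanner.

```python
"""Sketch: the requirement "prevent deployment of containers with critical
vulnerabilities" expressed as an automated pipeline check.
The report path and JSON structure are illustrative assumptions."""
import json
import sys

MAX_CRITICAL = 0  # measurable acceptance criterion from the requirement


def count_critical_findings(report_path: str) -> int:
    """Count critical findings in a (hypothetical) image scan report."""
    with open(report_path) as f:
        report = json.load(f)
    return sum(1 for finding in report.get("vulnerabilities", [])
               if finding.get("severity") == "CRITICAL")


if __name__ == "__main__":
    critical = count_critical_findings("image-scan-report.json")
    if critical > MAX_CRITICAL:
        # Failing this step blocks the deployment, enforcing the requirement
        # at the point of introduction rather than in a later review.
        print(f"FAIL: {critical} critical vulnerabilities exceed limit of {MAX_CRITICAL}")
        sys.exit(1)
    print("PASS: container meets the critical-vulnerability requirement")
```

A failed check stops the pipeline, so the requirement is enforced rather than merely documented.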
Integrating security requirements requires development teams to understand the threat models and attack patterns that could affect their specific system context. Generic security checklists cannot substitute for requirements analysis that considers the actual threat landscape, system exposure and potential attack vectors relevant to the specific operational environment. Requirements must address both preventive security measures that reduce attack surfaces and detective security measures that identify threats when prevention fails.
This integration creates requirements specifications that reflect the actual complexity of building secure systems rather than treating security as an independent overlay. Security requirements that conflict with performance requirements can be resolved through architectural approaches that satisfy both constraints, but only when both requirement types are analyzed together during system design. Separate security requirements processes typically produce specifications that assume unlimited system resources and fail to account for operational constraints that affect security implementation effectiveness.
Each requirement includes specific authorization checks, measurable tolerances and time limits linked to pass, degraded or fail outcomes. Tests simulate real threats, so meeting a requirement proves actual operational protection, not just paperwork compliance.
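A minimal sketch of a requirement carrying its own tolerance, time limit and outcome states follows; the field names and the five-minute grace period are illustrative assumptions rather than prescribed values.

```python
"""Sketch of a requirement record with a measurable tolerance, a time limit
and pass/degraded/fail outcomes. Field names are illustrative assumptions."""
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    PASS = "pass"
    DEGRADED = "degraded"
    FAIL = "fail"


@dataclass
class SecurityRequirement:
    identifier: str
    description: str
    tolerance: float      # maximum acceptable measured value
    grace_seconds: int    # how long a breach may persist before it is a failure

    def evaluate(self, measured: float, seconds_out_of_tolerance: int) -> Outcome:
        """Map a measurement and its duration out of tolerance to an outcome."""
        if measured <= self.tolerance:
            return Outcome.PASS
        if seconds_out_of_tolerance <= self.grace_seconds:
            return Outcome.DEGRADED
        return Outcome.FAIL


# Example: "detect and alert on unauthorized privilege escalation within 30 seconds"
alerting = SecurityRequirement(
    identifier="DET-01",
    description="Alert on unauthorized privilege escalation within 30 seconds",
    tolerance=30.0,      # seconds from event to alert
    grace_seconds=300,   # assumed grace period before degradation becomes failure
)
print(alerting.evaluate(measured=42.0, seconds_out_of_tolerance=120))  # Outcome.DEGRADED
```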
Security-Embedded Development Architecture
DevSecOps pipelines enforce security requirements through automation, reducing the organizational friction and human failure points that create security risk during execution. Traditional security validation relies on human judgment and organizational processes that consistently fail when schedule pressures mount or resource constraints limit available options. Pipeline-enforced security requirements operate within established development team structures, ensuring consistent adherence to security standards throughout the development process.
Source code analysis tools integrated into development workflows prevent security defects from accumulating in system codebases by blocking problematic code at the point of introduction. This approach addresses security deficiencies before they compound, avoiding the exponential remediation costs that occur when security issues are discovered after integration with other system components. Organizations that defer security defect resolution typically discover that fixing security issues requires costly remediation efforts that could affect multiple system components.
Systems that enforce security baselines through automated validation eliminate the configuration drift that undermines information systems over time. Infrastructure configurations that deviate from security requirements often do so through incremental changes that individually appear benign but collectively create exploitable attack surfaces. Automated enforcement maintains configuration consistency that human oversight cannot achieve at scale, where infrastructure changes occur frequently and through multiple operational channels.
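As a minimal sketch, assuming an illustrative baseline of three settings, drift detection can be as simple as comparing observed configuration to required values; an enforcement system would block the change or remediate rather than merely report.

```python
"""Sketch: detect drift from a security baseline by comparing the observed
configuration of a resource to required settings. The baseline keys and
resource structure are illustrative assumptions."""

BASELINE = {
    "encryption_at_rest": True,
    "public_network_access": False,
    "logging_enabled": True,
}


def find_drift(observed: dict) -> dict:
    """Return the settings that deviate from the baseline."""
    return {
        key: {"required": required, "observed": observed.get(key)}
        for key, required in BASELINE.items()
        if observed.get(key) != required
    }


# A resource whose access setting drifted through an individually "benign" change
resource_config = {
    "encryption_at_rest": True,
    "public_network_access": True,   # drift: creates an exploitable attack surface
    "logging_enabled": True,
}

drift = find_drift(resource_config)
if drift:
    # In an enforcement system this result would block the change or trigger
    # automated remediation rather than just printing a report.
    print("Configuration drift detected:", drift)
```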
Container and deployment security enforcement prevents vulnerable components from reaching production environments where security remediation becomes operationally disruptive and potentially service-affecting. Post-deployment security remediation often requires service interruptions, rollback procedures and emergency change processes that create operational risk beyond the original security vulnerability. Pre-deployment security validation enables teams to address security issues through normal development processes that do not affect operational stability.
Authorization only works when the organization defines what the frameworks actually mean in the pipeline. RMF, 800-53 and the SSDF do not enforce themselves. They have to be translated into specific conditions that are enforced across workflows including code promotion, infrastructure deployment, configuration management and access control. These conditions must be tied to telemetry, evaluated continuously and reflected directly in the system’s authorization state. When something moves out of tolerance, that state must update immediately. The change should be visible without interpretation and backed by traceable evidence.
Management controls required to run the program responsibly should be inherited from the program management office responsible for the system. Services like logging, authentication and system configuration should be implemented once at the enterprise level and exposed as secured application services.
The development team’s responsibility should be business/mission logic, not rebuilding platform controls. As more of these services are delivered through secured infrastructure, the set of controls that need to be implemented directly in the application becomes smaller. In this model, security is already present in the environment, and the system inherits that operational, management and technical protection.
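A minimal sketch of this translation, using three NIST SP 800-53 control identifiers, follows; the condition wording, telemetry sources and inheritance assignments are illustrative assumptions about one possible split between application and platform responsibilities.

```python
"""Sketch: framework controls translated into enforceable conditions, with
platform-provided controls marked as inherited. Identifiers are from
NIST SP 800-53; everything else is an illustrative assumption."""

CONTROL_CONDITIONS = [
    {
        "control": "SI-2",   # flaw remediation
        "condition": "no critical vulnerabilities on deployed images",
        "telemetry": "image-scan results from the deployment pipeline",
        "implemented_by": "application pipeline",
    },
    {
        "control": "AU-2",   # event logging
        "condition": "all services emit logs to the central logging service",
        "telemetry": "log-ingestion health checks",
        "implemented_by": "enterprise platform (inherited)",
    },
    {
        "control": "IA-2",   # identification and authentication
        "condition": "privileged identities authenticate with MFA",
        "telemetry": "identity-provider sign-in records",
        "implemented_by": "enterprise platform (inherited)",
    },
]

# The application team implements only the conditions it directly owns;
# inherited conditions are still evaluated, but against platform telemetry.
app_owned = [c for c in CONTROL_CONDITIONS if "inherited" not in c["implemented_by"]]
print(f"{len(app_owned)} of {len(CONTROL_CONDITIONS)} conditions owned by the application team")
```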
Testing and Validation of Security Functionality
Security validation is essential because security defects create vulnerabilities that do not affect normal system operation until exploited by attackers. Unlike functional defects that typically prevent systems from performing intended operations, security defects allow systems to function normally while creating attack vectors that bypass intended access controls or data protections. Without testing, organizations discover these vulnerabilities only when attacks have succeeded, often after significant damage has occurred.
Prior to full deployment, stress validation exercises security behaviors under load and adversarial scenarios, ensuring that results reflect genuine attack conditions rather than component correctness alone. Upon deployment, continuous monitoring begins and immediately applies those same standards, starting from the first production transaction. Telemetry is aligned with clearly defined pass/fail tolerances within specified timeframes, allowing authorization status to adjust dynamically, either confirmed or contingent, in response to evolving conditions.
Automated security testing provides the only scalable approach to validating security functionality across the rapid deployment cycles that DevSecOps enables. Manual security testing cannot keep pace with automated deployment frequencies, creating gaps in security validation that grow as deployment velocity increases. Security test automation also eliminates the human inconsistency that affects manual security assessments, ensuring that security validation maintains consistent standards across all deployment cycles. Once a control is validated with decision traces and test artifacts in one environment, the results are published so other teams can avoid duplicating assessments for the same behavior.
Integration testing becomes critical for security validation because security controls often depend on coordination between system components that function correctly in isolation but fail when integrated. Authentication systems that work correctly with individual applications may create bypass opportunities when applications interact with shared resources. Cross-system security testing identifies these integration vulnerabilities that component-level testing cannot detect. The resulting validation packages can then be reused across programs.
Performance testing validates that security implementations operate within acceptable operational parameters because security controls that degrade system performance create operational pressures to disable or circumvent security protections. Security implementations that significantly impact system response times or resource utilization face inevitable operational compromise as teams prioritize system availability over security protections. Performance validation ensures that security requirements can be satisfied sustainably within operational constraints.
Operational Security Visibility and Active Management
Operational security visibility becomes critical for continuous authorization because authorization officials need real-time understanding of whether security implementations continue to function as designed in production environments. Traditional authorization approaches rely on point-in-time assessments that validate security control configuration but cannot demonstrate ongoing security effectiveness under operational conditions. Authorization officials making continuous authorization decisions require evidence that security measures embedded during development are actively protecting systems against current threats.
Security-embedded development architectures generate operational telemetry that directly supports authorization maintenance by producing evidence of security control effectiveness rather than security control presence. When organizations implement security requirements as functional capabilities, those security implementations generate operational data showing whether security functions are working correctly under production conditions. Vulnerability prevention measures produce metrics about threats blocked, access control implementations generate logs showing authorization decisions and configuration enforcement systems provide evidence of baseline maintenance.
This operational evidence addresses the fundamental gap in traditional authorization approaches where authorization officials must make ongoing risk decisions based on historical security assessments that may not reflect current system security posture. Security implementations that satisfy requirements at deployment time may degrade over time due to configuration drift, software updates, or changing threat conditions. Continuous authorization requires operational evidence that security implementations continue to satisfy their original requirements under current operational and threat conditions.
The integration between development-time security implementation and production-time operational visibility enables authorization officials to evaluate whether security architectures are performing as intended rather than relying on documentation about how security architectures should perform. When security requirements are embedded as functional specifications and tested through automated validation, operational systems can demonstrate that those security functions continue to operate correctly through the same monitoring approaches used for other system functionality. This creates authorization evidence that reflects actual security capability rather than administrative compliance with security procedures.
System Risk Monitoring for Continuous Authorization
cATO treats authorization as a continuous operational requirement, maintained in production through ongoing evidence collection and analysis, rather than as a one-time milestone at project completion with periodic reviews thereafter. The system is authorized to operate only while telemetry data indicates compliance with established risk thresholds. In this model, executives address IT system risks by overseeing tactical responses to signals and telemetry, ensuring that teams are held accountable for maintaining effective controls under actual operational and threat conditions. Compliance is achieved through established operational practices, in which collected telemetry and implemented controls verify conformance with regulatory requirements.
Controls are assessed against established thresholds to ensure the system maintains its authorized risk posture. These evaluations are consistently applied throughout all releases and changes, preventing unsafe conditions from progressing through pipelines or remaining in production. Telemetry must provide definitive information regarding whether the system remains safe to operate within the accepted risk parameters at any given moment. Tool outputs serve as decision support only when policies explicitly correlate indicators with authorization statements, establishing clear thresholds and defined timeframes. Controls are evaluated as meeting requirements, being degraded or having failed. If degradation or failure impacts authorization, the status should be updated immediately to enable leadership to respond efficiently and without ambiguity.
Static summaries disrupt the operational decision-making process by introducing outdated information and removing essential timing context. Transitioning to continuously evaluated indicators enhances alignment between control performance and executive awareness. This approach enables leadership to observe posture adjustments in real-time, initiate remediation during the early stages of an issue and systematically document evidence of system restoration within standard operational procedures, eliminating reliance on separate artifact generation processes.
Implementation associates rules with measurements in policy and applies those rules within pipelines and monitoring systems. If a policy specifies requirements such as no exploitable vulnerabilities on internet-facing services and multifactor authentication for privileged identities, the monitoring layer continuously checks these conditions and reports safety status for that control. If a rule is not met, the system designates an exception state, initiates remediation actions and notifies governance that authorization depends on resolving the issue and providing verifiable evidence.
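As a minimal sketch, the two rules named above can be expressed as predicates over telemetry records; the record fields and sample values are illustrative assumptions.

```python
"""Sketch: two policy rules evaluated against telemetry records.
Record fields and sample values are illustrative assumptions."""

def internet_facing_services_clean(services: list[dict]) -> bool:
    """No exploitable vulnerabilities on internet-facing services."""
    return not any(
        s["internet_facing"] and s["exploitable_vulnerabilities"] > 0
        for s in services
    )


def privileged_identities_use_mfa(identities: list[dict]) -> bool:
    """Every privileged identity has multifactor authentication enforced."""
    return all(i["mfa_enforced"] for i in identities if i["privileged"])


services = [
    {"name": "public-api", "internet_facing": True, "exploitable_vulnerabilities": 1},
    {"name": "worker", "internet_facing": False, "exploitable_vulnerabilities": 0},
]
identities = [
    {"name": "ops-admin", "privileged": True, "mfa_enforced": True},
]

checks = {
    "no exploitable vulns on internet-facing services": internet_facing_services_clean(services),
    "MFA enforced for privileged identities": privileged_identities_use_mfa(identities),
}
for rule, safe in checks.items():
    # A failed rule would mark the control as an exception state and trigger
    # remediation, with authorization contingent on verifiable resolution.
    print(f"{rule}: {'safe' if safe else 'exception, remediation required'}")
```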
Runtime gating must operate as part of the DevSecOps workflow so that unsafe conditions are identified and blocked before promotion. Pipelines enforce this automatically by testing builds against policy and halting release when vulnerabilities, configuration drift or degraded service objectives breach thresholds. Anything that reaches production should already have cleared these gates, but conditions can still appear once the system is running. In production, telemetry provides the verification layer that detects drift, newly introduced exposures or failures in control performance. These signals are bound to the same thresholds used in pipelines, so authorization remains current with the live state of the system. When telemetry shows that conditions have moved outside tolerance, the authorization status shifts immediately, ensuring leadership decisions are based on the real posture of the system rather than static assumptions.
Executive visibility relies on a comprehensive operating picture that outlines current authorization assertions, associated tolerances and the present state, all supported by traceable telemetry. Dashboards should accurately reflect the real-time status of defenses rather than relying on retrospective narratives from previous reviews. As conditions evolve, the display must update accordingly. Upon completion of remediation, the underlying data confirming the restoration of a secure state is immediately linked to the event and accessible for audit.
Evidence collection must operate within the same data framework that drives operational decision-making. The telemetry used for authorization, including controls, configurations, identities and workloads, should feed directly into compliance records without duplication or secondary pipelines. Each event must retain time context and data integrity so that when reconstruction is required, it reflects what occurred in production. This produces a verifiable record of system behavior that shows how controls performed within the authorization boundaries over time. Compliance becomes the record of production security, not a separate process. Authorization remains tied to live conditions, and governance decisions are made from current operational data rather than retrospective reports.
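One way to preserve time context and integrity is to append each evidence event with a timestamp and a hash linking it to the previous record. The sketch below is illustrative; the event fields are assumptions, not a prescribed schema.

```python
"""Sketch: an authorization-evidence record that preserves time context and
integrity by chaining a hash of each event to the prior record.
Event fields are illustrative assumptions."""
import hashlib
import json
from datetime import datetime, timezone


def append_evidence(ledger: list[dict], event: dict) -> dict:
    """Append an event with a timestamp and a hash linking it to the prior record."""
    previous_hash = ledger[-1]["record_hash"] if ledger else "genesis"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "previous_hash": previous_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    ledger.append(record)
    return record


ledger: list[dict] = []
append_evidence(ledger, {"control": "configuration baseline", "status": "degraded"})
append_evidence(ledger, {"control": "configuration baseline", "status": "restored"})
# Tampering with the first record changes its hash and breaks the chain, so the
# ledger can be verified when production behavior must be reconstructed.
print(ledger[-1]["previous_hash"] == ledger[0]["record_hash"])  # True
```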
Real-Time Security Understanding Through Dashboard Integration
In this context, dashboards are active security tools that deliver real-time insights by converting telemetry into actionable awareness. Unlike static reports, they display live data on system status, control effectiveness and authorization posture, sourced directly from policy-enforcing systems. Dashboards should clearly present authorization actions, risk levels and exceptions in a single view to support timely, coordinated decisions by both leadership and technical teams.
An effective dashboard integrates telemetry from infrastructure, application and identity systems to form a complete picture of security behavior. Fragmented or tool-specific displays create blind spots that obscure patterns across interconnected environments. Correlating vulnerability data, access activity, configuration changes and workload telemetry against defined control thresholds enables continuous assessment of whether the system remains within authorized bounds. If a control exceeds tolerance, the dashboard must display the change immediately for prompt remediation. This alignment allows operational staff to correct issues while maintaining executive awareness of the system’s live risk posture.
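A minimal sketch of that correlation, with illustrative control names and statuses standing in for live telemetry, shows how per-control states roll up into a single authorization posture for the dashboard view.

```python
"""Sketch: roll per-control status into one authorization posture for a
dashboard. Control names and statuses are illustrative assumptions; in
practice each status is derived from live telemetry against thresholds."""

control_status = {
    "vulnerability management": "met",
    "access control": "met",
    "configuration baseline": "degraded",   # out of tolerance, within grace period
    "workload integrity": "met",
}


def authorization_posture(statuses: dict) -> str:
    """Summarize control statuses as a single posture statement."""
    if any(s == "failed" for s in statuses.values()):
        return "out of authorized bounds"
    if any(s == "degraded" for s in statuses.values()):
        return "within bounds, remediation in progress"
    return "within authorized bounds"


print(authorization_posture(control_status))  # within bounds, remediation in progress
```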
Metrics displayed on these dashboards must represent operational effectiveness rather than static compliance progress. They should demonstrate whether defensive measures are performing within established parameters, whether preventive and detective functions are sustaining baselines and whether remediation is occurring within approved timeframes. Presenting this data in a normalized, time-aligned manner ensures that what leadership sees reflects the actual state of protection in production. When remediation completes, the restored control state should automatically update the view, closing the loop between detection, response and verification without requiring manual artifact generation.
Real-time dashboards provide the visibility necessary to operate security as a continuous discipline. They replace the lag and subjectivity of report-based oversight with evidence-driven insight that shows when systems deviate from authorization conditions and how they are restored. This capability enables governance at production speed and ensures that executive decisions are based on validated data rather than historical summaries.
Integration with Risk Management Framework
Continuous authorization corrects the way organizations have traditionally interpreted the Risk Management Framework (RMF). The framework itself was never intended to be a paperwork exercise, but over time it has been reduced to static documentation and periodic reviews that describe security rather than enforce it. Continuous authorization restores the framework’s original purpose by tying risk decisions directly to live operational evidence. Security controls are regarded as ongoing obligations that require continuous monitoring and management, rather than being approached as a one-time documentation task.
Under this model, the framework’s structure remains intact, but its implementation shifts from administrative validation to operational enforcement. Security categorization still defines baseline protections, and control families still describe required behaviors, but each requirement is translated into a measurable, enforceable rule within pipelines and monitoring systems.
Controls are expressed through executable specifications such as automated policy checks, configuration validation and access control verification that run as part of normal system operation. This approach moves compliance from interpretation to verification by producing continuous proof of control effectiveness.
Assessments focus on live control behavior rather than historical assertions. Evidence originates from the same telemetry that supports daily operations, providing assessors and authorizing officials with continuous visibility into whether safeguards remain effective. When degradation or failure occurs, it appears immediately in the same dashboards used to track authorization status.
Risk evaluation becomes an operational function rather than a scheduled event, enabling corrective actions and governance decisions to occur as conditions change instead of after formal reporting cycles.
Authorization decisions are based on verifiable data showing the system’s current alignment with its approved risk posture. Officials no longer rely on summary reports or audit artifacts detached from reality. Instead, they act on validated evidence that reflects how the system is performing in real time.
Organizational Transformation for Security-Embedded Development
Achieving security-embedded architecture requires changing how organizations structure development and operational responsibilities. Security must be integrated into engineering practice as a design discipline, not an external checkpoint. Development teams must include security professionals who participate in architectural design, requirement analysis and implementation planning so that protection mechanisms shape the foundation of system design rather than being introduced after functionality is complete. Security validation cannot occur as a separate review once development is finished; it must exist within the same workflows, automation and acceptance criteria that govern all software delivery.
Leadership must treat security as a core component of mission delivery. Security functionality must receive the same resource allocation, development time and testing rigor as system performance or feature delivery. Security defects must be prioritized with the same urgency as operational outages or critical system bugs, because both directly affect mission availability and reliability. Program management structures must ensure that security implementation time and test coverage are accounted for in project schedules, eliminating the ability to defer security work under schedule or budget pressure.
Operations teams must also evolve from reactive responders to active participants in system assurance. Security monitoring and incident response must be embedded into normal operational procedures, not performed through detached functions with limited system visibility. Operational dashboards must incorporate security telemetry as part of standard situational awareness, allowing operations staff to identify and address issues with the same focus they apply to performance or availability events. This integration reduces response latency and ensures that remediation efforts align with the live authorization state of the system.
Cultural transformation is required to sustain this model. Security must become a shared responsibility between development and operations rather than a specialty owned by isolated teams. All participants must recognize that security requirements are technical specifications that define expected behavior and must be implemented, tested and maintained like any other system function. This shift replaces the compliance mindset with an engineering mindset focused on measurable system protection and resilience.
Performance Integration and Scaling Architecture
Security-embedded systems must maintain both operational performance and protection. Security cannot compete with mission delivery; it must be designed to support it. Performance validation must confirm that security mechanisms perform effectively under production load, ensuring that protection does not create operational bottlenecks that teams are pressured to bypass.
As architectures scale, security protections must scale with them. Security controls must maintain consistent enforcement and telemetry across distributed systems, regardless of where services or components operate. Expansion across multiple environments, platforms or regions cannot introduce variations in protection or monitoring fidelity. Uniform enforcement ensures that every system instance adheres to the same control standards and authorization thresholds, maintaining a coherent security posture even as infrastructure becomes more complex. Correlated visibility across systems and applications is needed to ensure that distributed controls operate as an integrated security system. Each control must inform and reinforce the others so that monitoring, analysis and authorization remain aligned under a single operational view.
Resource planning must reflect that security functionality is part of the mission system, not overhead. Computational resources, storage and bandwidth required for security enforcement and telemetry must be built into system capacity planning. Security must be provisioned as an operational requirement that enables sustained mission execution.
Conclusion
A security-embedded architecture enabled by cATO ensures that systems are verifiably secure during operation, rather than merely compliant on paper. By establishing security as a core functional requirement from the outset, protection is seamlessly integrated into system design, verification and ongoing operations. Continuous telemetry and embedded validation mechanisms support sustained authorization, replacing manual audits with real-time evidence of effective control performance.
This approach positions compliance as an inherent result of genuine security rather than as a distinct activity. By integrating protective measures throughout both development and operational processes, organizations sustain authorization via ongoing effectiveness. This enables security to support mission objectives while consistently fulfilling regulatory requirements through verifiable operational assurance.