Monday, August 11, 2025

Cybersecurity Architecture: Fundamentals of Confidentiality, Integrity, and Availability

The CIA Triad (Not spies — it’s a cybersecurity model) 

Confidentiality 

Confidentiality ensures that sensitive information is only accessible to authorized users. It's about keeping secrets safe.

This is primarily achieved through two methods:

  • Access Control: This involves a two-step process:

    • Authentication: Verifies that a user is who they claim to be (e.g., using a password, fingerprint, or Multi-Factor Authentication).

    • Authorization: Checks if the authenticated user has the necessary permissions to access a specific resource (e.g., a manager can access payroll data, but an intern cannot).

  • Encryption: This scrambles data so that it's unreadable to anyone without a special key. Even if an unauthorized person intercepts the data, all they'll see is a meaningless jumble of characters. This is often done using a shared secret key (Symmetric Encryption), where both the sender and receiver hold the same key to lock and unlock the message.

  • Example: Like locking your diary and only giving the key to your best friend.
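
The two methods above can be sketched in code. This is a minimal, illustrative Python sketch, not a production design: the user names, roles, and passwords are invented for the demo, and the XOR cipher is a toy stand-in for a real symmetric algorithm such as AES.

```python
import hashlib
import hmac

# Hypothetical user store (names, salts, and roles are made up for the demo).
USERS = {
    "manager": {"salt": b"s1", "roles": {"payroll:read"}},
    "intern":  {"salt": b"s2", "roles": set()},
}

def _hash(password: str, salt: bytes) -> bytes:
    # Derive a password hash; PBKDF2 slows down brute-force guessing.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# Pre-compute stored hashes for the demo users.
for name, pw in [("manager", "m-pass"), ("intern", "i-pass")]:
    USERS[name]["hash"] = _hash(pw, USERS[name]["salt"])

def authenticate(user: str, password: str) -> bool:
    """Step 1: verify the user is who they claim to be."""
    rec = USERS.get(user)
    if rec is None:
        return False
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(_hash(password, rec["salt"]), rec["hash"])

def authorize(user: str, permission: str) -> bool:
    """Step 2: check whether the authenticated user holds the permission."""
    return permission in USERS[user]["roles"]

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: the SAME key encrypts and decrypts.
    (Illustration only -- real systems use AES, never XOR.)"""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```

The key point: authentication always happens first, and authorization is a separate check per resource. The manager passes both steps for payroll data; the intern authenticates successfully but is still denied.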


Integrity 

Integrity guarantees that data has not been altered or tampered with. It's about ensuring data is trustworthy and "true to itself."

Integrity is maintained using cryptographic functions that can detect any changes to data.

  • Digital Signatures and Message Authentication Codes (MACs): These technologies create a unique digital "fingerprint" for a message or data. If even a single character is changed, the fingerprint will no longer match, instantly alerting the system that the data has been compromised.

  • Immutable Ledgers (e.g., Blockchains): Some systems, like blockchains, are designed so that once data is added, it cannot be changed or deleted. Any attempt to modify a record will be flagged and rejected by the system, ensuring the history of the data remains intact.

  • Example: Like sealing an envelope — if the seal is broken, you know someone tampered with it.


Availability 

Availability ensures that systems and data are accessible to authorized users whenever they need them. It's about keeping services up and running.

Among the most common threats to availability are Denial of Service (DoS) attacks, which aim to make a system unusable.

  • Denial of Service (DoS): A single attacker overwhelms a server with so much traffic or so many requests that it can no longer respond to legitimate users.

  • Distributed Denial of Service (DDoS): This is a more powerful version of a DoS attack. An attacker controls a network of compromised computers (a botnet) and commands them all to flood a target server at the same time. The sheer volume of traffic makes the server crash or become unresponsive.

  • SYN Flood: A specific type of DoS attack that exploits a normal network process (the TCP three-way handshake). The attacker sends a request to a server, which reserves resources for the connection, but never completes the handshake. By repeating this process thousands of times, the attacker can use up all the server's resources, preventing any new, legitimate connections.

  • Example: Like making sure the store doors are open for customers — and not blocked by pranksters piling shopping carts in the entrance.
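
The SYN flood mechanic can be modeled with a toy simulation. This is a conceptual sketch only: the class, backlog size, and method names are invented, and real TCP stacks behave far more subtly (timeouts, SYN cookies, etc.).

```python
from collections import deque

class TinyServer:
    """Toy model of a TCP listener with a fixed table of
    half-open connections (numbers are illustrative)."""

    def __init__(self, backlog: int = 128):
        self.backlog = backlog
        self.half_open = deque()

    def receive_syn(self, client: str) -> bool:
        """A SYN arrives: reserve a slot and (notionally) send SYN-ACK."""
        if len(self.half_open) >= self.backlog:
            return False  # table full: new connections are refused
        self.half_open.append(client)
        return True

    def receive_ack(self, client: str) -> None:
        """The final ACK completes the handshake and frees the slot."""
        self.half_open.remove(client)

server = TinyServer(backlog=128)

# The attacker sends SYNs but never the final ACK...
for i in range(128):
    server.receive_syn(f"spoofed-{i}")

# ...so a legitimate client is now locked out.
print(server.receive_syn("legit-user"))  # False
```

Defenses like SYN cookies exist precisely because the server commits resources at step two of the handshake while the attacker commits almost nothing.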


CIA Checklist for any IT project:

  1. Confidentiality: Can only the right people access sensitive info?

  2. Integrity: Will I know if the data changes unexpectedly?

  3. Availability: Will the system work for legitimate users when needed?

Cybersecurity Architecture: Five Principles to Follow

1. Five Foundational Security Principles

These five core principles are essential for building a robust cybersecurity architecture.

  1. Defense in Depth: This principle is about creating multiple layers of security so that no single security mechanism is solely responsible for protecting a system. A medieval castle, with its thick walls, moat, and drawbridge, is the classic analogy. In a modern IT example, this means combining Multi-Factor Authentication (MFA), Endpoint Detection and Response (EDR), firewalls, and data encryption. If one layer fails, the others can still protect the system (fail-safe).

  2. Least Privilege: Users and systems should only be given the minimum level of access and permissions required to perform their specific tasks. This prevents unauthorized access and minimizes the potential damage from a compromised account. This principle also involves hardening systems by removing unnecessary services (like FTP or SSH on a web server) and changing default credentials. It actively combats "privilege creep," where users accumulate unnecessary permissions over time.

  3. Separation of Duties: This principle ensures that no single person has complete control over a critical process. It requires the involvement of at least two people to complete a task, making collusion necessary to compromise a system. For example, a person who requests access to a database cannot be the same person who approves that request. This prevents a single point of failure and makes it harder to conceal malicious activity.

  4. Secure by Design: Security should not be an afterthought or something "bolted on" at the end of a project. Instead, it must be integrated into every phase of development, from initial requirements gathering to design, coding, testing, and production. The responsibility for security belongs to everyone—designers, administrators, and users—but it must start with the initial design.

  5. Keep It Simple, Stupid (KISS): Security should be as simple as possible without sacrificing effectiveness. If security measures are too complex or cumbersome, users will find ways to bypass them, creating new vulnerabilities. The goal is to make it easier for "good guys" to do the right thing while making it difficult for "bad guys" to get in. For example, complex password rules can lead users to write down their passwords, defeating the purpose of the security rule.
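
Two of these principles, least privilege and separation of duties, are concrete enough to sketch in code. This is a hypothetical access-request workflow; the class and role names are invented for illustration, not taken from any real system.

```python
class AccessRequest:
    """A request by one user for one specific permission."""

    def __init__(self, requester: str, permission: str):
        self.requester = requester
        self.permission = permission
        self.approved_by = None

    def approve(self, approver: str) -> bool:
        # Separation of duties: the requester may never approve
        # their own request -- a second person must be involved.
        if approver == self.requester:
            return False
        self.approved_by = approver
        return True

def grant(request: AccessRequest, user_permissions: set) -> bool:
    """Least privilege: grant exactly the requested permission,
    nothing more, and only after a second person has approved."""
    if request.approved_by is None:
        return False
    user_permissions.add(request.permission)
    return True

req = AccessRequest("alice", "db:read")
print(req.approve("alice"))  # False: can't approve your own request
print(req.approve("bob"))    # True: a second person approves

perms = set()
grant(req, perms)
print(perms)                 # {'db:read'}
```

Note that `grant` adds only the single requested permission, which also counters the "privilege creep" mentioned above: each new permission requires its own request and its own second-person approval.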


2. The One Principle to Avoid: Security by Obscurity

This is the one security principle you should never rely on. Security by obscurity is the idea that a system is safe because its inner workings are unknown to an attacker. This is a false sense of security, as history has shown that relying on secrecy is a failed strategy.

The counterpoint is Kerckhoffs's Principle, which states that a cryptographic system should be secure even if everything about it is known except for the key. Truly secure systems like AES and RSA are "glass box" security: their algorithms are public and peer-reviewed, and their security relies solely on the secrecy of a private key, not on a secret method.