The news is packed with reports of massive data breaches. A quick survey of the last few years reveals a long list of verified breaches that exposed everything from credit card and Social Security numbers to work histories and the full contents of credit reports. A concerned observer might wonder just how hard these companies are trying to protect your data.
If you ask, you’ll find out that corporations are doing everything they can to protect their systems. A big attack costs a lot of money and degrades customer trust. And if there’s one thing corporations hate, it’s losing money.
However, securing computer systems is radically difficult. If you forget the mortar on a single brick when constructing a wall, your house probably won’t fall down. But in computer security, some Russian tween kicks down the wall and laughs at you for thinking your mortar job was up to snuff.
Security Is Complicated
The complexity of security systems can be surprising to the uninitiated. If only there were a big “Hackers Off” switch.
While no educated person imagines that security is so simple, the true level of complexity is hard to grasp. Millions of lines of code describe the thousands of functions that make up common frameworks, with each piece interacting with other, equally complex systems. Even when perfectly coded, each of these lines, functions, frameworks, and connections represents a potential security flaw, or “attack vector,” that can be exploited.
In security, the sum of the opportunities for compromise is called the “attack surface.” Even when applications are built with high security in mind, their attack surface is frighteningly enormous. Vexatious human nature prohibits any other reality.
Open Source Is a Risk
A huge majority of the internet runs on open-source software. This software is maintained by volunteers, with no formal rules for code review beyond those that they set themselves. Plans are based on the availability of unpaid members, their expertise, and their interests.
On one hand, open source is essential. We cannot have every coder re-securing the wheel. And truly, the people who build and maintain the mundane open-source projects that support the Internet are worthy of canonization. But the volunteer basis of many open-source projects means a blind expectation of security and interoperability is a serious risk.
While essential projects often get financial support from organizations like the EFF and Mozilla, that support is often insufficient when compared to the work required to keep a project of this scale and importance safe.
Look no further than Heartbleed, the enormous SSL vulnerability that existed for years before its painful and public discovery. That bug was so serious that system admins worldwide were roused (many rudely shaken from bed in the dead of night) to mitigate damage and patch systems within hours of the vulnerability becoming public knowledge. Never has a single vulnerability caused such widespread panic in the sysops community. Everything uses OpenSSL; surely it couldn’t be vulnerable. But it was, and unpatched systems may still be.
Since a huge majority of security systems run on open-source software, there’s always the possibility of a hidden but devastating bug lurking in your favorite open-source framework.
Hackers Only Need to Win Once
There’s an expression in the world of digital security: sysops need to win every time, but hackers only need to win once. A single chink in the armor is all that’s required for a database to be compromised. These hackers have all the time in the world, and they can wait until you get sloppy and bored with your job.
Sometimes that chink is the result of a developer taking a shortcut or being negligent. Sometimes it’s the result of an unknown zero-day attack. As careful as a developer might be, it’s a fool’s errand to imagine you’ve patched every security hole.
Declaring any lock “pick proof” is the quickest way to find out just how optimistic your lock designers were. Computer systems are no different. No system is unhackable; it just depends on the resources available. Even a major part of the mechanism underlying modern encryption, factoring a large number into its two prime factors, is not impossible to crack: it simply takes decades or more with today’s technology. Quantum computing could change all that, and we’d need new encryption systems ready to come online as soon as attackers got their hands on the new devices.
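To make the asymmetry concrete, here is a toy sketch in Python (the numbers are made up for illustration): multiplying two primes is instant, but recovering them by naive trial division takes work proportional to the square root of their product, which becomes astronomical at the 2048-bit sizes used in real cryptography.

```python
import math

def trial_factor(n):
    """Recover the two prime factors of a semiprime n by trial division.

    The loop runs up to sqrt(n), so doubling the bit length of n roughly
    squares the effort: fine for toy numbers like the one below, hopeless
    for a 2048-bit modulus (real attacks use smarter algorithms, but the
    cost is still superpolynomial on classical hardware).
    """
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    return None  # n is prime (or 1)

# A small semiprime falls instantly; its factors are 10007 and 10009.
print(trial_factor(100160063))
```

Shor’s algorithm on a sufficiently large quantum computer would collapse that square-root wall entirely, which is why post-quantum schemes are being standardized now.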
But even if your system is perfectly coded, it’s still built and maintained by humans. And as long as humans exist in the system at any stage, from design to execution to maintenance, the system can be subverted.
Balance Between Convenience and Security
Security is always a balance between convenience and safety. A perfectly protected system can never be used. The more secure a system, the harder it is to use. This is a fundamental truth of systems design.
Security operates by throwing up roadblocks that must be overcome, and no roadblock worth the name takes zero time to clear. Hence, the greater a system’s security, the less usable it is.
The humble password is the perfect example of these competing qualities in action.
You might say the longer the password, the harder it is to crack via brute force, so long passwords for everyone, right? But passwords are a classic double-edged sword: longer passwords are harder to crack, but they’re also harder to remember. Soon, frustrated users duplicate credentials across systems and write down their logins. I mean, what kind of shady character would look at the secret note under Debra’s work keyboard?
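Some back-of-envelope arithmetic shows why length matters so much. The guess rate below is a hypothetical figure for offline cracking of a fast hash, not a measurement of any particular attacker:

```python
# Assumed attacker capability: ten billion guesses per second,
# a plausible but hypothetical rate for offline attacks on fast hashes.
GUESSES_PER_SECOND = 10_000_000_000
ALPHABET = 62  # a-z, A-Z, 0-9

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def worst_case_years(length):
    """Years needed to exhaust the entire keyspace for a given length."""
    keyspace = ALPHABET ** length
    return keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for n in (8, 12, 16):
    print(f"{n} characters: about {worst_case_years(n):,.2g} years")
```

At that rate, an 8-character alphanumeric password falls in a matter of hours, while exhausting the 16-character keyspace would take far longer than the age of the universe; which is exactly why users, asked to remember sixteen random characters, reach for the sticky notes.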
Attackers don’t need to worry about the kind of password cracking you see in the movies. They just need to find a sticky note on the monitor of the Assistant (to the) Regional Manager and they will have all the access they want. Not that Dwight would ever be so careless.
We must balance security and convenience to keep our systems usable and safe. This means that every system, in one way or another, is insecure.
Hacking: Theft by Digital Deception
The defining characteristic of a hacking attack is system-level deception. In one way or another, the hacker tricks some part of a security system into permitting or even cooperating with their attacks despite its design. Whether an attacker poses as a contractor to gain physical access to a secure site or subverts an unpatched bug, it can be called a “hack.” The crucial element is the deception of the security system. From there, the core theft is executed, whether it’s theft of data, plans, money, secrets, or something else.
The sheer variety of hacking attacks seen in the wild is extraordinary. A summary like this one can offer only a broad overview rather than the details of individual attacks. But we will try to illustrate the broad strokes.
At the very least, an aspiring hacker must learn the system they are attacking.
From a practical perspective, an attacker could look for open ports on a server to find out which services a device is running, and at which version. Correlate that with known vulnerabilities, and, most of the time, you will find viable attack vectors.
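That reconnaissance step can be sketched in a few lines of Python; this is a minimal connect scan against placeholder ports (real tools like nmap also fingerprint services and grab version banners, which this sketch omits):

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        # connect_ex returns 0 when the TCP handshake succeeds,
        # and an error code (instead of raising) when it fails.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Hypothetical usage: probe a handful of well-known service ports.
print(scan_ports("127.0.0.1", [22, 80, 443, 8080]))
```

An open port plus a version string, cross-referenced against a public CVE database, is often all the map an attacker needs.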
You might even purchase an unpatched zero-day from some onion-scented corner of the internet so you can make your own door.
Many times, it’s a death by a thousand cuts: small vulnerabilities chained together to gain access. Once a vulnerability is discovered, it can be exploited.
If that doesn’t work, there are password attacks, pretexting, social engineering, credential forgery… the list of potential exploits grows in concert with human creativity. There is a chink in every plate of armor. It need only be found.