The Hackback Debate

The vulnerability of computer networks to hacking grows more troubling every year. No network is safe, and hacking has evolved from an obscure hobby to a major national security concern. Cybercrime has cost consumers and banks billions of dollars. Yet few cyberspies or cybercriminals have been caught and punished. Law enforcement is overwhelmed both by the number of attacks and by the technical unfamiliarity of the crimes.

Can the victims of hacking take more action to protect themselves? Can they hack back and mete out their own justice? The Computer Fraud and Abuse Act (CFAA) has traditionally been seen as making most forms of counterhacking unlawful. But some lawyers have recently questioned this view. Some of the most interesting exchanges on the legality of hacking back have occurred as dueling posts on the Volokh Conspiracy. In the interest of making the exchanges conveniently available, they are collected here in a single document.

The debaters are:

Stewart Baker, a former official at the National Security Agency and the Department of Homeland Security, and a partner at Steptoe & Johnson with a large cybersecurity practice, makes the policy case for counterhacking and challenges the traditional view of what remedies are authorized by the language of the CFAA.

Orin Kerr, Fred C. Stevenson Research Professor of Law at George Washington University Law School, a former computer crimes prosecutor, and one of the most respected computer crime scholars, defends the traditional view of the Act against both Stewart Baker and Eugene Volokh.

Eugene Volokh, Gary T. Schwartz Professor of Law at UCLA School of Law, founder of the Volokh Conspiracy, and a sophisticated technology lawyer, presents a challenge grounded in common law understandings of trespass and tort.

Baker-Kerr

RATs and Poison: The Policy Side of Counterhacking

Stewart Baker

Good news for network security: the tools attackers use to control compromised computers are full of security holes. Undergraduate students interning for Matasano Security, Hertz and Denbow, have reverse-engineered the Remote Access Tools (RATs) that attackers use to gain control of compromised machines.

RATs, which can conduct keylogging, screen and camera capture, file management, code execution, and password-sniffing, essentially give the attacker a hook in the infected machine as well as the targeted organization.

This is great news for cybersecurity. It opens new opportunities for attribution of computer attacks, along lines I've suggested before: "The same human flaws that expose our networks to attack will compromise our attackers' anonymity."

In this case, the possibility of a true counterhack is opened up. The flaws identified by Hertz and Denbow could allow defenders to decrypt stolen documents and even to break into the attacker's command and control link, while the attacker is still online.

It's only a matter of time before counterhacks become possible. The real question is whether they'll ever become legal. Both the reporter and the security researcher agree that "legally, organizations obviously can't hack back at the attacker."

I believe they are wrong on the law, but first let's explore the policy question.

Should victims be able to poison attackers' RATs and then use the compromised RAT against their attacker?

It's obvious to me that somebody should be able to do this. And, indeed, it seems nearly certain that somebody in the US government — using some combination of law enforcement, intelligence, counterintelligence, and covert action authorities — can do this. (I note in passing, though, that there may be no one below the President who has all these authorities, so that as a practical matter RAT poisoning may not happen without years of delay and a convulsive turf fight. That's embarrassing, but beside the point, at least today.)

There are drawbacks to having the government do the job. Counterhacking will likely work best when the attacker is actually online, so that defenders can stake out the victim's system, give the attacker bad files, monitor the command and control machine, and copy, corrupt, or modify exfiltrated material. Defenders may have to swing into action with little warning.

Who will do this? Put aside the turf fight; does NSA, the FBI, or the CIA have enough technically savvy counterhackers to stake out the networks of the Fortune 500, waiting for the bad guys to show up?

Even if they do, who wants them there? Privacy campaigners will not approve of the idea of giving the government that kind of access to private networks, even networks that are under attack. For that matter, businesses with sensitive data won't much like the stark choice of either letting foreign governments steal it all or giving the US government wide access to their networks.

From a policy perspective, surely everyone would be happier if businesses could hire their own network defenders to do battle with attackers. This would greatly reinforce the thin ranks of government investigators. It would make wide-ranging government access to private networks less necessary. And busting the government monopoly on active defense would probably increase the diversity, imagination, and effectiveness of the counterhacking community.

But there is always the pesky question of vigilantism...

First, as I've mentioned previously, allowing private counterhacking does not mean reverting to a Hobbesian war of all against all. Government sets rules and disciplines violators, just as it does with other privatized forms of law enforcement, from the securities industry's FINRA to private investigators.

Second, the "vigilantism" claim depends heavily on sleight of hand. Those against the idea call it "hacking back," with the heavy implication that the defenders will blindly fire malware at whoever touches their network, laying indiscriminate waste to large swaths of the Internet. For the record, I'm against that kind of hacking back too. But RAT poison makes possible a kind of counterhacking that is far more tailored and prudent. Indeed, with such a tool, trashing the attacker's system is dumb; it is far more valuable as an intelligence tool than for any other purpose.

Of course, the defenders will be collecting information, even if they aren't trashing machines. And gathering information from someone else's computer certainly raises moral and legal questions. So let's look at the computers that RAT poisoning might allow investigators to access.

First, and most exciting, this research could allow us to short-circuit some of the cutouts that attackers use to protect themselves. Admittedly, this is beyond my technical capabilities, but it seems highly unlikely to me that an attacker can use a RAT effectively without a real-time connection from his machine to the compromised network. Sure, the attacker can run his commands through onion routers and cutout controllers. But at the end of all the hops, the attacker is still typing here and causing changes there. If the software he's using can be compromised, then it may also be possible to inject arbitrary code into his machine and thus compromise both ends of the attacker's communications. That's the Holy Grail of attribution, of course.

Is there a policy problem with allowing private investigators to compromise the attacker's machine for the purpose of gathering attribution information? Give me a break. Surely not even today's ACLU could muster more than a flicker of concern for a thief's right to keep his victim from recovering stolen data.

The harder question comes when the attacker is using a cutout — an intermediate command and control computer that actually belongs to someone else. In theory, gathering information on the intermediate computer intrudes on the privacy of the true owner. But, assuming that he's not a party to the crime, he has already lost control of his computer and his privacy, since the attacker is already using it freely. What additional harm does the owner suffer if the victim gathers information on his already-compromised machine about the person who attacked them both? Indeed, an intermediate command and control machine is likely to hold evidence about hundreds of other compromised networks. Most of those victims don't know they've been compromised, but their records are easy to recover from the intermediate machine once it has been accessed. Surely the social value of identifying and alerting all those victims outweighs the already attenuated privacy interest of the true owner.

In short, there's a strong policy case for letting victims of cybercrime use tools like this to counterhack their attackers. If the law forbids it, then to paraphrase Mr. Bumble, "the law is a ass, a idiot," and Congress should change it.

But I don't think the law really does prohibit counterhacking of this kind, for reasons I'll offer in a later post.

RATs and Poison Part II: The Legal Case for Counterhacking

Stewart Baker

In an earlier post, I made the policy case for counterhacking, and specifically for exploiting security weaknesses in the Remote Access Tools, or RATs, that hackers use to exploit computer networks.

There are three good reasons to poison an attacker's RAT:

1. We can make sure the RAT doesn't work, or that it actually tells us what the attackers are doing on our networks;

2. We gain access to the command and control machines that serve as waystations that let attackers download stolen data or upload new malware; and

3. If we're very lucky and very good, we can use the poisoned RAT to compromise the attacker's home machine, directly identifying him and his organization.

More problematic is the legal case for counterhacking, due to long-standing opposition from the Justice Department's Computer Crime and Intellectual Property Section, or CCIPS. Here's what CCIPS says in its Justice Department manual on computer crime:

Although it may be tempting to do so (especially if the attack is ongoing), the company should not take any...
