Cyber cowboys | National Center for Supercomputing Applications at the University of Illinois
04.17.14
by Barbara Jewett
Major corporate computer breaches put computer security in the public eye, only for public attention to wane until the next big breach. For NCSA’s cybersecurity team, keeping systems safe and fending off attacks is an ongoing job.
News reports of mega-retailers and banks having their computer systems surreptitiously hacked raise the question: How safe is NCSA?
Randy Butler, NCSA’s director of cybersecurity, says that what we see in the press are financially motivated intrusions, people looking for “information that they can then use to turn into money somehow. Organizations like NCSA and other academic computing centers don’t deal in money or government secrets. We’re open research. The things that happen at our center usually are published at some point.”
And while responding to an intrusion can be the adrenaline-pumping, fun part of a cybersecurity job, changes made in security practices through the years mean the center’s security operations team seldom don their imaginary white cowboy hats before galloping their fingers across their keyboards to protect the systems. Which, Butler says, is really a good thing. He attributes the lack of need for the white hats to two big security changes at NCSA: two-factor authentication and increased monitoring.
Are you real?
When Adam Slagell, NCSA’s assistant director for cybersecurity, began planning the security for Blue Waters four years ago he started by evaluating past incidents at NCSA. What he discovered was that 80 percent of the center’s incident response was directed toward account credential compromise.
The center’s biggest vulnerability has always come from compromised user accounts, says Butler. If a hacker can get into a system that has very little security, he can monitor for passwords, then use those passwords in attacking elsewhere. This is made worse by the fact that users often reuse their password on different accounts.
Next the hacker will look for the account holder’s associations and quickly infect other systems simply by using that same user name and password. Since the other systems initially have no reason to doubt it is a real user, the attack can spread quickly. Butler notes that a few months ago a system out of state was attacked and within an hour a system at the University of Illinois was successfully attacked with that same user name and password.
Slagell knew he needed to mitigate that risk for Blue Waters. The solution was two-factor authentication, meaning that in addition to a user name and password, a user must also present a second factor to prove legitimacy. For Blue Waters users, that second factor is a one-time-use password issued for a particular login attempt. The next time a user logs in, it must be with a new one-time-use password.
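The article doesn’t specify which one-time-password scheme Blue Waters used, but the general mechanism behind counter-based one-time passwords can be sketched with HOTP (RFC 4226), where each login consumes a fresh code derived from a shared secret and an incrementing counter:

```python
import hmac
import hashlib
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226).

    Server and token share `secret`; each successful login advances
    `counter`, so an intercepted code is useless for the next attempt.
    """
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890"
print(hotp(b"12345678901234567890", 0))  # → 755224
```

This is a sketch of the technique, not NCSA’s actual deployment; production systems typically pair it with hardware tokens and server-side counter resynchronization.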
“NCSA is a little bit ahead of industry in that regard but they are now catching up to us. We are now seeing two-factor for all sorts of things. There are options now for two-factor authentication for Facebook and Twitter; Google started rolling it out a couple years ago,” notes Slagell.
Increased system monitoring has also helped the figurative cowboy hats see less action. For many years, monitoring the center’s systems included manually blocking malicious hosts that were detected. (A host, in this context, is a machine on the network that’s been detected actively scanning for open vulnerabilities in NCSA’s systems.) The list of blocked hosts numbered about a dozen. Using tools that automated the checking, and that broadened the search for hosts, grew that list to about 1,000 blocked hosts.
Since security analyst Warren Raquel joined NCSA’s cybersecurity team a year ago there haven’t been many incidents, he says, though there have been numerous attempts. Raquel has been focusing on hardening NCSA’s network, making it even more resistant to intruders. For example, he says, the center started doing more detection by leveraging the newest features of Bro. For more than a decade, Bro has been open-source software designed for network monitoring at academic institutions. Since 2010, NCSA has been actively involved in development for the Bro project.
“Bro is able to detect systems that are scanning on different levels on different types of rule sets,” explains Raquel. “Today, we have over 50,000 hosts blocked, and those are hosts we hadn’t detected before. A lot of what we see every day is people trying to find a resource they can access. They’re just trying to find an entry point.”
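Bro’s real detection policies are far richer, but the core idea behind this kind of automated blocking, flagging any source that probes many distinct targets, can be sketched as follows. The threshold, event format, and function names here are illustrative assumptions, not NCSA’s actual configuration:

```python
from collections import defaultdict

SCAN_THRESHOLD = 20  # distinct (host, port) targets before a source is flagged

def detect_scanners(events):
    """Flag source IPs that touch many distinct targets.

    `events` is an iterable of (src_ip, dst_ip, dst_port) connection
    attempts. A real monitor such as Bro would also window by time and
    distinguish failed from successful connections.
    """
    targets = defaultdict(set)   # src_ip -> set of (dst_ip, dst_port) seen
    blocked = set()
    for src, dst, port in events:
        targets[src].add((dst, port))
        if len(targets[src]) >= SCAN_THRESHOLD:
            blocked.add(src)     # in a real deployment, hand off to the firewall
    return blocked
```

A scanner sweeping 25 ports on one machine would be flagged; a normal user touching a handful of services would not.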
Although NCSA’s production systems are secure, says Butler, that entry point is often found in research group systems, including some located within NCSA.
“Research groups that have money to purchase their own computer systems often want to manage those systems themselves,” says Raquel. “But they don’t follow any type of security plan to keep the systems protected from vulnerabilities. They don’t keep their systems patched and up-to-date; they aren’t using proper techniques for user names and passwords. Sometimes they don’t protect them at all. Their goal is just to get their research done, and they don’t feel their research is that sensitive. But the ramifications of their actions are not thought out. Such a system can be used as a jumping point to access other systems on NCSA’s network, or any other system it has access to.”
NCSA has to run a very open network because of the high throughput the center has and the tremendous amount of bandwidth, says Slagell. Situational awareness, or monitoring, was the approach developed to protect systems where they can’t place overly onerous security controls in the way.
Butler credits NCSA’s vigilant systems administration staff for helping keep the center’s systems safe. “We have excellent people here, and they let us know anything that might be out of the ordinary. The last big incident we had was over a year ago and that’s how we were notified about it. One of our systems administrators noticed connections to a site that didn’t seem to make any sense and alerted our security folks. It turned out to be a real incident.”
Butler, Raquel, and Slagell say stopping an incident and preventing one like it from recurring can be time-consuming and challenging. They compare it to being dropped into a big puzzle. Only there’s no context where you get dropped in, says Butler.
“You don’t know where you are in the puzzle, if you are at the beginning, or the middle, or the end. You just have to keep growing it wherever it wants to grow until you find a beginning, then spend time trying to tease it apart and understand what is going on until you find the end,” he explains. “And then close the holes.”
Little R, big D
NCSA’s security team also does research and development, an aspect that is seldom seen at a supercomputing center. Often those R&D projects relate to things that are very pertinent to NCSA’s operations.
“I’d say our R&D is little R and big D,” says Slagell. “Our research is really applied. Our developers help our security operations team, who in turn help our developers by talking with them about the real problems we want to solve and explaining what we currently do. We put a lot of effort into doing real security risk assessments and top-down security architecture, trying to understand what we are trying to protect against from a holistic point of view. Not many in academia do that.”
The team has obtained funding to export their security risk assessment into various NSF projects, some at NCSA but also elsewhere. Understanding a project’s threats and risks, then turning that into a cybersecurity plan is “a lot different from the earliest days when we were just responding to threats,” notes Slagell.
New technology, new threats
NCSA’s cybersecurity experts say they don’t see the need for people with their skills going away any time soon. But they do note that the threats are constantly changing, which keeps their jobs interesting.
“More and more things get networked, with more and more complexity,” says Slagell. “All these home automation things—control your home security or your heating and cooling from your smartphone. There is huge vulnerability in those. With these sorts of products people don’t think about building security in from the beginning. In our offices, we have to worry about digital projection systems and networked printers.”
Butler says not only do people overlook printers, they also don’t stop to think that a car is now just one big computer. And people are connecting their devices to their cars.
“In the old days, people never thought about the technology they were building being connected to anything, being connected to the Internet. Take managing technologies remotely from your computer at home. Few people considered security in building that cool thing that let you handle problems from home. We’re still in the mode of ‘Oh, isn’t this cool.’ And the problem is that there is somebody out there who says ‘Look, there’s an opportunity for me’ and takes advantage of that hole in the system.”
Protecting sensitive information
A big difficulty is that almost all information now is online, says Raquel. So when cybersecurity experts are dealing with sensitive information, they usually have to configure an existing system to pass that information across existing infrastructure, and that’s hard to do.
“You can’t just have a building that is completely closed off from the world that contains all your information. You usually have to interact with external systems. It’s a tricky juggling game,” he says.
But Raquel feels the biggest problem is the user who has to deal with all that sensitive information. Mainly, he says, because it’s the user’s job to manage all that sensitive information—they have to do payroll, or enter patient information or some other confidential task—but it is not their job to secure it. That’s handed off to an IT group or a security group. And, he emphasizes, trying to get the user and the IT group/security group to come up with a system that is acceptable for both is very difficult.
“The IT group/security group is going to want to lock down the information very tightly. But it will get to the point where it is unusable for the user who has to do their job. So the user wants to have certain freedoms to do their job and make their job easier. That’s how you end up with these holes that are overlooked and the data leaks out,” says Raquel.
Look at the history of cybersecurity, says Butler. First it was all about securing the organization. Then it became about securing the hosts themselves—the big research computers. And then it became focused on securing the applications and the networks.
“Now today it really is about securing the data. And then the metadata, there’s more levels down,” he notes.
“It’s sort of like firefighting,” quips Slagell. “But there will always be a need for firefighters.”
And cybercowboys in their imaginary white hats.
Blue Waters is supported by the National Science Foundation through awards ACI-0725070 and ACI-1238993.