Friday, July 20, 2012
The Best Hacking Film You Haven't Seen (Yet)
Code 2600 is a rich visual history of computer hacking as told by some of its principal participants.
The film opens with news of a Soviet satellite orbiting the earth in the late 1950s. The United States, which once thought itself on top of the world technologically, found itself behind. Suddenly, says director Jeremy Zerechak, the US military was keen on computer technology. He points out that in the 60s and 70s the military had all the best high-grade computer equipment, but after the computer revolution of the 80s and 90s that was no longer the case; today the military buys off-the-shelf mobile devices.
Somewhere in those intervening sixty years of military history lie the origins of computer hacking.
Like Steven Levy's 1984 classic book Hackers, the film explores the early computer hackers who studied the original wired telephone switching system. One hacker, John Draper, discovered that the sound produced by an inexpensive toy whistle from a box of Cap'n Crunch cereal could interrupt AT&T's normal long-distance billing process. This 2600-hertz tone (hence the title of Zerechak's documentary) was very important to early hackers, known as phone phreaks, who wanted to reach fast computers on the other side of the world without paying long-distance charges. AT&T, at great expense, began to change its switching system.
Around the same time, the Homebrew Computer Club was starting up in the San Francisco Bay Area. Member Bob Lash remembers a young Steve Wozniak showing off his early Apple computers, alongside everyone else who was building their own machines at the time. There was a lot of trial and error, but smart people were able to do very sophisticated things at home.
Throughout the film, Zerechak uses classic footage to capture a moment or to make a point. One recurring sequence is 1950s black-and-white footage of Dr. Claude Shannon, the mathematician, cryptographer, and father of information theory, with his metal mouse and its square maze. This was one of the first experiments in artificial intelligence, demonstrating how Theseus, his robotic mouse, could learn and adapt to a changing environment. It is an obvious metaphor for the computer hackers who probe the phone networks, and later the Internet, simply wondering what is connected to what.
In one of his interview segments, Marcus Ranum, Chief Security Officer at Tenable Network Security, says that in the early days there was limited addressing. In other words, without a Google search, you had to know where on the Internet you wanted to go. Or, like the metal mouse, you had to search until you found something new or interesting. Often, you used your phone modem to find other phone modems. Looking for computers set up with default "guest" accounts, hackers used war dialing -- randomly dialing phone numbers until they got a computer on the other end -- to reach corporate or military computers. At the time, says Ranum, system administrators would laugh at logs that showed 800 attempts to log in with the default word "guest." But that was when the Internet was still an intimate community of military, academics, and a few curious hackers, barely a few years removed from the early ARPANET that predates today's Internet.
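For readers who have never seen one, the logic of a war dialer fits in a few lines. This is purely an illustrative sketch, not code from the film; the sounds_like_modem function is a simulated stand-in for driving an actual modem and listening for a carrier tone.

```python
import random

def sounds_like_modem(number):
    # Stand-in for actually dialing: a real war dialer of the era drove a
    # Hayes-compatible modem over a serial port and listened for a carrier.
    # Here we simulate a rare "hit" so the sketch runs end to end.
    return random.random() < 0.01

def war_dial(area_code, prefix, attempts=200):
    """Randomly dial numbers in one exchange and record which ones answer
    with a modem rather than a person or a busy signal."""
    hits = []
    for _ in range(attempts):
        number = f"{area_code}-{prefix}-{random.randint(0, 9999):04d}"
        if sounds_like_modem(number):
            hits.append(number)
    return hits

print(war_dial("555", "867"))
```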
The shift that followed, from an invite-only world to what we have today, is important; that's when hackers realized they were no longer alone on the Internet and had to go underground. Jeff Moss, founder of Black Hat and DefCon, describes in one of his interview segments growing up in the Bay Area in the 1980s and having one of the first affordable home computers that, with a modem, connected over the phone to various bulletin boards. He says that he could connect and no one would know his true identity or age; he would be judged only by what he wrote. For a 14-year-old boy, Moss says, it was liberating to be able to talk about sex and drugs.
Then in the early 1990s, Moss says, AOL, Prodigy, and CompuServe destroyed the local community bulletin board, opening up what had been an exclusive neighborhood of thought and discussion to the entire world. It created a gold rush; it gave us spamming and phishing, both of which got started only once the masses started surfing the net. It also threatened to push the curious hacker community into a dark corner -- until Moss founded DefCon in the summer of 1993. DefCon is a real-world computer bulletin board where communities of hackers and law enforcement talk openly about the Internet with an eye toward fixing what is broken.
Not every computer hacker is malicious; Moss makes the point that there are good plumbers and bad plumbers. And not all famous computer hackers are ex-felons like Kevin Mitnick. Zerechak's film includes footage of the Boston-based L0pht Heavy Industries members testifying before Congress in May of 1998, saying confidently that they had the knowledge to take down the Internet in 30 minutes (but also that they wouldn't do it). Today, one of the original members of L0pht, Peiter Zatko aka "Mudge," works for DARPA. Another, Joe Grand aka "Kingpin," runs a hardware design studio in San Francisco. And even Moss, who wasn't part of L0pht, has served on President Obama's Homeland Security Advisory Council and is today ICANN's Chief Security Officer.
The film then turns to the pressing privacy issues we face today, with insight from Jennifer Granick, who at the time of production was a lawyer with the Electronic Frontier Foundation (EFF), and Lorrie Cranor, a researcher with Carnegie Mellon University's CyLab. They remind us that with each digital transaction we leave breadcrumbs behind, and that we don't always have a say in how that information might later be used.
One of the really cool moments in the documentary is when penetration tester Gideon Lenkey shows off a mobile version of the Metasploit software running on an iPhone: Lenkey uses it to log into a Windows laptop in an open Internet café. Lenkey also reveals some of the social engineering tricks he uses to get inside corporate campuses without explicit permission.
Capping the film are interview segments with security expert Bruce Schneier, who says "the Internet is the greatest Generation Gap since Rock N Roll," and that our kids, who grew up with this technology already available to them, will be the best judges of how electronic devices should be used going forward.
Moss agrees: "People can't control what they don't understand. How do you evaluate the risk of a computer controlled car? Well, people don't really know. We've never had computer controlled cars before."
I should disclose that I am one of the handful of supporting computer security experts who appear throughout Code 2600. Although my interview segments were shot at Black Hat DC back in January 2010, they hold up well today. Indeed, all of the interviews Zerechak captured in the three and a half years he worked on the film appear eerily prescient today.
Since premiering at the Cinequest Film Festival in San Jose, California, last March, Code 2600 has enjoyed a limited run at film festivals around the country. At the Atlanta Film Festival the documentary won a coveted Grand Jury Award. Zerechak is currently working on a major distribution deal, so hopefully Code 2600 will receive the wider audience it deserves. In the meantime, you can see it next Friday night, July 27, 2012, at 8pm at the Rio Hotel in Las Vegas, Nevada. Admission to DefCon 20 is $200, cash only (of course).
This blog also appeared on Forbes.com
Friday, March 4, 2011
Why Cybersecurity Should Focus on Failure
In the 1940s “there were about ten deaths per one hundred million passenger miles,” said cryptographer Paul Kocher. That works out to one expected death for every ten million passenger miles flown. Today, when air travel is far more common, a lifetime of flying can easily add up to a million or two air miles. At the 1940s rate, that would give many of us something like a 1-in-10 to 1-in-5 chance of dying in a plane crash. With that track record, the aviation industry might not have survived, and certainly would not be as robust as it is today.
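As a rough back-of-the-envelope check on those figures (the lifetime mileage values below are assumptions chosen for illustration, not data from Kocher's talk):

```python
# Roughly 10 deaths per 100 million passenger miles in the 1940s.
deaths_per_mile = 10 / 100_000_000

for lifetime_miles in (1_000_000, 2_000_000):
    expected = deaths_per_mile * lifetime_miles
    print(f"{lifetime_miles:,} miles -> {expected:.1f} expected deaths "
          f"(about a 1-in-{round(1 / expected)} chance)")

# 1,000,000 miles -> 0.1 expected deaths (about a 1-in-10 chance)
# 2,000,000 miles -> 0.2 expected deaths (about a 1-in-5 chance)
```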
Yet we tolerate similar failures and crashes within the computer industry every day.
Kocher said there’s been a thousand-fold improvement in aviation safety over the years because every time a plane crashes, the industry doesn’t say “Oops, that piece of metal broke.” Or “Too bad.” Or “the pilot made that dumb mistake because they didn’t deal with the engine failure properly.” Instead there’s a formal process that leads to exponential improvement in aviation safety.
Every aviation accident gets investigated, and often there is not one but a number of root causes behind it. “It’s essentially impossible that one error can bring down an airplane today,” he said, since three, four, or five failures usually compound on each other. With the mandatory use of black boxes, extensive field investigations, and expensive reconstructions, each mode of failure becomes less and less likely in the future.
“In computer security we’re going the other direction,” Kocher said, because the industry doesn’t take a professional, analytic view of failure. Some vendors will spend many months looking for problems that don’t exist. On the other hand, some vendors will only fix the bugs and do no more.
“In the aviation industry there’s not an attempt to put gloss around aviation safety to try and convince consumers there’s no possibility of an airplane crash if you carry the magic wand in your hand,” he said. Instead, there are individuals and companies that try to gather as much information as they can. They perform a root-cause analysis and try to learn as much as possible from each failure.
On the other hand, Kocher said, within computer security if you go to ten practitioners and ask what you should do to solve your particular data security problem, you’ll get ten different answers. One or two of those solutions may work. Eight of the ten may not.
He compared computer security to medicine in the 1820s, “when you had snake oil being sold along with some things that worked well but we may not know why they work.” Even when solutions do work, we often don’t know enough about them to explain why. After more than fifty years, we don’t yet understand the root causes of computer failure.
Kocher cites Moore’s Law, which states that the number of transistors placed on a chip will double roughly every two years. Moore’s Law allows for the inexpensive installation of many additional layers of protection, so that if one piece fails the others will ensure that the overall security properties are still met. Eventually, if you build up enough barriers, “it works but it is not very elegant,” he said. But “it’s like putting thirty layers of concrete bunker around your house, a wooden one, a steel one, etc., and then trying to make them interlock in various ways to keep your teenage daughter from leaving the house at night.”
Kocher said it’s important to understand the underlying motivations as well. Today the computer attacker has more incentive to learn about failures than the solution vendors do. The good guys collect their salaries whether or not a given solution works. The bad guys only get paid if they are successful.
This originally appeared on Forbes.com
Tuesday, May 18, 2010
Cybercriminals phone it in
The criminals start by acquiring your account information: by placing keystroke loggers on your desktop, by deploying sniffer programs on the network, or by using traditional phishing campaigns that entice you to volunteer personal data. The criminals then masquerade as the account holder in a call to a customer service representative (CSR) at the targeted financial institution.
In the past, fraud at the ATM has been relatively out of reach; the criminal might get your account number but not the associated PIN. One call center scam involves calling the CSR to change the PIN on an ATM card. By providing the call center with a name, an address, the nine digits of a Social Security number, and the targeted account number, the criminal is able to reset a four-to-six-digit ATM or credit card PIN. After encoding the stolen account data onto a blank magnetic stripe card, the criminal can then use the new PIN at any ATM.
Another way cybercriminals are using the call center is simply to change the contact phone number on an existing account. Most of us aren’t accustomed to having banks contact us over the phone, but when a particularly large or atypical transaction is pending, most institutions will call or text to confirm. Now the criminals are changing the contact number on record to their own. Then, when the bank calls to confirm, the criminals approve the transfer, because the financial institution has called them and not you. The financial institutions are aware of this scam, however, and have started calling both the new and the old phone numbers for confirmation.
The criminals, of course, are one step ahead.
In one case, documented by Kim Zetter over at Wired, a doctor’s home, office, and cell numbers were jammed with repeated calls. Some were solicitations for sex websites, others pure silence. Some telephone companies now warn customers who complain about this kind of call flooding that the calls may be cover for a financial crime in progress.
All of these attacks expose weaknesses in the call center’s authentication of account holders. Financial institution call center customer service representatives often rely on Automatic Number Identification (ANI), the phone number delivered with each incoming call. ANI is separate from CallerID; it is based on billing data and thus is captured by a CSR system even if the caller has blocked CallerID. Cybercriminals can and do spoof ANI, making their call appear to come from anywhere, including the registered contact phone number for a stolen account.
Challenge-response questions aren’t the answer either. Cybercriminals can search for and often find the answers to many common questions online. For example, the password to Sarah Palin’s Yahoo e-mail account was reset by someone who guessed that she met her husband in high school.
Instead, institutions should use more than one type of call center authentication: ANI plus challenge-response questions derived from past financial interactions with the customer (“Where was your last ATM transaction?”). Better yet, add a mutually agreed-upon password. Additionally, institutions should automatically enroll account holders in a package of security-related e-mail, text, and voice alerts including, but not limited to, changes to the physical address, the addition of a new person to an existing account, changes made to the contact phone number, and changes made to the PIN on an account.
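As an illustration only (this is my own sketch, not any institution's actual workflow, and every name in it is made up), a call center process along those lines might layer ANI matching, a transaction-derived challenge question, and automatic alerts on profile changes like this:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    registered_phone: str
    last_atm_location: str            # fact drawn from recent transaction history
    alert_channels: list = field(default_factory=lambda: ["email", "sms", "voice"])

def authenticate_caller(account, ani, challenge_answer):
    # Require more than one signal: ANI alone can be spoofed, and static
    # challenge questions alone can often be researched online.
    ani_matches = ani == account.registered_phone
    knows_history = (challenge_answer.strip().lower()
                     == account.last_atm_location.strip().lower())
    return ani_matches and knows_history

def change_contact_number(account, new_number):
    # Apply the change, then generate alerts on every enrolled channel,
    # including one to the OLD number, so the real owner can react in time.
    old_number = account.registered_phone
    account.registered_phone = new_number
    alerts = [f"{ch}: contact number changed from {old_number} to {new_number}"
              for ch in account.alert_channels]
    alerts.append(f"voice call to old number {old_number}: confirm this change")
    return alerts

acct = Account(registered_phone="555-0100", last_atm_location="Main St branch")
print(authenticate_caller(acct, ani="555-0199", challenge_answer="Main St branch"))  # False
print(change_contact_number(acct, "555-0142"))
```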
In theory the average account holder should never see these alerts. But when they do, hopefully they’ll recognize the need to react and stop the fraud in real time.
Originally published in Forbes.com
Thursday, April 29, 2010
The Dangers in Following the Crowd
When Benjamin Jun received a winter catalog in the mail from Nike with a personal URL on the cover, he didn’t realize the wealth of information that would soon be available to him online. Jun, vice president of technology at Cryptography Research, said that once online he was able to access a database showing what people he knew had purchased at various Nike stores. The site (and the entire winter campaign) is now down, but social media mashups such as this raise serious questions about companies that combine various databases, often without our direct consent.
This week Facebook has come under scrutiny for its new platform that extends the social network onto partner sites. While you are logged into Facebook, a simultaneous visit to one of Facebook’s partner sites will reveal what your Facebook friends think of content on that site. The platform also lets you interact with your Facebook friends on the partner site, extending your social media experience.
However, the platform also allows third parties to collect data about you and your friends, making public (in some cases) data that you may have marked as “friends only” in your privacy settings on the Facebook side. More ominously, Facebook is allowing its partner sites to store this demographic and marketing information indefinitely.
On Monday, four senators (Charles Schumer of New York, Michael Bennet of Colorado, Mark Begich of Alaska, and Al Franken of Minnesota) wrote to Facebook CEO Mark Zuckerberg with several privacy concerns, including asking why it is so difficult for customers to opt out of the new platform. Indeed, there are multiple settings within Facebook that must be tweaked in order to restrict private information.
Facebook has responded that it takes privacy seriously, though it offered no specifics. Facebook, to its credit, has launched a new safety page designed to better educate its users about sharing passwords and other risky practices, but that does nothing to mitigate the privacy and security risks inherent in Facebook’s proposed privacy policy changes.
The true dangers lie beneath the surface, beyond the mere marketing information of likes and dislikes.
In his talk last month at the 2010 RSA Conference, Jun spoke about the underlying assumptions being made by site designers (not just at Nike and Facebook or their partners) who are incorporating mashup strategies, assumptions that might not be true. For example, the process of authorizing credentials on a social networking site is very different from the process of obtaining credentials on an e-commerce or online banking site. Site developers might be tempted to accept the APIs from a popular social media site as a way to increase revenue. Jun says application designers should instead avoid, or at least carefully vet, information being passed to them from another source.
To prevent unintended access, Jun advocates the creation of a “session manager,” one more hoop in the security chain. While it’s always controversial to propose slowing down the consumer experience, the session manager would receive credentials from a third-party site, vet the data, then prompt for additional authentication if necessary.
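To make the idea concrete, here is a minimal sketch of such a session manager. It is my own illustration of Jun's suggestion, not his or any vendor's implementation; the assurance levels, credential sources, and actions are all hypothetical.

```python
# Illustrative only: assurance levels and names are hypothetical.
LOW, MEDIUM, HIGH = 1, 2, 3

# How much we trust credentials based on where they came from.
SOURCE_ASSURANCE = {
    "social_network_token": LOW,      # e.g. a social login passed in via an API
    "mailed_personal_url":  LOW,      # e.g. a URL printed on a catalog cover
    "password_login":       MEDIUM,
    "password_plus_otp":    HIGH,
}

# Minimum assurance required for each action on the site.
REQUIRED_ASSURANCE = {
    "view_recommendations": LOW,
    "view_order_history":   MEDIUM,
    "place_order":          HIGH,
}

def session_manager(credential_source, action):
    """Vet incoming credentials instead of passing them straight through."""
    have = SOURCE_ASSURANCE.get(credential_source, LOW)
    need = REQUIRED_ASSURANCE[action]
    if have >= need:
        return "allow"
    return "step-up"   # prompt the user for additional authentication

print(session_manager("mailed_personal_url", "view_order_history"))  # step-up
```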
Simply passing credentials from one site to another without reevaluating them is dangerous, said Jun. He cites, in particular, the three R’s of application development: redirects, renegotiation, and reconnections. It is within these that gaps of trust among different systems open up, gaps that could allow bad actors access to sensitive data without proper authentication. Jun says that in the case of the Nike catalog, the only authentication was the unique URL printed on the cover; anyone reading the mailing could have gone online as him.
I for one do not need to know what news stories my friends are reading right now—let them surprise me later in a real (not virtual) conversation. Nor do I need to see what my friends are buying from an e-commerce site; really, I’m probably the last person to go online, learn that someone I know bought a pair of blue running shorts, size medium, and say “Hey, order me a pair also!” Just because the crowd is doing something doesn’t mean I’m going to do it.
But for many, social networking is a way of life, a connection to others. For them, let’s get the security right. With online data leakage occurring in new and surprising ways these days, why take the chance of sharing databases without providing additional back-end controls?
Originally published in Forbes.com
Tuesday, March 23, 2010
Be Careful Who You Know
Beyond date of birth, what other personal information are we giving away on social network sites? In a talk a few weeks ago at the 2010 RSA Conference, security researcher Nitesh Dhanjani explored some non-traditional ways social networking could be used to profile individuals. He says just by studying your social networking presence one can identify, for example, pending business deals.
Dhanjani, who says his exploration is just a hobby, created a LinkedIn account for a friend who didn't yet have one (we'll call him "Jack"), then invited a mutual friend to join Jack's LinkedIn network. Within a short time, Jack acquired over 80 connections. What's surprising here, says Dhanjani, isn't that people linked to this fraudulent LinkedIn profile, but what information he, as an impostor, was able to glean about Jack's sphere of influence and business.
For example, a competitor impersonating Jack could now see Jack's clients. And if Jack's company were about to be acquired (and that information was not yet public), an outsider might notice a recent influx of new connections from several people at a rival organization. The lesson here is to establish a presence on the major social networks, if only to stake a claim to your name and reputation.
Even legitimate social networks can be hacked: someone could friend you just to get access to someone else you know. A law enforcement officer could be seeking information on a person of interest who happens to be part of your social network. According to the Electronic Frontier Foundation, social networks are being used by federal investigators, and last week the privacy organization released a 38-page PDF training course (obtained through the Freedom of Information Act) that the EFF said was used for conducting investigations via social networks. While federal agents can't legally pretend to be someone else, they can request to be your friend and thus see all your posts, as well as those of others in your network. The EFF has been studying the privacy issues associated with this new form of surveillance. Often we accept people into our social networks by an extension of trust, i.e. a friend of a friend, so a good rule of thumb might be to question how well you really know a person before accepting a new friend request.
But an outsider doesn't have to join a social network to map your social network.
In his RSA presentation Dhanjani also demonstrated how outsiders can use publicly available social network information to define spheres of influence around a targeted individual. Popular social networks display a person's top 8 friends as a means of identifying exactly which John Smith you're currently looking at. By comparing the top 8 friends on MySpace with the top 8 friends on Facebook, Dhanjani says, he can map who the critical contacts for the targeted individual are. And by going one step further, looking at the friends of those friends, one can map who has the most influence with the targeted individual, their "posse" if you will, and do so without joining the network. A hacker using social engineering could then contact the targeted individual and say "Jane said I should contact you about Alice."
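The mapping technique boils down to simple set arithmetic over publicly visible friend lists. Here is a minimal sketch with made-up names, not Dhanjani's actual tooling:

```python
from collections import Counter

# Publicly displayed "top friends" for the same person on two networks.
myspace_top8 = {"alice", "bob", "carol", "dave", "erin", "frank", "grace", "heidi"}
facebook_top8 = {"alice", "bob", "carol", "erin", "ivan", "judy", "mallory", "oscar"}

# People who surface on both networks are likely the critical contacts.
critical_contacts = myspace_top8 & facebook_top8
print(critical_contacts)  # {'alice', 'bob', 'carol', 'erin'}

# Going one step further: count how often someone appears among the
# friends-of-friends lists to estimate who has the most influence.
friends_of_friends = {
    "alice": {"bob", "carol", "peggy"},
    "bob":   {"alice", "carol", "trent"},
    "carol": {"alice", "bob", "erin"},
}
influence = Counter(name for friends in friends_of_friends.values() for name in friends)
print(influence.most_common(3))
```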
Some may see all this as nothing new. Kevin Mitnick pioneered social engineering years ago. But now the means to profile someone is much more convenient. Be careful who you know and what you post online. You never know who might be listening.
Originally published in Forbes.com
Tuesday, March 16, 2010
Device Fingerprinting to Fight Real-time Transaction Fraud
On Tuesday ThreatMetrix unveiled its new cloud-based transactional fraud network. Using its global database of device fingerprints (unique details about the PC, mobile phone, or other Internet-connected device), the company says it can detect fraudulent transactions without the need to acquire personally identifiable information. By correlating incoming TCP/IP information with its database, for example, the company was recently able to identify and stop one malware-infected computer from making an online transaction.
ThreatMetrix, a Los Altos, California-based company, has been working on its fraud network for four or five years, says Alisdair Faulkner, chief product officer at the company. What’s different from other transaction-based fraud networks is that ThreatMetrix relies on device fingerprints rather than transaction details for its fraud detection, providing a new set of tools for organizations to verify new accounts, authorize payments and transactions, and authorize user logins. Faulkner describes the new network as “fraud middleware” in that it is designed to complement and integrate with existing fraud solutions.
It is a very different solution from the approach taken by other transactional fraud networks such as ID Analytics, a San Diego, California-based company that uses data mining of consumer purchases to address identity fraud. By collecting transaction data, ID Analytics says it can profile a customer’s typical purchasing behavior and flag an abnormal transaction as possibly fraudulent. Unlike the credit bureaus, which look at static elements of a person’s profile (SSNs or open accounts), transactional fraud networks look at live transaction data instead.
What ThreatMetrix brings to the table is a proprietary device fingerprinting methodology that probes beyond mere cookies and browser data to identify the machine being used for online access.
Clearly there is a need for such alternative analysis. Cybercriminals have shown increasing technical sophistication year after year. Masking one’s hardware identity seems mere child’s play today, unless someone on the other end has the sophisticated tools to analyze the output from a compromised machine.
By cataloging devices internationally, ThreatMetrix says it can see through a typical TCP/IP proxy and learn that a machine claiming to be a Windows XP box located in the United States is in reality a Linux machine located in Vietnam. That could be a machine set up to emulate a legitimate user, or it could indicate a man-in-the-middle attack, where a third party is eavesdropping on a user’s online session.
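ThreatMetrix's methodology is proprietary, so the following is only a toy illustration of the general idea: spotting a mismatch between what a device claims to be and what its traffic looks like. The TTL values here are common passive-fingerprinting rules of thumb (Linux defaults to about 64, Windows to about 128), not ThreatMetrix's actual checks.

```python
def os_from_ttl(observed_ttl):
    """Very rough passive guess at the sender's OS from the observed IP TTL.
    Typical defaults: Linux/macOS ~64, Windows ~128 (minus hops in transit)."""
    if observed_ttl <= 64:
        return "linux-like"
    if observed_ttl <= 128:
        return "windows-like"
    return "other"

def looks_suspicious(claimed_user_agent, observed_ttl):
    claims_windows = "Windows" in claimed_user_agent
    guessed = os_from_ttl(observed_ttl)
    # A browser claiming Windows on a connection whose packets look like Linux
    # suggests a proxy, an emulated device, or a man in the middle.
    return claims_windows and guessed != "windows-like"

print(looks_suspicious("Mozilla/4.0 (compatible; Windows XP)", 57))  # True
```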
ThreatMetrix has also seen one device log into multiple financial services accounts within seconds of each other as well as numerous devices attempting to log into the same online account. This could indicate the use of a botnet, a rogue network of compromised PCs.
Despite the new avenues for fraud taken by cybercriminals today, it’s nice to see the security industry thinking outside the box and offering innovative solutions.
Originally published in Forbes.com
Wednesday, March 10, 2010
With ISP offline, criminal malware infections drop dramatically
On Wednesday, RSA alerted its customers to a substantial decrease, within the last twenty-four hours, in Trojan horse activity on the Internet as the result of a key Internet host going offline. Criminal enterprises use such hosts as a common point of contact. On the front end, it is the Internet address that thousands of infected computers worldwide contact in order to download the latest version of the malware. On the back end, the bad guys connect through such a network to mask their true locations. Removing the network breaks the connection between the infected PCs and the criminal enterprise. Additionally, Cisco reports that there was a flood of last-minute malware activity prior to the shutdown, which could have been the criminals scrambling to change IP addresses.
The facility, known as AS Troyak (Russian slang for “Trojan”), is believed to be the source of several major strains of Trojans currently active on the Internet. AS Troyak is home to the Rock Phish gang’s JabberZeus drop servers and Gozi Trojan servers, among other lesser-known Trojans. Zeus is a class of banking Trojan that stealthily harvests credentials used to make fraudulent ACH transfers.
A dramatic example of the impact of the loss of AS Troyak can be found on the site Zeus Tracker (the site uses a generic certificate, so your browser may ask you to add the site as an exception), which reported a substantial drop in Zeus infections on Tuesday evening.
In the past, bullet-proof hosting facilities have used AS Troyak. Bullet-proof hosting means the owners are likely to be involved in some criminal activity themselves and thus ignore requests by law enforcement to shut down illegal activity on the server. That isn’t to say all of AS Troyak’s clients are engaged in illegal activity, only that those that are can find safe haven with these facilities.
According to RSA, the IP address ranges affected by the AS Troyak shutdown include the following (a quick way to check an address against these ranges is sketched after the list):
91.200.164.0/22
91.201.196.0/22
193.104.27.0/24
193.104.94.0/24
193.104.176.0/24
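If you want to check whether a given address falls inside these ranges, Python's standard ipaddress module makes it straightforward; the sample addresses below are arbitrary examples, not known-infected hosts.

```python
import ipaddress

TROYAK_RANGES = [
    "91.200.164.0/22",
    "91.201.196.0/22",
    "193.104.27.0/24",
    "193.104.94.0/24",
    "193.104.176.0/24",
]

def in_troyak(ip):
    """Return True if the address falls in any of the affected ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(net) for net in TROYAK_RANGES)

print(in_troyak("91.200.166.10"))   # True: inside 91.200.164.0/22
print(in_troyak("8.8.8.8"))         # False
```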
The exact cause of AS Troyak’s demise is not known, nor does the team at RSA think the outage is likely to be long-lived. The server could, for instance, be moving to a new physical location, or the shutdown could be the result of a technical failure. Or the party operating it may have decided not to continue the service. It is also possible, though unlikely, that a coordinated effort by law enforcement and/or the security community shuttered AS Troyak.
“While the excitement is likely to be rather short-lived,” said Sean Brady, product marketing manager for RSA’s IPV Team, “seeing a wholesale throttling of a significant volume of online fraudulent activity provides a valuable glimpse at how to perform large-scale crime prevention efforts. It’s akin to the traditional methods of taking on organized crime – if you can go after the money, or in this case, the infrastructure, you can do more damage to the organization’s activity than going after individuals or individual resources.”