Threatpost for B2B
CloudFlare claims government requests for user data are affecting fewer than 0.017 percent of its two million global customers
The Web performance and security company yesterday issued the report in accordance with the Department of Justice’s new regulations for publishing information pertaining to law enforcement requests for user data. While the figure is necessarily a bit off – given that current law bars companies from including specific figures regarding domains affected by National Security Letters (NSLs) – the report suggests that the government has sought information on perhaps as many as 3400 of CloudFlare’s clients.
The data reflect all requests received as of December 31, 2013.
The company says it received 18 subpoenas last year, complying with only one of those requests. Another request is still in process. The requests pertained to 17 separate domains but affected only one customer account.
CloudFlare says it pushed back on 16 subpoenas, all of which were rescinded. In some instances, the company claims court orders were issued in lieu of the original subpoena. In other cases, CloudFlare was simply not able to provide any information.
The company says it received 28 court orders, complying with 25 such orders. Two of the government requests remain in process. In total, court orders affected 227 domains under 38 customer accounts. For one of these court orders, CloudFlare was unable to provide any information.
The company says it received three search warrant requests, one of which was eventually rescinded. It ended up complying with only one of the orders, though a second remains in process. The warrants affected four domains under one user account.
“In the rare instances where law enforcement has sought content such as abuse complaints or support communications, CloudFlare has insisted on a warrant for those electronic communications,” the company says. “To date, we have received no such warrants.”
The company received and complied with just one request for a pen register/trap and trace order that affected only one domain under one customer account.
In both 2012 and 2013, CloudFlare claims it received between 0-249 NSLs.
“Even assuming the high end of the range at 249 accounts affected,” the company wrote in its transparency report, “such national security orders would affect fewer than 0.02% of CloudFlare customer accounts.”
Under the new Justice Department rules, companies are allowed to report the reception of NSLs in batches of 250, starting with 0-249. In other words, no company is permitted to say that they received zero NSLs.
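The arithmetic behind that ceiling is easy to verify against the two-million-customer figure cited above:

```python
# Sanity check on the quoted ceiling: even at the top of the 0-249
# reporting band, the affected share of ~2,000,000 customer accounts
# stays below the 0.02% figure CloudFlare cites.
affected = 249
customers = 2_000_000
pct = affected / customers * 100
print(f"{pct:.3f}%")  # prints 0.012%
```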
The company notes that the new rules are an improvement on the old ones, but “still consider[s] these new regulations to be an undue prior restraint on the freedom of speech.”
CloudFlare is also clear that it has never turned over its SSL keys or its customers’ SSL keys to anyone; it’s never installed any law enforcement software or equipment anywhere on its network; it’s never terminated a customer or taken down content due to political pressure; nor has it ever provided any law enforcement organization a feed of its customers’ content transiting its network.
“If CloudFlare were asked to do any of the above,” the company claims, “we would exhaust all legal remedies, in order to protect its customers from what we believe are illegal or unconstitutional requests.”
CloudFlare’s report follows similar ones by AT&T, which received more than 2,000 NSLs, as well as Twitter and various other tech giants, all of which seem to indicate that government requests for user data are on the rise.
SAN FRANCISCO–The security of data being transmitted over the Web relies on a large number of moving parts, from the integrity of the machine sending the data, to the security of the browser, to the implementation of encryption, to the fragility of the certificate authority system. Experts have been spending the better part of the last decade trying to address many of these issues, but there are still a number of hard problems to solve.
One of the most difficult of these is the way that users and browsers interact with the CA system and how the CAs handle certificate issuance and attempts to tamper with the system. In the last few years, a number of methods for addressing these issues have been proposed, and some of them show real promise, including the notion of certificate transparency. Developed by engineers at Google, the system is designed to provide a public log of every certificate that’s issued, with the user’s browser receiving a proof alongside each certificate. The logs themselves are append-only and cryptographically assured.
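The "append-only and cryptographically assured" property comes from structuring the log as a Merkle hash tree: a short audit path of sibling hashes lets a browser check that a certificate really is in the log without downloading the whole thing. A simplified Python sketch, restricted to power-of-two tree sizes (the full scheme, including odd tree sizes and consistency proofs, is specified in RFC 6962):

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # RFC 6962 domain separation: leaves hash with a 0x00 prefix,
    # interior nodes with 0x01, so a leaf can't masquerade as a node.
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def build_tree(leaves):
    """Build all levels of the tree; len(leaves) must be a power of two."""
    level = [leaf_hash(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1])
                 for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def audit_path(levels, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify_inclusion(leaf, index, path, root):
    """The browser-side check: the leaf plus its path must hash to the root."""
    h = leaf_hash(leaf)
    for sibling in path:
        h = node_hash(sibling, h) if index % 2 else node_hash(h, sibling)
        index //= 2
    return h == root
```

A certificate absent from the log cannot produce an audit path that hashes to the published root, which is what makes tampering detectable.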
“When implemented, Certificate Transparency helps guard against several types of certificate-based threats, including misissued certificates, maliciously acquired certificates, and rogue CAs. These threats can increase financial liabilities for domain owners, tarnish the reputation of legitimate CAs, and expose Internet users to a wide range of attacks such as website spoofing, server impersonation, and man-in-the-middle attacks,” Google’s description of the framework says.
The method requires the CAs to cooperate and submit their certificates to these public logs, and that’s one of the things that’s holding up its broad adoption.
“We need to get the CAs to change their behavior so they emit certificates this way,” Chris Palmer, a security engineer on the Chrome team at Google, said in a talk at TrustyCon here Thursday.
Palmer said that among the current proposals to help fix the trust problem online, he believes certificate transparency has the most potential for success. Some CAs have already committed to the idea, including Digicert and GlobalSign, one of the handful of larger certificate authorities in the world. Google is running a certificate log server, and there is information available in Chrome on the certificate transparency status of a given site using SSL. Google also has set up a forum that lists certificate pushes and other issues surrounding the system.
But in order for the proposal to gain more steam, more CAs need to participate and agree to publish their certificates in this way. When that happens, it could have a significant effect on finding and revoking malicious or mistakenly issued certificates.
SAN FRANCISCO – As more Web-based services are encrypted, privacy advocates are concerned the next wave of aggressive surveillance activity could target automated update services that essentially provide Internet companies root access to machines.
Chris Soghoian, principal technologist with the American Civil Liberties Union, said today at TrustyCon that current malware delivery mechanisms such as phishing schemes and watering hole attacks could soon be insufficient for intelligence agencies and law enforcement such as the NSA and FBI.
“The FBI is in the hacking business. The FBI is in the malware business,” Soghoian said. “The FBI may need more than these two tools to deliver malware. They may need something else and this is where my concern is. This is where we are going and why I’m so worried about trust.”
Update services such as Microsoft’s Windows Update have already been exploited in nation-state attacks, most notably Flame. Flame was a highly complex attack that used a cryptographic collision attack to forge a Microsoft digital certificate, spoof the update service and allow infected computers to receive malicious updates from the phony service.
Soghoian said his concern is that the government will not only exploit the convenience of these update services offered by most large providers, but also that it will erode the trust users have in the services leaving them vulnerable to cybercrime, identity theft and fraud.
“There are really sound security reasons why we want automatic security updates. If consumers have to do work to get updates, they won’t, and they will stay vulnerable,” Soghoian said. “What that means though is giving companies root on our computers—and we really don’t know what’s in the code after the fact. This is a point of leverage the government can use. We have no evidence they are using it right now, but these companies have a position of power over our devices that is unparalleled.”
Soghoian provided historical context to back up his overall claim that whatever access the government has to the intelligence is never enough. Going back to the early days of wiretapping 100 years ago, Soghoian said the government and law enforcement have always enjoyed a cozy relationship with telephone companies, and today with telecommunications providers. Transparency reports published by Google, Facebook, Twitter, Microsoft and other giant Internet companies offer a window into the number of law enforcement requests these companies get for user data, as well as government requests related to matters of national security.
Soghoian also shared testimony going back as far as 2010 from former FBI general counsel Valerie Caproni, now a U.S. District Court judge in New York, who warned Congress repeatedly about how changes in technology will lead consumers to use Internet services that would be difficult to monitor.
Soghoian cautioned that the government could take advantage of existing features in technology to get its way, citing as an example Android’s pattern lock: if a user repeatedly fails to enter the correct unlock pattern, Android will prompt for the Google account credentials synced with the device. Soghoian said Freedom of Information Act requests have revealed that the government has asked for password resets on particular users in order to access their accounts or devices.
More concerning still is the government’s ability to use a court order to add features that do not exist in products currently. Skype, he said, was served with a directive from the Attorney General to modify its end-to-end encryption capabilities in order to give the FBI access to encrypted communication, something that was revealed in the Snowden documents.
“We still don’t know what Skype did and when, and what law was used,” Soghoian said, adding that Edward Snowden’s secure email provider Lavabit was also served with a similar court order for its SSL keys. Rather than comply, Lavabit closed its doors. If update services are the next surveillance frontier, Soghoian hopes the respective companies remain vigilant, because the APIs used to deliver updates can also be used to deliver code to specific people.
“I would hope Google would fight that type of order all the way to the Supreme Court. The same goes for Apple and Microsoft and others,” he said. “I hope the companies we depend on and trust would fight.”
SAN FRANCISCO–The Lavabit case, which saw the secure email provider’s owner shut the company down after being forced to hand over to the government the encryption key that protected his users’ data, may seem like an extreme reaction to a unique situation. But experts say it’s likely that there will be similar situations in the near future, and technology providers and users should change the way they think about what the threats to their data may be.
The FBI went to Lavabit’s founder, Ladar Levison, last year in the wake of the NSA revelations and demanded access to the encrypted emails of one of its users, Edward Snowden. After a lot of back and forth and legal wrangling, Levison eventually turned over the encryption key that protected the communications of all of his users, and then promptly closed the business. Marcia Hoffman, one of Levison’s lawyers, said that she believes there will soon be other cases like Lavabit.
“I don’t believe that Lavabit is a unicorn,” she said in a talk at the TrustyCon conference here Thursday. “We need to update our threat models. Ladar was worried about data at rest, not data in transmission. The threats are different than we thought. Security and privacy enhancing services are really in the crosshairs. To the extent that you design a service like Lavabit, you should be thinking about how you’re going to deal with government requests.”
Those threats now include not just attackers and cybercriminals, but governments and their lawyers. Hoffman said that the way the government is interpreting surveillance and wiretapping laws now has put technology companies in a difficult position. CALEA, the statute that requires telecom companies and others to help law enforcement agencies with lawful intercept and wiretapping operations specifically didn’t apply to information technology companies, she said.
“The government has taken the position that a service provider has to provide any information that the government wants,” she said. “If you don’t like turning over your keys, you can just backdoor your system. Putting this kind of pressure on Internet companies really flies in the face of what Congress decided.”
The Lavabit case is still wending its way through the court system, as Levison is appealing a contempt of court order against him. Hoffman said that the broader issues related to the case–the use of encryption and the government’s efforts to get at encrypted data–will only become more important in the months and years ahead. And she also warned users not to become too enamored of new, supposedly surveillance-resistant communications services that are springing up.
“If you don’t have a reasonable expectation of privacy in encrypted data, where do you have that expectation?” she said. “We need to stop making promises to users that we don’t know if we can keep, like NSA-proof email. I would be very skeptical of claims like that. I don’t know if anybody can actually make a promise like that.”
Dennis Fisher and Mike Mimoso run down the news from day two of the RSA Conference, including the new FBI director’s speech, and preview TrustyCon. http://threatpost.com/files/2014/02/digital_underground_147.mp3
The official mobile application for the ongoing RSA Conference contains a half-dozen security vulnerabilities, according to an analysis performed by researchers from the security service provider IOActive.
IOActive chief technical officer Gunter Ollmann claims the most severe of the vulnerabilities could give an attacker the ability to perform man-in-the-middle attacks, injecting malicious code and stealing login credentials.
“If we were dealing with a banking application,” Ollmann writes, “then heads would have been rolling in an engineering department, but this particular app has only been downloaded a few thousand times, and I seriously doubt that some evil hacker is going to take the time out of their day to target this one application (out of tens-of-millions) to try to phish credentials to a conference.”
While Ollmann notes that the man-in-the-middle vulnerability mentioned above is the most severe, he says the second most severe bug is actually more interesting. The application apparently downloads a SQLite database file that is then used to populate the app’s user interface with various conference information, like speaker profiles and schedules. Seems innocuous enough, but that database – for reasons that remain a mystery to Ollmann – contains the first and last names, employers, and titles of every user that has downloaded and registered with the application.
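To illustrate why shipping such a file is risky: anyone who unpacks the app can read the database with a few lines of stock SQLite code. The schema below is invented for illustration; the actual table and column names in the app’s database aren’t public:

```python
import sqlite3

# Illustrative only: the table and column names here are assumptions,
# not the RSA app's actual schema. An in-memory DB stands in for the
# bundled file so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE registrants "
             "(first_name TEXT, last_name TEXT, employer TEXT, title TEXT)")
conn.execute("INSERT INTO registrants VALUES "
             "('Alice', 'Doe', 'Example Corp', 'Analyst')")

# Anyone holding the database file can dump every registrant's details.
rows = conn.execute("SELECT first_name, last_name, employer, title "
                    "FROM registrants").fetchall()
for first, last, employer, title in rows:
    print(f"{first} {last} -- {title}, {employer}")
```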
Ollmann admits he’s taking a bit of a potshot at one of the premier security industry conferences, but the point he is really trying to make, he claims, is a bigger one.
“Security flaws in mobile applications (particularly these rapidly developed and targeted apps) are endemic, and I think the RSA example helps prove the point that there are often inherent risks in even the most benign applications,” he said.
SAN FRANCISCO – Outgoing FBI Director Robert Mueller predicted to his successor James B. Comey that cybersecurity would dominate his 10-year tenure much the same way terrorism did Mueller’s.
“After five months, he’s right,” Comey said today during his keynote address at RSA Conference 2014.
Comey’s first appearance at RSA was a breezy 30-minute monologue disguised as a familiar plea for enhanced information sharing between the public and private sectors. It also included a rundown of the nation-state and criminal threats facing the U.S., and an announcement that the FBI will release an unclassified version of its Binary Analysis Characterization and Storage System (BACSS) malware repository later this year.
“Send your malware sample to us, and you get a report back in a matter of hours on how it works, what it’s targeting and where it’s been seen elsewhere,” Comey said. “We hope to get BACSS on the same level as our repositories for fingerprints, criminal records and DNA.”
That seems to be an easier goal for Comey to attain than the information sharing vision he outlined. The NSA’s surveillance activities have made corporations gun shy of the government, and talk of machine-to-machine communication in real time about threats and vulnerabilities is sure to make some uneasy.
“I imagine a day when intelligence sources, the government, antivirus companies, financial and communications companies share machine data instantaneously,” Comey said. “To do all that, we need an automated intrusion detection system, in a standard language and native format, to all communicate in real time. And we must do this and be mindful of the need to protect privacy.”
Government agencies already are mandated to deploy the Einstein IDS on all network gateways in order to monitor traffic for attacks. The NSA has a proposal out for an enhancement to Einstein that would allow for monitoring of government traffic on private sector computers.
“We need help,” Comey said. “You are victims, and the key to defeating cybercrime as well. The information is on your servers; you have the expertise and knowledge to help us and we are actively trying to listen.”
Information sharing has never managed to clear significant hurdles. Private companies are in no hurry to give the government access to networks to investigate attacks or collect forensic information. Attack data must also be sanitized so as not to expose companies to additional attacks or hurt their competitive standing. Comey, who spent the latter half of his career working as general counsel at Lockheed Martin and Bridgewater Associates, tried to assure the audience he understood their hesitation.
“We don’t do a good job clarifying what we need to do,” Comey said. “There’s no unifying threat reporting system. Who in government is responsible for what in terms of cybercrime? I get that. I know where you’re coming from.
“Information always seemed to flow in one direction toward government,” Comey said. “No doubt government has information it cannot always share for reasons that I’m sure make sense to you. We will share much as we can, as quickly as we can, and in the most usable format.”
*James Comey image via RSA
SAN FRANCISCO – Privacy has been in a stranglehold for a long time. Some believe it’s a fleeting concept done irreparable harm by the Snowden revelations. Others believe it’s merely in a transition until the norms of Internet behavior are sorted out.
The privacy chiefs of Google, Microsoft and Intel Security tried to do some table-setting today at RSA Conference 2014, lobbying for even more transparency in reporting government requests for user data and explaining how the notion of privacy has been flipped on its ear.
Microsoft chief privacy officer Brendon Lynch reiterated a point made by CSO Scott Charney a day earlier that Microsoft has never had a request for bulk data, and how it would fight such a request if it ever landed on their desk. Google senior corporate counsel Keith Enright reinforced the point that the company never granted the government direct access to internal Google systems.
“Our leadership has been clear and vocal,” Enright said. “We are leading the charge for increased transparency. It’s important people understand what their governments are doing. Transparency is the only way we have accountability and are able to push for change.”
Recently, the Department of Justice reached a settlement with major Internet companies, establishing new reporting norms around government requests for data in the form of National Security Letters and Foreign Intelligence Surveillance Court orders. Since material in the Snowden documents intimated on several occasions that the National Security Agency (NSA) had some sort of access to Google, Facebook, Yahoo and others, the providers believed their hands were tied by reporting restrictions.
“Some of the reporting was hyperbolic and there were some frustrations not being able to be as transparent as we could be,” Lynch said. “We are now able to refute reports of unfettered access to data and bulk collection. I’m happy to report that those requests impact only a fraction of 1 percent of customer data.”
The panelists, including Intel Security chief privacy officer Michelle Dennedy, also emphasized that this is a global issue affecting economies worldwide; the NSA proved its long reach when it was disclosed that the agency was tapping overseas connections between data centers to intercept data.
“It’s not just one government, it’s all governments,” Dennedy said, emphasizing the S.
On a less global scale, the experts contemplated how Web-based services and social interaction online necessitate a compromise and redefining of privacy.
“Privacy is about user expectations. Users engage with a service and on some level recognize data moves,” Google’s Enright said. “This requires us to be responsible, inform users and give them meaningful control. If a service is appropriately designed and users understand what they’re engaging with and the transactions they’re making, I don’t think there’s a net loss of privacy.”
The panelists also lamented the depth and breadth legal counsel requires them to provide in privacy statements that are largely ignored by users. Users are also forced to make changes to settings in services or browsers they’re unlikely to seek out or understand. Google, for example, offers its users the option to turn off behavioral targeting within their ad settings. The Do Not Track option in web browsers is also available to users, but standards hang-ups leave it up to websites to decide whether they’ll honor the DNT signal.
“Privacy is not synonymous with secrecy,” Dennedy said. “It’s the processing of personally identifiable information. You’re not giving up privacy when you choose to communicate with another person. We want our customers to stick with us. We want transparency to be a differentiator; that’s a good thing.”
It’s only been a few days since Apple fixed the nasty certificate-validation “goto fail” vulnerability in iOS and OSX and now word comes that another bug, one that could allow an attacker to monitor keystrokes on iOS 7 devices without the user being any the wiser, also exists.
The problem apparently exists on iOS 6.1.x and the following iOS 7 versions: 7.0.4, 7.0.5 and 7.0.6.
Researchers at FireEye found the problem, which could essentially permit someone to keep track of “all the user touch/press events” happening in the background of one’s iPhone or iPad. Some of the details an attacker could glean include knowing whenever a user touches the screen, presses the home button, increases or decreases the volume, or uses Touch ID, the fingerprint identity sensor Apple shipped with its latest iPhone 5S device.
The bug relies on bypassing the phone’s “background app refresh,” a resource that the operating system would normally allocate for apps to run in the background. The feature is intended to help users multitask and easily switch between recent apps they’ve used.
The FireEye researchers, Min Zheng, Hui Xue and Tao Wei, were able to exploit the hole by creating a proof-of-concept monitoring app they claim could bypass Apple’s app review process and cause trouble as long as they could get the victim to download the app via phishing. The researchers claim they could also “exploit another remote vulnerability of some app and then monitor in the background.”
While the app can’t tell exactly which keys the victim is pressing, it does provide a list of time-stamped X and Y coordinates that an attacker could later decode to determine which buttons were pressed.
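That decoding step is simple geometry: each on-screen key occupies a known rectangle, so logged coordinates map straight back to key presses. A minimal sketch, with key rectangles invented for illustration (not Apple's actual keyboard layout):

```python
# Hypothetical decoding of time-stamped (x, y) touch events into key
# presses. The rectangles (x0, y0, x1, y1) below are invented for
# illustration; a real attacker would measure the target device's layout.
KEY_RECTS = {
    "q": (0, 0, 32, 50),
    "w": (32, 0, 64, 50),
    "e": (64, 0, 96, 50),
}

def decode(events):
    """events: list of (timestamp, x, y); returns inferred (timestamp, key)."""
    out = []
    for t, x, y in events:
        for key, (x0, y0, x1, y1) in KEY_RECTS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                out.append((t, key))
                break
    return out

print(decode([(0.0, 10, 20), (0.3, 70, 25)]))  # [(0.0, 'q'), (0.3, 'e')]
```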
Keyloggers or apps that allow background monitoring for iOS devices are nothing new. Multiple apps can run silently in the background on phones, mining users’ activities. The apps can be purchased, some for a lump sum, some for a subscription fee, through third party merchants. The caveat for those apps is that the device needs to have been jailbroken. In this situation, however, the FireEye researchers claim that non-jailbroken devices could also be vulnerable.
According to the blog post the group published Monday, the exploit was tested on a non-jailbroken iPhone 5S running iOS 7.0.4.
The trio claim they’re in the middle of collaborating with Apple on the bug. While a fix is coming, if anyone is truly concerned about being spied on in the meantime, the researchers recommend selectively killing apps running in the background, which users can do by pressing the home button twice and swiping an app up and out of preview to disable unnecessary or suspicious applications.
SAN FRANCISCO–The concept of threat modeling has evolved quite a lot in the last few years, moving from an activity that massive software companies such as Microsoft and Google use to anticipate and defend against potential threats to their products to something that many smaller organizations practice. Starting a threat modeling system can seem daunting, but the good news is that there’s no one right way to do it, just the right way for a given organization.
Microsoft has been using some form of threat modeling internally for many years now and the company’s security group has spent a lot of time speaking publicly about the benefits of the practice and advocating for wider adoption of it. Adam Shostack, a program manager in Microsoft’s Trustworthy Computing group, has been one of the main proponents of threat modeling’s use, and he said that he’s reached the conclusion that threat modeling is not one defined set of methods or principles but a fluid and dynamic way of reducing security risks to products and services.
“I now think of threat modeling like Legos. There are things you can snap together and use what you need,” he said during a talk at the RSA Conference here Wednesday. “There’s no one way to threat model. The right way is the way that fixes good threats.”
Security experts often will tell developers that in order to build defensible and resilient products, they need to think like an attacker. That is, look at the product or system the way that a potential adversary would see it, find the weak spots that are ripe for exploitation and correct them. But Shostack said that isn’t exactly the most useful advice.
“Being told to think like an attacker is like being told to think like a professional chef,” said Shostack, who recently published a new book on the topic, Threat Modeling: Designing for Security. “A lot of security people like to cook, but if someone told you to go to the store and buy enough chickens for a restaurant that seats 78 people and turns over three times a night, you’d have no idea what to do.”
As with nearly everything in security these days, there are a number of methodologies, models, checklists and other aids designed to help organizations implement threat modeling. Those tools can be useful and have their places, Shostack said, but none of them should be seen as the perfect answer. Rather, use them as part of the process of putting building blocks in place as you construct a threat modeling program.
“We want to focus on finding good threats. Use your assets and the actions of attackers to make threats real,” he said. “It’s hard to go from a checklist to a broader system. You have to think about threat modeling your software as an end-to-end process.”
Of course, even the best and most well-constructed threat modeling program still has to deal with the most unpredictable and dangerous threat to the product: the end user. Trying to predict how users will misuse, abuse and break a piece of software is a fool’s errand, but Shostack said it’s still up to the professionals to put their products in the best position to survive in today’s environment.
“To tell people that they can’t use their computers for what they want is a battle we’re going to lose over and over again,” he said. “People don’t buy their computers to be secure. They buy them to watch dancing babies.”
SAN FRANCISCO — Two zero-day vulnerabilities in Avaya’s latest one-X 9608 IP telephones have been discovered and are expected to be patched on Friday by the provider.
Researcher Ang Cui, a Ph.D. candidate at Columbia University and chief scientist at Red Balloon Security, will demonstrate an exploit and provide details on the previously unreported vulnerabilities during a presentation, also on Friday at RSA Conference 2014.
Cui has previously discovered zero-days in other network enabled embedded devices. He said the Avaya bugs are remotely exploitable, the exploits are relatively simple, and potentially millions of phones are at risk (Avaya and Cisco are IP phone market share leaders).
“It will absolutely compromise the phone remotely,” Cui said. His presentation will include a demonstration of a worm he wrote that remotely exploits the bug and exfiltrates raw audio data by turning the circuit board into a radio transmitter.
“It will do real-time speech detection and transmit a text transcript,” Cui said.
Dr. Salvatore J. Stolfo, a director at Red Balloon and advisor at Columbia University where Cui is a Ph.D. candidate, said the phone will continue to function as intended, but will also be turned into a listening post.
“With the receiver on the hook, the phone will transmit over the network,” Stolfo said. “You can spy on someone in an office if you are able to inject malcode remotely.”
The exploit, Cui said, bypasses security appliances scanning for malicious outgoing network traffic. He said the same attack is applicable to other embedded network devices such as printers and routers.
Cui and Stolfo said an attacker would be able to pivot from other vulnerable embedded devices on the network as well, again eluding detection by IPS and other security technology. Cui’s worm, for example, begins with a printer exploit of a 2011 firmware vulnerability which replaces the existing code with malicious firmware. An attacker would need to entice the victim to print, for example, an attachment containing the embedded malicious firmware. Once executed, the malware establishes a backdoor and awaits commands; the attacker could scan for other embedded devices such as IP phones and routers listening on the same port.
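That last step, finding other embedded devices listening on a known port, is ordinary network reconnaissance. A minimal sketch of such a TCP sweep (the hosts and port are placeholders, not details from Cui's worm):

```python
import socket

def sweep(hosts, port, timeout=0.5):
    """Return the subset of hosts with the given TCP port open."""
    open_hosts = []
    for host in hosts:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP connect succeeds,
            # an errno otherwise (refused, timed out, unreachable).
            if s.connect_ex((host, port)) == 0:
                open_hosts.append(host)
    return open_hosts
```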
More than a year ago, Cui demonstrated an attack against a Cisco VoIP phone that also turned it into a listening device. He was able to put code on the phone by installing—and then removing—an external circuit board from the Ethernet port on the phone. Then using his smartphone, Cui was able to spy through the phone even though its Off-Hook switch was enabled. Cui said he was also able to pull off the same attack remotely, without the need for physical access to the device.
Dennis Fisher and Mike Mimoso discuss the happenings on day one of the RSA Conference, including Art Coviello’s keynote and what makes the NSA mad. http://threatpost.com/files/2014/02/digital_underground_146.mp3
SAN FRANCISCO–Of the small pool of people who have seen the Snowden documents, few, if any, are as technically savvy and knowledgeable about security and surveillance as Bruce Schneier. And after reading through stacks and stacks of them, Schneier says that yes, the NSA is extremely capable and full of smart people but “they are not made of magic”.
A cryptographer by training and a security thinker by trade, Schneier has spent many hours reading the Snowden documents and thinking about what they mean, both in terms of the NSA’s actual capabilities and their effect on data security and privacy. Much of the news, clearly, is not good on that front. The NSA has a dual mission: to protect the communications infrastructure of the United States and to eavesdrop on the communications of foreign nations. The agency, Schneier said, is very, very good at both of those missions, but it’s the eavesdropping piece that has grown exponentially in recent years as the Internet and mobile devices have become pervasive.
“The NSA has turned the Internet into a giant surveillance platform, one that is robust politically and technologically,” Schneier said during a talk at the RSA Conference here Tuesday. “When you have the budget of the NSA and you have the choice to get the data this way or get it that way, the correct answer is both. Fundamentally the NSA’s mission is to collect everything, and that’s how you have to think about it.”
That collect-everything mentality is enabled by the vast budget, reach and computing power that the NSA has at its disposal. Those advantages allow the agency to not just collect, but store, virtually any amount of data it chooses. But one of the NSA’s other key assets–and perhaps its largest advantage over other intelligence agencies–is its brain power. The agency employs an untold number of top mathematicians and cryptographers and computer scientists, and they all work on solving difficult problems. One of their tasks is overcoming a key obstacle for NSA data collection: encryption.
The NSA is known to be working on an unspecified capability to defeat SSL, and Schneier said that while he hasn’t seen any direct evidence of what that capability might be, there are a number of possibilities.
“My favorite idea right now is elliptic curves. If they know that certain curves are weak they could then try to get algorithms using those curves,” he said.
Other possibilities are some kind of factoring breakthrough, a successful attack on the RC4 cipher, which is known to have some problems already, or a method for exploiting weak random-number generators. But even with all of the resources at its disposal, the NSA currently has a difficult time dealing with encrypted traffic, Schneier said, and that’s something that users should use to their advantage.
“The NSA can’t break Tor and it [ticks] them off. Most crypto drives the NSA batty,” he said. “Encryption works and it works at scale. The NSA may have a larger budget than all of the other intelligence agencies combined, but they are not made of magic. Our goal should be to make eavesdropping more expensive. We should have the goal of limiting bulk collection and forcing targeted collection.”
Schneier added that now that many of the NSA’s methods and tools are out in the open, it’s reasonable to expect other agencies, as well as other classes of attackers, to adopt some of them.
“These techniques are spreading. Figure that this is a three to five-year window for cybercriminals to use them,” he said. “Today’s NSA programs are tomorrow’s PhD theses and the next day’s hacker tools. Surveillance is the business model of the Internet.”
SAN FRANCISCO – Enterprises beat up by wave after wave of Java exploits and calls to disable the platform may soon have some relief in sight.
Microsoft’s free Enhanced Mitigation Experience Toolkit will soon have a new feature that allows users to configure where plug-ins, especially those targeted by hackers such as Java and Adobe Flash, are allowed to run by default. The feature is called Attack Surface Reduction, and it’s one of two that Microsoft has made available in a technical preview of EMET 5.0 released today at RSA Conference 2014.
“ASR is going to help a lot of people,” said Microsoft software security engineer Jonathan Ness.
Blocking Java outright, despite some of the dire attacks reported during the past 15 months, isn’t an option for most companies that have built custom Java applications for critical processes such as payroll or human resources. With 5.0, users will have the option to run plug-ins in the Intranet zone while blocking them in the browser’s Internet zone, or vice-versa.
“It gives customers more control over how plug-ins are loaded into applications,” said Ness, explaining users will have the flexibility, for example, to allow Flash to load in a browser, but block it in an Office application such as Word or Excel. A number of advanced attacks have contained malicious embedded Flash files inside benign Word documents or Excel spreadsheets. Microsoft hopes to use feedback received on the Technical Preview to shape the final 5.0 product.
“Feedback is really valuable, and has helped shape this tool,” Ness said, adding that the release of EMET 4.1 was delayed right before launch to correct a shortcoming pointed out by a beta user. The customer was not pleased with EMET’s automatic termination of applications upon detecting an exploit, rather than having a configuration option available where the event could be logged and analyzed later.
Microsoft has been vocal about recommending EMET as a temporary mitigation for zero-day attacks against previously unreported vulnerabilities. EMET includes a dozen mitigations that block exploit attempts targeting memory vulnerabilities. Most of the mitigations counter return-oriented programming exploits; the toolkit also includes memory-based protections such as ASLR, DEP, heap-spray mitigation and SEHOP. EMET is not meant as a permanent fix, but only as a stopgap until a patch is ready for rollout.
The second new feature in the EMET 5.0 Technical Preview is a set of enhancements to Export Address Table Filtering, called EAF+. Ness said EAF+ adds filtering on how shellcode reads the export address table to resolve API addresses.
“With OS functions such as open file or create process, exported code wants to jump into EAF. This filters the shellcode and blocks it if it’s an exploit,” Ness said. “We’re extending that with new filtering (KERNELBASE exports and additional integrity checks on stack registers and limits).”
EMET raises development costs for exploit writers with its memory protections, so much so that the recent Operation SnowMan APT attack included a module that detected whether an EMET library was present and if so, the exploit would not execute itself. Researchers have developed bypasses of EMET’s mitigations, first Aaron Portnoy of Exodus Intelligence last summer, and most recently, researchers at Bromium Labs who developed a complete EMET bypass.
Microsoft’s Ness said improvements to EMET’s Deep Hooks API protections have been rolled into the 5.0 Technical Preview that address the Bromium bypass. Whether it remains on by default in the final 5.0 remains to be seen as application compatibility issues have to be resolved first, Ness said.
Apple today shipped a security update resolving a critical certificate-validation vulnerability in its OS X Mavericks operating system.
Details of the bug, which exists in OS X version 10.9.1 and is resolved by version 10.9.2, emerged on Friday after the company patched essentially the same bug in its iOS mobile operating system.
On unpatched systems, the bug affects the signature verification process in such a way that a server could send a valid certificate chain to the client and not have to sign the handshake at all, according to an analysis performed by security researcher Adam Langley. Langley says the problem arose from the way the certificate validation code processed two failures in a row.
“This signature verification is checking the signature in a ServerKeyExchange message. This is used in DHE and ECDHE ciphersuites to communicate the ephemeral key for the connection. The server is saying ‘here’s the ephemeral key and here’s a signature, from my certificate, so you know that it’s from me’,” Langley wrote in his analysis. “Now, if the link between the ephemeral key and the certificate chain is broken, then everything falls apart. It’s possible to send a correct certificate chain to the client, but sign the handshake with the wrong private key, or not sign it at all! There’s no proof that the server possesses the private key matching the public key in its certificate.”
Apple made the update that fixes this and a number of other bugs available a few hours ago. Apple warns that an attacker with a privileged network position can capture or modify data in sessions that should be protected by SSL or TLS on unpatched systems. Apple attributes the issue to a failure on the part of its secure transport mechanism to validate the authenticity of the connection. They claim to have resolved the problem by restoring certain validation steps that had been missing.
Due to the nature of the bug, Langley said certificate pinning – a defensive method that gives browsers the ability to associate a specific certificate with a specific site, thus preventing man-in-the-middle attacks – likely would not have any impact on this flaw, because there is no problem with the certificate itself:
“Because the certificate chain is correct and it’s the link from the handshake to that chain which is broken, I don’t believe any sort of certificate pinning would have stopped this.”
Another group of researchers from the security company CrowdStrike also looked at the code and noted that potential exploits of this vulnerability could include interception of sessions with webmail services, or any other SSL-protected sites.
“Due to a flaw in authentication logic on iOS and OS X platforms, an attacker can bypass SSL/TLS verification routines upon the initial connection handshake,” reads the CrowdStrike analysis. “This enables an adversary to masquerade as coming from a trusted remote endpoint, such as your favorite webmail provider and perform full interception of encrypted traffic between you and the destination server, as well as give them a capability to modify the data in flight (such as deliver exploits to take control of your system).”
The CrowdStrike researchers said that finding non-encrypted packet data in the SSL/TLS handshake could be indicative of exploit attempts targeting this vulnerability.
Of course, this certificate-validation problem is not the sole security fix issued by Apple today; the company is well known for publishing long, detailed security updates. Other updates include fixes for:
- a number of Apache vulnerabilities;
- a memory corruption problem related to the handling of type 1 fonts;
- a few application sandbox bypasses;
- the root certificate system;
- a buffer overflow that could allow for arbitrary code execution in CoreAnimation;
- a signedness issue in CoreText’s handling of unicode fonts that could lead to arbitrary code execution or unexpected application termination;
- a credential intercept for anyone using curl to connect to an HTTPS URL containing an IP address;
- a bug that could allow an attacker to take control of the system clock;
- an issue in Finder that could permit unauthorized access to certain files;
- a memory leak problem spurred by maliciously crafted JPEGs;
- an issue with the NVIDIA drivers through which the execution of a malicious application could result in arbitrary code execution within the graphics driver;
- multiple PHP vulnerabilities;
- a double free bug in QuickLook that could lead to an unexpected application termination or arbitrary code execution if a user downloaded a maliciously crafted Microsoft Word document;
- a handful of QuickTime bugs that could lead to application termination or arbitrary code execution;
- and a whole slew of problems affecting users that have not yet updated to the latest Mavericks iteration of OS X.
You can examine the full security contents of the update here.
Attackers leveraged a Pony botnet controller not only to siphon away a large batch of account credentials but also to make off with over $200,000 in Bitcoin and other virtual currencies over a four-month span, according to researchers this week.
It’s the second high profile instance of the Pony botnet seen over the last several months.
The source code for Pony, a botnet management interface, was initially leaked in the summer of 2013. The Trojan, whose sole purpose is to steal private data from infected machines, has since been blamed for a sharp rise in data-stealing attacks.
According to a post yesterday on Trustwave SpiderLabs’ Anterior blog, the botnet’s latest iteration is much more advanced. While the latest round of attacks compromised a scant 85 wallets, they yielded roughly $200,000 in crypto-currency, including Bitcoins (355), LiteCoins (280), PrimeCoins (33) and FeatherCoins (46).
“Despite the small number of wallets compromised, this is one of the larger caches of Bitcoin wallets stolen from end-users,” Daniel Chechik and Anat Davidi, two researchers with the company, wrote Monday.
The two assert that “it’s only natural” that Pony would begin to start going after people’s virtual wallets.
The Bitcoin theft is in addition to a slew of credentials – over 700,000 – that Pony pilfered from September 2013 to January 2014, including:
- 600,000 website login credentials
- 100,000 email account credentials
- 16,000 FTP account credentials
- 900 Secure Shell account credentials
- 800 Remote Desktop credentials
While 700,000 may sound like a lot, the numbers are actually way down from a separate instance of Pony that SpiderLabs reported in December in which a campaign unearthed two million account credentials. Those usernames and passwords were mostly linked to Facebook, Google, and Twitter along with other social media sites but some were also linked to the ADP payroll service, something Chechik and Davidi warned at the time could “have direct financial repercussions.”
Also unlike the Pony incident in December, Trustwave researchers were able to glean a little more information about the geographical location of its victims this time around. In December the cybercriminals used a reverse proxy to drop the bots, but this time the Pony bots interacted with a command-and-control (C&C) server, giving the researchers a much better idea about the campaign’s targets. A chunk of the attacks were found taking aim at European users, with sites in Germany, Poland, Italy and the Czech Republic seeing 62 percent of the attacks.
As the following graphic illustrates, after a series of spikes and lulls in attack activity, the attackers decided to pull the plug on the most recent campaign at 3 a.m. on January 17.
Still, Trustwave believes Pony isn’t done infecting users. Speaking to Reuters, Ziv Mador, a security research director with the company, said that while Trustwave was able to disrupt the botnet’s servers, he believes the crime ring is still operating and will continue to target virtual wallets in the future.
Along with Bitcoin and the currencies listed above, Pony also looks for more than 30 different types of virtual currency, including Anoncoin, Fastcoin and Luckycoin, to name a few. Trustwave is warning users with unencrypted wallets associated with any of the listed currencies that Pony may be looking for their money.
Chechik and Davidi claim that while it’s difficult to say with certainty that the Bitcoin wallets associated with the attack were raided, it’s equally tough to verify that the transfers tied to them were legitimate. Either way, the wallets were compromised in some form.
Because of that uncertainty – there’s really no way to contact the wallets’ owners – the company has set up a tool as a public service to let users know if they’ve been implicated by Pony. Users can input their Bitcoin wallet public key or, on another site, their email address, to see if their credentials have been compromised by the most recent campaign.
News of the most recent Pony attack comes in the wake of revelations that the largest and most popular Bitcoin exchange, Mt. Gox, is nearing collapse. Monday saw Mt. Gox’s chief executive resign from the Bitcoin Foundation, the company delete all of its tweets and take its site offline as word began to circulate that the service was expected to file for bankruptcy. Bitcoin loyalists fear the worst amid rumors that the company may have suffered a catastrophic theft to the tune of 744,408 Bitcoins, or $350 million over the last few months.
SAN FRANCISCO–Security people are, by nature, cautious and methodical, and that is even more true of cryptographers. And in the current environment, when new adversaries seem to emerge on a daily basis and cryptographic standards are under intense scrutiny, a panel of some of the biggest names in cryptography said more conservatism and caution in the development and deployment of encryption is warranted.
In most years, the cryptographers’ panel at the RSA Conference here is a deep discussion of crypto standards, key lengths and the relative merits of various hash functions. But the bright light that has been shone on the NSA’s activities recently gave the panelists quite a bit more to discuss this year. The panelists, who included Adi Shamir of the Weizmann Institute, Ron Rivest of MIT, Whit Diffie of SafeLogic and Brian LaMacchia of Microsoft Research, had plenty to say about the revelations of the NSA’s reported efforts to undermine crypto algorithms and influence technical standards.
“I was most surprised by the Americans’ deep involvement in this,” said Shamir.
“We’ve had a loss of innocence as we’ve seen what goes on behind the curtains of government,” said Paul Kocher of Cryptography Research, who moderated the panel.
Some of the most damaging and concerning revelations to come from the Edward Snowden leaks have been about the agency’s alleged efforts to weaken some technical standards and crypto algorithms. There are also reams of documents showing the NSA’s work at getting around SSL in various ways, which Shamir said is actually a good sign.
“In all of the documents, there isn’t any indication that they managed to break the mathematics,” he said.
Still, the panelists agreed that the NSA revelations should serve as a reminder to cryptographers and product designers to err on the side of caution when it comes to design choices.
“We should really be putting a hefty degree of conservatism in our standards,” said Rivest, who, along with Shamir and Len Adleman, designed the RSA algorithm.
As the events of the last year have shown, standards and technologies that seem to be on solid footing one day can be revealed as weak or compromised the next. LaMacchia, of Microsoft Research, said that the prudent thing is to work under the assumption that at some point, the algorithm you’re designing or using will fail.
“You have to plan for your algorithm to fail. Early on I think we underestimated the effort it takes to move to a new cipher suite,” he said.
SAN FRANCISCO – RSA Security executive chairman Art Coviello today at RSA Conference 2014 made his first public comments about the security company’s relationship with the National Security Agency, painting the landmark firm as a victim of the spy agency’s blurring of the lines between its offensive and defensive missions.
A Reuters report in December alleged RSA Security was paid $10 million in a secret contract with the NSA to use encryption software—specifically the Dual EC DRBG random number generator—that the spy agency could easily crack as part of its surveillance programs. The deal goes back nearly a decade to 2006, and according to Reuters, represented one third of the company’s crypto revenue at the time.
The bombshell came three months after RSA Security followed NIST’s lead in September and recommended that developers no longer use the algorithm, which has long been considered weak and likely backdoored.
Coviello reiterated that RSA’s partnership with the NSA is a matter of public record, but that circumstances require a re-evaluation of that relationship. RSA, for example, works closely with the NSA’s defensive arm, the Information Assurance Directorate (IAD); Coviello said he supports a presidential review group’s recommendation to simplify the NSA’s role as solely a foreign intelligence gathering unit and that the IAD be spun out and managed by another agency.
“When or if the NSA blurs the line between its defensive and intelligence gathering roles, and exploits its position of trust within the security community, then that’s a problem,” Coviello said during his keynote address kicking off the conference. “Because, if in matters of standards, in reviews of technology, or in any area where we open ourselves up, we can’t be sure which part of the NSA we’re actually working with, and what their motivations are, then we should not work with the NSA at all.”
Coviello also called for global reform of surveillance and privacy protections, outlining four principles he urges governments worldwide to consider: renounce the use of cyberweapons internationally; cooperate with one another to investigate and prosecute cybercriminals; ensure the security of commerce online and the protection of intellectual property; and ensure privacy for individuals.
“All intelligence agencies around the world need to adopt a governance model that enables them to do more to defend us, and less to offend us,” said Coviello, who strongly denounced the use of cyberweapons and said governments should put limitations and bans on them similar to those imposed on nuclear and chemical weapons.
Coviello tried to bring historical context to the Dual EC DRBG controversy, which he said has flipped the industry’s perception of RSA Security to one of being in cahoots with the government rather than leading the charge against it in matters of privacy and protecting infrastructure. Coviello said the landscape changed in the late 1990s when RSA’s crypto patents expired and open source implementations of the famed RSA algorithm became the norm. Rather than fight the trend, Coviello said the company made a decision to lead as a contributor to standards efforts, including NIST and ANSI X9.
Coviello said in the early 2000s, RSA Security supported the move to the NIST-sponsored Dual EC DRBG, an elliptic-curve algorithm, over hash-derived algorithms. By 2006, NIST had made Dual EC DRBG a standard and RSA made the algorithm the default random-number generator in its BSAFE crypto libraries that were made available to developers and became foundational encryption technology in any number of home-grown and commercial applications. Dual EC DRBG was also the default RNG in its key management product RSA Data Protection Manager. BSAFE is embedded in many applications, providing cryptography, digital certificates and TLS security.
“Given that RSA’s market for encryption tools was increasingly limited to the U.S. federal government and organizations selling applications to the federal government, use of this algorithm as a default in many of our toolkits allowed us to meet government certification requirements,” Coviello said.
Dual EC DRBG had a target on its back going back to 2007 when suspicions were raised by cryptographers Dan Shumow and Niels Ferguson during a presentation at the CRYPTO conference, as well as in an essay by Bruce Schneier who said the inherent weakness in the algorithm “can only be described as a backdoor.”
The knock against the maligned algorithm is that it’s slow and contains a bias, meaning the random numbers it generates aren’t so random. Schneier wrote that the numbers have a relationship with a secret second set of numbers that enables anyone who knows that second set to predict the output of the random number generator.
“To put that in real terms, you only need to monitor one TLS Internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG,” Schneier said. “The researchers don’t know what the secret numbers are. But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.”
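The relationship Schneier describes can be sketched mathematically. In rough outline (following the Shumow–Ferguson observation, with truncation details simplified), Dual EC DRBG advances its internal state $s_i$ using two fixed public curve points $P$ and $Q$, where $x(\cdot)$ denotes taking a point’s x-coordinate:

```latex
s_{i+1} = x(s_i \cdot P), \qquad r_i = x(s_{i+1} \cdot Q)
```

The output is $r_i$ with its top 16 bits discarded. If whoever generated the constants also knows a scalar $d$ such that $P = d \cdot Q$, then recovering the full point $R = s_{i+1} \cdot Q$ from a single output (by guessing the discarded bits) lets them compute $x(d \cdot R) = x(s_{i+1} \cdot P) = s_{i+2}$ — the generator’s next internal state — and thereby predict all subsequent output.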
Coviello said the rapid growth and relative young age of the Internet as a platform for commerce and communication has put us at a crossroads where “norms” are required.
“We are in the midst of chaos and confusion, but if we don’t figure out digital norms and do so quickly, the alternative may be extinction,” Coviello said. “Extinction of the Internet as a trusted environment to do business; extinction as a trusted environment to coordinate research and development; extinction as a trusted environment to communicate with each other.”
SAN FRANCISCO–Despite all of the revelations and accusations and recriminations in the security industry in the last year, Microsoft CSO Scott Charney said he is still optimistic about the industry’s ability to defend users. However, that optimism is tempered by concern about the threats those users face from attackers and governments alike.
The threat landscape is an ever-shifting thing, and the last 12 months have seen a massive change in the way that defenders perceive who their adversaries are. Governments and intelligence agencies have been added to many of those lists, and for companies like Microsoft that work closely with governments around the world, but also have hundreds of millions of corporate and home users, this makes for a precarious situation. They are often asked for user data by law enforcement and other government agencies, through court orders and search warrants and other tools.
However, Charney said Microsoft doesn’t simply hand over data any time it gets a request.
“We have never gotten an order for bulk data, and we would fight an order for bulk data,” Charney said during a keynote speech at the RSA Conference here Tuesday.
Microsoft, Google and other tech giants have in recent months been pushing the United States government for the ability to publish more data on the kinds and volume of government requests they get. The government has relented in part, allowing these companies to become slightly more specific about these requests.
On the other side of the coin, Microsoft also shares its source code with governments around the world, something that Charney acknowledged has raised concerns in some circles, with people questioning whether a government could find a new bug in Windows and use it for its own purposes.
“Is it possible a government could find a bug? Sure. But we do code reviews to look for bugs, too,” he said. “We require them to report bugs they find, but how do you enforce that? By the way, you don’t need the source code to find bugs. People find bugs all the time.”
Addressing the issue of government surveillance and the developments of the last year, Charney said he still has faith in the security community’s ability to respond.
“We’ve had hard problems before and we have to address them,” he said. “We have to do this while thinking about which actions are appropriate or not.”
TextSecure, the secure messaging app developed by the encrypted communication provider WhisperSystems, is no longer merely a private short messaging service (SMS) application. According to a blog post penned by WhisperSystems co-founder Moxie Marlinspike, TextSecure is now a private, asynchronous instant messaging application that does not depend on SMS or multimedia messaging service (MMS).
In its latest version – released on Google Play today – encrypted group chat and push messaging capabilities are among the app’s new features. The update also offers end-to-end encryption, forward secrecy, and deniability with little or no user input. To be clear, the TextSecure server never stores or has access to any user communication or other data.
“The new TextSecure protocol doesn’t require a round trip key exchange process, eliminates half-open sessions, and is lightning fast – all without compromising forward secrecy or deniability,” Marlinspike writes. “This creates an experience that takes encryption entirely out of the user’s way. A user simply sends a message, and it’s encrypted end to end, every time.”
Like Apple’s iMessage service, when a TextSecure user communicates with another TextSecure user, the service sends messages over a data-network rather than via SMS – the protocol used by most other text messaging services. Under one configuration of the application, users can opt-into allowing TextSecure to fall back to SMS or MMS when they are communicating with users that do not have the TextSecure app. Also like iMessage, the messaging transport method is indicated by a color scheme (green for SMS; blue for data).
Under a second configuration, TextSecure acts more like WhatsApp, only ever communicating over a data-channel and only allowing for TextSecure-to-TextSecure communications.
(Image via WhisperSystems)
At present, there is no iOS version of TextSecure, but Marlinspike says the app will be available for users of Apple’s various mobile devices in the near future.
The new version also added support for encrypted group chat. In order to maintain the privacy of these group sessions, the TextSecure server neither stores nor has access to group metadata such as lists of group members, the group title, or even the group’s avatar icon.