Feed aggregator

Federal Court Rejects Lavabit’s Contempt Appeal

Threatpost for B2B - Wed, 04/16/2014 - 15:33

A federal appeals court rejected Lavabit’s appeal today, affirming contempt of court sanctions against the now-shuttered secure email provider that was forced to release its SSL keys to the FBI last year.

Those keys could have decrypted emails belonging to the company’s founder, Ladar Levison, as well as to Lavabit’s entire user base of roughly 400,000 people, which reportedly included former National Security Agency contractor turned whistleblower Edward Snowden. Levison ultimately shut Lavabit down in August 2013 before disclosing the keys.

According to the ruling (.PDF), issued today by the United States Court of Appeals for the Fourth Circuit, one of Lavabit’s biggest missteps was that it failed to raise its arguments before the District Court after it was initially held in contempt last year, something that “significantly alters the standard of review.”

Lavabit specifically argued against the Pen/Trap order, issued under a statute that allows the placement of a pen register and a trap-and-trace device on its system. Pen/Trap orders are court-ordered surveillance mechanisms that give the government access to all “non-content dialing, routing, addressing and signaling information” on a real-time basis for 60 days.

Lavabit’s appeal contended that the government overstepped the bounds of the Pen/Trap order when the FBI asked the firm to release its SSL keys.

Levison apparently made only one statement in his appeal related to the order, back in July, when he objected to turning over the private keys, insisting the move would “compromise all of the secure communications in and out of his network.”

Levison’s argument was not comprehensive enough, in the eyes of the court, which called the remark “vague,” and simply a reflection of his personal angst at the time over having to comply with the order.

“Lavabit never challenged the statutory validity of the Pen/Trap Order below or the court’s authority to act. To the contrary, Lavabit’s only point below alluded to the potential damage that compliance could cause to its chosen business model,” Judge G. Steven Agee, who authored the decision, wrote in the ruling today.

Agee’s opinion – joined by Judges Paul Niemeyer and Roger Gregory – was that the Pen/Trap Order levied on Lavabit always covered the encryption keys.

“If Lavabit truly believed the Pen/Trap Order to be an invalid request for the encryption keys, then the Government’s continuing reliance on that order should have spurred Lavabit to challenge it,” the decision reads, adding that the company should have acted after the district court issued the order on Aug. 1.

“Lavabit failed to make its most essential argument anywhere in its briefs or at oral argument,” Judge Agee said.

Lavabit brought up a handful of other arguments – that the case should be viewed as a matter of “immense public concern,” that the firm was unrepresented during some of its proceedings, etc. – but the court found no merit in these arguments, choosing not to rule on these claims.

“We reiterate that our review is circumscribed by the arguments that Lavabit raised below and in this Court. We take this narrow course because an appellate court is not a freestanding open forum for the discussion of esoteric hypothetical questions,” Agee wrote regarding Lavabit’s claims.

“The district court did not err, then, in finding Lavabit and Levison in contempt once they admittedly violated that order,” the ruling says of Lavabit’s actions, in closing.

The 10-year-old encrypted email service used a single set of SSL keys for all of its users that would unlock all traffic coming in and out of the company’s network.

Levison publicly maintained in an interview last fall that the FBI was exceeding its statutory authority in demanding Lavabit’s keys and claimed he was being forced by law to keep quiet about the case.

Refusing to become a “listening post” for the FBI, Levison elected to shut down the service in August amid looming legal threats that would have given the government access.

After filing his appeal, Levison gave users a brief five-day window in October to download their email archives and account data.

As Snowden is clearly tangled up in an ongoing criminal investigation, his name isn’t directly mentioned in today’s ruling, but it’s common knowledge that it’s his information the FBI was seeking when it initially served the Pen/Trap order on Lavabit last year.

In a talk at February’s TrustyCon conference, one of Levison’s lawyers, former Electronic Frontier Foundation attorney Marcia Hofmann, said that the Lavabit case could prove to be just the beginning and that the incident should help prompt other outfits to reconsider how to handle government requests.

“We need to update our threat models. Ladar was worried about data at rest, not data in transmission. The threats are different than we thought. Security and privacy enhancing services are really in the crosshairs. To the extent that you design a service like Lavabit, you should be thinking about how you’re going to deal with government requests,” Hofmann said.

Oracle Fixes 104 Security Vulnerabilities in Quarterly Patch Update

Threatpost for B2B - Wed, 04/16/2014 - 12:32

Software maker and database management company Oracle yesterday released its quarterly Critical Patch Update. The release resolves more than 100 security vulnerabilities, many of which received high Common Vulnerability Scoring System (CVSS) base scores and should be applied as soon as possible.

Products affected by the patch include but are not limited to Oracle Database, Fusion Middleware, Hyperion, Supply Chain Product Suite, iLearning, PeopleSoft Enterprise, Siebel CRM, Java SE, and Sun Microsystems Products Suite, including Oracle Linux and Virtualization, and Oracle MySQL.

Last week, Oracle released a list of products affected by the Heartbleed OpenSSL vulnerability, as well as their current status with respect to vulnerable versions of the encryption library.

Among the patches that should be prioritized are two bugs in Oracle’s database products. The more severe of these two issues could lead to a full compromise of impacted Windows systems, though exploitation would require that an attacker authenticate him or herself. Other platforms like Linux and Solaris are less affected because the database does not extend into the underlying operating system there.

The update also closes off 20 Fusion Middleware vulnerabilities, the most critical of which is remotely exploitable without authentication and could lead to a wide compromise of the WebLogic Server.

Also included in its April release are 37 Java vulnerabilities. Four of those received the highest possible CVSS ratings of 10.0. Oracle urges all users – home users in particular – to apply these patches immediately.

The patch update also fixes five vulnerabilities affecting Oracle Linux and Virtualization products. The most severe of these vulnerabilities could affect certain versions of Oracle Global Secure Desktop.

“Due to the relative severity of a number of the vulnerabilities fixed in this Critical Patch Update, Oracle strongly recommends that customers apply this Critical Patch Update as soon as possible,” wrote Oracle security assurance manager Eric Maurice.

Earlier this month, researchers from Security Explorations disclosed more than two dozen outstanding issues with the company’s Java Cloud Service platform. There is no mention of that line of products in the update, so it appears that the company did not resolve those bugs. At the beginning of March, researchers at the London-based computer security firm Portcullis claimed to have uncovered four bugs in Oracle’s Demantra Value Chain Planning suite of software. The update makes no mention of these vulnerabilities either.

Certificate Revocation Slow for Heartbleed Servers

Threatpost for B2B - Wed, 04/16/2014 - 12:05

The rush to revoke and replace digital certificates on Heartbleed-vulnerable Web servers seems to be no rush at all.

Internet research and security services firm Netcraft reports today that of the more than 500,000 servers it knows of that are running vulnerable versions of OpenSSL, only 80,000 certificates have been revoked so far. The urgency to do so was ramped up on Friday when four unrelated security researchers each were able to take advantage of the TLS heartbeat vulnerability to steal private SSL keys in a challenge set up by vendor CloudFlare.

Also, the first public reports of exploits against websites resulting in stolen data emerged, involving the Canada Revenue Agency and Mumsnet of the U.K.

“While some companies quickly recognized the need to issue new certificates in response to the Heartbleed bug, the number of revocations has not kept up,” wrote Paul Mutton. “This is a mistake, as there is little point issuing a new certificate if an attacker is still able to impersonate a website with the old one.”

Heartbleed is a dangerous Internet-wide bug that can be exploited to steal sensitive information such as user credentials, and also private encryption keys if the attack is replayed often enough. One researcher in the CloudFlare Challenge, Russian engineer Fedor Indutny, replayed his attack 2.5 million times before he was able to steal a key from an nginx server running an unpatched instance of OpenSSL set up by CloudFlare.

Researchers had speculated it would be incredibly difficult, and therefore unlikely, for anyone to steal private keys by exploiting Heartbleed, but that was proven incorrect: by Saturday morning there were four reported winners of the challenge, including Indutny, who was the first. Making matters worse, Heartbleed attacks do not leave log entries and are effectively undetectable.

The process of revoking old certificates and reissuing new ones involves working closely with a certificate authority, many of which offer self-service tools or APIs that help facilitate the process. The problem is that the wonky code was introduced into OpenSSL in December 2011 and there have been public reports that it has been exploited as far back as last November.

“You have to get your infrastructure patched so that any future damage will not be incurred because of the vulnerability, and the second priority is replacing or reissuing certificates to mitigate the risk from private keys stolen while the vulnerability existed in the wild,” said Marc Gaffan, cofounder of Incapsula. Users, for example, should make sure that sites on which they’re changing credentials have been patched, otherwise an attacker could continue to exploit an unpatched site stealing new credentials in the process.

Netcraft, meanwhile, estimates the cost of replacing compromised certs with new ones at more than $100 million; some CAs, however, are allowing customers to reissue and revoke certificates free of charge, Netcraft said. It points out also that many sites are buying new certificates rather than reissuing.

“Perhaps in the haste of resolving the problem, this seemed the easiest approach, making Heartbleed a bonanza for certificate authorities,” Mutton said.

Netcraft also points out that some companies—including large sites such as Yahoo’s mobile log-in page, the U.S. Senate large file transfer system, and GeoTrust’s SSL Toolbox—have deployed new certs but have yet to revoke old ones. Some of those not yet on a Certificate Revocation List are still sending OCSP responses that those certificates are “good,” Netcraft said.

Revocation may not help in some cases, Netcraft cautions, saying that four percent of the certificates do not specify a URL for an OCSP responder and can only be revoked through a CRL.

“This makes the certificates effectively irrevocable in some browsers — for example, the latest version of Mozilla Firefox no longer uses CRLs at all (previously it would fall back to checking a CRL if an OCSP request failed, but only for Extended Validation certificates),” Mutton said.

There are still other certificates, Netcraft said, that may have been compromised but specify neither an OCSP responder nor a CRL address, meaning they cannot be revoked before they expire.

“These certificates are therefore completely irrevocable in all browsers and could be impersonated until their natural expiry dates if an attacker has already compromised the private keys,” Mutton said.
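
Whether a particular certificate falls into that last category can be read from its extensions. The following is a minimal, illustrative sketch, using the third-party Python cryptography package (an assumption of this example, not anything Netcraft prescribes), that checks a PEM-encoded certificate for an OCSP responder URL and CRL distribution points.

    from cryptography import x509
    from cryptography.x509.oid import AuthorityInformationAccessOID

    # Sketch: list the revocation pointers a certificate carries. A certificate
    # with no OCSP responder URL and no CRL distribution point is the kind
    # described above as effectively irrevocable. Assumes the third-party
    # 'cryptography' package and a PEM file named server.pem (hypothetical).
    def revocation_endpoints(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        ocsp_urls, crl_urls = [], []
        try:
            aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
            ocsp_urls = [d.access_location.value for d in aia.value
                         if d.access_method == AuthorityInformationAccessOID.OCSP]
        except x509.ExtensionNotFound:
            pass
        try:
            cdp = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
            crl_urls = [name.value for point in cdp.value
                        for name in (point.full_name or [])]
        except x509.ExtensionNotFound:
            pass
        return ocsp_urls, crl_urls

    with open("server.pem", "rb") as f:
        ocsp, crl = revocation_endpoints(f.read())
    print("OCSP responders:", ocsp or "none")
    print("CRL distribution points:", crl or "none")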

Eugene Kaspersky on Critical Infrastructure Security

Threatpost for B2B - Wed, 04/16/2014 - 11:00

Dennis Fisher talks with Eugene Kaspersky about the need for better critical infrastructure security, the major threats facing enterprises today and the specter of cyberwar.

http://threatpost.com/files/2014/04/digital_underground_150.mp3

Download: digital_underground_150.mp3

Crypto Examination Awaits in Phase Two of TrueCrypt Audit

Threatpost for B2B - Wed, 04/16/2014 - 10:22

Phase two of the TrueCrypt audit figures to be a labor-intensive, largely manual cryptanalysis, according to the two experts behind the Open Crypto Audit Project (OCAP).

Matthew Green, crypto expert and professor at Johns Hopkins University, said a small team of experts will have to, by hand, examine the cipher suites, key algorithms and random number generators used in the open source encryption software.

Green said he hopes to crowdsource experts for the second phase of the audit, attracting people skilled in examining cryptography.

“We’re still flushing out the idea, but it will be a group of people who are well respected in the industry who have done this type of thing on a smaller scale,” Green said, adding he was not yet ready to publicly name them. “We would not be doing this if it were not for these people. We’ve created a series of challenges and we’re going to divide them up. I’m sure it will be fairly successful; we’re still in the planning stages.”

iSEC Partners, the consulting firm hired to conduct the first phase of the TrueCrypt audit, which looked at the TrueCrypt bootloader and Windows kernel driver, will not be involved in phase two, Green said, adding that the results for the second half of the audit may not be available for a few months.

The movement to audit TrueCrypt began last fall, a few months after the Snowden leaks began going public. TrueCrypt, which provides full disk and file encryption capabilities, has been downloaded close to 30 million times, making it a tempting target for intelligence agencies that have been accused of subverting other commercial and open source software.

iSEC on Monday released its report on the first phase of the audit and said it found no backdoors in the portions of the software it looked at. There were, however, worrisome vulnerabilities around the quality of the code and build processes.

“The good news is that there is nothing devastating in the code,” Green said. “The auditors said there were problems in code quality and pointed out other legitimate issues. These are not reasons to stop using it.”

One of the first concerns leading to suspicions that the Windows binary version of TrueCrypt had been backdoored was a mysterious string of 65,024 encrypted bytes in the header. Experts wondered why these random bytes were there and whether they could be an encrypted password. Adding to the intrigue was that, aside from the fact that the Windows package behaves differently from versions built from source code, no one really knew who the developers behind TrueCrypt were.

In October, however, some of those concerns were laid to rest when Green and OCAP co-organizer Kenneth White, senior security engineer at Social & Scientific Systems, were contacted by the anonymous developers who endorsed the audit. Also, an independent audit of TrueCrypt conducted by Xavier de Carne de Carnavalet of Concordia University in Canada, was able to reproduce a deterministic compilation process for the Windows version that matches the binaries. He concluded TrueCrypt was not backdoored.
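
The reproducible-build approach de Carnavalet took reduces, at its final step, to comparing digests of a binary compiled from the published source against the binary offered for download. The Python sketch below shows only that idealized last comparison, with hypothetical file names; it is not his actual toolchain.

    import hashlib

    # Sketch of the final step of a reproducible-build check: hash the binary
    # compiled locally from source and the binary offered for download, then
    # compare. File names are hypothetical; the hard part (recreating the exact
    # build environment and accounting for non-deterministic bytes such as
    # timestamps) happens before this comparison.
    def sha256(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    built = sha256("truecrypt-built-from-source.exe")
    shipped = sha256("truecrypt-official-download.exe")
    print("binaries match" if built == shipped else "binaries differ")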

“We’re not going to say the issue is closed, but we’re a lot less panicked about it,” Green said. “That doesn’t mean there isn’t something there, it’s just not on my list of things to worry about.”

The relief with the initial results is that there isn’t a widespread bug in the software; while TrueCrypt isn’t deployed on the scale of OpenSSL or Apple software, the recent Heartbleed and so-called gotofail iOS bugs have left some in the security community a little shell-shocked. White, for one, is hoping the cryptanalysis turns up equally positive results as in phase one.

“Our confidence in encryption software is driven by the level of expertise afforded proper peer-to-peer review, by deep experts in the field. And there is a very small group of people who are qualified to conduct this kind of analysis, particularly with the encryption components,” White said. “What they find might be gross errors or might be a trivial single character mistake.”

While very few of these types of public audits have been conducted—perhaps the most high-profile security tool subjected to a public audit was open source private chat application Cryptocat—Green and White see the potential for more of these in the future.

“It’s much harder to do than it seems. It’s not just about getting the money and paying people; you have to find people who are interested in doing it. Not every firm is interested in doing a public audit,” Green said, adding that the TrueCrypt audit is the first of its kind that was crowd funded. “We have a good technical advisory board who were willing to put in the time to make this happen. You need good organization with people whose job it is to do this; you can’t do this in your spare time.”

White said future projects are under consideration, but for now 100 percent of their efforts and funding is going toward the TrueCrypt audit.

“I think there is a subset of people who had their minds made up before we started, and have no intention of changing. For me, the appeal of this work has been to begin to establish a framework for conducting community-driven security audits and formal cryptanalysis on open source (or, in the case of TrueCrypt, source-available) software,” White said. “I think if after the final report we can say, ‘We marshaled some of the best minds in the field, and they looked at the code, the crypto, and the implementation and we found [X]’ then that’s a victory. As a privacy advocate, I’m obviously hoping for a clean verdict, but as a security engineer, I remain skeptical until the end.”

Financial Services Companies Facing Varied Threat Landscape

Threatpost for B2B - Wed, 04/16/2014 - 05:00

SAN FRANCISCO — Many of the stories about attacks on banks, payment processors and other portions of the financial services system around the world depict these intrusions as highly sophisticated operations conducted by top-level crews. However, the majority of the attacks these companies see aren’t much more advanced than a typical malware attack, experts say.

“About two thirds of the attacks on our merchant community are low to moderate complexity,” Ellen Richey, executive vice president and chief enterprise risk officer at Visa, said during a panel discussion on threats to the financial services industry at the Kaspersky Lab Cyber Security Summit here Tuesday.

The last couple of years have been tough on banks and other financial services companies when it comes to security. Many of the larger banks in the United States  and elsewhere have been the targets of massive DDoS attacks for more than a year now, with many of these attacks being attributed to hacktivist groups. These banks, of course, always are targets for cybercrime gangs looking for some quick money. But Richey and the other panelists said that while they certainly see attacks against their networks from determined, skilled attackers, a great deal of what they see every day is pretty mundane.

Attackers looking for a nice pay day often won’t target a bank directly, but will hit a partner or supplier the bank uses and go from there.

That strategy isn’t new, but it’s proven to be effective.

“People aren’t going to go after hard targets, because it exposes them,” said Steve Adegbite, senior vice president of enterprise information security program oversight and strategy organization at Wells Fargo & Co. “They go after the lower level merchants and walk up the chain from there.”

While figuring out who is attacking an organization can be an intriguing exercise, Adegbite said that in a lot of cases it doesn’t matter much who is doing what. The end result of a successful attack is the same: a disruption to the business.

“Within financial services, it’s about customer service and keeping things running and keeping the lights on. When I go in there after the fact and strip everything down, whether it’s a nation state or a kid in his basement, it’s forcing us to deal with an incident.”

Richey said that Visa, with its massive network of merchants and huge profile around the globe, sees all shapes and sizes of attacks, but has seen a big jump in the number of DDoS attacks in recent years.

“The piece we’re seeing in the last two to three years is denial of service attacks. It’s primarily hacktivists,” she said. “The industry has amped up its defenses to deal with it.”

That increase in defenses has occurred across the financial services industry, but as well-funded and sophisticated as the security teams in these companies are, they can’t go it alone. Adegbite said that he and the Wells Fargo security team collaborate with as many people and organizations as they can when it comes to defending their networks.

“Cybersecurity is a team sport. The amount of things we’re dealing with, we can’t handle it all ourselves,” he said. “We form a community of defenders all the way through.”

Microsoft Releases Updated Threat Modeling Tool 2014

Threatpost for B2B - Tue, 04/15/2014 - 15:07

Threat modeling has been part of the security culture at Microsoft for the better part of a decade, an important piece of the Security Development Lifecycle that’s at the core of Trustworthy Computing.

Today, Microsoft updated its free Threat Modeling Tool with a number of enhancements that bring the practice closer to not only large enterprises, but also smaller companies with a growing target on their back.

Four new features have been added to the tool: enhancements to its visualization capabilities, migration of older threat models, customizable threat definitions, and a change to how it generates threats.

“More and more of the customers I have been talking to have been leveraging threat modeling as a systematic way to find design-level security and privacy weaknesses in systems they are building and operating,” said Tim Rains, a Trustworthy Computing manager. “Threat modeling is also used to help identify mitigations that can reduce the overall risk to a system and the data it processes. Once customers try threat modeling, they typically find it to be a useful addition to their approach to risk management.”

The first iteration of Microsoft Threat Modeling Tool was issued in 2011, but Rains said customer feedback and suggestions for improvements since then have been rolled into this update. The improvements include a new drawing surface that no longer requires Microsoft Visio to build data flow diagrams. The update also includes the ability to migrate older, existing threat models built with version 3.1.8 to the new format. Users can also upload existing custom-built threat definitions into the tool, which also comes with its own definitions.

The biggest change in the new version is in its threat-generation logic. Where previous versions followed the STRIDE framework (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege) per element, this one follows STRIDE per interaction of those elements. STRIDE helps users map threats to the properties guarding against them, for example, spoofing maps to authentication.

“We take into consideration the type of elements used on the diagram (e.g. processes, data stores etc.) and what type of data flows connect these elements,” Rains said.
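
For readers unfamiliar with the framework, the mapping STRIDE encourages, applied per interaction rather than per element, can be expressed compactly. The Python sketch below is illustrative only and is not the Threat Modeling Tool’s internal data model.

    # Illustrative only: the STRIDE-to-property mapping described above, applied
    # per interaction (data flow) rather than per element. This is a simplified
    # sketch, not the Threat Modeling Tool's internal data model.
    STRIDE = {
        "Spoofing": "Authentication",
        "Tampering": "Integrity",
        "Repudiation": "Non-repudiation",
        "Information disclosure": "Confidentiality",
        "Denial of service": "Availability",
        "Elevation of privilege": "Authorization",
    }

    def threats_for_interaction(source, destination, flow):
        # Enumerate one candidate threat per STRIDE category for a single data
        # flow between two diagram elements.
        for threat, prop in STRIDE.items():
            yield f"{threat} on {flow} ({source} -> {destination}): mitigate with {prop}"

    for line in threats_for_interaction("Browser", "Web application", "HTTPS request"):
        print(line)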

At the RSA Conference in February, Trustworthy Computing program manager Adam Shostack said that there is no one defined way to model threats; that they must be specific to organizations and their particular risks.

“I now think of threat modeling like Legos. There are things you can snap together and use what you need,” Shostack said. “There’s no one way to threat model. The right way is the way that fixes good threats.”

Install April Windows 8.1 Update If You Want Security Patches

Threatpost for B2B - Tue, 04/15/2014 - 14:40

In a bizarre and somewhat befuddling move, Microsoft announced yesterday on its Technet blog that it would no longer provide security updates to users running out-of-date versions of Windows 8.1. In order to receive updates, customers will have to have updated their machines with the most recent Windows 8.1 Update, which the company pushed out in April.

Microsoft recently released a fairly large update for Windows 8.1. Users who installed the update (or have their updates installed automatically) and even users that never updated to 8.1 in the first place will continue to receive updates. However, users running older versions of Windows 8.1 will not receive any security updates moving forward. If they attempt to install an update, they will receive a message informing them that the update is “not applicable.”

Users running Windows 7 or Vista are not affected by this announcement. Users running Windows XP are no longer eligible for security updates either since Microsoft’s long-awaited cessation of support for the more-than-12-year-old operating system became official in April.

It’s not clear whether this decision is to become a precedent for future update cycles.

“Since Microsoft wants to ensure that customers benefit from the best support and servicing experience and to coordinate and simplify servicing across both Windows Server 2012 R2, Windows 8.1 RT and Windows 8.1, this update will be considered a new servicing/support baseline,” wrote Steve Thomas, a senior consultant at Microsoft.

Thomas goes on to explain that users who install updates manually will have 30 days to install the Windows 8.1 update from April. Beginning with the May Patch Tuesday, any Windows 8.1 devices that have not installed the update will no longer receive security updates.

The move is even more of a head-scratcher considering the trouble many users have reportedly faced while attempting to install that April update. Microsoft even references the troubles the patch has presented, saying:

“Microsoft plans to issue an update as soon as possible that will correct the issue and restore the proper behavior for Windows 8.1 Update KB 2919355 scanning against all supported WSUS configurations. Until that time, we are delaying the distribution of the Windows 8.1 Update KB 2919355 to WSUS servers.”

Despite its promise to cut off support for out-of-date versions of Windows 8.1, the company has little choice but to “recommend that you suspend deployment of this update in your organization until we release the update that resolves this issue.”

Threatpost has reached out to Microsoft for clarification and will update this story with any comment.

Government, Private Sector Must Have a ‘Need to Share’ Mindset on Threats

Threatpost for B2B - Tue, 04/15/2014 - 14:22

SAN FRANCISCO–The security of both government and private enterprise systems going forward relies on the ability of those two parties to share threat, attack and compromise information on a real-time basis, former Department of Homeland Security secretary Tom Ridge said. Without that cooperation, he said, the critical infrastructure of the United States will continue to be “a target-rich environment”.

The idea of information sharing is a well-worn one in the security industry. Private companies have been trying to get timely intelligence on attacks and threats from the federal government for years, without much success. On the other side of that coin, the government has been ingesting threat intelligence from the private sector for decades, while typically not reciprocating. Ridge, speaking at the Kaspersky Lab Cybersecurity Summit here Tuesday, said that the federal government needs to change that situation if it hopes to make any real improvement in security.

“We’ve been trying for three years to get the government to create a protected avenue to share information from the government down to the private sector and from the private sector up to the government,” he said. “We’ve been unsuccessful.”

Part of the reason for that failure, Ridge said, is that the federal government often defaults to over-classifying information, especially as it relates to attacks and threats. That information often could be valuable to organizations in the private sector that may be affected by the same kinds of threats, but is sitting dormant somewhere because it’s not cleared for release to private companies. That mindset must be changed, Ridge said.

“The knowledge in the hands of the federal government relating to critical infrastructure and the security of our economy shouldn’t be held and parceled out,” he said. “We need to go from a need-to-know basis to a need-to-share mindset.”

Private enterprises have their own set of challenges surrounding security, and Ridge said that one of the main issues he still sees in large organizations is a lack of awareness that attackers are targeting them specifically.

“This isn’t a preventable risk, it’s a manageable risk,” he said.

“Private enterprises are foolish to think it won’t happen to them. We’re a target rich environment.”

Ridge said one of the other key obstacles to improving critical infrastructure security is the fact that the federal government must rely on the private sector to do nearly all the work. The government itself doesn’t own much in the way of utilities, power grids, financial systems or other prime targets. That’s all in the hands of private companies. So there’s a clear incentive for the two parties to share information, he said.

“The government has no critical infrastructure of its own. It relies on the private sector for that, and when it goes down, the government goes down,” Ridge said. “National security and economic security are intertwined.”

Attackers, of course, are well aware of that fact, and know that going after a country’s power grid, utilities or other vital systems is a quick path to crippling the country’s economy. Those kinds of attacks, Ridge said, could be precursors to armed conflicts in the near future or part of an ongoing war.

“What if at some point someone infiltrates the power grid and plants malware? Is that a precursor to a larger attack? How do you respond, kinetically or electronically? What’s the threshold for response?” he said.

HD Manufacturer LaCie Admits Yearlong Data Breach

Threatpost for B2B - Tue, 04/15/2014 - 14:21

The French computer hardware company LaCie, perhaps best known for its external hard drives, announced this week that it fell victim to a data breach that may have put at risk the sensitive information of anyone who purchased a product from its website during the last year.

According to an incident notification posted today, an attacker used malware to infiltrate LaCie’s eCommerce website for almost a year, and in turn, glean customer information. Attackers had access from March 27, 2013 to March 10, 2014, but it wasn’t until last Friday that LaCie began to inform customers at risk.

In addition to its ubiquitous rugged orange external hard drives, LaCie, which is headquartered in Paris, also manufactures RAID arrays, flash drives, and optical drives.

The announcement warns that anyone who purchased an external hard drive or any form of LaCie hardware off of the company’s website during that time period may have had their data stolen. That information includes customers’ names, addresses, email addresses, as well as payment card information and card expiration dates.

While the company has hired a “leading forensic investigation firm” to continue looking into the technicalities of the breach – how many are affected, etc. – for the time being LaCie has suspended all online sales until they can “transition to a provider that specializes in secure payment processing services.”

A report from KrebsonSecurity.com last month speculated that the company’s storefront may have been hijacked by hackers using security vulnerabilities in Adobe’s ColdFusion development platform.

According to Krebs, LaCie’s eCommerce site was one of nearly 50 eCommerce websites spotted ensnared in a nasty ColdFusion botnet that was leaking consumer credit card information. The security reporter previously surmised that the hackers behind the botnet are the same attackers behind last year’s Adobe breach that leaked source code for Reader and ColdFusion, not to mention the personal information of millions of its customers.

At the time, Clive Over, a spokesman for Seagate, which bought LaCie in 2012, told Krebs the company was not “aware that company or third party information was improperly accessed” when informed that one of its servers had been targeted and breached in 2013. Over went on to say that LaCie was “working with third party experts to do a deeper forensic analysis,” the same analysis that would eventually yield the breach’s discovery.

*Image via fncll‘s Flickr photostream, Creative Commons

Programming Language Security Examined

Threatpost for B2B - Tue, 04/15/2014 - 12:08

When building an enterprise Web application, the most foundational decision your developers make will be the language in which the app is written. But is there a barometer that measures the security of the programming languages developers have at their disposal, or are comfortable with, versus other options?

WhiteHat Security, an application security vendor, released its 2014 Website Security Statistics Report today that measures the security of programming languages and development frameworks and examines not only what classes of vulnerabilities they’re most susceptible to, but also how long it takes to remediate bugs and whether there’s a real difference that would impact a business decision as to which language to use.

The report is based on vulnerability assessments conducted against 30,000 customer websites using a proprietary scanner, and the results point toward negligible differences in the relative security of languages such as .NET, Java, PHP, ASP, ColdFusion and Perl. Those six shared relatively similar mean numbers of vulnerabilities, and problems such as SQL injection and cross-site scripting vulnerabilities remain pervasive.

“Ultimately, what we found was that across the board there were no significant differences between languages,” said Gabriel Gumbs, lead researcher on WhiteHat’s Website Security Statistics Report. “There are some peaks and valleys with regard to vulnerability classes and remediation rates, but no one stood out as a clear winner as more secure.”

One conclusion, therefore, is that web application security woes, including the chronic existence of SQL injection and cross-site scripting vulnerabilities in code, are a human issue.

“A lot of it is the human factor,” Gumbs said. Static and dynamic testing controls are available to developers that test code as it is being developed as well as in production. But they have to be used throughout the development lifecycle, Gumbs said. “During the design phase of an app, security implications must be taken into account.”

As for the numbers compiled by WhiteHat, .NET and Java are the most widely used languages, accounting for a combined 53 percent of sites, while the creaky ASP is next at 16 percent. SQL injection vulnerabilities were especially prevalent in ColdFusion sites, while Perl sites were found most vulnerable to cross-site scripting. ColdFusion sites, however, had the best overall remediation rates, while PHP sites had one of the lowest.

Cross-site scripting was the most prevalent vulnerability in five of the six languages, except for .NET, where information leakage flaws were highest. It was most common in Perl (67 percent of sites) and Java (57 percent). Content spoofing, SQL injection and cross-site request forgery round out the top five most prevalent vulnerabilities.

“The education is out there and the frameworks are out there [to address cross-site scripting]. My best guess is that it’s a combination of the speed at which companies are implementing new functionality and exposing it to the business that is driving that number,” Gumbs said. “We don’t know what it will take to tip the scales and make those numbers go down. It may be something we have to live with. If we can accept that and then approach how we address that based on risk assessments, it may drive down the number.”
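
The persistence of SQL injection in particular has less to do with the language than with how queries are assembled. The short Python sketch below, which uses the standard library’s sqlite3 module purely for illustration, contrasts the injectable pattern with a parameterized query.

    import sqlite3

    # Illustrative only: the string-concatenation pattern behind most SQL
    # injection findings versus a parameterized query. sqlite3 stands in for
    # any database driver; the table and input are hypothetical.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    user_input = "nobody' OR '1'='1"

    # Vulnerable: attacker-controlled input becomes part of the SQL text.
    leaked = conn.execute(
        "SELECT * FROM users WHERE name = '%s'" % user_input).fetchall()

    # Safe: the driver binds the value as data, never as SQL.
    empty = conn.execute(
        "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

    print("concatenated query returned:", leaked)  # every row in the table
    print("parameterized query returned:", empty)  # no rows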

Looking at specific industries, in particular heavily regulated ones such as financial services and health care, there is no noticeable difference in either the number of vulnerabilities present or remediation rates. This is in spite of overarching regulations, such as PCI-DSS protecting credit cards and HIPAA protecting health care data, that mandate a certain minimum standard. The problem is that many regulated organizations do what it takes to reach that minimum standard, and not much else.

“What we found is that industries with more regulations are insecure because they fix vulnerabilities that the regulation only calls for,” Gumbs said. “If PCI says fix these five vulnerabilities, that’s all they fixed. It proved to me they were more insecure than the other industries because they put that effort into compliance, not security.”

Heartbleed Saga Escalates With Real Attacks, Stolen Private Keys

Threatpost for B2B - Mon, 04/14/2014 - 15:34

Heartbleed went from a dangerous Internet-wide vulnerability over the weekend to one with real exploits, real victims and real problems for private SSL server keys.

Mumsnet, a U.K.-based parenting website, said it was victimized by hackers exploiting the vulnerability in OpenSSL to steal passwords, as was the Canada Revenue Agency, which reported the loss of social insurance numbers for 900 citizens, according to a BBC report today.

Hackers were using the stolen Mumsnet credentials to post messages to the site on Friday, while the CRA said hackers were busy exploiting Heartbleed during a six-hour period before its systems were patched.

While experts warned it was possible from the outset to steal credentials and other sensitive information in plaintext, it was thought that stealing private SSL keys that would provide unfettered access to web traffic emanating from a server was a much more difficult proposition.

Starting on Friday, however, four researchers had in fact managed to do just that.

Russian engineer Fedor Indutny was the first to break the so-called CloudFlare Challenge set up by web traffic optimization vendor CloudFlare. The company had set up a nginx server running an unpatched version of OpenSSL and issued a challenge to researchers to steal the private SSL key.

Indutny replayed his attack more than two million times before he was able to steal the key, which he submitted at 7:22 Eastern time on Friday, less than an hour before Ilkka Mattila of NCSC-FI submitted another valid key using just 100,000 requests.

Since then, two more submissions were confirmed on Saturday, one by Rubin Xu, a PhD student at Cambridge University, and another by researcher Ben Murphy.

The vulnerability is present in OpenSSL versions 1.0.1 through 1.0.1f and allows attackers to snag 64KB of memory per request from a server via its heartbeat function. Those bits of memory can leak anything from user names and passwords to, if the attack is repeated often enough, private keys. A number of large sites, including Yahoo and LastPass, were vulnerable but quickly patched. Once the vulnerability is patched, old certificates must be revoked and new ones validated and installed.
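
The flaw itself is a missing bounds check on the length field of a TLS heartbeat request. The Python sketch below shows a simplified version of that malformed record, with the message layout following RFC 6520; it illustrates the structure only, since a real probe would first need to complete a TLS handshake with the target.

    import struct

    # Simplified sketch of the malformed heartbeat record at the heart of
    # CVE-2014-0160 (layout per RFC 6520). Illustration only: a real probe must
    # first complete a TLS handshake with the target before sending this.
    def heartbleed_record(claimed_len=0xFFFF, tls_version=0x0302):
        # Heartbeat message: type 0x01 (request), 2-byte payload length, no payload.
        # The claimed length far exceeds the zero bytes of payload actually sent;
        # an unpatched OpenSSL echoes back that many bytes of process memory.
        hb_msg = struct.pack('>BH', 0x01, claimed_len)
        # TLS record header: content type 0x18 (heartbeat), protocol version,
        # record length covering only the three bytes actually transmitted.
        return struct.pack('>BHH', 0x18, tls_version, len(hb_msg)) + hb_msg

    print(heartbleed_record().hex())  # 180302000301ffff

A patched server simply discards a heartbeat whose claimed length exceeds the payload it actually received, which is why the fix leaves the protocol otherwise unchanged.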

Users, meanwhile, would need to change their passwords for accounts on these sites, but only after the patch is applied, or their new credentials could be stolen as well. Worse, the attacks don’t show up in logs and leave no trace behind. Therefore, it’s impossible to know whether a private key has been stolen, and a malicious site signed with a stolen but legitimate certificate key, for example, would appear benign.

The story took a strange twist Friday night when Bloomberg reported that the U.S. National Security Agency had been exploiting Heartbleed for two years, according to a pair of unnamed sources in the article. A bug such as Heartbleed could simplify surveillance efforts for the agency against particular targets, but given the arsenal of attacks at its disposal, the NSA might have more efficient means with which to gather personal data on targets.

To that end, the agency via the Office of the Director of National Intelligence issued a rare denial Friday night. The memo said the NSA was not aware of the flaw in OpenSSL. “Reports that say otherwise are wrong,” it said.

The DNI’s office also said the Federal government uses OpenSSL to encrypt a number of government sites and services and would have reported the vulnerability had it discovered it.

“When Federal agencies discover a new vulnerability in commercial and open source software – a so-called ‘Zero day’ vulnerability because the developers of the vulnerable software have had zero days to fix it – it is in the national interest to responsibly disclose the vulnerability rather than to hold it for an investigative or intelligence purpose,” the DNI said.

Meanwhile, a report in the New York Times on Saturday said that President Obama has given the NSA leeway in using bugs such as Heartbleed where there is a “clear national security or law enforcement need.” The NSA has thrived on such loopholes, according to numerous leaks made public in the Snowden documents. The president’s decision was made in January, the Times article said, after he addressed the nation on the government’s surveillance of Americans.

The U.S. government, it was made public in September, had bought a subscription to a zero-day exploit service sold by VUPEN of France.

The contract, made public through a Freedom of Information Act request by MuckRock, an open government project that publishes a variety of such documents, shows that the NSA bought VUPEN’s services on Sept. 14, 2012. The NSA contract is for a one-year subscription to the company’s “binary analysis and exploits service.”

So Far, So Good for TrueCrypt: Initial Audit Phase Turns Up No Backdoors

Threatpost for B2B - Mon, 04/14/2014 - 13:42

An initial audit of the popular open source encryption software TrueCrypt turned up fewer than a dozen vulnerabilities, none of which so far point toward a backdoor surreptitiously inserted into the codebase.

A report on the first phase of the audit was released today by iSEC Partners, which was contracted by the Open Crypto Audit Project (OCAP), a grassroots effort that not only conducted a successful fundraising effort to initiate the audit, but raised important questions about the integrity of the software.

TrueCrypt is praised not only as free and open source encryption software, but also as easy to install, configure and use. Given that it has been downloaded upwards of 30 million times, it stood to reason that it could be a prime target for manipulation by intelligence agencies that have been accused of subverting other widely used software packages, commercial and open source.

The first phase of the audit focused on the TrueCrypt bootloader and Windows kernel driver; architecture and code reviews were performed, as well as penetration tests including fuzzing interfaces, said Kenneth White, senior security engineer at Social & Scientific Systems. The second phase of the audit will look at whether the various encryption cipher suites, random number generators and critical key algorithms have been implemented correctly.

“With Phase II, we will be conducting a formal cryptanalysis and looking at these issues,” White said. “In security engineering, we never say a system is ‘unbreakable,’ but rather, ‘we looked at X, Y, and Z and couldn’t find a vulnerability.’

“But yes, I would say there is certainly an increased level of confidence in TrueCrypt,” White said.

The still-outstanding questions publicly raised by OCAP, which was kicked off by White and Johns Hopkins professor and crypto expert Matthew Green, revolve around the Windows version of TrueCrypt. Since those builds are available only as downloadable binaries, they cannot be compared to the original source code, yet behave differently than versions compiled from source code. There were also concerns about the license governing TrueCrypt use, as well as the anonymous nature of the development group behind the software.

iSEC Partners’ report gave TrueCrypt a relatively clean bill of health.

“iSEC did not identify any issues considered ‘high severity’ during this testing. iSEC found no evidence of backdoors or intentional flaws. Several weaknesses and common kernel vulnerabilities were identified, including kernel pointer disclosure, but none of them appeared to present immediate exploitation vectors,” iSEC’s Tom Ritter said in a statement. “All identified findings appeared accidental.”

Ritter said iSEC recommends improvements be made to the quality of the code in the software and that the build process be updated to rely on tools with a “trustworthy provenance.”

“In sum, while TrueCrypt does not have the most polished programming style, there is nothing immediately dangerous to report,” Ritter said.

Specifically, iSEC security engineers Andreas Junestam and Nicolas Guigo audited the bootloader and Windows kernel driver in TrueCrypt 7.1a. The report says iSEC performed hands-on testing against binaries available from the TrueCrypt download page and binaries compiled from source code. Work was completed Feb. 14.

The engineers found 11 vulnerabilities, four rated medium severity, four low severity and three were rated informational issues having to do with defense in depth.

“Overall, the source code for both the bootloader and the Windows kernel driver did not meet expected standards for secure code,” the report said. “This includes issues such as lack of comments, use of insecure or deprecated functions, inconsistent variable types, and so forth.”

The team dug deeper into its recommendations of updating the Windows build environment and code quality improvements, specifically replacing outdated tools and software packages, some of which date back to the early 1990s.

“Using antiquated and unsupported build tools introduces multiple risks including: unsigned tools that could be maliciously modified, unknown or un-patched security vulnerabilities in the tools themselves, and weaker or missing implementations of modern protection mechanisms such as DEP and ASLR,” the team wrote in its report. “Once the build environment has been updated, the team should consider rebuilding all binaries with all security features fully enabled.”

They added that “lax” quality standards make the source code difficult to review and maintain, impeding vulnerability assessments.

Of the four medium-severity bugs uncovered in the audit, the most serious involves the key used to encrypt the TrueCrypt Volume Header. It is derived using PBKDF2, a standard algorithm, with an iteration count that is too small to prevent password-guessing attacks.

“TrueCrypt relies on what’s known as a PBKDF2 function as a way to ‘stretch’ a user’s password or master key, and there is concern that it could have been stronger than the 1,000 or 2,000 iterations it uses currently,” White said. “The TrueCrypt developers’ position is that the current values are a reasonable tradeoff of protection vs. processing delay, and that if one uses a weak password, a high-count PBKDF2 hash won’t offer much more than a false sense of security.”
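
The practical effect of the iteration count is straightforward to demonstrate. The following rough sketch, which is not TrueCrypt’s actual key-derivation code, uses Python’s standard hashlib to time a single PBKDF2 guess at the counts White cites and at a much higher count.

    import hashlib
    import os
    import time

    # Rough sketch (not TrueCrypt's actual key-derivation code) of how the PBKDF2
    # iteration count changes the cost of each password guess. The 1,000 and 2,000
    # values mirror the counts White mentions; 100,000 is shown for contrast.
    password = b"correct horse battery staple"
    salt = os.urandom(64)

    for iterations in (1_000, 2_000, 100_000):
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha512", password, salt, iterations, dklen=64)
        elapsed = (time.perf_counter() - start) * 1000
        print(f"{iterations:>7} iterations: {elapsed:.1f} ms per guess")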

White said the OCAP technical advisors are also concerned about TrueCrypt’s security model, which offers narrowly restricted privacy guarantees.

“So, for example, if you are not running whole volume (system disk) encryption, there are many known exploits to recover plaintext data, including decryption keys,” White said, pointing out that Microsoft’s BitLocker software and PGP, for example, have similar attack paths.

“But in the case of TrueCrypt, whole volume disk encryption is only available for the Windows port, and there exists today point-and-click forensic tools that can be purchased for a few hundred dollars that can easily decrypt data from a running machine with any of these packages, TrueCrypt included,” White said. “I have a feeling that while most in the security industry understand this, it is probably worth emphasizing to a broader audience: on the vast majority of machines that use file or disk encryption, if the underlying operating system or hardware can be compromised, then so too can the encryption software.”

With a Warning, FTC Approves WhatsApp, Facebook Union

Threatpost for B2B - Mon, 04/14/2014 - 12:54

Facebook’s acquisition of messaging application WhatsApp was approved by the Federal Trade Commission late last week, but not without a stern notice from the agency, which warned that it would be keeping a watchful eye on the two companies going forward.

In a letter addressed to officials at Facebook and WhatsApp on Thursday, the FTC’s Bureau of Consumer Protection Director Jessica Rich made it clear that the agency would continue to ensure the companies honor their promises to users.

“WhatsApp has made a number of promises about the limited nature of the data it collects, maintains, and shares with third parties–promises that exceed the protections currently promised to Facebook users,” Rich wrote. “We want to make clear that, regardless of the acquisition, WhatsApp must continue to honor these promises to consumers.”

The privacy policy for WhatsApp, the popular app that reportedly handles 50 billion messages between users daily, states that user information will not be used for advertising purposes and won’t be sold to a third party. The FTC’s letter (.PDF) claims this is something that shouldn’t be nullified by the Facebook purchase. The FTC adds that if Facebook were to go ahead and share any of its newly acquired WhatsApp information, it would violate its privacy promises, not to mention an order the agency has placed on the social network.

That order basically makes sure Facebook doesn’t misrepresent the way it handles users’ privacy or the security of consumers’ personal information.

The letter, which was addressed to Facebook’s Chief Privacy Officer Erin Egan and WhatsApp’s General Counsel Anne Hoge, goes on to state that data collecting changes could be made as long as they get users’ “affirmative consent.” If users don’t agree with new procedures they should be granted the opportunity to opt out or at least understand “that they have an opportunity to stop using the WhatsApp service.”

“Failure to take these steps could constitute a violation of Section 5 and/or the FTC’s order against Facebook,” the letter states.

When the $19 billion acquisition was first announced in February, privacy advocates were rattled that Facebook would be able to mine WhatsApp’s vast reservoir of user information and convert that into ad revenue without the users’ consent.

Organizations such as the Center for Digital Democracy (CDD) and the Electronic Privacy Information Center (EPIC) both decried the move in March, requesting the FTC look into it. Jan Koum, WhatsApp’s founder, later responded with a blog post, “Setting the record straight,” that insisted both firms would “remain autonomous and operate independently.”

*Photo via alvy‘s Flickr photostream, Creative Commons

Arbitrary Code Execution Bug in Android Reader

Threatpost for B2B - Mon, 04/14/2014 - 11:04

The Android variety of Adobe Reader reportedly contains a vulnerability that could give an attacker the ability to execute arbitrary code on devices running Google’s mobile operating system.

The problem arises from the fact that Adobe Reader for Android exposes a number of insecure JavaScript interfaces, according to security researcher Yorick Koster, who submitted the details of the bug to the Full Disclosure mailing list.

In order to exploit the security vulnerability, an attacker would have to compel his victim to open a maliciously crafted PDF file. Successful exploitation could then give the attacker the ability to execute arbitrary Java code and, in turn, compromise Reader documents and other files stored on the device’s SD card.

Adobe verified the existence of the vulnerability in version 11.1.3 of Reader for Android and has provided a fix for it with version 11.2.0.

On the point of exploitation, the specially crafted PDF file required to exploit this vulnerability would have to contain JavaScript that runs when the targeted user interacts with the PDF file in question. An attacker could deploy any of the JavaScript objects included in Koster’s report to obtain access to the public reflection APIs inherited by those objects. It is these public reflection APIs that the attacker can abuse to run arbitrary code.

In other Android-related news, Google announced late last week that it would bolster its existing application regulation mechanism with a new feature that will continually monitor installed Android applications to ensure that they aren’t acting maliciously or performing unwanted actions.
