Feed aggregator

Financial Services Companies Facing Varied Threat Landscape

Threatpost for B2B - Wed, 04/16/2014 - 05:00

SAN FRANCISCO — Many of the stories about attacks on banks, payment processors and other portions of the financial services system around the world depict these intrusions as highly sophisticated operations conducted by top-level crews. However, the majority of the attacks these companies see aren’t much more advanced than a typical malware attack, experts say.

“About two thirds of the attacks on our merchant community are low to moderate complexity,” Ellen Richey, executive vice president and chief enterprise risk officer at Visa, said during a panel discussion on threats to the financial services industry at the Kaspersky Lab Cyber Security Summit here Tuesday.

The last couple of years have been tough on banks and other financial services companies when it comes to security. Many of the larger banks in the United States and elsewhere have been the targets of massive DDoS attacks for more than a year now, with many of these attacks being attributed to hacktivist groups. These banks, of course, are always targets for cybercrime gangs looking for some quick money. But Richey and the other panelists said that while they certainly see attacks against their networks from determined, skilled attackers, a great deal of what they see every day is pretty mundane.

Attackers looking for a nice pay day often won’t target a bank directly, but will hit a partner or supplier the bank uses and go from there.

That strategy isn’t new, but it’s proven to be effective.

“People aren’t going to go after hard targets, because it exposes them,” said Steve Adegbite, senior vice president of enterprise information security program oversight and strategy organization at Wells Fargo & Co. “They go after the lower level merchants and walk up the chain from there.”

While figuring out who is attacking an organization can be an intriguing exercise, Adegbite said that in a lot of cases it doesn’t matter much who is doing what. The end result of a successful attack is the same: a disruption to the business.

“Within financial services, it’s about customer service and keeping things running and keeping the lights on. When I go in there after the fact and strip everything down, whether it’s a nation state or a kid in his basement, it’s forcing us to deal with an incident.”

Richey said that Visa, with its massive network of merchants and huge profile around the globe, sees all shapes and sizes of attacks, but has seen a big jump in the number of DDoS attacks in recent years.

“The piece we’re seeing in the last two to three years is denial of service attacks. It’s primarily hacktivists,” she said. “The industry has amped up its defenses to deal with it.”

That increase in defenses has occurred across the financial services industry, but as well-funded and sophisticated as the security teams in these companies are, they can’t go it alone. Adegbite said that he and the Wells Fargo security team collaborate with as many people and organizations as they can when it comes to defending their networks.

“Cybersecurity is a team sport. The amount of things we’re dealing with, we can’t handle it all ourselves,” he said. “We form a community of defenders all the way through.”

Microsoft Releases Updated Threat Modeling Tool 2014

Threatpost for B2B - Tue, 04/15/2014 - 15:07

Threat modeling has been part of the security culture at Microsoft for the better part of a decade, an important piece of the Security Development Lifecycle that’s at the core of Trustworthy Computing.

Today, Microsoft updated its free Threat Modeling Tool with a number of enhancements that bring the practice closer to not only large enterprises, but also smaller companies with a growing target on their back.

Four new features have been added to the tool, including enhancements to its visualization capabilities, customization features for older models and threat definitions, and a change to how it generates threats.

“More and more of the customers I have been talking to have been leveraging threat modeling as a systematic way to find design-level security and privacy weaknesses in systems they are building and operating,” said Tim Rains, a Trustworthy Computing manager. “Threat modeling is also used to help identify mitigations that can reduce the overall risk to a system and the data it processes. Once customers try threat modeling, they typically find it to be a useful addition to their approach to risk management.”

The first iteration of Microsoft Threat Modeling Tool was issued in 2011, but Rains said customer feedback and suggestions for improvements since then have been rolled into this update. The improvements include a new drawing surface that no longer requires Microsoft Visio to build data flow diagrams. The update also includes the ability to migrate older, existing threat models built with version 3.1.8 to the new format. Users can also upload existing custom-built threat definitions into the tool, which also comes with its own definitions.

The biggest change in the new version is in its threat-generation logic. Where previous versions followed the STRIDE framework (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege) per element, this one follows STRIDE per interaction of those elements. STRIDE helps users map threats to the properties guarding against them, for example, spoofing maps to authentication.
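The difference between per-element and per-interaction threat generation can be sketched in a few lines of Python. This is a simplified illustration only: the element names and data flows below are hypothetical, and the tool's actual rule engine is considerably more nuanced.

```python
# Sketch of STRIDE enumeration: per element vs. per interaction.
# Hypothetical diagram; not the Microsoft tool's actual logic.
from itertools import product

STRIDE = ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
          "Denial of service", "Elevation of privilege"]

# Diagram elements and the data flows connecting them.
elements = ["Browser", "Web App", "Database"]
flows = [("Browser", "Web App"), ("Web App", "Database")]

# Per-element: each element is analyzed in isolation.
per_element = [(e, t) for e, t in product(elements, STRIDE)]

# Per-interaction: threats are enumerated for each data flow, so the
# same element is analyzed differently depending on what it talks to.
per_interaction = [(src, dst, t) for (src, dst), t in product(flows, STRIDE)]

print(len(per_element), "per-element threats;",
      len(per_interaction), "per-interaction threats")
```

The per-interaction approach lets the generator consider what kind of data crosses each flow, which is what Rains describes below.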

“We take into consideration the type of elements used on the diagram (e.g. processes, data stores etc.) and what type of data flows connect these elements,” Rains said.

At the RSA Conference in February, Trustworthy Computing program manager Adam Shostack said that there is no one defined way to model threats; that they must be specific to organizations and their particular risks.

“I now think of threat modeling like Legos. There are things you can snap together and use what you need,” Shostack said. “There’s no one way to threat model. The right way is the way that finds good threats.”

Install April Windows 8.1 Update If You Want Security Patches

Threatpost for B2B - Tue, 04/15/2014 - 14:40

In a bizarre and somewhat befuddling move, Microsoft announced yesterday on its Technet blog that it would no longer provide security updates to users running out-of-date versions of Windows 8.1. In order to receive updates, customers will have to have updated their machines with the most recent Windows 8.1 Update, which the company pushed out in April.

Microsoft recently released a fairly large update for Windows 8.1. Users who installed the update (or have their updates installed automatically) and even users that never updated to 8.1 in the first place will continue to receive updates. However, users running older versions of Windows 8.1 will not receive any security updates moving forward. If they attempt to install an update, they will receive a message informing them that the update is “not applicable.”

Users running Windows 7 or Vista are not affected by this announcement. Users running Windows XP are no longer eligible for security updates either since Microsoft’s long-awaited cessation of support for the more-than-12-year-old operating system became official in April.

It’s not clear whether this decision is to become a precedent for future update cycles.

“Since Microsoft wants to ensure that customers benefit from the best support and servicing experience and to coordinate and simplify servicing across both Windows Server 2012 R2, Windows 8.1 RT and Windows 8.1, this update will be considered a new servicing/support baseline,” wrote Steve Thomas, a senior consultant at Microsoft.

Thomas goes on to explain that users who install updates manually will have 30 days to install the Windows 8.1 update from April. Beginning with the May Patch Tuesday, any Windows 8.1 devices that have not installed the update will no longer receive security updates.

The move is even more of a head-scratcher considering the trouble many users have reportedly faced while attempting to install that April update. Microsoft even references the troubles the patch has presented, saying:

“Microsoft plans to issue an update as soon as possible that will correct the issue and restore the proper behavior for Windows 8.1 Update KB 2919355 scanning against all supported WSUS configurations. Until that time, we are delaying the distribution of the Windows 8.1 Update KB 2919355 to WSUS servers.”

Despite its promise to cut off support for out-of-date versions of Windows 8.1, the company has little choice but to “recommend that you suspend deployment of this update in your organization until we release the update that resolves this issue.”

Threatpost has reached out to Microsoft for clarification and will update this story with any comment.

Government, Private Sector Must Have a ‘Need to Share’ Mindset on Threats

Threatpost for B2B - Tue, 04/15/2014 - 14:22

SAN FRANCISCO–The security of both government and private enterprise systems going forward relies on the ability of those two parties to share threat, attack and compromise information on a real-time basis, former Department of Homeland Security secretary Tom Ridge said. Without that cooperation, he said, the critical infrastructure of the United States will continue to be “a target-rich environment”.

The idea of information sharing is a well-worn one in the security industry. Private companies have been trying to get timely intelligence on attacks and threats from the federal government for years, without much success. On the other side of that coin, the government has been ingesting threat intelligence from the private sector for decades, while typically not reciprocating. Ridge, speaking at the Kaspersky Lab Cybersecurity Summit here Tuesday, said that the federal government needs to change that situation if it hopes to make any real improvement in security.

“We’ve been trying for three years to get the government to create a protected avenue to share information from the government down to the private sector and from the private sector up to the government,” he said. “We’ve been unsuccessful.”

Part of the reason for that failure, Ridge said, is that the federal government often defaults to over-classifying information, especially as it relates to attacks and threats. That information often could be valuable to organizations in the private sector that may be affected by the same kinds of threats, but is sitting dormant somewhere because it’s not cleared for release to private companies. That mindset must be changed, Ridge said.

“The knowledge in the hands of the federal government relating to critical infrastructure and the security of our economy shouldn’t be held and parceled out,” he said. “We need to go from a need-to-know basis to a need-to-share mindset.”

Private enterprises have their own set of challenges surrounding security, and Ridge said that one of the main issues he still sees in large organizations is a lack of awareness that attackers are targeting them specifically.

“This isn’t a preventable risk, it’s a manageable risk,” he said.

“Private enterprises are foolish to think it won’t happen to them. We’re a target rich environment.”

Ridge said one of the other key obstacles to improving critical infrastructure security is the fact that the federal government must rely on the private sector to do nearly all the work. The government itself doesn’t own much in the way of utilities, power grids, financial systems or other prime targets. That’s all in the hands of private companies. So there’s a clear incentive for the two parties to share information, he said.

“The government has no critical infrastructure of its own. It relies on the private sector for that, and when it goes down, the government goes down,” Ridge said. “National security and economic security are intertwined.”

Attackers, of course, are well aware of that fact, and know that going after a country’s power grid, utilities or other vital systems is a quick path to crippling the country’s economy. Those kinds of attacks, Ridge said, could be precursors to armed conflicts in the near future or part of an ongoing war.

“What if at some point someone infiltrates the power grid and plants malware? Is that a precursor to a larger attack? How do you respond, kinetically or electronically? What’s the threshold for response?” he said.

HD Manufacturer LaCie Admits Yearlong Data Breach

Threatpost for B2B - Tue, 04/15/2014 - 14:21

The French computer hardware company LaCie, perhaps best known for its external hard drives, announced this week it fell victim to a data breach that may have put at risk the sensitive information of anyone who has purchased a product from its website during the last year.

According to an incident notification posted today, an attacker used malware to infiltrate LaCie’s eCommerce website for almost a year and, in turn, glean customer information. Attackers had access from March 27, 2013 to March 10, 2014, but it wasn’t until last Friday that LaCie began to inform customers at risk.

In addition to its ubiquitous rugged orange external hard drives, LaCie, which is headquartered in Paris, also manufactures RAID arrays, flash drives, and optical drives.

The announcement warns that anyone who purchased an external hard drive or any form of LaCie hardware off of the company’s website during that time period may have had their data stolen. That information includes customers’ names, addresses, email addresses, as well as payment card information and card expiration dates.

While the company has hired a “leading forensic investigation firm” to continue looking into the technicalities of the breach – how many are affected, etc. – for the time being LaCie has suspended all online sales until they can “transition to a provider that specializes in secure payment processing services.”

A report from KrebsonSecurity.com last month speculated that the company’s storefront may have been hijacked by hackers using security vulnerabilities in Adobe’s ColdFusion development platform.

According to Krebs, LaCie’s eCommerce site was one of nearly 50 eCommerce websites spotted ensnared in a nasty ColdFusion botnet that was leaking consumer credit card information. The security reporter previously surmised that the hackers behind the botnet are the same attackers behind last year’s Adobe breach that leaked source code for Reader and ColdFusion, not to mention the personal information of millions of its customers.

At the time Clive Over, a spokesman for Seagate, which bought LaCie in 2012, told Krebs the company was not “aware that company or third party information was improperly accessed” when informed that one of its servers had been targeted and breached in 2013. Over went on to say that LaCie was “working with third party experts to do a deeper forensic analysis,” the same analysis that would eventually yield the breach’s discovery.

*Image via fncll‘s Flickr photostream, Creative Commons

Programming Language Security Examined

Threatpost for B2B - Tue, 04/15/2014 - 12:08

When building an enterprise Web application, the most foundational decision your developers make will be the language in which the app is written. But is there a barometer that measures the security of the programming languages developers have at their disposal, or are comfortable with, versus other options?

WhiteHat Security, an application security vendor, released its 2014 Website Security Statistics Report today that measures the security of programming languages and development frameworks and examines not only what classes of vulnerabilities they’re most susceptible to, but also how long it takes to remediate bugs and whether there’s a real difference that would impact a business decision as to which language to use.

The report is based on vulnerability assessments conducted against 30,000 customer websites using a proprietary scanner, and the results point toward negligible differences in the relative security of languages such as .NET, Java, PHP, ASP, ColdFusion and Perl. Those six shared relatively similar mean numbers of vulnerabilities, and problems such as SQL injection and cross-site scripting vulnerabilities remain pervasive.

“Ultimately, what we found was that across the board there were no significant differences between languages,” said Gabriel Gumbs, lead researcher on WhiteHat’s Website Security Statistics Report. “There are some peaks and valleys with regard to vulnerability classes and remediation rates, but no one stood out as a clear winner as more secure.”

One conclusion, therefore, is that web application security woes, including the chronic existence of SQL injection and cross-site scripting vulnerabilities in code, are a human issue.

“A lot of it is the human factor,” Gumbs said. Static and dynamic testing controls are available to developers that test code as it is being developed as well as in production. But they have to be used throughout the development lifecycle, Gumbs said. “During the design phase of an app, security implications must be taken into account.”

As for the numbers compiled by WhiteHat, .NET and Java are the most widely used languages, accounting for a combined 53 percent, while the creaky ASP is next at 16 percent. SQL injection vulnerabilities were especially prevalent in ColdFusion sites, while Perl sites were found most vulnerable to cross-site scripting. ColdFusion sites, however, had the best overall remediation rates, while PHP sites had one of the lowest.

Cross-site scripting was the most prevalent vulnerability in five of the six languages; the exception was .NET, where information leakage flaws were highest. Cross-site scripting was worst in Perl (67 percent of sites) and Java (57 percent). Content spoofing, SQL injection and cross-site request forgery round out the top five most prevalent vulnerabilities.

“The education is out there and the frameworks are out there [to address cross-site scripting]. My best guess is that it’s a combination of the speed at which companies are implementing new functionality and exposing it to the business that is driving that number,” Gumbs said. “We don’t know what it will take to tip the scales and make those numbers go down. It may be something we have to live with. If we can accept that and then approach how we address that based on risk assessments, it may drive down the number.”

Looking at specific industries, in particular heavily regulated ones such as financial services and health care, there is no noticeable difference in either the number of vulnerabilities present or remediation rates. This is in spite of over-arching regulations, such as PCI-DSS for payment cards and HIPAA for health care data, that mandate a certain minimum standard. The problem is that many regulated organizations do what it takes to reach that minimum standard, and not much else.

“What we found is that industries with more regulations are insecure because they fix vulnerabilities that the regulation only calls for,” Gumbs said. “If PCI says fix these five vulnerabilities, that’s all they fixed. It proved to me they were more insecure than the other industries because they put that effort into compliance, not security.”

Heartbleed Saga Escalates With Real Attacks, Stolen Private Keys

Threatpost for B2B - Mon, 04/14/2014 - 15:34

Heartbleed went from a dangerous Internet-wide vulnerability over the weekend to one with real exploits, real victims and real problems for private SSL server keys.

Mumsnet, a U.K.-based parenting website, said it was victimized by hackers exploiting the vulnerability in OpenSSL to steal passwords, as was the Canada Revenue Agency, which reported the loss of social insurance numbers for 900 citizens, according to a BBC report today.

Hackers were using the stolen Mumsnet credentials to post messages to the site on Friday, while the CRA said hackers were busy exploiting Heartbleed during a six-hour period before its systems were patched.

While experts warned it was possible from the outset to steal credentials and other sensitive information in plaintext, it was thought that stealing private SSL keys that would provide unfettered access to web traffic emanating from a server was a much more difficult proposition.

Starting on Friday, however, several researchers had in fact managed to do just that.

Russian engineer Fedor Indutny was the first to break the so-called CloudFlare Challenge set up by web traffic optimization vendor CloudFlare. The company had set up an nginx server running an unpatched version of OpenSSL and issued a challenge to researchers to steal the private SSL key.

Indutny replayed his attack more than two million times before he was able to steal the key, which he submitted at 7:22 Eastern time on Friday, less than an hour before Ilkka Mattila of NCSC-FI submitted another valid key using just 100,000 requests.

Since then, two more submissions were confirmed on Saturday, one by Rubin Xu, a PhD student at Cambridge University, and another by researcher Ben Murphy.

The vulnerability is present in OpenSSL versions 1.0.1 to 1.0.1f and it allows attackers to snag 64KB of memory per request per server using its heartbeat function. The bits of memory can leak anything from user names and passwords to apparently private keys if the attack is repeated often enough. A number of large sites, including Yahoo, Lastpass and many others were vulnerable, but quickly patched. Once the vulnerability is patched, old certificates must be revoked and new ones validated and installed.
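The mechanics of the over-read can be illustrated with a toy simulation. This is a hypothetical model, not OpenSSL's actual code: server memory is flattened into one byte buffer, and the vulnerable heartbeat handler echoes back as many bytes as the client claims to have sent, without checking that claim against the actual payload length.

```python
# Toy simulation of the Heartbleed over-read (CVE-2014-0160).
# Hypothetical model only; the buffer contents are made up.

SERVER_MEMORY = bytearray(
    b"HEARTBEAT_PAYLOAD"            # the client's actual payload (17 bytes)
    b"secret_password=hunter2"      # adjacent memory the client never sent
)

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    """Vulnerable handler: trusts the client-supplied length field."""
    start = SERVER_MEMORY.find(payload)
    # Missing check: claimed_len is never compared to len(payload).
    return bytes(SERVER_MEMORY[start:start + claimed_len])

# Honest request: lengths match, only the payload comes back.
print(heartbeat(b"HEARTBEAT_PAYLOAD", 17))

# Malicious request: claims 40 bytes, leaking adjacent memory.
print(heartbeat(b"HEARTBEAT_PAYLOAD", 40))
```

The patched behavior amounts to a single bounds check: discard any heartbeat request whose claimed length exceeds the payload actually received.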

Users, meanwhile, would need to change their passwords for accounts on these sites, but only after the patch is applied, or their new credentials could be stolen as well. Worse, the attacks don’t show up in logs and leave no trace behind. Therefore, it’s impossible to know whether a private key has been stolen and malicious sites signed by a legitimate certificate key, for example, would appear benign.

The story took a strange twist Friday night when Bloomberg reported that the U.S. National Security Agency had been exploiting Heartbleed for two years, according to a pair of unnamed sources in the article. A bug such as Heartbleed could simplify surveillance efforts for the agency against particular targets, but given the arsenal of attacks at its disposal, the NSA might have more efficient means with which to gather personal data on targets.

To that end, the agency via the Office of the Director of National Intelligence issued a rare denial Friday night. The memo said the NSA was not aware of the flaw in OpenSSL. “Reports that say otherwise are wrong,” it said.

The DNI’s office also said the Federal government uses OpenSSL to encrypt a number of government sites and services and would have reported the vulnerability had it discovered it.

“When Federal agencies discover a new vulnerability in commercial and open source software – a so-called ‘Zero day’ vulnerability because the developers of the vulnerable software have had zero days to fix it – it is in the national interest to responsibly disclose the vulnerability rather than to hold it for an investigative or intelligence purpose,” the DNI said.

Meanwhile, a report in the New York Times on Saturday said that President Obama has given the NSA leeway in using bugs such as Heartbleed where there is a “clear national security or law enforcement need.” The NSA has thrived on such loopholes, according to numerous leaks made public in the Snowden documents. The president’s decision was made in January, the Times article said, after he addressed the nation on the government’s surveillance of Americans.

The U.S. government, it was made public in September, had bought a subscription to a zero-day exploit service sold by VUPEN of France.

The contract, made public through a Freedom of Information Act request by MuckRock, an open government project that publishes a variety of such documents, shows that the NSA bought VUPEN’s services on Sept. 14, 2012. The NSA contract is for a one-year subscription to the company’s “binary analysis and exploits service.”

So Far, So Good for TrueCrypt: Initial Audit Phase Turns Up No Backdoors

Threatpost for B2B - Mon, 04/14/2014 - 13:42

An initial audit of the popular open source encryption software TrueCrypt turned up fewer than a dozen vulnerabilities, none of which so far point toward a backdoor surreptitiously inserted into the codebase.

A report on the first phase of the audit was released today by iSEC Partners, which was contracted by the Open Crypto Audit Project (OCAP), a grassroots effort that not only conducted a successful fundraising effort to initiate the audit, but raised important questions about the integrity of the software.

TrueCrypt is praised not only as free and open source encryption software, but also for being easy to install, configure and use. Given that it has been downloaded upwards of 30 million times, it stood to reason that it could be a prime target for manipulation by intelligence agencies that have been accused of subverting other widely used software packages, commercial and open source.

The first phase of the audit focused on the TrueCrypt bootloader and Windows kernel driver; architecture and code reviews were performed, as well as penetration tests including fuzzing interfaces, said Kenneth White, senior security engineer at Social & Scientific Systems. The second phase of the audit will look at whether the various encryption cipher suites, random number generators and critical key algorithms have been implemented correctly.

“With Phase II, we will be conducting a formal cryptanalysis and looking at these issues,” White said. “In security engineering, we never say a system is ‘unbreakable,’ but rather, ‘we looked at X, Y, and Z and couldn’t find a vulnerability.’

“But yes, I would say there is certainly an increased level of confidence in TrueCrypt,” White said.

Among the still-outstanding questions publicly raised by OCAP, which was kicked off by White and Johns Hopkins professor and crypto expert Matthew Green, are those surrounding the Windows versions of TrueCrypt. Since those are available only as downloadable binaries, they cannot be compared to the original source code, yet they behave differently than versions compiled from source code. There were also concerns about the license governing TrueCrypt use, as well as the anonymous nature of the development group behind the software.

iSEC Partners’ report gave TrueCrypt a relatively clean bill of health.

“iSEC did not identify any issues considered ‘high severity’ during this testing. iSEC found no evidence of backdoors or intentional flaws. Several weaknesses and common kernel vulnerabilities were identified, including kernel pointer disclosure, but none of them appeared to present immediate exploitation vectors,” iSEC’s Tom Ritter said in a statement. “All identified findings appeared accidental.”

Ritter said iSEC recommends improvements be made to the quality of the code in the software, and that the build process be updated to rely on tools with a “trustworthy provenance.”

“In sum, while TrueCrypt does not have the most polished programming style, there is nothing immediately dangerous to report,” Ritter said.

Specifically, iSEC security engineers Andreas Junestam and Nicolas Guigo audited the bootloader and Windows kernel driver in TrueCrypt 7.1a. The report says iSEC performed hands-on testing against binaries available from the TrueCrypt download page and binaries compiled from source code. Work was completed Feb. 14.

The engineers found 11 vulnerabilities, four rated medium severity, four low severity and three were rated informational issues having to do with defense in depth.

“Overall, the source code for both the bootloader and the Windows kernel driver did not meet expected standards for secure code,” the report said. “This includes issues such as lack of comments, use of insecure or deprecated functions, inconsistent variable types, and so forth.”

The team dug deeper into its recommendations of updating the Windows build environment and code quality improvements, specifically replacing outdated tools and software packages, some of which date back to the early 1990s.

“Using antiquated and unsupported build tools introduces multiple risks including: unsigned tools that could be maliciously modified, unknown or un-patched security vulnerabilities in the tools themselves, and weaker or missing implementations of modern protection mechanisms such as DEP and ASLR,” the team wrote in its report. “Once the build environment has been updated, the team should consider rebuilding all binaries with all security features fully enabled.”
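On a modern Microsoft toolchain, the protections the report mentions are opt-in compiler and linker switches. The following is an illustrative sketch of such a build invocation, not TrueCrypt's actual build scripts:

```text
REM Illustrative MSVC options (hypothetical command line):
cl /GS source.c /link /NXCOMPAT /DYNAMICBASE /SAFESEH
REM   /GS          - stack buffer overrun checks
REM   /NXCOMPAT    - mark the binary compatible with DEP
REM   /DYNAMICBASE - opt the image in to ASLR
REM   /SAFESEH     - safe exception handler tables (32-bit only)
```

Binaries built with tools that predate these switches simply cannot carry the corresponding protections, which is the core of the team's concern.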

They added that “lax” quality standards make the source code difficult to review and maintain, impeding vulnerability assessments.

Of the four most serious bugs uncovered in the audit, the most serious involves the key used to encrypt the TrueCrypt Volume Header. It is derived using PBKDF2, a standard algorithm, with an iteration count that is too small to prevent password-guessing attacks.

“TrueCrypt relies on what’s known as a PBKDF2 function as a way to ‘stretch’ a user’s password or master key, and there is concern that it could have been stronger than the 1,000 or 2,000 iterations it uses currently,” White said. “The TrueCrypt developers’ position is that the current values are a reasonable tradeoff of protection vs. processing delay, and that if one uses a weak password, a high-count PBKDF2 hash won’t offer much more than a false sense of security.”
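The protection-versus-delay tradeoff is easy to observe with Python's standard library. This is an illustration only: the password, salt and hash below are hypothetical, and TrueCrypt's actual key derivation uses different primitives (such as RIPEMD-160 or Whirlpool) rather than SHA-256.

```python
# Timing PBKDF2 at a low vs. a high iteration count.
# Illustrative values only; not TrueCrypt's actual parameters.
import hashlib
import time

password = b"correct horse battery staple"   # hypothetical password
salt = b"volume-header-salt"                 # hypothetical salt

for iterations in (1_000, 100_000):
    t0 = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    print(f"{iterations:>7} iterations: {elapsed_ms:.1f} ms per guess")
```

Each extra factor of iterations multiplies an attacker's cost per password guess by the same factor, while the legitimate user pays the delay only once per mount.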

White said the OCAP technical advisors are also concerned about TrueCrypt’s security model, which offers narrowly restricted privacy guarantees.

“So, for example, if you are not running whole volume (system disk) encryption, there are many known exploits to recover plaintext data, including decryption keys,” White said, pointing out that Microsoft’s Bitlocker software and PGP, for example, have similar attack paths.

“But in the case of TrueCrypt, whole volume disk encryption is only available for the Windows port, and there exists today point-and-click forensic tools that can be purchased for a few hundred dollars that can easily decrypt data from a running machine with any of these packages, TrueCrypt included,” White said. “I have a feeling that while most in the security industry understand this, it is probably worth emphasizing to a broader audience: on the vast majority of machines that use file or disk encryption, if the underlying operating system or hardware can be compromised, then so too can the encryption software.”

With a Warning, FTC Approves WhatsApp, Facebook Union

Threatpost for B2B - Mon, 04/14/2014 - 12:54

Facebook’s acquisition of messaging application WhatsApp was approved by the Federal Trade Commission late last week, but not without a stern notice from the agency, which warned that it would be keeping a watchful eye on the two companies going forward.

In a letter addressed to officials at Facebook and WhatsApp on Thursday, the FTC’s Bureau of Consumer Protection Director Jessica Rich made it clear that the agency would continue to ensure the companies honor their promises to users.

“WhatsApp has made a number of promises about the limited nature of the data it collects, maintains, and shares with third parties–promises that exceed the protections currently promised to Facebook users,” Rich wrote. “We want to make clear that, regardless of the acquisition, WhatsApp must continue to honor these promises to consumers.”

The privacy policy for WhatsApp, the popular app that allegedly sends 50 billion messages between users daily, states that user information will not be used for advertising purposes and won’t be sold to a third party. The FTC’s letter (.PDF) claims this is something that shouldn’t be nullified by the Facebook purchase. The FTC adds that if Facebook were to go ahead and share any of its newly acquired WhatsApp information, it would violate its privacy promises, not to mention an order the agency has placed on the social network.

That order bars Facebook from misrepresenting the way it handles users’ privacy or the security of consumers’ personal information.

The letter, which was addressed to Facebook’s Chief Privacy Officer Erin Egan and WhatsApp’s General Counsel Anne Hoge, goes on to state that data-collection changes could be made as long as the companies obtain users’ “affirmative consent.” If users don’t agree with new procedures, they should be granted the opportunity to opt out, or at least be made to understand “that they have an opportunity to stop using the WhatsApp service.”

“Failure to take these steps could constitute a violation of Section 5 and/or the FTC’s order against Facebook,” the letter states.

When the $19 billion acquisition was first announced in February, privacy advocates were rattled that Facebook would be able to mine WhatsApp’s vast reservoir of user information and convert that into ad revenue without the users’ consent.

Organizations such as the Center for Digital Democracy (CDD) and the Electronic Privacy Information Center (EPIC) decried the move in March, requesting that the FTC look into it. Jan Koum, WhatsApp’s founder, later responded with a blog post, “Setting the record straight,” insisting that both firms would “remain autonomous and operate independently.”

Photo via alvy’s Flickr photostream, Creative Commons

Arbitrary Code Execution Bug in Android Reader

Threatpost for B2B - Mon, 04/14/2014 - 11:04

The Android version of Adobe Reader reportedly contains a vulnerability that could give an attacker the ability to execute arbitrary code on devices running Google’s mobile operating system.

The problem arises from the fact that Adobe Reader for Android exposes a number of insecure JavaScript interfaces, according to security researcher Yorick Koster, who submitted the details of the bug to the Full Disclosure mailing list.

In order to exploit the vulnerability, an attacker would have to trick the victim into opening a maliciously crafted PDF file. Successful exploitation could then give the attacker the ability to execute arbitrary Java code and, in turn, compromise documents and other files stored on the device’s SD card.

Adobe verified the existence of the vulnerability in version 11.1.3 of Reader for Android and has provided a fix for it with version 11.2.0.

On the point of exploitation, the specially crafted PDF file would have to contain JavaScript that runs when the targeted user interacts with the file. An attacker could use any of the JavaScript objects included in Koster’s report to reach the public reflection APIs those objects inherit. It is these public reflection APIs that the attacker can abuse to run arbitrary code.
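The reflection walk at the heart of this bug class can be sketched in plain Java. The class and method names below are standard Java reflection APIs; the `bridge` object and `grabRuntime` helper are hypothetical stand-ins for the interface object an app exposes to script code, and the sketch assumes that object’s class was loaded by the application class loader.

```java
import java.lang.reflect.Method;

public class ReflectionSketch {

    // Given any object reference (the "bridge" an app exposes to script code),
    // walk the reflection API from that object back to java.lang.Runtime.
    static Object grabRuntime(Object bridge) throws Exception {
        Class<?> runtimeClass = bridge.getClass()
                .getClassLoader()                 // app class loader
                .loadClass("java.lang.Runtime");  // load an arbitrary class
        Method getRuntime = runtimeClass.getMethod("getRuntime");
        return getRuntime.invoke(null);           // static call: Runtime.getRuntime()
    }

    public static void main(String[] args) throws Exception {
        Object runtime = grabRuntime(new ReflectionSketch());
        // From a Runtime instance, an attacker could look up exec(String)
        // the same way and run an arbitrary command.
        System.out.println(runtime.getClass().getName());
    }
}
```

Nothing in this chain requires special permissions; once script code holds any reference to a Java object, the reflection API is the pivot point, which is why exposing such interfaces to untrusted PDF content is dangerous.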

In other Android-related news, Google announced late last week that it would bolster its existing application regulation mechanism with a new feature that will continually monitor installed Android applications to ensure that they aren’t acting maliciously or performing unwanted actions.

Blog: SyScan 2014

Secure List feed for B2B - Sun, 04/13/2014 - 07:18
In the first week of April 2014 we were at “The Symposium on Security for Asia Network” (SyScan) (http://www.syscan.org/), a “geeky” single-track conference located in Singapore.

Stealing Private SSL Keys Using Heartbleed Difficult, Not Impossible

Threatpost for B2B - Fri, 04/11/2014 - 13:49

Heartbleed can be patched, and passwords can be changed. But can you steal private keys by taking advantage of the Internet-wide bug in OpenSSL?

Yes, but it’s difficult.

Stealing private server SSL keys is the pot of gold at the end of the rainbow for criminal hackers and intelligence agencies alike. Private keys bring unfettered access to encrypted Web traffic, and you can be sure that if someone has been able to steal them, they’re not going to crow about it on Twitter or Full Disclosure.

Meanwhile, companies running a vulnerable version of OpenSSL in their infrastructure need to assess the risks involved, then decide whether it’s worth their time and resources to revoke existing certificates and reissue new ones. Should services be shut down in the interim? That’s another tough call some companies will have to make.

“The vulnerability has been out there for two years, so we don’t know who has been on it. But if someone has figured out how to steal private keys, they’re not going to go public about it,” said Marc Gaffan, cofounder of Incapsula.

Incapsula, an application delivery company that offers a range of web security services, patched its infrastructure and is in the process of replacing every certificate on behalf of its customers, Gaffan said, adding that other companies with a similar zero tolerance for risk will do the same.

Stealing a private key using the Heartbleed bug, however, is easier said than done. Researchers at CloudFlare said it may be possible to steal private keys, but to date they have been unable to do so using Heartbleed.

“Note, that is not the same as saying it is impossible to use Heartbleed to get private keys. We do not yet feel comfortable saying that,” said CloudFlare’s Nick Sullivan. “However, if it is possible, it is at a minimum very hard. And we have reason to believe based on the data structures used by OpenSSL and the modified version of NGINX that we use, that it may in fact be impossible.”

The Heartbleed vulnerability enables an attacker to retrieve up to 64KB of memory from a server running a vulnerable version of OpenSSL. Attackers able to repeat the attack could steal sensitive data from a server, including credentials. Finding private keys is much more labor intensive and depends on multiple variables, including the timing of attacks. Incapsula’s Gaffan said a private key could be in memory 10 seconds before an attacker arrives, and gone by the time he’s there.

“It’s like looking for a needle in a haystack; it’s not always there and it’s not always deterministic where the needle, or private key, may be,” said Incapsula’s Gaffan. “Different scenarios cause memory to shape the way it does; that’s why there’s the potential for the private key to be there.”
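The mismatch Heartbleed exploits is easy to see in the heartbeat wire format defined by RFC 6520: the request carries its own payload-length field, and unpatched OpenSSL echoed back that many bytes without checking how much data actually arrived. A minimal Python sketch of the message layout (a format illustration, not a working exploit):

```python
import struct

def heartbeat_request(claimed_len: int, payload: bytes = b"") -> bytes:
    """Build a TLS heartbeat request whose claimed payload length need not
    match the payload actually sent -- the mismatch at the core of Heartbleed."""
    # Heartbeat message: type (1 = request), 2-byte big-endian payload length, payload.
    hb = struct.pack(">BH", 1, claimed_len) + payload
    # TLS record header: content type 24 (heartbeat), version TLS 1.1 (3, 2),
    # 2-byte record length.
    return struct.pack(">BBBH", 24, 3, 2, len(hb)) + hb

# A benign request claims 4 bytes and sends 4 bytes.
benign = heartbeat_request(4, b"ping")

# A malicious request claims 0x4000 (16,384) bytes but sends none; an unpatched
# server would respond with that much adjacent process memory.
malicious = heartbeat_request(0x4000)
```

Because the claimed length is attacker-controlled and capped only by the 16-bit field, repeated requests let an attacker page through chunks of server memory, which is exactly the needle-in-a-haystack search Gaffan describes.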

If the heartbeat feature is enabled in OpenSSL, attacks against the Heartbleed vulnerability are undetectable, experts say.

“The request is a naïve request. It will not appear in a log as an attempt and it doesn’t leave a trace,” Gaffan said.

Mitigating Heartbleed is a process, starting with applying the patch to OpenSSL before revoking old certificates and installing new ones. Users, meanwhile, will likely have to change their passwords for a number of online services they use, but shouldn’t do so until they’re sure the service has done its part with regard to patching and updating certificates.

“Users need to be aware that this is going to be a longtail issue,” said Trustwave security manager John Miller. “There are bound to be more stories about this in the weeks and months to come.”

The Internet-wide implications of Heartbleed are still being fathomed. OpenSSL is likely running in any number of home networking devices, smartphone software and applications, and industrial control and SCADA systems.

“OpenSSL is probably less prevalent in ICS (since many don’t use any encryption at all).  ICS backbone servers may be affected since those are more likely to use OpenSSL,” said Chris Sistrunk, senior consultant with Mandiant. “The risks of the Heartbleed vulnerability pale in comparison to the general fragility and lack of security features like authentication and encryption.  Availability is still king and confidentiality is the least important.  For those who do have OpenSSL, the patch may or may not be rolled out right away depending on the type of ICS.  (Do we have to interrupt our batch in process etc to patch?)”

Adam Crain, a security researcher and founder of Automatak, cautioned that TLS is used in industrial control systems to wrap insecure protocols such as DNP3.

“Attackers can now read memory from these servers/clients. Furthermore, people sometimes use TLS wrapped DNP3/ICCP between entities over the internet,” Crain said. “A load-based DoS was always possible on these endpoints, but now it’s possible that encryption keys or other credentials could be lifted to infiltrate these systems.”

Threatpost News Wrap, April 11, 2014

Threatpost for B2B - Fri, 04/11/2014 - 12:06

Dennis Fisher and Mike Mimoso discuss–what else–the OpenSSL Heartbleed vulnerability and the doings at the Source Boston conference this week.


BlackBerry, Cisco Products Vulnerable to OpenSSL Bug

Threatpost for B2B - Fri, 04/11/2014 - 07:37

Vendors are continuing to check their products for potential effects from the OpenSSL Heartbleed vulnerability, and both Cisco and BlackBerry have found that a variety of their products contain a vulnerable version of the software.

BlackBerry on Thursday said that several of its software products are vulnerable to the OpenSSL bug, but that its phones and devices are not affected. The company said its BBM for iOS and Android, Secure Workspace for iOS and Android and BlackBerry Link for Windows and OS X all are vulnerable to the OpenSSL flaw.

“BlackBerry is currently investigating the customer impact of the recently announced OpenSSL vulnerability. BlackBerry customers can rest assured that while BlackBerry continues to investigate, we have determined that BlackBerry smartphones, BlackBerry Enterprise Server 5 and BlackBerry Enterprise Service 10 are not affected and are fully protected from the OpenSSL issue. A list of known affected and unaffected products is supplied in this notice, and may be updated as we complete our investigation,” the company’s advisory says.

Meanwhile, the list of Cisco products affected by the Heartbleed vulnerability is much longer.

The company said in its advisory that many of its products, including its TelePresence Video Communications Server, WebEx Meetings Server, many of its Unified IP phones and several others, are vulnerable. Cisco also said that a far larger list of products are potentially vulnerable and are under investigation.

Cisco’s Sourcefire Vulnerability Research Team did some testing on the vulnerability and found that on vulnerable systems it could retrieve usernames, passwords and SSL certificates.

“To detect this vulnerability we use detection_filter (“threshold”) rules to detect too many inbound heartbeat requests, which would be indicative of someone trying to read arbitrary blocks of data. Since OpenSSL uses hardcoded values that normally result in a 61 byte heartbeat message size, we also use rules to detect outbound heartbeat responses that are significantly above this size. Note: you can’t simply compare the TLS record size with the heartbeat payload size since the heartbeat message (including the indicated payload size) is encrypted,” Brandon Stultz of Cisco wrote in a blog post.
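The detection approach Stultz describes can be sketched as a Snort rule. This is an illustrative fragment, not one of Sourcefire’s published signatures; the threshold values and SID are made up for the example.

```
# Illustrative only -- not the actual Sourcefire rules.
# Flag a burst of inbound TLS heartbeat records (content type 0x18),
# which would be indicative of someone paging through server memory.
alert tcp $EXTERNAL_NET any -> $HOME_NET 443 ( \
    msg:"Possible Heartbleed probe - excessive inbound heartbeats"; \
    flow:established,to_server; \
    content:"|18 03|"; depth:2; \
    detection_filter:track by_src, count 5, seconds 60; \
    sid:1000001; rev:1;)
```

The `detection_filter` keyword is what implements the “too many heartbeat requests” threshold: the rule fires only after the count is exceeded within the time window, which keeps occasional legitimate heartbeats from alerting. Catching oversized responses is harder, for the reason Stultz notes: the heartbeat payload is encrypted, so only the outer TLS record size is visible on the wire.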

