Threatpost for B2B

The First Stop For Security News

Certificate Revocation Slow for Heartbleed Servers

Wed, 04/16/2014 - 12:05

The rush to revoke and replace digital certificates on Heartbleed-vulnerable Web servers seems to be no rush at all.

Internet research and security services firm Netcraft reports today that of the more than 500,000 certificates it knows of on servers running vulnerable versions of OpenSSL, only 80,000 have been revoked so far. The urgency to do so was ramped up on Friday when four unrelated security researchers each were able to take advantage of the TLS heartbeat vulnerability to steal private SSL keys in a challenge set up by vendor CloudFlare.

Also, the first public exploits against websites resulting in stolen data were reported, against the Canada Revenue Agency and Mumsnet of the U.K.

“While some companies quickly recognized the need to issue new certificates in response to the Heartbleed bug, the number of revocations has not kept up,” wrote Netcraft’s Paul Mutton. “This is a mistake, as there is little point issuing a new certificate if an attacker is still able to impersonate a website with the old one.”

Heartbleed is a dangerous Internet-wide bug that can be exploited to steal sensitive information such as user credentials, and also private encryption keys if the attack is replayed often enough. One researcher in the CloudFlare Challenge, Russian Fedor Indutny, replayed his attack 2.5 million times before he was able to steal a key from an nginx server running an unpatched instance of OpenSSL set up by CloudFlare.

Researchers had speculated that stealing private keys by exploiting Heartbleed would be incredibly difficult and unlikely, but that was proven incorrect: by Saturday morning there were four reported winners of the challenge, Indutny being the first. Compounding the problem, Heartbleed attacks leave no log entries and are effectively undetectable.

The process of revoking old certificates and reissuing new ones involves working closely with a certificate authority, many of which offer self-service tools or APIs that help facilitate the process. The problem is that the wonky code was introduced into OpenSSL in December 2011 and there have been public reports that it has been exploited as far back as last November.

“You have to get your infrastructure patched so that any future damage will not be incurred because of the vulnerability, and the second priority is replacing or reissuing certificates to mitigate the risk from private keys stolen while the vulnerability existed in the wild,” said Marc Gaffan, cofounder of Incapsula. Users, for example, should make sure that sites on which they’re changing credentials have been patched; otherwise an attacker could continue to exploit an unpatched site, stealing new credentials in the process.

Netcraft, meanwhile, estimates the cost of replacing compromised certs with new ones at more than $100 million; some CAs, however, are allowing customers to reissue and revoke certificates free of charge, Netcraft said. It also points out that many sites are buying new certificates rather than reissuing.

“Perhaps in the haste of resolving the problem, this seemed the easiest approach, making Heartbleed a bonanza for certificate authorities,” Mutton said.

Netcraft also points out that some organizations—including large sites such as Yahoo’s mobile log-in page, the U.S. Senate’s large file transfer system, and GeoTrust’s SSL Toolbox—have deployed new certs but have yet to revoke old ones. Some of those old certificates are not yet on a Certificate Revocation List, and OCSP responders are still reporting them as “good,” Netcraft said.

Revocation may not help in some cases, Netcraft cautions: four percent of certificates do not specify a URL for an OCSP responder and can only be revoked through a CRL.

“This makes the certificates effectively irrevocable in some browsers — for example, the latest version of Mozilla Firefox no longer uses CRLs at all (previously it would fall back to checking a CRL if an OCSP request failed, but only for Extended Validation certificates),” Mutton said.

There are still other certificates, Netcraft said, that may have been compromised and do not specify either an OCSP responder or a CRL address; these cannot be revoked before they expire.

“These certificates are therefore completely irrevocable in all browsers and could be impersonated until their natural expiry dates if an attacker has already compromised the private keys,” Mutton said.
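The OCSP-versus-CRL distinction Netcraft draws comes down to which revocation pointers a certificate actually carries. As a rough illustration only (this is not part of Netcraft’s analysis, and the Python cryptography library and the server.pem path are assumptions for the example), the sketch below pulls the OCSP responder and CRL distribution point URLs out of a PEM-encoded certificate; a certificate advertising neither falls into the “irrevocable” category described above.

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

    def revocation_pointers(pem_bytes: bytes):
        """Return (ocsp_urls, crl_urls) advertised by a PEM-encoded certificate."""
        cert = x509.load_pem_x509_certificate(pem_bytes)

        try:
            aia = cert.extensions.get_extension_for_oid(
                ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
            ocsp_urls = [desc.access_location.value for desc in aia
                         if desc.access_method == AuthorityInformationAccessOID.OCSP]
        except x509.ExtensionNotFound:
            ocsp_urls = []

        try:
            cdp = cert.extensions.get_extension_for_oid(
                ExtensionOID.CRL_DISTRIBUTION_POINTS).value
            crl_urls = [name.value for point in cdp for name in (point.full_name or [])]
        except x509.ExtensionNotFound:
            crl_urls = []

        return ocsp_urls, crl_urls

    # A cert with no OCSP URL can only be revoked via CRL, which current
    # Firefox ignores for non-EV certificates; one with neither pointer
    # cannot be revoked in any browser before it expires.
    with open("server.pem", "rb") as f:  # illustrative path
        print(revocation_pointers(f.read()))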

Eugene Kaspersky on Critical Infrastructure Security

Wed, 04/16/2014 - 11:00

Dennis Fisher talks with Eugene Kaspersky about the need for better critical infrastructure security, the major threats facing enterprises today and the specter of cyberwar.

http://threatpost.com/files/2014/04/digital_underground_150.mp3

Download: digital_underground_150.mp3

Crypto Examination Awaits in Phase Two of TrueCrypt Audit

Wed, 04/16/2014 - 10:22

Phase two of the TrueCrypt audit figures to be a labor-intensive, largely manual cryptanalysis, according to the two experts behind the Open Crypto Audit Project (OCAP).

Matthew Green, crypto expert and professor at Johns Hopkins University, said a small team of experts will have to, by hand, examine the cipher suites, key algorithms and random number generators used in the open source encryption software.

Green said he hopes to crowdsource experts for the second phase of the audit, attracting people skilled in examining cryptography.

“We’re still fleshing out the idea, but it will be a group of people who are well respected in the industry who have done this type of thing on a smaller scale,” Green said, adding he was not yet ready to publicly name them. “We would not be doing this if it were not for these people. We’ve created a series of challenges and we’re going to divide them up. I’m sure it will be fairly successful; we’re still in the planning stages.”

iSEC Partners, the consultants who were hired to conduct the first phase of the TrueCrypt audit, which looked at the TrueCrypt bootloader and Windows kernel driver, will not be involved in phase two, Green said, adding that the results for the second half of the audit may not be available for a few months.

The movement to audit TrueCrypt began last fall, a few months after the Snowden leaks began going public. TrueCrypt, which provides full disk and file encryption capabilities, has been downloaded close to 30 million times, making it a tempting target for intelligence agencies that have been accused of subverting other commercial and open source software.

iSEC on Monday released its report on the first phase of the audit and said it found no backdoors in the portions of the software it looked at. There were, however, worrisome findings around the quality of the code and the build process.

“The good news is that there is nothing devastating in the code,” Green said. “The auditors said there were problems in code quality and pointed out other legitimate issues. These are not reasons to stop using it.”

One of the first concerns leading to suspicions that the Windows binary version of TrueCrypt had been backdoored was a mysterious string of 65,024 encrypted bytes in the header. Experts wondered why these random bytes were there and whether they could be an encrypted password. Adding to the intrigue, the Windows package behaves differently than versions built from source code, and no one really knows who the developers behind TrueCrypt are.

In October, however, some of those concerns were laid to rest when Green and OCAP co-organizer Kenneth White, senior security engineer at Social & Scientific Systems, were contacted by the anonymous developers, who endorsed the audit. Also, an independent audit of TrueCrypt conducted by Xavier de Carne de Carnavalet of Concordia University in Canada was able to reproduce a deterministic compilation process for the Windows version that matches the binaries. He concluded TrueCrypt was not backdoored.

“We’re not going to say the issue is closed, but we’re a lot less panicked about it,” Green said. “That doesn’t mean there isn’t something there, it’s just not on my list of things to worry about.”

The relief with the initial results is that there isn’t a widespread bug in the software; while TrueCrypt isn’t deployed on the scale of OpenSSL or Apple software, the recent Heartbleed and so-called gotofail iOS bugs have left some in the security community a little shell-shocked. White, for one, is hoping the cryptanalysis turns up results as positive as those of phase one.

“Our confidence in encryption software is driven by the level of expertise afforded proper peer-to-peer review, by deep experts in the field. And there is a very small group of people who are qualified to conduct this kind of analysis, particularly with the encryption components,” White said. “What they find might be gross errors or might be a trivial single character mistake.”

While very few of these types of public audits have been conducted—perhaps the most high-profile security tool subjected to a public audit was open source private chat application Cryptocat—Green and White see the potential for more of these in the future.

“It’s much harder to do than it seems. It’s not just about getting the money and paying people; you have to find people who are interested in doing it. Not every firm is interested in doing a public audit,” Green said, adding that the TrueCrypt audit is the first of its kind that was crowdfunded. “We have a good technical advisory board who were willing to put in the time to make this happen. You need good organization with people whose job it is to do this; you can’t do this in your spare time.”

White said future projects are under consideration, but for now 100 percent of their efforts and funding are going toward the TrueCrypt audit.

“I think there is a subset of people who had their minds made up before we started, and have no intention of changing. For me, the appeal of this work has been to begin to establish a framework for conducting community-driven security audits and formal cryptanalysis on open source (or, in the case of TrueCrypt, source-available) software,” White said. “I think if after the final report we can say, ‘We marshaled some of the best minds in the field, and they looked at the code, the crypto, and the implementation and we found [X]’ then that’s a victory. As a privacy advocate, I’m obviously hoping for a clean verdict, but as a security engineer, I remain skeptical until the end.”

Financial Services Companies Facing Varied Threat Landscape

Wed, 04/16/2014 - 05:00

SAN FRANCISCO — Many of the stories about attacks on banks, payment processors and other portions of the financial services system around the world depict these intrusions as highly sophisticated operations conducted by top-level crews. However, the majority of the attacks these companies see aren’t much more advanced than a typical malware attack, experts say.

“About two thirds of the attacks on our merchant community are low to moderate complexity,” Ellen Richey, executive vice president and chief enterprise risk officer at Visa, said during a panel discussion on threats to the financial services industry at the Kaspersky Lab Cyber Security Summit here Tuesday.

The last couple of years have been tough on banks and other financial services companies when it comes to security. Many of the larger banks in the United States and elsewhere have been the targets of massive DDoS attacks for more than a year now, with many of these attacks being attributed to hacktivist groups. These banks, of course, always are targets for cybercrime gangs looking for some quick money. But Richey and the other panelists said that while they certainly see attacks against their networks from determined, skilled attackers, a great deal of what they see every day is pretty mundane.

Attackers looking for a nice pay day often won’t target a bank directly, but will hit a partner or supplier the bank uses and go from there.

That strategy isn’t new, but it’s proven to be effective.

“People aren’t going to go after hard targets, because it exposes them,” said Steve Adegbite, senior vice president of enterprise information security program oversight and strategy organization at Wells Fargo & Co. “They go after the lower level merchants and walk up the chain from there.”

While figuring out who is attacking an organization can be an intriguing exercise, Adegbite said that in a lot of cases it doesn’t matter much who is doing what. The end result of a successful attack is the same: a disruption to the business.

“Within financial services, it’s about customer service and keeping things running and keeping the lights on. When I go in there after the fact and strip everything down, whether it’s a nation state or a kid in his basement, it’s forcing us to deal with an incident.”

Richey said that Visa, with its massive network of merchants and huge profile around the globe, sees all shapes and sizes of attacks, but has seen a big jump in the number of DDoS attacks in recent years.

“The piece we’re seeing in the last two to three years is denial of service attacks. It’s primarily hacktivists,” she said. “The industry has amped up its defenses to deal with it.”

That increase in defenses has occurred across the financial services industry, but as well-funded and sophisticated as the security teams in these companies are, they can’t go it alone. Adegbite said that he and the Wells Fargo security team collaborate with as many people and organizations as they can when it comes to defending their networks.

“Cybersecurity is a team sport. The amount of things we’re dealing with, we can’t handle it all ourselves,” he said. “We form a community of defenders all the way through.”

Microsoft Releases Updated Threat Modeling Tool 2014

Tue, 04/15/2014 - 15:07

Threat modeling has been part of the security culture at Microsoft for the better part of a decade, an important piece of the Security Development Lifecycle that’s at the core of Trustworthy Computing.

Today, Microsoft updated its free Threat Modeling Tool with a number of enhancements that bring the practice closer to not only large enterprises, but also smaller companies with a growing target on their back.

Four new features have been added to the tool, including enhancements to its visualization capabilities, migration support for older models, support for custom threat definitions, as well as a change to how it generates threats.

“More and more of the customers I have been talking to have been leveraging threat modeling as a systematic way to find design-level security and privacy weaknesses in systems they are building and operating,” said Tim Rains, a Trustworthy Computing manager. “Threat modeling is also used to help identify mitigations that can reduce the overall risk to a system and the data it processes. Once customers try threat modeling, they typically find it to be a useful addition to their approach to risk management.”

The first iteration of the Microsoft Threat Modeling Tool was issued in 2011, but Rains said customer feedback and suggestions for improvements since then have been rolled into this update. The improvements include a new drawing surface that no longer requires Microsoft Visio to build data flow diagrams. The update also includes the ability to migrate older, existing threat models built with version 3.1.8 to the new format. Users can also upload existing custom-built threat definitions into the tool, which also comes with its own definitions.

The biggest change in the new version is in its threat-generation logic. Where previous versions followed the STRIDE framework (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege) per element, this one follows STRIDE per interaction of those elements. STRIDE helps users map threats to the properties guarding against them, for example, spoofing maps to authentication.

“We take into consideration the type of elements used on the diagram (e.g. processes, data stores etc.) and what type of data flows connect these elements,” Rains said.
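For readers unfamiliar with the framework, the mapping Rains refers to is easy to jot down. The snippet below is an illustrative summary of which security property counters each STRIDE category (the article’s own example is spoofing mapping to authentication); it is not code from Microsoft’s tool.

    # Illustrative STRIDE-to-property mapping; not taken from the Threat Modeling Tool.
    STRIDE_MITIGATIONS = {
        "Spoofing": "Authentication",
        "Tampering": "Integrity",
        "Repudiation": "Non-repudiation",
        "Information disclosure": "Confidentiality",
        "Denial of service": "Availability",
        "Elevation of privilege": "Authorization",
    }

    def required_properties(threats):
        """Properties a design must provide to counter the given STRIDE threats."""
        return sorted({STRIDE_MITIGATIONS[t] for t in threats})

    print(required_properties(["Spoofing", "Information disclosure"]))
    # ['Authentication', 'Confidentiality']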

At the RSA Conference in February, Trustworthy Computing program manager Adam Shostack said that there is no one defined way to model threats; that they must be specific to organizations and their particular risks.

“I now think of threat modeling like Legos. There are things you can snap together and use what you need,” Shostack said. “There’s no one way to threat model. The right way is the way that finds good threats.”

Install April Windows 8.1 Update If You Want Security Patches

Tue, 04/15/2014 - 14:40

In a bizarre and somewhat befuddling move, Microsoft announced yesterday on its Technet blog that it would no longer provide security updates to users running out-of-date versions of Windows 8.1. In order to receive updates, customers will have to have updated their machines with the most recent Windows 8.1 Update, which the company pushed out in April.

Microsoft recently released a fairly large update for Windows 8.1. Users who installed the update (or have their updates installed automatically) and even users that never updated to 8.1 in the first place will continue to receive updates. However, users running older versions of Windows 8.1 will not receive any security updates moving forward. If they attempt to install an update, they will receive a message informing them that the update is “not applicable.”

Users running Windows 7 or Vista are not affected by this announcement. Users running Windows XP are no longer eligible for security updates either since Microsoft’s long-awaited cessation of support for the more-than-12-year-old operating system became official in April.

It’s not clear whether this decision is to become a precedent for future update cycles.

“Since Microsoft wants to ensure that customers benefit from the best support and servicing experience and to coordinate and simplify servicing across both Windows Server 2012 R2, Windows 8.1 RT and Windows 8.1, this update will be considered a new servicing/support baseline,” wrote Steve Thomas, a senior consultant at Microsoft.

Thomas goes on to explain that users who install updates manually will have 30 days to install the Windows 8.1 update from April. Beginning with the May Patch Tuesday, any Windows 8.1 devices that have not installed the update will no longer receive security updates.

The move is even more of a head-scratcher considering the trouble many users have reportedly faced while attempting to install that April update. Microsoft even references the troubles the patch has presented, saying:

“Microsoft plans to issue an update as soon as possible that will correct the issue and restore the proper behavior for Windows 8.1 Update KB 2919355 scanning against all supported WSUS configurations. Until that time, we are delaying the distribution of the Windows 8.1 Update KB 2919355 to WSUS servers.”

Despite its promise to cut off support for out-of-date versions of Windows 8.1, the company has little choice but to “recommend that you suspend deployment of this update in your organization until we release the update that resolves this issue.”

Threatpost has reached out to Microsoft for clarification and will update this story with any comment.

Government, Private Sector Must Have a ‘Need to Share’ Mindset on Threats

Tue, 04/15/2014 - 14:22

SAN FRANCISCO–The security of both government and private enterprise systems going forward relies on the ability of those two parties to share threat, attack and compromise information on a real-time basis, former Department of Homeland Security secretary Tom Ridge said. Without that cooperation, he said, the critical infrastructure of the United States will continue to be “a target-rich environment”.

The idea of information sharing is a well-worn one in the security industry. Private companies have been trying to get timely intelligence on attacks and threats from the federal government for years, without much success. On the other side of that coin, the government has been ingesting threat intelligence from the private sector for decades, while typically not reciprocating. Ridge, speaking at the Kaspersky Lab Cybersecurity Summit here Tuesday, said that the federal government needs to change that situation if it hopes to make any real improvement in security.

“We’ve been trying for three years to get the government to create a protected avenue to share information from the government down to the private sector and from the private sector up to the government,” he said. “We’ve been unsuccessful.”

Part of the reason for that failure, Ridge said, is that the federal government often defaults to over-classifying information, especially as it relates to attacks and threats. That information often could be valuable to organizations in the private sector that may be affected by the same kinds of threats, but is sitting dormant somewhere because it’s not cleared for release to private companies. That mindset must be changed, Ridge said.

“The knowledge in the hands of the federal government relating to critical infrastructure and the security of our economy shouldn’t be held and parceled out,” he said. “We need to go from a need-to-know basis to a need-to-share mindset.”

Private enterprises have their own set of challenges surrounding security, and Ridge said that one of the main issues he still sees in large organizations is a lack of awareness that attackers are targeting them specifically.

“This isn’t a preventable risk, it’s a manageable risk,” he said.

“Private enterprises are foolish to think it won’t happen to them. We’re a target rich environment.”

Ridge said one of the other key obstacles to improving critical infrastructure security is the fact that the federal government must rely on the private sector to do nearly all the work. The government itself doesn’t own much in the way of utilities, power grids, financial systems or other prime targets. That’s all in the hands of private companies. So there’s a clear incentive for the two parties to share information, he said.

“The government has no critical infrastructure of its own. It relies on the private sector for that, and when it goes down, the government goes down,” Ridge said. “National security and economic security are intertwined.”

Attackers, of course, are well aware of that fact, and know that going after a country’s power grid, utilities or other vital systems is a quick path to crippling the country’s economy. Those kinds of attacks, Ridge said, could be precursors to armed conflicts in the near future or part of an ongoing war.

“What if at some point someone infiltrates the power grid and plants malware? Is that a precursor to a larger attack? How do you respond, kinetically or electronically? What’s the threshold for response?” he said.

HD Manufacturer LaCie Admits Yearlong Data Breach

Tue, 04/15/2014 - 14:21

The French computer hardware company LaCie, perhaps best known for its external hard drives, announced this week that it fell victim to a data breach that may have put at risk the sensitive information of anyone who purchased a product off its website during the last year.

According to an incident notification posted today, an attacker used malware to infiltrate LaCie’s eCommerce website for almost a year and, in turn, glean customer information. Attackers had access from March 27, 2013 to March 10, 2014, but it wasn’t until last Friday that LaCie began to inform customers at risk.

In addition to its ubiquitous rugged orange external hard drives, LaCie, which is headquartered in Paris, also manufactures RAID arrays, flash drives, and optical drives.

The announcement warns that anyone who purchased an external hard drive or any form of LaCie hardware off of the company’s website during that time period may have had their data stolen. That information includes customers’ names, addresses, email addresses, as well as payment card information and card expiration dates.

While the company has hired a “leading forensic investigation firm” to continue looking into the technicalities of the breach – how many are affected, etc. – for the time being LaCie has suspended all online sales until they can “transition to a provider that specializes in secure payment processing services.”

A report from KrebsonSecurity.com last month speculated that the company’s storefront may have been hijacked by hackers using security vulnerabilities in Adobe’s ColdFusion development platform.

According to Krebs, LaCie’s eCommerce site was one of nearly 50 eCommerce websites spotted ensnared in a nasty ColdFusion botnet that was leaking consumer credit card information. The security reporter previously surmised that the hackers behind the botnet are the same attackers behind last year’s Adobe breach that leaked source code for Reader and ColdFusion, not to mention the personal information of millions of its customers.

At the time, Clive Over, a spokesman for Seagate, which bought LaCie in 2012, told Krebs the company was not “aware that company or third party information was improperly accessed” when informed that one of its servers had been targeted and breached in 2013. Over went on to say that LaCie was “working with third party experts to do a deeper forensic analysis,” the same analysis that would eventually yield the breach’s discovery.

*Image via fncll‘s Flickr photostream, Creative Commons

Programming Language Security Examined

Tue, 04/15/2014 - 12:08

When building an enterprise Web application, the most foundational decision your developers make will be the language in which the app is written. But is there a barometer that measures the security of the programming languages developers have at their disposal, or are comfortable with, versus other options?

WhiteHat Security, an application security vendor, released its 2014 Website Security Statistics Report today that measures the security of programming languages and development frameworks and examines not only what classes of vulnerabilities they’re most susceptible to, but also how long it takes to remediate bugs and whether there’s a real difference that would impact a business decision as to which language to use.

The report is based on vulnerability assessments conducted against 30,000 customer websites using a proprietary scanner, and the results point toward negligible differences in the relative security of languages such as .NET, Java, PHP, ASP, ColdFusion and Perl. Those six shared relatively similar mean numbers of vulnerabilities, and problems such as SQL injection and cross-site scripting vulnerabilities remain pervasive.

“Ultimately, what we found was that across the board there were no significant differences between languages,” said Gabriel Gumbs, lead researcher on WhiteHat’s Website Security Statistics Report. “There are some peaks and valleys with regard to vulnerability classes and remediation rates, but no one stood out as a clear winner as more secure.”

One conclusion, therefore, is that web application security woes, including the chronic existence of SQL injection and cross-site scripting vulnerabilities in code, are a human issue.

“A lot of it is the human factor,” Gumbs said. Static and dynamic testing controls are available to developers that test code as it is being developed as well as in production. But they have to be used throughout the development lifecycle, Gumbs said. “During the design phase of an app, security implications must be taken into account.”

As for the numbers compiled by WhiteHat, .NET and Java are the most widely used languages, accounting for a combined 53 percent, while the creaky ASP is next at 16 percent. SQL injection vulnerabilities were especially prevalent in ColdFusion sites, while Perl sites were found most vulnerable to cross-site scripting. ColdFusion sites, however, had the best overall remediation rates, while PHP sites had one of the lowest.

Cross-site scripting was the most prevalent vulnerability in five of the six languages, except for .NET, where information leakage flaws were highest. It is worst in Perl (67 percent of sites) and Java (57 percent). Content spoofing, SQL injection and cross-site request forgery round out the top five most prevalent vulnerabilities.

“The education is out there and the frameworks are out there [to address cross-site scripting]. My best guess is that it’s a combination of the speed at which companies are implementing new functionality and exposing it to the business that is driving that number,” Gumbs said. “We don’t know what it will take to tip the scales and make those numbers go down. It may be something we have to live with. If we can accept that and then approach how we address that based on risk assessments, it may drive down the number.”

Looking at specific industries, in particular those that are heavily regulated such as financials and health care, those don’t show a noticeable difference in either the number of vulnerabilities present or remediation rates. This is in spite of over-arching regulations such as PCI-DSS protecting credit cards and HIPAA protecting health care that mandate a certain minimum standard. The problem is that many organizations that are regulated do what it takes to reach that minimum standard, and not much else.

“What we found is that industries with more regulations are insecure because they fix only the vulnerabilities that the regulation calls for,” Gumbs said. “If PCI says fix these five vulnerabilities, that’s all they fixed. It proved to me they were more insecure than the other industries because they put that effort into compliance, not security.”

Heartbleed Saga Escalates With Real Attacks, Stolen Private Keys

Mon, 04/14/2014 - 15:34

Heartbleed went from a dangerous Internet-wide vulnerability over the weekend to one with real exploits, real victims and real problems for private SSL server keys.

Mumsnet, a U.K.-based parenting website, said it was victimized by hackers exploiting the vulnerability in OpenSSL to steal passwords, as was the Canada Revenue Agency, which reported the loss of social insurance numbers for 900 citizens, according to a BBC report today.

Hackers were using the stolen Mumsnet credentials to post messages to the site on Friday, while the CRA said hackers were busy exploiting Heartbleed during a six-hour period before its systems were patched.

While experts warned it was possible from the outset to steal credentials and other sensitive information in plaintext, it was thought that stealing private SSL keys that would provide unfettered access to web traffic emanating from a server was a much more difficult proposition.

Starting on Friday, however, four researchers had in fact managed to do just that.

Russian engineer Fedor Indutny was the first to break the so-called CloudFlare Challenge set up by web traffic optimization vendor CloudFlare. The company had set up a nginx server running an unpatched version of OpenSSL and issued a challenge to researchers to steal the private SSL key.

Indutny replayed his attack more than two million times before he was able to steal the key, which he submitted at 7:22 Eastern time on Friday, less than an hour before Ilkka Mattila of NCSC-FI submitted another valid key using just 100,000 requests.

Since then, two more submissions were confirmed on Saturday, one by Rubin Xu, a PhD student at Cambridge University, and another by researcher Ben Murphy.

The vulnerability is present in OpenSSL versions 1.0.1 to 1.0.1f and it allows attackers to snag 64KB of memory per request per server using its heartbeat function. The bits of memory can leak anything from user names and passwords to private keys if the attack is repeated often enough. A number of large sites, including Yahoo, LastPass and many others, were vulnerable but quickly patched. Once the vulnerability is patched, old certificates must be revoked and new ones validated and installed.
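Because the affected range is so narrow, the basic triage check is simple. The sketch below is illustrative only (it is not from any of the reports cited here) and assumes a bare OpenSSL version string: 1.0.1 through 1.0.1f are vulnerable, 1.0.1g and later carry the fix, and the 1.0.0 and 0.9.8 branches never had the heartbeat code.

    def is_heartbleed_vulnerable(version: str) -> bool:
        """True for OpenSSL 1.0.1 through 1.0.1f; assumes a bare version string."""
        if not version.startswith("1.0.1"):
            return False  # heartbeat support (and the bug) arrived in 1.0.1
        suffix = version[len("1.0.1"):]
        # "" is plain 1.0.1; letter releases a through f are vulnerable, g and later are fixed
        return suffix == "" or (len(suffix) == 1 and "a" <= suffix <= "f")

    for v in ("1.0.1e", "1.0.1g", "1.0.0m", "0.9.8y"):
        print(v, is_heartbleed_vulnerable(v))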

Users, meanwhile, would need to change their passwords for accounts on these sites, but only after the patch is applied, or their new credentials could be stolen as well. Worse, the attacks don’t show up in logs and leave no trace behind. Therefore, it’s impossible to know whether a private key has been stolen and malicious sites signed by a legitimate certificate key, for example, would appear benign.

The story took a strange twist Friday night when Bloomberg reported that the U.S. National Security Agency had been exploiting Heartbleed for two years, according to a pair of unnamed sources in the article. A bug such as Heartbleed could simplify surveillance efforts for the agency against particular targets, but given the arsenal of attacks at its disposal, the NSA might have more efficient means with which to gather personal data on targets.

To that end, the agency via the Office of the Director of National Intelligence issued a rare denial Friday night. The memo said the NSA was not aware of the flaw in OpenSSL. “Reports that say otherwise are wrong,” it said.

The DNI’s office also said the Federal government uses OpenSSL to encrypt a number of government sites and services and would have reported the vulnerability had it discovered it.

“When Federal agencies discover a new vulnerability in commercial and open source software – a so-called ‘Zero day’ vulnerability because the developers of the vulnerable software have had zero days to fix it – it is in the national interest to responsibly disclose the vulnerability rather than to hold it for an investigative or intelligence purpose,” the DNI said.

Meanwhile, a report in the New York Times on Saturday said that President Obama has given the NSA leeway in using bugs such as Heartbleed where there is a “clear national security or law enforcement need.” The NSA has thrived on such loopholes, according to numerous leaks made public in the Snowden documents. The president’s decision was made in January, the Times article said, after he addressed the nation on the government’s surveillance of Americans.

The U.S. government, it was made public in September, had bought a subscription to a zero-day exploit service sold by VUPEN of France.

The contract, made public through a Freedom of Information Act request by MuckRock, an open government project that publishes a variety of such documents, shows that the NSA bought VUPEN’s services on Sept. 14, 2012. The NSA contract is for a one-year subscription to the company’s “binary analysis and exploits service.”

So Far, So Good for TrueCrypt: Initial Audit Phase Turns Up No Backdoors

Mon, 04/14/2014 - 13:42

An initial audit of the popular open source encryption software TrueCrypt turned up fewer than a dozen vulnerabilities, none of which so far point toward a backdoor surreptitiously inserted into the codebase.

A report on the first phase of the audit was released today by iSEC Partners, which was contracted by the Open Crypto Audit Project (OCAP), a grassroots effort that not only conducted a successful fundraising effort to initiate the audit, but raised important questions about the integrity of the software.

TrueCrypt is praised not only for being free and open source encryption software, but also for being easy to install, configure and use. Given that it has been downloaded upwards of 30 million times, it stood to reason that it could be a prime target for manipulation by intelligence agencies that have been accused of subverting other widely used software packages, commercial and open source.

The first phase of the audit focused on the TrueCrypt bootloader and Windows kernel driver; architecture and code reviews were performed, as well as penetration tests including fuzzing interfaces, said Kenneth White, senior security engineer at Social & Scientific Systems. The second phase of the audit will look at whether the various encryption cipher suites, random number generators and critical key algorithms have been implemented correctly.

“With Phase II, we will be conducting a formal cryptanalysis and looking at these issues,” White said. “In security engineering, we never say a system is ‘unbreakable,’ but rather, ‘we looked at X, Y, and Z and couldn’t find a vulnerability.’

“But yes, I would say there is certainly an increased level of confidence in TrueCrypt,” White said.

Among the still-outstanding questions publicly raised by OCAP, which was kicked off by White and Johns Hopkins professor and crypto expert Matthew Green, are those revolving around the Windows version of TrueCrypt. Since those builds are available only as downloadable binaries, they cannot be compared to the original source code, yet they behave differently than versions compiled from source code. There were also concerns about the license governing TrueCrypt use, as well as the anonymous nature of the development group behind the software.

iSEC Partners’ report gave TrueCrypt a relatively clean bill of health.

“iSEC did not identify any issues considered ‘high severity’ during this testing. iSEC found no evidence of backdoors or intentional flaws. Several weaknesses and common kernel vulnerabilities were identified, including kernel pointer disclosure, but none of them appeared to present immediate exploitation vectors,” iSEC’s Tom Ritter said in a statement. “All identified findings appeared accidental.”

Ritter said iSEC recommends improvements be made to the quality of the code in the software and that the build process be updated to rely on tools with a “trustworthy provenance.”

“In sum, while TrueCrypt does not have the most polished programming style, there is nothing immediately dangerous to report,” Ritter said.

Specifically, iSEC security engineers Andreas Junestam and Nicolas Guigo audited the bootloader and Windows kernel driver in TrueCrypt 7.1a. The report says iSEC performed hands-on testing against binaries available from the TrueCrypt download page and binaries compiled from source code. Work was completed Feb. 14.

The engineers found 11 vulnerabilities: four rated medium severity, four low severity, and three rated informational, having to do with defense in depth.

“Overall, the source code for both the bootloader and the Windows kernel driver did not meet expected standards for secure code,” the report said. “This includes issues such as lack of comments, use of insecure or deprecated functions, inconsistent variable types, and so forth.”

The team dug deeper into its recommendations of updating the Windows build environment and code quality improvements, specifically replacing outdated tools and software packages, some of which date back to the early 1990s.

“Using antiquated and unsupported build tools introduces multiple risks including: unsigned tools that could be maliciously modified, unknown or un-patched security vulnerabilities in the tools themselves, and weaker or missing implementations of modern protection mechanisms such as DEP and ASLR,” the team wrote in its report. “Once the build environment has been updated, the team should consider rebuilding all binaries with all security features fully enabled.”

They added that “lax” quality standards make the source code difficult to review and maintain, impeding vulnerability assessments.

Of the four medium-severity bugs uncovered in the audit, the most serious involves the key used to encrypt the TrueCrypt Volume Header. It is derived using PBKDF2, a standard algorithm, with an iteration count that’s too small to prevent password-guessing attacks.

“TrueCrypt relies on what’s known as a PBKDF2 function as a way to ‘stretch’ a user’s password or master key, and there is concern that it could have been stronger than the 1,000 or 2,000 iterations it uses currently,” White said. “The TrueCrypt developers’ position is that the current values are a reasonable tradeoff of protection vs. processing delay, and that if one uses a weak password, a high-count PBKDF2 hash won’t offer much more than a false sense of security.”
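To make that tradeoff concrete, here is a small, purely illustrative timing sketch using Python’s standard hashlib PBKDF2 implementation, not TrueCrypt’s own code: the iteration count is the knob that multiplies both the user’s unlock delay and the attacker’s cost per password guess. The password, salt and counts below are made up for the example.

    import hashlib
    import os
    import time

    password = b"correct horse battery staple"   # placeholder password
    salt = os.urandom(64)                         # placeholder salt

    # Counts in the 1,000-2,000 range reflect what the audit discusses;
    # 100,000 is shown only to illustrate how the cost scales.
    for iterations in (1000, 2000, 100000):
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha512", password, salt, iterations, dklen=64)
        elapsed = time.perf_counter() - start
        print(f"{iterations:>7} iterations: {elapsed:.4f} s per derivation")

An offline attacker pays the same per-guess cost, which is why a higher count helps strong passphrases far more than weak ones, the point the developers make above.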

White said the OCAP technical advisors are also concerned about TrueCrypt’s security model, which offers “narrowly restricted privacy guarantees.”

“So, for example, if you are not running whole volume (system disk) encryption, there are many known exploits to recover plaintext data, including decryption keys,” White said, pointing out that Microsoft’s BitLocker software and PGP, for example, have similar attack paths.

“But in the case of TrueCrypt, whole volume disk encryption is only available for the Windows port, and there exists today point-and-click forensic tools that can be purchased for a few hundred dollars that can easily decrypt data from a running machine with any of these packages, TrueCrypt included,” White said. “I have a feeling that while most in the security industry understand this, it is probably worth emphasizing to a broader audience: on the vast majority of machines that use file or disk encryption, if the underlying operating system or hardware can be compromised, then so too can the encryption software.”

With a Warning, FTC Approves WhatsApp, Facebook Union

Mon, 04/14/2014 - 12:54

Facebook’s acquisition of messaging application WhatsApp was approved by the Federal Trade Commission late last week, but not without a stern notice from the agency, which warned that it would be keeping a watchful eye on the two companies going forward.

In a letter addressed to officials at Facebook and WhatsApp on Thursday, the FTC’s Bureau of Consumer Protection Director Jessica Rich made it clear that the agency would continue to ensure the companies honor their promises to users.

“WhatsApp has made a number of promises about the limited nature of the data it collects, maintains, and shares with third parties–promises that exceed the protections currently promised to Facebook users,” Rich wrote. “We want to make clear that, regardless of the acquisition, WhatsApp must continue to honor these promises to consumers.”

The privacy policy for WhatsApp, the popular app that reportedly handles 50 billion messages between users daily, states that user information will not be used for advertising purposes and won’t be sold to a third party. The FTC’s letter (.PDF) claims this is something that shouldn’t be nullified by the Facebook purchase. The FTC adds that if Facebook were to go ahead and share any of its newly acquired WhatsApp information, it would violate its privacy promises, not to mention an order the agency has placed on the social network.

That order basically makes sure Facebook doesn’t misrepresent the way it handles users’ privacy or the security of consumers’ personal information.

The letter, which was addressed to Facebook’s Chief Privacy Officer Erin Egan and WhatsApp’s General Counsel Anne Hoge, goes on to state that data collecting changes could be made as long as they get users’ “affirmative consent.” If users don’t agree with new procedures they should be granted the opportunity to opt out or at least understand “that they have an opportunity to stop using the WhatsApp service.”

“Failure to take these steps could constitute a violation of Section 5 and/or the FTC’s order against Facebook,” the letter states.

When the $19 billion acquisition was first announced in February, privacy advocates were rattled that Facebook would be able to mine WhatsApp’s vast reservoir of user information and convert that into ad revenue without the users’ consent.

Organizations such as the Center for Digital Democracy (CDD) and the Electronic Privacy Information Center (EPIC) both decried the move in March, requesting the FTC look into it. Jan Koum, WhatsApp’s founder, later responded with a blog post, “Setting the record straight,” that insisted both firms would “remain autonomous and operate independently.”

*Photo via alvy‘s Flickr photostream, Creative Commons

Arbitrary Code Execution Bug in Android Reader

Mon, 04/14/2014 - 11:04

The Android variety of Adobe Reader reportedly contains a vulnerability that could give an attacker the ability to execute arbitrary code on devices running Google’s mobile operating system.

The problem arises from the fact that Adobe Reader for Android exposes a number of insecure JavaScript interfaces, according to security researcher Yorick Koster, who submitted the details of the bug to the Full Disclosure mailing list.

In order to exploit the security vulnerability, an attacker would have to compel his victim to open a maliciously crafted PDF file. Successful exploitation could then give the attacker the ability to execute arbitrary Java code and, in turn, compromise Reader documents and other files stored on the device’s SD card.

Adobe verified the existence of the vulnerability in version 11.1.3 of Reader for Android and has provided a fix for it with version 11.2.0.

On the point of exploitation, the specially crafted PDF file required to exploit this vulnerability would have to contain JavaScript that runs when the targeted user interacts with the PDF file in question. An attacker could deploy any of the JavaScript objects included in Koster’s report to obtain access to the public reflection APIs inherited by those objects. It is these public reflection APIs that the attacker can abuse to run arbitrary code.

In other Android-related news, Google announced late last week that it would bolster its existing application regulation mechanism with a new feature that will continually monitor installed Android applications to ensure that they aren’t acting maliciously or performing unwanted actions.

Stealing Private SSL Keys Using Heartbleed Difficult, Not Impossible

Fri, 04/11/2014 - 13:49

Heartbleed can be patched, and passwords can be changed. But can you steal private keys by taking advantage of the Internet-wide bug in OpenSSL?

Yes, but it’s difficult.

Stealing private SSL server keys is the pot of gold at the end of the rainbow for criminal hackers and intelligence agencies alike. Private keys bring unfettered access to Web traffic, and you can be sure that if someone has been able to steal them, they’re not going to crow about it on Twitter or Full Disclosure.

In the meantime, companies running the vulnerable version of OpenSSL in their infrastructure need to assess the risks involved, and then decide whether it’s worth their time and resources to revoke existing certs and reissue new ones. And do you shut down services in the meantime? Again, another tough call some companies would have to make.

“The vulnerability has been out there for two years, so we don’t know who has been on it. But if someone has figured out how to steal private keys, they’re not going to go public about it,” said Marc Gaffan, cofounder of Incapsula.

Incapsula, an application delivery company that offers a range of web security services, patched its infrastructure and is in the process of replacing every certificate on behalf of its customers, Gaffan said, adding that other companies with a similar zero tolerance for risk will do the same.

Stealing a private key using the Heartbleed bug, however, is easier said than done. Researchers at CloudFlare said it is possible to steal private keys, but to date they have been unable to successfully use Heartbleed to do so.

“Note, that is not the same as saying it is impossible to use Heartbleed to get private keys. We do not yet feel comfortable saying that,” said CloudFlare’s Nick Sullivan. “However, if it is possible, it is at a minimum very hard. And we have reason to believe based on the data structures used by OpenSSL and the modified version of NGINX that we use, that it may in fact be impossible.”

The Heartbleed vulnerability enables an attacker to retrieve up to 64KB of memory per request from a website running vulnerable versions of OpenSSL. Attackers that are able to replay the attack could steal sensitive data from a server, including credentials. Finding private keys is much more labor intensive and is dependent on multiple variables, including the timing of attacks. Incapsula’s Gaffan said a private key could be in memory 10 seconds before an attacker arrives, and gone when he’s there.

“It’s like looking for a needle in a haystack; it’s not always there and it’s not always deterministic where the needle, or private key, may be,” said Incapsula’s Gaffan. “Different scenarios cause memory to shape the way it does; that’s why there’s the potential for the private key to be there.”

If the heartbeat feature is enabled in OpenSSL, attacks against the Heartbleed vulnerability are undetectable, experts say.

“The request is a naïve request. It will not appear in a log as an attempt and it doesn’t leave a trace,” Gaffan said.

Mitigating Heartbleed is a process, starting with applying the patch to OpenSSL before revoking old certificates and installing new ones. Users, meanwhile, will likely have to change their passwords for a number of online services they use, but shouldn’t do so until they’re sure the service has done its part with regard to patching and updating certificates.

“Users need to be aware that this is going to be a longtail issue,” said Trustwave security manager John Miller. “There are bound to be more stories about this in the weeks and months to come.”

The Internet-wide implications of Heartbleed are still being fathomed. OpenSSL is likely to be running in any number of home networking gear, smartphone software and applications, and industrial control and SCADA systems.

“OpenSSL is probably less prevalent in ICS (since many don’t use any encryption at all).  ICS backbone servers may be affected since those are more likely to use OpenSSL,” said Chris Sistrunk, senior consultant with Mandiant. “The risks of the Heartbleed vulnerability pale in comparison to the general fragility and lack of security features like authentication and encryption.  Availability is still king and confidentiality is the least important.  For those who do have OpenSSL, the patch may or may not be rolled out right away depending on the type of ICS.  (Do we have to interrupt our batch in process etc to patch?)”

Adam Crain, a security researcher and founder of Automatak, cautioned that TLS is used in industrial control systems to wrap insecure protocols such as DNP3.

“Attackers can now read memory from these servers/clients. Furthermore, people sometimes use TLS-wrapped DNP3/ICCP between entities over the internet,” Crain said. “A load-based DoS was always possible on these endpoints, but now it’s possible that encryption keys or other credentials could be lifted to infiltrate these systems.”

Threatpost News Wrap, April 11, 2014

Fri, 04/11/2014 - 12:06

Dennis Fisher and Mike Mimoso discuss–what else–the OpenSSL Heartbleed vulnerability and the doings at the Source Boston conference this week.

http://threatpost.com/files/2014/04/digital_underground_1491.mp3

BlackBerry, Cisco Products Vulnerable to OpenSSL Bug

Fri, 04/11/2014 - 07:37

Vendors are continuing to check their products for potential effects from the OpenSSL heartbleed vulnerability, and both Cisco and BlackBerry have found that a variety of their products contain a vulnerable version of the software.

BlackBerry on Thursday said that several of its software products are vulnerable to the OpenSSL bug, but that its phones and devices are not affected. The company said its BBM for iOS and Android, Secure Workspace for iOS and Android and BlackBerry Link for Windows and OS X all are vulnerable to the OpenSSL flaw.

“BlackBerry is currently investigating the customer  impact of the recently announced OpenSSL vulnerability. BlackBerry customers can rest assured that while BlackBerry continues to investigate, we have determined that BlackBerry smartphones, BlackBerry Enterprise Server 5 and BlackBerry Enterprise Service 10 are not affected and are fully protected from the OpenSSL issue. A list of known affected and unaffected products is supplied in this notice, and may be updated as we complete our investigation,” the company’s advisory says.

Meanwhile, the list of Cisco products affected by the heartbleed vulnerability is much longer.

The company said in its advisory that many of its products, including its TelePresence Video Communications Server, WebEx Meetings Server, many of its Unified IP phones and several others, are vulnerable. Cisco also said that a far larger list of products are potentially vulnerable and are under investigation.

Cisco’s Sourcefire Vulnerability Research Team did some testing on the vulnerability and found that on vulnerable systems it could retrieve usernames, passwords and SSL certificates.

“To detect this vulnerability we use detection_filter (“threshold”) rules to detect too many inbound heartbeat requests, which would be indicative of someone trying to read arbitrary blocks of data. Since OpenSSL uses hardcoded values that normally result in a 61 byte heartbeat message size, we also use rules to detect outbound heartbeat responses that are significantly above this size. Note: you can’t simply compare the TLS record size with the heartbeat payload size since the heartbeat message (including the indicated payload size) is encrypted,” Brandon Stultz of Cisco wrote in a blog post.
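Stated as a heuristic, the approach Stultz describes is: alert on sources sending an unusual number of heartbeat requests, and alert on outbound heartbeat records noticeably larger than the roughly 61-byte responses a normal OpenSSL heartbeat produces. The sketch below is a loose Python restatement of that logic, not Cisco’s actual Snort rules; the record dictionaries it consumes are an assumed format produced by some upstream TLS parser, and the threshold value is arbitrary.

    from collections import Counter

    NORMAL_HEARTBEAT_BYTES = 61   # typical size of a benign heartbeat record
    REQUEST_THRESHOLD = 5         # inbound heartbeats from one source before alerting

    def heartbeat_alerts(records):
        """records: iterable of dicts like
        {"src": "10.0.0.1", "direction": "in" or "out", "type": "heartbeat", "size": 61}
        (an assumed input shape, produced by a separate TLS parser)."""
        inbound = Counter()
        alerts = []
        for rec in records:
            if rec.get("type") != "heartbeat":
                continue
            if rec["direction"] == "in":
                inbound[rec["src"]] += 1
                if inbound[rec["src"]] == REQUEST_THRESHOLD + 1:
                    alerts.append(f"excessive heartbeat requests from {rec['src']}")
            elif rec["size"] > NORMAL_HEARTBEAT_BYTES:
                alerts.append(f"oversized heartbeat response ({rec['size']} bytes) toward {rec['src']}")
        return alerts

Note that, as Stultz points out, the heartbeat payload is encrypted, so the check keys on the size of the whole TLS record rather than on the payload length the attacker claims.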

 

Cyber Intelligence Asia 2014: CERTs and Industrial Security

Thu, 04/10/2014 - 20:47

In March I spoke at Cyber Intelligence Asia 2014, where CERTs from most Asian countries were represented.

The fact is that only a few CERTs are now dealing in some way with industrial security, ICS and SCADA matters. One of the best is the CERT of Japan, which is doing a great job here; Jack YS Lin provided a nice overview of its activities and experience. Japan has a national ICS test bed, somewhat similar to Idaho National Lab, and is the only country besides the US that has an ISASecure certification entity. However, not all Japanese CNIs (Critical National Infrastructures) or even industrial automation vendors are doing enough in the security space.

The other countries seem to me much less advanced than Japan in understanding the ICS security domain and its problems, and in pursuing country-wide enhancements.

During the conference, we discussed the government’s role in enhancing critical infrastructure protection, and found that it is not about imposing more compliance requirements on CNI operators (we all know that compliance is not security). Instead, it is more about educating and creating actionable awareness through engaging techniques and tools, so CNI operators become involved in developing their own solutions for strengthening security.

My personal take is that the regulator’s role is mainly to do what the business/market won’t do by itself. So in my opinion, the list includes (but is surely not limited to):

  • Enhancing intelligence & law enforcement in the cyber space;

  • Following both short and long-term security strategies, targeted both for CNI operators and automation vendors;

  • Engaging CNI management in security decisions by raising awareness in tangible form, and not just developing cybersecurity frameworks;

  • Imposing the need to pass Cyber Resilience tests at ICS commissioning;

  • Including cyber security as a mandatory part of industrial safety/liability programs;

  • Investing in CNI professional training and certifications;

  • Creating ICS-CERTs, ICS honeypots and industrial cyber drills.

PS: As always, people at Cyber Intelligence Asia enjoyed practicing with the Kaspersky Industrial Protection Simulation. The results were moderate compared with those of other security professionals we have played with in North America and Europe, which might be correlated with the lack of understanding of ICS specifics noted above. I hope, however, that things will change sooner rather than later.

Does your country have an ICS CERT or ICS activity in its CERT already? What’s working best in favor of Industrial Security in your area?

Cisco Patches Vulnerabilities, Looking Into Heartbleed Impact

Thu, 04/10/2014 - 16:32

Cisco patched four different vulnerabilities this week in one of its core operating systems and is now beginning to look into the potential impact of this week’s Heartbleed vulnerability on at least 60 of its other products.

The patches, released yesterday, fix problems in the company’s Adaptive Security Appliance (ASA) software that could have led to privilege escalation, authentication bypass, and opened products running ASA to a denial of service attack. ASA is a family of security devices, firewalls and other apps.

An attacker could combine the first two vulnerabilities – a privilege escalation vulnerability in its Adaptive Security Device Manager (ASDM) and an SSL VPN privilege escalation vulnerability – to gain administrative access to the affected system.

Another VPN bug, an authentication bypass vulnerability, could allow an attacker to access the internal network via SSL VPN.

The last and perhaps most serious bug affects ASA’s handling of the Session Initiation Protocol (SIP). Dug up by researchers from Trustwave’s SpiderLabs and Dell SecureWorks, the bug could allow an attacker to exhaust the system’s memory. If SIP inspection is enabled – and it is by default – an attacker could send handcrafted packets to the system, make it unstable, force it to reload, and trigger a denial-of-service (DoS) condition.

According to a security advisory the company posted Wednesday, a series of firewalls, routers and other Cisco appliances that run ASA are affected. The full list can be found here.

Cisco makes a point of noting that ASA itself, on the whole, is not among its products affected by this week’s much-buzzed-about OpenSSL Heartbleed vulnerability.

Cisco does acknowledge, however, that its ASDM product – which comes bundled with ASA – may be affected by the vulnerability. The company is reportedly in the early stages of evaluating its entire product line to determine Heartbleed’s potential impact.

Ultimately, however, when it comes to vulnerable software, it sounds as if it’s not going to be an “is it or isn’t it?” question but a “how many?” question.

In an advisory yesterday the company claimed that “multiple” Cisco products incorporate a version of the OpenSSL package that’s affected by Heartbleed, something that could “allow an unauthenticated, remote attacker to retrieve memory in chunks of 64 kilobytes from a connected client or server.”
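
That 64-kilobyte figure falls directly out of the heartbeat message format defined in RFC 6520: the payload length is a 16-bit field, so a single malformed request can claim at most 65,535 bytes, which a vulnerable OpenSSL build will read back out of adjacent process memory. The following is a minimal, illustrative Python sketch of the record layout only; the function name and the example record are mine, not Cisco’s or OpenSSL’s.

    import struct

    TLS_HEARTBEAT = 0x18          # TLS record content type for heartbeat (RFC 6520)
    MAX_CLAIMED_PAYLOAD = 0xFFFF  # 16-bit length field, hence the ~64 KB cap per request

    def describe_heartbeat_record(record):
        """Parse the fixed headers of a TLS heartbeat record and report how much
        payload the sender claims versus how much the record actually carries."""
        content_type, version, record_len = struct.unpack(">BHH", record[:5])
        if content_type != TLS_HEARTBEAT:
            return "not a heartbeat record"
        hb_type, claimed_len = struct.unpack(">BH", record[5:8])
        actual_payload = record_len - 3   # record length minus the 3-byte heartbeat header
        return ("heartbeat type=%d (1=request, 2=response), claims %d payload bytes, "
                "actually carries %d" % (hb_type, claimed_len, actual_payload))

    # A request that claims the maximum payload but carries none -- the mismatch
    # a vulnerable server fails to check before echoing memory back to the peer.
    malformed = struct.pack(">BHHBH", TLS_HEARTBEAT, 0x0302, 3, 1, MAX_CLAIMED_PAYLOAD)
    print(describe_heartbeat_record(malformed))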

According to a list updated today, only 25 or so products have been confirmed as unaffected by Heartbleed, while 11 definitely are affected. Cisco is still looking into an extensive list of remaining products, more than 60 in all, that may or may not be affected, and it plans to remediate confirmed issues in the near future by releasing updates, along with workarounds where possible.

The Internet-wide Heartbleed bug stems from the way OpenSSL handles the heartbeat extension for TLS. It was disclosed Monday, but speculation is now rampant that it may have been exploited as far back as last November.

Heartbleed: A Bug With A Past and A Future

Thu, 04/10/2014 - 15:16

Bruce Schneier stood on the Source Boston keynote stage yesterday and used the word “ginormous” to describe the severity of the OpenSSL Heartbleed bug.

“My guess is that when heartbleed became public, the top 20 governments in the world started exploiting it immediately,” Schneier said.

That’s assuming, of course, that those top 20 governments didn’t already have Heartbleed and haven’t been exploiting it all along. The vulnerability in OpenSSL is an Internet-wide bug, one that has kept a lot of people busy over the last two days patching servers, revoking certificates, issuing new ones, and changing a whole lot of passwords. And as Schneier said, governments may be slow to adopt new technologies, but when they do, they generally have the resources to do it well.

So is it equally ginormously dangerous to assume that the NSA, the Chinese or the hacktivist group of your choice hasn’t been exploiting Heartbleed since close to the time it was introduced into OpenSSL on New Year’s Eve 2011?

Ars Technica reported yesterday that MediaMonks of the Netherlands had evidence of exploit attempts going back to last November. Electronic Frontier Foundation technology projects director Peter Eckersley said inbound packets to MediaMonks contained TCP payload bytes that match those used by a proof-of-concept exploit.

Eckersley said the source IP addresses for those bytes belong to a botnet that’s been recording Freenode and other IRC activity.

“This is an activity that makes a little more sense for intelligence agencies than for commercial or lifestyle malware developers,” Eckersley said.

The EFF is asking network operators to check logs not only for the IP addresses in question, but for the TCP payload.
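
In practice, that payload check amounts to looking for inbound heartbeat requests that declare more payload than they actually carry, which is the shape of the public proof-of-concept exploits. Below is a rough, hypothetical Python sketch of that heuristic; it scans raw captured payload bytes for plausible heartbeat record headers rather than doing full TLS stream reassembly, and the function name and sample bytes are mine, not the EFF’s.

    import struct

    def find_suspect_heartbeats(payload):
        """Scan a blob of captured TCP payload for TLS heartbeat requests that
        claim more payload than the record carries (a Heartbleed-style probe)."""
        hits = []
        offset = payload.find(b"\x18\x03")   # heartbeat record header for TLS 1.x
        while offset != -1 and offset + 8 <= len(payload):
            _, _, record_len = struct.unpack(">BHH", payload[offset:offset + 5])
            hb_type, claimed_len = struct.unpack(">BH", payload[offset + 5:offset + 8])
            if hb_type == 1 and claimed_len > record_len - 3:
                hits.append((offset, claimed_len))
            offset = payload.find(b"\x18\x03", offset + 1)
        return hits

    # Usage sketch: feed it inbound payload bytes pulled from a packet capture.
    sample = b"noise" + struct.pack(">BHHBH", 0x18, 0x0302, 3, 1, 0x4000)
    print(find_suspect_heartbeats(sample))   # [(5, 16384)]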

“A lot of the narratives around heartbleed have viewed this bug through a worst-case lens, supposing that it might have been used for some time, and that there might be tricks to obtain private keys somewhat reliably with it,” Eckersley said. “At least the first half of that scenario is starting to look likely.”

Heartbleed is so dangerous not only because it’s present everywhere OpenSSL 1.0.1 through 1.0.1f is deployed, but also because attacks leave no trace. Everyone must assume they’re compromised. As expert Dan Kaminsky wrote today: “It’s a significant change, to assume the worst has already occurred.”
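
As a first bit of triage, administrators can at least confirm which OpenSSL build their own tooling links against. A quick, hedged check is sketched below; it only inspects the OpenSSL that Python itself was built with, and version strings alone are not conclusive, since many distributions backport the heartbeat fix without changing the version letter.

    import ssl

    AFFECTED = {"1.0.1", "1.0.1a", "1.0.1b", "1.0.1c", "1.0.1d", "1.0.1e", "1.0.1f"}

    # ssl.OPENSSL_VERSION looks like "OpenSSL 1.0.1e 11 Feb 2013"
    build = ssl.OPENSSL_VERSION.split()[1]
    if build in AFFECTED:
        print("OpenSSL %s is in the affected 1.0.1-1.0.1f range; "
              "verify whether the heartbeat fix has been backported." % build)
    else:
        print("OpenSSL %s is outside the affected range." % build)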

Kaminsky’s comment appears in a wide-ranging article on Heartbleed, and its most salient point is that while OpenSSL may be the most prevalent TLS library, and it stands to reason that it is among the technologies most coveted for compromise by intelligence agencies, it is run by only a handful of competent and undercompensated people. A Wall Street Journal article points out that the organization funding OpenSSL development received less than $1 million from donations and consulting contracts.

“We are building the most important technologies for the global economy on shockingly underfunded infrastructure,” Kaminsky said. “We are truly living through Code in the Age of Cholera.”

Johns Hopkins professor and crypto expert Matthew Green noted that OpenSSL supports more than 80 platforms and reviews code contributions and changes from numerous sources, all with a fairly impressive record of holding together, at least until this week.

“Maybe in the midst of patching their servers,” Green wrote this week, “some of the big companies that use OpenSSL will think of tossing them some real no-strings-attached funding so they can keep doing their job.”

Google Adds Continuous Monitoring of Android Apps

Thu, 04/10/2014 - 14:41

Google is adding a new security feature to Android designed to scan installed apps on a device and ensure that they’re not acting maliciously or taking unwanted actions. The system is built on Google’s existing app-verification model, which warns users if there’s a potential problem with an app they’re installing.

The addition to Android’s security system is meant to augment the Bouncer tool that Google uses to scan apps in the Play store for malicious functionality. That feature has been in place since 2012 and has enabled the company to help stem the tide of malicious apps making their way into the app store and onto users’ devices. Bouncer looks for known malware and other malicious behavior.

Android also has a feature that will verify apps during installation and may block them or warn the user of a problem.

Now, Google is adding the ability for Android to monitor the behavior of apps while they’re on a device.

“Building on Verify apps, which already protects people when they’re installing apps outside of Google Play at the time of installation, we’re rolling out a new enhancement which will now continually check devices to make sure that all apps are behaving in a safe manner, even after installation. In the last year, the foundation of this service—Verify apps—has been used more than 4 billion times to check apps at the time of install. This enhancement will take that protection even further, using Android’s powerful app scanning system developed by the Android security and Safe Browsing teams,” Rich Cannings, an Android security engineer, wrote in a blog post.

Most Android users likely haven’t seen the warnings that the Verify apps system throws, but Cannings said that the new system provides an extra measure of defense against malicious apps. Researchers have found that developers will sometimes send updates to installed apps, adding malicious or otherwise unwanted functionality.

“Because potentially harmful applications are very rare, most people will never see a warning or any other indication that they have this additional layer of protection. But we do expect a small number of people to see warnings (which look similar to the existing Verify apps warnings) as a result of this new capability,” Cannings said.