The head of the working group designing the next version of HTTP said the HTTP/2 protocol will work only with encrypted URIs.
“I believe the best way that we can meet the goal of increasing use of TLS on the Web is to encourage its use by only using HTTP/2.0 with https:// URIs,” wrote Mark Nottingham on a W3C mailing list.
The move is a shot across the bow aimed at increasing surveillance by the U.S. government against its own citizens and foreigners. A number of major Internet players such as Google and Facebook have turned HTTPS on by default for crucial services such as Gmail. Should Nottingham’s proposal for HTTP/2 pass muster, it would be a massive step toward securing all Web traffic with TLS.
“To be clear – we will still define how to use HTTP/2.0 with http:// URIs, because in some use cases, an implementer may make an informed choice to use the protocol without encryption,” Nottingham said. “However, for the common case — browsing the open Web — you’ll need to use https:// URIs if you want to use the newest version of HTTP.”
There were three proposals before the working group, Nottingham said:
- Opportunistic encryption for http:// URIs without server authentication, also known as TLS Relaxed;
- Opportunistic encryption for http:// URIs with server authentication;
- HTTP/2 used only with HTTPS on the open Internet.
Nottingham said discussions landed on the third option because it introduces the least complexity and because HSTS remains an option for downgrade protection. Browser vendors, moreover, have been vocal about the need to encrypt Web traffic, which throws significant weight behind the movement.
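The downgrade protection mentioned here comes from the HSTS response header defined in RFC 6797, which tells a browser to refuse plain-HTTP connections to a site for a set period. A minimal sketch of building that header follows; the helper function and its defaults are illustrative, not part of any HTTP/2 proposal.

```python
# Sketch of an HSTS (RFC 6797) response header. Once a browser sees
# this header over HTTPS, it upgrades future http:// requests to the
# site, blocking downgrade attacks for the max-age window.
def hsts_header(max_age_days: int = 365, include_subdomains: bool = True) -> tuple:
    value = "max-age={}".format(max_age_days * 86400)  # max-age is in seconds
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

print(hsts_header())
# ('Strict-Transport-Security', 'max-age=31536000; includeSubDomains')
```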
As the Snowden leaks continue to expose the breadth of NSA surveillance of Americans, reaching beyond the collection of phone call metadata to the tapping of fiber links between Internet data centers, calls for enhanced encryption grow louder. Cryptographer and security expert Bruce Schneier wrote in an essay last month that the Internet facilitates surveillance as companies collect data from visitors and customers, often in the clear.
He urged large Internet players to raise the costs of surveillance and force the NSA to put the brakes on large-scale collection in favor of targeted surveillance.
“Moore’s law has made computing cheaper. All of us have made computing ubiquitous. And because computing produces data, and that data equals surveillance, we have created a world of ubiquitous surveillance,” Schneier wrote. “Now we need to figure out what to do about it. This is more than reining in the NSA or fining a corporation for the occasional data abuse. We need to decide whether our data is a shared societal resource, a part of us that is inherently ours by right, or a private good to be bought and sold.”
In the meantime, Nottingham said that as HTTP/2 adoption moves forward, other options such as Perfect Forward Secrecy could be considered.
“I believe this approach is as close to consensus as we’re going to get on this contentious subject right now,” he said. “As HTTP/2 is deployed, we will evaluate adoption of the protocol and might revisit this decision if we identify ways to further improve security.”
As targeted Chinese espionage campaigns are disclosed, it’s easy to get caught up in the immediate impact and details with regard to the compromised site or malware samples involved. It’s also simple to discount them as separate endeavors, one-off projects targeting the secrets held so precious by manufacturers, software companies or government officials.
But what if they weren’t one-offs? What if there was a connection between major APT-style intrusions aside from the country of origin?
Security company FireEye thinks it has found connections between at least 11 different espionage campaigns linked to China over the last two years, suggesting a centralized operations organization that supplies attackers with malware, builder tools, stolen digital certificates and other artifacts.
“We think, based off the data we collected, based off the timing of events and what we believe the data means in terms of the sharing of the tools, we think that can be more reasonably explained by a more formal apparatus that sits underneath the intrusion operators,” said Ned Moran, a senior malware researcher at FireEye.
Moran pointed to a number of clues that distinguish the suppliers from the attackers, most notably, the existence of a builder tool that allows the attackers to quickly create new malware variants. Typically, such builders are created by developers, rather than an attacker skilled in intrusions, Moran said, adding that this aspect of specialization brings efficiency and speed to an operation.
“The reason for producing the builder is so that you could have someone who’s not necessarily a coder simply press and click buttons on a screen to create new malware,” Moran said. “We think that builder is good evidence of the fact there is specialization; there are people who build these tools and people who use these tools.”
Some of the malware tools found in APT attacks, such as the McRAT variant known as Trojan.APT.9002 used to exploit the most recent Internet Explorer zero-day vulnerability, are exclusive to these operations and are not available for sale in the underground, unlike the Poison Ivy and Gh0stRAT malware also used among the 11 campaigns. So are the suppliers and attackers part of the same operation, or is this a buyer-seller relationship?
“We don’t know if within the clusters we document if they are buying the tool, or sharing it among themselves, or if there’s a formal apparatus that delivers it,” Moran said. “If you’re in the infantry, you don’t have to buy your M16, they give it to you from the armory. We’re not sure how it works. All we know for sure is that the tool is not available for purchase in the cybercriminal underworld. We can deduce from that, that it’s privately held.”
FireEye proposes three answers: A) that this “quartermaster” as they put it exists and supports multiple APT campaigns via a shared development and logistics operation focusing on cyberespionage; B) that a single attacker group is behind all 11 APT campaigns; or C) that rather than having a centralized operation, the attackers behind the 11 campaigns are merely sharing artifacts.
Moran thinks there is enough evidence to suggest that a “quartermaster” of sorts is in place supplying artifacts that support these campaigns.
Kurt Baumgartner, senior security researcher with the Global Research and Analysis Team at Kaspersky Lab, said APT groups do share tools and techniques, but relationships between the groups are complex.
“Some of the groups jealously guard custom made components of their attacks, but share lots of other stuff,” Baumgartner said. “This includes research that leads to custom development of offensive components, like rootkit components and exploit code, and kits to crank out backdoors and spearphish attachments. It’s quite possible that individuals move between groups too.”
Moran said FireEye’s suspicions were raised with the emergence of the Sunshop watering hole attacks, reported in May. Analysis of the campaign revealed connections to others targeting high tech companies, financial services institutions, telecommunications companies, and energy and utilities. They all had different techniques, tactics and procedures, FireEye said, but shared a common development infrastructure. They shared portable executable resources, digital certificates, API import tables, compile times and dates, and command and control infrastructure.
FireEye was able to capture 110 unique binaries; 70 of them were APT.9002 variants, and 47 were signed with one of six digital certificates, among them certificates stolen from Microsoft and from gaming companies such as MGame, which was used in the Winnti campaign identified by Kaspersky Lab. The certificates have since been revoked or have expired. Moran also said 64 of the 110 samples were packed with almost identical PE resources and shared common compile times, the most common being Dec. 19, 2012.
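The kind of pivoting FireEye describes, linking samples that share signing certificates, compile timestamps and PE resources, can be sketched roughly as follows. The sample records, field names and function are invented for illustration; they are not FireEye's tooling.

```python
from collections import defaultdict

# Hypothetical sketch: group malware samples by shared build artifacts.
# Samples landing in the same cluster hint at common tooling or a
# common supplier, the inference described in the article.
samples = [
    {"name": "a.exe", "cert": "cert-mgame", "compiled": "2012-12-19", "res": "r1"},
    {"name": "b.exe", "cert": "cert-mgame", "compiled": "2012-12-19", "res": "r1"},
    {"name": "c.exe", "cert": "cert-other", "compiled": "2013-02-01", "res": "r2"},
]

def cluster_by(samples, keys):
    """Bucket samples on the tuple of the chosen artifact fields."""
    clusters = defaultdict(list)
    for s in samples:
        clusters[tuple(s[k] for k in keys)].append(s["name"])
    return dict(clusters)

print(cluster_by(samples, ["cert", "compiled"]))
# a.exe and b.exe fall into one cluster: same certificate, same build date.
```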
“We have seen much of this sort of crossover as well. For example, the Winnti stolen certificates have made their way around to several groups and campaigns. We have also seen many other backdoors and exploits being shared between groups,” Baumgartner said. “It’s interesting that these crews are off to the races with the recently publicized CVE-2013-3906, targeting a Windows TIFF handling vulnerability. Multiple groups have been using that one in particular, including Winnti, a likely Indian group behind Operation Hangover, and the Taidoor attackers.”
In the first six months of this year, Google received seven wiretap orders from the United States government and complied with all of them. The company also received 207 pen register requests in the same period and complied with 89 percent of them, according to Google’s new transparency report.
The company’s latest report reveals a fairly dramatic increase in the volume of user data requests from the U.S. government since the beginning of 2010. In the first half of that year, Google received 4,287 requests for user data. In the latest reporting period, the company got 10,918 requests. However, the percentage of requests that Google complies with has been dropping over time, with the company providing some data in 94 percent of requests in the second half of 2010 and 83 percent in the first half of 2013. Overall, requests from all governments have more than doubled since 2010.
Google is one of a growing number of companies that have decided to publish information about the number and kind of requests they get from law enforcement agencies for user data. Those requests can run the gamut from simple data about the account all the way up to a wiretap that gives law enforcement access to the full content of a user’s communications in real time. Twitter, Facebook, Microsoft and several other companies have taken to publishing these reports, which have become all the more significant in the last six months as the revelations of the NSA’s surveillance methods have accumulated.
Several of those companies have petitioned the Foreign Intelligence Surveillance Court for permission to provide more information on the volume of National Security Letters they receive. Right now, they’re only allowed to reveal those requests in ranges of 1,000. NSLs are sent by the FBI in national security investigations and force companies to reveal detailed information about a user’s account, including name, address and how long they’ve had an account. Google officials said the company is continuing to petition the government for the ability to provide more data.
“We believe it’s your right to know what kinds of requests and how many each government is making of us and other companies. However, the U.S. Department of Justice contends that U.S. law does not allow us to share information about some national security requests that we might receive. Specifically, the U.S. government argues that we cannot share information about the requests we receive (if any) under the Foreign Intelligence Surveillance Act. But you deserve to know,” said Richard Salgado, legal director, law enforcement and information security, at Google.
“Earlier this year, we brought a federal case to assert that we do indeed have the right to shine more light on the FISA process. In addition, we recently wrote a letter of support (PDF) for two pieces of legislation currently proposed in the U.S. Congress. And we’re asking governments around the world to uphold international legal agreements that respect the laws of different countries and guarantee standards for due process are met.”
This is the first transparency report from Google that has included the volume of wiretap requests, pen register orders and other court orders the company has received. Wiretaps are the most difficult kind of order for law enforcement agencies to get because they provide the most detailed information about the target. There’s no indication in the Google report about which services the seven wiretap orders covered.
The popular humor website Cracked[dot]com reportedly hosted malware that infected the machines of its visitors over the weekend, and may still be doing so, according to Barracuda Labs research.
The malware proliferated via drive-by downloads, and it is not known how many systems became infected as a result of visiting the site. Barracuda Labs claims the number of infections could be quite high considering that the site ranks 289 in the U.S. and 654 globally, according to Web information firm Alexa.
Barracuda Labs claims that the infection is a stealthy one, leaving infected users with no indication of compromise other than the fact that a Java plugin has launched and that the system is running low on memory.
You can find out more about the specific piece of malware in use here.
At the time of Barracuda Labs’ publication, just seven of 46 malware engines were detecting the threat.
Cracked[dot]com did not respond to Barracuda Labs’ disclosure initially, but later posted in a forum that it had resolved the problem sometime Tuesday. Despite that, Barracuda Labs claims the site is still infected and that similar attacks on the site appear to be a recurring problem.
There have been countless hearings in both the House and Senate since the Snowden leaks began in June, and there seems to be no end in sight. The latest committee to get in on the action was the Senate Committee on the Judiciary’s Subcommittee on Privacy, Technology and the Law, which held a hearing today in which lawmakers and technology experts discussed the National Security Agency’s expansive and increasingly public surveillance practices, examining a proposed bill that would require that the U.S. spy agency carry out its operations in a more transparent fashion.
When all was said and done, the general consensus among those not advocating for the NSA was that a bill introduced by Sen. Al Franken (D-Minn.), chairman of the subcommittee, would be a great step forward, but that transparency alone would not undo the damage done to U.S. companies and the government by PRISM and similar surveillance programs. Nor, they seemed to agree, would added transparency make the NSA’s programs lawful or constitutional.
Franken said that the Surveillance Transparency Act would require the NSA to disclose to the public how many people have had their data collected under each key foreign intelligence authority. It would also make the NSA estimate how many of those people are American citizens or green card holders, and how many of those Americans had their information actually examined by a government agent. His bill, he continued, would lift the gag order on Internet and phone companies so that they can tell Americans, in general terms, how many orders they are receiving and how many users’ information has been produced in response to those orders.
American cloud providers are losing as much as $180 billion per year as a direct result of their inability to report how often the government requests information, how often they comply with those orders, and how much information they hand over to federal authorities, Franken said. Other witnesses agreed that the U.S. government and the civil liberties of its citizens were not the only victims of pervasive government surveillance.
“My bill would permanently ensure that American citizens have the information they need to develop an informed opinion about government surveillance, and it would protect American companies from losing business because of misconceptions about their roles in these programs,” Franken said. “Americans still have no way of knowing if their government is striking the right balance between privacy and security and whether their privacy is being violated.”
“I believe that the bulk collection program mostly authorized under section 215 of the PATRIOT Act should come to an end,” said Sen. Dean Heller (R-Nev.). “While there is disagreement on whether this program should continue, I am confident that we can all agree that these programs need more transparency.”
Robert Litt, general counsel for the Director of National Intelligence, said that more transparency is needed, but his reason differed from that of the other witnesses. His goal, and presumably that of the DNI, was to use transparency as a tool to dispel exaggerations, myths and general misinformation about the government’s spying programs. Litt claimed that proponents of Franken’s bill and the national security community agree on the broad view of the bill, but that the intelligence community has concerns that some of its provisions could harm intelligence and national security operations.
“The DNI has declassified and released thousands of pages of documents about these programs and we are continuing to review documents to release more of them,” he said. “These documents demonstrate that these programs are all authorized by law and subject to vigorous oversight by all three branches of government.
“It’s important to emphasize that this info was all properly classified. It is being declassified now only because in the present circumstances, the public interest in declassification outweighs the national security concerns that required classification.”
More specifically, Litt said one of the intelligence community’s primary concerns is that enumerating the exact number of U.S. citizens monitored under its surveillance programs would be too difficult and resource-intensive.
“It is often not possible to determine whether a person who receives an email is a U.S. person. The email address says nothing about the citizenship or nationality of that person,” Litt said. “Even in cases where we would be able to get the information that would allow us to determine whether someone is a U.S. person, doing the research and collecting that information would perversely require a greater invasion of that person’s privacy than would otherwise occur.”
Therefore, he said, the NSA and the intelligence community have written letters to Congress informing members that this kind of information simply cannot reasonably be obtained.
Kevin Bankston, director of the Free Expression Project at the Center for Democracy and Technology, in later testimony called the NSA’s claimed inability to estimate the number of individuals swept up in its surveillance “shocking.” He added that Litt’s related claim – that law enforcement cannot make a meaningful estimate of how many people’s data it has collected – “just doesn’t make sense.” Before that point, Litt had taken further issue with the bill, saying the intelligence community also has significant concerns about giving companies permission to publish information about the number of data request orders they receive.
“Providing that information in that level of detail,” Litt said, “could provide our adversaries a detailed road map of which providers and which platforms to avoid in order to escape surveillance.”
Bankston laid out three reasons why it is important to allow companies to disclose more transparently the information requests they receive. First, he claimed that both citizens and policymakers have the right and the need to know the scope of government programs. Second, he said that companies have a clear First Amendment right to share this information, and that the government’s attempts to gag them are clearly unconstitutional. Lastly, Bankston argued that greater transparency is needed to restore trust in the U.S. government and in businesses.
“Indeed you will see this prior restraint at work today in the room,” Bankston said. “Even though everyone in this room knows and understands that Google has received Foreign Intelligence Surveillance Act process, Google’s representative is the one person in the room who cannot admit it.”
Later, Sen. Patrick Leahy (D-Vt.) would echo that sentiment, asking another witness, Richard Salgado, the director for law enforcement and information security matters at Google, if he was permitted to tell the committee if Google had received any FISA orders. Salgado responded, with a smirk, that he would have to decline to answer the question until the bill being discussed today was passed. Leahy proceeded to ask if Salgado thought that the country was safer as a result of his inability to answer the question.
“I can not imagine the country is safer as a result of that,” Salgado said, again smiling.
Before that exchange, Salgado commended the Surveillance Transparency Act as activists in the back row held up signs urging Google to, “Keep [their] data private.”
Salgado said there has been no indication from the Department of Justice that publishing National Security Letter information – another contentious issue bound up in the surveillance debate – has any real impact on the country’s national security. Despite this, he said, Justice has not given Google permission to publish any meaningful information about the number of NSLs it receives, other than vague ranges of numbers covering both NSLs and individual data requests. In fact, Salgado explained, the permissions Google has been granted by the department would be a significant step backward from the level of transparency in the reports Google already publishes.
Bankston then compared publishing vague ranges of numbers in a transparency report to forcing a doctor to diagnose a disease by examining the patient’s shadow.
“Only the grossest, most obvious abuse would be evident, if even that,” he said.
Amid all of this, Litt and the hearing’s other witness from the national security community, Brad Wiegmann, deputy assistant attorney general for the National Security Division, continued to assure the committee members that the NSA had increased the operational transparency of the government’s spying programs in light of public interest. All in all, they said, these programs are properly regulated and overseen.
“In short,” Salgado responded to those claims, “the DoJ proposal would not provide the type of transparency that is reflected in the Transparency Surveillance Act of 2013. Transparency is critical in informing the public debate on these issues, but it is only one step among the many that are needed.”
Leahy chimed in later:
“Is just enhancing transparency going to be enough to bring back global confidence in American technology companies?” he asked.
Salgado replied that transparency is a good first step, but that, ultimately, it would not be enough. Users, he said, need to be assured that such surveillance practices are conducted under law, in a rule-bound and narrowly tailored manner, and that there is oversight and accountability for them. Bankston agreed, saying that substantial reform – in addition to transparency – will be needed to repair the U.S. government’s image.
As public outrage grows, especially among the technical elite, bills similar to Franken’s are popping up in the House of Representatives on the other side of the Capitol as well as in the Senate.
The hacker behind the MacRumors Forums breach said the attack was “friendly” and that none of the data accessed will be leaked. Editorial Director Arnold Kim confirmed to Threatpost that a post on the forums from the hacker is legitimate.
Kim posted an advisory on the forum on Monday informing users that a breach had occurred, and advising the site’s 860,000-plus members to change their passwords on the forum and anywhere else they might have used the same credential. MacRumors Forums said it has enlisted a third-party security firm to investigate the attack, which it likened to a July break-in at the Ubuntu Forums.
The hacker, who posted a portion of Kim’s password hash and salt as proof of legitimacy, blamed a MacRumors Forums moderator whose credentials were stolen and used to access the password database.
“We’re not going to ‘leak’ anything. There’s no reason for us to. There’s no fun in that. Don’t believe us if you don’t want to, we honestly could not care less,” the hacker wrote. Kim said this afternoon that the site has no further details on the status of the investigation.
“In situations like this, it’s best to assume that your MacRumors Forum username, email address and (hashed) password is now known,” Kim said.
The hacker confirmed that 860,106 passwords were dumped, and 488,429 still had a salt at least three bits long.
“Anyone that’d been active recently will have a longer salt, which will slow down the hash cracking by a fraction of the time it would have taken (duplicate salts = less work to do; it’s unlikely to have many with a 3 bit salt),” the hacker’s post said. “We’re not ‘mass cracking’ the hashes. It doesn’t take long whatsoever to run a hash through hashcat with a few dictionaries and salts, and get results.”
The hacker put the blame on users for reusing passwords, a practice that runs against generally accepted security advice, adding that the credentials are not being exploited to log into web-based email accounts or other online services.
“We’re not terrorists. Stop worrying, and stop blaming it on Macrumors when it was your own fault for reusing passwords in the first place,” the hacker wrote.
MacRumors Forums, much like the Ubuntu site, runs on the vBulletin platform; all current versions of vBulletin share the same hashing algorithm, according to the attacker, who added that the attack’s success had nothing to do with outdated software or vBulletin itself, but rather with the moderator credentials they were able to compromise.
The attack on the free Linux distribution Ubuntu’s forums in July affected close to 2 million forum members. The attackers accessed every user’s email address and hashed password; Canonical, the U.K.-based software company that backs the distro, also recommended that its users change their forum passwords and any other accounts where the same password might have been used. Ubuntu’s password trove was likewise hashed and salted; salting involves adding random characters to a password before it’s hashed, which reduces a hacker’s ability to mount common password attacks such as dictionary attacks.
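The salting described above can be illustrated with a short sketch. The md5(md5(password) + salt) construction shown is commonly attributed to the vBulletin platform these forums run on, but treat the exact scheme, and the parameter choices, as assumptions made for illustration; MD5 itself is far too weak for modern password storage.

```python
import hashlib
import os

# Illustrative sketch of salted hashing. The double-MD5 construction
# mirrors the scheme commonly attributed to vBulletin; it is shown
# only as an example, not as a recommended design.
def salted_hash(password: str, salt: str) -> str:
    inner = hashlib.md5(password.encode()).hexdigest()
    return hashlib.md5((inner + salt).encode()).hexdigest()

salt = os.urandom(3).hex()  # a random per-user salt
h = salted_hash("hunter2", salt)

# The same password with a different salt yields a different hash, so
# a precomputed dictionary no longer cracks every account at once:
# each salt forces the attacker to redo the work.
```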
“Consider the ‘malicious’ attack friendly,” the MacRumors Forums hacker said. “The situation could have been catastrophically worse if some fame-driven idiot was the culprit and the database were to be leaked to the public.”
When the first NSA surveillance story broke in June, about the agency’s collection of phone metadata from Verizon, most people likely had never heard the word metadata before. Even some security and privacy experts weren’t sure what the term encompassed. Now a group of security researchers at Stanford has started a project to collect data from Android users to see exactly how much information can be drawn from the logs of phone calls and texts.
The project, dubbed Metaphone, is soliciting volunteers who agree to allow the collection of various kinds of metadata from their phones, which will then be sent automatically to Stanford’s researchers. The Stanford Security Lab, which is running the project, is interested in showing that the collection of metadata amounts to surveillance, something that NSA leaders and Congress have said is not the case.
“Phone metadata is inherently revealing. We want to rigorously prove it—for the public, for Congress, and for the courts,” Jonathan Mayer, a PhD student at Stanford and a junior affiliate scholar at the Security Lab, wrote in an explanation of the project.
People interested in participating in the program can download the Metaphone app from Google Play. As part of the project, Metaphone will collect and transmit a variety of information to the researchers. The data will be destroyed at the end of the study.
“In the course of the study, your mobile phone will transmit device logs and social network information to researchers at Stanford University. Device data will include records about your recent calls and text messages. Social network data will include your profile, connections, and recent activity. The data will be stored and analyzed at Stanford, then deleted at the end of the study. Research staff will take reasonable precautions to secure the data in transit, storage, analysis, and destruction,” the researchers said.
In an email interview, Mayer said that he hopes the study will provide some clear answers on what metadata is and how invasive the collection of it can be for users.
“We intend to report preliminary results as soon as we have enough crowdsourced data. Phone records are plainly a hot-button issue: Congress is considering intelligence reform legislation, courts are hearing litigation challenges, and many in the public aren’t sure who’s telling the truth. Our aim is to provide rigorous answers about the sensitivity of phone metadata,” Mayer said.
“It is difficult to estimate the amount of data that we need because the quality, not just quantity, of the data coming back will affect how well our learning algorithms work. The general principle, though, is that more data is better and if we want to make a strong claim about metadata we would like to have as much data as possible. However, the analysis can be a continuous process so we can get started once we have some participants and then refine our approach as more data comes in,” said Patrick Mutchler, also of the Security Lab.
This story was updated on Nov. 13 to clarify that the data collection will not be anonymous.
Image from Flickr photos of Harshlight.
The tentacles of the massive Adobe breach, called one of the worst in U.S. history by one security expert, have reached Facebook users, specifically those who used the same email and password combination for the social network as well as Adobe.
A Facebook representative confirmed to Threatpost today that users in that situation are being presented with a message telling them they have to change their passwords.
“We actively look for situations where the accounts of people who use Facebook could be at risk—even if the threat is external to our service,” Facebook’s Jay Nancarrow said in a statement. “When we find these situations, we present messages to people to help them secure their accounts.”
The data from the Adobe breach, disclosed in early October, was discovered online by blogger Brian Krebs and Hold Security CEO Alex Holden. The software giant was breached by unknown Russian-speaking attackers who were able to steal source code for Adobe products such as Acrobat, ColdFusion and Photoshop. Adobe initially said up to three million customer records were also compromised, including encrypted passwords and credit card numbers. That number was adjusted to upwards of 40 million after more of the data surfaced online. Analysis of the encrypted passwords revealed that Adobe had used a weak encryption scheme to secure the credentials; the passwords were secured with a symmetric encryption cipher, meaning that anyone able to guess the key can unlock all of the passwords in question.
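A toy sketch can show why a single symmetric key is a poor way to protect a password store. The XOR "cipher" below is purely illustrative and is not Adobe's actual algorithm: the point is only that encryption is reversible, so recovering the one key exposes every entry at once, whereas salted one-way hashes must be cracked individually.

```python
# Toy illustration (NOT Adobe's cipher): one shared key protecting an
# encrypted password store. XOR is its own inverse, so the same
# function both encrypts and decrypts.
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret-key"
vault = [xor_encrypt(pw, key) for pw in [b"hunter2", b"letmein"]]

# Whoever recovers the single key decrypts the entire trove in one step:
print([xor_encrypt(c, key) for c in vault])  # [b'hunter2', b'letmein']
```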
Facebook said it has been combing through the passwords looking for matching username-password combinations in order to keep its users’ accounts secure. Chris Long, a Facebook security team member, confirmed this in a comment posted on Krebs on Security.
“We used the plaintext passwords that had already been worked out by researchers,” Long said. “We took those recovered plaintext passwords and ran them through the same code that we use to check your password at log-in time.
“We’re proactive about finding sources of compromised passwords on the Internet. Through practice, we’ve become more efficient and effective at protecting accounts with credentials that have been leaked, and we use an automated process for securing those accounts.”
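The approach Long describes can be sketched as follows. Facebook's real login-hashing code is not public, so the salted SHA-256 scheme, the account records and the function names below are assumptions made for illustration only.

```python
import hashlib
import hmac

# Hedged sketch: run plaintext passwords recovered from a third-party
# breach through the same hashing used at login, and flag accounts
# whose stored hash matches. The hashing scheme here is an assumption.
def login_hash(password: str, salt: bytes) -> str:
    return hashlib.sha256(salt + password.encode()).hexdigest()

# Invented account store: alice reused "123456" on the breached site.
accounts = {"alice@example.com": {"salt": b"\x01\x02", "hash": None}}
accounts["alice@example.com"]["hash"] = login_hash("123456", b"\x01\x02")

def at_risk(leaked_plaintexts, accounts):
    flagged = []
    for email, rec in accounts.items():
        for pw in leaked_plaintexts:
            # constant-time comparison of hex digests
            if hmac.compare_digest(login_hash(pw, rec["salt"]), rec["hash"]):
                flagged.append(email)
    return flagged

print(at_risk(["123456", "letmein"], accounts))  # ['alice@example.com']
```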
In the meantime, a 20-year-old from the Netherlands who goes by the handle Lucb1e built a tool that facilitates a search of the stolen data for a user’s email address or partial address. The tool is still online, though Lucb1e said it won’t be forever.
“Searching a 10GB file is not trivial, so instead of searching it for everyone individually, I wrote a program that does it in the background (daemon),” he wrote. “Whenever someone adds a search, it is added to the database. The daemon checks every few seconds whether any (and how many) searches have been added, and runs all searches at the same time.”
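The batching trick Lucb1e describes is what makes the service tractable: instead of one full scan of the 10GB file per visitor, queued searches are run together in a single pass. A rough sketch of the idea, using an in-memory stream in place of the real dump:

```python
import io

def run_batch(stream, queries):
    """Scan the dump once, checking every queued query against each line."""
    hits = {q: [] for q in queries}
    for line in stream:
        line = line.rstrip("\n")
        for q in queries:
            if q in line:
                hits[q].append(line)
    return hits

# Simulate the 10GB dump with a tiny in-memory file (contents are made up).
dump = io.StringIO("alice@example.com|hash1\nbob@example.com|hash2\n")

# Searches queued up since the daemon's last pass.
pending = ["bob@example.com", "carol@"]
results = run_batch(dump, pending)
print(results["bob@example.com"])  # ['bob@example.com|hash2']
```

The cost of a pass is dominated by reading the file, so checking ten queued searches per line costs little more than checking one.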
Adobe was compromised between July 31 and Aug. 15, but the breach was not discovered for more than a month. Adobe disclosed the breach to its customers on Oct. 3 and has yet to provide details on how attackers were able to bypass its defenses. Krebs and Holden found 40 GB of data stolen from Adobe and other organizations on the same server used by criminals who pulled off breaches against LexisNexis and Dun & Bradstreet. These same attackers are believed to be responsible for a number of breaches using ColdFusion exploits going back to December of last year.
BlackBerry yesterday addressed a pair of serious vulnerabilities in its BlackBerry Link product, which enables users to sync content between a BlackBerry 10 device and a desktop or laptop.
The vulnerabilities lie in the Peer Manager component of Link, which provides remote file access; according to BlackBerry, the feature allows a user to access documents and files in a remote folder from their mobile device.
The risk is limited, BlackBerry said, because an exploit would require user interaction.
“Successful exploitation can require that an attacker must persuade a user on a system with BlackBerry Link installed to click on a specifically crafted link or access a webpage containing maliciously crafted code,” BlackBerry said in its advisory BSRT 2013-12. “In the alternative scenario, successful exploitation requires that a local attacker must be able to log in to the affected system while the BlackBerry Link remote file access feature is running under a different user account.”
An attacker could then read or modify any data from the remote folder accessible through Link.
BlackBerry said that on multiuser systems, a successful exploit could elevate privileges, letting users on the same system access the remote folder belonging to the account under which Peer Manager is running. Remote attackers could also reach these folders, and the data within, by enticing a user to click on a malicious link or visit an infected website.
“An attacker must persuade a lower privileged local user to click on a specifically crafted link or access a webpage containing maliciously crafted code while the user is logged into their account on a machine on which a higher privileged user has previously logged in, resulting in Peer Manager running under the higher privileged user account,” BlackBerry said.
The existing state of affairs in which government agencies and intelligence services work to insert backdoors into various hardware, software and networks is not only a problem in terms of civil rights but also represents a serious security risk to most users and the Internet itself, a recent report by the Citizen Lab says. And, the revelations of the U.S. surveillance programs of the last few months may also spawn a variety of copycat programs in emerging countries.
Some of the more explosive and troubling revelations to come out of the steady flow of NSA leaks this year have involved the U.S. government’s efforts to compromise encryption standards, software programs and Internet infrastructure used by millions and millions of people as part of intelligence-gathering operations. Documents made public in recent months show efforts by the NSA to influence the standards process at NIST, specifically regarding the Dual EC DRBG random number generator, which NIST has warned developers to stop using. There also have been allegations that the agency and its allies are tapping unencrypted links between data centers owned by Google and Yahoo, a revelation that has infuriated security engineers at Google.
Security experts have long argued that inserting backdoors in widely deployed software or hardware for law-enforcement or intelligence-gathering purposes is not just questionable with regard to civil rights but also harms the security of the entire system. The presence of a vulnerability in an application or piece of hardware opens that target up to exploitation by anyone, not just the people who inserted the backdoor. The Citizen Lab report, called “Shutting the Backdoor,” by Ron Deibert of the University of Toronto, argues that the NSA revelations have brought this problem into sharp focus.
“Quite apart from these concerns about privacy and potential abuse of unchecked power is an additional concern around the security implications of backdoors. Building backdoors into devices and infrastructure may be useful to law enforcement and intelligence agencies, but it also provides a built-in vulnerability for those who would otherwise seek to exploit them and in doing so actually contributes to insecurity for the whole of society that depends on that infrastructure,” Deibert says in the report.
“In 2008 Citizen Lab researchers discovered that the Chinese version of the popular VOIP product, Skype (called TOM-Skype) had been coded with a special surveillance system in place such that whenever certain keywords were typed into the chat client, data would be sent to a server in mainland China (presumably to share with China’s security services). Upon further investigation, it was discovered that the server onto which the chat messages were stored was not password protected, allowing for the download of millions of personal chats, many of which included credit card numbers, business transactions, and other private information.”
In addition to the unintended consequences these programs can produce, the Citizen Lab report also says that Edward Snowden’s revelations of the NSA’s methods and techniques could provide a blueprint for regimes in emerging countries that are interested in exerting more control over their communications infrastructure.
“No doubt one implication of Snowden’s revelations will be the spurring on of numerous national efforts to regain control of information infrastructures through national competitors to Google, Verizon, and other companies implicated, not to mention the development of national signals intelligence programs that attempt to duplicate the US model,” Deibert writes in the report.
“Already prior to the revelations, numerous companies faced complex and, at times, frustrating national ‘lawful access’ requests from newly emerging markets. Many countries of the global South lack even basic safeguards and accountability mechanisms around the operations of security services, and their demands on the private sector could contribute to serious human rights violations and other forms of repression.”
Deibert argues that while there are legitimate uses for lawful intercept technologies, they should be deployed sparingly and with great oversight.
“Those lawful access provisions that are still required should be infrequent and strictly controlled with rigorous oversight and public accountability provisions. Direct tapping of entire services wholesale should be eliminated. Not only will this protect civil liberties and prevent the concentration of power in unchecked hands, it will ensure that we are not doing more to undermine our own security in an overzealous surveillance quest,” he writes.
Image from Flickr photos of anyjazz65.
Adobe patched two vulnerabilities in its ColdFusion web application server today, and also released a Flash Player update that patched a remote code execution bug in the software.
A company spokesperson said none of the vulnerabilities are being exploited, nor are they related to the recent theft of Adobe source code and up to 150 million customer records, including passwords.
One of the ColdFusion bugs, however, was reported by Alex Holden of Hold Security; Holden is one of the experts who uncovered the data lost in the Adobe breach along with blogger Brian Krebs. Krebs reported today that one of the now-patched ColdFusion bugs was a zero-day being used by attackers earlier this year to break into a number of companies.
The security hotfix for ColdFusion 10 on Windows is the most critical, according to Adobe; the vulnerabilities also affect versions 9.0.2, 9.0.1 and 9.0, as well as Mac OS X and Linux. Adobe said it patched a cross-site scripting vulnerability that could be remotely exploited by an attacker with credentials when the CFIDE directory is exposed. The other bug could permit unauthorized remote read access, Adobe said.
Adobe also updated Flash Player to version 11.9.900.117 for Windows and Mac OS X, along with a corresponding update for Linux. The patches fix flaws that could crash Flash Player and enable an attacker to remotely take control of the underlying system hosting the software.
Both products have been patched multiple times this year. ColdFusion is of particular interest because of its involvement in the massive October breach. The attackers were able to access source code for ColdFusion, along with Acrobat, Publisher, Photoshop and other Adobe products. More than 150 million customer records were also accessed, including unsalted passwords.
ColdFusion has been patched several times by Adobe this year, going as far back as Jan. 4, when the company reported that ColdFusion exploits were in the wild for unpatched vulnerabilities in the software. Vulnerabilities were patched again in May, weeks after cloud-hosting company Linode revealed it was breached by attackers using a ColdFusion zero day, losing customer records including payment card information. Previously, on Dec. 11, Adobe patched a sandbox permissions flaw in ColdFusion, weeks after an out-of-band patch resolved a denial-of-service vulnerability.
BOSTON – If you’re looking for tangible information-sharing success stories around attack intelligence, some might point to the prompt publishing of indicators of compromise (IOCs) as an example. Security and forensics companies will publish MD5 hashes of malware, IP addresses involved in attacks, malware signatures and other artifacts relevant to a breach or malware outbreak. The problem is that all of these artifacts are made available post-attack, and they don’t satisfy the need for real-time data on intrusions, particularly for sensitive industries such as financial services or utilities.
“I want to hear about this stuff as, or before, it impacts me,” said James Caulfield, advanced threat protection program manager for the Federal Reserve Bank in Boston. “[IOCs] just isn’t fast enough.”
Among Caulfield’s responsibilities at the Fed is the coordination of threat information among other regional Federal Reserve banks. He hopes the development of standards for the collection and dissemination of threat intelligence such as CRITs (Collaborative Research Into Threats) and STIX (Structured Threat Information Expression) will eventually pave the way for automated information sharing between machines.
“We need to set standards and fill this stuff out, but in agnostic ways, not in ways that say you need to buy this stuff from Vendor X. That way lies madness,” Caulfield said. “We want this to be as close to open source as feasible. So we’re not tying people to vendors or to products. We’re not looking to sell anything; we’re looking to claw back some of the space we lost.”
Caulfield was speaking about the Advanced Cyber Security Center (ACSC) which hosted its annual conference at the Fed here Tuesday. The ACSC is a cross-sector group of more than 30 public and private sector security officers who meet monthly to facilitate information sharing. Standards such as CRITs and STIX define how attack intelligence is analyzed and transmitted, respectively, and while some industry groups such as the Financial Services Information Sharing and Analysis Center (FS-ISAC) have succeeded in collaborating and sharing sanitized information, ACSC hopes to see that kind of sharing not only between the government and private sector, but horizontally across private companies, even competitors.
“Threat sharing gives us that lead time to get in front of threats,” Caulfield said, referring in particular to targeted attacks and APT-style intrusions where well-funded attackers use commodity malware and custom Trojans to access networks and steal data. “When we get indicators like domains or emails to give us some insight into what we’re looking for, we can begin to not only scour through our instrumentation and logs to see how that happened here, but we can also begin to alert toward those types of things and put in a much stronger net to catch this stuff as it comes at us.”
The challenges to success, however, aren’t necessarily a lack of desire to share information, but rather legal hurdles put in place by lawyers or executives afraid of sharing too much with a competitor in the same industry.
Phyllis Schneck, deputy undersecretary for cybersecurity in the National Protection and Programs Directorate of the U.S. Department of Homeland Security and today’s keynote speaker, pointed out the obvious truth that attackers often do a better job of sharing information than defenders do.
“We face adversaries with no lawyers, no rules, most of them met in prison and they have plenty of money,” Schneck said. “We have to fight that by taking our infrastructure back. When machines talk, there isn’t any reason they can’t tell each other something bad is coming. Global situational awareness is the dream and we plan to live that dream by engaging people to get their trust and incentivize companies to build something into their networks that talks to these protocols.”
CRITs, for example, is essentially a threat repository developed by MITRE Corp., where indicators of compromise are studied and enumerated. STIX, meanwhile, is the language by which this information can be transmitted to those who need it, in a sanitized fashion that is still useful to others.
MITRE chief security officer Gary Gagnon likened this kind of sharing to crowdsourcing, where threat indicators are shared rather than vulnerabilities or compromises; Gagnon believes the latter are the wrong types of information to share because they don’t help organizations under attack understand their adversaries or their tactics.
“That kind of threat information can drive many things inside the enterprise,” Gagnon said. “Things like patch prioritization, training for staff and employees, and technology investments.”
In the meantime, groups such as ACSC and others continue to chase the elusive answer to information sharing where organizations are comfortable sharing with one another in a competitive environment.
“The challenges are messy. The ACSC is cross sector, and the way we’ve structured it, we’ve eliminated that and hammered it out,” Caulfield said. “We’ve structured things in such a way through a very carefully crafted NDA that we don’t use this stuff for recrimination on each other and we don’t use this stuff for a competitive advantage either. The type of stuff we’re sharing—and we do sanitize some of it just because we’re protecting the names of the innocent—it’s not a problem to us. That’s a testament to the people involved.”
The RC4 and SHA-1 algorithms have taken a lot of hits in recent years, with new attacks popping up on a regular basis. Many security experts and cryptographers have been recommending that vendors begin phasing the two out, and on Tuesday Microsoft said it is now recommending that developers deprecate RC4 and stop using the SHA-1 hash algorithm.
RC4 is among the oldest stream ciphers still in use today, and there have been a number of practical attacks against it, including plaintext-recovery attacks. Improvements in computing power have made many of these attacks more feasible, and so Microsoft is telling developers to drop RC4 from their applications.
“In light of recent research into practical attacks on biases in the RC4 stream cipher, Microsoft is recommending that customers enable TLS1.2 in their services and take steps to retire and deprecate RC4 as used in their TLS implementations. Microsoft recommends TLS1.2 with AES-GCM as a more secure alternative which will provide similar performance,” Microsoft’s William Peteroy said in a blog post.
“One of the first steps in evaluating the customer impact of new security research and understanding the risks involved has to do with evaluating the state of public and customer environments. Using a sample size of five million sites, we found that 58% of sites do not use RC4, while approximately 43% do. Of the 43% that utilize RC4, only 3.9% require its use. Therefore disabling RC4 by default has the potential to decrease the use of RC4 by almost forty percent.”
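On the client side, Microsoft's recommendation maps to configuration like the following sketch, shown here with Python's standard `ssl` module (server products each have their own cipher-suite settings): require at least TLS 1.2 and prefer AES-GCM suites, with RC4 excluded outright.

```python
import ssl

# Build a client context that refuses RC4 and anything older than TLS 1.2.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Prefer ECDHE key exchange with AES-GCM; '!RC4' bans the stream cipher entirely.
ctx.set_ciphers("ECDHE+AESGCM:!RC4")

enabled = [c["name"] for c in ctx.get_ciphers()]
print(all("RC4" not in name for name in enabled))  # True: RC4 is gone
print(any("GCM" in name for name in enabled))      # True: AES-GCM is available
```

On a modern OpenSSL build, RC4 suites are typically absent even before this configuration; the explicit exclusion documents the policy and protects against older linked libraries.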
The software company also is recommending that certificate authorities and others stop using the SHA-1 algorithm, citing the existence of known collision attacks against SHA-1 as the main reason. In addition, Microsoft said that after January 2016 developers will no longer be able to use SHA-1 in code-signing or developer certificates.
Image from Flickr photos of Josh Bancroft.
Microsoft today issued eight bulletins addressing 19 separate vulnerabilities in its Windows operating system, Internet Explorer Web browser, Office, and other products.
Microsoft gave three of the bulletins its highest “critical” rating, while the remaining five received the second-most-severe “important” rating. One of the critically rated bulletins addresses an Internet Explorer zero-day vulnerability that attackers have exploited to launch watering hole attacks against an unnamed U.S.-based non-governmental organization.
The zero-day bug is fixed by MS13-090, a cumulative update for ActiveX Kill Bits. The actively exploited vulnerability, which exists in the InformationCardSigninHelper Class ActiveX control, could allow an attacker to execute code remotely if a user views a maliciously crafted webpage in Internet Explorer. As always, users with fewer user rights could be less impacted than those with administrative rights.
Microsoft is not yet patching a second zero day in its Office product suite, but it has released a workaround. Known as the TIFF zero day, the issue can be mitigated with Microsoft’s FixIt tool, researchers from SpiderLabs wrote on their blog, until Microsoft patches it, likely with an out-of-band update before next month’s Patch Tuesday release.
Ross Barrett, senior manager of security engineering at Rapid7, noted in an email conversation with Threatpost that Microsoft’s failure to patch the TIFF bug is frustrating, but that exploitation so far has been very limited and targeted, confined to a specific region, and requires user interaction. He said he therefore wouldn’t worry about it too much.
Beyond these, MS13-088, Microsoft’s cumulative update for Internet Explorer, which is unrelated to the zero days, is likely the next-highest-priority fix for network operators. It resolves 10 privately reported bugs, the most severe of which could allow remote code execution if a user views a maliciously crafted webpage in Internet Explorer, granting an attacker the same rights as the current user; the impact again depends on the victim’s level of rights in the browser.
The other critically rated bulletin resolves an issue in Windows’ graphics device interface and could also enable remote code execution if a user views or opens a specially crafted Windows Write file in WordPad. Again, users with fewer rights will be less impacted.
The remaining, important-rated bulletins, MS13-091 through MS13-095, resolve seven publicly and privately reported bugs: a remote code execution vulnerability in Office, an elevation of privileges flaw in Hyper-V, information disclosures in the Windows ancillary function driver and Outlook, and a denial of service problem in Windows digital signatures.
Tyler Reguly, technical manager of security research and development at Tripwire, told Threatpost that the most interesting important-rated bugs are likely the Outlook vulnerability, which could enable port scanning; the Hyper-V vulnerability, which could allow guest-OS-to-guest-OS code execution; and an X.509 issue in schannel.dll that could allow denial of service.
“Overall, while it is only a medium-sized Patch Tuesday, pay special attention to the two 0-days and the Internet Explorer update,” wrote Wolfgang Kandek, CTO of the IT security firm Qualys, in his analysis of the patch release. “Browsers continue to be the favorite target for attackers, and Internet Explorer, with its leading market share, is one of the most visible and likely targets.”
You can read Microsoft’s full bulletin advisories here.
Microsoft's November 2013 Patch Tuesday delivers three critical bulletins and five bulletins rated important. This month's MS13-088 patches eight critical vulnerabilities and two important vulnerabilities in Internet Explorer. Overall, Microsoft is addressing 19 issues across Internet Explorer, Office and Windows itself.
Google has fixed 12 security vulnerabilities in Chrome, including six high-risk bugs. The new version of the browser includes a number of fixes for bugs discovered by external researchers as well as by Google’s own internal security team.
Two of the more serious vulnerabilities patched in Chrome are use-after-free bugs in various elements of the browser; there also are two out-of-bounds reads, likewise listed as high-risk flaws. But perhaps the most interesting bug fixed in the new version is a medium-risk vulnerability related to the TLS negotiation process: during that process, Chrome failed to check certain certificates it encountered.
Here are some of the bugs fixed in Chrome 31:
-  Medium CVE-2013-6621: Use after free related to speech input elements. Credit to Khalil Zhani ($500 reward).
-  Medium-Critical CVE-2013-2931: Various fixes from internal audits, fuzzing and other initiatives.
-  Medium CVE-2013-6629: Read of uninitialized memory in libjpeg and libjpeg-turbo. Credit to Michal Zalewski of Google.
-  Medium CVE-2013-6630: Read of uninitialized memory in libjpeg-turbo. Credit to Michal Zalewski of Google.
-  High CVE-2013-6631: Use after free in libjingle. Credit to Patrik Höglund of the Chromium project.
As part of its bug reward program, Google paid out $11,000 in bounties to external researchers.