There have been countless hearings in both the House and Senate since the Snowden leaks began in June, and there seems to be no end in sight. The latest committee to get in on the action was the Senate Committee on the Judiciary’s Subcommittee on Privacy, Technology and the Law, which held a hearing today in which lawmakers and technology experts discussed the National Security Agency’s expansive and increasingly public surveillance practices, examining a proposed bill that would require that the U.S. spy agency carry out its operations in a more transparent fashion.
When all was said and done, the general consensus among those not advocating for the NSA was that a bill introduced by Sen. Al Franken (D-Minn.), chairman of the subcommittee, would be a great step forward, but that transparency alone would not undo the damage done to U.S. companies and the government by PRISM and other similar surveillance programs. Nor, they seemed to agree, would added transparency make the NSA’s programs lawful or constitutional.
Franken said the Surveillance Transparency Act would require the NSA to disclose to the public how many people have had their data collected under each key foreign intelligence authority. It would also make the NSA estimate how many of those people are American citizens or green card holders, and how many of those Americans had their information actually looked at by a government agent. His bill, he continued, would lift the gag order on Internet and phone companies so that those companies can tell Americans, in general terms, how many orders they are receiving and how many users’ information has been produced in response to those orders.
American cloud providers are losing as much as $180 billion per year as a direct result of their inability to report how often the government requests information, how often they comply with those orders, and how much information they hand over to federal authorities, Franken said. Other witnesses agreed that the U.S. government and the civil liberties of its citizenry were not the only victims of pervasive government surveillance.
“My bill would permanently ensure that American citizens have information they need to develop an informed opinion about government surveillance and it would protect American companies from losing business because of misconceptions about their roles in these programs,” Franken said. “Americans still have no way of knowing if their government is striking the right balance between privacy and security and whether their privacy is being violated.”
“I believe that the bulk collection program mostly authorized under section 215 of the PATRIOT Act should come to an end,” said Sen. Dean Heller (R-Nev.). “While there is disagreement on whether this program should continue, I am confident that we can all agree that these programs need more transparency.”
Robert Litt, the Director of National Intelligence’s general counsel, agreed that more transparency is needed, but his reason differed from that of the other witnesses. His goal, and presumably that of the DNI, was to use transparency as a tool to dispel exaggerations, myths and general misinformation about the government’s spying programs. Litt claimed that the two sides – proponents of Franken’s bill and the national security community – agree on the broad aims of the bill, but that the intelligence community worries some of its provisions could harm intelligence and national security operations.
“The DNI has declassified and released thousands of pages of documents about these programs and we are continuing to review documents to release more of them,” he said. “These documents demonstrate that these programs are all authorized by law and subject to vigorous oversight by all three branches of government.
“It’s important to emphasize that this info was all properly classified. It is being declassified now only because in the present circumstances, the public interest in declassification outweighs the national security concerns that required classification.”
More specifically, Litt said one of the intelligence community’s primary concerns is that enumerating the exact number of U.S. citizens monitored under its surveillance programs would be too difficult and resource-intensive.
“It is often not possible to determine whether a person who receives an email is a U.S. person. The email address says nothing about the citizenship or nationality of that person,” Litt said. “Even in cases where we would be able to get the information that would allow us to make the determination of whether someone is a U.S. person, doing the research and collecting that information would perversely require a greater invasion of that person’s privacy than would otherwise occur.”
Therefore, he said, the NSA and the intelligence community have written letters to Congress informing members that this kind of information simply cannot be reasonably obtained.
Kevin Bankston, director of the Free Expression Project at the Center for Democracy and Technology, in later testimony called the NSA’s claimed inability to estimate the number of individuals swept up in its surveillance “shocking.” He then said that Litt’s other claim – namely, that law enforcement cannot make a meaningful estimate of how many people’s data it has collected – “just doesn’t make sense.” Litt had taken further issue with the bill before that point, saying that the intelligence community also has significant concerns about giving companies permission to publish the number of data request orders they receive.
“Providing that information in that level of detail,” Litt said, “could provide our adversaries a detailed road map of which providers and which platforms to avoid in order to escape surveillance.”
Bankston laid out three reasons why it is important to allow companies to disclose the information requests they receive more transparently. First, he claimed that both citizens and policymakers have the right and the need to know the scope of government programs. Second, he said that companies have a clear First Amendment right to share this information; the government’s attempts to gag them are clearly unconstitutional. Lastly, Bankston argued that greater transparency is needed to restore trust in the U.S. government and in American businesses.
“Indeed you will see this prior restraint at work today in the room,” Bankston said. “Even though everyone in this room knows and understands that Google has received Foreign Intelligence Surveillance Act process, Google’s representative is the one person in the room who cannot admit it.”
Later, Sen. Patrick Leahy (D-Vt.) would echo that sentiment, asking another witness, Richard Salgado, the director for law enforcement and information security matters at Google, if he was permitted to tell the committee if Google had received any FISA orders. Salgado responded, with a smirk, that he would have to decline to answer the question until the bill being discussed today was passed. Leahy proceeded to ask if Salgado thought that the country was safer as a result of his inability to answer the question.
“I cannot imagine the country is safer as a result of that,” Salgado said, again smiling.
Before that exchange, Salgado commended the Surveillance Transparency Act as activists in the back row held up signs urging Google to, “Keep [their] data private.”
Salgado said there has been no intimation from the Department of Justice that publishing National Security Letter information – another contentious issue bound up in the surveillance debate – has any real impact on the country’s national security. Despite this, he said, Justice has not given Google permission to publish any meaningful information about the number of NSLs it receives, other than vague ranges of numbers that lump together NSLs and individual data requests. In fact, Salgado explained, the permissions the department has granted would be a significant step backward from the level of transparency Google’s existing transparency reports already demonstrate.
Bankston then compared publishing vague ranges of numbers in a transparency report to forcing a doctor to diagnose a disease by examining only the patient’s shadow.
“Only the grossest, most obvious abuse would be evident, if even that,” he said.
Amid all of this, Litt and the other witness appearing on behalf of the national security community, Brad Wiegmann, deputy assistant attorney general for the National Security Division, continued to assure committee members that the NSA had made changes to its spying programs and increased their operational transparency in light of public interest. All in all, they said, these programs have proper regulation and oversight.
“In short,” Salgado responded to those claims, “the DoJ proposal would not provide the type of transparency that is reflected in the Surveillance Transparency Act of 2013. Transparency is critical in informing the public debate on these issues, but it is only one step among the many that are needed.”
Leahy chimed in later:
“Is just enhancing transparency going to be enough to bring back global confidence in American technology companies?” he asked.
Salgado replied that transparency is a good first step, but that, ultimately, it would not be enough. Users, he said, need to be assured that such surveillance practices are carried out under law, in a rule-bound and narrowly tailored manner, and that there is oversight and accountability for them. Bankston agreed, saying that substantial reform – in addition to transparency – will be needed to repair the U.S. government’s image.
As public outrage grows, especially among the technical elite, bills similar to Franken’s are popping up in the House of Representatives on the other side of the Capitol as well as in the Senate.
The hacker behind the MacRumors Forums breach said the attack was “friendly” and that none of the data accessed will be leaked. Editorial Director Arnold Kim confirmed to Threatpost that a post on the forums from the hacker is legitimate.
Kim posted an advisory on the forum on Monday informing users that a breach had occurred and advising the site’s 860,000-plus members to change their passwords on the forum and anywhere else they might have used the same credentials. MacRumors Forums said it has enlisted a third-party security firm to investigate the attack, which it likened to a July break-in at the Ubuntu Forums.
The hacker, who posted a portion of Kim’s password hash and salt as proof of his legitimacy, blamed a MacRumors Forums moderator whose credentials were stolen and used to access the password database.
“We’re not going to ‘leak’ anything. There’s no reason for us to. There’s no fun in that. Don’t believe us if you don’t want to, we honestly could not care less,” the hacker wrote. Kim said this afternoon that the site has no further details on the status of the investigation.
“In situations like this, it’s best to assume that your MacRumors Forum username, email address and (hashed) password is now known,” Kim said.
The hacker confirmed that 860,106 passwords were dumped, and that 488,429 of them still had a salt only three characters long.
“Anyone that’d been active recently will have a longer salt, which will slow down the hash cracking by a fraction of the time it would have taken (duplicate salts = less work to do, it’s likely to have many with a 3 bit salt),” the hacker’s post said. “We’re not ‘mass cracking’ the hashes. It doesn’t take long whatsoever to run a hash through hashcat with a few dictionaries and salts, and get results.”
The hacker put the blame on users for reusing passwords, which is against generally accepted security practices, adding that the credentials are not being exploited to log into web-based email accounts or other online services.
“We’re not terrorists. Stop worrying, and stop blaming it on Macrumors when it was your own fault for reusing passwords in the first place,” the hacker wrote.
MacRumors Forums, much like the Ubuntu site, runs on the vBulletin platform; all current versions of vBulletin share the same hashing algorithm, according to the attacker, who added that the attack’s success had nothing to do with outdated software or vBulletin itself, but rather with the moderator credentials they were able to compromise.
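vBulletin’s password scheme is widely documented as md5(md5(password) + salt), with the salt stored alongside the hash. A minimal sketch (the sample records and dictionary below are hypothetical, not data from the breach) shows why a stored salt adds no secrecy and only modestly slows the kind of dictionary attack the hacker describes:

```python
import hashlib

# vBulletin-style hash: md5(md5(password) + salt). The salt is stored
# next to the hash, so it adds no secrecy -- it only forces an attacker
# to finish the outer md5 once per salt instead of once per word.
def vb_hash(password: str, salt: str) -> str:
    inner = hashlib.md5(password.encode()).hexdigest()
    return hashlib.md5((inner + salt).encode()).hexdigest()

# Hypothetical leaked records: (hash, salt) pairs.
leaked = [
    (vb_hash("letmein", "aQx"), "aQx"),
    (vb_hash("dragon", "Z9k"), "Z9k"),
]

dictionary = ["password", "letmein", "dragon", "123456"]

# Dictionary attack: compute md5(word) once, then try each stored salt.
cracked = {}
for word in dictionary:
    inner = hashlib.md5(word.encode()).hexdigest()
    for h, salt in leaked:
        if hashlib.md5((inner + salt).encode()).hexdigest() == h:
            cracked[h] = word

print(cracked)
```

Because the inner md5(password) is computed once per dictionary word and reused across every salt, a short salt mainly multiplies the work by the number of distinct salts rather than making any individual hash harder to crack.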
The July attack on the forums of the free Linux distribution Ubuntu affected close to 2 million forum members. The attackers accessed every user’s email address and hashed password; Canonical, the U.K.-based software company that backs the distro, likewise recommended that its users change their forum passwords and any passwords reused elsewhere. Ubuntu’s password trove was also hashed and salted; salting involves adding random characters to a password before it is hashed, a practice that blunts common password attacks such as dictionary attacks.
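That practice can be sketched with Python’s standard library (the salt length and iteration count below are illustrative choices, not any specific site’s configuration); modern implementations pair the per-user random salt with a deliberately slow hash such as PBKDF2:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt per user means identical passwords produce
    # different hashes, so one precomputed dictionary no longer cracks
    # every account at once.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
assert verify("hunter2", salt, digest)
assert not verify("hunter3", salt, digest)
```

The salt is stored in the clear next to the digest; its job is not secrecy but uniqueness, forcing an attacker to run the slow hash separately for every account.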
“Consider the ‘malicious’ attack friendly,” the MacRumors Forums hacker said. “The situation could have been catastrophically worse if some fame-driven idiot was the culprit and the database were to be leaked to the public.”
When the first NSA surveillance story broke in June, about the agency’s collection of phone metadata from Verizon, most people likely had never heard the word metadata before; even some security and privacy experts weren’t sure what the term encompassed. Now a group of security researchers at Stanford has started a new project to collect data from Android users to see exactly how much information can be drawn from the logs of phone calls and texts.
The project, dubbed MetaPhone, is soliciting volunteers who agree to allow the collection of various kinds of metadata from their phones, which will then be sent automatically to Stanford’s researchers. The Stanford Security Lab, which is running the project, is interested in showing that the collection of metadata amounts to surveillance, something that NSA leaders and Congress have said is not the case.
“Phone metadata is inherently revealing. We want to rigorously prove it—for the public, for Congress, and for the courts,” Jonathan Mayer, a PhD student at Stanford and a junior affiliate scholar at the Security Lab, wrote in an explanation of the project.
People interested in participating in the program can download the MetaPhone app from Google Play. As part of the project, MetaPhone will collect and transmit a variety of information to the researchers. The data will be destroyed at the end of the study.
“In the course of the study, your mobile phone will transmit device logs and social network information to researchers at Stanford University. Device data will include records about your recent calls and text messages. Social network data will include your profile, connections, and recent activity. The data will be stored and analyzed at Stanford, then deleted at the end of the study. Research staff will take reasonable precautions to secure the data in transit, storage, analysis, and destruction,” the researchers said.
In an email interview, Mayer said that he hopes the study will provide some clear answers on what metadata is and how invasive the collection of it can be for users.
“We intend to report preliminary results as soon as we have enough crowdsourced data. Phone records are plainly a hot-button issue: Congress is considering intelligence reform legislation, courts are hearing litigation challenges, and many in the public aren’t sure who’s telling the truth. Our aim is to provide rigorous answers about the sensitivity of phone metadata,” Mayer said.
“It is difficult to estimate the amount of data that we need because the quality, not just quantity, of the data coming back will affect how well our learning algorithms work. The general principle, though, is that more data is better and if we want to make a strong claim about metadata we would like to have as much data as possible. However, the analysis can be a continuous process so we can get started once we have some participants and then refine our approach as more data comes in,” said Patrick Mutchler, also of the Security Lab.
This story was updated on Nov. 13 to clarify that the data collection will not be anonymous.
Image from Flickr photos of Harshlight.
The tentacles of the massive Adobe breach, called one of the worst in U.S. history by one security expert, have reached Facebook users, specifically those who used the same email and password combination for the social network as well as Adobe.
A Facebook representative confirmed to Threatpost today that users in that situation are being presented with a message telling them they have to change their passwords.
“We actively look for situations where the accounts of people who use Facebook could be at risk—even if the threat is external to our service,” Facebook’s Jay Nancarrow said in a statement. “When we find these situations, we present messages to people to help them secure their accounts.”
The data from the Adobe breach, disclosed in early October, was discovered online by blogger Brian Krebs and Hold Security CEO Alex Holden. The software giant was breached by unknown Russian-speaking attackers who were able to steal source code for Adobe products such as Acrobat, ColdFusion and Photoshop. Adobe initially said up to three million customer records were also compromised, including encrypted passwords and credit card numbers; that number was adjusted to upwards of 40 million after more of the data surfaced online. Analysis of the encrypted passwords revealed that Adobe had used a weak encryption scheme to secure the credentials: the passwords were protected with a symmetric encryption cipher rather than hashed, meaning that anyone able to guess the key can unlock all of the passwords in question.
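The danger of that design can be illustrated with a deliberately toy cipher (the XOR “cipher,” key, and accounts below are stand-ins for demonstration, not Adobe’s actual scheme): deterministic encryption under a single key means identical passwords yield identical ciphertexts, and recovering the one key unlocks every record at once.

```python
from itertools import cycle

# Toy stand-in for a deterministic symmetric cipher (NOT real crypto).
# The point is only that one key and no per-record randomness means
# identical passwords always produce identical ciphertexts.
KEY = b"server-side-secret"

def toy_encrypt(plaintext: bytes) -> bytes:
    # XOR with a repeating key; XOR is its own inverse, so the same
    # function also decrypts.
    return bytes(p ^ k for p, k in zip(plaintext, cycle(KEY)))

accounts = {
    "alice@example.com": toy_encrypt(b"123456"),
    "bob@example.com":   toy_encrypt(b"123456"),
    "carol@example.com": toy_encrypt(b"correct horse"),
}

# Without ever learning the key, an attacker can see that alice and bob
# share a password; if either password leaks (say, via a password hint),
# both accounts fall at once.
assert accounts["alice@example.com"] == accounts["bob@example.com"]

# And anyone who recovers the single key decrypts every record:
assert toy_encrypt(accounts["carol@example.com"]) == b"correct horse"
```

A salted hash avoids both problems: there is no key to steal, and the per-user salt breaks the equality between records that share a password.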
Facebook said it has been combing through the passwords looking for matching username-password combinations in order to keep its users’ accounts secure. Chris Long, a Facebook security team member, confirmed this in a comment posted on Krebs on Security.
“We used the plaintext passwords that had already been worked out by researchers,” Long said. “We took those recovered plaintext passwords and ran them through the same code that we use to check your password at log-in time.
“We’re proactive about finding sources of compromised passwords on the Internet. Through practice, we’ve become more efficient and effective at protecting accounts with credentials that have been leaked, and we use an automated process for securing those accounts.”
In the meantime, a 20-year-old from the Netherlands who goes by the handle Lucb1e built a tool that facilitates a search of the stolen data for a user’s email address or partial address. The tool is still online, though Lucb1e said it won’t be forever.
“Searching a 10GB file is not trivial, so instead of searching it for everyone individually, I wrote a program that does it in the background (daemon),” he wrote. “Whenever someone adds a search, it is added to the database. The daemon checks every few seconds whether any (and how many) searches have been added, and runs all searches at the same time.”
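A rough sketch of that batching idea (the function names, in-memory queue, and dump format below are assumptions, not Lucb1e’s actual code): queue incoming searches, then answer all of them in one linear scan of the large file instead of re-reading it once per query.

```python
PENDING = []   # stands in for the database of queued searches
RESULTS = {}   # search term -> matching lines

def add_search(term: str) -> None:
    """Queue a search instead of scanning the file immediately."""
    PENDING.append(term)

def run_batch(dump_path: str) -> None:
    """One linear pass over the dump answers every pending search at once."""
    terms = {t: [] for t in PENDING}
    with open(dump_path, "r", errors="ignore") as f:
        for line in f:
            for term, hits in terms.items():
                if term in line:
                    hits.append(line.rstrip("\n"))
    RESULTS.update(terms)
    PENDING.clear()
```

The cost of the scan is paid once per batch rather than once per user, which is what makes serving many lookups against a 10GB file feasible.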
Adobe was compromised between July 31 and Aug. 15, but the breach was not discovered for more than a month. Adobe disclosed the breach to its customers on Oct. 3 and has yet to provide details on how attackers were able to bypass its defenses. Krebs and Holden found 40 GB of data stolen from Adobe and other organizations on the same server used by criminals who pulled off breaches against LexisNexis and Dun & Bradstreet. These same attackers are believed to be responsible for a number of breaches using ColdFusion exploits going back to December of last year.
BlackBerry addressed a pair of serious vulnerabilities yesterday in its BlackBerry Link product, which enables users to sync content between a BlackBerry 10 device and a desktop or laptop.
The vulnerabilities lie in the Peer Manager component of Link, which provides remote file access; according to BlackBerry, the feature allows a user to access documents and files in a remote folder from their mobile device.
The risk is limited, BlackBerry said, because an exploit would require user interaction.
“Successful exploitation can require that an attacker must persuade a user on a system with BlackBerry Link installed to click on a specifically crafted link or access a webpage containing maliciously crafted code,” BlackBerry said in its advisory BSRT 2013-12. “In the alternative scenario, successful exploitation requires that a local attacker must be able to log in to the affected system while the BlackBerry Link remote file access feature is running under a different user account.”
An attacker could then read or modify any data from the remote folder accessible through Link.
BlackBerry said that on multiuser systems, a successful exploit could elevate privileges, allowing other users on the same system to access the remote folder belonging to the account under which Peer Manager is running. Remote attackers could gain the same access by enticing a user to click on a malicious link or surf to an infected website.
“An attacker must persuade a lower privileged local user to click on a specifically crafted link or access a webpage containing maliciously crafted code while the user is logged into their account on a machine on which a higher privileged user has previously logged in, resulting in Peer Manager running under the higher privileged user account,” BlackBerry said.
The existing state of affairs in which government agencies and intelligence services work to insert backdoors into various hardware, software and networks is not only a problem in terms of civil rights but also represents a serious security risk to most users and the Internet itself, a recent report by the Citizen Lab says. And, the revelations of the U.S. surveillance programs of the last few months may also spawn a variety of copycat programs in emerging countries.
Some of the more explosive and troubling revelations to come out of the steady flow of NSA leaks this year have involved the U.S. government’s efforts to compromise encryption standards, software programs and Internet infrastructure used by millions and millions of people as part of intelligence-gathering operations. Documents made public in recent months show efforts by the NSA to influence the standards process at NIST, specifically regarding the Dual EC DRBG random number generator, which NIST has warned developers to stop using. There also have been allegations that the agency and its allies are tapping unencrypted links between data centers owned by Google and Yahoo, a revelation that has infuriated security engineers at Google.
Security experts have long argued that inserting backdoors in widely deployed software or hardware for law-enforcement or intelligence-gathering purposes is not just questionable with regard to civil rights but also harms the security of the entire system. The presence of a vulnerability in an application or piece of hardware opens that target up to exploitation by anyone, not just the people who inserted the backdoor. The Citizen Lab report, called “Shutting the Backdoor,” by Ron Deibert of the University of Toronto, argues that the NSA revelations have brought this problem into sharp focus.
“Quite apart from these concerns about privacy and potential abuse of unchecked power is an additional concern around the security implications of backdoors. Building backdoors into devices and infrastructure may be useful to law enforcement and intelligence agencies, but it also provides a built-in vulnerability for those who would otherwise seek to exploit them and in doing so actually contributes to insecurity for the whole of society that depends on that infrastructure,” Deibert says in the report.
“In 2008 Citizen Lab researchers discovered that the Chinese version of the popular VOIP product, Skype (called TOM-Skype) had been coded with a special surveillance system in place such that whenever certain keywords were typed into the chat client, data would be sent to a server in mainland China (presumably to share with China’s security services). Upon further investigation, it was discovered that the server onto which the chat messages were stored was not password protected, allowing for the download of millions of personal chats, many of which included credit card numbers, business transactions, and other private information.”
In addition to the unintended consequences these programs can produce, the Citizen Lab report also says that Edward Snowden’s revelations of the NSA’s methods and techniques could provide a blueprint for regimes in emerging countries that are interested in exerting more control over their communications infrastructure.
“No doubt one implication of Snowden’s revelations will be the spurring on of numerous national efforts to regain control of information infrastructures through national competitors to Google, Verizon, and other companies implicated, not to mention the development of national signals intelligence programs that attempt to duplicate the US model,” Deibert writes in the report.
“Already prior to the revelations, numerous companies faced complex and, at times, frustrating national ‘lawful access’ requests from newly emerging markets. Many countries of the global South lack even basic safeguards and accountability mechanisms around the operations of security services, and their demands on the private sector could contribute to serious human rights violations and other forms of repression.”
Deibert argues that while there are legitimate uses for lawful intercept technologies, they should be deployed sparingly and with great oversight.
“Those lawful access provisions that are still required should be infrequent and strictly controlled with rigorous oversight and public accountability provisions. Direct tapping of entire services wholesale should be eliminated. Not only will this protect civil liberties and prevent the concentration of power in unchecked hands, it will ensure that we are not doing more to undermine our own security in an overzealous surveillance quest,” he writes.
Image from Flickr photos of anyjazz65.
Adobe patched two vulnerabilities in its ColdFusion web application server today, and also released a Flash Player update that patched a remote code execution bug in the software.
A company spokesperson said none of the vulnerabilities are being exploited, nor are they related to the recent theft of Adobe source code and up to 150 million customer records, including passwords.
One of the ColdFusion bugs, however, was reported by Alex Holden of Hold Security; Holden is one of the experts who uncovered the data lost in the Adobe breach along with blogger Brian Krebs. Krebs reported today that one of the now-patched ColdFusion bugs was a zero-day being used by attackers earlier this year to break into a number of companies.
The security hotfix for ColdFusion is the most critical, according to Adobe; it applies to versions 10, 9.0.2, 9.0.1 and 9.0 on Windows, Mac OS X and Linux. Adobe said it patched a cross-site scripting vulnerability that could be remotely exploited by an attacker with credentials when the CFIDE directory is exposed. The other bug could permit unauthorized remote read access, Adobe said.
Adobe also updated Flash Player to version 11.9.900.117 for Windows and Mac OS X, and 11.2.202.310 for Linux. The patches fix flaws that could crash Flash Player and enable an attacker to remotely take control of the underlying system hosting the software.
Both products have been patched multiple times this year. ColdFusion is of particular interest because of its involvement in the massive October breach. The attackers were able to access source code for ColdFusion, along with Acrobat, Publisher, Photoshop and other Adobe products. More than 150 million customer records were also accessed, including unsalted passwords.
ColdFusion has been patched several times by Adobe this year, going as far back as Jan. 4, when the company reported that exploits were in the wild for unpatched vulnerabilities in the software. Vulnerabilities were patched again in May, weeks after cloud-hosting company Linode revealed it had been breached by attackers using a ColdFusion zero day, with customer records, including payment card information, lost. Before that, on Dec. 11 of last year, Adobe patched a sandbox permissions flaw in ColdFusion, weeks after an out-of-band patch resolved a denial-of-service vulnerability.
BOSTON – If you’re looking for tangible information sharing success stories around attack intelligence, some might point to the prompt publishing of indicators of compromise (IOC) as an example. Security and forensics companies will publish MD5 hashes of malware, IP addresses involved in attacks, malware signatures and more artifacts relevant to a breach or malware outbreak. Problem is, all of the artifacts are made available post-attack, and don’t satisfy the need for real-time data on intrusions, in particular for sensitive industries such as financial services or utilities.
“I want to hear about this stuff as, or before, it impacts me,” said James Caulfield, advanced threat protection program manager for the Federal Reserve Bank in Boston. “[IOCs] just isn’t fast enough.”
Among Caulfield’s responsibilities at the Fed is the coordination of threat information among other regional Federal Reserve banks. He hopes the development of standards for the collection and dissemination of threat intelligence such as CRITs (Collaborative Research Into Threats) and STIX (Structured Threat Information Expression) will eventually pave the way for automated information sharing between machines.
“We need to set standards and fill this stuff out, but in agnostic ways, not in ways that say you need to buy this stuff from Vendor X. That way lies madness,” Caulfield said. “We want this to be as close to open source as feasible. So we’re not tying people to vendors or to products. We’re not looking to sell anything; we’re looking to claw back some of the space we lost.”
Caulfield was speaking about the Advanced Cyber Security Center (ACSC) which hosted its annual conference at the Fed here Tuesday. The ACSC is a cross-sector group of more than 30 public and private sector security officers who meet monthly to facilitate information sharing. Standards such as CRITs and STIX define how attack intelligence is analyzed and transmitted, respectively, and while some industry groups such as the Financial Services Information Sharing and Analysis Center (FS-ISAC) have succeeded in collaborating and sharing sanitized information, ACSC hopes to see that kind of sharing not only between the government and private sector, but horizontally across private companies, even competitors.
“Threat sharing gives us that lead time to get in front of threats,” Caulfield said, referring in particular to targeted attacks and APT-style intrusions where well-funded attackers use commodity malware and custom Trojans to access networks and steal data. “When we get indicators like domains or emails to give us some insight into what we’re looking for, we can begin to not only scour through our instrumentation and logs to see how that happened here, but we can also begin to alert toward those types of things and put in a much stronger net to catch this stuff as it comes at us.”
The challenges to success, however, aren’t necessarily in a desire to share information, but legal hurdles put in place by lawyers or executives afraid to share too much information with a competitor in the same industry.
Phyllis Schneck, deputy undersecretary for cybersecurity in the National Protection and Programs Directorate at the U.S. Department of Homeland Security and today’s keynote speaker, pointed out the obvious truth that attackers often do a better job of sharing information than defenders do.
“We face adversaries with no lawyers, no rules, most of them met in prison and they have plenty of money,” Schneck said. “We have to fight that by taking our infrastructure back. When machines talk, there isn’t any reason they can’t tell each other something bad is coming. Global situational awareness is the dream and we plan to live that dream by engaging people to get their trust and incentivize companies to build in something into their networks that talks to these protocols.”
CRITs, for example, is essentially a threat repository developed by MITRE Corp., where indicators of compromise are studied and enumerated. STIX, meanwhile, is the language by which this information can be transmitted to those who need it in a sanitized fashion that is still useful to others.
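The sanitization step the ACSC members describe can be sketched simply: before an indicator leaves an organization, victim-identifying context is stripped while the actionable fields (domains, hashes, first-seen dates) are preserved. The field names below are hypothetical illustrations, not the actual CRITs or STIX schemas.

```python
# Sketch of sanitizing a threat indicator before sharing it with peers.
# Field names are hypothetical, for illustration only -- they are not
# taken from the real CRITs or STIX data models.

SHAREABLE_FIELDS = {"indicator_type", "value", "first_seen", "campaign_alias"}

def sanitize(indicator: dict) -> dict:
    """Keep only fields safe to share; drop anything naming the victim."""
    return {k: v for k, v in indicator.items() if k in SHAREABLE_FIELDS}

raw = {
    "indicator_type": "domain",
    "value": "dll.freshdns.org",          # malicious C2 domain (safe to share)
    "first_seen": "2013-11-10",
    "campaign_alias": "DeputyDog",
    "victim_org": "Acme Corp",            # identifies the victim -- strip it
    "internal_host": "hr-fileserver-02",  # internal detail -- strip it
}

shared = sanitize(raw)
```

The point of the design is that the receiving organizations get something they can alert on (the domain) without learning who was hit or how.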
MITRE chief security officer Gary Gagnon likened this kind of sharing to crowdsourcing, with the caveat that what gets shared should be threat indicators rather than vulnerabilities or compromises. Gagnon believes the latter are the wrong types of information to share because they don’t help organizations under attack understand their adversaries or their tactics.
“That kind of threat information can drive many things inside the enterprise,” Gagnon said. “Things like patch prioritization, training for staff and employees, and technology investments.”
In the meantime, groups such as ACSC and others continue to chase the elusive answer to information sharing where organizations are comfortable sharing with one another in a competitive environment.
“The challenges are messy. The ACSC is cross sector, and the way we’ve structured it, we’ve eliminated that and hammered it out,” Caulfield said. “We’ve structured things in such a way through a very carefully crafted NDA that we don’t use this stuff for recrimination on each other and we don’t use this stuff for a competitive advantage either. The type of stuff we’re sharing—and we do sanitize some of it just because we’re protecting the names of the innocent—it’s not a problem to us. That’s a testament to the people involved.”
The RC4 and SHA-1 algorithms have taken a lot of hits in recent years, with new attacks popping up on a regular basis. Many security experts and cryptographers have been recommending that vendors begin phasing the two out, and Microsoft on Tuesday said it is now recommending that developers deprecate RC4 and stop using the SHA-1 hash algorithm.
RC4 is among the older stream ciphers still in use today, and there have been a number of practical attacks against it, including plaintext-recovery attacks. Improvements in computing power have made many of these attacks more feasible, and so Microsoft is telling developers to drop RC4 from their applications.
“In light of recent research into practical attacks on biases in the RC4 stream cipher, Microsoft is recommending that customers enable TLS1.2 in their services and take steps to retire and deprecate RC4 as used in their TLS implementations. Microsoft recommends TLS1.2 with AES-GCM as a more secure alternative which will provide similar performance,” Microsoft’s William Peteroy said in a blog post.
“One of the first steps in evaluating the customer impact of new security research and understanding the risks involved has to do with evaluating the state of public and customer environments. Using a sample size of five million sites, we found that 58% of sites do not use RC4, while approximately 43% do. Of the 43% that utilize RC4, only 3.9% require its use. Therefore disabling RC4 by default has the potential to decrease the use of RC4 by almost forty percent.”
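Microsoft’s advice maps directly onto how developers configure TLS in their own code. As a rough sketch (not Microsoft’s guidance verbatim), a Python application can require TLS 1.2 and strip RC4 from its cipher list while preferring AES-GCM suites:

```python
import ssl

# Require TLS 1.2 and exclude RC4, preferring AES-GCM suites,
# roughly in line with Microsoft's recommendation. The OpenSSL
# cipher string here is one reasonable example, not the only one.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:HIGH:!RC4:!MD5:!aNULL")

# Verify RC4 is really gone from the negotiable ciphers.
assert not any("RC4" in c["name"] for c in ctx.get_ciphers())
```

Connections made through this context will refuse to negotiate RC4 even against a server that prefers it.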
The software company also is recommending that certificate authorities and others stop using the SHA-1 algorithm. Microsoft cited the existence of known collision attacks against SHA-1 as the main reason for advising against its use. Also, after January 2016, developers will no longer be able to use SHA-1 in code-signing or developer certificates.
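For developers, moving off SHA-1 is usually a one-line change wherever a digest is computed. The snippet below is a generic illustration using standard test input, not Microsoft-specific code:

```python
import hashlib

data = b"abc"

# Legacy digest -- collision attacks are known against SHA-1.
legacy = hashlib.sha1(data).hexdigest()

# Preferred replacement from the SHA-2 family.
modern = hashlib.sha256(data).hexdigest()

print(legacy)  # a9993e364706816aba3e25717850c26c9cd0d89d
print(modern)  # ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

Certificates are a harder problem than application code, since both the CA issuing the certificate and the software validating it must support the newer algorithm.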
Image from Flickr photos of Josh Bancroft.
Microsoft today issued eight bulletins addressing 19 separate vulnerabilities in its Windows operating system, Internet Explorer Web browser, Office, and other products.
Microsoft gave three of the bulletins its highest “critical” rating, while the remaining five received the second-most-severe “important” rating. One of the critically rated bulletins addresses an Internet Explorer zero-day vulnerability that attackers have exploited to launch watering hole attacks against an unnamed U.S.-based non-governmental organization.
The zero-day bug is fixed by MS13-090, a cumulative update for ActiveX Kill Bits. The actively exploited vulnerability, which exists in the InformationCardSigninHelper Class ActiveX control, could allow an attacker to initiate remote code execution if a user views a maliciously crafted webpage in Internet Explorer. As always, users with fewer rights could be less impacted than those with administrative rights.
Microsoft is not yet patching a second zero-day, in its Office product suite, known as the TIFF zero-day, but it has built a workaround. Researchers from SpiderLabs wrote on their blog that Microsoft’s FixIt tool should mitigate the issue until Microsoft patches it, likely with an out-of-band patch before next month’s Patch Tuesday release.
Ross Barrett, senior manager of security engineering at Rapid7, noted in an email conversation with Threatpost that Microsoft’s failure to patch the TIFF bug is frustrating, but that exploitation of the vulnerability has been very limited and targeted, confined to a specific region, and requires user interaction. He therefore said he wouldn’t worry about it too much.
Beyond these, MS13-088, Microsoft’s cumulative update for Internet Explorer, which is unrelated to the zero-days, is likely the next highest-priority fix for network operators. It resolves 10 privately reported bugs, the most severe of which could allow remote code execution if a user views a maliciously crafted webpage in Internet Explorer, granting an attacker the same rights as the current user. The impact once again depends on the level of rights the victim has in the browser.
The other critically rated bug resolves an issue in Windows’ graphics device interface and could also enable remote code execution if a user views or opens a specially crafted Windows Write file in WordPad. Again, users with fewer rights will be less impacted.
The remaining, important-rated bulletins, MS13-091 through MS13-095, resolve seven publicly and privately reported bugs: a remote code execution vulnerability in Office, an elevation of privileges flaw in Hyper-V, information disclosures in the Windows ancillary function driver and Outlook, and a denial of service problem in Windows digital signatures.
Tyler Reguly, a technical manager of security research and development at Tripwire, told Threatpost that the most interesting important-rated bugs are likely the Outlook vulnerability, which could enable port scanning; the Hyper-V vulnerability, which could allow guest-OS-to-guest-OS code execution; and an X.509 issue in schannel.dll that could allow denial of service.
“Overall, while it is only a medium-sized Patch Tuesday, pay special attention to the two 0-days and the Internet Explorer update,” wrote Wolfgang Kandek, CTO of the IT security firm Qualys, in his analysis of the patch release. “Browsers continue to be the favorite target for attackers, and Internet Explorer, with its leading market share, is one of the most visible and likely targets.”
You can read Microsoft’s full bulletin advisories here.
Microsoft's November 2013 Patch Tuesday delivers three critical bulletins and five bulletins rated important. This month's MS13-088 patches eight critical vulnerabilities and two important vulnerabilities in Internet Explorer. Overall, Microsoft is addressing 19 issues in Internet Explorer, Office and Windows itself.
Google has fixed 12 security vulnerabilities in Chrome, including six high-risk bugs. The new version of the browser includes a number of fixes for bugs discovered by external researchers as well as by Google’s own internal security team.
Two of the more serious vulnerabilities patched in Chrome include use-after-free bugs in various elements of the browser, and there also are two out of bounds reads in the browser. Those are listed as high-risk flaws, as well. But perhaps the most interesting bug fixed in the new version is a medium-risk vulnerability related to the TLS negotiation process. During that process, Chrome failed to do a check of some certificates it encountered.
Here’s a partial list of the bugs fixed in Chrome 31:
-  Medium CVE-2013-6621: Use after free related to speech input elements. Credit to Khalil Zhani ($500 reward).
-  Medium-Critical CVE-2013-2931: Various fixes from internal audits, fuzzing and other initiatives.
-  Medium CVE-2013-6629: Read of uninitialized memory in libjpeg and libjpeg-turbo. Credit to Michal Zalewski of Google.
-  Medium CVE-2013-6630: Read of uninitialized memory in libjpeg-turbo. Credit to Michal Zalewski of Google.
-  High CVE-2013-6631: Use after free in libjingle. Credit to Patrik Höglund of the Chromium project.
As part of its bug reward program, Google paid out $11,000 in bounties to external researchers.
While researchers and academics are just at the beginning of the process of trying to judge the value of a recent paper on a vulnerability in the Bitcoin protocol, some are arguing that there is a smaller point that’s being missed in all of the back and forth: There is a problem with the peer-to-peer set-up of the Bitcoin network that could be exploited for profit.
The main claim in the Cornell researchers’ Bitcoin paper is that a cartel of so-called selfish miners comprising at least one-third of the total mining population could eventually earn more than their fair share of Bitcoin revenue. That could then lead to a snowball effect that would cause other miners to join this cartel in the hope of greater financial rewards. Some other researchers and academics have disputed this claim, saying that’s not the way that people would behave in the real world and that it’s difficult to predict the behavior of large groups of individuals, especially when money is involved.
But a pair of researchers who analyzed the paper say that there is a different issue raised by the Cornell paper that is being overlooked, namely that an attacker’s position in the Bitcoin network could make a difference in the way an attack works.
“Here’s the thing: this is the first time a serious issue with Bitcoin’s consensus mechanism has exploited the peer-to-peer aspect of the system. This is a problem for our ability to reason about Bitcoin. The cryptography in Bitcoin is considered solid. We also have some ability to model and write equations about miners’ incentives and behavior. Based on this, we thought we had strong reasons to believe that ‘X% of miners can earn no more than X% of mining revenue’,” Andrew Miller of the University of Maryland and Arvind Narayanan of Princeton University wrote in a new analysis of the paper.
“But if network position can make a difference to the attacker’s prospects, all such bets are off. Weaknesses that depend on the attacker creating ‘sybil’ nodes in the network are in a very different category. Bitcoin’s P2P network is ‘open to the public.’ Nodes can come and go as they please, and are not expected to identify themselves. Running a Bitcoin node means being willing to accept connections from strangers. This makes it problematic to apply existing theoretical models to analyze the security of Bitcoin.”
Miller and Narayanan say that while it will likely take some time to determine whether the broader claims in the Cornell paper are accurate, they believe that the basic assumption that a minority cartel of miners could earn more than its proportional share of revenue is probably valid.
“The assumption that X% of the hashpower cannot earn more than X% of the revenue is almost certainly not true, once X% exceeds 33.3%,” the researchers say.
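The Cornell paper gives a closed-form expression for a selfish pool’s relative revenue as a function of its hashpower share α and the fraction γ of honest miners that build on the pool’s block during a race. The sketch below encodes that formula as this author understands it from the paper, so treat it as illustrative rather than authoritative; with γ = 0 the pool’s revenue equals α exactly at α = 1/3 and exceeds α beyond it, matching the researchers’ threshold.

```python
def selfish_revenue(alpha: float, gamma: float) -> float:
    """Relative revenue of a selfish-mining pool, per the closed-form
    result in the Eyal/Sirer paper (as understood by this author).

    alpha: pool's share of total hashpower
    gamma: fraction of honest miners that build on the pool's block
           when the pool and the honest network race
    """
    num = (alpha * (1 - alpha) ** 2 * (4 * alpha + gamma * (1 - 2 * alpha))
           - alpha ** 3)
    den = 1 - alpha * (1 + (2 - alpha) * alpha)
    return num / den

# At the one-third threshold (gamma = 0) the pool earns exactly its share...
print(round(selfish_revenue(1 / 3, 0.0), 6))   # 0.333333
# ...and beyond it, more than its share.
print(selfish_revenue(0.4, 0.0) > 0.4)         # True
```

This is exactly the “X% of the hashpower earning more than X% of the revenue” effect once X exceeds 33.3%; below the threshold the strategy loses money relative to honest mining.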
Image from Flickr photos of Btckeychain.
Microsoft announced this afternoon that the zero-day vulnerability being exploited in a watering hole attack against an unnamed U.S.-based NGO website was already scheduled to be patched in a cumulative Internet Explorer update tomorrow.
The zero-day was reported publicly on Friday by FireEye researchers, and today a few more dots were connected on the attack, which is dropping a variant of the McRAT Trojan that has been used in a number of espionage attacks targeting industrial secrets.
Microsoft promised a relatively light Patch Tuesday tomorrow that included another IE rollup, a staple of the company’s monthly security updates in 2013. Dustin Childs, a group manager in the Microsoft Trustworthy Computing group, said today that the vulnerability in an IE ActiveX Control will be patched in MS13-090 tomorrow.
In its advanced notification released last Thursday, Microsoft said the IE bulletin is rated critical because it involves flaws that can lead to remote code execution. The critical rating applies to IE 6-8 on Windows XP, IE 7-9 on Vista, IE 8-10 on Windows 7, and IE 10 on Windows 8 and 8.1; all other versions are rated important.
FireEye told Threatpost today that the attack is limited to a single U.S.-based website hosting domestic and international policy guidance. No details were available on how the site was compromised, only that victims were hit by malware in drive-by download attacks targeting an information leakage vulnerability and a memory corruption issue leading to remote code execution.
What differentiates this attack from other watering hole attacks is that victims are not subject to malicious iframes or traffic-redirects to attacker-controlled sites and further malware downloads. Instead, McRAT is injected directly into memory, a new twist on advanced targeted attacks.
“By using memory-only methods, the attack is exceptionally difficult for network defenders to detect, when trying to examine and confirm which endpoints are infected, using traditional disk-based forensics methods,” said Darien Kindlund, FireEye director of threat intelligence.
Microsoft said a number of mitigations are available to IE users until a patch is applied, namely setting security zone settings to “High” to block ActiveX Controls and Active Scripting, though users could experience some usability issues. IE can also be configured to prompt a user before running Active Scripting. The Enhanced Mitigation Experience Toolkit (EMET) is also a viable mitigation, Microsoft said.
The IE patch is one of eight bulletins scheduled for tomorrow, three of those rated critical. The scheduled security updates, however, will not include a patch for the Windows TIFF zero day being actively exploited in attacks primarily in Pakistan. The vulnerability in several Windows and Office versions is being exploited in targeted attacks against Windows XP systems running Office 2007. Microsoft released a Fix-It tool as a stopgap measure until a patch is released out of band or with the December security updates.
The developers behind OpenSSH, the suite of connectivity tools that helps users encrypt traffic on Internet sessions, acknowledged over the weekend that a memory corruption vulnerability exists in some builds of the main suite, and patched it.
If exploited, the vulnerability, which can be found in both the 6.2 and 6.3 builds of OpenSSH, could allow code execution with the privileges of the authenticated user, according to a security advisory on the group’s site.
The main problem stems from the post-authentication SSHD process when the AES-GCM cipher is selected during key exchange. Because AES-GCM provides its own integrity protection, the message authentication code (MAC) goes unused, yet “a cleanup callback was still being invoked during a re-keying operation.”
Hash-based message authentication codes (MACs) ordinarily afford the SSHD better security through data integrity protection.
In this case, however, the MAC context was never initialized, and the callback address being invoked came from a previous batch of heap contents.
According to the advisory, exploitation would likely require an attacker to “pre-load the heap with a useful callback address,” and enforcing address-space layout randomization (ASLR) on the SSHD and the shared libraries it depends on makes exploitation more difficult. The OpenSSH developers, part of the OpenBSD project, have fixed the underlying bug.
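For administrators who cannot upgrade immediately, the stopgap is to remove the AES-GCM ciphers from the server’s accepted cipher list so the vulnerable code path is never reached. The exact list below is illustrative and should be matched to the ciphers your installed OpenSSH version supports:

```
# /etc/ssh/sshd_config -- restrict sshd to non-GCM ciphers until upgraded.
# Cipher list is an example; adjust to what your OpenSSH build supports.
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
```

After editing the file, reload sshd for the change to take effect.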
Addressing the vulnerability was a quick turnaround for OpenSSH: Markus Friedl, a German programmer and OpenSSH developer, found the vulnerability and reported it just a day earlier, on Thursday.
Microsoft may be promising a relatively light Patch Tuesday release tomorrow, but that doesn’t mean its researchers and developers won’t have their hands full. Not only is Microsoft busy on a patch for the TIFF zero day vulnerability reported two weeks ago, but now another previously unreported Internet Explorer bug has landed on its queue.
Last Friday, researchers at FireEye reported a new watering hole attack against an unnamed U.S.-based non-governmental organization (NGO) website hosting domestic and international policy guidance. FireEye director of threat intelligence Darien Kindlund said it is still unclear how the attackers compromised the website. The exploit code is targeting a new bug in IE and infecting victims via drive-by downloads. That exploit targets an information leakage vulnerability as well as a memory issue in IE that allows remote code execution. Various versions of IE on Windows XP and Windows 7 are impacted by these attacks, which can be mitigated by Microsoft’s Enhanced Mitigation Experience Toolkit (EMET), FireEye said.
The payload—a variant of the McRAT Trojan—is injected directly into memory, making detection and forensic investigation a challenge. FireEye has also made a connection between these attacks, which it is calling Operation Ephemeral Hydra, and the earlier DeputyDog attack. DeputyDog, so named after a string found in the attack code, surfaced in September and was limited at the time to a number of popular Japanese media websites. The malware was used to gather intelligence, stealing documents and system data from computers belonging to government, high tech and manufacturing companies in Japan. Kindlund said it is unclear what types of information are being stolen in this current campaign.
“Based on our visibility into this threat actor’s targeting preferences, it appears the threat actor is interested in industry-specific intelligence,” Kindlund said.
So far, FireEye said, the new IE zero day is limited to this one unnamed website, which, unlike other watering hole attacks, is not spiked with a malicious iframe or redirecting compromised machines to an attacker-controlled site where more malware is downloaded. Instead, the shellcode is directly injected into memory, which is a new twist on these types of targeted attacks.
“By using memory-only methods, the attack is exceptionally difficult for network defenders to detect, when trying to examine and confirm which endpoints are infected, using traditional disk-based forensics methods,” Kindlund said.
The malicious payload goes through a number of steps before it is executed, including three levels of XOR decoding before the McRAT variant, identified as Trojan.APT.9002, takes over the infected machine. Those various types of encoding and decoding introduce complexity that could stymie traditional detection technologies, Kindlund said, adding that the malware is also fairly lightweight, meaning the victim would not notice anything happening on the machine.
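Layered XOR decoding of the kind Kindlund describes is simple to illustrate. The keys, layer count and sample bytes below are hypothetical; FireEye did not publish the actual keys in this report:

```python
def xor_layer(data: bytes, key: int) -> bytes:
    """Apply a single-byte XOR pass over a buffer."""
    return bytes(b ^ key for b in data)

def decode(payload: bytes, keys: list) -> bytes:
    """Undo several XOR passes in reverse order, as a multi-stage
    dropper would before handing control to the final payload."""
    for key in reversed(keys):
        payload = xor_layer(payload, key)
    return payload

# Hypothetical three-layer encoding, mirroring the "three levels of
# XOR decoding" described in the article.
keys = [0x9F, 0x3C, 0x55]
plain = b"MZ...payload..."

encoded = plain
for key in keys:
    encoded = xor_layer(encoded, key)

assert decode(encoded, keys) == plain
```

Real samples typically use rolling or multi-byte keys rather than the fixed single-byte keys shown here, but the unwrapping logic (peel the layers in reverse) is the same, and each layer changes the byte patterns that signature-based scanners look for.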
Injecting the malware into memory, however, does present some limitations to the attackers. The lack of persistence, for example, means the attackers must exfiltrate data quickly, before the user reboots the machine and wipes the Trojan from memory.
“This means that the attacker must quickly get onto the infected endpoint and exfiltrate data or move laterally within the compromised network before the endpoint is rebooted/reset,” Kindlund said. “If the endpoint reboots or resets, then the malware is completely wiped from the endpoint and the attacker will have to re-infect the system again.”
“Alternatively, the use of this non-persistent first stage may suggest that the attackers were confident that their intended targets would simply revisit the compromised website and be re-infected,” FireEye said.
This means of infection also limits the amount of automation involved, Kindlund said.
“This sort of activity requires more man-power; because of this, it appears the attacker turned the exploit ‘on’ and ‘off’ at will throughout this campaign, in order to limit the number of infected systems, because they did not have proper resources to scale and automate this portion of the attack (it was all human driven),” Kindlund said. “As a result of turning the exploit ‘on’ and ‘off’, it also made network defender’s jobs more difficult to verify the attack was still occurring throughout the campaign.”
This version of Trojan.APT.9002 connects to a command and control server housed at 111[.]68[.]9[.]93 using port 443, FireEye said, though it uses a different protocol for communication than previous versions. The researchers were also able to piece together, from an analysis of the MD5 hash, that it shared behaviors of other McRAT variants, including a domain dll[.]freshdns[.]org used in the DeputyDog campaign.
“We believe there is a link between this campaign and DeputyDog; however, we do not have enough evidence to confirm that the threat actor is one and the same,” Kindlund said. “Possible theories at this time are that: 1) multiple, related threat actors are reusing the same infrastructure, or 2) the same threat actor is responsible for both campaigns.”
D-Link’s 2760N (DSL-2760U-BN) routers allegedly contain a number of stored and reflective cross-site scripting (XSS) vulnerabilities.
Researcher Liad Mizrachi said he contacted D-Link to disclose the details of the bugs to them on six separate occasions – twice in August, twice in September, and once in October – but that the vendor has failed to respond to any of the disclosures. Threatpost reached out to D-Link for comment but it did not respond to the request before publication.
The multiple vulnerabilities are present in various sections of the router’s Web user interface.
According to a posting on the Full Disclosure mailing list, the 2760N router’s XSS bugs exist in the Web interfaces for NTP settings, parental control, URL filtering, NAT port triggering, IP filtering, interface grouping, Simple Network Management Protocol (SNMP) configuration, incoming IP filtering, policy routing, print server, SAMBA configuration, and Wi-Fi SSID settings.
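Stored and reflected XSS in a router UI typically comes down to echoing a request parameter into the page without encoding. The parameter name below is hypothetical, but the fix—HTML-escaping anything user-controlled before rendering it—is generic:

```python
from html import escape

# A hypothetical router settings page echoing a user-supplied value.
# Escaping turns markup characters into entities, so an injected
# <script> tag is displayed as text instead of being executed.
def render_filter_row(url_filter: str) -> str:
    return "<td>%s</td>" % escape(url_filter, quote=True)

payload = "<script>alert(1)</script>"
print(render_filter_row(payload))
# <td>&lt;script&gt;alert(1)&lt;/script&gt;</td>
```

The stored variants Mizrachi reports are worse than the reflected ones, because the payload persists in the router’s configuration and fires every time an administrator views the affected settings page.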
These bugs follow a more serious backdoor vulnerability that emerged last month and could have given an attacker the ability to access affected routers and perform any action he or she pleased. D-Link is reportedly in the process of patching that bug.