Threatpost for B2B

The First Stop For Security News

Yahoo Encrypts Data Center Links, Boosts Other Services

Thu, 04/03/2014 - 10:26

Yahoo certainly has taken its share of knocks during the past nine months of surveillance revelations and Snowden leaks for its encryption shortcomings. But the bruises are healing and the company is slowly working its way back into good graces.

After months of lagging on encryption, Yahoo closed some of the gap with a number of enhancements announced last night by new chief information security officer Alex Stamos.

Chief among the improvements is that as of Monday, traffic moving between Yahoo data centers is encrypted. This, along with a lack of email encryption, was an area where critics were especially harsh on Yahoo after top-secret documents revealed the National Security Agency was able to sniff communications between Yahoo and Google data centers. The Washington Post reported at the time that a combined initiative between the NSA and Britain’s GCHQ called MUSCULAR allowed the intelligence agencies to copy data from the companies’ fiber-optic cables outside the U.S. Google, meanwhile, announced in November it had turned on encryption between its data centers.

“In light of reports that governments have directly tapped Internet backbones to obtain secret access to millions of people’s private communications, it’s become clear that routine use of encryption is an important basic measure for privacy and security online,” said Seth Schoen, senior staff technologist at the Electronic Frontier Foundation. “Without it, any network operator (from the smallest Wi-Fi node to the largest Internet backbone companies), or anyone who can coerce or infiltrate one, can easily see the intimate details of what people are saying online.”

As for email, Yahoo was one of the last major web-based email providers to turn on SSL by default, doing so in January after an initial foray in November when users were given the option to turn it on manually. Stamos said yesterday that in the last month, Yahoo turned on encryption of its email service between Yahoo’s servers and other email providers who support the SMTPTLS standard.

Yahoo has also turned on HTTPS encryption on its home page, search queries that run on the home page and most of its properties. Yahoo supports TLS 1.2, Perfect Forward Secrecy and 2048-bit RSA encryption for its home page, mail and digital magazines, Stamos said. He added that users can initiate encrypted sessions for Yahoo News, Sports, Finance and Good Morning America on Yahoo by typing HTTPS in the URL. He also promised an encrypted version of Yahoo Messenger in the coming months.

“Our goal is to encrypt our entire platform for all users at all time, by default,” Stamos said.

Also on the road map, Stamos said, Yahoo plans to implement HSTS, Perfect Forward Secrecy and Certificate Transparency in the near future.

“One of our biggest areas of focus in the coming months is to work with and encourage thousands of our partners across all of Yahoo’s hundreds of global properties to make sure that any data that is running on our network is secure,” Stamos said. “Our broader mission is to not only make Yahoo secure, but improve the security of the overall web ecosystem.”

Forward secrecy has long been advocated by security and privacy experts as an important failsafe to secure data and communications. The technology keeps the content of old encrypted connections private even if the encryption key is lost or stolen in the future.
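A quick way to see which TLS cipher suites offer this property is to check their key-exchange algorithm: suites using ephemeral (EC)DHE key exchange provide forward secrecy, while plain RSA key transport does not. A minimal sketch using Python's standard `ssl` module (the exact suites listed depend on the local OpenSSL build):

```python
import ssl

# Build a default client context and inspect the cipher suites it offers.
ctx = ssl.create_default_context()

# Suites whose key exchange is ephemeral Diffie-Hellman (ECDHE/DHE) provide
# forward secrecy: past session keys cannot be recovered even if the server's
# long-term private key is later stolen. (TLS 1.3 suites are ephemeral by
# design and use different names, e.g. TLS_AES_128_GCM_SHA256, so a simple
# prefix check like this one understates the real count.)
forward_secret = [c["name"] for c in ctx.get_ciphers()
                 if c["name"].startswith(("ECDHE", "DHE"))]

print(f"{len(forward_secret)} of {len(ctx.get_ciphers())} "
      "offered suites use ephemeral key exchange")
```

Servers that prioritize these suites in their configuration get forward secrecy for every client that supports them.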

Yahoo was criticized heavily for its lack of encryption on its services, which experts said facilitated the NSA’s ability to snoop on traffic, and harmed users’ ability to keep their identities and personal information secure from criminals operating on the web. While it doesn’t stop the government or law enforcement from obtaining user data via court orders or warrants, it does hamper their efforts to hack into servers and communication lines.

Meanwhile, the EFF’s Encrypt the Web report, which it continues to update, demonstrated Yahoo’s glaring encryption weaknesses in the wake of the initial Snowden leaks. Since then, most of the technology companies surveyed have tightened up their encryption practices, leaving only carriers such as Verizon, Comcast and AT&T in the rear.

“We commend Yahoo for taking these steps, and hope today’s announcements will continue to foster a recognition that encryption is an industry standard,” the EFF’s Schoen said.

DNS-Based Amplification Attacks Key on Home Routers

Wed, 04/02/2014 - 15:51

DNS provider Nominum has published new data on DNS-based DDoS amplification attacks that are using home and small office routers as a jumping-off point.

The provider said that in February alone, more than five million home routers were used to generate attack traffic; that number represents more than one-fifth of the 24 million routers online that have open DNS proxies.

The impact hits Internet service providers (ISPs) especially hard because amplification attacks not only consume bandwidth, but also drive up support costs and impact customer confidence in their ISP, Nominum said.

“Existing in-place DDoS defenses do not work against today’s amplification attacks, which can be launched by any criminal who wants to achieve maximum damage with minimum effort,” said Sanjay Kapoor, CMO and SVP of Strategy, Nominum. “Even if ISPs employ best practices to protect their networks, they can still become victims, thanks to the inherent vulnerability in open DNS proxies.”

Craig Young, senior security researcher with Tripwire, said the problem can largely be traced to weak default configurations on the home and SOHO routers.

“They shouldn’t have open DNS resolvers on the Net,” Young said. “Routers are designed so that someone inside the network can send a DNS request to the router, which passes that on to the ISP, which sends the request back to you inside the network. That’s fine and proper. What’s not fine is when someone else can send a message to an external interface and have the router send that to the ISP.”

Outsiders can take advantage of these open resolvers, spoof traffic and amplify the size of the request coming back. With a botnet, for example, this can quickly escalate and cause a denial-of-service condition against large organizations that criminals can find particularly effective in extortion schemes or hacktivism.

“DDoS has always relied on address spoofing so anything can be targeted and traffic cannot be traced to its origin; but as with any exploit, attackers continuously refine their tactics,” Nominum said in its report. “The new and dangerous DNS DDoS innovation has emerged, where attackers exploit a backdoor into provider networks: tens of millions of open DNS proxies scattered across the Internet. A few thousand can create Gigabits of unwanted traffic.”
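The mechanics behind amplification are simple: a DNS query is tiny, while the response to it (especially an ANY query against a zone with large records) can be orders of magnitude bigger, and UDP lets the attacker forge the victim's address as the source. A minimal sketch of the size asymmetry, building a raw query with Python's standard library; the 3,000-byte response size is an illustrative assumption, not a measured figure:

```python
import struct

def dns_query(name: str, qtype: int = 255) -> bytes:
    """Build a minimal DNS query packet (qtype 255 = ANY)."""
    # Header: ID, flags (recursion desired), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

query = dns_query("example.com")
print(len(query))  # a 29-byte request

# An ANY response carrying DNSSEC records can plausibly run ~3,000 bytes
# (assumed figure for illustration), so the amplification factor is roughly:
print(round(3000 / len(query)))
```

Because the query's source address is spoofed, the full-size response lands on the victim, multiplying the attacker's bandwidth by that factor.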

In the past 18 months, the volume of bad traffic used in DDoS attacks has skyrocketed to unprecedented levels. A year ago, a DDoS attack launched against Spamhaus reached 300 Gbps, causing the blacklist service to drop offline for periods of time. Earlier this year, that threshold was surpassed when traffic optimization firm CloudFlare reported it had fought back a 400 Gbps DDoS attack for one of its European customers. The attackers took advantage of a weakness in the Network Time Protocol (NTP) to amplify the volume of that attack, while in the Spamhaus attack, the attackers took advantage of open DNS resolvers.

Nominum said ISPs can resolve the spoofing issue, in particular with regard to home routers.

“Solving the open resolver problem is straightforward: configure production resolvers properly (restrict access to IP ranges controlled by the server operator) and seek out long forgotten and malicious servers and shut them down,” Nominum said. “This is not to suggest it’s a trivial undertaking, this advice has been around a long time and the problem persists.”

Tripwire’s Young said ISPs could also filter against reputation lists which share attack information among providers to recognize DNS requests for domains that are part of an attack. Those packets could then be dropped.

“It’s not hard to have a DDoS-specific system and recognize abnormal patterns, apply rate-limiting, and drop traffic,” Young said.
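Rate limiting of the kind Young describes is commonly implemented as a token bucket: each client gets a budget that refills at a fixed rate, so short bursts pass but sustained floods are dropped. A minimal per-client sketch (parameter values are illustrative, not from the article):

```python
import time

class TokenBucket:
    """Per-client token bucket: allows bursts up to `capacity`,
    sustained traffic only at `rate` requests per second."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: drop or deprioritize the query

bucket = TokenBucket(rate=0.001, capacity=2)
print([bucket.allow() for _ in range(3)])  # burst of 2 allowed, third dropped
```

In a resolver, one bucket would be kept per source address (or per queried domain, to act on the reputation lists Young mentions), and packets arriving with an empty bucket would simply be discarded.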

Amazon Web Services Combing Third Parties for Exposed Credentials

Wed, 04/02/2014 - 15:01

Amazon Web Services is actively searching a number of sources, including code repositories and application stores, looking for exposed credentials that could put users’ accounts and services at risk.

A week ago, a security consultant in Australia said that as many as 10,000 secret Amazon Web Services keys could be found on Github through a simple search. And yesterday, a software developer reported receiving a notice from Amazon that his credentials were discovered on Google Play in an Android application he had built.

Raj Bala printed a copy of the notice he received from Amazon pointing out that the app was not built in line with Amazon’s recommended best practices because he had embedded his AWS Key ID (AKID) and AWS Secret Key in the app.

“This exposure of your AWS credentials within a publicly available Android application could lead to unauthorized use of AWS services, associated excessive charges for your AWS account, and potentially unauthorized access to your data or the data of your application’s users,” Amazon told Bala.

Amazon advises users who have inadvertently exposed their credentials to invalidate them and never distribute long-term AWS keys with an app. Instead, Amazon recommends requesting temporary security credentials.
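Scans like Amazon's presumably rely on the well-documented shape of AWS credentials: access key IDs begin with "AKIA" followed by 16 upper-case alphanumerics, and secret keys are 40 base64-style characters. A minimal sketch of such a scanner (the secret-key pattern is a heuristic and will produce false positives; the sample credentials below are AWS's own documentation examples, not real secrets):

```python
import re

# Access key IDs have a documented shape: "AKIA" plus 16 upper-case
# alphanumerics. Secret keys are only a heuristic match: any 40
# consecutive base64-style characters.
AKID_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")
SECRET_RE = re.compile(
    r"(?<![A-Za-z0-9/+=])([A-Za-z0-9/+=]{40})(?![A-Za-z0-9/+=])")

def find_aws_credentials(text):
    """Return (access key IDs, candidate secret keys) found in text."""
    return AKID_RE.findall(text), SECRET_RE.findall(text)

# AWS's documentation example credentials, embedded the way they often
# leak: in a committed config file.
sample = """
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
"""
keys, secrets = find_aws_credentials(sample)
print(keys)     # ['AKIAIOSFODNN7EXAMPLE']
print(secrets)  # ['wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY']
```

Running patterns like these over one's own repositories before pushing is a cheap way to catch the mistake Amazon is warning about.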

Rich Mogull, founder of consultancy Securosis, said this is a big deal.

“Amazon is being proactive and scanning common sources of account credentials, and then notifying customers,” Mogull said. “They don’t have to do this, especially since it potentially reduces their income.”

Mogull knows of what he speaks. Not long ago, he received a similar notice from Amazon regarding his AWS account, only his warning was a bit more dire—his credentials had been exposed on GitHub and someone had fired up unauthorized EC2 instances in his account.

Mogull wrote an extensive description of the incident on the Securosis blog, explaining how he was building a proof-of-concept for a conference presentation, storing it on GitHub, and was done in by a test file that contained his Access Key and Secret Key in a comment line.

Turns out someone was using the additional 10 EC2 instances to do some Bitcoin mining and the incident cost Mogull $500 in accumulated charges.

Amazon told an Australian publication that it will continue its efforts to seek out these exposed credentials on third-party sites such as Google Play and Github.

“To help protect our customers, we operate continuous fraud monitoring processes and alert customers if we find unusual activity,” iTnews quoted Amazon.

Said Mogull: “It isn’t often we see a service provider protecting their customers from error by extending security beyond the provider’s service itself. Very cool.”

Researchers Divulge 30 Oracle Java Cloud Service Bugs

Wed, 04/02/2014 - 13:26

Upset with the vulnerability handling process at Oracle, researchers yesterday disclosed more than two dozen outstanding issues with the company’s Java Cloud Service platform.

Researchers at Security Explorations published two reports, complete with proof of concept codes, explaining 30 different vulnerabilities in the platform, including implementation and configuration weaknesses, problems that could let users access other users’ applications, and an issue that could leave the service open to a remote code execution attack.

The Polish firm released the information after Oracle apparently failed to produce a monthly status report, a document that usually surfaces around the 24th of each month, for the reported vulnerabilities in March.

Adam Gowdiak, the company’s founder and CEO, believes that Oracle is wavering on its cloud vulnerability handling policies.

“The company openly admits it cannot promise whether it will be communicating resolution of security vulnerabilities affecting their cloud data centers in the future,” Gowdiak said in an open letter posted to Security Explorations’ site on Tuesday.

Researchers dug up the following bugs in both US1 and EMEA1 versions of Oracle Java Cloud data centers.

  • The first block of issues, 1-16, stems from an insecure implementation of the perpetually fickle Java Reflection API in the service’s chief server, WebLogic. If exploited, the vulnerabilities could lead to a full compromise of the Java security sandbox.
  • The second batch of vulnerabilities, issues 17-20, ties into a problem with the platform’s whitelisting functionality, which can also be bypassed thanks to the Java Reflection API.
  • Issue 21 revolves around shared WebLogic administrator credentials. Usernames and passwords, which are usually encrypted, can be decrypted with a “standard API,” and are also present across the platform.
  • Issue 22 pertains to the insecurity of the platform’s Policy Store. Sensitive usernames and passwords – often times those belonging to users with admin privileges – are exposed in plaintext form.
  • Issue 23 exposes several WebLogic applications to the public internet. These internal applications are usually only accessible by authenticated Oracle Access Managers (OAM), but a problem with the platform could put them at risk.
  • Issue 24 is a directory traversal vulnerability that could let anyone on the public internet access files that wouldn’t otherwise be deployed on WebLogic.
  • Issue 25 stems from the platform running a year-old version of Java SE, a problem that opens the platform up to even more vulnerabilities, since none of the fixes from the tail end of 2012 and 2013 have been applied yet.
  • The 26th issue also involves an authentication bypass, this time via the T3 protocol. While it sounds a little more complicated to exploit, Security Explorations researchers discovered it’s possible to send “a specially crafted object instance to a remote server identified by a given object identifier (OID) value and successfully impersonate the WebLogic kernelIdentity.”
  • Issue 27 makes it possible to tunnel T3 protocol requests through Oracle’s HTTP Server (OHS) to mimic HTTPS requests.
  • Issue 28 also deals with T3 protocol messages, as they relate to an out of bounds vulnerability with chunk data.

Researchers argue a remote code execution attack would be quite easy to pull off if an attacker combined several of the aforementioned vulnerabilities.

“As a result of the combination of the implementation and configuration flaws outlined… arbitrary code execution access could be gained on a WebLogic server instance hosting Java Cloud services of other users from the same regional data center,” the report, which gets much more in depth regarding attack vectors, reads.

Essentially the attack would involve having a custom .JSP (JavaServer Page) file uploaded to a target WebLogic server, which could later be called upon to trigger the execution of Java code embedded in it.

Security Explorations initially got in touch with Oracle about the preceding vulnerabilities (.PDF) in late January, but while it was waiting on Oracle’s response, it managed to find two additional issues.

Those bugs, 29 and 30 (.PDF), like several of the other 28, involve the service’s whitelisting implementation and can ultimately lead to its API being bypassed.

Oracle’s next batch of updates is set to be bundled together in its quarterly Critical Patch Update on April 15, although it’s unclear whether the vulnerabilities in Java Cloud Service, a service the company introduced in 2012 to assist businesses in managing data and building database applications across the cloud, will be addressed.

Matthew Green on the NSA and Crypto Backdoors

Wed, 04/02/2014 - 11:38

Dennis Fisher talks with Matthew Green of Johns Hopkins University about the paper he co-authored on the Extended Random extension for Dual EC DRBG and whether it could be considered a backdoor.

http://threatpost.com/files/2014/04/digital_underground_149.mp3

Download: digital_underground_149.mp3

Apple Fixes More Than 25 Flaws in Safari

Wed, 04/02/2014 - 07:20

Apple has updated its Safari browser, dropping a pile of security fixes that patch more than 25 vulnerabilities in the WebKit framework.

Many of the vulnerabilities Apple repaired in Safari can lead to remote code execution, depending upon the attack vector. There are a number of use-after-free vulnerabilities fixed in WebKit, along with some buffer overflows and other memory corruption issues. One of the vulnerabilities, CVE-2014-1289, for example, allows remote code execution.

“WebKit, as used in Apple iOS before 7.1 and Apple TV before 6.1, allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption and application crash) via a crafted web site,” the vulnerability description says.

That flaw was fixed in iOS and other products earlier this year but Apple just released the fix for Safari on Monday. Along with the 25 memory corruption vulnerabilities the company fixed, it also pushed out a patch for a separate issue in Safari that could enable an attacker to read arbitrary files on a user’s machine.

“An attacker running arbitrary code in the WebProcess may be able to read arbitrary files despite sandbox restrictions. A logic issue existed in the handling of IPC messages from the WebProcess. This issue was addressed through additional validation of IPC messages,” the Apple advisory says.

More than half of the WebKit flaws fixed in Safari 6.1.3 and 7.0.3 were discovered by the Google security team, which isn’t unusual. Google Chrome uses the WebKit framework, too, and the company’s security team is constantly looking for new vulnerabilities in it.

LinkedIn Goes After Email-Scraping Browser Plug-In

Tue, 04/01/2014 - 14:54

UPDATE: The makers of the controversial Sell Hack browser plug-in responded this afternoon to a cease-and-desist order from LinkedIn and confirmed their extension no longer works on LinkedIn pages and that all of the publicly visible data it had processed from LinkedIn profiles has been deleted.

LinkedIn sent a cease-and-desist letter Monday night to Sell Hack, maker of a JavaScript-based browser plug-in that scrapes email addresses associated with social media profiles from the web. The company markets that data to sales and marketing professionals.

“We’ve been described as sneaky, nefarious, no good, not ‘legitimate’ amongst other references by some,” the Sell Hack team said. “We’re not. We’re dads from the Midwest who like to build web and mobile products that people use.”

LinkedIn said none of its member data was put at risk by the two-month-old Sell Hack plug-in.

According to the Sell Hack website, once the browser extension is installed and a user browses to a social media profile page, a “Hack In” button is visible that will search the web for email addresses that could be associated with a particular profile.

According to another post on the Sell Hack blog: “The magic happens when you click the ‘Hack In’ button. You’ll notice the page slides down and our system starts checking publicly available data sources to return a confirmation of the person’s email address, or our best guesses.”
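Those “best guesses” likely come from the handful of address patterns most organizations use; a service can generate the permutations of a name and domain and then check each candidate against public sources. A hypothetical sketch of the guessing step only (the function name and pattern list are illustrative, not Sell Hack's actual logic):

```python
def guess_emails(first, last, domain):
    """Generate common corporate email-address patterns for a name.
    Illustrative only: real services validate these candidates against
    publicly available data before presenting a 'confirmed' address."""
    f, l = first.lower(), last.lower()
    patterns = [
        f"{f}.{l}",    # jane.doe
        f"{f}{l}",     # janedoe
        f"{f[0]}{l}",  # jdoe
        f"{f}_{l}",    # jane_doe
        f"{f}",        # jane
    ]
    return [f"{p}@{domain}" for p in patterns]

print(guess_emails("Jane", "Doe", "example.com")[0])  # jane.doe@example.com
```

The cheapness of this approach is part of why scraped profile data is valuable: a name and employer are often enough to derive a working address.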

LinkedIn’s legal team reached out to Sell Hack with its cease-and-desist last night.

“We are doing everything we can to shut Sell Hack down,” said a LinkedIn spokesperson. “Yesterday LinkedIn’s legal team delivered Sell Hack a cease and desist letter as a result of several violations. LinkedIn members who downloaded Sell Hack should uninstall it immediately and contact Sell Hack requesting that their data be deleted.”

While the issue may not be a security vulnerability, technology providers have been ultra-sensitive since the Snowden leaks began about maintaining the privacy of their users’ data, which in this case is being collected and sold without consent.

“We advise LinkedIn members to protect themselves and to use caution before downloading any third-party extension or app,” LinkedIn said. “Often times, as with the Sell Hack case, extensions can upload your private LinkedIn information without your explicit consent.”

LinkedIn is one of a handful of major technology providers that lobbied the government hard for additional transparency in reporting government requests for user data. Many of those same companies were initially accused of providing the government direct access to their servers in order to obtain user data.

Unlike other providers such as Google or Facebook, LinkedIn does not offer Web-based email or storage. Instead, its appeal to the intelligence community was its mapping of connections between its hundreds of millions of members.

LinkedIn called the transparency ban unconstitutional in September; the technology companies eventually won out in January when the Justice Department agreed to ease a gag order that prevented the companies from reporting on national-security-related data requests.

This article was updated on April 1 with additional comments from LinkedIn and the Sell Hack team.

Clapper: NSA Has Searched Databases for Information on U.S. Persons

Tue, 04/01/2014 - 14:18

UPDATE–The NSA searches the data it collects incidentally on Americans, including phone calls and emails, during the course of terrorism investigations. James Clapper, the director of national intelligence, confirmed the searches in a letter to Sen. Ron Wyden, the first time that such actions have been confirmed publicly by U.S. intelligence officials.

Clapper, the head of all U.S. intelligence agencies, said in the letter that the NSA, which is tasked with collecting intelligence on foreign nationals, has searched the data it has collected on Americans as part of its collection of foreign intelligence. The agency collects some Americans’ data, such as phone calls and emails, in the course of collecting the communications of foreign targets. But it has been unclear until now whether the NSA in fact searches those databases specifically for information on U.S. citizens.

Clapper made it clear in his letter that it does.

“As reflected in the August 2013 Semiannual Assessment of Compliance with Procedures and Guidelines Issued Pursuant to Section 702, which we declassified and released on August 21, 2013, there have been queries, using U.S. person identifiers, of communications lawfully acquired to obtain foreign intelligence by targeting non-U.S. persons reasonably believed to be located outside the U.S. pursuant to Section 702 of FISA,” Clapper said in a letter sent March 28 to Wyden (D-Ore.).

Wyden, a member of the Senate Intelligence Committee, has been a frequent critic of the NSA and its collection methods in recent years. During a hearing in January, Wyden asked whether the NSA ever had performed queries against its databases looking for information on U.S. citizens. Clapper’s letter was meant as an answer to the question. He did not say in the letter how many such searches the NSA had performed.

Responding to Clapper’s letter, Wyden and Sen. Mark Udall (D-Colo.) issued a statement, saying that the DNI’s revelations show that the NSA has been taking advantage of a loophole in the existing law.

“It is now clear to the public that the list of ongoing intrusive surveillance practices by the NSA includes not only bulk collection of Americans’ phone records, but also warrantless searches of the content of Americans’ personal communications,” Wyden and Udall said. “This is unacceptable. It raises serious constitutional questions, and poses a real threat to the privacy rights of law-abiding Americans. If a government agency thinks that a particular American is engaged in terrorism or espionage, the Fourth Amendment requires that the government secure a warrant or emergency authorization before monitoring his or her communications. This fact should be beyond dispute.

“Senior officials have sometimes suggested that government agencies do not deliberately read Americans’ emails, monitor their online activity or listen to their phone calls without a warrant. However, the facts show that those suggestions were misleading, and that intelligence agencies have indeed conducted warrantless searches for Americans’ communications using the ‘back-door search’ loophole in section 702 of the Foreign Intelligence Surveillance Act.”

Section 702 of the Foreign Intelligence Surveillance Act is the measure that governs the way that the NSA can target foreigners for intelligence collection and spells out the methods it must use to ensure that data on Americans or other so-called “U.S. persons” are not collected. The NSA also must take pains to minimize the amount of information it gathers that isn’t relevant to a foreigner who is being targeted.

Clapper said in his letter that the NSA has followed the minimization procedures when it queries its databases for information related to U.S. persons. He also said that Congress had the chance to do away with the agency’s ability to run such queries, and didn’t.

“As you know, when Congress reauthorized Section 702, the proposal to restrict such queries was specifically raised and ultimately not adopted,” the letter says.

This story was updated on April 2 to include the statement from Wyden and Udall.

DVR Infected with Bitcoin Mining Malware

Tue, 04/01/2014 - 13:57

Johannes Ullrich of the SANS Institute claims to have found malware infecting digital video recorders (DVRs) predominantly used to record footage captured by surveillance camera systems.

Oddly enough, Ullrich claims that one of the two malware binaries implicated in this attack scheme appears to be a Bitcoin miner. The other, he says, looks like an HTTP agent that likely makes it easier to download further tools or malware. However, at present, the malware seems to only be scanning for other vulnerable devices.

“D72BNr, the bitcoin miner (according to the usage info based on strings) and mzkk8g, which looks like a simplar(sp.) http agent, maybe to download additional tools easily (similar to curl/wget which isn’t installed on this DVR by default),” Ullrich wrote on SANS diary.

The researcher first became aware of the malware last week after he observed a Hikvision DVR (again, commonly used to record video surveillance footage) scanning for port 5000. Yesterday, Ullrich was able to recover the malware samples referenced above. A link to the samples is included in the SANS diary posting.

Ullrich noted that sample analysis is ongoing with the malware, but that it appears to be an ARM binary, which is an indication that the malware is targeting devices rather than your typical x86 Linux server. Beyond that, the malware is also scanning for Synology (network attached storage) devices exposed on port 5000.

“Using our DShield Sensors, we initially found a spike in scans for port 5000 a while ago,” Ullrich told Threatpost via email. “We associated this with a vulnerability in Synology Diskstation devices which became public around the same time. To further investigate this, we set up some honeypots that simulated Synology’s web admin interface, which listens on port 5000.”

Upon analyzing the results from the honeypot, Ullrich says he found a number of scans: some originating from Shodan but many others still originating from these DVRs.

“At first, we were not sure if that was the actual device scanning,” Ullrich admitted. “In NAT (network address translation) scenarios, it is possible that the DVR is visible from the outside, while a different device behind the same IP address originated the scans.”

Further examination revealed that the DVRs in question were indeed originating the scans.

These particular DVRs, Ullrich noted, are used in conjunction with security cameras, and so they’re often exposed to the internet to give employees the ability to monitor the security cameras remotely. Unlike normal “TiVo” style DVRs, these run on a stripped down version of Linux. In this case, the malware was specifically compiled to run in this environment and would not run on a normal Intel based Linux machine, he explained.
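The architecture a Linux binary targets can be read directly from its ELF header: byte 5 gives the endianness and the 16-bit `e_machine` field at offset 18 names the CPU. A minimal sketch of that check, the kind of triage that would flag a sample as ARM rather than x86 (the 20-byte header below is fabricated for illustration):

```python
import struct

# ELF e_machine values for a few common architectures.
MACHINES = {0x03: "x86", 0x28: "ARM", 0x3E: "x86-64", 0xB7: "AArch64"}

def elf_machine(data):
    """Return the target architecture of an ELF binary, or None."""
    if len(data) < 20 or data[:4] != b"\x7fELF":
        return None
    little_endian = data[5] == 1  # EI_DATA byte: 1 = little-endian
    fmt = "<H" if little_endian else ">H"
    (machine,) = struct.unpack(fmt, data[18:20])  # e_machine field
    return MACHINES.get(machine, hex(machine))

# A fabricated 20-byte little-endian header prefix with e_machine = ARM:
header = b"\x7fELF\x01\x01\x01" + b"\x00" * 9 + struct.pack("<HH", 2, 0x28)
print(elf_machine(header))  # ARM
```

In practice `file` or `readelf -h` performs the same lookup; an ARM result on a piece of server-scanning malware is a strong hint it was compiled for embedded devices like these DVRs.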

This is the malware sample’s HTTP request:

[Image: DVR malware HTTP request]

The malware is also extracting the firmware version details of the devices it is scanning for. Those requests look like this:

[Image: firmware scan request]

While Ullrich notes that the malware is merely scanning now, he believes that future exploits are likely.

 

With Extended Random, Cracking Dual EC in BSAFE ‘Trivial’

Tue, 04/01/2014 - 12:56

UPDATE: Known theoretical attacks against TLS using the troubled Dual EC random number generator—something an intelligence agency might try its hand at—are in reality a bit more challenging than we’ve been led to believe.

The addition of the Extended Random extension to RSA Security’s BSAFE cryptographic libraries, for example, where Dual EC is the default random number generator, makes those challenges a moot point for the National Security Agency.

“By adding the extension, cracking Dual EC is trivial for TLS,” said Matt Fredrikson, one of the researchers who yesterday published a paper called “On the Practical Exploitability of Dual EC in TLS Implementations,” which explained the results of a study determining the costs of exploiting the Dual EC RNG where TLS is deployed.

The presence of Extended Random in BSAFE means the incursion into RSA Security by the NSA went beyond the inclusion of a subverted NIST-approved technology, as is alleged in the documents leaked by Edward Snowden, and an alleged $10 million payout by the government. Its presence confirms that the NSA will leave no stone unturned to ensure its surveillance efforts are successful.

BSAFE was a prime target since it was used by developers not only in commercial and FIPS-approved software, but also in a number of open source packages. An attacker with a presence on the wire, say at an ISP or a key switching point on the Internet, could just passively sit and watch client or server handshake messages and be able to decrypt traffic at a relatively low cost.

Ironically, Extended Random is not turned on by default in BSAFE, and RSA says it is present only in BSAFE Java versions. Fredrikson confirmed the researchers did not see support for the extension compiled into the C/C++ version they studied despite the fact that the BSAFE documentation says it is supported.

“We say as much in the paper: ‘The BSAFE-C library documentation indicates that both watermarking and extended random are supported in some versions of the library; however, the version we have appears to have been compiled without this support,’” he said. “We only had the documentation and compiled libraries to work from–not the source code. If the documentation was mistaken, we would have no clear way of knowing.”

By attacking Dual EC minus Extended Random, the researchers were able to crack the C/C++ version of BSAFE in seconds, whereas Microsoft Windows SChannel and OpenSSL took anywhere from 90 minutes to three hours to crack. In SChannel, for example, less of Dual EC’s output is sent, making it more difficult to crack.

“Dual EC, as NIST printed it, allows for additional entropy to be mixed into the computation,” Fredrikson said. “OpenSSL utilizes that alternative, where BSAFE did not. That’s significant because the attacker would have to guess what randomness is given by OpenSSL.”

Dual EC, written by the NSA, was a questionable choice from the start for inclusion in such an important encryption tool as BSAFE. Experts such as Bruce Schneier said it was slower than available alternatives and contained a bias that led many, Schneier included, to believe it was a backdoor.

Extended Random, meanwhile, was an IETF draft proposed by the Department of Defense for acceptance as a standard. Written by Eric Rescorla, an expert involved in the design of HTTPS and currently with Mozilla, Extended Random was never approved as an IETF standard and its window as a draft for consideration has long expired.

Yet, it found its way into BSAFE. In a Reuters article yesterday that broke the story, RSA Security CTO Sam Curry declined to say whether RSA was paid by the NSA to include the extension in BSAFE; he added that it has been removed from BSAFE within the last six months. In September, NIST and RSA recommended that developers move away from using Dual EC in products because it was no longer trustworthy.

The researchers tested Dual EC in BSAFE C, BSAFE Java, Microsoft Windows SChannel I and II, and OpenSSL. BSAFE C fell in fewer than four seconds, while BSAFE Java took close to 64 minutes. Extended Random was not enabled in their experiments, but the researchers said it was simple to extrapolate its impact; they concluded the extension makes Dual EC much cheaper to exploit, by a factor of more than 65,000 in BSAFE Java, for example.

The DOD’s rationale for Extended Random was a claim that the nonces used should be twice as long as the security level, e.g., 256-bit nonces for 128-bit security, the researchers said in the study. In practice, the extension does nothing to enhance the randomness of the numbers Dual EC generates; instead, the longer nonces expose more of the generator’s output, exacerbating the bias that already makes it easier for an attacker to guess that output.
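The arithmetic behind that factor can be sketched with a toy cost model. The numbers below are assumptions drawn from public descriptions of Dual EC over P-256 (each output block is a 32-byte x-coordinate with its top two bytes truncated, leaving 30 usable bytes), and the nonce sizes are illustrative, not figures from the paper. The point is that a nonce long enough to expose a complete generator block spares the attacker from guessing the unseen remainder of a partial block, and the resulting cost ratio lands in the same ballpark as the researchers' factor of 65,000.

```python
# Back-of-the-envelope cost comparison (assumed parameters; see the
# hedges above). An attacker must always guess the 16 truncated bits
# of a block, plus any bytes of the final block the nonce cuts off.
BLOCK = 30           # usable bytes per Dual EC P-256 output block
TRUNCATED_BITS = 16  # bits removed from every block before output

def candidate_states(nonce_bytes):
    # If the nonce ends mid-block, each unseen byte of that block
    # multiplies the number of candidate internal states by 256.
    leftover = (-nonce_bytes) % BLOCK   # unseen bytes of the last block
    return 2 ** (TRUNCATED_BITS + 8 * leftover)

standard = candidate_states(28)   # ordinary 28-byte TLS random
extended = candidate_states(60)   # e.g. 28 + 32 extra Extended Random bytes
print(standard // extended)       # → 65536, roughly the cited factor
```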

“When transmitting more randomness, that translates to faster attacks on session keys,” Fredrikson said. “That’s pretty bad. I haven’t seen anything quite like this.”

This article was updated on April 2 with clarifications throughout.

Why Full Disclosure Still Matters

Tue, 04/01/2014 - 10:58

When the venerable Full Disclosure security mailing list shut down abruptly last month, many in the security community were surprised. But a lot of people, even those who had been members of the list for a long time, greeted the news with a shrug. Twitter, blogs and other outlets had obviated the need for mailing lists, they said. But Fyodor, the man who wrote Nmap, figured there was still a need for a public list where people could share their thoughts openly, so he decided to restart Full Disclosure, and he believes the security community will be better for it.

Mailing lists such as Full Disclosure, Bugtraq and many others once were a key platform for communication and the dissemination of new research and vulnerability information in the security community. Many important discoveries first saw the light of day on these lists and they served as forums for debates over vulnerability disclosure, vendor responses, releasing exploit code and any number of other topics.

But the lists also could be full of flame wars, name-calling and all kinds of other useless chaff. Still, Fyodor, whose real name is Gordon Lyon, said he sees real value in the mailing list model, especially in today’s environment where critical comments or information that a vendor might deem unfavorable can be erased from a social network in a second, never to be seen again.

“Lately web-based forums and social networks have gained in popularity, and they can offer fancy layout and great features such as community rating of posts. But mailing lists still have them beat in decentralization and resiliency to censorship. A mail sent to the new full disclosure list is immediately remailed to more than 7,000 members who then all have their own copy which can’t be quietly retracted or edited,” Fyodor said via email. “And even when John shut down the old list, the messages (more than 91,000) stayed in our inboxes and on numerous web archives such as SecLists.org. With centralized web systems, the admins can be forced to take down or edit posts, or can lose interest (or suffer a technical failure) and shut down the site, taking down all the old messages with it.”

John Cartwright, one of the creators of Full Disclosure, shut down the list in March after 12 years of operation because, he said, he had tired of dealing with one list member’s repeated requests to remove messages from the list’s archives. Legal threats from vendors and others were not uncommon on Full Disclosure, and Fyodor, who maintains one of the many Full Disclosure mirrors and archives online, said he had received his share of those threats as well.

Asked whether he expected the legal threats to continue, he said he did, but that it wouldn’t matter.

“Yes, but we have already been dealing with them as we were already the most popular web archive for the old Full Disclosure list.  Also, this isn’t an ‘everything goes’ forum where people can post blatantly illegal content.  If folks start posting pirated software or other people’s credit card and social security numbers, we’ll take those down from the archive or not let them through in the first place.  But the point of this list is for network security information, and we will stand up against vendors who use legal threats and intimidation to try and hide the evidence of their shoddy and insecure products,” he said.

Since Fyodor rebooted the list last week, it has revived quickly, with researchers returning to posting their advisories and vendors notifying users about new patches. Fyodor said he’s hopeful that the list will continue to have an important place in the community for years to come.

“I think it is important for the community to have a vendor-neutral outlet like this for detailed discussion of security vulnerabilities and exploitation techniques,” he said.

Image from Flickr photos of Thanh Kim