Feed aggregator

DNS-Based Amplification Attacks Key on Home Routers

Threatpost for B2B - Wed, 04/02/2014 - 15:51

DNS provider Nominum has published new data on DNS-based DDoS amplification attacks that are using home and small office routers as a jumping-off point.

The provider said that in February alone, more than five million home routers were used to generate attack traffic; that number represents more than one-fifth of the 24 million routers online that have open DNS proxies.

The impact hits Internet service providers (ISPs) especially hard because amplification attacks not only consume bandwidth, but also drive up support costs and impact customer confidence in their ISP, Nominum said.

“Existing in-place DDoS defenses do not work against today’s amplification attacks, which can be launched by any criminal who wants to achieve maximum damage with minimum effort,” said Sanjay Kapoor, CMO and SVP of Strategy, Nominum. “Even if ISPs employ best practices to protect their networks, they can still become victims, thanks to the inherent vulnerability in open DNS proxies.”

Craig Young, senior security researcher with Tripwire, said the problem can largely be traced to weak default configurations on the home and SOHO routers.

“They shouldn’t have open DNS resolvers on the Net,” Young said. “Routers are designed so that someone inside the network can send a DNS request to the router, which passes that on to the ISP, which sends the request back to you inside the network. That’s fine and proper. What’s not fine is when someone else can send a message to an external interface and have the router send that to the ISP.”

Outsiders can take advantage of these open resolvers, spoof traffic and amplify the size of the request coming back. With a botnet, for example, this can quickly escalate and cause a denial-of-service condition against large organizations that criminals can find particularly effective in extortion schemes or hacktivism.
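The economics of the amplification described above can be sketched in a few lines of Python. The query construction follows the standard DNS wire format; the 3,000-byte response is an illustrative assumption standing in for a large ANY response, not a measured figure:

```python
import struct

def build_dns_query(domain, qtype=255):
    """Build a minimal DNS query packet (qtype 255 = ANY, historically
    favored by attackers because it elicits the largest responses)."""
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in domain.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # qtype, qclass IN
    return header + question

def amplification_factor(query_bytes, response_bytes):
    """Ratio of response size to query size -- the 'amplification'."""
    return len(response_bytes) / len(query_bytes)

query = build_dns_query("example.com")
# A large ANY response (DNSSEC keys, TXT records, etc.) can run to
# several kilobytes; 3,000 bytes here is an illustrative figure.
simulated_response = b"\x00" * 3000
print(len(query))                                        # 29-byte request
print(round(amplification_factor(query, simulated_response)))
```

A 29-byte spoofed request producing a 3,000-byte reply aimed at the victim is roughly a 100x multiplier, which is why a few thousand open proxies can generate gigabits of unwanted traffic.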

“DDoS has always relied on address spoofing so anything can be targeted and traffic cannot be traced to its origin; but as with any exploit, attackers continuously refine their tactics,” Nominum said in its report. “The new and dangerous DNS DDoS innovation has emerged, where attackers exploit a backdoor into provider networks: tens of millions of open DNS proxies scattered across the Internet. A few thousand can create Gigabits of unwanted traffic.”

In the past 18 months, the volume of bad traffic used in DDoS attacks has skyrocketed to unprecedented levels. A year ago, DDoS attacks launched against Spamhaus reached 300 Gbps, causing the blacklist service to drop offline for periods of time. Earlier this year, that threshold was surpassed when traffic optimization firm CloudFlare reported it had fought back a 400 Gbps DDoS attack for one of its European customers. The attackers took advantage of a weakness in the Network Time Protocol (NTP) to amplify the volume of that attack, while in the Spamhaus attack, the attackers took advantage of open DNS resolvers.

Nominum said ISPs can resolve the spoofing issue, in particular with regard to home routers.

“Solving the open resolver problem is straightforward: configure production resolvers properly (restrict access to IP ranges controlled by the server operator) and seek out long forgotten and malicious servers and shut them down,” Nominum said. “This is not to suggest it’s a trivial undertaking, this advice has been around a long time and the problem persists.”
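On a BIND resolver, the configuration Nominum describes (restricting recursion to IP ranges controlled by the operator) maps to something like the following fragment; the acl name and address ranges here are placeholders, not values from the report:

```conf
// named.conf fragment: answer recursive queries only for our own ranges.
// The acl name and address blocks below are placeholders.
acl "trusted-clients" {
    192.0.2.0/24;      // example customer range
    localhost;
};

options {
    recursion yes;
    allow-recursion { trusted-clients; };
    allow-query-cache { trusted-clients; };
};
```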

Tripwire’s Young said ISPs could also filter against reputation lists which share attack information among providers to recognize DNS requests for domains that are part of an attack. Those packets could then be dropped.

“It’s not hard to have a DDoS-specific system and recognize abnormal patterns, apply rate-limiting, and drop traffic,” Young said.
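A per-source token bucket is one common way to implement the rate limiting Young mentions. This is a minimal sketch rather than production DNS response rate limiting, and the rate and burst thresholds are made up for illustration:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow at most `rate` responses per second per source IP, with
    bursts up to `burst`. Thresholds here are illustrative only."""
    def __init__(self, rate=10.0, burst=20.0):
        self.rate, self.burst = rate, burst
        self.tokens = defaultdict(lambda: burst)   # start each source full
        self.stamp = defaultdict(float)            # last-seen timestamp

    def allow(self, src_ip, now=None):
        now = time.monotonic() if now is None else now
        elapsed = now - self.stamp[src_ip]
        self.stamp[src_ip] = now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens[src_ip] = min(self.burst,
                                  self.tokens[src_ip] + elapsed * self.rate)
        if self.tokens[src_ip] >= 1.0:
            self.tokens[src_ip] -= 1.0
            return True
        return False   # drop: this source exceeded its budget

limiter = TokenBucket(rate=10.0, burst=20.0)
# A source firing 100 queries in the same instant only gets its burst of 20.
allowed = sum(limiter.allow("203.0.113.5", now=0.0) for _ in range(100))
print(allowed)  # 20
```

Real deployments would typically do this in the resolver or at the network edge (and combine it with the reputation-list filtering Young describes), but the shape of the decision is the same.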

Amazon Web Services Combing Third Parties for Exposed Credentials

Threatpost for B2B - Wed, 04/02/2014 - 15:01

Amazon Web Services is actively searching a number of sources, including code repositories and application stores, looking for exposed credentials that could put users’ accounts and services at risk.

A week ago, a security consultant in Australia said that as many as 10,000 secret Amazon Web Services keys could be found on Github through a simple search. And yesterday, a software developer reported receiving a notice from Amazon that his credentials were discovered on Google Play in an Android application he had built.

Raj Bala posted a copy of the notice he received from Amazon pointing out that the app was not built in line with Amazon’s recommended best practices because he had embedded his AWS Key ID (AKID) and AWS Secret Key in the app.

“This exposure of your AWS credentials within a publicly available Android application could lead to unauthorized use of AWS services, associated excessive charges for your AWS account, and potentially unauthorized access to your data or the data of your application’s users,” Amazon told Bala.

Amazon advises users who have inadvertently exposed their credentials to invalidate them and never distribute long-term AWS keys with an app. Instead, Amazon recommends requesting temporary security credentials.
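The kind of scan Amazon and the Australian consultant ran can be approximated with regular expressions over source text. The patterns below reflect the widely known AKIA access-key-ID format; they are heuristics rather than an official Amazon specification, and the key in the example is Amazon’s documented fake example key:

```python
import re

# An AWS Access Key ID is 20 characters beginning with "AKIA"; secret keys
# are 40 base64-like characters. These are commonly used heuristics,
# not an official Amazon specification.
AKID_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")
SECRET_RE = re.compile(r"(?i)aws(.{0,20})?['\"][0-9a-zA-Z/+]{40}['\"]")

def find_exposed_keys(text):
    """Return any strings that look like embedded AWS credentials."""
    return AKID_RE.findall(text) + [m.group(0) for m in SECRET_RE.finditer(text)]

# A deliberately fake key ID embedded in source code, as in the GitHub leaks.
sample = 'key = "AKIAIOSFODNN7EXAMPLE"  # oops, committed to the repo'
print(find_exposed_keys(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

Running something like this over a repository before pushing is a cheap way to catch exactly the mistake described above.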

Rich Mogull, founder of consultancy Securosis, said this is a big deal.

“Amazon is being proactive and scanning common sources of account credentials, and then notifying customers,” Mogull said. “They don’t have to do this, especially since it potentially reduces their income.”

Mogull knows of what he speaks. Not long ago, he received a similar notice from Amazon regarding his AWS account, only his warning was a bit more dire—his credentials had been exposed on GitHub and someone had fired up unauthorized EC2 instances in his account.

Mogull wrote an extensive description of the incident on the Securosis blog explaining how he was building a proof-of-concept for a conference presentation, storing it on Github, and was done in because a test file he was using against blocks of code contained his Access Key and Secret Key in a comment line.

Turns out someone was using the additional 10 EC2 instances to do some Bitcoin mining and the incident cost Mogull $500 in accumulated charges.

Amazon told an Australian publication that it will continue its efforts to seek out these exposed credentials on third-party sites such as Google Play and Github.

“To help protect our customers, we operate continuous fraud monitoring processes and alert customers if we find unusual activity,” iTnews quoted Amazon.

Said Mogull: “It isn’t often we see a service provider protecting their customers from error by extending security beyond the provider’s service itself. Very cool.”

Researchers Divulge 30 Oracle Java Cloud Service Bugs

Threatpost for B2B - Wed, 04/02/2014 - 13:26

Upset with the vulnerability handling process at Oracle, researchers yesterday disclosed more than two dozen outstanding issues with the company’s Java Cloud Service platform.

Researchers at Security Explorations published two reports, complete with proof-of-concept code, explaining 30 different vulnerabilities in the platform, including implementation and configuration weaknesses, problems that could let users access other users’ applications, and an issue that could leave the service open to a remote code execution attack.

The Polish firm released the information after Oracle apparently failed to produce a monthly status report, a document that usually surfaces around the 24th of each month, for the reported vulnerabilities in March.

Adam Gowdiak, the company’s founder and CEO, believes that Oracle is on the fence regarding its cloud vulnerability handling policies.

“The company openly admits it cannot promise whether it will be communicating resolution of security vulnerabilities affecting their cloud data centers in the future,” Gowdiak said in an open letter posted to Security Explorations’ site on Tuesday.

Researchers dug up the following bugs in both US1 and EMEA1 versions of Oracle Java Cloud data centers.

  • The first block of issues, 1-16, stems from an insecure implementation of the perpetually fickle Java Reflection API in the service’s chief server, WebLogic. If exploited, the vulnerabilities could lead to a full compromise of the Java security sandbox.
  • The second batch of vulnerabilities, issues 17-20, ties into a problem with the platform’s whitelisting functionality, which can also be bypassed thanks to the Java Reflection API.
  • Issue 21 revolves around shared WebLogic administrator credentials. Usernames and passwords, which are usually encrypted, can be decrypted with a “standard API,” and are also present across the platform.
  • Issue 22 pertains to the insecurity of the platform’s Policy Store. Sensitive usernames and passwords – often times those belonging to users with admin privileges – are exposed in plaintext form.
  • Issue 23 exposes several WebLogic applications to the public internet. These internal applications are usually only accessible by authenticated Oracle Access Managers (OAM), but a problem in the platform could put them at risk.
  • Issue 24 is a directory traversal vulnerability that could let anyone on the public internet access files that wouldn’t otherwise be deployed on WebLogic.
  • Issue 25 involves a year-old version of Java SE, a problem that opens the platform up to even more vulnerabilities, since none of the fixes from the tail end of 2012 and from 2013 has been applied yet.
  • The 26th issue also involves an authentication bypass, this time via the T3 protocol. While it sounds a little more complicated to exploit, Security Explorations researchers discovered it’s possible to send “a specially crafted object instance to a remote server identified by a given object identifier (OID) value and successfully impersonate the WebLogic kernelIdentity.”
  • Issue 27 makes it possible to tunnel T3 protocol requests through Oracle’s HTTP Server (OHS) to mimic HTTPS requests.
  • Issue 28 also deals with T3 protocol messages, as they relate to an out-of-bounds vulnerability with chunk data.

Researchers argue a remote code execution attack would be quite easy to pull off if an attacker combined several of the aforementioned vulnerabilities.

“As a result of the combination of the implementation and configuration flaws outlined… arbitrary code execution access could be gained on a WebLogic server instance hosting Java Cloud services of other users from the same regional data center,” the report, which gets much more in depth regarding attack vectors, reads.

Essentially the attack would involve having a custom .JSP (JavaServer Page) file uploaded to a target WebLogic server, which could later be called upon to trigger the execution of Java code embedded in it.

Security Explorations initially got in touch with Oracle about the preceding vulnerabilities (.PDF) in late January, but while it was waiting on Oracle’s response, managed to find two additional issues.

Those bugs, 29 and 30 (.PDF), like several of the other 28, involve the service’s whitelisting implementation and can ultimately lead to its API being bypassed.

Oracle’s next batch of updates is set to be bundled together in its quarterly Critical Patch Update on April 15, although it’s unclear if the vulnerabilities in Java Cloud Service, a service the company introduced in 2012 to assist businesses in managing data and building database applications across the cloud, will be addressed.

Matthew Green on the NSA and Crypto Backdoors

Threatpost for B2B - Wed, 04/02/2014 - 11:38

Dennis Fisher talks with Matthew Green of Johns Hopkins University about the paper he co-authored on the Extended Random extension for Dual EC DRBG and whether it could be considered a backdoor.


Download: digital_underground_149.mp3

Apple Fixes More Than 25 Flaws in Safari

Threatpost for B2B - Wed, 04/02/2014 - 07:20

Apple has updated its Safari browser, dropping a pile of security fixes that patch more than 25 vulnerabilities in the WebKit framework.

Many of the vulnerabilities Apple repaired in Safari can lead to remote code execution, depending upon the attack vector. There are a number of use-after-free vulnerabilities fixed in WebKit, along with some buffer overflows and other memory corruption issues. One of the vulnerabilities, CVE-2014-1289, for example, allows remote code execution.

“WebKit, as used in Apple iOS before 7.1 and Apple TV before 6.1, allows remote attackers to execute arbitrary code or cause a denial of service (memory corruption and application crash) via a crafted web site,” the vulnerability description says.

That flaw was fixed in iOS and other products earlier this year but Apple just released the fix for Safari on Monday. Along with the 25 memory corruption vulnerabilities the company fixed, it also pushed out a patch for a separate issue in Safari that could enable an attacker to read arbitrary files on a user’s machine.

“An attacker running arbitrary code in the WebProcess may be able to read arbitrary files despite sandbox restrictions. A logic issue existed in the handling of IPC messages from the WebProcess. This issue was addressed through additional validation of IPC messages,” the Apple advisory says.

More than half of the WebKit flaws fixed in Safari 6.1.3 and 7.0.3 were discovered by the Google security team, which isn’t unusual. Google Chrome uses the WebKit framework, too, and the company’s security team is constantly looking for new vulnerabilities in it.

Analysis: Financial cyber threats in 2013. Part 1: phishing

Secure List feed for B2B - Wed, 04/02/2014 - 04:05
It has been quite a few years since cybercriminals started actively stealing money from user accounts at online stores, e-payment systems and online banking systems.

LinkedIn Goes After Email-Scraping Browser Plug-In

Threatpost for B2B - Tue, 04/01/2014 - 14:54

UPDATE: The makers of the controversial Sell Hack browser plug-in responded this afternoon to a cease-and-desist order from LinkedIn and confirmed their extension no longer works on LinkedIn pages and that all of the publicly visible data it had processed from LinkedIn profiles has been deleted.

LinkedIn sent a cease-and-desist letter Monday night to Sell Hack, a JavaScript-based browser plug-in that scrapes email addresses associated with social media profiles from the web. The company markets that data to sales and marketing professionals.

“We’ve been described as sneaky, nefarious, no good, not ‘legitimate’ amongst other references by some,” the Sell Hack team said. “We’re not. We’re dads from the Midwest who like to build web and mobile products that people use.”

LinkedIn said none of its member data was put at risk by the two-month-old Sell Hack plug-in.

According to the Sell Hack website, once the browser extension is installed and a user browses to a social media profile page, a “Hack In” button is visible that will search the web for email addresses that could be associated with a particular profile.

According to another post on the Sell Hack blog: “The magic happens when you click the ‘Hack In’ button. You’ll notice the page slides down and our system starts checking publicly available data sources to return a confirmation of the person’s email address, or our best guesses.”

LinkedIn’s legal team reached out to Sell Hack with its cease-and-desist last night.

“We are doing everything we can to shut Sell Hack down,” said a LinkedIn spokesperson. “Yesterday LinkedIn’s legal team delivered Sell Hack a cease and desist letter as a result of several violations. LinkedIn members who downloaded Sell Hack should uninstall it immediately and contact Sell Hack requesting that their data be deleted.”

While the issue may not be a security vulnerability, technology providers have been ultra-sensitive about maintaining the privacy of their users’ data since the Snowden leaks began; in this case, that data is being collected and sold without consent.

“We advise LinkedIn members to protect themselves and to use caution before downloading any third-party extension or app,” LinkedIn said. “Often times, as with the Sell Hack case, extensions can upload your private LinkedIn information without your explicit consent.”

LinkedIn is one of a handful of major technology providers that lobbied the government hard for additional transparency in reporting government requests for user data. Many of those same companies were initially accused of providing the government direct access to servers in order to obtain user data.

Unlike other providers such as Google or Facebook, LinkedIn does not offer Web-based email or storage. Instead, its appeal to the intelligence community was its mapping of connections between its hundreds of millions of members.

LinkedIn called the transparency ban unconstitutional in September; the technology companies eventually won out in January when the Justice Department agreed to ease a gag order that prevented the companies from reporting on national-security-related data requests.

This article was updated on April 1 with additional comments from LinkedIn and the Sell Hack team.

Clapper: NSA Has Searched Databases for Information on U.S. Persons

Threatpost for B2B - Tue, 04/01/2014 - 14:18

UPDATE–The NSA searches the data it collects incidentally on Americans, including phone calls and emails, during the course of terrorism investigations. James Clapper, the director of national intelligence, confirmed the searches in a letter to Sen. Ron Wyden, the first time that such actions have been confirmed publicly by U.S. intelligence officials.

Clapper, the head of all U.S. intelligence agencies, said in the letter that the NSA, which is tasked with collecting intelligence on foreign nationals, has searched the data it has collected on Americans as part of its collection of foreign intelligence. The agency collects some Americans’ data, such as phone calls and emails, in the course of collecting the communications of foreign targets. But it has been unclear until now whether the NSA in fact searches those databases specifically for information on U.S. citizens.

Clapper made it clear in his letter that it does.

“As reflected in the August 2013 Semiannual Assessment of Compliance with Procedures and Guidelines Issued Pursuant to Section 702, which we declassified and released on August 21, 2013, there have been queries, using U.S. person identifiers, of communications lawfully acquired to obtain foreign intelligence by targeting non-U.S. persons reasonably believed to be located outside the U.S. pursuant to Section 702 of FISA,” Clapper said in a letter sent March 28 to Wyden (D-Ore.).

Wyden, a member of the Senate Intelligence Committee, has been a frequent critic of the NSA and its collection methods in recent years. During a hearing in January, Wyden asked whether the NSA ever had performed queries against its databases looking for information on U.S. citizens. Clapper’s letter was meant as an answer to the question. He did not say in the letter how many such searches the NSA had performed.

Responding to Clapper’s letter, Wyden and Sen. Mark Udall (D-Colo.) issued a statement saying that the DNI’s revelations show the NSA has been taking advantage of a loophole in the existing law.

“It is now clear to the public that the list of ongoing intrusive surveillance practices by the NSA includes not only bulk collection of Americans’ phone records, but also warrantless searches of the content of Americans’ personal communications,” Wyden and Udall said. “This is unacceptable. It raises serious constitutional questions, and poses a real threat to the privacy rights of law-abiding Americans. If a government agency thinks that a particular American is engaged in terrorism or espionage, the Fourth Amendment requires that the government secure a warrant or emergency authorization before monitoring his or her communications. This fact should be beyond dispute.

“Senior officials have sometimes suggested that government agencies do not deliberately read Americans’ emails, monitor their online activity or listen to their phone calls without a warrant. However, the facts show that those suggestions were misleading, and that intelligence agencies have indeed conducted warrantless searches for Americans’ communications using the ‘back-door search’ loophole in section 702 of the Foreign Intelligence Surveillance Act.”

Section 702 of the Foreign Intelligence Surveillance Act is the measure that governs the way that the NSA can target foreigners for intelligence collection and spells out the methods it must use to ensure that data on Americans or other so-called “U.S. persons” are not collected. The NSA also must take pains to minimize the amount of information it gathers that isn’t relevant to a foreigner who is being targeted.

Clapper said in his letter that the NSA has followed the minimization procedures when it queries its databases for information related to U.S. persons. He also said that Congress had the chance to do away with the agency’s ability to run such queries, and didn’t.

“As you know, when Congress reauthorized Section 702, the proposal to restrict such queries was specifically raised and ultimately not adopted,” the letter says.

This story was updated on April 2 to include the statement from Wyden and Udall.

DVR Infected with Bitcoin Mining Malware

Threatpost for B2B - Tue, 04/01/2014 - 13:57

Johannes Ullrich of the SANS Institute claims to have found malware infecting digital video recorders (DVRs) predominately used to record footage captured by surveillance camera systems.

Oddly enough, Ullrich claims that one of the two binaries of malware implicated in this attack scheme appears to be a Bitcoin miner. The other, he says, looks like an HTTP agent that likely makes it easier to download further tools or malware. However, at the present time, the malware seems to only be scanning for other vulnerable devices.

“D72BNr, the bitcoin miner (according to the usage info based on strings) and mzkk8g, which looks like a simplar(sp.) http agent, maybe to download additional tools easily (similar to curl/wget which isn’t installed on this DVR by default),” Ullrich wrote on SANS diary.

The researcher first became aware of the malware last week after he observed a Hikvision DVR (again, commonly used to record video surveillance footage) scanning for port 5000. Yesterday, Ullrich was able to recover the malware samples referenced above. You can find a link to the samples included in the SANS Diary posting.

Ullrich noted that sample analysis is ongoing with the malware, but that it appears to be an ARM binary, which is an indication that the malware is targeting devices rather than your typical x86 Linux server. Beyond that, the malware is also scanning for Synology (network attached storage) devices exposed on port 5000.
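The observation that the sample is an ARM binary comes down to the e_machine field of the ELF header. A minimal sketch of that check, using a fabricated 20-byte header in place of the real sample:

```python
import struct

# e_machine values from the ELF specification.
EM_386, EM_X86_64, EM_ARM = 3, 62, 40

def elf_machine(data):
    """Return the e_machine value of an ELF image, or None if not ELF."""
    if len(data) < 20 or data[:4] != b"\x7fELF":
        return None
    ei_data = data[5]                    # 1 = little-endian, 2 = big-endian
    endian = "<" if ei_data == 1 else ">"
    return struct.unpack_from(endian + "H", data, 18)[0]  # e_machine at offset 18

# A fabricated header standing in for the DVR sample (little-endian, EXEC, ARM).
fake_arm_elf = b"\x7fELF" + bytes([1, 1, 1, 0]) + b"\x00" * 8 + struct.pack("<HH", 2, EM_ARM)
print(elf_machine(fake_arm_elf) == EM_ARM)  # True
```

The same field is what `file` reads when it reports "ELF 32-bit LSB executable, ARM", and an x86 server would show EM_386 or EM_X86_64 instead.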

“Using our DShield Sensors, we initially found a spike in scans for port 5000 a while ago,” Ullrich told Threatpost via email. “We associated this with a vulnerability in Synology Diskstation devices which became public around the same time. To further investigate this, we set up some honeypots that simulated Synology’s web admin interface which listens on port 5000.”

Upon analyzing the results from the honeypot, Ullrich says he found a number of scans: some originating from Shodan but many others still originating from these DVRs.

“At first, we were not sure if that was the actual device scanning,” Ullrich admitted. “In NAT (network address translation) scenarios, it is possible that the DVR is visible from the outside, while a different device behind the same IP address originated the scans.”

Further examination revealed that the DVRs in question were indeed originating the scans.

These particular DVRs, Ullrich noted, are used in conjunction with security cameras, and so they’re often exposed to the internet to give employees the ability to monitor the security cameras remotely. Unlike normal “TiVo” style DVRs, these run on a stripped down version of Linux. In this case, the malware was specifically compiled to run in this environment and would not run on a normal Intel based Linux machine, he explained.

This is the malware sample’s HTTP request:

[Image: DVR malware HTTP request]

The malware is also extracting the firmware version details of the devices it is scanning for. Those requests look like this:

[Image: firmware scan request]

While Ullrich notes that the malware is merely scanning now, he believes that future exploits are likely.


With Extended Random, Cracking Dual EC in BSAFE ‘Trivial’

Threatpost for B2B - Tue, 04/01/2014 - 12:56

UPDATE: Known theoretical attacks against TLS using the troubled Dual EC random number generator (something an intelligence agency might try its hand at) are in reality a bit more challenging than we’ve been led to believe.

The addition of the Extended Random extension to RSA Security’s BSAFE cryptographic libraries, for example, where Dual EC is the default random number generator, makes those challenges a moot point for the National Security Agency.

“By adding the extension, cracking Dual EC is trivial for TLS,” said Matt Fredrikson, one of the researchers who yesterday published a paper called “On the Practical Exploitability of Dual EC in TLS Implementations,” which explained the results of a study determining the costs of exploiting the Dual EC RNG where TLS is deployed.

The presence of Extended Random in BSAFE means the incursion into RSA Security by the NSA went beyond the inclusion of a subverted NIST-approved technology, as is alleged in the documents leaked by Edward Snowden, and an alleged $10 million payout by the government. Its presence confirms that the NSA will leave no stone unturned to ensure its surveillance efforts are successful.

BSAFE was a prime target since it was used by developers not only in commercial and FIPS-approved software, but also in a number of open source packages. An attacker with a presence on the wire, say at an ISP or a key switching point on the Internet, could just passively sit and watch client or server handshake messages and be able to decrypt traffic at a relatively low cost.

Ironically, Extended Random is not turned on by default in BSAFE, and RSA says it is present only in BSAFE Java versions. Fredrikson confirmed the researchers did not see support for the extension compiled into the C/C++ version they studied despite the fact that the BSAFE documentation says it is supported.

“We say as much in the paper: ‘The BSAFE-C library documentation indicates that both watermarking and extended random are supported in some versions of the library; however, the version we have appears to have been compiled without this support,’” he said. “We only had the documentation and compiled libraries to work from–not the source code. If the documentation was mistaken, we would have no clear way of knowing.”

By attacking Dual EC minus Extended Random, the researchers were able to crack the C/C++ version of BSAFE in seconds, whereas Microsoft Windows SChannel and OpenSSL took anywhere from 90 minutes to three hours to crack. In SChannel, for example, less of Dual EC’s output is sent, making it more difficult to crack.

“Dual EC, as NIST printed it, allows for additional entropy to be mixed into the computation,” Fredrikson said. “OpenSSL utilizes that alternative, where BSAFE did not. That’s significant because the attacker would have to guess what randomness is given by OpenSSL.”

Dual EC, written by the NSA, was a questionable choice from the start for inclusion in such an important encryption tool as BSAFE. Experts such as Bruce Schneier said it was slower than available alternatives and contained a bias that led many, Schneier included, to believe it was a backdoor.

Extended Random, meanwhile, was an IETF draft proposed by the Department of Defense for acceptance as a standard. Written by Eric Rescorla, an expert involved in the design of HTTPS and currently with Mozilla, Extended Random was never approved as an IETF standard and its window as a draft for consideration has long expired.

Yet, it found its way into BSAFE. In a Reuters article yesterday that broke the story, RSA Security CTO Sam Curry declined to say whether RSA was paid by the NSA to include the extension in BSAFE; he added that it has been removed from BSAFE within the last six months. In September, NIST and RSA recommended that developers move away from using Dual EC in products because it was no longer trustworthy.

The researchers tested Dual EC in BSAFE C, BSAFE Java, Microsoft Windows SChannel I and II and OpenSSL. BSAFE C fell in fewer than four seconds while BSAFE Java took close to 64 minutes; and while Extended Random was not enabled for their experiments, it was simple to extrapolate its impact, the researchers said. They concluded the extension makes Dual EC much less expensive to exploit in BSAFE Java, for example, by a factor of more than 65,000.

The DOD’s reasoning for Extended Random was a claim that the nonces used should be twice as long as the security level, e.g., 256-bit nonces for 128-bit security, the researchers said in the study. Instead, Dual EC’s bias, which already makes it easier for an attacker to guess the randomness of the numbers it generates, is exacerbated by the Extended Random extension which does not enhance the randomness of numbers generated by Dual EC.

“When transmitting more randomness, that translates to faster attacks on session keys,” Fredrikson said. “That’s pretty bad. I haven’t seen anything quite like this.”

This article was updated on April 2 with clarifications throughout.

Why Full Disclosure Still Matters

Threatpost for B2B - Tue, 04/01/2014 - 10:58

When the venerable Full Disclosure security mailing list shut down abruptly last month, many in the security community were surprised. But a lot of people, even those who had been members of the list for a long time, greeted the news with a shrug. Twitter, blogs and other outlets had obviated the need for mailing lists, they said. But Fyodor, the man who wrote Nmap, figured there was still a need for a public list where people could share their thoughts openly, so he decided to restart Full Disclosure, and he believes the security community will be better for it.

Mailing lists such as Full Disclosure, Bugtraq and many others once were a key platform for communication and the dissemination of new research and vulnerability information in the security community. Many important discoveries first saw the light of day on these lists and they served as forums for debates over vulnerability disclosure, vendor responses, releasing exploit code and any number of other topics.

But the lists also could be full of flame wars, name-calling and all kinds of other useless chaff. Still, Fyodor, whose real name is Gordon Lyon, said he sees real value in the mailing list model, especially in today’s environment where critical comments or information that a vendor might deem unfavorable can be erased from a social network in a second, never to be seen again.

“Lately web-based forums and social networks have gained in popularity, and they can offer fancy layout and great features such as community rating of posts. But mailing lists still have them beat in decentralization and resiliency to censorship. A mail sent to the new Full Disclosure list is immediately remailed to more than 7,000 members who then all have their own copy which can’t be quietly retracted or edited,” Fyodor said via email. “And even when John shut down the old list, the messages (more than 91,000) stayed in our inboxes and on numerous web archives such as SecLists.org. With centralized web systems, the admins can be forced to take down or edit posts, or can lose interest (or suffer a technical failure) and shut down the site, taking down all the old messages with it.”

John Cartwright, one of the creators of Full Disclosure, shut down the list in March after 12 years of operation, saying he had tired of dealing with one list member’s repeated requests to remove messages from the list’s archives. Legal threats from vendors and others were not uncommon on Full Disclosure, and Fyodor, who maintains one of the many Full Disclosure mirrors and archives online, said he had received his share of those threats as well. Asked whether he expected the legal threats to continue, he said he did, but that it wouldn’t matter.

“Yes, but we have already been dealing with them as we were already the most popular web archive for the old Full Disclosure list.  Also, this isn’t an ‘everything goes’ forum where people can post blatantly illegal content.  If folks start posting pirated software or other people’s credit card and social security numbers, we’ll take those down from the archive or not let them through in the first place.  But the point of this list is for network security information, and we will stand up against vendors who use legal threats and intimidation to try and hide the evidence of their shoddy and insecure products,” he said.

Since Fyodor rebooted the list last week, it has revived quickly, with researchers returning to posting their advisories and vendors notifying users about new patches. Fyodor said he’s hopeful that the list will continue to have an important place in the community for years to come.

“I think it is important for the community to have a vendor-neutral outlet like this for detailed discussion of security vulnerabilities and exploitation techniques,” he said.

Image from Flickr photos of Thanh Kim

Second NSA Crypto Tool Found in RSA BSafe

Threatpost for B2B - Mon, 03/31/2014 - 15:59

A team of academics has released a study of the maligned Dual EC DRBG algorithm used in RSA Security’s BSafe and other cryptographic libraries; it includes new evidence that the National Security Agency used a second cryptographic tool alongside Dual EC DRBG in BSafe to facilitate spying.

Top-secret documents leaked by Edward Snowden allege that the NSA subverted the NIST standards process years ago in order to introduce weaknesses into the Dual EC DRBG algorithm. Reuters then reported in December that RSA Security was paid $10 million to make it the default random number generator in BSafe. Those libraries appear not only in RSA products, but in a good number of commercial and open source software packages.

The paper, “On the Practical Exploitability of Dual EC in TLS Implementations,” concludes that Dual EC can be cracked in short order given its inherent predictability weaknesses in generating random numbers. The inclusion of the Extended Random extension in BSafe dramatically reduced the time required to crack the algorithm: three hours against Microsoft Windows SChannel II versus four seconds against BSafe for C, roughly a 2,700-fold difference. The researchers also tested OpenSSL’s implementation of Dual EC and found it the most difficult to crack.

A report this morning by Reuters revealed the presence of Extended Random alongside Dual EC DRBG; rather than enhancing the randomness of the numbers the algorithm generates, as its stated mission suggests, the extension makes the backdoor easier to exploit.

Reuters said today that, while use of Extended Random isn’t pervasive, RSA built support for Extended Random in BSafe for Java in 2009. The paper explains how the researchers used $40,000 worth of servers in their experiment and that cracking BSafe for C and BSafe for Java were the most straightforward attacks.

“The BSAFE implementations of TLS make the Dual EC back door particularly easy to exploit in two ways,” the researchers wrote. “The Java version of BSAFE includes fingerprints in connections, making them easy to identify. The C version of BSAFE allows a drastic speedup in the attack by broadcasting longer strings of random bits than one would at first imagine to be possible given the TLS standards.”

Stephen Checkoway, assistant research professor at Johns Hopkins, told Reuters that the attack would have been 65,000 times faster with Extended Random.

RSA Security said it had removed Extended Random within the last six months, but its CTO Sam Curry would not comment on whether the government had paid RSA to include the protocol in BSafe as well.

RSA advised developers in September to move off Dual EC DRBG, one week after NIST made a similar recommendation. But experts were skeptical about the algorithm long before Edward Snowden and surveillance were part of the day-to-day lexicon. In 2007, cryptography experts Dan Shumow and Niels Ferguson gave a landmark presentation on weaknesses in the algorithm, and Bruce Schneier wrote a seminal essay in which he said the weaknesses in Dual EC DRBG “can only be described as a backdoor.”

Schneier wrote that the algorithm was slow and had a bias, meaning that the random numbers it generates aren’t so random. According to the new paper, an attacker who generated the constants in Dual EC, as the NSA would have if it inserted a backdoor into the RNG, would be able to predict future outputs.

“What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output,” Schneier wrote in the essay. “To put that in real terms, you only need to monitor one TLS Internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.

“The researchers don’t know what the secret numbers are,” Schneier said. “But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.”
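Schneier’s “skeleton key” description can be illustrated with a toy generator. The sketch below is a discrete-log analogue over the integers, not the actual elliptic-curve construction, and it omits the output truncation that real Dual EC performs (the truncation is what Extended Random, by exposing more raw output bits, helps an attacker undo). All constants here are made up for illustration.

```python
# Toy analogue of the Dual EC DRBG backdoor: two public constants P and Q,
# where the designer secretly knows the exponent e relating them.
p = 2**31 - 1           # small prime modulus for illustration only
Q = 7                   # public constant
e = 123457              # the designer's secret "skeleton key"
P = pow(Q, e, p)        # second public constant: P = Q^e mod p

def dual_ec_toy(state, n):
    """Emit n outputs; each output is Q^state, each new state is P^state."""
    outputs = []
    for _ in range(n):
        outputs.append(pow(Q, state, p))   # value handed to the consumer
        state = pow(P, state, p)           # internal state update
    return outputs

outputs = dual_ec_toy(state=987654321, n=3)

# Attacker sees only outputs[0], but knows e. Since P = Q^e, the next
# internal state P^s equals (Q^s)^e = outputs[0]^e mod p.
recovered_state = pow(outputs[0], e, p)
predicted = pow(Q, recovered_state, p)
assert predicted == outputs[1]   # future "random" output predicted
```

Without e, recovering the state from an output is a discrete-log problem; with e, one observed output is enough, which mirrors Schneier’s point that monitoring a single connection suffices.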

Over the weekend, Steve Marquess, founding partner at the OpenSSL Software Foundation, slammed FIPS 140-2 validation testing and speculated that the weaknesses in Dual EC DRBG were carefully planned and executed, likening them to an advanced persistent threat in a post on his personal website. FIPS 140-2 is the government standard against which cryptographic modules are certified.

Marquess said FIPS 140-2 validation prohibits changes to validated modules, calling it “deplorable.”

“That, I think, perhaps even more than rigged standards like Dual EC DRBG, is the real impact of the cryptographic module validation program,” he wrote. “It severely inhibits the naturally occurring process of evolutionary improvement that would otherwise limit the utility of consciously exploited vulnerabilities.”

He offered up the OpenSSL FIPS module as an example where vulnerabilities live on, including Lucky 13 and CVE-2014-0076.

“That’s why I’ve long been on record as saying that ‘a validated module is necessarily less secure than its unvalidated equivalent’, e.g. the OpenSSL FIPS module versus stock OpenSSL,” he said.

Dual EC DRBG, however, is not enabled by default in the OpenSSL FIPS Object Module, but its presence gives an attacker who has already compromised a server by other means the chance to enable it silently.

“As an APT agent you already have access to many target systems via multiple means such as ‘QUANTUM INTERCEPT’ style remote compromises and access to products at multiple points in the supply chain. You don’t want to install ransomware or steal credit card numbers, you want unobtrusive and persistent visibility into all electronic communications,” Marquess wrote. “You want to leave as little trace of that as possible, and the latent Dual EC DRBG implementation in the OpenSSL FIPS module aids discrete compromise. By only overwriting a few words of object code you can silently enable use of Dual EC, whether FIPS mode is actually enabled or not. Do it in live memory and you have an essentially undetectable hack.”

Marquess said the best defense is not to have the code present at all and that the OSF is trying to have it removed from its FIPS Module.

Researcher Identifies Potential Security Issues With Tesla S

Threatpost for B2B - Mon, 03/31/2014 - 14:41

The current move by auto makers to stuff their vehicles full of networked devices, Bluetooth radios and WiFi connectivity has not gone unnoticed by security researchers. Charlie Miller and Chris Valasek spent months taking apart–literally and figuratively–a Toyota Prius to see what vulnerabilities might lie inside; and they found plenty. Now, another researcher has identified a number of issues with the security of the Tesla S, including its dependence upon a weak one-factor authentication system linked to a mobile app that can unlock the car remotely.

The Tesla S is a high-end, all-electric vehicle that includes a number of interesting features, including a center console touchscreen that controls much of the car’s systems. There also is an iPhone app that allows users to control a number of the car’s functions, including the door locks, the suspension and braking system and sunroof. Nitesh Dhanjani found that when new owners sign up for an account on the Tesla site, they must create a six-character password. That password is then used to login to the iPhone app.

Dhanjani discovered that the Tesla site doesn’t seem to have a function to limit the number of login attempts on a user account, so an attacker potentially could try to brute force a user’s password. An attacker also could phish a user to get her password and then, if he had access to the user’s iPhone, log in to the Tesla app and control the vehicle’s systems. The attacker also could use the Tesla API to check the location of the user’s vehicle, even without the iPhone app.
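The gap Dhanjani describes, an endpoint that accepts unlimited password guesses, is the kind of thing a simple server-side throttle closes. A minimal sketch of per-account attempt limiting (hypothetical names and thresholds; Tesla’s actual backend is not public):

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5        # failed tries allowed per window
WINDOW_SECONDS = 300    # sliding window length

_failures = defaultdict(list)   # username -> timestamps of recent failures

def allow_login_attempt(username, now=None):
    """Return False if the account has too many recent failed attempts."""
    now = time.time() if now is None else now
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent            # drop expired entries
    return len(recent) < MAX_ATTEMPTS

def record_failure(username, now=None):
    """Call after each failed password check."""
    _failures[username].append(time.time() if now is None else now)
```

Throttling alone does not fix the six-character password policy, and as Dhanjani notes it does nothing against phishing, but it turns an online brute-force attack from trivial into impractical.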

Dhanjani said that the attacks he’s most concerned about don’t involve brute-forcing, though. He’s more worried about attackers running a phishing campaign against Tesla owners.

“The point here (and subsequent attack vectors) is that Tesla needs to implement an authentication mechanism that is beyond 1-factor. Attackers shouldn’t be able to use traditional and well known attack vectors like phishing to remotely locate and unlock a 100k+ car built in 2014,” he said via email.

“In cases where the attacker is able to hack another website, he or she can use the usernames and credentials from the compromised accounts to attempt them on Tesla’s website and APIs given that users have the tendency to re-use passwords.”

Other possible attack vectors Dhanjani envisioned include an attacker installing malware on a target user’s machine to log his password for the Tesla site or using social-engineering attacks against Tesla employees to have them turn over passwords or remotely unlock a vehicle. The phishing and malware attack vectors are threats that any site that relies on a password faces. But they take on extra importance when the password is associated with something as valuable as a car.

“The Tesla Model S is a great car and a fantastic product of innovation. Owners of Tesla as well as other cars are increasingly relying on information security to protect the physical safety of their loved ones and their belongings. Given the serious nature of this topic, we know we can’t attempt to secure our vehicles the way we have attempted to secure our workstations at home in the past by relying on static passwords and trusted networks. The implications to physical security and privacy in this context have raised stakes to the next level,” Dhanjani said.

Along with the authentication issues, Dhanjani also found that by connecting a laptop to the vehicle through a port in the dashboard, he could identify three separate IP-enabled devices in the vehicle, potentially the dashboard screen, the center console and an unidentified third device. Both the console and the dashboard have a number of services exposed, including SSH and HTTP, and the third device has telnet exposed as well.

He said that he has sent the information he gathered to a Tesla employee through a friend and the company is aware of what he’s published, but he hasn’t heard an official response.

Image from Flickr photos of AutoMotoPortal.HR

Google DNS Intercepted in Turkey

Threatpost for B2B - Mon, 03/31/2014 - 09:23

Internet service providers in Turkey have been intercepting traffic to Google’s DNS servers and redirecting it, shutting off a workaround that Turkish users had employed to get to sites such as Twitter and YouTube after the government had blocked them.

Google software engineers said they had received credible reports over the weekend about the traffic interceptions, and they had looked into the situation and confirmed the problem themselves. Google operates free DNS servers to help accelerate Web traffic, and its engineers said that Turkish ISPs have set up their own servers that are imitating Google’s.

“Google operates DNS servers because we believe that you should be able to quickly and securely make your way to whatever host you’re looking for, be it YouTube, Twitter, or any other,” Steven Carstensen, software engineer at Google, wrote in a blog post.

“But imagine if someone had changed out your phone book with another one, which looks pretty much the same as before, except that the listings for a few people showed the wrong phone number. That’s essentially what’s happened: Turkish ISPs have set up servers that masquerade as Google’s DNS service.”

Turkish authorities last week began blocking sites such as YouTube and Twitter, saying that there was information on those sites that was harmful to the country’s national security. The Tor service also was blocked in Turkey at certain points last week. One method for blocking a site is for ISPs to redirect requests for those sites to other destinations. Users in Turkey noticed the problem and began working around it by using other DNS servers, including those run by Google.

If ISPs in Turkey are intercepting traffic to Google’s DNS servers, that means that ISPs or the government can redirect those users to whatever site they choose. DNS hijacks sometimes are used by attackers as part of larger attacks, and intercepting DNS traffic also has become a common tactic for governments in some countries that are trying to prevent citizens from reaching certain sites, often social media networks.

Google officials did not say whether they have taken any steps to address the situation in Turkey.

Image from Flickr photos of Bob Mical

Blog: Caution: Malware pre-installed!

Secure List feed for B2B - Mon, 03/31/2014 - 05:03
China’s leading TV station, CCTV, has a long-standing tradition of marking World Consumer Rights Day on March 15 with its ‘315 Evening Party’.

WiFi Bug Plagues Philips Internet-Enabled TVs

Threatpost for B2B - Fri, 03/28/2014 - 16:53

UPDATE — Some versions of Philips’ internet-enabled SmartTVs are vulnerable to cookie theft and a mélange of other tricks that abuse a lax WiFi setting.

The problem lies in Miracast, a WiFi feature that comes enabled by default, with a fixed password, no PIN, and no request of permission, according to researchers at the Malta-based firm ReVuln.

The vulnerability allows anyone within range of the device’s WiFi adapter to connect to the TV and access its many features. This includes being able to access potentially sensitive information within the TV’s system and configuration files as well as any files that may be on a USB stick connected to the TV. If the user browses the Internet on the same TV, an attacker could also be able to glean some of the cookies used to access certain websites.

The WiFi hole could also open the TV up to a whole mess of hijinks: An attacker could broadcast their own video, audio or images to the TV, and change the channel on a whim, without the viewer being any the wiser.

A video posted by ReVuln’s Luigi Auriemma on Wednesday points out that the default settings are present in the TV’s most recent firmware update, QF2EU-, which allows anyone to connect to the device’s WiFi without authorization and without asking permission. The device’s hardcoded password is just ‘Miracast,’ and after users are connected they are not given the option to set a custom password.

In the proof of concept video Auriemma goes on to steal files from a USB device that’s plugged into the device, along with Gmail cookie files stored in the web browser.

According to ReVuln the vulnerabilities exist in all 2013 models of SmartTV (6, 7, 8, 9xxx) that have the most recent firmware installed.

The WiFi Alliance, the consortium in charge of overseeing all things WiFi, said later Friday that it was looking into the vulnerability and had been in touch with Philips regarding the security of Miracast.

“The recent report of a non-compliant passphrase implementation appears to be limited to a single vendor’s implementation,” a statement from the Alliance read Friday. “We enforce the requirements of our certification programs and have been in contact with the company in question to ensure that any device bearing the Miracast mark meets our requirements.”

The vulnerability is the latest in the line of “internet of things” instabilities, software flaws that plague everyday items connected to the internet, such as vehicles, light bulbs and medical devices.

The researchers at ReVuln found a flaw similar to the SmartTV bug in Samsung’s LED 3D TV last year, in which an attacker could exploit a vulnerability to retrieve personal information from the device, spy on users and root the TV remotely.

FTC Settles With Fandango, Credit Karma Over SSL Issues in Mobile Apps

Threatpost for B2B - Fri, 03/28/2014 - 14:30

The makers of two major mobile apps, Fandango and Credit Karma, have settled with the Federal Trade Commission after the commission charged that they deliberately misrepresented the security of their apps and failed to validate SSL certificates. The apps promised users that their data was being sent over secure SSL connections, but the apps had disabled the validation process.

The settlements with the FTC don’t include any monetary penalties, but both companies have been ordered to submit to independent security audits every other year for the next 20 years and to put together comprehensive security programs.

“Consumers are increasingly using mobile apps for sensitive transactions. Yet research suggests that many companies, like Fandango and Credit Karma, have failed to properly implement SSL encryption,” said FTC Chairwoman Edith Ramirez. “Our cases against Fandango and Credit Karma should remind app developers of the need to make data security central to how they design their apps.”

The FTC complaint against Fandango alleges that the Fandango Movies app on iOS, which enables users to buy movie tickets, included an assertion during checkout telling users that their sensitive information was being sent over a secure connection. However, the app didn’t validate those connections, so users’ financial information was exposed during transmission.

“Before March 2013, Fandango did not test the Fandango Movies application to ensure that the application was validating SSL certificates and securely transmitting consumers’ sensitive personal information. Although Fandango commissioned limited security audits of its applications starting in 2011, more than two years after the release of its iOS application, respondent limited the scope of these security audits to issues presented when the ‘code is decompiled or disassembled,’ i.e., threats arising only from attackers who had physical access to a device. As a result, these audits did not assess whether the iOS application’s transmission of information, including credit card information, was secure,” the FTC complaint says.

The FTC also said that Fandango didn’t have a good process for responding to vulnerability reports from security researchers, leading to the company missing an advisory from a researcher who had discovered the SSL vulnerability.

“In December 2012, a security researcher informed respondent through its Customer Service web form that its iOS application was vulnerable to man-in-the-middle attacks because it did not validate SSL certificates. Because the security researcher’s message included the term “password,” Fandango’s Customer Service system flagged the message as a password reset request and replied with an automated message providing the researcher with instructions on how to reset passwords. Fandango’s Customer Service system then marked the security researcher’s message as “resolved,” and did not escalate it for further review,” the complaint says.

The problems with the Credit Karma app were similar, as it did not validate SSL certificates during supposedly secure connection attempts. The FTC alleges in its complaint that the company failed to validate SSL certificates on both its iOS and Android apps.

“During the iOS application’s development, Credit Karma had authorized its service provider, the application development firm, to use code that disabled SSL certificate validation ‘in testing only,’ but failed to ensure this code’s removal from the production version of the application. As a result, the iOS application shipped to consumers with the SSL certificate validation vulnerability. Credit Karma could have identified and prevented this vulnerability by performing an adequate security review prior to the iOS application’s launch,” the complaint says.

“In February 2013, one month after addressing the vulnerability in its iOS application, Credit Karma launched the Android version of its application, again without first performing an adequate security review or at least testing the application for previously identified vulnerabilities. As a result, like the iOS application before it, the Android application failed to validate SSL certificates, overriding the defaults provided by the Android APIs.”
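In Python’s standard ssl module, the difference between the validating configuration the apps promised and the non-validating one they shipped comes down to two settings. This sketch is illustrative only, not taken from either app’s code (which is not public):

```python
import socket
import ssl

def make_tls_context(verify=True):
    """Secure by default: full certificate-chain and hostname verification."""
    ctx = ssl.create_default_context()
    if not verify:
        # The "testing only" switch the FTC complaints describe: with
        # these two lines, any man-in-the-middle certificate is accepted.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

def open_tls(host, port=443, verify=True):
    """Open a TLS connection using the chosen context."""
    sock = socket.create_connection((host, port))
    return make_tls_context(verify).wrap_socket(sock, server_hostname=host)
```

The Credit Karma failure mode is exactly a `verify=False` path left enabled in production: the connection is still encrypted, but against an active attacker the encryption protects nothing, since the client never checks who it is talking to.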

The FTC’s complaint against Credit Karma also alleges that the app was storing users’ authentication tokens and passcodes in the clear on users’ devices.

Image from Flickr photos of Erik Drost
